Welcome to the Veterinary Assistant Crew project, powered by crewAI and the crewAI Docker image. This repository is dedicated to creating an experimental, comprehensive Veterinary Assistant application that leverages artificial intelligence for efficient and accurate medical consultations.
The Veterinary Assistant Crew project aims to develop an AI-powered veterinary assistant capable of providing insights, diagnoses, treatment plans, and prescriptions based on symptoms provided by pet owners. The system will utilize advanced machine learning algorithms and natural language processing to assist veterinarians in making informed decisions.
The project uses my sageil/crewai-docker-image crewAI development Docker image. You can build the image locally or pull it from Docker Hub to get started quickly.
Note

Due to recent changes to the CrewAI API, I have included the Dockerfile and the supporting files to build the image.
You can build the image locally using `docker image build -t mycrewai .` and replace `crewai:latest` with your locally built image in the instructions below.
Note
In its current state, this project depends on locally running LLMS using Ollama.
Install [Ollama](https://ollama.com/).
Once Ollama is installed, pull the models by running `ollama run openhermes:v2.5` and `ollama run gemma:latest` from your terminal.
See changing models below to use other models.
To run the application on your machine, follow these steps:
- Install Docker on your machine if you haven't already.
- Install Ollama
- Clone this repository to your local machine.
- Run the following command to start the container
docker container run -e P="veterinary_assistant" --network host -it --rm --mount type=bind,source="$(pwd)",target=/app sageil/crewai:latest bash
- Run `poetry install`
- Run `poetry shell`
- Edit the project files using your favourite IDE or editor.
- To use the terminal, run the application using `poetry run veterinary_assistant`, or if you prefer the web interface, run `streamlit run web/app.py`
- Access the crew using http://localhost:8501/
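The steps above can be sketched as a single script. This is a sketch under the assumption that Docker and Ollama are already installed; because the container session is interactive, the script assembles and prints the `docker` command for review rather than executing it:

```shell
#!/bin/sh
# Sketch of the local-run steps: start the dev container with the project
# directory bind-mounted at /app, then work inside it with poetry.
set -eu

IMAGE="sageil/crewai:latest"   # or your locally built image, e.g. mycrewai

# Assembled container start command (printed for review, not executed).
RUN_CMD="docker container run -e P=\"veterinary_assistant\" --network host -it --rm --mount type=bind,source=\"\$(pwd)\",target=/app ${IMAGE} bash"

echo "${RUN_CMD}"
echo "Then, inside the container: poetry install && poetry shell && poetry run veterinary_assistant"
```

Run it from the cloned `veterinary_assistant` directory so the bind mount picks up the project files.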
- Start a container using
docker container run --name veterinary_assistance --network host -it sageil/crewai:0.41.1 bash
- Once the container starts, navigate to the /app/ directory: `cd /app/`
- Clone the repository:
git clone https://github.com/sageil/veterinary_assistant.git
- Change to the `veterinary_assistant` directory
- Run `poetry install`
- Run `poetry shell`
- To use the terminal, run the application using `poetry run veterinary_assistant`, or if you prefer the web interface, run `streamlit run web/app.py`
- Access the crew using http://localhost:8501/
- Use the included Neovim installation to edit the project by typing `nvim .` in the project directory
Caution
Using local large models will have a performance impact.
If you observe performance issues, switch to a smaller model like `phi3:3.8b`.
To change the model used by the agents, update the `model` parameter in the `crew.py` file located at `veterinary_assistant/src/veterinary_assistant`.
diagnosticianllm = Ollama(model="openhermes:v2.5", base_url="http://host.docker.internal:11434", temperature=0.1)
reportinganalystllm = Ollama(model="gemma:latest", base_url="http://host.docker.internal:11434", temperature=0.30)
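For example, switching the diagnostician to the smaller `phi3:3.8b` model mentioned in the Caution above would look like the sketch below. The `langchain_community` import path is an assumption based on recent langchain releases; adjust it to match whatever import your `crew.py` already uses:

```python
from langchain_community.llms import Ollama  # assumption: adjust to your crew.py's import

# Smaller model to reduce latency on modest hardware; base_url still points
# at the host's Ollama server so it is reachable from inside the container.
diagnosticianllm = Ollama(
    model="phi3:3.8b",
    base_url="http://host.docker.internal:11434",
    temperature=0.1,
)
```

Remember to pull the model first with `ollama run phi3:3.8b`.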
If you want to use publicly available models, follow these steps:
- Change the model property to match the desired LLM.
- Import the model's langchain_openai implementation.
To use ChatGPT, import it using `from langchain_openai import ChatOpenAI`, then use it to configure the LLMs as follows:
# GPT based LLMs
diagnosticianllm = ChatOpenAI(model="gpt-4o", temperature=0.1)
reportinganalystllm = ChatOpenAI(model="gpt-4-turbo", temperature=0.30)
- Include your
OPENAI_API_KEY
in the .env file in the root of the project.
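A minimal `.env` sketch (the key below is a placeholder, not a real value):

```shell
# .env at the root of the project
OPENAI_API_KEY=sk-your-key-here
```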
The `reports` directory contains a few answers produced by my locally installed agents.
Enabling host networking on Docker Desktop is required to run this project with a local LLM. While the feature is ready on Linux, it is in beta on Windows and Mac. Read more.
- Recreate report.md using the prompt
- Create GUI for user interaction using streamlit
- Introduce human interaction
- Add RAG pipeline to include local datasets to support local LLM