
Veterinary Assistant Crew

Welcome to the Veterinary Assistant Crew project, powered by crewAI and the sageil/crewai-docker-image development image. This repository is dedicated to creating an experimental, comprehensive Veterinary Assistant application that leverages artificial intelligence for efficient and accurate medical consultations.

Project Overview

The Veterinary Assistant Crew project aims to develop an AI-powered veterinary assistant capable of providing insights, diagnoses, treatment plans, and prescriptions based on symptoms provided by pet owners. The system will utilize advanced machine learning algorithms and natural language processing to assist veterinarians in making informed decisions.

The project builds on my sageil/crewai-docker-image crewAI development image. You can build the image locally or pull it from Docker Hub to get started quickly.

Note

Due to recent changes to the CrewAI API, I have included the Dockerfile and the supporting files needed to build the image.
You can build the image locally and use it as you wish by running docker image build -t mycrewai ..
You can replace crewai:latest with your locally built image in the instructions below.
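
The note above can be sketched as a short shell session. This assumes the Dockerfile sits at the repository root and reuses the run command from the instructions below, with the local tag swapped in:

```shell
# Build the image from the included Dockerfile (run from the repo root)
docker image build -t mycrewai .

# Use the locally built tag in place of sageil/crewai:latest
docker container run -e P="veterinary_assistant" --network host -it --rm \
  --mount type=bind,source="$(pwd)",target=/app mycrewai bash
```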

Running the Application

Option 1: Using a docker mount locally

Note

In its current state, this project depends on locally running LLMs served by Ollama.
Install [Ollama](https://ollama.com/).
Once Ollama is installed, pull the required models by running ollama run openhermes:v2.5 and ollama run gemma:latest from your terminal.
See Changing currently used models below to use other models.
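
The Ollama setup above might look like the following (model names are taken from the instructions; ollama pull fetches a model without starting an interactive session, which is convenient for scripted setup):

```shell
# Install Ollama first (see https://ollama.com/), then fetch the two models
ollama pull openhermes:v2.5
ollama pull gemma:latest

# Confirm both models are available locally
ollama list
```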

To run the application on your machine, follow these steps:

  1. Install Docker on your machine if you haven't already.
  2. Install Ollama
  3. Clone this repository to your local machine.
  4. Run the following command to start the container
docker container run -e P="veterinary_assistant" --network host -it --rm --mount type=bind,source="$(pwd)",target=/app sageil/crewai:latest bash
  5. Run poetry install
  6. Run poetry shell
  7. Edit the project files using your favourite IDE or editor.
  8. To use the terminal, run the application using poetry run veterinary_assistant, or if you prefer the web interface, run streamlit run web/app.py
  9. Access the crew at http://localhost:8501/
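
The steps above can be sketched as a single shell session (image tag, mount target, and commands as given in the instructions):

```shell
# On the host: clone the project and start the dev container with the repo mounted
git clone https://github.com/sageil/veterinary_assistant.git
cd veterinary_assistant
docker container run -e P="veterinary_assistant" --network host -it --rm \
  --mount type=bind,source="$(pwd)",target=/app sageil/crewai:latest bash

# Inside the container: install dependencies and run the crew
poetry install
poetry shell
poetry run veterinary_assistant    # terminal interface
# or: streamlit run web/app.py     # web interface at http://localhost:8501/
```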

Option 2: Running the application in Docker

  1. Start a container using
docker container run --name veterinary_assistance --network host -it sageil/crewai:0.41.1 bash
  2. Once the container starts, navigate to the /app/ directory: cd /app/
  3. Clone the repository: git clone https://github.com/sageil/veterinary_assistant.git
  4. Change directory to the veterinary_assistant directory
  5. Run poetry install
  6. Run poetry shell
  7. To use the terminal, run the application using poetry run veterinary_assistant, or if you prefer the web interface, run streamlit run web/app.py
  8. Access the crew at http://localhost:8501/
  9. Use the included Neovim installation to edit the project by typing nvim . in the project directory
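
Option 2 can likewise be sketched as one session (container name and image tag as given above):

```shell
# Start a named container from the published image
docker container run --name veterinary_assistance --network host -it sageil/crewai:0.41.1 bash

# Inside the container: clone the project and run it
cd /app/
git clone https://github.com/sageil/veterinary_assistant.git
cd veterinary_assistant
poetry install
poetry shell
streamlit run web/app.py    # or: poetry run veterinary_assistant
```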

Changing currently used models

Caution

Running large models locally has a performance impact. If you observe performance issues, switch to a smaller model such as phi3:3.8b.

To change the model used by the agents, you need to update the model parameter in the crew.py file located at veterinary_assistant/src/veterinary_assistant.

from langchain_community.llms import Ollama  # import path may vary with your langchain version

diagnosticianllm = Ollama(model="openhermes:v2.5", base_url="http://host.docker.internal:11434", temperature=0.1)
reportinganalystllm = Ollama(model="gemma:latest", base_url="http://host.docker.internal:11434", temperature=0.30)
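
For example, moving the diagnostician to the smaller model suggested in the caution above might look like this. This is a hypothetical variant of the crew.py snippet, assuming the same Ollama wrapper and base URL; the langchain_community import path may differ in your langchain version:

```python
# Hypothetical crew.py variant: smaller model for better local performance.
# Assumes a reachable Ollama server at the same endpoint as the original snippet.
from langchain_community.llms import Ollama

diagnosticianllm = Ollama(
    model="phi3:3.8b",                             # smaller model from the caution above
    base_url="http://host.docker.internal:11434",  # same endpoint as the original config
    temperature=0.1,
)
```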

Using publicly available LLMs

If you want to use publicly available models, follow these steps:

  1. Change the model property to match the desired LLM.
  2. Import the model's langchain_openai implementation.

To use ChatGPT, import it using from langchain_openai import ChatOpenAI, then use it to configure the LLMs as follows:

# GPT based LLMs
diagnosticianllm = ChatOpenAI(
    model="gpt-4o", temperature=0.1)
reportinganalystllm = ChatOpenAI(
    model="gpt-4-turbo", temperature=0.30)
  3. Include your OPENAI_API_KEY in the .env file in the root of the project.
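
A minimal sketch of the .env file mentioned above (the key value is a placeholder, not a real key):

```shell
# .env in the project root
OPENAI_API_KEY=<your-openai-api-key>
```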

Example

The reports directory contains a few sample answers produced by my locally installed agents.

Docker Desktop Users

Enabling the host network on Docker Desktop is required to run this project with local LLMs. While the feature is fully supported on Linux, it is in beta on Windows and macOS. Read more.

Having issues?

Report any issues

Screen Capture

Browser

TODO

  • Recreate report.md using the prompt
  • Create GUI for user interaction using streamlit
  • Introduce human interaction
  • Add RAG pipeline to include local datasets to support local LLM

About

CrewAI Veterinary Assistant Agents using Docker
