- Motivation
- Prerequisites
- Installation
- Usage
- Tools
- Supported Versions
- Tips
- Example Projects
- Screen Captures
- Issues
- License
## Motivation

This Docker image provides a convenient and secure way to create and run CrewAI agents without having to install any dependencies locally. I created the image to experiment with Ollama code generation and execution.
## Prerequisites

- Docker
- Basic knowledge of Python
- Basic knowledge of CrewAI
- Ollama installed locally, or access to a remote AI service such as ChatGPT
- Desire to learn and have fun
## Installation

Run the following command after replacing `container_name` with the name of your container, `project_name` with the name of your project, and `tag` with the tag of the image you want to use:

```sh
docker run -it --network host --name <container_name> -e P=<project_name> sageil/crewai:<tag> bash
```
> **Tip:** If you omit `-e P=<project_name>` from the command entirely, a default crew named `default_crew` will be created.
## Usage

- Changing the container's local configuration:
  - Type `v .` to open Neovim.
  - Open the LazyVim file explorer using `SPACE+e`.
  - Show hidden files using `SHIFT+H`.
  - Change `API_BASE` to `http://host.docker.internal:11434` and `MODEL` to a model you have pulled with Ollama.
- Running your crew:
  - Open the LazyVim terminal using `CTRL+/`.
  - Run `crewai run`.
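After the edits above, the relevant configuration entries might look like the following sketch (the variable names are the ones used in the steps above; the model name is only an example):

```env
MODEL=ollama/deepseek-r1:7b
API_BASE=http://host.docker.internal:11434
```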
- Changing your selected provider and model:
  - Type `v .` to open Neovim.
  - Open the LazyVim file explorer using `SPACE+e`.
  - Show hidden files using `SHIFT+H`.
  - Change `OPENAI_API_KEY` to your key and `MODEL` to a model you wish to use.
- Running your crew:
  - Open the LazyVim terminal using `CTRL+/`.
  - Run `crewai run`.
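As a sketch, the corresponding entries for a remote provider might look like this (the model name is only an example; substitute your own key):

```env
MODEL=gpt-4o-mini
OPENAI_API_KEY=<your_key>
```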
- Using a custom LLM in your code:
  - Open your crew's `crew.py`.
  - Add `from crewai.llm import LLM` to the imports.
  - In your crew's `CrewBase` class, create your LLM:

    ```python
    myllm = LLM(
        model='ollama/deepseek-r1:7b',
        base_url="http://localhost:11434",
        temperature=0.2,
    )
    ```

  - Assign the LLM to your agent by setting its `llm` property:

    ```python
    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            # tools=[MyCustomTool()], # Example of a custom tool, loaded at the beginning of the file
            verbose=True,  # Print out all actions
            llm=self.myllm,
        )
    ```
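If you prefer not to hard-code the connection details, the `LLM` arguments can be assembled from the same environment variables the configuration steps use. This is only a sketch: `MODEL` and `API_BASE` are the variable names assumed earlier in this README, and the fallback values are examples, not requirements.

```python
import os

def llm_kwargs(env=os.environ):
    # Build keyword arguments for crewai's LLM() from environment
    # variables. MODEL and API_BASE are the names assumed in this
    # README; the defaults below are examples only.
    return {
        "model": env.get("MODEL", "ollama/deepseek-r1:7b"),
        "base_url": env.get("API_BASE", "http://localhost:11434"),
        "temperature": 0.2,
    }

# Usage inside your CrewBase class would then be: myllm = LLM(**llm_kwargs())
print(llm_kwargs({"MODEL": "ollama/llama3:8b"}))
```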
> **Tip:** When working with remote services, you can also remove the `--network host` part of the command, as it is only required to give the container access to the host's network.
## Tools

- Neovim: latest stable version, built from source
- uv: dependency management tool for Python projects
- LazyVim: a highly optimized Vim-like editor for Neovim
- CrewAI: platform for multi-AI-agent systems; see the official CrewAI documentation
## Supported Versions

- crewAI 0.108.0 crewai-tools 0.38.1
- crewAI 0.105.0 crewai-tools 0.37.0
- crewAI 0.102.0 crewai-tools 0.36.0
- crewAI 0.100.0 crewai-tools 0.33.0
- crewAI 0.98.0 crewai-tools 0.32.1
- crewAI 0.95.0 crewai-tools 0.25.8
- crewAI 0.85.0 crewai-tools 0.17.9
- crewAI 0.85.0 crewai-tools 0.17.0
- crewAI 0.83.0 crewai-tools 0.14.0
- crewAI 0.80.0 crewai-tools 0.14.0
- crewAI 0.79.4 crewai-tools 0.14.0
- crewAI 0.76.9 crewai-tools 0.13.4
- crewAI 0.76.2 crewai-tools 0.13.2
- crewAI 0.74.1 crewai-tools 0.13.2
- crewAI 0.70.1 crewai-tools 0.12.1
- crewAI 0.65.2 crewai-tools 0.12.1
- crewAI 0.64.0 crewai-tools 0.12.1
- crewAI 0.61.0 crewai-tools 0.12.1
- crewAI 0.55.2 crewai-tools 0.8.3
- crewAI 0.51.0 crewai-tools 0.8.3
- crewAI 0.41.1 crewai-tools 0.4.26
- crewAI 0.36.0 crewai-tools 0.4.26
## Tips

- `v` and `vim` are aliased to Neovim:

  ```sh
  alias v='nvim'
  alias vim='nvim'
  ```

- Running `newcrew <project_name>` will create a new crew project with the provided name, install dependencies, and configure the project's virtual environment.
- You can restart a container after stopping it by using `docker container start -ai <container_name>`.
## Issues

Known issues:

- Copying from Neovim fails due to the display driver.
- Icon fonts are not rendered correctly in the container's terminal? Watch. If the video piqued your interest in WezTerm, you can use my configuration from Wezterm configs.
New issues:

Please report any other issues you encounter on the Issues page, including steps to reproduce them.
## License

This project is licensed under the MIT License.