
Kwaak

Run a team of autonomous AI agents on your code, right from your terminal!
Powered by swiftide »

Report Bug · Request Feature · Discord

What is Kwaak?

Always wanted to run a team of AI agents locally from your own machine? Write code, improve test coverage, update documentation, or improve code quality, while you focus on building the cool stuff? Kwaak enables you to run a team of autonomous AI agents right from your terminal, in parallel.


Powered by Swiftide, Kwaak is aware of your codebase and can answer questions about your code, find examples, write and execute code, create pull requests, and more. Unlike other tools, Kwaak is focused on autonomous agents and can run multiple agents at the same time.

Caution

Kwaak can be considered alpha software. The project is under active development; expect breaking changes. Contributions, feedback, and bug reports are very welcome.

Kwaak is part of the bosun.ai project, an upcoming platform for autonomous code improvement.

(back to top)

High-level features

  • Run multiple agents in parallel
  • Quacking terminal interface
  • As fast as it gets; written in Rust, powered by Swiftide
  • Agents operate on code, use tools, and can be interacted with
  • Sandboxed execution in Docker
  • Supports Python, TypeScript/JavaScript, Java, Ruby, and Rust

(back to top)

Latest updates on our blog 🔥

Getting started

Requirements

Before you can run Kwaak, make sure you have Docker installed on your machine.

Kwaak expects a Dockerfile in the root of your project. This Dockerfile should contain all the dependencies required to test and run your code. Additionally, it expects the following to be present:

  • git: Required for git operations
  • fd: Required for searching files. Note that it must be available as fd; some systems install it as fdfind.
  • ripgrep: Required for searching in files. Note that it must be available as rg.

If you already have a Dockerfile for other purposes, you can either extend it or provide a new one and override the dockerfile path in the configuration.

For an example Dockerfile for a Rust project, see this project's Dockerfile.
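
As an illustration, a minimal Dockerfile along these lines might look like the following. This is a sketch, assuming a Debian-based image: on Debian and Ubuntu the fd binary is installed as fdfind, hence the symlink.

FROM rust:slim

# git for repository operations; fd and ripgrep for file and content search.
RUN apt-get update && apt-get install -y --no-install-recommends \
        git fd-find ripgrep \
    && rm -rf /var/lib/apt/lists/*

# Debian installs the fd binary as fdfind; Kwaak expects plain fd.
RUN ln -s /usr/bin/fdfind /usr/local/bin/fd

WORKDIR /app
COPY . .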

Additionally, you will need an OpenAI API key (if OpenAI is your LLM provider).

If you'd like Kwaak to be able to create pull requests, search GitHub code, and automatically push to a remote, you will also need a GitHub token.

(back to top)

Installation and setup

Pre-built binaries are available from the releases page.

Homebrew

brew install bosun-ai/tap/kwaak

Linux and macOS (using curl)

 curl --proto '=https' --tlsv1.2 -LsSf https://github.com/bosun-ai/kwaak/releases/latest/download/kwaak-installer.sh | sh

Cargo

cargo install kwaak

Setup

Once installed, you can run kwaak init in the project you want to use Kwaak in. This will create a kwaak.toml in your project root. You can edit this file to configure Kwaak.

After verifying the default configuration, one required step is to set up the test and coverage commands. There are also some optional settings you can consider; see the Configuration section below for details and examples.

API keys can be prefixed with env:, text:, or file: to read secrets from the environment, a literal string, or a file, respectively.

Running Kwaak

You can then run kwaak in the root of your project to start the Kwaak terminal interface. On first startup, Kwaak will index your codebase; this can take a while, depending on its size. Subsequent startups will be faster.

Keybindings:

  • ctrl-s: Send the current message to the agent
  • ctrl-x: Exit the agent
  • ctrl-q: Exit kwaak
  • ctrl-n: Create a new agent
  • Page Up: Scroll chat up
  • Page Down: Scroll chat down
  • tab: Switch between agents

(back to top)

How does it work?

On initial startup, Kwaak will index your codebase. This can take a while, depending on its size. Once the initial indexing has completed, subsequent startups will be faster. Indexes are stored with lancedb, and indexing is cached with redb.

Kwaak provides a chat interface similar to other LLM chat applications. You can type messages to the agent, and the agent will try to accomplish the task and respond.

When starting a chat, the code of the current branch is copied into a Docker container created on the fly. This container is then used to run the code and execute commands.

After each chat completion, if any code changes have been made, Kwaak will lint, commit, and push the code to the remote repository. Kwaak can also create a pull request. Pull requests include an issue link to #48; this helps us track the success rate of the agents, and also enforces transparency for code reviewers.

(back to top)

Configuration

Kwaak supports configuring different Large Language Models (LLMs) for distinct tasks like indexing, querying, and embedding. Tailor the model choices to the mix of tasks you run; for example, a faster, cheaper model for indexing and a stronger model for querying, as in the examples below.

General Configuration

All of these are inferred from the project directory and can be overridden in the kwaak.toml configuration file.

  • project_name: Defaults to the current directory name. Represents the name of your project.
  • language: The programming language of the project, for instance, Rust, Python, JavaScript, etc.
  • cache_dir, log_dir: Directories for cache and logs. Defaults are within your system's cache directory.
  • indexing_concurrency: Adjust concurrency for indexing, defaults based on CPU count.
  • indexing_batch_size: Batch size setting for indexing. Defaults to a higher value for Ollama and a lower value for OpenAI.
  • endless_mode: DANGER If enabled, agents run continuously until manually stopped or completion is reached. This is meant for debugging and evaluation purposes.
  • otel_enabled: Enables OpenTelemetry tracing if set and respects all the standard OpenTelemetry environment variables.
  • tool_executor: Defaults to docker. Can also be local. We HIGHLY recommend using Docker for security reasons, unless you are running in a secure environment.
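
As an illustration, overriding a few of these in kwaak.toml could look like the following sketch (the key names come from the list above; the values are examples):

project_name = "my-project"
language = "Rust"
indexing_concurrency = 4
tool_executor = "docker"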

Command Configuration

  • test: Command to run tests, e.g., cargo test.
  • coverage: Command for running coverage checks, e.g., cargo llvm-cov --summary-only. Expects coverage results as output; the output is currently passed unparsed to an LLM call, so human-friendly output works best.
  • lint_and_fix: Optional command to lint and fix project issues, e.g., cargo clippy --fix --allow-dirty; cargo fmt in Rust.
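
In kwaak.toml these might be set as follows; a sketch assuming the commands live in a [commands] table, using the example commands from the list above:

[commands]
test = "cargo test"
coverage = "cargo llvm-cov --summary-only"
lint_and_fix = "cargo clippy --fix --allow-dirty; cargo fmt"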

API Key Management

  • API keys and tokens can be configured through environment variables (env:KEY), directly in the configuration (text:KEY), or through files (file:/path/to/key).
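
For example, the same api_key could be supplied in any of the three forms (values are illustrative; use one at a time):

api_key = "env:KWAAK_OPENAI_API_KEY"   # read from an environment variable
# api_key = "text:my-plaintext-key"    # literal value in the configuration
# api_key = "file:/path/to/key"        # read from a file on disk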

Docker and GitHub Configuration

  • docker.dockerfile, docker.context: Paths to the Dockerfile and build context; default to Dockerfile and the project root, respectively.
  • github.repository, github.main_branch, github.owner, github.token: GitHub repository details and token configuration.
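
Spelled out in kwaak.toml, that could look like this sketch (the table layout is inferred from the dotted key names above; all values are placeholders):

[docker]
dockerfile = "Dockerfile"
context = "."

[github]
owner = "my-org"
repository = "my-project"
main_branch = "main"
token = "env:GITHUB_TOKEN"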

LLM Configuration

Configure LLMs such as OpenAI and Ollama by specifying models for different tasks:

  • OpenAI Configuration:
[llm.indexing]
api_key = "env:KWAAK_OPENAI_API_KEY"
provider = "OpenAI"
prompt_model = "gpt-4o-mini"

[llm.query]
api_key = "env:KWAAK_OPENAI_API_KEY"
provider = "OpenAI"
prompt_model = "gpt-4o"

[llm.embedding]
api_key = "env:KWAAK_OPENAI_API_KEY"
provider = "OpenAI"
embedding_model = "text-embedding-3-large"
  • Ollama Configuration:
[llm.indexing]
provider = "Ollama"
prompt_model = "llama3.2"

[llm.query]
provider = "Ollama"
prompt_model = "llama3.3"

[llm.embedding]
provider = "Ollama"
embedding_model = { name = "bge-m3", vector_size = 1024 }

For both providers, you can provide a base_url to use a custom API endpoint.
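
For example, pointing the query model at an OpenAI-compatible endpoint (the URL here is a placeholder):

[llm.query]
provider = "OpenAI"
api_key = "env:KWAAK_OPENAI_API_KEY"
prompt_model = "gpt-4o"
base_url = "https://my-proxy.example.com/v1"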

These configurations let you play to the strengths of each model for indexing, querying, and embedding.

Other integrations

  • tavily_api_key: Enables the agent to use Tavily for web search. Their entry-level plan is free. (We are not affiliated.)
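
For example, reading the key from the environment (the variable name is illustrative):

tavily_api_key = "env:TAVILY_API_KEY"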

Upcoming

  • Support for more LLMs
  • Tools for code documentation
  • More sources for additional context
  • ... and more! (we don't really have a roadmap, but we have a lot of ideas)

(back to top)

Troubleshooting & FAQ

Q: Kwaak feels very slow

A: Try increasing the resources available to Docker. For Docker Desktop, this is under Settings -> Resources -> Advanced. On macOS, adding your terminal and/or kwaak to Developer Tools can also help.

Q: There was an error during a chat; have I lost all progress?

A: Kwaak commits and pushes to the remote repository after each completion, so you should be able to recover the changes.

Q: I get a redb/lancedb error when starting, what is going on?

A: Your index may have become corrupted, or another kwaak instance is running on the same project. Try clearing the index with kwaak clear-index and restarting kwaak. Note that this will reindex your codebase.

Q: I get an error from Bollard: Socket not found /var/run/docker.sock

A: Enable the default Docker socket in Docker Desktop under General -> Advanced settings.

Community

If you want to get more involved with Kwaak, have questions, or just want to chat, you can find us on Discord.

(back to top)

Contributing

If you have a great idea, please fork the repo and create a pull request.

Don't forget to give the project a star! Thanks!

Testing agents is not a trivial matter. We have (for now) internal benchmarks to verify agent behaviour across larger datasets.

If you just want to contribute (bless you!), see our issues or join us on Discord.

  1. Fork the Project
  2. Create your Feature Branch (git checkout -b feature/AmazingFeature)
  3. Commit your Changes (git commit -m 'feat: Add some AmazingFeature')
  4. Push to the Branch (git push origin feature/AmazingFeature)
  5. Open a Pull Request

See CONTRIBUTING for more.

(back to top)

License

Distributed under the MIT License. See LICENSE for more information.

(back to top)