GenAI App Demo with Docker Model Runner

A modern, full-stack chat application demonstrating how to integrate a React frontend with a Go backend and run local Large Language Models (LLMs) using Docker's Model Runner.

Overview

This project showcases a complete Generative AI interface that includes:

  • React/TypeScript frontend with a responsive chat UI
  • Go backend server for API handling
  • Integration with Docker's Model Runner to run Llama 3.2 locally

Features

  • 💬 Interactive chat interface with message history
  • 🔄 Real-time streaming responses (tokens appear as they're generated)
  • 🌓 Light/dark mode support based on user preference
  • 🐳 Dockerized deployment for easy setup and portability
  • 🏠 Run AI models locally without cloud API dependencies
  • 🔒 Cross-origin resource sharing (CORS) enabled
  • 🧪 Integration testing using Testcontainers

Architecture

The application consists of three main components:

┌─────────────┐     ┌─────────────┐     ┌─────────────┐
│   Frontend  │ >>> │   Backend   │ >>> │ Model Runner│
│  (React/TS) │     │    (Go)     │     │ (Llama 3.2) │
└─────────────┘     └─────────────┘     └─────────────┘
      :3000              :8080               :12434

Connection Methods

There are two ways to connect to Model Runner:

1. Using Internal DNS (Default)

This method uses Docker's internal DNS resolution to connect to the Model Runner:

  • Connection URL: http://model-runner.docker.internal/engines/llama.cpp/v1/
  • Configuration is set in backend.env

2. Using TCP

This method uses host-side TCP support:

  • Connection URL: host.docker.internal:12434
  • Requires updating the connection URL in backend.env
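
For illustration, a backend.env for the default internal-DNS method might look like this (the variable names are the ones documented under Development Setup; the values are assembled from this README, not copied from the repo):

BASE_URL=http://model-runner.docker.internal/engines/llama.cpp/v1/
MODEL=ignaciolopezluna020/llama3.2:1B
API_KEY=ollama

For the TCP method, point BASE_URL at the host instead, e.g. http://host.docker.internal:12434/engines/llama.cpp/v1/ (assuming the same engine path is served over TCP). With TCP enabled, you can also sanity-check the endpoint from the host:

curl http://localhost:12434/engines/llama.cpp/v1/models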

Prerequisites

  • Docker and Docker Compose
  • Git
  • Go 1.19 or higher (for local development)
  • Node.js and npm (for frontend development)

Before starting, pull the required model:

docker model pull ignaciolopezluna020/llama3.2:1B

Quick Start

  1. Clone this repository:

    git clone https://github.com/ajeetraina/genai-app-demo.git
    cd genai-app-demo
  2. Start the application using Docker Compose:

    docker compose up -d --build
  3. Access the frontend at http://localhost:3000
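
Once the containers are up, a quick health check is to list the running services:

docker compose ps

Both the frontend and backend containers should show as running before you open the UI.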

Development Setup

Frontend

The frontend is built with React, TypeScript, and Vite:

cd frontend
npm install
npm run dev

This will start the development server at http://localhost:3000.

Backend

The Go backend can be run directly:

go mod download
go run main.go

Make sure to set the required environment variables from backend.env:

  • BASE_URL: URL for the model runner
  • MODEL: Model identifier to use
  • API_KEY: API key for authentication (defaults to "ollama")
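
When the backend runs directly on the host, Docker's internal DNS name won't resolve, so the TCP endpoint is the one to use (the path suffix below is assumed to mirror the default URL). A minimal shell setup might be:

export BASE_URL=http://localhost:12434/engines/llama.cpp/v1/
export MODEL=ignaciolopezluna020/llama3.2:1B
export API_KEY=ollama
go run main.go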

How It Works

  1. The frontend sends chat messages to the backend API
  2. The backend formats the messages and sends them to the Model Runner
  3. The LLM processes the input and generates a response
  4. The backend streams the tokens back to the frontend as they're generated (sketched below)
  5. The frontend displays the incoming tokens in real-time
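
Steps 3-5 hinge on the backend acting as a streaming relay. The sketch below is a minimal, self-contained illustration in Go, not the repo's actual main.go: it assumes an OpenAI-compatible chat/completions endpoint under BASE_URL (consistent with the URLs above), hard-codes the user message where the real handler would read it from the frontend's request body, and uses an illustrative /chat route.

package main

import (
	"bytes"
	"encoding/json"
	"io"
	"net/http"
	"os"
)

// chatHandler forwards a chat request to the Model Runner's OpenAI-compatible
// endpoint with streaming enabled and relays the response tokens back to the
// browser as they arrive.
func chatHandler(w http.ResponseWriter, r *http.Request) {
	// In the real handler, the messages come from the frontend's request body.
	body, _ := json.Marshal(map[string]any{
		"model":  os.Getenv("MODEL"),
		"stream": true,
		"messages": []map[string]string{
			{"role": "user", "content": "Hello!"},
		},
	})

	// BASE_URL ends with a trailing slash, so the path concatenates cleanly.
	req, _ := http.NewRequest(http.MethodPost,
		os.Getenv("BASE_URL")+"chat/completions", bytes.NewReader(body))
	req.Header.Set("Authorization", "Bearer "+os.Getenv("API_KEY"))
	req.Header.Set("Content-Type", "application/json")

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		http.Error(w, err.Error(), http.StatusBadGateway)
		return
	}
	defer resp.Body.Close()

	// Relay the SSE stream chunk by chunk, flushing so tokens appear in the
	// UI as they are generated.
	w.Header().Set("Content-Type", "text/event-stream")
	flusher, _ := w.(http.Flusher)
	buf := make([]byte, 4096)
	for {
		n, readErr := resp.Body.Read(buf)
		if n > 0 {
			w.Write(buf[:n])
			if flusher != nil {
				flusher.Flush()
			}
		}
		if readErr == io.EOF {
			break
		}
		if readErr != nil {
			return
		}
	}
}

func main() {
	http.HandleFunc("/chat", chatHandler)
	http.ListenAndServe(":8080", nil)
}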

Project Structure

├── compose.yaml           # Docker Compose configuration
├── backend.env            # Backend environment variables
├── main.go                # Go backend server
├── frontend/              # React frontend application
│   ├── src/               # Source code
│   │   ├── components/    # React components
│   │   ├── App.tsx        # Main application component
│   │   └── ...
│   ├── package.json       # NPM dependencies
│   └── ...
└── ...

Customization

You can customize the application by:

  1. Changing the model in backend.env to use a different LLM (example after this list)
  2. Modifying the frontend components for a different UI experience
  3. Extending the backend API with additional functionality
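
For example, switching models is a pull plus an edit to backend.env (the tag is a placeholder; use any model available to Docker Model Runner):

docker model pull <your-model-tag>

Then set MODEL=<your-model-tag> in backend.env and restart the stack.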

Testing

The project includes integration tests using Testcontainers:

cd tests
go test -v
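
For a flavor of what these tests can look like, here is a minimal sketch using testcontainers-go; the image tag and the exercised port are assumptions for illustration, not taken from the repo's test code:

package tests

import (
	"context"
	"testing"

	"github.com/testcontainers/testcontainers-go"
	"github.com/testcontainers/testcontainers-go/wait"
)

func TestBackendContainerStarts(t *testing.T) {
	ctx := context.Background()
	// Hypothetical image tag; build or retag the backend image accordingly.
	req := testcontainers.ContainerRequest{
		Image:        "genai-app-demo-backend:latest",
		ExposedPorts: []string{"8080/tcp"},
		WaitingFor:   wait.ForListeningPort("8080/tcp"),
	}
	backend, err := testcontainers.GenericContainer(ctx, testcontainers.GenericContainerRequest{
		ContainerRequest: req,
		Started:          true,
	})
	if err != nil {
		t.Fatalf("failed to start container: %v", err)
	}
	defer backend.Terminate(ctx)

	host, err := backend.Host(ctx)
	if err != nil {
		t.Fatal(err)
	}
	port, err := backend.MappedPort(ctx, "8080")
	if err != nil {
		t.Fatal(err)
	}
	t.Logf("backend listening at %s:%s", host, port.Port())
}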

Troubleshooting

  • Model not loading: Ensure you've pulled the model with docker model pull
  • Connection errors: Verify Docker network settings and that Model Runner is running
  • Streaming issues: Check CORS settings in the backend code
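
Two quick checks, assuming the Docker Model Runner CLI plugin is installed:

docker model status
docker model list

The first reports whether Model Runner is running; the second lists the locally pulled models, which should include the Llama 3.2 tag from the Prerequisites section.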

License

MIT

Contributing

Contributions are welcome! Please feel free to submit a Pull Request.

  1. Fork the repository
  2. Create your feature branch (git checkout -b feature/amazing-feature)
  3. Commit your changes (git commit -m 'Add some amazing feature')
  4. Push to the branch (git push origin feature/amazing-feature)
  5. Open a Pull Request
