kth8/llama-server-vulkan

The llama.cpp server and the Qwen 3 1.7B model, bundled together inside a Docker image. llama.cpp is compiled with Vulkan support and without the AVX requirement, so the image can run on older hardware. Tested with an i3-3220 CPU and an RX 470 GPU.

docker container run \
  --detach \
  --init \
  --device /dev/kfd \
  --device /dev/dri \
  --read-only \
  --tmpfs /root/.cache/mesa_shader_cache \
  --restart always \
  --name llama-vulkan \
  --label io.containers.autoupdate=registry \
  --publish 8001:8080 \
  ghcr.io/kth8/llama-server-vulkan:latest
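
Once the container is up, you can check its logs to confirm that llama.cpp detected your GPU through the Vulkan backend (the exact log wording varies between llama.cpp versions):

docker container logs llama-vulkan

The bundled llama.cpp server also exposes a simple health endpoint you can poll before sending requests:

curl http://127.0.0.1:8001/health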

Verify that the server is running by opening http://127.0.0.1:8001 in your web browser or by using the terminal:

curl -X POST http://127.0.0.1:8001/v1/chat/completions -H "Content-Type: application/json" -d '{"messages":[{"role":"user","content":"Hello"}]}'
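
The server speaks the OpenAI-compatible chat completions format, so you can also pass the usual sampling options in the request body. A sketch, with example values for temperature and max_tokens:

curl -X POST http://127.0.0.1:8001/v1/chat/completions \
  -H "Content-Type: application/json" \
  -d '{
        "messages": [
          {"role": "system", "content": "You are a concise assistant."},
          {"role": "user", "content": "Explain what Vulkan is in one sentence."}
        ],
        "temperature": 0.7,
        "max_tokens": 128
      }'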

To load a different model, download its GGUF file from Hugging Face, mount it into the container, and set the LLAMA_ARG_MODEL environment variable to the model file name, for example by adding these lines to the docker run command:

-v ~/Downloads/DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf:/DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf:z \
-e LLAMA_ARG_MODEL=DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf
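
Put together, a full run command with a custom model might look like the sketch below (the GGUF path and file name are only examples; substitute the model you actually downloaded):

docker container run \
  --detach \
  --init \
  --device /dev/kfd \
  --device /dev/dri \
  --read-only \
  --tmpfs /root/.cache/mesa_shader_cache \
  --restart always \
  --name llama-vulkan \
  --label io.containers.autoupdate=registry \
  --publish 8001:8080 \
  -v ~/Downloads/DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf:/DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf:z \
  -e LLAMA_ARG_MODEL=DeepSeek-R1-Distill-Qwen-7B-Q4_K_M.gguf \
  ghcr.io/kth8/llama-server-vulkan:latest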
