Benchmark HiFive Premier P550 #17
Running benchmark 3 times using model: llama3.2:3b (13.5W)
Running benchmark 3 times using model: llama3.1:8b (13.6W)
Running benchmark 3 times using model: llama2:13b (13.6W)
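The benchmark loop above can be approximated with a short shell sketch. This is an assumption about how the runs were driven, not the actual script used here; the model names come from the log above, and the prompt is a placeholder. `ollama run --verbose` does print timing stats, including an `eval rate` line in tokens per second.

```shell
#!/bin/sh
# Hypothetical sketch of the benchmark loop: run each model 3 times
# and extract the eval rate (tokens/s) that ollama prints in verbose mode.
for model in llama3.2:3b llama3.1:8b llama2:13b; do
  echo "Running benchmark 3 times using model: $model"
  for run in 1 2 3; do
    ollama run "$model" --verbose "Why is the sky blue?" 2>&1 \
      | grep 'eval rate'
  done
done
```

Power draw (the wattage noted per model) would be measured externally, e.g. with a wall-plug power meter, rather than by the script itself.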
On the first run-through, Deepseek wouldn't load (got an error):
Running benchmark 3 times using model: deepseek-r1:1.5b (13.5W)
... After four hours I gave up trying to run deepseek-r1:8b or 14b :)
See: geerlingguy/sbc-reviews#65
To install Ollama on RISC-V, I followed this process:
More details: geerlingguy/sbc-reviews#65 (comment) — and fully documented in my blog post: How to build Ollama to run LLMs on RISC-V Linux.
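Since there are no official riscv64 Ollama binaries, the build has to be done from source. The following is a minimal sketch of that process, assuming a working Go toolchain with riscv64 support; the exact steps (toolchain versions, extra flags) are in the linked blog post and may differ from this outline.

```shell
#!/bin/sh
# Sketch: build Ollama from source on a RISC-V Linux board.
# Assumes git and a sufficiently recent Go toolchain are already installed.
git clone https://github.com/ollama/ollama.git
cd ollama
go generate ./...   # needed on older releases to prepare llama.cpp sources
go build .

# Start the server in the background, then pull and run a model.
./ollama serve &
./ollama run llama3.2:3b
```

Expect the build and the model runs to be slow on this class of hardware; the four-hour attempt at deepseek-r1:8b above gives a sense of the scale.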