How can I run llama.cpp on a specific GPU (I have a few GPUs on my PC), both for the `main` and `server` executables?

There are two ways to do this: hide the other GPUs from the process with the `CUDA_VISIBLE_DEVICES` environment variable, or let llama.cpp see all GPUs and pin the work to one device with `--main-gpu` plus `--split-mode none`. In my experience, setting `CUDA_VISIBLE_DEVICES` is the simplest, and it works the same way for both executables.
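A minimal sketch of both approaches, assuming a CUDA build of llama.cpp; the GPU index `1` and the model path are placeholders:

```sh
# Option 1: hide every GPU except device 1 from the process.
# CUDA renumbers the visible devices, so inside the process it appears as GPU 0.
CUDA_VISIBLE_DEVICES=1 ./main -m models/model.gguf -ngl 99

# Option 2: keep all GPUs visible, disable splitting across devices,
# and pin the whole model to device 1 via --main-gpu.
./server -m models/model.gguf -ngl 99 --split-mode none --main-gpu 1
```

If you instead want to spread one model across several GPUs unevenly, `--tensor-split` (e.g. `-ts 3,1`) controls the fraction of the model assigned to each device.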