Is it possible and meaningful to use llama.cpp for DeepSeek models to create a LoRA-adapter model? #12298
Unanswered · timo-kurtz asked this question in Q&A
I want to ask whether it is possible and meaningful to use llama.cpp with DeepSeek R1 models to create a LoRA adapter.
For example, some of the DeepSeek R1 distilled models are based on Qwen, which is not a Llama architecture. Are those compatible?
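For context, llama.cpp itself is primarily an inference engine; a common workflow is to train the LoRA adapter outside llama.cpp (e.g. with Hugging Face PEFT) and then use llama.cpp's conversion scripts to apply it at inference time. A rough sketch of that workflow, where the model directory and adapter paths are placeholders, not tested commands:

```shell
# Sketch only: assumes a locally downloaded Hugging Face model directory
# and a PEFT-trained LoRA adapter; paths are hypothetical.

# 1. Convert the base model to GGUF (script ships with llama.cpp):
python convert_hf_to_gguf.py ./DeepSeek-R1-Distill-Qwen-7B \
    --outfile base-f16.gguf --outtype f16

# 2. Convert the PEFT LoRA adapter to GGUF, pointing at the same base model:
python convert_lora_to_gguf.py ./my-lora-adapter \
    --base ./DeepSeek-R1-Distill-Qwen-7B --outfile adapter.gguf

# 3. Apply the adapter at inference time:
./llama-cli -m base-f16.gguf --lora adapter.gguf -p "Hello"
```

Since llama.cpp supports Qwen-family architectures, the Qwen-based DeepSeek R1 distilled models are not blocked by being "non-Llama"; the question is mainly whether the specific architecture and the adapter's tensor layout are supported by the conversion scripts.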