I think we should make going from cloud to local and back super user friendly. Currently you can download cloud fine-tunes, but you need libraries or CLI tools to get them running locally.
This is a WIP spec - please comment below on what you'd want in this space!
- Download GGUF format from the "Fine-tune" tab (just the LoRA adapter, or a merged/flattened model?)
- Automatic Ollama integration: one button to export a model to Ollama?
- Is Ollama enough, or do people want llama.cpp, vLLM, and other engines?
Priority here is Fireworks, since they have 60+ downloadable models compared to Together's 4. Downloading is easy (https://docs.fireworks.ai/api-reference/get-model-download-endpoint), but we'll need to unpack, convert, and maybe quantize the result.
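The convert-and-quantize step could lean on llama.cpp's tooling. A sketch of the command pipeline, assuming the download unpacks to a Hugging Face-style checkpoint directory and that llama.cpp's `convert_hf_to_gguf.py` script and `llama-quantize` binary are available; `Q4_K_M` is just one common quantization default:

```python
def conversion_commands(model_dir: str, out_gguf: str, quant: str = "Q4_K_M") -> list[list[str]]:
    """Build the two-step pipeline: HF checkpoint -> f16 GGUF -> quantized GGUF.

    Returns argv lists suitable for subprocess.run(); running them is
    left to the caller so this stays testable without llama.cpp installed.
    """
    fp16 = out_gguf.replace(".gguf", "-f16.gguf")
    return [
        ["python", "convert_hf_to_gguf.py", model_dir, "--outfile", fp16, "--outtype", "f16"],
        ["llama-quantize", fp16, out_gguf, quant],
    ]
```

Whether we quantize by default (and to which type) is probably something to surface as a user choice, since it trades file size against quality.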