Build an LLM app with RAG to chat with a PDF using Llama 3.2 running locally on your computer.

Llama 3.2 includes small and medium-sized vision LLMs (11B and 90B) and lightweight, text-only models (1B and 3B) that fit on edge and mobile devices, in both pre-trained and instruction-tuned versions. The Llama 3.2 1B and 3B models support a context length of 128K tokens and are state-of-the-art in their class for on-device use cases such as summarization, instruction following, and rewriting, running locally at the edge. These models are enabled on day one for Qualcomm and MediaTek hardware and optimized for Arm processors.

Supported by a broad ecosystem, the Llama 3.2 11B and 90B vision models are drop-in replacements for their corresponding text models, while exceeding closed models such as Claude 3 Haiku on image-understanding tasks. Unlike other open multimodal models, both pre-trained and aligned models are available to be fine-tuned for custom applications using torchtune and deployed locally using torchchat.
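The core of such an app is the retrieval step: split the extracted PDF text into chunks, embed them, find the chunks most similar to the question, and assemble them into a prompt for the locally running model. Below is a minimal, self-contained sketch of that pipeline. The bag-of-words "embedding" is a toy stand-in for a real local embedding model, and the resulting prompt would be sent to Llama 3.2 (for example via a local Ollama server); both of those integration points are assumptions, not part of this repository's confirmed implementation.

```python
import math
from collections import Counter

def chunk(text: str, size: int = 40) -> list[str]:
    """Split extracted PDF text into overlapping word chunks."""
    words = text.split()
    step = max(size // 2, 1)  # 50% overlap so sentences aren't cut off at boundaries
    return [" ".join(words[i:i + size]) for i in range(0, len(words), step)]

def embed(text: str) -> Counter:
    # Toy embedding: a word-count vector. A real app would call a local
    # embedding model here instead of counting words.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse count vectors."""
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(question: str, chunks: list[str], k: int = 2) -> list[str]:
    """Return the k chunks most similar to the question."""
    q = embed(question)
    return sorted(chunks, key=lambda c: cosine(q, embed(c)), reverse=True)[:k]

def build_prompt(question: str, context: list[str]) -> str:
    # The prompt a local Llama 3.2 instance would receive.
    return ("Answer using only this context:\n"
            + "\n---\n".join(context)
            + f"\n\nQuestion: {question}")
```

A real deployment would swap `embed` for a proper embedding model and send `build_prompt(...)`'s output to the local Llama 3.2 endpoint; the retrieval logic itself stays the same shape.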