An implementation of FastVLM/LLaVA or any LLM/VLM model using FastAPI (backend) and React (frontend), with Action/Caption modes and frame control
Updated Sep 6, 2025 - TypeScript
Task-aligned prompting improves zero-shot detection of AI-generated images by Vision-Language Models