
idea: Support multiple LLMs & embedding models through server #4867

Open
Capsar opened this issue Apr 2, 2025 · 1 comment

Capsar commented Apr 2, 2025

Problem Statement

As a developer, I would like to host embedding models locally through the server, in addition to the coder LLMs already used in the IDE.

Other model types, such as image-to-text models for OCR or document extraction, also lack support at the moment.
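One way to serve several model types through a single local server is the pattern used by OpenAI-compatible APIs: one endpoint per capability, selected by model name. A minimal sketch of that routing, assuming hypothetical model names (`nomic-embed-text`, `llava-ocr`) and the standard OpenAI-style endpoint paths, not any actual server implementation:

```python
# Sketch: route a requested model to the endpoint that serves its capability.
# Model names below are placeholders, not real registry entries.
EMBEDDING_MODELS = {"nomic-embed-text"}
VISION_MODELS = {"llava-ocr"}  # image-to-text / OCR via multimodal chat

def route(model: str) -> str:
    """Return the OpenAI-style endpoint path for the given model name."""
    if model in EMBEDDING_MODELS:
        return "/v1/embeddings"
    if model in VISION_MODELS:
        # image inputs are passed through the chat completions endpoint
        return "/v1/chat/completions"
    # default: ordinary text LLMs (coder models, chat models, ...)
    return "/v1/chat/completions"
```

With this shape, an IDE client only has to change the `model` field of its request; the server decides which locally hosted model handles it.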

Feature Idea

Capsar added the label "type: feature request" on Apr 2, 2025
github-project-automation bot moved this to Investigating in Menlo on Apr 2, 2025

DKostk commented Apr 2, 2025

Greetings!
Would it be possible, when writing a prompt, to reference a local model by name using @ or #?
That way, different AIs could each handle the tasks specified in the prompt.
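The @/# mention idea could work as a small pre-processing step on the prompt text before it is sent to the server. A sketch, assuming a hypothetical default model name and no particular mention syntax beyond a leading `@model` or `#model` token:

```python
import re

def extract_model(prompt: str, default: str = "local-default"):
    """Split a leading @model-name or #model-name mention off a prompt.

    Returns (model_name, remaining_prompt); falls back to the default
    model when no mention is present.
    """
    m = re.match(r"^[@#](\S+)\s+(.*)$", prompt, re.DOTALL)
    if m:
        return m.group(1), m.group(2)
    return default, prompt
```

For example, `extract_model("@qwen2.5-coder summarize this file")` would yield the model `qwen2.5-coder` and the prompt `summarize this file`, while a prompt with no mention keeps the default model.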

Status: Investigating

No branches or pull requests

2 participants