For each course, create an LLM configuration object that stores the prompt, model type, and other relevant settings. A general prompt, applied on top of every course's configuration object, will also tune the LLM to the task of answering office-hours questions based on course materials.
Create an API endpoint for editing these configs (prompt-engineering the LLM).
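A minimal sketch of the first two items, assuming an Express-style TypeScript server for illustration — the field names, route path, and defaults below are assumptions, not the project's actual schema, and a real implementation would persist configs in the database rather than in memory:

```typescript
import express from "express";

// Hypothetical shape of a per-course LLM configuration record; field names are illustrative.
interface CourseLLMConfig {
  courseId: number;
  model: string;        // e.g. "gpt-4o-mini"
  temperature: number;
  coursePrompt: string; // course-specific instructions set by course staff
}

// General prompt applied on top of every course's configuration.
const GENERAL_PROMPT =
  "You are a course assistant. Answer office-hours questions using only the " +
  "provided course materials. If the materials do not cover the question, say so.";

const app = express();
app.use(express.json());

// In-memory store used only for illustration.
const configs = new Map<number, CourseLLMConfig>();

// PATCH /courses/:id/llm-config — edit a course's LLM configuration (prompt, model, etc.).
app.patch("/courses/:id/llm-config", (req, res) => {
  const courseId = Number(req.params.id);
  const existing = configs.get(courseId) ?? {
    courseId,
    model: "gpt-4o-mini",
    temperature: 0.2,
    coursePrompt: "",
  };
  const updated: CourseLLMConfig = { ...existing, ...req.body, courseId };
  configs.set(courseId, updated);
  res.json(updated);
});

app.listen(3000);
```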
Create an API endpoint that, given a search query, calls the VectorDB API to retrieve relevant documents, feeds those documents to the LLM via the OpenAI API, queries the model with the search query, and returns the response.
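A sketch of that answering endpoint, building on the configuration sketch above. The `searchCourseDocuments` helper is hypothetical — the VectorDB service's actual endpoint path and response shape are assumptions — while the OpenAI call uses the official Node SDK's chat-completions interface:

```typescript
import OpenAI from "openai";

// `app`, `configs`, and GENERAL_PROMPT come from the configuration sketch above.
const openai = new OpenAI({ apiKey: process.env.OPENAI_API_KEY });

// Hypothetical VectorDB client — the URL and response shape here are assumptions.
async function searchCourseDocuments(courseId: number, query: string): Promise<string[]> {
  const res = await fetch(
    `${process.env.VECTORDB_URL}/courses/${courseId}/search?q=${encodeURIComponent(query)}`,
  );
  const docs: { content: string }[] = await res.json();
  return docs.map((d) => d.content);
}

// POST /courses/:id/ask — retrieve relevant documents, then answer the question with the LLM.
app.post("/courses/:id/ask", async (req, res) => {
  const courseId = Number(req.params.id);
  const { question } = req.body as { question: string };
  const config = configs.get(courseId);
  if (!config) {
    return res.status(404).json({ error: "No LLM configuration for this course" });
  }

  // 1. Retrieve relevant course materials from the VectorDB service.
  const documents = await searchCourseDocuments(courseId, question);

  // 2. Send the general prompt, the course prompt, the documents, and the question to OpenAI.
  const completion = await openai.chat.completions.create({
    model: config.model,
    temperature: config.temperature,
    messages: [
      { role: "system", content: `${GENERAL_PROMPT}\n\n${config.coursePrompt}` },
      {
        role: "user",
        content: `Course materials:\n${documents.join("\n---\n")}\n\nQuestion: ${question}`,
      },
    ],
  });

  // 3. Return the model's answer.
  res.json({ answer: completion.choices[0].message.content });
});
```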
Since the OpenAI API is priced by usage, log the token usage and the resulting cost for each course that uses the LLM.
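One way to record that, continuing the sketch above. The prices in the table are illustrative only — real per-model prices change over time and should live in configuration or the database — and the in-memory array stands in for a usage table:

```typescript
// Illustrative per-1K-token prices in USD; real prices differ by model and change over time.
const PRICE_PER_1K_TOKENS: Record<string, { prompt: number; completion: number }> = {
  "gpt-4o-mini": { prompt: 0.00015, completion: 0.0006 },
};

interface LLMUsageRecord {
  courseId: number;
  model: string;
  promptTokens: number;
  completionTokens: number;
  costUsd: number;
  timestamp: Date;
}

// In-memory stand-in for a per-course usage table.
const usageLog: LLMUsageRecord[] = [];

function logUsage(
  courseId: number,
  model: string,
  usage: { prompt_tokens: number; completion_tokens: number },
): void {
  const price = PRICE_PER_1K_TOKENS[model] ?? { prompt: 0, completion: 0 };
  const costUsd =
    (usage.prompt_tokens / 1000) * price.prompt +
    (usage.completion_tokens / 1000) * price.completion;
  usageLog.push({
    courseId,
    model,
    promptTokens: usage.prompt_tokens,
    completionTokens: usage.completion_tokens,
    costUsd,
    timestamp: new Date(),
  });
}
```

In the answering endpoint above, this would be called right after the OpenAI request, e.g. `if (completion.usage) logUsage(courseId, config.model, completion.usage);`, since the chat completion response reports prompt and completion token counts in its `usage` field.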
Implement rate limits on the LLM answering API.
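The issue doesn't specify a mechanism; one option, staying with the Express sketch, is the express-rate-limit middleware. The window and limit below are illustrative, and keying per authenticated user (rather than the default per-IP keying) would be a natural refinement once auth is wired in:

```typescript
import rateLimit from "express-rate-limit";

// Illustrative limits: at most 5 LLM questions per client per minute.
const llmAnswerLimiter = rateLimit({
  windowMs: 60 * 1000,
  max: 5,
  message: { error: "Too many questions; please wait a minute before asking again." },
});

// Registered on the answering route from the sketch above, ahead of the handler.
app.post("/courses/:id/ask", llmAnswerLimiter, async (req, res) => {
  /* ...retrieval + OpenAI call as sketched earlier... */
});
```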
This is related to #279. An article for reference.