chat stuck in infinite loop #1213
Have tried installing and reinstalling many times; chat with any agent always gets stuck in a loop where the agent just keeps replying to itself. This happens every time I successfully start an agent with npm start (whether chatting in the terminal in older versions or via localhost:5173). I would expect the agent to have a dialogue with me, but that becomes impossible when the agent keeps saying things to itself over and over.

Comments
This happened to me as the result of an outdated character file. Try a new one?
Thanks - I have isolated my issue to the llama local model. I tried with a Gemini API key and everything worked as expected. If I set a character to use the llama local model, that is when I hit the infinite-loop issue.
I tried editing one of the characters in the characters folder to use "llama_local" as the modelProvider (sketched below), but that resulted in the same behavior: the model just talks to itself rather than waiting for further input from the user.
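For reference, a minimal sketch of the character-file change being described; only the `modelProvider` value comes from the comment above, while the file name and remaining fields are illustrative (real character files carry many more):

```json
{
  "name": "ExampleAgent",
  "modelProvider": "llama_local",
  "clients": []
}
```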
Same issue here, on a local machine and on an H100 with Ubuntu 22.04.
Same issue here using a local model with CUDA.
Same issue with a local Ollama model.
I am having the same problem trying to run locally.
Same issue with llama_local.
Hitting the same issue when running locally with the llama_local model.
Filed #1575; seems like the same issue. Will close that in favor of this one.
Not using llama_local but still seeing this error. Tried gpt-4o, meta-llama, and hermes, all with the same result: it repeats endlessly expecting a different response from the model, as if it isn't formatting properly.

Edit: forgot to specify, I'm on an Apple M1.
Edit: I left out the --character="..." part and got the error; after adding it back in I get nothing, no server request detected (see the command sketch below).
Edit: I removed --character again, changed to gpt-4o-mini, and finally got a single response.
Edit: here's a gist of what I'm dealing with: https://gist.github.com/briancullinan2/fed55ab2f982baebda5f32e1095e7b3f
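For anyone reproducing this, a hedged sketch of the invocation being described; the character path is a placeholder, not from the original comment, and npm needs the extra `--` to forward the flag to the start script:

```sh
# Start the agent with an explicit character file (path is illustrative).
npm start -- --character="characters/example.character.json"

# Per the comment above: without the flag the loop appeared;
# with it, no server request was detected at all.
```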
Meanwhile, llama_local is generally known to have issues; the other models are not.
Any update? I'm facing the same issue.
Seems it's caused by the response generation logic in llama.ts. I fixed it locally, it seems to work, and I submitted a PR.
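The PR itself isn't quoted in this thread, but for readers hitting the same loop, here is a minimal sketch of the kind of termination guard such a fix typically adds to a local-model generation loop. `generateNextToken`, `detokenize`, and `EOS_TOKEN` are placeholder names, not the actual llama.ts API:

```typescript
// Placeholder stand-ins for the real local-model bindings (assumptions).
declare function generateNextToken(context: string): Promise<number>;
declare function detokenize(token: number): string;
declare const EOS_TOKEN: number;

const MAX_NEW_TOKENS = 512;          // hard cap so generation can never run forever
const STOP_SEQUENCES = ["\nUser:"];  // cut off when the model starts a new turn itself

async function generateResponse(prompt: string): Promise<string> {
  let output = "";
  for (let i = 0; i < MAX_NEW_TOKENS; i++) {
    const token = await generateNextToken(prompt + output);
    if (token === EOS_TOKEN) break;  // model signalled the end of its turn
    output += detokenize(token);
    for (const stop of STOP_SEQUENCES) {
      const idx = output.indexOf(stop);
      if (idx !== -1) {
        // Trim the spurious new turn instead of feeding it back in,
        // which is what produces the self-replying behavior reported above.
        return output.slice(0, idx).trim();
      }
    }
  }
  return output.trim();
}
```

Without an EOS check or stop sequences, the model's output keeps being appended and re-fed, which matches the "agent keeps replying to itself" symptom in this issue.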
It works for me with v0.1.7-alpha.2.
I just tried your PR and it fixes the infinite loop and gets the chat messages to the front end :) I am still getting the exact same response from the model every time, no matter what question I ask, but that may be a different issue.
Fixed by @zoe27 with a PR.