chat stuck in infinite loop #1213

Closed
sam-coffey opened this issue Dec 18, 2024 · 19 comments
Labels
bug Something isn't working

Comments

@sam-coffey

Have tried installing and reinstalling many times, chat with any agent always gets stuck in loop where agent just keeps replying to itself.

Happens each time I successfully start an agent with npm start (either chatting in terminal in older versions or with localhost:5173)

I would expect the agent to have a dialogue with me but it becomes impossible when the agent just keeps saying things to itself over and over.

@sam-coffey added the bug label Dec 18, 2024
@madjin
Collaborator

madjin commented Dec 18, 2024

This happened to me; it was the result of an outdated character file. Try a new one?

@sam-coffey
Author

Thanks - I have isolated my issue to the local Llama model. I tried with a Gemini API key and everything worked as expected. If I set a character to use the local Llama model, that is when I get the infinite-loop issue.

@sam-coffey
Author

I tried editing one of the characters in the characters folder to use "llama_local" as the modelProvider but that resulted in the same behavior of the model just talking to itself rather than waiting for further input from the user.
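
For reference, switching a character to the local provider only involves changing the modelProvider field in its JSON file in the characters folder. The snippet below is a minimal sketch of what such a file might look like; only the modelProvider value comes from this thread, and the other fields are illustrative assumptions about the character schema.

```json
{
  "name": "Eliza",
  "modelProvider": "llama_local",
  "clients": [],
  "bio": ["Minimal example character used to reproduce the infinite-loop behavior"],
  "style": { "all": [], "chat": [], "post": [] }
}
```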

@kmozurkewich

kmozurkewich commented Dec 19, 2024

Same issue here on my local machine and on an H100 with Ubuntu 22.04.

(pasted output: the agent's replies are just empty JSON code fences, repeated over and over, with no actual message)

@brogovia

Same issue here using a local model with CUDA.

@mosayeri

mosayeri commented Dec 28, 2024

Same issue with a local Ollama model.

@BrandonFlorian

I am having the same problem trying to run locally.

@tercel
Contributor

tercel commented Dec 30, 2024

Same issue with llama_local.

@zoe27
Contributor

zoe27 commented Dec 31, 2024

Met the same issue when running locally with the llama_local model.

@hiteshjoshi1

Filed #1575; seems like the same issue. Will close that in favor of this one.

@briancullinan2

briancullinan2 commented Jan 2, 2025

Not using llama_local but still seeing this error. I tried gpt-4o, meta-llama, and hermes, all with the same result: it repeats endlessly, expecting a different response from the model, as if the output isn't being formatted properly.

EDIT: Forgot to specify, I'm using an Apple M1.

EDIT: I left out the --character="..." part and got the error; now I've added it back in and get nothing, no server request detected.

EDIT: I removed --character again, changed to gpt-4o-mini, and finally got a single response # Response { "user": "Eliza", "text": ".... Is there a bug in loading characters, maybe?

EDIT: Here's a gist of what I'm dealing with: https://gist.github.com/briancullinan2/fed55ab2f982baebda5f32e1095e7b3f

@AIFlowML
Collaborator

AIFlowML commented Jan 3, 2025

Meanwhile, llama_local is known to generally have issues; other models do not.
Can you please try with the latest version and report back?

@danielNg25

Any update? I'm facing the same issue.

@zoe27
Contributor

zoe27 commented Jan 3, 2025

Seems it's caused by the response generation logic in llama.ts. I fixed it locally, it seems to work, and I've submitted a PR.
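
For context, the failure mode described in this thread matches an unbounded retry: the runtime asks the model for a fenced JSON reply, a small local model never produces one it can parse, and nothing stops it from re-prompting forever. The TypeScript sketch below is only a hypothetical illustration of that pattern with a retry cap added; it is not the actual llama.ts code or the contents of the PR, and generateResponse, parseJsonBlock, and MAX_ATTEMPTS are made-up names.

```typescript
// Hypothetical sketch of a bounded response-generation loop; not the real llama.ts code.
// Assumes `generate` returns raw model text that should contain a fenced JSON block.
const MAX_ATTEMPTS = 5;

async function generateResponse(
  generate: (prompt: string) => Promise<string>,
  prompt: string
): Promise<object | null> {
  for (let attempt = 0; attempt < MAX_ATTEMPTS; attempt++) {
    const raw = await generate(prompt);
    const parsed = parseJsonBlock(raw);
    if (parsed !== null) {
      return parsed; // got a usable reply: stop instead of re-prompting
    }
    // Without a cap here, an unparseable local-model reply would loop forever.
  }
  return null; // give up after a bounded number of retries
}

function parseJsonBlock(text: string): object | null {
  // Find a json code fence without writing the fence characters literally.
  const fence = "`".repeat(3);
  const start = text.indexOf(fence + "json");
  if (start === -1) return null;
  const bodyStart = start + fence.length + "json".length;
  const end = text.indexOf(fence, bodyStart);
  if (end === -1) return null;
  try {
    return JSON.parse(text.slice(bodyStart, end));
  } catch {
    return null;
  }
}
```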

@modddypan

> Seems it's caused by the response generation logic in llama.ts. I fixed it locally, it seems to work, and I've submitted a PR.

It works for me with v0.1.7-alpha.2.

@BrandonFlorian

BrandonFlorian commented Jan 3, 2025

> Seems it's caused by the response generation logic in llama.ts. I fixed it locally, it seems to work, and I've submitted a PR.

I just tried your PR and it fixes the infinite loop and gets the chat messages to the front end :)

I am still getting the exact same responses every time from the model no matter what question I ask, but that may be a different issue.

@zoe27
Contributor

zoe27 commented Jan 3, 2025

> I am still getting the exact same responses every time from the model no matter what question I ask, but that may be a different issue.

Seems things go well on my side.
[screenshot]

@BrandonFlorian

> I am still getting the exact same responses every time from the model no matter what question I ask, but that may be a different issue.

> Seems things go well on my side. [screenshot]

Thank you for confirming and for fixing the bug.

It seems I have a separate issue to figure out.

@AIFlowML
Collaborator

AIFlowML commented Jan 6, 2025

Fixed by @zoe27 with a PR.
User confirms it works.
Closing this issue now.
