
Conversation


@obigroup obigroup commented Jul 16, 2025

Description

This PR adds a switch_llm function to the manager that allows changing the model during execution.

Changes

  • Added switch_llm() function in the manager
  • The function resets tools and nodes but preserves state
  • Enables runtime model switching without losing conversation context
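
The behavior described above can be sketched roughly as follows. This is a hypothetical illustration, not the actual pipecat-flows API: the FlowManager class, attribute names, and switch_llm() signature here are assumptions made for the sake of the example.

```python
# Hypothetical sketch of the switch_llm() behavior described in this PR.
# FlowManager and its attributes are assumptions, not the real pipecat-flows API.
class FlowManager:
    def __init__(self, llm, state=None):
        self.llm = llm
        self.state = state or {}   # conversation context, preserved across switches
        self.tools = []            # reset on switch
        self.nodes = {}            # reset on switch

    def switch_llm(self, new_llm):
        """Swap the LLM at runtime: reset tools and nodes, keep state."""
        self.tools = []
        self.nodes = {}
        self.llm = new_llm         # self.state is deliberately left untouched
        return self.llm
```

The key design point is that only the LLM-specific wiring (tools, nodes) is rebuilt, while the accumulated conversation state survives the switch.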

Known Issue

However, there's a persistent problem:

When TextFrame objects are received by LLMUserContextAggregator or LLMAssistantContextAggregator, a feedback loop occurs and the same text is appended to the context multiple times.

Example:

  • Input: "Hello my name is joe."
  • Context records: "Hello Hello my Hello my name Hello Hello Hello my name is"
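
One hypothesis that reproduces this pattern: if each interim TextFrame (a streamed partial transcription) is appended to the context instead of replacing the previous partial, the context accumulates overlapping prefixes exactly like the example above. This is a minimal standalone repro of that mechanism, not the actual aggregator code:

```python
# Hypothetical repro: streamed partials, as a speech/LLM stream might emit them.
partials = ["Hello", "Hello my", "Hello my name", "Hello my name is joe."]

# Buggy pattern: every interim partial is kept, so prefixes pile up.
buggy_context = []
for frame in partials:
    buggy_context.append(frame)        # bug: appends each partial

# Fixed pattern: each new partial replaces the previous one.
fixed_context = []
for frame in partials:
    if fixed_context:
        fixed_context[-1] = frame      # fix: replace, don't append
    else:
        fixed_context.append(frame)

print(" ".join(buggy_context))  # overlapping prefixes, like the bug report
print(fixed_context)            # only the final utterance
```

If the aggregators receive both interim and final frames but treat them identically, this append-instead-of-replace pattern would explain the duplicated context.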

Help Needed

If someone could help me make progress on this issue, I would appreciate it. I've added logs throughout the aggregators but still can't pinpoint where the duplication originates.
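
One possible direction to investigate, offered here only as a sketch: guard the aggregator so it skips a TextFrame whose text it has already absorbed. The should_append helper below is a hypothetical name, not an existing pipecat function:

```python
# Hypothetical dedup guard: skip a frame whose text is already the tail
# of the accumulated aggregation, so re-delivered text isn't added twice.
def should_append(aggregation: str, new_text: str) -> bool:
    return not aggregation.endswith(new_text)

agg = ""
for text in ["Hello", "Hello", " my name", " my name"]:
    if should_append(agg, text):
        agg += text

print(agg)  # "Hello my name"
```

This would mask the symptom rather than fix the root cause, but logging the cases where should_append returns False might reveal which processor is re-emitting the frames.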

Testing

  • Basic functionality works
  • State preservation confirmed
  • Tools and nodes reset correctly
  • Text duplication issue needs investigation

Note: This PR is functional but the text duplication issue in the context aggregators needs to be resolved before merging.

vercel bot commented Jul 16, 2025

Deployment status: pipecat-flows ✅ Ready (updated Jul 16, 2025 4:33pm UTC)

@markbackman markbackman (Contributor) commented

This is being handled in Pipecat. We now have an llm_switching.py example using that in Flows. More LLMs will be supported in Pipecat very soon.

@markbackman markbackman closed this Sep 4, 2025