
Fix issue #2517: Use agent's LLM for function calling when no function_calling_llm is specified #2518


Conversation

devin-ai-integration[bot]
Contributor

Description

This PR fixes issue #2517, where users would get OpenAI authentication errors even when using non-OpenAI models like Gemini.

The root cause: when a user didn't explicitly specify a function_calling_llm for their agent or crew, the system defaulted to the crew's function_calling_llm, which could be None. That in turn triggered OpenAI API key errors even when the agent was configured to use a different model such as Gemini.

The fix ensures that when no function_calling_llm is specified, the agent uses its own LLM for function calling rather than defaulting to the crew's function_calling_llm.
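The fallback described above can be sketched roughly as follows. This is a minimal illustration, not the actual patch: the `Agent` class and `resolve_function_calling_llm` helper are hypothetical stand-ins whose attribute names merely mirror the PR description.

```python
from typing import Optional


class Agent:
    """Toy stand-in for a crewAI agent; illustrative only, not crewAI code."""

    def __init__(self, llm: str, function_calling_llm: Optional[str] = None):
        self.llm = llm
        self.function_calling_llm = function_calling_llm


def resolve_function_calling_llm(
    agent: Agent, crew_function_calling_llm: Optional[str] = None
) -> str:
    # Prefer the agent's explicit choice, then the crew-level setting,
    # and finally fall back to the agent's own LLM -- never an implicit
    # OpenAI default, so a Gemini-only setup needs no OpenAI key.
    return (
        agent.function_calling_llm
        or crew_function_calling_llm
        or agent.llm
    )
```

With this ordering, an agent created with only `llm="gemini/gemini-1.5-pro"` resolves to its own Gemini model for tool calls when neither it nor the crew sets a function_calling_llm.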

Testing

Added tests to verify that agents with non-OpenAI models can use tools without needing OpenAI credentials.

Link to Devin run: https://app.devin.ai/sessions/d6e3c2922797403d8eb47ff23f2f8477
Requested by: Joe Moura (joao@crewai.com)

devin-ai-integration bot and others added 2 commits April 3, 2025 11:40
…n_calling_llm is specified

Co-Authored-By: Joe Moura <joao@crewai.com>
Co-Authored-By: Joe Moura <joao@crewai.com>
Contributor Author

🤖 Devin AI Engineer

I'll be helping with this pull request! Here's what you should know:

✅ I will automatically:

  • Address comments on this PR. Add "(aside)" to your comment to have me ignore it.
  • Look at CI failures and help fix them

Note: I can only respond to comments from users who have write access to this repository.

⚙️ Control Options:

  • Disable automatic comment and CI monitoring

@joaomdmoura
Collaborator

Disclaimer: This review was made by a crew of AI Agents.

Code Review Comment for PR #2518

Summary

This PR enhances the crewAI framework by letting agents fall back to their own LLM when no specific function_calling_llm is provided. The change adds flexibility to agent behavior while, importantly, maintaining backward compatibility.

Detailed Findings

Code Improvements

  1. Type Hints in kickoff Method:

    • Suggest implementing type hints to enhance code readability:
    def kickoff(
        self,
        agent: BaseAgent,
        function_calling_llm: Optional[BaseLLM] = None
    ) -> None:

    The above implementation clarifies expected parameters and return type, making it explicit for future developers.

  2. Docstring Addition:

    • A docstring elucidating the fallback behavior for LLM assignment should be included:
    """
    Sets the function calling LLM for the agent.
    If no specific function_calling_llm is provided, uses the agent's own LLM.
    """
  3. Agent Test Clarity:

    • Consider reorganizing tests for clarity and maintainability:
    @pytest.mark.vcr(filter_headers=["authorization"])
    def test_agent_uses_own_llm_for_function_calling_when_not_specified():
        """Test that the agent defaults to its own LLM if none is specified."""

Historical Context

Reviewing related PRs can provide insight into previous discussions around LLM assignment and agent behavior. Past changes have focused on compatibility with various LLM models, including non-OpenAI integrations, which are crucial for expanding use cases.

Implications for Related Files

  • agent_test.py & crew_test.py: Both test files show extensive coverage for the new functionality. Future tests should focus on utilizing combinations of LLMs and provide more edge cases to ensure system robustness.

Suggestions for Improvement

  • Documentation Enhancement: The documentation should include a clear section explaining the implications of using various LLMs within crewAI, particularly noting backward compatibility and limitations when combining models like Gemini with OpenAI functionality.

  • Error Handling:

    • Adding validation for LLM compatibility when making function calls could prevent potential runtime errors.
    • Ensure error messages are clear and explicit so users can quickly diagnose a misconfigured model.
  • Test Implementation: Future tests should not only evaluate functionality but should also assess performance when multiple LLM integrations are active. Further, edge cases involving empty or invalid configurations can help fortify the application against unexpected inputs.
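One way the suggested compatibility validation could look is sketched below. This is a hedged illustration of the review's suggestion, not a crewAI API: the capability map, helper name, and exception class are all assumptions.

```python
class UnsupportedFunctionCallingError(ValueError):
    """Raised when the chosen LLM cannot perform function calling."""


def validate_function_calling_llm(
    llm_name: str, capabilities: dict[str, bool]
) -> None:
    # `capabilities` maps model names to whether they support function
    # calling; in practice this would come from provider metadata.
    if not capabilities.get(llm_name, False):
        raise UnsupportedFunctionCallingError(
            f"Model '{llm_name}' does not support function calling; "
            "set function_calling_llm to a model that does."
        )
```

Running such a check before dispatching a tool call would surface a clear configuration error up front instead of an opaque runtime failure from the provider.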

General Comments

The proposed changes represent a forward-thinking approach that enhances the flexibility of the crewAI framework while maintaining a high level of testing and documentation standards. With the implementation of the suggested improvements, this PR should deliver solid, maintainable code ready for production deployment.

Thank you for your contributions!

devin-ai-integration bot and others added 2 commits April 3, 2025 11:50
Co-Authored-By: Joe Moura <joao@crewai.com>
Co-Authored-By: Joe Moura <joao@crewai.com>
Contributor Author

Closing due to inactivity for more than 7 days.
