returned session id and used it in evals. #2735
Conversation
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View this failed invocation of the CLA check for more information. For the most up to date status, view the checks section at the bottom of the pull request.
Summary of Changes
Hello @ShaharKatz, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This PR refactors the evaluation process to ensure that the session ID generated during inference is consistently returned and utilized. This is particularly important for services like Vertex AI, where the session ID might be generated externally and needs to be propagated for accurate evaluation tracking.
Highlights
- **Function Signature Change:** The `_generate_inferences_from_root_agent` function now explicitly returns the session ID alongside the inference results.
- **Session ID Propagation:** Updates were made to `cli_eval.py` and `local_eval_service.py` to correctly capture and use this returned session ID in evaluation flows.
- **Vertex AI Integration:** Specific handling for `VertexAiSessionService` was introduced to retrieve the session ID from the created session object, ensuring compatibility with external session management.
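The flow described above can be sketched with a minimal, self-contained stub (all names here are illustrative stand-ins, not the actual ADK code): a session service that mints its own IDs, and an inference helper that returns the ID it actually used so callers never rely on a stale local one.

```python
import asyncio
from dataclasses import dataclass


@dataclass
class Session:
    id: str


class RemoteSessionService:
    """Stand-in for a service (like Vertex AI) that assigns its own session IDs."""

    async def create_session(self, session_id: str) -> Session:
        # The server ignores or rewrites the locally generated ID.
        return Session(id=f"remote-{session_id}")


async def generate_inferences(session_service, session_id: str):
    session = await session_service.create_session(session_id)
    inference_result = ["..."]  # placeholder for the actual eval run
    # Return the ID the service actually used, not the locally generated one.
    return inference_result, session.id


result, used_id = asyncio.run(
    generate_inferences(RemoteSessionService(), "local-1")
)
print(used_id)  # remote-local-1
```

The key point mirrored from the PR: the caller captures the returned ID instead of assuming the one it passed in survived session creation.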
Response from ADK Triaging Agent

Hello @ShaharKatz, thank you for your contribution! Before we can proceed with the review, could you please address the following points from our contribution guidelines:

You can find more details in our contribution guidelines. Thank you!
Code Review

This pull request updates the evaluation logic to support services like `VertexAiSessionService` that generate their own session IDs. The changes correctly propagate the returned `session_id` through the evaluation flow. My review includes suggestions to improve the implementation's robustness and clarity. Specifically, I've pointed out a problematic top-level import of an optional dependency that should be localized, suggested simplifying some logic by relying on type contracts instead of `hasattr`, and noted a minor code clarity issue with variable shadowing along with a typo in a comment.
```diff
@@ -28,6 +28,7 @@
 from ..memory.in_memory_memory_service import InMemoryMemoryService
 from ..runners import Runner
 from ..sessions.base_session_service import BaseSessionService
+from ..sessions.vertex_ai_session_service import VertexAiSessionService  # TODO: some users won't have this dependency.
```
As the `TODO` comment correctly points out, this top-level import can cause a `ModuleNotFoundError` for users who haven't installed the optional dependencies required by `VertexAiSessionService`. This can break the application on startup.

To ensure this dependency remains optional, the import should be moved into the `_generate_inferences_from_root_agent` method, where `VertexAiSessionService` is used. This is typically done within a `try...except ImportError` block to handle cases where the module isn't available.
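One way to keep the dependency optional is a small lazy-loading helper (a hedged sketch, not the ADK's actual pattern) that resolves the import only at the point of use and degrades to `None` when the package is missing:

```python
def load_optional(module_path: str, attr: str):
    """Return the named attribute if the optional module is installed, else None."""
    try:
        module = __import__(module_path, fromlist=[attr])
        return getattr(module, attr)
    except ImportError:
        return None


# Resolved only where it is needed, e.g. inside
# _generate_inferences_from_root_agent, instead of at module import time.
VertexAiSessionService = load_optional(
    "google.adk.sessions.vertex_ai_session_service", "VertexAiSessionService"
)
if VertexAiSessionService is None:
    pass  # fall back to the default session-ID handling
```

With this shape, users without the Vertex AI extras can still import and run the evaluation module; only code paths that actually need `VertexAiSessionService` observe its absence.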
```python
# TODO: this overrides the seesion_id that was generated locally in the prefix format.
inference_result, session_id = (
```
The `TODO` comment has a typo ('seesion_id').

More importantly, the variable shadowing on line 234 for `session_id` can be confusing. The `session_id` generated on line 231 is passed to the function, and then the same variable name is used to capture the returned session ID, which might be different (e.g., when using `VertexAiSessionService`). This makes the code harder to follow.

To improve clarity, consider renaming the variable to avoid shadowing, though this would require changes outside the current diff.
```diff
-# TODO: this overrides the seesion_id that was generated locally in the prefix format.
-inference_result, session_id = (
+# TODO: this overrides the session_id that was generated locally in the prefix format.
+inference_result, session_id = (
```
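One hedged way to apply the suggested rename (the names below are illustrative, and the stub stands in for the real inference call) is to keep both IDs in scope, so the override is explicit rather than silently shadowed:

```python
import asyncio


async def run_inference(session_id: str):
    """Stub standing in for _generate_inferences_from_root_agent."""
    # Pretend the backend replaced the locally generated ID.
    return ["result"], f"srv-{session_id}"


case_id = "001"
local_session_id = f"eval-{case_id}"  # locally generated, prefix format
inference_result, effective_session_id = asyncio.run(
    run_inference(local_session_id)
)
# effective_session_id may differ from local_session_id when the session
# service mints its own ID; with no shadowing, both remain inspectable.
```

Keeping `local_session_id` and `effective_session_id` as distinct names makes the "this overrides the local ID" behavior visible at the call site instead of hiding it behind a reassignment.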
```python
if hasattr(vertex_session, "id"):
    session_id = vertex_session.id
else:
    raise ValueError(
        "Session ID was not returned by the Vertex AI session service"
        " during session creation."
    )
```
The `create_session` method is type-hinted to return a `Session` object, which should have an `id` attribute according to its definition. Using `hasattr` to check for the attribute is less idiomatic and can mask underlying type issues. It's better to directly access `vertex_session.id` and rely on the type contract. If an implementation of `create_session` violates this contract, an `AttributeError` will be raised, which provides a clearer and faster failure signal.
```python
session_id = vertex_session.id
```
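A minimal contrast of the two approaches, using a stub `Session` type rather than the ADK class: direct attribute access fails fast with `AttributeError` on a contract violation, with no extra branching or hand-rolled `ValueError`.

```python
from dataclasses import dataclass


@dataclass
class Session:
    id: str


def extract_session_id(session) -> str:
    # Rely on the type contract: a missing `id` surfaces immediately
    # as AttributeError, pointing straight at the violating object.
    return session.id


sid = extract_session_id(Session(id="abc"))
print(sid)  # abc
```

Passing an object without an `id` attribute raises `AttributeError` at the access site, which is the clearer failure signal the review recommends.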
No description provided.