Conversation

ac-machache

Summary

Adds OpenAI Realtime API support in ADK with the following capabilities:

  • Tool calls (function calling) work in realtime.
  • Agent‑as‑Tool works.
  • Transfer to sub‑agents works when the parent agent uses an OpenAI Realtime model; the sub‑agent can be an OpenAI Realtime agent or a Gemini Live agent.
  • Input/output audio transcription is saved to the session service for persistence; raw audio is not persisted.
  • Session resumption works.

Known limitation:

  • An OpenAI Realtime agent cannot yet be used as a sub‑agent under a Gemini Live agent (support can be added later).

To do next:

  • Add context window management (summarization/trimming).

Motivation

Add support for the OpenAI Realtime API in Google ADK.

Changes

Key implementation updates:

  • Models
    • OpenAIRealtime (connects via wss and issues session.update with tools/instructions)
    • OpenAILlmConnection
      • Maps Realtime events to ADK LlmResponse
      • Streams text/audio deltas; buffers and flushes final text
      • Streams input/output transcription deltas with partial=True; flushes full transcript at turn end
      • Reconstructs streaming function‑call arguments and emits function_call parts
      • Attaches usage metadata from response.done to the final LlmResponse
  • Flows
    • OpenAILlmFlow/OpenAutoFlow
      • Reuse base processors (basic, identity, instructions, contents, NL planning, code‑exec, output schema)
      • Minimal send/receive tailored for Realtime
      • On function_call: run tool handling and, if the target is an OpenAI Realtime agent, call run_realtime; otherwise fall back to run_live
  • Config & processors
    • basic.py: pass through RunConfig.openai_realtime_session into llm_request.config.labels['adk_openai_session_json']
    • RunConfig: clarified doc example for OpenAI session settings
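The function‑call argument reconstruction described above can be sketched as follows. This is a simplified illustration, not the PR's actual implementation: the event type strings mirror the OpenAI Realtime API's streamed function‑call argument events, while the function name `reconstruct_function_calls` and the event/result shapes are hypothetical.

```python
import json
from collections import defaultdict

def reconstruct_function_calls(events):
    """Accumulate streamed argument fragments per call id and parse the
    complete JSON once the final event for that call arrives."""
    buffers = defaultdict(str)
    calls = []
    for event in events:
        if event["type"] == "response.function_call_arguments.delta":
            buffers[event["call_id"]] += event["delta"]
        elif event["type"] == "response.function_call_arguments.done":
            raw = buffers.pop(event["call_id"], "")
            calls.append({
                "call_id": event["call_id"],
                "name": event.get("name", ""),
                "args": json.loads(raw) if raw else {},
            })
    return calls

# Example: the JSON arguments arrive split across two delta events.
events = [
    {"type": "response.function_call_arguments.delta", "call_id": "c1", "delta": '{"city": "Par'},
    {"type": "response.function_call_arguments.delta", "call_id": "c1", "delta": 'is"}'},
    {"type": "response.function_call_arguments.done", "call_id": "c1", "name": "get_weather"},
]
print(reconstruct_function_calls(events))
```

The key point is that argument JSON is only parseable once the terminal event for a given call_id is seen; until then the fragments are just buffered text.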

Files (high‑level):

  • Added/Updated provider: adk-python/src/google/adk/models/openai_llm.py, openai_llm_connection.py
  • Flow: adk-python/src/google/adk/flows/llm_flows/openai_llm_flow.py
  • Processor: adk-python/src/google/adk/flows/llm_flows/basic.py (OpenAI session passthrough)
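The session.update handshake and the basic.py label passthrough described above could look roughly like this. The label key 'adk_openai_session_json' comes from this PR; the helper name and the exact payload shape are assumptions for illustration only.

```python
import json

def build_session_update(instructions, tools, labels):
    """Assemble a session.update message from agent instructions and tool
    declarations, then merge any vendor-specific overrides passed through
    RunConfig via the 'adk_openai_session_json' label."""
    session = {
        "instructions": instructions,
        "tools": [
            {
                "type": "function",
                "name": t["name"],
                "description": t.get("description", ""),
                "parameters": t.get("parameters", {"type": "object"}),
            }
            for t in tools
        ],
    }
    overrides = labels.get("adk_openai_session_json")
    if overrides:
        # Vendor-specific settings (modalities, turn detection, ...) win.
        session.update(json.loads(overrides))
    return {"type": "session.update", "session": session}

msg = build_session_update(
    instructions="You are a helpful agent.",
    tools=[{"name": "get_weather", "parameters": {"type": "object"}}],
    labels={"adk_openai_session_json": '{"turn_detection": {"type": "server_vad"}}'},
)
print(msg["session"]["turn_detection"])
```

Routing the vendor-specific JSON through a label keeps the base LlmRequest schema untouched, which matches the passthrough approach taken in basic.py.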

Backwards compatibility

  • Existing Gemini/Anthropic/LiteLLM paths are untouched.
  • Runner.run_realtime(...) is additive; run_live(...) behavior is unchanged.

Testing Plan

Unit tests (pytest)

New tests added under tests/unittests:

  • Models
    • models/test_openai_llm_connection.py: history send, text/audio deltas, transcript streaming/flush, function‑call argument streaming, usage metadata
    • models/test_openai_llm.py: connect() builds session.update from labels/tools/instructions; supported models regex
  • Flows
    • flows/llm_flows/test_openai_llm_flow.py: end‑to‑end OpenAI flow; function_call → function_response; realtime sub‑agent transfer path
    • flows/llm_flows/test_basic_processor_openai.py: OpenAI session passthrough via basic processor
  • Agents & Runner
    • agents/test_llm_agent_realtime_transfer.py: realtime sub‑agent transfer via OpenAI flow
    • agents/test_llm_agent_realtime_flow_selection.py: realtime entrypoint availability
    • test_runners_realtime.py: Runner.run_realtime delegates via agent entrypoint
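The transcript streaming/flush contract that these tests exercise can be sketched as a standalone check. The `TranscriptBuffer` class here is a hypothetical miniature, not the PR's real connection class; it only captures the contract that deltas are surfaced with partial=True and the full transcript is flushed with partial=False at turn end.

```python
class TranscriptBuffer:
    """Minimal stand-in for the transcript handling in the connection:
    stream each delta as partial, flush the joined text at turn end."""

    def __init__(self):
        self._parts = []

    def on_delta(self, text):
        self._parts.append(text)
        return {"text": text, "partial": True}

    def flush(self):
        full = "".join(self._parts)
        self._parts.clear()
        return {"text": full, "partial": False}

def test_transcript_stream_and_flush():
    buf = TranscriptBuffer()
    assert buf.on_delta("Hel") == {"text": "Hel", "partial": True}
    assert buf.on_delta("lo") == {"text": "lo", "partial": True}
    assert buf.flush() == {"text": "Hello", "partial": False}
    # After flushing, the buffer starts a fresh turn.
    assert buf.flush() == {"text": "", "partial": False}

test_transcript_stream_and_flush()
print("ok")
```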

Run:

pytest -q \
  adk-python/tests/unittests/models/test_openai_llm_connection.py \
  adk-python/tests/unittests/flows/llm_flows/test_openai_llm_flow.py \
  adk-python/tests/unittests/flows/llm_flows/test_basic_processor_openai.py \
  adk-python/tests/unittests/test_runners_realtime.py \
  adk-python/tests/unittests/agents/test_llm_agent_realtime_transfer.py \
  adk-python/tests/unittests/agents/test_llm_agent_realtime_flow_selection.py \
  adk-python/tests/unittests/models/test_openai_llm.py

Latest results:

30 passed, 2 warnings (deprecation note on Runner session arg), 0 failed

Manual E2E (optional)

Using tests_runs/adk-streaming-ws/app/main.py:

  1. Set OPENAI_API_KEY (and optionally OPENAI_RT_LANGUAGE).
  2. Start the FastAPI app (e.g., uvicorn app.main:app --reload).
  3. Connect via the provided web client and test:
    • Text input & tool calling
    • Audio input (if enabled) with VAD and transcript streaming
    • Sub‑agent transfer to realtime child agent

Documentation

  • RunConfig.openai_realtime_session docstring updated with audio config example and removal of trimmer notes.
  • No separate docs repo changes required beyond feature announcement (optional).
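For orientation, the kind of audio/session configuration the updated docstring example covers might look like the following. This is a hedged sketch: the field names follow the public OpenAI Realtime session schema (modalities, voice, turn detection, input transcription), but the exact values shown are illustrative assumptions, not the docstring's verbatim content.

```python
# Illustrative vendor-specific settings as they might be supplied via
# RunConfig.openai_realtime_session; field names follow the OpenAI
# Realtime session schema, values here are placeholders.
openai_realtime_session = {
    "modalities": ["audio", "text"],
    "voice": "alloy",
    "turn_detection": {"type": "server_vad", "silence_duration_ms": 500},
    "input_audio_transcription": {"model": "whisper-1"},
}
print(sorted(openai_realtime_session))
```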

Issue(s)

Closes #2719


@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @ac-machache, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly extends the Google ADK by integrating support for the OpenAI Realtime API. The primary goal is to enable seamless, real-time conversational experiences with OpenAI models, including capabilities like live tool calls, agent-as-tool functionality, and dynamic transfers between parent and sub-agents. This enhancement allows ADK to leverage OpenAI's advanced real-time features for more interactive and responsive AI applications, while ensuring session continuity and transcription persistence.

Highlights

  • Real-time Agent Execution: Introduces a new run_realtime entry point in the BaseAgent and Runner classes, specifically designed for video/audio-based conversational interactions.
  • OpenAI Realtime API Integration: Adds OpenAIRealtime model and OpenAILlmConnection to support OpenAI's WebSocket-based real-time API, enabling streaming text, audio, and function calls.
  • Specialized OpenAI LLM Flows: Implements a dedicated OpenAILlmFlow (with OpenSingleFlow and OpenAutoFlow variants) to manage the unique aspects of OpenAI's real-time communication, including tool calls, agent-as-tool functionality, and sub-agent transfers.
  • Configurable Real-time Session Settings: Enhances RunConfig with openai_realtime_session to allow granular, vendor-specific control over OpenAI Realtime session settings, such as modalities and turn detection.
  • Transcription Persistence: Ensures persistence of input/output audio transcription to the session service, while raw audio is not persisted, maintaining session continuity.

@adk-bot added labels on Aug 25, 2025: bot triaged [Bot] This issue is triaged by ADK bot; models [Component] Issues related to model support

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces significant new functionality by adding support for the OpenAI Realtime API. The changes are extensive, including new agent entry points, dedicated flows, model connections, and configuration options. The overall structure is well-designed, reusing existing processors and patterns where possible, which is great for maintainability. The implementation correctly handles various real-time events like text and audio deltas, function calls, and transcriptions.

My review focuses on improving robustness and performance. I've identified several places where broad exception handlers could mask underlying issues; I recommend adding logging to these for easier debugging. I've also pointed out a couple of fixed-delay asyncio.sleep(1) calls that could impact latency and should be justified or removed. Finally, there are a few minor issues like a typo in an error message and some leftover debugging code that needs to be cleaned up.

Overall, this is a solid contribution that adds valuable capabilities to the ADK. Addressing these points will make the implementation more robust and performant.

@ac-machache
Author

@hangfei, could you please check this and tell me if there are any other fix suggestions I need to address? Thank you.

@boyangsvl
Collaborator

Thank you @ac-machache! This is a really good feature! Have you considered contributing it to https://github.com/google/adk-python-community? I also wonder if you can achieve this by just implementing a BaseLlm. I don't think you need to add run_realtime(). We expect people to continue to use run_live and do type conversions under the hood.

@ac-machache
Author

@boyangsvl, regarding run_realtime(): it is not strictly needed, but since I did not want to alter the existing ADK run_live, and because base_llm_flow is not yet stable and still lacks some functionality, I implemented it this way. However, I can revert to using run_live.

As for adk-python-community, I didn't know it existed (I think the triaging bot should mention it, to encourage contributors to add features there).

Development

Successfully merging this pull request may close these issues.

extend google ADK live capabilities with openai realtime api