MaxListenersExceededWarning: Possible EventTarget memory leak detected - not cleaning up AbortSignal listeners #7963

Open
jscott-yps opened this issue Mar 20, 2025 · 9 comments
Labels
auto:bug Related to a bug, vulnerability, unexpected error with an existing feature auto:question A specific question about the codebase, product, project, or how to use a feature

Comments

@jscott-yps

MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. MaxListeners is 10. Use events.setMaxListeners() to increase limit

I am not entirely sure what is causing this. A seemingly identical issue was apparently fixed in langchainjs:

#6461

Did it somehow not make its way into this?

// `graph` here is a compiled LangGraph graph (definition omitted)
const messages = [{
  role: "user",
  content: data?.message || `Do something`,
}];

const config = { configurable: { thread_id: "conversation-num-1" } };

for await (const event of graph.streamEvents(
  { messages },
  { version: "v2", ...config }
)) {
  const kind = event.event;
  console.log(`Event: ${kind}: ${event.name}`);
}
Event: on_chain_start: Branch<agent>
Event: on_chain_end: Branch<agent>
Event: on_chain_end: agent
Event: on_chain_stream: New Agent
Event: on_chain_start: tools
Event: on_tool_start: subtract
Calling subtract with args {"a":5,"b":9}
Event: on_tool_end: subtract

 ERROR  (node:52053) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. MaxListeners is 10. Use events.setMaxListeners() to increase limit

Event: on_chain_start: ChannelWrite<...,tools>
Event: on_chain_end: ChannelWrite<...,tools>
Event: on_chain_end: tools
Event: on_chain_stream: New Agent
Event: on_chain_start: agent
Event: on_chat_model_start: ChatOpenAI
@limcolin

Bumping this as it's happening here for me too.

Event: on_chain_start: LangGraph
Event: on_chain_start: __start__
Event: on_chain_start: ChannelWrite<...>
Event: on_chain_end: ChannelWrite<...>
Event: on_chain_start: ChannelWrite<__start__:supervisor>
Event: on_chain_end: ChannelWrite<__start__:supervisor>
Event: on_chain_end: __start__
Event: on_chain_start: supervisor
Event: on_chain_start: RunnableSequence
Event: on_prompt_start: ChatPromptTemplate
Event: on_prompt_end: ChatPromptTemplate
Event: on_chat_model_start: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI (×10)
Event: on_chat_model_end: ChatOpenAI
Event: on_chain_end: RunnableSequence
Event: on_chain_start: ChannelWrite<...,supervisor>
Event: on_chain_end: ChannelWrite<...,supervisor>
Event: on_chain_start: Branch<supervisor>
Event: on_chain_end: Branch<supervisor>
Event: on_chain_end: supervisor
Event: on_chain_stream: LangGraph
Event: on_chain_start: conversationalist
Event: on_chain_start: LangGraph
Event: on_chain_start: __start__
Event: on_chain_start: ChannelWrite<...>
Event: on_chain_end: ChannelWrite<...>
Event: on_chain_start: ChannelWrite<__start__:agent>
Event: on_chain_end: ChannelWrite<__start__:agent>
Event: on_chain_end: __start__
Event: on_chain_start: agent
Event: on_chain_start: RunnableSequence
Event: on_chain_start: prompt
Event: on_chain_end: prompt
Event: on_chat_model_start: ChatOpenAI

(node:13572) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
(Use `node --trace-warnings ...` to show where the warning was created)

Event: on_chat_model_stream: ChatOpenAI (×11)
Event: on_chat_model_end: ChatOpenAI
Event: on_chain_end: RunnableSequence
Event: on_chain_start: ChannelWrite<...,agent>
Event: on_chain_end: ChannelWrite<...,agent>
Event: on_chain_start: Branch<agent,continue,__end__>
Event: on_chain_end: Branch<agent,continue,__end__>
Event: on_chain_end: agent
Event: on_chain_end: LangGraph
Event: on_chain_start: ChannelWrite<...,conversationalist>
Event: on_chain_end: ChannelWrite<...,conversationalist>
Event: on_chain_end: conversationalist
Event: on_chain_stream: LangGraph
Event: on_chain_start: supervisor
Event: on_chain_start: RunnableSequence
Event: on_prompt_start: ChatPromptTemplate
Event: on_prompt_end: ChatPromptTemplate
Event: on_chat_model_start: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI (×8)
Event: on_chat_model_end: ChatOpenAI
Event: on_chain_end: RunnableSequence
Event: on_chain_start: ChannelWrite<...,supervisor>
Event: on_chain_end: ChannelWrite<...,supervisor>
Event: on_chain_start: Branch<supervisor>
Event: on_chain_end: Branch<supervisor>
Event: on_chain_end: supervisor
Event: on_chain_stream: LangGraph
Event: on_chain_start: FINISH
Event: on_chain_start: RunnableSequence
Event: on_chat_model_start: ChatOpenAI
Event: on_chat_model_stream: ChatOpenAI (×42)
Event: on_chat_model_end: ChatOpenAI
Event: on_parser_start: StructuredOutputParser
Event: on_parser_end: StructuredOutputParser
Event: on_chain_end: RunnableSequence
Event: on_chain_start: ChannelWrite<...,FINISH>
Event: on_chain_end: ChannelWrite<...,FINISH>
Event: on_chain_end: FINISH
Event: on_chain_stream: LangGraph
Event: on_chain_end: LangGraph

@nikhilshinday

Bumping this as it's happening for me as well. It seems to trigger when more than ~10 LLM calls are created at the same time.
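
As an interim workaround rather than a fix: the warning message itself points at events.setMaxListeners(). Below is a minimal sketch, assuming the concurrent calls all share a single AbortSignal that you create and pass in yourself; raising the cap only silences the warning, it does not remove any accumulated listeners.

import { setMaxListeners } from "node:events";

const controller = new AbortController();
// Raise the "abort" listener cap on this one signal (Node's default is 10).
setMaxListeners(100, controller.signal);

// Hypothetical usage: pass the signal into the graph call.
// await graph.invoke({ messages }, { signal: controller.signal });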

@benjamincburns
Collaborator

benjamincburns commented Mar 31, 2025

Hi @jscott-yps, @limcolin, and @nikhilshinday - thanks for reporting this!

Is this only occurring with streamEvents, or are you also seeing it when you call invoke or stream? What do you see when you run with --trace-warnings? Note: if you're not executing via node directly, you can still pass the flag via the NODE_OPTIONS env var, e.g. NODE_OPTIONS=--trace-warnings.

Also, if one of you could please share an MRE, that would help us debug this more quickly.

@benjamincburns
Collaborator

benjamincburns commented Mar 31, 2025

Unfortunately I'm unable to reproduce this without an MRE.

I ran a few variations on the test below to try to trigger this warning, but was unable to. I ran it with both ChatOpenAI and FakeListChatModel (including the latter version here, since it can be executed without API access). I tested on current main as well as on 0.2.55; 0.2.55 ran against @langchain/core@0.3.40, and latest main ran against @langchain/core@0.3.43.

import { it } from "@jest/globals";
import { FakeListChatModel } from "@langchain/core/utils/testing";
import {
  MessagesAnnotation,
  START,
  StateGraph,
} from "@langchain/langgraph";

it("should not warn about too many AbortSignal event listeners", async () => {
  // A fake model so the test runs without API access.
  const llm = new FakeListChatModel({
    sleep: 1,
    responses: Array(500).fill("Why hello!"),
  });

  const getResponse = async () => {
    return {
      messages: await llm.invoke([
        {
          role: "user",
          content: "Hi there.",
        },
      ]),
    };
  };

  const builder = new StateGraph<
    typeof MessagesAnnotation["spec"],
    typeof MessagesAnnotation["State"],
    typeof MessagesAnnotation["Update"],
    string
  >(MessagesAnnotation);

  // Build a linear chain of 50 nodes, each making one LLM call.
  for (let i = 0; i < 50; i += 1) {
    builder.addNode(`node-${i}`, getResponse);
    if (i === 0) {
      builder.addEdge(START, `node-${i}`);
    } else {
      builder.addEdge(`node-${i - 1}`, `node-${i}`);
    }
  }

  const graph = builder.compile();

  // Exercise all three execution paths against the same graph.
  await graph.invoke({ messages: [] }, { recursionLimit: 51 });

  const stream = await graph.stream({ messages: [] }, { recursionLimit: 51 });
  for await (const _ of stream) { }

  const streamEvents = graph.streamEvents(
    { messages: [] },
    { version: "v2", recursionLimit: 51 }
  );
  for await (const _ of streamEvents) { }
});

@limcolin

limcolin commented Mar 31, 2025

Hey @benjamincburns thanks for looking into this.

It's happening only on streamEvents for me. The logs show the sequence of events before the warning is displayed, although I can't say for certain whether that necessarily reflects a particular event being the trigger.

From @jscott-yps's logs, it looks like streamEvents there too.

@keisokoo

keisokoo commented Mar 31, 2025

It seems like there’s a potential memory leak in the consumeRunnableStream function in /langchain-core/src/runnables/base.ts.

Specifically, this block:

options.signal.addEventListener(
  "abort",
  () => {
    abortController.abort();
  },
  { once: true }
);

adds an abort listener to the passed options.signal, but never removes it manually.

When Runnable.stream() or streamEvents() is called repeatedly with the same signal (or multiple times in parallel), the listeners accumulate, eventually triggering:

(node:75561) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. MaxListeners is 10. Use events.setMaxListeners() to increase limit
(Use node --trace-warnings ... to show where the warning was created)

In long-running apps, or with multiple concurrent streams, this becomes problematic. Although the listener is registered with { once: true }, it is only removed when the signal actually fires; if the signal is never aborted, the listeners still accumulate, which amounts to a memory leak over time.

I'm not 100% certain yet, but I suspect the root cause is that addEventListener is used without a corresponding removeEventListener, especially when streamEvents is called repeatedly in a long-lived app. Just flagging it in case; I'll try to confirm with a minimal reproduction soon.
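
For illustration, here is a minimal sketch of the cleanup pattern that would avoid the accumulation. This is the general pattern, not the actual library code: keep a named handler and detach it once the stream settles.

const onAbort = () => abortController.abort();
options.signal?.addEventListener("abort", onAbort, { once: true });
try {
  // ... consume the runnable stream ...
} finally {
  // Detach even if the signal never fired, so listeners don't pile up.
  options.signal?.removeEventListener("abort", onAbort);
}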

@limcolin

In case it helps, I'm only calling streamEvents() once, not multiple times or in parallel. However, this is a multi-agent (supervisor) workflow, which could be the reason.

I've used streamEvents() (also a single call) in a single-agent pattern without getting the warning.

@migrad

migrad commented Apr 3, 2025

I'm also getting the same warning. I've attached the trace:

(node:4027766) MaxListenersExceededWarning: Possible EventTarget memory leak detected. 11 abort listeners added to [AbortSignal]. Use events.setMaxListeners() to increase limit
    at [kNewListener] (node:internal/event_target:534:17)
    at [kNewListener] (node:internal/abort_controller:239:24)
    at EventTarget.addEventListener (node:internal/event_target:645:23)
    at node_modules/@langchain/core/dist/utils/signal.js:19:20
    at new Promise (<anonymous>)
    at raceWithSignal (node_modules/@langchain/core/dist/utils/signal.js:15:9)
    at RunnableSequence.invoke (node_modules/@langchain/core/dist/runnables/base.js:1274:39)
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
    at async RunnableCallable.callModel [as func] (node_modules/@langchain/langgraph/dist/prebuilt/react_agent_executor.js:235:27)
    at async RunnableCallable.invoke (node_modules/@langchain/langgraph/dist/utils.js:79:27)
----
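
For context, the raceWithSignal frames in that trace suggest a helper that races a promise against signal abort. The sketch below is an assumed shape for illustration, not the actual @langchain/core source; it shows where a listener leaks if it is not detached once the raced promise settles.

async function raceWithSignal<T>(
  promise: Promise<T>,
  signal?: AbortSignal
): Promise<T> {
  if (!signal) return promise;
  let onAbort!: () => void;
  const aborted = new Promise<never>((_resolve, reject) => {
    onAbort = () => reject(new Error("Aborted"));
    // One "abort" listener is added to the shared signal per call...
    signal.addEventListener("abort", onAbort, { once: true });
  });
  try {
    return await Promise.race([promise, aborted]);
  } finally {
    // ...and must be detached here; otherwise it outlives the race
    // whenever the promise wins, and listeners pile up across invokes.
    signal.removeEventListener("abort", onAbort);
  }
}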

@benjamincburns benjamincburns transferred this issue from langchain-ai/langgraphjs Apr 4, 2025
@benjamincburns benjamincburns changed the title MaxListenersExceededWarning: Possible EventTarget memory leak detected. MaxListenersExceededWarning: Possible EventTarget memory leak detected - not cleaning up AbortSignal listeners Apr 4, 2025
@benjamincburns
Collaborator

Transferred this to the LangChain JS repo, per the trace above.

@dosubot dosubot bot added auto:bug Related to a bug, vulnerability, unexpected error with an existing feature auto:question A specific question about the codebase, product, project, or how to use a feature labels Apr 4, 2025