Stopped writing records to file and started to use all of the server's memory #9823

Open
miksir opened this issue Jan 10, 2025 · 0 comments

Bug Report

Describe the bug
Fluent Bit stopped processing records in the output, while the input still received records. Fluent Bit's memory usage then grew until it reached the server's limit and the process was killed by the OOM killer.

To Reproduce
There are no reliable steps to reproduce; it happens from time to time (roughly once per week).

Screenshots
(three screenshots attached in the original issue)

Your Environment

  • Version used: v3.2.3
  • Configuration: see below
  • Environment name and version (e.g. Kubernetes? What version?): plain Linux, no Kubernetes
  • Server type and version:
  • Operating System and version: Debian GNU/Linux 12, 6.1.0-28-amd64
  • Filters and plugins: input forward, rewrite_tag filter, file output

Configuration

```yaml
service:
  flush: 1
  log_level: info
  hot_reload: 'on'
  scheduler.cap: 60
  http_server: 'on'
  http_listen: 127.0.0.1
  http_port: 2020
  storage.metrics: 'on'
pipeline:
  inputs:
  - name: forward
    listen: 0.0.0.0
    port: 7000
    threaded: true
    tag: input_my
  filters:
  - name: rewrite_tag
    match: input_my
    rule: $file ^(.+)$ $host.$1 false
  outputs:
  - name: file
    match: '*'
    path: /var/log
    mkdir: true
    format: template
    template: '{message}'
    workers: 10
```

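Not a confirmed fix, but one way to bound memory while the root cause is investigated would be to move the forward input and the rewrite_tag emitter onto filesystem-backed buffering (option names are from Fluent Bit's buffering/storage documentation; the storage path and limit values below are assumptions, not tested against this setup):

```yaml
service:
  storage.path: /var/lib/fluent-bit/storage   # assumed path; must exist and be writable
  storage.max_chunks_up: 128                  # cap how many chunks stay in memory
pipeline:
  inputs:
  - name: forward
    # ... same options as above ...
    storage.type: filesystem                  # spill input chunks to disk under pressure
  filters:
  - name: rewrite_tag
    # ... same options as above ...
    emitter_storage.type: filesystem          # the emitter is its own input; buffer it on disk too
    emitter_mem_buf_limit: 100M               # hard cap on the emitter's in-memory buffer
```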
All input traffic is generated by Fluent Bit of the same version, running on about 24 hosts.
Configuration of the log source

```yaml
service:
  flush: 1
  log_level: error
  hot_reload: 'on'
  scheduler.cap: 60
  http_server: 'on'
  http_listen: 127.0.0.1
  http_port: 2020
pipeline:
  inputs:
  - name: tail
    tag: file_log
    path: /var/log/file_log.log
    path_key: file
    skip_long_lines: true
    key: message
    db: /var/lib/fluent-bit/file_log.db
    threaded: true
    buffer_chunk_size: 4k
    buffer_max_size: 16k
    mem_buf_limit: 5M
  filters:
  - name: record_modifier
    match: file_log
    record:
    - host ${HOSTNAME}
  outputs:
  - name: forward
    match: file_log
    host: destination_hostname
    port: 7000
    require_ack_response: true
    workers: 1
    retry_limit: false
```

Additional context
The log always contains many messages like these:

```
Jan 09 16:48:46 [ warn] [input] emitter.1 paused (mem buf overlimit)
Jan 09 16:48:46 [ info] [input] pausing emitter_for_rewrite_tag.0
Jan 09 16:48:46 [ info] [input] resume emitter_for_rewrite_tag.0
Jan 09 16:48:46 [ info] [input] emitter.1 resume (mem buf overlimit)
Jan 09 16:48:48 [ warn] [input] emitter.1 paused (mem buf overlimit)
Jan 09 16:48:48 [ info] [input] pausing emitter_for_rewrite_tag.0
Jan 09 16:48:48 [ info] [input] resume emitter_for_rewrite_tag.0
Jan 09 16:48:48 [ info] [input] emitter.1 resume (mem buf overlimit)
Jan 09 16:48:50 [ warn] [input] emitter.1 paused (mem buf overlimit)
Jan 09 16:48:50 [ info] [input] pausing emitter_for_rewrite_tag.0
Jan 09 16:48:50 [ info] [input] resume emitter_for_rewrite_tag.0
Jan 09 16:48:50 [ info] [input] emitter.1 resume (mem buf overlimit)
Jan 09 16:48:54 [ warn] [input] emitter.1 paused (mem buf overlimit)
Jan 09 16:48:54 [ info] [input] pausing emitter_for_rewrite_tag.0
Jan 09 16:48:54 [ info] [input] resume emitter_for_rewrite_tag.0
Jan 09 16:48:54 [ info] [input] emitter.1 resume (mem buf overlimit)
```

But right after the problem started, these messages stopped appearing; instead there were a few of:

```
[error] [input:forward:forward.0] could not enqueue records into the ring buffer
```
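The pause/resume messages above come in matched pairs while backpressure is still working; an unmatched pause suggests the emitter never recovered. As a small illustration (a hypothetical helper script, not part of Fluent Bit), the balance can be checked mechanically from the log text:

```python
import re

# Sample of the pause/resume log lines from this issue.
LOG = """\
Jan 09 16:48:46 [ warn] [input] emitter.1 paused (mem buf overlimit)
Jan 09 16:48:46 [ info] [input] emitter.1 resume (mem buf overlimit)
Jan 09 16:48:48 [ warn] [input] emitter.1 paused (mem buf overlimit)
"""

def pause_resume_balance(log: str) -> dict:
    """Count pause (+1) vs resume (-1) events per input plugin instance.

    A persistently positive balance means pauses without matching resumes,
    i.e. the input stayed paused because the output never drained its buffer.
    """
    counts: dict = {}
    for line in log.splitlines():
        m = re.search(r"\[input\] (\S+) (paused|resume)", line)
        if m:
            name, event = m.groups()
            counts[name] = counts.get(name, 0) + (1 if event == "paused" else -1)
    return counts

print(pause_resume_balance(LOG))  # {'emitter.1': 1} -> one unmatched pause
```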
