
[bitnami/postgresql-ha] Pgpool fails replication check due to hostname mismatch between conninfo and backend_hostnameX in default Helm chart configuration #32950

Open · abhi1693 opened this issue Apr 10, 2025 · 0 comments
Labels: in-progress, postgresql-ha, tech-issues
Name and Version

bitnami/postgresql-ha 15.3.12

What architecture are you using?

amd64

What steps will reproduce the bug?

  1. Deploy the Bitnami postgresql-ha Helm chart with default settings:
helm install my-release oci://registry-1.docker.io/bitnamicharts/postgresql-ha
  2. Wait for the cluster to initialize with:
  • One primary node
  • Two standby nodes
  • Pgpool
  3. On one of the standby pods (e.g., postgresql-ha-postgresql-1), run:
kubectl -n postgresql-ha exec -it postgresql-ha-postgresql-1 -- \
  psql -U repmgr -d postgres -c "SELECT conninfo FROM pg_stat_wal_receiver;"
  4. Observe that the output includes a fully qualified domain name (FQDN), e.g.:
host=postgresql-ha-postgresql-0.postgresql-ha-postgresql-headless.postgresql-ha.svc.cluster.local ...
  5. Check the value of backend_hostname0 in the generated pgpool.conf (e.g., by inspecting the mounted config or printing it from the pod):
kubectl -n postgresql-ha exec -it postgresql-ha-pgpool-XXXXX -- \
  grep backend_hostname /opt/bitnami/pgpool/conf/pgpool.conf
  6. Observe that it contains only the short DNS name:
backend_hostname0 = 'postgresql-ha-postgresql-0.postgresql-ha-postgresql-headless'
  7. In the Pgpool logs, observe repeated validation errors:
verify_backend_node_status: primary 0 does not connect to standby 1
verify_backend_node_status: primary 0 owns only 0 standbys out of 2
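The mismatch in steps 4 and 6 can be illustrated with a small sketch. Python is used purely for illustration (the real comparison happens inside Pgpool's C code), and the assumption here is that Pgpool compares the `conninfo` host against `backend_hostnameX` as a plain string:

```python
# Hostnames taken from steps 4 and 6 of the reproduction above.
conninfo_host = (
    "postgresql-ha-postgresql-0."
    "postgresql-ha-postgresql-headless."
    "postgresql-ha.svc.cluster.local"
)
backend_hostname0 = "postgresql-ha-postgresql-0.postgresql-ha-postgresql-headless"

# An exact string match fails even though both names point at the same pod:
print(conninfo_host == backend_hostname0)  # False

# The short name is a strict prefix of the FQDN; only the cluster DNS
# suffix (.postgresql-ha.svc.cluster.local) differs:
print(conninfo_host.startswith(backend_hostname0 + "."))  # True
```

If the comparison is indeed an exact match, any difference between the chart's short headless-service name and the FQDN reported by the walreceiver would make the check fail, which is consistent with the log lines above.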

Are you using any custom parameters or values?

No

What is the expected behavior?

Pgpool should correctly verify that the primary node is streaming to all standby nodes without logging false negatives like:

verify_backend_node_status: primary X does not connect to standby Y
verify_backend_node_status: primary X owns only 0 standbys out of N

Given that:

  • Streaming replication is working
  • Standbys show status = streaming in pg_stat_wal_receiver
  • conninfo.host and pgpool.conf.backend_hostnameX refer to the same node

Expected behavior:

  • Pgpool resolves the primary-to-standby connection as valid
  • sr_check_worker logs show that the primary owns all expected standbys
  • No false detachments or warnings are triggered when replication is functioning normally.
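One way to model the expected behavior is a suffix-tolerant comparison: treat the short headless-service name and its fully qualified form as the same node. This is a hypothetical sketch, not Pgpool's actual implementation, and the cluster DNS suffix shown is assumed from the FQDN observed in the reproduction:

```python
# Assumed cluster DNS suffixes for this deployment (from the observed FQDN).
CLUSTER_DNS_SUFFIXES = (".postgresql-ha.svc.cluster.local", ".svc.cluster.local")

def normalize(host: str) -> str:
    """Strip a known cluster DNS suffix so FQDN and short name compare equal."""
    for suffix in CLUSTER_DNS_SUFFIXES:
        if host.endswith(suffix):
            return host[: -len(suffix)]
    return host

fqdn = ("postgresql-ha-postgresql-0."
        "postgresql-ha-postgresql-headless."
        "postgresql-ha.svc.cluster.local")
short = "postgresql-ha-postgresql-0.postgresql-ha-postgresql-headless"

print(normalize(fqdn) == normalize(short))  # True
```

Under a comparison like this, the streaming standbys observed in pg_stat_wal_receiver would be attributed to the primary and the false negatives above would not be logged. Alternatively, the chart could render backend_hostnameX as the full FQDN so that even an exact-match check succeeds.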

What do you see instead?

2025-04-10 12:28:11.431: main pid 1: LOG:  Backend status file /opt/bitnami/pgpool/logs/pgpool_status does not exist
2025-04-10 12:28:11.431: main pid 1: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2025-04-10 12:28:11.431: main pid 1: LOG:  memory cache initialized
2025-04-10 12:28:11.431: main pid 1: DETAIL:  memcache blocks :64
2025-04-10 12:28:11.431: main pid 1: LOG:  allocating (165084192) bytes of shared memory segment
2025-04-10 12:28:11.431: main pid 1: LOG:  allocating shared memory segment of size: 165084192 
2025-04-10 12:28:11.542: main pid 1: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2025-04-10 12:28:11.542: main pid 1: LOG:  health_check_stats_shared_memory_size: requested size: 12288
2025-04-10 12:28:11.542: main pid 1: LOG:  memory cache initialized
2025-04-10 12:28:11.542: main pid 1: DETAIL:  memcache blocks :64
2025-04-10 12:28:11.544: main pid 1: LOG:  pool_discard_oid_maps: discarded memqcache oid maps
2025-04-10 12:28:11.559: main pid 1: LOG:  create socket files[0]: /opt/bitnami/pgpool/tmp/.s.PGSQL.5432
2025-04-10 12:28:11.559: main pid 1: LOG:  listen address[0]: *
2025-04-10 12:28:11.559: main pid 1: LOG:  Setting up socket for 0.0.0.0:5432
2025-04-10 12:28:11.559: main pid 1: LOG:  Setting up socket for :::5432
2025-04-10 12:28:11.581: main pid 1: LOG:  find_primary_node_repeatedly: waiting for finding a primary node
2025-04-10 12:28:11.616: main pid 1: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:28:11.617: main pid 1: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:28:11.617: main pid 1: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2
2025-04-10 12:28:11.617: main pid 1: LOG:  find_primary_node: primary node is 0
2025-04-10 12:28:11.617: main pid 1: LOG:  find_primary_node: standby node is 1
2025-04-10 12:28:11.617: main pid 1: LOG:  find_primary_node: standby node is 2
2025-04-10 12:28:11.617: main pid 1: LOG:  create socket files[0]: /opt/bitnami/pgpool/tmp/.s.PGSQL.9898
2025-04-10 12:28:11.617: main pid 1: LOG:  listen address[0]: localhost
2025-04-10 12:28:11.617: main pid 1: LOG:  Setting up socket for ::1:9898
2025-04-10 12:28:11.617: main pid 1: LOG:  Setting up socket for 127.0.0.1:9898
2025-04-10 12:28:11.618: pcp_main pid 267: LOG:  PCP process: 267 started
2025-04-10 12:28:11.618: sr_check_worker pid 268: LOG:  process started
2025-04-10 12:28:11.618: health_check pid 269: LOG:  process started
2025-04-10 12:28:11.618: health_check pid 270: LOG:  process started
2025-04-10 12:28:11.619: health_check pid 271: LOG:  process started
2025-04-10 12:28:11.621: main pid 1: LOG:  pgpool-II successfully started. version 4.6.0 (chirikoboshi)
2025-04-10 12:28:11.621: main pid 1: LOG:  node status[0]: 1
2025-04-10 12:28:11.621: main pid 1: LOG:  node status[1]: 2
2025-04-10 12:28:11.621: main pid 1: LOG:  node status[2]: 2
2025-04-10 12:28:11.646: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:28:11.646: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:28:11.646: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2
2025-04-10 12:28:41.679: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:28:41.679: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:28:41.679: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2
2025-04-10 12:29:11.706: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:29:11.707: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:29:11.707: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2
2025-04-10 12:29:41.734: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:29:41.735: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:29:41.735: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2
2025-04-10 12:30:11.764: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:30:11.765: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:30:11.765: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2
2025-04-10 12:30:41.794: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:30:41.794: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:30:41.794: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2
2025-04-10 12:31:11.823: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:31:11.824: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:31:11.824: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2
2025-04-10 12:31:41.854: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 1
2025-04-10 12:31:41.855: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 does not connect to standby 2
2025-04-10 12:31:41.855: sr_check_worker pid 268: LOG:  verify_backend_node_status: primary 0 owns only 0 standbys out of 2

Additional information

No response

@abhi1693 abhi1693 added the tech-issues The user has a technical issue about an application label Apr 10, 2025
@github-actions github-actions bot added the triage Triage is needed label Apr 10, 2025
@javsalgar javsalgar changed the title Pgpool fails replication check due to hostname mismatch between conninfo and backend_hostnameX in default Helm chart configuration [bitnami/postgresql-ha] Pgpool fails replication check due to hostname mismatch between conninfo and backend_hostnameX in default Helm chart configuration Apr 11, 2025
@github-actions github-actions bot removed the triage Triage is needed label Apr 11, 2025
@github-actions github-actions bot assigned jotamartos and unassigned javsalgar Apr 11, 2025