Collect bpf helper arguments related to bpf map #453

Open

jschwinger233 wants to merge 8 commits into main from gray/bpf-map-args

Conversation

@jschwinger233 (Member) commented Nov 8, 2024

This PR adds a --output-bpfmap flag to collect and print the bpf map ID, name, key (hex), and value (hex).

#  pwru --output-caller --filter-track-skb --filter-track-bpf-helpers --output-bpfmap 'src port 19233 and tcp[tcpflags]=tcp-syn'
2025/01/05 19:28:56 Attaching kprobes (via kprobe-multi)...
1641 / 1641 [-----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------] 100.00% ? p/s
2025/01/05 19:28:56 Attached (ignored 0)
2025/01/05 19:28:56 Listening for events..
2025/01/05 19:28:56 Failed to retrieve all ifaces from all network namespaces: open /proc/272927/ns/net: no such file or directory. Some iface names might be not shown.
SKB                CPU PROCESS          NETNS      MARK/x        IFACE       PROTO  MTU   LEN   TUPLE FUNC CALLER
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0               0         0x0000 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) ip_local_out __ip_queue_xmit
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0               0         0x0000 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) __ip_local_out ip_local_out
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0               0         0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) ip_output      ip_local_out
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) nf_hook_slow   ip_output
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) apparmor_ip_postroute nf_hook_slow
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) ip_finish_output      ip_output
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) __ip_finish_output    ip_finish_output
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) ip_finish_output2     __ip_finish_output
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) neigh_resolve_output  ip_finish_output2
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) __neigh_event_send    neigh_resolve_output
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) eth_header            neigh_resolve_output
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  60    10.244.3.249:19233->10.244.2.187:8080(tcp) skb_push              eth_header
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1373  74    10.244.3.249:19233->10.244.2.187:8080(tcp) __dev_queue_xmit      neigh_resolve_output
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) netdev_core_pick_tx   __dev_queue_xmit
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) validate_xmit_skb     __dev_queue_xmit
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) netif_skb_features    validate_xmit_skb
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) passthru_features_check netif_skb_features
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) skb_network_protocol    netif_skb_features
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) skb_csum_hwoffload_help validate_xmit_skb
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) validate_xmit_xfrm      validate_xmit_skb
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) dev_hard_start_xmit     __dev_queue_xmit
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) skb_clone_tx_timestamp  veth_xmit[veth]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) __dev_forward_skb       veth_xmit[veth]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) __dev_forward_skb2      __dev_forward_skb
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) skb_scrub_packet        __dev_forward_skb2
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533888 0            eth0:11      0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) eth_type_trans          __dev_forward_skb2
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  60    10.244.3.249:19233->10.244.2.187:8080(tcp) __netif_rx              veth_xmit[veth]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  60    10.244.3.249:19233->10.244.2.187:8080(tcp) netif_rx_internal       __netif_rx
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  60    10.244.3.249:19233->10.244.2.187:8080(tcp) enqueue_to_backlog      netif_rx_internal
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  60    10.244.3.249:19233->10.244.2.187:8080(tcp) __netif_receive_skb     process_backlog
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  60    10.244.3.249:19233->10.244.2.187:8080(tcp) __netif_receive_skb_one_core __netif_receive_skb
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_skb_event_output         bpf_prog_a5a51cee1ed29ce6_cil_from_container[bpf]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_skb_pull_data            bpf_prog_e78b6847176ca467_tail_handle_ipv4[bpf]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) skb_ensure_writable          bpf_skb_pull_data
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_skb_load_bytes           bpf_prog_e78b6847176ca467_tail_handle_ipv4[bpf]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) __htab_map_lookup_elem       arch_rethook_trampoline
map_id: 422
map_name: cilium_lb4_serv
key(12):
00000000  0a f4 02 bb 1f 90 00 00  00 00 00 00              |............|
value(12):
00000000  00 00 00 00 00 00 00 00  00 00 00 00              |............|

0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_skb_load_bytes           bpf_prog_81acdf07efeda1c9_tail_ipv4_ct_egress[bpf]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_skb_load_bytes           bpf_prog_81acdf07efeda1c9_tail_ipv4_ct_egress[bpf]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_skb_event_output         bpf_prog_81acdf07efeda1c9_tail_ipv4_ct_egress[bpf]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_skb_event_output         bpf_prog_81acdf07efeda1c9_tail_ipv4_ct_egress[bpf]
0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_map_lookup_elem          arch_rethook_trampoline
map_id: 436
map_name: cilium_ct4_glob
key(14):
00000000  0a f4 02 bb 0a f4 03 f9  4b 21 1f 90 06 01        |........K!....|
value(56):
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000020  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000030  00 00 00 00 00 00 00 00                           |........|

0xffff90d46abbe0e8 3   ~in/curl:1200370 4026533203 0        ~0e72e3a7e883:12 0x0800 1500  74    10.244.3.249:19233->10.244.2.187:8080(tcp) bpf_map_lookup_elem          arch_rethook_trampoline
map_id: 436
map_name: cilium_ct4_glob
key(14):
00000000  0a f4 03 f9 0a f4 02 bb  1f 90 4b 21 06 00        |..........K!..|
value(56):
00000000  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000010  00 00 00 00 00 00 00 00  00 00 00 00 00 00 00 00  |................|
00000020  5b 3b 00 00 00 00 00 00  00 00 02 00 b9 04 00 00  |[;..............|
00000030  1f 3b 00 00 00 00 00 00                           |.;......|

Fixes: #448

@jschwinger233 (Member, Author) commented:

I hope this helps me understand how Cilium CT works. Even 18 months after onboarding, CT is still a mystery to me.

@brb (Member) commented Nov 11, 2024

👍 the idea

@jschwinger233 jschwinger233 force-pushed the gray/bpf-map-args branch 4 times, most recently from 8fb88e5 to 8cb08f5 on January 5, 2025 11:30
@jschwinger233 jschwinger233 marked this pull request as ready for review January 5, 2025 11:39
@jschwinger233 jschwinger233 requested a review from a team as a code owner January 5, 2025 11:39
@jschwinger233 jschwinger233 changed the title [Draft] Collect bpf helper arguments related to bpf map Collect bpf helper arguments related to bpf map Jan 5, 2025
@jschwinger233 (Member, Author) commented:

I labelled this "don't merge" because it's on top of #477, which is pending.

@brb (Member) commented Jan 15, 2025

#477 has been merged. Could you rebase? Thanks.

This patch doesn't introduce any functional change; it only defines the
corresponding new fields and structs in both the bpf and userspace programs.

Signed-off-by: gray <gray.liang@isovalent.com>
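
For illustration, here is a minimal sketch of the kind of bpfmap-related fields such a change might add on the bpf side. The struct name, field names, and buffer sizes below are hypothetical, not the definitions used by this patch.

```c
/* Hypothetical sketch only: names and sizes are illustrative. */
#include <linux/types.h>

#define BPFMAP_KEY_SIZE   64   /* assumed max key bytes copied per event */
#define BPFMAP_VALUE_SIZE 128  /* assumed max value bytes copied per event */

struct bpfmap_data {
	__u32 id;           /* bpf map ID, printed as "map_id" */
	char  name[16];     /* bpf map name (BPF_OBJ_NAME_LEN), printed as "map_name" */
	__u32 key_len;      /* the map's key size */
	__u32 value_len;    /* the map's value size */
	__u8  key[BPFMAP_KEY_SIZE];     /* raw key bytes, hex-dumped by userspace */
	__u8  value[BPFMAP_VALUE_SIZE]; /* raw value bytes, hex-dumped by userspace */
};
```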
No functional changes.

Signed-off-by: gray <gray.liang@isovalent.com>
This patch collects the bpfmap ID, name, key, and value at bpf_map_update_elem.

Signed-off-by: gray <gray.liang@isovalent.com>
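
As a hedged illustration of the entry-side collection (not the patch's actual program), the sketch below shows a kprobe on one such map op, htab_map_update_elem(struct bpf_map *map, void *key, void *value, u64 flags), copying the map ID, name, key, and value into the bpfmap_data shape sketched earlier. Program and struct names are assumptions.

```c
// Hypothetical sketch, not pwru's actual code.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

#define BPFMAP_KEY_SIZE   64
#define BPFMAP_VALUE_SIZE 128

struct bpfmap_data {            /* same shape as the earlier sketch */
	u32 id;
	char name[16];
	u32 key_len;
	u32 value_len;
	u8 key[BPFMAP_KEY_SIZE];
	u8 value[BPFMAP_VALUE_SIZE];
};

SEC("kprobe/htab_map_update_elem")
int BPF_KPROBE(kprobe_bpf_map_update_elem, struct bpf_map *map, void *key,
	       void *value, u64 flags)
{
	struct bpfmap_data data = {};

	/* struct bpf_map carries the ID, name, and key/value sizes. */
	data.id = BPF_CORE_READ(map, id);
	BPF_CORE_READ_STR_INTO(&data.name, map, name);
	data.key_len = BPF_CORE_READ(map, key_size);
	data.value_len = BPF_CORE_READ(map, value_size);

	/* Clamp sizes so the verifier can prove the reads stay in bounds. */
	u32 klen = data.key_len > BPFMAP_KEY_SIZE ? BPFMAP_KEY_SIZE : data.key_len;
	u32 vlen = data.value_len > BPFMAP_VALUE_SIZE ? BPFMAP_VALUE_SIZE : data.value_len;
	bpf_probe_read_kernel(&data.key, klen, key);
	bpf_probe_read_kernel(&data.value, vlen, value);

	/* ... attach data to the pwru event and emit it to userspace ... */
	return 0;
}

char __license[] SEC("license") = "GPL";
```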
This patch collects the bpfmap ID, name, key, and value at bpf_map_delete_elem.

Signed-off-by: gray <gray.liang@isovalent.com>
We can only get the map value at the return hook (kretprobe). That's why
the event instance has to be stashed temporarily in a PERCPU array
(event_stash) at the entry hook (kprobe) and retrieved at the return hook
(kretprobe), where we can read the bpfmap value from %rax (x64).

kretprobe_bpf_map_lookup_elem also needs to be excluded from pcap
injection.

Signed-off-by: gray <gray.liang@isovalent.com>
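
To make the kprobe/kretprobe hand-off concrete, here is a hedged sketch (assumed names, not the patch's code) of stashing the partially filled data in a one-slot PERCPU_ARRAY at the entry hook and completing it at the return hook from the returned value pointer. It targets bpf_map_lookup_elem(struct bpf_map *map, void *key), whose return value is a pointer to the map value or NULL.

```c
// Hypothetical sketch, not pwru's actual code.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_tracing.h>
#include <bpf/bpf_core_read.h>

#define BPFMAP_VALUE_SIZE 128

struct map_event {
	u32 id;
	char name[16];
	u32 value_len;
	u8 value[BPFMAP_VALUE_SIZE];
};

struct {
	__uint(type, BPF_MAP_TYPE_PERCPU_ARRAY);
	__uint(max_entries, 1);
	__type(key, u32);
	__type(value, struct map_event);
} event_stash SEC(".maps");

SEC("kprobe/bpf_map_lookup_elem")
int BPF_KPROBE(kprobe_bpf_map_lookup_elem, struct bpf_map *map, void *key)
{
	u32 zero = 0;
	struct map_event *ev = bpf_map_lookup_elem(&event_stash, &zero);

	if (!ev)
		return 0;

	/* Entry hook: only the map metadata and key are available here. */
	ev->id = BPF_CORE_READ(map, id);
	BPF_CORE_READ_STR_INTO(&ev->name, map, name);
	ev->value_len = BPF_CORE_READ(map, value_size);
	return 0;
}

SEC("kretprobe/bpf_map_lookup_elem")
int BPF_KRETPROBE(kretprobe_bpf_map_lookup_elem, void *value)
{
	u32 zero = 0;
	struct map_event *ev = bpf_map_lookup_elem(&event_stash, &zero);

	if (!ev || !value) /* value comes from the return register (%rax on x86-64) */
		return 0;

	u32 vlen = ev->value_len > BPFMAP_VALUE_SIZE ? BPFMAP_VALUE_SIZE : ev->value_len;
	bpf_probe_read_kernel(&ev->value, vlen, value);
	/* ... merge into the stashed pwru event and emit it to userspace ... */
	return 0;
}

char __license[] SEC("license") = "GPL";
```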
We search BTF to find bpfmap funcs whose first parameter is of type "struct
bpf_map *". The function name suffix determines which bpf program is
attached:
- *_lookup_elem: {kprobe,kretprobe}_bpf_map_lookup_elem
- *_update_elem: kprobe_bpf_map_update_elem
- *_delete_elem: kprobe_bpf_map_delete_elem

Signed-off-by: gray <gray.liang@isovalent.com>
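
pwru's userspace is written in Go; the following is only a hedged C/libbpf sketch of the same idea, namely scanning vmlinux BTF for functions whose first parameter is a "struct bpf_map *" and dispatching on the name suffix. All names here are illustrative.

```c
// Hypothetical sketch using libbpf's BTF API; pwru's real implementation differs.
#include <stdio.h>
#include <string.h>
#include <bpf/btf.h>

/* Return 1 if the FUNC's first parameter is "struct bpf_map *". */
static int first_param_is_bpf_map(const struct btf *btf, const struct btf_type *fn)
{
	const struct btf_type *proto = btf__type_by_id(btf, fn->type);
	const struct btf_param *params = btf_params(proto);
	const struct btf_type *t;

	if (btf_vlen(proto) < 1 || !params[0].type)
		return 0;

	t = btf__type_by_id(btf, params[0].type);
	if (!btf_is_ptr(t))
		return 0;
	t = btf__type_by_id(btf, t->type);
	return btf_is_struct(t) &&
	       !strcmp(btf__name_by_offset(btf, t->name_off), "bpf_map");
}

int main(void)
{
	struct btf *btf = btf__load_vmlinux_btf();
	unsigned int id, n;

	if (!btf)
		return 1;

	n = btf__type_cnt(btf);
	for (id = 1; id < n; id++) {
		const struct btf_type *t = btf__type_by_id(btf, id);
		const char *name;

		if (!btf_is_func(t) || !first_param_is_bpf_map(btf, t))
			continue;

		/* Dispatch on the function-name suffix. */
		name = btf__name_by_offset(btf, t->name_off);
		if (strstr(name, "_lookup_elem"))
			printf("%s -> kprobe+kretprobe_bpf_map_lookup_elem\n", name);
		else if (strstr(name, "_update_elem"))
			printf("%s -> kprobe_bpf_map_update_elem\n", name);
		else if (strstr(name, "_delete_elem"))
			printf("%s -> kprobe_bpf_map_delete_elem\n", name);
	}

	btf__free(btf);
	return 0;
}
```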
Signed-off-by: gray <gray.liang@isovalent.com>
By adding 1 to bpf_get_smp_processor_id(), we can safely rely on
"if event.PrintBpfmapId > 0" to decide whether there is bpfmap data to
read.

Signed-off-by: gray <gray.liang@isovalent.com>
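
A small hedged sketch of the convention described above (field and helper names are illustrative): 0 means "no bpfmap data", so the bpf side stores the CPU index plus one.

```c
// Hypothetical sketch of the +1 convention; names are illustrative.
#include "vmlinux.h"
#include <bpf/bpf_helpers.h>

struct event {
	/* ... other pwru event fields ... */
	__u32 print_bpfmap_id; /* 0 = no bpfmap data; otherwise CPU index + 1 */
};

static __always_inline void mark_bpfmap_data(struct event *ev)
{
	/* bpf_get_smp_processor_id() can legitimately return 0, so reserve 0
	 * for "nothing to read" and store the CPU index shifted by one. */
	ev->print_bpfmap_id = bpf_get_smp_processor_id() + 1;
}
```

Userspace then only reads the stashed key/value buffers when the field is non-zero, i.e. the "if event.PrintBpfmapId > 0" check mentioned above.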