
Conversation

@Amaindex
Contributor

Added tcpdrop tool, consisting of tcpdrop.bpf.c and tcpdrop.c, to trace TCP kernel-dropped packets using eBPF. Supports IPv4/IPv6 filtering and network namespace filtering, with output including timestamp, PID, IP addresses, ports, TCP state, and drop reason. Based on tcpdrop(8) from BCC.

Added tcpdrop tool, consisting of tcpdrop.bpf.c and tcpdrop.c, to
trace TCP kernel-dropped packets using eBPF. Supports IPv4/IPv6
filtering and network namespace filtering, with output including
timestamp, PID, IP addresses, ports, TCP state, and drop reason.
Based on tcpdrop(8) from BCC.

Signed-off-by: Lance Yang <[email protected]>
Signed-off-by: Zi Li <[email protected]>
Signed-off-by: Amaindex <[email protected]>
@Amaindex
Contributor Author

Hi @chenhengqi, we’ve got a C version of tcpdrop in this PR (#5329), sticking close to the Python version’s features and options. Could you take a peek when you’ve got a sec? Would love your thoughts :)

@Amaindex
Contributor Author

Amaindex commented Jul 1, 2025

Hi @ekyooo and @chenhengqi,

Thanks for the great feedback! I've made the following updates based on your suggestions:

  1. Switched to ksyms__load and ksyms__map_addr for symbol resolution in tcpdrop.c.
  2. Updated tcpdrop.bpf.c and tcpdrop.c to follow Linux kernel coding style.
  3. Improved IPv6 address handling with __u32 saddr_v6[4] and in6_u.u6_addr32 in both files.
  4. Removed bpf_printk debug statements from tcpdrop.bpf.c.
  5. Added /tcpdrop to .gitignore.
  6. Moved event struct to tcpdrop.h to avoid duplication.

Please take a look and let me know if there's anything else I can tweak!
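
To make items 3 and 6 concrete, the shared layout in tcpdrop.h looks roughly like this (a sketch for illustration; field names and ordering here are not a verbatim copy of the header):

struct event {
    __u64 timestamp;
    __u32 pid;
    __u32 drop_reason;
    __u32 ip_version;       /* 4 or 6 */
    __s32 stack_id;
    union {
        __u32 saddr_v4;
        __u32 saddr_v6[4];  /* item 3: IPv6 address as __u32[4] */
    };
    union {
        __u32 daddr_v4;
        __u32 daddr_v6[4];
    };
    __u16 sport;
    __u16 dport;
    __u8  state;            /* TCP state */
    char  comm[16];         /* TASK_COMM_LEN */
};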

@Amaindex
Contributor Author

Amaindex commented Jul 1, 2025

Hi @chenhengqi,

Regarding your suggestion to copy the reason enums from the kernel for tcpdrop: we previously used this approach in tcpdrop.py. However, recent experience shows these enums vary across kernel versions and distros, and that variation is easy to verify. So I think dynamic loading via parse_reason_enum is more robust. It might be good to update tcpdrop.py to match this approach for consistency. What do you think, or is there another way to handle this?

- Use ksyms__load and ksyms__map_addr for kernel symbol resolution.
- Follow Linux kernel coding style in tcpdrop.bpf.c and tcpdrop.c.
- Optimize IPv6 address handling with __u32 arrays and in6_u.u6_addr32.
- Remove bpf_printk debug statements from tcpdrop.bpf.c.
- Add /tcpdrop to .gitignore to exclude the binary.
- Define event struct in tcpdrop.h to prevent duplicate definitions.
- Check drop reason with bpf_core_field_exists in tcpdrop.bpf.c.

Signed-off-by: Zi Li <[email protected]>
Signed-off-by: Amaindex <[email protected]>
@chenhengqi
Collaborator

Hi @chenhengqi,

Regarding your suggestion to copy the reason enums from the kernel for tcpdrop: we previously used this approach in tcpdrop.py. However, recent experience shows these enums vary across kernel versions and distros, and that variation is easy to verify. So I think dynamic loading via parse_reason_enum is more robust. It might be good to update tcpdrop.py to match this approach for consistency. What do you think, or is there another way to handle this?

Do you have an example of these enums varying across kernel versions and distros?
We have enum skb_drop_reason in vmlinux.h.

@Amaindex
Contributor Author

Amaindex commented Jul 3, 2025

Hi @chenhengqi,
Regarding your suggestion to copy the reason enums from the kernel for tcpdrop: we previously used this approach in tcpdrop.py. However, recent experience shows these enums vary across kernel versions and distros, and that variation is easy to verify. So I think dynamic loading via parse_reason_enum is more robust. It might be good to update tcpdrop.py to match this approach for consistency. What do you think, or is there another way to handle this?

Do you have an example of these enums varying across kernel versions and distros? We have enum skb_drop_reason in vmlinux.h.

Take NETFILTER_DROP as an example. In kernel v5.15.186, as you can see in include/linux/skbuff.h, the skb_drop_reason enum lists NETFILTER_DROP as the 7th value (index 6):

enum skb_drop_reason {
    SKB_DROP_REASON_NOT_SPECIFIED,  /* 0 */
    SKB_DROP_REASON_NO_SOCKET,      /* 1 */
    SKB_DROP_REASON_PKT_TOO_SMALL,  /* 2 */
    SKB_DROP_REASON_TCP_CSUM,       /* 3 */
    SKB_DROP_REASON_SOCKET_FILTER,  /* 4 */
    SKB_DROP_REASON_UDP_CSUM,       /* 5 */
    SKB_DROP_REASON_NETFILTER_DROP, /* 6 */
    ...
};

This is reflected in the tracepoint format for /sys/kernel/debug/tracing/events/skb/kfree_skb/format, where NETFILTER_DROP is mapped to index 6 in the __print_symbolic output.

Now, fast forward to kernel v6.15.4, and things shift in include/net/dropreason-core.h. The skb_drop_reason enum has new entries, and NETFILTER_DROP moves to index 12:

enum skb_drop_reason {
    SKB_NOT_DROPPED_YET,           /* 0 */
    SKB_CONSUMED,                  /* 1 */
    SKB_DROP_REASON_NOT_SPECIFIED, /* 2 */
    SKB_DROP_REASON_NO_SOCKET,     /* 3 */
    SKB_DROP_REASON_SOCKET_CLOSE,  /* 4 */
    SKB_DROP_REASON_SOCKET_FILTER, /* 5 */
    SKB_DROP_REASON_SOCKET_RCVBUFF,/* 6 */
    SKB_DROP_REASON_UNIX_DISCONNECT,/* 7 */
    SKB_DROP_REASON_UNIX_SKIP_OOB, /* 8 */
    SKB_DROP_REASON_PKT_TOO_SMALL, /* 9 */
    SKB_DROP_REASON_TCP_CSUM,      /* 10 */
    SKB_DROP_REASON_UDP_CSUM,      /* 11 */
    SKB_DROP_REASON_NETFILTER_DROP,/* 12 */
    ...
};

The tracepoint format in v6.15.4 confirms this, with NETFILTER_DROP now at index 12 in the __print_symbolic output. This isn’t just a case of appending new values at the end—new entries like SKB_CONSUMED, SOCKET_CLOSE, SOCKET_RCVBUFF, etc., are inserted in the middle, shuffling the indices around.

Considering the skb_drop_reason index changes across kernel versions, parse_reason_enum for dynamic loading feels more adaptable than hardcoding the enums.
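
To illustrate the dynamic side: one way to resolve the names at runtime is to walk enum skb_drop_reason in the running kernel's BTF, roughly as below. This is only a sketch of the idea (the helper name is made up, and the actual parse_reason_enum may instead read the tracepoint format file):

#include <stdio.h>
#include <bpf/btf.h>

/* Sketch: map an skb_drop_reason value to its name via the kernel's BTF,
 * so no enum values need to be hardcoded at build time. */
static const char *drop_reason_name(const struct btf *btf, unsigned int reason)
{
    const struct btf_type *t;
    const struct btf_enum *e;
    int id, i;

    id = btf__find_by_name_kind(btf, "skb_drop_reason", BTF_KIND_ENUM);
    if (id < 0)
        return "UNKNOWN";

    t = btf__type_by_id(btf, id);
    e = btf_enum(t);
    for (i = 0; i < btf_vlen(t); i++) {
        if ((unsigned int)e[i].val == reason)
            return btf__name_by_offset(btf, e[i].name_off);
    }
    return "UNKNOWN";
}

int main(void)
{
    struct btf *btf = btf__load_vmlinux_btf();

    if (!btf)
        return 1;
    printf("%s\n", drop_reason_name(btf, 2));
    btf__free(btf);
    return 0;
}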

@chenhengqi
Collaborator

Considering the skb_drop_reason index changes across kernel versions, parse_reason_enum for dynamic loading feels more adaptable than hardcoding the enums.

Sounds reasonable. I am OK with this approach.

Remove print_drop_reasons function and replace its call with a warning message
in main when parse_reason_enum fails.

Signed-off-by: Zi Li <[email protected]>
Signed-off-by: Amaindex <[email protected]>
@Amaindex
Contributor Author

Hi @chenhengqi,
I’ve removed print_drop_reasons and added a warning for parse failures in tcpdrop.c as you suggested, and reordered headers in tcpdrop.bpf.c to avoid compilation issues. Let me know if it looks good to go!
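
On the header reordering: the include order now follows the usual pattern of putting vmlinux.h before the libbpf headers, roughly like this (the exact list is in the file):

#include "vmlinux.h"            /* kernel types; must come before the bpf headers */
#include <bpf/bpf_helpers.h>
#include <bpf/bpf_core_read.h>
#include <bpf/bpf_endian.h>
#include "tcpdrop.h"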

@chenhengqi
Collaborator

Some comments are not resolved, please check.

…cpdrop

Move ipv4_only, ipv6_only, and netns_id to rodata section for better memory
management. Optimize tcpdrop.bpf.c by declaring variables upfront and
reordering operations for clarity. Update event struct to place stack_id
correctly. Fix missing newlines at file ends.

Signed-off-by: Zi Li <[email protected]>
Signed-off-by: Amaindex <[email protected]>
@Amaindex
Contributor Author

Some comments are not resolved, please check.

Hi @chenhengqi,
My apologies, I just saw these comments and have pushed the corresponding fixes.
Thank you for the detailed feedback. I learned a lot from your suggestions, and the patch is much better for it.

Defer the BPF ring buffer event allocation in tcpdrop.bpf.c until all
preliminary checks have passed, reducing unnecessary discards and
improving performance. This ensures the event is only reserved when the
skb meets all processing conditions, minimizing resource waste.

Signed-off-by: Zi Li <[email protected]>
Signed-off-by: Amaindex <[email protected]>
@Amaindex
Contributor Author

Hi @chenhengqi,
Thanks for the feedback. I’ve delayed the event allocation to cut down on unnecessary discards. Also, I removed the ring buffer capacity check, a leftover from an earlier version that I’d overlooked, as it’s no longer needed.
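
The control flow now looks roughly like this (heavily simplified sketch with the header-parsing body elided, not the literal code):

#include "vmlinux.h"
#include <bpf/bpf_helpers.h>
#include "tcpdrop.h"    /* struct event shared with userspace */

struct {
    __uint(type, BPF_MAP_TYPE_RINGBUF);
    __uint(max_entries, 256 * 1024);
} events SEC(".maps");

SEC("tracepoint/skb/kfree_skb")
int tp__skb_free_skb(struct trace_event_raw_kfree_skb *args)
{
    struct sk_buff *skb = args->skbaddr;
    struct event *event;
    long err = 0;

    if (!skb)
        return 0;
    /* protocol / namespace filters run here, before anything is reserved */

    event = bpf_ringbuf_reserve(&events, sizeof(*event), 0);
    if (!event)
        return 0;

    /* ... parse IP/TCP headers into *event, setting err on a failed read ... */
    if (err) {
        bpf_ringbuf_discard(event, 0);
        return 0;
    }

    bpf_ringbuf_submit(event, 0);
    return 0;
}

char LICENSE[] SEC("license") = "GPL";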

Merge protocol validation and event population in tcpdrop.bpf.c for better
readability and efficiency. Remove braces from single-line if statements to
streamline code while preserving functionality.

Signed-off-by: Zi Li <[email protected]>
Signed-off-by: Amaindex <[email protected]>
@Amaindex
Contributor Author

Amaindex commented Aug 3, 2025

Hi @chenhengqi, appreciate the input! I split the code before to cut down on event discards, but it was probably overdone. I’ve merged the protocol checks and trimmed the single-line ifs for a cleaner approach. What do you think of this version?
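
For reference, the merged gate now reads roughly like this (paraphrased fragment, not the literal diff):

__u16 protocol = args->protocol;

/* reject anything that is neither IPv4 nor IPv6, then apply the
 * user's address-family filter in the same place */
if (protocol != ETH_P_IP && protocol != ETH_P_IPV6)
    return 0;
if (ipv4_only && protocol != ETH_P_IP)
    return 0;
if (ipv6_only && protocol != ETH_P_IPV6)
    return 0;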

@Amaindex
Contributor Author

Hi @chenhengqi, appreciate the input! I split the code before to cut down on event discards, but it was probably overdone. I’ve merged the protocol checks and trimmed the single-line ifs for a cleaner approach. What do you think of this version?

@chenhengqi Hi, hope all’s good! I replied to your last comments—any chance you could take a look or let me know if there’s more to tweak? Appreciate your time!

@chenhengqi
Collaborator

I got this locally:

Verifier logs
libbpf: prog 'tp__skb_free_skb': BPF program load failed: -EACCES
libbpf: prog 'tp__skb_free_skb': -- BEGIN PROG LOAD LOG --
Unrecognized arg#0 type PTR
; skb = args->skbaddr;
0: (79) r7 = *(u64 *)(r1 +8)
; if (!skb)
1: (15) if r7 == 0x0 goto pc+110
 R1=ctx(id=0,off=0,imm=0) R7_w=inv(id=0) R10=fp0
2: (b7) r2 = 1
; if (bpf_core_field_exists(args->reason))
3: (15) if r2 == 0x0 goto pc+3
last_idx 3 first_idx 0
regs=4 stack=0 before 2: (b7) r2 = 1
; if (args->reason <= SKB_DROP_REASON_NOT_SPECIFIED)
4: (61) r2 = *(u32 *)(r1 +28)
5: (b7) r3 = 3
; if (args->reason <= SKB_DROP_REASON_NOT_SPECIFIED)
6: (2d) if r3 > r2 goto pc+105
 R1=ctx(id=0,off=0,imm=0) R2_w=inv(id=0,umin_value=3,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R3_w=inv3 R7_w=inv(id=0) R10=fp0
; protocol = args->protocol;
7: (69) r8 = *(u16 *)(r1 +24)
; if (protocol != ETH_P_IP && protocol != ETH_P_IPV6)
8: (15) if r8 == 0x86dd goto pc+1
 R1=ctx(id=0,off=0,imm=0) R2=inv(id=0,umin_value=3,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R3=inv3 R7=inv(id=0) R8_w=inv(id=0,umax_value=65535,var_off=(0x0; 0xffff)) R10=fp0
9: (55) if r8 != 0x800 goto pc+102
 R1=ctx(id=0,off=0,imm=0) R2=inv(id=0,umin_value=3,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R3=inv3 R7=inv(id=0) R8_w=inv2048 R10=fp0
; if (ipv4_only && protocol != ETH_P_IP)
10: (18) r2 = 0xffffc900003c5000
12: (71) r2 = *(u8 *)(r2 +0)
 R1=ctx(id=0,off=0,imm=0) R2_w=map_value(id=0,off=0,ks=4,vs=8,imm=0) R3=inv3 R7=inv(id=0) R8_w=inv2048 R10=fp0
; if (ipv4_only && protocol != ETH_P_IP)
13: (15) if r8 == 0x800 goto pc+1
last_idx 13 first_idx 7
regs=100 stack=0 before 12: (71) r2 = *(u8 *)(r2 +0)
regs=100 stack=0 before 10: (18) r2 = 0xffffc900003c5000
regs=100 stack=0 before 9: (55) if r8 != 0x800 goto pc+102
regs=100 stack=0 before 8: (15) if r8 == 0x86dd goto pc+1
regs=100 stack=0 before 7: (69) r8 = *(u16 *)(r1 +24)
; if (ipv6_only && protocol != ETH_P_IPV6)
15: (18) r2 = 0xffffc900003c5001
17: (71) r2 = *(u8 *)(r2 +0)
 R1=ctx(id=0,off=0,imm=0) R2_w=map_value(id=0,off=1,ks=4,vs=8,imm=0) R3=inv3 R7=inv(id=0) R8_w=invP2048 R10=fp0
; if (ipv6_only && protocol != ETH_P_IPV6)
18: (15) if r8 == 0x86dd goto pc+1
19: (55) if r2 != 0x0 goto pc+92
last_idx 19 first_idx 18
regs=4 stack=0 before 18: (15) if r8 == 0x86dd goto pc+1
 R1=ctx(id=0,off=0,imm=0) R2_rw=invP0 R3=inv3 R7=inv(id=0) R8_rw=invP2048 R10=fp0
parent didn't have regs=4 stack=0 marks
last_idx 17 first_idx 7
regs=4 stack=0 before 17: (71) r2 = *(u8 *)(r2 +0)
20: (b7) r2 = 32
21: (bf) r3 = r7
22: (0f) r3 += r2
23: (bf) r2 = r10
; bpf_core_read(&sk, sizeof(sk), &skb->sk);
24: (07) r2 += -8
25: (bf) r6 = r1
26: (bf) r1 = r2
27: (b7) r2 = 8
28: (85) call bpf_probe_read_kernel#113
last_idx 28 first_idx 18
regs=4 stack=0 before 27: (b7) r2 = 8
29: (79) r3 = *(u64 *)(r10 -8)
; if (netns_id && sk) {
30: (18) r1 = 0xffffc900003c5004
32: (61) r1 = *(u32 *)(r1 +0)
 R0=inv(id=0) R1_w=map_value(id=0,off=4,ks=4,vs=8,imm=0) R3_w=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP2048 R10=fp0 fp-8=mmmmmmmm
; if (netns_id && sk) {
33: (15) if r1 == 0x0 goto pc+22
last_idx 33 first_idx 29
regs=2 stack=0 before 32: (61) r1 = *(u32 *)(r1 +0)
; if (inum != netns_id)
56: (b7) r1 = 208
57: (bf) r3 = r7
58: (0f) r3 += r1
59: (bf) r1 = r10
; if (bpf_core_read(&head, sizeof(head), &skb->head) ||
60: (07) r1 += -16
61: (b7) r2 = 8
62: (85) call bpf_probe_read_kernel#113
last_idx 62 first_idx 29
regs=4 stack=0 before 61: (b7) r2 = 8
; if (bpf_core_read(&head, sizeof(head), &skb->head) ||
63: (55) if r0 != 0x0 goto pc+48
 R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP2048 R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm
64: (b7) r1 = 196
65: (bf) r3 = r7
66: (0f) r3 += r1
67: (bf) r1 = r10
; bpf_core_read(&network_header, sizeof(network_header),
68: (07) r1 += -18
69: (b7) r2 = 2
70: (85) call bpf_probe_read_kernel#113
last_idx 70 first_idx 63
regs=4 stack=0 before 69: (b7) r2 = 2
; &skb->network_header) ||
71: (55) if r0 != 0x0 goto pc+40
 R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP2048 R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mm??????
72: (b7) r1 = 194
73: (0f) r7 += r1
74: (bf) r1 = r10
; bpf_core_read(&transport_header, sizeof(transport_header),
75: (07) r1 += -20
76: (b7) r2 = 2
77: (bf) r3 = r7
78: (85) call bpf_probe_read_kernel#113
last_idx 78 first_idx 71
regs=4 stack=0 before 77: (bf) r3 = r7
regs=4 stack=0 before 76: (b7) r2 = 2
; if (bpf_core_read(&head, sizeof(head), &skb->head) ||
79: (55) if r0 != 0x0 goto pc+32
 R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP2048 R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm????
; event = bpf_ringbuf_reserve(&events, sizeof(*event), 0);
80: (18) r1 = 0xffff888a8d1f5400
82: (b7) r2 = 80
83: (b7) r3 = 0
84: (85) call bpf_ringbuf_reserve#131
; if (!event)
85: (15) if r0 == 0x0 goto pc+26
 R0_w=mem(id=0,ref_obj_id=2,off=0,imm=0) R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP2048 R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? refs=2
86: (bf) r9 = r0
; 
87: (bf) r7 = r0
; 
88: (69) r1 = *(u16 *)(r10 -18)
89: (79) r3 = *(u64 *)(r10 -16)
90: (0f) r3 += r1
; if (protocol == ETH_P_IP) {
91: (55) if r8 != 0x800 goto pc+22
92: (bf) r1 = r10
; if (bpf_core_read(&ip, sizeof(ip), head + network_header) ||
93: (07) r1 += -80
94: (b7) r2 = 20
95: (85) call bpf_probe_read_kernel#113
last_idx 95 first_idx 91
regs=4 stack=0 before 94: (b7) r2 = 20
; if (bpf_core_read(&ip, sizeof(ip), head + network_header) ||
96: (55) if r0 != 0x0 goto pc+12
 R0_w=inv0 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm refs=2
97: (bf) r1 = r10
; ip.protocol != IPPROTO_TCP ||
98: (07) r1 += -80
99: (71) r1 = *(u8 *)(r1 +9)
; ip.protocol != IPPROTO_TCP ||
100: (55) if r1 != 0x6 goto pc+8
 R0=inv0 R1=inv6 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm refs=2
; bpf_core_read(&tcp, sizeof(tcp), head + transport_header)) {
101: (69) r1 = *(u16 *)(r10 -20)
102: (79) r3 = *(u64 *)(r10 -16)
103: (0f) r3 += r1
104: (bf) r1 = r10
105: (07) r1 += -100
106: (b7) r2 = 20
107: (85) call bpf_probe_read_kernel#113
last_idx 107 first_idx 100
regs=4 stack=0 before 106: (b7) r2 = 20
; if (bpf_core_read(&ip, sizeof(ip), head + network_header) ||
108: (15) if r0 == 0x0 goto pc+23
 R0=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; 
109: (bf) r1 = r7
110: (b7) r2 = 0
111: (85) call bpf_ringbuf_discard#133
; }
112: (b7) r0 = 0
113: (95) exit

from 108 to 132: R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; if (bpf_core_read(&ip6, sizeof(ip6), head + network_header) ||
132: (b7) r1 = 4
; event->ip_version = 4;
133: (63) *(u32 *)(r7 +16) = r1
R0=inv0 R1_w=inv4 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
134: (bf) r1 = r10
135: (07) r1 += -80
; event->saddr_v4 = ip.saddr;
136: (61) r2 = *(u32 *)(r1 +12)
; event->saddr_v4 = ip.saddr;
137: (63) *(u32 *)(r7 +24) = r2
R0=inv0 R1_w=fp-80 R2_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; event->daddr_v4 = ip.daddr;
138: (61) r1 = *(u32 *)(r1 +16)
; event->daddr_v4 = ip.daddr;
139: (63) *(u32 *)(r7 +40) = r1
R0=inv0 R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R2_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
140: (05) goto pc+19
; bpf_core_read(&event->daddr_v6, sizeof(event->daddr_v6),
160: (bf) r1 = r10
;
161: (07) r1 += -100
162: (69) r2 = *(u16 *)(r1 +0)
163: (dc) r2 = be16 r2
164: (6b) *(u16 *)(r7 +56) = r2
R0=inv0 R1_w=fp-100 R2_w=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
165: (b7) r2 = 2
166: (0f) r1 += r2
last_idx 166 first_idx 160
regs=4 stack=0 before 165: (b7) r2 = 2
167: (69) r1 = *(u16 *)(r1 +0)
168: (dc) r1 = be16 r1
169: (6b) *(u16 *)(r7 +58) = r1
R0=inv0 R1_w=inv(id=0) R2_w=invP2 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
170: (71) r1 = *(u8 *)(r10 -87)
171: (73) *(u8 *)(r7 +61) = r1
R0=inv0 R1_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) R2_w=invP2 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
172: (18) r1 = 0xffffffff
174: (b7) r2 = 1
; if (bpf_core_field_exists(args->reason))
175: (15) if r2 == 0x0 goto pc+1
last_idx 175 first_idx 160
regs=4 stack=0 before 174: (b7) r2 = 1
; event->drop_reason = args->reason;
176: (61) r1 = *(u32 *)(r6 +28)
;
177: (63) *(u32 *)(r7 +12) = r1
R0=inv0 R1_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R2_w=invP1 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; pid_tgid = bpf_get_current_pid_tgid();
178: (85) call bpf_get_current_pid_tgid#14
179: (bf) r8 = r0
; pid = pid_tgid >> 32;
180: (77) r8 >>= 32
; event->timestamp = bpf_ktime_get_ns();
181: (85) call bpf_ktime_get_ns#5
; event->pid = pid;
182: (63) *(u32 *)(r7 +8) = r8
R0_w=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; event->timestamp = bpf_ktime_get_ns();
183: (7b) *(u64 *)(r7 +0) = r0
R0_w=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
184: (bf) r1 = r9
; bpf_get_current_comm(&event->comm, sizeof(event->comm));
185: (07) r1 += 62
; bpf_get_current_comm(&event->comm, sizeof(event->comm));
186: (b7) r2 = 16
187: (85) call bpf_get_current_comm#16
R0_w=inv(id=0) R1_w=mem(id=0,ref_obj_id=2,off=62,imm=0) R2_w=inv16 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8_w=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
last_idx 187 first_idx 179
regs=4 stack=0 before 186: (b7) r2 = 16
; event->stack_id = bpf_get_stackid(args, &stack_traces, 0);
188: (bf) r1 = r6
189: (18) r2 = 0xffff888112188000
191: (b7) r3 = 0
192: (85) call bpf_get_stackid#27
193: (b7) r1 = 127
; event->state = 127;
194: (73) *(u8 *)(r7 +60) = r1
R0_w=inv(id=0) R1_w=inv127 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; event->stack_id = bpf_get_stackid(args, &stack_traces, 0);
195: (63) *(u32 *)(r7 +20) = r0
R0_w=inv(id=0) R1_w=inv127 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; if (sk)
196: (79) r3 = *(u64 *)(r10 -8)
; if (sk)
197: (15) if r3 == 0x0 goto pc+9
R0_w=inv(id=0) R1_w=inv127 R3_w=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
198: (b7) r1 = 18
199: (0f) r3 += r1
200: (bf) r1 = r10
; if (!bpf_core_read(&state, sizeof(state),
201: (07) r1 += -37
202: (b7) r2 = 1
203: (85) call bpf_probe_read_kernel#113
last_idx 203 first_idx 188
regs=4 stack=0 before 202: (b7) r2 = 1
; if (!bpf_core_read(&state, sizeof(state),
204: (55) if r0 != 0x0 goto pc+2
R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-40=????m??? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; event->state = state;
205: (71) r1 = *(u8 *)(r10 -37)
; event->state = state;
206: (73) *(u8 *)(r7 +60) = r1
R0=inv0 R1_w=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-40=????m??? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; bpf_ringbuf_submit(event, 0);
207: (bf) r1 = r7
208: (b7) r2 = 0
209: (85) call bpf_ringbuf_submit#132
210: (05) goto pc-99
; }
112: (b7) r0 = 0
113: (95) exit

from 204 to 207: R0=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=inv(id=0,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-40=????m??? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=2
; bpf_ringbuf_submit(event, 0);
207: (bf) r1 = r7
208: (b7) r2 = 0
209: (85) call bpf_ringbuf_submit#132
210: (05) goto pc-99
; }
112: (b7) r0 = 0
113: (95) exit

from 197 to 207: safe

from 100 to 109: R0=inv0 R1=inv(id=0,umax_value=255,var_off=(0x0; 0xff)) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm refs=2
;
109: (bf) r1 = r7
110: (b7) r2 = 0
111: (85) call bpf_ringbuf_discard#133
; }
112: (b7) r0 = 0
113: (95) exit

from 96 to 109: R0_w=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=2,off=0,imm=0) R8=invP2048 R9=mem(id=0,ref_obj_id=2,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-64=????mmmm fp-72=mmmmmmmm fp-80=mmmmmmmm refs=2
;
109: (bf) r1 = r7
110: (b7) r2 = 0
111: (85) call bpf_ringbuf_discard#133
112: safe

from 85 to 112: safe

from 79 to 112: safe

from 71 to 112: safe

from 63 to 112: safe

from 9 to 112: safe

from 8 to 10: R1=ctx(id=0,off=0,imm=0) R2=inv(id=0,umin_value=3,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R3=inv3 R7=inv(id=0) R8=inv34525 R10=fp0
; if (ipv4_only && protocol != ETH_P_IP)
10: (18) r2 = 0xffffc900003c5000
12: (71) r2 = *(u8 *)(r2 +0)
R1=ctx(id=0,off=0,imm=0) R2_w=map_value(id=0,off=0,ks=4,vs=8,imm=0) R3=inv3 R7=inv(id=0) R8=inv34525 R10=fp0
; if (ipv4_only && protocol != ETH_P_IP)
13: (15) if r8 == 0x800 goto pc+1
last_idx 13 first_idx 10
regs=100 stack=0 before 12: (71) r2 = *(u8 *)(r2 +0)
regs=100 stack=0 before 10: (18) r2 = 0xffffc900003c5000
R1=ctx(id=0,off=0,imm=0) R2=inv(id=0,umin_value=3,umax_value=4294967295,var_off=(0x0; 0xffffffff)) R3=inv3 R7=inv(id=0) R8_rw=invP34525 R10=fp0
parent didn't have regs=100 stack=0 marks
last_idx 8 first_idx 7
regs=100 stack=0 before 8: (15) if r8 == 0x86dd goto pc+1
regs=100 stack=0 before 7: (69) r8 = *(u16 *)(r1 +24)
14: (55) if r2 != 0x0 goto pc+97
last_idx 14 first_idx 10
regs=4 stack=0 before 13: (15) if r8 == 0x800 goto pc+1
regs=4 stack=0 before 12: (71) r2 = *(u8 *)(r2 +0)
; if (ipv6_only && protocol != ETH_P_IPV6)
15: (18) r2 = 0xffffc900003c5001
17: (71) r2 = *(u8 *)(r2 +0)
R1=ctx(id=0,off=0,imm=0) R2_w=map_value(id=0,off=1,ks=4,vs=8,imm=0) R3=inv3 R7=inv(id=0) R8=invP34525 R10=fp0
; if (ipv6_only && protocol != ETH_P_IPV6)
18: (15) if r8 == 0x86dd goto pc+1
20: (b7) r2 = 32
21: (bf) r3 = r7
22: (0f) r3 += r2
23: (bf) r2 = r10
; bpf_core_read(&sk, sizeof(sk), &skb->sk);
24: (07) r2 += -8
25: (bf) r6 = r1
26: (bf) r1 = r2
27: (b7) r2 = 8
28: (85) call bpf_probe_read_kernel#113
last_idx 28 first_idx 10
regs=4 stack=0 before 27: (b7) r2 = 8
29: (79) r3 = *(u64 *)(r10 -8)
; if (netns_id && sk) {
30: (18) r1 = 0xffffc900003c5004
32: (61) r1 = *(u32 *)(r1 +0)
R0=inv(id=0) R1_w=map_value(id=0,off=4,ks=4,vs=8,imm=0) R3_w=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP34525 R10=fp0 fp-8=mmmmmmmm
; if (netns_id && sk) {
33: (15) if r1 == 0x0 goto pc+22
last_idx 33 first_idx 29
regs=2 stack=0 before 32: (61) r1 = *(u32 *)(r1 +0)
; if (inum != netns_id)
56: (b7) r1 = 208
57: (bf) r3 = r7
58: (0f) r3 += r1
59: (bf) r1 = r10
; if (bpf_core_read(&head, sizeof(head), &skb->head) ||
60: (07) r1 += -16
61: (b7) r2 = 8
62: (85) call bpf_probe_read_kernel#113
last_idx 62 first_idx 29
regs=4 stack=0 before 61: (b7) r2 = 8
; if (bpf_core_read(&head, sizeof(head), &skb->head) ||
63: (55) if r0 != 0x0 goto pc+48
R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP34525 R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm
64: (b7) r1 = 196
65: (bf) r3 = r7
66: (0f) r3 += r1
67: (bf) r1 = r10
; bpf_core_read(&network_header, sizeof(network_header),
68: (07) r1 += -18
69: (b7) r2 = 2
70: (85) call bpf_probe_read_kernel#113
last_idx 70 first_idx 63
regs=4 stack=0 before 69: (b7) r2 = 2
; &skb->network_header) ||
71: (55) if r0 != 0x0 goto pc+40
R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP34525 R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mm??????
72: (b7) r1 = 194
73: (0f) r7 += r1
74: (bf) r1 = r10
; bpf_core_read(&transport_header, sizeof(transport_header),
75: (07) r1 += -20
76: (b7) r2 = 2
77: (bf) r3 = r7
78: (85) call bpf_probe_read_kernel#113
last_idx 78 first_idx 71
regs=4 stack=0 before 77: (bf) r3 = r7
regs=4 stack=0 before 76: (b7) r2 = 2
; if (bpf_core_read(&head, sizeof(head), &skb->head) ||
79: (55) if r0 != 0x0 goto pc+32
R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP34525 R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm????
; event = bpf_ringbuf_reserve(&events, sizeof(*event), 0);
80: (18) r1 = 0xffff888a8d1f5400
82: (b7) r2 = 80
83: (b7) r3 = 0
84: (85) call bpf_ringbuf_reserve#131
; if (!event)
85: (15) if r0 == 0x0 goto pc+26
R0_w=mem(id=0,ref_obj_id=4,off=0,imm=0) R6=ctx(id=0,off=0,imm=0) R7=inv(id=0) R8=invP34525 R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? refs=4
86: (bf) r9 = r0
;
87: (bf) r7 = r0
;
88: (69) r1 = *(u16 *)(r10 -18)
89: (79) r3 = *(u64 *)(r10 -16)
90: (0f) r3 += r1
; if (protocol == ETH_P_IP) {
91: (55) if r8 != 0x800 goto pc+22
; }
114: (bf) r1 = r10
; if (bpf_core_read(&ip6, sizeof(ip6), head + network_header) ||
115: (07) r1 += -80
116: (b7) r2 = 40
117: (85) call bpf_probe_read_kernel#113
last_idx 117 first_idx 91
regs=4 stack=0 before 116: (b7) r2 = 40
; if (bpf_core_read(&ip6, sizeof(ip6), head + network_header) ||
118: (55) if r0 != 0x0 goto pc+12
R0_w=inv0 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=4,off=0,imm=0) R8=invP34525 R9=mem(id=0,ref_obj_id=4,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-48=mmmmmmmm fp-56=mmmmmmmm fp-64=mmmmmmmm fp-72=mmmmmmmm fp-80=mmmmmmmm refs=4
119: (bf) r1 = r10
; ip6.nexthdr != IPPROTO_TCP ||
120: (07) r1 += -80
121: (71) r1 = *(u8 *)(r1 +6)
; ip6.nexthdr != IPPROTO_TCP ||
122: (55) if r1 != 0x6 goto pc+8
R0=inv0 R1=inv6 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=4,off=0,imm=0) R8=invP34525 R9=mem(id=0,ref_obj_id=4,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-48=mmmmmmmm fp-56=mmmmmmmm fp-64=mmmmmmmm fp-72=mmmmmmmm fp-80=mmmmmmmm refs=4
; bpf_core_read(&tcp, sizeof(tcp), head + transport_header)) {
123: (69) r1 = *(u16 *)(r10 -20)
124: (79) r3 = *(u64 *)(r10 -16)
125: (0f) r3 += r1
126: (bf) r1 = r10
127: (07) r1 += -100
128: (b7) r2 = 20
129: (85) call bpf_probe_read_kernel#113
last_idx 129 first_idx 122
regs=4 stack=0 before 128: (b7) r2 = 20
; if (bpf_core_read(&ip6, sizeof(ip6), head + network_header) ||
130: (15) if r0 == 0x0 goto pc+10
R0=inv(id=0) R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=4,off=0,imm=0) R8=invP34525 R9=mem(id=0,ref_obj_id=4,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-48=mmmmmmmm fp-56=mmmmmmmm fp-64=mmmmmmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=4
131: (05) goto pc-23
;
109: (bf) r1 = r7
110: (b7) r2 = 0
111: (85) call bpf_ringbuf_discard#133
112: safe

from 130 to 141: R0=inv0 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=4,off=0,imm=0) R8=invP34525 R9=mem(id=0,ref_obj_id=4,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-48=mmmmmmmm fp-56=mmmmmmmm fp-64=mmmmmmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=4
; event->daddr_v4 = ip.daddr;
141: (b7) r1 = 6
; event->ip_version = 6;
142: (63) *(u32 *)(r7 +16) = r1
R0=inv0 R1_w=inv6 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=4,off=0,imm=0) R8=invP34525 R9=mem(id=0,ref_obj_id=4,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-48=mmmmmmmm fp-56=mmmmmmmm fp-64=mmmmmmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? refs=4
143: (b7) r1 = 8
144: (bf) r8 = r10
145: (07) r8 += -80
146: (bf) r3 = r8
147: (0f) r3 += r1
last_idx 147 first_idx 130
regs=2 stack=0 before 146: (bf) r3 = r8
regs=2 stack=0 before 145: (07) r8 += -80
regs=2 stack=0 before 144: (bf) r8 = r10
regs=2 stack=0 before 143: (b7) r1 = 8
148: (bf) r1 = r9
; bpf_core_read(&event->saddr_v6, sizeof(event->saddr_v6),
149: (7b) *(u64 *)(r10 -112) = r1
150: (07) r1 += 24
151: (b7) r2 = 16
152: (85) call bpf_probe_read_kernel#113
R0=inv0 R1_w=mem(id=0,ref_obj_id=4,off=24,imm=0) R2_w=inv16 R3_w=fp-72 R6=ctx(id=0,off=0,imm=0) R7=mem(id=0,ref_obj_id=4,off=0,imm=0) R8_w=fp-80 R9=mem(id=0,ref_obj_id=4,off=0,imm=0) R10=fp0 fp-8=mmmmmmmm fp-16=mmmmmmmm fp-24=mmmm???? fp-48=mmmmmmmm fp-56=mmmmmmmm fp-64=mmmmmmmm fp-72=mmmmmmmm fp-80=mmmmmmmm fp-88=mmmmmmmm fp-96=mmmmmmmm fp-104=mmmm???? fp-112_w=mmmmmmmm refs=4
last_idx 152 first_idx 130
regs=4 stack=0 before 151: (b7) r2 = 16
153: (b7) r1 = 24
154: (0f) r8 += r1
last_idx 154 first_idx 153
regs=2 stack=0 before 153: (b7) r1 = 24
; bpf_core_read(&event->daddr_v6, sizeof(event->daddr_v6),
155: (79) r1 = *(u64 *)(r10 -112)
156: (07) r1 += 40
157: (b7) r2 = 16
158: (bf) r3 = r8
159: (85) call bpf_probe_read_kernel#113
R1 type=inv expected=fp
processed 264 insns (limit 1000000) max_states_per_insn 1 total_states 24 peak_states 24 mark_read 9
-- END PROG LOAD LOG --
libbpf: prog 'tp__skb_free_skb': failed to load: -EACCES
libbpf: failed to load object 'tcpdrop_bpf'
libbpf: failed to load BPF skeleton 'tcpdrop_bpf': -EACCES
Failed to load BPF skeleton: -13

if (ipv6_only && protocol != ETH_P_IPV6)
    return 0;

bpf_core_read(&sk, sizeof(sk), &skb->sk);
Collaborator

Typically, we use BPF_CORE_READ and BPF_CORE_READ_INTO instead.
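
e.g. roughly:

/* pointer chase with the CO-RE macro */
struct sock *sk = BPF_CORE_READ(skb, sk);

/* or read a field straight into a local variable */
__u16 network_header;
BPF_CORE_READ_INTO(&network_header, skb, network_header);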

const volatile __u32 netns_id = 0;

SEC("tracepoint/skb/kfree_skb")
int tp__skb_free_skb(struct trace_event_raw_kfree_skb *args)
Collaborator

Use a raw tracepoint instead?
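
i.e. the BTF-enabled raw tracepoint form, something along these lines (untested sketch):

SEC("tp_btf/kfree_skb")
int BPF_PROG(kfree_skb, struct sk_buff *skb, void *location,
             enum skb_drop_reason reason)
{
    /* needs <bpf/bpf_tracing.h> for BPF_PROG; skb and reason arrive
     * as typed arguments here */
    return 0;
}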

Replace bpf_core_read with BPF_CORE_READ for CO-RE compatibility and use
__builtin_memcpy for IPv6 addresses to fix verifier type mismatch (R1 type=inv
expected=fp). Simplify sk and state reads for clarity and efficiency.

Signed-off-by: Zi Li <[email protected]>
Signed-off-by: Amaindex <[email protected]>
@Amaindex
Contributor Author

Hi @chenhengqi, thank you for your guidance and feedback! I’ve replaced bpf_core_read with BPF_CORE_READ and used __builtin_memcpy for IPv6 addresses to resolve the verifier issue. Also simplified sk and state reads. Does this address the concerns?
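For context, the IPv6 part now reads the header into a stack copy first and then memcpy's it into the event, roughly (paraphrased, not the exact diff):

struct ipv6hdr ip6;

if (bpf_probe_read_kernel(&ip6, sizeof(ip6), head + network_header))
    return 0; /* the real code discards the reserved event first */
/* plain memcpy into the ring-buffer event, so the event pointer is
 * never handed to a probe-read helper as a destination */
__builtin_memcpy(event->saddr_v6, &ip6.saddr, sizeof(event->saddr_v6));
__builtin_memcpy(event->daddr_v6, &ip6.daddr, sizeof(event->daddr_v6));
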
Regarding the raw tracepoint suggestion, I gave it a try, but found that parsing the raw context added complexity with manual argument handling and potential kernel version compatibility issues. Unless there’s a specific benefit to switching, I’d lean toward keeping the current tracepoint for simplicity and portability. Thoughts?
