Support IP fragmentation in eBPF #8821
Comments
That is a correct observation. Unfortunately, the eBPF dataplane does not support IP fragmentation, as only the first fragment contains the UDP ports. The subsequent fragments cannot be matched reliably with the ongoing flow, and we cannot easily reassemble the fragments in eBPF (that is a limitation of the technology). That said, we might consider some improvements/workarounds in a future release.
@tomastigera Wow, thanks for the quick reply! Very interesting. Looks like I have some homework regarding eBPF APIs. It would probably save folks some time to add this to the eBPF docs for Calico.
Related: cilium/cilium#25709 (comment)
Thanks for the pointer. The problem with kfuncs is that they exist only in newer kernels and are not necessarily a stable API. But we could perhaps add it for kernels that have that feature! 👍
Seems like the patch ⬆️ is not present in any released kernel :( |
I'm also facing this issue. In my case I noticed that the error only happens when the target is a service IP; if I test from a pod to a pod IP, it works. Would that make sense?
@diogenxs Do you have a different MTU on the pod-pod path than on the "default" route? That is probably what decides the MTU for the service path (larger). Do you use an overlay (VXLAN)? What is the MTU on your devices?
Expected Behavior
UDP packet fragments destined for a pod's IP that are not denied by policy arrive on the pod's interface.
Current Behavior
The eBPF data plane appears to be dropping UDP packet fragments by policy. The initial fragment is correctly forwarded from the node interface to the pod interface, but subsequent fragments do not appear on the pod's interface. When a UDP packet fragment is dropped, Calico's dropped-by-policy counter for the interface is incremented. The pod interface eventually responds with "fragment reassembly time exceeded".
The only policies I have defined are Kubernetes network policies. This problem does not occur when using the iptables data plane.
Possible Solution
No idea. There may be a bug in Calico's eBPF policy code.
Steps to Reproduce (for bugs)
Context
I experienced this behavior after migrating from the iptables data plane to the eBPF data plane. All SNMP responses exceeding the network's MTU caused my SNMP collector to time out. I used captures from various points to determine where the packets were being dropped.
Your Environment