
When eBPF is enabled and an IP pool's natOutgoing is set to false, connections to the Kubernetes API through the service IP start timing out #8829

Open
tomastigera opened this issue May 15, 2024 · 3 comments
Labels: area/bpf (eBPF Dataplane issues), kind/support

Comments


tomastigera commented May 15, 2024

Noticed one different behavior when eBPF is enabled: when an IP pool's natOutgoing is set to false, connections to the Kubernetes API through the service IP start timing out from pods that want to use it, since there is no iptables rule sending the service IP out to the correct node.

How does this work in eBPF mode?

I am forced to enable outgoing NAT to work around this. Is this a bug or expected behavior?

Originally posted by @ehsan310 in #8812 (comment)

tomastigera added the kind/support and area/bpf (eBPF Dataplane issues) labels on May 15, 2024
@tomastigera

@ehsan310 Are your API servers on a different network, not reachable without MASQ or an overlay?

since there is no iptables rule sending the service IP out to the correct node.

The destination IP is changed to whatever the service translates to, and then the packet is sent out; that destination must be routable. It could be that kube-proxy does the MASQ on its own based on your cluster configuration, i.e. it knows that the traffic is leaving the pod network, and Calico eBPF does not do it because it does not have that configuration; natOutgoing serves that purpose.
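
For completeness, natOutgoing is a per-IP-pool setting. A minimal sketch of enabling it, assuming a stock pool name and CIDR (both placeholders, not taken from this thread):

```yaml
# Hypothetical IPPool manifest; the pool name and CIDR are placeholders.
apiVersion: projectcalico.org/v3
kind: IPPool
metadata:
  name: default-ipv4-ippool
spec:
  cidr: 192.168.0.0/16
  # With natOutgoing: true, traffic from pods in this pool to destinations
  # outside all Calico IP pools is SNATed to the node address, standing in
  # for the MASQ rule that kube-proxy would otherwise provide.
  natOutgoing: true
```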

ehsan310 commented May 16, 2024

The service IPs are advertised via BGP by Calico; there is no overlay network, and I have flat networking peered with the ToR.
Also, kube-proxy is disabled and its DaemonSet removed, and all old iptables rules have been removed as well, so it's only Calico handling the traffic.
Node-to-node mesh is also disabled.

That also makes sense based on what you said, because we have separate networks for management and pod/service traffic, and the Kubernetes API listens on the management network, so this is expected.
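
As a side note, the setup described above (node-to-node mesh disabled, service IPs advertised over BGP) is typically expressed through a Calico BGPConfiguration. A minimal sketch, assuming the common 10.96.0.0/12 service CIDR; the actual CIDR and ToR peering details for this cluster are not given in the thread:

```yaml
# Hypothetical BGPConfiguration; the service CIDR is an assumed default.
apiVersion: projectcalico.org/v3
kind: BGPConfiguration
metadata:
  name: default
spec:
  # Disable the full node-to-node mesh; each node peers with the ToR instead.
  nodeToNodeMeshEnabled: false
  # Advertise service cluster IPs over BGP so the ToR can route them.
  serviceClusterIPs:
    - cidr: 10.96.0.0/12
```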

@ehsan310
I also hit this issue: #5039
