Container networking on Kubernetes broken after Server 2022 July 2024 / KB5040437 (OS Build 20348.2582) update #516
Comments
@jsturtevant who saw this in kubernetes/test-infra#33042
I'm having issues with my AKS testing with the latest images. It looks to be related.
Folks over at Calico pointed back to this issue, suggesting the problem isn't at Calico's end (projectcalico/calico#9019 (comment)). Currently our workers can't have up-to-date security patches because of this. I noticed the ADO label, so I am hoping we will have some update soon 🤞
We are having this exact same issue with our Windows deployments, which use mcr.microsoft.com/dotnet/framework/aspnet:4.8.1-windowsservercore-ltsc2022. Any update or suggestion on a fix would be greatly appreciated.
@kysu1313 Do you have the Windows patch KB5040437 installed too?
I have the same issue. After uninstalling KB5040437, network connectivity is re-established.
@ntrappe-msft Can you confirm if we can expect a fix in the August CUs? We have been holding off upgrading to the July CU, but leaving our cluster unpatched for two or more months raises security concerns.
@avin3sh We're getting this assigned to an engineer right now. Once we do that, they can inform everyone of what the timeline looks like.
Three weeks have gone by, and we not only don't have a fix for a bug that makes Windows containers unusable, we don't even have a timeline. That looks very strange.
@Nova-Logic Sorry for the delay, we know this is a big blocker. We've switched it to a new engineer and should have an update to provide next week.
@avin3sh how are you uninstalling KB5040437? I received an error when attempting to uninstall. I also have the General failure ping errors on a fully updated Windows Server 2019 as well. Calico seems completely broken for Windows in general right now.
No mention of this issue in today's patches. I am guessing this was not addressed?
How is this still not fixed? We can't update any of our Windows nodes, as the patch can't even be uninstalled.
I just tried and can confirm the August patch (KB5041160) does not fix the issue. The patch contains fixes for Important CVEs, which leaves our cluster potentially vulnerable if not patched. @ntrappe-msft I appreciate an engineer is already assigned to this issue, but is it possible for us to get some update on the fix?
We are coming to the end of another week; can we please have the update we were promised?
Unfortunately, I don't have news to share yet of a fix. We're waiting on a response from the engineer assigned. We'll bump this issue up in priority.
Any update? At least a rough estimate or schedule? Currently, Kubernetes Windows container networking is simply broken and unusable. We will soon be forced to terminate all our Windows nodes, as we can't patch them anymore due to this issue.
We are a large customer of Windows Containers and are deeply concerned that this issue remains unresolved. Neither the July nor August security updates even acknowledge this issue under the "Known issues in this update" section. We are curious what criteria a Containers issue must meet to warrant expedited support and official mention in monthly updates. Does "everything about container networking is broken after July" not meet these criteria? The support on this problem so far has raised several internal questions about the stability of Windows Containers as a platform. The way Microsoft handles this problem will dictate how seriously we can take Windows Containers for any initiatives going forward.
It's really sad, but I believe we should admit this: it's hard to damage a product's reputation more than Microsoft has here. They released a CU that broke container networking and then ghosted customers for more than a month. Microsoft didn't even bother (or possibly still isn't fully aware of the problem enough) to mention the issue under known problems. We (I mean the community) can try to test whether Microsoft cares about this product by spreading this insane story across dev/devops/tech bloggers and watching the reaction.
As we head into another week, do we have any new update? As we inch closer to next month's patches, the growing uncertainty about the fix means we will have to force the hosts to update anyway and look at alternatives for hosting the workloads; we can't leave the Windows workers unpatched for three months in a row. All of this tedious extra work could be avoided, or at least planned better, if there were some transparency on how the Windows Containers team is planning to tackle this issue. If this issue is affecting even the official sig-windows Kubernetes e2e tests, not prioritizing it paints a very bad picture of Windows Containers as a product, for both existing and potential customers. I tried some experimentation with Docker Swarm with overlay networking but couldn't reproduce this specific scenario, which seems to suggest the issue might be specific to the encapsulation mode or to ACLs on HNS Endpoints. But again, my guess is as good as anyone else's, and without some insight into the issue from the product team it is difficult to even think of a workaround.
27 August, still no fix.
I apologize for my ignorance, but I'd really appreciate it if someone here in the community could clarify the nature and scope of this issue for me. My understanding from the thread above is that Microsoft's July update for Windows Server 2022 has somehow borked networking for Windows pods/containers deployed to Kubernetes nodes running that version of Windows Server. However, do we know the extent to which the various local/cloud flavours of Kubernetes environments might be affected? For example, has anyone observed this same behaviour when using the latest versions of the Amazon "Kubernetes optimized AMIs" in EKS, or similar counterparts in AKS? As for what might be causing the issue, I wonder if there is potential for an underlying dependency issue with the versions of the tools used to build the Windows container images themselves, for example the version/patching of the Windows base image the container is built from? Regardless, the apparent lack of any cogent response from Microsoft is definitely disquieting.
Hi @grcusanz, are you in a position to better describe the exact nature and scope of the problem as you understand it at this time? For example, is it limited to HNS implementations as some have posited above, or is CNI impacted too?
Hi everyone, please follow these steps and comment to let me know if it resolves the issue with the July or August update installed.
Name : FwPerfImprovementChange
CAUTION! Network connectivity will be lost to all containers on the node during an HNS restart! Container networking should automatically recover. Please report back if you have a different experience.
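The full command list from the original comment did not survive here, only the value name did. Based on what is discussed later in this thread (a registry value read by the HNS service, set to 0 to disable the change, followed by an HNS restart), a minimal sketch of the workaround might look like the following. The exact registry path is an assumption; verify it against the original steps before applying:

```shell
# ASSUMED path: the FwPerfImprovementChange DWORD under the HNS State key.
# Setting it to 0 disables the firewall performance change (per this thread).
reg add "HKLM\SYSTEM\CurrentControlSet\Services\hns\State" /v FwPerfImprovementChange /t REG_DWORD /d 0 /f

# Restart HNS so the value is re-read (PowerShell). Container networking
# drops briefly on the node and is rebuilt automatically.
Restart-Service hns -Force
```

Run from an elevated prompt; as noted above, expect a brief loss of container connectivity while HNS restarts.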
@JamesKehr at this moment it looks like it helped; I'll continue testing over the weekend and post a follow-up on Monday.
Thank you for the confirmation, @Nova-Logic! Please let me know if the status changes.
Thanks James for identifying and sharing the workaround! The initial fix that caused this was implemented to resolve a customer issue with Calico network policy at scale. It shipped in April, disabled by default in Windows, but was enabled by default for AKS nodes. There were no issues that we were aware of with this fix in AKS. Following our standard process, it then became enabled by default in Windows in July. James's workaround is the first step; we're now investigating the root cause of why this fix broke networking in July and will report back here when we have more info, and again when we have a permanent fix available.
Thanks for sharing the background @grcusanz.
This seems to suggest there are gaps somewhere in the test/release process. Given the scale of impact a simple change like this had, would the team be open to covering the various common configurations mentioned throughout this issue, since these seem to be popular with Windows Containers customers beyond the standard AKS setup with Azure CNI? It looks like networking tests covering Calico VXLAN/overlay may have helped identify this problem early on and prevented the change from going into the monthly patches.
Can I do this before updating, or will this be overwritten by the update?
As the key does not exist after updating, I strongly suspect it will not be overwritten. So yes, I guess. But to be sure, just check after updating whether it is still 0 🤷
@doctorpangloss you can safely add the registry value prior to updating. The default value applies only when the registry value is not present; a present registry value will always take precedence over the default. @wech71 Spot on!
I updated the steps to include a no-reboot option. The registry value is read during the start of the HNS service. Restarting the HNS service will cause the changed value to be read and container networking to be rebuilt. CAUTION! Network connectivity will be lost to all containers on the node during the HNS restart! Container networking should automatically recover. Please report back if you have a different experience.
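For anyone who wants to confirm the value survived a cumulative update (as suggested above), a quick check might look like this. The registry path is the one assumed from this thread, not an official reference:

```shell
# Query the value after patching (assumed path per this thread).
reg query "HKLM\SYSTEM\CurrentControlSet\Services\hns\State" /v FwPerfImprovementChange

# Or the PowerShell equivalent:
Get-ItemProperty "HKLM:\SYSTEM\CurrentControlSet\Services\hns\State" -Name FwPerfImprovementChange
```

If the value still reads 0, the workaround is still in effect and no further action should be needed.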
Thank you @JamesKehr for the workaround.
@aaabdallah Thank you for the confirmation and the PowerShell commands!
October CU confirms that this is fixed:
Is it safe to not proactively apply the workaround when adding new worker nodes or rebuilding existing ones?
Hi, thanks for asking a follow-up question. We're currently waiting on a response from the responsible team.
Use master hashrel build for Win FVs. Use k8s and kind versions from metadata.mk in Win FVs. Extract latest KUBE_VERSION from az images to use in capz cluster (as they might not exactly match the versions from metadata.mk). Bump capz versions. Add node IP bootstrapping on k8s v1.29+ (as kubelet no longer sets node IPs on external cloud-providers). Change generated ssh/scp helpers to use full node IPs. Enable felix debug logging and collect pod logs at the end of tests. Add more logging on powershell commands in windows policy_test.go Add workaround for microsoft/Windows-Containers#516 to CAPZ Win FVs. Disable Felix CAPZ Windows FVs temporarily.
This issue has been open for 30 days with no updates.
Describe the bug
Pod networking breaks after installing the July CU on Windows Server 2022. For example, ping microsoft.com from within the container returns General failure. The pod is not reachable from other pods or through a Service. Uninstalling KB5040437 fixes the issue.

To Reproduce
Expected behavior
The pod should be able to reach the external network, and should also be reachable from other pods.
Configuration:
/label Windows on Kubernetes