
Kubernetes Egress #198

Open
sarnowski opened this issue Dec 14, 2016 · 12 comments

@sarnowski

sarnowski commented Dec 14, 2016

The goal is to know where network connections leaving the cluster are going and to authorize them in advance. All connections that leave the cluster network need to be whitelisted. For that, I propose an Egress resource to specify the whitelist.

Requirements

  • No network traffic going out of the cluster by default.
  • Explicit whitelisting possible for the user.
  • Explicit whitelisting possible for system bootstrapping (like Docker registry and OAuth).

Not a requirement:

  • Fine-grained rules per pod. Whitelisting is only done on the cluster level.

Example specification

apiVersion: "zalando.org/v1"
kind: EgressRuleSet
metadata:
   name: my-app-targets
spec:
   targets:
   - mydependency1.example.com:443
   - mydependency2.example.com:443
   - "*.example.org:80"

In some cases, pinning the whitelist down to predetermined domains doesn't work. Examples would be crawlers or applications that need to react to user input, such as webhooks. In this case, one needs a switch to allow everything:

apiVersion: "zalando.org/v1"
kind: EgressRuleSet
metadata:
   name: world-access
spec:
   targets:
   - "*:80"

Multiple rule sets can exist at the same time, and the union of their targets determines the whitelist for the whole cluster.

Wildcarding everything is okay, as it's still an explicit choice that can be checked during deployment. Wildcarding ports should be discouraged (and perhaps not even implemented) unless we find a valid use case where it is not purely detrimental to security.

Example integration

Since this would probably be implemented as an HTTP proxy, the integration pattern should be that the standard http_proxy environment variable is set by default in every container that starts, without the user having to specify it.
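A minimal sketch of what such injected defaults could look like on a single pod, assuming a hypothetical in-cluster proxy service at egress-proxy.kube-system (the service name, port, image, and no_proxy list are placeholders; in the proposal these variables would be injected automatically rather than written by the user):

apiVersion: v1
kind: Pod
metadata:
   name: my-app
spec:
   containers:
   - name: my-app
     image: my-app:latest   # placeholder image
     env:
     # hypothetical proxy address; injected by the platform, not by the user
     - name: http_proxy
       value: "http://egress-proxy.kube-system.svc.cluster.local:3128"
     - name: https_proxy
       value: "http://egress-proxy.kube-system.svc.cluster.local:3128"
     - name: no_proxy
       value: ".cluster.local,10.0.0.0/8"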

Example implementation

One should set up an HA/scalable HTTP proxy like Squid. In addition, an egress-controller should watch the EgressRuleSet resources and reconfigure Squid accordingly.

The AWS Security Group of all Kubernetes nodes would not have the default "allow outbound" rule, so that all outgoing traffic is dropped. The HTTP proxy would need to run outside of that security group in some kind of "DMZ" setup (like the ALBs and ELBs) that Kubernetes nodes can reach. The HTTP proxy server itself then has full outbound rules in its Security Group.
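For the egress-controller to observe EgressRuleSet resources, the type first has to be registered with the API server. A minimal sketch using a CustomResourceDefinition (at the time of writing this would have been a ThirdPartyResource; the schema below is an assumption derived from the example spec above):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: egressrulesets.zalando.org
spec:
  group: zalando.org
  scope: Cluster            # matches the "cluster level only" requirement
  names:
    kind: EgressRuleSet
    plural: egressrulesets
    singular: egressruleset
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:
        type: object
        properties:
          spec:
            type: object
            properties:
              targets:
                type: array
                items:
                  type: string   # e.g. "mydependency1.example.com:443"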

@sarnowski
Author

Another minor advantage of having an HTTP proxy in place would be the possibility to enable caching for certain resources. This would be a huge advantage for our future build systems, which typically download massive amounts of static resources from various repositories (Maven, APT, npm, ...). Having a caching proxy here would decrease build times immensely.

@hwinkel

hwinkel commented Apr 13, 2017

Hi, how is it going with this? We currently have a use case where we need to proxy egress traffic so that it gets a defined set of source IPs. The destination applies an IP-based filter that only allows requests from these given IPs. To make it prettier, we could envision announcing the source IP of the node running the egress proxy via an L3 networking model.

@hjacobs
Contributor

hjacobs commented Apr 13, 2017

@hwinkel we haven't started working on egress yet, but for your use case it could be a simple routing rule in AWS to go via NAT gateways with a fixed IP.
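For reference, the routing rule mentioned above could be expressed as a small CloudFormation fragment; the subnet and route table references are placeholders for resources assumed to exist elsewhere in the template:

Resources:
  EgressNatEip:
    Type: AWS::EC2::EIP
    Properties:
      Domain: vpc                        # fixed public IP for all egress traffic
  EgressNatGateway:
    Type: AWS::EC2::NatGateway
    Properties:
      AllocationId: !GetAtt EgressNatEip.AllocationId
      SubnetId: !Ref PublicSubnet        # placeholder: public subnet of the VPC
  NodeDefaultRoute:
    Type: AWS::EC2::Route
    Properties:
      RouteTableId: !Ref NodeRouteTable  # placeholder: route table of the node subnets
      DestinationCidrBlock: 0.0.0.0/0
      NatGatewayId: !Ref EgressNatGateway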

@dolftax

dolftax commented Apr 14, 2017

@hjacobs I was looking for something similar. Could you point me to relevant meetings or docs? I can help you guys build this :)

@hjacobs
Contributor

hjacobs commented May 30, 2017

FYI: we are currently experimenting with Dante as a SOCKS server on AWS. It looks really promising, and Java has automatic SOCKS support (https://docs.oracle.com/javase/7/docs/technotes/guides/net/proxies.html).
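The linked document describes Java's socksProxyHost/socksProxyPort system properties; a sketch of how they could be passed to a JVM container without code changes, assuming a hypothetical Dante service address (image name and service name are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: java-app
spec:
  containers:
  - name: app
    image: my-java-app:latest            # placeholder image
    env:
    # JAVA_TOOL_OPTIONS is picked up automatically by the JVM,
    # so the application itself needs no changes to use the SOCKS proxy.
    - name: JAVA_TOOL_OPTIONS
      value: "-DsocksProxyHost=egress-socks.kube-system.svc.cluster.local -DsocksProxyPort=1080"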

@shruti-palshikar

Is selective egress a supported functionality now?

@mikkeloscar
Contributor

@shruti-palshikar No, we have not done any work on this, and it's also not something we are planning right now since we don't need it ourselves at the moment :)

@szuecs
Member

szuecs commented Jan 31, 2018

@shruti-palshikar Depending on what you are looking for, you might use Calico or Cilium and network policies; I know that there is some effort in sig-network to specify egress network policies.
@hwinkel shameless plug: https://github.com/szuecs/kube-static-egress-controller is a controller that creates a CloudFormation stack with NAT gateways and routing table entries, so that only a list of target networks is routed through the NAT GWs.

@shruti-palshikar

@mikkeloscar Thanks for the response. @szuecs: My use case is to allow egress from pods to certain selected external domains. With the default policy of the namespace being deny-all for egress traffic, I am looking for ways to whitelist a few domains that the pods are allowed to reach out to.

@szuecs
Member

szuecs commented Jan 31, 2018

@shruti-palshikar domains are not an internet-routable entity, neither for egress nor for ingress. Routing is based on Layer 3 of the OSI model, and DNS is only used to resolve a name to an IP. The IP is the routable entity, which you might be able to restrict in Kubernetes with the mentioned CNI plugins plus an egress network policy object, but this repository is not the right audience.
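To illustrate the comment above: with a CNI plugin that enforces egress network policies (e.g. Calico or Cilium), the whitelist has to be expressed as IP ranges rather than domain names. A sketch with a placeholder namespace and CIDR:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: allow-egress-to-dependency
  namespace: my-namespace            # placeholder namespace
spec:
  podSelector: {}                    # applies to all pods in the namespace
  policyTypes:
  - Egress
  egress:
  - to:
    - ipBlock:
        cidr: 203.0.113.0/24         # placeholder: resolved IPs of the external dependency
    ports:
    - protocol: TCP
      port: 443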

szuecs added a commit that referenced this issue Sep 10, 2018
* add dualstack (IPv6) support (#209)
* added a flag to blacklist certificate ARNs, that will not be considered
* Allow control of the SSL policy applied to https listeners via flag (#20 @jhohertz)
* Detach non-existing target groups from ASGs (#198)
* Reduce cert lookups within a single update (#193)
* Cleanup controller initialization and termination (#194)

Signed-off-by: Sandor Szücs <sandor.szuecs@zalando.de>
@HerrmannHinz

I was looking for something similar; the folks at Cilium did something which could solve that problem.

see: https://cilium.io/blog/2018/09/19/kubernetes-network-policies/
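Cilium's DNS-aware policies can indeed express a domain-based whitelist similar to the original proposal. A hedged sketch (the namespace, DNS labels, and domain names are assumptions about the setup, and the toFQDNs feature requires a recent Cilium version):

apiVersion: cilium.io/v2
kind: CiliumNetworkPolicy
metadata:
  name: allow-egress-to-example
  namespace: my-namespace            # placeholder namespace
spec:
  endpointSelector: {}               # all pods in the namespace
  egress:
  # allow DNS lookups so that FQDN rules can be resolved and enforced
  - toEndpoints:
    - matchLabels:
        k8s:io.kubernetes.pod.namespace: kube-system
        k8s-app: kube-dns
    toPorts:
    - ports:
      - port: "53"
        protocol: UDP
      rules:
        dns:
        - matchPattern: "*"
  # allow HTTPS only to the whitelisted domains
  - toFQDNs:
    - matchName: mydependency1.example.com
    - matchPattern: "*.example.org"
    toPorts:
    - ports:
      - port: "443"
        protocol: TCP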
