K8s Deployment and Documentation Suggestions #21

Open · sherif-fanous opened this issue Apr 14, 2023 · 4 comments

Comments

sherif-fanous commented Apr 14, 2023

First off, thanks for a great solution. It's unfortunate that ZeroTier doesn't directly provide/maintain a router-based image the way Tailscale does.

I managed to get zerotier:router running on my home lab K8s cluster.

The starter deployment provided in the repo here helped, but I believe it could do with a few enhancements along with some documentation.

Here is the deployment manifest I ended up with:

apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: zerotier
  name: zerotier
  labels:
    app: zerotier
spec:
  replicas: 1
  selector:
    matchLabels:
      app: zerotier
  template:
    metadata:
      name: zerotier
      labels:
        app: zerotier
    spec:
      initContainers:
        - name: network-joiner
          image: busybox:latest
          env:
            - name: NETWORK_ID
              value: <Network_ID>
          command:
            - /bin/sh
            - -ec
            - mkdir -p /var/lib/zerotier-one/networks.d && touch /var/lib/zerotier-one/networks.d/$(NETWORK_ID).conf
          volumeMounts:
            - name: zerotier-working-directory
              mountPath: /var/lib/zerotier-one
      containers:
        - name: zerotier
          image: zyclonite/zerotier:router
          env:
            - name: ZEROTIER_ONE_GATEWAY_MODE
              value: inbound
            - name: ZEROTIER_ONE_LOCAL_PHYS
              value: eth0
            - name: ZEROTIER_ONE_NETWORK_IDS
              value: <Network_ID>
            - name: ZEROTIER_ONE_USE_IPTABLES_NFT
              value: "false"
          imagePullPolicy: Always
          securityContext:
            capabilities:
              add:
                - NET_ADMIN
                - NET_RAW
                - SYS_ADMIN
          volumeMounts:
            - name: tun
              mountPath: /dev/net/tun
              readOnly: true
            - name: zerotier-working-directory
              mountPath: /var/lib/zerotier-one
      securityContext:
        sysctls:
          - name: net.ipv4.ip_forward
            value: "1"
      volumes:
        - name: tun
          hostPath:
            path: /dev/net/tun
        - name: zerotier-working-directory
          persistentVolumeClaim:
            claimName: zerotier
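
For completeness: the manifest above assumes that a zerotier namespace and a PersistentVolumeClaim named zerotier already exist. A minimal sketch of those companion manifests (the storage request is a placeholder; adjust the size and storage class for your cluster):

apiVersion: v1
kind: Namespace
metadata:
  name: zerotier
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: zerotier
  name: zerotier
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 128Mi

Persisting /var/lib/zerotier-one matters because it holds identity.secret; without it the pod would generate a new ZeroTier node identity (and address) on every restart.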

The main issue with the deployment in the repo is that it is missing the following:

securityContext:
  sysctls:
    - name: net.ipv4.ip_forward
      value: "1"

Just adding this to the deployment manifest is not enough, though: net.ipv4.ip_forward is treated as an unsafe sysctl and must be explicitly allowed on each node, as per https://kubernetes.io/docs/tasks/administer-cluster/sysctl-cluster/#enabling-unsafe-sysctls

In my particular case (cluster created with kubeadm), I had to add the following to the kubelet config file, which on my nodes is /var/lib/kubelet/config.yaml:

allowedUnsafeSysctls:
- net.ipv4.ip_forward
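
After editing the config, restart the kubelet on each node so the change takes effect; on a systemd-based node that's typically:

sudo systemctl restart kubelet

The linked documentation also shows an equivalent --allowed-unsafe-sysctls kubelet command-line flag, if you'd rather set it that way than via the config file.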

With this setup the pod is able to set net.ipv4.ip_forward to 1 and route traffic between the ZeroTier network and my K8s overlay pod and service networks.
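
A quick sanity check, assuming the namespace and names from the manifest above:

kubectl -n zerotier exec deploy/zerotier -- cat /proc/sys/net/ipv4/ip_forward

which should print 1.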


Slyke commented Feb 15, 2024

How did you deal with configuring iptables for custom routes and NATing?

sherif-fanous (Author) commented

@Slyke I didn't have to do anything. The container handles all this automatically.
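
For anyone wondering what "automatically" means here: the router image's entrypoint programs the NAT and forwarding rules itself. Conceptually they are along the lines of the following sketch (illustrative only, not the image's exact script; eth0 is the interface from ZEROTIER_ONE_LOCAL_PHYS and zt+ matches the ZeroTier virtual interfaces):

iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE
iptables -A FORWARD -i zt+ -o eth0 -j ACCEPT
iptables -A FORWARD -i eth0 -o zt+ -m state --state RELATED,ESTABLISHED -j ACCEPT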


Slyke commented Feb 18, 2024

@sherif-fanous I'm having a terrible time getting it to route over Kubernetes, lol. Why is /dev/net/tun required?

sherif-fanous (Author) commented

@Slyke See the instructions here

I have this setup successfully working in 3 clusters using the instructions in the first post.
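
As for why /dev/net/tun is required: zerotier-one creates its virtual network interface through the kernel's TUN/TAP driver, so the container needs access to the node's /dev/net/tun character device, which is exactly what the hostPath mount in the manifest provides. To confirm the device exists on a node (and load the module if it doesn't):

ls -l /dev/net/tun
sudo modprobe tun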
