Brings native understanding of Nix packages to containerd.
Key features • Getting started • Installation • Architecture • Contributing
- Instead of downloading image layers, software packages come directly from a Nix store.
- Packages can be fetched from a Nix binary cache or built on the fly.
- Backwards-compatible with existing non-Nix images.
- Nix-snapshotter layers can be interleaved with normal layers.
- Provides a CRI Image Service so Kubernetes can "pull images" directly from a Nix store, letting you run containers without a Docker Registry.
- Fully declarative Kubernetes resources, where the image reference is a Nix store path.
Nix is a package manager / build system that has a complete understanding of the build & runtime inputs for every package. Nix packages are stored under a global, hashed path like `/nix/store/s66mzxpvicwk07gjbjfw9izjfa797vsw-hello-2.12.1`.

Packages usually follow the FHS convention, so Nix packages are typically directories containing other directories like `bin`, `share`, etc. For example, the `hello` binary would be available at `/nix/store/s66mzxpvicwk07gjbjfw9izjfa797vsw-hello-2.12.1/bin/hello`.

Runtime dependencies down to glibc are also inside `/nix/store`, so Nix really has a complete dependency graph. In the case of `hello`, the complete closure is the following:
/nix/store/3n58xw4373jp0ljirf06d8077j15pc4j-glibc-2.37-8
/nix/store/fz2c8qahxza5ygy4yvwdqzbck1bs3qag-libidn2-2.3.4
/nix/store/q7hi3rvpfgc232qkdq2dacmvkmsrnldg-libunistring-1.1
/nix/store/ryvnrp5n6kqv3fl20qy2xgcgdsza7i0m-xgcc-12.3.0-libgcc
/nix/store/s66mzxpvicwk07gjbjfw9izjfa797vsw-hello-2.12.1
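As a hedged aside (not part of the original walkthrough), the same runtime closure can be reproduced from within Nix using the standard nixpkgs helper `pkgs.closureInfo`; the file name `closure.nix` below is just illustrative.

```nix
# closure.nix -- compute the runtime closure of pkgs.hello.
# The build output contains a `store-paths` file listing the same
# /nix/store paths shown above.
{ pkgs ? import <nixpkgs> { } }:

pkgs.closureInfo { rootPaths = [ pkgs.hello ]; }
```

Building it with `nix-build closure.nix` and reading `result/store-paths` should reproduce that list.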
If you inspect its ELF data, you can indeed see it's linked against that specific `glibc`:
$ readelf -d /nix/store/s66mzxpvicwk07gjbjfw9izjfa797vsw-hello-2.12.1/bin/hello | grep runpath
0x000000000000001d (RUNPATH) Library runpath: [/nix/store/3n58xw4373jp0ljirf06d8077j15pc4j-glibc-2.37-8/lib]
This means that a root filesystem containing that closure is sufficient to run `hello`, even though it's dynamically linked. This is similar to minimal images containing a statically compiled Go binary, or to distroless images, which leverage Bazel to the same effect.
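As a sketch of where this leads, the same closure can be packaged as a Nix-native image with `pkgs.nix-snapshotter.buildImage` (documented further below). The attributes mirror the `ghcr.io/pdtpartners/hello` image used in the demo, but treat this snippet as illustrative rather than the exact expression behind that image.

```nix
# Package the hello closure as a Nix-native image; containerd (via
# nix-snapshotter) substitutes the closure from the Nix store or binary
# cache instead of pulling layer tarballs.
hello = pkgs.nix-snapshotter.buildImage {
  name = "ghcr.io/pdtpartners/hello";
  tag = "latest";
  config.entrypoint = [ "${pkgs.hello}/bin/hello" ];
};
```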
The easiest way to try this out is to run a NixOS VM with everything pre-configured.
Note
You'll need Nix installed with flake support and the unified CLI enabled, both of which come pre-enabled with the Determinate Nix Installer.
Trying without Nix installed
If you have Docker or another OCI runtime installed, you can get a Nix environment with flakes enabled by running:

docker run --rm -it nixpkgs/nix-flakes

Then start the VM:

nix run "github:pdtpartners/nix-snapshotter#vm"
nixos login: root # (Ctrl-a then x to quit)
Password: root
# Running `pkgs.hello` image with nix-snapshotter
nerdctl run ghcr.io/pdtpartners/hello
# Running `pkgs.redis` image with kubernetes & nix-snapshotter
kubectl apply -f /etc/kubernetes/redis/
# Wait a few seconds...
watch kubectl get pods
# And a kubernetes service will be ready to forward port 30000 to the redis
# pod, so you can test it out with a `ping` command
redis-cli -p 30000 ping
Or you can try running in rootless mode:
nix run "github:pdtpartners/nix-snapshotter#vm-rootless"
nixos login: rootless # (Ctrl-a then x to quit)
Password: rootless
# `nerdctl run` with rootless k3s containerd is not supported yet
# See: https://github.com/containerd/nerdctl/issues/2831
#
# If rootless kubernetes is not needed, `nerdctl run` does work with rootless
# containerd + nix-snapshotter.
# Running `pkgs.redis` image with kubernetes & nix-snapshotter
kubectl apply -f /etc/kubernetes/redis/
# Wait a few seconds...
watch kubectl get pods
# And a kubernetes service will be ready to forward port 30000 to the redis
# pod, so you can test it out with a `ping` command
redis-cli -p 30000 ping
NixOS and Home Manager modules are provided for easy installation.
Important
Requires nixpkgs 23.05 or newer.
- Home Manager

Flake
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    home-manager = {
      url = "github:nix-community/home-manager";
      inputs.nixpkgs.follows = "nixpkgs";
    };
    nix-snapshotter = {
      url = "github:pdtpartners/nix-snapshotter";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { nixpkgs, home-manager, nix-snapshotter, ... }: {
    homeConfigurations.myuser = home-manager.lib.homeManagerConfiguration {
      pkgs = import nixpkgs { system = "x86_64-linux"; };
      modules = [
        {
          home = {
            username = "myuser";
            homeDirectory = "/home/myuser";
            stateVersion = "23.11";
          };
          programs.home-manager.enable = true;
          # Let home-manager automatically start systemd user services.
          # Will eventually become the new default.
          systemd.user.startServices = "sd-switch";
        }
        ({ pkgs, ... }: {
          # (1) Import home-manager module.
          imports = [ nix-snapshotter.homeModules.default ];

          # (2) Add overlay.
          #
          # NOTE: If using NixOS & home-manager.useGlobalPkgs = true, then add
          # the overlay at the NixOS level.
          nixpkgs.overlays = [ nix-snapshotter.overlays.default ];

          # (3) Enable service.
          virtualisation.containerd.rootless = {
            enable = true;
            nixSnapshotterIntegration = true;
          };
          services.nix-snapshotter.rootless = {
            enable = true;
          };

          # (4) Add a containerd CLI like nerdctl.
          home.packages = [ pkgs.nerdctl ];
        })
      ];
    };
  };
}
Non-flake
{ pkgs, ... }:
let
  nix-snapshotter = import (
    builtins.fetchTarball "https://github.com/pdtpartners/nix-snapshotter/archive/main.tar.gz"
  );
in {
  imports = [
    # (1) Import home-manager module.
    nix-snapshotter.homeModules.default
  ];

  # (2) Add overlay.
  #
  # NOTE: If using NixOS & home-manager.useGlobalPkgs = true, then add
  # the overlay at the NixOS level.
  nixpkgs.overlays = [ nix-snapshotter.overlays.default ];

  # (3) Enable service.
  virtualisation.containerd.rootless = {
    enable = true;
    nixSnapshotterIntegration = true;
  };
  services.nix-snapshotter.rootless = {
    enable = true;
  };

  # (4) Add a containerd CLI like nerdctl.
  home.packages = [ pkgs.nerdctl ];
}
- NixOS

Flake
{
  inputs = {
    nixpkgs.url = "github:NixOS/nixpkgs/nixos-unstable";
    nix-snapshotter = {
      url = "github:pdtpartners/nix-snapshotter";
      inputs.nixpkgs.follows = "nixpkgs";
    };
  };

  outputs = { nixpkgs, nix-snapshotter, ... }: {
    nixosConfigurations.myhost = nixpkgs.lib.nixosSystem {
      system = "x86_64-linux";
      modules = [
        ./hardware-configuration.nix
        ({ pkgs, ... }: {
          # (1) Import nixos module.
          imports = [ nix-snapshotter.nixosModules.default ];

          # (2) Add overlay.
          nixpkgs.overlays = [ nix-snapshotter.overlays.default ];

          # (3) Enable service.
          virtualisation.containerd = {
            enable = true;
            nixSnapshotterIntegration = true;
          };
          services.nix-snapshotter = {
            enable = true;
          };

          # (4) Add a containerd CLI like nerdctl.
          environment.systemPackages = [ pkgs.nerdctl ];
        })
      ];
    };
  };
}
Non-flake
{ pkgs, ... }:
let
  nix-snapshotter = import (
    builtins.fetchTarball "https://github.com/pdtpartners/nix-snapshotter/archive/main.tar.gz"
  );
in {
  imports = [
    ./hardware-configuration.nix
    # (1) Import nixos module.
    nix-snapshotter.nixosModules.default
  ];

  # (2) Add overlay.
  nixpkgs.overlays = [ nix-snapshotter.overlays.default ];

  # (3) Enable service.
  virtualisation.containerd = {
    enable = true;
    nixSnapshotterIntegration = true;
  };
  services.nix-snapshotter = {
    enable = true;
  };

  # (4) Add a containerd CLI like nerdctl.
  environment.systemPackages = [ pkgs.nerdctl ];
}
- Manual
See the manual installation docs.
See package.nix for the Nix interface. You can also repeat the asciinema demo above in examples/declarative-k8s.nix.
pkgs = import nixpkgs {
overlays = [ nix-snapshotter.overlays.default ];
};
# Builds a native Nix image but intended for an OCI-compliant registry.
redis = pkgs.nix-snapshotter.buildImage {
name = "ghcr.io/pdtpartners/redis";
tag = "latest";
config.entrypoint = [ "${pkgs.redis}/bin/redis-server" ];
};
# Running "${redis.copyToRegistry {}}/bin/copy-to-registry" will copy it to
# an OCI-compliant Registry. It will try to use your Docker credentials to push
# if the target is DockerHub.
# Builds a native Nix image with a special image reference. When running
# the kubelet with `--image-service-endpoint` pointing to nix-snapshotter, then
# it can resolve the image reference to this Nix package.
redis' = pkgs.nix-snapshotter.buildImage {
name = "redis";
resolvedByNix = true;
config.entrypoint = [ "${pkgs.redis}/bin/redis-server" ];
};
# Fully declarative Kubernetes Pod, down to the image specification and its
# contents.
redisPod = pkgs.writeText "redis-pod.json" (builtins.toJSON {
apiVersion = "v1";
kind = "Pod";
metadata = {
name = "redis";
labels.name = "redis";
};
spec.containers = [{
name = "redis";
args = [ "--protected-mode" "no" ];
image = "nix:0${redis'}";
ports = [{
name = "client";
containerPort = 6379;
}];
}];
});
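To tie this back to the VM demo above, here is a minimal, illustrative NixOS module fragment showing one way the generated manifest could be exposed to `kubectl apply -f /etc/kubernetes/redis/`; it is a hedged sketch, not a copy of examples/declarative-k8s.nix.

```nix
# Place the generated pod manifest under /etc/kubernetes/redis so the
# earlier `kubectl apply -f /etc/kubernetes/redis/` step can consume it.
{
  environment.etc."kubernetes/redis/redis-pod.json".source = redisPod;
}
```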
Note
If you want to understand how `nix:0` gets resolved, take a look at the docs for the Image Service.
Pull requests are welcome for any changes. Consider opening an issue to discuss larger changes first to get feedback on the idea.
Please read CONTRIBUTING for development tips and more details on contributing guidelines.
Important
To understand how it works behind the scenes, see the Architecture docs for more details.
- What's the difference between this and pkgs.dockerTools.buildImage?
Answer
The upstream `buildImage` streams Nix packages into tarballs, compresses them, and pushes them to an OCI registry. Since there is a limit to the number of layers in an image, a heuristic is used to group popular packages together. This results in a large amount of duplication between your Nix binary cache and the Docker Registry tarballs, and even between images that share packages, since the heuristic-based layering strategy may duplicate common packages across layers.
With `pkgs.nix-snapshotter.buildImage`, containerd natively understands Nix packages, so everything is pulled at package granularity without the layer limit. This means all the container content is either already in your Nix store or fetched from your Nix binary cache.
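For a concrete comparison, here is a hedged sketch of the same Redis image expressed both ways. The attribute names follow the upstream `pkgs.dockerTools.buildImage` interface and the `buildImage` reference above, but the snippet is illustrative rather than taken from this repository.

```nix
# Upstream builder: streams the closure into layer tarballs that get pushed
# to an OCI registry (subject to the layer limit and heuristic grouping).
legacyRedis = pkgs.dockerTools.buildImage {
  name = "redis";
  tag = "latest";
  copyToRoot = [ pkgs.redis ];
  config.Cmd = [ "${pkgs.redis}/bin/redis-server" ];
};

# Nix-native builder: containerd pulls the packages themselves from the Nix
# store or binary cache, with no layer limit.
nativeRedis = pkgs.nix-snapshotter.buildImage {
  name = "redis";
  tag = "latest";
  config.entrypoint = [ "${pkgs.redis}/bin/redis-server" ];
};
```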
- When should I choose the rootful (normal) vs rootless mode?
Answer
If you are running a production server, it's best to use the rootful version, as rootless containers are still in their early stages in the container ecosystem.

However, if you are running it for personal use, do try the rootless variant first. Although less mature, it is the more secure mode because the container runtime runs as an unprivileged user. It can mitigate potential container-breakout vulnerabilities, though it's not a silver bullet.

Typically, rootless mode is more complex to set up. But since it's already distributed as a NixOS / Home Manager module, it's as simple as enabling the service.
See https://rootlesscontaine.rs for more details.
- What's the difference between this and Nixery?
Answer
Nixery exposes an API (in the form of an OCI registry) to dynamically build Nix-based images. It has an improved layering design compared to upstream `pkgs.dockerTools.buildImage`, but it is still fundamentally a heuristics-based layering strategy (see above), so it still suffers from the same inefficiency in duplication. However, Nixery could start building nix-snapshotter images, giving us a Docker Registry that dynamically builds native Nix images. See this Nixery issue to follow the progress.
- What's the difference between this and nix-in-docker?
Answer
If you run Nix inside a container (e.g. `nixos/nix` or `nixpkgs/nix-flakes`), then you are indeed fetching packages using the Nix store. However, each container will have its own Nix store instead of de-duplicating at the host level.
nix-snapshotter is intended to live on the host system (sibling to containerd and/or kubelet) so that multiple containers running different images can share the underlying packages from the same Nix store.
- What's the difference between this and nix2container?
Answer
nix2container improves upon `pkgs.dockerTools.buildImage` in a few ways. First, similar to `pkgs.dockerTools.streamLayeredImage`, it avoids writing Nix layer tarballs into the Nix store unnecessarily and instead builds them just-in-time when exporting, for example via its passthru attribute `copyToRegistry`.
Secondly, it separates image metadata from layer metadata. This means that when updating the image config, layers don't need to be rebuilt. Thirdly, each layer's metadata is its own Nix package, so only updated layers need to be rebuilt.
Lastly, the layer metadata is a JSON file containing the Nix store paths along with a digest computed from the layer tarball, which is then thrown away. This lets the tool `skopeo` copy only the layers that don't already exist, rebuilding the requested layer tarballs just-in-time.
nix2container is a great improvement, but it still suffers from the same problems pointed out in the `pkgs.dockerTools.buildImage` section above. It duplicates data between the Nix binary cache and the Docker Registry, and it duplicates packages between layers due to a similar heuristic-based layering strategy.
`pkgs.nix-snapshotter.buildImage` has all the same improvements, except that we do write the final image back to the Nix store, since it is tiny and allows us to resolve image manifests via a Nix package.
The source code developed for nix-snapshotter is licensed under the MIT License.
This project also contains modified portions of other projects that are licensed under the terms of Apache License 2.0. See NOTICE for more details.