Replies: 5 comments 11 replies
-
Example / reference build outputs using the new pipeline, based on PR #3877, and published to a GH fork:
-
Do I understand correctly that the main build will be done with `make release`?
-
Hi @C0rWin and @pfi79 and @denyeart and @davidkel: I tried switching the base docker image from alpine over to ubuntu:20.04. (This is what we are using in the reference binary builders at GH.) The full image size, including gcc, golang, Fabric binaries, etc., is well over 1 GB, which is "too large." With a multi-stage build, trimming out gcc, golang, etc. and leaving only the Fabric binaries dynamically linked against the system libc adds about 60 MB to the image size over alpine. This is quite manageable, and so far seems 100% stable.
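For reference, the multi-stage approach described above looks roughly like the following sketch. The image tags, paths, and build flags here are illustrative, not the actual Fabric Dockerfile:

```dockerfile
# Build stage: full golang toolchain (large; discarded after the build)
FROM golang:1.19 AS builder
WORKDIR /src
COPY . .
# Dynamically linked build, so vendor HSM .so modules can be loaded at runtime
RUN CGO_ENABLED=1 go build -tags pkcs11 -o /out/peer ./cmd/peer

# Runtime stage: only the binaries plus the system libc (~60 MB over alpine)
FROM ubuntu:20.04
COPY --from=builder /out/peer /usr/local/bin/peer
CMD ["peer", "node", "start"]
```

The key point is that gcc, golang, and the module cache never reach the final image layer; only the dynamically linked binary and ubuntu's glibc do.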
So we either:
The latter two alternatives aren't viable, IMO. Switching over to ubuntu as the base image has a high chance of success for building Fabric images linked against pkcs11 and HSM shared libraries. I think this is a winner. The cost is 60 MB of (cached) image layer size added to our base images. Thoughts?
-
Candidate PRs for multi-arch support:
-
All set! Fabric 2.5.0-alpha3 and fabric-ca 1.5.6-beta3 are looking good on the M1. It took some real work to unwind all the tendrils. Thank you all for the input, testing, feedback, and encouragement. This was a big project. :| High-level summary of the updates for M1 / multi-arch:
And finally ... my desk is quiet again - no more fan noise!!!
-
This discussion is an opportunity to review the impacts of PR #3877 as a general approach for providing multi-architecture builds of Fabric, starting with the 2.5 LTS release.
Providing support for multi-arch runtimes for Fabric has opened a long, complicated, and interleaved discussion, with tendrils reaching far back into the original vision for PKCS11 and HSM. During the release-2.5 cycle, we started "peeling" back the layers, beginning with what was originally a feature request to add support for Fabric binaries and Docker images on the new M1 / Apple Silicon Macs. This initial work encountered minimal turbulence, concentrated at two points:
- `buildx` to prepare multi-arch Docker images, with QEMU emulating the target architecture at build time.

After the initial port was complete, we started turning out alpha builds of the release-2.5 branch, and encountered a number of SIGSEGV violations running Fabric binaries in Docker. Unwinding the dependencies between in-Docker builds, release builds, native builds, and builds using the system golang with the Fabric Makefiles has been a real challenge. The dependencies between all of these systems, usage contexts, and tools have turned into a bin-packing exercise for which no single solution exists.
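For readers unfamiliar with the mechanism, a multi-arch `buildx` build with QEMU emulation typically looks like the following. The image name and platform list are illustrative; the exact invocation lives in the Fabric CI scripts:

```shell
# One-time setup: register QEMU binfmt handlers and create a buildx builder
docker run --privileged --rm tonistiigi/binfmt --install all
docker buildx create --name fabric-builder --use

# Build and push a single manifest list covering amd64 and arm64
docker buildx build \
  --platform linux/amd64,linux/arm64 \
  --tag example/fabric-peer:dev \
  --push .
```

The non-native platform is built under QEMU user-mode emulation, which is exactly where slow builds and emulation-specific faults can creep in.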
The assumption up to this point, and it was fatally flawed, was that a single make/build/docker system would be able to serve the needs of all contexts in which Fabric is employed. In common usage patterns this is less of a concern, but the crossovers into the PKCS11 and HSM support domain have not been successful. Consider the following (legacy) Jira tickets, closed as WONTFIX/NO-SOLUTION/GO-AWAY or left lingering without resolution:
- For PKCS-enabled environments, a dynamically linked Fabric runtime, derived `FROM ubuntu`, is required.
- For all other environments, a statically linked Fabric runtime, derived `FROM alpine`, is strongly desired.
Herein lies the crux of the issue: Alpine's mechanism for embedding the musl libc runtime is fundamentally incompatible with golang and CGO. Up to this point we have been "lucky," in that the experimental support for the golang-alpine image has worked well. With the introduction of arm64, nothing really works correctly in this environment unless the Fabric binaries have been linked statically, which breaks PKCS11 and HSM support.
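To make the two link modes concrete, they look roughly like this. The flags shown are the standard Go toolchain mechanisms, not necessarily the exact Makefile invocations, and the output paths are illustrative:

```shell
# Fully static binary: no CGO, safe on alpine/musl, but PKCS11 is unavailable
CGO_ENABLED=0 GOOS=linux GOARCH=arm64 go build -o peer-static ./cmd/peer

# Dynamically linked binary: CGO enabled so PKCS11 / HSM .so modules can be
# dlopen'd at runtime; must run against a matching libc (hence ubuntu, not musl)
CGO_ENABLED=1 go build -tags pkcs11 -o peer-dynamic ./cmd/peer
```

A static binary cannot load shared objects, so the PKCS11 path inherently forces the dynamic mode, and the dynamic mode inherently forces a glibc-compatible base image.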
Unwinding these dependencies has been challenging, considering the cross-platform "support" matrix of build/for/arch/os/runtime/binaries/pkcs/etc. is well over 100 entries and counting. PR #3877 resolves these dependencies, presenting a simplified view of the overall release practices by "binning" the environments into four different categories:
- Local builds (`make`) run directly on the host system, using the system go. Nothing fancy here: just go.
- Release builds (`make release`) use the system go cross-compiler to prepare statically linked executables. In cases requiring CGO (e.g. sqlite), the `CC=...` argument specifies the compiler, and the external CGO link options are set to "static."
- Docker builds (`make docker`) package the statically linked binaries onto alpine, without support for PKCS11. No build of any kind is executed within the container - it's literally a simple `COPY` of the arch-specific, statically linked binaries into a Docker container.
- PKCS11 and HSM Docker containers must be prepared by building `FROM golang`, using the system go compiler and CGO to construct a dynamically linked executable. At runtime, the binaries load vendor-specific HSM .so modules. PKCS11 Docker images are NOT currently build outputs generated by Fabric; support is provided only through documentation and a reference fabric-test suite for study by HSM system integrators.

This compromise brings great simplification, stability, and predictability to Fabric's supported runtimes. Docker images are as small as possible, avoiding issues with the libc runtime by statically compiling under a cross-compiler. HSM implementations are free to extend the dynamically linked Fabric runtimes, avoiding the constraints that doomed the legacy Jira issues above.
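Given the binning above, a quick way to check which category a given binary actually falls into is to inspect its link mode. The path below is illustrative; point it at whatever your build produced:

```shell
# "statically linked"   => alpine-safe, no PKCS11
# "dynamically linked"  => needs a matching libc, can load HSM .so modules
file build/bin/peer

# ldd prints "not a dynamic executable" (and fails) for static binaries
ldd build/bin/peer || true
```

This is a useful sanity check when a SIGSEGV in-container suggests a binary landed in the wrong bin.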
Most encouraging is that it ... "just works!"
Please help identify cases where this is not so, and call them out here in this GH discussion, so we can make this just work for everyone.