Security Documentation

Flux Security documentation.

Introduction

Flux has a multi-component design and integrates with many other systems.

This document gives an overview of security considerations for Flux components, project processes, and artifacts, as well as Flux’s configurable options and what they enable for both Kubernetes cluster and external system security.

See our security processes document for vulnerability reporting, handling, and disclosure of information for the Flux project and community.

Please also have a look at our security-related blog posts, where we write about what we are doing to keep Flux and you safe!

Signed container images

The Flux CLI and the controllers’ images are signed using Sigstore Cosign and GitHub OIDC. The container images along with their signatures are published on GitHub Container Registry and Docker Hub.

To verify the authenticity of Flux’s container images, install cosign v2 and run:

$ cosign verify ghcr.io/fluxcd/source-controller:v1.0.0 \
  --certificate-identity-regexp=^https://github\\.com/fluxcd/.*$ \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com 

Verification for ghcr.io/fluxcd/source-controller:v1.0.0 --
The following checks were performed on each of these signatures:
  - The cosign claims were validated
  - Existence of the claims in the transparency log was verified offline
  - The code-signing certificate was verified using trusted certificate authority certificates

We also wrote a blog post which discusses this in some more detail.
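
The flux CLI release artifacts are signed in the same keyless fashion. As a sketch, the checksums file published with a release can be verified with cosign verify-blob; the asset names below are assumptions based on the usual flux_<version>_checksums.txt naming, so check the release page for the exact file names:

# Download the checksums file together with its Cosign certificate and signature
# (asset names are assumed, adjust them to the actual release assets)
curl -sLO https://github.com/fluxcd/flux2/releases/download/v2.0.0/flux_2.0.0_checksums.txt
curl -sLO https://github.com/fluxcd/flux2/releases/download/v2.0.0/flux_2.0.0_checksums.txt.pem
curl -sLO https://github.com/fluxcd/flux2/releases/download/v2.0.0/flux_2.0.0_checksums.txt.sig

# Keyless verification of the checksums file with cosign v2
cosign verify-blob flux_2.0.0_checksums.txt \
  --certificate flux_2.0.0_checksums.txt.pem \
  --signature flux_2.0.0_checksums.txt.sig \
  --certificate-identity-regexp=^https://github\\.com/fluxcd/.*$ \
  --certificate-oidc-issuer=https://token.actions.githubusercontent.com

Once the checksums file is verified, the downloaded CLI archive can be checked against it with sha256sum --check --ignore-missing.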

Software Bill of Materials

For the Flux project we publish a Software Bill of Materials (SBOM) with each release. The SBOM is generated with Syft in the SPDX format.

The spdx.json file is available for download on the GitHub release page e.g.:

curl -sL https://github.com/fluxcd/flux2/releases/download/v2.0.0/flux_2.0.0_sbom.spdx.json | jq
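
Since the SBOM is SPDX 2.x JSON, it can be inspected with standard jq queries. For example, a quick (unofficial) way to list the bundled dependencies and their versions, assuming the same asset name as above:

curl -sL https://github.com/fluxcd/flux2/releases/download/v2.0.0/flux_2.0.0_sbom.spdx.json | \
  jq -r '.packages[] | "\(.name) \(.versionInfo)"'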

The Flux controllers’ images come with SBOMs for each CPU architecture; you can extract the SPDX JSON using Docker’s inspect command:

docker buildx imagetools inspect ghcr.io/fluxcd/source-controller:v1.0.0 \
    --format "{{ json (index .SBOM \"linux/amd64\").SPDX}}"

Or by using Docker’s sbom command:

docker sbom fluxcd/source-controller:v1.0.0

Please also refer to this blog post which discusses the idea and value of SBOMs.

SLSA Provenance

Starting with Flux version 2.0.0, the build, release and provenance portions of the Flux project supply chain provisionally meet SLSA Build Level 3.

Please see the SLSA Assessment documentation for more details on how the provenance is generated and how Flux complies with the SLSA requirements.

Provenance verification

The provenance of the Flux release artifacts (binaries, container images, SBOMs, deploy manifests) can be verified using the official SLSA verifier tool and Sigstore Cosign. Please see the SLSA provenance verification documentation for more details on how to verify the provenance of Flux release artifacts.
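
As an illustrative sketch of what this looks like for a CLI release archive using slsa-verifier: the provenance asset name (provenance.intoto.jsonl) and the archive name below are assumptions about the release layout, so defer to the SLSA provenance verification documentation for the exact steps.

# Download a release archive and the SLSA provenance published with the release
# (asset names are assumed, check the release page)
curl -sLO https://github.com/fluxcd/flux2/releases/download/v2.0.0/flux_2.0.0_linux_amd64.tar.gz
curl -sLO https://github.com/fluxcd/flux2/releases/download/v2.0.0/provenance.intoto.jsonl

# Verify that the archive was built from the fluxcd/flux2 repository at tag v2.0.0
slsa-verifier verify-artifact flux_2.0.0_linux_amd64.tar.gz \
  --provenance-path provenance.intoto.jsonl \
  --source-uri github.com/fluxcd/flux2 \
  --source-tag v2.0.0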

Buildkit attestations

The Flux controllers’ images come with provenance attestations which follow the SLSA provenance schema version 0.2.

The provenance attestations are generated at build time with Docker Buildkit and include facts about the build process such as:

  • Build timestamps
  • Build parameters and environment
  • Version control metadata
  • Source code details
  • Materials (files, scripts) consumed during the build

To extract the SLSA provenance JSON for a specific CPU architecture, you can use Docker’s inspect command:

docker buildx imagetools inspect ghcr.io/fluxcd/source-controller:v1.0.0 \
    --format "{{ json (index .Provenance \"linux/amd64\").SLSA}}"

Note that linux/amd64 can be replaced with another architecture variant of the image, for example linux/arm64 or linux/arm/v7.

Scanning for CVEs

The Flux controllers’ images are based on Alpine; they contain very few OS packages plus the controller’s binary, which is statically built using Go.

To properly scan Flux container images, the scanner must be able to detect the Alpine apk packages and the Go modules included in the controller’s Go binary. The Go modules and apk packages are also available for inspection in the attached SBOM.

The Flux team recommends that users scan the container images for CVEs using Trivy, an OSS scanner made by Aqua Security.

To scan a controller image with Trivy:

trivy image ghcr.io/fluxcd/source-controller:v1.0.0

We ask users to keep Flux up to date on their clusters; this is the only way to ensure a Flux deployment is free of CVEs. New Flux versions are published periodically, and the container images are based on the latest Alpine and Go releases. We offer a fully automated solution for keeping Flux up to date; please see the Flux GitHub Actions documentation for more details.
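
A quick way to spot an outdated deployment is the flux CLI itself (assuming it is installed locally and has access to the cluster):

# Print the CLI version and the versions of the controllers running in the cluster
flux version

# Run the built-in checks, including controller health
flux check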

Pod security standard

The controller deployments are configured in conformance with the Kubernetes restricted pod security standard:

  • all Linux capabilities are dropped
  • the root filesystem is set to read-only
  • the seccomp profile is set to the runtime default
  • run as non-root is enabled
  • the filesystem group is set to 1337
  • the user and group ID is set to 65534
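
These settings can be confirmed on a running cluster by inspecting the pod and container security contexts of a controller deployment, for example the source-controller (assuming the default flux-system namespace):

# Pod-level security context
kubectl -n flux-system get deployment source-controller \
  -o jsonpath='{.spec.template.spec.securityContext}'

# Container-level security context
kubectl -n flux-system get deployment source-controller \
  -o jsonpath='{.spec.template.spec.containers[0].securityContext}'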

Controller permissions

While Flux integrates with other systems, it is built on the Kubernetes controller-runtime and adheres to the Kubernetes security model, including RBAC [1].

Flux installs a set of RBAC manifests. These include:

  1. A crd-controller ClusterRole, which:
    • Has full access to all the Custom Resource Definitions defined by Flux controllers
    • Can get, list, and watch namespaces and secrets
    • Can get, list, watch, create, patch, and delete configmaps and their status
    • Can get, list, watch, create, patch, and delete coordination.k8s.io leases
  2. A crd-controller ClusterRoleBinding:
    • References crd-controller ClusterRole above
    • Bound to the service account of every Flux controller
  3. A cluster-reconciler ClusterRoleBinding:
    • References cluster-admin ClusterRole
    • Bound to service accounts for only kustomize-controller and helm-controller
  4. A flux-view ClusterRole:
    • Grants the Kubernetes builtin view role read-only access to Flux Custom Resources
  5. A flux-edit ClusterRole:
    • Grants the Kubernetes builtin edit and admin roles write access to Flux Custom Resources

Flux uses these two ClusterRoleBinding strategies to allow clear access separation using tools purpose-built for policy enforcement (OPA, Kyverno, admission controllers).

For example, the design allows all controllers to access Flux CRDs (binds to crd-controller ClusterRole), but only binds the Flux reconciler controllers for Kustomize and Helm to cluster-admin ClusterRole, as these are the only two controllers that manage resources in the cluster.
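
To review these objects on a cluster, you can list the cluster-wide RBAC resources by the common labels that flux install applies (the label selector here is an assumption based on the default manifests):

# List the ClusterRoles and ClusterRoleBindings managed by Flux
kubectl get clusterroles,clusterrolebindings -l app.kubernetes.io/part-of=flux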

However, in a soft multi-tenancy setup, Flux does not reconcile a tenant’s repo under the cluster-admin role. Instead, you specify a different service account in your manifest, and the Flux controllers will use the Kubernetes Impersonation API under cluster-admin to impersonate that service account [2]. In this way, policy restrictions for this service account are applied to the manifests being reconciled. If the binding is not defined for the correct service account and namespace, the reconciliation will fail. The roles and permissions for this multi-tenancy approach are described in detail here: https://github.com/fluxcd/flux2-multi-tenancy.
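
As an illustrative sketch (all names below are placeholders), a tenant’s Kustomization can be pinned to a namespace-scoped service account with the flux CLI:

# Reconcile the tenant repository while impersonating the "tenant" service account,
# so the applied manifests are limited to that service account's RBAC permissions
flux create kustomization tenant-apps \
  --namespace=tenant-ns \
  --service-account=tenant \
  --source=GitRepository/tenant-repo \
  --path="./deploy" \
  --prune=true \
  --target-namespace=tenant-ns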

Cross-Namespace reference policy

Flux’s general premise is to follow Kubernetes RBAC best practices, which forbid cross-namespace references to potentially sensitive data such as Secrets and ConfigMaps. For sources and events, however, Flux does allow referencing resources in other Namespaces. In these cases, the policy is governed by each controller’s --no-cross-namespace-refs flag. See the Flux multi-tenancy configuration page for further information on this flag.
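
To check whether a controller on your cluster is already running with this flag (assuming the default flux-system namespace):

# Print the kustomize-controller arguments and look for --no-cross-namespace-refs=true
kubectl -n flux-system get deployment kustomize-controller \
  -o jsonpath='{.spec.template.spec.containers[0].args}'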

Further securing Flux Deployments

Beyond the baked-in security features of Flux, there are further best practices that can be implemented to ensure your Flux deployment is as secure as it can be. For more information, check out the Flux Security Best Practices.


  1. However, by design, cross-namespace references are an exception to RBAC. Platform admins have the option to turn off cross-namespace references, as described in the installation documentation.

  2. Platform admins have the option to enforce impersonation, as described in the installation documentation.


Security Best Practices

Best practices for securing Flux deployments.

Contextual Authorization

Contextual Authorization for securing Flux deployments.

Secrets Management

Managing Secrets in a GitOps way using Flux.

SLSA Assessment

Flux assessment of SLSA Level 3 requirements.