flagger

Istio and App Mesh progressive delivery Kubernetes operator


Flagger is a Kubernetes operator that automates the promotion of canary deployments
using Istio, Linkerd, App Mesh, NGINX, Contour or Gloo routing for traffic shifting and Prometheus metrics for canary analysis.
The canary analysis can be extended with webhooks for running acceptance tests,
load tests or any other custom validation.

Flagger implements a control loop that gradually shifts traffic to the canary while measuring key performance
indicators like HTTP request success rate, average request duration and pod health.
Based on the KPI analysis, a canary is promoted or aborted, and the analysis result is published to Slack or MS Teams.
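The control loop described above can be sketched as a simple weight-stepping simulation. Everything below (function names, signatures, structure) is an illustrative assumption, not Flagger's actual implementation:

```go
package main

import "fmt"

// Checker reports whether the canary's KPIs (success rate, latency, pod
// health) are within their thresholds for the current interval.
type Checker func(weight int) bool

// runCanary sketches the weighted promotion loop: each interval it checks
// the metrics; on success it shifts stepWeight percent more traffic to the
// canary, and after maxFailures failed checks it aborts and rolls back.
func runCanary(stepWeight, maxWeight, maxFailures int, check Checker) (promoted bool, finalWeight int) {
	failures := 0
	weight := 0
	for weight < maxWeight {
		if !check(weight) {
			failures++
			if failures >= maxFailures {
				// Rollback: route all traffic back to the primary.
				return false, weight
			}
			continue
		}
		weight += stepWeight
		if weight > maxWeight {
			weight = maxWeight
		}
	}
	// All checks passed up to maxWeight: promote the canary to primary.
	return true, weight
}

func main() {
	// Healthy canary: every metric check passes.
	ok, w := runCanary(5, 50, 10, func(int) bool { return true })
	fmt.Println(ok, w) // true 50

	// Canary that always fails its checks: rolled back after 10 failures.
	ok, w = runCanary(5, 50, 10, func(int) bool { return false })
	fmt.Println(ok, w) // false 0
}
```

With `stepWeight: 5` and `maxWeight: 50`, a healthy rollout takes ten successful checks before promotion.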

[Flagger overview diagram]

Documentation

Flagger documentation can be found at docs.flagger.app

Canary CRD

Flagger takes a Kubernetes deployment and optionally a horizontal pod autoscaler (HPA),
then creates a series of objects (Kubernetes deployments, ClusterIP services and Istio or App Mesh virtual services).
These objects expose the application on the mesh and drive the canary analysis and promotion.

Flagger keeps track of ConfigMaps and Secrets referenced by a Kubernetes Deployment and triggers a canary analysis if any of those objects change.
When promoting a workload to production, both code (container images) and configuration (config maps and secrets) are synchronised.
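One way such change detection can work (a hedged sketch, not Flagger's actual code) is to hash each referenced object's data in a stable key order and re-run the canary analysis whenever the digest changes:

```go
package main

import (
	"crypto/sha256"
	"fmt"
	"sort"
)

// configChecksum computes a deterministic digest over a ConfigMap's or
// Secret's data: keys are sorted so the same content always hashes the
// same way, and any changed key or value yields a new digest.
func configChecksum(data map[string]string) string {
	keys := make([]string, 0, len(data))
	for k := range data {
		keys = append(keys, k)
	}
	sort.Strings(keys)
	h := sha256.New()
	for _, k := range keys {
		h.Write([]byte(k))
		h.Write([]byte(data[k]))
	}
	return fmt.Sprintf("%x", h.Sum(nil))
}

func main() {
	before := configChecksum(map[string]string{"color": "blue"})
	after := configChecksum(map[string]string{"color": "green"})
	// A changed value produces a different checksum, which would
	// trigger a new canary analysis.
	fmt.Println(before != after) // true
}
```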

For a deployment named podinfo, a canary promotion can be defined using Flagger's custom resource:

apiVersion: flagger.app/v1alpha3
kind: Canary
metadata:
  name: podinfo
  namespace: test
spec:
  # service mesh provider (optional)
  # can be: kubernetes, istio, linkerd, appmesh, nginx, contour, gloo, supergloo
  provider: istio
  # deployment reference
  targetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: podinfo
  # the maximum time in seconds for the canary deployment
  # to make progress before it is rolled back (default 600s)
  progressDeadlineSeconds: 60
  # HPA reference (optional)
  autoscalerRef:
    apiVersion: autoscaling/v2beta1
    kind: HorizontalPodAutoscaler
    name: podinfo
  service:
    # service name (optional)
    name: podinfo
    # ClusterIP port number
    port: 9898
    # container port name or number (optional)
    targetPort: 9898
    # port name can be http or grpc (default http)
    portName: http
    # HTTP match conditions (optional)
    match:
      - uri:
          prefix: /
    # HTTP rewrite (optional)
    rewrite:
      uri: /
    # request timeout (optional)
    timeout: 5s
  # promote the canary without analysing it (default false)
  skipAnalysis: false
  # define the canary analysis timing and KPIs
  canaryAnalysis:
    # schedule interval (default 60s)
    interval: 1m
    # max number of failed metric checks before rollback
    threshold: 10
    # max traffic percentage routed to canary
    # percentage (0-100)
    maxWeight: 50
    # canary increment step
    # percentage (0-100)
    stepWeight: 5
    # Istio Prometheus checks
    metrics:
    # builtin checks
    - name: request-success-rate
      # minimum req success rate (non 5xx responses)
      # percentage (0-100)
      threshold: 99
      interval: 1m
    - name: request-duration
      # maximum req duration P99
      # milliseconds
      threshold: 500
      interval: 30s
    # custom check
    - name: "kafka lag"
      threshold: 100
      query: |
        avg_over_time(
          kafka_consumergroup_lag{
            consumergroup=~"podinfo-consumer-.*",
            topic="podinfo"
          }[1m]
        )
    # testing (optional)
    webhooks:
      - name: load-test
        url: http://flagger-loadtester.test/
        timeout: 5s
        metadata:
          cmd: "hey -z 1m -q 10 -c 2 http://podinfo.test:9898/"

For more details on how the canary analysis and promotion works please read the docs.

Features

| Feature | Istio | Linkerd | App Mesh | NGINX | Gloo | Contour | CNI |
| ------- | ----- | ------- | -------- | ----- | ---- | ------- | --- |
| Canary deployments (weighted traffic) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| A/B testing (headers and cookies routing) | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: |
| Blue/Green deployments (traffic switch) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Webhooks (acceptance/load testing) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Manual gating (approve/pause/resume) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Request success rate check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Request duration check (L7 metric) | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_minus_sign: |
| Custom promql checks | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: | :heavy_check_mark: |
| Traffic policy, CORS, retries and timeouts | :heavy_check_mark: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_minus_sign: | :heavy_check_mark: | :heavy_minus_sign: |

Roadmap

  • Integrate with other service meshes like Consul Connect and ingress controllers like HAProxy and ALB
  • Add support for comparing the canary metrics to the primary ones and validating based on the deviation between the two

Contributing

Flagger is Apache 2.0 licensed and accepts contributions via GitHub pull requests.

When submitting bug reports, please include as many details as possible:

  • which Flagger version
  • which Flagger CRD version
  • which Kubernetes version
  • what configuration (canary, ingress and workloads definitions)
  • what happened (Flagger and Proxy logs)

Getting Help

If you have any questions about Flagger and progressive delivery, please reach out; your feedback is always welcome!
