
HTTP Scaling with Argo Rollouts Canary

This guide shows how to run a canary deployment with Argo Rollouts while keeping your application autoscaled by Kedify. The kedify/http Argo Rollouts traffic router plugin translates each canary setWeight step into a structured http.kedify.io/weighted-backends annotation on the matching HTTPScaledObject, and the Kedify interceptor turns that annotation into Envoy weighted_clusters. This means the traffic splitting happens inside kedify-proxy rather than at the load balancer or ingress controller.

Argo Rollouts owns the canary lifecycle (image promotion, weight progression, pause conditions). Kedify owns the autoscaling of the underlying Rollout resource and the traffic split between the stable and canary services.

Argo Rollouts → SetWeight(N)
        ↓ patches
HTTPScaledObject annotation: http.kedify.io/weighted-backends
  - service: stable
    weight: 100-N
  - service: canary
    weight: N
        ↓ observed by
Kedify interceptor → Envoy weighted_clusters
kedify-proxy splits traffic between stable / canary services
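
Concretely, with stable and canary services named rollouts-demo-stable and rollouts-demo-canary (the names used later in this guide), a setWeight: 20 step leaves an annotation like this on the HTTPScaledObject (illustrative values, matching the format above):

```yaml
metadata:
  annotations:
    http.kedify.io/weighted-backends: |
      - service: rollouts-demo-stable
        weight: 80
      - service: rollouts-demo-canary
        weight: 20
```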

A few things worth knowing up front:

  • When the ScaledObject targets the Rollout, the kedify-http scaler reads spec.strategy.canary.stableService from the Rollout, so you shouldn’t set service in the trigger metadata.
  • Ingress autowire keeps your upstream Ingress pointing at kedify-proxy. The ingress controller sees a single, stable backend; the split is done one hop downstream, inside the kedify-proxy.
  • When the rollout fully promotes, the plugin removes the http.kedify.io/weighted-backends annotation and the interceptor reverts to a single-cluster route pointing at the now-promoted stable service.
Prerequisites

  • A running Kubernetes cluster.
  • An Ingress controller that publishes an external address on the Ingress status (e.g., a LoadBalancer IP or hostname).
  • KEDA + Kedify (HTTP add-on) installed, with the Kedify agent running. See HTTP Scaling for Ingress-based Applications for the base install.
  • kubectl argo rollouts plugin for driving promotions ergonomically.

Step 1: Install Argo Rollouts with RBAC and the kedify/http plugin


The Argo Rollouts controller needs two things to drive a Kedify-backed canary:

  1. RBAC permission to patch HTTPScaledObject resources.
  2. The kedify/http traffic router plugin registered under that name in the argo-rollouts-config ConfigMap. The plugin ships as a pre-built binary on its GitHub releases page; the controller downloads it on startup, verifies its SHA-256, and caches it on the pod’s local disk.

You have two options for wiring both pieces up.

Option A: argo-helm values

If you install Argo Rollouts via the argo-helm chart, both pieces fit into one values file: providerRBAC.additionalRules extends the controller’s ClusterRole, and controller.trafficRouterPlugins populates the argo-rollouts-config ConfigMap that the chart manages.

argo-rollouts-values.yaml
providerRBAC:
  additionalRules:
    - apiGroups: ["http.keda.sh"]
      resources: ["httpscaledobjects"]
      verbs: ["get", "list", "watch", "patch", "update"]
controller:
  trafficRouterPlugins:
    - name: "kedify/http"
      location: "https://github.com/kedify/argo-rollouts-plugin/releases/download/v0.0.1/rollouts-plugin-kedify-linux-amd64"
      sha256: "6cd7597788f9ceeee3406695b64022c63ddb77e9b946dd0295bf10969b985814"
Terminal window
helm repo add argo https://argoproj.github.io/argo-helm
helm upgrade --install argo-rollouts argo/argo-rollouts \
--namespace argo-rollouts --create-namespace \
--values argo-rollouts-values.yaml

For non-linux-amd64 controller nodes, swap the location for the matching asset and pin its SHA-256 from v0.0.1’s checksums.txt:

Asset                                 SHA-256
rollouts-plugin-kedify-linux-amd64    6cd7597788f9ceeee3406695b64022c63ddb77e9b946dd0295bf10969b985814
rollouts-plugin-kedify-linux-arm64    8776c96475b699a05a87699e840ef9263884d40c6382ab1dd4023ac8ff42c123
rollouts-plugin-kedify-darwin-amd64   82dced37ae4c0124874f1e07dce503807f1d0099a2301eb9d6d6717e5d415f80
rollouts-plugin-kedify-darwin-arm64   22bf1e7364d97cfdd72c98ef78d16c24d769637898d4e58089aa21de1250a2ca

For newer plugin releases, browse the releases page and read the matching checksums.txt.
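
If you mirror a binary internally or swap assets, it’s worth checking the digest yourself before pinning it; the controller does the equivalent of sha256sum --check and refuses to load a mismatched binary. A local sketch with a stand-in file (substitute the real downloaded asset):

```shell
# Compute the digest of the (stand-in) binary and verify it the way the
# controller does; a mismatch makes sha256sum --check exit non-zero.
cd "$(mktemp -d)"
printf 'demo' > plugin-binary
expected=$(sha256sum plugin-binary | awk '{print $1}')
echo "${expected}  plugin-binary" | sha256sum --check -
# prints: plugin-binary: OK
```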

Option B: stand-alone ClusterRole + ConfigMap patch


If you can’t (or don’t want to) reinstall Argo Rollouts via the chart, apply RBAC as a separate ClusterRole and ClusterRoleBinding, and patch the existing argo-rollouts-config ConfigMap to register the plugin.

apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: argo-rollouts-kedify-http
rules:
  - apiGroups: ["http.keda.sh"]
    resources: ["httpscaledobjects"]
    verbs: ["get", "list", "watch", "patch", "update"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: argo-rollouts-kedify-http
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: argo-rollouts-kedify-http
subjects:
  - kind: ServiceAccount
    name: argo-rollouts
    namespace: argo-rollouts
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: argo-rollouts-config
  namespace: argo-rollouts
data:
  trafficRouterPlugins: |
    - name: "kedify/http"
      location: "https://github.com/kedify/argo-rollouts-plugin/releases/download/v0.0.1/rollouts-plugin-kedify-linux-amd64"
      sha256: "6cd7597788f9ceeee3406695b64022c63ddb77e9b946dd0295bf10969b985814"

Adjust the ClusterRoleBinding’s subjects: namespace and name if your Argo Rollouts install runs under a different ServiceAccount. See the platform table above for non-linux-amd64 plugin assets.

Restart the controller so it picks up the new plugin:

Terminal window
kubectl -n argo-rollouts rollout restart deploy/argo-rollouts

A complete, runnable example (Rollout + Services + Ingress + ScaledObject) lives at kedify/examples/samples/argo-rollouts-canary. The minimal manifest looks like this:

apiVersion: v1
kind: Service
metadata:
  name: rollouts-demo-stable
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: rollouts-demo
---
apiVersion: v1
kind: Service
metadata:
  name: rollouts-demo-canary
spec:
  ports:
    - port: 80
      targetPort: 8080
  selector:
    app: rollouts-demo
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: rollouts-demo
spec:
  rules:
    - host: rollouts-demo.example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: rollouts-demo-stable
                port:
                  number: 80
---
apiVersion: argoproj.io/v1alpha1
kind: Rollout
metadata:
  name: rollouts-demo
spec:
  replicas: 2
  selector:
    matchLabels:
      app: rollouts-demo
  template:
    metadata:
      labels:
        app: rollouts-demo
    spec:
      containers:
        - name: rollouts-demo
          image: argoproj/rollouts-demo:blue
          ports:
            - name: http
              containerPort: 8080
  strategy:
    canary:
      stableService: rollouts-demo-stable
      canaryService: rollouts-demo-canary
      trafficRouting:
        plugins:
          kedify/http:
            httpScaledObjectName: rollouts-demo
      steps:
        - setWeight: 20
        - pause: {}
        - setWeight: 50
        - pause: {duration: 30s}
        - setWeight: 80
        - pause: {duration: 30s}
---
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: rollouts-demo
spec:
  minReplicaCount: 2
  maxReplicaCount: 10
  scaleTargetRef:
    apiVersion: argoproj.io/v1alpha1
    kind: Rollout
    name: rollouts-demo
  triggers:
    - type: kedify-http
      metricType: AverageValue
      metadata:
        hosts: rollouts-demo.example.com
        pathPrefixes: /
        port: "80"
        scalingMetric: requestRate
        targetValue: "5"
        trafficAutowire: ingress

A few notes on the manifest:

  • No service in the trigger metadata - the Kedify scaler resolves stableService from the Rollout spec automatically.
  • minReplicaCount: 2 matches Rollout.spec.replicas: 2. Argo Rollouts splits the desired replicas between the stable and canary ReplicaSets during a canary; setting minReplicaCount lower can let KEDA scale the stable side to zero at higher canary weights, leaving traffic with no ready stable backend between metric samples.
  • pause: {} (no duration) waits for an explicit promote (manual). Use pause: {duration: 30s} for an automatic delay between steps.
  • keda-operator RBAC - the keda-operator needs to read Rollout resources to resolve the stableService for the scaler. The chart’s default rbac.scaledRefKinds: [{apiGroup: "*", kind: "*"}] generates a wildcard rule that grants get on every API group, so a default Helm install already covers this. If you’ve narrowed scaledRefKinds for a tighter scope, add an entry for Argo Rollouts:
rbac:
  scaledRefKinds:
    - apiGroup: "argoproj.io"
      kind: rollouts
    # ...your existing entries
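
The replica math behind the minReplicaCount note can be sketched: with a traffic router in place, Argo Rollouts by default keeps the stable ReplicaSet at full scale and sizes the canary at weight% of spec.replicas, rounded up (this is tunable via setCanaryScale; the sketch assumes the default):

```shell
# replicas=2, setWeight=20 → 2 stable + ceil(2*0.20)=1 canary = 3 pods total,
# which matches the "Desired: 2 / Current: 3" status output later in this guide.
replicas=2
weight=20
canary=$(( (replicas * weight + 99) / 100 ))   # integer ceil of replicas*weight/100
echo "stable=${replicas} canary=${canary} total=$((replicas + canary))"
# prints: stable=2 canary=1 total=3
```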

Apply the manifest:

Terminal window
kubectl create ns rollouts-demo
kubectl -n rollouts-demo apply -f manifests.yaml

Once the rollout’s stable pods come up, the Kedify agent’s autowire rewrites the Ingress backend to kedify-proxy, and traffic flows Ingress → kedify-proxy → rollouts-demo-stable. At this point, the only cluster on the route is the stable service:

Terminal window
kubectl -n rollouts-demo get rollout,httpso,scaledobject
NAME                                DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
rollout.argoproj.io/rollouts-demo   2         2         2            2           17s

NAME                                          TARGETWORKLOAD   TARGETSERVICE   MINREPLICAS   MAXREPLICAS   AGE   ACTIVE
httpscaledobject.http.keda.sh/rollouts-demo                                    2             10            17s

NAME                                 SCALETARGETKIND                SCALETARGETNAME   MIN   MAX   READY   ACTIVE   FALLBACK   PAUSED   TRIGGERS      AUTHENTICATIONS   AGE
scaledobject.keda.sh/rollouts-demo   argoproj.io/v1alpha1.Rollout   rollouts-demo     2     10    True    True     False      False    kedify-http                     17s

Patch the rollout to a new image. Argo Rollouts moves to step 1 (setWeight: 20) and the plugin patches the HTTPScaledObject (HSO):

Terminal window
kubectl argo rollouts -n rollouts-demo set image rollouts-demo rollouts-demo=argoproj/rollouts-demo:yellow

You can watch the rollout’s status as it progresses through the steps:

Terminal window
kubectl argo rollouts get rollout rollouts-demo -n rollouts-demo -w
Name:            rollouts-demo
Namespace:       rollouts-demo
Status:          Paused
Message:         CanaryPauseStep
Strategy:        Canary
  Step:          1/6
  SetWeight:     20
  ActualWeight:  20
Images:          argoproj/rollouts-demo:blue (stable)
                 argoproj/rollouts-demo:yellow (canary)
Replicas:
  Desired:       2
  Current:       3
  Updated:       1
  Ready:         3
  Available:     3

NAME                                        KIND        STATUS   AGE    INFO
rollouts-demo                               Rollout     Paused   4m48s
├──# revision:2
│  └──⧉ rollouts-demo-85b6995845            ReplicaSet  Healthy  3m3s   canary
│     └──□ rollouts-demo-85b6995845-mxgz5   Pod         Running  3m2s   ready:1/1
└──# revision:1
   └──⧉ rollouts-demo-759566c557            ReplicaSet  Healthy  4m48s  stable
      ├──□ rollouts-demo-759566c557-c9q2x   Pod         Running  4m48s  ready:1/1
      └──□ rollouts-demo-759566c557-plchs   Pod         Running  4m48s  ready:1/1

After a moment, the weighted-backends annotation appears on the HSO:

Terminal window
kubectl -n rollouts-demo get httpso rollouts-demo -o jsonpath='{.metadata.annotations.http\.kedify\.io/weighted-backends}'
- service: rollouts-demo-stable
  weight: 80
- service: rollouts-demo-canary
  weight: 20

Send some traffic - roughly 20% should hit the new (yellow) version:

Terminal window
for i in $(seq 1 50); do
  curl -s -H 'host: rollouts-demo.example.com' http://<ingress-address>/color
  echo
done | sort | uniq -c
     42 "blue"
      8 "yellow"
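
To turn a tally like that into a percentage without eyeballing it, a small awk sketch (using the illustrative counts above):

```shell
# Sum the counts and report the canary ("yellow") share of total requests.
printf '42 "blue"\n8 "yellow"\n' \
  | awk '{total += $1; if ($2 ~ /yellow/) canary = $1}
         END {printf "canary share: %.0f%%\n", 100 * canary / total}'
# prints: canary share: 16%
```

In practice, pipe the `sort | uniq -c` output straight into the awk filter instead of the printf stand-in.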

If you used pause: {} (manual), advance through the remaining steps with the kubectl argo rollouts plugin:

Terminal window
kubectl argo rollouts -n rollouts-demo promote rollouts-demo
rollout 'rollouts-demo' promoted

You can repeat that until the rollout completes. Each setWeight step re-patches the HSO annotation with the new split, and kedify-proxy adjusts the Envoy weighted_clusters accordingly.

When the canary fully promotes, the plugin removes the http.kedify.io/weighted-backends annotation and the interceptor reverts to a single cluster pointing at the stable service (now serving the new image).

To roll back mid-canary:

Terminal window
kubectl argo rollouts -n rollouts-demo abort rollouts-demo

The plugin removes the annotation, the interceptor stops splitting, and traffic goes back to 100% stable while the canary ReplicaSet is scaled down.

Troubleshooting

  • No http.kedify.io/weighted-backends annotation appears after setWeight - check the argo-rollouts controller logs for permission errors:

    Terminal window
    kubectl -n argo-rollouts logs deploy/argo-rollouts | grep -i "httpscaled\|forbidden"

    If you see forbidden, Step 1’s RBAC is missing or pointing at the wrong ServiceAccount.

  • The plugin binary fails to download or load - check the controller’s startup logs:

    Terminal window
    kubectl -n argo-rollouts logs deploy/argo-rollouts | grep -i "plugin"

    Common causes are a wrong sha256 (the controller refuses to run a plugin that doesn’t match the configured checksum), a location that points at an asset for the wrong OS/arch, or a private network that can’t reach github.com. Mirror the binary internally and update the location if needed.

  • Annotation is set, but traffic isn’t splitting - verify kedify-proxy has the weighted clusters in its config:

    Terminal window
    kubectl -n rollouts-demo port-forward svc/kedify-proxy-admin 9901:9901
    curl -s localhost:9901/config_dump | grep -E '"name": "[^"]*rollouts-demo'

    You should see one cluster per backend, named <namespace>/<hso>/<service>.

  • Canary side has zero ready endpoints - confirm the canary ReplicaSet actually scaled up:

    Terminal window
    kubectl -n rollouts-demo get rs -l app=rollouts-demo
    kubectl -n rollouts-demo get endpointslices

    If the canary RS is stuck at 0 replicas, it’s a Rollout-level issue (e.g., maxSurge: 0 on a 1-replica rollout has nowhere to put the canary pod). Increase replicas or revisit the canary strategy.
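
One way to give the canary pod somewhere to land on a small rollout is to allow a surge pod during canary steps; the fragment below is a sketch (maxSurge: 1 is an illustrative value, not part of the example manifest above):

```yaml
strategy:
  canary:
    maxSurge: 1   # allow one extra pod while a canary step runs
    stableService: rollouts-demo-stable
    canaryService: rollouts-demo-canary
```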

Cleanup

When you’re done, delete the demo namespace:

Terminal window
kubectl delete ns rollouts-demo

To remove the plugin from Argo Rollouts, delete the entry from the argo-rollouts-config ConfigMap and restart the controller. If you used Option B in Step 1, also delete the ClusterRole and ClusterRoleBinding.