For the complete documentation index and AI-optimized content, see /llms.txt. All pages support markdown format via .md extension or Accept: text/markdown header.

Kedify Scalers

Scalers determine how and when autoscaling should be activated or deactivated, and they can also provide custom metrics tailored to a particular event source. In a typical Kedify setup, the scaler feeds demand signals into KEDA, and the Kubernetes Horizontal Pod Autoscaler (HPA) applies the resulting replica changes.

  • Choose Built-in KEDA Scalers when a standard upstream event source already matches your workload.
  • Choose HTTP Scaler when request traffic is the scaling signal and scale-to-zero behavior matters.
  • Choose OTEL Scaler when you already use OpenTelemetry or want custom-metric autoscaling without a full Prometheus stack.
  • Choose Predictive Scaler when reactive autoscaling is consistently too late.
  • Choose Vertical Scalers when bottlenecks are caused by per-pod CPU or memory sizing rather than replica count.
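Whichever scaler you choose, it is wired into a workload through a ScaledObject that references the scale target and one or more triggers. The sketch below uses the upstream KEDA cron scaler as a minimal illustration; the Deployment name and schedule are placeholders, not values from this documentation:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: business-hours-scaler
spec:
  scaleTargetRef:
    name: my-deployment        # placeholder: name of the Deployment to scale
  minReplicaCount: 0           # allow scale to zero outside the active window
  triggers:
    - type: cron               # built-in upstream KEDA scaler
      metadata:
        timezone: UTC
        start: "0 8 * * *"     # scale up at 08:00
        end: "0 18 * * *"      # scale back down at 18:00
        desiredReplicas: "5"   # replicas held during the window
```

KEDA reads the trigger, computes the desired demand, and the HPA it manages applies the replica change to the target Deployment.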