Built-in KEDA Scalers

Kedify fully supports all the open-source scalers available in KEDA. That means you can keep using the KEDA event sources you already rely on for queues, databases, cloud services, and batch jobs, while running them on a Kedify-managed autoscaling stack.

Under the hood, the model stays familiar:

  1. A KEDA scaler evaluates an event source or custom metric.
  2. KEDA exposes the resulting metric to Kubernetes.
  3. The Horizontal Pod Autoscaler (HPA) adjusts replica count for the target workload.
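In practice, all three steps are driven by a single ScaledObject resource. A minimal sketch using the upstream RabbitMQ scaler (the deployment name, queue name, and threshold are illustrative assumptions, not values from this page):

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: worker-scaler            # illustrative name
spec:
  scaleTargetRef:
    name: worker                 # Deployment to scale (assumed to exist)
  minReplicaCount: 0             # allow scale to zero when the queue is empty
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq             # step 1: the scaler evaluates the event source
      metadata:
        queueName: orders
        mode: QueueLength
        value: "10"              # target messages per replica
        hostFromEnv: RABBITMQ_HOST   # connection string read from the workload's env
```

From this one resource, KEDA exposes the queue length as an external metric (step 2) and creates and manages the HPA that adjusts replicas for the target (step 3).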

Kedify keeps that KEDA and HPA flow intact and adds operational capabilities around it, such as centralized management, fleet-wide guardrails, additional scaler options, and managed installation of the Kedify build of KEDA.

Built-in KEDA scalers are usually the right choice when:

  • your workload already scales well from an upstream KEDA event source such as Kafka, RabbitMQ, SQS, or Prometheus
  • you want standard event-driven autoscaling behavior without changing your application architecture
  • you need compatibility with the broad KEDA scaler ecosystem
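For instance, a workload that already scales well from a Prometheus query needs nothing beyond the standard upstream trigger; the server address, query, and threshold below are illustrative assumptions:

```yaml
triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090   # assumed in-cluster Prometheus endpoint
      query: sum(rate(http_requests_total{app="web"}[2m]))   # illustrative request-rate query
      threshold: "100"                                       # target value per replica
```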

Choose a Kedify-native capability when the upstream KEDA scaler alone is not enough:

  • HTTP Scaler for HTTP and scale-to-zero traffic patterns without relying on Prometheus
  • HTTP Scaler (Inference) for AI inference traffic, request queuing, and cold-start handling
  • OTel Scaler when you want to autoscale from OpenTelemetry metrics instead of operating a full Prometheus stack
  • Predictive Scaler when reactive autoscaling is not enough and you need forecasts for recurring demand
  • Vertical Scalers when the right answer is changing pod resources, not just replica counts
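Kedify-native capabilities are configured through the same ScaledObject API, just with a Kedify trigger type. A rough sketch of the HTTP Scaler from the list above, assuming a `kedify-http` trigger whose metadata field names should be verified against the current Kedify documentation:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: web-http-scaler          # illustrative name
spec:
  scaleTargetRef:
    name: web                    # Deployment receiving the HTTP traffic (assumed)
  minReplicaCount: 0             # scale to zero when no traffic is in flight
  maxReplicaCount: 10
  triggers:
    - type: kedify-http          # Kedify-native trigger; field names below are assumptions
      metadata:
        hosts: www.example.com
        service: web
        port: "8080"
        scalingMetric: requestRate
```

Because the flow is still ScaledObject → metric → HPA, switching a workload between a built-in KEDA scaler and a Kedify-native one is a change to the trigger block, not to the application.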