Welcome to Kedify!

Kedify autoscales any cluster workload to optimize performance and reduce cost by 20% or more.

For a complete, dashboard-based installation guide, see the Installation page.

Kedify is a managed autoscaling platform built on KEDA and Kubernetes’ built-in Horizontal Pod Autoscaler. It extends event-driven autoscaling with enterprise-ready capabilities for APIs, AI workloads, and multi-cluster operations.

You can use Kedify to:

  • Scale your workloads based on HTTP requests, messages in a queue, or any other event source or custom metric with the OpenTelemetry Scaler
  • Scale proactively with the Predictive Scaler
  • Use Vertical Scalers for instant utilization-based autoscaling with Pod Resource Autoscaler and declarative/event-driven transitions with Pod Resource Profiles
  • Scale and manage workloads across clusters with Multi-Cluster Scaling
  • Apply fleet guardrails and scheduling rules with Scaling Groups and Scaling Policy
  • Securely install KEDA with no CVEs in less than 90 seconds
  • Monitor and visualize your workload autoscaling
  • Get resource and configuration recommendations

Kedify is made up of several interrelated components:

  • Kedify build of KEDA: the Kedify-managed KEDA distribution installed in your cluster, providing the core event-driven autoscaling engine.
  • Kedify Agent: a secure gRPC-based agent service that manages KEDA, provides telemetry, and maintains security settings.
  • Kedify Custom Resource Definitions: YAML-defined resources that specify how and when to scale deployments, as well as which event sources to watch.
  • Kedify add-ons (scalers): built-in and Kedify-provided scaling integrations such as the HTTP Scaler, OpenTelemetry Scaler, and Predictive Scaler.
  • Kedify Dashboard: an intuitive user interface for monitoring resources and autoscaling activity, as well as managing KEDA installations across clusters.
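To give a sense of the CRD-based model these components operate on, here is a minimal sketch of a standard KEDA ScaledObject, the kind of resource the Kedify build of KEDA reconciles. The workload name, queue, and threshold values are illustrative, not taken from any Kedify default:

```yaml
# A standard KEDA ScaledObject; Kedify builds on this same CRD model.
# The names, queue, and thresholds below are illustrative examples.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: order-processor
spec:
  scaleTargetRef:
    name: order-processor        # the Deployment to scale
  minReplicaCount: 1
  maxReplicaCount: 20
  triggers:
    - type: rabbitmq             # scale on queue depth
      metadata:
        queueName: orders
        mode: QueueLength
        value: "50"              # target messages per replica
        hostFromEnv: RABBITMQ_HOST
```

Applying a resource like this with `kubectl apply` is all that is needed for KEDA to begin driving the workload's replica count from the event source; the Kedify Dashboard then surfaces the resulting scaling activity.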