
Kedify Predictive Scaler - OTLP Receiver

There are currently three ways to get metrics into the Predictor:

  • from a static CSV file
  • dynamically from an existing KEDA trigger (defined either on a ScaledObject or a ScaledJob)
  • using the OTLP receiver

This tutorial focuses on the last method: the gRPC-based OTLP receiver.

Make sure the Kedify Agent and the Kedify Predictor are both correctly installed and running; consult the installation docs if needed.
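A quick sanity check, assuming the default installation into the keda namespace (adjust the namespace and names to your setup):

# both the Agent and the Predictor should be available
kubectl get deploy -n keda
# the Predictor's service should expose the OTLP port (4317 by default)
kubectl get svc -n keda kedify-predictor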

The OTLP receiver is enabled by default and is agnostic to the metric storage the Predictor uses. However, if you want to change the port or the Kubernetes service, or add TLS settings, you can do so using custom Helm chart values for the Kedify Predictor.

Example:

kedaPredictionController:
  otlpReceiver:
    enabled: true
    port: 4317
    # nodePort: 31112
    tls:
      # (optional) path to CA certificate. When provided, the client certificate will be verified using this CA, where "client" ~ another OTLP exporter.
      caFile: ""
      # (optional) path to TLS certificate that will be used for the OTLP receiver
      certFile: ""
      # (optional) path to TLS key that will be used for the OTLP receiver
      keyFile: ""
      # (optional) specifies the duration after which the certificates will be reloaded.
      # This is useful when using cert-manager for rotating the certs mounted as Secrets.
      reloadInterval: "5m"
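To apply such values, upgrade the Helm release that installed the Predictor; the release and chart references below are placeholders, use the ones from your installation:

helm upgrade -i kedify-predictor <kedify-predictor-chart> -n keda -f values.yaml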

We will deploy a custom Prometheus exporter that periodically fetches weather information and exposes it as metrics. Every ten minutes it calls the openweathermap.org API for the current weather in San Francisco.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: weather-exporter
spec:
  selector:
    matchLabels:
      app: weather-exporter
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: weather-exporter
    spec:
      containers:
        - name: weather-exporter
          image: ghcr.io/kedify/weather-prometheus-exporters
          args:
            - -config=/cfg/config.json
            - -addr=:8080
          ports:
            - containerPort: 8080
          volumeMounts:
            - name: weather-exporter-config
              mountPath: /cfg
          env:
            - name: OPEN_WEATHER_APP_ID
              value: 4524f75e02d8b944ee74bb0d3ddc6efc
      volumes:
        - name: weather-exporter-config
          configMap:
            name: weather-exporter-config
            defaultMode: 420
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: weather-exporter-config
  namespace: default
data:
  # San Francisco coords
  config.json: |
    {
      "OpenWeather": {
        "CurrentWeatherData": {
          "Enabled": true,
          "Coords": [{ "Lat": 37.7739, "Lon": -122.4312 }],
          "Interval": "600s"
        }
      }
    }
# also create a service for the exporter
kubectl expose deploy/weather-exporter --port 8080 --name weather-exporter
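Optionally, verify the exporter serves the metrics; the /metrics path follows the usual Prometheus exporter convention and is an assumption here:

kubectl port-forward deploy/weather-exporter 8080:8080 &
curl -s localhost:8080/metrics | grep open_weather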

And finally, deploy the OTel collector that will scrape it and forward the metrics to the Predictor. With the following configuration, it scrapes the metrics every minute.

cat <<VALS | helm upgrade -i otelcol oci://ghcr.io/open-telemetry/opentelemetry-helm-charts/opentelemetry-collector --version=0.143.0 -f -
image:
  repository: otel/opentelemetry-collector-contrib
mode: deployment
alternateConfig:
  receivers:
    prometheus:
      config:
        scrape_configs:
          - job_name: 'weather'
            scrape_interval: 60s
            static_configs:
              - targets: ['weather-exporter:8080']
  exporters:
    otlp/kedify-predictor:
      endpoint: kedify-predictor.keda.svc:4317
      compression: "none"
      tls:
        insecure: true
    debug:
      verbosity: detailed
  processors:
    filter/ottl:
      error_mode: ignore
      metrics:
        # drop every metric except open_weather_clouds_all
        metric:
          - name != "open_weather_clouds_all"
  extensions:
    health_check:
      endpoint: \${env:MY_POD_IP}:13133
  service:
    extensions: [health_check]
    pipelines:
      metrics:
        receivers:
          - prometheus
        processors:
          - filter/ottl
        exporters: [debug, otlp/kedify-predictor]
VALS
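To watch the collector's output, tail its logs; the deployment name below is derived from the otelcol release name used above and may differ in your cluster:

kubectl logs -f deploy/otelcol-opentelemetry-collector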

Then, in the OTel collector logs, you should see something like:

2026-01-20T15:36:02.018Z  info  ResourceMetrics #0
Resource SchemaURL:
Resource attributes:
     -> service.name: Str(weather)
     -> server.address: Str(weather-exporter)
     -> service.instance.id: Str(weather-exporter:8080)
     -> server.port: Str(8080)
     -> url.scheme: Str(http)
ScopeMetrics #0
ScopeMetrics SchemaURL:
InstrumentationScope github.com/open-telemetry/opentelemetry-collector-contrib/receiver/prometheusreceiver 0.143.0
Metric #0
Descriptor:
     -> Name: open_weather_clouds_all
     -> Description:
     -> Unit:
     -> DataType: Gauge
NumberDataPoints #0
Data point attributes:
     -> id: Str(5391959)
     -> name: Str(San Francisco)
StartTimestamp: 1970-01-01 00:00:00 +0000 UTC
Timestamp: 2026-01-20 15:36:02.002 +0000 UTC
Value: 75.000000
{"resource": {"service.instance.id": "7301fd7f-b9e9-4703-b5c3-761e08c820ba", "service.name": "otelcol-contrib", "service.version": "0.143.0"}, "otelcol.component.id": "debug", "otelcol.component.kind": "exporter", "otelcol.signal": "metrics"}

This means the metric is flowing from the Prometheus exporter app, via the OTel collector that periodically scrapes it, to the Kedify Predictor's OTLP receiver. It does not mean, however, that the Predictor stores it in its metric store. For that, you first have to create a MetricPredictor custom resource:

cat <<MP | kubectl apply -f -
kind: MetricPredictor
apiVersion: keda.kedify.io/v1alpha1
metadata:
  name: cloudiness
spec:
  source:
    otel:
      metricName: open_weather_clouds_all
      requiredLabels:
        name: "San Francisco"
  model:
    type: Prophet
    name: cloudiness
    defaultHorizon: 1h
    retrainInterval: 2d
MP
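You can verify that the resource was accepted and inspect its state; the exact status fields may vary between versions:

kubectl get metricpredictor cloudiness
kubectl describe metricpredictor cloudiness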

By creating the MetricPredictor resource, we effectively did a couple of important things:

  • enabled the storage of incoming OTLP metric points
  • created the association that all incoming metric points of type Gauge named 'open_weather_clouds_all' will be used for the model called 'cloudiness'
  • restricted the metric to only those data points that carry the tag or label name=San Francisco
  • and finally, enabled periodic retraining for the model

Once the model has been retrained for the first time and its MAPE (prediction performance) is not terrible, you can start using it. Using the trained model for predictions works the same way as in the use case with the KEDA metric source.

This means you can reference the model from the kedify-predictive trigger in a similar way as shown here.

# part of Scaled{Object,Job}
triggers:
  - type: kedify-predictive
    name: cloudiness
    metadata:
      modelName: cloudiness
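For context, here is a minimal ScaledObject sketch; the scale target and any additional trigger metadata (such as a threshold) are assumptions and depend on your workload:

apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: solar-infra
spec:
  scaleTargetRef:
    name: my-workload   # hypothetical Deployment to scale
  triggers:
    - type: kedify-predictive
      name: cloudiness
      metadata:
        modelName: cloudiness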

This way we can create an autoscaling rule based on real historical data, in this particular case the cloudiness in San Francisco. Now, if the cloudiness in San Francisco exhibits seasonal effects that can be predicted to some degree, Prophet will find them and suggest the future values. Predicting the weather is nearly impossible, but there will probably be more clouds during certain seasons or times of day. If we own a solar power plant, we can scale our infrastructure upfront. Or we can mix the real measured data with the predicted data using KEDA scaling modifiers. Sky/clouds is the limit.
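A minimal sketch of such mixing using KEDA scaling modifiers; the second trigger, its name, and the formula and target values are illustrative assumptions:

# part of the ScaledObject spec
advanced:
  scalingModifiers:
    # scale on whichever is higher: the predicted or the currently measured value
    formula: "max(cloudiness, measured)"
    target: "50"
triggers:
  - type: kedify-predictive
    name: cloudiness
    metadata:
      modelName: cloudiness
  - type: prometheus          # hypothetical trigger supplying the live measurement
    name: measured
    metadata:
      serverAddress: http://prometheus.monitoring.svc:9090
      query: avg(open_weather_clouds_all)
      threshold: "50"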

Prometheus Remote-Write as a Possible Metric Source


We have shown that we can use an arbitrary existing OTel receiver and forward the metrics to the Kedify Predictor. The previous example used the Prometheus receiver, which periodically scrapes an HTTP endpoint that exposes the metrics in plain text.

If we need a metric coming from the Prometheus remote-write protocol because our monitoring solution supports it, the example stays the same up to the OTel collector's configuration.

receivers:
  prometheusremotewrite:
    endpoint: 0.0.0.0:9090
exporters: ...
processors: ...
extensions: ...
service: ...  # remember to list prometheusremotewrite among the metrics pipeline's receivers

Then, if the tool that is supposed to feed the metrics is configured to send them to the OTel collector's Kubernetes service on port 9090, they will be relayed to the Kedify Predictor.
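For example, a Prometheus server could be pointed at the collector like this; the service name is an assumption derived from the otelcol release above, and the /api/v1/write path follows the remote-write convention (check the receiver's docs for the exact path):

# part of prometheus.yml
remote_write:
  - url: http://otelcol-opentelemetry-collector.default.svc:9090/api/v1/write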

For all the available receivers, please consult the upstream docs.