Horizontal Pod Autoscaling with Custom Metric from Different Namespace

Zhimin Wen · ITNEXT · Jun 21, 2020

Horizontal Pod Autoscaling based on custom metrics normally requires the custom metrics to be available in the namespace where the pods are running. Now suppose the custom metric appears in a different namespace; how can we achieve HPA with that metric?

Prometheus Adapter

First, let’s install the Prometheus adapter to provide the custom metrics. Given the following values.yaml,

metricsRelistInterval: 1m
listenPort: 6443
prometheus:
  url: http://prometheus-operated.monitoring.svc
  port: 9090
rbac:
  create: true
serviceAccount:
  create: true
  name: prom-adaptor
rules:
  default: false

Install the chart in a namespace, say monitoring,

helm install prom-adaptor stable/prometheus-adapter -f values.yaml -n monitoring
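
The adapter registers itself as the custom metrics API with the Kubernetes API server. As a quick sanity check that the installation worked (assuming the chart creates the usual v1beta1.custom.metrics.k8s.io APIService),

kubectl get apiservices v1beta1.custom.metrics.k8s.io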

Create Prometheus ServiceMonitor

Using the Prometheus operator, create a ServiceMonitor custom resource so that the MQ queue depth can be monitored (see my previous post for more details),

apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: qmon-svc-mon
  namespace: monitoring
  labels:
    app: abt
spec:
  selector:
    matchLabels:
      app: abt
  endpoints:
  - port: metrics
    metricRelabelings:
    - sourceLabels: [namespace]
      regex: '(.*)'
      replacement: myapp
      targetLabel: target_namespace

Notice that we add a custom label, "target_namespace", with a fixed value of "myapp", by using metricRelabelings. The collected metric will carry this additional label, as shown below,

{…, target_namespace="myapp"}
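
We can verify that the relabelled series is present in Prometheus before wiring it into the adapter. For example, with the Prometheus service port-forwarded locally (service and metric names as used above),

kubectl -n monitoring port-forward svc/prometheus-operated 9090 &
curl -sG 'http://localhost:9090/api/v1/query' \
  --data-urlencode 'query=mq_mon_queue_depth{target_namespace="myapp"}'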

Custom Metric Rules

Now we update the ConfigMap, which is named after the release name of the chart, to supply our custom metric rules.

kind: ConfigMap
apiVersion: v1
metadata:
  # release name of the chart
  name: prom-adaptor-prometheus-adapter
  namespace: monitoring
data:
  config.yaml: |
    rules:
    - seriesQuery: 'mq_mon_queue_depth'
      resources:
        overrides:
          target_namespace:
            resource: "namespace"
          pod:
            resource…
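
With the target_namespace label mapped to the namespace resource (and assuming the truncated pod override maps to the pod resource), the metric becomes discoverable in the myapp namespace through the custom metrics API once the adapter relists (metricsRelistInterval is 1m). A quick check, using the metric and namespace names from above,

kubectl get --raw "/apis/custom.metrics.k8s.io/v1beta1/namespaces/myapp/pods/*/mq_mon_queue_depth" | jq .

An HPA in the myapp namespace can then consume it as a Pods metric. A minimal sketch, where the Deployment name and the target average value are placeholders,

apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa
  namespace: myapp
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp          # placeholder: the workload to scale
  minReplicas: 1
  maxReplicas: 5
  metrics:
  - type: Pods
    pods:
      metric:
        name: mq_mon_queue_depth
      target:
        type: AverageValue
        averageValue: "5"   # placeholder: target queue depth per pod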
