Grafana Dashboard in OCP4.2

It came as a surprise to me that we cannot create any new Grafana dashboards in the out-of-the-box Grafana instance shipped with OpenShift 4.2. Check out the manual,

The Grafana instance that is provided with the monitoring stack, along with its dashboards, is read-only.

Ok. That’s why. So we have to bring in our own Grafana instance to visualize the monitoring data. Naturally, we could use the Grafana Operator from the OperatorHub, with the OCP Prometheus as the data source.

The Prometheus container is wrapped inside the pod, and its port (9090) is bound to localhost only. The Prometheus service has to be accessed through the sidecar container, prometheus-proxy, so that access control can be applied. We don’t want to break the security design by changing the port binding to listen on all interfaces.

Instead, we could use bearer token authentication via a custom HTTP header, which is available in Grafana 6.3 onwards (based on the Grafana website documentation).
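As a quick sanity check, the same bearer-token mechanism can be exercised with curl before wiring it into Grafana. This is a hedged sketch: the token value is a placeholder, and the in-cluster Prometheus URL is only reachable from inside the cluster, so the actual query is shown commented out.

```shell
# Build the same header Grafana will send; TOKEN is a placeholder value.
TOKEN="eyJhbGciOiJSUzI1NiIs..."
AUTH_HEADER="Authorization: Bearer ${TOKEN}"

# From inside the cluster, this would query the 'up' metric through the sidecar:
# curl -sk -H "${AUTH_HEADER}" \
#   "https://prometheus-k8s.openshift-monitoring.svc:9091/api/v1/query?query=up"
echo "${AUTH_HEADER}"
```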

However, the OCP web console shows that the current Grafana Operator is at version 2.0.0, and the Grafana version associated with it doesn’t have the custom HTTP header option. So we cannot use the standard Operator Lifecycle Manager (OLM) way to install and manage the operator.

Since OpenShift is an extended Kubernetes solution, the original concept of Kubernetes can still be applied.

Clone the grafana-operator repo and create the required Kubernetes objects,

git clone https://github.com/integr8ly/grafana-operator.git
cd grafana-operator
kubectl create namespace grafana
kubectl create -f deploy/crds
kubectl create -f deploy/roles -n grafana
kubectl create -f deploy/cluster_roles
kubectl create -f deploy/operator.yaml -n grafana

Once the operator is installed, we can apply a Grafana custom resource and the operator will create the Grafana deployment accordingly.

apiVersion: integreatly.org/v1alpha1
kind: Grafana
metadata:
  name: grafana
  namespace: grafana
spec:
  ingress:
    enabled: True
  config:
    log:
      mode: "console"
      level: "debug"
    security:
      admin_user: "root"
      admin_password: "secret"
    auth:
      disable_login_form: False
      disable_signout_menu: True
    auth.anonymous:
      enabled: False
  dashboardLabelSelector:
    - matchExpressions:
        - {key: app, operator: In, values: [grafana]}

Now the Grafana Pod will be running. We need to configure the Prometheus data source.

Let's create a service account; we will use its token to authenticate to the Prometheus sidecar in OCP 4.2.

oc -n grafana create sa prometheus-reader
oc -n grafana adm policy add-cluster-role-to-user view -z prometheus-reader

Instead of creating YAML, we use the oc command line tool to bind the cluster role to the service account.
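For reference, the same binding expressed as YAML might look like the sketch below. The field names follow the standard rbac.authorization.k8s.io API; the object name is an assumption.

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: prometheus-reader-view   # hypothetical name
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: view
subjects:
  - kind: ServiceAccount
    name: prometheus-reader
    namespace: grafana
```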

Similarly, with the oc tool, retrieve the token.

oc -n grafana serviceaccounts get-token prometheus-reader
eyJhbGciOiJSUzI1NiIsImtpZCI6IiJ9....Skipped...
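Before pasting the token into Grafana, a quick hedged sanity check: a service account token is a JWT, i.e. three base64url sections separated by dots. The token below is a placeholder standing in for the real output of get-token.

```shell
# Placeholder token with the JWT shape: header.payload.signature
TOKEN="eyJhbGciOiJSUzI1NiJ9.eyJzdWIiOiJwcm9tZXRoZXVzLXJlYWRlciJ9.c2ln"
# Count the dot-separated parts; a well-formed JWT has exactly 3.
PARTS=$(echo "${TOKEN}" | awk -F'.' '{print NF}')
echo "${PARTS}"
```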

Now we can use the custom HTTP header feature to authenticate access to the OCP Prometheus. Create the following custom resource,

apiVersion: integreatly.org/v1alpha1
kind: GrafanaDataSource
metadata:
  name: grafanadatasource-prometheus
  namespace: grafana
spec:
  name: grafanadatasource-prometheus.yaml
  datasources:
    - name: Prometheus
      type: prometheus
      access: proxy
      url: "https://prometheus-k8s.openshift-monitoring.svc:9091"
      basicAuth: false
      withCredentials: false
      isDefault: true
      version: 1
      editable: true
      jsonData:
        tlsSkipVerify: true
        timeInterval: "5s"
        httpHeaderName1: "Authorization"
      secureJsonData:
        httpHeaderValue1: "Bearer eyJh....Skipped..."

Apply it; once the Grafana pod has restarted, we will have a full-blown Grafana running. A sample of the Grafana status dashboard is shown below.
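Since the Grafana resource selects dashboards by the label app: grafana, new dashboards are added as GrafanaDashboard custom resources carrying that label. A minimal hedged sketch, where the resource name and the dashboard JSON are trivial placeholders:

```yaml
apiVersion: integreatly.org/v1alpha1
kind: GrafanaDashboard
metadata:
  name: simple-dashboard          # hypothetical name
  namespace: grafana
  labels:
    app: grafana                  # must match the dashboardLabelSelector
spec:
  name: simple-dashboard.json
  json: |
    {
      "title": "Simple Dashboard",
      "panels": []
    }
```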
