I am tracking some EOD job statuses to check whether a job starts later than its scheduled time, or overruns. Built as a Prometheus client program in Golang, the data are collected as a Prometheus gauge metric and displayed in a Grafana dashboard using the table format.

It worked and tested perfectly on my laptop.

However, I then hit the typical problem we all commonly encounter. Yes, it worked on my laptop ;) It didn't work when I deployed it into Kubernetes as a container!

After some troubleshooting, I realized the default timezone in the container was causing the issue.



I have a request to perform Google API searches for some keywords and save the results (the URLs) into text files. It's a good chance to use Go's concurrency patterns to assemble a pipeline, thereby parallelizing the I/O and CPU work.

Google Search API

Given a keyword, say a category, the following function calls the Google search API using go-resty, then picks up the respective links from the result using the gjson library.

func SearchGoogle(category string, count int) ([]SearchResult, error) {
	results := []SearchResult{}
	// ... skip some initial assignment
	for i := 0; i < pages; i++ {
		resp, err := client.R()…
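The fan-out/fan-in shape of the pipeline described above can be sketched as follows. The fake search result stands in for the real SearchGoogle call, and the names here are illustrative, not from the article:

```go
package main

import (
	"fmt"
	"sort"
	"sync"
)

// SearchResult mirrors the struct used by SearchGoogle in the excerpt above.
type SearchResult struct {
	Category string
	Link     string
}

// searchPipeline fans out one goroutine per keyword, so the I/O-bound API
// calls run in parallel, then fans the results back in over one channel,
// where a single consumer formats the lines destined for the text file.
func searchPipeline(keywords []string) []string {
	out := make(chan SearchResult)
	var wg sync.WaitGroup

	for _, kw := range keywords {
		wg.Add(1)
		go func(kw string) {
			defer wg.Done()
			// placeholder for: results, err := SearchGoogle(kw, count)
			out <- SearchResult{Category: kw, Link: "https://example.com/" + kw}
		}(kw)
	}
	// close the channel once every producer goroutine is done
	go func() { wg.Wait(); close(out) }()

	var lines []string
	for r := range out {
		lines = append(lines, fmt.Sprintf("%s\t%s", r.Category, r.Link))
	}
	sort.Strings(lines) // deterministic order for the output file
	return lines
}

func main() {
	for _, l := range searchPipeline([]string{"golang", "kubernetes"}) {
		fmt.Println(l)
	}
}
```

The single consumer also serializes the file writes, so no extra locking is needed on the output side.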

I need to mine some business metrics from a DB2 database and present them in a Grafana dashboard. A Prometheus scraping target is to be developed.

I published a Medium paper about 3 years ago on running DB2 queries in Golang using the DB2 ODBC/CLI driver. Following that, let's create a container image as a Prometheus scraping target and run it in Kubernetes.

The app as a scraping target

Some Golang code excerpts describe how the data collection works.

The DB and metric struct

type MetricConfig struct {
	Name      string `yaml:"name"`
	Desc      string `yaml:"desc"`
	Sql       string `yaml:"sql"`
	Frequency string `yaml:"frequency"`
}
type DBMetricsConfig struct…
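Matching the yaml tags of MetricConfig, a configuration file for the collector could look like the sketch below. The metric name and SQL are made up for illustration, and since DBMetricsConfig is truncated in the excerpt, the metrics top-level key is an assumption:

```yaml
metrics:
  - name: eod_job_elapsed_seconds
    desc: "Elapsed time of the EOD batch job"
    sql: "SELECT ELAPSED FROM BATCH.JOB_STATUS WHERE NAME = 'EOD'"
    frequency: "60s"
```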

I need to run some monitoring in a restricted environment. It is restricted in the sense that, first, it is air-gapped: there is no internet connection. Second, the environment is locked down: no extra dependency modules can be installed.

The lightweight Kubernetes distribution K3s, with its self-contained dependencies, serves the requirement perfectly. This paper shows how I set up Prometheus in the above-mentioned restricted environment.

Install K3s in the airgap server

The airgap setup for K3s is well documented. The following is a quick command list of the packages and scripts that need to be downloaded on an internet-facing server.

curl -LO https://github.com/k3s-io/k3s/releases/download/v1.21.1%2Bk3s1/k3s-airgap-images-amd64.tar.gz
curl…


Recently I needed to test some J2EE integration work on IBM MQ. Since it is for testing purposes, I run it as a container instead of a full-blown VM.

Preparing a custom MQSC file

Other than the default MQ settings, I need to use my custom queue definitions and security controls. In the MQ container, this is achieved by defining a custom MQSC file and putting it into the specific /etc/mqm directory.

Create the following file, 80-my-test.mqsc:

define listener(LISTENER) trptype(tcp) control(qmgr) port(1414) replace
start listener(LISTENER)
define authinfo(my.authinfo) authtype(idpwos) chckclnt(reqdadm) chcklocl(optional) adoptctx(yes) replace
alter qmgr connauth(my.authinfo)
refresh security(*) type(connauth)
def chl(SYSTEM.ADMIN.SVRCONN) chltype(SVRCONN) replace
set chlauth('*') type(addressmap)…

Starting from OpenShift 4.6, user workload monitoring is formally supported by introducing a second Prometheus operator instance in a new namespace called openshift-user-workload-monitoring. This paper demonstrates how a user workload can be monitored and how alerts can be created.

Turn on user workload monitoring

Create or update the following ConfigMap in the openshift-monitoring namespace:

apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true

With the enableUserWorkload key set to true, the second Prometheus operator will be installed, which will create a Prometheus and a Thanos Ruler instance, as shown below:

oc -n openshift-user-workload-monitoring get pods
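Once the second Prometheus is up, a user workload can be scraped by creating a ServiceMonitor in the application's own namespace. A minimal sketch, in which the app name, namespace, and port label are assumptions for illustration:

```yaml
apiVersion: monitoring.coreos.com/v1
kind: ServiceMonitor
metadata:
  name: my-app
  namespace: my-namespace
spec:
  selector:
    matchLabels:
      app: my-app
  endpoints:
  - port: metrics
    interval: 30s
```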

Let's explore how we can integrate an OpenID Connect (OIDC) implementation, Keycloak, as an identity provider for OpenShift, other than the common ones such as HTPasswd or LDAP.

Setup Keycloak on OpenShift

Install the Keycloak operator from the OperatorHub and create a Keycloak instance in the keycloak namespace. We can access the admin web interface once the pods are running. Get the admin user and its password from the corresponding Secret object in the namespace.

Create our own realm, myrealm, first.

Secondly, create the client named idp-4-ocp. In the settings tab, set the “Access Type” to “confidential”. Set the “Valid Redirect URIs” to “https://*” for…
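On the OpenShift side, the Keycloak client is then referenced from the cluster OAuth resource. A sketch of the shape this takes, assuming the client secret has been stored in a Secret named idp-client-secret in openshift-config, and with a hypothetical Keycloak route URL:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: keycloak
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: idp-4-ocp
      clientSecret:
        name: idp-client-secret
      claims:
        preferredUsername:
        - preferred_username
      issuer: https://keycloak-keycloak.apps.example.com/auth/realms/myrealm
```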

Airgap installation is always challenging for OpenShift. By setting up a mirror registry and applying the ImageContentSourcePolicy CRD to the cluster, we can instruct the OCI container engine to retrieve each source image from its mirrored copy hosted in the mirror registry. This solves the airgap image problem for the cluster and the apps.

There is still a third type of image to tackle in an airgap environment: the Operator-related images. This paper documents an Operator-based installation in an air-gapped environment: the steps, the hiccups, and how they were resolved.

OpenShift manages operators through…

With the GA release of volume snapshots, the CSI volume snapshot for stateful application data backup and restore is more mature. This paper explores how we can use the standard Kubernetes snapshot resources to back up and restore the data located on the PV.

Let's use IBM Event Streams as the test target. It runs Kafka as a StatefulSet K8s resource based on the Strimzi Operator. Assume we have 3 replicas of the StatefulSet and the data are saved in the PVCs named data-es-kafka-0, data-es-kafka-1, and data-es-kafka-2 respectively. The PVCs are provisioned by Rook Ceph.

Volume snapshot class

Just as PVC…
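The excerpt is truncated here; as a sketch of where this heads, a VolumeSnapshot references a VolumeSnapshotClass much as a PVC references a StorageClass. For Rook Ceph RBD it could look like the following, where the class name and parameter values are assumptions for illustration:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: ceph-block-snapclass
driver: rook-ceph.rbd.csi.ceph.com
deletionPolicy: Delete
parameters:
  clusterID: rook-ceph
  csi.storage.k8s.io/snapshotter-secret-name: rook-csi-rbd-provisioner
  csi.storage.k8s.io/snapshotter-secret-namespace: rook-ceph
```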

Event Streams Authentication and Authorization

Kafka provides a rich set of command-line tools to manage topics and clusters. The default settings of the latest IBM Event Streams (Strimzi Operator based) pose some challenges to running these tools.

Broker Listeners

When a Kafka resource is deployed by the operator, a strong security configuration is normally applied. Take a look at the following excerpt of the cluster listener settings:

external:
  authentication:
    type: scram-sha-512
  type: route
tls:
  authentication:
    type: tls

The brokers will be listening on

  1. Port 9094 for external connections with SCRAM-SHA-512 authentication.
  2. Port 9093 for internal communication, where mTLS is used. The m here (mutual) means…
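To let the stock Kafka command-line tools connect through the external listener, a client properties file along these lines is typically needed. The username, password, and truststore path below are placeholders, not values from the article:

```properties
security.protocol=SASL_SSL
sasl.mechanism=SCRAM-SHA-512
sasl.jaas.config=org.apache.kafka.common.security.scram.ScramLoginModule required \
  username="my-user" password="my-password";
ssl.truststore.location=/tmp/es-cert.p12
ssl.truststore.password=changeit
ssl.truststore.type=PKCS12
```

The tools then take it via a flag such as --command-config for kafka-topics.sh.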

Zhimin Wen
