Recently I needed to test some J2EE integration work against IBM MQ. Since it is for testing purposes, I run MQ as a container instead of a full-blown VM.
Beyond the default MQ settings, I need my own queue definitions and security controls. In the MQ container, this is achieved by defining a custom MQSC file and putting it into the specific
Create the following file:
define listener(LISTENER) trptype(tcp) control(qmgr) port(1414) replace
start listener(LISTENER)
define authinfo(my.authinfo) authtype(idpwos) chckclnt(reqdadm) chcklocl(optional) adoptctx(yes) replace
alter qmgr connauth(my.authinfo)
refresh security(*) type(connauth)
def chl(SYSTEM.ADMIN.SVRCONN) chltype(SVRCONN) replace
set chlauth('*') type(addressmap)…
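The MQ container image runs any *.mqsc files it finds under /etc/mqm when the queue manager starts, so the custom file can simply be mounted in. A minimal sketch, where the image tag, license flag, and queue manager name are assumptions to adapt to your setup:

```shell
# Write the custom MQSC file (only the first two statements shown here;
# use the full contents from the listing above).
cat > 20-config.mqsc <<'EOF'
define listener(LISTENER) trptype(tcp) control(qmgr) port(1414) replace
start listener(LISTENER)
EOF

# Run the MQ container with the file mounted into /etc/mqm, where the
# image's startup logic picks up *.mqsc files automatically.
# (Image tag and env vars are assumptions; check the IBM MQ container docs.)
# docker run -d --name mq -p 1414:1414 \
#   -e LICENSE=accept -e MQ_QMGR_NAME=QM1 \
#   -v "$PWD/20-config.mqsc:/etc/mqm/20-config.mqsc" \
#   ibmcom/mq:latest
```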
Starting from OpenShift 4.6, user workload monitoring is formally supported through a second Prometheus Operator instance in a new namespace called
openshift-user-workload-monitoring. This paper demonstrates how a user workload can be monitored and how alerts can be created.
Create or update the following ConfigMap in
With the enableUserWorkload key set to true, the second Prometheus Operator will be installed, which creates a Prometheus and a Thanos Ruler instance, as shown below:
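For reference, per the OpenShift documentation the ConfigMap in question is cluster-monitoring-config in the openshift-monitoring namespace; a minimal version enabling user workload monitoring looks like:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: cluster-monitoring-config
  namespace: openshift-monitoring
data:
  config.yaml: |
    enableUserWorkload: true
```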
oc -n openshift-user-workload-monitoring get pods
Let's explore how we can integrate an OpenID Connect (OIDC) implementation, Keycloak, as an identity provider for OpenShift, beyond the common ones such as HTPasswd and LDAP.
Install the Keycloak Operator from the OperatorHub and create a Keycloak instance in the keycloak namespace. We can access the admin web interface once the pods are running. Get the admin user and its password from the corresponding Secret object in the namespace.
First, create our own realm.
Secondly, create the client named
idp-4-ocp. In the Settings tab, set the “Access Type” to “confidential”. Set the “Valid Redirect URIs” to “https://*” for…
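On the OpenShift side, the client is then referenced from the cluster OAuth resource. A hedged sketch, where the Secret name, issuer URL, and claim mapping are placeholders to replace with your own values:

```yaml
apiVersion: config.openshift.io/v1
kind: OAuth
metadata:
  name: cluster
spec:
  identityProviders:
  - name: keycloak                   # display name of the identity provider
    mappingMethod: claim
    type: OpenID
    openID:
      clientID: idp-4-ocp            # the Keycloak client created above
      clientSecret:
        name: idp-client-secret      # assumption: Secret holding the client secret
      issuer: https://<keycloak-host>/auth/realms/<realm>   # placeholder issuer URL
      claims:
        preferredUsername:
        - preferred_username
```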
Airgap installation is always challenging for OpenShift. By setting up a mirror registry and applying an ImageContentSourcePolicy CRD to the cluster, we can instruct the container engine to retrieve images from their mirrored copies hosted in the mirror registry. This solves the airgap image problem for both the cluster and the apps.
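As a sketch, an ImageContentSourcePolicy that maps an upstream repository to its mirror might look like the following; the source repository and the mirror registry host are placeholders:

```yaml
apiVersion: operator.openshift.io/v1alpha1
kind: ImageContentSourcePolicy
metadata:
  name: mirror-example               # placeholder name
spec:
  repositoryDigestMirrors:
  - source: quay.io/example/app      # placeholder upstream repository
    mirrors:
    - mirror-registry.example.com:5000/example/app   # placeholder mirror
```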
There is still a third type of image to tackle in an airgap environment: the Operator-related images. This paper documents an Operator-based installation in an air-gapped environment, the steps, the hiccups, and how they were resolved.
OpenShift manages operators through…
With the GA release of volume snapshots, CSI volume snapshots for stateful application data backup and restore are more mature. This paper explores how we can use the standard Kubernetes snapshot resources to back up and restore the data located on a PV.
Let's use IBM Event Streams as the test target. It runs Kafka as a Kubernetes StatefulSet based on the Strimzi Operator. Assume we have 3 replicas of the StatefulSet and the data are saved in PVCs named data-es-kafka-0, data-es-kafka-1, and data-es-kafka-2 respectively. The PVCs are provisioned by Rook Ceph.
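With the standard snapshot API, one VolumeSnapshot resource is created per PVC. A minimal sketch for the first PVC; the VolumeSnapshotClass name is an assumption, so use the one installed by your Rook Ceph deployment:

```yaml
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-es-kafka-0-snap          # hypothetical snapshot name
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass   # assumption: Rook Ceph RBD class
  source:
    persistentVolumeClaimName: data-es-kafka-0       # the PVC from the text above
```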
Just as PVC…
Kafka provides a rich set of command-line tools to manage topics and clusters. The default settings of the latest IBM Event Streams (Strimzi Operator based) pose some challenges when running these tools.
When a Kafka cluster is deployed by an operator, a strong security configuration is normally applied. Take a look at the following excerpt of the cluster listener settings:
The brokers will be listening on
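To run the CLI tools against a TLS-secured listener, the tools need a client properties file that trusts the cluster CA. A sketch; the truststore path, password, and bootstrap address are assumptions, and the CA certificate typically has to be extracted from a cluster Secret first:

```shell
# Hypothetical client.properties for a TLS-secured listener.
cat > client.properties <<'EOF'
security.protocol=SSL
ssl.truststore.location=/tmp/ca.p12
ssl.truststore.password=changeit
ssl.truststore.type=PKCS12
EOF

# List topics against the secured bootstrap address (host/port are placeholders):
# bin/kafka-topics.sh --bootstrap-server my-kafka-bootstrap:9093 \
#   --command-config client.properties --list
```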
Etcd is the heart of Kubernetes. With the operator model prevailing, Etcd is no longer limited to serving the Kubernetes core cluster engine only.
Following is a screen capture of the major Etcd metrics on my OpenShift cluster after an operator-based solution framework is deployed. You can see both the DB size and the memory usage increased 3 to 4 times compared with the plain OCP platform.
I have an OpenShift cluster where I dedicate some nodes to my workload by tainting them. To run my normal pods on these nodes, I just need to define tolerations based on the taint keys.
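For a plain pod this is straightforward. A sketch of the pod spec fragment, assuming a hypothetical taint dedicated=workload:NoSchedule on the dedicated nodes:

```yaml
# Pod (or pod template) spec fragment; the taint key/value are assumptions.
spec:
  tolerations:
  - key: dedicated        # matches the taint key on the dedicated nodes
    operator: Equal
    value: workload
    effect: NoSchedule
```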
However, the workload is operator-based, and unfortunately not all the CRDs have tolerations defined. A “brute-force change” to the Deployment or the StatefulSet will not take effect in the end, as “the big brother” will rectify it based on the definition in its original mind ;)
The latest Mirror Maker 2 is able to replicate Kafka from one cluster to a destination cluster. However, the backup and restore requirement is still there for the local Kafka cluster. When a Kafka cluster is running on Kubernetes, the traditional backup/restore method needs to be revised. On the other hand, the standardization of the Kubernetes storage API with the Container Storage Interface (CSI) makes backup/restore for stateful apps on Kubernetes much easier.
Using IBM Event Streams V10.1 (Kafka 2.6) as an example, this paper explores how we can back up and restore a local Kafka cluster. …
In an airgap environment, the challenge of getting container images is always there. You can populate the local container storage directly, or, as a more complete solution, set up a mirror registry and let OpenShift pull images from there.
This paper documents the steps for airgap image mirroring using the Rook Ceph operator as an example.
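As a sketch, a single image can be copied into the local registry with oc image mirror; the registry host and the image tag below are placeholders:

```shell
REGISTRY=mirror-registry.example.com:5000   # placeholder: your local mirror registry
IMAGE=quay.io/ceph/ceph:v15                 # placeholder: an example Rook Ceph image tag

# oc image mirror copies the image (including multi-arch manifest lists)
# from the public registry into the mirror:
# oc image mirror "$IMAGE" "$REGISTRY/ceph/ceph:v15"
echo "would mirror $IMAGE to $REGISTRY/ceph/ceph:v15"
```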
The local registry is used to host all the images downloaded from the Internet for the airgap OpenShift cluster to use. Most likely you already have this registry ready, since it was used to set up OpenShift. …