With volume snapshots reaching GA, CSI volume snapshots have become mature enough for stateful application data backup and restore. This paper explores how we can use the standard Kubernetes snapshot resources to back up and restore the data located on the PVs.
Let's use IBM Event Streams as the test target. It runs Kafka as a StatefulSet, managed by the Strimzi Operator. Assume the StatefulSet has 3 replicas and the data are saved in the PVCs named data-es-kafka-0, data-es-kafka-1, and data-es-kafka-2 respectively. The PVCs are provisioned by Rook Ceph.
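As a sketch of the snapshot resource involved, a VolumeSnapshot for one of these PVCs could look like the following. The snapshot class name below is an assumption; use the one your CSI driver actually provides.

```yaml
# Hypothetical VolumeSnapshot for the first Kafka PVC.
# volumeSnapshotClassName depends on your CSI driver; the value
# below is a common Rook Ceph RBD default and may differ.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: data-es-kafka-0-snap
spec:
  volumeSnapshotClassName: csi-rbdplugin-snapclass
  source:
    persistentVolumeClaimName: data-es-kafka-0
```

The same manifest, repeated per PVC, gives a point-in-time copy of each broker's data that a restore can later clone a new PVC from.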
Just as PVC…
Kafka provides a rich set of command-line tools to manage topics and clusters. The default settings of the latest IBM Event Streams (based on the Strimzi Operator) pose some challenges for running these tools.
When a Kafka resource is deployed by an operator, a strong security configuration is normally applied. Take a look at the following excerpt of the cluster listener settings.
The brokers will be listening on
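For illustration, a secured listener section in a Strimzi Kafka custom resource commonly looks like the sketch below. This is the typical default shape, not this cluster's exact config, and the exact schema varies between Strimzi versions (older releases use a map form instead of a list).

```yaml
# Illustrative excerpt of spec.kafka.listeners in a Strimzi Kafka CR.
# With tls: true and TLS client authentication, plaintext clients
# cannot connect, which is what trips up the stock CLI tools.
listeners:
  - name: tls
    port: 9093
    type: internal
    tls: true
    authentication:
      type: tls
```

Any kafka-topics.sh or kafka-console-consumer.sh invocation therefore needs a client properties file carrying the cluster CA and a client certificate before it can talk to port 9093.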
Etcd is the heart of Kubernetes. With the operator model prevailing, etcd is no longer limited to serving the Kubernetes core cluster engine only.
The following is a screen capture of the major etcd metrics on my OpenShift cluster after an operator-based solution framework is deployed. You can see that both the DB size and the memory usage increase 3 to 4 times compared with the plain OCP platform.
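To look at the same numbers on your own cluster, the standard etcd metric names can be queried in the OpenShift monitoring console, for example:

```promql
# On-disk etcd database size, per member
etcd_mvcc_db_total_size_in_bytes

# Resident memory of the etcd processes
process_resident_memory_bytes{job="etcd"}
```

Watching these two series before and after installing a CRD-heavy operator framework makes the growth easy to quantify.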
I have an OpenShift cluster where I dedicate some of the nodes to my workload by tainting them. To run my normal pods on these nodes, I just need to define tolerations matching the taint keys.
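For example, assuming the nodes were tainted with a key such as dedicated=myworkload:NoSchedule (the key and value here are illustrative, not the cluster's actual taint), the pod spec only needs a matching toleration:

```yaml
# Toleration matching a hypothetical "dedicated=myworkload:NoSchedule" taint
tolerations:
  - key: "dedicated"
    operator: "Equal"
    value: "myworkload"
    effect: "NoSchedule"
```

This works fine for anything whose pod template I control directly.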
However, the workload is operator-based, and too bad not all the CRDs have tolerations defined. A “brute-force change” on the Deployment or the StatefulSet will not take effect in the end, as “the big brother” will reconcile it back to the definition in its original mind ;)
The latest MirrorMaker 2 is able to replicate Kafka from one cluster to a destination cluster. However, the backup and restore requirement is still there for the local Kafka cluster. When a Kafka cluster runs on Kubernetes, the traditional backup/restore methods need to be revised. On the other hand, the standardization of the Kubernetes storage API with the Container Storage Interface (CSI) makes backup/restore for stateful apps on Kubernetes much easier.
Using IBM Event Streams V10.1 (Kafka 2.6) as an example, this paper explores how we can back up and restore a local Kafka cluster. …
In an air-gapped environment, getting container images is always a challenge. You can populate them into the local container storage directly, or, as a more complete solution, set up a mirror registry and let OpenShift pull images from there.
This paper documents the steps for air-gap image mirroring, using the Rook Ceph operator as an example.
The local registry hosts all the images downloaded from the Internet for the air-gapped OpenShift cluster to use. Most likely you already have this registry ready, since it was used to set up OpenShift. …
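As a sketch, mirroring a single Rook Ceph image into that registry can be done with `oc image mirror`; the registry hostname and the image tag below are illustrative assumptions, not values from this environment.

```shell
# Run on a host that can reach both the Internet and the local registry.
# registry.local:5000 and the v1.5.9 tag are placeholder values.
oc image mirror docker.io/rook/ceph:v1.5.9 registry.local:5000/rook/ceph:v1.5.9
```

Repeating this for every image the operator references (operator, CSI sidecars, Ceph daemon images) gives the air-gapped cluster a complete local copy to pull from.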
Sometimes, for performance considerations, you may want to use local storage instead of network storage. Of course, you will lose the flexibility of moving the pod freely between nodes; the pod will be bound to the node that provides the local storage.
This paper explores how we can add a disk and create a file system on the immutable OS (RHCOS) in an OpenShift 4.x environment, and then create the persistent volume (PV) and storage class to be used by containers.
On the worker nodes, let's add an extra disk. I am using KVM, so the second disk…
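Once the disk has a file system and is mounted on the node, the local PV bound to that node can be sketched like this; the mount path, node name, capacity, and storage class name are all illustrative:

```yaml
# Sketch of a local PersistentVolume pinned to one worker node.
# path, hostname, capacity, and storageClassName are placeholders.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: local-pv-1
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/local-disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - worker-1
```

The nodeAffinity section is what enforces the binding mentioned above: any pod claiming this PV will be scheduled onto worker-1.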
I am troubleshooting some legacy Java Web Start UI applications. As Java Web Start is no longer shipped with the latest JRE, I will not waste time installing it locally on my laptop only to remove it later; instead, I will run it from a known container image that still has the Java Web Start binary.
If I run Java Web Start in a container, then I also need an X Window environment locally on my MacBook. Hmmm, not preferred. What I really want is to minimize any installation on my MacBook.
I urgently need to create some K8s resources in an air-gapped environment, and too bad my cheatsheet was not with me :(
The “mission impossible” is still achievable with the help of the kubectl command-line tool alone.
Let's say we need to restrict the maximum Pod resource limits in a namespace, which is typically achieved with the “limits” (LimitRange) K8s resource. There seems to be no option to create it with the kubectl tool through the command line alone; we have to use the versatile YAML files. But I could not recall the exact syntax.
First, let's find out the resource’s formal name through
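A sketch of the kind of lookup kubectl itself supports, with no cheatsheet needed:

```shell
# List resource kinds and their short names;
# "limits" shows up as the short name for limitranges
kubectl api-resources | grep -i limit

# Print the full field schema of LimitRange to recall the YAML syntax
kubectl explain limitrange --recursive
```

With the schema printed by `kubectl explain`, the YAML manifest can be written from memory even without Internet access.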
This paper explores the different cache options to speed up the Spring Boot app image build when using Kaniko with Tekton pipelines.
Create a sample Spring Boot app with the Spring Initializr. Select Maven, add the Spring Web dependency, generate, and download the project zip file.
Unzip it, and create the following Dockerfile to build the image.
FROM maven:3.6.3-jdk-11 AS builder
WORKDIR /workspace
COPY . .
RUN mvn clean install && ls -ltr target

FROM openjdk:11
COPY --from=builder /workspace/target/demo-0.0.1-SNAPSHOT.jar .
CMD ["java", "-jar", "demo-0.0.1-SNAPSHOT.jar"]
I am using Tekton pipelines with Kaniko to build the container image. …
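As a sketch of where the cache options go, a Kaniko step in a Tekton Task enables layer caching through the executor's --cache flags; the registry paths and workspace name below are assumptions for illustration:

```yaml
# Illustrative excerpt of a Tekton Task step running the Kaniko executor.
# --cache-repo names a repository where cached layers are pushed/pulled;
# the registry.local:5000 paths are placeholders.
steps:
  - name: build-and-push
    image: gcr.io/kaniko-project/executor:v1.5.1
    args:
      - --dockerfile=Dockerfile
      - --context=$(workspaces.source.path)
      - --destination=registry.local:5000/demo:latest
      - --cache=true
      - --cache-repo=registry.local:5000/demo/cache
```

With --cache=true, RUN-layer results such as the Maven build layer are reused across pipeline runs whenever the preceding layers are unchanged, which is the speed-up this paper measures.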