Mutual TLS, Authentication, and Authorization for IBM MQ

When MQ runs on a container platform, the default OS-based authentication no longer works, because OpenShift runs the MQ container under a random user ID. You either need to set up an LDAP server or use certificate-based authentication. Using IBM MQ deployed on OpenShift with the latest Cloud Pak for Integration, we will walk through mutual TLS communication, certificate-to-user mapping, and authorization for MQ objects. We take a step-by-step approach and record the errors encountered along the way, so that we can avoid these traps later.

The CA is at the center of the trust. When two parties communicate, one side can trust the other only when a unique certificate is presented and validated. Validation means the certificate must be issued/signed by an authority, the CA, that the validating side recognizes and therefore trusts. Typically, only the server certificate is validated, by the client. In mutual TLS (mTLS), the client certificate is also validated, by the server; this is where the "mutual" comes from.

Let’s create the CA using the cfssl tooling. Generate the following key request JSON file, named myca.json,

{
  "CN": "myca",
  "hosts": [
    "myca"
  ],
  "key": {
    "algo": "rsa",
    "size": 4096
  },
  "names": [
    {
      "C": "SG",
      "ST": "SG",
      "L": "Singapore"
    }
  ]
}

Notice the key size is 4096. Create the self-signed CA,

cfssl gencert -initca myca.json | cfssljson -bare myca

Two PEM files are created. One is the certificate file, myca.pem, which can be publicly distributed. The other is the paired private key file, myca-key.pem, which should be kept securely.
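
If you want to double-check what was generated, standard OpenSSL tooling can print the subject, issuer, and validity of the new CA certificate. This is a quick sanity check, not a required step:

openssl x509 -in myca.pem -noout -subject -issuer -dates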

Define the following profile in JSON format, named ca-config.json

{
  "signing": {
    "default": {
      "expiry": "43800h"
    },
    "profiles": {
      "server": {
        "expiry": "43800h",
        "usages": [
          "signing",
          "key encipherment",
          "server auth",
          "client auth"
        ]
      },
      "client": {
        "expiry": "43800h",
        "usages": [
          "signing",
          "key encipherment",
          "client auth"
        ]
      }
    }
  }
}

Create a server request JSON file, named serverRequest.json,

{
  "CN": "qm1",
  "hosts": [ "qm1" ],
  "key": {
    "algo": "rsa",
    "size": 4096
  }
}

Create the certificate signed by the above CA,

cfssl gencert -ca=myca.pem -ca-key=myca-key.pem -config=ca-config.json -profile=server -hostname=qm1 serverRequest.json | cfssljson -bare qm1

Now we have the certificate qm1.pem and key qm1-key.pem files.

Use the same approach to create another certificate for the client, say with CN=mqclient1, giving the mqclient1.pem and mqclient1-key.pem files. A sketch of the commands is shown below.
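
For reference, the client certificate can be generated against the client profile defined earlier. The request file name clientRequest.json is an assumption, and the final openssl verify line is just an optional check that both leaf certificates chain back to myca:

cat > clientRequest.json <<EOF
{
  "CN": "mqclient1",
  "hosts": [ "mqclient1" ],
  "key": {
    "algo": "rsa",
    "size": 4096
  }
}
EOF

cfssl gencert -ca=myca.pem -ca-key=myca-key.pem -config=ca-config.json -profile=client -hostname=mqclient1 clientRequest.json | cfssljson -bare mqclient1

# Optional: confirm both certificates are signed by myca
openssl verify -CAfile myca.pem qm1.pem mqclient1.pem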

In the Java world, certificates and keys are stored in truststores and keystores. In Java 8 the default keystore format is JKS, while from Java 9 onwards the default changed to PKCS12.

Let’s convert the client key and certificate into JKS format.

openssl pkcs12 -export -in certs/mqclient1.pem -inkey certs/mqclient1-key.pem -certfile certs/mqclient1.pem -out mqclient1.p12 -passout pass:password

keytool -importkeystore -srckeystore mqclient1.p12 -srcstoretype pkcs12 -destkeystore mqclient1.jks -deststoretype JKS -srcstorepass password -deststorepass password

First, we use the OpenSSL tool to package the cert and key into PKCS12 format. Then we use the keytool shipped with the JRE to create a JKS keystore by importing the cert and key from the PKCS12 file.

In order for the certificate validation chain to work, we also import the CA cert. The client program will therefore be able to validate the cert sent from the server, which is signed by this CA.

keytool -importcert -file certs/myca.pem -keystore mqclient1.jks -storepass password -alias myca -noprompt

Now if we examine the JKS file with KeyStore Explorer, we find the client cert and its issuer, the CA cert, which is also the issuer of the MQ server’s cert.
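
If you prefer the command line over KeyStore Explorer, keytool can list the same entries (the verbose output is not reproduced here):

keytool -list -v -keystore mqclient1.jks -storepass password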

Assume we have the Cloud Pak for Integration base installed and the MQ operator subscribed. Now we deploy the MQ instance in the mqexp namespace with the following CR,

apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: mq-exp
spec:
  license:
    accept: true
    license: L-RJON-BZFQU2
  queueManager:
    name: qm1
    resources:
      limits:
        cpu: 500m
      requests:
        cpu: 500m
    storage:
      queueManager:
        type: persistent-claim
      defaultClass: rook-ceph-block
    availability:
      type: SingleInstance
    mqsc:
    - configMap:
        name: cm-init-mqsc
        items:
        - init.mqsc
  template:
    pod:
      containers:
      - name: qmgr
  version: 9.2.3.0-r1
  web:
    enabled: true
  pki:
    keys:
    - name: qm
      secret:
        secretName: qm1-tls-secret
        items:
        - tls.key
        - tls.crt
    trust:
    - name: ca
      secret:
        secretName: ca-crt-secret
        items:
        - ca.crt

There are a couple of things to take note of.

1. Keys and Certificates

Under pki, there are keys and trust. The keys section is where we provide the certificate for our Queue Manager (qm1) to use. It is defined by a K8s TLS secret containing the cert and key files,

oc -n mqexp create secret tls qm1-tls-secret --cert=qm1.pem --key=qm1-key.pem

For mTLS, so that the queue manager can validate the cert from the client, we also add the CA cert in the trust portion as a generic K8s secret, because we only need the CA cert and not its paired key.

oc -n mqexp create secret generic ca-crt-secret --from-file=ca.crt=myca.pem

2. MQSC configMap

The MQ operator allows the user to supply a set of MQSC commands as a K8s configMap, to be executed when the container starts. This is where we define how the MQ connection is authenticated and authorized. We will examine this configMap in more detail later.

Following the approach in my last paper, create a Kotlin-based MQ client to test the connections. The main idea is that by setting a few environment variables we can test the connection to the queue manager and put a message into a queue. A sketch of the main program is shown below.
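
The original Kotlin source is not reproduced here, so the following is a minimal sketch of what such a client could look like, using the IBM MQ classes for Java. The queue name (testq1), the keystore password, and the message text are assumptions; the environment variable names match the test commands used later.

import com.ibm.mq.MQMessage
import com.ibm.mq.MQQueueManager
import com.ibm.mq.constants.CMQC
import java.util.Hashtable

fun main() {
    // Point JSSE at the mounted JKS file; the password "password" is an assumption
    val keystore = System.getenv("MQC_KEYSTORE")
    System.setProperty("javax.net.ssl.keyStore", keystore)
    System.setProperty("javax.net.ssl.keyStorePassword", "password")
    System.setProperty("javax.net.ssl.trustStore", keystore)
    System.setProperty("javax.net.ssl.trustStorePassword", "password")
    // Use the standard (non-IBM) cipher suite names on a non-IBM JRE
    System.setProperty("com.ibm.mq.cfg.useIBMCipherMappings", "false")

    // Connection properties, matching the props printed in the test output
    val props = Hashtable<String, Any>()
    props[CMQC.HOST_NAME_PROPERTY] = System.getenv("MQC_HOST_NAME")
    props[CMQC.PORT_PROPERTY] = 1414
    props[CMQC.CHANNEL_PROPERTY] = System.getenv("MQC_CHANNEL")
    props[CMQC.TRANSPORT_PROPERTY] = CMQC.TRANSPORT_MQSERIES_CLIENT
    props[CMQC.SSL_CIPHER_SUITE_PROPERTY] = "TLS_RSA_WITH_AES_256_CBC_SHA256"
    println("props = $props")

    // Connect, put one message into the test queue, and report success
    val qmgr = MQQueueManager("qm1", props)
    val queue = qmgr.accessQueue("testq1", CMQC.MQOO_OUTPUT)
    val msg = MQMessage().apply { writeString("hello from mq client") }
    queue.put(msg)
    queue.close()
    qmgr.disconnect()
    println("status = 0")
}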

Create the following Dockerfile,

FROM gradle:jdk11 as builder
WORKDIR /build
ADD . /build
RUN ./gradlew clean build shadowJar

FROM openjdk:11
WORKDIR /app
COPY --from=builder /build/app/build/libs/app-all.jar app-all.jar

Build and push the image into the OCP internal registry (a sketch of the commands is shown below), then deploy it into K8s with the deployment YAML that follows,
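
How you build and push depends on your tooling. Assuming podman and the default external route of the OpenShift internal registry (the registry hostname and cluster domain are placeholders, not values from this environment), the steps could look roughly like this:

podman build -t mqclient:v1.0 .
podman login -u $(oc whoami) -p $(oc whoami -t) default-route-openshift-image-registry.apps.<cluster-domain>
podman tag mqclient:v1.0 default-route-openshift-image-registry.apps.<cluster-domain>/mqexp/mqclient:v1.0
podman push default-route-openshift-image-registry.apps.<cluster-domain>/mqexp/mqclient:v1.0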

apiVersion: apps/v1
kind: Deployment
metadata:
  name: mq-client
  labels:
    app: mq-client
spec:
  replicas: 1
  selector:
    matchLabels:
      app: mq-client
  template:
    metadata:
      labels:
        app: mq-client
    spec:
      containers:
      - name: mq-client
        image: image-registry.openshift-image-registry.svc:5000/mqexp/mqclient:v1.0
        imagePullPolicy: Always
        command:
        - sh
        - -c
        - sleep infinity
        volumeMounts:
        - name: cert1
          mountPath: /app/cert1
        - name: cert1-no-ca
          mountPath: /app/cert1-no-ca
        - name: cert2
          mountPath: /app/cert2
      volumes:
      - name: cert1
        configMap:
          name: mq-jks-cm
      - name: cert1-no-ca
        configMap:
          name: mq-jks-cm-no-ca
      - name: cert2
        configMap:
          name: mq-jks2-cm

For testing purposes, the container’s command is just set as an infinite sleep. We will exec into the pod and execute the jar file with the different test scenarios.

There are a couple of JKS files mounted from K8s configMaps. The JKS file in volume cert1 contains the client cert, plus the CA cert that signs both the MQ server’s cert and this client cert. The JKS file in volume cert1-no-ca has the same client cert as cert1, but without the CA imported. Volume cert2 has a client cert that is signed by a different CA from the MQ server’s, but with the MQ server’s CA imported.
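
The creation of these configMaps is not shown above. Assuming the two extra keystores have been built the same way as mqclient1.jks (only that one was shown earlier), they could be created from the binary JKS files like this, using the file and configMap names referenced in the deployment:

oc -n mqexp create configmap mq-jks-cm --from-file=mqclient1.jks
oc -n mqexp create configmap mq-jks-cm-no-ca --from-file=mqclient1-no-ca.jks
oc -n mqexp create configmap mq-jks2-cm --from-file=mqclient2.jks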

Define the following MQSC configMap,

apiVersion: v1
kind: ConfigMap
metadata:
name: cm-init-mqsc
data:
init.mqsc: |-
define channel(channel1) chltype(SVRCONN) trptype(TCP) sslcauth(OPTIONAL) sslciph('ANY_TLS12_OR_HIGHER')
define qlocal('testq1') replace

Here we create an SVRCONN channel, CHANNEL1, to allow clients to connect. SSLCAUTH(OPTIONAL) means the queue manager does not require a certificate from the client, so only the MQ server’s certificate is validated, at the client side. SSLCIPH('ANY_TLS12_OR_HIGHER') forces the connection to use TLS 1.2 or above.
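
Once the queue manager is up, you can confirm the channel definition by piping an MQSC command into runmqsc inside the MQ pod. This is an optional check; the pod name mq-exp-ibm-mq-0 is assumed here (it is the name that appears in the server logs later):

echo "DISPLAY CHANNEL(CHANNEL1) SSLCAUTH SSLCIPH" | oc -n mqexp exec -i mq-exp-ibm-mq-0 -- runmqsc qm1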

Let’s first use the JKS that is missing the MQ server’s CA. Exec into the client’s pod and run the following command.

export MQC_HOST_NAME=mq-exp-ibm-mq.mqexp; export MQC_CHANNEL=CHANNEL1; export MQC_KEYSTORE=/app/cert1-no-ca/mqclient1-no-ca.jks; java -jar /app/app-all.jar

The MQ server’s host name is the exposed K8s service name, and we connect through the defined channel, CHANNEL1 (case sensitive). As the CA of the MQ server’s cert is missing from the client’s JKS keystore, the server cert cannot be validated and the communication does not go through. Sure enough, we get the following error saying the cert cannot be validated through a chain.

props = {port=1414, hostname=mq-exp-ibm-mq.mqexp, channel=CHANNEL1, SSL Cipher Suite=TLS_RSA_WITH_AES_256_CBC_SHA256, transport=MQSeries Client}
Exception in thread "main" com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2397'.
at com.ibm.mq.MQManagedConnectionJ11.<init>(MQManagedConnectionJ11.java:253)
....
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2397;AMQ9204: Connection to host 'mq-exp-ibm-mq.mqexp(1414)' rejected. [1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2397;AMQ9771: SSL handshake failed. [1=javax.net.ssl.SSLHandshakeException[PKIX path building failed: sun.security.provider.certpath.SunCertPathBuilderException: unable to find valid certification path to requested target],3=mq-exp-ibm-mq.mqexp/172.30.225.74:1414 (mq-exp-ibm-mq.mqexp),4=SSLSocket.startHandshake,5=default]],3=mq-exp-ibm-mq.mqexp(1414),4=,5=RemoteTCPConnection.protocolConnect]
at com.ibm.mq.jmqi.remote.api.RemoteFAP$Connector.jmqiConnect(RemoteFAP.java:13605)
...

If we choose the JKS that has the CA imported,

export MQC_HOST_NAME=mq-exp-ibm-mq.mqexp; export MQC_CHANNEL=CHANNEL1; export MQC_KEYSTORE=/app/cert1/mqclient1.jks; java -jar /app/app-all.jar

We have the following exception.

props = {port=1414, hostname=mq-exp-ibm-mq.mqexp, channel=CHANNEL1, SSL Cipher Suite=TLS_RSA_WITH_AES_256_CBC_SHA256, transport=MQSeries Client}
Exception in thread "main" com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2035'.
at com.ibm.mq.MQManagedConnectionJ11.<init>(MQManagedConnectionJ11.java:253)

Reason code 2035 (MQRC_NOT_AUTHORIZED) indicates this is an authorization error; the TLS handshake itself succeeded. Let’s check the error log on the server side, in the MQ pod,

tail -50 /var/mqm/qmgrs/qm1/errors/AMQERR01.LOG

We have the following error,

11/05/21 17:07:57 - Process(455.1310) User(1000740000) Program(amqrmppa)
Host(mq-exp-ibm-mq-0) Installation(Installation1)
VRMF(9.2.3.0) QMgr(qm1)
Time(2021-11-05T17:07:57.049Z)
RemoteHost(10.131.1.35)
CommentInsert1(CHANNEL1)
CommentInsert2(10.131.1.35)
CommentInsert3(MCAUSER(1000740000) CLNTUSER(1000740000) SSLPEER(SERIALNUMBER=75:0D:BB:DB:CE:35:0D:57:60:67:38:9E:AE:D9:A9:DC:06:64:76:06,CN=mqclient1,UNSTRUCTUREDNAME=mqclient1) SSLCERTI(CN=myca,L=Singapore,ST=SG,C=SG))
AMQ9776E: Channel was blocked by userid
EXPLANATION:
The inbound channel 'CHANNEL1' was blocked from address '10.131.1.35' because
the active values of the channel were mapped to a userid which should be
blocked. The active values of the channel were 'MCAUSER(1000740000)
CLNTUSER(1000740000)
SSLPEER(SERIALNUMBER=75:0D:BB:DB:CE:35:0D:57:60:67:38:9E:AE:D9:A9:DC:06:64:76:06,CN=mqclient1,UNSTRUCTUREDNAME=mqclient1)
SSLCERTI(CN=myca,L=Singapore,ST=SG,C=SG)'.
ACTION:
Contact the systems administrator, who should examine the channel
authentication records to ensure that the correct settings have been
configured. The ALTER QMGR CHLAUTH switch is used to control whether channel
authentication records are used. The command DISPLAY CHLAUTH can be used to
query the channel authentication records.

MQ is able to identify the remote user from the certificate (CN=mqclient1), but it maps the MCA user to 1000740000, which is the user ID randomly assigned by OpenShift because of the Security Context Constraint (SCC) settings. We need to map it to a “fixed” MCA user and assign the proper rights to it. This leads to the following test case.

In the MQSC configMap, map the connection that presents the certificate with CN=mqclient1 to the MCA user mqclient1,

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-init-mqsc
data:
  init.mqsc: |-
    define channel(channel1) chltype(SVRCONN) trptype(TCP) sslcauth(OPTIONAL) sslciph('ANY_TLS12_OR_HIGHER')
    set chlauth(channel1) type(sslpeermap) sslpeer('CN=mqclient1') usersrc(map) mcauser('mqclient1') action(add)
    set authrec principal('mqclient1') objtype(qmgr) authadd(all)
    set authrec profile('*') principal('mqclient1') objtype(queue) authadd(all)
    define qlocal('testq1') replace

In the meantime, the two SET AUTHREC commands assign this MCA user the rights to access the queue manager and the queues.
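
After the queue manager comes back up, the CHLAUTH mapping and the authority records can be verified from the MQ pod. This is optional; the pod name is the one seen in the server logs:

# Show the channel authentication rule that maps CN=mqclient1 to the MCA user
echo "DISPLAY CHLAUTH(CHANNEL1)" | oc -n mqexp exec -i mq-exp-ibm-mq-0 -- runmqsc qm1
# Dump the authority records granted to the principal mqclient1
oc -n mqexp exec -it mq-exp-ibm-mq-0 -- dmpmqaut -m qm1 -p mqclient1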

Delete the queue manager and its old PVC data, then redeploy MQ.

Run the test client again,

export MQC_HOST_NAME=mq-exp-ibm-mq.mqexp; export MQC_CHANNEL=CHANNEL1; export MQC_KEYSTORE=/app/cert1/mqclient1.jks; java -jar /app/app-all.jar

It still gives the same exception,

props = {port=1414, hostname=mq-exp-ibm-mq.mqexp, channel=CHANNEL1, SSL Cipher Suite=TLS_RSA_WITH_AES_256_CBC_SHA256, transport=MQSeries Client}
Exception in thread "main" com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2035'.
at com.ibm.mq.MQManagedConnectionJ11.<init>(MQManagedConnectionJ11.java:253)
...

However, when checking the server logs we have the following,

11/06/21 08:38:36 - Process(489.24) User(1000740000) Program(amqrmppa)
Host(mq-exp-ibm-mq-0) Installation(Installation1)
VRMF(9.2.3.0) QMgr(qm1)
Time(2021-11-06T08:38:36.184Z)
ArithInsert1(2) ArithInsert2(2035)
CommentInsert1(mqclient1)
AMQ9557E: Queue Manager User ID initialization failed for 'mqclient1'.
EXPLANATION:
The call to initialize the User ID 'mqclient1' failed with CompCode 2 and
Reason 2035. If an MQCSP block was used, the User ID in the MQCSP block was ''.
If a userID flow was used, the User ID in the UID header was '' and any CHLAUTH
rules applied prior to user adoption were evaluated case-sensitively against
this value.
ACTION:
Correct the error and try again.

OK, the user ID is correctly mapped now. In our client code we didn’t supply any user ID or password, so this failure is more likely on the “userID flow” path.

On the MQ pod, start runmqsc and display the queue manager’s CONNAUTH setting,
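
For example, assuming the queue manager pod is named mq-exp-ibm-mq-0 as in the logs above:

oc -n mqexp exec -it mq-exp-ibm-mq-0 -- runmqsc qm1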

display qmgr CONNAUTH
11 : display qmgr CONNAUTH
AMQ8408I: Display Queue Manager details.
QMNAME(qm1)
CONNAUTH(SYSTEM.DEFAULT.AUTHINFO.IDPWOS)

By default, the Queue Manager qm1 is using the “SYSTEM.DEFAULT.AUTHINFO.IDPWOS” authentication object. Check its details,

dis authinfo(SYSTEM.DEFAULT.AUTHINFO.IDPWOS)
14 : dis authinfo(SYSTEM.DEFAULT.AUTHINFO.IDPWOS)
AMQ8566I: Display authentication information details.
AUTHINFO(SYSTEM.DEFAULT.AUTHINFO.IDPWOS)
AUTHTYPE(IDPWOS) ADOPTCTX(YES)
DESCR( ) CHCKCLNT(OPTIONAL)
CHCKLOCL(OPTIONAL) FAILDLAY(1)
AUTHENMD(OS) ALTDATE(2021-11-05)
ALTTIME(15.19.32)

The mapped user ID will be authenticated against the OS (even though CHCKCLNT is OPTIONAL, the adopted user must still exist in the OS). Running in a container, by default we don’t have this user and its password created. One possible way to solve the problem is to build a custom container image that defines the user ID and password.

From the MQ 9.2.1 release onwards, we are able to avoid creating any OS users. Check out more details in this blog.

In our QueueManager definition, add the ini field, which mounts a configMap as an INI file for the queue manager to pick up,

apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: mq-exp
spec:
  license:
    accept: true
    license: L-RJON-BZFQU2
  queueManager:
    name: qm1
    resources:
      limits:
        cpu: 500m
      requests:
        cpu: 500m
    storage:
      queueManager:
        type: persistent-claim
      defaultClass: rook-ceph-block
    availability:
      type: SingleInstance
    mqsc:
    - configMap:
        name: cm-init-mqsc
        items:
        - init.mqsc
    ini:
    - configMap:
        name: cm-qm-ini-ext-user
        items:
        - qm.ini
  template:
    pod:
      containers:
      - name: qmgr
  version: 9.2.3.0-r1
  web:
    enabled: true
  pki:
    keys:
    - name: qm
      secret:
        secretName: qm1-tls-secret
        items:
        - tls.key
        - tls.crt
    trust:
    - name: ca
      secret:
        secretName: ca-crt-secret
        items:
        - ca.crt

Define the configMap as below,

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-qm-ini-ext-user
data:
  qm.ini: |-
    Service:
      Name=AuthorizationService
      EntryPoints=14
      SecurityPolicy=UserExternal

With SecurityPolicy=UserExternal, the queue manager no longer checks whether the user exists in the OS, so the mapped MCA user does not need to be defined in the container.
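
To confirm the stanza has been picked up by the running queue manager, one optional check is to look at the generated qm.ini inside the MQ pod:

oc -n mqexp exec -it mq-exp-ibm-mq-0 -- grep -A 3 "Service:" /var/mqm/qmgrs/qm1/qm.ini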

Redeploy the QueueManager. Run the client program again. The program is able to put a message into the queue without any exception.

Finally, let’s turn on mutual TLS by setting SSLCAUTH to REQUIRED.

apiVersion: v1
kind: ConfigMap
metadata:
  name: cm-init-mqsc
data:
  init.mqsc: |-
    define channel(channel1) chltype(SVRCONN) trptype(TCP) sslcauth(REQUIRED) sslciph('ANY_TLS12_OR_HIGHER')
    alter authinfo(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) authtype(IDPWOS) chckclnt(OPTIONAL)
    set chlauth(channel1) type(sslpeermap) sslpeer('CN=mqclient1') usersrc(map) mcauser('mqclient1') action(add)
    set authrec principal('mqclient1') objtype(qmgr) authadd(all)
    set authrec profile('*') principal('mqclient1') objtype(queue) authadd(all)
    define qlocal('testq1') replace

Redeploy the QueueManager. Run the client program with the mqclient2.jks keystore.

export MQC_HOST_NAME=mq-exp-ibm-mq.mqexp; export MQC_CHANNEL=CHANNEL1; export MQC_KEYSTORE=/app/cert2/mqclient2.jks; java -jar /app/app-all.jar

This keystore has the CA of the MQ server’s cert imported, but its client cert is signed by a different CA, so the MQ server won’t be able to validate this certificate.

As expected, there is a communication exception,

props = {port=1414, hostname=mq-exp-ibm-mq.mqexp, channel=CHANNEL1, SSL Cipher Suite=TLS_RSA_WITH_AES_256_CBC_SHA256, transport=MQSeries Client}
Exception in thread "main" com.ibm.mq.MQException: MQJE001: Completion Code '2', Reason '2059'.
at com.ibm.mq.MQManagedConnectionJ11.<init>(MQManagedConnectionJ11.java:253)
...
Caused by: com.ibm.mq.jmqi.JmqiException: CC=2;RC=2059;AMQ9204: Connection to host 'mq-exp-ibm-mq.mqexp(1414)' rejected. [1=com.ibm.mq.jmqi.JmqiException[CC=2;RC=2059;AMQ9503: Channel negotiation failed. [3=CHANNEL1 ]],3=mq-exp-ibm-mq.mqexp(1414),4=,5=RemoteConnection.analyseErrorSegment]
at com.ibm.mq.jmqi.remote.api.RemoteFAP$Connector.jmqiConnect(RemoteFAP.java:13605)

Interestingly, the server-side log does not say the certificate could not be validated; it complains that no certificate was sent over at all,

11/06/21 09:23:02 - Process(553.29) User(1000740000) Program(amqrmppa)
Host(mq-exp-ibm-mq-0) Installation(Installation1)
VRMF(9.2.3.0) QMgr(qm1)
Time(2021-11-06T09:23:02.184Z)
RemoteHost(10.131.1.35)
CommentInsert1(CHANNEL1)
CommentInsert2(SSLCAUTH)
CommentInsert3(10.131.1.35)
AMQ9637E: During handshake, the remote partner sent no certificate.
EXPLANATION:
The conversation cannot begin because a certificate has not been supplied by
the remote partner.
The channel name is 'CHANNEL1'.
The remote host is '10.131.1.35'.
If this error message is written on the receiving side of the channel, then the
channel attributes 'SSLCAUTH' caused the check to be made.
ACTION:
Look at the key repository on the remote side of this channel, and make sure
the appropriate certificates are present, with correct labels.

Run the client program again, but this time choose the keystore whose certificate is signed by the same CA as the MQ server’s cert,

export MQC_HOST_NAME=mq-exp-ibm-mq.mqexp; export MQC_CHANNEL=CHANNEL1; export MQC_KEYSTORE=/app/cert1/mqclient1.jks; java -jar /app/app-all.jar

The mTLS communication goes through, and the message can be put successfully.

props = {port=1414, hostname=mq-exp-ibm-mq.mqexp, channel=CHANNEL1, SSL Cipher Suite=TLS_RSA_WITH_AES_256_CBC_SHA256, transport=MQSeries Client}
status = 0
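
As a final check (optional, not part of the original flow), the message can be seen sitting on the queue by displaying the queue’s current depth from the MQ pod:

echo "DISPLAY QSTATUS('testq1') CURDEPTH" | oc -n mqexp exec -i mq-exp-ibm-mq-0 -- runmqsc qm1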

Without customizing or extending the base MQ container image, we are able to achieve mutual TLS, certificate-based authentication, and authorization by supplying configMaps to the QueueManager operator. This configuration is ready for production usage.
