This paper explains how to configure Instana and IBM MQ so that Instana can monitor MQ running on OpenShift.
The Instana sensor connects to the detected MQ instance to collect monitoring metrics. Typically this connection is authenticated against the OS. However, when MQ runs in a container, the default OS-based authentication is no longer available. To authenticate the connection and authorize it with the right permissions, we will use mutual TLS, a certificate-based authentication, for the Instana sensor.
Configuration for Instana MQ sensor
The Instana agent has been deployed with the instana-agent Helm chart. The agent runs on each node of the cluster as a DaemonSet. If MQ is running on any of the nodes, the Instana agent will discover it and start to monitor it based on a set of predefined configuration. The configuration is created as a configMap. The following is a sample configMap for MQ monitoring,
availabilityZone: 'IBM MQ'
MQ monitoring is defined through the com.instana.plugin.ibmmq section in configuration.yaml. For each queue manager to be monitored, set its properties under queueManagers. Because of this structure, each queue manager's name has to be unique. In the above example, the qm1 queue manager is configured as follows,
- Connect through the MQ channel named INSTANA (case sensitive).
- The username and password are not used, as we are going to use certificate-based authentication.
- The MQ port and host are not required here, as the Instana agent discovers the local MQ automatically.
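Put together, the plugin section of configuration.yaml would look roughly like the sketch below. The keystore path and password are examples, and the exact key names should be verified against the Instana IBM MQ sensor documentation:

```yaml
com.instana.plugin.ibmmq:
  queueManagers:
    qm1:
      channel: 'INSTANA'   # must match the SVRCONN channel name exactly (case sensitive)
      # no username/password: certificate-based authentication is used instead
      keystore: '/opt/instana/agent/etc/mqplugin/mqplugin.jks'   # example mount path
      keystorePassword: 'password'
```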
The certificate is important, as it not only encrypts the traffic but also serves as the authentication mechanism for MQ.
Mutual TLS is used: the Instana sensor must be able to validate the certificate from the queue manager, and the queue manager must in turn be able to validate the certificate from the sensor. Therefore we use the same CA to create the certificates for both the queue manager and the Instana sensor.
The JKS keystore will store the Instana sensor's certificate and key pair. It also stores the CA certificate for validating the queue manager.
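If the certificates do not exist yet, they can be created with openssl. The following is a minimal sketch, assuming example file names under certs/ and a CA named myca:

```shell
mkdir -p certs
# 1. Create a self-signed CA certificate and key
openssl req -x509 -newkey rsa:2048 -nodes -days 365 \
  -subj "/CN=myca" -keyout certs/myca-key.pem -out certs/myca.pem
# 2. Create a key and CSR for the Instana sensor with CN=mqplugin
openssl req -newkey rsa:2048 -nodes \
  -subj "/CN=mqplugin" -keyout certs/mqplugin-key.pem -out certs/mqplugin.csr
# 3. Sign the sensor CSR with the CA
openssl x509 -req -in certs/mqplugin.csr -days 365 \
  -CA certs/myca.pem -CAkey certs/myca-key.pem -CAcreateserial \
  -out certs/mqplugin.pem
```

The queue manager's certificate would be created the same way from the same CA, which is what makes mutual validation possible.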
Assume we have created the cert and key pair signed by the same CA, with the CN CN=mqplugin. Perform the following to create the JKS-format keystore,
openssl pkcs12 -export -in certs/mqplugin.pem -inkey certs/mqplugin-key.pem -certfile certs/mqplugin.pem -out mqplugin.p12 -passout pass:password
keytool -importkeystore -srckeystore mqplugin.p12 -srcstoretype pkcs12 -destkeystore mqplugin.jks -deststoretype JKS -srcstorepass password -deststorepass password
Import the CA certificate into the keystore to complete the validation chain.
keytool -importcert -file certs/myca.pem -keystore mqplugin.jks -storepass password -alias mqca -noprompt
We now have mqplugin.jks ready. How do we make it available to the Instana agent? The answer is to create a configMap from the JKS file and mount it into the Pod.
oc -n instana-agent create cm mq-plugin-jks-cm --from-file=mqplugin.jks=mqplugin.jks
As the DaemonSet is already created by Helm, we need to patch it to mount the configMap. Let's use Kustomize to achieve this. First, export the current DaemonSet:
oc -n instana-agent get ds instana-agent -o yaml > instana-agent.ds.yaml
Create the following kustomization.yaml to patch the DaemonSet to mount the configMap (the mount path below is an example; it must match the keystore path referenced in the sensor configuration):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- instana-agent.ds.yaml
patches:
- target:
    kind: DaemonSet
    name: instana-agent
  patch: |-
    - op: add
      path: /spec/template/spec/volumes/-
      value:
        name: mq-plugin-jks
        configMap:
          name: mq-plugin-jks-cm
    - op: add
      path: /spec/template/spec/containers/0/volumeMounts/-
      value:
        name: mq-plugin-jks
        mountPath: /opt/instana/agent/etc/mqplugin   # example path
Generate the patched YAML and apply it.
kustomize build > ds.patched.yaml
kubectl apply -f ds.patched.yaml
Watch until the pods are restarted and ready. The MQ sensor configuration is then complete.
Configuration for Queue Manager
The MQ queue manager is created on OpenShift using the IBM MQ Operator. The CR is defined as below,
apiVersion: mq.ibm.com/v1beta1
kind: QueueManager
metadata:
  name: qm1
spec:
  queueManager:
    name: qm1
    availability:
      type: SingleInstance
    mqsc:
    - configMap:
        name: qmgr            # configMap holding the MQSC script
        items:
        - config.mqsc
    ini:
    - configMap:
        name: qmgr            # configMap holding the qm.ini stanza
        items:
        - qm.ini
  web:
    enabled: true
  pki:
    keys:
    - name: mq
      secret:
        secretName: mq-tls-secret   # example secret name
        items:
        - tls.key
        - tls.crt
    trust:
    - name: ca
      secret:
        secretName: mq-ca-secret    # example secret name
        items:
        - ca.crt
The pki/keys section defines the K8s secret holding the certificate and key for the queue manager. To enable mutual TLS, the K8s secret for the CA is added under pki/trust so that the queue manager can validate the cert sent from the Instana sensor.
The ini block disables the lookup of OS users in the container. The corresponding configMap is defined as below,
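A sketch of such a configMap follows, assuming the UserExternal security policy (available from MQ 9.2.1) is what disables OS user lookup; verify the stanza against the MQ documentation for your version:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: qmgr          # must match the name referenced by the CR's ini block
data:
  qm.ini: |
    Service:
      Name=AuthorizationService
      EntryPoints=14
      SecurityPolicy=UserExternal
```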
The mqsc entry points to the MQSC script that configures the queue manager at its first startup. The corresponding script content is defined as below,
define channel(instana) chltype(SVRCONN) trptype(TCP) sslcauth(REQUIRED) sslciph('ANY_TLS12_OR_HIGHER')
alter authinfo(SYSTEM.DEFAULT.AUTHINFO.IDPWOS) authtype(IDPWOS) chckclnt(OPTIONAL)
set chlauth(instana) type(SSLPEERMAP) sslpeer('CN=mqplugin') usersrc(MAP) mcauser('mqplugin') action(add)
set authrec principal('mqplugin') objtype(qmgr) authadd(all)
set authrec profile('**') principal('mqplugin') objtype(queue) authadd(all)
set authrec profile('**') principal('mqplugin') objtype(listener) authadd(all)
set authrec profile('**') principal('mqplugin') objtype(topic) authadd(all)
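The script above is delivered to the queue manager through a configMap such as the following sketch (the metadata name and data key must match what the QueueManager CR references):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: qmgr
data:
  config.mqsc: |
    define channel(instana) chltype(SVRCONN) trptype(TCP) sslcauth(REQUIRED) sslciph('ANY_TLS12_OR_HIGHER')
    * ...followed by the remaining MQSC commands listed above
```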
We create a channel, instana, for the Instana sensor to connect through. Note that if a value is not single-quoted, MQSC automatically converts it to uppercase, so this actually defines a channel named INSTANA.
The sslcauth(REQUIRED) attribute enforces mutual TLS. The chlauth rule maps the certificate CN=mqplugin to the MCA user mqplugin. Then, for the MCA user mqplugin, we assign the proper permissions for the queue manager, queues, listeners, and topics. The wildcard '**' covers all objects of that type. These permissions can be tightened once the exact monitoring requirements are documented.
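After the queue manager starts, the mapping and the granted permissions can be checked from runmqsc; a sketch of the relevant display commands:

```
display chlauth(INSTANA) all
display authrec principal('mqplugin')
```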
Apply the CR and watch until MQ has started properly. The Instana sensor will then detect MQ and start to monitor it.
Sample Instana UI
Go to Infrastructure and filter by the queue manager's name, qm1. Click the matching icon in the dashboard to open the queue manager's dashboard.
Check the Instana MQ sensor’s log
Find which node MQ is running on, then find the Instana agent pod on the same node and run the following to get the MQ sensor-related log:
oc logs instana-agent-gm28q -c instana-agent | grep ibmmq
Check the MQ log
Exec into the MQ pod and check the detailed MQ log under,