OpenShift Data Foundation - the Hard Way
Thanks to the OpenShift Operator and its console, installing OpenShift Data Foundation (ODF) takes just a few mouse clicks. But sometimes you may still prefer the traditional command-line and YAML-file approach, especially if you want to automate your cluster setup as infrastructure as code.
Let’s explore the “hard” way.
Physical Resources
First, let’s make sure we have 3 worker nodes dedicated to storage with enough resources (at least 10 CPUs and 14 GB of memory per node).
We also need to add an additional raw disk to each of these nodes, without any partition or LVM created on it.
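How you prepare the nodes depends on your environment. As a sketch, assuming the three storage nodes are named worker-1, worker-2 and worker-3 and the extra disk shows up as /dev/sdb, you could label them with the openshift-storage label that ODF’s default node selector expects, and double-check that the disk is still raw:
# Label the dedicated storage nodes so ODF can schedule its pods on them
for node in worker-1 worker-2 worker-3; do
  oc label node "$node" cluster.ocs.openshift.io/openshift-storage=''
done
# Verify the extra disk has no partitions or LVM on it
# (node and device names here are assumptions for illustration)
oc debug node/worker-1 -- chroot /host lsblk /dev/sdb
If lsblk shows any partitions or LVM volumes on the disk, wipe them first, otherwise the local storage operator will not pick the disk up later.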
Create the Local Volume Set
ODF manages the disks through the Local Storage Operator. Let’s install the Local Storage Operator in the openshift-local-storage namespace:
oc create namespace openshift-local-storage
Create an operator group for the namespace and subscribe to the local-storage-operator with the following YAML:
apiVersion: operators.coreos.com/v1
kind: OperatorGroup
metadata:
  name: og-local-storage
  namespace: openshift-local-storage
spec:
  targetNamespaces:
    - openshift-local-storage
---
apiVersion: operators.coreos.com/v1alpha1…
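The second document is the Subscription pointing at the local-storage-operator package. A minimal sketch, assuming the stable channel and the redhat-operators catalog source (both depend on your OpenShift release), looks like this:
apiVersion: operators.coreos.com/v1alpha1
kind: Subscription
metadata:
  name: local-storage-operator
  namespace: openshift-local-storage
spec:
  channel: stable                 # assumption: pick the channel matching your OpenShift version
  installPlanApproval: Automatic
  name: local-storage-operator
  source: redhat-operators
  sourceNamespace: openshift-marketplace
Apply both documents with oc apply -f and wait until the operator pod is running in the openshift-local-storage namespace before creating the local volume set.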