diff --git a/docs/src/charm/howto/ceph-csi.md b/docs/src/charm/howto/ceph-csi.md
index 00e49b989..625a459fb 100755
--- a/docs/src/charm/howto/ceph-csi.md
+++ b/docs/src/charm/howto/ceph-csi.md
@@ -1,4 +1,4 @@
-# ceph-csi
+# How to integrate {{product}} with ceph-csi
 
 [Ceph] can be used to hold Kubernetes persistent volumes and is the recommended
 storage solution for {{product}}.
@@ -6,29 +6,19 @@ storage solution for {{product}}.
 The ``ceph-csi`` plugin automatically provisions and attaches the Ceph volumes
 to Kubernetes workloads.
 
-Follow this guide to find how {{product}} can be integrated with Ceph through
-Juju.
+## What you'll need
 
-## Prerequisites
-
-This guide assumes that you have an already existing {{product}} cluster.
+This guide assumes that you have an existing {{product}} cluster.
 See the [charm installation] guide for more details.
 
 In case of localhost/LXD Juju clouds, please make sure that the K8s units are
-configured to use VM containers with ubuntu 22.04 as base. For example, use
-the following:
-
-```
-juju deploy k8s --channel=latest/edge \
-    --base "ubuntu@22.04" \
-    --constraints "cores=2 mem=8G root-disk=16G virt-type=virtual-machine"
-```
+configured to use VM containers with Ubuntu 22.04 as the base and that the
+``virt-type=virtual-machine`` constraint is applied.
 
 ## Deploying Ceph
 
-We'll deploy a Ceph cluster containing one monitor and three storage units
-(OSDs). For the purpose of this demo, we'll allocate a limited amount of
-resources.
+Deploy a Ceph cluster containing one monitor and three storage units
+(OSDs). In this example, a limited amount of resources is allocated.
 
 ```
 juju deploy -n 1 ceph-mon \
@@ -40,10 +30,12 @@ juju deploy -n 3 ceph-osd \
 juju integrate ceph-osd:mon ceph-mon:osd
 ```
 
-If using LXD, please configure the OSD units to use VM containers by adding the
-following constraint: ``virt-type=virtual-machine``.
+If using LXD, configure the OSD units to use VM containers by adding the
+``virt-type=virtual-machine`` constraint.
 
-Once the units are ready, deploy ``ceph-csi`` like so:
+Once the units are ready, deploy ``ceph-csi``. By default, this enables
+the ``ceph-xfs`` and ``ceph-ext4`` storage classes, which leverage
+Ceph RBD.
 
 ```
 juju deploy ceph-csi --config provisioner-replicas=1
@@ -51,8 +43,7 @@ juju integrate ceph-csi k8s:ceph-k8s-info
 juju integrate ceph-csi ceph-mon:client
 ```
 
-By default, this enables the ``ceph-xfs`` and ``ceph-ext4`` storage classes,
-which leverage Ceph RBD. CephFS support can optionally be enabled like so:
+CephFS support can optionally be enabled:
 
 ```
 juju deploy ceph-fs
@@ -62,8 +53,8 @@ juju config ceph-csi cephfs-enable=True
 
 ## Validating the CSI integration
 
-Use the following to ensure that the storage classes are available and that the
-csi pods are running:
+Ensure that the storage classes are available and that the
+CSI pods are running:
 
 ```
 juju ssh k8s/leader -- sudo k8s kubectl get sc,po --namespace default
@@ -72,7 +63,7 @@ juju ssh k8s/leader -- sudo k8s kubectl get sc,po --namespace default
 The list should include the ``ceph-xfs`` and ``ceph-ext4`` storage classes as
 well as ``cephfs``, if it was enabled.
 
-Let's make sure that Ceph PVCs work as expected. Connect to the k8s leader unit
+Verify that Ceph PVCs work as expected. Connect to the k8s leader unit
 and define a PVC like so:
 
 ```
@@ -97,7 +88,7 @@ EOF
 sudo k8s kubectl apply -f /tmp/pvc.yaml
 ```
 
-Next, we are going to create a pod that writes to a Ceph volume:
+Next, create a pod that writes to a Ceph volume:
 
 ```
 cat <<EOF > /tmp/writer.yaml