Update ceph-csi.md
Edited the language to be more in line with a how-to guide rather than a tutorial. Also made changes to the layout and headings.
nhennigan authored Nov 22, 2024
Parent 7e38a99 · Commit c355416
Showing 1 changed file with 17 additions and 26 deletions: docs/src/charm/howto/ceph-csi.md
# How to integrate {{product}} with ceph-csi

[Ceph] can be used to hold Kubernetes persistent volumes and is the recommended
storage solution for {{product}}.

The ``ceph-csi`` plugin automatically provisions and attaches the Ceph volumes
to Kubernetes workloads.

## What you'll need

This guide assumes that you have an existing {{product}} cluster.
See the [charm installation] guide for more details.

In case of localhost/LXD Juju clouds, please make sure that the K8s units are
configured to use VM containers with Ubuntu 22.04 as the base, adding the
``virt-type=virtual-machine`` constraint.
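
For example, the following deployment satisfies these requirements:

```
juju deploy k8s --channel=latest/edge \
    --base "ubuntu@22.04" \
    --constraints "cores=2 mem=8G root-disk=16G virt-type=virtual-machine"
```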

## Deploying Ceph

Deploy a Ceph cluster containing one monitor and three storage units
(OSDs). In this example, a limited amount of resources is allocated.

```
# Constraint and storage values are indicative; adjust them to your environment.
juju deploy -n 1 ceph-mon \
    --constraints "cores=2 mem=4G root-disk=16G" \
    --config monitor-count=1
juju deploy -n 3 ceph-osd \
    --constraints "cores=2 mem=4G root-disk=16G" \
    --storage osd-devices=1G,1 --storage osd-journals=1G,1
juju integrate ceph-osd:mon ceph-mon:osd
```

If using LXD, configure the OSD units to use VM containers by adding the
constraint: ``virt-type=virtual-machine``.
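
A minimal sketch of the OSD deployment with this constraint added (the
resource values mirror the example above and are illustrative):

```
juju deploy -n 3 ceph-osd \
    --constraints "cores=2 mem=4G root-disk=16G virt-type=virtual-machine" \
    --storage osd-devices=1G,1 --storage osd-journals=1G,1
```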

Once the units are ready, deploy ``ceph-csi``. By default, this enables
the ``ceph-xfs`` and ``ceph-ext4`` storage classes, which leverage
Ceph RBD.

```
juju deploy ceph-csi --config provisioner-replicas=1
juju integrate ceph-csi k8s:ceph-k8s-info
juju integrate ceph-csi ceph-mon:client
```
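
You can watch the units settle with a standard Juju status check:

```
juju status --watch 2s
```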

CephFS support can optionally be enabled:

```
juju deploy ceph-fs
juju integrate ceph-fs:ceph-mds ceph-mon:mds
juju config ceph-csi cephfs-enable=True
```

## Validating the CSI integration

Ensure that the storage classes are available and that the
CSI pods are running:

```
juju ssh k8s/leader -- sudo k8s kubectl get sc,po --namespace default
```
The list should include the ``ceph-xfs`` and ``ceph-ext4`` storage classes as
well as ``cephfs``, if it was enabled.

Verify that Ceph PVCs work as expected. Connect to the k8s leader unit
and define a PVC like so:

```
cat <<EOF > /tmp/pvc.yaml
# Illustrative PVC; the name, size and storage class are examples.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ceph-xfs-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 64Mi
  storageClassName: ceph-xfs
EOF
sudo k8s kubectl apply -f /tmp/pvc.yaml
```
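
To confirm the claim was created, check its status. With the hypothetical
``ceph-xfs-pvc`` name from the sketch above, the PVC may stay ``Pending``
until a consumer pod is scheduled:

```
sudo k8s kubectl get pvc ceph-xfs-pvc
```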

Next, create a pod that writes to a Ceph volume:

```
cat <<EOF > /tmp/writer.yaml
# Illustrative writer pod; it mounts the PVC sketched above and writes a file.
apiVersion: v1
kind: Pod
metadata:
  name: pv-writer-test
spec:
  restartPolicy: Never
  volumes:
    - name: pvc-test
      persistentVolumeClaim:
        claimName: ceph-xfs-pvc
  containers:
    - name: pv-writer
      image: busybox
      command: ["/bin/sh", "-c", "echo 'PVC test data.' > /pvc/test_file"]
      volumeMounts:
        - name: pvc-test
          mountPath: /pvc
EOF
sudo k8s kubectl apply -f /tmp/writer.yaml
```
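
If the write succeeds, the pod should reach the ``Completed`` state (pod
name as in the sketch above):

```
sudo k8s kubectl get pod pv-writer-test
```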
