Ceph Standalone Kubernetes

Bootstrapping Ceph

You should have Debian installed using Buster_Setup_Preseed (https://gitea.tobias-huebner.org/tobi/Buster_Setup_Preseed) and have already run “core.sh” (as described in that repo's README.md).

This will have downloaded and installed cephadm as well as the command line utilities you will need later on to manage your cluster.

My core.sh script only adds the ceph apt repository key. Otherwise the following steps are based on the official “Deploying a new Cluster” cephadm documentation.
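If you set things up without core.sh, adding the repository by hand looks roughly like this (this is my reconstruction of the documented manual steps, not the script itself; the release name “octopus” is an assumption, substitute whichever release you actually want):

# import the ceph release key and add the apt repository
wget -q -O- 'https://download.ceph.com/keys/release.asc' | apt-key add -
echo deb https://download.ceph.com/debian-octopus/ buster main > /etc/apt/sources.list.d/ceph.list
apt update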

For our first ceph cluster node we need to create the directory where the ceph bootstrap command will put its configuration and key files. Most importantly, this directory will contain the ceph public key, which needs to be installed on every future node that should join the cluster.

mkdir -p /etc/ceph
cephadm bootstrap --mon-ip 10.0.0.XX

When finished, this command will print a URL to the dashboard together with login credentials. The dashboard has somewhat limited functionality (nothing you can’t do using the ceph and rbd command line utilities), but exploring it a bit is massively helpful to get the big picture of ceph.

After you have installed the key using ssh-copy-id -f -i /etc/ceph/ceph.pub root@NEWHOST, you can add the node via ceph orch host add NEWHOST.
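Put together (NEWHOST being a placeholder for the new machine’s hostname):

# copy the cluster's ssh key to the new node, then register it with the orchestrator
ssh-copy-id -f -i /etc/ceph/ceph.pub root@NEWHOST
ceph orch host add NEWHOST
# verify that the node shows up
ceph orch host ls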

! Once you have installed the ceph public key you should disable ssh password login as described in the Buster Preseed README.md !

Prepare a Partition to be used as an OSD

umount /storage
# remove the partition's entry from /etc/fstab so it does not get mounted again on boot
nano /etc/fstab
# wipe any existing filesystem signatures from the partition
wipefs --force --all /dev/nvme0n1p3

Add OSD

ceph orch daemon add osd mikasa:/dev/nvme0n1p3
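The new OSD should show up shortly afterwards, which you can check with:

ceph osd tree   # lists all osds and the hosts they run on
ceph status     # overall cluster health, osd count and capacity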

Dynamic provisioning (integration with Kubernetes Volumes)

Connects a standalone Ceph Cluster with Kubernetes.

Follow the docs from https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/.

In the ceph_native folder you will find fixed versions of some of the yaml files used in that documentation, plus an “encryption.yaml” file which is missing from the official docs altogether.

Additionally you will find a provisioner from quay.io for cephfs (not to be confused with ceph block devices / rbd).

Using existing ceph images

Look at the example file in “static_pv/pvc.yaml”. It contains a PersistentVolume which maps directly to the ceph image and a PersistentVolumeClaim which then claims the manually created PersistentVolume and makes it accessible for mounting in a single pod.
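To give an idea of the shape, such a pair could look roughly like this (only a sketch in the spirit of static_pv/pvc.yaml, not the actual file; the pool name, image name, monitor address and secret name are placeholders, and it uses the in-tree rbd volume type rather than the csi driver):

apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-image-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  rbd:
    monitors:
      - 10.0.0.10:6789
    pool: rbd
    image: my-existing-image
    user: admin
    secretRef:
      name: ceph-secret
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: existing-image-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ""
  volumeName: existing-image-pv
  resources:
    requests:
      storage: 10Gi

The empty storageClassName and the explicit volumeName are what tie the claim to the manually created volume instead of triggering dynamic provisioning.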

CephFS

To create a cephfs with a single command, simply run ceph fs volume create myfs. This will create the pools, create the filesystem and start the metadata servers (mds).

If something doesn’t work you might want to look at the following part.

Detailed step by step

  1. Create two pools for the filesystem
ceph osd pool create myfs_data
# I like my data to be only replicated twice, since that is plenty
ceph osd pool set myfs_data size 2

ceph osd pool create myfs_metadata
  2. Create the filesystem using ceph fs new myfs myfs_metadata myfs_data.

  3. Create a metadata server (mds) daemon for the filesystem using ceph orch apply mds myfs --placement="2 mikasa warmin". It might only work on hosts that don't already have a monitor daemon running.
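A quick sanity check once the daemons are up (assuming the filesystem is called myfs as above):

ceph fs status myfs   # shows the mds ranks and the data/metadata pools
ceph mds stat         # short summary of the mds state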

Removing mds (or monitors/osds)

The ceph orchestrator (ceph orch) takes care of deploying docker containers on the different machines. Sadly the ceph documentation is not really complete on that, so I had to figure this thing out for myself.

  • List services run by the orchestrator - ceph orch ls

  • Delete a service (like an mds) - ceph orch rm SERVICENAME (see the example below)
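For example (cephadm names the mds service created above mds.myfs, but double-check the exact name in the ls output before removing anything):

ceph orch ls            # find the exact service name
ceph orch rm mds.myfs   # remove the mds service for "myfs"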

Mounting CephFS

  1. We will create a kubernetes secret from the contents of “/etc/ceph/ceph.client.admin.keyring”, which should look like this.
[client.admin]
        key = AQC0NZxff3JuLxAAk8xDGbScP61XQq/R/xNA+A==

  2. Copy the key and base64 encode it using echo AQC0gfhydgP61XQgdfg/xNA+A== | base64 (using echo -n avoids encoding a trailing newline).

  3. Create a kubernetes secret from it (like “fs/secret.yaml”):

apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFDMGdmaHlkZ1A2MVhRZ2RmZy94TkErQT09Cg==

  4. Apply the “ceph_native/cephfs-provisioner.yaml” file.

  5. Have fun with the examples in the fs folder. You can exec into the pods using kubectl exec -it mount-direct/mount-pvc -- sh (a sketch of what such a manifest can look like follows below). Alternatively you can mount cephfs on your dev machine directly (after installing ceph-common there too, either via apt install ceph-common or via the cephadm setup from earlier) using mkdir /mnt/test && mount -t ceph 10.0.0.10:6789,10.0.0.12:6789:/ /mnt/test -o name=admin,secret=AQC0gfhydgP61XQgdfg/xNA+A==.
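For reference, a dynamically provisioned cephfs claim plus a pod mounting it could look roughly like this (only a sketch, not the actual files from the fs folder; the StorageClass name “cephfs” is an assumption, use whatever ceph_native/cephfs-provisioner.yaml actually defines):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: cephfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: cephfs
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: mount-pvc
spec:
  containers:
    - name: shell
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: data
          mountPath: /data
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: cephfs-pvc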

Good 2 Know

The following bugs are fixed in my copies of the yamls (plus some general tips):

  • encryption.yaml is missing from the official docs and needs to be applied too.

  • when attempting to join nodes via their hostname, /etc/hosts needs to be changed BEFORE starting the cluster. If you do it afterwards you have to restart the node

  • allow pool deletion with ceph tell mon.* injectargs --mon_allow_pool_delete true

  • remove file system with ceph fs fail FILESYSTEMNAME && ceph fs rm FILESYSTEMNAME --yes-i-really-mean-it

  • remove a filesystem created with the ceph fs volume create ... command - ceph fs volume rm myfs --yes-i-really-mean-it

  • remove pool with ceph osd pool delete POOLNAME POOLNAME --yes-i-really-really-mean-it

  • general info polling with ceph status, ceph mon dump, ceph fs dump and ceph osd pool ls

  • listing authentication keys with ceph auth ls

Make the ceph-common CLIs usable on other nodes

Copy /etc/ceph/ceph.conf and the keyring file (/etc/ceph/ceph.client.admin.keyring) to the same directory on the other node.
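For example (OTHERHOST being a placeholder for the target machine):

# /etc/ceph must already exist on the target
scp /etc/ceph/ceph.conf /etc/ceph/ceph.client.admin.keyring root@OTHERHOST:/etc/ceph/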