You should have Debian installed using [Buster_Setup_Preseed](https://gitea.tobias-huebner.org/tobi/Buster_Setup_Preseed) and have already run "core.sh" (as described in its README.md). This will have downloaded and installed cephadm as well as the command line utilities you will need later on to manage your cluster. My core.sh script only adds the ceph apt repository key; beyond that, the following steps are based on "Deploying a new Cluster using Cephadm" from the official documentation.
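If you want to double-check that the tooling is in place before continuing, a quick sanity check could look like this (the output will obviously depend on your versions):

# verify that cephadm and the ceph CLI are installed
cephadm version
ceph --version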
For our first ceph cluster node we need to create the directory where the ceph bootstrap command will put its configuration and key files. Most importantly, this directory will contain the ceph public key, which has to be installed on every future node that should join the cluster.
mkdir -p /etc/ceph
cephadm bootstrap --mon-ip 10.0.0.XX
When finished, this command will print a URL to the dashboard and some login data. The dashboard has limited functionality (nothing you can't do using the ceph and rbd command line utilities), but it is great for getting the big picture of ceph, so exploring it a bit is massively helpful.
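The same big picture is also available on the command line; right after the bootstrap you can check that the first monitor and manager came up with something like:

# overall cluster health plus the daemons managed by the orchestrator
ceph status
ceph orch ps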
After you have installed the key using ssh-copy-id -f -i /etc/ceph/ceph.pub root@NEWHOST, you can add the node via ceph orch host add NEWHOST.
! Once you have installed the ceph public key you should disable SSH password login as described in the Buster Preseed README.md !
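Put together for a hypothetical new node called warmin (substitute your own hostname), the whole thing looks like this:

# install the cluster's public SSH key on the new node, then register it
ssh-copy-id -f -i /etc/ceph/ceph.pub root@warmin
ceph orch host add warmin
# verify that the host shows up
ceph orch host ls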
# free the partition that will become an OSD: unmount it, remove its
# /etc/fstab entry and wipe any existing filesystem signatures
umount /storage
nano /etc/fstab
wipefs --force --all /dev/nvme0n1p3
Add OSD
ceph orch daemon add osd mikasa:/dev/nvme0n1p3
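To see which devices the orchestrator considers available and to confirm that the new OSD actually came up, these two commands are handy:

# devices ceph can see on the hosts, and the resulting OSD tree
ceph orch device ls
ceph osd tree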
Connects a standalone Ceph Cluster with Kubernetes.
Follow the docs from https://docs.ceph.com/en/latest/rbd/rbd-kubernetes/.
In the ceph_native folder you will find fixed versions of some of the yaml files used in that documentation, plus an "encryption.yaml" file which is missing from the docs entirely. Additionally you will find a provisioner from quay.io for cephfs (not to be confused with plain ceph / rbd).
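For reference, the linked documentation essentially boils down to creating a dedicated pool for the RBD images and a restricted client for Kubernetes; a rough sketch of those steps (pool and client names follow the upstream example and can be changed):

# pool that will hold the RBD images used by Kubernetes
ceph osd pool create kubernetes
rbd pool init kubernetes
# client credentials for the csi driver, limited to that pool
ceph auth get-or-create client.kubernetes mon 'profile rbd' osd 'profile rbd pool=kubernetes'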
Look at the example file in "static_pv/pvc.yaml". It contains a PersistentVolume which maps directly to a ceph image and a PersistentVolumeClaim which then claims the manually created PersistentVolume and makes it accessible for mounting in a single pod.
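The PersistentVolume has to reference an RBD image that already exists, so create one up front; a minimal sketch, assuming the kubernetes pool from above and an arbitrary image name:

# create a 1 GiB image for the static PersistentVolume to map to
rbd create --size 1024 kubernetes/static-data
# then create the PV and the PVC
kubectl apply -f static_pv/pvc.yaml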
To create a cephfs with a single command, simply run ceph fs volume create myfs. This will create the pools, create the fs and start the metadata servers (mds).
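To confirm that the volume was actually created and the mds daemons are running, you can poll it with:

# list the filesystems and show the status of the new one
ceph fs ls
ceph fs status myfs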
If something doesn’t work you might want to look at the following part.
ceph osd pool create myfs_data
# I like my data to be only replicated twice, since that is plenty
ceph osd pool set myfs_data size 2
ceph osd pool create myfs_metadata
Create the filesystem using ceph fs new myfs myfs_metadata myfs_data. Create the metadata server daemons (mds) for the filesystem using ceph orch apply mds myfs --placement="2 mikasa warmin". It might only work on hosts that don't already have a monitor daemon running.
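Whether the orchestrator actually scheduled the mds daemons can be checked afterwards, for example with (the filters should work on recent releases):

# the mds service and the daemons it spawned
ceph orch ls mds
ceph orch ps --daemon-type mds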
The ceph orchestrator (ceph orch) takes care of deploying docker containers on the different machines. Sadly the ceph documentation is not really complete on that, so I had to figure this out myself.
List services run by the orchestrator - ceph orch ls
Delete a service (like an mds) - ceph orch rm SERVICENAME
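As a concrete example, the mds service applied above should show up under a name like mds.myfs (the exact name is whatever ceph orch ls prints) and could be removed again with:

# remove the mds service that was applied for myfs
ceph orch rm mds.myfs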
The admin key can be found in /etc/ceph/ceph.client.admin.keyring:
[client.admin]
  key = AQC0NZxff3JuLxAAk8xDGbScP61XQq/R/xNA+A==
Copy the key and base64 encode it using echo AQC0gfhydgP61XQgdfg/xNA+A== | base64
Create a kubernetes secret from it (like "fs/secret.yaml"):
apiVersion: v1
kind: Secret
metadata:
  name: ceph-secret
data:
  key: QVFDMGdmaHlkZ1A2MVhRZ2RmZy94TkErQT09Cg==
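Instead of writing the yaml by hand you can also let kubectl do the base64 encoding itself; a sketch using the same key and secret name:

# kubectl base64-encodes the value on its own
kubectl create secret generic ceph-secret --from-literal=key=AQC0gfhydgP61XQgdfg/xNA+A==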
Apply the “ceph_native/cephfs-provisioner.yaml” file.
Have fun with the examples in the fs folder. You can exec into the pod using kubectl exec -it mount-direct/mount-pvc -- sh. Alternatively you can mount cephfs on your dev machine directly (after installing ceph-common there too, e.g. apt install ceph-common, or via cephadm like on the node set up earlier) using mkdir /mnt/test && mount -t ceph 10.0.0.10:6789,10.0.0.12:6789:/ /mnt/test -o name=admin,secret=AQC0gfhydgP61XQgdfg/xNA+A==.
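Putting the secret directly on the command line leaks it into your shell history; mount.ceph also accepts a secretfile option, so a slightly nicer variant (file path is just an example) would be:

# keep the key in a root-only file instead of on the command line
echo 'AQC0gfhydgP61XQgdfg/xNA+A==' > /etc/ceph/admin.secret
chmod 600 /etc/ceph/admin.secret
mount -t ceph 10.0.0.10:6789,10.0.0.12:6789:/ /mnt/test -o name=admin,secretfile=/etc/ceph/admin.secret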
The following bugs are fixed in my copies of the yamls:
encryption.yaml is missing from the official docs and needs to be applied too.
when attempting to join nodes via their hostname, /etc/hosts needs to be changed BEFORE starting the cluster. If you do it afterwards you have to restart the node.
allow pool deletion ceph tell mon.* injectargs --mon_allow_pool_delete true
remove file system with ceph fs fail FILESYSTEMNAME && ceph fs rm FILESYSTEMNAME --yes-i-really-mean-it
remove a filesystem created with the ceph fs volume create ... command - ceph fs volume rm myfs --yes-i-really-mean-it
remove pool with ceph osd pool delete POOLNAME POOLNAME --yes-i-really-really-mean-it
general info polling with ceph status, ceph mon dump, ceph fs dump and ceph osd pool ls
listing crypto keys ceph auth ls
Copy /etc/ceph/ceph.conf and the keyring file to the same directory somewhere else.
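If you do this on a machine outside the cluster you can point the CLI at the copied files explicitly; a sketch, assuming both files ended up in ~/ceph-conf (path is just an illustration):

# run ceph against the copied config and admin keyring
ceph --conf ~/ceph-conf/ceph.conf --keyring ~/ceph-conf/ceph.client.admin.keyring status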