For some general information, check out my blog post about Ceph. Otherwise, the Rook.io documentation itself is really good for prepping your cluster for Ceph.
Some deployments require that the same volume can be read from more than one pod. This is where CephFS (fs.yaml) comes in. In Ceph you can have object storage (AWS S3 kind of stuff, where you get large binary objects tagged with some metadata), block storage (behaving like a single Linux disk that can be mounted on only a single host at a time), and CephFS (a filesystem layer on top of that, which acts more like a network file system, only this time with all the Ceph goodness underneath).
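In Kubernetes terms that means a ReadWriteMany access mode, which block storage can't give you. As a rough sketch (not one of the manifests in this repo), a CephFS-backed claim could look something like this; the storageClassName `rook-cephfs` is an assumption, check fs_storage.yaml for the actual name:

```yaml
# Hypothetical PVC sketch: ReadWriteMany needs a CephFS-backed storage class,
# RBD/block volumes only support ReadWriteOnce.
# "rook-cephfs" is an assumed name -- see fs_storage.yaml for the real one.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-data
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
  storageClassName: rook-cephfs
```

The `ceph` and `rbd` commands below are usually run from inside the Rook toolbox pod (toolbox.yaml), e.g. `kubectl -n rook-ceph exec -it deploy/rook-ceph-tools -- bash` (namespace and deployment name assumed from the Rook defaults).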
list pools

```
$ ceph osd lspools
1 device_health_metrics
6 myfs-metadata
7 myfs-data0
8 2rep
9 3rep
```
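The 2rep and 3rep pools presumably come out of block_storage_2rep.yaml and block_storage_3rep.yaml. If you want to double-check the replication factor of a pool, the standard `ceph osd pool get` command works from the toolbox as well (output shown assuming the pools really are 2- and 3-way replicated):

```
# print the replication factor ("size") of a pool
$ ceph osd pool get 2rep size
size: 2
$ ceph osd pool get 3rep size
size: 3
```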
list volumes

```
$ rbd ls 2rep
csi-vol-026ae300-0acc-11eb-9eec-22555fb3e090
csi-vol-09127e78-0b30-11eb-9eec-22555fb3e090
csi-vol-173dd3bf-0b06-11eb-9eec-22555fb3e090
csi-vol-17559d06-0b06-11eb-9eec-22555fb3e090
csi-vol-264f7292-0b29-11eb-9eec-22555fb3e090
csi-vol-852e4c76-175f-11eb-9eec-22555fb3e090
csi-vol-9320336e-0b08-11eb-9eec-22555fb3e090
csi-vol-b3939699-0b09-11eb-9eec-22555fb3e090
csi-vol-b3ab32df-0b09-11eb-9eec-22555fb3e090
csi-vol-dbb1a612-0acc-11eb-9eec-22555fb3e090
```
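The csi-vol-* names don't say much on their own. Assuming the images were provisioned through the ceph-csi driver that Rook ships, each PersistentVolume carries the image name in its CSI volume attributes, so a query like this (field names assumed from ceph-csi, adjust if yours differ) maps images back to claims:

```
# list PVs together with their claim and the backing RBD image name
$ kubectl get pv -o custom-columns=PV:.metadata.name,CLAIM:.spec.claimRef.name,IMAGE:.spec.csi.volumeAttributes.imageName
```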
backup volume to file

```
$ rbd export 2rep/csi-vol-026ae300-0acc-11eb-9eec-22555fb3e090 backup.img
Exporting image: 100% complete...done.
```
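To get such a backup back into the cluster, `rbd import` is the counterpart of `rbd export`; the target image name below is just an example:

```
# import the image file into the pool under a new name (name is an example)
$ rbd import backup.img 2rep/restored-volume
```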