Cephalocon 2024 Demo Materials #184
Pre-recorded Demo Video File - no audio; the video is intended to be narrated live.
Background, scripts, and files related to John's presentation for Cephalocon 2024.
NOTE: The demonstration recordings prepared for SDC 2024 and Cephalocon 2024 are similar but not the same. In particular, I felt that Cephalocon attendees are more likely to get hands-on with the feature, so this session includes some basic background and debugging information not found in the SDC recording. However, most of the cluster setup is the same.
Prerequisites
A Ceph cluster can be installed just prior to this procedure.
The Windows client may need additional time to join AD; we used the AD DC from the samba-container project with a default configuration.
Copy the contents of cluster2.yaml to the ceph admin node. Content below.

Setup Phase
SMB Module and filesystem
ceph mgr module enable smb
ceph fs volume create cephfs
ceph fs subvolumegroup create cephfs demos
for sub in uv1 uv2 domv1 domv2 ; do ceph fs subvolume create cephfs --group-name=demos --sub-name=$sub --mode=0777 ; done
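This is optional, but the subvolumes can be listed back as a quick sanity check:
ceph fs subvolume ls cephfs --group_name=demos
# expect uv1, uv2, domv1, and domv2 in the output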
Host labeling
ceph orch host label add ceph0 first
We label the host nodes this way to make it easy to determine where the smb services will be placed later on. This is not always necessary for general use.
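To confirm the label was applied, list the hosts and check the LABELS column:
ceph orch host ls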
First Cluster
This first cluster is created imperatively, one command at a time, rather than from a declarative set of resource descriptions. The flow should feel familiar to anyone who has used Ceph's NFS module before. The resulting smb services are deployed by Cephadm orchestration.
ceph smb cluster ls
- there should be no clusters yet
ceph smb cluster create starter user --define-user-pass=test%D3m0123 --define-user-pass=test2%0th3rD3m0 --placement='1 label:first'
ceph smb share create starter share1 --cephfs-volume=cephfs --subvolume=demos/uv1 --path=/
ceph smb share create starter share2 --cephfs-volume=cephfs --subvolume=demos/uv2 --path=/ --share-name='Share Two'
ceph smb show
ceph smb show ceph.smb.share
or just one specific cluster:
ceph smb show ceph.smb.cluster.starter --format=yaml
ceph orch ls
to show running services, including a new smb service

Quick CLI Test
We can do a quick test using Samba's smbclient tool. It's fast to use, but you can skip this step or use a Windows client if desired.

smbclient -U 'test%D3m0123' //192.168.76.200/share1
smbclient -U 'test2%0th3rD3m0' //192.168.76.200/share1
smbclient -U 'test2%0th3rD3m0' //192.168.76.200/"Share Two"
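To go one step beyond listing and browsing, smbclient can also run a short command batch non-interactively; the file name here is made up for illustration:
echo 'hello from the demo' > hello.txt
smbclient -U 'test%D3m0123' //192.168.76.200/share1 -c 'put hello.txt; ls'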
Second Cluster
This cluster will be installed using the "declarative method". We define everything we need to set up our cluster: cluster settings, shares, and domain join authentication info in a single YAML file we'll call cluster2.yaml. This YAML file defines a cluster that will use both Active Directory and clustering. Samba's clustering system, CTDB, will manage the supplied public IP addresses.
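The exact cluster2.yaml used in the recording is not reproduced here. The sketch below shows the general shape of such a spec, reusing the cluster id (ccad), subvolumes, and share names that appear elsewhere in this walkthrough; the realm, DNS address, join credentials, share ids, and placement count are stand-in values.

resources:
  # The cluster: AD-joined, with CTDB-managed public addresses.
  - resource_type: ceph.smb.cluster
    cluster_id: ccad
    auth_mode: active-directory
    domain_settings:
      realm: DOMAIN1.SINK.TEST          # stand-in realm
      join_sources:
        - source_type: resource
          ref: join1-admin
    custom_dns:
      - "192.168.76.204"                # stand-in AD DC address
    public_addrs:
      - address: 192.168.76.50/24
    clustering: always                  # request CTDB clustering explicitly
    placement:
      count: 3
  # Credentials used to join the domain.
  - resource_type: ceph.smb.join.auth
    auth_id: join1-admin
    auth:
      username: Administrator
      password: Passw0rd                # stand-in password
  # Shares backed by the domv1/domv2 subvolumes created earlier.
  - resource_type: ceph.smb.share
    cluster_id: ccad
    share_id: share1
    name: Cluster Share One
    cephfs:
      volume: cephfs
      subvolumegroup: demos
      subvolume: domv1
      path: /
  - resource_type: ceph.smb.share
    cluster_id: ccad
    share_id: share2
    name: Useful Files
    cephfs:
      volume: cephfs
      subvolumegroup: demos
      subvolume: domv2
      path: /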
ceph smb apply -i - < cluster2.yaml
ceph orch ls
ceph smb show ceph.smb.cluster --format=yaml
ceph smb show ceph.smb.cluster.ccad --format=yaml
ceph orch ls
should show 3/3 daemons running and we're ready to test client access.

Windows Client
From the Windows client, log into \\192.168.76.50\Cluster Share One, where the IP address is any of the public IP addresses from the YAML spec. We can read and write files to the shares.

Log into \\192.168.76.200\share1 (using the IP of the first cluster node). This should prompt for a login. Provide the credentials of one of the users defined with cluster starter.
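The same checks can be scripted from a Windows command prompt instead of Explorer; the drive letters are arbitrary and the passwords are prompted for interactively:
net use Z: \\192.168.76.200\share1 /user:test
net use Y: "\\192.168.76.50\Cluster Share One" /user:DOMAIN1\bwayne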
Taking a Deeper Look
SMB on Ceph makes use of multiple systemd services running on a single node.
Run systemctl list-units | grep smb to see some of them on a node that is running an smb service (see ceph orch ps for more info). Each service can be examined individually using systemctl status <unit-name>. This includes the service for the init containers and the services for the sidecar containers.

When the smb service is deployed using additional features, such as Active Directory support or true smbd-level clustering, additional sidecar systemd services will be deployed. For example, run systemctl list-units | grep smb | grep winbind to narrow down the list of services to show the winbind sidecar. Run systemctl status <winbind-sidecar-name> to view details relating to the service.

Each service will emit logging to the systemd journal. Run commands such as journalctl -u <service_name> to view journal logging for that service. Similarly, while the container is running the logs can be fetched directly from the container engine. Run podman logs <container_name> to view logs for the container directly.

There are some helpful commands that one can run within a Samba container running as part of the smb service. Enter the container by running podman exec -it <container_name> bash.
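Once inside the container, a couple of quick health checks are possible; this sketch assumes the AD-joined cluster and that the image ships Samba's standard client tools:
testparm -s        # parse and dump the effective Samba configuration
wbinfo -t          # verify the secure channel to the domain controller
wbinfo -u          # list domain users visible through winbind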
Run net conf list to see a view of Samba's configuration. Attempt test connections using smbclient in the container, for example: smbclient -U 'domain1\bwayne%1115Rose.' -L //localhost to list shares. Run smbclient -U 'domain1\bwayne%1115Rose.' //localhost/'Useful Files' to connect to one of the shares we defined earlier.

Finally, we touch on one important aspect of how the smb module interacts with the orchestration layer. The smb module writes configuration objects into a RADOS pool named .smb. When debugging it can be useful to examine these objects directly. We can do this by first entering a cephadm shell - run cephadm shell. Then use rados commands such as:
rados lspools
rados ls --pool=.smb --all
rados get --pool=.smb --namespace=<cluster_id> config.smb /dev/stdout
rados get --pool=.smb --namespace=<cluster_id> spec.smb /dev/stdout
These objects store JSON configuration that may be compact and can be expanded into a more human readable form using jq or python3 -m json.tool.
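For example, assuming the second cluster's id of ccad, the stored config can be dumped and pretty-printed in one pipeline:
rados get --pool=.smb --namespace=ccad config.smb /dev/stdout | python3 -m json.tool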
Deleting a Cluster
ceph smb cluster ls
- to list clusters
ceph smb share ls starter
- to list shares belonging to the starter cluster
ceph smb share rm starter share1
- to delete the first share
ceph smb share rm starter share2
- to delete the second share
ceph orch ls
- to show the two clusters are still running
ceph smb cluster rm starter
- to remove the cluster
ceph orch ls
- to show that the first cluster has been deleted
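The AD-backed cluster (ccad) was left running for the demo, but it can be torn down the same way; the share ids below follow the sketch spec above and are assumptions about the real cluster2.yaml. Re-applying the YAML with intent: removed set on each resource is the declarative equivalent.
ceph smb share rm ccad share1
ceph smb share rm ccad share2
ceph smb cluster rm ccad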