
-o mfspassword in the CSI #12

Open
saandre15 opened this issue Jun 16, 2023 · 5 comments

@saandre15

mfsmount allows the user to specify the following option when mounting from a MooseFS master server that requires password authentication to access a specific directory:

-o mfspassword=PASSWORD

This is useful in a production environment where certain directories on the master server need to be restricted from clients. I am just wondering whether such a feature exists in moosefs-csi?
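For reference, a plain mfsmount invocation using this option might look like the sketch below (hostname, subfolder, and mount point are placeholders; check the mfsmount man page for your version's exact flags):

```shell
# Mount a password-protected export from the master (placeholder values).
mfsmount /mnt/mfs -H mfsmaster.example.com -S /restricted -o mfspassword=PASSWORD
```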

@xandrus
Member

xandrus commented Jun 21, 2023

Hi,
at the moment such a feature is not available, but we will definitely add it.

Thank you for bringing it to our attention.

@xandrus
Member

xandrus commented Jun 30, 2023

Hi,

I would like to inform you that I have added an additional argument to the driver, which lets you pass extra mfsmount options to the driver.
So basically you can pass an extra option this way:
--mfs-mount-options mfsmd5pass=MD5
or with a plain-text password:
--mfs-mount-options mfspassword=PASSWORD
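If you would rather avoid putting the plain-text password in the arguments, the MD5 digest for mfsmd5pass can be computed like this (a sketch; verify against the MooseFS documentation that this is the digest form the master expects):

```shell
# Compute the MD5 digest of the password.
# printf avoids the trailing newline that echo would add to the hashed input.
printf '%s' 'PASSWORD' | md5sum | awk '{print $1}'
```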

I would also like to add that I have built a dev container image, so the image and arguments should now look like this:

          image: registry.moosefs.com/moosefs-csi-plugin:latest-dev
          args:
            - "--mode=controller"
            - "--csi-endpoint=$(CSI_ENDPOINT)"
            - "--master-host=$(MASTER_HOST)"
            - "--master-port=$(MASTER_PORT)"
            - "--root-dir=$(ROOT_DIR)"
            - "--plugin-data-dir=$(WORKING_DIR)"
            - "--mfs-logging=$(MFS_LOGGING)"
            - "--mfs-mount-options=$(MFS_MOUNT_OPTIONS)"

Configuration example is available here:
https://github.com/xandrus/moosefs-csi/tree/master/deploy/kubernetes/with-mfsmaster-password
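One common pattern (a sketch, not necessarily how the linked example does it; the Secret name and key are assumptions) is to keep the password out of the manifest by storing the mount options in a Kubernetes Secret and exposing them to the driver through the MFS_MOUNT_OPTIONS environment variable:

```yaml
# Hypothetical Secret holding the extra mount options (name/key are placeholders).
apiVersion: v1
kind: Secret
metadata:
  name: moosefs-mount-options
type: Opaque
stringData:
  mount-options: "mfspassword=PASSWORD"
---
# Then, in the plugin container spec, populate the variable from the Secret:
#   env:
#     - name: MFS_MOUNT_OPTIONS
#       valueFrom:
#         secretKeyRef:
#           name: moosefs-mount-options
#           key: mount-options
```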

If everything works without issues, I will create a pull request.
https://github.com/xandrus/moosefs-csi

@saandre15
Author

@xandrus,

Were you able to get password authentication (-o mfspassword) to work with moosefs-csi?

Thanks

@xandrus
Member

xandrus commented Jul 24, 2023

@dretechtips

Yes.

As I said in my previous response, I have added one extra argument to moosefs-csi to pass additional options during the mfsmount process.

Please check this example:
https://github.com/xandrus/moosefs-csi/tree/master/deploy/kubernetes/with-mfsmaster-password

@saandre15
Author

saandre15 commented Jul 25, 2023

@xandrus et al.,

I am getting the following error using the 0.9.6-dev moosefs-csi-plugin in an Ubuntu Jammy container, from my PVC-mounted pods on the k3s distribution of Kubernetes (v1.26+):

  Type     Reason              Age                From                     Message
  ----     ------              ----               ----                     -------
  Normal   Scheduled           14m                default-scheduler        Successfully assigned default/my-moosefs-pod to hybrid-001
  Warning  FailedMount         85s (x6 over 12m)  kubelet                  Unable to attach or mount volumes: unmounted volumes=[moosefs-volume], unattached volumes=[moosefs-volume], failed to process volumes=[]: timed out waiting for the condition
  Warning  FailedAttachVolume  15s (x7 over 12m)  attachdetach-controller  AttachVolume.Attach failed for volume "pvc-db01a8fb-e0e9-446a-9af9-ec1033381e4b" : timed out waiting for external-attacher of csi.moosefs.com CSI driver to attach volume pvc-db01a8fb-e0e9-446a-9af9-ec1033381e4b

when deploying these test-case YAML files:

---
kind: Pod
apiVersion: v1
metadata:
  name: my-moosefs-pod
spec:
  containers:
    - name: my-frontend
      image: busybox
      volumeMounts:
        - mountPath: "/data"
          name: moosefs-volume
      command: [ "sleep", "1000000" ]
  volumes:
    - name: moosefs-volume
      persistentVolumeClaim:
        claimName: my-moosefs-pvc
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-moosefs-pvc
spec:
  storageClassName: moosefs-file-storage
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi
---

Kubernetes Node Setup: 5x Raspberry PI (arm64) & 1x Generic Server (amd64).

MooseFS Storage Setup: 5x Generic Servers (amd64)

MooseFS mfsexports.cfg Configuration

[OBFUSCATED]                 /[OBFUSCATED] rw,alldirs,mingoal=2,maproot=0:0,password=[OBFUSCATED],admin

An overall summary of the Kubernetes deployment: I am using the @Kunde21 CSI resizer, attacher, etc. (not moosefs-csi-plugin) from the master branch of https://github.com/Kunde21/moosefs-csi, since the images provided in the config don't work on bare metal. I am building and fetching moosefs-csi-plugin 0.9.6-dev from https://hub.docker.com/r/dfayl16/moosefs-csi-plugin, which uses Ubuntu Jammy as a base image instead of Debian (since the Debian version does not support ARM64). I can send the full Kubernetes deployment YAML files for further debugging.

None of my pods are crashing.

Node pod logs

...
time="2023-07-24T21:59:46Z" level=info msg="MountMfs - Successfully mounted [OBFUSCATED] to /mnt/hybrid-001"
time="2023-07-24T21:59:46Z" level=info msg="Setting up Mfs Logging. Mfs path: /DevOps/containers/cluster/active/prod.[REGION].[NETWORK_NAME]/pv_data/logs"
time="2023-07-24T21:59:46Z" level=info msg="Mfs Logging set up!"
time="2023-07-24T21:59:46Z" level=info msg="StartService - endpoint unix:///csi/csi.sock"
time="2023-07-24T21:59:46Z" level=info msg=CreategRPCServer
time="2023-07-24T21:59:46Z" level=info msg="CreateListener - endpoint unix:///csi/csi.sock"
time="2023-07-24T21:59:46Z" level=info msg="CreateListener - Removing socket /csi/csi.sock"
time="2023-07-24T21:59:46Z" level=info msg="StartService - Registering node service"
time="2023-07-24T21:59:46Z" level=info msg="StartService - Starting to serve!"
time="2023-07-24T21:59:49Z" level=info msg=GetPluginInfo
time="2023-07-24T21:59:51Z" level=info msg=NodeGetInfo

However, I find it odd that the controller keeps alternating between ControllerPublishVolume and ControllerUnpublishVolume calls for the same volumes.

...
time="2023-07-25T07:48:13Z" level=info msg="ControllerPublishVolume - VolumeId: pvc-db01a8fb-e0e9-446a-9af9-ec1033381e4b NodeId: hybrid-001 VolumeContext: map[storage.kubernetes.io/csiProvisionerIdentity:1690235988382-8081-csi.moosefs.com]"
time="2023-07-25T07:48:41Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-6ce96836-39a1-4908-b050-bafb50fe4545, NodeId: compute-004"
time="2023-07-25T07:48:41Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-6ce96836-39a1-4908-b050-bafb50fe4545, NodeId: hybrid-001"
time="2023-07-25T07:49:50Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-06162ab9-49d5-4f45-b53e-a8bd063ad0d4, NodeId: hybrid-001"
time="2023-07-25T07:49:50Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-6ce96836-39a1-4908-b050-bafb50fe4545, NodeId: hybrid-001"
time="2023-07-25T07:49:50Z" level=info msg="ControllerPublishVolume - VolumeId: pvc-db01a8fb-e0e9-446a-9af9-ec1033381e4b NodeId: hybrid-001 VolumeContext: map[storage.kubernetes.io/csiProvisionerIdentity:1690235988382-8081-csi.moosefs.com]"
time="2023-07-25T07:49:50Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-3bd2b6a3-78d7-4fda-aca4-200442675670, NodeId: compute-001"
time="2023-07-25T07:49:50Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-6ce96836-39a1-4908-b050-bafb50fe4545, NodeId: compute-004"
time="2023-07-25T07:50:05Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-06162ab9-49d5-4f45-b53e-a8bd063ad0d4, NodeId: hybrid-001"
time="2023-07-25T07:51:39Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-3bd2b6a3-78d7-4fda-aca4-200442675670, NodeId: compute-001"
time="2023-07-25T07:53:13Z" level=info msg="ControllerPublishVolume - VolumeId: pvc-db01a8fb-e0e9-446a-9af9-ec1033381e4b NodeId: hybrid-001 VolumeContext: map[storage.kubernetes.io/csiProvisionerIdentity:1690235988382-8081-csi.moosefs.com]"
time="2023-07-25T07:53:41Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-6ce96836-39a1-4908-b050-bafb50fe4545, NodeId: hybrid-001"
time="2023-07-25T07:53:41Z" level=info msg="ControllerUnpublishVolume - VolumeId: pvc-6ce96836-39a1-4908-b050-bafb50fe4545, NodeId: compute-004"

There is a lack of information on the internet about the latest development builds of moosefs-csi-plugin for debugging. Please try running a bare-metal instance of moosefs-csi-plugin with multi-architecture (arm64/amd64) support, and let me know whether you were able to get PVC mounting to work using k3s or any other Kubernetes distribution.

Thanks,

EDIT: It was an issue w/ my RBAC configuration & symlinks.
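For anyone hitting the same "timed out waiting for external-attacher" symptom: the external-attacher sidecar needs RBAC permissions on VolumeAttachment objects, and a missing or misbound ClusterRole produces exactly this timeout. A minimal sketch of the relevant rules (the role name is a placeholder; compare against the RBAC manifests shipped with your sidecar version):

```yaml
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: csi-attacher-role   # placeholder name
rules:
  # The attacher watches and updates VolumeAttachment objects.
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["volumeattachments/status"]
    verbs: ["patch"]
  # It also reads/updates PVs and looks up CSINode objects.
  - apiGroups: [""]
    resources: ["persistentvolumes"]
    verbs: ["get", "list", "watch", "update", "patch"]
  - apiGroups: ["storage.k8s.io"]
    resources: ["csinodes"]
    verbs: ["get", "list", "watch"]
```

A ClusterRoleBinding must also tie this role to the ServiceAccount the attacher pod runs under.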
