# Examples

This directory contains example AutoLFADS experiments. Refer to the individual directories for dataset descriptions and usage instructions. Utility scripts that generate summary figures of firing rate inference (`firing_rate_inference.py`) and hyperparameter progression (`hp_progression.py`) are also found in the root `examples` directory. Use `--help` with either script to learn more about its usage.
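
For instance, to view each script's options (a minimal sketch, assuming the scripts are invoked with `python`; check `--help` for the authoritative argument list):

```bash
# Show usage and available flags for each summary-figure script
python firing_rate_inference.py --help
python hp_progression.py --help
```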

## Useful Tips

### KubeFlow Data Management

If you don't have a networked storage solution that can be connected to your KubeFlow cluster, you can copy data to a Persistent Volume Claim (PVC) using the configuration below. The top block creates the storage request (change the amount to the required value), and the bottom block creates a simple shell container where you can inspect and move files as necessary.

Copy the following to a new file:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: exp-data
  namespace: kubeflow-user-example-com
spec:
  storageClassName: external-nfs-dynamic
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 5Gi

---
apiVersion: v1
kind: Pod
metadata:
  name: exp-data-debug
  namespace: kubeflow-user-example-com
spec:
  containers:
  - name: alpine
    image: alpine:latest
    command: ['sleep', 'infinity']
    volumeMounts:
    - name: mypvc
      mountPath: /share
      readOnly: false
  volumes:
  - name: mypvc
    persistentVolumeClaim:
      claimName: exp-data
```

Create the new resources:

```bash
kubectl apply -f <filename>.yaml
```
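
As a quick sanity check, you can confirm the resources came up before copying anything; the PVC should report a `Bound` status once provisioning succeeds:

```bash
# Verify that the PVC is bound and the debug pod is running
kubectl get pvc exp-data -n kubeflow-user-example-com
kubectl get pod exp-data-debug -n kubeflow-user-example-com
```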

Transfer files from local to remote (here, a local `data` directory is copied into the pod's `/share` mount):

```bash
kubectl cp -n kubeflow-user-example-com data exp-data-debug:/share
```

(Debugging) Connect to the shell container to inspect the transferred files:

```bash
kubectl exec -it -n kubeflow-user-example-com exp-data-debug -- sh
```
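
Inside the container, the PVC is mounted at `/share`, so the copied data can be checked with standard shell tools (the `alpine` image provides these via BusyBox):

```sh
# List the transferred files and check how much space they use on the PVC
ls -la /share
du -sh /share
```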

(Debugging) Transfer files from remote to local:

```bash
kubectl cp -n kubeflow-user-example-com exp-data-debug:/share data
```
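
When you're done staging data, the debug pod can be removed; deleting the pod leaves the PVC and its contents intact, whereas deleting the PVC itself would discard the data:

```bash
# Remove only the debug pod; the exp-data PVC is preserved
kubectl delete pod exp-data-debug -n kubeflow-user-example-com
```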