To connect SFTPGo to AWS S3, you need to specify credentials, a bucket and a region. The list of available regions is in the AWS documentation. For example, if your bucket is in Frankfurt, you have to set the region to `eu-central-1`. You can also specify an AWS storage class; leave it blank to use the default one. An endpoint is required only if you are connecting to an S3 compatible storage, such as MinIO.
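These settings map onto the S3 filesystem section of a user definition. A minimal sketch, assuming the `s3config` field names of recent SFTPGo versions (`provider` value `1` selects S3; the key values are illustrative):

```json
{
  "filesystem": {
    "provider": 1,
    "s3config": {
      "bucket": "my-bucket",
      "region": "eu-central-1",
      "access_key": "AKIAEXAMPLE",
      "access_secret": "my-secret",
      "endpoint": "",
      "storage_class": ""
    }
  }
}
```

For MinIO or another S3 compatible storage, set `endpoint` to the server URL, for example `http://127.0.0.1:9000`.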
The AWS SDK offers several options for providing credentials. We support:

- providing static access keys
- IAM roles for Amazon EC2
- IAM roles for tasks, if your application uses an ECS task definition
- IAM roles for service accounts (IRSA), if you run SFTPGo on AWS EKS
- assuming a specific IAM role by setting its ARN

So provide access keys to use the first option, or leave them blank to use one of the other credential sources. You can also use a temporary session token.
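For example, to rely on an assumed IAM role instead of static keys, leave the access keys empty and set the role ARN. A sketch, assuming a `role_arn` field is available in your SFTPGo version (field name and ARN are illustrative):

```json
{
  "s3config": {
    "bucket": "my-bucket",
    "region": "eu-central-1",
    "access_key": "",
    "access_secret": "",
    "role_arn": "arn:aws:iam::123456789012:role/sftpgo-s3"
  }
}
```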
By specifying a different `key_prefix`, you can assign different "folders" of the same bucket to different users. This is similar to a chroot directory for a local filesystem. Each SFTP/SCP user can only access the assigned folder and its contents. The folder identified by `key_prefix` does not need to be pre-created.
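For example, two users can share one bucket while remaining isolated from each other. A sketch using illustrative prefixes:

```json
{
  "s3config": {
    "bucket": "shared-bucket",
    "region": "eu-central-1",
    "key_prefix": "users/alice/"
  }
}
```

A second user configured with `"key_prefix": "users/bob/"` would see only the objects under `users/bob/`.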
SFTPGo uses multipart uploads and parallel downloads for storing and retrieving files from S3.
For multipart uploads you can customize the part size and the upload concurrency. Please note that if the upload bandwidth between the client and SFTPGo is greater than the upload bandwidth between SFTPGo and S3, the client has to wait for the last parts to be uploaded to S3 after it finishes sending the file to SFTPGo, and it may time out. Keep this in mind if you customize these parameters.
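The multipart parameters can be tuned per user. A sketch, assuming the `upload_part_size` (in MB) and `upload_concurrency` fields of recent SFTPGo versions:

```json
{
  "s3config": {
    "bucket": "my-bucket",
    "region": "eu-central-1",
    "upload_part_size": 10,
    "upload_concurrency": 4
  }
}
```

Larger parts and higher concurrency can speed up transfers to S3 at the cost of more memory, but they also widen the window in which a slow SFTPGo-to-S3 link can make the client time out.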
The configured bucket must exist.
Some SFTP commands don't work over S3:

- `chown` and `chmod` will fail. If you want to silently ignore these methods, set `setstat_mode` to `1` or `2` in your configuration file.
- `truncate`, `symlink` and `readlink` are not supported.
- Opening a file for both reading and writing at the same time is not supported.
- Resuming uploads is tricky and disabled by default.
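To silently ignore the failing `chown`/`chmod` calls mentioned above, set `setstat_mode` in the SFTPGo configuration file. A sketch of the relevant `sftpgo.json` fragment, assuming the section name used by your SFTPGo version (this setting has lived under `sftpd` in older releases and `common` in newer ones):

```json
{
  "common": {
    "setstat_mode": 1
  }
}
```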
Other notes:

- `rename` is a two-step operation: a server-side copy followed by a deletion. So it is not atomic as it is for a local filesystem.
- We don't support renaming non-empty directories, since we would have to rename all the contents too, and this could take a long time: for a directory with thousands of files we would have to issue an AWS API call for each file.
- For server-side encryption, you have to configure the mapped bucket to automatically encrypt objects.
- A local home directory is still required to store temporary files.
- Clients that require advanced filesystem-like features, such as `sshfs`, are not supported.
- `chtime` is not supported.