Using s3curl
Notes
- Use --contentType and --post instead of -X POST and -H 'Content-Type: application/json'. Otherwise you get Access Denied, because s3curl does not take plain curl options into account when generating the signature (see the sketch below).
- s3curl has an issue with locales (it does not affect Riak CS): issue report, fixed fork.
create ~/.s3curl
%awsSecretAccessKeys = (
admin => {
id => 'CVX6NWQTPOJSOU2FCWUV',
key => 'Wg8-sbxmRUOhRw19OX0MZjRKlmEjhdyT4cx7mw==',
},
);
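Most versions of s3curl.pl refuse to read the credentials file unless it is readable only by your user (this depends on your s3curl build), so it is usually necessary to tighten its permissions:
chmod 600 ~/.s3curl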
s3curl.pl --id admin -- -s -v -x localhost:8080 -X PUT http://new.bucket.s3.amazonaws.com/
s3curl.pl --id admin -- -s -v -x localhost:8080 http://new.bucket.s3.amazonaws.com/
s3curl.pl --id admin -- -s -v -x localhost:8080 -X DELETE http://new.bucket.s3.amazonaws.com/
s3curl.pl --id admin -- -s -v -x localhost:8080 http://s3.amazonaws.com/
s3curl.pl --id admin -- -s -v -x localhost:8080 http://bucket-name.s3.amazonaws.com/
s3curl.pl --id admin -- -s -v -x localhost:8080 -H "Range: bytes=0-256" http://bucket-name.s3.amazonaws.com/object-name
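Plain (non-multipart) object uploads and deletes follow the same pattern; a minimal sketch using s3curl's --put flag, with <filename>, bucket-name and object-name as placeholders:
s3curl.pl --id admin --put <filename> -- -s -v -x localhost:8080 http://bucket-name.s3.amazonaws.com/object-name
s3curl.pl --id admin -- -s -v -x localhost:8080 -X DELETE http://bucket-name.s3.amazonaws.com/object-name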
% s3curl.pl --id admin -- -X POST -x localhost:8080 "http://bucket-name.s3.amazonaws.com/object-name?uploads" -s | tidy -xml -indent -q
<?xml version="1.0" encoding="utf-8"?>
<InitiateMultipartUploadResult xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
<Bucket>bucket-name</Bucket>
<Key>object-name</Key>
<UploadId>IUE0izZUSDe2bOsmUakq2g==</UploadId>
</InitiateMultipartUploadResult>
s3curl.pl --id admin --put <filename> -- -x localhost:8080 'http://bucket-name.s3.amazonaws.com/object-name?uploadId=IUE0izZUSDe2bOsmUakq2g==&partNumber=1' -s
s3curl.pl --id admin --post <filename> -- -x localhost:8080 'http://bucket-name.s3.amazonaws.com/object-name?uploadId=IUE0izZUSDe2bOsmUakq2g==' -s
File format for completion:
<CompleteMultipartUpload>
<Part>
<PartNumber>part number</PartNumber>
<ETag>etag</ETag>
</Part>
...
</CompleteMultipartUpload>
You can collect all of the partNumber/ETag pairs with the List Parts API.
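A sketch of a List Parts request, reusing the uploadId from the initiate example above; aborting an unfinished upload is a DELETE on the same URL (both are standard S3 multipart calls, assumed to behave the same way here):
s3curl.pl --id admin -- -s -x localhost:8080 'http://bucket-name.s3.amazonaws.com/object-name?uploadId=IUE0izZUSDe2bOsmUakq2g==' | tidy -xml -indent -q
s3curl.pl --id admin -- -s -x localhost:8080 -X DELETE 'http://bucket-name.s3.amazonaws.com/object-name?uploadId=IUE0izZUSDe2bOsmUakq2g=='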
s3curl.pl --id admin -- -s -v -x localhost:8080 http://riak-cs.s3.amazonaws.com/stats
s3curl.pl --id admin --post --contentType application/json -- -s -v -x localhost:8080 --data '{"email":"foobar@example.com", "name":"foo bar"}' http://riak-cs.s3.amazonaws.com/user
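The user record can also be read back by key ID; the URL layout below is an assumption based on the admin resources used above (substitute the key_id of the user you want to inspect):
s3curl.pl --id admin -- -s -v -x localhost:8080 http://riak-cs.s3.amazonaws.com/user/CVX6NWQTPOJSOU2FCWUV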
s3curl.pl --id admin -- -s -v -x localhost:8080 http://usage.s3.amazonaws.com/NQR5J_9GICYRLWEBIYUO
get access statistics (docs)
% s3curl.pl --id admin -- -s -x localhost:8080 http://riak-cs.s3.amazonaws.com/usage/NQR5J_9GICYRLWEBIYUO/aj/20121023T000000Z/20121024T160000Z | underscore print --color
{
"Access": {
"Nodes": [
{
"Node": "riak-cs@127.0.0.1",
"Samples": [
{
"StartTime": "20121024T050000Z",
"EndTime": "20121024T060000Z",
"ListBuckets": { "BytesOut": 600, "Count": 1 },
"BucketRead": {
"BytesOut": 587,
"Count": 3,
"UserErrorBytesOut": 332,
"UserErrorCount": 2
},
"BucketCreate": { "Count": 4 },
"BucketDelete": { "Count": 1 }
},
{
"StartTime": "20121023T150946Z",
"EndTime": "20121023T160000Z",
"BucketDelete": { "Count": 1, "UserErrorBytesOut": 166, "UserErrorCount": 1 },
"BucketRead": { "BytesOut": 565140, "Count": 5 }
},
{
"StartTime": "20121023T125015Z",
"EndTime": "20121023T130000Z",
"AccountRead": { "BytesOut": 978, "Count": 3 }
},
{
"StartTime": "20121023T090000Z",
"EndTime": "20121023T100000Z",
"AccountRead": { "BytesOut": 652, "Count": 2 },
"UsageRead": { "BytesOut": 621, "Count": 11 },
"BucketRead": {
"BytesOut": 390,
"Count": 2,
"UserErrorBytesOut": 1787,
"UserErrorCount": 10
},
"BucketCreate": { "Count": 2 },
"BucketDelete": { "Count": 1 },
"KeyRead": { "UserErrorCount": 4 }
},
{
"StartTime": "20121023T080637Z",
"EndTime": "20121023T090000Z",
"KeyRead": { "BytesOut": 1328, "Count": 4, "UserErrorCount": 2 },
"BucketRead": {
"BytesOut": 904224,
"Count": 8,
"UserErrorBytesOut": 1306,
"UserErrorCount": 7
},
"AccountRead": { "BytesOut": 978, "Count": 3 },
"ListBuckets": { "BytesOut": 318, "Count": 1 }
},
{
"StartTime": "20121023T071748Z",
"EndTime": "20121023T080000Z",
"BucketRead": {
"BytesOut": 904224,
"Count": 8,
"UserErrorBytesOut": 186,
"UserErrorCount": 1
},
"ListBuckets": { "BytesOut": 1272, "Count": 4 },
"KeyRead": { "UserErrorCount": 2 },
"AccountRead": { "BytesOut": 326, "Count": 1 }
},
{
"StartTime": "20121023T060000Z",
"EndTime": "20121023T070000Z",
"UsageRead": { "BytesOut": 104, "Count": 2 },
"KeyRead": { "UserErrorCount": 7 },
"BucketRead": { "UserErrorBytesOut": 186, "UserErrorCount": 1 }
}
]
}
],
"Errors": [ ]
},
"Storage": "not_requested"
}
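The path segment between the key ID and the start time selects the report type and output format: aj above requests access statistics as JSON, and bj below requests storage statistics as JSON. Assuming the usual Riak CS usage API behaviour, x in place of j asks for XML instead, for example:
s3curl.pl --id admin -- -s -x localhost:8080 http://riak-cs.s3.amazonaws.com/usage/NQR5J_9GICYRLWEBIYUO/ax/20121023T000000Z/20121024T160000Z | tidy -xml -indent -q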
get storage statistics (docs)
% s3curl.pl --id admin -- -s -x localhost:8080 http://riak-cs.s3.amazonaws.com/usage/NQR5J_9GICYRLWEBIYUO/bj/20121023T000000Z/20121024T160000Z | underscore print --color
{
"Access": "not_requested",
"Storage": {
"Samples": [
{
"StartTime": "20121023T070328Z",
"EndTime": "20121023T070329Z",
"s3-compatibility-test": { "Objects": 0, "Bytes": 0 },
"s3-compatibility-test-tmp": { "Objects": 0, "Bytes": 0 },
"sync-test": { "Objects": 366, "Bytes": 3468250 }
}
],
"Errors": [ ]
}
}
$ s3curl.pl --id admin -- -s -v -x localhost:8080 http://sync-test.s3.amazonaws.com/ | tidy -xml -indent
<?xml version="1.0" encoding="utf-8"?>
<ListBucketResult>
<Name>sync-test</Name>
<Prefix />
<Marker />
<MaxKeys>1000</MaxKeys>
<Delimiter>/</Delimiter>
<IsTruncated>false</IsTruncated>
<Contents>
<Key>rebar/.git/FETCH_HEAD</Key>
<Size>310</Size>
<LastModified>2012-10-19T09:34:45.000Z</LastModified>
<ETag>"7c6fc4da1c5b7e3906030598dc6e3061"</ETag>
<Owner>
<ID>
5ce956bd15363cae32b54a4522704118be41ce8baf6355db13b5c3b04e34502c</ID>
<DisplayName>admin</DisplayName>
</Owner>
</Contents>
<Contents>
edit s3curl.pl
# begin customizing here
my @endpoints = ( 's3.amazonaws.com',
's3-us-west-1.amazonaws.com',
's3-eu-west-1.amazonaws.com',
's3-ap-southeast-1.amazonaws.com',
's3-ap-northeast-1.amazonaws.com',
'your.root.domain' );
If you use a custom root_host in riak-cs.conf (or cs_root_host in advanced.config), it must be listed in @endpoints for authentication with Riak CS to work.
The fixed fork mentioned above has an --endpoint option, so you don't need to edit s3curl.pl to use your own domain.
s3curl.pl --endpoint your.domain.name --id admin -- -s -v -X PUT http://new.bucket.your.domain.name/