Rclone with brand new AWS account for S3

From a total clean slate, how do I set up my AWS account and rclone to upload (using the S3 API) to AWS S3 Glacier Deep Archive?
I have a brand new AWS account. No buckets, and only one configured user. This user is assigned (via IAM) a single policy: the AmazonGlacierFullAccess policy. Beyond that, I have created nothing in the AWS web console.
I have gone through the documented way of creating a new rclone configuration. That has resulted in the following rclone configuration file:

[remotecoldiceS3glaceee]
type = s3
provider = AWS
env_auth = false
access_key_id = <copied text from the appropriate AWS user page>
secret_access_key = <copied text from the appropriate AWS user page>
region = us-west-1
location_constraint = us-west-1
acl = private
storage_class = DEEP_ARCHIVE

After completing the AWS S3 instructions in the documentation for creating a new configuration, I did the following:

rclone mkdir remotecoldiceS3glaceee:testBucket1

No output was given.

I then did a:

rclone ls remotecoldiceS3glaceee:testBucket1

which gave this output:

2020/04/14 03:09:14 ERROR : S3 bucket testBucket1: Failed to update region for bucket: reading bucket location failed: AllAccessDisabled: All access to this object has been disabled
status code: 403, request id: EAEFC906302B6A34, host id:
2020/04/14 03:09:14 Failed to ls: BucketRegionError: incorrect region, the bucket is not in 'us-west-1' region at endpoint ''
status code: 301, request id: , host id:

I also did a:

rclone lsd remotecoldiceS3glaceee:

which gave this output:

2020/04/14 03:37:00 ERROR : : error listing: AccessDenied: Access Denied
status code: 403, request id: 8BB4F35BEE41E15B, host id:
2020/04/14 03:37:00 Failed to lsd with 2 errors: last error was: AccessDenied: Access Denied
status code: 403, request id: 8BB4F35BEE41E15B, host id:

I also tried a:

rclone sync /data remotecoldiceS3glaceee:testBucket1

But that failed too.

Will you please assist me in setting up new AWS S3 Glacier Deep Archive storage which can be reached via the S3 API and successfully interacted with through rclone? I'm not sure what I am missing or where I have made an incorrect configuration. Thanks.

Remember bucket names are global in S3, so I suspect someone else owns that bucket. Pick a different name!

Thanks for the quick reply.
Ok, I'm still getting an error, even with what I'm certain is a unique bucket name.

I did a:
rclone mkdir remotecoldiceS3glaceee:randomprefix42testbucket186753097778561302

Then I did a:

rclone ls remotecoldiceS3glaceee:randomprefix42testbucket186753097778561302

which resulted in:

2020/04/15 02:11:02 Failed to ls: directory not found

I'm still getting the same errors as before with the other commands.

rclone lsd remotecoldiceS3glaceee:

2020/04/15 02:13:03 ERROR : : error listing: AccessDenied: Access Denied
status code: 403, request id: 19684F420960F9DD, host id:
2020/04/15 02:13:03 Failed to lsd with 2 errors: last error was: AccessDenied: Access Denied
status code: 403, request id: 19684F420960F9DD, host id:

and
rclone sync /data remotecoldiceS3glaceee:randomprefix42testbucket186753097778561302

still results in:

2020/04/15 02:15:18 ERROR : myfile.txt: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>randomprefix42testbucket186753097778561302</BucketName><RequestId>E917B4AE4C9F39D6</RequestId></Error>
2020/04/15 02:15:18 ERROR : S3 bucket randomprefix42testbucket186753097778561302: not deleting files as there were IO errors
2020/04/15 02:15:18 ERROR : S3 bucket randomprefix42testbucket186753097778561302: not deleting directories as there were IO errors
2020/04/15 02:15:18 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>randomprefix42testbucket186753097778561302</BucketName><RequestId>E917B4AE4C9F39D6</RequestId><HostId>host id erased for this posting</HostId></Error>
2020/04/15 02:15:18 ERROR : myfile.txt: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>randomprefix42testbucket186753097778561302</BucketName><RequestId>8BD313F1C8A61096</RequestId><HostId>host id erased for this posting</HostId></Error>
2020/04/15 02:15:18 ERROR : S3 bucket randomprefix42testbucket186753097778561302: not deleting files as there were IO errors
2020/04/15 02:15:18 ERROR : S3 bucket randomprefix42testbucket186753097778561302: not deleting directories as there were IO errors
2020/04/15 02:15:18 ERROR : Attempt 2/3 f...

Do I need to perform further setup/configuration? If so, where and how? My goal is to use rclone to get files into AWS S3 Glacier Deep Archive storage associated with my newly created account.
Thanks for your help so far.

I think I see what happened.

You used a bucket name > 63 characters long. Rclone should have reported that as an error but it didn't for some reason.

It should have also reported an error trying to create a bucket that wasn't yours... This bug looks like it was introduced in 1.49

I've just fixed this in the latest beta which will be uploaded in 15-30 mins to https://beta.rclone.org/v1.51.0-163-ge2bf9145-beta/

This would have fixed your original problem - it would have given an error message saying the bucket was already in use - and it should also fix the missing error for a bucket name that is too long.

If I understand correctly, I should try it with a shorter bucket name.
I tried it with a shorter bucket name, and I'm unfortunately getting the same error. I haven't updated my rclone version yet.

rclone mkdir remotecoldiceS3glaceee:rpre42buck5309771302

rclone ls remotecoldiceS3glaceee:rpre42buck5309771302

2020/04/15 16:32:13 Failed to ls: directory not found

rclone sync /data remotecoldiceS3glaceee:rpre42buck5309771302

2020/04/15 16:29:11 ERROR : myfile.txt: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>rpre42buck5309771302</BucketName><RequestId>CD531C910FCB5575</RequestId><HostId>caBXdr7HE5tv+YkS/OJVx/xJYvrQr1Sgq3mpSnzCDW9e1ud5Yh1pyeyRfrVS2EpYzu7mku/lxTE=</HostId></Error>
2020/04/15 16:29:11 ERROR : S3 bucket rpre42buck5309771302: not deleting files as there were IO errors
2020/04/15 16:29:11 ERROR : S3 bucket rpre42buck5309771302: not deleting directories as there were IO errors
2020/04/15 16:29:11 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>rpre42buck5309771302</BucketName><RequestId>CD531C910FCB5575</RequestId><HostId>caBXdr7HE5tv+YkS/OJVx/xJYvrQr1Sgq3mpSnzCDW9e1ud5Yh1pyeyRfrVS2EpYzu7mku/lxTE=</HostId></Error>
2020/04/15 16:29:11 ERROR : myfile.txt: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?>
<Error><Code>NoSuchBucket</Code><Message>The specified bucket does not exist</Message><BucketName>rpre42buck5309771302</BucketName><RequestId>F4690AE57F762D54</RequestId><HostId>W1iHPUD8O8DUPLamI8Njre1lkKyebBE9nWt/cj6U0zyq1gMeJnbHWyvsdxdjmpzvXRSe2veMLdo=</HostId></Error>
2020/04/15 16:29:11 ERROR : S3 bucket rpre42buck5309771302: not deleting files as there were IO errors
2020/04/15 16:29:11 ERROR : S3 bucket rpre42buck5309771302: not deleting directories as there were IO errors
2020/04/15 16:29:11 ERROR : Attempt 2/3 failed wit

Try again after updating rclone, then the rclone mkdir will show you what is going wrong.

Ok. I'm using Docker for all this. I just did a:
docker pull rclone/rclone:latest

latest: Pulling from rclone/rclone
Digest: sha256:bdb9a1a3a579a029ae9c5c628a0cc94c5b746047100113393fd67a577e26aa4c
Status: Image is up to date for rclone/rclone:latest

It seems the newest build hasn't made its way to Docker Hub yet.
Any chance this latest release can go through the build system that produces the above Docker images?

Ah, ok, I realized I can do docker pull rclone/rclone:beta to get the latest.

Ok, now when I do a:
rclone mkdir remotecoldiceS3glaceee:rpre42buck5309771302

2020/04/16 07:07:41 ERROR : Attempt 1/3 failed with 1 errors and: AccessDenied: Access Denied
status code: 403, request id: 582FC847FC4B7CE9, host id: < I removed the host id from this post>
2020/04/16 07:07:41 ERROR : Attempt 2/3 failed...

I'm using the correct access key ID and secret access key that I see on the IAM page in the AWS console, so I'm not sure why I'm getting this error.
Is there something I need to configure further to give rclone access to create buckets?

Not usually...

If you run with -vv --dump bodies --retries 1 then you'll see exactly which HTTP call is failing.


Ok, I ran the command with those switches.
rclone mkdir remotecoldiceS3glaceee:rpre42buck5309771302 -vv --dump bodies --retries 1

2020/04/16 15:24:12 DEBUG : rclone: Version "v1.51.0-169-gb07bef2a-beta" starting with parameters ["rclone" "mkdir" "remotecoldiceS3glaceee:rpre42buck5309771302" "-vv" "--dump" "bodies" "--retries" "1"]
2020/04/16 15:24:12 DEBUG : Using config file from "/config/rclone/rclone.conf"
2020/04/16 15:24:12 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2020/04/16 15:24:12 DEBUG : S3 bucket rpre42buck5309771302: Making directory
2020/04/16 15:24:12 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/04/16 15:24:12 DEBUG : HTTP REQUEST (req 0xc0000d4e00)
2020/04/16 15:24:12 DEBUG : PUT / HTTP/1.1
Host: rpre42buck5309771302.s3.us-west-1.amazonaws.com
User-Agent: rclone/v1.51.0-169-gb07bef2a-beta
Content-Length: 153
Authorization: XXXX
X-Amz-Acl: private
X-Amz-Content-Sha256: b024c388d9d069030008afd6e302524ed1308a9f3c108bfa4e481d652b5313cf
X-Amz-Date: 20200416T152412Z
Accept-Encoding: gzip

<CreateBucketConfiguration xmlns="http://s3.amazonaws.com/doc/2006-03-01/"><LocationConstraint>us-west-1</LocationConstraint></CreateBucketConfiguration>
2020/04/16 15:24:12 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2020/04/16 15:24:14 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/04/16 15:24:14 DEBUG : HTTP RESPONSE (req 0xc0000d4e00)
2020/04/16 15:24:14 DEBUG : HTTP/1.1 403 Forbidden
Transfer-Encoding: chunked
Content-Type: application/xml
Date: Thu, 16 Apr 2020 15:24:12 GMT
Server: AmazonS3
X-Amz-Id-2: < hostid removed from this forum post >
X-Amz-Request-Id: B304B451114CDF65

f3

<?xml version="1.0" encoding="UTF-8"?>

<Error><Code>AccessDenied</Code><Message>Access Denied</Message><RequestId>B304B451114CDF65</RequestId><HostId>< hostid removed from this forum post ></HostId></Error>
0

2020/04/16 15:24:14 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2020/04/16 15:24:14 ERROR : Attempt 1/1 failed with 1 errors and: AccessDenied: Access Denied
status code: 403, request id: B304B451114CDF65, host id: < hostid removed from this forum post >
2020/04/16 15:24:14 DEBUG : 4 go routines active
2020/04/16 15:24:14 Failed to mkdir: AccessDenied: Access Denied
status code: 403, request id: B304B451114CDF65, host id: < hostid removed from this forum post >

Any new information from this? From what I can tell it's still an HTTP 403.

I just wanted to check it was the actual bucket creation that was causing the problem. It is by the look of it.

So I think you need to add bucket creation permissions to your identity.
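
For example (just a sketch, not the exact policy you'll end up with), an IAM statement that grants bucket creation looks roughly like this - s3:CreateBucket is the relevant action:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:CreateBucket",
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

You'd attach that (or something broader) to your IAM user in addition to the Glacier policy you already have.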

I would be grateful if you could point me to how to allow for that. I am seeing several tutorials on AWS help pages about setting permissions for an already created bucket. However, I don't see how to add an identity permission to allow for bucket creation. Will you please point me to a tutorial which shows how to add a bucket creation policy at the identity level? I currently have zero buckets created in my account.

Ok, I added the extremely open 'AmazonS3FullAccess' policy to my account.

So now I have the AmazonGlacierFullAccess policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Action": "glacier:*",
            "Effect": "Allow",
            "Resource": "*"
        }
    ]
}

and the AmazonS3FullAccess policy:

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
        }
    ]
}

No other policies exist on the account.

With these two policies enabled on my account, the above commands we've been discussing work. So that's great!

Do you have a typical policy JSON which is a little more locked down, allowing only what's necessary for the S3 and Glacier features?

Great!

There is a section in the docs here:

https://rclone.org/s3/#s3-permissions

There are also some threads on the forum if you search, like this one.

And that is about all I know about it!
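
For reference, a more locked-down policy tends to have roughly this shape (just a sketch, not authoritative - my-backup-bucket is a placeholder name, and you should check the actions against the docs linked above):

{
    "Version": "2012-10-17",
    "Statement": [
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListBucket",
                "s3:GetObject",
                "s3:PutObject",
                "s3:DeleteObject"
            ],
            "Resource": [
                "arn:aws:s3:::my-backup-bucket",
                "arn:aws:s3:::my-backup-bucket/*"
            ]
        },
        {
            "Effect": "Allow",
            "Action": [
                "s3:ListAllMyBuckets",
                "s3:CreateBucket"
            ],
            "Resource": "arn:aws:s3:::*"
        }
    ]
}

Swap my-backup-bucket for your real bucket name, and you can drop s3:CreateBucket once the bucket exists if you want it tighter still.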

Wonderful. Many thanks!

