S3 Glacier issue with Access Denied error

What is the problem you are having with rclone?

Hello guys, I decided to switch to S3 Glacier because of the Google Drive rate limit issue, but I'm facing a very strange problem. When I try to copy files to S3 I get an error saying the region is incorrect, which is not true, and after that I get Access Denied. I created a user in IAM with full S3 permissions and also with permissions for Backup and Restore. Are there any other permissions that need to be configured?

Here is the whole output:

C:\rclone>c:\rclone\rclone.exe --config "C:\Users\Administrator\.config\rclone\rclone.conf" lsd -vv --progress S3-Glacier:moneta
2023/05/29 00:00:41 DEBUG : rclone: Version "v1.62.2" starting with parameters ["c:\rclone\rclone.exe" "--config" "C:\Users\Administrator\.config\rclone\rclone.conf" "lsd" "-vv" "--progress" "S3-Glacier:moneta"]
2023/05/29 00:00:41 DEBUG : Creating backend with remote "S3-Glacier:moneta"
2023/05/29 00:00:41 DEBUG : Using config file from "C:\Users\Administrator\.config\rclone\rclone.conf"
2023-05-29 00:00:42 NOTICE: S3 bucket moneta: Switched region to "ap-south-1" from "eu-west-2"
2023-05-29 00:00:42 DEBUG : pacer: low level retry 1/2 (error BucketRegionError: incorrect region, the bucket is not in 'eu-west-2' region at endpoint '', bucket is in 'ap-south-1' region
status code: 301, request id: EVYJ32690GYT3KEK, host id: fflUveJsa/eOa6yWXuWYqwBqwieYwmUO6bsOwBNHptn//tOo5JyepKHL2bm4cCYACUQZtkxY9bE=)
2023-05-29 00:00:42 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2023-05-29 00:00:42 DEBUG : pacer: Reducing sleep to 0s
2023-05-29 00:00:42 ERROR : : error listing: AccessDenied: Access Denied
status code: 403, request id: 9RR8NX3G81G7T8MK, host id: RcxUJ3AlKsdfHCeYxLy11SRFjvak7TELFYKbgBVLDpQAoRmzZ0pp1wmf3Eqwh+Axnr80esAuO8g=
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Errors: 2 (retrying may help)
Elapsed time: 1.0s
2023/05/29 00:00:42 DEBUG : 6 go routines active
2023/05/29 00:00:42 Failed to lsd with 2 errors: last error was: AccessDenied: Access Denied
status code: 403, request id: 9RR8NX3G81G7T8MK, host id: RcxUJ3AlKsdfHCeYxLy11SRFjvak7TELFYKbgBVLDpQAoRmzZ0pp1wmf3Eqwh+Axnr80esAuO8g=

and this is the config file:

[S3-Glacier]
type = s3
provider = AWS
access_key_id = XXXX
secret_access_key = XXXX
region = eu-west-2
endpoint =
location_constraint = eu-west-2
acl = private
server_side_encryption = AES256
storage_class = DEEP_ARCHIVE

hello and welcome to the forum,

i would set the region correctly and test again
perhaps region = ap-south-1

The problem is that I'm completely sure about the region, and it is eu-west-2 (London). I don't have anything in Mumbai :slight_smile:

log in to the AWS console online and investigate there:

https://s3.console.aws.amazon.com/s3/buckets

should show you your buckets and their regions.

maybe by mistake it was created in another region?
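
you can also check from the command line - assuming you have the AWS CLI installed and configured with the same credentials:

aws s3api get-bucket-location --bucket moneta

a null LocationConstraint in the output means us-east-1.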

also you can remove region from your config - then you can access any region:

[S3-Glacier]
type = s3
provider = AWS
access_key_id = XXXX
secret_access_key = XXXX
acl = private
server_side_encryption = AES256
storage_class = DEEP_ARCHIVE

This way we will rule out any region issues and make sure that your credentials work. Then, if you need to, you can add any region constraints back.
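
then re-run the test with that minimal config - first listing all buckets, then the bucket itself (using your remote and bucket names from the first post):

$ rclone lsd -vv S3-Glacier:
$ rclone lsd -vv S3-Glacier:moneta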

I did a quick test - connecting via us-east-1 to a bucket in ap-south-1:

$ rclone lsd -vv s3:asia-testing
2023/05/29 07:27:32 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "lsd" "-vv" "s3:asia-testing"]
2023/05/29 07:27:32 DEBUG : Creating backend with remote "s3:asia-testing"
2023/05/29 07:27:33 NOTICE: S3 bucket asia-testing: Switched region to "ap-south-1" from "us-east-1"
2023/05/29 07:27:33 DEBUG : pacer: low level retry 1/2 (error BucketRegionError: incorrect region, the bucket is not in 'us-east-1' region at endpoint '', bucket is in 'ap-south-1' region
status code: 301, request id: X1Z9G3XM9D1PSDBX, host id: 2wo+4q9VCpdE3JqF35sSFWxRJLqpF80ENMwqJk9eg9O5G8jlg+56HBk0x9d1oSw1ReFJ8g5asaE=)
2023/05/29 07:27:33 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2023/05/29 07:27:35 DEBUG : pacer: Reducing sleep to 0s
0 2023-05-29 07:27:35 -1 To read
2023/05/29 07:27:35 DEBUG : 8 go routines active

and despite the region error I can list the asia-testing bucket content

fix your config so that the error goes away.

only after fixing that error can we work on the next error, AccessDenied

Look at my test results - this error can be part of normal operations. I can list ap-south-1 connecting via us-east-1. Without specifying any region in the config it looks like rclone can sort it out itself.

Now I am only thinking that specifying:

location_constraint = eu-west-2

enforces this region.

But in that case I will see the S3 buckets, not S3 Glacier, right? Also, do you have some idea what IAM roles I have to apply to the user?

  1. Have you checked the AWS console? What region is your bucket in?

  2. Have you tried the config without enforcing the region location?

We have to go one by one to get anywhere:)

A bucket is just a bucket - Glacier or not is a storage class.

Storage class defines many things, including how much it will cost you and how easily you can download it back - the bucket is just a container.

In the same bucket you can have different files/folders with different storage classes.
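
for example, you can override the storage class per transfer instead of setting it in the config - just a sketch, the destination paths here are made up:

rclone copy --s3-storage-class DEEP_ARCHIVE somefolder S3-Glacier:moneta/archive
rclone copy --s3-storage-class STANDARD somefile.txt S3-Glacier:moneta/hot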

I found the issue :slight_smile: So basically rclone does not support the native Glacier (vault) service; it supports only S3, with different options for the type of storage. I'm speaking about this:


But now everything is fine. It was returning Access Denied because I was trying with an S3 bucket that was not mine :slight_smile: since I only had an S3 Glacier Vault.

Good progress:)


My next question: as far as I saw, move is not supported, right? And how do I set a retention policy of 1 week for the backups?

what move?

what do you mean by 1 week retention?

Till now I used Google Drive for backups, and I used it like this:

c:\rclone\rclone.exe --config "C:\Users\Administrator\.config\rclone\rclone.conf" move -vv --progress "d:\BackUp" "gdrive:MonetaBackups"
c:\rclone\rclone.exe --config "C:\Users\Administrator\.config\rclone\rclone.conf" delete --min-age 5d gdrive:MonetaBackups
c:\rclone\rclone.exe --config "C:\Users\Administrator\.config\rclone\rclone.conf" cleanup gdrive:MonetaBackups

The idea was to move the backups rather than keep them on the local drive, so they are removed from the local drive only once they are successfully moved to gdrive. But if I just put a delete at the end of the script and the transfer failed for some reason, I would still delete the files. Also, I want to keep only 1 week of backups on S3, with everything older deleted. What is the right approach for this?

rclone is just a tool which lets you copy/move/sync/delete data to/from a remote (an S3 bucket in this case)

with S3 it is the same as with Gdrive - you can keep using your logic - it just does not make sense to delete after 5 days, given how Glacier storage is billed.
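
mechanically, the same script pointed at your S3 remote would look something like this (a sketch only, reusing the remote and bucket names from earlier - see the cost caveat just below before keeping the delete step):

c:\rclone\rclone.exe --config "C:\Users\Administrator\.config\rclone\rclone.conf" move -vv --progress "d:\BackUp" "S3-Glacier:moneta/MonetaBackups"
c:\rclone\rclone.exe --config "C:\Users\Administrator\.config\rclone\rclone.conf" delete --min-age 7d "S3-Glacier:moneta/MonetaBackups"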

If you put a file into DEEP_ARCHIVE you pay for 180 days of storage. You can delete it after 5 days - but you will still pay for the full 180 days.

Do your reading - Glacier looks great on the surface and it can be very cool - but you have to understand all the cost aspects, or you will end up with a bill you did not expect.

Objects that are archived to S3 Glacier Instant Retrieval and S3 Glacier Flexible Retrieval are charged for a minimum storage duration of 90 days, and S3 Glacier Deep Archive has a minimum storage duration of 180 days.
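
a rough worked example of what that minimum means, at the commonly quoted Deep Archive rate of about $0.00099 per GB-month (exact rate depends on region):

30 GB uploaded and then deleted after 5 days
is still billed for 6 months: 30 GB x $0.00099/GB-month x 6 ≈ $0.18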

Also, from your description, I would suggest you look at a proper backup tool. rclone is just rsync for the cloud.

I would suggest have a look at:

https://restic.net/

It is closely integrated with rclone as well.
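
for example, restic can talk to your existing rclone remote directly - a sketch only, the restic-repo path and the 7-day retention are just example values:

restic -r rclone:S3-Glacier:moneta/restic-repo init
restic -r rclone:S3-Glacier:moneta/restic-repo backup d:\BackUp
restic -r rclone:S3-Glacier:moneta/restic-repo forget --keep-daily 7 --prune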

Of course you can use rclone for your purpose as well. It is up to you.

Aha, so no worries. I'm backing up around 30GB per day at $0.0018 per GB, which by my calculation comes to $9.72 for 180 days. So the price is totally fine :slight_smile:

and you will have 180 days of history:) not 5