Hello guys, I decided to switch to S3 Glacier because of the Google Drive rate limit issue, but I'm facing a very strange problem. When I try to copy files to S3, I get an error that the region is incorrect, which is not true, and after that I get Access Denied. I created a user in IAM with full S3 permissions, and also with permissions for Backup and Restore. Are there some other permissions that need to be configured?
Here is the whole output:
C:\rclone>c:\rclone\rclone.exe --config "C:\Users\Administrator\.config\rclone\rclone.conf" lsd -vv --progress S3-Glacier:moneta
2023/05/29 00:00:41 DEBUG : rclone: Version "v1.62.2" starting with parameters ["c:\rclone\rclone.exe" "--config" "C:\Users\Administrator\.config\rclone\rclone.conf" "lsd" "-vv" "--progress" "S3-Glacier:moneta"]
2023/05/29 00:00:41 DEBUG : Creating backend with remote "S3-Glacier:moneta"
2023/05/29 00:00:41 DEBUG : Using config file from "C:\Users\Administrator\.config\rclone\rclone.conf"
2023-05-29 00:00:42 NOTICE: S3 bucket moneta: Switched region to "ap-south-1" from "eu-west-2"
2023-05-29 00:00:42 DEBUG : pacer: low level retry 1/2 (error BucketRegionError: incorrect region, the bucket is not in 'eu-west-2' region at endpoint '', bucket is in 'ap-south-1' region
status code: 301, request id: EVYJ32690GYT3KEK, host id: fflUveJsa/eOa6yWXuWYqwBqwieYwmUO6bsOwBNHptn//tOo5JyepKHL2bm4cCYACUQZtkxY9bE=)
2023-05-29 00:00:42 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2023-05-29 00:00:42 DEBUG : pacer: Reducing sleep to 0s
2023-05-29 00:00:42 ERROR : : error listing: AccessDenied: Access Denied
status code: 403, request id: 9RR8NX3G81G7T8MK, host id: RcxUJ3AlKsdfHCeYxLy11SRFjvak7TELFYKbgBVLDpQAoRmzZ0pp1wmf3Eqwh+Axnr80esAuO8g=
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Errors: 2 (retrying may help)
Elapsed time: 1.0s
2023/05/29 00:00:42 DEBUG : 6 go routines active
2023/05/29 00:00:42 Failed to lsd with 2 errors: last error was: AccessDenied: Access Denied
status code: 403, request id: 9RR8NX3G81G7T8MK, host id: RcxUJ3AlKsdfHCeYxLy11SRFjvak7TELFYKbgBVLDpQAoRmzZ0pp1wmf3Eqwh+Axnr80esAuO8g=
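The NOTICE line shows rclone switching from "eu-west-2" to "ap-south-1", which suggests the remote's configured region does not match where the bucket actually lives. A minimal rclone.conf sketch for an AWS S3 remote pointed at the right region (key names from rclone's S3 backend; the placeholder credentials and the DEEP_ARCHIVE storage class are assumptions you should adjust):

```ini
[S3-Glacier]
type = s3
provider = AWS
access_key_id = XXXXXXXX
secret_access_key = XXXXXXXX
# Must match the region the bucket was created in, per the NOTICE in the log
region = ap-south-1
location_constraint = ap-south-1
# Optional: default storage class for uploaded objects
storage_class = DEEP_ARCHIVE
```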
Look at my test results - this error can be part of normal operations. I can list ap-south-1 connecting via us-east-1. Without specifying any region in the config, it looks like rclone can sort it out by itself.
But in that case I will see the S3 buckets, not S3 Glacier, right? Also, do you have some idea which IAM permissions I have to apply to the user?
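On the IAM side, a minimal policy sketch that is usually enough for rclone to list and copy to a single bucket (bucket name `moneta` taken from the log; exact actions you need may vary, and listing all buckets with `lsd S3-Glacier:` would additionally need `s3:ListAllMyBuckets`):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket", "s3:GetBucketLocation"],
      "Resource": "arn:aws:s3:::moneta"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:GetObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::moneta/*"
    }
  ]
}
```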
I found the issue: basically rclone does not support the native Glacier (vault) service, it only supports S3 with different options for the storage class. I'm speaking about this:
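To land objects in a Glacier tier through the S3 API, you can pick the storage class per transfer with rclone's `--s3-storage-class` flag (the local path here is a made-up example):

```
rclone copy "D:\Backups" S3-Glacier:moneta --s3-storage-class DEEP_ARCHIVE --progress
```

Alternatively, set `storage_class` in the remote's config so every upload uses it by default.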
The idea was to move the backups rather than store them on the local drive, and to remove them locally only if they are successfully moved to the remote. But if I just put one delete at the end of the script and the transfer failed for some reason, I would delete the files anyway. Also, I want to keep only 1 week of backups on S3 and have everything older deleted. What is the right approach for this?
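For what it's worth, `rclone move` already behaves the way you describe: it deletes a local file only after that file has been transferred successfully, so no separate delete step is needed for the upload. A sketch of the two steps (paths are made-up examples; `--min-age 7d` makes the delete match only objects older than 7 days):

```
rclone move "D:\Backups" S3-Glacier:moneta/backups --progress
rclone delete S3-Glacier:moneta/backups --min-age 7d
```

As noted below, though, deleting Glacier objects after 7 days does not avoid the minimum storage duration charges.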
rclone is just a tool which lets you copy/move/sync/delete data to/from a remote (an S3 bucket in this case)
With S3 it is the same as with Gdrive - you can keep using your logic - it just does not make sense to delete after 5 days, due to the pricing of Glacier storage.
You put a file into DEEP_ARCHIVE, you pay for 180 days of storage. You can delete it after 5 days - but you will still pay for the full 180 days.
Do your reading - Glacier looks great on the surface and it can be very cool - but you have to understand all the cost aspects, or you will end up with a bill you do not expect.
Objects that are archived to S3 Glacier Instant Retrieval and S3 Glacier Flexible Retrieval are charged for a minimum storage duration of 90 days, and S3 Glacier Deep Archive has a minimum storage duration of 180 days.