S3 Glacier Commands

So I'm trying to use RClone to copy to S3 Glacier directly. Not to buckets, but to the S3 Glacier vault(s). I'm not finding a lot of information on people doing this.

Here is my config file:
[glacier]
type = s3
provider = AWS
env_auth = false
access_key_id = [REDACTED]
secret_access_key = [REDACTED]
region = us-west-2
location_constraint = us-west-2
acl = private
storage_class = GLACIER
bucket_acl = private
upload_cutoff = 2G
chunk_size = 1G

My vault is set up using IAM access, with a user having full access to the S3 Glacier vault. I DO NOT have a JSON-type Vault Access Policy set.

Here is my command and the output:

rclone ls glacier:Unitrends

2019/06/11 07:45:24 ERROR : S3 bucket Unitrends: Failed to update region for bucket: reading bucket location failed: AccessDenied: Access Denied
status code: 403, request id: D4FEA35D5E0F2DF9, host id: bEYqE6LqccRpkrdIdO6YH6CnCGOB7uRnJbFn88mLbpHrVSTunWcUAb/+o3KCFJMhOPYf5sgJQ/o=
2019/06/11 07:45:24 Failed to ls: BucketRegionError: incorrect region, the bucket is not in 'us-west-2' region
status code: 301, request id: , host id:

I have verified that my vault is in the US West (us-west-2) region.

I'm not sure what I'm doing wrong here... Any suggestions? Are there specific Glacier commands to test and verify against?

Thanks in advance...

rclone doesn't support the Glacier archive API yet, which is different to the S3 bucket API.

So I don't think you can do this currently. I'm sure this came up before in the forum but I can't find it now.

Thank you for your response. If this is the case, why does RClone offer Glacier Storage class support:

The storage class to use when storing objects in S3.
Choose a number from below, or type in your own value
 1 / Default
   \ ""
 2 / Standard storage class
   \ "STANDARD"
 3 / Reduced redundancy storage class
   \ "REDUCED_REDUNDANCY"
 4 / Standard Infrequent Access storage class
   \ "STANDARD_IA"
 5 / One Zone Infrequent Access storage class
   \ "ONEZONE_IA"
 6 / Glacier storage class
   \ "GLACIER"
 7 / Glacier Deep Archive storage class
   \ "DEEP_ARCHIVE"

Am I misinterpreting this?

You can put objects into S3 with the GLACIER storage class and they will get stored in Glacier. However, a Glacier vault is different from an S3 bucket holding GLACIER-class objects.
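
For example, a minimal sketch of that approach, assuming an ordinary S3 bucket (my-backup-bucket is just a placeholder name) rather than a vault:

# Upload via the S3 API; the GLACIER storage class means the objects
# land in the Glacier tier rather than STANDARD.
rclone copy /data/backups glacier:my-backup-bucket --s3-storage-class GLACIER

With storage_class = GLACIER already in the remote config (as in your config above), the flag isn't strictly needed; it just makes the intent explicit.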

I found this really good explanation:

Both are Glacier, but accessed via different APIs. With S3 -> Glacier you access the files via the S3 API instead of the Glacier API.

I'm pretty sure a Glacier vault is, behind the scenes, its own S3 bucket, but you use the Glacier API to access the files.

The biggest difference between the two is that you can only list files directly with the S3 model; via the Glacier API you need to create a job to fetch the inventory metadata.

Generally the S3 API works fine for cloud backups - you use S3 as the backup target and move stuff to Glacier with lifecycle rules.

The Glacier API is better for storage driven directly by an application that understands the Glacier API and manages its own metadata.

But by far the biggest benefit of the Glacier API is Vault Lock: you can lock stuff down so that even your admins (or even the root credentials) cannot delete it. Great for compliance, log retention, and legal holds.

https://docs.aws.amazon.com/amazonglacier/latest/dev/vault-lock.html
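
To make that "create a job" point concrete, here is a rough sketch with the AWS CLI (not rclone; the vault name is taken from the post above, and the job id is a placeholder) of what merely listing a vault's contents involves:

# Ask Glacier to build an inventory of the vault; the job typically
# takes hours, and the listing is fetched afterwards with get-job-output.
aws glacier initiate-job --account-id - --vault-name Unitrends \
    --job-parameters '{"Type": "inventory-retrieval"}'

# Poll until the job completes, then retrieve the inventory.
aws glacier describe-job --account-id - --vault-name Unitrends --job-id <job-id>

Compare that with a plain rclone ls against an S3 bucket, which returns immediately.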


The problem with this model is the cost. Direct Glacier uploads are less expensive than S3 bucket uploads. Apparently, this isn't the utility I'm looking for... :frowning:
I would recommend that the documentation on S3 / Glacier be updated, as it implies that you can use this utility to copy directly to Glacier.

Thanks for the information.

I thought uploading to S3 was free (in terms of network costs)?

It would be possible to write a backend for Glacier archive. AWS provide a Go library for it: https://docs.aws.amazon.com/sdk-for-go/api/service/glacier/

Fancy having a go at some words? rclone does support the GLACIER S3 storage class via the S3 interface, but it doesn't support the Glacier Vault API.
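
If you want to confirm the class is being applied, a quick sketch (AWS CLI; bucket and key are placeholders) is to look at an object's metadata after an upload:

# For non-STANDARD classes the response includes the storage class,
# e.g. "StorageClass": "GLACIER".
aws s3api head-object --bucket my-backup-bucket --key backups/file.bin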

Thank you for your response. Uploads to AWS are free; however, you cannot transition an object from one storage class to another until 30 days have passed, per the AWS lifecycle management policy. Maybe uploading with the GLACIER storage class directly avoids that wait?

As for the verbiage, I can see that you are correct about using the S3 API. My apologies here. I guess I misunderstood RClone's capabilities with the different APIs.

Kind Regards,

Alan

Yes, my understanding is that if you set a storage class of GLACIER it will go straight into GLACIER.

You are not the first person to be confused, so I'm going to say the docs need improving here!

I've added a bit to the docs which will hopefully help future people.

I do not believe this is correct. You can create a transition policy that moves objects from S3 to Glacier after 0 days (i.e. instantly).
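
A rough sketch of such a rule with the AWS CLI (bucket name and rule ID are placeholders):

# Transition every object in the bucket to GLACIER immediately (Days = 0).
aws s3api put-bucket-lifecycle-configuration --bucket my-backup-bucket \
    --lifecycle-configuration '{
      "Rules": [{
        "ID": "to-glacier-immediately",
        "Status": "Enabled",
        "Filter": { "Prefix": "" },
        "Transitions": [{ "Days": 0, "StorageClass": "GLACIER" }]
      }]
    }'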

Additionally, ncw is correct that if you specify GLACIER or DEEP_ARCHIVE in rclone config, anything you upload will automatically go to that class, so you don't even need to use the transition method.
