I want to trial S3, or more specifically S3 Glacier Deep Archive.
I've created a root user on the AWS website, but I'm having difficulty working out what exactly I need to do to create the rclone config. I believe I need an IAM user; is that right?
The config tool and website also mention several different providers that work with the S3 storage system. Is there any benefit to using a third-party S3-compatible provider rather than AWS S3 directly, or do I lose anything by not using AWS S3 direct?
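For reference, I believe the finished rclone remote would end up looking something like this (access keys redacted; the region and storage class are just my guesses at what I would set):

```ini
[aws]
type = s3
provider = AWS
access_key_id = XXXXXXXXXX
secret_access_key = XXXXXXXXXX
region = eu-west-2
storage_class = DEEP_ARCHIVE
```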
Also, am I right in thinking that if I choose the 48-hour retrieval option for anything I have in Deep Archive storage, there is no retrieval cost?
Also, when we request a retrieval, do the files themselves stay in Deep Archive while a copy is placed temporarily in a different S3 storage class, available for X number of days and then removed?
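If I've understood the docs, the retrieval itself would be something like this, where lifetime is the number of days the temporary copy stays available (remote, bucket, and path names here are placeholders):

```
rclone backend restore aws:my-bucket/path/file.bin -o priority=Bulk -o lifetime=2
```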
Do you know what policies I need to assign to the IAM user? I can't seem to figure it out, and the search function is no help.
I'm getting these errors after I have logged into the IAM user.
User: arn:aws:iam::REDACTED:user/REDACTED
Action: iam:ListMFADevices
Context: no identity-based policy allows the action
User: arn:aws:iam::REDACTED:user/REDACTED
Action: iam:ListAccessKeys
Context: no identity-based policy allows the action
User: arn:aws:iam::REDACTED:user/REDACTED
Action: iam:ListSigningCertificates
Context: no identity-based policy allows the action
User: arn:aws:iam::REDACTED:user/REDACTED
Action: iam:GetLoginProfile
Context: no identity-based policy allows the action
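In case it matters, I think the errors above are only about the user trying to view its own IAM details in the console, not about S3 access itself. For rclone, I believe a minimal identity-based policy scoped to one bucket would look roughly like this (bucket name is a placeholder):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": [
        "s3:ListBucket",
        "s3:GetBucketLocation",
        "s3:GetObject",
        "s3:PutObject",
        "s3:DeleteObject",
        "s3:RestoreObject"
      ],
      "Resource": [
        "arn:aws:s3:::my-bucket",
        "arn:aws:s3:::my-bucket/*"
      ]
    }
  ]
}
```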
Yes, I agree. I tried that AWS pricing calculator; so do I also get charged for uploading data to Glacier Deep Archive?
I'm thinking of just making a copy of my data to Deep Archive for now while trialling it. As it's low cost, I would set up a separate account and deposit maybe £10-£15 into it each month, then use that as a buffer as and when I need to get access to something. Some of my data I haven't needed to access for 5+ years, but until recently it sat in a Google Workspace account.
This is an unexpected learning curve moving from Google to S3.
But it's all good, getting there slowly. On another note, do AWS charge you for uploading content to Deep Archive, or have I totally misunderstood the confusing pricing calculator?
Nothing is really free with AWS, but on the positive side you only pay for what you use.
In terms of uploads (including Glacier) you do not pay for network bandwidth, but you do pay for API transactions. From a practical perspective it is better to have a few big files than many small ones.
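As a sketch, raising the multipart chunk size means fewer PUT requests per large file, and therefore fewer billable API transactions (remote and bucket names are placeholders; the exact sizes are just an example):

```
rclone copy /data aws:my-bucket/backup \
  --s3-storage-class DEEP_ARCHIVE \
  --s3-upload-cutoff 200M \
  --s3-chunk-size 64M \
  --transfers 4
```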
Yes, it is.
With gdrive, basically, your client ID acts as the root user. So it is simple to use, due to the lack of security features.
With S3, there are two types of users:
root user - does not require a policy; in production it should never be used.
IAM user - requires an identity-based policy granting the permissions it needs; this is the one to use with rclone.
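As a sketch, one way to create such an IAM user from the CLI while signed in with root/admin credentials (AmazonS3FullAccess is an AWS managed policy and broader than strictly necessary; a custom bucket-scoped policy is tighter):

```
# create a dedicated IAM user for rclone
aws iam create-user --user-name rclone
# attach an S3 policy (managed policy shown for brevity)
aws iam attach-user-policy --user-name rclone \
  --policy-arn arn:aws:iam::aws:policy/AmazonS3FullAccess
# generate the access key pair to paste into the rclone config
aws iam create-access-key --user-name rclone
```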
I originally had 125TB of data in Google Workspace, but I have been pretty ruthless and got it down to approx 8TB that I would hate to lose; this is now in JottaCloud. I have, I think, approx a further 15-20TB that wouldn't be the end of the world if I lost it, albeit some of it I would never be able to get again. Of the 8TB mentioned, I think approx 600M to 1TB could go into Deep Archive.
OK, but given the very small amount of data, why bother with the complexity of Deep Archive, policies, and complex cost analysis?
With iDrive e2 it's $2.50/TiB/month, and the first year is half-price.