Copying from Backblaze B2 to AWS S3

Does anyone have any good suggestions on how best to copy 1TB of data from B2 to S3?

I keep backups in B2 (using rclone). I want to keep occasional archive copies of those in S3, primarily in case B2 suddenly disappears: it is cheap, after all, and could close down at short notice.

I have asked B2 to create a snapshot of the roughly 1TB of backup data. That seems to be a single file (a bit tedious, but as this is a “last resort” backup of my backup, I can live with that). My plan is to copy that file to Amazon S3 and then move it into Glacier.

I assume rclone will be happy to copy the file for me, but I am trying to work out the cheapest and fastest way to do it. I thought some people here might have relevant experience and advice!
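For reference, this is roughly what I had in mind, run from the AWS instance. The remote and bucket names below are made up, and I believe rclone can set the S3 storage class at upload time, which would save the separate lifecycle move into Glacier:

```
# Copy the single B2 snapshot file straight into S3, uploading it with
# the GLACIER storage class so no separate lifecycle transition is needed.
# "b2remote" and "s3remote" are whatever the remotes are called in
# rclone.conf; the bucket and file names are placeholders.
rclone copy b2remote:my-snapshots/snapshot.zip s3remote:my-archive-bucket \
    --s3-storage-class GLACIER --progress
```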

I certainly can’t do it from home, so I presume the best option is to use an AWS server instance. I am sure I will have to pay B2 download fees as well as any Amazon upload fees, but I don’t want to pay Amazon twice (once to get the data to the server and then again to S3). I sometimes spin up a server instance for other things, so I might just use that. Does anyone know if I have to be careful about which region I do this from?

Any other advice or issues I should worry about?

It will. However, I would use rclone to sync the B2 bucket to the S3 bucket directly. That way you can do incremental copies and keep the archive up to date easily.
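Something along these lines, where the remote and bucket names are placeholders for your own:

```
# Sync the live B2 bucket straight into the S3 bucket. Only new or
# changed files are transferred, so repeat runs are incremental.
rclone sync b2remote:backup-bucket s3remote:archive-bucket --progress
```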

Data transfer into S3 is free. Data transfer out of S3 is also free within the same region (e.g. to an EC2 instance in that region).

The same region would be best.

Both Amazon and B2 charge for API operations, so you might want to try a sync and then see how many operations it uses, to get an idea of whether you would want to run it regularly.
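A dry run will show what a sync would do without actually transferring anything (names are placeholders again); after a real run you can check the B2 and AWS billing pages to see how many operations it cost:

```
# Preview the sync: list both sides and report what would be copied,
# but transfer nothing. Note the listing itself still counts as
# operations on both providers.
rclone sync b2remote:backup-bucket s3remote:archive-bucket --dry-run -v
```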

Thanks for the info.

That won’t help in my case: on the timescale I plan to do the archiving, it will all have changed anyway.

But does rclone work when the target data has been moved to Glacier?