Project Copy, delete

I have 30TB of data at Wasabi.
It works fine.
I would like extra security and have made an agreement with scaleway to keep a copy of my data.

Here's what I want:
Data must be synchronized from Wasabi to Scaleway daily.
When data is deleted on Wasabi, it must be deleted on Scaleway 14 days later.

There will be approx. 200GB of new data every day, and 180GB is deleted every day (Wasabi).

Glacier at Scaleway
Server to Server copy (if possible)

Can it be done with Rclone?

I am using the latest version of rclone

hello and welcome to the forum,

neither rclone, nor any other software, can do server to server copies between two different cloud providers.
there must be some computer to run rclone.

your best bet is to rent a virtual machine at scaleway and run rclone on that.
most providers do not charge ingress fees.

the exact rclone command would depend on the number of files and the file sizes, but this would be a good start
rclone sync wasabi: scaleway: --checksum --fast-list
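for the daily run, the simplest option is a cron entry on that machine. a sketch, assuming the remotes are named wasabi: and scaleway: in your rclone config and the bucket names are placeholders:

```shell
# /etc/cron.d/rclone-sync -- nightly Wasabi -> Scaleway sync at 03:00
# "wasabi:my-bucket" and "scaleway:my-bucket" are illustrative names
0 3 * * * root rclone sync wasabi:my-bucket scaleway:my-bucket --checksum --fast-list --log-file /var/log/rclone-sync.log
```

--checksum compares hashes instead of modtimes (both ends are S3-compatible, so hashes are cheap), and --fast-list reduces api calls on buckets with many objects.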

Hi and thanks for your reply
90% of my files are 30MB and 9% are 16MB.
I just tried your command and it works fine👍
Good idea to place it at Scaleway; it does not look very expensive.
Do you think it is possible to make a solution so that all deleted files are only deleted after 30 days?

i use wasabi, once a file is deleted it is gone forever.

might use versioning as a workaround

also, this might help

so with wasabi, keep in mind that if you

  1. upload a file today
  2. delete it tomorrow
    you will be charged for 89 days of storage.
    tho there is a trick around that....

You can also use --backup-dir to store the old backups.

So you could use $(date -I) in the backup dir name to make a new directory for each backup, then delete any older than 14 days.

This will require a very small amount of scripting so you only keep the last 14 days of backup dirs.

yes, that is what i always do, an example of that would be
rclone sync /path/to/data remote:data/backup --backup-dir=remote:data/archive/20210405.093036
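the whole thing can be sketched as a small script. this is only a sketch: the remote names "wasabi:" and "scaleway:" and the paths are illustrative, and the pruning relies on GNU date and on ISO dates sorting lexically.

```shell
#!/bin/bash
# Sync Wasabi -> Scaleway, parking deleted/overwritten files in a dated
# archive dir, then prune archive dirs older than 14 days.
# Remote names and paths are illustrative -- adjust to your rclone config.
TODAY=$(date -I)                     # ISO date, e.g. 2021-04-05
CUTOFF=$(date -I -d "14 days ago")   # GNU date: oldest archive dir to keep

# Files deleted on Wasabi land in today's archive dir instead of being deleted
rclone sync wasabi:data scaleway:data/backup \
    --backup-dir "scaleway:data/archive/$TODAY" --checksum --fast-list

# ISO dates sort lexically, so a plain string compare finds expired dirs
rclone lsf scaleway:data/archive/ --dirs-only | while read -r dir; do
    d=${dir%/}                       # strip trailing slash from lsf output
    if [[ "$d" < "$CUTOFF" ]]; then
        rclone purge "scaleway:data/archive/$d"
    fi
done
```

run it once a day from cron and you get exactly the 14-day grace period: a file deleted on wasabi survives in scaleway:data/archive/&lt;date&gt; until that dir ages out.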

also, a side question, as i use wasabi and aws s3 deep glacier.
why did you choose scaleway over aws, as it seems much more expensive to store data?
seems to only make sense if you plan to restore from scaleway glacier often.

Thanks for info on Wasabi💪

i think the price is good: C14 Cold Storage is €0.002/GB/month

I have read here in the forum that data in glacier (Scaleway) cannot be moved.
I have tested it, and it works with Standard.
Is it possible to give the file (to be deleted) a date and then just delete it after 14 days (without moving it)?

imho, that seems very expensive

aws deep glacier is $0.004/GB/month, which i think is €0.0034/GB/month

all files in the bucket?
just some individual files in the bucket?

on scaleway website, you would use a feature called lifecycle.

? that seems very expensive

It will always be a folder to be deleted

not sure what you mean?
you agree that scaleway is very expensive?

AWS €0.0034/GB/month
Scaleway €0.002/GB/month

sorry, that price was for aws s3 glacier.

for aws s3 deep glacier, per GB
Amazon S3 Simple Storage Service Pricing - Amazon Web Services
$0.00099 per GB which should be €0.0008

S3 stores objects, has no concept of folders.
rclone has no way to know the creation date of a folder.
Organizing objects in the Amazon S3 console using folders - Amazon Simple Storage Service

and for files, rclone can only use the modification date; it has no concept of creation date.
so if you want to delete a file 14 days after creation, rclone does not know the creation date.
tho depending on your use case, there might be a way around that.

scaleway, like most cloud providers, has a feature called lifecycle, that can delete objects after a period of time or transition files from standard storage to glacier storage.
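for reference, scaleway object storage speaks the S3 api, so a lifecycle rule can also be expressed as an S3-style xml configuration rather than through the console. a minimal sketch (rule ID and prefix are illustrative):

```xml
<LifecycleConfiguration>
  <Rule>
    <ID>expire-archived-after-14-days</ID>
    <Filter>
      <Prefix>archive/</Prefix>
    </Filter>
    <Status>Enabled</Status>
    <Expiration>
      <Days>14</Days>
    </Expiration>
  </Rule>
</LifecycleConfiguration>
```

with a rule like this, anything rclone parks under archive/ is removed by the provider itself 14 days later, no extra scripting needed.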

ok it's cheap💪 what about traffic and other expenses?

Can it be made so that the folders/files to be deleted at Scaleway are renamed, or given a "Tag"/"Value" pair, which I can then capture with a Lifecycle Rule?

as for other expenses, it is all on that weblink i shared.
aws has a very complex pricing scheme.
they charge for api calls, they charge to transition from glacier to standard, they charge for egress.
so it all depends on the use-case.

for me,
i have a local server.
i keep a copy of recent backups and valuable data at wasabi.
everything gets uploaded to aws s3 deep glacier.
i archive some data to external usb hard disks and burn blu-ray discs and keep it off-site.

in the three years of using aws s3, i have never downloaded any data and i do not expect to.