Hello!
Many thanks for the great tool rclone! I have spent a lot of time over the past couple of months exploring cloud storage, and I learned a bunch of fun stuff. But you can't always play and explore; you also have to work. So I am setting up a backup system for all my clients (only 4 or 5 of them could use this).
In the past I have always worked with rsync, and I recently discovered the --link-dest option, which is really cool: it lets me offer customers versions of all their files for the last 30 days, taken at 06:00, 09:00, 12:00 and 17:00 each day, while using up only about 300% of the original data set (instead of the 12000% an equivalent plain-copy scheme would need).
With rsync my command is:

```shell
rsync -av --delete \
    --exclude /backup --exclude /media --exclude .bash_history \
    --exclude lost+found --exclude .cache \
    --link-dest=$LINK_DEST /srv/ $DEST > $FILE_LIST
```
DEST and LINK_DEST are defined as follows (LINK_DEST is basically the most recent backup):

```shell
DEST=$BASE/srv-$(date +%d-%H)
LINK_DEST=$BASE/$(ls -1 --sort=time $BASE/*/timestamp.txt | head -1 | grep -Eo 'srv-[0-9]{2}-[0-9]{2}')
```
I make sure that `$DEST` is empty before running the rsync command.
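Assembled, the whole local job looks roughly like this (a simplified sketch, not my production script: the file-list path and the timestamp handling are shown schematically, and `RSYNC` can be pointed at a stub for a dry run):

```shell
#!/bin/sh
# Sketch of the snapshot job. Assumption: $BASE holds one directory per
# snapshot (srv-DD-HH), each containing a timestamp.txt written at the end.
RSYNC="${RSYNC:-rsync}"

snapshot() {                                    # $1 = backup root, e.g. /backup
    BASE=$1
    DEST=$BASE/srv-$(date +%d-%H)
    # newest timestamp.txt identifies the most recent snapshot
    LINK_DEST=$BASE/$(ls -1 --sort=time "$BASE"/*/timestamp.txt 2>/dev/null \
                      | head -1 | grep -Eo 'srv-[0-9]{2}-[0-9]{2}')
    rm -rf "$DEST" && mkdir -p "$DEST"          # make sure $DEST is empty
    "$RSYNC" -av --delete \
        --exclude /backup --exclude /media --exclude .bash_history \
        --exclude lost+found --exclude .cache \
        --link-dest="$LINK_DEST" /srv/ "$DEST" > "$DEST/files.txt"
    date > "$DEST/timestamp.txt"                # mark this snapshot as newest
}
```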
This is great because it's all local on the samba server, but now I want to implement something in the cloud that would keep a dozen copies of the data in the following slots:

day, day-1, week, week-1, month, month-1, Jan, Apr, Jul, Dec, year, year-1
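The schedule for those slots boils down to a bit of date arithmetic. Roughly (my assumptions, simplified from the real script: week slots roll on Mondays, month/quarter/year slots on the 1st; the cascade order, e.g. week into week-1 before day into week, is left out):

```shell
#!/bin/sh
# Print which slots should receive a fresh copy of day/ on a given date.
slots_for() {                           # $1 = date as YYYY-MM-DD
    dom=$(date -d "$1" +%d)             # day of month, 01..31
    dow=$(date -d "$1" +%u)             # day of week, 1 = Monday
    mon=$(LC_ALL=C date -d "$1" +%b)    # Jan, Feb, ...
    echo day-1                          # yesterday always rotates
    [ "$dow" = 1 ] && { echo week-1; echo week; }
    if [ "$dom" = "01" ]; then
        echo month-1; echo month
        case $mon in Jan|Apr|Jul|Dec) echo "$mon" ;; esac
        [ "$mon" = Jan ] && { echo year-1; echo year; }
    fi
    return 0
}
```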
This way the customer could go back quite a bit in time. I'm pretty cost conscious and love grinding things down, so I found ovh.com offering cold storage at 0.0034 CAD/GB/month, which is pretty much the cheapest "reliable" option (read: should not shut down in the next 5 years). I implemented everything with rclone and it works fine; the only problem is that I'm using up 12 x 150 GB, whereas with rsync and --link-dest it would take about 1.2-2.5 x 150 GB. At least I'm not paying for the traffic 12 times, because rclone does a server-side copy when I do the rotations, which is great. But I was wondering: could I grind my rclone command to work a bit like the rsync one and use much less space in the cloud?

For the moment I'm using a very simple

```shell
rclone sync /srv pca-enc:[customerId]/day/srv
```

and a rotation script that does

```shell
rclone sync pca:[customerId]/day pca:[customerId]/[slotName]/srv
```

when required for each day, week, month, ...
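Schematically, the rotation is just a server-side copy inside the remote (customer id and slot name are placeholders here, and `RCLONE` can be pointed at a stub for a dry run):

```shell
#!/bin/sh
# Sketch of one rotation step: copy the current day/ snapshot into a slot.
RCLONE="${RCLONE:-rclone}"

rotate() {      # $1 = customer id, $2 = slot name (day-1, week, Jan, ...)
    # both paths are on the same remote, so rclone does a server-side
    # copy and the traffic is not paid again
    "$RCLONE" sync "pca:$1/day" "pca:$1/$2/srv"
}
```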
Hope my question is not too long! Good day to all, thanks, Louis