Best way to delete backups older than x days on remote

Hi,

So I’ve successfully set up a DigitalOcean Droplet with rclone that backs up Google Drive to Backblaze B2. But I need a way to delete the backups that are older than 14 days.
(My backups are just folders named e.g.: 20161211_1327)

Thx,
Thibault

crontab -e
0 3 * * * find /path/* -type d -ctime +14 | xargs rm -rf >/dev/null 2>&1

Every day at 3am it will delete all folders older than 14 days.

Thanks! But my problem is: what is the best way to do this on an rclone remote?

Do I mount it as a FUSE drive and then use that command?

Yes, just mount the drive and set that as path.


Be careful with that. Some remotes don’t store modification times which means you’re going to potentially delete files you don’t want to delete. I’m not familiar with the remotes the OP is using but wanted to mention this for others using remotes like ACD.

Yeah. I’m testing it right now with a test bucket on Backblaze XD. And it seems like Backblaze B2 doesn’t store modification dates for folders.

I’m thinking about letting rclone create a file (after it has done the backup) called BACKUPDATE in the root of each backup folder, containing the date the backup was made… which does mean I would have to write a script that checks those.

All my folders are called YYYYMMDD_hhmm
So maybe someone could make a script which just checks each folder name and calculates if the folder name date is older than 14 days.
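A minimal sketch of such a check (assuming bash and GNU date; the folder names below are made-up examples): since YYYYMMDD_hhmm names sort chronologically, a plain string comparison against a cutoff timestamp is enough.

```shell
# Hypothetical sketch: flag YYYYMMDD_hhmm backup names older than 14 days.
# Assumes bash and GNU date ("date -d" is a GNU extension).
is_expired() {
  local name="$1" cutoff
  cutoff=$(date -d "14 days ago" +%Y%m%d_%H%M)
  # String comparison works because the naming scheme sorts chronologically.
  [[ "$name" < "$cutoff" ]]
}

# Example run against two made-up folder names:
for name in 20161120_0300 20161210_0300; do
  if is_expired "$name"; then echo "delete $name"; else echo "keep $name"; fi
done
```

From there you could feed the "delete" names into whatever removal command fits your setup.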

You don’t need to calculate, you can just do:
ls -d /path/*/ | head -n -5 | xargs rm -rf

Try head or tail (not sure how it’s sorted); basically with -n -5 it will show all but the first/last 5 (i.e. change the number to however many backups you want to keep).

P.S. To test it first, remove | xargs rm -rf
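A quick local demo of the count-based idea (the folder names and temp directory are made up for illustration): seven date-named folders are created, and all but the newest five are listed as deletion candidates.

```shell
# Demo of count-based retention on a local scratch directory.
demo=$(mktemp -d)
cd "$demo"
for d in 20161201_0300 20161202_0300 20161203_0300 \
         20161204_0300 20161205_0300 20161206_0300 20161207_0300; do
  mkdir "$d"
done
# ls sorts names lexically, which is chronological for YYYYMMDD_hhmm;
# head -n -5 (GNU head) drops the last, i.e. newest, five entries.
ls -d */ | head -n -5
# prints: 20161201_0300/ and 20161202_0300/
```

Piping that result into `xargs rm -rf` (from inside the directory) would remove exactly the two oldest backups.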


OMG thank you so much. THIS IS PERFECT!

I was already messing around with Python (never used Python before; I only know a bit of Bash and JS) XD

This has saved me soooo much time!! :grinning:

I’m having some issues, but I’m not sure if I need to open an issue about it on GitHub.

I successfully mount my Backblaze B2 bucket, and I can ls in the root of the bucket, then cd into a folder… but when I ls in there, it returns 0 results (there are 5 folders in there).

This is my FUSE debug info (I did not include the ‘ino’ numbers because I thought they might be security-sensitive… if they’re not, I can replace the placeholders with them if you want):

2016/12/11 17:18:37 rclone: Version "v1.34" starting with parameters ["rclone" "mount" "BackblazeB2Apotheek:GoogleDriveBackup" "/mnt/BackblazeB2Apotheek" "--debug-fuse" "-v"]
2016/12/11 17:18:38 B2 bucket GoogleDriveBackup: Modify window is 1ms
2016/12/11 17:18:38 B2 bucket GoogleDriveBackup: Mounting on “/mnt/BackblazeB2Apotheek”
2016/12/11 17:18:38 B2 bucket GoogleDriveBackup: Root()
2016/12/11 17:19:02 fuse: <- Getattr [ID=0x2 Node=0x1 Uid=0 Gid=0 Pid=2482] 0x0 fl=0
2016/12/11 17:19:02 : Dir.Attr
2016/12/11 17:19:02 fuse: -> [ID=0x2] Getattr valid=1m0s ino=1 size=0 mode=drwxr-xr-x
2016/12/11 17:19:02 fuse: <- Access [ID=0x3 Node=0x1 Uid=0 Gid=0 Pid=2482] mask=0x1
2016/12/11 17:19:02 fuse: -> [ID=0x3] Access
2016/12/11 17:19:09 fuse: <- Lookup [ID=0x4 Node=0x1 Uid=0 Gid=0 Pid=2482] “DAILY”
2016/12/11 17:19:09 DAILY: Dir.Lookup
2016/12/11 17:19:09 : Reading directory
2016/12/11 17:19:32 DAILY: Dir.Lookup OK
2016/12/11 17:19:32 DAILY: Dir.Attr
2016/12/11 17:19:32 fuse: -> [ID=0x4] Lookup 0x2 gen=0 valid=1m0s attr={valid=1m0s ino=██████████ size=0 mode=drwxr-xr-x}
2016/12/11 17:19:32 fuse: <- Access [ID=0x5 Node=0x2 Uid=0 Gid=0 Pid=2482] mask=0x1
2016/12/11 17:19:32 fuse: -> [ID=0x5] Access
2016/12/11 17:19:59 fuse: <- Open [ID=0x6 Node=0x2 Uid=0 Gid=0 Pid=2501] dir=true fl=OpenReadOnly+OpenDirectory+OpenNonblock
2016/12/11 17:19:59 fuse: -> [ID=0x6] Open 0x1 fl=0
2016/12/11 17:19:59 fuse: <- Read [ID=0x7 Node=0x2 Uid=0 Gid=0 Pid=2501] 0x1 4096 @0x0 dir=true fl=0 lock=0 ffl=OpenReadOnly+OpenDirectory+OpenNonblock
2016/12/11 17:19:59 DAILY: Dir.ReadDirAll
2016/12/11 17:19:59 DAILY: Reading directory
2016/12/11 17:20:08 DAILY: Dir.ReadDirAll OK with 0 entries
2016/12/11 17:20:08 fuse: -> [ID=0x7] Read 0
2016/12/11 17:20:08 fuse: <- Release [ID=0x8 Node=0x2 Uid=0 Gid=0 Pid=0] 0x1 fl=OpenReadOnly+OpenDirectory+OpenNonblock rfl=0 owner=0x0
2016/12/11 17:20:08 fuse: -> [ID=0x8] Release
2016/12/11 17:20:54 fuse: <- Getattr [ID=0x9 Node=0x1 Uid=0 Gid=0 Pid=2482] 0x0 fl=0
2016/12/11 17:20:54 : Dir.Attr
2016/12/11 17:20:54 fuse: -> [ID=0x9] Getattr valid=1m0s ino=1 size=0 mode=drwxr-xr-x
2016/12/11 17:20:54 fuse: <- Lookup [ID=0xa Node=0x1 Uid=0 Gid=0 Pid=2482] “DAILY”
2016/12/11 17:20:54 DAILY: Dir.Lookup
2016/12/11 17:20:54 DAILY: Dir.Lookup OK
2016/12/11 17:20:54 DAILY: Dir.Attr
2016/12/11 17:20:54 fuse: -> [ID=0xa] Lookup 0x2 gen=0 valid=1m0s attr={valid=1m0s ino=████████ size=0 mode=drwxr-xr-x}
2016/12/11 17:20:54 fuse: <- Lookup [ID=0xb Node=0x2 Uid=0 Gid=0 Pid=2482] “20161209_1150”
2016/12/11 17:20:54 DAILY/20161209_1150: Dir.Lookup
2016/12/11 17:20:54 fuse: -> [ID=0xb] Lookup error=ENOENT
2016/12/11 17:20:54 fuse: <- Lookup [ID=0xc Node=0x2 Uid=0 Gid=0 Pid=2482] “20161209_1150”
2016/12/11 17:20:54 DAILY/20161209_1150: Dir.Lookup
2016/12/11 17:20:54 fuse: -> [ID=0xc] Lookup error=ENOENT
2016/12/11 17:20:54 fuse: <- Lookup [ID=0xd Node=0x2 Uid=0 Gid=0 Pid=2482] “20161209_1150”
2016/12/11 17:20:54 DAILY/20161209_1150: Dir.Lookup
2016/12/11 17:20:54 fuse: -> [ID=0xd] Lookup error=ENOENT

Any idea what’s going on @ncw ?

I can verify that! Can you open an issue about it on GitHub, please?

In the meantime, to solve your initial problem:

If we recast that slightly into "find all backups beyond the most recent 14", like this:

rclone -q lsd b2:bucket | cut -c 44- | egrep '^[0-9]{8}_[0-9]{4}$' | sort -r | sed -n '15,$p'

You can then plug that into a for loop and use rclone purge to delete them one at a time.
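A hypothetical sketch of that loop (the bucket name and folder names are placeholders, and `list_old_backups` stands in for the `rclone lsd` pipeline above; the `echo` keeps it as a dry run — drop it to actually purge):

```shell
# Stand-in for: rclone -q lsd b2:bucket | cut -c 44- | egrep ... | sort ...
# so the loop structure can be shown without touching a real remote.
list_old_backups() {
  printf '%s\n' 20161101_0300 20161102_0300
}

# Purge each expired backup folder one at a time (dry run via echo).
for dir in $(list_old_backups); do
  echo rclone purge "b2:bucket/$dir"
done
```

The word-splitting in `$(list_old_backups)` is safe here because the YYYYMMDD_hhmm names contain no whitespace.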

Note that I’m planning an rclone backup command which will do pretty much exactly this.
