I suspect I have some odd situations on my TrueNAS storage caused by symlinks, like:
/mnt/redsamba🔒 on ☁️ kai.hendry@gmail.com took 1m23s
❯ ll fcpx/CloudFunctions.fcpbundle/.fcpcache
Permissions Size User Date Modified Name
drwxr-xr-x - root 6 Apr 21:47 .fcpcache
drwxr-xr-x - root 27 Jan 2019 24-1-19
.rwxr-xr-x 0 root 24 Jan 2019 .lock
.rwxr-xr-x 358 root 24 Jan 2019 .lock-info
.rwxr-xr-x 0 root 25 Jan 2019 __Sync__
.rwxr-xr-x 106k root 25 Jan 2019 CurrentVersion.flexolibrary
.rwxr-xr-x 324 root 24 Jan 2019 CurrentVersion.plist
.rwxr-xr-x 327 root 24 Jan 2019 Settings.plist
/mnt/redsamba🔒 on ☁️ kai.hendry@gmail.com took 3s
❯ ll fcpx/CloudFunctions.fcpbundle/.fcpcache/.fcpcache
Permissions Size User Date Modified Name
drwxr-xr-x - root 6 Apr 21:47 .fcpcache
drwxr-xr-x - root 27 Jan 2019 24-1-19
.rwxr-xr-x 0 root 24 Jan 2019 .lock
.rwxr-xr-x 358 root 24 Jan 2019 .lock-info
.rwxr-xr-x 0 root 25 Jan 2019 __Sync__
.rwxr-xr-x 106k root 25 Jan 2019 CurrentVersion.flexolibrary
.rwxr-xr-x 324 root 24 Jan 2019 CurrentVersion.plist
.rwxr-xr-x 327 root 24 Jan 2019 Settings.plist
I will try removing all .fcpcache directories, but then what is the equivalent of rsync --delete to remove the extra data on the destination? I'm a bit nervous about the 7+TB in the e2 bucket, because I've only paid for 5TB!
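The removal step above can be sketched like this. This is a minimal demo against a throwaway directory, not the real /mnt/redsamba paths; the loop is simulated with a self-referential symlink, which is what the repeated identical listings above suggest:

```shell
# Throwaway stand-in for the real library bundle paths.
rm -rf demo && mkdir -p demo/lib.fcpbundle/.fcpcache
ln -s . demo/lib.fcpbundle/.fcpcache/.fcpcache   # .fcpcache "contains" itself

# -prune stops find descending into a match, so it cannot loop even if the
# samba mount presents the symlink as a directory; then delete the matches.
find demo -type d -name .fcpcache -prune -exec rm -rf {} +
find demo -name .fcpcache   # prints nothing once the caches are gone
```

Run the same find (pointed at the real mount) without -exec first, to see what would be deleted.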
Tip 1: You can filter out the .fcpcache folders by adding --exclude=".fcpcache/**", which can be combined with --delete-excluded (please read the docs and be careful here!)
Tip 2: You may be able to speed up the sync by adding --checkers=16 (or higher).
Please try with --dry-run first to avoid accidental/unexpected deletion of data.
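Putting those tips together, a first dry-run might look like this (the remote name e2:rednas and the source path are assumed from this thread; check the output carefully before dropping --dry-run):

```
rclone sync /mnt/redsamba e2:rednas \
  --exclude=".fcpcache/**" \
  --delete-excluded \
  --checkers=16 \
  --dry-run -v
```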
Rclone ignores symlinks by default unless you use -l or -L.
If your syncs end with an ERROR, rclone won't run the delete phase, which could explain the usage going up and up if things are continually changing on the source.
/tmp/e2-again.log is 87M, so gist won't even ingest it at this point.
I think at this point I'm going to give up and delete the bucket before I incur too many charges, unless someone has a bright idea?
Side note: does anyone have a good backup ignore list? I realise I probably should not have transferred most dotfiles (except maybe .ssh), and should have ignored vendor, node_modules, tmp directories and so on.
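For what it's worth, a starting point for such a list could be an rclone filter file used with --filter-from. The entries here are just common offenders, not an authoritative list; rclone applies rules in order and the first match wins, so the .ssh include must come before the dotfile excludes:

```
# backup-filter.txt — hypothetical example, use with:
#   rclone sync ... --filter-from backup-filter.txt
+ .ssh/**
- node_modules/**
- vendor/**
- tmp/**
- .fcpcache/**
- .*/**
- .*
```

Test it with `rclone ls --filter-from backup-filter.txt <source>` (or --dry-run) before trusting it for a real sync.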
Total objects: 3.050M (3049569)
Total size: 4.519 TiB (4968422880640 Byte)
But it only ran successfully on my NAS with rclone v1.57.0-DEV; on the samba mount on my Arch Linux machine it fails with "Failed to size with 11 errors: last error was: directory not found".
I am going to have to try a sync from my TrueNAS machine.
IDrive support suggest the bucket is swelling because of versioning, which makes little sense to me since my NAS is essentially a bunch of immutable, unchanging files.
I'm unable to actually toggle versioning off, so I am having to recreate the bucket and re-upload right now.
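One way to test the versioning theory, assuming IDrive e2 behaves as a normal S3-compatible backend and your rclone build is recent enough to have the flag (newer than the v1.57 mentioned above), is to compare the bucket size with and without old object versions included:

```
rclone size e2:rednas
rclone size e2:rednas --s3-versions   # includes old versions, if supported
```

If the second number is far larger than the first, versioning really is where the extra terabytes went.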
I suggest you try to find out what happened before deleting/retrying - otherwise you may well end up in the same or a similar situation at a later point.
Perhaps you first tried this:
rclone sync . e2:rednas/backup
and later changed your mind and did this instead:
rclone sync . e2:rednas
or you executed from different folders, e.g. first:
rclone sync . e2:rednas # executed as root from /homes
and then later:
rclone sync . e2:rednas # executed as kai from /homes/kai
Perhaps it is something more subtle or completely different, but it is still very much worth understanding. Right now all you know is that something in your understanding of rclone, IDrive or your data is mistaken.
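To work out which of those scenarios happened, an interactive walk of the bucket can show where the extra data actually lives (remote name assumed from this thread):

```
rclone ncdu e2:rednas
```

If you see your home directories duplicated at two different depths, that points at the "executed from different folders" case above.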
Sure? It looks like it can be toggled off on existing buckets in this IDrive FAQ screenshot: