Size swelling whilst transferring

What is the problem you are having with rclone?

Sync never finishes, destination bucket swells in size.

Run the command 'rclone version' and share the full output of the command.

I use Archlinux btw

❯ rclone version
rclone v1.62.2
- os/version: arch (64 bit)
- os/kernel: 6.2.10-arch1-1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.2
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

IDrive e2: https://app.idrivee2.com/buckets

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync -P . e2:rednas

The rclone config contents with secrets removed.

[e2]
type = s3
provider = IDrive
access_key_id = Du1BBaX56m4ZWkSGqCSi
acl = private
bucket_acl = private
endpoint = r6l4.ldn.idrivee2-17.com

A log from the command with the -vv flag

I suspect I have some odd situations on my TrueNAS storage with some links, like:

/mnt/redsamba🔒 on ☁️  kai.hendry@gmail.com took 1m23s
❯ ll fcpx/CloudFunctions.fcpbundle/.fcpcache
Permissions Size User Date Modified Name
drwxr-xr-x     - root  6 Apr 21:47  .fcpcache
drwxr-xr-x     - root 27 Jan  2019  24-1-19
.rwxr-xr-x     0 root 24 Jan  2019  .lock
.rwxr-xr-x   358 root 24 Jan  2019  .lock-info
.rwxr-xr-x     0 root 25 Jan  2019  __Sync__
.rwxr-xr-x  106k root 25 Jan  2019  CurrentVersion.flexolibrary
.rwxr-xr-x   324 root 24 Jan  2019  CurrentVersion.plist
.rwxr-xr-x   327 root 24 Jan  2019  Settings.plist

/mnt/redsamba🔒 on ☁️  kai.hendry@gmail.com took 3s
❯ ll fcpx/CloudFunctions.fcpbundle/.fcpcache/.fcpcache
Permissions Size User Date Modified Name
drwxr-xr-x     - root  6 Apr 21:47  .fcpcache
drwxr-xr-x     - root 27 Jan  2019  24-1-19
.rwxr-xr-x     0 root 24 Jan  2019  .lock
.rwxr-xr-x   358 root 24 Jan  2019  .lock-info
.rwxr-xr-x     0 root 25 Jan  2019  __Sync__
.rwxr-xr-x  106k root 25 Jan  2019  CurrentVersion.flexolibrary
.rwxr-xr-x   324 root 24 Jan  2019  CurrentVersion.plist
.rwxr-xr-x   327 root 24 Jan  2019  Settings.plist

I will try remove all .fcpcache directories, but then what is an equivalent of a rsync --delete to delete the extra data on the destination? I'm a bit nervous about the 7+TB on the e2 bucket, because I've only paid for 5TB!

Hi Kai,

rclone sync does --delete-after by default. You can add --delete-before to your command to delete the extra destination files before transferring. More info here: https://rclone.org/docs/#delete-before-during-after

Tip 1: You can filter out the .fcpcache folders by adding --exclude=".fcpcache/**", which can be combined with --delete-excluded (please read the docs and be careful here!)

Tip 2: You may be able to speed up the sync by adding --checkers=16 (or higher).

Please try with --dry-run first to avoid accidental/unexpected deletion of data.
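
Putting those tips together, something like this should be close (an untested sketch - keep --dry-run until you are happy with what it plans to delete):

rclone sync -P . e2:rednas \
  --delete-before \
  --exclude ".fcpcache/**" \
  --delete-excluded \
  --checkers 16 \
  --dry-run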

Rclone ignores symlinks by default unless you use -l or -L.

If your syncs are ending with an ERROR then rclone won't run the delete phase, which could explain the usage going up and up if stuff is continually changing on the source.
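
A quick way to check for that is to grep a log written with --log-file (the path below is just a placeholder); rclone prints a message along the lines of "not deleting files as there were IO errors" when it skips the delete phase:

grep -c ERROR /path/to/rclone.log
grep -i "not deleting" /path/to/rclone.log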

Thank you for looking into this...

Size still looks like it's swelling:

I can't seem to tell if it actually deleted anything after I ran find . -type d -name ".fcpcache" -exec rm -r {} +

Any other tips?!

Else I'm tempted to kill this bucket and just give up.

Interesting, seems like rclone doesn't respect the --delete-before flag.

This log extract doesn't match my expectations:

2023/04/22 16:17:16 DEBUG : rclone: Version "v1.62.2" starting with parameters ["rclone" "sync" "-P" "." "e2:rednas" "--delete-before" "-vv" "--log-file" "/tmp/e2-again.log"]
2023/04/22 16:17:16 DEBUG : Creating backend with remote "."
2023/04/22 16:17:16 DEBUG : Using config file from "/home/hendry/.config/rclone/rclone.conf"
2023/04/22 16:17:16 DEBUG : fs cache: renaming cache item "." to be canonical "/mnt/redsamba"
2023/04/22 16:17:16 DEBUG : Creating backend with remote "e2:rednas"
2023/04/22 16:17:16 DEBUG : Resolving service "s3" region "us-east-1"
2023/04/22 16:17:16 DEBUG : Waiting for deletions to finish
2023/04/22 16:28:49 DEBUG : S3 bucket rednas: Waiting for checks to finish
2023/04/22 16:28:49 DEBUG : S3 bucket rednas: Waiting for transfers to finish
2023/04/22 16:28:50 DEBUG : db_file.1pDSHK: Size and modification time the same (differ by 0s, within tolerance 1ns)
2023/04/22 16:28:50 DEBUG : db_file.1pDSHK: Unchanged skipping
...
2023/04/22 16:29:02 DEBUG : fcpx/architecture.fcpbundle/.lock: md5 = d41d8cd98f00b204e9800998ecf8427e OK
2023/04/22 16:29:02 INFO : fcpx/architecture.fcpbundle/.lock: Copied (new)
2023/04/22 16:29:02 DEBUG : 2018-04-28/DSCF1470.JPG: Size and modification time the same (differ by 0s, within tolerance 1ns)
2023/04/22 16:29:02 DEBUG : 2018-04-28/DSCF1470.JPG: Unchanged skipping
...

because a new file is transferred while checks are still ongoing; that is, not all deletions may have been detected/completed yet.

Unfortunately I don't have time to dig any further at the moment, perhaps others can explain/troubleshoot?

:thinking: unless the first pass and deletions are (silently) happening in these 11 minutes:

Do you happen to have some kind of retention or versioning enabled on the target account/bucket?

That would explain why the size doesn't shrink after deletions, but only keeps growing when new/updated files are transferred.
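
If the account allows it, you can also query the bucket's versioning state directly with the s3 backend's versioning command in recent rclone versions; a rough sketch (check the docs for your version):

rclone backend versioning e2:rednas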

Next morning, it just keeps getting larger and larger.

Bit disappointed I can't see a total here:

❯ rclone sync -P . e2:rednas --delete-before -vv --log-file /tmp/e2-again.log
Transferred:      456.995 GiB / 1.675 TiB, 27%, 12.710 MiB/s, ETA 1d4h9m
Errors:                 3 (retrying may help)
Checks:            212027 / 212027, 100%
Transferred:         3152 / 13164, 24%
Elapsed time:   12h3m16.0s
Transferring:

/tmp/e2-again.log is 87M, so gist won't even ingest it at this point.

I think at this point I'm going to give up and delete the bucket before I incur too many charges, unless someone has a bright idea?

Sidenote: I'm not sure if someone has a good backup ignore list. I realise I should probably not have transferred most dotfiles (except maybe .ssh) and should have ignored "vendor", "node_modules", tmp directories and so on.
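
Roughly what I have in mind is a filter file used with --filter-from; an untested sketch (rules are matched top-down, first match wins, and anything unmatched is included by default):

# backup.filter (name is just an example)
# keep .ssh even though other dot-directories are excluded below
+ .ssh/**
# drop other dotfiles and the contents of other dot-directories
- .*/**
- .*
# common junk directories
- node_modules/**
- vendor/**
- tmp/**

Used as rclone sync -P . e2:rednas --filter-from backup.filter --dry-run until the planned transfers look right.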

What output do you see if you execute these two commands:

rclone size .
rclone size e2:rednas 

Tip: You can get a better understanding of your data with this command:

rclone ncdu .

Total objects: 3.050M (3049569)
Total size: 4.519 TiB (4968422880640 Byte)

But it only ran successfully on my NAS with rclone v1.57.0-DEV, since on the samba mount on my Archlinux machine it hits: Failed to size with 11 errors: last error was: directory not found.

I am going to have to try a sync from my TrueNAS machine.

Thanks, what is the output of this command (using v1.62.2):

 rclone size e2:rednas

IDrive Support suggests the bucket is swelling because of versioning, which makes little sense to me since my NAS is essentially a bunch of immutable/unchanging files.

I'm unable to actually toggle off versioning, so I am having to recreate and re-upload rn. :person_facepalming:

I suggest you try to find out what happened before deleting/retrying - otherwise you may well end up in the same or a similar situation at a later point.

Perhaps you first tried this:

rclone sync . e2:rednas/backup

and later changed your mind to do like this:

rclone sync . e2:rednas

or you executed from different folders, e.g. first:

rclone sync . e2:rednas  # executed as root from /homes

and then later:

rclone sync . e2:rednas  # executed as kai from /homes/kai

Perhaps something more subtle or completely different, but still very good to understand. Right now all you know is that something in your understanding of rclone, IDrive or your data is mistaken.
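
One concrete check before wiping anything: compare the bucket size with and without old versions included (the --s3-versions flag in recent rclone); a large difference would point straight at versioning/retention:

rclone size e2:rednas
rclone size e2:rednas --s3-versions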

Are you sure? It looks like it can be toggled off on existing buckets in this IDrive FAQ screenshot:

Source: https://www.idrive.com/object-storage-e2/faq-dashboard#version_bucket

You can do this with rclone too - see the s3 versioning docs
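
Roughly like this (untested here; suspending stops new versions being created but does not remove the versions already stored):

rclone backend versioning e2:rednas Suspended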

I realise I never did a command like I did for b2 to remove all the old versions.
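
For b2 that was rclone cleanup; if I read the s3 backend docs correctly, the equivalent here would be something like the following (untested on my side, and note that plain cleanup only clears unfinished multipart uploads):

rclone cleanup e2:rednas
rclone backend cleanup-hidden e2:rednas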

