I noticed that at some point I lost more disk space than expected.
I looked into rclone's --cache-dir that I set for a mount, and found 3 directories for the same remote:
$ du -sh RCloneCache/vfs/*
1.4T RCloneCache/vfs/B2{AAAAA}
123G RCloneCache/vfs/B2{BBBBB}
45G RCloneCache/vfs/B2{CCCCC}
$ du -sh RCloneCache/vfsMeta/*
13M RCloneCache/vfsMeta/B2{AAAAA}
9.3M RCloneCache/vfsMeta/B2{BBBBB}
9.1M RCloneCache/vfsMeta/B2{CCCCC}
(I redacted the unique strings, because I wasn't sure if they're safe to share.)
The mount has --vfs-cache-max-size 1400G, so cumulatively these 3 directories are well over the limit. Their contents mirror the same remote, with different subsets of files stored locally.
How could I reunite them?
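(Quick arithmetic on the du numbers above, treating 1.4T as roughly 1434G:)

```shell
#!/bin/sh
# Rough total of the three vfs trees vs the configured cap.
# 1.4T is about 1434G; the other two trees are 123G and 45G.
total=$((1434 + 123 + 45))
limit=1400
echo "combined ${total}G vs --vfs-cache-max-size ${limit}G"
```

So the combined usage is about 200G over the cap.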
Please run the command 'rclone version' and share its full output.
Note: I'm pretty sure this started before I switched to the "deadlock fix" version, but it may have started after the new B2 concurrency code; I'm not sure about the latter.
Just noticed that the 2 new directories were created Sep 23 and Sep 24, 2023. This was after I jumped to v1.65.0 (my first time getting the refactored B2 code), but before I switched to the deadlock fix branch.
I believe it uses its own digest / fingerprint value that it adds to the remote name; I remember seeing it in the code a while ago.
I checked, and unfortunately journalctl in my case no longer goes back to Sep 23/24, probably because I enabled DEBUG mode for another issue and the logs have since rotated due to the increased volume.
Hm, could this issue happen if data on the B2 remote was modified manually? Is that what the random string in the directory name represents: a digest of the complete remote state?
I'm currently always running with debug. I just grepped the entire log history for the word "canonical" (it only goes back a couple of days) and found this:
Sep 25 23:37:17 htpc rclone[252704]: DEBUG : fs cache: renaming cache item "B2:Bucket/path" to be canonical "B2{BBBBB}:Bucket/path"
BBBBB is a 5-character alphanumeric string corresponding to the 123G RCloneCache/vfs/B2{BBBBB} directory from my original post.
P.S. Thank you for that log sample, btw; it reminded me that it's called the "canonical name". I had been trying to search the codebase for it.
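In case it helps anyone else searching their journal, this is roughly how I pulled the suffix out of that message (the sample line is hard-coded below; in practice you'd pipe `journalctl -u <your mount unit>` through the same sed):

```shell
#!/bin/sh
# Sample journal message; the remote name "B2" and the suffix are
# placeholders from my setup.
line='DEBUG : fs cache: renaming cache item "B2:Bucket/path" to be canonical "B2{BBBBB}:Bucket/path"'
# Extract whatever sits between { and } in the canonical name.
printf '%s\n' "$line" | sed -n 's/.*to be canonical "[^{]*{\([^}]*\)}.*/\1/p'
```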
The full log spans over 6 million lines across 2 days, so I don't think I can share all of it. I'm trying to find the relevant part. Here are the lines around "vfs cache: data root":
Sep 25 23:35:58 htpc rclone[164069]: INFO : vfs cache: cleaned: objects 2011 (was 2011) in use 94, to upload 74, uploading 20, total size 63.731Gi (was 63.731Gi)
Sep 25 23:36:38 htpc systemd[1]: htpc-mount.service: Failed with result 'timeout'.
Sep 25 23:36:38 htpc systemd[1]: Stopped B2 HTPC Mount.
Sep 25 23:36:38 htpc systemd[1]: htpc-mount.service: Consumed 46min 3.574s CPU time.
Sep 25 23:37:16 htpc systemd[1]: Starting B2 HTPC Mount...
Sep 25 23:37:16 htpc rclone[252704]: INFO : Starting bandwidth limiter at 1Mi:off Byte/s
Sep 25 23:37:16 htpc rclone[252704]: DEBUG : rclone: systemd logging support activated
Sep 25 23:37:16 htpc rclone[252704]: NOTICE: Serving remote control on http://127.0.0.1:5572/
Sep 25 23:37:16 htpc rclone[252704]: NOTICE: --fast-list does nothing on a mount
Sep 25 23:37:16 htpc rclone[252704]: DEBUG : Creating backend with remote "B2:Bucket/path"
Sep 25 23:37:16 htpc rclone[252704]: DEBUG : Using config file from "/home/htpc/.config/rclone/rclone.conf"
Sep 25 23:37:16 htpc rclone[252704]: DEBUG : B2: detected overridden config - adding "{BBBBB}" suffix to name
Sep 25 23:37:17 htpc rclone[252704]: DEBUG : Couldn't decode error response: EOF
Sep 25 23:37:17 htpc rclone[252704]: DEBUG : fs cache: renaming cache item "B2:Bucket/path" to be canonical "B2{BBBBB}:Bucket/path"
Sep 25 23:37:17 htpc rclone[252704]: DEBUG : vfs cache: root is "/home/htpc/RCloneCache"
Sep 25 23:37:17 htpc rclone[252704]: DEBUG : vfs cache: data root is "/home/htpc/RCloneCache/vfs/B2{BBBBB}/Bucket/path"
Sep 25 23:37:17 htpc rclone[252704]: DEBUG : vfs cache: metadata root is "/home/htpc/RCloneCache/vfsMeta/B2{BBBBB}/Bucket/path"
Sep 25 23:37:17 htpc rclone[252704]: DEBUG : Creating backend with remote "/home/htpc/RCloneCache/vfs/B2{BBBBB}/Bucket/path"
Sep 25 23:37:17 htpc rclone[252704]: DEBUG : Creating backend with remote "/home/htpc/RCloneCache/vfsMeta/B2{BBBBB}/Bucket/path"
After this, it goes on to work on specific files stored in the mount.
I know this changed at some point over the last year or so, and my memory was that it involved an environment variable overriding something, so that's what is confusing me.
You're right that it seems config-related. Maybe more config than I expected (not just what's in the rclone config file). In the past few days I did change the following mount parameters (due to that other issue):
And even though I haven't touched any --vfs-* parameters, this seems to suggest that any filesystem-related config could affect it (I haven't gotten to the bottom of it yet; I'm just making assumptions for now).
Yeah, based on this comment, it seems that overrides can come from command-line flags too:
// Overridden discovers which config items have been overridden in the
// configmap passed in, either by the config string, command line
// flags or environment variables
That said, I'm struggling to follow this any deeper to find out exactly which arguments can have this effect. Also, this could still be a bug, in that the canonical suffix wasn't supposed to change as a result of my actions.
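To illustrate what I think is going on (this is only a sketch of the idea, not rclone's actual algorithm, hash function, or option names): the set of overridden options gets fingerprinted, and the fingerprint becomes the {XXXXX} suffix, so changing any overridden option silently moves the cache path:

```shell
#!/bin/sh
# Illustration only -- made-up option strings and md5 stand in for
# whatever rclone actually fingerprints. The point: two different sets
# of overrides yield two different suffixes, hence two cache trees.
mount_a='upload_concurrency=4'
mount_b='upload_concurrency=16'
suffix_a=$(printf '%s' "$mount_a" | md5sum | cut -c1-5)
suffix_b=$(printf '%s' "$mount_b" | md5sum | cut -c1-5)
echo "vfs/B2{$suffix_a} vs vfs/B2{$suffix_b}"
```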
I guess I would just need to figure out which cache dir is in use now and delete the other 2? Or stop the mount, copy the biggest cache into the dir being used, and start it again? I'm going to hold off on messing with it until we're more certain.
I think it just renames the remote, but the cache dir changing along with it causes a problem; I think that's a bug / unintended impact (maybe). There might be a valid reason not to share the cache across certain parameter sets, but I've not seen anything like that prior to today, so it's new to me as well.
Another part of the reason why it's a real issue: some files that were added to the mount but haven't had a chance to be uploaded yet are now stuck in the old cache dir, so they will never get uploaded. I'll have to add them to the mount again manually to recover them.
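To at least find those stuck files, one local-only approach is to diff the file lists of the stale and live cache trees (sketch below with stand-in directories; a more thorough check would compare the stale tree against the remote itself, e.g. with rclone check):

```shell
#!/bin/sh
# Stand-in directories for the stale ("old") and live ("new") vfs trees.
set -e
root=$(mktemp -d)
mkdir -p "$root/old" "$root/new"
touch "$root/old/stuck-upload.mkv" "$root/old/synced.mkv" "$root/new/synced.mkv"
( cd "$root/old" && find . -type f | sort ) > "$root/old.txt"
( cd "$root/new" && find . -type f | sort ) > "$root/new.txt"
# Lines unique to the first list = files present only in the stale cache.
only_old=$(comm -23 "$root/old.txt" "$root/new.txt")
echo "$only_old"
rm -rf "$root"
```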