I'm mounting a crypt remote (Backblaze B2) via systemd. This works fine most of the time, but sometimes a file gets corrupted (truncated to 0 bytes) during a save.
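For reference, a simplified sketch of the mount command the systemd unit runs (the remote name, paths, and flag values here are placeholders for illustration, not my exact unit):

```shell
# sketch of the ExecStart command in the systemd unit
# "b2crypt:", the mount point and the log path are placeholders
rclone mount b2crypt: /home/user/mnt/b2 \
    --vfs-cache-mode writes \
    --log-level DEBUG \
    --log-file /var/log/rclone-b2.log
```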
What is your rclone version (output from rclone version)?
The remote is mounted via systemd; this is the output of the systemd log:
May 18 20:33:01 mylaptop rclone[1865]: ERROR : testdir/mykeepass.kdbx: vfs cache: failed to open item: internal error: item "testdir/mykeepass.kdbx" already open in the cache
May 18 20:33:01 mylaptop rclone[1865]: ERROR : testdir/mykeepass.kdbx: Non-out-of-space error encountered during open
May 18 20:46:31 mylaptop rclone[1865]: ERROR : testdir/mykeepass.kdbx: vfs cache: failed to open item: internal error: item "testdir/mykeepass.kdbx" already open in the cache
May 18 20:46:31 mylaptop rclone[1865]: ERROR : testdir/mykeepass.kdbx: Non-out-of-space error encountered during open
May 18 20:52:49 mylaptop rclone[1865]: ERROR : testdir/mykeepass.kdbx: vfs cache: failed to open item: internal error: item "testdir/mykeepass.kdbx" already open in the cache
May 18 20:52:49 mylaptop rclone[1865]: ERROR : testdir/mykeepass.kdbx: Non-out-of-space error encountered during open
May 19 10:41:34 mylaptop rclone[1865]: ERROR : testdir/mykeepass.kdbx: vfs cache: failed to open item: internal error: item "testdir/mykeepass.kdbx" already open in the cache
May 19 10:41:34 mylaptop rclone[1865]: ERROR : testdir/mykeepass.kdbx: Non-out-of-space error encountered during open
So it seems it's the VFS cache that's causing the issue: "already open in the cache".
Ok, thanks, I removed them.
I thought they were also related to the VFS cache and wanted to avoid issues in case I use multiple rclone mounts on the same system.
All right, so that's probably the reason why I'm still seeing a high number of class C transactions with B2.
According to the docs this is a global flag, so I thought it also applied to a mount.
Done. I'll report back with the full log when it happens again.
I know a reproducer would be helpful but unfortunately I haven't found a way to reproduce it yet.
I simply noticed it because at some point my entire KeePass database was suddenly gone, right after I added a new entry.
What happened is that after adding the new entry, the kdbx file was auto-saved but got corrupted. Luckily I had a recent backup of the file; after restoring it, I re-added the new entry and this time the auto-save worked just fine (as usual).
I regularly add or change entries in the KeePass database, and from time to time it gets corrupted and I have to restore a backup.
I had a look at the code and I think it might be a race between the cache cleaner and opening files, but I couldn't see an obvious problem.
I tried writing a program to make it happen but I didn't succeed yet.
Can you try with 1.55.1 please? That had an important fix in the VFS layer which might be relevant (or might not), so it would be worth running it to see if it fixes the issue.
If you could capture a log with -vv of this event happening, it would be nearly as good as a reproduction.
I appreciate the log might be a) large and b) contain secret stuff, so instead of posting a link here you might like to private-message me a link or email it to nick@craig-wood.com
Thanks for taking the time to investigate the issue in more detail.
Actually I just upgraded to 1.55.1 because I ran into another issue, related to an SFTP backend, which might also be resolved in this latest version.
The -vv option is already enabled, as that's what Animosity022 suggested in an earlier post.
If it happens again with 1.55.1 I will definitely provide you the detailed (-vv) log.
I'll PM you a link to the logfile.
For privacy reasons, however, I had to strip the logfile because it contained a lot of (very) sensitive customer data. So I removed all the older, non-relevant log entries and kept only the entries around the time the issue happened.
That looks fine. All I really need are the log lines that mention keepass and I think they are all there.
At first glance I see that KeePass writes to a temporary file then renames it. I bet that is the cause of the problem: there are an unbelievable number of corner cases involved in renaming stuff!
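For context, this is the write-temp-then-rename idiom in question (a sketch of the pattern, not KeePass's actual code; the filename is a placeholder):

```shell
# save-via-rename idiom: write the new contents to a temporary
# file, then rename it over the original so the save appears
# atomic to readers - it is this rename that exercises the
# VFS corner cases
db="mykeepass.kdbx"
printf 'new database contents' > "$db.tmp"
mv "$db.tmp" "$db"
cat "$db"
```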
Not sure of your exact use case, but I depend heavily on KeePass and try to be paranoid about it.
I would not use rclone mount, as rclone copy/sync/move is more reliable.
I want to get the database backed up as quickly and reliably as possible.
So each time the database is saved, I have a KeePass trigger that:
- copies the database to a local backup server
- 7zips it with a strong password and copies it to Wasabi
- copies the database to another crypt on OneDrive
Each command uses --backup-dir with a date.time stamp, for forever-forward incremental backups, and I tweak the bucket policies to prevent deletions.
This would be a simplified example:
rclone.exe copy "C:\Users\user01\AppData\Local\Temp\keepass\zip\keepass.20210317.100528.7z" "wasabicryptbt:en07/keepass/zip/backup" --backup-dir=wasabicryptbt:en07/keepass/rclone/archive/20210317.100528 --log-level=DEBUG --log-file=C:\data\rclone\logs\keepass_wasabicryptbt_files_zip\20210317.100528\rclone.log
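The 20210317.100528-style stamp in those paths can be generated at trigger time; a minimal sketch in a unix-y shell (the Windows trigger does the equivalent, and the remote path is a placeholder):

```shell
# build a yyyymmdd.hhmmss stamp so each run archives replaced
# files into a fresh dated --backup-dir folder
stamp=$(date +%Y%m%d.%H%M%S)
echo "wasabicryptbt:en07/keepass/rclone/archive/$stamp"
```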