When an rclone mount with --vfs-cache-mode full is shut down, the next time it starts it looks into the cache directory. If it finds more than ~251 elements to upload, all uploads fail with "batcher is shutting down".
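For context, a mount of the kind described looks roughly like this (the remote name, mount point, and log path are hypothetical placeholders, not taken from the report):

```shell
# Sketch of a crypt-over-Dropbox mount with the full VFS cache mode,
# which persists pending uploads in the cache directory across restarts.
rclone mount dropboxcrypt: /mnt/dropbox \
    --vfs-cache-mode full \
    -vv --log-file=/tmp/rclone-mount.log
```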
Run the command 'rclone version' and share the full output of the command.
rclone --version
rclone v1.62.2
os/version: ubuntu 22.10 (64 bit)
os/kernel: 5.19.0-41-generic (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.20.2
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Dropbox, wrapped in a crypt remote
The command you were trying to run (eg rclone copy /tmp remote:tmp)
I don't know how to anonymize it, so this isn't the full log.
It finds files in the cache to upload and they get uploaded properly in the background, until the startup sequence finds more than around 251 elements to upload; then all uploads fail with this error:
2023/05/11 10:38:29 ERROR : fullPathto.file: Failed to copy: upload failed: batcher is shutting down
This sounds like a bug, but I'm going to need more log to diagnose.
Just add -vv to the rclone rcd command.
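For example, something along these lines (keep whatever flags you already use; the log path here is just an illustration):

```shell
# Same rcd invocation as before, with debug-level logging written to a file
# so the full startup/upload sequence can be shared for diagnosis.
rclone rcd -vv --log-file=/tmp/rclone-rcd-debug.log
```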
If you want to email me nick@craig-wood.com the log (or maybe a link to the log somewhere I can download it) I can take a look. Put a link to this forum article in the email so I've got some context - thanks!
The only workaround for now is to move these files out of the Dropbox mount, so the full file is in a temporary folder and safely removed from the rclone cache. When the queue is empty I can restart rclone and upload the files to the cloud without problems.
Looking at it, I can see the problem. The backend created by the rc mount/mount command shuts down too soon. This is easy to fix though - can you give this a go please?
It looks like it's working. I'm still uploading because there is a huge backlog; once finished I will confirm. How can I compare this release to the source code of the stable release to see which changes you made, in order to learn something?
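As a general sketch, differences between a test build and a stable release can be inspected with git by diffing the stable tag against the test build's commit (the `<commit>` placeholder is whatever commit hash the test build reports, e.g. in `rclone version` output; it is not specified in this thread):

```shell
# Clone the rclone source and diff the stable tag against the test build.
git clone https://github.com/rclone/rclone.git
cd rclone
git diff v1.62.2 <commit>
```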
It seems like the remote will not be mounted before all the files are uploaded. In the debug log I can see the progress of uploads in the queue, but there are no new messages like "queued for upload in 5s", which appear in the startup phase when the cache folder is read. So it looks like the cache folder has already been processed.
You were right. I had a grep filter on my tail -f of the logfile. The cache folder is still being processed. I set --transfers to 0 to speed that up.
Now it's 10-15x faster, showing messages in the log saying:
We've debated/discussed that particular issue before, as it's complex.
If you start before you can validate the data is consistent, you'd have problems. Not sure a good solution was found that doesn't have some potential data impact.
I normally have only a handful of files as it's not the size of the file per se, it's the number of files that have to be checked that makes it not start.
I agree with you 100% that it is a good idea to wait, to be consistent with the data. And the number of pending files is only so high because of the upload bug, so in daily usage it shouldn't be an issue. But it would be great to get some more information about how many files still need to be processed before the remote is mounted. Easy percentage calculation: already checked files divided by total files in cache = progress in %.
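The suggested calculation, as a quick sketch (the two counts are made-up example numbers; rclone does not currently report them, so they would have to come from the startup scan and the log):

```shell
# Hypothetical counts: files already checked vs. total files found
# in the VFS cache at startup.
checked=251
total=1000

# Integer percentage of the cache scan that is done.
progress=$(( checked * 100 / total ))
echo "${progress}%"    # prints "25%"
```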