Mounted disk won't appear in Windows 11 while the cache still needs to finish uploading

When I restart Windows 11, if there are still files from the previous session waiting to be flushed from the local cache to the remote, the mounted disk does not appear in Explorer or on the command line until all files are flushed. I confirmed this by looking at the logs.
This happens even if the cache is configured to be far larger than the files still waiting to be flushed.
Is there a way to avoid this behavior and have the disk available immediately when mounted, regardless of the cache status?

No, there isn't, as it has to check the files pending upload before it can start, or you'd risk data corruption / missing files.

Thank you for the quick and kind reply.
I really hope this will change in the future because it does not make sense. rclone insists on completing the cache flush before making the remote available only when I re-mount it; if it is already mounted, the remote is of course available even while there are still files to flush. So there is no reason for a different behavior at mount time.

Sure there is a reason, as it isn't uploading first; you have it a bit confused. This is why we ask for the debug log, which is super helpful: you can see what's happening in it, and I can show you if you aren't sure what you are looking at. That is exactly why we have the help and support template and really ask people to use it.

If there is a change on the remote, the local cache is invalid, so it has to check those files against the remote to validate there aren't new ones.

If you don't care about the data, you can always manually remove the cache if it's invalid.
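If you do clear it manually, here is a minimal sketch, assuming a POSIX shell and using the cache path from this thread as an example (on Windows you would delete the equivalent folder after stopping the mount). Inside the `--cache-dir`, rclone keeps cached file data under `vfs/` and the matching metadata under `vfsMeta/`:

```shell
# Example path only -- match it to whatever you pass as --cache-dir.
CACHE_DIR="$HOME/.rclone-caches/me-u-crypt"

# Unmount / stop rclone first so nothing is writing to the cache, then
# remove the cached file data (vfs/) and its metadata (vfsMeta/).
# WARNING: this discards any files that were not yet uploaded.
rm -rf "$CACHE_DIR/vfs" "$CACHE_DIR/vfsMeta"
```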

Otherwise, if the data is important, it has to validate it before it gives you the mount, so data is not lost.

I am very grateful to whoever created this huge open source product, and obviously nothing is owed to anyone. Still, in a strictly constructive spirit, I think it is fair to point out that the way rclone behaves while it still has to flush files from the cache to the remote makes it effectively unusable.

It is very true that from the logs I can see everything working perfectly, with the cached files slowly migrating to the remote. At the same time, because the disk is not even visible to the operating system until this migration completes, I have to admit that rclone is unfortunately unusable for me. I have been waiting two days for the cache to finish transferring the 100,000-plus files, and for two days I have not been able to use my remote disk.

This is not the right way to manage a cache: with Google Drive and many other products, the disk stays available while the cache is working. Again, this is absolutely not a criticism. It is a constructive observation made in the hope that the product will benefit and improve. In the meantime it is unusable for me. Sorry, it's really a pity.

Without knowing what you are doing, we really can't offer any help or guidance.

That's why we have the template to grab that information to help facilitate a better answer for you.

Being an open source project, if it's implemented wrong or there is a better way to do it, feel free to contribute with a pull request or open some dialogue on what's a better way to do it. I can assure you a lot of thought goes into the way it's done and ensuring data consistency is high on the list.

Unfortunately, there are no details on your use case / OS / cloud remote / command / etc so without knowing more details, it's unlikely going to change much.

Here is the issue that's been around for a bit:

Just hoping this could be useful for the community.
I understand that this is open source software and that nothing has to be done for me. I am truly grateful to the people who made this huge open source project. Sincerely, I like it a lot, despite some problems.
Despite having been a programmer for many, many years, I'm sorry that at the moment I can't contribute to this project more than by providing this additional info.

What is the problem you are having with rclone?

On Windows 11 I mounted a remote with this command:

rclone mount me-u-crypt:/ P: --vfs-cache-mode full --no-console --cache-dir=C:\Users\<my-name>\.rclone-caches\me-u-crypt --vfs-cache-max-size 500G --fs-cache-expire-duration 180m --fs-cache-expire-interval 180m --vfs-cache-max-age 8760h --transfers 64 --buffer-size 64M --log-level INFO --log-file=D:\.rclone-caches\_logs\me-u-crypt.log --rc --rc-web-gui --rc-no-auth --rc-web-gui-update --rc-web-gui-no-open-browser --rc-addr localhost:6004

Then I dropped a directory with many files and subdirectories (almost 100,000) onto this remote with Windows Explorer. The long copy ran for some hours and completed without any error.
DURING THE COPY I WAS ABLE TO USE DISK P: (this remote) without any problem. I even ran multiple copies at the same time.

The day after, I turned off my PC. When I turned my PC on again and mounted the remote again with the same command shown above, drive P: was not visible in the operating system. Looking inside the logs, it seems that rclone is still completing the transfer of the many files I copied in the previous session.

Run the command 'rclone version' and share the full output of the command.

C:\Users\<my-name>> rclone version
rclone v1.61.1

  • os/version: Microsoft Windows 11 Pro 22H2 (64 bit)
  • os/kernel: 10.0.22621.1265 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.19.4
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount me-u-crypt:/ P: --vfs-cache-mode full --no-console --cache-dir=C:\Users\<my-name>\.rclone-caches\me-u-crypt --vfs-cache-max-size 500G --fs-cache-expire-duration 180m --fs-cache-expire-interval 180m --vfs-cache-max-age 8760h --transfers 64 --buffer-size 64M --log-level INFO --log-file=D:\.rclone-caches\_logs\me-u-crypt.log --rc --rc-web-gui --rc-no-auth --rc-web-gui-update --rc-web-gui-no-open-browser --rc-addr localhost:6004

The rclone config contents with secrets removed.

type = drive
client_id = <secrets-removed>
client_secret = <secrets-removed>
scope = drive
token = {"access_token":"<secrets-removed>","token_type":"Bearer","refresh_token":"<secrets-removed>","expiry":"2023-02-21T23:53:52.8911264+01:00"}
team_drive = <secrets-removed>
root_folder_id = 

type = crypt
remote = me-unlimited:/
password = <secrets-removed>
password2 = <secrets-removed>

A log from the command with the -vv flag

It's just a very, very long sequence of lines like the following, where files 2, 3, and 4 are my cached files:

2023/02/21 23:01:42 INFO  : <file-path-1>: Copied (new)
2023/02/21 23:01:42 INFO  : <file-path-2>: Copied (new)
2023/02/21 23:01:42 INFO  : <file-path-3>: vfs cache: upload succeeded try #1
2023/02/21 23:01:42 INFO  : <file-path-4>: Copied (new)
2023/02/21 23:01:42 INFO  : <file-path-4>: vfs cache: upload succeeded try #1

Ah ok - I was going to guess it was Google Drive.

Google Drive has a very sad limit of only creating about 2-3 files per second, and it's god-awful on the API when doing a lot of small files, so that makes a lot more sense.

What happens is that it's checking against those 100k files, and that takes forever.

You'd be better off with a workflow of not moving files via the mount and instead copying directly to the remote.

Use either rclone copy or rclone move directly to the remote, depending on your use case, as doing that on the mount is unfortunately very painful on a remount with Google Drive.
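For example, a sketch of that workflow (the local source path is a placeholder; the remote name is the one from this thread; `copy` leaves the source in place, `move` deletes it after a successful transfer):

```
rclone copy "D:\my-big-dir" me-u-crypt:/my-big-dir --transfers 8 --progress
```

This way the transfer is tracked by rclone itself rather than sitting in the VFS cache, so a reboot just means re-running the command and letting it skip what's already uploaded.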

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.