Load the whole file tree of a just-mounted remote

What is the problem you are having with rclone?

My problem is that I want to use the newbie of the rclone family: Proton Drive.
I'm mounting it with the command:

rclone mount --dir-cache-time=1000h --vfs-cache-mode=full --vfs-cache-max-size=150G --vfs-cache-max-age=12h --vfs-fast-fingerprint myprotonremote: /some/empty/directory

However, whether with cd in a console or by clicking around in Dolphin, navigation is literally snail-slow. It takes ages to open the first folder (and it doesn't even contain that many things).

So I'm looking for a way, ideally a command to run in the CLI, to ask Linux to walk the whole file tree of the folder and index it (or whatever the correct term is), so that I can then browse normally.

The only ideas I've had so far were running mlocate's sudo updatedb command, which didn't change a thing, and running the tree command (I'm not sure it improved anything, but it sure took a hell of a long time to finish).

So if anyone has a (better) solution, or any solution at all, please comment!


Run the command 'rclone version' and share the full output of the command.

> rclone version

rclone v1.64.2
- os/version: opensuse-tumbleweed (64 bit)
- os/kernel: 6.5.8-1-default (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: dynamic
- go/tags: none
> rclone config redacted

type = protondrive
username = chrysostome.lanoire
password = XXX
mailbox_password = XXX
client_uid = XXX
client_access_token = XXX
client_refresh_token = XXX
client_salted_key_pass = XXX
### Double check the config for sensitive info before posting publicly

I couldn't find a way to reliably show you an output of rclone -vv [...].


welcome to the forum,

did you see this in the rclone docs?

--log-level=DEBUG --log-file=./rclone.log

ls -R /some/empty/directory

or to do the rclone way,

  1. check out my summary of the two rclone mount caches

  2. pre-warm the vfs dir cache.

  • to the mount command add
    --rc --rc-no-auth
  • after the mount is live, then run
    rclone rc vfs/refresh recursive=true
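Put together, the whole sequence might look like the sketch below. It only uses the remote name and mountpoint from the original question; adjust both, and any extra vfs flags, to your own setup.

```shell
#!/usr/bin/env bash
# Sketch: mount with the remote-control server enabled, then pre-warm
# the vfs dir cache once the mount is live. Remote name and mountpoint
# are the ones from the question -- adjust to your setup.

rclone mount myprotonremote: /some/empty/directory \
  --dir-cache-time=1000h --vfs-cache-mode=full \
  --rc --rc-no-auth &

# wait until the kernel reports the path as a live mountpoint
until mountpoint -q /some/empty/directory; do
  sleep 1
done

# walk the whole remote once so later browsing is served from the dir cache
rclone rc vfs/refresh recursive=true
```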

Thanks a lot. But I can't figure out how to use the last command: when I run it, I get:

> rclone rc vfs/refresh recursive=true                                

Failed to rc: connection failed: Post "http://localhost:5572/vfs/refresh": dial tcp [::1]:5572: connect: connection refused

Am I misunderstanding what you are suggesting?


  1. if you are running the rclone mount command, then kill it.
  2. re-run your mount command, adding --rc --rc-no-auth -vv
  3. wait for the mount to be live
  4. run rclone rc vfs/refresh recursive=true -vv

and if any of that does not work, you must post the full output of both commands.
the full output, ok?

that is nice and easy, did you try that?
after running that, cd and Dolphin should perform better?

It's not that it doesn't work, just that it takes ages just to mount, let alone browse. I appended your suggestion to my mount command; unfortunately, I can't give you the output because the names of my files and directories contain a lot of personal information.

TL;DR: no improvement with --rc --rc-no-auth -vv. As for rclone rc vfs/refresh recursive=true -vv, I only had time to take a glimpse, but it doesn't look like it's doing anything either...

It definitely does.

Without a log file, it's unknown if you did something wrong or not.

All right. We'll have to wait some time though, I'm piping the output of the first command and it's already been 15min, and it's still running.

I'll try the second one as soon as the first one is over.

What's your new mount command?

What was the output of this?

I'm sorry, it takes a while to upload my data to the Drive (I have almost 1M files ^^'), and I need to wait for that to finish before I can properly test rclone's behavior.

I'll come back to you as soon as it's done! (probably tomorrow or the day after).


just curious, why use proton versus the many, many other providers?
what makes it unique and worth the risks?

rclone support is alpha/beta and relies on multiple third-party libraries. per the docs, "Proton Drive doesn't publish its API documentation", the backend was built by "observing the Proton Drive traffic in the browser", and since the "Proton Drive protocol has evolved over time there may be accounts it is not compatible with"

So: I tried mounting one of my remotes with --rc --rc-no-auth, and running rclone rc vfs/refresh recursive=true after that, and it worked beautifully, the syncing with the Drive was incredibly fast both ways. Now my problem is that I have several folders to mount, and what you told me doesn't allow me to do that:

Failed to start remote control: failed to init server: listen tcp bind: address already in use

What's the solution to this problem? (I guess it must be simple, like giving different names or ports...?)

EDIT: I got a little carried away by my first impression: it still takes more than several minutes for a small PDF to go from the Drive to the mount point...

Think of that number (the port) as needing to be unique: on a given server, you can only use it once.

For another mount, just increment it by one and change the refresh command to use the new port. So pick 5573.

rclone rc vfs/refresh recursive=true --url _async=true

for a mount with

# This sets up the remote control daemon so you can issue rc commands locally
--rc \
# This is the default port it runs on
--rc-addr \
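As a concrete sketch of the above (ports, mountpoints, and the remote name here are placeholders): two simultaneous mounts each need their own rc port, and the refresh command is then pointed at the right daemon with --url:

```shell
# first mount keeps rclone's default rc port, 5572
rclone mount proton: /mnt/proton-one --rc --rc-no-auth \
  --rc-addr=localhost:5572 &

# a second mount must bind a different port, e.g. default + 1
rclone mount proton: /mnt/proton-two --rc --rc-no-auth \
  --rc-addr=localhost:5573 &

# target each daemon explicitly when refreshing
rclone rc vfs/refresh recursive=true --url http://localhost:5572
rclone rc vfs/refresh recursive=true --url http://localhost:5573
```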

Thank you. It works; however, navigation in the mounted folder is still very, very slow, which is exactly what I was trying to avoid in this post initially... Do you have any other tips to speed it up?

Anyway, thank you for your help so far.

set --dir-cache-time to some high value - as long as directory data is in the cache, browsing will be instant.

and BTW - can you stop posting the same/similar issue multiple times? you keep asking almost the same question, which makes it difficult to follow

I'm sorry, my questions seem alike but are not really related; I'm trying to avoid making a mess of a single thread...

So, here's my last question. Command:

rclone mount --dir-cache-time=1000h --vfs-cache-mode=full --vfs-cache-max-size=150G --vfs-cache-max-age=12h --vfs-fast-fingerprint proton: /home/tome/Proton --rc --rc-addr &

It works well, and I noticed I can create files on both ends (online or in the mounted folder) and the change propagates quite fast, but if I try to delete something (on either end), the change... never happens on the other end?

If you can solve this I'm good to go, so please, one last for me!


NB: I've been trying to solve this problem in this other post as well; here's my last comment...

I'm very sorry for the mess and the duplicates; I've been doing this all day for more than a week, and with fatigue comes confusion.

so far, it seems to be working for me.
if i delete a file in the remote, it is reflected in the mount.

please, going forward, create some sort of reproducible test and post the output.

rclone mount proton01: b:\rclone\mount\proton01 --rc --rc-no-auth --dir-cache-time=1000h --vfs-cache-mode=full --cache-dir=d:\rclone\cache\proton01 --config=c:\data\rclone\rclone.conf -vv
#copy file to remote
rclone copy d:\files\file.ext proton01:zork -v --stats-one-line 
INFO  : file.ext: Copied (new)

#without refresh, file should **NOT** appear in mountpoint
rclone ls b:\rclone\mount\proton01\zork 

#refresh the vfs dir cache
rclone rc vfs/refresh dir=/zork

#after refresh, file should appear in mountpoint
rclone ls b:\rclone\mount\proton01\zork 
       17 file.ext

#delete file from remote
rclone delete proton01:zork -v 
INFO  : file.ext: Deleted

#without refresh, the deleted file should still appear in mountpoint
rclone ls b:\rclone\mount\proton01\zork 
       17 file.ext

#refresh the vfs dir cache
rclone rc vfs/refresh dir=/zork

#after refresh, file should **NOT** appear in mountpoint
rclone ls b:\rclone\mount\proton01\zork 

Sorry, what are those remotes? "proton01", "a", "b", "c", "d" ?
I don't understand at all what I'm supposed to do...?

Edit: I wrote this script (meant to be run from inside the directory where something has been deleted):


echo $PWD > .temp
sed -i 's/\/home\/myusername\/my-drive-root-mount-folder\///g' .temp
chemin="$(cat .temp)"
rm .temp
rclone rc vfs/refresh recursive=true --url dir="$chemin"

... and it seems to work. Are there any improvements I could make? (it is quite slow, even for a directory without too many files)

Edit 2: it seemed to work only once for a given directory: I deleted more files, launched it again, it took less than a second to run, and the files were still there.
Never mind: I had modified the script in a stupid way, making it wait for an input while supplying none, and exit when none was given. So, back to edit 1: it works, but can I make it better?
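For what it's worth, the prefix-stripping in the script above can be done with bash parameter expansion instead of a temp file and sed; a sketch, with the mount root as a placeholder for the real path (the --url flag from the original, pointing at the mount's rc address, would go on the rclone line as before):

```shell
#!/usr/bin/env bash
# Refresh the vfs dir cache for the current directory only.
# mount_root is a placeholder -- set it to the real mountpoint.
mount_root="/home/myusername/my-drive-root-mount-folder"

# strip the mount-root prefix from $PWD; no temp file or sed needed
chemin="${PWD#"$mount_root"/}"

rclone rc vfs/refresh "dir=$chemin" recursive=true
```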

Thanks for your patience. I'm sorry I've not been using this forum properly so far, and please tell me if I break any rule (even an implicit one) in the future.

no, i do not think so. in my testing, proton is very, very, very slow, and it does not support ListR, so as a result --fast-list does nothing.

you would get much better results using a polling remote such as gdrive; it supports both polling and ListR, so that would be the best choice. then perhaps onedrive, which supports polling but not ListR.
else, a fast non-polling remote that supports ListR, such as the S3 providers; wasabi is super fast for api calls, and recently i have been testing idrive, which seems fast too.
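one way to check this sort of thing up front: rclone can print a backend's optional-feature map, which (if i recall the command correctly) includes ListR and ChangeNotify (polling). a sketch, assuming a configured remote named proton01:

```shell
# print the remote's optional-feature map and pick out the two
# that matter here: ListR (fast recursive listing) and
# ChangeNotify (polling)
rclone backend features proton01: | grep -E '"(ListR|ChangeNotify)"'
```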

sure. no problem.
next time, try to follow my posted example; it was very simple, documented, used only rclone commands and output, and it disproved your statement about file deletes not being propagated.