hello and welcome to the forum,
rclone maintains a cache for the directory and file names.
normally, rclone builds that on the fly as you navigate into a folder.
let's say that i start my rclone mount and need emby to re-scan the entire thing to look for new media files.
that can take a very long time for rclone to navigate the entire dir/file structure of the remote.
so before i have emby scan do that scan, i have rclone pre-cache the file details.
now, when emby scans for new media, all that needed info is local on the machine and that scan goes very fast.
- add --rc to the mount command
- after the mount is running, do
rclone rc vfs/refresh recursive=true
- for the cache dir, use local:/temp/; unless there is a specific reason to use a remote, just use the local path.
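Putting those steps together, a minimal sketch (the remote name, mount point, and cache path below are placeholders, not anyone's actual config):

```shell
# step 1: start the mount with the remote control server enabled (--rc);
# "remote:", "X:", and "D:/temp" are example values
rclone mount remote: X: --rc --cache-dir D:/temp

# step 2: in a second window, once the mount is up, pre-cache the
# entire dir/file tree so a media scan reads local data instead
rclone rc vfs/refresh recursive=true
```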
The second step is to input this line, right?
rclone rc vfs/refresh recursive=true C:/temp cache/
By the way, does this "rc" command's temporary folder need to remain there after rclone closes in order to have that data again at the next rclone startup? The reason I am asking is that my temp drive is a RAM drive that gets wiped with every computer restart.
or just this line
rclone rc vfs/refresh recursive=true
run that command as given.
the dir cache info only exists while the rclone mount is running.
ram drive or not, each time you run rclone mount, you need to run
rclone rc vfs/refresh
however, the files that have been downloaded do survive a reboot.
that does not apply if using a ram disk.
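So in practice the refresh belongs in whatever script starts the mount on boot; a rough batch sketch, assuming a Windows cmd setup (remote name, drive letter, and the 15-second wait are placeholder values):

```shell
:: startup.bat - the dir cache has to be rebuilt on every boot,
:: since it only lives as long as the mount process
start "" rclone mount remote: X: --rc --no-console
:: give the rc server a moment to come up before refreshing
timeout /t 15
rclone rc vfs/refresh recursive=true
```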
Ok I will give it a try. Will mark your reply as solution if it works. Thanks.
Maybe I am doing this wrong, but: I applied my first line in a cmd window. That cmd window could then no longer accept new commands, so I opened another command window pointing to my rclone and applied the second line with the "rc" command.
If this was indeed the correct way to apply the second command, then it didn't work. My searches were just as slow as before.
not sure what that means?
please post a debug log
second command, then it didn't work
I meant the "rc" command line. The first line is just the mount command line.
C:\Users\A\Desktop\rclone>rclone rc vfs/refresh recursive=true -vv
2021/09/05 15:04:42 DEBUG : rclone: Version "v1.56.0" starting with parameters ["rclone" "rc" "vfs/refresh" "recursive=true" "-vv"]
2021/09/05 15:09:43 DEBUG : 2 go routines active
2021/09/05 15:09:43 Failed to rc: connection failed: Post "http://localhost:5572/vfs/refresh": net/http: timeout awaiting response headers
Press any key to continue . . .
I checked the option "Don't use the index when searching in file folders for system files (searches might take longer)"
That didn't improve search speed.
rclone rc command failed.
post the mount debug log; if it is very large, then post the top 20 lines.
as long as a command is running, the cmd window will not accept new commands.
to work around that, prefix the command with start
start rclone mount ....
might want to add
--no-console to the
rclone mount command.
Ok, before getting the debug log: I run a batch file for my commands. Maybe you can help me correct anything I am doing wrong.
start rclone mount --rc --vfs-cache-mode off --no-console --cache-dir D:/temp remote:/ C:/mount -vv
rclone rc vfs/refresh recursive=true -vv
looks like rclone is getting throttled by gdrive.
that could be the reason search using windows explorer is so slow.
have you done this? if not, you should do so and test again.
You have to use the search function via https://drive.google.com/ unless there are just a few files.
Windows Explorer has to crawl each file, which takes forever when you have many files, because Google limits it heavily to prevent abuse.
The throttling here seems to be happening while the refresh command is running. We need to know if it ever completes successfully. OP, you should see a status of "OK" at the end. Searching should not be slow at all once priming is done. I regularly search through tens of thousands of files and folders.
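One way to verify the refresh actually finishes is to watch for that status in the rc reply; assuming the JSON reply shape below (a sketch, not taken from the OP's logs):

```shell
:: a completed recursive refresh replies with something like:
::   { "result": { "": "OK" } }
:: a longer --timeout gives a slow remote room to finish
rclone rc vfs/refresh recursive=true --timeout 30m
```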
good point. the truth is, i made a choice to pretend not to notice.
that i would miss something we both know could not happen?
so it must be that i set a trap for you, yes, yes, now that i am thinking about it....
from a lurking vigilante in the background to a poster in public
i know of your experience with
vfs/refresh and gdrive. i deflected, made mention of the need for a client id, without mentioning that the pacer began after that
it all makes perfect sense to me!
now, back to reality.
for rclone vfs/refresh commands, there are no flags to deal with pacer issues, correct?
given that the OP has not posted the config, do you think having a client id would resolve that or should the OP add some flags?
It shouldn't be necessary to run with additional flags, but I do have a few:
rclone rc vfs/refresh recursive=true --drive-pacer-burst 200 --drive-pacer-min-sleep 10ms --timeout 30m --user-agent *******
I never have any issues running a refresh, but I also never look at any logs.
The above runs through roughly 820 TB in four minutes.
Well, my rclone config is all defaults.
The Google Drive I am connecting to is a shared drive, and it usually runs into bandwidth limits. From what is written in this thread, I guess I can do nothing about the slow search when Google is throttling my connection. It is kinda stupid that I cannot use Explorer to find files simply by file name.
that is why i do not use gdrive.
you can try the
vfs/refresh command that was shared by @VBB
I have tried the below with what @VBB suggested, but it seems the search speed did not improve.