I'm having a timeout problem with rclone rc, so I wanted to know whether running "ls -R /mnt/" will achieve the same result, and if not, what the differences are?
I mean, it will force uncached directories to be cached, and already-cached directories will be 'refreshed' to have the specified 'dir-cache-time' again, right?
Let's say I have "dir-cache-time 24h" and every 24h at midnight I run "ls -R /mnt/" as a cron job. Will this keep my directory cache updated and fast across all directories?
(I should probably add an --async flag to rclone rc)
It is a little more efficient to use the vfs/refresh command, as rclone can use --fast-list for the listing, but a recursive listing will have much the same effect.
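For comparison, the two approaches look roughly like this (a sketch assuming the mount was started with --rc on the default localhost:5572, and that /mnt/ is the mountpoint):

```shell
# Ask the mount's VFS layer to re-read the whole directory tree;
# on remotes that support --fast-list this can take far fewer API calls
rclone rc vfs/refresh recursive=true

# The plain-filesystem equivalent: walking the tree forces every
# directory to be listed and therefore cached
ls -R /mnt/ > /dev/null
```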
I think so. It used to work fine until a few weeks ago, when it stopped. It used to take about 5 to 10 minutes to get the 'OK' output. With 'ls -R' it takes less than 3 minutes.
I'll take a look at it later. I'll keep this post open, even though I got the answer I wanted, so I can post my results using async.
Well, I might stick with 'ls -R' because it's much easier to run that one command on /mnt/ with multiple rclone mounts at the same time (instead of running multiple "vfs/refresh" commands against the different mounts on different ports). I could even remove the 'rc' flag from the mounts entirely; it's probably a much cleaner way. But that won't stop me from trying out 'async' on a different machine.
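To illustrate the trade-off: with several mounts each exposing its own rc port, the refresh has to be issued once per mount, whereas a single recursive listing covers everything under /mnt/. A sketch; the ports and paths here are placeholders:

```shell
# One vfs/refresh call per mount, each addressed by its own rc URL
rclone rc --url http://localhost:5572/ vfs/refresh recursive=true
rclone rc --url http://localhost:5573/ vfs/refresh recursive=true

# Versus a single walk that touches every mount under /mnt/
ls -R /mnt/ > /dev/null
```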
@ncw, can you confirm: let's say a directory has 5 hours left before it becomes uncached, and then I access it. Will it refresh automatically (even though I'm accessing the cache) and reset to 24h (given by the --dir-cache-time flag)? Or does it only refresh once those 5 hours end, so the next access is slow (fetching the directory from Google) and re-caches it afterwards?
The latter. So if you access it before 5 hours have expired then it will read it from the cache. The time has to have expired before the directory is re-read.
And I thought something was wrong because the job finished in 0.3 seconds. Then it clicked: I hadn't set it to be recursive, so I added "recursive=true" to the command, but now I'm stuck on "Failed to rc: can't use --json and parameters together".
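That error comes from mixing the two ways of passing arguments to rclone rc; either form on its own should work (a sketch):

```shell
# Everything inside the JSON object...
rclone rc --json '{"recursive": true, "_async": true}' vfs/refresh

# ...or no --json at all, with plain key=value parameters instead
rclone rc vfs/refresh recursive=true _async=true
```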
Apparently with _async, "The job can be queried for up to 1 minute after it has finished". Can this be changed ?
I want to make sure it ends with the "OK" result, but I'm currently missing all of the results because the job takes 5-10 minutes and I never know when it has finished; I'm always late when I check, and I get "error": "job not found" because it finished more than a minute ago.
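For reference, a job started with _async can be polled while the rc server is still holding its result. A sketch; the jobid here is a placeholder for whatever the initial reply returned:

```shell
# Start the refresh in the background; the reply contains a job id,
# e.g. { "jobid": 1 }
rclone rc vfs/refresh recursive=true _async=true

# Poll the job by id; the reply includes "finished" and the output
rclone rc job/status jobid=1

# List the ids of all running (and recently finished) jobs
rclone rc job/list
```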
$ rclone help flags job
Usage:
rclone help flags [<regexp to match>] [flags]
Flags:
-h, --help help for flags
Global Flags:
--rc-job-expire-duration duration expire finished async jobs older than this value (default 1m0s)
--rc-job-expire-interval duration interval to check for expired async jobs (default 10s)
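Going by that flag, the one-minute window is widened on the side running the rc server, i.e. the mount process, not the rclone rc client. A sketch with placeholder remote and mountpoint names:

```shell
# Keep finished async job results queryable for 1 hour instead of 1 minute
rclone mount remote: /mnt/remote \
  --rc \
  --rc-job-expire-duration 1h \
  --dir-cache-time 24h
```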
So far so good with rclone rc --json '{"_async": true, "recursive": "true"}' vfs/refresh on Linux, but just now I tried the exact same command on Windows and it outputs "Failed to rc: can't use --json and parameters together".
No, it is the Windows shell, which doesn't understand ' as quotes. You'll have to use " and then look up how to escape the embedded " on Windows, because I can't remember!
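For what it's worth, the usual trick on cmd.exe is to wrap the JSON in double quotes and backslash-escape the embedded ones (an untested sketch), though passing plain key=value parameters sidesteps the quoting entirely:

```shell
# cmd.exe: escape the inner double quotes with backslashes
rclone rc --json "{\"_async\": true, \"recursive\": true}" vfs/refresh

# Or avoid JSON quoting altogether with plain parameters
rclone rc vfs/refresh recursive=true _async=true
```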