Quick question about rclone rc refresh

I'm having a timeout problem with rclone rc, so I wanted to know if running "ls -R /mnt/" will achieve the same result, and if not, what are the differences?

I mean... it will force uncached directories to be cached, and already-cached directories will be 'refreshed' to get the specified 'dir-cache-time' again, right?
Let's say I have "--dir-cache-time 24h" and every 24h, at midnight, I run "ls -R /mnt/" as a cron job. Will this keep my directories updated and fast across the whole tree?
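The cron job described above would be something like this (a sketch, assuming the mount lives at /mnt/):

```shell
# Hypothetical crontab entry: list the whole mount at midnight so every
# directory is (re)read and cached for the next 24h window. Output is
# discarded since only the caching side effect matters.
0 0 * * * ls -R /mnt/ > /dev/null 2>&1
```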


Is that because the rc command takes too long? You could try adding the _async flag: https://rclone.org/rc/#running-asynchronous-jobs-with-async-true - you'll have to write your query as JSON.

(I should probably add an --async flag to rclone rc)

It is a little more efficient to use the vfs/refresh command, as rclone can use --fast-list for the listing, but a recursive listing will have much the same effect.
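For comparison, the two approaches look like this (a sketch; it assumes the mount was started with --rc so the remote-control server is listening on the default localhost:5572):

```shell
# Refresh the whole directory tree via the rc API; rclone can use --fast-list
# here, which typically needs far fewer API calls on backends that support it.
rclone rc vfs/refresh recursive=true

# Much the same effect, done by walking the mounted filesystem
# directory by directory instead.
ls -R /mnt/ > /dev/null
```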

I think so. It used to work fine until a few weeks ago, when it stopped. It used to take about 5 to 10 minutes to get the 'OK' output. With 'ls -R' it takes less than 3 minutes.

I'll take a look at it later. I'll keep this post open, even though I got the answer I wanted, so I can post my results using async.

Well, I might stick with 'ls -R' because it's much easier to run that command on /mnt/ with multiple rclone mounts at the same time (instead of running multiple "vfs/refresh" commands against the different mounts on different ports). I could even remove the 'rc' flag from the mounts completely; it's probably a much cleaner way. But that won't stop me from trying out 'async' on a different machine.

@ncw, can you confirm something? Let's say a directory has 5 hours left before it becomes uncached, and then I access it. Will it "refresh automatically" (even though I'm accessing the cache) and reset to 24h (given by the --dir-cache-time flag)? Or does it only refresh when those 5 hours end, so the next "access" will be slow (getting the dir from Google) and cache it again afterwards?


The latter. So if you access it before the 5 hours have expired, it will be read from the cache. The time has to have expired before the directory is re-read.

I think you should, because I'm really confused about how to apply _async to the "rclone rc vfs/refresh" command. I've read https://rclone.org/rc/#running-asynchronous-jobs-with-async-true and I'm clueless.

This is what I tried:


And I thought something was wrong because the job finished in 0.3 seconds. Then it clicked: I didn't set it to be recursive. So I added "recursive=true" to the command, but now I'm stuck on "Failed to rc: can't use --json and parameters together".

How can I set async and recursive together?

The job will finish instantly because it runs in the background.

You put it in the JSON as "recursive": true, so something like

{"_async": true, "recursive": true}

That outputs this error:

Looks like it needs to be "recursive": "true" due to the strange way the arguments get parsed in vfs/refresh

~~# rclone rc --json {"_async": true, "recursive": "true"} vfs/refresh~~
~~2020/03/23 15:25:59 Failed to rc: can't use --json and parameters together~~

edit: I wrote the command wrong.

I assume it's working now. Thanks!

root:~# rclone rc --json '{"_async": true, "recursive": "true"}' vfs/refresh
        "jobid": 39
root:~# rclone rc --json '{ "jobid":39 }' job/status
        "duration": 0,
        "endTime": "0001-01-01T00:00:00Z",
        "error": "",
        "finished": false,
        "group": "job/39",
        "id": 39,
        "output": null,
        "startTime": "2020-03-23T15:28:09.01264194Z",
        "success": false

That looks like it is working! If you re-run the job/status then eventually you'll see the job complete.
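If you don't want to re-run job/status by hand, a small polling loop can wait for the job to finish (just a sketch; it assumes jq is installed and the rc server is reachable on the default localhost:5572):

```shell
#!/bin/sh
# Start the refresh as a background (async) job and capture its id.
jobid=$(rclone rc --json '{"_async": true, "recursive": "true"}' vfs/refresh | jq -r '.jobid')

# Poll job/status until "finished" flips to true, then print the final status.
while [ "$(rclone rc --json "{\"jobid\": $jobid}" job/status | jq -r '.finished')" != "true" ]; do
    sleep 30
done
rclone rc --json "{\"jobid\": $jobid}" job/status
```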

New methods, new questions!

Apparently with _async, "The job can be queried for up to 1 minute after it has finished". Can this be changed?

I want to make sure it ends with the "OK" result, but I'm currently missing all of the results because the job takes 5-10 minutes and I never know when it has finished. I'm always late when I check and get "error": "job not found" because it finished more than a minute ago.


$ rclone help flags job
  rclone help flags [<regexp to match>] [flags]

  -h, --help   help for flags

Global Flags:
      --rc-job-expire-duration duration   expire finished async jobs older than this value (default 1m0s)
      --rc-job-expire-interval duration   interval to check for expired async jobs (default 10s)
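Those flags belong to the process running the rc server, i.e. the mount itself, so the invocation would look something like this (a sketch; "remote:" and the mountpoint are placeholders):

```shell
# Keep finished async jobs queryable for 30 minutes instead of the default 1m,
# so a 5-10 minute vfs/refresh can still be checked after it completes.
rclone mount remote: /mnt/ --rc --rc-job-expire-duration 30m --daemon
```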

So far so good with rclone rc --json '{"_async": true, "recursive": "true"}' vfs/refresh on Linux, but just now I tried the exact same command on Windows and it outputs Failed to rc: can't use --json and parameters together

Is this a bug? :thinking:

No, it is the Windows shell, which doesn't understand ' as quotes. You'll have to use " and then look up how to escape the embedded " on Windows, because I can't remember!


input: rclone rc --json "{"_async": true, "recursive": "true"}" vfs/refresh
output: Failed to rc: bad --json input: invalid character '_' looking for beginning of object key string

The solution to this is:

rclone rc --json "{\"_async\": true, \"recursive\": \"true\"}" vfs/refresh

and I checked it with:

rclone rc --json "{ \"jobid\":11 }" job/status
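The key is that, after the shell strips the escapes, rclone must receive the same JSON text on both platforms. On a POSIX shell you can check what actually gets passed with echo:

```shell
# Backslash-escaped quotes inside double quotes survive the shell, so this
# prints the exact JSON string rclone will receive.
json="{\"_async\": true, \"recursive\": \"true\"}"
echo "$json"
# → {"_async": true, "recursive": "true"}
```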

That works on Windows, does it? It would work on Linux too, I think.

Yes, it works on both Windows and Linux!


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.