Precaching with vfs/refresh fails with an error when there are multiple cloud drives

What is the problem you are having with rclone?

I would like to pre-cache the cloud drives. I am following the instructions at https://rclone.org/rc/#vfs-refresh

Command "rclone rc vfs/refresh" fails with an error: Unknown key "fs"
I need to supply an fs, because there are multiple cloud drives. What am I missing here?

What is your rclone version (output from rclone version)

rclone v1.53.2

  • os/arch: linux/amd64
  • go version: go1.15.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Docker on QNAP

Which cloud storage system are you using? (eg Google Drive)

Various (Google Drive, Dropbox, pCloud)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone rc vfs/refresh fs=dropbox: recursive=true --log-file=log.txt -vv

Adding a dir parameter does not make a difference either.
And the fs does exist. It is the same one used in the mount, and when I use one that does not exist I get a different error: no VFS found with name

A log from the command with the -vv flag

2020/11/06 12:41:54 DEBUG : rclone: Version "v1.53.2" starting with parameters ["rclone" "rc" "vfs/refresh" "fs=dropbox:" "recursive=true" "--log-file=log.txt" "-vv"]
2020/11/06 12:41:54 DEBUG : 4 go routines active
2020/11/06 12:41:54 Failed to rc: Failed to read rc response: 500 Internal Server Error: {
        "error": "unknown key \"fs\"",
        "input": {
                "fs": "dropbox:"
        },
        "path": "vfs/refresh",
        "status": 500
}

I'm not familiar with the fs option, but you could try specifying a different rc port for each of your mounts.

Use the --rc-addr option to set it.
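Roughly like this, assuming one rclone mount process per remote (the ports and paths here are just placeholders):

# one mount process per remote, each with its own rc port
rclone mount dropbox: /data/dropbox --rc --rc-addr 127.0.0.1:5572 &
rclone mount googledrive: /data/googledrive --rc --rc-addr 127.0.0.1:5573 &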

Then you add the matching port to the refresh command.

Do you just have a mount with an rc running?

You can use:

/usr/bin/rclone rc vfs/refresh recursive=true --rc-addr 127.0.0.1:5572 _async=true

I have multiple mounts:

rclone rcd --rc-web-gui --rc-addr :5572 --rc-web-gui-no-open-browser --cache-dir /data/.cache
rclone rc options/set --json '{"main": { "Transfers": 4 }, "vfs": {"CacheMode": 3, "UID": '$PUID', "GID": '$PGID', "Umask": 23}, "mount": {"AllowOther": true}}'
rclone rc mount/mount fs=pcloud: mountPoint=/data/pcloud
rclone rc mount/mount fs=dropbox: mountPoint=/data/dropbox
rclone rc mount/mount fs=googledrive: mountPoint=/data/googledrive
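As a sanity check, you can ask the daemon which mounts it is managing (mount/listmounts should be available in v1.53, if I remember right):

rclone rc mount/listmounts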

And when I run the command as you suggest (without the fs parameter), I get this error:

2020/11/06 14:09:59 Failed to rc: Failed to read rc response: 500 Internal Server Error: {
        "error": "more than one VFS active - need \"fs\" parameter",
        "input": {
                "recursive": "true"
        },
        "path": "vfs/refresh",
        "status": 500
}

So that is why I added the fs parameter; the documentation at https://rclone.org/rc/#vfs-refresh also says you're supposed to do that.

The dir option is for a path on a mounted remote, not for specifying a remote.
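In other words something like this, which only refreshes one subtree of the mounted remote (Movies is just an example path):

rclone rc vfs/refresh dir=Movies recursive=true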

I'm not sure you can specify a remote; maybe @darthShadow knows for sure. I don't think you can, but I've been wrong before and I'm sure I'll be wrong again :slight_smile:

I think this is a bug...

Try this fix:

v1.54.0-beta.4877.adefa07c8.fix-vfs-rc on branch fix-vfs-rc (uploaded in 15-30 mins)

I already suspected it was a bug. Thanks for confirming it.
It will be hard for me to test, as I am currently running it inside Docker on a QNAP, so I'll need a little patience there...

I'll try it out on an Ubuntu machine somewhere next week.

I just mounted a single cloud drive to test vfs/refresh, but no matter what I try, the cache is not prefetched (e.g. I added all kinds of longer timeout options and vfs cache read options, but nothing I tried makes a difference). This cloud drive has GBs of data, yet nothing is prefetched.

I tried the following with the _async option, as @Animosity022 suggested.
The command completes without error in roughly 5 seconds, but nothing seems to happen.

rclone rc vfs/refresh recursive=true _async=true
{
        "jobid": 10
}

rclone rc job/status jobid=10
{
        "duration": 4.246817549,
        "endTime": "2020-11-06T18:36:28.281565195Z",
        "error": "",
        "finished": true,
        "group": "job/10",
        "id": 10,
        "output": {
                "result": {
                        "": "OK"
                }
        },
        "startTime": "2020-11-06T18:36:24.034747631Z",
        "success": true
}

I thought the files would be precached in the folder .cache/vfs/googledrive/, since when I browse through files they appear there (and once they are there, you can quickly browse over the files).

Any clues what I could have forgotten?

EDIT: even with the -vv flag, the log shows dreadfully little:

2020/11/06 19:05:36 DEBUG : rclone: Version "v1.53.2" starting with parameters ["rclone" "rc" "vfs/refresh" "recursive=true" "--log-file=log.txt" "-vv"]
2020/11/06 19:05:40 DEBUG : 4 go routines active
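One thing worth noting: rclone rc is only a thin client that POSTs to the rc API, so its own -vv log will always be this short; the actual refresh work is logged by the rcd process. Running the daemon with a log file of its own should show a lot more, for example (the log path is just an example):

rclone rcd --rc-web-gui --rc-addr :5572 --rc-web-gui-no-open-browser --cache-dir /data/.cache -vv --log-file /data/rcd.log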

What's the size of your drive? How many objects? Mine takes about 4-5 minutes to run.

Total size: 7.92 GB. Files: 5820, folders: 474.
I wish that could be prefetched in 5s :wink:

I have tried it on an Ubuntu machine and the fix works :slight_smile:
I used the following test script:

PUID=1000
PGID=1000
USERNAME=rclone
PASSWORD=rclone
AUTH="--rc-user=$USERNAME --rc-pass=$PASSWORD"

mkdir -p ~/data/pcloud ~/data/dropbox ~/data/googledrive

rclone rcd --rc-web-gui --rc-addr :5572 --rc-web-gui-no-open-browser --cache-dir ~/data/.cache $AUTH &
sleep 3
rclone rc options/set --json '{"main": { "Transfers": 4 }, "vfs": {"CacheMode": 3, "UID": '$PUID', "GID": '$PGID', "Umask": 23}, "mount": {"AllowOther": true}}' $AUTH
rclone rc mount/mount fs=pcloud: mountPoint=~/data/pcloud $AUTH
rclone rc mount/mount fs=dropbox: mountPoint=~/data/dropbox $AUTH
rclone rc mount/mount fs=googledrive: mountPoint=~/data/googledrive $AUTH

rclone rc vfs/refresh fs=pcloud: recursive=true $AUTH
rclone rc vfs/refresh fs=dropbox: recursive=true $AUTH
rclone rc vfs/refresh fs=googledrive: recursive=true $AUTH

Just as on my QNAP, the cache remains completely empty in Ubuntu.
Each vfs/refresh completes in a few seconds.
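If I read the docs right, an empty .cache/vfs may actually be expected: vfs/refresh only walks the remote and fills the in-memory directory cache (listings and metadata); it does not download file contents, and nothing lands under .cache/vfs until a file is actually read. A quick way to see the effect it does have, assuming a freshly started daemon and the mount paths from the script above:

rclone rc vfs/refresh fs=googledrive: recursive=true $AUTH
time ls -R ~/data/googledrive > /dev/null    # near-instant if the directory cache is warm; without the refresh, the same listing has to hit the backend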

Thanks for testing.

I've merged this to master now which means it will be in the latest beta in 15-30 mins and released in v1.54. If we make a 1.53.3 then it will go there too!
