Rclone rc vfs/refresh not actually refreshing

What is the problem you are having with rclone?

rclone rc vfs/refresh recursive=true --url 127.0.0.1:1337 _async=true does not actually refresh the directory cache, or I am unable to interpret the output of rclone rc vfs/stats the way it is intended.
I am mounting Dropbox with the following systemd unit:

# cat /etc/systemd/system/rclone-dropbox.service
[Unit]
Description=Dropbox (rclone)
After=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount -vv --vfs-fast-fingerprint --vfs-cache-mode writes --allow-other --gid 2003 --uid 2003 --tpslimit 12 --read-only --vfs-read-chunk-size 64M --vfs-cache-max-age 9999h --dir-cache-time 9999h --tpslimit-burst=0 --cache-dir /cache --vfs-cache-max-size=40G --rc --rc-addr 127.0.0.1:1337 --rc-no-auth --log-file /tmp/rclone.log dropbox_crypt: /mnt/dropbox/
ExecStartPost=/root/vfs_refresh.sh
ExecStop=/usr/bin/fusermount -zu /mnt/dropbox
Restart=on-abort
RestartSec=30
StartLimitInterval=200
StartLimitBurst=5

[Install]
WantedBy=default.target

I had to move ExecStartPost into a separate script, because rclone wouldn't mount in time and the rclone rc vfs/refresh command would error out, so I introduced a 2-second sleep (this could be totally unrelated to rclone; I was just looking for a quick and dirty fix).
Contents of that file:

#!/bin/bash
/usr/bin/sleep 2
/usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:1337 _async=true
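Rather than a fixed 2-second sleep, the script could retry until the rc endpoint actually answers. A minimal sketch, assuming the core/pid rc call and the same --url as the unit above; wait_for and the retry count of 30 are arbitrary choices of mine, not rclone features:

```shell
#!/bin/bash
# Generic retry helper: run the given command up to "$1" times,
# one second apart, until it succeeds. Not rclone-specific.
wait_for() {
    local tries="$1"; shift
    local i
    for i in $(seq 1 "$tries"); do
        "$@" >/dev/null 2>&1 && return 0
        sleep 1
    done
    return 1
}

# Intended use in vfs_refresh.sh: wait until the remote control API
# answers, then kick off the async refresh.
# wait_for 30 /usr/bin/rclone rc --url 127.0.0.1:1337 core/pid &&
#     /usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:1337 _async=true
```

This avoids both the race (mount not ready yet) and an unnecessarily long fixed delay.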

After I started the rclone mount using systemctl start rclone-dropbox, I can see it is properly mounted, and rclone rc vfs/stats shows the following:

# rclone rc --url 127.0.0.1:1337 vfs/stats
{
        "diskCache": {
                "bytesUsed": 0,
                "erroredFiles": 0,
                "files": 0,
                "hashType": 0,
                "outOfSpace": false,
                "path": "/cache/vfs/dropbox_crypt",
                "pathMeta": "/cache/vfsMeta/dropbox_crypt",
                "uploadsInProgress": 0,
                "uploadsQueued": 0
        },
        "fs": "dropbox_crypt:",
        "inUse": 1,
        "metadataCache": {
                "dirs": 1,
                "files": 0
        },
        "opt": {
                "CacheMaxAge": 35996400000000000,
                "CacheMaxSize": 42949672960,
                "CacheMode": 2,
                "CachePollInterval": 60000000000,
                "CaseInsensitive": false,
                "ChunkSize": 67108864,
                "ChunkSizeLimit": -1,
                "DirCacheTime": 35996400000000000,
                "DirPerms": 2147484141,
                "DiskSpaceTotalSize": -1,
                "FastFingerprint": true,
                "FilePerms": 420,
                "GID": 2003,
                "NoChecksum": false,
                "NoModTime": false,
                "NoSeek": false,
                "PollInterval": 60000000000,
                "ReadAhead": 0,
                "ReadOnly": true,
                "ReadWait": 20000000,
                "UID": 2003,
                "Umask": 18,
                "UsedIsSize": false,
                "WriteBack": 5000000000,
                "WriteWait": 1000000000
        }
}

When I examine the job returned by the rclone rc vfs/refresh command, I can see it is apparently still running:

[root@emby ~]# rclone rc --url 127.0.0.1:1337 job/status jobid=1
{
        "duration": 0,
        "endTime": "0001-01-01T00:00:00Z",
        "error": "",
        "finished": false,
        "group": "job/1",
        "id": 1,
        "output": null,
        "startTime": "2023-06-04T15:28:07.016170446+02:00",
        "success": false
}

To me it doesn't seem like it is refreshing the directory tree.
When calling rclone rc vfs/refresh with the dir parameter (e.g. /usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:1337 _async=true dir='/'), it does refresh the top-level directory (which contains 27 directories plus the root directory itself):

# rclone rc --url 127.0.0.1:1337 vfs/stats
{
        "diskCache": {
                "bytesUsed": 0,
                "erroredFiles": 0,
                "files": 0,
                "hashType": 0,
                "outOfSpace": false,
                "path": "/cache/vfs/dropbox_crypt",
                "pathMeta": "/cache/vfsMeta/dropbox_crypt",
                "uploadsInProgress": 0,
                "uploadsQueued": 0
        },
        "fs": "dropbox_crypt:",
        "inUse": 1,
        "metadataCache": {
                "dirs": 28,
                "files": 28
        },
        "opt": {
                "CacheMaxAge": 35996400000000000,
                "CacheMaxSize": 42949672960,
                "CacheMode": 2,
                "CachePollInterval": 60000000000,
                "CaseInsensitive": false,
                "ChunkSize": 67108864,
                "ChunkSizeLimit": -1,
                "DirCacheTime": 35996400000000000,
                "DirPerms": 2147484141,
                "DiskSpaceTotalSize": -1,
                "FastFingerprint": true,
                "FilePerms": 420,
                "GID": 2003,
                "NoChecksum": false,
                "NoModTime": false,
                "NoSeek": false,
                "PollInterval": 60000000000,
                "ReadAhead": 0,
                "ReadOnly": true,
                "ReadWait": 20000000,
                "UID": 2003,
                "Umask": 18,
                "UsedIsSize": false,
                "WriteBack": 5000000000,
                "WriteWait": 1000000000
        }
}
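To spot-check those cache counters without reading the whole JSON reply, the counter lines can be filtered out; a grep-based sketch assuming the output layout above (cache_counts is my own helper name, it also matches the diskCache "files" line, and a JSON-aware tool like jq would be cleaner):

```shell
# Keep only the "dirs"/"files" counter lines from a vfs/stats reply.
cache_counts() {
    grep -E '"(dirs|files)": *[0-9]+'
}

# usage (assumed endpoint from the unit above):
# rclone rc --url 127.0.0.1:1337 vfs/stats | cache_counts
```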

Using any deeper directory, such as dir=/subdir, doesn't refresh the directory tree either.

SELinux is also switched to permissive for the moment, so I can rule it out as a possible cause:

# getenforce 
Permissive

My interpretation of the vfs/refresh command with _async=true and recursive=true is that it refreshes all directories (and possibly files), starting from the top-level directory and descending until the last node is reached. Is that interpretation correct?

The mount itself works without any issues whatsoever. I can access all files and directories when browsing /mnt/dropbox; however, listings take longer than I'd like, which is why I want rclone to cache directories and files.

I appreciate any hints on this! 🙂

Run the command 'rclone version' and share the full output of the command.

# rclone --version
rclone v1.62.2
- os/version: redhat 8.7 (64 bit)
- os/kernel: 4.18.0-425.19.2.el8_7.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.2
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Dropbox

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone rc vfs/refresh recursive=true --url 127.0.0.1:1337 _async=true

The rclone config contents with secrets removed.

# cat ~/.config/rclone/rclone.conf 
[dropbox]
type = dropbox
client_id = REMOVED
client_secret = REMOVED
token = {"access_token":"REMOVED","token_type":"bearer","refresh_token":"REMOVED","expiry":"2023-06-04T15:57:53.005693054+02:00"}

[dropbox_crypt]
type = crypt
remote = dropbox:data
password = REMOVED
password2 = REMOVED

A log from the command with the -vv flag

# cat /tmp/rclone.log 
2023/06/04 15:28:05 INFO  : Starting transaction limiter: max 12 transactions/s with burst 1
2023/06/04 15:28:05 DEBUG : rclone: Version "v1.62.2" starting with parameters ["/usr/bin/rclone" "mount" "-vv" "--vfs-fast-fingerprint" "--vfs-cache-mode" "writes" "--allow-other" "--gid" "2003" "--uid" "2003" "--tpslimit" "12" "--read-only" "--vfs-read-chunk-size" "64M" "--vfs-cache-max-age" "9999h" "--dir-cache-time" "9999h" "--tpslimit-burst=0" "--cache-dir" "/cache" "--vfs-cache-max-size=40G" "--rc" "--rc-addr" "127.0.0.1:1337" "--rc-no-auth" "--log-file" "/tmp/rclone.log" "dropbox_crypt:" "/mnt/dropbox/"]
2023/06/04 15:28:05 NOTICE: Serving remote control on http://127.0.0.1:1337/
2023/06/04 15:28:05 DEBUG : Creating backend with remote "dropbox_crypt:"
2023/06/04 15:28:05 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2023/06/04 15:28:05 DEBUG : Creating backend with remote "dropbox:data"
2023/06/04 15:28:05 DEBUG : vfs cache: root is "/cache"
2023/06/04 15:28:05 DEBUG : vfs cache: data root is "/cache/vfs/dropbox_crypt"
2023/06/04 15:28:05 DEBUG : vfs cache: metadata root is "/cache/vfsMeta/dropbox_crypt"
2023/06/04 15:28:05 DEBUG : Creating backend with remote "/cache/vfs/dropbox_crypt/"
2023/06/04 15:28:05 DEBUG : fs cache: renaming cache item "/cache/vfs/dropbox_crypt/" to be canonical "/cache/vfs/dropbox_crypt"
2023/06/04 15:28:05 DEBUG : Creating backend with remote "/cache/vfsMeta/dropbox_crypt/"
2023/06/04 15:28:05 DEBUG : fs cache: renaming cache item "/cache/vfsMeta/dropbox_crypt/" to be canonical "/cache/vfsMeta/dropbox_crypt"
2023/06/04 15:28:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:28:05 DEBUG : Encrypted drive 'dropbox_crypt:': Mounting on "/mnt/dropbox/"
2023/06/04 15:28:05 DEBUG : : Root: 
2023/06/04 15:28:05 DEBUG : : >Root: node=/, err=<nil>
2023/06/04 15:28:07 DEBUG : rc: "vfs/refresh": with parameters map[_async:true recursive:true]
2023/06/04 15:28:07 DEBUG : rc: "vfs/refresh": reply map[jobid:1]: <nil>
2023/06/04 15:28:07 DEBUG : : Reading directory tree
2023/06/04 15:29:05 DEBUG : Dropbox root 'data': Checking for changes on remote
2023/06/04 15:29:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:30:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:30:26 DEBUG : rc: "vfs/stats": with parameters map[]
2023/06/04 15:30:26 DEBUG : rc: "vfs/stats": reply map[diskCache:map[bytesUsed:0 erroredFiles:0 files:0 hashType:none outOfSpace:false path:/cache/vfs/dropbox_crypt pathMeta:/cache/vfsMeta/dropbox_crypt uploadsInProgress:0 uploadsQueued:0] fs:dropbox_crypt: inUse:1 metadataCache:map[dirs:1 files:0] opt:{NoSeek:false NoChecksum:false ReadOnly:true NoModTime:false DirCacheTime:9999h0m0s PollInterval:1m0s Umask:18 UID:2003 GID:2003 DirPerms:drwxr-xr-x FilePerms:-rw-r--r-- ChunkSize:64Mi ChunkSizeLimit:off CacheMode:writes CacheMaxAge:9999h0m0s CacheMaxSize:40Gi CachePollInterval:1m0s CaseInsensitive:false WriteWait:1s ReadWait:20ms WriteBack:5s ReadAhead:0 UsedIsSize:false FastFingerprint:true DiskSpaceTotalSize:off}]: <nil>
2023/06/04 15:31:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:32:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:33:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:34:02 DEBUG : rc: "job/status": with parameters map[jobid:1]
2023/06/04 15:34:02 DEBUG : rc: "job/status": reply map[duration:0 endTime:0001-01-01T00:00:00Z error: finished:false group:job/1 id:1 output:<nil> startTime:2023-06-04T15:28:07.016170446+02:00 success:false]: <nil>
2023/06/04 15:34:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:34:07 DEBUG : Dropbox root 'data': Checking for changes on remote
2023/06/04 15:35:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:36:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:37:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:38:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:39:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:39:21 DEBUG : Dropbox root 'data': Checking for changes on remote
2023/06/04 15:40:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:41:05 INFO  : vfs cache: cleaned: objects 0 (was 0) in use 0, to upload 0, uploading 0, total size 0 (was 0)
2023/06/04 15:41:21 DEBUG : rc: "vfs/refresh": with parameters map[_async:true dir:/ fs:dropbox_crypt: recursive:true]
2023/06/04 15:41:21 DEBUG : rc: "vfs/refresh": reply map[jobid:4]: <nil>

It does refresh - I use it all the time.

Dropbox doesn't have a recursive fast list, so it's just very slow compared to something like Google Drive, which does support fast list.

Mine takes around 10-15 minutes to refresh.

Indeed - I use it for OneDrive, and for 250k files and 10k folders it takes a good 10 minutes.

Until the job finishes, vfs/stats does not show anything new.

When it has finished, job/status jobid=1 will show "job not found".
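For scripting, that behaviour can be wrapped in a polling loop that treats both "finished": true and a failing job/status call (job already gone) as termination. A sketch assuming the JSON layout shown earlier in the thread; wait_for_job is my own helper name, and the grep is a crude stand-in for a real JSON parser:

```shell
# Poll job/status until the job reports finished (returns 0), or the
# rc call itself fails, e.g. once a finished job has been removed
# (returns 1).
wait_for_job() {
    local url="$1" jobid="$2" out
    while out=$(rclone rc --url "$url" job/status jobid="$jobid" 2>/dev/null); do
        printf '%s\n' "$out" | grep -q '"finished": true' && return 0
        sleep 30
    done
    return 1
}

# usage:
# wait_for_job 127.0.0.1:1337 1 && echo "vfs/refresh done"
```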

Thanks to both of you!
Then, indeed, it was a user error: I assumed it would progressively update the cached directories and files while the refresh was running, and that I would see that in the stats. Now it's clear that I just need to wait and stop being impatient 🙂

In the meantime, it has already updated the cache and all directories and files are listed!

It would be nice if job/status showed some progress - right now it only distinguishes running/done. But well, somebody has to code it.
