Can I use rclone vfs/refresh to preload my entire remote into the directory cache (held for the duration set by --dir-cache-time) to save API calls and speed up lookups?
Today I was hitting the 100/sec API request limit, and I noticed the errors were about listing directories, so I thought of this solution...
Also, what happens if the remote changes? Will the cache be invalidated automatically, or will changes only be visible after the time set by --dir-cache-time?
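For context, a whole-remote preload along these lines might look like the following. The remote name and mount point are placeholders; the mount just needs the remote control API enabled with --rc:

```shell
# Start the mount with the remote control API enabled
# (the rc server listens on 127.0.0.1:5572 by default)
rclone mount remote: /mnt/remote --rc --dir-cache-time 1000h &

# Walk the whole remote and prime the directory cache in one pass;
# --fast-list batches the listing to reduce API calls where the backend supports it
rclone rc vfs/refresh recursive=true --fast-list
```

After this, lookups should be served from the directory cache until --dir-cache-time expires or the refresh is re-run.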
I've seen some vfs/refresh behavior that I can't quite explain - if I run an _async job as @Animosity022 suggests (rclone rc vfs/refresh recursive=true --fast-list --rc-addr 127.0.0.1:5572 _async=true), it gets kicked off and I get a job ID. If I look at the job ID, it doesn't appear to be doing anything and the cache never refreshes - the duration should be non-zero, I would think.
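For anyone following along, an async refresh can be polled with job/status; a sketch (the jobid value is whatever the initial call returned, 1 here just as an example):

```shell
# Kick off the refresh in the background; the response is {"jobid": N}
rclone rc vfs/refresh recursive=true --fast-list _async=true --rc-addr 127.0.0.1:5572

# Poll the job by ID; "finished": true plus a non-zero "duration"
# indicates the refresh actually ran
rclone rc job/status jobid=1 --rc-addr 127.0.0.1:5572
```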
If I do a non-async vfs/refresh - (rclone rc vfs/refresh recursive=true --fast-list -v) - it completes after a few minutes and I see the changes on the mount.
time rclone rc vfs/refresh recursive=true --fast-list -v
{
"result": {
"": "OK"
}
}
real 1m46.648s
user 0m0.029s
sys 0m0.010s
I've cron'ed the vfs/refresh at 15-minute intervals (testing using DB for a streaming mount) and it has been pretty consistent with the ~2 minute run time.
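A crontab entry for that kind of schedule might look like this (the rclone path, rc address, and log path are assumptions, not taken from the posts above):

```shell
# m  h  dom mon dow  command
# Refresh the directory cache every 15 minutes, appending output to a log
*/15 *  *   *   *    /usr/bin/rclone rc vfs/refresh recursive=true --fast-list --rc-addr 127.0.0.1:5572 >> /tmp/vfs_refresh.log 2>&1
```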
Wed Nov 11 18:30:01 EST 2020
{
"result": {
"": "OK"
}
}
Wed Nov 11 18:31:36 EST 2020
Both mounts use almost exactly the same mount command. The --vfs-cache-max-age on the Ubuntu 18.04 machine is 36h; on the 20.04 machine it is 15h.
/usr/bin/rclone mount dcrypt: /mnt/drop --config /root/.config/rclone/rclone.conf --vfs-cache-mode full --vfs-cache-max-size 500G --vfs-cache-max-age 15h --dir-cache-time 48000h --cache-dir /home/rclone/vfs_cache --log-level INFO --log-file /tmp/rclone_drop.log --umask 002 --allow-other
Can you try running with -vv --dump headers and look directly at the log - you'll see the HTTP requests go by. In the delayed case, are the HTTP requests spaced throughout the period, or are they all at the start or the end? I'm just wondering if the background task is blocked on something.
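Something like this, i.e. the existing mount command with the debug flags added (the log path is just an example):

```shell
# Same mount as before, with verbose logging and HTTP header dumping enabled;
# the -vv output will show each listing request as it is made
rclone mount dcrypt: /mnt/drop --rc -vv --dump headers --log-file /tmp/rclone_debug.log
```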
I changed the vfs cache max age on the slow machine (Ubuntu 18.04 - it was 36h) to match the one on the working machine (Ubuntu 20.04, 15h) - once I restarted the mount, the _async job started working properly. Sorry I wasn't able to collect any detailed logs before I did that, though.