Force recreate vfs cache from folder in backend

rclone v1.55.1
Ubuntu 20.04 LTS server
Google Cloud Storage

parameters:
RCLONE_CONFIG=/etc/rclone.conf
RCLONE_CACHE_DIR=/var/cache/rclone
RCLONE_VFS_CACHE_MODE=full
RCLONE_VFS_CACHE_MAX_AGE=off
RCLONE_VFS_CACHE_MAX_SIZE=1024G
RCLONE_VFS_CACHE_POLL_INTERVAL=24h
RCLONE_VFS_WRITE_BACK=12h
RCLONE_DIR_CACHE_TIME=12h
RCLONE_POLL_INTERVAL=6h
RCLONE_ATTR_TIMEOUT=1h
RCLONE_TRANSFERS=16
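
For reference, those environment variables correspond to the following mount flags. This is a sketch: the remote name `gcs:mybucket` and the mount point `/mnt/gcs` are placeholders, not values from the post.

```shell
# Equivalent mount command using command-line flags instead of env vars.
# Replace gcs:mybucket and /mnt/gcs with your actual remote and mount point.
rclone mount gcs:mybucket /mnt/gcs \
  --config /etc/rclone.conf \
  --cache-dir /var/cache/rclone \
  --vfs-cache-mode full \
  --vfs-cache-max-age off \
  --vfs-cache-max-size 1024G \
  --vfs-write-back 12h \
  --dir-cache-time 12h \
  --poll-interval 6h \
  --attr-timeout 1h \
  --transfers 16
```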

I have 500 GB in the storage backend. Is there a way to force rclone to recreate the VFS cache for all files in a certain folder on the backend?

You'd mount it and just read all the files; that would download them and populate the cache. Is that what you mean?

Yes, but how could I force a "read" of all files in a folder?

You can just cat each file into /dev/null, or something along those lines, if you want the whole file:

felix@gemini:~/test$ for i in `ls`; do cat $i > /dev/null; echo $i; done
blah
hosts
felix@gemini:~/test$ ls
blah  hosts
felix@gemini:~/test$ for i in `ls`; do cat $i ; echo $i; done
127.0.0.1 localhost
127.0.1.1 gemini
blah
127.0.0.1 localhost
127.0.1.1 gemini
hosts
felix@gemini:~/test$
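
For a large folder tree, parsing `ls` in a loop breaks on filenames containing spaces and doesn't recurse. A more robust warm-up sketch uses find with null-delimited names; the path `/mnt/gcs/myfolder` is a placeholder for your actual mount path:

```shell
# Read every file under the mounted folder into /dev/null so the rclone
# VFS layer downloads it into the cache; print each name as it completes.
# -print0/-0 handles spaces in names; -P4 reads four files in parallel.
find /mnt/gcs/myfolder -type f -print0 \
  | xargs -0 -n1 -P4 sh -c 'cat "$0" > /dev/null && echo "$0"'
```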
