What is the problem you are having with rclone?
Hello,
I have a specific case. I run rclone on Linux. My Google Drive contains, for example, 30 files of about 100 GB each. I have an application that frequently checks info about these files but does not download all 100 GB. Is it possible to cache the info about the files?
Currently I am trying it this way, and it only half works:
rclone mount --allow-other --dir-cache-time 720h --poll-interval 15s --cache-dir=/home/work/cache_test/ --vfs-cache-mode full --vfs-cache-max-size 20G --vfs-cache-max-age 99h --transfers 9999999 --vfs-cache-poll-interval 15m folder: /home/work/folder --daemon
The problem is that after the first request for one of the files, I need to wait about 20 minutes for it to finish. After that the file is cached and new requests work very fast, just as I want. Is it possible to cut out this 20 minutes? I don't know why it takes so long. If I have, for example, 30 files, then I need to wait 30 × 20 min = 600 min.
Without the VFS flags and without the cache option, one request takes about 60 seconds.
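For comparison, that faster setup is just the bare mount with the same remote and mount point (a sketch of what I mean):

```shell
# Plain mount with no VFS cache flags: each request goes straight to
# Google Drive and takes ~60 s, but there is no 20-minute wait for a
# full-file download into the cache.
rclone mount --allow-other folder: /home/work/folder --daemon
```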
Maybe rclone can't do this and I need to use a different tool like Squid?
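One idea I have not tested yet, so this is only a sketch: enable the remote control server on the mount and pre-warm the directory cache. This assumes `--rc` and `rclone rc vfs/refresh` work on v1.50.2 as documented, and it caches only directory/file metadata, not file contents:

```shell
# Sketch: add --rc to the existing mount so it exposes the remote control API
rclone mount --allow-other --dir-cache-time 720h --rc folder: /home/work/folder --daemon

# Then ask the running mount to walk the whole remote and fill the
# directory cache up front (recursive listing of metadata only)
rclone rc vfs/refresh recursive=true
```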
What is your rclone version (output from rclone version)
rclone v1.50.2
- os/arch: linux/amd64
- go version: go1.13.6
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Linux Ubuntu 20.04 LTS
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone mount --allow-other --dir-cache-time 720h --poll-interval 15s --cache-dir=/home/work/cache_test/ --vfs-cache-mode full --vfs-cache-max-size 20G --vfs-cache-max-age 99h --transfers 9999999 --vfs-cache-poll-interval 15m folder: /home/work/folder --daemon
The rclone config contents with secrets removed.
[test]
type = drive
client_id = XXX
client_secret = XXX
scope = drive
token = {"access_token":"XXX","token_type":"Bearer","refresh_token":"XXXX","expiry":"XXXX"}
team_drive = XXXX