Cache for single file too big

What is the problem you are having with rclone?

Cache getting too big; I want to get the file cache size down.
Right now the cache is largest for the first file and decreases for each file down the folder, but I only want to cache the first 5-10 MB of every file for faster indexing.
Files never change.
New files do appear, but they never change.
Sometimes a user might pull a whole file; that's OK, but it doesn't need to stay cached at full size for long.

What I would like is to cache a set amount of each file, but purge files over the limit when not in use, and/or after a set amount of time.
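As a sketch of one way to approximate this outside rclone itself: a periodic job could read just the first few megabytes of every file through the mount, so those header chunks stay warm in the VFS cache. The mount path and the 10 MB figure below are illustrative assumptions, not rclone features:

```shell
#!/bin/sh
# Hypothetical cache warm-up: read the first 10 MB of every file through
# the mount so the VFS cache holds each file's header chunks.
# MOUNT and the 10 MB size are illustrative assumptions.
MOUNT="${1:-/home/$USER/Desktop/remote}"

find "$MOUNT" -type f | while IFS= read -r f; do
  # head -c stops after 10 MB, so only those chunks get fetched
  head -c $((10 * 1024 * 1024)) "$f" > /dev/null
done
```

Run from cron or a systemd timer, this would re-warm the cache after any purge, at the cost of re-reading headers that were evicted.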

What is your rclone version (output from rclone version)


Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 20.04LTS, 64 bit

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

attributes="--vfs-cache-mode full --attr-timeout 1m --cache-dir /home/$user/Desktop/$remote_name-tmp --buffer-size 500M --vfs-read-chunk-size 1M --vfs-read-chunk-size-limit 64M --vfs-cache-max-size 10G --log-file=rclone-$remote_name.log  --log-level INFO"

rclone mount $remote_name: /home/$user/Desktop/$remote_name $attributes

The rclone config contents with secrets removed.

type = drive
client_id = ****
client_secret = ****
root_folder_id = ****
scope = drive
token = ****

A log from the command with the -vv flag


hello and welcome to the forum,

--vfs-cache-mode full only downloads the chunks requested by the reading application, since it uses sparse files.
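A quick way to see that sparse-file behaviour on disk (a sketch using standard coreutils; the filename is arbitrary):

```shell
# A cache file can have a large apparent size while only the chunks that
# were actually downloaded occupy disk blocks.
truncate -s 1G demo.sparse   # 1 GiB apparent size, no blocks allocated yet
ls -l demo.sparse            # shows the apparent size (1073741824)
du -h demo.sparse            # shows actual disk usage (~0)
rm demo.sparse
```

So a cache entry that looks like a 50 GB file may only hold the ranges an application actually read.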

Thank you

Yes, I know; this is a temporary setup until I learn and understand how to make it do what I want.

Right now I mount the remote and open the folder in another program; it scans all the files and caches them in its own memory, and every few minutes it checks again, but by then it may have checked other files in the meantime. So one file can grow to 50 GB in the cache, another to 30 GB, while the rest sit at 5-30 MB. I want rclone to purge files over a set limit when not in use, so that it always has the start of each file cached.

I also added --vfs-cache-max-size 10G so it doesn't fill the drive on the server; earlier it could cache 3-5 files of over 50 GB each, but now it purges the other files once one file goes over 10 GB (not really what I want, but it stops the server from underperforming when the drive is full).
But then it always has to re-cache all the other files after it is done with a file over 10 GB, so the cache has lost its purpose.
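For the time-based part specifically, rclone does have a per-mount age limit: --vfs-cache-max-age evicts cached files that haven't been accessed within the given duration. A sketch combining it with the size cap (the remote name, mount point, and the 10G/1h/5m values are illustrative):

```shell
# Sketch: cap total cache size AND purge cached files not accessed
# within the last hour. Both flags apply to the whole cache, not to
# individual files; the values shown are illustrative.
rclone mount remote: /mnt/remote \
  --vfs-cache-mode full \
  --vfs-cache-max-size 10G \
  --vfs-cache-max-age 1h \
  --vfs-cache-poll-interval 5m
```

This still cannot keep only the first N MB of each file, but it does purge whole files on a schedule rather than only when the size cap is hit.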

what are you using the mount for?
what is that other program that is scanning and re-scanning the same files over and over again?

It's a LUT (lookup table) set for mathematical formulas; most of the data needed to know which table to use is in the first 5-10 MB of the file. Every now and then a group compiles a new set of LUTs and adds it to the folder.
And every now and then a formula needs to check the LUTs, and it might load the whole file, or read until it has found its solution.

But by doing that the file gets very big in the cache and it didn't get purged, so I set the 10G limit 2-3 hours ago to save myself from going into the server 4-5 times a day to delete big cache files.
But now it re-downloads a lot of the other files again and again because they get purged by the 10G limit, which applies to the whole mount, not a single file.

pretty sure rclone cannot work that way.
as rclone just downloads the chunks requested by the app.

perhaps there is a way to change the app that is requesting the data.
create a local database or local file structure to cache that first 5-10 MB.
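That "local file structure" idea could be sketched with rclone itself, copying only the first bytes of each remote file into a local mirror for the indexing pass. The remote name, destination path, and the 10 MB byte count here are assumptions:

```shell
#!/bin/sh
# Hypothetical sketch: build a local index mirror holding only the first
# 10 MB of each remote file. "remote:" and DEST are assumptions.
DEST="local-mirror"

rclone lsf remote: --files-only -R | while IFS= read -r f; do
  mkdir -p "$DEST/$(dirname "$f")"
  # --count limits the download to the first N bytes of the object
  rclone cat "remote:$f" --count 10485760 > "$DEST/$f"
done
```

The scanning application would then index $DEST instead of the mount, and only full reads would go through the mount.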

It's proprietary software, so that cannot be done.
Can the VFS cache do it, or is it written to only cache the mount as a whole, not individual files?

That's how the full VFS cache mode works, as documented here:

You'd have to size the cache properly to meet your needs, since it works with sparse/full files.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.