VFS Cache doesn't create chunked files on disk

Hi there,

I am currently using this mount command:

/usr/bin/rclone mount --config=/home/xx/.config/rclone/rclone.conf --allow-other --buffer-size 2G --fast-list --vfs-cache-max-age 72h --vfs-cache-mode minimal --vfs-cache-max-size 512G --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit 0 --dir-cache-time 1000h --async-read=false --timeout 1h --log-level INFO --log-file /home/xx/rclone.log --umask 002 --rc google: /home/xx/plexdrive

I expect that there will be chunk files in the folder /home/xx/.cache/rclone.

The -vv output also says that there will be files in this folder, but sadly there aren't any.

What am I doing wrong? I want a cached copy on the HDD that is kept until it is at most 72h old, or until the total size of all cached files exceeds 512 GB.

I am currently on version rclone v1.51.0-126-g45b63e2d-beta.

Thanks for your help!

Did you grab the beta for a specific reason?

Can you share the full log?

I had problems with 1.51 as it crashed a lot on an old server, so I tried a beta.

What portion of the log file do you need?

What do you mean it crashed? The whole log is the best bet.

Yes. The mount froze from time to time. I used the cache backend then. Now I am trying the VFS options, but I don't see any files on the hard drive.

Here is the log file:


Here you go


What does:

ls -alR /home/xx/.cache/rclone/vfs


No such file or directory

Possible to share the output like I have above?

What does this show?

ls -al /home/xx/

I think you mean this portion of it:

drwxrwxrwx 4 xx xx 4096 Mar 31 10:21 .cache

So what does:

ls -al /home/xx/.cache


ls -al /home/xx/.cache
total 16
drwxrwxrwx 4 xx xx 4096 Mar 31 10:21 .
drwxr-xr-x 22 xx xx 4096 Apr 3 14:35 ..
drwx------ 2 xx xx 4096 Mar 30 11:38 dconf
drwxrwxrwx 2 xx xx 4096 Mar 31 15:15 rclone

What is your goal with setting cache mode? What's the problem you are trying to solve?

I had problems using UHD remuxes. I want to minimize the API hits if someone stops playback and starts again the next day.

The cache backend works very well, but I thought I could get rid of it by using VFS.

Yeah, I am not sure what you are trying to do still.

If you are trying to set vfs cache mode to anything above writes, it has to download the whole file before streaming, so that's not a good option to use.

I am not sure what you mean by the 'cache one' works well.

Why are you trying to minimize API hits? You get 1 billion per day.

@ncw is rewriting the vfs cache backend as the 'cache backend' has no maintainer so I would not really use that.

There are multiple Windows users here (as I'm not one of them) that stream via rclone without issues. Perhaps @VBB can chime in as he's got his settings working quite well to my understanding.

If the goal is to keep recently written files in the VFS cache, you can use 'writes' as the mode, as anything written would be kept for 72 hours per your settings. This means that once you write something, control doesn't return to you until the file has been uploaded to your remote.

OK, sorry if I stated it wrong.

I want to do multiple things.

First, I want rclone to read ahead in chunks of 128 MB up to 2 GB, so that there won't be any buffering.

Second, I want the files to stay on the hard drive for a while, so that I can resume my movie the next day without loading the chunks twice.

Third, I don't want to get banned while scraping :slight_smile:
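For reference, these three goals map onto separate flags. This is only a sketch; the flag names are standard rclone options and the values are taken from the mount command quoted earlier in the thread:

```shell
# Goal 1: read-ahead. --vfs-read-chunk-size sets the initial HTTP range request
# size (doubling up to --vfs-read-chunk-size-limit); --buffer-size is an
# in-memory read-ahead that is discarded whenever the file is closed.
--vfs-read-chunk-size 128M
--buffer-size 2G

# Goal 2: keep data on disk. These limits only apply to what the VFS cache
# actually stores, which depends on --vfs-cache-mode; 'minimal' stores almost
# nothing for plain reads, which is why the cache folder stays empty.
--vfs-cache-max-age 72h
--vfs-cache-max-size 512G
```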

--buffer-size controls how much rclone reads ahead in memory, but that is dependent on how the file is opened and closed, as the buffer is discarded when a file is closed.

You can try out the cache backend as that keeps things local, but again, that does not have a maintainer so it has not been fixed/updated for some time.

That's also the cache backend, with the same caveats as above. You can use --vfs-cache-mode writes, but that only applies when a file is written and does nothing for reading. You do not want to use anything above writes, as that means it has to download the entire file before giving you 1 byte of data back.
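To summarize the cache modes as they behaved in rclone of this era (around v1.51, before the VFS cache rewrite); the remote and mount point here are the ones from the original command:

```shell
# --vfs-cache-mode levels, from least to most caching:
rclone mount google: /home/xx/plexdrive --vfs-cache-mode off      # no file caching at all
rclone mount google: /home/xx/plexdrive --vfs-cache-mode minimal  # caches only files opened for both read and write
rclone mount google: /home/xx/plexdrive --vfs-cache-mode writes   # caches files opened for write; reads stream directly
rclone mount google: /home/xx/plexdrive --vfs-cache-mode full     # downloads the whole file into the cache before reading
```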

Google doesn't really ban, so I'm not sure what you mean by that. You only get banned if you violate their TOS.

Google does have daily API quotas per user, which is 1 billion per day. They have a daily download quota of 10TB and an upload quota of 750GB per user. You can't see how you are doing against these anywhere, and there is little to no information on how these numbers are actually generated.

There isn't a magic bullet for any of this as it's very dependent on your setup and your clients. I've personally never hit the download quota per day and hit the upload quota here and there.

Yes, I mean the daily API quotas. I thought it was much less. But it is still strange that rclone does not write anything in the folder.

Thank you for your help.

Turn it up to writes and write a file. It'll show up. Minimal really doesn't cache much.

Doesn't look like he's using Windows.

@DJWESTY1985 What's your Internet speed? Like @Animosity022 said, I've been using his settings for a long time now, and I have zero issues streaming even the largest UHD files. This is with a 150/150 connection, so certainly not the fastest.

My current mount:

rclone mount --buffer-size 256M --dir-cache-time 1000h --poll-interval 15s --rc --read-only --timeout 1h -v

I have a very large Plex library, which I scan once a day. I don't believe the scraping has ever resulted in an API ban.

Buffering, especially with large files, can easily be the result of latency, as I experienced not too long ago when my ISP had major peering issues with Google services. For a couple of weeks, I was not able to play anything 4K, and even some of the HD stuff wouldn't play.