New feature: CACHE


Hi All,

Just discovered this so I’m trying to catch up.

I haven’t experienced bans in a long time, not since I added the warmup switching. Even full-blown scans from scratch never resulted in them. That’s not to say it isn’t possible.
A couple of questions:

  • have you repeated the same test multiple times?
  • was there any other mount active on gdrive at the same time? even other software

Next phase I hope:

So no more bans using rclone?

In what way? Scanning? Streaming?

That sounds really bad. I never managed to achieve that performance. Same questions as before:

  • have you repeated the same test multiple times?
  • was there any other mount active on gdrive at the same time? even other software
  • what did you do? Plex scan?

Personally, I think it should actually be CLOUD -> CRYPT -> CACHE.
Cached data shouldn’t be encrypted: it’s local and in your possession, and you save your hardware and rclone the hassle of decrypting the data every time it’s read from cache. I’ll try to tackle this problem soon.
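For illustration, that ideal layering would look roughly like this in rclone.conf (remote names here are placeholders, and per the rest of this thread this ordering doesn’t actually work well today):

```
[gdrive]
type = drive

[secret]
type = crypt
remote = gdrive:media

[cached]
type = cache
remote = secret:
```

Mounting `cached:` would then store already-decrypted chunks on local disk, so reads from cache skip the crypt layer entirely.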



Sorry, I’m confused. In what rclone command? If I’m mounting the Crypt remote pointing to the Cache remote, the only command I run is rclone mount Crypt: — should I put the flags there?

I thought it was CLOUD -> CACHE -> CRYPT because ncw said

There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache

During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider, which makes it think we’re downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt


Yes. That’s how it works best now to take full advantage of caching. What I said earlier is what would be ideal in the future but, again, it doesn’t work now.

The cache remote would still get the flags even if you mount a crypt remote over it.
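Put concretely, the working layering (cloud -> cache -> crypt) looks something like this sketch, with placeholder remote names; the cache options live on the cache remote, and cache flags such as --cache-db-path still take effect even though the mount command names the crypt remote:

```
[gdrive]
type = drive

[cached]
type = cache
remote = gdrive:media
chunk_size = 40M

[secret]
type = crypt
remote = cached:
```

```
rclone mount --cache-db-path=/tmp/rclone-cache secret: /mnt/media
```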


Has anyone tested some 4K streams? This is how it’s working with plexdrive:
(a 71 GB movie with a high bitrate — e.g. you need at least 80 Mbit+ for direct play — will load within 2 seconds)

P.S. Note: I’m using Plex Media Player since Chrome won’t direct play it, and with transcoding it takes 5+ seconds to load.


Could you explain your configuration to me?
I use Emby with google-drive-ocamlfuse for the moment; I would like to be able to switch to the cache option …


I’m using plexdrive atm (will test rclone cache soon).

This is the script I use to mount:


Thanks to you, Ajki ;)

I tried this:
“rclone mount --allow-other --cache-db-purge -v GGD2_Cache: /GGD2 &”
and I received a quota limitation with a 403 error :frowning:


Does anyone know what this means?

“poll-interval is not supported by this remote”, when trying a mount from Google Drive.


You don’t use rclone crypt?
Emby is very slow to show a movie.


No, but that’s a good test to do. Thanks for the suggestion. If you try it too, let me know how it works. It’s not clear from your recording: does that work in your current setup, or is it too much?

It usually expires but you shouldn’t be getting one in any case. Can you paste me your configs and some logs?


I don’t understand the cache principle well.
I use Emby, and my movies are on Google Drive, encrypted with rclone.
I mount a folder with plexdrive (or google-drive-ocamlfuse); it works well apart from 403 errors.
Then I mount a folder with rclone crypt that points to the Google folder:

> rclone mount --allow-other --read-only --default-permissions --uid 1000 --gid 1000 --umask 002 --acd-templink-threshold 0 --buffer-size 10M --timeout 5s --contimeout 5s --stats 1m -v GGD2_Crypt2: /home/emby/GGD2_Crypt2 &

I have an endless wait on Emby before the film starts, because rclone has to download it completely.

here are my other parameters:

plexdrive mount /home/emby/GD -o allow_other --chunk-check-threads=20 --chunk-load-ahead=4 --chunk-load-threads=20 --chunk-size=5M --max-chunks=2000 --refresh-interval=1m -v 3 &


google-drive-ocamlfuse /home/emby/GD -o allow_other,auto_cache -headless &


Can you share the settings you use on Google Drive and for the Cache that don’t result in a ban? As well as the mount command?

I just did this:

rclone-beta --config=/root/.config/rclone/rclone-beta.conf mount --allow-other --uid=1000 --gid=1000 --cache-db-path=/mnt/Data/rclone-cache --syslog --cache-mode=full cryptic: /media-mounts/PlexDrive &

And in rclone.conf it looks like:

[gdrive]
type = drive
client_id =
client_secret =
token = {blah}

[cryptic]
type = crypt
remote = gdrive-cache:
filename_encryption = standard
password = blah
password2 = blah

[gdrive-cache]
type = cache
remote = gdrive:Data
chunk_size = 40M
info_age = 24h
chunk_age = 3h
warmup_age = 24h

And this resulted in a ban. I am also using it with a local unionfs mount, but that shouldn’t affect this.


I should note that it was a Plex scan that caused the ban as well.


@ncw is it now possible to download torrents or other files directly to the cloud with rclone and cache?
Should cache-writes be enabled for that?


If you use --cache-mode then yes, in theory… Try minimal and, if that doesn’t work, try writes.

Note that this is fairly new code, so any bug reports are greatly appreciated!
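As a sketch, such a mount might look like the following, assuming a cache remote named gdrive-cache: (the flag name and the minimal/writes values are taken from the post above; paths are placeholders):

```
# Try minimal first; switch to writes if uploads still fail.
rclone mount --allow-other --cache-mode minimal gdrive-cache: /mnt/gdrive &
```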


Hey all. Sorry for the short time away; I worked on a few features that came mostly as feedback from many of you through issues.

Here’s the latest beta that has significant changes in cache:

Some key points here:

  • there’s now a Plex integration, and warm-up is gone. It wasn’t really working that well for that use case, and by integrating cache with Plex it can handle scans much more efficiently while keeping playback smooth (what warm-up tried to do by guessing, really)
  • chunks no longer expire based on time. You only need to give it a max cache size, and it will clear the oldest chunks
  • all this meant removing a lot of code that might have bottlenecked and degraded performance. There was also a nasty bug that caused a lot of read errors, which meant degraded performance too. Overall, the latest beta should give a better experience
  • 403s should not be encountered anymore either. If you still see them, please let me know through an issue; I’d be glad to help you out. I just reran a scan of my entire library while playing at the same time and got no 403s. I hope everyone else has a similar experience

I really recommend running rclone config and pointing rclone at your Plex installation if you’re using it for that. I also feel the need to slow down Plex’s scans, and with this integration I should be able to in the future.
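For reference, after running rclone config the cache section might look roughly like this (a sketch only: the plex_* and chunk_total_size keys are the new options, and the remote name, URL and values below are placeholders):

```
[gdrive-cache]
type = cache
remote = gdrive:Data
plex_url =
plex_username = blah
plex_password = blah
chunk_size = 5M
info_age = 24h
chunk_total_size = 10G
```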

The doc was also updated:

And if you have improvement ideas or suggestions, I will gladly work with you to see if it’s possible.


This is great news!!


I just started testing this with the latest beta and I’ve been running some benchmarks (too early to publish as there’s nothing scientific yet, but it seems to be outperforming plexdrive significantly).

Two issues: I have files “disappearing” from my directory. Not on the actual Google Drive itself but I’ll “ls” a directory from my cached-Google mount and see maybe 10 files and 1 sub directory. I can read/copy the files no problem. Don’t do anything but wait 30 minutes and come back and the directory is empty except for the sub. Killing the mount and starting it up again with the dump cache option brings everything back from the dead. However, it does mean that a full scan of a new Plex server, for example never completes because files keep disappearing.

In the logs, I’m seeing this (not sure if related):

panic: runtime error: slice bounds out of range

goroutine 20496 [running]:*Handle).getChunk(0xc42499a230, 0x5000000, 0xc42499a268, 0xff02e0, 0xc42cf35e08, 0x5a2cd7e5, 0xc427b8d080)
        /home/travis/gopath/src/ +0x524*Handle).Read(0xc42499a230, 0xc424076000, 0x1000, 0x100000, 0x0, 0x0, 0x0)
        /home/travis/gopath/src/ +0xd4, 0xc42499a230, 0xc424076000, 0x1000, 0x100000, 0xe6ef20, 0xf3cc40, 0xeb0f00)
        /home/travis/gopath/src/ +0x72*buffer).read(0xc420193980, 0x7f5be6106a98, 0xc42499a230, 0x7f5be6106a98, 0xc42499a230)
        /home/travis/gopath/src/ +0x58*asyncReader).init.func1(0xc42499a8c0)
        /home/travis/gopath/src/ +0x1e4
created by*asyncReader).init
        /home/travis/gopath/src/ +0x1ab


Sounds like the same behaviour I already mentioned in this thread: New rclone cache feature - files randomly disappear


Yep, for the panic issue I already added a commit to catch it. I can’t pinpoint where it’s coming from; it’s simple math there that shouldn’t ever let that happen. Furthermore, I never saw this happen during my own scans, and that type of issue should occur more frequently.

Anyway, the commit catches the panic and errors out the read, because I don’t understand when it happens and what that read is about, but I have added all the details I need to figure it out to the error log: unexpected conditions during reading. current position: ... I will need those errors in that issue, @Shacuih and @kelinger. They would really help me out.
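The catch-the-panic approach described here can be sketched in Go; readChunk, its signature, and the error text are illustrative only, not the actual cache code:

```go
package main

import "fmt"

// readChunk is a simplified stand-in for cache's chunk read path.
// A deferred recover converts a slice-bounds panic into a normal
// error carrying the read position, instead of crashing the mount.
func readChunk(data []byte, offset, size int) (b []byte, err error) {
	defer func() {
		if r := recover(); r != nil {
			err = fmt.Errorf("unexpected conditions during reading. current position: %d: %v", offset, r)
		}
	}()
	return data[offset : offset+size], nil // panics if the bounds are wrong
}

func main() {
	if _, err := readChunk(make([]byte, 10), 8, 5); err != nil {
		fmt.Println(err) // the failed read is logged with context instead of panicking
	}
}
```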

@kelinger can you post your config so I can try to figure out the disappearance issue? And just to confirm: cache is reading, then 30 mins later some directory is empty on the mount but not on Drive?