New feature: CACHE

Can anyone explain how it works? Is it similar to Plexdrive?

Thanks!


Yes, it should have a similar effect to Plexdrive.

Check out the docs and have a go!

Awesome feature. Has anyone tried a Plex scan with this? Are there any known bans?

Do I understand it right that if I upload my files via rclone copy to the cache target, they will be available immediately after the upload?

Thanks for your help

Please make sure to read the docs, especially:

Known issues
cache and crypt

One common scenario is to keep your data encrypted in the cloud provider using the crypt remote. crypt uses a similar technique to wrap around an existing remote and handles this translation in a seamless way.

There is an issue with wrapping the remotes in this order: cloud remote -> crypt -> cache

During testing, I experienced a lot of bans with the remotes in this order. I suspect it might be related to how crypt opens files on the cloud provider, which makes it think we’re downloading the full file instead of small chunks. Organizing the remotes in this order yields better results: cloud remote -> cache -> crypt
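Sketched as an rclone.conf fragment, the recommended ordering (cloud remote -> cache -> crypt) would look roughly like this - the remote names and paths here are hypothetical, and the crypt remote additionally needs its password fields set via rclone config:

```ini
[gdrive]             ; hypothetical cloud remote
type = drive

[gcache]             ; cache wraps the cloud remote, fetching in small chunks
type = cache
remote = gdrive:media

[gcrypt]             ; crypt wraps the cache, so decryption reads go through cached chunks
type = crypt
remote = gcache:
; plus the password fields created by `rclone config`
```

With this layout you would mount gcrypt: only; mounting gcache: directly would expose the still-encrypted files.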

Something is seriously wrong - file path to cache not found … I’ll try to post a detailed error log tomorrow … The file structure gets completely wrong on a Windows mount … -> Cache -> Crypt …

OK ... after completely deleting the cache dir and starting from scratch - after going into a subdir the following error comes up:

2017/11/18 11:39:32 ERROR : IO error: file is too short to be encrypted
2017/11/18 11:39:32 ERROR : worker-1 : failed caching chunk in storage 0: open cache/gcache/fas9qo4eirl3pntkfjqqmi08hs/kp70u7dtkqn63dl5caa9ikirc4/3ct4j7ed9lbofra5m5em55ed527v8tlm3qr5pgc3j4sasp0sunog/dmgjhbr77l4lto6ebr8jm4qfvnq16j69v5106lqkl0e8nie9o19ti21e1m7pf8oaacmj7u0livk7ki6ue28mqefpcgahrk0cu2102992noerbd4hvcsb3ag4qtevk59d/ir2h35b4b3085tagvinv7d3q3nn1mcktheg7gcbh8spqo0nmh700/0: Das System kann den angegebenen Pfad nicht finden. (The system cannot find the path specified.)


Could not find the path - is it possible that the path is too long (in the cache dir) for Windows 10? (So storing in the file system is not the best way to go?)

The mounted dir then gets out of order - a big dir has only one file left (the one I tried to open).

Gdrive -> Cache -> Crypt

Without cache - everything is accessible ... same structure - same storage ...

Using rclone-v1.38-117-g409ba56fβ-linux-amd64

I get a really high CPU usage after a while.

[cache]
remote = myremote:path/to/dir
chunk_size = 8M
info_age = 12h
chunk_age = 3h
warmup_age = 3h

Amazon backend -> cache-> encfs -> emby

This has been my mount since 1.36. I suppose I shouldn’t use some of the options, or should I?

rclone mount --read-only --allow-non-empty --max-read-ahead 1G --checkers 16 --tpslimit-burst 10 --tpslimit 1 --acd-templink-threshold 0 cache: /mount -v --log-file="/logs/rclone-cache-$NOW.txt" &

EDIT!

I mounted using:

rclone mount --read-only --allow-non-empty --cache-db-purge cache: /mount_path -v --log-file="/logs/rclone-cache-$NOW.txt" &

I no longer get 100% CPU usage :slight_smile:

Yes, I think --max-read-ahead is probably counterproductive when using the cache backend.


Just done a quick test and it looks like I got it wrong.
GD: = Cache: = Crypt:
mounting crypt using
rclone mount --allow-other --tpslimit 1 --cache-db-purge Plex:/Plex/ /mnt/Plex/ &

That = 403 Forbidden within 30 mins
Before the ban I got some read errors, Read error: low level retry 1/10: EOF

Cache info
chunk_size = 5M
info_age = 1h
chunk_age = 1h
warmup_age = 24h

This feature is only in beta for now, right?

Is Plex: your Google Drive, or is it the crypt?

Yeah, I was too excited to try it out!

Is there a cache size limit? E.g. when it has 5GB of data it would delete the oldest chunks regardless of chunk_age.

I have it set up this way:
Google Drive > Cache > Encrypted mount
So the cache is working on everything on the Google Drive, and I then set up a crypt mount pointing to the cache.

I could not see anything about a cache size limit, so it must be using the default.

Plex: = crypt mount pointing to the cache, not directly at Google Drive
/mnt/Plex = where the mount is located on my system

If using PlexDrive I use
plexdrive mount -o allow_other -v 2 /mnt/Plexdrive/
rclone mount --read-only --allow-other Plexdrive:/Plex/ /mnt/Plex/ &

That works OK with no bans

Rclone cache seems a lot faster than Plexdrive. Maybe it’s running too fast?

OK - tried it under my Linux VM - everything set up … but after 20 seconds: 403 Forbidden

=> Ban! (The directory was listed - but accessing a file did not write anything to the cache dir - and after 20 seconds the ban … maybe bad luck or Murphy - but strange.)

When I rename a folder, it becomes invisible until I remount the cloud storage. Only then is the rename visible.

I am slightly confused as to how to use this with an rclone encrypted remote.

So for example in my config, I have an Amazon remote

[amazon]

then my cache config where the cache remote is set to my amazon remote

[amazon-cache]
remote=amazon:

then my crypt remote, for which I have set the remote value to amazon-cache:
[amazon-crypt]
remote=amazon-cache:encrypteddirectory

So first question, do I need to mount both
amazon-cache:
and
amazon-crypt:

and second question, should the remote for amazon-crypt: be set to

amazon-cache:encrypteddirectory

or should the remote be set to the local path where amazon-cache: is mounted?

/path/to/cache/mount/encrypteddirectory

thanks
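A sketch of what the docs’ recommended ordering (cloud remote -> cache -> crypt) would look like, using the remote names from the post above - here the crypt remote points at the cache remote itself, not at a local mount path, and only the top-level amazon-crypt: would be mounted:

```ini
[amazon]
type = amazon cloud drive

[amazon-cache]           ; cache wraps the cloud remote
type = cache
remote = amazon:

[amazon-crypt]           ; crypt wraps the cache remote directly
type = crypt
remote = amazon-cache:encrypteddirectory
; plus the password fields created by `rclone config`
```

With this layout there is no need to mount amazon-cache: at all; a single `rclone mount amazon-crypt: /path/to/mount` gives decrypted, cached access.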

@ncw Using rclone-v1.38-117-g409ba56fβ-linux-amd64 my drive got full…

[cache]
remote = myremote:path/to/dir
chunk_size = 8M
info_age = 12h
chunk_age = 1m
warmup_age = 3h

after 12-24h my disk was full. Shouldn’t the chunk_age have deleted the chunks after one minute?

BTW what is warmup_age ?

I’m still trying to wrap my head around the impact of changing the warm-up times. I guess I do not fully understand what it is doing here.

NB in f80f7a050948d40f67edeb93f6757d30ec85d3ed I’ve changed the default location of the cache so you might need to move your cache, or start again if you aren’t using the --cache-db-path. This was to move it to a windows/mac/linux approved place like .cache.

Sorry for the inconvenience, but I thought it was better to do this now!

This new feature seems great!

So let’s say I have a GDrive remote and a crypt GDriveCrypt remote that points to GDrive:Encrypted.

What I do is to create a CACHE remote that points to GDrive, and then I create a crypt CACHECrypt remote that points to CACHE:Encrypted.

Finally I mount the CACHECrypt remote?

And where do I put the CACHE specific flags like --cache-chunk-age ? Would they work on the rclone mount CACHECrypt: ?

Also, as I understand it, this new feature will be available in v1.39. Is there an ETA?

Sounds correct.

On the command line, or while configuring the cache remote
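To illustrate both options with the remote names from the question (the mount point /mnt/media is hypothetical):

```ini
; Either set the values while configuring the cache remote in rclone.conf...
[CACHE]
type = cache
remote = GDrive:
chunk_age = 3h
info_age = 12h

; ...or pass the equivalent flags on the mount command line:
; rclone mount --cache-chunk-age 3h --cache-info-age 12h CACHECrypt: /mnt/media
```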

A couple of weeks hopefully!