Correct way to cache an encrypted remote?

Having just read another thread (Move files to crypt folder), I think my cache might be incorrectly configured.

This is my current rclone.conf:

[gdrive]
type = drive
scope = drive
client_id = X
client_secret = X
token = X

[gcache]
type = cache
remote = gdrive:/ARCHIVE
chunk_size = 512M
info_age = 1d
chunk_total_size = 50G

[gcrypt]
type = crypt
remote = gdrive:/ARCHIVE
filename_encryption = standard
directory_name_encryption = true
password = X
password2 = X

But shouldn't the format be gcache:gcrypt? I tried changing gcrypt to the following, but it can't see the crypt:

[gcrypt]
type = crypt
remote = gcache:gcrypt
filename_encryption = standard
directory_name_encryption = true
password = X
password2 = X

Or is my initial config actually correct?

Which command did you use to "see" the files: mount or ls? And with which arguments?

Looks like you are forgetting to stack them together.

You should first make a Teamdrive to hook up to the cloud. (as you have)
Then make a TeamdriveCache --> pointing to the Teamdrive remote (as you have)
Then make a TeamdriveCacheCrypt and point it to the TeamdriveCache remote. (missing)
Now TeamdriveCacheCrypt is the one you will want to use, and it handles all the layers for you. If you want to mount the drive, use this one as well.

With your current config you would only be able to use crypt OR cache, since they both point directly to the Teamdrive. To make them all work together you need to layer all three.

The start of my crypt remote (the outer layer of my stack) looks like this if it helps you visualize:
[TeamdriveCacheCrypt]
type = crypt
remote = TeamdriveCache:/Secure
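
For completeness, here is a minimal sketch of what the whole three-layer stack could look like in rclone.conf. The remote names and the /Secure folder are just placeholders, and the cache tuning options are left out:

[Teamdrive]
type = drive
scope = drive
client_id = X
client_secret = X
token = X

[TeamdriveCache]
type = cache
remote = Teamdrive:

[TeamdriveCacheCrypt]
type = crypt
remote = TeamdriveCache:/Secure
filename_encryption = standard
directory_name_encryption = true
password = X
password2 = X

Note how each layer's remote = line points at the layer below it, and only the outermost (crypt) remote is the one you actually use or mount.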

Hope that helps.

PS: Avoid stacking in the order Drive<--Crypt<--Cache. While this works, it is known to cause problems with Google rate-limiting you. The "right" way to do it is Drive<--Cache<--Crypt, and this also has the added benefit of encrypting your cache, which may be nice for added local security.

-Stigma

Thank you for the reply. As the gcrypt already has a considerable amount of data, can I revise the rclone.conf to make the cache work with the crypt, or am I unable to create a cache after the gcrypt has already been created?

A cache can be added or removed at any point without any problems whatsoever.

I would recommend clearing the cache (deleting all chunk files plus the database file) if you want to re-enable it again though, just to avoid potential issues with it holding old data. It shouldn't be absolutely required, but it's best to let it rebuild from scratch after major changes. If you change the chunk size, however, you HAVE to clear it to avoid unexpected end-of-file errors.
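
For reference, "clearing" just means removing the cache backend's chunk directory and database file while the mount is stopped. Unless you have pointed --cache-db-path / --cache-chunk-path elsewhere, they live under ~/.cache/rclone/cache-backend on Linux/macOS and %LOCALAPPDATA%\rclone\cache-backend on Windows, named after the cache remote. A sketch, assuming defaults and a cache remote called "gcache":

# stop the mount/service first, then remove the chunk directory and the database
rm -rf ~/.cache/rclone/cache-backend/gcache
rm -f ~/.cache/rclone/cache-backend/gcache.db

Alternatively, starting the mount once with --cache-db-purge flushes just the database and lets it rebuild.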

TLDR: this is not a problem. Just point your crypt to the cache and it should all connect fine. The crypt won't care if it sends stuff through the cache or not.

Thank you

I know a cache is not a prerequisite, but there seems to be no consensus on whether it improves performance or not. Indeed, some seem to advocate no cache backend at all, relying instead on the VFS layer and its options (e.g. --vfs-read-chunk-size).

Is there a particular benefit to going either route?

There definitely are pros and cons.

Cache has more advanced chunking, and while the VFS also has a chunked download feature, I could never get that to produce completely stutter-free playback (on a 150Mbit connection). With the cache this always works well. I'd say this is currently one of the biggest reasons to use it if you play media from the cloud regularly - otherwise the buffering will drive you mad. If stutter-free playback is achievable on the VFS alone, then I at least have not managed it after extensive testing.

Cache has its own database file that greatly speeds up listing and traversing the file structure. It can remember how the folders and files looked the last time it saw them and assumes (based on configurable timeouts) that the structure is still the same. With high expiry timers, and a use-case where files are only modified through the cache by a single write-user, this saves a lot of redundant calls to re-list files, because it can just refer to its own local database instead. This also makes moving around the filesystem far more snappy. It keeps the DB updated on its own when changes happen, as long as the changes were made via the cache.

Cache has read-caching with retention, so if you can set aside a fair-sized cache then re-reading files you accessed not too long ago is instant (it won't have to re-download or re-request). This obviously speeds up usage - especially on small files, since those are slow to transfer compared to large ones.

Cache will obviously need to do more writes to disk. If heavily used, you may want to put it on a HDD rather than an SSD to prevent too much wear (although modern SSDs can take a lot of wear these days).

Cache can be added and removed from a setup easily, so there is no big commitment to it.

As for cons - cache complicates the setup more than strictly necessary. More code means more chances of encountering bugs. Cache also seems to be falling out of favor for future development (perhaps being moved into the VFS eventually, in the long term). The cache's writes feature is useless with a VFS, as it does not actually retain writes (it is only used for error-safe uploads, which the VFS already handles with --vfs-cache-mode writes), and the temp-upload function appears to be buggy in a few non-trivial ways - so I avoid those. Apart from that it works well. It just might not see a ton of development going forward.

Personally I use cache mostly for the ability to stream media flawlessly, plus the read cache. Having a few hundred gigs worth of cache in front of a few TB of data means a lot of cache hits, which really speeds things up, since it's pretty common for me to re-access the same files within a given period of time. If a better solution becomes available later (such as the VFS getting integrated cache support) then I'll just change the setup.

I think that should cover the most important points.
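
For reference, the behaviours described above map onto a handful of cache-backend options in rclone.conf. The values below are only illustrative, not recommendations:

[gcache]
type = cache
remote = gdrive:
# size of each chunk downloaded from the remote
chunk_size = 32M
# how much read cache to retain on disk before old chunks are evicted
chunk_total_size = 256G
# how long listings in the local database are trusted before re-checking the remote
info_age = 1d
# number of parallel chunk downloads per open file
workers = 4
# how often expired chunks are cleaned up
chunk_clean_interval = 1m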

An amazingly comprehensive reply! I'm convinced. I have a 1TB NVMe that I can use solely for this.

That being the case, can you tell me how I go about adding the cache to the rclone.conf below - and which settings you would recommend for optimum performance?

:slight_smile:

In all my testing, I've found that the cache doesn't scale well; it's generally slower and not worth it.

[gdrive]
type = drive
scope = drive
client_id = X
client_secret = X
token = X

[gcache]
type = cache
remote = gdrive:
chunk_size = 32M
info_age = 7d
chunk_total_size = 10G

[gcrypt]
type = crypt
remote = gcache:/ARCHIVE
filename_encryption = standard
directory_name_encryption = true
password = X
password2 = X

That would be your config I believe.
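
For what it's worth, with that layered config the mount command itself can stay simple, since everything goes through gcrypt. Something along these lines (the drive letter and paths are just examples):

rclone mount gcrypt: X: --allow-other --dir-cache-time 24h --config "C:\rclone\rclone.conf"

Any cache-specific tuning lives in rclone.conf rather than on the command line.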

I read your detailed post about settings and about not using the cache, and I'm still undecided before testing both, as playback is already smooth but there's always room for improvement :slight_smile:

A full, untouched Blu-ray (40GB) currently opens in 5-6 seconds using nssm and no cache. I can scan through it with no delay or judder :slight_smile:

Would you recommend tweaking the following:

mount --allow-other --buffer-size 2G --dir-cache-time 24h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off --vfs-cache-mode writes --vfs-cache-max-age 0m --write-back-cache --tpslimit 8 --cache-dir "C:\Cache" gcrypt:/ X: --config "C:\rclone\rclone.conf"

Run the test with DEBUG log level and share the log. 5-6 seconds doesn't seem too bad to me.

rclone.conf amended as per above. Does the cache simply build itself in the background, or are calls only made when accessing media?

Sorry - which test should I run?

If you want to see why your open takes 5-6 seconds, mount with --log-level DEBUG and use --log-file /somewhere/rclone.log and you can perform the test and share the log.

That will show what is happening when you see that 5-6 seconds of playback.
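
Concretely, that just means adding two flags to the mount command you already use, for example (the log path is only an example):

rclone mount gcrypt:/ X: --log-level DEBUG --log-file "C:\rclone\rclone.log" --config "C:\rclone\rclone.conf"

Keep the rest of your usual flags as they are; only the logging flags matter for this test.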

Once you start the rclone mount with cache, it will build up based on file access.

Ahhh, I see. Will post the log shortly.

Thanks :slight_smile:

Okay, right now it won't play anything ... errors all over the place referring to files which were on an older cache and which no longer exist.

How do I completely purge the cache so it rebuilds?

2019/07/25 16:28:56 DEBUG : : cache: expired ARCHIVE

What's your rclone.conf and the mount command you are using?

To purge the cache, you can stop it and just delete the files, you can send kill -HUP to the rclone process ID, or you can run with the parameter "--cache-db-purge". Any of those would work.
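
On Windows with nssm, the simplest of those is the last one: add the flag to your existing mount arguments for one start, for example (keeping whatever other flags you normally use):

mount gcrypt:/ X: --cache-db-purge --config "C:\rclone\rclone.conf"

That flushes the cache database at startup so it rebuilds from the current state of the remote.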

rclone.conf as per the above - including your cache settings.

Seems to have purged successfully:

2019/07/25 16:37:17 INFO : Starting HTTP transaction limiter: max 8 transactions/s with burst 1
2019/07/25 16:37:18 DEBUG : gcache: wrapped gdrive:folder at root folder
2019/07/25 16:37:18 DEBUG : gcache: Purging the DB
2019/07/25 16:37:18 INFO : gcache: Cache DB path: C:\WINDOWS\system32\config\systemprofile\AppData\Local\rclone\cache-backend\gcache.db
2019/07/25 16:37:18 INFO : gcache: Cache chunk path: C:\WINDOWS\system32\config\systemprofile\AppData\Local\rclone\cache-backend\gcache
2019/07/25 16:37:18 INFO : gcache: Chunk Memory: true
2019/07/25 16:37:18 INFO : gcache: Chunk Size: 32M
2019/07/25 16:37:18 INFO : gcache: Chunk Total Size: 10G
2019/07/25 16:37:18 INFO : gcache: Chunk Clean Interval: 1m0s
2019/07/25 16:37:18 INFO : gcache: Workers: 4
2019/07/25 16:37:18 INFO : gcache: File Age: 2d
2019/07/25 16:37:18 DEBUG : Adding path "cache/expire" to remote control registry
2019/07/25 16:37:18 DEBUG : Adding path "cache/stats" to remote control registry
2019/07/25 16:37:18 DEBUG : Adding path "cache/fetch" to remote control registry
2019/07/25 16:37:19 DEBUG : gcache: wrapped gdrive:ARCHIVE at root ARCHIVE
2019/07/25 16:37:19 DEBUG : gcache: Purging the DB
2019/07/25 16:37:19 INFO : gcache: Cache DB path: C:\WINDOWS\system32\config\systemprofile\AppData\Local\rclone\cache-backend\gcache.db
2019/07/25 16:37:19 INFO : gcache: Cache chunk path: C:\WINDOWS\system32\config\systemprofile\AppData\Local\rclone\cache-backend\gcache
2019/07/25 16:37:19 INFO : gcache: Chunk Memory: true
2019/07/25 16:37:19 INFO : gcache: Chunk Size: 32M
2019/07/25 16:37:19 INFO : gcache: Chunk Total Size: 10G
2019/07/25 16:37:19 INFO : gcache: Chunk Clean Interval: 1m0s
2019/07/25 16:37:19 INFO : gcache: Workers: 4
2019/07/25 16:37:19 INFO : gcache: File Age: 2d
2019/07/25 16:37:19 DEBUG : Adding path "cache/expire" to remote control registry
2019/07/25 16:37:19 DEBUG : Adding path "cache/stats" to remote control registry
2019/07/25 16:37:19 DEBUG : Adding path "cache/fetch" to remote control registry
2019/07/25 16:37:19 DEBUG : Encrypted drive 'gcrypt:/': Mounting on "X:"
2019/07/25 16:37:19 DEBUG : Cache remote gcache:folder: subscribing to ChangeNotify
2019/07/25 16:37:19 DEBUG : vfs cache root is "C:\\Cache\\vfs\\gcrypt"
2019/07/25 16:37:19 DEBUG : Adding path "vfs/forget" to remote control registry
2019/07/25 16:37:19 DEBUG : Adding path "vfs/refresh" to remote control registry
2019/07/25 16:37:19 DEBUG : Adding path "vfs/poll-interval" to remote control registry
2019/07/25 16:37:19 DEBUG : Encrypted drive 'gcrypt:/': Mounting with options: ["-o" "fsname=gcrypt:/" "-o" "subtype=rclone" "-o" "max_readahead=131072" "-o" "attr_timeout=1" "-o" "atomic_o_trunc" "-o" "uid=-1" "-o" "gid=-1" "--FileSystemName=rclone" "-o" "volname=gcrypt" "-o" "allow_other"]
2019/07/25 16:37:19 DEBUG : Encrypted drive 'gcrypt:/': Init:
2019/07/25 16:37:19 DEBUG : Encrypted drive 'gcrypt:/': >Init:

What's the mount command?

Using nssm, as per above:

If you are using the cache backend, it does its own memory management, so you can change --buffer-size 2G to --buffer-size 0M.

I'm not sure why you have the tpslimit in as I'd just use the defaults.

You can also just remove this:

--vfs-read-chunk-size 32M --vfs-read-chunk-size-limit off

and just use the defaults as they are good :slight_smile:
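
Put together, a trimmed-down version of your earlier command along those lines would look roughly like this (whether to keep --vfs-cache-max-age depends on the question below):

mount --allow-other --buffer-size 0M --dir-cache-time 24h --vfs-cache-mode writes --write-back-cache --cache-dir "C:\Cache" gcrypt:/ X: --config "C:\rclone\rclone.conf"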

Did you make this 0 for a reason? --vfs-cache-max-age 0m