Looks like you are forgetting to stack them together.
You should first make a Teamdrive to hook up to the cloud. (as you have)
Then make a TeamdriveCache --> pointing to the Teamdrive remote (as you have)
Then make a TeamdriveCacheCrypt and point it to the TeamdriveCache remote. (missing)
Now TeamdriveCacheCrypt is the one you will want to use and which does all the steps for you. If you want to mount the drive - use this also.
With your current config you would only be able to use crypt OR cache, since they both point directly to the Teamdrive. To make them all work together you need to layer all three.
The start of my crypt remote (the outer layer of my stack) looks like this if it helps you visualize:
[TeamdriveCacheCrypt]
type = crypt
remote = TeamdriveCache:/Secure
Hope that helps.
PS: Avoid stacking in the order Drive<--Crypt<--Cache. While this works, it is known to cause problems with Google rate-limiting you. The "right" way to do it is Drive<--Cache<--Crypt, and besides, this has the added benefit of encrypting your cache, which may be nice for added local security.
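To make the full Drive<--Cache<--Crypt stack concrete, a complete three-remote rclone.conf might look roughly like this (the team drive ID, passwords, and the /Secure path are placeholders for your own values, not something to copy verbatim):

```ini
[Teamdrive]
type = drive
scope = drive
team_drive = <your team drive ID>

[TeamdriveCache]
type = cache
# Cache sits in the middle, pointing at the plain drive remote
remote = Teamdrive:

[TeamdriveCacheCrypt]
type = crypt
# Crypt is the outer layer, pointing at the cache remote
remote = TeamdriveCache:/Secure
password = <obscured password>
password2 = <obscured salt>
```

You would then mount and copy against TeamdriveCacheCrypt: only, and the other two layers are used automatically underneath it.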
Thank you for the reply. As the gcrypt already has a considerable amount of data, can I revise the rclone.conf to make the cache work with the crypt, or am I unable to create a cache after the gcrypt has already been created?
A cache can be added or removed at any point without any problems whatsoever.
I would recommend clearing the cache (deleting all chunk files plus the database file) if you want to re-enable it later though, just to avoid potential issues from it holding stale data. It shouldn't be strictly required, but it's best to let it rebuild from scratch after major changes. If you change the chunk size, however, you HAVE to clear it to avoid unexpected end-of-file errors.
TLDR: this is not a problem. Just point your crypt to the cache and it should all connect fine. The crypt won't care if it sends stuff through the cache or not.
I know a cache is not a prerequisite, but there seems to be no consensus as to whether it improves performance or not. Indeed, some seem to advocate skipping the cache backend and relying on the VFS layer and its --vfs-* flags instead.
Is there a particular benefit to going either route?
Cache has more advanced chunking, and while the VFS also has a chunked download feature I could never get that to produce completely stutter-free playback (on a 150Mbit connection). Using cache this always works well. I'd say this is currently one of the biggest reasons to use it if you play media from the cloud regularly - otherwise the buffering will drive you mad. If it is possible to achieve well on the VFS then I have at least not been able to do it after extensive testing.
Cache has its own database file that greatly speeds up listing and traversing the file structure. It remembers how the folders and files looked the last time it saw them and assumes (based on configurable timeouts) that the structure is still the same. With high expiry timers, and a use-case where files are only modified through the cache by a single writing user, it saves a lot of redundant calls to re-list files because it can just refer to its own local database instead. This also makes moving around the filesystem far more snappy. It keeps the DB updated on its own as changes happen, as long as those changes were made via the cache.
Cache has read-caching with retention, so if you can set aside a fair-sized cache then re-reading files you accessed not too long ago is instant (it won't have to re-download or re-request them). This obviously speeds up usage - especially on small files, since those are slow to transfer compared to large ones.
Cache will obviously need to do more writes to disk. If heavily used, you may want to put it on an HDD rather than an SSD to prevent too much wear (although modern SSDs can take a lot of wear now).
Cache can be added and removed from a setup easily, so there is no big commitment to it.
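To illustrate the knobs behind the points above (expiry timers, chunk size, read retention), here is how a cache remote section might be tuned. The values are examples to adjust for your own connection and disk, not recommendations:

```ini
[TeamdriveCache]
type = cache
remote = Teamdrive:
# Size of each downloaded chunk; changing this later requires clearing the cache
chunk_size = 10M
# How long directory/file listings are trusted before being re-fetched
info_age = 1d
# Upper bound on disk space used for retained chunks
chunk_total_size = 100G
```

Larger chunk_size favors big sequential media reads; a long info_age only makes sense when all changes go through the cache, as described above.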
As for cons - cache complicates the setup more than strictly necessary, and more code means more chances of encountering bugs. Cache also seems to be falling out of favor for future development (perhaps being folded into the VFS eventually in the long term). The cache-writes option is useless with a VFS since it does not actually retain writes (it is only used for error-safe uploads, which the VFS already handles with --vfs-cache-mode writes), and the temp-upload function appears to be buggy in a few non-trivial ways - so I avoid both. Apart from that it works well. It just might not see a ton of development going forward.
Personally I use cache mostly for the ability to stream media flawlessly + the read cache. Having a few hundred gigs worth of cache on a few TB of data means a lot of cache hits that really speeds things up since it's pretty common for me to re-access the same files in a given period of time. If a better solution becomes available later (such as the VFS getting integrated cache support) then I'll just change the setup.
I think that should cover the most important points.
An amazingly comprehensive reply! I'm convinced. I have a 1TB NVMe drive that I can use solely for this.
That being the case, can you tell me how I go about adding the cache to the rclone.conf below - and which settings you would recommend for optimum performance?
I read your detailed post about settings and about not using the cache and I'm still undecided before testing both as playback is smooth but there's always room for improvement
A full, untouched Blu-ray (40GB) currently opens in 5-6 seconds using nssm and no cache. I can scan through it with no delay or judder.
If you want to see why your open takes 5-6 seconds, mount with --log-level DEBUG and use --log-file /somewhere/rclone.log and you can perform the test and share the log.
That will show what is happening when you see that 5-6 seconds of playback.
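For reference, a debug-logged mount invocation along those lines could look like this (the remote name, mount point, and log path are placeholders for your own setup):

```shell
rclone mount TeamdriveCacheCrypt: /mnt/media \
  --log-level DEBUG \
  --log-file /var/log/rclone.log
```

Reproduce the 5-6 second open while this is running, then the relevant requests and their timings will be in the log file.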
Once you start the rclone mount with cache, it will build up based on file access.
What's your rclone.conf and the mount command you are using?
To purge the cache, you can stop rclone and just delete the files, run kill -HUP on the rclone process ID, or run with the parameter "--cache-db-purge". Any of those would work.
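Those three options as shell commands, for copy-paste reference (the cache path assumes rclone's default location, and the remote name, mount point, and PID are placeholders):

```shell
# Option 1: stop rclone first, then delete the cache DB and chunk files
rm -rf ~/.cache/rclone/cache-backend/TeamdriveCache*

# Option 2: send SIGHUP to a running rclone process to clear its cache
kill -HUP <rclone-pid>

# Option 3: start the mount with the purge flag so the DB is cleared on startup
rclone mount TeamdriveCacheCrypt: /mnt/media --cache-db-purge
```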