Cache question outside of gdrive (*dircache*)

I have a question about the cache.

I have read a lot here about the cache, but it is usually only mentioned alongside gdrive. Does it also work outside Plex? For example with Dropbox as a dir cache, so it doesn't have to read all the files again every time? Directories with over 1000 folders or files take a long time to list. Fast list is, I think, only supported by gdrive.
What are your experiences with Dropbox or other providers than gdrive where there is no direct API?

The cache backend caches the directory and file structure as well as providing some caching for file reads.

Plex is an application that sits on top and like any other application, it can use the files on the mount. Many folks just happen to use Plex on top of rclone to serve media.

For any provider to be supported in rclone, it has to have an API. Dropbox has one, many folks use it, and it seems to work pretty well.

My directory and file lists are pretty solid once the info is cached.

```
3759

real	0m0.546s
user	0m0.027s
sys	0m0.067s
```

Thanks, I'll give it a try with Dropbox.

I'm assuming the cache there works exactly like it does with gdrive.

Yes, the cache backend doesn't know or care which backend it is used with. It just operates on files; the regular backend remote is still the module that speaks to the server, as it would with or without the cache. It should work fine with any other provider as well.
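As a sketch of what that layering looks like in practice, a Dropbox remote wrapped by the cache backend might be configured like this in rclone.conf (the remote names and values here are illustrative, not from this thread):

```
[dropbox]
type = dropbox

[dropbox-cache]
type = cache
remote = dropbox:
chunk_size = 10M
info_age = 1d
chunk_total_size = 10G
```

You would then mount or list `dropbox-cache:` instead of `dropbox:`, so that listings and reads go through the cache while the dropbox remote still does the actual talking to the server.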

I do caution against using its --tmp-upload and --cache-writes options, however, as I found these to be buggy in several ways. The main function, read-cache chunking, works fairly well though.

One last note: if you experiment with different chunk sizes, know that you MUST clear the cache directory (just delete it) after such a change; otherwise you will start to get "unexpected EOF" errors. It's a fairly typical problem people encounter when setting this up :slight_smile:
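For reference, clearing it can be as simple as the sketch below, assuming the cache backend's default cache location and that you haven't pointed --cache-db-path or --cache-chunk-path somewhere else:

```shell
# Default location of the cache backend's chunk and DB storage on Linux;
# adjust the path if you overrode --cache-db-path or --cache-chunk-path.
rm -rf "$HOME/.cache/rclone/cache-backend"
```

Do this while the mount is stopped, then remount and the cache will be rebuilt with the new chunk size.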