Guide to replacing plexdrive/unionfs with rclone cache

Yes, --cache-db-purge rebuilds the cache database each time the service is started. I find this helpful just to keep things clean and current in case anything goes out of whack. In a perfect world it wouldn’t be needed, but I don’t mind the minimal API hits a rebuild costs, as I’m well under those limits.

/dev/shm is a tmpfs sized to half of your system memory by default on Linux. So in my case:

 cat /proc/meminfo
MemTotal:       32836332 kB

and my /dev/shm

df -h /dev/shm
Filesystem      Size  Used Avail Use% Mounted on
tmpfs            16G  2.6G   14G  17% /dev/shm
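That half-of-RAM default can be checked straight from /proc/meminfo; a quick sketch (Linux-only, and it assumes MemTotal’s value is the second column, as in the output above):

```shell
#!/bin/sh
# On Linux, the tmpfs mounted at /dev/shm defaults to half of physical memory.
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
shm_kb=$(( mem_kb / 2 ))
echo "MemTotal: ${mem_kb} kB -> default /dev/shm: ${shm_kb} kB"
```

If you want a different size, tmpfs can be resized on the fly with `mount -o remount,size=24G /dev/shm` (root required).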

It’s kind of a cheat to use, but it works well. It also gets cleaned out on every reboot, so for me the setup mirrors a memory-only configuration. I don’t want any persistent storage of the chunks: my users rarely watch the same thing twice, and bandwidth caps are not an issue for me.

For cache use, you do want a 0M buffer size, as the cache handles all of that and setting a bigger buffer only ‘double caches’ the information.

For Plex transcoding, you ideally want fast storage as well; I do all my Plex transcoding to a local SSD so that isn’t a slow point. Some people transcode to /dev/shm, but once I got bigger files, that really wasn’t a good fit for me.

For #2, my understanding is that every other operation gets 1 cache worker, but with the Plex integration, when it detects a file being played it bumps that up to the configured --cache-workers (default 4). You can always turn off the Plex integration and bump the cache workers up to a bigger number (5-8) and see if that helps speed up the uploads. That goes back to my previous statement: a few more API hits most likely won’t matter for you, as the daily quota is huge.


In 30 days of all this testing, I personally haven’t hit 1 million queries, let alone the actual single-day quota.
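For a rough sense of scale (hypothetical file and chunk sizes, not measured numbers), the request count per playback stays small:

```shell
#!/bin/sh
# Hypothetical: a 10 GB file read in 10 MB chunks needs about 1024 range requests,
# so even 100 full playbacks a day stay far below a million-query day.
file_mb=10240
chunk_mb=10
plays_per_day=100
requests=$(( file_mb / chunk_mb ))
echo "requests per playback: ${requests}"
echo "requests per day at ${plays_per_day} plays: $(( requests * plays_per_day ))"
```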

I use mergerfs now to always write to my /gmedia locally first which maps back to /data/local for me.

My /GD is my rclone mount and my merger command is:

felix@gemini:~/scripts$ cat mergerfs_GD
/usr/bin/mergerfs -o defaults,sync_read,allow_other,category.action=all,category.create=ff /data/local:/GD /gmedia

I use a standard rclone move on my /data/local and let that run nightly to clean up anything stored locally. rclone’s standard 1-minute cache polling picks it up, and the files stay on the same path, so no changes are needed in Plex/Radarr/Sonarr as everything points to my /gmedia.
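The nightly move can be a one-line cron entry; this is only a sketch using the paths and remote name from this thread (the schedule and log path are my assumptions):

```shell
# crontab -e (sketch): at 03:00, move anything written locally up to the remote;
# the rclone mount's polling then picks the files up on their cloud path.
0 3 * * * /usr/bin/rclone move /data/local gcrypt: --log-file /var/log/rclone-move.log
```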


Something odd is going on with my setup. I changed the buffer back to 0M, but I’m still getting 20s+ launch times. Even opening a 42KB thumbnail (I created an SMB share for my rclone mount) takes the same amount of time to open.

Do I have to wait for the cache.db file from --cache-db-path=/mnt/cache/rclone_cache to finish building before trying to access files? I can see it getting bigger on my disk. Once I see how big it gets, I might move it to RAM like you, although it should be fast enough on my SSD?

Here’s where I’m at currently:

rclone mount --config=/root/.config/rclone/rclone.conf --allow-other --dir-cache-time=72h --cache-db-path=/mnt/cache/rclone_cache --cache-chunk-path /tmp/ --cache-chunk-no-memory --cache-chunk-size=10M --cache-info-age=6h --cache-workers=6 --cache-writes --cache-tmp-upload-path /mnt/user/rclone_upload --cache-tmp-wait-time 30m --buffer-size 0M --rc --log-level INFO gdrive_media: /mnt/disks/google_media

For #2, I’m going to go back to what I had previously and create a unionfs of my upload folder (RW) and rclone mount (RO) that Sonarr, Radarr etc. will monitor and do their thing, plus an rclone move cron job running every xx mins to move stuff from the upload folder to gdrive. For the odd file that gets written directly to the mount outside my automation (e.g. Kodi downloading artwork), I’ll use the offline write for that. It’ll mean a slight delay before Plex/Kodi sees files, but it’ll do for now while I fix #1; then I’ll move onto #2.

ok, some progress.

Launch times are consistently around 20 seconds, which is acceptable as my users are used to local files sometimes taking up to 5-10 seconds to start if a drive is spun-down, so the remote files start just before people start thinking there’s a problem.

I think removing --cache-chunk-no-memory has helped as keeping chunks in memory makes sense.

I’m not sure the cache is working correctly though… if I start a file (with a 20-second start) and watch 1 minute, then stop and play the same file again, shouldn’t it start immediately? It doesn’t for me.

Maybe things will get faster when my upload jobs finish (moving a few TBs via rclone move and via the cache-tmp-upload-path), and maybe when the Plex authentication issue I’m having gets fixed. Other than that, I’m out of ideas on how to speed it up.

rclone mount --allow-other --dir-cache-time=1h --cache-db-path=/tmp/ --cache-chunk-path=/tmp/ --cache-chunk-size=10M --cache-total-chunk-size=10G --cache-info-age=2h --cache-db-purge --cache-workers=6 --cache-writes --cache-tmp-upload-path /mnt/user/rclone_upload --cache-tmp-wait-time 30m --buffer-size 0M --rc --log-level INFO --log-file=/boot/config/plugins/rclone-beta/logs gdrive_media: /mnt/disks/google_media

Now at 10s and Plex problems resolved - thanks all

rclone mount --allow-other --dir-cache-time=1h --cache-db-path=/tmp/rclone --cache-chunk-size=10M --cache-total-chunk-size=8G --cache-info-age=2h --cache-db-purge --cache-workers=50 --cache-chunk-no-memory --cache-tmp-upload-path /mnt/user/rclone_upload --cache-tmp-wait-time=60m --buffer-size 0M --log-level INFO gdrive_media: /mnt/disks/google_media

ok, my launch times have gone back to 20s :sob::sob::sob::sob:

What’s making the experience worse is that I have the same delay whenever I skip forwards or backwards.

Here’s my latest config - anything jump out as wrong or missing? I’ve tried vfs but it was no better - probably because I had something wrong!

Thanks in advance for any help - I’m getting desperate as about 25% of my library has been uploaded now, so I’m encountering the delays more and more

rclone mount --allow-other --dir-cache-time=71h --cache-db-path=/tmp/rclone --cache-chunk-size=10M --cache-total-chunk-size=10G --cache-info-age=72h --cache-db-purge --cache-workers=8 --cache-chunk-no-memory --cache-tmp-upload-path=/mnt/user/rclone_upload --cache-tmp-wait-time=60m --buffer-size 0M --log-level INFO gdrive_media: /mnt/disks/google_media

Do you see any logs that the 8 workers are being used and that the plex connection has been established?

Any other error logs?

I’m not sure if I’m looking in the right place, as I don’t have anything in the logs about the number of workers in use.

I can only see entries like this when I start a video:

2018/06/27 14:48:23 INFO : m3136um5f89cm00pb5902gfdt0/b486e6qo69rf10i9sui6pg04k4/07h637vebo98d5mh4ipvbfs7qs/313kvmcbd8b188ef0krd2u3rmdh6bo99l5iae98goa3vb6cttokgnhe7fa71tc5he0hhhc24a5td8bug1n12d52g8mj405v65v3ls58: confirmed reading by external reader
2018/06/27 14:48:29 INFO : m3136um5f89cm00pb5902gfdt0/b486e6qo69rf10i9sui6pg04k4/07h637vebo98d5mh4ipvbfs7qs/31065vqdgkk8vdvka4342t487j1cnkaivk82f765h1upn3nc1j811lg77o7i7bsmhpf643skcqa7nkir0dmfo1rqake58u175cbekv5o90c3t74bvtj3pqb9e1akaa7c: confirmed reading by external reader
2018/06/27 14:48:38 INFO : m3136um5f89cm00pb5902gfdt0/b486e6qo69rf10i9sui6pg04k4/07h637vebo98d5mh4ipvbfs7qs/31065vqdgkk8vdvka4342t487j1cnkaivk82f765h1upn3nc1j811lg77o7i7bsmhpf643skcqa7nkir0dmfo1rqake58u175cbekv5o90c3t74bvtj3pqb9e1akaa7c: confirmed reading by external reader
2018/06/27 14:48:40 INFO : m3136um5f89cm00pb5902gfdt0/b486e6qo69rf10i9sui6pg04k4/07h637vebo98d5mh4ipvbfs7qs/31065vqdgkk8vdvka4342t487j1cnkaivk82f765h1upn3nc1j811lg77o7i7bsmhpf643skcqa7nkir0dmfo1rqake58u175cbekv5o90c3t74bvtj3pqb9e1akaa7c: confirmed reading by external reader

This confirms that the Plex integration is working. Generally, it means the worker count has been increased to the maximum per video.

Perhaps @Animosity022 can help with this further.

I’ve been doing some more testing and vfs tends to start faster for me all the time.

My current cache config looks like:

/usr/bin/rclone mount gmedia: /gmedia \
   --allow-other \
   --dir-cache-time 160h \
   --cache-total-chunk-size 10G \
   --cache-chunk-path /dev/shm \
   --cache-chunk-no-memory \
   --cache-chunk-size 10M \
   --cache-tmp-upload-path /data/rclone \
   --cache-tmp-wait-time 60m \
   --cache-info-age 168h \
   --cache-db-path /dev/shm \
   --cache-workers 6 \
   --buffer-size 512M \
   --log-level INFO \
   --syslog \
   --umask 002

I turned off the Plex integration as I don’t want the delay of waiting for Plex to confirm playback and bump up the workers.

I like using the cache tmp upload as it gives me flexibility in timing my uploads. I drop the cache chunks and cache db into memory in /dev/shm, and I turn off any chunk memory usage because I want the regular buffer to grab as much as I configure, so I don’t get any pauses or delays in playing.

Cache generally seems to be slower, but my starts tend to be 10-15 seconds with my current settings for any movie/tv show.

Thanks. I’ve copied your settings almost exactly and I’m at around 12-14s average - that’ll do. Please share if you have any eureka moments

I’ve noticed that different clients launch at different speeds, e.g. my Nexus 6P seems to be much faster than the web client, whereas Android TV seems very slow.

One thing to remember is that depending on what client you are using, if it has to transcode rather than direct play, it’ll take longer to start up. I try to do all my testing on clients that do Direct Play. Infuse on iOS and on my ATV direct plays just about everything.

So I’m still doing more testing, but I do get files to start faster by limiting a few settings and making some tweaks.

Turning cache chunk memory off makes things 1-2 seconds slower in general in my testing, so I turned it back on.

Rather than having a huge buffer, I tweaked it down to 100M, as that seemed like the sweet spot.

This gave me a consistent 4-5 seconds to media info on any file I tried, rather than 7-10 seconds with the other settings:

/usr/bin/rclone mount gmedia: /Test \
   --allow-other \
   --dir-cache-time 24h \
   --cache-total-chunk-size 5G \
   --cache-chunk-path /dev/shm \
   --cache-chunk-size 10M \
   --cache-tmp-upload-path /data/rclone \
   --cache-tmp-wait-time 60m \
   --cache-info-age 28h \
   --cache-db-path /dev/shm \
   --cache-workers 5 \
   --buffer-size 100M \
   --log-level INFO \
   --log-file /home/felix/logs/rclone-test.log \
   --umask 002 

If you want something even a little faster, I have a unionfs/mergerfs-type mount where I copy to my local drive and then rclone move stuff up via a cron job.

This looks like:

felix@gemini:/etc/systemd/system$ cat rclone.service
[Unit]
Description=RClone Service

[Service]
ExecStart=/usr/bin/rclone mount gcrypt: /GD \
   --allow-other \
   --dir-cache-time 48h \
   --cache-dir /data/rclone \
   --vfs-read-chunk-size 10M \
   --vfs-read-chunk-size-limit 512M \
   --buffer-size 100M \
   --syslog \
   --umask 002 \
   --rc \
   --log-level INFO
ExecStop=/bin/fusermount -uz /GD


My mergerfs service:

felix@gemini:/etc/systemd/system$ cat mergerfs.service
[Unit]
Description=mergerFS Mount
Requires=rclone.service
After=rclone.service

[Service]
ExecStop=/usr/bin/sudo /usr/bin/fusermount -uz /gmedia


I was also testing with plexdrive, which seems about equal to vfs for me:

felix@gemini:~/scripts$ cat mergerfs_mount

# PlexDrive
#/usr/bin/mergerfs -o defaults,sync_read,allow_other,category.action=all,category.create=ff /data/local:/PD_decrypt /gmedia

# RClone
/usr/bin/mergerfs -o defaults,sync_read,allow_other,category.action=all,category.create=ff /data/local:/GD /gmedia

My mergerfs always writes to the first branch in that list, the second branch stays rw, and it doesn’t give me all the hidden-file clutter unionfs does that I don’t like, which is why I compiled and use mergerfs.
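If you want that behavior to be explicit, mergerfs accepts per-branch mode suffixes; here’s a sketch of the same mount with the rclone branch marked read-only (the thread’s actual command leaves both branches writable, so this is an optional policy choice):

```shell
# Sketch: same mergerfs mount, but /GD is explicitly read-only (=RO),
# so new files can only ever land on /data/local.
/usr/bin/mergerfs -o defaults,sync_read,allow_other,category.action=all,category.create=ff /data/local=RW:/GD=RO /gmedia
```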


Will give this a play tomorrow as well - I’m transferring a lot of files to the cache temp folder that I don’t want to interrupt. I’m sticking with ‘cache’ vs vfs as I find the temp folder easier to manage for uploading.

--dir-cache-time 24h and --cache-info-age 28h - did you drop these for testing purposes? I thought higher was better, especially if uploads are handled by the cache?

It really is just a preference of how long you want to keep the cache.

I realized that the API calls aren’t ever going to be high, so I dropped the larger numbers down, as a few more API hits aren’t going to harm anything.

I am not sure how you’d ever hit the daily quota for API hits as a single user:


I’ve just switched to vfs - how do you get plex to see new files that have been freshly uploaded via the rclone move job, or do you just wait for the cache to expire / manually flush?

I’ve used unionfs before with plexdrive to have instant access to files while they are queued for upload, but I’m struggling to see how to avoid having a gap while a file can be in the cloud, but the cache is out of date.

New files appear on the polling interval of 1 minute.


felix@gemini:~$ date
Fri Jun 29 13:31:12 EDT 2018
felix@gemini:~$ rclone copy /etc/hosts gcrypt:
felix@gemini:~$ date
Fri Jun 29 13:31:22 EDT 2018
felix@gemini:~$ cd /gmedia/
felix@gemini:/gmedia$ ls
mounted  Movies  Radarr_Movies	TV
felix@gemini:/gmedia$ date
Fri Jun 29 13:32:02 EDT 2018
felix@gemini:/gmedia$ ls
hosts  mounted	Movies	Radarr_Movies  TV
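That 1-minute window is rclone’s default change-polling interval; it’s tunable per mount with --poll-interval if you want new files to show up faster (a sketch, using the remote name and mount point from earlier in this thread):

```shell
# Sketch: poll the remote for changes every 30s instead of the 1m default.
rclone mount gcrypt: /GD --allow-other --dir-cache-time 48h --poll-interval 30s
```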

Excellent - just gets better and better. So --dir-cache-time only caches the directories, whilst the mount still checks every minute for new files?

One last question - does --cache-total-chunk-size still apply to vfs, and does it default to 10G?

Thanks for all the help so far.

Think of the dir-cache as covering the whole directory and its contents: it’ll expire the directory if the time runs out or a polling request comes in.

The mount polls your GD and listens for any ‘updates’ and will expire/refresh what’s needed based on the polling updates.

The cache-total-chunk-size/chunk-size/etc. flags aren’t used in VFS.

Thanks, so set dir-cache high, as in reality it doesn’t do much since the mount refreshes when there’s an update anyway.

Understood. So I just need to make sure --vfs-read-chunk-size-limit isn’t too high, so that too much memory isn’t used if I have a lot of concurrent plays.

--vfs-read-chunk-size-limit doesn’t use any memory at all. It just specifies which parts of a file get requested from the remote. Without --vfs-read-chunk-size, the whole file is requested instead.
If --vfs-read-chunk-size is used with a cache remote, it is useless, as the cache remote itself requests file parts.

--buffer-size specifies the amount of RAM used per open file for read-ahead caching.
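So the worst-case buffer memory is simply open files times --buffer-size; a quick sketch with hypothetical numbers:

```shell
#!/bin/sh
# Hypothetical: 8 concurrent streams with --buffer-size 100M can pin
# up to 800 MB of RAM in read buffers alone.
buffer_mb=100
streams=8
echo "worst-case buffer RAM: $(( buffer_mb * streams )) MB"
```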


Thanks - understood.