Guide to replacing plexdrive/unionfs with rclone cache

Do you see any logs that the 8 workers are being used and that the plex connection has been established?

Any other error logs?

I’m not sure if I’m looking in the right place, as I don’t have anything in the logs about the number of workers in use.

I can only see entries like this when I start a video:

2018/06/27 14:48:23 INFO : m3136um5f89cm00pb5902gfdt0/b486e6qo69rf10i9sui6pg04k4/07h637vebo98d5mh4ipvbfs7qs/313kvmcbd8b188ef0krd2u3rmdh6bo99l5iae98goa3vb6cttokgnhe7fa71tc5he0hhhc24a5td8bug1n12d52g8mj405v65v3ls58: confirmed reading by external reader
2018/06/27 14:48:29 INFO : m3136um5f89cm00pb5902gfdt0/b486e6qo69rf10i9sui6pg04k4/07h637vebo98d5mh4ipvbfs7qs/31065vqdgkk8vdvka4342t487j1cnkaivk82f765h1upn3nc1j811lg77o7i7bsmhpf643skcqa7nkir0dmfo1rqake58u175cbekv5o90c3t74bvtj3pqb9e1akaa7c: confirmed reading by external reader
2018/06/27 14:48:38 INFO : m3136um5f89cm00pb5902gfdt0/b486e6qo69rf10i9sui6pg04k4/07h637vebo98d5mh4ipvbfs7qs/31065vqdgkk8vdvka4342t487j1cnkaivk82f765h1upn3nc1j811lg77o7i7bsmhpf643skcqa7nkir0dmfo1rqake58u175cbekv5o90c3t74bvtj3pqb9e1akaa7c: confirmed reading by external reader
2018/06/27 14:48:40 INFO : m3136um5f89cm00pb5902gfdt0/b486e6qo69rf10i9sui6pg04k4/07h637vebo98d5mh4ipvbfs7qs/31065vqdgkk8vdvka4342t487j1cnkaivk82f765h1upn3nc1j811lg77o7i7bsmhpf643skcqa7nkir0dmfo1rqake58u175cbekv5o90c3t74bvtj3pqb9e1akaa7c: confirmed reading by external reader

This confirms that the Plex integration is working. Generally, it means the worker count has been increased to the maximum for that video.

Perhaps @Animosity022 can help with this further.

I’ve been doing some more testing and vfs consistently starts faster for me.

My current cache config looks like:

/usr/bin/rclone mount gmedia: /gmedia \
   --allow-other \
   --dir-cache-time 160h \
   --cache-total-chunk-size 10G \
   --cache-chunk-path /dev/shm \
   --cache-chunk-no-memory \
   --cache-chunk-size 10M \
   --cache-tmp-upload-path /data/rclone \
   --cache-tmp-wait-time 60m \
   --cache-info-age 168h \
   --cache-db-path /dev/shm \
   --cache-workers 6 \
   --buffer-size 512M \
   --log-level INFO \
   --syslog \
   --umask 002

I turned off any plex integration as I don’t want the delay in waiting for plex to confirm and up the workers.

I like using the cache tmp upload as it gives me flexibility in timing my uploads. I keep the cache chunks and cache db in memory in /dev/shm, and I turn off chunk memory usage since I want --buffer-size to grab as much as I configure, so I don’t get any pauses or delays in playing.
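One thing worth noting about the /dev/shm choice: it is a RAM-backed tmpfs, so the chunk store and db never touch disk, but they compete with everything else for memory. A quick sanity check before pointing the cache there (standard df usage):

```shell
# /dev/shm is a RAM-backed tmpfs; the 10G --cache-total-chunk-size has to
# fit there alongside everything else, so check capacity before using it:
df -h /dev/shm
```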

Cache generally seems to be slower, but my starts tend to be 10-15 seconds with my current settings for any movie/tv show.

Thanks. I’ve copied your settings almost exactly and I’m at around 12-14s average - that’ll do. Please share if you have any eureka moments.

I’ve noticed that different clients launch at different speeds, e.g. my Nexus 6P seems to be much faster than the web client, whereas Android TV seems very slow.

One thing to remember is that, depending on the client you are using, if it has to transcode rather than direct play, it’ll take longer to start up. I try to do all my testing on clients that can Direct Play. Infuse on iOS and on my ATV direct plays just about everything.

I’ve been doing some more testing, and I do get files to start faster by limiting a few settings and making some tweaks.

Turning cache chunk memory off makes things 1-2 seconds slower in general from my testing so I turned it back on.

Rather than having a huge buffer, I tweaked it down to 100M as that seemed like a sweet spot.

This gave me a consistent 4-5 seconds to media info on any file, rather than 7-10 seconds with the other settings:

/usr/bin/rclone mount gmedia: /Test \
   --allow-other \
   --dir-cache-time 24h \
   --cache-total-chunk-size 5G \
   --cache-chunk-path /dev/shm \
   --cache-chunk-size 10M \
   --cache-tmp-upload-path /data/rclone \
   --cache-tmp-wait-time 60m \
   --cache-info-age 28h \
   --cache-db-path /dev/shm \
   --cache-workers 5 \
   --buffer-size 100M \
   --log-level INFO \
   --log-file /home/felix/logs/rclone-test.log \
   --umask 002 

If you want to get something even a little faster, I have a unionfs/mergerfs type mount where I copy to my local drive and than rclone move stuff up via a cron job.

This looks like:

felix@gemini:/etc/systemd/system$ cat rclone.service
[Unit]
Description=RClone Service

[Service]
ExecStart=/usr/bin/rclone mount gcrypt: /GD \
   --allow-other \
   --dir-cache-time 48h \
   --cache-dir /data/rclone \
   --vfs-read-chunk-size 10M \
   --vfs-read-chunk-size-limit 512M \
   --buffer-size 100M \
   --syslog \
   --umask 002 \
   --rc \
   --log-level INFO
ExecStop=/bin/fusermount -uz /GD


My mergerfs script:

felix@gemini:/etc/systemd/system$ cat mergerfs.service
[Unit]
Description=mergerFS Mounts
After=rclone.service
Requires=rclone.service

[Service]
ExecStop=/usr/bin/sudo /usr/bin/fusermount -uz /gmedia


I was also testing with plexdrive, which seems about equal to vfs for me:

felix@gemini:~/scripts$ cat mergerfs_mount

# PlexDrive
#/usr/bin/mergerfs -o defaults,sync_read,allow_other,category.action=all,category.create=ff /data/local:/PD_decrypt /gmedia

# RClone
/usr/bin/mergerfs -o defaults,sync_read,allow_other,category.action=all,category.create=ff /data/local:/GD /gmedia

My mergerfs always writes to the first item in that list, the second item is still rw, and it doesn’t give me all the unionfs hidden-file stuff I don’t like, which is why I compiled and use mergerfs.
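The “rclone move via cron” approach can be sketched as a single crontab line. This is a hypothetical example - the schedule, paths, and remote name are assumptions based on the mount flags shown earlier:

```
*/30 * * * * /usr/bin/rclone move /data/local gcrypt: --min-age 15m --log-level INFO --syslog
```

--min-age keeps files that are still being written local for a while, and mergerfs keeps serving them from /data/local until the move completes.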


I’ll give this a play tomorrow as well - I’m transferring a lot of files to the cache temp folder that I don’t want to interrupt. I’m sticking with ‘cache’ vs vfs as I find the temp folder easier to manage for uploading.

--dir-cache-time 24h and --cache-info-age 28h - did you drop these for testing purposes? I thought higher was better, especially if uploads are handled by the cache?

It really is just a preference of how long you want to keep the cache.

I realized that the API calls aren’t ever going to be high, so I dropped the larger numbers down, as a few more API hits are not going to harm anything.

I am not sure how you’d ever hit the daily quota for API hits as a single user:
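For a rough sense of scale, the change polling alone is one request a minute, which works out to:

```shell
# One change-notification poll per minute, all day:
echo $((60 * 24))   # prints 1440 - requests per day
```

Plex scans and directory refreshes add more on top, but nothing remotely close to a daily quota.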


I’ve just switched to vfs - how do you get plex to see new files that have been freshly uploaded via the rclone move job, or do you just wait for the cache to expire / manually flush?

I’ve used unionfs before with plexdrive to have instant access to files while they are queued for upload, but I’m struggling to see how to avoid having a gap while a file can be in the cloud, but the cache is out of date.

New files appear on the polling interval of 1 minute.


felix@gemini:~$ date
Fri Jun 29 13:31:12 EDT 2018
felix@gemini:~$ rclone copy /etc/hosts gcrypt:
felix@gemini:~$ date
Fri Jun 29 13:31:22 EDT 2018
felix@gemini:~$ cd /gmedia/
felix@gemini:/gmedia$ ls
mounted  Movies  Radarr_Movies	TV
felix@gemini:/gmedia$ date
Fri Jun 29 13:32:02 EDT 2018
felix@gemini:/gmedia$ ls
hosts  mounted	Movies	Radarr_Movies  TV

excellent - just gets better and better. So --dir-cache-time only caches the directories, whilst the mount still checks every minute for new files?

One last question - does --cache-total-chunk-size still apply to vfs and defaults to 10G?

Thanks for all the help so far.

Think of dir-cache as covering the whole directory and its contents; the entry expires when the time runs out or a polling request comes in.

The mount polls your GD and listens for any ‘updates’ and will expire/refresh what’s needed based on the polling updates.

The cache-total-chunk-size/chunk-size/etc. flags aren’t used in VFS.

Thanks, so set dir-cache high, as in reality it doesn’t do much since it updates whenever there’s a change on the remote.

Understood. So I just need to make sure --vfs-read-chunk-size-limit isn’t too high to make sure that too much memory isn’t used if I have a lot of concurrent plays.

--vfs-read-chunk-size-limit doesn’t use any memory at all. It just specifies which parts of a file get requested from the remote. Without --vfs-read-chunk-size the whole file will be requested instead.
If --vfs-read-chunk-size is used with a cache remote, it is useless, as the cache remote itself will request file parts.

--buffer-size specifies the amount of RAM used per open file for read-ahead caching.


Thanks - understood.

Just to clarify.

You use vfs for writing TO gdrive,
but a regular cache remote for Plex to READ from?

vfs or cache - it’s one or the other. I use vfs for reading and an rclone move script for uploads.

thanks, I see you went back to the cache for testing purposes.

Can’t wait till the ATV comes

What’s ATV? Is it a new feature?

I asked the same thing…

Apple TV =P
