Guide to replacing plexdrive/unionfs with rclone cache

My wording was a bit poor.

For the rclone.conf

1st entry is your Google Drive which is your starting point and where you configure the client auth/etc.

2nd entry is the cache spot. The recommended method is to point to a folder inside of your Google Drive.

So mine is GD:media; my media folder is just a normally named folder with no encryption. You can call it whatever you want, as most folks have an encrypted starting point to store their media.

My 3rd entry is my encrypted remote, which points back to the type=cache entry (#2 in this example) and encrypts all my media/filenames/directory names/etc.

My mount command uses the 3rd entry; that is what gets mounted via rclone mount.
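As a minimal sketch, the three-entry chain described above would look something like this in rclone.conf (the remote names GD, gcache and gmedia are placeholders, and the auth details are omitted):

```ini
# Entry 1: the Google Drive remote (client auth configured via rclone config)
[GD]
type = drive

# Entry 2: the cache, pointing at a folder inside the drive remote
[gcache]
type = cache
remote = GD:media

# Entry 3: the crypt layer on top of the cache; this is what gets mounted
[gmedia]
type = crypt
remote = gcache:
```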

Can you run something like ‘mediainfo’ against the file using the same user that Radarr is running as?

yeah I get the full mediainfo detail when I run that

ps aux

chrisha+ 5655 12.4 0.0 1804236 193820 ? Sl 21:26 4:38 /usr/bin/cli /home/***/***/apps/Radarr/Radarr.exe

Running as my user.

Looks like one for the Radarr forums.

Edit: I just deleted those Carry On films and Basket Case and it’s fired into action again… weird, as they were all dated. Radarr: great for snatching, terrible for scraping…

I haven’t purged my cache after each chunk size change. I wonder if that was throwing some of the errors when I increased the size.

I added the --cache-db-purge to my systemd service start parameters just for this very reason. I figure I will restart it infrequently enough that it really shouldn’t be an issue by doing it each time the service starts.

@YipYup - The cache-db-purge doesn’t handle chunk size changes. You need to clean the chunk directory if you make changes to the chunk size.
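Clearing the chunk directory after a chunk size change would look something like this; the path shown is an assumption based on rclone's default --cache-chunk-path for a cache remote named "gcache", so adjust it to match your own config:

```shell
# Default chunk store for a cache remote named "gcache"; this path assumes
# rclone's default --cache-chunk-path ($HOME/.cache/rclone/cache-backend).
CHUNK_DIR="$HOME/.cache/rclone/cache-backend/gcache"

# Stop the mount first (e.g. systemctl stop rclone), then drop the old chunks;
# they get re-fetched at the new chunk size on the next read.
rm -rf "$CHUNK_DIR"
```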

Can I ask why you have your Plex info on your 2nd entry? Wouldn’t you want that on your 3rd entry, which Plex can read?

The plex integration feature works with the cache and that’s why it goes there.

You’d see something like this in the logs to show it’s working but it is the encrypted names since I use encryption:

May 14 21:52:35 gemini rclone[3256]: smu5ej34ujbdoip1cm3mlk92q4/lprfoi8lkc2951vbhcos3sfehc/asqtmp8t491khg23v0mjn4c94uflce9fsocbks8gge6g41evbumnb2it1b3bt61b1tos1giactca6: confirmed reading by external reader
May 14 22:01:42 gemini rclone[3256]: smu5ej34ujbdoip1cm3mlk92q4/fqiaqsujuvnacu2gcicf30faedc2vvvjm0qldof71lovvqv3fnbodpb17mhm9v95jsk36g9k8rdmk/tkrhg5bv1ntslv5eub5vr4qpiq8va7n38n207v1v8m4ss8paqr46isd2pg4h8savls5c466nc7l3c: confirmed reading by external reader
May 14 22:03:20 gemini rclone[3256]: smu5ej34ujbdoip1cm3mlk92q4/lprfoi8lkc2951vbhcos3sfehc/58g2da9udu1b45mpcu0t065aghls5ri7t2s0s2tmr341cbl9dukcs8523gs30btqv7llqd2vagfde: confirmed reading by external reader

Let me clarify.

You have a gdrive. Using rclone, you created a gdrive mount (normal) of your ‘media’ folder. You then created a gdrive cache mount of your ‘media’ folder. And then you created an encrypted mount of your media folder inside the cache mount?

I am a bit confused with this.

Would your first mount not have to be encrypted to hide the information from google?

Have you never hit your API limit? I seem to hit it all the time when I use Radarr/Sonarr and need to scan/update/add things from scratch. I would hope to avoid this.

I am using plexdrive and that seems to work, but the problem ends up being that sometimes the API limit is breached and then I cannot use plexdrive to play, due to Sonarr/Radarr.

Any help you can provide?

I have analyze files off in both Sonarr/Radarr so I don’t use that.

My config is as follows:

[felix@gemini ~]$ cat .rclone.conf

[GD]
type = drive
client_id = client
client_secret = secret
token = {"access_token":"token","token_type":"Bearer","refresh_token":"token","expiry":"2018-05-19T15:03:47.495001225-04:00"}

[gcache]
type = cache
remote = GD:media
chunk_total_size = 32G
plex_url =
plex_username = username
plex_password = password
plex_token = token

[gmedia]
type = crypt
remote = gcache:
filename_encryption = standard
password = password
password2 = password
directory_name_encryption = true

Mount / Systemd

[felix@gemini ~]$ cat /etc/systemd/system/rclone.service
[Unit]
Description=RClone Service

[Service]
ExecStart=/usr/bin/rclone mount gmedia: /gmedia \
   --allow-other \
   --dir-cache-time=160h \
   --cache-chunk-size=10M \
   --cache-info-age=168h \
   --cache-workers=5 \
   --cache-tmp-upload-path /data/rclone_upload \
   --cache-tmp-wait-time 60m \
   --buffer-size 0M \
   --syslog \
   --umask 002 \
   --rc \
   --log-level INFO
ExecStop=/usr/bin/sudo /usr/bin/fusermount -uz /gmedia


I use a GD -> Cached Encrypted -> Decrypted mount and the plex integration. You have to configure that on the cache entry by using rclone config: you enter the username/password and it generates the token once it connects.

Never hit an API limit in plexdrive or rclone (minus a bad config not using the cache).

Thanks for the reply.

So with analyze files off, have you had any trouble with naming files? Does this really add that many more API hits?

I think the problem is that it’s too many API hits so fast that it bans me for 24 hours, as I am sure I didn’t max out the API quota.

so my config file is

[gdrive-decrypt]
type = crypt
remote = /mnt/disks/plexdrive/secure
filename_encryption = standard
directory_name_encryption = true
password = <removed>
password2 = <removed>

[gdrive-secure]
type = crypt
remote = gdrive:secure
filename_encryption = standard
directory_name_encryption = true
password = <removed>
password2 = <removed>

and these are my rclone settings

rclone mount --max-read-ahead 1024k --allow-other --allow-non-empty gdrive: /mnt/disks/gdrive &
rclone mount --max-read-ahead 1024k --allow-other --allow-non-empty gdrive-secure: /mnt/disks/gdrive-secure &
plexdrive mount -c /mnt/user/appdata/plexdrive -o allow_other /mnt/disks/plexdrive/ &
rclone mount --max-read-ahead 1024k --allow-other --allow-non-empty gdrive-decrypt: /mnt/disks/gdrive-decrypt &

So as you can see I have the secure mount going through the plexdrive mount to “limit” the API hits, which works, but it doesn’t have write access, so I had to use the non-plexdrive mount with Sonarr/Radarr, which causes the API issues.

But I am assuming the cache mount, set up properly, would allow me to NOT use the plexdrive mount and hopefully speed things up.

So what would my settings be for mounting?

[gcache]
type = cache
remote = gdrive:
chunk_total_size = 32G
plex_url = https://localhost:32400
plex_username = username
plex_password = password
plex_token = token

[gdrive-secure]
type = crypt
remote = gdrive:secure CHANGED to gcache:
filename_encryption = standard
directory_name_encryption = true
password =
password2 =

My config and mount are just a post above.

I use GD->Cache -> Decrypted and mount the decrypted entry.

You need to configure the plex integration through rclone config to ensure it works properly; I wouldn’t paste the values in by hand.

If you don’t use the cache, you will get banned. The cache is needed if you are using GD.



It made more sense once I went through it. Just did a “test” mount and I see it, but I cannot test any “limits” on it just yet.

Few questions while I am waiting.

First, why do you have a “tmp” folder for rclone? (Is this stored locally where the mount is, or on the gdrive?)
Second, why do you have umask 002 instead of 777 (or whatever opens it up to anyone), or rather no umask at all?

EDIT: Also, I don’t remember it asking for a Plex token when I added the name and password; can I manually add this in later?

I like to force the permissions for how I want.

002 gives directories rwxrwxr-x (and files rw-rw-r--) rather than full permissions.
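A quick way to see what umask 002 actually yields, sketched with plain coreutils:

```shell
# Demonstrate the permissions umask 002 produces: new directories come out
# 775 (rwxrwxr-x) and new files 664 (rw-rw-r--).
demo=$(mktemp -d)
(
  umask 002
  mkdir "$demo/d"
  touch "$demo/f"
)
stat -c '%a %n' "$demo/d" "$demo/f"   # prints 775 for d, 664 for f
rm -rf "$demo"
```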

For Plex, you only configure the first 2 items; once it connects, it gets a token.


Another followup. I think this is where your “tmp rclone” folder helps.

I’ve noticed with the cached rclone that upload through that mount is minimal compared to what I should be achieving. Is this a limitation of the cache backend or what?

Only getting about 20 Mbit/s, or 2-3 MB/s, on a 1 Gbit line.
But then I can start another transfer and essentially double it.

Is that where your tmp folder comes into play?

How do you transfer your data? Do you use Sonarr/Radarr to transfer it, or some other method?

Sonarr and Radarr write directly to my rclone mount.

The data stays in the cache-tmp folder for 60 minutes until it uploads. To be honest, I’ve never watched the upload, but I’m guessing it isn’t super fast, as it just uses 1 worker if I’m not mistaken.

My goal was to keep things as simple as possible, with the fewest scripts/additions compared to having a local mount.


Got it, makes sense. Yeah, if I am not watching it work, it doesn’t matter. But I had just tested it with about 500GB+ of files and watched it upload very slowly.

Thanks for all your fast and prompt replies. I too would rather it go a bit slower and wait an extra 30-45 minutes for uploads than have it hit API bans and have to wait 24 hours!

@Animosity022 thanks for your useful posts. I finally made the switch this week and everything ‘works’ apart from two problems.

Problem 1: it’s taking about 20-45 seconds for files to start playing.

I’ve pretty much copied your setup, so I’m not sure why my times are so bad. My setup is an unRAID server with a E5-2683V3, 64GB ram, running Plex in a docker on a SSD - so I should have enough power. I also got a 200/200 internet connection this week upgraded from 18/1, which is why I’m trying to get this working fully.

Is there something glaringly wrong with my config?

Update: I’m wondering if it’s because I’m playing files that have just been uploaded, so the cache isn’t fully populated???

rclone mount --allow-other --dir-cache-time=160h --cache-chunk-size=10M --cache-info-age=168h --cache-workers=6 --cache-tmp-upload-path /mnt/user/rclone_upload --cache-tmp-wait-time 1m --buffer-size 0M --log-level INFO gdrive_media: /mnt/disks/google_media

[gdrive]
type = drive
client_id =
client_secret = REDACTED
scope = drive
root_folder_id = 
service_account_file = 
token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"2018-06-14T19:04:01.421796372+01:00"}

[cache]
type = cache
remote = gdrive:crypt
plex_url =
plex_username = Binson_Buzz
plex_password = REDACTED
chunk_size = 10M
info_age = 48h
chunk_total_size = 32G
plex_token = REDACTED

[gdrive_media]
type = crypt
remote = cache:
filename_encryption = standard
directory_name_encryption = true
password = REDACTED
password2 = REDACTED

Problem 2: Adding files to local temporary path very, very slow

I’m also using the cache-tmp-upload-path feature, but my writes to the temp store are ultra-slow - 2-3MB/s. Do you have this problem as well?

Thanks in advance

In my testing, I can get a 60GB 4K movie, or any size movie for that matter, to start in roughly 5-6 seconds.

I made some changes to my config: when I’m using the cache, I just use memory for everything and purposely don’t put anything on disk:

ExecStart=/usr/bin/rclone mount gmedia: /GD \
   --allow-other \
   --dir-cache-time 72h \
   --cache-chunk-path /dev/shm \
   --cache-chunk-no-memory \
   --cache-chunk-size 10M \
   --cache-info-age 72h \
   --cache-db-purge \
   --cache-workers 6 \
   --buffer-size 0M \
   --syslog \
   --umask 002 \
   --rc \
   --log-level INFO

I have 32GB of memory on my box so my /dev/shm is 16GB and I cap it at 10GB for max size. Depending on what your disk is and how fast/slow it is, that may make the start times slower.

For #2, the uploads are only going to use 1 worker, so they are pretty slow with the plex integration. You can either just wait for them to finish, remove the plex integration and use a higher cache-workers count, or use something like mergerfs to keep files local and rclone move them at a later date with your own command.
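For the mergerfs/keep-it-local route, the later upload could be a scheduled rclone move. This is only a sketch: the local path, the remote name (gcrypt:), the log file, and the schedule are all placeholders, not anyone's actual setup:

```shell
# Hypothetical cron entry (crontab -e): nightly, move everything staged in a
# local folder up to the crypt remote, then prune the emptied directories.
# "/data/local_media" and "gcrypt:" are placeholders for your own paths.
30 3 * * * /usr/bin/rclone move /data/local_media gcrypt: --delete-empty-src-dirs --log-level INFO --log-file /var/log/rclone-move.log
```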

If you are going for quicker start times and don’t mind the mergerfs/unionfs/rclone move scenario, I found that with vfs-read-chunk-size, any movie starts in ~1-2 seconds for me.

So for that, I remove the cache altogether and just mount the encrypted filesystem:

ExecStart=/usr/bin/rclone mount gcrypt: /GD \
   --allow-other \
   --dir-cache-time 96h \
   --vfs-cache-max-age 48h \
   --vfs-read-chunk-size 10M \
   --vfs-read-chunk-size-limit 100M \
   --buffer-size 1G \
   --syslog \
   --umask 002 \
   --rc \
   --log-level INFO

Downside with that command is that each open file can use up to 1G of buffer, so you could run out of memory; depending on your system, you may want to tweak that number down. I’ve got a pretty good grasp of how my Plex/Sonarr/Radarr setup works, so that number doesn’t bother me.
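The memory math behind that caution is just concurrent streams times buffer size; as a back-of-the-envelope sketch (the 6-stream figure is an arbitrary example, not from the post above):

```shell
# Worst-case read-buffer memory is roughly (concurrent streams) x (--buffer-size).
# 6 streams is an illustrative assumption, not a measured number.
streams=6
buffer_gb=1
echo "${streams} streams x ${buffer_gb}G buffer = up to $((streams * buffer_gb))G of RAM"
```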


Thanks - very helpful

--cache-db-purge - does purging mean the cache db has to be rebuilt at each mount?

I didn’t specify --cache-chunk-path, so I think this is one of the reasons I’m slow. Is this similar to Plex transcoding to RAM?

How much RAM on average does each stream take up? Is /dev/shm your RAM? If so, my equivalent is

--cache-chunk-path /tmp/

/tmp/ is what I use to transcode Plex to RAM

I’ve bumped up my buffer-size from 0M to 500M - that should help!

I’m going to try this as soon as I can restart and see how I get on. I’m pointing my cache-db-path to my SSD so I can see what’s going on:

rclone mount --allow-other --dir-cache-time=72h --cache-db-path=/mnt/cache/rclone_cache --cache-chunk-path /tmp/ --cache-chunk-no-memory --cache-chunk-size=10M --cache-info-age=6h --cache-workers=6 --cache-writes --cache-tmp-upload-path /mnt/user/rclone_upload --cache-tmp-wait-time 30m --buffer-size 500M --rc --log-level INFO gdrive_media: /mnt/disks/google_media

For #2, are you saying that the plex integration slows down uploading via the cache?

I’m wondering if my old

--cache-tmp-wait-time 1m

was causing problems, i.e. it’s trying to encrypt and upload at the same time? I’ve changed this to 30 mins to see if this helps.

In the interim, I’ve been uploading via a separate rclone move job. This is working well for manual bulk uploads, but I need to get the cache upload working smoothly once Radarr/Sonarr etc. start adding new stuff to be uploaded.