Up-to-date best practice and whether or not to use PlexDrive

I’ve been using RClone for about a month and I really like the idea of it, but I seem to find a different guide everywhere I look for guidance.
The first guide I used was the unRAID guide over at laubacher.io, but after reading this post by Animosity022 I’m more interested in the built-in cache alternative, as I’d like to reduce the number of dependencies for a complete setup.

I’m hosting a Plex server on my unRAID box over a 250/100 Mbps connection. My media resides on a 16 TB array, while Sonarr/Radarr downloads land on my 500 GB SSD cache and are then moved to the array.
I’ve set up RClone and created/mounted GDrive directories according to Animosity022’s first few posts, but I don’t understand how the --cache-tmp-upload-path flag is intended to work: I’ve copied a single large media file into that folder and waited a few hours, but the file has not been uploaded to the specified Drive directory.
Perhaps I’ve misunderstood the cache feature, but I want all new files to be moved after an amount of time, without Plex suffering from missing files, if Plex, Sonarr and Radarr all point to the same mount point, which in this case would be /gmedia.

The newer version of unRAID is complaining about the UnionFS plugin, which has me worried that my current setup from Laubacher’s guide will soon be unusable.

Can anyone point me in the right direction? What am I missing?


cache-tmp-upload works with the cache backend, which isn’t what my setup uses.

I use mergerfs and a script to upload because, for me, that’s a better-working solution.
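As a rough sketch of that kind of setup (all paths, remote names and options here are assumptions, not the actual script), the idea is a mergerfs pool over a local disk and the rclone mount, plus a cron job that moves aged files to the remote:

```shell
# Pool a local staging directory over the rclone mount; new writes land local-first
# (category.create=ff = create files on the first branch, i.e. the local disk)
mergerfs /mnt/local:/mnt/gdrive /gmedia \
  -o rw,use_ino,func.getattr=newest,category.create=ff

# Cron job: move files that have sat locally for a day up to the crypt remote
rclone move /mnt/local gcrypt: \
  --min-age 1d \
  --log-level INFO
```

Plex, Sonarr and Radarr all point at /gmedia, so files keep the same path whether they currently live locally or on the Drive.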

I have a link to my github where I tried to document my reasoning and why I used things.

For me, the cache was a little slower overall, but it worked without issue. I didn’t find the tmp-upload necessary.

Nevertheless, you’d have to use the cache backend to use cache-tmp-upload. It works by giving you a temporary spot to store files until they are uploaded after the period you’ve configured.
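For reference, the chain of remotes this approach assumes looks roughly like the following (remote names and paths are placeholders, not a copy of anyone’s actual config):

```
[gdrive]
type = drive

[gcache]
type = cache
remote = gdrive:media
tmp_upload_path = /data/rclone_upload
tmp_wait_time = 60m

[gcrypt]
type = crypt
remote = gcache:
```

Mounting gcrypt: (or a remote layered on the cache) then routes new writes through /data/rclone_upload before they land on the Drive.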

I’m a fellow unRAID user - have you tried using unionfs via Nerd Pack? I haven’t had any problems with that, although I want to move to mergerfs as unionfs doesn’t support hardlinks.


Thanks! I didn’t actually know I could find it there.

Is there any chance you can share the GitHub link with me?

Just to clarify regarding cache-tmp: do I need to configure anything besides my three remotes (gdrive -> cache -> crypt) and the way I mount /gmedia in order for the backend to automatically upload from /data/rclone_upload to GDrive after the 60-minute interval, or is the below sufficient?

rclone mount gm: /gmedia \
--buffer-size 500M \
--umask 002 \
--cache-tmp-upload-path /data/rclone_upload \
--cache-tmp-wait-time 60m \
--log-level INFO

My first guess was that I needed to put files to be uploaded in the /data/rclone_upload folder, but as that didn’t seem to work, I guess (perhaps the wrong choice of words) that files put into /gmedia (not via rclone move) are temporarily held in /data/rclone_upload for 60 minutes until they’re uploaded to my GDrive crypt. Am I right?

With cache-tmp-upload, you just copy to the normal rclone mount point you have set up. It uses the tmp-upload path as a staging area and automatically uploads the files when the time expires. So in your case, it would wait 60 minutes and then upload.

The chunk-size of 5M is very small; I’d change that to at least 32M. Buffer-size should be 0M since you are using the cache backend.
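Folding those two suggestions into the mount command quoted above would give something like this (same placeholder remote and paths as before; a sketch, not a tested config):

```shell
rclone mount gm: /gmedia \
  --buffer-size 0M \
  --cache-chunk-size 32M \
  --umask 002 \
  --cache-tmp-upload-path /data/rclone_upload \
  --cache-tmp-wait-time 60m \
  --log-level INFO
```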


Perhaps you’d like to share how you’ve set up RClone under unRAID and why?
What I’d really like is to have data moved from my cache to my array, and then after a set amount of time (say, 90 days) moved on to GDrive without breaking paths in Plex, Radarr & Sonarr.

That way I’d keep my users’ requests local, with high availability and no latency, for just long enough before moving the data into my “archive”, which would be GDrive.

Here you go! https://forums.unraid.net/topic/57576-plexdrive/?page=6&tab=comments#comment-673290

There are a few more checks in my install and upload script that you might not need, but my upload script tackles the problem of uploading the majority of new files before a ‘wasted’ move to the array.

I ditched the cache and moved to the vfs mount: even though it’s not an all-in-one solution, the playback experience and the upload functionality are much better.

With a vfs mount I don’t think your users will tell the difference - for me, the difference between local and remote playback on my 200/200 connection is so slight that I don’t even bother checking anymore.

Thanks a lot!
I’m trying to figure out which remotes you use. I don’t see VFS as a type of remote when I attempt to create a new one through rclone config.

VFS isn’t a type of remote as it’s just a normal rclone config. If you aren’t using the cache backend, you are using the vfs.
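For example, a plain vfs-style mount of a crypt remote might look like this (remote name and flag values are illustrative, not a recommendation):

```shell
rclone mount gcrypt: /gmedia \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit off \
  --buffer-size 64M \
  --dir-cache-time 72h \
  --log-level INFO
```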

In my case, either an unencrypted GDrive remote or an encrypted remote based on the GDrive remote, mounted with the VFS flags as stated in both your and BinsonBuzz’s guidelines. Am I right?

I’ve tried out the cache remote’s wait-and-upload capability and I must say I like it, but I haven’t gotten around to streaming from a cached share yet. If it proves bad in my situation, I’ll move over to using VFS.

I’m getting an error when running rclone lsd gmedia:, which is the crypt remote in my cache remote. Error message:
rclone lsd gmedia:
2018/10/12 20:55:32 ERROR : /root/.cache/rclone/cache-backend/gcache.db: Error opening storage cache. Is there another rclone running on the same remote? failed to open a cache connection to "/root/.cache/rclone/cache-backend/gcache.db": timeout
No other remote is mounted and there are no current transfers. Any idea what this could be?

Thanks a lot!

That means you have another rclone running.

Try checking with ps -ef | grep rclone

to see if there is another process running; if so, kill it.
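A minimal sketch of that check (generic shell, nothing unRAID-specific):

```shell
# Look for a process whose name is exactly "rclone"
if pgrep -x rclone > /dev/null; then
    echo "rclone is still running; kill it before retrying"
else
    echo "no rclone process found"
fi
```

If one shows up, killing it (kill, or kill -9 as a last resort) should free the lock on the cache database.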