Recommended Best Practices for Plexdrive-Rclone


#1

I’ve tested out Plexdrive 4 and 5 on a few servers using the PlexGuide repo. I’ve got about 3TB of data that needs to be scanned before I can map it to Plex. I modified the rclone scripts a bit and they’re working perfectly. However, Plexdrive’s initial scan is REALLY slow: building the initial cache appears to process only about 1 file per minute. I’ve read in a few places that some people have figured out how to build the cache at 1TB an hour or more. I’m not sure how accurate that is, but I’d love to find out, because I’ve spent hours trying to figure it out and need help.

I’m using an 8-core/32GB RAM server right now, so it handles things quite well. I just don’t know if there is anything I can add to the commands when mounting the drive to complete the scan as fast as possible. Would somebody be kind enough to help me out and point me in the right direction, please?


#2

I use plexdrive 5, not 4. I build my cache on an SSD, and it really only takes 30-40 minutes for ~20TB or so.

Very basic mount:

/home/felix/scripts/plexdrive -o allow_other -v 2 mount /GD >>/home/felix/logs/plexdrive.log 2>&1 &
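If the initial cache build is the bottleneck, plexdrive 5 also exposes flags for where the cache lives and how chunks are handled. A hedged sketch of the same mount with the cache pointed at an SSD; the flag names are from plexdrive 5, but verify them against `plexdrive mount --help` on your build, and the paths and values here are illustrative, not a recommendation:

```shell
# Sketch only: same basic mount as above, with the bolt cache file
# placed on fast SSD storage and explicit chunk settings.
# /mnt/ssd/plexdrive is an assumed SSD path; adjust to your system.
/home/felix/scripts/plexdrive mount \
  -o allow_other \
  -v 2 \
  --cache-file=/mnt/ssd/plexdrive/cache.bolt \
  --chunk-size=10M \
  --max-chunks=100 \
  /GD >>/home/felix/logs/plexdrive.log 2>&1 &
```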

1 file per minute doesn’t seem right to me unless you have a very limited pipe. On top of that, I mount it via an encrypted rclone mount of the local plexdrive mount and use mergerfs to present a combined local + cloud mount to Plex.
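The layered setup described above might look roughly like this. Everything here is an assumption for illustration: `gcrypt:` stands for a crypt remote in rclone.conf that decrypts the plexdrive mount at /GD, and the /mnt paths are made up:

```shell
# Hypothetical layout: /GD is the plexdrive mount, "gcrypt:" is an
# rclone crypt remote configured over it, /mnt/local holds new media.

# Decrypt the plexdrive mount with rclone crypt (remote name assumed):
rclone mount gcrypt: /mnt/gd-decrypted --allow-other --read-only &

# Overlay local storage on the decrypted cloud mount with mergerfs,
# so Plex sees one combined library at /mnt/media:
mergerfs /mnt/local:/mnt/gd-decrypted /mnt/media -o defaults,allow_other
```

The point of the overlay is that new files land on local disk (fast for Plex to scan) while the bulk of the library stays in the cloud behind the same path.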


#3

While I only have about 600GB of media, it took only about 20 minutes to scan the whole thing with a standard Plexdrive mount and an encrypted rclone mount over it.

Something else is up with your setup, I’d say. Are you artificially limiting API calls with plexdrive or rclone in your mount commands/scripts?
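As one example of the kind of thing to look for: rclone has a `--tpslimit` flag that caps API transactions per second, and a low value in a mount script would throttle a Plex scan to a crawl. A sketch of what such a throttled mount might look like (remote name and paths are illustrative):

```shell
# A mount throttled like this would make directory listing during a
# Plex scan extremely slow; "remote:" and the mount point are assumed.
rclone mount remote: /mnt/cloud \
  --tpslimit 1 \
  --allow-other
```

If something like that is in your scripts, raising or removing the limit is the first thing to try.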