I used the cache backend when it was first released, but it didn't suit my needs at the time, so I continued using Plexdrive. Now that it can detect changes on GDrive, I've done some testing on it again and it seems to be working fine. I particularly like the write feature.
I would like to move from Plexdrive and UnionFS to only using Rclone with Cache. Here’s my current setup.
Hetzner 8TB EX41 (Ubuntu 16.04 LTS)
- Used for Cached Gdrive.
- .media (local) and media (rclone crypt)
unionfs-fuse -o cow,allow_other,direct_io,auto_cache,sync_read
- I store all the most recent media on the local drive until it's 95% full. I run an hourly script that checks disk usage and deletes everything older than a month to free up space. I do this because most of my friends and family watch the most recently uploaded items first, and performance is much better this way (it also uses less bandwidth).
- Decrypt Gdrive Crypt (media)
- Mount options:
--read-only --allow-other --buffer-size 500M -v
- Upload media to Gdrive
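For reference, the hourly cleanup mentioned above looks roughly like this (paths and the exact threshold are simplified placeholders, not my actual script):

```shell
#!/bin/bash
# Hypothetical hourly cleanup sketch: LOCAL is a placeholder for my
# unionfs local branch (.media); adjust the path and threshold to taste.
LOCAL=/home/user/.media

# Current usage of the filesystem holding the local branch, as a bare number.
USAGE=$(df --output=pcent "$LOCAL" | tail -1 | tr -dc '0-9')

if [ "$USAGE" -ge 95 ]; then
    # Drop local copies older than 30 days; they still exist on the
    # crypt remote, so nothing is actually lost.
    find "$LOCAL" -type f -mtime +30 -delete
fi
```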
So here’s what I’m thinking of for the new setup:
- Rclone mount (Gdrive -> Cache -> Crypt)
- Go back to using Sonarr instead of Filebot (Sonarr will rename and move directly to the Crypt drive)
- Filebot doesn’t handle REPACKS well
- Rednoah wants me to pay per release
- Set Plex to scan every 2 hours
- What's the best way to implement something similar to my UnionFS setup with Cache?
- To avoid exceeding API thresholds, I set up a script that checks for new files every hour. With Cache, would you say it's safe to let Plex scan automatically every 2 hours?
- Currently I have a speed limit on Rclone uploads set to 8M. If I’m going to use ‘write’ is there a way to limit the upload speed?
- Should I use offline uploading instead? I'm worried that normal writes on Cache don't have retries. However, all my uploads so far (with normal writes) have worked without issue.
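To make the questions above concrete, here's the kind of remote chain and mount I have in mind for the new setup. The remote names and paths are just placeholders, and I'm not certain `--bwlimit` is the right way to cap cache writes, which is partly why I'm asking:

```shell
# Hypothetical ~/.config/rclone/rclone.conf layering (Gdrive -> Cache -> Crypt):
#   [gdrive]  type = drive   (plus OAuth credentials)
#   [gcache]  type = cache   remote = gdrive:media
#   [gcrypt]  type = crypt   remote = gcache:

# Mount the crypt layer. --bwlimit is a global rclone flag, so my
# assumption is it would also throttle uploads written through the mount.
# --cache-tmp-upload-path enables the cache backend's offline uploading,
# which is the alternative I'm asking about.
rclone mount gcrypt: /home/user/media \
  --allow-other \
  --buffer-size 500M \
  --bwlimit 8M \
  --cache-tmp-upload-path /home/user/.cache-upload &
```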
I know this is a long post but I’d really appreciate any help I can get. Thanks!