Guide to replacing plexdrive/unionfs with rclone cache

I’ve been using rclone/plexdrive/unionfs, mainly following this guide: https://hoarding.me/.

I want to move to just rclone cache, but I’m reading lots of posts and none seem to have a definitive answer, as they are troubleshooting threads rather than guides.

Is there an end-to-end guide anywhere that I can use as a starting point?

Thanks

My GDrive configuration is as follows:

Google Drive -> Cached Encrypted Remote -> Decrypted Remote that is mounted on my OS

[GD]
type = drive
client_id = client_id_key
client_secret = client_secret
token = {"access_token":"access_token_key","expiry":"2018-03-30T09:15:29.049443547-04:00"}

[gcache]
type = cache
remote = GD:media
plex_url = http://127.0.0.1:32400
plex_username = Username
plex_password = Password
plex_token = Token
chunk_total_size = 32G

[gmedia]
type = crypt
remote = gcache:
filename_encryption = standard
password = cryptpassword
password2 = cryptpassword
directory_name_encryption = true

I use /gmedia for my mountpoint and that contains a directory for Movies and TV shows.

[felix@gemini gmedia]$ ls -al
total 0
drwxrwxr-x 1 felix felix 0 Apr 19 2017 Movies
drwxrwxr-x 1 felix felix 0 Apr 18 2017 TV

My main user on my box is ‘felix’, and my ‘plex’ user is part of the ‘felix’ group, so the plex user can access all my files via the group permissions I have set on the rclone mount.
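
For reference, a minimal sketch of that setup, assuming standard tools and the user/group names above (the --umask 002 on the mount keeps entries group-accessible):

sudo usermod -aG felix plex   # add the plex user to the felix group
id plex                       # confirm the membership took effect
sudo -u plex ls /gmedia       # sanity check that plex can read the mount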

My systemd startup:

[felix@gemini system]$ cat rclone.service
[Unit]
Description=RClone Service
AssertPathIsDirectory=/home/felix
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount gmedia: /gmedia \
   --allow-other \
   --dir-cache-time=160h \
   --cache-chunk-size=10M \
   --cache-info-age=168h \
   --cache-workers=5 \
   --buffer-size=500M \
   --attr-timeout=1s \
   --syslog \
   --umask 002 \
   --rc \
   --cache-tmp-upload-path /data/rclone_upload \
   --cache-tmp-wait-time 60m \
   --log-level INFO
ExecStop=/usr/bin/sudo /usr/bin/fusermount -uz /gmedia
Restart=on-abort
User=felix
Group=felix

[Install]
WantedBy=default.target
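
Getting the unit running is the usual systemd routine; a sketch, assuming the file sits in /etc/systemd/system:

sudo systemctl daemon-reload            # pick up the new/changed unit
sudo systemctl enable rclone.service    # start on boot
sudo systemctl start rclone.service
systemctl status rclone.service         # confirm the mount came up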

I have 32GB of memory on my system and not much else running, so 500M per opened file hasn’t caused any problems for me. At most I have 8 streams going at a single time.

I have a local disk that temporarily holds my items for an hour before uploading them to my GD. That is located at /data/rclone_upload, my temporary area. Files move based on time and not size, so it’s always good to keep an eye on your space.
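
Since the tmp area drains on a timer rather than on size, a quick check like this (the paths are just mine from above) keeps an eye on how much is queued:

du -sh /data/rclone_upload    # how much is currently waiting to upload
df -h /data                   # free space left on the local disk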

My Sonarr and Radarr point directly to /gmedia/TV and /gmedia/Movies respectively and my plex libraries point to the same for the TV and Movie libraries.

With no cache at all, the first scan takes me about 10-15 minutes as it rolls through all the directories and stores the items in the cache.

I have ~22k items in my library and ~35TB in my GD.

[felix@gemini system]$ plex-library-stats
30.03.2018 08:31:19 PLEX LIBRARY STATS
Media items in Libraries
Library =   TV
  Items = 18828

Library =  Movies
  Items = 1734

Library = Exercise
  Items = 279

Library = MMA
  Items = 53

22015 files in library
0 files missing analyzation info
0 media_parts marked as deleted
0 metadata_items marked as deleted
0 directories marked as deleted
20898 files missing deep analyzation info.

My overall goal is simplicity, so I removed any other programs that didn’t add value for me. I let both Radarr and Sonarr scan as they normally would if things were local, since all the items are cached anyway. Sonarr only scans a TV folder when it adds something. Radarr is a little more annoying, as the entire Movie folder is scanned once something is added to Plex, but again, it’s all cached, so who cares if it takes a few seconds.

[felix@gemini gmedia]$ time ls -alR | wc -l
29238

real	0m2.303s
user	0m0.051s
sys	0m0.124s

[felix@gemini gmedia]$ time ls -alR | wc -l
29238

real	0m0.692s
user	0m0.060s
sys	0m0.131s

I can live with those times to hit every file and generate no API hits. I run my cache chunk storage on an SSD. I have Verizon Gigabit FiOS, so upload/download aren’t an issue.
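
The chunks live under rclone’s cache dir (~/.cache/rclone/cache-backend by default), which in my case is already on the SSD; if yours isn’t, the cache backend lets you point it elsewhere by adding flags like these to the mount command (the SSD path is just an example):

--cache-chunk-path /mnt/ssd/rclone/cache-backend
--cache-db-path /mnt/ssd/rclone/cache-backend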

@Animosity022 Thanks for the detailed explanation. Was looking for something like this too. Some questions if you don’t mind:

  1. How does Plex detect new items from the cache? Some script like plex_autoscan, or an automatic scan from Plex every 2 hours or so?
  2. Any delay in the initial buffering, like some users have reported?
  1. I let Sonarr and Radarr notify Plex when a new item comes in. Sonarr only sends a scan for the folder of the TV show, which isn’t bad. Radarr sends a scan for the entire Movies folder to detect new movies, which is a bit much, but since it’s all cached anyway it was fine for my use case. For post processing I just make a copy of the completed download directly onto the rclone mount, which drops it into the rclone_upload tmp area until it gets uploaded to the cloud (sketched below, after these answers).

I keep the automatic scanning at daily just to catch anything that might not go through correctly, but I rarely see problems with that.

An example of my Sonarr config:

  2. Any show normally takes 1-3s to start up for me, and the size of the item usually doesn’t matter. Plexdrive was definitely faster, at 1s or less; with plexdrive I couldn’t tell whether an item was local or not. I was going to spend some time testing chunk sizes and other settings to see if that helps. I never get buffering once something starts, as I think the bigger buffer-size flag mitigates that.
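
As mentioned in the first answer, post processing is just a copy onto the mount; a minimal sketch with placeholder paths (the file then sits in /data/rclone_upload for the 60 minute wait time before rclone moves it to the cloud):

# placeholder paths - adjust to your download client and naming scheme
cp "/data/downloads/complete/Some.Show.S01E01.mkv" "/gmedia/TV/Some Show/Season 01/"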

I’m using plex_autoscan right now for new show pickup and it is running great, especially now with the rclone cache integration built in.

Thx @Animosity022 - this gives me a good start to consider.

I have nowhere near as many files on gdrive as you - my plan is to move files as and when I run low on local storage, but eventually I see myself storing pretty much everything on gdrive.

I tend to find the Sonarr/Radarr built-in Plex notification works fine if the same server is running Plex as well. If you have two servers, it’s less ideal. plex_autoscan filled that hole for me, though!

It takes on average about 12 minutes for something to be considered “done” and imported by Sonarr on one box before I can play it on the Plex VPS.

Good point. My entire use case is that I run everything on a single server in my basement so that has Sonarr/Radarr/ruTorrent/NZBGet/Plex. It’s all contained there.

Yeah, it dawned on me when you mentioned elegance that you probably wouldn’t be running two different machines sharing the same remote as an intermediary!

Hmm. I’m making a few changes to my config and removing the buffer-size, based on this:

There was a note there not to use a big buffer, as it double buffers, which adds overhead on the reads.

@Krandor Thanks for the info.

@Animosity022 Thanks for the info. Let us know how the changes work out for you.

Anyone using Medusa? If so, do you use the Plex notifications or some post-processing script? If you are using a script, could you share it, please?

I’ve been using the default buffer size, a 32MB cache chunk size, and 4 workers; 128MB of read ahead seems to be sufficient for 720p content. I think I’d want 8 workers or a bigger chunk size, though, if I were playing back 4K media (which is probably - read definitely - expecting a bit much from a Google Drive account).

This is on a 250mbit SoYouStart server.
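
Roughly, the cache-related mount flags for that setup (a sketch - assuming the “128MB read ahead” is the effective prefetch of 4 workers x 32M chunks rather than a separate flag):

--cache-chunk-size=32M --cache-workers=4
# buffer-size left at its default; 4 workers x 32M chunks ~ 128M of prefetch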

In my testing, having a bigger chunk size made everything way worse. Anything bigger than 20M kills playback for me:

Mar 30 19:29:46 gemini rclone[5040]: Movies/Chasing.Coral.(2017)/Chasing.Coral.(2017).WEBDL-2160p.mkv: ReadFileHandle.Read error: low level retry 10/10: EOF
Mar 30 19:29:47 gemini rclone[5040]: tnvepu36qiohcun8v84ddhsam0/ocfjsdro2hpv2va8orcnt636akmdmij4ap2cpk0ta68qb6it8lq0/uiu48qtqignho6r7u0d3poqhubdsciftmv0dpjfpu50u7jf3nnv2537mbo98ngai6810crnq0kge8: unexpected conditions during reading. current position: 10553904, current chunk position: 0, current chunk size: 10485760, offset: 10553904, chunk size: 26214400, file size: 30240846944
Mar 30 19:29:47 gemini rclone[5040]: tnvepu36qiohcun8v84ddhsam0/ocfjsdro2hpv2va8orcnt636akmdmij4ap2cpk0ta68qb6it8lq0/uiu48qtqignho6r7u0d3poqhubdsciftmv0dpjfpu50u7jf3nnv2537mbo98ngai6810crnq0kge8: (10553904/30240846944) error (unexpected EOF) response
Mar 30 19:29:48 gemini rclone[5040]: Movies/Chasing.Coral.(2017)/Chasing.Coral.(2017).WEBDL-2160p.mkv: ReadFileHandle.Read error: EOF

If I run the same movie with just the default 5M chunk size, it plays back fine. That’s a huge bitrate 4K movie too:

[felix@gemini Chasing.Coral.(2017)]$ mediainfo Chasing.Coral.\(2017\).WEBDL-2160p.mkv
General
Unique ID                                : 231123133806480839100760379104910904449 (0xADE0B248D5643308B56427AFFC480C81)
Complete name                            : Chasing.Coral.(2017).WEBDL-2160p.mkv
Format                                   : Matroska
Format version                           : Version 4 / Version 2
File size                                : 28.2 GiB
Duration                                 : 1 h 29 min
Overall bit rate                         : 45.1 Mb/s
Encoded date                             : UTC 2017-08-27 05:04:49
Writing application                      : mkvmerge v12.0.0 ('Trust / Lust') 64bit
Writing library                          : libebml v1.3.4 + libmatroska v1.4.5

Video
ID                                       : 27
Format                                   : AVC
Format/Info                              : Advanced Video Codec
Format profile                           : High@L5.1
Format settings                          : CABAC / 5 Ref Frames
Format settings, CABAC                   : Yes
Format settings, ReFrames                : 5 frames
Codec ID                                 : V_MPEG4/ISO/AVC
Duration                                 : 1 h 28 min
Bit rate                                 : 45.1 Mb/s

I also tried a copy of Blade Runner I have that’s a 45GB file with a 55Mb/s bit rate and that works fine with the default chunk size as well.

So I’ve been testing for the last 24 hours with:

felix 31418 1 4 Mar30 ? 00:29:31 /usr/bin/rclone mount gmedia: /gmedia --allow-other --dir-cache-time=160h --cache-chunk-size=5M --cache-info-age=168h --cache-workers=8 --buffer-size 0M --attr-timeout=1s --syslog --umask 002 --rc --cache-tmp-upload-path /data/rclone_upload --cache-tmp-wait-time 60m --log-level INFO

and haven’t hit any issues and memory use is much better (as expected).

Yeah, I guess I was hesitant early on to have so many concurrent workers due to ban risk, so I opted for a lower count and bigger chunks per worker, but I suspect that’s misguided now. I’ll adjust and test with smaller chunks, though sadly it means a purged cache each time.

I haven’t purged my cache after each chunk size change.

I wonder if that was throwing some of the errors when I increased the size. I never noticed an issue when I made it 5, 10, or ~20. Anything bigger would cause issues.

I’ll try clearing the cache and upping the size a bit and see how that works.

I had some issues reading older chunks written under the old size when remounting with a different size, so that might be related.

I just deleted the chunk temp area while rclone was down and then restarted it. That fixed all the odd chunk size / retry error messages I was getting before.
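
If anyone wants to do the same, this is roughly it, assuming the default cache location (if you set --cache-chunk-path or --cache-db-path, clear those instead):

sudo systemctl stop rclone.service
rm -rf ~/.cache/rclone/cache-backend/    # wipe the cached chunks and db
sudo systemctl start rclone.service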

Setting the chunk size too big resulted in much slower starts for things that weren’t cached. It does seem (for me) that 5M or 10M is the sweet spot, with items starting up in 1-3s.

Thanks for testing that, I’ll try to corroborate - obviously every setup has different sweetspots.

Wondering, though, what takes priority: in rclone.conf I have chunk_size = 8M and on the mount command it’s 32M (clearly I forgot about it in the config).

How does one go about deleting files from the remote?

Delete them on the mount as you would locally. How would you like to delete them - inside Sonarr etc., or from the terminal?
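
Either way works; a minimal sketch of both (the path is just a placeholder):

rm -r "/gmedia/Movies/Some.Movie.(2017)"         # via the mount, like any local file
rclone purge "gmedia:Movies/Some.Movie.(2017)"   # or directly against the crypt remote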