Rclone mount with unionfs constantly deleting/creating new file as copy happens

Hi,

I am using unionfs to join an rclone mount (rw) with plexdrive (ro). The purpose is to let sonarr/radarr add/remove files while plex reads files properly.

What I’m noticing when radarr attempts to copy a new file to the mount:

  1. .partial file created
  2. rclone copies file (i.e. Copied (new))
  3. .partial file updated
  4. old file deleted
  5. rclone copies file (i.e. Copied (new))

What this seems to mean is that a 100mb file, copied in chunks of 10mb, ends up using 10 + 20 + 30 + 40 + 50 + 60 + 70 + 80 + 90 + 100 = 550mb of upload bandwidth, because each partial update re-uploads everything written so far. Since I’m using gdrive I’m hoping to avoid this.
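A quick sanity check of that arithmetic, assuming each 10mb append re-uploads the whole file written so far:

    # total upload = chunk * n*(n+1)/2 for n chunks; here n=10, chunk=10MB
    echo "$(( 10 * (10 + 1) / 2 * 10 ))MB"   # prints 550MB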

My rclone mount settings are:
rclone mount --allow-other -v gdrive: /mnt/gdrive-rw

My unionfs settings are:
unionfs-fuse -o cow,allow_other /mnt/gdrive-rw=RW:/mnt/gdrive-ro=RO /mnt/media

Ideally I’d like the copy to go to a local cache directory first and then upload in a single stream. Or, have the copy upload directly as it’s written.

Is this possible with my setup? I realise rclone copy is better for this scenario, but I’d like the mount there so my automation programs can write and delete.

Any help is much appreciated.

Personally, I’m not sure what having plexdrive in there is doing as it seems to just add complexity.

I’d use the rclone cache backend and use its configuration to set a tmp_upload directory. I use Sonarr/Radarr and have no problems with partials or anything else. I just let rclone do its thing and it’s been running flawlessly for a few weeks now.
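As a rough sketch, the cache remote in rclone.conf wraps the underlying Google Drive remote (the names gcache/gdrive here are just examples):

    [gcache]
    type = cache
    remote = gdrive:
    chunk_size = 10M
    info_age = 168h
    chunk_total_size = 10G

The tmp upload directory itself is set at mount time with --cache-tmp-upload-path, as in the mount command further down.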

My only gripe is startup time takes maybe 2-3 seconds compared to almost instant with plexdrive, but I’ve removed a lot of complexity so I’m fine with that.

Hi Animosity,

You’re right, it does! To be honest I thought plexdrive performed better, but reading into it a little, it looks like they both serve the same purpose.

I just read another post where you posted your setup, and before I dive into it, I wanted to ask you about 2 options you had set:

  1. what is the purpose of buffer size 0M?
  2. what is the benefit of setting tmp-wait-time to 60m?
  1. There was another post from remus where he mentioned that if you use both the cache and the buffer-size, you are basically double buffering everything, so you should set the buffer to 0 and just let the cache do the read-ahead. So I went with that, based on a github issue.

  2. Just a personal preference for me. I wanted to keep stuff local for about 60 minutes to ensure that if items were copied or moved, things like bigger movies finished copying and any of the Radarr/Sonarr renames happened. I would say pick a number for that depending on your use case and how long you’d like to keep stuff local before uploading it. I was thinking about making mine a few days, as usually a proper or something comes out in that time frame and I was trying to limit the unnecessary uploads, but with Verizon Gig Fios I don’t have a monthly cap or an upload capacity constraint, so that works for my use case. A sketch combining both options is below.
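Putting the two together, a mount command along these lines (paths and remote name are illustrative) covers both points:

    rclone mount gcache: /mnt/media \
       --allow-other \
       --buffer-size 0M \
       --cache-tmp-upload-path /data/rclone_upload \
       --cache-tmp-wait-time 60m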

Hope that helps!

Thanks for that. I assumed the timeout was something like that; I think it makes sense for me too. I have slow disks, so making sure the copy finishes would be good for me. Also, thanks to the automatic read-location switch, it doesn’t really matter much.

The double buffering problem seems like a configuration misalignment. I’ll keep the defaults for now, just in case anything changes in the future, and will re-evaluate if I start struggling for local space.

Thanks for your help! I’ll try and get it set up now.

Sure. The default for the buffer size is only 16M, so it really won’t matter much.

In prior iterations I had something like 512M or even 1-2GB of buffer. What remus was saying is that if you configure it that high, it’ll read way ahead to grab all that data and be more ‘hungry’ in filling the buffer.

So I seem to have got it working with radarr quite nicely. Still waiting on an upload to complete to confirm it synced properly, but it seems to be working as expected there.

One issue I have encountered is that playback with plex is now incredibly slow. Before, it used to take a few seconds to start playing a file; now it’s taking minutes. I’m also getting chunk not found errors in the logs.

Apr 21 20:26:05 ubuntu rclone[16812]: tvshows/Homeland/Season 07/Homeland - S07E02 - Rebel Rebel.mkv: (380911616/1586823100) error (chunk not found 377487360) response
Apr 21 20:26:05 ubuntu rclone[16812]: tvshows/Homeland/Season 07/Homeland - S07E02 - Rebel Rebel.mkv: ReadFileHandle.Read error: low level retry 1/10: EOF
Apr 21 20:26:28 ubuntu rclone[16812]: tvshows/Homeland/Season 07/Homeland - S07E02 - Rebel Rebel.mkv: (41943040/1586823100) error (chunk not found 41943040) response

This is what I ended up using for my mount command:

/usr/bin/rclone mount gdcache: /mnt/media \
   --allow-other \
   --dir-cache-time=160h \
   --cache-chunk-size=10M \
   --cache-info-age=168h \
   --cache-workers=5 \
   --cache-tmp-upload-path /data/rclone_upload \
   --cache-tmp-wait-time 60m \
   --attr-timeout=1s \
   --syslog \
   --umask 002 \
   --log-level INFO

Any ideas what might be causing the delays?

If you change the chunk size at all and the cache already has chunks, you need to delete your chunk directory.

I use https://github.com/l3uddz/plex_autoscan to handle all my cache expiration type stuff as it plugs directly into Sonarr/Radarr and works like a champ.

Not entirely sure what you mean. The settings I pasted above were the original settings; I didn’t play around with chunk sizes, if that’s what you mean. I can try deleting the chunk directory, but I’m not sure which one you’re referencing. Rclone’s or plex’s?

Also, not sure how sonarr/radarr would have affected this, since I assumed the chunk errors came from rclone’s implementation of downloading the file?

Apologies for the ignorant questions!

So for me, I run as the user ‘felix’ and in my home directory there is a cache folder for rclone:

[felix@gemini gcache]$ pwd
/home/felix/.cache/rclone/cache-backend/gcache

This is named after whatever remote has type = cache in your rclone.conf. I usually just stop rclone, remove that directory, and start it back up.
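Something like this, assuming the mount runs under systemd (the unit name here is just an example):

    systemctl stop rclone-mount
    rm -rf ~/.cache/rclone/cache-backend/gcache
    systemctl start rclone-mount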

If you want to prime up the caching, I usually do a scan in plex and let it see everything. For me at ~40ish TB, it takes maybe 5-10 minutes.

Ahh ok, thanks for the explanation! Still not entirely sure why I’d have missing chunks, but I’ll try deleting the folder. Even with files I’ve never played before, I’m still getting terrible speeds though.

I guess warming up the cache is something I can do. I do remember specifically testing cold files with plexdrive and it was much quicker. I’m sure the implementation isn’t wildly different, so I’m guessing it’s probably something to do with my configuration.

The only thing I can see that seems to affect initial streaming speed is chunk-size so I might try lowering that. Is there anything else I might be missing?

Have you run any other versions with the cache before? You can clear the actual cache.db as well, either by removing the file or with a kill -HUP.
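For example (assuming the cache.db sits next to the chunk directory under ~/.cache/rclone/cache-backend and is named after the remote; adjust to your setup):

    # with rclone stopped, remove the cache metadata db
    rm ~/.cache/rclone/cache-backend/gcache.db
    # or, with rclone running, send it a SIGHUP
    kill -HUP $(pidof rclone)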

I’ve honestly found very little difference and I leave mine just at the default 10M.

No other versions with cache. Seems like clearing out the cache hasn’t improved much, but I’m not seeing errors anymore on files I’ve played before, which is good.

I’ll leave things as they are and monitor it for a few days to see if it stays this bad; it’s possible it’s a compromise I’ll have to live with.

Thank you for all your help, though!

Seems like the chunk errors didn’t completely go away. I’m actually seeing them on newly copied files. I’m guessing that when sonarr tries to do an analyse it opens the file for read, and sometimes (1 in 10ish) it will throw an error:

Apr 22 03:29:54 ubuntu rclone[1748]: tvshows/Veep/Season 2/Veep - S02E05 - Helsinki Bluray-1080p.mkv: unexpected conditions during reading. current position: 4155600896, current chunk position: 4152360960, current chunk size: 1933312, offset: 3239936, chunk size: 10485760, file size: 4155669531
Apr 22 03:29:54 ubuntu rclone[1748]: tvshows/Veep/Season 2/Veep - S02E05 - Helsinki Bluray-1080p.mkv: (4155600896/4155669531) error (unexpected EOF) response
Apr 22 03:29:54 ubuntu rclone[1748]: tvshows/Veep/Season 2/Veep - S02E05 - Helsinki Bluray-1080p.mkv: ReadFileHandle.Read error: low level retry 1/10: EOF

No more errors thrown after that. Any idea what it could be?

What version are you running? There are still some expiration-type issues in 1.40, and the beta addresses some of them.

I use plex autoscan to remediate any of them as it automatically expires the right thing and uses the webhook feature in Sonarr/Radarr.

For me, I have analyze off in both Sonarr and Radarr.

Ok I’ll look at making plex scan run in a more predictable way. Right now I’m using analyse to provide enough details to name my media files. Other services tend to like descriptive file names for things like subtitle downloading.

I’m currently running the latest release on github (1.40).

I’d grab a later beta and take a look at plex_autoscan. It handles the cache expiration quite well.