Bazarr, rclone and MergerFS

Hello,

I recently switched over from using local storage to using Animosity's setup of MergerFS for local writes first and a Gsuite drive, with Sonarr/Radarr/Bazarr. Sonarr/Radarr and Plex all seem to be working pretty well so far. The upload scripts are working as expected and moving items from local storage to the Gsuite drive on a daily basis. The only issue I'm having so far is with Bazarr.

Bazarr keeps giving errors about failing to save the subtitle to disk. At first I thought it might be a permissions issue, but the Bazarr container is using the same PUID/PGID as Sonarr/Radarr, and all the containers have /mnt on the host mapped to /mnt in the container, so it's not a path mapping issue. I checked the disk and the subs do exist, so I thought it was saving the file and falsely generating the error.

Further investigation into the error shows that it was an IO fault:

Traceback (most recent call last):
File "/app/bazarr/bazarr/get_subtitle.py", line 199, in download_subtitle
path_decoder=force_unicode
File "/app/bazarr/bazarr/../libs/subliminal_patch/core.py", line 856, in save_subtitles
f.write(content)
IOError: [Errno 29] Invalid seek

I didn't think there should be an IO fault, since according to the MergerFS setup it should write locally first, and Sonarr/Radarr are not having this issue.

I reviewed the rclone logs on the docker host and they showed it was trying to write directly to the Gsuite drive rather than to the local directory. Everything else writes locally first, so I checked with the Bazarr Discord channel, and the developer advised that with the underlying Subliminal code, if a sub file already exists for the episode/movie, it will open that file and replace the contents with those of the new sub file rather than delete the existing file and write a new one. So that explains why it's trying to write directly to the Gsuite drive. They advised I check over here first, as they are still working on a fork of the Subliminal code where they may be able to change this behavior in the future, but that's not guaranteed.

Is there a way to force the rclone mount to download the SRT file, modify it locally, and place it in the local directory for later upload by the daily script? Or some other method of forcing it to write locally when Bazarr tries to modify the file directly on the Gsuite drive?

That seems really strange. I use Bazarr and it writes just fine to the local disk first, as the application wouldn't know much about that; it writes based on how you have mergerfs set up.

I do not use any dockers and it would seem strange if Radarr and Sonarr work, but Bazarr does not.

If you wanted to write SRTs, you could use --vfs-cache-mode writes and that would be fine too, as it would just be small SRTs that would upload quickly.

I'd try to figure out whatever is going on from the docker side though.

I don't believe there is anything going on with the docker side of it. The containers are using identical PUID/PGID/bind mounts for accessing /mnt on the host, which is where the NAS directory, crypt mount, and mergerfs mount are all located.

If Sonarr/Radarr update an existing item, they delete the old item from /mnt/media, which removes it from the Gsuite drive, then write the new item to /mnt/media, which actually writes to /mnt/VMdata/Downloads/TempMedia as per the MergerFS mount.

What I gathered from the developer on the Bazarr Discord is that if Bazarr downloads a new SRT file, rather than deleting the old one from /mnt/media (removing it from the Gsuite drive), it opens the existing SRT file, replaces the entire contents with the contents of the new SRT file, and saves it. This appears to cause it to write directly to the Gsuite drive, as it's not removing and recreating the file but essentially modifying a text file that already exists on the Gsuite drive.
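
So as I understand it, the replacement ends up looking something like this (just my rough illustration, not Bazarr's or Subliminal's actual code):

# Rough illustration only, not Bazarr's/Subliminal's actual code.
# The subtitle path already exists on the Gsuite branch, so opening it
# for writing modifies that existing file in place; MergerFS never sees
# a create it could route to the local branch.
srt_path = "/mnt/media/TVShows/Example Show/Season 01/episode.en.srt"  # hypothetical path
new_content = b"1\n00:00:01,000 --> 00:00:02,000\nExample subtitle line\n"

with open(srt_path, "wb") as f:
    f.write(new_content)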

I believe the same behavior can be seen with the rename function in Sonarr/Radarr: if I change the naming format, click on a season, and choose rename, it renames all the files for that season directly on the Gsuite drive. It doesn't download all the media files from the Gsuite drive to my local directory and rename them there to be re-uploaded with the daily script.

So you're saying I can use --vfs-cache-mode writes to have it write SRT files to a local cache and then upload them immediately?

Edit: and if I check the rclone.log file on the host running the docker containers it contains entries like

2019/10/01 17:05:40 INFO : TVShows/Show name (2011)/Season 07/Show name (2011) - S07E09 - Episode title - WEBDL-720p.srt: Copied (new)

However, the docker host does not contain my upload script and therefore wouldn't be copying anything from the temp local directory to the Gsuite drive. Also, the only items I've seen in the rclone log on this machine are SRT files, and it's the same ones over and over that Bazarr seems to be trying to update.

That is what leads me to believe it is in fact trying to edit the file directly on the drive and save it there. The delay caused by that makes Bazarr think the file didn't write even though it did copy, so Bazarr keeps trying to update the same ~50 SRT files continuously, not realizing they've already been updated.

Hmm, interesting. I can test that and validate, as well as check on upgrades.

So, funny story, it depends on the mono version. Some versions of mono move it properly. Some later versions try to hard link, that fails, and it re-writes it.

If it's opening the file as you are saying, --vfs-cache-mode writes would definitely fix the issue: once the file is written, it immediately uploads without any wait. It's such a small file, it's really just about instant.

That's not an open-the-file-and-update-it, though. That's a direct copy to the mount, which should not be the case if you are going through mergerfs. That's a bit strange.

Here is what he stated on Discord:

if the file already exist it open the file and replace the content.

So how I described it is how I interpreted it. If that is incorrect, my apologies.

Wonderful. I believe the docker container maintainer (linuxserver.io) is who determines the mono version, so if they upgrade to a newer version, should I expect it to fail to create the hard link and re-download/write all of the episodes I rename locally?

It sounded like this was going to fix my problem, until:

I initially thought that, but didn't understand why only Bazarr would be writing directly to the Gsuite drive, so I just assumed rclone was using the same terminology for a save, indicating it copied (saved) the new data to the mount point. Any ideas on where to start troubleshooting why only Bazarr is trying to write directly to the drive? Something to do with the aforementioned hard linking? I'm assuming not, since Bazarr doesn't use mono?

Here is my MergerFS mount:

[Unit]
Description = /media MergerFS mount
Requires=mcryptmount.service
After=mcryptmount.service
RequiresMountsFor=/mnt/VMdata/
RequiresMountsFor=/mnt/m_crypt/

[Service]
Type=forking
ExecStart=/usr/bin/mergerfs /mnt/VMdata/Downloads/TempMedia:/mnt/m_crypt/ /mnt/media -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
KillMode=process
Restart=on-failure

[Install]
WantedBy=multi-user.target

Sonarr/Radarr/Bazarr are all containers on this host. All use PUID/PGID=1000, which is my user on the machine and is also used for all the mounts and directory permissions. All containers have /mnt on the host mounted to /mnt in the container.

I have a separate VM running Plex (not in docker) and that is the machine where my daily upload script sits.

By any chance are Sonarr/Radarr/Bazarr using a common volume? I noticed that hardlinking works best when those apps are using one common volume to manage media. Even on the Sonarr site, under the Docker install, it is recommended for hardlink use.

I'm fairly new to Docker, so I'm going to say yes, but let me explain the setup to verify. On the host I have /mnt/media set up as a MergerFS mount that includes a mount on a local NAS and the Gsuite drive. The local NAS mount and the Gsuite mount are also under /mnt. I then pass through /mnt on the host to /mnt on each of the docker containers. I have Use Hardlinks set to yes in Sonarr and Radarr. In Bazarr, I have the subtitle folder set to Alongside Media File. I have no path mappings in Bazarr, as everything is using the same mount points under /mnt. Post processing is disabled.
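
For reference, each container is created along these lines; this is illustrative rather than my exact command, and the /config host path is just a placeholder (the Bazarr one, for example):

# Illustrative only; the /config host path is a placeholder.
docker run -d --name=bazarr \
  -e PUID=1000 -e PGID=1000 \
  -v /mnt:/mnt \
  -v /opt/appdata/bazarr:/config \
  --restart unless-stopped \
  linuxserver/bazarr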

The subs should write to /mnt/media/TVShows/ (the path that is in Sonarr), which according to my MergerFS mount should by default write to the local NAS at /mnt/VMdata/Downloads/TempMedia/TVShows; however, it's being written directly to the Gsuite drive at /mnt/m_crypt/TVShows. The delay caused by the write to the Gsuite drive is causing Bazarr to throw IO errors and think the files aren't actually being downloaded.

Your setup seems correct... that is a weird one... make sure to post back here if you find the solution.

That's the problem: I don't really know where to start in troubleshooting it, especially since it doesn't appear that MergerFS leaves any logs.

You can follow this:

If you have a question, the developer is extremely responsive and helpful, as the majority of the time it isn't a mergerfs issue but something else; I've had a few myself :slight_smile:
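
One other thing you can check from the command line (per the mergerfs docs, if I'm remembering them right) is which branch a particular file actually lives on, since mergerfs exposes that through extended attributes:

# The file path below is just an example; point it at one of the SRTs Bazarr keeps retrying.
getfattr -n user.mergerfs.fullpath "/mnt/media/TVShows/Example Show/Season 01/episode.en.srt"
getfattr -n user.mergerfs.allpaths "/mnt/media/TVShows/Example Show/Season 01/episode.en.srt"

If the full path comes back under /mnt/m_crypt rather than /mnt/VMdata/Downloads/TempMedia, that at least confirms which branch the write landed on.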

Ok, thank you. I'll see if I can contact the developer over there to find out whether there's any way to debug the MergerFS mount and see what's happening behind the scenes that causes it to write to the Gsuite drive.

trapexit pointed me in the direction of running strace on Bazarr and then took the time to look at the system calls. He was able to confirm that the system calls are actually opening and truncating the file on the remote, then writing the new data into that file. Since it's an open call instead of a create, that is what causes it to initiate an upload directly to the Gsuite drive instead of to the local directory that is technically "write first".

He advised it's not really a bug with Bazarr, just that their method of handling the replacement is different than most, which I believe is actually handled by the underlying Subliminal code. Most replacements would be system calls to create a temp file, then rename it and remove the original. The create call for the temp file would cause the MergerFS mount to write locally first. So this only occurs when Bazarr is trying to replace/upgrade an existing sub file. It looks like reading the docs on --vfs-cache-mode writes is in my future.
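
For comparison, the replace pattern he described that would go through the create policy looks roughly like this (a sketch of the general pattern, not the actual code):

import os

# Sketch of the create-temp-then-rename pattern, not Bazarr's actual code.
# The temp file is a brand new path, so MergerFS runs its create policy
# (category.create=ff) and the data lands on the first branch, i.e. the
# local /mnt/VMdata/Downloads/TempMedia directory.
srt_path = "/mnt/media/TVShows/Example Show/Season 01/episode.en.srt"  # hypothetical path
tmp_path = srt_path + ".tmp"
new_content = b"1\n00:00:01,000 --> 00:00:02,000\nExample subtitle line\n"

with open(tmp_path, "wb") as f:  # create call: the ff policy picks the local branch
    f.write(new_content)
os.replace(tmp_path, srt_path)   # rename the temp file over the old subtitle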

Edit: Animosity, I just realized your rclone mount doesn't use any VFS cache; for some reason I was thinking it did. So I should just need to add "--vfs-cache-mode writes" to my service file, right? If you're not using a cache remote or a VFS cache, what does your find service do?

Thanks for chasing that up, as that explains why you see the issue and why it writes to the 2nd mount "first".

You should be able to check the rclone logs and see. As for that, you should just need to set "--vfs-cache-mode writes" in the mount. I might just add that myself since I use Bazarr. I checked my logs this morning and I haven't gotten an upgrade anytime recently.

There should be no harm in adding that mount option, as it would handle the issue of opening/replacing a file. The SRT files are so small, it really should not matter or impact much at all, as they would upload almost instantly once written locally.
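
In the systemd mount unit it's just the one extra flag, something along these lines (the remote name and log file path here are placeholders, adjust to your own setup):

# Illustrative snippet only; "gcrypt:" and the log file path are placeholders.
ExecStart=/usr/bin/rclone mount gcrypt: /mnt/m_crypt \
  --allow-other \
  --vfs-cache-mode writes \
  --log-level INFO \
  --log-file /opt/rclone/rclone.log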

I added --vfs-cache-mode writes to my mount last night and restarted. I went to Bazarr and requested a replacement for an existing sub that it had been trying to download for days, and Bazarr showed it downloading instantly without error, so that appears to have resolved the issue for now. The Bazarr dev had me open an issue on GitHub so he can modify their replace function to delete the existing file and download the new one, which will issue a create call and let MergerFS write locally first.

Glad we could get a workaround. Unfortunately, the POSIX filesystem API and FUSE are pretty low level, and there are some common anti-patterns that can lead to these kinds of unwanted behaviors that simply aren't easy to manage at my level. It's generally impossible to know intent. In this case the file was opened O_CREAT|O_TRUNC. I could add a "treat O_TRUNC as a create" option that would remove existing files and run the create policy instead. It'd end up being similar to rename's behavior.

That said, it's a rare issue, and if it could be addressed upstream, that'd be better since it'd improve their code.

BTW... I'm on this forum but don't regularly log in so feel free to tag me in the future if there are any questions regarding mergerfs.

As an update to this, the Bazarr devs pushed an update this weekend to resolve this issue on their end. Yesterday I removed --vfs-cache-mode writes from my mount, upgraded my container, and forced an update of an existing SRT, and it worked as expected, removing the existing file and writing a new file to the local directory first.
