I am running Radarr (V3 dotnet), Sonarr (V3), NZBGet, and Rclone (all newest stable versions).
Ubuntu 20.04
I have a script that runs whenever a download is complete and is supposed to move the newly downloaded file to my Google Team Drive: /usr/bin/rclone move /home/server/data/local gcrypt: --log-file "$LOGFILE" -vv --delete-empty-src-dirs --fast-list --exclude "*.partial~" --drive-stop-on-upload-limit
My issue is that my script starts moving the file before Sonarr/Radarr has finished moving the whole file. I have read that rclone is supposed to skip files that are still being written, but that's not working in my case.
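For context, rclone notices an in-flight file by comparing its size at the start and end of the transfer, which is what produces the "size changed" errors shown below. A minimal sketch of that check in shell (the file path and pause interval here are made up for illustration, not from the post):

```shell
#!/bin/sh
# Sketch of the "is this file still being written?" check: compare the
# size before and after a short pause. Rclone does the equivalent during
# a transfer, which is why a steady writer triggers "source file is being updated".
still_growing() {
  size1=$(stat -c %s "$1")
  sleep 2
  size2=$(stat -c %s "$1")
  [ "$size1" != "$size2" ]    # succeeds (exit 0) if the file grew during the pause
}

f=/tmp/example.mkv            # hypothetical test file
printf 'partial data' > "$f"
if still_growing "$f"; then
  echo "still being written - skip it"
else
  echo "stable - safe to move"
fi
```

A simpler belt-and-braces option is rclone's `--min-age` flag (e.g. `--min-age 15m`), which skips anything modified within the given window, so a file mid-import never gets picked up at all.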
It usually moves only a few GB and then deletes the file. Examples: an 85 GB movie transferred only 5 GB before the rest was deleted; 5 GB episodes transfer only a couple of GB.
So now I have a bunch of corrupted files mixed in with non-corrupted files I need to go through.
The logs show everything is working as normal so I'm lost:
If the file was being actively written to while it was transferring, rclone would notice that and print some messages like:
2020/08/25 19:39:01 DEBUG : Using config file from "/opt/rclone/rclone.conf"
2020/08/25 19:39:01 DEBUG : fs cache: adding new entry for parent of "blah.mkv", "/home/felix/test"
2020/08/25 19:39:02 DEBUG : blah.mkv: Need to transfer - File not found at Destination
2020/08/25 19:39:02 DEBUG : blah.mkv: Sending chunk 0 length 1073741824
2020/08/25 19:39:06 ERROR : blah.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails&supportsAllDrives=true&uploadType=resumable&upload_id=AAANsUl4jDI_QTNICFBnyKS2BMwsIPPWRTzkHmt54gK3ZsSkV2eEOhS8D_UfBh2D7f1obrDQIZItNsUqRkhyp_hrJaw": can't copy - source file is being updated (size changed from 1504953150 to 1530118974)
2020/08/25 19:39:06 ERROR : Attempt 1/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails&supportsAllDrives=true&uploadType=resumable&upload_id=AAANsUl4jDI_QTNICFBnyKS2BMwsIPPWRTzkHmt54gK3ZsSkV2eEOhS8D_UfBh2D7f1obrDQIZItNsUqRkhyp_hrJaw": can't copy - source file is being updated (size changed from 1504953150 to 1530118974)
2020/08/25 19:39:06 DEBUG : blah.mkv: Need to transfer - File not found at Destination
2020/08/25 19:39:07 DEBUG : blah.mkv: Sending chunk 0 length 1073741824
2020/08/25 19:39:07 ERROR : blah.mkv: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails&supportsAllDrives=true&uploadType=resumable&upload_id=AAANsUncv22QuXOh6d35ESpYQnzsFE_kZNCU-izBnUKStTg05-7-WZmuss6ZE7ogW-fe-gI3I4pjF7-mEvf0Tkq4AksN9xmePQ": can't copy - source file is being updated (size changed from 2249835326 to 2543043390)
2020/08/25 19:39:07 ERROR : Attempt 2/3 failed with 1 errors and: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails&supportsAllDrives=true&uploadType=resumable&upload_id=AAANsUncv22QuXOh6d35ESpYQnzsFE_kZNCU-izBnUKStTg05-7-WZmuss6ZE7ogW-fe-gI3I4pjF7-mEvf0Tkq4AksN9xmePQ": can't copy - source file is being updated (size changed from 2249835326 to 2543043390)
2020/08/25 19:39:07 DEBUG : blah.mkv: Need to transfer - File not found at Destination
2020/08/25 19:39:08 DEBUG : blah.mkv: Sending chunk 0 length 1073741824
I searched the logs and found only one of those errors across roughly 4,600 lines covering thousands of uploads.
Is it possible that the incomplete file finishes uploading before Sonarr or Radarr writes again?
edit...
actually I don't think I did get any of those errors. These are all the errors from the log:
2020/08/25 01:48:16 ERROR : XXXXX/XXXXX/XXXXX/XXXXXX.XXX: Couldn't delete: remove /home/server/data/local/Movies/XXXXXXXX/XXXXXX.XXX: no such file or directory
2020/08/25 01:48:16 ERROR : Local file system at /home/server/data/local: not deleting directories as there were IO errors
2020/08/25 01:48:16 ERROR : Attempt 1/3 failed with 2 errors and: not deleting directories as there were IO errors
2020/08/25 01:48:23 ERROR : Attempt 2/3 succeeded
NZBGet has an incomplete folder and a complete folder. Sonarr/Radarr should get the file from the complete folder.
I have mergerfs set up so my local and cloud storage are combined, the same way as explained in the guide.
So once the file is finished downloading, Sonarr or Radarr should put it in the local folder (via mergerfs),
then the script should run (it's set to run on rename in Sonarr and Radarr), which should move it to Google and then delete the local copy.
I use hard links because the copies happen instantly (they just create a link), and my items are on the same disk.
Even with that setup, if Radarr is copying locally, it's a constant stream, so that seems odd: Sonarr/Radarr normally don't stop writing while they are copying a file.
What are you moving? Are you moving just the completed area?
Do a test to see if you're hardlinking in your setup.
Go to Sonarr or Radarr and do a manual import on a file that is at least 2 GB in size from your downloads folder. If it imports and says completed within 2 seconds, you are hardlinking. If it takes longer than that, it's copying the file and deleting it. If it's doing that, the #1 cause is how you set up your volumes, both on the host and in the container. Each volume binding in a container is 1 filesystem and 1 device. You can't hardlink across devices or filesystems, so inside the container, if you have /downloads and /media, those are going to be 2 different devices and filesystems and it will be forced to copy. To fix this, you will need to change your structure so both the downloads and media folders are inside one directory together, like /data/media and /data/downloads, then just bind `./data:/data` in Sonarr/Radarr.
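The manual-import timing test above can also be checked directly from a shell. A minimal sketch (the /tmp paths are stand-ins for your real downloads and media folders, which must live on the same filesystem):

```shell
# Create a dummy "download", hardlink it into the media folder, and
# compare inode numbers -- a hardlink shares its inode with the original.
mkdir -p /tmp/data/downloads /tmp/data/media
dd if=/dev/zero of=/tmp/data/downloads/test.bin bs=1M count=1 status=none
ln /tmp/data/downloads/test.bin /tmp/data/media/test.bin
stat -c '%i %n' /tmp/data/downloads/test.bin /tmp/data/media/test.bin
# If ln fails with "Invalid cross-device link", the two folders are on
# different filesystems and hardlinking can never work between them.
```

Matching inode numbers confirm hardlinking works across those two paths; run the same test with your actual download and media directories.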
If sonarr/radarr uses the mergerfs mount then you must have your branches set up like
home/server/data/local:rw
home/server/data/cloud:nc
home/server/mergerfs
create=ff
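One way of wiring those branches up is an fstab entry along these lines (a sketch only; the `=RW`/`=NC` branch suffixes and option names assume a recent mergerfs release, so check them against your installed version):

```
# /etc/fstab -- local branch writable, cloud branch no-create,
# new files always created on the first (local) branch
/home/server/data/local=RW:/home/server/data/cloud=NC  /home/server/mergerfs  fuse.mergerfs  allow_other,use_ino,category.create=ff  0  0
```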
Your downloads folder must be inside home/server/data/local for you to be able to hardlink. Your structure shows that you are likely adding downloads/incomplete and downloads/complete as 2 branches to merge together; that's not going to work.
I ran into this same thing when starting out with this setup. Look at animosity022's github repo for an example that hardlinks.
An alternative to using mergerfs would be the rclone union remote, which was recently improved. You'll have the same issue either way if you keep your existing structure as it is.
Normally I have my downloaders download direct into /home/server/data/local/downloads instead of going through mergerfs.
I set my cloud remote to NoCreate, and since I don't have a downloads folder on the remote, it doesn't upload my downloads folder. Files imported by Sonarr are moved to the media folder (they appear as mergerfs/media, but the real location is local/media). You can then use a script with a cron job to run rclone move local/media myremote:/media.
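The cron job mentioned above could be as simple as the following sketch (the paths and `gcrypt:` remote name come from the original post; the schedule, lock file, and log path are made-up examples):

```
# crontab -e -- every 30 minutes, move imported media to the remote;
# flock prevents overlapping runs if an upload takes longer than the interval
*/30 * * * * /usr/bin/flock -n /tmp/rclone-move.lock /usr/bin/rclone move /home/server/data/local/media gcrypt:media --delete-empty-src-dirs --log-file /var/log/rclone-move.log -v
```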
If the goal is to use mergerfs, everything should go through it, with the exception of the upload to the cloud, as that has to read off the local disk.
I think data is your mounted drive, cloud is the rclone mount, and home/server/mergerfs is the merged view, so everything should point at the mergerfs mount.
I break mine up by app, so translating to your mount, I'd have:
home/server/mergerfs/NZB, which contains home/server/mergerfs/NZB/incomplete and home/server/mergerfs/NZB/completed
and I have a torrent area on
home/server/mergerfs/seed
TV and Movies are at
home/server/mergerfs/TV
home/server/mergerfs/Movies
Sonarr and Radarr move everything from the NZB and seed area to TV/Movies.
The only exception is my rclone move script, which moves from the "data" local disk and specifically excludes the NZB and seed areas.
You'd test hard linking by just doing something like
ln home/server/mergerfs/TV/test1 home/server/mergerfs/TV/test2 and seeing if it works. You also need to validate that hard linking is enabled in Sonarr and Radarr. If imports take a long time, you'd want to run a trace on the app and see whether it's hard linking or not.
I believe there was a problem with my Sonarr and Radarr setup which were both v3.
It could've been several things such as permission issues, corrupted files, bad install, etc.
I removed the v3 versions of both Sonarr and Radarr and reverted back to v2 of both. Everything works fine now (under v2), which I can live with.
Thanks again for the help.
Just to confirm the issue was with Sonarr and Radarr v3 not hard linking.
Okay turns out I'm still having the issue. It is clearly an issue with hard linking that I just can't figure out. Here is some more info after moving stuff around trying to fix it:
BuyVM 512 MB server with a 256 GB drive mounted to /home/server/data via fstab by UUID