Best rclone mount settings for an upload-speed-bound setup?

So, I'm using the usual Sonarr/Plex/rclone setup and it's been working alright. I was having Sonarr write directly to the mount after downloads completed, but noticed that it took a long time to move each file. My connection is 600Mbps down but only 20Mbps up, so my assumption was that the move operation was slow because the file was being uploaded during the move. I added --cache-tmp-upload-path to my mount command and that fixed the slow moves. The problem I'm running into now is that once --cache-tmp-wait-time expires on one of the files in the tmp-upload-path and rclone starts uploading that file, every single file in the tmp-upload-path gets locked for modification, which causes Sonarr to hang while the upload runs.
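For reference, the mount command looked roughly like this (the cache remote name, paths, and wait time are placeholders rather than my exact values):

rclone mount media-cache: /mnt/media \
  --allow-other \
  --cache-tmp-upload-path /data/rclone/tmp-upload \
  --cache-tmp-wait-time 15m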

Does anyone have a setup with similar limitations to mine that they are having success with?

I've just switched my setup back to not using the --cache-tmp-upload-path option, because the freezing of Sonarr is too frustrating. Moving the un-uploaded files from the cache to the mount is going to be time-consuming.

For me, there are too many caveats to writing directly to the mount, and it doesn't really fit my use case.

I'd rather have the ability to control when things are uploaded and limit my daily uploads to 700GB.

I don't use the cache backend, as I find it much slower than running without it.

To that end, I use a local disk to stage everything via mergerfs and upload via a nightly script.

That provides me with the best of both worlds as I have the ability to keep things in sync and still use one mount point.

I keep my rclone mount read/write so I can delete from it, as upgrading media is a common use case for me as well.

Ah, thanks! I don't have much experience with filesystems in user space, but I think this makes sense to me. Just to clarify, your setup looks something like this: Plex and Sonarr are pointed to a mergerfs directory that combines your rclone mount with a local directory, so whenever write operations happen, files get written to the local directory. Then you have a script that performs an rclone move from the local directory every night?

Yep, you got it. mergerfs is pretty easy to set up and install without much effort.

It has many policies, and I use a 'first found' create policy, which writes everything to the first disk configured; that's my local disk.
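In case it helps, a minimal sketch of that kind of mergerfs mount, assuming the local staging disk is at /local, the rclone mount is at /gdrive, and the merged view is /gmedia (your paths will differ):

# 'ff' (first found) create policy: new files always land on the first branch, /local
mergerfs -o allow_other,use_ino,category.create=ff /local:/gdrive /gmedia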

Cool, thanks! This sounds like exactly what I need. I think I can mark this as solved now. I'll try setting it up tonight.

Can confirm, the solution provided works extremely well.

Awesome, thanks. How do you handle move operations that will exceed your cron frequency? Moving the backlog from my --cache-tmp-upload-path is going to take about a week according to rclone's progress output.

I only upload once per day; the 700G limit stops it and it just picks up again the next day.

felix@gemini:/opt/rclone/scripts$ cat upload_cloud
#!/bin/bash
# RClone Config file
RCLONE_CONFIG=/opt/rclone/rclone.conf
export RCLONE_CONFIG

#exit if running
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then exit; fi

# Move older local files to the cloud
/usr/bin/rclone move /local/ gcrypt: --log-file /opt/rclone/logs/upload.log -v --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs --user-agent animosityapp --fast-list --max-transfer 700G

The script checks whether it's already running and won't start a second instance.
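The scheduling side is just a normal cron job; a hypothetical crontab entry to run it nightly would look like this (the time is arbitrary):

# Run the upload script once a night at 02:00
0 2 * * * /opt/rclone/scripts/upload_cloud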

I keep mine on a local disk because I have 6 TB, and I just upload it manually with rclone move and a bwlimit when the disk gets full. Happily upgrading from 10 Mbps to 20 Mbps upload next month, though.
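For example, something along these lines works for that kind of manual, rate-limited upload (remote name and paths are placeholders, and the limit is only roughly sized for a ~20 Mbps line):

# Cap the upload at 2 MBytes/s (~16 Mbps) and show progress
rclone move /local/ gcrypt: --bwlimit 2M --progress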
