Changing setup... advice

I haven't been following rclone development for a while now since everything has been running very well. But tonight I decided to poke around the forum to see what's up and what I missed. I found that rclone can now do union, which is very cool. I'm still rocking plexdrive 4 + rclone crypt, and plexdrive 5 + rclone crypt for my 4K content (don't ask why I use both... it just worked for my setup). But I want to simplify things. Last year I decided to give rclone vfs/cache a try and it seemed to work well until I got a quota ban from Gdrive. Since I didn't have time to tweak and try again, I moved back to my old working setup. But now I'm ready to give it a try and tweak what I need to make it work. My hope is for rclone to be the only piece of software I need to make everything work as well as it does now.

My current setup:
2 machines: Hetzner (download box) and SoyouStart (Plex box), both Ubuntu Server 18.04.3
G Suite account, currently encrypted, but I want to move to unencrypted (is that OK?)

Hetzner:
Sonarr, Radarr, Medusa, Deluge, SABnzbd

Processing in a nutshell...

  • File gets processed by Sonarr/Radarr.
  • A post-processing script is launched by Sonarr/Radarr to convert the file to mp4 if needed, then rclone is launched to send it to Gdrive, followed by a request sent via autoplexscan to get the file into Plex.
  • Sonarr/Radarr are set up to unmonitor when the file is no longer local. File replacement is done manually if I need a new version/quality.

Bandwidth: 1Gbps
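A rough sketch of that post-processing flow, for context (the paths, the `gcrypt:` remote name, and the scan endpoint are hypothetical stand-ins, not my actual script):

```shell
#!/bin/bash
# Hypothetical post-processing flow launched by Sonarr/Radarr (sketch only).

process() {
    local src="$1"               # path handed over by Sonarr/Radarr
    local out="${src%.*}.mp4"    # same name, mp4 extension

    # Remux to mp4 only if the file isn't one already
    if [[ "$src" != *.mp4 ]]; then
        ffmpeg -i "$src" -c copy "$out" && rm -- "$src"
    else
        out="$src"
    fi

    # Push to Gdrive, then ask the scanner to pick the file up
    rclone move "$out" gcrypt:media/
    curl -s "http://localhost:3030/scan?path=$out"   # placeholder autoplexscan call
}
```

The remote name, media path, and scan URL would of course be whatever your own setup uses.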

Soyoustart:
Plex
rclone crypt + plexdrive mount (read-only)
A second rclone mount (RW) for when I need to modify something
Bandwidth: 250Mbit up and 250Mbit down

What I want to achieve:

Hetzner:

Using rclone, mount and union with a local path so I can manage quality changes via Sonarr/Radarr/Medusa. Mind you, files would be sent right away to Gdrive and not on a nightly job, so files need to appear in the union mount in a decent amount of time after upload.
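For reference, a union setup along those lines could look something like this (the `gcrypt:` and `gunion:` remote names and the paths are placeholders):

```shell
# In rclone.conf (hypothetical remote names):
#   [gunion]
#   type = union
#   remotes = gcrypt: /data/local
#
# With the union backend as it currently exists, writes go to the last
# remote listed, so the writable local path goes last.

# Mount it (flags illustrative, not tuned):
rclone mount gunion: /mnt/media --allow-other --dir-cache-time 72h --poll-interval 15s
```

Sonarr/Radarr would then point at `/mnt/media`, with an upload step moving files from `/data/local` to `gcrypt:` as they finish processing.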

Soyoustart:

Only have a single rclone mount with the proper configs to avoid bans. (With my current setup, I haven't had a ban in over 2 years.)
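As a starting point, a read-only VFS mount in that spirit might look like the following (flags are a hedged sketch, not a guaranteed ban-proof config; `gcrypt:` is a placeholder remote name):

```shell
rclone mount gcrypt: /mnt/gmedia \
    --allow-other \
    --read-only \
    --dir-cache-time 72h \
    --poll-interval 15s \
    --vfs-read-chunk-size 32M \
    --vfs-read-chunk-size-limit off \
    --buffer-size 64M
```

The long directory cache time combined with change polling keeps listing API calls low, which is most of what gets people into quota trouble.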

Questions:

  • From what I read about union, it only writes to a single drive. If I move files right away to Gdrive after they're processed by Sonarr/Radarr, how does file replacement work for those apps? I have the recycle bin enabled in case of a bad overwrite so I can recover files, but if the Gdrive portion is read-only, the file wouldn't be able to be sent to the recycle bin.
    Also @Animosity022, I was poking around your Git and saw that you use mergerfs. Any reason why you're not using rclone union?

  • For the rclone mount on the Plex server, Animosity022's systemd file for the mount seems to be the right starting point.
    https://github.com/animosity22/homescripts/blob/master/systemd/rclone.service

  • If I want to duplicate my data to another Gdrive account, is it still worthwhile spinning up a GCP machine for the transfer using rclone, or not anymore? I want it to go as fast as possible. Also, is there a way now to not pass 750GB/24h? (an rclone flag?)
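On the 750GB/24h question: as far as I know there is no flag that raises the cap (it's a Google-side limit), but a common workaround is to throttle rclone so a continuous transfer can never exceed it. The arithmetic, as a quick sketch:

```shell
# 750 GiB per day expressed in MiB/s, using bash integer math:
# 750 * 1024 MiB spread over 86400 seconds is just under 8.9 MiB/s.
echo $(( 750 * 1024 / 86400 ))   # prints 8 (truncated)
```

Hence the commonly used `--bwlimit 8.5M`, which keeps a 24/7 transfer safely under the cap, e.g. `rclone copy gdrive1: gdrive2: --bwlimit 8.5M` (remote names are placeholders).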

I use hard linking for saving space and speed, so I use mergerfs for that reason.

Rclone union doesn't support hardlinks?

No, it doesn't.

Is the rclone.service using the rclone cache backend or VFS? I've always been confused about the two and which is better.

Better is subjective. For me, I do not use the cache backend as I find it faster without it.

That works great with my setup and Plex clients.

Depending on what you need to use union for, you may want to slightly delay planning a new union setup, because a massive overhaul of that system is coming that will more closely mimic some of the most important features of mergerFS:

Issue:

Pull request in the works by our wonderful new member @Max-Sum (thank you):

This seems to be progressing quickly. From what I understand, he already has a working system for this but is currently working on integrating it into rclone with Nick's help.

The reason Animosity and some others use mergerFS is that it can currently do the same as rclone union but is a lot more flexible. Perhaps most importantly, it can write to more than one drive at a time, and it lets you granularly define how each drive is treated.
MergerFS works well for this job, but the downside is that it only works on Linux. It is also not really written for cloud use, so it must operate on mounted drives (which is usually not too big of a problem).
If your rclone system is on Linux, you may just want to use it as it exists now.
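A typical mergerFS pairing of a local disk with an rclone mount looks something like this (paths and policy choices are illustrative, taken from the option names in the mergerfs docs):

```shell
# Local writable disk first, rclone mount second
mergerfs /data/local:/mnt/gmedia /mnt/media \
    -o allow_other,use_ino,func.getattr=newest,category.action=all,category.create=ff
```

`category.create=ff` ("first found") sends new files to the first branch, i.e. the local disk, while reads see the merged view of both.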

@thestigma Thanks for the heads up. I have used mergerfs before, so I might use it while I wait for union.

@thestigma or @Animosity022

With mergerfs and Sonarr/Radarr, how does changing quality or renaming a file that's already in Gdrive work? My current setup has files being converted to mp4 before being sent to Gdrive via rclone. This is all done via a post-processing script launched after the move/rename from Sonarr/Radarr. Since I didn't have mergerfs before, Sonarr and Radarr would just mark those files as unmonitored on their next disk scan, and I would handle quality changes manually with a manual search. If a file was quickly replaced (before my script had time to upload it with rclone), it would be sent to the recycle bin. I still plan to upload files right away, as my Plex is not on the same server and only uses Gdrive.

With mergerfs, both applications could handle quality upgrades, which I'm guessing would save the file locally, and since the mount of Gdrive is RW, they would be able to delete the old copy. But what about renames? Sometimes files are still named TBA or "Episode" instead of the episode title, and I would like to use Sonarr to rename them when the title becomes available. Would it just do it directly on Gdrive, or would mergerfs download the file back to local disk in order to rename it?

Last thing: would the disk scanning of both apps hurt my API hits on Gdrive?

Renaming is handled fine by mergerFS. If you read up on the docs, you can see that you can assign each volume one or more groups of actions.

So, for example, you can have a certain disk do moves, renames and deletes, but not write any new files (which you may want to send to a local disk temporarily for later upload). Animosity knows more about the specifics of that setup than me, but it's all in the docs basically, and you can also go to his thread and steal his own settings as a base to work from, since they sound like they are very close to what you want...
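Concretely, that kind of split can be expressed with mergerfs branch modes (an illustrative fragment, not Animosity's exact settings):

```shell
# Branch modes: the local disk is fully writable (RW); the rclone mount is
# "no create" (NC): renames/deletes still work there, but new files never land on it.
mergerfs /data/local=RW:/mnt/gmedia=NC /mnt/media \
    -o allow_other,use_ino,category.action=all,category.create=ff
```

New files then go to the local disk for later upload, while rename/delete actions can still reach files that already live on the cloud side.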

As for scans: they use API calls to list, sure, but scans are in general not a big problem as long as they only scan for basic attributes. It's when programs scan for extended metadata or actual file contents (like, say, generating previews) that things get bad, since that requires opening each file in turn rather than just getting a bulk listing.

I would assume that Sonarr and Radarr operate on basic attributes and probably won't cause too many problems as long as the scan settings are reasonable, but I do not use these personally, so that's just my best guess. You can keep an eye on this site to get a feel for how much of your quota you are actually using and whether you see abnormally high usage compared to expectations:
https://console.developers.google.com/

@thestigma

Thanks, I will give everything a try and see. The first step will be to change how my download server works, add mergerfs, and reconfigure Sonarr/Radarr. On my Plex server I will continue with Plexdrive until I'm satisfied with my data manipulation. Then I will try rclone again to replace plexdrive 4 and 5.