Rclone command move subfolders

Hello all!

I'm trying to set up rclone. Currently I have the following:

Every download goes into a folder called Local. In this folder I have different subfolders to make sure the content gets uploaded into the right folder on Google Drive (I have the same subfolders there).

This is done by a cron script that runs the rclone move command.
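To give an idea, the cron entry is roughly along these lines (the time of day here is just an example):

# once a day, move everything under Local to the matching folders on Google Drive
0 4 * * * rclone move $HOME/Local gdrive:/Plex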

I thought this would work but I faced the following problem today:
The subfolders were gone, meaning that Sonarr and Radarr couldn't move the downloaded files into the correct folder. (Their root folder is the Local folder plus a subfolder.)

How can I make sure that the subfolders always exist, so Sonarr and Radarr don't run into this problem?

I'm using Linux as the OS and my rclone version is v1.48.0.

What's deleting the folders?

It's not deleting the folders, it's actually moving the folders to my Google Drive.

If they are gone, are they deleted or not?

No, the folders are not deleted.
The folders get written over the folders with the same name in Google Drive, and since the name is the same you won't see a difference there. The files inside the folders do end up in the correct folders in Google Drive. It's just that the local folders I would like to keep are gone as well.

I can maybe make it clearer with a picture; if you would like, I'll take some screenshots and show you what I mean.

I think I understand the issue. rclone doesn't really have a concept of folders as objects, so I think I have heard of empty folders disappearing during moves. When you ask rclone to move a whole folder, what it actually does internally is move all the files inside it, and I guess it may just delete a folder once it's empty. I am not sure if rclone can truly differentiate between moving a folder with files and just moving all the files inside it. Take this with a grain of salt, as my understanding of this is not very deep - just based on nuggets of info from NCW here and there. (Feel free to correct me if you know more, Animosity.)

I want to say I have seen some sort of flag that disables this behavior. Something like this? I am not sure if this is exactly what you need here, and I have not tried it myself:

--leave-root

During rmdirs it will not remove root directory, even if it’s empty.
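If that turns out to be the right flag, I'd expect you just append it to your move command, roughly like this (untested sketch with made-up paths; also note that, going by the description above, it only protects the root directory itself, so I'm not certain it keeps subfolders):

rclone move /path/to/Local remote:path --leave-root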

Below are a few related suggestions that don't directly address the question but may be ideas to solve your root problem from a different angle:

You might want to look into using the "union" remote to do what you are already trying to do, but in a much more transparent way. It can merge two (or more) locations together so they appear as one. Only the outermost one (which would be your local folder) gets written to, and your timed script can then handle the eventual moving of files to the cloud. This effectively amounts to a sort of manual write cache. I believe Animosity does something similar with mergerfs on his setup. See the documentation page for more info.
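Very roughly, a union remote in rclone.conf could look something like this (the remote and folder names are made up, and I'm going from memory that the option is called remotes and that the last entry listed is the one that gets written to, so please check the docs before relying on this):

[gmedia]
type = union
# merge the cloud folder and the local folder; the last one listed is the write target
remotes = gdrive:Plex /home/youruser/Local

You would then mount it (e.g. rclone mount gmedia: /home/youruser/gmedia) and point Sonarr/Radarr/Plex at that mount.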

The easiest possible solution may be to check whether your torrent software has an "incomplete downloads folder" option that lets you set a (local) working folder and have it simply move the files to their final location once they are marked done. I am assuming the main reason you are doing this setup is that directly saving torrents to the cloud works poorly. This is a fairly common feature in modern torrent clients (like qBittorrent, which I use), and this is how I solve this problem for my own use.

Not yet, I will make it more clear with pictures.

Currently I have 2 folders, a Local one and a MergerFS one.

The Local one is the folder all the downloads go to. Once a day rclone uploads (with the move command) all the files inside this folder to the MergerFS one, which is the mount to my Google Drive.

As you can see here, I have quite a lot of folders in my Google Drive:
[screenshot of the Google Drive folder structure]

What I would like is for Radarr and Sonarr to place the files in the correct subfolder. Then rclone just has to move everything inside those folders to the corresponding parent folder on Google Drive, and everything ends up in the correct place.

That's why, for example, Sonarr places the files in Local/TV Shows.

The command I use for rclone is:
rclone move $HOME/Local gdrive:/Plex

The gdrive: remote is the MergerFS folder, so everything gets moved from the Local folder to the cloud.

The problem is the following:
The folders inside Local have to stay, otherwise Sonarr and Radarr are not able to find their root folder.
Those folders disappear once the move command has finished running.

Is it possible to have another script running that checks whether those folders have been removed and, if so, creates them again?
Or can the folders not be removed at all? It's basically just the contents of those folders that has to be moved to the cloud; the folders themselves only exist to place everything in the correct folder inside Google Drive.
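Something like this wrapper is what I have in mind (just an untested sketch; Movies and TV Shows are the folders from my own setup):

#!/bin/bash
# move everything from Local to Google Drive (same move command as in my cron job)
rclone move "$HOME/Local" gdrive:/Plex
# recreate the subfolders afterwards so Sonarr and Radarr always find their root folders
mkdir -p "$HOME/Local/Movies" "$HOME/Local/TV Shows"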

You'd want Sonarr/Radarr/Plex to all point to the mergerfs mount rather than a local folder, so they can all see everything whether files are local or in the cloud.

Assuming that mergerfs works on the same principle as union, what Animosity says sounds like the right approach. Your torrent client should be able to see the already existing final location and accept that as a path, but in reality it ends up writing to the local one. I'm pretty sure rclone will then just create any missing folders when files have them in their full path (because, as I said, rclone doesn't recognize folders as a thing in themselves; it just infers them from the files).

With that sort of system an "incomplete files" folder option shouldn't be required, but if you have it then that's still a very elegant, automatic and foolproof way of doing it that I think is even preferable.

@Animosity022, a related question - is there any particular reason you use mergerfs over the built-in union remote in rclone? From my understanding they basically do the same thing, but I've never actually used mergerfs, so I may be ignorant of important details. I'd like to hear why you chose one over the other in your implementation.

I use a concept in Linux called hard linking. Hard linking works when you aren't crossing a physical disk: you basically make a link to the file immediately. This allows for a few things:

  • Multiple copies of the same file without consuming any extra disk space
  • Less disk IO since you aren't making a copy of the file
  • Immediate access for Sonarr/Radarr

You can only use hard links on the same physical disk since it's basically making another pointer to the file.
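For example (the paths here are made up, just to show the idea):

# create a second directory entry for the same file on the same disk - no data is copied
ln /data/local/torrents/Titanic.mkv /data/local/Movies/Titanic/Titanic.mkv
# both names now point to the same inode; %h is the link count, %i the inode number
stat -c '%h %i %n' /data/local/torrents/Titanic.mkv /data/local/Movies/Titanic/Titanic.mkv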


So, I just have to point everything directly to MergerFS and skip the Local folder?

Everything will be immediately stored in the cloud then?
Sounds good to me!

I can share my workflow.

I use mergerfs.

I use a local disk for writing.
I use a Google Drive remote for reading and sometimes deleting to upgrade media but generally just for reading.

My mergerfs setup has /data/local, which is my local disk, and /GD, which is my Google remote, combined into /gmedia.

Everything points to /gmedia and, via mergerfs, writes always happen to /data/local first.

I run an upload script overnight that moves /data/local to my GD; basically the local file is uploaded to my remote, and all the paths remain the same in my setup.

So say something is at Movies/Titanic/Titanic.mkv: that might be local at /data/local/Movies/Titanic/Titanic.mkv and I'd upload it to my remote at GD:Movies/Titanic/Titanic.mkv, all the while Radarr / Plex think it is at /gmedia/Movies/Titanic/Titanic.mkv, and it doesn't matter if it's local or remote.
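For reference, the mount is along these lines (simplified, not my exact options):

# merge the local disk and the Google Drive mount into one view at /gmedia
# category.create=ff makes new files land on the first branch, i.e. /data/local
mergerfs -o use_ino,allow_other,category.create=ff /data/local:/GD /gmedia

The overnight upload is then essentially just an rclone move /data/local GD: from cron.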

Does that help?

Well, no, the files go to local first, but your mergerfs should be handling that (again, assuming here that it works similarly to union). At least in union, writes always happen to whichever location is the outer wrapper (i.e. your local), so all writes go there, and this prevents most of the nasty issues that happen when you try to save in-progress files (like torrents, but also some renders etc.) directly to the cloud.
As far as I understand, this is one of the major benefits of using this sort of system over just a standard VFS write cache.

So the result should be that you point everything to the mergerfs mount: on reads it appears to be all one big thing (including what you currently have on local), but when you write it transparently goes into local (and then eventually gets moved across by a script). Radarr etc. should accept the folders that already exist on the cloud drive, because as far as they can see the folder is there, even if it may not actually exist on local at the moment.

At least that's my best understanding of this - Animosity has been using this sort of setup for a while so I defer to him on all details, but his description seems to fit my current understanding fairly well.

I think I tried to make the exact same setup, although I couldn't get Sonarr and Radarr to point to the Google Drive folder while still downloading every file to local.

But the problem I'm facing is:
I move everything in the Local folder, but the folder Movies always needs to exist underneath the Local folder because Sonarr is pointed to it.

Sonarr is pointed to the folder Local/Movies.
The rclone move script is told to move everything inside the Local folder, so the folder Movies disappears.

How can I make sure that the folder Movies always exists? Otherwise Sonarr can't move Titanic into that folder (Local/Movies) when the download is finished.
