Mergerfs questions

My server is currently set up to use a gdrive crypt which is read and written to directly by sonarr, radarr, plex, etc. I keep reading about people using mergerfs and I’m wondering what value it brings. If I merge a local directory with the gdrive crypt directory, what does that solve? Does it prevent the sonarr/radarr UI from hanging? Maybe there’s better plex performance for recent files, as they’re coming from local disk?

How does one deal with moving files from local to gdrive after a set period of time? Won’t Plex get grumpy or will it only ever reference a “mergerfs” file path regardless of whether it’s on the local disk or gdrive?

Lastly, if I already have a gdrive crypt set up, what’s the best way forward to implement mergerfs?

Many thanks for your time!

I use mergerfs for my mount.

I basically use a local disk to stage and write everything first and I run an upload script every night to move from my local storage to the cloud.

So I merge /data/local and /GD to /gmedia

mergerfs /data/local:/GD /gmedia -o rw,sync_read,allow_other,category.action=all,category.create=ff
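
If you’d rather persist that in /etc/fstab instead of running the command from a script, the same mount can be written as a fuse.mergerfs line with the identical options (nothing extra needed):

/data/local:/GD /gmedia fuse.mergerfs rw,sync_read,allow_other,category.action=all,category.create=ff 0 0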

Everything (Sonarr/Radarr/Plex/torrents) all point to /gmedia

I do not move my torrent folder each night. Instead, I hard link my media using Sonarr/Radarr; since the torrents and the media live on the same physical file system, that works without creating any extra IO.
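
If you ever want to sanity check that an import really was a hard link and not a second copy, stat should show the same inode and a link count of 2 for the torrent file and the imported file (the file names below are just made-up examples):

stat -c '%i %h %n' /data/local/torrents/Someshow.S01E01.mkv /data/local/TV/Someshow/Episode1.mkv   # same inode + link count 2 = hard link, no extra IO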

Thanks for the reply @Animosity022. I’ve been looking through your scripts on GitHub and things are becoming clearer. However, I’m slightly paranoid I might wipe my gdrive if I screw this up.

Am I right in thinking that you have pretty much mirrored the root folder structure on both local and gdrive in the mergerfs pool? When a file comes in, it’s moved/copied/linked to the local disk (/data/local) via mergerfs. Then you have a cronjob that will move any files found on local to the gdrive at night.

However, you point all your apps to look at the mergerfs mount so all the above occurs but the apps don’t see anything out of the ordinary regarding moving things to the cloud?

Lastly… how come you have a mount defined as /data/local yet your rclone move script references /data/mounts/local/? Here’s the line in your script I’m referring to.

Many thanks for your time.

Thanks! I fixed that path in the script as it had not been updated.

I wouldn’t think of things as mirrored. I can give a use case example if that helps.

/gmedia is my mount for everything.

Underneath that, it is composed of a local disk “/data/local” and my Google Drive via rclone “/GD”. With mergerfs and my specific setup, writes always go to /data/local first.

So if I copy a file, it appears in /gmedia and is actually written underneath to /data/local:

[felix@gemini ~]$ ls /gmedia
mounted  Movies  Radarr_Movies  torrents  TV  TV_Ended
[felix@gemini ~]$ cp /etc/hosts /gmedia
[felix@gemini ~]$ ls
bin  logs  scripts
[felix@gemini ~]$ ls -al /data/local
total 8
drwxrwxr-x.  5 felix felix   86 Feb 13 12:41 .
drwxrwxr-x. 10 felix felix  151 Feb 13 10:08 ..
-rw-r--r--.  1 felix felix  117 Feb 13 12:41 hosts
drwxrwxr-x.  3 felix felix   42 Feb 12 08:51 Radarr_Movies
drwx------.  3 root  root    26 Feb 13 10:00 torrents
drwxrwxr-x. 14 felix felix 4096 Feb 13 08:26 TV

My upload script then takes anything (excluding my torrent folder) and moves it to my GD, so in this example it would take /data/local/hosts and move it to GD:hosts.

While that happens, /gmedia thinks nothing has changed, as the file has always been at /gmedia/hosts.

I do my moves at night because, in the insanely rare case something accesses a file right after it was moved, Google Drive polling can take up to a minute to notice it, so the file could briefly ‘disappear’. I’ve never run into that issue, though.

So if there was a TV show, it would first appear at /data/local/TV/Someshow/Episode1.mkv until it was moved to /GD/TV/Someshow/Episode1.mkv, and it would always look like:

/gmedia/TV/Someshow/Episode1.mkv regardless of whether it’s local or remote. I’ll see if I can spend some time and put together a few examples for folks, as I think that is a great question to ask.
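
A quick way to see where a file physically lives at any point is to check the two branches directly (just the commands; the comments describe what you would expect to see):

ls /data/local/TV/Someshow/   # before the nightly move, Episode1.mkv sits here
rclone ls GD:TV/Someshow/     # after the move, it shows up on the remote instead
ls /gmedia/TV/Someshow/       # the merged path lists Episode1.mkv the whole time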

Thanks for the explanation on this - I had similar questions.

I’m currently running the cache backend on my Emby server, and thanks to a previous thread involving sonarr/radarr and the hanging GUI I’m able to work around that issue.

Streams start within 8 seconds for me and I have zero issues with API bans or 403 errors. A 6TB library takes 2 minutes to scan every night. Cache uploads are quick - though having gigabit fiber at the house certainly helps.

Is there any reason I should look into moving to a mergerfs setup like you have? Will VFS give me that much better performance?

Thanks.

So I’m definitely of the mindset, if it ain’t broke, don’t mess with it too much.

Streams would start 1-3 seconds faster as VFS is a little faster than the cache, but depending on the use case, cache works better for some things like music, as those are tiny files that open/close a lot.

It comes down to preference. I like having the option of being able to hard link in Sonarr/Radarr and avoid extra IO. For me, the one layer of mergerfs is a perfect fit for my use case. If you think that’s a high-value item for you, it would make sense. If not, I’d leave it be if it’s all working well.

Maybe that was wrong of me. Perhaps what I mean is mimic the paths so everything appears the same once the cloud upload has happened.

From my quick messing around with mergerfs and rclone I understand that it pretty much mushes the file structure together. What I’m trying to understand is, is there a circumstance where I could replace an entire folder on /GD by using rclone move?

I’m starting to think the answer is no because if I have

/data/local/tv/whatever/season1/episode10.txt
and
/GD/tv/whatever/season1/episode{1-9}.txt

And when I run your cloud upload script, which will move episode10.txt to /GD, it will create /GD/tv/whatever/season1/episode10.txt, right?

It’s not going to clobber the season1 folder and replace the first 9 episodes with episode 10, right?

Yep, it would just merge together basically.
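
If you want to convince yourself before running it for real, scope a dry run to just that folder; it should only list episode10.txt as a transfer, since rclone move never deletes files that only exist on the destination (remote name here is your gcrypt:):

rclone move /data/local/tv/whatever/season1 gcrypt:tv/whatever/season1 --dry-run -v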

I’m really sorry to keep pestering you @Animosity022… I’m 90% there now but the rc vfs/refresh command fails. I’m thinking it’s a permissions issue but I see no --rc-no-auth in your config files. Below are the errors I’m getting, if you could spare a few minutes for any suggestions it’d be most appreciated.

//$ systemctl status gmedia*
Process: 11388 ExecStart=/usr/bin/rclone rc vfs/refresh recursive=true (code=exited, status=1/FAILURE)
Main PID: 11388 (code=exited, status=1/FAILURE)

Feb 16 14:28:25 bob rclone[11388]:         "error": "couldn't find method \"vfs/refresh\"",
Feb 16 14:28:25 bob rclone[11388]:         "input": {
Feb 16 14:28:25 bob rclone[11388]:                 "recursive": "true"
Feb 16 14:28:25 bob rclone[11388]:         },
Feb 16 14:28:25 bob rclone[11388]:         "path": "vfs/refresh",
Feb 16 14:28:25 bob rclone[11388]:         "status": 404
Feb 16 14:28:25 bob rclone[11388]: }
// tail ~/logs/rclone.log
2019/02/16 14:28:25 NOTICE: Serving remote control on http://127.0.0.1:5572/
2019/02/16 14:28:25 ERROR : rc: "vfs/refresh": error: couldn't find method "vfs/refresh"
rclone --version
rclone v1.46
- os/arch: linux/amd64
- go version: go1.11.5

What’s your full mount command you are using?

Full mount command is:

ExecStart=/usr/bin/rclone mount gcrypt: /media/gcrypt \
--allow-other \
--buffer-size 256M \
--dir-cache-time 72h \
--drive-chunk-size 32M \
--log-level INFO \
--log-file /home/bob/logs/rclone.log \
--timeout 1h \
--umask 002 \
--vfs-read-chunk-size 128M \
--vfs-read-chunk-size-limit off \
--rc

What happens if you run it from the command line?

felix@gemini:~$ /usr/bin/rclone rc vfs/refresh recursive=true
{
	"result": {
		"": "OK"
	}
}

Same as you:

$ /usr/bin/rclone rc vfs/refresh recursive=true
{
    "result": {
	    "": "OK"
    }
}

What’s the full service file look like in systemd? I can try that too.

Ah ha! The After= in the refresh service file was pointing at a service that didn’t exist. I guess it was trying to hit the mount before rclone had finished setting itself up? :thinking:
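
For anyone else who hits this, the ordering bit of the refresh unit ends up looking roughly like this (gmedia.service is just a stand-in for whatever your actual rclone mount unit is called):

[Unit]
Description=Prime the rclone dir cache after the mount is up
After=gmedia.service
Requires=gmedia.service

[Service]
Type=oneshot
ExecStart=/usr/bin/rclone rc vfs/refresh recursive=true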

All green now in systemctl status gmedia* ! :smiley:

Thanks again for the help.

Woo woo! Nice. Happy you got it working!

I’m still confused by this cloud_upload script. I’m terrified I’m going to lose all the things! :smiley:

So I now have:

// local directory
$ ls /data/local
Anime  backup  Books  hello.txt  Movies  TV

// rclone crypt mount
$ ls /gcrypt
Anime  backup  Books  Movies  TV

// mergerfs mount
$ ls /gmedia
Anime  backup  Books  hello.txt  Movies  TV

hello.txt is my test file for the move command. I then run:

$ /usr/bin/rclone move /data/local/ gcrypt: -P --checkers 3 --log-file /home/bob/logs/upload.log -v --tpslimit 3 --transfers 3 --drive-chunk-size 32M --exclude-from /home/bob/scripts/excludes --dry-run

And I get back:

tail ~/logs/upload.log
2019/02/16 15:20:49 NOTICE: backup: Not making directory as dry run is set
2019/02/16 15:20:49 NOTICE: Anime: Not making directory as dry run is set
2019/02/16 15:20:49 NOTICE: Books: Not making directory as dry run is set
2019/02/16 15:20:49 INFO  :
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 0
Checks:                 0 / 0, -
Transferred:            1 / 1, 100%
Elapsed time:        5.4s

Based on the Transferred count it seems it will move the hello.txt file, but those “Not making directory as dry run is set” notices are very troubling to me. Those folders are empty on local and full of things in the gcrypt.

Will the move command I run replace my folders on gcrypt with the empty local ones? :scream:

I can’t delete those folders locally as they are required by my docker containers. :thinking:

Now I’m thinking that I’m thinking about this all wrong…

If I point the docker containers at the mergerfs folders, they will always exist. Then I let mergerfs write to the local directories first. That way the local directories can come and go as sonarr creates files and rclone move with the --delete-empty-src-dirs flag removes them, but they will always remain on the gcrypt, so the docker containers will stay happy. The nightly job would then just be the move command from above with that flag added, something like:
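
/usr/bin/rclone move /data/local/ gcrypt: --checkers 3 --transfers 3 --tpslimit 3 --drive-chunk-size 32M --exclude-from /home/bob/scripts/excludes --delete-empty-src-dirs --log-file /home/bob/logs/upload.log -v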