Gotcha. Thanks!
So, how do I get started with this on my laptop running Windows 10? Should I run rclone config and mount the unlimited-storage drive, put my movies and TV shows on it, and then add them to Plex? I don't want to waste all the space on my own local drive.
The goal of this thread is for any question related to my settings.
If you have something else, please start a new thread.
OK, then I'll try to post this in my own topic.
Hi,
I'm curious to understand if you've considered moving away from MergerFS and toward using rclone union instead? If not, what's holding you back?
I'm setting up a new system and was using MergerFS the same way you are, but I wondered what the rclone union version would look like. I can't see how to force it to write to the local disk first while still keeping the remote writable if needed. I know this question isn't directly related to your settings, but I'm curious what a like-for-like setup would look like.
Thanks
Hard links, which are documented on my page since I use them.
I ditched mergerfs a while ago and have been using rclone union fine.
The only downside is that I download files to the local directory directly instead of through the union. So --dir-cache-time becomes an issue with importing, because the cache needs to expire before the local file is seen in the union, which the *arrs and such use. This is due to the local backend's lack of change polling or something, per ncw. So your high --dir-cache-time won't be ideal unless you do everything through the union. I worked around that by setting --dir-cache-time to 1m.
As far as importing goes, I'm able to import files in arr with the union and it renames to my local media folder. Just like mergerfs does.
[union]
type = union
upstreams = local:/data/local mydrive:/:nc
create_policy = ff
[local]
type = local
(the :nc suffix marks that upstream as no-create)
btw that local path can be relative to the working directory too.
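With that config, the mount the *arrs point at might look something like the sketch below; the mount point and extra flags are illustrative, not my exact command:

```shell
# Hypothetical mount of the union remote defined above. The short
# --dir-cache-time works around the local backend's lack of change
# polling, as noted earlier, so locally written files show up quickly.
rclone mount union: /data/merged \
  --allow-other \
  --dir-cache-time 1m
```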
Depends on what you want. If you want it so that when the path already exists on the remote it creates files there, then remove the :nc and change the create policy to an existing-path-preserving one.
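As a sketch, that existing-path-preserving variant could look like this (epff = "existing path, first found" from rclone's union policy list; remote names reused from the earlier snippet):

```
[union]
type = union
upstreams = local:/data/local mydrive:
create_policy = epff
```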
Hard links don't work in rclone:
felix@gemini:~/test$ ln file1 file2
ln: failed to create hard link 'file2' => 'file1': Function not implemented
felix@gemini:~/test$
and
2021/06/01 07:50:24 DEBUG : /: Lookup: name="file2"
2021/06/01 07:50:24 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2021/06/01 07:50:24 DEBUG : /: Link: req=Link [ID=0x3c Node=0x1 Uid=1000 Gid=1000 Pid=267523] node 11 to "file2", old=file1
2021/06/01 07:50:24 DEBUG : /: >Link: new=<nil>, err=function not implemented
Rclone (and the union remote in rclone) do not support hard links so it does not work for my use case.
The union remote also does not support polling, which means you'd have to use a low --dir-cache-time for things to work:
2021/06/01 07:49:50 INFO : union root '': poll-interval is not supported by this remote
Which means everything on my Google Drive would be slow for listings and such.
That being said, it doesn't meet the requirements of my use case, which is why I use mergerfs.
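For anyone comparing the two, a bare-bones mergerfs equivalent of what's described in this thread might look like the line below; the branch paths and options are illustrative (my actual settings are on my page), using mergerfs's documented category.create=ff policy so new files land on the local branch:

```shell
# Hypothetical mergerfs mount: /data/local is listed first and the create
# policy is "ff" (first found), so new files and hard links are created on
# the local disk while the rclone mount stays readable through /data/merged.
mergerfs /data/local:/data/GD /data/merged \
  -o use_ino,allow_other,category.create=ff
```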
Hey @Animosity022, long time
Does VFS read-ahead work with buffer size when no vfs-cache-mode is specified?
Not sure that's related to any setting in my thread though
Yes it does. Please start a new thread for general questions rather than using my settings thread.
Thank you so much for your guide.
I've encountered a problem after uploading media from local to cloud.
Imagine I move a TV show that hasn't ended yet. If I use --delete-empty-src-dirs,
then the *arrs won't be able to automatically import new episodes, since the show's folder was deleted locally. So I need to recreate the TV show folder and the Season folder to let Sonarr import them.
Is there any way to fix this?
My structure follows this scheme:
*arrs and torrent downloader point at /merged but downloads happen locally.
/data
/local (local folder)
/GD (rclone mount)
/merged (/GD + /local)
That's not the case for me as I point everything to my merged mount point so in your case, it would be /data/merged
/data/merged/TV/AShow
would always exist, as once it's moved, it goes from /data/local/TV/AShow to /data/GD/TV/AShow, and with mergerfs, /data/merged/TV/AShow is always there.
If you aren't pointing your Sonarr/Radarr/etc to the mergerfs area, you'll have issues.
Thanks for your reply.
As I stated before, all my *arrs point at /data/merged (which is the parent folder of the TV shows, movies, and torrents folders). However, whenever I download a new episode of a TV show that only exists in the cloud, Sonarr won't be able to import it because (let's call the TV show "test") the test folder resides in the cloud and there's no test folder locally.
Even though the test folder is in /data/merged, Sonarr doesn't know how to import that new episode. However, if I don't delete the TV show folder from the local folder, Sonarr imports the episodes without problems.
Yeah it does import without a problem as I do it all the time.
Here's an example of what you are saying:
felix@gemini:/GD$ mkdir test
felix@gemini:/gmedia$ cd
felix@gemini:~$ ls -al /gmedia/test
total 4
drwxrwxr-x 1 felix felix 0 Aug 1 15:57 .
drwxrwxr-x 6 felix felix 4096 Aug 1 08:20 ..
felix@gemini:~$ ls -al /local/test
ls: cannot access '/local/test': No such file or directory
felix@gemini:~$ cp /etc/hosts /gmedia/test
felix@gemini:~$ ls -al /gmedia/test
total 12
drwxrwxr-x 2 felix felix 4096 Aug 1 15:58 .
drwxrwxr-x 7 felix felix 4096 Aug 1 15:58 ..
-rw-r--r-- 1 felix felix 130 Aug 1 15:58 hosts
felix@gemini:~$ ls -al /local/test/
total 12
drwxrwxr-x 2 felix felix 4096 Aug 1 15:58 .
drwxrwxr-x 7 felix felix 4096 Aug 1 15:58 ..
-rw-r--r-- 1 felix felix 130 Aug 1 15:58 hosts
felix@gemini:~$
It'll make the directory, as Sonarr does the same thing. I've been running this setup for many, many months, and if your imports are failing, you have something else going on.
hmm weird...
You're talking about hardlinks from the download directory to the local media library, right?
Sonarr says it's a permissions problem, where Sonarr can't access /data/merged/torrents/tvshows or /data/merged/media/tvshows/test
It would not matter if it's a hard link or a regular copy.
felix@gemini:/gmedia$ cp /etc/hosts /gmedia/test
felix@gemini:/gmedia$ ls -al /local/test2
ls: cannot access '/local/test2': No such file or directory
felix@gemini:/gmedia$ cd test
felix@gemini:/gmedia/test$ ln hosts /gmedia/test2/
felix@gemini:/gmedia/test$ ls -al /local/test2
total 12
drwxrwxr-x 2 felix felix 4096 Aug 1 16:16 .
drwxrwxr-x 8 felix felix 4096 Aug 1 16:16 ..
-rw-r--r-- 2 felix felix 130 Aug 1 16:15 hosts
felix@gemini:/gmedia/test$
There is a hard link example. If you have a permissions issue, something else is going on. I run all my things as the same user.
Found the issue!
I needed to add --umask 002 to my rclone mount command.
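For anyone hitting the same thing: umask 002 clears only the other-write bit, so files created through the mount come out group-writable (664), which lets another app running in the same group import them. A quick shell illustration of the permission math (nothing rclone-specific):

```shell
# New files are created as 666 & ~002 = 664: owner rw, group rw, other r.
umask 002
touch demo_file
stat -c %a demo_file   # prints 664
rm -f demo_file
```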
Thanks for your help.
Does vfs-max-size factor in the read-ahead value? E.g. if I have 1536MB max size and 300MB read ahead, how much space should it end up taking?