Having problems hard linking files with MergerFS

I'm having trouble hard linking files downloaded by Deluge into the local directory that is part of my MergerFS mount.

This is what my MergerFS mount looks like:
~/Stuff/Local - Local Mount
~/Stuff/Mount - GDrive Mount

MergerFS Setup:
mergerfs -o rw,use_ino,auto_cache,async_read=false,allow_other,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true,threads=12 /home6/acmojado/Stuff/Local:/home6/acmojado/Stuff/Mount /home6/acmojado/MergerFS
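For reference, `category.create=ff` (first found) means any file created through the pool lands on the first branch listed, here the Local one. A quick way to double-check where a file really lives is to look in each branch path directly; `which_branch` below is just an illustrative helper, not part of mergerfs:

```shell
#!/bin/sh
# which_branch FILE BRANCH...: print the branch directory that actually
# holds FILE. With create=ff, a file created through the pool should
# turn up in the first branch (the Local one in this setup).
which_branch() {
  f=$1; shift
  for b in "$@"; do
    [ -e "$b/$f" ] && echo "$b"
  done
}

# e.g.: which_branch movie.mkv ~/Stuff/Local ~/Stuff/Mount
```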

RClone Setup:

ExecStart=/home6/acmojado/bin/rclone mount gdrive: /home6/acmojado/Stuff/Mount \
  --dir-cache-time 168h \
  --timeout 1h \
  --umask 002 \
  --vfs-cache-mode writes \
  --vfs-read-chunk-size 64M \
  --vfs-read-chunk-size-limit 2048M
ExecStop=/bin/fusermount -uz /home6/acmojado/Stuff/Mount

If you head to that directory and do a quick test, what do you get if you do something like:

felix@gemini:/gmedia$ cp /etc/hosts .
felix@gemini:/gmedia$ ln hosts blah
felix@gemini:/gmedia$ ls -al hosts blah
-rw-r--r-- 2 felix felix 227 Dec  6 13:41 blah
-rw-r--r-- 2 felix felix 227 Dec  6 13:41 hosts

I just used a small file to test with.
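The link count of 2 in that listing is the giveaway: both names point at one inode. A generic sketch (nothing mergerfs-specific) to verify two names really are hard links of each other:

```shell
#!/bin/sh
# Hard links share one inode; if the inode numbers match, the two
# names refer to the same file.
same_inode() { [ "$(stat -c %i "$1")" = "$(stat -c %i "$2")" ]; }

cd /tmp
echo test > hosts_copy
ln hosts_copy blah_copy
same_inode hosts_copy blah_copy && echo "same file"
rm -f hosts_copy blah_copy
```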

/home6/acmojado/MergerFS shows the following:

These folders are the same as the ones in ~/Stuff/Local/, and the files then get uploaded to GDrive.

Can you run a test like I did on the mergerfs mount with a single file and create a hard link? Basically, just pick a file, replicate what I did, and share the output.

Wait, I'm a bit confused: which directory would you like me to use?

Your mergerfs directory.

acmojado@lw823:~/MergerFS$ ln test blah
acmojado@lw823:~/MergerFS$ ls -al test blah
-rw-r--r-- 2 acmojado acmojado 3 Dec 6 20:02 blah
-rw-r--r-- 2 acmojado acmojado 3 Dec 6 20:02 test

So that shows hard linking works without an issue.

Can you describe what issue you are having and an error log?

I don't have an error log, but the problem is that I pointed Radarr/Sonarr to the MergerFS folder while Deluge downloads and stores everything in ~/home6/Downloads. I do get that I can't hard link from outside the MergerFS folder. With that said, do you have any recommendations for how to set up the whole download workflow?
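For context, that limitation is fundamental: ln fails with EXDEV ("Invalid cross-device link") whenever source and target sit on different filesystems, and the mergerfs pool is a different filesystem from a plain download directory. You can pre-check whether a hard link is even possible by comparing device IDs; `same_fs` here is just a hypothetical helper for illustration:

```shell
#!/bin/sh
# A hard link only works within one filesystem; if the device IDs of
# the two paths differ, ln will fail with EXDEV.
same_fs() { [ "$(stat -c %d "$1")" = "$(stat -c %d "$2")" ]; }

# e.g.: same_fs ~/Downloads ~/MergerFS || echo "hard link would fail"
```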

You point everything to the mergerfs folder.

Yes so that radarr/sonarr could import the files there.

Right now I'm trying to point Deluge's download folder to the MergerFS folder, but the problem is that everything there then gets uploaded to GDrive.

Everything should point to that.

If you point everything to that, any write will, based on your mergerfs create policy, land on the underlying local branch first.

Once they are there, I basically run an upload script each night that moves files from that local area to my GDrive.

In my example, everything is written to /local first:

felix@gemini:/gmedia$ cp /etc/hosts .
felix@gemini:/gmedia$ ls -al /local/hosts
-rw-r--r-- 1 felix felix 227 Dec  6 14:48 /local/hosts
felix@gemini:/gmedia$ ps -ef | grep mergerfs
root      1397     1  1 14:25 ?        00:00:23 /usr/bin/mergerfs /local:/GD /gmedia -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=auto-full
felix     4800  4721  0 14:48 pts/0    00:00:00 grep mergerfs

And everything points to /gmedia

Is there a way to exclude a folder when uploading the local folder? Also, when I tried pointing Deluge to the MergerFS folder, there were times when a torrent ended up in an error status. :frowning:

Yeah, that's exactly what I do:

felix@gemini:/opt/rclone/scripts$ cat upload_cloud
# RClone upload script

#exit if running
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then exit; fi

# Move older local files to the cloud
/usr/bin/rclone move /local/ gcrypt: --log-file /opt/rclone/logs/upload.log -v --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs --fast-list --max-transfer 700G --drive-chunk-size=1G


felix@gemini:/opt/rclone/scripts$ cat excludes
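One side note on the script: the `pidof -x` guard works, but it can misfire if another process happens to share the script's name. flock(1) is a more robust single-instance guard; a minimal sketch of the same idea (the lock path is arbitrary):

```shell
#!/bin/sh
# Single-instance guard with flock(1): a second copy of the script
# exits immediately instead of racing the running upload.
exec 9> /tmp/upload_cloud.lock
flock -n 9 || exit 0

# ... the rclone move command from above would go here ...
```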

Alright, just an update: I started over from scratch so everything is a clean slate. I have pointed everything to my MergerFS folder and set up the exclusion method you provided; now I just need to test it out. Hopefully it works out well. Thank you so much for helping me! :slight_smile:

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.