Question about mergerfs with gcrypt mount

I've been reviewing some of the mergerfs threads and still can't seem to fix my issue, so hopefully someone can white-glove me a bit and steer me in the right direction. I've been reviewing Animosity's GitHub repo as well, and there's probably something simple I'm missing.

My use case:
- Docker host running Ubuntu Server 18.04; Plex running in a container on this host using host networking
- rclone gdrive and gcrypt remotes mounted on the docker host OK
- rclone.service is started and I can browse my gcrypt: with no problem, see the file tree, upload files, etc.
- Latest rclone version and latest mergerfs version; grabbed the latest .deb packages from their respective GitHub repos

I am trying to get mergerfs set up so I can see the gcrypt volume inside Sonarr/Radarr etc., but I am a bit confused about the path structure. I am not using rclone cache, just an rclone mount with VFS options, though I can use some local storage if necessary (not ideal). I can't work out how the /local path relates to my local setup. The /GD folder is where the rclone mount path points, that much I get, but the /local path has me a bit lost.

Here's my rclone.service:

[Unit]
Description=rclone mount for google drive crypt

[Service]
ExecStart=/usr/bin/rclone mount gcrypt: /media/NAS/_temp/crypt \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G \
  --tpslimit 5 \
  --tpslimit-burst 5 \
  --log-file /tmp/gcrypt.log \
  --umask 002
ExecStop=/bin/fusermount -uz /media/NAS/_temp/crypt


The /media/NAS dir is an SMB mount to my NAS, done in fstab on the docker host:
// /media/NAS cifs credentials=/etc/login.cred,uid=bill,iocharset=utf8,vers=3.0,noperm 0 0

Please describe your use case in more detail.

Animosity uses local storage for his downloads and then uploads it to gdrive.
To reduce API hits and increase speed, Sonarr/Radarr, etc. operate on the local data, and after the upload the data is still visible to these apps through mergerfs.

Why do you mount rclone inside a CIFS share instead of mounting it on the NAS directly?


Sure! Appreciate the response. 🙂

Ideally I will run a very similar setup: point Sonarr/Radarr to local storage (the CIFS share) and, once data is uploaded to gdrive, have them continue to see it in gdrive via mergerfs.

I originally mounted the rclone mount on the CIFS share so Sonarr/Radarr/Plex could see it, since I pass the CIFS share volume into the containers. That was before I knew about mergerfs, and also because the docker host only has about 50GB of free space. I'm completely open to doing it a different way as long as I can use the CIFS share as local storage before data is uploaded to gdrive.

I find Docker to be added complexity I don't need in my setup.

The mergerfs part should be pretty straightforward: you just need to lay out your paths and put it together.

My use case is I always want to write to the first drive, which happens to be my local disk.

So all the applications (Sonarr/Radarr/qBit/etc.) point to the merged mount.

I call that merged mount point /gmedia in my setup. Everything points to that.

Under that, I define my local storage first, which is /local, and my rclone mount second, which is /GD.

The mergerfs policy I use is first found, write there.
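A minimal sketch of that layout (the exact flags here are one reasonable choice, not necessarily the options Animosity actually runs with):

```shell
# Merge the local disk (first branch) and the rclone mount (second
# branch) into one view at /gmedia. category.create=ff ("first found")
# sends new files to the first branch that accepts them -- the local
# disk -- and allow_other lets other users (e.g. containers) see it.
mergerfs -o use_ino,allow_other,category.create=ff \
  /local:/GD /gmedia
```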

The only exception to the process is my nightly upload. I upload everything Movie- or TV-related from /local to my Google Drive remote.

That way my apps never see a path change and things just work. I do it in the middle of the night, as that's a quiet time. Worst case scenario, there is a 10-second window while a file is completely moved, before polling picks it up from the Google remote.
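Such a nightly job could be sketched roughly like this (the gcrypt: remote and the TV/Movies layout are from this thread; the script itself and its flag values are illustrative assumptions, not the exact upload script used here):

```shell
# Hypothetical nightly-upload.sh, run from cron in the quiet hours.
# rclone move deletes each local copy once it is safely uploaded;
# mergerfs keeps the merged view stable, so the apps see no path change.
rclone move /local/TV gcrypt:TV \
  --delete-empty-src-dirs --log-file /tmp/upload.log
rclone move /local/Movies gcrypt:Movies \
  --delete-empty-src-dirs --log-file /tmp/upload.log
```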

If you can share your setup and names, I'm sure we can help get you configured and explain anything that isn't clear.

Got it, OK cool. So I'll break down my setup a bit further.

Sonarr/Radarr point here: /media/NAS/Video/TV and /Movies respectively.
My rclone crypt mount is mounted via rclone.service here: /media/NAS/_temp/crypt

I'm all for writing the data locally first, and I've got the scratch space on the CIFS share to do it. The local disk of the docker host has under 60GB free, so I'd prefer to queue everything up locally on the CIFS share before the nightly upload to gdrive.

So if I am understanding correctly, I use mergerfs to combine /media/NAS/_temp/crypt/ with /media/NAS/Video/TV, and then point Sonarr/Radarr to their respective /TV and /Movies sub-directories that live under /media/NAS/_temp/crypt, correct?
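For reference, with the paths described above, the mergerfs side could look roughly like this (/media/gmedia is a hypothetical name for the merged mount point; Sonarr/Radarr would then point at the merged mount rather than at the crypt mount directly):

```shell
# Local CIFS storage first, rclone crypt mount second: new downloads
# land on the CIFS branch, while already-uploaded files remain visible
# through the crypt branch at the same paths.
mergerfs -o use_ino,allow_other,category.create=ff \
  /media/NAS/Video:/media/NAS/_temp/crypt /media/gmedia
```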
