Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

Hello friends, I've been trying to find the optimal configuration of rclone and Plex for a while, but I can't get it right.
I don't know why, but with this configuration, when someone tries to play something in Plex, the mount starts downloading flat out, saturating my connection with a single Plex playback on a Samsung TV.
This is the command I run for the mount:

rclone mount --allow-other --allow-non-empty --dir-cache-time=4h --vfs-cache-max-age=24h --vfs-cache-max-size=3G --vfs-read-chunk-size=64M --vfs-read-chunk-size-limit 2048M --timeout 1h --no-modtime --no-checksum --umask 002 --max-read-ahead 64k --cache-dir=/root/.config/rclone/cache --uid 1000 --gid 1000 jam: /home/plexmd2 &

@jamper - please open a new post as this is not related to my settings. Thanks.

@xyou365 - same. If you have a question related to my settings, please ask. If it's a separate topic, you can start a new thread.

Sorry, I will open it in a new post. Thanks.

Ok. But I am using your settings.

When using your settings plus an rclone cache remote, I found there are many rclone PIDs in the background and the CPU is maxed out. How can I limit the number of instances (PIDs) when using rclone mount?

The cache settings in rclone.conf are:

[gd_cache]
type = cache
remote = gd:
chunk_size = 5m
chunk_total_size = 100G
info_age = 192h
workers = 4

and the contents of rclone-seeding.service are:

[Unit]
Description=Rclone Seeding Mount
After=network-online.target

[Service]
Type=simple
GuessMainPID=no
User=root
Group=root
ExecStart=/usr/bin/rclone mount \
    gd_cache:together /home/cody/qbittorrent/download/gd \
    --read-only \
    --allow-other \
    --buffer-size 64M \
    --transfers 4 \
    --dir-cache-time 96h \
    --drive-chunk-size 32M \
    --log-level INFO \
    --log-file /home/cody/rclone.log \
    --timeout 1h \
    --umask 002 \
    --stats 1m \
    --rc

ExecStop=/bin/fusermount -uz /home/cody/qbittorrent/download/gd
Restart=on-failure
RestartSec=5
TimeoutSec=60

[Install]
WantedBy=default.target
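
(For reference, a unit like this is reloaded and started in the usual systemd way; the service name below is taken from the file name above:)

sudo systemctl daemon-reload
sudo systemctl enable --now rclone-seeding.service
journalctl -u rclone-seeding.service -f    # watch the service's journal output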

You aren't using my settings if you are using the cache backend.

If you have a question related to the cache backend and your setup, open a new thread :slight_smile:

Those also look like threads, not processes, in htop, as its default setting is to show threads.
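
If you want to double-check, something along these lines works (the htop keys are standard; only the process name rclone is assumed here):

ps -C rclone -o pid,nlwp,cmd    # one line per real process; NLWP = number of threads in it
# In htop, press Shift+H to toggle "Hide userland process threads",
# or change it permanently under F2 (Setup) -> Display options.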

Hello. Everything is working fine, but I have a problem: imports in both Sonarr and Radarr are really slow (I think mergerfs is copying the file rather than moving or linking it, and sometimes the import fails too).
What could have gone wrong? Should I use the hardlink option in Sonarr or not? I didn't notice any difference either way, and it's still really slow.

The torrent I'm importing is stopped (to avoid problems), and I also tried with non-torrent files (a manual import via FTP to test), but it's still really slow.

INFO: I'm using a slowish single HDD, no multi-disk setup or special partitioning or anything.

If you don't use hard links, it copies the file, so the speed depends on your local disk.

I use hard links (which is why I use mergerfs in the first place).

If you have hard links on and it's not copying, I'd start a new post and we can troubleshoot your setup.

Are hardlinks beneficial even for someone who isn’t seeding torrents? All my files are obtained via Usenet, but I’d love to do anything that offers a speed increase when post-processing large files.

A hard link is beneficial if you are on the same local drive or are using mergerfs.

Sonarr and Radarr both copy and do not move, so using hard links makes that operation instant rather than creating another duplicate of the same file (assuming you have the hard link option checked).
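
A tiny sketch of why that is instant (paths are just examples; ls -i shows both names point at the same inode, so no data is copied):

mkdir -p /data/local/downloads /data/local/movies
dd if=/dev/zero of=/data/local/downloads/example.mkv bs=1M count=10   # stand-in "download"
ln /data/local/downloads/example.mkv /data/local/movies/example.mkv   # hard link: finishes instantly
ls -i /data/local/downloads/example.mkv /data/local/movies/example.mkv   # identical inode numbers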

I just went in to my Radarr and Sonarr settings, and I was surprised to see I already had the hardlinks option enabled. I’m relatively sure it isn’t working though. I am using your recommended settings of using mergerfs to combine my rclone cache gdrive and a local folder on my server. Post-processing movies takes a really long time, and I notice files with the .partial extension during the process. This is a sign that files are being copied instead of hardlinked, correct? If so, do you have any suggestions on what to check to further diagnose?

nzbget, deluge, radarr, sonarr etc. need to be mapped to the mergerfs mount for hardlinks and file moves (not copies) to work, i.e. they need to be on the same 'drive', e.g. (using @Animosity022's setup):

  • nzbget, deluge etc. need to download to /gmedia/downloads and not /data/local/downloads
  • radarr etc. need to pick up the files from /gmedia/downloads and move/hardlink them to /gmedia/movies

If you use mappings to /data/local, you will get a lot of unnecessary disk writes and slow importing.
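
A quick way to confirm the mappings are right is to try a hard link entirely inside the mergerfs mount (paths follow the /gmedia layout above; adjust them to your own):

touch /gmedia/downloads/hardlink-test
ln /gmedia/downloads/hardlink-test /gmedia/movies/hardlink-test && echo "hard links work"
stat -c '%i %n' /gmedia/downloads/hardlink-test /gmedia/movies/hardlink-test   # same inode = same file
rm -f /gmedia/downloads/hardlink-test /gmedia/movies/hardlink-test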

Gotcha. I will make that change and see if it works. Also, on an unrelated note, do you happen to know what the correct procedure is to "remount" a mergerfs mount? I need to change one of the -o flags, but I'm not sure how to redo my mount.

UPDATE: That seems to have worked. I changed my nzbget download directory to point to my mergerfs mount, and it seems like hardlinks are working now. Still wondering how to update my mergerfs mount. Can I just run the mergerfs command again, or do I need to remove my current mount first?

You unmount it the same way you unmount rclone: you can use fusermount on it.
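
Put together, a remount looks roughly like this (the branch paths and -o options are only placeholders for whatever your current mergerfs command uses):

fusermount -uz /gmedia    # lazy unmount of the existing mergerfs mount
mergerfs /data/local:/gd /gmedia \
  -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff,cache.files=auto-full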

@Animosity022 On your GitHub, you mention that you don’t recommend using the Plex integration of the rclone cache backend because it slows things down. Could you expand on this at all? How/why would this slow things down, and do you have to do anything special to make everything work if you aren’t going to set up the Plex integration?

I don't use the cache backend.

When you set up the Plex integration, it makes everything use 1 worker until it detects playback, so it has a slow startup.

So the Plex integration doesn’t raise the number of workers above the default during playback; it just limits the number of workers when playback isn’t happening? Do you change from the default (4) workers in your setup?

Sorry if that wasn't clear.

When you play something back, it starts with 1 worker; once the Plex integration picks up that you are playing something, it uses more workers.

Anything else, like scanning, will just use 1 worker.
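
For context, the Plex integration is just a few extra fields on the cache remote in rclone.conf, roughly like this (the URL and token values are placeholders):

[gd_cache]
type = cache
remote = gd:
plex_url = http://127.0.0.1:32400
plex_token = XXXXXXXXXXXX
workers = 4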

Gotcha. I misunderstood how the integration worked. I thought it ramped up the number of workers higher than the default during playback rather than limiting the number of workers when doing things other than playback.

@Animosity022, on your GitHub page, when talking about your mergerfs setup, you mention that you use cache.files=auto-full as it replaced auto_cache. In the bit of code above the explanation, though, you have cache.files=partial listed. I don’t know enough about the setting to know the difference, and I’m wondering which one is actually preferred. Thanks.