4k mount / deleting files / torrenting

Hey guys,

Since I slimmed down my setup (threw out Plexdrive and moved from UnionFS to MergerFS), I have been testing streaming and general usage.

Homeserver:

  • 100Mbit down
  • Debian 10
  • Gdrive crypt

So I tested 4K streaming and chose a very recent movie that needs about 50Mbit/s. While I could see via Netdata that the server mostly used 90+Mbit/s, I experienced a few lags that slightly spoiled an otherwise nice watch.

  • Plex on LG OLED C8 WebOS = Just a few lags
  • Plex on Nvidia Shield System = Considered unwatchable
  • Plex on Nvidia Shield Kodi = Messed up picture and sound for like 30 seconds after each lag

I assume that is mostly down to client-side caching, and I could live with watching via the TV's Plex app, but I have to admit it works just fine when streaming from my rented dedicated server.
Anyway, is there any optimization I can do?

rclone mount:

[Unit]
Description=RClone Mount Auto
AssertPathIsDirectory=/mnt/google/auto-gd
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount "auto-gd:/" /mnt/google/auto-gd \
   --allow-other \
   --allow-non-empty \
   --acd-templink-threshold 0 \
   --buffer-size 2G \
   --checkers 32 \
   --config /root/.config/rclone/rclone.conf \
   --dir-cache-time 144h \
   --drive-chunk-size 32M \
   --fast-list \
   --log-level INFO \
   --log-file /home/scripts/logs/mount-auto.cron.log \
   --max-read-ahead 2G \
   --read-only \
   --tpslimit 10 \
   --vfs-cache-mode writes \
   --vfs-read-chunk-size 128M \
   --vfs-read-chunk-size-limit off \
   --stats 0
ExecStop=/usr/bin/fusermount -uz /mnt/google/auto-gd
Restart=on-failure
RestartSec=10

[Install]
WantedBy=multi-user.target

MergerFS:

[Unit]
Description = /home/user/downloads/auto MergerFS Mount
After=mount-auto.service
RequiresMountsFor=/mnt

[Mount]
What = /mnt/google/auto:/mnt/google/auto-gd
Where = /home/user/downloads/auto
Type = fuse.mergerfs
Options = sync_read,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,auto_cache

[Install]
WantedBy=multi-user.target

In the next case I just want to get rid of the test files in one directory, but I can't, since the rclone mount is read-only. And of course I can't do it via the Gdrive website because of the encryption. What is the recommended approach here? I could maybe live with not being able to delete a torrent via the ruTorrent web UI, but do I need another read/write rclone mount then?
In that case I also want to question the upload folder. Couldn't files placed in a read/write rclone mount be encrypted and uploaded automatically, so I can also manage the torrents completely from the web?
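For what it's worth, one way to get deletes without touching the read-only mount would be a second, read/write mount of the same remote at a separate path. This is only a sketch reusing the flags from the unit above; the `-rw` mount point name is made up:

```shell
# Sketch: a second, read/write mount of the same crypt remote at a
# separate, hypothetical mount point. Deletes done here go through to
# Gdrive, while the read-only mount stays untouched.
/usr/bin/rclone mount "auto-gd:/" /mnt/google/auto-gd-rw \
   --allow-other \
   --config /root/.config/rclone/rclone.conf \
   --dir-cache-time 144h \
   --vfs-cache-mode writes
```

Note that with two mounts of the same remote, each keeps its own directory cache, so a delete on the RW mount may take up to the dir-cache-time to become visible on the RO one.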

Slightly related to managing torrents: yesterday I added 35 torrents in one go and found out today that only a few downloaded completely, some downloaded partly, and others are stuck at 0%. Even though each torrent has several seeders, some over 200, they don't really download. Four torrents currently show 0.2KB/s to 0.8KB/s, others don't download at all despite dozens of seeders. Could it be that the upload script already uploaded the files and rtorrent can't handle that? The script runs every 5 minutes, uploading files older than 5 minutes, and it looks like rtorrent doesn't update the timestamp of unfinished torrents when they aren't downloading. But if that is the case, running the script only once per night won't solve it either; there may be torrents with just one home-PC seeder, and it may therefore take days until they finish...

Edit: I just rechecked, and the files that still show as downloading at very slow speed don't get their timestamps updated; they still carry yesterday's timestamp.

Upload-script:

#!/bin/bash
# RCLONE UPLOAD CRON TAB SCRIPT
# Type crontab -e and add the line below (without the #)
# */5 * * * * /home/scripts/upload-auto.cron >/dev/null 2>&1

# Exit if another instance of this script is already running
if pidof -o %PPID -x "upload-auto.cron"; then
    exit 1
fi

LOGFILE="/home/scripts/logs/upload-auto.cron.log"
FROM="/mnt/google/auto/"
TO="auto-gd:/"

# CHECK FOR FILES IN THE FROM FOLDER THAT ARE OLDER THAN 5 MINUTES
if find "$FROM" -type f -mmin +5 | read -r
then
    echo "$(date "+%d.%m.%Y %T") RCLONE UPLOAD STARTED" | tee -a "$LOGFILE"
    # MOVE FILES OLDER THAN 5 MINUTES
    rclone move "$FROM" "$TO" -c --no-traverse --transfers=300 --checkers=300 --delete-after --delete-empty-src-dirs --min-age 5m --log-file="$LOGFILE"
    echo "$(date "+%d.%m.%Y %T") RCLONE UPLOAD ENDED" | tee -a "$LOGFILE"
fi
exit 0
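One possible workaround for the timestamp issue would be to skip files the torrent client is still writing, assuming the client can be configured to mark incomplete downloads with a suffix (qBittorrent can append "!qB", Deluge has a similar option; rtorrent has no such built-in). The ".part" suffix below is an assumption for illustration. A sketch of just the selection step:

```shell
# list_uploadable: print files under the directory given as $1 that are
# older than 5 minutes and do not carry the (assumed) incomplete-download
# suffix ".part". Finished files qualify; in-progress files are skipped
# regardless of their timestamp.
list_uploadable() {
    find "$1" -type f -mmin +5 ! -name '*.part'
}
```

The rclone move itself could equally be driven with --exclude '*.part' alongside --min-age 5m instead of a separate find.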

Does anyone have suggestions, tips, or similar use cases? I would be glad to finally solve all of this.

--allow-non-empty: I'd remove that, as it allows over-mounting / hiding / multiple processes on the same mount point. It's generally bad and shouldn't be used.

--acd-templink-threshold: this can be removed, as it's an Amazon Drive option and an oldie.

--fast-list: this doesn't work on a mount and can be removed.

--max-read-ahead: this only works if you have a custom-compiled kernel with read-ahead configured for 2G, so it can probably be removed.
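Putting those removals together, a trimmed-down mount command might look like the sketch below; it only drops the four flags called out above and adds nothing new:

```shell
# Sketch of the original ExecStart with --allow-non-empty,
# --acd-templink-threshold, --fast-list and --max-read-ahead removed.
/usr/bin/rclone mount "auto-gd:/" /mnt/google/auto-gd \
   --allow-other \
   --buffer-size 2G \
   --checkers 32 \
   --config /root/.config/rclone/rclone.conf \
   --dir-cache-time 144h \
   --drive-chunk-size 32M \
   --log-level INFO \
   --log-file /home/scripts/logs/mount-auto.cron.log \
   --read-only \
   --tpslimit 10 \
   --vfs-cache-mode writes \
   --vfs-read-chunk-size 128M \
   --vfs-read-chunk-size-limit off \
   --stats 0
```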

There have been quite a number of bugs on the Shields lately (glad I never got one).

I'd check through the Plex reddit and the Shield one; you'll find much more info on that there.

Sadly, though, if you only have 100Mbit/s down, you are going to hit issues streaming larger 4K movies. The only way around that is to reduce the maximum bitrate on the player, as large movies really spike bandwidth up and down.

Here is an example of the initial pull / playback of a larger 4K movie. It really grabs a huge chunk in the beginning to get some buffer.

and that's only really a tiny 4K movie:

[image: bandwidth graph of the initial 4K playback]

On any Plex app (except the one you can download on a computer), you don't have the option to change any of the cache settings; they are fixed per device. The buffers/caches are usually pretty small.

So in the end, you'd really want to upgrade your link or limit the bitrate in Plex to something more fitting for your pipe.

Thanks, I edited my config accordingly. Could that already improve performance a bit?

Do you have any idea regarding the other two topics, deleting and torrenting?

I'd remove anything you aren't familiar with and leave the defaults.

As for torrents, you'd want to not delete them until they are done. I use a different setup with mergerfs and my setup is documented here:

Mh, thanks. It will take me some time to understand your setup; is there a short answer you could give regarding my setup to at least delete single files or folders? I should mention that I don't have 6TB free for torrents, only 400GB.

And as far as I can see, you only do the nightly upload, which brings back my earlier question about torrents from slow seeders that aren't online 24/7, since the torrent client doesn't update timestamps while not downloading.

Other than: don't move files out of your torrent folder :slight_smile:

I use hard links with mergerfs, and my items copied to Plex locally are moved.

I never use rclone move on my actual seeding area. I let my torrent client handle removing torrents after seed requirements are met.
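The hard-link workflow above can be sketched like this (the helper name and paths are made up; note hard links only work when source and destination sit on the same filesystem, so with mergerfs both paths must resolve to the same branch):

```shell
# hardlink_into: link a finished download into the media library without
# copying any data. Both names point at the same inode, so removing the
# seeding copy later does not delete the library copy.
hardlink_into() {
    src="$1"; destdir="$2"
    mkdir -p "$destdir"
    ln "$src" "$destdir/$(basename "$src")"
}
```

With this, the torrent client keeps seeding from its own folder while Plex scans the linked copy, and no rclone move ever touches the seed area.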


Ok, I have almost resolved all problems except the streaming; I haven't retested that yet, though.

I used all your configs and adapted them. Deleting now works fine. I also tested torrent clients, and it turns out that only Deluge picks up the already-uploaded files just fine; rtorrent and qBittorrent keep re-downloading the file. Deluge also has no problem while a file is being moved by the upload script.

A new problem appeared: my connection was saturated about 95% of the time. After I adapted your config, I tracked it down to most likely being the mount options. I have now added --vfs-cache-mode writes again due to:

WriteFileHandle.Write: can't seek in file without --vfs-cache-mode >= writes
Can't open for write without O_TRUNC on existing file without --vfs-cache-mode >= writes

Testing that now. And yes, almost all Plex options are deactivated.

However, are there any specific config parameters that come to mind to make streaming as enjoyable as possible in my case?

I never move seeding files that are in my seed folder, so that shouldn't be an issue, unless you have a different workflow than mine. Anything seeding is handled by the torrent client, as I use ratio/seed times and that takes care of managing my seed folder.

I don't write to my mount, so I'm not sure what workflow requires you to have that option set, as I do not.

I don't use any specific Plex options. I have ATV 4Ks that direct play the content, so ensuring that it direct plays is really all I do, as I do not want to transcode 4K.


Well, I do move seeding files as soon as possible. The upload script now runs once a night, and I'm going to test downloading into a parent directory and moving on completion. As said, though, it is not a real problem anymore, since Deluge can work with that. The reason for moving early is that the server currently runs on just a 500GB NVMe SSD, and even if I upgrade at some point, the Deskmini can only fit small 2.5" drives, so just a few TB; on some days that could be the size of a single day's downloads. This whole setup is actually meant to build up a huge torrent server serving a few thousand torrents. I never use public trackers, so ratio is important to me.

I don't know about the --vfs-cache-mode writes thing; I just saw it in the log after spending half a day trying to debug what was eating all the bandwidth. Could it be that, because I already had the option enabled when I initially moved a few torrents to Gdrive, it requires that flag now?

I always direct play as well; I just thought that some specific cache options or similar could make sure a few minutes are always buffered ahead, so there is no lag.

But thanks so far, I think I'm on the right track.

Mh, looks like another problem now.

2019/09/21 14:45:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.803G)
2019/09/21 14:46:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.845G)
2019/09/21 14:47:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.845G)
2019/09/21 14:48:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.845G)
2019/09/21 14:49:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.845G)
2019/09/21 14:50:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.845G)
2019/09/21 14:51:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.845G)
2019/09/21 14:52:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.845G)
2019/09/21 14:53:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.464G (was 6.845G)
2019/09/21 14:54:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.617G (was 6.464G)
2019/09/21 14:55:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.780G (was 6.617G)
2019/09/21 14:56:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.780G)
2019/09/21 14:57:28 INFO  : Cleaned the cache: objects 5 (was 5), total size 6.845G (was 6.845G)

This is now being spammed in the logs for the exact same folder where I had the --vfs-cache-mode writes problem. It's just one folder of the five, and one of the three that hold a larger number of files. This is now consuming all upload and sometimes all download too.

Any idea for that?

That is the background task checking whether it can empty the cache. It runs every 60s by default. It is just an INFO message, so it isn't a problem.
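If the once-a-minute lines are too chatty, the cleaner's interval can be tuned; --vfs-cache-poll-interval is an existing rclone mount flag (default 1m), shown here only as an illustration:

```shell
# Check the VFS cache for removable objects every 5 minutes instead of
# every minute; this only changes how often the cleaner (and its INFO
# log line) runs, not what gets cached.
rclone mount "auto-gd:/" /mnt/google/auto-gd --vfs-cache-poll-interval 5m
```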


Ok, thanks... strange that it only shows up there, then.
