Best mount settings for streaming ( Plex )

Just a thought: unionfs probably needs the readahead setting as well. The new options I use for that are unionfs-fuse -o allow_other,max_readahead=2000000000,cow

The biggest problem I seem to be having is these annoying "ReadFileHandle.Read error: failed to authenticate decrypted block - bad password?" errors. They break Plex playback mid-video, or videos won't even start :frowning:

@toomuchio I have unionfs as well, but since I'm downloading on a different server than the one Plex runs on, I immediately rclone move the files. On my Plex server I disabled automatic library updates / detect changes etc. ( except during scheduled maintenance ), and after my rclone cron upload script finishes, it pushes scans to Plex (with a 10 min delay) using:

curl "http://xxxxx:32400/library/sections/1/refresh?force=0&X-Plex-Token=xxxxx"
curl "http://xxxxx:32400/library/sections/2/refresh?force=0&X-Plex-Token=xxxxx"

You can get the token here: /var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Preferences.xml
PlexOnlineToken="xxxxx"
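For anyone scripting this, here is a small sketch of the token extraction plus the refresh calls. The prefs path and section IDs (1 and 2) are the ones from this thread, and 127.0.0.1 stands in for the redacted host; treat the helper as an illustration, not a tested tool.

```shell
#!/bin/sh
# Sketch: pull PlexOnlineToken out of Preferences.xml and trigger library scans.
# Section IDs (1 and 2) and the prefs path are from this thread; adjust both
# for your own server. 127.0.0.1 is an assumption for the redacted host.

extract_token() {
    # Grab the value inside PlexOnlineToken="..." from the given file
    grep -o 'PlexOnlineToken="[^"]*"' "$1" | cut -d'"' -f2
}

PREFS="/var/lib/plexmediaserver/Library/Application Support/Plex Media Server/Preferences.xml"
if [ -r "$PREFS" ]; then
    TOKEN=$(extract_token "$PREFS")
    for SECTION in 1 2; do
        curl -s "http://127.0.0.1:32400/library/sections/$SECTION/refresh?force=0&X-Plex-Token=$TOKEN"
    done
fi
```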

I just downloaded the most recent rclone version, v1.34-60-g2656a0eβ, and will do some testing now.


@Ajki Nice setup. I'm still not sure readahead does anything, though, since the kernel enforces 128k according to most docs about fuse. Has anybody checked whether what we're doing with that actually has any effect?

I could not reproduce the read errors you mentioned.

Tried:
2016/12/10 11:20:21 rclone: Version "v1.34-60-g2656a0eβ" starting with parameters ["rclone" "mount" "-v" "--debug-fuse" "--dump-headers" "--allow-non-empty" "--allow-other" "--read-only" "--max-read-ahead" "30G" "--acd-templink-threshold" "14G" "--log-file=/storage/rclone-mount.log" "acd:/" "/storage/.acd/"]

Tested by playing a 24GB file and a 700MB file; in both cases I could play them normally, and in both cases I got the errors below:

@ncw Repeating errors:

2016/12/10 11:22:48 fuse: <- Lookup [ID=0x1a88 Node=0x9 Uid=0 Gid=0 Pid=2033] "VRwquhtsEV9aB5VIHATgzZj4"
2016/12/10 11:22:48 encrypted/rz0uXtHDGg5yClWF6O68pTW3/VRwquhtsEV9aB5VIHATgzZj4: Dir.Lookup
2016/12/10 11:22:48 fuse: -> [ID=0x1a88] Lookup error=ENOENT

If I set --acd-templink-threshold 30G I could still play the 700MB file, but the 24GB one did not work.

2016-12-10 14:00:23:0934 4072 2016/12/10 11:57:03 pacer: low level retry 1/10 (error HTTP code 400: "400 Bad Request": response body: "<?xml version=\"1.0\" encoding=\"UTF-8\"?>\nInvalidArgumentOnly one auth mechanism allowed; only the X-Amz-Algorithm query parameter, Signature query string parameter or the Authorization header should be specifiedAuthorization*********")

p.s. I tested copying with 0 and with a 14G templink-threshold, and speeds were more or less the same. I was copying a 1.7GB file, and sometimes 0 was a couple of seconds faster and sometimes 14G was.
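If anyone wants to time this more systematically, a tiny helper like the one below could be used to A/B the two threshold values. The rclone invocations in the comment are only the pattern; the remote name and file path are placeholders.

```shell
#!/bin/sh
# Hypothetical timing helper: run a command and print elapsed wall-clock seconds.
time_run() {
    START=$(date +%s)
    "$@" >/dev/null 2>&1
    echo $(( $(date +%s) - START ))
}

# Example usage on a real server (remote and path are placeholders):
#   echo "threshold 0:   $(time_run rclone copy acd:/encrypted/testfile /tmp --acd-templink-threshold 0)s"
#   echo "threshold 14G: $(time_run rclone copy acd:/encrypted/testfile /tmp --acd-templink-threshold 14G)s"
```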

@Ajki

Would you mind sharing the script you use to copy after x days? Do you delete directories on the local drive once files have successfully been moved?

Just put it in crontab -e:

0 4 * * * find /local-encrypted-folder/ -type f -mtime +14 -exec rm -f {} \;
Every day at 4am it will delete files older than 14 days.

p.s. You could also just rclone move them after 14 days ( just in case something went wrong with the upload; otherwise you may delete files that you never actually uploaded ):

0 4 * * * /usr/bin/rclone move /local-encrypted-folder/ acd:/encrypted -c --transfers=500 --checkers=500 --delete-after --min-age 14d --log-file=/var/log/rclone-cron.log >/dev/null 2>&1
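Before letting either cron entry loose, it may be worth a dry run to see which files the 14-day cutoff actually matches. The directory is the placeholder from the post; TARGET is overridable so it can be pointed anywhere.

```shell
#!/bin/sh
# List files older than 14 days without deleting anything
# (-print instead of -exec rm).
TARGET="${TARGET:-/local-encrypted-folder/}"
if [ -d "$TARGET" ]; then
    find "$TARGET" -type f -mtime +14 -print
fi
```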

@Ajki

Thank you for the help. I've got one more question for you, if you don't mind. In your first post to this thread you showed how you check every minute for a file on the mount point and then remount if it's not there. I attempted to replicate those two items, but I'm missing something.

What do you actually have in cron? It seems like this is two scripts, and that is what I currently have set up, but if I use the first one and rclone isn't running yet, it doesn't work.

You mentioned above that you use encfs; the error I get is because of a problem with rclone's crypt. Interesting, so templink might be best left at the default if it makes little to no difference.

@Will_Butler My script checks whether a file (acd-check) is accessible, i.e. if you want to use it, you need to check for a file that you know exists on your drive.

You can also check the actual mount

if mountpoint -q -- "/storage/.acd"; then
    echo "mount present"
else
    echo "mount not present, remounting"
    fusermount -uz /storage/.acd
    rclone mount --allow-non-empty --allow-other --read-only --max-read-ahead 14G --acd-templink-threshold 0 acd:/ /storage/.acd/ &
fi

The reason I do it by checking a file is that it was more reliable: in the beginning, when I used acd_cli, the mount could be present but files were not accessible.
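The file-based variant could look something like this. The sentinel name acd-check and the mount path are from the posts above; the remount flags are trimmed down, and the script itself is only a sketch.

```shell
#!/bin/sh
# Sketch of the file-based health check: the mount counts as healthy if a known
# sentinel file ("acd-check", per the post) is readable through it.
# CHECK_FILE is overridable so the logic can be exercised against any path.
CHECK_FILE="${CHECK_FILE:-/storage/.acd/acd-check}"

check_mount() {
    if [ -r "$1" ]; then
        echo "mount present"
    else
        echo "mount not present"
    fi
}

if [ "$(check_mount "$CHECK_FILE")" = "mount not present" ]; then
    # Remount commands from the thread; skipped when rclone is not installed
    # (e.g. when testing the check logic on another machine).
    if command -v rclone >/dev/null 2>&1; then
        fusermount -uz /storage/.acd
        rclone mount --allow-non-empty --allow-other --read-only acd:/ /storage/.acd/ &
    fi
fi
```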

@toomuchio yeah, I think templink is not really needed. However, since ncw may change defaults in future versions, it might be best to always set fixed values, so that if performance changes you know it's something in that rclone version and not a setting whose default changed.

Atm I made an acdmount.sh that is called when I mount at boot, or to remount if the mount drops
( testing with these settings now ):

#!/bin/bash
rclone mount \
  --read-only \
  --allow-non-empty \
  --allow-other \
  --dir-cache-time 5m \
  --max-read-ahead 14G \
  --acd-templink-threshold 0 \
  --no-modtime \
  --bwlimit 0 \
  --checkers 16 \
  --contimeout 15s \
  --low-level-retries 1 \
  --no-check-certificate \
  --quiet \
  --retries 3 \
  --stats 0 \
  --timeout 30s \
  --transfers 16 \
  acd:/ /storage/.acd/ &
exit

I doubled the number of checkers. I used to have 40, BUT when I tested that I saw 10s connection timeouts in the logs due to too many connections.

Added --no-modtime as, based on the docs, it may speed things up.

Changed contimeout to 15 seconds; the default is 1m ( @ncw I assume rclone reconnects? If that's the case, I would even set it to 5 sec ).

Changed --low-level-retries to 1, but I don't think mount even uses this.

--no-check-certificate , hoping for additional speed up

--quiet, hoping for additional speed up

--retries 3 ( it's the default; not sure if it is used by mount )

--timeout 30s, changed from 5m... not sure what will happen here

--stats 0 hoping for additional speed up

--transfers 16; not sure if it's used by mount, but I would raise it to 5 more than my maximum number of concurrent streams.

@ncw are any of the above settings ignored by the mount command? And is anything else missing that I could add, even if it's just a default value?

I opened multiple videos (4 of them, 2 being the same 24GB file) with the above settings, and in iftop I saw both connections, the S3 one and the EC2 one.

So even with --acd-templink-threshold 0, the 24GB files were connecting to S3.

Hi folks, a little bit of help is welcome here.
I can mount and lsd my ACD folder

Plex cannot access the rclone mounts.

I am using --allow-other, and fuse.conf is configured accordingly.

I still get the error in the Plex media scanner where the rclone-mounted folder is not accessible.

What can I check?

You are more likely to get help if you create a new topic for your question. This one is about optimizing settings for streaming.


@Ajki interesting set of parameters. Have you made any changes since? Is it faster?
--transfers doesn't do anything on mount, from what ncw has said before in the "ACD + fuse mount: still not working with Plex" thread.

This is my mount now

#!/bin/bash
rclone mount \
  --read-only \
  --allow-non-empty \
  --allow-other \
  --dir-cache-time 5m \
  --max-read-ahead 14G \
  --acd-templink-threshold 0 \
  --bwlimit 0 \
  --checkers 32 \
  --contimeout 15s \
  --low-level-retries 1 \
  --no-check-certificate \
  --quiet \
  --retries 3 \
  --stats 0 \
  --timeout 30s \
  acd:/ /storage/.acd/ &
exit

I removed --no-modtime, as I had a huge problem with Plex losing its analysis data for video files ( I hope that was the cause; still testing now ), and I removed transfers as well.

@ncw is there any other non-relevant mount parameter I'm using, or any relevant one I'm not using ( even if only to set it at its default value )?

@Ajki I've had that analyzation issue as well. I wondered if it was related to --no-modtime too, but I don't think it is: ACD doesn't store the mod time, so that shouldn't have any effect on it. I read on the Plex forums that it's a known bug with Plex or something; I don't think it's rclone related.

I'll just comment on a few of those settings

I think the kernel limits that (--max-read-ahead) to 128k, which is the default anyway.

That is the default.

That (--no-check-certificate) is only really for testing - it has the potential to make things insecure.

--no-modtime doesn't really buy you anything with ACD - it doesn't take extra work to fetch the mod time (which is the time the file was uploaded) on ACD.

I just noticed something weird. I've got two files on ACD; I grabbed one very quickly and the other very, very slowly, just using cp from my mount to a local drive. It makes me think files are spread randomly around in clusters and performance can be bad on certain clusters, which gives really sporadic results.
Edit: Sorry, I should say that the first part of the file was fine, then about 35% into it performance went to total crap, and now if I start again performance is crap. Any other file I've tried seems fine. Quite odd… iftop shows very, very slow reads as well.

@toomuchio not sure why you would have worse performance, as all the settings I use are basically defaults.

Yeah, it's not your config; ACD performance just went down the toilet, everything is bad now. Must be some bad peering or rate limiting.

@ncw There are some additional parameters to check: http://eos.readthedocs.io/en/latest/configuration/fuse.html

export EOS_FUSE_NOPIO=1

configure 256k readahead (additional to 128k kernel readahead)

export EOS_FUSE_RDAHEAD=1

Not sure how they do it.

http://fuse.996288.n3.nabble.com/Can-I-use-bigger-readahead-size-than-VM-MAX-READAHEAD-td11660.html

It looks like we would need to recompile the fuse kernel module itself to increase the readahead :confused:
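For what it's worth, one thing that can be inspected without recompiling anything is the readahead value the kernel has actually applied to a mount: each mount exposes a read_ahead_kb file under /sys/class/bdi/. A sketch ( /storage/.acd is the mount path from this thread; whether raising the value actually helps a fuse mount is exactly the open question here ):

```shell
#!/bin/sh
# Inspect the kernel's effective readahead for a mount via its
# backing-device-info entry. Field 3 of /proc/self/mountinfo is the
# major:minor device ID of each mount; field 5 is its mount point.
MNT="${MNT:-/storage/.acd}"
DEV=$(awk -v m="$MNT" '$5 == m {print $3; exit}' /proc/self/mountinfo)
if [ -n "$DEV" ] && [ -r "/sys/class/bdi/$DEV/read_ahead_kb" ]; then
    echo "readahead for $MNT: $(cat /sys/class/bdi/$DEV/read_ahead_kb) kB"
else
    echo "no bdi entry found for $MNT"
fi
```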