This is the log without unionfs. I did a separate rclone mount and allowed Plex access to it. The media file and the server are exactly the same, and the issue is exactly the same: it won't Direct Play but will Direct Stream.
I'm attaching a log from using the Infuse app to play the same media file, connected through the same Plex Server using the same device etc. Playback starts in 3-4 seconds. https://1drv.ms/t/s!AvtLStMhHJGAiLB6Rt4-TI9_LFv-mA
Default rclone, I assume, since I didn't set any API keys.
I posted these settings above previously. If this is not the full mount command, where do I find it?
As for the ratelimit error, I've seen it a number of times. Not sure what it is or what to make of it, since everything continues working. I see the error when uploading sometimes too.
I see the same open and close for the ATV on the cache backend as well, as I tested that. I think the cache hides it a little better since the chunks remain local, so it gets them a little faster.
The VFS has to open the file again and grab chunks each time, which is probably why the issue keeps popping up, unfortunately.
@seuffert - did you have any specific settings that made it work better with the problematic devices you saw?
@letchu22 - you can get around this by turning off direct play on the ATV; direct stream is extremely low CPU and not noticeable as a workaround.
@ncw - definitely seems to be a reproducible bug on the ATV 4K and ATV with direct play of files. Any direct stream or transcode makes it go away.
For what it's worth, it seems to work ok on Infuse, and the next release of Infuse has "instant plex sync" so I was going to make that my main plex ATV player anyway.
For now, I just turned off Direct Play and let it Direct Stream everything which works fine.
But I am a bit confused because you don't share a copy of your rclone.conf, so I am not 100% sure what things refer to. You use rclone cmount gcrypt:
I just want to make a simple mount for local Plex streaming, so do I just need to cmount my encrypted remote and skip the whole "cache remote" thing?
Direct Play has always been wonky with the native Plex app. Turning off direct play, or simply having mkv files that had to stream, always worked. I just liked seeing the previews while scrubbing, since all media is converted to mp4 with ac3 and mov_text for Apple TV. Infuse scrubbing is pretty slow with cloud. I was just curious as to why it wasn't working with Direct Play in the native app. If you ever come across a solution please post it =] As for opening and closing, even my direct stream does it. And you pointed out the 403 errors. Given that my entire config is posted, any glaring mistakes, tweaks, or wisdom? You seem to have extensively tested this setup. Thanks.
I have a few questions about tweaking my config. I recently triggered DDoS protection on my server, taking it offline for a bit, and am trying to keep the initial transfer speed down.
--bwlimit doesn't seem to be strictly obeyed by the server. Even with it set to 30MB/s I still get bursts over 500Mb/s during the initial transfer. This is still better than the 800Mb+ I was seeing, but I would like to know if there are any other settings I should try.
What is --dir-cache-time doing with this setup?
Is --tpslimit helpful here?
How do I find the gdrive API call graph I see other people posting?
When would I need to use cmount?
Here are the settings my unionfs service is using:

```
unionfs-fuse -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/md0/plex/media=RW:/home/admin/mnt/gdrive=RO /home/mediaserver/data/plex/media
```
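Since later posts in this thread switch to mergerfs, a roughly equivalent mergerfs mount with the same RW/RO branches would look like the line below. The option set is an assumption on my part, not taken from anyone's actual config:

```
mergerfs -o allow_other,use_ino,category.create=ff /mnt/md0/plex/media=RW:/home/admin/mnt/gdrive=RO /home/mediaserver/data/plex/media
```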
I do traffic shaping on my router rather than via rclone, but I believe --bwlimit is not applied to certain operations, and transfers can burst while it levels out.
Yes, you are right. --bwlimit will always be correct in the long term, but there are cases where it can spike higher than the limit. I've fixed most of them, but there are a few that are harder to fix!
I love that you moved your scripts to github. You have a few comments re: cmount in your systemd file for the mount even though you're no longer using it. I've based my setup on your scripts and am super grateful, because the plex performance of the vfs remote is amazing and the local cache works ok-ish enough for me.
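Not from the posts above, but possibly useful for anyone tuning this: rclone's --bwlimit also accepts a timetable, so the cap can be kept low during the day and lifted overnight, while --tpslimit separately throttles API transactions per second. The values below are illustrative only:

```
# Illustrative: 30 MB/s cap from 08:00, no cap from midnight
rclone move /mnt/upload/ drive: \
    --bwlimit "08:00,30M 00:00,off" \
    --tpslimit 4 \
    --transfers 2
```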
I have the upload script set to run every minute to provide files to my local install, but that was causing ~150 API hits per 100 seconds just to check for uploads. So I added a third directory to mergerfs (upload, down, and the drive mounted at media), and then added a simple if/then to the bash script so that it only processes if upload is NOT empty.
upload:
```shell
#!/bin/bash
LOCKFILE="/var/lock/$(basename "$0")"
# Only run if the upload directory is not empty
if [ -n "$(ls -A /mnt/upload)" ]
then
    (
        # Wait up to 5 seconds for an exclusive lock
        flock -x -w 5 200 || exit 1
        # Move locally stored files to the remote, excluding partials and the torrent directory
        /usr/bin/rclone move /mnt/upload/ drive: --checkers 4 --fast-list --log-file=/home/bishop/.scripts/upload.log -v --tpslimit 4 --transfers 2 --exclude-from /home/bishop/.scripts/excludes --delete-empty-src-dirs
    ) 200> "${LOCKFILE}"
else
    exit 0
fi
```
though I think I only need **partial~ since it doesn't contain downdir info or use unionfs.
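For reference, the excludes file passed via --exclude-from would then contain little more than the partial-file pattern mentioned above. This is a guess at its contents, not the actual file:

```
**partial~
```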
Downloads go to /mnt/down to keep them local and able to run without mergerfs if anything ever borks up.
Here's the change in API hits once I added the check:
Man, that's why I love posting and sharing, as I always learn along the way too! I'll take that into my script as well.
I'll adjust mine to make one more directory under local for my staging area, as I need my torrents on the mergerfs mount so I can leverage hard linking.
Actually, adding a third folder for uploads overcomplicates everything. Here's my revised script to upload anything in /mnt/down that ISN'T the "rtor" folder, which holds all of the downloaded data.
```shell
if [ -n "$(ls -A /mnt/down)" ] && [ "$(ls -A /mnt/down)" != "rtor" ]
then
```
It's a little ugly because it makes sure the directory is not empty AND contains something other than a file/folder named "rtor" before uploading everything that's not on my previously mentioned excludes list.
tester file:
```shell
#!/bin/bash
if [ -n "$(ls -A /mnt/down)" ] && [ "$(ls -A /mnt/down)" != "rtor" ]
then
    echo "something other than \"rtor\" exists!"
else
    echo "either the directory is empty or the only item is rtor!"
    exit 0
fi
```
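The double `ls -A` and whole-listing string comparison do work, but a single find call can express the same check more directly. A sketch below: `has_uploads` is a hypothetical helper name, and the demo uses a throwaway directory standing in for /mnt/down:

```shell
#!/bin/bash
# Hypothetical helper: succeeds (exit 0) if the directory contains any
# entry other than one named "rtor". -print -quit stops at the first match.
has_uploads() {
    [ -n "$(find "$1" -mindepth 1 -maxdepth 1 ! -name 'rtor' -print -quit)" ]
}

# Demo against a throwaway directory standing in for /mnt/down
dir=$(mktemp -d)
mkdir "$dir/rtor"
has_uploads "$dir" && echo "uploads pending" || echo "nothing to upload"   # prints "nothing to upload"
touch "$dir/movie.mkv"
has_uploads "$dir" && echo "uploads pending" || echo "nothing to upload"   # prints "uploads pending"
rm -rf "$dir"
```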
@Animosity022, I have a question about the VFS cache, and I feel like you're the best person to ask. I'm using the cache backend locally but keep running into expired objects and old items sitting in cache, since I modify everything remotely. It's nice because, until drive mtimes can be updated (@ncw?), Kodi has to scan through the full directory tree, but the annoyances with the cache backend are starting to get to the "live-ins". Not to mention that a Kodi update scanning the whole dirtree instead of recently updated mtimes takes forever.
With --dir-cache-time set to 72h in your systemd unit, are you able to ls -R or ncdu your mount point without it taking ~10-15 minutes and burning API hits? I run VFS on the remote server, and every hour or two the dir cache seems to expire even though I have mine set to 672h (lol).
I know that you use Plex on your remote server, so our usages don't completely align, but I figured I'd ask before I pull out all of my hair!