Recommended Dropbox (Formerly Google Drive) and Plex Mount Settings

This is the log without unionfs. I did a separate rclone mount and allowed Plex access to it. The media file and the server are exactly the same, and the issue is exactly the same: it won't Direct Play but will Direct Stream.

RClone mount without unionfs - Direct Play. https://1drv.ms/t/s!AvtLStMhHJGAiLB4_EBt40_ZkHbFAw

Feel free to use my logs if it helps you with giving examples.

What's the full mount command used? Are you using your own API key or the default rclone?

There are some 403s in there too:

2018/08/26 06:01:44 DEBUG : media/letchu/tv_shows/The Daily Show/Season 23/The.Daily.Show.2018.08.13.Spike.Lee.EXTENDED.720p.WEB.x264-TBS[rarbg].mp4: ChunkedReader.Read at 2224128 length 1048576 chunkOffset 131072 chunkSize 134217728
2018/08/26 06:01:44 DEBUG : &{media/letchu/tv_shows/The Daily Show/Season 23/The.Daily.Show.2018.08.13.Spike.Lee.EXTENDED.720p.WEB.x264-TBS[rarbg].mp4 (r)}: >Release: err=<nil>
2018/08/26 06:01:44 DEBUG : pacer: Rate limited, sleeping for 1.375736823s (1 consecutive low level retries)
2018/08/26 06:01:44 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)
2018/08/26 06:01:44 DEBUG : pacer: Rate limited, sleeping for 2.485222332s (2 consecutive low level retries)
2018/08/26 06:01:44 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: Rate Limit Exceeded, rateLimitExceeded)

In 7 days of debug logs, I've never hit a rateLimitExceeded.

I'm attaching a log from using the Infuse app to play the same media file, connected through the same Plex Server using the same device, etc. Playback starts in 3-4 seconds. https://1drv.ms/t/s!AvtLStMhHJGAiLB6Rt4-TI9_LFv-mA

Default rclone, I assume, since I didn't set any API keys.

I posted these settings above previously. If this is not the full mount command, where do I find it?

These are the settings for rclone mount.

rclone mount --allow-other --cache-dir /tmp/rclone/vfs --dir-cache-time 72h --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --buffer-size 128M --log-level DEBUG --log-file /mnt/user/mount_rclone/rclone.log gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs --stats 1m &

These are the settings for unionfs mount.

unionfs -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/user/rclone_upload/google_vfs=RW:/mnt/user/mount_rclone/google_vfs=RO /mnt/user/mount_unionfs/google_vfs

As for the rate limit error, I've seen it a number of times. Not sure what it is or what to make of it, since everything continues working. I see the error when uploading sometimes too.

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 8 --fast-list --transfers 1 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~*

Thanks for your help.

I think you are hitting the same thing I've been hitting as well. Direct Play on the ATV does something strange with opening and closing the files.

I can reproduce it now as well when I direct play via the ATV native plex client :frowning:

I think I've always had the issue as well, but I've masked it by using Infuse a lot lately, and I cannot reproduce it on the PMP client on the Mac.

It only happens on my ATV 4k and my regular ATV for Direct Play.

I see the same open and close for the ATV on the cache backend as well, as I tested that. I think the cache hides it a little better since the chunks remain local, so it gets them a little faster.

The VFS has to open up the file again and grab chunks each time, which is probably why the issue is popping up, unfortunately.

@seuffert - did you have any specific settings that made it work better with the problematic devices you saw?

@letchu22 - you can get around this by turning off direct play on the ATV; direct stream is extremely low CPU and not noticeable as a workaround.

@ncw - definitely seems to be a reproducible bug on the ATV 4K and ATV with direct play of files. Any direct stream or transcode makes it go away.

No, not really - only cache helped with this particular problem :frowning:

For what it's worth, it seems to work ok on Infuse, and the next release of Infuse has 'instant plex sync' so I was going to make that my main plex ATV player anyway.

For now, I just turned off Direct Play and let it Direct Stream everything which works fine.

Thanks for sharing all your stuff. I was looking at the GitHub and the mount command here https://github.com/animosity22/homescripts/blob/master/systemd/rclone-cache.service

but I am a bit confused because you don't share a copy of your rclone.conf, so I am not 100% sure what things refer to. You use rclone cmount gcrypt:

I just want to make a simple mount for local Plex streaming, so do I just need to use cmount on my encrypted remote and skip the whole "cache remote" thing?

cmount is when I was using my build_rclone script to pass my own fuse options.

If you don't build with the cmount tag, you can't pass fuse options with the -o.

You can really just use the regular mount and remove the fuse option.
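For example, something like this is all you'd need - just a sketch, where the remote name gcrypt: and the mount point /gmedia are placeholders and the flags are lifted from the vfs-style mounts earlier in the thread, so adapt them to your own paths:

/usr/bin/rclone mount gcrypt: /gmedia \
   --allow-other \
   --dir-cache-time 72h \
   --vfs-read-chunk-size 128M \
   --vfs-read-chunk-size-limit off \
   --buffer-size 128M \
   --log-level INFO \
   --log-file /var/log/rclone.log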

Direct Play has always been wonky with the native Plex app. Turning off Direct Play, or simply having mkv files that had to stream, always worked. I just liked seeing the previews while scrubbing, since all my media is converted to mp4 with ac3 and mov_text for the Apple TV, and Infuse scrubbing is pretty slow with cloud. I was just curious as to why it wasn't working with Direct Play in the native app. If you ever come across a solution please post it =] As for opening and closing, even my Direct Stream does it. And you pointed out the 403 errors. Given my entire config is posted, any glaring mistakes, tweaks or wisdom? You seem to have extensively tested this setup. Thanks.

I have a few questions about tweaking my config. I recently triggered DDOS protection on my server taking it offline for a bit and am trying to keep the initial transfer speed down.

--bwlimit doesn't seem to be strictly obeyed by the server. Even with it set to 30MB/s I still get bursts over 500Mb/s during the initial transfer. This is still better than the 800Mb+ I was seeing, but I would like to know if there are any other settings I should try.

What is --dir-cache-time doing with this setup?

Is --tpslimit helpful here?

How do I find the gdrive API call graph I see other people posting?

When would I need to use cmount?

Here are the settings my unionfs service is using:
unionfs-fuse -o cow,allow_other,direct_io,auto_cache,sync_read /mnt/md0/plex/media=RW:/home/admin/mnt/gdrive=RO /home/mediaserver/data/plex/media

Here is the systemd service for the rclone mount:

[Unit]
Description=rclone service
After=network-online.target
Wants=network-online.target

[Service]
Type=simple
ExecStart=/usr/bin/rclone mount gcrypt: /home/admin/mnt/gdrive \
   --allow-other \
   --tpslimit 4 \
   --dir-cache-time 72h \
   --buffer-size 512M \
   --vfs-read-chunk-size 64M \
   --vfs-read-chunk-size-limit 1G \
   --umask 002 \
   --bind External_IP_Here \
   --bwlimit 30M \
   --log-level INFO \
   --log-file /home/mediaserver/logs/rclone.log
ExecStop=/bin/fusermount -uz /home/admin/mnt/gdrive
Restart=on-failure
User=admin
Group=admin

[Install]
WantedBy=default.target

Thanks!

I do traffic shaping on my router rather than via rclone, but I do believe bwlimit is not applied to certain operations and it can burst while it levels out.

Note the bug here -> https://github.com/ncw/rclone/issues/2055

dir-cache-time keeps the directory structure/file listing in memory for that period of time.

Yes, setting tpslimit to 10 could be helpful to keep the error rates down but depends on your use.

If you have your own API key, you can go to -> https://console.cloud.google.com and look at your API hits.
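If you do set up your own key, it just ends up as client_id/client_secret on the drive remote in your rclone.conf - roughly like this sketch, where the remote name and values are placeholders and the token gets filled in when you run rclone config:

[gdrive]
type = drive
client_id = 123456789-example.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive
token = {"access_token":"..."}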

You don't have to use cmount, as I've removed that in my later setup.

direct_io and auto_cache would overlap. I'd pick one and just use auto_cache if you want to use the system memory for file caching.
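For example, your unionfs line would just become something like this (same paths as yours, only direct_io dropped - a sketch, so double-check it against your setup):

unionfs-fuse -o cow,allow_other,auto_cache,sync_read /mnt/md0/plex/media=RW:/home/admin/mnt/gdrive=RO /home/mediaserver/data/plex/media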

Everything else looks good.

Thanks for all the feedback! It's much appreciated. I haven't been null-routed today, so hopefully it was a one-off.

Yes, you are right. bwlimit will always be correct in the long term, but there are cases where it can spike higher than the limit. I've fixed most of them, but there are a few that are harder to fix!

Animosity,

I love that you moved your scripts to GitHub. You have a few comments re: cmount in your systemd file for the mount even though you're no longer using it. I've based my setup on your scripts and am super grateful because the plex performance for the vfs remote is amazing and the local cache works ok-ish enough for me.

I have the upload script set to run every minute to provide files to my local install, but that was causing ~150 API hits per 100 seconds to check for uploads. So I added a third directory to mergerfs (upload, down, drive mounted at media) and then added a simple if/then check to the bash script so that it only processes if upload is NOT empty.

upload:

#!/bin/bash

LOCKFILE="/var/lock/$(basename "$0")"

if [ ! -z "$(ls -A /mnt/upload)" ]
then
(
  # Wait up to 5 seconds for an exclusive lock
  flock -x -w 5 200 || exit 1

  # Move locally stored files to the remote, excluding partials and the torrent directory
  /usr/bin/rclone move /mnt/upload/ drive: --checkers 4 --fast-list --log-file=/home/bishop/.scripts/upload.log -v --tpslimit 4 --transfers 2 --exclude-from /home/bishop/.scripts/excludes --delete-empty-src-dirs

) 200> "${LOCKFILE}"
else
  exit
fi

excludes:

rtor/**
**partial~
**_HIDDEN~
.unionfs/**
.unionfs-fuse/**

though I think I only need **partial~ since it doesn't contain downdir info or use unionfs.
Downloads go to /mnt/down to keep that local and able to run without the mergerfs if anything ever borks up.

Here's the change in API hits once I added a check:

Thank you so much! This has made tweaking easier.

Man, that's why I love posting and sharing, as I always learn along the way too! I'll take that into my script as well.

I'll adjust mine to make one more directory under local for my staging area, as I need my torrents there on the mergerfs mount so I can leverage hard linking.
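Something along these lines is what I have in mind - purely a sketch with placeholder branch paths, where the torrent and staging directories both live under the same local branch so hard links never cross filesystems (the /gmedia mount point matches my setup):

# /mnt/local holds e.g. /mnt/local/rtor (torrents) and /mnt/local/upload (staging),
# and gets merged with the rclone mount into one tree at /gmedia
mergerfs -o rw,use_ino,allow_other,func.getattr=newest,category.create=ff /mnt/local:/mnt/rclone /gmedia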

{disregard this post}

Actually, adding a third folder for uploads overcomplicates everything. Here's my revised script to upload anything in /mnt/down that ISN'T the "rtor" folder, which holds all of the downloaded data.

if [ ! -z "$(ls -A /mnt/down)" ] && [ "$(ls -A /mnt/down)" != "rtor" ]
then

It's a little ugly because it makes sure the directory is not empty AND contains something other than a file/folder named "rtor" before uploading everything that's not on my previously mentioned excludes list.

tester file:

#!/bin/bash

if [ ! -z "$(ls -A /mnt/down)" ] && [ "$(ls -A /mnt/down)" != "rtor" ]
then
echo "something other than "rtor" exists!"
else
echo "either directory is empty or the only item is rtor!"
exit
fi

@Animosity022, I have a question about VFS cache and I feel like you're the best person to ask. I'm using the cache backend locally but keep running into expired objects and old items sitting in cache, since I modify everything remotely. It's nice because, until drive mtime can be updated (@ncw?), Kodi has to scan through the full directory tree, but the annoyances with cache are starting to get to the "live-ins". Not to mention that a Kodi update scanning the whole dir tree instead of recently updated mtimes takes forever.

With dir-cache-time set to 72h in your systemd are you able to ls -R or ncdu your mount point without it taking ~10-15 minutes and burning API hits? I run VFS on the remote server and every hour or two the dir-cache seems to expire even though I have mine set to 672h (lol).

I know that you use Plex on your remote server so our usages don't completely align, but I figured I'd ask before I pull out all of my hair!

With the dir-cache-time at 72 hours, it should cache the directory/file structure for that time unless something else is expiring an entry.

felix@gemini:/gmedia$ time ls -alR | wc -l
31128

real	0m1.641s

I can do a cached scan in a second or so usually without a problem.

For dealing with the cache backend and modifying things from different hosts, did you try taking a look at plex autoscan? That might be a good fit.

I run everything on a consolidated server so I never have a use case to modify something outside of my setup.