Setting up rclone on Ubuntu 18.04: issues with playback

Hi guys, I have my server set up on a Netcup RS 1000 root server; specs below:

CPU: Intel® Xeon® Gold 6140
Cores: 2 dedicated
Main memory: 8 GB DDR4 ECC
Hard drive: 320 GB SAS

I have everything loading very quickly, and videos start within 3-5 seconds. However, I am having issues playing 4K films via Direct Play: they freeze every 15-20 seconds. Is it that my server is not fast enough for streaming 4K films from G Suite? My home internet speed is 35-40 Mb/s depending on the time of day. I noticed that if I leave TrueHD audio enabled, the server will direct-play the video, transcode the audio, and the film plays perfectly fine. I assume this is because it gives the server a chance to cache the film ahead of playback.

My rclone config is below. I have removed the client ID, secret, Google token, and my Plex server IP address and login for security purposes. What could the main issue be? It's racking my brain! Thank you. :smiley:

[gdrive]
type = drive
client_id = --------------------
client_secret = -------------
scope = drive
token = ---------------------
chunk_size = 16M

[gcache]
type = cache
remote = gdrive:gdrive
chunk_size = 32M
info_age = 2d
chunk_total_size = 1G
workers = 6
chunk_no_memory = true
plex_url = http://------------------------------------
plex_username = -------------------------
plex_password = --------------------------------

Did you setup your own API key?

What do the rclone logs show?

Hello, yes, I have set up my own API key and I'm using the client ID and secret in my settings. How do I access the rclone logs, please?

Thank you.

What’s the command line you are running to mount?

You need a -vv or --log-level DEBUG and --log-file some.log
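For example, a minimal mount invocation with debug logging might look like this (a sketch only; the remote name, mount point, and log path are placeholders, so adjust them to your setup):

```shell
# Illustrative only: mount the "gdrive:" remote with full debug
# logging written to a file you can share on the forum.
rclone mount gdrive: /mnt/gdrive \
  --allow-other \
  --log-level DEBUG \
  --log-file /tmp/rclone-debug.log
```

Then reproduce the playback freeze and look at the log file for errors around that time.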

Sorry, I'm totally new to rclone. I have edited the rclone config and created gdrive and gcache, as my first post shows with those credentials. What am I missing? I was under the assumption rclone would already be mounted. Thank you.

How are you mounting it? What’s the command line with the rclone mount xxxxx.

I'm using this: "ls /mnt/gdrive"
It then shows me all the directories on my Gdrive where my content is. But if I run "sudo service rclone status" I get "Unit rclone.service could not be found." If I run "sudo service unionfs status" I get the below:

Jan 24 19:57:30 v220181070318746-- systemd[1]: Starting UnionFS Daemon…
Jan 24 19:57:40 v220181070318746-- systemd[1]: unionfs.service: Found left-over process 1219 (unionfs) in control group while starting unit. Ignoring.
Jan 24 19:57:40 v220181070318746-- systemd[1]: This usually indicates unclean termination of a previous run, or service implementation deficiencies.
Jan 24 19:57:40 v220181070318746-- systemd[1]: Started UnionFS Daemon.

Hope this helps


Not at all actually.

I'm still not sure what you are using on the system. If you have it mounted, something started it up.

You can check for the process like this:

felix@gemini:/opt/caddy/logs$ ps -ef | grep rclone
felix      587     1  1 Jan21 ?        01:25:49 /usr/bin/rclone mount gcrypt: /GD --allow-other --bind --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 32M --log-level INFO --log-file /home/felix/logs/rclone.log --timeout 1h --umask 002 --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --rc
felix     9480  6826  0 14:10 pts/0    00:00:00 grep rclone

I'm not sure if you are running unionfs; you mention it, but it doesn't seem to be running. Is there a reason you have chunk memory off as well?

chunk_no_memory = true

That line.

Okay, I checked that and I get the below. Thank you.

root 5419 1 2 19:57 ? 00:00:44 /usr/bin/rclone --allow-non-empty --allow-other mount gdrive: /mnt/gdrive --uid=1000 --gid=1000 --size-only --dir-cache-time=2m --vfs-read-chunk-size=64M --vfs-cache-max-age 675h --vfs-read-chunk-size-limit=1G --buffer-size=32M --syslog --umask 002 --log-level INFO --config /opt/appdata/plexguide/rclone.conf
root 6990 5896 35 20:18 ? 00:04:59 rclone move --config /opt/appdata/plexguide/rclone.conf --bwlimit 10M --tpslimit 6 --exclude=partial~ --exclude=_HIDDEN~ --exclude=.unionfs/** --exclude=.unionfs-fuse/** --checkers=16 --max-size 99G --log-file=/opt/appdata/plexguide/rclone --log-level INFO --stats 5s /mnt/move gdrive:/
paulw 7502 1252 0 20:32 pts/0 00:00:00 grep rclone

Looks like your logs are going to syslog, so you'd need to check there for errors while playback is going on so we can see what the issue is.
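To pull the relevant lines out of syslog, something like this should work (the /var/log/syslog path is the Ubuntu default; this is an assumption, as some distros log to /var/log/messages instead):

```shell
# Show the 50 most recent rclone entries from syslog; run this
# while the playback stutter is happening.
grep rclone /var/log/syslog | tail -n 50
```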

The 2 minute dir-cache-time is odd and very low.

I also can’t tell if you are playing directly from the mount or using unionfs in your setup.

My Plex is pointed at the unionfs directory, so I'd say it's using that to play. I will check the syslog files when playing media and report back with any errors. Thanks for your help.

For the unionfs mount, do you have -o sync_read set?

No idea how to check or enable this, sorry.

You can run 'ps -ef | grep union' and share the output; that will show what parameters it is running with.

Okay, done that, and this is what I get:

$ ps -ef | grep union
root 1318 1 2 Jan24 ? 00:01:14 /usr/bin/unionfs -o cow,allow_other,nonempty,direct_io,auto_cache,sync_read /mnt/move=RW:/mnt/gdrive=RO:/mnt/gcrypt=RO /mnt/unionfs
root 3834 3475 30 Jan24 ? 00:12:31 rclone move --config /opt/appdata/plexguide/rclone.conf --bwlimit 10M --tpslimit 6 --exclude=partial~ --exclude=_HIDDEN~ --exclude=.unionfs/** --exclude=.unionfs-fuse/** --checkers=16 --max-size 99G --log-file=/opt/appdata/plexguide/rclone --log-level INFO --stats 5s /mnt/move gdrive:/
paulw 11276 11261 0 00:00 pts/0 00:00:00 grep union

That seems OK, as it has sync_read in it.

You’d need to share the logs to see what the output is showing when you are seeing the error.

I would set your Gdrive chunk size equal to (or higher than) your cache chunk size. Otherwise you are transferring each chunk to the cache in two parts, but it won't be accessible until both are done. Inefficient...

upload_cutoff = 32M
chunk_size = 32M

EDIT: I'm sorry, these settings will ONLY affect uploads. I'm not sure what I was thinking when I first wrote this =P

For high-bitrate 4K you may also want to experiment with bumping the cache chunk size up a notch to 64M. High-bitrate 4K will eat through those chunks very quickly, and there has to be enough buffered to request more before it runs out, or you get playback buffering. This will increase the time it takes to open media, however, so setting it higher than you actually need for smooth playback is not ideal.

You can forget about Google's bandwidth being the issue. It easily saturates my 150 Mbit connection, and I imagine that is nowhere near the actual limit. I wouldn't worry about it at all unless you get a gigabit connection or something.

One more thing that may help: increasing your cache workers a little (8 or 10). From observing how the cache works, it seems that when a file is requested it grabs as many chunks to keep in cache as you have workers, so more workers effectively keep a few more segments ready in the cache. This may slightly affect the speed of opening files too, but less than increasing the chunk size, I believe. Don't go overboard on workers, though; more is not automatically better.
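Putting the two suggestions above together, the cache remote might look something like this (a sketch only, based on your posted config; tune the values to your connection and RAM):

```
[gcache]
type = cache
remote = gdrive:gdrive
chunk_size = 64M
info_age = 2d
chunk_total_size = 1G
workers = 8
chunk_no_memory = true
```

Restart the mount after changing these so the new values take effect.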