Another Rclone Gdrive Mount Thread

Hey everyone!

I’ve had rclone mounted for a while and I’m now trying to test some of the newer mount options with a view to getting 4k playing reliably. I’m talking 50 GB - 80 GB files.

I’ve got an OVH box with VMware on it and the only running VM is Ubuntu.

My GDrive has 2 folders with media:
One is encrypted with encfs (I will move it at some point)
The other is an rclone crypt folder

I mount the standard remote and use a script to deal with the encfs folder.
Then I mount the crypt folder.
Afterwards, I use unionfs-fuse to merge everything together.
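In outline the startup looks like this (the paths, script and service names below are just placeholders, not my exact setup):

# 1. Mount the plain gdrive remote
systemctl start rclone-gdrive.service
# 2. Decrypt the old encfs folder that lives on that mount
encfs --extpass="cat /home/plex/.encfs-pass" /mnt/gdrive/encfs /mnt/encfs-media
# 3. Mount the rclone crypt remote
systemctl start rclone-gcrypt.service
# 4. Merge both into a single tree for Plex
unionfs /mnt/encfs-media=RO:/mnt/ngdrive=RO /mnt/media -o allow_other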

Since the Plexdrive days I’ve used a cron job to upload content.

I’ve tried to borrow/copy bits from this thread and the GitHub link


So far with little success

  • I’m getting API errors on the Gsuite DashBoard
  • Errors on my Netdata DashBoard
    • system.softnet_stat errors - ~1800
    • number of times, during the last 10 min, ksoftirqd ran out of sysctl net.core.netdev_budget
  • OVH reports a detection of an attack on IP address ****

Here are my rclone crypt mount settings:
[Unit]
Description=RClone Daemon Crypt
After=multi-user.target
[Service]
Type=simple
User=plex
#ExecStart=/usr/bin/rclone mount --allow-non-empty --allow-other --tpslimit 10 gcrypt: /mnt/ngdrive --size-only
ExecStart=/usr/bin/rclone mount gcrypt: /mnt/ngdrive \
  --allow-other \
  --buffer-size 256M \
  --dir-cache-time 72h \
  --drive-chunk-size 32M \
  --log-level INFO \
  --log-file /home/plex/logs/rclonecrypt.log \
  --timeout 1h \
  --umask 002 \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off
ExecStop=/bin/fusermount -uz /mnt/ngdrive
TimeoutStopSec=20
KillMode=process
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target

Here is the standard mount
[Unit]
Description=RClone Daemon
After=multi-user.target
[Service]
Type=simple
User=plex
ExecStart=/usr/bin/rclone mount gdrive: /mnt/gdrive \
  --allow-other \
  --buffer-size 256M \
  --dir-cache-time 72h \
  --drive-chunk-size 32M \
  --log-level INFO \
  --log-file /home/plex/logs/rclone_std.log \
  --timeout 1h \
  --umask 002 \
  --vfs-read-chunk-size 128M \
  --vfs-read-chunk-size-limit off
ExecStop=/bin/fusermount -uz /mnt/gdrive
TimeoutStopSec=20
KillMode=process
RemainAfterExit=yes
[Install]
WantedBy=multi-user.target

Just checked the log “/home/plex/logs/rclonecrypt.log” and I’m getting:
“The download quota for this file has been exceeded”

  • I’m running rclone with an API key generated from Google

Any help or direction would be great.

I’m going by trial and error at the moment and it’s clearly not working for me.

Main playback devices are my main PC and a Shield.

Thanks in advance

Many edits - formatting and spelling mistakes :slight_smile:

First thing is: do you have your own client ID / API key? Second, I wouldn’t be surprised if the encfs layer you have on gdrive is giving you more API hits and slower performance. It did for me when I used encfs with cloud storage long ago.
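For reference, a drive remote with your own client ID set looks roughly like this in rclone.conf (the values below are placeholders, not real credentials):

[gdrive]
type = drive
client_id = 123456789-abcdef.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive
token = {"access_token":"...","expiry":"..."}

If client_id is blank, rclone falls back to its shared default key, which is where a lot of API throttling comes from.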

Thanks Calisro for jumping in quickly - I was still updating the post

I think you’re right about encfs; that’s why I’ve gone for the 2 mounts.
The media in the older encfs mount is lower quality - old shows, most aren’t even 1080p.

The newer media in the rclone crypt has the massive 4k files.
I’m sure I’ve got rclone using my Google API creds - I can see the info on the dashboard etc.

I’ll triple-check it just in case.

I’ll also test this evening with the main rclone mount and the encfs mount left unmounted, to see if that helps playback.

I’ve gone back to this for the moment to see if it solves some of my issues:
ExecStart=/usr/bin/rclone mount --allow-non-empty --allow-other --tpslimit 10 gdrive: /mnt/gdrive

I have a similar setup (streaming huge files at 4k from an rclone crypt backed by gdrive) and I have had problems with multiple rclone mounts accessing the same gdrive back-end on the same machine. In particular, I get a lot of API limit errors with rclone 1.46.

Consolidating everything down to a single rclone instance running a single mount helped tremendously in my case.

What works best for me is keeping a local cache mounted in a union with the plex mount and then syncing files in a cron job in the middle of the day/night, when no one is trying to stream anything from plex. I also schedule plex updates rather than letting it try to figure it out on its own.

That makes sense - it probably wouldn’t take ages, as I’ve only got the one account (750 GB).
I haven’t looked into the multi-account stuff and team drives yet.

Could you share your mount settings?

I did try to go down the cache backend remote route, but it wouldn’t let me have 2 mounts,
i.e. one for the main and one for the crypt.

I mount the rclone crypt to /mnt/Google:

rclone mount PlexCrypt: /mnt/Google --allow-other --read-only --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 32M --log-level INFO --log-file /home/plex/logs/PlexGoogle.log --timeout 1h --umask 002 --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off --rc

And I have a local cache at /mnt/media/Plex that I merge into a /Plex mount, which is the only path plexmediaserver sees.

unionfs /mnt/media/Plex=RW:/mnt/Google=RO /Plex -o rw,hide_meta_files,allow_other,cow,direct_io,auto_cache,dev,suid

A cron job crawls /mnt/media/Plex and does some things with the .unionfs-fuse files directly (like deleting files off of the gdrive back-end) and sends everything to PlexCrypt: using the rclone copy command. It then uses rclone rc to refresh the vfs cache where needed.
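A stripped-down sketch of that kind of job looks like this (paths are as above, but this leaves out my .unionfs-fuse handling):

#!/bin/bash
# Upload anything that has been sitting in the local cache for a while
rclone copy /mnt/media/Plex PlexCrypt: --min-age 15m -v --log-file /home/plex/logs/upload.log
# Ask the running mount to refresh its directory cache (this is what --rc enables)
rclone rc vfs/refresh recursive=true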

I do things this way because I have had terrible luck with plexmediaserver manipulating files in a R/W gdrive mount. It is probably specific to my setup, which contains hundreds of thousands of tiny nfo files that various media managers all seem to want to write to.

rclone version shows what?

“Download quota exceeded” usually means you have an old version of rclone, or that something else is not playing nicely with rclone.

You are also missing sync_read on your unionfs mount as rclone runs with that.
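Taking the unionfs mount above, that just means adding sync_read to the option list, e.g.:

unionfs /mnt/media/Plex=RW:/mnt/Google=RO /Plex -o rw,sync_read,hide_meta_files,allow_other,cow,direct_io,auto_cache,dev,suid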

Thanks, Chezmojo

A couple of questions.

  • To confirm, you mean local data rather than an rclone cache?
  • Do you need the --rc flag for vfs to work correctly?

I had trouble getting this to work the other day.

With VFS set up as you have it, do you still upload with a separate script like in the plexdrive days, or does it work in a different way?

/mnt/media/Plex is a local 2 TB HDD.
Unionfs mounts the local HDD RW and combines it with the plex crypt mount, which is RO. That way new files are always written locally and I can dig around the .unionfs-fuse directory to see what has changed, but that is specific to my setup. You can just as well use mergerfs or even rclone’s union.
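The mergerfs equivalent would be something along these lines (the options here are my guess at a sane starting point, not a tested config):

mergerfs -o allow_other,use_ino,category.create=ff /mnt/media/Plex=RW:/mnt/Google=RO /Plex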

No, that is just because the rclone rc interface is really cool and I like it :slight_smile:
You could leave that alone and just let the vfs cache do its thing on its own.

I personally manage everything separately in cron jobs and keep the gdrive mounted RO. But I think that is a quirk of my setup; with other mounts, I have no problems mounting (much smaller) gdrive backends RW with Plex and just treating it like a local filesystem.

If your problem is massive 4k files loading slowly, you might try adding sync_read to unionfs, as Animosity022 suggests above. (And below!)

If your problem is infrequent stuttering and buffering during playback, then I suggest paring everything down to a single instance of rclone mount and turning off all the automatic update/scan/etc. options in plexmediaserver.

If your problem is so much stuttering and buffering that movies are unwatchable, I would suggest directly mounting the rclone crypt backend on a computer connected to your router via an Ethernet cable and trying to play it in VLC to see if the bottleneck is with the remote plex server.
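That last test is just something like this on the wired machine (paths and filename are placeholders):

# Mount the crypt directly, read-only, and play a file locally
rclone mount PlexCrypt: /mnt/test --read-only &
vlc /mnt/test/Movies/some-4k-file.mkv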

Sorry, that’s not what I said - you can only change rclone’s fuse flags if you compile your own version of rclone. rclone already comes with sync_read by default, so unionfs needs the option, not rclone.

Ah! Then try that :slight_smile:

I’m running 1.46

Could you go into a bit more detail? You’ve lost me a bit.
By compiling my own version of rclone I can add in fuse options?

So Chezmojo could add sync_read to the unionfs mount because rclone already has that option?

PS thanks for jumping in to help out

You need to add sync_read to this. You’ve also got options that seem to overlap, as auto_cache and direct_io do opposing things.

Did you have an older version before maybe? Are you the only person using it or is it shared?

Is that so? The man page is not really clear:

 auto_cache
        This option enables automatic flushing of the data cache on open(2). The cache will
        only be flushed if the modification time or the size of the file has changed.

 direct_io
        This option disables the use of page cache (file content cache) in the kernel for
        this filesystem. This has several affects:

        1.  Each read(2) or write(2) system call will initiate one or more read or write
            operations, data will not be cached in the kernel.

        2.  The return value of the read() and write() system calls will correspond to the
            return values of the read and write operations. This is useful for example if the
            file size is not known in advance (before reading it).

The way it is written implies that the combination of auto_cache and direct_io ensures that read, write and open commands all result in direct io calls. But it is also not clear if “data cache” and the kernel page cache are the same thing.

In any case, I use those options and have no trouble streaming huge 4k files. The fuse options, for me, mainly affect how quickly playback starts and the reliability of seeking.

Quick Update
Thanks for the help

The settings appeared to work last night.

  • Added fuse mount options
  • Only mounted the crypt mount (will have to move the stuff off encfs)
    • Having only the one mount solved the API hits

Additional testing is needed to ensure it’s 100% stable.
I want to add in the --rc flag and set up my creds.
I might try an rclone union mount rather than a unionfs (fuse) mount to see if that has any effect.
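For reference, from the docs a union remote would look something like this in rclone.conf (on 1.46 the key is remotes and the last remote listed is the one written to; newer versions use upstreams instead):

[PlexUnion]
type = union
remotes = PlexCrypt: /mnt/media/Plex

# then mount it in place of the unionfs mount
rclone mount PlexUnion: /Plex --allow-other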

For the 30 mins I was testing, I was streaming 4k @ ~60 Mbps whilst downloading and uploading at 50 Mbps in the background.

(I picked up an OVH box with 1 Gbps up/down)

Thanks for the help and I’ll keep this post updated - might help others

It has little to do with playing or starting files, as it writes items you read into memory. You can easily test this by running mediainfo against a file with those options on/off: you’ll see the time go from a few seconds to instant when you re-run the command, as the information is in memory.

The same would apply to seeking, as the data would have to be in memory first. That may happen with a larger buffer size but generally wouldn’t have much of an impact.
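For example (the file path is a placeholder):

time mediainfo /Plex/Movies/example.mkv
# run it again - near-instant this time if the kernel page cache is in play
time mediainfo /Plex/Movies/example.mkv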

My upload script is causing API errors on the Google dashboard.

This is my current upload command

rclone move $FromMovies $ToMovies --transfers=1 --drive-chunk-size 32M --delete-after --min-age 15m -v --log-file="$LogFile"

I think I need to add one or both of these flags:

--tpslimit float        Limit HTTP transactions per second to this.
--tpslimit-burst int    Max burst of transactions for --tpslimit. (default 1)
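Something like this, for example (the actual limit values are guesses I’d need to tune):

rclone move $FromMovies $ToMovies --transfers=1 --drive-chunk-size 32M --delete-after --min-age 15m --tpslimit 5 --tpslimit-burst 5 -v --log-file="$LogFile"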

Thanks for the note above about seeking being affected by fuse options - after adding those options in, I got much better performance when seeking.

Can you run the same command with -vv and share the output, as that shows what the errors are?