Cache config for Plex mount

Hello all,

I'm currently setting up my Google Drive rclone mount with the following settings:

ExecStart=/usr/bin/rclone mount \
--config=/root/.config/rclone/rclone.conf \
--allow-other \
--cache-db-path=/tmp/rclone/db \
--no-modtime \
--buffer-size=2G \
--drive-chunk-size=128M \
--dir-cache-time 5000h \
--poll-interval 60s \
--cache-dir=/tmp/rclone/vfs \
--vfs-cache-mode full \
--vfs-cache-max-size 100G \
--vfs-cache-max-age 10h \
--vfs-cache-poll-interval 1m \
--vfs-read-ahead 4G \
--vfs-read-chunk-size 128M \
--cache-dir=/home/rclone \
--rc \
--log-level DEBUG \
--drive-use-trash \
--stats=0 \
--bwlimit=80M \
--cache-info-age=120m encryptteam:/ /mnt/google
ExecStop=/bin/fusermount -u /mnt/google

What I'm seeing is that if I seek around inside a movie many times, it bogs down and ends up downloading the read-ahead for every position I jumped to, when the only one that matters to me is the one I'm currently playing, not the earlier ones.
Likewise, if I play a TV show episode and switch to a new one, it keeps downloading the old one I was watching at the same time as the new one.
Any way to solve that?
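For context on why the seeks hurt: each open file handle gets its own read-ahead window, so with --vfs-read-ahead 4G every jump can queue gigabytes. A smaller window bounds what each seek downloads; a sketch with illustrative values (my assumption, not advice given in this thread):

```shell
# Sketch: the rest of the mount flags stay as posted; only the
# read-ahead/buffer values shrink so each seek queues hundreds of
# megabytes rather than gigabytes (values are illustrative).
rclone mount encryptteam:/ /mnt/google \
  --vfs-cache-mode full \
  --vfs-read-chunk-size 128M \
  --vfs-read-ahead 256M \
  --buffer-size 64M
```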

Also, don't know why this happens, but when sometimes I resume a movie from the middle, I see there is very little bandwith downloading from Gdrive, and I have to exit and play the movie back or from the beginning for rclone to speed up. Any way on what could be issue?

Kind regards

when you posted, there was a template of questions that helps us to help you.
the reason i ask is that some of your flags have been deprecated.

--buffer-size=2G is a really high number.
imho, just remove it, as most well-tweaked rclone mount commands do not use that flag.

for example,
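A leaner mount along those lines might look like the sketch below (illustrative only; these flags and values are my assumption of what a trimmed-down command looks like, not the exact example referenced here):

```shell
# No --buffer-size override and none of the deprecated cache-backend
# (--cache-*) flags; the VFS layer handles caching on its own.
rclone mount encryptteam:/ /mnt/google \
  --allow-other \
  --dir-cache-time 5000h \
  --poll-interval 60s \
  --vfs-cache-mode full \
  --vfs-cache-max-size 100G \
  --log-level INFO
```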

Sorry man, here are the answers to those questions:

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1

  • os/version: debian 10.10 (64 bit)
  • os/kernel: 4.19.0-17-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.17.9
  • go/linking: static
  • go/tags: none

Are you on the latest version of rclone? You can validate by checking the version listed here:
Yes, it should be the latest

Which cloud storage system are you using? (eg Google Drive)

Google drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Basically the mount service I posted on the first message

The rclone config contents with secrets removed.

[encryptteam]
type = crypt
remote = team:
filename_encryption = standard
directory_name_encryption = true
password = ****

[team]
type = drive
client_id = ****
client_secret = ****
scope = drive
token = ****
team_drive = ****
root_folder_id =

A log from the command with the -vv flag

Give me a few minutes and I will get that for you.

I have just removed some flags, based on @Animosity022's service:

ExecStart=/usr/bin/rclone mount \
--config=/root/.config/rclone/rclone.conf \
--allow-other \
# --cache-tmp-upload-path=/tmp/rclone/upload \
# --cache-chunk-path=/tmp/rclone/chunks \
# --cache-workers=20 \
# --cache-writes \
--cache-db-path=/tmp/rclone/db \
--no-modtime \
--buffer-size=256M \
--drive-chunk-size=256M \
#--drive-server-side-across-configs=true \
--drive-pacer-min-sleep 10ms \
--drive-pacer-burst 200 \
--dir-cache-time 5000h \
--poll-interval 60s \
--cache-dir=/tmp/rclone/vfs \
--vfs-cache-mode full \
--vfs-cache-max-size 100G \
--vfs-cache-max-age 5000h \
--vfs-cache-poll-interval 5m \
#--vfs-read-ahead 4G \
--vfs-read-chunk-size 256M \
--cache-dir=/home/rclone \
--rc \
--log-level DEBUG \
--drive-use-trash \
--stats=0 \
--checkers=24 \
--bwlimit-file=40M \
--cache-info-age=120m encryptteam:/ /mnt/google
ExecStop=/bin/fusermount -u /mnt/google

But it is still not working well. I am checking the network, since it seems it is not hitting max speed when reading files from the mount, yet it does hit max when I simply run an rclone copy command.

At the same time, I was using --buffer-size=2G since I have plenty of RAM for rclone to use.
My expectation would be to cache the full movie or TV show on a 1TB SSD once anyone requests it; that's why I was using read-ahead, but I see @Animosity022 is not using it.

At the same time, I see in the Google console that rclone is causing some errors on these methods:

I don't know if that could also be the cause of my issue.

I will let you know soon, after some more testing.

If you share a debug log file, the answers are in there as we can see what the issue is.

That does nothing and can be removed.

That's for the cache backend and does nothing for you, so it can be removed.

A log file would be helpful, as your API console does show errors on drive.files.get, which generally means you are hitting the download quota for that day. Again, that's seen pretty easily in the logs.
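As a quick self-check, the quota errors stand out in a debug log with a plain grep. The sample line below is synthetic; in practice you would point grep at your real log file (whatever path you passed to --log-file):

```shell
# Write one synthetic line of the kind rclone logs when Drive refuses
# a download, then count matching lines. Swap /tmp/sample-rclone.log
# for your actual log path.
LOG=/tmp/sample-rclone.log
printf '%s\n' \
  '2022/07/08 13:30:52 DEBUG : movie.mkv: Read: read=0, err=open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded' \
  > "$LOG"
grep -c 'downloadQuotaExceeded' "$LOG"
```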


I am going to pull the logs now and paste them here. I was just running some tests and found the following:
Just by doing a simple cp in Linux from the mount, I get the following speed with the mount options I pasted above:

And when I make an rclone copy, I hit the following speeds:

hmmmm, that's weird, I was hitting 100MiB/s yesterday when running the rclone copy command :confused:

Let me go and grab the logs now.

2022/07/08 13:30:52 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "copy" "-P" "--log-level" "DEBUG" "--log-file" "/home/pedro/rclone.log" "encryptteam:/peliculas/Aguas oscuras (2019)/Aguas oscuras (2019).mkv" "/home/pedro/"]
2022/07/08 13:30:52 DEBUG : Creating backend with remote "encryptteam:/peliculas/Aguas oscuras (2019)/Aguas oscuras (2019).mkv"
2022/07/08 13:30:52 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2022/07/08 13:30:52 DEBUG : Creating backend with remote "team:/ia6p26kv83nbj49srnrjfgrtf8/qspdltkcvt68grulili4hj1k245s84op2i92labau1u808hpb520/qefjuvgge2dia3ah8fet7d954rqrgmmcepoqrbpppkid0d75jomg"
2022/07/08 13:30:54 DEBUG : fs cache: adding new entry for parent of "team:/ia6p26kv83nbj49srnrjfgrtf8/qspdltkcvt68grulili4hj1k245s84op2i92labau1u808hpb520/qefjuvgge2dia3ah8fet7d954rqrgmmcepoqrbpppkid0d75jomg", "team:ia6p26kv83nbj49srnrjfgrtf8/qspdltkcvt68grulili4hj1k245s84op2i92labau1u808hpb520"
2022/07/08 13:30:54 DEBUG : Creating backend with remote "/home/pedro/"
2022/07/08 13:30:54 DEBUG : Aguas oscuras (2019).mkv: Sizes differ (src 38279570850 vs dst 28727181312)
2022/07/08 13:30:54 DEBUG : Aguas oscuras (2019).mkv: Starting multi-thread copy with 4 parts of size 8.913Gi
2022/07/08 13:30:54 DEBUG : Aguas oscuras (2019).mkv: multi-thread copy: stream 4/4 (28709683200-38279570850) size 8.913Gi starting
2022/07/08 13:30:54 DEBUG : Aguas oscuras (2019).mkv: multi-thread copy: stream 2/4 (9569894400-19139788800) size 8.913Gi starting
2022/07/08 13:30:54 DEBUG : Aguas oscuras (2019).mkv: multi-thread copy: stream 1/4 (0-9569894400) size 8.913Gi starting
2022/07/08 13:30:54 DEBUG : Aguas oscuras (2019).mkv: multi-thread copy: stream 3/4 (19139788800-28709683200) size 8.913Gi starting

That's the log I have from the rclone copy command.
Although I remember the log from the mount was much more extensive.

oki, I think it is not a good day to test, because I am getting ridiculously slow speeds from Google Drive on whatever account I try... and yet my connection maxes out at 1 Gbps on internet speed tests :confused:

Are we trying to troubleshoot a mount or a copy command? That log is from a copy command and you were speaking about a mount.

The copy is a multi-threaded operation and the mount isn't, so you can't compare the two directly.
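For reference, the parallelism on the copy side is controlled by two global flags, which is why the earlier log shows 4 parallel parts. A sketch (8 streams is an illustrative value; the defaults are 4 streams for files above a 250M cutoff):

```shell
# rclone splits downloads of files larger than --multi-thread-cutoff
# into --multi-thread-streams parallel parts; the mount's read path
# stays single-stream regardless of these flags.
rclone copy -P \
  --multi-thread-streams 8 \
  --multi-thread-cutoff 250M \
  "encryptteam:/peliculas/Aguas oscuras (2019)/Aguas oscuras (2019).mkv" \
  /home/pedro/
```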

What problem are we trying to solve?

I am just trying to troubleshoot why my mount speed is so slow compared to the copy command.
I usually see 10 Mbps on the mount and 1000 Mbps with the copy, which is a big difference.
But right now it seems I am hitting some sort of limitation, since I cannot download anything at a good speed, not even from Google Drive in Chrome on my personal computer. TBH, I don't know what's going on at the moment.

Once my connection is restored to normal, I can go ahead with the testing under normal conditions.

There's a whole thread about Google Drive / slow speeds.

Rclone mount random slow speeds - Help and Support - rclone forum

Might be worth checking that out, though without a mount log file I can't tell if you are hitting that issue or something else, as your API screenshot looks like quota.

Yep, I can get that mount log; however, since my Google Drive speed at home is slow at the moment, I don't want to bother you with unrealistic logs :sweat_smile:

Hey @Animosity022 @asdffdsa

Here are the logs of the cp command from the mounted drive: 2022/07/09 20:03:01 DEBUG : /: Lookup: name="peliculas" 2022/07/09 20:03:01... - c94e5dc6

It seems my connection is working normally again, and with rclone copy I am hitting the 1 Gbps I have from my ISP; however, the cp command is not even hitting 20 Mbps.

With cp command:

With rclone copy command:

As you can see, the difference is huge.
I don't know if you will be able to see anything in the logs, but I am not seeing anything strange :sweat_smile:
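One way to make this comparison repeatable is a fixed-size sequential read with dd. The sketch below reads a throwaway temp file so it runs anywhere; to measure the mount itself, point if= at a large file under /mnt/google:

```shell
# Create a 64 MiB scratch file, then read it back sequentially; dd's
# final status line reports throughput. Run against a file on
# /mnt/google to exercise the same single-stream path cp and Plex use.
dd if=/dev/zero of=/tmp/readtest.bin bs=1M count=64 status=none
dd if=/tmp/readtest.bin of=/dev/null bs=1M
```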


I have been checking these last few days, and it seems that with the cp command, or when Plex tries to read a file from Google Drive, sometimes my network maxes out and sometimes it gets stuck at 1.6 MiB/s, as you can see in the post above.
It seems to be sporadic behaviour, with no errors showing in the logs, as I have shown. As I say, sometimes it goes badly, I restart the movie or the cp command, and then it runs great; sometimes I need to restart twice for it to work.
Any ideas? Thanks in advance


Please read that thread.

Hey man, thanks for that; my issue was related and now seems to be solved.
I have already posted in that thread :slight_smile:

I just wanted to ask whether anything is being developed regarding my comment in that thread:

"At the same time it would be nice, as people were discussing before, to have an option to fully cache the entire movie once it is requested. I don't know if setting --vfs-read-ahead 200G would fully cache the entire file while watching it, although it would not be multi-threaded, and it would be nice for it to be coded as such.
Or, alternatively, to fully download the movie prior to playback, also in a multi-threaded way. People were talking about an rclone version that did this, but any idea how?"
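On the pre-caching idea: there is no built-in "fetch the whole file on open" switch, but with --vfs-cache-mode full anything read through the mount is stored under --cache-dir, so a background sequential read effectively pre-caches a file (a sketch; the path is an example from this thread's layout):

```shell
# Warm the VFS cache by reading the file once in the background; the
# data stays in --cache-dir subject to --vfs-cache-max-size and
# --vfs-cache-max-age. Single-stream, like any other mount read.
nohup cat "/mnt/google/peliculas/Aguas oscuras (2019)/Aguas oscuras (2019).mkv" \
  > /dev/null 2>&1 &
```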

Hey @Animosity022

I have had no speed issues since, but I came across the following in the logs:

Read: read=0, err=open file failed: googleapi: Error 403: The download quota for this file has been exceeded., downloadQuotaExceeded

However, when I go into the Google console, I see the quotas are not reached:

Any idea?

That means you ran out of the daily download quota. There is no way to see that; you just have to wait for the daily reset.

mmm, but that's weird, I only watched a single movie yesterday :confused:
And why are the quotas at 0% in the Google console? Shouldn't they show 100% if they are all consumed?

No, as you can't see your daily upload or download quota anywhere in Google, unfortunately.

The one you can see is the API quota, which is practically impossible to run out of.