Another Rclone Gdrive Mount Thread

It would be worth testing your setup with a seek test, as that provides a bit better results too:

I’ve been using that to test changes and validate if things are where I want them to be in terms of seeking.
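The script itself isn’t reproduced here, but the idea is roughly this; a minimal sketch assuming a large file on the mount (the path, loop count, and offsets are placeholders, not the actual test script):

    # Time reads at a few random offsets of a file on the mount to gauge seek latency.
    FILE=/gmedia/movie.mkv                       # placeholder path on your rclone/union mount
    SIZE_MB=$(( $(stat -c%s "$FILE") / 1048576 ))
    for i in 1 2 3 4 5; do
        OFFSET=$(( RANDOM % SIZE_MB ))           # offset in MB (biased low for very large files)
        echo "seek to ${OFFSET} MB:"
        time dd if="$FILE" of=/dev/null bs=1M skip="$OFFSET" count=1 status=none
    done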

Here is a very good write up on fuse and cache as well:

Thanks, guys

I can’t do much today because I overdid it at the gym lol
Tomorrow day/eve I can do some in-depth testing

I’ve added the -vv flag so I can check in the morning; I hit my upload limit very early today.

  • Side note: my flatmate asked me when I started this project. My first notes are dated Oct 2016
    (ACD days back then).

Good morning all

So, by adding the -vv flag, I’ve got a log full of Rate Limit Exceeded errors:

    2019/03/07 09:10:04 DEBUG : pacer: Rate limited, sleeping for 2.077671389s (2 consecutive low level retries)
    2019/03/07 09:10:04 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=488772518172, userRateLimitExceeded)

I did manage some playback last night. I had 2 small buffering instances of a couple of seconds, but interestingly I also got a desync where the video just lagged for a few seconds.
(This happened a few times and was more annoying than the buffering.)

Am I meant to be able to upload to and read from the mount at the same time without hitting the API limit?


Can you share the actual command you are running?

Is the key shared with someone else?

These types of errors seem to be normal (I see them all the time, even when things are working swimmingly).

But my API lists 3% error in 605,284 requests, and traffic is almost always below 1/s.
That is on an rclone crypt backed by gdrive that I constantly upload to and stream from.

For what it’s worth, I had a similar problem (API rate limit errors, etc.) before. In my case it was caused by a very specific bug related to the way I upload files:

    $RCLONE copy --rc --rc-addr=${RCADDR} --rc-no-auth --stats=5m --stats-one-line \
        --tpslimit=5 --transfers=10 --files-from-traverse --files-from=${CLOUDLIST} \
        --fast-list --log-file=${LOGFILE} ${VERBOSE} ${TEST} "${SRC}" "${DST}"

Where $CLOUDLIST is a list of files ranging from a handful of large mkvs to hundreds of tiny nfo files. The --files-from-traverse flag is from a special build (v1.46.0-019-gf97cbd3b-fix-files-from-traverse-beta) that forces rclone to use the old method of listing files from the remote before starting an upload.
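For context, here is a hypothetical sketch of how the surrounding variables in that command might be populated; none of these names or paths come from the thread, they are just illustrative:

    # Hypothetical values for the copy command above; adjust to your own layout.
    RCLONE=/usr/bin/rclone
    RCADDR=localhost:5572                  # where the --rc remote-control listener binds
    SRC=/mnt/staging                       # local staging area
    DST=gcrypt:media                       # crypt remote backed by gdrive
    LOGFILE=/var/log/rclone-copy.log
    CLOUDLIST=/tmp/cloudlist.txt
    VERBOSE="-v"
    TEST=""                                # set to "--dry-run" for a trial run

    # Build the file list relative to $SRC: anything from huge mkvs to tiny nfo files.
    ( cd "$SRC" && find . -type f -printf '%P\n' ) > "$CLOUDLIST"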

I have no idea if you are experiencing something similar (probably not, since you don’t use --files-from) but you can read about my issues in this thread.

Thanks, auto_cache was indeed not what I wanted, I removed it from my unionfs mount.

The numbers for transfers and checkers are too high. You get those errors because you are generating too many API hits per second. The limit is 10 per second, so using 10 transfers plus the default 8 checkers is too much.

You’d want to move those numbers down to find the sweet spot where you don’t get errors, as 403s slow things down because rclone has to re-pace the connections.

You’d want to turn that down as you can only create ~3 files per second with Google.
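As a rough illustration (the paths, remote name, and exact numbers are placeholders to tune against your own 403 rate, not settings from this thread), that would look something like:

    # Keep total API hits around or below ~10/s and well under the ~3 files/s creation limit.
    rclone move /mnt/staging gcrypt:media \
        --transfers=3 --checkers=4 --tpslimit=10 \
        --drive-chunk-size 32M --min-age 15m --delete-after \
        -v --log-file=/var/log/rclone-move.log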

I personally keep auto_cache on and I have direct_io off as I use torrents on my mergerfs mount.
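For reference, a mergerfs invocation along those lines might look like the sketch below; the branch paths are placeholders and the option set is only an example of auto_cache on and direct_io left off, not my exact mount:

    # /mnt/local holds new files; /gmedia-rclone is the rclone mount of the remote.
    # auto_cache is enabled; direct_io is simply not passed (it defaults to off).
    mergerfs -o use_ino,allow_other,auto_cache,func.getattr=newest,category.create=ff \
        /mnt/local:/gmedia-rclone /gmedia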

The API key is not shared, and the command was:

    rclone move -vv $FromMovies $ToMovies -vv --transfers=1 --drive-chunk-size 32M \
        --delete-after --min-age 15m -v --log-file="$LogFile"

Now I’m testing

    rclone move $FromMovies $ToMovies -vv --tpslimit=5 --stats=5m --stats-one-line --fast-list \
        --transfers=1 --drive-chunk-size 32M --delete-after --min-age 15m -v --log-file="$LogFile"

I’ll read into the --files-from-traverse flag, and I’ll test direct_io on the FUSE mount tonight.

I’m using unionfs-fuse; is that still the meta, or does everyone use mergerfs?
I’ll run that seek test later today as well

This is the copy command that I currently use and that works flawlessly, which is why I shared it.
I’m not the one getting errors, the OP is :slight_smile:

It does not work flawlessly, because you get errors, as you stated, which makes things slow down. By removing the 403 rate limits you can increase throughput, as rclone isn’t waiting to retry.

But my API lists 3% error in 605,284 requests

If your mount question is solved and you are trying to work a different issue, perhaps create a new thread, as it’s not related to this one.

If you are moving larger files, you likely won’t hit the 3-files-per-second limit, so you can probably increase the transfers and balance out the checkers; something like 4/4 would work well, and you can increase those numbers until you start getting 403s.

The goal is to find the highest numbers that work for your use case without getting 403s. 403s cause retries and retries slow things down as that is time wasted while something retries.
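Concretely, that suggestion might look like the following starting point, nudging the numbers up on later runs until 403s appear (paths and remote name are placeholders):

    rclone move /mnt/staging gcrypt:media \
        --transfers=4 --checkers=4 \
        --drive-chunk-size 32M --min-age 15m --delete-after \
        -v --log-file=/var/log/rclone-move.log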

We’re getting off-topic here, but those settings work flawlessly for copying files to the drive on a cron job while simultaneously streaming off of it.

The batch uploads vary wildly and these settings give me the best average performance. Sometimes it uploads thousands of tiny files, other times only a handful of huge files. And the target is not always a gdrive. Plus, I use that gdrive in many different contexts across several machines.

The goal here is not to get 0 API errors in the dashboard, it is to get smooth playback of huge 4k video files on the same drive that files are uploaded to regularly, which is my exact use-case.

I’m not getting off topic, as “working flawlessly” and errors don’t go together.

I’ve been streaming multiple 4k videos for quite some time while copying large amounts of data.

The goal is to not have 403s, as those reduce throughput because rclone must retry. Retries == slower times. If 403s are being generated and that triggers rate limiting, it can also impact the mount, as the default quotas are per user, not per process.

I’ve gotten a bit confused as to what is and is not working.
When you reported successfully streaming (if only with small glitches), was that off a direct mount or a unionfs mount? If the direct mount is working, then maybe you should indeed try a different union filesystem (maybe rclone’s own).
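If you do try rclone’s own union backend instead of unionfs-fuse, a minimal sketch of the config might look like this; the remote names are hypothetical, and note that the config key has been renamed across rclone versions (older releases used remotes, newer ones use upstreams):

    # Hypothetical rclone.conf entry; "gcrypt:media" is assumed to be your existing crypt remote path.
    [gmedia]
    type = union
    remotes = /mnt/local gcrypt:media

    # Then mount the union instead of the unionfs-fuse mount:
    rclone mount gmedia: /gmedia --allow-other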

direct_io works for me, but I’m merging two caching filesystems (rclone and zfs) so you might need to play around with your settings depending on your specific setup.

Sure, but 403s that aren’t caused by the mount aren’t going to affect streaming performance unless you actually start hitting the quota. In my specific case, that 3% error rate is caused by operations totally unrelated to the mount I stream files from, i.e. they occur when I’m asleep or at work and not streaming files off the drive.

Think about the bigger picture though, as your advice is being interpreted as a best use or best practice. Many folks have much smaller local pools and upload many times per day, so hitting rate limits would impact them during the day and affect their playback.

I personally do not, as I have a 6TB local backing, so I only upload in the middle of the night.

I understand that you seem not to care about 403s and an error rate, but you’d get better performance and throughput with fewer errors, as that’s how rclone’s retry/pacing works.

There is a sweet spot based on the use case for moving files, and it depends on file sizes (large vs. small, etc.).

It can also affect stream start times.

Sure, I’m just sharing the settings that I use and that empirically work for me.
Best practice is difficult with so many different system configurations, geographic separation, rclone versions, undocumented tweaks on Google’s end, etc. I happen to be able to see a Google data center from where I’m sitting and I coordinate three machines across two different countries, all with different purposes. Other people will have other circumstances and, hey, maybe my exact settings will work best for them—maybe not.

My sweet spot is the best average performance with a very heterogeneous set of files that varies wildly between batches and is delivered to multiple endpoints. For me, zeroing out 403s would not necessarily improve overall performance, but it would require writing separate functions for each specific case, which adds complexity and is error-prone.