Rclone fails to log Google Drive 403 errors

I've been plagued by 403 errors ever since an experiment with rclone settings went wrong, and now I've discovered I can't trust the logs either.

I have 5 mirrored team drives that I use with a union remote and the random search policy for load balancing. I was testing with one specific file, and I found that 3 out of the 5 team drives were returning 403 errors for it.
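For reference, the union setup looks roughly like this (a sketch; the remote names are placeholders, but search_policy = rand is the union backend's random-search policy):

    # rclone.conf (sketch) - five team drive remotes behind one union
    [union]
    type = union
    upstreams = td1: td2: td3: td4: td5:
    # rand picks a random upstream for each read, spreading the load
    search_policy = rand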

But the errors don't show up in my logs at all. I use --low-level-retries=2, which should make them surface in the logs more readily, but it doesn't...

I only found out this file wasn't working because of an application error; that's why I went digging further.

You won't necessarily see the 403 logs if the low level retry was successful (which it usually is).

If you want to see them then you'll need -vv.
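For example (the mount point and log path are just illustrative):

    # run the mount with DEBUG logging and send it to a file
    rclone mount union: /mnt/media -vv --log-file /var/log/rclone.log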

That would be OK if the file actually worked, but it doesn't. So rclone doesn't log the errors, and the file also doesn't open in my application (ffmpeg).

I'm also not sure whether rclone tries to open the file from another upstream in the union when one of them errors.

Can you post a log with -vv of the problem?

Just restarted my mounts to set the level to DEBUG. Hopefully they won't fill my disks before late night rolls around again.

I assume there's no way to change the log level of a running rclone process?

From the docs at https://rclone.org/rc/#core-command

This sets DEBUG level logs (-vv)

    rclone rc options/set --json '{"main": {"LogLevel": 8}}'

And this sets INFO level logs (-v)

    rclone rc options/set --json '{"main": {"LogLevel": 7}}'

And this sets NOTICE level logs (normal without -v)

    rclone rc options/set --json '{"main": {"LogLevel": 6}}'
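Note that these only work if the remote control was enabled when the process started, e.g. (a sketch; the mount point is a placeholder):

    # --rc starts the remote control API (default address localhost:5572)
    rclone mount union: /mnt/media --rc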

There should probably be a dedicated rc command for this, shouldn't there!

I don't think I'll be able to reproduce this. I had to be very aggressive with my settings to get the 403 errors back under control.

[screenshot: stats after getting the 403 errors back under control]

This was August 25: after months of no issues, I tried testing rclone's default chunk settings and it got bad:

[screenshot: spike in 403 errors while testing the default chunk settings]

Now I have 9 remotes in my union for load balancing, and I still get a small percentage of 403 errors:

[screenshot: current stats showing a small percentage of 403 errors]

So yeah, when I say chunk sizes matter and are serious, I'm being very serious!! This only happened because rclone lacks more granular bandwidth control :frowning:
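For the record, the chunk-related knobs I mean are along these lines (the values are illustrative, not recommendations):

    # --vfs-read-chunk-size controls download range requests;
    # --drive-chunk-size is the Drive upload chunk size
    rclone mount union: /mnt/media \
      --vfs-read-chunk-size 64M \
      --vfs-read-chunk-size-limit 2G \
      --drive-chunk-size 64M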

Anyway, if I can reproduce this issue again (hopefully never), I'll show up here again.

What sort of bandwidth control would you like?

Bandwidth limiting per file is now implemented, which should help you.
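That's the --bwlimit-file flag; for example (the value is illustrative):

    # cap each individual open file at 50 MiB/s
    rclone mount union: /mnt/media --bwlimit-file 50M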

It helps in the use case of public-facing rclone serve http instances. On my mount, in practice, I had to set a high value so I wouldn't have issues with the peak bitrates in my files.

If you could do something like ffmpeg's -re, it would be perfect.

But that's for media files, so I don't know if you could do it for other kinds of files.
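To illustrate what -re does (a sketch; the file name and destination are placeholders):

    # -re throttles reading to the input's native frame rate
    # instead of reading as fast as the connection allows
    ffmpeg -re -i movie.mkv -c copy -f flv rtmp://example.com/live/stream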

So that would be a different bwlimit for individual files?

If I'm translating right: if you have a 20Mb/s file, the bwlimit for that file would be 20Mb/s, and if you had another file that was 80Mb/s, it would be 80Mb/s.

The goal being never give more bandwidth than is requested from the file. How rclone does that, I can't imagine, but I think that is the ask :slight_smile:

Yes, this. The best way would probably be to look at how ffmpeg does it, or ask them... because I don't know how they do it either... but if they can, then it's possible.

Also, this is not working for me here:

rclone rc options/set --json '{"main": {"LogLevel": 7}}'
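(One guess as to why: the rc API has to be enabled on the running process, and the rclone rc client has to point at the right address. Connectivity can be sanity-checked with something like:

    # confirm the rc server is reachable (default localhost:5572)
    rclone rc core/version

If this fails, the mount was probably started without --rc.)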

The difference is that ffmpeg is checking a file and getting information from it; you wouldn't want rclone to take on that overhead, as it's a mount, and that's going down a windy road imo.

That's not true. ffmpeg -re works with real live streams, say a random stream link from the internet, where it wouldn't be possible to know the bitrate in advance, or any details about the file beyond what it's already receiving in real time.

From the page you linked:

The FFmpeg's "-re" flag means to "Read input at native frame rate. Mainly used to simulate a grab device." i.e. if you wanted to stream a video file, then you would want to use this, otherwise it might stream it too fast (it attempts to stream at line speed by default). **My guess is you typically don't want to use this flag when streaming from a live device, ever.**

We're mixing how it would work with a file and a live stream.

A file has the bitrate information in it and ffmpeg is querying the file to get that.

If ffmpeg did what you're saying, it would have to read the entire file before it could start using it, because the headers are just an estimate. If you use other software to check a file's bitrate, you'll see that you have to read the entire file to know the exact bitrate of each frame.

So there are no shortcuts ffmpeg can use to know which speed to use for each frame of the file based on the headers.
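For example, the only bitrate a container header exposes is an overall average, which ffprobe can read without scanning the whole file (a sketch; the file name is a placeholder):

    # prints the container's overall average bit rate in bits/s;
    # exact per-frame rates still require reading the whole file
    ffprobe -v error -show_entries format=bit_rate \
      -of default=noprint_wrappers=1:nokey=1 movie.mkv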

Anyway, the 403 errors have started again, and again they're not showing up in the logs!

And this is a file I just uploaded to Google Drive, so it's even weirder. I confirmed the 403 errors with rclone copy, but on the mount there are no logs at all at INFO, only at DEBUG.
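For reference, this is the kind of check I mean (the remote and path are placeholders):

    # copying the file directly makes the 403s visible in the output
    rclone copy td1:media/path/to/file.mkv /tmp/test -vv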

Pretty sure we're both saying the exact same thing, except my choice of words is not matching your choice of words.

As the file is playing, it's getting the bitrate information from the file and making adjustments so it keeps a nice steady stream based on the bitrate being reported by the file. Just like in Plex, you can turn on debug and see the bitrate as something is being played.

Which would be no different from a live stream, as the source doesn't matter; the same information is being sent whether it comes from a file on "disk" or from a stream.

As you can see here, with the log level set to INFO I can't see the 403 errors, even though the file won't work with anything (I was trying to run mediainfo on it). They only show up with DEBUG.

This file was just uploaded to Google Drive. Less than an hour old, never used before.

I should never have tried using rclone with the default settings. I wouldn't have issues like this; I'd have my nice stats showing zero 403 errors like before :frowning:
