Stuck in rate limit and retry loop in Google Drive for large(er) files

What is the problem you are having with rclone?

Google Drive is rate limiting transfers of large files. Rclone stops the upload and waits, then restarts the upload from the beginning. Google rate limits it again at the exact same point of progress. This causes continuous uploads and failures, which keeps the rate limiting going, and it seems to continue until manual intervention, which requires deleting the file.

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.2

  • os/version: ubuntu 22.04 (64 bit)
  • os/kernel: 5.15.0-1052-oracle (aarch64)
  • os/type: linux
  • os/arch: arm64 (ARMv8 compatible)
  • go/version: go1.21.6
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Systemd Service:

[Unit]
Description=RClone Service
Wants=network-online.target
After=network-online.target
AssertPathIsDirectory=/path/to/mount

[Service]
Type=notify
RestartSec=10
ExecStartPre=/usr/bin/bash -c "fusermount -uz /path/to/mount || true"
ExecStart=/usr/bin/rclone mount remote_crypt:  /path/to/mount \
  --gid=1000 \
  --uid=1000 \
  --allow-other \
  --user-agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36' \
  --allow-non-empty  \
  --vfs-read-chunk-size=64M \
  --vfs-cache-mode=full \
  --vfs-read-chunk-size-limit=2048M  \
  --buffer-size=64M  \
  --dir-cache-time=168h \
  --timeout=10m  \
  --drive-chunk-size=64M  \
  --vfs-cache-max-size=15G  \
  --vfs-read-ahead=2G \
  --attr-timeout=36h \
  --poll-interval=2m \
  --fast-list \
  --rc \
  --rc-web-gui \
  --rc-web-gui-no-open-browser \
  --rc-addr="0.0.0.0:5572" \
  --rc-enable-metrics \
  --rc-user=user \
  --rc-pass=[redacted]

ExecStartPost=rclone rc vfs/refresh -v --fast-list recursive=true --rc-user=user --rc-pass=[redacted] --rc-addr="0.0.0.0:5572" _async=true
ExecStop=/usr/bin/bash -c "fusermount -uz /path/to/mount || true"
Restart=on-failure
User=user
Group=user

[Install]
WantedBy=multi-user.target

The rclone config contents with secrets removed.

[remote_name]
type = drive
client_id = [redacted]
client_secret = [redacted]
scope = drive
token = [redacted]
team_drive = 

[remote_crypt]
type = crypt
remote = remote_name:rcl
password = [redacted]
password2 = [redacted]
filename_encoding = base64

A log from the command with the -vv flag

googleapi: got HTTP response code 429 with body: <html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"/><title>Sorry...</title><style> body { font-family: verdana, arial, sans-serif; background-color: #fff; color: #000; }</style></head><body><div><table><tr><td><b><font face=sans-serif size=10><font color=#4285f4>G</font><font color=#ea4335>o</font><font color=#fbbc05>o</font><font color=#4285f4>g</font><font color=#34a853>l</font><font color=#ea4335>e</font></font></b></td><td style="text-align: left; vertical-align: bottom; padding-bottom: 15px; width: 50%"><div style="border-bottom: 1px solid #dfdfdf;">Sorry...</div></td></tr></table></div><div style="margin-left: 4em;"><h1>We're sorry...</h1><p>... but your computer or network may be sending automated queries. To protect our users, we can't process your request right now.</p></div><div style="margin-left: 4em;">See <a href="https://support.google.com/websearch/answer/86640">Google Help</a> for more information.<br/><br/></div><div style="text-align: center; border-top: 1px solid #dfdfdf;"><a href="https://www.google.com">Google Home</a></div></body></html>

As I understand it, rclone stores files as chunks, since there's no way to fetch an offset within a file. But I am confused why rclone restarts the upload from the beginning after a failure.

This is causing the upload and rate limit loop. The upload gets rate limited at the exact same transfer size (around 2.1 GB), and any file larger than this gets stuck in the loop.
Is there a way to avoid this problem?

I don't understand why Google is even rate limiting authenticated sessions.

Update your rclone to the latest version and try again.

Sorry, I had the template from the old version. I have updated and the problem still persists.

rclone v1.65.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-1052-oracle (aarch64)
- os/type: linux
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.21.6
- go/linking: static
- go/tags: none

that is how rclone works, it must upload the entire file each time.

all providers do that.

--fast-list does nothing on a mount.

as a test, i would remove the following flags and use the defaults (see the sketch after the list).
these flags can lead to rate limiting.

--vfs-read-chunk-size=64M
--vfs-read-chunk-size-limit=2048M
--buffer-size=64M
--vfs-read-ahead=2G
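
for reference, a rough sketch of the ExecStart line with just those flags removed (and --fast-list dropped, since it does nothing on a mount); everything else is kept from the unit above:

ExecStart=/usr/bin/rclone mount remote_crypt: /path/to/mount \
  --gid=1000 \
  --uid=1000 \
  --allow-other \
  --user-agent='Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/122.0.0.0 Safari/537.36' \
  --allow-non-empty \
  --vfs-cache-mode=full \
  --vfs-cache-max-size=15G \
  --dir-cache-time=168h \
  --attr-timeout=36h \
  --timeout=10m \
  --drive-chunk-size=64M \
  --poll-interval=2m \
  --rc \
  --rc-web-gui \
  --rc-web-gui-no-open-browser \
  --rc-addr="0.0.0.0:5572" \
  --rc-enable-metrics \
  --rc-user=user \
  --rc-pass=[redacted]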

Thanks, let me test how it works.

are you connecting to gdrive over a vpn?

No, I am not using any VPN. But I think this infinite loop had been happening for days before I noticed, which might have gotten the IP labelled as suspicious.

Update regarding the problem: no noticeable difference.

Since I had manually deleted all the tasks and the loop had been stopped for around 10 hours, I thought the rate limit might have cooled off.

But the first retry of the same file immediately brought back the problem.

This is the exact place where it gets stuck again and again in an infinite loop.

Is there any way to pause and resume the transfer instead of abandoning the 71% of progress already made? The retry works fine after 5 minutes, only to fail once again.

Each of these errors is from the same file.

More info:

I tailed the debug log. The rate limit 429 happens almost every second, but rclone seems to keep uploading. After some time, though, rclone seems to quit the task altogether (maybe after a time delay or N errors). Is there any manual control over this behavior via a parameter?
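
Perhaps the relevant knob (an assumption on my part, I haven't confirmed this is the limit being hit) is --low-level-retries, which sets how many times rclone retries each failing operation before giving up (default 10), e.g. added to the ExecStart:

  --low-level-retries=5 \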

should use a debug log file and look into it, not just tail it.
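
for example, something like this added to the mount command, the path is just a placeholder:

  --log-level=DEBUG \
  --log-file=/path/to/rclone.log \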

might use --bwlimit to adjust the overall speed on the fly.
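
for example, since the mount already runs with --rc enabled, the limit can be changed at runtime with the core/bwlimit command (same auth flags as in your ExecStartPost, the 8M value is just an example):

rclone rc core/bwlimit rate=8M --rc-user=user --rc-pass=[redacted] --rc-addr="0.0.0.0:5572"
rclone rc core/bwlimit rate=off --rc-user=user --rc-pass=[redacted] --rc-addr="0.0.0.0:5572"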

what does that mean?


The log file generated in just a few minutes is 86 MB, so it is hard to even open in a regular text editor.
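
So I filtered it from the command line instead, for example (the path is just where I wrote the log):

grep -c "429" /path/to/rclone.log
grep "429" /path/to/rclone.log | tail -n 50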

But from what I can see, every second there is an error while uploading before rclone ultimately backs off.

I mean every error is from a single file being uploaded, failing, and being retried.

So the looping itself is causing the errors.

Does bandwidth really matter compared to the exact number of API calls being made per second?

I think slowing down the rate of API calls might help.

so all this is about a single file?

did you do the test i suggested, removing all those flags?

No.

Just to be clear, it is reproducible with any file larger than 3 GB.

But those 6 errors all belonged to the same file, which was retried often.

Yes, I have removed all those flags and the error returned immediately.

I would try to lower the overall transaction pace by using the --tpslimit and --tpslimit-burst flags.

I am not sure what the optimal values for gdrive are. I would start with --tpslimit 5 --tpslimit-burst 0.
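
For example, added to the ExecStart line of the mount (values are only a starting point, not tuned for your setup):

  --tpslimit=5 \
  --tpslimit-burst=0 \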

Thanks, I will try that.

What does --tpslimit-burst 0 mean?

I can't find the behavior for 0 documented. Does that mean totally disabling the burst feature?

Yes. It will then use a max of 5 TPS, without any spikes above that.

--tpslimit-burst is a delta value above --tpslimit, not an absolute value.


I had some confusion on this too.

If I could find the old post, it explained it well, but if you have a burst of 1 and a max of 10 per second, it can save that burst and you get 11 in a second. I never wanted to break the per-second limit, so I would set the burst to 0 to ensure I never went over.
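
So concretely, using the numbers from that example (only an illustration of the behaviour described above):

  --tpslimit=10 --tpslimit-burst=1   -> normally 10 transactions per second, but a saved burst token can allow 11 in one second
  --tpslimit=10 --tpslimit-burst=0   -> hard cap of 10 transactions per second, no spikes above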

Is there any documentation on the per-second limit (for Google Drive)? I am using my own API keys, not the rclone default ones for Drive, since that should provide a higher limit.