Unable to upload to Gsuite as of a couple of hours ago

What is the problem you are having with rclone?

Anyone else on Hetzner (or other EU servers) having issues uploading to Gsuite / Teamdrive?

It seems to alternate between 502/503 errors and 429 errors, and the error messages rotate. Sometimes it’s “your computer or network may be sending automated queries. To protect our users, we can't process your request right now” and sometimes it’s “The server encountered a temporary error and could not complete your request. Please try again in 30 seconds. That’s all we know.”

This started around 11am–12pm UTC today in my logs. It was working fine before that.

I upload using service accounts, and they haven’t hit the 750GB limit in the last 24 hours.

The same service accounts work fine from a server located in the US, at least in my initial test; I didn’t run it for longer than 2 minutes.

This is my own personal Gsuite account that’s unshared.

Run the command 'rclone version' and share the full output of the command.

rclone v1.61.1

  • os/version: debian 10.13 (64 bit)
  • os/kernel: 4.19.0-21-amd64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.19.4
  • go/linking: static
  • go/tags: none

Yes

Which cloud storage system are you using? (eg Google Drive)

Google Teamdrive (my own company’s account)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone move /mnt/storage/.tmp dst018:Teamdrive/ --config="$HOME"/.config/rclone/rclone.conf --drive-chunk-size 128M --tpslimit 12 --tpslimit-burst 12 --drive-acknowledge-abuse=true -vvv --delete-empty-src-dirs --max-transfer 750G --use-mmap --transfers 6 --log-file "$HOME"/logs/rclone-upload-dst018.log
2023/01/27 19:31:53 INFO  : Starting transaction limiter: max 12 transactions/s with burst 12
2023/01/27 19:31:53 DEBUG : rclone: Version "v1.61.1" starting with parameters ["rclone" "move" "/mnt/storage/.tmp" "dst018:Teamdrive/" "--config=/home/user/.config/rclone/rclone.conf" "--drive-chunk-size" "128M" "--tpslimit" "12" "--tpslimit-burst" "12" "--drive-acknowledge-abuse=true" "-vvv" "--delete-empty-src-dirs" "--max-transfer" "750G" "--use-mmap" "--transfers" "6" "--log-file" "/home/user/logs/rclone-upload-dst018.log"]
2023/01/27 19:31:53 DEBUG : Creating backend with remote "/mnt/storage/.tmp"
2023/01/27 19:31:53 DEBUG : Using config file from "/home/user/.config/rclone/rclone.conf"
2023/01/27 19:31:53 DEBUG : Creating backend with remote "dst018:Teamdrive/"
2023/01/27 19:31:53 DEBUG : dst018: detected overridden config - adding "{n0BsM}" suffix to name
2023/01/27 19:31:54 DEBUG : fs cache: renaming cache item "dst018:Teamdrive/" to be canonical "dst018{n0BsM}:Teamdrive"

The rclone config contents with secrets removed.

[teamdrive]
type = drive
client_id = redacted
client_secret = redacted
scope = drive
token = {"access_token":"redacted","expiry":"2023-01-27T21:17:02.158630598-05:00"}
team_drive = redacted

I think I just realized my token is expired after pasting this. Could that be the issue?

A log from the command with the -vv flag

Paste log here

Hmmmm, I created a new project in my Gsuite and then reconfigured a new remote in the rclone config, but it's still giving me the same expiry info: "expiry":"2023-01-27T22:36:25.3028121-05:00"

Not really sure how to fix that, unless that's something on Google's end? Anyone have any thoughts?
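
If the token really is the problem, I suppose I can force a re-auth on the remote; a rough sketch, assuming the remote is called dst018 as in my command above:

# re-run the OAuth flow for the remote and store a fresh token in the config
rclone config reconnect dst018:

Although as far as I know rclone refreshes the access token on its own, so maybe that's a dead end.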

Cheers

Hey, same issue here. In fact, any server besides the one in my home is giving me

2023/01/27 21:26:03 ERROR : IO error: open file failed: googleapi: got HTTP response code 429 with body: <html><head><meta http-equiv="content-type" content="text/html; charset=utf-8"/><title>Sorry...</title><style> body { font-family: verdana, arial, sans-serif; background-color: #fff; color: #000; }</style></head><body><div><table><tr><td><b><font face=sans-serif size=10><font color=#4285f4>G</font><font color=#ea4335>o</font><font color=#fbbc05>o</font><font color=#4285f4>g</font><font color=#34a853>l</font><font color=#ea4335>e</font></font></b></td><td style="text-align: left; vertical-align: bottom; padding-bottom: 15px; width: 50%"><div style="border-bottom: 1px solid #dfdfdf;">Sorry...</div></td></tr></table></div><div style="margin-left: 4em;"><h1>We're sorry...</h1><p>... but your computer or network may be sending automated queries. To protect our users, we can't process your request right now.</p></div><div style="margin-left: 4em;">See <a href="https://support.google.com/websearch/answer/86640">Google Help</a> for more information.<br/><br/></div><div style="text-align: center; border-top: 1px solid #dfdfdf;"><a href="https://www.google.com">Google Home</a></div></body></html>
^Z

I have many projects set up with service accounts, all being rotated every 4 minutes. It doesn't matter which project the service account is tied to; if it's being used from a Hetzner box, it gets this error.
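
For reference, the rotation itself is nothing fancy; roughly something like this from cron every 4 minutes (the sa_pool directory and dst018 remote here are just placeholders, not my actual setup):

#!/bin/bash
# pick a service account JSON from a local pool and run the upload with it
SA_DIR="$HOME/sa_pool"
NEXT_SA=$(ls "$SA_DIR"/*.json | shuf -n 1)
rclone move /mnt/storage/.tmp dst018:Teamdrive/ \
  --drive-service-account-file "$NEXT_SA" \
  --tpslimit 12 --transfers 6 -v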

The server I have at home is working properly, no errors and no issues at all. I'm also convinced that the increased load from Hetzner networks is beginning to piss off Google in some way.

Ok, thanks for confirming, @gsatv01, that definitely helps me feel a bit better. Might need to research some alternatives then! We'll see if it gets better tomorrow.

Same issue. According to this thread (Google drive uploads failing, http 429 - #7 by Shannon), adding --drive-upload-cutoff 1000T fixes it, but I'm still seeing the same errors.
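
For anyone who wants to test it themselves: the flag raises the cutoff at which rclone switches to chunked/resumable uploads, so 1000T effectively makes every file go up as a single multipart upload. Roughly, using the remote and paths from the first post:

# same kind of move as above, with resumable uploads effectively disabled
rclone move /mnt/storage/.tmp dst018:Teamdrive/ \
  --drive-upload-cutoff 1000T \
  --tpslimit 12 --transfers 6 -vv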

Ya, I gave that a shot as well and saw no difference. Thanks though!

Came here to say that I was also having the same problem: files were uploading very slowly before being terminated.


I just waited 30 minutes and tried again (didn't change rclone command) and I no longer had the problem.


Yup! So far so good! :v:

Same problem, going on 24 hours per my logs. Also on Hetzner. I'm not having the same issue when attempting uploads from my desktop here in the States, although I'm using a different Google account.

Hey,

I have the same problem with my Hetzner server. Google doesn't like the high bandwidth from Hetzner...

Having the same issue since this morning. All my uploads are getting 429 errors from a Hetzner cloud VPS. Is this issue temporary, or do we need to find some other workaround?

Thanks
Regards..

I am having the same problem, both uploading and downloading content.

Having the same issue also with Hetzner

lmao, you guys are getting what AWS/Azure users were facing 2 months ago. At this rate I think Google will ban all VPS IPs.

Same issue here, started sometime during the night while I was uploading my VM backups.
I can also confirm that neither --tpslimit 5 nor --drive-upload-cutoff 1000T (nor both together) helps to overcome the issue.
Uploading a VM backup from home works without any issue, but that is obviously way slower than from Hetzner.

Seems like Google maybe banned the Hetzner range?

Why would they ban VPS IPs? That would defeat the purpose of the Business Google Drive accounts, as people use them to back up VMs, for instance.

Not a ban, more like a soft ban. You can't upload big files using some VPS IP addresses. I don't know what the fuck they're thinking. Don't waste your time asking Google support.

That wouldn't make much sense to me either. As said, legitimate business use is to back up your data "offsite" as a fallback should your "onsite" backup be corrupted or something similar.
My feeling is rather that someone who had/has multiple servers at Hetzner abused Google's API (in terms of sending way too many requests) over a longer period of time, and this led to Google's security measures kicking in and banning the Hetzner ranges.
It would be interesting to know if there are Hetzner server owners who do not have any issues uploading data to GDrive; maybe one could draw conclusions about the affected ranges.
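
If anyone wants to compare, something along these lines would show the public IP and the range it sits in (assuming curl and whois are installed; ifconfig.co is just one of many IP echo services):

# print this server's public IPv4 address
MYIP=$(curl -4 -s https://ifconfig.co)
echo "$MYIP"
# look up which range / netname the address belongs to
whois "$MYIP" | grep -iE 'inetnum|netname|descr'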

I don't feel like reaching out to Google is going to solve much, depending on what your use case is and how high your usage is anyway. I know in my case I've got almost 2 PB of data on my account, so I definitely don't want to reach out to them...

But has anyone seen anything like this before? It appears that the night before last it was Hetzner's DC in Helsinki that was the issue. Now it's as if it's Falkenstein, and Helsinki isn't affected. What is odd is that at times media will offload to gdrive perfectly fine and at others it errors out. Unfortunately, any attempt at playing anything via Plex just results in:

ERROR : IO error: open file failed: googleapi: got HTTP response code 429 with body

There most certainly are; from what I can figure out, they're the ones who have their servers at the Helsinki DC... Seems to me it's some sort of rolling soft ban? If that makes sense.