Google Drive uploads failing, HTTP 429

Glad to know I'm not the only one! Would love to know if there is a workaround for us mount users :blush:

Is it possible my data got corrupted when using --drive-upload-cutoff 1000T?

As a long-term approach, this issue should be escalated to Google so that they can roll the limit back.

Is it normal to get this error when using --drive-upload-cutoff 1000T?
Here is the error:

2022/11/23 11:41:13 DEBUG : pacer: low level retry 1/1 (error Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&keepRevisionForever=false&prettyPrint=false&supportsAllDrives=true&uploadType=resumable&upload_id=ADPycdvuH-UWrbgRAd7AKCsRJ4i8hqLTQVwvXpJZw_zaJdJNZR9xvYDZmw85o1zu7Ca3py4sjdKiRos-zkbXLDHKnpYrGHoBLdZS": write tcp 10.71.2.208:63173->172.217.14.202:443: wsasend: An existing connection was forcibly closed by the remote host.)
2022/11/23 11:43:56 DEBUG : pacer: Reducing sleep to 0s

I'm seeing weird results for a 3 GB file upload to Google Drive using this method...
Yeah, with my larger file, I don't think this is going to work :frowning:

Yep, looks like the transfer just retries over and over again:
172.217.14.202:443: wsasend: An existing connection was forcibly closed by the remote host. - low level retry 6/10
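
If it keeps dying mid-transfer, you can give it more low-level retries and capture a debug log to see exactly what is failing. A rough sketch -- the file name, remote path, retry count, and log name here are just placeholders:

rclone copy ./bigfile.bin drive:path/ \
  --drive-upload-cutoff 1000T \
  --low-level-retries 20 \
  -vv --log-file rclone.log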

I'm on Rclone 1.55; do I have to be on Rclone 1.60?
Never mind, 1.60 works with this command, yay!

I created a single 13.550 GiB test file, then tried to upload it with --drive-upload-cutoff 1000T, but when progress reached about 90%, the transfer size doubled (13.55 GiB became 27.1 GiB). This kept looping over and over without finishing or printing any error. Looks weird, right?

Tested on the latest rclone.

fallocate -l 13.55G test-file.bin
rclone copy ./test-file.bin drive:test/ \
  -v \
  --stats 10s \
  --drive-upload-cutoff 1000T
<6>INFO  : 
Transferred:   	   13.088 GiB / 13.550 GiB, 97%, *** MiB/s, ETA 2s
Transferred:            0 / 1, 0%
Elapsed time:      1m20.6s
Transferring:
 *                                 test-file.bin: 96% /13.550Gi, *** Mi/s, 2s

<6>INFO  : 
Transferred:   	   14.534 GiB / 27.100 GiB, 54%, *** MiB/s, ETA 1m20s
Transferred:            0 / 1, 0%
Elapsed time:      1m30.6s
Transferring:
 *                                 test-file.bin:  7% /13.550Gi, *** Mi/s, 1m31s

Not at all. Rclone will retry 3 times, and you'd see it more obviously with debug.

By that log, it failed and retried once, which doubles the reported transfer size, and so on.

All expected.
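
If you want to be sure the uploaded copy isn't corrupted, comparing checksums will settle it. Something like this against the test paths above (the --include filter just limits the check to that one file):

rclone check ./ drive:test/ --include "test-file.bin"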


I've also been seeing this issue w/ Google Drive over the last few days. Prior to that, rclone has been rock solid for multiple years -- thanks for that!

@Animosity022/@ncw, any suggestions on a workaround for using rclone mount?

Sorry, I don't use Google anymore. Seems like something on their end.

Sure, it just seems that using --drive-upload-cutoff essentially disables chunked uploads, and that only works for operations like copy or move. I guess I'm wondering if there's any way to achieve similar functionality with rclone mount.

If you are using --vfs-cache-mode full or writes, then --drive-upload-cutoff will work fine for uploads.
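
For example, a minimal mount sketch -- the remote name and mount point are placeholders:

rclone mount drive: /mnt/drive \
  --vfs-cache-mode full \
  --drive-upload-cutoff 1000T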

If you are having problems with downloads, then that is a different matter.

Wow, that seems to have done it, haha, but I needed to restart for it to work. Will keep tabs on this thread in case Google fixes this issue on their end. Thanks again for your help.


Thanks for the advice. Unfortunately, I got Fatal error: unknown flag: --vfs-cache-mode for the rclone move command.

Thanks.

That's for rclone mount, not the move command, so it won't work there.

Thank you for your reply. I misunderstood the "will work fine for uploads" part.

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.