V1.69.1 occasionally fully restarts copy when hitting the Google Drive rate limit

What is the problem you are having with rclone?

When downloading large files from Google Drive, v1.69.1 does not seem to handle Google rate limits as gracefully as prior versions of Rclone did. Last week, folks attempting to download two 625.7 GB files from one of our Google Drive locations using Rclone reported a download loop in which Rclone downloads the files up to a certain point, then starts over, with the .partial files resetting back to 0 GB. However, when this happens, the amount of data indicated in the progress statistics does not reset, so it instead expands well beyond the actual size of the data being downloaded.

We confirmed that everyone having this issue was running v1.69.1, and I was able to reproduce the behavior with v1.69.1. The same users were able to download the files successfully, without encountering the issue, by reverting to v1.60.1. I was also able to download the files with v1.65.0. People regularly download large files from our Google Drive using Rclone while encountering the rate limit, and this is the first time we have ever encountered this issue.

When I reproduced the issue, my system downloaded over 3 TB of data before I canceled the Rclone operation, even though the actual files being downloaded total ~1.2 TB.

The rclone version output and logs linked below are from my reproduction of the issue. The logs I've included in the gist identify one of the points at which Rclone restarted downloading the files. The shared logs begin with Rclone already many chunks into the download process; it had encountered the Google Drive rate limit many times before this point and, other than a handful of other full resets, had handled the rate limit gracefully. The logs then show Rclone stalling out and restarting with the first chunks of the files. I'd like to call attention to three key points in the logs, in chronological order:

2025/04/22 12:22:36 DEBUG : __0_1.distcp: multi-thread copy: cancelling transfer on exit

2025/04/22 12:22:36 DEBUG : __0_1.distcp: Need to transfer - File not found at Destination

2025/04/22 12:22:36 DEBUG : __0_1.distcp: multi-thread copy: chunk 1/9324 (0-67108864) size 64Mi starting

It seems that Rclone fully canceled the transfer, then couldn't find the .partial files when it retried the download, so it started over from chunk 1.
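For scale, 9324 chunks × 64 MiB comes to about 582.75 GiB, which matches the 625.7 GB (582.731 GiB) size of each file, so chunk 1/9324 really is the very start of a full re-download rather than a resume.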

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.1

  • os/version: linuxmint 22.1 (64 bit)
  • os/kernel: 6.8.0-58-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.24.0
  • go/linking: static
  • go/tags: none

Are you on the latest version of rclone? You can validate by checking the version listed here: Rclone downloads
--> Yes

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy mlc-llama3-1:training/nemo-formatted-hf-checkpoint/405b/weights/ ./weights -P

The rclone config contents with secrets removed.

[mlc-llama3-1]
type = drive
scope = drive.readonly
root_folder_id = 12K-2yvmr1ZSZ7SLrhidCbWc0BriN98am
team_drive =
token = <redacted>

A log from the command with the -vv flag

Logs gist here.

welcome to the forum,

could be true, not sure why yet...
would be good to know the latest version that can download the files?


afaik, that is expected behavior.
rclone cannot resume transfers. every transfer is always of the entire file, first byte to last byte.


maybe because default values are --low-level-retries=10 --retries=3
1.2TB X 3 = 3.6TB
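to rule the retries in or out, could pin them and save a debug log for each run. something like this, just a sketch using your command (log filename is arbitrary):

rclone copy mlc-llama3-1:training/nemo-formatted-hf-checkpoint/405b/weights/ ./weights -P --retries 1 --low-level-retries 10 -vv --log-file=rclone-run.log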


https://rclone.org/drive/#making-your-own-client-id
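once you have the client id+secret, they go in the remote config. something like this, with placeholders for your values:

[mlc-llama3-1]
type = drive
client_id = <your client id>
client_secret = <your client secret>
scope = drive.readonly
root_folder_id = 12K-2yvmr1ZSZ7SLrhidCbWc0BriN98am
token = <redacted>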

would be good to know the latest version that can download the files?

I'm running some tests on more recent versions of Rclone than v1.65.0. I'll post my findings here.

rclone cannot resume transfers. every transfer is always of entire file, first byte to last byte.

When you re-run the command, yes, but I've never encountered a canceled transfer mid-operation. Folks have been using Rclone to download large datasets from our Google Drive for over a year, and this is new behavior, from what I can tell.

maybe because default values are --low-level-retries=10 --retries=3
1.2TB X 3 = 3.6TB

I'm pretty sure it restarted the download more than three times before I manually stopped it. It was continuing to run and presumably would have kept looping the download until I stopped it. The folks who first reported the issue to us seem to have observed that behavior. Quoting one user: "this just simply resets back to 0%. I’ve gotten upwards of 70% before it resets, but it always does." So it seems that the resets don't occur at a consistent point in the download.

Below is a snapshot of the progress statistics provided by one of our users who tried downloading just one of the large files individually; note the significant discrepancy between the top line statistics and the individual file statistics. The denominator of 1.808 TiB is a result of the restarts, as the single file is only 625.7 GB.

Transferred:        1.252 TiB / 1.808 TiB, 69%, 81.015 MiB/s, ETA 1h59m51s
Transferred:            0 / 1, 0%
Elapsed time:  11h13m35.4s
Transferring:

 *                                  __0_1.distcp:  2% /582.731Gi, 80.177Mi/s, 2h1m6s
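(1.808 TiB is roughly 3.2 × 582.731 GiB, so if the denominator grows by one file size each time the transfer restarts, this snapshot already reflects about two full restarts.)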

And below is mine from right before I canceled the operation; once again note the statistical discrepancies. My network monitoring confirms that Rclone did indeed download 3.223 TiB, but most of that was re-downloads of the same chunks, as the two files being downloaded amount to ~1.2 TB.

Transferred:        3.223 TiB / 4.154 TiB, 78%, 63.356 MiB/s, ETA 4h16m44s
Checks:                 9 / 9, 100%
Transferred:            0 / 2, 0%
Elapsed time:  1d2h6m41.5s
Transferring:
 *                                  __0_0.distcp:  5% /582.731Gi, 24.252Mi/s, 6h26m53s
 *                                  __0_1.distcp: 30% /582.731Gi, 24.736Mi/s, 4h38m15s
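(Same arithmetic, with the same assumption that each restart adds one file size to the denominator: 4.154 TiB ≈ 7.3 × 582.731 GiB, versus the 2 file sizes the job actually contains, which works out to roughly five restarts across the two files and supports my claim above that it restarted more than three times.)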

great and maybe save a debug log for each run.


might create a client id+secret and test v1.69.1


and just a guess, might try --multi-thread-streams=0

Okay, after some extensive testing, I have some updates:

It seems that my successful download of the data using v1.65.0 was a fluke.

From my testing, the download loop behavior was introduced in v1.64.0. I tested most versions released since then, and they all reliably reproduced the behavior. Meanwhile, v1.60.0 through v1.63.1 reliably download the dataset without issue. The debug logs for these earlier versions are also very short (gist here), as they do not contain the constant errors of the debug logs from v1.64.0+ (those logs are too long for gist or pastebin to accept, but refer to the gist linked in the original post, as they're just a constant repeat).

Your guess regarding trying --multi-thread-streams=0 seems to have been spot on. After adding this flag to v1.64.0+, I was reliably able to download the dataset without issue, and the debug logs went back to looking like those of v1.60.0-v1.63.1.
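For anyone else hitting this, that's just my original command with the flag added:

rclone copy mlc-llama3-1:training/nemo-formatted-hf-checkpoint/405b/weights/ ./weights -P --multi-thread-streams=0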

Here's the kicker: take a look at this bit from the v1.64.0 changelog:

  • Major changes
    • Multi-thread transfers (Vitor Gomes, Nick Craig-Wood, Manoj Ghosh, Edwin Mackenzie-Owen)
      • Multi-thread transfers are now available when transferring to:
        • local, s3, azureblob, b2, oracleobjectstorage and smb
      • This greatly improves transfer speed between two network sources.
      • In memory buffering has been unified between all backends and should share memory better.
      • See --multi-thread docs for more info

So it seems that this multi-thread transfer change in v1.64.0 introduced the Google Drive download loop behavior.