Understanding Pacer behavior with Google Drive

That'll have to do, thanks!

Also, for posterity: as a Google Workspace Enterprise customer, I am using the following to mirror a Windows file server directory, where I have some tolerance for unsynchronized items.

rclone sync "PATH" REMOTE: --checkers=120 --drive-pacer-min-sleep=0ms --retries=1 
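For anyone copying this, a quick note on what those flags change from the defaults (as I understand them: 8 checkers, 100ms pacer minimum sleep, 3 retries):

--checkers=120                 run 120 parallel checkers when comparing source and destination
--drive-pacer-min-sleep=0ms    remove the minimum wait rclone inserts between Drive API calls
--retries=1                    run the sync once instead of re-running it on failure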

Down to 6-9 minutes for an average sync, from 5.5 hours with out-of-the-box settings.

Example:

Transferred:      118.920 MiB / 118.920 MiB, 100%, 950 B/s, ETA 0s
Checks:            241423 / 241423, 100%
Deleted:                3 (files), 0 (dirs)
Transferred:          163 / 163, 100%
Elapsed time:       9m9.4s

Nearly 200 read calls per second according to Google, with an error rate of 1-4.15% per the GCP metrics.

The documentation suggests the harder limits are on write calls; it states no more than 3 write calls per second are allowed. So I don't know if this setup would be appropriate for an initial sync, but I am told that for large-scale migrations you can ask for a rate increase.
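If I were doing an initial upload where writes dominate, I'd probably cap the overall request rate rather than removing the pacer. Something like the line below is where I'd start; --tpslimit and --transfers are standard rclone flags, but the specific numbers are untested guesses on my part, just taken from the documented 3 writes per second. Note that --tpslimit caps all API calls, not just writes, but on a first sync nearly everything is an upload anyway.

rclone sync "PATH" REMOTE: --checkers=32 --transfers=4 --tpslimit=3 --retries=1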

This was fun to fiddle with. The biggest help probably came from increasing the checkers to 32; beyond that, significant diminishing returns set in. Roughly, doubling the checkers gave a further 50% performance increase until about 120, where I start to hit my rate limit of 20,000 requests per 100 seconds (which works out to 200 requests per second, matching the read rate above).

It was also better to just remove the minimum pacer sleep in my case, since it takes a lot to hit my rate limit. That's probably not smart for someone who would hit it frequently and end up slowing down from the resulting error responses.
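If you do trip the limit regularly, it's probably better to keep some pacing in place than to remove it entirely. A sketch of what I'd try (these values are guesses I haven't benchmarked; rclone's defaults are a 100ms minimum sleep and a burst of 100):

rclone sync "PATH" REMOTE: --checkers=32 --drive-pacer-min-sleep=10ms --drive-pacer-burst=100 --retries=1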

Interestingly, the rate limits seem to be per project and not per account, so I'm wondering if I can get creative and run two rclone instances tied to different projects to synchronize faster.
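If I try it, the rough idea would be two Drive remotes pointing at the same destination, each configured with a client_id/client_secret from a different GCP project, each handling a disjoint part of the tree. Purely hypothetical and untested; the remote and folder names below are made up:

rclone sync "PATH/FolderA" projecta-remote:FolderA --checkers=120 --drive-pacer-min-sleep=0ms --retries=1 &
rclone sync "PATH/FolderB" projectb-remote:FolderB --checkers=120 --drive-pacer-min-sleep=0ms --retries=1 &
wait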
