Pacer is throttling from a non-Google source?

What is the problem you are having with rclone?

I'm trying to migrate my data from OpenDrive to GDrive. OpenDrive is kinda sucking in this regard, with frequently unavailable files returning 503 errors (which I can verify from the website). Everything will be running well, and then I'll start hitting the 503s from OpenDrive. I'm trying to get my rclone copy to skip the 503s as fast as I can, otherwise the copy is just going to take an unrealistic amount of time.

A side effect, once I start hitting groupings of 503s, is that pacer errors start inducing throttling, which I don't see any way to address. I thought pacer limits only applied on the GDrive side of things, which is the write target, but I'm not pushing files quickly because of the frequent 503s, and I specifically chose settings that wouldn't overwhelm GDrive.

Anyway, I don't understand why I'm getting those pacer backoffs unless it's the file listing? The retry waits get up into the 5-minute range after a bit, which ends up making the transfer not really viable.
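
For what it's worth, I haven't set any of the drive pacer flags, so if those are even the right knobs here they're at their defaults, which I believe are roughly:

--drive-pacer-min-sleep 100ms --drive-pacer-burst 100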

Thanks!

What is your rclone version (output from rclone version)

rclone v1.52.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

os/arch: linux/amd64 (Debian 4.9.228-1; Google VM)

Which cloud storage system are you using? (eg Google Drive)

OpenDrive copy to GDrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy od:Library/ gd:/Library --multi-thread-streams 1 -P --transfers 3 --bwlimit 8670 --checkers=2 --tpslimit 10 --tpslimit-burst 20 --cutoff-mode=soft  --drive-stop-on-upload-limit --retries 1 --low-level-retries 1 --timeout 20s --ignore-existing --log-file copy_log_library.txt -vv

The rclone config contents with secrets removed.

[od]
type = opendrive
username = xxx
password = xxx

[gd]
type = drive
client_id = xxx.apps.googleusercontent.com
client_secret = xxx
scope = drive
token = {"access_token":"xxx","token_type":"Bearer","refresh_token":"1//xxx","expiry":"2020-09-06T16:48:10.1840457Z"}


A log from the command with the -vv flag

I had to clip a chunk out of the middle because I've been running it a while. Generally, the clipped part is duplicate skips or successful copies. There's an occasional 503, but it's not until it hits a block of them that skipping a small group of files starts taking hours.

https://pastebin.com/3kq2NdvE

Not sure there is much to do about that, as those 503s are coming from OpenDrive, so rclone has to retry.

Almost every cloud provider with an API has some kind of pacer rules that need to be followed. The majority use an exponential pacer, which has to back off in bigger steps as you get more errors.
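
As a rough illustration (simplified numbers, not rclone's exact pacer values), each consecutive failure roughly doubles the wait before the next attempt, and the wait only decays back down once calls start succeeding again:

failure 1  -> wait ~10ms
failure 2  -> wait ~20ms
failure 3  -> wait ~40ms
...
failure 15 -> wait ~2.7 minutes (capped at some maximum)

So a sustained run of 503s from OpenDrive is enough to push the waits into the minutes you are seeing.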

You could try reducing transfers or checkers and see if OpenDrive behaves better.
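
For example, something like your existing command, just dialled down (adjust to taste):

rclone copy od:Library/ gd:/Library --multi-thread-streams 1 -P --transfers 1 --bwlimit 8670 --checkers=1 --tpslimit 10 --tpslimit-burst 20 --cutoff-mode=soft --drive-stop-on-upload-limit --retries 1 --low-level-retries 1 --timeout 20s --ignore-existing --log-file copy_log_library.txt -vv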

The pacer-related items in the docs only seemed to come up in the context of GDrive, and since GDrive is the target, I thought the throttling was coming from there, despite that seeming unusual.

I was originally running only 2 transfers and a single checker, with no discernible difference. I guess I'll just have to keep running it and keep pestering OpenDrive support.

Thanks!

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.