Transfers get stuck when copying from Google Drive to Dropbox

What is the problem you are having with rclone?

When running rclone copy with lots of small files, it runs well for around 1.5 hours, after which more and more of the transfers get stuck at 100%. I was able to transfer files successfully with a similar command (different paths) and no stuck transfers when the file sizes were mixed and most of the files were larger; the issue only seems to appear with a large number of small files. I ran into this once before, when eventually all of the transfers were stuck at 100% (nothing happened anymore, and I confirmed there was no network activity at all), so I cancelled it and re-ran it with -vv (I had previously been running it without verbose logging).

What I mean by "transfers stuck at 100%" is that at some point the output looks something like this:

Transferred:      115.186 GiB / 160.315 GiB, 72%, 1.206 MiB/s, ETA 10h38m52s
Checks:             22501 / 22501, 100%
Transferred:        26166 / 36234, 72%
Elapsed time:    8h48m1.8s
Transferring:
 * chunks/1c/c892a175d7cd…2a9d65e9e4c33ed909c56d:100% /2.751Mi, 0/s, 0s
 * chunks/1c/c8a0beb9efa8…2ef52476fe1146e5d4af9d:100% /1.296Mi, 0/s, 0s
 * chunks/1c/c8abfe0896ed…8a27601efee65fb9a306af:100% /1.586Mi, 0/s, 0s
 * chunks/1c/c8bd16be2cd6…be574078b1d40b3f7f0644:100% /12.914Mi, 0/s, 0s
 * chunks/1c/c9147f413e8f…9ab16614675cf4025b9dc0:100% /1.689Mi, 0/s, 0s
 * chunks/1c/c92a2d41d576…10f468e7736e496d75559f:100% /4.241Mi, 0/s, 0s
 * chunks/1c/c94007f8245f…bdef4f0839d599285879ed:100% /2.689Mi, 0/s, 0s
 * chunks/1c/c987e23afe09…09b1bc26749ef039897c73:100% /1.847Mi, 0/s, 0s
 * chunks/1c/c9e325ffc842…84267749b075172bf451ae:100% /1.003Mi, 0/s, 0s
 * chunks/1c/ca51f5e3fc96…fc6973afa92ad31f03b906:100% /2.358Mi, 0/s, 0s
 * chunks/1c/ca7327f19053…70b2e442c66a9d4017f124:100% /1.808Mi, 0/s, 0s
 * chunks/1c/ca83575e3917…674a0a2da2427dd9021eae:100% /3.286Mi, 0/s, 0s
 * chunks/1c/cac16e66a92c…4ad887547867215eabc3ba:100% /4.251Mi, 0/s, 0s
 * chunks/1e/75356bab4ba0…85a1a71381745d37704164:100% /1.430Mi, 0/s, 0s
 * chunks/1d/c80f6786ccac…a6edd932905b9839b55e59:100% /8.463Mi, 0/s, 0s
 * chunks/18/a68dd0efa50a…7f37bffad87fabe7ebb89f:100% /7.367Mi, 0/s, 0s
 * chunks/1b/9f6b2d8a31ab…02921065922823ea6f4da3:100% /4.654Mi, 0/s, 0s
 * chunks/1a/a7f69f51f51a…72099a9fd3f266178e9fef:100% /4.344Mi, 0/s, 0s
 * chunks/17/9e65b12dcdd9…d1e30422eef96f224d9b46:100% /896.287Ki, 0/s, 0s
 * chunks/1e/753ed8c37115…2fc9d6eeb81cc1b9fb9e00:100% /4.107Mi, 0/s, 0s
 * chunks/1d/c85f7e29e37c…b5d289dd6e116295a92412:100% /1.571Mi, 0/s, 0s
 * chunks/18/a6ab3a19413d…0cb580d84bde663317cd85:100% /1.124Mi, 0/s, 0s
 * chunks/1b/9f74da455122…bcdecba56960fd9372f919:100% /10.641Mi, 0/s, 0s
 * chunks/1a/a7f8e673d80e…636f9049181153e064c526:100% /2.689Mi, 0/s, 0s
 * chunks/1c/cae726b0d325…0b4bd7fe70513ef8dcfb1e:100% /2.422Mi, 0/s, 0s
 * chunks/19/a1f6c4242237…a76a996ddcb098875c4981:100% /7.585Mi, 0/s, 0s
 * chunks/17/9ea20a0131d9…31d7112ed086c782f29d42:100% /1.190Mi, 0/s, 0s
 * chunks/1e/7564e85cde10…cd8b3755cdc0a3bf78e726:100% /9.979Mi, 0/s, 0s
 * chunks/1d/c86adf30d263…1caef56dac2f374b5e3087:100% /3.545Mi, 0/s, 0s
 * chunks/18/a6d27d42bf1c…0b74fa25efe33a221a7650:100% /5.001Mi, 0/s, 0s
 * chunks/1b/9f8e41d6d579…179f9a76402464fac03869:100% /1.597Mi, 0/s, 0s
 * chunks/1a/a81360230803…2b5ea51cafb46857090df1:100% /1.124Mi, 0/s, 0s
 * chunks/1a/a82077693ccd…62ed7103133904f87cbcb4:100% /1.038Mi, 0/s, 0s
 * chunks/1a/a834dc9cf295…41d0cfca6ec305ddd243e3:100% /5.046Mi, 0/s, 0s
 * chunks/1a/a84d976ef2a7…4be04f3628eda0e32f01f9:100% /4.758Mi, 0/s, 0s
 * chunks/1a/a884ed3c057e…3e5ae206a11575e544c00d:100% /3.683Mi, 0/s, 0s
 * chunks/1a/a89aa9d22de7…e1ff151e695883f1beace7:100% /4.679Mi, 0/s, 0s
 * chunks/1a/a8d471570ce8…88e53ca8e7a5453180d101:100% /4.909Mi, 0/s, 0s
 * chunks/1a/a916b82871e6…7214ddad837ed7fa2641ef:100% /3.041Mi, 0/s, 0s
 * chunks/1a/a9645d6b4f9f…71c710ca5494c718adccea:100% /10.308Mi, 0/s, 0s
 * chunks/1a/a9759f22ad96…5c38bbf275d6a8a29baf53:100% /4.051Mi, 0/s, 0s
 * chunks/1a/a99bfcb01b4e…a5910eacdf22cf5dadf2ca:100% /3.352Mi, 0/s, 0s
 * chunks/1a/aa2d950e7cf4…b7df663fdf6db94d45e68b:100% /1.498Mi, 0/s, 0s
 * chunks/1a/aa9a0334a4a7…fd58b0af367f2a1463391a:100% /2.250Mi, 0/s, 0s
 * chunks/1a/aaa5a2d12bfa…ff7aa1b77b8dc0f52b0df9:100% /8.771Mi, 0/s, 0s
 * chunks/1a/aadc712652fd…da7db1e6e7953394759862:100% /6.787Mi, 0/s, 0s
 * chunks/1a/ab89a94d70fd…8563581534c1fa103a522e:100% /2.389Mi, 0/s, 0s
 * chunks/1a/ab99b86f8bb7…30a847251ec880e201dc57:100% /7.739Mi, 0/s, 0s
 * chunks/1a/abbb3a7e42bb…363452b589cb2c3e01dbdb:100% /2.259Mi, 0/s, 0s
 * chunks/1a/abde4a8b101b…c50217e70caf8c67ab1176:100% /6.299Mi, 0/s, 0s
 * chunks/1a/acd1d91e0edd…a126fa6ea3ab508ae702b1:100% /1.439Mi, 0/s, 0s
 * chunks/1a/acd4bc6455b8…8545e3beff5d38d3830e35:100% /1.953Mi, 0/s, 0s
 * chunks/1a/ad2e167009fb…3b20a49079cc72365b4a95:100% /1.382Mi, 0/s, 0s
 * chunks/1a/ad981732e138…5ef75c67871af62f7839a1:100% /1.680Mi, 0/s, 0s
 * chunks/1a/add720c98a02…5f10502cb8214ee93755d5:100% /1.554Mi, 0/s, 0s
 * chunks/1a/ae084725c837…568c19aad45a4a1f5d80ad:100% /5.994Mi, 0/s, 0s
 * chunks/1a/ae2c55579ad3…2da1abb8b50d3070edd8b4:100% /1.728Mi, 0/s, 0s
 * chunks/1a/ae401a354b25…4d533d80249ea9feb990a6:100% /1.437Mi, 0/s, 0s
 * chunks/26/3e70178bedbe…e4c6176a3236d094b917db:100% /1.794Mi, 0/s, -
 * chunks/26/3ead53ed7984…b76a4ef9cc7daace497d4b: transferring

And then hours later it still shows the same thing, apart from the last few transfers; if I'm not mistaken, in this case only the bottom two transfers were changing from time to time, while the rest stayed the same.

Doing a quick grep in the log file for a few of the stuck transfers results in the following:

2023/05/25 12:42:02 DEBUG : chunks/1c/c892a175d7cd4399e9ef21186b02d30b81d28d852a9d65e9e4c33ed909c56d: Uploading chunk 1/1
2023/05/25 12:42:03 DEBUG : chunks/1c/c892a175d7cd4399e9ef21186b02d30b81d28d852a9d65e9e4c33ed909c56d: Uploading chunk 2/1
2023/05/25 12:42:04 DEBUG : Dropbox root 'GDriveSync/BackupWorkstationD': Adding "/GDriveSync/BackupWorkstationD/chunks/1c/c892a175d7cd4399e9ef21186b02d30b81d28d852a9d65e9e4c33ed909c56d" to batch

and

2023/05/25 12:42:02 DEBUG : chunks/1c/c8a0beb9efa88fb132fd15ec31a74ae46b59622a2ef52476fe1146e5d4af9d: Uploading chunk 1/1
2023/05/25 12:42:03 DEBUG : chunks/1c/c8a0beb9efa88fb132fd15ec31a74ae46b59622a2ef52476fe1146e5d4af9d: Uploading chunk 2/1
2023/05/25 12:42:04 DEBUG : Dropbox root 'GDriveSync/BackupWorkstationD': Adding "/GDriveSync/BackupWorkstationD/chunks/1c/c8a0beb9efa88fb132fd15ec31a74ae46b59622a2ef52476fe1146e5d4af9d" to batch

So they all seem to log the same kind of pattern. I'm not an rclone expert, but "Uploading chunk 2/1" looks a bit suspicious to me, although as far as I can tell other files that uploaded fine logged the same thing. What definitely seems wrong to me is that "Copied (new)" is never logged for the transfers that are stuck.
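
To illustrate, this is roughly the check I mean (using one of the stuck chunk names from above and the log file named in my command below); for the stuck files it comes back empty, while files that finished do get a matching line:

grep "Copied (new)" rclonedebug2.log | grep c892a175d7cd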

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.4.0-148-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Source: Google Drive
Destination: Dropbox

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy -P --stats 30s --drive-chunk-size 512M --user-agent="Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Mobile Safari/537.36" --transfers=60 --log-file=rclonedebug2.log -vv --drive-acknowledge-abuse Gdrive:/BackupWorkstationD Dropbox:/GDriveSync/BackupWorkstationD

The rclone config contents with secrets removed.

[Dropbox]
type = dropbox
token = {"access_token":"REDACTED","token_type":"bearer","refresh_token":"REDACTED","expiry":"2023-05-25T22:56:15.89554456+02:00"}

[Gdrive]
type = drive
client_id = REDACTED
client_secret = REDACTED
scope = drive
token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"2023-05-25T20:58:28.306206846+02:00"}
team_drive = 

A log from the command with the -vv flag

If you share the full log, I can show you the most likely issue :slight_smile:

I did share the full log, just click on the gist link :slight_smile:

Thanks, I did click on the link, but it was only one minute of logs when I looked.

I checked again and it looks like the full file is there.

2023/05/25 11:00:12 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:00:12 DEBUG : pacer: low level retry 1/10 (error too_many_requests/..)
2023/05/25 11:00:12 DEBUG : pacer: Rate limited, increasing sleep to 5m0s
2023/05/25 11:00:12 DEBUG : pacer: Reducing sleep to 3m45s
2023/05/25 11:00:12 DEBUG : pacer: Reducing sleep to 2m48.75s

With Dropbox you can only do about 12 TPS, so you are hammering it with your setup and it'll eventually retry itself to death, as you've seen.

You want to limit the TPS and use:

--tpslimit 12 --tpslimit-burst 0
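
Added to your existing command, that would look roughly like this (everything else kept as you had it):

rclone copy -P --stats 30s --drive-chunk-size 512M --user-agent="Mozilla/5.0 (Linux; Android 10; K) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/113.0.0.0 Mobile Safari/537.36" --transfers=60 --log-file=rclonedebug2.log -vv --drive-acknowledge-abuse --tpslimit 12 --tpslimit-burst 0 Gdrive:/BackupWorkstationD Dropbox:/GDriveSync/BackupWorkstationD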

Thanks, I'll try that. But I'm wondering two things: why aren't those options set by default for Dropbox, and why, if I just cancel and restart the same command, does it instantly use all the transfers successfully again?

Because none of this is documented and it's purely trial and error from folks testing things out. Hardly any provider publishes the exact rate limits they use, and rclone supports quite a number of providers, so even trying to document it is a chore in itself.

It's not instant. It has to check the source and destination and make sure they are the same, as nothing is kept in cache. How long that takes depends on how many items there are.

Generally, if you want to move from Drive to Dropbox, use the defaults for the most part and just limit the TPS for Dropbox.

I see, that's unfortunate. But I'm still wondering why the transfers get stuck at some point, especially since some of the transfers were still running while a lot of them didn't change for hours; so even if that was the case, the stuck transfers should eventually have continued, right?

You've got

--transfers=60

So you are hammering Dropbox, and it rate-limits you exponentially based on how badly you are hammering it.

You start out waiting 5 minutes, that eventually morphs into more time, and you keep hammering, making it worse.

So you have all of these backing up:

grep NOTICE rclonedebug.log
2023/05/25 11:00:12 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:00:12 NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:08:04 NOTICE: too_many_requests/: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:08:04 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:16:08 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:16:08 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:16:08 NOTICE: too_many_requests/: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:16:08 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:22:56 NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:22:56 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:22:56 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:31:02 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:31:02 NOTICE: too_many_requests/: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:39:03 NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:47:05 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:47:05 NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:54:01 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:54:01 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 11:54:01 NOTICE: too_many_requests/: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 12:03:26 NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 12:11:29 NOTICE: too_many_requests/: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 12:11:29 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 12:18:17 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 12:18:17 NOTICE: too_many_requests/.: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 12:18:17 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 12:34:50 NOTICE: too_many_requests/..: Too many requests or write operations. Trying again in 300 seconds.
2023/05/25 12:34:50 NOTICE: too_many_requests/...: Too many requests or write operations. Trying again in 300 seconds.

and all the retries:

2023/05/25 15:35:40 DEBUG : pacer: Reducing sleep to 10ms
2023/05/25 16:19:54 DEBUG : pacer: Reducing sleep to 15ms
2023/05/25 16:19:55 DEBUG : pacer: Reducing sleep to 11.25ms
2023/05/25 16:19:56 DEBUG : pacer: Reducing sleep to 16.875ms
2023/05/25 16:19:57 DEBUG : pacer: Reducing sleep to 50.625ms
2023/05/25 16:19:57 DEBUG : pacer: Reducing sleep to 37.96875ms
2023/05/25 16:19:58 DEBUG : pacer: Reducing sleep to 28.476562ms
2023/05/25 16:19:59 DEBUG : pacer: Reducing sleep to 21.357421ms
2023/05/25 16:19:59 DEBUG : pacer: Reducing sleep to 16.018065ms
2023/05/25 16:20:00 DEBUG : pacer: Reducing sleep to 12.013548ms
2023/05/25 16:20:01 DEBUG : pacer: Reducing sleep to 10ms
2023/05/25 17:55:27 DEBUG : pacer: Reducing sleep to 15ms
2023/05/25 17:55:27 DEBUG : pacer: Reducing sleep to 11.25ms
2023/05/25 17:55:27 DEBUG : pacer: Reducing sleep to 10ms
2023/05/25 18:13:52 DEBUG : pacer: Reducing sleep to 15ms
2023/05/25 18:13:55 DEBUG : pacer: Reducing sleep to 22.5ms
2023/05/25 18:13:57 DEBUG : pacer: Reducing sleep to 33.75ms
2023/05/25 18:13:58 DEBUG : pacer: Reducing sleep to 25.3125ms
2023/05/25 18:13:59 DEBUG : pacer: Reducing sleep to 18.984375ms
2023/05/25 18:14:02 DEBUG : pacer: Reducing sleep to 14.238281ms
2023/05/25 18:14:02 DEBUG : pacer: Reducing sleep to 10.67871ms
2023/05/25 18:14:03 DEBUG : pacer: Reducing sleep to 10ms
2023/05/25 18:30:40 DEBUG : pacer: Reducing sleep to 15ms
2023/05/25 18:30:40 DEBUG : pacer: Reducing sleep to 11.25ms
2023/05/25 18:30:41 DEBUG : pacer: Reducing sleep to 10ms
2023/05/25 19:20:45 DEBUG : pacer: Reducing sleep to 30ms
2023/05/25 19:20:46 DEBUG : pacer: Reducing sleep to 22.5ms
2023/05/25 19:20:47 DEBUG : pacer: Reducing sleep to 16.875ms
2023/05/25 19:20:50 DEBUG : pacer: Reducing sleep to 12.65625ms
2023/05/25 19:20:52 DEBUG : pacer: Reducing sleep to 10ms

So it's going to keep getting worse. If you waited long enough it would eventually finish, but the number of retries and all the sleeping would stretch that time out exponentially, I'd imagine.

etexter@macmini 43e04dfbddc35ea5f2868689dcc49f36-9d9a2b888f035267b77f88d9f9054473f5300aa3 % grep "Reducing sleep" rclonedebug.log | wc -l
     569
etexter@macmini 43e04dfbddc35ea5f2868689dcc49f36-9d9a2b888f035267b77f88d9f9054473f5300aa3 % grep NOTICE rclonedebug.log| wc -l
      27

I see, thanks for the explanation!

As I said, I ran the command again with the parameters you suggested, but the issue still appears. Can you please check the new log and tell me whether I just need to limit the TPS further, or whether it's something else?

Unfortunately, as the log file is over 100 MB I can't upload it to a gist or pastebin, so I uploaded it to file.io: https://file.io/BldhVy1C27yF
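
(If a smaller file would be easier, I could also extract just the NOTICE and ERROR lines with something like the command below and share that instead.)

grep -E "NOTICE|ERROR" rclonedebug2.log > rclonedebug2-trimmed.log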

It's not a perfect setting; you might have to go lower.

I use one API app registration per rclone instance as it seems to be tied to the client ID/secret.

If it's shared and other things are using it, that might be it as well.
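
In the config that just means giving each remote its own app credentials, roughly like this (placeholder values; you'd re-authorize afterwards to get a fresh token):

[Dropbox]
type = dropbox
client_id = YOUR_OWN_APP_KEY
client_secret = YOUR_OWN_APP_SECRET
token = REDACTED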

Just my 2 cents here: I've had to use --transfers 5; even going to 6 has caused me issues.
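
So for this copy, something along these lines might be a safer starting point (same remotes as above, everything else left at the defaults):

rclone copy -P --transfers 5 --tpslimit 12 --tpslimit-burst 0 Gdrive:/BackupWorkstationD Dropbox:/GDriveSync/BackupWorkstationD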
