"low level retry 1/2" should be "low level retry 1/10"

What is the problem you are having with rclone?

I don't understand the debug log — its output does not look correct.
--- "low level retry 1/2" should be "low level retry 1/10", since --low-level-retries was left at its default of 10
--- each retry should increment the low level counter

Or are there different types of events that trigger low level retries, each with its own retry count?

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0
- os/version: Microsoft Windows 10 Pro 2009 (64 bit)
- os/kernel: 10.0.22000.527 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.17.2
- go/linking: dynamic
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

wasabi, s3 clone

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy d:\data\iso\veeam\re\en08\20220216\VeeamRecoveryMedia_EN08.iso wasabi01:zork --progress --s3-disable-checksum  -vv

The rclone config contents with secrets removed.

[wasabi01]
type = s3
provider = Wasabi
access_key_id = redacted
secret_access_key = redacted
endpoint = s3.us-east-2.wasabisys.com

A log from the command with the -vv flag

rclone copy d:\data\iso\veeam\re\en08\20220216\VeeamRecoveryMedia_EN08.iso wasabi01:zork --progress --s3-disable-checksum  -vv
2022/03/08 10:17:52 DEBUG : Setting --config "C:\\data\\rclone\\rclone.conf" from environment variable RCLONE_CONFIG="C:\\data\\rclone\\rclone.conf"
2022/03/08 10:17:52 DEBUG : rclone: Version "v1.57.0" starting with parameters ["C:\\data\\rclone\\rclone.exe" "copy" "d:\\data\\iso\\veeam\\re\\en08\\20220216\\VeeamRecoveryMedia_EN08.iso" "wasabi01:zork" "--progress" "--s3-disable-checksum" "-vv"]
2022/03/08 10:17:52 DEBUG : Creating backend with remote "d:\\data\\iso\\veeam\\re\\en08\\20220216\\VeeamRecoveryMedia_EN08.iso"
2022/03/08 10:17:52 DEBUG : Using config file from "C:\\data\\rclone\\rclone.conf"
2022/03/08 10:17:52 DEBUG : fs cache: adding new entry for parent of "d:\\data\\iso\\veeam\\re\\en08\\20220216\\VeeamRecoveryMedia_EN08.iso", "//?/d:/data/iso/veeam/re/en08/20220216"
2022/03/08 10:17:52 DEBUG : Creating backend with remote "wasabi01:zork"
2022/03/08 10:17:52 DEBUG : wasabi01: detected overridden config - adding "{1SSjr}" suffix to name
2022/03/08 10:17:52 DEBUG : fs cache: renaming cache item "wasabi01:zork" to be canonical "wasabi01{1SSjr}:zork"
2022-03-08 10:17:52 DEBUG : VeeamRecoveryMedia_EN08.iso: Need to transfer - File not found at Destination
2022-03-08 10:17:52 INFO  : S3 bucket zork: Bucket "zork" created with ACL "private"
2022-03-08 10:17:52 DEBUG : VeeamRecoveryMedia_EN08.iso: multipart upload starting chunk 1 size 5Mi offset 0/896.562Mi
2022-03-08 10:17:53 DEBUG : VeeamRecoveryMedia_EN08.iso: multipart upload starting chunk 2 size 5Mi offset 5Mi/896.562Mi
2022-03-08 10:17:53 DEBUG : VeeamRecoveryMedia_EN08.iso: multipart upload starting chunk 3 size 5Mi offset 10Mi/896.562Mi
2022-03-08 10:17:53 DEBUG : VeeamRecoveryMedia_EN08.iso: multipart upload starting chunk 4 size 5Mi offset 15Mi/896.562Mi
2022-03-08 10:17:53 DEBUG : VeeamRecoveryMedia_EN08.iso: multipart upload starting chunk 5 size 5Mi offset 20Mi/896.562Mi
2022-03-08 10:17:53 DEBUG : VeeamRecoveryMedia_EN08.iso: multipart upload starting chunk 6 size 5Mi offset 25Mi/896.562Mi
2022-03-08 10:17:53 DEBUG : VeeamRecoveryMedia_EN08.iso: multipart upload starting chunk 7 size 5Mi offset 30Mi/896.562Mi
2022-03-08 10:17:53 DEBUG : VeeamRecoveryMedia_EN08.iso: multipart upload starting chunk 8 size 5Mi offset 35Mi/896.562Mi
2022-03-08 10:18:13 DEBUG : pacer: low level retry 1/2 (error RequestError: send request failed
caused by: Put "https://s3.us-east-2.wasabisys.com/zork/VeeamRecoveryMedia_EN08.iso?partNumber=8&uploadId=tbeE3IbfIXgpl-iXYg00YpTVHiJKnyC2HUYFM17sozMn9tcwJyBMeiBfbuBKFZnFJFFL5NjM9ZGO0Le104SLQhFNCk5qNiyJVrEDl0Bwi73LcxIuCOchJcuc7E_9X2--": write tcp 192.168.62.234:62511->38.73.225.22:443: wsasend: An existing connection was forcibly closed by the remote host.)
2022-03-08 10:18:13 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2022-03-08 10:18:13 DEBUG : pacer: low level retry 1/2 (error RequestError: send request failed
caused by: Put "https://s3.us-east-2.wasabisys.com/zork/VeeamRecoveryMedia_EN08.iso?partNumber=7&uploadId=tbeE3IbfIXgpl-iXYg00YpTVHiJKnyC2HUYFM17sozMn9tcwJyBMeiBfbuBKFZnFJFFL5NjM9ZGO0Le104SLQhFNCk5qNiyJVrEDl0Bwi73LcxIuCOchJcuc7E_9X2--": write tcp 192.168.62.234:62512->38.73.225.22:443: wsasend: An existing connection was forcibly closed by the remote host.)
2022-03-08 10:18:13 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2022-03-08 10:18:57 DEBUG : pacer: low level retry 2/2 (error RequestError: send request failed
caused by: Put "https://s3.us-east-2.wasabisys.com/zork/VeeamRecoveryMedia_EN08.iso?partNumber=7&uploadId=tbeE3IbfIXgpl-iXYg00YpTVHiJKnyC2HUYFM17sozMn9tcwJyBMeiBfbuBKFZnFJFFL5NjM9ZGO0Le104SLQhFNCk5qNiyJVrEDl0Bwi73LcxIuCOchJcuc7E_9X2--": dial tcp: lookup s3.us-east-2.wasabisys.com: no such host)
2022-03-08 10:18:57 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2022-03-08 10:18:57 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2022-03-08 10:18:57 DEBUG : pacer: Rate limited, increasing sleep to 80ms
2022-03-08 10:18:57 DEBUG : pacer: low level retry 2/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2022-03-08 10:18:57 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2022-03-08 10:18:57 DEBUG : pacer: low level retry 2/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2022-03-08 10:18:57 DEBUG : pacer: Rate limited, increasing sleep to 160ms
2022-03-08 10:18:57 DEBUG : pacer: Rate limited, increasing sleep to 320ms
2022-03-08 10:18:57 DEBUG : pacer: Rate limited, increasing sleep to 640ms
2022-03-08 10:18:57 DEBUG : pacer: low level retry 2/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2022-03-08 10:18:57 DEBUG : pacer: Rate limited, increasing sleep to 1.28s
2022-03-08 10:18:57 DEBUG : VeeamRecoveryMedia_EN08.iso: Cancelling multipart upload
2022-03-08 10:19:44 DEBUG : pacer: Reducing sleep to 960ms
2022-03-08 10:19:44 DEBUG : VeeamRecoveryMedia_EN08.iso: Failed to cancel multipart upload: RequestError: send request failed
caused by: Delete "https://s3.us-east-2.wasabisys.com/zork/VeeamRecoveryMedia_EN08.iso?uploadId=tbeE3IbfIXgpl-iXYg00YpTVHiJKnyC2HUYFM17sozMn9tcwJyBMeiBfbuBKFZnFJFFL5NjM9ZGO0Le104SLQhFNCk5qNiyJVrEDl0Bwi73LcxIuCOchJcuc7E_9X2--": dial tcp: lookup s3.us-east-2.wasabisys.com: no such host
2022-03-08 10:19:44 DEBUG : VeeamRecoveryMedia_EN08.iso: Received error: multipart upload failed to upload part: RequestError: send request failed
caused by: Put "https://s3.us-east-2.wasabisys.com/zork/VeeamRecoveryMedia_EN08.iso?partNumber=7&uploadId=tbeE3IbfIXgpl-iXYg00YpTVHiJKnyC2HUYFM17sozMn9tcwJyBMeiBfbuBKFZnFJFFL5NjM9ZGO0Le104SLQhFNCk5qNiyJVrEDl0Bwi73LcxIuCOchJcuc7E_9X2--": dial tcp: lookup s3.us-east-2.wasabisys.com: no such host - low level retry 1/10
2022-03-08 10:20:33 DEBUG : pacer: Reducing sleep to 720ms
2022-03-08 10:20:33 ERROR : VeeamRecoveryMedia_EN08.iso: Failed to copy: multipart upload failed to initialise: RequestError: send request failed
caused by: Post "https://s3.us-east-2.wasabisys.com/zork/VeeamRecoveryMedia_EN08.iso?uploads=": dial tcp: lookup s3.us-east-2.wasabisys.com: no such host
2022-03-08 10:20:33 ERROR : Attempt 1/3 failed with 1 errors and: multipart upload failed to initialise: RequestError: send request failed
caused by: Post "https://s3.us-east-2.wasabisys.com/zork/VeeamRecoveryMedia_EN08.iso?uploads=": dial tcp: lookup s3.us-east-2.wasabisys.com: no such host
2022-03-08 10:21:19 DEBUG : pacer: Reducing sleep to 540ms
2022-03-08 10:21:20 ERROR : Attempt 2/3 failed with 1 errors and: RequestError: send request failed
caused by: Head "https://s3.us-east-2.wasabisys.com/zork/VeeamRecoveryMedia_EN08.iso": dial tcp: lookup s3.us-east-2.wasabisys.com: no such host
2022-03-08 10:21:48 INFO  : Signal received: interrupt
2022-03-08 10:21:48 INFO  : Exiting...
Transferred:           40 MiB / 40 MiB, 100%, 4 B/s, ETA 0s
Elapsed time:      3m55.5s
Terminate batch job (Y/N)?

This is confusing, I'd agree.

What we do is let the AWS SDK do --low-level-retries worth of retries.

However, for complicated reasons we also need to do some retries in rclone itself, so rclone is configured with 2 low level retries of its own.

So when you see "low level retry 1/2" it should really say "low level retry 10/20", as the SDK has already done 10 retries by that point.
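In pseudocode, the layered behaviour looks roughly like this (hypothetical names and a toy failure model — the real rclone pacer and AWS SDK code are more involved; this only sketches why the pacer's counter can read "1/2" while many more requests have already hit the wire):

```python
SDK_RETRIES = 10      # retries handed to the AWS SDK (--low-level-retries)
RCLONE_RETRIES = 2    # the fixed retry count rclone's pacer uses on top

def sdk_call(op):
    """One call as the pacer sees it: up to SDK_RETRIES attempts inside the SDK."""
    for _ in range(SDK_RETRIES):
        if op():
            return True
    return False

def pacer_call(op):
    """rclone's pacer: up to RCLONE_RETRIES outer retries, each wrapping a full SDK call."""
    for retry in range(1, RCLONE_RETRIES + 1):
        print(f"low level retry {retry}/{RCLONE_RETRIES}")
        if sdk_call(op):
            return True
    return False

# A flaky operation that fails its first 12 attempts, then succeeds:
calls = 0
def flaky():
    global calls
    calls += 1
    return calls > 12

pacer_call(flaky)
print(f"attempts on the wire: {calls}")  # → attempts on the wire: 13
```

So each "low level retry n/2" line in the log can stand for up to 10 real network attempts made inside the SDK.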

OK, thanks for the explanation.

