Scaleway (S3), speed regression between 1.57 and 1.58.1

What is the problem you are having with rclone?

I'm seeing abnormally slow check speeds on the C14 / Scaleway backend with rclone v1.58.1. The same command with v1.57 runs just fine.

I've run the same test several times:

  • v1.57: 11363 files checked in ~ 4 minutes 15 seconds
  • v1.58.1: only ~ 400 files checked in ~ 4 minutes

I've tried on Windows 10 and on my Debian NAS, with the same results: slow checks with v1.58.1 and normal speed with v1.57.

The changelog mentions a change in the ListObjects version, so I tried to figure out whether that could be related to my problem. I ran the sync command (shown below) with --s3-list-version 1 and with --s3-list-version 2, without any improvement. That's strange :face_with_raised_eyebrow:
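For reference, this is the shape of what I tried - the same dry-run sync, just with the list version forced:

# same sync, forcing ListObjects v1 and then v2
rclone sync "\\naspi4c\data\Mes photos\2021" c14:"cl-backup/Mes photos/2021" --dry-run -vv --s3-list-version 1
rclone sync "\\naspi4c\data\Mes photos\2021" c14:"cl-backup/Mes photos/2021" --dry-run -vv --s3-list-version 2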

I don't know what other changes were made between 1.57 and 1.58.1 that could be tested easily. Let me know if you have any ideas ... :star_struck:

I don't need v1.58.1 for the moment, so I went back to my beloved v1.57. :slight_smile:

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1
- os/version: Microsoft Windows 10 Home 21H2 (64 bit)
- os/kernel: 10.0.19044.1706 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.17.9
- go/linking: dynamic
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

C14 / Scaleway. It's an S3-compatible backend, similar to Amazon Glacier.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync "\\naspi4c\data\Mes photos\2021" c14:"cl-backup/Mes photos/2021" --dry-run -vv

The rclone config contents with secrets removed.

[c14]
type = s3
provider = Scaleway
env_auth = false
access_key_id = XXX
secret_access_key = XXX
region = fr-par
endpoint = s3.fr-par.scw.cloud
acl = private
storage_class = GLACIER

A log from the command with the -vv flag

2022/06/05 09:24:40 DEBUG : rclone: Version "v1.58.1" starting with parameters ["rclone" "sync" "\\\\naspi4c\\data\\Mes photos\\2021" "c14:cl-backup/Mes photos/2021" "--dry-run" "-vv" "--log-file=D:\\Users\\JC\\Mes programmes\\Internet\\rclone\\rclone-v1.58.1-windows-amd64\\rclone-v1.58.1-windows-amd64\\output.txt"]
2022/06/05 09:24:40 DEBUG : Creating backend with remote "\\\\naspi4c\\data\\Mes photos\\2021"
2022/06/05 09:24:40 DEBUG : Using config file from "C:\\Users\\JC\\.config\\rclone\\rclone.conf"
2022/06/05 09:24:40 DEBUG : fs cache: renaming cache item "\\\\naspi4c\\data\\Mes photos\\2021" to be canonical "//?/UNC/naspi4c/data/Mes photos/2021"
2022/06/05 09:24:40 DEBUG : Creating backend with remote "c14:cl-backup/Mes photos/2021"
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1854.CR2: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1854.CR2: Unchanged skipping
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1852.CR2: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1852.CR2: Unchanged skipping
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1852.xmp: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1852.xmp: Unchanged skipping
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1854.xmp: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1854.xmp: Unchanged skipping
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1852.JPG: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1852.JPG: Unchanged skipping
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1855.CR2: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1855.CR2: Unchanged skipping
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1855.JPG: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1855.JPG: Unchanged skipping
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1854.JPG: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:54 DEBUG : 2021-01-10/IMG_1854.JPG: Unchanged skipping
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1855.xmp: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1855.xmp: Unchanged skipping
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1859.JPG: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1859.JPG: Unchanged skipping
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1859.xmp: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1859.xmp: Unchanged skipping
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1861.CR2: Size and modification time the same (differ by 0s, within tolerance 100ns)
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1861.CR2: Unchanged skipping
2022/06/05 09:24:59 DEBUG : 2021-01-10/IMG_1862.CR2: Size and modification time the same (differ by 0s, within tolerance 100ns)

and so on, and so on ...

hmm, that is abnormal.

did a quick test and failed to replicate.
--- using wasabi, an s3 clone.
--- the bucket has 10,000 files in the root
--- v1.57.0 and v1.58.1
--- ran two commands
rclone sync . remote:bucket --checkers=128 --fast-list
rclone sync . remote:bucket

here is the summary: v1.58.1 is faster than v1.57.0

v1.57.0   v1.58.1   command
6s        4s        rclone sync . remote:bucket --checkers=128 --fast-list
40s       37s       rclone sync . remote:bucket
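--- for anyone wanting to reproduce, a rough sketch of the comparison, assuming both binaries are unpacked side by side (the folder names are just examples):

# time the identical sync with each version and compare the wall-clock totals
time ./rclone-v1.57.0-linux-amd64/rclone sync . remote:bucket --checkers=128 --fast-list
time ./rclone-v1.58.1-linux-amd64/rclone sync . remote:bucket --checkers=128 --fast-list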

I can replicate this problem. Rclone 1.57 goes like a rocket with Scaleway - 1.58 like a tortoise. I tried to bisect the problem and discovered that it wasn't in rclone itself:

  • If I build 1.57 with go1.18 then I get the same result - very slow
  • If I build 1.58 with go1.17 then it goes fast

So this is down to the Go runtime - next, bisect the Go toolchain itself... This will take a little time...
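For the curious, the loop looks roughly like this - a sketch only, the revisions, paths and test command here are illustrative rather than the exact ones I used:

# bisect the Go toolchain between a fast and a slow release tag
cd ~/src/go
git bisect start
git bisect bad go1.18     # rclone built with this is slow against Scaleway
git bisect good go1.17    # rclone built with this is fast
# then, at each commit the bisect proposes:
(cd src && ./make.bash)                       # rebuild the Go toolchain at this commit
(cd ~/src/rclone && ~/src/go/bin/go build)    # rebuild rclone with that toolchain
time ~/src/rclone/rclone check /local/photos c14:cl-backup/photos   # time a check run
git bisect good   # or 'git bisect bad', depending on the timing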

15 compiles of the Go compiler, its libraries and rclone later...

And the winner is

7109323af5a8c7e076fa2af7185caa6185df97cd is the first bad commit
commit 7109323af5a8c7e076fa2af7185caa6185df97cd
Author: Damien Neil <dneil@google.com>
Date:   Mon Oct 4 10:50:02 2021 -0700

    all: update golang.org/x/net to pull in CL 353390
    
    Fixes #48564.
    Fixes #23559.
    
    Change-Id: I8e0b646c4791d3a6fb17df1af0a7175b68ce8983
    Reviewed-on: https://go-review.googlesource.com/c/go/+/353870
    Trust: Damien Neil <dneil@google.com>
    Run-TryBot: Damien Neil <dneil@google.com>
    TryBot-Result: Go Bot <gobot@golang.org>
    Reviewed-by: Brad Fitzpatrick <bradfitz@golang.org>

 src/go.mod                |    2 +-
 src/go.sum                |    6 +
 src/net/http/h2_bundle.go | 1397 ++++++++++++++++++++++++++-------------------
 src/vendor/modules.txt    |    2 +-
 4 files changed, 807 insertions(+), 600 deletions(-)

So the http2 module in Go got updated.

Annoyingly that itself is a monster commit, but it gives us somewhere to point the finger.

Adding the flag --disable-http2 makes rclone run nice and fast again. However, rclone 1.57.0 is also running with HTTP/2 and it is fast, so this looks like a regression in Go's http2 support.
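As a concrete workaround, the command from the first post can simply be re-run with the flag added:

# same dry-run sync, with HTTP/2 disabled in rclone's HTTP transport
rclone sync "\\naspi4c\data\Mes photos\2021" c14:"cl-backup/Mes photos/2021" --dry-run -vv --disable-http2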

I'm pretty sure there isn't a Go issue about this already, so I will create one in a moment!

Update - and here it is: net/http2: performance regression from go1.17 to go1.18 · Issue #53253 · golang/go · GitHub


Thanks for your analysis. I hadn't noticed that rclone v1.58 is built with go1.18. If I had read the changelog more carefully, I could have pointed in that direction ...

OMG ! :open_mouth: ... You really wanted to find the cause.
Thank you for your work and your amazing rclone. :slight_smile:

I will try this trick. :slight_smile:


I tested --disable-http2. It worked as you said - perfectly normal speed with v1.58.
Thank you

I hate HTTP2 and everything about it. It always seems to break stuff regardless of the provider - it has hit Scaleway here, and Google, Dropbox, etc. at other times.

Yes http2 does seem to throw up a lot of problems. I guess it is cutting edge tech and the implementations are quite complicated!

I will continue to try to get to the bottom of the regression in Go's http2 performance though!
