Slow uploads to S3 Glacier Deep Archive on a 2-gig fiber line

What is the problem you are having with rclone?

Hey everyone, thanks for looking. I'm running rclone 1.65.2 on an Unraid box with 2.5 gig ethernet to a 2 gig upload fiber connection.

Trying to upload to the S3 Glacier Deep Archive storage class.

Unfortunately I'm seeing extremely slow speeds during this upload: capping out around 60 MB/s at best, but often sending well below that, with speeds frequently dipping under 1 MB/s.

I'm mostly uploading large files. I've read the glacier guide and am setting most flags to save costs. I'm not running checksums. I've tweaked the chunk size and threads to various levels but am not seeing any speedups.

I've verified that normal file transfers to this machine run at full drive speed across my network, and I've run a speed test from the CLI on this machine to confirm the connection sees the full 2 gig upload.

My next test is to install the AWS CLI on this box and do manual uploads, to try to rule out AWS being the issue here.
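For reference, a minimal sketch of that manual-upload test with the AWS CLI might look like this (the bucket name and file path are placeholders; the CLI does its own multipart uploads, so it makes a rough apples-to-apples comparison against rclone):

```shell
# Hypothetical sanity check: upload one large file straight to Deep Archive
# with the AWS CLI to see what throughput the link/AWS side can sustain.
# "my-bucket" and the file path are illustrative placeholders.
aws s3 cp ./Concerts/testfile.avi "s3://my-bucket/Concerts/testfile.avi" \
    --storage-class DEEP_ARCHIVE \
    --region us-east-1
```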

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.2

  • os/version: slackware 14.2+ (64 bit)
  • os/kernel: 5.10.28-Unraid (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.21.6
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

S3 Glacier Deep Archive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy Concerts :s3:s3-bucket-name/Concerts --s3-provider AWS --s3-access-key-id SECRET --s3-secret-access-key SECRET --s3-region us-east-1 --s3-storage-class DEEP_ARCHIVE --s3-upload-concurrency 16 --transfers 16 --s3-chunk-size 32M --s3-no-head --s3-disable-checksum --s3-no-check-bucket --size-only --fast-list --cutoff-mode soft -v --progress --ignore-checksum --human-readable

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

Not running a config file; I pass the full configuration via CLI flags, since Unraid does not persist certain storage.

A log from the command that you were trying to run with the -vv flag

Transferred:      246.614 GiB / 2.989 TiB, 8%, 546.980 KiB/s, ETA 8w6d10h
Transferred:           76 / 1414, 5%
Elapsed time:   5h11m13.1s
 *                20021025 Oreon/021025oreon.avi: 77% /20.118Gi, 3/s, 431946h29m56s
 * 20020720 GMPD/020720GM…frostburg-maryland.avi: 66% /24.189Gi, 7.370Ki/s, 323h36m1s
 *                  20000511 GMPD/000511gmpd.avi: 74% /20.936Gi, 0/s, 2562047h47m16s
 *                  20000706 GMPD/000706gmpd.avi: 61% /25.311Gi, 53/s, 55007h14m22s
 *    20030131 Staple Oreon/030131-oreon-alt.avi: 69% /22.424Gi, 0/s, 2562047h47m16s
 *                20030524 gmpd/GMPD20030524.avi: 61% /17.803Gi, 0/s, 2562047h47m16s
 *              20040116 oreon/20040116oreon.avi: 78% /13.292Gi, 87/s, 9508h24m45s
 *            20040129 WWR WPS/040129WWR-WPS.avi: 57% /12.872Gi, 167.696Ki/s, 9h26m13s
 *          20040428 waltzing/waltzing040428.avi: 89% /5.006Gi, 200.846Ki/s, 44m3s
 *                    20040407 tcf/tcf040407.avi: 61% /6.494Gi, 0/s, 1289989h30m29s
 *            20040305 minkus/20040305minkus.avi: 49% /6.301Gi, 158.375Ki/s, 5h48m34s
 *                  20040225 ALVR/040225avlr.avi: 51% /5.854Gi, 1/s, 599530h5m50s
 * 20040407 illicit ALB/0…albillicit-gravity.avi: 13% /19.180Gi, 0/s, 2562047h47m16s
 *  20040407 illicit ALB/illicitdreams040407.avi: 80% /3.098Gi, 0/s, 539599h43m26s
 *                    20040305 TCF/040305tcf.avi: 34% /7.199Gi, 177/s, 7889h57m22s
 *         20040225 wps rydells/040225wps-JR.avi:  0% /6.030Gi, 0/s, -^C

welcome to the forum,

please post the results of that test.

as for rclone, please post a full rclone debug log, so we can see exactly what rclone is doing?

--cutoff-mode soft without using --max-transfer and/or --max-duration
not sure that does anything?

--cutoff-mode soft is there for eventual use with --max-duration, yes.
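For what it's worth, the pairing that makes --cutoff-mode soft actually do something would look roughly like this (the 8h duration is an arbitrary illustrative value, not from the original command):

```shell
# --cutoff-mode only takes effect alongside --max-duration or --max-transfer.
# "soft" lets in-flight transfers finish when the limit is hit, rather than
# cutting them off mid-upload. The 8h value is just a placeholder.
rclone copy Concerts :s3:s3-bucket-name/Concerts \
    --max-duration 8h \
    --cutoff-mode soft
```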

I just ran a test with a single 50 GB file using rclone, same config, and it maxed out my hard drive speed, uploading at 150 MiB/s.

So the connection seems fine and the AWS ingress seems fine.

Something is amiss when trying to upload a bunch at once.

Full debug output for the entire dir I'm trying to upload is a bit insane; I'll try to snag it and toss it into a GitHub gist.

Is there a flag to go single file at a time? That might fix my issue.

maybe try

You can use --transfers 1 if you are maxing out your hard drive. You were running 16 at a time and were probably IO-bound with that many going at once.
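A rough back-of-the-envelope for why those flags can hurt, assuming (per the rclone S3 docs) each transfer may buffer up to --s3-upload-concurrency chunks of --s3-chunk-size in memory at once:

```python
# Worst-case in-flight buffering for rclone S3 multipart uploads,
# assuming: transfers x upload-concurrency chunks held in RAM at once.
# This is a sketch of the scaling, not an exact rclone accounting.

def inflight_buffer_mib(transfers: int, upload_concurrency: int,
                        chunk_size_mib: int) -> int:
    """Approximate MiB buffered in memory at peak."""
    return transfers * upload_concurrency * chunk_size_mib

# The flags from the original command: 16 transfers, concurrency 16, 32M chunks.
print(inflight_buffer_mib(16, 16, 32))  # 8192 MiB, i.e. ~8 GiB of buffers
print(16 * 16)                          # 256 part uploads in flight at once
```

That is also 256 simultaneous read streams against the source disks, which is a plausible way to end up IO-bound on spinning drives.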

Here's a log file for the current config. It took almost 6 minutes before anything started, then ran at about 40 MiB/s up.

I think Animosity022 might be right with some kind of IO issue here. Gonna try one at a time now and see how that feels.

Setting --transfers to 1 or some other low number seems to be the right move here; I'm getting 200 MB/s up now. It will probably suck while copying smaller files, but it's good enough for now.

Not sure what hardware limitation I'm hitting, or what the deal is, but I can live with this for the moment.

Thank you both!

from the debug log
multi-thread copy: chunk 6/430 failed: multi-thread copy: failed to write chunk
pacer: Rate limited, increasing sleep to 2s
so the provider is throttling your connection

fwiw, might start off using default values, establish a baseline, and then tweak from that.
i.e. remove "--s3-upload-concurrency" "16" "--transfers" "16" "--s3-chunk-size" "32M"

I actually started with defaults and worked my way up to the current chunk size and concurrency. It showed pretty much the same speeds and slowdown issues. Kind of odd.
