Trouble uploading multi-part files to S3 / Minio

What is the problem you are having with rclone?

Multipart uploads appear to be failing with the following error.

Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 2 with 52428800 bytes: : 
	status code: 524, request id: , host id: 

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.2
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 6.2.0-35-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

S3 / Minio

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy /mnt/local/Media/TV/Ridiculousness s3:Media/TV/Ridiculousness --low-level-retries 1 -vv --log-file /home/user/logfile.log

I was told to add --low-level-retries 1 by my provider, and without that flag it fails even harder.

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[s3]
type = alias
remote = minio:data/personal-files

[minio]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
endpoint = https://drive.domain.tld
acl = bucket-owner-full-control
upload_cutoff = 100Mi
chunk_size = 50Mi
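
For context: chunk_size = 50Mi is 52428800 bytes, which matches the chunk size in the error above, and multipart uploads kick in because these files are larger than upload_cutoff = 100Mi. As far as I understand, the same settings can also be overridden per run with flags instead of the config file, e.g. something like:

rclone copy /mnt/local/Media/TV/Ridiculousness s3:Media/TV/Ridiculousness --s3-chunk-size 50Mi --s3-upload-cutoff 100Mi --low-level-retries 1 -vv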

A log from the command that you were trying to run with the -vv flag

Log file

Try running your copy command for a single big file (so that multipart kicks in) with the -vv --dump headers flags. It should give a bit more insight into what is going on.

A 524 status code typically indicates that a request to a server took longer than the server's configured timeout period. This can be caused by a variety of factors, including network latency issues and high traffic on the server.

So it is probably a server problem - most likely the server is overloaded.

Try reducing this:

  --s3-upload-concurrency int   Concurrency for multipart uploads (default 4)
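
For example (a sketch, just adding the flag to the copy command posted earlier in the thread, starting with the lowest value):

rclone copy /mnt/local/Media/TV/Ridiculousness s3:Media/TV/Ridiculousness --s3-upload-concurrency 1 --low-level-retries 1 -vv --log-file /home/user/logfile.log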

Tried again with the command:
rclone copy "./Creator, The (2023)" "s3:Media/Movies/Creator, The (2023)" --low-level-retries 1 -vv --log-file /home/user/oneBigFile.log --dump headers

--dump headers logfile

You can see:

2023/11/19 10:40:04 DEBUG : HTTP RESPONSE (req 0xc0008e4800)
2023/11/19 10:40:04 DEBUG : HTTP/2.0 500 Internal Server Error

It is the S3 server responding that it has some issue. What follows is rclone throwing ERRORs that the upload failed.

Have you tried ncw's suggestion? Try with:

--s3-upload-concurrency 1

You may be on to something, since it does seem to work occasionally.

Trying with --s3-upload-concurrency 1 results in the same upload failures.

I'll try following up with the provider.

I did; I was just in the process of testing and following up on his reply. :slight_smile:

They also have a WebDAV config, but that is also failing.

I can upload via Pydio / Web though.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.