Scaleway multipart upload failed

Hi, I am trying to upload large files to Scaleway.

It has failed several times on 3 files.
Two files are 6 GiB and the largest is 384 GiB.
I learned about the multipart upload threshold of 4.656 GiB and Scaleway's limit of 1,000 chunks!
I solved the problem for the two smaller files (6 GiB) with the flag --s3-copy-cutoff 0.
But this flag does not help for the 384 GiB file.

I read the forum and found a suggestion to use the --s3-chunk-size flag so that the number of chunks stays below 1,000 - in my case that would be --s3-chunk-size 384M, but that did not work either.
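
For reference, a quick check of that arithmetic (using the 384 GiB size from above and the 1,000-part limit):

# 384 GiB expressed in MiB, divided by the 1,000-part limit
echo $((384 * 1024 / 1000))   # prints 393, so the chunk size needs to be at least ~394M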

Rclone is just killed after a while.

Here is the technical stuff:

I use Ubuntu server:
Distributor ID: Ubuntu
Description:    Ubuntu 20.04.3 LTS (server version)
Release:        20.04
Codename:       focal

Virtual machine:
1 vCPU
2 GB RAM
20 GB boot drive
600 GB temp drive

What is your rclone version (output from rclone version)

rclone v1.56.0

Which cloud storage system are you using? (eg Google Drive)

Scaleway

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy Wasabi:rst.arhiva Scaleway:rst.arhiva --retries 10 --retries-sleep 15s --s3-chunk-size 384M -P

A log from the command with the -vv flag

Screen #1 (flag: --s3-copy-cutoff 0 )
-----------------------------------------

rclone copy Wasabi:rst.arhiva Scaleway:rst.arhiva --retries 10 --retries-sleep 15s --s3-copy-cutoff 0 -P
Enter configuration password:
password:
Transferred:       87.888Gi / 395.192 GiByte, 22%, 51.022 MiByte/s, ETA 1h42m47s
Checks:                97 / 97, 100%
Transferred:            2 / 3, 67%
Elapsed time:     30m41.0s
Transferring:
 * v152ufugvdlri395fgi5p0…pcvqgrblogqki28mmsqv3g: 19% /383.277Gi, 51.419Mi/s, Killed

Screen #2 (flag: --s3-chunk-size 430M )
------------------------------------------------------
rclone copy Wasabi:rst.arhiva Scaleway:rst.arhiva --retries 10 --retries-sleep 15s --s3-chunk-size 430M -P
Enter configuration password:
password:
Transferred:        6.997Gi / 383.277 GiByte, 2%, 30.363 MiByte/s, ETA 3h31m30s
Checks:                99 / 99, 100%
Transferred:            0 / 1, 0%
Elapsed time:      2m26.9s
Transferring:
 * v152ufugvdlri395fgi5p0…pcvqgrblogqki28mmsqv3g:  1% /383.277Gi, 29.081Mi/s, Killed

Any idea how to upload a 384 GiB file to Scaleway?

I use FreeNAS with its rclone-based "cloud sync tasks" and I uploaded all of these files to Wasabi with no problem. FreeNAS uses rclone version 1.50.2.

I tried installing that version, but had no success uploading to Scaleway with it either.

Any help / idea is welcome!

Thanks

hello and welcome to the forum,

there are a bunch of questions that you did not answer.
please post

  • the output of rclone version
  • the config file, redact id/secret/token/password
  • a full debug log, add -vv to the command and post the full output.

i think the issue could be lack of memory on the VM.
that should be easy to verify.

as a test, perhaps try --transfers=1 --checkers=1 --s3-upload-concurrency=1 --buffer-size=0 --use-mmap
and use the smallest possible chunk size.
also, before running rclone, add export GOGC=20
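
as a rough sketch of why that helps (assumed numbers, not measurements - rclone keeps each in-flight multipart chunk in memory, roughly transfers x upload concurrency x chunk size):

# very rough memory estimate in MiB, with the ~393Mi chunks scaleway forces for a 384Gi file
echo $((4 * 4 * 393))   # defaults (4 transfers x 4 concurrency): ~6288 MiB, far more than a 2 GB VM has
echo $((1 * 1 * 393))   # with the flags above: ~393 MiB, which should fit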

Thank you a lot, it works!

I ran the command as you suggested:

export GOGC=20
rclone copy Wasabi:rst.arhiva Scaleway:rst.arhiva --retries 10 --retries-sleep 15s --transfers=1 --checkers=1 --s3-upload-concurrency=1 --buffer-size 0 --use-mmap -vv | tee scaleway.out

Unfortunately I'm new to Linux, and tee scaleway.out did not produce a log file, so I only have the terminal text to copy and paste. It is 3,200 lines long; would the last 100 lines be sufficient, or do I need to post all of it?

Still, I think there is a bug in rclone.
Rclone shouldn't kill itself.
I suppose that happened because it does not check the available memory.
I had 1269 MB of available RAM before I executed rclone.

This "Killed" problem is happening only on two conditions:

  1. when a multipart upload is triggered
  2. only when Scaleway is the cloud destination

For example, if I use Wasabi this does not happen.
I use Mega, Wasabi and Jottacloud, and I sync / copy between these providers without any trouble.

While rclone was running, I noticed that it respects the Scaleway multipart limitation and creates 393 MiB chunks, which is great, but as I understand it, it does not take the available memory into account. Should I report this as a bug?

Thank you for the help.

It's not a bug if you don't have enough memory on your server.

You can tweak that by:

https://rclone.org/s3/#multipart-uploads

If you can recreate the issue, share a full debug log with -vv on it.

good we got it working.

that is not a bug.
if the OS runs out of memory, it is better to kill rclone than to fatally crash the entire OS.

to create a rclone log file, add --log-level=DEBUG --log-file=/path/to/folder/log.txt
post the entire log.
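
for example, reusing the command from earlier in the thread (the log path is just a placeholder):

rclone copy Wasabi:rst.arhiva Scaleway:rst.arhiva --log-level=DEBUG --log-file=/home/user/rclone.log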

that is not a bug, that is how rclone works.
if you want to request a new feature, start a new topic using the feature template.

based on the rclone source code, Wasabi supports 10,000 chunks, not just 1,000:
https://github.com/rclone/rclone/blob/269f90c1e4d908d6ce1d22a8c94a84c8bf60b14b/backend/s3/s3.go#L1569

I use FreeNAS-11.3-U5, which uses rclone 1.50.2.
Then I use an Ubuntu server at Hetzner and copy all the data from Wasabi to Mega and Jottacloud.
It works great; the only problem is Jottacloud's 255-byte filename limitation.
So, I am looking for another cloud storage provider.

I will now transfer another folder (bucket) from Mega to Scaleway; it contains 35,800 objects and 525.141 GiB of data. The 3 largest files are 103 GiB, 138 GiB and 154 GiB... so I'm willing to do more tests. Do you have any suggestions on what and how to test?

OK, but this happens only while transferring to Scaleway.

OK, thanks. I'll do it that way.

yes, the scaleway max chunk count is only 1,000.
AWS and wasabi allow 10,000, so it is possible to use a smaller chunk size and thus less memory.

as per the rclone source code

if opt.Provider == "Scaleway" && opt.MaxUploadParts > 1000 {
	opt.MaxUploadParts = 1000
}

i use the combination of:

  • aws deep glacier for cold storage at $1.01/TB.
  • wasabi for hot storage

I did some additional tests and here is what I found:

  1. FreeNAS uses rclone v1.50.2 - this version of rclone does not use chunks when uploading to Wasabi.
  • I downloaded and tested this version on my server and the log shows no chunks.
  • Version 1.56.0, which I use, does chunk uploads to Wasabi.
  2. rclone 1.56.0 uses over 9,000 chunks on a 100 GiB file, so it is using 11 MiB chunks, and it works OK.
  3. When I send a 100 GiB file to Scaleway it creates 106 MiB chunks, so that works too!
  4. By a simple calculation I can have at most about 300 GiB of combined files in flight (a rough check is sketched below this list), which means I need to know in advance how many BIG files I have to transfer. Or I need to use --transfers=1 --s3-upload-concurrency=1 to be safe. That is inconvenient if I have lots of small files in the same transfer.
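
A rough check of that 300 GiB figure, assuming the default --s3-upload-concurrency of 4 and the ~1269 MB of free RAM mentioned earlier:

# chunk size ≈ file size / 1000 once the part cap is hit, and each transfer keeps
# about 4 chunks in memory at once, so the combined file size that fits is roughly:
echo $((1269 * 1000 / 4))   # ~317,000 MB, i.e. about 300 GB of files in flight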

Well, for this Scaleway scenario it would be very good if rclone had the ability to lower --s3-upload-concurrency automatically based on the chunk size and the available memory!

Thank you for the help.

this might be of interest: scaleway (s3) multipart default settings limit streaming uploads to 5GB filesize · Issue #4159 · rclone/rclone · GitHub
or
contact scaleway and ask them to increase the max chunk count.

in the meantime, you should be able to create a simple script and/or make use of:

  • filters to give you more control - for example, --max-size. you might run multiple rclone commands, each with a different chunk size (see the sketch below)
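
for example, something along these lines (the 4G split point and the chunk size are just placeholders, not tuned for your data):

# pass 1: everything small enough that default chunking is fine
rclone copy Wasabi:rst.arhiva Scaleway:rst.arhiva --max-size 4G
# pass 2: only the big files, one at a time, with a chunk size large enough to stay under 1,000 parts
rclone copy Wasabi:rst.arhiva Scaleway:rst.arhiva --min-size 4G --transfers=1 --s3-upload-concurrency=1 --s3-chunk-size 394M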

also, this command will output the total size of all the files that need to be copied, without actually copying them.
parse out the total size and use that to calculate the max chunk size based on free memory (a parsing sketch follows the example output below).
for example, in this case 2.018Gi:

rclone copy . remote  --dry-run 
2021/10/09 21:01:48 NOTICE: file01.txt: Skipped copy as --dry-run is set (size 1Gi)
2021/10/09 21:01:48 NOTICE: file04.txt: Skipped copy as --dry-run is set (size 18.857Mi)
2021/10/09 21:01:48 NOTICE: file03.txt: Skipped copy as --dry-run is set (size 992)
2021/10/09 21:01:48 NOTICE: file02.txt: Skipped copy as --dry-run is set (size 1Gi)
2021/10/09 21:01:48 NOTICE: 
Transferred:   	    2.018Gi / 2.018 GiByte, 100%, 0 Byte/s, ETA -
Transferred:            4 / 4, 100%
Elapsed time:         1.3s
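
one way to pull that total out (just a sketch - the pattern assumes the summary format shown above, and the 2>&1 is needed because rclone writes its stats to stderr):

rclone copy . remote --dry-run 2>&1 | awk '/Transferred:.*Byte/ {print $2}'   # prints 2.018Gi for the example above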
