Rclone copy fails after migrating 98TB out of 123TB

What is the problem you are having with rclone?

I am trying to perform a copy from Google Cloud Storage to Linode Object Storage. The total data volume is 123TB. The transfer of the first 98TB completed fine, but after that I am seeing errors like the following:

2023/10/26 16:57:17 ERROR : <redacted-filename>: Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 2 with 5242880 bytes: QuotaExceeded: 
	status code: 403, request id: tx00000ba11001a7bfa6334-00653a9a6c-35b16b-default, host id:
2023/10/26 16:57:17 ERROR : <redacted-filename>: Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 4 with 5242880 bytes: QuotaExceeded: 
	status code: 403, request id: tx00000f57fd9e3e37875dc-00653a9a6c-3523eb-default, host id:
2023/10/26 16:57:18 ERROR : <redacted-filename>: Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 2 with 5242880 bytes: QuotaExceeded: 
	status code: 403, request id: tx00000bb98e921798255cc-00653a9a6d-352427-default, host id:
2023/10/26 16:57:18 ERROR : <redacted-filename>: Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 1 with 5242880 bytes: QuotaExceeded: 
	status code: 403, request id: tx000003b43de1f5c4913b1-00653a9a6d-3523eb-default, host id:
2023/10/26 16:57:18 ERROR : <redacted-filename>: Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 4 with 5242880 bytes: QuotaExceeded: 
	status code: 403, request id: tx00000581efafc53652379-00653a9a6d-35de64-default, host id:

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.2
- os/version: debian 11.7 (64 bit)
- os/kernel: 5.10.0-23-amd64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Migrating from Google Cloud Storage to Linode Object Storage

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy jadoo-eu-west-4:<bucket-name> jadoo-linode-eu-ams:<bucket-name> --transfers 32 --progress

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[jadoo-eu-west-4]
type = google cloud storage
project_number = xxxxxx
service_account_file = xxxx
object_acl = bucketOwnerFullControl
bucket_acl = private
location = europe-west4
storage_class = MULTI_REGIONAL

[jadoo-linode-eu-ams]
type = s3
provider = Other
access_key_id = xxxxx
secret_access_key = xxxxx
endpoint = nl-ams-1.linodeobjects.com
acl = public-read


A log from the command that you were trying to run with the -vv flag

2023/10/26 15:32:13 DEBUG : <redacted-filename>: open chunk writer: started multipart upload: 2~Pp4xS62PmGZG9SnHmd4LL7SiI9ybAtY
2023/10/26 15:32:13 DEBUG : <redacted-filename>: multi-thread copy: using backend concurrency of 4 instead of --multi-thread-streams 4
2023/10/26 15:32:13 DEBUG : <redacted-filename>: Starting multi-thread copy with 362 chunks of size 5Mi with 4 parallel streams
2023/10/26 15:32:13 DEBUG : <redacted-filename>: multi-thread copy: chunk 4/362 (15728640-20971520) size 5Mi starting
2023/10/26 15:32:13 DEBUG : <redacted-filename>: multi-thread copy: chunk 1/362 (0-5242880) size 5Mi starting
2023/10/26 15:32:13 DEBUG : <redacted-filename>: multi-thread copy: chunk 2/362 (5242880-10485760) size 5Mi starting
2023/10/26 15:32:13 DEBUG : <redacted-filename>: multi-thread copy: chunk 3/362 (10485760-15728640) size 5Mi starting
2023/10/26 15:32:13 DEBUG : <redacted-filename>: multipart upload "2~gQEFyZLyOjwpHFETsMRkX29tUCcwNFn" aborted
2023/10/26 15:32:13 ERROR : <redacted-filename>: Failed to copy: multi-thread copy: failed to write chunk: failed to upload chunk 1 with 5242880 bytes: QuotaExceeded:
        status code: 403, request id: tx00000a54d59995d37e7f8-00653a867d-3551a7-default, host id:
panic: runtime error: index out of range [0] with length 0

goroutine 2565 [running]:
github.com/rclone/rclone/lib/pool.(*RW).readPage(...)
	github.com/rclone/rclone/lib/pool/reader_writer.go:91
github.com/rclone/rclone/lib/pool.(*RW).Read(0xc0160dab90, {0xc01bde8000?, 0x41071d?, 0x7effda035108?})
	github.com/rclone/rclone/lib/pool/reader_writer.go:127 +0x20d
github.com/aws/aws-sdk-go/aws/request.(*offsetReader).Read(0x4537e9?, {0xc01bde8000?, 0x1c3aec0?, 0xc00170aa01?})
	github.com/aws/aws-sdk-go@v1.44.311/aws/request/offset_reader.go:47 +0x11d
io.(*LimitedReader).Read(0xc01e591f08, {0xc01bde8000?, 0xc00170aa80?, 0xc0016950c8?})
	io/io.go:480 +0x42
io.copyBuffer({0x2758cc0, 0xc00170aa80}, {0x274a6a0, 0xc01e591f08}, {0x0, 0x0, 0x0})
	io/io.go:430 +0x1a6
io.Copy(...)
	io/io.go:389
net/http.persistConnWriter.ReadFrom({0x274ca20?}, {0x274a6a0, 0xc01e591f08})
	net/http/transport.go:1801 +0x55
bufio.(*Writer).ReadFrom(0xc0031c6040, {0x274a6a0, 0xc01e591f08})
	bufio/bufio.go:797 +0x18b
io.copyBuffer({0x274b1e0, 0xc0031c6040}, {0x274a6a0, 0xc01e591f08}, {0x0, 0x0, 0x0})
	io/io.go:416 +0x147
io.Copy(...)
	io/io.go:389
net/http.(*transferWriter).doBodyCopy(0xc010ffb220, {0x274b1e0?, 0xc0031c6040?}, {0x274a6a0?, 0xc01e591f08?})
	net/http/transfer.go:412 +0x48
net/http.(*transferWriter).writeBody(0xc010ffb220, {0x274b1e0, 0xc0031c6040})
	net/http/transfer.go:370 +0x3c5
net/http.(*Request).write(0xc001bb0800, {0x274b1e0, 0xc0031c6040}, 0x0, 0xc00196d560, 0xc001b9ff60)
	net/http/request.go:738 +0xbad
net/http.(*persistConn).writeLoop(0xc001951440)
	net/http/transport.go:2424 +0x18f
created by net/http.(*Transport).dialConn in goroutine 1953
	net/http/transport.go:1777 +0x16f1

Might be nothing to do with rclone - but you exceeded some Google daily quotas.

Wait 24h and then start again - already copied stuff will be skipped and it should finish.
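i.e. just re-run the exact same command; by default rclone compares size and modtime, so files that have already been copied will be skipped:

rclone copy jadoo-eu-west-4:<bucket-name> jadoo-linode-eu-ams:<bucket-name> --transfers 32 --progress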

Looks like an issue with writing/uploading to Linode; you have exceeded some sort of quota.

No idea really,
but you might test without --transfers 32,
and if the max file size is less than 5GiB, you might test --multi-thread-streams=0 (see the example below).
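For example, something like this (same remotes as your command above, just dropping --transfers and disabling multi-thread copies; adjust to taste):

rclone copy jadoo-eu-west-4:<bucket-name> jadoo-linode-eu-ams:<bucket-name> --multi-thread-streams=0 --progress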

Thank you, I'll update this thread after 24 hours with further observations.

I did try without --transfers 32 earlier but got the same issue.

Not sure if there is any quota on Linode that is being exceeded, because the requests per second and total volume are within the limits. One strange thing is that rclone tries to do a multipart upload for 5MiB of data, and I can't understand the reason.

Not sure, but rclone is simply printing the error from Linode.
Maybe rclone is doing something to trigger it.

In any event, not sure why you would wait 24 hours, as this does not seem to be a gdrive issue?

That is the rclone default value, as per the rclone docs.
I was going to suggest that you maybe try to increase it, though based on your debug log, the upload is using just 362 chunks:

"Since the default chunk size is 5 MiB and there can be at most 10,000 chunks, this means that by default the maximum size of a file you can stream upload is 48 GiB. If you wish to stream upload larger files then you will need to increase chunk_size."

I tried increasing the chunk size to 10MiB; it is now reading 10MiB at a time, but the quota exceeded error still occurs for some reason:

2023/10/27 14:38:24 DEBUG : <file path redacted>: multi-thread copy: chunk 4/179 failed: multi-thread copy: failed to write chunk: failed to upload chunk 4 with 10485760 bytes: QuotaExceeded:
        status code: 403, request id: tx00000c06955b38550f63d-00653bcb60-35dea0-default, host id:
2023/10/27 14:38:24 DEBUG : <file path redacted>: multi-thread copy: chunk 5/179 (41943040-52428800) size 10Mi starting
2023/10/27 14:38:24 DEBUG : <file path redacted>: multi-thread copy: chunk 5/179 failed: multi-thread copy: failed to open source: Get "https://storage.googleapis.com/download/storage/v1/b/<file path redacted>?generation=1607445327402528&alt=media": context canceled

Can @ncw also share his thoughts, please?

That's an error from the destination saying you are out of quota. You'd want to contact the provider and get an explanation.

Thank you, I'll contact the provider and update this thread.

The quota was indeed being exceeded on the destination. Thank you everyone for pitching in!
