Many Rclone copy errors

What is the problem you are having with rclone?

I am seeing many 400 errors when I use rclone copy from S3-compatible storage to AWS Snowball.

What is your rclone version (output from rclone version)

rclone v1.53.1
- os/arch: windows/amd64
- go version: go1.15

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows Server 2016, 64 bit

Which cloud storage system are you using? (eg Google Drive)

Source: S3-compatible storage
Destination: AWS Snowball

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy s3compatible:/src/ awssnowball:/destination/ --transfers=10 --checkers=10 --fast-list --max-backlog=999999 --log-file="debug.txt" --log-level DEBUG

The rclone config contents with secrets removed.


[s3compatible]
type = s3
provider = Other
access_key_id = 
secret_access_key = 
region = other-v2-signature
endpoint = 
acl = private
bucket_acl = private
upload_concurrency = 10

[awssnowball]
type = s3
provider = Other
access_key_id = 
secret_access_key = 
endpoint = http://10.0.0.11:8080
acl = private

A log from the command with the -vv flag

    Failed to copy: s3 upload: 400 Bad Request: <?xml version="1.0" encoding="UTF-8" standalone="yes"?>
    <Error xsi:schemaLocation="http://s3.amazonaws.com/doc/2006-03-01/" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
        <Code>InvalidPart</Code>
        <Message>Authorization header cannot be null or empty</Message>
    </Error>

It looks like the Snowball doesn't support the way rclone does single-part uploads with signed URLs.

Try setting --s3-upload-cutoff 0, which will upload all files as multipart uploads.

  --s3-upload-cutoff SizeSuffix   Cutoff for switching to chunked upload (default 200M)
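Applied to the original command, this would look something like the following (a sketch: the remote names and paths are taken from the post above, and the other flags are kept unchanged):

```shell
# Force multipart uploads for every file by setting the cutoff to 0,
# so rclone avoids the signed-URL single-part upload path the Snowball rejects.
rclone copy s3compatible:/src/ awssnowball:/destination/ \
  --s3-upload-cutoff 0 \
  --transfers=10 --checkers=10 --fast-list --max-backlog=999999 \
  --log-file="debug.txt" --log-level DEBUG
```

The same setting can also be made permanent by adding `upload_cutoff = 0` to the `[awssnowball]` section of the rclone config.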

Thanks Nick. It worked.


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.