S3 ACL issues on Backblaze with multi-part upload

The ACL changes came after I did the test. I expect they'll fix it up quickly enough though!

Thanks guys, I will try to open a new ticket with them. Sorry, the link in my previous message was incorrect; it should have pointed here

So what I am seeing is that a server-side copy of a large file, which needs multi-part copy, works fine, but copies of a number of small files, which don't need multi-part copy, fail as described above...

I managed to replicate this.

2020/07/30 11:04:12 DEBUG : HTTP REQUEST (req 0xc00039d400)
2020/07/30 11:04:12 DEBUG : PUT /rclone-test-bucket/10M.copy8 HTTP/1.1
Host: s3.us-west-001.backblazeb2.com
User-Agent: rclone/v1.52.2-DEV
Content-Length: 0
Authorization: XXXX
X-Amz-Acl: private
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Copy-Source: rclone-test-bucket/10M
X-Amz-Date: 20200730T100412Z
X-Amz-Metadata-Directive: COPY
Accept-Encoding: gzip
2020/07/30 11:04:13 DEBUG : HTTP RESPONSE (req 0xc00039d400)
2020/07/30 11:04:13 DEBUG : HTTP/1.1 400 
Connection: close
Content-Length: 179
Cache-Control: max-age=0, no-cache, no-store
Content-Type: application/xml
Date: Thu, 30 Jul 2020 10:04:12 GMT
X-Amz-Id-2: addFu/Ws/bn5vgHeTboU=
X-Amz-Request-Id: 1576de9bf1ae1673

<?xml version="1.0" encoding="UTF-8" standalone="yes"?>
<Error>
    <Code>InvalidArgument</Code>
    <Message>Backblaze does not support the 'x-amz-acl' header</Message>
</Error>
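
For what it's worth, the same failure can be reproduced outside rclone with the AWS CLI pointed at the B2 S3 endpoint. This is just a sketch: the bucket, source object, and endpoint are the ones from the debug log above, and it assumes a B2 application key is configured as the AWS credentials.

~$ aws s3api copy-object \
    --endpoint-url https://s3.us-west-001.backblazeb2.com \
    --bucket rclone-test-bucket \
    --copy-source rclone-test-bucket/10M \
    --key 10M.copy \
    --acl private

That should come back with the same InvalidArgument error, since s3api copy-object with --acl sends the same x-amz-acl header on a single-part copy.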

Funnily enough, X-Amz-Acl: private is perfectly acceptable on the multipart copy API, so if you want a workaround you can use --s3-copy-cutoff 0. It isn't a great workaround, as that API costs at minimum two extra API requests per copy, but it is better than nothing.
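
For example (the remote name and paths here are placeholders, not taken from anyone's actual config):

~$ rclone copy --s3-copy-cutoff 0 b2s3:bucket/src b2s3:bucket/dst

Setting the cutoff to 0 forces every server-side copy through the multipart copy path, where B2 accepts the header.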

Great, thanks for the confirmation! I will await a reply from Backblaze, hopefully they can sort this out quickly!

Hi guys - I'm just starting out with Rclone and love it. Is there an update on the unsupported x-amz-acl error?

~$ rclone copy --s3-copy-cutoff 4G b2pgs3:backups/pg/daily/7 b2pgs3:backups/pg/weekly/33
2020/08/24 06:49:10 ERROR : 3510.dat.gz: Failed to copy: InvalidArgument: Backblaze does not support the 'x-amz-acl' header
        status code: 400, request id: c1a4a8a99ca50cc7, host id: adfJuk2uCbj1vuHdfbhc=

Specifying --s3-copy-cutoff 0 gives a divide by zero error:

~$ rclone copy  --s3-copy-cutoff 0 b2pgs3:backups/pg/daily/7 b2pgs3:backups/pg/weekly/33
panic: runtime error: integer divide by zero

If I try the copy with the b2 backend, I get a Copy source too big error:

~$ rclone copy b2pg:backups/pg/daily/7 b2pg:backups/pg/weekly/33
2020/08/24 06:52:19 ERROR : 3521.dat.gz: Failed to copy: Copy source too big: 5323717872 (400 bad_request)

I'm using the stable version:

~$ rclone --version
rclone v1.52.3
- os/arch: linux/amd64
- go version: go1.14.7

Thanks!

Hi @dkam!

So regarding the B2 S3 interface, server-side copies still hit that Backblaze does not support the 'x-amz-acl' header error. I logged a request with B2 a while ago already; this past Friday they handed the problem over to their engineers. I am awaiting further feedback from them...

Regarding the error you see using the normal B2 API, I assume the fix for that large file copy problem is not in the 1.52.3 release. You could try this beta, v1.52.2-232-gff843516-beta, maybe? That version is working on my side...
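
If your build has it, the fix also adds a --b2-copy-cutoff option controlling the size above which rclone switches to a multipart server-side copy, so your >5GB files should then go through. A sketch using the remote names from your post (the 4G value is just an example threshold):

~$ rclone copy --b2-copy-cutoff 4G b2pg:backups/pg/daily/7 b2pg:backups/pg/weekly/33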

Thanks @ctonsing! I used v1.52.3-337-gd6996e33-beta and it copied perfectly.

Great! :+1: :+1:

Just to report back: Backblaze said a while ago they'd look into the issue of not supporting the 'x-amz-acl' header on non-multi-part server-side copies. I haven't heard from them again, but according to my testing this seems to be working now. If anybody else could confirm, that would be great!
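
If anyone wants to verify on their own setup, a quick way is a small single-file server-side copy with header dumping turned on, checking that the PUT with an X-Amz-Copy-Source header now succeeds (the remote name and paths are placeholders):

~$ rclone copy -vv --dump headers b2s3:bucket/10M b2s3:bucket/10M.copytest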

I can confirm it is working for me :slight_smile:

Thank you for confirming @ncw! :+1:t2: :+1:t2:
