Uploading .zip files to MinIO S3 backend fails

What is the problem you are having with rclone?

Running rclone copy file.zip MinIO:bucket fails with a 403. If I rename the file to file.zip.gzip and run rclone copy file.zip.gzip MinIO:bucket, it works fine. The native MinIO client (mc cp file.zip remote/bucket) also works fine with the same access and secret keys. Uploading anything but .zip files works.

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.2

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.13.0-1027-oracle (aarch64)
  • os/type: linux
  • os/arch: arm64 (ARMv8 compatible)
  • go/version: go1.21.6
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

MinIO S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy file.zip remote:bucket

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[MinIO]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
endpoint = XXX
acl = private

A log from the command that you were trying to run with the -vv flag

2024/03/01 02:00:04 DEBUG : rclone: Version "v1.65.2" starting with parameters ["rclone" "copy" "file.zip" "Minio:bucket/mysql-db-backups" "-vv"]
2024/03/01 02:00:04 DEBUG : Creating backend with remote "file.zip"
2024/03/01 02:00:04 DEBUG : Using config file from "/home/ubuntu/.config/rclone/rclone.conf"
2024/03/01 02:00:04 DEBUG : fs cache: adding new entry for parent of "file.zip", "/home/ubuntu/mysql-backups/2024-02-29"
2024/03/01 02:00:04 DEBUG : Creating backend with remote "Minio:bucket"
2024/03/01 02:00:04 DEBUG : Resolving service "s3" region "us-east-1"
2024/03/01 02:00:05 ERROR : Attempt 1/3 failed with 1 errors and: Forbidden: Forbidden
	status code: 403, request id: 17B881769DCD7D4E, host id: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8
2024/03/01 02:00:05 ERROR : Attempt 2/3 failed with 1 errors and: Forbidden: Forbidden
	status code: 403, request id: 17B881769F553E1F, host id: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8
2024/03/01 02:00:05 ERROR : Attempt 3/3 failed with 1 errors and: Forbidden: Forbidden
	status code: 403, request id: 17B88176A0DFE521, host id: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8
2024/03/01 02:00:05 INFO  :
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         0.2s

2024/03/01 02:00:05 DEBUG : 5 go routines active
2024/03/01 02:00:05 Failed to copy: Forbidden: Forbidden
	status code: 403, request id: 17B88176A0DFE521, host id: dd9025bab4ad464b049177c95eb6ebf374d3b3fd1af9251148b658df7ac2e3e8

hi,
not sure what is going on, but for a deeper look, try
--dump=headers --retries=1
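For example, combined with the command from the first post (a sketch; the remote and bucket names are taken from there):

```shell
# Re-run the failing copy with a single attempt and the HTTP headers dumped,
# so the exact request that triggers the 403 shows up in the debug log.
rclone copy file.zip MinIO:bucket --dump=headers --retries=1 -vv
```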

2024/03/01 02:21:45 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/03/01 02:21:45 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/03/01 02:21:45 DEBUG : HTTP REQUEST (req 0x4000990400)
2024/03/01 02:21:45 DEBUG : HEAD /mysql-db-backups/file.zip HTTP/1.1
Host: XXXXXXXX
User-Agent: rclone/v1.65.2
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240301T022145Z

2024/03/01 02:21:45 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/03/01 02:21:45 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2024/03/01 02:21:45 DEBUG : HTTP RESPONSE (req 0x4000990400)
2024/03/01 02:21:45 DEBUG : HTTP/2.0 403 Forbidden

Isn't copy expecting a path, not a file? Try using the copyto command, or . as the path with an appropriate filter for the filename.

I use copy and move with filenames and they work fine, but I usually include the path in front of the file, like rclone copy path/to/file.dat. So maybe it needs to see a path to work at all? I've not tried it with just a filename.

No. The destination is a path, not a file; that's probably the confusion.

If you want to copy a file and change the name on the destination you'd use copyto.

rclone copy hosts remote: would make a file named hosts on the remote
rclone copy hosts remote:hosts would make a directory on the remote called hosts and put a file called hosts in that directory

rclone copyto hosts remote:nothosts would make a file in the root there called nothosts.

I tried all kinds of combinations: full path on the local, full path on the remote, full path on both, just the target folder/bucket on the remote, etc. None of it works.

Seems like rclone tries to do a HEAD request on the zip file on the remote, and that returns the 403.

if you create a one-byte text file with the extension .zip, does that upload?

that should apply to any file that is uploaded, correct?

and there are flags that prevent rclone from doing that initial HEAD and other HEAD operations.
though in your case, not sure that will make a difference.
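A sketch of those flags (they are s3-backend options; the names are the ones used in the command later in the thread):

```shell
# --s3-no-check-bucket: don't check that the bucket exists (or try to create it)
# --s3-no-head:         don't HEAD the uploaded object to verify the upload
# --s3-no-head-object:  don't HEAD objects before downloading them
rclone copy file.zip MinIO:bucket \
    --s3-no-check-bucket --s3-no-head --s3-no-head-object
```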

Yes, I created a 1-byte file.zip and it still doesn't work. If I rename that file to file.zip.gzip, it uploads fine.

maybe some odd interaction with Small File Archives in MinIO, but that shouldn't affect things unless the header is set.

To enable the extension, the header x-minio-extract must have the value true.

I was thinking the same thing. Seems like rclone is doing something that causes this. Should I open a bug report?

"generally it is recommended to just leave files inside ZIP files uncompressed"
might test that.

i see that you posted on reddit; have you posted on the minio forum?

not yet.
for a deeper look, i would try uploading that one-byte text file again.

rclone copy onebyte_text.zip remote:bucket --retries=1 --ignore-times --no-check-dest --no-traverse --s3-no-check-bucket --s3-no-head --s3-no-head-object --dump=headers,bodies,requests,auth

This worked:

rclone copy file.zip remote:bucket --s3-no-check-bucket --s3-no-head --s3-no-head-object

Seems like the HEAD request on the zip is what gives the 403.

good.
normally, i always suggest using default values and as few flags as possible.
in this case, we did the opposite and found a workaround.
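For what it's worth, the same options can be persisted in the remote's config so they don't have to be passed on every invocation; a sketch based on the redacted config above (the three no_* keys are the config-file spellings of the --s3-* flags):

```ini
[MinIO]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
endpoint = XXX
acl = private
# config-file equivalents of --s3-no-check-bucket, --s3-no-head,
# --s3-no-head-object (the workaround found above)
no_check_bucket = true
no_head = true
no_head_object = true
```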

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.