Rcat to S3 storage

What is the problem you are having with rclone?

Hello!
I'm trying to set up on-the-fly backups to S3 storage, but I get an error.

Run the command 'rclone version' and share the full output of the command.

rclone v1.59.2
- os/version: arch "rolling" (64 bit)
- os/kernel: 5.19.9-arch1-1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.1
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

My own S3, based on MinIO, for testing.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

cat /tmp/test.file | rclone rcat aws:/test.file 
<5>NOTICE: S3 root: Streaming uploads using chunk size 5Mi will have maximum file size of 48.828Gi
<3>ERROR : test.file: Post request rcat error: multipart upload failed to initialise: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateMultipartUploadInput.Key.
Failed to rcat with 2 errors: last error was: multipart upload failed to initialise: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateMultipartUploadInput.Key.

and

cat /tmp/test.file | rclone rcat aws:/test.file --size 1048576
<3>ERROR : test.file: Post request put error: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, PutObjectInput.Key.
Failed to rcat with 2 errors: last error was: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, PutObjectInput.Key.

The rclone config contents with secrets removed.

[aws]
type = s3
provider = Other
access_key_id = ***
secret_access_key = ***
endpoint = http:/***:9000/bucket1

A log from the command with the -vv flag

cat /tmp/test.file | rclone rcat aws:/test.file -vv
<7>DEBUG : rclone: Version "v1.59.2" starting with parameters ["rclone" "rcat" "aws:/test.file" "-vv"]
<7>DEBUG : rclone: systemd logging support activated
<7>DEBUG : Creating backend with remote "aws:/"
<7>DEBUG : Using config file from "/home/***/.config/rclone/rclone.conf"
<7>DEBUG : fs cache: renaming cache item "aws:/" to be canonical "aws:"
<6>INFO  : S3 root: Bucket "test.file" created with ACL "private"
<5>NOTICE: S3 root: Streaming uploads using chunk size 5Mi will have maximum file size of 48.828Gi
<3>ERROR : test.file: Post request rcat error: multipart upload failed to initialise: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateMultipartUploadInput.Key.
<7>DEBUG : 6 go routines active
Failed to rcat with 2 errors: last error was: multipart upload failed to initialise: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, CreateMultipartUploadInput.Key.

and

cat /tmp/test.file | rclone rcat aws:/test.file --size 1048576 -vv
<7>DEBUG : rclone: Version "v1.59.2" starting with parameters ["rclone" "rcat" "aws:/test.file" "--size" "1048576" "-vv"]
<7>DEBUG : rclone: systemd logging support activated
<7>DEBUG : Creating backend with remote "aws:/"
<7>DEBUG : Using config file from "/home/***/.config/rclone/rclone.conf"
<7>DEBUG : fs cache: renaming cache item "aws:/" to be canonical "aws:"
<6>INFO  : S3 root: Bucket "test.file" created with ACL "private"
<3>ERROR : test.file: Post request put error: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, PutObjectInput.Key.
<7>DEBUG : 6 go routines active
Failed to rcat with 2 errors: last error was: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, PutObjectInput.Key.

Hello and welcome to the forum,

You might try --dump=headers for more detail.

Maybe a MinIO issue?

It seems to have worked for me:

cat file.ext | rclone rcat wasabi01:zork/file.ext -vv
DEBUG : rclone: Version "v1.59.2" starting with parameters ["rclone" "rcat" "wasabi01:zork/file.ext" "-vv"]
DEBUG : Creating backend with remote "wasabi01:zork/"
DEBUG : Using config file from "/home/user01/.config/rclone/rclone.conf"
DEBUG : fs cache: renaming cache item "wasabi01:zork/" to be canonical "wasabi01:zork"
INFO  : S3 bucket zork: Bucket "zork" created with ACL "private"
NOTICE: S3 bucket zork: Streaming uploads using chunk size 256Mi will have maximum file size of 2.441Ti
DEBUG : file.ext: multipart upload starting chunk 1 size 1.382Mi offset 0/off
DEBUG : file.ext: Multipart upload Etag: 70b18c6e82bd4eccb12c045dc1f342a7-1 OK
DEBUG : file.ext: Dst hash empty - aborting Src hash check
DEBUG : file.ext: Size of src and dst objects identical

Does rclone copy work?

Not sure, but maybe

endpoint = http:/***:9000/bucket1

should be

endpoint = http:/***:9000

and the command should be

cat /tmp/test.file | rclone rcat aws:bucket1/test.file
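In other words, with rclone's S3 backend the endpoint should point at the server only, and the bucket name belongs in the remote path. As a sketch, with a hypothetical host `minio-host` standing in for the redacted address, the config would look like:

```
[aws]
type = s3
provider = Other
access_key_id = ***
secret_access_key = ***
endpoint = http://minio-host:9000
```

and the upload would then name the bucket in the path:

```
cat /tmp/test.file | rclone rcat aws:bucket1/test.file
```

With the bucket in the endpoint instead, rclone parsed `aws:/test.file` as bucket "test.file" with an empty object key, which explains the "minimum field size of 1, CreateMultipartUploadInput.Key" error.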


Yes, copying works with the original config.

I fixed the endpoint as you pointed out, and after that it worked. Shame on me.

Thank you so much!

No, that was a very confusing error, given that the original config worked with rclone copy but not with rclone rcat.

It could still be a MinIO configuration error. If you want, you can set up your storage config like mine and check it out.

Sorry, I do not use MinIO.

And I would not use an endpoint like your original one, so there is nothing for me to test.

It could be an issue with https://rclone.org/s3/#s3-force-path-style
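For reference, MinIO normally expects path-style addressing, which rclone's S3 backend controls with the `force_path_style` option (true by default). A sketch of what a MinIO-flavoured remote might look like, with a hypothetical host and remote name as placeholders:

```
[minio-test]
type = s3
provider = Minio
access_key_id = ***
secret_access_key = ***
endpoint = http://minio-host:9000
force_path_style = true
```

Setting `provider = Minio` also lets rclone apply MinIO-specific quirks automatically, rather than treating it as a generic "Other" S3 server.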

Sorry, I meant that you can set up rclone like I did, with your own storage.

| and i would not use a endpoint like your original, so nothing for me to test.

That's ok for me. You helped me solve the problem, I'm grateful to you.


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.