We are using rclone to store backups in Tencent COS using the s3 backend. This used to work with rclone 1.46, but with rclone 1.51 we are getting error messages like this:
2020/05/18 15:07:43 INFO : Starting bandwidth limiter at 25MBytes/s
2020/05/18 15:07:43 ERROR : Attempt 1/3 failed with 1 errors and: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, HeadObjectInput.Key.
The command worked fine with rclone 1.46, so I suspect this is a bug in rclone. It seems that the "." in the path is stripped by rclone and therefore results in "" instead of the root of the bucket.
I tried using something else like "foo", but that results in rclone constructing a URL for which the certificates Tencent uses don't verify. I'm not sure that behaviour is correct, but I have to admit I don't know enough about S3 to judge.
Is there anything we can do differently to make it work, or should I file a bug report on GitHub?
What is your rclone version (output from rclone version)
rclone v1.51.0
os/arch: linux/amd64
go version: go1.13.7
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Ubuntu 16.04
Which cloud storage system are you using? (eg Google Drive)
Tencent COS
The command you were trying to run (eg rclone copy /tmp remote:tmp)
To be honest: I don't know. I didn't do the initial work of configuring rclone for Tencent, but it used to work with 1.46, so my assumption is that this was the only way to get it working with 1.46.
What would be the right way?
It seems that Tencent is using the virtual-hosted addressing style, which is probably the reason why the endpoint in our configuration includes the bucket name.
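For illustration, the two S3 addressing styles look roughly like this (bucket, appid, and region are placeholders, not our actual values, and I'm inferring the Tencent host pattern from its documentation rather than from our config):

```
# virtual-hosted style (bucket in the hostname), as Tencent appears to use:
https://<bucket>-<appid>.cos.<region>.myqcloud.com/path/to/object

# path style (bucket in the URL path):
https://cos.<region>.myqcloud.com/<bucket>/path/to/object
```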
Is there any reference for configuring the s3 backend for Tencent that I could RTFM? I didn't find any.
The problem is that it doesn't work without the dot either; I get the same error message.
I just managed to get it working by changing the endpoint to only the COS host, without the bucket name in it (the forum says I'm not allowed to post URLs, which is why I have to describe it), and passing the bucket name instead of the dot. But in that configuration, rclone ls lists all buckets that this access key has access to, which sounds like a dangerous configuration to me.
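For reference, the working configuration looks roughly like this (the section name, keys, and endpoint are placeholders; the actual COS host is the one I can't post here):

```
[cos]
type = s3
provider = Other
access_key_id = <access-key>
secret_access_key = <secret-key>
endpoint = <cos-host>
```

With that, `rclone ls cos:my-bucket` works, but `rclone lsd cos:` shows every bucket the key can reach.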
Yeah, investigating an improvement of the security policies is certainly something I should pursue. But for now I just want to get an rclone configuration working where the remote is limited to a single bucket.
In 1.46 that worked by including the bucket name as part of the endpoint name.
I would say this is the correct way of configuring rclone and the fact that it worked before was relying on undefined behaviour!
Did you try setting force_path_style = true in the config?
Rclone always needs the bucket...
Setting an alias, as suggested by @asdffdsa, might be an idea if you don't want to see all the buckets. Don't use that syntax though - use the one documented here: https://rclone.org/alias/
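A minimal alias remote, following the docs linked above (the remote and bucket names here are just examples), would be:

```
[backup]
type = alias
remote = cos:my-bucket
```

Then `rclone ls backup:` only ever operates inside my-bucket, even though the underlying key can see more.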
I would say this is the correct way of configuring rclone and the fact that it worked before was relying on undefined behaviour!
I somewhat expected you to say that, as it was my initial suspicion, too.
Did you try setting force_path_style = true in the config?
Yes, but that didn't make a difference. I'll experiment further and get back to you on this.
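For the record, what I tried (assuming the option meant is the s3 backend's force_path_style) was adding this line to the existing s3 remote section of rclone.conf, leaving everything else unchanged:

```
force_path_style = true
```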
Rclone always needs the bucket...
I know. And it got the bucket as part of the endpoint URL in the previous configuration. But I guess @asdffdsa is right that Tencent is not really S3-compatible?