Rclone failing with validation errors on Tencent COS

What is the problem you are having with rclone?

We are using rclone to store backups in Tencent COS using the s3 backend. This used to work with rclone 1.46, but with rclone 1.51 we are getting error messages like this:

2020/05/18 15:07:43 INFO : Starting bandwidth limiter at 25MBytes/s
2020/05/18 15:07:43 ERROR : Attempt 1/3 failed with 1 errors and: InvalidParameter: 1 validation error(s) found.
- minimum field size of 1, HeadObjectInput.Key.

The command worked fine with rclone 1.46, so I suspect a bug in rclone. It seems that the "." in the path is stripped by rclone and therefore results in "" instead of the root of the bucket.

I tried using something else like "foo", but that results in rclone constructing a URL for which the certificates Tencent is using don't verify. I'm not sure that behaviour is correct, but I have to admit I don't know enough about S3 to judge.

Is there anything we can do differently to make it work, or should I file a bug report over on GitHub?

What is your rclone version (output from rclone version)

rclone v1.51.0

  • os/arch: linux/amd64
  • go version: go1.13.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 16.04

Which cloud storage system are you using? (eg Google Drive)

Tencent COS

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync --bwlimit 25M --config=/etc/rclone/rclone.conf /var/lib/cassandra/backup/cassandra-snapshot-backup-2020-05-18-15-29.tar.gz.enc tencent: -vv

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

2020/05/18 15:34:22 DEBUG : rclone: Version "v1.51.0" starting with parameters ["rclone" "sync" "--bwlimit" "25M" "--config=/etc/rclone/rclone.conf" "/var/lib/cassandra/backup/cassandra-snapshot-backup-2020-05-18-15-34.tar.gz.enc" "tencent:." "-vv"]

2020/05/18 15:34:22 DEBUG : Using config file from "/etc/rclone/rclone.conf"

2020/05/18 15:34:22 INFO : Starting bandwidth limiter at 25MBytes/s

2020/05/18 15:34:22 ERROR : Attempt 1/3 failed with 1 errors and: InvalidParameter: 1 validation error(s) found.

  • minimum field size of 1, HeadObjectInput.Key.

2020/05/18 15:34:22 ERROR : Attempt 2/3 failed with 1 errors and: InvalidParameter: 1 validation error(s) found.

  • minimum field size of 1, HeadObjectInput.Key.

2020/05/18 15:34:22 ERROR : Attempt 3/3 failed with 1 errors and: InvalidParameter: 1 validation error(s) found.

  • minimum field size of 1, HeadObjectInput.Key.

2020/05/18 15:34:22 Failed to sync: InvalidParameter: 1 validation error(s) found.

  • minimum field size of 1, HeadObjectInput.Key.

hello and welcome to the forum,

you could create a new remote and test that.

A new remote, ok, but what should I change in that remote?

is there a reason you have that .?

i did a test, with and without the ., works both ways.

To be honest: I don't know. I didn't do the initial work of configuring rclone for Tencent, but it used to work with 1.46, so my assumption is that this was the only way to get it working with 1.46.

What would be the right way?

It seems that Tencent is using this virtual addressing style, which is probably the reason why the endpoint in our configuration includes the bucket name.
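For reference, the two S3 addressing styles differ only in where the bucket name goes. The hostnames below are generic placeholders, not real Tencent endpoints:

```
# Virtual-hosted style: the bucket name is part of the hostname
https://my-bucket.s3.example.com/path/to/object

# Path style: the bucket name is the first path segment
https://s3.example.com/my-bucket/path/to/object
```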

Is there any reference for configuring the s3 backend for Tencent that I could RTFM? I didn't find any.

Would you mind sharing what your remote config looks like? Without the access_key_id and secret_access_key, of course.

if you think that the dot could be a problem, then i would test without that dot.
rclone ls tencent:
rclone ls tencent:.

amazon s3
"aws01": {
    "access_key_id": "",
    "provider": "AWS",
    "region": "us-east-1",
    "secret_access_key": "",
    "type": "s3"
}

wasabi, s3 rclone
"wasabieast2": {
    "access_key_id": "",
    "endpoint": "s3.us-east-2.wasabisys.com",
    "env_auth": "false",
    "provider": "Wasabi",
    "secret_access_key": "",
    "type": "s3"
}

The problem is that it doesn't work without the dot either, same error message.

I just got it working by changing the endpoint to only the COS host, without the bucket name in it (the forum says I'm disallowed to add URLs, which is why I have to describe it), and passing the bucket name instead of the dot. But in that configuration rclone ls lists all buckets that this access key has access to, which to me sounds like a dangerous configuration.
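For anyone else hitting this, here is a minimal sketch of what such a config might look like in rclone's INI style. All names, keys, and the endpoint host are placeholders, not the real values:

```ini
[tencent]
type = s3
provider = Other
env_auth = false
access_key_id = PLACEHOLDER_KEY_ID
secret_access_key = PLACEHOLDER_SECRET
# COS host only - no bucket name in the endpoint
endpoint = cos.example-region.myqcloud.com
```

The bucket then goes on the command line, e.g. rclone ls tencent:my-bucket, instead of tencent: or tencent:.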

did you try both
rclone ls tencent:
rclone ls tencent:.

might be that tencent is not really s3 compatible.
in my testing,
rclone ls remote:
and
rclone ls remote:.
resulted in the same exact output.

there must be some way to generate a key and id that is locked to a certain bucket?

Yeah, investigating an improvement of the security policies is certainly something I should pursue. But for now I just want to get an rclone configuration working where the remote is limited to a bucket.

In 1.46 that worked by including the bucket name as part of the endpoint name.
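Presumably the old 1.46-era config looked something like the sketch below (placeholder names again), with the bucket baked into the endpoint host - the style that 1.51 rejects:

```ini
[tencent]
type = s3
provider = Other
access_key_id = PLACEHOLDER_KEY_ID
secret_access_key = PLACEHOLDER_SECRET
# bucket name embedded in the endpoint host - worked in 1.46, fails in 1.51
endpoint = my-bucket.cos.example-region.myqcloud.com
```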

And this is apparently broken with 1.51.

you can create an alias

"remote": {
    "remote": "wasabieast2:aliasremote",
    "type": "alias"
}
rclone.exe ls "wasabieast2:aliasremote"
        1 test.txt
rclone ls remote:
        1 test.txt
rclone.exe ls remote:.
        1 test.txt

Hmm. Yeah, that could be a viable workaround, but I guess I should create a bug report anyway. At least I know a bit more about the problem.

do not worry, here comes @ncw

I would say this is the correct way of configuring rclone and the fact that it worked before was relying on undefined behaviour!

Did you try setting path_style = true in the config?

Rclone always needs the bucket...

Setting an alias might be an idea if you don't want the buckets, as suggested by @asdffdsa, though don't use that syntax - use the one documented here: https://rclone.org/alias/
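Following the alias docs, an INI-style alias pinned to a single bucket would look roughly like this (remote and bucket names are placeholders):

```ini
[tencent-backup]
type = alias
remote = tencent:my-bucket
```

Then rclone ls tencent-backup: only sees that one bucket.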

what do you mean?

I would say this is the correct way of configuring rclone and the fact that it worked before was relying on undefined behaviour!

I somewhat expected you to say that, as it was my initial suspicion, too.

Did you try setting path_style = true in the config?

Yes, but that didn't make a difference. I'll do some further tries and get back to you on this.

Rclone always needs the bucket...

I know. And it got the bucket as part of the endpoint URL in the previous configuration. But I guess @asdffdsa is right then that Tencent is not really S3-compatible?

It looks pretty S3 compatible to me. I think S3 would fail in an identical way if you tried configuring it like that in rclone.

You posted a non-native rclone config file... which looks like a JSON blob, not the INI style rclone uses.

oh,
i created the alias via rclone config
and that was the output of rclone dump

Ah! I see!

You want rclone config show or rclone config show remote (though annoyingly you can't put a : on the end of the remote - I'll fix that now...)