Rclone with implicit S3 bucket name (e.g. on Wasabi+CloudFlare)

What is the problem you are having with rclone?

I have a Wasabi bucket configured with CloudFlare peering, as per their documentation, by creating a DNS CNAME some.domain.com to s3.wasabisys.com/some.domain.com. Given a configuration file like this

[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = <ID>
secret_access_key = <SECRET>
endpoint = some.domain.com
acl = private

and an rclone invocation like

rclone sync --verbose wasabi:./folder /some/local/directory

I get different behaviour with different rclone versions:

  • With version 1.36 the sync operation succeeds
  • With 1.49.5 and later (I have not tested versions between 1.36 and 1.49.5) I get
2020/12/21 04:41:42 ERROR : : error reading source directory: directory not found
2020/12/21 04:41:42 INFO  : Local file system at /some/local/directory: Waiting for checks to finish 
2020/12/21 04:41:42 INFO  : Local file system at /some/local/directory: Waiting for transfers to finish 
2020/12/21 04:41:42 ERROR : Local file system at /some/local/directory: not deleting files as there were IO errors 
2020/12/21 04:41:42 ERROR : Local file system at /some/local/directory: not deleting directories as there were IO errors 
2020/12/21 04:41:42 ERROR : Attempt 1/3 failed with 2 errors and: directory not found
2020/12/21 04:41:42 ERROR : : error reading source directory: directory not found
2020/12/21 04:41:42 INFO  : Local file system at /some/local/directory: Waiting for checks to finish 
2020/12/21 04:41:42 INFO  : Local file system at /some/local/directory: Waiting for transfers to finish 
2020/12/21 04:41:42 ERROR : Local file system at /some/local/directory: not deleting files as there were IO errors 
2020/12/21 04:41:42 ERROR : Local file system at /some/local/directory: not deleting directories as there were IO errors 
2020/12/21 04:41:42 ERROR : Attempt 2/3 failed with 2 errors and: directory not found
2020/12/21 04:41:42 ERROR : : error reading source directory: directory not found
2020/12/21 04:41:42 INFO  : Local file system at /some/local/directory: Waiting for checks to finish 
2020/12/21 04:41:42 INFO  : Local file system at /some/local/directory: Waiting for transfers to finish 
2020/12/21 04:41:42 ERROR : Local file system at /some/local/directory: not deleting files as there were IO errors 
2020/12/21 04:41:42 ERROR : Local file system at /some/local/directory: not deleting directories as there were IO errors 
2020/12/21 04:41:42 ERROR : Attempt 3/3 failed with 2 errors and: directory not found
2020/12/21 04:41:42 Failed to sync with 2 errors: last error was: directory not found

If I change the endpoint in the configuration to s3.wasabisys.com and change the invocation to

rclone sync --verbose wasabi:some.domain.com/folder /some/local/directory

then the sync operation succeeds on newer versions. Furthermore if I run

rclone ls wasabi:./

with the original configuration I get a full listing of the files, e.g.

 41189382 folder/022280.sst
  1198317 folder/022423.sst
125297321 folder/022424.sst
  1549394 folder/022425.sst
  3758303 folder/022426.sst
   873091 folder/022427.sst

Furthermore, with

rclone lsd wasabi:./

I get

0 2020-12-21 04:54:18        -1 folder

and with

rclone lsd wasabi:./folder

I get

2020/12/21 04:54:27 Failed to lsd with 2 errors: last error was: directory not found

I suspect that this behaviour is due to the bucket name being implicit, but I do not know why it broke between rclone 1.36 and 1.49.5, nor do I know how to make this work with recent versions of rclone.

What is your rclone version (output from rclone version)

rclone v1.53.3
- os/arch: linux/amd64
- go version: go1.15.5
rclone v1.49.5
- os/arch: linux/amd64
- go version: go1.12.10
rclone v1.36

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Linux, 64bit

Which cloud storage system are you using? (eg Google Drive)

Wasabi (S3 compatible)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync --verbose wasabi:./folder /some/local/directory

The rclone config contents with secrets removed.

[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = <ID>
secret_access_key = <SECRET>
endpoint = some.domain.com
acl = private

A log from the command with the -vv flag

2020/12/21 05:02:37 DEBUG : rclone: Version "v1.53.3" starting with parameters ["rclone" "sync" "-vv" "wasabi:./folder" "/some/local/directory"]
2020/12/21 05:02:37 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2020/12/21 05:02:37 DEBUG : Creating backend with remote "wasabi:./folder"
2020/12/21 05:02:37 DEBUG : Creating backend with remote "/some/local/directory"
2020/12/21 05:02:37 ERROR : : error reading source directory: directory not found
2020/12/21 05:02:37 DEBUG : Local file system at /some/local/directory: Waiting for checks to finish 
2020/12/21 05:02:37 DEBUG : Local file system at /some/local/directory: Waiting for transfers to finish 
2020/12/21 05:02:37 ERROR : Local file system at /some/local/directory: not deleting files as there were IO errors 
2020/12/21 05:02:37 ERROR : Local file system at /some/local/directory: not deleting directories as there were IO errors 
2020/12/21 05:02:37 INFO  : There was nothing to transfer
2020/12/21 05:02:37 ERROR : Attempt 1/3 failed with 1 errors and: directory not found
2020/12/21 05:02:37 ERROR : : error reading source directory: directory not found
2020/12/21 05:02:37 DEBUG : Local file system at /some/local/directory: Waiting for checks to finish 
2020/12/21 05:02:37 DEBUG : Local file system at /some/local/directory: Waiting for transfers to finish 
2020/12/21 05:02:37 ERROR : Local file system at /some/local/directory: not deleting files as there were IO errors 
2020/12/21 05:02:37 ERROR : Local file system at /some/local/directory: not deleting directories as there were IO errors 
2020/12/21 05:02:37 INFO  : There was nothing to transfer
2020/12/21 05:02:37 ERROR : Attempt 2/3 failed with 1 errors and: directory not found
2020/12/21 05:02:37 ERROR : : error reading source directory: directory not found
2020/12/21 05:02:37 DEBUG : Local file system at /some/local/directory: Waiting for checks to finish 
2020/12/21 05:02:37 DEBUG : Local file system at /some/local/directory: Waiting for transfers to finish 
2020/12/21 05:02:37 ERROR : Local file system at /some/local/directory: not deleting files as there were IO errors 
2020/12/21 05:02:37 ERROR : Local file system at /some/local/directory: not deleting directories as there were IO errors 
2020/12/21 05:02:37 INFO  : There was nothing to transfer
2020/12/21 05:02:37 ERROR : Attempt 3/3 failed with 1 errors and: directory not found
2020/12/21 05:02:37 INFO  : 
Transferred:             0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         0.3s

2020/12/21 05:02:37 DEBUG : 4 go routines active
2020/12/21 05:02:37 Failed to sync: directory not found

hello and welcome to the forum,

did you try wasabi:folder?

Yes, I tried wasabi:folder and also wasabi:/folder and wasabi:""/folder. Same directory not found error on all versions. I think this may be an issue with how the remote is provided. Usually it would be defined as <remote>:<bucket>/<folder>, but since the bucket name is implicit on the Wasabi side from the CNAME it ends up needing to be <remote>:./<folder> (on v1.36 at least).

Admittedly this may be an issue with how Wasabi does this specifically, but it would be nice to be able to use the CNAME for performance reasons.

I think setting --s3-force-path-style to false might fix the problem. You'll still need to mention the bucket name I think.

Use -vv --dump headers to see what requests rclone is making.

I made some progress comparing the responses using --dump headers. As mentioned in the initial post, the full listing of files is

 41189382 folder/022280.sst
  1198317 folder/022423.sst
125297321 folder/022424.sst
  1549394 folder/022425.sst
  3758303 folder/022426.sst
   873091 folder/022427.sst

The directory /folder is implicit and does not get created by rclone (1.53.0) when doing a sync from the local directory to Wasabi.

Now, request-wise, both the 1.53 and 1.36 versions first do a HEAD /folder request, which returns a 404. It's the next request that differs, though:

1.36:  GET /?delimiter=%2F&max-keys=1024&prefix=folder%2F HTTP/1.1
1.53:  GET /folder?delimiter=%2F&encoding-type=url&max-keys=1000&prefix= HTTP/1.1

1.36 works because it lists the root path with a search prefix, so folder/022280.sst matches. Since /folder returns a 404, the listing done by 1.53 fails.
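The difference between the two listing strategies can be sketched like this (illustrative Python, not rclone's actual code; the key names are taken from the listing above):

```python
# Keys as they exist in the implicit bucket behind the CNAME.
keys = [
    "folder/022280.sst",
    "folder/022423.sst",
    "folder/022424.sst",
]

def list_with_prefix(keys, prefix):
    """Emulate GET /?prefix=...: filter a flat keyspace by prefix."""
    return [k for k in keys if k.startswith(prefix)]

# 1.36 lists the root of the implicit bucket with prefix "folder/",
# so every key matches:
assert list_with_prefix(keys, "folder/") == keys

# 1.53 instead addresses /folder as a bucket in its own right; no such
# bucket exists behind the CNAME, so the request 404s before any prefix
# filtering can happen.
```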

Rclone 1.53 doesn't support implicit bucket names. It was only ever an accident that it worked before.

So I think you need to specify the bucket name wasabi:bucket

So to make this work I think you need --s3-force-path-style=false and to put your endpoint in without the leading bucket. So instead of endpoint = bucket.domain.com just put in endpoint = domain.com

If we want rclone to have a bucketless mode then it probably needs to be specified explicitly in the config file.
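For reference, putting the advice above together would give a config along these lines (a sketch, not tested against Wasabi here; <ID>, <SECRET>, and the domain/bucket names are placeholders from this thread, and force_path_style is the config-file form of the --s3-force-path-style flag):

```ini
[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = <ID>
secret_access_key = <SECRET>
# endpoint without the leading bucket...
endpoint = domain.com
# ...and virtual-hosted style, so the bucket is prepended as a subdomain
force_path_style = false
acl = private
```

invoked with the bucket named explicitly, e.g. rclone sync --verbose wasabi:bucket/folder /some/local/directory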

Rclone 1.53 doesn't support implicit bucket names. It was only ever an accident that it worked before.

Understood.

So instead of endpoint = bucket.domain.com just put in endpoint = domain.com

Interesting. I figured that because the bucket is the FQDN (i.e. bucket.domain.com instead of just domain.com) this would not work, but I tested using the subdomain as the bucket name and the domain as the endpoint, and it worked. I guess that with --s3-force-path-style=false rclone composes endpoint = domain.com and bucket = bucket into bucket.domain.com.
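My understanding of that composition, as a small illustrative sketch (not rclone's actual code; the endpoint and bucket names are the placeholders used in this thread):

```python
def s3_url(endpoint: str, bucket: str, key: str,
           force_path_style: bool = True) -> str:
    """Compose an S3 request URL.

    Path style keeps the bucket in the URL path; virtual-hosted style
    (--s3-force-path-style=false) prepends it to the endpoint as a
    subdomain.
    """
    if force_path_style:
        return f"https://{endpoint}/{bucket}/{key}"
    return f"https://{bucket}.{endpoint}/{key}"

# Path style: the bucket rides in the path, endpoint stays bare.
assert s3_url("domain.com", "bucket", "folder/022280.sst") == \
    "https://domain.com/bucket/folder/022280.sst"

# Virtual-hosted style: endpoint = domain.com plus bucket = bucket
# resolves to the CNAME host bucket.domain.com.
assert s3_url("domain.com", "bucket", "folder/022280.sst",
              force_path_style=False) == \
    "https://bucket.domain.com/folder/022280.sst"
```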

I think we can consider this resolved.

Great

That is exactly what happens.

Glad we got it working and sorry about the changes between rclone versions. That is the trouble with having a popular program, whenever I fix undefined behaviour I usually find someone was relying on it!

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.