rclone copy fails to copy a file from the local filesystem to an Object Storage bucket with AWS S3 compatibility because it builds a subdomain from the bucket name for the S3 object storage endpoint
What is your rclone version (output from rclone version)
rclone v1.53.1
- os/arch: linux/amd64
- go version: go1.15
Which OS you are using and how many bits (eg Windows 7, 64 bit)
4.14.35-1902.305.4.el7uek.x86_64
Which cloud storage system are you using? (eg Google Drive)
Oracle Cloud Infrastructure with AWS S3 compatibility enabled
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Interesting. That looks like a genuine TLS error though.
caused by: Head "https://<s3_bucket>.<oracle_cloud_tenancy_namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com/file.tar.gz": x509: certificate is valid for
*.compat.objectstorage.us-ashburn-1.oraclecloud.com, not
<s3_bucket>.<oracle_cloud_tenancy_namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com/
Does your bucket name have periods "." in it?
The * in the SSL wildcard won't cover a period.
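To illustrate with the hostnames from the error above (bucket and namespace names are placeholders): a TLS wildcard only matches a single DNS label, so any extra dotted component in front of the certificate's base name falls outside it:

```
*.compat.objectstorage.us-ashburn-1.oraclecloud.com
  matches:        mybucket.compat.objectstorage.us-ashburn-1.oraclecloud.com
  does NOT match: mybucket.mynamespace.compat.objectstorage.us-ashburn-1.oraclecloud.com
```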
You could try --s3-force-path-style=true
--s3-force-path-style If true use path style access if false use virtual hosted style. (default true)
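For example (the remote name and paths here are placeholders, not from your setup), either on the command line or as the equivalent key in rclone.conf:

```
# On the command line:
rclone copy --s3-force-path-style=true /tmp/file.tar.gz remote:bucket

# Or in rclone.conf, under the remote's section:
# [remote]
# force_path_style = true
```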
My bucket name does not have any periods in it, only hyphens '-'. I tried setting the option in the rclone.conf file, but that did not work either and threw the same error.
force_path_style = true
This seems like a bug with rclone v1.53.1 (I installed the latest from the rclone website). The same command works with version 1.50.2, which I installed from YUM.
Awesome! Setting the provider to IBMCOS really worked. Is this a known issue with the AWS provider, and is it documented somewhere?
If so, it would help a lot of users in getting their config right for the provider. Thank you again for putting in time and helping me resolve this issue. I'm going to do a quick sanity testing and make sure all the other rclone commands work fine with the new provider in the config.
Setting the provider correctly enables rclone to apply the "quirks" for each provider properly.
In this case AWS wants that flag false but IBMCOS wants it true.
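Assuming a hand-written rclone.conf, the fix from this thread would look something like the sketch below (the remote name, namespace, and credentials are placeholders; credential keys are omitted):

```
[oci]
type = s3
provider = IBMCOS
endpoint = https://<oracle_cloud_tenancy_namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com
```

With the provider set, rclone picks the right value of force_path_style automatically, so you should not need to set it by hand.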
If you build your config through rclone config then it will prompt you for the provider at the start.
Choose your S3 provider.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
1 / Amazon Web Services (AWS) S3
\ "AWS"
2 / Alibaba Cloud Object Storage System (OSS) formerly Aliyun
\ "Alibaba"
3 / Ceph Object Storage
\ "Ceph"
4 / Digital Ocean Spaces
\ "DigitalOcean"
5 / Dreamhost DreamObjects
\ "Dreamhost"
6 / IBM COS S3
\ "IBMCOS"
7 / Minio Object Storage
\ "Minio"
8 / Netease Object Storage (NOS)
\ "Netease"
9 / Scaleway Object Storage
\ "Scaleway"
10 / StackPath Object Storage
\ "StackPath"
11 / Tencent Cloud Object Storage (COS)
\ "TencentCOS"
12 / Wasabi Object Storage
\ "Wasabi"
13 / Any other S3 compatible provider
\ "Other"
provider>
How did you build your config?
There aren't as many quirks in the S3 backend as there are in, say, the WebDAV backend, because the providers do a pretty good job at staying compatible.
I knew the minimum config options required to get rclone working, so I built the config by hand. However, I did not know about the quirk below, or that the provider needed to be set to IBMCOS rather than AWS.
The documentation also did not state that this flag gets set to false for the AWS provider.
--s3-force-path-style
If true use path style access if false use virtual hosted style.
If this is true (the default) then rclone will use path style access, if false then rclone will use virtual path style. See the AWS S3 docs for more info.
Some providers (eg AWS, Aliyun OSS, Netease COS or Tencent COS) require this set to false - rclone will do this automatically based on the provider setting.
Config: force_path_style
Env Var: RCLONE_S3_FORCE_PATH_STYLE
Type: bool
Default: true
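Concretely, the two access styles produce different request URLs, which is why the setting matters here (bucket and object names below are placeholders): the path-style host matches the *.compat wildcard certificate, while the virtual-hosted host does not.

```
# Virtual hosted style (force_path_style = false):
https://mybucket.<oracle_cloud_tenancy_namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com/file.tar.gz

# Path style (force_path_style = true):
https://<oracle_cloud_tenancy_namespace>.compat.objectstorage.us-ashburn-1.oraclecloud.com/mybucket/file.tar.gz
```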
No worries! I'm thinking of changing the way this works so rclone is more explicit about setting the quirks. Then you'd get a debug log telling you what things were set to.