I have a server running in eu-west-1 with an attached EBS volume. I have a bucket in ca-central-1, and one in ap-southeast, that I need to clone the tree to. I have enabled transfer acceleration on the buckets.
This rclone command does copy the data:
rclone sync /my/dir s3:bucket-in-canada/target
When I run it, I get two identical log entries:
2019/09/25 18:24:20 NOTICE: S3 bucket bucket-in-canada path target: Switched region to "ca-central-1" from "eu-west-1"
My config file says:
[s3]
type = s3
provider = AWS
env_auth = false
access_key_id = yadda
secret_access_key = yadda
region = eu-west-1
acl = public-read
storage_class = STANDARD
(yes, I want public-read for this specific data)
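Since the region is pinned in the remote's config, one workaround (just a sketch; the remote name `s3-ca` is made up here, and credentials are placeholders as above) is to define a separate remote per destination region so rclone never has to switch regions mid-run:

```ini
# Hypothetical second remote pinned to the Canadian bucket's region
[s3-ca]
type = s3
provider = AWS
env_auth = false
access_key_id = yadda
secret_access_key = yadda
region = ca-central-1
acl = public-read
storage_class = STANDARD
```

Then the sync would target that remote instead: `rclone sync /my/dir s3-ca:bucket-in-canada/target`.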
If I add --s3-use-accelerate-endpoint before the "sync" I get:
2019-09-25 18:31:54 ERROR : : error reading destination directory: AuthorizationHeaderMalformed: The authorization header is malformed; the region 'eu-west-1' is wrong; expecting 'ca-central-1'
That is followed by pages of IO errors and the like, because the authorization is bad and it won't let me write anything.
The IAM role I'm using has full admin access.
I'm missing something: a region argument, or something in the wrong place, etc. Any insight?
I'm letting --s3-use-accelerate-endpoint do that, which is what I thought it was for. Based on the docs, it looks like you change the endpoint on the target bucket, so I'd need to use, for instance, the ca-central-1 one.
I can try that. I put eu-west-1 in the config because the host running it is in eu-west-1; I got errors, and I don't recall if I switched that to ca-central-1. I also used --s3-region at some point (I've been fiddling with this for a while!). I'll rerun those tests and post the results.
Ok, I must have changed too many things at once in the early tests, because I was also changing the profile settings. Leaving the profile at eu-west-1, and adding an --s3-region that matched the location of the bucket I was hitting, finally got me a connection. And since I got errors from CloudFront during a couple of transfers, acceleration must be working. It took an amazing amount of tweaking, plus an upgrade to an instance with more memory, to get reasonable throughput though: up to 100 Mbps from about 10.
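For anyone landing here later, the working combination described above would look roughly like this (a sketch using the bucket and path names from the example; this assumes the `[s3]` remote is left at region = eu-west-1 as shown earlier):

```shell
# Keep the configured remote at eu-west-1, but override the region
# per destination bucket and request the accelerate endpoint.
rclone sync /my/dir s3:bucket-in-canada/target \
  --s3-region ca-central-1 \
  --s3-use-accelerate-endpoint
```

The key point is that --s3-region must match the region of the bucket being written to, not the region of the host running rclone.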
Glad you got it working! Rclone could probably be more helpful here by changing the accelerate endpoint when it changes the region. I'm not sure that's possible through the Go SDK without re-creating the connection completely, though.