I'm using Rclone to copy data between S3 buckets. The source bucket is owned by my organization and the destination bucket is owned by another account.
I have access keys and secrets for both accounts, which gives me permission to pull data from my bucket and push it to the destination bucket.
After doing some research I noticed that AWS writes that, in order to transfer data between S3 buckets in different accounts, you need to set an ACL on the destination bucket.
I did not do this, and still Rclone can transfer data between the accounts. How is this done? Will it be more efficient (faster transfer) to set the ACL on the destination bucket?
Run the command 'rclone version' and share the full output of the command.
yes
Which cloud storage system are you using? (eg Google Drive)
s3
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone copy remote1:bucket1 remote2:bucket2
The rclone config contents with secrets removed.
[remote2]
type = s3
provider = AWS
access_key_id = xxx
secret_access_key = xxx
[remote1]
type = s3
provider = AWS
env_auth = true
region = us-east-1
A log from the command with the -vv flag
I do not want to run the command again as it is a copy command
Sorry, I think I was mistaken.
I meant that I don't have destination write permission with the first IAM role.
It's nothing to do with ACLs.
And it seems that I need that, referencing this:
(Copy Amazon S3 objects from another AWS account)
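For reference, what that AWS article describes is a bucket policy granting the other account access. A minimal sketch of such a policy on the destination bucket (the account ID and bucket name below are placeholders, not taken from my setup):

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowCrossAccountPut",
      "Effect": "Allow",
      "Principal": { "AWS": "arn:aws:iam::111111111111:root" },
      "Action": "s3:PutObject",
      "Resource": "arn:aws:s3:::destination-bucket/*"
    }
  ]
}
```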
Here is the command output. I censored a lot of details: bucket names, remote names, object names, and the Linux hostname and username.
(base) xxxx@xxx:~/data$ rclone copy <remote1>:<bucket1>/<object1> <remote2>:<bucket2>/<dirpath> --dry-run -vv
2022/09/17 14:20:15 DEBUG : rclone: Version "v1.59.2" starting with parameters ["rclone" "copy" "<remote1>:<bucket1>/<object1>" "<remote2>:<bucket2>/<dirpath>" "--dry-run" "-vv"]
2022/09/17 14:20:15 DEBUG : Creating backend with remote "<remote1>:<bucket1>/<object1>"
2022/09/17 14:20:15 DEBUG : Using config file from "/homefolder/<username>/.config/rclone/rclone.conf"
2022/09/17 14:20:16 DEBUG : fs cache: adding new entry for parent of "<remote1>:<bucket1>/<object1>", "<remote1>:<bucket1>/<dirpath>"
2022/09/17 14:20:16 DEBUG : Creating backend with remote "<remote2>:<bucket2>/<dirpath>"
2022/09/17 14:20:16 DEBUG : <object1>: Need to transfer - File not found at Destination
2022/09/17 14:20:16 NOTICE: <object1>: Skipped copy as --dry-run is set (size 13.434Mi)
2022/09/17 14:20:16 NOTICE:
Transferred: 13.434 MiB / 13.434 MiB, 100%, 0 B/s, ETA -
Transferred: 1 / 1, 100%
Elapsed time: 0.1s
2022/09/17 14:20:16 DEBUG : 8 go routines active
(base) xxx@xxx:~/data$
Sorry for the confusion. There isn't a bug and everything works fine. But I don't understand why:
I thought that you need one user with privileges to both accounts' buckets in order to transfer objects from one bucket to another.
But what I have is two users, one for each account: one user has read privileges on the source bucket, and the other has write permission on the destination bucket.
Yeah, the thing is: whenever I searched the AWS docs for this topic, they always show one user that has privileges to both accounts. So I don't understand how this works with two users.
in my case, i prefer two users, one per account. i find it simpler.
--- easy to work with rclone.
--- S3 IAM users require MFA, and from that a session token is created and fed to rclone.
--- need a way to deal with source and dest, both using different SSE-C encryption keys.
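A rough sketch of that MFA/session-token flow (hypothetical values throughout; the ARN, token code, and credentials are placeholders):

```shell
# hypothetical sketch: ask STS for temporary credentials after MFA
aws sts get-session-token \
  --serial-number arn:aws:iam::111111111111:mfa/my-user \
  --token-code 123456

# export the returned temporary credentials; a remote configured
# with env_auth = true (like remote1 above) will pick them up
export AWS_ACCESS_KEY_ID=ASIA...
export AWS_SECRET_ACCESS_KEY=...
export AWS_SESSION_TOKEN=...

rclone copy remote1:bucket1 remote2:bucket2 -vv
```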
Ok thanks, that is exactly what I was wondering about.
A colleague of mine told me that rclone sends data directly from one bucket to another, without downloading it first.
I wanted to be sure that I'm not missing something.
Now when I look at the FAQ I see that it is mentioned there explicitly.
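Conceptually (this is just an illustration, not rclone's actual code), a copy between two remotes pipes the download stream opened with the source user's credentials straight into an upload stream under the destination user's credentials, so the data passes through memory rather than being saved to local disk first:

```python
import io
import shutil

# Stand-ins for the two S3 connections: the "source" stream would be a
# GET made with user 1's keys, the "dest" stream a PUT made with
# user 2's keys (both hypothetical here).
source_object = io.BytesIO(b"object data from bucket1")
dest_object = io.BytesIO()

# Stream chunks from one connection into the other; the data flows
# through a memory buffer, never touching the local disk.
shutil.copyfileobj(source_object, dest_object, length=64 * 1024)

assert dest_object.getvalue() == b"object data from bucket1"
```

This is why neither user needs privileges on both accounts: each credential is only used on its own side of the pipe.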
so as you posted, i was writing a post, asking that same question.
as of now, i do not know the answer.
tho, i do know that in this case, server-side copy is not used.
--- based on rclone debug log
--- two users, one per account
--- source and dest accounts are both at Wasabi, an s3 clone.
I have been searching online since yesterday about copying files between AWS accounts, and I did not find a single article that mentioned multiple users.
Only one user with permissions to both accounts.
If the permissions are set correctly for the destination user to be able to read from the source bucket, then using the latest beta and --server-side-across-configs should enable a server side copy I think.
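A sketch of that invocation, reusing the remote and bucket names from the config above (hypothetical; it assumes the destination user has been granted read access to the source bucket):

```shell
# --server-side-across-configs asks rclone to attempt a server-side COPY
# even though source and destination are different remotes; the COPY
# request runs under remote2's credentials, so that user must be able
# to read bucket1 as well as write bucket2.
rclone copy remote1:bucket1 remote2:bucket2 --server-side-across-configs -vv
```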