Copying data between s3 buckets in different accounts

What is the problem you are having with rclone?

I'm using rclone to copy data between S3 buckets. The source bucket is owned by my organization and the destination bucket is owned by another account.
I have access keys and secrets for both accounts, which gives me permission to pull data from my bucket and push it to the destination bucket.
After doing some research I noticed that AWS writes that in order to transfer data between S3 buckets in different accounts you need to set an ACL on the destination bucket.
I have not done this, and yet rclone can still transfer data between the accounts. How does this work? Would setting the ACL on the destination bucket make the transfer more efficient (faster)?

Run the command 'rclone version' and share the full output of the command.

yes

Which cloud storage system are you using? (eg Google Drive)

s3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy remote1:bucket1 remote2:bucket2

The rclone config contents with secrets removed.

[remote2]
type = s3
provider = AWS
access_key_id = xxx
secret_access_key = xxx

[remote1]
type = s3
provider = AWS
env_auth = true
region = us-east-1

A log from the command with the -vv flag

I do not want to run the command again as it is a copy command

hi,

rclone sets the ACL for you, though you can change the value.
fwiw, in all my years of using S3, i have never used ACLs, always IAM.

in fact, as per amazon:
"we recommend that you disable ACLs"

run the command with --dry-run, no files will be copied/deleted
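
for example, to have rclone apply a different canned ACL on upload, something like this (remote and bucket names are placeholders):

rclone copy remote1:bucket1 remote2:bucket2 --s3-acl=bucket-owner-full-control --dry-run -vv

--s3-acl overrides the acl value from the config file for that run, and --dry-run lists what would be copied without transferring anything.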

Sorry, I think I was mistaken.
I meant I don't have destination write permission with the first IAM role.
Nothing to do with ACLs.
And it seems that I need that, referencing this:
(Copy Amazon S3 objects from another AWS account)

sorry, a bit confused as to the problem.

as a test, copy a single file and post the full debug output.

Hey again

Here is the command output. I censored a lot of details: bucket names, remote names, object names, and the Linux hostname and username.

(base) xxxx@xxx:~/data$ rclone copy <remote1>:<bucket1>/<object1> <remote2>:<bucket2>/<dirpath> --dry-run -vv
2022/09/17 14:20:15 DEBUG : rclone: Version "v1.59.2" starting with parameters ["rclone" "copy" "<remote1>:<bucket1>/<object1>" "<remote2>:<bucket2>/<dirpath>" "--dry-run" "-vv"]
2022/09/17 14:20:15 DEBUG : Creating backend with remote "<remote1>:<bucket1>/<object1>"
2022/09/17 14:20:15 DEBUG : Using config file from "/homefolder/<username>/.config/rclone/rclone.conf"
2022/09/17 14:20:16 DEBUG : fs cache: adding new entry for parent of "<remote1>:<bucket1>/<object1>", "<remote1>:<bucket1>/<dirpath>"
2022/09/17 14:20:16 DEBUG : Creating backend with remote "<remote2>:<bucket2>/<dirpath>"
2022/09/17 14:20:16 DEBUG : <object1>: Need to transfer - File not found at Destination
2022/09/17 14:20:16 NOTICE: <object1>: Skipped copy as --dry-run is set (size 13.434Mi)
2022/09/17 14:20:16 NOTICE:
Transferred:       13.434 MiB / 13.434 MiB, 100%, 0 B/s, ETA -
Transferred:            1 / 1, 100%
Elapsed time:         0.1s

2022/09/17 14:20:16 DEBUG : 8 go routines active
(base) xxx@xxx:~/data$

Sorry for the confusion. There isn't a bug and everything works fine. But I don't understand why:

I thought that you need one user with privileges to both accounts' buckets in order to transfer objects from one bucket to another.

But what I have is two users, one per account: one has read permission on the source bucket and the other has write permission on the destination bucket.

that is correct, that is what i do.

Yeah, the thing is, whenever I searched the AWS docs for this topic, they always show a single user that has privileges to both accounts. So I don't understand how this works with two users.

This is my issue.

both methods are valid, personal preference

in my case, i prefer two users, one per account. find it simpler.
--- easy to work with rclone.
--- S3 IAM users require MFA, and from that a session token is created and fed to rclone.
--- need a way to deal with source and dest both using different SSE-C encryption keys.

let's take a file named file.ext

  1. rclone downloads file.ext from source account, using the source user permissions.
  2. rclone uploads file.ext to dest account, using the dest user permissions.
    in my case, the permissions are S3 policies, not ACL
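
so with the two-remote setup from the config above, a single copy streams the data through the machine running rclone, something like:

rclone copy remote1:bucket1/file.ext remote2:bucket2/ -vv

remote1's credentials are used for the download and remote2's for the upload, so no single identity needs cross-account permissions.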

Ok thanks, that is exactly what I was wondering about.
A colleague of mine told me that rclone sends data directly from one bucket to another, without downloading it first.
I wanted to be sure that I'm not missing something.

Now when I look at the FAQ I see that it is mentioned there explicitly.

Thanks again.

in some cases, that is correct.

INFO  : file.ext: Copied (server-side copy)

Ohh I see. I was just looking for a way to make the transfer more efficient as I had to move a lot of data.

So in order to do a server-side copy I need to have one user with permissions to both accounts, right?

so as you posted, i was writing a post, asking that same question.
as of now, i do not know the answer.

tho, i do know that in this case, server-side copy is not used.
--- based on rclone debug log
--- two users, one per account
--- source and dest accounts are both at Wasabi, an s3 clone.

Putting Rclone aside.

I have been searching online since yesterday about copying files between AWS accounts, and I did not find a single article that mentioned multiple users.
Only one user with permissions to both accounts.

if there is a specific advantage of one-user over two-user, then let me know?

Of course. I think that you can only do a bucket->bucket transfer with one user/role. So it's probably more efficient.

that works fine with two-user.

If the permissions are set correctly for the destination user to be able to read from the source bucket, then using the latest beta with --server-side-across-configs should enable a server-side copy, I think.
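
a sketch of that, assuming the destination user's policy also grants read access (s3:GetObject) on the source bucket, with the remote names from this thread:

rclone copy remote1:bucket1 remote2:bucket2 --server-side-across-configs -vv

checking the debug log for "Copied (server-side copy)" confirms which path was taken.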

hi,

i think i was able to get server-side copy to work between two accounts.
without using the beta and without using --server-side-across-configs

rclone copy source:zork.source source:zork.dest -vv --retries=1 --s3-acl=bucket-owner-full-control 
DEBUG : Setting --config "C:\\data\\rclone\\rclone.conf" from environment variable RCLONE_CONFIG="C:\\data\\rclone\\rclone.conf"
DEBUG : rclone: Version "v1.58.1" starting with parameters ["C:\\data\\rclone\\rclone.exe" "copy" "source:zork.source" "source:zork.dest" "-vv" "--retries=1" "--s3-acl=bucket-owner-full-control"]
DEBUG : Creating backend with remote "source:zork.source"
DEBUG : Using config file from "C:\\data\\rclone\\rclone.conf"
DEBUG : source: detected overridden config - adding "{6n9_F}" suffix to name
DEBUG : fs cache: renaming cache item "source:zork.source" to be canonical "source{6n9_F}:zork.source"
DEBUG : Creating backend with remote "source:zork.dest"
DEBUG : source: detected overridden config - adding "{6n9_F}" suffix to name
DEBUG : fs cache: renaming cache item "source:zork.dest" to be canonical "source{6n9_F}:zork.dest"
DEBUG : S3 bucket zork.dest: Waiting for checks to finish
DEBUG : S3 bucket zork.dest: Waiting for transfers to finish
DEBUG : file.ext: md5 = c7f5af9b93f5aa17934c84ad53fd2cea OK
INFO  : file.ext: Copied (server-side copy)
INFO  : 
Transferred:   	    1.382 MiB / 1.382 MiB, 100%, 0 B/s, ETA -
Transferred:            1 / 1, 100%
Elapsed time:         1.6s

and here is some output to make clear, i am using two different accounts.

rclone lsd source: --include=zork*//* 
          -1 2022-09-19 16:00:26        -1 zork.source

rclone lsd dest: --include=zork*//* 
          -1 2022-09-19 16:11:38        -1 zork.dest