S3: source/destination named profile

With rclone v1.52.2:
I have a customer who needs to sync AWS buckets between two accounts. The documentation appears to allow for this, potentially by using two rclone configs, one for each side.

However, I do not see a way to include two AWS configs for credentials. He would need to use two named profiles, one for the source and one for the destination. After searching the documentation and Google, it appears there may be a way to include an access/secret key for each (which we cannot use), but env_auth=true does not appear to allow specifying the named AWS profile to use (so I am guessing it will read an environment variable and use the same credentials for both, which won't work).

Is there currently a way to set an AWS named profile (or any way to assume roles) independently for the source and destination? Thanks in advance for any pointers.
-Alan

hello and welcome to the forum,

perhaps this is what you need

https://rclone.org/s3/#authentication
" Or, use a named profile:"
and
https://rclone.org/docs/#config-file

Well, there is that, and I've looked at it... however, it only appears to support one profile, used for both the source AND the destination. I was asking whether there is any way to have a different named profile for the source and the destination. I see no way to put a named profile in the config at all, let alone use two.
-Alan

i have done this before, to create the remotes on the fly

https://rclone.org/docs/#backend-path-to-dir

rclone lsd :s3:en07/kdbx --s3-access-key-id=id --s3-secret-access-key=key --s3-endpoint=s3.us-east-2.wasabisys.com -vv

Appreciate the answer, but after reading that and the referenced manual section, I fail to see how this would let me have one profile for the source and another for the destination. Since the :backend: part has to be a config type, and both are "s3", I don't see how to configure the profiles, either in the config file, via env vars, or via command line options. Sorry if I'm dense, it's Wednesday...

yeah, wednesday. me too

export RCLONE_CONFIG_MYS31_TYPE=s3
export RCLONE_CONFIG_MYS32_TYPE=s3

export RCLONE_CONFIG_MYS31_ACCESS_KEY_ID=$id1
export RCLONE_CONFIG_MYS32_ACCESS_KEY_ID=$id2

export RCLONE_CONFIG_MYS31_ENDPOINT=$endpoint1
export RCLONE_CONFIG_MYS32_ENDPOINT=$endpoint2

export RCLONE_CONFIG_MYS31_SECRET_ACCESS_KEY=$key1
export RCLONE_CONFIG_MYS32_SECRET_ACCESS_KEY=$key2

rclone sync mys31:testremotefolder01 mys32:testremotefolder02 -vv

2020/06/24 13:30:06 DEBUG : rclone: Version "v1.52.1" starting with parameters ["/mnt/c/data/rclone/scripts/rclone" "sync" "mys31:testremotefolder01" "mys32:testremotefolder02" "-vv"]
2020/06/24 13:30:06 DEBUG : no.worries.be.happy.txt: MD5 = d41d8cd98f00b204e9800998ecf8427e OK
2020/06/24 13:30:06 INFO  : no.worries.be.happy.txt: Copied (new)
Transferred:   	         0 / 0 Bytes, -, 0 Bytes/s, ETA -
Transferred:            1 / 1, 100%

Thanks again, but I'm not sure how that helps with profiles, which is what I was asking about. I don't have access/secret keys (except what could come from an assumed role). I tried the idea of using the env vars for an s3src config, with something like "export RCLONE_CONFIG_S3SRC_AWS_PROFILE=xxx" (since the manual discusses an "AWS_PROFILE" capability), but that doesn't work either.

perhaps, add it to your config file itself

[s31]
aws_profile=/path/to/file/profile

Is there a reason you can't just use env_auth = false in the config file and then use an access_key_id and secret_access_key, and optionally a session_token (AWS STS)?

That's not valid by itself, since the profile is a name, not a file. Maybe if I created two files and used exports with AWS_SHARED_CREDENTIALS_FILE. Ugly, but I can test it.

Maybe. I don't have those; I would have to use the AWS CLI to try to do assume-role, and get the result into rclone somehow (env vars or the backend custom command line args). I was hoping for something easier, I guess.

This is tricky! You can use named profiles with rclone by using env_auth=true and AWS_PROFILE, but as you point out this won't work if you have two profiles. You can set AWS_SHARED_CREDENTIALS_FILE, but I don't think that will work, as both backends will read the same file.
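For reference, a single shared credentials file can already hold several named profiles in the standard AWS format; the missing piece is a way to point each rclone remote at a different profile within it. A sketch of such a file (profile names and key values below are placeholders, not real credentials):

```shell
# Write a hypothetical shared credentials file with two named profiles.
# Profile names and key values are made-up placeholders for illustration.
cat > /tmp/aws-credentials <<'EOF'
[source-profile]
aws_access_key_id = AKIAEXAMPLESOURCE
aws_secret_access_key = example-source-secret

[dest-profile]
aws_access_key_id = AKIAEXAMPLEDEST
aws_secret_access_key = example-dest-secret
EOF

# Point the AWS SDK (and rclone) at this file instead of ~/.aws/credentials.
export AWS_SHARED_CREDENTIALS_FILE=/tmp/aws-credentials

# Count the profile section headers - prints 2.
grep -c '^\[' "$AWS_SHARED_CREDENTIALS_FILE"
```

This is the file format the AWS_PROFILE variable selects from; the limitation discussed above is that a single AWS_PROFILE value applies to both backends at once.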

What you really need are some more config options so you can make two different backends (s3a and s3b with different profile variables).

This should enable that - you'll need env_auth=true and profile=name

Let me know if it works and if the docs look ok!

https://beta.rclone.org/branch/v1.52.2-133-ge557e6b2-fix-s3-aws-profile-beta/ (uploaded in 15-30 mins)

Here are the docs

--s3-profile

Profile to use in the shared credentials file

If env_auth = true then rclone can use a shared credentials file. This
variable controls which profile is used in that file.

If empty it will default to the environment variable "AWS_PROFILE" or
"default" if that environment variable is also not set.

  • Config: profile
  • Env Var: RCLONE_S3_PROFILE
  • Type: string
  • Default: ""

--s3-shared-credentials-file

Path to the shared credentials file

If env_auth = true then rclone can use a shared credentials file.

If this variable is empty rclone will look for the
"AWS_SHARED_CREDENTIALS_FILE" env variable. If the env value is empty
it will default to the current user's home directory.

Linux/OSX: "$HOME/.aws/credentials"
Windows:   "%USERPROFILE%\.aws\credentials"
  • Config: shared_credentials_file
  • Env Var: RCLONE_S3_SHARED_CREDENTIALS_FILE
  • Type: string
  • Default: ""
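Putting the new option together, a config along these lines should allow one profile per remote (remote names, profile names, and buckets here are just example placeholders):

```ini
; ~/.config/rclone/rclone.conf - hypothetical two-profile setup
[s3src]
type = s3
provider = AWS
env_auth = true
profile = source-profile

[s3dst]
type = s3
provider = AWS
env_auth = true
profile = dest-profile
```

Then sync with `rclone sync s3src:source-bucket s3dst:dest-bucket -vv`. Following the same per-remote env-var pattern shown earlier in the thread, `export RCLONE_CONFIG_S3SRC_PROFILE=source-profile` should also work as an alternative to editing the config file.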

I've merged this to master now, which means it will be in the latest beta in 15-30 mins and released in v1.53.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.