Multiple --config rclone.conf files?

What is the problem you are having with rclone?

I'm trying to copy files between two remotes. One is specified in rclone1.conf file and the other in rclone2.conf file.

I hoped I could run rclone copy remote1: remote2: --config rclone1.conf --config rclone2.conf, but it doesn't work that way - rclone only uses the last --config file instead of merging them.

The reason this is needed is that rclone2.conf is encrypted and rclone1.conf isn't. Since rclone has to decrypt the second file first, you also can't use shell process substitution to merge the two files "on the fly".
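For concreteness, a sketch of the "merge on the fly" approach that only works when both configs are plain text (the file names and contents below are stand-ins, not my real configs):

```shell
#!/bin/bash
# Stand-in configs; the real ones are shown further down in this post.
cat > rclone1.conf <<'EOF'
[gdrive]
type = drive
EOF
cat > rclone2.conf <<'EOF'
[aws-glacier-deep]
type = s3
EOF

# With two plain-text files, process substitution hands rclone a merged
# pseudo-file that is still a valid INI config:
cat <(cat rclone1.conf rclone2.conf)

# ...so this would work:
#   rclone copy gdrive: aws-glacier-deep: --config <(cat rclone1.conf rclone2.conf)
# But when rclone2.conf is encrypted, cat just concatenates ciphertext
# onto INI text, and rclone cannot parse (or decrypt) the result.
```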

Run the command 'rclone version' and share the full output of the command.

rclone v1.62.2
- os/version: arch "rolling" (64 bit)
- os/kernel: 6.2.13-arch1-1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.4
- go/linking: dynamic
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Google Drive ---> AWS S3

The rclone config contents with secrets removed.

rclone1.conf:

[gdrive]
type = drive
client_id = <redacted>
client_secret = <redacted>
scope = drive
token = <redacted>
root_folder_id = <redacted>
team_drive = 

rclone2.conf:

[aws-glacier-deep]
type = s3
provider = AWS
access_key_id = <redacted>
secret_access_key = <redacted>
region = eu-west-1
location_constraint = eu-west-1
acl = private
storage_class = DEEP_ARCHIVE

do not think rclone can do that, tho it would be a nice feature.

if the second config is not encrypted, then perhaps there is no need for a config file.
might use a connection string, something like

rclone ls ":s3,access_key_id='redacted',secret_access_key='redacted',endpoint='https://s3.us-east-2.wasabisys.com':bucketName"

and there are other workarounds

thanks, I didn't know about connection strings. This is a really cool feature, but even though rclone1.conf isn't encrypted, putting its secrets on the command line - where any other process can read them - would give up the security that file system permissions on the config file still provide.

Could you provide them as environment variables instead?

No worries, it would be really nice to have this functionality though.

sure, i should have mentioned that.

https://rclone.org/docs/#environment-variables

as a partial example,

[aws-glacier-deep]
type = s3
provider = AWS
access_key_id = <redacted>
secret_access_key = <redacted>

would translate into

RCLONE_CONFIG_AWS-GLACIER-DEEP_TYPE=s3
RCLONE_CONFIG_AWS-GLACIER-DEEP_PROVIDER=AWS
RCLONE_CONFIG_AWS-GLACIER-DEEP_ACCESS_KEY_ID=<redacted>
RCLONE_CONFIG_AWS-GLACIER-DEEP_SECRET_ACCESS_KEY=<redacted>

and then use it like so
rclone lsd aws-glacier-deep:foldername
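One wrinkle worth noting: because the remote name contains hyphens, a plain export in bash/sh will reject those variable names ("-" is not a valid character in a shell identifier). A small sketch of a workaround is to set them with the env command, which bypasses the shell's identifier rules and puts arbitrary names into the child process's environment (printenv is a stand-in for the real rclone invocation):

```shell
#!/bin/sh
# `export RCLONE_CONFIG_AWS-GLACIER-DEEP_TYPE=s3` fails with
# "not a valid identifier", because "-" is not allowed in shell
# variable names. `env` sets the name directly in the child's
# environment instead. In practice the child would be rclone, e.g.
#   env RCLONE_CONFIG_AWS-GLACIER-DEEP_TYPE=s3 ... rclone lsd aws-glacier-deep:foldername
env RCLONE_CONFIG_AWS-GLACIER-DEEP_TYPE=s3 \
    printenv RCLONE_CONFIG_AWS-GLACIER-DEEP_TYPE
# prints: s3
```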


Perfect, thank you again! That solves it then 🙂

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.