Shared-credentials-file is not recognised

What is the problem you are having with rclone?

I want to sync two S3 backends and store the access keys outside the configuration file. My config file has the two S3 backends configured. If I add the parameters access_key and secret_access_key to the respective remotes in the configuration file, everything works as expected. However, I don't want to store the credentials in plain text in the configuration file, because that file is also stored in Git.

Version 1.53 introduced --s3-profile and --s3-shared-credentials-file, but somehow I can't get them to work.

Run the command 'rclone version' and share the full output of the command.

rclone v1.67.0
- os/version: alpine 3.20.0 (64 bit)
- os/kernel: 5.14.0-284.59.1.el9_2.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.4
- go/linking: static
- go/tags: none

One more note on my environment: I'm using rclone's Docker image in an OpenShift (Kubernetes) cluster. The configuration file is passed in as a ConfigMap, and I want to provide the keys as a Kubernetes Secret.

Which cloud storage system are you using? (eg Google Drive)

  • self-hosted MinIO
  • self-hosted Quay (Image registry)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync quay:quay-datastore-xxx minio:quay-datastore-xxx --no-check-certificate

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

Working example with inline authentication

[minio]
type = s3
provider = Minio
region = us-east-1
endpoint = http://my-minio-server:9000
location_constraint = 
server_side_encryption =
env_auth = false
access_key = xxx
secret_access_key = xxx

[quay]
type = s3
provider = Ceph
region = 
endpoint = https://my-quay-server
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 
env_auth = false
access_key = xxx
secret_access_key = xxx

Preferred example, so far not working

[minio]
type = s3
provider = Minio
region = us-east-1
endpoint = http://my-minio-server:9000
location_constraint = 
server_side_encryption = 
env_auth = true
shared_credentials_file = /config/rclone/rclone-credentials/secrets
profile = minio

[quay]
type = s3
provider = Ceph
region = 
endpoint = https://my-quay-server
location_constraint = 
acl = private
server_side_encryption = 
storage_class = 
env_auth = true
shared_credentials_file = /config/rclone/rclone-credentials/secrets
profile = quay

I tried both aws_shared_credentials_file and shared_credentials_file for the file parameter, and both aws_profile and profile for the profile name.

Content of /config/rclone/rclone-credentials/secrets

[quay]
access_key = xxx
secret_access_key = xxx
[minio]
access_key = xxx
secret_access_key = xxx
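
(Note: as comes up later in this thread, the standard AWS shared-credentials-file format uses aws_access_key_id and aws_secret_access_key rather than access_key and secret_access_key, so a corrected secrets file would presumably look like this:)

[quay]
aws_access_key_id = xxx
aws_secret_access_key = xxx
[minio]
aws_access_key_id = xxx
aws_secret_access_key = xxx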

A log from the command that you were trying to run with the -vv flag

2024/07/23 10:23:51 DEBUG : Creating backend with remote "quay:quay-datastore-xxx"
2024/07/23 10:23:51 DEBUG : Using config file from "/config/rclone/rclone.conf"
2024/07/23 10:23:51 DEBUG : Resolving service "ec2metadata" region ""
2024/07/23 10:23:51 DEBUG : Resolving service "s3" region "us-east-1"
2024/07/23 10:23:51 DEBUG : Creating backend with remote "minio:quay-datastore-xxx"
2024/07/23 10:23:51 DEBUG : Resolving service "ec2metadata" region ""
2024/07/23 10:23:51 DEBUG : Resolving service "s3" region "us-east-1"

That's all that happens. I waited ~10 minutes before cancelling the operation.

welcome to the forum,

i can reproduce the issue on windows.

edit: i did some quick testing, could be an issue with profile = when using a non-default profile name

this does not work

[remote]
type = s3
provider = Wasabi
env_auth = true
shared_credentials_file = c:\data\rclone\secrets
profile = wasabi
region = us-east-2
endpoint = s3.us-east-2.wasabisys.com

this does work using default profile

[remote]
type = s3
provider = Wasabi
env_auth = true
shared_credentials_file = c:\data\rclone\secrets
region = us-east-2
endpoint = s3.us-east-2.wasabisys.com

also, this does work, specifying the default profile

[remote]
type = s3
provider = Wasabi
env_auth = true
shared_credentials_file = c:\data\rclone\secrets
profile = default
region = us-east-2
endpoint = s3.us-east-2.wasabisys.com
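
for reference, a secrets file matching these tests would presumably look like this (placeholder values):

[default]
aws_access_key_id = xxx
aws_secret_access_key = xxx

[wasabi]
aws_access_key_id = xxx
aws_secret_access_key = xxx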

i did some more testing.
for now, i think there is a workaround.

for each remote, use a unique credentials file containing only a default section

file=secrets_minio

[default]
aws_access_key_id = xxx
aws_secret_access_key = xxx

file=secrets_quay

[default]
aws_access_key_id = xxx
aws_secret_access_key = xxx

and the changes to the remotes:

[minio]
shared_credentials_file = /config/rclone/rclone-credentials/secrets_minio
#profile = minio

[quay]
shared_credentials_file = /config/rclone/rclone-credentials/secrets_quay
#profile = quay
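
so a complete [minio] remote under this workaround would presumably look like this:

[minio]
type = s3
provider = Minio
region = us-east-1
endpoint = http://my-minio-server:9000
env_auth = true
shared_credentials_file = /config/rclone/rclone-credentials/secrets_minio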

Thank you for trying, it's much appreciated. At first glance, splitting the credentials into different files didn't make a difference. Then I stumbled upon the missing aws_ prefix on the key parameters, but rclone still didn't want to connect. However, I can't rule out that I'm using the wrong syntax, so I will test some more.

To reduce complexity, I tried to reproduce this in Podman instead of OpenShift. Unfortunately, I was not able to get it to work with shared credentials files.

However, I tried environment variables again and found a workaround for my case. I removed the shared_credentials_file and profile parameters from the config file but kept env_auth = true.

rclone.conf
[quay]
type = s3
provider = Ceph
region =
endpoint = https://my-quay-server
location_constraint =
acl = private
server_side_encryption =
storage_class =
env_auth = true

[minio]
type = s3
provider = Minio
region = us-east-1
endpoint = http://my-minio-server:9000
location_constraint =
server_side_encryption =
env_auth = true

I can provide OpenShift Secrets as environment variables, which works for me just as well as a secrets file.
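
For reference, the environment variables that rclone picks up (visible in the log below) are equivalent to setting, in shell terms:

export RCLONE_CONFIG_QUAY_ACCESS_KEY_ID=xxx
export RCLONE_CONFIG_QUAY_SECRET_ACCESS_KEY=xxx
export RCLONE_CONFIG_MINIO_ACCESS_KEY_ID=xxx
export RCLONE_CONFIG_MINIO_SECRET_ACCESS_KEY=xxx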

Successful sync log
2024/07/24 09:41:52 DEBUG : rclone: Version "v1.67.0" starting with parameters ["rclone" "sync" "quay:quay-datastore-xxx" "minio:quay-datastore-xxx" "--no-check-certificate" "-vv"]
2024/07/24 09:41:52 DEBUG : Creating backend with remote "quay:quay-datastore-xxx"
2024/07/24 09:41:52 DEBUG : Using config file from "/config/rclone/rclone.conf"
2024/07/24 09:41:52 DEBUG : Setting access_key_id="quay_id" for "quay" from environment variable RCLONE_CONFIG_QUAY_ACCESS_KEY_ID
2024/07/24 09:41:52 DEBUG : Setting secret_access_key="quay_secret" for "quay" from environment variable RCLONE_CONFIG_QUAY_SECRET_ACCESS_KEY
2024/07/24 09:41:52 DEBUG : quay: detected overridden config - adding "{6sijj}" suffix to name
2024/07/24 09:41:52 DEBUG : Setting access_key_id="quay_id" for "quay" from environment variable RCLONE_CONFIG_QUAY_ACCESS_KEY_ID
2024/07/24 09:41:52 DEBUG : Setting secret_access_key="quay_secret" for "quay" from environment variable RCLONE_CONFIG_QUAY_SECRET_ACCESS_KEY
2024/07/24 09:41:52 DEBUG : Resolving service "s3" region "us-east-1"
2024/07/24 09:41:52 DEBUG : fs cache: renaming cache item "quay:quay-datastore-xxx" to be canonical "quay{6sijj}:quay-datastore-xxx"
2024/07/24 09:41:52 DEBUG : Creating backend with remote "minio:quay-datastore-xxx"
2024/07/24 09:41:52 DEBUG : Setting access_key_id="minio_id" for "minio" from environment variable RCLONE_CONFIG_MINIO_ACCESS_KEY_ID
2024/07/24 09:41:52 DEBUG : Setting secret_access_key="minio_secret" for "minio" from environment variable RCLONE_CONFIG_MINIO_SECRET_ACCESS_KEY
2024/07/24 09:41:52 DEBUG : minio: detected overridden config - adding "{V6oTz}" suffix to name
2024/07/24 09:41:52 DEBUG : Setting access_key_id="minio_id" for "minio" from environment variable RCLONE_CONFIG_MINIO_ACCESS_KEY_ID
2024/07/24 09:41:52 DEBUG : Setting secret_access_key="minio_secret" for "minio" from environment variable RCLONE_CONFIG_MINIO_SECRET_ACCESS_KEY
2024/07/24 09:41:52 DEBUG : Resolving service "s3" region "us-east-1"
2024/07/24 09:41:52 DEBUG : fs cache: renaming cache item "minio:quay-datastore-xxx" to be canonical "minio{V6oTz}:quay-datastore-xxx"
2024/07/24 09:41:53 DEBUG : datastorage/registry/sha256/...: Size and modification time the same (differ by 0s, within tolerance 1ns)
2024/07/24 09:41:53 DEBUG : datastorage/registry/sha256/...: Unchanged skipping
[...]
2024/07/24 09:41:53 DEBUG : Waiting for deletions to finish
2024/07/24 09:41:53 INFO  : There was nothing to transfer
2024/07/24 09:41:53 INFO  : 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Checks:                63 / 63, 100%
Elapsed time:         1.0s

2024/07/24 09:41:53 DEBUG : 66 go routines active

Is there a way that I can help solve this issue from rclone's perspective? My tools are quite limited right now - I can't download or use binaries, for example - but maybe there is something I can do to give back :slightly_smiling_face:

that worked for me. i think you might be making a simple mistake.

i did not use any AWS_ variables

here is a way to use environment variables with rclone.
does not use rclone config file, does not use AWS_ env variables.

RCLONE_CONFIG_WASABI_TYPE=s3
RCLONE_CONFIG_WASABI_ACCESS_KEY_ID=xxx
RCLONE_CONFIG_WASABI_SECRET_ACCESS_KEY=xxx
RCLONE_CONFIG_WASABI_ENDPOINT=s3.us-east-2.wasabisys.com

rclone lsd wasabi:

i did not use any AWS_ variables

I meant in your secret files. After reading through different posts and the documentation, I was a bit confused and didn't know if I should use aws_access_key_id or access_key_id.

here is a way to use environment variables with rclone.

That's exactly what I did in the end and what worked for me :slight_smile:

my working example used aws_access_key_id inside the shared credentials file.

access_key_id, without the prefix, is what would be used inside a rclone config file, or as part of a rclone connection string.
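
for example - untested sketch, placeholder keys - a connection string version might look like:

rclone lsd ":s3,provider=Wasabi,access_key_id=xxx,secret_access_key=xxx,endpoint=s3.us-east-2.wasabisys.com:"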

I see, thanks for clearing that up.
