How to fail fast if authorization info is not provided

What is the problem you are having with rclone?

We are running rclone in some scripts. If users haven't configured their AWS secrets, rclone hangs. We would like rclone to error out at that point, but instead it seems to make a call to PUT /latest/api/token. Is there a way to have the command fail immediately instead?

Run the command 'rclone version' and share the full output of the command.

rclone v1.57.0

  • os/version: darwin 12.1 (64 bit)
  • os/kernel: 21.2.0 (arm64)
  • os/type: darwin
  • os/arch: arm64
  • go/version: go1.17.2
  • go/linking: dynamic
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

Amazon S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --dump headers copy s3:foo/bar /tmp

The rclone config contents with secrets removed.

[s3]
type = s3
provider = AWS
env_auth = True
region = us-east-1
no_check_bucket = True

A log from the command with the -vv flag

2022/01/27 11:59:42 NOTICE: Automatically setting -vv as --dump is enabled
2022/01/27 11:59:42 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "--dump" "headers" "copy" "s3:foo/bar" "/tmp"]
2022/01/27 11:59:42 DEBUG : Creating backend with remote "s3:foo/bar"
2022/01/27 11:59:42 DEBUG : Using config file from "/tmp/rclone.conf"
2022/01/27 11:59:42 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2022/01/27 11:59:42 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2022/01/27 11:59:42 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/01/27 11:59:42 DEBUG : HTTP REQUEST (req 0x140005f8d00)
2022/01/27 11:59:42 DEBUG : PUT /latest/api/token HTTP/1.1
Host: [snipped]
User-Agent: rclone/v1.57.0
Content-Length: 0
X-Aws-Ec2-Metadata-Token-Ttl-Seconds: 21600
Accept-Encoding: gzip

2022/01/27 11:59:42 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

hello and welcome to the forum,

in my testing, rclone does not hang.

rclone config show s3: 
--------------------
[s3]
type = s3
provider = AWS
env_auth = True
region = us-east-1
no_check_bucket = True
--------------------

rclone copy s3:foo/bar ./temp -vv --retries=1 --low-level-retries=1 
2022/01/27 15:44:25 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "copy" "s3:foo/bar" "./temp" "-vv" "--retries=1" "--low-level-retries=1"]
2022/01/27 15:44:25 DEBUG : Creating backend with remote "s3:foo/bar"
2022/01/27 15:44:25 DEBUG : Using config file from "C:\\data\\rclone\\scripts\\rclone.conf"
2022/01/27 15:44:25 DEBUG : Creating backend with remote "./temp"
2022/01/27 15:44:25 DEBUG : fs cache: renaming cache item "./temp" to be canonical "//?/C:/data/rclone/scripts/temp"
2022/01/27 15:44:25 ERROR : S3 bucket foo path bar: error reading source root directory: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
	status code: 403, request id: YNSBE65FVFDQZMDM, host id: 1QeH6rp4MPYMVqKOWKKEeF7isiVG6P2H+dybRCezFV+J/7WJwWyRQ79x3YxOxpPTAnLOpmQUF5w=
2022/01/27 15:44:25 DEBUG : Local file system at //?/C:/data/rclone/scripts/temp: Waiting for checks to finish
2022/01/27 15:44:25 DEBUG : Local file system at //?/C:/data/rclone/scripts/temp: Waiting for transfers to finish
2022/01/27 15:44:25 ERROR : Attempt 1/1 failed with 1 errors and: InvalidAccessKeyId: The AWS Access Key Id you provided does not exist in our records.
	status code: 403, request id: YNSBE65FVFDQZMDM, host id: 1QeH6rp4MPYMVqKOWKKEeF7isiVG6P2H+dybRCezFV+J/7WJwWyRQ79x3YxOxpPTAnLOpmQUF5w=

Thanks for the quick reply! After more experimentation, I get varying behavior. Looking at the dumped headers, sometimes it hangs after one HTTP request, sometimes it makes several requests and then hangs, and sometimes it fails. Maybe the behavior depends on the remote server. Is there a way to skip the HTTP request entirely and error out?

if you ask rclone to do anything, then rclone has to make a request.

would need to see the dump log for that. perhaps wait for six minutes and see what happens.
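
in the meantime, you can try capping how long rclone waits. a rough sketch, assuming the hang is a network timeout (--contimeout and --timeout are rclone's global connect/idle timeouts, defaults 1m and 5m):

# bound the wait: short timeouts, single attempt, no low-level retries
rclone copy s3:foo/bar /tmp -vv \
  --contimeout=5s \
  --timeout=10s \
  --retries=1 \
  --low-level-retries=1

whether those timeouts reach the aws credential probe depends on whether that request goes through rclone's own http client, so treat it as something to test, not a guaranteed fix.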

Hi,
The command finally failed after about an hour: it started at 14:42:01 and finished at 15:41:11, and each request seems to take about 1 minute. It also seems to be trying to connect to 169.254.169.254. If I understand correctly, that address is only valid when running inside AWS?

2022/01/27 14:42:01 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/01/27 14:42:01 DEBUG : HTTP REQUEST (req 0x1400068f800)
2022/01/27 14:42:01 DEBUG : PUT /latest/api/token HTTP/1.1
Host: 169.254.169.254
User-Agent: rclone/v1.57.0
Content-Length: 0
X-Aws-Ec2-Metadata-Token-Ttl-Seconds: 21600
Accept-Encoding: gzip

2022/01/27 14:42:01 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>

until finally

2022/01/27 15:40:11 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2022/01/27 15:40:37 INFO  : 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Elapsed time:     58m35.9s

2022/01/27 15:41:11 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/01/27 15:41:11 DEBUG : HTTP RESPONSE (req 0x1400063c100)
2022/01/27 15:41:11 DEBUG : Error: dial tcp 169.254.169.254:80: i/o timeout
2022/01/27 15:41:11 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2022/01/27 15:41:11 ERROR : S3 bucket [snipped]: error reading source root directory: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2022/01/27 15:41:11 DEBUG : Local file system at /tmp: Waiting for checks to finish
2022/01/27 15:41:11 DEBUG : Local file system at /tmp: Waiting for transfers to finish
2022/01/27 15:41:11 ERROR : Attempt 3/3 failed with 1 errors and: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors
2022/01/27 15:41:11 INFO  : 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:     59m10.2s

2022/01/27 15:41:11 DEBUG : 4 go routines active
2022/01/27 15:41:11 Failed to copy: NoCredentialProviders: no valid providers in chain. Deprecated.
	For verbose messaging see aws.Config.CredentialsChainVerboseErrors

in the end, is there a reason to run rclone without client id/secrets and not expect problems?
rclone takes what you give it and contacts aws, then waits for the aws reply.

however, the full address is 169.254.169.254:80
port 80 is http, not https, which is strange

We definitely expect a problem. But if someone hasn't configured their setup properly, it would be much nicer to get an immediate, clear error message than to have a script hang for an hour and then have to hunt down the answer. Is there a way to get rclone to make a small number of authentication attempts with a quick timeout?

How are you expecting users to configure their secrets?

Using env_auth gives rclone permission to try all the auth methods it can from the environment. These are:

  • Export the following environment variables before running rclone:
    • Access Key ID: AWS_ACCESS_KEY_ID or AWS_ACCESS_KEY
    • Secret Access Key: AWS_SECRET_ACCESS_KEY or AWS_SECRET_KEY
    • Session Token: AWS_SESSION_TOKEN (optional)
  • Or, use a named profile:
    • Profile files are standard files used by AWS CLI tools
    • By default it will use the credentials file in your home directory (e.g. ~/.aws/credentials on unix-based systems) and the "default" profile; to change this, set these environment variables:
      • AWS_SHARED_CREDENTIALS_FILE to control which file.
      • AWS_PROFILE to control which profile to use.
  • Or, run rclone in an ECS task with an IAM role (AWS only).
  • Or, run rclone on an EC2 instance with an IAM role (AWS only).
  • Or, run rclone in an EKS pod with an IAM role that is associated with a service account (AWS only).

Now these are run down in order and it looks like the EC2 one is hanging in your environment.
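
If your scripts only ever expect the environment-variable or profile methods, you could also fail fast in the script itself before calling rclone. A minimal sketch, assuming credentials come from either the environment or the default shared credentials file (the variable names and path below are just the usual defaults):

#!/bin/sh
# refuse to run rclone if no obvious AWS credentials are configured
creds_file="${AWS_SHARED_CREDENTIALS_FILE:-$HOME/.aws/credentials}"
if [ -z "$AWS_ACCESS_KEY_ID" ] && [ -z "$AWS_ACCESS_KEY" ] && [ ! -f "$creds_file" ]; then
    echo "error: no AWS credentials found (set AWS_ACCESS_KEY_ID/AWS_SECRET_ACCESS_KEY or create $creds_file)" >&2
    exit 1
fi
rclone copy s3:foo/bar /tmp

That obviously will not detect the IAM-role methods, but those are not expected to work outside AWS anyway.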

Can you put some default (incorrect) config in?
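One way to do that (a sketch only, assuming the chain order above: environment variables, then the shared credentials file, then the instance roles) is to put deliberately invalid keys into a "default" profile. Users who export real keys still win, because environment variables are checked first, while anyone with nothing configured gets an immediate 403 (like the InvalidAccessKeyId error in the test earlier) instead of an hour of metadata-service timeouts.

# ~/.aws/credentials - fallback profile with deliberately invalid keys
[default]
aws_access_key_id = INVALID
aws_secret_access_key = INVALID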

Can you make 169.254.169.254 refuse all connections in your network? It's an RFC3927 address, not a globally routable internet address.
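
On Linux that could be a reject route or a firewall rule, for example (a sketch only; on macOS a reject route via route(8) or a pf rule would be the equivalent):

# make connections to the metadata address fail immediately instead of timing out
sudo ip route add unreachable 169.254.169.254/32
# or with iptables
sudo iptables -I OUTPUT -d 169.254.169.254 -j REJECT

With the connection rejected outright, the credentials chain should fail in seconds rather than minutes.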
