What is your rclone version (output from `rclone version`)?
Which cloud storage system are you using? (eg Google Drive)
Amazon S3 (AWS)
What is the problem you are having with rclone?
- Our program is attempting to upload many files to S3 using rclone's remote control (RC) API. An example command:
```
rc: "operations/copyfile": with parameters map[_group:<nil> dstFs:REMOTE-NAME:bucket-name/2654c16b1cd044e18de0ac45015ed37c/file/path/ dstRemote:File.txt srcFs:/path/to/input/ srcRemote:File.txt]
```
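The copy above goes over rclone's HTTP remote-control interface; a minimal sketch of an equivalent call, assuming the daemon listens on the default `localhost:5572` (the `rc_call` helper and the exact paths are illustrative, mirroring the logged parameters):

```python
import json
import urllib.request

RC_URL = "http://127.0.0.1:5572"  # rclone's default RC address; adjust as needed

def rc_call(command: str, params: dict) -> dict:
    """POST a JSON body to an rclone RC endpoint and return the decoded reply."""
    req = urllib.request.Request(
        f"{RC_URL}/{command}",
        data=json.dumps(params).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())

# Parameters mirroring the logged operations/copyfile request.
copy_params = {
    "srcFs": "/path/to/input/",
    "srcRemote": "File.txt",
    "dstFs": "REMOTE-NAME:bucket-name/2654c16b1cd044e18de0ac45015ed37c/file/path/",
    "dstRemote": "File.txt",
}
# rc_call("operations/copyfile", copy_params)  # requires a running rclone daemon
```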
- After refreshing the AWS credentials, `config/update` was called to update them in rclone:
```
rc: "config/update": with parameters map[name:REMOTE-NAME obscure:true parameters:map[access_key_id:ASIAXXXXXXXXMOR env_auth:false provider:AWS region:us-east-1 secret_access_key:Redacted session_token:Redacted]]
```
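For reference, that credential refresh corresponds to a `config/update` request shaped roughly like the following sketch (the RC address is rclone's default, and the credential values are the redacted placeholders from the log, not real secrets):

```python
import json
import urllib.request

# Placeholder values mirroring the logged config/update call.
update_params = {
    "name": "REMOTE-NAME",
    "obscure": True,  # ask rclone to obscure the stored secrets
    "parameters": {
        "provider": "AWS",
        "env_auth": False,
        "region": "us-east-1",
        "access_key_id": "ASIAXXXXXXXXMOR",
        "secret_access_key": "Redacted",
        "session_token": "Redacted",
    },
}

req = urllib.request.Request(
    "http://127.0.0.1:5572/config/update",  # default RC address; adjust as needed
    data=json.dumps(update_params).encode("utf-8"),
    headers={"Content-Type": "application/json"},
)
# urllib.request.urlopen(req)  # requires a running rclone daemon with --rc enabled
```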
- A file then failed to copy with "The AWS Access Key Id you provided does not exist in our records.", reported against the original AWSAccessKeyId:
```
File.txt: Failed to copy: s3 upload: 403 Forbidden: <?xml version="1.0" encoding="UTF-8"?> <Error><Code>InvalidAccessKeyId</Code><Message>The AWS Access Key Id you provided does not exist in our records.</Message><AWSAccessKeyId>ASIAXXXXXXXXRWO</AWSAccessKeyId><RequestId>DC3E05DCE4E23787</RequestId><HostId>2HZvmnLKSlUYzU4CmLJNa/2WaeQtaf2FdXWZPBhinOM/FhKjTUB1gzFNGbEEQqY05TTPgyZ5POk=</HostId></Error>
ERROR : rc: "operations/copyfile": error: s3 upload: 403 Forbidden: <?xml version="1.0" encoding="UTF-8"?>
```
- Every subsequent attempt to upload that same file returns the same error with the original AWSAccessKeyId (ASIAXXXXXXXXRWO). We have confirmed that both the config file and `config/dump` show the new credentials.
- Other files upload successfully using the updated credentials.
- The only way to get the "stuck" file to upload is to restart the rclone instance.
- Does rclone "cache" the AWS credentials for a single file, even after the upload attempt completes (successfully or not)?
- Is there any setting/flag/method to ensure rclone does not remember files after an upload attempt?