Can't make rclone work with Yandex Cloud Object Storage (S3-compatible)

What is the problem you are having with rclone?

I want to use rclone to sync files to Yandex Cloud Object Storage. It is documented as compatible with the AWS CLI, needing only an endpoint flag: AWS Command Line Interface (AWS CLI) | Yandex.Cloud - Documentation

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.0
- os/version: alpine 3.15.1 (64 bit)
- os/kernel: 5.10.25-linuxkit (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.18
- go/linking: static
- go/tags: none

The command you were trying to run

docker run --rm -ti -v $(pwd)/credentials:/root/.aws/credentials:ro -v $(pwd):/data:ro rclone/rclone --s3-endpoint=https://storage.yandexcloud.net --s3-provider=Other -vv sync -i . :s3:mybucket

The rclone config contents with secrets removed.

$ cat credentials 
[default]
aws_access_key_id=...
aws_secret_access_key=...

A log from the command with the -vv flag

2022/04/06 22:09:13 DEBUG : rclone: Version "v1.58.0" starting with parameters ["rclone" "--s3-endpoint=https://storage.yandexcloud.net" "--s3-provider=Other" "-vv" "sync" "-i" "." ":s3:mybucket"]
2022/04/06 22:09:13 DEBUG : Creating backend with remote "."
2022/04/06 22:09:13 NOTICE: Config file "/config/rclone/rclone.conf" not found - using defaults
2022/04/06 22:09:13 DEBUG : fs cache: renaming cache item "." to be canonical "/data"
2022/04/06 22:09:13 DEBUG : Creating backend with remote ":s3:mybucket"
2022/04/06 22:09:13 DEBUG : :s3: detected overridden config - adding "{mH7HS}" suffix to name
2022/04/06 22:09:13 DEBUG : fs cache: renaming cache item ":s3:mybucket" to be canonical ":s3{mH7HS}:mybucket"
2022/04/06 22:09:13 ERROR : S3 bucket mybucket: error reading destination root directory: AccessDenied: Access Denied
	status code: 403, request id: 6ba4a68fb90865e4, host id: 

...

2022/04/06 22:09:13 ERROR : Attempt 3/3 failed with 1 errors and: AccessDenied: Access Denied
	status code: 403, request id: 86ba6f1266d04359, host id: 
2022/04/06 22:09:13 NOTICE: 
Transferred:   	          0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (retrying may help)
Elapsed time:         0.1s

hello and welcome to the forum,

403 is a permissions error from yandex, not a rclone error.

and it might be easier to test with rclone ls

OK, I see:

$ docker run --rm -ti -v $(pwd)/credentials:/root/.aws/credentials:ro -v $(pwd):/data:ro rclone/rclone --s3-endpoint=https://storage.yandexcloud.net --s3-provider=Other ls :s3:mybucket
2022/04/06 23:04:18 NOTICE: Config file "/config/rclone/rclone.conf" not found - using defaults
2022/04/06 23:04:18 Failed to ls: AccessDenied: Access Denied
	status code: 403, request id: 38b5dfcca0aa6e31, host id:

The credentials are a static key for a Service Account that has READ and WRITE ACLs on the bucket.

which other tools have you tested with: aws s3 cli, s3cmd, or something else?

fwiw, for testing, i would try to simplify this.
--- run rclone on the command line, no docker
--- create a rclone remote, no credentials files.

Just checked that credentials are good:

$ docker run --rm -ti -v $(pwd)/credentials:/root/.aws/credentials:ro amazon/aws-cli s3 --endpoint-url=https://storage.yandexcloud.net ls s3://mybucket
                           PRE temp/

Also, adding // before the bucket name in the rclone ls command (by analogy with the aws cli's s3:// syntax) didn't help.

run rclone on the command line, no docker

$ AWS_SHARED_CREDENTIALS_FILE=$(pwd)/credentials rclone --s3-endpoint=https://storage.yandexcloud.net --s3-provider=Other ls :s3:mybucket
2022/04/07 02:20:07 NOTICE: Config file "/Users/nakilon/.config/rclone/rclone.conf" not found - using defaults
2022/04/07 02:20:08 Failed to ls: AccessDenied: Access Denied
	status code: 403, request id: 2ac94ae94c3c15b9, host id: 

create a rclone remote, no credentials files

I need the credentials file to keep the automation I'm building simple.

yes, i understand that.

for testing, get it working using rclone on the command line, with a rclone remote.
that is my advice, not sure what else to offer.

and this is what i use
rclone ls :s3,endpoint=s3.us-east-2.wasabisys.com,provider=wasabi:zork
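
adapted to the yandex endpoint, that pattern would be something like this (untested on yandex; the bucket name is yours):

rclone ls :s3,endpoint=storage.yandexcloud.net,provider=Other:mybucket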

I went through the config dialog, resulting in the following:

--------------------
[yandex]
type = s3
provider = Other
access_key_id = ...
secret_access_key = ...
endpoint = https://storage.yandexcloud.net
--------------------
$ rclone lsf yandex:mybucket
temp/

What was the issue then? It feels like rclone just does not look at ~/.aws/credentials, contrary to what the docs say. And AWS_SHARED_CREDENTIALS_FILE does not work for me either.

Using the remote parameters syntax results in the same:

$ AWS_SHARED_CREDENTIALS_FILE=$(pwd)/credentials rclone ls :s3,endpoint=storage.yandexcloud.net,provider=Other:mybucket
2022/04/07 02:41:53 Failed to ls: AccessDenied: Access Denied
	status code: 403, request id: c729c4e0045c9d01, host id:

well, good, now we know this is not a permissions issue on the bucket.

what is the full path for pwd?

$ pwd
/Users/nakilon/_/REPOS/git-to-os/Yandex

run the commands with -vv for debug output.

$ AWS_SHARED_CREDENTIALS_FILE=$(pwd)/credentials rclone -vv ls :s3,endpoint=storage.yandexcloud.net,provider=Other:mybucket
2022/04/07 02:49:10 DEBUG : rclone: Version "v1.56.2" starting with parameters ["rclone" "-vv" "ls" ":s3,endpoint=storage.yandexcloud.net,provider=Other:mybucket"]
2022/04/07 02:49:10 DEBUG : Creating backend with remote ":s3,endpoint=storage.yandexcloud.net,provider=Other:mybucket"
2022/04/07 02:49:10 DEBUG : :s3: detected overridden config - adding "{4c8Vb}" suffix to name
2022/04/07 02:49:10 DEBUG : Using config file from "/Users/nakilon/.config/rclone/rclone.conf"
2022/04/07 02:49:10 DEBUG : fs cache: renaming cache item ":s3,endpoint=storage.yandexcloud.net,provider=Other:mybucket" to be canonical ":s3{4c8Vb}:mybucket"
2022/04/07 02:49:10 DEBUG : 5 go routines active
2022/04/07 02:49:10 Failed to ls: AccessDenied: Access Denied
	status code: 403, request id: 8bec2e3962bbb4b2, host id: 

remove $(pwd) and hard-code the full path

The same output.

this worked for me

2022/04/06 20:37:54 DEBUG : rclone: Version "v1.57.0" starting with parameters ["rclone" "lsd" ":s3,env_auth,shared_credentials_file=./credentials,endpoint=s3.us-east-2.wasabisys.com,provider=wasabi:" "-vv"]
2022/04/06 20:37:54 DEBUG : Creating backend with remote ":s3,env_auth,shared_credentials_file=./credentials,endpoint=s3.us-east-2.wasabisys.com,provider=wasabi:"
2022/04/06 20:37:54 DEBUG : :s3: detected overridden config - adding "{f6KNq}" suffix to name
2022/04/06 20:37:54 DEBUG : Using config file from "/home/user01/.config/rclone/rclone.conf"
2022/04/06 20:37:54 DEBUG : fs cache: renaming cache item ":s3,env_auth,shared_credentials_file=./credentials,endpoint=s3.us-east-2.wasabisys.com,provider=wasabi:" to be canonical ":s3{f6KNq}:"
          -1 2022-02-09 09:49:42        -1 zork

Nice, it works now. Probably RCLONE_S3_ENV_AUTH=true would do too.
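
For reference, the missing piece was env_auth; the same fix can live in the config file instead of the connection string. A sketch (the remote name and credentials path here are illustrative, not what the thread used verbatim):

--------------------
[yandex]
type = s3
provider = Other
env_auth = true
shared_credentials_file = ~/.aws/credentials
endpoint = https://storage.yandexcloud.net
--------------------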

(weird that the docs say "deleting any excess files in the bucket" about rclone sync, but the directory "temp" remained there)

good, and yeah that should work.

and where, exactly, is the quote from?

and without the exact command and debug log, i have no idea.

Looks like sync deletes a folder missing from the source only if that folder contained files.

sorry, not understanding what you wrote. nonexistent folder?
no commands? no debug log? no idea?