What is the problem you are having with rclone?
Concurrent rclone processes writing to the same S3 bucket interfere with each other and fail with HTTP 409 errors. I don't see similar errors when doing the same thing with other AWS command-line tools.
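For comparison, this is roughly what I mean by other AWS command-line tools: the same files pushed with the official AWS CLI at the same concurrency complete without 409s. The invocation below is only a sketch of that comparison, not the exact command I ran:
ls | xargs -I {} -P 100 aws s3 cp {} s3://BUCKET/{}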
Run the command 'rclone version' and share the full output of the command.
$ rclone version
rclone v1.59.0-DEV
- os/version: redhat 8.6 (64 bit)
- os/kernel: 4.18.0-372.26.1.el8_6.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.18.3
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)
This example is contrived to be easy to reproduce; I understand that a single rclone process could copy 100 files that live in one directory. My actual use case spawns rclone subprocesses to copy files that don't necessarily share a parent directory (a sketch of that shape follows the repro commands below).
for i in {0..99}; do dd if=/dev/urandom of=$(openssl rand -hex 12) bs=1024 count=1024; done
ls | xargs -I {} -P 100 rclone --ignore-checksum --no-check-dest --config PATH-TO-CONFIG copyto {} s3://BUCKET/{}
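For context, the real (non-contrived) workload looks more like the sketch below; the paths and object names are made up and only illustrate why the sources don't share a parent directory:
rclone --ignore-checksum --no-check-dest --config PATH-TO-CONFIG copyto /data/app1/output.bin s3://BUCKET/app1/output.bin &
rclone --ignore-checksum --no-check-dest --config PATH-TO-CONFIG copyto /var/log/app2/events.log s3://BUCKET/app2/events.log &
wait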
The rclone config contents with secrets removed.
[s3]
type = s3
provider = AWS
env_auth = true
region = us-east-1
acl = private
server_side_encryption = aws:kms
sse_kms_key_id = arn:aws:kms:us-east-1:.....
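For completeness, env_auth = true means rclone takes AWS credentials from the runtime environment (environment variables or an instance/role profile). In the environment-variable form that would look roughly like the following; the variable names are the standard AWS ones and the values are of course redacted:
export AWS_ACCESS_KEY_ID=REDACTED
export AWS_SECRET_ACCESS_KEY=REDACTED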
A log from the command with the -vv flag
2022/10/20 22:46:29 ERROR : 1e36fd15ba88c4c1f47b71a0: Failed to copy: OperationA... (truncated; full -vv log is on Pastebin)