What is the problem you are having with rclone?
I frequently use commands like `rclone lsf MyRemote: -R --format psimh --csv > inventory.csv` to generate an inventory of the files on a given remote. For some remotes, e.g. Box, I know I need to specify the hash type with `--hash SHA1`. I also know that S3 and similar services expose an MD5 hash as the ETag (except for multipart uploads, where it's more complicated). Is there any way to have the `lsf` command include the ETag value?
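For Box, for instance, the command ends up looking something like this (the remote name is just an example):

```bash
# example only: select SHA-1 as the hash type for a remote like Box,
# which doesn't offer MD5 hashes
rclone lsf BoxRemote: -R --format psimh --hash SHA1 --csv > inventory.csv
```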
Run the command 'rclone version' and share the full output of the command.
rclone v1.69.1
- os/version: darwin 13.5.1 (64 bit)
- os/kernel: 22.6.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.24.0
- go/linking: dynamic
- go/tags: cmount
Which cloud storage system are you using? (eg Google Drive)
AWS S3, and Wasabi
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone lsf MyS3Remote: -R --format psimh --csv > inventory.csv
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[MyS3Remote]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
asdffdsa (jojothehumanmonkey)
June 5, 2025, 6:46pm
One possible workaround is to use `--dump=headers` and grep the ETag.
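something along these lines might work (untested sketch, and the remote name is just an example; for listing requests the ETag may show up in the response body rather than the headers, so dumping bodies too might help):

```bash
# untested sketch: dump the raw HTTP traffic to stderr and pull out ETag values
rclone lsf MyS3Remote: -R --dump=headers,bodies 2>&1 | grep -i etag
```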
Have you read this issue?
rclone GitHub issue (opened 21 Nov 2023; labels: enhancement, Remote: S3):
#### What is your current rclone version (output from `rclone version`)?
v1.64.…
#### What problem are you trying to solve?
AWS S3 [supports](https://aws.amazon.com/blogs/aws/new-additional-checksum-algorithms-for-amazon-s3/) saving additional object checksums in the object metadata. It supports checksums such as SHA-1, SHA-256, CRC-32, and CRC-32C, which are calculated on the client side and passed in the request headers.
However, *rclone* doesn't support this S3 functionality, which limits *rclone* usage when there is a requirement to have S3 checksums stored with the objects.
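For illustration, this is roughly how the feature is exposed by the AWS CLI (bucket and key names below are placeholders, not part of this proposal):

```bash
# placeholders throughout: declare a checksum algorithm on upload so that S3
# stores the additional checksum alongside the object
aws s3api put-object \
    --bucket example-bucket \
    --key example.txt \
    --body example.txt \
    --checksum-algorithm SHA256

# the stored checksum can later be read back without downloading the object
aws s3api get-object-attributes \
    --bucket example-bucket \
    --key example.txt \
    --object-attributes Checksum
```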
#### How do you think rclone should be changed to solve that?
*rclone* should support an S3 backend configuration option that enables additional checksum calculation. Example configuration:
```ini
[some-name]
type = s3
provider = AWS
...
additional_checksum_algorithm = SHA256 # or CRC32 or CRC32C or SHA1
```
or a CLI argument like:
```bash
rclone sync --s3-additional-checksum-algorithm=SHA256 ...
```
Setting `additional_checksum_algorithm` should cause the checksum to be computed on the client side and passed in the S3 PUT request headers. This can be achieved by setting the [ChecksumAlgorithm](https://pkg.go.dev/github.com/aws/aws-sdk-go-v2/service/s3#PutObjectInput) parameter while uploading the object via `PutObjectRequest`.
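For reference, the value passed in e.g. the `x-amz-checksum-sha256` request header is the base64-encoded raw SHA-256 digest of the object body, which can be reproduced locally (the file name is a placeholder):

```bash
# rough illustration: compute the value S3 expects in x-amz-checksum-sha256
openssl dgst -sha256 -binary example.txt | base64
```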
**NOTE**: using S3 checksums for data syncing purposes is out of scope for this proposal.
If the community doesn't have objections to the proposed changes, I'm ready to start working on this and open a PR.