Include S3 ETag in lsf

What is the problem you are having with rclone?

I frequently use commands like rclone lsf MyRemote: -R --format psimh --csv > inventory.csv to generate an inventory of the files on a given remote. For some remotes, e.g. Box, I know I need to specify the hash type with --hash SHA1. I also know that S3 and similar services expose an MD5 hash as the ETag (except for multipart uploads, where it's more complicated). Is there any way to have the lsf command include the ETag value?
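For a single-part upload the ETag should just be the hex MD5 of the object body, so as a sanity check I can compare a local MD5 against what rclone reports (a sketch; file.bin and the remote path are placeholders):

```shell
# Sketch: for single-part S3 uploads, ETag == hex MD5 of the content,
# so rclone's reported MD5 should match the ETag.
# file.bin is a placeholder; on macOS use "md5 -q" instead of md5sum.
local_md5=$(md5sum file.bin | awk '{print $1}')
remote_md5=$(rclone md5sum MyS3Remote:file.bin | awk '{print $1}')
[ "$local_md5" = "$remote_md5" ] && echo "MD5 matches the expected ETag"
```

This breaks down for multipart uploads, where the ETag is an MD5-of-MD5s with a part-count suffix rather than a plain content hash.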

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.1
- os/version: darwin 13.5.1 (64 bit)
- os/kernel: 22.6.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.24.0
- go/linking: dynamic
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

AWS S3 and Wasabi

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone lsf MyS3Remote: -R --format psimh --csv > inventory.csv

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[MyS3Remote]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX

One possible workaround is to run the listing with --dump headers and grep the ETag out of the debug output.
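A rough sketch of that workaround (the header dump goes to stderr, and the exact line format may vary by rclone version, so treat the grep pattern as an assumption):

```shell
# Sketch: dump HTTP response headers while listing and pull out ETag values.
# Assumes the dump contains raw header lines like: Etag: "d41d8cd98f00b204e9800998ecf8427e"
rclone lsf MyS3Remote: -R --dump headers 2>&1 \
  | grep -i '^etag:' \
  | tr -d '"' \
  > etags.txt
```

Note this gives you a bare list of ETags in request order; correlating them back to paths for a proper inventory would take more parsing of the surrounding debug output.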


Have you read the documentation on this?