How to set a checksum cutoff for custom S3-compatible storage?

By the definition of the S3 ETag, an object's ETag may or may not be an MD5 checksum; it depends on the object size.

When I copy a big object from a custom S3-compatible object store, a corrupted MD5 hash is reported.

I know that --ignore-checksum works, but is there some way to make checksum verification apply only up to a cutoff, such as a size limit? upload_cutoff does not work.

If it is reporting a corrupted MD5 then the storage isn't 100% compliant, or it might be that the objects are encrypted. AWS always returns non-MD5 ETags with a numeric suffix (e.g. -0123), which makes them invalid MD5s, so rclone ignores them.
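
For illustration (hypothetical values): a single-part upload gets a plain MD5 as its ETag, while a multipart upload gets an MD5-of-the-part-MD5s with a part-count suffix:

    ETag: "5d41402abc4b2a76b9719d911017c592"      single part: 32 hex digits, usable as the object's MD5
    ETag: "d2c6475fdcd4e9b8a6d7021bf1e1873c-3"    multipart (3 parts): the -3 suffix makes it a non-MD5, so rclone ignores it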

What storage platform are you using?

Not at the moment.

Are you saying that some MD5s are OK and some aren't?

You could copy the OK ones with --max-size 4M (or whatever number is correct), then the bad ones in a second pass with --min-size 4M --ignore-checksum.
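
As a rough sketch of the two passes (src: and dst: are placeholder remote names; pick the size that matches your multipart threshold):

    rclone copy --max-size 4M src:bucket dst:bucket
    rclone copy --min-size 4M --ignore-checksum src:bucket dst:bucket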

Thanks!

We developed our own S3-compatible object store and test it using rclone check. This is a compatibility bug: our system doesn't return a non-MD5 ETag for large objects. We will fix it.

If you can just make the ETag not look like an md5sum (32 hex digits), then rclone will ignore it.
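
As a minimal sketch (not rclone code, and not your system's actual implementation), one option is the AWS-style convention for multipart objects: MD5 over the concatenated binary part MD5s, plus a "-<part count>" suffix, which is no longer 32 hex digits:

    package main

    import (
        "crypto/md5"
        "encoding/hex"
        "fmt"
    )

    // compositeETag follows the AWS multipart convention: MD5 over the
    // concatenated binary MD5 digests of the parts, then a "-<parts>" suffix.
    // The suffix means the value is not 32 hex digits, so rclone will not
    // treat it as the object's MD5.
    func compositeETag(parts [][]byte) string {
        var concat []byte
        for _, part := range parts {
            sum := md5.Sum(part) // 16-byte digest of this part
            concat = append(concat, sum[:]...)
        }
        final := md5.Sum(concat)
        return hex.EncodeToString(final[:]) + "-" + fmt.Sprint(len(parts))
    }

    func main() {
        // Two hypothetical parts of a multipart upload.
        parts := [][]byte{
            []byte("part one data"),
            []byte("part two data"),
        }
        fmt.Println(compositeETag(parts)) // ends in "-2", so it does not look like a plain MD5
    }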
