Some background...
Rclone has 3 ways of uploading files to s3:

1) a single part upload
2) a multipart upload controlled by the s3 backend (used if size > `--s3-upload-cutoff`)
3) a multipart upload controlled by the rclone code (used if size > `--multi-thread-cutoff`) - this can be fully concurrent, unlike 2) which is sequential on the source read but concurrent on the writes.
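For illustration, here is a minimal sketch of a copy with both cutoffs pinned to their documented defaults; the local path and `remote:bucket` name are placeholders:

```
# Pin the two cutoffs explicitly (these are the documented defaults).
# Per the list above:
#   size <= 200M        -> scenario 1 (single part upload)
#   200M < size <= 256M -> scenario 2 (s3 backend multipart)
#   size > 256M         -> scenario 3 (rclone code multipart)
rclone copy /path/to/files remote:bucket \
  --s3-upload-cutoff 200M \
  --multi-thread-cutoff 256M
```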
It looks like setting MD5s is broken in scenario 3.
Hmm, yes, rclone is in transition from this being controlled by the backend to this being controlled by the rclone core, so you will need to set `--multi-thread-streams 0` to disable scenario 3 above.
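Something like this, for example (path and remote name are placeholders):

```
rclone copy /path/to/files remote:bucket --multi-thread-streams 0
```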
I think this is a bug. Can you open a new issue on GitHub about this please?
This is very similar to #7424, but I think your problem is to do with the ChunkedWriter in the s3 backend not applying the MD5 metadata from the source, so it should be much easier to fix.
Files between `--s3-upload-cutoff` (default 200M) and `--multi-thread-cutoff` (default 256M) will use the old uploading mechanism (scenario 2).
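So, as a concrete example, at the defaults a 230M file should take scenario 2 and keep its MD5. If you want to check whether an uploaded object kept its MD5 you could do something like this (the object name is a placeholder); an empty hash in the output would suggest the MD5 wasn't set:

```
rclone md5sum remote:bucket/bigfile
```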
This suggests a workaround for you though: raise `--multi-thread-cutoff` to something very large, say `--multi-thread-cutoff 1P`, and this will disable scenario 3 uploads. Or set `--multi-thread-streams 0`, which should have the same effect I think.
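For example (same placeholder remote as above):

```
rclone copy /path/to/files remote:bucket --multi-thread-cutoff 1P
```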