"Etag differ" error for files bigger than 200 MB


What is the problem you are having with rclone?

I use rclone sync to copy 8 TB of data, a mix of still images and video clips.
rclone works almost perfectly, but there are some error logs for files bigger than 200 MB.
When I checked the copied files, they were fine - I could play the video clips.

Why does rclone output error logs for files bigger than 200 MB?
What can I do to get rid of these errors?

Failed to copy: multipart upload corrupted: Etag differ: expecting 7d1805e5f9f9efc2841f9382b0f7d284-45 but got 37e9543627c65667469d511886d9c2d3-45

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.2

  • os/version: ubuntu 16.04 (64 bit)
  • os/kernel: 4.15.0-142-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.21.3
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync /mnt_rg/2023년 odc_s3:data/2023 -P --track-renames --log-file=/tmp/rclone.log

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[odc_s3]
endpoint = https://s3.s-cloud.com
type = s3
provider = Minio
basic_auth = true
access_key_id = XXX
secret_access_key = XXX
env_auth = false
region = us-east-1

A log from the command that you were trying to run with the -vv flag


not sure that is a valid flag?

we cannot see into your machine.
please pick a single file over 200 MiB, try to copy it and then post the full debug output.
for a deeper look, add -vv --dump=headers
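
for example, something like this (just a sketch - the file name and destination folder are placeholders, so substitute a real file over 200 MiB from your source):

$ rclone copy /mnt_rg/2023년/some_big_file.MOV odc_s3:data/debug-test -vv --dump=headers --log-file=/tmp/rclone-debug.log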

and is there a problem with s-cloud.com?

Try adding --s3-use-multipart-etag=false - you can set this in the config as use_multipart_etag = false.

I thought Minio normally supported this; if it doesn't, the defaults in rclone need changing.
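
To be concrete, either of these should work (just a sketch reusing the command and config you already posted; everything else stays the same):

$ rclone sync /mnt_rg/2023년 odc_s3:data/2023 -P --track-renames --s3-use-multipart-etag=false --log-file=/tmp/rclone.log

or add this line to the [odc_s3] section of your rclone.conf:

use_multipart_etag = false

For what it's worth, the -45 suffix on both ETags in your error suggests the object was uploaded in 45 parts; for multipart uploads the ETag is normally a hash of the part hashes rather than a plain MD5 of the file, and that composite value is what this option stops rclone from checking.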

Thank you for your interest.
The option you suggested was helpful to me. Thank you.

Now I have two types of error.

$ cat /tmp/rclone.log | grep error
2023/11/27 15:01:10 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled
2023/11/27 15:01:10 DEBUG : pacer: low level retry 2/2 (error BodyHashError: failed to compute body hashes

What causes these types of error?

These are probably networking errors, but I need to see more of the log to be sure.

2023/11/27 15:01:11 DEBUG : 6.aaa/★bbb/PV/Frame/9225.MOV: md5 = 51ca734d8bf5077043f986b141493120 OK
2023/11/27 15:01:11 INFO : 6.aaa/★bbb/PV/Frame/9225.MOV: Copied (new)
2023/11/27 15:01:11 DEBUG : 6.aaa/★bbb/PV/Frame/9226.MOV: md5 = eca6930dcfef5928ba767a2af3202e86 OK
2023/11/27 15:01:11 INFO : 6.aaa/★bbb/PV/Frame/9226.MOV: Copied (new)
2023/11/27 15:01:11 DEBUG : 6.aaa/★bbb/PV/Frame/9224.MOV: md5 = bb366833ac4d4396c893583108a147d9 OK
2023/11/27 15:01:11 INFO : 6.aaa/★bbb/PV/Frame/9224.MOV: Copied (new)
2023/11/27 15:01:11 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2023/11/27 15:01:11 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2023/11/27 15:01:11 DEBUG : pacer: low level retry 2/2 (error BodyHashError: failed to compute body hashes
caused by: context canceled)
2023/11/27 15:01:11 DEBUG : pacer: Rate limited, increasing sleep to 80ms
2023/11/27 15:01:11 DEBUG : 8.TTT/Dev/WWW/ALL_MULTI_CERT.tar.md5: multi-thread copy: chunk 487/1414 failed: multi-thread copy: failed to write chunk: failed to upload chunk 487 with 5242880 bytes: BodyHashError: failed to compute body hashes
caused by: context canceled
2023/11/27 15:01:11 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2023/11/27 15:01:11 DEBUG : pacer: Rate limited, increasing sleep to 160ms
2023/11/27 15:01:11 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2023/11/27 15:01:11 DEBUG : pacer: Rate limited, increasing sleep to 320ms
2023/11/27 15:01:12 DEBUG : pacer: low level retry 2/2 (error BodyHashError: failed to compute body hashes
caused by: context canceled)
2023/11/27 15:01:12 DEBUG : pacer: Rate limited, increasing sleep to 640ms
2023/11/27 15:01:12 DEBUG : 8.TTT/Dev/WWW/ALL_MULTI_CERT.tar.md5: multi-thread copy: chunk 488/1414 failed: multi-thread copy: failed to write chunk: failed to upload chunk 488 with 5242880 bytes: BodyHashError: failed to compute body hashes
caused by: context canceled
2023/11/27 15:01:12 DEBUG : 6.bbb/★speed/PV/Frame/dump.log: Need to transfer - No matching file found at Destination
2023/11/27 15:01:12 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2023/11/27 15:01:12 DEBUG : pacer: Rate limited, increasing sleep to 1.28s
2023/11/27 15:01:13 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled

I think there should be an ERROR before this which caused the context to be cancelled - can you find it?

This is the first error:

2023/11/27 15:01:10 DEBUG : pacer: low level retry 1/2 (error RequestCanceled: request context canceled
caused by: context canceled)
2023/11/27 15:01:10 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2023/11/27 15:01:10 DEBUG : pacer: low level retry 2/2 (error BodyHashError: failed to compute body hashes
caused by: context canceled)
2023/11/27 15:01:10 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023/11/27 15:01:10 DEBUG : 8.TTT/Dev/WWW/ALL_MULTI_CERT.tar.md5: multi-thread copy: chunk 153/1414 failed: multi-thread copy: failed to write chunk: failed to upload chunk 153 with 5242880 bytes: BodyHashError: failed to compute body hashes
caused by: context canceled

If not, please let me know the keywords I should be looking for.

Try searching for context canceled, error, fail - that sort of thing.
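
For example (a sketch, assuming the log file is still /tmp/rclone.log):

# list the first matching lines with their line numbers
$ grep -n -E 'ERROR|Failed|context canceled' /tmp/rclone.log | head -50

# show 20 lines of context before each BodyHashError to find what triggered it
$ grep -n -B 20 'BodyHashError' /tmp/rclone.log | less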

If you can't find a different error then can you upload the logs somewhere so I can take a look please?

I have a question regarding this.

When a file in the source is deleted or moved, does the log output an error?

Also, in what cases will an error be output for changes made during the sync operation?

A sync is a snapshot of a moment in time, so if things change during the operation, it can impact the sync/copy.

If a file has already been copied, there is no change. If it hasn't been copied yet and was still in the queue, it would generate an error.

Any errors are logged, such as copy failures, permission problems - quite a long list of 'could be' items.

In the previous test, I found out that the sync operation had been running for 2 days and that there were changes in the source during this period.

In this test, a sync of the source without any changes ran for 12 hours, and the following errors occurred.

Actually, the videos at the destination where the errors occurred can't be played.

All of the errors occurred on video files.
I can't understand why these errors occurred.

Most errors are of the same type:
2023/12/01 06:55:50 ERROR : aaa/bbb/38402160_28.114.mp4: Failed to copy: multi-thread copy: failed to open source: open /mnt_rg/dev/2022/aaa/bbb/38402160_28.114.mp4: device or resource busy

Below is another type of error:
2023/12/01 07:18:19 ERROR : S3 bucket sdp-iq-data path dev/2022: not deleting files as there were IO errors
2023/12/01 07:18:19 ERROR : S3 bucket sdp-iq-data path dev/2022: not deleting directories as there were IO errors
2023/12/01 07:18:19 ERROR : Attempt 1/3 failed with 781 errors and: multi-thread copy: failed to open source: open /mnt_rg/dev/2022/aaa/bbb/38402160_28.114.mp4: device or resource busy
2023/12/01 07:18:19 ERROR : aaa: error reading destination directory: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403, request id: 179C868D20E4D514, host id:

2023/12/01 07:18:19 ERROR : S3 bucket sdp-iq-data path dev/2022: not deleting files as there were IO errors
2023/12/01 07:18:19 ERROR : S3 bucket sdp-iq-data path dev/2022: not deleting directories as there were IO errors
2023/12/01 07:18:19 ERROR : Attempt 2/3 failed with 1 errors and: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403, request id: 179C868D20E4D514, host id:
2023/12/01 07:18:20 ERROR : aaa: error reading destination directory: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403, request id: 179C868D281E5761, host id:

2023/12/01 07:18:20 ERROR : S3 bucket sdp-iq-data path dev/2022: not deleting files as there were IO errors
2023/12/01 07:18:20 ERROR : S3 bucket sdp-iq-data path dev/2022: not deleting directories as there were IO errors
2023/12/01 07:18:20 ERROR : Attempt 3/3 failed with 1 errors and: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403, request id: 179C868D281E5761, host id:
2023/12/01 07:18:20 Failed to sync: SignatureDoesNotMatch: The request signature we calculated does not match the signature you provided. Check your key and signing method.
status code: 403, request id: 179C868D281E5761, host id:

This error (device or resource busy) is likely because something else is writing the file or has it locked in some way.
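
If you want to check, running something like this on the source machine should show which process has the file open (a sketch using the path from your log; lsof and fuser are standard Linux tools, though they may need installing):

# list processes with the file open
$ lsof /mnt_rg/dev/2022/aaa/bbb/38402160_28.114.mp4

# same idea, showing user and access mode
$ fuser -v /mnt_rg/dev/2022/aaa/bbb/38402160_28.114.mp4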

The SignatureDoesNotMatch error is more problematic. It should never happen! It could be one of these things:

  • a bug in the server or the AWS SDK which calculated the signature wrongly
  • a bug in the server which caused it not to recognize your credentials for a while (say a network glitch to an active directory server or something like that)
  • some kind of networking problem which corrupted some bits between your computer and the server
  • some kind of memory problem in your computer which corrupted some bits

Given your earlier errors, it might be worth running memtest86 on your computer to check its RAM is OK - this is the most common hardware fault.