"Fragment overlap" error with encrypted onedrive?

I have a personal onedrive setup with an encrypted frontend.

I'm doing an initial copy of my backups to it. And it's mostly working. But partway through...

[OneDriveBackups]
type = crypt
remote = OneDriveSweharris2:Backups
filename_encryption = off
directory_name_encryption = false
password = hahahah
password2 = hahahah

[OneDriveSweharris2]
type = onedrive
token = thatdbetelling
drive_id = anumber
drive_type = personal

Now I'm doing a simple rclone sync . OneDriveBackups: and it's going pretty well.

But I noticed the occasional error:

2020/01/23 18:11:42 INFO  :
Transferred:      377.624G / 600.533 GBytes, 63%, 30.980 MBytes/s, ETA 2h2m47s
Errors:                 1 (retrying may help)
Checks:                 0 / 0, -
Transferred:         1015 / 1150, 88%
Elapsed time:   3h28m1.6s
Transferring:
 *            penfold/20200105._FastData.0.gz_ak: 63% /5G, 8.982M/s, 3m25s
 *            penfold/20200105._FastData.0.gz_al: 59% /5G, 9.425M/s, 3m40s
 *            penfold/20200105._FastData.0.gz_am: 30% /5G, 8.870M/s, 6m42s
 *            penfold/20200105._FastData.0.gz_an: 26% /5G, 9.311M/s, 6m46s

2020/01/23 18:12:16 ERROR : penfold/20200105._FastData.0.gz_am: Failed to copy: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/23 18:12:20 ERROR : penfold/20200105._FastData.0.gz_ak: Failed to copy: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/23 18:12:42 INFO  :
Transferred:      379.124G / 595.475 GBytes, 64%, 30.955 MBytes/s, ETA 1h59m17s
Errors:                 3 (retrying may help)
Checks:                 0 / 0, -
Transferred:         1015 / 1148, 88%
Elapsed time:   3h29m1.6s

Now this has been running for 3.5hrs with only 3 errors (looks like I missed the first one).

The same thing happened a little later:

2020/01/23 19:45:52 ERROR : penfold/20200119._FastData.0.gz_al: Failed to copy: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/23 19:45:54 ERROR : penfold/20200119._FastData.0.gz_am: Failed to copy: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.

Now rclone automatically retried:

2020/01/23 20:03:01 ERROR : Encrypted drive 'OneDriveBackups:': not deleting files as there were IO errors
2020/01/23 20:03:01 ERROR : Encrypted drive 'OneDriveBackups:': not deleting directories as there were IO errors
2020/01/23 20:03:01 ERROR : Attempt 1/3 failed with 5 errors and: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/23 20:03:03 INFO  : Encrypted drive 'OneDriveBackups:': Waiting for checks to finish
2020/01/23 20:03:03 INFO  : Encrypted drive 'OneDriveBackups:': Waiting for transfers to finish
2020/01/23 20:03:42 INFO  : 
Transferred:      587.472G / 610.857 GBytes, 96%, 31.329 MBytes/s, ETA 12m44s
Errors:                 0
Checks:              1146 / 1146, 100%
Transferred:         1146 / 1151, 100%
Elapsed time:   5h20m1.6s
Transferring:
 *                 pugwash/20190625.root.0.gz_aa:  7% /4.569G, 8.547M/s, 8m27s
 *            penfold/20200105._FastData.0.gz_ak:  4% /5G, 5.650M/s, 14m25s
 *            penfold/20200105._FastData.0.gz_am:  6% /5G, 8.337M/s, 9m34s
 *            penfold/20200119._FastData.0.gz_al:  6% /5G, 8.103M/s, 9m53s

And this time all 5 uploaded.

2020/01/23 20:23:25 INFO  : Waiting for deletions to finish
2020/01/23 20:23:25 ERROR : Attempt 2/3 succeeded
2020/01/23 20:23:25 INFO  :
Transferred:      610.863G / 610.863 GBytes, 100%, 30.685 MBytes/s, ETA 0s
Errors:                 0
Checks:              1146 / 1146, 100%
Transferred:         1151 / 1151, 100%
Elapsed time:  5h39m45.2s

Thu Jan 23 20:23:25 EST 2020 Ended OneDriveBackups:

So it doesn't look like the problem is easily replicable, which is gonna make it hard to provide debug level output!

rclone v1.46
- os/arch: linux/amd64
- go version: go1.11.5

I know that's a little old but the changelog doesn't mention anything related to this.

It's possible https://github.com/rclone/rclone/issues/3131 is related, and this might help narrow down that it's the onedrive backend and not the webdav frontend.

hi,
before calling this a bug, you should update to the latest version and try again;
v1.46 is almost one year old now.

That issue does look relevant.

There is an idea for a fix in that issue too.

So we could assume that if we get a 416 on a retry, the server got it already, and skip the chunk.

Perhaps better would be to get the upload status to work out where we are. Our hands are somewhat tied here - if the server is missing stuff that isn't in the current chunk we'll have to abort.

The first idea would be very easy to implement and there is no danger of data corruption - the server will just give another 416 if we get it wrong. Some care with the error handling would be needed.
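
For the second idea, the Graph API lets a client GET the upload URL to see which byte ranges the server still expects (the nextExpectedRanges field of the upload session). A minimal sketch of that check, assuming a plain HTTP client rather than the backend's own request/pacer machinery (the names here are illustrative, not the actual backend code):

// Sketch: ask the upload session where the server thinks the upload is up
// to, so a 416 could be resolved by resuming from that position.
package uploadstatus

import (
	"context"
	"encoding/json"
	"errors"
	"net/http"
	"strconv"
	"strings"
)

func getNextExpectedOffset(ctx context.Context, client *http.Client, uploadURL string) (int64, error) {
	req, err := http.NewRequestWithContext(ctx, "GET", uploadURL, nil)
	if err != nil {
		return 0, err
	}
	resp, err := client.Do(req)
	if err != nil {
		return 0, err
	}
	defer resp.Body.Close()
	var status struct {
		NextExpectedRanges []string `json:"nextExpectedRanges"`
	}
	if err := json.NewDecoder(resp.Body).Decode(&status); err != nil {
		return 0, err
	}
	if len(status.NextExpectedRanges) == 0 {
		return 0, errors.New("upload status has no nextExpectedRanges")
	}
	// Ranges look like "26214400-" or "26214400-52428799"; the start of the
	// first range is the next byte the server expects.
	start := strings.SplitN(status.NextExpectedRanges[0], "-", 2)[0]
	return strconv.ParseInt(start, 10, 64)
}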

If you can replicate it then I'm sure we can fix it! It would be worth trying to replicate it with the latest beta, and if you can, put a note on that issue and tag @Cnly who maintains the onedrive backend.

Do you fancy trying to fix this?

Unfortunately replication doesn't seem to be easy. 5 errors after a 650Gb upload :frowning: And it's clearly not specific to a file 'cos the 5 failed ones re-uploaded without issue.

Without a clear replication path, "no error" doesn't mean "no issues" :frowning:

You could try a patch like this (untested) which assumes that 416 means everything has been sent

diff --git a/backend/onedrive/onedrive.go b/backend/onedrive/onedrive.go
index 22e2a2730..3c7d6f711 100644
--- a/backend/onedrive/onedrive.go
+++ b/backend/onedrive/onedrive.go
@@ -1537,6 +1537,12 @@ func (o *Object) uploadFragment(ctx context.Context, url string, start int64, to
 	err = o.fs.pacer.Call(func() (bool, error) {
 		_, _ = chunk.Seek(0, io.SeekStart)
 		resp, err = o.fs.srv.Call(ctx, &opts)
+		if err == nil && resp != nil && resp.StatusCode == http.StatusRequestedRangeNotSatisfiable {
+			// Likely we have sent the chunk already so return OK
+			// FIXME this needs protection for happening multiple times
+			fs.Errorf(o, "Skipping chunk as already sent")
+			return false, nil
+		}
 		if err != nil {
 			return shouldRetry(resp, err)
 		}

If that works I can improve it.

I had a go at fixing this properly - if you could run the transfer in debug (-vv) and watch out for "Received 416 error - checking position" logs, that would be helpful!

https://beta.rclone.org/branch/v1.50.2-191-gca8bd807-fix-3131-onedrive-416-beta/ (uploaded in 15-30 mins)

OK, what I'll test with this beta is copying the existing 650Gb of data to another volume, in -vv mode:

2020/01/25 17:04:46 DEBUG : rclone: Version "v1.50.2-191-gca8bd807-fix-3131-onedrive-416-beta" starting with parameters ["/home/sweh/XX/rclone" "-vv" "-c" "--stats=60s" "--max-size=40000M" "--config" "/home/sweh/.config/rclone/rclone.conf" "--size-only" "sync" "." "OneDriveTST:"]

I'll see if anything shows up :slight_smile:

Thanks! I tested the code by simulating the error, but I might have made a mistake in the simulation, so a real test is great!

This is what I was afraid of... Last night I did a complete 600Gb upload with the beta version in debug mode, but no errors occurred at all.

This morning, using the old version, I also uploaded 140Gb (my standard Sunday level 0 backups). And also no errors!

So I'm really having trouble replicating the problem. I'm trying again with the beta version in debug mode...

It's looking like a real challenge to replicate! Probably a server-side timing issue and so hard to force :frowning:

Thanks for testing :slight_smile: Frustrating bug! Let me know if you succeed!

Yay, after 80% of the way through a complete upload I see an error!

There are no "Received 416 error" debug messages, but something else...

The error message is consistent with the earlier ones:

2020/01/26 17:30:37 ERROR : penfold/20200119._FastData.0.gz_af: Failed to copy: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.

Now the filename it refers to appears in the debug log 10s earlier, so I'll include the whole segment plus a few more lines:

2020/01/26 17:30:26 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1541406720/5370019872 size 10485760
2020/01/26 17:30:26 DEBUG : penfold/20200119._FastData.0.gz_af.bin: Uploading segment 534773760/5370019872 size 10485760
2020/01/26 17:30:26 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 587202560/5370019872 size 10485760
2020/01/26 17:30:26 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1384120320/5370019872 size 10485760
2020/01/26 17:30:27 DEBUG : penfold/20200119._FastData.0.gz_af.bin: Uploading segment 545259520/5370019872 size 10485760
2020/01/26 17:30:27 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1551892480/5370019872 size 10485760
2020/01/26 17:30:27 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 597688320/5370019872 size 10485760
2020/01/26 17:30:27 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1394606080/5370019872 size 10485760
2020/01/26 17:30:28 DEBUG : penfold/20200119._FastData.0.gz_af.bin: Uploading segment 555745280/5370019872 size 10485760
2020/01/26 17:30:28 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1562378240/5370019872 size 10485760
2020/01/26 17:30:28 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 608174080/5370019872 size 10485760
2020/01/26 17:30:29 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1405091840/5370019872 size 10485760
2020/01/26 17:30:29 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1572864000/5370019872 size 10485760
2020/01/26 17:30:30 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 618659840/5370019872 size 10485760
2020/01/26 17:30:30 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1415577600/5370019872 size 10485760
2020/01/26 17:30:30 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1583349760/5370019872 size 10485760
2020/01/26 17:30:31 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 629145600/5370019872 size 10485760
2020/01/26 17:30:31 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1426063360/5370019872 size 10485760
2020/01/26 17:30:31 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1593835520/5370019872 size 10485760
2020/01/26 17:30:32 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 639631360/5370019872 size 10485760
2020/01/26 17:30:32 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1436549120/5370019872 size 10485760
2020/01/26 17:30:32 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1604321280/5370019872 size 10485760
2020/01/26 17:30:33 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 650117120/5370019872 size 10485760
2020/01/26 17:30:33 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1614807040/5370019872 size 10485760
2020/01/26 17:30:33 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1447034880/5370019872 size 10485760
2020/01/26 17:30:34 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 660602880/5370019872 size 10485760
2020/01/26 17:30:34 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1625292800/5370019872 size 10485760
2020/01/26 17:30:34 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1457520640/5370019872 size 10485760
2020/01/26 17:30:35 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 671088640/5370019872 size 10485760
2020/01/26 17:30:35 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1635778560/5370019872 size 10485760
2020/01/26 17:30:36 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 681574400/5370019872 size 10485760
2020/01/26 17:30:36 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1468006400/5370019872 size 10485760
2020/01/26 17:30:36 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1646264320/5370019872 size 10485760
2020/01/26 17:30:37 DEBUG : pacer: low level retry 1/10 (error generalException: General Exception While Processing)
2020/01/26 17:30:37 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2020/01/26 17:30:37 DEBUG : pacer: Reducing sleep to 15ms
2020/01/26 17:30:37 DEBUG : penfold/20200119._FastData.0.gz_ad.bin: Uploading segment 1478492160/5370019872 size 10485760
2020/01/26 17:30:37 DEBUG : pacer: Reducing sleep to 11.25ms
2020/01/26 17:30:37 DEBUG : penfold/20200119._FastData.0.gz_ae.bin: Uploading segment 692060160/5370019872 size 10485760
2020/01/26 17:30:37 DEBUG : pacer: Reducing sleep to 10ms
2020/01/26 17:30:37 DEBUG : penfold/20200119._FastData.0.gz_af.bin: Error encountered during upload: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/26 17:30:37 DEBUG : penfold/20200119._FastData.0.gz_af.bin: Cancelling multipart upload
2020/01/26 17:30:37 ERROR : penfold/20200119._FastData.0.gz_af: Failed to copy: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/26 17:30:37 DEBUG : penfold/20200119._FastData.0.gz_ag.bin: Starting multipart upload
2020/01/26 17:30:37 DEBUG : penfold/20200119._FastData.0.gz_ac.bin: Uploading segment 1656750080/5370019872 size 10485760

Given the 10s delay and the other 3 uploads continuing, I'm wondering if a response to the chunk upload was delayed (or dropped by onedrive?) and this made rclone think the segment had failed, triggering a low level retry... and that caused the "but I've already got that!" response.

But that's just a guess!

That is the error text of the 416 error; the workaround just didn't detect it :frowning:

I suspect if you look further back in the log for penfold/20200119._FastData.0.gz_af.bin you'll see a 500 error (or similar) which caused a low level retry. However onedrive got the data despite the 500 error, hence the 416 message.

I think I see what is wrong with the detection code though - can you try this code and watch out for the debug above - thanks :slight_smile:

https://beta.rclone.org/branch/v1.50.2-195-gd4091ccf-fix-3131-onedrive-416-beta/ (uploaded in 15-30 mins)

The first indication of any problem is the pacer message already reported:

2020/01/26 17:30:37 DEBUG : pacer: low level retry 1/10 (error generalException: General Exception While Processing)

Nothing in the log before that indicates any problems.

OK, starting a new test :slight_smile:

2020/01/27 06:43:39 DEBUG : rclone: Version "v1.50.2-195-gd4091ccf-fix-3131-onedrive-416-beta" starting with parameters ["/home/sweh/XX/rclone" "-vv" "-c" "--stats=60s" "--max-size=40000M" "--config" "/home/sweh/.config/rclone/rclone.conf" "--size-only" "sync" "." "OneDriveTST:"]

That is it :slight_smile: Sorry if you posted this before and I didn't notice! There was an error, generalException: General Exception While Processing (which is probably a 500 error), which caused rclone to do a low level retry. That retry caused the problem: even though onedrive returned an error, the part actually got uploaded.
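
For context, the retry happens because the backend's pacer treats server errors as temporary. Roughly the shape of that predicate - a simplified sketch, not the exact rclone code - is:

// Sketch of a retry predicate: 429 and 5xx responses are treated as
// temporary, so the same chunk gets re-sent. If the server had in fact
// stored the chunk before returning the 500, the re-send then overlaps
// and produces the 416 fragmentOverlap error.
package retrysketch

import "net/http"

func shouldRetrySketch(resp *http.Response, err error) (bool, error) {
	if resp != nil {
		switch {
		case resp.StatusCode == http.StatusTooManyRequests: // 429 - rate limited
			return true, err
		case resp.StatusCode >= 500: // e.g. generalException
			return true, err
		}
	}
	return err != nil, err
}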

Thanks :slight_smile:

And no errors on that test. So let's try another one!

I'm so glad I have a fast internet connection at home :slight_smile:

OK... the error was detected... but the upload still failed

2020/01/27 18:34:40 DEBUG : monitor/20200126.root.0.gz_aa.bin: Uploading segment 786432000/818907137 size 10485760

(other segments for other files continue)

2020/01/27 18:35:09 DEBUG : pacer: low level retry 1/10 (error generalException: General Exception While Processing)
2020/01/27 18:35:09 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2020/01/27 18:35:09 DEBUG : pacer: Reducing sleep to 15ms
2020/01/27 18:35:09 DEBUG : pugwash/20190625.root.0.gz_aa.bin: Uploading segment 3586129920/4907313121 size 10485760
2020/01/27 18:35:10 DEBUG : pacer: Reducing sleep to 11.25ms
2020/01/27 18:35:10 DEBUG : router/router/20200104.xfs:root.6.gz.bin: Uploading segment 83886080/110237788 size 10485760
2020/01/27 18:35:10 DEBUG : monitor/20200126.root.0.gz_aa.bin: Received 416 error - checking position: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/27 18:35:10 DEBUG : pacer: Reducing sleep to 10ms
2020/01/27 18:35:10 DEBUG : monitor/20200126.root.0.gz_aa.bin: Error encountered during upload: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/27 18:35:10 DEBUG : monitor/20200126.root.0.gz_aa.bin: Cancelling multipart upload
2020/01/27 18:35:10 DEBUG : router/router/20200105.xfs:_boot.0.gz.bin: Uploading segment 83886080/194920946 size 10485760
2020/01/27 18:35:10 ERROR : monitor/20200126.root.0.gz_aa: Failed to copy: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.

:frowning:

What I think must have happened is that only part of the previous block got uploaded. We were only checking for all or nothing.

I've made the retry routine skip part of the block if that is required, so hopefully it should work now.

It will also generate more logs!

Hopefully this will work this time! There is a lot more code now though so more chance for bugs :frowning:
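
The rough shape of the new logic, as a simplified sketch (the real change is in the commit linked below and goes through the pacer and proper error handling; the names here are illustrative):

// Sketch of skipping the part of a chunk the server already has. On a 416
// we read back the position the server reports and, if it falls inside the
// chunk we were sending, re-send only the remainder.
package skipsketch

import (
	"fmt"
	"io"
)

// resumeChunk works out how much of the chunk [start, start+size) to skip,
// given the position the server says it has received up to.
func resumeChunk(chunk io.ReadSeeker, start, size, serverPos int64) (newStart, newSize int64, err error) {
	switch {
	case serverPos < start || serverPos > start+size:
		// The missing data isn't in this chunk - nothing we can do here, abort.
		return 0, 0, fmt.Errorf("server position %d outside chunk %d..%d", serverPos, start, start+size)
	case serverPos == start+size:
		// The whole chunk was received despite the error - nothing to re-send.
		return serverPos, 0, nil
	}
	skip := serverPos - start
	if _, err := chunk.Seek(skip, io.SeekStart); err != nil {
		return 0, 0, err
	}
	// Re-send from serverPos with an adjusted Content-Range header.
	return serverPos, size - skip, nil
}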

https://beta.rclone.org/branch/v1.50.2-195-g164d9c96-fix-3131-onedrive-416-beta/ (uploaded in 15-30 mins)

Code is here if you want to take a look: https://github.com/rclone/rclone/commit/164d9c968dbe6dcf9cd52d35f316daba79f9d104

Thanks for testing :slight_smile:

New test started

2020/01/28 13:27:59 DEBUG : rclone: Version "v1.50.2-195-g164d9c96-fix-3131-onedrive-416-beta" starting with parameters ["/home/sweh/XX/rclone" "-vv" "-c" "--stats=60s" "--max-size=40000M" "--config" "/home/sweh/.config/rclone/rclone.conf" "--size-only" "sync" "." "OneDriveTST:"]

:crossed_fingers:

Hmm... kinda looks like it failed multiple times then restarted the file from scratch

2020/01/28 14:57:34 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 52428800/5370019872 size 10485760

(other files deleted from this log; only entries relating to this file retained)

2020/01/28 14:57:54 DEBUG : pacer: low level retry 1/10 (error generalException: General Exception While Processing)
2020/01/28 14:57:54 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2020/01/28 14:57:54 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:54 DEBUG : pacer: Reducing sleep to 15ms
2020/01/28 14:57:54 DEBUG : pacer: Reducing sleep to 11.25ms
2020/01/28 14:57:54 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:54 DEBUG : pacer: low level retry 2/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:54 DEBUG : pacer: Rate limited, increasing sleep to 22.5ms
2020/01/28 14:57:54 DEBUG : pacer: Reducing sleep to 16.875ms
2020/01/28 14:57:55 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:55 DEBUG : pacer: Reducing sleep to 12.65625ms
2020/01/28 14:57:55 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:55 DEBUG : pacer: low level retry 3/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:55 DEBUG : pacer: Rate limited, increasing sleep to 25.3125ms
2020/01/28 14:57:55 DEBUG : pacer: Reducing sleep to 18.984375ms
2020/01/28 14:57:56 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:56 DEBUG : pacer: Reducing sleep to 14.238281ms
2020/01/28 14:57:56 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:56 DEBUG : pacer: low level retry 4/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:56 DEBUG : pacer: Rate limited, increasing sleep to 28.476562ms
2020/01/28 14:57:56 DEBUG : pacer: Reducing sleep to 21.357421ms
2020/01/28 14:57:56 DEBUG : pacer: Reducing sleep to 16.018065ms
2020/01/28 14:57:56 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:56 DEBUG : pacer: Reducing sleep to 12.013548ms
2020/01/28 14:57:56 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:56 DEBUG : pacer: low level retry 5/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:56 DEBUG : pacer: Rate limited, increasing sleep to 24.027096ms
2020/01/28 14:57:57 DEBUG : pacer: Reducing sleep to 18.020322ms
2020/01/28 14:57:57 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:57 DEBUG : pacer: Reducing sleep to 13.515241ms
2020/01/28 14:57:57 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:57 DEBUG : pacer: low level retry 6/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:57 DEBUG : pacer: Rate limited, increasing sleep to 27.030482ms
2020/01/28 14:57:57 DEBUG : pacer: Reducing sleep to 20.272861ms
2020/01/28 14:57:57 DEBUG : pacer: Reducing sleep to 15.204645ms
2020/01/28 14:57:57 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:58 DEBUG : pacer: Reducing sleep to 11.403483ms
2020/01/28 14:57:58 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:58 DEBUG : pacer: low level retry 7/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:58 DEBUG : pacer: Rate limited, increasing sleep to 22.806966ms
2020/01/28 14:57:58 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:58 DEBUG : pacer: Reducing sleep to 17.105224ms
2020/01/28 14:57:58 DEBUG : pacer: Reducing sleep to 12.828918ms
2020/01/28 14:57:58 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:58 DEBUG : pacer: low level retry 8/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:58 DEBUG : pacer: Rate limited, increasing sleep to 25.657836ms
2020/01/28 14:57:59 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:59 DEBUG : pacer: Reducing sleep to 19.243377ms
2020/01/28 14:57:59 DEBUG : pacer: Reducing sleep to 14.432532ms
2020/01/28 14:57:59 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:59 DEBUG : pacer: low level retry 9/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:59 DEBUG : pacer: Rate limited, increasing sleep to 28.865064ms
2020/01/28 14:57:59 DEBUG : pacer: Reducing sleep to 21.648798ms
2020/01/28 14:57:59 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Received 416 error - reading current position from server: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.
2020/01/28 14:57:59 DEBUG : pacer: Reducing sleep to 16.236598ms
2020/01/28 14:57:59 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Read position 60784640, chunk is 52428800..62914560, bytes to skip = 8355840
2020/01/28 14:57:59 DEBUG : pacer: low level retry 10/10 (error retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.)
2020/01/28 14:57:59 DEBUG : pacer: Rate limited, increasing sleep to 32.473196ms
2020/01/28 14:57:59 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Error encountered during upload: retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received.

This part is where it seems to give up:

2020/01/28 14:57:59 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Cancelling multipart upload
2020/01/28 14:58:00 DEBUG : pacer: Reducing sleep to 24.354897ms
2020/01/28 14:58:00 DEBUG : pacer: Reducing sleep to 18.266172ms
2020/01/28 14:58:00 DEBUG : godzilla/20200126.root.0.gz_ad: Received error: retry this chunk skipping 8355840 bytes: invalidRange: fragmentOverlap: The uploaded fragment overlaps with data that has already been received. - low level retry 1/10

And the restart:

2020/01/28 14:58:00 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Starting multipart upload
2020/01/28 14:58:00 DEBUG : pacer: Reducing sleep to 13.699629ms
2020/01/28 14:58:00 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 0/5370019872 size 10485760
2020/01/28 14:58:00 DEBUG : pacer: Reducing sleep to 10.274721ms
2020/01/28 14:58:00 DEBUG : pacer: Reducing sleep to 10ms
2020/01/28 14:58:01 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 10485760/5370019872 size 10485760
2020/01/28 14:58:02 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 20971520/5370019872 size 10485760
2020/01/28 14:58:03 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 31457280/5370019872 size 10485760
2020/01/28 14:58:04 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 41943040/5370019872 size 10485760
2020/01/28 14:58:05 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 52428800/5370019872 size 10485760
2020/01/28 14:58:06 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 62914560/5370019872 size 10485760
2020/01/28 14:58:07 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 73400320/5370019872 size 10485760
2020/01/28 14:58:08 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 83886080/5370019872 size 10485760
2020/01/28 14:58:09 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 94371840/5370019872 size 10485760
....

2020/01/28 15:08:49 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 5347737600/5370019872 size 10485760
2020/01/28 15:08:50 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 5358223360/5370019872 size 10485760
2020/01/28 15:08:51 DEBUG : godzilla/20200126.root.0.gz_ad.bin: Uploading segment 5368709120/5370019872 size 1310752
2020/01/28 15:08:52 INFO  : godzilla/20200126.root.0.gz_ad: Copied (new)