Transfer percentages >100%?

I am running an rclone sync to Backblaze B2. The files are Windows File History files stored on an unRaid server, accessed over SMB as a mount on Ubuntu.

I have consistently been seeing files with percentages going over 100%. Here is a screenshot of a couple: https://imgur.com/a/aiM9yJV

Any ideas why this could be happening? Our home internet seems to be slower than usual and shows some dropped packets when I ping it (when the transfer isn’t running, of course), so could it be re-uploads of failed parts?

I’m reasonably sure this is caused by retries of parts of multipart uploads.

If you run with -vv then rclone will log each part at debug level, if you want to be certain.

@ncw , as I commented on the other topic, I still think it’s some kind of rounding error in the percentage calculation. In my jobs, values greater than 100% only appear for very small files (a few KB):


	Line 3400:   *   ...: 105% /878, 6.325k/s, 0s
	Line 3531:   *   ...: 105% /878, 7.233k/s, 0s
	Line 38037:  *   ...: 105% /958, 7.740k/s, 0s
	Line 41370:  *   ...: 105% /846, 0/s, 0s
	Line 44851:  *   ...: 105% /830, 0/s, 0s
	Line 46345:  *   ...: 105% /942, 7.178k/s, 0s
	Line 47827:  *   ...: 105% /926, 7.605k/s, 0s
	Line 62738:  *   ...: 105% /942, 0/s, 0s
	Line 63833:  *   ...: 105% /814, 0/s, 0s
	Line 64416:  *   ...: 105% /910, 7.478k/s, 0s
	Line 65104:  *   ...: 105% /814, 6.296k/s, 0s
	Line 66617:  *   ...: 105% /894, 0/s, 0s
	Line 66970:  *   ...: 105% /894, 7.358k/s, 0s
	Line 67713:  *   ...: 105% /830, 0/s, 0s
	Line 68514:  *   ...: 105% /830, 0/s, 0s
	Line 68941:  *   ...: 105% /894, 0/s, 0s
	Line 68942:  *   ...: 105% /830, 6.803k/s, 0s
	Line 69436:  *   ...: 105% /958, 7.858k/s, 0s
	Line 70030:  *   ...: 105% /846, 0/s, 0s
	Line 70998:  *   ...: 105% /830, 6.002k/s, 0s
	Line 71129:  *   ...: 105% /814, 0/s, 0s
	Line 72352:  *   ...: 105% /926, 7.572k/s, 0s
	Line 72550:  *   ...: 105% /910, 7.399k/s, 0s
	Line 72551:  *   ...: 105% /830, 0/s, 0s
	Line 74304:  *   ...: 105% /910, 0/s, 0s
	Line 74791:  *   ...: 105% /926, 0/s, 0s
	Line 74924:  *   ...: 105% /862, 0/s, 0s
	Line 75101:  *   ...: 105% /830, 0/s, 0s
	Line 75273:  *   ...: 105% /830, 0/s, 0s
	Line 76178:  *   ...: 105% /814, 0/s, 0s
	Line 81352:  *   ...: 105% /942, 0/s, 0s
	Line 81506:  *   ...: 105% /878, 7.231k/s, 0s
	Line 81722:  *   ...: 105% /862, 7.107k/s, 0s
	Line 81878:  *   ...: 105% /878, 0/s, 0s
	Line 82471:  *   ...: 105% /926, 0/s, 0s
	Line 58:     *   ...: 105% /926, 0/s, 0s
	Line 13735:  *   ...: 106% /686, 0/s, 0s
	Line 26527:  *   ...: 106% /686, 0/s, 0s
	Line 35804:  *   ...: 106% /718, 5.966k/s, 0s
	Line 32033:  *   ...: 106% /718, 0/s, 0s
	Line 39442:  *   ...: 106% /718, 5.597k/s, 0s
	Line 44211:  *   ...: 106% /702, 0/s, 0s
	Line 46123:  *   ...: 106% /750, 0/s, 0s
	Line 46794:  *   ...: 106% /718, 5.941k/s, 0s
	Line 47269:  *   ...: 106% /734, 6.108k/s, 0s
	Line 52728:  *   ...: 106% /718, 0/s, 0s
	Line 58991:  *   ...: 106% /750, 0/s, 0s
	Line 62510:  *   ...: 106% /686, 5.338k/s, 0s
	Line 63147:  *   ...: 106% /718, 0/s, 0s
	Line 63618:  *   ...: 106% /766, 0/s, 0s
	Line 65013:  *   ...: 106% /798, 0/s, 0s
	Line 65491:  *   ...: 106% /718, 0/s, 0s
	Line 65493:  *   ...: 106% /750, 0/s, 0s
	Line 65661:  *   ...: 106% /702, 0/s, 0s
	Line 66384:  *   ...: 106% /718, 0/s, 0s
	Line 66618:  *   ...: 106% /782, 0/s, 0s
	Line 66932:  *   ...: 106% /798, 6.597k/s, 0s
	Line 68119:  *   ...: 106% /782, 6.482k/s, 0s
	Line 68427:  *   ...: 106% /766, 0/s, 0s
	Line 68516:  *   ...: 106% /798, 0/s, 0s
	Line 69310:  *   ...: 106% /798, 6.607k/s, 0s
	Line 69526:  *   ...: 106% /798, 6.603k/s, 0s
	Line 69684:  *   ...: 106% /718, 0/s, 0s
	Line 70029:  *   ...: 106% /750, 0/s, 0s
	Line 70371:  *   ...: 106% /766, 0/s, 0s
	Line 70372:  *   ...: 106% /782, 0/s, 0s
	Line 71130:  *   ...: 106% /750, 6.233k/s, 0s
	Line 71346:  *   ...: 106% /702, 5.856k/s, 0s
	Line 71691:  *   ...: 106% /766, 0/s, 0s
	Line 71972:  *   ...: 106% /702, 0/s, 0s
	Line 72126:  *   ...: 106% /718, 5.910k/s, 0s
	Line 72203:  *   ...: 106% /734, 0/s, 0s
	Line 72751:  *   ...: 106% /798, 0/s, 0s
	Line 72861:  *   ...: 106% /782, 0/s, 0s
	Line 73477:  *   ...: 106% /782, 0/s, 0s
	Line 73553:  *   ...: 106% /702, 0/s, 0s
	Line 79006:  *   ...: 106% /766, 6.330k/s, 0s
	Line 79784:  *   ...: 106% /734, 0/s, 0s
	Line 82474:  *   ...: 106% /734, 6.105k/s, 0s
	Line 83751:  *   ...: 106% /686, 5.733k/s, 0s
	Line 88525:  *   ...: 106% /718, 0/s, 0s
	Line 88908:  *   ...: 106% /766, 6.312k/s, 0s
	Line 27452:  *   ...: 107% /654, 0/s, 0s
	Line 27811:  *   ...: 107% /654, 0/s, 0s
	Line 29034:  *   ...: 107% /670, 5.241k/s, 0s
	Line 35420:  *   ...: 107% /606, 0/s, 0s
	Line 32034:  *   ...: 107% /670, 0/s, 0s
	Line 33960:  *   ...: 107% /638, 5.288k/s, 0s
	Line 38059:  *   ...: 107% /670, 5.604k/s, 0s
	Line 46563:  *   ...: 107% /654, 5.474k/s, 0s
	Line 48722:  *   ...: 107% /670, 5.600k/s, 0s
	Line 70373:  *   ...: 107% /654, 0/s, 0s
	Line 77596:  *   ...: 107% /654, 0/s, 0s
	Line 78831:  *   ...: 107% /622, 5.233k/s, 0s
	Line 79783:  *   ...: 107% /606, 0/s, 0s
	Line 79952:  *   ...: 107% /654, 0/s, 0s
	Line 80083:  *   ...: 107% /606, 0/s, 0s
	Line 80408:  *   ...: 107% /638, 0/s, 0s
	Line 88:     *   ...: 107% /622, 5.230k/s, 0s
	Line 512:    *   ...: 107% /665, 0/s, 0s
	Line 539:    *   ...: 107% /665, 4.840k/s, 0s
	Line 584:    *   ...: 107% /665, 0/s, 0s

I saw that bhagen posted large files, but since you commented that the percentage refers to reading from the source, I guess it has nothing to do with the upload, right?

It is reading using “SMB on an unRaid server as a mount on Ubuntu”, so maybe the problem in this case is caused by read retries.

I will try with more verbosity and report back.

Oh, I know what is causing those… It is because we put a 20-byte SHA-1 checksum, encoded as 40 hex bytes, on the end of the data stream when doing B2 uploads. The percentages work out correctly if you add 40 bytes to the size…

This will certainly be the case for large uploads beyond --b2-upload-cutoff: each chunk will have 40 bytes added to it, and the accounting will be out by that much.

I would have thought they would be correct for files below --b2-upload-cutoff. Did you set that very low, maybe?
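As a back-of-the-envelope check (an illustration in Python, not rclone’s actual accounting code), 40 extra bytes match several of the small-file percentages in the log excerpt above, while being invisible for a full-size chunk:

```python
# Illustration only: if a 40-byte hex-encoded SHA-1 is appended to the
# data stream, the progress meter counts size + 40 bytes against an
# expected total of size bytes.
def shown_percent(size_bytes, overhead=40):
    """Percentage the meter would display for a file of size_bytes."""
    return round((size_bytes + overhead) / size_bytes * 100)

print(shown_percent(878))         # an 878-byte file shows 105%
print(shown_percent(718))         # a 718-byte file shows 106%
print(shown_percent(96 * 2**20))  # a 96 MiB chunk still shows 100%
```

For tiny files the 40-byte overhead is a few percent of the total, which is why only small files show it; for multi-megabyte chunks it rounds away to nothing.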

I’m not immediately sure how to fix this!

I don’t think this can explain the results seen by @bhagen, though, as the numbers are too big.

It appears @ncw is correct: the log shows lots of failed chunk upload attempts for a file that went over 600%.

I guess I need to look into why my internet is being so unreliable… Or would it help to lower the chunk size?

I’m not using B2, I just got into the topic :wink:… I’m using Wasabi/S3.

I think it’s just a minor detail; I don’t know if it’s worth spending energy on it for now…

Lowering the chunk size will help. It will also lower the transfer rate. It is worth trying though.
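To see why a smaller chunk size helps, here is a simplified model (hypothetical numbers, not rclone’s actual accounting): every byte sent counts toward the progress meter, including bytes from failed chunk attempts that must be re-sent, and each failed attempt can waste up to one full chunk:

```python
# Simplified model of multipart-upload progress accounting: the meter
# accumulates all bytes sent, so each failed attempt adds up to one
# chunk's worth of already-counted bytes.
MIB = 2**20

def observed_percent(file_size, chunk_size, failed_attempts):
    """Percentage displayed after failed_attempts wasted chunk uploads."""
    wasted = failed_attempts * chunk_size
    return round((file_size + wasted) / file_size * 100)

# 500 MiB file, 27 failed attempts: big chunks waste far more bytes.
print(observed_percent(500 * MIB, 96 * MIB, 27))  # 96 MiB chunks -> 618%
print(observed_percent(500 * MIB, 16 * MIB, 27))  # 16 MiB chunks -> 186%
```

With the same number of failures, smaller chunks throw away fewer bytes per failure, so the displayed percentage (and the wasted bandwidth) stays much lower, at the cost of more requests and a lower peak transfer rate.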


Ah OK! That blows that explanation out of the water then :wink:

The mood may take me to investigate at some point, but I’ll pretend I didn’t notice for the moment!