Big file upload to S3 fails

Hi everyone,
I could use some help, as I'm having trouble uploading a big file to an S3 bucket. The file is about 700 GB, and at my line speed it would take about 24 hours to upload. Unfortunately, I keep hitting the same error after a few hours of transfer, and when rclone retries, it adds the file's full size to the transfer total again, as if the upload were restarting from scratch. See the log below; any help would be very much appreciated.

Version: v1.50.2
OS: Windows Server 2012, 64-bit
Amazon S3 Standard

Command:
rclone sync H:\Backups mytest:aws-jannarelli\Backups2 --contimeout=10m --transfers 10 --s3-upload-concurrency 10 --low-level-retries 10 --retries 5 -vv
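As background for the part numbers that show up in the log below: S3 caps a multipart upload at 10,000 parts, so the minimum chunk size grows with the file size (rclone's S3 docs say it raises the chunk size automatically for large files of known size to stay under that cap). A rough sketch of the arithmetic, using the 705.625 GiB figure from the log (the function name is just for illustration):

```python
import math

S3_MAX_PARTS = 10_000  # hard S3 limit on parts per multipart upload

def min_chunk_size_mib(file_size_bytes: int) -> int:
    """Smallest whole-MiB chunk size that keeps a file under 10,000 parts."""
    return math.ceil(math.ceil(file_size_bytes / S3_MAX_PARTS) / 2**20)

# The ~705.625 GiB file from the log needs chunks of at least 73 MiB.
print(min_chunk_size_mib(int(705.625 * 2**30)))  # -> 73
```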

Log (-vv):

2019/12/29 09:33:35 INFO :
Transferred: 166.032G / 705.625 GBytes, 24%, 6.325 MBytes/s, ETA 1d15m56s
Errors: 0
Checks: 43 / 43, 100%
Transferred: 0 / 1, 0%
Elapsed time: 7h27m59.4s
Transferring:

  • BACKUP - BSFS - AWS (C…-12-28T000046_F90E.vbk: 23% /705.625G, 23.315M/s, 6h34m59s

2019/12/29 09:35:05 INFO :
Transferred: 166.816G / 705.625 GBytes, 24%, 6.334 MBytes/s, ETA 1d11m48s
Errors: 0
Checks: 43 / 43, 100%
Transferred: 0 / 1, 0%
Elapsed time: 7h29m29s
Transferring:

  • BACKUP - BSFS - AWS (C…-12-28T000046_F90E.vbk: 23% /705.625G, 11.466M/s, 13h22m0s

2019/12/29 09:35:22 DEBUG : pacer: Reducing sleep to 0s
2019/12/29 09:35:23 ERROR : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T000046_F90E.vbk: Failed to copy: MultipartUpload: upload multipart failed
upload id: MZotwdNgwpLNsYDlv9PcNu4n0mnrCL8F3FNxLJk4TvMXASDSy3xp9ZVTu0Pxb.ZHDNkgAWfoJ9D2bNrkS4QSO7XW7oaZVOeuY9d_TZGBnzJ8wKWzXL5KemWN51ZEuMR0
caused by: RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-jannarelli/Backups2/BACKUP%20-%20BSFS%20-%20AWS%20(CACHE%20CSV)/BACKUP%20-%20BSFS%20-%20AWS%20(CACHE%20CSV)D2019-12-28T000046_F90E.vbk?partNumber=693&uploadId=MZotwdNgwpLNsYDlv9PcNu4n0mnrCL8F3FNxLJk4TvMXASDSy3xp9ZVTu0Pxb.ZHDNkgAWfoJ9D2bNrkS4QSO7XW7oaZVOeuY9d_TZGBnzJ8wKWzXL5KemWN51ZEuMR0: write tcp 192.168.0.3:45472->52.219.101.33:443: wsasend: An existing connection was forcibly closed by the remote host.
2019/12/29 09:35:24 ERROR : S3 bucket aws-jannarelli path Backups2: not deleting files as there were IO errors
2019/12/29 09:35:24 ERROR : S3 bucket aws-jannarelli path Backups2: not deleting directories as there were IO errors
2019/12/29 09:35:25 ERROR : Attempt 1/5 failed with 3 errors and: MultipartUpload: upload multipart failed
upload id: MZotwdNgwpLNsYDlv9PcNu4n0mnrCL8F3FNxLJk4TvMXASDSy3xp9ZVTu0Pxb.ZHDNkgAWfoJ9D2bNrkS4QSO7XW7oaZVOeuY9d_TZGBnzJ8wKWzXL5KemWN51ZEuMR0
caused by: RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-jannarelli/Backups2/BACKUP%20-%20BSFS%20-%20AWS%20(CACHE%20CSV)/BACKUP%20-%20BSFS%20-%20AWS%20(CACHE%20CSV)D2019-12-28T000046_F90E.vbk?partNumber=693&uploadId=MZotwdNgwpLNsYDlv9PcNu4n0mnrCL8F3FNxLJk4TvMXASDSy3xp9ZVTu0Pxb.ZHDNkgAWfoJ9D2bNrkS4QSO7XW7oaZVOeuY9d_TZGBnzJ8wKWzXL5KemWN51ZEuMR0: write tcp 192.168.0.3:45472->52.219.101.33:443: wsasend: An existing connection was forcibly closed by the remote host.
2019/12/29 09:35:35 INFO :
Transferred: 166.888G / 166.888 GBytes, 100%, 6.332 MBytes/s, ETA 0s
Errors: 0
Checks: 43 / 43, 100%
Transferred: 0 / 0, -
Elapsed time: 7h29m47.3s

2019/12/29 09:35:39 ERROR : : Entry doesn't belong in directory "" (same as directory) - ignoring
2019/12/29 09:35:41 INFO : S3 bucket aws-jannarelli path Backups2: Waiting for checks to finish
2019/12/29 09:35:43 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV).vbm: Sizes differ (src 489579 vs dst 467931)
2019/12/29 09:35:44 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T000037_B3FC.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:44 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T000037_B3FC.vib: Unchanged skipping
2019/12/29 09:35:44 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T000122_D614.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:44 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T000122_D614.vib: Unchanged skipping
2019/12/29 09:35:44 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T160029_F27C.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:44 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T160029_F27C.vib: Unchanged skipping
2019/12/29 09:35:44 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T120047_547B.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:44 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T120047_547B.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T080043_D2F7.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T080043_D2F7.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-21T181516_A69D.vbk: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-21T181516_A69D.vbk: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T040042_5FEE.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T040042_5FEE.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T200036_9CAF.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-22T200036_9CAF.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T040041_6DF2.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T040041_6DF2.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T120042_B996.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T120042_B996.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T080025_B2CF.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T080025_B2CF.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T160028_FD33.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T160028_FD33.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T200017_0D43.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-23T200017_0D43.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T040027_2B93.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T040027_2B93.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T000030_2878.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T000030_2878.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T160033_E808.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T160033_E808.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T200022_0430.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T200022_0430.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T120023_751E.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T120023_751E.vib: Unchanged skipping
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T080026_DA88.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:45 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T080026_DA88.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T040039_7FAA.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T040039_7FAA.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T160023_4080.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T160023_4080.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T200016_2B72.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T200016_2B72.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T080042_3534.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-24T080042_3534.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T120035_31F5.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T120035_31F5.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T000029_952C.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T000029_952C.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T040024_6D8D.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T040024_6D8D.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T000041_4C1C.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-25T000041_4C1C.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T080037_BE12.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T080037_BE12.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T160022_6D1C.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T160022_6D1C.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T000032_EE88.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T000032_EE88.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T120025_39C7.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T120025_39C7.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T120034_91C6.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T120034_91C6.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T040040_22E3.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T040040_22E3.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T080025_9B40.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T080025_9B40.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T200040_A7DF.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T200040_A7DF.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T120028_1955.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T120028_1955.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-29T000022_4D08.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-29T000022_4D08.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T080018_5A0B.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T080018_5A0B.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T200019_058B.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-26T200019_058B.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T160024_DE46.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T160024_DE46.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T160025_BB44.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-28T160025_BB44.vib: Unchanged skipping
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV).vbm: MD5 = 7f20c1bb2541480d156d51bc0969ee3b OK
2019/12/29 09:35:46 INFO : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV).vbm: Copied (replaced existing)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T200036_1696.vib: Size and modification time the same (differ by 0s, within tolerance 100ns)
2019/12/29 09:35:46 DEBUG : BACKUP - BSFS - AWS (CACHE CSV)/BACKUP - BSFS - AWS (CACHE CSV)D2019-12-27T200036_1696.vib: Unchanged skipping
2019/12/29 09:35:46 INFO : S3 bucket aws-jannarelli path Backups2: Waiting for transfers to finish
2019/12/29 09:36:35 INFO :
Transferred: 167.033G / 875.037 GBytes, 19%, 6.325 MBytes/s, ETA 1d7h50m25s
Errors: 0
Checks: 86 / 86, 100%
Transferred: 1 / 4, 25%
Elapsed time: 7h30m42.5s
Transferring:

  • BACKUP - BSFS - AWS (C…-12-28T000046_F90E.vbk: 0% /705.625G, 0/s, -
  • BACKUP - BSFS - AWS (C…-12-29T040027_758A.vib: 40% /141.723M, 1.640M/s, 50s
  • BACKUP - BSFS - AWS (C…-12-29T080027_FF26.vib: 3% /2.386G, 2.597M/s, 15m5s

That error means that something (not your computer) closed the connection. Are you running through a proxy or a firewall which could be interfering with your connections?

It could possibly be S3 itself, but that seems unlikely if it happens at the same moment each time.

Hello Nick, thank you for the quick response.

I have a firewall at the border of my network, but this server is in my DMZ, so there are no specific rules, blocks, or shaping on the source side. It doesn't always happen at the same moment, only at some point a few hours after starting; sometimes in the morning, sometimes at night, so there's nothing I could correlate with any particular behavior. I've spent a couple of weeks troubleshooting and trying to work around this, with no success so far. I've tried multiple flag combinations as well.

Beyond the error above, what I can't understand is why rclone can't "pick up" the lost part of the multipart upload, rather than losing all progress and adding the file's full size on top of what was already transferred. Connection errors will always occur one way or another, so is rclone capable of resuming the transfer after this kind of failure, or am I misconfiguring something?

Thank you, I would be glad to provide any further details you might need.

:frowning:

rclone backends normally retry chunks in multipart uploads. However, this particular backend uses the s3manager abstraction provided by the AWS SDK, and I guess it doesn't retry them properly, which is most annoying.

In fact there have been quite a few problems with the s3manager using too much memory, so I'm tempted to rewrite it...

One thing that might be happening in your particular case is that the last transaction might take a very long time (as it has to re-assemble the parts) so maybe your firewall/proxy is timing it out? Can you check its logs for timeouts?

Thanks again for the follow-up. I see; I had tried raising --low-level-retries to work around multiple chunk-upload failures during a long transfer, but that doesn't seem to be the right answer to this particular issue. It's a great feature/tweak nonetheless.

That's a great piece of advice, thank you again! :slight_smile: So far I've been unable to find any log entries that prove it, but it makes perfect sense that my border firewall might be sending RSTs to the server during the operation. I've raised its TCP and UDP timeouts (to 7200 and 360, respectively). I'll monitor it and get back to you.
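A side note on the mechanism: middleboxes that drop idle connections can sometimes be appeased with TCP keepalives, which generate traffic before the idle timer expires. rclone manages its own connections (Go enables keepalives on dialed TCP connections by default), so this is background rather than a fix to apply here. A minimal sketch of the socket-level knobs in Python, with illustrative timer values (the fine-grained TCP_KEEP* options are platform-specific, hence the guards):

```python
import socket

def enable_keepalive(sock: socket.socket, idle_s: int = 600,
                     interval_s: int = 60, probes: int = 5) -> None:
    """Turn on TCP keepalive probes so middleboxes see traffic on idle links."""
    sock.setsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE, 1)
    # Fine-grained knobs exist on Linux; guard for portability.
    if hasattr(socket, "TCP_KEEPIDLE"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPIDLE, idle_s)
    if hasattr(socket, "TCP_KEEPINTVL"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPINTVL, interval_s)
    if hasattr(socket, "TCP_KEEPCNT"):
        sock.setsockopt(socket.IPPROTO_TCP, socket.TCP_KEEPCNT, probes)

s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
enable_keepalive(s)
print(s.getsockopt(socket.SOL_SOCKET, socket.SO_KEEPALIVE))  # nonzero when enabled
s.close()
```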

Much appreciated! Regards.

I re-wrote the AWS multipart uploader, which should help with your problem of chunks not being retried, if you want to have a go with it:

https://beta.rclone.org/branch/v1.50.2-095-g5363afa7-fix-s3-manager-beta/

Wow, that's fast! Thank you very much, Nick.

I've already set up a test and it's running right now. Even in these first minutes, I can see that memory usage is about 40% lower than with the previous version. Below are the command I'm using and the directory details.

Command
rclone sync F:\Backups aws-globalcargo:TestBucket\BetaTest --checkers 16 -vv --log-file C:\scripts\AWS\Logs\Log-VMs-BetaTest.txt

Files
Total size: 0.98 TB
File structure: 2 files of about 400 GB and 5 of about 15 GB

I'll report back as soon as I get the results. I appreciate your help, and happy new year!

Thanks for testing - look forward to seeing the results :slight_smile:

Hello Nick !

The tests are going extremely well; the ETA to finish and assemble the 2 remaining big files (400 GB each) is 15 hours. I'll update you at the end for a complete validation. So far, I've noticed:

  • Memory consumption: about 45% lower and steady throughout the whole sync, which is great!
  • Low-level retries: rclone now performs low-level retries properly, without breaking a sweat. The old behavior of the total size growing after a failure is gone; the size stays the same and rclone picks up right where it stopped. Even after two major disconnection events (ISP issues), rclone behaved perfectly and slept until the connection was re-established. This is awesome news.
  • Speed and reliability: as usual, extremely fast and reliable, with good visibility in the log file.

Can't wait for it to finish so I can check the files and deploy it to my other clusters.

Thanks again; I'll report back soon.
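As a sanity check on the excerpts that follow: the offsets rclone prints for each chunk line up with offset = (chunk number − 1) × chunk size. A quick verification of two figures taken from the logs below (48 MiB parts):

```python
def chunk_offset_gib(chunk_number: int, chunk_size_mib: int = 48) -> float:
    """GiB offset at which a given multipart chunk starts (chunks are 1-based)."""
    return (chunk_number - 1) * chunk_size_mib / 1024

# Values from the log: chunk 3619 starts at 169.594G, chunk 3607 at 169.031G.
print(round(chunk_offset_gib(3619), 3))  # -> 169.594
print(round(chunk_offset_gib(3607), 3))  # -> 169.031
```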

Logs when connection was lost and resumed:

Event 1:

2020/01/01 14:17:47 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 3619 size 48M offset 169.594G/468.253G
2020/01/01 14:17:57 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 3607 size 48M offset 169.031G/464.090G
2020/01/01 14:18:01 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 3620 size 48M offset 169.641G/468.253G
2020/01/01 14:18:06 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 3621 size 48M offset 169.688G/468.253G
2020/01/01 14:18:08 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 3608 size 48M offset 169.078G/464.090G
2020/01/01 14:18:13 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 3609 size 48M offset 169.125G/464.090G
2020/01/01 14:18:25 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 3610 size 48M offset 169.172G/464.090G
2020/01/01 14:18:37 INFO :
Transferred: 416.751G / 1010.141 GBytes, 41%, 4.751 MBytes/s, ETA 1d11h31m29s
Transferred: 6 / 8, 75%
Elapsed time: 24h56m59.8s
Transferring:

  • BACKUP - VMS - AWSS3/B…2-21T000000_859F_W.vbk: 36% /468.253G, 959.840k/s, 90h35m16s
  • BACKUP - VMS - AWSS3/B…2-28T000000_E02C_W.vbk: 36% /464.090G, 2.912M/s, 28h47m59s

2020/01/01 14:19:04 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-28T000000_E02C_W.vbk?partNumber=3609&uploadId=hND5kS9aH2977Jg2JJAWWvx1IF7PNzBCYqsJrCN.oqjFYs3a5hRLGW834slqYnsF79c7W829Zu92v7Wm7WJC9PHPdZdOMQQn_oxqZP5GU1h5XtoxdW0UaZGbLwa3MHp4: write tcp 192.168.0.197:56905->52.219.96.138:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 14:19:04 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2020/01/01 14:19:04 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-28T000000_E02C_W.vbk?partNumber=3610&uploadId=hND5kS9aH2977Jg2JJAWWvx1IF7PNzBCYqsJrCN.oqjFYs3a5hRLGW834slqYnsF79c7W829Zu92v7Wm7WJC9PHPdZdOMQQn_oxqZP5GU1h5XtoxdW0UaZGbLwa3MHp4: write tcp 192.168.0.197:56967->52.219.96.122:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 14:19:04 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2020/01/01 14:19:04 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-21T000000_859F_W.vbk?partNumber=3621&uploadId=ZISIhT47_wQnOL.RQCFKYLde1FVJltRBSfFd3BMshYMMBPC1nqCgMkvFPav3yxTT7iIBhJE1B7weRipQRG4SIjLRMvtonxSTAlGjQps2BdOCsb8UytEDrwUWUcAsfn0g: write tcp 192.168.0.197:56711->52.219.88.51:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 14:19:04 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2020/01/01 14:19:04 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-28T000000_E02C_W.vbk?partNumber=3607&uploadId=hND5kS9aH2977Jg2JJAWWvx1IF7PNzBCYqsJrCN.oqjFYs3a5hRLGW834slqYnsF79c7W829Zu92v7Wm7WJC9PHPdZdOMQQn_oxqZP5GU1h5XtoxdW0UaZGbLwa3MHp4: write tcp 192.168.0.197:56834->52.219.104.202:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 14:19:04 DEBUG : pacer: Rate limited, increasing sleep to 80ms
2020/01/01 14:19:04 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-28T000000_E02C_W.vbk?partNumber=3608&uploadId=hND5kS9aH2977Jg2JJAWWvx1IF7PNzBCYqsJrCN.oqjFYs3a5hRLGW834slqYnsF79c7W829Zu92v7Wm7WJC9PHPdZdOMQQn_oxqZP5GU1h5XtoxdW0UaZGbLwa3MHp4: write tcp 192.168.0.197:56779->52.219.100.234:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 14:19:04 DEBUG : pacer: Rate limited, increasing sleep to 160ms
2020/01/01 14:19:04 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-21T000000_859F_W.vbk?partNumber=3619&uploadId=ZISIhT47_wQnOL.RQCFKYLde1FVJltRBSfFd3BMshYMMBPC1nqCgMkvFPav3yxTT7iIBhJE1B7weRipQRG4SIjLRMvtonxSTAlGjQps2BdOCsb8UytEDrwUWUcAsfn0g: write tcp 192.168.0.197:56784->52.219.88.122:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 14:19:04 DEBUG : pacer: Rate limited, increasing sleep to 320ms
2020/01/01 14:19:04 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-21T000000_859F_W.vbk?partNumber=3618&uploadId=ZISIhT47_wQnOL.RQCFKYLde1FVJltRBSfFd3BMshYMMBPC1nqCgMkvFPav3yxTT7iIBhJE1B7weRipQRG4SIjLRMvtonxSTAlGjQps2BdOCsb8UytEDrwUWUcAsfn0g: write tcp 192.168.0.197:56731->52.219.100.178:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 14:19:04 DEBUG : pacer: Rate limited, increasing sleep to 640ms
2020/01/01 14:19:04 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-21T000000_859F_W.vbk?partNumber=3620&uploadId=ZISIhT47_wQnOL.RQCFKYLde1FVJltRBSfFd3BMshYMMBPC1nqCgMkvFPav3yxTT7iIBhJE1B7weRipQRG4SIjLRMvtonxSTAlGjQps2BdOCsb8UytEDrwUWUcAsfn0g: write tcp 192.168.0.197:56741->52.219.96.154:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 14:19:04 DEBUG : pacer: Rate limited, increasing sleep to 1.28s
2020/01/01 14:19:37 INFO :
Transferred: 416.751G / 1010.141 GBytes, 41%, 4.748 MBytes/s, ETA 1d11h32m55s
Transferred: 6 / 8, 75%
Elapsed time: 24h57m59.8s
Transferring:

  • BACKUP - VMS - AWSS3/B…2-21T000000_859F_W.vbk: 36% /468.253G, 19.974k/s, 4353h2m52s
  • BACKUP - VMS - AWSS3/B…2-28T000000_E02C_W.vbk: 36% /464.090G, 62.060k/s, 1383h56m7s

2020/01/01 14:19:52 DEBUG : pacer: Reducing sleep to 960ms
2020/01/01 14:19:52 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 3622 size 48M offset 169.734G/468.253G
2020/01/01 14:20:16 DEBUG : pacer: Reducing sleep to 720ms
2020/01/01 14:20:16 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 3611 size 48M offset 169.219G/464.090G
2020/01/01 14:20:18 DEBUG : pacer: Reducing sleep to 540ms
2020/01/01 14:20:18 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 3623 size 48M offset 169.781G/468.253G
2020/01/01 14:20:19 DEBUG : pacer: Reducing sleep to 405ms
2020/01/01 14:20:19 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 3624 size 48M offset 169.828G/468.253G
2020/01/01 14:20:31 DEBUG : pacer: Reducing sleep to 303.75ms
2020/01/01 14:20:31 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 3612 size 48M offset 169.266G/464.090G
2020/01/01 14:20:37 INFO :
Transferred: 416.985G / 1010.141 GBytes, 41%, 4.748 MBytes/s, ETA 1d11h32m18s
Transferred: 6 / 8, 75%
Elapsed time: 24h58m59.8s
Transferring:

  • BACKUP - VMS - AWSS3/B…2-21T000000_859F_W.vbk: 36% /468.253G, 2.182M/s, 38h53m38s
  • BACKUP - VMS - AWSS3/B…2-28T000000_E02C_W.vbk: 36% /464.090G, 2.999M/s, 27h57m25s
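The pacer lines above show rclone's backoff pattern in action: each failed request doubles the sleep (10 ms, 20 ms, ... 1.28 s), and each success scales it back down by 25% (1.28 s, 960 ms, 720 ms, 540 ms, 405 ms, 303.75 ms). A toy sketch of that policy, with constants inferred from this log rather than taken from rclone's source:

```python
class Pacer:
    """Toy exponential-backoff pacer mirroring the sleep values in the log."""

    def __init__(self, min_sleep: float = 0.01, max_sleep: float = 2.0):
        self.min_sleep = min_sleep
        self.max_sleep = max_sleep
        self.sleep = 0.0  # no delay while everything succeeds

    def on_failure(self) -> None:
        # Double the sleep on each failure, starting from the minimum.
        self.sleep = min(max(self.sleep * 2, self.min_sleep), self.max_sleep)

    def on_success(self) -> None:
        # Decay by 25% on success, dropping back to 0 below the minimum.
        self.sleep *= 0.75
        if self.sleep < self.min_sleep:
            self.sleep = 0.0

p = Pacer()
for _ in range(8):          # the 8 low-level retries in the excerpt above
    p.on_failure()
print(round(p.sleep, 2))    # -> 1.28
p.on_success()
print(round(p.sleep, 2))    # -> 0.96
```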

Event 2:

2020/01/01 22:08:56 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 5028 size 48M offset 235.641G/468.253G
2020/01/01 22:09:16 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 5017 size 48M offset 235.125G/464.090G
2020/01/01 22:09:16 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 5018 size 48M offset 235.172G/464.090G
2020/01/01 22:09:18 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 5029 size 48M offset 235.688G/468.253G
2020/01/01 22:09:25 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 5019 size 48M offset 235.219G/464.090G
2020/01/01 22:09:37 INFO :
Transferred: 548.798G / 1010.141 GBytes, 54%, 4.759 MBytes/s, ETA 1d3h34m23s
Transferred: 6 / 8, 75%
Elapsed time: 32h47m59.8s
Transferring:

  • BACKUP - VMS - AWSS3/B…2-21T000000_859F_W.vbk: 50% /468.253G, 1.414M/s, 46h47m1s
  • BACKUP - VMS - AWSS3/B…2-28T000000_E02C_W.vbk: 50% /464.090G, 3.216M/s, 20h14m10s

2020/01/01 22:09:43 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 5030 size 48M offset 235.734G/468.253G
2020/01/01 22:09:44 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-21T000000_859F_W.vbk: multipart upload starting chunk 5031 size 48M offset 235.781G/468.253G
2020/01/01 22:09:47 DEBUG : BACKUP - VMS - AWSS3/BACKUP - VMS - AWSS3D2019-12-28T000000_E02C_W.vbk: multipart upload starting chunk 5020 size 48M offset 235.266G/464.090G
2020/01/01 22:10:13 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-21T000000_859F_W.vbk?partNumber=5029&uploadId=ZISIhT47_wQnOL.RQCFKYLde1FVJltRBSfFd3BMshYMMBPC1nqCgMkvFPav3yxTT7iIBhJE1B7weRipQRG4SIjLRMvtonxSTAlGjQps2BdOCsb8UytEDrwUWUcAsfn0g: write tcp 192.168.0.197:57060->52.219.96.10:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 22:10:13 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2020/01/01 22:10:13 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-28T000000_E02C_W.vbk?partNumber=5019&uploadId=hND5kS9aH2977Jg2JJAWWvx1IF7PNzBCYqsJrCN.oqjFYs3a5hRLGW834slqYnsF79c7W829Zu92v7Wm7WJC9PHPdZdOMQQn_oxqZP5GU1h5XtoxdW0UaZGbLwa3MHp4: write tcp 192.168.0.197:57017->52.219.104.241:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 22:10:13 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2020/01/01 22:10:13 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-21T000000_859F_W.vbk?partNumber=5031&uploadId=ZISIhT47_wQnOL.RQCFKYLde1FVJltRBSfFd3BMshYMMBPC1nqCgMkvFPav3yxTT7iIBhJE1B7weRipQRG4SIjLRMvtonxSTAlGjQps2BdOCsb8UytEDrwUWUcAsfn0g: write tcp 192.168.0.197:57106->52.219.84.122:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 22:10:13 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2020/01/01 22:10:13 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-28T000000_E02C_W.vbk?partNumber=5018&uploadId=hND5kS9aH2977Jg2JJAWWvx1IF7PNzBCYqsJrCN.oqjFYs3a5hRLGW834slqYnsF79c7W829Zu92v7Wm7WJC9PHPdZdOMQQn_oxqZP5GU1h5XtoxdW0UaZGbLwa3MHp4: write tcp 192.168.0.197:57071->52.219.96.10:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 22:10:13 DEBUG : pacer: Rate limited, increasing sleep to 80ms
2020/01/01 22:10:13 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-28T000000_E02C_W.vbk?partNumber=5020&uploadId=hND5kS9aH2977Jg2JJAWWvx1IF7PNzBCYqsJrCN.oqjFYs3a5hRLGW834slqYnsF79c7W829Zu92v7Wm7WJC9PHPdZdOMQQn_oxqZP5GU1h5XtoxdW0UaZGbLwa3MHp4: write tcp 192.168.0.197:57007->52.219.84.210:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 22:10:13 DEBUG : pacer: Rate limited, increasing sleep to 160ms
2020/01/01 22:10:13 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-21T000000_859F_W.vbk?partNumber=5030&uploadId=ZISIhT47_wQnOL.RQCFKYLde1FVJltRBSfFd3BMshYMMBPC1nqCgMkvFPav3yxTT7iIBhJE1B7weRipQRG4SIjLRMvtonxSTAlGjQps2BdOCsb8UytEDrwUWUcAsfn0g: write tcp 192.168.0.197:57030->52.219.100.42:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 22:10:13 DEBUG : pacer: Rate limited, increasing sleep to 320ms
2020/01/01 22:10:13 DEBUG : pacer: low level retry 1/10 (error RequestError: send request failed
caused by: Put https://s3.us-east-2.amazonaws.com/aws-globalaircargo/BetaTest/BACKUP%20-%20VMS%20-%20AWSS3/BACKUP%20-%20VMS%20-%20AWSS3D2019-12-28T000000_E02C_W.vbk?partNumber=5017&uploadId=hND5kS9aH2977Jg2JJAWWvx1IF7PNzBCYqsJrCN.oqjFYs3a5hRLGW834slqYnsF79c7W829Zu92v7Wm7WJC9PHPdZdOMQQn_oxqZP5GU1h5XtoxdW0UaZGbLwa3MHp4: write tcp 192.168.0.197:56948->52.219.88.43:443: wsasend: An existing connection was forcibly closed by the remote host.)
2020/01/01 22:10:13 DEBUG : pacer: Rate limited, increasing sleep to 640ms
2020/01/01 22:10:37 INFO :
Transferred: 548.939G / 1010.141 GBytes, 54%, 4.758 MBytes/s, ETA 1d3h34m17s
Transferred: 6 / 8, 75%
Elapsed time: 32h48m59.8s
Transferring:

  • BACKUP - VMS - AWSS3/B…2-21T000000_859F_W.vbk: 50% /468.253G, 237.663k/s, 284h51m4s
  • BACKUP - VMS - AWSS3/B…2-28T000000_E02C_W.vbk: 50% /464.090G, 198.653k/s, 335h26m23s

2020/01/01 22:11:37 INFO :
Transferred: 548.939G / 1010.141 GBytes, 54%, 4.756 MBytes/s, ETA 1d3h35m8s
Transferred: 6 / 8, 75%
Elapsed time: 32h49m59.8s
Transferring:

  • BACKUP - VMS - AWSS3/B…2-21T000000_859F_W.vbk: 50% /468.253G, 4.945k/s, 13688h2m33s
  • BACKUP - VMS - AWSS3/B…2-28T000000_E02C_W.vbk: 50% /464.090G, 4.134k/s, 16119h0m3s

Good news, and I look forward to the update!

I'm glad the low level retries are working properly now - that was one of the major goals of the re-write and it is great to have experimental validation.

Hello Nick, glad to give you good news here!

The transfer completed successfully, and the behavior was exactly as expected. Memory consumption was considerably lower, and any connection interruptions were handled properly with low-level retries, without the total size growing as in the previous logs (if you'd like, I can send you the whole log).
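For anyone hitting the same issue, here is a sketch of the kind of invocation involved. The bucket and path names are placeholders, and the chunk size shown is illustrative only; rclone also scales the chunk size up automatically so a file stays under S3's 10,000-part multipart limit (which is why the logs above show 48M chunks for a ~468G file).

```shell
rem Sketch of a large-file sync to S3 (placeholder remote/bucket names).
rem --s3-chunk-size is illustrative; rclone raises it automatically if a
rem file would otherwise exceed S3's 10,000-part multipart limit.
rclone sync H:\Backups mytest:my-bucket/Backups2 ^
  --s3-chunk-size 64M ^
  --s3-upload-concurrency 4 ^
  --low-level-retries 10 ^
  --retries 5 ^
  --progress -vv
```

Note that a larger chunk size raises memory use (roughly chunk size × upload concurrency per transfer), which is the trade-off the rewrite improved.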

I'm running a "rclone check" just to be sure, but the size and mod time were correct at the end of the upload. I also did some reading in the forum and realized that rclone and the AWS multipart uploader check the MD5 for each chunk (see https://github.com/rclone/rclone/issues/523), which helps with our peace of mind :slight_smile:
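That verification step looks something like the following (remote and path names are placeholders):

```shell
rem Compare source and destination after the upload. By default this checks
rem sizes and hashes where the backend supports them; --size-only would
rem skip the hash comparison and compare sizes alone.
rclone check H:\Backups mytest:my-bucket/Backups2 -vv
```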

Thanks again for the follow-up and support. Regards,

Excellent - thanks for testing!

I've merged this to master now which means it will be in the latest beta in 15-30 mins and released in v1.51


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.