Retry corrupted writes not working

What is the problem you are having with rclone?

I mount my (hubic) backend.

Then I run duplicacy over it, and some chunks get corrupted during transfer.

However, rclone does not retry the transfer; it just fails.

How can I convince it to retry the upload instead?

What is your rclone version (output from rclone version)

rclone v1.53.3

  • os/arch: linux/amd64
  • go version: go1.15.5

Which OS you are using and how many bits (eg Windows 7, 64 bit)

rclone v1.53.3

  • os/arch: linux/amd64
  • go version: go1.15.5

Which cloud storage system are you using? (eg Google Drive)

Hubic

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount hubic:default /mnt/hubic_svfs/  --retries 20 --low-level-retries 20

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Uploading segments into "default_segments" seems done (EOF)
2021/01/08 10:04:21 DEBUG : duplicacy/chunks/bb/a5020e06282acccc683436950ddf86328523ec7b2c2d360ae9ec8a617855cc.njrfxjcp.tmp: Sizes differ (src 1395240 vs dst 0)
2021/01/08 10:04:21 ERROR : duplicacy/chunks/bb/a5020e06282acccc683436950ddf86328523ec7b2c2d360ae9ec8a617855cc.njrfxjcp.tmp: corrupted on transfer
2021/01/08 10:04:21 ERROR : duplicacy/chunks/bb/a5020e06282acccc683436950ddf86328523ec7b2c2d360ae9ec8a617855cc.njrfxjcp.tmp: WriteFileHandle.New Rcat failed: corrupted on transfer
2021/01/08 10:04:21 ERROR : duplicacy/chunks/bb/a5020e06282acccc683436950ddf86328523ec7b2c2d360ae9ec8a617855cc.njrfxjcp.tmp: WriteFileHandle.Flush error: corrupted on transfer

Hmm..

Try putting no_chunk = true into your remote definition as a work-around.
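For example, assuming the remote is named hubic as in the mount command (existing auth keys omitted), the section in rclone.conf would end up looking something like this:

    [hubic]
    type = hubic
    no_chunk = true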

I'm not 100% sure what is causing this but it is something to do with chunking.

If that doesn't work, can you run the mount with -vv and attach (or put on pastebin) a complete log?
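Something along these lines, with --log-file added so the whole run is captured (the log path is just an example):

    rclone mount hubic:default /mnt/hubic_svfs/ --retries 20 --low-level-retries 20 -vv --log-file /tmp/rclone-mount.log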

Another observation... if I start rclone with, say, --transfers 4 and then “feed” the data to it from duplicacy using fewer than 4 threads,

the failed file seems to be re-uploaded.

If I use 4 or more threads, the file is not re-uploaded.

So is it possible that rclone is “oversaturated” with requests and just reports an error back because it “has no time/slot to re-upload”?
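If that theory is right, raising the limit should change the behaviour. A sketch of the experiment (the value 8 is arbitrary, and whether --transfers really gates streaming uploads on a mount is exactly what this would probe):

    rclone mount hubic:default /mnt/hubic_svfs/ --transfers 8 --retries 20 --low-level-retries 20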

OK, so it behaves randomly.

Sometimes the write is re-done; sometimes rclone reports an error back and duplicacy gets an I/O error.

Maybe it could be fixed with VFS write caching? Then duplicacy would get an OK and the backup would continue?

But I am afraid that could lead to inconsistencies - duplicacy would think everything is OK when the write might still fail later?

Is there a way to monitor the progress of the cached background writes?
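For reference, enabling the write cache only needs --vfs-cache-mode writes on the mount; a minimal sketch (the --cache-dir path is an assumption, and that flag itself is optional):

    rclone mount hubic:default /mnt/hubic_svfs/ --vfs-cache-mode writes --cache-dir /var/cache/rclone --retries 20 --low-level-retries 20

As for monitoring, running with -v should log the uploads as they complete out of the cache, and the remote control interface (--rc, then rclone rc core/stats) reports in-flight transfers, though I haven't verified that it covers cache write-back on v1.53.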

So when I enable the VFS write cache, it seems to be uploading in the background OK...

VFS write caching should be more reliable than not using it as rclone has the opportunity to retry uploads when things go wrong.

I suspect that is what is happening - there is some failure in the upload which is causing it not to work properly. Maybe duplicacy isn't noticing the error message from rclone and assumes everything is OK?

I'd really need to see a complete log with -vv to be sure.

I’ve posted the relevant section of the -vv log... anyway, I think we can close this as it seems to be running fine with the caching - many thanks!

