Rclone copying a dying/dead drive: skip problematic files early?

I am using rclone (latest stable) on Windows 10 to back up files from a dying/dead hard drive to an encrypted Google Drive remote. The command I'm currently using is:

rclone copy "F:\Downloads" "gcrypt:/Downloads" --transfers 10 --checkers 10 --stats 10s --drive-chunk-size 256M --progress

Since it's a dying/dead hard drive, there are understandably a lot of problematic files. These show up in rclone like this:

2020-12-30 15:09:33 ERROR : ExampleFile.mp4: Failed to copy: Post "https://www.googleapis.com/upload/drive/v3/files?alt=json&fields=id%2Cname%2Csize%2Cmd5Checksum%2Ctrashed%2CexplicitlyTrashed%2CmodifiedTime%2CcreatedTime%2CmimeType%2Cparents%2CwebViewLink%2CshortcutDetails%2CexportLinks&supportsAllDrives=true&uploadType=resumable&upload_id=ABg5-UwqoQPFKChOfUKpDgXt3-_fkxi8GIbkN6hmDdzBJ8MLU6igT_0zNCzca5MzszwTOxLgcnan1wJzSkI6EC7C1Ts": read \\?\F:\Downloads\ExampleFile.mp4: Data error (cyclic redundancy check).

However, rclone spends a very long time trying and retrying these files, so I have multiple files in the currently-transferring queue with a speed of 0/s or an ETA in the thousands or hundreds of thousands of hours. Anecdotally, rclone can try for many hours before "giving up", posting the error message and moving on to the next file. These files are clogging up the queue and slowing down the overall salvage process for the remaining "good" files.

Is it possible to ask rclone to "give up early" on these files?

What you could do is a first pass with --low-level-retries 1. This means rclone will try each operation once rather than 10 times, which should help.
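
For example, something like this (just the original command from the first post with both retry knobs turned down; adjust to taste):

rclone copy "F:\Downloads" "gcrypt:/Downloads" --transfers 10 --checkers 10 --stats 10s --drive-chunk-size 256M --progress --low-level-retries 1 --retries 1

--low-level-retries is the per-operation retry count (default 10) and --retries is how many times the whole copy is re-run (default 3).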

I've had a lot of success with ddrescue in the past, which copies the good stuff off as quickly as it can and then works on the more damaged parts of broken disks. It works on a sector-by-sector basis. You can then use a tool like photorec to recover the files from the (not quite complete) image. This needs as much free disk space as the size of the disk, though.
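
Very roughly, the workflow looks like this on a Linux rescue boot (the device name /dev/sdX and the file names are placeholders):

ddrescue -n /dev/sdX disk.img rescue.map
ddrescue -d -r3 /dev/sdX disk.img rescue.map
photorec disk.img

The first pass (-n, no scraping) grabs everything easily readable, the second pass (-d -r3) goes back over the damaged areas with direct access and a few retries, and photorec then carves whatever files it can out of the (partial) image.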

I've used this technique many times to rescue things off broken disks, highlights being an unfinished novel, and half the photos off an SD card which had been returned from a professional recovery place as unrecoverable!


Thanks @ncw! Exactly what I'm looking for.

Unfortunately, I don't have a disk of equal or larger size for the ddrescue method, though. Is there anything else I should look into?

EDIT: Actually, giving it a go now, it doesn't seem to be that aggressive in pruning out bad files. I'm still seeing files (known to be bad from previous copies) sit at a ~10-digit-hour ETA for over 40 minutes without being skipped. Is there an even more aggressive filter available? Maybe something that gives up on anything with an ETA > 100 hrs? Or some way to manually tell it to skip certain files on the fly?

That's the sharpest tool in my data rescue toolkit :slight_smile: If necessary I buy a second disk to use it.

The OS will be doing loads of retries here. It will give up eventually, but it is quite tenacious.

Good ideas there. There isn't anything like that at the moment; however, users have requested killing connections if the bitrate drops below a threshold for a time, which is equivalent to your ETA idea.

What you could do is keep a list of bad file names and exclude them.
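
For example, collect the names that have failed (from the log) into a text file, one rclone filter pattern per line, and pass it with --exclude-from (the file name and the second entry are just examples):

rclone copy "F:\Downloads" "gcrypt:/Downloads" --exclude-from bad-files.txt --low-level-retries 1 --transfers 10 --checkers 10 --stats 10s --drive-chunk-size 256M --progress

where bad-files.txt contains lines like:

ExampleFile.mp4
SomeSubfolder/AnotherBadFile.mkv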

You may also be able to tell the OS to try less hard; I think this is how ddrescue works. That would be worth trying: turn down the retries on the SATA or SCSI bus.
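
I don't know of a clean way to do that on Windows, but on a Linux rescue boot you can at least shorten how long the kernel waits on each command before giving up (sdX is a placeholder for the dying disk):

echo 7 | sudo tee /sys/block/sdX/device/timeout

That lowers the per-command SCSI timeout from the usual 30 seconds to 7 so bad sectors fail faster; it doesn't change the drive's own internal retries.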


You can use Acronis; they have a boot disk that will back up an entire computer or just selected folders. It has an option to compress the files.

Another option is to copy the files to a Blu-ray disc; I have copied over 100GB using a single disc. Otherwise, copy them to DVDs with a burner.

