How best to use --retries and --low-level-retries?

What is the problem you are having with rclone?

When copying files to a remote, rclone will display errors e.g. file not found, SHA-1 differs, etc.

If a file has not been copied successfully from the source to the destination, what is the recommendation on using --retries and --low-level-retries? The aim is to instruct rclone to keep retrying until a file has been copied successfully, and to exit if failures continue after x minutes/hours/days.

#### What is your rclone version (output from rclone version)
rclone v1.53.2

#### Which OS you are using and how many bits (eg Windows 7, 64 bit)
Microsoft Windows 10 Professional version 1909 Build 18636.1110

#### Which cloud storage system are you using? (eg Google Drive)
Microsoft OneDrive

--low-level-retries means that rclone tries each API call that many times before giving up.

--retries retries the whole sync that many times before giving up.

The defaults work pretty well mostly.

You'll need to use -vv to see the low level retries. With some backends (eg Google Drive) they are common.
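To make that concrete, here is a minimal sketch of how the two flags are typically combined (the remote name onedrive:backup and the source path are placeholders, and the values are just examples, not recommendations):

```
# Retry the whole sync up to 5 times, and each failing API call up to 20 times.
# -vv (debug level) is needed to see the individual low level retries in the output.
rclone copy /path/to/source onedrive:backup --retries 5 --low-level-retries 20 -vv
```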


Thanks @ncw. I didn't quite get that. Do you mean that if a file or files have not been copied successfully and don't match between source and destination, --low-level-retries would continue calling the API to complete the operation x times (x being a value such as 100)?

  • If yes, how many API calls would be made with OneDrive or GDrive, for example?
  • How much longer would this take in comparison to --retries?
  • When should --low-level-retries or --retries be used?
  • Can they be used at the same time?

Could you elaborate on using -vv to see low level retries? I didn't understand what you meant by "seeing" them, or by "they are common".

No, as a retry means something failed, like a bad network connection or something along those lines. It's a low level error.

Yes, you can configure them any way you like. The higher the values, the longer it takes if it encounters errors, as it's dependent on what the remote is doing and what you want to do.

You need to use -vv to see low level retries as they are not in the general -v / info level of logging.
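For example (again with placeholder paths and remote name), the debug output can also be written to a file so the low level retries are easier to spot afterwards; --log-level DEBUG is the same level as -vv:

```
rclone copy /path/to/source onedrive:backup --log-level DEBUG --log-file rclone-debug.log
```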

@Animosity022, do you mean that --low-level-retries is an attempt to retry a copy because of a network issue that has resulted in a mismatch between a file/files in the source and destination? If yes, it would only attempt to retry the failed files. Wouldn't --retries do the same?

Yes, you can configure them any way you like. The higher the values, the longer it takes if it encounters errors, as it's dependent on what the remote is doing and what you want to do.

I didn't quite get this. If the value is set as 100 for both --low-level-retries and --retries, do you mean that it would attempt to perform the operation 100 times for each file that doesn't match the source?

It doesn't really have anything to do with the file being checked, as it's a layer below that. It's written up here:

It's more like: you are making a request, something happens that breaks the connection, and it retries.

A retry is a level above that: the remote might return an error because it's overloaded or for whatever reason, and rclone will retry that.

Retries have nothing to do with a file matching or not matching. Say you have an internet outage and you have 100 retries: it'll cycle through 100 times until it fails, so it just takes longer to fail.
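As a rough sketch of what that means for timing (the values are only illustrative): with --retries 100 and a pause between attempts set with --retries-sleep, a sync that can never succeed will cycle through all 100 attempts, waiting the sleep interval between each, before rclone finally exits with an error.

```
# If the destination stays unreachable, this cycles through the sync 100 times,
# waiting 30 seconds between attempts, before giving up.
rclone copy /path/to/source onedrive:backup --retries 100 --retries-sleep 30s
```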

Thanks @Animosity022.

If I have understood you correctly, a low-level retry resolves an error or failure other than one from the remote, while a retry resolves an error returned by the remote.

If yes, how do I combine these to retry a mismatch between source and destination x times for y long?

You don't, as those things are not related.

Low level retries are for network level events and retries are for remote level events.

Neither does anything about a check coming back not the same.

In the Windows world, if a file is being constantly written to, you'd use VSS: take a snapshot, copy, then delete the snapshot.

Thanks. In that case, what is the option to retry a mismatch automatically?

Rerun the same command again.
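As a sketch (paths and remote name are placeholders): rclone copy skips files that already match the destination, so rerunning the same command only re-transfers the files that failed or differ.

```
# First run: some files may fail.
rclone copy /path/to/source onedrive:backup --retries 5 --low-level-retries 20

# Second run: files that already match are skipped, so only the failed or
# mismatched files get copied again.
rclone copy /path/to/source onedrive:backup --retries 5 --low-level-retries 20
```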

Is there no option to retry the same file again without running the command again?

Or alternatively, if --transfers has been specified, e.g. 2, and there is a mismatch, rclone would continue processing a single file while simultaneously retrying the mismatched file. Once successful, it would resume 2 transfers again.

That's correct.

It would be great to include a flag or an option to retry a mismatched file without running the command again.

For example, if --transfers has been specified, e.g. 2, and there is a mismatch, rclone would continue processing a single file while simultaneously retrying the mismatched file. Once successful, it would resume 2 transfers again.

Can this be included as a feature @ncw?

You should check out:

As that's a real solution on Windows rather than copying files in use.

Thanks @Animosity022.

I was reading the article just as you posted. It would be great to have a native option, so that the solution remains portable regardless of whether it's Windows or any other OS.

Is there a way to feed the errors, e.g. SHA-1 differs, back to rclone so that it processes them immediately as part of the running command?

Hi,
I wrote that wiki. If you have any questions, let me know.

@asdffdsa. Thanks. It's a great option for a Windows only environment.

As I am now continually switching between Linux and Windows, I was hoping for a portable solution between both. The sources are external hard drives.

After rclone has finished running, parse the log file.

Then feed whatever files you want to retry back in using https://rclone.org/filtering/#include-from-read-include-patterns-from-file
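A minimal sketch of that workflow, assuming the log was written with --log-file and that retry-list.txt is a hand-built (or scripted) list of the relative paths that failed, one include pattern per line:

```
# 1. Run the copy and keep a log.
rclone copy /path/to/source onedrive:backup --log-file rclone.log -v

# 2. Find the failed files (rclone logs them at ERROR level).
grep ERROR rclone.log

# 3. Put the relative paths of the files to retry into retry-list.txt,
#    then copy only those files again.
rclone copy /path/to/source onedrive:backup --include-from retry-list.txt
```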

Thanks @asdffdsa. Is there an option to parse it natively with rclone?

What I did was write 350+ lines of Python code for all my backup needs.
It runs 7zip, fastcopy, rclone, veeam, VSS snapshots and a small, simple .ini database.

That script scans the log files from all of those and sends email reports.