How to deal with HTTP error 524 that causes deletion of good files?

When I want to see which files will be added to or removed from a local copy when syncing an HTTP remote protected by Cloudflare, I run rclone with the '--dry-run' option. rclone reports that it is going to remove a lot of files which are still definitely present on the remote. The list of files to be deleted changes each time I run the command: sometimes it grows, other times it shrinks, and in some cases it contains only files which really are missing on the remote (which is what I would expect to see in 100% of the runs).

When I run rclone in debug mode I see that the problem occurs because the HTTP server returns error 524 for the problematic files, and because of that rclone decides that the file is not on the remote. The error in the log looks like this:

DEBUG : <name of some file>: skipping because of error: failed to stat: HTTP Error: 524

In fact this error is transient: the error code is specific to Cloudflare, and it basically means that the origin web server is taking too long to respond. The Cloudflare documentation offers a few solutions (like paying much more for its services), but none of them are applicable here.
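As a quick sanity check that an affected file is actually still reachable, a plain HEAD request can be made from the same machine; the URL below is just a placeholder built from the same placeholders used elsewhere in this post:

# prints only the HTTP status code; a 200 here while rclone logs 524
# suggests the failure is transient rather than the file being gone
curl -sI -o /dev/null -w '%{http_code}\n' '<remote url>/<remote path>/<name of some file>'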

Is it possible to make rclone retry such errors until the operation eventually succeeds?

One workaround that I found is the '--tpslimit' option (I set it to 20), and for the time being it seems to help, but on the other hand it greatly increases the duration of an operation (by a factor of 20). An even bigger problem is that I don't think it will solve the problem in 100% of cases: when there is a greater load on the remote, this option may still not prevent the error.
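For reference, the workaround run looks roughly like this (the URL and paths are the same placeholders as in the command given further below):

# same dry-run sync, but capped at 20 HTTP transactions per second
rclone --config="" -vv sync --dry-run --tpslimit 20 --http-url <remote url> <remote path> <local path>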

I suppose another workaround would be to decrease '--checkers', but it suffers from exactly the same problems as the previous one.
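If I went that route, the invocation would look something like this (placeholders as before):

# fewer concurrent checkers (default is 8), so fewer stat requests hit the origin at once
rclone --config="" -vv sync --dry-run --checkers 2 --http-url <remote url> <remote path> <local path>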

Finally, I thought of increasing '--low-level-retries', but error 524 does not seem to be retried at all, so it won't help.
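For completeness, this is what I had in mind, although as said it does not appear to change how the 524 is handled (placeholders as before):

# raise low-level retries above the default of 10; 524 does not seem to be
# treated as a retryable error, so this apparently has no effect here
rclone --config="" -vv sync --dry-run --low-level-retries 20 --http-url <remote url> <remote path> <local path>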

In other words, the workarounds above are not good enough, and I am looking for a better way that guarantees that good files are not removed during a sync.

hello,

can you please answer the questions in the help and support templates.
that helps us to help you.

0 is the default value, so it should not make a difference

Sorry, somehow I dropped a digit during copy/paste and didn't notice: I used '20', not '0' (I fixed the mistake in the first post as well).

Here is the problem, following the template:

What is the problem you are having with rclone?

During a dry run of a sync from an http remote protected by Cloudflare, rclone reports that it is going to remove files which are actually present on the remote.

Run the command 'rclone version' and share the full output of the command.

rclone v1.60.0

  • os/version: arch (64 bit)
  • os/kernel: 6.0.8-arch1-1 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.19.2
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

http remote

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone --config="" -vv sync --dry-run --http-url <remote url> <remote path> <local path>

The rclone config contents with secrets removed.

no config is used

A log from the command with the -vv flag

DEBUG : <name of some file>: skipping because of error: failed to stat: HTTP Error: 524
...
NOTICE: <name of some file>: Skipped delete as --dry-run is set (size X Mb)

can you include a full or more complete debug log, including the top 20+ lines?
not just one-line snippets.
something that makes clear what rclone is doing and not doing with the 524.
you might try --dump=headers
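for example, something along these lines, with your real url and paths substituted (the log file name is just an example):

# full debug log including request/response headers, written to a file
rclone --config="" -vv --dump=headers --log-file rclone-524.log sync --dry-run --http-url <remote url> <remote path> <local path>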

if rclone does not handle the 524, maybe a change is needed.

A full log is here:

I removed all the lines about newly copied and unmodified files (they do not refer to the problematic files in any way).

From the current log it seems that nothing is done with error 524. Looking at the code, if rclone encounters an error while checking a file (the 524 response from the server in this case), the file is just skipped and no further processing is done on it.

at this point, i think we have enough info for someone more experienced to comment.
