Forcing pauses between files

What is the problem you are having with rclone?

As I am scripting some routines to back up several FTP accounts to my local drive, one of them fails with random "connection refused" errors.
My guess is that rclone is hitting the server too fast, even with transfers and checkers set to "1". I have tried several combinations of transfers, checkers, timeouts and retries, but had no luck.

I wonder if there is any parameter to force a delay between each file download attempt.
Of course this would impact the time taken to back up the account, but if it works it would be an acceptable solution for me.
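
To illustrate what I mean, the only workaround I can think of is a wrapper script that copies files one at a time with a sleep in between. This is a rough sketch only, assuming my RemoteFTP: remote and testdir destination from the command below; the one-second delay is arbitrary, and the initial listing still hits the server:

# Sketch: list every remote file, then fetch them one by one
# with a pause between download attempts.
rclone lsf -R --files-only RemoteFTP: | while IFS= read -r f; do
    rclone copyto "RemoteFTP:$f" "testdir/$f"
    sleep 1   # delay between file download attempts (tune as needed)
done

A built-in flag would of course be preferable, since this loses rclone's own sync logic.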

Running multiple passes does not help either because it always fails on different files.

Thanks.

What is your rclone version (output from rclone version)

rclone v1.53.3

  • os/arch: linux/arm64
  • go version: go1.15.5

Which OS are you using and how many bits (eg Windows 7, 64 bit)

Debian ARM64

Which cloud storage system are you using? (eg Google Drive)

Remote FTP to Local HDD

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync \
  --delete-during --ignore-existing --one-file-system --skip-links \
  --transfers 1 --checkers 1 \
  --contimeout 30s --timeout 300s \
  --retries 3 --retries-sleep 30s --low-level-retries 3 \
  --verbose --log-file=rclonelog \
  --delete-excluded <very-long-exclusion-list> \
  --stats 10s --stats-file-name-length 0 --fast-list \
  RemoteFTP: testdir

The rclone config contents with secrets removed.

[FTP-account]
type = ftp
host = *********
user = *********
port = ****
pass = *********

A log from the command with the -vv flag

2020/12/19 18:43:13 ERROR : public_html/webapp/internal/include/PHPExcel/Worksheet/Drawing: error reading source directory: dial tcp ***.***.**.**:50000: connect: connection refused
2020/12/19 18:43:17 ERROR : public_html/webapp/js/Charts/exporting-server/java/highcharts-export/highcharts-export-web/target/classes/com/highcharts/export: error reading source directory: dial tcp ***.***.**.**:50000: connect: connection refused
2020/12/19 18:43:18 ERROR : public_html/webapp/js/Charts/exporting-server/java/highcharts-export/highcharts-export-web/src/main/webapp/WEB-INF/jspf: error reading source directory: dial tcp ***.***.**.**:50000: connect: connection refused

If you are spawning too many connections, you can limit transfers/checkers, as that'll reduce connections.

Sorry, my string had variables and wasn't showing on screen. Fixed that now.

I have turned my connection settings down as low as I could based on my understanding of the switches, and I'm using 1x1 for both transfers and checkers.

If there is something more I could try, I haven't been able to find it in the docs or the forum.

Also, I am running the routines sequentially, not simultaneously.

Thanks

I'm not sure what the issue is as of yet, as the log just shows "connection refused", meaning the other side isn't running.

Can you share a full debug log of the issue? Run the command with -vv and share the full output.

Hi Animosity022, thanks for the feedback,

I don't think the full log will provide any further info, and it would be quite large. It shows several successful transfers, and then some random files failing here and there.

It is not going offline, as many files are transferred before and after the failures; more likely the server is throttling by refusing connections. Hence my question about a switch that would make rclone "less aggressive" against this particular FTP server.

I am not seeking to debug an error; rather, I am asking if there is something like nmap's timing switch, which reduces the risk of being blocked while it scans a target.
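
The closest thing I have found so far is --bwlimit, but that throttles bandwidth rather than connection rate, so I doubt it addresses the refusals; for completeness, this is the kind of thing I mean (the 500k value is just an example):

# Hypothetical attempt: cap throughput in case the server is rate-sensitive
rclone sync --bwlimit 500k --transfers 1 --checkers 1 RemoteFTP: testdir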

Thanks again.

Sure seems like you are asking to debug an error.

The log provides context for what was happening around the failure, the exact command, and the version, giving a complete picture.

Transfers 1 and checkers 1 is the best you can do, and without a log I can't offer much else, since I can't see what is going on.

You can try using --ftp-concurrency, which may help, though I note that this flag has been the cause of lots of deadlocks in the past!

If --tpslimit worked with the FTP backend that would be useful too - maybe I should make it work...
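
If it did work there, usage would presumably look something like this (values purely illustrative; --tpslimit caps transactions per second and --tpslimit-burst allows short bursts):

rclone sync --tpslimit 1 --tpslimit-burst 1 --transfers 1 --checkers 1 RemoteFTP: testdir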

Hi Nick, thanks for your reply,

I just tried --ftp-concurrency with both "3" and "1", but it seems to make no difference in this case.

" --tpslimit " for what I gather from the Docs seems like it would be the switch I need, if it worked with ftp connections...

If you feel like adding it, that would be great, and I'll do my best to help test it against this stubborn server.
On the other hand, this looks like an edge case, and making a "feature request" out of it just so I can back up this one site feels too selfish. Perhaps I can find another way around this.
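
One idea is a wrapper that reruns the sync until it exits cleanly, pausing between passes to give the server a breather, since plain back-to-back reruns did not help. A sketch only; my exclusion list is omitted and the 60-second pause is arbitrary:

# Keep retrying the whole sync until rclone exits with status 0
until rclone sync --transfers 1 --checkers 1 RemoteFTP: testdir; do
    echo "sync failed, sleeping before the next pass..."
    sleep 60
done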

Anyway, thanks for the help to both of you.

A full debug log might get to the bottom of it.

Can you please make a new issue on GitHub about this so we don't forget!

Hi there, hope you are off to a great start in 2021.

Great then, thanks a ton for the great attitude!

I haven't been able to look back at this until now, so I just opened a feature request:

I hope I included everything as needed.
Best regards.
