--max-duration: change the hard limit to a soft limit

I've been downloading with rclone from an anonymous FTP server. It has some files I would like to preserve should the FTP cease to exist. The problem is that the FTP is restricted in both speed (50 kB/s) and the number of connections at any one time.

Also, if you stay connected to the FTP for too long, your speed is reduced further.

rclone's --max-duration flag works well against this (no more speed reductions), but it frequently cuts in during a multi-part transfer, because --max-duration is a hard limit rather than a soft limit like the 750 GB daily upload limit Google Drive has: once that limit is reached, a transfer already in progress is allowed to finish, but no further ones will start.

@ncw Could --max-duration be changed so that it is no longer a hard limit, but instead takes transfers in progress into account and lets them finish, rather than cutting the connection halfway through?

2020/03/15 11:05:09 DEBUG : filename.zip: multi-thread copy: stream 4/4 failed: context deadline exceeded
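That "context deadline exceeded" is what I mean by a hard cutoff. Roughly, the difference I'm asking for looks something like this (a conceptual sketch only, not rclone's actual code; the function and parameter names are made up):

// Conceptual sketch, not rclone's code: a hard --max-duration puts a deadline
// on the context each transfer runs under, so in-flight copies get cancelled
// with "context deadline exceeded"; a soft limit would only stop *new*
// transfers from being scheduled and let running ones finish.
package main

import (
	"context"
	"time"
)

func runTransfers(files []string, maxDuration time.Duration, hard bool,
	transfer func(ctx context.Context, name string) error) {

	start := time.Now()
	ctx := context.Background()
	if hard {
		// Hard limit: the shared context expires at the deadline and cancels
		// any transfer that is still running.
		var cancel context.CancelFunc
		ctx, cancel = context.WithTimeout(ctx, maxDuration)
		defer cancel()
	}

	for _, name := range files {
		if time.Since(start) > maxDuration {
			// Soft limit: the clock only decides whether to start the next
			// file; whatever is already running is left to finish.
			break
		}
		_ = transfer(ctx, name)
	}
}

func main() {
	// Hypothetical usage: a soft one-hour limit.
	runTransfers([]string{"filename3.zip"}, time.Hour, false,
		func(ctx context.Context, name string) error { return nil })
}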

Edit

Here is the command I am using.

/usr/bin/rclone copy ftp:'/path/to/files' '/path/to/files' --checkers 1 --checksum --create-empty-src-dirs --drive-chunk-size 256M --ftp-concurrency 1 --log-level DEBUG --max-backlog 32 --max-duration 1h --order-by 'size,ascending' --progress --retries 3 --retries-sleep 5s --stats 10s --stats-file-name-length 30 --stats-one-line-date --tpslimit 1 --tpslimit-burst 1 --transfers 1;

My rclone version

rclone v1.51.0
- os/arch: linux/amd64
- go version: go1.13.7

Debug Log

2020/03/15 10:05:07 DEBUG : rclone: Version "v1.51.0" starting with parameters ["/usr/bin/rclone" "copy" "ftp:/path/to/files" "/path/to/files" "--bwlimit" "07:00,100k 23:00,off" "--checkers" "1" "--checksum" "--create-empty-src-dirs" "--drive-chunk-size" "256M" "--ftp-concurrency" "1" "--log-level" "DEBUG" "--max-backlog" "32" "--max-duration" "1h" "--order-by" "size,ascending" "--progress" "--retries" "3" "--retries-sleep" "5s" "--stats" "10s" "--stats-file-name-length" "30" "--stats-one-line-date" "--tpslimit" "1" "--tpslimit-burst" "1" "--transfers" "1"]
2020/03/15 10:05:08 DEBUG : Using config file from "~/.rclone.conf"
2020/03/15 10:05:08 INFO  : Starting bandwidth limiter at 100kBytes/s
2020/03/15 10:05:08 INFO  : Starting HTTP transaction limiter: max 1 transactions/s with burst 1
2020/03/15 10:05:08 DEBUG : ftp://hostname:21/path/to/files: Connecting to FTP server
2020/03/15 10:05:09 INFO  : Local file system at /path/to/files: Transfer session deadline: 2020/03/15 11:05:09
2020/03/15 10:05:21 NOTICE: Local file system at /path/to/files: --checksum is in use but the source and destination have no hashes in common; falling back to --size-only
2020/03/15 10:05:21 DEBUG : filename1.xlsx: Size of src and dst objects identical
2020/03/15 10:05:21 DEBUG : filename2.xlsx: Unchanged skipping
2020/03/15 10:05:21 DEBUG : filename3.zip: Sizes differ (src 1088654036 vs dst 822067200)
2020/03/15 10:05:21 DEBUG : preAllocate: got error on fallocate, trying combination 1/2: operation not supported
2020/03/15 10:05:21 DEBUG : preAllocate: got error on fallocate, trying combination 2/2: operation not supported
2020/03/15 10:05:21 DEBUG : filename3.zip: Starting multi-thread copy with 4 parts of size 259.562M
2020/03/15 10:05:21 DEBUG : filename3.zip: multi-thread copy: stream 4/4 (816513024-1088654036) size 259.534M starting
2020/03/15 10:05:21 DEBUG : filename3.zip: multi-thread copy: stream 1/4 (0-272171008) size 259.562M starting
2020/03/15 10:05:21 DEBUG : filename3.zip: multi-thread copy: stream 2/4 (272171008-544342016) size 259.562M starting
2020/03/15 10:05:21 DEBUG : filename3.zip: multi-thread copy: stream 3/4 (544342016-816513024) size 259.562M starting
2020/03/15 11:05:09 DEBUG : filename3.zip: multi-thread copy: stream 4/4 failed: context deadline exceeded

Hmm, that is not what the documentation says it does...

--max-duration=TIME

Rclone will stop scheduling new transfers when it has run for the duration specified.
Defaults to off.
When the limit is reached any existing transfers will complete.
Rclone won’t exit with an error if the transfer limit is reached

--max-transfer, however, is a hard limiter that just cuts existing transfers (you aren't confusing the two?)

I just did a test for you to verify the documentation is not mistaken:

[ 2:57:27,01]
C:\rclone>rclone copy F:\testfile.rar E:\ --max-duration 2s -P
Transferred:        1.356G / 1.356 GBytes, 100%, 126.163 MBytes/s, ETA 0s
Transferred:            1 / 1, 100%
Elapsed time:        11.0s

As you can see, the limit is 2 seconds, yet the transfer took 11 seconds and did not break. The file also hash-checks fine on the destination.

I suspect that your problem is somewhere else...

EDIT: I guess it is plausible that multi-threaded download is an exception here, as each thread might count as its own operation. Why not just disable multi-threaded downloads? I am not sure why you would be using them anyway, as I can't see much benefit in your scenario where both speed and concurrent connections are so limited. Also, how large can these files be if you are willing to download them at 50 kB/s? With default settings, multi-threaded download shouldn't kick in for files smaller than 250 MB.
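For example (assuming your rclone version supports these flags; the 250 MB default cutoff mentioned above suggests it does), adding either of these to the existing copy command should keep transfers single-threaded:

--multi-thread-streams 0    (disables multi-thread downloads entirely)
--multi-thread-cutoff 10G   (only files larger than this get split into streams; pick a size above your largest file)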

@ncw Could you comment on the multi-threaded download scheduling as it relates to --max-duration? Just wanted to ping you in case this edge case is a legitimate bug.

Actually I think this is this issue

I've somehow got the contexts muddled up in the sync routine

@ncw Ah yes, you might be right. In my case with Google Drive it doesn't keep on retrying like that B2 user's example does; it fails and stops there. Other than that, it does appear to be the same issue.

