I've been downloading from an anonymous FTP server with rclone; it has some files I'd like to preserve and keep should the FTP cease to exist. The problem is that the FTP is restricted in both speed (50k/sec) and the number of connections at any one time.
Also, if you stay connected to the FTP for too long, your speed is reduced further.
rclone's --max-duration flag works well for this (no more speed reductions), but it frequently cuts in during a multi-part transfer, because --max-duration is a hard limit rather than a soft limit like the 750GB upload limit Google Drive has: if that limit is reached while a transfer is in progress, the transfer is allowed to finish, but no further ones will start.
@ncw Could --max-duration be changed so that it is no longer a hard limit, but instead takes transfers in progress into account and lets them finish rather than cutting the connection halfway through?
2020/03/15 11:05:09 DEBUG : filename.zip: multi-thread copy: stream 4/4 failed: context deadline exceeded
Edit
Here is the command I am using.
/usr/bin/rclone copy ftp:'/path/to/files' '/path/to/files' --checkers 1 --checksum --create-empty-src-dirs --drive-chunk-size 256M --ftp-concurrency 1 --log-level DEBUG --max-backlog 32 --max-duration 1h --order-by 'size,ascending' --progress --retries 3 --retries-sleep 5s --stats 10s --stats-file-name-length 30 --stats-one-line-date --tpslimit 1 --tpslimit-burst 1 --transfers 1;
My rclone version
rclone v1.51.0
- os/arch: linux/amd64
- go version: go1.13.7
Debug Log
2020/03/15 10:05:07 DEBUG : rclone: Version "v1.51.0" starting with parameters ["/usr/bin/rclone" "copy" "ftp:/path/to/files" "/path/to/files" "--bwlimit" "07:00,100k 23:00,off" "--checkers" "1" "--checksum" "--create-empty-src-dirs" "--drive-chunk-size" "256M" "--ftp-concurrency" "1" "--log-level" "DEBUG" "--max-backlog" "32" "--max-duration" "1h" "--order-by" "size,ascending" "--progress" "--retries" "3" "--retries-sleep" "5s" "--stats" "10s" "--stats-file-name-length" "30" "--stats-one-line-date" "--tpslimit" "1" "--tpslimit-burst" "1" "--transfers" "1"]
2020/03/15 10:05:08 DEBUG : Using config file from "~/.rclone.conf"
2020/03/15 10:05:08 INFO : Starting bandwidth limiter at 100kBytes/s
2020/03/15 10:05:08 INFO : Starting HTTP transaction limiter: max 1 transactions/s with burst 1
2020/03/15 10:05:08 DEBUG : ftp://hostname:21/path/to/files: Connecting to FTP server
2020/03/15 10:05:09 INFO : Local file system at /path/to/files: Transfer session deadline: 2020/03/15 11:05:09
2020/03/15 10:05:21 NOTICE: Local file system at /path/to/files: --checksum is in use but the source and destination have no hashes in common; falling back to --size-only
2020/03/15 10:05:21 DEBUG : filename1.xlsx: Size of src and dst objects identical
2020/03/15 10:05:21 DEBUG : filename2.xlsx: Unchanged skipping
2020/03/15 10:05:21 DEBUG : filename3.zip: Sizes differ (src 1088654036 vs dst 822067200)
2020/03/15 10:05:21 DEBUG : preAllocate: got error on fallocate, trying combination 1/2: operation not supported
2020/03/15 10:05:21 DEBUG : preAllocate: got error on fallocate, trying combination 2/2: operation not supported
2020/03/15 10:05:21 DEBUG : filename3.zip: Starting multi-thread copy with 4 parts of size 259.562M
2020/03/15 10:05:21 DEBUG : filename3.zip: multi-thread copy: stream 4/4 (816513024-1088654036) size 259.534M starting
2020/03/15 10:05:21 DEBUG : filename3.zip: multi-thread copy: stream 1/4 (0-272171008) size 259.562M starting
2020/03/15 10:05:21 DEBUG : filename3.zip: multi-thread copy: stream 2/4 (272171008-544342016) size 259.562M starting
2020/03/15 10:05:21 DEBUG : filename3.zip: multi-thread copy: stream 3/4 (544342016-816513024) size 259.562M starting
2020/03/15 11:05:09 DEBUG : filename3.zip: multi-thread copy: stream 4/4 failed: context deadline exceeded