Checks count not updating

Running v1.46, I'm syncing TBs worth of small files from S3 to S3. With --log-level INFO and --checkers 200, I've seen an hour's worth of inactivity, with this message logged every minute:

2019/03/15 02:11:43 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 0
Checks: 0 / 0, -
Transferred: 0 / 0, -
Elapsed time: 59m0s

Then suddenly:

2019/03/15 02:12:43 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 0
Checks: 279423 / 289623, 96%
Transferred: 0 / 0, -
Elapsed time: 1h0m0s
Checking:

  • 2017/10/filename…07165254_482004.csv.gz: checking
  • 2017/10/filename…07165340_751336.csv.gz: checking
    <continued for each thread, in this case 200 total checking log messages>

2019/03/15 02:13:43 INFO :
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Errors: 0
Checks: 601997 / 612198, 98%
Transferred: 0 / 0, -
Elapsed time: 1h1m0s

It seems like the checks counter is delayed in updating, and INFO is also not printing every S3 object that is being checked.

What command line flags are you using? If you are using --fast-list, then it has to list the entire bucket before starting, which might be what you are seeing.

Thank you for the quick reply. I am not using --fast-list.

The flags I am using:

rclone --use-mmap --log-file /data/rclone/logs/rclone.log --transfers=200 --checkers=200 --log-level INFO sync rclone1:src-bucket-1/prefix/ rclone2:dst-bucket-1/prefix/

Try using --checksum. Without it, rclone will be reading metadata from each file to get the modification time, which will slow things down a lot and cost extra transactions.

--fast-list has the potential to speed things up if you have enough memory to keep the entire listing in memory.
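Putting both suggestions together, the poster's command might look like this (a sketch only; bucket names, prefixes, and paths are from the original post, and --checksum / --fast-list are the standard rclone flags discussed above):

```shell
# Same sync as before, with the two suggested flags added.
rclone sync rclone1:src-bucket-1/prefix/ rclone2:dst-bucket-1/prefix/ \
  --use-mmap \
  --log-file /data/rclone/logs/rclone.log \
  --transfers=200 --checkers=200 \
  --log-level INFO \
  --checksum \
  --fast-list
# --checksum: compare stored MD5 hashes instead of fetching per-object
#             modtime metadata (avoids an extra HEAD request per object on S3)
# --fast-list: use one recursive listing held in memory instead of many
#              per-directory listing calls (needs enough RAM for the listing)
```

Note that --fast-list trades memory for fewer listing transactions, so for a bucket with many millions of objects it is worth checking available RAM first.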

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.