Rclone thrashes the disk when using many transfers

Each rclone transfer does its own disk reads. I'm wondering if it would be more efficient to have a single thread do all the reads and fill a buffer for each transfer.

e.g. imagine I'm doing 50 parallel transfers and I want a 50 MB buffer for each transfer (so I'm dedicating 2.5 GB of memory to this).

So I set up 50 channels.

Each channel takes a message containing up to block_size bytes of data, so we make the channel capacity n = buffer_size / block_size.

Each transfer thread just reads from its channel.

The single buffer thread iterates over all the transfers it's processing: for each one it reads the channel's length (l), sees that n - l slots are free, fills them, and moves on to the next. When it finishes reading a file it closes the channel, so the reader will see that.

The idea is to reduce disk thrashing with a large number of parallel transfers, where we have plenty of bandwidth but for whatever reason need many parallel threads to make use of it. A rough sketch of the scheme is below.
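Here's a minimal Go sketch of what I mean (blockSize, bufferSize and the function names are made up for illustration, not anything rclone actually has):

package buffered

import (
    "io"
    "os"
)

const (
    blockSize  = 128 * 1024       // size of one chunk sent over a channel (illustrative)
    bufferSize = 50 * 1024 * 1024 // per-transfer buffer budget (illustrative)
)

// newTransferChan makes a channel whose capacity is n = buffer_size / block_size.
func newTransferChan() chan []byte {
    return make(chan []byte, bufferSize/blockSize)
}

// fillLoop is the single buffer thread: it round-robins over the open
// files, topping up each transfer's channel while it has free slots.
func fillLoop(files []*os.File, chans []chan []byte) {
    open := len(files)
    for open > 0 {
        for i, f := range files {
            if f == nil {
                continue // this transfer's file is finished
            }
            // len(ch) is l and cap(ch) is n, so n-l slots are free: fill them.
            for free := cap(chans[i]) - len(chans[i]); free > 0; free-- {
                buf := make([]byte, blockSize)
                n, err := f.Read(buf)
                if n > 0 {
                    chans[i] <- buf[:n]
                }
                if err != nil { // io.EOF or a real error
                    close(chans[i]) // the reader sees the close and stops
                    f.Close()
                    files[i] = nil
                    open--
                    break
                }
            }
        }
    }
}

// transfer is one uploader: it just reads blocks from its channel
// until the channel is closed.
func transfer(ch <-chan []byte, dst io.Writer) error {
    for block := range ch {
        if _, err := dst.Write(block); err != nil {
            return err
        }
    }
    return nil
}

A real implementation would need to block rather than busy-spin when every channel is full, but this is the shape of it: one goroutine issues all the disk reads, and the transfer goroutines only ever touch memory.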

Here's an example from iotop on my machine that I was able to grab:

Total DISK READ :      15.47 M/s | Total DISK WRITE :      59.98 M/s
Actual DISK READ:      15.47 M/s | Actual DISK WRITE:      68.61 M/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                                                                                     
30863 be/4 spotter   329.31 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30900 be/4 spotter   329.31 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30852 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30853 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30847 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30829 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30795 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30799 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30828 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30817 be/4 spotter   439.09 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30854 be/4 spotter   548.86 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30802 be/4 spotter   329.31 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30858 be/4 spotter   109.77 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30803 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30811 be/4 spotter   658.63 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30663 be/4 spotter   109.77 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30657 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30889 be/4 spotter   987.94 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30861 be/4 spotter   548.86 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30821 be/4 spotter   439.09 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30806 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30856 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30894 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30681 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30815 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30897 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30898 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30868 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30890 be/4 spotter   329.31 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30670 be/4 spotter   548.86 K/s    0.00 B/s  0.00 % 99.99 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30661 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 98.26 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
 5825 be/4 root       17.15 K/s    0.00 B/s  0.00 % 97.93 % [kworker/u32:2]
30805 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 97.02 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30895 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 95.03 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30668 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 93.77 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30855 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 91.47 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30870 be/4 spotter   878.17 K/s    0.00 B/s  0.00 % 91.36 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30872 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 91.35 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30862 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 91.29 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs
30797 be/4 spotter   219.54 K/s    0.00 B/s  0.00 % 91.29 % rclone --transfers=60 --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs

Though thinking about it more, I wonder whether this would have a positive or negative impact when using rclone as a file system mount. Threads are obviously easy; I'm just wondering if there's a smarter way to minimize thrashing, i.e. having a single IO thread (or at least a controllable number of them).

I don't think we can avoid lots of disk IO if you do 50 transfers at once. Rclone will need to read from 50 files at once, so you are going to get lots of IO thrashing. Setting --buffer-size should help a bit, but I'm not sure how much.

I'd try lowering --transfers a bit. If the job is disk IO limited, that will make things more efficient but shouldn't affect network speed.
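For example, something like this (the exact numbers would need experimenting with):

rclone --transfers=16 --buffer-size=32M --rc move . gcrypt1:/ToSort -P --delete-empty-src-dirs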

Which backend are you transferring to? Rclone may be precalculating hashes, which is quite disk intensive.

Crypt to cache to (Google) Drive.

I agree we can't avoid doing lots of IO, but we could do it in a more intelligent way than the OS's IO scheduler does. Or perhaps there's a way to instruct the OS's IO scheduler to be more intelligent, i.e. to expect large streaming reads?

Perhaps there's a way to use mmap and madvise with MADV_SEQUENTIAL (and perhaps MADV_DONTNEED after a region has been read?) to optimize this within the context of the Linux IO scheduler.
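A rough Go sketch of that idea, using golang.org/x/sys/unix (error handling trimmed, and whether this actually beats plain buffered reads would need benchmarking):

package readahead

import (
    "os"

    "golang.org/x/sys/unix"
)

// mmapSequential maps a whole file read-only and tells the kernel we
// will read it front to back, so it can do aggressive readahead.
func mmapSequential(f *os.File) ([]byte, error) {
    fi, err := f.Stat()
    if err != nil {
        return nil, err
    }
    data, err := unix.Mmap(int(f.Fd()), 0, int(fi.Size()),
        unix.PROT_READ, unix.MAP_SHARED)
    if err != nil {
        return nil, err
    }
    // MADV_SEQUENTIAL: hint that we'll read the mapping sequentially.
    if err := unix.Madvise(data, unix.MADV_SEQUENTIAL); err != nil {
        unix.Munmap(data)
        return nil, err
    }
    return data, nil
}

// doneWith tells the kernel we no longer need an already-read region,
// so its pages can be dropped rather than polluting the page cache.
// The region must be page-aligned or madvise returns EINVAL.
func doneWith(region []byte) {
    _ = unix.Madvise(region, unix.MADV_DONTNEED)
}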