Rclone sync local <===> minio runs at a very slow speed, about half compared to cifs <===> local.
CIFS: around 8 to 60 MB/s
Minio: 300 kB/s to 14 MB/s, mostly 4 to 5 MB/s
Minio runs on localhost with a single drive.
Rclone with the default 4 transfers.
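As a minimal sketch of the setup being described (the remote name `minio:`, bucket, and paths are placeholders, not taken from this thread), the two commands being compared might look like:

```shell
# Hypothetical example: sync a CIFS-mounted share to a minio remote.
rclone sync /mnt/cifs-share minio:backup-bucket --progress

# The default of 4 parallel transfers can be raised explicitly:
rclone sync /mnt/cifs-share minio:backup-bucket --transfers 8 --progress
```

`--transfers` controls how many files rclone uploads in parallel; it is a global flag and applies to any backend.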
So rclone and minio are competing for disk IO and processor on the same machine? That is probably the difference.
They are 2 different disks…
Copying from the network shared folder directly to a local disk gives good speed, but from the network shared folder to minio via rclone it is less than half that speed.
I haven’t used mc for transferring yet, but I wonder whether it is rclone or minio that is slow, or whether a double full check is running in the background.
The other thing that comes to mind: in the rclone config, the minio storage class is ’STANDARD’? What should it be in the case of a single-disk setup? Will changing this to some other value make a difference?
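For reference, a minio remote in `rclone.conf` typically looks like the fragment below (the remote name, endpoint, and keys are placeholders, not values from this thread). The storage-class question above refers to the S3 `storage_class` option, which can normally be left empty for minio:

```
[minio]
type = s3
provider = Minio
env_auth = false
access_key_id = MINIO_ACCESS_KEY_PLACEHOLDER
secret_access_key = MINIO_SECRET_KEY_PLACEHOLDER
endpoint = http://localhost:9000
acl = private
# storage_class left unset; a single-disk minio largely ignores S3 storage classes
```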
Rclone with other backends works fine.
Other users’ input on rclone with minio…
I would check using vmstat 5 and top while a copy is running. The b column should always be 0:

procs -----------memory---------- ---swap-- -----io---- -system-- ------cpu-----
 r  b   swpd   free    buff    cache   si   so    bi    bo   in   cs us sy id wa st
 0  0 453916 1022736 2190764 4769908    0    0     0   465  448 1041  2  1 97  0
I don’t think minio will use that.
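As a sketch of that check (the copy command, remote name, and paths are hypothetical), the copy can run in the background while vmstat samples the system:

```shell
# Start the copy in the background, then sample system stats every 5 seconds.
# Watch the 'b' (blocked processes) and 'wa' (I/O wait) columns: a nonzero 'b'
# or a high 'wa' suggests the disk, not rclone or minio, is the bottleneck.
rclone copy /mnt/cifs-share minio:backup-bucket &
vmstat 5
# Press Ctrl-C once the copy finishes; running 'top' in another terminal shows
# whether rclone or the minio server process is saturating a CPU core.
```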
Here are 2 screenshots from a Linux box comparing mc and rclone:
Rclone: 12 MB/s @ 14 mins
MC: 16 MB/s @ 10 mins
Both runs used the same source file and destination bucket.
This does look like a big difference; when there are huge, gigantic files queued to sync, it will take hours more in comparison.
Can something be tweaked to get similar results?
Try tweaking these. Larger chunks will be faster but use more memory. Disabling the checksum will stop a pause at the start, and increasing upload concurrency may help.
--s3-chunk-size int Chunk size to use for uploading (default 5M)
--s3-disable-checksum Don't store MD5 checksum with object metadata
--s3-upload-concurrency int Concurrency for multipart uploads (default 2)
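Put together, a tuned invocation using those flags might look like this (the remote name, paths, and flag values are illustrative starting points, not benchmarks from this thread):

```shell
# Larger multipart chunks and more concurrent parts per file trade memory for
# throughput; --s3-disable-checksum skips the MD5 pass over each file before
# upload, which removes the pause at the start of large transfers.
rclone sync /mnt/cifs-share minio:backup-bucket \
  --s3-chunk-size 64M \
  --s3-upload-concurrency 4 \
  --s3-disable-checksum \
  --transfers 8 \
  --progress
```

Memory use grows roughly with chunk size × upload concurrency × transfers, so it is safer to raise these values gradually and measure.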
I feel the defaults are the best…
Has anyone tried syncing between 2 different S3 instances?
Any inputs?