What is the problem you are having with rclone?
First and foremost, thanks for the excellent tool, it's fantastic!
We have an in-house data centre serving all our applications and data services (everything on Kubernetes). We rely on kafka-connect-s3 to export our Kafka cluster topics to a Minio object store as a cold backup option for Kafka (similar to https://jobs.zalando.com/en/tech/blog/backing-up-kafka-zookeeper/). The Kafka cluster is pretty big and some topics have 10M+ records, all of which get exported to our Minio cluster. That setup works fine without any issues.
I'm currently trying to mirror the Minio buckets to another Minio cluster running at a different data centre. I initially attempted this with minio mirror, which didn't work and spiked the system load, so I replaced it with rclone sync. This setup works for all buckets except the ones containing millions of records: those don't sync a single file even after a few hours of running.
I am using the options below for all the buckets, which work well for the small ones. I have tried plenty of rclone command combinations to make the big buckets (those with 10M+ records) work, but no success yet:
rclone --config="/etc/default/rclone.conf" sync minio-bkphost1:cassandra /bkp/minio/cassandra --bwlimit=150M
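For context, here is a variant of the same command with flags that rclone documents for large object-store listings; the specific values are illustrative assumptions on my part, not tuned recommendations:

```shell
# Sketch only: the same sync with flags aimed at buckets holding millions
# of objects. Values (32, 16) are illustrative assumptions, not benchmarks.
#   --fast-list    use one recursive listing held in RAM instead of many
#                  per-directory List calls (trades memory for transactions)
#   --checkers=32  run more source/destination comparisons in parallel
#   --transfers=16 run more file transfers in parallel
#   --size-only    compare by size only, skipping modtime/hash checks
rclone --config="/etc/default/rclone.conf" sync \
  minio-bkphost1:cassandra /bkp/minio/cassandra \
  --bwlimit=150M --fast-list --checkers=32 --transfers=16 --size-only
```

Adding `--progress` (or `-v`) would also show whether the long silence is the listing phase or the transfer phase.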
I would appreciate it if someone could provide guidance on the command options best suited to syncing buckets with millions of small files.
Thanks in advance, and I am sorry if I am trying something really stupid (I am a newbie here and have only been using rclone since last week).
What is your rclone version (output from `rclone version`)?
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Ubuntu 18.04.3 LTS (Bionic)
Which cloud storage system are you using? (eg Google Drive)