My use case is syncing data between two Ceph clusters via the S3 interface. Some of my buckets contain a huge number of small objects (more than 600,000 per bucket, where each object is between 4 KB and 4 MB).
I understand that the sync command first lists all objects before the real sync starts. The problem is that the listing costs too much time: each API call retrieves at most 1,000 objects and takes around 3 s, so 600,000 objects means about 1,800 s, i.e. half an hour (in fact I have other buckets with a billion objects to migrate...), and the slow listing seems to be a known flaw of Ceph.
I'm asking for help: apart from --fast-list, is there any other way, idea, or suggestion to speed this migration up? Thanks.
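For anyone landing on this thread, here is a sketch of standard rclone flags that are commonly combined for bucket-to-bucket migrations; the remote names `src:` and `dst:` are placeholders for your own configured remotes, and the numbers are starting points to tune, not recommendations for every setup:

```shell
# --fast-list    : build one in-memory listing instead of many directory walks
# --checkers 64  : run more parallel listing/comparison workers
# --transfers 32 : copy more objects concurrently
# --size-only    : compare by size only, avoiding extra metadata reads
rclone sync src:bucket dst:bucket \
  --fast-list --checkers 64 --transfers 32 --size-only
```

For the later incremental passes, `--max-age` can restrict the sync to recently written objects, which avoids re-checking the whole bucket.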
Anything would be appreciated!!
On the other hand, we actually made some custom modifications to speed up Ceph's list_objects process; it still uses the S3 protocol but requires the client to pass a custom parameter in the list_objects API call. Does rclone support this kind of custom parameter? Modifying the source code is also acceptable for us; please help us figure out where to start.
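For others with a similar need: rclone's S3 backend is written in Go on top of the AWS SDK, and the SDK's request objects expose the underlying HTTP request, so a custom query parameter can be appended before the request is signed and sent. The integration point named below (rclone's `backend/s3`) and the parameter name `allow-unordered` are assumptions for illustration, not confirmed rclone behavior. A minimal, stdlib-only sketch of the URL manipulation itself:

```go
package main

import (
	"fmt"
	"net/url"
)

// addCustomListParam appends a custom query parameter to an S3
// list-objects request URL. In rclone, a hook like this would
// plausibly live in backend/s3 (assumption), where the SDK request's
// HTTPRequest.URL can be modified before signing.
func addCustomListParam(rawURL, key, value string) (string, error) {
	u, err := url.Parse(rawURL)
	if err != nil {
		return "", err
	}
	q := u.Query()
	q.Set(key, value)
	u.RawQuery = q.Encode() // Encode sorts parameters alphabetically
	return u.String(), nil
}

func main() {
	// "allow-unordered" is used here purely as an example parameter name.
	out, err := addCustomListParam(
		"https://rgw.example.com/bucket?list-type=2&max-keys=1000",
		"allow-unordered", "true")
	if err != nil {
		panic(err)
	}
	fmt.Println(out)
}
```

The same idea applies to whatever parameter your modified RGW expects: patch the listing call in the S3 backend to add it to the query string.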
What is your rclone version (output from rclone version)
Thanks for the links, I will definitely look into them.
For your questions: the whole migration is a one-time sync, which simply migrates data from one Ceph cluster to another. To achieve this it may need multiple rclone sync operations, and each additional sync may have around 100,000 objects as the incremental delta.
@ncw I've made a simple test using sync, and it looks like the performance is outstanding.
I ran rclone on the same machine as cluster A and synced data to cluster B over a bonded network card (full duplex, 20 Gb/s); the transfer speed reached 800 MB/s!
But I don't understand why there is both receive and send traffic at the same time. I thought rclone starts by listing all objects in cluster A into memory, compares them against cluster B using only metadata, and then transfers objects where necessary, so there should only be outbound traffic. What am I missing? Would you mind explaining a bit? Many thanks.