Huge memory usage when copying between S3-like services

hello and welcome to the forum,

that is a small number of files, rclone should not be using that much memory.

not sure of the exact reason for the memory usage, but if the source and dest are both s3, you should consider using --checksum
when the source and destination are both S3, that is the recommended flag for maximum efficiency, since the hashes come back in the listing and rclone does not have to read the modtime of each object.
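
for example, a sync between two S3 remotes could look something like this (the remote and bucket names here are just placeholders for your own):

```
rclone sync src-s3:source-bucket dst-s3:dest-bucket --checksum --progress
```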

have you tested --fast-list?
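
if you do, the same command as above with --fast-list added is the thing to try, keeping in mind that --fast-list holds the whole listing in memory in exchange for far fewer list requests:

```
rclone sync src-s3:source-bucket dst-s3:dest-bucket --checksum --fast-list --progress
```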

here i synced 1,000,000 files in 33 seconds:
https://forum.rclone.org/t/fastest-way-to-check-for-changes-in-2-5-million-files/25957/11