Huge memory usage when copying between S3-like services

That is very useful thanks.

The difference between the top value and the amount of RAM Go thinks it is using is normal: it comes down to memory fragmentation and memory which hasn't been released back to the OS yet.

This allocation is coming directly from the AWS SDK.

I note you are using Ceph, so I guess this could be some sort of compatibility issue?

Could you generate an SVG from the memory profile and attach it? It contains a lot more information.

Did you try any previous versions of rclone? It might be worth trying some older versions to see if they have the same problem; that will tell us whether it is a problem with a specific version of the SDK.

I looked through recent bugs in the SDK and I couldn't see any with memory issues.

You can also do this to see how much memory each object takes on average. Point it at a subdirectory that you know will finish.

$ rclone test memory s3:rclone
2022/10/25 15:41:53 NOTICE: 62 objects took 175248 bytes, 2826.6 bytes/object
2022/10/25 15:41:53 NOTICE: System memory changed from 35239176 to 35239176 bytes a change of 0 bytes
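The bytes/object figure gives a rough way to estimate worst-case memory use. As a sketch, assuming the figure above (~2827 bytes/object) and a hypothetical 10 million objects that all need to be listed at once:

```shell
# bytes/object from the test above, times a hypothetical object count
echo $((2827 * 10000000))
# 28270000000 bytes, i.e. roughly 28 GB
```

That is an upper bound rather than a prediction, since rclone normally lists and syncs directory by directory rather than holding everything in memory at once.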