When the two remotes are on different cloud providers, the machine running rclone downloads from the source and uploads to the destination, using that machine's bandwidth.
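For example, a copy like the one below routes everything through the local machine; the remote names `s3:` and `gdrive:` are just placeholders for whatever remotes you have configured:

```
# Data flows provider -> this machine -> provider; nothing moves
# directly between the two clouds.
rclone copy s3:my-bucket/reports gdrive:backups/reports --progress
```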
Sure, rclone uses memory, but how much is very dependent on what you are running: how many transfers are going, the buffer size, etc. Depending on the memory on the server, you can tweak these things so it runs comfortably. I don't know what a huge amount of data is, as size is all relative.
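As a rough sketch of the main knobs that drive memory use (the numbers here are only illustrative, not recommendations):

```
# Each transfer buffers up to --buffer-size in RAM, so memory use grows
# roughly with --transfers x --buffer-size, plus per-file overhead.
rclone copy source: dest: --transfers 4 --checkers 8 --buffer-size 16M
```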
Transfers that are in flight when the machine goes offline are aborted and have to be reuploaded. Nothing is stored on the client.
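In practice that just means re-running the same command once the machine is back; for most backends rclone compares size and modification time, skips files that already made it across, and redoes the ones that were cut off:

```
# Safe to re-run after an interruption: files that already match on the
# destination are skipped, interrupted files are transferred again.
rclone copy source: dest: --progress
```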
Coming back to Query 1 again: what's bothering me is the machine hardware requirements (i.e. memory, processor, and hard disk). Do we have any machine hardware requirement matrix?
You can run rclone on very tiny machines or very large machines. It really depends on what parameters you use and how fast you expect things to go.
The smaller the machine, the lower the settings and the longer things take. People run it on Raspberry Pis and on huge machines.
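As an illustration of the two ends of that range (values are examples, not recommendations):

```
# Conservative settings for a tiny box such as a Raspberry Pi
rclone copy source: dest: --transfers 1 --checkers 2 --buffer-size 8M

# More aggressive settings for a machine with plenty of RAM and bandwidth
rclone copy source: dest: --transfers 16 --checkers 16 --buffer-size 64M
```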
When you copy or move a file, you have to download from the source and upload to the destination.
The top section aggregates a bit more, and the bottom shows per-file stats for the file being moved. Since you are only moving one file, it really doesn't matter which you look at.
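Assuming this is about the stats block printed by `-P`/`--progress`, you can control how often it refreshes and add per-file log lines like so:

```
# -P shows live stats, --stats sets the refresh interval,
# -v logs a line as each file completes.
rclone move source: dest: -P --stats 5s -v
```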