Ok, then here is my suggestion to make this as efficient as possible...
Your basic setup of using a union remote to dump your uploads to a local folder is good, and we should keep it. It's really the only workable way for you to transport them later.
However, to avoid any of the weirdness with how --backup-dir behaves, we should run the upload and/or maintenance commands/scripts outside of the union - i.e. directly against the cloud remote. That way deletes and moves will work well together with the backup, because the local side of the union won't be absorbing those commands. (This is currently a limitation of union that we have to work around - but there are plans to improve union sometime soon, hopefully to offer much the same feature set as mergerfs.)
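To illustrate the idea - the remote and path names here are made-up placeholders, not your real ones:

```bat
:: "union:" = the union remote, "gdrive:" = the cloud remote (both hypothetical).
:: Maintenance commands aim straight at the cloud remote, so the local side
:: of the union never absorbs the delete:
rclone delete gdrive:Backup/old-stuff

:: ...rather than going through the union, which would also hit the local folder:
:: rclone delete union:Backup/old-stuff
```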
To do the deletions part of the sync we should use:
--delete-before
https://rclone.org/docs/#delete-before-during-after
I would combo this with --max-transfer 1M. This will do all the deletes in a separate first pass, and then quickly exit on the second pass (uploading) because it will hit the transfer limit. Depending on the setup you might also want to use --compare-dest on your "temp upload" directory if you want to prevent syncing anything that is already sitting in that folder. This should effectively solve your initial question (but sync to the cloud remote, not the union, as I said).
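Put together, the deletes-only pass might look something like this (the source path, temp-upload path, and "gdrive:" remote name are placeholders until I know your real ones):

```bat
:: Pass 1: deletions only. --delete-before handles the deletes up front,
:: and --max-transfer 1M makes the upload phase bail out almost immediately,
:: so this run is effectively "sync the deletes, skip the uploads".
rclone sync "D:\Data" gdrive:Backup --delete-before --max-transfer 1M

:: Same thing, but also skip anything already queued in the temp-upload folder:
rclone sync "D:\Data" gdrive:Backup --delete-before --max-transfer 1M --compare-dest "D:\TempUpload"
```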
I think it will also be worthwhile to run a second sync command after that with --max-size 1M or something like that - just so your local connection can get the smallest files done now, making your larger upload later faster and more efficient. You may want to use --bwlimit xxxk to let this work in the background without using more than 70-80% of your upload bandwidth, so you don't choke your normal daily usage. How worthwhile this is really depends on how many small files you typically work on. If we can get a few hundred small files out of the way in this manner, for example, that would be a great benefit.
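A rough sketch of that small-files pass - the size cutoff and the bandwidth cap are just illustrative numbers, tune them to your connection:

```bat
:: Pass 2: ship only the small files (<= 1M) now, throttled so it can run
:: in the background without choking normal daily usage.
rclone sync "D:\Data" gdrive:Backup --max-size 1M --bwlimit 700k
```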
Now that's the theory of it and the flags that I think are correct for the job. Sorry for being long-winded, but I think it is just as important to explain the why as it is to just give you an answer that works.
I am also a Windows-primary guy and I've written a lot of batch for my own use with rclone. If you can provide a few more details I can try to coalesce all this information into an actual batch script for you - or at least a rough draft you can use as a basis. For starters these things would be relevant to know:
- Name of your union remote
- Name of your clouddrive remote
- The type of cloud provider (important for any upload optimization flags)
- The local path you sync from (assuming it is not sensitive in nature)
- The local path for the "temp upload" folder
None of these are essential for me to give you an example, but it aids understanding and lessens the potential for confusion if I can use the real names and paths rather than abstracted ones.
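In the meantime, here is a very rough draft with everything stitched together - every name in it (the "gdrive:" remote, the paths) is a placeholder I made up, to be swapped for your real ones:

```bat
@echo off
:: Rough draft only - replace the placeholders below with your real setup.
set SRC=D:\Data
set DST=gdrive:Backup
set TEMPUP=D:\TempUpload

:: Pass 1: deletions only. The tiny --max-transfer makes the upload phase
:: exit almost immediately, and --compare-dest skips files already queued
:: in the temp-upload folder.
rclone sync "%SRC%" %DST% --delete-before --max-transfer 1M --compare-dest "%TEMPUP%"

:: Pass 2: get the small files out of the way now, bandwidth-capped so it
:: can run in the background.
rclone sync "%SRC%" %DST% --max-size 1M --bwlimit 700k
```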
Does that sound like a plan?