I'm copying roughly 12TB from one Gdrive to another and don't have 12TB of free space anywhere. From what I'm seeing, rclone copy Gdrive1 Gdrive2 downloads everything from Gdrive1 to local disk first. This is a problem since I don't have enough storage to hold it all. Ideally, I would like rclone to copy everything file by file: download a file, upload it to the new Gdrive, and delete the local copy afterwards.
What is your rclone version (output from rclone version)
rclone v1.48.0
os/arch: linux/amd64
go version: go1.12.6
Which OS you are using and how many bits (eg Windows 7, 64 bit)
Linux 64-bit (Openmediavault/Debian. Rclone downloaded manually)
Which cloud storage system are you using? (eg Google Drive)
Google Drive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
As standard, rclone does not save anything locally when copying between (non-local) remotes.
It simply streams the data from one directly into the other. The data only passes through your network connection; nothing gets stored on disk.
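For example, a plain remote-to-remote copy like the one below streams each file through rclone without writing it to local disk (the remote names Gdrive1 and Gdrive2 are placeholders for whatever you called them in your config):

```shell
# Streams each file from Gdrive1 to Gdrive2 via this machine's network;
# no local copy is written to disk.
rclone copy Gdrive1: Gdrive2: --progress
```

The trailing colons matter: they tell rclone these are configured remotes, not local paths.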
If files are being stored locally, then something else must be causing that. Are you using the cache backend for read and/or write caching on either of these remotes? You might want to share your config file (just make sure to redact any passwords and other sensitive info first).
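To share your config, you can dump it straight from rclone rather than hunting for the file, e.g.:

```shell
# Print the current rclone configuration to stdout.
# Note: this includes tokens and any obscured passwords,
# so redact those manually before posting.
rclone config show
```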
BTW, rclone now supports server-side copying between remotes if you enable it. That way the traffic doesn't even go via you. The only problem is that it is subject to quotas: the normal upload limit is 750GB/day, and server-side the limit is lower. The exact limit is not well explored; I've heard 100GB/day, but I believe I have seen more like 200-300GB in a single go before it stalls. Anyway, it can be useful to know about if you have very limited bandwidth to work with.
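As a sketch, server-side copying between two different Drive remotes is enabled with the --drive-server-side-across-configs flag (note this flag may require a newer rclone than the v1.48.0 in the original post, so check your version's docs):

```shell
# Ask Google Drive to copy the data server-side between the two remotes,
# so the file contents never pass through this machine.
rclone copy Gdrive1: Gdrive2: --drive-server-side-across-configs --progress
```

If the copy stalls partway through, that is likely the server-side quota mentioned above; re-running the same command later will resume, since rclone skips files that already exist at the destination.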