Not currently, no. It would certainly be useful, and I have wanted this feature myself, but for now you will have to run one operation per destination. I can't see any reason why it couldn't be implemented at some point in the future, but I'm not aware of it being a current priority. You may want to take a look at the suggestions on the issue page and see if it already exists there. If so, up-vote it. If not, make the suggestion yourself for later evaluation.
Yes, this can be done.
For certain providers (most notably perhaps Google), a true server-side copy is possible. This is ideal for a sync (and I also recommend the --track-renames flag for this). Tell me which provider you use and I can elaborate. Note, however, that it is impossible to process the data this way, for example encrypting/decrypting it, as that must happen locally.
If you have a backend that does not support this, a good workaround is to use a VPS (a virtual machine in the cloud). This can often be done cheaply, or even for free. It is an option regardless of the backend used, and it also allows you to process the data on its way from one location to the other. The data will technically be downloaded and re-uploaded, but it won't have to use your (probably limited) bandwidth for the job.
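As a rough sketch of the VPS approach (remote1: and remote2: are just placeholder names for remotes you would configure yourself):

```shell
# Run this on the VPS, not on your own machine, so the data
# flows cloud -> VPS -> cloud and never touches your home connection.
# Assumes rclone is installed and configured on the VPS.
rclone sync remote1: remote2: --progress
```

Leaving it running in screen/tmux on the VPS means you can disconnect while the transfer continues.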
Option 2 could work. But, as you said, the remote is encrypted. However, I don't see why I can't encrypt to remote1 and then plain-sync the encrypted data from remote1 to remote2 (it's a Google Team Drive).
What do you think?
That works fine. As long as the data itself doesn't change from drive1 to drive2 there is no issue.
In fact, this is exactly what I do myself for my backup location. Keeping a redundancy drive is a breeze when the sync can be done server-side with no load on my own system.
Just keep your crypt-remotes and non-crypt remotes straight in your mind.
For example, you will normally upload to Gcrypt1:
But to sync the two drives you will actually need to NOT use the encrypted remotes (so as not to decrypt the files on access), so it would look something like rclone sync Gdrive1: Gdrive2:
(not Gcrypt1: Gcrypt2: - that would result in a decrypt/re-encrypt and force the transfer to go via the local machine).
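Sketched out with the same remote names as in the example (paths are placeholders):

```shell
# Normal upload goes through the crypt remote, which encrypts locally
# before the data reaches Google:
rclone copy /local/data Gcrypt1:backup

# The drive-to-drive sync uses the underlying (non-crypt) remotes,
# so the already-encrypted files are copied server-side, unchanged:
rclone sync Gdrive1: Gdrive2:
```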
Hope that was not too confusing
And let me just mention one more time how useful --track-renames can be for this.
Because normally, if you just changed the name of a large folder, it would have to be re-copied in its entirety, since technically it's no longer the same location. --track-renames uses hashes to track where files moved or whether they were renamed. This means they can be moved/renamed remotely rather than re-copied, which is obviously much faster, more efficient, and doesn't waste your quota. I highly recommend you read up on this: https://rclone.org/docs/#track-renames
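In practice it's just one extra flag on the sync (same placeholder remote names as above; note the flag needs a backend that supports hashes and server-side move, which Google Drive does):

```shell
# Match files by size + hash so renames/moves are performed
# server-side instead of re-transferring the data:
rclone sync Gdrive1: Gdrive2: --track-renames
```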
Will try this. The only issue I see is that, in order to read from Gcrypt2:, the keys for encrypting the crypt locations on both sides must be the same... Am I right?
Do Gdrive/Team Drive folders keep the data limits on server-to-server transfers as well?
Take note of the correction to the command I wrote above.
But yeah - 1.43 is about a year old now, so definitely update. Rclone development has moved fast, and there is a world of new improvements, fixes and features since then.
As many (most?) Linux repos seem to be really out of date for rclone, I highly recommend you grab the latest version directly from the site. You can get the package manually (from the download tab on rclone.org) and easily install it via apt or whatever you normally use.
Or even easier - just use the install script:
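That script, as documented on rclone.org, boils down to:

```shell
# Fetch and run the official install script; it detects your
# OS/architecture and installs the latest stable rclone build.
curl https://rclone.org/install.sh | sudo bash
```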
Loved it. I just installed it on Arch Linux ARM: the installation process was flawless (except for some man cache generation warnings...)
Now the copy between drives works (gotta watch those rate limits carefully, though).
Thank you very much kind sir!!! (all involved in this thread that helped)