First of all, server-side copying across accounts is not enabled by default (because it is not guaranteed to work under all circumstances), so you need to enable it yourself.
Either add this to the config file (under your GDrive remotes):
server_side_across_configs = true
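For reference, the full remote section in rclone.conf might then look something like this (the remote name and token value here are just placeholders - yours are created when you run rclone config):

[Gdrive1]
type = drive
scope = drive
token = {"access_token":"..."}
server_side_across_configs = true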
Or add this flag to your command line:
--drive-server-side-across-configs
Then, all you have to do is perform a normal move, copy or sync command.
In this example I will call the source Gdrive1 and the target Gdrive2:
rclone move Gdrive1:/Private Gdrive2:/Private
rclone copy Gdrive1:/SFW Gdrive2:/SFW
rclone sync Gdrive1:/NSFW Gdrive2:/NSFW
a "move" will copy the files, then delete them from the source
a "copy" will copy the files (and not delete them from the source)
a "sync" will make the target folder identical to the source folder (including deleting files that no longer exist on the source and re-uploading files that were changed since the last time you synced)
I recommend you use the --dry-run flag first to test what will happen without actually making any changes. This is especially important before using sync, because sync can delete files - a badly written command could cause data loss.
You may also want to use --fast-list if there are a lot of files (many thousands). This maps all the files on the drive faster by requesting the listings for all folders at once instead of on demand. It only matters in terms of how quickly the sync starts doing its job.
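Putting those flags together, a cautious first run could look like this (using the example remote names from above - substitute your own):

rclone sync Gdrive1:/NSFW Gdrive2:/NSFW --dry-run --fast-list

Once the dry-run output looks right, drop --dry-run and run the command for real.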
Lastly, be aware that there are limits on Gdrive. 750GB/day is the normal upload quota. There also appears to be a server-side copy quota that is lower than that, but I am not sure of the exact details. In short: if your transfer stalls before it is finished, you have probably reached the quota for the day, and you can either try again tomorrow - or continue uploading without --drive-server-side-across-configs (which will rely on your local bandwidth instead).
Hope that was helpful. Let me know if you need more assistance.
Let's say I use Remote Desktop, which has a lot of bandwidth - does it consume a lot of space? Because I only have a max of 230GB if using Remote Desktop.
Also, does Gsuite offer more bandwidth? The original account is from my school (edu).
Copying from cloud-remote to cloud-remote does not use any space at all - whether you do it server-side or via the local bandwidth. The files are simply streamed from one place to another without it being saved anywhere. The only difference is that with a server-side copy your bandwidth is only limited by Google (often resulting in many GB/sec) and if you do it locally then the bandwidth will be limited by however much bandwidth you have at the computer you are conducting the transfer from.
If you have remote access to a computer with a lot of bandwidth then that would be an ideal place to do any non-server-side transfers, yes. Then you can do the full 750GB/day no problem.
As far as I know the quotas for upload (750GB/day) and download (10TB/day) are identical between Gsuite, EDU teamdrives, Gsuite teamdrives and free personal Gdrives.
One last relevant thing to mention:
There are also some limits on the API calls you can make to the Gdrive API - basically the number of commands per second (like "copy this file" or "list this directory"). If you use rclone without making your own OAuth client ID/secret, you will be sharing the default API quota with every other rclone user doing the same. For that reason it is typically recommended that you make your own OAuth client ID/secret so you get your own personal quota (1000 calls / 100 seconds) to work with. This is not strictly necessary, but it helps performance, and it is free for anyone with a Google account, so it is recommended.
A basic guide for how to do this can be found here: https://rclone.org/drive/#making-your-own-client-id
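Once you have them, the client ID and secret go into the remote's section in the config file. The values below are placeholders, just to show where they belong:

[Gdrive1]
type = drive
client_id = 123456789.apps.googleusercontent.com
client_secret = your-client-secret
scope = drive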
Note: if you make more than one OAuth key (one for each drive), I recommend you make them in 2 separate projects rather than the same one. Recent tests I've run indicate that OAuth keys within the same project share a quota, so sharing a project is not ideal (even though it will work). This info hasn't quite made it into the documentation yet (working on it).
If you hit the quota you will just get 403 errors from the server until it resets (the reset time apparently varies somewhat per drive, but it is generally during the night - around 3AM CET for some of mine), and rclone will automatically throttle when this happens so it does not spam requests.
rclone will keep trying until it reaches the --low-level-retries (default 10) number of failures and then move on to the next file. You can set that to a very high number if you want it to never give up. It is probably a good idea to run the same move/copy/sync again afterwards to clean up anything that was left behind due to errors - just repeat the same command (it should only take a minute).
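For example, to make rclone far more persistent (255 here is just an arbitrary high number, not a recommendation):

rclone sync Gdrive1:/NSFW Gdrive2:/NSFW --low-level-retries 255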
For your use-case it seems like either move, copy or sync would do what you need.
The differences only really matter when you already have files in the new location.
So yes, using sync to transfer your files would be fine. It will make the folders on the target side identical to the source.
Your mistake is that you have no Gdrive remote named "Gdrive2" - so rclone has no idea what "Gdrive2" means. I only used that name as an example because I had no idea what you named your Gdrive remotes.
I see you have 2 remotes configured. Are those the old and new drives? Which is which?
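You can list the remote names you actually have configured with:

rclone listremotes

Use those exact names (with the colon) in place of Gdrive1: and Gdrive2: in the example commands.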
When using --fast-list, rclone will list the entire drive before syncing begins. This can take a little time, depending on how many files and folders you have. I typically get a full listing in a little under a minute for 30,000-40,000 files. So it is normal if it looks like it is doing "nothing" for up to a few minutes.
If you don't use --fast-list it will seem to start faster, as it lists folders one by one, so your "checks" count will start to go up immediately. However, listing the entire drive takes a lot longer that way, so the whole operation usually ends up taking longer.