Well, first of all, I can confirm that it should be able to do much more - assuming you aren't transferring tons of tiny files (which you have already eliminated by testing with a single large one). I have access to several unrelated teamdrives and I've never seen their ingest speed be a problem. Based on comments from other users who have faster connections, I assume you'd need 1 Gigabit or more before you really start to see such a limitation.
--drive-chunk-size 128M is optimal if you can afford that much memory per transfer (so, for example, up to 1GB for 8 transfers - keep this in mind). Still, 64MB chunks are pretty good and I very much doubt this is the problem. 64MB should be more than sufficient on 200Mbit - but maybe try 128MB just to be safe. You can even go to 256MB just for testing. Beyond that I find no practical benefit on my 150Mbit connection, as it saturates 100% of that 95% of the time.
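To make the memory trade-off concrete, the worst-case RAM use is roughly chunk size multiplied by the number of simultaneous transfers. A quick sketch of that arithmetic (the values here just mirror the example above):

```shell
# Worst-case upload buffer memory: each active transfer can hold
# one full chunk in RAM at the same time.
chunk_mb=128
transfers=8
echo "Peak buffer use: $(( chunk_mb * transfers )) MB"   # 1024 MB, i.e. ~1GB
```

So halving the chunk size halves the worst-case memory footprint, at the cost of more upload requests per file.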
Just to give a little more detail: the reason this helps is that TCP ramps up speed from a low starting point, so the smaller the chunks, the more of a "sawtooth" pattern you get - and you want to avoid too many of those, as they mean inefficient bandwidth utilization (very easy to visualize in Task Manager under Performance -> Network). Larger chunks help reduce these, but the benefit of each doubling gets progressively smaller while the memory cost increases linearly.
I will do some really quick tests with 32MB, 64MB and 128MB chunks and see if any of those can even limit me on 150Mbit (about 18MB/sec).
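If you want to run the same comparison yourself, something like this loop would do it - a rough sketch only, where the remote name gdrive:, the destination folder, and the 1GB test file are placeholders for your own setup:

```shell
# Hypothetical benchmark loop - adjust remote, path and test file to your setup.
for size in 32M 64M 128M; do
  echo "=== Testing --drive-chunk-size $size ==="
  rclone copy testfile-1G gdrive:chunktest \
    --drive-chunk-size "$size" \
    --stats 10s
done
```

Watching the periodic --stats output (or the network graph in Task Manager) during each run should show whether the smaller chunk sizes produce the sawtooth pattern described above.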
I'm not aware of any Windows-specific issues that might affect this, and I'm primarily a Windows user day-to-day.
Of course, basic networking issues may apply. You definitely want to run a speedtest.net test to check that you can actually achieve that much from the computer you are on. I don't think that's the issue if it seems to work better in the Google web UI, but I'd do it anyway just to get it out of the way.
I think the next step to get to the bottom of this then is to supply us with a debug log.
Append these to your command:
-vv (enable debug output)
--log-file=mylogfile.txt (output to file - because debug logs can be pretty long and unwieldy)
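Put together, a full command would look something like this - the remote name gdrive: and the paths are placeholders for whatever you're actually using:

```shell
# Hypothetical example - substitute your own source path and remote.
rclone copy /local/bigfile gdrive:backup \
  --drive-chunk-size 64M \
  -vv \
  --log-file=mylogfile.txt
```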
Depending on how long the log is, you may need to use pastebin or a similar service to share the resulting file. Normally there won't be anything very sensitive in there, except maybe the names of files or folders in the place you copy to.
I can't see anything wrong with your config. The ramp-ups might be TCP doing its thing - maybe you have a lot of latency between you and the Drive API endpoint?
You could try doing the uploads without chunking via --drive-upload-cutoff 100G; rclone will then send each file below that size in a single TCP stream. That might be quicker.
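For example (again, remote and paths are placeholders, not your actual config):

```shell
# Hypothetical example - files below the cutoff are uploaded in one
# continuous request instead of being split into chunks.
rclone copy /local/bigfile gdrive:backup --drive-upload-cutoff 100G -vv
```

This avoids the per-chunk TCP ramp-up entirely, at the cost of not being able to resume a failed upload partway through a file.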