Well, first of all, I can confirm that it should be able to do much more, assuming you aren't transferring tons of tiny files (which you've already ruled out by testing with a single large one). I have access to several unrelated Team Drives and I've never seen their ingest speed be a problem. Based on comments from users with faster connections, I'd guess you need 1 gigabit or more before you really start to see such a limitation.
--drive-chunk-size 128M is optimal if you can afford that much memory per transfer (so, for example, up to 1GB for 8 transfers - keep that in mind). Still, 64MB chunks are pretty good and I very much doubt this is the problem. 64MB should be more than sufficient on 200Mbit, but maybe try 128MB just to be safe. You can even go to 256MB just for testing. Beyond that I see no practical benefit on my 150Mbit connection, as it already saturates 100% of that bandwidth 95% of the time.
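To make the memory math concrete, here's a quick sketch. The numbers are just the example from above (128M chunks, 8 transfers), and the paths in the commented-out rclone line are placeholders, not your actual setup:

```shell
# Peak upload-buffer memory is roughly chunk size x concurrent transfers.
CHUNK_MB=128
TRANSFERS=8
echo "$((CHUNK_MB * TRANSFERS)) MiB peak buffer memory"

# The actual copy would look something like this (paths are placeholders):
# rclone copy /local/path remote:backup --drive-chunk-size 128M --transfers 8
```

So 128M chunks with the default 4 transfers is a much gentler 512MB, if 1GB is too much.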
Just to give a little more detail, the reason this helps is that TCP ramps up speed from a low starting point, so the smaller the chunks, the more of a "sawtooth" pattern you get - and you want to avoid too many of those, as they mean inefficient bandwidth utilization (very easy to visualize in Task Manager under Performance -> Network). Larger chunks reduce these ramp-ups, but the benefit of each doubling gets progressively smaller while the memory cost increases linearly.
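A rough illustration of why each doubling matters less: count how many chunks (and therefore ramp-ups) a single large file needs at each chunk size. The 10 GiB file size here is just an arbitrary example:

```shell
# Chunks needed (one TCP ramp-up each) to upload a 10 GiB file
# at different --drive-chunk-size values.
FILE_MB=10240
for CHUNK in 32 64 128 256; do
  echo "${CHUNK}M chunks: $((FILE_MB / CHUNK)) ramp-ups"
done
```

Going 32M -> 64M removes 160 ramp-ups, but 128M -> 256M only removes 40, while each step doubles the memory per transfer.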
I will do some really quick tests with 32M, 64M and 128M chunks and see if any of those can even limit me on 150Mbit (about 18MB/sec).
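If you want to run the same quick comparison yourself, a loop like this works. The remote name and test file are placeholders (the rclone line is commented out so the sketch is safe to paste; uncomment it to actually run the transfers):

```shell
# Benchmark loop: upload the same file at each chunk size and watch the stats.
for CHUNK in 32M 64M 128M; do
  echo "testing --drive-chunk-size $CHUNK"
  # rclone copyto testfile.bin remote:bench/testfile.bin \
  #   --drive-chunk-size "$CHUNK" --stats 10s --stats-one-line
done
```

--stats with --stats-one-line gives you a compact running throughput figure to compare between runs.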
I'm not aware of any Windows-specific issues that might affect this, and I'm primarily a Windows user day-to-day.
Of course, basic networking issues may apply. You definitely want to run a speedtest.net test to check that you can actually achieve that much from the computer you're on. I don't think that's the issue if it seems to work better through the Google web UI, but I'd do it anyway just to get it out of the way.