When chunking uploads, rclone splits the file into segments of whatever size you set as the chunk size, and each segment effectively acts as a separate transfer.
Due to the way TCP works, each new connection needs to ramp up, so it may take a couple of seconds to reach maximum speed. The more stop/start cycles you get from new chunk segments, the bigger the performance penalty and the lower your effective bandwidth utilization. This penalty is very low at 64M, and even lower at 128M. 256M is not worth it unless you have more memory than you know what to do with. The default however is a mere 8M, which I do NOT recommend. Bandwidth utilization on that is rather poor - especially for a gigabit connection.
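For example, something like this (a minimal sketch - `gdrive:` and the paths are just placeholders for your own remote and folders):

```
# Upload with 128M chunks instead of the 8M default.
# --drive-chunk-size must be a power of 2 and at least 256k,
# and each transfer buffers one full chunk in memory.
rclone copy /local/media gdrive:backup --drive-chunk-size 128M
```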
If you want to disable chunking entirely, you can do that too by setting a very high upload cutoff.
For some reason I need more concurrent transfers to fully use my bandwidth when I do that, though. Exactly why that happens is something I'll have to ask Nick about some day.
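If you want to try that, it would look roughly like this (a sketch - the 1T cutoff and `--transfers 8` are illustrative values, not tested recommendations):

```
# Files at or below --drive-upload-cutoff go up in a single request,
# so a very high cutoff effectively disables chunked uploads.
# --transfers is bumped because single-request uploads seem to need
# more parallelism to saturate a fast line.
rclone copy /local/media gdrive:backup --drive-upload-cutoff 1T --transfers 8
```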
But in short - it will not be related to this problem. If you use 256M you are effectively already running the highest performance config possible.
Gdrive can take about 40-45MB/sec per stream (megabytes, not megabits). And you can have basically as many concurrent streams as you want (or rather, as many as the new-connections-per-second limit will allow you) - so in short... it's not a bandwidth issue at Google's end. Three streams at ~40MB/sec is already ~120MB/sec, which is enough to saturate a gigabit line.
The most typical problem is that transfers of many small files are slow due to a backend limit of a little over 2 new connections per second (at that rate, 10,000 small files take well over an hour no matter how much bandwidth you have). However, as I understand you, this happens even on large files? If so that is very curious.
Definitely check the basics first of all. Check that you don't have any packet loss on the network. Do you use any wifi locally for the transfer? Check the disks you are copying from too; if something suddenly starts hitting them hard and they're HDDs, they can drop a lot in speed under really bad circumstances. How is your memory situation during transfer? You aren't starting to swap to a swapfile on HDD or something, right? 256M chunks do use 256M per transfer, so with the default 4 transfers that's a full GB of memory right there.
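A quick way to run through those basics (a sketch for Linux - the target host and /dev/sdX are placeholders for your own setup):

```
# Packet loss to a known-good host (watch the loss percentage)
ping -c 50 8.8.8.8

# Memory and swap usage while the transfer is running
free -h

# Sustained read speed of the source disk (replace /dev/sdX)
sudo hdparm -t /dev/sdX
```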
I haven't seen any such issue myself where the speed varies that drastically. I only have 160Mbit, but that is always rock-solid assuming the files are of reasonable size. I am not aware of any settings that would cause this phenomenon to happen.
The only thing I take issue with in your command is --tpslimit 3. That is really not needed unless you have a very special setup; otherwise you are just limiting your API to 30% and removing your ability to burst for no good reason. Gdrive has a pacer that handles all this for you anyway. Leave it at the default unless you have a very good reason not to.
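Put together, something along these lines is about as fast a config as you can get (a sketch - remote name and paths are placeholders, and --transfers 4 is just the rclone default written out explicitly):

```
# No --tpslimit: rclone's built-in pacer manages the API rate,
# including bursting when the backend allows it.
rclone copy /local/media gdrive:backup \
    --drive-chunk-size 256M \
    --transfers 4 \
    -P
```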