Default settings efficient enough for huge upload to Google Drive?

Wondering if anyone has used additional options/parameters to increase efficiency when uploading lots of data to Google Drive. I have a Windows share that is 332GB in size, with 43k files and 6,600 folders, that I need to upload to Team Drives.

Looked through several posts here and they all talked about using the --drive-upload-cutoff and --drive-chunk-size parameters, but the box performing the upload only has 6GB of RAM to work with. Not sure it'd be wise to raise the defaults for those settings.
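For what it's worth, the memory concern is roughly quantifiable: each active transfer buffers one chunk in RAM, so usage is about --drive-chunk-size times --transfers. A hedged sketch (the remote name `gdrive:` and paths are placeholders, not from the thread):

```shell
# Sketch, not a recommendation: a modest chunk size keeps memory bounded
# even on a 6GB box. Memory used for buffering is roughly
# --drive-chunk-size x --transfers.
rclone copy "Z:\share" gdrive:backup \
  --drive-chunk-size 64M \
  --transfers 4
# ~64M x 4 = ~256M of upload buffers, comfortably under 6GB.
```

Larger chunks mainly help big files; with 43k mostly small files the chunk size matters much less.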

Before I started getting the rate limit exceeded errors from the Google API, I was transferring about 13GB in 40-45 minutes, but all uploads beyond that initial one were taking hours to move only 5-8GB of data. I've tried creating my own client ID for the Google Drive API to bypass the rate limiting, but that didn't seem to have any effect; I kept getting the error.
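One caveat on the client ID: it gives you your own API quota pool instead of sharing rclone's default one, but it does not lift Google's per-user limits (including the roughly 750GB/day upload cap), so the 403s can persist. A hedged sketch of wiring it in on the command line, with --tpslimit to smooth out bursts of API calls (the ID/secret values are obviously placeholders):

```shell
# Sketch: pass your own OAuth client and cap API transactions per second
# so bursts are less likely to trip rateLimitExceeded. A client ID helps
# with shared-quota throttling, not with per-user daily upload limits.
rclone copy "Z:\share" gdrive:backup \
  --drive-client-id YOUR_CLIENT_ID \
  --drive-client-secret YOUR_CLIENT_SECRET \
  --tpslimit 8
```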

Just don't want to spend days uploading 300GB. I THINK we have a 1Gbit symmetrical line at work, but I can't be certain.
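A quick back-of-envelope check suggests the line itself shouldn't be the bottleneck, assuming it really is 1 Gbit/s and fully usable (which real uploads never quite achieve):

```python
# Rough sanity check: how long would 332GB take at pure line speed?
share_gb = 332           # size of the Windows share in GB
line_gbit_per_s = 1      # assumed symmetrical 1 Gbit/s link

total_gbit = share_gb * 8                    # 332 GB -> 2656 Gbit
seconds = total_gbit / line_gbit_per_s       # 2656 s at line rate
print(round(seconds / 60))                   # -> 44 (minutes, ideal case)
```

So if transfers are taking hours for a few GB, the limiting factor is almost certainly API rate limits or per-file overhead, not raw bandwidth.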

There is no need for additional parameters if you're uploading less than 750GB per day. If it's slow, it means your pipe is slow. Simple.

Thanks @clckwerk. That's what I figured, but I just wanted to double-check in case I was missing something.

No problem. You can upload up to 750GB per day to Google Drive.

Keep in mind that rclone is not very efficient at uploading lots of small files.
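That's the likely culprit here: with 43k files, per-file API overhead dominates, and Drive also enforces a per-user file-creation rate (commonly reported as roughly 2-3 files/second), so there's a hard ceiling no flags can remove. A hedged sketch of flags that usually help small-file workloads (paths and remote name are placeholders):

```shell
# Sketch: more parallel transfers amortize per-file latency, and
# --fast-list does one recursive listing instead of one per directory
# (helpful with 6,600 folders, at the cost of more memory).
rclone copy "Z:\share" gdrive:backup \
  --transfers 8 \
  --checkers 16 \
  --fast-list
```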