Short answer - no. Gdrive shouldn't have any real limit on bandwidth. I have no idea how much of a 10Gbit link it can saturate, but it should be a significant amount. It certainly eats up my 150Mbit like it's nothing.
Be aware that many small files are an issue in rclone. You won't get good bandwidth utilization on them.
For large files, however, you should - and with that amount of bandwidth you should absolutely increase:
upload_cutoff = 256M
chunk_size = 256M
(these go in your config file, under the Gdrive remote's header)
to as high as you can afford. This uses that much memory per transfer (chunk size multiplied by the number of concurrent transfers), so make sure your system can actually handle it. The default is 8MB - and uploading in 8MB segments on your uplink would be silly. I'd treat my numbers as an absolute minimum. 25 transfers seems very high, though - don't assume that more = better, because at some point it just gets slower.
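To illustrate the memory math (a sketch - the 25-transfer count is just the figure mentioned above, not a recommendation):

```shell
# Rough upper bound on upload buffer memory:
# chunk size (MB) multiplied by concurrent transfers.
chunk_mb=256
transfers=25
echo "$((chunk_mb * transfers)) MB"   # prints: 6400 MB
```

That's over 6GB of buffers at 25 transfers, which is why you scale the chunk size to what your system can actually afford.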
This only affects uploads. Downloads are split by the chunk size (which you seem to have set high already).
I would also make sure your disk can actually keep up with that much bandwidth, especially when heavily threaded. A HDD could well struggle to feed 25 transfers. Worth checking.
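A rough sequential throughput check looks like this (a sketch, assuming a Linux box with GNU dd - point the temp path at the disk you actually upload from):

```shell
# Write then read back a 256MB test file and report throughput.
# Note: this is a best-case sequential test - 25 parallel transfers
# produce a much more random access pattern that a HDD handles far worse.
testfile=$(mktemp /tmp/disktest.XXXXXX)
dd if=/dev/urandom of="$testfile" bs=1M count=256 conv=fdatasync 2>&1 | tail -1
# The read below may be served from page cache; use a file larger than
# your RAM if you want a realistic read figure.
dd if="$testfile" of=/dev/null bs=1M 2>&1 | tail -1
rm -f "$testfile"
```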
Well, I can at the very least confirm it's not some hard cap by google. I just transferred a single file at 157Mbit with no issue. It took about 7-8 seconds to ramp up to full speed. I currently use a 256MB upload chunk size (and cutoff) for Gdrive.
Any more than that I simply don't have the bandwidth to test. I would find it very strange if google set a per-transfer bandwidth limit rather than some more general limit (if one even exists).
I don't see anything obviously wrong there. Also there's not really a lot about the mount that should change how uploads perform.
Note: cache-chunk-total-size is a cache backend flag, as is any other flag starting with cache-. It won't do anything unless you use the cache backend. I don't know if you do, but just checking you aren't confusing it with vfs-cache-max-size.
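For clarity, the two flags live in different layers (values here are just placeholders):

```
# cache backend only - ignored unless "cache" is part of your remote chain:
--cache-chunk-total-size 10G

# VFS layer - applies to a mount regardless of backend:
--vfs-cache-max-size 10G
```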
But in any case - it certainly can't hurt to start the troubleshooting by doing a test transfer on a single file directly from the rclone command line. The first rule of troubleshooting is to eliminate as many sources of error as possible - and if it caps at the same per-transfer speed there too, we've just eliminated a lot of possibilities.
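A minimal test transfer might look like this (hypothetical file and remote names - substitute your own; -P prints live transfer stats so you can watch the per-file speed):

```
rclone copy /path/to/some-large-file.mkv gdrive:speedtest -P --drive-chunk-size 256M
```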
My mount is pretty simple. I stream 4K content to mostly ATV devices, so the majority is direct play with some direct stream and hardly any transcoding.
felix 783 1 1 Jul23 ? 01:12:32 /usr/bin/rclone mount gcrypt: /GD --allow-other --buffer-size 1G --dir-cache-time 96h --log-level INFO --log-file /opt/rclone/logs/rclone.log --timeout 1h --umask 002 --rc
My use case is streaming for Plex, and I have a single home server with 32GB of memory. I rarely stream more than 5-6 concurrent sessions, with 2-3 being direct stream (where the buffer helps out in rclone) and 2-3 being transcodes, in which case the buffer doesn't matter much.
I use rclone move from a local disk to my encrypted remote.
[felix@gemini scripts]$ cat upload_cloud
#!/bin/bash
# Rclone upload script
#exit if running
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then exit; fi
# Move older local files to the cloud
/usr/bin/rclone move /data/local/ gcrypt: --log-file /opt/rclone/logs/upload.log -v --drive-chunk-size 64M --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs --fast-list
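A typical way to run a script like that on a schedule is cron (an assumption - how it's actually triggered isn't shown); the pidof guard at the top makes overlapping runs a no-op:

```
# Hypothetical crontab entry: attempt the upload every 30 minutes.
*/30 * * * * /opt/rclone/scripts/upload_cloud
```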