Short answer - no. Gdrive shouldn't have any real limit on bandwidth. I have no idea how much of a 10Gbit link it can saturate, but it should be a significant amount. It certainly eats up my 150Mbit like it's nothing.
Be aware that many small files are an issue in rclone. You won't get good bandwidth utilization on them.
For large files, however, you should - and with that amount of bandwidth you should absolutely increase:
upload_cutoff = 256M
chunk_size = 256M
(this needs to go in your config, under the Gdrive header)
to as high as you can afford. This uses that amount of memory multiplied by the number of transfers, so make sure your system can actually handle it. The default is 8MB - and uploading in 8MB segments on your uplink would be silly. I'd treat my numbers as an absolute minimum. 25 transfers seems very high, though. Just don't assume that more = better, because at some point it will just get slower.
This only affects uploads. Downloads are split by the chunk size (which you seem to have set high already).
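For illustration, a rough sketch of how that looks in the config file (the remote name and the other lines are just placeholders for whatever you already have there):

[gdrive]
type = drive
# ...keep your existing client_id/token lines as they are...
upload_cutoff = 256M
chunk_size = 256M

As a quick sanity check on memory: 256M x 4 transfers is about 1GB, while 256M x 25 transfers would be roughly 6.4GB just for upload buffers.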
I would also make sure your disk can actually keep up with a lot of bandwidth, especially when heavily threaded. An HDD could possibly struggle to feed 25 transfers. Worth checking.
153 megabytes/sec is ~1227 megabits/sec, and that is bad when you have a 10Gbit network. That is an overall value across all 3 files, but that's not what I mean. I'm talking about the apparent 'cap' on each file separately.
It does indeed look like more than a gigabit. If you get that much, I kind of feel there isn't much left to complain about lol. My local LAN isn't that fast.
There's no cap from the Google side on any transfer. As I shared, you might want to try changing the chunk size to something smaller and see how that works.
Just because you have a 10Gb link on a card doesn't mean you have a full 10Gb to Google.
Did you check all the other metrics and see what's going on? CPU? Disk IO? etc?
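For example (just a sketch - iostat needs the sysstat package installed), you can watch CPU and disk while a transfer is running:

# per-device disk utilization and wait times, refreshed every second
iostat -x 1
# overall CPU usage - look for a single core pinned at 100%
top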
If you click on the link I shared, they have similar high end results.
Well, I can at the very least confirm it's not some hard cap by Google. I just transferred a single file at 157Mbit with no issue. It took about 7-8 seconds to ramp up to full speed. I currently use a 256MB upload chunk size (and cutoff) for Gdrive.
Any more than that I simply don't have the bandwidth to test. I would find it very strange if Google set a per-transfer bandwidth limit rather than some more general limit (if one even exists).
I don't see anything obviously wrong there. Also there's not really a lot about the mount that should change how uploads perform.
Note: cache-chunk-total-size is a cache backend flag, as is anything else that starts with cache. It won't do anything unless you use the cache backend. I don't know if you do, but just checking that you aren't confusing it with vfs-cache-max-size.
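For reference, a minimal sketch (not your actual mount) of where the VFS cache flags would go instead - note that --vfs-cache-max-size only takes effect once a --vfs-cache-mode is set:

rclone mount gcrypt: /GD --vfs-cache-mode writes --vfs-cache-max-size 10G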
But in any case - it certainly can't hurt to start the troubleshooting by doing a test transfer of a single file directly from the rclone command line. The first rule of troubleshooting is to eliminate as many sources of error as possible - and if it caps at the same speed per transfer there too, then we have just eliminated a lot of possibilities.
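Something along these lines would do - the source path and destination folder here are just placeholders, and -P simply shows live transfer stats:

rclone copy /path/to/some-large-file.mkv gcrypt:speedtest -P -v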
I began using those settings yesterday after your advice and can confirm that they make a considerable difference. My connection is constantly maxed out at 120MB/sec when uploading.
It would be great if you could help me out.
Basically I'm using it only for Plex + 'rclone move', watching 4K from all available devices. For that I need fast buffering, fast loading and so on - I don't want to get stuck in the middle of playback.
My mount is pretty simple as I stream 4K content mostly to ATV devices, so the majority is direct play with some direct stream, but hardly any transcoding.
felix 783 1 1 Jul23 ? 01:12:32 /usr/bin/rclone mount gcrypt: /GD --allow-other --buffer-size 1G --dir-cache-time 96h --log-level INFO --log-file /opt/rclone/logs/rclone.log --timeout 1h --umask 002 --rc
My use case is streaming for Plex and I have a single home server with 32GB of memory. I rarely stream more than 5-6 concurrent streams, with 2-3 being direct stream (where the buffer in rclone helps out) and 2-3 being transcodes, in which case the buffer doesn't matter much.
I use rclone move from a local disk to my encrypted remote.
[felix@gemini scripts]$ cat upload_cloud
#!/usr/bin/bash
# RClone Config file
RCLONE_CONFIG=/opt/rclone/rclone.conf
export RCLONE_CONFIG
#exit if running
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then exit; fi
# Move older local files to the cloud
/usr/bin/rclone move /data/local/ gcrypt: --log-file /opt/rclone/logs/upload.log -v --drive-chunk-size 64M --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs --fast-list
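If you want to run a script like that on a schedule, a hypothetical cron entry (the path and interval are assumptions, adjust to taste) could look like:

# run the upload script every 30 minutes; the pidof check above prevents overlapping runs
*/30 * * * * /opt/rclone/scripts/upload_cloud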