Rclone upload speed Google Team Drive


I have tried a lot, without success.

(I use my own client ID and client secret.)

I have 200 Mbit/s upload, but I can only use it all via the browser. With rclone I get only 3 or 4 MB/s, much slower than the 25 MB/s I have available.

I have read a lot and tried various command lines, with no luck.

Please can you help me?

E:\Programmi\Rclone\rclone.exe --config E:/Programmi/Rclone/qt.conf copy --verbose --transfers 3 --checkers 3 --contimeout 60s --timeout 300s --retries 3 --low-level-retries 10 --tpslimit 2 --drive-chunk-size 64M --stats 1s "C:\Users\MYPC\Desktop\Nuova cartella\MY TEST.mkv" MY_TEAM_DRIVE:TEST_FOLDER

Are you uploading lots of small files? That is a known weak point of Google Drive.

The --tpslimit 2 will make the upload quite slow, so I'd suggest you remove it.

I have tried uploading only one file of 48 GB.

I didn't upload small files, and I tried without --tpslimit, but nothing changed.

Remove the --tpslimit and it will be fine I think. You could increase --drive-chunk-size further too if you have enough memory.
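Putting that together, a sketch of the adjusted command might look like this (same config path, source file and remote as earlier in the thread; the chunk size is only a suggestion, not a definitive value):

```shell
:: Sketch only: the earlier command with --tpslimit removed and a larger
:: --drive-chunk-size. Paths and remote name are the ones from this thread.
E:\Programmi\Rclone\rclone.exe --config E:/Programmi/Rclone/qt.conf copy ^
  --verbose --transfers 3 --checkers 3 ^
  --contimeout 60s --timeout 300s ^
  --retries 3 --low-level-retries 10 ^
  --drive-chunk-size 128M --stats 1s ^
  "C:\Users\MYPC\Desktop\Nuova cartella\MY TEST.mkv" MY_TEAM_DRIVE:TEST_FOLDER
```

Note that each transfer buffers one chunk in memory, so with --transfers 3 and 128M chunks this uses up to roughly 384 MB of RAM.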

E:\Programmi\Rclone\rclone.exe --config E:/Programmi/Rclone/qt.conf copy --verbose --transfers 7 --checkers 7 --contimeout 60s --timeout 300s --retries 3 --low-level-retries 10 --drive-chunk-size 512M --stats 1s "C:\Users\MYPC\Desktop\Nuova cartella\Blade Runner 2049 (2017).mkv" "BOB_TEAM_DRIVE:Movies 4k"

Better, but I still can't use my full upload.

I have 32 GB of RAM and an i7.

You are getting 7.4 MB/s, which is about 60 Mbit/s.

Check how you are connecting to Google. Are you using IPv6? Could you try via a VPN?

I use IPv4, no VPN.

Via the browser I can use my full upload, so 25 MB/s (200 Mbit/s).
I have FTTH fibre, 1000/200.

I use Rclone Browser, but I guess that changes nothing.

And thanks for the support.

I hope for a solution; I love rclone.

The browser and the API are quite different, so it's very hard to compare the two directly.

What rclone version are you running?

I can usually get 35-40 MB/s at most per single-file upload with defaults and a 128M chunk size, as that seems to be a sweet spot.

You are pretty much trying to debug the Internet and Windows though, which is not quite a fun thing to do.

You can try running with -vv and see what the logs show, if anything, as that might be helpful as well.


This is my version.

What version of rclone though? Not the rclone browser.

rclone version
rclone v1.49.1
- os/arch: linux/amd64
- go version: go1.12.9


rclone v1.48.0
- os/arch: windows/amd64
- go version: go1.12.3

That's a pretty current version. Not sure if @thestigma has any other Windows-related advice or things to try.

With a VPN it's the same ;(

Well, first of all I can confirm that it should be able to do much more, assuming you aren't transferring tons of tiny files (which you have already ruled out by testing with a single large one). I have access to several unrelated Team Drives and I've never seen their ingest speed be a problem. Based on comments from other users with faster connections, I assume you'd probably need 1 Gbit/s or more to really start to see such a limitation.

--drive-chunk-size 128M is optimal if you can afford that much memory per transfer (so, for example, up to 1 GB for 8 transfers; keep this in mind). Still, 64 MB chunks are pretty good and I very much doubt this is the problem. 64 MB should be more than sufficient on 200 Mbit/s, but maybe try 128 MB just to be safe. You can even go to 256 MB just for testing. Beyond that I find no practical benefit on my 150 Mbit/s connection, as it saturates 100% of that 95% of the time.

Just to give a little more detail, the reason this helps is that TCP ramps up speed from a low starting point, so the smaller the chunks, the more of a "sawtooth" pattern you get. You want to avoid those, as too many of them means inefficient bandwidth utilization (very easy to visualize in Task Manager under Performance -> Network). Larger chunks help reduce these, but the benefit of each doubling gets progressively smaller while the memory cost increases linearly.

I will do some really quick tests with 32, 64 and 128 MB chunks and see if any of those can even limit me on 150 Mbit/s (about 18 MB/s).

I'm not aware of any Windows-specific issues that might affect this and I'm primarily a windows user day-to-day.

Of course, basic networking issues may apply. You definitely want to run a speedtest.net test to check that you can actually achieve that much from the computer you are on. I don't think that's the issue if it seems to work better in the Google web UI, but I'd do it anyway and get it out of the way to be safe.

64 MB --> just a hair under 16 MB/s
128 MB --> about 17.5 MB/s
256 MB --> 18 MB/s or a hair under (this is very close to my max anyway)

So use 128 MB if you can afford the memory, but 64 MB clearly shouldn't result in numbers as low as you are getting.

I think the next step to get to the bottom of this then is to supply us with a debug log.
Append these to your command:
-vv (enable debug output)
--log-file=mylogfile.txt (output to file - because debug logs can be pretty long and unwieldy)

Depending on how long the log is, you may need to use pastebin or a similar service to share the resulting file. Normally there won't be anything very sensitive in there, except perhaps some file or folder names in the place you copy to.
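The two flags above are standard rclone flags; appended to the command used earlier in the thread (paths and remote name taken from there), a debug run might look like:

```shell
:: Sketch: the same upload with debug logging added. -vv enables debug
:: output and --log-file writes it to a file instead of the console.
E:\Programmi\Rclone\rclone.exe --config E:/Programmi/Rclone/qt.conf copy ^
  --transfers 3 --checkers 3 --drive-chunk-size 128M --stats 1s ^
  -vv --log-file=mylogfile.txt ^
  "C:\Users\MYPC\Desktop\Nuova cartella\MY TEST.mkv" MY_TEAM_DRIVE:TEST_FOLDER
```

After the run finishes, mylogfile.txt in the current directory holds the full debug log.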

The Google Drive API is unfortunately a different API from the one the website uses, so it hits different endpoints and often has different performance.

Have you got a VPN you could try? People often report that when they have speed problems going direct, going via a VPN fixes them.

During the upload, very strangely, there are these peaks; it's not constant.

Here is my log, I hope it helps :frowning:

I really want to use rclone for my uploads; I love it.

logs part 1
logs part 2
logs part 3
logs part 4

Thanks to everyone who wants to help me, and thanks to @ncw

I can't see anything wrong with your config :frowning: The ramp-ups might be TCP doing its thing; maybe you have a lot of latency between you and the Drive API endpoint?

You could try doing the uploads without chunking, so --drive-upload-cutoff 100G; then rclone will send the files in a single TCP stream. That might be quicker.
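As a concrete sketch (using the config path, source file and remote from earlier in the thread; 100G is only an example cutoff above the 48 GB file size):

```shell
:: Sketch: raise --drive-upload-cutoff above the file size so the upload
:: is not chunked and goes to Drive as a single stream.
E:\Programmi\Rclone\rclone.exe --config E:/Programmi/Rclone/qt.conf copy ^
  --verbose --drive-upload-cutoff 100G --stats 1s ^
  "C:\Users\MYPC\Desktop\Nuova cartella\MY TEST.mkv" MY_TEAM_DRIVE:TEST_FOLDER
```

The trade-off is that an unchunked upload cannot be retried from the middle if the connection drops partway through.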