I managed to replicate this, with BAD being about 20 MB/s and GOOD being about 65 MB/s:
v1.41 BAD
v1.40 BAD
v1.39 GOOD
v1.36 GOOD
So it happened somewhere in the 1.40 development cycle.
I ran git bisect and it identified this commit as the problem one (as I expected!):
07f20dd1fd6f89b9ebdefdc8964a8e9ee9aa9892 is the first bad commit
commit 07f20dd1fd6f89b9ebdefdc8964a8e9ee9aa9892
Author: Fabian Möller <fabianm88@gmail.com>
Date: Wed Jan 24 00:46:41 2018 +0100
drive: migrate to api v3
:040000 040000 aad5e99c16dab297c5f79c505bb4d303d2497953 35a6b83de5afb909d3f83c9dfc29b9c2d7c20247 M backend
The difference between v2 and v3 appears to be that we are downloading files from different endpoints:
FAST 1.39
Host: doc-14-bc-docs.googleusercontent.com
/docs/securesc/XXX/YYY/ZZZ/AAA/PPP?e=download&gd=true
This is odd, I don’t see a reason why v3 should be slower than v2.
But I can reproduce this behavior. Depending on the device used, there seems to be a limit of 25 to 30 MB/s.
There doesn’t seem to be a report of this issue in the Google tracker. Should we open one? In the meantime I can look into creating a workaround flag, like --drive-use-v2-download, to use the old links.
I implemented a workaround in v1.41-075-ga193ccdb-drive_v2_download. It adds a flag, --drive-v2-download-min-size, to specify the minimum file size at which drive v2 download links are used. This adds the overhead of one extra drive.files.get request per download to obtain the downloadUrl.
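As an illustrative invocation of that flag (the remote name, paths, and threshold here are my own examples, not from the build announcement):

```shell
# Use v2-style download links only for files of 100M or larger;
# smaller files keep the normal v3 path and avoid the extra
# drive.files.get round trip per download.
rclone copy drive:big-files /local/big-files --drive-v2-download-min-size 100M
```

Picking a largeish threshold keeps the per-file request overhead confined to the big transfers where the speed difference actually matters.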
This 25 to 30 MB/s limit seems to also affect uploads for me.
Uploads have always been limited, since day 1. I found it weird that only downloads ran at full speed, but didn’t think much of it at the time, as I do batch uploads but single downloads.
Also, is there a reason to use v3? It seems like v2 is superior, so why not use that instead?
Both APIs recommend using the alt=media approach. The difference seems to come from the different URLs used for downloading: the downloadUrl from the v2 file resource versus the official method recommended by Google.
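To make the contrast concrete, here is a sketch of the two URL shapes being compared. The v3 shape follows Google’s public Drive v3 docs; the v2 downloadUrl is normally handed back by the server in the file resource, so the host and path shown for it are hypothetical placeholders, not real values:

```python
def v3_download_url(file_id: str) -> str:
    """Official v3 method: files.get on the API host with alt=media."""
    return f"https://www.googleapis.com/drive/v3/files/{file_id}?alt=media"


def v2_download_url_example(file_id: str) -> str:
    """v2 style: the server-provided downloadUrl points at a
    googleusercontent.com host (placeholder host/path shown)."""
    return ("https://doc-00-xx-docs.googleusercontent.com"
            f"/docs/securesc/PLACEHOLDER/{file_id}?e=download&gd=true")


print(v3_download_url("FILEID"))
print(v2_download_url_example("FILEID"))
```

The key point is that v3 downloads go through the central googleapis.com API host, while the v2 downloadUrl hands the client off to a googleusercontent.com content server, which is where the throughput difference seems to arise.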
Would it be possible for this branch to be merged into the main branch? Or possibly a flag added, --use-drivev2-api or something like that, to force it to use the v2 API? It would be nice for uploads to go through the v2 API as well, since they are locked at 30 MB/s max on the current rclone version.
Any update on whether this flag is going to be added? It seems rclone isn’t pulling files (especially 4K ones) fast enough for them to stream without constant buffering.
I get Google Drive issues every single night, between 7PM PST and 10PM PST. My max download is about 15-16 MB/s, which basically makes those large files NOT playable: buffering, etc. It will even buffer on smaller files too. I kept thinking it was my home server, but I just did a simple COPY from the drive to my desktop (hard wired at 300 Mbit) and I get 15-16 MB/s, or roughly 100-150 Mbit, which is JUST under what is needed to stream these larger-bit-rate files. It showed 1.6 hours to transfer the movie file, with the movie itself being just over that length, so yeah, especially since this speed is not constant.
I will test again in the AM to see if my speeds improve. I am wondering if it is like the other thread about the v2 vs v3 API being used. Someone pointed out that their downloads used to max out gigabit speeds, but since upgrading to the new rclone they are capped at 150-200 Mbit for downloads AND uploads.
If you guys have IPv6 configured on your machines, I’d recommend forcing rclone to bind to your IPv4 address and use that for connections instead. The difference in speed between the two is significant. You can do this by adding --bind I.P.v.4Here to your mount command.
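For example, a mount command with the suggested flag might look like this (the remote name, mount point, and address are placeholders; substitute your machine’s own IPv4 address):

```shell
# Force rclone to dial out from a specific local IPv4 address,
# sidestepping a possibly slower IPv6 route. 192.0.2.10 is a
# documentation placeholder, not a real address.
rclone mount drive: /mnt/drive --bind 192.0.2.10
```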