Slow download from gdrive to local

If possible, can you run the vfs mount with a debug log and post the whole log from when you do the rsync copy test?

/usr/bin/rclone mount gcrypt: /GD --allow-other --dir-cache-time 48h --vfs-read-chunk-size 32M --vfs-read-chunk-size-limit 2G --tpslimit 5 --tpslimit-burst 5 --umask 002 --bind 192.168.1.30 --log-level DEBUG --log-file /home/felix/logs/rclone.log

I’m dying to see what the log shows from start to finish as something isn’t adding up.

You might be on to something. I’m going to try adding tpslimits to my mount.

4,752,949,749 100% 65.15MB/s 0:01:09 (xfr#1, to-chk=0/1)

Edit: I tried the transfer again and I’m back to the slow speeds:

241,991,680   5%    4.21MB/s    0:17:25 

Edit2: This issue seems to go away if I plug into my ISPs router. Sigh. I must have traffic shaping somewhere else in my network.

What are you using for a router?

pfsense. I think I just narrowed it down to rule 1 of IT: it’s always dns.

I had cloudflare’s DNSSEC enabled on pfsense and I just disabled it and went back to google’s DNS servers. The issue seems to be resolved. I’m thinking what happened is rclone had to resolve DNS every time it grabbed a new chunk of data, and the overhead of DNS over TLS was causing massive slowdowns. FML.

Edit: it looks like this wasn’t it. After a few minutes of glory, everything’s back to 4.5MBps

Are you doing any traffic shaping at all in PFSense? I’m very familiar with PFSense.

I run Unbound DNS resolver with DNSSEC enabled. I don’t use the forwarder options.

I’ve never set the shaping up before. I also use Unbound DNS resolver. I’m thinking it could be my netgear GS724TPv2 switch, but I never touched the QoS settings.

So there are a couple of things I would validate, though it doesn’t sound like they’re the cause. If you check the Traffic Shaper, confirm you don’t see anything there other than the blank section where the queues would be. Also confirm you haven’t set up any limiters, but those generally just cap throughput, so you wouldn’t see a burst and then a drop.

If you log in to your pfsense router, run top while the transfer is going on and make sure you aren’t seeing high interrupt load, which might be slowing you down.
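Something like this is what I mean from the pfsense shell while the copy is running (pfsense is FreeBSD underneath, so this is plain FreeBSD top; these particular flags are just my habit, not the only way to see it):

# from the pfsense shell (the Shell option in the console menu), while the transfer is running
top -P -S -H
# -P shows per-CPU usage, -S includes system processes, -H shows kernel threads.
# Watch the "interrupt" percentage on the CPU lines and the intr{...} kernel threads;
# one core pegged on interrupts usually means a NIC/queue bottleneck.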

Mine jump up a bit when pulling down 500 Mb/s, but nothing more than one core out of my 4 can handle. I traffic shape a gigabit link.

If you don’t see anything obvious, I’d go along with your simple route and limit the pieces of gear involved. If you can test with just the PFSense router and a machine plugged directly into it, start there and add pieces to your chain until it breaks.

DNS really shouldn’t be an issue, as you’d see those errors in your logs, and it only does the DNS lookup on the initial connection out. You’d see a clear message saying it can’t look up a DNS name.

You’d see timeouts too like:

rclone.log:2018/07/25 21:28:52 ERROR : Radarr_Movies/I Feel Pretty (2018)/I Feel Pretty (2018).mkv: ReadFileHandle.Read error: low level retry 1/10: read tcp 192.168.1.30:38943->172.217.10.138:443: i/o timeout

So that’s a retry for a connection which can happen. If you can share the full debug log from start to finish, that might have some info in there too.
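If you want a quick skim of the log yourself before posting it, something along these lines works (the path matches the --log-file from the mount command above):

# pull out just the DNS failures, timeouts and retries from the debug log
grep -E "ERROR|lookup|timeout|retry" /home/felix/logs/rclone.log | tail -n 50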

Same with my system… it’s about 4-5 MB/sec:

rclone v1.42

  • os/arch: linux/amd64
  • go version: go1.10.1

rsync --progress arrival-1080p-hyperx.mkv /home/vamp/download/
arrival-1080p-hyperx.mkv
260,866,048 2% 4.87MB/s 0:31:28


My mount settings:

/usr/bin/rclone mount --read-only --tpslimit 5 --allow-other --acd-templink-threshold 0 --stats 1s --buffer-size 128M --timeout 5s --contimeout 5s --cache-db-purge -vv --log-file=/home/vamp/log/rclone_log Google_gsuite_crypt:/ /home/vamp/google

If I try a normal “rclone copy” instead of the mount, I get the same results.


My internet speed is 1000/200 Mbit

Retrieving speedtest.net server list…
Retrieving information for the selected server…
Hosted by Vodafone Hungary Ltd. (Budapest) [2.50 km]: 10.409 ms
Testing download speed…
Download: 650.88 Mbit/s
Testing upload speed…
Upload: 287.98 Mbit/s


I use this tree: google -> cache -> crypt


I don’t see any errors in the log files.


I just ran continuous iperf tests along with speedtest.net and I’m not seeing any traffic shaping. I think Google might just be pulling our legs, and for whatever reason, whenever a transfer of a file first starts, they give me burst speeds.
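For reference, this is roughly the kind of test I mean, run continuously while the rclone copy is going (iperf3 here, and the server address is just a placeholder for whatever box you test against):

# on a machine on the far side of the suspect gear (or a public iperf3 server)
iperf3 -s
# from the rclone box, long tests in both directions
iperf3 -c <iperf-server> -t 120        # upload from this box
iperf3 -c <iperf-server> -t 120 -R     # reverse mode: download to this box
# if these hold steady while rclone crawls, the shaping isn't on your LAN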

This is how YouTube works; I would not be surprised if something similar is going on.

So if I download from the mount on my unRaid setup I max at 4-6 MB/sec; sometimes I’ll hover around 10 MB/sec.

Same mount settings on my Windows machine: I max my bandwidth.

I don’t have anything special network-wise on my unRaid at all.

No idea.

I mount on a seedbox (Linux) and I max my internet speed.

No idea.

Just throwing this out there that it’s not IP or account based.

I think I have a better idea of what’s going on here @Vamp @Animosity022 @ncw. This may very well be a peering issue of sorts with google’s newer v3 API endpoints. I just connected to my vpn provider and was able to get the max speed that the provider was showing on a speed test (roughly 200Mbps). If this is the case, the issue isn’t easily resolvable other than allowing users to use the v2 endpoints or if the user switches providers. This would explain why the issue varies so much from user to user and seems to disappear if using a VPS with a good backbone connection.

(it’s also possible that google’s web interface uses different endpoints to download files which would explain why that resolves the problem)
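If anyone wants to check the peering theory, comparing the route to the API endpoint with and without the VPN up is a quick sanity check (www.googleapis.com is the public Drive API host; the mtr flags are just my usual report mode):

# per-hop latency/loss report to the Drive API endpoint
mtr -rwc 50 www.googleapis.com
# run it once on the normal connection and once over the VPN, then compare the paths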

Yes, that’s definitely possible.

I usually do a rsync --progress to test my speeds.

I normally push the max of my line on the copy:

felix@gemini:/gmedia/Radarr_Movies/Unsane (2018)$ rsync --progress Unsane\ \(2018\).mkv  /data
Unsane (2018).mkv
  1,581,023,232   6%   86.70MB/s    0:04:34

I tested without the cache remote and it’s much better! With cache, about 30-40 Mbit; without cache, 180-200 Mbit!

Do I need to disable the cache remote, or optimize it, or…?

My drive is an SSD, so I don’t think that is what’s slow…

Cache is not built for speed; it’s built to keep transactions low. If you want speed, use vfs and not the cache.
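As a rough sketch, a vfs-style mount is just the crypt remote mounted directly, with no cache remote in the chain, relying on vfs chunked reads (the remote name and chunk sizes mirror my earlier example and would need adjusting for your setup):

# mount the crypt remote directly and let vfs chunked reading handle throughput
rclone mount gcrypt: /GD \
  --allow-other \
  --dir-cache-time 48h \
  --vfs-read-chunk-size 32M \
  --vfs-read-chunk-size-limit 2G \
  --umask 002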

Yes, I’ve now reconfigured it to use vfs. It’s much better! (about 200 Mbit)

My question is: which is best for Plex? Cache or vfs?

Best is a tough question.

Depends on what you think is best.

In my testing, I find cache works fine for streaming. It reduces a good amount of the API hits.

VFS gets me much higher throughput (which is expected, as it moves to bigger chunk sizes) and reduces API hits for a file transfer.

That being said, I use VFS for my config.

I see.

Up to now I’ve used cache; now I’m trying VFS. If I don’t get banned, I think it’s the better choice.

So did we just move past this? The version linked before (v1.41-075-ga193ccdb-drive_v2_download) isn’t available anymore, and now I’m capped 100% of the time at 25 MB/sec even with a gigabit connection. I don’t think @B4dM4n submitted a PR to add --drive-v2-download-min-size, and now that version is gone.

I also haven’t seen this option in the latest release from @ncw. Are there any options to work around this issue anymore?

I saw you made an issue about this which was a good idea.