Slow download from gdrive to local

@B4dM4n can you send a PR for this? Or do you think we should be doing it a different way?

@Animosity022 @ncw @B4dM4n
My testing procedure is downloading a file with rclone copy on two different versions and comparing the transfer speed.

Here is the output with 1.42:

INFO : XXXFILENAMEXXX: Copied (new)
INFO :
Transferred: 428 MBytes (4.265 MBytes/s)
Errors: 0
Checks: 0
Transferred: 1
Elapsed time: 1m40.4s


Here is the output with v1.41-075-ga193ccdb-drive_v2_download
(found here: https://beta.rclone.org/branch/v1.41-075-ga193ccdb-drive_v2_download/)

args: --stats 10s --drive-v2-download-min-size 0
INFO : XXXFILENAMEXXX: Copied (replaced existing)
INFO :
Transferred: 428 MBytes (35.756 MBytes/s)
Errors: 0
Checks: 0
Transferred: 1
Elapsed time: 11.9s
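For reference, the full invocation was along these lines (the remote name and paths here are placeholders, not the ones from the actual run):

rclone copy --stats 10s --drive-v2-download-min-size 0 gdrive:path/to/XXXFILENAMEXXX /local/dest -v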


There is a difference between an “rclone copy” and copying from the mount.

The rsync I was showing used the vfs-read-chunk-size set of options, and it goes extremely fast. Folks were talking about copying from a mount, in which case vfs will be a nice fit if that is the problem.

The issue you are seeing with rclone copy is a different one and relates back to the changes described above with the v3 vs the v2 API.

So back to the actual problem, are you having a problem copying files from a mount or just using the rclone copy command?

@Animosity022

I was testing using rclone copy. However, I have the same issue if I am copying off of an rclone mount. I tested using the following:

pv /path/on/mount/filename > /path/on/local/storage/filename

That transfer also tops out at ~40 Mbps, which is why I believe the problems are related. This is also why I stated that I cannot stream 4k content.
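(For reference, reading straight to /dev/null takes local disk writes out of the picture; the path below is a placeholder:)

pv /home/xxx/rclone_mount/path/to/file.mkv > /dev/null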

So what’s your mount command? How are you testing that? Can you share the debug output of the mount when you do that test? What’s the bitrate of the movie you are trying to stream?

My command above shows me pulling 100 MB/s, so just over 800 Mbps down my GD using the 1.42 version of rclone, which just about maxes out my gigabit connection.
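(For the unit math: 100 MB/s × 8 bits per byte = 800 Mbps, hence the gigabit ceiling.)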

Here is my systemd file and appropriate output streaming 4k.

[Unit]
Description=rclone Google Drive FUSE mount
Documentation=http://rclone.org/docs/
After=network-online.target

[Service]
User=plexadmin
Group=plexadmin
Type=simple
ExecStart=/usr/bin/rclone mount PlexCryptVFS: /home/plexadmin/rclone_mount \
   --allow-non-empty \
   --cache-db-purge \
   --allow-other \
   --buffer-size 32M \
   --cache-tmp-upload-path /home/plexadmin/bighdd/rclone_upload \
   --dir-cache-time 48h \
   --vfs-cache-max-age 48h \
   --vfs-read-chunk-size 32M \
   --vfs-read-chunk-size-limit 1G \
   --syslog \
   --rc \
   --log-level DEBUG \
   --config /home/plexadmin/.config/rclone/rclone.conf
ExecStop=/usr/bin/fusermount -uz /home/plexadmin/rclone_mount

[Install]
WantedBy=multi-user.target
(no transcoding is happening during this period)

Here is a sample router traffic graph during this period:

Thanks for your help!

The vfs-read-chunk-size options and the cache don’t work together. I’d suggest picking one or the other.

If you want to stick with the cache, you need to increase the --cache-chunk-size to something bigger than 5M; I’d suggest 32M or 64M.

If you don’t have a need for the cache or it doesn’t perform well enough for you, try just the vfs read chunk size options.

I do a pretty simple mount these days as I’ve been testing with various chunk sizes, but 32M or 64M should be a great starting point.

felix@gemini:/etc/systemd/system$ cat gmedia-rclone.service
[Unit]
Description=RClone Service
PartOf=gmedia.service

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount gcrypt: /GD \
   --allow-other \
   --dir-cache-time 48h \
   --vfs-read-chunk-size 64M \
   --vfs-read-chunk-size-limit 2G \
   --umask 002 \
   --bind 192.168.1.30 \
   --log-level INFO \
   --log-file /home/felix/logs/rclone.log
ExecStop=/bin/fusermount -uz /GD
Restart=on-failure
User=felix
Group=felix

[Install]
WantedBy=gmedia.service

If you could baseline a copy now with rsync --progress from the mount to a local disk, then test again after upping the cache-chunk-size, and finally, if you want, remove the cache config and test with just the --vfs-read-chunk-size options (sketched below), I’d surmise you can stream 4k without a problem. I’ve played multiple 50-60GB 4K movies with 3-4 other people streaming 1080p content on the --vfs-read-chunk-size config and haven’t had any problems.
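Roughly, the three tests would look like this (the mount point and file paths are placeholders; the final mount command reuses my gcrypt:/GD setup from above):

# 1) baseline from the current cache mount
rsync --progress /path/to/mount/file.mkv /local/disk/

# 2) same copy again after raising --cache-chunk-size to 32M or 64M in the mount command

# 3) same copy again with the cache remote removed, mounting with only the VFS options, e.g.:
/usr/bin/rclone mount gcrypt: /GD --allow-other --vfs-read-chunk-size 64M --vfs-read-chunk-size-limit 2G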

I’ve just tried doing that and the copy is still stuck at ~4.5 MB/s. :confused:
I’m gonna switch back to cache for now and hope that the --drive-v2-download-min-size flag is released soon for the mount command.

Tried what? Can you share what you did / tested?

I had already switched to vfs, just accidentally left some of those params in. I tried the copy again (after using your posted mount settings) and it resulted in the same speed, ~4.5 MB/s. Interesting update though: I’ve switched back to cache and tried the copy again, and this time it peaks at ~15 MB/s, which is enough for the 4k but still awfully slow.

Edit 1: it’s back to ~5 MB/s. I really think Google has some funky business going on.

If you can post a snap of your mount command (“ps -ef | grep rclone”) and what you are doing to test it, that would be helpful, along with some of the debug logs, as that might shed some light on why you are seeing some slowness.

It’s tough to figure it out if you don’t share specifics :slight_smile:

New mount command (using cache backend)

/usr/bin/rclone mount PlexCrypt: /home/xxx/rclone_mount \
   --allow-non-empty \
   --allow-other \
   --cache-tmp-wait-time 30m \
   --cache-chunk-size=64M \
   --cache-total-chunk-size=20G \
   --cache-workers 16 \
   --buffer-size 0M \
   --cache-tmp-upload-path /home/xxx/bighdd/rclone_upload \
   --cache-db-purge \
   --syslog \
   --rc \
   --log-level DEBUG \
   --config /home/xxx/.config/rclone/rclone.conf

I’m running the rsync command now, but it appears it might be hung.

If you have 16 cache-workers, that’s going to slow things down quite a bit, as it spawns 16 workers, each trying to fetch 64M. Bigger isn’t always better. If you want to use 64M, I’d bring it down to 4 or 5 workers at most. I’d surmise you are seeing some 403s in the debug logs as well, as that’s making a lot of API calls too.
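Back-of-the-envelope, assuming each worker fetches one chunk at a time:

16 workers x 64M chunks = 1024M requested per read-ahead pass
 4 workers x 64M chunks =  256M requested per read-ahead pass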

I ran rsync --progress rclone_mount/data.mkv ~/test.mkv and it would freeze, shoot up to 100 MB/s, and freeze again. While frozen, the traffic graph on my router was steady at 4.5 MB/s. I’m going to try reducing the cache workers to 8 and the chunk size to 32M.

Update: New config and results below. :confused:

[Unit]
Description=rclone Google Drive FUSE mount
Documentation=http://rclone.org/docs/
After=network-online.target

[Service]
User=xxx
Group=xxx
Type=simple
ExecStart=/usr/bin/rclone mount PlexCrypt: /home/xxx/rclone_mount \
   --allow-non-empty \
   --allow-other \
   --cache-tmp-wait-time 30m \
   --cache-chunk-size=32M \
   --cache-total-chunk-size=20G \
   --cache-workers 8 \
   --buffer-size 0M \
   --cache-tmp-upload-path /home/plexadmin/bighdd/rclone_upload \
   --cache-db-purge \
   --syslog \
   --rc \
   --log-level DEBUG \
   --config /home/xxx/.config/rclone/rclone.conf
ExecStop=/usr/bin/fusermount -uz /home/xxx/rclone_mount

[Install]
WantedBy=multi-user.target

plexadmin@plex:~/rclone_mount/TV/xxx/Season 1$ du -ms 'xxx- S01E02 - xxx'
10042   xxx.mkv
plexadmin@plex:~/rclone_mount/TV/xxx/Season 1$ rsync --progress 'xxx.mkv' ~/test.mkv
xxx.mkv
  1,073,250,304  10%    5.87MB/s    0:26:12

After a few minutes, the speed dwindles again as shown above.

I think you have something else going on perhaps. I replicated your config:

felix@gemini:/Test/Radarr_Movies/Tomb Raider (2018)$ rsync --progress Tomb\ Raider\ \(2018\).mkv /data
Tomb Raider (2018).mkv
  1,614,839,808   2%   79.41MB/s    0:11:31  ^C
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.2]
rsync: mkstemp "/data/.Tomb Raider (2018).mkv.r2J1UJ" failed: Permission denied (13)
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at io.c(513) [generator=3.1.2]
felix@gemini:/Test/Radarr_Movies/Tomb Raider (2018)$
felix@gemini:/Test/Radarr_Movies/Tomb Raider (2018)$ ps -ef | grep Test
felix    21850  4041 23 21:34 pts/1    00:00:21 /usr/bin/rclone mount gmedia: /Test --allow-other --cache-tmp-wait-time 30m --cache-chunk-size=32M --cache-total-chunk-size=20G --cache-workers 8 --buffer-size 0M --cache-db-purge --log-file /home/felix/logs/testcache.log --log-level DEBUG

I can still get 70-90 MB/s on that config. What are your speeds rated at?

I let this copy run all the way:

felix@gemini:/Test/TV/Sherlock$ rsync --progress Sherlock.S04E03.mkv /data
Sherlock.S04E03.mkv
  2,607,354,583 100%   85.40MB/s    0:00:29

The speeds are constant at ~4.5 MB/s, occasionally bursting to anywhere between ~25-120 MB/s. Could it be the account type? It’s an edu account that I legitimately acquired (not eBay).

thank you so much for the help btw @Animosity022

I wonder if someone else with a .edu account could chime in as I don’t know. I have a registered domain with a single user that I use.

It does seem very strange that it bursts and dies, as that seems more related to something throttling it or rclone backing off, but you aren’t seeing errors/rate limits in the logs?

I generated these in my logs as I had a bunch of streams going on while I was testing as well:

2018/07/30 21:35:07 DEBUG : pacer: Rate limited, sleeping for 1.817931709s (1 consecutive low level retries)
2018/07/30 21:35:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2018/07/30 21:35:07 DEBUG : pacer: Rate limited, sleeping for 2.307111691s (2 consecutive low level retries)
2018/07/30 21:35:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2018/07/30 21:35:07 DEBUG : pacer: Rate limited, sleeping for 4.842705593s (3 consecutive low level retries)
2018/07/30 21:35:07 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)

I just realized I had syslog output enabled, which is why I didn’t see many debug messages (I was only checking with systemctl status). I’m going to dig through that and report back.
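For reference, something along these lines should pull the relevant rclone lines back out of syslog (the log path varies by distro):

grep rclone /var/log/syslog | grep -iE "pacer|403|error"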

Edit: Other than the occasional chunk retries, I didn’t see any errors. At this point I’m not sure what else it could be.

Just for giggles, can you try to download a file via a browser or something to test a straight download and see if that also gets stuck at 5 MB/s?

Downloading via the web UI is fine.