@B4dM4n can you send a PR for this? Or do you think we should be doing it a different way?
@Animosity022 @ncw @B4dM4n
My testing procedure is downloading a file with rclone copy on two different versions and comparing the transfer speed.
Here is the output with 1.42:
INFO : XXXFILENAMEXXX: Copied (new)
INFO :
Transferred: 428 MBytes (4.265 MBytes/s)
Errors: 0
Checks: 0
Transferred: 1
Elapsed time: 1m40.4s
Here is the output with v1.41-075-ga193ccdb-drive_v2_download
(found here: https://beta.rclone.org/branch/v1.41-075-ga193ccdb-drive_v2_download/)
args: --stats 10s --drive-v2-download-min-size 0
INFO : XXXFILENAMEXXX: Copied (replaced existing)
INFO :
Transferred: 428 MBytes (35.756 MBytes/s)
Errors: 0
Checks: 0
Transferred: 1
Elapsed time: 11.9s
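For reference, the average rates in the two stats blocks above can be reproduced directly from the transferred bytes and elapsed times (a quick sanity check using only numbers from the output above):

```python
# Sanity-check the average transfer rates reported above.
transferred_mb = 428

v142_elapsed_s = 100.4   # 1m40.4s with v1.42 (v3 API)
v2_elapsed_s = 11.9      # drive_v2_download beta build

v142_rate = transferred_mb / v142_elapsed_s
v2_rate = transferred_mb / v2_elapsed_s

print(f"v1.42: {v142_rate:.2f} MB/s")        # ~4.26 MB/s, matches the 4.265 reported
print(f"v2 dl: {v2_rate:.2f} MB/s")          # ~35.97 MB/s, close to the 35.756 reported
print(f"speedup: {v2_rate / v142_rate:.1f}x")
```

So the v2 download path is roughly 8x faster on the same file in this test.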
There is a difference between an "rclone copy" and copying from the mount.
The rsync I was showing used the vfs-read-chunk-size set of options, and it goes extremely fast. Folks were talking about copying from a mount, in which case vfs will be a nice fit if that is the problem.
The issue you are seeing with rclone copy is a different one and relates back to the changes described above with the v3 vs the v2 api.
So back to the actual problem, are you having a problem copying files from a mount or just using the rclone copy command?
I was testing using rclone copy. However, I have the same issue if I am copying off of an rclone mount (I tested using the following:
pv filename > folder on local storage)
That transfer also tops out at ~40 Mbps, which is why I believe the problems are related. This is also why I stated that I cannot stream 4k content.
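The arithmetic behind "can't stream 4k at 40 Mbps" is worth spelling out; a rough sketch, where the 55 GB size and 2 h runtime are illustrative assumptions rather than figures from this thread:

```python
# Rough arithmetic for why a ~40 Mbps ceiling breaks 4K playback.
# The 55 GB size and 2 h runtime are illustrative assumptions.
link_mbps = 40
link_mb_per_s = link_mbps / 8                    # 5.0 MB/s

movie_gb = 55                                    # typical large 4K remux
runtime_s = 2 * 3600
avg_bitrate_mbps = movie_gb * 8000 / runtime_s   # ~61 Mbps average

print(f"link:  {link_mb_per_s:.1f} MB/s ({link_mbps} Mbps)")
print(f"movie: {avg_bitrate_mbps:.0f} Mbps average bitrate")
# needed bitrate exceeds the available link -> constant buffering
```

A file that big averages well above the 40 Mbps the transfer is topping out at, so playback stalls even before bitrate peaks are considered.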
So what's your mount command? How are you testing that? Can you share the debug output of the mount when you do that test? What's the bitrate of the movie you are trying to stream?
My command above shows me pulling 100 MB/s, so just over 800 Mbps, down my GD using the 1.42 version of rclone, which just about maxes out my gigabit connection.
Here is my systemd file and appropriate output streaming 4k.
[Unit]
Description=rclone Google Drive FUSE mount
Documentation=http://rclone.org/docs/
After=network-online.target
[Service]
User=plexadmin
Group=plexadmin
Type=simple
ExecStart=/usr/bin/rclone mount PlexCryptVFS: /home/plexadmin/rclone_mount \
--allow-non-empty \
--cache-db-purge \
--allow-other \
--buffer-size 32M \
--cache-tmp-upload-path /home/plexadmin/bighdd/rclone_upload \
--dir-cache-time 48h \
--vfs-cache-max-age 48h \
--vfs-read-chunk-size 32M \
--vfs-read-chunk-size-limit 1G \
--syslog \
--rc \
--log-level DEBUG \
--config /home/plexadmin/.config/rclone/rclone.conf
ExecStop=/usr/bin/fusermount -uz /home/plexadmin/rclone_mount

[Install]
WantedBy=multi-user.target
(no transcoding is happening during this period)
Here is a sample router traffic graph during this period:
Thanks for your help!
The vfs-read-chunk-size options and the cache backend don't work together. I'd suggest picking one or the other.
If you want to stick with the cache, you need to increase --cache-chunk-size to something bigger than 5M; I'd suggest 32M or 64M.
If you don't have a need for the cache, or it doesn't perform well enough for you, try just the vfs read chunk size options.
I do a pretty simple mount these days as I've been testing with various chunk sizes, but 32M or 64M should be a great starting point.
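For context on why the vfs read chunk options perform well: per the rclone docs, reads start at --vfs-read-chunk-size and the requested range doubles with each sequential chunk until it reaches --vfs-read-chunk-size-limit. A minimal sketch of that growth (the function name is mine, not rclone's):

```python
# Sketch of rclone's --vfs-read-chunk-size growth: each sequential
# chunk request doubles in size until --vfs-read-chunk-size-limit.
def chunk_sizes(initial_mib, limit_mib, file_mib):
    sizes, offset, size = [], 0, initial_mib
    while offset < file_mib:
        sizes.append(size)
        offset += size
        size = min(size * 2, limit_mib)
    return sizes

# 10 GiB file with a 32M start and 1G limit (the flags in the unit above):
print(chunk_sizes(32, 1024, 10 * 1024))
# -> [32, 64, 128, 256, 512, 1024, 1024, ...] i.e. only a handful of
#    HTTP range requests cover the whole file
```

The doubling means sequential streaming quickly settles into large, cheap range requests instead of hammering the API with small ones.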
felix@gemini:/etc/systemd/system$ cat gmedia-rclone.service
[Unit]
Description=RClone Service
PartOf=gmedia.service
[Service]
Type=notify
ExecStart=/usr/bin/rclone mount gcrypt: /GD \
--allow-other \
--dir-cache-time 48h \
--vfs-read-chunk-size 64M \
--vfs-read-chunk-size-limit 2G \
--umask 002 \
--bind 192.168.1.30 \
--log-level INFO \
--log-file /home/felix/logs/rclone.log
ExecStop=/bin/fusermount -uz /GD
Restart=on-failure
User=felix
Group=felix
[Install]
WantedBy=gmedia.service
If you could test a copy now: baseline with an rsync --progress from the mount to a local disk, then test again after upping the cache-chunk-size, and finally, if you want, remove the cache config and use just --vfs-read-chunk-size for a final test. I'd surmise you can stream 4k without a problem. I've played multiple 50-60GB 4K movies without a problem, with 3-4 other people streaming 1080p content, using the --vfs-read-chunk-size config, and haven't had any issues.
I've just tried doing that and the copy is still stuck at ~4.5 MB/s.
I'm gonna switch back to cache for now and hope that the --drive-v2 flag is released soon for the mount command.
Tried what? Can you share what you did / tested?
I had already switched to vfs, just accidentally left some of those params in. I tried the copy again (after using your posted mount settings) and it resulted in the same speed, ~4.5 MB/s. Interesting update though: I've switched back to cache and tried the copy again, and this time it peaks at ~15 MB/s, which is enough for the 4k but still awfully slow.
Edit 1: it's back to ~5 MB/s. I really think google has some funky business going on.
If you can post a snap of your mount command ("ps -ef | grep rclone") and what you are doing to test it, that would be helpful, along with some of the debug logs, as that might shed some light on why you are seeing some slowness.
It's tough to figure it out if you don't share specifics.
New mount command (using cache backend)
/usr/bin/rclone mount PlexCrypt: /home/xxx/rclone_mount \
--allow-non-empty \
--allow-other \
--cache-tmp-wait-time 30m \
--cache-chunk-size=64M \
--cache-total-chunk-size=20G \
--cache-workers 8 \
--buffer-size 0M \
--cache-tmp-upload-path /home/xxx/bighdd/rclone_upload \
--cache-db-purge \
--syslog \
--rc \
--log-level DEBUG \
--config /home/xxx/.config/rclone/rclone.conf
I'm running the rsync command now, but it appears it might be hung.
If you have 16 cache-workers, that's going to slow things down quite a bit, as it spawns 16 workers each trying to get 64M. Bigger isn't always better. If you want to use 64M, I'd bring it down to 4 or 5 workers at most. I'd surmise you are seeing some 403s in the debug logs as well, as that's hitting a lot of API calls too.
I ran rsync --progress rclone_mount/data.mkv ~/test.mkv and it would freeze, shoot up to 100 MB/s, and freeze again. While frozen, the traffic graph on my router was steady at 4.5 MB/s. I'm going to try reducing the cache workers to 8 and the chunk size to 32M.
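The worker/chunk trade-off above is simple arithmetic: each read burst requests roughly workers × chunk-size at once, and every chunk is a separate API call. A sketch (the helper name is mine):

```python
# In-flight data per read burst is roughly workers * chunk size,
# and each chunk fetched is a separate API request against Drive.
def burst_mib(workers, chunk_mib):
    return workers * chunk_mib

print(burst_mib(16, 64))  # 1024 MiB in flight -> lots of parallel API calls
print(burst_mib(4, 64))   # 256 MiB with the suggested 4-5 workers
print(burst_mib(8, 32))   # 256 MiB with 8 workers at 32M chunks
```

So dropping the worker count (or the chunk size) cuts both the burst size and the API pressure that triggers 403 rate limits.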
Update: New config and results below.
[Unit]
Description=rclone Google Drive FUSE mount
Documentation=http://rclone.org/docs/
After=network-online.target
[Service]
User=xxx
Group=xxx
Type=simple
ExecStart=/usr/bin/rclone mount PlexCrypt: /home/xxx/rclone_mount \
--allow-non-empty \
--allow-other \
--cache-tmp-wait-time 30m \
--cache-chunk-size=32M \
--cache-total-chunk-size=20G \
--cache-workers 8 \
--buffer-size 0M \
--cache-tmp-upload-path /home/plexadmin/bighdd/rclone_upload \
--cache-db-purge \
--syslog \
--rc \
--log-level DEBUG \
--config /home/xxx/.config/rclone/rclone.conf
ExecStop=/usr/bin/fusermount -uz /home/xxx/rclone_mount
[Install]
WantedBy=multi-user.target
plexadmin@plex:~/rclone_mount/TV/xxx/Season 1$ du -ms 'xxx- S01E02 - xxx'
10042 xxx.mkv
plexadmin@plex:~/rclone_mount/TV/xxx/Season 1$ rsync --progress 'xxx.mkv' ~/test.mkv
xxx.mkv
1,073,250,304 10% 5.87MB/s 0:26:12
After a few minutes, the speed dwindles again as shown above.
I think you have something else going on perhaps. I replicated your config:
felix@gemini:/Test/Radarr_Movies/Tomb Raider (2018)$ rsync --progress Tomb\ Raider\ \(2018\).mkv /data
Tomb Raider (2018).mkv
1,614,839,808 2% 79.41MB/s 0:11:31 ^C
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at rsync.c(644) [sender=3.1.2]
rsync: mkstemp "/data/.Tomb Raider (2018).mkv.r2J1UJ" failed: Permission denied (13)
rsync error: received SIGINT, SIGTERM, or SIGHUP (code 20) at io.c(513) [generator=3.1.2]
felix@gemini:/Test/Radarr_Movies/Tomb Raider (2018)$
felix@gemini:/Test/Radarr_Movies/Tomb Raider (2018)$ ps -ef | grep Test
felix 21850 4041 23 21:34 pts/1 00:00:21 /usr/bin/rclone mount gmedia: /Test --allow-other --cache-tmp-wait-time 30m --cache-chunk-size=32M --cache-total-chunk-size=20G --cache-workers 8 --buffer-size 0M --cache-db-purge --log-file /home/felix/logs/testcache.log --log-level DEBUG
I can still get 70-90 MB/s on that config. What are your speeds rated at?
I let this copy run all the way:
felix@gemini:/Test/TV/Sherlock$ rsync --progress Sherlock.S04E03.mkv /data
Sherlock.S04E03.mkv
2,607,354,583 100% 85.40MB/s 0:00:29
The speeds are constant at ~4.5 MB/s, occasionally bursting to anywhere between ~25-120 MB/s. Could it be the account type? It's an edu account that I legitimately acquired (not ebay).
thank you so much for the help btw @Animosity022
I wonder if someone else with a .edu account could chime in, as I don't know. I have a registered domain with a single user that I use.
It does seem very strange that it bursts and dies, as that seems more related to something throttling it or rclone backing off, but you aren't seeing errors/rate limits in the logs?
I generated these in my logs as I had a bunch of streams going on while I was testing as well:
2018/07/30 21:35:07 DEBUG : pacer: Rate limited, sleeping for 1.817931709s (1 consecutive low level retries)
2018/07/30 21:35:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2018/07/30 21:35:07 DEBUG : pacer: Rate limited, sleeping for 2.307111691s (2 consecutive low level retries)
2018/07/30 21:35:07 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
2018/07/30 21:35:07 DEBUG : pacer: Rate limited, sleeping for 4.842705593s (3 consecutive low level retries)
2018/07/30 21:35:07 DEBUG : pacer: low level retry 2/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=286927808882, userRateLimitExceeded)
I just realized I had syslog output enabled, which is why I didn't see many debug messages (I was only checking with systemctl status). I'm going to dig through that and report back.
Edit: Other than the occasional chunk retry messages, I didn't see any errors. At this point I'm not sure what else it could be.
Just for giggles, can you try to download a file via a browser or something to test a straight download and see if that also gets stuck at 5MB/s?