Rclone Move? Speed is capped at ~50 MB/s?

Hello,

So basically I want to understand: does Google Drive have a cap on transfer speed per file? (http://prntscr.com/okflr6)

The thing is that I don't get more than 55 MB/s, and it doesn't depend on how many files I'm transferring.

10Gbit channel. Dedicated.

Command:

rclone -v move --buffer-size 1024M --drive-chunk-size 1024M --transfers 25 --copy-links --stats 1s $Movies GDrive:Movies/

Those are big M (megabytes), so you are getting ~1194 Mb/s, which is pretty nice. Hard to tell what else on your system might be the bottleneck.

With Google, you can only create 2-3 files per second, so 25 transfers would really hamper things if you were creating that many.

A drive-chunk-size of 1G doesn't seem good either, as the recommendation from testing is 32M or 64M.

There is a good article on that written up by the University of Utah here:

https://www.chpc.utah.edu/documentation/software/rclone.php
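
Putting those two suggestions together, something along these lines might be worth trying (the 64M chunk size and lower transfer count are just the recommendations above, not values tested on your setup):

rclone move -v --drive-chunk-size 64M --transfers 4 --copy-links --stats 1s $Movies GDrive:Movies/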

It's not something with my system; I tested on seedboxes and locally as well, and the speed is capped at around 50 MB/s (megabytes).
I'm not getting "~1194 Mb/s".

Is your screenshot wrong?

It shows 153 MB/s which is ~1224 Mb/s.


Short answer - no. Gdrive shouldn't have any real limit on bandwidth. I have no idea how much of a 10Gbit link it can saturate, but it should be a significant amount. It certainly eats up my 150Mbit like it's nothing.

Be aware that many small files are an issue in rclone. You won't get good bandwidth utilization on them.
For large files, however, you should - but with that amount of bandwidth you should absolutely increase:
upload_cutoff = 256M
chunk_size = 256M
(this needs to go in your config, under the Gdrive header)
to as high as you can afford. This uses that amount of memory multiplied by the number of transfers, so make sure your system can actually handle that. The default is 8MB - and uploading in 8MB segments on your uplink would be silly. At an absolute minimum I'd run the numbers on memory use first. 25 transfers seems very high. Just don't assume that more = better, because at some point it will just get slower.

This only affects uploads. Downloads are split by the chunk size (which you seem to have set high already).
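
For illustration, the relevant bit of rclone.conf could look roughly like this (the GDrive remote name is taken from your command, and 256M is just an example value - scale it to what your RAM and transfer count allow):

[GDrive]
type = drive
# ...existing client_id / token lines stay as they are...
upload_cutoff = 256M
chunk_size = 256M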

I would also make sure your disk can actually keep up with a lot of bandwidth, especially when heavily threaded. An HDD could possibly struggle to feed 25 transfers. Worth checking.
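
A quick sanity check on Linux could be something like this (the device and file path are only placeholders):

sudo hdparm -t /dev/sda          # example device - substitute your own
dd if=/path/to/one/of/your/movie/files of=/dev/null bs=1M status=progress   # placeholder path

If sequential reads already sit near 50 MB/s there, the upload speed would make sense.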

153 megabytes is ~1224 megabits, and that's bad when you have a 10Gbit network. That is an overall value for all 3 files, but that's not what I'm asking about. I'm asking about the 'cap' on each file separately.

It does indeed look like more than a gigabit. If you get that much I kind of feel there isn't much left to complain about lol. My local LAN isn't that fast..

4x NVMe SSDs in RAID 0, so yeah =).


There's no cap from the Google side on any transfer. As I shared, you might want to try changing the chunk size to something smaller and see how that works.

Just because you have a 10Gb link on a card doesn't mean you have a full 10Gb to Google.

Did you check all the other metrics and see what's going on? CPU? Disk IO? etc?
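
For example, while a transfer is running, something like this would show whether CPU or disk is the limiter (iostat is from the sysstat package, and the interface name is just a placeholder):

top
iostat -x 1
iftop -i eth0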

If you click on the link I shared, they have similar high end results.


Basically I've tested with 25 files at the same time and I got ~40 MB/s on each file.

But that's not the point.

If I just try to upload 1 file, the speed won't go above ~50 MB/s.

Everything is OK with those stats. So yeah.

Well, I can at the very least confirm it's not some hard cap by Google. I just transferred at 157 Mbit/s on a single file with no issue. It took about 7-8 seconds to ramp up to full speed. I currently use a 256MB upload chunk size (and cutoff) for Gdrive.

Any more than that I simply don't have the bandwidth to test. I would find it very strange if Google set a per-transfer bandwidth limit rather than some more general limit (if one even exists).

Maybe my mount is wrong?

Here it is:

rclone mount -vvv --dir-cache-time 96h --cache-chunk-total-size 25G --drive-chunk-size 256M --vfs-read-chunk-size 256M --vfs-read-chunk-size-limit off --timeout 1h --tpslimit 20 --checkers 10 --umask=002 --rc --user-agent 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.77 Safari/537.36' --allow-other --buffer-size 0 GDrive: ~/GDrive/

I don't see anything obviously wrong there. Also there's not really a lot about the mount that should change how uploads perform.

Note: cache-chunk-total-size is a cache backend flag, as will be any others that start with cache. It won't do anything unless you use the cache backend. I don't know if you do, but just checking that you aren't confusing it with vfs-cache-max-size.

But in any case - it certainly can't hurt to start the troubleshooting by doing a test transfer on a single file directly from the rclone command line. The first rule of troubleshooting is to eliminate as many sources of error as possible - and if it caps at the same speed per transfer there too, then we have just eliminated a lot of possibilities.
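
Something along these lines would do it (the file path and the destination folder are just placeholders; the GDrive remote name is from your earlier command):

rclone copy -v --stats 1s --drive-chunk-size 256M /path/to/one/large/file.mkv GDrive:speedtest/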

With a single file it's something like this (https://prnt.sc/okhajc, https://prnt.sc/okhaq3). Not good at all.

I began using those settings yesterday after your advice and can confirm that they make a considerable difference. My connection is constantly maxed out at 120MB/sec when uploading :beer:

I tried with that kind of mount and got basically the same thing.

You'd want to remove checkers as that does nothing on a mount.
You can remove the tpslimit of 20 as the defaults are fine there.

I'd remove your vfs parameters as you can just keep them at the defaults.
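
Roughly what's left after stripping those out would be something like this (same remote and mount point as your command - just a sketch, not a tuned config):

rclone mount -v --dir-cache-time 96h --drive-chunk-size 256M --timeout 1h --umask 002 --allow-other --rc GDrive: ~/GDrive/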

Are you writing to your mount as well?

A couple of things there just don't really apply.

It would be a pleasure if you could help me out.

Basically I'm using it only for Plex + 'rclone move'. Watching 4K from all available devices. In this case I need fast buffering/fast loading and so on. I don't want to get stuck in the middle of playback.

What config should I use?

My mount is pretty simple as I stream 4K content to mostly ATV devices, so the majority is direct play with some direct stream, but hardly any transcoding.

felix      783     1  1 Jul23 ?        01:12:32 /usr/bin/rclone mount gcrypt: /GD --allow-other --buffer-size 1G --dir-cache-time 96h --log-level INFO --log-file /opt/rclone/logs/rclone.log --timeout 1h --umask 002 --rc

My use case is streaming for Plex and I have a single home server that has 32GB of memory. I rarely stream more than 5-6 concurrent streams, with 2-3 being direct stream (where the buffer helps out in rclone) and 2-3 being transcodes, in which case the buffer doesn't matter much.

I use rclone move from a local disk to my encrypted remote.

[felix@gemini scripts]$ cat upload_cloud
#!/usr/bin/bash
# RClone Config file
RCLONE_CONFIG=/opt/rclone/rclone.conf
export RCLONE_CONFIG

#exit if running
if [[ "`pidof -x $(basename $0) -o %PPID`" ]]; then exit; fi

# Move older local files to the cloud
/usr/bin/rclone move /data/local/ gcrypt: --log-file /opt/rclone/logs/upload.log -v --drive-chunk-size 64M --exclude-from /opt/rclone/scripts/excludes --delete-empty-src-dirs  --fast-list

With your type of mount and 'move' I'm getting this (https://prnt.sc/okivp2, https://prnt.sc/okivrb). Still the same. =(