Rclone --bwlimit "newbie" Qs

With some tinkering, I’ve realized that --bwlimit doesn’t set a hard upper limit on upload bandwidth but instead tries to average out bandwidth consumption over time. So depending on the file sizes (e.g. lots of small files), I’ll see rclone completely saturate the upload connection, which is causing internet usability issues on the network. I’ve been fooling around with --drive-chunk-size, --checkers, and --transfers, but to no avail. Ideally I’d like to keep rclone from ever crossing the “bandwidth limit” (e.g. 150 kBytes/s). Can you guys point me in a direction?

rclone version: 1.40
remote: Google Drive
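For reference, this is the sort of thing I’ve been trying - the flag values are just guesses I experimented with, not a known-good combination, and /source and /destination are placeholders for my real paths:

rclone copy /source /destination --bwlimit 150k --transfers 1 --checkers 2 --drive-chunk-size 4M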

What’s the actual command you are running?

There is also a note in the docs:

Note that the units are Bytes/s, not Bits/s. Typically connections are measured in Bits/s - to convert divide by 8. For example, let’s say you have a 10 Mbit/s connection and you wish rclone to use half of it - 5 Mbit/s. This is 5/8 = 0.625 MByte/s so you would use a --bwlimit 0.625M parameter for rclone.
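Applied to that example, the invocation would look something like this (paths are placeholders):

rclone copy /source /destination --bwlimit 0.625M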

Yup, aware of the bits/bytes conversion…

rclone -v --bwlimit=150k copy /source /destination

My upload is 3.5 Mbit/s, so the above command should only be using ≈ 1.2 Mbit/s, but watching my router’s Tx rates I’m seeing the upload max out occasionally. There are no other processes on the network. I’ll see if I can grab some screenshots.

Limit being “respected”, then some magic happens - I’m thinking rclone is checking files (idk)…

Breaks out and saturates the upload…

Goes back to “respecting” the limit…

Maybe run that with -vv and post some output? I wonder if it’s uploading something and then checking, or pulling through some big listing, since the limit only applies to the data transfer:

Bandwidth limits only apply to the data transfer. They don’t apply to the bandwidth of the directory listings etc.

If you post the logs, maybe that will point out what’s actually consuming it.
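For example, something along these lines - rclone’s most verbose level is -vv, and the log path here is just a placeholder:

rclone -vv --bwlimit=150k --log-file=/tmp/rclone-debug.log copy /source /destination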

@Animosity022, thanks for the feedback. Pretty sure I’m replicating issue #1944, which has an enhancement slated for it. Leaving everything here in case it helps someone down the line.

Hmm. That bug seems to be for S3 transfers. I wonder if it’s the same for GD.

I’m wondering the same thing myself, and hoping it is. But for now I’m kinda giving up - planning on only using rclone in the evenings, when I don’t need to hold the upload back.
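As an aside, --bwlimit can also take a timetable, so the limit can be relaxed in the evenings automatically - a sketch only, with arbitrary times and rates, assuming your rclone version supports the timetable syntax:

rclone copy /source /destination --bwlimit "08:00,150k 23:00,off"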

Ah, rclone only limits file transfers, not directory listing transfers… Have you got a huge number of files?

#1944 would fix that
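In the meantime, something that might take the edge off - an assumption on my part, not a confirmed workaround - is capping the number of API transactions per second and letting Drive return recursive listings in fewer calls:

rclone copy /source /destination --bwlimit=150k --tpslimit 2 --fast-list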

@ncw the source directory contains 6800 subdirectories and the total number of files within those directories is 92381.

Should have mentioned this earlier: I was crypt’ing the files to the remote. Not sure if that matters.

Yes, I think it is likely that listings are causing the bandwidth overrun. This will be fixed by #1944

It will use even more listing bandwidth, as the encrypted file names are longer!

@ncw Perfect, looking forward to the update in the future. Thanks for your assistance.
