--bwlimit (bandwidth limit) not working (I know it's kbytes/s)

I’ve seen a few other posts about this, but they didn’t go very far. For about a week I’ve noticed that --bwlimit isn’t working. Yes, I know it’s in kilobytes/s and not kilobits/s. I’m using rclone 1.36 on 64-bit Linux.

When it was functioning correctly, I’d set --bwlimit 400, for example, and it would limit the transfer to 400 kilobytes/s according to my DD-WRT bandwidth graph. Now, even though rclone states it’s limiting to 400, the transfer is running around 768 kbytes/s, which is the max of my upload. Nothing obvious changed around the time it stopped following the limit.
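For reference, my invocation is along these lines (paths and remote name generalized, not my exact command):

rclone sync /data/backup b2:my-bucket --bwlimit 400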

I am using both 5M and 100M limits and both are working exactly as expected for me.

Maybe try it with a k; it might not default to kilobytes:
--bwlimit 400k

I tried that too. The thing is, rclone states it’s running at 400 kbytes/s, and even reports an actual speed of 399.something K/s, but this does not correspond to what DD-WRT is claiming. Additionally, it has transferred about 25GB in the last roughly 12 hours, which is not 400K/s: at that rate, 12 hours would come to only about 17GB.

Unless this applies to each concurrent transfer (--transfers)? I don’t believe it does though.

The limit is across all transfers. I would try something like iftop to check the connection speeds. Your computer might be doing some other uploads not related to rclone. But if the 25GB figure is correct, that’s too much for 400KB/s.
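To narrow it down, you can point iftop at the right interface and filter it to just the machine running rclone, something like this (the interface name and host address are placeholders for your setup):

sudo iftop -i eth0 -f 'host 192.168.1.10'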

I have looked at iftop, but the numbers are too jumpy to be meaningful, likely because it’s moving a lot of small files. I could try syncing one large file.

Set a minimum file size with --min-size to get a good feel (assuming you have some bigger stuff in there). You’d likely want more than a few files in the transfer so the run lasts long enough to measure; see the sketch below.
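Something along these lines, as a sketch (the remote, paths, and the 50M threshold are only examples):

rclone sync /data/backup b2:my-bucket --bwlimit 400k --min-size 50M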

I haven’t tried the option yet, but how heavily is the directory listing used when syncing? If you have tons of files and rclone re-checks the listing from time to time, I’m not sure how much traffic that could add on top of the file transfers.

Bandwidth limits only apply to the data transfer. They don’t apply to the bandwidth of the directory listings etc.

Just an idea :)

Maybe. Although I have a lot of files, some small and some large, I see an awful lot of sustained high bandwidth utilization, which leads me to think it’s simply not throttling transfers at all. I still need to test this with a single large file.

Which cloud provider are you using? And which OS?

Are there lots of retries happening? That could bust the bandwidth limit maybe.

This is happening with B2 and S3, and the OS is Linux. I don’t believe there are a lot of retries, since the calculated total transfer over time seems to be what you’d expect.

Update: maybe it is working after all. I tried syncing again with one large file added, and the one-minute stats updates showed the transfer rate and the running total; if I can trust those numbers (I think I can), the rate works out to be correct.
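For anyone retracing this, the test was roughly the following (the file name, size, and remote are placeholders; --stats 1m is what produced the one-minute updates):

dd if=/dev/urandom of=/data/backup/bigfile.bin bs=1M count=2048
rclone sync /data/backup b2:my-bucket --bwlimit 400k --stats 1m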

During that time the DD-WRT bandwidth graph wasn’t making sense, so maybe that’s what I can’t trust!