Let me preface by saying that I am fairly new to using rclone.
I am getting rate limited before reaching the advertised rate limit on Linode (S3-compatible) object storage. They advertise a limit of 750 requests / second, but the maximum I was able to get was 125 requests / second. I reached out to support. Among other possible reasons, one of the things they mentioned was this:
Your client may be making more requests than log entries, abstracting multiple requests into a single log entry.
I am getting rate limited as soon as I try for 126 requests / second. I have configured the rclone parameters (to the best of my knowledge) to make sure that I am only sending 126 transactions / second (and each transaction is just a single PUT request). The --dump=headers logs also seem to show only one request and one response for each file.
This is probably due to some other limit on Linode's side, but I just wanted to confirm that no additional requests are being sent beyond the one I see in the logs. It would make sense if 6 requests were being made for each file upload (125 * 6 = 750).
Run the command 'rclone version' and share the full output of the command.
rclone v1.58.1
- os/version: Microsoft Windows 11 Home Single Language 21H2 (64 bit)
- os/kernel: 10.0.22000.675 (x86_64)
- os/type: windows
- os/arch: amd64
- go/version: go1.17.9
- go/linking: dynamic
- go/tags: cmount
Which cloud storage system are you using? (eg Google Drive)
Linode Object Storage (s3 compatible).
The command you were trying to run (eg rclone copy /tmp remote:tmp)
This may be due to how your client is connecting to our Object Storage system. Our Object Storage clusters do have multiple endpoints as you can see here:
If rclone's behavior as a client is to connect to one IP address of the six in the cluster instead of balancing the load across all endpoints, it is likely limited to 1/6th of the total possible rate limit for a bucket in that cluster (750/6=125).
I'm not sure what rclone's behaviour will be here. It will probably use just one of the IPs for the TTL of the DNS entry, but I'm not sure. I think that is how most programs work.
Can you check with netstat -tnp after rclone has been running for a bit? That will show which IPs rclone is connecting to.
Rclone doesn't really pick an IP to connect to; that's done by the local resolver on the system.
Generally, it'll get one IP and stick with it unless you do something to adjust the behavior. Windows will keep one IP cached for a period of time to reduce lookups.
This is because rclone is using persistent connections (which speed up HTTP transactions by needing fewer round trips). Rclone should pick a new IP address every 15 minutes (which is how long the DNS entry takes to expire).
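To illustrate why persistent connections pin a client to one endpoint, here is a minimal self-contained sketch (my own illustration, not rclone's code) using a throwaway local HTTP server: with keep-alive, several requests travel over one TCP socket, so they all reach whichever single IP was chosen at connect time.

```python
import http.client
import http.server
import threading

class Handler(http.server.BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # HTTP/1.1 keeps the connection alive by default

    def do_GET(self):
        self.send_response(200)
        self.send_header("Content-Length", "2")
        self.end_headers()
        self.wfile.write(b"ok")

    def log_message(self, *args):
        pass  # silence per-request logging

# Throwaway server on an ephemeral port, standing in for one cluster endpoint.
server = http.server.ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()

conn = http.client.HTTPConnection("127.0.0.1", server.server_address[1])
local_ports = []
for _ in range(3):
    conn.request("GET", "/")
    conn.getresponse().read()
    # The local ephemeral port identifies the TCP socket in use.
    local_ports.append(conn.sock.getsockname()[1])

# All three requests reused one socket, hence one remote IP.
reused_one_connection = len(set(local_ports)) == 1
conn.close()
server.shutdown()
```

With keep-alive disabled (or a fresh connection per request), each request would get a new socket and a fresh chance at a different resolved IP, which is the trade-off --disable-http-keep-alives makes.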
I will test this out and update as soon as possible.
I was able to create a rudimentary Python script that cycles through the 6 IP addresses. I was able to build an upload script using asyncio & aiohttp to achieve upload speeds greater than 325 files / second. I was still not able to hit the advertised limit of 750 requests / second due to my AWS4-HMAC-SHA256 implementation, the TCP concurrent connections limit, etc. But it did confirm that a round-robin approach gives upload speeds greater than 125 requests / second.
import socket
from itertools import cycle

# Resolve the cluster hostname to all of its A records. Restricting to
# SOCK_STREAM avoids duplicate entries (one per socket type) in the result.
ais = socket.getaddrinfo("us-southeast-1.linodeobjects.com", 0, type=socket.SOCK_STREAM)
linode_cluster_ips = [result[-1][0] for result in ais]

# next(ip_addr) will return one of the linode_cluster_ips in a round-robin manner
ip_addr = cycle(linode_cluster_ips)
...
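The round-robin selection itself can be exercised without any network access. In this sketch the TEST-NET addresses are placeholders for the six cluster IPs, and `next_target` is a hypothetical helper of mine: the idea is to connect each request to `next(ip_addr)` while still sending the real bucket hostname in the Host header, since AWS4-HMAC-SHA256 signs the Host header and the signature would break otherwise.

```python
from itertools import cycle

# Placeholder endpoint IPs (TEST-NET-1 range), standing in for the six
# addresses that getaddrinfo returns for the cluster hostname.
linode_cluster_ips = [f"192.0.2.{n}" for n in range(1, 7)]
ip_addr = cycle(linode_cluster_ips)

def next_target(host="us-southeast-1.linodeobjects.com"):
    # Connect to the next IP in round-robin order, but keep the real
    # hostname for the Host header (and the request signature).
    return next(ip_addr), host

# Twelve requests cover each of the six endpoints exactly twice.
picks = [next_target()[0] for _ in range(12)]
```

Spreading consecutive requests over all six endpoints is what lifts the per-IP cap of roughly 750/6 = 125 requests / second.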
Sorry for the delay. I was able to get 164-170 requests / second (not consistent) with --disable-http-keep-alives. Above that I get a lot of errors similar to