Limit access to source filesystem

I’m running rclone to sync data from an Amazon EFS filesystem to another storage location, and I’d like to limit or control the speed at which rclone reads from the source filesystem. We only have so many I/O ‘credits’ on the filesystem, so we’d like to spread the rclone sync process over a longer period of time to spread out the file stats / reads.

I saw the option --cache-rps but wasn’t sure if this was really what I was looking for. Can rclone be configured to slow down its access to the source filesystem?

Not directly, but if you limit its overall bandwidth, that should in turn slow down the access:

--bwlimit
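For reference, a hypothetical invocation (the source path and remote name here are made up, not from the thread) capping rclone’s total bandwidth:

```shell
# Cap rclone's total bandwidth at 1 MiB/s; paths/remote are illustrative.
rclone sync /mnt/efs remote:backup --bwlimit 1M
```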

We’re traversing 500K files/directories but only moving a few hundred. Bandwidth limits aren’t really the issue; rather, it’s the traversal of the source filesystem. nice and ionice aren’t cutting it, because the problem isn’t competing with local processes; it’s exhausting the I/O limits on the EFS filesystem. I was looking at https://github.com/opsengine/cpulimit, but I don’t know if that’s a good idea; I saw some caveats about how it uses signals to pause and restart the application.

The other rclone flag you might find useful is --tpslimit, which limits the number of transactions per second. I think you’ll find that does pretty much what you need.
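For example, a hypothetical invocation (source path and remote name are made up for illustration) capping rclone at one transaction per second:

```shell
# Cap rclone at 1 transaction per second; paths/remote are illustrative.
rclone sync /mnt/efs remote:backup --tpslimit 1
```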

@ncw that option says it limits the HTTP transactions per second. Would this also affect the STATing of the local filesystem? Our issue isn’t HTTP requests (since it’s a sync, we only ever transfer approximately 50MB over a 2-hour period); our issue is the speed at which rclone is traversing / STATing the local filesystem.

I don’t think it will. I just tried this:

rclone mount /home/xxxx/Downloads/ x -vv --tpslimit=0.005

and then ran a find on ‘x’, and it searched at full speed. --tpslimit is an HTTP limiter.

Ah, I missed the fact that this is EFS and hence a local file system, sorry!