Is there any reason why rclone should bring down a server?

I’m running rclone to transfer about 2 GB of files from a local directory on a dedicated server to an Amazon S3 bucket. Each time I start the command, about 30 seconds in, the network connection drops and I get thrown off, and the server’s network stays unstable in all directions for about ten minutes after that.

I’m just running a simple command-line tool, not using any kind of privileged user. How can that happen? I can SFTP massive multi-gigabyte files off the server without any issues, so what is this tool doing that the server and/or network dislikes so much? I’m following this up with the hosting company too, as I’m not sure whether this is a server issue or a wider network issue; perhaps too many connections are triggering some kind of malicious-activity detector and throwing up a firewall around the server. Is the S3 protocol particularly heavy in something it does?

Anyone else had this experience?

Probably memory. Decrease the transfers/checkers, or the chunk sizes/buffering/etc.

chunk sizes - is that an rclone thing, or a server thing?

rclone. It’s remote-specific. Depending on what you are doing, you may be using lots of memory; rclone has to hold some data in memory while it’s doing each transfer.

for example:
--b2-chunk-size int   Upload chunk size. Must fit in memory. (default 96M)
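
Rough rule of thumb: in-flight upload memory is on the order of transfers × chunk size (times any per-file upload concurrency the remote uses). As a sketch only, with hypothetical paths and remote name, and assuming an S3 remote, something like this keeps the footprint small:

rclone copy /local/source/dir remote:bucket/dir --transfers 2 --checkers 2 --s3-chunk-size 8M

(--s3-chunk-size is the S3 counterpart of the B2 flag above; 8M is just an illustrative value.)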

Thanks. It seems there is no one place for all the documentation. Some options are only listed on the command itself (as far as I can see).

These options ran to completion without throwing me off:

rclone --checkers 2 --b2-chunk-size 16 --bwlimit 5M copyto /my/source/dir remotename:bucketname/remote/dir

Thanks again.

https://rclone.org/ ?

I would just start with lowering checkers/transfers to the defaults, watching memory usage, and then increase and see.
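
If it helps, a quick way to watch it (a rough sketch; assumes a Linux box with procps and hypothetical paths/remote):

rclone copy /local/source/dir remote:bucket/dir &
watch -n 2 'ps -o rss,cmd -C rclone'

RSS is in KiB, so you can see directly whether lowering transfers/checkers or the chunk size brings it down.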

I’ll see if I can find what the limits are. The way the network seemed unstable for a good while after the command died and the memory was cleared still puts my suspicions on the network and whatever firewalls we may be triggering, hence keeping the bandwidth down too. So far, so good…

Just got word back from our hosting company:

On closer inspection here, I can see that it is our network security that is dropping your server from our network temporarily. For now I have temporarily disabled this. It will automatically enable again at midnight. Is that enough time for you to transfer your files over?

That confirms my suspicions: it [probably] wasn’t memory. But there are at least options to tune network access so I can stay under the radar, and play nice with other users on the network too, I guess :slight_smile:


Been informed that “storm control” at my hosting provider’s triggers at 10kppm (10 thousand packets per second). I’ll have to do some calculations to work out what that means (not sure how to do that yet), but if anyone has any experience with this, it would be helpful to know whether this translates directly to options on the rclone command. It certainly sounds like a lot of packets. Our files are all several megabytes in size, so it’s not like we need to send that many packets for the data, but what the S3 protocol does, or whatever protocols sit in the network stack, I have no idea. It’s all very low-level stuff that I feel I shouldn’t have to be dealing with at this level. I just want to copy some files to a remote location :-/

ppm would be packets per minute, wouldn’t it?
Anyway, 10kpps at 1500 bytes/packet (I guess the average size would actually be smaller?) is 15 MBytes/s, so not so stormy for most decent networks…
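
Spelling it out (assuming roughly 1500-byte packets, i.e. a full Ethernet MTU):

10,000 packets/s × 1,500 bytes/packet = 15,000,000 bytes/s ≈ 15 MBytes/s ≈ 120 Mbit/s

So to stay under a 10kpps threshold you’d want to stay under roughly 15 MBytes/s. An rclone --bwlimit of 10M (byte-based units, so about 10 MiB/s) works out to around 7,000 full-size packets per second, leaving some headroom for protocol overhead and smaller packets.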


Seems slow for “storm control”. Anyway, add a bandwidth limit of 10M and keep checkers and transfers low. rclone does the bandwidth limiting on an average, so if a lot of errors queue up it may burst higher than that. Also add --buffer-size 0, as the buffer will always download at full speed.
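
Putting that together with the earlier command, a sketch (values and names carried over from this thread, not tested here):

rclone copyto /my/source/dir remotename:bucketname/remote/dir --checkers 2 --transfers 2 --bwlimit 10M --buffer-size 0

--bwlimit caps the average transfer rate, and --buffer-size 0 turns off the read-ahead buffer that, per the note above, fills at full speed regardless of the limit.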
