Rclone High CPU Consumption

What is the problem you are having with rclone?

Today we are using Rclone for file transfers from a Unix machine to Windows. This is how the Rclone transfer works for our application:

"Installed Rclone client copies the files from the source server (RedHat Linux) ----> intermediate server (RedHat Linux) ---> Rclone client installed on the Windows machine pulls the files from the intermediate server."

SSH connections allowed on **intermediate** server: 1200
CPU/memory: 4 cores, 16 GB

Problem:

During file transfer, CPU consumption hits 100% and all SSH connections on the intermediate server are closed; it does not allow/accept any new connections. Is there any tuning that needs to be done, from an Rclone perspective, on the source and destination servers where the Rclone client is installed?

Note: the intermediate server is a plain Unix machine and does not have Rclone installed on it.

What is your rclone version (output from rclone version)

rclone v1.49.5
- os/arch: linux/amd64
- go version: go1.12.10

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Source server: RHEL Server 6.10 (Rclone installed)
Intermediate server: RedHat 7.8
Destination: Windows Server 2012 (Rclone installed)

Which cloud storage system are you using? (eg Google Drive)

N/A

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy /tmp/dummy2.txt rclonelled:/test/ --timeout 20m --ignore-times --retries 5 --retries-sleep 1m -vv --config /tmp/rclone.conf

The rclone config contents with secrets removed.

[rclonelled]
type = sftp
host = ###
user = ###
key_file = <id_rsa_path>
key_use_agent = false
use_insecure_cipher = false
disable_hashcheck = false
md5sum_command = md5sum
sha1sum_command = sha1sum

[rclonehled]
type = sftp
host = ###
user = ###
key_file = <id_rsa_path>
key_use_agent = false
use_insecure_cipher = false
disable_hashcheck = false
md5sum_command = md5sum
sha1sum_command = sha1sum

Is any Rclone tuning required here?

hi,
the latest stable rclone is v1.52.3,
so please update it on all machines and test again.

https://rclone.org/downloads/

If the latest version does not solve it, which process names does top show saturating the processor?

Is it sshd, md5sum, or sha1sum instances?

Is the rclone command one of a few large transfers, or many small transfers sent as separate rclone commands?

If you log in from the first server to the second with ssh, using the sftp credentials rclone uses, do you get message-of-the-day or landscape-sysinfo type output?

For frequent, small automated sftp copies, processes triggered at login from .bashrc or similar are a great way to cane the processor.
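As a rough local illustration of that overhead, here is a sketch that only times shell start-up on the current machine, not the real sftp login on the intermediate server:

```shell
# Time a login shell (which sources /etc/profile, ~/.bash_profile, etc.)
# to see the start-up work a server repeats on every small automated login.
t0=$(date +%s%N)
bash -lc 'true'
t1=$(date +%s%N)
login_ms=$(( (t1 - t0) / 1000000 ))
echo "login shell start-up: ${login_ms} ms"
```

Multiply that per-login cost by hundreds of small transfers per minute and the CPU bill adds up quickly.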

Thanks for your reply, Edward.

However, to answer your questions:

  1. Yes, it's many small transfers sent as separate rclone commands
  2. SSH login from the first server to the second with the rclone user is not enabled or allowed

Rclone will be more efficient if you transfer all the files with one rclone invocation. Maybe using --files-from?
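A minimal sketch of that batching approach, assuming the `rclonelled:` remote and config path from earlier in the thread (the rclone line is left commented so the sketch runs without a remote; the paths are placeholders):

```shell
# Collect the pending files into a list, one relative path per line,
# then hand the whole batch to a single rclone process via --files-from.
mkdir -p /tmp/outgoing
touch /tmp/outgoing/dummy1.txt /tmp/outgoing/dummy2.txt

# Build the list of relative paths as --files-from expects
(cd /tmp/outgoing && find . -type f | sed 's|^\./||') > /tmp/filelist.txt
cat /tmp/filelist.txt

# One transfer for the whole batch instead of one rclone per file:
# rclone copy /tmp/outgoing rclonelled:/test/ \
#   --files-from /tmp/filelist.txt --config /tmp/rclone.conf -vv
```

One rclone process amortizes the SSH/SFTP connection setup and binary start-up over the whole batch, instead of paying it per file.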

Can you see what is using all the CPU with top?

I'm seeing the same pattern. I use Rclone from a batch script to transfer tar.gz files one by one.

My CPU usage also skyrockets, to the point where I see packet loss because of the high CPU.

hi,
if you had started a new post, you would have been asked for this info.
Can you supply it so we can help you?

What is the problem you are having with rclone?

What is your rclone version (output from rclone version)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Which cloud storage system are you using? (eg Google Drive)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Paste command here

The rclone config contents with secrets removed.

Paste config here

A log from the command with the -vv flag

Paste  log here

What is the problem you are having with rclone?

When I copy files with Rclone it maxes out all CPUs on a 2 vCPU VPS during the copy.

What is your rclone version (output from rclone version)

rclone v1.51.0

  • os/arch: linux/amd64
  • go version: go1.13.7

Which OS you are using and how many bits (eg Windows 7, 64 bit)

CentOS 7

Which cloud storage system are you using? (eg Google Drive)

Backblaze

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone move /home/admin/backups/ b2:/server21/

The rclone config contents with secrets removed.


[b2]
type = b2
account = ****
key = ****

A log from the command with the -vv flag

I don't get any output when running rclone -vv

That would seem to be an issue with the VPS.

you might want to tweak
--transfers
--checkers
--bwlimit

and you might want to update to the latest, v1.53.1
https://rclone.org/downloads/#script-download-and-install

I expect you are transferring big files and rclone is doing the pre-transfer sha1 check for b2.

You can disable this with the following flag, if you really don't care about large objects having checksums:

  --b2-disable-checksum   Disable checksums for large (> upload cutoff) files

Thanks @asdffdsa, your suggestions have already made a huge difference. I no longer notice a major change in CPU usage after configuring these. The transfer took almost the same time, without the performance penalty.

I set the following, in case anyone else bumps into the same issue.

--transfers=2
--checkers=4
--bwlimit=10m

The VPS we use has 8 GB RAM, 2 vCPUs and a 100 Mbit/s link.
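Putting those flags together with the earlier move command, as a sketch (flag values as reported above; the command is echoed rather than executed, since it needs the b2 remote configured):

```shell
# Tuning flags sized for a 2 vCPU / 100 Mbit/s VPS: fewer parallel
# transfers and checkers, plus a bandwidth cap, to leave CPU headroom.
FLAGS="--transfers=2 --checkers=4 --bwlimit=10m"
echo rclone move /home/admin/backups/ b2:/server21/ $FLAGS
```

Lowering --transfers and --checkers directly limits how many concurrent hashing and upload goroutines compete for the two cores.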

@ncw Thank you for that insight :grinning: - I'll look into that. I'm not quite sure if I totally want to omit checksums. But I understand this probably takes a lot of the CPU time in the transfer.

BTW I love Rclone!


Interesting to see that even though I've set --bwlimit=10m, rclone saturates the link when transferring files.

Found the issue.
The bwlimit option does not accept an = like the other options.

--bwlimit 10m

It can have = or not in my testing.

Note that 10M is 10 MBytes/s - your dial above reads Mbit/s. So if you want 10 Mbit/s, you need --bwlimit 1.25M.
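The arithmetic behind that, as a sketch (using decimal megabits; rclone's M suffix is actually 1024-based, so 1.25M is a close approximation of 10 Mbit/s rather than an exact match):

```shell
# A cap quoted in Mbit/s must be divided by 8 to get the
# bytes-per-second figure that --bwlimit expects.
cap_mbit=10                               # desired cap in Mbit/s
cap_bytes=$(( cap_mbit * 1000000 / 8 ))   # 10,000,000 bits/s / 8
echo "--bwlimit ${cap_bytes} bytes/s, i.e. roughly 1.25M"
```

The same divide-by-8 applies any time a link speed (always quoted in bits) is compared against rclone's byte-based size flags.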