Rclone sync Webdav Nextcloud CPU max during checks

What is the problem you are having with rclone?

I'm worried about the CPU consumption on the destination host.
While monitoring both instances during an rclone sync to evaluate timings and performance, I noticed that during the file-check phase of a sync the destination's CPU consumption is completely maxed out, and this can last several minutes. During file transfers alone, the CPU usage is much healthier.
I'm concerned about putting this sync in a cron job.
Any thoughts?
Should I worry?
Can file checks be throttled in any way?
Thanks for your admirable work on this now legendary tool :slight_smile:
Kind regards.

What is your rclone version (output from rclone version)

Which cloud storage system are you using? (eg Google Drive)

Local Linux ext4
Remote Nextcloud Webdav

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync {local} {remote} -P -v
8 TB of content across roughly 700,000 files.

The rclone config contents with secrets removed.


A log from the command with the -vv flag


Not sure what you are running since you included so few details.
The rclone version output would help, as it tells us both the version you are running and the OS you are on.

Is that the command you are running exactly?
What CPU are you running?
You are seeing 100% CPU from rclone?

What does your log look like? More details help; that's why we have the template, but folks really fight against it for some unknown reason. It's like they don't want to be helped :frowning:

I would try reducing --checkers to 4, 2 or 1 (the default is 8), which will reduce the parallelism of the checks.

This may give your server some breathing room and better overall response times.
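For example, applied to the command from the original post (the paths here are placeholders), a reduced-parallelism run might look like:

```shell
# Same sync as before, but with only 2 concurrent checkers instead of the
# default 8, spreading the check load on the WebDAV server over time.
rclone sync /local/path remote:path -P -v --checkers 2
```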

1 Like

Spot on!
Using --checkers 4 brought the remote host's CPU use down to high but perfectly healthy levels.
Thank you so much.

1 Like

Hi @Animosity022,

My apologies, you are perfectly right to point this out, and I totally understand your frustration. I admit I wrote this ticket after a long day and hadn't realized how little information I had shared.

While I need to make sure I do not share sensitive information, the command I have been running is the one I posted:

rclone sync {local ext4 folder location} {remote Webdav location} -P -v

The CPU I was concerned about was the one on the remote host, a 4-core/8-thread Intel Xeon at 2.4 GHz on a bare-metal dedicated server (0.518 ms latency from the source instance). When launching the sync command, all 8 threads on the remote server would completely max out at 100% (seen through htop). On the source instance, the CPU levels were of no concern, healthy and acceptable.

Sharing the logs would expose all the details about the content I am syncing, so they have to be censored. For the command posted above, the end of the log file shows:

2021/11/28 13:06:15 DEBUG : 20 go routines active
2021/11/28 13:06:15 DEBUG : rclone: Version "v1.50.2" finishing with parameters ["rclone" "sync" "/local-folder-censored" "remote:/remote-folder-censored" "-P" "-v" "-vv"]

And during the sync here is what the remote host htop looked like:

I have now used the --checkers setting suggested by Ole, and it seems to completely resolve the issue I was experiencing.
Here are the same results with --checkers 2:

2021/11/28 12:56:12 DEBUG : 7 go routines active
2021/11/28 12:56:12 DEBUG : rclone: Version "v1.50.2" finishing with parameters ["rclone" "sync" "/local-folder-censored" "remote:/remote-folder-censored" "-P" "-v" "-vv" "--checkers" "2"]

I believe this setting perfectly answers my concerns, making sure the remote server remains capable of handling other processes that may be running during the sync. Especially since, with 8 TB across 700,000 files, the sync is expected to have only a very small number of files to upload/delete each time it runs, and will be doing checks 99% of the time.
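Since the plan is to run this from cron, here is a minimal sketch of a crontab entry under those assumptions (paths, schedule and log location are placeholders; -P is dropped since there is no interactive terminal to show progress on):

```
# Run the sync nightly at 03:00 with reduced check parallelism and a log file.
0 3 * * * rclone sync /local/path remote:path --checkers 2 --log-level INFO --log-file /var/log/rclone-sync.log
```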

Any complementary recommendations are of course welcome :slight_smile:

Have a nice day.
Best regards.

That's a very old version and it should be updated.

That looks like htop output, are you sure it's rclone taking up the CPU?

Maxing my gigabit connections with defaults I tend to get:

I use prometheus/process exporter so I can get some very specific history on things I run.

Also, you can use nice to run rclone at a lower priority if your CPU can't keep up; that way it still runs at full speed when nothing else needs the CPU.
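A hedged sketch of that idea (the rclone line is commented out and uses placeholder paths; niceness 19 is the lowest CPU scheduling priority):

```shell
# On the source machine, wrap the sync with nice so other workloads win
# the CPU whenever they need it (uncomment and adjust paths to use):
#   nice -n 19 rclone sync /local/path remote:path -P -v --checkers 2
# The wrapped process inherits the niceness, which you can verify with:
nice -n 19 sh -c 'nice'   # prints 19
```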


Hi @Animosity022

Thanks for your feedback.

So yes I have now upgraded to rclone v1.57.0 using the installation bash script. Thanks for pointing it out.

(I had installed rclone on the source machine a long time ago using apt, and the default sources kept saying it was up to date, should have (re)read the rclone documentation :wink: )

No, it's not rclone itself that is taking all that CPU. The CPU usage I have been referring to is on the remote destination machine (so using nice on rclone on the source machine probably won't help here). This remote destination is being accessed over the WebDAV protocol, and the WebDAV server on that machine was the one maxing out the CPU during the file checks. Using --checkers 2 or 4 effectively lowers the CPU usage on that remote machine to acceptable levels.

Of this I am absolutely certain, because it is a fresh install, tuned and configured very precisely with optimal settings. I personally use Elasticsearch/Kibana to monitor my instances, but this one isn't set up for that yet (again, a fresh and clean setup). Using htop is sufficient for the moment, since absolutely no other components are running on the machine other than this WebDAV server, and it is 100% idle outside of these rclone transfers.

The local machine running rclone is healthy: load is good, and CPU/RAM resource consumption is fine.

Thank you and have a nice day !

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.