High memory usage causing system instability

rclone version: v1.49.5, OS: Linux 64-bit

rclone command:
rclone rcd --rc-htpasswd=XXX --rc-addr=[IP]:[PORT] --checkers=100 --transfers=100 --rc-cert=XXX --rc-key=XXX --log-file=[FILEPATH]\rclone-log --log-level=INFO --update --use-server-modtime

rclone config:
rclone rc config/create --json '{"name":"production","type":"swift","parameters":{"auth_version":"2","auth":"[LINK]","endpoint_type":"public","env_auth":"false","key":"[PASSWORD]","tenant":"[TENANT]","user":"[USER]"}}' --rc-addr=[REMOTEADDR] --no-check-certificate

The log file just shows completed file transfers before the messages below, where, despite an IP being specified with --rc-addr, rclone reports serving on a blank IP ([::]):
2020/01/27 12:16:11 INFO : Using "[PASSWORDFILE]" as htpasswd storage
2020/01/27 12:16:11 NOTICE: Serving remote control on https://[::]:5572/

Hi all,

I've got an issue with rclone taking up a significant amount of memory (>60%) on one of our machines.

What we're doing is putting rclone on a number of local PCs and setting up the rclone rcd command as a service so it's always running in the background, then sending commands over the remote control API to have it copy data to a SwiftStack object store.
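For context, the "sending over commands" step can be done with rclone's rc subcommand. A minimal sketch, assuming the hypothetical remote name "production" and placeholder paths (the actual source/destination paths and auth flags in our setup differ):

```shell
# Ask the running rcd daemon to copy a local directory to the object store.
# sync/copy is the rc equivalent of "rclone copy"; srcFs/dstFs are its
# required parameters. [IP]:[PORT] and credentials are placeholders.
rclone rc sync/copy \
    srcFs=/local/data \
    dstFs=production:mybucket \
    --rc-addr=[IP]:[PORT] --rc-user=[USER] --rc-pass=[PASSWORD]
```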

One of our machines has run into memory issues. We suspect that it is due to the high checkers/transfers flags. Would this be a good guess or is there something else we should be aware of?

Thanks very much.

That is an old version of rclone.
Can you update and test again?

That's because you are using such a high number of checkers and transfers.

You're on the right track. The relevant flag here is:

--buffer-size SizeSuffix               In memory buffer size when reading files for each --transfer. (default 16M)
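A rough back-of-envelope calculation shows why this matters. Assuming (as a simplification) that the full --buffer-size is allocated for each in-flight transfer, read buffers alone scale linearly with --transfers:

```python
def buffer_memory_mib(transfers: int, buffer_size_mib: int = 16) -> int:
    """Worst-case memory used by rclone's read buffers alone, in MiB,
    assuming one full buffer per concurrent transfer (a simplification:
    real usage also depends on checkers, chunking, and the backend)."""
    return transfers * buffer_size_mib

# With the default --transfers=4, buffers stay small:
print(buffer_memory_mib(4))    # 64 MiB

# With --transfers=100, buffers alone can approach ~1.6 GiB:
print(buffer_memory_mib(100))  # 1600 MiB
```

So dropping --transfers back toward the defaults shrinks this term of the memory footprint proportionally.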

Note that if you are running with the remote control enabled, you can profile rclone's memory use as described here: https://rclone.org/rc/#debugging-memory-use

That will show exactly what is using the memory.
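A quick sketch of what that looks like in practice, with [IP]:[PORT] and credentials as placeholders for your --rc-addr and htpasswd settings:

```shell
# Dump the Go runtime's memory statistics from the running daemon:
rclone rc core/memstats --rc-addr=[IP]:[PORT] --rc-user=[USER] --rc-pass=[PASSWORD]

# Or, with the Go toolchain installed, inspect the heap profile that
# the rc server exposes on its pprof endpoint:
go tool pprof -text http://[IP]:[PORT]/debug/pprof/heap
```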

Thanks everyone. I'm going to reduce the transfers/checkers flags and see if that helps.
