Rclone SFTP mount - high CPU usage

What is the problem you are having with rclone?

Using rclone with an SFTP mount to an external storage server. The CPU usage is very high: the average CPU load is 50-60% permanently (the high load is on my external storage server). My mount is only used for reading.

CPU: Single [ Xeon Gold 6150 ]
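
For what it's worth, most of the CPU cost of an SFTP mount is typically the SSH encryption on the storage server's side. A minimal sketch for confirming the load is in sshd rather than in rclone itself (pidstat comes from the sysstat package, which may need installing):

# on the storage server: list the top CPU consumers once
top -b -n 1 -o %CPU | head -n 15

# per-process CPU of the sshd sessions the mount keeps open
pidstat -u -p "$(pgrep -d, -x sshd)" 5 3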

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.0

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.4.0-167-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.21.4
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

A storage server; the storage is mounted on an external server.

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[localstorage]
type = sftp
host = XXX
pass = XXX
shell_type = unix
md5sum_command = none
sha1sum_command = none
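
Since every byte read through the mount is encrypted by sshd on the storage server, one hedged knob worth trying is the sftp backend's ciphers option (available since rclone v1.61). The list below is only an illustration; at least one entry must match the server's sshd_config:

[localstorage]
type = sftp
host = XXX
pass = XXX
shell_type = unix
md5sum_command = none
sha1sum_command = none
# assumption: prefer AES modes that use hardware AES-NI; adjust to
# whatever the server's sshd actually offers
ciphers = aes128-gcm@openssh.com aes128-ctr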


Rclone systemd service file

[Unit]
Description=RClone Service SFTP
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
Environment=RCLONE_CONFIG=/root/.config/rclone/rclone.conf
RestartSec=5
ExecStart=/usr/bin/rclone mount localstorage:/media/localstorage /media/localstorage \
# This is for allowing users other than the user running rclone access to the mount
--allow-other \
# SFTP cannot poll for changes, so cached directory listings are only refreshed after this time
--dir-cache-time 1h \
# Log file location
--log-file /media/rclonelocal.log \
# Set the log level
--log-level INFO \
# This is setting the file permission on the mount to user and group have the same access and other can read
--umask 002 \
# This sets up the remote control daemon so you can issue rc commands locally
--rc \
# Bind the remote control to a custom local port (the default is localhost:5572)
--rc-addr 127.0.0.1:5574 \
# no-auth is used as no one else uses my server and it is not a shared seedbox
--rc-no-auth \
# The local disk used for caching
--cache-dir=/media/cache/ \
--vfs-cache-mode full \
# This limits the cache size to the value below
--vfs-cache-max-size 880G \
# Speed up the reading: Use fast (less accurate) fingerprints for change detection
#--vfs-fast-fingerprint \
# Set chunk size
#--vfs-read-chunk-size 16M \
# Wait before uploading
#--vfs-write-back 1m \
# Evict files from the cache once they are older than this; when the size limit is reached, the oldest files are removed first
--vfs-cache-max-age 5h \
# Disable HTTP2
--disable-http2
#--sftp-idle-timeout 60m \
# Set the tpslimit
#--tpslimit 12 \
# Set the tpslimit-burst
#--tpslimit-burst 0

ExecStop=/bin/fusermount -uz /media/localstorage
ExecStartPost=/usr/bin/rclone rc vfs/refresh recursive=true --url 127.0.0.1:5574 _async=true
Restart=on-failure


[Install]
WantedBy=multi-user.target
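
A short usage sketch, assuming the unit above is saved under a hypothetical name like /etc/systemd/system/rclone-localstorage.service:

systemctl daemon-reload
systemctl enable --now rclone-localstorage.service

# the --rc flag above makes the running mount queryable, so transfer
# activity can be watched without touching the mount itself
rclone rc core/stats --url http://127.0.0.1:5574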

Is there maybe a better way to mount files from another server than SFTP?

rclone supports many protocols; simply choose another one, perhaps webdav

rclone itself can act as a server using rclone serve

thank you for your advice.

My Setup is like this:

Server 1 (Storage) -> Server 2 (mounted Storage) -> Server 3 (mounted Storage, Streaming)

Rclone serve is usable from Server 2 to 3, right? Because I need to choose a remote there.

correct. https://rclone.org/commands/rclone_serve/

on server3, you could run something like
rclone serve webdav /path/to/files --read-only -vv

on server2:
--- create a webdav remote pointing to server3
--- run rclone mount (see the sketch below)
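
A minimal sketch of those two steps; the remote name stream3 and the mount path are placeholders, and --addr is added because rclone serve webdav binds to 127.0.0.1:8080 by default, which server2 could not reach:

# on server3: expose the files on all interfaces, read-only
rclone serve webdav /path/to/files --read-only --addr :8080

# on server2: create the webdav remote, then mount it
rclone config create stream3 webdav url http://server3:8080 vendor other
rclone mount stream3: /mnt/stream3 --read-only --vfs-cache-mode full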

Server 2 acts like a load balancer; I will place a few more of them later on. And the last one, Server 3, has multiple Gbps of throughput, so it can pull at high speed from a couple of load-balance servers. Just for your understanding.

On Server 3 it needs to be mounted as well.

My Server 2 also hit 100% CPU usage after 1-2 days. So there I need to start with webdav, or maybe s3 will have better performance; what do you think?

example:
Storage Server --SFTP--> Loadbalance Server (s3 mount via serve) --> Streaming Server (s3 mount)
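
rclone v1.65 ships an experimental rclone serve s3, so that chain is possible; a hedged sketch with placeholder host names, port, and keys:

# on the loadbalance server: re-export the SFTP remote over S3
rclone serve s3 localstorage:/media/localstorage --addr :8080 --auth-key ACCESS_KEY,SECRET_KEY

# on the streaming server: an s3 remote pointing at the loadbalancer
[lb-s3]
type = s3
provider = Other
endpoint = http://loadbalance-server:8080
access_key_id = ACCESS_KEY
secret_access_key = SECRET_KEY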

with server3, the files to be streamed are local, so why do you need rclone mount?

Let's say Server 3 is Plex.
So the files need to be mounted there.
