Saturating download bandwidth when running an s3 remote with NFS

What is the problem you are having with rclone?

When serving an rclone-mounted directory over a local NFS share, my download bandwidth is being saturated by rclone. (I assume NFS is likely the culprit here, but I am hoping someone might be able to point me in the right direction.)
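For context, the directory is exported roughly like this (the mount point and subnet below are illustrative, not my exact values; a FUSE mount needs an explicit fsid to be exportable over NFS):

# /etc/exports (mount point and subnet are examples)
/mnt/rclone 192.168.1.0/24(ro,fsid=1,no_subtree_check)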

I have since applied --bwlimit to the remote to prevent it saturating the line, as it was constantly pulling a full 1 Gbps when left unlimited.
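Since the mount already exposes the remote control API on localhost:5584, the limit can also be adjusted at runtime without remounting (the 10M rate below is just an example value):

# Set or change the bandwidth limit on the running mount
rclone rc --url http://localhost:5584/ core/bwlimit rate=10M

# Remove the limit again
rclone rc --url http://localhost:5584/ core/bwlimit rate=off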

What is your rclone version (output from rclone version)

1.55.1

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04

Which cloud storage system are you using? (eg Google Drive)

s3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount \
  --fast-list \
  --allow-other \
  --allow-non-empty \
  --rc \
  --rc-addr=localhost:5584 \
  --rc-no-auth \
  --dir-cache-time=168h \
  --timeout=10m \
  --umask=002 \
  --syslog \
  -v \
  --buffer-size=32M \
  --vfs-cache-mode=full \
  --vfs-cache-max-age=24h \
  --vfs-cache-max-size=50G \
  --vfs-read-ahead=128M \
  --vfs-read-chunk-size=32M \
  --vfs-read-chunk-size-limit=2048M \
  --s3-chunk-size=32M \
  --s3-disable-http2 \
  --async-read=true
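With --rc enabled, the transfer totals reported by the mount itself can be checked at any time (a sketch using the core/stats endpoint and the --rc-addr above):

# Show bytes transferred and current speed for the running mount
rclone rc --url http://localhost:5584/ core/stats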

The rclone config contents with secrets removed.

type = s3
provider = Ceph
env_auth = false
access_key_id = redacted
secret_access_key = redacted
endpoint = redacted

So what is your question here?

I guess that would have been helpful 🙂

I am trying to figure out what exactly is causing rclone to saturate my bandwidth. Other than NFS, I do not have any program polling the directory that might cause this, and an lsof of the mounted directory shows zero entries.
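For the record, that check was along these lines (mount point illustrative):

# List any process with files open under the mount
sudo lsof +D /mnt/rclone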

Run with a log and it will show you. Rclone doesn't do anything by itself, so something must be requesting the data.
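For example, bump the existing -v to -vv (debug level); since the mount already uses --syslog, every open and read will then show up there (commands assume a stock Ubuntu setup):

# Follow rclone's debug output to see which paths are being read
journalctl -f | grep rclone
# or on a classic syslog setup:
tail -f /var/log/syslog | grep rclone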
