How to optimize mount flags to reduce R2 (S3) API costs when used as the Nextcloud data directory?

Hi, thank you very much for this wonderful project and it's great to see rclone grow in the storage industry :slight_smile:

What is the problem you are having with rclone?

I'm successfully using rclone to mount Cloudflare R2 as the data directory for Nextcloud, and I would like to know the appropriate flags to reduce R2 Class A API transactions (the ones that mutate state). The mount uses a crypt remote layered on top of R2.

Would the following flags work with mount and reduce the following R2 operations?

--fast-list
--no-checksum

R2 Class A operations:

ListBuckets, PutBucket, ListObjects, PutObject, CopyObject, CompleteMultipartUpload, CreateMultipartUpload, ListMultipartUploads, UploadPart, UploadPartCopy and PutBucketEncryption.
rclone version
rclone v1.60.0
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-131-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.2
- go/linking: static
- go/tags: none

My rclone mount systemd unit file:
[Unit]
Description=rclone mount for nextcloud data dir
After=multi-user.target

[Service]
Type=simple
User=root

ExecStart=/usr/bin/rclone mount coscrypt: /opt/nextcloud/data \
          --config "/opt/rclone/rclone.conf" \
          --log-level INFO --log-file "/opt/rclone/mount-nextcloud.log" \
          --uid 33 --gid 33 --umask 002 \
          --dir-perms 770 --file-perms 660 \
          --allow-other --allow-non-empty \
          --no-modtime \
          --attr-timeout 10s \
          --cache-dir "/opt/rclone/cache" \
          --vfs-cache-mode full \
          --vfs-read-chunk-size 0 \
          --vfs-cache-max-size 5G \
          --vfs-fast-fingerprint \
          --dir-cache-time 5m0s \
          --buffer-size 32Mi --vfs-read-ahead 128M --transfers 16

ExecStop=/usr/bin/fusermount -u /opt/nextcloud/data
Restart=on-abort

[Install]
WantedBy=default.target

rclone config:
[r2]
type = s3
provider = Cloudflare
access_key_id = accesskey
secret_access_key = secretkey
region = auto
endpoint = https://my-r2-account-id.r2.cloudflarestorage.com

[coscrypt]
type = crypt
remote = r2:bucket-name
password = secret
password2 = secret2

Sample log output:
2022/12/10 09:17:01 INFO  : appdata_ocdi9rtldqz7/appstore/apps.json: vfs cache: queuing for upload in 5s
2022/12/10 09:17:07 INFO  : appdata_ocdi9rtldqz7/appstore/apps.json: Copied (replaced existing)
2022/12/10 09:17:07 INFO  : appdata_ocdi9rtldqz7/appstore/apps.json: vfs cache: upload succeeded try #1
2022/12/10 09:17:20 INFO  : vfs cache: cleaned: objects 2 (was 2) in use 0, to upload 0, uploading 0, total size 2.302Mi (was 2.302Mi)
2022/12/10 09:18:20 INFO  : vfs cache: cleaned: objects 2 (was 2) in use 0, to upload 0, uploading 0, total size 2.302Mi (was 2.302Mi)

Thank you!

Hi Jay,

My experience with bucket-based remotes is very limited, but I think any answer is better than no answer, so here is what I can see (with some room for misunderstandings and mistakes on my side).

I generally find your mount command very reasonable, so I will only comment where I see possibilities for improvement.

No, --fast-list has no effect on mounts, and --no-checksum doesn't exist; you are perhaps thinking of --ignore-checksum, which I don't think triggers a Class A operation. Even if it did, I would strongly advise against using it. Here is a clip from the docs:

You should only use it if ... you are sure you might want to transfer potentially corrupted data.

Not so long ago we saw a corruption issue with a major storage service where rclone users were saved by this default checksumming.

ListBuckets, PutBucket: Used to list/check buckets. Your crypt is locked to a single existing bucket, so I would set --s3-no-check-bucket.
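
If you would rather keep it out of the unit file, I believe the equivalent backend option can go straight into your [r2] section instead of being passed as a flag. A sketch based on your config above, untested on my side:

[r2]
type = s3
provider = Cloudflare
access_key_id = accesskey
secret_access_key = secretkey
region = auto
endpoint = https://my-r2-account-id.r2.cloudflarestorage.com
no_check_bucket = true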

ListObjects: Used to refresh the directory listings. Usage can probably be reduced by increasing --dir-cache-time; if the mount is the only thing writing to the remote, then try 24h or so.

PutObject, CopyObject: Used when you modify/upload files. Usage can perhaps be reduced by increasing --vfs-write-back to 1h or so, depending on your file update patterns.

CreateMultipartUpload, UploadPart, UploadPartCopy, CompleteMultipartUpload, ListMultipartUploads: Used when you modify/upload files. Usage can probably be reduced by increasing --s3-chunk-size. I'm not sure whether multipart uploads are used by mounts, but if they are, then increasing --s3-upload-cutoff will also reduce usage (by using a single PutObject instead). See the combined example below.

PutBucketEncryption: Not used.
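
Putting those together, the relevant part of your ExecStart could look something like this. Note that the 24h, 1h, 64M and 256M values are only illustrative guesses on my part, not tested recommendations, so tune them to your own usage:

/usr/bin/rclone mount coscrypt: /opt/nextcloud/data \
          --config "/opt/rclone/rclone.conf" \
          --s3-no-check-bucket \
          --dir-cache-time 24h \
          --vfs-write-back 1h \
          --s3-chunk-size 64M \
          --s3-upload-cutoff 256M \
          ...plus the rest of your existing flags...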

Bonus tip: You can use --dump headers to track the S3 API calls from rclone.
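
For example, something like this (using your existing config) will log the HTTP headers of every S3 request for a simple listing; the same flag can also be added temporarily to the mount command while you watch the log:

rclone lsd r2:bucket-name --dump headers --config "/opt/rclone/rclone.conf"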


There is a section in the s3 backend docs about reducing costs.

For a mount, you want to use --use-server-modtime to avoid lots of HEAD requests (class B operations).
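
For example, added to the ExecStart above (everything else stays as it is):

/usr/bin/rclone mount coscrypt: /opt/nextcloud/data \
          --use-server-modtime \
          ...rest of your existing flags...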

Thank you, Ole, for the detailed response! Thanks, Nick.

So just to confirm: can I use s3 flags with the mount command?

Thanks, you are welcome.

Yes, you can.
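
If it is more convenient in a systemd unit, backend options can also be set as environment variables instead of flags. A sketch, assuming --s3-no-check-bucket is the option you want (its environment form is RCLONE_S3_NO_CHECK_BUCKET):

[Service]
Environment=RCLONE_S3_NO_CHECK_BUCKET=true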

