Global --transfers across multiple Mounts

What is the problem you are having with rclone?

I'd like to know if there is a way to set a global transfer limit across multiple rclone instances.
I am constantly uploading varying numbers of files to seven different mounts at the same time, which causes me to run into the API rate limit very quickly. I was wondering whether --transfers can be set globally, but I don't think this is possible with rclone's own means, is it?

The result would be a queue of transfers instead of simultaneous uploads (a centralised upload queue), which would not trigger the OneDrive API limit as quickly.

Run the command 'rclone version' and share the full output of the command.

rclone v1.66.0-beta.7672.6e4dd2ab9

  • os/version: ubuntu 22.04 (64 bit)
  • os/kernel: 5.15.0-92-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.22rc2
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

OneDrive Business

The command you were trying to run (eg rclone copy /tmp remote:tmp)

[Unit]
Description=rclone: Remote FUSE filesystem for cloud storage config %i
Documentation=man:rclone(1)
After=network-online.target
Wants=network-online.target
AssertPathIsDirectory=%h/mnt/%i
StartLimitInterval=200
StartLimitBurst=5

[Service]
Type=notify
Environment="RCLONE_CONFIG_PASS=password"
ExecStart= \
  /usr/bin/rclone mount \
    --config=%h/.config/rclone/rclone.conf \
    --log-level DEBUG \
    --log-file /root/logs/rclone/%i.log \
    --umask 022 \
    --vfs-cache-mode full \
    --allow-other \
    --bind 0.0.0.0 \
    --no-modtime \
    --buffer-size 32M \
    --cache-dir /root/rclone/cache \
    --no-checksum \
    --disable-http2 \
    --vfs-fast-fingerprint \
    --ignore-checksum \
    --no-check-certificate \
    --checkers 1 \
    --tpslimit 3 \
    --transfers 1 \
    --bwlimit-file 50M:400M \
    --low-level-retries 1 \
    --onedrive-no-versions \
    --onedrive-hash-type none \
    --onedrive-chunk-size 250M \
    --onedrive-delta \
    --dir-cache-time 9999h \
    --vfs-refresh \
    --vfs-cache-max-age 9999h \
    --vfs-read-chunk-size 100M \
    --poll-interval 1m \
    --ignore-size \
    --user-agent "ISV|testestest|1ddsds7fe-53ds4cc1-9457-2a8dsdsb2sdsc" \
    --vfs-cache-min-free-space 90G \
    %i: %h/mnt/%i
ExecStop=/bin/fusermount -u %h/mnt/%i
Restart=always
RestartSec=30

[Install]
WantedBy=default.target

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[testteam4]
type = onedrive
token = XXX
drive_id = XXX
drive_type = documentLibrary

[testteam4_privat]
type = crypt
remote = testteam4:privat
password = XXX
password2 = XXX

[testteam5]
type = onedrive
token = XXX
drive_id = XXX
drive_type = documentLibrary

[testteam5_privat]
type = crypt
remote = testteam5:privat
password = XXX
password2 = XXX

[team7]
type = onedrive
token = XXX
drive_id = XXX
drive_type = documentLibrary

[team7_privat]
type = crypt
remote = team7:privat
password = XXX
password2 = XXX
filename_encoding = base32768
suffix = none


Create one mount from all your remotes using the combine remote, and you will end up with a single mount like this:

$ rclone mount combine_remote: /mount/point

# combined structure
/mount/point
       testteam4_privat
       testteam5_privat
       ...
       team7_privat

All the limits you set will then be shared across all remotes.
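
For anyone reading along, here is a minimal sketch of what that combine remote could look like in rclone.conf, based on the remotes posted above. The names on the left of each = are just the directory names that will appear in the mount and can be anything; only the three _privat remotes shown in the thread are listed, not all seven:

[combine_remote]
type = combine
upstreams = testteam4_privat=testteam4_privat: testteam5_privat=testteam5_privat: team7_privat=team7_privat:

$ rclone mount combine_remote: /mount/point --transfers 1 --tpslimit 3 --checkers 1

Because all uploads now go through one rclone process and one mount, the single --transfers / --tpslimit setting applies to everything at once instead of once per mount.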


Made my day, thanks!


The combine remote is certainly the easiest approach. But if you want even more control, you could use the rc interface to start a single rclone server and then tell it to create the mounts. I am pretty sure --transfers is then shared.
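
Untested, but something along these lines should work, assuming the mount points already exist and that leaving the rc API unauthenticated on localhost is acceptable (otherwise set --rc-user / --rc-pass instead of --rc-no-auth):

# one rclone daemon holds the global limits for everything it does
$ rclone rcd --rc-no-auth --rc-addr 127.0.0.1:5572 --transfers 1 --tpslimit 3 --checkers 1

# ask that single daemon to create each mount over the remote control API
$ rclone rc mount/mount fs=testteam4_privat: mountPoint=/root/mnt/testteam4
$ rclone rc mount/mount fs=testteam5_privat: mountPoint=/root/mnt/testteam5
$ rclone rc mount/mount fs=team7_privat: mountPoint=/root/mnt/team7

# a mount can be removed again with mount/unmount
$ rclone rc mount/unmount mountPoint=/root/mnt/team7

VFS and mount options (cache mode, allow-other, etc.) can be passed to mount/mount via its vfsOpt and mountOpt parameters. The idea is that, since all mounts live inside the same process, the limits apply to that one process rather than to seven separate ones.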
