Writes to the rclone mount are extremely slow / rclone slows down the system during copy

  • rclone is mounted automatically through systemd service (command below)
  • has its own unique API credentials
  • worked like this on a much slower machine
  • writes are cached
  • it's the same behaviour on a cache remote
    this is the current rclone.conf: https://pastebin.com/59wGEpAF

What is the problem you are having with rclone?

1. copying a file to the mount slows down the whole system
(the whole system is on SSDs only (specs below))
copying a file to the mount (with or without caching writes) slows down the whole server,
although CPU usage is barely near 2%
I guess it's I/O or IOPS related, but I couldn't verify that

  1. unraring a 15 GB archive takes a few seconds
  2. start copying a file to the mount
  3. unraring another 15 GB archive now takes up to 3 hours
  4. cancel the copy
  5. the unrar finishes in a few seconds
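To check my I/O guess, something like this could be run while the copy is going (a sketch on my part; it assumes the sysstat package is installed, and the commands are kept in variables so they're easy to adapt):

```shell
# Sketch: verify whether the slowdown is I/O-bound while a copy runs.
# Assumes the sysstat package (iostat/pidstat) is installed.

IOSTAT_CMD="iostat -dx 1 5"   # %util near 100 with low MB/s => IOPS-bound
PIDSTAT_CMD="pidstat -d 1 5"  # per-process disk I/O, to spot the stalled unrar

# Uncomment to actually run the diagnostics:
# $IOSTAT_CMD
# $PIDSTAT_CMD
echo "$IOSTAT_CMD"
echo "$PIDSTAT_CMD"
```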

2. slow write speed to disk
(reading is fine and Plex works with multiple users at the same time)
importing a file to the mount is painfully slow
(writes are cached)
I recently moved everything to a more powerful server, but rclone is the only program that won't work like it did on the old server.
Even if I copy a file manually to the rclone mount (cp src dst), the speed is somewhere between 8 and 15 Mbit/s.

rclone sync or rclone copy utilizes almost the whole bandwidth and does not slow down the other processes
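So as a workaround I could bypass the mount entirely for imports; a sketch of that (the paths and remote name are made up):

```shell
# Sketch: upload straight to the remote instead of cp-ing into the FUSE
# mount. SRC/DST are placeholder paths, not from my actual setup.
SRC="/data/import"
DST="gdrive:import"

CMD="rclone copy $SRC $DST -P --stats=1s"
# $CMD   # uncomment to run
echo "$CMD"
```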

Initially I transferred the config 1:1 to the new server; then I noticed the slow write speeds.

  1. I then reconfigured everything -> same issue
  2. purged the rclone installation and started from scratch -> same issue
  3. purged again and configured a cache remote ->
    now it was a bit different:
    copying a file would start at 30 to 40 MB/s and drop every second,
    but after 15 seconds (around 400 MB) it stopped for about 7 seconds (showing a speed of 0.00),
    then it continued at around 900-1200 kbit/s
    but strangely not for every file:
    every file had this issue, but at different timestamps;
    one file stopped after 5 s, another after 20 s, and the next didn't stop at all
    but instead was at 900 kbit/s from the start,
    while the next would again start at 30 MB/s, stop after 15 s, and continue at ~1 MB/s
  4. played around with different settings like chunk size, cache retention, workers, ... (I don't remember everything) -> same issue every time
  5. I noticed that I didn't have logging enabled -> enabled it
  6. tried to reproduce the above issues -> failed -> now all file transfers ran at 0.9 to 1.8 Mbit/s and I cancelled after ~30 s
  7. disabled logging -> now the behaviour was again like in step 3

here's the log https://we.tl/t-hSGkQ5PnxV

3. rclone stopped utilizing the cache folder
before I reconfigured everything, I noticed that the cache folder was not used; instead (I guess) it uploaded directly to the drive
the cache folder was set via the mount command

4. server-side move/copy seems to fail
I sometimes copy stuff to an "import" folder on the drive and then import from there into Sonarr/Radarr.
Until now that worked great: it took like 3 seconds and just moved the file.
On the new server it seems to copy the file locally, which takes forever.

What is your rclone version (output from rclone version )

rclone v1.49.2

  • os/arch: linux/amd64
  • go version: go1.12.9

the old server was running 1.42.x, so I updated rclone on the old server to see if I could reproduce the issues there.
No luck: it worked just fine, no issues at all.
(the two servers have their own API creds)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04 x64
Intel(R) Xeon(R) E5-2680 v3 @ 2.50GHz
SSD only (HPE 3PAR All-Flash)

Which cloud storage system are you using? (eg Google Drive)

Google Team Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp )

My mount command is(was):
/usr/bin/rclone mount -v --config=/root/.config/rclone/rclone.conf --allow-other --cache-writes --cache-dir=/media/rclonecache gdrive: /mnt/gdrive

I've enabled the log file and the behaviour changed (as described above):
/usr/bin/rclone mount -vv --config=/root/.config/rclone/rclone.conf --allow-other --cache-writes --cache-dir=/media/rclonecache --log-file=/var/log/rclone/rclone.log gdrive: /mnt/gdrive
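Since the mount runs from a systemd service, the unit looks roughly like this (sketched from memory; the unit name and paths are approximated, and Type=notify assumes a recent rclone):

```ini
# /etc/systemd/system/rclone-gdrive.service  (hypothetical name/paths)
[Unit]
Description=rclone mount for gdrive
After=network-online.target
Wants=network-online.target

[Service]
# Type=notify works with recent rclone versions; use Type=simple otherwise
Type=notify
ExecStart=/usr/bin/rclone mount -vv \
    --config=/root/.config/rclone/rclone.conf \
    --allow-other \
    --cache-writes \
    --cache-dir=/media/rclonecache \
    --log-file=/var/log/rclone/rclone.log \
    gdrive: /mnt/gdrive
ExecStop=/bin/fusermount -uz /mnt/gdrive
Restart=on-failure

[Install]
WantedBy=multi-user.target
```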

A log from the command with the -vv flag

thank you

I had a look through the logs and nothing obvious jumps out; I'm guessing something gets caught up in the cache backend. I personally don't write to the mount, so you might have better luck removing the cache backend from the equation. It works OK for reads, but the backend doesn't have an active maintainer, so it hasn't gotten much love recently.

I use mergerfs and an upload script to move files locally to my cloud mount. You could use the same as well and it would remove the writes from the equation.
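A rough sketch of that layout (the paths and options here are placeholders, not my exact setup): the local branch comes first so new files land locally, and the rclone mount comes second.

```shell
# Sketch of a common mergerfs layout: local branch first, rclone mount
# second. All paths are placeholders.
LOCAL="/mnt/local"
GDRIVE="/mnt/gdrive"
MERGED="/mnt/media"

# category.create=ff = "first found": writes go to the first branch (local)
MERGERFS_CMD="mergerfs $LOCAL:$GDRIVE $MERGED -o use_ino,category.create=ff,dropcacheonclose=true"
# $MERGERFS_CMD   # uncomment to run
echo "$MERGERFS_CMD"
```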

thank you for looking into the logs
I've also tried it without caching the writes (or is that the default behaviour if it's not set in the command?)

I've heard about mergerfs but never looked into it.
But I don't like needing another tool/script, because what I want to accomplish already worked on the previous server (with the same OS and the same rclone version).

would you mind sharing your setup?
I'm thinking: mount Google Drive read-only and merge it with a local folder,
so all writes would be local (if I understood mergerfs' wiki correctly),
then have a script move the local files to the drive, so the paths would never change.
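That move step would be roughly something like this, I imagine (the paths, the 15-minute age threshold, and running it from cron are just guesses on my part):

```shell
# Sketch of the "move local files to the drive" step, e.g. run from cron.
# LOCAL_BRANCH/REMOTE are placeholders.
LOCAL_BRANCH="/mnt/local"
REMOTE="gdrive:"

# --min-age skips files that may still be being written to
MOVE_CMD="rclone move $LOCAL_BRANCH $REMOTE --min-age 15m --delete-empty-src-dirs -v"
# $MOVE_CMD   # uncomment to run
echo "$MOVE_CMD"
```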

but how would that merged mount behave during the move?
The file would be visible twice during the sync process, right?

and one thing does not make sense at all:
renaming a file through Sonarr/Radarr results in a complete copy of the file,
which means the file is processed locally

that should be handled by a server-side move

I have no idea why this is not working

and since when does the command rclone copy /src/path gdrive:/dst/path -P --stats=1s write to the cache that is set on the mount command with the cache-dir flag?

the rclone config only has a tmp upload dir set for the cached remote, but not for the destination in the above command

this is the remote config: (I explicitly added the server-side flag because it didn't work, and it still doesn't)


type = drive
client_id = ***.apps.googleusercontent.com
client_secret = ***
scope = drive
chunk_size = 64M
server_side_across_configs = true
token = {"access_token":"***","token_type":"Bearer","refresh_token":"***","expiry":"2019-09-16T19:25:42.209485981+02:00"}
team_drive = ***

this is the log: https://we.tl/t-iSReZDHxAZ
thank you

That's an easy one, unfortunately: it does a link call instead of a move.

I never got an answer as to why it does a link instead of a move, and I couldn't figure out if there was a flag or something in Sonarr/Radarr to make it do a move.

ignore this, my command was wrong, sorry

what the actual F
so I guess radarr does the same

but this works on my old server:
renaming a file takes just seconds

Radarr does the same thing as well in my testing.

If you find a switch or something to make it work without issuing a link, I'd love to know as I want to rename a bunch of stuff and have been too lazy to do it by moves.

Mine has always done a server-side move, both when uploading (partial to normal file) and for any organize operations. This is with --vfs-cache-mode set to writes.

If you organize/rename, it issues a link command. If you can run a debug log and share it, that would be great:

2019/09/16 13:30:42 DEBUG : torrents/: Lookup: name="TV"
2019/09/16 13:30:42 DEBUG : torrents/: >Lookup: node=<nil>, err=no such file or directory
2019/09/16 13:30:43 DEBUG : Movies/102.Dalmatians.2000/: >Lookup: node=<nil>, err=no such file or directory
2019/09/16 13:30:43 DEBUG : Movies/102.Dalmatians.2000/: Link: req=Link [ID=0x4c8c22 Node=0x1790 Uid=1000 Gid=1000 Pid=1370] node 559 to "102.Dalmatians.2000.mkv", old=Movies/102 Dalmatians (2000)/102 Dalmatians (2000).mkv
2019/09/16 13:30:43 DEBUG : Movies/102.Dalmatians.2000/: >Link: new=<nil>, err=function not implemented
2019/09/16 13:30:43 DEBUG : Movies/: Attr:
2019/09/16 13:30:43 DEBUG : Movies/: >Attr: attr=valid=1000h0m0s ino=0 size=0 mode=drwxrwxr-x, err=<nil>
2019/09/16 13:30:43 DEBUG : Movies/102.Dalmatians.2000/: Lookup: name="102.Dalmatians.2000.mkv"
2019/09/16 13:30:43 DEBUG : Movies/102.Dalmatians.2000/: >Lookup: node=<nil>, err=no such file or directory
2019/09/16 13:30:43 DEBUG : Movies/102.Dalmatians.2000/: Lookup: name="102.Dalmatians.2000.mkv"
2019/09/16 13:30:43 DEBUG : Movies/102.Dalmatians.2000/: >Lookup: node=<nil>, err=no such file or directory
2019/09/16 13:30:43 DEBUG : Movies/102 Dalmatians (2000)/102 Dalmatians (2000).mkv: Open: flags=OpenReadOnly
2019/09/16 13:30:43 DEBUG : Movies/102 Dalmatians (2000)/102 Dalmatians (2000).mkv: Open: flags=O_RDONLY
2019/09/16 13:30:43 DEBUG : Movies/102 Dalmatians (2000)/102 Dalmatians (2000).mkv: >Open: fd=Movies/102 Dalmatians (2000)/102 Dalmatians (2000).mkv (r), err=<nil>
2019/09/16 13:30:43 DEBUG : Movies/102 Dalmatians (2000)/102 Dalmatians (2000).mkv: >Open: fh=&{Movies/102 Dalmatians (2000)/102 Dalmatians (2000).mkv (r)}, err=<nil>

I have set it to writes.
This is my current mount command:
/usr/bin/rclone mount -vv --config=/root/.config/rclone/rclone.conf --allow-other --cache-writes --cache-dir=/media/rclonecache --vfs-cache-mode writes --vfs-read-chunk-size 16M --log-file=/var/log/rclone/rclone.log gdrive: /mnt/gdrive

I noticed two differences between the servers:
the old one had a dedicated data disk where the rclone cache was,
and a swap partition.
My new VPS does not have that.

Is rclone utilizing the swap explicitly in some way?
I mean, it should only be needed/accessed if RAM is low;
the server has 12 GB and only needs about 2 at the moment.

rclone doesn't do anything special. It's all on your settings / memory configuration on the server. Swap generally gets used if you run out of physical memory.

The question here is why it isn't releasing memory, or whether something else is going on, e.g. something freaking out and opening a few thousand files. That would be odd as well.

I think you misunderstood me here:
I was just speculating about the swap.
The server has 12 GB and currently 2.3 GB are used; the rest is free,
so it should not need swap.
There are no leaks, as far as I can tell.

That will take some time. Quite a lot of data already queued to be uploaded.

In the meantime, could you try doing a rename after disabling "Use Hardlinks instead of Copy" in the Settings? I am not using mergerfs and currently upload directly via the mount, so this is disabled for me.

I did try that both in Sonarr and Radarr. Here's a Radarr example:


Here is the link command:

2019/09/16 14:22:48 DEBUG : /: >Lookup: node=<nil>, err=no such file or directory
2019/09/16 14:22:48 DEBUG : Movies/1.1.2018/: >Lookup: node=<nil>, err=no such file or directory
2019/09/16 14:22:48 DEBUG : Movies/1.1.2018/: Link: req=Link [ID=0x5ff89c Node=0x72e0 Uid=1000 Gid=1000 Pid=1374] node 12396 to "1.1.2018.mkv", old=Movies/1+1 (2018)/1+1 (2018).mkv
2019/09/16 14:22:48 DEBUG : Movies/1.1.2018/: >Link: new=<nil>, err=function not implemented
2019/09/16 14:22:48 DEBUG : Movies/: Attr:

After I renamed it via the GUI.

are your scripts public?

would this work?:

thank you

I did the same test without mergerfs in the mix:

2019/09/16 14:30:01 DEBUG : Movies/12.Feet.Deep.2017/: >Lookup: node=<nil>, err=no such file or directory
2019/09/16 14:30:01 DEBUG : Movies/12.Feet.Deep.2017/: Link: req=Link [ID=0x106 Node=0x5 Uid=1000 Gid=1000 Pid=2129] node 7 to "12.Feet.Deep.2017.mp4", old=Movies/12 Feet Deep (2017)/12 Feet Deep (20
2019/09/16 14:30:01 DEBUG : Movies/12.Feet.Deep.2017/: >Link: new=<nil>, err=function not implemented
2019/09/16 14:30:01 DEBUG : Movies/12.Feet.Deep.2017/: Lookup: name="12.Feet.Deep.2017.mp4"

Same results.

I just started my old VPS again and did a move there,
and again, it took just seconds

mount command:

/usr/bin/rclone mount -vvv \
        --config=/home/.config/rclone/rclone.conf \
        --allow-other \
        --cache-workers=8 \
        --cache-writes \
        --no-modtime \
        --drive-use-trash \
        --stats=0 \
        --checkers=16 \
        --vfs-cache-mode writes \
        --dir-cache-time=60m \
        --cache-dir=/mnt/sdb1/rclonecache \
        --cache-info-age=60m \
        --log-file=/home/rclone.log \
        gdrive: /mnt/gdrive


type = drive
client_id = ***.apps.googleusercontent.com
client_secret = ***
scope = drive
team_drive = ***
pacer_min_sleep = 200ms
token =

here's the log https://we.tl/t-oiSEult8BG

Any chance adding --drive-server-side-across-configs helps?
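For example, a sketch of where the flag would go, based on the mount command earlier in the thread (restart the mount after changing it):

```shell
# Sketch: the earlier mount command with the server-side flag added.
MOUNT_CMD="/usr/bin/rclone mount -vv --config=/root/.config/rclone/rclone.conf --allow-other --vfs-cache-mode writes --cache-dir=/media/rclonecache --drive-server-side-across-configs --log-file=/var/log/rclone/rclone.log gdrive: /mnt/gdrive"
# $MOUNT_CMD   # uncomment to run
echo "$MOUNT_CMD"
```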