Rclone High RAM Usage (Potentially Related to Open Threads)

What is the problem you are having with rclone?

I am currently using Rclone to mount a large Dropbox cloud storage account on my Ubuntu server. Both accessing the Dropbox account and mounting it work normally on my machine. However, RAM usage remains high after scripts that access this storage have finished running.

Further inspection using lsof ~/mountpoint (in this case /path/to/dropbox/) finds no files still being accessed after each script finishes.

An inspection of RAM usage using the htop utility yields the following:

[htop screenshot omitted; I have redacted portions of the file paths to hide the computer name, solely for security purposes]

The screenshot shows the number of what I believe to be threads that Rclone has left open at the conclusion of a script. In prior attempts to keep RAM usage low, I intentionally did not set any of the --cache arguments.

Is there a way to purge the open Rclone threads (not related to the initial mounting of the Dropbox storage to the server) without having to unmount and remount?
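(For reference, one way I could imagine inspecting this without remounting — a sketch assuming the mount is started with rclone's remote-control API enabled via --rc, which is not how I start it below — would be:)

  # start the mount with the remote-control API enabled
  rclone mount dropbox: ~/dropbox --vfs-cache-mode writes --rc &
  # query the running mount's Go memory statistics
  rclone rc core/memstats
  # ask the running mount's Go runtime to run a garbage collection
  rclone rc core/gc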

What is your rclone version (output from rclone version)

rclone v1.51.0

Which OS you are using and how many bits (eg Windows 7, 64 bit)

os/arch: linux/amd64

Which cloud storage system are you using? (eg Google Drive)

Dropbox Professional

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Command to mount: rclone --vfs-cache-mode writes mount dropbox: ~/dropbox &
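(As a side note, an equivalent way to background the mount, assuming rclone's --daemon flag on Linux rather than the shell's &, would be:)

  # let rclone daemonize itself instead of relying on shell backgrounding
  rclone mount dropbox: ~/dropbox --vfs-cache-mode writes --daemon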

hello and welcome to the forum,

that is an old version of rclone; best to update to the latest stable release and test again.

with the newest version, it is recommended to use --vfs-cache-mode full
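a minimal sketch of the update-and-remount steps, assuming the standard install script from rclone.org and a FUSE mount at ~/dropbox:

  # unmount the existing mount first
  fusermount -u ~/dropbox
  # install the latest stable rclone
  curl https://rclone.org/install.sh | sudo bash
  # remount with the full VFS cache mode
  rclone mount dropbox: ~/dropbox --vfs-cache-mode full &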


That's an old version of rclone. Can you please update and re-test?


Hello, glad I found this forum given how much I use Rclone! Thank you for the quick response! Let me install the update and circle back!

My latest update:
rclone v1.53.2
os/arch: linux/amd64
go version: go1.15.3

I successfully ran the same script that prompted the initial forum post, and sadly I am facing an identical issue (albeit with slightly lower overall RAM usage at the conclusion):

Updated command: rclone --vfs-cache-mode full mount dropbox: ~/dropbox &
I tally 18 individual Rclone threads used to make the first script function. Are there any additional steps I can take to prevent this behavior? Thank you for your time!

I can't see the issue, as having threads open is normal.

So just to clarify: once the threads are opened for a specific script, they remain open even after it terminates and the machine is idle? I'm fine with accepting the behavior as normal, but I'll budget some percentage of RAM for them in future work.

I wouldn't be looking at threads as you really just want to look at the process. Any process will spawn a number of threads.
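For example (a sketch; note that htop lists each thread as its own row by default, and pressing H toggles thread display off), you can check the per-process totals from a shell:

  # one row per rclone process: thread count (NLWP) and resident memory (RSS, in KB)
  ps -o pid,nlwp,rss,cmd -C rclone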

What are you doing on the mount as well? The memory seems a bit high but does depend on what you are doing.

Okay good to know! Great question, so one of two things:

  1. Writing files to the mount; in some cases this could number 50-100 files in a given script.
  2. Reading GeoTIFF files stored on the mount (Dropbox) to run gridded calculations on the Linux machine. When run in parallel, the number of open files can reach a maximum of 270. I expect RAM usage to be higher during those runs, but I didn't think it would remain high after the script terminates.

My apologies if some of these questions appear rather novice; I've been meticulously testing other options and scouring the Internet for solutions prior to posting here!

The default buffer size is 16M, so you'd be seeing 16M × 50-100 of RAM usage for the buffers as well.

Same thing for #2 so things seem pretty normal.
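To put rough numbers on it (assuming the default --buffer-size of 16M per open file): 100 open files is about 1.6 GB of buffers, and 270 parallel reads is about 4.3 GB. If that's too much, you can lower the per-file buffer, e.g.:

  # shrink the per-file read-ahead buffer from the 16M default
  rclone mount dropbox: ~/dropbox --vfs-cache-mode full --buffer-size 8M &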

We're all novices 🙂 Always ask questions as that's the best way to find an answer.

Parallel usage will put your RAM usage up.

Go isn't great at returning memory to the system. You can add the --use-mmap flag to see if that helps.
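e.g. (a sketch based on the mount command above):

  # use the mmap allocator so freed buffer memory can be returned to the OS
  rclone mount dropbox: ~/dropbox --vfs-cache-mode full --use-mmap &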
