I am currently using Rclone to mount a large Dropbox cloud storage account on my Ubuntu server. Accessing the Dropbox account and mounting it both work normally. However, after scripts that access this storage finish running, RAM usage remains high.
Further inspection using lsof ~/mountpoint (in this case /path/to/dropbox/) finds no files still being accessed after each script finishes.
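As a cross-check on lsof, I also scanned /proc directly for any file descriptors still pointing under the mount. A rough sketch (the mount path here is a stand-in for my real one):

```shell
# Cross-check for lsof: walk every process's open file descriptors in
# /proc and report any that resolve to a path under the mount point.
# MOUNTPOINT is an assumption -- substitute your real mount path.
MOUNTPOINT="${MOUNTPOINT:-$HOME/dropbox}"
for fd in /proc/[0-9]*/fd/*; do
  target=$(readlink "$fd" 2>/dev/null) || continue
  case "$target" in
    "$MOUNTPOINT"/*) echo "${fd%/fd/*}: $target" ;;
  esac
done
```

If this prints nothing, no process on the box is holding a file open under the mount, which matches what lsof told me.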
Inspecting RAM usage with htop yields the following:
Please note that I have redacted portions of the file structure to omit the computer name, purely for security purposes. The screenshot shows the number of (what I believe to be) threads that Rclone has left open after a script finishes. I intentionally did not set the --cache argument, in an earlier attempt to keep RAM usage low.
Is there a way to purge the open Rclone threads (other than those tied to the initial mounting of the Dropbox storage to the server) without having to unmount & remount?
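For reference, the only non-remount mechanism I'm aware of is sending SIGHUP, which the rclone mount docs describe as flushing the directory caches. A minimal sketch, assuming a single rclone process is running:

```shell
# Ask a running rclone mount to flush its directory caches without
# unmounting, by sending SIGHUP (documented behavior for rclone mount).
# Assumes exactly one rclone process; adjust the pgrep match otherwise.
kill -HUP "$(pgrep -x -o rclone)"
```

I don't know whether this releases the memory these threads hold, though, hence the question.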
What is your rclone version (output from rclone version)
rclone v1.51.0
Which OS you are using and how many bits (eg Windows 7, 64 bit)
os/arch: linux/amd64
Which cloud storage system are you using? (eg Google Drive)
Dropbox Professional
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Command to mount: rclone --vfs-cache-mode writes mount dropbox: ~/dropbox &
My latest update:
rclone v1.53.2
os/arch: linux/amd64
go version: go1.15.3
I successfully ran the same script that prompted the original forum post, and sadly the identical issue is in front of me again (albeit with slightly lower overall RAM usage at the end):
Updated command: rclone --vfs-cache-mode full mount dropbox: ~/dropbox &
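In case it helps the discussion, a lower-memory variant of that mount might look like the sketch below. The flag values are guesses for me to tune, not recommendations; all flags are from the rclone mount / global flags docs:

```shell
# Sketch of a lower-memory mount, on the assumption that per-file buffers
# and the VFS cache are the main RAM drivers. Values are placeholders.
rclone mount dropbox: ~/dropbox \
  --vfs-cache-mode full \
  --vfs-cache-max-size 1G \
  --vfs-cache-max-age 1h \
  --buffer-size 8M \
  --use-mmap &
```

My understanding is that --use-mmap lets rclone return buffer memory to the OS more eagerly, which seemed relevant to memory staying high after scripts end.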
I tally 18 individual Rclone threads used to make the first script function. Are there any additional steps I can take to prevent this behavior? Thank you for your time!
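For anyone wanting to reproduce my count without screenshots, this is roughly how I'm tallying the threads (a sketch assuming a single rclone process; htop's per-line entries correspond to these lightweight threads):

```shell
# Count the lightweight threads (what htop lists per line) inside the
# rclone process, to track whether the count drops after a script ends.
# Assumes one rclone process; pgrep -o picks the oldest match otherwise.
pid=$(pgrep -o -x rclone)
echo "rclone PID $pid has $(ls "/proc/$pid/task" | wc -l) threads"
```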
So just to clarify: once the threads are opened for a specific script, they remain open even after it terminates and the machine is idle? I'm fine with accepting that behavior as normal, but I'll budget for "x" percentage of RAM usage in future work.
Okay, good to know! Great question. The scripts do one of two things:
Writing files to the mount; in some cases this could be 50-100 files in a given script.
Reading GeoTIFF files stored on the mount (Dropbox) to run gridded calculations on the Linux machine. When run in parallel, the number of files can reach a maximum of 270. I expect RAM usage to be higher during those runs, but I didn't think it would remain high after the script terminates.
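To keep that parallel-read case bounded, I've been considering capping concurrency on my side. A hypothetical sketch (the path and process_tile.sh are stand-ins for my real pipeline):

```shell
# Cap how many GeoTIFFs are processed at once so rclone's per-file
# buffers stay bounded: xargs -P limits the number of parallel workers.
# The tiles path and process_tile.sh are assumptions, not my real names.
find ~/dropbox/tiles -name '*.tif' -print0 |
  xargs -0 -n 1 -P 8 ./process_tile.sh
```

That would trade some wall-clock time for a predictable ceiling on simultaneous open files against the mount.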
My apologies if some of these questions seem rather novice; I've been meticulously testing other options and scouring the Internet for solutions prior to posting here!