Sometimes suddenly high memory usage

#1

Hello guys,

Sometimes rclone’s memory usage is crazily high, even when I am not doing anything with it. Plex is not scanning anything either, so I don’t think Plex is the cause.

Sometimes it uses 98% of the available memory (16 GB).

But right now it has been sitting at around 6.8 GB for the last 1-2 hours.

Could someone suggest what I could do now?


#2

What are your mount settings?

As root, you can run:

lsof /mountpoint

and see what files are open.


#3

My mount settings:

#!/bin/sh
/opt/bin/rclone mount gcrypt2:Multimedia /share/CACHEDEV1_DATA/GDrive --allow-other --allow-non-empty --vfs-cache-mode writes --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --log-file /share/CACHEDEV1_DATA/rclone/rclone.log --timeout 1h --umask 002 --use-mmap --rc --config /share/CACHEDEV1_DATA/rclone/rclone.conf &
/opt/bin/rclone mount gcrypt:shared /share/CACHEDEV1_DATA/GDriveBackup --allow-other --allow-non-empty --vfs-cache-mode writes --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --log-file /share/CACHEDEV1_DATA/rclone/rclone.log --timeout 1h --umask 002 --use-mmap --config /share/CACHEDEV1_DATA/rclone/rclonegdrivebackup.conf &

lsof results:

[~] # lsof /share/CACHEDEV1_DATA/GDrive
[~] # lsof /share/CACHEDEV1_DATA/GDriveBackup
COMMAND     PID  USER   FD   TYPE DEVICE SIZE/OFF                 NODE NAME
vs_refres 26993 admin  cwd    DIR   0,30        0  3666108240682974710 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/IP-CAM/20180521/images
vs_refres 26993 admin    4r   DIR   0,30        0                    1 /share/CACHEDEV1_DATA/GDriveBackup
vs_refres 26993 admin    5r   DIR   0,30        0 11465421883166377183 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia
vs_refres 26993 admin    6r   DIR   0,30        0  4797431945302953088 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/IP-CAM
vs_refres 26993 admin    7r   DIR   0,30        0 11973051123208067883 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/IP-CAM/20180521
vs_refres 26993 admin    8r   DIR   0,30        0  3666108240682974710 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/IP-CAM/20180521/images
[~] #

Does this help to figure out why the RAM usage is so high?


#4

What rclone version are you running?

Where are you seeing the high memory usage? What’s the output/tool?


#5

rclone v1.46

  • os/arch: linux/amd64
  • go version: go1.11.5

My QNAP NAS shows this in the resource monitor:


#6

What does this show?

ps ef -o command,vsize,rss,%mem,size  | grep rclone
 \_ grep rclone SHELL=/bin/   3040   808  0.0   352

#7

Or you could use this and see what’s on top, little nicer output:

ps aux --sort -rss
USER       PID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
felix    30319  1.8  3.1 4285412 1050504 ?     Ssl  Mar26  31:49 /usr/lib/plexmediaserver/Plex Media Server
influxdb   599  1.5  1.4 4976276 460716 ?      Ssl  Mar25  41:04 /usr/bin/influxd -config /etc/influxdb/influxdb.conf
felix    30432  7.4  1.0 1628372 344704 ?      Sl   Mar26 126:11 Plex Plug-in [com.plexapp.plugins.trakttv] /usr/lib/plexmediaserver/Resourc
felix

#8

A command like that normally worked, but now I get an error. I know it is unrelated to the memory issue, but do you have any idea why ps aux is not working normally?

[~] # ps aux --sort -rss
ps: invalid option -- 'a'
BusyBox v1.30.1 () multi-call binary.

Usage: ps

Show list of processes

        w       Wide output

#9

Oh, you are on QNAP. I’m not sure how to see what I want, as it doesn’t seem to take standard Linux commands :frowning:


#10

Well, until now commands like this always worked; I never had an issue with them. I can use almost all Linux commands. But I think something is wrong with BusyBox. Could that be the reason ps aux is not working?


#11

It seems like you can:

cat /proc/PID/status
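If the full status dump is too noisy, the memory accounting lines can be filtered out on their own. A minimal sketch, using the shell’s own process (/proc/self) as a stand-in for the rclone PID:

```shell
# Show only the memory accounting fields for a process (values in kB).
# /proc/self/status is the shell itself; substitute the rclone PID in practice.
grep -E '^Vm(Peak|Size|HWM|RSS):' /proc/self/status
```

VmRSS is the resident set (actual RAM held), while VmSize includes mapped-but-unused address space.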

#12

Sorry for the late reply. I needed to fix BusyBox on my QNAP NAS.

But I get this error:
[~] # ps ef -o command,vsize,rss,%mem,size | grep rclone
ps: bad -o argument ‘command’, supported arguments: user,group,comm,args,pid,ppid,pgid,etime,nice,rgroup,ruser,time,tty,vsz,stat,rss
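For what it’s worth, BusyBox’s ps only accepts the fields named in that error message, so a variant restricted to those (pid, vsz, rss and args are all on the supported list) might get close to the same information:

```shell
# Use only -o fields that this BusyBox ps advertises as supported.
# The [r]clone bracket trick keeps the grep process itself out of the results.
ps -o pid,vsz,rss,args | grep '[r]clone'
```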


#13

You can:

cat /proc/22450/status

#14

Right now the usage is only at 2.3 GB, but that is still too high, because no one is using rclone for anything right now.

These are the results of the lsof and cat /proc/<pid>/status commands:

[~] # lsof /share/CACHEDEV1_DATA/GDriveBackup
COMMAND    PID  USER   FD   TYPE DEVICE SIZE/OFF                 NODE NAME
vs_refres 3558 admin  cwd    DIR   0,31        0 13401379043208657366 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/Family/gelsas/DELL-XPS_8500/Data/$OF/250701
vs_refres 3558 admin    4r   DIR   0,31        0                    1 /share/CACHEDEV1_DATA/GDriveBackup
vs_refres 3558 admin    5r   DIR   0,31        0 11465421883166377183 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia
vs_refres 3558 admin    6r   DIR   0,31        0 12158741749687591524 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/Family
vs_refres 3558 admin    7r   DIR   0,31        0 14800355792846338703 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/Family/gelsas
vs_refres 3558 admin    8r   DIR   0,31        0 15024279114291699993 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/Family/gelsas/DELL-XPS_8500
vs_refres 3558 admin    9r   DIR   0,31        0 18096138740963539151 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/Family/gelsas/DELL-XPS_8500/Data
vs_refres 3558 admin   10r   DIR   0,31        0 16774556790556056095 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/Family/gelsas/DELL-XPS_8500/Data/$OF
vs_refres 3558 admin   11r   DIR   0,31        0 13401379043208657366 /share/CACHEDEV1_DATA/GDriveBackup/Multimedia/Family/gelsas/DELL-XPS_8500/Data/$OF/250701
[~] # cat /proc/3558/status
Name:   vs_refresh
State:  S (sleeping)
Tgid:   3558
Ngid:   0
Pid:    3558
PPid:   18515
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 64
Groups: 0 100
NStgid: 3558
NSpid:  3558
NSpgid: 3558
NSsid:  18515
VmPeak:    85904 kB
VmSize:    83604 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:      9888 kB
VmRSS:      7312 kB
VmData:     1260 kB
VmStk:       132 kB
VmExe:       276 kB
VmLib:     15552 kB
VmPTE:       172 kB
VmPMD:        12 kB
VmSwap:        0 kB
Threads:        1
SigQ:   8/62966
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000000006
SigCgt: 00000001800004e0
CapInh: 0000000000000000
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
Seccomp:        0
Cpus_allowed:   f
Cpus_allowed_list:      0-3
Mems_allowed:   1
Mems_allowed_list:      0
voluntary_ctxt_switches:        2114708
nonvoluntary_ctxt_switches:     111885

#15

That’s the process showing 2.3 GB in the other screen now?

The status of that process doesn’t show that, so the screen seems to be reporting it wrong, I guess.

VmPeak:    85904 kB
VmSize:    83604 kB

That’s only about 83 MB in use, with a peak of about 85 MB.
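Those kB figures can be converted on the box itself for easier reading. A quick sketch (again using /proc/self as a stand-in for the real PID):

```shell
# Convert VmSize and VmPeak from kB to MB with awk.
awk '/^Vm(Size|Peak):/ {printf "%s %.1f MB\n", $1, $2/1024}' /proc/self/status
```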


#16

This is usually caused by having --buffer-size set too big. Each open file can potentially use that much memory.

If you use the new --use-mmap flag then rclone will be much better at returning those buffers to the OS.
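The rough arithmetic behind that: with rclone’s default --buffer-size of 16M, each open file can hold up to that much read-ahead buffer, so the worst case scales with the number of open files. A sketch of the back-of-envelope math (the file count of 100 is purely illustrative):

```shell
# Worst-case read-ahead memory: open files x buffer size.
# 16 MB is rclone's default --buffer-size; 100 open files is a hypothetical count.
files=100
buffer_mb=16
echo "worst case: $((files * buffer_mb)) MB"
```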


#17

He’s not setting any --buffer-size in the command he posted, so it’s just the default size.


#18

@Animosity022 I sent the wrong cat /proc status output; this is the right one:

[~] # ps ax | grep rclone
22710 admin     61764 S   /opt/bin/rclone mount gcrypt2:Multimedia /share/CACHEDEV1_DATA/GDrive --allow-other --allow-non-empty --vfs-cache-mode writes --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --log-file /share/CACHEDEV1_DATA/rclone/rclone.log --timeout 1h --umask 002 --use-mmap --rc --config /share/CACHEDEV1_DATA/rclone/rclone.conf
22711 admin    2408204  S   /opt/bin/rclone mount gcrypt:shared /share/CACHEDEV1_DATA/GDriveBackup --allow-other --allow-non-empty --vfs-cache-mode writes --drive-chunk-size 32M --log-level INFO --log-file /share/CACHEDEV1_DATA/rclone/rclone.log --umask 002 --use-mmap --config /share/CACHEDEV1_DATA/rclone/rclonegdrivebackup.conf
24975 admin       228 D   grep rclone
[~] #
[~] # cat /proc/22711/status
Name:   rclone
State:  S (sleeping)
Tgid:   22711
Ngid:   0
Pid:    22711
PPid:   1
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 256
Groups:
NStgid: 22711
NSpid:  22711
NSpgid: 1870
NSsid:  1870
VmPeak:  3141828 kB
VmSize:  3141668 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:   2918516 kB
VmRSS:   2408204 kB
VmData:  3115156 kB
VmStk:       132 kB
VmExe:     11560 kB
VmLib:         0 kB
VmPTE:      6088 kB
VmPMD:        28 kB
VmSwap:        0 kB
Threads:        18
SigQ:   8/62966
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000080000
SigCgt: fffffffe7fc1feff
CapInh: 0000000000000000
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
Seccomp:        0
Cpus_allowed:   f
Cpus_allowed_list:      0-3
Mems_allowed:   1
Mems_allowed_list:      0
voluntary_ctxt_switches:        69
nonvoluntary_ctxt_switches:     6

@ncw these are my current mount settings, so I don’t think the issue is buffer size, and I am already using the --use-mmap flag.

#!/bin/sh
/opt/bin/rclone mount gcrypt2:Multimedia /share/CACHEDEV1_DATA/GDrive --allow-other --allow-non-empty --vfs-cache-mode writes --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --log-file /share/CACHEDEV1_DATA/rclone/rclone.log --timeout 1h --umask 002 --use-mmap --rc --config /share/CACHEDEV1_DATA/rclone/rclone.conf &
/opt/bin/rclone mount gcrypt:shared /share/CACHEDEV1_DATA/GDriveBackup --allow-other --allow-non-empty --vfs-cache-mode writes --dir-cache-time 96h --drive-chunk-size 32M --log-level INFO --log-file /share/CACHEDEV1_DATA/rclone/rclone.log --timeout 1h --umask 002 --use-mmap --config /share/CACHEDEV1_DATA/rclone/rclonegdrivebackup.conf &

#19

Just now it started rising again; it is using 7.7 GB right now:

[~] # cat /proc/22711/status
Name:   rclone
State:  S (sleeping)
Tgid:   22711
Ngid:   0
Pid:    22711
PPid:   1
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 256
Groups:
NStgid: 22711
NSpid:  22711
NSpgid: 1870
NSsid:  1870
VmPeak: 25824996 kB
VmSize: 25824996 kB
VmLck:         0 kB
VmPin:         0 kB
VmHWM:   8043876 kB
VmRSS:   8043876 kB
VmData: 25798484 kB
VmStk:       132 kB
VmExe:     11560 kB
VmLib:         0 kB
VmPTE:     50384 kB
VmPMD:       112 kB
VmSwap:        0 kB
Threads:        18
SigQ:   8/62966
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: 0000000000080000
SigCgt: fffffffe7fc1feff
CapInh: 0000000000000000
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
Seccomp:        0
Cpus_allowed:   f
Cpus_allowed_list:      0-3
Mems_allowed:   1
Mems_allowed_list:      0
voluntary_ctxt_switches:        69
nonvoluntary_ctxt_switches:     6

#20

What does the lsof on the mountpoint show when you run it as root?

I think you should grab a memory capture as it looks like it is written up a bit here:

and you can see what is using it up.

If it isn’t a buffer thing, since you are using the default size, then something is definitely off.
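Since the second mount already runs with --rc, the remote-control server also exposes Go’s pprof endpoints, so a heap profile can be pulled roughly like this. This assumes the default rc address of localhost:5572 and a Go toolchain on a machine that can reach it:

```shell
# Fetch a heap profile from rclone's remote-control pprof endpoint
# (localhost:5572 is the default --rc listen address).
go tool pprof -text http://localhost:5572/debug/pprof/heap
```

The text output lists which allocation sites are holding the memory, which should show whether it is buffers, the directory cache, or something else.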
