Yes, that's right
You can install go later that is fine!
Yes - good idea.
Okay, I figured out why pcloud keeps restarting: I was running it as a user service.
I had checked many times that it was not running as a service, but I had forgotten about user services, and apparently those don't show up when doing the normal checks for services, I guess because those checks only cover system services. Duh.
So below is my rclone@pcloud.service file. I wonder whether there is anything wrong with it that might explain why pcloud failed (as described in the other topic). That is: we have an explanation for the restarting, but I guess the question of what causes the memory problem remains, right?
# User service for Rclone mounting
#
# Place in ~/.config/systemd/user/
# File must include the '@' (ex rclone@.service)
# As your normal user, run
# systemctl --user daemon-reload
# You can now start/enable each remote by using rclone@<remote>
# systemctl --user enable rclone@dropbox
# systemctl --user start rclone@dropbox
[Unit]
Description=rclone: make sure pcloud is served via sftp
Documentation=man:rclone(1)
After=network-online.target
Wants=network-online.target
# AssertPathIsDirectory=%h/mnt/%i
[Service]
Restart=on-failure
RestartSec=5s
Type=notify
ExecStart=/usr/bin/rclone serve sftp pcloud:Backup/ \
    --config=%h/.config/rclone/rclone.conf \
    --addr :2022 \
    --vfs-cache-mode minimal \
# --vfs-cache-max-size 100M \
    --log-level INFO \
    --log-file /zfs/NAS/config/rclone/rclone-%i.log \
    --user christoph \
    --pass Font6-Antibody-Widget
# --umask 022
# --allow-other
# %i: %h/mnt/%i
#ExecStop=/bin/fusermount -u %h/mnt/%i
[Install]
WantedBy=default.target
How many files get accessed in the mount? Rclone will use memory proportional to the number of files, maybe 1k per file. So if you have 1 million files, that is 1 GB of RAM. If the files never get accessed (by that I mean listed rather than opened) then it doesn't use RAM.
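That rule of thumb is easy to sanity-check with a back-of-envelope calculation (a sketch; the ~1 KiB-per-file figure is the estimate above, not a measured constant):

```shell
# rough RAM estimate for rclone's file objects,
# assuming ~1 KiB of metadata per file (estimate, not measured)
files=1000000
kib=$(( files * 1 ))            # ~1 KiB each
echo "$(( kib / 1024 )) MiB"    # prints: 976 MiB, i.e. roughly 1 GB per million files
```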
I have now installed Go (and graphviz, for some reason) but I misunderstood what I'm supposed to do. I thought I'd run those commands after rclone has failed; I now realize that rclone needs to be running to do that. So I just restarted rclone (with the --rc flag), but I'm not sure when the right time to create the memory debug trace is. If I understand things correctly, we ideally want it just before it fails/gets killed by systemd, but I have no idea how to catch that moment.
What I can say, though, is that now that I have disabled the rclone service, it takes just around a day (instead of the previous two weeks) for rclone to fail. (It probably failed at a similar speed previously, but I didn't notice because the service immediately restarted it. I have no idea why even the service eventually failed to restart it.)
In any case, I can probably create the desired memory trace later today, but how do I determine the right moment?
And is it just this command I should execute, or other ones too: go tool pprof -web http://localhost:5572/debug/pprof/heap?
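For reference, the common variants look like this (a sketch; all of them assume rclone was started with --rc so the profiling endpoint is listening on localhost:5572):

```shell
# text summary of in-use heap memory
go tool pprof -text http://localhost:5572/debug/pprof/heap

# call graph rendered in the browser (this is what needs graphviz)
go tool pprof -web http://localhost:5572/debug/pprof/heap

# just save the raw profile; the .pb.gz can be analysed later
# with: go tool pprof <file>
curl -s -o heap-$(date +%Y%m%d-%H%M%S).pb.gz http://localhost:5572/debug/pprof/heap
```

Note that every `go tool pprof` fetch also saves a copy of the profile under ~/pprof/ automatically, which is where the "Saved profile in ..." lines below come from.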
For what it's worth, here is the current state of things about 10 minutes after starting rclone:
go tool pprof -text http://localhost:5572/debug/pprof/heap
Fetching profile over HTTP from http://localhost:5572/debug/pprof/heap
Saved profile in /zfs/NAS/config/homedirs/christoph/pprof/pprof.rclone.alloc_objects.alloc_space.inuse_objects.inuse_space.005.pb.gz
File: rclone
Type: inuse_space
Time: Mar 4, 2023 at 11:29am (CET)
Showing nodes accounting for 907.63MB, 98.78% of 918.85MB total
Dropped 68 nodes (cum <= 4.59MB)
flat flat% sum% cum cum%
250.55MB 27.27% 27.27% 250.55MB 27.27% github.com/rclone/rclone/vfs.newFile
232.52MB 25.31% 52.57% 232.52MB 25.31% time.FixedZone (inline)
191.03MB 20.79% 73.36% 191.03MB 20.79% github.com/rclone/rclone/backend/pcloud.(*Fs).newObjectWithInfo
130.01MB 14.15% 87.51% 130.01MB 14.15% path.Join
68.87MB 7.50% 95.01% 319.41MB 34.76% github.com/rclone/rclone/vfs.(*Dir)._readDirFromEntries
16.50MB 1.80% 96.80% 250.02MB 27.21% encoding/json.(*decodeState).literalStore
9.15MB 1% 97.80% 9.15MB 1% reflect.unsafe_NewArray
8MB 0.87% 98.67% 8MB 0.87% encoding/json.(*Decoder).refill
1MB 0.11% 98.78% 233.52MB 25.41% github.com/rclone/rclone/backend/pcloud/api.(*Time).UnmarshalJSON
0 0% 98.78% 267.17MB 29.08% encoding/json.(*Decoder).Decode
0 0% 98.78% 8MB 0.87% encoding/json.(*Decoder).readValue
0 0% 98.78% 259.17MB 28.21% encoding/json.(*decodeState).array
0 0% 98.78% 259.17MB 28.21% encoding/json.(*decodeState).object
0 0% 98.78% 259.17MB 28.21% encoding/json.(*decodeState).unmarshal
0 0% 98.78% 259.17MB 28.21% encoding/json.(*decodeState).value
0 0% 98.78% 673.61MB 73.31% github.com/pkg/sftp.(*Request).call
0 0% 98.78% 674.61MB 73.42% github.com/pkg/sftp.(*RequestServer).Serve.func2.1
0 0% 98.78% 675.61MB 73.53% github.com/pkg/sftp.(*RequestServer).packetWorker
0 0% 98.78% 673.61MB 73.31% github.com/pkg/sftp.filestat
0 0% 98.78% 588.71MB 64.07% github.com/rclone/rclone/backend/pcloud.(*Fs).List
0 0% 98.78% 588.71MB 64.07% github.com/rclone/rclone/backend/pcloud.(*Fs).listAll
0 0% 98.78% 267.17MB 29.08% github.com/rclone/rclone/backend/pcloud.(*Fs).listAll.func1
0 0% 98.78% 321.54MB 34.99% github.com/rclone/rclone/backend/pcloud.(*Fs).listAll.func2
0 0% 98.78% 588.71MB 64.07% github.com/rclone/rclone/backend/pcloud.(*Fs).listHelper
0 0% 98.78% 321.04MB 34.94% github.com/rclone/rclone/backend/pcloud.(*Fs).listHelper.func1
0 0% 98.78% 908.13MB 98.83% github.com/rclone/rclone/cmd/serve/sftp.vfsHandler.Filelist
0 0% 98.78% 267.17MB 29.08% github.com/rclone/rclone/fs.pacerInvoker
0 0% 98.78% 588.71MB 64.07% github.com/rclone/rclone/fs/list.DirSorted
0 0% 98.78% 267.17MB 29.08% github.com/rclone/rclone/lib/pacer.(*Pacer).Call
0 0% 98.78% 267.17MB 29.08% github.com/rclone/rclone/lib/pacer.(*Pacer).call
0 0% 98.78% 267.17MB 29.08% github.com/rclone/rclone/lib/rest.(*Client).CallJSON (inline)
0 0% 98.78% 267.17MB 29.08% github.com/rclone/rclone/lib/rest.(*Client).callCodec
0 0% 98.78% 267.17MB 29.08% github.com/rclone/rclone/lib/rest.DecodeJSON
0 0% 98.78% 906.13MB 98.62% github.com/rclone/rclone/vfs.(*Dir).Stat
0 0% 98.78% 908.13MB 98.83% github.com/rclone/rclone/vfs.(*Dir)._readDir
0 0% 98.78% 906.13MB 98.62% github.com/rclone/rclone/vfs.(*Dir).stat
0 0% 98.78% 906.13MB 98.62% github.com/rclone/rclone/vfs.(*VFS).Stat
0 0% 98.78% 9.15MB 1% reflect.MakeSlice
0 0% 98.78% 8.71MB 0.95% runtime.doInit
0 0% 98.78% 9.72MB 1.06% runtime.main
0 0% 98.78% 232.52MB 25.31% time.Parse (inline)
0 0% 98.78% 232.52MB 25.31% time.parse
Edit2: This time it went very fast. Less than two hours later rclone was already killed. The latest trace I caught is below, but it was still rather early in the process (the failure occurred around 12:20):
go tool pprof -text http://localhost:5572/debug/pprof/heap
Fetching profile over HTTP from http://localhost:5572/debug/pprof/heap
Saved profile in /zfs/NAS/config/homedirs/christoph/pprof/pprof.rclone.alloc_objects.alloc_space.inuse_objects.inuse_space.007.pb.gz
File: rclone
Type: inuse_space
Time: Mar 4, 2023 at 11:44am (CET)
Showing nodes accounting for 2127.47MB, 98.11% of 2168.50MB total
Dropped 89 nodes (cum <= 10.84MB)
flat flat% sum% cum cum%
558.05MB 25.73% 25.73% 558.05MB 25.73% path.Join
523.60MB 24.15% 49.88% 523.60MB 24.15% github.com/rclone/rclone/vfs.newFile
447.54MB 20.64% 70.52% 447.54MB 20.64% time.FixedZone (inline)
388.55MB 17.92% 88.44% 388.55MB 17.92% github.com/rclone/rclone/backend/pcloud.(*Fs).newObjectWithInfo
135.40MB 6.24% 94.68% 659.58MB 30.42% github.com/rclone/rclone/vfs.(*Dir)._readDirFromEntries
54MB 2.49% 97.17% 502.04MB 23.15% encoding/json.(*decodeState).literalStore
11.16MB 0.51% 97.69% 23.67MB 1.09% github.com/pkg/sftp.(*sshFxpNamePacket).marshalPacket
4.59MB 0.21% 97.90% 28.26MB 1.30% github.com/pkg/sftp.(*sshFxpNamePacket).MarshalBinary
2.89MB 0.13% 98.03% 401.66MB 18.52% github.com/rclone/rclone/vfs.(*DirHandle).Readdir
1.68MB 0.077% 98.11% 398.76MB 18.39% github.com/rclone/rclone/vfs.(*Dir).ReadDirAll
0 0% 98.11% 519.19MB 23.94% encoding/json.(*Decoder).Decode
0 0% 98.11% 511.19MB 23.57% encoding/json.(*decodeState).array
0 0% 98.11% 511.19MB 23.57% encoding/json.(*decodeState).object
0 0% 98.11% 511.19MB 23.57% encoding/json.(*decodeState).unmarshal
0 0% 98.11% 511.19MB 23.57% encoding/json.(*decodeState).value
0 0% 98.11% 1358.83MB 62.66% github.com/pkg/sftp.(*Request).call
0 0% 98.11% 401.66MB 18.52% github.com/pkg/sftp.(*Request).opendir
0 0% 98.11% 1681.48MB 77.54% github.com/pkg/sftp.(*RequestServer).Serve.func2.1
0 0% 98.11% 1681.98MB 77.56% github.com/pkg/sftp.(*RequestServer).packetWorker
0 0% 98.11% 28.26MB 1.30% github.com/pkg/sftp.(*conn).sendPacket
0 0% 98.11% 28.26MB 1.30% github.com/pkg/sftp.(*packetManager).controller
0 0% 98.11% 28.26MB 1.30% github.com/pkg/sftp.(*packetManager).maybeSendPackets
0 0% 98.11% 12.50MB 0.58% github.com/pkg/sftp.(*sshFxpNameAttr).MarshalBinary
0 0% 98.11% 1352.83MB 62.39% github.com/pkg/sftp.filestat
0 0% 98.11% 28.26MB 1.30% github.com/pkg/sftp.marshalPacket
0 0% 98.11% 28.26MB 1.30% github.com/pkg/sftp.sendPacket
0 0% 98.11% 1306.35MB 60.24% github.com/rclone/rclone/backend/pcloud.(*Fs).List
0 0% 98.11% 1306.35MB 60.24% github.com/rclone/rclone/backend/pcloud.(*Fs).listAll
0 0% 98.11% 519.19MB 23.94% github.com/rclone/rclone/backend/pcloud.(*Fs).listAll.func1
0 0% 98.11% 787.16MB 36.30% github.com/rclone/rclone/backend/pcloud.(*Fs).listAll.func2
0 0% 98.11% 1306.35MB 60.24% github.com/rclone/rclone/backend/pcloud.(*Fs).listHelper
0 0% 98.11% 787.16MB 36.30% github.com/rclone/rclone/backend/pcloud.(*Fs).listHelper.func1
0 0% 98.11% 447.54MB 20.64% github.com/rclone/rclone/backend/pcloud/api.(*Time).UnmarshalJSON
0 0% 98.11% 2123.51MB 97.93% github.com/rclone/rclone/cmd/serve/sftp.vfsHandler.Filelist
0 0% 98.11% 519.19MB 23.94% github.com/rclone/rclone/fs.pacerInvoker
0 0% 98.11% 1306.35MB 60.24% github.com/rclone/rclone/fs/list.DirSorted
0 0% 98.11% 519.19MB 23.94% github.com/rclone/rclone/lib/pacer.(*Pacer).Call
0 0% 98.11% 519.19MB 23.94% github.com/rclone/rclone/lib/pacer.(*Pacer).call
0 0% 98.11% 519.19MB 23.94% github.com/rclone/rclone/lib/rest.(*Client).CallJSON (inline)
0 0% 98.11% 519.19MB 23.94% github.com/rclone/rclone/lib/rest.(*Client).callCodec
0 0% 98.11% 519.19MB 23.94% github.com/rclone/rclone/lib/rest.DecodeJSON
0 0% 98.11% 1721.86MB 79.40% github.com/rclone/rclone/vfs.(*Dir).Stat
0 0% 98.11% 1965.93MB 90.66% github.com/rclone/rclone/vfs.(*Dir)._readDir
0 0% 98.11% 1721.86MB 79.40% github.com/rclone/rclone/vfs.(*Dir).stat
0 0% 98.11% 153.01MB 7.06% github.com/rclone/rclone/vfs.(*File).Path
0 0% 98.11% 1721.86MB 79.40% github.com/rclone/rclone/vfs.(*VFS).Stat
0 0% 98.11% 153.01MB 7.06% github.com/rclone/rclone/vfs.Nodes.Less
0 0% 98.11% 153.01MB 7.06% sort.Sort
0 0% 98.11% 26MB 1.20% sort.insertionSort
0 0% 98.11% 121.51MB 5.60% sort.partition
0 0% 98.11% 153.01MB 7.06% sort.pdqsort
0 0% 98.11% 447.54MB 20.64% time.Parse (inline)
0 0% 98.11% 447.54MB 20.64% time.parse
Edit3: I asked ChatGPT for a script to run the command every 5 minutes as long as rclone is running, which is what I'm using now:
#!/bin/bash
while true; do
    # Check if the rclone process is running
    if ! pgrep -x "rclone" > /dev/null; then
        echo "rclone process has been killed. Exiting script."
        break
    fi
    # Execute the go tool pprof command
    go tool pprof -text http://localhost:5572/debug/pprof/heap
    # Sleep for five minutes
    sleep 300
done
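A variant of the script above (an untested sketch under the same assumptions, i.e. the --rc endpoint on localhost:5572; the output directory is hypothetical) that keeps timestamped raw profiles on disk instead of only printing the text summary:

```shell
#!/bin/bash
# Snapshot the raw heap profile every 5 minutes while rclone is alive.
outdir="$HOME/heap-profiles"    # hypothetical location
mkdir -p "$outdir"
while pgrep -x rclone > /dev/null; do
    # save the raw .pb.gz straight from the pprof endpoint
    curl -s -o "$outdir/heap-$(date +%Y%m%d-%H%M%S).pb.gz" \
        http://localhost:5572/debug/pprof/heap
    sleep 300
done
echo "rclone has exited; profiles are in $outdir"
```

The last file written before rclone dies is then the closest trace to the failure, and each .pb.gz can be opened later with go tool pprof <file>.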
It's not that important. Wait until rclone has doubled its initial size, or wait a few hours.
If you could make available the .gz file that pprof makes, eg /zfs/NAS/config/homedirs/christoph/pprof/pprof.rclone.alloc_objects.alloc_space.inuse_objects.inuse_space.005.pb.gz, that has all the info in it and I can run go tool pprof on it myself.
This looks very much like the memory use is associated with file objects. How many file objects are in use in the mount (roughly)?
Can you run:
rclone test memory pcloud:Backup/
This will show how much memory loading all the pcloud objects takes.
This is looking like memory use caused by lots of objects rather than a leak at the moment, but we'll see!
Got the trace, thanks.
Looking at it I can see there are exactly 3.2M VFS objects in use as you said.
I usually tell people that they need 1GB of RAM per 1,000,000 objects. Your pcloud objects are taking a bit less space than this so that looks fine.
So I think the memory use here is totally normal - if you want 3.2M objects in memory then you will need to have more RAM on the server with rclone v1.61
How much memory does the server have?
The problem is the objects stored in the directory cache, which is something that is on my radar to reduce. I have a half-done fix for it which stores the directory cache on disk instead, which will improve things enormously.
The server has 24 GB of memory.
I tried this twice now, but every time I get
Failed to memory: couldn't list files: pcloud error: Internal error. Try again later. (5000)
after some time...
Trying again right now, but I'm expecting it to fail again...
According to the profile, Go was using 2.3GB of RAM. This can often translate into more actual RAM through memory fragmentation. How much memory do you see rclone using in ps/task manager? Note that GOGC will help with this.
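For reference, GOGC is set via the environment; a sketch (the value 50 is just an example, the default is 100; lower values make the garbage collector run more often, trading CPU for a smaller heap):

```shell
# one-off, from a shell:
GOGC=50 rclone serve sftp pcloud:Backup/ --addr :2022 --rc

# or, for the systemd user unit, add under [Service]:
#   Environment=GOGC=50
```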
Using top | grep rclone I can see that it (RES) goes up to 2.9G. So I guess that's not a problem. Then again, it hasn't been killed yet, so I'll let my top command run for a while to see how much memory it consumes just before it gets killed.
Whoops: now it just went up to 3.5G...
I use this little script
#!/bin/sh
# Monitor memory usage of the given command
COMMAND="$1"
if [ "$COMMAND" = "" ]; then
    echo "Syntax: $0 command"
    exit 1
fi
headers=""
while [ 1 ]; do
    ps $headers -C "${COMMAND}" -o pid,rss,vsz,cmd
    sleep 1
    headers="--no-headers"
done
You call it as "./memusage rclone" and it will print a line every second with the memory usage of the command you passed. It's good for tracking changes.
How much memory is free on the server? Would it be a problem if rclone used 4G? 8G?
That sounds pretty much like what my layman's one-liner did. In any case, rclone has meanwhile been killed, and here are the last couple of lines from top | grep rclone:
2796391 christo+ 20 0 4642648 3.7g 3088 S 12.0 15.7 3:04.11 rclone
2796391 christo+ 20 0 4642648 3.7g 3172 S 12.3 15.7 3:04.48 rclone
2796391 christo+ 20 0 4642648 3.6g 2816 S 3.0 15.7 3:04.57 rclone
2796391 christo+ 20 0 4642648 3.6g 2816 S 11.3 15.7 3:04.91 rclone
2796391 christo+ 20 0 4642648 3.6g 3044 S 12.6 15.7 3:05.29 rclone
2796391 christo+ 20 0 4642648 3.6g 3044 S 2.3 15.7 3:05.36 rclone
2796391 christo+ 20 0 4642648 3.7g 3768 S 11.6 15.7 3:05.71 rclone
2796391 christo+ 20 0 4642648 3.7g 3500 S 2.6 15.8 3:05.79 rclone
2796391 christo+ 20 0 4711580 3.8g 2660 S 15.3 16.3 3:06.25 rclone
2796391 christo+ 20 0 4711580 3.8g 0 S 4.1 16.3 3:06.38 rclone
2796391 christo+ 20 0 4848420 3.9g 0 S 14.1 16.8 3:06.87 rclone
2796391 christo+ 20 0 4848420 3.9g 0 S 2.7 16.8 3:07.00 rclone
2796391 christo+ 20 0 4848420 3.9g 0 S 5.6 16.8 3:07.39 rclone
So, it went up to 3.9GB and perhaps it tried to go to 4GB which may have been a red line to cross...
Not sure what to make of the sudden drop of shared memory to 0...
I don't think so. I have no knowledge of these things, but there is nothing on that server that needs so much RAM. The only reason those GBs are in there is that I had them lying around. After a restart of the server, it takes quite a while (half an hour?) for the memory usage to reach the levels shown in the graph above. My interpretation is that the OS gradually starts handing out that free memory to processes that don't really need it but might as well use it when no one is asking for it.
Your server has loads of free RAM, and page cache.
I think something must be killing rclone when it gets to 4GB
I suspect monit might be killing rclone.
Can you post your monit config file - it should be at /etc/monit.conf or /etc/monit/monitrc I think.
$ sudo cat /etc/monit/monitrc
# This file is auto-generated by openmediavault (https://www.openmediavault.org)
# WARNING: Do not edit this file, your changes will get lost.
set daemon 30 with start delay 30
set logfile syslog facility log_daemon
set idfile /var/lib/monit/id
set statefile /var/lib/monit/state
set httpd unixsocket /run/monit.sock
allow localhost
set eventqueue
basedir /var/lib/monit/events
slots 100
include /etc/monit/conf.d/*
$ sudo cat /etc/monit/conf.d/*
check process collectd with matching collectd
start program = "/bin/systemctl start collectd"
stop program = "/bin/systemctl stop collectd"
mode active
check process omv-engined with pidfile /run/omv-engined.pid
start program = "/bin/systemctl start openmediavault-engined"
stop program = "/bin/systemctl stop openmediavault-engined"
mode active
# Alert if disk space of root filesystem gets low
check filesystem rootfs with path /
if space usage > 85% for 5 times within 15 cycles
then alert else if succeeded for 10 cycles then alert
check process nginx with pidfile /run/nginx.pid
start program = "/bin/systemctl start nginx"
stop program = "/bin/systemctl stop nginx"
mode active
if cpu is greater than 40% for 2 cycles then alert
if cpu is greater than 80% for 5 cycles then restart
# https://mmonit.com/monit/documentation/monit.html#CONNECTION-TESTS
# https://mmonit.com/monit/documentation/monit.html#FAULT-TOLERANCE
if failed host 127.0.0.1 port 88 protocol http timeout 15 seconds for 2 times within 3 cycles then restart
check process php-fpm with pidfile /run/php/php7.4-fpm.pid
start program = "/bin/systemctl start php7.4-fpm"
stop program = "/bin/systemctl stop php7.4-fpm"
mode active
check process rrdcached with pidfile /run/rrdcached.pid
start program = "/bin/systemctl start rrdcached"
stop program = "/bin/systemctl stop rrdcached"
mode active
check system $HOST
if loadavg (1min) > 4.0 for 3 cycles then alert
if loadavg (5min) > 2.0 for 3 cycles then alert
if memory usage > 90% then alert
if cpu usage (user) > 95% for 2 cycles then alert
if cpu usage (system) > 95% for 2 cycles then alert
if cpu usage (wait) > 95% for 2 cycles then alert
I don't see anything obviously wrong with that.
Might there be a ulimit set on rclone?
Hm, we might be getting somewhere:
$ ps aux | grep rclone
christo+ 139918 2.6 0.2 761996 68960 pts/3 Sl 18:14 0:00 rclone serve sftp pcloud:Backup/ --addr :2022 --user ********* --pass ********** --log-file=/zfs/NAS/config/rclone/rclone.log --vfs-cache-mode writes --rc
christo+ 140271 0.0 0.0 6216 636 pts/3 S+ 18:14 0:00 grep rclone
$ cat /proc/139918/limits
Limit Soft Limit Hard Limit Units
Max cpu time unlimited unlimited seconds
Max file size unlimited unlimited bytes
Max data size unlimited unlimited bytes
Max stack size 8388608 unlimited bytes
Max core file size 0 unlimited bytes
Max resident set unlimited unlimited bytes
Max processes 95063 95063 processes
Max open files 1048576 1048576 files
Max locked memory 3122961920 3122961920 bytes
Max address space unlimited unlimited bytes
Max file locks unlimited unlimited locks
Max pending signals 95063 95063 signals
Max msgqueue size 819200 819200 bytes
Max nice priority 0 0
Max realtime priority 0 0
Max realtime timeout unlimited unlimited us
3122961920 bytes is actually just 2.9085 GiB...
But where does that limit come from? Could it be some system default?
Type ulimit -a as the user running rclone.
Indeed, that gives me
max locked memory (kbytes, -l) 3049767
max memory size (kbytes, -m) unlimited
So that is where the limit comes from. But where is it set?
ChatGPT tells me:
The default resource limits are usually defined in the /etc/security/limits.conf file. This file contains default limits for different user and group classes. (...) In addition to the default limits set in limits.conf, there are also system-wide limits set in the kernel that apply to all processes on the system. These limits can be viewed and changed using the sysctl command or by modifying the kernel parameters in the /etc/sysctl.conf file.
But both of these files contain nothing but commented lines...
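A few places one could look to trace where the effective limit comes from (a sketch; which of these applies depends on how the process was started, since processes launched by systemd take their limits from the unit/manager settings rather than from PAM's limits.conf):

```shell
# effective soft limit for the current shell (KiB, or "unlimited")
ulimit -l
# PAM limits, applied at login
grep -rs memlock /etc/security/limits.conf /etc/security/limits.d/ \
    || echo "memlock not set via PAM limits"
# systemd's default for services, if systemd is present
systemctl show --property=DefaultLimitMEMLOCK 2>/dev/null || true
```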
Correct.
That's the spot.
Mine, as an example, only contains:
# End of file
root soft nofile 65535
root hard nofile 65535
* soft nofile 65535
* hard nofile 65535
and
root@gemini:/etc/security# ulimit -a
real-time non-blocking time (microseconds, -R) unlimited
core file size (blocks, -c) 0
data seg size (kbytes, -d) unlimited
scheduling priority (-e) 0
file size (blocks, -f) unlimited
pending signals (-i) 127356
max locked memory (kbytes, -l) 4091400
max memory size (kbytes, -m) unlimited
open files (-n) 65535
pipe size (512 bytes, -p) 8
POSIX message queues (bytes, -q) 819200
real-time priority (-r) 0
stack size (kbytes, -s) 8192
cpu time (seconds, -t) unlimited
max user processes (-u) 127356
virtual memory (kbytes, -v) unlimited
file locks (-x) unlimited
I'm on Ubuntu though, so slightly higher defaults. You can adjust that file, add those lines in, and reboot.
I think it's something like:
* soft memlock unlimited
* hard memlock unlimited
My point was: if the limits aren't set in these files, where do they come from? So you're saying that each OS has its own built-in defaults?
I'm getting into quite deep low-level settings here. So deep that I can't even seem to manage to temporarily increase the limits for the user running rclone, just to test things out. ulimit seems to be no ordinary command...
Not sure I want to set it to unlimited. I assume there is a reason for those default settings. I was thinking that 6GB should be fine for rclone to handle 4-5M files, no?
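For reference, a finite cap instead of unlimited could look like this in /etc/security/limits.conf (a sketch; the username and the 6 GiB figure are just this thread's example, and memlock only governs locked memory, so whether raising it actually stops the kills remains to be tested):

```shell
# /etc/security/limits.conf fragment; memlock units are KiB
# 6 GiB = 6 * 1024 * 1024 KiB = 6291456
christoph soft memlock 6291456
christoph hard memlock 6291456
```

This is applied via PAM at the next login; processes started by systemd would need LimitMEMLOCK= in the unit instead.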