This issue happens without any file opened. So not a buffer issue...
I have 12 gigs and the only thing the server is doing is uploading right now, no media playback at all.
When I took this screenshot the server was uploading. I was watching the upload logs when suddenly memory usage climbed from 70% to 100% over about a minute. After that, the rclone mount crashed and memory usage dropped back to 70%.
Upload never stopped and continued without issue but I had to restart the mount.
Edit: Oh, I think I know: could this be due to Plex doing the indexing/thumbnail creation, and so opening a lot of connections??
It's probably a better solution to just dial down your buffer-size, at least to see how that affects things, before you go investing in even more RAM. Chances are you aren't actually getting that much benefit out of such a large buffer size. Remember the default is only 16M, and that is usually fairly adequate. 16 times more is a lot...
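For reference, dialing the buffer back down is just a one-flag change on the mount. A minimal sketch, where the remote name and mountpoint are placeholders for your own setup:

```shell
# Example only: "remote:" and /mnt/remote are placeholders.
# Drop the read-ahead buffer back toward the 16M default before
# buying more RAM, then watch memory usage under the same workload.
rclone mount remote: /mnt/remote --buffer-size 16M
```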
I have a very similar experience using rclone on Unraid.
If I mount with --buffer-size and --drive-chunk-size, everything works fine unless I try to upload using rclone move. The upload fails with an error message saying --vfs-cache-mode writes is needed. But when I add --vfs-cache-mode writes to the config, it floods the RAM and doesn't respect the limit set by --vfs-cache-max-size: it pushes the complete file into the cache. So I had to put the cache dir on a drive instead of in RAM, since I only have 16 GB.
In short:
- vfs cache not activated: stable, but upload doesn't work properly
- vfs cache activated: RAM gets flooded
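The "cache dir on a drive" workaround boils down to pointing --cache-dir somewhere other than a RAM-backed path. A sketch, assuming an Unraid-style box where /mnt/cache is a physical disk (remote name, paths, and sizes are placeholders):

```shell
# Sketch only: gdrive:, the mountpoint, and the cache path are examples.
# --cache-dir moves the VFS cache off RAM onto disk, and
# --vfs-cache-max-size caps how much rclone *tries* to keep cached
# (it can still exceed this temporarily for files in active use).
rclone mount gdrive: /mnt/user/mount \
  --vfs-cache-mode writes \
  --vfs-cache-max-size 10G \
  --cache-dir /mnt/cache/rclone-vfs
```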
This is basically all due to an improper setup / user error though.
If you run software like Plex on a cloud drive and you do NOT disable the functions that hit the storage really hard, then rclone will just execute whatever the software asks for. And if it asks to simultaneously read from dozens, if not hundreds, of files... well, that works on a local HDD, but not so much on a cloud drive.
Each open file will get a "buffer-size" quota of RAM to use.
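The arithmetic here is worth spelling out, because it explains the original poster's 12 GB box falling over. A rough worst-case estimate (the function name and numbers are illustrative, not rclone internals):

```python
# Worst case, each concurrently open file can claim up to --buffer-size
# of RAM for read-ahead, so peak buffer RAM scales with open files.

def buffer_ram_gib(open_files: int, buffer_size_mib: int) -> float:
    """Worst-case read-ahead RAM in GiB for a given number of open files."""
    return open_files * buffer_size_mib / 1024

# A scanner (Plex analysis, thumbnailing, ...) holding 50 files open:
print(buffer_ram_gib(50, 16))   # default 16M buffer -> ~0.78 GiB, fine
print(buffer_ram_gib(50, 256))  # 256M buffer -> 12.5 GiB, OOM on a 12 GB box
```

So a buffer size that is harmless for one or two transfers becomes fatal the moment some other program opens many files at once.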
--vfs-cache-max-size does not hard-block the cache from growing past that size. It will exceed it if it HAS to (the only other alternative would be to error or crash). It will, however, try to shrink the cache as soon as it can, i.e. as soon as files are no longer being actively accessed by processes.
Rclone does not open files itself. The (other) software accessing the VFS does - that's the real problem.
The reason a buttload of large files get pulled into the cache is often that something like Plex (or Windows, for that matter) tries to generate thumbnails or do media analysis. At worst this can involve reading the entire file. Doable on local storage. Not so much on a cloud drive.
TLDR: To solve the issue, reconsider the settings in whatever software is accessing all these files.
Off-topic question: I have set a speed limit of 8M for the upload, but I get two speed readouts, and the overall one is much lower than the "per file" speed. Is that normal?
That overall speed is actually an "average since the rclone process started" speed.
For a mount that might be many hours ago => quite a low average speed.
I agree this can be a little confusing, but you pretty much have to look at the totals for the current transfers - that is your real "right now" bandwidth through rclone. I think the -P progress indicator wasn't really designed with a permanently running process (like a mount) in mind, but rather with processes like a sync, which have a clear "start" and "stop" point. At some point we may get an overhauled version that is more tailored to this use case and less confusing.
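A quick sketch of why the two numbers diverge so much on a long-lived mount (the figures here are made up for illustration):

```python
# The "overall" figure is total bytes moved divided by time since the
# rclone process started, not a current-throughput reading.

def average_speed_mib_s(total_mib: float, uptime_s: float) -> float:
    """Average speed since process start, in MiB/s."""
    return total_mib / uptime_s

# Mount has been up 10 hours, but only transferred for the last
# 30 minutes at a steady 8 MiB/s: 30 * 60 * 8 = 14400 MiB moved.
total_mib = 30 * 60 * 8.0
print(average_speed_mib_s(total_mib, 10 * 3600))  # "overall": 0.4 MiB/s
print(average_speed_mib_s(total_mib, 30 * 60))    # during transfer: 8.0 MiB/s
```

Same data, wildly different numbers, purely because of how long the process has been sitting idle.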
You can confirm this rather easily with any bandwidth monitor.