Follow Animosity's instructions in general, but these two flags:
--tpslimit 5
--drive-pacer-burst 25
are, for the record, diametrically opposed.
The first says "never make more than 5 requests per second".
The second says "you can make up to 25 requests per second when quota allows".
Together they make no sense whatsoever.
I'd just leave them at their defaults.
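For illustration only, a sketch of a mount that leaves the pacer alone ("gdrive:" and the mountpoint are placeholders, not from your config):

```
# purely illustrative - remote name and mountpoint are placeholders;
# the pacer/tps flags are simply left out so rclone's defaults apply
rclone mount gdrive: /mnt/gdrive --daemon
```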
I somewhat doubt this is the cause of your problem, however.
I would post your Plex scanning settings. Most likely you have in-depth metadata scans enabled, which may be reading every file in your library, or at least part of every file. That is not suited to cloud-drive usage and is known to cause problems.
A debug log would also be very helpful. Add these to your mount command: --log-file=rcloneplexlog.txt --log-level DEBUG
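They just get appended to whatever mount command you already run, along these lines (remote and mountpoint are placeholders again):

```
# hypothetical mount command - only the two logging flags are the point here
rclone mount gdrive: /mnt/gdrive --daemon \
  --log-file=rcloneplexlog.txt \
  --log-level DEBUG
```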
Those settings were desperation... I had fewer problems with them, but maybe it was a placebo effect...
The most interesting lines I get with debug enabled are:
2019/11/24 17:10:31 DEBUG : pacer: low level retry 1/10 (error googleapi: Error 403: User Rate Limit Exceeded. Rate of requests for user exceed configured project quota. You may consider re-evaluating expected per-user traffic to the API and adjust project quota limits accordingly. You may monitor aggregate quota usage and adjust limits in the API Console: https://console.developers.google.com/apis/api/drive.googleapis.com/quotas?project=XXXXXXXX, userRateLimitExceeded)
Actually, this is the first time I've seen this error so clearly; I previously had a different one. Is the "out of memory" error related to the buffer and to Plex checking multiple files? Should I remove it?
Yeah, this is the result of my desperation. With those settings I had fewer errors; I was just testing out solutions and maybe ended up with Frankenstein mount settings... I removed them and enabled debug logging. Looks like it is related to the buffer running "out of memory"?
Ah, well, that's good in a sense - because an out-of-memory error is a very clear problem we can fix.
You do not need to worry about the pacer 1/10 error. It is perfectly normal to see some of these during normal operation. You can consider this the server telling rclone's pacer that it needs to throttle the request rate slightly. The pacer will respond gracefully and try to stay mostly under the limit for optimal performance. Unless you start to see significant amounts of 4/10 or 5/10 retries or higher, you can consider this perfectly normal and nothing to worry about. It should not be related to your problem at all.
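If you want a quick sanity check on that, something like this (assuming the log file name from earlier) will count the retries by depth:

```
# tally pacer retries by depth; lots of 1/10 is normal,
# a pile of 4/10 or 5/10 would be worth a closer look
grep -o "low level retry [0-9]*/10" rcloneplexlog.txt | sort | uniq -c
```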
So how much RAM do you have on the system exactly?
I suspect that Plex is doing aggressive scanning and opening way too many files at once, such that even the reasonable 128M buffer may multiply enough to cause a memory problem (see the rough numbers below). If so, that's a Plex settings issue. Nothing else in your config should be particularly RAM-heavy. You probably do not actually need a 128M buffer - but I don't suspect that this is the main issue in and of itself. Awaiting your response to be able to say more about the best way to fix this...
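To make the multiplication concrete, a rough worst-case estimate, assuming each file Plex holds open can fill a whole buffer:

```
# back-of-the-envelope, not a measurement:
#   8 open files x 128M buffer ≈ 1 GB in buffers alone
#   8 open files x  16M buffer ≈ 128 MB - far less dangerous
```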
Default buffer is 16M, yes. That may already be enough to play things smoothly, so unless you actually have issues at that size I recommend you just leave it there.
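Concretely, that just means dropping the flag from your mount command, or pinning the default explicitly if you prefer it visible:

```
# either omit --buffer-size entirely (16M is the default),
# or state it explicitly:
--buffer-size 16M
```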
With 2GB RAM we need to be conservative. It shouldn't be a problem at all, but your OS is probably using most of the first GB on its own, so we don't have much room to play around with non-essentials. rclone does not use much RAM by default, though. It's mostly very large buffer sizes or chunk sizes that make it bloat, and those are usually entirely optional to increase. With your current setup, really only an extreme number of concurrent connections could make it go OOM (assuming nothing else on the system is also sucking up a lot of RAM, obviously).
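If you want to verify what rclone is actually using on the box, standard Linux tooling is enough (nothing rclone-specific here):

```
# rclone's memory footprint - RSS column, in KB
ps aux | grep '[r]clone'
# overall headroom on the machine
free -h
```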
I still suspect the Plex scans, as I said, because even with a 128M buffer it would take a significant number of concurrent transfers to OOM. I would just post an image of your Plex settings. I think there are two settings pages relevant to this. I can offer some decent educated guesses about which settings are likely to cause trouble - and Animosity is an expert on the topic.