Out of Memory kill process rclone

Items started randomly disappearing from my Plex server, so I was digging around in the logs and found this in the kernel log:

Out of memory: Kill process 8384 (rclone) score 683 or sacrifice child

Not sure if it's related. How can I fix this error and rule out Plex issues?

It means you have a config that's using too much memory.

What's your mount command?

[Unit]
Description=RClone Service
Wants=network-online.target
After=network-online.target

[Service]
Type=notify
#Environment=RCLONE_CONFIG=/opt/rclone/rclone.conf
KillMode=none
RestartSec=5
ExecStart=/usr/bin/rclone mount vault: /mnt/gvault \
--allow-other \
--attr-timeout 1000h \
--buffer-size 1G \
--dir-cache-time 1000h \
--log-level INFO \
--log-file /opt/rclone/logs/gvault-service.log \
--poll-interval 15s \
--timeout 1h \
--umask 002 \
--user-agent vaultapp \
--rc \
--rc-addr 127.0.0.1:5572
ExecStop=/bin/fusermount -uz /mnt/gvault
Restart=on-failure
User=root
Group=root

[Install]
WantedBy=multi-user.target

You are using up to 1GB of buffer per open file, so you are exhausting the memory on your server.

@Animosity022 I haven't had issues with that before, so what should I change it to?

Also, Plex is just randomly dropping TV shows now. I'm down to 6 when I actually have close to 200. Can you help with this too?

What is the memory on your system? You'd want to lower the buffer size to something that can handle how many open files you have.

I really don't know why it is common to see people set their buffer-size to such absurd values. I think this is some sort of misunderstanding where people have just been copying a setup without really understanding what they were copying. There are extremely few scenarios, if any, where a 1G buffer would actually be beneficial. It also uses up to 1G per open transfer, so if a few clients suddenly access a dozen files in the same timeframe, that's 12G of memory, and it's really no wonder you get an OOM. Since this varies with usage, it can work fine for a while and then OOM crash when several large files happen to be accessed at once.

The default is 16M, which is usually more than enough for general use. On a really fast connection you might want something slightly larger, but even with loads of free memory I would rarely go beyond 64M or 128M.

The main objective of the buffer is to make sure that bandwidth can be utilized effectively even if the destination is temporarily busy for a short time (like an HDD that is multitasking other reads/writes and has very limited onboard cache). Enough buffer to account for a second or so of your bandwidth is ample for that general task.

The secondary purpose (which is more of a side effect, really) is that you effectively pre-fetch data into the buffer. This can be useful as a little added padding against a potential stutter in media use if traffic happens to spike for whatever reason, but again, a couple of seconds' worth of the bitstream is more than enough. A huge buffer can just as easily be detrimental by prefetching far too much data. A 1GB buffer on, say, a 30Mbit video would prefetch almost 5 minutes of it. Not only is that orders of magnitude beyond what would actually be helpful, it also means you waste a lot of bandwidth any time you open a file but don't intend to watch it from beginning to end. I guess some people try to overcompensate with a huge buffer because they have other issues that cause stuttering, but that won't really solve anything.
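To illustrate the prefetch arithmetic above, here is a quick sketch of how much playback time a full 1 GiB buffer represents for a 30 Mbit/s stream (the bitrate is just the example value from the post):

```shell
# How many seconds of a 30 Mbit/s video fit in a 1 GiB buffer?
buffer_bytes=$((1024 * 1024 * 1024))   # 1 GiB in bytes
bitrate_bps=$((30 * 1000 * 1000))      # 30 Mbit/s in bits per second
seconds=$((buffer_bytes * 8 / bitrate_bps))
echo "${seconds} seconds of video"     # ~286 s, i.e. almost 5 minutes
```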

TL;DR: the maximum reasonable buffer-size is "large enough to buffer a handful of seconds' worth of your bandwidth". The math to determine this is simple, but if you are unsure, just tell me your download speed and I will help find a reasonable value.
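The rule of thumb above boils down to "link speed divided by 8, times a few seconds". A minimal sketch, using a 1 Gbit/s line and one second of headroom as the assumed inputs:

```shell
# buffer-size ~= (link speed in MB/s) * (seconds of headroom)
link_mbit=1000                     # assumed: gigabit connection
seconds=1                          # assumed: one second of headroom
mb=$((link_mbit / 8 * seconds))    # Mbit/s -> MB/s
echo "--buffer-size ${mb}M"        # prints: --buffer-size 125M
```

125M lands right next to the 128M figure discussed below, which is why that value is a sensible ceiling for gigabit.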

Memory: 15 GB. I'll try lowering the buffer size. @thestigma, you're right, I was using a copy-paste (from @Animosity022's GitHub repo, haha). I appreciate the info. Can you explain what a higher or lower buffer size would do? I have 15 GB of RAM on the Plex machine. Network is a gigabit fiber connection.

Edit: I re-read the explanation and it makes a little more sense to me now.

Gigabit is very fast, so 128M would equate to about a second of maximum speed.
That is probably more than you need, but if you have 15GB RAM then it is perfectly reasonable, I think. You could still handle several dozen streams if needed, though it's reasonable to expect that won't happen anyway.
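Applied to the unit file earlier in the thread, the only line that needs to change is the buffer flag. A sketch of the edit, with 128M being the gigabit-appropriate value suggested above rather than a hard rule:

```
ExecStart=/usr/bin/rclone mount vault: /mnt/gvault \
--allow-other \
--buffer-size 128M \
...
```

After editing the unit, remember to run `systemctl daemon-reload` and restart the service for the new flag to take effect.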

The primary reason for buffers in computing generally is to smooth out a data stream. Sometimes sending or receiving cannot be done 100% of the time. Having a small amount of instantly accessible storage (RAM in this case) greatly improves efficiency because it more or less removes the penalties of software or hardware that multitasks (which all modern computers do, of course).

The reason you don't think about buffers is that "everything" uses them, because they are such a great benefit for a low investment. Every HDD has some internal RAM, for example. Whatever you play your video in has its own RAM cache, etc. Sometimes that means rclone's buffer is realistically superfluous and setting it to 0 would make no difference, but that's why it's configurable: you can choose to be a little more generous than the minimum needed. In general it never hurts to have more than you need as long as you don't go completely overkill (like 1G...). The key point is that you typically don't need much. Just enough to smooth out the wrinkles.


This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.