Reduction of memory use on mount

Is there a preferred block size when caching mounts to reduce memory use? I’m trying to run on a 3GB RAM VPS and rclone is often eating 2GB or more on a crypted, cached Google Drive remote.

What are your command line flags? I suspect you have a large --buffer-size. Rclone allocates this per open file so you could reduce it.

 rclone mount google-media-cached: /data/cloud --cache-tmp-wait-time=10m --cache-tmp-upload-path=/home/media/.cache/rclone/cache-upload --cache-chunk-no-memory --cache-writes --allow-other --vfs-cache-mode minimal -v

Default, I’d imagine. I’ll reduce it and try again.

The default is 16MB, which isn’t massive, so maybe that isn’t the problem.
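
Still, it’s easy to rule out: you can set it explicitly on the mount, e.g. (your other flags trimmed for brevity):

 rclone mount google-media-cached: /data/cloud --buffer-size 0 --vfs-cache-mode minimal -v

Setting --buffer-size 0 disables the in-memory read-ahead buffering entirely, so if memory use stays high with that, the buffer isn’t the culprit.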

How are you measuring the memory? Is it RSS or VSZ?
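
On Linux you can see both with ps:

 ps -o pid,rss,vsz,comm -C rclone

RSS is what’s actually resident in RAM; VSZ counts everything mapped into the address space, so it’s usually much larger and less meaningful here.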

RSS in this case. It usually sits around 1GB, but on something heavy like a Radarr library scan it will occasionally hit 2GB.
This then thrashes the swap as other processes get pushed there, and Sonarr/Radarr grind to a halt.

If it’s just expected behaviour that’s fine; I’m just hopeful there’s an easy partial mitigation.

Just noticed an outstanding bug in Radarr that may have caused it (to be fair to the test, I’ve disabled Radarr and rebooted the machine).
It seems it was trying to rename all the files on startup, and regularly as a scheduled task, even if they were already named correctly.

I’ll monitor it without Radarr and see if memory use goes up.

It is unexpected behaviour to me… It would be interesting if you could find a cause.

I can confirm that rclone mount can use huge amounts of RAM when lots of small files are read in a short time. After a while the memory usage goes down again, but I have the impression that it stays higher than before the files were opened (up to hundreds of MB more).

I’m pretty sure that is to do with the --buffer-size setting… If you set --buffer-size 0 then I expect it won’t. What I could do with is a better way of getting the async buffer code to re-use buffers. It uses a pool which doesn’t seem to be super effective at returning the memory in a timely fashion.

Perhaps a manually managed pool of buffers might be better.
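
Something along these lines, maybe. This is just a sketch (the names are made up, not rclone’s actual code): a fixed-capacity free list that drops excess buffers so the GC can hand the memory back promptly, rather than leaving reclamation to the pool’s own timing.

 package pool

 import "sync"

 // bufferPool keeps at most maxFree spare buffers; anything beyond
 // that is dropped on Put so the GC can reclaim it promptly.
 type bufferPool struct {
     mu      sync.Mutex
     free    [][]byte
     bufSize int
     maxFree int
 }

 func newBufferPool(bufSize, maxFree int) *bufferPool {
     return &bufferPool{bufSize: bufSize, maxFree: maxFree}
 }

 // Get reuses a buffer from the free list, or allocates a fresh one.
 func (p *bufferPool) Get() []byte {
     p.mu.Lock()
     defer p.mu.Unlock()
     if n := len(p.free); n > 0 {
         buf := p.free[n-1]
         p.free[n-1] = nil // clear the slot so nothing pins the buffer
         p.free = p.free[:n-1]
         return buf
     }
     return make([]byte, p.bufSize)
 }

 // Put returns a buffer to the free list, dropping it when full.
 func (p *bufferPool) Put(buf []byte) {
     p.mu.Lock()
     defer p.mu.Unlock()
     if len(p.free) < p.maxFree {
         p.free = append(p.free, buf)
     }
 }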

I did think of changing over to using large buffers allocated with mmap and anonymous mappings. These have the advantage that they are perfectly reclaimable by the OS. We’d need to make sure that we don’t allow the mmapped memory to leak out into slices though. However, the asyncreader only implements Read/ReadAt, which copy memory.
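
Roughly this sort of shape, as a Unix-only sketch (the real thing would need portability and care that the slice never escapes the reader):

 package main

 import (
     "fmt"
     "syscall"
 )

 // allocBuffer takes pages from an anonymous mapping instead of the Go
 // heap; freeBuffer hands them straight back to the OS via Munmap.
 func allocBuffer(size int) ([]byte, error) {
     return syscall.Mmap(-1, 0, size,
         syscall.PROT_READ|syscall.PROT_WRITE,
         syscall.MAP_ANON|syscall.MAP_PRIVATE)
 }

 func freeBuffer(buf []byte) error {
     return syscall.Munmap(buf)
 }

 func main() {
     buf, err := allocBuffer(16 << 20) // one buffer at the 16MB default
     if err != nil {
         panic(err)
     }
     copy(buf, "chunk of data")
     fmt.Println(string(buf[:5]))
     // The slice must never be used after Munmap, which is exactly why
     // the mmapped memory must not leak out into long-lived slices.
     if err := freeBuffer(buf); err != nil {
         panic(err)
     }
 }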