Random crash eating all memory

Hello, I have an issue with rclone on an unRAID setup.

What is the problem you are having with rclone?

Rclone randomly crashes after using a lot of resources (I see 100% of available memory used), making my rclone mount disappear.

What is your rclone version (output from rclone version)

rclone-beta 2019.06.24 --> rclone v1.48.0-099-gc2635e39-beta (latest available on unRAID)

Which OS you are using and how many bits (eg Windows 7, 64 bit)

unRAID 6.7.2 (latest); it's a Linux-based OS.

Which cloud storage system are you using? (eg Google Drive)

Google Drive (G Suite)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone mount --allow-other --buffer-size 256M --dir-cache-time 72h --drive-chunk-size 512M --fast-list --log-level INFO --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &

and

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude fuse_hidden --exclude _HIDDEN --exclude .recycle* --exclude .backup~ --exclude .partial~ --delete-empty-src-dirs --fast-list --bwlimit 8000k --tpslimit 3 --min-age 30m

However, it is the mount that is crashing; the upload continues without issue.

I am using the following scripts to run the whole setup:

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

Can I put the -vv on the mount command? I use it encapsulated in a script (https://github.com/BinsonBuzz/unraid_rclone_mount/blob/master/rclone_mount), so where will I see the logs? (Total newbie on Linux.)

Here is a screenshot of the issue during a crash:

Thanks !

I'd guess the problem is that you are using 256M per open file and running out of memory, as you barely have 2G of memory on the system.

You'd want to reduce the buffer size to something much smaller like 16M or 32M at most.
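To put rough numbers on it, each open file can buffer up to --buffer-size, so worst-case read-buffer memory is roughly open files × buffer size. The open-file count below is a hypothetical illustration, not taken from OP's system:

```shell
# Worst-case read-buffer memory: open_files * buffer_size.
# 40 simultaneously open files is an assumed, illustrative number.
open_files=40
echo "at 256M: $(( open_files * 256 )) MB"   # 10240 MB - enough to exhaust 12 GB
echo "at 32M:  $(( open_files * 32 )) MB"    # 1280 MB - comfortable headroom
```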

That's 12 GB of total memory I believe, not 2 GB.

I had to zoom in and that seems right, but the message is still the same: OP is running out of memory due to open files * buffer size.

This issue happens without any files open, so it's not a buffer issue...
I have 12 GB, and the only thing the server is doing right now is uploading; no media playback at all.

When I took this screenshot the server was uploading. I was watching the upload logs, and suddenly I saw memory usage climb from 70% to 100% over about a minute. After that, the rclone mount crashed and memory usage dropped back to 70%.
The upload never stopped and continued without issue, but I had to restart the mount.

Edit: Oh, I think I know: could this be due to Plex doing its indexing/thumbnail creation --> opening a lot of connections??

Sure, that would be a good thing to turn off.

The mount uses open files * buffer size. Rclone itself does not open files; the applications using the mount do.

Ok thanks

I purchased 8 GB of RAM that I will add to the server, and hopefully this issue will be solved!
I will update this post if it solves the issue.

It's probably a better solution just to dial down your buffer-size, at least to see how that affects things before you go investing in even more RAM. Chances are that you aren't actually getting that much benefit out of such a large buffer size. Remember the default is only 16M, and that is usually fairly adequate. 16 times more is a lot...
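For example, OP's mount command with just the buffer reduced to 32M (a sketch for illustration; all other flags kept exactly as posted):

```shell
rclone mount --allow-other --buffer-size 32M --dir-cache-time 72h \
  --drive-chunk-size 512M --fast-list --log-level INFO \
  --vfs-read-chunk-size 128M --vfs-read-chunk-size-limit off \
  gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
```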

I have a very similar experience using rclone on Unraid.
If I mount with --buffer-size and --drive-chunk-size, everything works fine unless I try to upload using rclone move. The upload then fails with an error message saying --vfs-cache-mode writes is needed. When I add --vfs-cache-mode writes to the config, it floods the RAM and doesn't respect the limit set by --vfs-cache-max-size; it pushes the complete file into the cache, so I had to put the cache dir on a drive instead of RAM, since I only have 16 GB.
In short:
vfs not activated: stable, but upload doesn't work properly
vfs activated: RAM gets flooded

This caches to disk and does not write to memory. If you are copying a file that's bigger than your max there, it's still going to work, I believe.

Buffer-size * open files is what impacts memory, not --vfs-cache-mode writes, as that's disk caching for write operations.
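If you do need --vfs-cache-mode writes, you can point the cache at a disk path rather than RAM-backed storage using --cache-dir. A sketch (the cache path and size limit here are hypothetical examples, not from the thread):

```shell
rclone mount --vfs-cache-mode writes \
  --cache-dir /mnt/user/rclone_cache \
  --vfs-cache-max-size 10G \
  gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs &
```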

It seems that lowering the buffer-size to 128M solved the issue!

Thanks guys

This is basically all due to an improper setup / user error though.

If you run software like Plex on a cloud drive and you do NOT disable functions that hit the storage really hard, then rclone will just execute the instructions the software asks for. And if it asks to simultaneously read from dozens, if not hundreds, of files... well, that works on a local HDD, but not so much on a cloud drive.

  • each open file will get a "buffer size" quota of RAM to use.
  • --vfs-cache-max-size does not hard-block the cache from going over that size. It WILL exceed it if it has to (the only other alternative would be to error or crash). It will, however, try to shrink the cache as soon as it can - i.e. as soon as files are no longer being actively accessed by processes.

Rclone does not open files. The (other) software accessing the VFS does - that's the real problem.
The reason a buttload of large files gets pulled into the cache is often that something like Plex (or Windows, for that matter) tries to generate thumbnails or do analysis. At worst this can involve reading the entire file. Doable on local storage; not so much on a cloud drive.

TLDR: Reconsider your settings in whatever software is accessing all these files to solve the issue

Off-topic question: I have set a speed limit of 8M for the upload, but I get two speed readings, and the overall one is much lower than the per-file speeds. Is this normal?

Transferred: 468.430G / 3.377 TBytes, 14%, 5.179 MBytes/s, ETA 6d20h11m50s
Errors: 768 (retrying may help)
Checks: 3378 / 3380, 100%
Transferred: 1074 / 11079, 10%
Elapsed time: 25h43m42.9s
Transferring:

  • tv_shows/xxxxxxxxxxxxxxxxxxxx.mkv: 12% /1.780G, 3.915M/s, 6m45s
  • tv_shows/xxxxxxxxxxxxxxxxxxxx.mkv: 45% /1.609G, 3.913M/s, 3m50s

3.913 + 3.915 = 7.828 --> close to the 8M/s limit, so why is the overall figure 5.179M/s?
At 8M/s I would need about 4 days to upload 3.377 TB, not 6+ days.

That overall speed is actually an "average since the rclone process started" speed.
For a long-running process, that start might be many hours ago, hence the quite low average.
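You can check the arithmetic from the stats above: 468.430 G transferred over an elapsed 25h43m42.9s works out to exactly the overall figure shown:

```shell
# Overall speed = total transferred / total elapsed time.
elapsed=$(( 25*3600 + 43*60 + 42 ))                       # ~92622 seconds
awk -v gib=468.430 -v s="$elapsed" \
  'BEGIN { printf "%.3f MBytes/s\n", gib * 1024 / s }'    # prints 5.179 MBytes/s
```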

I agree this can be a little confusing, but you pretty much have to look at the totals of the current transfers. That is your real "right now" bandwidth through rclone. I think the -P progress indicator code wasn't quite designed with a permanently running process (like a mount) in mind, but rather with processes like a sync, which have a clear start and stop point. At some point we may get an overhauled version that is more tailored to this use case and less confusing.

You can confirm this rather easily with any bandwidth monitor.

Thanks for clarifying this !

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.