Out of Memory after moving to RPi 8 GB and SSD

Hello all -

@asdffdsa
I tried --dropbox-batch-mode off but still getting Out Of Memory error.

So I switched back to the 2 GB Raspberry Pi, 32-bit OS,
rclone v1.59.2,
with the SSD,
and I am still getting the 'Out Of Memory' error.

The only difference now is the drive: before it was a SATA hard drive, now it is an SSD.

So either the SSD is causing this error (and I cannot revert to the old hard drive for testing), or something has changed at the Dropbox end within the last week.

The only thing that delays the memory error is the --check-first flag.

Can you confirm your dropbox config for rclone please? In particular what is the value of chunk_size?

I see your command is this, which looks quite vanilla.

rclone: Version "v1.59.2" starting with parameters ["rclone" "sync" "/mnt/store/snapshot-20221015_090342" "remote:mac-backups-raspberrypi" "--log-level" "DEBUG"]

You can make rclone use less memory by decreasing chunk_size and decreasing transfers.

  --dropbox-chunk-size SizeSuffix   Upload chunk size (< 150Mi) (default 48Mi)
  --transfers int                   Number of file transfers to run in parallel (default 4)
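Putting that advice together with the logged command, a lowered-memory invocation could look like this (the 20Mi and 2 are illustrative values, not recommendations; paths and remote name are taken from the log above):

```shell
rclone sync /mnt/store/snapshot-20221015_090342 remote:mac-backups-raspberrypi \
  --dropbox-chunk-size 20Mi \
  --transfers 2 \
  --log-level DEBUG
```

Rough upper bound on upload buffer memory is chunk_size × transfers, so 20Mi × 2 is far smaller than the default 48Mi × 4.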

However you have 8 GB of RAM so this sounds like it is a more general problem.

A 32-bit OS on an 8 GB Pi should be able to use 2 GB or 4 GB of RAM depending on exactly how the kernel was compiled, which should be ample for rclone.

You can run with --rc and use the memory profiling tools - I don't think there is a memory leak but perhaps there is some interaction between the 32 bit OS and the Go runtime.
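For reference, the profiling workflow looks roughly like this (source path and remote are placeholders; the rc API listens on localhost:5572 by default):

```shell
# Start the sync with the remote control API enabled
rclone sync /src remote:dst --rc

# In another shell: dump the Go runtime memory statistics
rclone rc core/memstats

# Or fetch a heap profile with the Go tooling
go tool pprof http://localhost:5572/debug/pprof/heap
```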

I have done the memory profiling and there didn't seem to be any leak issue.
I will try the above two options and see if that helps.

Thanks


Before your above message, I had just --check-first and the load ran for 1h19m, uploaded 59 GB, then gave the Out of Memory error.

After your recommendation, I changed it to --check-first --dropbox-chunk-size 20Mi --transfers 2 and it ran for 2h20m and uploaded 80 GB.
This seems to have done the trick = PASS.

Then I also tried --check-first --dropbox-chunk-size 20Mi --transfers 6, increasing the transfers to get better throughput.
This worked as well: it ran for 1h42m and uploaded 107 GB.

Thanks a lot.

I'm hoping to move to 64-bit Raspbian soon.


I see you sometimes use --check-first and sometimes not, so just in case you missed @asdffdsa's initial post: --check-first may increase the memory usage:

Using this flag can use more memory as it effectively sets --max-backlog to infinite. This means that all the info on the objects to transfer is held in memory before the transfers start.

More info here: https://rclone.org/docs/#check-first

I therefore suggest you omit --check-first unless you have a very specific need and then you may be able to reduce memory usage further by lowering --max-backlog, the default is 10,000. More info here: https://rclone.org/docs/#max-backlog-n
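For example, dropping --check-first and capping the backlog might look like this (2000 is an arbitrary example value; paths and remote are taken from the command logged earlier in the thread):

```shell
rclone sync /mnt/store/snapshot-20221015_090342 remote:mac-backups-raspberrypi \
  --max-backlog 2000 \
  --dropbox-chunk-size 20Mi \
  --transfers 2
```

With --max-backlog set, rclone holds at most that many queued objects in memory instead of the whole listing.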

Hi @faridv,

I am trying to make a fix for the issue you discovered in install.sh earlier in this thread.

Do you know the meaning of the l (lower case L) in the end of armv7l (returned from uname -m)?

I am asking to determine if rclone-current-linux-arm-v7.zip should be downloaded for everything starting with armv7 or just for armv7l.

Please excuse me if this is a very naïve question, I use Windows most of the time.

Thanks, Ole

I think the suffix after the v7 is probably not important.
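So the install.sh match could be sketched as a glob on the armv7 prefix, something like this (only the arm-v7 zip name comes from this thread; the helper name and the fallback are illustrative):

```shell
# Map `uname -m` output to a download name, matching any armv7
# variant (armv7l, armv7hl, ...) rather than armv7l alone.
arch_to_zip() {
  case "$1" in
    armv7*) echo "rclone-current-linux-arm-v7.zip" ;;
    *)      echo "unhandled: $1" ;;
  esac
}

arch_to_zip "$(uname -m)"
```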


Thanks, here is a pull request ready for review:


This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.