Raspberry Pi "$ rclone sync" out of memory error

I’m not sure if this qualifies as a bug or a limitation of the RPi, but here’s my issue:

Background:

I have a local Linux server that acts as a Samba file server (among a few other tasks). I have two Windows 10 machines that use the Samba share as the backup location for the native Windows backup utility.

My goal is to get away from the old hardware running the current server (super energy-wasteful and loud) and migrate to a few Raspberry Pis, with each service on its own RPi. So: one RPi as DNS/DHCP server, one RPi as file/backup server, one RPi as web server, etc.

Now, to my problem:

I want to back up my backups. My two workstations are backed up locally to the server, which is fine, but I want a copy of those backups off-site. I am currently trying to set up an RPi to run rclone and back up to Backblaze B2.

I understand that the RPi is fairly limited compared to even an old (OLD) recycled server. So I currently run the backups as $ rclone --transfers 2 --bwlimit 250K --no-traverse sync /source backup:destination. This runs fine for a bit but then dies with fatal error: runtime: out of memory.

After reading some other posts, it looks like that error is probably caused by rclone holding the full directory listings in memory to work out what needs to be deleted. --no-traverse has no effect here, because sync has to traverse to work.

So, what are my options here? Or is the rpi just not going to work for what I want?

EDIT: Here are my memtest results:

pi@raspberrypi:~ $ rclone memtest /home/pi/Backups/
2017/01/23 12:20:48 34112 objects took 21864480 bytes, 641.0 bytes/object
2017/01/23 12:20:48 System memory changed from 27929724 to 66219132 bytes a change of 38289408 bytes

pi@raspberrypi:~ $ rclone memtest backup:pconwellBackups
2017/01/23 12:23:33 24021 objects took 8965016 bytes, 373.2 bytes/object
2017/01/23 12:23:33 System memory changed from 29043836 to 42421372 bytes a change of 13377536 bytes

Shot in the dark here, since the main constraints on the RPi are memory/CPU… but…

You could write a bash script (or Python, Ruby, choose your poison) with a recursive function that crawls through the folders of your source and executes an rclone sync for each folder it finds.

It ain’t pretty but it would work.

Let’s say I have this structure:

-- TV
---- Game of Thrones
------ Season 1

Your script would crawl down to the deepest folder (in this case, Season 1), then execute something like:

rclone sync /source/TV/Game\ of\ Thrones/Season\ 1/ destination:Backups/TV/Game\ of\ Thrones/Season\ 1/ --options

You could get fancy and cap the depth so it would stop at Game of Thrones, etc. A sketch of the idea follows.
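Here's a minimal sketch of that idea in bash, using find rather than an explicitly recursive function. The paths, depth, and rclone flags are all placeholders; adjust them to your own layout:

#!/bin/bash
# Sync each second-level folder (e.g. TV/Game of Thrones) on its own,
# so rclone only ever holds one folder's listing in memory at a time.
# SRC, DEST, the depth and the flags are all assumptions.
SRC="/source"
DEST="backup:destination"

find "$SRC" -mindepth 2 -maxdepth 2 -type d -print0 |
while IFS= read -r -d '' dir; do
    rel="${dir#$SRC/}"   # path relative to the source root
    rclone --transfers 2 --bwlimit 250K sync "$dir" "$DEST/$rel"
done

One caveat: files sitting directly in the top level of the source wouldn't be picked up by this, so you'd want an extra pass (e.g. an rclone copy with --max-depth 1) for those.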

The backup job runs on Windows; where are you going to store the backups locally before transferring them to the cloud?

Also, sync deletes files from the target system that are not on the source system, so it would delete old backups from B2. Is that your intent?

The backup job runs on Windows; where are you going to store the backups locally before transferring them to the cloud?

Windows workstation -> local Linux server Samba share -> RPi rclone to B2

The data never ‘lives’ on the RPi. The data is (currently) only on the workstations and the Samba share. The RPi accesses the Samba share remotely and coordinates the backup.

Also, sync deletes files from the target system that are not on the source system, so it would delete old backups from B2. Is that your intent?

I don’t know that it would matter, as the Windows backups are currently set to ‘keep forever’, so there is really no need to delete anything yet. Eventually there may be a need to delete (say, a few years down the road if the Windows backups have ballooned in size).

You could write a bash script…

I guess it could work, but it seems overly complicated and inelegant.

You say you want to get rid of old hardware. What hardware did you want to discard?

You say you want to get rid of old hardware. What hardware did you want to discard?

It’s an old Dell 850. I also have a 750, but it won’t boot for some reason after being in storage for about a year. I haven’t played with it much, so it may be salvageable, but I couldn’t get it working.

Either way, both of them are loud and power-hungry. That said, they are both (well, not so much the 750 now) great machines that will handle just about anything you can throw at them in a small home office / home entertainment setup.

Hmm… so I’m not sure what’s going on. I didn’t change anything, but while doing some troubleshooting I noticed the CPU% and memory usage are both down quite a bit. Free memory is still quite low, but it’s hovering steadily right around 27M. I’m not sure what the memory usage was when it was failing earlier, but presumably it was 0M free.

CPU% is down to around 12.5% and the load average is now significantly less than 1 (around 0.1 or 0.2). This is down from 60%+ CPU and load averages as high as 14. I’m not sure what caused the change, because I’m running the exact same command…

Either way, it looks like it’s working now. I might set up a cron job (via crontab -e) to run the sync every hour and see what happens. Maybe someone can shed some light on this mystery, but my only real guess (and it’s just that, a total wild guess) is that the first few times I ran sync it was building a list of some sort and kept crashing before the list was built. Once the list was built, it no longer needs to hit the CPU as hard or use as much memory.
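For reference, the cron entry I have in mind would look roughly like this. The flock -n part is just there so hourly runs don’t pile up while a long sync is still going; the command and paths are placeholders:

# m h dom mon dow  command
0 * * * * flock -n /tmp/rclone-sync.lock rclone --transfers 2 --bwlimit 250K sync /source backup:destination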

But, honestly, I don’t have a clue. It’s been running for about half an hour and (so far) it seems to be chugging along slowly but surely.

Spoke too soon - it ran for about 6 hours then ran out of memory again. 6 hours sounds impressive, but it’s going to take DAYS to upload all my stuff.

I enabled zram and it’s been running okay since. I’m at the 6 hour mark again and (so far) it hasn’t crashed. Obviously zram is going to eat into the CPU a bit, but the load average and CPU% are still pretty low so I’m not too concerned about it.
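In case anyone wants to try the same thing, this is roughly what enabling zram by hand looks like. I’m sketching the generic steps here; the 256M size is a guess you’d want to tune for your own Pi:

sudo modprobe zram
echo 256M | sudo tee /sys/block/zram0/disksize
sudo mkswap /dev/zram0
sudo swapon -p 100 /dev/zram0   # higher priority than any existing swap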

I’ll keep everyone updated as we go.

I'm currently working on this issue: https://github.com/ncw/rclone/issues/517

It should dramatically reduce the memory usage of rclone. Follow that issue and I'll post a beta soon for everyone to try.

In fact, the new sync algorithm is better than the old one, so I'll probably end up making it the default.

Cool, thanks! I’ll keep an eye on that issue and watch for the beta.

Is there anything special to do when upgrading versions? Or do you just install the new version/beta on top of the old one?

The only thing rclone saves on disk is the config file, which I try to keep backwards compatible, so no special precautions are needed.
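So an upgrade is just dropping the new binary over the old one, e.g. something like this on the Pi (the download URL and install path here are assumptions; match however you installed it originally):

curl -O https://downloads.rclone.org/rclone-current-linux-arm.zip
unzip rclone-current-linux-arm.zip
sudo cp rclone-*-linux-arm/rclone /usr/local/bin/rclone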

I know this is not an elegant solution, but have you tried giving the RPi a USB stick as a swapfile?
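Something like this would do it, assuming the stick is formatted with a Linux filesystem (e.g. ext4) and mounted at /mnt/usb; the size and paths are placeholders:

sudo dd if=/dev/zero of=/mnt/usb/swapfile bs=1M count=1024   # 1 GB swapfile
sudo chmod 600 /mnt/usb/swapfile
sudo mkswap /mnt/usb/swapfile
sudo swapon /mnt/usb/swapfile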