This completely depends on your configuration.
It can use almost nothing (maybe 20MB at baseline) - or with certain settings it can balloon to many gigabytes in extreme cases.
So I would need to see your full rclone.conf as well as the specific commands you run to give you pointers.
!!warning!! rclone.conf can contain sensitive information like clientID, clientSecret, token and crypt keys. These need to be [REDACTED] from the text before you post it - for your own safety and privacy.
Once I can see the setup I can point to specific things that use a non-trivial amount of memory, and you can trim those settings down to your needs.
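For example, a redacted remote entry might look something like this (the remote name and field layout here are just illustrative):

```
[my-s3-remote]
type = s3
provider = AWS
access_key_id = [REDACTED]
secret_access_key = [REDACTED]
```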
The most common big memory eaters are a combination of many concurrent transfers and large chunk sizes. These can be good for performance in certain situations, but they also use a lot of memory. It is certainly possible to get decent performance with a low-memory setup though. I run rclone on Google VM micro-instances all the time, and they are not exactly overflowing with memory.
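As a rough illustration (the flag values below are just examples of the knobs to trim, not tuned recommendations, and the remote/path names are placeholders), a low-memory invocation could look like:

```shell
# Illustrative low-memory settings: fewer parallel transfers,
# fewer checkers, and small per-transfer buffers/chunks.
rclone copy source:path dest:path \
  --transfers 2 \
  --checkers 4 \
  --buffer-size 16M \
  --s3-chunk-size 5M
```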
Are you running in a VM or something, since it's so low and you say 1 or 2 ...? If so, what is the host OS? I can perhaps offer a good alternative to look into as well (even if that is a side discussion to this topic).
So it's basically a completely stock setup with no special config or flags used?
If so, it is very curious that you use so much RAM, as the stock configuration defaults are quite conservative.
Do you have any hard numbers on how much rclone uses? (like with the "top" command). I just want to make sure it's actually rclone using up your memory here...
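For example, to check rclone's memory specifically rather than the overall system usage (assuming a Linux box):

```shell
# Show rclone's resident set size (RSS, in KB) while it is running:
ps -o pid,rss,comm -C rclone

# Or watch just the rclone process interactively:
top -p "$(pidof rclone)"
```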
Also - does this problem occur both when using google photos and AWS?
EDIT: Oh and one more thing - please check your rclone version using rclone version
A common problem is that people download from repositories that have versions literally years out of date... it's not even worth trying to troubleshoot until we know you have a relatively new version. Honestly, it might be that simple.
Current version is 1.50.1
Easiest way to install latest version:
curl https://rclone.org/install.sh | sudo bash
EDIT: need some rest now. Will check back tomorrow
Yep, I updated to the latest version. Yep, stock setup. I'd been using rclone to sync our photos via Google Drive until July of this year when Google decided to break that linkage. It took me this long to get around to reconfiguring to use Google Photos directly.
Not really sure how to record Top's output over time... but it only takes a few seconds for rclone to burn through the RAM. My command looks like this: rclone --no-traverse --log-file=rclone.log copy source dest
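In case it helps, one way to capture top's output over time is batch mode (assuming a Linux box with the usual procps top):

```shell
# Sample the rclone process every 2 seconds, 30 times, into a file:
top -b -d 2 -n 30 -p "$(pidof rclone)" > top.log
```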
If that still runs out of memory when using AWS I can't imagine this is the fault of the settings.
You may want to re-create the problem while you run a debug log for us. This will give us some insight into what rclone is actually doing at the time. Use these flags:
then while that is running, provoke the problem. Then post the log file. Some names of files and folders may leak in the log, but that's about the extent of the security concerns for a debug log.
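(The exact flag list from the original post wasn't preserved above, but rclone's usual debug-logging combination is `-vv` plus `--log-file`, along these lines - remote/path names are placeholders:)

```shell
# Maximum-verbosity debug output, written to a file for later posting:
rclone copy source:path dest:path -vv --log-file=rclone-debug.log
```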
I couldn't think of an easy way to test, because I only want to write from GP to S3. This morning, though, I wrote from both to a local directory, and both worked. So, should we expect copying or syncing from cloud to cloud to be more memory intensive than cloud to local?
Another thing I did just now was spin up a much larger instance on EC2, this time with 16GB of RAM. This was a completely bare install of Amazon Linux and the only things I added were rclone and htop. Watching htop I could see that memory usage peaked around 5GB (curiously, with --no-traverse set, it peaked at 8GB), and I was able to transfer all of my files (only hundreds per directory) without a hitch in just a few minutes.
Tested again on the machine with only 1GB RAM... I can transfer to a local folder from GPhotos, I can transfer from a local folder to S3, but if I try to transfer from GPhotos to S3 it runs out of memory so quickly that even 1 JPG (500k) is enough to break it.
No, a remote-to-remote transfer should if anything be even less memory intensive, or at least not require any more.
5GB is completely abnormal with default settings and should not be possible as far as I know - and even more abnormal is this "one .jpg is enough to break the memory" scenario that you mention.
Did you specifically confirm that it was the "rclone" process using the memory (you weren't super specific about that)? It would be good to nail this down, just to make sure it isn't something other than rclone itself causing the problem... I'd prefer that to just looking at the overall memory usage. This should be easily visible in htop.
I must assume this is some form of internal bug. Time to call in the big guns for some advice... @ncw Can you comment on this? What information do you need to debug?
@seancamden He is likely to ask for that debug log I mentioned, so you might want to get ahead of that to make this process resolve faster EDIT: thanks! I saw you added it now.
I looked at the log and it is failing very early, so something seems very badly wrong - although I am not quite qualified enough here to speculate on the exact cause, except that I expect an internal bug.
That you got the stack trace in the log is excellent, and NCW should be able to get very good information out of that. Now we just wait for a response. He usually answers within a day, but he is out traveling now, so have a little patience for his reply.
Well, that's a good question. Google gives us things that look like directories: "media/by-month", "media/by-year", "media/all" and so on. But they also kind of look like aliases, right? If we consider the path "camden-family-google-photos:media/by-month/2000/2019-08" to represent a directory, there's only about 300 files in it. But if that's just an alias (or something) that sorts a subset of my whole photo collection, then we could be talking about a larger number of files.
This sounds like a bug to me! An rclone copy should use very little memory (I tried a similar copy here and rclone used 35MB of RAM). The biggest memory use will be if you have a directory with millions of files, but I don't think that is the case.
Are you running on a 64 bit machine? I guess it is possible rclone ran out of address space.
What does top say about the memory used on the machine?
Try with --use-mmap --buffer-size 0M - I don't think that will fix it but let's try narrowing things down.
As you noted above these directories are totally synthetic! Rclone only parses the last element of the path which has all the info in it.
It is because we don't know the size of the google photos when we upload them, and the s3 uploader therefore sizes its chunks to allow the maximum possible file size, given that you can only have 10,000 chunks. The chunk size it chooses is 525 MB, for a max file size of 5 TB.
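The arithmetic behind that figure: S3's maximum object size is 5 TiB and a multipart upload allows at most 10,000 parts, so for an unknown-size stream the uploader has to assume the worst case:

```shell
# 5 TiB expressed in MiB, divided across the 10,000-part limit:
echo $(( 5 * 1024 * 1024 / 10000 ))  # -> 524 MiB, rounded up to ~525 MB per chunk
```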
s3: Reduce memory usage streaming files by reducing max stream upload size
Before this change rclone would allow the user to stream (eg with
rclone mount, rclone rcat or uploading google photos or docs) 5TB
files. This meant that rclone allocated 4 * 525 MB buffers per
transfer which is way too much memory by default.
This change makes rclone use the configured chunk size for streamed
uploads. This is 5MB by default which means that rclone can stream
upload files up to 48GB by default staying below the 10,000 chunks limit.
This can be increased with --s3-chunk-size if necessary.
If rclone detects that a file is being streamed to s3 it will make a
single NOTICE level log stating the limitation.
This fixes the enormous memory usage.
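The 48GB figure in that commit message follows from the same 10,000-part limit, just with the new default 5 MiB chunk size:

```shell
# 5 MiB per chunk * 10,000 parts, expressed in GiB:
echo $(( 5 * 10000 / 1024 ))  # -> 48 GiB maximum streamed upload by default
```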