Dear community,
is there a way to see Transfers, Checkers and buffer usage?
It would help a lot in tuning these options to achieve the best transfer rate.
For example, how many Transfer threads are working / idle?
Are there enough Checkers to keep the Transfer threads supplied with work?
How many Checkers are working / idle?
What is the real buffer usage - should I increase it, or reduce it to save memory?
Memory drain is significant - around 1 GB per 10 minutes of work while copying 1M small files from S3 to S3; file size is ~1 MB.
Thank you.
asdffdsa (jojothehumanmonkey), May 22, 2025, 12:14pm
welcome to the forum,
first step is to look at the debug log.
hard to know why, as none of the questions in the help and support template were answered.
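As a starting point for the original question about watching Transfers / Checkers live: one option is to start the copy with rclone's remote-control API enabled (`--rc`, plus `--rc-no-auth` for a local unauthenticated endpoint) and poll `core/stats`. Below is a minimal sketch, not from the posts above; it assumes a local rclone and treats the stats field names defensively, so check the output of `rclone rc core/stats` on your version.

```python
# Sketch: poll rclone's remote-control stats while a copy is running.
# Assumes the copy was started with:  rclone copy src: dst: --rc --rc-no-auth
# "rclone rc core/stats" returns the global transfer stats as JSON.
import json
import subprocess
import time

def poll_stats(interval: float = 10.0) -> None:
    while True:
        out = subprocess.run(
            ["rclone", "rc", "core/stats"],
            capture_output=True, text=True, check=True,
        ).stdout
        stats = json.loads(out)
        # Field names as reported by core/stats; access them defensively.
        active = stats.get("transferring") or []
        print(
            f"transfers={stats.get('transfers', 0)} "
            f"checks={stats.get('checks', 0)} "
            f"active={len(active)} "
            f"speed={stats.get('speed', 0):.0f} B/s"
        )
        time.sleep(interval)

if __name__ == "__main__":
    poll_stats()
```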
The linked GitHub issue (opened 25 Jul 2024, closed 8 Apr 2025, labelled "bug") reads:
## Background - Amazon S3 rclone problems
I'm trying to back up a datalake with 100 million files at the root. They are mostly small files < 1 MB.
rclone was simply not designed for this use case and will eat up all available memory and then crash. There was no machine instance I could throw at it that would fix this issue. Even running locally in a Docker container would eat up all available memory and then crash.
All advice in the forums did nothing to help the situation. And a lot of people seem to be running into this. Therefore I wanted to post this here so that anyone searching for this problem can try our solution.
## Solution in a nutshell: PUT YOUR FILES INTO FOLDERS!!!!
What's interesting is the behavior: rclone would never start transferring files; it would sit there reporting 0 files transferred and 0 bytes transferred, eating up all available memory before crashing with an OOM.
I tried all the suggestions in the forums - reducing the buffer memory, reducing the number of checkers and transfers. Nothing worked.
## Cause & Fix
Without looking at the code or doing any profiling, my hypothesis was that rclone scans all files in a "directory" into RAM before executing on it. This seems to be true whether or not `--fast-list` is used.
Obviously, having 100 million files at the root was causing our org a whole bunch of problems anyway and it's been something that I've wanted to fix for a while, so this problem gave me enough reason to go ahead and re-organize our entire datalakes.
Since each file is referenced in our database with a datestamp, I was able to write Python scripts that move these files from the root into folders by service and year-month (for example, name.html -> service/2023-04/name.html).
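A minimal sketch of the kind of reorganization script described above, assuming boto3; the bucket name and the `prefix_for()` lookup are placeholders standing in for the author's database-driven logic:

```python
# Sketch of the reorganization step described above: move each object from the
# bucket root into service/<year-month>/<name>.  Assumes boto3 credentials are
# configured; prefix_for() stands in for the database lookup described above.
import boto3

s3 = boto3.client("s3")
BUCKET = "my-datalake"          # placeholder bucket name

def prefix_for(key: str) -> str:
    """Placeholder: look up the object's service and datestamp in the database."""
    return "service/2023-04"    # e.g. name.html -> service/2023-04/name.html

def move_root_objects() -> None:
    paginator = s3.get_paginator("list_objects_v2")
    # Delimiter="/" restricts the listing to keys sitting at the bucket root.
    for page in paginator.paginate(Bucket=BUCKET, Delimiter="/"):
        for obj in page.get("Contents", []):
            key = obj["Key"]
            new_key = f"{prefix_for(key)}/{key}"
            # S3 has no rename: copy to the new key, then delete the original.
            s3.copy_object(
                Bucket=BUCKET, Key=new_key,
                CopySource={"Bucket": BUCKET, "Key": key},
            )
            s3.delete_object(Bucket=BUCKET, Key=key)

if __name__ == "__main__":
    move_root_objects()
```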
This worked extremely well, and I was able to run rclone and have it at least start transferring some files. However, there were still folders with 5+ million files, and I eventually ran into the same out-of-memory error.
So I further re-organized the files in our datalake into service/yrmo/day, and that seems to have done the trick. rclone now consistently runs under 2 GB of memory, and I've been able to increase the number of transfers and checkers to 100 each with 3 MB of buffer per transfer.
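For reference, that tuning corresponds to rclone's `--transfers`, `--checkers` and `--buffer-size` flags. A minimal sketch of such an invocation, wrapped in Python; the remote and bucket names are placeholders, not from the issue:

```python
# Sketch of the tuning described above, launched from Python.
# --buffer-size is per transfer, so 100 transfers x 3M is roughly 300M of buffers.
import subprocess

cmd = [
    "rclone", "sync",
    "s3-src:datalake", "s3-dst:datalake-backup",   # placeholder remotes
    "--transfers", "100",
    "--checkers", "100",
    "--buffer-size", "3M",
    "-v", "--stats", "60s",
]
subprocess.run(cmd, check=True)
```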
## Dead ends
All the advice about adjusting memory buffers and the number of transfers is mostly wrong. Those settings only cut your minimum memory usage by a constant factor and do very little to prevent the unbounded memory rclone uses for extremely large "directories".
If you have this same problem, no amount of setting tweaking will work... you MUST re-organize your data into folders or rclone will run out of memory every single time. If you have too many files at the root, rclone will simply never start transferring anything and just crash. If one of your subdirectories is too big, you'll see the same unbounded memory growth as soon as rclone reaches that directory.

## Recommendations to the Devs of rclone
Please serialize your directory scans to disk if you start exceeding a certain threshold of memory or files in the current directory. You could probably get away with always doing that, since disk is still so much faster than the network anyway. I'm currently doing an inventory scan of our datalakes, and 50 million file entries take up only 12 GB of disk without any fancy compression. I know you are storing a lot more file information, like metadata, so it could easily be double or triple that.
But it is simply so much easier and cheaper to allocate disk space to a Docker instance than it is to get a machine with much more RAM.
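To make that suggestion concrete, here is a rough illustration of a disk-backed listing using Python's sqlite3 as a stand-in store; this is not how rclone is implemented, just the shape of the idea:

```python
# Rough illustration of spilling a directory listing to disk instead of RAM,
# using sqlite3 as a stand-in store.  Not rclone code, just the idea.
import sqlite3
from typing import Iterable, Iterator, Tuple

Entry = Tuple[str, int, str]   # (key, size, modtime)

def spill_listing(entries: Iterable[Entry], db_path: str = "listing.db") -> sqlite3.Connection:
    """Write listing entries to disk as they arrive, keeping memory flat."""
    con = sqlite3.connect(db_path)
    con.execute("CREATE TABLE IF NOT EXISTS entries (key TEXT, size INTEGER, modtime TEXT)")
    con.executemany("INSERT INTO entries VALUES (?, ?, ?)", entries)
    con.commit()
    return con

def replay_listing(con: sqlite3.Connection, batch: int = 10_000) -> Iterator[Entry]:
    """Stream the listing back in batches for the sync step."""
    cur = con.execute("SELECT key, size, modtime FROM entries ORDER BY key")
    while rows := cur.fetchmany(batch):
        yield from rows
```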
An additional pain point with an out-of-memory crash is that when the rclone process gets a kill signal, it will **exit 0**, making it look like it succeeded. According to this thread https://github.com/rclone/rclone/issues/7966 this is a feature of Linux, and you must get the exit code from the operating system instead of the return value of the exited rclone process.
This is super scary if you are relying on rclone to back up your datalake when in reality it has started failing because one of your directories has millions of files in it. On DigitalOcean it's easy to see that a Docker instance has failed; on Render.com, however, you'll get a "Run Succeeded", and it's not until you look at the run history that you see the instance actually ran out of memory. I'm not sure about other hosting providers.
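One defensive pattern if you launch rclone from a wrapper script (a sketch with placeholder remote names): on POSIX, `subprocess` reports a negative return code when the child was killed by a signal, so a SIGKILL from the OOM killer shows up as -9 rather than a clean exit.

```python
# Defensive wrapper sketch: detect whether rclone actually finished or was
# killed (e.g. by the OOM killer).  On POSIX, subprocess reports a negative
# returncode when the child died from a signal: -9 means SIGKILL.
import signal
import subprocess
import sys

def run_backup() -> int:
    proc = subprocess.run(
        ["rclone", "sync", "s3-src:datalake", "s3-dst:backup", "-v"],  # placeholder remotes
        check=False,
    )
    rc = proc.returncode
    if rc < 0:
        sig = signal.Signals(-rc).name
        print(f"rclone was killed by {sig} (possible OOM kill), treating as failure",
              file=sys.stderr)
        return 1
    return rc

if __name__ == "__main__":
    sys.exit(run_backup())
```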
Anyway, I'm glad this huge task is finally over with, and we have started syncing up our data for redundancy and backup purposes. So far so good!
