What is the problem you are having with rclone?
I'm trying to move an S3 bucket to GCP with rclone. Unfortunately my command always fails because of the OOM killer.
Run the command 'rclone version' and share the full output of the command.
rclone v1.69.0
os/version: debian 11.11 (64 bit)
os/kernel: 5.10.0-33-cloud-amd64 (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.23.4
go/linking: static
go/tags: none
Which cloud storage system are you using? (eg Google Drive)
The source storage is AWS S3 and the destination bucket is in GCP.
The S3 bucket has Total objects: 4.825M (4825249) and Total size: 97.543 GiB (104736162700 Byte).
I tried to make the sync work by allocating 8 GB of RAM to the VM but it wasn't enough. I also tried the same command with 16 GB and 32 GB, to no avail.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
sudo -u rclone rclone --config /home/rclone/rclone.conf sync --combined /tmp/output.txt --checksum --progress --transfers=32 --use-mmap s3:source gcs:dest
Please run 'rclone config redacted' and share the full output.
[gcs]
type = google cloud storage
project_number = XXX
[s3]
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
region = eu-central-1
endpoint = bucket.vpce-XXX-XXX.s3.eu-central-1.vpce.amazonaws.com
A log from the command that you were trying to run with the -vv flag
(I will add the log once the command is finished)
Do you have any recommendation on how to sync 2 buckets without rclone blowing up its memory?
I also tried with a single transfer but it's too slow.
Good timing :) A potential fix for exactly this scenario is being tested right now.
Try it yourself and provide feedback if it works:
(GitHub issue opened 25 Jul 2024, labelled "bug")
## Background - Amazon S3 rclone problems
I'm trying to back up a datalake with 100 million files at the root. They are mostly small files under 1 MB.
rclone was simply not designed for this use case and will eat up all available memory and then crash. There was no machine instance I could throw at it that would fix this issue. Even running locally in a Docker instance it would eat up all available memory and then crash.
None of the advice in the forums helped, and a lot of people seem to be running into this, so I wanted to post this here so that anyone searching for this problem can try our solution.
## Solution in a nutshell: PUT YOUR FILES INTO FOLDERS!!!!
What's interesting is the behavior: rclone would never start transferring files; it would always sit there saying 0 files transferred, 0 bytes transferred, and eat up all available memory before crashing with an OOM.
I tried all the suggestions in the forums, reducing the buffering memory, reducing the number of checkers and transfers. Nothing worked.
## Cause & Fix
Without looking at the code or doing any profiling, my hypothesis was that rclone scans all files in a "directory" into RAM before executing on it. This seems to be true whether or not `--fast-list` is used.
Obviously, having 100 million files at the root was causing our org a whole bunch of problems anyway and it's been something that I've wanted to fix for a while, so this problem gave me enough reason to go ahead and re-organize our entire datalakes.
Since each file is referenced in our database with a datestamp, I was able to write Python scripts that move these files from the root into folders by service and year-month (for example name.html -> service/2023-04/name.html).
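Not the exact scripts we used, just a minimal sketch of the idea, assuming the files live in S3; the bucket name and `lookup_service_and_date()` are hypothetical placeholders for our own setup and database lookup:

```python
# Minimal sketch: move flat S3 keys into service/YYYY-MM/ prefixes.
# BUCKET and lookup_service_and_date() are placeholders, not real names.
import boto3

BUCKET = "my-datalake"  # hypothetical bucket name
s3 = boto3.client("s3")

def lookup_service_and_date(key: str) -> tuple[str, str]:
    # Placeholder: query your own database for the service and datestamp of `key`.
    return "service", "2023-04"

paginator = s3.get_paginator("list_objects_v2")
# Delimiter="/" restricts Contents to objects sitting at the bucket root.
for page in paginator.paginate(Bucket=BUCKET, Delimiter="/"):
    for obj in page.get("Contents", []):
        old_key = obj["Key"]
        service, yearmonth = lookup_service_and_date(old_key)
        new_key = f"{service}/{yearmonth}/{old_key}"
        # S3 has no rename, so copy to the new key and delete the original.
        s3.copy_object(Bucket=BUCKET, Key=new_key,
                       CopySource={"Bucket": BUCKET, "Key": old_key})
        s3.delete_object(Bucket=BUCKET, Key=old_key)
```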
This worked extremely well and I was now able to run rclone and have it at least start transferring some files. However, there were still folders with 5+ million files, and I eventually ran into the same out-of-memory error.
So I re-organized the files in our datalake further, into service/yrmo/day, and that seems to have done the trick. rclone now consistently runs under 2 GB of memory and I've been able to increase the number of transfers and checkers to 100 each with 3 MB of buffer per transfer.
## Dead ends
All the advice about adjusting memory buffers and the number of transfers is mostly wrong. Those settings will only cut your minimum memory usage by a constant factor; they do very little to bound the memory rclone uses for extremely large "directories".
If you have this same problem, no amount of setting tweaking will work... you MUST re-organize your data into folders or rclone will run out of memory every single time. If you have too many files at the root, rclone will simply never start transferring anything and just crash. If one of your subdirectories is too big, you'll see the same runaway memory pattern.

## Recommendations to the Devs of rclone
Please serialize your directory scans to disk once you exceed a certain threshold of memory or of files in the current directory. You could probably get away with always doing that, since disk is so much faster than the network anyway. I'm currently doing an inventory scan of our datalakes and 50 million file entries only take up 12 GB of disk without any fancy compression. I know you are storing a lot more file information, like metadata, so it could easily be double or triple that.
But it is simply so much easier and cheaper to allocate disk space to a Docker instance than it is to get a machine with much more RAM.
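As a rough illustration of what I mean (not rclone's code, just the idea), the listing pass could append entries to an on-disk SQLite table and the transfer pass could stream them back out; the table layout and names here are made up:

```python
# Sketch of spilling a directory listing to disk instead of holding it in RAM.
# Illustrative only; the schema and file name are invented for this example.
import sqlite3

db = sqlite3.connect("listing.db")
db.execute("CREATE TABLE IF NOT EXISTS entries (path TEXT PRIMARY KEY, size INTEGER)")

def record_entry(path: str, size: int) -> None:
    # Listing pass: one row per object, nothing kept in memory.
    db.execute("INSERT OR REPLACE INTO entries VALUES (?, ?)", (path, size))

def iter_entries():
    # Transfer/check pass: stream the rows back in key order.
    yield from db.execute("SELECT path, size FROM entries ORDER BY path")

record_entry("service/2023-04/a.html", 1024)
record_entry("service/2023-04/b.html", 2048)
db.commit()

for path, size in iter_entries():
    print(path, size)  # hand off to the transfer stage here
```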
An additional pain point with the out-of-memory crash is that when the rclone process gets the kill signal, it will **exit 0**, making it look like it succeeded. According to this thread https://github.com/rclone/rclone/issues/7966 this is how Linux behaves, and you must get the exit code from the operating system instead of from the return value of the exited rclone process.
This is super scary if you are relying on rclone to back up your datalake when in reality it is failing because one of your directories has millions of files in it. On DigitalOcean it's easy to see that a Docker instance has failed; on Render.com, however, you'll get a "Run Succeeded" and it's not until you look at the run history that you'll see that your instance in fact ran out of memory. I'm not sure about other hosting providers.
Anyway, I'm glad this huge task is finally over with, and we have started syncing up our data for redundancy and backup purposes. So far so good!

PS.
mNantern: --transfers=32
You can also try to lower this value, even down to the default of 4.
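For example, your original command with only the concurrency reduced (just an illustration, keep everything else as-is):
sudo -u rclone rclone --config /home/rclone/rclone.conf sync --combined /tmp/output.txt --checksum --progress --transfers=4 --use-mmap s3:source gcs:dest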
The new version is working well; it uses a lot less memory than the previous one!
But it's not very fast: it needed 1 hour to sync 450k files even with --transfers=32. For now it uses only 3 GB of memory and almost no CPU (those files had already been copied).
After 1 hour it started to sync some new files, and the CPU usage went up but not the memory usage.
It looks good!
Good to hear that it works.
Please provide your feedback on the GitHub issue. It is very valuable to have some real-life tests.
You could try to increase the number of checkers: --checkers 32. The default is 8.
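With your command that would look something like this (only an illustration, adjust to your setup):
sudo -u rclone rclone --config /home/rclone/rclone.conf sync --combined /tmp/output.txt --checksum --progress --transfers=32 --checkers=32 --use-mmap s3:source gcs:dest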