How to calculate RAM necessary for uploading

I have a system with 8 GB of RAM. I adapted some scripts I found around that used --transfers 300, and I just keep crashing with this error:

https://pastebin.com/86x2rWKf

Then I have to reboot manually because even the reboot command fails. How can I avoid this in the future while at the same time not wasting RAM?

Instead of going straight to 300, does it still crash your system if you reduce it to 150 or 100? How much swap is allocated on your system?

Can you paste the whole rclone command line?

Which cloud provider are you using?

rclone move --config=/path/rclone.conf $FROM $TO -c --no-traverse --transfers=300 --checkers=300 --delete-after --min-age 15m --log-file=$LOGFILE

gdrive

And which cloud provider is it?

Also what sort of sizes are the files you are transferring - big >100MB or small?

It was Google Drive.

There are many, many (well over 100) small files, under 5 MB each. My folder keeps a constant backlog of ~600+ files at --transfers 100. Like I said, they are all very small files.

Is there a way to run --transfers 300 at acceptable RAM usage? Or how much RAM is recommended?

Now my script looks like this:

/usr/sbin/rclone --config=/root/.config/rclone/rclone.conf move $FROM $TO -c --no-traverse --transfers 600 --checkers 100 --min-age 3m --log-file=$LOGFILE2 --exclude-from /root/exclude-file.txt --drive-chunk-size 1024k

If my math is correct, this shouldn’t use more than 1 GB of RAM when uploading 600 files at 1 MB each, right?

I don’t really have an idea of how much RAM each connection will take, but if you want it to use the minimum memory then set --buffer-size 0, otherwise each transfer will use up to 16 MB of RAM as a read-ahead buffer. For small files the read-ahead buffer is mostly useless anyway, so you won’t lose much by disabling it.
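For example, taking the command posted above and just adding the flag to disable the read-ahead buffer (an untested sketch, everything else unchanged):

rclone move --config=/path/rclone.conf $FROM $TO -c --no-traverse --transfers=300 --checkers=300 --buffer-size 0 --delete-after --min-age 15m --log-file=$LOGFILE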

Drive chunks aren’t buffered in RAM, so you don’t need to set --drive-chunk-size.

Something weird is happening then, because after setting --drive-chunk-size 1024k I can do up to --transfers 800 without any issues, and it never uses more than 3 GB of RAM.

Whereas before that flag, --transfers 300 would crash my machine.

Just checked the code and yes, you are right - we do now buffer the chunk so we can retry it on failure (we didn’t used to).

Yes, so setting --drive-chunk-size is a good idea! Setting --buffer-size should help too.
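As a rough back-of-the-envelope estimate (my own rule of thumb, not an official rclone formula): each transfer can hold one Drive chunk in memory, plus a read-ahead buffer of --buffer-size for any file larger than that buffer, plus some per-connection overhead. So peak transfer memory is roughly:

transfers x (drive-chunk-size + buffer-size for files bigger than the buffer) + per-connection overhead

Assuming the default --drive-chunk-size of 8M, 300 transfers could pin around 300 x 8 MB = 2.4 GB in chunk buffers alone, before any other overhead. With --drive-chunk-size 1024k, even 800 transfers only need about 800 x 1 MB = 800 MB for chunks, which fits the ~3 GB total you are seeing once the rest of the per-transfer overhead is counted.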

Lol. How exactly does buffer size affect me with small files? And with big files like 1 GB+?

If I wanted to tweak for low RAM usage with big files, what would you recommend?

For files below the buffer size, we don’t do buffering. For files above the buffer size, we use that much extra memory as a read-ahead buffer. So if all your small files are below 16 MB it won’t make any difference.

Tweak down --buffer-size. Setting it too low will impact performance though.
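For instance (just an illustrative sketch, the numbers are made up and worth tuning for your link speed), a low-RAM setup for big files could look like:

/usr/sbin/rclone --config=/root/.config/rclone/rclone.conf move $FROM $TO -c --transfers 4 --checkers 8 --buffer-size 0 --drive-chunk-size 1024k --log-file=$LOGFILE2

That caps the chunk buffers at roughly 4 x 1 MB and drops the read-ahead entirely, at the cost of slower uploads per file.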
