Tarball + rclone sync on web server, ram goes to 100%

What is the problem you are having with rclone?

The following script drives RAM usage to 100%, freezing the Docker containers and the web server running on the same machine.

The script creates a tarball of the uploads folder (300 files, about 250 MB in total) and then uses the sync command to push it to my B2 bucket.

DESTINATION=b2:xxxxxxx/xxxxxx

BACKUP_DIR=~/backup/backups
DB_DUMP_PATH=~/backup/db
UPLOADS_DIR=~/docker/uploads

docker exec docker-database-1 pg_dump directus -U directus | gzip > "$DB_DUMP_PATH"

tar -czvf "$BACKUP_DIR/directus-backup-$(date +%F).tar.gz" "$DB_DUMP_PATH" "$UPLOADS_DIR"

find "$BACKUP_DIR" -type f -name '*.gz' -mmin +720 -exec rm {} \;

rclone sync "$BACKUP_DIR" "$DESTINATION" --b2-hard-delete --fast-list
rclone cleanup "$DESTINATION"
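As a sanity check on the retention step, the find rule can be exercised safely in a scratch directory (GNU touch -d is assumed for backdating; the filenames are illustrative):

```shell
# Demo of the retention rule from the script above, run against a scratch dir.
BACKUP_DIR=$(mktemp -d)

# Create one "old" archive (13 hours ago, i.e. older than 720 minutes)
# and one fresh archive.
touch -d '13 hours ago' "$BACKUP_DIR/directus-backup-old.tar.gz"
touch "$BACKUP_DIR/directus-backup-new.tar.gz"

# Same retention command as in the backup script:
# delete .gz files modified more than 720 minutes (12 hours) ago.
find "$BACKUP_DIR" -type f -name '*.gz' -mmin +720 -exec rm {} \;

ls "$BACKUP_DIR"
```

Only the fresh archive should survive, which confirms the rule keeps at most the latest ~12 hours of backups.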

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.2
- os/version: centos 7.9.2009 (64 bit)
- os/kernel: 3.10.0 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.3
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

B2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync "$BACKUP_DIR" "$DESTINATION" --b2-hard-delete --fast-list

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[b2]
type = b2
account = XXX
key = XXX

A log from the command that you were trying to run with the -vv flag

I'm afraid I can't (for now); I crashed the web server just a few minutes ago. I'm hoping to get some advice without this step.

The B2 backend uses lots of memory for upload buffers. If you set --b2-chunk-size to its minimum of 5M, then you will use less memory at the expense of longer transfer times.

I merged a different approach into 1.65 which uses much less memory, so you could try that too.
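Combining the two suggestions, something like the following should cut buffer memory substantially. This is a sketch, not runnable here (it needs rclone and B2 credentials); b2:bucket/path stands in for the real remote, and the flag values are illustrative:

```shell
# Upgrade first. rclone selfupdate works for binaries installed from
# rclone.org; package-manager installs should be updated via the package
# manager instead.
rclone selfupdate

# Sync with the minimum chunk size and a single transfer stream.
# --b2-upload-concurrency limits parallel chunk uploads per file; fewer
# in-flight chunks means fewer buffers held in RAM (assumed relevant here).
rclone sync "$BACKUP_DIR" "b2:bucket/path" \
  --b2-hard-delete \
  --b2-chunk-size 5M \
  --b2-upload-concurrency 1 \
  --transfers 1
```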

Thanks for your response. Since it's only one file, I suppose setting --transfers=1 will make no immediate difference, right? Is there any other option I can set to decrease RAM usage? The backup runs at night, so I've got plenty of time.

hi,

first, update rclone and test.

add --b2-chunk-size=5M and test

perhaps copy just that one file instead of running rclone sync on the entire directory:
rclone copy "$BACKUP_DIR/directus-backup-2023-12-10.tar.gz" "$DESTINATION" --no-traverse --b2-hard-delete

Remove the --fast-list flag when low on memory. It makes listing operations faster, but at the cost of holding the entire listing in RAM.
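Putting that together, here is a small sketch of the "copy only the newest archive" idea. The date-stamped filenames and the b2:bucket/path remote are placeholders, and the rclone call is echoed rather than executed so the selection logic can be checked on its own:

```shell
# Pick the newest tarball in the backup dir and upload just that one file.
BACKUP_DIR=$(mktemp -d)
DESTINATION=b2:bucket/path   # placeholder remote

# Two archives: yesterday's and today's.
touch -d '1 day ago' "$BACKUP_DIR/directus-backup-2023-12-09.tar.gz"
touch "$BACKUP_DIR/directus-backup-2023-12-10.tar.gz"

# Newest .tar.gz by modification time (ls -t sorts newest first).
LATEST=$(ls -t "$BACKUP_DIR"/*.tar.gz | head -n 1)

# Echoed for the demo; drop the echo to actually upload.
echo rclone copy "$LATEST" "$DESTINATION" --no-traverse --b2-chunk-size 5M
```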

Thanks! I will try that. I also saw the --no-traverse flag; does it also have an effect on B2 remotes?

Yes, it does. It's probably sensible if you are copying a single file into a directory with lots of other files.
