Move / copy uses up RAM, locks up my NAS

What is the problem you are having with rclone?

rclone move or copy seems to eat up all my RAM and lock up my NAS.

What is your rclone version (output from rclone version)

1.52.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Linux 64bit (Synology d1019+)

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone /volume1/GD/local/plex gcrypt:plex \
        --user-agent="gcrypt" \
        --buffer-size 512M \
        --drive-chunk-size 512M \
        --tpslimit 8 \
        --checkers 8 \
        --transfers 4 \
        --order-by modtime,$ModSort \
        --min-age $MinimumAge \
        --exclude *fuse_hidden* \
        --exclude *_HIDDEN \
        --exclude .recycle** \
        --exclude .Recycle.Bin/** \
        --exclude *.backup~* \
        --exclude *.partial~* \
        --drive-stop-on-upload-limit \
        --log-level INFO \
        --log-file $LOG

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = xxxxxxx.apps.googleusercontent.com
client_secret = xxxxxxxx
scope = drive
root_folder_id = xxxxxxxxx
token = {"access_token":"xxxxxxxxx"}

[gcrypt]
type = crypt
remote = gdrive:crypt
filename_encryption = standard
directory_name_encryption = true
password = xxxxxxxx
password2 = Ixxxxxxxx

A log from the command with the -vv flag

No log, see below

I had an issue with the log and it is lost. I don't want to run the command again without getting some advice, since it crashed my server.

I was watching Plex and it froze. I checked the resource monitor and rclone was using all the RAM. I couldn't SSH in to kill the script because the server was not responding. I tried rebooting through DSM (the web UI for the server), and it froze for several minutes before it finally shut down.

The script seems to work fine for a small amount of data, so I was trying a larger amount (about 60GB).

Not sure if this matters, but I also have the remote mounted in a mergerfs pool together with the local path I am uploading.

Can you post that command?
How much RAM does the NAS have?

You are using 512M per file (the --drive-chunk-size), so that's your issue; I'd remove it altogether.

The --buffer-size also uses memory, and that's per transfer, so multiply it by 4 since you have 4 transfers going.

How much memory is in your NAS?
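The per-transfer math above can be sketched as a quick back-of-the-envelope calculation (an illustration of worst-case allocation with those flags, not exact rclone internals):

```shell
# Each transfer can hold one read-ahead buffer (--buffer-size) plus one
# upload chunk (--drive-chunk-size) in memory at the same time.
buffer_mb=512
chunk_mb=512
transfers=4
echo "$(( (buffer_mb + chunk_mb) * transfers ))M"   # prints 4096M
```

Roughly 4GB of a possible 8GB gone to rclone alone, before the mount, mergerfs, Plex, and the OS get their share.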

rclone mount gcrypt: /volume1/GD/gcrypt  \
   --allow-other \
   --dir-cache-time 1000h \
   --log-level INFO \
   --log-file $LOG \
   --poll-interval 15s \
   --umask 002 \
   --user-agent animosityapp \
   --rc \
   --rc-addr :5572 \
   --vfs-read-chunk-size 32M &
mergerfs /volume1/GD/gcrypt:/volume1/GD/local /volume1/GD/pool \
     -o defaults,fsname=encrypted_pool,allow_other \
     -o moveonenospc=true,category.create=ff,func.getattr=newest &
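Since the mount is already started with --rc, rclone's actual memory use can be queried over the remote control API instead of guessing from the NAS monitor (a sketch; assumes the mount above is running and listening on :5572):

```shell
# Report Go runtime memory stats for the running mount.
rclone rc core/memstats --rc-addr localhost:5572

# Run a garbage collection, asking the runtime to return freed memory to the OS.
rclone rc core/gc --rc-addr localhost:5572
```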

8GB of RAM

I can safely remove both of these?

Yes, you can remove them, or reduce/tweak the values.

You might find the default values give you fast upload speeds. For example, the defaults are:
--checkers is 8
and
--transfers is 4
so those two flags were just restating the defaults anyway.

Also keep in mind that Google Drive has limits when uploading lots of small files, so those settings might not be useful or needed.

Thank you! I removed:

--buffer-size 512M
--drive-chunk-size 512M
--checkers
--transfers

... the upload ran with no issues.
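For reference, the trimmed command looks like this (a sketch; `move` is assumed from the thread title, since the original paste omitted the subcommand, and $ModSort, $MinimumAge, and $LOG are the same script variables as before):

```shell
rclone move /volume1/GD/local/plex gcrypt:plex \
        --user-agent="gcrypt" \
        --tpslimit 8 \
        --order-by modtime,$ModSort \
        --min-age $MinimumAge \
        --exclude *fuse_hidden* \
        --exclude *_HIDDEN \
        --exclude .recycle** \
        --exclude .Recycle.Bin/** \
        --exclude *.backup~* \
        --exclude *.partial~* \
        --drive-stop-on-upload-limit \
        --log-level INFO \
        --log-file $LOG
```

With --buffer-size and --drive-chunk-size back at their defaults, per-transfer memory drops from hundreds of megabytes to a few tens of megabytes.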

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.