Rclone crashes due to memory usage issues

What is the problem you are having with rclone?

When copying from one Google Drive handle to another, encrypted drive handle (server-side copy), rclone crashes due to insufficient memory.

Files are transferred via a GCP instance (g1-small, 1.7 GB RAM).

What is your rclone version (output from rclone version)

1.50

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Ubuntu 18.04 LTS

Which cloud storage system are you using? (eg Google Drive)

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy mydrivehandle1:folders mydrivehandle2:folders -P -v

A log from the command with the -vv flag (eg output from rclone -vv copy /tmp remote:tmp)

Can you please post a log file with -vv?
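If the output is too long to keep in the terminal, the --log-file flag can write the debug output to a file instead (the file name below is just an example):

    rclone copy mydrivehandle1:folders mydrivehandle2:folders -P -vv --log-file rclone-debug.log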

Just restarted my transfer with -vv. I'll report back if there are any out-of-memory errors.

I got the log file, but it's quite long

After running the copy command for a few days, it eventually gave an out-of-memory error:

https://pastebin.com/uPBA1GbL

@Animosity022 @ncw

Do you have the full log? That just shows a snippet.

@Animosity022 Unfortunately I don't have the full log as it would be way too long :confused:

Without a log, the stack trace just shows it ran out of memory, so there's nothing to help debug the issue, unfortunately.

Alright, looks like I'll rerun the command with -vv and hope this helps :slight_smile:

I used a tool called panicparse on the backtrace, which summarizes what is going on.
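For reference, panicparse's pp command reads a goroutine dump on stdin, so something like the following works (the install path and file name here are just illustrative; check the panicparse README for the current instructions):

    # install panicparse (the binary is called pp)
    go get github.com/maruel/panicparse/cmd/pp
    # summarize the goroutine dump saved from the crash
    pp < crash.log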

It tells me

72: select [0~1439 minutes] [Created by asyncreader.(*AsyncReader).init @ asyncreader.go:78]
    asyncreader asyncreader.go:83     (*AsyncReader).init.func1(*)

Which means you have 72 open files.

So rclone is using 72 * --buffer-size of memory. --buffer-size is 16M by default, so that is about 1.2 GB, which is a big fraction of the 1.7 GB of RAM the small instance has.

So either tune --transfers down, or use --buffer-size 0, which will slow things down a bit (but not much).
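For example (the flag values here are just illustrative starting points; --transfers defaults to 4 and --buffer-size to 16M):

    # fewer concurrent transfers means fewer open files and buffers
    rclone copy mydrivehandle1:folders mydrivehandle2:folders -P -vv --transfers 2

    # or keep the concurrency but drop the per-file read-ahead buffer
    rclone copy mydrivehandle1:folders mydrivehandle2:folders -P -vv --buffer-size 0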

Hmm, I've just seen your command line - are you using the default value of --transfers? In that case, having 72 files open at once means there is a leak somewhere.

If you try --buffer-size 0 does that help?

That was why I wanted to see the full debug log, so we could see the command being run rather than assume what's being done.

You are absolutely right, a debug log always answers the questions :slight_smile:

@ncw @Animosity022 I've run the command again with -vv and --buffer-size 0. Will keep both of you posted. :slight_smile:
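For reference, the full command is now:

    rclone copy mydrivehandle1:folders mydrivehandle2:folders -P -vv --buffer-size 0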

