Crash with a lot of files

What is the problem you are having with rclone?

After a seemingly random amount of time while syncing, rclone crashes. There are over 2.5 million mostly small files in this sync directory. Normally ~99% of the files are unchanged; I run the sync nightly, and it had been running for a year without an issue. The issue started after I modified nearly all the files. After that, the sync took about 5 days (not an issue in itself) and then crashed for the first time. I retried with the same command; it uploads more files and then crashes again after a while.

This is running on AWS, and the sync directory is an EFS mount, which is functionally identical to a Linux NFS mount, basically a network share. "Running for a while and then crashing" might point to a corrupted file or directory, but I haven't been able to find such a file yet.
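
One crude way to probe for such a file (only a sketch; forcing a one-byte read of 2.5 million files over NFS will be slow) would be something like:

find /mnt/efs-prod -type f -exec sh -c 'head -c1 "$1" >/dev/null || echo "unreadable: $1"' _ {} \;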

Since there are so many files, the output from -vv is at least a gigabyte, and even without -vv the error backtrace overflows my console buffer, so I can't grab the full error output. For that reason, I'm only able to include a partial log for now.

The -vv output consists mostly of "Size and modification time the same (differ by 0s, within tolerance 1ms)" and "Unchanged skipping" messages, so if my (partial) log is useless, please advise on a way to grab just the relevant parts of the -vv log. In the meantime, I have started the sync again with > rclone.out.log 2> rclone.err.log so that I can capture the full backtrace.
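
For reference, once a run with those redirections finishes, something like this should strip the bulk out of the captured log (the message texts are copied from the lines quoted above; rclone writes its log to stderr, so rclone.err.log is the one to filter):

grep -v -e 'Size and modification time the same' -e 'Unchanged skipping' rclone.err.log > rclone.trimmed.log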

Run the command 'rclone version' and share the full output of the command.

rclone v1.61.1
- os/version: amazon 2 (64 bit)
- os/kernel: 4.14.301-224.520.amzn2.x86_64 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using?

Google Drive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone sync /mnt/efs-prod gdrive:efs-prod --transfers 4 --checkers 64

(without --transfers and --checkers it still crashes)

The rclone config contents with secrets removed.

[gdrive]
type = drive
client_id = xxx.apps.googleusercontent.com
client_secret = xxx
scope = drive
root_folder_id = xxx
skip_gdocs = true
acknowledge_abuse = true
token = xxx
upload_cutoff = 1Ti
chunk_size = 256Mi
retries = 10

A log from the command with the -vv flag

Note: this log was not captured with the -vv flag, and it is partial, as mentioned above. A full -vv log is over a gigabyte.

rclonelog.txt (134.1 KB)

I can't tell from that log, as it starts after the useful information, but I'd imagine you ran out of memory.

You have 64 checkers and a large chunk size.
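
As a rough back-of-envelope (the drive backend buffers each upload chunk in memory, one per transfer):

4 transfers x 256 MiB chunk_size = 1 GiB of upload buffers alone, before counting 64 checkers walking 2.5 million objects.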

Can you share the whole log?

You are probably right. The instance has only 2G of RAM, and I recently added the chunk_size to get around Gdrive 403s, without thinking about what exactly it means.
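
For what it's worth, a config tweak along these lines should cap the upload buffers at 4 x 64 MiB = 256 MiB (64Mi is an arbitrary pick for illustration; chunk_size must be a power of 2, and the drive default is 8Mi):

chunk_size = 64Mi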

A bit embarrassed now for not thinking of something as simple as memory exhaustion. I'm currently capturing the full log and will report the final status.

Results are in and it's

fatal error: runtime: out of memory

Problem exists between keyboard and chair :slight_smile: Sorry for taking up your time.


No worries! I've fixed so many of my own things by simply posting as well :slight_smile:
