Panic: runtime error: invalid memory address or nil pointer dereference


What is the problem you are having with rclone?

I'm getting a panic: runtime error: invalid memory address or nil pointer dereference mid-way through a transfer. It seems to happen when copying from the Dropbox remote to the S3 remote; direct uploads from a local drive don't exhibit the same problem.

Running on a dedicated server with 64 GB of memory. free shows roughly 8 GB used / 27 GB free both when I start and while rclone is running.

Run the command 'rclone version' and share the full output of the command.

rclone v1.65.0
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 6.2.0-35-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.21.4
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Dropbox -> s3/Minio

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy dropbox-crypt:Media s3:Media --checksum --low-level-retries 1 --fast-list --tpslimit 12 --max-transfer 200G --cutoff-mode SOFT --log-level DEBUG --log-file /home/user/log.log

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[minio]
type = s3
provider = Minio
access_key_id = XXX
secret_access_key = XXX
endpoint = https://host.tld.com:port
acl = bucket-owner-full-control
upload_cutoff = 100Mi
chunk_size = 50Mi

[s3]
type = alias
remote = minio:data/personal-files

[dropbox]
type = dropbox
client_id = XXX
client_secret = XXX
token = XXX

[dropbox-crypt]
type = crypt
remote = dropbox:encrypt
password = XXX
password2 = XXX
filename_encoding = base32768

A log from the command that you were trying to run with the -vv flag

The log file is too large to put on pastebin. There are no errors until the end, just thousands of lines like:

2023/12/12 09:06:52 DEBUG : Movies-720p/Avengers Infinity War (2018)/Avengers Infinity War (2018) [imdb-tt4154756][Bluray-1080p][8bit][x264][DTS 5.1]-DON.mkv: multipart upload wrote chunk 154 with 52428800 bytes and etag "f5d8301015fba20292fc7e8ec192b12e"
2023/12/12 09:06:52 DEBUG : Movies-720p/Avengers Infinity War (2018)/Avengers Infinity War (2018) [imdb-tt4154756][Bluray-1080p][8bit][x264][DTS 5.1]-DON.mkv: multi-thread copy: chunk 154/356 (8021606400-8074035200) size 50Mi finished
2023/12/12 09:06:52 DEBUG : Movies-720p/Avengers Infinity War (2018)/Avengers Infinity War (2018) [imdb-tt4154756][Bluray-1080p][8bit][x264][DTS 5.1]-DON.mkv: multi-thread copy: chunk 158/356 (8231321600-8283750400) size 50Mi starting
2023/12/12 09:06:52 DEBUG : Masterclass/Armin van Buuren Masterclass on Dance Music/29 Performance Tips.ts: multipart upload wrote chunk 5 with 42887052 bytes and etag "2a8c4ff833dd567b9597ad6ab7e186e6"
2023/12/12 09:06:53 DEBUG : Movies-720p/Avengers Infinity War (2018)/Avengers Infinity War (2018) [imdb-tt4154756][Bluray-1080p][8bit][x264][DTS 5.1]-DON.mkv: multipart upload wrote chunk 155 with 52428800 bytes and etag "076162d1a599bb8bd9905e17c780a7d9"

The last few lines are:

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x161093c]

goroutine 79 [running]:
github.com/rclone/rclone/backend/s3.(*Fs).OpenChunkWriter(0xc000537340, {0x28c6510?, 0xc00094e780}, {0xc000866a00, 0x4c}, {0x7f0d1c7baef8?, 0xc000f29cc8}, {0xc02f82fc20, 0x1, 0x1})
	github.com/rclone/rclone/backend/s3/s3.go:5704 +0x63c
github.com/rclone/rclone/fs/operations.multiThreadCopy({0x28c6510?, 0xc00094e780}, {0x28da638, 0xc000537340}, {0xc000866a00, 0x4c}, {0x28d8958?, 0xc000f29cc8}, 0x4, 0xc00085fb80, ...)
	github.com/rclone/rclone/fs/operations/multithread.go:161 +0x463
github.com/rclone/rclone/fs/operations.(*copy).multiThreadCopy(0xc001d51320, {0x28c6510?, 0xc00094e780?}, {0xc02f82fc20?, 0x28d8958?, 0xc000f29cc8?})
	github.com/rclone/rclone/fs/operations/copy.go:160 +0x85
github.com/rclone/rclone/fs/operations.(*copy).manualCopy(0xc001d51320, {0x28c6510, 0xc00094e780})
	github.com/rclone/rclone/fs/operations/copy.go:249 +0x425
github.com/rclone/rclone/fs/operations.(*copy).copy(0xc001d51320, {0x28c6510, 0xc00094e780})
	github.com/rclone/rclone/fs/operations/copy.go:302 +0x16c
github.com/rclone/rclone/fs/operations.Copy({0x28c6510, 0xc00094e780}, {0x28da638, 0xc000537340}, {0x0?, 0x0}, {0xc000866a00, 0x4c}, {0x28d8958, 0xc000f29cc8})
	github.com/rclone/rclone/fs/operations/copy.go:404 +0x425
github.com/rclone/rclone/fs/sync.(*syncCopyMove).pairCopyOrMove(0xc0009d0900, {0x28c6510, 0xc00094e780}, 0x0?, {0x28da638, 0xc000537340}, 0x0?, 0x0?)
	github.com/rclone/rclone/fs/sync/sync.go:446 +0x1f6
created by github.com/rclone/rclone/fs/sync.(*syncCopyMove).startTransfers in goroutine 1
	github.com/rclone/rclone/fs/sync/sync.go:473 +0x45

I can't see a common thread in when it happens: sometimes it takes 12 minutes, sometimes 17; sometimes 48 GB have transferred, sometimes 70 GB.

Can you try the beta? I might be misremembering, but I thought a number of S3 issues were fixed there.

Installed rclone v1.66.0-beta.7578.c69eb8457 and I'm re-running the command now.

Well, I was hopeful, but that ultimately ended the transfer with the same error.

Additionally, it seems to be copying files that already exist.

panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x0 pc=0x161139c]

goroutine 107 [running]:
github.com/rclone/rclone/backend/s3.(*Fs).OpenChunkWriter(0xc00075c840, {0x28c8aa0?, 0xc001cc75e0}, {0xc0008c67d0, 0x4c}, {0x7fe49728ac58?, 0xc0026016e0}, {0xc002f81070, 0x1, 0x1})
        github.com/rclone/rclone/backend/s3/s3.go:5737 +0x63c
github.com/rclone/rclone/fs/operations.multiThreadCopy({0x28c8aa0?, 0xc001cc75e0}, {0x28dcbb8, 0xc00075c840}, {0xc0008c67d0, 0x4c}, {0x28daed8?, 0xc0026016e0}, 0x4, 0xc000a574a0, ...)
        github.com/rclone/rclone/fs/operations/multithread.go:161 +0x463
github.com/rclone/rclone/fs/operations.(*copy).multiThreadCopy(0xc00281ecf0, {0x28c8aa0?, 0xc001cc75e0?}, {0xc002f81070?, 0x28daed8?, 0xc0026016e0?})
        github.com/rclone/rclone/fs/operations/copy.go:160 +0x85
github.com/rclone/rclone/fs/operations.(*copy).manualCopy(0xc00281ecf0, {0x28c8aa0, 0xc001cc75e0})
        github.com/rclone/rclone/fs/operations/copy.go:249 +0x425
github.com/rclone/rclone/fs/operations.(*copy).copy(0xc00281ecf0, {0x28c8aa0, 0xc001cc75e0})
        github.com/rclone/rclone/fs/operations/copy.go:302 +0x16c
github.com/rclone/rclone/fs/operations.Copy({0x28c8aa0?, 0xc001cc75e0}, {0x28dcbb8, 0xc00075c840}, {0x0?, 0x0}, {0xc0008c67d0, 0x4c}, {0x28daed8, 0xc0026016e0})
        github.com/rclone/rclone/fs/operations/copy.go:404 +0x425
github.com/rclone/rclone/fs/sync.(*syncCopyMove).pairCopyOrMove(0xc000a1ad80, {0x28c8aa0, 0xc001cc75e0}, 0xc0008c7c70?, {0x28dcbb8, 0xc00075c840}, 0x0?, 0x0?)
        github.com/rclone/rclone/fs/sync/sync.go:446 +0x1f6
created by github.com/rclone/rclone/fs/sync.(*syncCopyMove).startTransfers in goroutine 1
        github.com/rclone/rclone/fs/sync/sync.go:473 +0x45

Running rclone v1.66.0-beta.7579.743ea6ac2 on a new server, same issue.

There doesn't seem to be any consistency in when or why it panics.

Any other troubleshooting / logging I can do?

Well the latest beta didn't help. I tried v1.64.2 and that also kept crashing.

I tried v1.63.0 and that seems to work. At least so far after transferring 375 GB and counting...

This is definitely a bug...

The error is from this line of code:

It implies that mOut.UploadId is nil. We know mOut itself isn't nil because it is used a few lines before that.

That in turn means that the UploadId was returned as nil by the destination s3/Minio server. I think this is probably a bug in Minio.
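
To illustrate, here is a minimal, hypothetical sketch (not the actual rclone source) of how a nil UploadId in the CreateMultipartUpload response produces exactly this panic, using the AWS SDK for Go v1 types the s3 backend is built on, and the kind of guard that would turn the crash into an ordinary error instead:

package main

import (
	"errors"
	"fmt"

	"github.com/aws/aws-sdk-go/service/s3"
)

// openChunkWriter stands in for the start of a multipart upload. mOut itself
// is non-nil (it was already used a few lines earlier), but a misbehaving
// server can send a response with no UploadId, leaving mOut.UploadId nil.
func openChunkWriter(mOut *s3.CreateMultipartUploadOutput) (string, error) {
	if mOut.UploadId == nil {
		// Guarded path: report an internal error so the caller can retry
		// or fail just this file instead of crashing the whole transfer.
		return "", errors.New("internal error: no UploadId returned from CreateMultipartUpload")
	}
	// Without the guard, dereferencing *mOut.UploadId when it is nil is
	// exactly the "invalid memory address or nil pointer dereference" above.
	return *mOut.UploadId, nil
}

func main() {
	// Simulate Minio returning a multipart-create response without an UploadId.
	id, err := openChunkWriter(&s3.CreateMultipartUploadOutput{})
	fmt.Println(id, err)
}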

Can you try this? It will write an internal error to the log when the problem happens and attempt to retry, though I note you have --low-level-retries 1, which means it won't be able to retry. It should just fail that file gracefully rather than crashing!

v1.66.0-beta.7579.9c2b2a199.fix-s3-multipart-no-uploadid on branch fix-s3-multipart-no-uploadid (uploaded in 15-30 mins)

Ideally we'd debug this with -vv --dump responses, but that would dump the file data from Dropbox into the log too, which would make it completely unreadable. -vv --dump headers would be useful though: before the internal error you should see the request and response for creating the multipart upload.
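
For example, adapting the original command (the log file name here is just an illustration):

rclone copy dropbox-crypt:Media s3:Media --checksum --tpslimit 12 -vv --dump headers --log-file /home/user/headers.log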

Sure. So you'd like me to try the new binary with -vv and --dump headers or the old "buggy" binary?

Try the new binary. It might work! Even if it does there will be stuff in the log you can look at.

OK, well I tried with the new binary, and I notice that it's re-uploading existing files.

I've noticed that if I use --no-traverse it does not re-copy files, but it also takes a long time to start transferring. I've read:

If you are only copying a small number of files (or are filtering most of the files) and/or have a large number of files on the destination then --no-traverse will stop rclone listing the destination and save time.

However, if you are copying a large number of files, especially if you are doing a copy where lots of the files under consideration haven't changed and won't need copying then you shouldn't use --no-traverse.

I'm trying to run rclone copy dropbox:Path/Sub s3:Path/Sub. The source is approx 10 TiB and 38k files, so per the docs --no-traverse should not be used, but without it files are being re-copied.

I'm using the --checksum flag, which effectively falls back to a size-only comparison here, in both commands.

ncw.log is without --no-traverse. Not much there because I cancelled it once I saw it was re-uploading existing files.
ncw.log (3.0 MB)

ncwnotraverse.log is in progress and I'll upload it when I can. It's been running for 20 minutes, it's through about a third of the Sub2 directories, and I'm about to jump on a plane.

I've also experimented and found that if I run rclone copy dropbox:Path/Sub1/Sub2 s3:Path/Sub1/Sub2 it skips existing files, but of course it's a bit of a hassle to do that for every "Sub2" directory.

I didn't see the error in the log so it looked like it wasn't triggered.

Well, I'll continue troubleshooting. It seems to be working OK for now. It looks like there was a % in at least one of the directory / file names.

I'll continue to experiment with it.

