Rclone 1.52 release

I didn't upload the source maps; you don't need them unless you are debugging the JS or CSS.

The docker build is fixed now 🙂


Hello, the latest docker build crashes for me; it was working with the previous beta from 15 days ago.

Host: Ubuntu 20.04, kernel 5.4.0-33-generic

runtime: mlock of signal stack failed: 12
runtime: increase the mlock limit (ulimit -l) or
runtime: update your kernel to 5.3.15+, 5.4.2+, or 5.5+
fatal error: mlock failed

runtime stack:
runtime.throw(0x1967414, 0xc)
/usr/local/go/src/runtime/panic.go:1112 +0x72
runtime.mlockGsignal(0xc002270000)
/usr/local/go/src/runtime/os_linux_x86.go:72 +0x107
runtime.mpreinit(0xc0003dfc00)
/usr/local/go/src/runtime/os_linux.go:341 +0x78
runtime.mcommoninit(0xc0003dfc00)
/usr/local/go/src/runtime/proc.go:630 +0x108
runtime.allocm(0xc00005f000, 0x1a09730, 0x2a0c78c4a6c1)
/usr/local/go/src/runtime/proc.go:1390 +0x14e
runtime.newm(0x1a09730, 0xc00005f000)
/usr/local/go/src/runtime/proc.go:1704 +0x39
runtime.startm(0x0, 0xc000001e01)
/usr/local/go/src/runtime/proc.go:1869 +0x12a
runtime.wakep(...)
/usr/local/go/src/runtime/proc.go:1953
runtime.resetspinning()
/usr/local/go/src/runtime/proc.go:2415 +0x93
runtime.schedule()
/usr/local/go/src/runtime/proc.go:2527 +0x2de
runtime.park_m(0xc000001c80)
/usr/local/go/src/runtime/proc.go:2690 +0x9d
runtime.mcall(0x0)
/usr/local/go/src/runtime/asm_amd64.s:318 +0x5b
goroutine 1 [select, 3 minutes]:
github.com/rclone/rclone/cmd/mount.Mount(0x1d62f20, 0xc0021c4500, 0x7fff9fff0f4a, 0x5, 0x195a908, 0x1)
/go/src/github.com/rclone/rclone/cmd/mount/mount.go:155 +0x364
github.com/rclone/rclone/cmd/mountlib.NewMountCommand.func1(0xc0001cab00, 0xc00036fc20, 0x2, 0x11)
/go/src/github.com/rclone/rclone/cmd/mountlib/mount.go:349 +0x29a
github.com/spf13/cobra.(*Command).execute(0xc0001cab00, 0xc00036fb00, 0x11, 0x12, 0xc0001cab00, 0xc00036fb00)
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:846 +0x29d
github.com/spf13/cobra.(*Command).ExecuteC(0x292a500, 0x5ed135b9, 0x29453c0, 0xc00009e058)
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:950 +0x349
github.com/spf13/cobra.(*Command).Execute(...)
/go/pkg/mod/github.com/spf13/cobra@v1.0.0/command.go:887
github.com/rclone/rclone/cmd.Main()
/go/src/github.com/rclone/rclone/cmd/cmd.go:511 +0x92
main.main()
/go/src/github.com/rclone/rclone/rclone.go:14 +0x20
goroutine 33 [select, 2 minutes]:
go.opencensus.io/stats/view.(*worker).start(0xc000144aa0)
/go/pkg/mod/go.opencensus.io@v0.22.3/stats/view/worker.go:154 +0x100
created by go.opencensus.io/stats/view.init.0
/go/pkg/mod/go.opencensus.io@v0.22.3/stats/view/worker.go:32 +0x57
goroutine 66 [IO wait, 2 minutes]:
internal/poll.runtime_pollWait(0x7fe441ffa370, 0x72, 0xffffffffffffffff)
/usr/local/go/src/runtime/netpoll.go:203 +0x55
internal/poll.(*pollDesc).wait(0xc00213a018, 0x72, 0xf00, 0xf59, 0xffffffffffffffff)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:87 +0x45
internal/poll.(*pollDesc).waitRead(...)
/usr/local/go/src/internal/poll/fd_poll_runtime.go:92
internal/poll.(*FD).Read(0xc00213a000, 0xc0021bc000, 0xf59, 0xf59, 0x0, 0x0, 0x0)
/usr/local/go/src/internal/poll/fd_unix.go:169 +0x19b
net.(*netFD).Read(0xc00213a000, 0xc0021bc000, 0xf59, 0xf59, 0x1ac51bf43c85c9ab, 0x581ee08adab9f8e2, 0xde3edc62855c038f)
/usr/local/go/src/net/fd_unix.go:202 +0x4f
net.(*conn).Read(0xc00011e0f0, 0xc0021bc000, 0xf59, 0xf59, 0x0, 0x0, 0x0)
/usr/local/go/src/net/net.go:184 +0x8e
github.com/rclone/rclone/fs/fshttp.(*timeoutConn).readOrWrite(0xc0021a8000, 0xc000337830, 0xc0021bc000, 0xf59, 0xf59, 0x203000, 0x8, 0x40c546)
/go/src/github.com/rclone/rclone/fs/fshttp/http.go:75 +0x48
github.com/rclone/rclone/fs/fshttp.(*timeoutConn).Read(0xc0021a8000, 0xc0021bc000, 0xf59, 0xf59, 0xc000337918, 0x40c546, 0x10)
/go/src/github.com/rclone/rclone/fs/fshttp/http.go:87 +0x8a
crypto/tls.(*atLeastReader).Read(0xc000296340, 0xc0021bc000, 0xf59, 0xf59, 0x28, 0xc00218b710, 0xc000337918)
/usr/local/go/src/crypto/tls/conn.go:760 +0x60
bytes.(*Buffer).ReadFrom(0xc000100958, 0x1d10820, 0xc000296340, 0x40a495, 0x16e9f80, 0x18591c0)
/usr/local/go/src/bytes/buffer.go:204 +0xb1
crypto/tls.(*Conn).readFromUntil(0xc000100700, 0x7fe441ffa638, 0xc0021a8000, 0x5, 0xc0021a8000, 0xc00011f2e0)
/usr/local/go/src/crypto/tls/conn.go:782 +0xec
crypto/tls.(*Conn).readRecordOrCCS(0xc000100700, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:589 +0x115
crypto/tls.(*Conn).readRecord(...)
/usr/local/go/src/crypto/tls/conn.go:557
crypto/tls.(*Conn).Read(0xc000100700, 0xc0021bd000, 0x1000, 0x1000, 0x0, 0x0, 0x0)
/usr/local/go/src/crypto/tls/conn.go:1233 +0x15b
net/http.(*persistConn).Read(0xc0000c0fc0, 0xc0021bd000, 0x1000, 0x1000, 0xc000191020, 0xc000337c70, 0x404e35)
/usr/local/go/src/net/http/transport.go:1825 +0x75
bufio.(*Reader).fill(0xc002191f20)
/usr/local/go/src/bufio/bufio.go:100 +0x103
bufio.(*Reader).Peek(0xc002191f20, 0x1, 0x0, 0x0, 0x1, 0xc000152300, 0x0)
/usr/local/go/src/bufio/bufio.go:138 +0x4f
net/http.(*persistConn).readLoop(0xc0000c0fc0)
/usr/local/go/src/net/http/transport.go:1978 +0x1a8
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1647 +0xc56
goroutine 67 [select, 2 minutes]:
net/http.(*persistConn).writeLoop(0xc0000c0fc0)
/usr/local/go/src/net/http/transport.go:2277 +0x11c
created by net/http.(*Transport).dialConn
/usr/local/go/src/net/http/transport.go:1648 +0xc7b
goroutine 38 [select, 2 minutes]:
github.com/rclone/rclone/backend/drive.(*Fs).ChangeNotify.func1(0xc0000d48c0, 0xc001e37aa0, 0x1d4b0c0, 0xc00018e008, 0xc0020b6400)
/go/src/github.com/rclone/rclone/backend/drive/drive.go:2586 +0x147
created by github.com/rclone/rclone/backend/drive.(*Fs).ChangeNotify
/go/src/github.com/rclone/rclone/backend/drive/drive.go:2577 +0x67
goroutine 64 [select, 2 minutes]:
github.com/rclone/rclone/vfs/vfscache.(*Cache).cleaner(0xc00065cf50, 0x1d4b080, 0xc000677c80)
/go/src/github.com/rclone/rclone/vfs/vfscache/vfscache.go:540 +0x149
created by github.com/rclone/rclone/vfs/vfscache.New
/go/src/github.com/rclone/rclone/vfs/vfscache/vfscache.go:75 +0x2a1
goroutine 81 [syscall, 2 minutes]:
syscall.Syscall(0x0, 0xc, 0xc002232000, 0x21000, 0x3, 0x30, 0x17488e0)
/usr/local/go/src/syscall/asm_linux_amd64.s:18 +0x5
syscall.read(0xc, 0xc002232000, 0x21000, 0x21000, 0x28, 0x0, 0x0)
/usr/local/go/src/syscall/zsyscall_linux_amd64.go:686 +0x5a
syscall.Read(...)
/usr/local/go/src/syscall/syscall_unix.go:189
bazil.org/fuse.(*Conn).ReadRequest(0xc0021ea180, 0x1a05418, 0xc002146000, 0x1d3c200, 0xc0007ad740)
/go/pkg/mod/bazil.org/fuse@v0.0.0-20200117225306-7b5117fecadc/fuse.go:578 +0xe0
bazil.org/fuse/fs.(*Server).Serve(0xc002146000, 0x1d11d00, 0xc002132620, 0x0, 0x0)
/go/pkg/mod/bazil.org/fuse@v0.0.0-20200117225306-7b5117fecadc/fs/serve.go:414 +0x36e
github.com/rclone/rclone/cmd/mount.mount.func1(0xc002146000, 0xc002132620, 0xc0021ea180, 0xc0001856e0)
/go/src/github.com/rclone/rclone/cmd/mount/mount.go:101 +0x45
created by github.com/rclone/rclone/cmd/mount.mount
/go/src/github.com/rclone/rclone/cmd/mount/mount.go:100 +0x2ee
goroutine 82 [syscall, 3 minutes]:
os/signal.signal_recv(0x0)
/usr/local/go/src/runtime/sigqueue.go:147 +0x9c
os/signal.loop()
/usr/local/go/src/os/signal/signal_unix.go:23 +0x22
created by os/signal.Notify.func1
/usr/local/go/src/os/signal/signal.go:127 +0x44

I had that problem when I was testing...

What you need to do is add --ulimit memlock=67108864 to your docker command line.
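
For example (a sketch only: the image tag, remote name, and host paths below are placeholders, not taken from this thread), the flag sits alongside the usual FUSE options on the docker run line:

    # Raise the locked-memory limit so the Go runtime's mlock workaround can succeed.
    # Everything except --ulimit memlock=67108864 is just a typical rclone mount setup.
    docker run --rm \
      --ulimit memlock=67108864 \
      --cap-add SYS_ADMIN --device /dev/fuse \
      -v ~/.config/rclone:/config/rclone \
      -v /mnt/data:/data:shared \
      rclone/rclone:1.52 mount remote: /data/remote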

There is a very long thread about it on the Go issue, but to summarise:

  • there is a bug in certain versions of the Linux kernel
  • this causes memory corruption in Go 1.14 programs (BAD)
  • Go engages a workaround (GOOD)
  • which fails in this case because of ulimits (BAD)
  • some vendor kernels have the fix but haven't updated the kernel patch number, so Go doesn't know it doesn't have to engage the workaround
  • this is exacerbated in Docker, which runs with very low ulimits

Oh ok, thanks for the explanation, I will wait for the kernel fix then.

If you are running a recent vendor kernel, you might find it has the patch already but Go isn't detecting it. See the Go issue thread for more info!
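
If you want to check where you stand, a quick sketch (using the rclone/rclone image and its shell is an assumption here, not something from this thread): compare the host kernel version against the versions listed in the error, and look at the locked-memory limit a container actually gets:

    # On the host: Go only trusts 5.3.15+, 5.4.2+ or 5.5+ from the version string,
    # so a patched vendor kernel that still reports e.g. 5.4.0-33 triggers the workaround.
    uname -r

    # Inside a container: shows the "Max locked memory" limit in bytes; the docker
    # default is typically 65536 (64 KiB), far below the 67108864 suggested above.
    docker run --rm --entrypoint sh rclone/rclone:1.52 -c 'grep "locked memory" /proc/self/limits'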

Awesome set of features. This will be very useful.


You'll still get iowait, but that is because rclone is making the kernel wait for data to be fetched over the network with async reads. You can use --async-read=false to turn off async reading, which will get rid of the iowait at the cost of some performance.
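
If it helps, a minimal sketch of where that flag goes (the remote name and mount point are placeholders):

    # Disable asynchronous read-ahead in the FUSE mount; reads become synchronous,
    # trading some throughput for no more iowait stuck on background fetches.
    rclone mount remote: /mnt/remote --async-read=false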

