Getting slice bounds out of range error while running sync with webdav

I'm using Ubuntu 18.04.3 LTS (GNU/Linux 4.15.0-58-generic x86_64) with the latest (non-beta) rclone, downloaded yesterday.

I'm trying to sync to my cloud storage provided by TransIP (Stack storage) via WebDAV. My config:
type = webdav
url = [username] (I have https:// in front of it, but the forum won't let me post it)
vendor = owncloud
user = [username]
password = xxx

I tried vendor = other before I tried owncloud. I read somewhere that Stack is based on ownCloud, so I tried that, but it doesn't change the outcome.

When I try to do a sync, I get the following error in the log file after a few minutes:
2019/09/01 17:09:24 INFO : webdav root 'Backup/Foto': Waiting for checks to finish
2019/09/01 17:09:24 INFO : webdav root 'Backup/Foto': Waiting for transfers to finish
panic: runtime error: slice bounds out of range

goroutine 234 [running]:
github. com/rclone/rclone/backend/local.(*localOpenFile).Read(0xc0008401e0, 0xc001298000, 0x100000, 0x100000, 0xc0004ecf20, 0x43f5d3, 0xc0004ecea8)
/home/travis/gopath/src/ +0x4ac
github. com/rclone/rclone/fs/operations.(*reOpen).Read(0xc000a88780, 0xc001298000, 0x100000, 0x100000, 0x0, 0x0, 0x0)
/home/travis/gopath/src/ +0xde
github. com/rclone/rclone/lib/readers.ReadFill(0x7f5449819460, 0xc000a88780, 0xc001298000, 0x100000, 0x100000, 0x13b0d00, 0x100000, 0x7f5449819460)
/home/travis/gopath/src/ +0x79
github. com/rclone/rclone/fs/asyncreader.(*buffer).read(0xc00073b0b0, 0x7f5449819460, 0xc000a88780, 0x7f5449819460, 0xc000a88780)
/home/travis/gopath/src/ +0x58
github. com/rclone/rclone/fs/asyncreader.(*AsyncReader).init.func1(0xc00014e000)
/home/travis/gopath/src/ +0x1ab
created by github. com/rclone/rclone/fs/asyncreader.(*AsyncReader).init
/home/travis/gopath/src/ +0x18c

(Here too I placed a space between github and .com due to the link restriction for a first post.)

The command I run is:
rclone sync /mnt/storage/Foto Stack:Backup/Foto --no-update-modtime --progress --size-only --cache-db-purge --transfers=1 --cache-chunk-size=10M --cache-chunk-no-memory --max-backlog=50000 --log-file=/root/rclone.log --log-level=DEBUG --buffer-size=20M

I also tried it with only the following options:
--no-update-modtime (which isn't supported) --size-only --cache-db-purge
and it makes no difference; the error is the same every time.

Rclone version is:
rclone --version
rclone v1.49.1

  • os/arch: linux/amd64
  • go version: go1.12.9

Does anyone have an idea what I'm not seeing, or what I'm doing wrong?

That looks like a bug to me :slight_smile: Can you please make a new issue on GitHub with those details in, and I'll have a go at fixing it - thanks :slight_smile:

It looks like a bug, but it might be related to issue #2926, as my source is also a CIFS mount. I tried his "solution" but it doesn't help in my case. What I did instead was install rclone on the server that actually houses the storage. It's a very old Ubuntu install, desperately in need of a reinstall (and new disks to go with it, which is the reason for now also putting a backup on my cloud drive). Apart from storage that machine doesn't do anything anymore, and because it's such an old install it wasn't very straightforward to even get rclone installed :slight_smile:
But I did manage to get it installed after a bit of manually updating various packages (OpenSSL and everything related to it was a bit of a headache). Rclone now seems to run on it without any issues or panics.
The strange thing is why it panics on my new server, where the source is a CIFS mount. I can do an rsync to WebDAV using the same CIFS mount as the source without any issues (except for the annoying davfs cache); it works, albeit slowly. Rclone is a much more user-friendly and faster way to sync my stuff.

Weird stuff, but I'll keep you posted.

I had forgotten about that bug! Yes, from a first glance at your traceback, this is the read syscall returning something impossible, so it's very likely the same bug.

That was the issue in #2926, so I guess it isn't fixed in the kernel yet? It's most likely a bug in the SMB client kernel code.

I don't think it will be fixed, as it seems to be related to the SMB v1 protocol (the guy from the #2926 bug and I are both using SMB v1), and since that's deprecated I guess not much effort will be put into it.
The reason for making these backups is to prepare for a switch to new hardware and a fresh OS install, and a move away from SMB v1. I've worked around it for now, since I can run rclone on my old server, and I synced about 200GB of data to my cloud drive without any issues. Thanks for this great piece of software; it's exactly what I've been looking for.

Ah, I see...

Well, I'm glad you've got a workaround :slight_smile: