How to use chunker to copy a file to Nextcloud (webdav)?

What is the problem you are having with rclone?

To copy files bigger than 2G to my remote Nextcloud drive (on my second RPi in the same home network) I need to split them with chunker. I created an overlay remote, but my command doesn't work: it stops transferring at 2G. Do I have to use a specific additional flag?

What is your rclone version (output from rclone version)

rclone v1.53.2

  • os/arch: linux/arm
  • go version: go1.15.3

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Raspberry Pi 4B (2GB), Debian Buster, 32 bit

Which cloud storage system are you using? (eg Google Drive)

Nextcloud (webdav)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copyto /media/pi/usb-stick/backup/kellerpi-1.img NEXTCLOUD:BACKUP/kellerpi-1_`date +"%Y-%m-%d"`.img --progress  --no-traverse --no-gzip-encoding --no-check-certificate
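As a side note, the backtick date substitution in that command can also be written with the more readable `$(...)` form. A minimal sketch of just the filename part:

```shell
# Build the dated destination filename used in the copyto command above.
# $(...) is the POSIX command-substitution form, easier to read than backticks.
DEST="kellerpi-1_$(date +%Y-%m-%d).img"
echo "$DEST"
```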

The rclone config contents with secrets removed.

[NEXTCLOUD]
type = webdav
url =
vendor = nextcloud
user = ncp
pass = VyRqm.....
no-check-certificate = true

[overlay]
type = chunker
chunk_size = 2G
hash_type = md5

Try a smaller chunk size - say 10M to get the config working first. I suspect 2G might be just too big.

I tried it with a chunk size of 100M:

[overlay]
type = chunker
chunk_size = 100M
hash_type = md5

... has no effect.

rclone copyto starts to copy the 10G file (for reasons I don't understand, it takes 5 minutes until the first bytes get transferred). After exactly 2.000G the transfer stops.

Forgive me, but is it really enough to create this chunker remote with rclone config? Doesn't there have to be some additional option or flag for the rclone copy or rclone copyto command?

Hmm, I have already overcome so many hurdles to get here ... it would be a great pity if this didn't work.

Nearly solved: the 10G file to copy was flawlessly chunked (into 1G pieces) and unchunked locally (!) into a 10G file called overlay by using this command:

rclone copyto /media/pi/usb-stick/backup/kellerpi-1.img overlay --progress --verbose --no-traverse --no-gzip-encoding --no-check-certificate

I learned in another post that I have to use the chunker remote overlay (as named in the rclone configuration) and not the underlying remote NEXTCLOUD:BACKUP.

Now I still have to bring this to the remote drive ... so I tried:

rclone copyto /media/pi/usb-stick/backup/kellerpi-1.img overlay:kellerpi-1_`date +"%Y-%m-%d"`.img --progress --verbose --no-traverse --no-gzip-encoding --no-check-certificate

But then I get the error message:

Failed to create file system for destination "overlay:": didn't find section in config file

My config file right now:

[NEXTCLOUD]
type = webdav
url =
vendor = nextcloud
user = ncp
pass = VyRqm...
no-check-certificate = true

[overlay]
type = chunker
remote = NEXTCLOUD:BACKUP/kellerpi-1
chunk_size = 1G
hash_type = md5

The solution is within reach. Any ideas?

Check rclone is using the config file you think it is. Remember each user has a different config file. You can point rclone at a specific config file with --config.

Try running

rclone listremotes

to check which remotes are defined for the same user who ran the rclone copy that failed.
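Since sudo switches to root's environment (and therefore root's config file), a quick way to compare the two views is to run the same command with and without sudo:

```shell
# Remotes defined in the current user's config file
rclone listremotes

# Remotes defined in root's config file (what "sudo rclone copy" sees)
sudo rclone listremotes
```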

Nick, you are my hero.

As you said: I did rclone config as the pi user when creating the chunker remote.

And I ran sudo rclone copyto as root ... so it was a user/config-file problem.
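In that situation you can also keep using sudo but point root's rclone at the user's config explicitly. A sketch, assuming the pi user's config is in rclone's default location (~/.config/rclone/rclone.conf):

```shell
# Run the copy as root, but read the pi user's rclone config,
# so the NEXTCLOUD and overlay remotes are found.
sudo rclone copyto /media/pi/usb-stick/backup/kellerpi-1.img \
  "overlay:kellerpi-1_$(date +%Y-%m-%d).img" \
  --config /home/pi/.config/rclone/rclone.conf \
  --progress --verbose
```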

Now it works like a charm, except:

My 10G file is chunked into 12 pieces now in a folder on my Nextcloud Drive.

How do I unchunk them into one 10G file directly on my Nextcloud Remote Drive?

PS: Nextcloud has supposedly supported files >2G even in 32-bit environments for quite some time now (I think Nextcloud "chunks" by itself when handling larger files).



I don't know the answer to that. As far as I'm aware you can't do that via webdav so it would have to be a custom thing.

I can't recall any complaints about nextcloud and large files, so I guess it does!

Ok, so I'll keep this file "chunked" on my Nextcloud remote until I need it, and then unchunk it via rclone copy from the remote to local storage.
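Unchunking on demand is then just a copy in the other direction through the chunker remote, which reassembles the pieces transparently. A sketch (the dated filename and local restore path are assumptions):

```shell
# Copy the file back through the chunker remote; the 1G pieces stored
# on Nextcloud are joined transparently during the download.
rclone copy "overlay:kellerpi-1_2020-11-20.img" \
  /media/pi/usb-stick/restore/ --progress
```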

I assume that this also works with the moveto command ...

In this case that's ok for me 'cause it's only an image backup. But e.g. how about a 5G movie file? This can't be streamed if "chunked" on the remote.

Don't other users have similar problems with large files on remote drives, or is this just a webdav thing?

It does.

You can stream it through the overlay: backend if you mount that. But not direct from nextcloud I guess.
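For example, a read-only mount of the chunker remote would let a media player see the reassembled file. A sketch (the mount point is an assumption):

```shell
# Mount the chunker remote read-only; files appear unchunked inside the mount.
mkdir -p ~/mnt/overlay
rclone mount overlay: ~/mnt/overlay --read-only --daemon
```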

I think we should probably examine your original statement

What happens when you try to copy a file bigger than 2G? Can you paste a log with -vv of whatever goes wrong? I think it should work...

This topic was automatically closed 3 days after the last reply. New replies are no longer allowed.