Rclone Webdav bug

What is the problem you are having with rclone?

I'm using rclone to connect to a webdav server; one directory works properly, the other does not.
Both load fine in a web browser.
The directory that rclone loads properly contains ordinary jpg pictures. The "secret" directory, which isn't loaded properly, contains files whose names are sha256 hashes. See the attached picture.

Run the command 'rclone version' and share the full output of the command.

rclone v1.61.1

- os/version: arch (64 bit)
- os/kernel: 6.1.2-arch1-1 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.19.4
- go/linking: dynamic

Which cloud storage system are you using? (eg Google Drive)

webdav

The command you were trying to run (eg rclone copy /tmp remote:tmp)

$ rclone -vv lsf test:secret --dump bodies --retries 1 --low-level-retries 1

The rclone config contents with secrets removed.

[test]
type = webdav
url = http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive

A log from the command with the -vv flag

<7>DEBUG : rclone: Version "v1.61.1" starting with parameters ["rclone" "-vv" "lsf" "test:secret" "--dump" "bodies" "--retries" "1" "--low-level-retries" "1"]
<7>DEBUG : rclone: systemd logging support activated
<7>DEBUG : Creating backend with remote "test:secret"
<7>DEBUG : Using config file from "/home/ksj/.config/rclone/rclone.conf"
<7>DEBUG : found headers: 
<7>DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
<7>DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
<7>DEBUG : HTTP REQUEST (req 0xc00089ba00)
<7>DEBUG : PROPFIND /8tERmJ7R/Cloud%20Drive/secret HTTP/1.1
Host: 127.0.0.1:4443
User-Agent: rclone/v1.61.1
Depth: 1
Referer: http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive/
Accept-Encoding: gzip

<7>DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
<7>DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<7>DEBUG : HTTP RESPONSE (req 0xc00089ba00)
<7>DEBUG : 
<7>DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<7>DEBUG : pacer: low level retry 1/1 (error read tcp 127.0.0.1:60146->127.0.0.1:4443: i/o timeout)
<7>DEBUG : pacer: Rate limited, increasing sleep to 20ms
Failed to create file system for "test:secret": read metadata failed: read tcp 127.0.0.1:60146->127.0.0.1:4443: i/o timeout

This is the relevant error.

It's taking too long to read the directory by the look of it.

You can increase the timeout with --timeout and --contimeout

  --contimeout Duration                Connect timeout (default 1m0s)
  --timeout Duration                   IO idle timeout (default 5m0s)
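
For example (timeout values picked just for illustration):

$ rclone -vv lsf test:secret --contimeout 12m --timeout 60m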

I tried to increase the contimeout to 12m and timeout to 60m and got the same response.

The directory doesn't contain an infinite number of files, only 6099.
In a web browser this "secret" directory loads in about 3 seconds. The other directory, which has more than 700 files, loads in rclone in under a second.

I don't know where the issue is, but I don't think it has anything to do with timeouts.

I tried creating another directory and copying a few files from the secret directory into it; rclone listed it in about a second.
So it has nothing to do with the file names, only the number of files in the directory.
Maybe some headers need to be added to the request that aren't there.

How long does it take before you get the timeout message?

(Your logs are showing in systemd format as rclone thinks you are running under systemd. I think this is because you have a systemd-related environment variable set. If you unset it in the terminal you run rclone from, you'll get more sensible looking timestamps!)
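
If I remember correctly it is the JOURNAL_STREAM variable that triggers the systemd detection, so (assuming that is what is set in your session) something like this should give normal timestamps:

$ env -u JOURNAL_STREAM rclone -vv lsf test:secret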

Can you investigate what the browser sends? You can add new headers to rclone very easily with the --header flag.
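
For example (these particular headers are just an illustration, copy whichever ones the browser actually sends):

$ rclone -vv lsf test:secret --header 'Cache-Control: no-cache' --header 'Pragma: no-cache'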

It is possible this is a bug in the timeouts of rclone too.

Can you try the PROPGET with curl and see if you get the same result?

The whole command freezes and the log stops at:

...
<7>DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
<7>DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<7>DEBUG : HTTP RESPONSE (req 0xc0008b8800)

After the timeout (which was now 1 hour) the rest gets added:

<7>DEBUG : 
<7>DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
<7>DEBUG : pacer: low level retry 1/1 (error read tcp 127.0.0.1:49100->127.0.0.1:4443: i/o timeout)
<7>DEBUG : pacer: Rate limited, increasing sleep to 20ms
Failed to create file system for "test:secret": read metadata failed: read tcp 127.0.0.1:49100->127.0.0.1:4443: i/o timeout

It really took 1 hour before the error appeared.

Request headers are:

GET /8tERmJ7R/Cloud%20Drive/secret HTTP/1.1
Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8
Accept-Encoding: gzip, deflate, br
Accept-Language: en-US,en;q=0.9
Cache-Control: no-cache
Connection: keep-alive
Host: 127.0.0.1:4443
Pragma: no-cache
Referer: http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive
Sec-Fetch-Dest: document
Sec-Fetch-Mode: navigate
Sec-Fetch-Site: same-origin
Sec-Fetch-User: ?1
Sec-GPC: 1
Upgrade-Insecure-Requests: 1
User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36
sec-ch-ua: "Google Chrome";v="108", "Chromium";v="108", "Not=A?Brand";v="24"
sec-ch-ua-mobile: ?0
sec-ch-ua-platform: "Windows"

I don't know if any of these headers are important; I presume not. It may simply be that the webdav backend is stuck in an infinite loop somewhere and won't finish until the timeout.

I don't know what you mean by PROPGET. I generated this curl command from the browser:

curl 'http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive/secret' \
  -H 'Accept: text/html,application/xhtml+xml,application/xml;q=0.9,image/avif,image/webp,image/apng,*/*;q=0.8' \
  -H 'Accept-Language: en-US,en;q=0.9' \
  -H 'Cache-Control: no-cache' \
  -H 'Connection: keep-alive' \
  -H 'Pragma: no-cache' \
  -H 'Referer: http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive' \
  -H 'Sec-Fetch-Dest: document' \
  -H 'Sec-Fetch-Mode: navigate' \
  -H 'Sec-Fetch-Site: same-origin' \
  -H 'Sec-Fetch-User: ?1' \
  -H 'Sec-GPC: 1' \
  -H 'Upgrade-Insecure-Requests: 1' \
  -H 'User-Agent: Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/108.0.0.0 Safari/537.36' \
  -H 'sec-ch-ua: "Google Chrome";v="108", "Chromium";v="108", "Not=A?Brand";v="24"' \
  -H 'sec-ch-ua-mobile: ?0' \
  -H 'sec-ch-ua-platform: "Windows"' \
  --compressed

It seems even the bare command:

curl 'http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive/secret'

returns the result normally in less than a second.
So there is no issue with the headers.

Should I create a bug ticket on GitHub?

That isn't the query rclone is doing. It is doing a PROPGET rather than a GET. That is the way the webdav protocol works, with a set of extra HTTP verbs (PROPFIND, PROPPATCH, MKCOL, COPY, MOVE and so on). It looks very odd now; there aren't many protocols which do that.

You can do this with curl by adding -X PROPGET, and that should be more exactly what rclone is doing.

$ curl -X PROPGET 'http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive/secret'
curl: (52) Empty reply from server

It seems PROPGET doesn't work.

You'll need the Depth: 1 header I think.

Is there no auth on this webdav server?

It is the webdav server from megacmd (the official tool from the MEGA cloud) and I haven't set up any auth. I don't need it, because it will run only on localhost.

$ curl -X PROPGET -H 'Depth: 1' 'http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive/secret'
curl: (52) Empty reply from server

So looks like you should be trying PROPFIND instead of PROPGET

curl -vvv -X PROPFIND -H 'Depth: 1' 'http://127.0.0.1:4443/8tERmJ7R/Cloud%20Drive/secret'

curl.log (2.1 MB)

Here is the log of where this command got stuck. I had to interrupt the command.

Looks like you reproduced the problem with curl alone :)

I suggest you report a bug to whoever/whatever runs your webdav server with the curl command and the curl output.

I don't understand. What exactly is misconfigured? I have checked the difference between PROPFIND and GET, and now I understand PROPFIND should be used, because GET doesn't have a defined result format.

Does that mean the main issue is that mega-webdav doesn't return well-formed XML, so some elements are never closed? And because of that the problem doesn't show up with GET (which has no defined output format) but does with PROPFIND (which waits for the closing elements)?

But then I have no idea why one directory works properly even with PROPFIND and the other doesn't.

I think this is a bug in mega-webdav. The PROPFIND should finish and it doesn't with curl or rclone.
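
For context, a well-formed PROPFIND reply is a 207 Multi-Status XML document along these lines (schematic example, not the actual MEGAcmd output):

<?xml version="1.0" encoding="utf-8"?>
<D:multistatus xmlns:D="DAV:">
  <D:response>
    <D:href>/8tERmJ7R/Cloud%20Drive/secret/</D:href>
    <D:propstat>
      <D:prop><D:resourcetype><D:collection/></D:resourcetype></D:prop>
      <D:status>HTTP/1.1 200 OK</D:status>
    </D:propstat>
  </D:response>
  <!-- ...one <D:response> element per file... -->
</D:multistatus>

If the server stops sending mid-stream, the closing </D:multistatus> never arrives and the client just sits there waiting until its read times out, which matches what you are seeing.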

I would report a bug to the mega webdav project giving them the curl command line you used and see if they can help.

I already did that. It seems it is a known, unfixed issue: WEBdav buffer overflow with folders with many files · Issue #507 · meganz/MEGAcmd · GitHub

:(

You could recompile mega-cmd and increase this number (try 33554432, which is 16x the default):

static const unsigned int MAX_BUFFER_SIZE = 2097152;
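
That is, change it to something like this (33554432 is 32 MiB; where exactly the constant lives is best taken from Issue #507):

static const unsigned int MAX_BUFFER_SIZE = 33554432;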

That works, but it doesn't solve the issue for good. Anyway, this is not an issue in rclone...

Alas, I would solve this issue in rclone if I could, but I don't think it is possible here :(
