Option to increase Nextcloud chunk merge timeout

Hi, over a month ago I posted a question in the Help section about large file uploads resulting in a '423 LOCKED' error, which seems to be just the server taking a long time to merge the file.

The easiest solution would be to increase the time rclone waits for the merge to happen.

The --timeout and --contimeout options don't seem to make any difference here, and there was no reply to my topic, so I guess rclone doesn't have an option for this yet.

I kindly ask for this option to be introduced, as it would make my life easier. Right now I have to do some tricky error handling in my wrapper to prevent rclone from uploading the file again and again when that error happens.

You could try something like this, which will cause rclone to sleep for one minute and try again on a 423 LOCKED error.

diff --git a/backend/webdav/chunking.go b/backend/webdav/chunking.go
index 4cea79838..5d8c0ba70 100644
--- a/backend/webdav/chunking.go
+++ b/backend/webdav/chunking.go
@@ -14,6 +14,7 @@ import (
 	"io"
 	"net/http"
 	"path"
+	"time"
 
 	"github.com/rclone/rclone/fs"
 	"github.com/rclone/rclone/lib/readers"
@@ -28,7 +29,8 @@ func (f *Fs) shouldRetryChunkMerge(ctx context.Context, resp *http.Response, err
 
 	// 423 LOCKED
 	if resp != nil && resp.StatusCode == 423 {
-		return false, fmt.Errorf("merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: %w", err)
+		time.Sleep(time.Minute)
+		return true, fmt.Errorf("merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: %w", err)
 	}
 
 	return f.shouldRetry(ctx, resp, err)

Thanks! I have never done anything with Go, so I will have to learn how to make the changes and build rclone afterwards.

The chunk reassembly time for a 4.5 GB file on my server is about 7 minutes, and I have larger files in my backup set, so a fixed one-minute sleep will not help. I could increase the time to 30 minutes, which should work for most of my files, if not all.

The best option would be for rclone to check every 30 seconds or so and fail only if the error is still there after some larger, perhaps configurable, amount of time.
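
Something along these lines is what I have in mind (just a rough sketch of the idea, not real rclone code, and all the names are made up):

package main

import (
	"errors"
	"fmt"
	"time"
)

// errStillLocked stands in for the 423 LOCKED response from Nextcloud.
var errStillLocked = errors.New("423 LOCKED: chunk merge still in progress")

// checkMergeDone pretends to ask the server whether the chunks have been
// merged yet; in reality this would be the request rclone makes to finalize
// the chunked upload.
func checkMergeDone(attempt int) error {
	if attempt < 3 { // simulate a server that needs a few polls to finish
		return errStillLocked
	}
	return nil
}

// waitForMerge polls every pollInterval and gives up only once mergeTimeout
// has passed, instead of failing on the first 423.
func waitForMerge(pollInterval, mergeTimeout time.Duration) error {
	deadline := time.Now().Add(mergeTimeout)
	for attempt := 0; ; attempt++ {
		err := checkMergeDone(attempt)
		if err == nil {
			return nil // merge finished
		}
		if !errors.Is(err, errStillLocked) {
			return err // some other, real error
		}
		if time.Now().After(deadline) {
			return fmt.Errorf("gave up waiting for chunk merge after %s: %w", mergeTimeout, err)
		}
		time.Sleep(pollInterval)
	}
}

func main() {
	// Real values would be more like 30*time.Second and 30*time.Minute;
	// shorter ones here so the example finishes quickly.
	if err := waitForMerge(2*time.Second, time.Minute); err != nil {
		fmt.Println("error:", err)
		return
	}
	fmt.Println("chunks merged")
}

That way a slow merge just means more waiting, and only a merge that never finishes makes the transfer fail.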

I just posted a beta for this here

Give it a go!

Hi, Nick.
First of all, thanks for the amazing Rclone.
I suggest adding a new option/flag: nextcloud_merge_timeout (--nextcloud-merge-timeout on the command line).
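
Just a rough sketch of what I mean, assuming rclone's usual fs.Option pattern in backend/webdav/webdav.go; the name, help text and default are only my suggestion, and I have not shown how it would be wired into the chunk-merge retry:

package webdav

import (
	"time"

	"github.com/rclone/rclone/fs"
)

// nextcloudMergeTimeoutOption shows how the suggested flag could be declared
// next to the existing entries in the backend's Options slice.
var nextcloudMergeTimeoutOption = fs.Option{
	Name:     "nextcloud_merge_timeout",
	Help:     "How long to wait for Nextcloud to finish merging uploaded chunks before giving up.",
	Default:  fs.Duration(30 * time.Minute),
	Advanced: true,
}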

Thank you for your work. I have just tested it, and although the increasing sleep time works well, there is still a problem. After the 423 errors and the waiting, rclone starts getting 404 Not Found errors (it seems to be looking for the chunk files, but if the chunks have already been assembled, should it do that?), and then starts uploading the file again, despite the --retries=1 flag.

Here is the relevant log fragment:

2023/09/29 07:24:01 DEBUG : rclone: Version "v1.65.0-beta.7393.5be640f6e.fix-7109-nextcloud" starting with parameters ["C:\\PProg\\Dysk\\rclone_beta\\rclone.exe" "copy" "D:\\BackupScript\\rclone\\test_files" "nxtcld_crypt:Test" "--config=(removed)" "--transfers=4" "--contimeout=30m0s" "--timeout=30m0s" "--stats=2m0s" "--bwlimit=5M:50M" "--low-level-retries=13" "--retries-sleep=10s" "--retries=1" "-vv"]
(...)
2023/09/29 08:24:14 INFO  :
Transferred:   	    4.144 GiB / 4.143 GiB, 100%, 0 B/s, ETA -
Checks:                10 / 10, 100%
Transferred:            0 / 1, 0%
Elapsed time:      1h13.0s
Transferring:
 *                    openSUSE-12.2-DVD-i586.iso:100% /4.143Gi, 0/s, -
System.Management.Automation.RemoteException
2023/09/29 08:24:42 DEBUG : pacer: low level retry 1/13 (error <html>
<head><title>504 Gateway Time-out</title></head>
<body>
<center><h1>504 Gateway Time-out</h1></center>
<hr><center>nginx</center>
</body>
</html>: 504 Gateway Time-out)
2023/09/29 08:24:42 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023/09/29 08:24:48 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 1s to wait for chunks to be merged after 423 error
2023/09/29 08:24:49 DEBUG : pacer: low level retry 2/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:24:49 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2023/09/29 08:24:52 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 2s to wait for chunks to be merged after 423 error
2023/09/29 08:24:54 DEBUG : pacer: low level retry 3/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:24:54 DEBUG : pacer: Rate limited, increasing sleep to 80ms
2023/09/29 08:24:56 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 4s to wait for chunks to be merged after 423 error
2023/09/29 08:25:00 DEBUG : pacer: low level retry 4/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:25:00 DEBUG : pacer: Rate limited, increasing sleep to 160ms
2023/09/29 08:25:03 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 8s to wait for chunks to be merged after 423 error
2023/09/29 08:25:11 DEBUG : pacer: low level retry 5/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:25:11 DEBUG : pacer: Rate limited, increasing sleep to 320ms
2023/09/29 08:25:14 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 16s to wait for chunks to be merged after 423 error
2023/09/29 08:25:30 DEBUG : pacer: low level retry 6/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:25:30 DEBUG : pacer: Rate limited, increasing sleep to 640ms
2023/09/29 08:25:33 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 32s to wait for chunks to be merged after 423 error
2023/09/29 08:26:05 DEBUG : pacer: low level retry 7/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:26:05 DEBUG : pacer: Rate limited, increasing sleep to 1.28s
2023/09/29 08:26:09 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 1m4s to wait for chunks to be merged after 423 error
2023/09/29 08:26:14 INFO  :
Transferred:   	    4.144 GiB / 4.143 GiB, 100%, 0 B/s, ETA -
Checks:                10 / 10, 100%
Transferred:            0 / 1, 0%
Elapsed time:    1h2m13.1s
Transferring:
 *                    openSUSE-12.2-DVD-i586.iso:100% /4.143Gi, 0/s, -
System.Management.Automation.RemoteException
2023/09/29 08:27:13 DEBUG : pacer: low level retry 8/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:27:13 DEBUG : pacer: Rate limited, increasing sleep to 2s
2023/09/29 08:27:17 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 2m8s to wait for chunks to be merged after 423 error
2023/09/29 08:28:14 INFO  :
Transferred:   	    4.144 GiB / 4.143 GiB, 100%, 0 B/s, ETA -
Checks:                10 / 10, 100%
Transferred:            0 / 1, 0%
Elapsed time:    1h4m13.1s
Transferring:
 *                    openSUSE-12.2-DVD-i586.iso:100% /4.143Gi, 0/s, -
System.Management.Automation.RemoteException
2023/09/29 08:29:25 DEBUG : pacer: low level retry 9/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:29:28 NOTICE: webdav root 'Backup/Lapasus/Test': Sleeping for 4m16s to wait for chunks to be merged after 423 error
2023/09/29 08:30:14 INFO  : 
Transferred:   	    4.144 GiB / 4.143 GiB, 100%, 0 B/s, ETA -
Checks:                10 / 10, 100%
Transferred:            0 / 1, 0%
Elapsed time:    1h6m13.0s
Transferring:
 *                    openSUSE-12.2-DVD-i586.iso:100% /4.143Gi, 0/s, -
System.Management.Automation.RemoteException
2023/09/29 08:32:14 INFO  :
Transferred:   	    4.144 GiB / 4.143 GiB, 100%, 0 B/s, ETA -
Checks:                10 / 10, 100%
Transferred:            0 / 1, 0%
Elapsed time:    1h8m13.0s
Transferring:
 *                    openSUSE-12.2-DVD-i586.iso:100% /4.143Gi, 0/s, -
System.Management.Automation.RemoteException
2023/09/29 08:33:44 DEBUG : pacer: low level retry 10/13 (error merging the uploaded chunks failed with 423 LOCKED. This usually happens when the chunks merging is still in progress on NextCloud, but it may also indicate a failed transfer: "kazik/files/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g.upload.part" is locked: OCA\DAV\Connector\Sabre\Exception\FileLocked: 423 Locked)
2023/09/29 08:33:50 DEBUG : pacer: low level retry 11/13 (error File with name //rclone-chunked-upload-efd3ed6450e0ebf228ee375257cf6642 could not be located: Sabre\DAV\Exception\NotFound: 404 Not Found)
2023/09/29 08:33:53 DEBUG : pacer: low level retry 12/13 (error File with name //rclone-chunked-upload-efd3ed6450e0ebf228ee375257cf6642 could not be located: Sabre\DAV\Exception\NotFound: 404 Not Found)
2023/09/29 08:33:56 DEBUG : pacer: low level retry 13/13 (error File with name //rclone-chunked-upload-efd3ed6450e0ebf228ee375257cf6642 could not be located: Sabre\DAV\Exception\NotFound: 404 Not Found)
2023/09/29 08:33:56 DEBUG : openSUSE-12.2-DVD-i586.iso: Received error: finalize chunked upload failed, destinationURL: "https://cloud.tobehappy.club:6660/remote.php/dav/files/Kazik/Backup/Lapasus/Test/hfp6jgvlc76qsiev8h7dhk7ohuuqcoiqfns6olepjn2ju1j2da6g": File with name //rclone-chunked-upload-efd3ed6450e0ebf228ee375257cf6642 could not be located: Sabre\DAV\Exception\NotFound: 404 Not Found - low level retry 1/13
2023/09/29 08:33:59 DEBUG : pacer: Reducing sleep to 1.5s
2023/09/29 08:33:59 DEBUG : openSUSE-12.2-DVD-i586.iso: Update will use the chunked upload strategy
2023/09/29 08:34:02 DEBUG : pacer: Reducing sleep to 1.125s
2023/09/29 08:34:04 DEBUG : pacer: Reducing sleep to 843.75ms
2023/09/29 08:34:10 DEBUG : pacer: Reducing sleep to 632.8125ms
2023/09/29 08:34:14 INFO  :
Transferred:   	    4.163 GiB / 8.286 GiB, 50%, 926.472 KiB/s, ETA 1h17m46s
Checks:                10 / 10, 100%
Transferred:            0 / 1, 0%
Elapsed time:   1h10m13.1s
Transferring:
 *                    openSUSE-12.2-DVD-i586.iso:  0% /4.143Gi, 1.095Mi/s, 1h4m16s
System.Management.Automation.RemoteException
