Error report when using sync for Rclone synchronization

I previously posted an issue on GitHub, but the maintainer suggested that I post here, so here I am. The previous issue is: Is there an error in the functional design of sync? · Issue #6989 · rclone/rclone (github.com). From it I learned how sync's transfer mode works, but when I tested it, it didn't behave as I expected: it transferred and changed all the files every time, which confused me a lot. I originally found this problem while mounting two webdavs and transferring between them, but this time the transfer is from webdav to local. For this experiment, I downloaded all the files from the webdav and then deleted one of them. If sync works correctly, it should only transfer that one deleted file, but then I saw the following:

Aren't the files being transferred ones that I can already find locally? I then ended the transfer and tried the rclone config file command to find the log file, but it only showed my configuration, without any logs. I don't know why this happened, and the same thing happened with the earlier webdav-to-webdav transfer. So far it has been running for 2d13m and still hasn't finished. Moreover, my total file size is around 280GB, but the display shows 572.492GB, and by the time of posting it had grown by another 2GB. Can someone help me solve this problem? What I want is to synchronize only the few files that differ each time, instead of endlessly synchronizing like this.

Version of rclone used: rclone v1.62.2 windows amd64
Command used: rclone sync ***:/ *****:/ -P

Due to forum restrictions, the other two images are here

Rclone should only transfer files that have changed.

If it is transferring files then it thinks they have changed.

Rclone will tell you in the log why it transferred the objects.

What I suspect is happening is that the modification times are not constant on the webdav server. You can try adding --size-only to the sync/copy and see if that helps.
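For example (mywebdav: and the paths here are placeholders, substitute your own remote and directory):

rclone sync mywebdav:/videos D:\videos --size-only -P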

We really need to see a log with -vv to debug this.

OK, I will try it.

There is a new problem: the command produced log output, but I can't find the log file. Where should I look for it? Thanks for your reply.

hi,

add -vv to the command, run the command and post the full output.
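for example, assuming a remote named mywebdav: and a local destination (both placeholders):

rclone sync mywebdav:/videos D:\videos -P -vv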

please do not post tiny, hard-to-read screenshots, just paste the text output into the forum.
or use pastebin.com

I thought it had its own log output file. Damn, when I ran the command I should have added the option to write the output to a file...

try something like
--log-level=DEBUG --log-file=/path/to/rclone.log
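so a full command might look like this (remote name and paths are placeholders, adjust for your setup):

rclone sync mywebdav:/videos D:\videos -P --log-level=DEBUG --log-file=C:\logs\rclone.log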

I am currently testing whether the --size-only flag helps, so I would like to wait for that run to finish first, so maybe next time?

I saw many 423 Locked errors. Why did this happen?

the easiest way to communicate is with debug output from your commands.

from the debug output, post the top 20 lines.

copy/paste the exact output from the debug output.

Okay, thank you for your reply. I will add the log file output after the current command finishes running, and then reply here. This will take approximately eight hours.

if you can find a file that always fails, then you can test with just that one file.
or test using a subdir

fwiw, it is possible to view the log file while rclone is running,
so after the first 423 Locked, you can kill rclone.
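for example (placeholder names; rclone can copy a single file if you give its full path as the source):

rclone copy mywebdav:/videos/somefile.mp4 D:\test -vv
rclone sync mywebdav:/videos/somedir D:\videos\somedir -vv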

After more than ten hours of running, about 30MB of logs were generated, but not many of them were useful. There was only one locked file, and after retrying for I don't know how long, it was actually transferred successfully, as if the problem had solved itself? I don't know; it seems the more parameters I add, the fewer errors I get. Because there are too many logs, I have picked out a few potentially useful lines:

2023-05-08 13:22:51 DEBUG : pacer: low level retry 1/10 (error Locked: 423 Locked)
2023-05-08 13:23:25 DEBUG : 05301.mp4: Received error: Locked: 423 Locked - low level retry 2/10
2023-05-08 14:35:24 ERROR : 026bea430eb2e886ba1c253ea6f6d1eb.mp4: Failed to copy: object not found
2023-05-08 14:36:02 ERROR : 09d88ce11a031a969cf97365f81b11e6.mp4: Failed to copy: Method Not Allowed: 405 Method Not Allowed
2023-05-08 14:55:39 ERROR : 09ecb058a149ed6d159895c310f4c0e2.mp4: Failed to copy: Method Not Allowed: 405 Method Not Allowed
2023-05-08 16:00:05 ERROR : 09169b98722630e08411247a60f03045.mp4: Failed to copy: Method Not Allowed: 405 Method Not Allowed

By the way, I tested transferring the single file that was locked, and there was no problem. I also have a question: is there a comparison option like --size-only that compares only the file name? I guess in cases where there are many files but only a few dozen differ, comparing just the file names would speed up the checking.

A new discovery! When I was doing a sync test with another piece of software, it said that the WebDAV path could not contain Chinese characters. I was guessing whether that was the cause of this problem, and I was ready to test it.

Rclone does lots of retries to clean up temporary errors.

It looks like this one was successfully retried.
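The retry behaviour is controlled by two flags, shown here with their default values (the remote and path are placeholders):

rclone sync mywebdav:/videos D:\videos --retries 3 --low-level-retries 10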

I would guess this is a temporary object of some kind which got deleted during the sync - is that possible?

The closest to that is --size-only. I don't think it will speed up the sync much though.
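For reference, these are the comparison modes (src: and dst: are placeholder remotes):

rclone sync src: dst:               # default: compare size + modification time
rclone sync src: dst: --size-only   # compare size only
rclone sync src: dst: --checksum    # compare hash + size instead of modification time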

Perhaps. I think this because I used to mount the webdav locally and use it to copy the files that differed; as far as I know, local file copying compares only file names, the results were good, and it was fast, so I thought this would speed up the transfer.

Perhaps this is due to an issue with the WebDAV service provided by the Alist software. I did not have this issue when using Rclone with OneDrive added directly. Also, does Rclone support adding China's 123 cloud disk and Tianyi cloud disk?
