I am receiving the following error on a file related to some code I was working on yesterday:
2024/04/25 14:37:21 INFO : Checking potential conflicts...
2024/04/25 14:37:23 NOTICE: Local file system at /home/saltysyncs/sync: 0 differences found
2024/04/25 14:37:23 NOTICE: Local file system at /home/saltysyncs/sync: 2 hashes could not be checked
2024/04/25 14:37:23 NOTICE: Local file system at /home/saltysyncs/sync: 2 matching files
2024/04/25 14:37:23 INFO : Finished checking the potential conflicts. %!s(<nil>)
2024/04/25 14:37:23 NOTICE: - WARNING New or changed in both paths - code/blah/a4/lifeco-jms/task1/core/target/classes/service/core/AbstractQuotationService.class
2024/04/25 14:37:23 INFO : code/blah/a4/lifeco-jms/task1/core/target/classes/service/core/AbstractQuotationService.class: Path2 is newer. Path1: 2024-04-24 17:04:53 +0000 UTC, Path2: 2024-04-24 17:09:25.28472562 +0000 UTC, Difference: 4m32.28472562s
2024/04/25 14:37:23 INFO : code/blah/a4/lifeco-jms/task1/core/target/classes/service/core/AbstractQuotationService.class: The winner is: Path2
2024/04/25 14:37:23 NOTICE: - Path1 Renaming Path1 copy - idrive-sync-crypt:/code/blah/a4/lifeco-jms/task1/core/target/classes/service/core/AbstractQuotationService.class.conflict1
2024/04/25 14:37:24 ERROR : Bisync critical error: idrive-sync-crypt:/ rename failed for idrive-sync-crypt:/code/blah/a4/lifeco-jms/task1/core/target/classes/service/core/AbstractQuotationService.class: MethodNotAllowed: Method Not Allowed
status code: 405, request id: 17C9896F31B9809A, host id:
2024/04/25 14:37:24 ERROR : Bisync aborted. Must run --resync to recover.
I just added the --resilient and --conflict-resolve arguments yesterday in the hope of preventing the many conflicts that seem to occur when working on code files like this, Jupyter notebooks, etc. I rarely have both my laptop and desktop powered on at the same time, which I thought would mitigate the risk of conflicts, but they still occur frequently even when only my laptop is bisyncing to IDrive every 5 minutes.
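For reference, the scheduled command is roughly the following (the filter file path is illustrative, and I may be misremembering the exact flag order):

rclone bisync /home/saltysyncs/sync idrive-sync-crypt:/ --conflict-resolve newer --resilient --recover --filter-from /home/saltysyncs/bisync-filter.txt -v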
Is there something I can do to prevent this from recurring? I am using rclone to sync files between my two computers via IDrive. With this in mind, am I using the optimal conflict settings?
This looks like an s3 error to me... I'm curious if it behaves any differently without the trailing slash? ("idrive-sync-crypt:" not "idrive-sync-crypt:/")
If not, I'd also be interested to know if removing no_check_bucket = true makes any difference.
If neither of those make a difference, it would be helpful to see a log with -vv --dump headers (make sure to redact any sensitive info first.)
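For example, something along these lines, keeping whatever other flags you normally use and redirecting the output to a file:

rclone bisync /home/saltysyncs/sync idrive-sync-crypt:/ --conflict-resolve newer --resilient --recover -vv --dump headers 2>&1 | tee bisync-debug.log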
Using --dry-run with the trailing slash removed from idrive-sync-crypt: worked fine, but it failed again on a normal run. I had to check why I was using no_check_bucket = true, and I can see now that it is required when the access key does not have access to all buckets. Are you suggesting I create a new access key to test without no_check_bucket?
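For reference, the relevant parts of my rclone config look roughly like this (the underlying remote name, bucket, endpoint, and keys are placeholders):

[idrive-s3]
type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint = <my-idrive-e2-endpoint>
no_check_bucket = true

[idrive-sync-crypt]
type = crypt
remote = idrive-s3:<bucket>/sync
password = XXX
password2 = XXX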
Also, is there a script or known terminal command to redact or scramble the names of all files in the log?
I tried to recreate this exact setup with iDrive just now, but so far I have not been able to reproduce this error. A debug log would be very helpful.
Not that I know of, and sometimes the filenames themselves are relevant to identifying the issue. (When I said "sensitive info" above, I was talking about things like authorization credentials which HTTP headers can sometimes contain.)
If the filenames are sensitive, I would suggest creating a new directory with dummy (non-sensitive) filenames, reproducing the error there, and then uploading that debug log.
The problem is that we do not know exactly what caused it. I will figure out a way to strip all the filenames with bash and share the full log. That is probably easier than finding the small number of filenames that contain my name or other personal info.
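Something like this is what I had in mind, a rough and untested sketch that maps each distinct filename to a short hash so the same name always gets the same replacement (the extension list is just an example):

# Assumes GNU sed/md5sum and filenames without sed metacharacters.
grep -oE '[^ /:]+\.(java|class|py|ipynb)' bisync.log | sort -u | while read -r name; do
    repl="file-$(printf '%s' "$name" | md5sum | cut -c1-8)"
    sed -i "s|$name|$repl|g" bisync.log
done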
Also, since I am holding off on --resync so we can troubleshoot the cause: when we are finished, will there be some issue because I have continued making changes locally on my laptop and desktop (not to the same files), or should everything just bisync as normal afterwards, even though there will be three different states across my two systems and IDrive?
In case this is helpful, I can tell you that the error is happening at this line in the code:
The file idrive-sync-crypt:/code/blah/a4/lifeco-jms/task1/core/target/classes/service/core/AbstractQuotationService.class has changed on both paths, and the Path2 version is newer, so bisync is trying to rename the Path1 version to idrive-sync-crypt:/code/blah/a4/lifeco-jms/task1/core/target/classes/service/core/AbstractQuotationService.class.conflict1, and it is this rename operation that is failing.
The error is MethodNotAllowed, which is why it would be helpful to have the --dump headers output to see which method it was trying to call.
We can try this first, and hopefully it will be sufficient to identify the problem.
Since you are using --recover, bisync will keep "retrying" these changes until a run is successful. And since you are using --conflict-resolve newer, the newest version should win (and the others should be renamed), in the event of conflicts. Assuming this is what you want, you should be ok.
I think I have caught all filenames now. Can you please tell me if there is anything else related to my remote or config that this command will output that I should also redact?
It's very hard to say without seeing it, unfortunately... I would say that if there is sensitive info it is likely to be in the HTTP header sections, which will start with lines like this in the log:
If you want to err on the safe side you could just redact anything that looks potentially sensitive (usually long strings of random characters). We can start with the minimum amount of info and make adjustments if it's too redacted to work with.
Okay thanks, I hope this attached log helps. Anything inside code/blah has been left as is. Where a file path and name have been replaced, the same replacement is used whenever that path and name occur again later in the log. HTTP requests and responses have been redacted. Please let me know if it is necessary to provide the original text for anything redacted.
As the log is larger than 4 MB and I cannot upload zips here, I added .log to the end of the file name, which should be removed.
If you are concerned about posting publicly but, for example, trust @nielash to check your logs, you can always send them via a PM, as the raw logs are sometimes the easiest way to fix a problem.
I completely understand if you don't want to do that either; I'm just offering it up as a potential way around it.
Thanks! It looks like all the --dump headers sections have been deleted though (not just redacted). It would be really helpful to see the last 4 at the end, which I think is where the problem is:
You can feel free to replace sensitive info with XXX if you want, or DM me the log (good suggestion @Animosity022).
I am guessing the first request is a PUT of the new name and the second request is a DELETE of the old name. It would be helpful to confirm that and try to figure out what about the request is invalid.
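For context, a server-side rename in S3 is normally a copy to the new key followed by a delete of the old one, roughly the equivalent of the following (bucket and key names are placeholders):

aws s3api copy-object --bucket my-bucket --copy-source my-bucket/path/to/file.class --key path/to/file.class.conflict1
aws s3api delete-object --bucket my-bucket --key path/to/file.class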
One other question -- is it possible that your --filter-from has a rule that would be excluding the new name? (i.e. is it possible the destination already exists but we think it doesn't because it has been filtered out?)
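For example (just an illustration, not necessarily a rule you actually have), a filter file line like this would hide the renamed file from bisync even though it exists on the remote:

- *.conflict*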
It is returning 405 Method Not Allowed when we are expecting 404 Not Found.
I think this is because Versioning is enabled for the bucket and we have a non-null X-Amz-Version-Id.
According to this source, when you get an object by specifying a versionId in your request and the specified version is a delete marker, it returns 405 (Method Not Allowed) instead of the 404 (Not Found) that would be returned without a versionId.
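This behaviour can be seen directly with the AWS CLI pointed at a versioned bucket (bucket, key, version ID, and endpoint below are placeholders):

# HEAD of a specific version that is a delete marker returns 405 Method Not Allowed
aws s3api head-object --bucket my-bucket --key path/to/file --version-id <delete-marker-version-id> --endpoint-url https://<endpoint>
# The same request without --version-id returns 404 Not Found
aws s3api head-object --bucket my-bucket --key path/to/file --endpoint-url https://<endpoint>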
And sure enough, if I make this small change to handle 405 the same way as 404, everything works as expected.
@ncw does that all sound right to you, and do you think this is the proper fix?
Thanks for looking into this and proposing the fix.
How practical would it be for me to put something in place as a stop-gap measure until a more permanent fix is implemented?
Somewhat related, and probably best left until after the Method Not Allowed issue is resolved: a lot of the code files I am working on today are getting .conflict, .conflict2, etc. appended, without an originally named file being left in the directory, so I just manually edit the filename to get the code to work.
I am curious whether the conflicts are occurring because my bisync currently never completes due to the error, or whether this is a separate matter. If it is separate, is it normal for so many conflicts to occur between active files when they are just being synced between a local computer and remote S3, and can I tweak the command I am using to better deal with this?
I appreciate that conflicts are likely when the same files are changed on two local systems around the same time, but as I am only editing on one, I am unsure why changes are not just synced on the next run.
I can at least stop the service running the command every 5 minutes until the rename error is resolved.
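The simplest option would be to build rclone from the branch with the fix yourself, roughly as follows, assuming you have Go installed (the branch name below is a placeholder):

git clone https://github.com/rclone/rclone.git
cd rclone
git checkout <branch-with-the-fix>
go build
./rclone version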
Otherwise if you prefer I could have GitHub compile and upload beta binaries. It would be helpful if you could test the fix and verify that it solves your issue.
My guess is that this is a direct result of the s3 bug you found (the error happens when it is trying to rename one of those .conflict files.) If you try the branch with the fix and find that this issue is still happening, let me know and I can investigate further.
Assuming that ./rclone is your compiled binary, it should just work. What was the full command that you tried? Maybe there is some syntax error causing args / flags to be parsed incorrectly.
Double-check that the command includes the bisync subcommand, as the --recover flag is only valid for bisync.
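In other words, the invocation should look something like this (with your usual paths and flags):

./rclone bisync /home/saltysyncs/sync idrive-sync-crypt:/ --recover --conflict-resolve newer --resilient -v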
@nielash I thought there might be an alias or bash config issue on my side causing this, perhaps related to ., especially since, when some arguments were omitted, an error was output about cd usage, which was definitely not in the command.
After removing Oh My Bash I tried a fresh terminal with bash --noprofile --norc and receive the same (original, non-cd related) error for --recover.
Trying again without --recover gave Fatal error: unknown flag: --conflict-resolve, and only after omitting that argument as well would bisync run, though it then failed and prompted for a --resync.
I would like to resolve this with --resync, but is there anything else I could try first that would help prove this patch fixes the bug?
Edit: I notice that my locally installed v1.66 lists the --recover argument in rclone bisync --help, but this and some other arguments are missing when I use the dev version to check the docs. Could this dev version be missing some of the functionality we are trying to test?
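For reference, this is how I am comparing the two (assuming ./rclone is the binary built from the branch):

rclone version
./rclone version
rclone bisync --help | grep -E 'recover|conflict-resolve'
./rclone bisync --help | grep -E 'recover|conflict-resolve'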