OpenDrive returns 400 "file_id is required"

What is the problem you are having with rclone?

When uploading files through the crypt layer I get the error "Failed to copy: file_id is required. (Error 400)". It seems to happen with some specific files, but I don't understand the reason for the failure.

Run the command 'rclone version' and share the full output of the command.

rclone v1.64.0

  • os/version: Microsoft Windows 11 Pro 22H2 (64 bit)
  • os/kernel: 10.0.22621.2283 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.21.1
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

OpenDrive

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy .\CAM\ danicryp:CAM\  --transfers=8 --checkers=4 --opendrive-chunk-size=16384k --tpslimit 15 -P

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[danicryp]
type = crypt
remote = opendrive:cloud/cifrada2
filename_encryption = obfuscate
password = XXX
password2 = XXX

[opendrive]
type = opendrive
username = XXX
password = XXX

A log from the command that you were trying to run with the -vv flag

LOGS

This looks like there was a timeout and rclone did a retry. That didn't work though because the file ID was missing:

2023/09/25 17:35:18 DEBUG : pacer: low level retry 1/10 (error Post "https://dev.opendrive.com/api/v1/upload/close_file_upload.json": net/http: timeout awaiting response headers)
2023/09/25 17:35:18 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023/09/25 17:35:20 DEBUG : pacer: Reducing sleep to 10ms
2023/09/25 17:35:20 ERROR : video_2023-05-26_00-39.mp4: Failed to copy: `file_id` is required. (Error 400)
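
If the low level retry rebuilt the close_file_upload request without carrying over the file ID it got when the upload was opened, the server would answer with exactly this 400. A purely hypothetical sketch of that failure mode (this is not rclone's actual code):

package main

import "fmt"

// Hypothetical sketch only - not rclone's real close-upload code.
// The upload gets a file ID when it is opened; the bug illustrated
// here is a retry path that rebuilds the close parameters from
// scratch and so posts an empty file_id on the second attempt.
type closeParams struct {
    FileID string `json:"file_id"`
}

func closeWithRetry(fileID string) error {
    params := closeParams{FileID: fileID}
    for attempt := 1; attempt <= 2; attempt++ {
        if params.FileID == "" {
            // what the server would reject on the retry
            return fmt.Errorf("`file_id` is required. (Error 400)")
        }
        if attempt == 1 {
            // simulate: net/http: timeout awaiting response headers
            params = closeParams{} // state lost before the retry
            continue
        }
    }
    return nil
}

func main() {
    fmt.Println(closeWithRetry("abc123")) // prints the 400 error
}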

I would guess this is a bug in rclone - will examine the code tomorrow!


I have stopped it and tried many times, and the same thing keeps happening. I don't think it's an internet problem.

Any news? Should I open a bug report?

Try increasing these parameters first:

  --contimeout Duration                Connect timeout (default 1m0s)
  --timeout Duration                   IO idle timeout (default 5m0s)

Thanks, I'll try with this command: rclone --transfers=10 --checkers=5 --tpslimit 20 --contimeout 10 --timeout 15 copy

You need to say --contimeout 10m --timeout 15m for minutes.
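
So with the command above it would be something like (same source and destination as your earlier run):

rclone copy .\CAM\ danicryp:CAM\ --transfers=10 --checkers=5 --tpslimit 20 --contimeout 10m --timeout 15m -P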


Thank you very much for helping me. I tried this solution, adding the 'm' (--contimeout 10m --timeout 15m), but I still get the same error:

2023/09/27 22:54:35 ERROR : video: Failed to copy: file_id is required. (Error 400)

I have noticed on my VPS that the 'file_id' error only occurs in the new version: with rclone v1.50.2 installed from the default Linux repositories I did not get the error, but after updating to rclone v1.64.0 I do get it.

Can you make a log of this going wrong for me please?

Add -vv --dump responses --log-file rclone.log and when it goes wrong upload rclone.log to pastebin and post a link here?
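
Using your original command as the base, something like:

rclone copy .\CAM\ danicryp:CAM\ --transfers=8 --checkers=4 --opendrive-chunk-size=16384k --tpslimit 15 -vv --dump responses --log-file rclone.log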

Thank you.


The log contains sensitive information, such as the access token. Could you tell me which fields I should remove so that my drives are not compromised?

Rclone tries not to put sensitive info in the logs but it doesn't always succeed.

If you remove any long base64 or hex strings that will do it.
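
On a Unix-like system (or WSL), a rough one-liner along these lines will blank out anything that looks like a long base64 or hex token (the 40 character cutoff is just a guess, adjust to taste):

sed -E 's|[A-Za-z0-9+/=_-]{40,}|REDACTED|g' rclone.log > rclone-redacted.log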

Or email me the file directly nick@craig-wood.com with a link to this forum post.

Following - I am also getting this error repeatedly with rclone and OpenDrive, also using crypt.

At the moment I am using rclone v1.50.2, which does not give the error. But I have the problem that some files are never completely uploaded to OpenDrive and stay stuck uploading indefinitely. As far as I understand, it doesn't seem to be a problem with the crypt layer.

I'm starting to suspect it's just OpenDrive being garbage and not processing the upload in a timely fashion. Sometimes this causes rclone to try to upload the files again from scratch, even though they then suddenly appear in OpenDrive.

The truth is I have noticed something strange: although I use v1.50.2, it gets stuck in a loop, and at the end there are files that show as not uploaded but then appear in the cloud.

I am also getting this. Any update?

Follow-up/update: I was talking with OpenDrive support and sharing some logs, and they said they were discussing it with their developers. I just got the following message back from them:


Admin Today at 18:49

We have identified the cause of this but it will take several days to roll the fix out to all servers. We will update you once this has completed.



OpenDrive support said to set the timeout to 9600, but I am not certain how that would help anything (it hasn't).

I was wondering: would setting a lower chunk size possibly help?
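
e.g. something like this, where 4096k is just a guess at "lower" compared with the 16384k used earlier in the thread:

rclone copy source remote:dest --opendrive-chunk-size=4096k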

I have mine set to 9600 and it doesn't help. Please let me know if the chunk size setting helps, and also when they tell you about the rollout of the fix.