Not deleting as copy failed: failed to open source object

2016/10/28 21:37:41 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/5Imdo4VPL1DOD-E1oVKXG,xqZoraAmQFgEBPAatSjKMBk,: Error detected after finished upload - waiting to see if object was uploaded correctly: HTTP code 409: "409 Conflict", reponse body: {"logref":"fc6e9af1-9d45-11e6-a2ed-e9c69863280d","message":"Node with the name 5imdo4vpl1dod-e1ovkxg,xqzoraamqfgebpaatsjkmbk, already exists under parentId SmAReBvXQeCXo8mu3M72iw conflicting NodeId: 9p714qYFTzaDNyOOvHe64A","code":"NAME_ALREADY_EXISTS","info":{"nodeId":"9p714qYFTzaDNyOOvHe64A"}}
2016/10/28 21:37:42 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/5Imdo4VPL1DOD-E1oVKXG,xqZoraAmQFgEBPAatSjKMBk,: Object found with correct size 3518261331 after waiting (1/44) - 0 - returning with no error
2016/10/28 21:37:42 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/5Imdo4VPL1DOD-E1oVKXG,xqZoraAmQFgEBPAatSjKMBk,: Failed to read src hash: hash: failed to stat: lstat /home/storage/.media-local/uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/5Imdo4VPL1DOD-E1oVKXG,xqZoraAmQFgEBPAatSjKMBk,: no such file or directory
2016/10/28 21:37:42 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/5Imdo4VPL1DOD-E1oVKXG,xqZoraAmQFgEBPAatSjKMBk,: Copied (new)
2016/10/28 21:37:42 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/5Imdo4VPL1DOD-E1oVKXG,xqZoraAmQFgEBPAatSjKMBk,: Not deleting as copy failed: hash: failed to stat: lstat /home/storage/.media-local/uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/5Imdo4VPL1DOD-E1oVKXG,xqZoraAmQFgEBPAatSjKMBk,: no such file or directory
2016/10/28 21:37:42 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/Lk9SIGUNZ3nFAyknTuH7Gn7ECbdX33lf1E-asVk9RVKdQ0: Failed to copy: failed to open source object: open /home/storage/.media-local/uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/Lk9SIGUNZ3nFAyknTuH7Gn7ECbdX33lf1E-asVk9RVKdQ0: no such file or directory
2016/10/28 21:37:42 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/Lk9SIGUNZ3nFAyknTuH7Gn7ECbdX33lf1E-asVk9RVKdQ0: Not deleting as copy failed: failed to open source object: open /home/storage/.media-local/uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/Lk9SIGUNZ3nFAyknTuH7Gn7ECbdX33lf1E-asVk9RVKdQ0: no such file or directory

command:
/usr/sbin/rclone move /home/storage/.media-local/ acd-rclone:/encrypted -v -c --min-age 15m --transfers=100 --checkers=20 --delete-after --log-file=/var/log/flix/rclone.log

I am also getting:

2016/10/28 21:44:08 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/H2RSLOi2Jgw-nYHWTtAzW8e6lFCpYfnr5O2FCimVjwvdC,: Object not found - waiting (13/44)

2016/10/28 21:46:39 uNliDtMkT7HeihH7uM5180xg/9M9cV570xFzsxrxtC9Ku58qo/XmOJ43gTW9UXKvqGib3YmeZG/H2RSLOi2Jgw-nYHWTtAzW8e6lFCpYfnr5O2FCimVjwvdC,: Object not found - waiting (40/44)
2016/10/28 21:46:42 uNliDtMkT7HeihH7uM5180xg/5H1wj0bW2VIyuG0OcSoTK4KA/UhRilx6ElegsxKFSyjHUZPZd/JCFEasBrGIfML9GlmViTC7MtjbsHNUhRSvKFLFRekAfUQ0: Error detected after finished upload - waiting to see if object was uploaded correctly: HTTP code 409: "409 Conflict", reponse body: {"logref":"3ef34dc3-9d47-11e6-b996-735a20d07393","message":"Node with the name jcfeasbrgifml9glmvitc7mtjbshnuhrsvkflfrekafuq0 already exists under parentId iqgGJDMdSO6G3wPm_6GG_A conflicting NodeId: DpxMip_WTJSTXQdU0DZ90A","code":"NAME_ALREADY_EXISTS","info":{"nodeId":"DpxMip_WTJSTXQdU0DZ90A"}}

All of these are consequences of Amazon Drive’s handling of big file uploads.

Here is the explanation from the docs.

I suggest you try the latest beta if you are having problems with this: http://beta.rclone.org/v1.33-85-g6846a1c/

--acd-upload-wait-time=TIME, --acd-upload-wait-per-gb=TIME, --acd-upload-wait-limit=TIME

Sometimes Amazon Drive gives an error when a file has been fully
uploaded but the file appears anyway after a little while. This
happens sometimes for files over 1GB in size and nearly every time for
files bigger than 10GB. These parameters control the time rclone waits
for the file to appear.

If the upload took less than --acd-upload-wait-limit (default 60s),
then we go ahead and upload it again, as that will be quicker.

We wait --acd-upload-wait-time (default 2m) for the file to appear,
with an additional --acd-upload-wait-per-gb (default 30s) per GB of
the uploaded file.

These values were determined empirically by observing lots of uploads
of big files for a range of file sizes.
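Concretely, under the defaults quoted above (2m base wait plus 30s per GB), the maximum wait can be sketched with a little shell arithmetic. The numbers here are just an illustration of the formula from the docs, not output from rclone:

```shell
# Sketch of the maximum wait under the defaults quoted above:
# --acd-upload-wait-time (2m) plus --acd-upload-wait-per-gb (30s) per GB.
size_gb=10                              # example: a 10 GB upload
wait_s=$(( 120 + 30 * size_gb ))        # 120s base + 30s per GB
echo "max wait: ${wait_s}s"             # prints "max wait: 420s" (7 minutes)
```

So a 10 GB file could be waited on for up to about 7 minutes before rclone gives up.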

Upload with the -v flag to see more info about what rclone is doing
in this situation.

Thanks, I upgraded to the latest from GitHub and added --acd-upload-wait-time=10m.

Thanks for this info, it was very useful.

Is it possible to terminate the upload when it reaches 100%, which would allow bypassing the waiting time before uploading the next file?

Not at the moment… You can interrupt rclone (CTRL-C), but then it won’t upload more files. You could also increase --transfers, which would help.

On this note I would like to thank you @ncw for this great tool you have made! It is really awesome. Will --transfers help? How should I use it?

Give it a go - just add --transfers 16 to the command line to increase the number of simultaneous transfers from 4 to 16.
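For illustration only, applying that to the move command quoted earlier in the thread (your own paths and remote name will differ) would look like:

```shell
# Same move as earlier in the thread, with --transfers raised to 16
/usr/sbin/rclone move /home/storage/.media-local/ acd-rclone:/encrypted \
  -v -c --min-age 15m --transfers 16 --checkers=20 --delete-after \
  --log-file=/var/log/flix/rclone.log
```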

Thank you @ncw. The files are uploaded through a command in a batch file, and unfortunately I can’t upload them simultaneously, as each file needs to be processed individually, in turn, with a different piece of software before it is uploaded. Do you think you could implement a ‘skip’ for the checking of the files in an upcoming beta? Thanks a LOT!

p.s

Where can I donate for your efforts in creating such a great tool? :slight_smile:

Thanks

When I’ve done https://github.com/ncw/rclone/issues/559 (which should be soon), you’ll be able to set the acd timeouts very small and the file will get uploaded; it may error, but rclone won’t wait or try to delete it.

The donations page is here: http://rclone.org/donate/

Cheers

Nick

@ncw Thank you Nick. I have downloaded the new version 1.34 and I tried using --acd-upload-wait-per-gb 1s --retries=1.

The files are 48G each. After the upload reaches 100%, it restarts after a minute or two. Do you have any suggestions?

Just a reminder,

I am trying to skip the waiting time. Thanks in Advance.

Edit -

I uploaded a 48 GB file which restarted on the first try, two minutes after the upload reached 100%. I got the desired effect on the second try, when the batch file resumed with its next command one minute after the upload reached 100%. Do you have any suggestion for getting the same behaviour on the first upload try? Maybe --retries=0?

With rclone 1.34 you want to use --acd-upload-wait-per-gb 0 and that will avoid any waiting for the file to be uploaded. You might want --retries 1 and --low-level-retries 1 too…
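Put together, a minimal “no waiting” invocation might look like this (a sketch only; the paths and remote name are borrowed from the command earlier in the thread, and the flag values follow the suggestion above):

```shell
# rclone 1.34+: skip the post-upload wait and fail fast instead of retrying
/usr/sbin/rclone move /home/storage/.media-local/ acd-rclone:/encrypted \
  -v --acd-upload-wait-per-gb 0 --retries 1 --low-level-retries 1
```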

@ncw Thanks. I am uploading the files fine now. I have encountered one problem though, and I am sorry if this has been discussed before; I couldn’t find a solution. You probably know that errors 408 and 502 mean that the files will still appear on ACD.

I am currently running rclone 1.33 with these flags: --acd-upload-wait-time 1m --retries 1 --low-level-retries 1. Occasionally I receive an error during uploads saying the connection was forcibly closed by the remote. What I am trying to achieve is to have the file re-upload on certain errors, but not when the error is 408/502. Is there a setting that will allow me to do that?

rclone does its best to work out which errors need retrying and which don’t.

This changed a lot in 1.34 so I suggest you try that or the latest beta: http://beta.rclone.org/v1.34-49-ge79a5de/

These aren’t user configurable though. You can see it in the source here: https://github.com/ncw/rclone/blob/master/amazonclouddrive/amazonclouddrive.go#L138