408 Timeout to Amazon Cloud Drive, file eventually uploaded

Hi,

First of all, thanks for this wonderful tool :slight_smile:

I am currently trying to upload some files to a mounted rclone directory (encrypted). It mostly works fine, but I sometimes (when the file is > 1 GB) get errors (an IO error after the “mv” command).

I mounted the directory using the following command:
rclone mount acdcrypt: ~/drive --allow-other --max-read-ahead 200M --no-modtime --acd-upload-wait-per-gb 30m --timeout 0 --bwlimit 10M

When starting rclone in verbose mode, I can see the following lines when it fails:

2017/01/09 17:35:57 Shows/en/XXX.mkv: WriteFileHandle.Flush
2017/01/09 17:36:58 Shows/en/XXX.mkv: Error detected after finished upload - waiting to see if object was uploaded correctly: HTTP code 408: "408 REQUEST_TIMEOUT": no response body ("408 REQUEST_TIMEOUT")
2017/01/09 17:36:58 Shows/en/XXX.mkv: Object not found - waiting (1/1)
2017/01/09 17:37:03 Shows/en/XXX.mkv: Giving up waiting for object - returning original error: HTTP code 408: "408 REQUEST_TIMEOUT": no response body ("408 REQUEST_TIMEOUT")
2017/01/09 17:37:03 pacer: Rate limited, sleeping for 183.117216ms (1 consecutive low level retries)
2017/01/09 17:37:03 pacer: low level retry 1/1 (error HTTP code 408: "408 REQUEST_TIMEOUT": no response body)
2017/01/09 17:37:03 Shows/en/XXX.mkv: WriteFileHandle.Flush error: HTTP code 408: "408 REQUEST_TIMEOUT": no response body
2017/01/09 17:37:03 Shows/en/XXX.mkv: WriteFileHandle.Release nothing to do

The upload started ~2 minutes before the error.
The file shows up in the drive a couple of minutes after the error is raised. The file size is about 1.2 GB.

According to the configuration of the “acd-upload-wait-per-gb” flag (set to 30m as a test), it is supposed to keep checking for at least 36 minutes (30 min/GB × 1.2 GB).

It looks like it cannot compute the “retries” number correctly at https://github.com/ncw/rclone/blob/master/amazonclouddrive/amazonclouddrive.go#L532, because the debug line at https://github.com/ncw/rclone/blob/master/amazonclouddrive/amazonclouddrive.go#L539 shows 1/1.

When manually computing the values of the algorithm from GitHub, I come up with the following (in milliseconds):

uploadWaitPerByte = 1800000 / 1024 / 1024 / 1024 = 0.001676381    (30 min in ms)
timeToWait = 0.001676381 * 1288326596 = 2159726.227329076         (~36 min)
sleepTime = 5000
retries = (2159726.227329076 + 5000 - 1) / 5000 = 432             (integer division)
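
As a cross-check, here is a minimal sketch of that computation (a sketch only: it works in nanoseconds to match Go’s time.Duration and assumes the logic at the linked lines; variable names are mine):

awk 'BEGIN {
  wait_per_gb   = 30 * 60 * 1e9                  # --acd-upload-wait-per-gb 30m, in ns
  size          = 1288326596                     # the ~1.2 GB file, in bytes
  wait_per_byte = wait_per_gb / 1024 / 1024 / 1024
  time_to_wait  = int(wait_per_byte * size)      # ~2159726227329 ns, i.e. ~36 min
  sleep_time    = 5 * 1e9                        # the 5 s sleep between checks
  printf "retries = %d\n", int((time_to_wait + sleep_time - 1) / sleep_time)   # 432
}'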

Any ideas?

Thanks

EDIT:
I printed the values of “retries” and “src.Size()” at https://github.com/ncw/rclone/blob/master/amazonclouddrive/amazonclouddrive.go#L529, and src.Size() returns 32 bytes instead of 1.2 GB.

Could it be because of the crypt remote instead of a normal one?

Presumably you uploaded the file with cp to the mount?

I see what the problem is - rclone doesn’t know the size of the file it is uploading when you upload like that. rclone mount works with streams of data, and nothing tells it how big the file it is about to upload is (or even which file it is), so it assumes 0. That explains the 1/1 retries, I think.
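
For reference, plugging the 32-byte size from your EDIT into the same formula reproduces the single retry (same assumptions as the sketch earlier in the thread):

awk 'BEGIN {
  wait_per_byte = 30 * 60 * 1e9 / 1024 / 1024 / 1024   # ns per byte for 30m/GB
  time_to_wait  = int(wait_per_byte * 32)              # 53644 ns for 32 bytes
  print int((time_to_wait + 5e9 - 1) / 5e9)            # 1 -> the "(1/1)" in the log
}'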

I’d suggest you do uploads with rclone copy, which will do the correct thing.
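
For example (a sketch; the local path is a placeholder, acdcrypt: is the remote from your mount command):

# rclone copy knows each file's size before the upload starts
rclone copy -v /path/to/staging/Shows acdcrypt:Shows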

I wonder if there is a FUSE interface for sendfile support - that is the only way I can think of fixing this.

Hi,

Thanks for your work on rclone :slight_smile:
I have the same problem. I thought I could use SickRage and CouchPotato to move or copy my files to my rclone mount (encrypted), but I get the same error. That means I can’t automate my “backup”, because I don’t know scripting and NZBGet only accepts Python scripts.

What do you mean by [quote=“ncw, post:2, topic:669”]
I wonder if there is a FUSE interface for sendfile support - that is the only way I can think of fixing this.
[/quote]

Maybe I could research it a bit?

I thought that too, but it takes bash scripts just fine.

Here’s what I’m using as my NZBGet post-processing upload script:

#!/bin/bash
#######################################
### NZBGET POST-PROCESSING SCRIPT   ###
# Rclone upload to Amazon Cloud Drive

# Wait for NZBGet/Sickrage to finish moving files
sleep 10s

# Upload (-c compares by checksum rather than size/modtime)
rclone move -c "/home/$username/$local/tv" "$encrypted:tv"

# Tell Plex to update the library (URL quoted so the shell doesn't glob the ? and =)
wget "http://localhost:32400/library/sections/2/refresh?X-Plex-Token=$plexToken"

# Send the post-processing success code
exit 93

Saved to nzbget/scripts/uploadTV.sh and made executable with chmod +x.


Great, thank you. I thought it was like SABnzbd and only accepted Python scripts. I’ll adapt your example to move my files to the right dirs on my Amazon Cloud Drive.

That’s right, I was copying with “cp” (or moving with “mv”).

I had a look, and it doesn’t look like the FUSE interface sends any information about the file (so the file size can’t be retrieved).

Depending on the usage, might it be useful to have an option to skip the file size validation and upload in a “best effort” mode?

@chrisanthropic

Thanks for the script. I’ll have a look at processing my files this way.

That is effectively what it does, but if Amazon sends a 429 error (which it does very frequently), the upload will fail. rclone copy will retry in this case.
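
If the defaults aren’t enough, the retry behaviour can also be tuned; a sketch (the flags are real rclone flags, the values and paths here are arbitrary):

# --retries re-runs failed transfers; --low-level-retries retries individual
# HTTP operations, such as the one that got the 429
rclone copy --retries 5 --low-level-retries 20 /path/to/staging acdcrypt: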