ERROR: file already closed

Hi there!

I am trying to rclone from a Digital Ocean volume to Spaces, but I get a lot of these errors:

ERROR : app/entries/photo/entry_thumb/0fb7b2481526441eaca054f929c18017/0faaa532-974a-4aa8-bbaf-0941f8a808df_1559909928.jpg: Failed to copy: Put https://fra1.digitaloceanspaces.com/epicollect5-production/app/entries/photo/entry_thumb/0fb7b2481526441eaca054f929c18017/0faaa532-974a-4aa8-bbaf-0941f8a808df_1559909928.jpg?XXXXXXXXXX... file already closed

The command I use:
rclone copy /var/www/html_prod/shared/storage epicollect5-storage-space:epicollect5-production

Any suggestion?

I just want to add here that I have seen the same error recurring (not on every file, but on a non-trivial percentage). This is on my Google Team Drive though, so that might indicate it's a problem not directly tied to a specific remote?

I think I mostly saw this occurring via mount with write-cache though - but I'm not 100% certain, so take that with a grain of salt.

Try v1.49.0 - there was a fix for a very similar problem put in at the last minute.

If that doesn't work, can you post a more complete log with -vv --dump headers?
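For example, something like this, writing the output to a file so it's easy to post (the log file name is just a suggestion):

rclone copy /var/www/html_prod/shared/storage epicollect5-storage-space:epicollect5-production -vv --dump headers --log-file rclone-debug.log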

If it was a last-minute change, then it's possible it fixed mine too. I was mostly running on your VFS cache fix (133?) at the time, I believe, but I'm pretty sure I saw this happening before 1.49.

I will keep an eye out for the same thing happening again now on 1.49.

I am on v1.49.0 already. Here is the log of a failed request.

A 503 Slow Down response might be the culprit:

2019/08/28 14:24:16 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2019/08/28 14:24:16 DEBUG : HTTP REQUEST (req 0xc000629800)
2019/08/28 14:24:16 DEBUG : PUT /epicollect5-production/app/projects/project_thumb/32d79be2c56a416cb5c4f2e5946da728/logo.jpg?X-Amz-Algorithm=AWS4-HMAC-SHA256&XXXXXXXXXX....HTTP/1.1
Host: fra1.digitaloceanspaces.com
User-Agent: rclone/v1.49.0
Content-Length: 80776
content-md5: T/F+8wE3mxn3peSCv7iCrw==
content-type: image/jpeg
x-amz-acl: private
x-amz-meta-mtime: 1543325209.292999777
Accept-Encoding: gzip

2019/08/28 14:24:16 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2019/08/28 14:24:16 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2019/08/28 14:24:16 DEBUG : HTTP RESPONSE (req 0xc000629800)
2019/08/28 14:24:16 DEBUG : Error: file already closed
2019/08/28 14:24:16 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2019/08/28 14:24:16 ERROR : app/projects/project_thumb/32d79be2c56a416cb5c4f2e5946da728/logo.jpg: Failed to copy: Put https://fra1.digitaloceanspaces.com/epicollect5-production/app/projects/project_thumb/32d79be2c56a416cb5c4f2e5946da728/logo.jpg?XXXXXXXX..... file already closed
2019/08/28 14:24:16 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2019/08/28 14:24:16 DEBUG : HTTP RESPONSE (req 0xc000116a00)
2019/08/28 14:24:16 DEBUG : HTTP/1.1 503 Slow Down
Connection: close
Cache-Control: no-cache
Content-Type: text/xml
Retry-After: 60

2019/08/28 14:24:16 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<

So Digital Ocean has a limit of 200 requests per second.

I get the same error even when limiting the requests (--tpslimit 100), though; this time the response is 200, but I still get the "file already closed" error.
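One way to push the request rate down further would be to combine --tpslimit with fewer parallel transfers and checkers, something like this (the values are just a guess):

rclone copy /var/www/html_prod/shared/storage epicollect5-storage-space:epicollect5-production --tpslimit 50 --transfers 4 --checkers 8 -v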

Same issue here. I get a large number of ERROR : ... Failed to copy: file already closed.

Separately, I've been running rclone non-stop for the last three days, and it always gets stuck overnight at some point, forcing me to restart it in the morning. The command I use is:

rclone copy . GDriveCrypt: --bwlimit 8650k --progress --fast-list --verbose --transfers=10 --checkers=20 --tpslimit=10 --tpslimit-burst=10 --drive-chunk-size=256M --log-file=./log.log

When I check on it in the mornings I see no network activity and 0% CPU usage for rclone (it is around 40% during active transfers), even though the transfer rate still shows a non-zero number. I have to do a keyboard interrupt:

Transferred:      472.334G / 1.461 TBytes, 32%, 8.341 MBytes/s, ETA 1d10h55m25s
Errors:                92 (retrying may help)
Checks:            449886 / 449886, 100%
Transferred:        12763 / 22791, 56%
Elapsed time:  16h6m29.7s
Transferring:
 * <file>.imgc:  5% /912.067G, 2.231M/s, 109h29m4s
 * <file>.JPG: transferring
 * <file>.mkv: 58% /13.054G, 2.269M/s, 40m25s
 * <file>.mp4: 83% /1.640G, 2.288M/s, 2m4s
 * <file>.JPG:  0% /754.299k, 0/s, -
 * <file>.JPG:  0% /1.146M, 0/s, -
 * <file>.jar:  0% /985.255k, 0/s, -
 * <file>.png:  0% /133, 0/s, -
 * <file>.JPG:  0% /2.228M, 0/s, -^C

The last item in my log file from yesterday is at 2:24:36 AM with a status of INFO : ... : Copied (new), and the one the day before is at 7:54:44 AM with the same status.

Could you make a new thread for this rather than hijacking this one, as it's not related? Please use the question template and fill out all the relevant information.

Thanks.

Whoops, sorry! I didn't mean to hijack; I primarily came here for the "file already closed" error. I thought they could be related.

We can delete this and I can move it to its own thread.

What version of rclone are you on?

Can you post a log with what happens before you get the error (with -vv please)?

There are only 4 places this error could come from:

backend/cache/handle.go:                return errors.New("file already closed")
backend/crypt/cipher.go:        ErrorFileClosed              = errors.New("file already closed")
fs/chunkedreader/chunkedreader.go:      ErrorFileClosed  = errors.New("file already closed")
fs/operations/reopen.go:        errorFileClosed   = errors.New("file already closed")

so seeing a bit more log and knowing which backends you are using would be useful.
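(That list looks like the output of grepping the rclone source tree for the error string, roughly:

grep -r '"file already closed"' backend fs

run from the root of the repository.)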

Can you also see if you can reproduce with the latest beta?

I think I've fixed that problem already in: https://github.com/rclone/rclone/issues/3505

Try the latest beta in 30 minutes, and if that doesn't fix it, please kill the process with kill -QUIT xxx and post the backtrace (it will be big) in a new issue.
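Something like this, for example (xxx is whatever process id pgrep prints):

pgrep rclone
kill -QUIT xxx

A Go program prints a stack trace of every goroutine when it receives SIGQUIT, which is the backtrace to post.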

I think I've fixed the "already closed" messages here - can you have a go please?

https://beta.rclone.org/branch/v1.49.0-042-gaf097d7a-fix-already-closed-beta/ (uploaded in 15-30 mins)

Sorry, I messed up the build; try this one:

https://beta.rclone.org/branch/v1.49.0-042-g89fda14a-fix-already-closed-beta/ (uploaded in 15-30 mins)

I've merged this to master now, which means it will be in the latest beta in 15-30 mins and released in v1.49.3.

Thanks for this. I will get back to you soon.


I'm not sure if this is related, but since I have only had the "file already closed" problem in the last few releases, it might be.

On 1.49.3 I no longer see the "file already closed" problem, but I am seeing lots of "corrupted on transfer: sizes differ" errors. While I am aware this could be caused by my internet connection, I'm seeing too many of them, spread too far apart in time, to attribute them to that, which makes me suspect they may be linked to the "file already closed" error.

Can you try the latest beta? And if you still get the problem, can you post some of the log with -vv? Thanks.

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.