How to check EOF on rclone?

What would that show? You are comparing source to source as they are the same thing.

It was just a (crazy) thought: whether that would "provoke" the "unexpected EOF" check, to answer the original question:

So I tried to use rclone sync to copy a corrupted file (to another drive, i.e. another remote) and I get an error that points me to the problem: "unexpected EOF".

I would like to know if there is a way to do this without copying everything.

I feel like that's driving the wrong way down a one-way street and asking to validate whether the brakes on the car work.

The config needs to be cleaned up, as who knows what's going on with files being mixed up; the OP is definitely not in a good 'state' for data integrity at this point.

So we have the same configuration? (unless I don't understand anything lol)

My ShareDrive contains only encrypted content.

It's exactly the same configuration I use (except SDrive)?

@albertony Exactly!

"sync_remote": "SDrive_crypt:/",
"upload_folder": "/mnt/local/medias",
"upload_remote": "SDrive_crypt:/"

Currently I upload the content to SDrive_crypt (i.e. the local files), which is in cleartext locally, and the content on the drive is automatically encrypted.

See that?

That points to your entire SDrive: so your entire drive is an encryption point.

That's not the same as me as I use a folder in my drive to host my encrypted content.

To use cryptcheck, you need the source and destination with the source being the original file.

I gave an exact example above in my post on how to create and test it.

If you move a file from the source and it only exists on the destination, you have nothing to 'check' it against. In my case, I copied the file so the source remained.
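That workflow can be sketched with the names from this thread (`SDrive_crypt` as the crypt remote, `/mnt/local/medias` as the plaintext source; both are taken from the config posted above, so adjust to your own setup). The commands are printed rather than executed here, since they need a configured remote:

```shell
SRC=/mnt/local/medias   # plaintext source (path from the OP's config)
DST=SDrive_crypt:       # crypt remote (name from the OP's config)

# Copy instead of move, so the plaintext original survives as a reference:
echo "rclone copy $SRC $DST"

# cryptcheck then compares the plaintext source against the encrypted
# destination and reports any mismatches:
echo "rclone cryptcheck $SRC $DST -v"
```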

Everything became clear to me with this message, thanks a lot for the help.
So there is absolutely no way to check if I have corrupted files except to make a full copy?

I'm not sure how you are determining whether a file is 'corrupted'; if you are uploading only 20GB of a file from some 3rd-party program, you'd want to fix it there, I'd imagine.

Rclone detects if a file size is changing and will abort on the file. I don't quite understand your flow and what programs you are using to get into your 'bad' state.

Precisely: when I copy a corrupted file with rclone, it shows the EOF error, but I cannot tell in advance which files are corrupted. The problem is due to uploading files whose write had not finished. (To be more precise: with docker, my downloads folder and my local media folder were not on the same filesystem, so symlink/hardlink doesn't work and sonarr/radarr do a full copy.)

Sonarr/Radarr both copy to partial files and rename them at the end of the copy, so that would most likely not be the cause unless you are trying to copy the partial files themselves.

You'd have to provide a full debug log with the issue so we can see what's going on as what you are saying really doesn't make sense to me in terms of the issue.

If a file is being written to, rclone won't copy it up as it notices the size changing.
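As a toy illustration of that size-stability idea (this is just the concept, not rclone's actual implementation): sample the size twice, and if it changed between samples, the file is still being written.

```shell
# Toy sketch: a file whose size changes between two samples is still
# being written and should be skipped. Not rclone's real code.
f=$(mktemp)
printf 'partial data' > "$f"
size1=$(wc -c < "$f")
printf ' more data' >> "$f"   # simulate a writer appending mid-check
size2=$(wc -c < "$f")
if [ "$size1" -ne "$size2" ]; then
  echo "size changed: still being written, skip upload"
else
  echo "size stable: safe to upload"
fi
rm -f "$f"
```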

I'm really sorry that what I'm saying is not clear at all.
I will try to recap:

  • My corrupted file problem has been fixed: cloudplow was uploading files that were still being written, because I was triggering uploads every minute. This happened frequently with big files, for example 4K movies weighing around 60 GB, simply because radarr hadn't finished making a full copy of the file.

  • My only way to check whether a file is partial/corrupt: when I copy it with rclone to another remote, it reports the EOF problem. Checking such a (corrupted) file in Plex, for example, confirms there is indeed a problem: when I seek too far into the file's duration, Plex stops immediately (totally normal, because the file was only partially written).

My current goal is to identify all these partial files (since rclone detects the problem when I make a copy) so I can delete them.
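One way to get that list, assuming the copy is run with a debug log (e.g. `rclone copy src: dst: -vv --log-file copy.log`), is to grep the log for the EOF errors and strip each matching line down to the file name. The sample log lines below are fabricated for the demonstration; check the extraction against your real log's format first, as it may differ between rclone versions.

```shell
# Fabricated sample of an rclone log with one clean copy and two
# "unexpected EOF" failures, standing in for a real copy.log:
cat > copy.log <<'EOF'
2022/05/01 10:00:01 ERROR : movies/film1.mkv: Failed to copy: unexpected EOF
2022/05/01 10:00:02 INFO  : movies/film2.mkv: Copied (new)
2022/05/01 10:00:03 ERROR : movies/film3.mkv: Failed to copy: unexpected EOF
EOF

# Keep only the failing lines, then cut each down to the file path:
grep 'unexpected EOF' copy.log \
  | sed 's/^.* ERROR : //; s/: Failed.*//' > corrupted.txt
cat corrupted.txt
rm -f copy.log
```

`corrupted.txt` then holds one path per corrupted file, ready to feed into a delete step once you've reviewed it.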

Feels like we're going in a bit of a circle here: if you are not uploading files with the .partial extension, you don't hit the issue, and if a file is being written, rclone won't upload it, as it knows that. If you have a situation where it is uploading anyway, we need a log file to look at.

Still doesn't make sense, as rclone doesn't upload files that are being written to, so you don't get partial uploads. Again, see the need for a log file.

See previous comments on log file.

The problem is not due to rclone but to cloudplow. I found some old screenshots; I can try to set up a test environment to reproduce the problem for the logs.

Original file:

lsof +D:

See the partial file being uploaded:

The 12 GB is normal because the copy takes a long time (not the same filesystem, but that is now fixed); the issue is that cloudplow uploaded the file anyway.
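For reference, the check in that screenshot, `lsof +D`, recursively lists open files under a directory, so a media file that still shows up there has a writer attached and isn't safe to upload yet. The command is printed rather than run here, since the path (taken from the OP's config) won't exist elsewhere:

```shell
# lsof +D DIR recursively lists processes with open files under DIR.
# The path is the OP's local media folder; adjust to your own setup.
echo "lsof +D /mnt/local/medias"
```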

How does cloudplow upload files? It uses rclone so if it's uploading something and the file is changing, rclone would notice that and write an error. It's just a bunch of python wrappers around the various tools.

Your screenshots don't match up with how Radarr works, though: you'd see a partial file before it's moved to its final name, and it would have the same size as in your screenshot.

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.