Failed to close file before aborting: incorrect size after upload

What is the problem you are having with rclone?

I have installed rclone on my Synology NAS and am transferring files to Pcloud.
The configuration is working, and files get from the NAS to Pcloud as I want.
The only thing worrying me is that the transfer never seems to end well.
I have been busy for over a week now transferring many video files (large files), and it never seems to finish and indicate that all files are copied over from NAS to Pcloud 1-on-1. I do see errors in the terminal (Errors: 8 (retrying may help)).

This is the command line I am using in the terminal:
```
rclone sync /volume1/temp/videobackup/ remote:/Rclone_Photo/Movie/ --progress --checksum --log-file=/volume1/tools/Backup/syncMovie_log.txt --log-level=INFO --transfers=12
```

What do I see in the log? Multiple lines with this text:
```
multi-thread copy: failed to close file before aborting: incorrect size after upload: got 201457664, want 1169545271
```

An example of what I see after running for some hours:
```
Transferred:    36.584 GiB / 77.744 GiB, 47%, 1.057 MiB/s, ETA 11h4m31s
Errors:         62 (retrying may help)
Checks:         11087 / 11087, 100%
Transferred:    6 / 77, 8%
Elapsed time:   6h26m56.5s
```
I see many errors, and restarting the job does not make them go away.

I have also tried to sync the NAS and Pcloud with a freeSync program (separate from rclone), first letting it check both folders and make hashes of them. After the comparison, a bunch of files were out of sync, so I let this program sync from NAS to Pcloud. After that I did the same again and noticed (as expected) that both were in sync.

To do a check from rclone, I ran it with the command line above, and almost immediately it started transferring files from the NAS to Pcloud again.
I did not expect this, as freeSync said everything was in sync.

rclone version
rclone v1.69.1

  • os/version: unknown
  • os/kernel: 4.4.59+ (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.24.0
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using?

Pcloud

command run:

```
rclone sync /volume1/temp/videobackup/ remote:/Rclone_Photo/Movie/ --progress --checksum --log-file=/volume1/tools/Backup/syncMovie_log.txt --log-level=INFO --transfers=12
```

* I am using 12 transfers, as the default 4 do not completely fill my upload speed. More transfers give me more throughput. I don't know why the standard 4 do not take all the bandwidth, as I would expect.
* I have progress turned on to see the progress in the terminal.
* I have checksum turned on so that every bit is checked during the copy.

### LOG

```
2025/03/19 14:58:57 ERROR : 2017/iphone/IMG_1011.MOV.c8e20dd1.partial: multi-thread copy: failed to close file before aborting: incorrect size after upload: got 501874688, want 1234193261
2025/03/19 15:00:48 ERROR : 2017/iphone/IMG_1011.MOV.c8e20dd1.partial: multi-thread copy: failed to close file before aborting: incorrect size after upload: got 203161600, want 1234193261
2025/03/19 15:01:33 ERROR : 2017/iphone/IMG_1011.MOV.c8e20dd1.partial: multi-thread copy: failed to close file before aborting: incorrect size after upload: got 201981952, want 1234193261
2025/03/19 15:02:52 ERROR : 2017/iphone/IMG_1011.MOV.c8e20dd1.partial: multi-thread copy: failed to close file before aborting: incorrect size after upload: got 201719808, want 1234193261
2025/03/19 15:02:52 ERROR : 2017/iphone/IMG_1011.MOV: Failed to copy: multi-thread copy: failed to write chunk: open file: open new file descriptor: pcloud error: Internal upload error. (5001)
2025/03/19 15:03:59 ERROR : 2017/video/IMG_4265.MOV.b8c8a02b.partial: multi-thread copy: failed to close file before aborting: incorrect size after upload: got 607649792, want 668112472
2025/03/19 15:04:46 ERROR : 2017/video/IMG_4265.MOV.b8c8a02b.partial: multi-thread copy: failed to close file before aborting: incorrect size after upload: got 201457664, want 668112472
```

use a debug log, rclone should tell you the exact reason it is copying a file.
--log-level=DEBUG --dry-run
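for example, against your same paths (the debug log file name here is just a suggestion):

```
rclone sync /volume1/temp/videobackup/ remote:/Rclone_Photo/Movie/ \
  --checksum --dry-run --log-level=DEBUG \
  --log-file=/volume1/tools/Backup/syncMovie_debug.txt
```

with --dry-run nothing is actually transferred; the DEBUG log will show, per file, why rclone decided it needs copying.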


try --multi-thread-streams=0
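i.e. your command with multi-thread copies disabled, something like:

```
rclone sync /volume1/temp/videobackup/ remote:/Rclone_Photo/Movie/ --progress --checksum --log-file=/volume1/tools/Backup/syncMovie_log.txt --log-level=INFO --transfers=12 --multi-thread-streams=0
```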

I think the --multi-thread-streams=0 option did the job!

Thanks! Finally a 1-on-1 backup :blush:

welcome. yeah, 1-on-1 backup is good

fwiw, can also have an incremental.

```
rclone sync /volume1/temp/videobackup/ remote:/Rclone_Photo/Movie/current \
  --backup-dir=remote:/Rclone_Photo/Movie/incrementals/`date +%Y%m%d.%I%M%S`
```

Looks interesting... I don't yet see what it is doing, though.

What is the --backup-dir option doing? In addition to the sync, keeping an extra folder with just the files changed in that run? (It looks like I would then have duplicate files on the remote location?)

Isn't there a way to speed up the syncing, like keeping an overview from day 1, so that when I run the script on day 2 it looks at the cached overview, can quickly see what has changed, and the sync takes less time?

Or I assume that every file gets hashed on location A and checked against location B.
Can't the hash table be cached or stored to be reused?
Or is this already done without me knowing it? :blush:

not duplicates but a very simple incremental backup.
if the source file changed, normally, rclone will delete the dest file and upload the source file.
but with --backup-dir, the dest file that would be deleted is server-side moved to the timestamped folder.
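for illustration, after a couple of runs the remote could look something like this (timestamps made up):

```
Rclone_Photo/Movie/
├── current/                 <-- always a 1-on-1 mirror of the source
└── incrementals/
    ├── 20250319.025901/     <-- files replaced/deleted by that run
    └── 20250320.031544/
```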

rclone sync does not use a cache.
listing files on pcloud is very slow as --fast-list is not supported.
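you can see which optional features a remote supports with the backend command, e.g.:

```
rclone backend features remote:
```

that should print a JSON blob of the remote's features, including ListR (which is what --fast-list needs).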


no, by default, rclone compares size and modtime. to compare by hash, which is slow, use --checksum
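i.e. the difference between these two runs:

```
# default: compare size + modtime only (fast)
rclone sync /volume1/temp/videobackup/ remote:/Rclone_Photo/Movie/

# compare by hash: every file is hashed on both sides (slow)
rclone sync /volume1/temp/videobackup/ remote:/Rclone_Photo/Movie/ --checksum
```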


well, there is a hasher remote, but it is experimental.
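an untested sketch of what that could look like in the rclone config (the remote name here is a placeholder):

```
[hashed-remote]
type = hasher
remote = remote:
hashes = sha1
max_age = 24h
```

then you would sync to hashed-remote: instead of remote:, and the computed hashes get cached between runs.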

Wow! That backup-dir option is great! I was thinking that rclone did not have delete rights, but of course it must: when something in picture1.jpg gets changed, it cannot just put another picture1.jpg in the folder...
In case all files get deleted by accident or by a virus on location A, would rclone then also delete those files on location B? (Probably not, as it has no db/cache of what the previous state was.)

Another question which popped up: I am doing a first rclone session of my raw pictures now, which of course takes a long time. When I start it up, rclone quite quickly gives me info on gigabytes transferred and gigabytes to go, with the timer as indicator (GB-to-go divided by upload speed). But, for example, at the beginning I see 200 GB to go, taking 15 hours; looking at the terminal 5 hours later it tells me 250 GB, and 10 hours later 300 GB...
Isn't there a smarter way to determine the actual total file size, and the number of files (same principle), at the beginning?

Of course, yes. rclone is just a funky tool to copy/sync/move files, not backup software. When you sync an empty source, rclone will duly make the destination empty too. If you want to use it for backup, then backup-related functionality is your problem.

This is why, for the use case you described, it is probably not the best tool for the job.

You want backup? Then use some backup program which provides snapshots, versioning, compression and deduplication, among the other key features expected from a proper backup solution. In the FOSS space I could recommend restic, rustic or kopia. And all of them can actually utilise rclone for file transfers - something you would need for Pcloud.
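For example, restic can use an existing rclone remote as its repository backend (the repo path below is just an example):

```
# create a repository on Pcloud via rclone, then back up to it
restic -r rclone:remote:restic-repo init
restic -r rclone:remote:restic-repo backup /volume1/temp/videobackup
```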

If you need to know all the estimates fast, then use the --check-first flag. It comes at the cost of RAM usage (but if you do not have millions of files, then probably nothing to think about) and a slight delay before transfers start.
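For example:

```
rclone sync /volume1/temp/videobackup/ remote:/Rclone_Photo/Movie/ --progress --checksum --transfers=12 --check-first
```

With --check-first, rclone runs all the checks before starting any transfers, so the totals and ETA are correct from the start.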

yeah, it is

can try --check-first

Hello Kapitainsky, I have never heard of FOSS space (nor of the names you mentioned). The first hit on a search engine (https://www.fossspace.com/) looks like an incomplete page; I see nothing I can do or actually search for there. Where can I find more info / the correct link to these tools (and maybe other interesting command-line tools I can use)?

FOSS = Free and open-source software.

I meant free software options :)

As for the backup programs, you will easily find them on Google by name.

Ah! :blush:
I thought there was a handy hub page with all or many useful open-source tools and their descriptions :blush:
The thing with these command-line tools (like rclone) is that you have to find the name somewhere to get to know them. Before that, I used the tools I could find on Google, like freeSync and Robocopy, and the tools on my NAS.
I bet there are more command-line tools available that I would never think of, but which may come in handy.
