Copy/Sync of large files to pCloud always failing

So you mean due to multithreaded support?

Is there an option to disable this?

yes, try --multi-thread-cutoff=0
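
For example (the local path and remote name here are placeholders, not from this thread):

rclone copy /path/to/largefile pcloud:backup --multi-thread-cutoff=0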

another test, do not use --multi-thread-cutoff=0 and reduce --multi-thread-chunk-size
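
For example, keeping the default cutoff but using smaller chunks (the 16M value and the paths are placeholders to illustrate the flag):

rclone copy /path/to/largefile pcloud:backup --multi-thread-chunk-size=16M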

as for an older version, try https://downloads.rclone.org/v1.67.0/

???

OK - I tried your suggestions. None of them worked.

I also tried

--ignore-checksum together with --ignore-size

also with no success.

The last trial was:

--multi-thread-streams=1

This really switches multithreading off and at least ensures that copying of larger files works reliably with the recent release.

I tried this several times with my 'critical' files.
So your assumption that multithreading causes the problem is correct.
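
For reference, a full command line then looks roughly like this (the local path and remote name are placeholders):

rclone copy /path/to/largefile pcloud:backup --multi-thread-streams=1 --progress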

I have checked the source. I'm not familiar with the Go language - but the Close() method provides a hint as to where the problem may lie.
A new connection is obviously opened to retrieve the size of the transferred file. Is there perhaps a limit on how many connections can be opened? And is it really wise to open a new connection just to retrieve the size of a copied file?
Establishing a new connection may take a considerable amount of time.

Don't see this as criticism, but as a question to get to the bottom of the matter.

Here is the writerAt.Close() method:
func (c *writerAt) Close() error {
	// close fd
	if _, err := c.fileClose(c.ctx); err != nil {
		return fmt.Errorf("close fd: %w", err)
	}

	// Avoiding race conditions: Depending on the tcp connection, there might be
	// caching issues when checking the size immediately after write.
	// Hence we try avoiding them by checking the resulting size on a different connection.
	if c.size < 0 {
		// Without knowing the size, we cannot do size checks.
		// Falling back to a sleep of 1s for sake of hope.
		time.Sleep(1 * time.Second)
		return nil
	}
	sizeOk := false
	sizeLastSeen := int64(0)
	for retry := 0; retry < 5; retry++ {
		fs.Debugf(c.remote, "checking file size: try %d/5", retry)
		obj, err := c.fs.NewObject(c.ctx, c.remote)
		if err != nil {
			return fmt.Errorf("get uploaded obj: %w", err)
		}
		sizeLastSeen = obj.Size()
		if obj.Size() == c.size {
			sizeOk = true
			break
		}
		time.Sleep(1 * time.Second)
	}

	if !sizeOk {
		return fmt.Errorf("incorrect size after upload: got %d, want %d", sizeLastSeen, c.size)
	}

	return nil
}

Thank you!

--multi-thread-streams=1

works for me as well, and also accelerates the transfer quite dramatically.
I would be curious to see whether it's worth adding an option to avoid this check, since there are already scenarios in which "hope" is used.
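
To make the idea concrete, here is a small, self-contained sketch of such an optional size check - verifySize, sizeOf and the skip flag are invented names for illustration, not part of rclone:

package main

import (
	"fmt"
	"time"
)

// verifySize polls sizeOf until it reports want, or gives up after retries.
// The skip flag mirrors the kind of opt-out discussed above: when true, the
// post-upload size check is simply not performed.
// Illustrative sketch only, not rclone code.
func verifySize(sizeOf func() (int64, error), want int64, retries int, skip bool) error {
	if skip || want < 0 {
		return nil
	}
	var got int64
	for i := 0; i < retries; i++ {
		size, err := sizeOf()
		if err != nil {
			return fmt.Errorf("get uploaded obj: %w", err)
		}
		got = size
		if got == want {
			return nil
		}
		time.Sleep(1 * time.Second)
	}
	return fmt.Errorf("incorrect size after upload: got %d, want %d", got, want)
}

func main() {
	// Simulate a remote that only reports the final size on the second poll,
	// similar to the caching behaviour described in the Close() comment above.
	polls := 0
	sizeOf := func() (int64, error) {
		polls++
		if polls < 2 {
			return 0, nil
		}
		return 1500, nil
	}
	if err := verifySize(sizeOf, 1500, 5, false); err != nil {
		fmt.Println("check failed:", err)
		return
	}
	fmt.Printf("size check passed after %d poll(s)\n", polls)
}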

Concurrency is always hard to manage. Maybe one of the rclone gurus can dig deeper into this topic!? I'm just a user.

I have also contacted pCloud support and hope they can provide an answer on how to ensure that things also work in a multithreaded context.

Hint from Nick Craig-Wood regarding the current release...

"Note that --disable OpenWriterAt will turn the feature off."

Hello, and thanks @Jeah for your thorough investigation!

I encountered the same issue with a .MOV file of around 1.5 GB and, if it helps, I can confirm that using only one thread (with the parameter you provided) allows the job to reach the end!
For now it's a pretty good workaround.

I have never touched the Go language but will take a look when I have some time, so don't hesitate to share the pCloud team's reply if it can point to a possible solution.

Following this subject Problems copying big files from local to remote using 'rclone rc' - #8 by Faw

It seems to be related to the SMB remote.

I don't think this has to do with SMB.
I use a Raspberry Pi with an attached SSD.
And I doubt that pCloud is using SMB on their side.

I have the exact same problem with 500MB files. After I updated to rclone v1.68.1 I got this error, and the transfer speed was 10x slower.

Adding --multi-thread-streams=1 solved the issue

rclone sync --progress --ignore-checksum --multi-thread-streams=1 $a $b

and I got my normal upload speed back.

Downgrading to 1.67.0-1 on Arch Linux also solved the problem, and then I don't have to use --multi-thread-streams=1.
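
In case it helps other Arch Linux users, one way to downgrade is to reinstall the old package from the local pacman cache (assuming it is still cached and you are on x86_64; the exact filename may differ):

sudo pacman -U /var/cache/pacman/pkg/rclone-1.67.0-1-x86_64.pkg.tar.zst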

Could someone open a new issue on GitHub about this with instructions on how to reproduce it? I can then work with the developer who wrote the feature to fix it - thank you :slight_smile:

I'm not on GitHub.

According to the docs, 'Georg Welzel' implemented this.
Isn't it possible to pass this over to him?

I need this to be on GitHub before I can do that!

OK - I opened a GitHub account.

This is the new issue. I just mentioned the link to this discussion.

pCloud support has answered. They stated that they found some temporary garbage in my account, which they have removed, and that I should retry to see if the problem is gone. But this is of course not the reason. I have suggested they look into our discussion.


Here is the answer from pCloud support about the multithreading problem... (translated to English).
They mention that one should use their (API?) sync function for block-level transfer.


Hello Joachim,

Thank you for your feedback.

I have checked all available information and the problem here is with the multi-threaded upload. pCloud does not support this, but works via a single thread. This means that files are uploaded sequentially instead of splitting the upload process across multiple threads.
You may find pCloud Drive's sync function useful as it works on the principle of file transfer at block level.

If you need further assistance, please contact us.

Kind regards,
Daniel
pCloud technical support

