pCloud keeps partial uploads

Continuing the discussion from pCloud keeps partial interrupted uploads:

What is the problem you are having with rclone?

Partial uploads to pCloud are not deleted, despite rclone requesting nopartial=1 in the server requests.

Run the command 'rclone version' and share the full output of the command.

rclone v1.66.0
- os/version: darwin 14.4.1 (64 bit)
- os/kernel: 23.4.0 (arm64)
- os/type: darwin
- os/arch: arm64 (ARMv8 compatible)
- go/version: go1.22.1
- go/linking: dynamic
- go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

pCloud

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy -v -P file pcloud:

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[pcloud]
type = pcloud
hostname = eapi.pcloud.com
token = XXX

A log from the command that you were trying to run with the -vv flag

2024/05/14 14:55:55 DEBUG : rclone: Version "v1.66.0" starting with parameters ["/usr/local/bin/rclone" "copy" "-vv" "-P" "file" "pcloud:"]
2024/05/14 14:55:55 DEBUG : Creating backend with remote "file"
2024/05/14 14:55:55 DEBUG : Using config file from "/Users/alex/.config/rclone/rclone.conf"
2024/05/14 14:55:55 DEBUG : fs cache: adding new entry for parent of "file", "/Users/alex/tmp"
2024/05/14 14:55:55 DEBUG : Creating backend with remote "pcloud:"
2024/05/14 14:55:57 DEBUG : file: Need to transfer - File not found at Destination
Transferred:   	    7.527 MiB / 1 GiB, 1%, 1.882 MiB/s, ETA 9m
Transferred:            0 / 1, 0%
Elapsed time:         6.5s
Transferring:
 *                                          file:  0% /1Gi, 1.882Mi/s, 9m0s^C
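
For context, the nopartial flag mentioned above is a parameter of pCloud's uploadfile API method. The request rclone sends during the copy is roughly of this shape (illustrative only; the folder ID and filename below are placeholders rather than values taken from the log):

https://eapi.pcloud.com/uploadfile?folderid=0&filename=file&nopartial=1

With nopartial=1 the server is supposed to discard anything that was only partially uploaded when a transfer is interrupted, which is exactly what does not seem to happen here.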

Adding more details in the next post for clarity...

Hi all,

As mentioned in the previous topic, I filed a bug report with pCloud about this behavior and promised to let you all know the outcome.

They have now replied.

It appears that access to the pCloud API for rclone has been restricted and the team has indicated that this restriction will not be lifted in the near future.

I am sorry but we won't be able to assist you in resolving this.

I was a bit confused by this so asked them to clarify.

They wrote back and explained essentially the same thing in more detail. The conclusion was, however, the same: they would not investigate my bug report further.

hi, thanks for the update.

i am still very confused.

not sure what that means?

have they intentionally disabled/banned rclone from their entire platform, so that you cannot download files?
or
will they just not offer tech support for third-party software such as rclone?

can you post that?

Here is a more verbose version of the exchange. Sorry it's rather long; I still trimmed the pleasantries etc. They were very pleasant to interact with :slight_smile:.

Me:

(Summary of issues I saw from the previous post.)

pCloud:

I understand your need for assistance with the API and we truly appreciate your interest in utilizing it to its fullest potential. However, I regret to inform you that we do not currently offer specialized technical support for the API.

Me:

(Asking if I can file a bug report)

pCloud:

Yes, of course. You can share any bugs and problems you encounter, and we will forward them to the dev team.

Me - some time later:

(Checking in asking if there are any updates)

pCloud:

There is no update yet.
To expedite the process, please provide me with the URL of the request you attempted and include some screenshots of the outcome.

Me:

Please forward the attached .txt file to the dev team.
It shows the commands I used, the URL that was called and the result.

pCloud:

It appears that access to the pCloud API for rclone has been restricted and the team has indicated that this restriction will not be lifted in the near future.

I am sorry but we won't be able to assist you in resolving this.

Me:

I'm sorry, access has been restricted?

pCloud:

Unfortunately, yes.

We had to temporarily limit the API's ability to create new apps due to severe abuse of the system. Once the necessary security measures are in place, we will lift these restrictions.

Meanwhile, we can create the apps on our end if users provide the specific app details. That being said, the team has decided not to create such an app for rclone.

I am sorry but we will not be able to assist you further.

ok, now that is clearer.

but you already created an app for rclone, so your account is still working, albeit with the partial-upload issue.

So if I understand it correctly, they don't wish to investigate my bug report since I'm reporting it in relation to rclone?

What are our options at this point? Try to reason with pCloud and reproduce the issue with a supported app, or mark pCloud as a partial-upload destination in rclone and use SFTP-like .partial files for upload?

EDIT:
Also, the funny thing is that I just attempted to connect rclone to another pCloud account I had never used with rclone and it authenticated just fine on pcloud.com. Very odd.

As suggested in the previous thread, changing the pCloud backend to include the "feature" PartialUploads does indeed give me the desired behavior. Every file gets a .*.partial temporary name until completed.

Is this too blunt a solution here? It would certainly address my issue, and help anyone else who relies on partially uploaded files not carrying the final name (e.g. when pCloud is used as a backend for Duplicacy).

I would also argue for keeping the nopartial=1 flag sent to the server, in case pCloud ever fixes this bug. Then at least the .partial files would not be hanging around, and we could then easily remove the PartialUploads feature flag.

E.g.

diff --git a/backend/pcloud/pcloud.go b/backend/pcloud/pcloud.go
index 64a48c25c..2273d3020 100644
--- a/backend/pcloud/pcloud.go
+++ b/backend/pcloud/pcloud.go
@@ -326,6 +326,7 @@ func NewFs(ctx context.Context, name, root string, m configmap.Mapper) (fs.Fs, error) {
        f.features = (&fs.Features{
                CaseInsensitive:         false,
                CanHaveEmptyDirectories: true,
+               PartialUploads:          true,
        }).Fill(ctx, f)
        if !canCleanup {
                f.features.CleanUp = nil
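
For anyone who wants to try this locally, here is a rough sketch of how I built and tested it (assumes a Go toolchain and a checkout of the rclone source; the exact commands are from memory, so treat them as a sketch rather than a recipe):

git clone https://github.com/rclone/rclone.git
cd rclone
# apply the one-line PartialUploads change from the diff above, then build an rclone binary in the current directory
go build
# copy through the patched binary; the destination object now keeps a temporary .partial name until the transfer completes
./rclone copy -vv file pcloud: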

Hi signal9,

I've been reading through your posts regarding this issue with pCloud over the last 2 threads.

Thank you for the information. This now answers a lot of questions as I have had issues with pCloud through Cryptomator, Mountain Duck, and now Rclone.

While waiting for the official fix by the Rclone devs, is there a simple way to implement "PartialUploads: true" functionality (such as in the config), or is it essential that the fix is incorporated into the source code (a bit out of my current depth)? Thanks again.

Signal9 - I'm confused here, so is rclone working with pcloud still? Or have they stopped it from connecting/working?

Anyways, to avoid .partial files staying on a cloud, as long as the API is configured properly and you're using the basic copy command, you can simply add --timeout 2h, or whatever duration you think you need, after which .partial files will automatically delete/expire. I was having trouble with very large files (60 GB+) on 5 transfers, each file about 60 GB+, but the speed to the remote was very slow. So when a file finally finished uploading, some of the .partial files were not there to compile the file back together; even though it said it had successfully transferred the file, the remote server kept saying the file was 0 bytes after transfer. I added --timeout 24h, and the big files finally started completing, since the source was retaining the partials for 24 hours.

Hey @Sullie,

While waiting for the official fix by the Rclone devs, is there a simple way to implement "PartialUploads: true" functionality (such as in the config), or is it essential that the fix is incorporated into the source code (a bit out of my current depth)? Thanks again.

From what I can tell, unfortunately you can only force rclone to not use .partial files with the --inplace flag.

Asking it to use .partial files seems to require the above code change... I hope I'm wrong.
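
For completeness, the opposite direction is just a flag, e.g.:

rclone copy -v -P --inplace file pcloud:

That only stops rclone from using .partial names on remotes where it otherwise would; as far as I can tell there is no config-level switch to force .partial names on for pCloud.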

Hi @mvjunkie,

Signal9 - I'm confused here, so is rclone working with pcloud still? Or have they stopped it from connecting/working?

Anyways, to avoid .partial files staying on a cloud, as long as the API is configured properly and you're using the basic copy command, you can simply add --timeout 2h, or whatever duration you think you need, after which .partial files will automatically delete/expire. I was having trouble with very large files (60 GB+) on 5 transfers, each file about 60 GB+, but the speed to the remote was very slow. So when a file finally finished uploading, some of the .partial files were not there to compile the file back together; even though it said it had successfully transferred the file, the remote server kept saying the file was 0 bytes after transfer. I added --timeout 24h, and the big files finally started completing, since the source was retaining the partials for 24 hours.

Sorry for the confusion, pCloud is still working for me.

My issue is that rclone asks pCloud to discard any partially uploaded files, but pCloud keeps them around with incomplete content.

rclone doesn't use .partial files for pCloud, so the files have the same name as the source but not all of the content.

An all-or-nothing approach would be expected.
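
One way to spot the leftovers, for what it's worth: after an interrupted copy, a comparison of source and destination should flag the truncated file, something along these lines (the local path is just the one from my log above):

rclone check /Users/alex/tmp pcloud:

rclone check compares the files in the two trees and reports any whose sizes or hashes differ, which a half-uploaded file carrying the final name will do.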

Gotcha, I would still use the --timeout 2h flag, as this is supposed to tell whatever backend you're using how long to keep the partial files on the server. You can change it to 1h, 4h, 60m, whatever you would like, but it wouldn't hurt to try some uploads with that flag and see if the partial files fall off (after the time you designated in the command).
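
Something like this, i.e. the same copy command as in the first post with the timeout added (adjust the duration to whatever suits your transfers):

rclone copy -v -P --timeout 2h file pcloud: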
