Pcloud stopped working and rclone hung (pacer retries)

Hello, pcloud stopped working today for one of my accounts (the other accounts work fine). I suspect it is an issue on pcloud's side for this account, but the problem is that I can't work around it using the standard rclone timeout options. Please review the following symptoms:

rclone ls pcloud:/dir/
10240 file.txt
rclone -vv -P --contimeout=30s --timeout=30s copyto pcloud:/dir/file.txt /tmp/file.txt
2023/09/28 19:10:17 DEBUG : rclone: Version "v1.63.1" starting with parameters ["rclone" "-vv" "-P" "--contimeout=30s" "--timeout=30s" "copyto" "pcloud:/dir/file.txt" "/tmp/file.txt"]
2023/09/28 19:10:17 DEBUG : Creating backend with remote "pcloud:/dir/file.txt"
2023/09/28 19:10:17 DEBUG : Using config file from "**/rclone.conf"
2023/09/28 19:10:23 DEBUG : Creating backend with remote "/tmp/"
2023/09/28 19:10:23 DEBUG : fs cache: renaming cache item "/tmp/" to be canonical "/tmp"
2023-09-28 19:10:27 DEBUG : file.txt: Need to transfer - File not found at Destination
2023-09-28 19:10:58 DEBUG : pacer: low level retry 1/10 (error Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZpVqDykZ2ZZCjZZatkZU0ZJRZDLZi0lEmcyQsCJhxjVMzOsLMhFiuxdV/file.txt": dial tcp i/o timeout)
2023-09-28 19:10:58 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023-09-28 19:11:28 DEBUG : pacer: low level retry 2/10 (error Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZpVqDykZ2ZZCjZZatkZU0ZJRZDLZi0lEmcyQsCJhxjVMzOsLMhFiuxdV/file.txt": dial tcp i/o timeout)
2023-09-28 19:11:28 DEBUG : pacer: Rate limited, increasing sleep to 40ms
2023-09-28 19:11:58 DEBUG : pacer: low level retry 3/10 (error Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZpVqDykZ2ZZCjZZatkZU0ZJRZDLZi0lEmcyQsCJhxjVMzOsLMhFiuxdV/file.txt": dial tcp i/o timeout)

I've added --retries=1 --low-level-retries=1 to the rclone options, and after that rclone fails (as expected):

2023-09-28 19:21:34 DEBUG : pacer: low level retry 1/1 (error Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZT4qDykZ2ZZCjZZatkZU0ZJRZDLZ9syshzNIAvRlJsdiGBCQwpFSq1PV/file.txt": dial tcp i/o timeout)
2023-09-28 19:21:34 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023-09-28 19:21:34 ERROR : file.txt: Failed to copy: failed to open source object: Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZT4qDykZ2ZZCjZZatkZU0ZJRZDLZ9syshzNIAvRlJsdiGBCQwpFSq1PV/file.txt": dial tcp i/o timeout
2023-09-28 19:21:34 ERROR : Attempt 1/1 failed with 1 errors and: failed to open source object: Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZT4qDykZ2ZZCjZZatkZU0ZJRZDLZ9syshzNIAvRlJsdiGBCQwpFSq1PV/file.txt": dial tcp i/o timeout
2023/09/28 19:21:34 DEBUG : 7 go routines active
2023/09/28 19:21:34 Failed to copyto: failed to open source object: Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZT4qDykZ2ZZCjZZatkZU0ZJRZDLZ9syshzNIAvRlJsdiGBCQwpFSq1PV/file.txt": dial tcp i/o timeout

I can't understand how to control the 20ms pacer sleep ("Rate limited, increasing sleep to 20ms"). I've found the options --drive-pacer-min-sleep, --dropbox-pacer-min-sleep and --webdav-pacer-min-sleep, but there is no pacer min sleep option for pcloud.

I thought --timeout limited everything, including internal timeouts. Do you know of other options for controlling internal timeouts? Thank you.

You want to use the TPS limit flags, as those other flags are specific to Google Drive, Dropbox, WebDAV, etc.

Hello, I've added --tpslimit=1 --tpslimit-burst=1, but it doesn't affect the pacer's low level retries or its sleep time.

2023-09-28 20:35:26 DEBUG : pacer: low level retry 1/10 (error Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZ42tDykZ2ZZCjZZatkZU0ZJRZDLZCCJyTEm0fk5LxA94UBGggX83ClNX/file.txt": dial tcp i/o timeout)
2023-09-28 20:35:26 DEBUG : pacer: Rate limited, increasing sleep to 20ms
2023-09-28 20:35:56 DEBUG : pacer: low level retry 2/10 (error Get "https://p-lux4.pcloud.com/DLZnvGlf6ZuKiTrX7Zg7Is7ZZ42tDykZ2ZZCjZZatkZU0ZJRZDLZCCJyTEm0fk5LxA94UBGggX83ClNX/file.txt": dial tcp i/o timeout)
2023-09-28 20:35:56 DEBUG : pacer: Rate limited, increasing sleep to 40ms

Those are network timeouts indicating a local issue that you'd need to resolve.


I am sure this is an issue with one account on pcloud; other accounts work perfectly using the same remote. It is possible that I've hit egress/write operation quotas on this account.

I've understood how to limit the pacer's retries (--low-level-retries 1), but I can't understand how to customize the 20ms pacer timeout (better terminology: the minimum sleep time).

      --drive-pacer-burst int              Number of API calls to allow without sleeping (default 100)
      --drive-pacer-min-sleep Duration     Minimum time to sleep between API calls (default 100ms)

I see that such options exist for Google Drive (--drive prefix), Dropbox and WebDAV, but there are no such options for pcloud, and it looks like there is no such option for the default case either.

If PCloud just times your connection out, that would be one possible, but very unlikely, explanation.

That error screams networking and nothing to do with TPS limits, but I really don't use PCloud.

As I shared above, you want to use TPS Limit.

      --tpslimit float                     Limit HTTP transactions per second to this
      --tpslimit-burst int                 Max burst of transactions for --tpslimit (default 1)

I use 12 and 0 respectively, as Dropbox tends to limit to about 12 API calls per second.
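Something like this (12 and 0 are the values I use for Dropbox; whether pCloud tolerates the same rate is an assumption, and the fail-fast retry flags are optional):

```shell
# Cap rclone at 12 HTTP transactions/second with no burst allowance,
# and fail fast instead of retrying for minutes (values are illustrative):
rclone copyto pcloud:/dir/file.txt /tmp/file.txt \
  --tpslimit 12 --tpslimit-burst 0 \
  --low-level-retries 1 --retries 1 -vv
```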

You mean I should use --tpslimit and my quota will eventually come back? I will apply this option for sure. Thank you.

But I am talking about something slightly different. There is currently no reliable option that guarantees rclone will never hang. --low-level-retries=1 and --retries=1 are excellent, but they are not enough from my point of view: the pacer's sleep time and burst behaviour remain uncontrollable.

I've found the place in the code where these constants live. The pacer for pcloud is created without a calculator, using just hard-coded min and max sleep times.

Meanwhile, Google Drive has a default min sleep time plus an option to override it (the --drive-pacer-min-sleep I mentioned). Google Drive doesn't have a pacer calculator either, but at least its pacer min sleep time can be adjusted.

BTW, almost no backend allows updating the min sleep time; they are all just constants. I can help add a global --pacer-min-sleep flag for all backends. Does the rclone project need this feature?

Thank you.

Sure, but I have no idea whether that matters for PCloud, as you are getting what looks to be a network timeout, not a PCloud timeout.

I want to make a global --pacer-min-sleep option (without a vendor prefix) rather than a vendor-specific one, because every backend has this value as a constant. Not every backend has a burst option, and some backends implement burst differently. This option would also allow the user to disable the sleep completely (--pacer-min-sleep 0) so that rclone never hangs in a pacer sleep.

Why? Not every user wants to set --low-level-retries 1, but I think many users will want to disable the pacer sleep completely. I received a call just today: backup functionality had stopped working, the script had been "in progress" for 8 hours and nothing was happening; it wouldn't fail and it wouldn't produce a backup. I am sure nobody wants to be in my place. =)

Thank you. I am going to prepare a PR next week.

The reason some backends have specific options is that that's how the vendor asks them to be configured. I'm not sure a global one is a great idea due to the per-vendor differences. That's my understanding of why some backends have it and some don't.

That doesn't have anything to do with the pacer. That happens when you have a low level networking issue like a timeout, a disconnect or something along those lines. Most folks want things like that to retry a few times.

It's not about sides. If you are not sure PCloud is throttling you, and what you have (from the logs) looks to be a networking issue, I'd investigate more with PCloud and see what they say before thinking about a rather large, encompassing change to rclone. In the end, if it's not PCloud throttling but what the error actually says, the flag won't do anything.

@ncw - does that error from PCloud seem to be throttling to you or just a random networking event? I've never used PCloud so I really can't offer much with it.

Sure, we will keep the default min sleep time as it is: the minSleep variable will just be renamed to defaultMinSleep, and the global option will be able to override its value, similar to the Google Drive backend's logic. We won't touch the default values themselves. Meanwhile, when SleepTime equals zero (lib/pacer/pacer.go), sleeping will be disabled, so the user will be able to turn the pacer sleep off entirely by setting the global flag --pacer-min-sleep 0.

Yes, I don't want to have to set --low-level-retries 1 either, because of possible network issues. But today I had to set it: it stops the pacer sleeping after the first retry, which otherwise produces an effectively infinite hang (the pacer poisoned the backup script for 8 hours). The customer just has an excellent admin: he noticed and raised the issue quickly.

This issue can happen with almost any backend: the rate limit is exceeded and rclone launches pacer sleeps. The customer doesn't know what to do: should he pay the vendor more, or call a developer directly? Because nothing at all happens, and there are no user-friendly messages for the customer.

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.