Errors reading from rclone mount (pCloud)

What is the problem you are having with rclone?

Hello,
Trying to read from a pCloud rclone mount (in daemon mode), I end up with uncorrectable read I/O errors after about 1,000 (small) files have been transferred.

Run the command 'rclone version' and share the full output of the command.

Yes, I know that's not the latest fresh 1.64 that should hit the repos in a few days...

❯ rclone version
rclone v1.63.1

  • os/version: arch 23.02 (64 bit)
  • os/kernel: 6.1.52-1-MANJARO-ARM-RPI (aarch64)
  • os/type: linux
  • os/arch: arm64 (ARMv8 compatible)
  • go/version: go1.20.6
  • go/linking: dynamic
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

pCloud

The command you were trying to run (eg rclone copy /tmp remote:tmp)

Sync inside Joplin fails after about 1,000 files have been transferred, so I tried an rsync to a temp folder, which fails the same way with an I/O error.

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

Unsupported on my version, so here it is manually redacted:

[pcloud_petaramesh]
type = pcloud
hostname = api.pcloud.com
token = {"access_token":"[redacted]","token_type":"bearer","expiry":"0001-01-01T00:00:00Z"}

A log from the command that you were trying to run with the -vv flag

Here's a sample of what the system journal logs:

rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : JoplinSync/[redacted_1].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: too many errors 11/10: last error: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to kick waiters: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to _ensure cache vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : IO error: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: too many errors 12/10: last error: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to _ensure cache vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to kick waiters: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : IO error: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : JoplinSync/[redacted_2].md: vfs cache: failed to download: vfs reader: failed to write to cache file: pcloud error: Internal error, no servers available. Try again later. (5002)
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused
rclone[28752]: ERROR : vfs cache: updating systemd status with current stats failed: can't open unix socket: dial unixgram /run/user/1000/systemd/notify: connect: connection refused

What is your mount command?

It's done from a systemd user service that runs:

/usr/bin/rclone mount --vfs-cache-mode full --vfs-cache-max-size 1G pcloud_petaramesh: /home/[myself]/mnt/pcloud_petaramesh

(I tried with --vfs-cache-mode writes, with the same results.)

And what is your systemd service file?

It's the “standard” one that comes with the rclone-mount-service AUR package, which I edited to add the vfs-cache options:

# Credits: kabili207 - https://gist.github.com/kabili207/2cd2d637e5c7617411a666d8d7e97101

[Unit]
Description=rclone: Remote FUSE filesystem for cloud storage config %i
Documentation=man:rclone(1)
After=network-online.target
Wants=network-online.target 
AssertPathIsDirectory=%h/mnt/%i

[Service]
Type=notify
ExecStart=/usr/bin/rclone mount --vfs-cache-mode full --vfs-cache-max-size 1G %i: %h/mnt/%i
ExecStop=/bin/fusermount -u %h/mnt/%i

[Install]
WantedBy=default.target

Both mount command and systemd file look OK.

It looks like some kind of transient error, or rate limiting performed by pCloud.

Perhaps you can find some useful tips in this somewhat similar thread:
https://forum.rclone.org/t/how-to-handle-pcloud-rate-limit/31279

My best advice is to test using a very simple copy command as described here:
https://forum.rclone.org/t/how-to-handle-pcloud-rate-limit/31279/5

You can add flags:

--tpslimit 10 --tpslimit-burst 0

to your mount command. They control the rate at which rclone "talks" to pCloud. Keep decreasing tpslimit until the errors are gone.
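Applied to the mount command from earlier in the thread, that would look something like the sketch below. The tpslimit value of 10 is only a starting point to tune down from, not a known-good setting for pCloud:

```shell
# Mount with rate limiting: at most 10 API transactions per second,
# and no burst allowance above that (--tpslimit-burst 0).
/usr/bin/rclone mount \
  --vfs-cache-mode full \
  --vfs-cache-max-size 1G \
  --tpslimit 10 \
  --tpslimit-burst 0 \
  pcloud_petaramesh: /home/[myself]/mnt/pcloud_petaramesh
```

In the systemd unit, the same flags would go on the ExecStart line.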


Thanks for the tips, I will try this.

To be complete: I installed this on my Pinebook Pro (ARM aarch64, Manjaro), and I could sync my Joplin from the rclone pCloud mount without a problem.

Then I replicated the exact same config on my Raspberry Pi 4 (same architecture and distribution), and there I could never get Joplin to sync from the same rclone pCloud mount and files; I got the errors above, which I couldn't get rid of even after a dozen tries...

This is interesting... I'm not sure how to interpret it. Maybe others will have better ideas.

BTW, I notice that on my Pinebook Pro (same setup), rclone consumes 25-30% CPU just sitting there doing nothing (the mountpoint is active but there are no file transfers, as far as I can tell...).