Rclone Serve S3 / ListObjects timeout

What is the problem you are having with rclone?

I want to set up Proxmox Backup Server 4 with the new S3 backup functionality. Therefore I connected my pcloud storage and serve it as S3.
Backup works fine, chunks get created.

But the garbage collection task executes a ListObjectsV2 call to the API with a prefix of /<store-name>/.chunks, in order to list chunk objects only. This list API call currently has a timeout of 60 seconds, but in my scenario it has to deliver 65,000 folders with 1-3 files in each.

This does not finish within 60 seconds, hence it quits.

You can see this thread: Rclone serve S3 / GC fails - Deadline has elapsed | Proxmox Support Forum

Is it possible to use some caching, so that rclone does not try to fetch the list from pcloud on demand?

Run the command 'rclone version' and share the full output of the command.

rclone v1.70.3

  • os/version: debian 12.11 (64 bit)
  • os/kernel: 6.8.12-11-pve (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.24.4
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Pcloud (eu)

The command you were trying to run (eg rclone copy /tmp remote:tmp)

ListObjects

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[pcloud_proxmox]
type = pcloud
password = XXX
token = XXX
username = XXX
hostname = eapi.pcloud.com
root_folder_id = XXX
use_multipart_uploads = false

A log from the command that you were trying to run with the -vv flag

2025-08-21T00:01:01+02:00: TASK ERROR: failed to list chunk in s3 object store: request timeout: deadline has elapsed

Use VFS

Set --dir-cache-time to "forever", e.g. 9999h.

Your data is only modified via the S3 interface, so you do not care about picking up changes made directly on your pcloud. Hence you can keep the directory listings in the cache all the time.

Also add the --vfs-refresh flag to populate the dir cache on rclone serve startup; see the minimal sketch below.
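Something like this, a minimal sketch with just the caching-related flags (the auth key is a placeholder, keep your other flags as needed):

rclone serve s3 pcloud_proxmox: \
        --auth-key ACCESS_KEY,SECRET_KEY --addr :8443 \
        --dir-cache-time 9999h \
        --vfs-refresh \
        -vv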

I adjusted that according to your suggestions, but it does not help.

I enabled debug logging in rclone and tested with another S3 client, where the directory listing arrives within a second.

Client → timeout after 60 sec:

2025/08/21 08:30:50 DEBUG : serve s3: LIST BUCKET
2025/08/21 08:30:50 DEBUG : serve s3: bucketname:%!(EXTRA string=backup, string=prefix:, gofakes3.Prefix=prefix:"pcloud/.chunks/", string=page:, string={Marker: HasMarker:false MaxKeys:1000})

Client → directory listing within 2 sec:

2025/08/21 08:33:43 DEBUG : serve s3: LIST BUCKET
2025/08/21 08:33:43 DEBUG : serve s3: bucketname:%!(EXTRA string=backup, string=prefix:, gofakes3.Prefix=prefix:"pcloud/.chunks/", delim:"/", string=page:, string={Marker: HasMarker:false MaxKeys:1000})

The only difference is the delim:"/".

The documentation says that in case the delimiter is empty, it will be treated as /.
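For illustration, using the AWS CLI as an example client against the serve endpoint, the two log lines above correspond roughly to these calls (bucket, prefix and port taken from my setup, certificate handling omitted):

# no delimiter: full recursive object listing below the prefix (this is the call that times out)
aws s3api list-objects-v2 --endpoint-url https://localhost:8443 \
        --bucket backup --prefix pcloud/.chunks/

# delimiter "/": grouped listing, only one directory level (this returns within seconds)
aws s3api list-objects-v2 --endpoint-url https://localhost:8443 \
        --bucket backup --prefix pcloud/.chunks/ --delimiter /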

I cannot see your screen :) Please post the exact command you used.

sure :smiley:

rclone serve s3 pcloud_proxmox: \
        --auth-key xx,yy --addr :8443 \
        --key /root/private.key --cert /root/public.key \
        --vfs-fast-fingerprint \
        --no-modtime --no-checksum \
        --vfs-cache-mode full --dir-cache-time 9999h \
        --vfs-cache-max-age 72h --vfs-refresh --no-cleanup \
        --config /root/.config/rclone/rclone.conf -vv

This looks good.

I wonder if --vfs-refresh finished populating the cache. How long it takes depends mostly on your network and pcloud speed. Clearly it will be more than 60s …

You could use the rc interface to query the refresh status (it is an async job); a rough sketch follows below.
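Assuming you add --rc to the serve command so the remote control API is enabled, something along these lines (the job id is just an example):

rclone rc job/list                                  # list async jobs known to the running rclone
rclone rc job/status jobid=1                        # check whether a given job (e.g. the refresh) has finished
rclone rc vfs/refresh recursive=true _async=true    # kick off a full refresh manually if needed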

PS. I would refrain from using cache mode full. It creates an extra dependency which can lead to data corruption in case of issues. minimal is what I would use. But this is a secondary thing.

Sadly, the recursive file listing without the delimiter is the problem.

The directory listing is super fast using the / delimiter, because it only retrieves the directories.

See response over here: Rclone serve S3 / GC fails - Deadline has elapsed | Proxmox Support Forum

The delimiter causes object keys that are identical after the prefix, up to the first occurrence of the delimiter, to be grouped into a common prefix in the result. So this does not work, as the PBS client needs the full chunk object list. That also explains the significant speed difference in listing.

serve s3 is not very clever if you use a prefix. It should really count the number of / characters in the prefix and limit the recursion depth.

The code is here

Which calls this to do the recursion, which I don't think is limited.
