Rclone upload of thousands of chunks from Proxmox Backup Server very slow

Among other things, we use Proxmox (KVM) for virtualization. A final version of the Proxmox Backup Server is now available, which creates backups of virtual machines. It stores chunks of approx. 2 MB each, and for incremental backups only the deltas are added or changed chunks are replaced. For a VM of approx. 80 GB, this quickly adds up to several tens of thousands of files.

Another user besides us now has the problem (and posted about it in the Proxmox forum) that uploading these chunks to several cloud storage providers with rclone is very slow. In some cases, only 3.5 MByte/s is achieved despite a gigabit uplink.

A Proxmox employee sees the problem, among other things, on the rclone side:

"most of that carries over to object storages on the WAN as well (e.g., AWS S3 starts throttling at 3.5k write requests per second per prefix, so with the same prefix scheme we use for the local datastores that would be 2^16 x 3500 x 4M = 875TB/s of logical writes! Even if all the chunks would end up in a single prefix that's still 13.5GB/s, and we could use a longer prefix to ensure that we always hit multiple prefixes in a single operation). So the problem is not the chunk size, but either how rclone translates the chunk structure to object requests, or your object/cloud storage provider having too low limits (or both)." (Source: increase default chunk size in PBS for better rclone-uploads | Proxmox Support Forum)
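The arithmetic in the quote checks out. A minimal sketch, assuming only the figures quoted above (2^16 prefixes, 3,500 write requests/s per prefix, 4 MB chunks):

```shell
# Verify the quoted S3 numbers (assumptions: 2^16 prefixes, 3500 writes/s per prefix, 4 MB chunks)
prefixes=$((2 ** 16))                  # 65536 possible prefixes
rate=3500                              # write requests/s per prefix before throttling
chunk_mb=4                             # chunk size in MB
total_mb=$((prefixes * rate * chunk_mb))
echo "all prefixes: $((total_mb / 1024 / 1024)) TB/s"   # 875 TB/s, as quoted
echo "one prefix:   $((rate * chunk_mb / 1024)) GB/s"   # ~13 GB/s (the quote rounds to 13.5)
```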

Is that actually conceivable or just a lazy excuse?

Depends on the cloud provider. Google Drive, for example, has a limit of roughly 2 files per second, I believe, so you'll need to adjust your chunk size accordingly.
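If that ~2 files/s figure holds, the observed speed follows almost directly from the 2 MB chunk size. A back-of-the-envelope check, assuming those two numbers:

```shell
# Assumptions: ~2 new files/s (rough Drive limit) and 2 MB PBS chunks
files_per_s=2
chunk_mb=2
echo "ceiling: $((files_per_s * chunk_mb)) MB/s"   # 4 MB/s -- in line with the reported 3.5 MByte/s
```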

Rclone will make a small number of requests, typically 2 or 3, per file it uploads.
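With that many files, the request count dominates. A rough estimate for the 80 GB VM mentioned above (assumptions: 2 MB chunks, 3 requests per file):

```shell
# Rough request budget for one full 80 GB VM upload (assumptions: 2 MB chunks, 3 requests/file)
vm_gb=80
chunk_mb=2
reqs_per_file=3
chunks=$((vm_gb * 1024 / chunk_mb))            # 40960 chunk files
echo "requests: $((chunks * reqs_per_file))"   # 122880 API requests for the initial sync
```

That is why per-request latency and per-file rate limits, not raw bandwidth, set the pace.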

What cloud storage provider are you using?

What is your rclone command line?

If you are using S3, you'll probably want --size-only so rclone doesn't read metadata (one less transaction per already-synced file) and --fast-list to reduce the number of transactions (provided you have enough memory).
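Putting those flags together, a sketch of what the sync could look like. The remote name, bucket, and datastore path are placeholders, and --transfers is an extra assumption (more parallel uploads to hide per-request latency), not something suggested in the thread:

```shell
# Hypothetical example: "s3remote", the bucket, and the local path are placeholders.
# --size-only  : compare by size only, one less transaction per already-synced chunk
# --fast-list  : fewer LIST calls, at the cost of more memory
# --transfers  : raise upload parallelism (assumption, not from the thread)
rclone sync /mnt/datastore/pbs s3remote:backup-bucket/pbs \
    --size-only --fast-list --transfers 32
```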

This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.