VFS cache slow to split chunks

What is the problem you are having with rclone?

Hi, I'm new to Rclone and configured it to use S3, but I notice that when it comes time to split files into chunks it's really slow. I would love to understand why, and how to get it to do this quickly.

I have a separate drive just for the cache, and I need to back up around 10TB to the remote location. The backup software I use creates 200GB files (that's how I set it up) so that rclone can upload them in the background, since it uses VFS.
After a 200GB file is created I see that rclone starts to do something, but it never reaches the stage where it uploads to the remote location. I think something in the creation of the chunks is probably stuck, or it doesn't know what to do.
I would appreciate help understanding why it is slow, and what it depends on.
The speed of the drive the cache is on? CPU speed? RAM speed? Holy spirits :wink:?

Run the command 'rclone version' and share the full output of the command.

rclone v1.67.0

  • os/version: Microsoft Windows 10 Pro 22H2 (64 bit)
  • os/kernel: 10.0.19045.4780 (x86_64)
  • os/type: windows
  • os/arch: amd64
  • go/version: go1.22.4
  • go/linking: static
  • go/tags: cmount

Which cloud storage system are you using? (eg Google Drive)

iDriveE2

The command you were trying to run

rclone --config {my config location} mount --cache-dir "E:\Rclone Caching" iDriveE2:backups S: --volname \\127.0.0.1\backups --vfs-cache-mode full --vfs-fast-fingerprint --no-modtime --multi-thread-streams 10 --s3-upload-cutoff 4.6G --s3-chunk-size 4.6G --poll-interval 30s --dir-cache-time 1m0s --vfs-cache-max-age 30s --vfs-cache-min-free-space 15G --vfs-cache-poll-interval 10s --transfers 8 --s3-upload-concurrency 10 --log-file C:\Rclone\logs.txt

Please run 'rclone config redacted' and share the full output.

[iDriveE2]
type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint = {the-end-point}

welcome to the forum,

Normally rclone will calculate the MD5 checksum of the input before uploading it so it can add it to the metadata on the object. This is great for data integrity checking, but it can cause long delays before large files start uploading.

to confirm that, monitor disk usage by rclone.
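
if the checksum pass turns out to be the bottleneck, one option is --s3-disable-checksum, which skips that whole-file read before the upload starts. a minimal sketch, reusing your remote and cache dir:

  rclone mount iDriveE2:backups S: --cache-dir "E:\Rclone Caching" --vfs-cache-mode full --s3-disable-checksum

the trade-off: the MD5 is not stored in the object metadata, so rclone cannot use it for integrity checks later.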


--poll-interval 30s does nothing; you can remove it.
S3 remotes do not support polling. a debug log would show that.
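
to get that debug log, add -vv (same as --log-level DEBUG) to the mount command, keeping the --log-file flag you already use:

  rclone mount iDriveE2:backups S: --vfs-cache-mode full -vv --log-file C:\Rclone\logs.txt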


--s3-upload-cutoff 4.6G --s3-chunk-size 4.6G
why use that, instead of default values?
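
for reference, the defaults are --s3-upload-cutoff 200Mi and --s3-chunk-size 5Mi, i.e. the same mount with those two flags dropped:

  rclone mount iDriveE2:backups S: --cache-dir "E:\Rclone Caching" --vfs-cache-mode full

anything over 200MiB then goes up as a multipart upload in 5MiB chunks, and rclone grows the chunk size on its own if a file would otherwise exceed the 10,000-part limit.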

what is the name of the software?

Hi, thanks for the reply.

Is looking at Task Manager enough? Is there another way to see it? I can see that it reads from the disk at 30 MB/s, but I don't know what it is doing in this process - whether it is the MD5 checksum, as you said, or something else.

really!?
So with S3, is it always up to date? Is there anything else unnecessary in my config?

According to what I read and understood:
The bigger the parts, the faster the upload. In addition, with large files (which is what I have) it spends less time splitting them, and there are fewer parts.
And 5GB is the maximum single part size iDriveE2 allows, so I took a little less. At first I wanted to use even a 50GB part size, but I realized that was not possible.
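
For example, by my arithmetic: one 200GB file at 4.6GB per part is about 44 parts, while at the default 5MiB chunk size it would be roughly 40,000 parts.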

Acronis True Image - one of the best backup programs, and the one I use.
For now, I limit its upload speed to give rclone more time to do what it does before uploading to the remote location. I would prefer not to limit Acronis' speed, but it is a workaround for the meantime, until I find out why rclone takes so long before it starts uploading.

tl;dr - the smaller the value, the sooner each file will start to upload.


can use task manager, resource monitor, sysinternals process explorer or any number of other such tools.


yes. a debug log would show that.


if the configured backend does not support polling for changes, then changes made directly on the cloud storage (by the web interface or a different copy of rclone) will only be picked up once the directory cache expires.
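
if you need the mount to pick up such changes sooner, one option (assuming the mount was started with --rc) is to refresh the directory cache by hand:

  rclone rc vfs/refresh recursive=true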

check out my summary of the two types of cache


for each file, it is a two-step process:

  1. calculate the hash. to do that, rclone has to read the entire file.
  2. upload the file.

in addition to the rclone two-step, at the same time, ATI can be writing a new file into the cache.
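
to time step 1 on its own, hash a big file on the cache drive directly. a quick test - the path below is just a placeholder, point it at any large file on the E: drive:

  rclone md5sum "E:\Rclone Caching\vfs\iDriveE2\backups\some-large-file.tib"

the read speed you see while that runs is roughly the ceiling for the hashing step on that drive.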

Yes, I get that, but I don't want the million or so parts that the files in the folder would turn into.

Ok, so it's set to one minute in my case; that seems to be fine.

When it calculates the hash, is that when it creates the different parts to upload?
Now I'm just more confused. I thought it creates the hash to make sure everything uploads without a problem, and that a separate process creates the parts :thinking:

this is why I split the backup into multiple files: rclone can't touch the cache and start the upload while ATI is doing something with the files.

if you want to fully understand what rclone is doing:

  1. kill the mount
  2. empty the cache
  3. start the mount
  4. copy one file that is 16GiB
  5. wait for the file to upload to idrive
  6. read the debug log
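
a concrete sketch of those steps (paths and the test file name are placeholders; the taskkill line assumes rclone runs as a normal process, not a service):

  taskkill /IM rclone.exe /F
  rmdir /S /Q "E:\Rclone Caching"
  rclone mount iDriveE2:backups S: --cache-dir "E:\Rclone Caching" --vfs-cache-mode full -vv --log-file C:\Rclone\debug.txt
  copy "D:\test\16GiB-file.tib" S:\

run the copy from a second console window, since the mount stays in the foreground.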

Ok, will do! After the first backup finishes, I can kill the mount.

But let's go back to the main question:

Why is it slow? From what I understand, it is mainly disk speed, no?
If I see via Task Manager that rclone reads about 30MB/s from the cache disk, is that the limit? If I have a faster drive, will it get to the upload stage faster?

And what kind of disk operation is it? Sequential read?
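
One way I could measure the drive's raw sequential read speed for comparison is the built-in winsat tool (from an elevated prompt):

  winsat disk -seq -read -drive e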
