Azure BLOB -> BLOB copy: logfile shows the data size to copy growing while the source is static

What is the problem you are having with rclone?

The amount of data to be transferred is shown as growing during the copy operation, even though nothing was written to the source BLOB directory during the copy.

What we found strange is that the total amount of data (the second value on the "Transferred" line in the log) kept growing as the copy progressed: 303.178 GiB -> 350.904 GiB -> 433.195 GiB.

This pattern is consistent across many BLOBs in our configuration, and the final data size can be up to twice the initially reported size, even though the source directory is static and the copy itself takes only a couple of minutes.

Is it possible to find out where this difference comes from?

Run the command 'rclone version' and share the full output of the command.

rclone v1.69.1

  • os/version: oracle 8.10 (64 bit)
  • os/kernel: 5.15.0-307.178.5.el8uek.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.24.0
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Azure BLOB.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

/bin/rclone copy -cvP --stats=1m --transfers=4 --min-age=1m --no-traverse --log-file=${LOG_FILE} bloba:somedir blobb:somedir
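
For cross-checking, one way to measure the real size of the matching source files up front is rclone size with the same filter (note this counts every matching source file, not only those that still need transferring, so it gives an upper bound):

/bin/rclone size --min-age=1m bloba:somedir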

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[bloba]
type = azureblob
use_msi = true
chunk_size = 100M
access_tier = Cool
account = XXX
msi_mi_res_id = XXX

[blobb]
type = azureblob
use_msi = true
chunk_size = 100M
access_tier = Hot
account = XXX
msi_mi_res_id = XXX

A log from the command that you were trying to run with the -vv flag

Transferred:      223.171 GiB / 303.178 GiB, 74%, 3.899 GiB/s, ETA 20s
Checks:              3786 / 3786, 100%
Transferred:           40 / 519, 8%
Elapsed time:       1m0.0s
...
Transferred:        266.937 GiB / 350.904 GiB, 76%, 392.861 MiB/s, ETA 3m38s
Checks:              3786 / 3786, 100%
Transferred:          421 / 519, 81%
Elapsed time:       2m0.0s
...
Transferred:        433.195 GiB / 433.195 GiB, 100%, 1.971 GiB/s, ETA 0s
Checks:              3786 / 3786, 100%
Transferred:          519 / 519, 100%
Elapsed time:      2m57.8s

We're missing a log file that would show you exactly why.

Add -vv and look at the log file.

I've set "-vv"; the job runs tonight, so I'll upload a log extract tomorrow.
With "-v" the amount of data is only shown at the end, and it looks correct there (I'll double-check that tomorrow as well).

The output I shared so far is from the stats printed to the console rather than from the log file, since so far the discrepancy has only shown up there.
It seems the stats output initially shows roughly half the actual amount of data to transfer and then increases that value at each subsequent progress interval, reaching the correct total at the end.
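
One test that might narrow this down is adding --check-first, which makes rclone complete all the checks before starting any transfers; if the totals are then correct from the first stats line, that would suggest the transfer queue (and with it the stats total) is being filled incrementally. This is just a guess on our side, and I have not verified how --check-first interacts with --no-traverse:

/bin/rclone copy -cvP --stats=1m --transfers=4 --min-age=1m --no-traverse --check-first --log-file=${LOG_FILE} bloba:somedir blobb:somedir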

Hello.
The "-vv" log file is about 3 MB, so I am not sure it should be uploaded in full.
Briefly: 3771 files were checked and 532 transferred.
The log contains no errors, and the correct data size is shown at the end (which is expected and fine).
Please let me know if you need any specific info from the log, and I'll post it.
Some typical extracts:

2025/06/10 23:30:57 DEBUG : Creating backend with remote "bloba:somedir"
2025/06/10 23:30:57 DEBUG : Using config file from "/etc/rclone.conf"
2025/06/10 23:30:57 DEBUG : Creating backend with remote "blobb:somedir"
2025/06/10 23:30:58 DEBUG : 2025-06-03/file_1399134: Dst hash empty - aborting Src hash check
2025/06/10 23:30:58 DEBUG : 2025-06-03/file_1399134: Src hash empty - aborting Dst hash check
2025/06/10 23:30:58 DEBUG : 2025-06-03/file_1399134: Size of src and dst objects identical
2025/06/10 23:30:58 DEBUG : 2025-06-03/file_1399134: Unchanged skipping
...
2025/06/10 23:30:59 DEBUG : 2025-06-10/file_1404092: Need to transfer - File not found at Destination
...
2025/06/10 23:30:59 DEBUG : 2025-06-10/file_1404093: Multipart upload session started for 36 parts of size 100Mi
2025/06/10 23:30:59 DEBUG : 2025-06-10/file_1404093: open chunk writer: started multipart upload
2025/06/10 23:30:59 DEBUG : 2025-06-10/file_1404093: Starting multi-thread copy with 36 chunks of size 100Mi with 16 parallel streams
2025/06/10 23:30:59 DEBUG : 2025-06-10/file_1404093: multi-thread copy: chunk 16/36 (1572864000-1677721600) size 100Mi starting
2025/06/10 23:30:59 DEBUG : 2025-06-10/file_1404093: multi-thread copy: chunk 7/36 (629145600-734003200) size 100Mi starting
...
Transferred:        513.560 GiB / 513.560 GiB, 100%, 1.963 GiB/s, ETA 0s
Checks:              3771 / 3771, 100%
Transferred:          532 / 532, 100%
Elapsed time:      3m45.4s
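
(For scale: the "Multipart upload session started for 36 parts of size 100Mi" line matches the chunk_size = 100M setting in the config, so that particular file is about 36 × 100 MiB ≈ 3.5 GiB, copied over 16 parallel streams.)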

However, the regular stats updates printed to the console still initially report too low a total data volume and then correct it with each iteration: 351.538 -> 423.649 -> 432.733 -> 513.560 GiB (see the sketch after these stats).

Transferred:      208.092 GiB / 351.538 GiB, 59%, 3.319 GiB/s, ETA 43s
Checks:              3771 / 3771, 100%
Transferred:           43 / 532, 8%
Elapsed time:       1m0.0s
...
Transferred:       326.937 GiB / 423.649 GiB, 77%, 952.745 MiB/s, ETA 1m43s
Checks:              3771 / 3771, 100%
Transferred:          309 / 532, 58%
Elapsed time:       2m0.0s
...
Transferred:        394.783 GiB / 432.733 GiB, 91%, 2.327 GiB/s, ETA 16s
Checks:              3771 / 3771, 100%
Transferred:          437 / 532, 82%
Elapsed time:       3m0.0s
...
Transferred:        513.560 GiB / 513.560 GiB, 100%, 1.963 GiB/s, ETA 0s
Checks:              3771 / 3771, 100%
Transferred:          532 / 532, 100%
Elapsed time:      3m45.4s
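
To illustrate what we suspect is happening (purely a guess from the outside, not rclone's actual accounting code): if the stats total only counts transfers that have been queued so far, the printed percentages line up exactly with a moving denominator. A minimal sketch in Go, replaying the snapshot numbers from the run above:

package main

import "fmt"

// Toy model: recompute the percentages from the console stats above,
// assuming the total (the denominator) only includes transfers that
// have been queued so far. A guess at the behaviour, not rclone's code.
func main() {
	snapshots := []struct {
		moved, known float64 // GiB transferred so far / GiB queued so far
	}{
		{208.092, 351.538}, // minute 1
		{326.937, 423.649}, // minute 2
		{394.783, 432.733}, // minute 3
		{513.560, 513.560}, // end of run
	}
	for _, s := range snapshots {
		fmt.Printf("Transferred: %.3f / %.3f GiB, %.0f%%\n",
			s.moved, s.known, 100*s.moved/s.known)
	}
}

This prints 59%, 77%, 91% and 100%, matching the console output exactly, which is consistent with the denominator growing as transfers are queued rather than being fixed once the checks finish.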