Slow lsd/tree in azure blob

What is the problem you are having with rclone?

rclone lsd and rclone tree are slow against an Azure Blob container that holds hundreds of thousands of small files (100 KB–3 MB images). The command also consumes a very large amount of memory.

What is your rclone version (output from rclone version)
v1.53.0 (see the -vv log below)
Which OS you are using and how many bits (eg Windows 7, 64 bit)

Windows Server

Which cloud storage system are you using? (eg Google Drive)

Azure Blob Storage

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone tree SIT_AzureBlobFS:enovia-docs-container

The rclone config contents with secrets removed.

type = azureblob
account = XXXXXXXXX

A log from the command with the -vv flag

2020/09/17 06:52:52 DEBUG : rclone: Version "v1.53.0" starting with parameters ["rclone" "tree" "SIT_AzureBlobFS:enovia-docs-container" "-vv"]
2020/09/17 06:52:52 DEBUG : Using config file from "C:\\Users\\\\.config\\rclone\\rclone.conf"
2020/09/17 06:52:52 DEBUG : Creating backend with remote "SIT_AzureBlobFS:enovia-docs-container"

rclone tree is not very memory efficient: it builds the entire directory tree in memory before printing anything.

You should find rclone lsf and rclone lsd use minimal memory.
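For reference, the low-memory listing commands would look like this, reusing the remote and container from the original command (only the top level is listed, which is why they use little memory):

```shell
# List only the top-level directories (results are streamed, minimal memory)
rclone lsd SIT_AzureBlobFS:enovia-docs-container

# List top-level files and directories, one path per line
rclone lsf SIT_AzureBlobFS:enovia-docs-container
```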

Thank you! However, lsd and lsf do not seem to count subdirectories and files. Is there a way to count all directories and files? I'm trying to do a quick check that the source and the remote have the same file count.

If you just want a count, use rclone size, which reports the number of files and their total size recursively.
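As a sketch, you could run rclone size against both sides and compare the reported object counts (the local path here is hypothetical, substitute your actual source directory):

```shell
# Recursively count files and total their size on the remote
rclone size SIT_AzureBlobFS:enovia-docs-container

# Run the same command against the local source, then compare the two counts
rclone size D:\path\to\local\source
```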

If you want rclone lsf to include subdirectories then use rclone lsf -R; the same applies to rclone lsd.
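Since the goal is a quick file count on Windows, one option is to combine the recursive listing with a line count; the `find /c /v ""` part is a standard Windows cmd idiom for counting lines, not an rclone feature:

```shell
# Recursive file-only listing, piped through a line count for a quick tally
rclone lsf -R --files-only SIT_AzureBlobFS:enovia-docs-container | find /c /v ""
```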

Thank you, adding -R worked :slight_smile:


This topic was automatically closed 60 days after the last reply. New replies are no longer allowed.