What is the problem you are having with rclone?
rclone mount against azure blob is hitting an excessive number of ListBlobs transactions.
Run the command 'rclone version' and share the full output of the command.
rclone v1.72.1-beta.9336.73bcae224.v1.72-stable
- os/version: alpine 3.23.0 (64 bit)
- os/kernel: 6.12.51-0-lts (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.25.5
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
azure blob storage
The command you were trying to run (eg rclone copy /tmp remote:tmp)
- rclone
- mount
- '-v'
- '--allow-non-empty'
- '--read-only'
- '--dir-cache-time=8h'
- '--vfs-cache-max-age=72h'
- '--vfs-read-chunk-size=4M'
- '--vfs-read-chunk-size-limit=16M'
- '--vfs-cache-mode=full'
- '--buffer-size=256K'
- '--no-checksum'
- '--cache-dir=/mnt/cache'
- '--vfs-cache-max-size=1G'
- 'blobcrypt:/MYDATA/'
- /mnt/data
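For readability, the container args above should be roughly equivalent to this single command line (a sketch of the same flags, paths unchanged):

rclone mount -v --allow-non-empty --read-only --dir-cache-time=8h --vfs-cache-max-age=72h --vfs-read-chunk-size=4M --vfs-read-chunk-size-limit=16M --vfs-cache-mode=full --buffer-size=256K --no-checksum --cache-dir=/mnt/cache --vfs-cache-max-size=1G blobcrypt:/MYDATA/ /mnt/data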
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[blob]
type = azureblob
account = XXX
key = XXX
[blobcrypt]
type = crypt
remote = blob:crypt
password = XXX
password2 = XXX
A log from the command that you were trying to run with the -vv flag
The log shows a lot of HTTP requests -- most likely because of how the files are structured across the directory tree.
I would like to ask: if I have a directory structure with a lot of subdirectories, the rclone mount logs suggest it is making n HTTP requests (n = number of directories). I looked into other options like --fast-list, but it seems that option is not available for mount.
I am trying to understand whether it is possible to mount azure blob in a similar way to the http backend, where ad-hoc HTTP requests are issued on demand only when someone requests a particular file.
e.g. with the http backend I can have 1M nested directories, but rclone mount won't issue 1M HTTP requests, which makes it a lot more efficient. Is something similar possible with the azure blob backend?
NOTE: It's a read-only backend.
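In case it helps to quantify the listing traffic, here is a rough way I could count the ListBlobs calls for one full tree scan (a sketch; it assumes Azure list requests carry comp=list in the query string, that each ListBlobs page shows up as one request in the dump, and that a recursive lsd triggers roughly the same per-directory listing a mount tree walk would):

rclone lsd -R -vv --dump headers blobcrypt:/MYDATA/ 2>&1 | grep -c 'comp=list'

Note that running this performs a full scan itself, so it incurs the same transactions it is measuring.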
PS: Just to clarify the scale: imagine having 200K directories in azure storage; a single full scan of the mount's directory tree would cost about $1 in ListBlobs transactions. If dir-cache-time is set to 1h, that would incur roughly a $24 charge per day just to keep the mount's directory cache warm, without even reading a single file.
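The arithmetic behind that estimate, under the assumptions above (one ListBlobs call per directory, and the ~$1-per-full-scan figure, which depends on your tier's per-10K-transaction price):

200,000 directories ≈ 200,000 ListBlobs transactions per full scan ≈ $1
24 re-scans per day (dir-cache-time=1h) × $1 per scan ≈ $24 per day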