I have 37 million files in a bucket, mounted with rclone. So far it has cost me $584 to upload, and I need to know what I am doing with the mounted bucket that causes so many class C transactions.
Run the command 'rclone version' and share the full output of the command.
rclone v1.65.0
os/version: ubuntu 20.04 (64 bit)
os/kernel: 5.4.0-186-generic (x86_64)
os/type: linux
os/arch: amd64
go/version: go1.21.4
go/linking: static
go/tags: none
Are you on the latest version of rclone? You can validate by checking the version listed here: Rclone downloads
Which cloud storage system are you using? (eg Google Drive)
Backblaze B2
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone mount --vfs-cache-mode writes --transfers 200 --no-modtime --allow-other --vfs-read-chunk-size off b2-2:fsassets-vw /mnts/b2-ass3 --inplace --daemon
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
No idea what programs are accessing the mount and how they access it, though the answer would be in the debug log.
You can use --dump=headers to see each API call.
One reason could be listing all those files over and over again.
from the docs "The b2_list_file_names request will be sent once for every 1k files in the remote path"
You might increase --dir-cache-time.
And what is the reason for --vfs-read-chunk-size off?
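For example, the two suggestions above could be combined into a debug run of the mount; this is just a sketch using the remote and mount point from your own command, with an illustrative cache time and log path:

```
rclone mount b2-2:fsassets-vw /mnts/b2-ass3 \
  --vfs-cache-mode writes \
  --dir-cache-time 24h \
  --dump=headers \
  --log-level DEBUG \
  --log-file /tmp/rclone-mount.log
```

Then grep the log for b2_list_file_names to see what is triggering the listings and how often they repeat.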
I have over 30 million files, so listing file names uses 30,000 class C transactions.
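A quick back-of-the-envelope check of that figure. The 1,000-names-per-call page size comes from the docs quoted above; the per-1,000-call price is an assumption based on B2's published class C rate, so substitute your actual rate:

```python
# Class C calls needed to list the bucket once.
# b2_list_file_names returns up to 1,000 file names per call (per the docs
# quoted earlier); the price per 1,000 class C calls is an assumption here.
files = 30_000_000
names_per_call = 1_000
price_per_1000_calls = 0.004  # USD, assumed B2 class C rate

calls_per_full_listing = files // names_per_call
cost_per_full_listing = calls_per_full_listing / 1_000 * price_per_1000_calls

print(calls_per_full_listing)              # 30000
print(f"${cost_per_full_listing:.2f}")     # $0.12
```

Note that a single full listing is cheap at that rate; a bill of $584 would imply millions of class C calls, which points at the listing (or per-file API calls during upload) being repeated many times rather than a one-off scan.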
It was recommended, as most if not all of the files are small and this setting gives better performance.
What am I doing that is generating the b2_list_file_names operation?
All I am doing is creating a mount point.
I was also using rclone copy to upload; I have had to stop, as the current bill from Backblaze is existentially threatening.
tl;dr: for many tasks, rclone copy|sync is a better choice, with more tweaks available to reduce API calls.
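As a sketch of that approach (the local path here is hypothetical, and the --checkers/--transfers values are illustrative; --fast-list trades memory for fewer listing transactions on backends like B2 that support it):

```
rclone copy /local/assets b2-2:fsassets-vw \
  --fast-list \
  --checkers 16 \
  --transfers 32 \
  --log-level INFO
```

With copy/sync you also avoid the VFS layer entirely, so nothing on the machine can browse the remote and trigger extra listings mid-transfer.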
That is a huge amount of money. What is the total size of all the files?
And can you post the details from the B2 website, the breakdown of the API calls and costs?
Just starting the mount should not incur an API call.
rclone is pretty good about not accessing the remote until asked to by a program or the OS.
That would incur API calls. --dump=headers will capture the API calls.
Suggestion: mounting the remote drive is a convenience and also very useful for interactive access. Why not copy all your files one time with the rclone copy command, then mount the drive after transferring the bulk of the data, and use the mounted drive thereafter for small changes?
Phase 1 has cost $584 so far and is not yet complete.
Phase 2 must use a mount, and accessing the mount will incur class C transactions, as confirmed by Backblaze.