Cheapest way to get bucket size

Hello,

I am using Rclone to back up my NAS to a couple of object storage services (Google Cloud Storage and B2), spread over several buckets. I would like to periodically check the size of all my buckets and, since I pay for transactions, I was wondering what the cheapest way of doing this with Rclone would be?

How does Rclone size work? Is it a basic implementation of du -b, in that it lists all the object sizes and adds them up, or does it do something more sophisticated?

Rclone size lists the objects as fast as it can and adds them up.
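
For example (the bucket name is a placeholder and the output below is only illustrative; the exact format varies between rclone versions):

    rclone size gcs:my-bucket
    Total objects: 2150
    Total size: 2.000 GiB (2147483648 Bytes)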

rclone lsd on the root (e.g. rclone lsd b2:) can show the number of objects and their total size. I can't remember whether Google Cloud Storage and B2 support this, though, so I suggest you give it a go!
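
If the backend reports them, the listing has columns for size, modification time, object count and bucket name, with -1 where a value isn't available. Purely as an illustration (the numbers and bucket name are made up):

    rclone lsd b2:
         4276717 2018-03-15 11:22:33      2150 my-bucket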

Would --fast-list help reduce the number of transactions?

Thanks for the quick response, Nick.

lsd does not appear to work on GCS; it returns a list of folders and modification times, but no sizes.

size works, but it is relatively slow on buckets with many objects. I presume this means that size is using many transactions (taking roughly as long as du would) and is therefore relatively expensive?

I may have to try to do something with GCS metric reports or logs instead. Is there any way to easily see the number of HTTP requests made by Rclone, without resorting to logging the requests manually?

rclone size will engage --fast-list automatically

It will be listing all the objects, so it will take roughly number_of_objects / 1000 transactions
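
For example, a bucket with a little over 2000 objects should only need two or three list requests, since each listing call covers up to 1000 objects.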

Rclone will show stats about files and bytes transferred with -v

If you want to see the HTTP transactions then use -vv --dump headers
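
For example, something like this should let you count the listing requests (the bucket name is a placeholder, and the exact log format can vary between rclone versions, so you may need to adjust the grep):

    rclone size -vv --dump headers gcs:my-bucket 2>&1 | grep -c "GET /"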


Ah! That makes sense.

A dump of the HTTP headers confirms what you said above. My test bucket has a little over 2000 objects in it and the header dump shows three GET requests.

Thanks again for your quick help. Rclone is an awesome bit of software.

