Google Cloud Storage number of operations much higher than expected with simple sync

What is the problem you are having with rclone?

I store less than 40GB in an archival Google Cloud Storage bucket and I estimated the storage cost to be £0.04 per month (ignoring the cost of operations). However, I've just checked, and by the end of the month I'll have to pay up to £8.40!

I assumed the operation cost would be next to nothing after the first upload, but that does not appear to be the case.
I use rclone once a day to sync any changes from my server. There are 56,000 objects in the bucket.
When I go to the GCS billing reports, I can see that all the cost is coming from Class A operations, which amounts to £0.28 daily. Class A/B ops cost £0.05 per 1,000, so that makes 5,600 operations daily.
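As a quick sanity check of that arithmetic (using only the figures quoted above — £0.28 per day at £0.05 per 1,000 operations):

```sh
# Daily ops = daily cost / rate per 1,000 ops * 1,000
daily_cost=0.28
rate_per_1000=0.05
ops=$(awk -v c="$daily_cost" -v r="$rate_per_1000" 'BEGIN { printf "%.0f", c / r * 1000 }')
echo "$ops"   # 5600 operations per day
```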

Presumably rclone is listing directories recursively, but I don't think I have more than a couple of thousand directories.
How do I reduce the number of operations rclone uses for a single sync? And for my purposes, should I consider using Standard storage, where the operation costs are significantly lower, or Autoclass, which decides for me?

Your topic is similar to
https://forum.rclone.org/t/optimization-of-class-a-and-class-b-operations/46565

From what you describe, you are using the archival storage class for something it is not designed for.

It is like archiving data to tape (deep archive) stored in a vault, but then asking every day to put it back in the drive and read its directory again :) This is exactly the usage Google (and others) try to discourage with per-operation pricing.

A typical archive solution keeps the directory/index on a hot storage class and uses the archive class only for what it is for: archiving data that is rarely touched. But for this you need something more specialised than a simple utility program like rclone.

With 40GB of data it is not worth the effort, IMO. Just use whatever hot class Google offers, and if that is too expensive, use something else.
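That said, if you do stay with rclone, its documented `--fast-list` flag is worth trying: it uses a recursive listing call instead of one listing call per directory, trading more memory for fewer API transactions. A minimal sketch, assuming a hypothetical remote named `gcs:` and bucket `my-bucket`:

```sh
# --fast-list: fewer listing transactions, more memory used.
# Remote and bucket names here are placeholders for your own config.
rclone sync /path/to/data gcs:my-bucket --fast-list
```

You can preview the effect with `--dry-run` first, and check how many transactions a sync actually makes with `-v` logging before and after the change.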

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.