Backblaze B2 bucket size increasing (buckets growing)

What is the problem you are having with rclone?

Backblaze B2 bucket size has been increasing over time (>18 months) despite using rclone sync

Run the command 'rclone version' and share the full output of the command.

rclone v1.58.1
- os/version: ubuntu 22.04 (64 bit)
- os/kernel: 5.15.0-72-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.9
- go/linking: static
- go/tags: none

Which cloud storage system are you using? (eg Google Drive)

Backblaze B2

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone -v --syslog --bwlimit 5M --links --local-no-check-updated --exclude-from /usr/local/etc/rclone.excludes --fast-list --transfers 32 sync /etc ovtt001-crypt-sys-etc:

The rclone config contents with secrets removed.

[bblaze]
type = b2
account = 007
key = mysupersecretkey

[ovtt001-crypt-sys-etc]
type = crypt
remote = bblaze:0vtt001-sys-etc-00
password = averystrongpass
password2 = anotherverystrongpass

My apologies in advance if my question is out of place. This is not a specific usage question but rather I'm hoping to fill a possible gap in my knowledge.

I've been using rclone against Backblaze B2 for over 3 years. During this time I've had to do nearly zero maintenance or troubleshooting to keep things functional. However, within the past year or so I've noticed the monthly bill increasing and, without deep investigation, casually wrote that off as growth in stored data. A more recent in-depth look shows my source data is significantly smaller than the remote bucket.

To emphasize, I am using rclone sync.

An example: I have an existing bucket ovtt001-crypt-sys-etc (Current Files: 48,784 | Current Size: 1.2 GB). In some recent testing I created a bucket ovtt001-crypt-sys-etc-test and, without introducing any new rclone parameters or new filtering, with the exact same backup, I see (Current Files: 30,389 | Current Size: 254 MB). Again, the first bucket has been in use for years; the -test bucket has seen only a few recent test runs.

What could be causing this long-term bloat? I've made a point of using rclone sync to avoid this.

I could delete or replicate these long-standing buckets and start over, but I'd rather avoid that.

Check out the rclone B2 documentation on versions.

I'd imagine you are getting versions.
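By default B2 keeps old versions of files: when rclone sync overwrites or deletes a file, B2 hides the previous version rather than deleting it, and those hidden versions still count toward the bucket size (and the bill). A hedged way to check whether that is the cause, using the remote and bucket names from the config in the post (adjust to yours):

```shell
# Size of the current files only
rclone size bblaze:0vtt001-sys-etc-00

# Size including hidden old versions -- a large gap between these two
# numbers means version bloat rather than growth in live data
rclone size --b2-versions bblaze:0vtt001-sys-etc-00
```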


Check what rclone thinks of the source and destination

rclone size --links --exclude-from /usr/local/etc/rclone.excludes /etc

rclone size --links --exclude-from /usr/local/etc/rclone.excludes ovtt001-crypt-sys-etc:

If these are near enough the same, then it is almost certainly versions as mentioned above.

If not, then check the log of a recent transfer to see whether rclone is getting to the delete phase of the sync. Rclone will skip the delete phase if there were errors reading the source.
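For example, a sketch of how to spot that in the logs (assuming the --syslog setup from the original command, so messages go to syslog/journald; the log path and tag vary by distro):

```shell
# When the source had read errors, rclone skips the delete phase and logs
# a "not deleting" notice -- search a recent run's output for it
grep -i "not deleting" /var/log/syslog

# Equivalent on journald systems (syslog tag is typically the program name)
journalctl -t rclone | grep -i "not deleting"
```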


It might also be worth doing a small test now and updating rclone to the latest version.

You have v1.58.1 and the current one is v1.62.2.
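If rclone was installed from the official install script or a release binary, it can update itself (installs from a distro package manager should be updated through the package manager instead):

```shell
# Replaces the running rclone binary with the latest stable release
sudo rclone selfupdate
```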

Thanks a million for the helpful suggestions. This was definitely the case of versions. Time to read that B2 page in careful detail.

I found this (general) approach significantly reduces storage consumption with Backblaze:

# du -sh /opt/storage
# rclone size --links --exclude-from /usr/local/etc/rclone.excludes /opt/storage
# rclone size --links --exclude-from /usr/local/etc/rclone.excludes 0ff010-crypt-opt-storage:
# rclone -v cleanup 0ff010-crypt-opt-storage:
# rclone size --links --exclude-from /usr/local/etc/rclone.excludes 0ff010-crypt-opt-storage:
# /usr/bin/rclone -v --syslog --bwlimit 5M --links --local-no-check-updated --exclude-from /usr/local/etc/rclone.excludes --fast-list --transfers 32 sync /opt/storage 0ff010-crypt-opt-storage:
# rclone size --links --exclude-from /usr/local/etc/rclone.excludes 0ff010-crypt-opt-storage:
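One hedged way to keep this from recurring is to chain cleanup after each sync, so old versions never accumulate (paths and remote name taken from the commands above; on a B2 remote, rclone cleanup removes old file versions and stale unfinished large-file uploads, leaving current files intact):

```shell
# Sync, then purge old versions only if the sync succeeded
/usr/bin/rclone -v --syslog --bwlimit 5M --links --local-no-check-updated \
  --exclude-from /usr/local/etc/rclone.excludes --fast-list --transfers 32 \
  sync /opt/storage 0ff010-crypt-opt-storage: \
&& /usr/bin/rclone -v cleanup 0ff010-crypt-opt-storage:
```

Alternatively, the bucket's lifecycle settings on the Backblaze side can be set to keep only the last version of each file, so B2 prunes hidden versions automatically.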
