What is the problem you are having with rclone?
I'm using rclone with Wasabi for backups and storage. I have one bucket which I use only for rclone. My problem is that the size of the bucket shown by Wasabi is way larger than the size of all my files as reported by rclone size wbac: or by the AWS CLI (exact commands below).
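For reference, these are the two size checks I compared against the bucket usage shown by Wasabi:

```bash
# Total size as seen through my rclone alias remote
rclone size wbac:

# Total size of the raw bucket via the S3 API
aws s3 ls --summarize --human-readable --recursive s3://<bucket> \
  --endpoint-url=https://s3.eu-central-1.wasabisys.com
```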
Here is how I think I got this problem. Every time I start my PC, I run a script which uses rclone copy and rclone sync to back up some of my folders (a sketch of the script is below). One of my files is 36 GB. Since my internet connection is too slow and I didn't leave my PC running long enough, this file could never be uploaded in one go, and because rclone restarted the upload process every time, this left a total of about 300 GB of encrypted chunk files. I did not realize this until a few days ago, when I was wondering why my bucket was so large. I then used Wasabi Explorer to delete these files manually, and I think this is where the problem comes from. Now if I use rclone or the AWS CLI to show the size of my bucket, it is 300 GB smaller, but the size shown by Wasabi is the same as before.
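To give an idea of what that script does, here is a minimal sketch (shown as a shell script; the folder names and paths are placeholders, and the real script backs up several folders):

```bash
#!/bin/bash
# Runs at PC startup: back up a few local folders to the Wasabi-backed remote.
# Paths and destinations below are placeholders.

# copy: one-way upload, never deletes anything on the remote
rclone copy /home/me/documents wbac:documents --progress

# sync: make the remote match the local folder, including deletions
rclone sync /home/me/photos wbac:photos --progress
```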
I wrote to Wasabi support and got the following answer:
This is caused by the way the backup strategy/process of the application you are using interacts with our system.
Whenever the same object body is uploaded using certain backup applications, links and composing objects are created in the database.
Complete info of how the composing objects work can be found here: Wasabi API Guide
Some backup applications like Veeam, Commvault, MSP360, Altaro, etc. use a different backup strategy, and they do not use the exact same object body, which circumvents creation of composed objects on Wasabi.
We would recommend using those applications to reupload data from the bucket(s) you are experiencing this issue with to new buckets.
We could have your old bucket(s) waived for any deleted charges once you successfully re-upload data to new buckets, so you do not get billed for that going forward.
You may decide to continue using your current application for your backups, but please keep in mind that due to its backup strategy, composing objects will be part of your bucket, and hence they would not appear exactly in the utilization stats that you see in that or other s3 applications, those become internal DB links and function of that as mentioned in our API guide (above).
Let us know how you would like to proceed.
Do you have any idea how I could fix this problem without following the solution proposed by support, i.e. re-uploading everything to a new bucket? Is it possible to delete or manipulate the wrongful links in the database using the Wasabi API or something else?
I'm currently still in contact with support and will let you know if I find a new solution.
As for the original problem of uploading my 36 GB file: I managed to do it by setting up the same crypt and chunker configuration locally and then using it to encrypt and chunk the file. Afterwards I manually uploaded the resulting 400 MB files step by step to the appropriate folder.
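In case it is useful, here is a rough sketch of that workaround. The remote names localcrypt and localchunker and all paths are placeholders I'm using for this example; the passwords have to match the wbaccrypt remote so that the encrypted file and directory names line up. The final upload of the chunk files is shown with rclone here, although I actually did that step by hand:

```bash
# Local staging config (added to rclone.conf), mirroring the remote layering
# but writing into a local directory instead of Wasabi:
#
#   [localcrypt]
#   type = crypt
#   remote = /home/me/staging/crypt
#   password = <same as wbaccrypt>
#   password2 = <same as wbaccrypt>
#
#   [localchunker]
#   type = chunker
#   remote = localcrypt:chunker
#   chunk_size = 400Mi

# 1. Encrypt and chunk the 36 GB file into the local staging directory.
rclone copy /home/me/bigfile.img localchunker:backup/ --progress

# 2. Upload the resulting ~400 MB encrypted objects to the matching path in
#    the real bucket; each object is small enough to finish in one session.
rclone copy /home/me/staging/crypt wasabi:<bucket>/crypt --progress
```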
Run the command 'rclone version' and share the full output of the command.
rclone v1.56.1
Which cloud storage system are you using? (eg Google Drive)
Wasabi
The rclone config contents with secrets removed.
[wasabi]
type = s3
provider = Wasabi
access_key_id = ******************
secret_access_key = ******************
region = eu-central-1
endpoint = s3.eu-central-1.wasabisys.com

[wbaccrypt]
type = crypt
remote = wasabi:<bucket>/crypt
password = ******************
password2 = ******************

[wbacchunker]
type = chunker
remote = wbaccrypt:chunker
chunk_size = 400Mi

[wbac]
type = alias
remote = wbacchunker:.
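To clarify how these remotes chain together: wbac: is just an alias for wbacchunker:, which stores 400 MiB chunks through wbaccrypt:, which in turn encrypts everything into wasabi:<bucket>/crypt. For example (the paths are only illustrative):

```bash
# The alias simply forwards to the chunker remote, so these are equivalent:
rclone lsf wbac:documents/
rclone lsf wbacchunker:documents/

# The chunker writes through the crypt remote, which stores the encrypted
# ~400 MiB objects under the crypt/ prefix of the raw bucket:
rclone size wasabi:<bucket>/crypt
```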