What is the problem you are having with rclone?
I'm using deja dup (which uses duplicity) to create full/incremental backups on a schedule. This works well and is very convenient, but only backs up locally.
So I want to use rclone to send these backups into an S3 Glacier bucket, since that seems like a very cost-effective and robust solution for taking the backup off site.
So far so good, however: how can I confirm that the backup files that arrive in the S3 bucket are actually correct and complete? Does rclone perhaps already do this transparently? I notice that the files I sent to S3 using rclone as a test all have a value in a field called ETag. It looks like this: e22bbbef23997bb271a6209637ae59c4-46. It looks to me like it might be a hash of some kind?
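From what I can tell from the AWS docs, an ETag with a -N suffix is the format S3 uses for multipart uploads: it isn't the MD5 of the whole file, but the MD5 of the concatenated binary MD5 digests of the parts, with the part count appended (so the -46 would mean 46 parts). A rough sketch of that calculation, to show what I mean:

```python
import hashlib

def multipart_etag(data: bytes, part_size: int) -> str:
    """S3-style multipart ETag: MD5 of the concatenated binary
    MD5 digests of each part, plus a '-<part count>' suffix."""
    digests = [
        hashlib.md5(data[i:i + part_size]).digest()
        for i in range(0, len(data), part_size)
    ]
    return hashlib.md5(b"".join(digests)).hexdigest() + f"-{len(digests)}"

# Example: 10 MiB of zeros uploaded in 5 MiB parts -> two parts,
# so the result is 32 hex characters followed by "-2".
etag = multipart_etag(b"\x00" * (10 * 1024 * 1024), 5 * 1024 * 1024)
print(etag)
```

If that's right, the ETag can't simply be compared against a plain md5sum of the local file unless you know the part size rclone used.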
I also noticed that at least some of the files have a metadata tag like this: x-amz-meta-md5chksum: RPPT/eE0pm+SLZwvXgiJGw==
This surely is an MD5 hash of the file, but when is it created and how is it used?
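If it helps to see why I think it's an MD5: the value looks like a base64-encoded raw 16-byte MD5 digest rather than the usual hex spelling, so it decodes to something comparable with md5sum output:

```python
import base64

# The x-amz-meta-md5chksum value from above, which appears to be
# a base64-encoded raw MD5 digest (16 bytes) rather than hex.
digest = base64.b64decode("RPPT/eE0pm+SLZwvXgiJGw==")
print(len(digest))   # 16, the size of an MD5 digest
print(digest.hex())  # hex form, comparable to `md5sum` output
```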
It doesn't help if rclone is all kinds of careful before sending the file, only for it to get corrupted on the way to S3. Ideally, I guess we'd want S3 to calculate the hash and report it back to rclone for verification against a locally created hash?
I noticed that rclone has a command called checksum. I tried to experiment with it a bit, but it wants a SUM file, which isn't too unexpected given how I expect the command to work... but I can't seem to find a way to make S3 give me a file full of checksums.
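For context on what I'm trying to achieve: I believe rclone md5sum remote:path can write a SUM file in the usual "hex-digest, two spaces, file name" format, which seems to be the shape rclone checksum wants. The local half of such a verification would amount to something like this sketch (the file names are made up for illustration):

```python
import hashlib
import os
import tempfile

def verify_against_sumfile(sumfile_path, root="."):
    """Compare local files against a SUM file in the usual
    '<md5-hex>  <name>' format (as produced by md5sum)."""
    results = {}
    with open(sumfile_path) as f:
        for line in f:
            line = line.rstrip("\n")
            if not line:
                continue
            expected, name = line.split("  ", 1)
            with open(os.path.join(root, name), "rb") as data:
                actual = hashlib.md5(data.read()).hexdigest()
            results[name] = (actual == expected)
    return results

# Self-contained demo: a throwaway file standing in for a backup volume.
tmp = tempfile.mkdtemp()
with open(os.path.join(tmp, "vol1.difftar.gz"), "wb") as f:
    f.write(b"example backup data")
md5 = hashlib.md5(b"example backup data").hexdigest()
with open(os.path.join(tmp, "SUMS"), "w") as f:
    f.write(f"{md5}  vol1.difftar.gz\n")
print(verify_against_sumfile(os.path.join(tmp, "SUMS"), root=tmp))
# {'vol1.difftar.gz': True}
```

What I'm missing is the remote half, i.e. how to get such a SUM file out of S3 in the first place.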
I spent hours scouring the documentation and couldn't find an explanation.
Run the command 'rclone version' and share the full output of the command.
- os/version: arch "rolling" (64 bit)
- os/kernel: 6.1.22-1-lts (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.20.2
- go/linking: dynamic
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Amazon S3 Glacier deep archive
The command you were trying to run (eg rclone copy /tmp remote:tmp)
This is what I used to get files into S3. It worked fine, and I can see the files in S3.
rclone copy laptop-backup/ aws-backup:backups-8011/laptop
The rclone config contents with secrets removed.
[JohanGDrive]
type = drive
client_id =
client_secret =
scope = drive
token = {"..."}
team_drive =
[aws-backup]
type = s3
provider = AWS
access_key_id =
secret_access_key =
region = af-south-1
location_constraint = af-south-1
acl = bucket-owner-full-control
server_side_encryption = AES256
storage_class = DEEP_ARCHIVE
A log from the command with the -vv flag
I don't see how this is relevant here, since I'm trying to understand how it works rather than chasing an unexpected error. Let me know if more info would help.