What is the problem you are having with rclone?
I have been backing up files with rclone for a couple of years. In spring 2022 I switched to Amazon S3 Glacier Deep Archive to save costs. This worked fine until recently, when I noticed that rclone was trying to re-upload files I knew for a fact were already backed up. I don't know what caused this change or when it happened.
I am using crypt to encrypt the contents and filenames. When I run either rclone ls or rclone sync, I can see that for the files rclone is trying to re-upload, the remote file has a leading ./ and the local file does not. An example from rclone sync --dry-run output is below. So it seems that rclone treats these as different files due to the difference in path: it tries to upload the file without ./ and to delete the file with ./.
...
2022-08-13 10:47:10 NOTICE: Some Directory/Example File.flac: Skipped copy as --dry-run is set (size 36.620Mi)
2022-08-13 10:47:16 NOTICE: ./Some Directory/Example File.flac: Skipped delete as --dry-run is set (size 36.620Mi)
...
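To illustrate the mismatch (a minimal Python sketch of plain string comparison, not rclone's actual comparison code): the two names from the log differ only by the literal ./ prefix and would be equal after path normalization.

```python
import posixpath

remote_path = "./Some Directory/Example File.flac"  # as listed on the remote
local_path = "Some Directory/Example File.flac"     # as listed locally

# Compared as plain strings, the names differ...
print(remote_path == local_path)  # False

# ...but normalizing away the leading "./" makes them identical.
print(posixpath.normpath(remote_path) == posixpath.normpath(local_path))  # True
```

So the files themselves are presumably fine; only the stored path names differ.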
I have multiple remotes configured. All of them use S3 and crypt, but different buckets. For simplicity's sake I have only included two example remotes: s3_crypt_a, which has the re-upload problem, and s3_crypt_b, which works fine. I noticed that the affected remote (s3_crypt_a) has the configuration lines shown below, whereas the working remote (s3_crypt_b) does not state them explicitly.
filename_encryption = standard
directory_name_encryption = true
However, according to the crypt documentation (rclone.org), these are the default values, so I don't know whether they actually matter.
I could delete the existing files and re-upload them without the leading ./, but Deep Archive has an early-deletion fee and re-uploading also costs money, so I'd really like to understand what happened and how I can avoid it in the future.
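One workaround I am considering is renaming the affected objects in place instead of re-uploading. This is only a sketch: rclone moveto is a real command, but whether a server-side move on Deep Archive avoids the early-deletion fee, and whether rclone accepts the literal ./ prefix in a path argument, are assumptions I have not verified, so I would test on a single file first.

```python
# Hypothetical sketch: turn remote paths that carry a leading "./" into
# "rclone moveto" commands that would rename them without re-uploading.
# The remote name and the example path are taken from the log above.
import shlex

affected = [
    "./Some Directory/Example File.flac",
]

for path in affected:
    # Strip only the literal "./" prefix; leave other paths untouched.
    new_path = path[2:] if path.startswith("./") else path
    print(
        "rclone moveto "
        + shlex.quote(f"s3_crypt_a:{path}")
        + " "
        + shlex.quote(f"s3_crypt_a:{new_path}")
    )
```

The printed commands could then be reviewed by hand before running any of them.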
Run the command 'rclone version' and share the full output of the command.
rclone v1.58.0
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.10.102.1-microsoft-standard-WSL2 (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.17.8
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
Amazon AWS S3 (Glacier Deep Archive storage class)
The command you were trying to run (eg rclone copy /tmp remote:tmp)
Sync commands:
rclone sync --immutable --size-only --progress --transfers 32 "/mnt/c/A_Source/" s3_crypt_a:.
rclone sync --immutable --size-only --progress --transfers 32 "/mnt/c/B_Source/" s3_crypt_b:.
ls commands:
rclone ls --fast-list s3_crypt_a:
rclone ls --fast-list s3_crypt_b:
The rclone config contents with secrets removed.
[s3]
type = s3
provider = AWS
env_auth = false
access_key_id = !!OMITTED!!
secret_access_key = !!OMITTED!!
region = eu-north-1
location_constraint = eu-north-1
acl = private
storage_class = DEEP_ARCHIVE
bucket_acl = private
upload_cutoff = 5G
chunk_size = 2G
upload_concurrency = 8
[s3_crypt_a]
type = crypt
remote = s3:a
filename_encryption = standard
directory_name_encryption = true
password = !!OMITTED!!
password2 = !!OMITTED!!
[s3_crypt_b]
type = crypt
remote = s3:b
password = !!OMITTED!!
password2 = !!OMITTED!!
A log from the command with the -vv flag
...
2022-08-13 10:47:10 NOTICE: Some Directory/Example File.flac: Skipped copy as --dry-run is set (size 36.620Mi)
2022-08-13 10:47:16 NOTICE: ./Some Directory/Example File.flac: Skipped delete as --dry-run is set (size 36.620Mi)
...