Rclone to Wasabi sync: sizes/counts don't match


I'm marking this as a question as I'm still learning about Wasabi and rclone. I'm trying to sync a local directory from openSUSE Leap 15.0 to Wasabi S3 storage using rclone. The local directory contains 1,514,889 items totaling 437.9 GB. The remote currently shows 1,678,599 / 1,688,611 objects, with 1.169 TB transferred. I'm confused as to why these two sets of numbers are so far apart. The command I'm using for the sync is:

/usr/bin/rclone --config=~/.config/rclone/rclone.conf -l -v --log-file=/var/log/rclone/rclone.log sync /srv/backups Remote_Encrypted:backups

The rclone log shows no errors being reported. Checks are 3,394,033 / 3,394,033.
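One way to see whether symlink handling is inflating the counts is to tally items and bytes both ways locally. The sketch below is not rclone's code, just a rough local analogue of what following links (-L / --copy-links) versus not following them (-l / --links) does to the numbers; the path at the bottom is a placeholder for the directory being synced.

```python
import os

def tally(root, follow_symlinks):
    """Count regular files and their total bytes under root.

    With follow_symlinks=True, symlinked directories are descended into
    (roughly what rclone -L / --copy-links does); with False, a symlink
    counts as a single entry and is not followed (closer to -l / --links).
    """
    files, size = 0, 0
    for dirpath, dirnames, filenames in os.walk(root, followlinks=follow_symlinks):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.islink(path) and not follow_symlinks:
                files += 1          # the link itself counts as one item
                continue
            try:
                files += 1
                size += os.path.getsize(path)
            except OSError:
                pass                # broken link or file that vanished mid-walk
    return files, size

# Placeholder path -- point this at the directory you are syncing:
# print(tally("/srv/backups", follow_symlinks=True))
# print(tally("/srv/backups", follow_symlinks=False))
```

If the two tallies diverge by hundreds of gigabytes, symlinks pointing back into the tree (or out to other file systems) are the likely cause. Note that followlinks=True can loop forever on cyclic symlinks, so only use it for a quick comparison.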

The rclone.conf is as follows:

type = s3
provider = Wasabi
env_auth = false
access_key_id =
secret_access_key =
region = us-east-1
endpoint = s3.wasabi.com

type = crypt
remote = bucket-name:folder/subfolder
filename_encryption = standard
directory_name_encryption = true
password =
password2 =

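For context, a layered setup like this normally has two named sections in rclone.conf, with the crypt remote pointing at the S3 remote by name (and, for S3, the first path segment after the colon being the bucket). The section names and the remote path below are hypothetical placeholders, since the originals were redacted; s3.wasabisys.com is Wasabi's documented service endpoint.

```ini
# Hypothetical section names -- substitute your actual remote names.
[wasabi]
type = s3
provider = Wasabi
env_auth = false
access_key_id = REDACTED
secret_access_key = REDACTED
region = us-east-1
endpoint = s3.wasabisys.com

[Remote_Encrypted]
type = crypt
# Wraps the S3 remote above; bucket-name is the bucket, the rest is a prefix.
remote = wasabi:bucket-name/folder/subfolder
filename_encryption = standard
directory_name_encryption = true
password = REDACTED
password2 = REDACTED
```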
Using rclone lsf confirms what my Wasabi dashboard is telling me: Terabytes worth of storage consumed. Can someone explain why I might see numbers like these? I'm thinking I should expect to see 1,514,889 items totaling 437.9 GB storage, but I understand that this number might be potentially inflated because of the 90 day storage policy, but even then, my stored value should at least match, or close to it, and my deleted value should be larger. I certainly don't understand why my transferred count should be far from what I think it should be. Please advise. Thank you!

You'd probably have to take a look at the log and see what it is doing. You are using -l, which means it is going to follow links, so maybe you are crossing into a different file system?

I'd probably stop it and try to figure out where the issue lies: whether it is copying things over and over, or crossing into a different file system.

Thank you for the reply. I initially thought that too, once I saw that -L (--copy-links) does exactly what you're suggesting. I replaced that original option (-L) with the lowercase -l (--links), which copies only the symbolic link itself from local storage and writes it to S3 as a file with a .rclonelink suffix. I've confirmed this by completely purging S3 and syncing again. As I understand the documentation, on restore rclone will translate the .rclonelink file back into the symbolic link, reattaching everything as it was.
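The translation described above can be illustrated with a small sketch. This is not rclone's implementation, just the convention the docs describe: on upload the link becomes a plain file named <name>.rclonelink whose content is the link target, and on restore that file becomes a symlink again.

```python
import os

SUFFIX = ".rclonelink"

def serialize_link(link_path):
    """Replace a symlink with a plain <name>.rclonelink file holding its
    target -- roughly the upload direction of rclone -l / --links."""
    target = os.readlink(link_path)
    with open(link_path + SUFFIX, "w") as f:
        f.write(target)
    os.remove(link_path)

def restore_link(marker_path):
    """Turn a <name>.rclonelink file back into a symlink (restore direction)."""
    assert marker_path.endswith(SUFFIX)
    with open(marker_path) as f:
        target = f.read()
    os.symlink(target, marker_path[:-len(SUFFIX)])
    os.remove(marker_path)
```

Because only the short target string is stored, a tree full of symlinks costs almost nothing on the remote, which is why -l should not inflate the transferred totals the way -L can.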

This topic was automatically closed 90 days after the last reply. New replies are no longer allowed.