Rclone and S3 AWS

Hello everyone, I use Rclone to upload files to S3 and would like some tips on making this cheaper. I read the Rclone manual and still have some doubts; maybe I'm on the right path, but I'd like your help. I upload every day at noon and at night.
From what I saw in the documentation, a cheap command to use would be copy with --fast-list --size-only.
Is using these two parameters every day, plus a full sync once a week, a good idea? I also saw the --max-age 24h --no-traverse flags.
So I'd like your advice: which should I use at noon and at night? Or do you use the same command during the week and a complete sync on the weekend? Thank you very much in advance.
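The schedule described above could be sketched as crontab entries like the following (the remote name "s3:", the bucket, and the local path are placeholders, not from the thread):

```shell
# Cheap incremental copy at noon and midnight: --fast-list reduces
# listing API calls; --size-only skips the per-file modtime check.
0 0,12 * * * rclone copy /data s3:my-bucket/data --fast-list --size-only

# Full sync once a week (Sunday 1am) to remove deleted files from the
# bucket and catch same-size edits that --size-only would skip.
0 1 * * 0    rclone sync /data s3:my-bucket/data --fast-list
```

Note that sync, unlike copy, deletes destination files that no longer exist in the source, which is why it is the heavier weekly pass here.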

yes, that works well. how often depends on the provider. which provider(s) do you use?
i find no need for a time-limited sync; do not use --max-age 24h --no-traverse

imho, nothing better than S3 -> MFA, SSE-C, session tokens, policies(iam|bucket), ...

there are many providers to choose from, to reduce cost.
in my case,
--- upload recent backups, mostly veeam and .7z, max 90 days old, to wasabi
--- older than 90 days, copy/move to aws s3 glacier deep archive.

note: wasabi does not charge for api calls or downloads.
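The two-tier setup above might look something like this in rclone terms (remote names "wasabi:" and "aws:" and the paths are placeholders; the age cutoffs are from the post):

```shell
# Recent backups (under 90 days old) go to Wasabi, where API calls
# and downloads are free:
rclone copy /backups wasabi:backups --max-age 90d --fast-list

# Anything older than 90 days moves to AWS S3 using the Deep Archive
# storage class, the cheapest tier for long-term retention:
rclone move /backups aws:backups-archive --min-age 90d \
    --s3-storage-class DEEP_ARCHIVE
```

--max-age and --min-age filter by file age on the source, so the two commands partition the backup set at the 90-day boundary.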

Hello, so I have rclone installed on a local Linux machine and I send the files directly to S3. I will use the "--fast-list" "--size-only" parameters
with the copy command,
twice a day. I think that's it, thank you very much for your help. The only problem would be --size-only when, for example, a misspelled word is corrected inside a file, say "ihhh" changed to "Hiii": the size stays the same, so I don't think it will be uploaded.
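That concern is correct: --size-only compares only byte counts, so a same-length edit is invisible to it. A minimal local sketch of why, using plain shell tools rather than rclone (file names are hypothetical):

```shell
# Two versions of a file with the edit described above:
printf 'ihhh' > before.txt
printf 'Hiii' > after.txt

# A size-only comparison sees no difference (both are 4 bytes):
[ "$(wc -c < before.txt)" -eq "$(wc -c < after.txt)" ] && echo "sizes match"
# → sizes match

# A content comparison does see the change:
cmp -s before.txt after.txt || echo "contents differ"
# → contents differ
```

This is why the weekly full sync (or a periodic run with --checksum) is useful: it catches same-size content changes that the cheap daily pass skips.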

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.