Adding S3 Lifecycle Rules

Hello

We have been using Rclone to sync files from Google Drive to S3 with the following command:

/usr/bin/rclone sync gdrive-prod: s3:bucketrclone/prod

The S3 bucket has been set up with the Standard storage class.

We'd now like to introduce lifecycle rules to transition files older than 30 days to the Glacier Deep Archive storage class.
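For reference, the rule we're planning is roughly the following, applied with the AWS CLI (the rule ID is just a placeholder and this is an untested sketch of the idea rather than our final configuration):

aws s3api put-bucket-lifecycle-configuration \
  --bucket bucketrclone \
  --lifecycle-configuration '{
    "Rules": [
      {
        "ID": "archive-prod-after-30d",
        "Filter": { "Prefix": "prod/" },
        "Status": "Enabled",
        "Transitions": [
          { "Days": 30, "StorageClass": "DEEP_ARCHIVE" }
        ]
      }
    ]
  }'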

Will Rclone encounter any problems when the rules start to be applied? Or are there any gotchas we should watch out for?

In case it matters, we also plan to change the command to:

/usr/bin/rclone copy --drive-skip-dangling-shortcuts --max-age=7d gdrive-prod: s3:bucketrclone/prod
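We'll most likely trial that with --dry-run first to see what it would actually transfer, e.g.:

/usr/bin/rclone copy --dry-run --drive-skip-dangling-shortcuts --max-age=7d gdrive-prod: s3:bucketrclone/prod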

Config below.

Thanks in advance

Stuart

[gdrive-post]
type = drive
scope = drive
service_account_file = /home/ubuntu/google.json
team_drive = XXX
client_id = XXX
client_secret = XXX
root_folder_id =

[gdrive-prod]
type = drive
scope = drive
service_account_file = /home/ubuntu/google.json
team_drive = XXX
client_id = XXX
client_secret = XXX
root_folder_id =

[s3]
type = s3
provider = AWS
env_auth = true
region = eu-west-1
location_constraint = eu-west-1
acl = private

welcome to the forum,

that should work, though you should run a test to confirm.

API calls against objects in Deep Archive are more expensive.
depending on your use case, you might test --no-traverse, --no-check-dest and https://rclone.org/s3/#reducing-costs
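for example, something along these lines (untested, adjust to your own setup and schedule):

/usr/bin/rclone copy --no-traverse --no-check-dest --drive-skip-dangling-shortcuts --max-age=7d gdrive-prod: s3:bucketrclone/prod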

Thanks so much for the response - especially the advice about API calls. Appreciate you taking the time.