Copy from local to S3 - Storage Class Change?

I have been using rsync up to now and am currently loving rclone. My question: as I copy from a local source to my S3 bucket (standard storage class), it's possible that a lifecycle rule on the bucket has already archived some of that data to Glacier.

What will happen when I try to copy a local file and rclone checks whether we already have that file in S3, but the object is in Glacier storage? Will it recognise that the file is already there and skip it, or will it try to copy it again anyway?

What is your rclone version (output from rclone version)

1.51.0

Which OS you are using and how many bits (eg Windows 7, 64 bit)

Debian, 64 bit

Which cloud storage system are you using? (eg Google Drive)

AWS S3

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy /media/storage/Events/01/ events:gfevents/Events/01/ -P -v --log-file=/var/log/rclone.log

What should happen is that rclone will read the object's metadata, which is still present even though the file has been archived to Glacier, and decide the file doesn't need copying. (S3 still answers HEAD requests for archived objects, so the size and modification-time comparison works; only retrieving the content requires a restore.)
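
If you want to confirm this before running the full copy, a dry run is a safe check. This is a minimal sketch using the same remote and paths as the command above; the Tier field in the JSON listing is how the S3 backend reports storage class in recent rclone versions:

# Report what would be copied, without actually transferring anything
rclone copy /media/storage/Events/01/ events:gfevents/Events/01/ --dry-run -v

# List objects with their metadata; archived objects should show "Tier":"GLACIER"
rclone lsjson events:gfevents/Events/01/

If the dry run logs the files as unchanged and transfers nothing, the size/modtime check is matching the Glacier objects as expected.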

That was my thinking and what I was hoping to hear. Just wanted to make sure I wasn't going to duplicate everything.
Thanks

