I have been using rsync up to now and am currently loving rclone. My question: as I "copy" from a local source to my S3 bucket (Standard storage class), it's possible that a lifecycle rule on the bucket has archived some of that data to Glacier.
What happens when I copy a local file and rclone checks whether it already exists in S3, but the object is in Glacier storage? Will it recognise that the file is already there and skip it, or will it try to copy it anyway?
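One way to see what rclone decides, without actually transferring anything, is a dry run against a path that contains a Glacier-archived object. A sketch using the same remote and paths as my command below (adjust to taste):

```shell
# Dry run: rclone reports what it *would* copy but transfers nothing.
# With -v, each file-level decision is logged ("Unchanged skipping" vs "Copied (new)").
rclone copy --dry-run -v /media/storage/Events/01/ events:gfevents/Events/01/

# Compare size/modtime on both sides without copying; listing metadata is
# still returned for Glacier objects even though their content is archived.
rclone check /media/storage/Events/01/ events:gfevents/Events/01/
```

These need a configured `events:` remote, so they are only a sketch of how I'd test the behaviour, not output I've captured yet.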
What is your rclone version (output from rclone version)?
1.51.0
Which OS you are using and how many bits (eg Windows 7, 64 bit)?
Debian, 64 bit
Which cloud storage system are you using? (eg Google Drive)
AWS S3
The command you were trying to run (eg rclone copy /tmp remote:tmp)?
rclone copy /media/storage/Events/01/ events:gfevents/Events/01/ -P -v --log-file=/var/log/rclone.log