When syncing from one S3 bucket to another, if rclone detects that the source object's timestamp has changed while the file size and ETag match between source and destination, it attempts to update the metadata that stores the original modtime. For S3 standard tiers this update succeeds (it results in a server-side copy), but for objects in the Glacier or Glacier Deep Archive tier on the target side the modtime update fails, since you can't do a server-side copy of an object that is in a Glacier tier.
The error message is:
Failed to set modification time: InvalidObjectState: Operation is not valid for the source object's storage class
I believe the behavior should be that if the object is in Glacier or Glacier Deep Archive, rclone should re-upload the object with the new modtime rather than attempting the server-side copy/modtime update.
I am using rclone 1.47 and have tested with the latest beta, rclone-v1.47.0-074-g81fad0f0-beta-windows-amd64. Tested on Windows and Linux.
Rclone command line:
rclone sync ecs:bucket1 aws:bucket1 --bwlimit 95M --transfers 64 --s3-upload-concurrency 10 --checkers 500 --s3-chunk-size 100M --log-level INFO
Thanks for the reply ncw.
I have no knowledge of Go (or development in general) and am struggling to figure out how to fix this.
Would the SetModTime function need to return nil in order to cause the calling function to copy rather than update the mod time? Can we detect whether the object is in Glacier with something like if o.fs.opt.StorageClass == "GLACier", and does that account for the Glacier Deep Archive tier?
You need to return the specific error fs.ErrorCantSetModTime; then the higher layers will do a copy. If you return nil, you are saying that the operation was successful, and the higher layers will do nothing.
I guess that will be enough since opt.StorageClass indicates the desired storage class.