Is "rclone backend restore" from AWS GLACIER broken?

rclone v1.64.0-beta.7132.f1a842081
- os/version: darwin 13.4.1 (64 bit)
- os/kernel: 22.5.0 (x86_64)
- os/type: darwin
- os/arch: amd64
- go/version: go1.20.5
- go/linking: dynamic
- go/tags: cmount
type = s3
provider = AWS
access_key_id = XXX
secret_access_key = XXX
acl = private
rclone lsl  s3:test-bucket-kptsky/test/glacier.test
2023/07/10 15:05:50 NOTICE: S3 bucket test-bucket-kptsky path test: Switched region to "eu-west-3" from "us-east-1"
       15 2023-07-10 13:42:02.909294624 glacier.test
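As a side note, the `lsl` listing doesn't show the storage class. It can be confirmed with the AWS CLI (bucket and key below are the ones from the listing above; this is a sketch, not rclone output):

```shell
# Inspect the object's storage class directly via the S3 API
aws s3api head-object \
  --bucket test-bucket-kptsky \
  --key test/glacier.test
# For an archived object the response should include "StorageClass": "GLACIER"
# (the field is omitted for STANDARD objects)
```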

The file is in the GLACIER storage class:

rclone backend restore s3:test-bucket-kptsky/test/ --include /glacier.test -o priority=Standard -vv
2023/07/10 15:12:44 DEBUG : rclone: Version "v1.64.0-beta.7132.f1a842081" starting with parameters ["rclone" "backend" "restore" "s3:test-bucket-kptsky/test/" "--include" "/glacier.test" "-o" "priority=Standard" "-vv"]
2023/07/10 15:12:44 DEBUG : Using config file from "/Users/kptsky/.config/rclone/rclone.conf"
2023/07/10 15:12:44 DEBUG : name = "s3", root = "test-bucket-kptsky/test/", opt = &s3.Options{Provider:"AWS", EnvAuth:false, AccessKeyID:"XXX", SecretAccessKey:"XXX", Region:"", Endpoint:"", STSEndpoint:"", LocationConstraint:"", ACL:"private", BucketACL:"", RequesterPays:false, ServerSideEncryption:"", SSEKMSKeyID:"", SSECustomerAlgorithm:"", SSECustomerKey:"", SSECustomerKeyBase64:"", SSECustomerKeyMD5:"", StorageClass:"", UploadCutoff:209715200, CopyCutoff:4999341932, ChunkSize:5242880, MaxUploadParts:10000, DisableChecksum:false, SharedCredentialsFile:"", Profile:"", SessionToken:"", UploadConcurrency:4, ForcePathStyle:true, V2Auth:false, UseAccelerateEndpoint:false, LeavePartsOnError:false, ListChunk:1000, ListVersion:0, ListURLEncode:fs.Tristate{Value:false, Valid:false}, NoCheckBucket:false, NoHead:false, NoHeadObject:false, Enc:0x3000002, MemoryPoolFlushTime:60000000000, MemoryPoolUseMmap:false, DisableHTTP2:false, DownloadURL:"", DirectoryMarkers:false, UseMultipartEtag:fs.Tristate{Value:false, Valid:false}, UsePresignedRequest:false, Versions:false, VersionAt:fs.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}, Decompress:false, MightGzip:fs.Tristate{Value:false, Valid:false}, UseAcceptEncodingGzip:fs.Tristate{Value:false, Valid:false}, NoSystemMetadata:false}
2023/07/10 15:12:45 NOTICE: S3 bucket test-bucket-kptsky path test: Switched region to "eu-west-3" from "us-east-1"
2023/07/10 15:12:45 DEBUG : pacer: low level retry 1/2 (error BucketRegionError: incorrect region, the bucket is not in 'us-east-1' region at endpoint '', bucket is in 'eu-west-3' region
	status code: 301, request id: T8SA2Q4S7AWXSBNB, host id: fiZB6BkOqS4ftJgzRpkK/B08esY6YdjjI55zpmey49s6Wi60D4RLdFKnUFlX+qlJReTLE96/XMY=)
2023/07/10 15:12:45 DEBUG : pacer: Rate limited, increasing sleep to 10ms
2023/07/10 15:12:45 DEBUG : pacer: Reducing sleep to 0s
		"Status": "MalformedXML: The XML you provided was not well-formed or did not validate against our published schema\n\tstatus code: 400, request id: T8S3QMRFGQYE56BZ, host id: GK7OdrJ0LgII1fBYeR3T1XeLNqzHh1o0kYwFYAVQKEJKU9STZfpLFafMHqrsWQAvmgA4FR3S/0U=",
		"Remote": "glacier.test"
2023/07/10 15:12:45 DEBUG : 7 go routines active

The behaviour is the same with rclone v1.63.0.

Trying to restore the whole folder or the whole bucket:

rclone backend restore s3:test-bucket-kptsky/test -o priority=Standard
rclone backend restore s3:test-bucket-kptsky -o priority=Standard

produces the same error.

When the file is not in Glacier, it works:

rclone backend restore s3:test-bucket-kptsky/ --include /glacier.test -o priority=Standard
2023/07/10 15:34:18 NOTICE: S3 bucket test-bucket-kptsky: Switched region to "eu-west-3" from "us-east-1"
		"Status": "Not GLACIER or DEEP_ARCHIVE storage class",
		"Remote": "glacier.test"

If you add `-o lifetime`, it should work.

RestoreObject - Amazon Simple Storage Service

Lifetime of the active copy in days. Do not use with restores that specify `OutputLocation`.
The Days element is required for regular restores, and must not be provided for select requests.
Type: Integer
Required: No
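That matches the MalformedXML error above: under the hood, `backend restore` issues an S3 `RestoreObject` request, and without the lifetime option the required `Days` element is missing from the request body, so S3 rejects the XML. A well-formed request body looks roughly like this (the values are illustrative):

```xml
<!-- Sketch of an S3 RestoreObject request body -->
<RestoreRequest xmlns="http://s3.amazonaws.com/doc/2006-03-01/">
  <!-- Required for regular restores; corresponds to rclone's -o lifetime=1 -->
  <Days>1</Days>
  <GlacierJobParameters>
    <!-- Corresponds to rclone's -o priority=Standard -->
    <Tier>Standard</Tier>
  </GlacierJobParameters>
</RestoreRequest>
```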

Thank you! You are right. The documentation is wrong:

Restore objects from GLACIER to normal storage

rclone backend restore remote: [options] [<arguments>+]


All the objects shown will be marked for restore, then

rclone backend restore --include "*.txt" s3:bucket/path -o priority=Standard

Now it works:

rclone backend restore s3:test-bucket-kptsky/test/ --include /glacier.test -o priority=Standard -o lifetime=1

		"Status": "OK",
		"Remote": "glacier.test"
rclone backend restore s3:test-bucket-kptsky/test/ --include /glacier.test -o priority=Standard -o lifetime=1

		"Status": "RestoreAlreadyInProgress: Object restore is already in progress\n\tstatus code: 409, request id: 5GY8RZQFMSK96S8K, host id: i/QZ6MJJ5lD82ihUhzuUgn0dW6994A2bXIFx02XH2sN6UiOA9OtmY6/CwaUr2mTtOb9Nb43wTXM=",
		"Remote": "glacier.test"
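Once the restore has been requested, its progress can be checked via the object's `Restore` field with the AWS CLI (a sketch; bucket and key from above):

```shell
# Poll the restore status of the archived object
aws s3api head-object \
  --bucket test-bucket-kptsky \
  --key test/glacier.test \
  --query Restore
# While in progress the field reads: ongoing-request="true"
# When complete it reads: ongoing-request="false", expiry-date="..."
```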
