Error copying/syncing from Google Cloud Storage to IDrive

I'm getting these errors trying to copy or sync between Google Cloud Storage and IDrive

command:
rclone copy gcp:prd-test e2:prd-test --fast-list -v -P --checkers 32

error:
2023-07-31 13:56:23 ERROR : files/e8e7ba5edef00c1ce048e3bcd09c55f9: Failed to copy: failed to open source object: SignatureDoesNotMatch: Access denied.
status code: 403, request id: , host id:
2023-07-31 13:56:23 ERROR : Attempt 3/3 failed with 16 errors and: failed to open source object: SignatureDoesNotMatch: Access denied.
status code: 403, request id: , host id:

The delete option works

rclone version:

[root@id958152130 ~]# rclone --version
rclone v1.63.0-beta.7074.b26db8e64

  • os/version: centos 8 (64 bit)
  • os/kernel: 4.18.0-358.el8.x86_64 (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.20.5
  • go/linking: static
  • go/tags: none

hi,

  • as per the help and support template, post the redacted config file and a full debug log.
  • to keep the log small, copy a single file (see the example command after this list).
  • unless there is a specific reason, it is best to test with the latest stable rclone, not a beta.
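
For example, a minimal single-file test could look something like this (the file name here is just a placeholder, not an actual object from the bucket):

rclone copy gcp:prd-test/somefile.txt e2:prd-test -vv --log-file=rclone.log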

Hi,

I tried with the latest stable version first. Searching the internet, I saw a topic suggesting to use the latest version.

Here is the debug log file:
rclone-log.txt (2.0 KB)

please, copy+paste all the requested information into a new post.

also, to keep the log short, add --retries=1

Now I think I did it correctly:

rclone copy -v -P --verbose gcp:prd-test e2:prd-test --retries=1 --log-file=rclone.log
rclone-log.txt (4.5 KB)

please, this is the third time asking: post all the requested information.

[gcp]
type = s3
provider = Other
env_auth = false
access_key_id = xxxxxxxxxxxxxxxx
secret_access_key = xxxxxxxxxxxxxxxx
endpoint = https://storage.googleapis.com
region = us-east1

[e2]
type = s3
provider = IDrive
env_auth = false
access_key_id = zzzzzzzzzzzzzz
secret_access_key = zzzzzzzzzzzzzzzzzzzzzzzzzz
endpoint = t6w4.mi.idrivee2-42.com

2023/07/31 15:25:19 DEBUG : rclone: Version "v1.63.0-beta.7074.b26db8e64" starting with parameters ["rclone" "copy" "-v" "-P" "--verbose" "gcp:prd-test" "e2:prd-easydoc-test" "--retries=1" "--log-file=rclone.log"]
2023/07/31 15:25:19 DEBUG : Creating backend with remote "gcp:prd-test"
2023/07/31 15:25:19 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2023/07/31 15:25:19 DEBUG : name = "gcp", root = "prd-easydoc-test/folders", opt = &s3.Options{Provider:"Other", EnvAuth:false, AccessKeyID:"xxxxxxxxxxxxxxxxxxxxxxxxxx", SecretAccessKey:"xxxxxxxxxxxxxxxxxxx", Region:"us-east1", Endpoint:"https://storage.googleapis.com", STSEndpoint:"", LocationConstraint:"", ACL:"", BucketACL:"", RequesterPays:false, ServerSideEncryption:"", SSEKMSKeyID:"", SSECustomerAlgorithm:"", SSECustomerKey:"", SSECustomerKeyBase64:"", SSECustomerKeyMD5:"", StorageClass:"", UploadCutoff:209715200, CopyCutoff:4999341932, ChunkSize:5242880, MaxUploadParts:10000, DisableChecksum:false, SharedCredentialsFile:"", Profile:"", SessionToken:"", UploadConcurrency:4, ForcePathStyle:true, V2Auth:false, UseAccelerateEndpoint:false, LeavePartsOnError:false, ListChunk:1000, ListVersion:0, ListURLEncode:fs.Tristate{Value:false, Valid:false}, NoCheckBucket:false, NoHead:false, NoHeadObject:false, Enc:0x3000002, MemoryPoolFlushTime:60000000000, MemoryPoolUseMmap:false, DisableHTTP2:false, DownloadURL:"", DirectoryMarkers:false, UseMultipartEtag:fs.Tristate{Value:false, Valid:false}, UsePresignedRequest:false, Versions:false, VersionAt:fs.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}, Decompress:false, MightGzip:fs.Tristate{Value:false, Valid:false}, UseAcceptEncodingGzip:fs.Tristate{Value:false, Valid:false}, NoSystemMetadata:false}
2023/07/31 15:25:19 DEBUG : Resolving service "s3" region "us-east1"
2023/07/31 15:25:19 DEBUG : Creating backend with remote "e2:prd-easydoc-test"
2023/07/31 15:25:19 DEBUG : name = "e2", root = "prd-easydoc-test", opt = &s3.Options{Provider:"IDrive", EnvAuth:false, AccessKeyID:"zzzzzzz", SecretAccessKey:"zzzzzzzzzzzzz", Region:"", Endpoint:"t6w4.mi.idrivee2-42.com", STSEndpoint:"", LocationConstraint:"", ACL:"", BucketACL:"", RequesterPays:false, ServerSideEncryption:"", SSEKMSKeyID:"", SSECustomerAlgorithm:"", SSECustomerKey:"", SSECustomerKeyBase64:"", SSECustomerKeyMD5:"", StorageClass:"", UploadCutoff:209715200, CopyCutoff:4999341932, ChunkSize:5242880, MaxUploadParts:10000, DisableChecksum:false, SharedCredentialsFile:"", Profile:"", SessionToken:"", UploadConcurrency:4, ForcePathStyle:true, V2Auth:false, UseAccelerateEndpoint:false, LeavePartsOnError:false, ListChunk:1000, ListVersion:0, ListURLEncode:fs.Tristate{Value:false, Valid:false}, NoCheckBucket:false, NoHead:false, NoHeadObject:false, Enc:0x3000002, MemoryPoolFlushTime:60000000000, MemoryPoolUseMmap:false, DisableHTTP2:false, DownloadURL:"", DirectoryMarkers:false, UseMultipartEtag:fs.Tristate{Value:false, Valid:false}, UsePresignedRequest:false, Versions:false, VersionAt:fs.Time{wall:0x0, ext:0, loc:(*time.Location)(nil)}, Decompress:false, MightGzip:fs.Tristate{Value:false, Valid:false}, UseAcceptEncodingGzip:fs.Tristate{Value:false, Valid:false}, NoSystemMetadata:false}
2023/07/31 15:25:19 DEBUG : Resolving service "s3" region "us-east-1"
2023/07/31 15:25:19 DEBUG : 3101d05982701c4df8a28ccb148713e5: Need to transfer - File not found at Destination
2023/07/31 15:25:19 DEBUG : 402e2991435b7c9f2cf764e8b907dd34: Need to transfer - File not found at Destination
2023/07/31 15:25:19 DEBUG : S3 bucket prd-easydoc-test: Waiting for checks to finish
2023/07/31 15:25:19 DEBUG : S3 bucket prd-easydoc-test: Waiting for transfers to finish
2023/07/31 15:25:19 ERROR : 3101d05982701c4df8a28ccb148713e5: Failed to copy: failed to open source object: SignatureDoesNotMatch: Access denied.
status code: 403, request id: , host id:
2023/07/31 15:25:19 ERROR : 402e2991435b7c9f2cf764e8b907dd34: Failed to copy: failed to open source object: SignatureDoesNotMatch: Access denied.
status code: 403, request id: , host id:
2023/07/31 15:25:19 ERROR : Attempt 1/1 failed with 2 errors and: failed to open source object: SignatureDoesNotMatch: Access denied.
status code: 403, request id: , host id:
2023/07/31 15:25:19 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Errors: 2 (retrying may help)
Elapsed time: 0.2s

2023/07/31 15:25:19 DEBUG : 8 go routines active
2023/07/31 15:25:19 Failed to copy with 2 errors: last error was: failed to open source object: SignatureDoesNotMatch: Access denied.
status code: 403, request id: , host id:

Any reason you access GCS over the S3 API instead of using Google Cloud Storage directly?

Is it possible to configure Google Cloud Storage with only the access key and secret key credentials?

I do not think so - you need a different type of credentials.
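
For reference, rclone's native Google Cloud Storage backend normally authenticates with a service account (or OAuth) rather than HMAC access/secret keys. A minimal sketch of such a remote, assuming a service account JSON key file (the remote name, project number, and path below are placeholders):

[gcs-native]
type = google cloud storage
project_number = 123456789
service_account_file = /root/gcs-service-account.json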

S3 should work in theory, but it is always better to go "native" :). Check on the GCS side whether your credentials have the s3:GetObject permission, as your error looks like you only have s3:ListBucket.
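
One way to check that from the rclone side might be to read a single object back, which exercises GetObject rather than just ListBucket (the object name below is a placeholder):

rclone cat gcp:prd-test/somefile.txt --head 1 -vv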

The strangest thing is that it works with S3 Browser

But since I have to migrate 112 million files and 21 TB, doing it that way would never end.

perhaps,
https://rclone.org/s3/#s3-use-accept-encoding-gzip
"some providers such as Google Cloud Storage may alter the HTTP headers, breaking the signature of the request."

This is why I thought about suggesting the native interface, as it is supported by rclone directly. S3 is a great API, but individual operators' peculiarities can be frustrating to tweak properly. Getting it working at all is one thing; then there is the performance story.

@Rodrigo_Buch - 112 million objects is a substantial number :) What is the maximum you expect in a single folder? I am asking because, based on others' experience, transferring this many objects requires a bit more than a simple rclone copy src dst.

There are several buckets, but some folders can have up to 5 million files. I already migrated from AWS S3 and MinIO and everything was fine; only Google has this problem.

ok, 5 million is manageable. The max number of objects per folder is more critical than the overall total; 100 million in one bucket could be a challenge.
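
Once the 403 is sorted out, one way to keep a migration of that size manageable could be to split the job by top-level folder instead of running one huge copy, roughly like this (a sketch that assumes folder names without spaces):

for dir in $(rclone lsf --dirs-only gcp:prd-test); do
    rclone copy "gcp:prd-test/$dir" "e2:prd-test/$dir" -v --retries=1 --log-file="rclone-${dir%/}.log"
done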

try also to change the provider to GCS - as per the docs:

[gs]
type = s3
provider = GCS
access_key_id = your_access_key
secret_access_key = your_secret_key
endpoint = https://storage.googleapis.com
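
If you would rather keep the existing remote name gcp so the same commands keep working, I believe the provider can also be changed in place, e.g.:

rclone config update gcp provider GCS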

[root@id958152130 rclone]# rclone lsd -v -P gcp:prd-easydoc-test
2023/07/31 18:50:55 Failed to create file system for "gcp:prd-easydoc-test": didn't find section in config file

run:

rclone config show

and post the output here - remove secrets.

[gcp]
type = s3
provider = GCS
access_key_id =
secret_access_key =
endpoint = https://storage.googleapis.com

[bra]
type = s3
provider = Other
env_auth = false
access_key_id =
secret_access_key =
region = BR
endpoint = https://bigfile02.brascloud.com.br
location_constraint =
acl = private
server_side_encryption =
storage_class =

[e2]
type = s3
provider = IDrive
env_auth = false
access_key_id =
secret_access_key =
endpoint = t6w4.mi.idrivee2-41.com

This topic was automatically closed 30 days after the last reply. New replies are no longer allowed.