I set up a new bucket in iDrive e2, in the London region. I cannot even list its contents with rclone; it just hangs at 'resolving s3 region'.
I have another bucket in Frankfurt, which works fine.
Run the command 'rclone version' and share the full output of the command.
A log from the command that you were trying to run with the -vv flag
2024/02/25 22:06:25 DEBUG : rclone: Version "v1.65.1" starting with parameters ["rclone" "lsd" "idriveuk:" "-vv"]
2024/02/25 22:06:25 DEBUG : Creating backend with remote "idriveuk:"
2024/02/25 22:06:25 DEBUG : Using config file from "/home/dinosm/.config/rclone/rclone.conf"
2024/02/25 22:06:25 DEBUG : Resolving service "s3" region "eu-west-1"
It just hangs there and doesn't go any further. I have tried other S3 regions, as well as the region codes given by iDrive e2 on their own website (e.g. gb-ldn); none of them works.
I have recreated the bucket, recreated the credential, no luck.
If I don't put a region in the config, it just hangs like so:
2024/02/25 22:19:10 DEBUG : rclone: Version "v1.65.1" starting with parameters ["rclone" "lsd" "idriveuk:" "-vv"]
2024/02/25 22:19:10 DEBUG : Creating backend with remote "idriveuk:"
2024/02/25 22:19:10 DEBUG : Using config file from "/home/dinosm/.config/rclone/rclone.conf"
2024/02/25 22:19:10 DEBUG : Resolving service "s3" region "us-east-1"
This led me to believe rclone is automatically selecting the wrong region, so I set a region explicitly, but it didn't make a difference.
I put disable-http2 in the config because I am having some issues streaming, and I read that HTTP/2 may be a factor (both here while debugging a OneDrive issue, and elsewhere).
EDIT: It hangs the same way even without disable-http2
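For reference, this is roughly how that option can be set in the remote's config section (a sketch with redacted values; per the rclone S3 backend docs, the config key is `disable_http2` and the equivalent global flag is `--disable-http2`):

```ini
[idriveuk]
type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint = XXX
disable_http2 = true
```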
Log with headers:
2024/02/25 22:23:51 DEBUG : rclone: Version "v1.65.1" starting with parameters ["rclone" "lsd" "idriveuk:" "-vv" "--dump=headers"]
2024/02/25 22:23:51 DEBUG : Creating backend with remote "idriveuk:"
2024/02/25 22:23:51 DEBUG : Using config file from "/home/dinosm/.config/rclone/rclone.conf"
2024/02/25 22:23:51 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2024/02/25 22:23:51 DEBUG : Resolving service "s3" region "us-east-1"
2024/02/25 22:23:51 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2024/02/25 22:23:51 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2024/02/25 22:23:51 DEBUG : HTTP REQUEST (req 0xc000aeeb00)
2024/02/25 22:23:51 DEBUG : GET / HTTP/1.1
Host: XXX
User-Agent: rclone/v1.65.1
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20240225T222351Z
Accept-Encoding: gzip
2024/02/25 22:23:51 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
I've manually checked the credentials letter by letter, and the same for the endpoint; they are all correct. I am stumped.
EDIT: I deleted the London bucket and the London region (the credential then got auto-deleted too), then recreated them all from scratch. It gave me a different endpoint. Still the same situation.
I noticed that rclone config fetches the endpoint automatically, which must mean it can reach iDrive with my credentials well enough to get the endpoint. So might this be a temporary iDrive issue?
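At this point, one generic sanity check worth running (my own sketch, not rclone-specific; the hostname below is a placeholder, substitute the endpoint from your rclone.conf) is to test the endpoint outside of rclone:

```shell
#!/bin/sh
# Placeholder -- put the endpoint from your rclone.conf here.
ENDPOINT="your-endpoint.idrivee2.example"

# 1. Does the hostname resolve? A mistyped endpoint usually fails here,
#    though wildcard DNS can make even a wrong hostname resolve.
if getent hosts "$ENDPOINT" >/dev/null 2>&1; then
  echo "$ENDPOINT resolves"
else
  echo "$ENDPOINT does not resolve"
fi

# 2. Can we connect within a few seconds? A host that resolves but never
#    answers would hang indefinitely here too, without the timeout.
if curl --connect-timeout 5 -sS -o /dev/null "https://$ENDPOINT/" 2>/dev/null; then
  echo "connected to $ENDPOINT"
else
  echo "could not connect to $ENDPOINT"
fi
```

If the hostname resolves but the connection attempt times out, the symptom would match the indefinite hang seen in the logs above.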
I have multiple buckets in multiple regions - no issue.
I have tried to replicate the problem and this is what I find:
(1) My original setup - all is working OK:
$ rclone config redacted iDrive:
[iDrive]
type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint = p7v1.ldn.idrivee2-40.com
$ rclone lsf iDrive: -vv
2024/02/27 13:52:09 DEBUG : rclone: Version "v1.65.2" starting with parameters ["rclone" "lsf" "iDrive:" "-vv"]
2024/02/27 13:52:09 DEBUG : Creating backend with remote "iDrive:"
2024/02/27 13:52:09 DEBUG : Using config file from "/Users/kptsky/.config/rclone/rclone.conf"
2024/02/27 13:52:09 DEBUG : Resolving service "s3" region "us-east-1"
test-crypt/
test-dal/
test-dal-lock/
test-lock/
test-uk/
2024/02/27 13:52:10 DEBUG : 7 go routines active
(2) Wrong but valid endpoint (I used one from @asdffdsa's test):
$ rclone config redacted iDrive:
[iDrive]
type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint = p1e1.ldn.idrivee2-24.com
$ rclone lsf iDrive: -vv
2024/02/27 13:48:40 DEBUG : rclone: Version "v1.65.2" starting with parameters ["rclone" "lsf" "iDrive:" "-vv"]
2024/02/27 13:48:40 DEBUG : Creating backend with remote "iDrive:"
2024/02/27 13:48:40 DEBUG : Using config file from "/Users/kptsky/.config/rclone/rclone.conf"
2024/02/27 13:48:40 DEBUG : Resolving service "s3" region "us-east-1"
2024/02/27 13:48:40 ERROR : : error listing: InvalidAccessKeyId: The Access Key Id you provided does not exist in our records.
status code: 403, request id: 17B7BC63CE1C0BBF, host id:
2024/02/27 13:48:40 DEBUG : 7 go routines active
2024/02/27 13:48:40 Failed to lsf with 2 errors: last error was: error in ListJSON: InvalidAccessKeyId: The Access Key Id you provided does not exist in our records.
status code: 403, request id: 17B7BC63CE1C0BBF, host id:
(3) Wrong and invalid endpoint:
$ rclone config redacted iDrive:
[iDrive]
type = s3
provider = IDrive
access_key_id = XXX
secret_access_key = XXX
endpoint = blabla-p7v1.ldn.idrivee2-40.com
$ rclone lsf iDrive: -vv
2024/02/27 13:51:05 DEBUG : rclone: Version "v1.65.2" starting with parameters ["rclone" "lsf" "iDrive:" "-vv"]
2024/02/27 13:51:05 DEBUG : Creating backend with remote "iDrive:"
2024/02/27 13:51:05 DEBUG : Using config file from "/Users/kptsky/.config/rclone/rclone.conf"
2024/02/27 13:51:05 DEBUG : Resolving service "s3" region "us-east-1"
# it hangs here forever
This leads me to believe that, since test (3) behaves exactly like the problem described in the OP, the most likely cause is a typo in the endpoint.
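One way to probe that theory (a sketch, reusing the hostnames from the three tests above): compare how the endpoints behave at the DNS level. If the mistyped host still resolves (e.g. via wildcard DNS) but nothing answers on it, you get exactly this kind of indefinite hang instead of a quick error:

```shell
#!/bin/sh
# Hostnames from the three tests above.
for host in \
    p7v1.ldn.idrivee2-40.com \
    p1e1.ldn.idrivee2-24.com \
    blabla-p7v1.ldn.idrivee2-40.com
do
  if getent hosts "$host" >/dev/null 2>&1; then
    echo "$host: resolves"
  else
    echo "$host: does not resolve"
  fi
done
```

rclone also has global `--contimeout` and `--timeout` flags, which should turn an indefinite hang on a dead endpoint into an error after the given duration, making this kind of problem easier to spot.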