What is the problem you are having with rclone?
I get BucketNameUnavailable when copying a single file to GCS, but not when copying a full directory.
Run the command 'rclone version' and share the full output of the command.
rclone v1.66.0
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-162-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.1
- go/linking: static
- go/tags: none
Which cloud storage system are you using? (eg Google Drive)
GCS. Does not happen with minio.
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone copy file0 s3test:mybucket123/dir0/webtest -vv
Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.
[s3test]
type = s3
provider = GCS
endpoint = https://storage.googleapis.com
region = us-east1
### Double check the config for sensitive info before posting publicly
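The log below shows the access keys coming from environment variables rather than the config file; they were set along these lines (the values here are placeholders, not the real keys):

```shell
# rclone picks up per-remote options from RCLONE_CONFIG_<REMOTE>_<OPTION>
# environment variables; placeholder values shown:
export RCLONE_CONFIG_S3TEST_ACCESS_KEY_ID="PLACEHOLDER_ACCESS_KEY"
export RCLONE_CONFIG_S3TEST_SECRET_ACCESS_KEY="PLACEHOLDER_SECRET_KEY"
```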
A log from the command that you were trying to run with the -vv flag
2024/03/27 19:22:14 DEBUG : rclone: Version "v1.66.0" starting with parameters ["rclone" "copy" "file0" "s3test:mybucket123/dir0/webtest" "-vv"]
2024/03/27 19:22:14 DEBUG : Creating backend with remote "file0"
2024/03/27 19:22:14 DEBUG : Using config file from "/home/ingres/.config/rclone/rclone.conf"
2024/03/27 19:22:14 DEBUG : fs cache: adding new entry for parent of "file0", "/tmp/web"
2024/03/27 19:22:14 DEBUG : Creating backend with remote "s3test:mybucket123/dir0/webtest"
2024/03/27 19:22:14 DEBUG : Setting access_key_id="#REDACTED#" for "s3test" from environment variable RCLONE_CONFIG_S3TEST_ACCESS_KEY_ID
2024/03/27 19:22:14 DEBUG : Setting secret_access_key="#REDACTED#" for "s3test" from environment variable RCLONE_CONFIG_S3TEST_SECRET_ACCESS_KEY
2024/03/27 19:22:14 DEBUG : s3test: detected overridden config - adding "{_JUcP}" suffix to name
2024/03/27 19:22:14 DEBUG : Setting access_key_id="#REDACTED#" for "s3test" from environment variable RCLONE_CONFIG_S3TEST_ACCESS_KEY_ID
2024/03/27 19:22:14 DEBUG : Setting secret_access_key="#REDACTED#" for "s3test" from environment variable RCLONE_CONFIG_S3TEST_SECRET_ACCESS_KEY
2024/03/27 19:22:14 DEBUG : Resolving service "s3" region "us-east1"
2024/03/27 19:22:14 DEBUG : fs cache: renaming cache item "s3test:mybucket123/dir0/webtest" to be canonical "s3test{_JUcP}:mybucket123/dir0/webtest"
2024/03/27 19:22:14 DEBUG : file0: Need to transfer - File not found at Destination
2024/03/27 19:22:14 ERROR : file0: Failed to copy: failed to prepare upload: BucketNameUnavailable: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
status code: 409, request id: , host id:
2024/03/27 19:22:14 ERROR : Can't retry any of the errors - not attempting retries
2024/03/27 19:22:14 INFO :
Transferred: 0 B / 0 B, -, 0 B/s, ETA -
Errors: 1 (no need to retry)
Elapsed time: 0.6s
2024/03/27 19:22:14 DEBUG : 6 go routines active
2024/03/27 19:22:14 Failed to copy: failed to prepare upload: BucketNameUnavailable: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
status code: 409, request id: , host id:
Further notes:
What I don't understand is why I don't get an error when I copy a directory.
If I have this script named tc5:
rclone version
rclone delete s3test:mybucket123/dir0/webtest
rclone copy test s3test:mybucket123/dir0/webtest/testdir/
rclone copy test10 s3test:mybucket123/dir0/webtest
rclone copy file0 s3test:mybucket123/dir0/webtest
rclone ls s3test:mybucket123/dir0/webtest
and:
/tmp/web$ ls test/*
test/test1 test/test2
/tmp/web$ ls test10/*
test10/test10 test10/test11
and I run it with 'sh -x tc5',
I get:
+ rclone version
rclone v1.66.0
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-162-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.1
- go/linking: static
- go/tags: none
+ rclone delete s3test:mybucket123/dir0/webtest
+ rclone copy test s3test:mybucket123/dir0/webtest/testdir/
+ rclone copy test10 s3test:mybucket123/dir0/webtest
+ rclone copy file0 s3test:mybucket123/dir0/webtest
2024/03/27 19:21:24 ERROR : file0: Failed to copy: failed to prepare upload: BucketNameUnavailable: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
status code: 409, request id: , host id:
2024/03/27 19:21:24 ERROR : Can't retry any of the errors - not attempting retries
2024/03/27 19:21:24 Failed to copy: failed to prepare upload: BucketNameUnavailable: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
status code: 409, request id: , host id:
+ rclone ls s3test:mybucket123/dir0/webtest
7 test10
7 test11
6 testdir/test1
6 testdir/test2
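For anyone reproducing this, the source layout in /tmp/web can be recreated with something like the following (file contents are arbitrary; the sizes are chosen to match the 'rclone ls' output above, 6 and 7 bytes):

```shell
# Recreate the test files used by the tc5 script.
mkdir -p /tmp/web/test /tmp/web/test10
printf 'test1\n'  > /tmp/web/test/test1    # 6 bytes
printf 'test2\n'  > /tmp/web/test/test2    # 6 bytes
printf 'test10\n' > /tmp/web/test10/test10 # 7 bytes
printf 'test11\n' > /tmp/web/test10/test11 # 7 bytes
printf 'file0\n'  > /tmp/web/file0         # single file that triggers the error
```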
Further notes:
It seems to run okay if I remove the 'provider = GCS' line.
But I don't think I'm supposed to do that, and we added the provider line to resolve another issue.
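If the 409 is rclone attempting to create the already-existing bucket before a single-file upload, one workaround worth trying instead of dropping the provider line (a suggestion only, untested against this setup) is rclone's no_check_bucket option, which skips the bucket-exists/create step:

```
[s3test]
type = s3
provider = GCS
endpoint = https://storage.googleapis.com
region = us-east1
no_check_bucket = true
```

The same option can also be set per-invocation with the --s3-no-check-bucket flag.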
Edit: found it; the provider line was added to resolve these errors: "Failed to copy: failed to open source object: SignatureDoesNotMatch: Access denied. status code: 403, request id: , host id:".