Rclone copy file to GCS fails with BucketNameUnavailable but copy dir works

What is the problem you are having with rclone?

BucketNameUnavailable when copying a single file to GCS, but not when copying a full directory.

Run the command 'rclone version' and share the full output of the command.

rclone v1.66.0

  • os/version: ubuntu 20.04 (64 bit)
  • os/kernel: 5.4.0-162-generic (x86_64)
  • os/type: linux
  • os/arch: amd64
  • go/version: go1.22.1
  • go/linking: static
  • go/tags: none

Which cloud storage system are you using? (eg Google Drive)

GCS. This does not happen with MinIO.

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy file0 s3test:mybucket123/dir0/webtest/testdir  -vv

Please run 'rclone config redacted' and share the full output. If you get command not found, please make sure to update rclone.

[s3test]
type = s3
provider = GCS
endpoint = https://storage.googleapis.com
region = us-east1
### Double check the config for sensitive info before posting publicly

A log from the command that you were trying to run with the -vv flag

2024/03/27 19:22:14 DEBUG : rclone: Version "v1.66.0" starting with parameters ["rclone" "copy" "file0" "s3test:mybucket123/dir0/webtest" "-vv"]
2024/03/27 19:22:14 DEBUG : Creating backend with remote "file0"
2024/03/27 19:22:14 DEBUG : Using config file from "/home/ingres/.config/rclone/rclone.conf"
2024/03/27 19:22:14 DEBUG : fs cache: adding new entry for parent of "file0", "/tmp/web"
2024/03/27 19:22:14 DEBUG : Creating backend with remote "s3test:mybucket123/dir0/webtest"
2024/03/27 19:22:14 DEBUG : Setting access_key_id="#REDACTED#" for "s3test" from environment variable RCLONE_CONFIG_S3TEST_ACCESS_KEY_ID
2024/03/27 19:22:14 DEBUG : Setting secret_access_key="#REDACTED#" for "s3test" from environment variable RCLONE_CONFIG_S3TEST_SECRET_ACCESS_KEY
2024/03/27 19:22:14 DEBUG : s3test: detected overridden config - adding "{_JUcP}" suffix to name
2024/03/27 19:22:14 DEBUG : Setting access_key_id="#REDACTED#" for "s3test" from environment variable RCLONE_CONFIG_S3TEST_ACCESS_KEY_ID
2024/03/27 19:22:14 DEBUG : Setting secret_access_key="#REDACTED#" for "s3test" from environment variable RCLONE_CONFIG_S3TEST_SECRET_ACCESS_KEY
2024/03/27 19:22:14 DEBUG : Resolving service "s3" region "us-east1"
2024/03/27 19:22:14 DEBUG : fs cache: renaming cache item "s3test:mybucket123/dir0/webtest" to be canonical "s3test{_JUcP}:mybucket123/dir0/webtest"
2024/03/27 19:22:14 DEBUG : file0: Need to transfer - File not found at Destination
2024/03/27 19:22:14 ERROR : file0: Failed to copy: failed to prepare upload: BucketNameUnavailable: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: , host id: 
2024/03/27 19:22:14 ERROR : Can't retry any of the errors - not attempting retries
2024/03/27 19:22:14 INFO  : 
Transferred:              0 B / 0 B, -, 0 B/s, ETA -
Errors:                 1 (no need to retry)
Elapsed time:         0.6s

2024/03/27 19:22:14 DEBUG : 6 go routines active
2024/03/27 19:22:14 Failed to copy: failed to prepare upload: BucketNameUnavailable: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: , host id: 

Further notes:
What I don't understand is that I don't get an error if I copy a directory.
If I have this script named tc5:

rclone version
rclone delete         s3test:mybucket123/dir0/webtest           
rclone copy   test    s3test:mybucket123/dir0/webtest/testdir/
rclone copy   test10  s3test:mybucket123/dir0/webtest           
rclone copy   file0   s3test:mybucket123/dir0/webtest           
rclone ls             s3test:mybucket123/dir0/webtest 

and:
/tmp/web$ ls test/*
test/test1 test/test2
/tmp/web$ ls test10/*
test10/test10 test10/test11

and I run 'sh -x tc5', I get:

+ rclone version
rclone v1.66.0
- os/version: ubuntu 20.04 (64 bit)
- os/kernel: 5.4.0-162-generic (x86_64)
- os/type: linux
- os/arch: amd64
- go/version: go1.22.1
- go/linking: static
- go/tags: none
+ rclone delete s3test:mybucket123/dir0/webtest
+ rclone copy test s3test:mybucket123/dir0/webtest/testdir/
+ rclone copy test10 s3test:mybucket123/dir0/webtest
+ rclone copy file0 s3test:mybucket123/dir0/webtest
2024/03/27 19:21:24 ERROR : file0: Failed to copy: failed to prepare upload: BucketNameUnavailable: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: , host id: 
2024/03/27 19:21:24 ERROR : Can't retry any of the errors - not attempting retries
2024/03/27 19:21:24 Failed to copy: failed to prepare upload: BucketNameUnavailable: The requested bucket name is not available. The bucket namespace is shared by all users of the system. Please select a different name and try again.
        status code: 409, request id: , host id: 
+ rclone ls s3test:mybucket123/dir0/webtest
        7 test10
        7 test11
        6 testdir/test1
        6 testdir/test2


Further notes:
It seems to run okay if I remove the 'provider = GCS' line.
But I don't think I'm supposed to do that, and we added the provider line to resolve another issue.
Edit: found it. We added the provider line to resolve these errors:
Failed to copy: failed to open source object: SignatureDoesNotMatch: Access denied.
        status code: 403, request id: , host id:

This means that you chose a bucket which is owned by someone else.

Can you double check your commands? And if you can't get it to work, post the actual (unredacted) commands and we'll see if we can spot any mistakes.

That makes no sense to me.
I own the bucket, and I can write to it if I'm copying a directory.
It's only a single file that fails.

If I didn't own the bucket, then every command would be failing with BucketNameUnavailable, wouldn't it?
I'll also note that the commands run as expected if I use version 1.59.2 of rclone.

The only thing I redacted was the secret_access_key and the access_key_id.
The tc5 script I posted has the actual commands I used.
Try cutting and pasting that into a linux terminal window.

This works when I try it using GCS with the S3 interface. My config does not have region in it though - can you try removing that line?

The region line is required if the bucket has a region.
If you don't enter it, everything fails with status code 400/'Invalid Argument'.

I actually downloaded the source code and am trying to debug this thing myself.
I know roughly what is wrong, but I don't know what the proper fix for it is.

Note I am now using a new bucket 'ak-spudf' in my tests. It still has the same symptoms.

=================
The rough stack of failure is:
0 0x0000000001e05056 in github.com/rclone/rclone/lib/bucket.(*Cache).Create
at ./lib/bucket/bucket.go:94
1 0x0000000003148085 in github.com/rclone/rclone/backend/s3.(*Fs).makeBucket
at ./backend/s3/s3.go:4346
2 0x0000000003147777 in github.com/rclone/rclone/backend/s3.(*Fs).Mkdir
at ./backend/s3/s3.go:4317
3 0x0000000003147d6c in github.com/rclone/rclone/backend/s3.(*Fs).mkdirParent
at ./backend/s3/s3.go:4337
4 0x00000000031618a8 in github.com/rclone/rclone/backend/s3.(*Object).prepareUpload
at ./backend/s3/s3.go:6142
5 0x00000000031657cf in github.com/rclone/rclone/backend/s3.(*Object).Update
at ./backend/s3/s3.go:6329
6 0x00000000031466d9 in github.com/rclone/rclone/backend/s3.(*Fs).Put
at ./backend/s3/s3.go:4234
7 0x0000000001f61328 in github.com/rclone/rclone/fs/operations.(*copy).updateOrPut
at ./fs/operations/copy.go:209
8 0x0000000001f621f5 in github.com/rclone/rclone/fs/operations.(*copy).manualCopy
at ./fs/operations/copy.go:262
9 0x0000000001f6308a in github.com/rclone/rclone/fs/operations.(*copy).copy
at ./fs/operations/copy.go:302
10 0x0000000001f64c8d in github.com/rclone/rclone/fs/operations.Copy
at ./fs/operations/copy.go:404
11 0x0000000001f951dd in github.com/rclone/rclone/fs/operations.moveOrCopyFile
at ./fs/operations/operations.go:1992
12 0x0000000001f64f1c in github.com/rclone/rclone/fs/operations.CopyFile
at ./fs/operations/copy.go:409
13 0x000000000381527f in github.com/rclone/rclone/cmd/copy.init.func1.1
at ./cmd/copy/copy.go:106
14 0x00000000037b65b2 in github.com/rclone/rclone/cmd.Run
at ./cmd/cmd.go:255
15 0x0000000003815112 in github.com/rclone/rclone/cmd/copy.init.func1
at ./cmd/copy/copy.go:102
16 0x0000000003795d08 in github.com/spf13/cobra.(*Command).execute
at /home/ingres/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:987
17 0x0000000003796d90 in github.com/spf13/cobra.(*Command).ExecuteC
at /home/ingres/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:1115
18 0x0000000003796232 in github.com/spf13/cobra.(*Command).Execute
at /home/ingres/go/pkg/mod/github.com/spf13/cobra@v1.8.0/command.go:1039
19 0x00000000037b9477 in github.com/rclone/rclone/cmd.Main
at ./cmd/cmd.go:563
20 0x0000000003b8fd6f in main.main
at ./rclone.go:14
21 0x00000000011a5d52 in runtime.main
at /usr/lib/go-1.22/src/runtime/proc.go:271
22 0x00000000011dffc1 in runtime.goexit
at /usr/lib/go-1.22/src/runtime/asm_amd64.s:1695

=======

Here's what I roughly see happening.

As a reminder the command line is: rclone copy file1 s3ckp:ak-spudf/wf/ --config s3.conf -vv

prepareUpload hits this line:
// Create parent dir/bucket if not saving directory marker
6141: if !strings.HasSuffix(o.remote, "/") {
=>6142: err := o.fs.mkdirParent(ctx, o.remote)

at this point in time, o.remote is just "file1" so it runs mkdirParent.
f.Mkdir then gets called with a parameter "dir" which is an empty string.

e := f.makeBucket(ctx, bucket) gets called with a parameter 'bucket' which is 'ak-spudf' (which is correct).
This then gets passed to f.cache.Create.
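
Side note on why that dir ends up empty: I'm assuming the parent is derived with something like Go's path.Dir (that's my guess, not a claim about the exact rclone code); for a top-level object the parent collapses to the bucket root. A tiny standalone illustration:

package main

import (
	"fmt"
	"path"
)

func main() {
	// A top-level object has "." as its parent, i.e. the bucket root,
	// so the s3 backend's Mkdir effectively receives dir == "".
	fmt.Println(path.Dir("file1"))    // "."
	// An object under a prefix has that prefix as its parent.
	fmt.Println(path.Dir("wf/file1")) // "wf"
}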

Now, I'm pretty sure that what SHOULD be happening is that in f.cache.Create it should hit this:

     // If bucket already exists then it is OK
     if created, ok := c.status[bucket]; ok && created {
             return nil
     }

and return nil, because the bucket already exists.
However, at this point in time, there is no entry for the bucket in c.status:

(dlv) print bucket
"ak-spudf"
(dlv) print c.status[bucket]
Command failed: key not found
(dlv)

And then it continues to try to create the bucket here:
128: // Create the bucket
129: c.mu.Unlock()
=> 130: err = create()

And that's when you get the BucketNameUnavailable errors.
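
That lookup miss is just the comma-ok idiom doing its job - a minimal standalone illustration (nothing rclone-specific):

package main

import "fmt"

func main() {
	status := map[string]bool{}
	// Nothing has marked the bucket yet, so the lookup misses:
	// created == false, ok == false, and the "bucket already exists"
	// early return never fires.
	created, ok := status["ak-spudf"]
	fmt.Println(created, ok) // false false
}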

=============================

Now, if you recall, if I copy a directory, like this:
rclone copy testdir s3ckp:ak-spudf/wf/ --config s3.conf -vv
it works.

It then goes through the same stack/lines as the single-file example above.
However, prior to that, it goes through THIS stack first:

(dlv) stack
0 0x0000000001e04ee7 in github.com/rclone/rclone/lib/bucket.(*Cache).MarkOK
at ./lib/bucket/bucket.go:64
1 0x0000000003144aaf in github.com/rclone/rclone/backend/s3.(*Fs).listDir
at ./backend/s3/s3.go:4117
2 0x000000000314583c in github.com/rclone/rclone/backend/s3.(*Fs).List
at ./backend/s3/s3.go:4158
3 0x0000000001dfcfc2 in github.com/rclone/rclone/fs/list.DirSorted
at ./fs/list/list.go:24
4 0x0000000001f4d827 in github.com/rclone/rclone/fs/march.(*March).makeListDir.func1
at ./fs/march/march.go:90
5 0x0000000001f52802 in github.com/rclone/rclone/fs/march.(*March).processJob.func2
at ./fs/march/march.go:406
6 0x00000000011dffc1 in runtime.goexit
at /usr/lib/go-1.22/src/runtime/asm_amd64.s:1695

And that's where it sets
4116: // bucket must be present if listing succeeded
=>4117: f.cache.MarkOK(bucket)
4118: return entries, nil
4119: }
4120:
(dlv) print bucket
"ak-spudf"
(dlv)

and that's when it marks c.status[bucket] = true.

=========================

My gut feeling is that makeListDir should also be called for single files,
if it's going to be used to mark whether or not the bucket exists.

Either that, or makeBucket needs to have better logic to see whether or not the bucket exists before blindly trying to (re) create it.
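
To make the two paths (and the second idea above) concrete, here is a small standalone Go model of the bucket-status cache as I understand it. This is NOT rclone's actual code - just a sketch under my assumptions about the behaviour, with made-up names:

package main

import (
	"errors"
	"fmt"
)

// bucketCache is a toy stand-in for lib/bucket.Cache: all it does is
// remember which buckets are already known to exist.
type bucketCache struct {
	status map[string]bool
}

func newBucketCache() *bucketCache {
	return &bucketCache{status: map[string]bool{}}
}

// markOK is what the listing path does: if a listing succeeded, the bucket
// must exist, so remember that.
func (c *bucketCache) markOK(bucket string) {
	c.status[bucket] = true
}

// create is the upload-preparation path. If the bucket isn't known to exist
// it optionally probes for it first (the "better logic" idea above);
// otherwise it calls createFn - which is where GCS answers with the 409.
func (c *bucketCache) create(bucket string, exists func() bool, createFn func() error) error {
	if c.status[bucket] {
		return nil // already known to exist, nothing to do
	}
	if exists != nil && exists() {
		c.status[bucket] = true // probe says it is there, so don't try to create it
		return nil
	}
	return createFn()
}

func main() {
	gcsCreate := func() error {
		// Stand-in for the real CreateBucket call that GCS rejects in this thread.
		return errors.New("BucketNameUnavailable: status code 409")
	}
	headBucket := func() bool { return true } // pretend a HEAD-style probe succeeds

	// Directory copy: List runs first and marks the bucket OK, so the later
	// create is skipped and the upload proceeds.
	c := newBucketCache()
	c.markOK("ak-spudf")
	fmt.Println("dir copy:            ", c.create("ak-spudf", nil, gcsCreate))

	// Single-file copy: nothing has listed the bucket, the cache misses and
	// the create call goes out - reproducing the error above.
	c = newBucketCache()
	fmt.Println("single file:         ", c.create("ak-spudf", nil, gcsCreate))

	// Single-file copy with an existence probe first - one possible shape of a fix.
	c = newBucketCache()
	fmt.Println("single file + probe: ", c.create("ak-spudf", headBucket, gcsCreate))
}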

Try --s3-no-check-bucket

I think the error message from GCS is misleading here.

Yeah, it works fine with --s3-no-check-bucket
Thanks!

This has to be a bug though, doesn't it?

I'd call it a mis-feature probably.

Rclone creates buckets if they don't exist on first use. That is the mis-feature - it shouldn't do that.

In order to do that it needs to know whether the bucket exists. When you copy a directory it lists the bucket, and at that point it knows the bucket exists.

However when you copy a single file, it doesn't list the bucket so it tries to create it just before upload. I guess you have restricted your access to just one bucket so the create fails? When I try this GCS returns the BucketAlreadyOwnedByYou error which rclone ignores silently.

I see.

I didn't create the bucket, one of our sysadmins created it for our group and gave me write access to it.

I guess we'll just leave it at that - I'll use the new flag.
Thanks again for the help.
