S3 upload: 404 Not Found <Error><Code>NoSuchKey

What is the problem you are having with rclone?

I need to overwrite some objects, and in the destination they are 0-byte objects left over from an RGW sync. In the source, none of them is 0 bytes. These objects have metadata, and I am using the latest metadata build of rclone thanks to @ncw.
I cannot copy an object from bucket/path onto the same bucket/path, but I can copy it to another path in the same bucket, so I think the 0-byte objects or their metadata are somehow blocking rclone.

I have also tried removing the object with 'rados rm' and copying again, but no luck. Something weird is going on: when I run listxattr I see pending attrs in rados. After the rados remove there is no object in the destination, as shown below, but after the rclone copy I still get the error. The strange part is that after the rclone error, if I check the object again I can see it at the destination with rados listxattr and rados stat, with the attrs in a pending state. What is going on here?

[root@SV1 rclonerundir]# rados listxattr -p prod.rgw.buckets.data c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
user.rgw.olh.idtag
user.rgw.olh.info
user.rgw.olh.pending.00000000606aeb5674b70fzrl4f2gd3w
user.rgw.olh.pending.00000000606aeb56ob709fm7a2p6el6n
user.rgw.olh.pending.00000000606aeb56q91o6f68auzugtck
user.rgw.olh.pending.00000000606af5acxjez0bb46e2jv53c
user.rgw.olh.pending.00000000606af5f3p69xdfxbsyh01pfp
user.rgw.olh.pending.00000000606af6575onhsriofptsvaiq
user.rgw.olh.pending.00000000606af657xvi7ox4xay1rv032
user.rgw.olh.ver

[root@SV1 rclonerundir]# rados -p prod.rgw.buckets.data stat c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
prod.rgw.buckets.data/c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f mtime 2021-04-05 14:30:43.000000, size 0

[root@SV1 rclonerundir]# rados -p prod.rgw.buckets.data rm c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f

[root@SV1 rclonerundir]# rados -p prod.rgw.buckets.data stat c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
error stat-ing prod.rgw.buckets.data/c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f: (2) No such file or directory

[root@SV1 rclonerundir]# rados listxattr -p prod.rgw.buckets.data c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
error getting xattr set prod.rgw.buckets.data/c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f: (2) No such file or directory

[root@SV1 rclonerundir]# rclone ls new:mybucket/images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f --no-traverse

[root@SV1 rclonerundir]# rclone copy old:mybucket/images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f new:mybucket/images/2018/01/29/ --no-traverse --checksum --dump headers --s3-no-check-bucket --no-check-dest --no-update-modtime
2021/04/05 14:48:23 ERROR : ed4ba79c-bb66-4ff6-847a-09a1e0cff47f: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000016167f-00606af907-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 14:48:23 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000016167f-00606af907-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 14:48:23 ERROR : ed4ba79c-bb66-4ff6-847a-09a1e0cff47f: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx000000000000000161681-00606af907-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 14:48:23 Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx000000000000000161682-00606af907-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>

[root@SV1 rclonerundir]# rclone ls new:mybucket/images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f --no-traverse
0 ed4ba79c-bb66-4ff6-847a-09a1e0cff47f

[root@SV1 rclonerundir]# rados listxattr -p prod.rgw.buckets.data c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
user.rgw.idtag
user.rgw.olh.idtag
user.rgw.olh.info
user.rgw.olh.pending.00000000606af9076ypupcti987i6341
user.rgw.olh.pending.00000000606af9078pipikch8mc685pf
user.rgw.olh.pending.00000000606af907fii2066pie1zieau
user.rgw.olh.pending.00000000606af907z0eyu6219w0t0jw5
user.rgw.olh.ver

[root@SV1 rclonerundir]# rados -p prod.rgw.buckets.data stat c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
prod.rgw.buckets.data/c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f mtime 2021-04-05 14:48:23.000000, size 0
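As a side note on those pending xattrs: the first 16 hex digits of each user.rgw.olh.pending.* suffix appear to decode as an epoch timestamp matching the failed PUTs (this is an observation from the values above, not documented RGW behaviour):

```python
# Decode the hex timestamp prefix of an RGW OLH pending xattr name.
# Assumption (inferred from the values in this thread): the 16 hex chars
# after the last "." are zero-padded epoch seconds.
from datetime import datetime, timezone

def decode_pending(name):
    suffix = name.rsplit(".", 1)[1]          # "00000000606af907..." + random chars
    secs = int(suffix[:16], 16)              # hex -> epoch seconds
    return datetime.fromtimestamp(secs, tz=timezone.utc)

# One of the attrs that appeared after the failed copy:
dt = decode_pending("user.rgw.olh.pending.00000000606af9076ypupcti987i6341")
print(dt)  # 11:48:23 UTC, i.e. 14:48:23 +03 -- the time of the failed rclone PUT
```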

What should I do? What am I doing wrong?

RadosGW log:

2021-04-05 13:49:58.727 7fb3c955c700 1 ====== req done req=0x55631e7f2710 op status=-2 http_status=404 latency=0.140001s ======
2021-04-05 13:49:58.727 7fb3f85ba700 1 beast: 0x55630b390710: 10.10.10.171 - - [2021-04-05 13:49:58.0.727646s] "PUT /mybucket/images/2020/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=0JFZ9C9EW9W7MAJ6YYBM%2F20210405%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210405T104958Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-feature-count%3Bx-amz-meta-mtime&X-Amz-Signature=63c8f01e80550c3b050ae382fea6c403f3744017afbd8c3fe34dd0cd09822a1f HTTP/1.1" 404 52562 - "rclone/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata" -

What is your rclone version (output from rclone version)

rclone v1.55.0-beta.5247.b7199fe3d.fix-111-metadata

  • go version: go1.16

Which OS you are using and how many bits (eg Windows 7, 64 bit)

  • os/arch: linux/amd64

Which cloud storage system are you using? (eg Google Drive)

Ceph RadosGW - S3 - Beast

The command you were trying to run (eg rclone copy /tmp remote:tmp)

rclone copy --files-from "test.list" old-$bucket:$bucket new-$bucket:$bucket --log-file run.log -vv --progress --fast-list --checksum --transfers 10 --checkers 10 --s3-list-chunk 2000 --no-traverse --s3-no-check-bucket --no-check-dest --no-update-modtime

The rclone config contents with secrets removed.

[new]
type = s3
provider = Other
bucket = $bucket
endpoint = http://10.10.x.x

[old]
type = s3
provider = Other
bucket = $bucket
endpoint = http://10.10.x.y

A log from the command with the -vv flag

2021/04/05 13:49:58 DEBUG : rclone: Version "v1.55.0-beta.5247.b7199fe3d.fix-111-metadata" starting with parameters ["rclone" "copy" "--files-from" "rtest.list" "old:mybucket" "new:mybucket" "--log-file" "mybucket-run.log" "-vv" "--progress" "--fast-list" "--checksum" "--transfers" "10" "--checkers" "10" "--s3-list-chunk" "2000" "--no-traverse" "--s3-no-check-bucket" "--no-check-dest" "--no-update-modtime"]
2021/04/05 13:49:58 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2021/04/05 13:49:58 DEBUG : Creating backend with remote "old:mybucket"
2021/04/05 13:49:58 DEBUG : Creating backend with remote "new:mybucket"
2021/04/05 13:49:58 DEBUG : S3 bucket mybucket: Waiting for checks to finish
2021/04/05 13:49:58 DEBUG : S3 bucket mybucket: Waiting for transfers to finish
2021/04/05 13:49:58 DEBUG : images/2019/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b: src = &s3.Object{fs:(*s3.Fs)(0xc00026d200), remote:"images/2019/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b", md5:"398a33a5c10b4fe64d52811ad1ea0f93", bytes:52352, lastModified:time.Time{wall:0x0, ext:63727902290, loc:(*time.Location)(nil)}, meta:map[string]*string{"Feature-Count":(*string)(0xc0010a8050)}, mimeType:"application/octet-stream", storageClass:""}
2021/04/05 13:49:58 DEBUG : images/2019/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b: Reading metadata from images/2019/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b
2021/04/05 13:49:58 DEBUG : images/2019/02/25/b2eb009d-c191-4ed9-ac00-6a1692cf71d0: src = &s3.Object{fs:(*s3.Fs)(0xc00026d200), remote:"images/2019/02/25/b2eb009d-c191-4ed9-ac00-6a1692cf71d0", md5:"4fd2cff5d87bcdd1f994c941452e530e", bytes:90624, lastModified:time.Time{wall:0x0, ext:63727900865, loc:(*time.Location)(nil)}, meta:map[string]*string{"Feature-Count":(*string)(0xc0010a80f0)}, mimeType:"application/octet-stream", storageClass:""}
2021/04/05 13:49:58 DEBUG : images/2019/02/25/b2eb009d-c191-4ed9-ac00-6a1692cf71d0: Reading metadata from images/2019/02/25/b2eb009d-c191-4ed9-ac00-6a1692cf71d0
2021/04/05 13:49:58 DEBUG : images/2015/01/09/2eefad1f-646b-4a2a-92a6-10018e397e90: src = &s3.Object{fs:(*s3.Fs)(0xc00026d200), remote:"images/2015/01/09/2eefad1f-646b-4a2a-92a6-10018e397e90", md5:"22930f48264c2312b4bc68ad7feb28f1", bytes:134272, lastModified:time.Time{wall:0x0, ext:63727858566, loc:(*time.Location)(nil)}, meta:map[string]*string{"Feature-Count":(*string)(0xc0007ee370)}, mimeType:"application/octet-stream", storageClass:""}
.
.
.
.
2021/04/05 13:49:58 ERROR : images/2006/01/25/ef7b1753-7c08-44bd-84d1-4a864b344960: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000013fc63-00606aeb56-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 13:49:58 ERROR : Attempt 3/3 failed with 10 errors and: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000013fc63-00606aeb56-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 13:49:58 INFO  :
Transferred:   	    2.556M / 2.556 MBytes, 100%, 4.734 MBytes/s, ETA 0s
Errors:                10 (retrying may help)
Elapsed time:         0.5s

2021/04/05 13:49:58 DEBUG : 42 go routines active
2021/04/05 13:49:58 Failed to copy with 10 errors: last error was: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000013fc63-00606aeb56-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>

It looks to me like the rados bucket has got corrupted somehow and has lost track of the name-to-data mapping for those objects.

I'm not a Ceph expert so I can't tell you how to fix that.

I expect that if you abandoned that bucket and moved everything you could out of it, the new bucket would be fine.

Thanks for the answer ncw!

I don't think the bucket is corrupted. I can copy unique files without a problem; I copied 50K files yesterday. There must be an issue, though. I suspect there is a mismatch between the rados index and the rados data, and that these objects are somehow protected by rados so they cannot be overwritten.
There must be an easier way, because the bucket has 30M objects and moving them is not an option right now, although I do want to split them up in the future due to the high object count.

I use --no-check-dest, but maybe rclone thinks these objects are folders due to the 0-byte problem?
How can I get more information about what is blocking rclone?

Can you first find a really small example which shows the problem?

Next, run with rclone -vv --dump bodies and post the results. We can have a look at the HTTP transactions to see if we can work out what is going on.

Do these 0-byte objects end with /?

Or are there objects under these objects, so that they are both files and directories? That is, if you had an object called XXX, are there also objects called XXX/YYY?

Either of those would confuse rclone into thinking they were directories instead of files.
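If it helps, here is a minimal sketch in plain Python (the helper name and sample keys are mine, not from rclone) of checking a flat key listing for those two cases:

```python
# Given a flat list of S3 keys, flag the two cases that can make a client
# treat a key as a directory rather than a file:
#   1. keys ending in "/"
#   2. a key XXX that also has sibling keys under XXX/ ("file and directory")
# Hypothetical helper, for inspecting listings only.

def find_confusing_keys(keys):
    trailing_slash = [k for k in keys if k.endswith("/")]
    # every parent prefix that some key sits under
    prefixes = {k.rsplit("/", 1)[0] for k in keys if "/" in k}
    file_and_dir = sorted(
        k for k in set(keys) if not k.endswith("/") and k in prefixes
    )
    return trailing_slash, file_and_dir

keys = [
    "images/2018/08/01/18b529e5",        # plain file...
    "images/2018/08/01/18b529e5/extra",  # ...which this key also makes a "directory"
    "images/2018/08/02/",                # zero-byte key ending in /
]
slashes, dual = find_confusing_keys(keys)
print(slashes)  # ['images/2018/08/02/']
print(dual)     # ['images/2018/08/01/18b529e5']
```

You could feed this the output of `rclone lsf -R` on the affected prefix to see whether either case applies.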

The body was too long, so I removed most of it.

2021/04/05 20:09:19 DEBUG : rclone: Version "v1.55.0-beta.5247.b7199fe3d.fix-111-metadata" starting with parameters ["rclone" "copy" "old:mybucket/images/2018/08/01/18b529e5-df5f-48fb-ac4a-367a38868ab0" "new:mybucket/images/2018/08/01/" "--no-traverse" "--checksum" "--dump" "headers" "--s3-no-check-bucket" "--no-check-dest" "--no-update-modtime" "-vv" "--dump" "bodies" "--retries" "1" "--log-file" "mybucket-run.log"]


2021/04/05 20:09:19 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2021/04/05 20:09:19 DEBUG : Creating backend with remote "old:mybucket/images/2018/08/01/18b529e5-df5f-48fb-ac4a-367a38868ab0"
2021/04/05 20:09:19 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : HTTP REQUEST (req 0xc0006b3b00)
2021/04/05 20:09:19 DEBUG : HEAD /mybucket/images/2018/08/01/18b529e5-df5f-48fb-ac4a-367a38868ab0 HTTP/1.1
Host: 10.x.x.x:8091
User-Agent: rclone/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20210405T170919Z

2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : HTTP RESPONSE (req 0xc0006b3b00)
2021/04/05 20:09:19 DEBUG : HTTP/1.1 200 OK
Content-Length: 53888
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/octet-stream
Date: Mon, 05 Apr 2021 17:09:19 GMT
Etag: "f3de9605396ad093d8e37df60efadf05"
Last-Modified: Tue, 16 Jun 2020 10:13:30 GMT
X-Amz-Meta-Feature-Count: 421
X-Amz-Request-Id: tx0000000000000000e588f-00606b443f-2122af4c-ank
X-Rgw-Object-Type: Normal

2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : fs cache: adding new entry for parent of "old:mybucket/images/2018/08/01/18b529e5-df5f-48fb-ac4a-367a38868ab0", "old:mybucket/images/2018/08/01"
2021/04/05 20:09:19 DEBUG : Creating backend with remote "new:mybucket/images/2018/08/01/"
2021/04/05 20:09:19 DEBUG : You have specified to dump information. Please be noted that the Accept-Encoding as shown may not be correct in the request and the response may not show Content-Encoding if the go standard libraries auto gzip encoding was in effect. In this case the body of the request will be gunzipped before showing it.
2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : HTTP REQUEST (req 0xc000411900)
2021/04/05 20:09:19 DEBUG : HEAD /mybucket/images/2018/08/01 HTTP/1.1
Host: 10.151.106.171:8091
User-Agent: rclone/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20210405T170919Z

2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : HTTP RESPONSE (req 0xc000411900)
2021/04/05 20:09:19 DEBUG : HTTP/1.1 404 Not Found
Content-Length: 210
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/xml
Date: Mon, 05 Apr 2021 17:09:19 GMT
X-Amz-Request-Id: tx000000000000000205260-00606b443f-21d7d2f4-prod

2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : fs cache: renaming cache item "new:mybucket/images/2018/08/01/" to be canonical "new:mybucket/images/2018/08/01"
2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : HTTP REQUEST (req 0xc000411d00)
2021/04/05 20:09:19 DEBUG : HEAD /mybucket/images/2018/08/01/18b529e5-df5f-48fb-ac4a-367a38868ab0 HTTP/1.1
Host: 10.x.x.x:8091
User-Agent: rclone/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20210405T170919Z

2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : HTTP RESPONSE (req 0xc000411d00)
2021/04/05 20:09:19 DEBUG : HTTP/1.1 200 OK
Content-Length: 53888
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/octet-stream
Date: Mon, 05 Apr 2021 17:09:19 GMT
Etag: "f3de9605396ad093d8e37df60efadf05"
Last-Modified: Tue, 16 Jun 2020 10:13:30 GMT
X-Amz-Meta-Feature-Count: 421
X-Amz-Request-Id: tx0000000000000000e5890-00606b443f-2122af4c-ank
X-Rgw-Object-Type: Normal

2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : 18b529e5-df5f-48fb-ac4a-367a38868ab0: Need to transfer - File not found at Destination
2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : HTTP REQUEST (req 0xc000a51700)
2021/04/05 20:09:19 DEBUG : GET /mybucket/images/2018/08/01/18b529e5-df5f-48fb-ac4a-367a38868ab0 HTTP/1.1
Host: 10.x.x.x:8091
User-Agent: rclone/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata
Authorization: XXXX
X-Amz-Content-Sha256: e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855
X-Amz-Date: 20210405T170919Z
Accept-Encoding: gzip

2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : HTTP RESPONSE (req 0xc000a51700)
2021/04/05 20:09:19 DEBUG : HTTP/1.1 200 OK
Content-Length: 53888
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/octet-stream
Date: Mon, 05 Apr 2021 17:09:19 GMT
Etag: "f3de9605396ad093d8e37df60efadf05"
Last-Modified: Tue, 16 Jun 2020 10:13:30 GMT
X-Amz-Meta-Feature-Count: 421
X-Amz-Request-Id: tx0000000000000000e5891-00606b443f-2122af4c-ank
X-Rgw-Object-Type: Normal

[... binary response body snipped ...]
2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : 18b529e5-df5f-48fb-ac4a-367a38868ab0: src = &s3.Object{fs:(*s3.Fs)(0xc0001a9d40), remote:"18b529e5-df5f-48fb-ac4a-367a38868ab0", md5:"f3de9605396ad093d8e37df60efadf05", bytes:53888, lastModified:time.Time{wall:0x0, ext:63727899210, loc:(*time.Location)(nil)}, meta:map[string]*string{"Feature-Count":(*string)(0xc0008a4190)}, mimeType:"application/octet-stream", storageClass:""}
2021/04/05 20:09:19 DEBUG : 18b529e5-df5f-48fb-ac4a-367a38868ab0: Reading metadata from 18b529e5-df5f-48fb-ac4a-367a38868ab0
2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : HTTP REQUEST (req 0xc000a51c00)
2021/04/05 20:09:19 DEBUG : PUT /mybucket/images/2018/08/01/18b529e5-df5f-48fb-ac4a-367a38868ab0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=0JFZ9C9EW9W7MAJ6YYBM%2F20210405%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210405T170919Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-feature-count%3Bx-amz-meta-mtime&X-Amz-Signature=71bf71364e3989c61e81232fb8bbd27cbb65ed25988c6c41302ee0b5634a85e6 HTTP/1.1
Host: 10.151.106.171:8091
User-Agent: rclone/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata
Content-Length: 53888
content-md5: 896WBTlq0JPY4332DvrfBQ==
content-type: application/octet-stream
x-amz-acl: private
x-amz-meta-feature-count: 421
x-amz-meta-mtime: 1592302410
Accept-Encoding: gzip

[... binary request body snipped ...]
2021/04/05 20:09:19 DEBUG : >>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>
2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 DEBUG : HTTP RESPONSE (req 0xc000a51c00)
2021/04/05 20:09:19 DEBUG : HTTP/1.1 404 Not Found
Content-Length: 210
Accept-Ranges: bytes
Connection: Keep-Alive
Content-Type: application/xml
Date: Mon, 05 Apr 2021 17:09:19 GMT
X-Amz-Request-Id: tx000000000000000205261-00606b443f-21d7d2f4-prod

<?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx000000000000000205261-00606b443f-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 20:09:19 DEBUG : <<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<<
2021/04/05 20:09:19 ERROR : 18b529e5-df5f-48fb-ac4a-367a38868ab0: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx000000000000000205261-00606b443f-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 20:09:19 ERROR : Attempt 1/1 failed with 1 errors and: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx000000000000000205261-00606b443f-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 20:09:19 INFO  :
Transferred:   	   52.625k / 52.625 kBytes, 100%, 463.842 kBytes/s, ETA 0s
Errors:                 1 (retrying may help)
Elapsed time:         0.1s

2021/04/05 20:09:19 DEBUG : 6 go routines active
2021/04/05 20:09:19 Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx000000000000000205261-00606b443f-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>

And this is the RadosGW log:

2021-04-05 20:05:48.896 7fb46bea1700 1 beast: 0x55630dd7c710: 10.x.x.x - - [2021-04-05 20:05:48.0.896457s] "PUT /mybucket/images/2018/08/01/18b529e5-df5f-48fb-ac4a-367a38868ab0?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=0JFZ9C9EW9W7MAJ6YYBM%2F20210405%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210405T170548Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-feature-count%3Bx-amz-meta-mtime&X-Amz-Signature=9db90cbf9fbb63c70f6fd3fc598c5444460891319019a5a50f69a8e79f78e4ae HTTP/1.1" 404 54098 - "rclone/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata" -

I've reproduced the same problem with my test bucket. The root cause is versioning combined with manual rados operations. Rclone works great as always. :slight_smile:

BTW: Is there any flag to delete an object together with all of its versions?

Ah ha!

Rclone should just ignore versions in the S3 protocol - at least that is what it does with AWS. Maybe this is an oddity in RGW.

Not at the moment, no. Though b2 has an equivalent command and I should really build it into the s3 backend.
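In the meantime, a workaround outside rclone is to walk the version listing yourself and batch-delete every version and delete marker. A minimal sketch against a boto3-style `list_object_versions` response (the helper name and the sample response are mine):

```python
# Build the batch for delete_objects from a list_object_versions-shaped
# response, covering both real versions and delete markers for one key.
# (Hypothetical helper; the response below is made up for illustration.)

def all_version_deletes(resp, key):
    entries = resp.get("Versions", []) + resp.get("DeleteMarkers", [])
    return [
        {"Key": e["Key"], "VersionId": e["VersionId"]}
        for e in entries
        if e["Key"] == key
    ]

resp = {
    "Versions": [
        {"Key": "images/a", "VersionId": "v1"},
        {"Key": "images/a", "VersionId": "v2"},
        {"Key": "images/b", "VersionId": "v9"},
    ],
    "DeleteMarkers": [
        {"Key": "images/a", "VersionId": "dm1"},
    ],
}
batch = all_version_deletes(resp, "images/a")
print(batch)  # three entries: v1, v2 and the delete marker dm1
# With boto3 you would then call:
#   s3.delete_objects(Bucket="mybucket", Delete={"Objects": batch})
```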