What is the problem you are having with rclone?
I need to overwrite some objects that are 0-byte objects in the destination, left over from RGW sync. In the source, none of them is 0 bytes. These objects have metadata, and I am using the latest metadata branch of rclone thanks to @ncw.
I cannot copy an object from bucket/path to the same bucket/path, but I can copy it to another path in the same bucket, so I think the 0-byte objects or their metadata are somehow blocking rclone.
I have also tried removing the object with 'rados rm' and copying again, but no luck. Something weird is going on... When I run listxattr I see pending attrs in rados. After the rados remove there is no object in the destination, as shown below, but after rclone copy I still get the error. The weird thing is that after the rclone error, if I check the object again, I can see it at the destination with rados listxattr and rados stat, with the attrs in pending state... What is going on here?
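As a next step I was thinking of checking the bucket index directly, since (as far as I understand) 'rados rm' talks to RADOS directly and bypasses RGW, so the bucket index may still hold a stale entry for the key. Something like this (command names from memory, may be off, and the bucket/key values are just the ones from my logs):

```shell
# Assumed bucket and key, taken from the transcript below.
BUCKET=mybucket
KEY=images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f

# Only runs on a node that has radosgw-admin installed.
if command -v radosgw-admin >/dev/null 2>&1; then
    # Show the bucket-index view of the object (including OLH/versioning state):
    radosgw-admin object stat --bucket="$BUCKET" --object="$KEY"
    # Rebuild stale index entries left behind by out-of-band deletes:
    radosgw-admin bucket check --bucket="$BUCKET" --check-objects --fix
fi
```

Would that be the right direction, or is it dangerous on a bucket that is still being synced?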
[root@SV1 rclonerundir]# rados listxattr -p prod.rgw.buckets.data c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
user.rgw.olh.idtag
user.rgw.olh.info
user.rgw.olh.pending.00000000606aeb5674b70fzrl4f2gd3w
user.rgw.olh.pending.00000000606aeb56ob709fm7a2p6el6n
user.rgw.olh.pending.00000000606aeb56q91o6f68auzugtck
user.rgw.olh.pending.00000000606af5acxjez0bb46e2jv53c
user.rgw.olh.pending.00000000606af5f3p69xdfxbsyh01pfp
user.rgw.olh.pending.00000000606af6575onhsriofptsvaiq
user.rgw.olh.pending.00000000606af657xvi7ox4xay1rv032
user.rgw.olh.ver
[root@SV1 rclonerundir]# rados -p prod.rgw.buckets.data stat c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
prod.rgw.buckets.data/c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f mtime 2021-04-05 14:30:43.000000, size 0
[root@SV1 rclonerundir]# rados -p prod.rgw.buckets.data rm c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
[root@SV1 rclonerundir]# rados -p prod.rgw.buckets.data stat c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
error stat-ing prod.rgw.buckets.data/c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f: (2) No such file or directory
[root@SV1 rclonerundir]# rados listxattr -p prod.rgw.buckets.data c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
error getting xattr set prod.rgw.buckets.data/c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f: (2) No such file or directory
[root@SV1 rclonerundir]# rclone ls new:mybucket/images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f --no-traverse
[root@SV1 rclonerundir]# rclone copy old:mybucket/images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f new:mybucket/images/2018/01/29/ --no-traverse --checksum --dump headers --s3-no-check-bucket --no-check-dest --no-update-modtime
2021/04/05 14:48:23 ERROR : ed4ba79c-bb66-4ff6-847a-09a1e0cff47f: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000016167f-00606af907-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 14:48:23 ERROR : Attempt 1/3 failed with 1 errors and: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000016167f-00606af907-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 14:48:23 ERROR : ed4ba79c-bb66-4ff6-847a-09a1e0cff47f: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx000000000000000161681-00606af907-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 14:48:23 Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx000000000000000161682-00606af907-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
[root@SV1 rclonerundir]# rclone ls new:mybucket/images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f --no-traverse
0 ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
[root@SV1 rclonerundir]# rados listxattr -p prod.rgw.buckets.data c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
user.rgw.idtag
user.rgw.olh.idtag
user.rgw.olh.info
user.rgw.olh.pending.00000000606af9076ypupcti987i6341
user.rgw.olh.pending.00000000606af9078pipikch8mc685pf
user.rgw.olh.pending.00000000606af907fii2066pie1zieau
user.rgw.olh.pending.00000000606af907z0eyu6219w0t0jw5
user.rgw.olh.ver
[root@SV1 rclonerundir]# rados -p prod.rgw.buckets.data stat c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f
prod.rgw.buckets.data/c106b26b-5150-4fd6-9504-dee3ca5c0968.121384004.3_images/2018/01/29/ed4ba79c-bb66-4ff6-847a-09a1e0cff47f mtime 2021-04-05 14:48:23.000000, size 0
What should I do? What am I doing wrong?
RadosGW log:
2021-04-05 13:49:58.727 7fb3c955c700 1 ====== req done req=0x55631e7f2710 op status=-2 http_status=404 latency=0.140001s ======
2021-04-05 13:49:58.727 7fb3f85ba700 1 beast: 0x55630b390710: 10.10.10.171 - - [2021-04-05 13:49:58.0.727646s] "PUT /mybucket/images/2020/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b?X-Amz-Algorithm=AWS4-HMAC-SHA256&X-Amz-Credential=0JFZ9C9EW9W7MAJ6YYBM%2F20210405%2Fus-east-1%2Fs3%2Faws4_request&X-Amz-Date=20210405T104958Z&X-Amz-Expires=900&X-Amz-SignedHeaders=content-md5%3Bcontent-type%3Bhost%3Bx-amz-acl%3Bx-amz-meta-feature-count%3Bx-amz-meta-mtime&X-Amz-Signature=63c8f01e80550c3b050ae382fea6c403f3744017afbd8c3fe34dd0cd09822a1f HTTP/1.1" 404 52562 - "rclone/v1.55.0-beta.5247.b7199fe3d.fix-111-metadata" -
What is your rclone version (output from rclone version)
rclone v1.55.0-beta.5247.b7199fe3d.fix-111-metadata
- go version: go1.16
Which OS you are using and how many bits (eg Windows 7, 64 bit)
- os/arch: linux/amd64
Which cloud storage system are you using? (eg Google Drive)
Ceph RadosGW - S3 - Beast
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone copy --files-from "test.list" old-$bucket:$bucket new-$bucket:$bucket --log-file run.log -vv --progress --fast-list --checksum --transfers 10 --checkers 10 --s3-list-chunk 2000 --no-traverse --s3-no-check-bucket --no-check-dest --no-update-modtime
The rclone config contents with secrets removed.
[new]
type = s3
provider = Other
bucket = $bucket
endpoint = http://10.10.x.x
[old]
type = s3
provider = Other
bucket = $bucket
endpoint = http://10.10.x.y
A log from the command with the -vv flag
2021/04/05 13:49:58 DEBUG : rclone: Version "v1.55.0-beta.5247.b7199fe3d.fix-111-metadata" starting with parameters ["rclone" "copy" "--files-from" "rtest.list" "old:mybucket" "new:mybucket" "--log-file" "mybucket-run.log" "-vv" "--progress" "--fast-list" "--checksum" "--transfers" "10" "--checkers" "10" "--s3-list-chunk" "2000" "--no-traverse" "--s3-no-check-bucket" "--no-check-dest" "--no-update-modtime"]
2021/04/05 13:49:58 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2021/04/05 13:49:58 DEBUG : Creating backend with remote "old:mybucket"
2021/04/05 13:49:58 DEBUG : Creating backend with remote "new:mybucket"
2021/04/05 13:49:58 DEBUG : S3 bucket mybucket: Waiting for checks to finish
2021/04/05 13:49:58 DEBUG : S3 bucket mybucket: Waiting for transfers to finish
2021/04/05 13:49:58 DEBUG : images/2019/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b: src = &s3.Object{fs:(*s3.Fs)(0xc00026d200), remote:"images/2019/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b", md5:"398a33a5c10b4fe64d52811ad1ea0f93", bytes:52352, lastModified:time.Time{wall:0x0, ext:63727902290, loc:(*time.Location)(nil)}, meta:map[string]*string{"Feature-Count":(*string)(0xc0010a8050)}, mimeType:"application/octet-stream", storageClass:""}
2021/04/05 13:49:58 DEBUG : images/2019/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b: Reading metadata from images/2019/08/14/6e0b4f92-ef24-4de0-9604-f153a254da8b
2021/04/05 13:49:58 DEBUG : images/2019/02/25/b2eb009d-c191-4ed9-ac00-6a1692cf71d0: src = &s3.Object{fs:(*s3.Fs)(0xc00026d200), remote:"images/2019/02/25/b2eb009d-c191-4ed9-ac00-6a1692cf71d0", md5:"4fd2cff5d87bcdd1f994c941452e530e", bytes:90624, lastModified:time.Time{wall:0x0, ext:63727900865, loc:(*time.Location)(nil)}, meta:map[string]*string{"Feature-Count":(*string)(0xc0010a80f0)}, mimeType:"application/octet-stream", storageClass:""}
2021/04/05 13:49:58 DEBUG : images/2019/02/25/b2eb009d-c191-4ed9-ac00-6a1692cf71d0: Reading metadata from images/2019/02/25/b2eb009d-c191-4ed9-ac00-6a1692cf71d0
2021/04/05 13:49:58 DEBUG : images/2015/01/09/2eefad1f-646b-4a2a-92a6-10018e397e90: src = &s3.Object{fs:(*s3.Fs)(0xc00026d200), remote:"images/2015/01/09/2eefad1f-646b-4a2a-92a6-10018e397e90", md5:"22930f48264c2312b4bc68ad7feb28f1", bytes:134272, lastModified:time.Time{wall:0x0, ext:63727858566, loc:(*time.Location)(nil)}, meta:map[string]*string{"Feature-Count":(*string)(0xc0007ee370)}, mimeType:"application/octet-stream", storageClass:""}
.
.
.
.
2021/04/05 13:49:58 ERROR : images/2006/01/25/ef7b1753-7c08-44bd-84d1-4a864b344960: Failed to copy: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000013fc63-00606aeb56-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 13:49:58 ERROR : Attempt 3/3 failed with 10 errors and: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000013fc63-00606aeb56-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>
2021/04/05 13:49:58 INFO :
Transferred: 2.556M / 2.556 MBytes, 100%, 4.734 MBytes/s, ETA 0s
Errors: 10 (retrying may help)
Elapsed time: 0.5s
2021/04/05 13:49:58 DEBUG : 42 go routines active
2021/04/05 13:49:58 Failed to copy with 10 errors: last error was: s3 upload: 404 Not Found: <?xml version="1.0" encoding="UTF-8"?><Error><Code>NoSuchKey</Code><BucketName>mybucket</BucketName><RequestId>tx00000000000000013fc63-00606aeb56-21d7d2f4-prod</RequestId><HostId>21d7d2f4-prod</HostId></Error>