What is the problem you are having with rclone?
I'm a beginner with rclone, and for several days I've been looking for a way to copy all of the Ceph S3 buckets into a file or under a directory on a standard file system such as XFS or ext4.
I wonder if this is really feasible with rclone?
To test it, I have just one bucket in the Ceph S3 space, and I wanted to copy it into /tmp/backups3, for example.
So I created a new remote with rclone config, as shown in the "rclone config contents" paragraph below,
and tested commands such as:
rclone -vv copy cephs3: /tmp/backups3
but this one reports no errors, only: INFO : There was nothing to transfer
I also tried this one:
rclone -vv copy cephs3:"registry" /tmp/backups3
as "registry" is, for now (for my rclone tests), my only bucket in the Ceph S3 space:
radosgw-admin bucket list
[
"registry"
]
but I got: 2021/10/18 09:01:49 ERROR : S3 bucket registry: error reading source root directory: AccessDenied:
Finally, I tried a syntax meant to fetch all buckets at once:
rclone -vv copy cephs3:"*" /tmp/backups3
but I got : ERROR : S3 bucket *: error reading source root directory: InvalidBucketName
So my questions are:
1/ Is it possible with rclone to save all Ceph S3 buckets in one file or under a directory on a standard file system?
2/ If it is possible, what am I misunderstanding about the rclone command syntax?
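For reference, here is my current understanding of the syntaxes rclone supports, based on reading the docs (the `"*"` glob form was my own guess and apparently not valid). A sketch only, assuming the remote and its credentials are correctly configured:

```shell
# Sanity check: list the top-level directories (buckets) the remote can see.
rclone lsd cephs3:

# Copy everything under the remote root, i.e. all buckets; each bucket
# should become a subdirectory of /tmp/backups3.
rclone -vv copy cephs3: /tmp/backups3

# Copy a single, named bucket into its own local subdirectory.
rclone -vv copy cephs3:registry /tmp/backups3/registry

# rclone does not expand shell-style globs in the remote path; as I
# understand it, selection is done with filter flags instead, e.g.:
rclone -vv copy cephs3: /tmp/backups3 --include "registry/**"
```

If that understanding is right, my first command (copy from the bare root) should already have copied all buckets, so the "There was nothing to transfer" result is what puzzles me.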
Thanks a lot.
Regards
What is your rclone version (output from rclone version)
v1.56.2
Which cloud storage system are you using? (eg Google Drive)
None
The command you were trying to run (eg rclone copy /tmp remote:tmp)
rclone -vv copy cephs3: /tmp/backups3
or
rclone -vv copy cephs3:"xxxx" /tmp/backups3
with xxxx = the name of an existing bucket
or
rclone -vv copy cephs3:"*" /tmp/backups3
The rclone config contents with secrets removed.
[cephs3]
type = s3
provider = Ceph
endpoint = http://10.0.200.2:8081/
acl = public-read-write
A log from the command with the -vv flag
I tested commands such as:
rclone -vv copy cephs3: /tmp/backups3
which reports:
2021/10/18 08:59:14 DEBUG : rclone: Version "v1.56.2" starting with parameters ["rclone" "-vv" "copy" "cephs3:" "/tmp/backups3"]
2021/10/18 08:59:14 DEBUG : Creating backend with remote "cephs3:"
2021/10/18 08:59:14 DEBUG : Using config file from "/root/.config/rclone/rclone.conf"
2021/10/18 08:59:14 DEBUG : Creating backend with remote "/tmp/backups3"
2021/10/18 08:59:14 DEBUG : Local file system at /tmp/backups3: Waiting for checks to finish
2021/10/18 08:59:14 DEBUG : Local file system at /tmp/backups3: Waiting for transfers to finish
2021/10/18 08:59:14 INFO : There was nothing to transfer
2021/10/18 08:59:14 INFO :
Transferred: 0 / 0 Byte, -, 0 Byte/s, ETA -
Elapsed time: 0.0s
I also tried this one:
rclone -vv copy cephs3:"registry" /tmp/backups3
as registry is my only bucket in the Ceph S3 space:
radosgw-admin bucket list
[
"registry"
]
but I got:
2021/10/18 09:01:49 ERROR : S3 bucket registry: error reading source root directory: AccessDenied:
Finally, I tried a syntax meant to fetch all buckets at once:
rclone -vv copy cephs3:"*" /tmp/backups3
but I got:
ERROR : S3 bucket *: error reading source root directory: InvalidBucketName