rsignell (Rich Signell) · May 7, 2018, 11:08am · #1
I’m trying to set up rclone with an OpenStack Swift endpoint on NSF’s Jetstream cloud.
The method I’ve been using to authenticate for writing is Amazon-style credentials with a bucket_endpoint:
bucket_endpoint=https://iu.jetstream-cloud.org:8080
ubuntu@rsignell-api-u-1:~$ more ~/.aws/config
[default]
region=RegionOne
aws_access_key_id=22e2ce9b704548a29ee06aef1xxxxxxx
aws_secret_access_key=4ea8f872c4cb49caa7066dxxxxxxx
In case it’s useful, these credentials were created with:
openstack ec2 credentials create
Does anyone know where these credentials would go in either the S3 or Swift sections of rclone.conf?
ncw (Nick Craig-Wood) · May 7, 2018, 3:32pm · #2
Those look very much like S3 credentials I’d say, so I suggest you try making an S3 remote. There is a place to put an endpoint. Note that a Swift endpoint would likely end with /v2 or /v3.
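For comparison, a native Swift remote points rclone at the Keystone auth URL (the one ending in /v2.0 or /v3) rather than the storage endpoint. A minimal sketch of such an rclone.conf entry; the port, user, tenant, and domain values here are placeholders, not taken from Jetstream’s docs:

```ini
[swift-native]
type = swift
env_auth = false
user = myusername
key = mypassword
auth = https://iu.jetstream-cloud.org:5000/v3
domain = default
tenant = mytenant
```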
There is S3 middleware for OpenStack Swift, or the provider may be using Ceph, which has an S3 endpoint.
Note that if you set env_auth to true in the config then rclone should read your .aws/config file.
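With env_auth = true the remote section itself can stay credential-free. A minimal sketch of such an rclone.conf entry (the remote name is made up; the endpoint is the one quoted above):

```ini
[jetstream]
type = s3
env_auth = true
endpoint = https://iu.jetstream-cloud.org:8080
```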
ncw (Nick Craig-Wood) · May 7, 2018, 3:33pm · #3
A bit of searching reveals that they are using Ceph.
rsignell (Rich Signell) · May 7, 2018, 9:13pm · #4
@ncw, thanks for the suggestion to try the S3 backend for my OpenStack Swift endpoint!
The command
rclone sync s3:rsignell/nwm/test_week5c swift:rsignell/nwm/test_week5c
is now working with this rclone.conf:
$ more ~/.config/rclone/rclone.conf
[s3]
type = s3
provider = AWS
env_auth = true
access_key_id = AKIAJWVSISGxxxxxxxxxxxx
secret_access_key = qY/Q3XkVMBegKf4uxxxxxxxxxxxxxxxxxxxxxxxxxxx
region = us-west-2
location_constraint = us-west-2
acl = public-read
storage_class =
[swift]
type = s3
provider = AWS
env_auth = true
access_key_id = 22e2ce9b70454xxxxxxxxxxxxxxxxxx
secret_access_key = 4ea8f872c4cxxxxxxxxxxxxxxxxxx
region =
location_constraint =
acl = public-read
endpoint = https://iu.jetstream-cloud.org:8080
storage_class =
But it’s way slower than the copy I did of the same dataset between S3 and GCE.
The size of the dataset I’m transferring from S3 to Swift is:
(IOOS3) ubuntu@rsignell-api-u-1:~/data$ rclone size s3:rsignell/nwm/test_week5c
Total objects: 1307
Total size: 94.490 GBytes
And it’s copying at a rate of 3 objects per 5 minutes, which means it will take about 36 hours to transfer 95 GB!
Does that mean something is likely wrong with the transfer, or do I just need different settings?
ncw (Nick Craig-Wood) · May 8, 2018, 7:59am · #5
I suppose transfer rates might depend on networking speed, though I calculate that 95 GB in 36 hours is about 0.75 MByte/s, which is quite slow…
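A quick sanity check of that arithmetic, using the figures from the previous post (1307 objects, 94.49 GBytes, 3 objects per 5 minutes):

```shell
# Estimate total transfer time from the observed object rate
objects=1307
# 3 objects per 5 minutes
total_minutes=$(awk "BEGIN {printf \"%.0f\", $objects / 3 * 5}")
hours=$(awk "BEGIN {printf \"%.1f\", $total_minutes / 60}")
echo "estimated time: ${hours} hours"

# Implied throughput for 94.49 GBytes over that time
awk "BEGIN {printf \"rate: %.2f MByte/s\n\", 94.49 * 1024 / ($total_minutes * 60)}"
```

That comes out at roughly 36 hours and about 0.74 MByte/s, in line with the estimate above.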
Things to try: use --checksum, which avoids reading the modtime on S3 and so saves a transaction per object; that should speed things up. --fast-list should also speed things up. And you can increase the number of parallel transfers, say --transfers 16.
rsignell (Rich Signell) · May 8, 2018, 11:21am · #6
Wow! I tried the 95 GB / 1300-object AWS S3 to Swift S3 transfer again with the settings
rclone sync s3:rsignell/nwm/test_week5c swift:rsignell/nwm/test_week5d --checksum --fast-list --transfers 16
and it finished in just one hour!
Thank you! I hope your wife enjoys the flowers!