Rclone from AWS S3 to OCI Object Storage

What is the problem you are having with rclone?

The rclone command is taking too long: it has been transferring around 360 GB for almost 7 to 8 hours. Can we make it faster?


What is your rclone version (output from rclone version)

go version: go1.15.5

Which OS you are using and how many bits (eg Windows 7, 64 bit)

I am running on an Oracle instance with the below config:

os/arch: linux/amd64
memory: 24 GB
OCPUs: 6

Which cloud storage system are you using? (eg Google Drive)

Oracle Object storage

The command you were trying to run (eg rclone copy /tmp remote:tmp)


rclone --verbose --cache-workers 64 --transfers 64 --retries 32 copy $SOURCE oci:XXXX

hello and welcome to the forum,

you did not post the config file (with id/passwords redacted), not sure why?

the reason i ask is that you are using the flag --cache-workers, which is for the cache backend, which is not recommended unless you know 1000% you must use it.

why do you think that, as compared to what?
what is the result of a speedtest from the oracle instance?
where is the oracle instance located, in cloud or local?
where is the oci: located, in cloud or local?

compared to the network bandwidth of 6Gbps that I see in my instance, it should be faster, right?

how do I do the oracle instance speed test?

the instance and the oci are located in the cloud.

when you posted, you were asked for some info such as

  • the config file, redact id/passwords.
  • debug log with the exact command that was run (see the sketch below for how to capture one).
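
for reference, a debug log can be captured by adding -vv and --log-file to the same copy command, something like this (just a sketch, reusing your source and destination):

rclone copy $SOURCE oci:XXXX -vv --log-file=rclone-debug.log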

why the need for a buggy, beta, not maintained, cache backend for a simple file copy?

is that between the server and the oci storage, both of which are in the same cloud location?
or
is the speed you can download/upload files to the internet?

you need to find the bottleneck, if there is one?
is it downloading from aws
or
copying to oci:

In my oracle instance, it shows this during the transfer:

Transferred: 157.527G / 328.457 GBytes, 48%, 6.655 MBytes/s, ETA 7h18m20s
Transferred: 176 / 180, 98%
Elapsed time: 6h44m0.0s

and the exact command that was run is below:

the aws s3 configuration is done like this:

$ export RCLONE_CONFIG_S3_TYPE=s3
$ export RCLONE_CONFIG_S3_ACCESS_KEY_ID=<your_access_key>
$ export RCLONE_CONFIG_S3_SECRET_ACCESS_KEY=<your_secret_key>
$ export RCLONE_CONFIG_S3_REGION=<region_of_your_bucket>
$ export SOURCE=s3:<your_source_bucket> 

and the oracle oci configuration is done like this:

$ export RCLONE_CONFIG_OCI_TYPE=s3
$ export RCLONE_CONFIG_OCI_ACCESS_KEY_ID=<your_access_key>
$ export RCLONE_CONFIG_OCI_SECRET_ACCESS_KEY=<your_secret_key>
$ export RCLONE_CONFIG_OCI_REGION=<your_region_identifier>
$ export RCLONE_CONFIG_OCI_ENDPOINT=https://<your_namespace>.compat.objectstorage.<your_region_identifier>.oraclecloud.com
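
as a quick sanity check, both remotes defined above can be listed before running the copy (a minimal sketch using rclone lsd, which should show the buckets on each side):

$ rclone lsd s3:
$ rclone lsd oci: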

after the configuration, the below copy command is run

rclone --verbose --cache-workers 64 --transfers 64 --retries 32 copy $SOURCE oci:

you can remove --cache-workers 64, as it does nothing in your setup
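
i.e. the same command, just without the cache flag:

rclone --verbose --transfers 64 --retries 32 copy $SOURCE oci: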

  • best to test a copy of a single file and then tweak settings.
  • i would try removing --transfers 64
  • is there a reason for --retries 32 ?
  • 6Gbps, so can you download a file at that rate from the internet, not using rclone?
  • you need to do some bandwidth testing from the internet to the vm and from the vm to the oci: (see the speedtest sketch after this list)
  • would need to see a debug log for errors and other problems.
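
for the raw internet speed from the vm, one simple option is speedtest-cli (an assumption that python/pip is available on the instance; any similar tool works):

pip install speedtest-cli
speedtest-cli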

Actually, I am new to oci, hence I am not aware of what --cache-workers and --transfers do, and of how changing any of those parameters will affect my transfer.

if you do not know what to expect from oci, you have to test and figure that out.

as i suggested, step one, you need to figure out the bandwidth from aws to vm and from vm to oci.
copy one file from aws to vm, using the simplest command possible and calculate the bandwidth
copy that same file from vm to oci, using the simplest command possible and calculate the bandwidth.
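
for example, something along these lines, where bigfile is just a placeholder for one of your larger objects and -P shows the live transfer speed:

rclone copy s3:<your_source_bucket>/bigfile /tmp/ -P
rclone copy /tmp/bigfile oci:XXXX -P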

From this it looks like you have a small number of very big files.

In which case you might want to tweak these

  --s3-chunk-size SizeSuffix             Chunk size to use for uploading. (default 5M)
  --s3-upload-concurrency int            Concurrency for multipart uploads. (default 4)

Increasing --s3-chunk-size will have the most difference - try 64M. Note that this will use more memory.
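
For example, something like this (just a sketch, not tested against your buckets; the exact numbers are starting points to tune, and each transfer buffers roughly chunk-size * upload-concurrency in memory, so keep an eye on RAM):

rclone copy $SOURCE oci: --verbose --transfers 8 --s3-chunk-size 64M --s3-upload-concurrency 8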
