How to set up and optimize Bifrost Cloud storage with rclone

Download the rclone binary that matches your system architecture from the official website.

Open the downloaded archive in Explorer and extract rclone.exe. Rclone is a portable executable, so you can place it wherever is convenient.

Open a CMD window (or PowerShell) and run the binary. Note that rclone does not launch a GUI by default; it runs in the CMD window.
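
For example, you can confirm the binary runs by printing its version:

>rclone.exe version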

Run rclone.exe config to start the setup.

You should be presented with a wizard that walks you through storage backend configuration.

Select the following options as they are prompted:

  • New remote
  • Name: bifrost (arbitrary, but note it, as it is needed for the commands later on)
  • Select “Amazon S3 Compliant Storage Providers”, usually option 4
  • Select “Any other S3 compatible provider”, usually option 13
  • Select “Enter AWS credentials in the next step”
  • Obtain the access key, secret key, and endpoint information from the Bifrost portal before continuing to the next step
  • Enter access_key_id
  • Enter secret_access_key
  • Select “Use this if unsure. Will use v4 signatures and an empty region.”
  • Enter the endpoint provided by the Bifrost portal during access key creation; this is usually us1-dcs-s3.bifrostcloud.com (no need to include http/s)
  • Press Enter to accept the default values for the next few options until you are presented with the final configuration
  • Enter y for “Yes this is OK”
  • Enter q to Quit configuration wizard
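
For reference, the finished remote should look roughly like the following in rclone's config file (run rclone.exe config file to see where that file lives); the credential values below are placeholders:

[bifrost]
type = s3
provider = Other
access_key_id = YOUR_ACCESS_KEY
secret_access_key = YOUR_SECRET_KEY
endpoint = us1-dcs-s3.bifrostcloud.com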

Now you are ready to execute rclone commands to copy or sync files from the Windows file system to Bifrost Cloud.

Examples (these assume the remote name is set to “bifrost” as configured above):

Listing buckets: >rclone.exe lsd bifrost:

Make a new bucket: >rclone.exe mkdir bifrost:BUCKET_NAME

Copying a single file: >rclone.exe copy C:\temp\file.txt bifrost:BUCKET_NAME
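
Note that copy treats the destination as a directory, so the file keeps its name. If you want it to land under a different name, rclone's copyto command treats the destination as a file path instead (renamed.txt below is just an illustrative name):

Copying and renaming a single file: >rclone.exe copyto C:\temp\file.txt bifrost:BUCKET_NAME/renamed.txt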

Syncing an entire folder to Bifrost: >rclone.exe sync C:\temp bifrost:BUCKET_NAME
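
Since sync makes the destination match the source (deleting files in the bucket that are not present locally), it is worth previewing the changes first with rclone's --dry-run flag:

Previewing a sync: >rclone.exe sync --dry-run C:\temp bifrost:BUCKET_NAME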

For more examples, see the rclone copy documentation: https://rclone.org/commands/rclone_copy/

Optional Rclone Settings to Improve Performance:

  1. Include the "--s3-chunk-size 64M" parameter when running rclone to improve multipart uploading; 64M is the sweet spot for our storage backend.

  2. Depending on how much RAM is available to rclone, you can also tweak the concurrency parameters to achieve higher transfer performance; consider the examples below.

As a rule of thumb, RAM usage equals upload concurrency × chunk size.

Good (will use about 2.5 GB of RAM to upload: 40 × 64M)

This achieves roughly 25% of the theoretical maximum performance but uses much less RAM:

rclone.exe copy --progress --s3-upload-concurrency 40 --s3-chunk-size 64M 10gb.zip bifrost:BUCKET_NAME

Better (will use about 5 GB of RAM to upload: 80 × 64M)

This achieves roughly 50% of the theoretical maximum performance while still using less RAM:

rclone.exe copy --progress --s3-upload-concurrency 80 --s3-chunk-size 64M 10gb.zip bifrost:BUCKET_NAME

Best (will use about 10 GB of RAM to upload: 160 × 64M)

10,240 MB / 64 MB = 160 chunks, so 160 is the maximum useful upload concurrency for a 10 GB file:

rclone.exe copy --progress --s3-upload-concurrency 160 --s3-chunk-size 64M 10gb.zip bifrost:BUCKET_NAME
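
Note these estimates are per file being transferred: per the rclone S3 docs, multipart uploads use roughly --transfers × --s3-upload-concurrency × --s3-chunk-size of extra memory. So when copying a folder of large files, a more conservative per-file concurrency keeps total RAM in check; for example, 4 × 10 × 64M ≈ 2.5 GB:

rclone.exe copy --progress --transfers 4 --s3-upload-concurrency 10 --s3-chunk-size 64M C:\temp bifrost:BUCKET_NAME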



Do you want to add bifrost as an S3 provider for rclone? It is relatively straightforward to do that.

That would be great

If you want you can send a pull request to do that - I'm happy to help you get it merged. For example see: s3: add petabox.io to s3 providers list by cadet354 · Pull Request #6963 · rclone/rclone · GitHub

If you want me to do that I can but it is something I'd normally charge consultancy rates to do - let me know at nick@craig-wood.com if you would like that.