New provider filebase.com

hello guys, i've gotten filebase.com to work :smiley:

the config looks like this

[sia]

type = s3
provider = Other
env_auth = false
access_key_id = xxxxxxxxxxxx
secret_access_key = xxxxxxxxxx
region = us-east-1
endpoint = https://s3.filebase.com
location_constraint =
acl =
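
for anyone who prefers setting the remote up non-interactively, the same thing should be possible with rclone config create (the key/secret values below are placeholders, use your own):

rclone config create sia s3 \
    provider Other \
    env_auth false \
    access_key_id xxxxxxxxxxxx \
    secret_access_key xxxxxxxxxx \
    region us-east-1 \
    endpoint https://s3.filebase.com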

filebase is on the sia network

A good benchmark is often to see what limits there are on the number of transfers that can be initiated in a short time-frame.

ie. make a few hundred copies of a tiny text-file and set transfers to 32 or 64 - and see how it copes with it. How long does it take to complete? Since the file-size is trivial this really only measures the API and hard backend limits (which is where performance often is capped on services that don't charge for egress or per-operation).
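
Something like this gives a comparable number (rough sketch - remote:bench is just a placeholder for whatever remote you are testing):

mkdir -p tiny
for i in $(seq 1 500); do echo x > tiny/f$i; done          # a few hundred trivial text files
time rclone copy tiny remote:bench --transfers 64 -P       # total time reflects API/backend limits, not bandwidth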

I don't think the bandwidth will matter much in such a test.
Latency to the datacenter would matter more - if you knew where it was - but I think you will quite quickly get a feel for how many connections it will allow you regardless. We don't need a scientifically accurate test here - just get a general idea :wink:

(although a full compilation comparing this across all the major providers would genuinely be really useful data)

For example, Gdrive will allow 2-3 new transfers a second, but has that sweet affordable flat-rate for unlimited with no use-charges.
Premium pay-per-use services like Backblaze, Gcloud and the like seem for the most part to be unrestricted on this front and can easily deal with 32+ (at which point things like latency start to really factor in as limiting factors).
The question is, since this is not a pay-per-use service - where does it fall between these extremes?

i'll try :expressionless:

generate 5000 4k to 16k files, inside folder test.

seq -w 1 5000 | xargs -n1 -I% sh -c 'dd if=/dev/urandom of=file.% bs=$(shuf -i1-10 -n1) count=2000'
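
as written that drops the files in the current directory and the sizes come out around 2k-20k (bs of 1-10 bytes x 2000); a variant like this (untested sketch) keeps them in the test folder and closer to the stated 4k-16k:

mkdir -p test
seq -w 1 5000 | xargs -n1 -I% sh -c 'dd if=/dev/urandom of=test/file.% bs=1k count=$(shuf -i 4-16 -n1) 2>/dev/null'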

uploading 5000 files to filebase :scream: , a few minutes in i received a GOAWAY, but the process keeps uploading.

http2: Transport: cannot retry err [http2: Transport received Server's graceful shutdown GOAWAY] after Request.Body was written; define Request.GetBody to avoid this error
Transferred: 52.387M / 52.387 MBytes, 100%, 101.824 kBytes/s, ETA 0s
Errors: 2 (retrying may help)
Checks: 2162 / 4992, 43%
Transferred: 4998 / 4998, 100%
Elapsed time: 8m46.8s

upload looks to be slow, not sure, i assume it depends on which filebase / sia network server your files are being uploaded to.

for streaming off the network it's pretty good, and filebase can be used with the aws cli for file management.

thanks, looking forward to your results, i will be using filebase for anime :nerd_face:

So we can fairly assume it is uncapped then - for all practical purposes.

LOL - that's a funny error message :stuck_out_tongue:
I assume this comes from overloading the API though, which would not be a surprise for 5000 requests all at once.
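
If it really is the API being overloaded, one thing that might be worth trying (just a guess on my part, I haven't tested it against filebase) is throttling the request rate with --tpslimit, e.g.:

rclone copy test sia:mybucket/test --transfers 32 --tpslimit 10 -P    # cap rclone at ~10 API calls per second ("mybucket" is a placeholder)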

This seems very promising then - considering the pricing is good compared to similar competitors.

Doing the lord's work /respectful nod
:stuck_out_tongue:

only saw 4 concurrent transfers i think :zipper_mouth_face:

Oh... but you did of course use --transfers 64 (or some other high number) right?
Because otherwise 4 concurrent is rclone's default...

Usually the concurrent number of transfers is not restricted (or at least the cap is so high it does not matter in practice). Even Gdrive has no issues with this. The restriction is usually in the number of transfers you can start per second.

The reason this is an important metric to know is that it basically determines how much performance you can get when files are relatively small. You may then end up hitting the "new transfers per second" limit long before you fully use your bandwidth, which can be frustrating.

PS: On a related topic... NCW has implemented a new beta for a request I made quite a while back.
Most of you probably know you can use --order-by to sort the transfer-order now (by size,asc for example). The new thing is that NCW introduced a mixed mode that can do a mix of the largest files AND the smallest files - thus maximizing both bandwidth and connection-rate for the full transfer (in scenarios where the transfers contain a mix of large and small files obviously). I am currently doing some testing on it now, but I expect it will make it into public beta soon.
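
For reference, the released sorting looks like the first line below; the mixed mode would presumably be selected the same way, but the exact flag value is my guess since the feature is still in testing:

rclone copy src remote:dst --order-by size,asc --transfers 64      # current behaviour: smallest files first
rclone copy src remote:dst --order-by size,mixed --transfers 64    # hypothetical: mix of largest and smallest files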

just tested it with --transfers 64, got 8 transfers :scream:

Hmm ok, that isn't that great.

But to get a useful number out of it - do this:
transfer 1000 files (as small as possible) with --transfers 64 or higher.
Note the total time it takes to finish.

The new-transfers limit will then be approximately 1000 divided by the number of seconds it takes to complete. This will be a lot more accurate than trying to just eyeball it from the output of -P

rclone copy C:\testfiles filebase:testfiles --transfers 64 -P
(use a new empty location to copy to of course) :slight_smile:

2020-03-15 20:47:05 ERROR : Attempt 2/3 succeeded

Transferred: 5.385M / 5.385 MBytes, 100%, 168.122 kBytes/s, ETA 0s
Checks: 992 / 992, 100%
Transferred: 1000 / 1000, 100%
Elapsed time: 32.7s

1000 4kb to 8kb files took 32.7s, concurrent uploads were 64 i believe :open_mouth:
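
by the rule of thumb above that works out to roughly 1000 / 32.7s ≈ 30 new transfers per second.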

totally agree, it is also very new. sia looks like it's based on a decentralized storage system.

on a side note, i ran rclone purge on my remote filebase bucket of 1000 4kb files.

2020-03-15 20:58:49 ERROR : Attempt 3/3 succeeded
Transferred: 0 / 0 Bytes, -, 0 Bytes/s, ETA -
Checks: 4841 / 4841, 100%
Deleted: 4841
Elapsed time: 1m43.3s

time taken, 1m43.3s for 1000 files to be purged
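
going by the output that was actually 4841 objects deleted in 1m43.3s, so roughly 47 deletions per second.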

the GOAWAY error was due to sending the 1000 and 5000 upload requests :smiley:

wouldn't it be scary if wasabi were using sia storage :scream:, just saying because parts of the UI look the same on filebase and wasabi, like where the API keys are.

There is an rclone page for it: https://docs.filebase.com/client-configurations/rclone but nothing there yet!

Maybe you should drop them a note @kanchan with your config!

It looks to be price competitive with b2/Wasabi.

going to email them now :smiley:

b2 is cheap but they charge for storage, downloads, and transactions. for 1TB on B2:

STORAGE COSTS

Storage Cost for Initial Month: $5.00
Data Added Each Month: $5.00
Data Deleted Each Month: -$0.00
Net Data: $5.00

DOWNLOAD COSTS
Monthly Download Cost: $10.00

Total Cost for 12 Months: $570.00

for what it is worth,

wasabi

  1. does not charge for downloads
  2. does not give GOAWAY messages.
  3. does not charge per transaction.
  4. can easily saturate my 1Gbps verizon fios connection each and every day.
  5. does not spread their data over dozens of unknown servers with unknown reliability.
  6. does not rely on sia, https://www.sec.gov/enforce/33-10715-s
    "Sia Reaches $225K SEC Settlement"

seems to me that, for an extra $0.99/TB/month,
wasabi is a safe choice for backups and critical data

but given that filebase is s3, rclone already supports it.
and the more providers that rclone can support the better for all of us.

holy crap, sec.gov/enforce/33-10715-s

ohh that's insane, for .99 extra wasabi seems way safer :scream:

i'm rethinking ...

i think wasabi offers a free trial, but not a free tier.
so you can check it out for yourself.

it all depends on what you want from a provider.

for me, i need a provider i can trust to backup my critical data to the cloud.
rclone + veeam + wasabi = a trusted tested solution.

i know that many users use gdrive, which is free and they are very happy with it.

"transfer 1000 files (as small as possible) with --transfers 64 or higher. Note the total time it takes to finish."

For /L %%i in (1,1,1000) do fsutil file createnew ".\1000files\test.%%i.txt" 1

2020/03/15 18:35:48 DEBUG : rclone: Version "v1.51.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "sync" "C:\\data\\rclone\\scripts\\rr\\other\\1000files" "wasabieast2:filebasetest/test/1000" "--log-file=C:\\data\\rclone\\scripts\\rr\\other\\test\\log.txt" "--log-level=DEBUG" "--transfers=64"]

Transferred: 1000 / 1000 Bytes, 100%, 53 Bytes/s, ETA 0s
Transferred: 1000 / 1000, 100%
Elapsed time: 18.6s


For /L %%i in (1,1,5000) do fsutil file createnew ".\5000files\test.%%i.txt" 1

2020/03/15 18:40:41 DEBUG : rclone: Version "v1.51.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "sync" "C:\\data\\rclone\\scripts\\rr\\other\\5000files" "wasabieast2:filebasetest/test/5000" "--log-file=C:\\data\\rclone\\scripts\\rr\\other\\test\\log.txt" "--log-level=DEBUG" "--transfers=64"]

and keep in mind that this is using a log with debug level info, which will slow down the sync.

Transferred: 4.883k / 4.883 kBytes, 100%, 54 Bytes/s, ETA 0s
Transferred: 5000 / 5000, 100%
Elapsed time: 1m31.3s
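
using the same 1000-divided-by-seconds rule of thumb, that is roughly 1000 / 18.6s ≈ 54 and 5000 / 91.3s ≈ 55 new transfers per second to wasabi, versus the ~30/s measured on filebase above.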