New provider filebase.com

that's pretty good, it took me 8m46.8s for 5k on an ovh 100mbps port

2020/03/15 18:58:41 DEBUG : EN07/EN072020-03-14T104230.vbk: multipart upload starting chunk 1 size 256M offset 0/35.700G

2020/03/15 19:04:18 DEBUG : EN07/EN072020-03-14T104230.vbk: multipart upload starting chunk 143 size 205.125M offset 35.500G/35.700G

2020/03/15 19:04:35 DEBUG : EN07/EN072020-03-14T104230.vbk: MD5 = 61cfe66ebbcfdfa3f0d627bb0681c8e9 OK

2020/03/15 19:04:35 INFO : EN07/EN072020-03-14T104230.vbk: Copied (new)
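As a quick sanity check on the chunking in that log, here is a back-of-the-envelope in Go (assuming the 1 G = 1024 M convention rclone uses when printing sizes; the rounded 35.700G from the log is taken at face value, which is why the last chunk comes out slightly smaller than the logged 205.125M):

```go
package main

import "fmt"

func main() {
	// Rough check of the multipart chunking shown in the log above.
	const chunk = 256.0       // MiB per part (--s3-chunk-size 256M)
	fileSize := 35.700 * 1024 // ~35.700G expressed in MiB (rounded, as in the log)

	fullChunks := int(fileSize / chunk)           // parts of exactly 256M
	last := fileSize - float64(fullChunks)*chunk  // size of the final part

	fmt.Printf("%d full %.0fM chunks + a final chunk of ~%.1fM = %d parts\n",
		fullChunks, chunk, last, fullChunks+1)
	// => 142 full 256M chunks + a final chunk of ~204.8M = 143 parts,
	// matching "chunk 143 size 205.125M offset 35.500G/35.700G".
}
```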

Well then, to me the performance looks very good. Definitely in the higher tier.
5000 / 91 seconds = ca. 55 transfers per second - so that is about as good as you can expect, I think. Very much on par with some of the best pay-per-use providers.

I think when you can saturate a 1Gbit connection you also have to give full marks for bandwidth.

It seems like a very enticing combination of price and performance to me - for those who need more responsiveness than Gdrive can offer but want to keep the price down.

But for the company and the platform - I have no clue how reliable they are.
That article doesn't look great, but at the same time that doesn't mean the technology and platform aren't good. For a relatively new competitor I would be more worried about the service shutting down after a few years because the business didn't work out than about data loss. That is one big benefit the big, old providers have: you know they will be providing the same or a similar service many years from now. That might be important for some.

i just set up a filebase account and i will do all i can to earn a GOAWAY response.

Maybe if you escalate connections after getting GOAWAY you can get a FUCKOFF!!! ?
:stuck_out_tongue:


and then a DROPDEAD!!!?

for the multipart upload, did you use the aws cli? i couldn't get mine to work, it kept saying access denied. i assume it is because i was on a test account. :scream:

the speed is pretty fast.

all tests with rclone.

as i understand it, and i could be wrong,
and if i am wrong, @thestigma will be all too happy to let me know,

rclone uses the aws go library.

I have no reason to suspect multipart support would be excluded on a free account. It wouldn't make much sense.
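For anyone who wants to poke at the multipart path directly, here is a minimal standalone sketch using the AWS Go SDK's s3manager uploader against an S3-compatible endpoint. It only illustrates the knobs involved (part size and concurrency, roughly what rclone's --s3-chunk-size and --s3-upload-concurrency control); it is not rclone's actual code, and the endpoint URL, credentials, bucket and key are placeholders/assumptions:

```go
package main

import (
	"log"
	"os"

	"github.com/aws/aws-sdk-go/aws"
	"github.com/aws/aws-sdk-go/aws/credentials"
	"github.com/aws/aws-sdk-go/aws/session"
	"github.com/aws/aws-sdk-go/service/s3/s3manager"
)

func main() {
	// Endpoint, credentials, bucket and key are placeholders for illustration only.
	sess := session.Must(session.NewSession(&aws.Config{
		Region:           aws.String("us-east-1"),
		Endpoint:         aws.String("https://s3.filebase.com"), // assumed S3-compatible endpoint
		S3ForcePathStyle: aws.Bool(true),
		Credentials:      credentials.NewStaticCredentials("ACCESS_KEY", "SECRET_KEY", ""),
	}))

	f, err := os.Open("EN072020-03-14T104230.vbk")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Comparable to rclone's --s3-chunk-size and --s3-upload-concurrency.
	up := s3manager.NewUploader(sess, func(u *s3manager.Uploader) {
		u.PartSize = 256 * 1024 * 1024 // 256M parts, like the log above
		u.Concurrency = 4
	})

	if _, err := up.Upload(&s3manager.UploadInput{
		Bucket: aws.String("mybucket"),
		Key:    aws.String("EN07/EN072020-03-14T104230.vbk"),
		Body:   f,
	}); err != nil {
		log.Fatal(err)
	}
}
```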

so i did a bunch of testing of filebase.

my conclusion:
if filebase was free, i would not use it. use gdrive.
even if they paid me, it is scary slow and scary unreliable.
keep in mind that filebase is just $0.99/TB cheaper than wasabi.

filebase

using wasabi with same settings

and now pushing wasabi to its limits

i think this sums it up...

wasabi wins, thanks for your tests. the speed difference can really be seen; filebase uploading is unfortunately really bad, reminds me of hubic :nerd_face:. i was surfing yesterday looking for a low-budget vps to hook rclone up to, and came across buyvm's 1TB for $5 per month

they call this Dedicated Block Storage Slabs, not sure if they mean an S3-compatible object storage service or add-on storage for a virtual private server :scream:

Why are the speed results suddenly so bad now, when you demonstrated 128MB/sec earlier?
Did it just randomly vary over time?
Or have you possibly uncovered some sort of hidden throttling mechanism triggered by high upload activity? I mean, if that were the case that would also be quite bad...

Also your Wasabi stress-test is probably just your hard drives maxing out - but I'm sure you are aware of that. I don't doubt for a second that Wasabi can eat up a Gbit upload given the right workload. Gdrive does that fine too - it just doesn't like small files due to the hard limit on new connections on the backend.

Something must be up to produce 5MB/sec... that can't be accurate. Not that I don't trust your testing, but there is something else at play then. Maybe I will have to try to verify this, but I kinda have my plate full atm.

Unrelated PS: I just picked up 4x6TB Seagate Enterprise disks (barely used) for a stupidly low price. The biggest drive I've had so far was a 2TB Barracuda, and now it seems I suddenly have more local space than I know what to do with lol - but the price was so stupidly good I had to take them all anyway. I could flip these tomorrow and make money for sure, but... I don't want to, because these drives are like storage-porn... :stuck_out_tongue: These are the same ultra-heavy-duty disks Google's datacenters run on, and they normally cost a ludicrous price I could never afford...

Can you tell I am a little giddy? :smiley:

i noticed that rclone is not accurately reflecting what is going on.
rclone states that 0s are left, but in fact there is a large time lag before rclone exits,
and while the time left stays at 0s, the transfer speed keeps dropping.

in this case, rclone progress states that 5GB has been transferred and 0s time left,
but for 42 seconds the transfer speed keeps dropping.
it seems that rclone considers a chunk that has started to upload as already uploaded.
as a result, rclone progress does not seem to reflect what rclone is actually doing.

in the logs below you can see that there is a large gap, approx 42 seconds, between the last chunk starting to upload and the MD5 check:

2020/03/17 13:33:41 DEBUG : 5GB.file: multipart upload starting chunk 10 size 512M offset 4.500G/5G
2020/03/17 13:34:23 DEBUG : 5GB.file: MD5 = 234c0a13105308447761f77f66b596a8 OK
2020/03/17 13:34:23 INFO : 5GB.file: Copied (new)

2020/03/17 13:33:13 DEBUG : rclone: Version "v1.51.0" starting with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "sync" "C:\\data\\rclone\\scripts\\rr\\other\\thedump\\5GB" "wasabieast2:thetestbucket/5GB" "--s3-upload-concurrency=20" "--s3-chunk-size=512M" "--log-file=C:\\data\\rclone\\scripts\\rr\\other\\test\\log5GB.txt" "--log-level=DEBUG" "--progress"]
2020/03/17 13:33:13 DEBUG : Using config file from "c:\data\rclone\scripts\rclone.conf"
2020/03/17 13:33:14 INFO : S3 bucket thetestbucket path 5GB: Waiting for checks to finish
2020/03/17 13:33:14 INFO : S3 bucket thetestbucket path 5GB: Waiting for transfers to finish
2020/03/17 13:33:25 DEBUG : 5GB.file: multipart upload starting chunk 1 size 512M offset 0/5G
2020/03/17 13:33:27 DEBUG : 5GB.file: multipart upload starting chunk 2 size 512M offset 512M/5G
2020/03/17 13:33:29 DEBUG : 5GB.file: multipart upload starting chunk 3 size 512M offset 1G/5G
2020/03/17 13:33:31 DEBUG : 5GB.file: multipart upload starting chunk 4 size 512M offset 1.500G/5G
2020/03/17 13:33:33 DEBUG : 5GB.file: multipart upload starting chunk 5 size 512M offset 2G/5G
2020/03/17 13:33:35 DEBUG : 5GB.file: multipart upload starting chunk 6 size 512M offset 2.500G/5G
2020/03/17 13:33:36 DEBUG : 5GB.file: multipart upload starting chunk 7 size 512M offset 3G/5G
2020/03/17 13:33:37 DEBUG : 5GB.file: multipart upload starting chunk 8 size 512M offset 3.500G/5G
2020/03/17 13:33:39 DEBUG : 5GB.file: multipart upload starting chunk 9 size 512M offset 4G/5G
2020/03/17 13:33:41 DEBUG : 5GB.file: multipart upload starting chunk 10 size 512M offset 4.500G/5G
2020/03/17 13:34:23 DEBUG : 5GB.file: MD5 = 234c0a13105308447761f77f66b596a8 OK
2020/03/17 13:34:23 INFO : 5GB.file: Copied (new)
2020/03/17 13:34:23 INFO : Waiting for deletions to finish
2020/03/17 13:34:23 INFO :
Transferred: 5G / 5 GBytes, 100%, 74.240 MBytes/s, ETA 0s
Transferred: 1 / 1, 100%
Elapsed time: 1m8.9s

2020/03/17 13:34:23 DEBUG : 23 go routines active
2020/03/17 13:34:23 DEBUG : rclone: Version "v1.51.0" finishing with parameters ["c:\\data\\rclone\\scripts\\rclone.exe" "sync" "C:\\data\\rclone\\scripts\\rr\\other\\thedump\\5GB" "wasabieast2:thetestbucket/5GB" "--s3-upload-concurrency=20" "--s3-chunk-size=512M" "--log-file=C:\\data\\rclone\\scripts\\rr\\other\\test\\log5GB.txt" "--log-level=DEBUG" "--progress"]

Hmm, I think rclone might think the chunk is uploaded as soon as it gets into the buffer. Normally people don't use such big buffers with S3 so it doesn't matter too much.

It is possible to fix this by unwrapping the accounting while filling the buffer and wrapping it back on for reading out the buffer.
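To make the idea concrete, here is a toy Go sketch (not rclone's actual accounting code): the byte counting happens wherever the counting reader is wrapped, so counting while the buffer is being filled makes progress run ahead of the network, while counting while the buffer is drained tracks the real upload.

```go
package main

import (
	"bytes"
	"fmt"
	"io"
	"strings"
	"sync/atomic"
)

// countingReader counts bytes as they pass through Read.
// Where it is wrapped decides what "transferred" means.
type countingReader struct {
	r io.Reader
	n int64
}

func (c *countingReader) Read(p []byte) (int, error) {
	n, err := c.r.Read(p)
	atomic.AddInt64(&c.n, int64(n))
	return n, err
}

func (c *countingReader) Bytes() int64 { return atomic.LoadInt64(&c.n) }

func main() {
	src := strings.NewReader(strings.Repeat("x", 1<<20)) // stand-in for a 1 MiB chunk

	// Counting while *filling* the buffer: the counter hits 1 MiB as soon as
	// the copy into memory finishes, even though nothing has gone out yet.
	fill := &countingReader{r: src}
	buf := &bytes.Buffer{}
	io.Copy(buf, fill)
	fmt.Println("counted while buffering:", fill.Bytes())

	// Counting while *draining* the buffer (io.Discard stands in for the
	// network write): this count would track the actual upload instead.
	drain := &countingReader{r: buf}
	io.Copy(io.Discard, drain)
	fmt.Println("counted while draining:", drain.Bytes())
}
```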

i agree, it does not matter too much and no real need to tweak rclone.
those large buffers were just for testing.

enjoy,

as we PM'd about,
go with the free, awesome micro$oft windows server hyper-v edition.
set up a REFS file system with soft-raid and file hashsum integrity checking - the windows version of linux ZFS.
