"Unlimited" alternatives to Google Drive, what are the options?

I've transferred around 1.5 TB to Uloz in the past day.
Large and small files.
From Switzerland, using rclone mount, I get a constant ~60MB/s upload with peaks of 75MB/s on big files.
With small files it is around 11MB/s.
Dropbox and Google performed exactly the same in the same test setup.
So I have absolutely nothing to complain about.
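For anyone wanting to reproduce the setup: a minimal sketch of the kind of mount involved (the remote name uloz: and the mount point are placeholders, and the cache flags are just a sensible starting point, not necessarily my exact settings):

# mount the remote with a local write cache so uploads are buffered first
rclone mount uloz: /mnt/uloz \
    --vfs-cache-mode writes \
    --vfs-cache-max-size 50G \
    --transfers 8 \
    --daemon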

1 Like

Unlimited is dead.

10 TB for 6.50 USD is not sustainable.

1 Like

A fellow Schwiizer or VPNler?

That's why I'm using this as a backup of another cloud backup. If it lasts, great... if not, then I have nothing to lose.

I have a home-built SuperMicro 36-bay case running 20x 14TB disks in raidz2.

That data is mirrored online 1:1. Uloz will be a mirror of my main online backup. At the prices they are charging, it's worth it for an extra layer of security/backup. No, it's not unlimited... but this is a different "type" of backup. And with the bandwidth limitations they put on each account, it stops people from trying to turn the storage into a public Plex server.

Different regions of the world have different costs, and these prices, which seem too good to be true, might be enough FOR them. This storage at this price is no different than what Google does in different regions. If you live in Turkey, for example, and subscribe to Google One, you can purchase a 50TB add-on for the equivalent of $10.00 USD/month. They justify it by the cost of operating in Turkey, the average cost of living, etc. Try to buy 50TB of Google One in the US and it's like $300.00/month.

Point is, you can't say $xx.xx isn't viable when the servers are hosted in that particular country, without knowing how the value of money there relates to their costs.

It does not matter where the servers are hosted; they use hard disks and other components manufactured by the same factories (and costing the same). You can save on electricity, rent or labour, but the rest is pretty much a fixed price.
If, like Uloz, storage prices do not even cover equipment cost (spread over many years), then it is not a sustainable business. It is designed from the start to crash.

3 Likes

This is speculation at best. We do not know, nor do we have their balance "books", to see what their costs are or whether this is making them money. Only time will tell if they can sustain this model. That's why I would not use it as a primary backup. But a backup of a backup is fine for me. All my 20x 14TB drives are used WD enterprise disks with 5-year warranties on them; I picked them up for $105.00 a piece. For all we know they could be going the same route. All my refurbished drives have been going for 2+ years with no issues.

True, but cheap capital no longer exists with higher interest rates. Companies have been surviving on cheap debt. Like all such business models, they hope consumers will not use the entire 50 TB limit. If everyone who signed up does, then it's game over for them.

Hard drives are only one aspect. I suspect the greatest cost is cooling.

I hope it is sustainable. I would be interested once my data grows enough to make this worthwhile for me.

@Dual-O, is what I'm doing right? I'd appreciate you having a look.


[blomp]
type = swift
user = XXX
key = XXX
auth = https://authenticate.ain.net
tenant = XXX
auth_version = 2
endpoint_type = public
leave_parts_on_error = true
chunk_size = 1P
no_chunk = false

[blompc]
type = crypt
remote = blomp:x@gmail.com/E
password = XXX
password2 = XXX

[blompc-chunker]
type = chunker
remote = blompc-gzip:
chunk_size = 4Gi

[blompc-gzip]
type = compress
remote = blompc:Google-archive
level = 9

And the command I'm running to copy to this remote is

rclone copy remote: blompc-chunker: 
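One way to sanity-check a layered setup like this before committing terabytes to it is to round-trip a small test file through the full chain (chunker -> compress -> crypt -> swift) and verify it; the /tmp path and the rclone-test folder name below are just placeholders:

# create a small random test file
mkdir -p /tmp/rclone-test
dd if=/dev/urandom of=/tmp/rclone-test/testfile.bin bs=1M count=100

# push it through the whole chain
rclone copy /tmp/rclone-test blompc-chunker:rclone-test -P

# verify the round trip; --download forces a byte-for-byte comparison,
# since hashes are generally not available through crypt/compress/chunker
rclone check /tmp/rclone-test blompc-chunker:rclone-test --download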

That's not true, lol. The 30TB plan is ~$50/month now in Turkey and it's $150 in the US.

They will just do what T-Mobile does/is doing....keep raising prices on plans to make more $$ :joy:

Not a resident but got some servers there ^^

On topic:
Regarding the price point of Uloz, one should maybe also add that they basically have zero support, as I found out over the past three days.
It isn't really a catastrophe because the service still works, but maybe it is also a big factor in the price point.
And just to warn people:
Don't try to pack more than 9999 files into a folder.
It is also written in their documentation somewhere, but easy to overlook.
It won't work, and on top of that you'll end up with folders that you can empty but can no longer delete. Not through the API and not through the web interface either.
At least renaming and moving still work, so as a workaround you can "park" those defunct folders somewhere, but it's a bit funky nonetheless. ^^
Maybe it would be a good idea to account for this in the rclone backend for Uloz and stop the upload client-side.
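If you want to catch this before uploading, something like the following can flag source folders that are close to the limit (source: is a placeholder for whatever remote or local path you are copying from):

# count the files directly inside each top-level directory so anything
# approaching the 9999-files-per-folder limit stands out
rclone lsf --dirs-only source: | while read -r dir; do
    count=$(rclone lsf --files-only "source:$dir" | wc -l)
    echo "$count  $dir"
done | sort -rn | head -n 20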

A few days ago I tried Uloz after someone mentioned it on Reddit. A price too good to be true and even rclone support. So guess what.....indeed too good to be true.

I started with a free account, did some tests with and without encryption and got around 5 MB per thread using 6 threads.

So I decided to get a 50TB account for 1 month, as I didn't want to risk a 12-month account. At the very first transfer I already noticed the speed was much lower, but I started a large (around 2.5 TB) transfer and let it run overnight. In the morning I noticed only around 1GB had been transferred and the copy was still going... at 25KB/second.

After that I tried several rclone switches (threads, checkers, chunk size etc.) but nothing seemed to help. I also stopped the copy and waited a few hours, but speeds don't get much higher than 100KB/sec maximum.

In theory it could be the rclone implementation but to me it seems they are severely rate limiting.
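For anyone wanting to run the same kind of test: a rough way to tell whether the limit is per connection or per account is to copy an identical small test set with different transfer counts and compare the reported throughput (the local path and remote folder names are placeholders):

# same test set, different parallelism; if total throughput scales with
# --transfers the limit is per connection, if it stays flat it is per account
rclone copy /tmp/uloz-speedtest ulozto:speedtest-1 --transfers 1 --stats 10s -P
rclone copy /tmp/uloz-speedtest ulozto:speedtest-8 --transfers 8 --stats 10s -P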

2 Likes

As another data point for uloz.to ... I also signed up for the trial ... and have been getting 50-100 MiB/s using rclone copy --fast-list --transfers 50 ...

E.g.:
Transferred: 1.844 GiB / 4.468 GiB, 41%, 69.315 MiB/s, ETA 38s

This is on a remote Oracle VM which has something like 4Gbps.

When you say "5 MB" – is that MB/s?

Anyone know why with --transfers 50 it's only using 16? Is there a limit somewhere?

1 Like

I'm in central Europe. I've also tried using various VPN locations and various computers and connections but it doesn't seem to make much of a difference.

I'll do some more testing later today.

Edit: as a test I started a copy with --order-by size; as usual, larger files transfer faster. But what counts as a small file seems to be different from any other cloud provider I have tried (as in, "smaller" is anything below, let's say, 500 MB). My other theory is that the number of files in a folder influences the transfer speed.
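If big files really do get better per-file throughput, one workaround is to split the copy into two passes, big files first with fewer transfers, then the small stuff with more parallelism (the remote names and the 500M cut-off below are only an illustration of the idea):

# pass 1: everything of 500 MB and above, biggest first
rclone copy source: ulozto-backup: --min-size 500M --order-by size,descending --transfers 4 -P

# pass 2: sweep up the small files with more parallel transfers
rclone copy source: ulozto-backup: --max-size 500M --transfers 16 -P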

Currently transferring 40+GB files using 6 threads, every thread fluctuates between 20 and 25 MegaBytes/sec so that's a great speed.

I just signed up with ULOZ & am seeing amazing transfer speeds from Google Drive. I'm getting around 1Gbps. Obviously I am soon going to hit the 10TB/day download limit but it does mean that I should be able to get my whole 115TB backed up to ULOZ within a couple of weeks.

I am seeing the amazing speeds when I run rclone on my seedbox, which is located in Amsterdam. When I try copying from my home I max out at 40Mbps even though I have a 500Mbps/500Mbps line.

1 Like

The overview of storage systems page reports that Uloz is case sensitive. This would lead me to believe that it's safe to use base64 filename encoding in conjunction with crypt.

However, it appears that Uloz is not fully case sensitive. Filenames are case sensitive (e.g. "test.txt" and "Test.txt" are separate files), but directories appear to be case insensitive (e.g. "Test Folder" and "test folder" are the same folder).

Does anyone know if it's safe to use base64 filename encoding in this case? I'm thinking it would be safe as long as you don't have many folders that could possibly conflict.
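For context, this is the kind of crypt remote I mean, a minimal sketch assuming an underlying Uloz remote called uloz: and a folder named encrypted (both names are placeholders):

[uloz-crypt]
type = crypt
remote = uloz:encrypted
filename_encoding = base64
password = XXX
password2 = XXX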

1 Like

Speeds seem to be a bit hit and miss, but by now I'm pretty sure it's a combination of file size and the number of files in a folder. Initially I had terrible speeds, but the copy I started this morning with --order-by size,descending is still going strong at around 20-25 MegaBytes per thread using 6 threads. I'm quite sure that once the file sizes go below around 500 MB, speed will drop.

1 Like

Generally speaking, if a remote is case preserving even if it is case insensitive, it is safe to use base64. From another thread:

1 Like

Thanks! That was my suspicion, but I hadn't bothered to think through the encoding as ncw did there.

1 Like

I was checking uloz FAQ for what kind of mirroring they use (I didn't find an answer) and saw the following: FAQ | Ulož.to Disk - The Personal Backup Service

You have exceeded your monthly limit for Fast Download of your own files. Each plan has its own monthly limit for transferring your own files. The limit is 50 GB for the FREE plan, 10 TB for the PREMIUM plan etc. Once you exceed this limit, you can only download your own files through Fast Download using data. The limit for downloading your own files is reset once a month, either on the day of your plan purchase or on the registration day for users with the FREE plan. It is always possible to use Slow free downloading from the File detail.

Seems they have two tiers of download speeds. I don't quite get how it works tbh.

1 Like