Any free-tier cloud services with an unrestrictive API?

I am wondering if anyone knows of a cloud service with an unrestrictive API (one that can cope with many transfers and large bursts of API calls) that also has some sort of free tier.

The idea is good performance handling many very small files, in short but intensive bursts. Something like Wasabi would do that, for example, but it looks like they run a minimum billing model of 1TB.

It would not need to be very large at all. Even just a handful of GB would go a long way.

Anyone have any suggestions on what might be worth investigating? I am perfectly capable of reading up on the tech specs myself, but it's a jungle of cloud services out there these days, so I need a place to start looking :slight_smile:

I use 7zip to compress/encrypt certain folders and then upload the .7z file.

@rem Compress and encrypt the Firefox profile folder into a .7z archive
@set ZipCmd=C:\data\rclone\scripts\7za.exe a "\\vserver03\en07.rcloner\firefox\zip\firefox.20191026.112905.7z" "b:\firefox_20191026.112905\data\C\firefox\6f95nzz4.default" -p%zp%

@rem Sync the archive to Wasabi; --backup-dir moves any replaced files into a dated archive folder
@set RcloneSyncZipCmd=C:\data\rclone\scripts\rclone.exe sync "\\vserver03\en07.rcloner\firefox\zip\firefox.20191026.112905.7z" wasabiwest01:en07\firefox\zip\backup\ --stats=0 --backup-dir=wasabiwest01:en07\firefox\zip\archive\20191026.112905\ --log-level DEBUG --log-file=C:\data\rclone\logs\firefox\20191026.112905\firefox_20191026.112905_rclone.log

Thanks for the workaround suggestion. I'm aware of this, but in this case I need to access the small files individually, in groups - not bundled into one archive.

It looks like Backblaze (their hot/hybrid storage type) has a 10GB free tier that I can run some experiments on, so I am going to look into that.

But if anyone else has other suggestions I would love that too :slight_smile:

Let me know what you think of B2?

Sure! Will do. I suspect it's not too dissimilar to Wasabi.

The only question is whether they have stricter limits on their free tier, or whether they leave it open for new potential customers to get excited by the speed potential. Sometimes cloud providers give very specific quota limits, but Backblaze does not specify much. I will quickly find out when I run some tests on it :slight_smile:
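For reference, here is a minimal sketch of the kind of burst test I have in mind - assuming a configured B2 remote named b2: and a bucket named testbucket (both just placeholder names):

@rem Hypothetical burst test: push a folder of many small files with high parallelism
rclone copy C:\test\smallfiles b2:testbucket/burst --transfers 32 --checkers 32 --stats 5s

If the transaction limits bite, it should show up quickly in the stats output.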

I was going to suggest B2 also. Their free tier has limits on transactions though.

Alibaba Cloud has a free tier, I seem to remember, but I don't recall if it has limits.

GCS/S3/Azure all have free tiers with a limited number of transactions.

https://www.backblaze.com/b2/b2-transactions-price.html

Yea, unfortunately Backblaze has some fairly significant transaction limits.
So it's probably blazing fast .... for one whole test-run =P

It looks like it is quite common to have such restrictions.
Wasabi is one of the few that offer a free API and no egress fees in their deal, but they have a minimum plan. Oh well...

I guess I will have to find a friend I can piggyback off for a few GB in return for a favor or something. I have a tendency to accumulate those =P

What is it you are trying to accomplish?

Just experimenting... nothing too serious.
But the basic gist of it is I am toying around with the idea of a "composite drive".
A Google Drive system can do bulk storage for days - but it is terribad on small files due to concurrent access limits (about 2-3 files a second at best).

Potential solution? Have a small but fast location that can priority-override and fast-mirror all your small files (see the sketch below).
Just as an example, files under 1M in my current archive only take up 6.3GB but number about 38,000 - a good 40% of my total files. Being able to fetch these at 32 transfers compared to 2-3 would hugely improve responsiveness and "practical speed", despite the low space required to do the job. On larger files it doesn't matter, as bandwidth will be the limitation anyway.
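As a rough illustration, the mirroring half of that idea could be a one-liner - a minimal sketch, assuming a Google Drive remote named gdrive: and a fast small-file remote named fastremote: (both placeholder names):

@rem Hypothetical mirror: sync only files under 1M to the fast remote, at high parallelism
rclone sync gdrive:archive fastremote:smallfiles --max-size 1M --transfers 32 --fast-list

Reads would then try the fast remote first and fall back to Google Drive for everything else; rclone's union backend might be able to stitch the two into one view, though I have not tested that part.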

Of course, a local cache can do that for you too, even better, but that carries some limitations - like only having that speed as long as you fetch through your local server. Tough luck if you are on the go and/or your system can't easily make use of a VPN to connect to home (say a phone or tablet).
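For comparison, the local-cache route would look something like this minimal sketch (remote name and drive letter are placeholders; on Windows the mount also needs WinFsp installed):

@rem Mount with a full VFS cache so repeated small-file reads are served from local disk
rclone mount gdrive: X: --vfs-cache-mode full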

I think the basic idea has merit, so I thought if I could find something to run some tests on, I would see just how much practical benefit there is to it :slight_smile: I am always looking for stuff to further optimize :wink:

Do you pay for Google storage?

Not currently, no. I've been fortunate enough to gain access to 2 reliable drives through work and favors done for friends and acquaintances, and I picked up a couple of sketchy ones on eBay too, just to test and have a little redundancy (though I obviously don't trust those much).

So yea, we are talking about a poor-man's setup here, no doubt. I mean ideally, you'd have everything on Wasabi or equivalent and just not worry about it :wink:

I get that, respect that - not wanting to pay for storage and trying to figure out creative ways to achieve it on the cheap/free!

When you're rich, you buy.
When you're poor - you improvise :wink:
It's fun in its own way too - to see how much you can do with few resources. You end up learning a lot.

Why do you want to have those files in the cloud?
Backup or what?

That, and just having an alternative to loads of (expensive) hard drives for mass storage. Especially if you want failure-safety, there's really no way around having a large RAID setup. Not only expensive, but also complex - and even that is not truly failure-safe.

That was the initial motivation that led me on the rclone journey of discovery, but now I have found a lot of additional advantages. Having unlimited (at least for my scope of needs) storage was a pretty big mind-blow :smiley: And having Google's redundancy systems + it being maintenance-free to boot? It sure beats a big homemade server with a mess of old drives inside.

Ah, you probably meant the small files in particular - well yea, local cache does solve MOST of that problem (with some caveats as mentioned). It's "good enough" for most purposes. I am mainly exploring the limits of what is possible/practical with non-standard setups.

Yeah,
I have a home server running Windows Server 2019, the Hyper-V edition (which is FREE as in beer), using the ReFS filesystem - which is like Linux ZFS: software RAID, integrity checking, on-the-fly checksumming and protection against bit-rot.

And a bunch of different VPN solutions, including for tablets and cellphones.

That's one nice setup :smiley:
You don't take -any- shortcuts with your data security, do you? I can respect that. Anything that is worth doing is worth doing thoroughly. Although, the ability to sync to the cloud really takes most of that pressure away from my perspective. A drive failure is no longer a disaster, even without some complex redundancy system behind it.

I hope more advanced filesystems like ZFS go mainstream soon. I'll probably be revamping my ancient gaming rig into a central home-server sort of thing when I eventually upgrade to a Zen 2 system. Mostly just to be a central access point and cache for everything on the cloud. It never hurts to have a place to run some always-on services.

Well, as you know, I tend to run paranoid, and sync to cloud is not really a backup solution.
I actually have two home servers - really just two old used computers, each with 4 old used hard drives.

  1. Data server and host of my virtual machines, including the VPN server.
  2. Backup of that data server, running Veeam backup software.
    I have that 300+ line Python script that uses Veeam, 7zip, FastCopy and rclone.