I used to be CTO of an Uber Eats-style service that ran on hosting costs of about £100pm. Thousands of orders every few days, real-time driver tracking, self-hosted invoice generation, the works. Hetzner servers, Docker Swarm and Cloudflare for the tech.
It is truly amazing to me how much value you get by renting hardware instead of VPSes, as long as you're willing to roll your own infrastructure instead of buying into a cloud provider's.
If you know what you're doing, it's easy enough to roll out a multi-region distributed system with HA and backups on a pretty modest (under £100pcm) budget that can handle competitive QPS.
However, most people do not. Some will learn, but most will fall for the cloud providers' marketing departments and become infra renters for life. Teach a man to fish, and so on.
Another reason is that if you look for an investor, one of the first things they ask is how and where you host it. If it's anything remotely DIY, you'll be turned down.
It was actually painful to see startups spending thousands a month on hosting that, as you say, could easily be done for under £100pcm. They would have needed a contractor to set it up and provide support, but it would have worked out much cheaper, and they could have bumped their workers' salaries.
I'm sure you can guess - pure risk aversion. Your business idea is risky enough, and they would need engineers to assess your (possibly ever-changing) DIY stack.
You see the same thing in the corporate world for in-house stuff. Your manager (and your manager's manager) don't want to hear about in-house or self-hosted things that AWS can provide.
This is totally understandable. It's a repeat of the whole "nobody ever got fired for buying IBM" mantra of computing's early decades.
> some will learn, but most will fall for the cloud marketing depts and become infra renters for life
Do you have any learning recommendations for someone looking to start down this path? I've only ever worked in an infra-renter context, and I've begun exploring the 'rent from Hetzner, manage your own infra' for personal projects, but I would love to learn from the paths of experts where possible.
I'm no expert, but hopefully I can still point you toward the happy path: start very small and increase distributed complexity at your own pace until you can fully appreciate the entire end-to-end system and all the processes involved. The book Designing Data-Intensive Applications (DDIA) is a well-known 101, if a little primitive, and has references you can dig into as well.
Ideally you also get some exposure to this at $job, since building DIY infra horrors without seeing the real-world context, tradeoffs, etc. in which they typically operate will be misleading.
It's a pretty common abbreviation of "high availability" in this context, with a heavy implication of active-passive redundancy (although the GP looks like an exception here). It's used more often than the full wording, and it's part of the name of some important tools.
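For concreteness, the active-passive flavor of HA is often done with a floating IP that fails over between two boxes, managed by something like keepalived. A minimal sketch of the primary's config; the interface name, router ID and IP are placeholders, not from any real setup:

```
vrrp_instance VI_1 {
    state MASTER            # the standby box says BACKUP here
    interface eth0
    virtual_router_id 51
    priority 100            # standby gets a lower priority, e.g. 90
    advert_int 1
    virtual_ipaddress {
        203.0.113.10        # the floating IP your DNS points at
    }
}
```

If the master stops sending VRRP advertisements, the backup claims the IP within a few seconds, which is the whole "passive" part.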
So, yeah, it's good that you asked, because it's not as widely known as the people that use it think it is.
You'd be amazed how much you can get out of one of the Hetzner $3/mo ARM servers with the right code.
I use a $6/mo box for my primary business hosting, but I have a $3/mo one that I'm using to build v2, really just to prove what's possible. If you set up your DB and caching right you can do so much with so little...
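Not the parent's actual stack, but a sketch of the kind of setup that stretches a tiny box a long way: SQLite in WAL mode plus a small in-process cache, so hot reads never touch disk. Everything here (table, names) is illustrative.

```python
import sqlite3
from functools import lru_cache

# SQLite in WAL mode handles surprisingly high read concurrency
# on a single small server; readers don't block the writer.
conn = sqlite3.connect("app.db", check_same_thread=False)
conn.execute("PRAGMA journal_mode=WAL")
conn.execute("PRAGMA synchronous=NORMAL")  # fsync less aggressively
conn.execute(
    "CREATE TABLE IF NOT EXISTS products (id INTEGER PRIMARY KEY, name TEXT)"
)
conn.execute("INSERT INTO products (name) VALUES ('widget')")
conn.commit()

@lru_cache(maxsize=4096)
def product_name(product_id: int) -> str:
    # After the first hit, hot reads are served from process memory,
    # so the cheap box mostly just runs your application code.
    row = conn.execute(
        "SELECT name FROM products WHERE id = ?", (product_id,)
    ).fetchone()
    return row[0] if row else ""

print(product_name(1))  # first call hits SQLite; repeats hit the cache
```

The point isn't these exact pragmas; it's that one process, one file and one cache remove most of the reasons people reach for a bigger (or managed) database.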
Similarly, but at the opposite end of the spectrum, I also wonder how far you can go with vertical scaling nowadays. For 200 EUR/month on Hetzner you get a dedicated 80-core ARM CPU, 128GB ECC RAM, 2TB SSD... using a performant multi-threaded language, what _can't_ you run on that? It's ridiculous value.
> what _can't_ you run on that? It's ridiculous value.
Yeah - I remember a StackOverflow talk[1] where they basically said that they just vertically scale their database.
The fact that they were able to make it work tells me that most businesses should probably just go that route and avoid the headache of distributed systems[2].
2. Obviously a business should probably invest in redundancy when it comes to data (as did StackOverflow), but a pure RAID 1 setup is the easiest of distributed systems to understand.
Those ARM boxes are incredible for the price, I'm using them to document a hobby K8S cluster because the overall cost is low enough that it won't price a hobbyist out.
Hetzner is great and I love it and use it for all my dev needs. But their availability is limited to a few zones, and for production I do suffer from the latency. I haven't found any other provider that costs anywhere near the same (every comparable server is at least 2x the cost). I use their AX41-NVME. Any suggestions for an alternative?
Vultr has some very cheap offerings and may be worth a look. I can't speak to their reliability or service, but they have DCs everywhere, even in South America and Africa, where there are few cloud options outside of AWS. They're more expensive than Hetzner, maybe even a little over your 2x range depending on your load, but if latency is your issue they've probably got you covered.
I've been pretty happy with my OVH server, but agreed on support... I had better luck with ChatGPT and a lot of additional reading when getting my CIDR block of addresses configured under Proxmox (which made me far less worried about ChatGPT taking my job any time soon).
In the end it was an interesting learning experience that I hope not to have to repeat. I only went for a single server because I had several smaller VPSes on DigitalOcean and wanted to add a mail server to the mix, but couldn't reliably send via DO or Linode, so it was easier to consolidate and run on a single, larger host for hobby projects.
I did that once and the data center burned down. Sure, I could have spread the service over several centers, built distributed backups, etc.
In the end, self-hosting and self-managing is a money/time trade-off; especially for a side gig, I'd use SaaS and managed solutions. The one thing you have to make sure of is not getting locked in with a particular provider, so knowing how to do everything yourself is a very valuable skill.
To be fair, a datacenter burning down is pretty much on the bad-luck side of risk management. It's in the same category as your distribution center being hit by an earthquake... I'd guess it wouldn't happen again, but who knows...
Didn't that happen from someone leaving a sink on or something, causing flooding and then the UPS batteries to short and explode?
I was there when someone at AWS accidentally unplugged an entire region.
Shit happens, and it doesn't matter where it's hosted. People act like the cloud is infallible or something. You're literally sending lightning bolts' worth of electricity through bricks of metal. Anything can happen.
I'm working on a middle-tier solution for this gap -- I call it Nimbus. Basically, the idea is to provide managed services at low-cost cloud prices.
There's no reason someone should have to run a service like Chatwoot themselves; the software is so good that it's mostly set-and-forget for most small use cases.
That's where I come in. Unfortunately I don't have Chatwoot yet, but I have (and use) Umami for page-view tracking extensively on my own projects now, with Nimbus[1]. The dogfood tastes decent so far.
Yeah, but all improvements will likely be upstreamed/made open source (their license isn't AGPL or anything; it's MIT, so upstreaming just makes sense to me).
The first thing I want to do is add a backup mechanism that isn't just taking a snapshot -- there are a bunch of tools similar to Umami, and I don't think any of them have a really good cross-project way of taking backups.
Feels like there should be a page-view/analytics backup standard, so you can easily move from a tool like Plausible or Umami and try out a new one, like Fugu.
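No such standard exists as far as I know, but a portable export could be as simple as newline-delimited JSON over the handful of fields every pageview tool conceptually shares. A sketch; the field names are invented for illustration, not Umami's or Plausible's real schemas:

```python
import io
import json
from datetime import datetime, timezone

# Hypothetical "portable pageview" record: the minimal fields that
# most analytics tools share. Invented names, not any real schema.
PORTABLE_FIELDS = ("timestamp", "url", "referrer", "visitor_id")

def export_pageviews(rows, out):
    """Write rows (dicts from whatever tool's DB) as NDJSON,
    dropping tool-specific columns along the way."""
    for row in rows:
        record = {k: row.get(k) for k in PORTABLE_FIELDS}
        out.write(json.dumps(record, default=str) + "\n")

def import_pageviews(src):
    """Yield portable records back, ready to insert into a new tool."""
    for line in src:
        if line.strip():
            yield json.loads(line)

# Round-trip demo with one fake pageview.
rows = [{
    "timestamp": datetime(2024, 1, 1, tzinfo=timezone.utc).isoformat(),
    "url": "/pricing",
    "referrer": "https://news.ycombinator.com",
    "visitor_id": "abc123",
    "tool_specific_noise": "dropped on export",
}]
buf = io.StringIO()
export_pageviews(rows, buf)
buf.seek(0)
restored = list(import_pageviews(buf))
print(restored[0]["url"])  # -> /pricing
```

The hard part wouldn't be the format; it'd be getting projects to agree on what a "visitor" is across tools with different anonymization schemes.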
But outside of advanced functionality I think my platform is just a lot closer on cost. The instance costs don't go up per traffic served (especially since 99% of people won't need that) -- it's more like parts + maintenance (and since Umami is good software it doesn't need a TON of maintenance either, just regular patches and some monitoring/extremely light resilience engineering).