
I used to be CTO of an Uber Eats style service that ran on hosting costs of about £100pm. Thousands of orders every few days, real time driver tracking, self hosted invoice generation, the works. Hetzner servers, Docker Swarm and Cloudflare for the tech.


> Hetzner servers

It is truly amazing to me how much value you get by renting hardware instead of VPSes, as long as you're willing to roll your own infrastructure instead of buying into a cloud provider's.


If you know what you're doing it's easy enough to roll out a multi-region distributed system with HA and backups on a pretty modest (<100pcm) budget that can handle competitive QPS.

However, most people do not - some will learn, but most will fall for the cloud marketing depts and become infra renters for life. Teach a man to fish, and so on.


Another reason is that if you look for an investor, one of the first things they ask is how / where you host it. If it is anything remotely DIY you'll be turned down.

It was actually painful to see startups spending thousands a month on hosting that, as you say, could easily be achieved for under £100 pcm. They would have to get a contractor to set it up and for support, but it would have worked out much cheaper, and they could have bumped the salaries of their workers.


Labor vs. commodity. Need an expert key-man to roll your own AWS. A few thousand bucks a month for the automatic one. Penny wise pound foolish etc.


> If it is anything remotely DIY you'll be turned down.

Why is that?


I'm sure you can guess - pure risk aversion. Your business idea is risky enough, and they would need engineers to assess your (possibly ever-changing) DIY stack.

You see the same thing in the corporate world for in-house stuff. Your manager (and your manager's manager) don't want to hear about in-house or self-hosted things that AWS can provide.

This is totally understandable. It's a repeat of the whole "nobody ever got fired for buying IBM" mantra of computing's early decades.

It also totally sucks.


> some will learn, but most will fall for the cloud marketing depts and become infra renters for life

Do you have any learning recommendations for someone looking to start down this path? I've only ever worked in an infra-renter context, and I've begun exploring the 'rent from Hetzner, manage your own infra' for personal projects, but I would love to learn from the paths of experts where possible.


I'm no expert but hopefully I can still point you toward the happy path: start very small and increase distributed complexity at your own pace until you can fully appreciate the entire end-to-end system and all the processes involved. The book DDIA (Designing Data-Intensive Applications) is a well-known 101, if a little primitive, and has references you can dig into as well.

Ideally, you also have some exposure to this at $job, as simply building DIY infra horrors without seeing the real-world context, tradeoffs, etc. in which they typically operate will be misleading.


For your basic needs you'd need:

1. DNS monitoring with failover (DNS Made Easy has a decent solution)

2. an HAProxy setup with health checks for switching to a working upstream service

3. a distributed filesystem

4. master-slave replication with monitoring (something like MariaDB + orchestrator)

and nightly backups for all of this. The database and FS are latency-sensitive, so they shouldn't be too far apart.
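As a sketch of what step 2 might look like, here's a minimal HAProxy backend with health checks and a passive standby. The hostnames, port, and `/healthz` endpoint are placeholders, not anything from the original comment:

```
defaults
    mode http
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend www
    bind *:80
    default_backend app

backend app
    option httpchk GET /healthz
    # Traffic goes to app1 while it passes checks; app2 only
    # takes over if app1 fails, because it's marked as backup.
    server app1 10.0.0.10:8080 check
    server app2 10.0.0.11:8080 check backup
```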


I would also like to hear some recommendations as I've been considering making a similar switch.


What is HA ?


Guessing "high availability"


It's a pretty common abbreviation of "high availability" in this context, with a heavy implication of active-passive redundancy (although the GP looks like an exception here). It's used more often than the full phrase, and it's part of the name of some important tools.

So, yeah, it's good that you asked, because it's not as widely known as the people who use it think it is.


It stands for High Availability, so that one of the regions can completely fail, for whatever reason, and the site will still stay up.


You'd be amazed how much you can get out of one of the Hetzner $3/mo ARM servers with the right code.

I use a $6/mo box for my primary business hosting, but I have a $3/mo one that I'm using to build v2, really just to prove what's possible. If you set up your DB and caching right you can do so much with so little...
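As a toy illustration of the "set up your DB and caching right" point, even a tiny in-process TTL cache can absorb most repeated reads before they ever reach the database. This is a hypothetical sketch (all names invented; a real setup would more likely use Redis or similar), not the commenter's actual stack:

```python
import time

class TTLCache:
    """Minimal in-process cache: answers repeated reads for `ttl` seconds."""
    def __init__(self, ttl=30.0):
        self.ttl = ttl
        self._store = {}  # key -> (expiry_timestamp, value)

    def get_or_load(self, key, loader):
        now = time.monotonic()
        hit = self._store.get(key)
        if hit and hit[0] > now:
            return hit[1]          # cache hit: no DB round trip
        value = loader()           # cache miss: fall through to the DB
        self._store[key] = (now + self.ttl, value)
        return value

# Usage: `load_menu` stands in for a real database query.
calls = 0
def load_menu():
    global calls
    calls += 1
    return ["pizza", "curry"]

cache = TTLCache(ttl=60)
first = cache.get_or_load("menu:42", load_menu)
second = cache.get_or_load("menu:42", load_menu)  # served from cache
```

On a small box the win is exactly this: the second read costs a dict lookup instead of a query, so one cheap server goes a lot further.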


Similarly, but at the opposite end of the spectrum, I also wonder how far you can go with vertical scaling nowadays. For 200 EUR/month on Hetzner you get a dedicated 80-core ARM CPU, 128GB ECC RAM, 2TB SSD... using a good performant multi-threaded language, what _can't_ you run on that? It's ridiculous value.


> what _can't_ you run on that? It's ridiculous value.

Yeah - I remember a StackOverflow talk[1] where they basically said that they just vertically scale their database.

The fact that they were able to make it work tells me that most businesses should probably just go that route and avoid the headache of distributed systems[2].

---

1. https://www.infoq.com/presentations/stack-exchange/

2. Obviously a business should probably invest in redundancy when it comes to data (as did StackOverflow), but a pure RAID 1 setup is the easiest of distributed systems to understand.


Those ARM boxes are incredible for the price, I'm using them to document a hobby K8S cluster because the overall cost is low enough that it won't price a hobbyist out.

You get a lot of bang for your buck out of them.


Which k8s are you using and how are you managing it?


k3s, and nothing fancy except using a splash of helm to set some things up. The cluster has an auto-updater for unattended upgrades.
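The comment doesn't say which auto-updater is in play; a common choice for unattended k3s upgrades is Rancher's system-upgrade-controller, driven by a Plan resource like this sketch (assumes the controller is already installed in the cluster):

```yaml
# Upgrade Plan for control-plane nodes, tracking the stable k3s channel.
apiVersion: upgrade.cattle.io/v1
kind: Plan
metadata:
  name: server-plan
  namespace: system-upgrade
spec:
  concurrency: 1          # upgrade one node at a time
  nodeSelector:
    matchExpressions:
      - {key: node-role.kubernetes.io/control-plane, operator: Exists}
  serviceAccountName: system-upgrade
  upgrade:
    image: rancher/k3s-upgrade
  channel: https://update.k3s.io/v1-release/channels/stable
```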


Talos Linux is great.


Hetzner is great and I love it and use it for all my dev needs. But their availability is limited to a few zones, and for production I do suffer from the latency. I haven't found any other provider that costs anywhere near the same (every comparable server is at least 2x the cost). I use their AX41-NVMe. Any suggestions for an alternative?


Vultr has some very cheap offerings and may be worth a look. I can't speak to their reliability or service, but they have DCs everywhere, even in South America and Africa, where there are few cloud options outside of AWS. They're more expensive than Hetzner, maybe even a little over your 2x range depending on your load, but if latency is your issue they've probably got you covered.


Maybe OVH or Scaleway?


Both of them are French companies, and in my experience with them, support is hit or miss if you need it.


I've been pretty happy with my OVH server, but agreed on support... I had better luck with ChatGPT and a lot of additional reading when getting my CIDR block of addresses configured under Proxmox (which made me far less worried about ChatGPT taking my job any time soon).

In the end, it was an interesting learning experience that I hope not to have to repeat. I only went for a single server because I had several smaller VPSes on DigitalOcean and wanted to add a mail server to the mix, but couldn't reliably send via DO or Linode, so it was easier to consolidate and run on a single, larger host for hobby projects.


I did that once and the data center burned down. Sure, I could have spread the service over several centers and built distributed backups, etc.

In the end, self-hosting and self-managing is a money/time trade-off; especially for a side gig, I'd use SaaS and managed solutions. The one thing you have to make sure of is not getting locked in with a particular provider, so knowing how to do everything yourself is a very valuable skill.


To be fair, a datacenter burning down is pretty much on the bad-luck side of risk management. It's in the same category as your distribution center being hit by an earthquake... I'd guess it wouldn't happen again, but who knows...


A Google Cloud region in France was also unavailable for months (or still is?) due to flooding/electricity. But no one is talking about it.


Didn't that happen from someone leaving a sink on or something, causing flooding and then the UPS batteries to short and explode?

I was there when someone at AWS accidentally unplugged an entire region.

Shit happens, it doesn't matter where it's hosted. People act like the cloud is infallible or something. You're literally sending lightning bolts worth of electricity through bricks of metal. Anything can happen.


> As long as you're willing to roll your own infrastructure

Uh huh. This is why DevOps/SRE roles pay more than development roles.

“Step 2: draw the rest of the owl”


They do? That's interesting news!


It pays more because the on-call/pager requirements can be absolutely soul-crushing.


If you have a terrible manager, team, and product/platform. The reality is that most outages are caused by 9-5 changes made by humans.


Also more open positions, at least in some market.


I'm working on a middle-tier solution for this gap -- I call it Nimbus, but basically the idea is to provide managed services at low cost cloud prices.

There's no reason someone should have to run a service like Chatwoot themselves, the software is so good that it's mostly set & forget for most small use cases.

That's where I come in. Unfortunately I don't have Chatwoot yet, but I do have (and use) Umami for page-view tracking extensively on my own projects now, with Nimbus[1]. The dogfood tastes decent so far.

[1]: https://nimbusws.com/managed/umami/


do you anticipate that hosted umami will have features unavailable in the open source version?


Yeah, but all improvements will likely be upstreamed/made open source (their license isn't AGPL or anything that would force it; it's MIT, upstreaming just makes sense to me).

The first thing I want to do is add a backup mechanism that isn't just taking a snapshot. There are a bunch of tools similar to Umami, and I don't think any of them have a really good cross-project way of taking backups.

Feels like there should be a page view/analytics backup standard, so you can easily move from a tool like Plausible or Umami and try out a new one, like Fugu.

But outside of advanced functionality I think my platform is just a lot closer on cost. The instance costs don't go up per traffic served (especially since 99% of people won't need that) -- it's more like parts + maintenance (and since Umami is good software it doesn't need a TON of maintenance either, just regular patches and some monitoring/extremely light resilience engineering).
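To make the "backup standard" idea concrete, here's a hypothetical sketch of a portable export: one JSON object per line (NDJSON) with a minimal, tool-agnostic record shape. Every field name here is invented for illustration; no such standard exists:

```python
import json

# Hypothetical portable page-view records -- just the kind of
# tool-agnostic shape a backup standard might settle on.
EVENTS = [
    {"ts": "2024-05-01T12:00:00Z", "url": "/pricing", "referrer": "news.ycombinator.com"},
    {"ts": "2024-05-01T12:00:05Z", "url": "/docs", "referrer": ""},
]

def dump_ndjson(events):
    """One JSON object per line: easy to stream, diff, and re-import elsewhere."""
    return "".join(json.dumps(e, sort_keys=True) + "\n" for e in events)

def load_ndjson(text):
    """Inverse of dump_ndjson: parse each non-empty line back into a dict."""
    return [json.loads(line) for line in text.splitlines() if line.strip()]

backup = dump_ndjson(EVENTS)
restored = load_ndjson(backup)
```

The appeal of a line-oriented format like this is that any tool on either side of a migration only needs to agree on the field names, not on each other's database schemas.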


What's the best way to build and run a containerized PaaS on Hetzner?


CapRover / Dokku.



