
The problem is that many microkernel haters keep repeating what used to be true like 30 years ago, while running tons of containers for basic tasks.


There are hordes of developers completely dismissing the idea of microkernels with no serious argument other than "lmao didn't Linus destroy Tanenbaum that one time?"

Designing a modern and secure kernel in 2025 as a monolith is a laughable proposition. Microkernels are the way to go.


Well, here's some for you:

* In modern times, the practical benefit of a microkernel is minimal. Hardware is cheap, disposable, and virtual machines exist. The use cases for "tolerate a chunk of the kernel misbehaving" are minimal.

* To properly tolerate a partial crash takes a lot of work. If your desktop crashes, you might as well reboot.

* Modern hardware is complex and there's no guarantee that rebooting a driver will be successful.

* A monolithic kernel can always clone microkernel functionality wherever it wants, without paying the price elsewhere.

* Processes can't trust each other.

The last one is a point I didn't realize was an issue for a long time, but it seems a tricky one. In a monolithic kernel, you can have implicit trust that things will happen. If part A tells part B "drop your caches, I need more memory", it can expect that to actually happen.

In a microkernel, there can't be such trust. A different process can just ignore your message, or arbitrarily get stuck on something and not act in time. You have less ability to make a coherent whole because there's no coherent whole.


You describe microkernels as if there were only one way to implement them.

> A different process can just ignore your message

> arbitrarily get stuck on something and not act in time

This doesn't make sense. An implementation of a microkernel might suffer from these issues, but that's not a problem of the design itself. There are many ways of designing message queues.

Also:

> In a microkernel, there can't be such trust [between processes]

Capabilities have solved this problem in a much better and more scalable way than the implicit trust model you have in a monolithic kernel. Using Linux as an example of a monolith is wrong, as it incorporates many ideas (and shortcomings) of a microkernel. For example: how do you deal with implicit trust when you can load third-party modules at run-time? Capabilities offer much greater security guarantees than "oops, now some third-party code is running in kernel mode and can do anything it wants with kernel data". Stuff like the eBPF sandbox is a poor man's alternative to the security guarantees of microkernels.

Also, good luck making sure the implicitly trusted perimeter is secure in the first place when the surface area of the kernel is so wide it's practically impossible to verify.

If you allow me an argument from authority, it is no surprise Google's Fuchsia went for a capability-based microkernel design.


I’m not sure I would consider Fuchsia an example that supports your point.

Its design largely failed at being a modern general-purpose operating system, and it has become primarily an OS used for embedded devices, which is an entirely different set of requirements.

It’s also not that widely used. There’s only a handful of devices that ship Fuchsia today. There’s a reason for that.


Don't mistake Google politics with technical achievements.


Did it fail because of its microkernel design?

It’s quite disingenuous to use “success” as a metric when discussing the advantages of microkernel vs monolithic designs, as the only kernels you can safely say have succeeded in the past 30+ years are three: Linux, NT, and Mach. One of those is a microkernel (of arguably dated design), and another is considered a “hybrid microkernel.”

Did L4 fail? What about QNX?

This topic was considered a flame war in the 90s and I guess it still isn’t possible to have a level-headed argument over the pros and cons of each design to this day.


When I read this thread, I think it’s pretty level headed except your last reply lol.


> Designing a modern and secure kernel in 2025 as a monolith is a laughable proposition.

I've seen this exact opinion before, only the year in it was "1992". And yet Linux was written anyway.


Point taken, but at that time there was no other free (as in beer and freedom) "UNIX" kernel?

Someone may come along and correct me about BSD. Apologies, I'm not super familiar with its history.


There was GNU Hurd in development at the time (which is actually a microkernel), with the first public release in 1990. Needless to say, it never amounted to much.


> while running tons of containers for basic tasks.

Those containers run on a monolithic kernel; what's your point?


The supposed performance gains from a monolithic kernel are being wasted on features that mimic microkernel features.


> The supposed performance gains from a monolithic kernel are being wasted on features that mimic microkernel features.

So two things:

1. Containers don't have a meaningful performance hit. (They are semi-frequently used with things that can have a perf hit, like overlay filesystems, but this is generally easy to skip when it matters.)

2. I don't think containers meaningfully mimic microkernel features. If I run everything on my laptop in a container, and a device driver crashes, then the machine is still hosed.


1. The amount of memory consumption I see, versus traditional processes, must be a mirage.

2. It depends on what the containers are being used for. Microkernels aren't only about using drivers in userspace.


And it still manages to run better than a complete microkernel


So goes the mythological tales from ancient times.

That is what happens when people don't update themselves.



