
Well, here's some for you:

* In modern times, the practical benefit of a microkernel is minimal. Hardware is cheap and disposable, and virtual machines exist, so the use cases for "tolerate a chunk of the kernel misbehaving" are few.

* To properly tolerate a partial crash takes a lot of work. If your desktop crashes, you might as well reboot.

* Modern hardware is complex and there's no guarantee that rebooting a driver will be successful.

* A monolithic kernel can always clone microkernel functionality wherever it wants, without paying the price elsewhere.

* Processes can't trust each other.

The last one is a point I didn't realize was an issue for a long time, but it seems a tricky one. In a monolithic kernel, you can have implicit trust that things will happen. If part A tells part B "drop your caches, I need more memory", it can expect that to actually happen.

In a microkernel, there can't be such trust. A different process can just ignore your message, or arbitrarily get stuck on something and not act in time. You have less ability to make a coherent whole because there's no coherent whole.
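To make that concrete, here's a minimal sketch (all names invented, threads standing in for processes): a "memory manager" asks a "cache service" to drop its caches. In a monolithic kernel that would be a direct function call that is guaranteed to run; across process boundaries it's a message that may never be answered, so all the caller can do is bound the wait.

```rust
use std::sync::mpsc;
use std::thread;
use std::time::Duration;

// Hypothetical sketch: request "drop_caches" from another process.
// A well-behaved service replies; a wedged or uncooperative one
// simply never does, and the caller has no way to force it.
fn request_drop_caches(service_cooperates: bool, timeout: Duration) -> Option<String> {
    let (req_tx, req_rx) = mpsc::channel::<String>();
    let (rep_tx, rep_rx) = mpsc::channel::<String>();

    // The "cache service" running in its own context.
    thread::spawn(move || {
        if let Ok(msg) = req_rx.recv() {
            if service_cooperates && msg == "drop_caches" {
                let _ = rep_tx.send("dropped".to_string());
            }
            // else: the request is silently ignored
        }
    });

    req_tx.send("drop_caches".to_string()).unwrap();

    // The caller cannot assume the request was honored; it can only
    // bound how long it waits and then fall back to something else.
    rep_rx.recv_timeout(timeout).ok()
}
```

The point of the sketch is the return type: the requester always has to handle the `None` case, which is exactly the "no coherent whole" problem described above.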



You describe microkernels as if there were only one way to implement them.

> A different process can just ignore your message

> arbitrarily get stuck on something and not act in time

This doesn't make sense. An implementation of a microkernel might suffer from these issues, but that's a flaw of the implementation, not of the design itself. There are many ways of designing message queues.
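For instance, one of many possible queue designs (a hypothetical sketch, not any particular kernel's API): a bounded channel with a non-blocking send, so a stuck receiver can't hang its senders; they find out immediately that the queue is full and can react.

```rust
use std::sync::mpsc::{SyncSender, TrySendError};

// Hypothetical sketch: notify another process without ever blocking.
// If the receiver is wedged and its bounded queue fills up, the send
// fails fast instead of stalling the sender indefinitely.
fn try_notify(tx: &SyncSender<u32>, msg: u32) -> Result<(), TrySendError<u32>> {
    tx.try_send(msg)
}
```

With a bounded queue plus fail-fast sends, "arbitrarily get stuck waiting on the other process" is a choice the sender makes, not something the design forces on it.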

Also:

> In a microkernel, there can't be such trust [between processes]

Capabilities have solved this problem in a much better and more scalable way than the implicit trust model you have in a monolithic kernel. Using Linux as an example of a monolith is wrong, as it incorporates many ideas (and shortcomings) of a microkernel. For example: how do you deal with implicit trust when you can load third-party modules at run-time? Capabilities offer much greater security guarantees than "oops, now some third-party code is running in kernel mode and can do anything it wants with kernel data". Stuff like the eBPF sandbox is a poor man's alternative to the security guarantees of microkernels.
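The core of the capability idea can be sketched in a few lines (all names here are invented; real systems like seL4 or Zircon are far more involved): access is checked against an unforgeable handle that names an object and a set of rights, not against who the caller is or what mode it runs in.

```rust
use std::collections::HashMap;

// Hypothetical sketch of capability-checked access. A holder can only
// perform the operations its capability grants on the object it names;
// there is no ambient authority over kernel data.
#[derive(Clone, Copy, PartialEq)]
enum Right { Read, Write }

struct Capability<'a> {
    rights: &'a [Right],
    object_id: u32, // which kernel object this handle refers to
}

struct Kernel {
    objects: HashMap<u32, String>,
}

impl Kernel {
    // Every operation validates the capability, not the caller.
    fn read(&self, cap: &Capability) -> Result<&String, &'static str> {
        if cap.rights.contains(&Right::Read) {
            self.objects.get(&cap.object_id).ok_or("no such object")
        } else {
            Err("capability lacks Read right")
        }
    }

    fn write(&mut self, cap: &Capability, val: String) -> Result<(), &'static str> {
        if cap.rights.contains(&Right::Write) {
            self.objects.insert(cap.object_id, val);
            Ok(())
        } else {
            Err("capability lacks Write right")
        }
    }
}
```

A third-party component handed a read-only capability simply cannot write, no matter what code it runs; compare that to a loaded kernel module, which gets everything by virtue of its address space.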

Also, good luck making sure the implicitly trusted perimeter is secure in the first place when the surface area of the kernel is so wide it's practically impossible to verify.

If you allow me an argument from authority, it is no surprise Google's Fuchsia went for a capability-based microkernel design.


I’m not sure I would consider Fuchsia an example that supports your point.

Its design largely failed at being a modern general-purpose operating system, and it’s become primarily an OS for embedded devices, which is an entirely different set of requirements.

It’s also not that widely used. There’s only a handful of devices that ship Fuchsia today. There’s a reason for that.


Don't mistake Google politics with technical achievements.


Did it fail because of its microkernel design?

It’s quite disingenuous to use “success” as a metric when discussing the advantages of microkernels vs. monolithic kernels, as the only kernels you can safely say have succeeded in the past 30+ years are three: Linux, NT, and Mach. One of those (Mach) is a microkernel of arguably dated design, and another (NT) is considered a “hybrid microkernel.”

Did L4 fail? What about QNX?

This topic was considered a flame war in the 90s and I guess it still isn’t possible to have a level-headed argument over the pros and cons of each design to this day.


When I read this thread, I think it’s pretty level headed except your last reply lol.



