
Kind of feels like apps should opt in to (or out of) mitigations individually. Obviously a web browser needs it, but does Clang? VSCode? Zoom? Probably not.


Three main things here:

1) We can’t trust people to categorise their own apps, because favouring performance over security is a trade-off we’ve all made time and time again.

2) Efforts to introduce mandatory access controls have a chequered history here: SELinux and AppArmor both have very low adoption rates, no matter your personal anecdotes.

3) These mitigations are so pervasive that it would cost more in performance to even _check_ per application than it would to just enable them everywhere.


I don't think that (3) is true.


How would you implement such a change?

Considering that you have:

A) some list of allowed applications/programs

B) a run of this check on every syscall

C) to be faster than a TLB flush


I don't know but I can't imagine a highly predictable branch being slower than a TLB flush.


Well, consider that checking a table of “ok” programs is a branch and a lookup in and of itself.


It would be a branch, but surely it would be a flag on the process struct set when the process started, rather than a lookup each time.


Yeah, that should still be really fast. Programs could also opt to just tell the OS "hey, don't check this system call from me" on each system call, avoiding any lookup.

The impact of TLB flushing, not just the direct cost of the flush but the misses that follow it, is really significant; it's going to take a lot of work for a check in the syscall path to be that expensive.


What would stop malware from telling the OS not to check it?


Nothing, but that only makes the malware's own memory readable with these exploits. The malware won't be able to access the memory of some other process if that other process is using those flags itself.

Edit: For that to work, the flag would have to operate at the context-switch level. So every time you switch away from a sensitive process, flush all buffers and whatever else, then switch. This also requires the kernel itself to enable mitigations as necessary when it touches encryption keys before switching back to user space.


That assumes the malware already has arbitrary control over system calls, at which point Spectre isn't the issue.


Just require everyone evil to set the evil bit, and everything would be much easier.


Didn't browsers implement their own mitigations? Or were those only for some vulns?


VSCode is a browser


As in "it runs JavaScript and renders HTML", yes. As in "it runs stuff in a security sandbox", no.


How about extensions? I would have thought these amount to a security concern comparable to web pages. Do they have adequate isolation?


No, extensions are fully trusted. They can do anything.




