
No, it wouldn't. For example, the "Heartbleed in Rust" blog post [1] re-used a buffer without freeing it. No destructor runs in between the two uses, so a zeroing destructor could not possibly prevent the bug.

Maybe zeroing destructors make sense as defense-in-depth, but I don't see how they can fix a Heartbleed-style exploit in Rust. In code where the buffer is freed and its destructor runs, Rust's memory safety guarantees already prevent it from being accessed after free. In vulnerable code that just uses the same buffer twice, the destructor never has a chance to run so its behavior doesn't matter.
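To make the buffer-reuse point concrete, here's a minimal sketch (all names hypothetical, not from the linked post) of the Heartbleed shape in safe Rust: a fixed buffer is reused across two "requests" without being cleared, and the second request lies about its length. The buffer lives the whole time, so no destructor ever runs between the uses:

    fn handle(buf: &mut [u8; 8], payload: &[u8], claimed_len: usize) -> Vec<u8> {
        buf[..payload.len()].copy_from_slice(payload);
        // Bug: trust the attacker's claimed length instead of payload.len().
        buf[..claimed_len].to_vec()
    }

    fn main() {
        let mut buf = [0u8; 8];
        handle(&mut buf, b"secret!!", 8);      // first request fills the buffer
        let leak = handle(&mut buf, b"hi", 8); // second request over-reads
        assert_eq!(&leak[2..], b"cret!!");     // stale secret bytes leak out
    }

Every byte read is initialized and in bounds, so this is memory-safe Rust and a zeroing Drop would never fire.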

The real Heartbleed vulnerability (CVE-2014-0160 in OpenSSL) involved reading into uninitialized memory in a newly-allocated buffer, which safe Rust code already prevents [2].
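For contrast, a quick sketch of why the OpenSSL variant doesn't translate: safe Rust only lets you see the initialized portion of a fresh allocation, so there's no window onto uninitialized capacity.

    fn main() {
        let buf: Vec<u8> = Vec::with_capacity(64);
        assert_eq!(buf.len(), 0);      // capacity is 64, but length is 0
        assert!(buf.get(0).is_none()); // reads beyond len() are checked
    }

Reaching the uninitialized bytes would require `unsafe` (e.g. `set_len`), which is exactly the boundary the language draws.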

[1]: http://www.tedunangst.com/flak/post/heartbleed-in-rust

[2]: https://news.ycombinator.com/item?id=8984169



Thanks - that's a good example of what I was trying to convey.

The point is Rust already provides safety guarantees. If you don't trust the runtime, then why would you trust the built-in zeroing? I get the "defense in depth" argument, but it feels a bit like doing this:

    {
      int a = secret;  // Get secret.
      assert(a == secret);  // Check "a" is actually that.
      a = 0;  // Ensure "a" is zeroed on exit.
      assert(a == 0);  // Just because.
    }
And yes, I get that you can build this into the language so it's not quite as ridiculous - you actually wipe tainted stack, for example.

But the point is: the runtime has an ABI and a machine model. Information is allowed to leak across function boundaries, because within the model it doesn't matter. Without using the "unsafe" keyword, there is no way to step around the machine model and dip into the underlying actual machine.

Even if you don't have a "safe" language and runtime, zeroing is still of limited value. It protects against threats involving data or control flow corruption after key usage, and only where the attacker lacks sufficient control of the program to perturb the secret-consuming functions themselves. That's more of an annoyance than prevention. Meanwhile, it gives the programmer a false sense that secrets are being properly wiped.


It is very probable that a sufficiently smart optimizer could see the assertion was always true and delete it, then see that no one reads "a" and delete the zeroing store as well. In certain circumstances this can cause a secret to be leaked in, say, a register, making our safe function unsafe. You need to be very careful writing secure code, and probably need to go down to the level of writing assembly to be sure the optimizer isn't turning your safe code into unsafe code.
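The usual hedge against dead-store elimination is a volatile write, which tells the compiler the store is observable and may not be removed. A minimal sketch in Rust (function name is my own):

    use std::ptr;

    fn wipe(buf: &mut [u8]) {
        for b in buf.iter_mut() {
            // A plain `*b = 0` could legally be optimized away once the
            // compiler proves nothing reads the buffer afterwards;
            // write_volatile forces the store to actually happen.
            unsafe { ptr::write_volatile(b, 0) };
        }
    }

    fn main() {
        let mut secret = *b"hunter2!";
        wipe(&mut secret);
        assert_eq!(secret, [0u8; 8]);
    }

Note this only pins the stores to that one buffer; it does nothing about copies of the secret left in registers, spill slots, or caches, which is the broader point above.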

We actually have an interesting project in Rust where someone is writing a syntax extension to take Rust-like code and generate assembly [0]. It's probably unsafe to use right now, but if sufficiently well implemented it could be the foundation of a lot of interesting cryptography work.

[0]: https://github.com/klutzy/nadeko


Sorry, I wasn't clear enough that my code was intended as sarcasm. It's obviously silly to zero variables when the compiler is free to ignore you. The point is, the underlying machine is going to do the same.

There are many ways to dig out stale memory if you're running at sufficient privilege: direct cache introspection or cache bypass, for example. Zeroing alone is not strong enough to mitigate the threats people imagine it works against.



