
Sorry for the delay in replying!

>> On the other hand, I am guessing that it would be relatively straightforward to take a theorem-prover and modify it so that its logic is inconsistent? If I am not mistaken, this is a question you are well-positioned to answer, so your comments would be appreciated. If inconsistent theorem "provers" are possible, this would seem to defeat Penrose's argument, unless one adds the premise that humans are consistent reasoners, which seems to me to be obviously false.

Yes, you could modify an automated theorem prover to be inconsistent, or unsound; in fact, you could do that in many different ways. My specialty is Resolution-based theorem proving. While Resolution is sound and complete in principle, in practice it doesn't make sense to consider it so except under the Closed World Assumption (CWA). The CWA, very simply stated, is that only what is currently known, either as axiomatic truths or as theorems derived from them, is true, and everything else is false. That might sound dumb, but it's actually one of the simplest ways to deal with uncertainty: anything that can't be known with certainty is set aside.

The good thing, of course, is that the CWA does not have to be seen as a static assumption. In particular, it is a very well known fact that if new information is received under a CWA (more precisely, with Negation-As-Failure under a CWA), the set of logical consequences of a theory can both increase and decrease, non-monotonically. What that means, in practice, is that under the CWA beliefs can be updated as new information comes in, so a reasoner doesn't have to be stuck forever in a "bad world" entailed by a faulty set of initial assumptions.
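To make the non-monotonicity concrete, here is a minimal sketch (my own toy illustration, not any real prover) of Negation-As-Failure under the CWA: a conclusion holds because a fact *cannot* be proved, so adding that fact later withdraws the conclusion.

```python
# CWA: a fact is true iff it is in the current set of known facts;
# everything not derivable is treated as false.
def holds(fact, known):
    return fact in known

def can_fly(bird, known):
    # Rule: can_fly(X) :- bird(X), not abnormal(X).
    # 'not' here is Negation-As-Failure: we merely FAIL to prove abnormal(X).
    return holds(("bird", bird), known) and not holds(("abnormal", bird), known)

known = {("bird", "tweety")}
assert can_fly("tweety", known)        # derivable under the current CWA

known.add(("abnormal", "tweety"))      # new information arrives...
assert not can_fly("tweety", known)    # ...and an old consequence is withdrawn
```

Note the set of consequences shrank when knowledge grew, which is exactly what a monotonic classical logic forbids.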

In a sense, given a sound and complete inference rule like Resolution, the CWA means a reasoning system can be "locally consistent" without having to be "globally consistent". It can be right about what it knows for sure, without having to worry too much about what it doesn't know, because once unknowns become known, local consistency kicks in and the reasoner's model of the world is updated to better approximate reality. Unfortunately, most of that remains mainly theoretical. To my knowledge, nobody has really gone and built a system that learns continuously by exploiting the non-monotonicity of Resolution under a CWA. People have built systems, but they tend to be academic prototypes that are described in a couple of papers at most and then forgotten about.

There's another way in which Resolution can be inconsistent ("inconsistent" is a very loose term, I hope I'm using it right :). That is by performing induction, or abduction. Normally, Resolution is deductive: one has a theory (in practice that's a logic program, a set of definite clauses) and proves new consequences of the theory. In the inductive use of Resolution ("Meta-Interpretive Learning"... doesn't quite roll off the tongue, I know), one has some background theory and some observations not covered by the theory, and Resolution is used to add clauses to the theory so that it "covers" the new observations. In abduction, the premises of the theory, its very axioms, are updated.
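A toy illustration of the inductive direction (this is the general idea, not the actual Meta-Interpretive Learning algorithm, and the names are made up): the background theory doesn't cover an observation, so a clause is added that does. Here a "clause" is just a Python predicate.

```python
# Background theory: a single known fact.
background = {("parent", "ann", "bob")}

def covered(obs, theory, clauses):
    """An observation is covered if it is a known fact or some clause derives it."""
    return obs in theory or any(c(obs, theory) for c in clauses)

observation = ("ancestor", "ann", "bob")
clauses = []
assert not covered(observation, background, clauses)  # theory too weak

# Induce a new clause: ancestor(X, Y) :- parent(X, Y).
def ancestor_from_parent(obs, theory):
    pred, x, y = obs
    return pred == "ancestor" and ("parent", x, y) in theory

clauses.append(ancestor_from_parent)
assert covered(observation, background, clauses)      # now covered
```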

In both cases, reasoning is unsound. In induction, because we can never observe every logical consequence of a new theory derived from some small number of observations, so we can never verify the theory (though we can falsify it). In abduction, because axioms are by their nature unprovable, arbitrary assumptions that we hold true only by choice.
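The verify/falsify asymmetry can be sketched in a few lines (again just my toy illustration): a hypothesis induced from a finite sample agrees with everything seen so far, yet one new observation suffices to refute it.

```python
# Finite sample of (bird, flies?) observations.
observed = [("tweety", True), ("polly", True)]

# Hypothesis induced from the sample: every bird flies.
def hypothesis(bird):
    return True

# Consistent with all observations so far; we cannot check the infinitely
# many unobserved cases, so the hypothesis can never be verified...
assert all(hypothesis(b) == flies for b, flies in observed)

# ...but a single counterexample falsifies it.
new_observation = ("pingu", False)
assert hypothesis(new_observation[0]) != new_observation[1]
```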

I guess all this is very abstract and will sound vague, so sorry about that. Note also that it's all a drastic departure from classical logic, which assumes that everything to be known is known, and knowable (although maybe undecidable). There's a certain opinion that non-monotonic reasoning is closer to human reasoning, because of its ability to be locally consistent and globally who-cares. I don't know about human-like, but it certainly seems like a good way to approach the real world, with all its uncertainties, where we nevertheless want to have good answers that we can rely on.

Let me know if you would like to read further on all this and I can give some pointers.



Thanks for your reply. I agree that non-monotonic logic seems a more useful way to deal with an uncertain real world than classical logic. I would be interested in following up on a few pointers, though I don't want you to spend too much effort on it, as I can't guarantee I have either the time or the aptitude to take it too far.



