
You're right, we probably have different ontologies. To me, an intelligent system is one that aims to realize a goal by modelling its environment and planning actions to bring about that intended state. That's more or less what humans do, and I think it's more in line with the colloquial understanding of the term.

There are basically two approaches to defining intelligence, I think. You can either define it in terms of capability, in which case a system that has no intent and does not plan can be more intelligent than one that does, simply by virtue of being more effective. Or you can define it in terms of mechanism: something is intelligent if it operates in a specific way. But it may then turn out to be the case that some non-intelligent systems are more effective than some intelligent systems. Or you can do both and assume that there is some specific mechanism (human intelligence, conveniently) that is intrinsically better than the others, which is a mistake people commonly make and is the source of a lot of confusion.

I tend to go for the second approach because I think it's a more useful framing for talking about ourselves, but the first is also consistent. As long as we each know what the other means.



If intelligence is treated as a scale, should it be measured primarily by (a) the diversity of valid actions an entity can take combined with its ability to collect and process information about its environment and predict outcomes, or (b) only by its ability to collect and process information and predict outcomes?

In either case, the smallest unit of intelligence could be seen as a component of a two-field or particle interaction, where information is exchanged and an outcome is determined. Scaled up, these interactions generate emergent properties, and at each higher level of abstraction, new layers of intelligence appear that drive increasing complexity. Under such a view, a less intelligent system might still excel in a narrow domain, while a more intelligent system, effective across a broader range, might perform worse in that same narrow context.
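
A minimal sketch of how readings (a) and (b) come apart, with invented agents and scores rather than any real metric:

    from dataclasses import dataclass

    @dataclass
    class Agent:
        name: str
        n_valid_actions: int        # illustrative size of the action repertoire
        prediction_accuracy: float  # 0..1, how well it models and predicts outcomes

    def score_a(agent):
        # (a): prediction ability weighted by the diversity of available actions.
        return agent.prediction_accuracy * agent.n_valid_actions

    def score_b(agent):
        # (b): prediction ability alone.
        return agent.prediction_accuracy

    specialist = Agent("narrow specialist", n_valid_actions=10, prediction_accuracy=0.99)
    generalist = Agent("broad generalist", n_valid_actions=1000, prediction_accuracy=0.70)

    for agent in (specialist, generalist):
        print(agent.name, score_a(agent), score_b(agent))
    # Under (b) the specialist ranks higher; under (a) the generalist does,
    # which is the narrow-vs-broad trade-off described above.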

Depending on the context of the conversation, I might go along with some cut-off on the scale, but I don't see why the scale isn't continuous. Maybe it has stacked s-curves though...

We just happen to exist at an interesting spot on the fractal that's currently the highest point we can see. So it makes sense we would start with our own intelligence as the idea of intelligence itself.


I think it's an issue of hierarchies and the Society of Mind (Minsky). If a human hand, or any animal's end effector, touches a hot stove, a lower-level process instantly pulls the hand/paw away from the heat. There are no doubt thousands of these 'smart body, no brain' interactions that take over in certain situations, conscious thinking not required.
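
A toy sketch of that kind of layering (the sensor names, threshold, and actions are invented for illustration, not Minsky's actual architecture): a cheap hard-wired reflex gets first refusal and preempts the slower deliberative layer.

    def reflex_layer(sensors):
        # Fast, hard-wired rule: no modelling, no planning.
        if sensors.get("hand_temperature_c", 20) > 60:
            return "withdraw hand"
        return None  # nothing urgent; defer to the slower layers

    def deliberative_layer(sensors):
        # Stand-in for the slow "model the environment and plan" process.
        return "keep stirring the pot"

    def act(sensors):
        # The lower layer runs first and overrides deliberation when it fires.
        return reflex_layer(sensors) or deliberative_layer(sensors)

    print(act({"hand_temperature_c": 85}))  # reflex wins: "withdraw hand"
    print(act({"hand_temperature_c": 25}))  # reflex silent: "keep stirring the pot"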

Ken Goldberg argues that getting robots to operate in the real world using the method that has made LLMs look smart -- amassing huge amounts of training data -- is unlikely to work. The vast gap between what little data a company like Physical Intelligence has and what GPT-5 trains on is shown here (84 seconds): https://drive.google.com/file/d/16DzKxYvRutTN7GBflRZj57WgsFN...

Ken advocates plenty of Good Old-Fashioned Engineering to help close this gap, and worries that demos like Optimus actually set the field back by setting expectations too high. Then again, just as AI researchers were shocked by LLMs' advances, it's possible something out of left field will close the training gap for robots. I think it'll be at least five more years before robots are among us as useful in-house servants. We'll see soon enough whether the LLM hype has spilled over too far into the humanoid robot domain.


> But it may then turn out to be the case that some non-intelligent systems are more effective than some intelligent systems.

That is surely the case in limited domains. For example, non-neural-net chess engines are better at chess than any human.

I think the fair way to compare neural networks with human intelligence is to limit their training to the number of games a human professional can reasonably play in a lifetime. AlphaGo wouldn't be much good after playing, let's say, 10 thousand games, even starting from the corpus of existing human games.
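
A rough sketch of what such a cap looks like (the learner and "games" below are made-up stand-ins, not Go or AlphaGo; the 10 thousand figure is the one above):

    import random

    HUMAN_LIFETIME_GAMES = 10_000   # rough lifetime budget of serious games

    class ToyLearner:
        """Stand-in learner: estimates a win rate per opening choice."""
        def __init__(self, n_openings=5):
            self.wins, self.plays = [0] * n_openings, [0] * n_openings

        def choose(self):
            # Epsilon-greedy: mostly pick the best-looking opening, sometimes explore.
            if random.random() < 0.1 or not any(self.plays):
                return random.randrange(len(self.plays))
            rates = [w / p if p else 0.0 for w, p in zip(self.wins, self.plays)]
            return rates.index(max(rates))

        def update(self, opening, won):
            self.plays[opening] += 1
            self.wins[opening] += int(won)

    def play_one_game(agent):
        opening = agent.choose()
        won = random.random() < 0.4 + 0.1 * opening  # hidden "true" strength per opening
        return opening, won

    agent = ToyLearner()
    for _ in range(HUMAN_LIFETIME_GAMES):  # hard cap: human-scale experience only
        agent.update(*play_one_game(agent))
    print(agent.plays)  # experience skews toward whichever opening looks strongest

Whatever strength the learner reaches inside that loop is the only strength that counts for the comparison; it never gets the millions of self-play games a machine could otherwise generate.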



