
Just hypothesizing but could it be that the brain is structured as two adversarial networks, which train each other e.g. during sleep?
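
For concreteness, the machine-learning setup being gestured at is something like a GAN, where two networks improve precisely by competing with each other. A minimal sketch of that loop in PyTorch - every architecture, size, and hyperparameter below is illustrative, and none of this is a claim about actual neurobiology:

    # Minimal sketch of two networks training adversarially (GAN-style).
    # Everything here is illustrative: toy data, tiny networks.
    import torch
    import torch.nn as nn

    latent_dim, data_dim, batch = 8, 2, 64

    # One network proposes samples, the other judges them.
    G = nn.Sequential(nn.Linear(latent_dim, 32), nn.ReLU(),
                      nn.Linear(32, data_dim))
    D = nn.Sequential(nn.Linear(data_dim, 32), nn.ReLU(),
                      nn.Linear(32, 1))

    opt_G = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_D = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(1000):
        real = torch.randn(batch, data_dim) * 0.5 + 1.0  # stand-in "real" data
        fake = G(torch.randn(batch, latent_dim))

        # The discriminator learns to tell real samples from generated ones.
        d_loss = (bce(D(real), torch.ones(batch, 1)) +
                  bce(D(fake.detach()), torch.zeros(batch, 1)))
        opt_D.zero_grad()
        d_loss.backward()
        opt_D.step()

        # The generator learns to fool the (just-updated) discriminator.
        g_loss = bce(D(fake), torch.ones(batch, 1))
        opt_G.zero_grad()
        g_loss.backward()
        opt_G.step()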


It's always tempting to think about the body in terms of the technological concepts of the age. It's not automatically wrong, but there's nothing special about this particular era of computing that would mean we're working with the same methods as the human brain. Not even the fact that we call our systems "neural networks".


They're called neural networks because their design mirrors observations from neurons in the brain.

There's truth to your sentiment, that the current level of knowledge is always somewhat arbitrary, but to say there's "nothing" that suggests a connection between the two is far too dismissive of an interesting question posed by the original comment.
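
The resemblance is at the level of abstraction: an artificial unit sums weighted inputs and "fires" through a nonlinearity, loosely mirroring a neuron integrating excitatory and inhibitory synaptic input and firing past a threshold. A toy sketch (all numbers made up):

    import numpy as np

    def artificial_neuron(inputs, weights, bias):
        """Weighted sum of inputs pushed through a nonlinearity: the
        textbook abstraction of a neuron summing synaptic input and
        firing past a threshold."""
        drive = np.dot(weights, inputs) + bias
        return 1.0 / (1.0 + np.exp(-drive))  # sigmoid "firing rate"

    # Three "synapses": two excitatory (positive weights), one
    # inhibitory (negative weight). All values are illustrative.
    print(artificial_neuron(np.array([1.0, 0.5, 1.0]),
                            np.array([0.8, 0.4, -0.9]), bias=-0.2))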


Yet in so many ways they don't act like neurons in the brain.


'neural networks' are actually very different from the neurons inside the brain. There's almost nothing in common, once you look at the mechanics of the two systems.
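
To make the contrast concrete: a biological neuron is closer to a dynamical system than to a weighted sum. Even a heavily simplified spiking model like leaky integrate-and-fire (parameters below are illustrative) has internal state, time, and discrete spikes, none of which exist in the standard artificial unit:

    import numpy as np

    def leaky_integrate_and_fire(input_current, dt=1.0, tau=10.0,
                                 v_rest=-65.0, v_thresh=-50.0,
                                 v_reset=-70.0):
        """Toy leaky integrate-and-fire neuron (all parameters
        illustrative). It has internal state that evolves over time,
        leaks back toward rest, and emits discrete spikes."""
        v, spike_times = v_rest, []
        for t, i in enumerate(input_current):
            v += dt * (-(v - v_rest) + i) / tau  # leak + integration
            if v >= v_thresh:                    # threshold crossed
                spike_times.append(t)
                v = v_reset                      # reset after the spike
        return spike_times

    # Constant input current produces a regular spike train.
    print(leaky_integrate_and_fire(np.full(100, 20.0)))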


So true! 50 years ago they were using concepts from early computing to describe how the human brain works. With each new wave of advances in computing, there is a new set of analogies that philosophers and neuroscientists adopt to say: "See, the brain is like this". It seems problematic to me. Models are models, not reality. They may be good for descriptive effects, but that doesn't mean they are anything other than story-telling devices.

As a tangent, this seems to me to be a problem with Daniel Dennett's ideas and why, in the end, David Chalmers seems to be gaining ground with every passing year.


That sentiment makes it easy to dismiss the models we use, but here's the thing - the technological concepts of the age limit what models people can express in that age. Better technology = better models, i.e. models that explain more and predict more.

Also this sentiment casually dismisses the fact that behind computing there's a whole lot of theoretical work that stems not from technology itself, but from how reality works (i.e. math).


If there's an optimal way to solve the problems the brain (and AI) are good at, then as time passes, the probability that both we and nature use these methods increases. So we're indeed special, in that we've had more time than the people before us.


How? Why? According to what line of reasoning?

What do you mean by "we" and "nature"?


How? Because that's how "optimal" is defined.

Why? Ditto.

Line of reasoning? That evolution is an optimization process, and that human intelligence is an optimization process - hence both are likely to reach good solutions for intelligence eventually, and if there's a strong optimum in the design space of those, then both are likely to converge at least in some areas.

What's "we"? Human technological civilization.

What's "nature"? Evolutionary process that already produced working brains.


Interesting.

I'm not entirely sure that "optimal" has an agreed-upon definition. At best, "optimal" is relative to the system within which it is being applied. "Optimal" in a Trump world is very different from "optimal" in a Bernie Sanders world. Optimization seems to require some objective. In a practical sense, you cannot optimize a piece of software if you don't know what you are optimizing for.

It is a bold premise that the evolutionary process and human technological civilization have the same optimization goals.


> It is a bold premise that the evolutionary process and human technological civilization have the same optimization goals.

We have similar goals when we optimize for "what works" (instead of, e.g., "what sells").

The only existing instance of what we're trying to build - intelligence - is something a dumb, random, incremental optimization process following simple rules managed to somehow stumble upon. Now, if there's a strong local optimum in the design space of intelligent machines, then it seems plausible that evolution ended up there, and that we may stumble upon it too, thus converging with the evolutionary solution somehow.

Now I'm not saying our solution will be identical to biological brains. We have different goals (hell, we have goals, nature does not). But we're likely to end up doing many aspects of it in a way that resembles biology.

The core observation here is that it's the structure of reality (implications of laws of physics) that shape the search space we're traversing. Compare flight. Yes, human planes are very different from birds - but that's because they have less efficient energy sources, and also because we want them to go faster (have you ever seen a supersonic bird?). Still, both share some aspects, like the airfoil. Both we and nature "discovered" those because airfoils are dictated by the laws of physics - that's how you do flight in gases.

--

Basically, what I'm saying is that humans always describe things in terms of the technology of their age, but that doesn't mean it's wrong. Better technology means better description. Birds are unlike planes, but analyzing them using the model of airfoil we developed is a good idea and leads to more and better understanding.

--

EDIT

> Optimization seems to require some objective. In a practical sense, you cannot optimize a piece of software if you don't know what you are optimizing for.

Yes, optimization always has an objective - that's how we define it, in contrast to complete randomness. But the objective can be implicit or explicit. Explicit goals require a mind to be involved. Evolution has only implicit objectives; human-driven processes have both (because we suck at knowing what we actually want).

But the second important part of an optimization process is the shape of the optimization space. This here is defined by laws of physics. And insofar as evolution's implicit objective is in some aspects similar to our objective, both get similarly influenced by the shape of the optimization space :).
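
A toy illustration of that last point (everything here is a stand-in; this is not a model of evolution or engineering): run two very different search processes over the same landscape, and both end up in the same basin, because the shape of the landscape, not the nature of the searcher, decides where the good solutions are.

    import numpy as np

    # A stand-in "landscape" with one strong optimum at x = 2 (in the
    # analogy, its shape plays the role of the laws of physics).
    def landscape(x):
        return (x - 2.0) ** 2

    # Deliberate search: gradient descent ("engineering").
    def gradient_descent(x, lr=0.1, steps=200):
        for _ in range(steps):
            x -= lr * 2.0 * (x - 2.0)  # analytic gradient of landscape
        return x

    # Blind search: random mutation plus selection ("evolution-like").
    def random_hill_climb(x, steps=2000, seed=0):
        rng = np.random.default_rng(seed)
        for _ in range(steps):
            candidate = x + rng.normal(scale=0.1)
            if landscape(candidate) < landscape(x):  # keep improvements
                x = candidate
        return x

    # Different processes, different starting points, same basin.
    print(gradient_descent(-5.0), random_hill_climb(7.0))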


> Basically, what I'm saying is that humans always describe things in terms of the technology of their age, but that doesn't mean it's wrong. Better technology means better description. Birds are unlike planes, but analyzing them using the model of airfoil we developed is a good idea and leads to more and better understanding.

Totally. "Better technology means better description" is a great idea/concept here.

Reading Michel Foucault while I was studying in the UK (someone I feel I never even heard mentioned at a US university) made me rethink the "what/essence" of the things we do/build. Just like the supposed lesson learned (or potentially still to be learned) in the finance world after the financial crisis: models are models, not reality. We develop a culture around the descriptions we use, but in the end we aren't truly describing the essence of the "what" (i.e. the thing described). We are layering various descriptions, cultural ideas, and preconceptions on top of the thing itself in order to better communicate some aspect of it to other people. In a sense, different technologies give us a shared set of concepts with which we can communicate with each other.

Your point is great. Just because they are "descriptions" doesn't make them wrong (or right). They are a shared language we use to communicate complex ideas with each other and, hence, better understand previously wishy-washy concepts.


I'm on a phone so I won't type much. From my dissertation research and reading, I've got no hint that the hippocampus and the cortex learn adversarially, at least to my limited understanding of how such networks work. The hippocampus is thought to bind together the features of events into an integrated memory representation. It may do so by relating coded indexes in the hippocampus to features that actually exist in the cortex. So for example, you observe a car and a bike hit each other. Your percepts for the car and bike (features) are in your perceptual cortices. The hippocampus generates an index that, when propagated back to the cortex, reactivates those cortices into a state similar to when you originally saw the event. That is memory...that reactivation. At least that is a popular idea in memory research that has empirical support.
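
If it helps, here's a cartoon of that indexing idea in code (purely illustrative; the names and structures are mine, not a biological model): the cortex holds the features, the hippocampus holds only pointers binding them together, and "remembering" is replaying the pointers to reinstate the cortical pattern.

    # The perceptual cortices hold the actual feature representations.
    cortex = {
        "visual":   {"car": "red sedan", "bike": "blue road bike"},
        "auditory": {"crash": "metallic bang"},
    }

    # The hippocampus stores only an index: pointers that bind the
    # cortical features of one event together.
    hippocampal_index = [("visual", "car"), ("visual", "bike"),
                         ("auditory", "crash")]

    def reactivate(index):
        """Propagate the index back to cortex, reinstating a pattern
        similar to the one active during the original event; on this
        view, that reinstatement is the memory."""
        return {feature: cortex[region][feature]
                for region, feature in index}

    print(reactivate(hippocampal_index))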


I saw Terry Sejnowski give a talk last week in San Diego, and he made the exact same suggestion. He did not provide any evidence, but it's hard not to take his ideas seriously.




