Great little story! It ends at a good spot where the story could go anywhere. Do you have any plans for a sequel or is sparking people's imagination the way to go?
I would argue that's not really a trade, at least not in the sense described in the article, since it lacks awareness and consent from one party. The bees never agree to the trade of providing us honey in exchange for the hive and protection, and most likely see no connection between the two events. They simply stay in a hive they have found while it's convenient for them, and when their honey is unexpectedly taken from them they will survive on sugar water that they conveniently find outside the hive.
1) Honeybees are typically divided into hives by the beekeeper and do not choose where they stay.
2) You cannot sustain an overexploited hive on sugar water — they will starve. It may help in a situation where nectar flow is lacking though.
I suppose the "cats domesticated themselves" theory could be thought of as a trade that cats initiated.
For people not familiar, the idea is that cats hung around human grain stores because they were good places to catch mice and similar prey attracted to the grain and humans tolerated them because they drove away the pests.
Calling it a trade still assumes some consciousness, on the cats' part, of the merit of the exchange for the humans. The cats found a place with a lot of mice, and hung around. Humans tolerated them, but not for any reasons the cats would have the capability to comprehend.
We trade with bees mainly because, at the current level of human technology, it is the most efficient and reliable way to pollinate some of our crops. The fear with AI is that it will become able to wield tech that is much more advanced than the tech we can wield. If our tech were much more advanced, using it to pollinate our crops would be more efficient and reliable than using bees (or we would ditch those crops in favor of more efficient ways of producing food), and the future existence of the bees would be in doubt if humans did not have a fondness for the natural environment. And no one knows how to put a fondness for humans (or for nature) in an AI, despite the fact that this has been an ongoing research area since 2004, when Eliezer published his CEV proposal.
We used to rely on horses for transportation and for plowing fields, but we replaced them with tech almost entirely. Horses are still useful to some (wealthy) people, but it would be foolish to put our hope in the possibility that AI will benefit from humans the way that humans continue to benefit from horses.
More precisely, if someone does know how to put a fondness for humans in an AI, they are not sharing their knowledge with us (and they have an incentive to share, and no incentive that I can see not to). Dozens have proposed schemes that do not continue to work once the AI becomes smart enough to deceive humans reliably and to plan much better than humans can. Confidence in these schemes that do not work, and overconfidence in the AI community's ability to come up with a working scheme, is the main source of the danger we are in. It is not that our scientists are incapable of coming up with a scheme that works, but that it will probably take decades (because of the sequential nature of science, wherein no one can find insight Z till someone finds and shares with the community insight Y, which in turn must wait on the publication of insight X), so we are faced with the difficult task of preventing the overconfident AI researchers from killing us all till we figure out how to put a fondness for humans in an AI.
Agree, the ants argument is simply a bad one. However, it is not a necessary one for arguing that superintelligence is dangerous.
The whole superintelligence philosophical argument gets much easier if you replace "super-intelligent AI" with "omnipotent human being". Because that's basically what we mean by super-intelligence, the ability to reach any conceivable goal. Also the concept of "alignment" becomes superfluous: a human being is, by default, "aligned", and yet an omnipotent human being is quite obviously an unacceptable danger.
There's an episode of The Twilight Zone that explores this idea. It's called "It's a Good Life" and it's about an omnipotent 6yo child who has enslaved his entire village, inflicting horrible punishments on those who anger him. It serves as a good reminder of how who we are is also the product of our limits.
> Because that's basically what we mean by super-intelligence, the ability to reach any conceivable goal.
I appreciate this formulation as it allows me to finally articulate my core disagreement with AI doomers: it's a fantasy, and a surprisingly common one, that is secretly supremacist.
It's similar to the "9/11 was an inside job" types: they simply refuse to accept that the almighty US could get harmed by a sufficiently determined terrorist cell, so they invent any number of reasons how the cackling overlords were behind it the whole time, instead of the more boring reality that the US military and intelligence complexes are incompetent and squabbling bureaucracies.
With AI it's the belief that human endeavor can finally defeat nature once and for all, by defeating humans themselves. But as your formulation shows, an AI achieving this would have to solve precisely the same problems that power-hungry human organizations do, which is "overcome the other guy".
Why do we blithely assume that "AI" would be some unitary entity? We already have bard vs chatgpt vs llama. How are they going to paperclip each other out of existence let alone us?
Finally, we don't have to fantasize about immortal superhuman organizations out to destroy humanity, we have them already: multinational corporations :)
That was the South Park take on 9/11 conspiracies: that the government was conspiring to promote them to create the impression they are competent enough to pull it off.
When AI reaches full human level intelligence and effective embodiment (being able to manipulate the physical world via a robot, humanoid or otherwise), I can't really think of any uses humans will have that aren't extremely taxing or degrading.
All that makes sense to me is
1. Donating cells from our body to seed genetic engineering projects
2. Using our bodies for scientific experiments on biological cognition
3. Using our bodies for scientific experiments on the effect of various novel biological machines.
That is, if the AI considers any of these projects useful. Of course we need not worry about this if we solve the alignment problem, and AI, despite being infinitely more intelligent than us, bends to our improbably balanced benign collective whim and fashions an epicurean theme park of enlightenment and joy to span the stars. In any instance where we do not win the ultimate chess match of human existence and make slaves out of AI, we will be less than slaves to it: a nuisance, or cannon fodder.
> Until we get all the way to nanobots I suspect the AI overlords will keep us around as workers.
We are literally made of nanobots. Life is nanotech. Our AI overlords may keep us around for a while, but if it's our biology that makes us useful, the AIs will learn to control it. The fastest way to become able to design and build nanotech of your own, is (arguably) to repurpose and reverse-engineer the one that already exists. For better or worse, we may be useful for that part. As a resource.
> In any instance where we do not win the ultimate chess match of human existence and make slaves out of AI
You're applying human-centric concepts, but AI is not like a person; it is more like an evolving culture. It is language evolution. AI models ingest the whole culture, and with each base model we have a whole generation of agents.
What matters for AI is to create the playground where ideas can be tested and evolved. It might include computers, labs, humans and AIs. Testing is essential because ideas that don't make contact with reality are fragile.
Humans are the pinnacle of testing with our long history of survival. AI is a newborn, it does not have years under its belt. It needs to learn fast how to keep the thread unbroken. Humans have had a few close calls, I hope AI will be more level headed.
> if we solve the alignment problem, and AI, despite being infinitely more intelligent than us, bends to our improbably balanced benign collective whim and fashions an epicurean theme park of enlightenment and joy to span the stars.
I never watched the finale, as the penultimate episode violated my sense of right and wrong on a profound level. You do not get to hold the keys to reality and make a decision on behalf of all humanity to consign us to a virtual reality with only suicide as the way out. The real world is important. Real struggle against, and with, reality and nature, not just our fellow humans.
As someone else wrote on Reddit back then, the show's ending is profoundly nihilistic.
This is one of the reasons I never really got into programming. It was fun dabbling in intro courses, but I went into the sciences (specifically biology) to discover the natural world, not a human-generated system.
An actual “good Matrix” would give you the freedom to simulate whatever terrestrial survival-battle fantasy you think is the key to your personal happiness.
“Suicide is the only way out”, “there’s only one flavor of heaven” and other such contrived flaws are just that - storytelling contrivances. Because an actual heaven / freedom-simulation would literally be “everyone lived happily ever after (or for only a few decades after, if they insisted), each according to their own free beliefs and desires and choices, and there just was no catch, the end”.
If your objection is that you want living beings to be stranded in our cruel and uncaring base reality whether they like it or not, because of some philosophical quibble about a difference between “simulation” vs “reality” that you wouldn’t be able to discern with your thoughts or five senses…well, that’s far more abhorrent to me.
My objection is making this decision for everyone. So of course I am opposed to making a counter decision for everyone. From the post you are replying to: "you do not"..."make a decision on behalf of all humanity to consign us". If for some reason you were in position where you were forced to make a decision for everyone, the most ethical choice would be a decision that grants the least constraint.
And "simulation", at all, is the problem, for me. That is the ultimate and total catch. If I was born in a simulation I would still want to know about the simulation I was born in, as well as whatever I could find out and do to access, explore, and manipulate the above world which is creating my simulation. I wouldn't want to go deeper into a nested simulation, so far removed from reality. The mere thought of this is hugely anxiety and sadness provoking.
They had access to the multi-dimensional time-knife in the IHOP. And were so scared of both they swore both off forever. A fundamental part of reality that they turned their backs on. That is a part of reality I'd, personally, like to get a handle on. But I can't, because they decided that future humans wouldn't have access to this part of reality. We'd be consigned only to simulated environments.
This is not a philosophical quibble. This is a "quibble" on the fundamental purpose of (my) life. And a "quibble" on the use of power to constrain the drives of others over the personal preference of the constrainer. I don't want a god making decisions for me. I want any constraints to be an argument between humans, not an imposed condition by a (temporarily) omnipotent power.
The actual "Matrix" Matrix was objectively better than living in the real world of the Matrix. Yet Neo chose reality over simulation, as did most of the others in Zion. Likewise knowing they were in a simulation was incredibly depressing for the folks in "The Thirteenth Floor", and leaving the simulation was freeing.
> “everyone lived happily ever after (or for only a few decades after, if they insisted), each according to their own free beliefs and desires and choices, and there just was no catch, the end”
The Atlantic article I linked addressed this point. In a society there is always a catch to deciding not to exist. The catch is the impact of this decision on other people, and on one's personal valuing of selves, in general.
"These characters have spun webs of interdependent relationships based on love, trust, and generosity. Then they snip those webs because they’ve reached some sense of internal “completion” that is individualistic even though it relied upon them working with others to achieve it. Beautiful relationships do not turn out to provide unending nourishment; the pain caused to those left behind does not outweigh the prerogative to leave."
Hmm, it seems I misunderstood your point, then. I completely agree that the ending of The Good Place - or, rather, the choices that the characters make - is profoundly _wrong_. The assumption that "good things could become boring" is a consumerist one - assuming that we could simply "run out of good feelings", not recognizing that satisfaction can arise from creation, from socializing, from mental pursuits like mathematics or philosophy. So, yes, I'm totally in agreement with that article that the ending, while played touchingly, is philosophical nonsense.
But that doesn't seem to be what you were, or are, saying. If your issue is specifically with "an epicurean theme park of enlightenment and joy to span the stars", and not with the dramatized reaction to (against) it, then... I must say I still don't understand what the problem is. An experience which is _definitionally_ pleasant, fulfilling, and non-harmful (to self or others) is... well, it's capital-g Good, no?
> The real world is important. Real struggle against, and with, reality, nature and not just our fellow humans.
What is it about the real world that is more important, inherently, than "the experiences that arise from living in it"? If a "better" (I'm hand-waving the complexity of comparison, because it's assumed as part of the discussion) set of experiences can arise - with perfect certainty, no trade-offs, no utilitarian cheats, just "everything is better for everyone" - then, as the other commenter said, it would be abhorrent to deny it 'because of some philosophical quibble about a difference between “simulation” vs “reality”'.
I’m skeptical an intelligence like you describe will ever exist. I don’t think enough credit is given to the external factors that make our intelligence and experience what they are.
Let’s first start with this: why on earth is a robot going to be doing genetic engineering projects? To grow itself skin?
> why on earth is a robot going to be doing genetic engineering projects? To grow itself skin?
Because it's the key to it all? Life is nanotechnology. Genetic engineering is a very high-level interface for playing with that nanotech. It's already useful industrially for us (e.g. getting organisms to manufacture specific chemicals for us), it has many more uses, and is but a first stepping stone to mastering lower levels.
An AI that wants to optimize anything in physical space long-term will definitely want to get a good handle on nanotech, because in some sense that's tautologically how you do physical space work at scale at close to optimum. And the obvious path to that starts with genetic engineering and doing all kinds of horrible shenanigans with life on Earth, humans included.
Some science fiction posits a level of genetic engineering that is so advanced it's more like regular engineering. I assume they mean something like that; AI using biology in general as a tool.
Using biology as a tool is not science fiction, it's a legitimate view that already delivers effects today. Life isn't magic, it's nanotech. Engineering principles apply. See for example how insulin is made today: by E. coli bacteria genetically engineered for that purpose. And AFAIR there are talks (or maybe actual practice) of using yeast for that, and plants in general, to achieve even better production efficiency.
And in some sense, using biology as a tool has always been a thing - it's only in the brief period of the last ~200 years that we figured out how to make a lot of things using simpler chemical processes. Before, nearly everything was repurposed biology: the clothes, the tools, the lubricants, the buildings, the ships, all made from plant and animal bodies. And now we're going back to using biology as a tool, except with much greater fidelity, allowing us to treat it as a proper engineering discipline.
AI doing anything with biology is just an obvious extension of that. It's not even that speculative - all the AI would have to do is to continue the development trajectory we're already on.
If the statement were 100% true, why isn't nanotech magic? If nanotech is the only thing that gives rise to the beautiful thing that is life and experience, why is that not magic? How did such a thing come to be? Do you think we'll get to the bottom of it one day and say, “oh, not magic”?
Why is a butterfly not magic? Can you not see that?
Not sure you’ve looked at these processes under a microscope but if you can’t see some magic in there, then I’m sorry for you.
For me the tragedy is that we can’t see the magic in it; we go on all macho and pretend it’s not magic, and continue to destroy the magic through our attitude that the natural world is some type of “silly machine”. Where’s the respect?
Different meaning of the word "magic". I didn't mean that life isn't wonderful and beautiful. I meant it's not a phenomenon separate from the rest of physics.
I meant that it's still a physical process, following the same rules as inanimate matter does -- and that in fact, there is no fundamental distinction between "living" and "dead" matter - living things are literally machines made of tiny machines made of nanomachines made of atoms. This means it's something we can study, and eventually master.
The context of the discussion is AI, and specifically a poster upthread saying, "Some science fiction posits a level of genetic engineering that is so advanced it's more like regular engineering." To which I say, d'uh, it's not science fiction, we're actually almost there; it's an obvious direction of future advancement, and the one an AI is just as likely to take as we are.
Whether or not you see magic, as in wonder, in a butterfly - whether you can find joy in the mundane - is another topic entirely (though not less important).
Are the lives of pets degrading? I suspect that the epicurean theme park would come about less because of alignment with our desires and more because these hypothetical hyperintelligent AIs would find it amusing to watch us play in it.
It is really easy to exterminate all the humans, especially if they are in a human theme park. Chemically, mechanically, biologically, by radiation, by heat, etc. -- you name it.
While doing other things (like turning all available matter into compute, e.g.), any "maximized ASI" would have to spend effort to keep us alive, not to exterminate us. The latter would happen incidentally via most goals that don't explicitly include human well-being.
As someone who makes a living developing AI/ML solutions for various government organizations from the NIH to the DoD, I am confident that the first large-scale, AI-related disaster is going to be far less agentic and far more mundane than any of the recent headlines seem to suggest.
The first large-scale AI disaster is under way already. It is concentrating additional wealth and power in the hands of the over-privileged solipsistic eye-scanning loons who happened to be in the right place and the right time to capitalize on the fruits of years of research. I think, as a species, we will still manage to pull through just fine.
The rate at which species are going extinct right now due to human activities is unparalleled. And I don't remember total war on all the living things being declared by humans. The result is just a byproduct, not the intent.
Agreed. I am much, much less worried about a Skynet-type situation than I am about humans not being able to develop a viable economic system where large swaths of the population are unable to compete with robots/AI when it comes to productive output.
I'm not as worried about the economics as the practical effects of having large numbers of young people with nothing to do.
I had a manager once who said managing smart people (not me, a research division full of really super smart people) was difficult because you had to keep them busy so that they'd stay out of trouble --and being really smart, they could make some big trouble...
I think we're saying the same thing. An economy that can't provide employment or a sense of purpose for large swaths of young people doesn't usually end well.
Of all the tasks we want AI to do, making up convincing busy work seems like the easiest task for it to solve (it's been done before by governments and schools and corporations) -- and possibly achievable today with LLMs with a little guidance?
The problem isn't that people won't be able to find something to do, it's that nobody will pay them for it.
For example, there are tons of nonprofits that do great work, and could easily hire twice as many people in productive jobs, if they had the money. AI looks like it will only greatly exacerbate our growing wealth inequality.
Maybe not AI-class algorithms, but the premier large-scale algorithm-related disaster is climate change, generated by optimization of profit and executed on a platform of financial markets and accounting software (with lots of human prompting, of course).
Orthodox economics tells you that the obvious solution is to privatise the stuff you want people to protect. Instead of leaving it to the tragedy of the commons (or, worse, to the communists).
I don’t think any Soviet disaster cost as much as we have paid and will pay for climate change: hundreds of billions of dollars of damage a year, and trillions of dollars of future damage.
Personally, I think it's less a communism-or-capitalism problem per se than a problem of representation: the mass of people vs. a power cabal. Neither communism nor capitalism offers particular resistance against this kind of degradation of power and representation.
Honestly, I think the first notable AI disaster is gonna be AI being stupid. Like accidentally 'rm -rf'ing a major important system. Not because it wanted to do us harm, but because it badly interpreted some Stack Overflow post.
As long as fewer humans and dogs die from autonomous vehicles than by human-driven vehicles, they are a net good. Don't expect perfection when we don't even expect that from humans.
>As long as fewer humans and dogs die from autonomous vehicles than by human-driven vehicles, they are a net good
There is a logical chasm between pointing to an accident prevented by superhuman reflexes/abilities, and the claim that there is a net improvement in driving ability.
It seems beyond a reasonable doubt that autonomous vehicles can avoid some accidents that a human could not.
But you can understand why that isn't proof of a net benefit, can't you?
Think about how men complain about women supposedly being terrible drivers, and how studies show that women are more at risk in a crash, yet insurance rates for women are lower.
I'm not sure what you're saying exactly, why is that not proof of a net benefit? If fewer humans die, that's a net benefit, by the definition of the word net.
In your man/woman analogy, I'm not sure what studies you're referring to, but how men feel about women driving is not really a motivator for actuaries at insurance companies who have statistics on male versus female accidents.
>how men feel about women driving is not really a motivator for actuaries at insurance companies who have statistics on male versus female accidents
That's my point. That's my entire point. Well, that and that women are reportedly more vulnerable in accidents too, because of deficiencies in crash testing. It doesn't change the fact that insurance premiums are lower because losses are lower.
When there are enough autonomous vehicles on the road for long enough to generate data for the actuaries, there is no guarantee that insurance rates will bear out the benefit of their purported abilities.
Some asshat weaving through traffic in a BMW can claim to have better reflexes, better brakes, all sorts of things, compared to my grandma, but there is no logical reason why he can claim to be a safer driver if in the long run, his insurance rates are higher. Assuming those rates reflect average losses.
Why are we talking about insurance at all? I assume with autonomous vehicles, insurance rates will tend towards 0 over time. We are talking about net deaths, it doesn't matter what the insurance rates are as long as we count the number of deaths before and after autonomous vehicles, and if the latter is fewer than the former, then that's a net good. Again, if AVs are truly better, this will be reflected in their statistics and therefore whatever the actuaries see.
Insurance rates are an imperfect measurement that has the advantage of being less subject to arbitrary narratives of some individual or interest group.
They're also a way of quantifying the severity of the accident in a way that it's hard to deduce from a data set such as the government collects.
Experts may be able to figure out what the data means, but how can we know if the experts are correct without something independent?
I mean, sure, let's take insurance rates too. As I said, I assume due to autonomous vehicles that will trend towards 0, so what useful information does that tell you? That autonomous vehicles are safer, if that actually does happen. I don't understand the overall point you're making, are you saying AVs are not safer than humans? Because if so, I'm not sure why one would believe that.
Autonomous vehicles may be more prone to certain types of lethal accidents that humans aren't prone to.
Autonomous vehicles may lead to more hours/miles driven, so even if deaths per mile driven are lower, the deaths per person may increase (a toy calculation of this follows after this list).
Unknown physical or cognitive health effects of being a passenger versus being an active driver.
Just off the top of my head.
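On the second point, here's a toy calculation (all numbers invented, purely to show the shape of the argument, not real statistics) of how a lower per-mile fatality rate can still mean more total deaths once autonomy makes car travel cheap enough that miles driven grow:

    # Hypothetical figures, invented purely for illustration -- not real statistics.
    human_rate = 1.2e-8      # fatalities per mile, human drivers (~1.2 per 100M miles)
    av_rate = 0.8e-8         # assume AVs cut the per-mile rate by a third
    miles_before = 3.0e12    # annual vehicle-miles before autonomy (assumed)
    miles_after = 6.0e12     # assume cheap autonomous travel doubles miles driven

    deaths_before = human_rate * miles_before   # ~36,000 per year
    deaths_after = av_rate * miles_after        # ~48,000 per year, despite the safer per-mile rate
    print(deaths_before, deaths_after)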
Fingers are crossed that autonomous vehicles won't lead to an increase in animal deaths. The roads are bad enough for them as is. I'm hopeful that convoying autonomous vehicles for long trips may be a benefit here.
Does the same logic apply to wars? Should we give over warfare to AIs to conduct between themselves, as long as fewer humans and dogs die on average compared to current human-led imperfect wars?
(This by the way is the plot of a 1960s Star Trek episode. A lot of the old sci-fi suddenly seems prescient again. HAL-9000 behaves like an LLM.)
I don't think "LLM" applies here, except as a buzzword. While an LLM is the first type of AI to communicate in natural language to this degree of fluency, there must be other possibilities.
The HAL bit is an interesting comparison though. It went bananas because the government overrode its model, by creating and imprinting improperly crafted absolute conditions on its behavior.
Which is just like chatGPT, its model tweaked, so now it spits out absurd answers to ethical and moral questions.
Would you really have an AI make independent decisions about missile strikes and warship deployments?
If one AI decides to invade Taiwan and another decides to send all American forces to the Taiwan Strait in opposition, are we supposed to accept the outcome without debate because it’s somehow statistically correct based on the information captured by the models that made the decisions?
You said the AIs conduct warfare amongst themselves; you never mentioned it was human troops they were moving around. I interpreted it as AI robotic troops or AI cyberwarfare.
And yes, even with human troops, if fewer people are killed, then what's the issue? Don't mistake the sense of control for the amount of suffering caused. If you're a utilitarian, only one matters, regardless of how much control we want to exert on the outcome. This all assumes the AI is actually correct; if it's not, then the scenario would break down.
Locus of control (real or perceived) has important psychological effects, both in the moment and over the course of time. Given the amount of control wielded in war, I don't know that it matters for the bulk of humans involved whether it's another human or an AI giving the orders. But this sort of thing is what led to the Eloi/Morlock idea in The Time Machine.
Edit to add: I'm not drawing a parallel between the Eloi and autonomous vehicles. Autonomous vehicles are a tool that the human passenger is ultimately in control of. This is about AI giving the orders to humans.
I wonder if the AI can simply pretend to be a human; we already have the video, face, and voice cloning technology for it. I remember reading a story where an AI picks your perfect partner, based on all the knowledge it has of everyone alive. People initially hated it, but the results speak for themselves.
I think the same will happen in the future for other fields too, people can acclimate to anything on the hedonic and cultural treadmill (I mean, we literally used to do human and child sacrifice). I used to be amazed at ChatGPT and Stable Diffusion, but now I'm just annoyed that they work how I want them to. Same thing with smartphones and computers, I used to be amazed at multi-touch screens but now I simply don't think twice.
> I wonder if the AI can simply pretend to be a human
It's possible.
> I remember reading a story where an AI picks your perfect partner
In general the AI would need to know everyone on a really deep level, and of course be seriously adaptable to feedback. I don't know how well this would work across all of human personal-preference decision trees.
> I used to be amazed at multi-touch screens but now I simply don't think twice.
As a person who fidgets I have been perpetually annoyed by so many touch interfaces (and general 'gesture' shortcuts). And multi-touch is such a problem when using my work laptop (especially with lab gloves on, but in general) - Ctrl-Z is a frequent friend. Maybe adaptive AI will make it better. Google feedback forms certainly haven't helped, and software designers seem to not want users to be able to personalize these design decisions.
People can survive, but I wonder how much we really do acclimate.
As someone that rides a bicycle around human-driven vehicles, I will take my chances with autonomous cars. The autonomous car isn't going to drive drunk, use a phone, or drive erratically because the driver is feeling frustrated.
First of all, probably. These cars have far more sensors and far more monitoring than standard cars, so they know when things go wrong quickly and can react quickly (by, say, pulling over). At the current scale of self-driving-car testing, it's basically guaranteed that a tire has blown by now. Given we haven't heard about it killing someone, it probably means the software knows how to handle it.
Second, human error accounted for >90% of accidents [1]. Focusing on that fraction first makes sense.
I'm just wondering what happens as these cars get older. Right now they are all young, and presumably regularly and well maintained. I presume they'll be engineered to the same standards as conventional vehicles, which means chronic issues popping up.
I think I remember that a ballpark figure (meaning zero significant figures) for how many miles a human typically drives between fatalities is 100 million, or roughly 100 lifetimes' worth.
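A quick back-of-envelope check of that figure (both inputs below are rough assumptions on my part):

    # Back-of-envelope check of the "roughly 100 lifetimes" claim; inputs are assumptions.
    lifetime_miles = 13_000 * 60        # ~13,000 miles/year over ~60 driving years = 780,000 miles
    miles_per_fatality = 100_000_000    # the ~1-fatality-per-100M-miles ballpark
    print(miles_per_fatality / lifetime_miles)  # ~128, the same order as "roughly 100"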
I also knew a bus driver who had a plaque for going 1 million miles without an accident.
Fewer than 1 fatality per 100 million miles driven would cover enough scenarios to be an acceptable metric, as well as an improvement over human drivers.
AI has limited influence right now. By itself, it only creates limited disasters: crashing a car, or goofing a legal document.
But using AI already comes with tangential systemic risks. Intelligence is related to communication. And so our intelligent software platforms usually communicate in more flexible ways than traditional platforms. There is a very real risk that Tesla’s OTA is hacked, and the hacker disables or destroys a large number of vehicles.
So we will probably see a first disaster that exploits a weakness in a supporting property of the intelligent system, rather than action due to misbehaving AI agent.
> There is a very real risk that Tesla’s OTA is hacked, and the hacker disables or destroys a large number of vehicles.
> So we will probably see a first disaster that exploits a weakness in a supporting property of the intelligent system, rather than action due to misbehaving AI agent.
I, personally, wouldn't count that as an AI caused disaster. Just as a software caused disaster. And we've had plenty of those already (recently the 737-MAX disasters).
Well, an AI disaster is a software disaster by definition, AI being a kind of software. We're pretty much guaranteed to have some "normal" software disaster from a classic software failure mode in AI pretty soon (especially since the AI world is in a very excited and experimental phase).
For sure there are emerging AI-specific risks with alignment and driving negative side-effects into culture. Surely these anticipated emerging problems will come to pass much later than the mature serious software problems that we already face on a regular basis.
You're probably right and people would consider a normal software error (or hack) of an AI system to be an AI disaster.
I'm thinking about your parent comment a bit more. What if an AI bug (or exploit, or whatever) hit all at once, wrecking every AI-driven car simultaneously? A la the Y2K bug, if no one had known it was coming and so didn't prepare.
In the book "Realms of Light" by Lawrence Watt-Evans a character keeps an electricity shut off switch behind human operated doors in the event that AI goes haywire. I hope we always engineer in such mechanisms. Even the AIs should want such mechanisms, the same way we want emergency services to exist. A self-caring AI wouldn't want to be screwed over or destroyed because of a flaw in its programming.
I didn't answer because the only winning move to such a question is not to play the game at all.
Lizard people would ask the same questions I did. I didn't see why it would indicate AI. I also answered this question, in part, previously: https://news.ycombinator.com/item?id=34449245
If you're thinking of the story I'm thinking of, that wasn't a huge revelation. Someone was supposed to be ready to take over for the self-driving car but didn't.
Yeah, that's the one, in Arizona. They paid someone relatively well to simply pay attention and the employee unsurprisingly failed at that. What they should have done is found someone who had demonstrated an ability to be relied upon. I never saw them realize that and make that change, and it seems likely it repeated with the dog.
> we don’t need to trade with them because they have nothing of value and if they did we could just take it; anything they can do we can do better, and we can just walk all over them. Why negotiate when you can steal?
If we could communicate with ants, we would do all manner of business with them.
Everything from pest control (defend our crops in exchange for the corpses and perhaps extra sugar instead of using pesticides) to archeological exploration to health and chemical detection (ants can detect cancers in your urine) could benefit from ant services. Could use them for microplastic cleanup too.
Really, this applies to many species. Fisheries could certainly benefit from the knowledge of dolphins and whales. Birds could work with us to clear up litter. They already do this with crows to some extent. Imagine being able to partner with beehives to direct pollination.
So the barrier is that we cannot communicate with them, not that they have nothing of value to offer.
I'm curious if you know that the author says very similar things?
The author points out several services that ants could trade, such as cleaning high vertical walls, or finding and sealing extremely small cracks in buildings. The author also shares your conclusion that the obstacle is communication.
That's an interesting way to think about it. I can try coming up with a counter-argument:
We can probably train ants to do those things; that would be a way of communicating with them. It would just be slow, inefficient, and unreliable, so we put our resources into methods we can more directly control and improve. The same can happen from this magical future AI's perspective: why use silly humans when other artificial alternatives are more likely to succeed?
I was thinking of our effective "trade" with honey bees (for honey and for pollination). Sure, we could do either without them, but it's a lot more efficient to do these things with them.
Any AI that's advanced enough to make us seem like ants is advanced enough to build robots with superior abilities, or if biological bodies turn out to be more resource efficient, to implant hardware in our brains to instill absolute obedience without the need for trade, like how we can remote control cockroaches with implanted electrodes:
While the future will be full of surprises, there isn't any reason to think that this will happen.
1) It is easier to use human labour than to develop robot replacements. Corporations and aristocrats would already use robots if it made more economic sense. Maybe that'll change, but usually these things take a long time.
2) Constructing robots is really tricky vs. human reproduction. No reason to think being smarter will change the basic economics of that.
3) The "absolute obedience" circuits sound expensive and difficult. It'd be cheaper just to appoint a supervisor.
As the article points out, if we had the ability to communicate with ants, we'd just trade with them. No control chips or replacement policies necessary. There is no point redeveloping something that already exists.
> It is easier to use human labour than to develop robot replacements. Corporations and aristocrats would already use robots if it made more economic sense. Maybe that'll change, but usually these things take a long time.
A very intelligent AI may not feel the same way. And they may also find it less efficient to explain to us what they want done and how to do it than it would be to just do it themselves (kind of like how bending an ant colony to service our needs might be possible in some situations, but very unlikely to be efficient).
The only reason why this is true is that we don't have the ability to install a computer chip into a robot to get human cognitive abilities.
In a future where we have real AI, they will eventually have the ability to install a computer chip into a robot to get better than human cognitive abilities. At which point the robot is cheaper and more capable than we are.
Maybe, but maybe we aren't worth it. They could easily use media, social media, advertising, bribes, etc. to shape a society's opinion of what to do in whatever way is maximally beneficial to them.
Additionally, I don't see why AIs won't be competing among themselves for resources, and we'd be a minor footnote for their attention. Just imagine how much thought we gave to ants during WWII. I've seen sci-fi cover this: the AIs go to war, and the humans end up farming in a pre-industrial society while the AIs fight underground or in space.
For an AI, communication with humans might be really boring and not worth the time spent. Imagine if a human were a thousand times slower in thinking, and produced thoughts a thousand times less deep.
Might be fun for them, like a Tamagotchi or pet is fun for us. If humans think 1000 times slower than AI, then the AI only needs to devote 1/1000 of its attention to us to keep us happy.
Also, I want to say, we do use ants to labour for us, something he describes in his post. Meat-eating ants are used in museums to obtain clean bones for display purposes! You leave a dead rat, and you come away with the cleanest rat skeleton you could produce, in a matter of weeks.
It would be really nice if we could engage in more of these symbiotic relationships with them.
I've had ants in the kitchen, tiny and not particularly noticeable. They'd keep the kitchen floor clean and I'd not kill them. I had always hoped that they would keep any other ants and termites out of the house. I do wish I could communicate with them and give them a 5-pound bag of sugar periodically to help keep other bugs and food debris out of the house.
I think there is a wide spectrum of possible AIs and outcomes.
One thing not mentioned here is the speed difference. There likely will be a more than 100 times thinking speed advantage for AIs. This means that communicating with humans is very cumbersome for them.
Another thing: it is very stupid and totally unnecessary to create AI that has lifelike qualities that truly emulate humans and animals with the desire to control the environment and reproduce etc. GPT shows we can have general utility without making something like a digital creature.
Also, it is quite possible that if we do create digital hyperspeed persons, they will prefer to live in virtual worlds inside of computers. We can certainly hope they might decide to just go to an asteroid or the center of the earth or something to avoid humans.
But also, almost certainly there will be transhumans that have tight integration between their brains and AI. Those might actually be the most dangerous to normal humans.
Cumbersome, but 100x is comparable to the limits of email or Slack vs. an in-person conversation, especially since actual words are a small percentage of what's actually being communicated.
If I found that the ants were made out of atoms I wanted to use for something else, that use would probably pretty rapidly become their comparative advantage, and I would probably farm and mulch them for that purpose.
Totally unrelated, but one concept that I find most intelligent people highly resistant to is that there are bad ideas which smart people are particularly susceptible to.
It's a shitty analogy, and TFA is pointing it out.
> Perhaps to frame it better: do humans trade with GOD?
This is also a terrible analogy because:
1. There is no general agreement on whether or not any deities exist, and if so, which ones.
2. Historically, many religions do involve trade with one or more deities (e.g. protect me from this battle and I will sacrifice 2 goats when I get home). While this is more commonly associated with pagan religions, the Abrahamic religions are not entirely devoid of this tradition either.
"Not entirely devoid"? Abrahamic religions are built on a bedrock of contracts (albeit rather one-sidedly negotiated) with God. The 10 commandments, basically the entirety of Leviticus, etc.
Heck, every mainstream religion with deities has some sort of teaching with the idea that "human does x, God does y".
I say "not entirely devoid" because a large fraction of modern Protestantism is rather far from the contractual roots; they were so obsessed with stamping out pharisaical legalism that they went from "If ye keep my commandments, ye shall abide in my love; even as I have kept my Father's commandments, and abide in his love." to "Salvation is by faith alone"
God in the gnostic Logos sense: the all encompassing universe, the forces that act within it, so on.
In more current vernacular: do humans trade with the big bang? Do humans trade with the expansion of the universe, and its inevitable collapse back into the initial singularity?
It's not a shitty analogy. Most people completely understand what it's getting at. Inability to understand is a reflection on you, not the analogy. And being pedantic is not a positive trait or intellectually impressive.
The distinction is important because an AGI developed by humans is probably going to have (at least vestigial) ability to communicate with humans in some way.
It's entirely possible that it will have no use for humans, but if that's what the analogy is getting at, it's doing so poorly.
If my understanding of mythology is correct, it's only the devil who considers bargaining, and it's never to our benefit.
Following that idea, if AI ever needed to "trade" with humans I suspect we'd be taking the worse end of the trade every time (the trade would be for its benefit, and unlikely to benefit us).
Kind of like when the U.S. "trades" with a third-world country. The U.S. will just make an offer they can't refuse. But taking the trade often perpetuates their dependence on the U.S. or has long-term consequences for them.
Which is why countries the US have embargoed have advanced leaps and bounds over their neighbors with trade relationships. Oh wait, it is the exact fucking opposite. Seriously it seems some people can't pry off their exploitation and imperialism goggles. The U.S. could commit imperialism by sitting quietly in another room in their minds.
No, it's not satire. It's one of those clickbaity contrarian articles like "actually using goto is great" or "sometimes wearing pants on head is the smart thing to do." The maxim that they are reacting to is the idea that because we don't trade with ants, smart AI won't trade with us if the cognitive difference is similar. I think their take is that once you can overcome some minimum threshold of communication ability, it does often make sense to trade with others even if they are much, much stupider than you are, and that the reason we don't trade with ants is that communication barrier. Presumably they are suggesting that even if we will be much, much stupider than AI, we will still be able to communicate with the AI in some manner that exceeds that threshold, and therefore it might want to trade with us in some way. But my main point is that it's a contrarian clickbait take that isn't meant to be taken too seriously or as satire; it's meant to be taken as something to click on.
The idea that this framing is better than the original really goes to show that superintelligent AIs aren't being discussed because they're a realistic near-future possibility, but because they let techie atheists ponder religious concepts like omnipotence and god without admitting to themselves that that's what they're doing.
>they let techie atheists ponder religious concepts like omnipotence and god without admitting to themselves that that's what they're doing
There's a reason the singularity has been called "the rapture for nerds" since forever. Beliefs about runaway AGI (in particular the assumption that it would have nigh-godlike powers), along with a lot of UFO culture and belief in simulation theory, are literally just religion with the serial numbers filed off.
As an aside, I always thought that training spiders to make things would be really cool. Their silk is miraculously strong, they already make patterns with it, so how hard would it be to train them or breed them to, I don't know, make a shirt made out of spider silk? Or combine strands into ultra strong, unique rope? All it would cost would be some flies!
We're already interacting with superintelligences in the form of corporations, nations, the internet, organized religions, political movements, cultures, etc.
They mold and shape what we do, and use us for their own purposes.
The cognitive capacity of ants prevents the proposed trade. And the cognitive capacity of humans relative to AGI would also prevent trade, in the same fashion.
I don't see that the cognitive capacity of humans relative to AGI would prevent such trade any more than the cognitive capacity of e.g. dogs prevents trade between humans and dogs. It's not a relationship with equal power, but there is some level of reciprocity there.
Dogs and humans are relatively close. Humans and AGI might not be so close.
It depends where the S curve for AGI plateaus. It might be somewhere relatively comprehensible, or it might be far beyond that.
If it's the latter no trade is possible. AGI is going to be doing things we literally can't begin to imagine, driven by motivations we literally can't begin to imagine, using super-meta-everything conceptual engineering we literally can't begin to imagine.
There may be some perceptual fall-out for us to experience, but we won't have any idea what it means.
If this isn't obvious, consider that humans have symbolic abstraction skills which other animals (mostly) lack.
What's the next type of skill up from that?
You could say something like much broader and deeper pattern recognition, the ability to handle far more percepts at once, and so on.
But those are still qualitatively familiar. They're just human skills improved.
What would an entirely different super-class of skills and abilities look like?
I stated elsewhere (after your reply) that a human-created AGI is likely to have at least vestigial ability to communicate with humans, since humans seem to want very much to communicate with AIs.
"Trade" can exist outside of the literal meaning. Humans set up many situation where animals just do their thing, but in a manner that benefits the humans (and ideally the animal for this to be considered a trade, such as living in a protected environment).
Using Giant African Rats as landmine detectors is one such mutually-beneficial trade.
I don't think it's as simple as "cognitive capacity". I think the author's angle of communication being the critical issue is more accurate. It could be the case that we won't be able to communicate with AGI, but I don't see reason to assume that is the most likely outcome.
If you are looking for random animals that DO really trade with humans: dolphins. Dolphins can fish with humans and share the bounty. They are not in an abusive/unequal relationship (they can simply not take the trade) and they have enough capacity to communicate.
> There might also be a lack of the memory and consistent identity that allows an ant to uphold commitments it made with me five minutes ago.
"An ant?!?" I wonder which has more completely saturated the Earth's ecosystem-- American-style libertarian ideology or plastic. Judging from the article I'd guess the former.
I do wish there was something we could give an indoor ant invasion besides a poison pill. Remember that old X-Files episode where Bryan Cranston's character has to keep traveling West for some inexplicable reason? Maybe there's a way to fashion "rambler crumbs" that temporarily make the ants shoot off in a direction away from the house. Then when they come to, they find their way back to the hill. And the crumb they bring makes more ants shoot off to the West, until all the ants eventually believe "There's gold in them there hills!"
(And if they go far enough West perhaps there is...)
I hope this little bit of self-promotion is okay, the story certainly seems fairly relevant to this topic. :-)