Last week I talked about missing out on the next big revolution in fiction, and how that can make future fiction hard to write believably. However, if you thought I was going to go so far as to predict the impending technological singularity, you’re wrong.
The supposedly approaching technological singularity is a point of exponential technological advancement that changes the game so much that we cannot really see past it, and depending on the exact definition, I’ve seen it predicted to occur as early as 2011 and as late as 2050.
Well, I disagree. Depending on the more precise definition of this technological singularity, I say maybe, no, and Hell No. If you’ll bear with me on this rather long entry, I’ll explain why.
AI: the Easy Singularity
The tamest definition of this technological singularity is that we will create a computer intelligence that is more intelligent than the smartest humans. On the face of it, this seems believable. Given the advances in computational power that Moore’s Law has charted over the last fifty years, it might even seem inevitable.
In specific areas, we have already reached this. Notably, computers can play some games perfectly, i.e. they cannot be beaten. For other games, they can beat the best human players. Chess was a recent and notable triumph for the silicon team. But they are still losing other games to human players. (See this informative and humorous XKCD comic.)
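To make “plays perfectly” concrete, here’s a minimal sketch of the standard minimax idea for tic-tac-toe, a game small enough to search exhaustively. The Python below is my own illustration, not anything from the comic: it simply tries every possible line of play and assumes both sides respond with their best move.

    # Minimal minimax for tic-tac-toe: a sketch of what "plays
    # perfectly" means, i.e. exhaustively search every line of play.
    # Board: list of 9 cells holding 'X', 'O', or None. X maximizes.
    WINS = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

    def winner(b):
        for i, j, k in WINS:
            if b[i] and b[i] == b[j] == b[k]:
                return b[i]
        return None

    def minimax(b, player):
        w = winner(b)
        if w:
            return 1 if w == 'X' else -1
        moves = [i for i in range(9) if b[i] is None]
        if not moves:
            return 0                                   # board full: draw
        scores = []
        for m in moves:
            b[m] = player                              # try the move...
            scores.append(minimax(b, 'O' if player == 'X' else 'X'))
            b[m] = None                                # ...then undo it
        return max(scores) if player == 'X' else min(scores)

    print(minimax([None] * 9, 'X'))                    # -> 0: a forced draw

Run from an empty board, it prints 0: with perfect play on both sides, tic-tac-toe is always a draw, which is what “cannot be beaten” means in practice. Chess is the same idea with a game tree far too large to search this naively.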
But skill at games is not the only measure of human intelligence. Visual and speech processing are still difficult for computers, though they are improving. Creativity is hard to measure, but with the exception of some isolated problems, computers have not shown much creativity. A sense of humor still seems a long way off. The brass ring, of course, is the self-aware computer. That’s the real cogito ergo sum moment.
If Moore’s Law continues, we may reach the required processing power within the predicted timeframe, but I foresee a couple of problems with the hyper-intelligent computers of the singularity prediction.
The first problem is that reaching the level of even human intelligence is probably harder than it looks on paper. It’s about more than just processing power. Specifically, it’s going to require that we reach an understanding of how human intelligence works in the first place, and we’re simply not there yet. How is it that the sporadic firing of neurons translates into the subjective experience of sentience? How important is the structure of the human brain that has evolved from more primitive brains? How do the chemical regulators keep our neural nets in good operating condition? It’s not just a matter of connecting enough transistorized neurons and flipping the switch. There’s structure and a billion years of Darwinian design at work.
The second problem is this notion of hyper-intelligence. Briefly consider the qualitative vs. quantitative aspects of intelligence. A human is qualitatively more intelligent than a lizard. He thinks about problems, designs tools to solve them, and ultimately eats the lizard. Mmmm, that’s good lizard. Some humans (my wife, for example) are quantitatively more intelligent than I am. She can solve mathematical problems much faster than I can, but given enough time, I’ll get there eventually.
The upward ramp of Moore’s Law gives us a lot of hope for computers that could be quantitatively more intelligent than humans, but I don’t think it automatically provides a qualitatively higher intelligence. Certainly, the old Church-Turing thesis is often interpreted to suggest that any calculation that can be performed (and that would include the human experience of consciousness, if consciousness is a calculation) can be performed by a Turing machine, and that interpretation is one of the strongest arguments that increasing computing power will lead to human-level computer intelligence. However, Turing’s own work also makes it clear that there are some problems (e.g. the halting problem) that are beyond the ability of any Turing machine, no matter how fast it runs.
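For the curious, Turing’s halting-problem argument fits in a few lines. Here’s a sketch in Python, where would_halt is a hypothetical oracle I’ve invented for illustration; the whole point of the argument is that no real implementation of it can exist.

    # Sketch of Turing's halting-problem argument. would_halt is a
    # hypothetical oracle; the argument shows it can never be built.
    def would_halt(program, arg):
        """Pretend oracle: True if program(arg) would eventually halt."""
        raise NotImplementedError("no such oracle can exist")

    def contrarian(program):
        # Do the opposite of whatever the oracle predicts about
        # running program on itself.
        if would_halt(program, program):
            while True:          # oracle said "halts", so loop forever
                pass
        return                   # oracle said "loops", so halt at once

    # Feeding contrarian to itself forces the contradiction: if the
    # oracle says contrarian(contrarian) halts, then it loops; if it
    # says it loops, then it halts. Either way the oracle was wrong.

And no number of Moore’s Law doublings changes that result; it’s a limit on what kind of reasoning the machine can do, not on how fast it does it.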
Thus, it seems to me that while computers may become significantly quantitatively more intelligent than humans, there may be a real upper limit on qualitative improvements in intelligence. What would that look like? I would expect it to be like talking to someone who knows pretty much everything and can answer hard questions quickly, but they would still be just as clueless as we are on questions like “will I be happy with Sue?”
Of course, the zeroth problem with all this is that Moore’s Law may not continue long enough to reach this singularity. For the last twenty years, I’ve been reading predictions that Moore’s Law only has another five to ten years left in it. Eventually, they’ll be right. I’m not saying that we’ll never get the required processing power – after all, evolution managed to crank it out – but we might have to give up on the notion of getting exponential gains on a linear timetable.
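For a sense of the arithmetic, here’s a back-of-the-envelope sketch. Every figure in it is an assumption for illustration; the 10^16 ops/sec brain estimate is one popular futurist guess, not settled science.

    # Back-of-the-envelope Moore's Law projection. Every figure is an
    # illustrative assumption, not a settled number.
    import math

    current_ops  = 1e13   # assumed: ops/sec of a strong machine today
    brain_ops    = 1e16   # assumed: one popular estimate of the brain
    doubling_yrs = 2.0    # classic Moore's Law doubling period

    doublings = math.log2(brain_ops / current_ops)
    print(f"{doublings:.0f} doublings, ~{doublings * doubling_yrs:.0f} years")
    # -> 10 doublings, ~20 years... IF the exponential holds that long.

Which is why the predictions cluster within the next few decades, and why everything hinges on that IF.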
So, will we see this easy singularity of artificial intelligence by 2050? Ehhh, maybe. Maybe not. I think we’ll see it eventually, but I don’t know if we’ll ever get that qualitative advance.
Impenetrable Wall: the Medium Singularity
More advanced definitions of “the singularity” typically say that once we build these hyper-intelligent computers, they will change the world in ways that we cannot imagine, and hence, we cannot see into the future past that event. After all, we only have normal intelligence, so how can we possibly guess at where hyper-intelligence is going to lead us?
Personally, I don’t think that gives human imagination enough credit. Scholarly study of this question leads to possibilities ranging from utopia to human extinction and everything in between. Utopias are fairly easy to imagine, though the road to reach them is hard. Human extinction has been over-imagined, from The Terminator to The Matrix. I think we’ve also seen plenty of in-betweens. One of my favorites is the Poul Anderson series ending in The Fleet of Stars, where hyper-intelligent computers simply want to manage humanity into a safe, peaceful, and boring existence.
We don’t seem to have any trouble imagining futures with hyper-intelligent computers, and let’s face it, we’re getting by in this area on the oddballs, the kooks, the SF writers. Put some serious policy wonks on it, and we’ll soon be talking about the best tax strategies to manage Skynet’s homicidal rage.
Ah, but it’s not enough just to imagine the possibilities, is it? In order to foil this aspect of the singularity, we have to predict what’s going to happen beyond that impenetrable wall of exponential change. How on earth can we lowly humans do that?
Well, we can’t.
But we can’t predict what’s going to happen on this side of that impenetrable wall of change either. Who’s going to win the U.S. presidential election this fall? Will Iran build a nuclear bomb or fall to a populist revolution? Will wireless broadband ever reach parity with physical cables for the last-mile problem of connectivity? Will solar panels ever get cheap enough to drive us towards a privately-owned distributed power system, and if so, when? Will the Cubs ever get back to the World Series?
The only thing I can grant the singularity camp is that predictions beyond the achievement of hyper-intelligent computers will be more difficult, just as any significant change makes predictions more complex. The creation of the personal computer threw technologists for a loop. Ditto with the creation of the Web. However, some things remain the same, no matter how much change we throw at them. Top among them is human nature.
My predictions for a post-hyper-intelligent-computer world: Humans will be noble but petty. They will be greedy and charitable. They will love, and they will hate. Fathers will want to play ball with their sons, and daughters will declare that their mothers have RUINED THEIR LIVES!!! These things haven’t fundamentally changed in ten thousand years. The arrival of hyper-intelligent computers, friendly or not, won’t change them either.
Unless…
Post-Humanism: the Really Hard Singularity
In fairness to the original singularity camp (Vernor Vinge, etc.), this kind of thing was not in their definition of the singularity. They were making what they felt were reasonable predictions up to the point where they felt they could no longer make such predictions. They didn’t sign on for humans becoming immortal demi-gods.
But I include this here because enough post-humanists (or trans-humanists, take your pick) have hitched their miraculous transformations onto the computing singularity bandwagon, and they’re making predictions in the same timeframe as the computer singularity folks. What’s more, I’ve run into too many woo-woo technology lovers who have looked at a few exponential charts and convinced themselves that the techno-rapture is at hand.
So, what the hell am I talking about here? Some folks believe that we’re on the verge of changing human nature in big ways. The most aggressive think that we’re going to download our minds into computers at the earliest opportunity, shedding our physical bodies like gas-guzzling SUVs. Others think that life-extension is advancing rapidly towards the point that life expectancy will grow by more than one year for each year that passes – effective immortality, even for those of us alive today. Still others think that we’re a generation away from engineering children who are as far in advance of us as the hyper-intelligent computer is ahead of my laptop.
To which I say: Bullshit, not likely, and not soon.
The notion of downloading into a computer has been around for a while. I can’t say when I first ran into it, but when I saw it dealt with in SF (again by Poul Anderson), it seemed an old concept to me. Old, yes, but practical, no. The first objection I’ll throw out there is the set of technical problems: non-destructively reading a brain’s complete state, building an electronic system that can match it, and duplicating all the chemical support systems electronically. But they landed a man on the moon, so I won’t make it a sticking point.
The second problem, though, is a messier one. Would you really want to live as a computer? In most of the ways I’ve seen this envisioned, the downloads live virtual lives with no physicality. Perhaps they interact some with the physical world, but only at an intellectual level. Is that really enough for you?
I direct your attention again to that games diagram from XKCD. One of the games that computers will never play better than humans is “Seven Minutes in Heaven”. I think a human mind living in a computer would go mad without the comfort of physical touch, without the sensation of the wind and the rain, without the taste of food or the smell of freshly cut grass. I believe this goes beyond a mere craving. I think our minds need that physicality. It’s part of who we are. We are animals of flesh, not free-floating motes of intellect.
We could, of course, turn ourselves into robots, but they would have to be exceptional robots. More properly, they would have to be androids with at least all the senses and capabilities we have today. Again, that’s another technical challenge, but I’ll waive it here in Wonderland. Still, if we do manage all this, how different is our human nature? Haven’t we just turned ourselves into immortals with an off-site backup?
That brings me to the second notion of post-humanism: biological immortality through life-extension techniques. Again, there are technical problems, though before I waive my objection, let me point out that we know far less about manipulating biology with precision than we know about silicon, and there’s no Moore’s Law pushing us along here. Still, life expectancy is increasing. How far can it go?
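The post-humanists’ key threshold here even has a name, “longevity escape velocity”: the point where medicine adds more than one year of remaining life expectancy per calendar year. A toy model shows why that threshold is everything; the 0.2 and 1.1 figures below are numbers I invented for illustration.

    # Toy model of "longevity escape velocity". Each calendar year burns
    # one year of remaining life expectancy; medicine adds some back.
    # Both figures below are invented assumptions for illustration.
    def years_left(remaining=40.0, gain_per_year=0.2):
        years = 0
        while remaining > 0:
            remaining += gain_per_year - 1.0   # one year passes
            years += 1
            if years > 1000:                   # never runs out: escape velocity
                return float("inf")
        return years

    print(years_left(gain_per_year=0.2))   # -> 50: mortal, just slower
    print(years_left(gain_per_year=1.1))   # -> inf: effectively immortal

Below the threshold, better medicine just stretches the clock; above it, the clock never runs out. My skepticism is about whether we can cross that line for people who are already up and moving.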
The real problem is that we’re kind of fighting evolution here, or at the very least, evolution is not our friend in this case. We’ve been bred to breed and then die. Pass on our genes to the next generation, and evolution is done with us. At best, we’re useful to make sure that our genes continue on to a second or even third generation, but before long, we’re standing in Darwin’s way.
So we’ve been designed to not last that long, or at the very least, we’ve not been designed to last that long. Planned obsolescence at the genetic level. To get around that, we have to solve some problems that evolution has never bothered to try, and we’re trying to do it for people who are already up and moving. Personally, I think we’ve got a better shot at downloading into androids.
But perhaps that third post-human notion has some merit, eh? Design our kids to be immortal, immune to disease and age and any human frailty we want to edit out. How about that? We’ve mapped the human genome. Let’s start writing some new code.
I don’t think so, and for once, I’m not going to put the strongest barrier at the technical level – though be assured, that’s no cakewalk either. Instead, it’s human nature that’s going to slow us down, and ironically, I think it will be parents’ love for their children that will limit the gifts that we give them.
Think about it. You and your spouse are about to start your family. This is today, or perhaps it’s a year after the hyper-intelligent computers have dropped by to say “dude”. Now a doctor tells you that he wants to significantly rewrite the genetic code of your offspring so that the child will be smarter, healthier, and immortal. “Sounds great,” you say, but then you ask the first question any potential parent would ask. “How many times have you done this?”
“Ummm… well, never. You’ll be the first.”
“I’m sorry, but you need to get the fuck out of my house.”
Sure, sooner or later, someone would give it a shot, but 99.99% of parents would wait until that first 0.01% had grown up and designed some kids of their own. Then maybe another one or two percent of that next generation would try it. It would grow, generation by generation, until there would be a tipping point of everyone doing it, and the poor would be demanding universal genetic health care. But it would not happen overnight, and it sure as hell won’t happen in the next couple of generations, as a number of folks are predicting. This will take a century or more, especially for some of the more radical proposals.
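That generation-by-generation arithmetic is easy to sketch. The uptake rates below are pure invention, loosely following the guesses above, but they show how even fast-compounding adoption still takes a long time:

    # Toy adoption curve for germ-line engineering. Uptake grows only
    # after each 25-year generation proves the technology out. The
    # starting rate and growth factor are invented for illustration.
    adoption = 0.0001              # the daring 0.01% who go first
    years_per_generation = 25
    generation = 0

    while adoption < 0.5:          # tipping point: half of parents opt in
        generation += 1
        adoption = min(1.0, adoption * 4)   # assumed: fourfold growth per generation
    print(f"~{generation * years_per_generation} years to the tipping point")
    # -> ~175 years, even with aggressive fourfold growth every generation

A century or more, in other words, and that’s assuming every generation’s results look better than the last.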
Still, all three of these post-humanist scenarios fail, I think, to rebuild that impenetrable wall of unpredictability. People will still be people, even if they’re androids or immortal meat-bags. We can hope that they will be better people, but we’ve already known better people: Mother Teresa, the Dalai Lama, Martin Luther King Jr., and of course, Tom Landry. (Go Cowboys!) We can readily imagine stories in a world filled with these types, just as we can imagine worlds filled with their opposites. Utopian and dystopian fantasies are a staple of the SF genre.
So, no, we’re not on the verge of some biotech rapture which blinds us to the future.
Story-telling: the Non-existent Singularity
But as much as I may pooh-pooh the likelihood of any of these singularity events, I don’t ignore them. Even if they never come to pass, they’re fun ideas to play with, simply because we SF geeks like to think about odd scenarios and then ask, “What happens next?” Because they postulate such a different world, we’re drawn to the other side of that impenetrable wall to explore, have fun, and tell stories.
It’s because of that imaginative drive that I don’t think any change will ever present us with an impenetrable wall.
And I also think it’s that same drive that gives us any chance of ever reaching those theoretical walls in the first place.