The title may seem fatalistic, yet Kevin Kelly’s latest best-seller, The Inevitable, is anything but. Futurist, environmentalist, and philosopher, Kevin Kelly is someone who illuminates our future—in every sense of the word—and proposes solutions. Privacy, artificial intelligence, media, technological trends… Kelly’s piercing analysis of our future shakes all our certainties.
You’re pretty active on Twitter, and you’ve pinned this sentence: “Over the long term, the future is decided by optimists.” Isn’t it a bit optimistic to say that? Are there not other forces at work?
It’s a fair question! Especially when I’m writing that in the context of The Inevitable. First of all, my reason for optimism is historical. If we look at the scientific data on the human condition—and not the news!—in any realm, it has been getting better over the last 200 years. But the difference is very, very tiny each year; it’s a fraction of a percent, or 1% at the most. It’s not visible on a year-to-year basis; you have to look back over time to see it. If you take a long view, it’s very clear that there is progress, which is optimistic.
Can anybody really decide? I do believe that the march of technology is inevitable. Once you have electric wires and switches, you will have telephones on any planet that has discovered the principles of electricity. And when you make telephones, you will eventually make the Internet. That’s inevitable. But what kind of Internet? What kind of telephone system? That’s not inevitable. There are a lot of choices that we have: whether it’s an open system or a closed system, a national or international one, commercial or non-profit. Those are wide-open decisions and they make a huge difference. Yes, we could decide, in a kind of naive way, to make an Internet that is very expensive and non-democratic, or one that is open. We have a choice. When we make optimistic decisions, we’re able to make a better future… And we do keep making slightly better decisions over time. That’s why I believe that, over the long term, optimists are actually writing our future.
With all the dystopias that we are seeing in science-fiction movies, it seems that we are lacking a positive vision of the future. What can we (or should we) do to create a more positive vision of the future?
We’re confronted with the reality that dystopias make better movies and stories. They reduce the world to good and evil, winners or losers. This simplified version is very appealing to us. And the people making them are very good at it. The downside is: the images of the world that these stories convey come to dominate our imagination of what is possible.
I think there is a hope that this is just a phase, that people who make them will eventually try to show us a world that is inspiring and friendly. It’s difficult because, as we get older and more used to technology in our lives, we come to understand that technologies have costs. They introduce almost as many problems as they introduce solutions. That’s sort of known now amongst people who observe how things happen. I don’t advocate this dystopian world, nor a utopian one. I advocate a protopian world, where we have 1% improvement every year. That’s not very cinematic! But it’s a challenge.
I’m giving my energy to try to describe a world that people would actually want to live in. I put forward some scenarios— like in The Inevitable. Maybe if more people did the same, we could actually persuade everyone that it is worth trying to make it happen.
Could you summarize some of the long-term trends that you believe will change our future?
One of the things that I would say right away is: most of what is going to happen is not new. Just as most of the technology in our life today is not new technology, it’s old stuff: concrete, wood, plumbing… Stuff that was invented before we were born. In the future, most of the technology that surrounds people will also be things that were invented before they were born. The new stuff is a small percentage, but it comes to dominate the conversation – which is the dumbest way to think about it! So, most of what will happen, most of the forces operating in the future, will be old forces.
I make the claim that you will not be able to find any invention on this planet that has gone extinct globally. It all continues. It’s an additive process. There are probably more people making stone tools by hand today than there were 100,000 years ago, just because of the population. All the old crafts like blacksmithing, glass blowing… people are still practicing them the old way. 100 years from now, there will still be people who have laptops and smartphones. The forces that are going to determine our future are already here; they are already operating.
In this digital environment, we will continue to increase the demand for sharing (collaboration, coordination, cooperation). We will continue the shift from owning things to having access to them—subscribing to them. We will continue to cognify—to make everything that surrounds us smarter, some of it very, very smart. We will continue to remix everything in new ways. Most of the new things will be remixes of old things. There are twelve general trends that I talk about in The Inevitable; they are all happening right now, and will continue to happen over the foreseeable horizon of ten to twenty or thirty years.
You have said before that “ubiquitous tracking of our lives is inevitable”. Do you think that the death of privacy is inevitable in the digital age?
I think “privacy” is a word for which we don’t have a good definition. We use it in a lot of different ways. We think we know what it means, but is it really clear? In a sense, a lot of what I do, like walking in the street, is public. The assumption is that nobody cares about it, that nobody is tracking my movements. But they could! So when I go out, is it a private or a public act? Once we start to examine the meaning of what we think “privacy” may be, we realize that it’s a very complicated, multifaceted thing. It probably doesn’t really mean what we think it means. I think privacy as we imagine it never existed to begin with—and it is certainly not going to be compatible with our lives in 30 years. The final thing I would say on the subject relates to history. For hundreds of thousands of years, humans evolved in an environment where everything that they did was known to everybody else. That was the natural state of our evolution, and we were comfortable with it. So, one: I think the idea that nobody should know what we do is quite recent. It’s kind of a utopian idea, because it never really existed, or only for a few people—like Ted Kaczynski, who was a hermit living in a hut in the mountains with no contact. Two: it’s not entirely feasible, or even wanted.
The funny thing is, we have created a paradox in our behavior. We want people to know about us as individuals, to care about us, to respect us. And that actually trumps our desire to remain hidden. What we need to do (and haven’t done yet) is to invent ways of getting personal attention and sharing our lives that are comfortable and symmetrical. Today, we have a situation that is not symmetrical, where they—governments, corporations—know a lot more about us than we know about them. I think it’s that imbalance that causes our current discomfort. What I call for—and what David Brin has named The Transparent Society—is restoring that symmetry, to some extent by using technology. Mutual watching, mutual observation, mutual viewing—we could all benefit from sharing in both directions.
In your book you talk about how “every fact has its anti-fact.” How do we advance in a world filled with as many anti-truths as truths, and as many alternative facts as facts? Where do we go from here?
This is the question of the hour! It’s a real issue and a real challenge. Some of the principles we took for granted in terms of knowledge exchange and news aren’t working anymore. I advocate for a truth-signalling layer to be added to the Internet, which would have to be deeply rooted. You could almost imagine an Internet referencing system for truth. Referencing is the way that Google makes its billions. When you make a search, they basically show you a webpage that they say has a high reputation, a high reliability to answer your question. It’s a network reputation. The ranking doesn’t come from a central judge or committee; it comes from the web itself. That is also called citation indexing. We can extend that kind of indexing to whether something is truthful.
My theory is that facts are all networks—they don’t stand alone; they are linked to each other. The more a piece of knowledge can be linked to other pieces of knowledge in agreement, the more reliable it is. That’s how science works. If it doesn’t fit, that’s a problem. In the same way, we can introduce a kind of ranking for truthiness on the Internet, based on whether or not a fact is backed up by others. For example, most people have a consensus that London is the capital of England. The people and sites which confirm that are themselves reputable, and that gives it an increasing probabilistic ranking of truth. All of these probabilities would enable a ranking system, saying that “this fact has a 90% probability of being true”. We could institute that into the system over time. I don’t think it would eliminate all fake news, but it would certainly help.
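The mutual-reinforcement idea Kelly sketches here—reputable sources boost the facts they confirm, and well-confirmed facts boost the reputation of their sources—can be illustrated with a few lines of code. This toy sketch is purely my own illustration, not any real system: the function name, the toy claims, and the iteration count are all assumptions.

```python
def truth_scores(claims, iterations=20):
    """Toy network-reputation scoring: `claims` maps each source name
    to the set of facts it asserts. Returns relative truth scores."""
    facts = {f for asserted in claims.values() for f in asserted}
    fact_score = {f: 1.0 for f in facts}
    source_rep = {s: 1.0 for s in claims}

    for _ in range(iterations):
        # A fact's score is the summed reputation of sources confirming it.
        fact_score = {
            f: sum(rep for s, rep in source_rep.items() if f in claims[s])
            for f in facts
        }
        # A source's reputation is the average score of the facts it asserts.
        source_rep = {
            s: sum(fact_score[f] for f in asserted) / len(asserted)
            for s, asserted in claims.items()
        }
        # Normalize so fact scores can be read as relative probabilities.
        total = sum(fact_score.values())
        fact_score = {f: v / total for f, v in fact_score.items()}
        top = max(source_rep.values())
        source_rep = {s: v / top for s, v in source_rep.items()}
    return fact_score

# Hypothetical sources: two agree on the consensus fact, one is an outlier.
claims = {
    "site_a": {"London is the capital of England", "water boils at 100C"},
    "site_b": {"London is the capital of England"},
    "site_c": {"London is the capital of France"},
}
scores = truth_scores(claims)
```

After a few iterations the consensus claim accumulates a higher score than the outlier, because the sites confirming it reinforce each other's reputation—the same mutual-reinforcement structure used by citation-indexing algorithms such as HITS and PageRank, though real systems are vastly more elaborate.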
When asked when an AGI (Artificial General Intelligence, an AI as smart as a human) will be created, most specialists’ estimates range from 2027 to 2050. What is your guess?
I don’t believe there is such a thing as general intelligence. I think that’s a complete myth. Every single intelligence is specialized, including human intelligence. There is this idea that human intelligence is general-purpose. But as we start inventing multiple types of artificial intelligence, we will discover that there is a big possibility space. Just as the Copernican revolution taught us that the Earth was not at the center of the universe, and that the Sun was only at the edge of the galaxy, among many other galaxies, we are going to understand that our intelligence is not a general-purpose intelligence at the center, but way off in a corner somewhere. It’s a very specific intelligence that evolved on this planet for our humanoid survival. The artificial intelligences we make will mostly be specialized.
There is an engineering principle that says you cannot optimize everything all at once. We think of intelligence as a single-dimensional thing, but it’s not. Our intelligence is a suite of many different types of thinking, of logic, of computation, of cognition. You cannot optimize all of that. And all these types are going to be different in each entity. The human mix was optimized for our survival; it’s not general-purpose at all.
So the answer to when we are going to get it is: never. We will get artificial intelligences that we can have long conversations with, around 2040. But that, again, is not a general-purpose AI; it’s just what it was built for.
AI is inevitable. The question is: what are we doing with it? As you described, it’s going to be a lot of different things, and very specialized. What are our main concerns? What should we be cautious about?
We should be very cautious about weaponizing them, as we are already doing. Our militaries are very eager to build the AIs into weapons. That needs to be done with great care. It will inevitably happen but, going back to what I said earlier, we have a lot of choices about how it happens: whether we choose civil oversight or an ethical board. It’s actually not that hard to integrate ethics and morals into AIs and robots. The problem is that we humans don’t really have a good ethical and moral system! We’re not very consistent, we are very shallow, and pretty much unaware. So the difficulty is actually for us to ascertain what our ethics are, in a programmable way. Once we have a consensus on it, it can easily be conveyed. The challenge is coming to this consensus and making it consistent, when we are not very consistent ourselves.
In a sense, teaching the AIs and robots is going to make us better humans. It will illuminate and help to sharpen our own ethics and morality, just as a parent actually improves by teaching a kid. It’s going to be exciting, very challenging, and difficult, mostly because we’ll have to figure out what it is we want to teach.
You just described ways you think AI will make us better humans. Do you see other opportunities for this type of improvement? Augmented humans for example?
I think AI is going to be the best tool for understanding our own minds. That’s the main side benefit for me. Neurobiology and psychology have yielded a certain amount of results toward understanding how our mind and brain work. But they have probably gone as far as they can. The real breakthroughs are going to come from trying to make artificial minds, artificial beings and artificial robots. That’s when we are really going to learn how our own brain works. We’re going to have access to ourselves in a way that we’ve never had before. Some people will even try to improve humans, enhance them, using that same technology. Whether that will work, I don’t know. Whether it’s a good thing, I don’t know either. But it’s still going to help us understand ourselves, and that understanding will inevitably help us improve. I think of AI as a microscope or a telescope that will allow us, for the first time, to really penetrate our minds and our being—to see how they work.
What are you most excited to explore moving forward? What new projects are coming for you?
The things that I have recently become most interested in are bots—conversation bots and conversational interfaces. I have Alexa, there is Siri on the phone, Google Home… I think it’s a very, very powerful technology, but we’re at the very dawn of knowing how to use and maximize it—which makes it very exciting. It’s like having a PC before the graphical user interface existed. There is so much unknown and so much potential at the same time. I can get a hint of its great power, but also a very good hint that we don’t know how to use this yet! We will, very quickly.
You might also like:
- Andrew McAfee – The Second Machine Age
- Raffaello d’Andrea – Catching Interconnected Robots on the Wing
- Ray Hammond – Homo Virtualis: the Virtual Ape in Virtual Reality