Forging new paths for the internet of the future: A Conversation with Vint Cerf
NOTE: The following has been edited for length and clarity.
Larry: I’m Larry Magid, and this is “Are We Doing Tech Right?”, a new podcast from ConnectSafely where we speak with experts from tech, education, healthcare, government, and academia about tech policies, platforms, and habits that affect our daily lives.
As ConnectSafely launches our new podcast, we’re speaking with some internet pioneers, including today’s guest, Vint Cerf. Vint is widely regarded as one of the fathers of the internet, as a co-designer of the TCP/IP protocols that established the foundation for today’s internet.
He served as the founding President of the Internet Society, Chairman of the Board of the Internet Corporation for Assigned Names and Numbers [ICANN], and is a visiting scientist at the Jet Propulsion Laboratory. But in addition to his impressive historical credentials, Vint remains active in many aspects of internet governance and as an advocate for safety, security, and civility.
He is currently the Vice President and Chief Internet Evangelist at Google. Nice to have you on the podcast, Vint.
I just wanna start with a date. When did you start working on what eventually became the internet?
Vint: Well, yeah, exactly, because it starts before the internet. I got involved as a graduate student at UCLA in 1968, which is when the Defense Advanced Research Projects Agency [DARPA] decided it was going to build a packet-switched network to link together the dozen universities it was funding to do research in artificial intelligence and computer science.
So I got drawn into this because the Network Measurement Center at UCLA, which was in Leonard Kleinrock’s laboratory, was selected to analyze the network’s performance and match it against the queuing-theoretic models that Len Kleinrock and his students were developing to predict the performance of store-and-forward packet-switched networks.
So I got drawn into that and was the principal programmer for the Network Measurement Center, and also helped Steve Crocker, who led the network working group to develop the host-to-host protocols and some of the application protocols that animated the original ARPANET.
Larry: Did you have any thought as to what would be the practical or impractical applications of what you were doing?
Vint: Well, what we were doing was deliberately very practical. The reason that ARPA had this ARPANET project was to save money. The situation was that they had a dozen universities that they were paying to do this research in AI and computer science.
And every year, each of the departments would ask for a new computer. And of course, even ARPA couldn’t afford to do that. So they said, we’re gonna build a network, and you’re gonna share your computing resources. And they said, by the way, we’re funding all of you, so please share your results as well, and the result was that we were a resource sharing project. And we also, as you might remember, developed electronic mail, networked electronic mail, in 1971, thanks to Ray Tomlinson who had this idea that you could move files around and associate them with particular users.
So we saw email pop up, and what came along with that, very quickly, were distribution lists. And so by 1972 or so, we were starting to see a social component of this project, because people were starting distribution lists. The one I joined was called Sci-Fi Lovers, where people argued over who were the best science fiction authors and which novels were the most compelling. Another one was called Yum Yum, a restaurant review list that the Stanford people were doing for Palo Alto.
So we all saw the potential for social interaction, and of course we were using it as a practical matter for managing and coordinating our research projects. Other social side effects also became apparent. One of them was flame mail. We recognized very quickly that text interaction, that email, lacked nuance, and if you started to get into an argument, sometimes you were in the wrong medium: you needed to get on the phone or meet in person or have a meal together. The reason that became so visible is that when an email shows up that upsets you, you keep reading the thing over and over again, as if the person who sent it were saying it repeatedly, and, you know, it’s just a triggering thing.
We were already thinking about and concerned about social interactions in this online world. It has led me to believe, now looking back over all that time, that we probably should have had anthropologists and sociologists and psychologists and maybe neuroscientists participating in this project to understand more deeply how technology influenced our social behavior. One of the questions that has concerned me is the dynamics of online harm and trying to understand: why is it so toxic? And one answer is scale. I think there are large audiences that are part of the social networking environment.
It’s a little bit like the schoolyard fight where a crowd collects: because you are in the middle of a large crowd that has the potential for piling on, the harmful effects are, I think, dramatized by scale. There’s a large audience observing whatever harmful information is being flung at you, accusations and what have you. Maria Ressa, the recent Nobel Peace Prize winner, speaks very, very eloquently about what she experienced in the Philippines when people piled on and attacked her online. And you can tell as you listen to her how painful this experience was, how concerned she is about it, and why we should all be concerned about it.
Larry: And you had some inkling of this in those early flame wars that were happening back in the early 1970s. I mean, you observed what was probably a primitive precursor, given that it was a very limited network at the time. But did that give you any pause to think about what could be done about this, going back to that period?
Vint: The kinds of flame wars that we saw were email-based, and so there was a fairly straightforward response in that case, which was to switch to a different medium. You know, meet face-to-face, get on the phone, get into a medium where there were a lot more human and sociological cues in the interaction relative to just text. But what I have concluded is that in today’s world we have a much more difficult problem to deal with, because we have a global-scale system with sometimes hundreds of millions, if not billions, of users. So the potential for people piling on is significant.
And I’m thinking about social engineering in general. I’ll give you two examples of social engineering, which are important, at least in American society and perhaps in others as well.
One of them is the requirement to wear a seatbelt when you’re driving, and the other is preventing people from smoking in places where they might do harm to others. In both of those cases, two things had to happen for the change to be effective. First, the public had to become convinced that these were important norms: wearing a seatbelt, and not smoking except where it’s permitted.
But it also turned out there were legal elements to put teeth into those social norms. So they said, if we catch you driving without a seatbelt, there will be consequences. You can’t sell a car in the U.S. without a seatbelt. If you’re caught smoking in this room where it’s not permitted, there will be consequences.
So we had enforcement in addition to moral suasion for those two things, and it may very well be that we need something more than moral suasion in order to deal with some of the social media side effects. The two phrases which come to my mind frequently these days are “accountability” and “agency.”
On the one side, we need to hold parties accountable for things they do that are harmful to others. And on the other, we need to give people agency, to increase both their sense of security and their real security, safety, and privacy.
Larry: Talk a little about democratizing the internet, opening it up to normal folks. How did that influence your thinking about issues like privacy and safety, things of that nature?
Vint: In the initial euphoria, the idea was that everybody would have access to this capability and could share what they knew, and that would be potentially beneficial. And indeed, as the Mosaic browser became popular, literally millions of people learned how to make webpages by crafting their own HTML, and I was really happy to see this avalanche of content being shared, because a lot of it, at least that I was aware of, tended to be helpful, useful, friendly information. Some of it was kind of silly, but not much of it was harmful. However, as the general public got more and more onto the internet, we began to see one of the problems we have as a society, which is that not everybody has other people’s best interests at heart.
The scammers and the phishers and the malware people and the denial of service attacks and everything else start to emerge as the technology becomes more and more available. In some sense, it’s the same reason that we still watch Shakespeare’s plays 400 years later because he deals with all of the weaknesses of the human race.
And those are clearly exposed in the internet environment as well, and that forces us to try to figure out what to do. It’s a hard problem, because the internet is global in scope and relatively insensitive to international boundaries. Traffic just flows, which I think is a good thing. The side effect is that if somebody is harming others, the victim might be in one jurisdiction and the perpetrator in another. All of that is literally in flux today, but there’s a lot of attention being paid to it.
Larry: So there are some paradigm shifts on the horizon. Some would argue the metaverse is one, but the major paradigm shift that almost everyone’s talking about now is generative AI. And in a way, aren’t we all kind of where you were back in the 1960s when we start thinking about generative AI, in the sense that we’re playing with something that we don’t fully understand? It’s in its infancy. We think it has tremendous potential. We worry it could literally cause existential harm.
But it does bring up the broader question—and you are very, very qualified to talk about this, having been there at the original paradigm shift of the creation of the internet and having seen paradigm shifts all along—of what it means to be on the precipice of what appears to be a major change in how we’re gonna be using technology, if, indeed, this does what people say it might do.
Vint: Well, the best way to look at this from my point of view is to understand the enabling effect of the technology. So the internet is an enabling thing. So is the world wide web, so is the smartphone. The large language models of the artificial intelligence and machine learning space are enabling capabilities, but they can also be abused and they can also hallucinate, which is a technical term used by the experts.
My sense right now is that we still don’t fully understand exactly how these things do what they do. Some of it is pretty astonishing. Some of them write code, some of them write poetry, some of them translate languages. We see how to train them and we have various ways of arranging the layered neural networks, very deep neural networks, to produce the effect that they have.
But I think our depth of understanding of what can happen, and of how these systems can go off the rails, is still in its early days. So just like with all of these other technologies, we have to look back and ask, “What is it that we wish we had understood in order to anticipate some of the potentially harmful side effects?”
Larry: There’s already been a White House meeting where the leaders of AI companies met with the president. I don’t suppose—what would it have been, LBJ who was in office when you were doing your initial work or maybe Richard Nixon?—I don’t suppose you were having meetings at the White House back then.
Vint: No. I was not having meetings with Richard Nixon, and thank goodness, given the possible side effects of that. However, I did start having meetings in the White House in the 1990s during the Clinton administration. I do remember vividly in 2000, President Clinton invited several of us to the White House to explain what this “Love Bug” thing was.
And I would say that Senator Gore, before he became vice president, also engaged very significantly in the early days of the internet, and he deserves a great deal of credit for having helped pass legislation that enabled the internet to become the amazing tool it is today.
Larry: And he never claimed to have invented it, by the way.
Vint: He was answering a question about what he had done while he was a senator. And remember, the question was being asked in 2000, and he was referring to a time period in the 1980s when he said, “I took the initiative in creating the internet.”
And what he meant was that he had taken legislative initiative. He first triggered the National Research and Education Network proposal that came out of NSF as a consequence of a question he asked at a hearing in 1986: should we be connecting the supercomputers that NSF is sponsoring with an optical fiber network, creating an “information superhighway,” as he put it? Then later, of course, he helped pass the legislation that allowed commercial traffic to flow on the government backbone, in the wake of the early permission that I got in the late 1980s.
So he deserves a great deal of credit, to say nothing of all the work he’s tried to do on global—
Larry: So here’s the guy who actually did help invent the internet, giving credit to Al Gore for having played an important role in—
Vint: Bob Kahn and I wrote an op-ed outlining what Gore’s contributions were, and I’m sorry to tell you that the editors of the New York Times, the Washington Post, and the LA Times refused to publish it.
Larry: I had the privilege of spending an entire day with Vice President Gore in 1994, just as they were announcing the Clinton Administration’s internet policy. I traveled on Air Force Two with him and interviewed him, and I was incredibly impressed with his depth of knowledge. I’ve talked to a lot of politicians who claimed to be on top of an issue; he understood it very deeply.
Vint: Al does. Al does his homework. He really does his homework.
Larry: But let’s talk about well-intentioned legislation that had unintended consequences or that bumped up against other values. How do you feel about the way Congress has in the past tried, and continues to try, to tackle some of these very thorny issues?
Vint: Well, some of them are well-intentioned. Others are probably motivated by: “How do I get reelected?”
And unfortunately, a lot of the tools that we might use to protect privacy and safety can also be repurposed as means of suppressing speech and introducing surveillance and all kinds of other things that we would consider generally harmful. At least…
Larry: Those debates are happening within the United States as well. I mean, as you know, there are laws that would prevent platforms from taking anything down, treating content moderation as censorship. They essentially would not allow you to moderate content.
Vint: The problem here is dual use. These same things that you would use to try to protect someone might also be used to suppress and censor speech.
This is like needing the judgment of Solomon, or navigating between Scylla and Charybdis. Because on the one side people will say, you prevented me from speaking my mind. And on the other side you’re saying, but you’re speaking toxic, harmful things, and it’s hurting people.
It’s causing all kinds of bad things to happen. So how do we strike a balance? Well, here in America, freedom of speech is, as you well know, captured in our First Amendment, but that particular freedom has to do with government suppression of speech. And the situation we’re facing right now is that the private sector is being essentially forced to adjudicate between parties that say things other people don’t like. The private sector is told: you have to go figure out how to balance that. Frankly, I think a number of actors in the private sector are saying to the government and to legislators, you need to create a framework here, because you are asking us to do something that we’re not in a position to do as private companies.
Larry: Getting back to things like the Communications Decency Act—that was a case where, I think, Congress passed it overwhelmingly and President Clinton signed it, in a sincere effort to protect children online from harm, yet it would’ve essentially controlled the internet.
How do we protect innocent citizens from scams while at the same time allowing for free speech? I know you’re not gonna have a definitive answer in the next 30 seconds, but I also know you’ve put a lot of thought into that.
Vint: Well, you know, one thing I think is helpful would be a practice called critical thinking. And this is not too different from the scientific method. Basically, you ask yourself, where did this information come from? Is there any corroborating evidence for the assertions that are being made? You have to learn how to evaluate these things. You can imagine introducing mechanisms like parental controls for children’s access to online resources.
So we need to teach people about how to adopt practices that are safer. We need to get companies to introduce mechanisms that will enhance our ability to give us agency to protect ourselves. So technology is part of the solution.
Another part of the solution, frankly, is legislative. When we can’t prevent harm from happening technically, then we can say, well, if you do the following things, there will be consequences. It’s sort of post hoc enforcement.
And then finally we can say, don’t do that. It’s wrong. And that’s moral suasion. And even if that sounds wimpy, remember, gravity is the weakest force in the universe. And yet when there’s enough mass, it keeps the planets in orbit. And if there’s enough social mass to adopt particular norms and behaviors, that can have a very big influence on the way people behave in the online world.
Larry: Yeah, I think that’s certainly true for people of good will. There’s no question—and we’ve already seen it—that social pressure and social norms can have a huge impact. But then we also have social engineering. I mean, you’re an engineer. As good a coder as you may be, or your colleagues at Google may be, they can’t necessarily keep people from falling for scams. In fact, I recently reported on a case where I came very close to falling for a virtual kidnapping scheme. As much as I know, and I’ve literally written volumes on computer security, when I thought my wife was in danger, my reptilian brain reacted. It didn’t matter what I knew; I was worried, I was panicked.
Vint: Sure. I mean, well that’s because we’re human beings. You know, we love our family members and we’re concerned for their wellbeing, and that’s of course what the scammers know. And so another reason why some of the large language models are of such concern is because they’re capable of generating the kinds of texts that would trigger these kinds of reactions. So we do have to worry about these powerful tools that have the ability to generate text that will cause us to become alarmed.
Larry: And not only text, also audio. For example, in the case where I clearly reacted to the scam, there was a crying woman who I thought might’ve been my wife, but wow—going forward, I could have a conversation with this person and I could be absolutely convinced it’s my wife if the AI is doing its job.
Vint: If the AI is good enough. Of course, this also gets us into the Screen Actors and Screenwriters Guild strikes, because the ability to generate text and sound and imagery that looks like someone else is increasingly powerful.
Larry: Take even that wonderful Apple commercial series, the Think Different campaign, where they had Martin Luther King or Albert Einstein essentially, if not exactly, endorsing Apple products. I even had some problems with that, even though I know their estates signed off on it. I don’t know what kind of computer Martin Luther King would’ve bought. He might have been a Windows fan. How do we know? So even that bothered me a teeny bit. And compared to what we can do with AI, that’s nothing.
Vint: It’s pretty dramatic. And I think here we have good reason to train people to become more suspicious.
Frankly, we’re back to critical thinking again. And once again, if there’s real harm being done, then we have to pull accountability into this equation somehow.
Larry: And of course, going back to what you were talking about, the Writers Guild and SAG-AFTRA—by the way, I’m a SAG-AFTRA member. SAG-AFTRA is not only the Screen Actors Guild but also the American Federation of Television and Radio Artists. They merged, and I’m a radio person.
But even if you look at that, there are two issues. One is the exploitation of someone’s image. The other is creating a character out of whole cloth that can do just as good a job as an actor, or creating a script out of whole cloth that’s just as good as a Writers Guild writer’s. That’s very threatening to a different class of people than were threatened by the Industrial Revolution.
Vint: Well, this gets back to exactly what happens when we automate things.
You know, the Industrial Revolution that you mentioned essentially destroyed a lot of jobs that were then done manually. They’re now done automatically. It also created a whole bunch of new jobs. And the problem, of course, is that the people whose jobs went away may not be capable of doing the new jobs without retraining, and may not want to do them.
And so there’s a tension there. My guess is that we will live through this in the same way we lived through the Industrial Revolution, with a lot of new jobs being created and new opportunities for people. The crazy example people cite now is that you have one bot that generates an email and another bot that reads it.
And it reminds me a little bit of a cartoon with three panels. The first panel shows a professor lecturing to a bunch of students. The second shows the professor lecturing to a bunch of recording machines. The third shows a recording machine lecturing to a bunch of recording machines, and the caption at the bottom says, “university education: the mechanism by which the notes of the professor become the notes of the students without passing through the minds of either one.”
Larry: This has been an amazing walk down memory lane, but probably even more important, a little bit of a glimpse of perhaps memory that others may express 10, 20, 30 years from now, perhaps after you and I are no longer online, so to speak. Any closing thoughts?
Vint: Well, two things.
First of all, you ain’t seen nothing yet. I mean, technology continues to evolve. There are plenty of opportunities for evolving even the internet of today into the internet of tomorrow. One very exciting angle on that is the interplanetary extension of the internet, which is well underway.
And finally, I just want to reinforce the idea that the internet’s architecture is such that it welcomes new ideas, it welcomes new protocols, it welcomes new layers of protocol, which is where the world wide web comes from. There are huge opportunities here. They are far from exhausted, and so I hope that some of the people who are listening will go on to forge new paths for the internet in the future.
Larry: Vint Cerf, who is currently the Vice President and Chief Internet Evangelist at Google, but who goes way back in internet development. Thank you so much for taking the time.
Vint: Well, thank you for having me on the show. I look forward to another opportunity to talk about the things that we didn’t have time to talk about today.
Larry: Are We Doing Tech Right? is produced by Christopher Le. Maureen Kochan is the Executive Producer. Theme music by Will Magid. I’m Larry Magid.