
In this second episode with Jeff Jarvis, Larry and Jeff discuss regulation, encryption, the dangers of techno-moral panics, and Section 230 of the Communications Decency Act, which has been quite controversial in the United States. Find out more in this follow-up with Jeff Jarvis, author of The Gutenberg Parenthesis.

You can listen here, on Apple Podcasts, Spotify, or wherever you get your podcasts.

Listen to “Jeff Jarvis Follow Up: How to Dodge Techno and Moral Panics” on Spreaker.

Transcript:

Larry Magid: I’m Larry Magid, and this is Are We Doing Tech Right? A podcast from ConnectSafely, where we speak with experts from tech, education, government, and academia. I had a great conversation with Jeff Jarvis, the author of the new book, The Gutenberg Parenthesis, as well as other books about all things internet, focusing mostly on freedoms and on understanding how regulations, while they may have their place, can actually have some negative consequences. In this part of our two-part podcast conversation, we talked a lot about regulation. We also talked about encryption, about the dangers of techno-moral panics, and about Section 230 of the Communications Decency Act, which has been quite controversial in the United States. Jeff, you’ve been an observer and commentator and author about all things internet for many years now. And as you know, the title of this podcast is, Are We Doing Tech Right?

And I think by implication of asking that question, the answer is maybe yes, maybe no, maybe a little bit of both. So I want to get your perspective on where we’ve been and where we’re going. 

Jeff Jarvis: So the next book I have coming out this year is called The Web We Weave; Why We Must Recapture the Internet From Moguls, Misanthropes and Moral Panic is the full title, I think.

And in that I have to look back a little bit and remember the internet that we care about, remember the internet that it could be. And I think it’s important to recapture that optimism; it’s also important to be thankful. I wrote a chapter called A Thank You Note to the Internet in the Pandemic. How can we not be grateful for the internet? It saved many of our careers and families and schools and our economy. Not everyone’s, but it went a huge way to getting us through that turmoil. But there are problems. You’re right. There are issues. Most of those issues, as I say, are brought human upon human. And we’ve got to make people suspicious, just not of everybody. Again, we’ve got to give our children the agency and the respect that they’re going to have their own judgment, and they’ll learn lessons and they’ll figure this out just as they did on the playground. But at the higher level, I think where I went wrong was, first, as we talked about earlier, to be a little too optimistic, to not see the bad things that could happen. And you beat yourself up over 1994, just as the web was coming out, for not foreseeing foreign election interference.

Larry Magid: I was being facetious in that comment, but the fact is that there were things that I simply couldn’t anticipate. Like you, I’m not a futurist.

Jeff Jarvis: Right. It’s our excuse. But the other one is, I think Elon Musk has taught us all an unfortunate and necessary lesson, and that is that when we hand over public discourse and public functionality to a company that can be taken over by a single narcissist like him, we’re jeopardizing the value of what we build as a society.

Now, I think there are still good companies out there. I wrote a book called What Would Google Do? That was my first book, and I still like Google, even though they kill products I like all the time. I think they’re decent. I think Meta has gone through a lot of bad decisions, but I think they’re still more okay than the media would give them credit for.

I think Elon Musk and Twitter is horrible, but it was finally after Musk bought Twitter that I came, too late, to Mastodon, an open source federated social network that can be owned by no one, where you can take your friendships elsewhere, where you have your own content, and where it’s smaller and more human in scale.

And that reminds me that we’ve got to recapture that fundamental architecture of the internet. Yeah, I think we’ve got to support open source and federated solutions. So when we come to, for example, AI, the Europeans have talked about trying to forbid open source AI models because they’re afraid that there’ll be all kinds of rogue things going on. It’s the exact wrong thing to do.

Larry Magid: Let me back up to the letter that Sam Altman, who is, as you know, the CEO of OpenAI, which makes ChatGPT, and many other technology leaders put out claiming that AI is an existential threat. They compared it to nuclear war and climate change. Why would the head of OpenAI sign on to something like that, if not maybe to discourage competition?

I’m trying to figure that out. Yeah.

Jeff Jarvis: Well, that’s a fascinating story, and I can recommend some folks for you to talk to. I think you’d really enjoy Émile Torres and Timnit Gebru, who have done a lot of work around what they call TESCREAL, which is an acronym for a bunch of the, I think, faux philosophies that motivate folks like Altman, Musk and others.

There’s transhumanism, which is why they want to put chips in your head and make the human and the computer come together. There’s effective altruism, which now has a bad reputation, and longtermism. And as I’ve read about this, in great measure from what Torres has written, these are, I think, somewhat loony philosophies that are about, in the end, power and money as much as anything.

So let me explain that. Longtermism, from William MacAskill and others out of Oxford, oddly, says that we owe a greater obligation to the possible 10 to the 58th human beings, real and transhumanist, of the future. And that is their perspective. They think that they should be making decisions based on that long future, and that they’re the smartest ones to make those decisions on behalf of the future.

They’re the real futurists. They don’t just predict the future; they make it, they control it. And they should have the power and the money to make those decisions. And by the way, it means that present-day problems aren’t really a big deal; they’re just blips. Now, part of this is saying that the worst thing they could do for that future is to destroy mankind, and they think the tools they’re building are so powerful that they could destroy mankind.

It’s about ego. It’s also about being able to try to control resources and wealth. And it’s also about marketing because what they’re saying is, I made this tool that’s so powerful it can destroy mankind, but I won’t do that. And you can trust me and you should put me in charge of that. 

Larry Magid: Maybe you can’t trust the other guy.

Jeff Jarvis: Yeah, right. Exactly. Somebody’s going to use this badly. Now, some people go way over the line with this. Eliezer Yudkowsky is already over it, saying, well, it’s impossible; we have to shut this down right now until we figure it out. Then you have Marc Andreessen, who wrote his effective accelerationism screed a few months ago, where he says nothing should get in our way, nothing should stop us, let’s get there as fast as we can. All of this is nothing but male ego and testosterone. Now, on the other hand, you have people like Timnit Gebru and Margaret Mitchell, who were two of the authors of the Stochastic Parrots paper that got them fired from Google, who said, that’s all BS. That’s futurist crap. But what it does more than anything else, in their view, is distract us from present perils: the fact that these tools can be used to defraud people, that they exhibit the biases of society and of those who had the power to publish in the past. They’re so huge.

They can’t really be audited for input or output, and there’s environmental danger, and there are labor issues of having people in poor countries who have to work on these things, and so on. So this future of 10 to the 58th human beings is a wonderful distraction.

And here’s another way to look at it, and I’ll finish here: it’s almost a form of regulatory capture, which is to say what you said a second ago. Don’t let anybody else do this; let me do this. Let only the big and powerful companies do this.

Larry Magid: AT&T did that all the time back when it owned the telephone monopoly, right?

“No one can touch the network, because they’re going to wreck it.” So that’s why Europe…

Jeff Jarvis: Exactly. So that’s why Europe was saying, uh oh, open source could be dangerous. Well, no; what that does instead is empower the big companies. You know, I sound like a libertarian sometimes. I’m not; I’m a good old-fashioned Joe Biden Democrat. But what gets me about regulation so often is that it has the unintended consequence of only concentrating more power.

So in Europe, you have the Right to Be Forgotten, which makes Google very powerful: it decides what is remembered or not. You have hate speech laws in Germany, which make Facebook so powerful as to decide what is and what is not illegal content and legal speech. And then you have Europe thinking about getting rid of open source AI.

Well, that makes OpenAI and Google and Meta and Microsoft all-powerful in AI. And that’s a key lesson to learn. Now, this goes back to what I was trying to say a few minutes ago. This is why I think we have to keep in mind the original architecture of the net. I think it was a smart architecture. We have learned lessons in these past three decades.

We can still apply them. But I want to protect the freedoms that the net has brought us: to have more people speak and be heard than ever before, to allow us to assemble and act in more ways, to give us information in a nanosecond, to let us find friends around the world, as you and I have, thanks to the internet.

Those are wonderful gifts that we could lose if we panic too much about this latest technology.

Larry Magid: I have one more question, and I have to admit I’m being opportunistic here. This is less about the podcast and more about some free consulting on an article I’m writing. I’m in the middle of a piece that I plan to release sometime in February called How Encryption Protects Children.

And I’m trying to make the argument that everyone, whatever your age, deserves privacy, and privacy ties into security, and security is necessary for safety. Yet the people who will argue against me are my very good friends at the National Center for Missing and Exploited Children, whose board I served on for 20 years, and Thorn, and all of the child protection agencies and law enforcement people who are convinced that if we allow people to have encrypted technology, we’re going to see increased child exploitation, child sexual abuse material, et cetera, et cetera.

And I’m just curious about your thoughts on this issue.

Jeff Jarvis: It’s a good and nuanced and difficult question. I wrote a book that nobody bought called Public Parts. It was terribly timed. I was arguing that there were plenty of protectors of privacy, which really matters a great deal, but I also wanted to protect publicness and the fact that we could be public and share together.

Wrong time: it came out just as scandals were hitting Facebook and people were scared about privacy and so on. But in writing it, I think I learned a fair amount about privacy. Privacy is a fairly recent concept in society. You know, we used to live in single rooms, we didn’t have hallways, and everybody in the village knew everybody’s business.

But as we went to cities, as the population grew, privacy mattered more and more, and I do believe it matters greatly. And I think our children, too, have a right to privacy. They have a right to experiment and to try things.

Larry Magid: They have a responsibility to experiment and try things. Exactly. That’s their job.

Jeff Jarvis: And I think, you know, one of my fears about my generation of parenting is that we protected the next generation all too much: the playdate generation, the don’t-play-on-the-street generation, the stranger-danger generation. I think that we didn’t give them enough agency, responsibility, credit, trust.

And so privacy is part of that. Now, can bad things happen? Can bad guys use privacy to get around the law? Yes, yes, yes. There will be edge cases in anything you can name, but does that mean we then take away that privacy for everyone, and take away the benefit that comes from it for everyone? But the argument was all about danger, danger, danger.

And amid that, the police authorities and others tried to demand a backdoor to kill encryption, and it didn’t work at the end of the day. The battle was barely won on behalf of privacy and on behalf of encryption and security. But at the same time, we’re telling people that they need to watch out for bad guys and they need to protect their conversations and their data.

We can’t at the same time get rid of encryption, right? In fact, I think we have to expand encryption and see where that goes, but that’s going to hurt law enforcement. A very recent example is that Google is just deciding to get rid of geofencing: they’re going to stop keeping the data that allows police to say who all was in this area at that time on that day.

Well, that’s how the feds got most of the January 6th defendants: they could get a geofence warrant and find out whose cell phones were around the Capitol at that time on that day. Am I glad the Justice Department has brought those people to justice? Yes, I am. But is it worth the price of making their job harder so that we all can have more confidence and safety and privacy in what we do?

I still think so. 
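For readers curious what a geofence query amounts to in practice, here is a minimal sketch in Python. Everything in it (the Ping record, the field names, the radius) is hypothetical; it illustrates only the idea of filtering location records by place and time, not any real system or the legal process involved.

```python
# Purely illustrative: which devices were inside a circle during a time window?
from dataclasses import dataclass
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

@dataclass
class Ping:
    device_id: str   # hypothetical identifier
    lat: float
    lon: float
    seen_at: datetime

def haversine_km(lat1: float, lon1: float, lat2: float, lon2: float) -> float:
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def geofence(pings, center_lat, center_lon, radius_km, start, end):
    """Return the device IDs seen inside the circle during [start, end]."""
    return {
        p.device_id
        for p in pings
        if start <= p.seen_at <= end
        and haversine_km(p.lat, p.lon, center_lat, center_lon) <= radius_km
    }
```

The point of the sketch is how blunt the instrument is: the query knows nothing about why a device was there, only that it was.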

Larry Magid: Yeah, those are the tradeoffs, and I’m glad you brought that example up, because we have a tendency to be very transactional in our analysis. We believe in free speech when somebody is suppressing our speech, but wait a minute, when the bad guys speak, somehow we don’t call it censorship; we call it moderation or whatever. You can’t be transactional about certain values.

Jeff Jarvis: Yeah. You know, one thing I’ve come to see runs through a lot of the regulation of the internet, and this is about Section 230, which I’m a big supporter of because it supports public discourse online.

But there’s a pincer movement against it, right? So Section 230 gives a sword and a shield. The shield says that a platform or a publisher can’t be held liable for what people do on their platform, so that people can have a conversation there. The sword gives them the freedom to moderate what people do on their platform, because otherwise it would all be 4chan.

It’d be awful, right? And I think it’s very wise legislation, though Congress keeps on trying to slice it like baloney. So we see this pincer movement from both the left and the right. The left says, “There’s hate speech there; you should get rid of it.” And the right says, “Whoa, that’s my speech you got rid of.” And we don’t find that happy medium, and there probably is no happy medium. So as a result, of late in Texas and in Florida, we saw legislation, which still hasn’t found its way completely through the Supreme Court, that would require platforms to carry certain political speech.

Well, compelled speech is not free speech either. That’s a violation of the First Amendment, right? So when people say, “Oh, you know, the platforms are censoring us,” well, no. That’s no more censoring than you deciding who’s going to be on your podcast, or than 60 Minutes deciding what story they’re going to do.

Larry Magid: Or, let’s be fair about it, Fox News deciding what their stories are going to be. Yeah.

Jeff Jarvis: So to walk into their offices and say, you must carry that liberal Jarvis, at Fox News, would be a violation of their free speech right to choose. Editing is a choice; editing is speech as well.

So I’m not saying that everybody’s speech should be everywhere and that we have to listen to it all. We don’t. We simply don’t. In fact, I do think the platforms have a responsibility to recognize when they’re being manipulated and used, and to recognize bad characters. Look what’s happening right now with Substack and Nazis: Platformer just left the platform because Substack wouldn’t, in Casey Newton’s view, be responsible and get rid of Nazi newsletters. It’s Substack’s right to choose to have them; it’s Casey’s right to choose to go elsewhere. And we have to, I think, enable everyone to make their own choices in these cases. Sometimes it’s going to be to say, no, I don’t want to hear that; no, that can’t happen here.

And sometimes it’s going to be to say, I’m going to carry that even though I know people don’t like it. Those are decisions that make up public discourse.

Larry Magid: You know, the other irony about Section 230: I remember Donald Trump, during his presidency, was a big proponent of revising or eliminating 230. Yet ironically, I think he was the most protected person on the planet by 230, because he did things which could very well have triggered litigation against Meta and other social media platforms, but 230 protected him, as it should.

Jeff Jarvis: Absolutely. Absolutely. And that’s the irony: no one in this fight, I think, understands what 230 does. And for your listeners, if they haven’t read it already, Jeff Kosseff’s book, The Twenty-Six Words That Created the Internet, is a definitive book on this. It’s brilliant, and it describes how we got there and why it was so smart.

But if we didn’t have 230, the internet would’ve gone one of two ways. One way is: I’m liable for anybody else’s speech, so nobody else is talking here. The internet basically would have become a controlled newsstand of media PDFs, right? It would have been boring.

Larry Magid: And you and I have already worked for plenty of those companies in our careers where some editor has a right, rightfully so, to control what goes on their platform.

Jeff Jarvis: But then we never would have had all this wealth of conversation we’ve had. The other way to go was to say, don’t touch anything, because you’re going to be liable if you screw up. And so everything goes; then everything becomes 4chan and 8chan, becomes junk. 230 is brilliant because it found a middle line: you have the right to speak, but the responsibility to make decisions about the speech you allow.

And we can disagree with those decisions. That’s fine. And we can then leave that platform, as Casey Newton just did with Substack, and that’s part of speech as well. In The Web We Weave, my next book, at the end of the day it’s all about defending the internet as a means of speech, and everything I just mentioned is included: moderation, choice, being able to set your own standards and rules. Because if we can’t speak in a society, we can’t have a democracy.

Larry Magid: And ironically, you’ve just written a book and are about to publish another. You know, for centuries people have banned books, and they are still banning books. So going back to that technology that’s bringing your book to us, which was invented in the 15th century: the more things change, the more they stay the same.

Jeff Jarvis: I’ll give a spoiler.

If anybody does pick up my Gutenberg Parenthesis: in my first book, What Would Google Do?, I, being optimistic about technology, argued that the book needed to be updated, that it needed to become clickable and searchable and linkable, and that we weren’t changing the book because we thought it was too holy and untouchable.

We had to get over it. In The Gutenberg Parenthesis, I recant and I say I was wrong. I say that the book has stood the test of half a millennium. The book has survived and prospered, and we’ve figured out how to make it work for us as a society, and it’s what we judge our discourse and our institutions against now. So the last words in my book, here’s the spoiler, are, “Let the book be the book.”

Larry Magid: And with that note, I urge everybody to go to eBay and see if you can find an old copy of my books from the 90s and 80s to find out what an idiot Larry Magid was back then.

Jeff Jarvis: Not an idiot. No, no, an innocent, an innocent Larry. 

Larry Magid: No, but seriously, I mean that it preserves history.

And that’s one of the things that does concern me about the internet: that we do not have that indelible history. I love the Internet Archive. I love archive.org, because I can go back and see what websites looked like in the nineties.

Jeff Jarvis: Well, here’s another one. Here’s one I wasn’t aware enough of: the Common Crawl Foundation.

And for those who don’t know, Common Crawl archives the entire textual web; I forget how many petabytes of data, how many billions of pages. And it has been tremendously useful for scholars: 10,000 scholarly papers have come out of the ability to analyze our content and the entire web. Well, guess what?

Along come large language models and the need to train them on language. Well, Common Crawl was really handy, I’ll bet. But it’s free to everyone: it’s free to scholars, it’s free to you or me if we want to ask questions of it. It basically lets anybody be Google. It was a former Googler who started it, who recognized that he had access to this incredible panoply of online speech.

And he was leaving that behind when he left Google, and he said, others have to have this right too. So he started Common Crawl. And I was disturbed to learn that the New York Times came to Common Crawl and demanded that all of their content from the past, that which was freely available, be erased. They’re doing that because they think there’s a pot of gold to be had from the hands of OpenAI and Microsoft and Google and so on.
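Common Crawl’s index really is open to anyone. As a minimal sketch, here is one way to ask a question of its public CDX index from Python; the crawl name and the example domain below are placeholders, and the query parameters follow the API documented at index.commoncrawl.org.

```python
# A minimal sketch of querying Common Crawl's public CDX index.
# The crawl name below is only an example; current crawls are listed at
# https://index.commoncrawl.org/.
import json
import urllib.parse
import urllib.request

def lookup(url_pattern: str, crawl: str = "CC-MAIN-2023-50", limit: int = 5):
    """Fetch index records for captures matching url_pattern in one crawl."""
    query = urllib.parse.urlencode(
        {"url": url_pattern, "output": "json", "limit": limit}
    )
    endpoint = f"https://index.commoncrawl.org/{crawl}-index?{query}"
    with urllib.request.urlopen(endpoint) as resp:
        # The API returns one JSON object per line.
        return [json.loads(line) for line in resp.read().splitlines()]

# Example: when did the crawler capture pages from a given site?
for record in lookup("connectsafely.org/*"):
    print(record["timestamp"], record["url"], record.get("status"))
```

Each record points back to the archive file where the captured page lives, which is what makes the corpus equally usable by scholars and by anyone training a model.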

This goes back to the problem we discussed earlier: if we think that our value is inherent in this thing we call content, we’re going to lose, because content is now commodified. We have machines that can spit out no end of it. That’s not where our value lies, and if we don’t see the value in reading and learning from each other, then we’re going to be worse off.

So I went to the Senate the other day for a hearing on AI and journalism, and I reminded the learned legislators that copyright didn’t cover newspapers until 1909; that newspapers used to share copies for free, thanks to the postal service, and employed people with the literal title of scissors editor so they could share articles freely and thus create a national network and a nation. And to this day, journalists, you and I, do this every day: we read and learn and use information from others, and that’s what copyright really protects, the ability to do that. So should the machine, or the machine’s owners, not have the same rights?

Now, when I said it that way, some people got mad at me and said, well, the machine doesn’t think and it doesn’t read. Yeah, yeah, I’m being metaphorical. If we put it in corporate terms, should OpenAI have the same rights as Clear Channel radio to rip and read from newspapers, to learn? So is there a right to read?

Is there a right to learn? Is there a right to use? These are questions that we need to grapple with as a society now, saddled with an outmoded sense of copyright, which didn’t come along at all until 1710, two and a half centuries after movable type.

The presumption that I saw at the Senate is that the lobbyists next to me are saying, they’ve got to pay us, and the senators are saying, yeah, they’ve got to pay you. And I’m the only voice in the room saying, well, actually, here’s an unpopular opinion: no, let’s be open.

Larry Magid: Which is ironic from somebody who makes part of his living from selling books, but I understand it. It almost reminds me of my old friend Abbie Hoffman, who wrote a book called Steal This Book. He was a Yippie leader back in the sixties, and you know, I think he had a point, although I remember having to buy his book. But that’s another issue, as I bought yours.

Jeff Jarvis: The other database that was used to train the large language models is called Books3, which probably pirated the books, probably didn’t acquire them honestly. Well, The Atlantic came out with a story saying, here are all the books in Books3. And I looked with trepidation: I thought, are my books in there? If they hadn’t been, I would have been miffed. But they were.

Larry Magid: It’s so funny. I never understood Rupert Murdoch’s enmity toward Google providing snippets of his content, because that’s how I get to his content.

That’s the reason I pay to read the Wall Street Journal: Google is constantly pointing me to articles there that I want to read, and so I pay for them. But he has this obsession with keeping Google from crawling his newspapers, which I don’t get. And I think I’m going to end it there.

And for those who are threatened, who feel their careers are threatened by generative AI, I think that creativity is the key. Yes, ChatGPT can write a poem that’s never been written before, but can it really create new ideas, new thoughts? Can it really move us forward? I would argue, at least for the current time, not as well as human beings.

Jeff Jarvis: Amen. And I’ll end with this amusing thing from my Senate testimony before a Judiciary subcommittee. Marsha Blackburn rushed in, pressed me, and said, you know, the media are all liberal. Fine, stipulated. And now AI is all liberal. And she said, you know, you go to ChatGPT and you ask it to write an admiring poem of Donald Trump, it’ll refuse.

You ask it to write an admiring poem of Joe Biden, and it does it. And she proceeded to recite this poem to me that ChatGPT had written, and my response was simply, “That’s terrible poetry.”

Larry Magid: Mm hmm. Admiring though it may be. Jeff Jarvis, the author of many books, including the newest one, The Gutenberg Parenthesis.

Thank you so much for taking the time. 

Jeff Jarvis: Thank you so much. It’s such a pleasure. And this is one of those wonderful moments where we may not see each other except once every God knows how many years, but we can follow each other online, as we do. And yes, I’m glad that we call each other friends.

Larry Magid: Me too. 

Are We Doing Tech Right? is produced by Christopher Le. Maureen Kochan is the executive producer. Theme music by Will Magid. I’m Larry Magid.

 

