
By Larry Magid

A headline you’ll never see (Image generated by ChatGPT)

ConnectSafely Senior Researcher and Oxford University graduate student Nathan Davies contributed to this article.

If you ever find yourself panicking, or hear others panic, about new technologies, consider the historical context. Moral panics have been with us for centuries, along with plenty of hype from those promoting new technologies. That’s certainly the case now with generative AI, where we’re seeing doomsday predictions alongside incredible claims about the potential benefits of this emerging technology.

A sense of humility

I’m writing this article with a sense of humility because, after more than four decades of writing about technology, I can think of times when I overstated risks and underestimated the benefits of new technologies. I wrote my first book about connected technology in 1984 and failed to say anything about safety, security, or privacy. My second book, written in 1994, said nothing about disinformation or the risks posed by malicious nation-states. That same year, I wrote Child Safety on the Information Highway, which focused on risks that turned out to be rare and failed to mention risks that would ultimately have a greater impact. None of my early work covered cyberbullying or uncivil behavior, wellness and mental health, or obsessive use of technology. And it wouldn’t surprise me if my current work fails to mention future risks that I can’t anticipate.

I’ve learned from my experience that it’s easy to speculate about what might happen and excusable to fail to anticipate the future, but important to realize that even those of us with the best intentions don’t always get things right.

Moral panics go back to at least antiquity

One of the first known cases dates back to around 370 BC, when, in Plato’s Phaedrus, the ancient Greek philosopher Socrates faulted writing for “weakening the necessity and power of memory, and for allowing the pretense of understanding, rather than true understanding.” (Source)

I’m sure there were plenty of moral panics in between, but fast-forward to 1440, when Johannes Gutenberg invented the printing press. It was followed by a series of panics, including from a group of scribes who, in 1474, petitioned the Republic of Genoa (in what is now Italy) to outlaw the invention. Church officials were also quite concerned that the masses would soon be able to read the Bible and bypass them, as was Swiss scientist Conrad Gessner who, in 1565, worried about what we now call information overload, claiming that reading could be “confusing and harmful.” (Source)

The loom and the Luddites

In the early 19th century, the mechanical loom greatly upset a group of English textile workers known as the “Luddites,” some of whom physically destroyed looms and other labor-saving devices. Their name still refers to modern-day people who resist the introduction of new technologies. In the early 20th century, the spread of electric lighting threatened professional lamplighters, who went on strike, refusing to light the thousands of street lights in New York City. (Source)

There are numerous other examples of workers fearing the loss of their jobs or reduced wages. Switchboard operators, elevator operators, buggy whip makers, railroad firemen, typesetters and coachmen are among the thousands of occupations that all but disappeared. Indeed, the Industrial Revolution did have an initial negative impact on employment but ultimately helped create far more jobs than were lost. Some workers were able to take on these new jobs, often at higher wages than the jobs they lost. Others, however, were victims of bad timing (no appropriate jobs available at the time) or were unable or unwilling to retrain for newer jobs.

Having said that, there are cases where the jobs created by technology were less satisfying than the jobs they replaced. A craftsperson may have wound up working in a factory and, regardless of wages, may have experienced poorer working conditions than the job they lost to automation.

There are plenty of examples of technology creating new jobs even as it disrupts existing ones. You don’t have to go back to the Industrial Revolution. If you include part-timers, self-employed “gig workers,” so-called influencers, and the millions of people who work full-time in tech industries, it’s pretty clear that technology, whatever its negative impacts, has been a major job creator.

In 1960, in a speech at the AFL-CIO convention, President John F. Kennedy heralded what he called the new industrial revolution, just as “electrical impulses which make the settings and automatically correct all errors” were beginning to invade factories. He predicted that this could lead to “a new prosperity for labor and a new abundance for America.” But, he warned, “it is also a revolution which carries the dark menace of industrial dislocation, increasing unemployment, and deepening poverty.” He was right on both counts. Automation has created and continues to create high-paying jobs, but it has also helped to increase unemployment and poverty in some communities.

A more recent example is app-based ride-sharing services like Uber and Lyft, which created many new part-time jobs for drivers, partially at the expense of the taxi industry. What’s more, as Carl Benedikt Frey and Michael Osborne point out in an Economist article, “How AI benefits lower-skilled workers,” “when Uber expanded its operations across America, drivers with only limited familiarity with the cities in which they worked were able to thrive.” They may have earned less than professional taxi drivers, but the technology-driven industry opened up work opportunities for people who would otherwise not have been able to drive taxis. When it comes to AI, they argue that low-skilled workers are poised to benefit disproportionately, as they are now able to produce content that meets the “average” standard. Whether that turns out to be true, however, remains to be seen.

Social fears

Displaced workers are far from the only ones to worry about the impact of new technologies. Politicians, media, pundits and the public at large have frequently expressed fears about how technologies would affect society.

The sewing machine

The sewing machine, introduced in the 1840s, brought about its own moral panic, mostly regarding women. Some worried that it would cause mass unemployment among seamstresses, but there was also fear that economic independence for women, based on employment opportunities created by sewing machines, could lead to significant social changes, including shifts in family dynamics, marriage patterns, and overall societal structure.

The telegraph

In the mid-1800s, the telegraph, according to historian Jean-Michel Johnston, sparked concerns that the increased speed of information flow would erode leisure time, leading to heightened work pressure and a faster-paced life. (Source)

A 1906 newspaper cartoon predicted that the telegraph would cause us to lose personal contact with one another. “These two figures are not communicating with one another,” said the caption. “The lady is receiving an amatory message, and the gentleman some racing results.”

There were also anxieties that the rapid transmission of information could lead to mental conditions caused by overstimulation and the constant influx of news. In 1858, the New York Times expressed concern that the telegraph was leading to a decline in writing standards, as the necessity for brief and concise communication led to widespread abbreviation in writing. These two concerns sound a lot like today’s worries about social media and texting.

It’s important to note that these concerns did not keep the telegraph from becoming ubiquitous. For the most part, people didn’t overreact, and it remained an important means of communication for more than a century. The same is now true of social media and texting. While some sound alarms, these technologies remain extremely popular.

As I mentioned earlier, hype was often a close companion to moral panic. In their 1858 book, “The Story of the Telegraph,” authors Charles F. Briggs and Augustus Maverick wrote:

“How potent a power, then, is the telegraph destined to become in the civilization of the world! This binds together by a vital cord all the nations of the earth. It is impossible that old prejudices and hostilities should longer exist, while such an instrument has been created for an exchange of thought between all the nations of the earth.” (Source)

We’re still waiting for old prejudices and hostilities to no longer exist.

The Kodak fiend

During the decades that followed George Eastman’s 1888 introduction of the Kodak camera, there were fears that it would end personal privacy. In 1890, The Hawaiian Gazette published The Kodak Fiend, which began:

“Have you seen the Kodak fiend? Well, he has seen you. He caught your expression yesterday while you were innocently talking at the Post Office. He has taken you at a disadvantage and transfixed your uncouth position and passed it on to be laughed at by friend and foe alike.” (Source)

In 1901, President Theodore Roosevelt chided a 15-year-old boy who tried to take his picture as he left church. The president ordered a police officer to block the camera and, as reported in the New York Times, he exclaimed, “You ought to be ashamed of yourself! Trying to take a man’s picture as he leaves a house of worship. It is a disgrace!”

Today’s public officials expect to have their pictures taken whenever they’re in public. Not only does everyone have a high-resolution camera in their phone, but an increasing number of people have cameras embedded in their smart glasses, which is what prompted ConnectSafely to write its Guide to Meta Ray-Ban Smart Glasses: Bystander Privacy in a World of Wearable Cameras.

The steam locomotive

The advent of the steam locomotive in the 1800s brought about a significant change in transportation, introducing a new era of rapid travel over long distances. But the speed of rail travel also unleashed widespread fears and health concerns. Some believed the human mind was not designed to cope with moving at such high speeds, potentially leading to mental health problems, including insanity. The concept of ‘railway madmen’ emerged, with the belief that the motion and sounds of train travel could trigger madness in passengers.

In addition to the fears of physical and mental problems, the steam engine was perceived as a danger to the social fabric, reflecting deep-seated fears about the rapid changes brought about by this new mode of transportation.

Concern over paper preceded concern over screens

There is currently a big concern about “screen time.” But in the mid-19th century, there were anxieties over the explosion of media made possible by printing technology. Philosopher and economist John Stuart Mill, in his essay “Civilisation,” worried about the overwhelming number of voices in public discourse: “Literature has suffered more than any other human production by the common disease. When there were few books, and when few read at all save those who had been accustomed to read the best authors, books were written with the well-grounded expectation that they would be read carefully, and if they deserved it, would be read often.”

Mill described a society where subtle or nuanced voices were drowned out by louder, more exaggerated ones. He observed a shift in value from substantial qualities to marketable ones, with more emphasis on appearance than actual achievements, and lamented that individual authors and thinkers were losing their influence, overshadowed in a crowded market filled with ideas, opinions, advertisements, and questionable content.

Despite the benefits, there were concerns about the ability of the middle-class reader to critically engage with this vast array of information. It was feared that the sheer volume of content would lead to superficial and erratic reading habits, overwhelming the public’s capacity to discern and process information effectively.

Health concerns about the telephone

In the late 19th century, there were widespread concerns about the telephone’s potential to cause deafness. These fears were indicative of early apprehensions about the physical health impacts of new technology, as explored in the “Diseases of Modern Life” research project by Oxford University.

It was also seen as a disrupter of social order. The integration of the telephone into private homes was criticized for its potential to disrupt established social order and norms. (Source)

The use of telephones for conversational calls was often trivialized and gendered in criticism. Journalists and telecommunication leaders particularly denigrated these calls as frivolous activities, predominantly associating them with women. (Source)

Contrary to the prevalent fears, a study by UC Berkeley sociologist Claude Fischer on telephone use in three California communities showed that telephones actually strengthened social connections. They enhanced ties within both immediate and distant social networks, demonstrating a positive social impact. (Source) Despite initial fears, Fischer concluded that the telephone did not fundamentally alter American lifestyles. Rather, it became a tool that allowed Americans to more vigorously pursue their characteristic ways of life.

As Hunter Oatman-Stanford observed in his article, Don’t Panic: Why Technophobes Have Been Getting It Wrong Since Gutenberg, “Indeed, many of the complaints about smartphones, ranging from etiquette issues to health risks, were also heard when the telephone began its march to ubiquity.”

That dangerous radio in your home

During the 1950s and ’60s, there was a great deal of concern about children and television, followed by even more concern about children and the internet beginning in the mid-1990s. But long before children were looking at screens, they were listening to radio broadcasts, which led to a 1941 Journal of Pediatrics study of hundreds of 6- to 16-year-old children. It concluded that more than half were severely addicted to radio and movie crime dramas, having given themselves “over to a habit-forming practice very difficult to overcome, no matter how the aftereffects are dreaded.” (Source: “The Sisyphean Cycle of Technology Panics,” Sage Journals)

The moral panic over radio wasn’t just limited to children. In the early 20th century, there were concerns about its potential to spread harmful content and undermine traditional values, including concerns that it would expose listeners to immoral music, degenerate language, and subversive political ideas. There were also fears about its potential to spread propaganda and misinformation, particularly during times of political unrest and war.

Computer phobia

In the 1980s, PCs were starting to proliferate, and even though they were quite popular, there was some concern about their impact on women. In their 1996 book Women and Computers, Anna Frances Grundy and John Grundy described the forms computer anxiety could take: “These can take such forms as fear of physically touching the computer or of damaging it and what’s inside it, a reluctance to read or talk about computers, feeling threatened by those who do know something about them, feeling that you can be replaced by a machine, become a slave to it, or feeling aggressive towards computers.”

In their 1984 book, Computerphobia: How to Slay the Dragon of Computer Fear, Sanford Weinberg and Mark Fuerst estimated that about five percent of people were severely computer-phobic, with symptoms that included nausea, sweaty palms, dizziness, and high blood pressure.

In a 2003 article for the Association of College and Research Libraries, Rita Kohrman focused on computer anxiety among college students around the turn of the millennium, writing, “Behavior may also be manifested through the expression of feelings or emotions. Students’ fears are usually irrational or out of proportion to the actual computer use.”

And there has long been fear about the impact of computer games on children. As recently as 2023, I heard opponents of gun control claim that violent games are a major reason for mass shootings, even though there is no evidence of causation. In 2005, California Gov. Arnold Schwarzenegger signed a bill banning the sale of violent video games to minors, but it was struck down in 2011 on free speech grounds by the U.S. Supreme Court in a 7-2 decision.

Y2K and the millennium bug

In the lead-up to January 1, 2000, there was a great deal of concern about the so-called Y2K or “millennium bug.” Like many fears, it had its roots in reality. Many computer systems developed during the latter half of the 20th century stored years as two digits, and there was concern that when the calendar rolled over to 2000, those computers would stop working. But there were two problems with that fear. First, most computers, even if they hadn’t been updated, would simply have displayed the wrong date but continued to work. Second, perhaps because of all the hoopla about the “bug,” the vast majority of systems had been fixed long before the clock struck midnight on January 1, 2000. On assignment for CBS News, I spent New Year’s morning at HP headquarters in Palo Alto, where its CEO at the time, Carly Fiorina, assured the assembled press that nothing horrible had happened at midnight.
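For readers curious about the mechanics, here is a minimal sketch, in Python and purely hypothetical rather than drawn from any real legacy system, of why storing years as two digits tended to produce wrong results, rather than crashes, at the rollover:

```python
# Simplified illustration of the Y2K "two-digit year" problem.
# Hypothetical example; not taken from any actual system's code.

def years_since_opened(opened_yy: int, current_yy: int) -> int:
    """Compute elapsed years from two-digit year fields, assuming every year is 19xx."""
    opened = 1900 + opened_yy      # e.g. 85 -> 1985
    current = 1900 + current_yy    # e.g. 00 -> 1900, not 2000
    return current - opened

print(years_since_opened(85, 99))  # 14  -- correct in 1999
print(years_since_opened(85, 0))   # -85 -- nonsensical after the rollover to 2000
```

As the sketch suggests, the typical failure was a wrong date or an absurd calculation, not a machine that simply stopped working.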

Similar concerns across millennia

By now, you have probably observed a clear pattern. Many of our fears of current technology look a lot like concerns raised hundreds or even thousands of years ago. With almost all new technologies, anxious onlookers have asked recurring questions. From the reed pen used by ancient Greeks to the generative AI technologies of today, people ask if these technologies will:

  • Endanger our children?
  • Harm our bodies?
  • Degrade our morality?
  • Spell the end of privacy?
  • Damage our mental faculties?
  • Destroy our jobs?
  • Undermine our social relationships?

Fears can start from many places, but they are often repeated, exacerbated and exaggerated by media, pundits, politicians and, sometimes, even academics.

Amy Orben has written about what she calls the “Sisyphean cycle of technology panics.” Much like the Greek myth of Sisyphus, who was doomed to an eternal, futile task, society keeps repeating a seemingly endless cycle of alarm and panic over new technologies.

She identified four stages of the Sisyphean cycle of technology panics:

  • In Stage 1 (panic creation), psychological and sociological factors lead to a society becoming worried about a new technology.
  • In Stage 2 (political outsourcing), politicians encourage or utilize technology panics for political gain but outsource the search for solutions to science.
  • In Stage 3 (wheel reinvention), scientists begin studying the new technology but lack the theoretical and methodological frameworks to efficiently guide their work.
  • In Stage 4 (no progress; new panic), scientific progress is too slow to guide effective technology policy and the cycle restarts because a new technology gains popularity and garners public, policy, and academic attention.

Regarding wheel reinvention, Orben writes: “Nearly identical questions about addiction to emergent technologies have been raised for radio (Preston, 1941), comic books (Wertham, 1954), television (Lowery & DeFleur, 1988), video games (Bushman & Anderson, 2002), and social media (Twenge, 2018).”

Panic cycle

In a paper titled “The Privacy Panic Cycle: A Guide to Public Fears About New Technologies,” authors Daniel Castro and Alan McQuinn describe four stages of “privacy panic,” a cycle that seemingly also applies to other safety- and security-related concerns about new technologies.

  1. Trusted Beginnings, when the technology is very new and largely unknown and concerns are minimal.
  2. Rising Panic, often driven by “politicians in search of hot issues to attract voters; government regulators trying to maintain or gain relevancy; and researchers seeking to advance their academic careers.”
  3. Deflating Fears as the technology becomes increasingly commonplace and the general public comes to embrace the technology.
  4. Moving On “as the technology becomes increasingly commonplace and interwoven into society.”

Severity is not the same as prevalence

One popular TV show that exposed potentially dangerous online predators was seen by millions and sparked a bit of “predator panic” during its run from 2004 to 2007. The potential harms it exposed were serious but rare.

Several years ago, I wrote an article pointing out that cyberbullying was not as prevalent as many people thought. Some people were claiming that as many as 85% of youth had been cyberbullied when the actual percentages, at the time, were mostly in the single digits. After my article appeared, I got an angry response from a parent who detailed the horrific bullying experience that her child endured. I felt extremely sad for both the child and the parent. No one should have to endure what this child went through. But the severity of her child’s experience, however awful it was, does not change the statistical probability of it happening to other children. I say this not to diminish the suffering of victims but, to the contrary, to argue that we need to focus our energies on supporting those who are actually victimized rather than exaggerating the probability of victimization.

There are terrible things that happen to small numbers of people. Society must and should do all it can to prevent these occurrences and support those who are victims, but exaggerating the problem does nothing to solve it and everything to create panic among a general population that is very unlikely to experience it. If anything, it takes attention away from the people who are suffering by positioning the situation as something that may be common and therefore seen as “normal.”

A better approach would be to use the public health model of primary, secondary and tertiary prevention. Although these three levels, as described by the Public Health Agency of Canada, refer to health issues, the same approach can be applied to many other situations:

  • Primary prevention involves activities aimed at reducing factors leading to health problems.
  • Secondary prevention activities involve early detection of and intervention in the potential development or occurrence of a health problem.
  • Tertiary prevention is focused on treatment of a health problem to lessen its effects and to prevent further deterioration and recurrence.

Moral panics can distract from more pressing risks

There are many examples of moral panics actually interfering with a focus on more likely or, in some cases, more severe risks. In public health, for example, we fear well-publicized, life-threatening diseases while often failing to take precautions against far more likely illnesses: we neglect to wash our hands, wear masks when appropriate, avoid large indoor crowds, and much more.

We worry about the dangers associated with smartphones and computers but do little to protect ourselves from death and injury caused by guns and automobiles. We rightly worry about the safety of police officers, but most people don’t think about jobs with much higher injury and fatality rates, such as loggers, roofers, agricultural equipment operators and supervisors of landscaping, lawn service and groundskeeping workers.

When it comes to tech, focusing on risks that are unlikely for the general public causes us to underestimate the impact on vulnerable communities and individuals. Not all risks affect all people equally, and exaggerating the risk for everyone can interfere with protecting those who are most vulnerable.

There is also the risk of focusing only on short-term negative consequences and failing to see how a technology can benefit people in the long run. Airplanes were very dangerous when they were first introduced, but they led to what is now the safest mode of transportation. The impact of a technology is not predetermined; it is up to us to shape how it is used and regulated.

Finally, our fears are shaped by our biases and can be manipulated by dominant groups. Many communities are underrepresented among those who develop our technologies, regulate them or report on them.

Things that we should have worried about

In hindsight, there are some inventions that received widespread use only to later be recognized as dangerous or harmful. In many cases, there was little or no public outcry until many years after these products were introduced, and some are still in widespread use. Examples include internal combustion engines and vehicles, plastics, tobacco products, CFCs (chlorofluorocarbons), asbestos, lead in paint and gasoline, and DDT. Other products, such as pesticides and prescription drugs, have great value but are often misused.

Stakeholder responsibilities

There are numerous stakeholders when it comes to technology, including children and teens as well as tech companies, but here I am focusing only on media, governments, NGOs, academia, parents, educators and “the public.”

Media

There is an expression in the news business: “If it bleeds, it leads.” But the media should avoid spreading panic while also being skeptical of hype. Journalists should understand the difference between severity and prevalence, seek out all points of view on emerging technologies, including advocates and skeptics, and consider both the long-range and short-range risks and opportunities. Media should also exercise some humility; there is a lot that we don’t know.

A headline you’ll never see. (Image generated by DALL·E 2)

Governments, NGOs and academia

Working in collaboration, these stakeholders can play a vital role both in alerting the public to possible dangers and in reassuring them when harms are unlikely. Through consultation and legislation, these groups play an important role in helping to regulate technologies. However, well-meaning legislation sometimes results in negative unintended consequences.

Recognizing the cyclical nature of technology, moral panics, and our common mistakes, it’s important to make decisions based on data and facts rather than speculation. Invest early in the data needed to make informed decisions, but don’t micromanage technology in ways that stifle innovation. Technology changes faster than law.

Consult and consider the needs of all stakeholders, including children and teens. Be very thoughtful about any legislation that takes away rights, including access to technology, information and speech.

Parents and K-12 educators

Talk with your kids and students about the technologies they use and why they use them. Be curious, keep an open mind and make it a learning experience for both the adult and the young person. Each generation uses online platforms differently when it comes to connecting with others, expressing creativity and identity, and finding information. Understand that technology products come and go, but what’s important is how young people adapt to technologies and safeguard themselves and others through civil behavior and kindness, critical thinking, media literacy and adherence to the rules of their families and their schools. Educators should clearly explain school or district rules as they affect what students can, must or cannot do with technology, while trying to better understand and advocate for the needs of their students. Educators should also pursue professional learning in media literacy, digital literacy, and the effective use of technology in teaching and learning so they can continue to serve students well.

The public

Cautiousness is helpful; fear is not, and panic is counterproductive. It’s important not to get swept up in narratives of techno-determinism and instead to remember that technology can and will have both positive and negative effects. It is up to us to shape both regulation and social norms.

Learn as much as you can about emerging technologies, including their positive uses as well as any risks, and understand that many risks can be managed and minimized. Learn to use available safety, privacy and security tools.

A personal note and general advice about generative AI concerns

While I am not worried about the various doomsday scenarios, I have to admit that, as a knowledge worker, I am both excited and nervous about generative AI. I’ve already used it to enhance my creativity; for example, this article benefited from generative AI (GAI) images and research.

However, I worry about its impact on my work. I’m pretty good at explaining things, but so are ChatGPT and other GAI systems. Sometimes I’ll use GAI to generate a response and think, “Hmm. That’s as good or even better than what I would have written.”

I’m at a stage in life where I am not personally threatened, but my experience thus far gives me empathy for those who have real worries about the future of their work. My main advice to my younger colleagues is to learn from history. Though, just as with investment advice, past performance does not guarantee future results, there is plenty of evidence to suggest that many, if not most, people ultimately benefit from new technologies even if there is short-term disruption. The key is to remain flexible. Learn to harness and use the new technologies. Think about alternative approaches or even entirely new careers. There is nothing new about change; I have changed careers five times during my working life, usually drawing on but expanding my existing skill sets.

In addition to affecting our jobs, GAI is likely to have an impact on our lives. From what I can see early on, that impact has been mostly positive, but, even though I don’t subscribe to the doom-and-gloom catastrophic predictions, I have no doubt that there will be unintended and unknown consequences going forward, just as there have been with nearly every technology from stone-cutting tools all the way to today’s devices, apps and online services. Like a lot of other observers, I can speculate about both the risks and long-term benefits, but only time will tell which are real, which are “moral panics” and what risks and benefits emerge that we can’t even imagine. That, my friends, is what it means to be alive in an age of fast-paced innovation. It can produce anxiety, but it can also bring enormous excitement and potential. We have a lot to look forward to.

