
By Larry Magid
This post first appeared in the Mercury News

On October 30, President Biden issued an executive order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. As expected, the nearly 20,000-word document committed the federal government to managing the risks from its own use of AI, but it also set some standards for private-sector use and development of AI. The White House also posted a 1,900-word fact sheet that summarizes the order.

Guiding principles

The administration established eight guiding principles. Some apply to both government and the private sector, while others focus on the federal government’s own use of AI.

In summary, the Biden administration says that artificial intelligence must be safe and secure; it must promote responsible innovation, competition and collaboration; and it should support American workers, advance equity and civil rights, protect the interests of Americans who interact with or purchase AI and AI-enabled products, and protect privacy and civil liberties. The order also calls for the federal government to manage the risks from its own use of AI and to “lead the way to global societal, economic, and technological progress.”

As is frequently the case with executive orders, it provides a broad framework, leaving plenty of room for Congress to pass laws and for government agencies to issue regulations. But it does put the federal government, at least while Biden is in the White House, squarely on the side of promoting safe use of AI not just by government but by all stakeholders, including private industry.

Security and public safety

Much of the order deals with issues of national security, cybersecurity, public safety and national infrastructure. There are also clauses to promote innovation and competition, including protecting the industry’s ability to recruit foreign talent by calling on agencies to “streamline processing times of visa petitions and applications” and to “support the incorporation or expansion of AI-related curricula, training, and technical assistance, or other AI-related resources.”

In addition to addressing government use of the technology, the order requires companies developing “any foundation model that poses a serious risk to national security, national economic security, or national public health and safety” to notify the federal government when training the model, to ensure that “AI systems are safe, secure, and trustworthy before companies make them public.” Depending on how this is interpreted and implemented, it could have a profound impact on AI development going forward, much as automobile safety standards shape the way auto companies design and build vehicles.

Preventing fraud and misinformation

When it comes to federal use of AI-generated content, the administration is calling on the Department of Commerce to “protect Americans from AI-enabled fraud and deception by establishing standards and best practices for detecting AI-generated content and authenticating official content.” Federal agencies will be required to watermark and label AI-generated content, which could also set a standard, or at least a benchmark, for private-sector use of AI-generated content. It’s important to remember, however, that AI is not the only source of inaccurate information. Humans are more than capable of creating and spreading false information.

There isn’t a lot in the order regarding the use of AI in advertising and product promotion, but there is a clause about protecting “consumers while ensuring that AI can make Americans better off,” including “the responsible use of AI in healthcare and the development of affordable and life-saving drugs.”

One of my worries about AI is the advantage that large, well-resourced companies have over small businesses and startups when it comes to developing and profiting from AI technology. To that end, the order aims to “promote a fair, open, and competitive AI ecosystem” and provide small developers and entrepreneurs access to technical assistance and resources. This strikes me as more aspirational than enforceable, but I’m glad it was included.

Jobs and leveling the playing field

A lot of people worry about the impact of AI on jobs, and no matter how much the Biden administration wants to protect jobs, I have no doubt that AI will eliminate some jobs and create others, just as new technologies have always done. Still, the order calls for the federal government to address AI-related workforce disruptions and for the Department of Labor to analyze “the abilities of agencies to support workers displaced by the adoption of AI.” These provisions affect the private sector as well as the government, calling for companies to provide “transparency, engagement, management, and activity protected under worker-protection laws” and to ensure that employees whose work is “monitored or augmented by AI” are appropriately compensated for all their work time.

One of the big concerns about AI is the risk of it exacerbating discrimination and inequalities, including in the criminal justice system. To that end, the order requires the attorney general to address civil rights and civil liberties violations and discrimination related to AI, including “algorithmic discrimination.” This includes policing, sentencing, parole decisions, crime forecasting and “predictive policing.” Reading through these provisions reminds me of the 2002 Tom Cruise movie “Minority Report,” based on a 1956 Philip K. Dick novella that imagined a time when police would arrest people before they committed actual crimes. What was once science fiction is now a real possibility.

There are plenty of other areas where equity is at stake, such as landlords using AI to screen tenants, banks using it to make credit decisions or set interest rates, and insurance companies or hospitals using it to decide who gets access to life-saving operations or drugs.

The order also has provisions to protect Americans’ privacy, including strengthening privacy-enhancing technologies such as cryptography and strengthening privacy guidance for federal agencies to account for AI risks.

Whether you agree or disagree with what the administration is putting forward, one thing is for sure: this is a thorough and well-thought-out order, reflecting a great deal of staff time and considerable expertise and technical knowledge from the people who helped draft it. That has also been the case with most previous Democratic and Republican administrations, where the president, regardless of his political views, surrounded himself with competent advisors.

Larry Magid is a tech journalist and internet safety activist. Contact him at [email protected].
