Critics question tech-heavy lineup of new Homeland Security AI safety board


On Friday, the US Department of Homeland Security announced the formation of an Artificial Intelligence Safety and Security Board that consists of 22 members pulled from the tech industry, government, academia, and civil rights organizations. But given the nebulous nature of the term "AI," which can apply to a broad spectrum of computer technology, it's unclear if this group will even be able to agree on what exactly it is safeguarding us from.

President Biden directed DHS Secretary Alejandro Mayorkas to establish the board, which will meet for the first time in early May and subsequently on a quarterly basis.


The fundamental assumption posed by the board's existence, and reflected in Biden's AI executive order from October, is that AI is an inherently risky technology and that American citizens and businesses need to be protected from its misuse. Along those lines, the goal of the group is to help guard against foreign adversaries using AI to disrupt US infrastructure; develop recommendations to ensure the safe adoption of AI tech into transportation, energy, and Internet services; foster cross-sector collaboration between government and businesses; and create a forum for AI leaders to share information on AI security risks with the DHS.


It's worth noting that the ill-defined nature of the term "Artificial Intelligence" does the new board no favors regarding scope and focus. AI can mean many different things: It can power a chatbot, fly an airplane, control the ghosts in Pac-Man, regulate the temperature of a nuclear reactor, or play a great game of chess. It can be all those things and more, and since many of those applications of AI work very differently, there's no guarantee any two people on the board will be thinking about the same type of AI.


This confusion is reflected in the quotes provided by the DHS press release from new board members, some of whom are already talking about different types of AI. While OpenAI, Microsoft, and Anthropic are monetizing generative AI systems like ChatGPT based on large language models (LLMs), Ed Bastian, the CEO of Delta Air Lines, refers to entirely different classes of machine learning when he says, "By driving innovative tools like crew resourcing and turbulence prediction, AI is already making significant contributions to the reliability of our nation’s air travel system."


So, defining the scope of what AI exactly means—and which applications of AI are new or dangerous—might be one of the key challenges for the new board.


A roundtable of Big Tech CEOs attracts criticism


For the inaugural meeting of the AI Safety and Security Board, the DHS selected a tech industry-heavy group, populated with CEOs of four major AI vendors (Sam Altman of OpenAI, Satya Nadella of Microsoft, Sundar Pichai of Alphabet, and Dario Amodei of Anthropic), CEO Jensen Huang of top AI chipmaker Nvidia, and representatives from other major tech companies like IBM, Adobe, Amazon, Cisco, and AMD. There are also reps from big aerospace and aviation: Northrop Grumman and Delta Air Lines.


Upon reading the announcement, some critics took issue with the board's composition. On LinkedIn, Timnit Gebru, founder of the Distributed AI Research Institute (DAIR), particularly criticized OpenAI's presence on the board and wrote, "I've now seen the full list and it is hilarious. Foxes guarding the hen house is an understatement."

Here's a full member list of the inaugural AI Safety and Security Board as named by the DHS:



  • Sam Altman, CEO, OpenAI

  • Dario Amodei, CEO and Co-Founder, Anthropic

  • Ed Bastian, CEO, Delta Air Lines

  • Rumman Chowdhury, Ph.D., CEO, Humane Intelligence

  • Alexandra Reeve Givens, President and CEO, Center for Democracy and Technology

  • Bruce Harrell, Mayor of Seattle, Washington; Chair, Technology and Innovation Committee, United States Conference of Mayors

  • Damon Hewitt, President and Executive Director, Lawyers’ Committee for Civil Rights Under Law

  • Vicki Hollub, President and CEO, Occidental Petroleum

  • Jensen Huang, President and CEO, Nvidia

  • Arvind Krishna, Chairman and CEO, IBM

  • Fei-Fei Li, Ph.D., Co-Director, Stanford Human-centered Artificial Intelligence Institute

  • Wes Moore, Governor of Maryland

  • Satya Nadella, Chairman and CEO, Microsoft

  • Shantanu Narayen, Chair and CEO, Adobe

  • Sundar Pichai, CEO, Alphabet

  • Arati Prabhakar, Ph.D., Assistant to the President for Science and Technology; Director, the White House Office of Science and Technology Policy

  • Chuck Robbins, Chair and CEO, Cisco; Chair, Business Roundtable

  • Adam Selipsky, CEO, Amazon Web Services

  • Dr. Lisa Su, Chair and CEO, Advanced Micro Devices (AMD)

  • Nicol Turner Lee, Ph.D., Senior Fellow and Director of the Center for Technology Innovation, Brookings Institution

  • Kathy Warden, Chair, CEO and President, Northrop Grumman

  • Maya Wiley, President and CEO, The Leadership Conference on Civil and Human Rights


In early 2023, OpenAI employees rang alarm bells over the danger of certain AI systems, which they said posed "an existential risk" that could lead to humanity's extinction. Altman also appeared before the Senate and basically asked for government regulation over advanced AI systems. Critics said these efforts served to distract from the potential of existing AI harms (such as inherent bias) and could potentially lock down advanced AI to a few large tech companies that can afford to comply with regulations.

So it's notable, then, that no one from Meta is on the new DHS AI board. Meta is arguably the largest proponent of open-weights and source-available AI models, an approach that the "closed" AI firms heavily represented on the board typically criticize as dangerous or insecure, since it puts technology they consider risky (and profit from gatekeeping) into anyone's hands.


“This table seems to be missing some critical seats”


Critics like Gebru worry that the tech industry's outsized influence on the AI Safety and Security Board may lead to policies that prioritize the interests of large corporations over the public good. This could potentially stifle innovation, perpetuate existing biases, and leave critical voices unheard in discussions that shape the future of AI development and governance.


As a counterbalance to Big Tech's presence on the board, the DHS also selected a few notable members of civil rights organizations and academia, such as Dr. Fei-Fei Li of Stanford University, CEO Maya Wiley of The Leadership Conference on Civil and Human Rights, President Damon Hewitt of the Lawyers’ Committee for Civil Rights Under Law, and CEO Alexandra Reeve Givens of the Center for Democracy and Technology.

Ars asked Dr. Margaret Mitchell, an AI ethics researcher at Hugging Face, for her thoughts on the new board. Mitchell praised the inclusion of Dr. Arati Prabhakar of the White House Office of Science and Technology Policy but felt some key voices were lacking. "This table seems to be missing some critical seats," she told us.


Who would she put on the board? "Representatives from Data & Society, DAIR, or AI Now could be very helpful," Mitchell said. "Georgetown Law's Center on Privacy & Technology has also done some stellar work." Mitchell also listed a few names, such as Claire Garvie, a privacy lawyer at the National Association of Criminal Defense Lawyers, and Joy Buolamwini of the Algorithmic Justice League.


But most of all, Mitchell emphasized the need for more balance in representation to offset the corporate-heavy focus. "If we can all agree that we care about keeping people 'safe' with respect to how AI is used," she said, "then I think we can agree it's important to have people at the table who specialize in centering people over technology."