A Presidential Meeting on AI
On Thursday, President Joe Biden held a meeting at the White House with CEOs of leading AI companies, including Google, Microsoft, OpenAI, and Anthropic. The meeting emphasized the importance of ensuring the safety of AI products before deployment and addressing the risks posed by AI. However, some AI experts criticized the exclusion of ethics researchers who have warned of AI’s dangers for years.
Over the past few months, generative AI models such as ChatGPT have quickly gained popularity, driving companies to rapidly develop similar products. However, concerns have been growing about privacy violations, bias in hiring and other automated decisions, and the use of these models to create misinformation campaigns.
AI Ethics Researchers Respond
Critics of the companies’ ethical track records were not impressed by the meeting. They questioned the decision to invite executives who, they argue, represent the very companies that created the AI problems the White House now seeks to address.
On Twitter, AI researcher Dr. Timnit Gebru wrote, “It seems like we spend half our time talking to various legislators and agencies and STILL we have this… A room full of the dudes who gave us the issues & fired us for talking about the risks, being called on by the damn president to ‘protect people’s rights.'” In 2020, Google fired Gebru following a dispute over a research paper she co-authored that highlighted potential risks and biases in large-scale language models.
University of Oxford AI ethics researcher Elizabeth Renieris tweeted, “Unfortunately, and with all due respect POTUS, these are not the people who can tell us what is ‘most needed to protect society’ when it comes to #AI.”
AI Safety and AI Ethics
The criticism highlights the divide between “AI safety” (a movement concerned primarily with hypothetical existential risk from AI) and “AI ethics” (a group of researchers concerned largely about misapplications and impacts of current AI systems, including bias and misinformation).
Author Dr. Brandeis Marshall suggested organizing a “counter-meeting” that would include a diverse group of AI ethicists, practitioners, and researchers to discuss the real-world implications of AI and propose more inclusive and responsible solutions.
Although the White House meeting brought attention to the potential risks and challenges posed by AI, it remains to be seen whether the discussion will lead to concrete actions that address these issues. It is crucial for government, industry, and academia to collaborate and ensure that AI development is safe, responsible, and equitable.
Fostering a more comprehensive dialogue on AI ethics and safety, one that includes voices from marginalized communities and interdisciplinary perspectives, could be a critical step toward building more resilient and fair AI systems. Engaging stakeholders from varied backgrounds in the decision-making process will help address concerns around bias, privacy, and the potential misuse of AI technologies.
As AI continues to advance rapidly, it will become increasingly important for policymakers and industry leaders to work together and create robust regulatory frameworks that protect the public interest while enabling innovation. Only through a collaborative approach can we ensure that AI technologies are harnessed to benefit society as a whole, while mitigating potential risks and negative consequences.