
US senator open letter calls for AI security at ‘forefront’ of development

Interior of the Russell Senate Office Building in Washington, DC



Today, Sen. Mark Warner (D-VA), chairman of the Senate Intelligence Committee, sent a series of open letters to the CEOs of AI companies, including OpenAI, Google, Meta, Microsoft and Anthropic, calling on them to put security at the “forefront” of AI development.

“I write today regarding the need to prioritize security in the design and development of artificial intelligence (AI) systems. As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way,” Warner wrote in each letter. 

More broadly, the open letters articulate legislators’ growing concerns over the security risks introduced by generative AI.   

Security in focus

This comes just weeks after NSA cybersecurity director Rob Joyce warned that ChatGPT will make hackers who use AI “much more effective,” and just over a month after the U.S. Chamber of Commerce called for regulation of AI technology to mitigate the “national security implications” of these solutions.

The top AI-specific issues Warner cited in the letter were integrity of the data supply chain (ensuring the origin, quality and accuracy of input data), tampering with training data (also known as data-poisoning attacks), and adversarial examples (inputs deliberately crafted to cause a model to make mistakes).
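To illustrate the last of those concerns, the sketch below shows a simplified adversarial-example attack in the style of the fast gradient sign method (FGSM). The toy model, input data and perturbation budget are hypothetical placeholders, not anything referenced in Warner's letter; the point is only that a tiny, targeted change to an input can be constructed from the model's own gradients.

```python
# Minimal FGSM-style sketch of an adversarial example.
# The model, input and epsilon here are illustrative placeholders.
import torch
import torch.nn as nn

# Toy classifier standing in for any deployed model (hypothetical).
model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 2))
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(1, 4, requires_grad=True)  # benign input
y = torch.tensor([1])                      # its true label
epsilon = 0.1                              # perturbation budget

# Nudge the input in the direction that maximizes the model's loss,
# keeping the change small enough to be hard to notice.
loss = loss_fn(model(x), y)
loss.backward()
x_adv = (x + epsilon * x.grad.sign()).detach()

print("clean prediction:      ", model(x).argmax(dim=1).item())
print("adversarial prediction:", model(x_adv).argmax(dim=1).item())
```

On a trained production model, this kind of perturbation can flip a confident, correct prediction into a confident, wrong one; with this untrained toy model the two predictions may or may not differ, which is why it should be read as a sketch of the technique rather than a working exploit.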

Warner also called for AI companies to increase transparency around the security controls implemented within their environments, requesting an overview of how each organization approaches security, how systems are monitored and audited, and which security standards they adhere to, such as NIST’s AI Risk Management Framework.