In the US, building on the momentum of the Biden AI Executive Order, the NIST AI Risk Management Framework, and the AI Bill of Rights, federal and state regulation is starting to emerge, with at least 40 states having announced AI-related bills.
And in a Presidential election year, AI is increasingly becoming a political, and partisan, hot topic. While election campaigns use AI tools to increase donations and target audiences, Congress grapples with national security – how to safeguard elections and limit the spread of AI-driven political disinformation and deepfakes. Meanwhile, regulators are also relying on existing frameworks to regulate against potential harms arising from the use of AI, making regulatory compliance a key imperative.
Senate AI Task Force Releases “AI Roadmap”
In May 2024, the bipartisan Senate AI Working Group, led by Senate Majority Leader Chuck Schumer together with Senators Mike Rounds, Martin Heinrich and Todd Young, introduced its sweeping Roadmap for AI Policy. The group's objective is “to complement the traditional congressional committee-driven policy process, considering that this broad technology does not neatly fall into the jurisdiction of any single committee.”
As such, the comprehensive AI Roadmap builds on the numerous ongoing US federal AI initiatives by identifying areas of consensus that merit bipartisan consideration in the Senate. The aim is to stimulate the momentum of bipartisan AI legislation, ensure the US remains at the forefront of innovation in this technology, and help all Americans benefit from the many opportunities created by AI.
State level developments
During 2024’s legislative session, state AI-related legislative developments swelled, with at least 40 states, plus the US Virgin Islands, Washington, DC and Puerto Rico reportedly announcing AI-related bills, while at least 6 states adopted AI-specific legislation or resolutions.
The growing list of state-specific AI regulation now includes the Utah Artificial Intelligence Policy Act, the New York AI Bias Law and, most recently, the Colorado Artificial Intelligence Act. With a classification system akin to the EU’s AI Act, Colorado’s AI Act seeks to protect against algorithmic discrimination and is expected to have a significant impact on several industry sectors.
In addition to Colorado’s landmark legislation, Florida established grants to implement AI in schools, to support teachers and students, while Tennessee similarly focused on schools, requiring policies concerning the use of AI for instructional purposes.
Indiana and West Virginia created an AI task force and select committee, respectively, and Utah developed its Artificial Intelligence Policy Act.
South Dakota strengthened its criminal law by expanding the offense of possession of child pornography to include computer-generated child pornography and simulated acts, while the US Virgin Islands implemented a centralized, real-time crime data system within its police department.
Maryland enacted procedures and policies governing the state government's use, assessment, development and procurement of AI-driven systems, while Washington earmarked funds for institutions to incubate tech business start-ups, particularly AI-focused ones, and to teach workers how to use AI within business.
Impact of AI on the presidential election
Importantly, 2024 is a Presidential election year in the US, heightening concerns about the potential use of deepfakes and other AI-driven influence in the campaign and election process. To that end, multiple bills have been introduced in the Senate, including the Protect Elections from Deceptive AI Act, the AI Transparency in Elections Act and the Preparing Election Administrators for AI Act.
While it is unclear what, if any, AI-related election legislation may be passed, Republicans in Congress have opposed certain prior efforts to limit the spread of political disinformation or that otherwise would increase burdens on freedom of speech. Notably, if elected, former President Trump reportedly has vowed to repeal Biden’s AI Executive Order, with a stated goal of protecting free speech.
In addition, AI tools reportedly are being used by candidates to grow campaign donations and better target audiences. A May 2024 report by Higher Ground Labs, a tech venture fund, reportedly cited a study finding that use of AI to create fundraising emails “grew dollars raised per work hour by 350 percent to 440 percent.”
Regulatory action against AI washing
Regulated organizations in the finance sector also need to be aware of the impact of existing regulatory frameworks as they increasingly integrate AI systems. This includes avoiding fraudulent – or even inadvertent – “AI washing” when making claims about their AI capabilities.
The SEC recently charged two investment advisers, Delphia (USA) Inc. and Global Predictions Inc., with making false and misleading statements about their use of AI. Delphia, for example, claimed that it “put[s] collective data to work to make our artificial intelligence smarter so it can predict which companies and trends are about to make it big and invest in them before everyone else.”
The SEC acknowledged that Delphia had “intended to use artificial intelligence and machine learning to collect data from its clients (such as from social media, banking, credit card, online purchases, etc.) as inputs into its algorithms”, although it “never accomplished this goal”. While Delphia did collect certain client data intermittently between 2019 and 2023, it never used that data with artificial intelligence or machine learning, or otherwise used that data in any way as inputs into its investing algorithms.
The SEC determined that Delphia’s statements were false and misleading, because Delphia did not have the AI and machine learning capabilities it claimed, and material, because Delphia was representing “to current and prospective clients that its use of client data as inputs into its investing algorithms was a key differentiating characteristic from other advisers”.
The importance of meaningful regulatory compliance
When it comes to AI, regulators are generally focused on the consumer and are seeking to evolve the regulatory framework to address increasing risks of customer harm.
For US companies, ensuring compliance when deploying AI in an evolving regulatory landscape is essential to responsible innovation and maintaining customer trust in their products and services. This includes taking care to avoid inadvertent “AI washing”, as firms ramp up their AI offerings and services.
Read more on developments in the regulation of AI in the context of online harms in our next post – subscribe to Tech Insights to receive it.