AI Needs Regulation. Here’s Where Congress Should Start


Washington is starting to get interested in artificial intelligence (AI). Just this week, a bipartisan group of lawmakers released a roadmap for congressional action on AI. It proposes investments to harness the benefits of the technology and asks for over $30 billion in funding. It is sweeping, yet not very detailed. The federal government has also convened an advisory committee of AI entrepreneurs to opine on AI safety. At the same time, a diplomatic delegation is meeting with China to discuss rules of the road for AI in trade and defense systems. The bottom line: There is a lot of activity concerning the safe use of AI, but little has been accomplished.

As a country, we moved far too slowly on the regulation of social media, and we're now paying the price in the form of rising misinformation and disinformation, the erosion of privacy, a lack of consistent guidelines for commerce, advertising, and speech, and, most importantly, crushing mental-health issues. We must not make this mistake again when it comes to AI.

Given the potency and proliferation of this technology, we need a nationwide regulatory framework governing artificial intelligence. To post some quick wins that don't require billions of dollars in investment, we should focus on areas where we already have regulatory experience or established legal practice. Four pillars of action can be put in place from the outset.

First, a regulatory framework for AI should echo legislation passed in Florida in March that prohibits social-media accounts for younger children and restricts them for older children. Similar legislation is being considered in the Senate Commerce Committee. Children should not have access to AI applications before high school, and parental opt-in should be required for children of high school age. There must be similar protections for the elderly, an age cohort the Justice Department has recognized as being particularly vulnerable to abuse and manipulation. Anyone over 65 should be given the option to actively opt in or out of any AI applications. Any company providing AI services to these age groups without the proper opt-in permission would be breaking the law.

Second, civil and criminal laws must be immediately "hardened" for AI use. Civil actions should cover AI's role in, for instance, intellectual property theft, financial fraud with automated forms, the unfair application of fees and penalties, and material business-contracting errors. Criminal acts should cover, to start, AI's role in scams, election fraud, and the incitement of violence. The technology cannot become a "get out of jail free" card. Having an AI accomplice, either intentionally or by happenstance, should not weaken the force of laws already on the books.

Third, AI models must operate subject to existing equal protection laws that prohibit, for example, employment discrimination. We cannot allow systemic bias in AI algorithms used by financial-services firms to determine eligibility and pricing for loans, insurance, and banking services, or in health care, housing and real estate, and democratic institutions and processes (e.g., elections). This pillar would require only minor tweaks to current legislation: adding clauses stipulating that 1) using AI without testing its algorithms for bias would be a violation, and 2) any bias later found to result from the use of AI would not be an excuse to escape accountability in the form of fines and other punishments.
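To make the testing requirement concrete, here is a minimal, illustrative sketch of the kind of bias check a lender's compliance team might run on an AI model's approval decisions. The group labels, the `disparate_impact_flags` helper, and the 80 percent threshold (the familiar "four-fifths" rule of thumb from employment law) are assumptions chosen for illustration, not proposals for statutory text.

```python
# Illustrative sketch only: a minimal disparate-impact check of the kind a
# bias-testing requirement might mandate. Group names and the 80% threshold
# (the "four-fifths" rule of thumb) are assumptions, not statutory language.
from collections import defaultdict

def selection_rates(decisions):
    """decisions: list of (group, approved) pairs -> approval rate per group."""
    totals, approved = defaultdict(int), defaultdict(int)
    for group, ok in decisions:
        totals[group] += 1
        approved[group] += int(ok)
    return {g: approved[g] / totals[g] for g in totals}

def disparate_impact_flags(decisions, threshold=0.8):
    """Flag any group whose approval rate falls below `threshold` times
    the highest group's rate."""
    rates = selection_rates(decisions)
    best = max(rates.values())
    return {g: r for g, r in rates.items() if r < threshold * best}

# Hypothetical loan-approval outcomes from an AI model under audit.
audit = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", True), ("group_b", False), ("group_b", False)]
print(disparate_impact_flags(audit))  # -> {'group_b': 0.333...}
```

A real audit would of course use statistical tests and far larger samples, but even a simple screen like this shows that "test your algorithms for bias" is an operational requirement regulators can verify, not an abstract aspiration.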

Fourth, regulators must take steps to prevent AI from being exploited by bad actors. A proven model is available for emulation. The Treasury Department’s Office of Foreign Assets Control, which is charged with preventing prohibited transactions, uses a “Know Your Customer” (KYC) process to verify actors and assess their risk profiles. This helps prevent money laundering or the movement of currency through the banking system for criminal purposes.

This same process should be incorporated into a regulatory framework for AI, specifically with regard to companies that sell products and models for use. Vendors like Microsoft and Google should be required to run a KYC-like process for every customer buying their tools and services, much like JP Morgan and Bank of America are required to do. Companies that fail to properly verify and assess their customers should be fined and forced to improve their processes. Bad actors that attempt to evade, or successfully evade, verification and assessment should be fined as well, in addition to facing other civil or criminal penalties for breaking the law.
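As a rough illustration of what such a gate could look like in practice, the sketch below shows a toy screening step a vendor might run before provisioning model access. The `Customer` fields, the denied-party list, and the decision rules are hypothetical placeholders standing in for the real sanctions lists and risk models that banks, and OFAC, actually use.

```python
# Illustrative sketch only: how a vendor's KYC-like gate might screen a
# customer before provisioning AI tools. The fields, the denied-party list,
# and the decision rules are hypothetical, not OFAC requirements.
from dataclasses import dataclass

DENIED_PARTIES = {"sanctioned entity llc"}  # stand-in for a real screening list

@dataclass
class Customer:
    name: str
    country: str        # a real check would also screen jurisdiction risk
    intended_use: str

def kyc_screen(customer: Customer) -> str:
    """Return 'deny', 'review', or 'approve' for an AI-tools purchase."""
    if customer.name.lower() in DENIED_PARTIES:
        return "deny"
    # Vague or undisclosed use cases route to human review, not automatic sale.
    if customer.intended_use.lower() in {"undisclosed", "unknown"}:
        return "review"
    return "approve"

print(kyc_screen(Customer("Acme Analytics", "US", "customer support chatbot")))
# -> approve
```

The point is not the specific rules but the workflow: verify who the customer is, assess the risk of what they intend to do, and keep a record regulators can audit.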

Each of these four pillars will have wide-ranging implications—and, admittedly, unforeseen consequences. But waiting until we have “the perfect solution,” as we did with social media, is obviously not the answer because we will end up with no solution whatsoever. And the congressional roadmap, while laudable, will simply take too long to implement. Establishing a foundation for governing AI now would not just quickly usher in a series of substantive wins, but also create mechanisms for corrections as the technology and our circumstances inevitably evolve.

Phil Siegel is co-founder of the Center for Advanced Pathogen Threat and Response Simulation and a managing partner at Tritium Partners.

The views expressed in this article are the writer’s own.