International Finance Magazine | Technology

Reimagining AI under Trump 2.0


The prospects of artificial intelligence (AI) development remaining free from restrictions have brightened, as Republican Donald Trump prepares to enter the White House in January 2025 for his second presidential term.

The president-elect has promised to dismantle the incumbent Joe Biden’s landmark AI executive order. Experts suggest that state-level regulations and the drive for AI innovation will likely continue regardless of federal oversight changes, though concerns remain about balancing technological advancement with safety and ethical standards. Under the executive order (EO) signed by Biden in October 2023, the federal government was monitoring and counselling AI companies.

Republicans’ stance on AI

The US government’s current AI policy attempts to balance innovation with safety, security and ethical standards through several key initiatives, including the US Artificial Intelligence Safety Institute Consortium (AISIC), established in February 2024, and the executive order on AI that Joe Biden signed in October 2023.

The order mandated federal agencies to establish standards for AI safety and security, protect privacy and promote equity. It also required the creation of a chief artificial intelligence officer position within major federal agencies to oversee AI-related activities.

Before this order came along, the “Blueprint for an AI Bill of Rights” was released by the White House Office of Science and Technology Policy (OSTP) in October 2022. It proposed five principles to protect the public from harmful or discriminatory automated systems. Many of these were incorporated into the executive order.

The Republicans claim that the order was “hindering AI innovation” and “imposing radical left-wing ideas” on AI development. Donald Trump’s pledge further energised opponents of the executive order, who regard it as unlawful, risky and an impediment to the United States’ digital arms race with China.

Even if the Trump administration dismantles federal regulations, state laws will remain in force. California, for example, has the AI Transparency Act, signed into law by Governor Gavin Newsom in September 2024. It requires providers of generative AI systems to make AI detection tools available, to give users the option of adding a manifest disclosure identifying content as AI-generated, to embed a latent disclosure in AI-generated content, and to contractually require licensees to preserve the system’s ability to include such disclosures.

“A Trump administration might ease federal AI regulations to boost innovation and reduce what they might perceive as regulatory burdens on businesses, however, this wouldn’t impact state laws like those that have recently passed in California,” Sean Ren, associate professor in computer science at University of Southern California (USC) and CEO of Sahara AI, told Newsweek.

“This could create a patchwork of rules, making it more complex for businesses operating nationally to stay compliant. While companies may see less federal red tape, they’ll still face varying state regulations,” he added.

Given the lack of federal rules specifically targeting frontier AI companies, Markus Anderljung, director of policy and research at the Centre for the Governance of AI and adjunct fellow at the Centre for a New American Security (CNAS), told Newsweek the biggest difference “will be intensified efforts to make it easier to build new data centres and the power generation, including nuclear reactors, needed to run them.”

“Beyond the potential dismantling of the executive order, I expect the biggest difference will be on social issues, stripping out anything that’s seen as woke. With other things, I think you can expect more continuity,” Anderljung said of possible changes when Donald Trump returns.

“There’s bipartisan consensus on the importance of supporting the US artificial intelligence industry, of competing with China, of building out US data centre capacity,” he added.

“The focus on competing with China was an element of the previous Trump administration, which started imposing stricter AI-related export controls on China, beginning with the controls on [telecoms firm] Huawei, followed by export controls on semiconductor manufacturing tools,” Anderljung noted.

“This suggests a new Trump administration might keep or strengthen controls imposed since then by the Biden administration, especially in light of recent reports of Huawei chips being produced by TSMC (Taiwan Semiconductor Manufacturing Company),” he added.

Oversight and advice, hand in hand

Joe Biden’s order covered a wide range of topics, including establishing protections for AI’s application in drug discovery and leveraging AI to enhance veterans’ healthcare.

However, two provisions in the section addressing digital security risks and real-world safety impacts were the main source of political controversy surrounding the EO.

Under one clause, developers of powerful AI models must inform the government how they train those models and safeguard them against theft and tampering, including by submitting the findings of “red-team tests,” which simulate attacks to identify weaknesses in AI systems.

Under the other clause, the Commerce Department’s National Institute of Standards and Technology (NIST) was required to create guidelines that help businesses build AI models that are impartial and secure against threats.

Much of this work is already in progress: NIST has launched several initiatives to encourage model testing, and the government has proposed quarterly reporting requirements for AI developers.

Additionally, NIST has published AI guidance documents on risk management, secure software development, watermarking synthetic content, and preventing model abuse.

Proponents of these initiatives argue that basic government oversight of the rapidly expanding AI industry is essential to promote security improvements among developers.

Joe Biden’s attempt to gather data on how businesses create, evaluate and safeguard their AI models caused a stir on Capitol Hill almost immediately after it was introduced.

Republicans in Congress seized on Biden’s use of the 1950 Defence Production Act, a wartime law that permits the government to control private-sector operations to guarantee a steady supply of goods and services, as the legal basis for the new requirement.

Joe Biden’s action was deemed unnecessary, unlawful, and inappropriate by GOP lawmakers.

According to conservatives, the reporting requirement is a burden on the private sector. Representative Nancy Mace stated during a hearing she chaired in March 2024 on “White House overreach on AI” that the clause “could scare away would-be innovators and impede more ChatGPT-type breakthroughs.”

Steve DelBianco, the CEO of the conservative tech group NetChoice, says the requirement to report red-team test results amounts to de facto censorship, given that the government will be looking for problems like bias and disinformation.

Conservatives contend that the United States will suffer greatly in the technology competition with China if any regulations are implemented that hinder AI innovation.

Woke safety standards

The NIST guidelines are criticised by Republicans as being a kind of covert government censorship. NIST’s “woke AI safety standards,” according to Senator Ted Cruz, are a component of the Biden administration’s “plan to control speech” because they are based on “amorphous” social harms.

AI models have biases that support discrimination in hiring, law enforcement, and healthcare, as studies and investigations have repeatedly demonstrated. Research indicates that when people encounter these biases, they may unintentionally adopt them.

Conservatives are more concerned about the overcorrections made by AI companies to this issue than they are about the issue itself.

Republicans wanted NIST to concentrate on the physical safety risks of AI, such as how it could aid terrorists in creating bioweapons, which is something Biden’s EO does address.

According to Representative Ted Lieu, the Democratic co-chair of the House’s AI task force, these initiatives “allow the United States to remain on the cutting edge” of AI development “while protecting Americans from potential harms.”

Reporting requirements are vital for alerting the government about potentially dangerous new capabilities in advanced AI models, according to a US government official focused on AI issues.

The official, who spoke on condition of anonymity, cites OpenAI’s acknowledgement of its most recent model’s “inconsistent refusal of requests to synthesise nerve agents.”

According to the official, the reporting requirement isn’t very onerous. They contend that Biden’s EO represents “a very broad, light-touch approach that continues to foster innovation,” in contrast to AI regulations in China and the European Union (EU).

Nick Reese, the first director of emerging technology at the Department of Homeland Security from 2019 to 2023, denies conservative arguments that the reporting requirement will compromise businesses’ intellectual property.

He suggests that it may even assist start-ups in developing AI models that are more computationally efficient, require less data, and are not subject to the reporting threshold.

Experts say that NIST’s security guidelines are an essential tool for incorporating security features into new technology. They point out that bad AI models have the potential to cause major societal problems, such as unfair loss of government benefits and discrimination in lending and rental arrangements.

According to Donald Trump’s own first-term AI order, federal AI systems must adhere to civil rights, which will necessitate social harm analysis. For the most part, the AI community has embraced Joe Biden’s safety agenda.

Reversing Biden’s executive order would send a worrying message that “the US government is going to take a hands-off approach to AI safety,” according to Michael Daniel, a former presidential cyber adviser who currently serves as the head of the nonprofit information-sharing group Cyber Threat Alliance.

Regarding competition with China, the EO’s supporters argue that safety regulations will help America win by ensuring that American AI models outperform their Chinese counterparts and are shielded from Beijing’s economic espionage.

With the announcement of the creation of the Department of Government Efficiency (DOGE), to be headed by Tesla and X boss and staunch Trump backer Elon Musk, AI regulation, or deregulation, could come under the tech billionaire’s remit.

“Elon Musk’s role as CEO of AI-driven companies like Tesla and Neuralink presents inherent conflicts of interest, as policies he helps shape could directly impact his businesses,” Ren told Newsweek. “This complicates any direct advisory role he might take on. That said, Elon Musk has long advocated for responsible AI regulation and could bring valuable insights to AI policy without compromising the public interest.”

However, given the complexity of AI policy, Ren believes that “effective guidance requires more than just one mind,” adding that “it’s essential to have experts from multiple fields to address the diverse ethical, technological and social issues involved.”

Ren suggested that to better address AI issues, Donald Trump could establish a multi-adviser panel that includes experts from academia, technology and ethics, balancing Elon Musk’s influence and ensuring a well-rounded perspective.
