The possibility that an artificial intelligence system might launch a nuclear attack on its own has prompted House lawmakers to propose legislative language that would ensure America’s nuclear arsenal remains under human control.

Rep. Ted Lieu, D-Calif., has proposed a bipartisan amendment to the 2024 defense policy bill that requires the Pentagon to put in place a system that ensures "meaningful human control is required to launch any nuclear weapon." It defines human control by saying people must have the final say in selecting and engaging targets, including when, where and how they are hit with a nuclear weapon.

It is a concept that senior military leaders say they are already following. In April, top AI advisers at U.S. Central Command told Fox News Digital that their goal is to use AI to more rapidly assess data and provide options for military leaders, but to let humans have the final say in tactical military decisions.



Reps. Juan Ciscomani, R-Ariz., and Ted Lieu, D-Calif., proposed an amendment to the National Defense Authorization Act that would require human control over nuclear launch decisions.

However, the bipartisan support for Lieu’s amendment shows lawmakers are increasingly worried that AI itself could act on decisions as quickly as it can assess a situation. Lieu’s amendment to the National Defense Authorization Act (NDAA) is supported by GOP lawmakers Juan Ciscomani of Arizona and Zachary Nunn of Iowa, along with Democrats Chrissy Houlahan of Pennsylvania, Seth Moulton of Massachusetts, Rashida Tlaib of Michigan and Don Beyer of Virginia.

House Republicans are expected to begin deciding as early as next week which of the more than 1,300 proposed amendments to the NDAA will get a vote on the House floor. Lieu’s proposal is not the only AI-related amendment to the bill – another sign that while Congress has yet to pass anything close to a broad bill regulating this emerging technology, it seems likely to approach the issue in a piecemeal fashion.

Rep. Stephen Lynch, D-Mass., proposed a similar amendment to the NDAA that would require the Defense Department to adhere to the Biden administration’s February guidance on the "Responsible Military Use of Artificial Intelligence and Autonomy."



Rep. Mike Rogers, a Republican from Alabama and chairman of the House Armed Services Committee, has moved a bill requiring "responsible development and use" of AI, but some lawmakers want to go further. (Getty Images)

Among other things, that non-binding guidance says nations should "maintain human control and involvement for all actions critical to informing and executing sovereign decisions concerning nuclear weapons employment.

"States should design and engineer military AI capabilities so that they possess the ability to detect and avoid unintended consequences and the ability to disengage or deactivate deployed systems that demonstrate unintended behavior," it added. "States should also implement other appropriate safeguards to mitigate risks of serious failures."

However, not all the amendments are aimed at putting the brakes on AI. One proposal from Rep. Josh Gottheimer, D-N.J., would set up a U.S.-Israel Artificial Intelligence Center aimed at jointly researching AI and machine learning that has military applications.



Lawmakers have several proposals for how the Pentagon should be dealing with AI, including one that would require cooperation with Israel. (STAFF/AFP via Getty Images)

"The Secretary of State and the heads of other relevant Federal agencies, subject to the availability of appropriations, may enter into cooperative agreements supporting and enhancing dialogue and planning involving international partnerships between the Department of State or such agencies and the Government of Israel and its ministries, offices, and institutions," the amendment stated.

Another, from Rep. Rob Wittman, R-Va., would require the Pentagon to set up a process for testing and evaluating large language models like ChatGPT on subjects like how factual they are and the extent to which they are biased or promote disinformation.


The bill as passed by the House Armed Services Committee last month already includes language that would require the Pentagon to set up a process ensuring the "responsible development and use" of AI, and to study the possible use of autonomous systems to make the military’s work more efficient.