What if an artificial intelligence-enabled combat system goes too far, expediting and completing a decision-making cycle beyond what a human operator intended? Can boundaries or safeguards be engineered into a system to define and limit which decisions its algorithms are permitted to make?

U.S. Army weapons developers are carefully weighing these nuances as the service carves out the best warfare uses for fast-increasing levels of AI-empowered autonomous decision-making. A human operating in a crucial command-and-control capacity will likely need to ensure the machines adhere to the necessary safeguards and standards.

“If I direct a machine to do something, what happens when it starts making decisions on its own? How have I properly told it what I want it to do and what is acceptable and what isn’t?... A lot of time people who are desirous of AI behavior patterns have not fully understood that there are a lot of boundaries you have to make sure you pay attention to,” Dr. Bruce Jette, assistant secretary of the Army for acquisition, logistics and technology, told The National Interest in a recent interview.
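
In software terms, one way such boundaries might be expressed is as an explicit envelope of operator-defined limits that every machine-proposed action must pass before execution. The Python sketch below is purely illustrative; the EngagementLimits fields, thresholds and action names are hypothetical and not drawn from any actual Army system.

```python
from dataclasses import dataclass

# Hypothetical operator-defined envelope; field names are illustrative only.
@dataclass(frozen=True)
class EngagementLimits:
    max_range_km: float          # proposed actions beyond this range are rejected
    min_confidence: float        # minimum target-identification confidence
    allowed_actions: frozenset   # e.g. {"track", "illuminate"}; note "fire" is excluded

def within_limits(action: str, range_km: float, confidence: float,
                  limits: EngagementLimits) -> tuple[bool, str]:
    """Check a machine-proposed action against the operator's envelope."""
    if action not in limits.allowed_actions:
        return False, f"action '{action}' not authorized by operator"
    if range_km > limits.max_range_km:
        return False, "target beyond authorized range"
    if confidence < limits.min_confidence:
        return False, "identification confidence below threshold"
    return True, "within operator-defined limits"

limits = EngagementLimits(max_range_km=10.0, min_confidence=0.95,
                          allowed_actions=frozenset({"track", "illuminate"}))
print(within_limits("fire", 4.2, 0.99, limits))   # rejected: not an authorized action
print(within_limits("track", 4.2, 0.99, limits))  # accepted: inside the envelope
```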

Pentagon doctrine, of course, specifies that any use of lethal offensive force must be authorized by a human decision-maker. Yet beyond pre-programmed limits, such as preventing a weapon from actually firing, are there decision-making processes that should not be left solely to a computer?
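
One common way to encode that kind of doctrine in software is as a hard gate: the system may compute and recommend on its own, but any lethal action path is blocked unless a human decision-maker has explicitly authorized it. The sketch below is a minimal illustration under that assumption; the action names and the LETHAL_ACTIONS set are invented for the example.

```python
# Minimal sketch of a human-in-the-loop gate; all names are hypothetical.
LETHAL_ACTIONS = {"fire", "launch"}

def execute(action: str, human_authorized: bool) -> str:
    """Run non-lethal actions autonomously; hard-block lethal ones
    unless a human decision-maker has explicitly authorized them."""
    if action in LETHAL_ACTIONS and not human_authorized:
        return f"BLOCKED: '{action}' requires human authorization"
    return f"EXECUTED: {action}"

print(execute("track", human_authorized=False))  # autonomous action, allowed
print(execute("fire", human_authorized=False))   # blocked by the doctrine gate
print(execute("fire", human_authorized=True))    # proceeds only after human sign-off
```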

U.S. Army soldiers from Delta Company, 3rd Battalion, 187th Infantry Regiment, 3rd Brigade Combat Team, 101st Airborne Division (Air Assault), fire the TOW missile system during a live-fire exercise at Fort Campbell, Ky., Oct. 24, 2018 (file photo). (U.S. Army photo by Capt. Justin Wright)

“We are trying to develop an interactive approach to AI. How do I get soldiers involved and how do I understand what the implications are? How do I link that into a development program when it comes to aiming a gun?” he said.

These nuances are precisely why Army leaders and senior weapons developers say the best use of AI involves a human-machine interface, a kind of collaborative teaming between the two. While computer automation and advanced AI-specific algorithms can perform certain crucial combat functions exponentially faster and more efficiently than humans, many key decisions still need to be made by a human.

Human cognition is itself an extremely complex, unique phenomenon, not easily mirrored or replicated by even the most advanced machines. Machines, however, can perform procedural functions, such as gathering and organizing data and running essential analyses, exponentially faster than humans can, vastly improving situational awareness and enabling potentially life-saving calculations across a host of otherwise too-complicated, interwoven variables.
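
As a rough illustration of that division of labor, the sketch below fuses a batch of hypothetical sensor tracks, scores and ranks them in a fraction of a second, and hands the ranked list to a human operator rather than acting on it. The Track fields and scoring weights are invented purely for illustration.

```python
from dataclasses import dataclass

# Hypothetical sensor track; fields and weights are invented for illustration.
@dataclass
class Track:
    track_id: str
    range_km: float
    closing_speed_mps: float
    id_confidence: float  # 0.0 - 1.0

def threat_score(t: Track) -> float:
    """Simple weighted score: closer, faster, better-identified tracks rank higher."""
    return (t.closing_speed_mps / 300.0) + (1.0 / max(t.range_km, 0.1)) + t.id_confidence

def rank_for_operator(tracks: list[Track], top_n: int = 3) -> list[Track]:
    """Organize and prioritize raw tracks; the human decides what to do with them."""
    return sorted(tracks, key=threat_score, reverse=True)[:top_n]

tracks = [
    Track("T1", range_km=12.0, closing_speed_mps=250.0, id_confidence=0.6),
    Track("T2", range_km=3.5,  closing_speed_mps=300.0, id_confidence=0.9),
    Track("T3", range_km=8.0,  closing_speed_mps=90.0,  id_confidence=0.4),
]
for t in rank_for_operator(tracks):
    print(t.track_id, round(threat_score(t), 2))
```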

While Jette and other senior weapons developers are clear that proper applications of this kind of technology, drawing upon AI-enabled real-time analytics, bring paradigm-shifting new dimensions to warfare, there is also consensus that certain faculties of human intuition and problem-solving simply cannot be replicated by any machine. At least… not yet.