If one enemy fighter were several hundred yards away, another were attacking from a mile out and a third were firing from a nearby room in close-quarters urban combat, how would U.S. Army soldiers perceive, integrate and quickly map the locations of multiple targets at once in 3D, all while tracking the range to each enemy force? Given the nuances of perspective, range and navigation, and the limitations of the human eye, could something like this even be possible?

These complexities form the conceptual basis upon which the Army is fast-tracking its Integrated Visual Augmentation System (IVAS), soldier-worn combat goggles engineered with advanced sensors able to overcome some of the limitations of human vision and quickly organize target data.

Army soldiers recently conducted a wide range of combat operations using an augmented reality heads-up display device that enables individual soldiers to fight, rehearse and train on a single system.

An Army team of developers demonstrated the system through a series of recent exercises at Fort Pickett, Va., an Army statement said.


"IVAS is designed to enhance the lethality and survivability of the Army’s close combat force through a combination of technologies and augmented reality capabilities delivered in the form of a Heads-Up Display device. It is a single platform that allows the soldier to fight, rehearse and train, because it leverages networked information sharing and mixed and augmented reality technologies,” an Army report said.

Integrated Visual Augmentation System (IVAS), soldier-worn combat goggles (Army)

Dr. Bruce Jette, assistant secretary of the Army for Acquisition, Logistics and Technology, told Warrior in an interview earlier this year that engineers created IVAS with the ability to compensate for some of the limitations of the human eye.

Operating IVAS involves a degree of what could be described as a “human-machine interface,” because the system integrates some of the neurological processes of human vision with software engineered to process, organize and display otherwise challenging factors such as depth perception, surrounding peripheral objects and other elements of visual orientation.

“We don’t perceive distance with one eye, we just see larger or smaller – but if I can put it in both eyes I can get the object in 3D. To do that I need to have the sensing system to know where the eye is looking and focusing. The IVAS does that. It determines what you are looking at and what type of object you are looking at and focusing on to generate a 3D image in front of you. The good part about this is I don’t need all those heavy optics on my face,” Jette said.
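To make the depth-perception idea concrete, the sketch below shows the basic triangulation geometry Jette is describing: estimating range from the angle at which two lines of sight converge across a known baseline. This is a simplified illustration under stated assumptions, not the actual IVAS algorithm; the function name and the numbers in the example are hypothetical.

```python
import math

# Minimal, hypothetical sketch of ranging by triangulation from two gaze angles,
# the basic geometry behind the stereo depth perception described above.
# Not the actual IVAS algorithm; names and values are illustrative only.

def range_from_vergence(baseline_m: float,
                        left_angle_rad: float,
                        right_angle_rad: float) -> float:
    """Estimate distance to the point where two lines of sight converge.

    baseline_m      -- separation between the two eyes/sensors (meters)
    left_angle_rad  -- inward rotation of the left line of sight from straight ahead
    right_angle_rad -- inward rotation of the right line of sight from straight ahead
    """
    # For a target on the midline, each gaze line, half the baseline and the
    # target distance form a right triangle: tan(angle) = (baseline / 2) / distance.
    convergence = left_angle_rad + right_angle_rad  # total vergence angle
    if convergence <= 0:
        return float("inf")  # parallel lines of sight: target effectively at infinity
    return (baseline_m / 2.0) / math.tan(convergence / 2.0)

# Example: sensors 6.5 cm apart, each rotated inward about 0.27 degrees,
# converge on a target roughly 7 meters away.
print(round(range_from_vergence(0.065, math.radians(0.27), math.radians(0.27)), 1))
```

With one eye there is no second angle to intersect, which is why, as Jette notes, a single eye only sees objects get larger or smaller rather than judging their distance.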

As designed, IVAS is built to shrink the hardware footprint, reduce weight and, perhaps of greatest combat relevance, streamline time-sensitive combat data.


“The sensor is seeing where my eyes are looking and preserving it based upon certain measurements. Then if I fly a UAV up there, IVAS can show the UAV coming into the scene – and converge the two onto each other so I can put the UAV right where you want it,” Jette said.
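The “converging” Jette describes amounts, at its simplest, to projecting the UAV’s reported position into the wearer’s field of view so an icon can be drawn in the right place on the display. The sketch below uses a generic pinhole-camera projection to illustrate that step; it is an assumption-laden simplification, not the IVAS software, and every function name, matrix and value in it is hypothetical.

```python
import numpy as np

# Hypothetical sketch of placing an icon for a UAV's reported 3D position at the
# correct pixel in a wearer's heads-up display, using a generic pinhole-camera
# projection. Not the actual IVAS pipeline; all values are illustrative.

def project_to_hud(target_world: np.ndarray,
                   head_rotation: np.ndarray,
                   head_position: np.ndarray,
                   focal_px: float,
                   screen_size: tuple[int, int]) -> tuple[int, int] | None:
    """Map a world-frame point (e.g., a UAV's reported location) to HUD pixels."""
    # Transform the target into the wearer's head frame (translate, then rotate).
    p = head_rotation.T @ (target_world - head_position)
    if p[2] <= 0:
        return None  # behind the wearer: nothing to draw
    # Perspective projection onto the display plane, origin at screen center;
    # world y is "up," so higher targets land above the center line.
    w, h = screen_size
    u = int(round(focal_px * p[0] / p[2] + w / 2))
    v = int(round(h / 2 - focal_px * p[1] / p[2]))
    if 0 <= u < w and 0 <= v < h:
        return u, v
    return None  # outside the field of view

# Example: wearer at the origin facing +Z, UAV 40 m ahead and 5 m up.
uav = np.array([0.0, 5.0, 40.0])
pixel = project_to_hud(uav, np.eye(3), np.zeros(3), focal_px=900.0,
                       screen_size=(1280, 720))
print(pixel)  # approx. (640, 248): draw the UAV marker there
```

The same transform, run continuously as the wearer’s head and the UAV both move, is what keeps the overlay “converged” on the live scene.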


Part of the soldier feedback process, interestingly, involved requests to build even more data, icons, detail and combat information into the system. Developers had deliberately limited the amount of information shown on IVAS to avoid overloading soldiers; the soldiers, however, liked the system and asked for an even more integrated display.

-- Kris Osborn is the managing editor of Warrior Maven and the defense editor of The National Interest --