What if a small army of forward-operating robots could detect an upcoming river gap and then, without requiring human intervention, autonomously mesh together into a bridge-like structure, enabling manned combat units to advance?
What if an aerial swarm of mini-drones could operate autonomously in synchronized fashion, acting collectively as a group -- perhaps combining otherwise separate small munitions into a larger explosive to attack a large emerging target, if so directed by human decision-makers?
These hypothetical scenarios might be possible in 20 years or so, depending upon the progress of current Army Research Laboratory work investigating new scientific frontiers in robotics and autonomy.
The Army Research Laboratory, working with Northwestern University and Georgia Tech, is experimenting with groups of small robots to observe “emergent collective behavior” -- wherein otherwise disparate individual robots operate in a coordinated, synchronized fashion in ways they cannot do by themselves.
The best example of what Army scientists and academic partners are seeking to develop can, interestingly, be found in nature, according to Dr. Sam Stanton, Program Manager, Complex Dynamics and Systems, Army Research Office, Army Combat Capabilities Development Command's Army Research Laboratory.
“Ants are using coordinated behavior that is staggering. There is no one grand control over the entire colony, rather coordinated collective behavior. We are working to engineer robotics that behave like an ant colony. We have never tried to engineer things like this, which demonstrate the kind of collective behavior we see in nature,” Stanton said in an interview with Warrior.
Stanton likened the research aims to replicating natural phenomena observed within ant colonies, which exhibit a certain collective synergy: individual ants participate in purposeful, coordinated group functions, such as building anthills, transporting tiny pieces of food along a pheromone trail and moving regularly in an organized group manner to accomplish tasks. These group tasks, showing "emergent collective behavior," far exceed the capabilities of any individual ant -- something that has inspired intrigue and focus among ARL scientists.
Researchers are exploring these questions with small robot particles called "smarticles," observing how they move, adapt, change and morph in a larger, collective, coordinated fashion without requiring human direction to perform certain functions. The smarticles operate in what Stanton referred to as a "stochastic" manner, meaning their motion is randomly determined yet follows a pattern that can be recognized but not fully predicted.
The smarticles, which look like small maneuvering robots with plastic wings, are being observed with an eye toward how they operate and collide with one another in groups.
"When you get a group of them to bump up against each other with random motions, collective behavior emerges. It creates controllable locomotion. There is this emergent behavior you could not predict from looking at one particular component," Stanton said.
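The "random motion, recognizable pattern" idea Stanton describes can be illustrated with a toy simulation. This is not the ARL or Georgia Tech code, and every parameter here (particle count, step size, contact radius) is invented for illustration: each simulated "smarticle" takes small random steps, overlapping particles shove each other apart, and no particle is steered, yet the cluster's center of mass ends up drifting a measurable distance.

```python
import math
import random

def centroid(pts):
    """Average position of all particles."""
    n = len(pts)
    return (sum(p[0] for p in pts) / n, sum(p[1] for p in pts) / n)

def simulate_smarticles(n=5, steps=2000, step_size=0.05, contact=0.5, seed=42):
    """Toy stand-in for stochastic collective motion: individually
    random 'flaps' plus pairwise bumping produce a net drift of the
    whole cluster. Returns how far the center of mass moved."""
    rng = random.Random(seed)
    # Start the particles clustered near the origin.
    pts = [[rng.uniform(-1, 1), rng.uniform(-1, 1)] for _ in range(n)]
    start = centroid(pts)
    for _ in range(steps):
        # Each particle makes a small step in a random direction.
        for p in pts:
            ang = rng.uniform(0, 2 * math.pi)
            p[0] += step_size * math.cos(ang)
            p[1] += step_size * math.sin(ang)
        # Pairwise "bumping": overlapping particles push apart equally.
        for i in range(n):
            for j in range(i + 1, n):
                dx = pts[j][0] - pts[i][0]
                dy = pts[j][1] - pts[i][1]
                d = math.hypot(dx, dy)
                if 0 < d < contact:
                    push = (contact - d) / 2
                    pts[i][0] -= push * dx / d
                    pts[i][1] -= push * dy / d
                    pts[j][0] += push * dx / d
                    pts[j][1] += push * dy / d
    end = centroid(pts)
    return math.hypot(end[0] - start[0], end[1] - start[1])
```

Run with a fixed seed, the drift is repeatable; change the seed and the cluster wanders somewhere else -- stochastic in detail, but with a statistically recognizable pattern, which is the flavor of behavior the researchers are studying, not their actual model.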
Stanton explained the premise in the context of both natural phenomena and computer science, asking, "How would you assimilate a host of algorithms to generate a function you would not see from one machine alone?"
Engineers, of course, know how to architect individual autonomous robots increasingly able to perform a range of functions without human intervention. Now, ARL researchers are exploring the forefront of new scientific paradigms analyzing collective behavior and its potential military utility. This Army basic research in robotics, designed to inform operations 20 to 30 years from now, is not focused on building prototypes or near-term products per se; rather, it is oriented toward exploring and refining groundbreaking scientific principles likely to present options a decade or two from now.
"People have been wanting to build robots that assemble into other configurations for a while. It has always been done where we know everything about every little part and we know the inputs and the outputs. It is a complicated system but you have to control all the pieces to get a larger configuration to get a collective robot. This requires a complex control scheme and is computationally complex," Stanton said.
Although this work, exploring new frontiers in scientific research, is described as "very far out," it raises interesting questions about the kinds of mission possibilities it might inspire. Perhaps a swarm of mini-drones blanketing an area with surveillance cameras could autonomously align themselves to complete a larger, collective task -- such as concentrating sensors on a single area, or forming a collective "mass" of explosives in the air to descend upon and detonate together on a specific target -- if directed by a human operator. Or perhaps a small, mobile unit of ground robots could, upon detecting a threat, autonomously change shape and merge together to form a wall or protective shield against approaching enemy armored vehicles.
Stanton offered an interesting example, suggesting that perhaps a group of robots could autonomously self-organize to mesh together and form a bridge to cross a river gap.
“They could start to entangle and become a collective able to interlock down on each other and form actual structures. We don’t have robots that can entangle themselves,” Stanton said.
The Army's 2017 Robotics and Autonomous Systems Strategy speaks to the merits of these kinds of scenarios; the TRADOC (U.S. Army Training and Doctrine Command) paper maps out future operations and strategies in time increments: near-, mid- and far-term. The "far-term" portion of the strategy, oriented toward the 2030-to-2040 timeframe, specifically refers to the benefits anticipated when multiple robotic systems work together.
"To facilitate maneuver, formations benefit from armed ground and aircraft robotic platforms with smaller signatures and longer endurance, working alone or in pairs, to destroy high-value targets deep in enemy territory," the paper states.
It is important to note that this research does not take up the often-discussed question of whether robots should be able to attack without human intervention; rather, it explores new scientific boundaries for collective robot operations. Pentagon doctrine, of course, maintains that any use of "lethal" force requires a "human-in-the-loop." Operating within this understanding, ARL researchers are working with Army futurists to investigate the realm of the possible with regard to the combat implications of this kind of collective robotic autonomy. Researchers describe the collective autonomy as potentially scalable in many respects and therefore able to tailor its functionality as required by specific missions.
For instance, some mission areas allow for high levels of autonomy, while others require more human supervision. Interestingly, this question is explored in a September 2019 essay in the Army University Press' Military Review journal, "Potential for Army Integration of Autonomous Systems by Warfighting Function," by Maj. Thomas Ryan and Vikram Mittal, Ph.D. High levels of autonomy are useful for missions that do not require the use of lethal force, such as reconnaissance, the essay states. Other missions will need lower levels of autonomy, according to the essay, such as "tactical troop movements, occupying an area and countermobility operations." These tasks "can involve the use of force, so human involvement is required," the paper states.
Along similar lines, there are significant defensive applications for autonomous systems that can massively decrease sensor-to-shooter time when it comes to intercepting approaching enemy weapons. The well-known Iron Dome missile defense system, for instance, can "go through the detection, identification, launch and destruction kill chain against incoming fire in less than 30 seconds," according to a 2017 essay from the National Defense University's Dwight D. Eisenhower School for National Security and Resource Strategy, "Robotics and Autonomous Systems."
Stanton explained that, even though these technical possibilities may be years away from operational reality, they are closely aligned with Army thinking when it comes to modernization strategy and ongoing preparations for future war. Autonomous unmanned systems, operating in a collective and coordinated fashion, can offer new possibilities to connect otherwise separate domains such as cyber, space, air, land and sea.
“The Army is aware that it is operating in a complex interconnected system. The multi-domain doctrine seeks to capture this understanding of complexity with layered interactions,” Stanton said.