Pentagon officials hired a former Google Cloud executive to reduce public worry about computers becoming military decision makers.
The U.S. Central Command hired Andrew Moore as its first-ever CENTCOM Advisor on AI, Robotics, Cloud Computing, and Data Analytics, according to a department statement. Moore is a former Dean of the Carnegie Mellon University School of Computer Science. He later served as Director of Google Cloud AI.
U.S. military officials may be acting to allay fears of AI epitomized by Hollywood’s hit movie, “The Terminator.” In that film, Skynet, the self-aware global computer network, thwarts attempts to shut it down, launches nuclear war, and hunts the humans who survive “Judgment Day.”
Fox News further reported:
The U.S. military is embracing artificial intelligence as a tool for quickly digesting data and helping leaders make the right decision – and not to make those decisions for the humans in charge, according to two top AI advisors in U.S. Central Command.
CENTCOM, which is tasked with safeguarding U.S. national security in the Middle East and Central and South Asia, just hired Dr. Andrew Moore as its first AI advisor. Moore is the former director of Google Cloud AI and former dean of the Carnegie Mellon University School of Computer Science, and he’ll be working with Schuyler Moore, CENTCOM’s chief technology officer.
In an interview with Fox News Digital, they both agreed that while some are imagining AI-driven weapons, the U.S. military aims to keep humans in the decision-making seat, using AI instead to assess the massive amounts of data that inform the people sitting in those seats.
“There’s huge amounts of concern, rightly so, about the consequences of autonomous weapons,” Dr. Moore said. “One thing that I’ve been very well aware of in all my dealings with… the U.S. military: I’ve never once heard anyone from the U.S. military suggest that it would be a good idea to create autonomous weapons.”
Schuyler Moore said the military sees AI as a “light switch” that helps people make sense of data and point them in the right direction. She stressed that the Pentagon believes that it “must and will always have a human in the loop making a final decision.”
“Help us make a better decision, don’t make the decision for us,” she said.
One example they discussed in CENTCOM’s sphere of influence is using AI to crack down on illegal weapons shipments around Iran. Ms. Moore said that officials believe AI can be used to help the military narrow the number of possibly suspicious shipments by understanding what “normal” shipping patterns look like and flagging those that fall outside the norm.
Once a subset of possibly suspicious ships on the water is identified, AI might also be used to quickly interpret pictures and videos and deliver interpretations and assessments to human military leaders.
“You can imagine thousands and thousands of hours of video feed or images that are being captured from an unmanned surface vessel that would normally take an analyst hours and hours and hours to go through,” Ms. Moore said. “And when you apply computer vision algorithms, suddenly you can drop that time down to 45 minutes.”
Dr. Moore said that to get this kind of system up and running, tons of data need to be crunched by an AI system so it knows what normal shipping patterns look like.
“There’s two big things going on when it comes to data, computing and networks within the combatant commands such as CENTCOM,” he said. “The first one is getting hold of data. And the next one is, having computers which can understand and draw conclusions from that data.”
In the example of monitoring shipments around Iran, Ms. Moore said the goal is to get AI to the point where it understands the “patterns of life” in that area of the world, so the U.S. can understand when those patterns are broken in ways that might threaten U.S. national security. She described it as a “crawl, walk, run” effort that will make U.S. military decisions sharper and faster.
“The crawl is, do I see anything at all? Do I have a sensor that can take a picture?” she said. “The walk is, can I tell what is in the picture? And then the run is, do I understand the context of what is in the picture? Do I know where it came from, do I know where it’s going and do I know if that’s normal.”
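The “patterns of life” approach described above amounts to a statistical anomaly check: establish a baseline of normal behavior, then flag observations that deviate sharply from it. As a rough illustration only, here is a minimal sketch of that idea in Python, using invented transit-time numbers and a simple standard-deviation threshold (a real system would use far richer features and models):

```python
from statistics import mean, stdev

# Hypothetical transit times (in hours) for a shipping lane.
# These values are invented purely for illustration.
normal_transits = [52, 50, 55, 53, 51, 54, 52, 50, 53, 51]

def flag_anomalies(baseline, observations, threshold=3.0):
    """Flag observations more than `threshold` standard deviations
    from the baseline mean -- a bare-bones 'pattern of life' check."""
    mu = mean(baseline)
    sigma = stdev(baseline)
    return [x for x in observations if abs(x - mu) / sigma > threshold]

# A 52-hour transit fits the learned pattern; an 80-hour one does not.
suspicious = flag_anomalies(normal_transits, [52, 80, 51])
print(suspicious)  # → [80]
```

The point of the sketch is the workflow, not the math: the baseline plays the role of the historical shipping data Dr. Moore says must be crunched first, and the flagged outliers are the small subset of vessels handed to human analysts for review.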
Similar efforts will likely be made in areas such as air traffic, so the U.S. can learn what traffic is “normal,” flag traffic that falls outside the norm, and interpret threatening patterns in the air more quickly than human analysts could.
Dr. Moore said his role is to help CENTCOM incorporate current AI capabilities into the military in these ways. He said some commercial products are able to predict things like how much inventory should be shipped to a certain store.
“The technology got very good at spotting and predicting, even very minor fluctuations,” he said. “The thing that I hope to be able to do in this role… is to see if I can help make sure that some of these very clever methods being used in the commercial sector, we can also apply them to help with some of these big public facing issues in the military sector.”
Dr. Moore said he also believes the U.S. is in a race to create a more responsible AI system compared to those being developed by U.S. adversaries. He said some countries are getting “scarily good” at using AI to conduct illegal surveillance, and said, “We have to be ready to counter these kinds of aggressive surveillance techniques against the United States.”
Ms. Moore said the U.S. hopes to lead the way in developing responsible AI applications. “That is something that we are able to positively influence, hopefully by demonstrating our own responsible use of it,” she said.