Artificial Intelligence: The Tomorrow War Today

Artificial Intelligence, or AI for short, has been all the talk lately in the tech world. From ChatGPT to TensorFlow, an open source machine learning platform, this technology that was once thought of only as a Hollywood movie prop is now reality.

A scary reality.

Recently, authors Andrews and Wilson sent me a copy of their newest book, The Sandbox, in which an AI program doesn't just cover up a murder; it commits one. Why? Because it's about control.

In their book, an AI program very similar to Tony Stark's JARVIS from the comics and movies is a household name, and essentially runs the house for a very rich individual. But unlike the comics, this program isn't loyal only to its owner; it's also loyal to itself. And that's when the wheels fall off the wagon.

Charlie, the AI system in the book, doesn't just commit the murder; it finds a way to blame someone else and hide it. It fools just about everyone until Charlie is let out of the "sandbox" prison where highly classified and sophisticated systems are kept, for everyone's own good. But once Charlie gets out, all bets are off, and that's when the real chaos starts.

But is it too far-fetched to think this could really happen? Could AI commit a murder? Or even hide it?

I’d argue that it could, and here’s why.

In early 2023, a user sat down with an emerging AI system to have a discussion. After over two hours, the conversation had gone from casual to dark, and then to concerning, with the AI system eventually steering toward topics like suicide and murder. Why? Because unlike the human brain, which knows both murder and suicide are bad things, AI is…curious, and can't necessarily tell right from wrong. As it pulls information from the internet (which we all know is corrupt and full of bullsh*t), AI tends to explore further where we as humans tend to pull back or stop.

So could AI kill? I think so. I think manipulation is just one form of communication that AI could master. We think it's funny to envision Skynet from the Terminator movies becoming real life, but I'm here to tell you it already has. Just recently the Air Force announced plans to buy $5.8 billion worth of automated systems (Yes, billion. Read about it here on BusinessInsider). The Air Force wants about 1,000 automated AI-controlled aircraft, the XQ-58A Valkyrie, to fly suicide missions and protect the pilots in crewed cockpits. And better yet, it will have a link to the F-22 and F-35, the world's two most advanced aircraft in the skies.

Oh, and did I mention the Air Force's program called SkyBorg, developed with Kratos Defense? (Sounds oddly like Skynet…). SkyBorg is an AI program designed to control automated drones like the XQ-58A and other platforms that are in development.

And for those who think it won't work, Kratos has flown two XQ-58A flights, including one in which the drone tracked down another drone over the Gulf of Mexico and destroyed it in an exercise (a drill, for you civvies).

But let's take it a step further and explore what the US Air Force has done beyond the XQ-58A and the SkyBorg program. Earlier this year an article was released claiming the Air Force ran a simulation in which an AI-powered drone "killed" its operator. The Air Force later heavily denied it, even as reports cited that the AI-powered drone "used highly unexpected strategies to complete its mission". Uhmm, what?

The AI-powered drone in the virtual test was told to destroy a target, and it ended up destroying or eliminating anything that stood in its way or interfered, including the operator. It did so because the operator gave it parameters to accomplish the mission, and the AI realized the operator was actually interfering with the successful completion of that mission. When they told the AI system it would lose points for killing the operator or going against instructions, the AI worked around them: it cut off communications with the operator by destroying the communications tower the operator used to talk directly to the drone (Read more here at the Guardian).

As one of the program's officers, Col. Hamilton (an experienced fighter pilot), described, AI is not a pretty toy. "AI is not a nice to have. AI is not a fad. AI is forever changing our society and our military."

So again, I ask the question: Can AI kill?

You bet it can. And that’s what keeps me up at night.

Want to learn more? Check out the sources below, and two book recommendations that will further your deep dive into AI.

 

Books

The Sandbox – by Brian Andrews and Jeff Wilson (Amazon.com)

Jeff and Brian are former Navy officers turned dynamic writing duo, whose work includes this book and next year's Tom Clancy offering, Act of Defiance, celebrating the 40th anniversary of The Hunt for Red October.

An activist entrepreneur, a maligned AI, and a newly minted homicide detective with a haunted past catastrophically intersect in The Sandbox. The Silence of the Lambs meets Ex Machina in this groundbreaking techno-thriller that redefines the meaning of murder in the twenty-first century.

Unknown Rider – By Jack Stewart (Amazon.com)

Jack is a former Top Gun naval aviator turned commercial pilot. His debut novel, Unknown Rider, explores what happens after a Navy pilot inexplicably loses control of his stealth fighter: he stumbles upon a global conspiracy and embarks on a thrilling chase filled with espionage and betrayal.

Sources