“Peace is the virtue of civilization; war is its crime. But the sharpest instruments of peace are often forged in the furnace of war.” – Victor Hugo
In 1971, an ominous message began appearing on several computers that made up ARPANET, the forerunner of what we now know as the Internet: “I am the Creeper. Catch me if you can.” The message was the output of a program called Creeper, developed by the well-known programmer Bob Thomas while he was working at BBN Technologies. Though Thomas’s intentions were not malicious, the Creeper program represented the emergence of what we now call computer viruses.
The emergence of Creeper on ARPANET led to the first antivirus software. Although not confirmed, it is believed that Ray Tomlinson, best known for inventing email, developed Reaper, a program designed to remove Creeper from infected machines. The development of this tool, built to track down and remove a malicious program from computers, is often cited as the beginning of the field of cybersecurity. It marked an early recognition of the potential power of cyber attacks and the need for defensive measures.
The need for cybersecurity is not all that surprising, since the cyber realm is merely an abstraction of the natural world. Just as we evolved from fighting with sticks and stones to swords and spears to bombs and planes, so has the war for the cyber realm evolved. It started with the rudimentary Creeper virus, a bold if benign harbinger of digital doom. The arrival of weaponized code necessitated the invention of antivirus solutions such as Reaper, and as attacks grew more complex, so did the defenses. Fast forward to the era of network-based attacks, and the digital battlefield began to take shape. Firewalls replaced massive castle walls, load balancers acted as generals directing resources so that no single point was overwhelmed, and intrusion detection and prevention systems replaced sentinels in watchtowers. This doesn’t mean that these systems are perfect. There’s always the existential fear that the world’s favorite well-intentioned rootkit, known as an EDR solution, may contain a null pointer dereference that acts as a Trojan horse capable of breaking millions of Windows devices.
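To see why a single unchecked pointer can have that effect, consider a minimal, hypothetical C sketch (not any vendor’s actual driver code): a malformed content update leaves a pointer null, and the parsing routine dereferences it without validation. In user space this crashes one process; the same mistake in kernel-mode code takes down the entire machine.

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Hypothetical content-update record an endpoint agent might parse.
 * Illustrative only; field names are invented for this sketch. */
struct update_record {
    char name[32];
    char *pattern;   /* detection pattern; may be missing in a bad update */
};

/* Simulate loading a malformed update: the pattern field is never set. */
static struct update_record *load_bad_update(void) {
    struct update_record *rec = calloc(1, sizeof(*rec));
    if (rec == NULL) {
        return NULL;
    }
    strcpy(rec->name, "bad-content-update");  /* name is present...        */
    rec->pattern = NULL;                      /* ...but the pointer is null */
    return rec;
}

int main(void) {
    struct update_record *rec = load_bad_update();

    /* The missing check: rec->pattern is dereferenced without validation.
     * strlen(NULL) is undefined behavior and crashes here in practice. */
    printf("pattern length: %zu\n", strlen(rec->pattern));

    free(rec);
    return 0;
}
```

Adding a simple `if (rec->pattern == NULL)` guard before the dereference is all it takes to turn a machine-stopping fault into a logged, recoverable error.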
Such a situation would be catastrophic, and even if it were entirely accidental, it would leave us asking what comes next. Enter offensive AI, the most dangerous cyber weapon to date. In 2023, Foster Nethercott published a whitepaper through the SANS Technology Institute detailing how threat actors could exploit ChatGPT with minimal technical ability to create new malware capable of evading traditional security controls. Numerous other articles have explored the use of generative AI to create advanced worms such as Morris II and polymorphic malware such as BlackMamba.
A seemingly contradictory solution to these growing threats is further research into, and development of, more sophisticated offensive AI. Plato’s maxim that necessity is the mother of invention aptly describes today’s cybersecurity, where new threats from AI drive the innovation of more advanced security controls. Though not morally laudable, the continued development of more sophisticated offensive AI tools and techniques is an inevitable necessity. To defend against these threats effectively, we need to understand them, and that requires further development and research.
This approach is based on one simple truth: you can’t protect yourself against threats you don’t understand, and you can’t hope to understand them without developing and researching these new threats. Unfortunately, bad actors are already leveraging offensive AI to invent and deploy new threats. Any attempt to deny this would be misguided and naive. For this reason, the future of cybersecurity depends on the further development of offensive AI.
If you would like to learn more about offensive AI and gain hands-on experience applying it in penetration testing, please join me on September 7th in Las Vegas for the SANS Network Security 2024 workshop, Offensive AI for Social Engineering and Deepfake Development. This workshop will be a great introduction to my new course SEC535: Offensive AI – Attack Tools and Techniques, which will be released in early 2025. The entire event is also a great opportunity to meet some of the leading experts in AI and learn how AI is shaping the future of cybersecurity. More details about the event and a full list of bonus activities can be found here.
Notes: This article was written by Foster Nethercott, a US Marine Corps veteran of Afghanistan with nearly ten years of experience in cybersecurity. Foster owns the security consulting firm Fortisec and is an author with the SANS Technology Institute, where he is currently developing the new course SEC535: Offensive AI – Attack Tools and Techniques.