The Language of Deception: Weaponizing Next Generation AI by Justin Hutchens.
He holds a master's degree in Computer Security Management from Strayer University and is an Executive MBA candidate at Texas A&M University.
Justin is a former Director of Cybersecurity Implementation & Operations at PwC and a former cybersecurity instructor at The University of Texas at Austin. Today he is a principal at Trace3.
Justin has written an insightful book. In fact, it should be recommended reading for everyone managing AI projects, their teams, and, of course, cybersecurity professionals.
Remember how the hype cycle for AI peaked after OpenAI introduced ChatGPT in November 2022? The explosive growth of ChatGPT's adoption rate overshadowed very basic security flaws within AI systems. Justin reveals how AI services are not being secured by the very companies that are heavily promoting them. This allows exploits to thrive while all the attention remains focused on AI's hype cycle.
He provides a history of technology and social engineering, and addresses consciousness, sentience, and understanding. Perhaps this helps convey the scale of GPT-3 (not ChatGPT), which OpenAI trained on more than 45 TB of text. That is roughly 200 million textbooks, at 200 pages per book and 1,200 characters per page. The most important insight is that this level of complexity leads to unpredictability. Let that sink in.
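That comparison holds up to a quick sanity check. A minimal back-of-envelope sketch in Python, assuming (my assumption, not the book's) plain text stored at roughly one byte per character:

```python
# Back-of-envelope check of the "45 TB = ~200 million textbooks" comparison.
chars_per_page = 1_200
pages_per_book = 200
chars_per_book = chars_per_page * pages_per_book      # 240,000 characters per textbook

corpus_bytes = 45 * 10**12                            # 45 TB of text, ~1 byte per character
equivalent_books = corpus_bytes / chars_per_book

print(f"{equivalent_books / 1e6:.1f} million books")  # ~187.5 million, i.e. "roughly 200 million"
```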
AI as a weapon
We cannot deny that America's adversaries have already weaponized AI. Yet we are missing this message. Why? Because Wall Street is calling and Madison Avenue is helping everyone look the other way. Justin shows how AI is marketed as a single source of truth, a promise many users will simply not question. This opens the door for our adversaries to exploit via social engineering.
Justin elaborates on vulnerabilities within Large Language Models (LLMs) and how Command and Control (C2) malware can be embedded into them. In fact, consider LLMs acting as autonomous C2 agents. This is where AI goes off the rails. These vulnerabilities echo those explored in Not with a Bug, But with a Sticker, which reveals fundamental flaws of AI systems.
Since the release of ChatGPT in late 2022, our globally connected world has been observing yet another rapid innovation. The race to capitalize on this market space is fiercely competitive, yet it proceeds without understanding the impact and consequences. It is no longer enough to roll out a promising technology service and not expect criminals and nation-states to quickly find vulnerabilities. We are certainly just scratching the surface.
In conclusion, The Language of Deception demonstrates how LLMs are already being used for social manipulation, disinformation, deception, and fraud. Justin furthers the discussion of how OpenAI, Microsoft, Google, and other leading AI service providers are not providing sufficient controls and protections. With the global adoption of AI services continuing to accelerate, we cannot sidestep security within LLMs.