Categories: Artificial Intelligence, Education, Reading

Latest Read: The Language of Deception

The Language of Deception: Weaponizing Next Generation AI by Justin Hutchens.


He holds a master's degree in Computer Security Management from Strayer University and is an Executive MBA candidate at Texas A&M University.

Justin is a former Director of Cybersecurity Implementation & Operations at PwC and a former Cybersecurity Instructor at The University of Texas at Austin. Today he is a principal at Trace3.

Justin has written an insightful book. In fact, it should be recommended reading for everyone managing AI projects, their teams, and, of course, cybersecurity professionals.

Remember how the hype cycle for AI peaked after OpenAI introduced ChatGPT in November 2022? The explosion of ChatGPT's adoption rate overshadowed very basic security flaws within AI systems. Justin reveals how AI services are not being secured by the very companies heavily promoting them. This allows exploits to thrive while all the attention stays focused on AI's hype cycle.

He also provides a history of technology and social engineering, and addresses consciousness, sentience, and understanding. Perhaps this helps explain how GPT-3 (not ChatGPT-3) by OpenAI was trained on more than 45 TB of text. That is roughly 200 million textbooks, at 200 pages per book with 1,200 characters per page. The most important insight is that this level of complexity leads to unpredictability. Let that sink in.
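The textbook comparison above can be sanity-checked with simple arithmetic. This is a rough sketch, assuming one byte per character and decimal terabytes; the numbers are order-of-magnitude estimates, not exact figures from the book:

```python
# Back-of-the-envelope check of the cited training-data scale (~45 TB of text),
# assuming 1 character = 1 byte and 1 TB = 10^12 bytes.
TB = 10**12

corpus_chars = 45 * TB                 # ~45 trillion characters of training text
chars_per_page = 1_200
pages_per_book = 200
chars_per_book = chars_per_page * pages_per_book   # 240,000 characters per book

equivalent_books = corpus_chars / chars_per_book
print(f"{equivalent_books / 1e6:.0f} million textbooks")  # ≈ 188 million
```

About 188 million books by this estimate, consistent with the "roughly 200 million textbooks" figure.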

AI as a weapon

We cannot deny that America's adversaries have already weaponized AI. Yet we are missing this message. Why? Because Wall Street is calling and Madison Avenue is helping everyone look the other way. Justin shows how marketing promises AI as a single source of truth, and many users will simply not question it. This opens the door for adversaries to exploit via social engineering:

People like to believe, whether accurately or not, that their past decisions were guided by well-informed reasoning. If they engaged in a specific action in the past, surely that decision was rational enough that their previous conclusion should suffice to inform their future decision, without further consideration. Using this pattern of thought, people can act instinctively without having to stop and think through each of their decisions. Perhaps we can credit millions of years of natural selection for the prevalence of this trait in human behavior. The process of establishing patterns based on past behaviors is efficient and would certainly be favorable in survival situations, where a person often lacks the time to deliberate on the most optimal course of action. By establishing habits, we instinctively choose to engage in the actions that we have already empirically confirmed to be safe and non-threatening

pg. 52

Justin elaborates on vulnerabilities within Large Language Models (LLMs) and how Command and Control (C2) malware can be embedded into them. In fact, consider LLMs as autonomous C2 agents. This is where AI goes off the rails. These vulnerabilities are also examined in Not with a Bug, But with a Sticker, which reveals fundamental flaws in AI systems.

December 2023 Review

Since the release of ChatGPT in late 2022, our globally connected world is witnessing yet another rapid innovation, and the race to capitalize on this market is fierce. However, it is proceeding without an understanding of the impact and consequences. It is no longer enough to roll out a promising technology service and not expect criminals and nation states to quickly find vulnerabilities. We are certainly just scratching the surface.

In conclusion, The Language of Deception demonstrates how LLMs are already being used for social manipulation, disinformation, deception, and fraud. Justin furthers the discussion regarding OpenAI, Microsoft, Google, and other leading AI service providers who are not providing sufficient controls and protections. With the global adoption of AI services continuing to accelerate, we cannot sidestep security within LLMs.


ITSPmagazine | The Language of Deception: Weaponizing Next Generation AI
ITSPmagazine | Keeping Up With Technology and Societal Impacts of Generative AI
Phillip Wylie | Justin “Hutch” Hutchens: AI’s Impact on Cybersecurity
Wiley | Mastering cybersecurity with industry experts
BarCode | Hutch with Justin “Hutch” Hutchens