Categories: Artificial Intelligence, Education, Reading

Latest Read: Not with a Bug, But with a Sticker

Not with a Bug, But with a Sticker: Attacks on Machine Learning Systems and What to Do about Them by Ram Shankar Siva Kumar and Hyrum Anderson.


Ram Shankar holds a Master's degree in Electrical and Computer Engineering from Carnegie Mellon University. For over ten years he has worked as a "Data Cowboy" at Microsoft on the Azure Security data science team.

Hyrum holds a PhD in Electrical Engineering from the University of Washington. He is a former Principal Architect at Microsoft and co-founder of CAMLIS. Today Hyrum is CTO of Robust Intelligence.

Ram Shankar and Hyrum introduce adversarial machine learning, the field dedicated to cyber attacks on AI systems. While this is a most timely book, it arguably should have been published twenty years ago so its impact could already be well understood.

Perhaps it will clash with Wall Street's and Silicon Valley's desire to overcome slumping PC sales and make AI the spark that drives billions in revenue for a new class of technology companies while reinforcing the market caps of established ones. So let the voices of Ram Shankar and Hyrum, who attack AI systems for a living, provide the deep insights. In fact, most of us never consider the civil liberty implications of attacks on AI systems.

AI systems, and machine learning systems in particular, are certainly vulnerable to cyberattacks. As noted above, had this book been released twenty years ago, the impact on national security, business, education, and government would be far better understood, and mitigations could already be in place to block those attacks. This moves the technical, legal, business, and national security implications front and center now that the ChatGPT gold rush is driving non-stop AI integration. Think I am making this up? Read this article or this one.

Is AI driven by Madison Avenue?

We are, in fact, losing the ability to correctly deploy AI services (systems, programs, applications, etc.) because of Madison Avenue. AI, and more specifically ML, has become an integrated component of corporate America's product strategies. From Madison Avenue to every foreign nation, and from the Global 500 to emerging organizations, everyone is keenly aware that a secure AI service brings an almost unbeatable advantage:

Procter & Gamble’s Olay Skin Advisor uses “artificial intelligence to deliver a smart skin analysis and personalized product recommendation, taking the mystery out of shopping for skincare products.”


You must realize we have already lost the race on accuracy and security. With Madison Avenue promoting ML's amazing speed and insights as a revenue driver in any marketplace, sales of ML services have simply skyrocketed. And yet, for countries deemed hostile to the United States, adversarial machine learning provides a new ability (threat vector) to disrupt the US economy, its society via social media, and the institutions they deem hostile.

In AI we trust?

So what could go wrong? Plenty. Ivan Evtimov, a PhD student at the University of Washington, pitched the idea of common objects disrupting traffic-sign recognition. This became a project with AI researchers at three other universities. By carefully placing graffiti-style stickers on a stop sign at an intersection in Washington state near Mount Rainier, the researchers caused self-driving AI systems to fail to stop at the intersection. There are many clips on YouTube showing these AI-based autonomous driving failures, including Tesla's Autopilot:

Wham Baam Teslacam | TESLACAM STORIES #72

Yes, it can get even worse. Need more convincing? Then view this footage. Our adversaries would seemingly have a field day exploiting AI vulnerabilities with simple objects. These failures are also a focus of Think for Yourself by Vikram Mansharamani.
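To make the idea concrete, here is a minimal sketch of a digital adversarial example using the Fast Gradient Sign Method (FGSM). This is not the physical sticker technique the researchers used; it is only the simplest illustration of the same principle, that a tiny, deliberately chosen perturbation can flip a classifier's decision. The pretrained resnet18 model is an arbitrary choice, stop_sign.jpg is a placeholder path, and input normalization is omitted for brevity.

```python
# A minimal FGSM sketch: tiny gradient-guided changes to an image can
# change a classifier's prediction. Illustrative only, not the book's
# physical sticker attack.
import torch
import torch.nn.functional as F
from torchvision import models, transforms
from PIL import Image

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT).eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),  # pixel values in [0, 1]
])

# "stop_sign.jpg" is a placeholder image path.
x = preprocess(Image.open("stop_sign.jpg").convert("RGB")).unsqueeze(0)
x.requires_grad_(True)

logits = model(x)
label = logits.argmax(dim=1)            # the model's current prediction
loss = F.cross_entropy(logits, label)
loss.backward()

epsilon = 0.03                          # perturbation budget
x_adv = (x + epsilon * x.grad.sign()).clamp(0, 1).detach()

print("before:", model(x).argmax(dim=1).item(),
      "after:", model(x_adv).argmax(dim=1).item())
```

Even with a perturbation this small, which is barely visible to a human, the "after" prediction often differs from the "before" prediction, which is exactly the fragility the stop-sign experiment demonstrated in the physical world.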

So, is your organization ready?

In the chapter "Why Is Defending Against Adversarial Attacks Hard?", Ram Shankar and Hyrum perhaps reveal the most important outcome of their research. Managers, regardless of marketplace, must shift from a 'promotion-focused' view of AI services to a more relevant 'prevention-focused' approach. Their key point is that cyber criminals or nation states do not need to know much about an ML system: they can deliver malware, or simply prompt the model (prompt engineering) with a series of inputs, harvest its responses as a training set, and build a copy that closely mimics the victim's ML model.
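As a rough illustration of that model-stealing idea (not the authors' own code), the sketch below shows a black-box extraction attack in Python with scikit-learn. The attacker only calls a hypothetical query_victim_model() endpoint, harvests its answers as labels, and fits a local surrogate that mimics the victim; the victim model here is simulated locally purely so the example runs end to end.

```python
# Model extraction sketch: query a black-box model, collect its answers,
# and train a surrogate that behaves like it. Names such as
# query_victim_model() are hypothetical stand-ins for a real prediction API.
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier

rng = np.random.default_rng(0)

# Simulated victim: in reality this would be a remote, proprietary API.
_victim = MLPClassifier(hidden_layer_sizes=(32,), max_iter=500, random_state=0)
X_secret = rng.normal(size=(500, 4))
y_secret = (X_secret[:, 0] + X_secret[:, 1] > 0).astype(int)
_victim.fit(X_secret, y_secret)

def query_victim_model(inputs: np.ndarray) -> np.ndarray:
    """Black-box access: the attacker sees only predictions, not the model."""
    return _victim.predict(inputs)

# 1. The attacker generates probe inputs with no knowledge of the training data.
probes = rng.normal(size=(2000, 4))
# 2. Harvests the victim's answers as labels.
stolen_labels = query_victim_model(probes)
# 3. Trains a surrogate that mimics the victim's behaviour.
surrogate = DecisionTreeClassifier(max_depth=5).fit(probes, stolen_labels)

test = rng.normal(size=(200, 4))
agreement = (surrogate.predict(test) == query_victim_model(test)).mean()
print(f"surrogate agrees with victim on {agreement:.0%} of fresh inputs")
```

In practice an attacker would craft the probe inputs far more carefully, but even random queries can recover a surprisingly faithful copy of a simple model, which is why the authors push managers toward a prevention-focused posture.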

In conclusion, Not With A Bug, But With A Sticker is simply a must-read. The key audience is certainly not just your IT division. Instead, this belongs with your organization's lawyers and risk management director. Accordingly, loop in your CIO and CTO, along with the President/CEO and Finance VP, so they fully understand why Madison Avenue cannot be allowed to drive your organization's AI services.


Robust Intelligence | Not with a Bug, But with a Sticker
The Berkman Klein Center for Internet & Society | Not With A Bug, But With A Sticker
Stanford HAI | A Few Useful Lessons about AI Red Teaming