
AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference - A Book Review


Artificial Intelligence (AI) has become one of the most talked-about technologies of our time, sparking both utopian promises and dystopian fears. On one end of the spectrum, alarmists predict a future where AI surpasses human intelligence and threatens our very existence. On the other, sceptics dismiss the idea that AI could ever replace human ingenuity, seeing it as just another over-hyped technology.



Illustration by The Geostrata


The reality, as Arvind Narayanan and Sayash Kapoor argue in their book AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference, lies somewhere in between.


The book offers a grounded and critical examination of AI’s capabilities, challenging the misleading narratives that have been shaped by corporate interests, sensationalist media, and overzealous researchers. Through sharp analysis and concrete examples, the authors dismantle the notion that AI is a silver bullet capable of solving all of humanity’s problems.


Instead, they present AI as a tool—one with strengths, limitations, and significant risks, particularly in areas such as content moderation, bias, and regulatory policy.


AI AND THE SNAKE OIL ANALOGY


The authors draw a compelling parallel between today’s AI hype and the snake oil salesmen of 19th-century America. Just as fraudulent elixirs were once sold as cures for every ailment, AI is now marketed as the ultimate solution to complex societal challenges. The book’s cover itself is designed in the style of vintage snake oil advertisements, reinforcing this comparison.


Narayanan and Kapoor argue that corporate-driven AI narratives have created unrealistic expectations, particularly in the commercial sector. Whether it is hardware manufacturers promising limitless computing power or software firms touting the latest generative AI models, the discourse around AI is often exaggerated to fuel investment and public interest. The authors emphasize that while AI is indeed powerful, it is not infallible, nor is it capable of independent reasoning or moral judgment.


UNDERSTANDING AI: PREDICTIVE, GENERATIVE, AND GENERAL AI


A key contribution of the book is its clear categorization of AI into three broad types:

  1. Predictive AI – The most widely used form of AI, focused on analyzing patterns in data to make forecasts. Examples include recommendation algorithms on streaming platforms, fraud detection systems, and financial forecasting models.

  2. Generative AI – AI models designed to create content, such as text, images, and videos. Tools like ChatGPT, Midjourney, and Stable Diffusion fall into this category.

  3. General AI – A theoretical concept referring to AI that matches or surpasses human intelligence across all domains. Unlike the other two, General AI remains speculative and has not yet been realized.


By distinguishing between these types, the authors clarify the boundaries of AI’s current capabilities and debunk the myth that AI is anywhere close to achieving human-like reasoning or autonomy.


THE PROBLEM OF AI IN CONTENT MODERATION


One of the book’s central themes is the role of AI in content moderation on social media platforms. The authors argue that AI-driven moderation systems, despite their sophistication, are fundamentally inadequate at addressing the nuances of human communication. They outline several key challenges:


  • AI Struggles with Context and Nuance – Determining whether content is harmful requires an understanding of cultural, political, and linguistic context—something AI still fails to grasp effectively.

  • Cultural Bias in Moderation – Western-designed AI models often fail to recognize harmful content in non-Western languages or contexts, leading to inconsistent enforcement.

  • The Rapid Evolution of Online Behavior – AI moderation systems are reactive rather than proactive. By the time they learn to detect certain harmful trends, new ones emerge, rendering previous models obsolete.


The authors highlight real-world examples to illustrate these points. For instance, Facebook’s repeated removal of the famous Napalm Girl photograph—a Pulitzer Prize-winning image from the Vietnam War—demonstrates how AI struggles to differentiate between harmful content and historically significant material. Additionally, the book sheds light on the hidden workforce behind content moderation, known as Ghost Work—low-paid workers in developing countries who manually filter out graphic and disturbing content, often under exploitative conditions.


THE LIMITS OF AI REGULATION


Another critical discussion in AI Snake Oil is the effectiveness of AI regulation. While governments and tech companies advocate for stricter AI oversight, the authors caution against over-regulation, which often leads to what they call collateral censorship—overly aggressive content removal to avoid legal repercussions. They argue that policymaking, much like content moderation, is inherently human and cannot be outsourced to AI. AI can assist in regulatory compliance, but it cannot replace human judgment in navigating ethical and legal dilemmas.


The book also challenges the widespread fear of "rogue AI"—the idea that AI will one day become sentient and act against human interests. The authors dismiss this notion as science fiction, using the term criti-hype to describe how some critics exaggerate AI’s risks while simultaneously reinforcing its supposed omnipotence. They argue that the real dangers lie not in AI gaining autonomy, but in its misuse by bad actors—governments, corporations, and individuals who exploit AI for surveillance, misinformation, or unethical business practices.


WHY AI MYTHS PERSIST


So why do AI myths continue to thrive? The authors identify four major sources of AI hype:

  1. Tech Companies – Seeking to attract investors, corporations often exaggerate AI’s capabilities.

  2. Researchers – Some academics overstate their findings to gain media attention and funding.

  3. Journalists – Sensationalist reporting amplifies both AI optimism and AI fear-mongering.

  4. Public Figures – Influential voices, including tech CEOs and policymakers, push misleading narratives to shape public perception.


By dissecting these sources, the book provides a roadmap for separating genuine AI advancements from exaggerated claims.


FINAL THOUGHTS


AI Snake Oil is an essential read for anyone interested in AI’s real-world impact. Narayanan and Kapoor offer a refreshing counterpoint to both AI evangelists and doomsayers, advocating for a balanced and evidence-based approach to understanding artificial intelligence.


The book succeeds in making AI accessible to a general audience without sacrificing depth. It provides a clear-eyed assessment of AI’s capabilities, limitations, and ethical concerns, urging readers to be more critical of the hype surrounding this rapidly evolving field.

Ultimately, AI Snake Oil serves as a crucial reminder: AI is not a magical cure-all, nor is it an existential threat.


It is a tool—one that must be wielded responsibly, with an awareness of its strengths and limitations. Those hoping for a world where AI effortlessly solves all of society’s problems will find little comfort in this book. But for those seeking a more nuanced and realistic understanding of AI’s future, AI Snake Oil is an invaluable guide.



BY NISARG JANI

TEAM GEOSTRATA


