Perhaps the best that can be said for Gary Marcus’s new book sounding the alarm about the dangers of artificial intelligence is that it comes from a good place.
A decorated AI developer, a renowned neuroscientist and psychologist, and a highly successful entrepreneur, Marcus notes at the outset of Taming Silicon Valley, his polemic against current trends in the evolution of machines, that "AI has been good to me. I want it to be good for everybody."
Alas, Marcus’s noble intentions and meticulous analytical thinking succumb far too easily to one-sided and unsupported assertions that, unfortunately and uncharacteristically, evince a lack of nuance, overstate admittedly challenging issues, and misattribute blame. Sadly, this method of argumentation has pervaded much of the contemporary debate over AI at the very moment that supple, fair-minded discourse is most desperately needed.
"Unless we stand up as a society," Marcus proclaims in his introduction, advanced chatbots will cause "the loss of the last shreds of privacy" and "the greater polarization of society," all while sparking "a host of new problems that could easily eclipse all that has come before." This parade of horribles includes power imbalances, unfair elections, supercharged disinformation campaigns, embedded biases, despoliation of our natural environment, even the destruction of democracy.
Yet most, if not all, of these apocalyptic-sounding results are not only overstated but rooted in the humans programming AI, not in the software itself. We are flawed, and so, therefore, is the technology we create. The solution isn’t to scrap or curb these tools but simply to do a better job of building them.
Consider the case of Google’s Gemini, a generative AI model designed to create images and text in response to user prompts, which, shortly after its release early last year, produced a series of nonsensical results. "It absurdly drew ahistorical pictures with black US founding fathers and women on the Apollo 11 mission," Marcus observes, because "nobody actually knows how to make reliable guardrails."
But the problem with Gemini wasn’t that nobody knew how to make guardrails; it was that the guardrails themselves were unsound. Google’s programmers hewed to a set of progressive assumptions about racial, ethnic, and gender diversity that proved risible when taken to their digital extremes. "We got it wrong," Google’s CEO acknowledged; Gemini’s results "have offended our users and shown bias." The company adjusted its human parameters and fixed its technological problem.
A similar problem afflicted the latest iteration of OpenAI’s DALL-E image-generation service, which, according to one user, "refused" to create a picture of black African doctors treating white children. "The problem here is not that DALL-E 3 is trying to be racist," Marcus argued. "It’s that it can’t separate the world from the statistics of its data-set." In this regard, Marcus concedes the key point: As long as OpenAI cleans up its data-set, which it has since done, the "bias" problem vanishes.
Then, too, Marcus highlights the problem of "hallucination," or the tendency of large language models to occasionally weave entirely false results out of whole cloth. AI developers have made significant recent strides toward stamping out these fabrications, but, in the meantime, users have learned not to take all chatbot output at face value. "Bing is powered by AI," Microsoft states in a disclaimer, "so surprises and mistakes are possible."
Or take the dissemination of faulty information. "Generative AI systems," Marcus contends, "are the machine guns (or nukes) of disinformation, making disinformation faster, cheaper, and more pitch-perfect." Indeed, deepfakes, cheapfakes, and doctored videos have entered the stream of political campaigns around the world. But Marcus adduces little evidence that these digital manipulations are any more pernicious than the very human-made media distortions that have infected our politics for generations, including, most recently and egregiously, the suppression of the Hunter Biden laptop story.
Marcus also denigrates the output of generative AI as "regurgitation" of existing copyrighted content and warns of the potential for widespread intellectual property violations. But this argument both exaggerates and undersells what chatbots are actually doing: With rare exceptions, far from merely vomiting up what they find while scraping the internet, programs like OpenAI’s ChatGPT produce new and different works in the style of existing ones, much as human authors draw inspiration from their predecessors.
In addition, Marcus laments that "the so-called alignment problem—how to ensure that machines will behave in ways that respect human values—remains unsolved." This is a common plaint among the most apocalyptic of anti-AI activists, and yet developers, ethicists, and academics alike have made great advances in recent years in forging exactly such an alignment.
Along the way, Marcus slams the technology titans—Microsoft, Meta, Google—currently bestriding the AI world like colossi for their efforts to shape the regulatory regimes beginning to crystallize around computing. A strong proponent of the European Union’s AI Act, which entered into force last year (and threatens to stifle innovation), he chides Silicon Valley for seeking "voluntary rather than mandatory commitments." But he’s wrong on both counts: The AI giants largely seek regulation in order to lock in their primacy and prevent smaller competitors from challenging them (later, he grudgingly admits that OpenAI’s Sam Altman does appear to want his company to be regulated).
More fundamentally, though, society as a whole should prefer rigorous industry-wide guidelines adopted by machine-learning companies varied in size and character to a one-size-fits-all, top-down, government-imposed mandate. Marcus insists that merely "optional" standards won’t work because "the fox can’t guard the henhouse," instead demanding "multiple layers of oversight" and entirely new federal and even global agencies to govern the field. But as I explain in my forthcoming book on AI policy, industry associations can establish independent and enforceable guidelines that benefit developers and users alike far better than national or international bodies can.
"AI is almost always harder than people think," Marcus astutely notes midway through the book. Nobody appreciates this more than Marcus, whose extensive and successful work with machine learning has cemented his elevated status in the field. If only he’d applied this level of subtlety and epistemic humility to his analysis of AI policy, an issue that’s as critical as it is complex.
Taming Silicon Valley: How We Can Ensure That AI Works For Us
by Gary Marcus
MIT Press, 235 pp., $18.95
Michael M. Rosen is an attorney and writer in Israel, a nonresident senior fellow at the American Enterprise Institute, and author of the forthcoming Like Silicon From Clay: What Ancient Jewish Wisdom Can Teach Us About AI.