Humanizing Automation

Review of

New Laws of Robotics: Defending Human Expertise in the Age of AI
by Frank Pasquale

Cambridge, MA: Harvard University Press, 2020, 344 pp.

In a 1942 short story, Isaac Asimov, the legendary science and science fiction author, offered three cleverly constructed, interdependent laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

These laws inspired numerous science fiction stories and motivated technological visionaries to work toward a future in which robots played a large role in human affairs. Now Frank Pasquale, a professor at Brooklyn Law School and expert on artificial intelligence (AI), algorithms, and machine learning, responds with New Laws of Robotics, intended to persuade researchers, business leaders, designers, and government agency staffers to stop seeing automation exclusively through the lenses of technology and the tech industry.

New Laws of Robotics is a natural second act to Pasquale’s 2015 book, The Black Box Society: The Secret Algorithms That Control Money and Information, which is required reading in many law school and computer science courses. That book garnered attention from many other scholars, including Shoshana Zuboff, whose remarkable 2019 book, The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power, credits Pasquale’s influence on her thinking. The Black Box Society details how secret databases and little-known applications of AI algorithms have had harmful effects on finance, business, education, and politics. These opaque practices violate personal privacy, lower credit ratings, and produce unfair or biased decisions on parole, mortgage, and job applications.

Pasquale’s new rules call for robotic and artificial intelligence systems to:

1. Complement professionals, not replace them.

2. Not counterfeit humanity.

3. Not intensify zero-sum arms races.

4. Always indicate the identity of their creator(s), controller(s), and owner(s).

Pasquale’s goal with his four new laws is admirably human-centered: “The new laws of robotics can help us imagine futures where AI respects and helps preserve human dignity and identity.” As researchers, engineers, and programmers race to develop ever more powerful robotic and AI systems, policymakers and the public are appropriately concerned about the effect of these technologies on society. In arguing for the primacy of humanity in an increasingly automated world, Pasquale rejects the notion conceived in science fiction (and pursued by many designers) that robotic technology will inevitably transform human space into a world of and for machines.

Pasquale’s first law takes on the widely propagated notion that every profession is vulnerable to automation through robots and AI. Even highly specialized and well-compensated professionals are not immune: studies have shown, for example, that computers can be more accurate than radiologists in spotting breast cancers. This kind of comparison led a top AI researcher, Geoffrey Hinton, to argue in 2016 that medical schools “should stop training radiologists now. It’s just completely obvious that within five years deep learning [a central AI strategy] is going to do better than radiologists.” But Pasquale has a rule that Hinton would do well to heed: “Robotic systems and AI should complement professionals, not replace them.” In assessing illness and suggesting treatments, radiologists do much more than search for tumors. Doctors are greatly empowered by powerful, well-designed AI tools, which will become part of their practice, along with x-rays, electrocardiograms, blood tests, and other existing tools. “We should not be dismantling or disabling professions,” Pasquale writes, “as all too many advocates of disruptive innovation aspire to do.”

Pasquale’s second new law directly attacks the work of researchers and entrepreneurs developing humanlike social robots: “Robotic systems and AI should not counterfeit humanity.” The claim that human-sized or smaller bipedal robots, with expressive faces and dexterous arms, could be used to care for older adults, teach children, or guide people through busy airports seems misguided. Despite being prominent in science fiction—and the subject of viral videos from robot makers such as Boston Dynamics—humanoid devices have been consistent failures in the marketplace. This is in part because mimicking human movement isn’t very effective or efficient in practice (just as mimicking the flapping of bird wings isn’t an effective approach for airplane design). Pasquale argues that more effective design principles could motivate innovators to push beyond the limiting notion of human imitation to create more useful products and services.

The third law—“Robotic systems and AI should not intensify zero-sum arms races”—seeks to limit military contests among nations that are eager to build lethal autonomous weapons systems, including the United States, Russia, China, Israel, and the United Kingdom. This law aligns with efforts by more than two dozen countries at the United Nations to ban certain weapons that are able to make deadly decisions without human controls—a ban that both the United States and Russia have sought to block. In addition, Pasquale decries the race by governments and police departments to increase surveillance through facial recognition, to track citizens’ movements by smartphone location data, and to use what the Chinese government calls “social credit scores” to assess and restrict human behavior.

Finally, the fourth law demands transparency for robots and algorithms: “Robotic systems and AI must always indicate the identity of their creator(s), controller(s), and owner(s).” Pasquale extends this law to clarify that only humans are responsible for the machines that they build: self-driving cars, for instance, do not absolve the vehicles’ creators or owners from responsibility or liability for failures. The onus is on humans to ensure that machines don’t produce dangerous, harmful, or otherwise problematic outcomes.

In a way, Pasquale’s impassioned pleas in his two books on behalf of the endangered human world make him a mythic hero. He’s on a quest, fighting lazy thinking and influential tech behemoths with strong prose: “The core problem is a blind faith in the AI behind algorithmic solutions and a resulting evasion of responsibility by the tech leaders.” Pasquale is sometimes harsh or angry, but his passion is engaging. He possesses enough literary talent to take on two dangerous enemies. The first consists of zombie ideas: the belief that robotic and AI systems will replace human expertise and performance in most activities. Pasquale has his work cut out for him; this is a pervasive and persistent belief with a lot of financial backing.

If that weren’t bad enough, the second enemy consists of the many-headed monster corporations whose past triumphs and staggering resources make them seem unstoppable. Pasquale rallies human dignity, identity, values, and responsibility against these tech Goliaths, but these principles are insubstantial weapons to use against some of the most profitable and powerful companies in human history.

Pasquale puts his poetic passion and impressive intellect to work to combat these enemies. He also calls forth potent cultural muses, including myths such as the story of Daedalus and Icarus and philosopher kings such as Ludwig Wittgenstein. To reach younger audiences, he offers thoughtful critiques of Hollywood sources such as Ex Machina and Her. Even HAL from 2001: A Space Odyssey makes a cameo appearance. Literary icons such as E. M. Forster (whose short story “The Machine Stops” envisions a world in which isolated, subterranean humans are completely dependent on an omnipotent Machine) and Ian McEwan (his 2019 novel, Machines Like Me, features a very humanlike robot) are called on by Pasquale for support, along with artists, musicians, and playwrights.

Pasquale also points to a diverse set of technologies that could pose genuine dangers, such as lethal autonomous weapons, stock market manipulations, deceptive chatbots, and military and police drones. In the face of all these threats, Pasquale speaks up for the value of human experts: “Humane automation will require the strengthening of existing communities of expertise and the creation of new ones.”

The closing sections are a good enough call to action, but I was hoping for more than soft phrases such as “we should be wary of entrusting AI with the evaluation of humans.” I think the battle to defeat the enemies that Pasquale describes will be won only with specific, forceful actions by public and private actors. Which government agencies can use potent regulatory authority, with deadlines and penalties, to rein in the tech giants? How can investigative journalists uncover hidden agendas and stir sufficient public outrage? What can insurance companies, credit agencies, auditing firms, professional societies, nongovernmental organizations, and the National Academies be doing? How can universities change education so that the next generation of software engineers thinks it natural to take responsibility for ethical issues and human-centered design?

Pasquale’s sympathy for literature and art might also inspire new ways of thinking. This could include artists creating human-centered images—people collaborating over ubiquitous networks, for example—that would replace clichéd images of robot hands reaching out to human hands or humanlike robots with electronic brains. New narratives may emerge from writers, poets, and journalists who create new terms, metaphors, and stories that celebrate human dignity and responsibility, collaboration between technologists and social scientists, and worker participation in technology decisionmaking.

I was inspired by the innovative, richly supported, and poetic descriptions in New Laws of Robotics. Far from being the product of a natural language processing algorithm, this book could have been written only by a creative, passionate, persistent person. Frank Pasquale has done much to raise awareness of how important it is to value expertise, appreciate human abilities, avoid technological arms races, and take responsibility for the technologies humans create. But now it is up to others to take the positive steps needed to realize these goals.

Cite this Article

Shneiderman, Ben. “Humanizing Automation.” Issues in Science and Technology (April 19, 2021).