
The AI Industry's Year of Ethical Reckoning

In 2018, distant threats of killer robots and mass unemployment gave way to concerns about the more immediate ethical and social impact of AI. Tech leaders were forced to respond.

January 31, 2019

Ever since deep neural networks won ImageNet, the world's most important image-recognition competition, in 2012, everyone has been excited about what artificial intelligence could unleash. But in the race to develop new AI techniques and applications, the possible negative impacts took a backseat.

We're now seeing a shift toward greater awareness of AI ethics. In 2018, AI developers became more conscious of the possible ramifications of their creations. Many engineers, researchers, and developers made it clear that they would not build technologies that harm the lives of innocent people, and they held their companies accountable.

Facial Recognition in Law Enforcement

In the past, creating facial-recognition applications was arduous, resource-intensive, and error-prone. But with advances in computer vision—the subset of AI that allows computers to recognize the content of images and video—creating facial-recognition applications became much easier and within everyone's reach.

Large tech companies such as Microsoft, Amazon, and IBM started providing cloud-based services that enabled any developer to integrate facial-recognition technology into their software. This unlocked many new use cases and applications in different domains, such as identity protection and authentication, smart home security, and retail. But privacy rights activists voiced concern about the potential for misuse.
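To see how low the barrier to entry has become, consider what it takes to match a face against a reference photo using one of these cloud services. The sketch below uses Amazon Rekognition through the official boto3 SDK; the file names, region, and similarity threshold are illustrative assumptions, not details of any real deployment.

```python
# Minimal sketch: matching a face in a captured frame against a
# reference photo with Amazon Rekognition via the boto3 SDK.
# Assumes AWS credentials are configured; file names are hypothetical.
import boto3

client = boto3.client("rekognition", region_name="us-east-1")

with open("reference.jpg", "rb") as ref, open("camera_frame.jpg", "rb") as frame:
    response = client.compare_faces(
        SourceImage={"Bytes": ref.read()},
        TargetImage={"Bytes": frame.read()},
        SimilarityThreshold=80,  # report only matches above 80% similarity
    )

for match in response["FaceMatches"]:
    box = match["Face"]["BoundingBox"]
    print(f"Possible match: {match['Similarity']:.1f}% similar at {box}")
```

A few lines of code and a pay-as-you-go account replace what once required a dedicated research team, which is precisely why privacy advocates worried about who would wield the technology.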

In May 2018, the American Civil Liberties Union revealed that Amazon was marketing Rekognition, a real-time video-analytics technology, to law enforcement and government agencies. According to the ACLU, police in at least three states were using Rekognition for facial recognition on surveillance video feeds.

"With Rekognition, a government can now build a system to automate the identification and tracking of anyone. If police body cameras, for example, were outfitted with facial recognition, devices intended for officer transparency and accountability would further transform into surveillance machines aimed at the public," ACLU warned. "By automating mass surveillance, facial-recognition systems like Rekognition threaten this freedom, posing a particular threat to communities already unjustly targeted in the current political climate. Once powerful surveillance systems like these are built and deployed, the harm will be extremely difficult to undo."

The ACLU's concerns were echoed by Amazon employees, who in June wrote a letter to Jeff Bezos, the company's CEO, and demanded that he stop selling Rekognition to law enforcement. "Our company should not be in the surveillance business; we should not be in the policing business; we should not be in the business of supporting those who monitor and oppress marginalized populations," the letter read.

In October, an anonymous Amazon staffer disclosed that at least 450 employees had signed another letter that called on Bezos and other executives to stop selling Rekognition to police. "We cannot profit from a subset of powerful customers at the expense of our communities; we cannot avert our eyes from the human cost of our business. We will not silently build technology to oppress and kill people, whether in our country or in others," it said.

The Fallout of Google's Military AI Project

While Amazon was dealing with this internal backlash, Google was experiencing similar struggles over a contract to develop AI for the US military, dubbed Project Maven.

Google was reportedly helping the Defense Department develop computer-vision technology that would process drone video footage. The amount of video footage recorded by drones every day was too much for human analysts to review, and the Pentagon wanted to automate part of the process.

Acknowledging the controversial nature of the work, a Google spokesperson said the company was only providing the Defense Department with APIs for TensorFlow, its open-source machine-learning platform, to detect objects in video feeds. Google also stressed that it was developing policies and safeguards to address the ethical aspects of its technology.
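Google did not disclose the specifics of its Project Maven work, but object detection with TensorFlow is a well-documented task. As a rough illustration only, the sketch below runs an off-the-shelf detector from TensorFlow Hub on a single frame; the model URL and input file are assumptions made for the example, not anything tied to the Pentagon contract.

```python
# Generic sketch of TensorFlow object detection on one video frame,
# using a publicly available detector from TensorFlow Hub.
# The model and file name are illustrative, not Project Maven code.
import tensorflow as tf
import tensorflow_hub as hub

detector = hub.load("https://tfhub.dev/tensorflow/ssd_mobilenet_v2/2")

# Decode a frame and add a batch dimension: shape [1, height, width, 3].
frame = tf.io.decode_jpeg(tf.io.read_file("video_frame.jpg"))
result = detector(frame[tf.newaxis, ...])

# The detector returns class ids, confidence scores, and bounding boxes.
scores = result["detection_scores"][0].numpy()
classes = result["detection_classes"][0].numpy()
boxes = result["detection_boxes"][0].numpy()

for score, cls, box in zip(scores, classes, boxes):
    if score > 0.5:  # keep reasonably confident detections only
        print(f"class {int(cls)} at {box.tolist()} (score {score:.2f})")
```

Automating even this first pass, flagging frames that contain vehicles or people, would sharply reduce the volume of footage human analysts had to review.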

But Project Maven did not sit well with Google employees—3,000 of whom, including dozens of engineers, soon signed an open letter to CEO Sundar Pichai that called for the termination of the program.

"We believe that Google should not be in the business of war," the letter read. It asked that the company "draft, publicize and enforce a clear policy stating that neither Google nor its contractors will ever build warfare technology."

The Google employees also warned that their employer was jeopardizing its reputation and its ability to compete for talent in the future. "We cannot outsource the moral responsibility of our technologies to third parties," the Googlers stressed.

Shortly after, a petition signed by 90 academics and researchers called on top Google executives to discontinue work on military technology. The signatories warned that Google's work would set the stage for "automated target recognition and autonomous weapon systems," and that as the technology developed, Google would be just "a short step away from authorizing autonomous drones to kill automatically, without human supervision or meaningful human control."

As tensions grew, several Google employees resigned in protest.

How Tech Leaders Responded

Under pressure, Google announced in June that it would not renew its Project Maven contract with the Department of Defense when it expired in 2019.

In a blog post, CEO Sundar Pichai laid out a set of ethical principles to govern the company's development and sale of AI technology. According to Pichai, the company would from then on take on only projects it deemed to be for the good of society as a whole and would avoid developing AI that reinforces unfair biases or undermines public safety.

Pichai also explicitly stated that his company would not work on technologies that violate human rights norms.

Amazon's Bezos was less fazed by the outrage over Rekognition. "We are going to continue to support the DoD, and I think we should," Bezos said at a tech conference in San Francisco in October. "One of the jobs of senior leadership is to make the right decision, even when it's unpopular."

Bezos also underlined the need for the tech community to support the military. "If big tech companies are going to turn their back on the DoD, this country is going to be in trouble," he said.

Microsoft President Brad Smith, whose company faced criticism over its work with ICE, published a blog post in July in which he called for a measured approach to selling sensitive technology to government agencies. While Smith did not rule out selling facial-recognition services to law enforcement and the military, he stressed the need for better regulation and transparency in the tech sector.

"We have elected representatives in Congress [who] have the tools needed to assess this new technology, with all its ramifications. We benefit from the checks and balances of a Constitution that has seen us from the age of candles to an era of artificial intelligence. As in so many times in the past, we need to ensure that new inventions serve our democratic freedoms pursuant to the rule of law," Smith wrote.

In 2018, the distant threat of killer robots and mass unemployment gave way to immediate concerns about AI's ethical and social impact. In many ways, these developments indicate that the industry is maturing as AI algorithms take on more critical tasks. And as algorithms and automation become more ingrained in our daily lives, more debates will arise.


About Ben Dickson

Ben Dickson is a software engineer and tech blogger. He writes about disruptive tech trends including artificial intelligence, virtual and augmented reality, blockchain, Internet of Things, and cybersecurity. Ben also runs the blog TechTalks. Follow him on Twitter and Facebook.
