Why companies should democratize A.I.

This is the web version of Eye on A.I., Fortune’s weekly newsletter covering artificial intelligence and business. To get it delivered weekly to your in-box, sign up here.

Everyone can become a data scientist.

That’s the somewhat radical view of Alan Jacobson, the chief data and analytics officer at Alteryx, a company that sells data analytics software to many of the Fortune 500.

Jacobson says that while he frequently hears executives complain about being unable to hire people with data science experience, let alone machine-learning skills, these executives are overlooking the talent already sitting inside their own organizations. These businesses could develop that talent if only they invested a bit of time and money in teaching their employees data science skills.

“Most of the data science applied in the business world is within the reach of most knowledge workers,” Jacobson recently told me. “It is a lot easier to teach an accountant some data science than teach a data scientist accounting.”

Although Alteryx markets itself as a software platform for “advanced data analytics,” it can really be thought of as a kind of education software, Jacobson argues. “We can see the upskilling happen,” he says. “When they first start using the product, they are just finding data and preparing data. Seven months later, they are building models that are delivering real value.”

Alteryx, based in Irvine, Calif., offers its customers data analytics courses in different formats, ranging from simple online tutorials to something closer to full university courses. Using the company’s software, people can either deploy pre-built models or construct new ones with relatively light coding, including simple algorithms written in the statistical programming language R or built with Python machine-learning libraries.
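To give a flavor of what that kind of light coding looks like, here is a minimal sketch (not Alteryx’s actual workflow, and using a hypothetical file and column names) of the sort of model an upskilled analyst might build with Python’s widely used scikit-learn library:

```python
# A minimal sketch of "light coding" with a Python machine-learning library.
# Not Alteryx's workflow; the CSV file and column names are hypothetical.
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Load historical records the analyst has already prepared, e.g. customer accounts.
df = pd.read_csv("accounts.csv")

# A handful of familiar business fields as inputs, and a yes/no outcome to predict.
features = ["tenure_months", "monthly_spend", "support_tickets"]
X = df[features]
y = df["churned"]  # 1 if the customer left, 0 otherwise

# Hold out a test set so the analyst can sanity-check the model.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

# A simple, interpretable model: logistic regression.
model = LogisticRegression(max_iter=1000)
model.fit(X_train, y_train)

print("Accuracy on held-out data:", accuracy_score(y_test, model.predict(X_test)))
```

The point is less the particular algorithm than the scale of the task: a few dozen lines of familiar-looking code, arguably within reach of a motivated accountant or marketer.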

Jacobson is a big believer that the best way to get business value from data science and machine learning is to put tools directly in the hands of domain experts. A study Harvard Business Review conducted with Alteryx found that 63% of companies rely on centralized IT and data analytics teams to deliver advanced analytics, and yet almost none of them reported being happy with that arrangement. “You can’t ask the right questions if you don’t actually understand the data,” Jacobson says. “You need to know what you are seeing in order to ask better questions.”

Many companies are afraid to push this powerful technology into their employee ranks because they are too focused on the risks of something going wrong, Jacobson says. He says there are only a handful of business use cases in which a model must be robust enough to make fully automated, mission-critical decisions day in and day out—think a credit scoring model that a bank may use or an algorithm to triage patients in a hospital. It might make sense to ensure those models are built by a central team of data science and machine-learning experts, in close consultation with the domain experts. There may also be a lot of organizational, and even regulatory, oversight of those kinds of algorithms.

But as important as those models are, Jacobson says, they are rare. The vast majority of use cases for advanced analytics, he says, are one-offs: An analysis of why sales in a particular geography have fallen in the past quarter, for instance. These models are designed to give human decision-makers greater insight in a particular moment, not to fully automate the lifeblood of the business, and they are basically disposable. “Once you have an answer, the model is not applicable anymore,” he says.

Teaching domain experts to build their own machine-learning models is also, Jacobson argues, one of the best ways to avoid the pitfalls and ethical issues around deploying A.I. For instance, one problem with many machine-learning models is that they can find spurious correlations in data. Domain experts are more likely to sniff out those nonsensical inferences, he says, than data scientists without any deep knowledge of that particular business area.
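To see how a spurious correlation can creep in, consider this toy illustration (all data and column names here are invented): on a small synthetic sample, an irrelevant feature can pick up weight purely by chance, and it takes someone who knows the business to recognize that the feature is nonsense.

```python
# A toy illustration of a spurious correlation. All data and names are invented,
# and the sample is deliberately small so that chance overlap is plausible.
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 40

# One genuinely predictive signal...
discount_offered = rng.integers(0, 2, n)
# ...and one irrelevant feature with no causal link to the outcome.
office_zip_even = rng.integers(0, 2, n)

# The outcome is driven mostly by the discount, plus a little noise.
renewed = (discount_offered | (rng.random(n) < 0.1)).astype(int)

X = pd.DataFrame({"discount_offered": discount_offered,
                  "office_zip_even": office_zip_even})
model = LogisticRegression().fit(X, renewed)

# On a small sample the model may give the zip-code feature a nonzero weight.
# A domain expert would know immediately that it cannot matter.
print(dict(zip(X.columns, model.coef_[0].round(2))))
```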

Jacobson also says that the concerns many organizations have about a lack of data governance and oversight when everyone is empowered to build their own models are overblown. He sees this as no different from letting people use other kinds of software, like accounting programs or the ERP systems that track supply chains. “At the end of the day, your tax expert is creating the tax filing and no one from the IT department is checking their work,” he says.

What’s more, by using a central software system for building models, such as Alteryx, he says, organizations actually have more insight into what their teams are doing, and more control, than when every employee creates their own spreadsheet in Microsoft Excel and stores it on their desktop hard drive.

As John Lennon and Yoko Ono once sang: power to the people. And with that, here’s the rest of this week’s news in A.I.

Jeremy Kahn
@jeremyakahn
jeremy.kahn@fortune.com

A.I. IN THE NEWS

World Health Organization unveils A.I. principles. The international organization published a long report outlining an ethics and governance framework for A.I. It is organized around six key principles: 1) Humans should have the final say over all medical decisions. 2) A.I. should be used to promote human welfare. 3) The system and its decisions should be understandable to all people using the technology or affected by its decisions. 4) A human must always be accountable for the decisions an A.I. system makes. 5) A.I. should be inclusive. 6) A.I. should also be developed in a way that is sustainable, including trying to make it as efficient as possible to lower the amount of electricity needed to train and run such systems.

Graphcore's A.I.-specific chips lag Nvidia's and Google's in first head-to-head test. MLCommons, an organization set up by researchers from academia and leading technology companies to create standardized benchmark tests for comparing machine-learning hardware and software, released its latest set of results. The news was highly anticipated because it was the first time that British chip startup Graphcore had participated. Graphcore builds A.I.-specific chips it calls intelligence processing units (IPUs), which it says are optimized for machine-learning tasks and, in theory, designed to outperform Nvidia's graphics processing units. Those Nvidia chips remain the industry standard for many A.I. tasks even though they were not designed specifically for machine learning. Graphcore fans will be disappointed by the results. Google's tensor processing units, A.I.-specific chips built to run in Google's own datacenters, were the fastest on many key tests, followed by Nvidia's GPUs. Graphcore's chips were about nine minutes slower than Nvidia's on a common computer vision training benchmark and seven minutes slower on a key natural language processing training task. Graphcore says its chipset costs half of what Nvidia's setup does, so the company still trumpeted its superior price-performance characteristics. You can read more in this story in tech publication The Next Platform, as well as some criticism of the way Graphcore has tried to spin its third-place finish in this blog post for server-focused website STH.

A.I. software that will help companies spot potential bias in their existing A.I. software is gaining traction. That's according to a story in The New York Times. The article, by the Times' A.I. reporter Cade Metz, spotlighted a startup called Parity that sells this bias-detection software and profiled its CEO, Liz O'Sullivan. The use of bias-detection software is an interesting and important trend in how companies try to govern A.I. But the story itself proved controversial because Metz originally failed to mention that Parity was actually founded by Rumman Chowdhury, who had been the responsible A.I. lead at Accenture and is now director of ML ethics, transparency and accountability at Twitter, and that Chowdhury built the bias-detection software Parity sells. After Chowdhury, who is of South Asian descent, complained on Twitter that Metz had "erased her" contribution in his feature in favor of O'Sullivan, who is white, a number of other researchers joined in the criticism of Metz. The New York Times updated the story to make clearer that Chowdhury had founded the startup, but Chowdhury and other researchers said they were dissatisfied with the result.

Zebra Technologies buys warehouse robot company Fetch Robotics. The $290 million deal by Zebra, a maker of bar-code scanners and other devices that businesses use to track inventory through the logistics process, will give the company a stronger foothold in selling complete supply-chain automation services, The Wall Street Journal reported. Fetch makes autonomous robots that help move pallets and boxes of goods around warehouse and factory floors. Fetch had already been working in partnership with Zebra, which had previously invested in the robotics company.

EYE ON A.I. TALENT

Kevin Novak has launched a new $15 million A.I.-focused venture capital fund called Rackhouse Venture Capital, according to a story in TechCrunch. Novak had been an early engineering hire at Uber.

Pony.ai, a major driverless car startup with operations in the U.S. and China, has hired Lawrence Steyn as its chief financial officer, CNBC reported. Steyn had been vice chairman of investment banking at JPMorgan Chase.

TeraRecon, a Durham, N.C., firm that uses A.I. to analyze medical imaging, has appointed Dan McSweeney as its president, according to trade publication HealthImaging. McSweeney had been a senior executive with GE Capital and GE Healthcare.

EYE ON A.I. RESEARCH

Facebook improves its simulated environment for training domestic robots. Last year, the company unveiled and open-sourced a simulated three-dimensional interior environment, called Habitat, that can be used to model the kinds of spaces found in a typical house or small office. The idea is that researchers can train A.I. software in these simulated environments and then transfer those skills to real-world robots, allowing them to complete tasks in a house or office. Now Facebook has upgraded Habitat to make it faster, and the company has also released a set of very detailed, pre-configured environments, including 11 different room layouts and some 92 different objects, such as furniture, kitchen utensils, and books. This allows an A.I. agent not just to navigate the interior space but to perform a number of chore-like tasks, such as setting a table or stocking a fridge, according to a story in tech publication VentureBeat. No word yet on whether Facebook is working on a household robot of its own.

FORTUNE ON A.I.

China’s business ‘ecosystems’ are helping it win the global A.I. race—by François Candelon, Michael G. Jacobides, Stefano Brusoni, and Matthieu Gombeaud

Why the robotaxis of the future must be more than robots—by Fortune editors

BRAIN FOOD

Crash testing your business. As A.I. becomes more ubiquitous and powerful, it will be increasingly important to test and simulate all the ways in which A.I. systems can fail, either on their own or because someone has decided to deliberately attack them. (The attackers could be cybercriminals, fraudsters, or state actors.) In essence, as we hand more control to intelligent software, companies will have to perform a kind of crash testing on larger parts of their business. But how can you do that without actually crashing your business? Well, for one thing, it is going to require much better simulators. And it may not be enough to simply let clever humans play around in the simulator, thinking up all the ways things can go wrong. It will probably be necessary to make attacking the business a kind of adversarial game, in which deep reinforcement learning is used in the simulated environment to discover attacks and failure points that are beyond human imagination.

That is one of the conclusions of a recent report from the think tank the Centre for European Policy Studies (CEPS) and Vectra Networks, a cybersecurity company whose software is designed to defend networks by using machine learning to spot aberrant activity. The report recommends that companies engage in war games to increase the reliability of A.I. systems and of the security measures that protect them from being tricked or misused. It also recommends that regulators mandate that A.I. systems be tested for security and safety, as well as for ethical concerns such as bias.

One organization that is definitely thinking hard about this is the U.S. government. The National Security Agency recently put out a research paper with the MITRE Corporation, the not-for-profit spinout from MIT that does out-there technology development and research for the U.S. government, on creating a highly advanced computer network simulator, called FARLAND (that's short for "Framework for Advanced Reinforcement Learning for Autonomous Network Defense"), that could be used to simulate cyberattacks and train A.I.-based cybersecurity software to defend against them. The key here is the use of reinforcement learning. That's the kind of A.I. training in which an agent learns through trial and error, guided by rewards, rather than by studying historical data. And one benefit is that it can learn to do things that no human has ever conceived of.
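For readers curious what that trial-and-error loop looks like in code, here is a minimal, self-contained sketch of tabular Q-learning, one of the simplest reinforcement-learning methods, on an invented toy "corridor" environment. It is not FARLAND or any vendor's software, just an illustration of an agent improving purely from reward signals:

```python
# A minimal sketch of reinforcement learning: tabular Q-learning on a toy
# "corridor." The environment and reward scheme are invented for illustration.
import random

N_STATES = 6          # corridor positions 0..5; reaching position 5 ends an episode
ACTIONS = [-1, +1]    # step left or step right
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2

# Q[state][action] starts at zero: the agent knows nothing about the corridor.
Q = [[0.0, 0.0] for _ in range(N_STATES)]

def choose_action(state):
    # Explore with probability EPSILON; otherwise exploit, breaking ties randomly.
    if random.random() < EPSILON or Q[state][0] == Q[state][1]:
        return random.randrange(2)
    return 0 if Q[state][0] > Q[state][1] else 1

for episode in range(500):
    state = 0
    while state != N_STATES - 1:
        action = choose_action(state)
        next_state = min(max(state + ACTIONS[action], 0), N_STATES - 1)
        reward = 1.0 if next_state == N_STATES - 1 else 0.0
        # Update the value estimate from this single trial-and-error step.
        Q[state][action] += ALPHA * (reward + GAMMA * max(Q[next_state]) - Q[state][action])
        state = next_state

# After training, the learned policy should prefer stepping right in every state.
print(["right" if Q[s][1] > Q[s][0] else "left" for s in range(N_STATES - 1)])
```

The agent is never told that "right" is the correct direction; it simply tries actions, receives a reward when it stumbles onto the goal, and gradually updates its estimates until the useful behavior emerges.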

Increasingly, it won't just be government that will need to do this. And it won't just be for network security. Companies will need to test their entire operations in these kinds of simulators and with these kinds of A.I.-enabled war games. 
