One Genius’ Lonely Crusade to Teach a Computer Common Sense

For decades, as the tech world passed him by, Doug Lenat has fed computers millions of rules for daily life. Is this the way to artificial common sense?
Mary Shepherd and Doug Lenat. (Brent Humphries for WIRED)

Over July 4th weekend in 1981, several hundred game nerds gathered at a banquet hall in San Mateo, California. Personal computing was still in its infancy, and the tournament was decidedly low-tech. Each match played out on a rectangular table filled with paper game pieces, and a March Madness-style tournament bracket hung on the wall. The game was called Traveller Trillion Credit Squadron, a role-playing pastime of baroque complexity. Contestants did battle using vast fleets of imaginary warships, each player guided by an equally imaginary trillion-dollar budget and a set of rules that spanned several printed volumes. If they won, they advanced to the next round of war games—until only one fleet remained.

Doug Lenat, then a 29-year-old computer science professor at nearby Stanford University, was among the players. But he didn’t compete alone. He entered the tournament alongside Eurisko, the artificially intelligent system he built as part of his academic research. Eurisko ran on dozens of machines inside Xerox PARC—the computer research lab just down the road from Stanford that gave rise to the graphical user interface, the laser printer, and so many other technologies that would come to define the future of computing. That year, Lenat taught Eurisko to play Traveller.

Lenat fed the massive Traveller rulebook into the system and asked it to find the best way of winning. Each night, after several hours of trial and error, Eurisko would spit out a few strategies—some interesting, some impractical, some ridiculous. At one point, Lenat remembers, it suggested he could win the game by changing the rules. But each morning, Lenat would rejigger the system, pushing Eurisko away from the ridiculous and toward the practical. In other words, he would lend the machine a little common sense.

With Lenat providing wisdom from above, Eurisko eventually settled on a strategy that was both unexpected and powerful. Rather than spend that trillion-dollar budget on very large, very mobile, and very well protected warships, it suggested Lenat build hundreds of tiny ships that carried ample firepower but didn't move much and weren't well protected. At the banquet hall in San Mateo, he put the strategy into action, and it worked. Opponents could easily sink his ships, but they couldn't sink them all. Eurisko advanced through the tournament bracket and won.

It was an early triumph for artificial intelligence—a precursor to systems that would later beat the top humans at chess, Scrabble, and Jeopardy. And it wasn't a fluke. The next year, tournament organizers changed the Traveller rulebook. They closed Eurisko’s loophole, ensuring that fleets couldn't win unless they were extremely mobile. In other words, they had to move. But Lenat fed Eurisko the new rules, and with a bit more of his common-sense coaching, it won again. This time, the organizers threatened to disband the tournament if Eurisko kept playing. So Lenat quit.

The experience taught him a lesson: If computers were to reach true artificial intelligence—tackling big problems entirely on their own—they needed more than just the raw speed that let them analyze more possibilities than the average human. They needed something beyond brute force. They needed a common sense of their own. So he decided to build it.

Thirty-five years later, Lenat is having lunch at his favorite restaurant, Bartlett’s, on the outskirts of Austin, Texas. He's wearing a multicolored, short-sleeved button-down tucked into a pair of baggy blue jeans. As he talks, he pieces his words together in a way that's both immediate and deeply analytical—a kind of Buddha for artificial intelligence. At lunch, he’s joined by his wife, Mary Shepherd—who gives off the same shamelessly-unhip-but-decidedly-intellectual vibe—and two other AI researchers, Michael Witbrock and Keith Goolsbey. With help from this trio and a small team of additional researchers that includes as many trained philosophers as computer scientists, Lenat has spent the last three-and-a-half decades building what he calls a “common sense engine”—a system that attempts to digitally codify all the basic concepts humans take for granted but machines have never really grasped.

Funded by the big tech companies of the '80s and '90s and beyond, they built this system one logical rule at a time. These rules include simple truths like "you can't be in two places at the same time," "you can't pick something up unless you're near it," and "when drinking a cup of coffee, you hold the open end up." Viewed on its own, each of these 15 million rules is rather mundane. But taken together, Lenat says, they can help give machines something that approximates common sense.
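
To make that concrete, here is a toy illustration of what encoding such rules might look like. These assertions are invented stand-ins written in Python, not Cyc's actual rule language, and they are vastly simpler than the real thing.

```python
# Toy stand-ins for common-sense assertions -- invented for illustration,
# not Cyc's actual rule language, and far simpler than the real thing.
# The point is only that each "obvious" truth becomes an explicit,
# machine-readable statement a program can reason over.

assertions = [
    # You can't be in two places at the same time: if an agent is located
    # at two places at once, those "two" places must be the same place.
    ("forall", ["agent", "p1", "p2", "t"],
     ("implies",
      ("and", ("located_at", "agent", "p1", "t"),
              ("located_at", "agent", "p2", "t")),
      ("same_place", "p1", "p2"))),

    # You can't pick something up unless you're near it.
    ("forall", ["agent", "obj", "t"],
     ("implies", ("picks_up", "agent", "obj", "t"),
                 ("near", "agent", "obj", "t"))),

    # When drinking from a cup of coffee, you hold the open end up.
    ("forall", ["agent", "cup", "t"],
     ("implies", ("drinks_from", "agent", "cup", "t"),
                 ("oriented_open_end_up", "cup", "t"))),
]
```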

This common sense engine is called Cyc, and after three decades of work—basically since that last game of Traveller—Lenat says it's ready for the real world. Already, Goldman Sachs has used Cyc as a way to monitor the inner workings of its technological infrastructure. The Cleveland Clinic has experimented with it as a way of exploring new areas of medical research. And judging from a document leaked by whistleblower Edward Snowden, the National Security Agency has used it in an effort to identify terrorist threats in international communications data.

Now, in the wake of these projects, a group of Austin tech vets have launched a new company, Lucid.ai, that aims to commercialize Lenat's common-sense engine. Lenat trumpets this as a new dawn for AI. "I see this as a whole new paradigm for how science will proceed over the next 10 to 20 years," he says, chewing on his Thai chicken salad.

The trouble is that the rest of the tech world doesn't really agree with him. A decade into his project, the Internet arrived, reinventing not only the fundamentals of computer science but everyday life. And soon, it remade the very notion of artificial intelligence. As Lenat and his philosophy PhDs pieced AI together rule by rule at a tiny office in the middle of Texas, Silicon Valley giants like Google showed that machines could more quickly approach human intelligence by seeking patterns in enormous amounts of digital data.

On the strength of neural networks—a technology that can actually learn tasks by crunching vast amounts of digital data—AI has entered a new golden age. Neural networks are so adept at tasks like recognizing images and understanding natural language that some voices, from Tesla CEO Elon Musk on down, are warning that artificially intelligent machines could spell doom for the human race. This movement came to a head last week when AlphaGo, an artificially intelligent machine built by researchers at Google, used neural networks to beat one of the world’s best players at the game of Go, the ancient Eastern pastime that's exponentially more complex than chess.

Amid the enormous success of neural networks, the leading tech companies and even the heart of the AI research community have turned their backs on Lenat—or forgotten him entirely. Machine learning is the future, many say, and Lenat's breed of symbolic logic is the past. In his popular overview of the AI landscape, The Master Algorithm, University of Washington professor Pedro Domingos mentions Cyc only once, calling it "the most notorious failure in the history of AI." Many of the top AI researchers don’t even realize it still exists.

But it does. And Lenat insists the world of AI is about to get the same kind of lesson Traveller did.

The Admiral and the AI Researcher

In 1982, Admiral Bobby Inman was looking for someone who could lead the United States to the future. After serving as the director of naval intelligence, the director of the NSA, and the deputy director of the CIA, he’d taken the reins of the Microelectronics and Computer Technology Corporation, or MCC. Funded by DEC, RCA, NCR, Motorola, and others among the alphabet soup of that era's tech giants, MCC was a sweeping tech research consortium meant to ensure that the US kept its edge over the rest of the world. It was an unusual arrangement. Antitrust laws prohibited an organization like this, but Inman bootstrapped new legislation that made it possible. "It moved through the Congress in about four months—back in the days when they used to cooperate," he says.

After taking bids from several other cities, Inman settled the project in Austin, not far from where he grew up. There, MCC would explore the next wave of computing, and AI was to be one of four overarching areas of research. As Inman looked for someone to head the AI work, Alan Kay, then the chief scientist at gaming giant Atari, suggested Lenat. Kay is the inventor of the seminal Smalltalk computer programming language and a key figure in the creation of the Alto, the Xerox PARC invention that inspired the Apple Macintosh—in other words, an enormously influential thinker who knew Lenat from his days in Palo Alto. "He's one of the two or three smartest people I know," Kay says of Lenat. "He's also a bulldog."

Inman flew to California to meet with Lenat and Shepherd near Stanford. Shepherd was living across the country in Cambridge, Massachusetts, at the time, working for an early AI company called Thinking Machines, but she and Lenat were a couple—something Inman didn't realize until his own wife clued him in. After the meeting, he brought both Shepherd and Lenat to Austin. "He clearly was way over the horizon in what he wanted to think about," Inman says of Lenat. "What I admired from the beginning was the vision." He also calls Lenat “a classic nerd”—a term that, in the 1980s, was synonymous with those who ruled the world of computing.

Shepherd says they took the job because Austin was a place she and Lenat could agree on. "He hated Cambridge because of the snow, and I hated Palo Alto because of the earthquakes," she says. But they also believed in the project's higher purpose. Lenat says that he and his closest friends—including Kay as well as Marvin Minsky and Ed Feigenbaum, some of the biggest names in AI—believed that someone should tackle the problem of instilling computers with common sense. He admits this was a quixotic pursuit. In 1986, two years into the project, he told Time it would require more than 350 human-years of work to even approach success. After all, the scope of human knowledge is always growing. But he felt that someone should tackle the task.

Over the next several years, Lenat and a small team of researchers built the framework for his common sense system, including a new coding language called CycL, and he started hiring the philosophy PhDs who would use that language to write all of Cyc's logical rules. Shepherd, who was originally part of that team but now oversees the project’s finances, says she and Lenat began by telling the system about themselves, before methodically expanding into the rest of human knowledge. "When the system doesn't know anything, it doesn't matter where you start," she says. "So we started by telling it about us." It was a task of almost endless scope. After covering one area—say, the animal kingdom—they could move on to practically anything else, from healthcare to terrorism.

But as grand as all this sounds, the rules themselves are so very mundane. A bat has wings. Because it has wings, a bat can fly. Because a bat can fly, it can travel from place to place. But that banality is the whole point. These concepts are intuitively obvious to humans but not to a computer. A computer doesn’t come with intuition. "It's all the things that a 5-year-old knows," says Ramanathan Guha, who worked on Cyc for eight years in the late '80s and early '90s and now serves as a top researcher at Google. "Computers still don't know that."

The ultimate aim was to codify common sense so that any computing system could use it for guidance—so that something like Eurisko could operate without help from a human. As it analyzed other data, the system could step through each rule to reach a logical conclusion. "Almost every application that you can imagine could benefit from having this underlying layer of common sense," Guha says.
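
As a rough sketch of what "stepping through each rule" means, here is a minimal forward-chaining loop in Python, using the bat example above. It is a toy under invented names, not Cyc's actual inference engine.

```python
# A minimal forward-chaining sketch: apply rules to known facts until no
# new conclusions appear. This toy is invented for illustration; it is
# not Cyc's actual inference engine.

facts = {("has_wings", "bat")}

# Each rule: if the premise predicate holds for X, conclude the
# conclusion predicate for X.
rules = [
    ("has_wings", "can_fly"),   # because it has wings, it can fly
    ("can_fly", "can_travel"),  # because it can fly, it can travel
]

def forward_chain(facts, rules):
    """Derive every conclusion reachable from the starting facts."""
    derived = set(facts)
    changed = True
    while changed:
        changed = False
        for premise, conclusion in rules:
            for pred, subj in list(derived):
                if pred == premise and (conclusion, subj) not in derived:
                    derived.add((conclusion, subj))
                    changed = True
    return derived

print(sorted(forward_chain(facts, rules)))
# [('can_fly', 'bat'), ('can_travel', 'bat'), ('has_wings', 'bat')]
```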

Guha describes Cyc in those days as an almost holy mission—a pursuit that wouldn't reach fruition for decades but was still completely urgent. If he woke up in the middle of the night and suddenly realized how to fix a bug, Guha remembers, he would fix it. "This was in '91, when we didn't have an Internet connection at home," he says. "I did what I considered the most logical, obvious thing to do: I got back in the car, went to the office, fixed the bug, and came back home. Because it had to be done."

This attitude, he explains, began with Lenat. "He redefines optimism," Guha says. When Guha wanted to leave the project, Lenat convinced Stanford to accept him into its computer science PhD program—even though Guha had never taken a computer science course—and then he opened a new Cyc office in Palo Alto, so that Guha could do both. "That kind of dedication? Eight years is a long time to take," Guha says. "But he instilled it in some of the people around him." And then things changed: MCC disbanded, and the Internet arrived.

Things Have Changed

Eric Horvitz, who oversees Microsoft's Redmond, Washington research lab, calls Lenat's work "controversial." And he’s being kind. He took a graduate seminar with Lenat at Stanford in the early ’80s, after shifting his studies from neuroscience to a related field: artificial intelligence. Over coffee and pastries at an Italian restaurant not far from Stanford, he calls Lenat a friend. And he praises Lenat's rather unusual breed of tenacity. But he questions whether Cyc has a role in the modern world. "He had the same vision all along, as the technology has changed around him and the availability of resources has changed," Horvitz says.

According to Horvitz, Microsoft was once an investor in Cyc. But in the early '90s, when the project failed to progress as quickly as the Microsoft braintrust would have liked, the software giant moved in other directions. Among other things, it moved toward machine learning—like the rest of the big Internet companies. Today, along with Google, Facebook, and Chinese search giant Baidu, Microsoft is at the forefront of a movement known as deep learning—a technology that is quickly remaking the way we use the Internet.

With deep learning, rather than writing explicit rules for how machines should behave, engineers feed data into neural networks—vast networks of hardware and software that approximate the web of neurons in the human brain—so that these systems can learn to perform tasks on their own (or mostly on their own). Feed thousands of cat photos into a neural net, and it can learn to identify a cat. Feed it reams of old movie dialogue, and it can learn to carry on a conversation—or at least something close to a conversation. Deep learning is how Facebook recognizes faces. It's how Skype translates conversations from one language to another. And it's rapidly improving the Siri-like digital assistants we use on our smartphones.
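
For readers who want the mechanics, here is a bare-bones sketch of that learning loop: a tiny neural network adjusting its weights to reduce error on labeled examples. It learns a trivial function (XOR) rather than cats, and real systems are millions of times larger, but the principle is the same.

```python
# A tiny two-layer neural network trained by backpropagation -- the same
# learn-from-examples principle as the systems described above, scaled
# down to a toy problem (XOR) so it fits in a few lines.
import numpy as np

rng = np.random.default_rng(0)

# Labeled examples: four inputs and their target outputs.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# Randomly initialized weights and biases for a 2-4-1 network.
W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

for step in range(10000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: nudge every weight to reduce the prediction error.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(out.round(2))  # approaches [[0], [1], [1], [0]] as it learns
```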

Some researchers even believe that neural nets will progress to the point where they can not only understand natural language but duplicate human thought. All you need, they believe, is enough data, enough processing power, and the right algorithms. "We're starting to see progress in reasoning," says University of Montreal professor Yoshua Bengio, one of the figures at the heart of the deep learning movement. "The algorithms are being improved as we speak." You hear much the same from Google, which is working at the sharp end of the deep learning movement. This past fall, Google held a day-long seminar to explain its AI work to Silicon Valley reporters. The company framed its effort as a departure from the kind of logical reasoning embodied by Doug Lenat. "By building learning systems, we don't have to write these rules anymore," said John Giannandrea, who oversees Google's AI work and now heads its all-important search engine team. "Increasingly, we're discovering that if we can learn things rather than writing code, we can scale these things better."

But Lenat argues that we just don't have the data needed to reach common sense through these newer methods. Common sense isn't written down. It's not on the Internet. It's in our heads. And that’s why he continues to work on Cyc.

When MCC folded in the mid-'90s, he kept his project alive inside a new company called Cycorp, setting up shop at an office building just off Highway 1 in northwest Austin. It's a place that feels trapped in time. Like a lot of the tech research labs of the '80s and '90s, it resembles the living room of an academic. The bookshelves are filled with big beefy textbooks—math and computer science and philosophy. Mary Shepherd decorates the walls with her homemade quilts.

In the '80s, when Lenat started work on Cyc, tech giants like IBM and DEC ran research operations that looked to the horizon. They packed these labs with some of the world's brightest minds, and these minds built technologies that wouldn't arrive for decades. But today, things are different. DEC is no more. IBM Research no longer commands the top talent. And at companies like Google and Facebook—the places that do command the top talent—research doesn't work in quite the same way. The brightest minds are typically focused on the here and now. But not Doug Lenat.

"He's one of the few people, certainly of his age, but pretty much of any age, who was able to put really sustained focus and effort over a long period of time on a very hard problem," Alan Kay says, comparing Lenat to Douglas Engelbart, the father of the computer mouse. "His interest has been much more on making progress on a very good, very hard problem than on becoming famous or becoming rich."

Behind the Enigma

Because Lenat was always more interested in building Cyc than in bringing it to market, because it's used mostly by organizations who say precious little about their internal technologies, and because Cyc is an immensely complicated thing—a piece of infrastructure that you can't put into action without great effort—it remains an enigma. Oren Etzioni, the CEO of the Allen Institute for Artificial Intelligence, says this probably means it doesn't work. "If it worked," he says, "people would know that it worked."

But Ken Forbus, a computer science professor at Northwestern University, has peered behind the curtain. After the 9/11 attacks, Forbus says, he worked with unspecified members of the "intelligence community" on a project based on Cyc. (He also says that NSA slides leaked by Snowden show evidence of Cyc at work, which indicates that the agency may have used the system to help identify terrorist threats by monitoring international communications traffic.) With Cyc, he explains, machines can not only analyze information at speed, but do so with a degree of common sense—with the nous of a human. In some cases, Forbus says, a system can use the rules of Cyc and other AI models to analyze a trove of data and work to identify who may have carried out a particular terrorist attack.

How would it work? Think of it like this: International communications traffic represents an astronomically large volume of information. The NSA couldn't possibly employ enough humans to look through it all. The agency uses machines to analyze the data, looking for keywords and other red flags. But these filters alone can't make the subtle connections between a stream of words and the threat of an attack. That still requires human intelligence, the kind Lenat provided when he helped Eurisko figure out which of its suggestions for winning the Traveller Trillion Credit Squadron tournament would actually make sense. But at least on some level, Cyc can provide this kind of "understanding," helping to make subtle connections on the fly. To that end, Lenat and his team have built a "terrorism knowledge database" that describes thousands of terrorist attacks using a complex set of logical rules. As machines analyze streams of communications data related to a new attack, they can then use Cyc to better "understand" this data, to make subtle connections between communications and events.
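
As a purely illustrative contrast, here is a sketch of the difference between flat keyword filtering and that kind of relational connection-making. Every name, message, and relation here is invented; this is not Cycorp's or any agency's actual system.

```python
# Purely illustrative: a flat keyword filter versus a relational layer
# that links messages through background knowledge. All data invented.

messages = [
    {"id": 1, "sender": "alice", "text": "shipment arrives friday"},
    {"id": 2, "sender": "bob",   "text": "see you at the usual place"},
]

# Pass 1 -- keyword filter: flags message 1, misses message 2 entirely.
keywords = {"shipment"}
flagged = [m for m in messages if keywords & set(m["text"].split())]

# Pass 2 -- background knowledge: bob is a known associate of alice,
# so a rule can connect message 2 to the same thread of interest.
associate_of = {("bob", "alice")}

def connected(msg, flagged, associate_of):
    """Link an unflagged message to a flagged one via a known relation."""
    senders_of_interest = {m["sender"] for m in flagged}
    return any((msg["sender"], s) in associate_of or
               (s, msg["sender"]) in associate_of
               for s in senders_of_interest)

also_relevant = [m for m in messages
                 if m not in flagged and connected(m, flagged, associate_of)]
print([m["id"] for m in flagged], [m["id"] for m in also_relevant])  # [1] [2]
```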

Cyc is a unique thing—a far cry from what companies like Microsoft and Google are doing with deep learning. But the irony is that both Microsoft and Google also use systems that are kinda like Cyc—if not nearly as complex. At Google, they call it the Knowledge Graph, a database covering things like movies, books, TV shows, celebrities, locations, and companies—in effect, all the things of the world that humans understand as specific, discrete entities but that computers, at least for now, can't really grasp. This database holds not just these "things" but the relationships between these things. It doesn't just include the words "Taj Mahal." It includes information that helps define Taj Mahal. As Google likes to point out, it can distinguish between the Indian monument, the blues musician, and the Atlantic City casino.

This Knowledge Graph is a way of enhancing Google's Internet search engine. When you type a couple of keywords into Google and the Knowledge Graph recognizes them, a digital card pops up on the right-hand side of your browser. Type in "Tom Stoppard," and this card tells you who Stoppard is; who he's married to; who his children are; and what he's written. It shows you what he looks like. Google talks about the Knowledge Graph in terms that sound like Cyc: a step toward a more advanced form of search that "understands the world a bit more like people do."
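
As a miniature of the idea, here is an invented sketch of how a knowledge graph keeps apart entities that share a name. The schema and identifiers are made up for illustration, not Google's actual Knowledge Graph format.

```python
# An illustrative miniature of a knowledge graph: entities stored as
# typed nodes with properties, so one surface string can map to several
# distinct "things." Schema and entries are invented stand-ins.

graph = {
    "taj_mahal_monument": {
        "name": "Taj Mahal", "type": "Monument",
        "located_in": "Agra, India",
    },
    "taj_mahal_musician": {
        "name": "Taj Mahal", "type": "Musician",
        "genre": "Blues",
    },
    "taj_mahal_casino": {
        "name": "Taj Mahal", "type": "Casino",
        "located_in": "Atlantic City",
    },
}

def lookup(name, graph):
    """Return every distinct entity that shares this surface name."""
    return [eid for eid, props in graph.items() if props["name"] == name]

print(lookup("Taj Mahal", graph))
# ['taj_mahal_monument', 'taj_mahal_musician', 'taj_mahal_casino']
```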

Microsoft operates a similar system called Satori. And both companies rely on Schema.org, a shared database of carefully organized knowledge. According to Guha, all of these systems owe a debt to Lenat's work. "A lot of the ideas in the semantic web came directly from Cyc," he says. "A lot of these things have very common roots—even if the actual code is not being used."

The difference is that Cyc goes much further. It defines concepts that you can't necessarily find on the 'net, and it defines them with an added degree of complexity. At the same time, Microsoft's Horvitz acknowledges, Satori and the Google Knowledge Graph are evolving in the direction of Cyc. "These efforts," he tells me, "are headed toward the vision that Doug was very strongly proselytizing."

Just as Lenat and his small team spent years building Cyc on their own, Google and Microsoft built at least some of their graphs by hand as well. But they've also begun to rely more on the power of the Internet. Instead of a team of philosophers hand-coding rules for years at a time, the Internet allows tech giants to build large collections of knowledge with great speed. You can generate these "knowledge graphs" automatically, teaching machines to go online and pull data. Or, as you build this vast collection of data, you can rely on the crowd, enlisting the help of countless people across the 'net. "Doug started out with people. Others started out with automation," Horvitz says. "But of course, what's going to happen is we'll see that there will be this hybrid approach."

Lenat paints Cyc as a shortcut to that endgame. After all, so much of the work is already done. What's more, he says, he and his team now build Cyc in more automated ways. The trouble is that there's an enormous gap between Lenat's world and the world of Google and Microsoft. Cyc is unlikely to ever find a place inside these companies. In the years since Lenat began Cyc in Austin, Silicon Valley has adopted very different ways of operating. Instead of small, out-of-the-way offices stuffed with books and a few dedicated PhDs, today's Internet giants work at enormous scale, using thousands of computers to juggle tasks from millions of people. They also use a different breed of technology. Cyc is a relic of the past. But its ideas may still point the way to the future.

'Either I'm Right Or I'm Wrong'

Last week, Google's AlphaGo rocked the tech world when it won four games against Lee Sedol, one of the world’s best Go players. Go is often described as the Eastern version of chess, but it’s significantly more complex. As Google likes to point out, a Go board offers more possible positions than there are atoms in the universe. Although a machine beat chess world champion Garry Kasparov as far back as 1997, many believed it would be at least another decade before a machine beat a grandmaster at Go. But then the Google system cracked the problem with help from deep neural networks and a sister technology called reinforcement learning. In describing AlphaGo, Demis Hassabis, who leads Google's DeepMind lab, paints it as a triumph of machine learning over the "brute force" methods of the past.

But there is a (small) footnote to Google's historic victory. "Brute force" was also part of the equation. After narrowing the possible moves with machine learning, the system then turned to something called Monte Carlo tree search, the brute force method for calculating the outcomes of possible moves that has driven Go-playing machines for years. AlphaGo was an enormous triumph for machine learning. But it required other technologies too. And as impressive as this system is, it's a long way from true intelligence. It will require still more technologies to reach that ultimate goal.
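
For the curious, here is a minimal sketch of that brute-force idea: judge each candidate move by playing many random games to the end and keeping the move that wins most often. Full Monte Carlo tree search adds a selection tree and smarter statistics; the toy game here (a simple Nim variant) is invented so the sketch runs end to end.

```python
# Pure Monte Carlo move selection -- the core intuition behind Monte
# Carlo tree search, minus the tree. The game is a toy Nim variant
# (take 1-3 counters; taking the last counter wins), invented so the
# sketch is runnable; AlphaGo's real search is far more sophisticated.
import random

class Nim:
    def legal_moves(self, state):
        counters, _ = state
        return [n for n in (1, 2, 3) if n <= counters]
    def play(self, state, move):
        counters, player = state
        return (counters - move, 1 - player)
    def is_over(self, state):
        return state[0] == 0
    def winner(self, state):
        return 1 - state[1]  # whoever just took the last counter wins

def rollout(state, game):
    """Play uniformly random moves to the end; return the winner."""
    while not game.is_over(state):
        state = game.play(state, random.choice(game.legal_moves(state)))
    return game.winner(state)

def best_move(state, game, player, n_rollouts=5000):
    """Score each legal move by its random-playout win rate."""
    def win_rate(move):
        nxt = game.play(state, move)
        return sum(rollout(nxt, game) == player
                   for _ in range(n_rollouts)) / n_rollouts
    return max(game.legal_moves(state), key=win_rate)

# From 10 counters, random playouts favor taking 2 -- which happens to
# be the game-theoretically optimal move (it leaves a multiple of 4).
print(best_move((10, 0), game=Nim(), player=0))
```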

Today, all machines are a long way from true intelligence. In TV ads featuring everyone from Jeopardy champ Ken Jennings to Bob Dylan, IBM paints its Watson system as some sort of sentient creation that can carry on a conversation and learn new languages all on its own. But this is far from the truth. Watson does some things well. It can win at Jeopardy, for instance. It can help power medical chatbots that work reasonably well—in some cases. But this is not really cognitive computing. It's not truly "thinking." It even stumbles when trying to carry on a conversation. Hints of truly intelligent machines are appearing on the horizon, but they have not arrived. Creating real AI demands more than what we have today.

Though many paint machine learning as the future of AI and Doug Lenat's breed of symbolic logic as the past, both ideas are still very much in play. The final outcome might be that these ideas end up complementing each other—in more ways than one. Yoshua Bengio believes that neural nets will reach "common sense" by analyzing enormous amounts of data. All they need, he says, is the right data. And this could include data supplied by a project like Cyc. "We can take something like Cyc," he says, "and we can treat it like data. We can learn from that."

But Lenat looks at this future in different terms. Today, machine learning is very good at tasks like recognizing images and translating from one language to another. But because neural nets rely on what is essentially statistical analysis, they get things wrong. They identify the wrong photo. They choose the wrong word. They can't grasp the nuance of your requests. They travel down paths a human knows not to travel down, much like Eurisko when it first tried to play Traveller Trillion Credit Squadron. But Cyc can help guide these neural networks, Lenat says. It can give them at least some of the common sense they need.

Today, as the machine learning movement surges, Lenat cuts a lonely figure. Yes, there are AI researchers who still believe in his ideas. "It's way ahead of its time," Erik T. Mueller, who helped build Watson, IBM's Jeopardy-winning supercomputer, says of Cyc. "We have to somehow get back to symbolic reasoning if we're ever going to have machines that can communicate with humans." But typically, these are researchers who are also on the outside of the recent revolution in machine learning, who have an interest in propping up the value of symbolic logic. Lenat represents an enormous split in the AI world, with the machine learning crowd in the ascendance and the school of symbolic logic shoved out of the limelight.

But for Lenat, that's not a big deal. After 30 years of work, he's not someone who's overly concerned with the short view. In the middle of lunch at Bartlett’s, as he eats his Thai chicken salad, I ask him about the book that calls Cyc "the most notorious failure in the history of AI." He shrugs, and for a moment, he looks wistful. But the moment quickly passes, and he repeats what he has said before, no doubt to himself as well as anyone who will hear. Cyc, he says, is the future.

In the wake of AlphaGo's victory, I phone Lenat at his office in Austin, just to hear how he viewed Google's big win. As usual, he's not short of opinions. Before I can even finish my question, he points out that although Go is enormously complex, it's not as complex as some other games—much less real life. "You're not going to see this approach scale up to levels one to sixty in World of Warcraft, or even to mediocre puzzles in The Secret World," he says, referring to a pair of massively multiplayer online video games. "Even in the gaming world—leaving aside real world tasks—there's this long tail of more complicated games that are not going to be learned using the Google approach. In the same way, you can build taller and taller towers—and that may be impressive—but you're never going to get to the moon by building a tall enough tower."

This may be true. But does it bother him, I then ask, that Google and AlphaGo and machine learning have been so successful at so many different tasks, that they have received so much attention, that they will continue to receive so much attention? Not at all, he tells me. "This is the best thing that could have possibly happened," he says, pleased that the success of machine learning is driving a much greater interest in AI as a whole.

This, in turn, will lead to a bigger push toward artificial common sense, and that gives him hope. Either the machine learning crowd is right and their technologies will reach full human intelligence on their own, Lenat says, or they're only partly right and they'll need the help of something like Cyc. "Either way, real AI will exist on Earth," he tells me, "which is all I really care about anyway."

Update: This story has been updated to remove a sentence that misrepresented a statement from Google researcher Ramanathan Guha about the nature of AI research.