Firewalls Don't Stop Hackers. AI Might.

A startup founded by former spies uses machine learning to tackle the newest cybersecurity threats.

The cybersecurity industry has always had a fortress mentality: Firewall the perimeter! Harden the system! But that mindset has failed—miserably, as each new headline-generating hack reminds us. Even if you do patch all your software, the way Equifax didn’t, or you randomize all your passwords, the way most of us don’t, bad actors are going to get past your heavily guarded gate, into your network. And once they do, they’re free to go wild.

That’s why some in the industry are beginning to focus less on sealing borders from outside threats and more on sensing bad behavior inside as it happens—when it can be stopped. They’re shifting from military metaphors to the language of biology; they’re designing immune systems rather than barricades.

Darktrace, founded by Cambridge University mathematicians and ex-British spies, uses machine learning to define what “normal” looks like for any network and all its devices and then report on deviations and anomalies in real time. That’s a big break from the usual security routine of cataloguing prior attacks and guarding against repeat performances. Darktrace CEO Nicole Eagan argues that artificial intelligence is the only way to defend networks against the “unknown unknowns”—the inside jobs and novel exploits your antivirus scan won’t find.

Eagan is an Oracle veteran who, like many of her cofounders, also did a stint at Autonomy, the Cambridge-based big-data innovator that was acquired by Hewlett Packard in 2011 (and sparked a legal battle after HP decided it had paid too much). Darktrace isn’t the only company promising to flag malefactors at work inside networks; other outfits like AlienVault, NetWatcher, and SS8 also say they can offer that kind of deep-field defense. But none of these competitors relies as heavily on AI as Darktrace—and some are outspokenly skeptical that AI-driven defense can deliver on that promise at all.

Eagan recently sat down with Backchannel to explain how Darktrace’s approach could help democracies trying to protect their elections and how the hacking of Equifax will change the way businesses connect with one another.

Scott Rosenberg: Say you’re running an online retailer. You’ve already got all the usual network defenses. What do you need AI for?

Nicole Eagan: The big challenge that the whole security industry and the chief security officers have right now is that they're always chasing yesterday’s attack. That is kind of the mindset the whole industry has—that if you analyze yesterday’s attack on someone else, you can help predict and prevent tomorrow’s attack on you. It’s flawed, because the attackers keep changing the attack vector. Yet companies have spent so much money on tools predicated on that false premise. Our approach is fundamentally different: This is just learning in real time what's going on, and using AI to recommend actions to take, even if the attack’s never been seen before. That’s the big transition that Darktrace is trying to get folks like that online retailer to make: to be in the position of planning forward strategically about cyber risk, not reacting to the past.

I know military language gets overused in this industry, but it sounds like you’re taking the defense from the network’s borders to the whole interior. Does Darktrace just replace the old firewalls and virus catchers?

Nicole Eagan. Courtesy of Darktrace

I think there’s been an over-investment at the perimeter, trying to harden it—as well as in things like looking at the dark web to see if your data’s already published out there, or buying third-party threat intelligence feeds about historical attacks on other companies. If they did make any investments inside, it’s usually done on what’s called rules and signatures [pattern-matching known exploits and viruses], which, again, is just yesterday’s attack written in a rule, and then you try to catch it.

When we start working with companies, it changes their mindset about security. It gives them visibility they’ve never had before into the pattern of life of every user and device inside their network. It lets them see their network visually in real time, which is an eye opener. They also realize that you can catch these things early. The average attacker is in a network 200 days before real damage is done. You’ve got a lot of time.
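Darktrace’s actual models are proprietary, but the core idea Eagan describes—learn each device’s “pattern of life,” then flag deviations—can be illustrated with a deliberately simple sketch. Everything here is hypothetical: the device names, the hourly byte counts, and the three-standard-deviation threshold are invented for illustration, not drawn from any real product.

```python
from statistics import mean, stdev

def build_baseline(history):
    """Learn a per-device baseline (mean, stdev) from historical
    hourly outbound byte counts."""
    return {dev: (mean(vals), stdev(vals)) for dev, vals in history.items()}

def is_anomalous(baseline, device, observed, threshold=3.0):
    """Flag an observation more than `threshold` standard deviations
    from the device's learned 'normal'."""
    mu, sigma = baseline[device]
    if sigma == 0:
        return observed != mu
    return abs(observed - mu) / sigma > threshold

# Hypothetical traffic history (kilobytes per hour) for two devices.
history = {
    "printer-3f": [120, 130, 110, 125, 118, 122],
    "ceo-laptop": [5000, 5200, 4800, 5100, 4900, 5050],
}
baseline = build_baseline(history)

# A printer suddenly pushing 9,000 KB/hour looks like exfiltration...
print(is_anomalous(baseline, "printer-3f", 9000))   # True
# ...while a busy laptop doing what it always does stays quiet.
print(is_anomalous(baseline, "ceo-laptop", 5100))   # False
```

The point of the toy is the contrast with rules and signatures: nothing here knows what any prior attack looked like; it only knows what this device normally does, which is why novel behavior still trips the alarm.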

We talk a lot about the human immune system. We've found it’s a very effective analogy because boards of directors can understand it, non-technical people can understand it, as well as deep technical people. We’ve got skin, but occasionally that virus or bacteria is going to get inside. Our immune system is not going to shut our whole body down. It’s going to have a very precise response. That is where security needs to get. It needs to become something that, like our immune system, is just in the background always running—I don’t have to think about it.

Are the attackers wising up and adapting to your approach?

Like everyone else, we don't know exactly when AI will start to be widely used in the attack vector itself. We did see one instance, about six months ago in a network in India. It wasn’t super sophisticated. I wouldn’t call it a full-blown AI attack. But it was using some bits of machine learning to learn what normal looked like in this network, and try to blend into the background noise. Luckily, we detected it because it was doing a lot of lateral movement and unusual behavior, so our models went off loud and clear.

It might not be trying to steal data; it might just hang out and learn, right? If you want to learn about new medical research or alternative energy, maybe you just want to camp out in the network and observe. Or maybe it’s about subtly changing data, patient records, blood types, bank account balances, and that’s going to wreak havoc because no one’s going to know what data they can trust.

Those are the attacks we think are more apt to happen as AI comes to the forefront. With machine learning, especially unsupervised self-learning systems, the longer they’re in place, the smarter and stronger they get. Just like the longer our immune system has been with us, the stronger it gets, the more things it’s exposed to.

Some of the machine learning approaches out there take all of their customers’ data, load it up to a cloud and do a comparison. A quarter of our customers are in financial services. Some of our founders came out of the intelligence community, MI5, MI6. We kind of looked at this and said, if you’re a financial services company, or if you’re even in healthcare, if you’re anyone—why would you be uploading your data to someone else’s cloud? Now you’ve increased your security risk, not decreased it.

What do you think when you hear about a situation like the Equifax story?

One of the things that comes to mind for me is: For many of our customers, Equifax is part of their supply chain. They’ve relied on Equifax to do some of that background credit checking and scoring. I think the last time this came up at this magnitude would have been around Target. Your supply chain is really critical. Everyone in it, especially in the data supply chain, can increase your cyber risk. A lot of consumers are obviously impacted by this breach, but there are a lot of other supply chain partners of Equifax who are also impacted by the breach. How can we help, in the future, everyone who was relying on Equifax, who was in their data chain, to know that that breach was happening when it was happening? Not after the fact, when Equifax reports it.

I think we’re going to see something emerge—it might be the equivalent of a FICO score, in fact—where everyone in the supply chain has some type of real-time visibility feed into [the status of the other companies]. You’d see something going from what we would call an amber alert to a red alert. Something would light up on your dashboard saying, “Someone in your data supply chain has early indicators that something’s going wrong.”

These breaches are getting reported as historical events, long after something could have been done about it. That has got to change.

How would you get companies to share such sensitive information?

Today what happens is, everyone has a legal agreement with Equifax. In order to become a supply chain partner, there’s usually a cyber risk survey that you fill out. It often has anywhere from 200 to 1,000 questions, and it’s subjective. You give it to the other party; they say, yes, no, yes, no, yes, yes, yes. It may or may not be factually accurate.

The reality is, your cyber risk score changes on a regular basis—just like your credit rating, your FICO score. Numerous attacks on every company happen every day. Every day their cyber risk profile dynamically changes. It’s not static, and yet this whole supply chain thing is a subjective questionnaire, at one point in time at the beginning of the relationship. That relationship’s in place 2 years, 5 years, 10 years; that form may never get updated.

We think this whole area of cyber risk scoring is going to need to move to a real-time model. Instead of a legal agreement that we sign up front and then put in a drawer, there's going to be a sharing of cyber risk scores between supply chains. You might not know exactly what the breach is, but you would have an indication that that vendor supplier has a problem in some dimension right now.
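No such real-time scoring standard exists yet—Eagan is describing something she expects to emerge—so the following is purely a speculative sketch of the dashboard behavior she outlines: partners push updated risk scores, and a subscriber sees a transition from green to amber to red. The score scale, thresholds, and partner name are all invented for illustration.

```python
def risk_level(score):
    """Map a hypothetical 0-100 cyber risk score to a dashboard alert.
    Thresholds are illustrative, not any real standard."""
    if score >= 80:
        return "red"
    if score >= 50:
        return "amber"
    return "green"

class SupplyChainFeed:
    """Toy feed: partners push score updates; subscribers are notified
    only when a partner's alert level actually changes."""
    def __init__(self):
        self.levels = {}

    def update(self, partner, score):
        new = risk_level(score)
        old = self.levels.get(partner)
        self.levels[partner] = new
        if old is not None and new != old:
            return f"{partner}: {old} -> {new}"
        return None  # first sighting, or no level change

feed = SupplyChainFeed()
feed.update("credit-bureau", 35)           # baseline: green, no alert
print(feed.update("credit-bureau", 85))    # credit-bureau: green -> red
```

The design point matches the interview: subscribers learn that *something* is wrong at a partner right now, without the partner disclosing exactly what the breach is.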

These models are being worked out today. I think it will change the future of cyber risk insurance. Potentially it could also extend to the consumer side. Maybe I don’t want to do banking or do business with an entity that has a low cyber risk score.

Darktrace's Enterprise Immune System interface. Courtesy of Darktrace

Let’s talk about the public sector. Is there a way to take what you're doing in enterprise security and apply it to, say, protecting our elections?

Usually, the part of Defcon and Black Hat that everyone's most excited about is the hacks on vehicles—like showing how you can take over a Jeep and stop its brakes from working. This year, though, the most interesting one was actually around how easy it was to tamper with voting machines.

It was extremely easy! There is a lot of concern—not only in the United States, but in other countries that haven’t yet moved to [electronic] voting. They were moving more and more towards automated systems, and now they’ve put the plans on hold. They’ve been coming to Darktrace, saying, “We really want to automate some of the voting process, but security is our number one issue.” We’ll have some pilot projects underway soon to help other countries that are also concerned based on what they saw happen in the US.

Using our approach, we see everything, and by everything, I mean any device that connects. So, IoT devices, there’s a flood of those coming on the market right now.

I read about the connected fish tank caper.

Yeah, and everyone's trying to figure out what agent or chip they're going to put in every device to secure it. What’s great about our approach is, they don't have to put anything in their device. We actually will model its behavior, the normal pattern of life and the not normal, and know whether it's under attack. That is as true for voting machines, if they are connecting over either internet or wifi, as it would be for a corporate server.