Applying Machine Learning to Cybersecurity

In a recent article on the OPM hack, the author describes a pretty typical security situation for a large enterprise:

The Office of Personnel Management repels 10 million attempted digital intrusions per month—mostly the kinds of port scans and phishing attacks that plague every large-scale Internet presence—so it wasn’t too abnormal to discover that something had gotten lucky and slipped through the agency’s defenses.

Enormous pressure at scale from criminals makes automated systems essential for security. While humans can inspect packages coming into the building, only a computer can work quickly enough to inspect packets. Firewalls are the prototypical example: you allow certain traffic through according to a set of rules based on the source and destination IPs and the ports and protocols being used.
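
To make the firewall idea concrete, here is a minimal sketch of first-match rule evaluation. The rule format and field names are invented for illustration; real firewalls (iptables, cloud security groups) have richer match criteria, but the allow/deny-by-rule logic is the same.

```python
# Toy firewall: evaluate rules in order, act on the first match.
# Rule and packet formats here are invented for this sketch.
RULES = [
    {"action": "allow", "proto": "tcp", "dst_port": 443},   # HTTPS in
    {"action": "allow", "proto": "tcp", "dst_port": 22},    # SSH in
    {"action": "deny",  "proto": "any", "dst_port": None},  # default deny
]

def decide(packet):
    """Return the action of the first rule the packet matches."""
    for rule in RULES:
        if rule["proto"] not in ("any", packet["proto"]):
            continue  # protocol doesn't match this rule
        if rule["dst_port"] not in (None, packet["dst_port"]):
            continue  # port doesn't match this rule
        return rule["action"]
    return "deny"     # fail closed if no rule matched
```

A packet to tcp/443 matches the first rule and is allowed; anything else falls through to the default deny.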

In recent years, there's been a lot of buzz about machine learning in cybersecurity--wouldn't it be great if your automated system could learn and adapt, stopping threats you don't even know about and flagging suspicious shifts in behavior? Given the success of deep learning at recognizing images and speech, of latent factors for music recommendation systems, and of reinforcement learning for self-driving cars, it's reasonable to expect that cybersecurity might be one of the next fields to get a big boost from machine learning.

In this post we will discuss the three basic approaches to designing an automated system to detect things (threats or cats). We will talk about the situations in which each is effective and lay out the consequences for cybersecurity. Spoiler alert: more expert knowledge is needed for systems hoping to catch an APT than for those trying to distinguish a dachshund from a Siamese.

Let's start, as must all things online, with cats (and dogs). Say you're trying to build an automated system to distinguish cats from dogs in photos.

  1. The most straightforward way would be with a bunch of rules, like a flowchart from the back page of Wired. For example:

      * If the animal has vertical pupils, it's a cat. If you can't see the eyes, then check the mouth.
      * If the mouth juts out from the face, it's a dog. If not, it could be a cat or a flat-faced dog like a pug.
      * If the shape of the ears...

    You explicitly specify which features of the photo to pay attention to and give rules to follow based on the values of those features.

  2. If you have some labelled data--like a bunch of photos labelled cat or dog--then you can let the machine learn the rules. You tell it the important features of each photo: eye shape, jaw length, ear length, fur color, etc. Then it compares the values of those features across the cat photos and the dog photos and tries to learn rules that differentiate the two groups. Typically the rules take the form of a cat-score and a dog-score, each built as a weighted combination of those features. Popular algorithms for the rule-learning part include logistic regression and random forests. One advantage of these systems is that the score can be explained: it can be traced back to the features that most influenced it. This makes debugging easier; e.g., if one feature is driving garbage answers (maybe you've been recording it wrong), you can see the problem. If you picked features that carry enough information to distinguish the animals, then the machine can figure out the best way to combine that information to accomplish your goal.

  3. If you have enough labelled data, then you can let the machine learn the features and the rules. You just give it the raw pixels and the label for each photo and say "go!". Deep neural nets are a popular algorithm in this category. They have been quite successful for image recognition if you have millions of labelled images. The machine needs a lot of data since you are asking it to learn to recognize things like fur and eyes, and then to learn what to do with that information to make a decision. These methods can be hard to debug or explain, because none of the learned features come with names.
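
Approach 2 can be sketched in a few lines. A tiny perceptron stands in here for logistic regression or random forests, and the feature vectors (vertical pupils, snout prominence, ear pointiness) and data are invented for illustration--the point is that a human chose the features and the machine learned the weights.

```python
# Human-engineered feature vectors: (vertical_pupils, snout_prominence, ear_pointiness).
# The animals and feature values are made up for this sketch.
cats = [(1, 0.10, 0.90), (1, 0.20, 0.80), (1, 0.15, 0.95)]
dogs = [(0, 0.90, 0.30), (0, 0.80, 0.20), (0, 0.70, 0.40)]

def train(examples, labels, epochs=50, lr=0.1):
    """Learn weights for a linear cat-score from labelled examples (perceptron rule)."""
    w = [0.0] * len(examples[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(examples, labels):
            pred = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = y - pred  # +1, 0, or -1
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

w, b = train(cats + dogs, [1, 1, 1, 0, 0, 0])  # 1 = cat, 0 = dog

def classify(features):
    score = sum(wi * xi for wi, xi in zip(w, features)) + b
    return "cat" if score > 0 else "dog"
```

Because each weight is attached to a named feature, you can inspect `w` afterwards and see which features drive the score--the explainability advantage mentioned above.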

To summarize, you might call these:

  1. Hand-written rules (Human-engineered rules with human-engineered features)
  2. Machine learning with human-engineered features
  3. Machine learning with machine-learned features

Let's see how this plays out for online music recommendations:

  1. Beats (now part of Apple Music) was acquired for its excellent playlists. Expert curators assemble the best tracks and set rules for cycling them.

  2. Pandora hired hundreds of music analysts to listen to every song and score it for 400 features, ranging from "strong horn lines" to "mood". The application learns from your likes and dislikes which features are important for you.

  3. Spotify takes the play matrix (who played which songs how many times) and tries to learn features that could explain the patterns in popularity. This method, where you use a massive collection of interactions or ratings to try to uncover hidden features, is called latent factor analysis. (For their popular Discover Weekly feature, they use methods from approach 2 as well. Very few production systems rely entirely on machine-learned features.)
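
The core of latent factor analysis fits in a short sketch: factor the play matrix into per-listener and per-song vectors of hidden features by gradient descent on the reconstruction error. The matrix and hyperparameters below are toy values; production systems factor millions of rows with implicit-feedback variants.

```python
# Toy latent factor analysis: rows are listeners, columns are songs,
# entries are play counts. All data here is invented for illustration.
import random

random.seed(0)

plays = [
    [5, 4, 0, 1],
    [4, 5, 1, 0],
    [0, 1, 5, 4],
    [1, 0, 4, 5],
]
n_users, n_songs, k = len(plays), len(plays[0]), 2  # k hidden features

# Start each user and song with a small random latent vector.
U = [[random.uniform(0, 0.1) for _ in range(k)] for _ in range(n_users)]
S = [[random.uniform(0, 0.1) for _ in range(k)] for _ in range(n_songs)]

def sq_error():
    """Total squared error between actual plays and U·S predictions."""
    return sum((plays[i][j] - sum(U[i][f] * S[j][f] for f in range(k))) ** 2
               for i in range(n_users) for j in range(n_songs))

lr = 0.01
before = sq_error()
for _ in range(2000):  # stochastic gradient descent on squared error
    for i in range(n_users):
        for j in range(n_songs):
            err = plays[i][j] - sum(U[i][f] * S[j][f] for f in range(k))
            for f in range(k):
                U[i][f], S[j][f] = (U[i][f] + lr * err * S[j][f],
                                    S[j][f] + lr * err * U[i][f])
after = sq_error()
```

After training, nearby song vectors in `S` behave like genres--hidden features nobody hand-labelled, uncovered purely from the play patterns.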

Cybersecurity needs differ from images and music. A typical enterprise environment is immensely diverse (thousands of distinct processes running on thousands of computers in offices and datacenters around the world), and you get many kinds of data. A system that takes in all the raw logs from an enterprise without any guidance will have a lot of work to do to build a reasonable internal model of that world. The domain is also adversarial: people are deliberately trying to sneak past your defenses.

Labelled data are also very hard to come by. Since big attacks are rare, you don't have very many examples of what bad activity looks like. This makes it important to use all the knowledge you have available, a lot of which resides in the heads of security personnel--the makers of the product and its users. The makers might know things like "a large number of IPs associated with a given URL can be a sign of a fast-flux botnet", and so include the count of distinct IPs as a feature. The users might know that "these servers handle our web traffic, so they *should* be acting differently than the rest of the machines", and when they see false alarms triggering, they can add "server type" as a feature.
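
Turning that expert knowledge into a feature can be as simple as a few lines over the logs. The log format and domain names below are invented; real DNS logs vary by source, but the idea--counting distinct IPs per domain--carries over.

```python
# Hypothetical sketch: encode the expert rule "many distinct IPs behind
# one domain can signal a fast-flux botnet" as a numeric feature.
# The (domain, resolved_ip) records below are invented for illustration.
from collections import defaultdict

dns_log = [
    ("evil-updates.example", "203.0.113.10"),
    ("evil-updates.example", "198.51.100.7"),
    ("evil-updates.example", "192.0.2.55"),
    ("intranet.corp.example", "10.1.2.3"),
    ("intranet.corp.example", "10.1.2.3"),   # repeat lookups, same IP
]

def distinct_ip_counts(records):
    """Feature: number of distinct IPs each domain has resolved to."""
    ips = defaultdict(set)
    for domain, ip in records:
        ips[domain].add(ip)
    return {domain: len(addrs) for domain, addrs in ips.items()}

features = distinct_ip_counts(dns_log)
```

The resulting counts can feed a rule ("alert above N") or become one column in the feature matrix a model learns from.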

If we know that whiskers might be important for recognizing cats, it's best to make that a feature explicitly. The reason that machine-learned features are so important for image processing is that there is no "whisker pixel". It's pretty hard to say how many whiskers a cat has by looking at the numbers in raw pixel data. In security, the information is much more understandable. One record might show a piece of malware being executed and another record might show a bunch of data being sent to a foreign IP. This makes human-engineered features easier to write and easier to understand.

Imagine a security camera looking at a yard. You are worried about intruders. If you can tell it about the kinds of animals it might see in advance--cats, dogs, squirrels, people--it'll do much better than if you just ask it to tell you about strange things. A leaf fell, call the police!

Think about the fire detection problem: is there a fire in my house? Here's how the three approaches break down.

  1. Smoke detectors. If you get more than a certain amount of particulate matter, sound the alarm! We know how often those go off without good reason. That's the problem with hand-written rules in a diverse environment.

  2. Thermometers. Big sudden changes in temperature are a bad sign.

  3. Video cameras. Install a dozen video cameras all over a building, start a few dozen fires yourself to get labelled data, and then build a model. Then you get a foggy morning and it dials the fire department. That's the problem with pure machine-learned features in a changing environment, where your labels come from historical data.

The Sift approach is to focus on human-engineered features, supported by rules and, where rich enough data exists to fuel them, machine-learned features. Using the fire detection analogy, if the system sees a sudden change in temperature that coincides with a beep from the smoke detector, it passes a warning to a human operator, along with the video feed for context. They can quickly decide whether to call the fire department.
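
In code, that corroborate-then-escalate logic might look like the sketch below. The thresholds and field names are invented for illustration; the point is that neither signal alone pages a human, but the combination does--with the context attached.

```python
# Illustrative sketch of combining signals before escalating to a human.
# Thresholds below are made up for the fire-detection analogy.
SMOKE_THRESHOLD = 0.7        # rule: particulate level that trips the detector
TEMP_JUMP_THRESHOLD = 15.0   # feature: sudden change, degrees per minute

def should_escalate(smoke_level, temp_now, temp_minute_ago):
    """Escalate only when the smoke rule and the temperature feature agree."""
    smoke_alarm = smoke_level > SMOKE_THRESHOLD
    temp_spike = (temp_now - temp_minute_ago) > TEMP_JUMP_THRESHOLD
    if smoke_alarm and temp_spike:
        return {"alert": True,
                "context": {"smoke": smoke_level,
                            "temp_change": temp_now - temp_minute_ago}}
    return {"alert": False, "context": None}
```

A foggy morning that trips only the smoke detector (`should_escalate(0.9, 20.0, 19.5)`) stays quiet; smoke plus a temperature spike (`should_escalate(0.9, 45.0, 21.0)`) reaches the operator.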

At Sift we take a human-centered approach to machine learning. It's a person who will need to make decisions on the basis of the software's recommendations, and it's a person who knows their environment best. By using good judgement in designing features and providing context to evaluate the resulting alerts, we maximize the overall effectiveness of the system: finding and mitigating threats.
