Yesterday, I was bored. It had been a while since I had discussed anything useful on social media, so I decided to pick a subject and just brain dump what I know about it out loud. Last night’s subject was penetration testing, red teaming, and adversary emulation. Most people know me as that blue team guy, the one dude that knows some stuff about NSM, some malware analysis tidbits, and maybe where to find the dankest memes, but I do know a thing or two about the offensive side of security. I’m no OSCP, but I know things.

By far, however, Ben is the better red teamer in our little dynamic duo at rallysec, so I’m guessing that if I got this wrong, he’ll be the one to tell me later whilst shaking his head. So without further ado, let’s discuss vulnerability assessments, penetration testing, red teaming, and adversary emulation, because all of these terms are interrelated in some way. I feel it’s important to understand the lingo to know what a penetration test (pentest) is and is not, as well as what drives so many security firms to provide them as a service offering.

Pentesting today is typically driven by law (aka regulatory compliance): Most business verticals rely on information systems that process and/or store sensitive information, or that control sensitive resources. Regulatory compliance is essentially a set of guidelines stating that certain security controls and/or mitigations must be in place, in order to assure at least some sort of token effort toward ensuring the confidentiality, integrity, and/or availability of the sensitive resources and data those information systems store or process.

These guidelines are enforced by an auditor who is usually certified by or associated with the regulatory compliance body. The auditor comes in on a regular basis and goes through a list of items the company has to prove it is doing (or has been doing) to comply with the regulations, and the company provides evidence that it is actually doing so. If a company is NOT in compliance, this usually results in pretty hefty monetary fines, and in some cases can result in a loss of certification for the information system, meaning that until the company gets its act together, the information system cannot be used for processing sensitive information. There are tons of different regulatory compliance frameworks for all sorts of verticals: NERC CIP, PCI DSS, HIPAA, FISMA, SOX, and so on and so forth.

So now that you know what regulatory compliance is, what does it have to do with pentesting? You see, most regulatory compliance doesn’t really define what a penetration test actually is, but requires one in some way, shape, or form. I found this document (written by the PCI Security Standards Council, no less!). On pages 3 and 4, it goes over some of the basic differences between a vulnerability assessment and a penetration test. Still, be that as it may, most regulatory compliance does NOT differentiate between the two, or if it does, nobody cares.

Most of the time, organizations subjected to these compliance audits are motivated by money and/or the least required effort. Typically this means that the cheapest solution, not necessarily the best solution, wins. So most companies will spring for a vulnerability scanner and someone to run it, scan their network, generate a report, and present that as evidence that they have been pentested, and the auditors buy it. Problem solved, checkbox checked.

This results in most security practitioners having a very unfavorable view of compliance, calling it “checkbox security”, so called because the auditor comes in, reads off security controls from a list, and checks off items as “evidence” is presented. I’ve heard stories where an auditor asks to see the organization’s firewall, and the IT person kicks a box under their desk: the cardboard box that contains the firewall, which isn’t racked, stacked, plugged in, or configured. The auditor checks their box and moves on. If you’ve ever heard of following the letter of the law as opposed to the spirit of the law, that is what this situation amounts to. This is what leads to companies calling vulnerability assessments penetration tests. “They’re basically the same thing, right? Just check the box and move on.”

As stated above, a cheap vulnerability assessment is someone throwing Nexpose, Nessus, or OpenVAS (God help you) against your regulated network, generating the PDF report the vuln scan tool provides, and calling it a day. The scan may be credentialed (that is, some vulnerability scanners will test for additional vulnerabilities if you provide the software with valid network credentials) if you’re lucky, but most of the time, they won’t bother. A good vulnerability assessment is someone throwing a vuln scanner at your environment (with credentialed scans), actually testing to see whether the vulnerabilities are exploitable, writing the report themselves, and prioritizing the vulnerabilities in the order they should be remediated (usually according to the risk they pose of interrupting operations). The /best/ vulnerability assessments do all of this, plus provide alternative means of remediating a vulnerability aside from “patch your stuff”, for organizations with restrictive or very limited change control windows.
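
Just to make that “prioritize by risk” step a little more concrete, here’s a rough sketch in Python of what basic triage can look like: it reads a Nessus v2 (.nessus) export and sorts findings by their CVSS base score so the scariest stuff floats to the top. Treat it as an illustration only; the default file name and the assumption that every finding carries a cvss_base_score field are mine, and real prioritization also weighs exploitability and what the affected box actually does for the business.

#!/usr/bin/env python3
# Toy triage sketch: rank findings from a Nessus v2 (.nessus) export by CVSS score.
# Assumptions: the export follows the usual ReportHost/ReportItem layout and
# findings carry a cvss_base_score element; adjust for your scanner's output.
import sys
import xml.etree.ElementTree as ET


def load_findings(path):
    """Pull (host, plugin name, CVSS score) tuples out of a .nessus export."""
    findings = []
    tree = ET.parse(path)
    for host in tree.getroot().iter("ReportHost"):
        hostname = host.get("name", "unknown")
        for item in host.iter("ReportItem"):
            # Not every plugin reports a CVSS score; treat those as 0.0.
            score_text = item.findtext("cvss_base_score", default="0.0")
            try:
                score = float(score_text)
            except ValueError:
                score = 0.0
            findings.append((hostname, item.get("pluginName", "unnamed plugin"), score))
    return findings


def main():
    # Hypothetical default file name; pass your own export on the command line.
    path = sys.argv[1] if len(sys.argv) > 1 else "scan_results.nessus"
    # Highest-risk findings first: the ones that belong at the top of the remediation plan.
    for hostname, plugin, score in sorted(load_findings(path), key=lambda f: f[2], reverse=True):
        print(f"{score:>4.1f}  {hostname:<20}  {plugin}")


if __name__ == "__main__":
    main()

Save it as, say, triage.py and run python3 triage.py your_scan.nessus; you get a rough worst-first list to start a remediation conversation with, which is already more useful than handing someone a raw PDF export.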

Pentests can be performed by a single person or by a group of people (red teaming), and they can be entirely remote or may incorporate physical security aspects as well (e.g. social engineering and/or defeating physical access controls). Now, the differences between penetration tests and adversary emulation mainly boil down to scope (what the pentester can target vs. what is considered off limits), the time allotted to achieve the goals set forth in the engagement, the money you paid for the engagement (expertise costs money), and how much of a message you want to send about your organization’s security (or, in most cases, lack thereof). The red team is kinda like a casino: the house always wins. You may end up ahead temporarily if the scope and timeframe are narrow enough, but the red team will always win given enough time and a big enough incentive. If you don’t believe me, look at nation-state units like the NSA’s TAO, or the recent supposed Russian infiltration of both the DNC and the Hillary Clinton campaign. Nation-state hackers are just hyped-up pentesters with patience, time, and a ton of motivation.

Speaking of nation-state hackers, that is the adversary that adversary emulation attempts to mimic: adversaries with no time limit, no scope limitations, and the goal of knowing your network better than your sysadmins do. Most places won’t spring for adversary emulation engagements because the thought of having pentesters run rampant all over business-critical systems with no boundaries whatsoever is horrifying to them. Those systems are their bread and butter, and the thought of them going down to the tune of thousands lost per minute is kinda scary. But here’s the thing: the bad guys don’t have limits, and what’s more, they don’t care unless it impacts them.

I’m going to stop here for now. We’ll continue this series another time. If there is anything you take away from this, it should be that vulnerability assessments are never penetration tests; however, penetration tests can incorporate most aspects of a vulnerability assessment by their very nature. Now you know the difference, and knowing this isn’t even half the battle. See you next time!

DA_667

About the Author


Tony Robinson (@da_667) is a network security engineer. He is currently wrangled by Hurricane Labs. He has an affinity for network security monitoring, malware analysis, and threat intelligence. When not saving the internet, he can be found playing video games and savoring dank memes.

