
Visit our page at https://twitch.tv/rallysecurity Tuesdays at 7pm to see us stream live!


SpecterOps Adversary Tactics course review

I recently had the opportunity to attend the first public offering of the SpecterOps Adversary Tactics: Red Team Operations course. This excellent, from-scratch training took participants through several modern Tactics, Techniques, and Procedures (TTPs) and demonstrated their countermeasures and detections. I think this intensive four-day course would best benefit intermediate to advanced participants with a couple of years of pentesting or red teaming experience, though it would also benefit beginners who want to "drink from the firehose" and dramatically accelerate their learning.

According to SpecterOps, “This intensive course immerses students in a simulated enterprise environment, with multiple domains, up-to-date and patched operating systems, modern defenses, and active network defenders responding to Red Team activities.” You won’t be using MS03-026 or MS08-067 to pop these networks!

This course primarily used Raphael Mudge's Cobalt Strike as the attack platform, with training modules on how to customize Malleable C2 profiles to better emulate an adversary or remain undetected. The course modules also relied heavily on open source PowerShell tools that the instructors had created and released; notable examples were PowerView, PowerUp, BloodHound, and Empire.
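To give you a taste of what that customization looks like, here's a hypothetical Malleable C2 profile fragment, loosely modeled on the public example profiles; every URI, header, and timing value below is made up for illustration:

```
# hypothetical profile fragment; all values are illustrative
set sleeptime "60000";        # beacon check-in interval (ms)
set jitter    "20";           # randomize check-in timing by +/- 20%
set useragent "Mozilla/5.0 (Windows NT 10.0; Win64; x64)";

http-get {
    set uri "/api/v1/status";         # blend in with ordinary API traffic

    client {
        metadata {
            base64url;                # encode beacon metadata...
            header "Cookie";          # ...and stash it in the Cookie header
        }
    }

    server {
        output {
            base64;                   # encode tasking...
            print;                    # ...and return it in the response body
        }
    }
}
```

Small knobs like these are the difference between traffic that looks like a stock beacon and traffic that looks like the adversary you're emulating.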

Image via https://twitter.com/Al_Jaber/status/908800006156689408

On the first day, the instructors introduced the backstory for the students' engagement on the network by walking through the goals and objectives. Next, they introduced course modules such as setting up attack infrastructure, OSINT, and gaining initial access. Although setting up infrastructure probably could have been covered in more depth, as it is a huge topic, the instructors referenced several externally available resources that cover it extensively and that students could work through in their own time. Since the majority of the class were experienced red teamers, the instructors picked up the pace and moved along to more fun activities such as initial access, though OSINT was a fun module 🙂

The second day covered host enumeration, EDR evasion, persistence, and privilege escalation. Each topic was accompanied by a relevant and sometimes humorous war story from the instructors' experience. This alone would have sold the class for me, as the instructors have several years of intense experience and a wealth of stories to share. Having an active defender (Brian Reitz) present definitely made the course more challenging and engaging, as he would keep an eye out for any poor attacker opsec or TTPs and demonstrate why good habits and attention to detail matter, in the form of killed beacons or other consequences.

The third day covered Active Directory enumeration using BloodHound, as well as token, session, and password theft and reuse. Sean Metcalf, of Trimarc Security and adsecurity.org fame and one of a handful of Active Directory wizards, made a surprise appearance and answered several AD-related questions that came up. The instructors introduced a tool I was not familiar with: PowerUpSQL, made by the folks over at NetSPI. It was created along the same lines as the other PowerShell tools that SpecterOps members have released, and enabled some awesome SQL enumeration that I look forward to trying out on engagements.
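As an aside for anyone who hasn't looked under BloodHound's hood: it stores everything its ingestors collect in a Neo4j graph database, which you can query directly with Cypher. Here's a rough sketch using the neo4j Python driver; the connection credentials and the domain name are placeholders:

```python
# Rough sketch: ask BloodHound's Neo4j database for shortest attack paths
# from owned users to Domain Admins. Credentials/domain are placeholders.
from neo4j import GraphDatabase

driver = GraphDatabase.driver("bolt://localhost:7687",
                              auth=("neo4j", "bloodhound"))

CYPHER = """
MATCH p = shortestPath((u:User {owned: true})-[*1..]->
          (g:Group {name: 'DOMAIN ADMINS@EXAMPLE.LOCAL'}))
RETURN [n IN nodes(p) | n.name] AS path
"""

with driver.session() as session:
    for record in session.run(CYPHER):
        print(" -> ".join(record["path"]))  # one hop-by-hop path per row

driver.close()
```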

The fourth day was a brain-racking session on how Kerberos attacks work, data exfiltration, and an overview from Brian about what he, as the blue team, had observed throughout the course. Kerberos is an extremely dense topic, but the instructors walked us through it and answered every question that came up. Once we had completed the course objectives and let the instructors know, the course wrapped up with Brian turning on REALLY hard mode: he began kicking us out of the network in earnest, which made it incredibly challenging and fun.

What I appreciated about this course, aside from how well it was organized and run, was that it gave both offensive and defensive perspectives. An instructor would give the objectives for the training module and walk through how we were to complete it, and then Brian Reitz, a member of the SpecterOps threat hunting team, would talk us through how defenders could detect and respond to the attack we were about to perform. This aspect would also make the course beneficial for any network defenders who attend, giving them experience in seeing how pentesters, red teamers, and adversaries with similar TTPs perform attacks.

In addition to the course materials and objectives, it was amazing having a braintrust such as Sean Metcalf, Raphael Mudge, and all of the SpecterOps folks in one place. Everyone was easily approachable and helped out with any questions the students had. During the course module introductions, the instructors would bounce questions off each other, as each is a thorough domain expert in various aspects of security. For instance, during one course module on COM objects, Will Schroeder (@harmj0y) called in Matt Nelson, who gave a very in-depth talk, as he is currently doing a deep dive on all things COM and has released several blog posts and 0-days on the topic. All of these factors made it an invaluable experience.

I would highly recommend this course to anyone. The course materials are an excellent quick reference guide that I'll be making good use of on future engagements. You may not get a certification out of this course, but the knowledge and hands-on experience you gain will accelerate your operational capabilities significantly.

Creative Commons License
SpecterOps Adversary Tactics course review by Ben Heise is licensed under a Creative Commons Attribution 4.0 International License.
Based on a work at https://rallysecurity.com/specterops-adversary-tactics-course-review/.

Cat and Mouse: The Effects of Threat Research on Nation-State Actors

Hello again! I'm porting over a blog post that I wrote on a whim a week or two ago and posted on my personal blog. I figured I may as well post it here as well. Considering everything that has happened recently… I consider myself blessed with good timing. -da_667

I'm going to preface this blog post before I get started: this is supposed to be a 101-level discussion with a low bar to entry, relatively fast, without much in the way of technical definitions. Sometimes that's hard for me to do. Either I want to turn over every stone, go for 100% completion, or go down every rabbit hole I can. The last thing I want is to be called a charlatan by my peers or be told that content I wrote sucks. So I'm going to hope that I go deep enough into the subject to keep it interesting and explain my perspective, but not so deep into the weeds that things get boring and pedantic. All that being said, I include citations where I can, and welcome new information or constructive criticism.

god help you, I was asked to write about this.

In recent years, the information security industry has seen the rise of threat intelligence (or "threat intel") reports. Most would cite Mandiant's APT1 report as one of the most well-known instances of modern "Threat Intelligence Reporting". I suppose I should back up for a minute and give you my opinion on what modern threat intelligence reporting looks like. Admittedly, it's a little bit arbitrary, but here is how I discern it:

Reports in the past seemed to have a laser-like focus: "Just the facts, ma'am," where the facts were file hashes, host-based artifacts, network-based artifacts, and observed activity. For example, let's look at something like, say… Code Red.

Okay, so Code Red wasn't really a targeted attack, but I couldn't help picking it as an example because of the unique message it would leave on hacked websites: "HELLO! Welcome to http://www.worm.com! Hacked By Chinese!" That message was practically screaming for researchers to say "CHINA DID THIS". Yet the researchers coldly, almost methodically made mention of the message, not caring whether it was actually the Chinese who unleashed this worm unto the world; attribution didn't matter. At all. It leaves these artifacts, it uses these methods to propagate, here is what it looks like, here is how you combat it. Straight and to the point. It felt like solid research, with minimal marketing and drama.

Attribution is tough! But really, nobody seemed to care back then.

Let's compare this to the APT1 report, published in 2013. The report reads like a combination of a military intelligence brief, a news report, and a post-incident response report, all rolled into one. You're given network and host-based artifacts all the same, but in between there is serious effort applied to attributing the attack, identifying targeted verticals, and dramatizing and/or drawing attention to /LIVE/ action the actors performed against targeted systems.

The live videos of actors compromising systems feel almost like a spectacle, like they're meant to serve as some sort of smoking gun to somebody in a position of power, as if there were some sort of trial going on. You have your perpetrator, you have your motive, and you have their means. This is what modern threat intelligence reporting feels like.

In the fifth domain, cyber operations are represented by two separate yet equally important groups: the incident responders, who investigate breaches; and the threat actors, who operate against targets. These are their stories.

(sidebar: I know a lot of fireant researchers. You know who you are, and you know I respect your work.)

As we have established, threat intel existed for some time before APT1. Depending on who you ask, some would consider Cliff Stoll's work, "The Cuckoo's Egg", to be one of the first "cyber" threat intelligence reports, as it contains one of the first recorded observations of a malicious actor's CNE capabilities and tradecraft. I wouldn't really classify it as "modern" threat intelligence, though. It's a factual story with no spectacle, so to speak.

The APT1 report is largely credited with popularizing the term APT (Advanced Persistent Threat), though interestingly enough, Richard Bejtlich states that the "APT" terminology dates back to around 2006, with the TITANRAIN intrusion set, and that the phrase was originally coined by the US Air Force. My research (which consisted of lazily consulting Wikipedia and backtracing through the works cited) shows that TITANRAIN dates as far back as 2003 and targeted a few different verticals and organizations. Though no APT1-style report was ever released publicly (based on my limited research, we only ever found out about TITANRAIN in the public realm because Shawn Carpenter leaked it), I would consider the TITANRAIN intrusion set to be the first instance of "modern" threat intelligence: massive effort put towards attribution, identification of targeted organizations and verticals, etc.

So, now that you have some history and somewhat of an explanation of what I consider modern threat intelligence: why, all of a sudden, do security firms care about attribution? It's tied to the rise of the "fifth domain". In years past, the internet was considered a nebulous space with no well-defined boundaries. No one country had any will it could exert on the internet, until suddenly "Cyber is considered a domain of war, lol." The idea of "cyberspace" as a fifth domain dates as far back as the mid-90s, but wasn't really taken seriously until the recent administration, with a so-called cyber tsar being appointed and Capitol Hill actually paying attention to information security, for better or worse. Suddenly the internet became like the Reese's commercial: "You got your geopolitics in my internet. You got your internet in my geopolitics!" except without chocolate and peanut butter. Now people who are political experts suddenly think they're qualified to make cybersecurity decisions, and people who are cybersecurity experts suddenly believe they're policy experts. To quote Krypt3ia, probably one of my favorite researchers: "STAY IN YOUR LANE". I'm not against political experts getting involved in cybersecurity, attending conferences, and becoming more well-informed (I mean, it would kinda help defeat the argument that infosec conferences are echo chambers if we had more outsiders attending), but what /does/ bother me is the uninformed making important decisions that affect us all, feigning that they 'know better', when in fact they know nothing. I digress.

Very suddenly, the internet has become a militarized zone and a massive territorial, international pissing match. Countries are doing everything they can to establish dominance by owning their neighbors. If you want a picture of what electronic warfare mixed with the fifth domain looks like, you need only look at what is going on between Russia and Ukraine. It's essentially a case study in how devastating CNA (Computer Network Attack: attacking computer assets to bring about effects in the real world) and CNE (Computer Network Exploitation: hacking for the express purpose of sustained intelligence gathering) can be, and proof that both have a place in a country's electronic warfare catalogue. And in a single paragraph, I have described "cyberwar".

There is geopolitical pressure from nations and intelligence communities worldwide to be able to attribute threat actors to nation-states, for a number of reasons. These reasons mostly boil down to being able to use the attribution of cyber attacks as a form of leverage during international relations and/or conflicts. I mean, it looks REALLY bad when your ambassador goes to another country and denounces them for hacking, only for the country you're denouncing to fire back, "So what? You were hacking us, too." I'm paraphrasing here, but this is essentially what happened when the US accused China of hacking US infrastructure and businesses.

In addition to geopolitical motivations for attribution, there is also financial motivation from sufficiently large corporations. "Cyber insurance" is an emerging market that a lot of corporations are investing in. Its emergence is the direct result of security researchers and practitioners telling companies and organizations for years that it's only a matter of time before they become the next victim of a major breach. "Man, I really wish there was an insurance policy we could fall back on in the event we get hacked." Lo and behold, cyber insurance is born, and companies move to rapidly replace, pare down, or outsource their internal security operations. After all, if getting hacked is an inevitability, what are security professionals being paid for? There is a slight problem, however…

These insurance policies often have minimum requirements that a company must meet before an insurer will pony up. This is more or less the same as having an insurance claims adjuster come out to your house to verify that there are no glaring defects or issues that would make you a risk to insure: say, stairs without railings, shoddy construction, structural defects, pre-existing damage, etc. If you suffer a breach, and the cyber insurance claims adjuster comes by and determines you didn't meet the "minimum required practices", the insurance company will deny your claim. What's the alternative? Prove that the adversary that breached you was sufficiently advanced, that the attack was unprecedented, and that it had a degree of sophistication no defense could reasonably hope to detect.

this owl is sophisticated. The skids that popped your EOL JBOSS server are not. (http://blog.talosintel.com/2016/03/samsam-ransomware.html)

This is more or less the scenario that played out with the Sony breach: a belief that the actors were an "incredibly advanced" North Korean intrusion set, oh, and a full cyber insurance policy payout. The company gets their insurance money, the IR firm that investigated the breach looks like rockstars (and gets to publish a report stating how advanced and sophisticated the actors were, while neglecting to mention the poor security in place), and everyone gets paid. Sophisticated, advanced, nation-state hackers means money all around.

So now you know why modern threat intelligence reporting is the way it is:
1. Countries can use the reports as leverage for geopolitical conflicts and negotiations
2. Large corporations can use it as justification for a cyber insurance policy payout and/or an excuse if they are found to be noncompliant with whatever regulatory compliance they fall under
3. Incident Response firms use it as a marketing rag to show off how fucking awesome their IR team is

Now, what is going on behind the scenes as these reports get released or the intrusion set(s) are discovered and caught in the act? The short answer is that there is a lot going on that you don't see until the report gets posted, on both the nation-state adversary side and the security researcher side. I'm gonna start by telling you what's happening on the nation-state side. First and foremost, I can almost 100% guarantee you that by the time a threat intelligence report is publicly posted, the IOCs from the report are totally stale. How am I so confident? Because any intrusion set or nation-state worth their salt has iron-clad opsec, and they know when they are being watched.

Tell me, how many of you are familiar with the concept of a "Burn Notice"? How does this apply to cyber operations? The moment nation-state actors notice that something has happened, it all goes out the window. All of it. "Something has happened" could be defined as:

– an implant was caught by an antivirus vendor, or somehow made its way onto VirusTotal
– a security firm is probing the C2 infrastructure
– there are network/infrastructure changes occurring on the target network that hint towards implants having been discovered

Slap my hand.

Nation-states have a ton of manpower and usually have resources dedicated to detecting anything that could be considered a threat to their operations. Combine that with nation-state actors being trained to notice changes in the environments they are operating in, and, well… the bottom line is that if there is even the slightest change indicating they've been discovered, you can bet it has been noticed, and that efforts are being made to burn the C2, the implants, the tradecraft, everything. Upon discovery, they throw all of it out the window and completely reinvent themselves from the bottom up.
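To make that concrete, checking whether one of your implant hashes has surfaced on VirusTotal (the first trigger in the list above) is a single request against the public VirusTotal v3 API. A rough Python sketch; the API key and hash are placeholders:

```python
# Rough sketch: has this implant hash surfaced on VirusTotal yet?
# The API key and hash below are placeholders.
import requests

API_KEY = "YOUR_VT_API_KEY"
SHA256 = "aaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaaa"

resp = requests.get(
    f"https://www.virustotal.com/api/v3/files/{SHA256}",
    headers={"x-apikey": API_KEY},
    timeout=30,
)

if resp.status_code == 404:
    print("Not indexed. The implant is (probably) not burned yet.")
elif resp.ok:
    stats = resp.json()["data"]["attributes"]["last_analysis_stats"]
    print(f"Burned: {stats['malicious']} engines flag it. Time to rebuild.")
```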

Don't believe me? Take a look at Duqu vs. Duqu 2. Just about everything changed (except maybe the targeted organizations; the decision to monitor Kaspersky was a notable and very interesting one that ultimately led to their discovery): C2 (new IP addresses and domains), implant design (in-memory only vs. dropped file artifacts everywhere), and other miscellaneous tradecraft (e.g. no longer using stolen certificates that could be backtraced, keeping only a few footholds/persistence points in the network on high-uptime systems, etc.). There's a good chance that if you were to read the report on Duqu, then read the report on Duqu 2, you'd never know they were suspected to be the same nation-state without the names tying them together, and that is the whole point.

For a more recent case study, let's look at "ProjectSauron". Technical details in the report state that some of the implants have a target ID associated with certain servers in targeted organizations. This implies (and is later confirmed by the report) that the actors customize implants on a per-target basis or, at a minimum, use some sort of polymorphism. This hypothesis isn't really so far-fetched if you think about it; even common ransomware authors build new versions of their malware daily to avoid detection. What's interesting about this from a nation-state malware perspective is that if the implant(s) in one target environment are discovered, then theoretically the actors can perform limited operational damage control and burn only the implants used for that target's network. However, in this case, Kaspersky caught multiple instances, in multiple target networks, all sharing the same TTPs. This means it's pretty much back to the drawing board for whoever the "Strider Group" is.

If the nation-state actors are any good, there should be absolutely nothing that ties campaigns /or/ versions of an implant together. In the rare cases where you're able to pivot off a name or a registration e-mail address used to register new domains between one campaign and another, or where code reuse lets you link one campaign or set of implants to another, that nation-state was terrible at compartmentalizing. That's cross-contamination, and that gets you caught. That's what happened in the DNC hack, which allowed researchers to supposedly tie the hack back to a Russian intrusion set that had also been observed in Germany.
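To make the cross-contamination idea concrete: one of the simpler ways researchers tie Windows implants together is the import hash ("imphash"), a fingerprint of a PE file's import table that tends to survive rebuilds as long as the toolchain and linked functionality don't change. A rough sketch using the pefile library; the sample paths are placeholders:

```python
# Rough sketch: cluster implant samples by import hash (imphash). Samples
# from different campaigns sharing an imphash suggest a reused toolchain,
# i.e. cross-contamination. The paths below are placeholders.
import pefile
from collections import defaultdict

samples = ["campaign_a/implant1.bin", "campaign_a/implant2.bin",
           "campaign_b/implant3.bin"]

clusters = defaultdict(list)
for path in samples:
    pe = pefile.PE(path)
    clusters[pe.get_imphash()].append(path)
    pe.close()

for imphash, members in clusters.items():
    if len(members) > 1:
        print(f"{imphash}: {members}")   # possible link between campaigns
```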

So now you have some idea of what's going on behind the scenes with the nation-state actors; what's going on with regard to the security researchers? You see, security researchers know that information security is a cat and mouse game. They also know that as soon as a nation-state catches wind that something is amiss, that actor will disappear like a spooked gazelle. This puts the researchers in a very tight position:

– How long do I lay low to see if I can find additional implants, modules, tools, targets, and/or C2 the actor uses?
– How long can I stay under the radar and observe these actors without them knowing I'm watching?

You have to measure the potential gains from monitoring the actors as they perform their operations, and temper that with the knowledge that they're in the network for some express purpose (usually gathering intelligence and/or obtaining trade secrets), and that each moment you let them keep operating in the network is another moment they're screwing over your client. It's a tough position to be in, having to tell the client to wait so you can observe before you pull the plug for good. As soon as that plug is pulled and remediation efforts are underway, the jig is up, and the actors are gone.

Let’s summarize all of the above, shall we?

1. Cybersecurity is a constant cat and mouse game. Offense informs defense, defense informs offense. Yin and Yang. The world in balance.
2. "Cyber threat intel" has been around for a long time. It's only recently, with the rise of the "fifth domain", that attribution has gotten thrown into the mix and become a Big Deal(tm) and somewhat of a spectacle: countries use attribution of cyber attacks as leverage during geopolitical conflicts and negotiations, big corporations use it to justify cyber insurance payouts and/or excuse security negligence, and IR firms use it as proof that their team is the bee's knees and you should totes hire them (marketing).
3. You can bet almost anything that before the threat intel report is even posted, the nation-state actors already knew and already had plans well under way to burn down the current infrastructure and rebuild it all from scratch in a totally different form.
4. Security researchers who discover nation-state actors in client networks are in a hell of a bind between wanting to observe the actors as long as they can to discover more details of their operation, and shutting the actors down as soon as possible because of their obligations to their clients and/or moral obligations.

RS_101: Penetration Testing Part 2

Hello again!

I promised that I would continue this series of 101-level posts on penetration testing/red teaming, so… here I am. And we're about as done as a half-eaten sandwich. If you want to brush up, the first blog post in this series can be found here.

Don't get too excited.

Today’s lesson will be on the general phases of a penetration test, as well as covering the Pre-Engagement and Reconnaissance (aka “recon”) phases. Some people and organizations will call different parts of a pentest engagement by different names, and/or lump different phases together. Here’s my interpretation:

Pre-Engagement [edited 8/19, per conversation with my betters]
Reconnaissance
Initial Access
Persistence
Lateral Movement/Privilege Escalation
Achieving Goals
Post Engagement Write-up and Reporting

Let's compare the phases I have laid out to another interpretation, like, say, the Penetration Testing Execution Standard (PTES). They have their own belief about what the phases look like:

Pre-Engagement Interactions
Intelligence Gathering
Threat Modeling
Vulnerability Analysis
Exploitation
Post Exploitation
Reporting


With regard to intelligence gathering, threat modeling, and vulnerability analysis, I feel like good pentesters do all of that as part of the recon phase. The sole reason you're gathering information is to figure out how you're going to breach the defenses and/or determine whether there are other findings that, while they may not be exploitable, may be worth noting in your report since they represent some sort of risk. That is why recon is the most important phase of any pentest, and this will NOT be the last time I say this. The only reason I could see to break these out into separate steps would be to split recon into more manageable portions, to reiterate just HOW IMPORTANT RECON IS >>foot stomp<<.

On the other hand, I take the exploitation/post-exploitation phases and break them into initial access, persistence, lateral movement, and achieving goals. I feel it's important to separate these steps out, as they are all important portions of the exploitation/post-exploitation phase of a penetration test that tend to get lumped together. I guess I'm a bit contradictory like that.

Phase 0: Pre-Engagement [Edited 8/19]

So originally, I had a blurb in here about how I kind of consider it crap that pre-engagement tasks are included as pentester responsibilities in the PTES: "In any normal scenario, this should be handled primarily by a sales rep/sales engineer and/or a project manager, who handle the hairy logistics and things that need to be negotiated before a penetration test."

What are said hairy things? Little details like… agreeing on services to perform, rules of engagement, billing, scope, IP ranges, third-party equipment/vendors (e.g. MSSPs) that may or may not be in scope, ensuring that authorization forms of SOME sort are signed by an authoritative entity in the client organization (i.e. the "get out of jail free card"), and finally organizing everything into a coherent contract that is a rational, reasonable solution that makes sense to the pentester as well as serving the client's needs (among other things).

On paper, this is what /should/ be done, but in reality… salespeople exist to make sales. While I still technically think that pre-engagement should NOT be an activity the penetration tester has any sort of primary responsibility over (in a perfect world), the fact of the matter is that if you aren't present, the salespeople will do whatever they can and whatever the customer wants (including increasing the scope to a massive degree and/or selling things that cannot be delivered or make NO SENSE whatsoever) in order to make that sale, get that engagement, and get another client on the books. THEN it will be YOUR job, as the pentester, to deliver. It is in your own best interest to be present during these meetings, both to make sure you can answer the client's questions (it shows good customer support, especially if you practice good soft skills, which you need for social engineering anyway… more on that later) and to ensure that salespeople or clients aren't pulling anything stupid during contract negotiations.

I mean, it makes sense, and I can't believe I was actually against including it initially. I worked at a company that sold network security appliances and operated like this: sales would promise customers the world to get them to buy the security appliance, then deliver grains of sand. When the customers were dazed, confused, and had no idea how to manage or configure the systems… sales would point them towards technical support, and we would be stuck fielding deployment issues for a customer who had no idea what they were doing, only that they were promised it performs X function(s).

The bottom line here is to be present for these meetings and ensure that the terms make sense both to the client you are servicing and to you, the pentester who is going to be performing the work detailed in the contract. If you happen to be an independent pentester, then god help you, because you get to write up the statement of work for the engagement, ensure that it makes sense, ensure that the scope isn't creeping out of control (that is, the client doesn't keep piling things in), that billing is rational, and that the work actually gets performed. Keep an eye on the statement of work.

(special thanks to @viss and @redteamwrangler)

Phase 1: Reconnaissance

If you don't have good recon, you just end up throwing attacks at something hoping to slip between the cracks.

Recon is important. Recon is important. Recon is important. I like turtles. Oh, and RECON IS IMPORTANT. Good recon can make or break an engagement. Your goal in recon is to gather as much information as possible about the organization, its employees, business processes, technologies deployed, and/or security controls in place (physical and/or technical, depending on scope). Afterwards, you are responsible for analyzing this information and making inferences from it in order to identify potential weaknesses that can be exploited to gain initial access. Additionally, as mentioned above, you have the responsibility of noting other potential issues as well. These issues might not net you a shell or get you access to a juicy information dump, but they may possibly represent some sort of risk the client needs to be aware of. Food for thought.

For instance, if breaching physical security is considered in scope for the engagement, then sizing up the building, performing surveillance, RF spectrum analysis, a wireless site survey, and inspection of physical security controls, among other activities, may be things you consider doing, depending on the time you have available. This may, for instance, lead to you identifying a rogue access point with no encryption or poor encryption (e.g. WEP) that could be abused to gain initial access to the client's network. Or perhaps you discover that tailgating (the practice of allowing unauthorized individuals to follow you into secure spaces) is common, allowing you to walk right into the building, set up in a conference room or wiring closet, and literally plug into the client's network directly. You'd think such occurrences are infeasible, but… I've seen and heard of them happening.

Generally speaking, recon falls into two categories: active and passive. Passive recon involves making use of information about your target that is publicly available from a variety of different resources. If you know intelligence community and/or natsec (national security) nerds, you might know passive recon by its other name: OSINT (Open-Source Intelligence). Essentially, any information you can derive from freely available sources, without directly asking the questions yourself or probing your client's network infrastructure, is considered passive recon/OSINT. Think of it as a giant game of "I'M NOT TOUCHING YOOUUUUU".

What are some examples of passive recon/OSINT resources? I did a talk related to this:

If videos aren't your thing, the slide deck and a huge collection of web browser bookmarks to a ton of other resources can be found here (and, as a backup, here). For the most part, the resources I collected are associated with blue team or security operations, meant to help make their lives easier. However, there are choice data sources in there that penetration testers can use as well, such as Shodan, Censys, PunkSpider (currently unavailable due to SSL issues), Hurricane Electric, ipintel, Netcraft, and Robtex.
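As an example of just how little effort this takes, here's a rough sketch using the official shodan Python library to pull a target's exposed services out of Shodan's index without a single packet touching the target. The API key (which needs search privileges) and the organization name are placeholders:

```python
# Rough sketch: passive recon via Shodan's index. No packets ever touch
# the target. The API key and org name are placeholders.
import shodan

api = shodan.Shodan("YOUR_SHODAN_API_KEY")

try:
    results = api.search('org:"Example Corp"')
    print(f"{results['total']} exposed services indexed")
    for match in results["matches"][:10]:
        lines = (match.get("data") or "").splitlines()
        banner = lines[0] if lines else ""
        print(f"{match['ip_str']}:{match['port']}  {banner}")
except shodan.APIError as e:
    print(f"Shodan error: {e}")
```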

Those are just for starters. You could use Google Street View to determine building layout, WiGLE to gather information about nearby wi-fi access points, various job posting boards to learn about technologies deployed in the company, social media (Facebook, Twitter, etc., and especially LinkedIn) to find out more about the client's employees and technologies, the SEC's EDGAR database (if the company is publicly traded) to figure out who is at the director/board level (for social engineering, etc.), and company websites and press releases surrounding new facilities, organization charts, new employees, new projects, preferred vendors, mergers and acquisitions, and so. much. more. You can find mountains of information about different organizations, and most of the time, you never have to send a single packet towards their infrastructure. That is the beauty of passive recon and OSINT: the data is out there and ripe for the taking.

If passive recon is looking at the public information a client exposes to the world, then active recon is the opposite: actively going after information about the client yourself. Visiting their website(s) and analyzing sitemaps, fingerprinting services, actively scanning network ranges owned by the client, visiting physical locations, interacting with employees in the building or entering/leaving it, asking probing questions about different aspects of the business, calling various people or business units in the company to extract information, and so on and so forth. The idea is that you are attempting to answer questions about the client, their employees, and their network that you cannot easily answer otherwise, by finding direct (or in some cases, indirect) ways to ask them on your own.
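The simplest possible active recon is connecting to a service and grabbing its banner yourself. A stdlib-only sketch follows; the target address and port list are placeholders, and since this does send packets at the client, it had better be in scope:

```python
# Rough sketch: banner-grab a handful of common ports directly. Unlike
# passive recon, this touches the target. Address/ports are placeholders.
import socket

TARGET = "192.0.2.10"          # RFC 5737 documentation address
PORTS = [21, 22, 25, 80, 443]

for port in PORTS:
    try:
        with socket.create_connection((TARGET, port), timeout=3) as s:
            try:
                banner = s.recv(1024).decode(errors="replace").strip()
            except socket.timeout:
                # some services (e.g. HTTP) wait for the client to speak first
                banner = "(no banner)"
            print(f"{port}/tcp open   {banner}")
    except OSError:
        print(f"{port}/tcp closed/filtered")
```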

Sometimes you may choose to use tools and frameworks as part of your recon investigations. There's a variety of them out there, almost all of which I have never used, but for the sake of completeness, and because I know most of you are clamoring for toys to play with, I linked to a few that I'm aware of off the top of my head. Always be mindful that, as a penetration tester and network security professional, the tools do not make you a good penetration tester. It is your skill, your curiosity, and your capability to ask questions and draw conclusions that will win you the day and separate you from script kiddies. The recon phase requires you to be a good detective and draw as many conclusions as you can in a limited amount of time. The tools are just icing on the cake.

That’s all I’m gonna cover for now. The next chapter will be Initial Access. Until then, this has been an RS_101 lesson.


RS_101: Penetration Testing Part 1

Yesterday, I was bored.

It had been a while since I had discussed anything useful on social media, so I decided to pick a subject and just brain-dump what I know about it out loud. Last night's subject was penetration testing, red teaming, and adversary emulation. Most people know me as that blue team guy, the one dude who knows some stuff about NSM, some malware analysis tidbits, and maybe where to find the dankest memes, but I do know a thing or two about the offensive side of security. I'm no OSCP, but I know things.

I mean, it's a pretty accurate description.

By far, however, Ben is the better red teamer in our little dynamic duo at RallySec, so I'm guessing that if I did this wrong, he'll be the one to tell me later whilst shaking his head. So, without further ado, let's discuss vulnerability assessments, penetration testing, red teaming, and adversary emulation, because all of these terms are interrelated in some way. I feel it's important to understand the lingo to know what a penetration test (pentest) is and is not, as well as what drives so many security firms to provide them as a service offering.

Pentesting today is typically driven by law (a.k.a. regulatory compliance). Most business verticals utilize information systems that are responsible for processing and/or storing sensitive information, or for controlling sensitive resources. Regulatory compliance is essentially a set of guidelines stating that certain security controls and/or mitigations must be in place, to assure there is at least some sort of token effort towards ensuring the confidentiality, integrity, and/or availability of the sensitive resources and/or data being stored or processed by said information systems.

These guidelines are enforced by an auditor who is usually certified by or associated with the regulatory compliance body. The auditor comes in on a regular basis and goes through a list of items the company has to prove they are doing, or have been doing, to comply with the regulations, and the company provides evidence that they are actually doing so. If a company is NOT in compliance, this usually results in pretty hefty monetary fines and, in some cases, can result in a loss of certification for the information system, meaning that until the company gets their act together, the information system cannot be used for processing sensitive information. There are tons of different regulatory compliance bodies for all sorts of verticals: NERC CIP, PCI DSS, HIPAA, FISMA, SOX, and so on and so forth.

So now that you know what regulatory compliance is, what does it have to do with pentesting? You see, most regulatory compliance doesn't really define what a penetration test actually is, but requires one in some way, shape, or form. I found this (written by the PCI Security Standards Council, no less!). On pages 3 and 4, they go over some of the basic differences between a vulnerability assessment and a penetration test. Be that as it may, most regulatory compliance does NOT differentiate between the two, or if it does, nobody cares.

Most of the time, organizations subjected to these compliance audits are motivated by money and/or the least required effort. Typically this means the cheapest solution, not necessarily the best solution, wins. So most companies will spring for a vulnerability scanner and someone to run it, scan their network, generate a report, and present that as evidence that they have been pentested, and the auditors buy it. Problem solved, checkbox checked.

This results in most security practitioners having a very unfavorable view of compliance, calling it "checkbox security", so called because the auditor comes in, reads off security controls from a list, and checks off items as "evidence" is presented. I've heard stories where an auditor asks to see the organization's firewall, and the IT person kicks a box under their desk: the cardboard box that contains the firewall, which isn't racked, stacked, plugged in, or configured. The auditor checks their box and moves on. If you've ever heard of following the letter of the law as opposed to the spirit of the law, that is what this situation amounts to. This is what leads to companies calling vulnerability assessments penetration tests. "They're basically the same thing, right? Just check the box and move on."

As stated above, a cheap vulnerability assessment is someone throwing Nexpose, Nessus, or OpenVAS (God help you) against your regulated network, generating the PDF report that the vuln scan tool provides, and calling it a day. The scan may be credentialed (that is, some vulnerability scanners will test for additional vulnerabilities if you provide the software with valid network credentials) if you're lucky, but most of the time, they won't bother. A good vulnerability assessment is someone throwing a vuln scanner at your environment (with credentialed scans), actually testing to see whether the vulnerabilities are exploitable, writing the reports themselves, and prioritizing the vulnerabilities in the order they should be remediated (usually according to the risk they present of interrupting operations). The /best/ vulnerability assessments do all of this, plus provide alternative means of remediating a vulnerability aside from "patch your stuff", for organizations with restrictive or very limited change control windows.
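For flavor, here's a rough sketch of the "actually look at what's listening" part of that workflow, using the python-nmap wrapper (which shells out to the nmap binary, so nmap has to be installed). The target range is a placeholder, and you scan only what you're authorized to scan:

```python
# Rough sketch: service/version discovery, the raw material for deciding
# whether a scanner finding is actually exploitable. Range is a placeholder.
import nmap

nm = nmap.PortScanner()
nm.scan("192.0.2.0/28", arguments="-sV --top-ports 100")

for host in nm.all_hosts():
    for proto in nm[host].all_protocols():
        for port, svc in sorted(nm[host][proto].items()):
            # product/version strings are what you pivot on for exploitability
            print(f"{host}:{port} {svc['name']} "
                  f"{svc.get('product', '')} {svc.get('version', '')}".strip())
```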

Pentests can be performed by a single person or by a group of people (red teaming), and they can be entirely remote or incorporate physical security aspects as well (e.g. social engineering and/or defeating physical access controls). The differences between penetration tests and adversary emulation mainly boil down to scope (what the pentester can target vs. what is considered off-limits), the time allotted to achieve the goals set forth in the engagement, the money paid for the engagement (expertise costs money), and how much of a message you want to send about the organization's security (or, in most cases, the lack thereof). The red team is kinda like a casino: the house always wins. You may end up ahead temporarily if the scope and timeframe are narrow enough, but the red team will always win if there is enough time allotted and a big enough incentive. If you don't believe me, look at nation-state units like the NSA's TAO, or the recent supposed Russian infiltration of both the DNC and the Hillary campaign. Nation-state hackers are just hyped-up pentesters with patience, time, and a ton of motivation.

I bet you're expecting some Sun Tzu right now. Here ya go.

Speaking of nation-state hackers, that is the adversary that adversary emulation attempts to mimic: adversaries with no time limit, no scope limitations, and the goal of knowing your network better than your sysadmins do. Most places won't spring for adversary emulation engagements, because the thought of having pentesters run rampant all over business-critical systems with no boundaries whatsoever is horrifying to them. Those systems are their bread and butter, and the thought of them going down to the tune of thousands lost per minute is kinda scary. But here's the thing: the bad guys don't have limits, and what's more, they don't care unless it impacts them.

I'm going to stop here for now. We'll continue this series another time. If there is anything you take away from this, it should be that vulnerability assessments are never penetration tests; however, penetration tests can incorporate most aspects of a vulnerability assessment by their very nature. Now you know the difference, and knowing this isn't even half the battle. See you next time,

DA_667
