Tuesday, May 14, 2013

Warm-ups and mobility training for BJJ, Judo, and MMA

As a 30-something desk jockey who dabbles in BJJ, Judo, and MMA, I've had my share of injuries over the years. I thought I would share some helpful tips for recovery and injury prevention that I've learned along the way.

Disclaimer: I'm not a doctor, just a random guy on the Internet who has had his fair share of injuries and has been lucky enough to work with some great doctors to find some things that worked for me.

I was prompted to dash off this quick blog post in part to share some information with my friend +Christofer Hoff, a fellow BJJ-practicing geezer and security guy who has recently joined the ranks of those of us with neck/back injuries.

On my side, I've got L3/L4 disc degeneration, and I've had partial rotator cuff tears in each shoulder at different times. I've been lucky enough to avoid shoulder surgery, and by being careful about warm-ups and rehab I'd say I've recovered 85-90% of the range of motion in my shoulders while avoiding further injury.

The typical BJJ / Judo warmup


I notice many, many BJJ practitioners will extensively warm up their neck, shoulders, hips and legs before practice. The typical instructor-led warm-up you see in BJJ or Judo involves things like running, "butt-kicks", "high-knees", side-to-side skipping, pushups, "shrimping" drills, shoulder circles, neck rotations, plus forward and backward rolls.

While these are great overall warm-up exercises to get your blood pumping, they don't focus enough on the thoracic spine or the posterior chain, key parts of the "core" that have to work properly if you want to avoid injury.

Thoracic spine

The thoracic spine or "T spine" is the middle part of the back, which is involved heavily in twisting and trunk rotation, motions which are very important to both striking and grappling. Note that when I talk about "T spine", I'm not just talking about the vertebrae and discs but also the soft tissue in this area, including the paraspinal muscles, traps, rhomboids, and the serratus anterior muscles which help to stabilize the scapula.


While it's pretty common in BJJ to hear about cervical (neck) injuries and lumbar (lower back) injuries, one seldom hears complaints about thoracic (mid-back) injuries. This is because problems in the T spine tend to lead to injuries in other parts of the "kinetic chain", namely the lower back, neck, and shoulders. What follows is my (admittedly amateur) attempt to explain why this happens.

All 3 regions of the spine (cervical, thoracic, and lumbar) are responsible for flexion (bending forward) and extension (bending backward). The T spine is unique because it bears additional responsibility for trunk rotation. If your T spine is limited in its ability to rotate, your body will place rotational stress on the lumbar spine (which isn't really designed to rotate), causing lower back injuries. In these situations, people may focus their efforts on rehabilitating the lumbar region while ignoring the original cause of the injury (poor thoracic mobility).

Likewise, if you sit at a desk most of the day like me, even if you are very active outside the office you will likely end up with poor thoracic mobility, manifesting as slight hunching, rounding of the traps and shoulders, and "closing" of the chest. Oftentimes, those with rotator cuff injuries will be led to focus rehab on the shoulder stabilizers without looking into the fact that poor thoracic mobility may be what led to the shoulder injury in the first place. This is why Michael Boyle lists thoracic spine mobility in the #1 spot on his list of mobility drills that everyone should do, noting:

"The nice thing about t-spine mobility is that almost no one has enough and it's hard to get too much"

Posterior Chain

The posterior chain is the group of muscles on the rear side of the body from the deltoids and traps all the way down through the glutes, hamstrings, and calves.

You hear a lot about knee injuries in BJJ. While some of these knee injuries are inevitably due to awkward take-downs, knee-bars, etc., a fair number of knee injuries in sports are actually caused by poor posterior chain conditioning. Many athletes are "quad dominant", meaning they rely too much on their quad strength without adequately recruiting the glutes and hamstrings -- this is a sure recipe for placing too much stress on the knee, leading to injury.

Doing squats, deadlifts, and barbell glute thrusts with proper form will not only strengthen the posterior chain but also train your body to recruit the proper muscle groups when needed. If you're already doing squats and deadlifts, the barbell glute thrust or glute bridge, popularized by Bret Contreras, is a great addition to your lifting regimen.

I've taken to showing up early for class (along with the other old guys) and adding the following warm-up exercises to my regimen to make sure the glutes, hamstrings, and lower back are warm enough. You can run through this whole series in less than 5 minutes.

  • Simple trunk rotation for 60 seconds, left and right twists with elbows up. Don't force the range of motion.
  • Bird-dogs (opposite arm/leg): 25 each side
  • Bird-dogs (same side arm/leg, takes some getting used to the balance at first), 25 each side
  • 25 deep "air squats" with proper form, make sure you get your quads down to at least parallel with the floor, remember to press up through the heel like with any squat.
  • T spine openers. I love these - cut to the 2:00 mark at this video to see a decent explanation.
  • Walk-outs with yoga pose (repeat entire sequence 3 times). This video shows a variation, but basically you want to stand straight up, then bend over and walk your hands out to a pushup position (taking at least 10 "steps" with your hands), then repeat the sequence shown in the video (bring right foot to right hand, twist right for 10 seconds, twist left for 10, back to pushup position, bring left leg up, twist left for 10, twist right for 10, then back to pushup, then walk your hands back up). Repeat the sequence 3 times.
  • Glute bridge: lie on your back with your knees bent and feet close to your butt. Bridge up, hold, and then lower back down. Yeah, it looks like aerobics class - deal with it :)

Taking it to the next level

These warm-ups will really help to prevent injury and boost your performance. If you really want to have big gains in your mobility and posterior chain performance, I'd make the following 4 recommendations:

  • Hit the foam roller every day. Yeah, I know it hurts at first, but if you stay on top of the foam roller (no pun intended) after a couple weeks you'll find it won't hurt as much. Typical foam rollers have a hard time getting into the T spine area, so I'd recommend the Trigger Point myofascial release package, which includes not only a few different sized rollers but also their "quad balls", which are great at getting into the shoulder blades and T spine. It's not cheap at $129 (down from $189) but I find that I use it all the time -- the fact that the small roller fits into my gym bag means I can take it with me wherever I train.
  • Spend one workout a week focusing exclusively on mobility. Pick up the 3-DVD set "The Encyclopedia of Joint Mobility", a stretching and mobility program by Steve Maxwell. Steve is a BJJ black belt and his exercises are very relevant and focused on martial artists.
  • Find a great soft-tissue specialist who specializes in "active release" therapy. If you are in Southern California, I can recommend the great docs at Back To Function, who helped me rehab my shoulders and avoid rotator cuff surgery. If you've never had soft tissue work done before, don't think of it as a nice therapeutic massage: think of excruciating pain, pouring sweat, and cursing at the doctor the whole time. You'll feel great afterwards (this may be due to the endorphins, I'm not sure) but more importantly you'll stay healthy and avoid injury.
  • Extra credit: If you haven't already, pick up a copy of the "Magnificent Mobility" DVD by Eric Cressey and Mike Robertson. This DVD takes a whole-body approach with great focus on the core and posterior chain. There is also the sequel, "Inside-Out: The Ultimate Upper Body Warmup" which focuses, as you might guess, more on the T spine and upper body. They're both great products.

Monday, April 29, 2013

UK likely to outsmart Obama on cyber security? Think again

In an April 26th article for V3 titled "UK government likely to outsmart Obama on cyber security", +Alastair Stevenson opined:
"While the US [cyber security] spending does dwarf that of the UK, I'm still convinced the British government will get more bang for its buck, thanks mainly to its more measured focus on education and collaboration.
Obama is yet to release the full details about where the US money will go, but given the nation's track record when dealing with new threats to its borders or citizens, it's unlikely much of it will reach the country's education system. "

In Stevenson's cursory analysis of U.S. cyber security spending, I believe he has made a number of mistakes. First, he states that "Barack Obama followed suit" in increasing cyber security spending after the U.K. announced its Cyber Strategy in November of 2011. In fact, Obama's focus on cyber security goes back to at least May of 2009, when the White House published its "Cyberspace Policy Review". This 30-page document focuses almost exclusively on cyber security, summarizing the administration's policy and proposing action plans to improve cyber security across both the public and private sectors. Federal funding for cyber security has been increasing steadily year-over-year according to the plans laid out in the policy review.

Stevenson seems to focus exclusively on the increase in cyber security funding within the U.S. Department of Defense, including the Air Force and DARPA, while ignoring the significant increases in funding for other cabinet-level agencies, including the Department of Justice, the Department of Homeland Security, and the Department of Commerce (which includes NIST). No wonder, then, that Stevenson doubts that "much of [the funding] will reach the country's education system".

In the U.S., the Department of Defense isn't responsible for cyber security education. That job falls more to NIST and DHS. In my blog post last week, I broke down the NIST cyber security spending and provided an overview of NIST's already significant cyber security mission. Both NIST and DHS play significant roles in cyber security education and collaboration - this has recently expanded to include NIST's National Initiative for Cybersecurity Education (NICE) and the DHS's National Initiative for Cybersecurity Careers and Studies (NICCS).

The U.S. is already years ahead of the U.K. when it comes to public-private cyber security coordination and education. What remains to be seen is which efforts (in both countries) end up being worth the investment of taxpayer dollars.

Tuesday, April 23, 2013

United States spending on federal cyber security grows in Obama's new 2014 budget (part 1)

Obama's proposed federal budget for 2014 includes broad cuts to a number of departments and programs, including funding cuts of 34.8% for the Department of Homeland Security, 17.7% for the Department of State, and 8% for the Department of Defense.

Despite these cuts, one area the new budget doesn't skimp on is cyber security. The President has consistently called for increased focus on cyber security across both public and private sectors, declaring "The cyber threat is one of the most serious economic and national security challenges we face as a nation".

This policy is reflected in Obama's 2014 budget, the introduction to which states:
"We must also confront new dangers, like cyber attacks, that threaten our Nation’s infrastructure, businesses, and people. The Budget supports the expansion of Government-wide efforts to counter the full scope of cyber threats, and strengthens our ability to collaborate with State and local governments, our partners overseas, and the private sector to improve our overall cybersecurity."

This blog post series will examine the increases in cyber-security spending across each federal agency in the 2014 budget. We will start with the Department of Commerce.

Department of Commerce

The Department of Commerce will allocate $754M (an increase of $131M from the 2012 enacted level) to the National Institute of Standards and Technology (NIST), a good chunk going towards NIST's cyber security mission:
"This funding will accelerate advances in a variety of important areas, ranging from cybersecurity and smart manufacturing to advanced communications and disaster resilience."

NIST's own 2014 budget request contains more details about their cyber-security spending, including the following increases:


When it comes to R&D and Standards (the first line item above), NIST already has a well-established role. NIST is the main agency responsible for approving cryptographic standards used all over the world, including the Advanced Encryption Standard (AES) and the various secure hashing algorithms we've all come to know and love. Much of the rest of the world takes its cue on approved cryptographic practices from NIST.

In addition to its cryptographic mission, NIST is responsible for developing security standards and policies for government agencies through its use of "Special Publications", including most notably:


I recommend that you browse the complete list of NIST's special publications, as there are some good resources there.

NIST runs the NVD (National Vulnerability Database) and the CSRC (Computer Security Resource Center). More information about NIST's computer security initiatives can be found on the NIST Computer Security Division site.

NIST maintains some technical standards related to security automation and the interoperability of security tools like the ones we develop at Rapid7. This family of related standards includes SCAP (Security Content Automation Protocol), OVAL (Open Vulnerability and Assessment Language), and XCCDF (Extensible Configuration Checklist Description Format).

In the next part of this series, we will look at the Department of Defense's proposed increases in cyber-security spending.

Monday, April 22, 2013

Microsoft's EMET 4.0 - a free enterprise security tool for blocking Windows exploits

Last week Microsoft announced their 4.0 beta release of EMET (Enhanced Mitigation Experience Toolkit). If you are responsible for securing Windows systems, you should definitely be looking at this free tool if you haven't already.

EMET is a toolkit provided by Microsoft to configure security controls on Windows systems, making it more difficult for attackers to successfully launch exploits. EMET doesn't take the place of antivirus or patch management, but it does provide an important set of safeguards against not only existing exploits, but also future 0-day exploits which have yet to be developed or released. Even the best signature-based antivirus programs don't do a good job of protecting against 0-days.

EMET allows administrators to exercise fine-grained control over Windows' built-in security features in Windows 7 and higher, including:



While DEP and ASLR have been supported by Microsoft since Windows XP SP2 and Windows Vista (respectively), one of the main weaknesses of these mitigations is that existing applications need to be recompiled by the developer to "opt in" to these security controls. A great benefit of EMET is that it allows administrators to "force" DEP and ASLR onto existing legacy applications.
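As an aside, you can check whether a given binary actually opted in to these protections at build time by reading the DllCharacteristics flags in its PE header. Here's a minimal sketch of that check using the third-party pefile Python module (the flag values come from the PE/COFF spec); this is just my illustration of the opt-in mechanism, not anything EMET itself requires.

# Minimal sketch: check whether a Windows binary was compiled to opt in to
# DEP (NX_COMPAT) and ASLR (DYNAMIC_BASE) by reading its PE header flags.
# Requires the third-party "pefile" module (pip install pefile).
import sys
import pefile

IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE = 0x0040  # ASLR opt-in
IMAGE_DLLCHARACTERISTICS_NX_COMPAT = 0x0100     # DEP opt-in

def check_mitigations(path):
    # fast_load parses only the headers, which is all we need here
    pe = pefile.PE(path, fast_load=True)
    flags = pe.OPTIONAL_HEADER.DllCharacteristics
    return {
        "aslr": bool(flags & IMAGE_DLLCHARACTERISTICS_DYNAMIC_BASE),
        "dep": bool(flags & IMAGE_DLLCHARACTERISTICS_NX_COMPAT),
    }

if __name__ == "__main__":
    for exe in sys.argv[1:]:
        print(exe, check_mitigations(exe))

EMET's value is precisely that it can force these mitigations onto binaries whose headers show that they never opted in.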

While there are many exploits out there which bypass DEP and ASLR, it's worth noting that the first versions of these exploits are sometimes thwarted by these controls, which buys you some time for either patches or antivirus detection to become available. There are good reasons why the Australian DSD (Defense Signals Directorate) has included DEP and ASLR on its "Top 35 Mitigations" for two years running.

EMET 3.0 and 3.5 introduced the ability to manage EMET via GPO, putting installation and configuration within reach of the enterprise. EMET 4.0 builds on this feature set and includes some very useful new protections, including:

  • SSL certificate pinning - allows mitigation of "man-in-the-middle" attacks by detecting situations where the Root CA for an SSL certificate has changed from the "pinned" value configured in EMET. For example, you can configure EMET to say "There is only a single trusted root CA that should ever be issuing certificates for acme.com, and if I see a certificate for any FQDN ending in .acme.com from a different CA, report this as a potential man-in-the-middle attack." You can pin the CA for entire domains or for individual certificates. EMET 4.0 beta ships with pinned certificates for login.live.com and login.microsoftonline.com, but administrators can add their own. (See the short sketch after this list for the general idea behind pinning.)
  • Enhanced ROP mitigation. There is a never-ending arms race between OS and application developers on the one side and exploit developers on the other side. When a new mitigation technique is developed by Microsoft, clever exploit developers work hard to find ways to bypass the mitigation. In the case of ROP mitigations, EMET 3.5 included some basic ROP mitigations that blocked assembly language "return" calls to memory addresses corresponding to known lists of low-level memory management functions in certain DLLs. This rendered a common exploit technique ineffective. However, exploit developers responded with adjusted techniques to bypass EMET's ROP mitigations, such as returning into the memory management code a few bytes beyond the function prologue. I don't have enough time or space to do this fascinating topic justice, but you can read a good overview of ROP exploit techniques here.

    EMET 4.0 blocks some of these mitigation bypass techniques, which puts the onus back on exploit developers in this cat-and-mouse game. I'm looking forward to the first white paper detailing how the new mitigations can be bypassed.
  • Improved logging. With the new and improved EMET notifier agent, EMET 4.0 does a much better job at logging events to the Windows event log. This opens up the possibility of using a centralized event log monitoring system such as Microsoft System Center Operations Manager (SCOM) 2012 to act as an enterprise-wide early detection system for exploit attempts. Imagine having instantaneous alerting any time EMET blocked an attack on any Windows system across the enterprise.

    One could also use a free tool like event-log-to-syslog to gather event logs centrally, or even something like Splunk (with universal forwarders) if you don't mind breaking the bank.

    Another benefit of centrally logging and analyzing EMET events is that it will give you early warning on EMET compatibility problems. Past versions of EMET have been known to cause problems with certain applications; for example, I found that the LastPass extension for Chrome needed certain EMET settings disabled in order to run. If you haven't used EMET before in your enterprise, you will definitely want to introduce EMET in a limited rollout before going enterprise-wide via GPO. Note any programs requiring exemption or settings customization and make sure those settings are reflected in the GPO policy.
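Going back to the certificate pinning feature above: here is a rough, hypothetical sketch of the general idea in Python, using only the standard library. It records the issuer CA you expect for a host and warns when a live connection presents a certificate from a different issuer. Note that EMET pins the root CA of the chain and enforces this inside Internet Explorer; this simplified example only looks at the leaf certificate's immediate issuer, so treat it as an illustration of the concept rather than an equivalent control. The pinned value shown is made up for illustration.

# Rough illustration of certificate pinning (not how EMET implements it):
# compare the issuer of the certificate a server presents against a value
# recorded earlier, and flag a mismatch as a possible man-in-the-middle.
import socket
import ssl

# Hypothetical pinned issuer value, for illustration only.
PINNED_ISSUERS = {"login.live.com": "Example Trusted CA"}

def issuer_common_name(host, port=443):
    ctx = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with ctx.wrap_socket(sock, server_hostname=host) as tls:
            cert = tls.getpeercert()
    # 'issuer' is a sequence of RDNs; flatten it into a simple dict
    issuer = dict(rdn[0] for rdn in cert["issuer"])
    return issuer.get("commonName", "")

def check_pin(host):
    expected = PINNED_ISSUERS.get(host)
    observed = issuer_common_name(host)
    if expected and observed != expected:
        print("WARNING: %s issuer changed (%s, expected %s)" % (host, observed, expected))
    else:
        print("%s issuer: %s" % (host, observed))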
Update 4/22/2013: +gaten guess was nice enough to point out that ASLR was introduced in Vista, not Windows XP so I clarified my comments above. Many of these controls work poorly or not at all in XP, so it goes without saying that if you're running Windows XP anywhere in your enterprise, EMET should be the least of your worries. :)

Tuesday, April 9, 2013

JavaScript static analysis and syntax validation with Google Closure compiler

A while back, I found myself needing automated syntax checking and static analysis for JavaScript code. I found JSLint to be less-than-ideal, even though it has a Maven plugin: it's hard to tune, tends to be noisy, and does a poor job of grokking the syntax and constructs of 3rd-party libraries such as jQuery and YUI.

I played around with a few different options and ultimately settled on the Google Closure Compiler. This is a JavaScript minifier/optimizer/compiler which also does (of necessity) a good job at syntax validation and error checking.

I ended up writing an Apache Ant task to invoke Closure on parts of the project source tree, excluding known third-party libraries from analysis. I'm reasonably happy with the results, although I'm sure one day this should be migrated to a Grunt task using the Grunt Closure Compiler plugin.

Without further ado, here is the Ant task definition. Hopefully the in-line comments make the usage pretty clear -- let me know if you find this useful or if you have any questions! Note that this task definition assumes that the Closure compiler JAR file is located in the ant lib directory.

<!--
   <timed-audit-task> is a reusable macro to run a specific audit tool against the source code,
   storing its output under @{audit-output-dir}. The output directory will be deleted and recreated
   prior to running the audit tool. Some basic logging and timing statements are added for clarity
   and profiling.

   To skip the running of a specific tool, the person invoking ant can specify -Daudit-skip-<toolname>,
   where <toolname> is the value passed in to the @{audit-task-name} parameter. By convention this should
   be the short name of the tool, for example "findbugs", "checkstyle", or "pmd". Thus, invoking ant with
   -Daudit-skip-findbugs=1 will cause the findbugs audit tool to be skipped. The actual value of the defined
   property is irrelevant.
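
   Note: the <if>/<then>/<else> and <stopwatch> tasks used below come from the ant-contrib
   library, which must be available on Ant's classpath for this macro to work.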
-->

<macrodef name="timed-audit-task">
   <attribute name="audit-task-name"/>
   <attribute name="audit-output-dir"/>
   <element name="auditTaskBody"/>
   <sequential>
      <if>
         <not><isset property="audit-skip-@{audit-task-name}"/></not>
         <then>
            <echo>Running @{audit-task-name} on ${ant.project.name}</echo>
            <stopwatch name="audit.timer.@{audit-task-name}" action="start"/>
            <delete dir="@{audit-output-dir}"/>
            <mkdir dir="@{audit-output-dir}"/>
            <auditTaskBody/>
            <echo>Finished running @{audit-task-name} on ${ant.project.name}, see @{audit-output-dir}</echo>
            <stopwatch name="audit.timer.@{audit-task-name}" action="total"/>
         </then>
         <else>
            <echo>Skipping @{audit-task-name} because the "audit-skip-@{audit-task-name}" property is set</echo>
         </else>
      </if>
   </sequential>
</macrodef>


<target name="audit-js" description="Runs source code auditing tools for JavaScript">
        <!--
            JavaScript auditing with Google closure
            Warnings flags are defined at http://code.google.com/p/closure-compiler/wiki/Warnings
            The order of parsing JS files is somewhat important here. You should try to pass
            filenames in the rough order they would be parsed by a browser visiting your site or application.

            We use Google's provided "extern" annotated version of jQuery 1.9 to provide additional
            strict error checking. See https://code.google.com/p/closure-compiler/source/browse/contrib/externs/jquery-1.9.js for more information.

            Best place to find documentation on command-line options for the compiler is
            https://code.google.com/p/closure-compiler/source/browse/src/com/google/javascript/jscomp/CommandLineRunner.java
          -->
         <sequential>
            <!-- Exclude known 3rd party scripts from analysis by filename or path -->
            <selector id="audit.js.3rdparty.selector">
               <or>
                  <filename name="scripts/jquery/jquery-*.js"/>
                  <filename name="scripts/yui/**/*.js"/>
               </or>
            </selector>

            <path id="audit.js.3rdparty.path">
               <fileset dir="${source.dir}/html/scripts">
                  <selector refid="audit.js.3rdparty.selector"/>
               </fileset>
            </path>

            <!-- Include our JS source code to be analyzed, excluding 3rd-party stuff defined above -->
            <path id="audit.js.source.path">
               <fileset dir="${source.dir}/html/scripts">
                  <and>
                     <filename name="**/*.js"/>
                     <not>
                        <selector refid="audit.js.3rdparty.selector"/>
                     </not>
                  </and>
               </fileset>
            </path>

            <!-- Pipe compiler output to /dev/null in a platform-sensitive way -->
            <condition property="dev.null" value="NUL" else="/dev/null">
               <os family="windows"/>
            </condition>

            <pathconvert pathsep=" " property="closure.args" refid="audit.js.source.path"/>
            <timed-audit-task audit-task-name="closure-js" audit-output-dir="${closure.dir}">
               <auditTaskBody>
                  <java jar="${ant.home}/lib/closure-compiler.jar" output="${dev.null}" error="${closure.dir}/closure-warnings.txt" fork="true">
                     <arg value="--jscomp_warning=checkRegExp"/>
                     <arg value="--jscomp_off=checkTypes"/>
                     <arg value="--jscomp_off=nonStandardJsDocs"/>
                     <arg value="--jscomp_warning=internetExplorerChecks"/>
                     <arg value="--jscomp_warning=invalidCasts"/>
                     <arg value="--jscomp_off=externsValidation"/>
                     <arg value="--process_jquery_primitives"/>
                     <arg value="--js"/>
                     <arg line="${closure.args}"/>
                  </java>
               </auditTaskBody>
            </timed-audit-task>
         </sequential>
</target>

Mitch McConnell's leaked strategy recording has staff crying "bugged"

Today, Mother Jones magazine features a leaked recording of Senate Minority Leader Mitch McConnell's private strategy session, in which his insiders discuss ways to beat Ashley Judd should she run for his seat. Aside from the Nixonian element to the story, and the frankness with which they discussed using Ashley's mental health issues against her in a campaign, there is an interesting security-related angle here.

The meeting consisted only of a small group of loyal insiders, and all deny having recorded the session. Sen. McConnell's office is asking the FBI to investigate: "Obviously a recording device of some kind was placed in Senator McConnell’s campaign office without consent."

Joan Goodchild writes in her blog for CSO Magazine: "McConnell’s campaign all adamantly deny any involvement in the recording of the sessions (and its consequential leaking). They are working with the FBI on an investigation into how it happened. But my gut tells me they need to look inward again and evaluate the people they consider allies and consider who may be a potential insider threat."

Erik Wemple of the Washington Post blogs: "Let’s just roll with the bug scenario. For the sake of some legal entertainment, suppose that someone, in the wee hours of Feb. 2, broke into this secure location via ductwork, expertly fiddled with ceiling tiles and planted a pea-size device in one of the room’s grommets."

I wonder whether anyone is considering a simpler scenario. Did the room contain a Polycom conference phone system? Back in 2012, my colleague HD Moore published his research into conference phone vulnerabilities, which was covered widely by the mainstream press. There were several scenarios which allowed anyone with a telephone or web browser to silently call into the Polycom and use it to listen to the room and to watch video (for camera-enabled systems) without anyone knowing. It's not too much of a stretch to think that something similar could have happened here -- it's certainly more plausible than a Watergate-style bugging of a secure room in the Capitol.

Wednesday, February 22, 2012

Interview with Dan Guido, co-founder of Trail of Bits (re-post from Feb 2012)


Having been involved in information security for the last 15 years, I've had the opportunity to meet some really amazing people and to view the industry through their eyes. I've been toying with the idea of a blog series where I interview some of the people I've had the privilege to meet, and hopefully to introduce some of my readers to the awesome research that's being done. I've decided to call the blog series "Dangerous Things", which is a reference to the fact that so many of us in this industry are fascinated by things that go boom - whether that be fast cars, martial arts, firearms, or exploits (or all of the above).

The first installment of Dangerous Things is an interview with +Dan Guido, co-founder of a new venture named Trail of Bits. I was originally introduced to Dan by my business partner Tas, Rapid7's co-founder and CTO. The Malware Exposure features of Nexpose 5.0 were inspired in part by conversations between Tas and Dan at Security Confab last year. We invited Dan to speak at the 2011 UNITED Summit in San Francisco to present his research and to participate in panels - if you haven't read Dan's research yet, I recommend you check it out!

Thanks and enjoy the interview. If you have any follow-up questions for Dan Guido, please post them here and I will do my best to hound Dan until he answers them.

Disclaimer: The opinions represented on this blog belong to the people being interviewed, and are not necessarily representative of my views or the views of Rapid7.

CL: Can you tell us a little bit about yourself and what you do for a living?

DG: I'm a co-founder of Trail of Bits, an intelligence-driven security firm that helps organizations make better strategic defense decisions. People tend to find our approach to security unique because we guide organizations to identify and respond to current attacks, rather than broadly address software vulnerabilities. At Trail of Bits, we acknowledge that it’s not possible to fix all of an organization’s vulnerabilities and that it’s far more productive to focus on minimizing the effectiveness of attacks in a tangible and measurable way instead. We came to this belief after research that +Dino Dai Zovi and I published last year and after ten years of watching Alex Sotirov obliterate any technology he stared at long enough.

In addition to my work at Trail of Bits, I'm a Hacker in Residence at NYU-Poly where I oversee student research and teach classes in Application Security and Vulnerability Analysis, the two capstone courses in the NYU-Poly security program.

CL: Your presentation "An Intelligence-Driven Approach to Malware" was very well received at the UNITED Conference.  Can you summarize your  thesis for people who haven't seen the presentation?

DG: "Attackers are resource-constrained too."

In order to scale, different classes of threats become dependent on specific processes towards achieving their goals. In this presentation, I investigated the workflows that mass malware groups have built out and identified points in them that were the most vulnerable to disruption. With this information, I can evaluate the precise impact or lack of impact of any given security decision an organization can make. This is fundamentally different from the due diligence approach towards security that most organizations use today and it presents actual metrics for the effectiveness of a security program. Throughout the presentation, I introduced, defined, and walked through re-usable analytical techniques that viewers could use to re-apply this method of thinking to other threats they care about. If you’re interested in seeing this presentation, all of our research can be found on our website at www.trailofbits.com.

CL: Does the principle of limited resources apply equally well to criminal groups, hacktivists, and nation states? Without rehashing the entire presentation, can you give a couple examples from each type of threat about where the likely resource constraints are?

DG: All the groups that you mentioned have limited resources and have to struggle with problems of scale. These constraints influence the techniques, tactics, and procedures (TTPs) that each group adopts and what they are able to achieve.

Let’s take hacktivist groups as an example because we’ve seen many public examples of their work over the last year, and this lets us more easily draw conclusions about them. Thinking about hacktivists is particularly fun since they uniquely have no path to financial remuneration for their attacks. In this way, they’re closely related to open-source projects and are similarly constrained by the people they can attract and the talent those people have. Since anyone can contribute to an open-source project it would seem like their resources are infinite, but in reality we know they have the arduous task of convincing people to work for them for free. This is why open-source software hasn’t really destroyed everything else, and it’s one of the reasons why hacktivist groups don’t display the level of operational sophistication that, say, APT or financial crime groups do.

In terms of their TTPs, this plays out in a variety of different ways with hacktivist groups. Since hacktivist groups don’t care as much about exactly what they’re breaking into, they’re perfectly fine trying low overhead attacks like SQL injection and whenever they get in, they get in. It’s much less targeted and more opportunistic because they’re not going after anything specific, they’re going after the entire organization (see: http://attrition.org/security/rant/sony_aka_sownage.html). Hacktivist groups don’t need as much sophistication as APT or other attack groups, but they can be equally effective in the embarrassment they cause. For their goals this is enough and the attacks that require setting up infrastructure and time to develop and deploy, like client-side attack campaigns, zero-day browser exploits, or long-term persistence, are infrequently used by hacktivist threats. If you understand this, then you can understand how to best defend against them.

CL: Separating the hype about APT, there are still organizations out there which really DO face determined, sophisticated adversaries.  Does your research offer any advice beyond the mass malware threat?

DG: Of course! In fact, I've used these techniques primarily against APT groups and wanted to demonstrate that the same approach was applicable to other threats as well. I chose mass malware because they're an easy threat to beat up on, data about their operations is widely available, and they represent an impasse that an organization needs to overcome before it can effectively take on a more advanced adversary like APT.

Many of these techniques are incredibly well documented by Eric M. Hutchins et al in a paper they released late last year and I would recommend anyone interested in this topic to read their paper after seeing my presentation.

Intelligence-Driven Computer Network Defense Informed by Analysis of Adversary Campaigns and Intrusion Kill Chains.
http://papers.rohanamin.com/wp-content/uploads/papers.rohanamin.com/2011/08/iciw2011.pdf

CL: Sometimes it seems like IT security pros are losing hope.  I talk to many practitioners who are discouraged and overwhelmed.  The task of patch management, which seems simple on the surface, is a huge amount of work for most organizations.  You are one of the rare few that seems to offer some hope for defenders - is that true and if so, what is your message for the doom and gloom crowd?

DG: I think the key problem here is that people are unable to connect the work that they're doing with the impact it has on attackers. That leads them to focus enormous amounts of resources on, for example, patching everything in sight rather than going about it more strategically or defending against their adversaries in an entirely different and less resource-intensive way. If you're able to connect the product of your work with its impact, you're going to be considerably more effective at describing and selling security initiatives within your organization. The way to do that is with attacker data and with attacker scenarios derived from actual events.

CL: In your work, where do you see IT security organizations wasting the most time and money?

DG: Organizations waste the most time and money scrambling to identify the defenses that work and the defenses that don’t after they’ve experienced their first major incident. The common wisdom of doing your best at security and then hoping that nothing happens is misleading and counterproductive. Instead, organizations should identify attacks that their peers have experienced and talk to an expert to simulate these on paper or for real. The output of such an exercise is a set of actionable metrics about the effectiveness and gaps of processes and technologies inside your organization. It’s only after you know what you’re defending against that you can make educated decisions about security and avoid the marketing-driven snake oil and ineffective best practices that are so prevalent in this industry.

Said another way, people designing defenses who have never had them evaluated by a good attacker is kind of like learning one of those martial arts that look more like dancing than fighting. They look nice, but when you get into a fight your dance kungfu isn’t going to help you not get your ass kicked.

CL: You have mentioned that organizations should view the desktop environment as one big public attack surface. Why is that and how do you see trends like VDI and application sandboxing affecting this?

DG: We built DMZs to tightly contain and control access to our firms' most critical assets, but over the last 10 years those critical assets have come to reside on systems that are just as directly connected to the internet but without such protections: our desktop computers. As an attacker, these systems are effortless to interact with from outside your perimeter and can be precisely targeted to specific individuals when necessary. E-mail, social networking, even targeted advertisements on general purpose websites allow attackers to directly interact with the systems that hold your critical assets today.

Sandboxing and Virtual Desktop Infrastructure (VDI) are steps in the right direction and allow organizations to gradually separate their assets from applications that are under direct attacker control, with the eventual goal of total separation. I would caution that many VDI solutions are built for ease of management rather than security and don’t typically isolate applications well enough to prevent attacks. Organizations should ask for reviews of such technologies by teams capable of performing real exploitation before relying on them as a barrier. This is in contrast to application sandboxes like those in Google Chrome and Adobe Reader X, which have undergone such study and have had a demonstrable mitigating effect on exploitation in the wild.

It’s unfortunate that widespread implementation of these technologies will take quite a long time. In the near to medium term, sandboxing will have little to no impact on attackers' abilities to perform successful attacks. For instance, APT did not suddenly disappear or even radically change strategies when Adobe Reader X was released, and it won’t when the next application is sandboxed either. It’s going to take years.

CL: What would be your top 3 practical and easy recommendations for typical IT security organizations?

DG:

  1. Have a major compromise, but make sure it happens on paper. Most organizations are completely unaware of how an actual attack looks until one happens to them, but one of the most surprising developments of last year was the wealth of compromises that occurred in full view of the public. At this point, we know almost every step of how Google, RSA, Sony, and others were compromised: what if these same attackers had targeted your company instead?
  2. Acknowledge that mass malware continues to abuse old vulnerabilities with unreliable exploit code and enable simple memory protections like Data Execution Prevention (DEP) and consider the Enhanced Mitigation Experience Toolkit (EMET) or Google Chrome Frame if you are unable to switch to a newer browser altogether. In particular, I like that these technologies provide an effective toggle switch that organizations can use when the risk of exploitation by a zero-day or other vulnerability is increased. Long term, organizations should understand that the needs of their intranet browser and their internet browser are diverging and they need a more secure, constantly updated browser that moves at “internet speed” to browse the web safely.
  3. Identify common methods that malware uses to persist and detect them with standard desktop management or security tools. In actual use, there are a very small number of locations in the registry and on-disk that malware like to start from and they typically have a variety of other characteristics that give them away: they’re usually unsigned, impersonating Microsoft binaries in the wrong locations, composed of random filenames, or are only found on one or two hosts in an organization at a time.
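To make that last point a bit more concrete, here is a minimal sketch of my own (not Dan's tooling) that enumerates the classic "Run" registry keys, one of the small set of startup locations he mentions; a real sweep would also check digital signatures, file paths, and how many hosts in the fleet share each entry. The key names below are the well-known Windows autorun locations.

# Minimal sketch: enumerate common "Run" registry keys, startup locations
# that malware frequently abuses for persistence. Windows-only; uses the
# Python standard library "winreg" module.
import winreg

RUN_KEYS = [
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
    (winreg.HKEY_LOCAL_MACHINE, r"SOFTWARE\Microsoft\Windows\CurrentVersion\RunOnce"),
    (winreg.HKEY_CURRENT_USER, r"SOFTWARE\Microsoft\Windows\CurrentVersion\Run"),
]

def list_autoruns():
    """Yield (subkey, value_name, command) for each autorun entry found."""
    for hive, subkey in RUN_KEYS:
        try:
            key = winreg.OpenKey(hive, subkey)
        except OSError:
            continue  # key does not exist on this system
        index = 0
        while True:
            try:
                name, command, _type = winreg.EnumValue(key, index)
            except OSError:
                break  # no more values under this key
            yield subkey, name, command
            index += 1

if __name__ == "__main__":
    for subkey, name, command in list_autoruns():
        # In a real sweep, also verify the binary's signature and location.
        print("%s | %s -> %s" % (subkey, name, command))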


CL: We've chatted about some of your research-in-progress. Can you tell us what you're working on next, and what are some of your predictions for attack patterns over the next 18 months?

DG: Coming up in April, I'll be publishing an intelligence-driven analysis of mobile phone exploits with +Mike Arpaia, an ex-coworker of mine from +iSEC Partners. We’re comprehensively mapping out all of the exploits that exist for Android and iOS and we'll be using that data to chart a course for the future of mobile malware. This presentation should accurately describe the attacks that enterprises are likely to experience if they roll out these devices to their workforce as well as evaluate the effectiveness of current mobile security products.

As for predictions over the next 18 months, I think it’s important to break them out by threat groups so here are three of mine for APT, mass malware, and hacktivist groups:


  1. As adoption of application sandboxes increases, APT groups will demonstrate the ability to break out of them. They will not target specific application sandbox implementations, rather they will rely on more generic Windows kernel exploits. If this happens, it validates that the architecture and implementation of the sandbox in a targeted application is effective as the exploit developer found it easier to avoid rather than attack head-on.
  2. Mass malware groups will continue their operations unchanged. Their ability to exploit client-side vulnerabilities will decline due to increased adoption of modern web browsers and their lack of capability to perform any customized exploit development. They will lack the necessary skills to take advantage of kernel exploits inadvertently disclosed in APT attacks. Instead, they will continue to innovate on social engineering and this will become their dominant infection vector over the long term.
  3. The continued, seemingly at-will success of hacktivist groups in compromising organizations will cause companies to question their investments in security and demand greater justification regarding the effectiveness of proposed products and services. Hacktivists will continue to use SQL injection, remote file includes, and other remotely accessible web application flaws as their primary attack vector. Hacktivist groups will avoid the use of client-side attack campaigns, like those used by APT groups, since they are too slow to gain access and require significantly more investment in infrastructure and coordination.


CL: You also teach security at NYU-Poly.  What recommendations do you have for students wanting to enter the security industry?

DG: Learn to code, develop big projects, and do at least some of them in C. Participate in capture the flag competitions and war games. Disregard social media and what the security industry thinks is cool right now. Code repositories, research projects, and CTF standings are the certifications you want to have. Attend local security events, meet people in-person, and demonstrate your competence to them.

I tried to collect all my thoughts about this for my students on my course website: http://pentest.cryptocity.net/careers

CL: What security-related studies or papers have you found surprising and illuminating over the last year? How has your thinking changed?

DG: I thought 2011 was a great year for security research, particularly for our understanding of attacker behavior and capabilities. If you're going to read any papers or presentations from the last year, I would recommend the following:


  1. Eric Hutchins et al's paper on Intelligence-Driven Security. This paper describes the approach of Intelligence-Driven Defense, defines a common language for practitioners to use, and walks the reader through a scenario with an attack that was previously observed. http://papers.rohanamin.com/wp-content/uploads/papers.rohanamin.com/2011/08/iciw2011.pdf
  2. UCSD's Click Trajectories. Replace "value chain" with "kill chain" and you might get deja vu after reading the last paper. The UCSD folks use different language, but they further demonstrated the effectiveness of an intelligence-driven approach against professional spammers -- a threat that most accept as a reality of existing on the internet these days. http://cseweb.ucsd.edu/~savage/papers/Oakland11.pdf
  3. Dino Dai Zovi's Attacker Math 101. In this presentation, Dino describes a body of analytical techniques for charting the future of exploitation of a given platform. He describes a language for modeling the actions and incentives of an exploit developer and then applies those techniques to one of the most active exploitation communities today: iOS jail-breakers. http://blog.trailofbits.com/2011/08/09/attacker-math-101/
  4. Microsoft's Mitigating Software Vulnerabilities. Writing exploits is incredibly hard. Many people don't understand this and simply equate knowledge of a vulnerability with the development of an exploit. Remember that your attackers are resource-constrained too. This paper will help you understand just the level of resources that one needs to exert to overcome modern memory protections like those offered by Microsoft's developer toolchain. http://www.microsoft.com/download/en/details.aspx?displaylang=en&id=26788


CL: What sorts of research and data would you like to see coming from the industry?

DG: I think more people need to learn from and understand what attackers are using against them. In fact, I'd like to see it made an informal requirement for publishing in the future that papers describing defensive techniques evaluate their effectiveness against observed attacker behavior and capabilities. We have several good standards now. Authors can readily use intrusion kill chains, courses of action, or value chains, define their adversary, and provide a meaningful estimation of the utility of their contribution.

CL: What should we expect from Trail of Bits in the next few months? Are you going to be at RSA?

DG: We’re entirely focused on product development right now so, if everything goes according to plan, the answer to your first question should be “not much.” Dino and I will be speaking more about our research and general approach to security at RSA, Blackhat EU, and SOURCE Boston, but we’re waiting to announce anything related to our product offerings until we’re convinced that they’re ready to ship. If you’re interested in hearing more about what our company is up to, you can sign up to our mailing list on the Trail of Bits website and you’ll be the first to know when we have something to release or when we’re looking for beta testers.
