Wednesday, October 29, 2014

Automating nmap and ndiff with PowerShell

I like to perform reconnaissance against my network border on a regular basis to identify new services and new hosts.  With nmap, some bash, and cron, this is pretty easy to do.  Unfortunately, the system I have outside our network to do this with is a Windows system with Kali running in a VM.  I say unfortunately because, for some scans, running nmap from Kali inside a VM is significantly slower than running nmap natively on Windows.  I know you're thinking "But why don't you just get a *nix box in the cloud?"  You deal with what you have available to you.  Larger organizations sometimes come with less flexibility.

I wanted to be able to scan a given set of hosts and compare the results of each scan against the previous scan.  The ndiff utility included with nmap was designed to do just that.  I also wanted to get those results emailed to me.  I also wanted to encrypt these results before emailing them, because reasons.  I hadn't spent a lot of time with PowerShell, so this seemed like an opportunity to both get something done and learn PowerShell.  The result is scandiff, a partial wrapper for nmap, ndiff, and 7Zip written in PowerShell.

Scandiff performs a discovery scan against the supplied targets, looking for open ports to identify live hosts.  These hosts are then re-scanned using a larger set of ports and service version probing (-sV).  I chose this two-step method to decrease overall scan times when scanning large IP spaces.  If the IP space being scanned is behind a firewall that blocks pings and does not return RSTs, nmap will determine all ports are "open|filtered" and mark every host as up.  The downside to this approach is that it is possible to miss hosts that have open ports not included in the probed port list.  It is advisable to review your infrastructure and include all common ports for your network's services in your discovery probe list.
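From a shell, the two-step approach might look something like the sketch below.  The port lists, timing, and file names here are illustrative examples, not the options scandiff actually uses:

```shell
# Illustrative two-phase scan; ports and options are examples only.
TARGETS="192.0.2.0/24"
DISCOVERY_PORTS="21,22,25,80,443,445,3389,8080"

# Phase 1: discovery scan against a small port list to find live hosts.
discovery_cmd="nmap -Pn -n -p ${DISCOVERY_PORTS} --open -oG discovery.gnmap ${TARGETS}"

# Phase 2: re-scan only the live hosts (extracted into livehosts.txt)
# with a larger port range and service version probing.
version_cmd="nmap -Pn -sV -p 1-10000 -oX scan.xml -iL livehosts.txt"

echo "$discovery_cmd"
echo "$version_cmd"
```

The first pass keeps the sweep of a large address space fast; the expensive -sV probing only runs against hosts that answered on the discovery ports.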

Scandiff retains the previous output in an XML file, $basename-prev.xml.  Once the scan has completed, ndiff is run to compare the current scan results against the previous results to generate a -diff.txt file.
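The diff-and-rotate step amounts to a few lines of shell.  This sketch runs in a scratch directory with a stand-in XML file, and only invokes ndiff if it happens to be installed:

```shell
# Self-contained sketch of the diff-and-rotate step.  The XML written here
# is a stand-in for real nmap output; filenames follow the $basename convention.
workdir=$(mktemp -d); cd "$workdir"
basename="border"
printf '<nmaprun/>\n' > "${basename}.xml"   # stand-in for the current scan output

if [ -f "${basename}-prev.xml" ]; then
    # Compare the previous results against the current scan into a diff file.
    command -v ndiff >/dev/null && \
        ndiff "${basename}-prev.xml" "${basename}.xml" > "${basename}-diff.txt"
fi

# Rotate: the current scan becomes the baseline for the next run.
cp "${basename}.xml" "${basename}-prev.xml"
```

On the first run there is no previous file to diff against, so the script just establishes the baseline.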

Once the ndiff operation has completed, the XML files, nmap log, and diff file are added to an encrypted 7Zip archive.  This archive is added as an attachment to a System.Net.Mail.MailMessage email object.  The results are then emailed to a designated recipient using the PowerShell Net.Mail.SMTPClient.
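The archive step boils down to a single 7-Zip invocation.  The file names and password handling below are examples only (scandiff takes the password as a parameter rather than hard-coding it):

```shell
# Illustrative 7-Zip step; names and password handling are examples only.
basename="border"
password="changeme"   # in practice this comes from a script parameter

# -p sets the archive password; -mhe=on also encrypts the archive's file names.
zip_cmd="7z a -p${password} -mhe=on ${basename}-results.7z ${basename}.xml ${basename}-prev.xml ${basename}-diff.txt"
echo "$zip_cmd"
```

Enabling header encryption (-mhe=on) matters here: without it, anyone intercepting the email attachment can still see which files the archive contains.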

I used Gmail to relay the output to my inbox.  This section can be modified to relay through a different email server, and authentication can be disabled if not required by your mail server.  Email output can be disabled completely by specifying "-email 0" or "-email $False".

I'm still playing around with the nmap options to find the right balance of accuracy and thoroughness vs. performance.  The script also has very little error handling.  I will be working on adding this in the coming weeks.  I am also looking at being able to specify the majority of configuration options from a configuration file to make the command line operation less unwieldy.

Scandiff can be downloaded from GitHub here: https://github.com/hardwaterhacker/scandiff.  Let me know if you find it useful.  I'd welcome any input on how to improve the script.

Friday, October 3, 2014

Automating Simple Website Reconnaissance Measures a.k.a An Ounce of Prevention

As a pen tester on an internal security team, I'm responsible for periodically sweeping our networks to identify web servers and determine whether those websites present risks such as information disclosures, default credentials, or insufficient access and authorization measures.  (aside: yes, change control would make sure this never happened again.  In a world filled with unicorns farting rainbows.)  On anything other than a small network, this can quickly become a time-consuming task.  It didn't take long to decide to automate as much of this process as possible.

Since our vulnerability scanners are regularly touching all parts of our network, they are a good choice as a source for a list of hostnames, IPs, and ports for any service speaking HTTP or HTTPS. After massaging the data in Excel I have a list of URLs to test using either the FQDN or IP and the port number.

Once I have this list, typically several thousand different URLs to test, I need to quickly eliminate the systems I don't need or want to inspect.  To do this, I wrote a simple Python utility that uses urllib2 to pull in the page associated with each URL and analyze it through a simple string.find() loop.  I built a dictionary of common sites that I know I won't need to inspect, such as
  • Sites with the corporate authentication mechanisms presented
  • Default Apache / IIS web pages
  • Default Tomcat or JBoss install
  • KVMs and SAN switch interfaces
  • etc.
When the utility finds that a URL matches something in the dictionary, it records this in the output file.  The resulting report contains far fewer sites needing inspection than the original list.
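The classification loop is just page content matched against a dictionary of known signatures.  A rough shell equivalent of the Python string.find() approach looks like this; the signature strings are invented examples, not the ones my utility actually uses:

```shell
# Rough shell equivalent of the classifier; signature strings are examples only.
classify() {
    page="$1"
    case "$page" in
        *"Apache2 Ubuntu Default Page"*|*"IIS Windows Server"*) echo "default web server page" ;;
        *"Apache Tomcat"*|*"JBoss"*)                            echo "default Tomcat/JBoss" ;;
        *"Corporate Single Sign-On"*)                           echo "corporate auth portal" ;;
        *)                                                      echo "UNCLASSIFIED" ;;
    esac
}

classify "<title>Apache2 Ubuntu Default Page</title>"   # matches a known signature
classify "<title>Acme Widget Admin</title>"             # falls through for manual review
```

Anything that falls through to UNCLASSIFIED is exactly the set of sites worth a human's attention.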

The biggest return isn't in time saved, however. The real value comes when the utility isn't able to classify the site. These sites often contain information that should have been secured, or authentication mechanisms using weak/default credentials.  I can easily filter the output into additional tasks, such as testing for default Tomcat or JBoss credentials, etc.

In the past, I would take these unclassified results and dump them into a spreadsheet and then review them individually. Any site that would attempt to perform a Javascript redirect or refresh to a different landing page when '/' was requested would fool my utility, as urllib2 is unable to follow the redirect. This led to manually reviewing a lot of sites that would otherwise be easily identified if my utility could see the landing page.

A while back I experimented with being able to take a screenshot of each site to quickly eliminate these sites visually. Unfortunately, at the time, every utility I investigated was also stumped by the redirect. AJAX-heavy sites also fooled my utility as well as the other utilities I tested.

This summer Netflix released a tool they wrote - Sketchy - which they use to assist in their IR processes. Sketchy addresses the same issues I was experiencing with Javascript and AJAX sites. After reading about Sketchy, I knew that I wanted to try applying this to my processes to see if I could get better results and be more efficient.

Feeling inspired by all the incredible talks presented at DerbyCon, I decided it was time to start putting Sketchy to work.  I blogged earlier about my experience setting up Sketchy; you can read about it here.

While Sketchy does have an API, a quick and dirty shell script worked for my needs.  The script supports grabbing a screenshot (sketch), grabbing the DOM as text (scrape), or grabbing the rendered HTML (html).  For sites Sketchy is unable to connect to, my script makes a log entry and does not produce an artifact.  I can quickly view the resulting images and determine whether a site warrants further inspection.
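A quick-and-dirty wrapper of that sort might look like the sketch below.  The endpoint path and query parameters here are hypothetical placeholders, not Sketchy's documented API; check your own install for the real routes.  This version only builds and prints the requests rather than sending them:

```shell
# Hypothetical wrapper around a Sketchy instance; the /api/v1.0/capture path
# and the url/type parameters are made-up placeholders for illustration.
SKETCHY="http://127.0.0.1:8000"
mode="sketch"              # one of: sketch, scrape, html

while read -r url; do
    request="curl -s ${SKETCHY}/api/v1.0/capture?url=${url}&type=${mode}"
    echo "$request"        # a real script would run this and log connection failures
done <<'EOF'
http://192.0.2.10/
http://192.0.2.11:8080/
EOF
```

Feeding it the unclassified URL list produces one artifact per site, which can then be reviewed as a batch.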

Examples

Screenshots produced by the script (images omitted here):
  • Linksys router login page
  • Twitter login page

Conclusion

Reviewing websites is essential to identifying information disclosures, weak authentication mechanisms, and new web apps or devices that may have been deployed without your knowledge. Regularly reviewing these websites for this information prevents audit findings and helps keep your network and data safe from unauthorized access.

Sketchy was easy to install, and it didn't take long to whip up a functioning system.  With a few hours of setup, scripting, and testing, I'm able to automate what used to be several hours of work. In the end, I'm free to get more done, and much more of the proverbial low-hanging fruit is picked.

If you're using different tools to achieve the same end, I'd love to hear about it. Leave me a comment or reach out to me on Twitter.

-Mike


Thursday, October 2, 2014

Lessons learned from setting up Sketchy

Have you ever wanted to pass a URL off to a program and have it return a screenshot of that site?  This is incredibly useful for things like DFIR, allowing you to get an initial look at a page without having to poke at it with a potentially vulnerable browser.  I've used various tools to try to take screenshots of sites that either have a Javascript-based redirect at initial load or are AJAX-based and these tools always failed me.

Earlier this summer, I heard about a suite of tools released by Netflix.  This suite included Sketchy, a conglomeration of Python, Flask, PhantomJS, gunicorn, Celery, and Redis.  Sketchy uses lazy rendering within PhantomJS to allow it to take screenshots of AJAX-heavy sites.

Based on the writeup by the Netflix crew, I was hopeful this would solve the problem once and for all.  I finally had time this week to sit down and play with Sketchy.  There were a few bumps along the road, so I decided to put down what I did here in case anybody else is interested in getting Sketchy working.

Installation

I installed Sketchy in my Kali Linux VM.  The installation was straightforward.  Use git to clone the Sketchy repository to your machine.  I chose to put mine in /opt/sketchy.  With an up-to-date Kali installation, simply running ubuntu_install.sh will pull down all the necessary dependencies and build your environment for you.  If you don't want to trust a script to do this for you, the dependencies are clearly noted in the manual install section of the wiki.

User Setup

The Sketchy wiki doesn't discuss this, but if you're going to run Sketchy as root, celery will complain about being started as UID 0.  To get around this, I created a standard privilege user named sketchy, a group named sketchy, and made the sketchy user a member of the sketchy group.  I then changed ownership of the Sketchy install directory and all files and subdirectories to the sketchy user and sketchy group.

Database Setup (and the first hiccup)

By default, Sketchy creates a SQLite database to store information.  While the wiki recommends a different RDBMS such as MySQL, for low-volume purposes you should be fine using the default database.  This was where I ran into a problem that would confound me for some time.

If you use Kali, you're probably running most of your commands as root, including setting up the database with `python manage.py create_db`.  If you proceed down this path and follow the Test Startup instructions, everything will work fine; however, you will get an Internal Server Error if you try to follow the Production startup instructions.

In my case, production startup failed to render images because the database was set up by root but gunicorn was running under a reduced-privilege user (to be discussed later).

To get around this, I created a tmp directory within my Sketchy install as /opt/sketchy/tmp.  In order for manage.py to create the DB in this directory, I modified config-default.py to point to the new location:
# Database setup
SQLALCHEMY_DATABASE_URI = 'sqlite:////opt/sketchy/tmp/sketchy.db'
If you set up the database as root, you'll want to change ownership of the new database to sketchy:sketchy to allow gunicorn to update it.
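Collected as a checklist, the database relocation steps above look like this (shown here as echoed commands rather than executed ones; paths match my install, so adjust for yours):

```shell
# Checklist of the database setup steps described above (echoed, not executed).
sketchy_home="/opt/sketchy"
echo "mkdir -p ${sketchy_home}/tmp"
echo "python manage.py create_db"                            # run from ${sketchy_home}
echo "chown sketchy:sketchy ${sketchy_home}/tmp/sketchy.db"  # let gunicorn write to it
```

The chown is the step that matters: the create_db run as root leaves a database gunicorn's reduced-privilege user can't update.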

Configuration

The Sketchy wiki indicates you should remove ":8000" from the HOST variable in config-default.py.  I did not find it necessary to remove this to allow Sketchy to work properly.

Make sure to update your PHANTOMJS location according to your local system.  The setup script detected I had phantomjs installed, however config-default.py was looking for it in /usr/local/bin instead of /usr/bin.

supervisord.ini

There isn't too much to change in this file.  For [supervisord], you may want to store your log files in /var/log.  Changing the loglevel to debug will help you identify issues.  In both the [program:celeryd] and [program:gunicorn] sections, set the directory to your Sketchy installation directory and change the user to the account you created to run the daemons (I used sketchy).  I also changed the address gunicorn binds to 127.0.0.1 to prevent it from listening on external network interfaces.
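Put together, the relevant supervisord.ini changes look roughly like this.  The paths, log location, and bind address are examples matching the choices described above, not a complete config:

```ini
[supervisord]
logfile = /var/log/sketchy-supervisord.log
loglevel = debug                 ; debug helps while troubleshooting

[program:celeryd]
directory = /opt/sketchy
user = sketchy

[program:gunicorn]
directory = /opt/sketchy
user = sketchy
; add a -b 127.0.0.1:8000 argument to the command= line so gunicorn
; binds to loopback only and is not exposed on external interfaces
```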

Conclusion

Sketchy definitely has a place in my toolkit.  I haven't found anything that will reliably screenshot pages that use Javascript to redirect to another page or things that are AJAX-based.  Sketchy fits the bill perfectly for that use case.  The performance isn't bad, but it's not a speed demon either.  I haven't spent any time looking into optimizing the various components used to see if I can get better performance.

I'll be posting another blog soon about how I use Sketchy as an internal penetration tester to reduce the amount of time I spend performing website reconnaissance and looking for information disclosures.


Wednesday, August 27, 2014

Last year I gave a talk on developing a working incident response program for small IT organizations at the ND IT Symposium.

Here's the summary:

-------------------
Your organization will be breached.  It's a matter of when, not if.  How you respond may be the difference between recovering and closing your doors.

This talk is designed to help small businesses or businesses with small IT organizations to develop a viable incident response program.
-------------------

The slides can be downloaded from slideshare here: http://www.slideshare.net/MikeSaunders4/you-will-be-breached-38429510


-MS

Saturday, August 23, 2014

BSidesMSP Presentation - Problems With Parameters

Today was a first for me - my first presentation at a true security conference. The BSidesMSP crew put together a great conference with a lot of great volunteers. I'd like to thank both the crew and volunteers that put this together as well as the great sponsors that made this possible!  Plans are already underway for BSidesMSP 2015.  Follow @BSidesMSP or check out https://www.bsidesmsp.org/ for more details.

As promised, the slides from this presentation have been uploaded to Slideshare.  Feel free to reach out with any questions or comments.  The slides can be downloaded here: http://www.slideshare.net/MikeSaunders4/problems-with-parameters-b-sidesmsp

Monday, August 4, 2014

Beyond the alert box

Today I came across an excellent slide deck on getting a shell through XSS by Hans-Michael Varbaek. Varbaek's presentation steps through the process of finding an XSS vulnerability, bypassing CSRF defenses, and ultimately uploading a PHP shell to a web server. After reading this deck, I started thinking about the previous "pen test" reports from external testers I've reviewed over the past several years. Many of these reports included XSS vulnerabilities. Invariably, the proof of the vulnerability was a screenshot showing <script>alert(1)</script> being supplied to some input parameter and the resulting popup box, or a close semblance of that. Of those reports, only one included a CSRF vulnerability, even though our internal analysis showed that other CSRF vulnerabilities existed.

To be fair, this is not just a symptom of bad reporting by external testers. Anybody who has performed pen testing and subsequently provided a report to a customer has experienced the response "What's the risk? How can this be exploited?"

As somebody who performs web app pen tests and writes reports, I've been guilty of providing the same example in the past. When I discussed XSS findings with developers, they often didn't grasp the potential impact of XSS. Even if CSRF wasn't possible, apparent defacement of a page through a reflected XSS certainly was. When I stopped providing a screenshot of an alert box and started providing a screenshot or POC URL for a page defacement, I immediately received a different reaction from both developers and their management. They could see the impact. Sometimes, of course, time limitations prevent me from developing a robust proof of concept. Whenever I can, I provide these kinds of examples in my reports.
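As an illustration, the difference between the two kinds of proof is roughly the difference between these two POC URLs.  The site and parameter name are hypothetical, and in practice the payload would be URL-encoded before sending:

```shell
# Hypothetical reflected-XSS POC URLs; the site and parameter are made up.
# The classic proof that convinces no one:
alert_poc='https://app.example.com/search?q=<script>alert(1)</script>'
# A defacement-style proof that shows impact:
deface_poc='https://app.example.com/search?q=<script>document.body.innerHTML="<h1>This site has been defaced</h1>"</script>'
echo "$alert_poc"
echo "$deface_poc"
```

Both payloads demonstrate exactly the same underlying flaw, but only the second one makes a developer or manager feel it.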

It would seem that we as pen testers too often stop at the XSS. Varbaek's presentation should serve as a rallying call for consumers of pen tests to demand better reports from pen testers. Those of us writing reports should accept the challenge to go beyond the alert box. Vivek Ramachandran of PentesterAcademy deserves a big nod for developing the Javascript for Pentesters course, which provides some great examples of how to go beyond the alert box with XSS.

Wednesday, June 18, 2014

Combining sqlmap and Burp for the win



We recently had a vulnerability assessment performed by a vendor who reported a possible SQL injection in a web application. I reviewed their results and agreed SQLi was likely due to the application returning a SQL error message under certain conditions.

Through manual testing, I was able to confirm the application was vulnerable to SQLi. When I attempted to use sqlmap to automate enumeration and dumping of the database, however, sqlmap would initially report the parameter was likely vulnerable, but then later report SQLi was not possible. I tried setting the --level and --risk parameters to various settings as well as manually specifying the database type – Microsoft SQL in this case.

While reviewing these results, I realized sqlmap was unable to successfully identify and exploit the SQLi because of how the vulnerable parameter was constructed and how sqlmap operates. The application took several parameters, all enclosed in brackets, a la:
     url.asp?x=[value]&y=[value]

Sqlmap operates by appending SQLi code to the end of the parameter or replacing the parameter entirely. For example, sqlmap may send the following string to determine if parameter x was vulnerable to a time-based blind SQLi:
     url.asp?x=[value' WAITFOR DELAY '0:0:5'--&y=[value]

Manual testing had shown that a single quote appended to the parameter value within the brackets would result in a SQL error message. No matter what I tried, however, sqlmap was unable to successfully exploit the SQLi and extract the database. As I mentioned earlier, the application took several parameters enclosed in brackets. In reviewing my logs, I found that sqlmap wasn’t terminating the parameter with a bracket, as shown above. This resulted in the application throwing an error because the parameter wasn’t in the right format, thus the request never made it to the SQL server, and this is why sqlmap wasn’t able to exploit the SQLi.

I spent some time researching how to append a ] to sqlmap’s queries, but I couldn’t find any solution. Back to the drawing board. I’ve used the Match and Replace functionality in Burp’s proxy to manipulate cookie values to my needs in the past. What if this same functionality could be used to insert a ] at the end of sqlmap’s attack string?

Since I knew the URL for this particular part of the application always took the same parameters in the same order, this would be easy to accomplish. Using the Match and Replace function, I created a new rule using the Request header type. Since I was testing the x parameter, I needed to append the ] to sqlmap's input before the y parameter. To do this, I matched on &y= and set the replace value to ]&y=.

A test request from the browser through Burp showed the rule was now inserting the ] before the y parameter, allowing sqlmap to work correctly.

The original request and the modified request (screenshots omitted).
I fired up sqlmap one more time. This time, sqlmap was able to properly detect and exploit the SQLi and extract the database banner and records.

back-end DBMS operating system: Windows 2003 Service Pack 2
back-end DBMS: Microsoft SQL Server 2005
banner:
---
Microsoft SQL Server 2005 - 9.00.5057.00 (Intel X86)
        Mar 25 2011 13:50:04
        Copyright (c) 1988-2005 Microsoft Corporation
        Enterprise Edition on Windows NT 5.2 (Build 3790: Service Pack 2)
---


This was the solution I found to my problem, although I’m sure there are other ways it could have been solved. If you know of a way to accomplish this using only sqlmap, I’d love to hear about it.

--------------------------------------------------
UPDATE

Bernardo Damele (@inquisb) pointed out that --suffix would have accomplished the same effect.  After reviewing, I agree.  Protip: --suffix doesn't show up when using -h to see the options; you need to use -hh to see all options.
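For reference, a sketch of what that --suffix invocation might look like against the bracketed-parameter URL from this post.  The host, parameter values, and the --dbms choice are placeholders; this builds and prints the command rather than running it:

```shell
# Illustrative sqlmap invocation using --suffix to close the bracket;
# host and parameter values are placeholders.
sqlmap_cmd="sqlmap -u 'http://target.example.com/url.asp?x=[value]&y=[value]' -p x --suffix=']' --dbms=mssql"
echo "$sqlmap_cmd"
```

With --suffix, sqlmap appends the ] after each injected payload itself, which removes the need for the Burp Match and Replace rule entirely.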