
Thursday, May 12, 2011

Proactive Incident Response

A little while ago Harlan Carvey posted on Proactive Incident Response. I've been thinking about this for a while, but have a different perspective on Proactive IR than he does. (I agree with his take on it, I just look at Proactive IR differently.)

Computer Incident Response Teams (CIRTs) are often compared to fire fighters, and the analogy is apt - most of the time CIRTs are fighting fires, whether the fire is a hacked server, a malware outbreak or a targeted phishing campaign. We're often jumping from one problem to the next, determining who got in, how they did it, what damage they caused and how to prevent it in the future. However, is that all CIRTs should be doing?

The CERT Handbook for Computer Incident Response Teams states that CIRTs should offer three different services: reactive, proactive and security quality management services. Reactive services are the fire fighting done on a daily basis. Security quality management services include project and security consulting for other business units; you know, those meetings you get pulled into where they ask you what you think. What about proactive services?

If we look back at actual fire fighters, we see that they don't just spend their time putting out fires. One of their duties is to aid fire prevention through education and fire inspections. In the security world, this is analogous to doing user education, vulnerability scanning and penetration tests. This is what proactive services are. But I believe there is another aspect of proactive services that CIRTs tend to miss.

One of my co-workers has coined a term: hunting trips. This basically boils down to proactively looking around the interwebs for attackers you've seen in the past. Since attackers tend to use the same, or similar, tools and tactics, indicators of their compromises in other organizations appear if you know where to look. You can then use the new indicators you've just found to check for signs of compromise in your network.

Where can you look? Anywhere that information on security analysis can be found. This includes blogs, twitter, forums, online sandboxes, AV signature descriptions, etc. All of these places (and more) have information you can use to tie attackers to new attacks and malware they are using.

Of course, I wouldn't recommend hand-searching each of these places for information. Google is the obvious place to start, but be prepared to get back hundreds of results (at best) that are not of interest to you. I would recommend using the Google Malware Analysis Search, created by those behind the Hooked on Mnemonics Worked for Me blog, that narrows Google's search to 75 different security sites and feeds.

Now, an example to make this concrete. In the last few days there has been an uptick in spammed emails that contain a link to a zip file named order.zip. Within this file is a SpyEye trojan. Analysis of the trojan shows that it drops itself as c:\recycle.bin\recycle.bin.exe (which to my knowledge is not a default location for SpyEye). This location is fairly unusual and makes a good indicator to use on a hunting trip.

Using the Google Malware Analysis custom search to look for "recycle.bin.exe", we come across a ThreatExpert report from March 2011 for the same filename being dropped for a SpyEye trojan. The TE report also shows that it attempts to contact zweor.com for its C&C server. We now have a new indicator to search our network for and to go hunting with.
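Once you have indicators like these, sweeping your own logs for them is easy to automate. As a sketch (the two indicator values are from the example above; the log file name and format are hypothetical placeholders, not anything from a real deployment):

```python
# Sketch: sweep log lines for known indicators of compromise.
# The indicators below come from the SpyEye example; any real hunt
# would load a larger list from a file or database.

INDICATORS = [
    "recycle.bin.exe",   # unusual drop location seen in the sample
    "zweor.com",         # C&C domain from the ThreatExpert report
]

def hunt(log_lines, indicators=INDICATORS):
    """Return (line_number, line, indicator) for every hit."""
    hits = []
    for num, line in enumerate(log_lines, start=1):
        for ioc in indicators:
            if ioc.lower() in line.lower():
                hits.append((num, line.rstrip(), ioc))
    return hits

# Usage (hypothetical proxy log file):
#   with open("proxy.log") as f:
#       for num, line, ioc in hunt(f):
#           print(f"line {num}: {ioc} -> {line}")
```

A simple substring match like this is crude, but it is enough to surface candidates for a closer look.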

This is a very simple scenario, but demonstrates the usefulness of performing information gathering to find additional indicators. I have a feeling most CIRTs are not doing this and would benefit greatly from setting aside time to make sure this is done.

Wednesday, May 12, 2010

Simulating the User Experience: Part 3

In the first two parts of this blog series, I detailed an issue I found where not all of the user environment variables were set for a program run with winexe. This was causing an issue during analysis of some malware since the samples were looking for those variables. As a work-around, a batch script was uploaded to the Windows sandbox and scheduled to run. When the scheduled job ran, all of the environment variables were set and the malware ran as it normally would.

The whole situation got me thinking - are public sandboxes setting all of the environment variables? As was seen, some malware relies on these variables and if they aren't set, the malware won't run. If someone were to use a public sandnet to test malware that relies on these variables and the malware didn't run, they could be under the false impression that the program is benign.

Before I go on I should state that this post is not a knock against public sandboxes. They provide a great service to the security community. I did not do this to find any weaknesses in them to exploit or publish maliciously. My goal here was to determine which sandboxes, if any, miss some variables that may be required for malware to run.

To test this, I wrote a program that would obtain the environment variables and write each one to its own registry key/value pair. Since the public sandboxes report any registry modifications made by the program, I would be able to see all of the environment variables available to the program. This program was then uploaded to a number of different public sandboxes and the results analyzed. The sandboxes I used were Anubis, Comodo, CWSandbox, Joebox, ThreatExpert, BitBlaze and the Norman Sandbox.

In my testing, none of the sandboxes set all 30 of the environment variables I originally saw in my test. BitBlaze set 29; Anubis, Comodo, CWSandbox and Joebox set 28; and the Norman Sandbox only set 16. For some reason, ThreatExpert did not report anything back from my program - this could be a problem with my program or some type of security measure on their part.
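The comparison behind those numbers is just set arithmetic: take the variables a sandbox reported (via the registry keys my program wrote) and subtract them from a baseline. A sketch of that step (the baseline here is a short illustrative list, deliberately not the real 30-variable set, per the note below):

```python
# Sketch: find which baseline environment variables a sandbox failed
# to set.  BASELINE is a short illustrative list only - the actual
# 30-variable baseline is intentionally not published here.

BASELINE = {"APPDATA", "PATH", "SYSTEMROOT", "TEMP", "USERNAME"}

def missing_vars(reported, baseline=BASELINE):
    """Return the baseline variables absent from a sandbox report, sorted."""
    return sorted(baseline - set(reported))
```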

* Note: I will not say which variables were and were not set. That information could be used by malware to determine it was running in one of these sandnets and that is not my purpose.

Given how the malware is executed on these systems, I think that having only 28 or 29 environment variables set is a perfectly normal variation. Therefore, my conclusion is that, with the exception of the Norman Sandbox, the sandnets appear to be setting the variables they should and represent a likely variation of the systems malware would run on.

As for the Norman Sandbox, it sets only a small number of environment variables. That may still be a realistic scenario for some systems, but so few variables being set would concern me, as I don't know whether all malware would run as it normally would. Only further testing can tell.

Saturday, May 8, 2010

Simulating the User Experience: Part 1

Part of malware analysis, especially automated malware analysis, is to simulate the user environment as closely as possible. After all, our goal is to determine how malware behaves when it is run by a user. For the last few months I've worked on an automated malware analysis system which I thought did just that.

Let me explain my automated analysis system. It is similar to the one I described in my Hakin9 articles last year. Basically, I have a host system running Linux that executes an automation script. The automation script starts up a VM, launches some monitoring tools, uploads and executes the malware, records the results and performs cleanup. In all, it takes about 5-7 minutes per sample, depending on the settings I am running. So far it has performed extremely well and cut my analysis time down dramatically.

Imagine my frustration this week when I ran a new Koobface sample in it only to find the malware didn't do anything. It would launch, perform some start-up operations, then exit. No registry modifications, no process injection, no network traffic. However, when I would manually launch it or run it through ThreatExpert, it would run fine.

In looking closer, I found out that the malware was trying to place a copy of itself in the %APPDATA% directory. Since %APPDATA% is an environment variable for the user, it should have been set - or so I thought.

I took a step back and started to examine the method I was using to execute the malware. My "host" system which executes the automation scripts runs Linux. In order to execute the malware in the Windows system, smbclient is used to upload the malware and winexe is used to execute it. After some thought, I came up with a theory that winexe was not setting all of the environment variables when it executed malware. I was right.

It turns out that in a default Windows XP SP3 system, 30 environment variables are set. With the way I was running winexe (--system --interactive=1), only 22 of the variables were set - %APPDATA%, %CLIENTNAME%, %HOMEDRIVE%, %HOMEPATH%, %LOGONSERVER%, %SESSIONNAME%, %USERDOMAIN% and %USERNAME% are missing.

To make sure it wasn't because of the way I was running winexe, I ran a number of tests. Each test consisted of running winexe with different settings. The command that was run was "cmd.exe /c set > outfile". To be fair, I also tested PsExec (from another Windows system). These are the results I found:

winexe

                  no settings   interactive   interactive + system
%APPDATA%              -             -                  -
%CLIENTNAME%           -             -                  -
%HOMEDRIVE%            -             -                  -
%HOMEPATH%             -             -                  -
%LOGONSERVER%          -             -                  -
%SESSIONNAME%          -             -                  -
%USERDOMAIN%           -             -                  -
%USERNAME%             -             -                  -

psexec

                  no settings   interactive   interactive + system
%APPDATA%              X             X                  -
%CLIENTNAME%           X             X                  -
%HOMEDRIVE%            X             X                  -
%HOMEPATH%             X             X                  -
%LOGONSERVER%          X             X                  -
%SESSIONNAME%          X             X                  -
%USERDOMAIN%           X             X                  -
%USERNAME%             X             X                  -

(X = variable was set, - = variable was not set)

It turns out that no matter what options you use, winexe does not set the environment variables above. Note that I also ran winexe with the --runas option and got the same results. PsExec sets all of the environment variables, except when you specify it to run as SYSTEM. This makes sense as most of those variables are used to specify user settings and SYSTEM would not have those.
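Each test run boils down to parsing the outfile from "cmd.exe /c set" and checking which user variables are present. A sketch of that check (the sample data in the usage comment is hypothetical; USER_VARS lists just the variables discussed above):

```python
# Sketch: parse the output of "cmd.exe /c set" (NAME=value lines) and
# report which of the user-related variables are absent.

USER_VARS = ["APPDATA", "CLIENTNAME", "HOMEDRIVE", "HOMEPATH",
             "LOGONSERVER", "SESSIONNAME", "USERDOMAIN", "USERNAME"]

def parse_set_output(text):
    """Turn NAME=value lines into a dict keyed by upper-cased name."""
    env = {}
    for line in text.splitlines():
        if "=" in line:
            name, _, value = line.partition("=")
            env[name.upper()] = value
    return env

def absent_user_vars(text, wanted=USER_VARS):
    """List the wanted variables that do not appear in the set output."""
    env = parse_set_output(text)
    return [v for v in wanted if v not in env]
```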

Obviously, winexe wasn't going to cut it anymore because it wasn't setting a complete user environment which, in turn, was preventing malware from running. So, what to do? Winexe was my only way to remotely execute a program on a Windows system from a Linux system (without modifying the Windows system and installing other programs). To find out what I did, you'll have to stay tuned for part 2! :)

As a side note, if anyone knows of another program similar to winexe, please let me know. Also, if anyone knows of a way to get winexe to run correctly, I'd love to hear it.

Thursday, October 22, 2009

Tracking the Defenders

I've been working hard the last few weeks to get my malware analysis class ready, but something popped up that got me thinking. In the last few days a number of blogs have reported about avtracker.info, a site which is tracking the IP addresses that AV companies use to research malware.

According to the supposed author*, the reason this site is in existence is:
If you DDoS them, then you will lame down the whole AV business, then there won’t be any new detections for the time you cut them from the internet. The IP list is also useful for software that downloads something from the internet, in order to hide it from automatic analyzers like Anubis. You can simply exit the program when the IP matches with one of the AV list – and then your program stays secure from automatic analysis.
I have to admit that I'm not surprised at these reasons, or even that this is happening. In fact, I suspect it's been happening for a long time and this is just the first time a public list has been made.

Think about it - we watch where the attackers are coming from. We have honeypots, block lists, and share information amongst each other - why should we think the attackers are doing any differently?

This does illustrate a good point, however. In my class I teach that you should never allow malware you are analyzing to contact its home servers from your organization. When you do, the attackers can figure out where you are coming from and, in the best case, block your access. In the worst case you would be on the receiving end of a DDoS attack.


* I say supposed because I have no proof one way or another.

Friday, August 7, 2009

Automating Malware Analysis Part 2

I've heard rumors that the latest issue of Hakin9 is on stands now. This issue contains the second part of my article on automating malware analysis and adds memory analysis and sandnet capabilities to the analysis script.

In the script, memory analysis is performed by suspending the virtual machine (as opposed to shutting it down as the first script did). When a VMWare VM is suspended, the memory for the machine is dumped into a file which can then be analyzed. This file is analyzed using the Volatility Framework.

Volatility is an amazing tool which can extract information from Windows XP SP2 & SP3 memory images. The analysis script in the article uses Volatility to extract the process list, network connections, list of loaded DLLs and list of loaded modules from the VM memory. However, Volatility can do so much more that I highly recommend extending what is in the article.

In addition to memory analysis, the article adds sandnet capabilities to the script. In the original script, the VM was set up in host-only networking mode which prevented the malware from communicating to anything over the network. This really limited the analyst in what they could see. For example, if the malware wanted to download additional files from a web server, the analyst would never see it.

To allow network connectivity, and still keep the analyst's network safe from infection, the script uses a tool set called InetSim to create a fake Internet for the malware to interact with. InetSim starts a number of local servers (DNS, HTTP, etc.) and logs any data sent to them. Now, when malware attempts to connect to a web server it will succeed, and the analyst will see what it is attempting to download. I blogged about InetSim and how to install it back in February.

I hope everyone enjoys the article. Please send me any feedback on the article or enhancements to the script. It does not appear that Hakin9 has posted the code listing for it yet, but as soon as they do I'll link to it from here. Of course, feel free to contact me to get the code if you want.

Thursday, May 7, 2009

Automating Malware Analysis article

In the latest Hakin9 issue (3/2009), I have an article on automating malware analysis. The article discusses how one can set up their own malware analysis automation system using VMWare, some analysis tools and two scripts. The article uses a Linux system as the base system and a Windows XP Pro as the guest/analysis OS, but I don't see why one couldn't use Cygwin on Windows for a base system with a few tweaks.

The scripts I created for the article are meant to be used as a base for your own automated analysis system - they are meant to be expanded upon. I encourage others to add other tools and capabilities to the scripts and share them here on the blog. The scripts used are available on Hakin9's site. However, if anyone wants the actual files let me know and I'll send them out.

I should point out that the system and scripts in this article assume you are using VMWare's host-only network mode. This is to prevent malware from accidentally infecting other systems on your network, the Internet, etc. However, since the system is set up in host-only mode, your malware will not be able to communicate with any hosts. The only network traffic you will see is DNS requests and probes to systems that go unanswered.

I encourage others to add this to their automation systems using software such as Truman, fakedns, or InetSim to create a virtual network. Don't want to take the time? Then you'll have to wait for the next issue of Hakin9, where I have part 2 of this article and show how to set this up (along with some other cool things).

I'd love to hear any feedback on the scripts, tools, or the article...including anything you use to expand upon it.

Wednesday, February 11, 2009

InetSim Installation

For a project I'm working on*, I've been looking at network simulation software to use in malware analysis. The most common one out there is Truman, written by Joe Stewart. However, Truman has some shortcomings - the biggest being that it doesn't have an HTTP server and it hasn't been updated since it was released. So I wanted to try a different one, and that led me to InetSim.

InetSim has a number of software packages that need to be installed before it works. For my benefit, and I guess others as well, I'm documenting the process I took to install it on my Gentoo Linux system.
  1. InetSim has the capability to do connection redirection, but some options have to be compiled into the kernel first. Specifically, the Netfilter NQUEUE over NFNETLINK interface (CONFIG_NETFILTER_NETLINK_QUEUE) and IP Userspace queueing via NETLINK (CONFIG_IP_NF_QUEUE) need to be compiled in. I compiled them directly into the kernel, but they could be modules as well.

    Obviously, after re-compiling and installing your kernel (if needed), you should make sure that iptables is installed.

  2. A number of Perl modules need to be installed. Fortunately, most of these are in the Portage repository and can just be emerged:
    # emerge -av perl-Getopt-Long perl-libnet perl-Digest-SHA perl-digest-base perl-Digest-MD5 MIME-Base64 Net-DNS net-server
  3. There were two Perl libraries which were not in Portage that needed to be installed from source. The first was IPC::Shareable, which is available on CPAN. Once downloaded, installation was easy:
    # tar zxvf IPC-Shareable-0.60.tar.gz
    # cd IPC-Shareable-0.60
    # perl Makefile.PL
    # make
    # make test
    # make install
  4. The next required Perl library, Perlipq, took a little longer. This is a library used to interface with the packet queueing on the system for redirection. Initially, it could not find the libipq.h file in the correct location, but a manual edit of the Makefile (shown below) fixed that. Perlipq can be downloaded from CPAN.
    # tar zxvf perlipq-1.25.tar.gz
    # cd perlipq-1.25
    # perl Makefile.PL
    At this point, the Makefile.PL prompts you for the location of the iptables development components. Specifically, it's looking for libipq.h. It doesn't matter what we enter here as the Makefile will not find it in the correct place. Enter some text and let the script finish.

    Once the script is finished, edit the Makefile. On line 145 is the following include line:
    INC = -I
    This is the directory which will find libipq.h. Change it to the following:
    INC = -I/usr/include/libipq
    /usr/include/libipq is where libipq.h should be located. If you are unsure, run 'locate libipq.h' to see where it is. After saving the Makefile, installation can continue.
    # make
    # make install
  5. Optional: If you want to make sure you have all of the necessary Perl modules loaded, run the following Perl script:
    use Getopt::Long;
    use Net::Server;
    use Net::DNS;
    use IO::Socket;
    use IO::Select;
    use IPC::Shareable;
    use Digest::SHA1;
    If there are no failures, you're good to go.

  6. At this point, all of the pre-requisites should be installed and InetSim installation can proceed. The latest version of InetSim at the time of this writing is 1.1. Download it and untar it into a central location - I chose /usr/local.
    # tar zxvf inetsim-1.1.tar.gz
    # mv inetsim-1.1 inetsim
    # cd inetsim
    Note: I renamed the default directory for my own benefit, this is not necessary.

  7. InetSim uses the nobody user to run its servers. The nobody user should exist by default - but you'd better make sure.

  8. A group named inetsim is also required by InetSim to run. This should be created as follows:
    # groupadd inetsim
  9. InetSim comes with a setup.sh script which modifies permissions of all the files as needed.
    # sh setup.sh
  10. If you plan on running InetSim from a script, chances are you will need to modify a small piece of the inetsim program. On line 12 of the inetsim script is the use lib Perl code which tells the script where to find the InetSim modules. In its original form, it is a relative path to the lib directory. It should be changed to an absolute path similar to the following:
    use lib "/usr/local/inetsim/lib";
At this point, InetSim should be installed and ready to run. The default configuration file is located in conf/inetsim.conf and I highly recommend reading and modifying it to fit your environment. However, you should be able to use the default configuration file to test out your installation.
# /usr/local/inetsim/inetsim --session test
A number of messages of servers starting will stream by. If you don't see any errors, you are good to go!

* My new project - thanks ax0n!


Thursday, February 5, 2009

Strings and update

It's been a while since I posted anything, so I wanted to get something up here.

First, in my last post I mentioned how I use the strings utility when analyzing binaries. The utility allows you to view embedded strings within a binary, but by default it only shows ASCII strings. The problem is that Windows binaries usually contain embedded strings encoded in Unicode (UTF-16), which strings will not show by default. To get around this, I was using the Sysinternals strings utility with wine on my Linux system.

However, in a comment craigb stated that you can change the encoding strings looks for with the -e option. Here is a snippet from the strings man page:
-e encoding
--encoding=encoding
    Select the character encoding of the strings that are to be found.
    Possible values for encoding are: s = single-7-bit-byte characters
    (ASCII, ISO 8859, etc., default), S = single-8-bit-byte characters,
    b = 16-bit bigendian, l = 16-bit littleendian, B = 32-bit bigendian,
    L = 32-bit littleendian. Useful for finding wide character strings.
By running strings using different encodings both ASCII and UNICODE strings in a Windows binary can be found. To do so, I whipped up a little Bash script which I now use whenever I want to pull strings from a binary:

#!/bin/bash
# Run strings twice - once for ASCII, once for UTF-16LE - and sort the results.
(strings -a -t x "$1"; strings -a -e l -t x "$1") | sort
The script, which I named mystrings, takes the file to scan as a command line option. It then runs strings against it two times - the first time looking for ASCII strings and the second looking for UNICODE (16-bit little endian actually) strings. The -t x options prints the hex offset of the string within the file. After the strings commands run, they are run through the sort program and displayed.
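The same two-pass idea can be sketched in Python, which makes the two encodings explicit (a minimal re-implementation for illustration, not a replacement for either strings utility; the 4-character minimum matches the strings default):

```python
import re

# Sketch: extract both ASCII and UTF-16LE ("Unicode") strings from a
# binary blob, mimicking the two strings passes in the script above.

ASCII_RE = re.compile(rb"[\x20-\x7e]{4,}")            # printable ASCII runs
UTF16_RE = re.compile(rb"(?:[\x20-\x7e]\x00){4,}")    # printable UTF-16LE runs

def extract_strings(data):
    """Return (offset, text) pairs for ASCII and UTF-16LE strings, sorted by offset."""
    found = []
    for m in ASCII_RE.finditer(data):
        found.append((m.start(), m.group().decode("ascii")))
    for m in UTF16_RE.finditer(data):
        found.append((m.start(), m.group().decode("utf-16-le")))
    return sorted(found)
```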

My concern was that the Linux strings would miss something that the Sysinternals strings would pick up. So, I ran a test where both programs were run against the same file. The output was the same! Woohoo!

In other news, I'd like to announce I got a new job starting at the beginning of the year (which is pretty much the reason I have not been posting). Those who know me know where I went to, so I won't go into details here. However, I've gotten into my groove and should be posting more soon.

Friday, December 12, 2008

Internal Laughs

Most malware that I look at these days is packed, sometimes double-packed, in order to hide what's inside. When samples aren't packed, the strings inside the binary are often encoded or encrypted so a strings program can't see what is going on.

Sometimes, however, if you wish REALLY hard and REALLY believe, you come across a gem like the one I looked at last night. One of my many sources notified me of a piece of malware sitting on a server. After downloading it, one of the first things I did was run the Sysinternals strings* utility against it. I found some interesting things:

C:\Documents and Settings\James\Desktop\MSN Pass Stealer\Stub\Project1.vbp

Hello AV Companies, Please Call Me

Hello AV Companies, Please Call Me Win32.MSNPassSteal.VB Thank You!

It's so nice to see things like this at times. While I'm pretty sure James didn't write this particular piece of malware, he probably did modify the source (MSN password stealer source code is easy to find) and compile it.

James - if you are reading this let me give you some advice. First, learn how to use your compiler and how to turn off the debugging features that are turned on by default. Second, AV companies are not going to name your malware something you want. None of them did.

And finally, if you are going to use a user ID to post the results under, don't make it unique. Our intrepid fellow included the website the stolen credentials would post to as well as the user ID to use. While I'm not 100% sure it's James' ID (which is why I didn't show it), it is very distinctive and can be traced back to a single user.

Then again, James, don't follow my advice. It'll be easier to catch you that way. :)


* Even though I do 99% of my static analysis on Linux, I prefer the Sysinternals strings program because it can grab unicode strings and to my knowledge the Linux strings cannot. It works just great under wine. If anyone knows of a Linux strings program that can grab unicode strings, let me know.

Wednesday, October 15, 2008

Phishing with Malware

I've been pretty busy lately with work and the malware challenge (only 11 days left!) but I figured I'd post something which came across my inbox today. Wachovia has been getting a lot of phishing attempts against it which lead to a page trying to get you to install a security update, which is actually malware. I guess the bad guys decided that Wachovia had had enough and turned their sights on Key Bank.

I received the following email supposedly from Key Bank asking that I update my system now.



Clicking on the link took me to the following page, which is NOT located on Key Bank's website.



If you wait long enough, the page will refresh itself to the executable; clicking on the link makes the page attempt to download and run the malware (with user acceptance) and open another browser window to the actual Key Bank login page. This page IS on Key Bank's website, but note that Key Bank is NOT compromised.



What has happened is that when the user installs the "update", the initial malware downloads another piece which installs itself as a service on the system. This new service then watches for any credentials sent. What happens when it gets one?



This isn't a new method for doing things - it's been around for a while. However, this is the first time I've seen this specific attack (from this group) directed at Key Bank. Trend Micro has a posting about the same attack against a German bank.

Thursday, October 2, 2008

Malware Challenge Contest In Full Swing!

The malware challenge contest began yesterday and from what we can tell it's very popular. According to our logs, we had over 100 downloads of the malware for the challenge from over a dozen countries.

For those who don't know yet, the malware challenge is a contest to analyze a piece of malware and find out what it does. The contest runs from October 1 to October 26 and the results will be presented at the Ohio Information Security Summit. Of course, we have lots of cool prizes to give away!

We have made the contest so that if you are new to malware analysis you'll still have a great shot at winning prizes. We're going to be looking more at the way people analyze the malware than at whether they get the right answers. In other words, if you're unsure about it, still participate. The worst that can happen is you learn something in the process and win a cool prize!

Also, thanks to all who have been helping advertise it! Without you no one would know about the contest.

I look forward to seeing everyone's submission!

Thursday, September 11, 2008

Flux Agent Geographic Distribution

I've been looking into a fast flux botnet for the past day which came in the form of some banking malspam. If you don't know what fast flux networks are, check out the Honeynet Project's Know Your Enemy paper on them - it's one of the best resources out there.

I set up a script to resolve the DNS name of the website which held the malware. The DNS record expired every 1500 seconds (25 minutes), so my script would perform the lookup, wait 25 minutes, perform another lookup, rinse, repeat. I did this for about 24 hours. The purpose was to see where the flux agents for the botnet were residing.
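The collection loop itself is trivial. As a sketch, with the resolver passed in as a function so the accumulation logic is clear (in the real run the resolver was a DNS lookup of the malware site, e.g. via socket.gethostbyname_ex, and the interval matched the 1500-second TTL):

```python
import time

# Sketch: periodically resolve a fast-flux name and accumulate the
# unique flux-agent IPs.  resolve() is any callable returning the list
# of IPs in the current DNS answer; sleep is injectable for testing.

def collect_flux_ips(resolve, rounds, interval=1500, sleep=time.sleep):
    """Run `rounds` lookups `interval` seconds apart; return sorted unique IPs."""
    seen = set()
    for i in range(rounds):
        seen.update(resolve())
        if i < rounds - 1:
            sleep(interval)
    return sorted(seen)
```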

In the end, I had 88 unique IP addresses acting as flux agents residing in 21 different countries.



Interestingly, while the largest number were coming from Romania (18), the second largest group was from Israel (15) and there were no .edu's in the mix. Remember, these are the flux agents, not the members of the botnet.

Friday, September 5, 2008

SEO Code Injection

Gunter Ollmann posted an excellent article explaining SEO Code Injection attacks at http://technicalinfo.net/papers/SEOCodeInjection.html. This is one of the best explanations of the attack I've read. Go read it. NOW!

SEO code injection attacks have been gaining popularity with those evil malware authors as a way to get unsuspecting victims to their attack pages. A few highly publicized attacks earlier this year resulted in a lot of headaches for some major sites. Dancho Danchev has a lot of excellent information on these attacks on his blog.

Tuesday, August 26, 2008

Olympic Travelers Return...Bearing Gifts?

Now that the Olympics are over everyone who was lucky enough to go will be traveling back to home and coming back in to work. Surely they'll be bringing the souvenirs they bought in Beijing - buttons, pins, T-shirts. But what about electronics?

China knows trade and knows an opportunity to increase sales in their country so they obviously did everything they could to ensure tourists could access Chinese markets and purchase their (cheap) goods. Did these include electronics? Absolutely!

While I have no first-hand accounts of this and am speculating, I'm sure many of the recent Olympic visitors toured the Chinese markets, saw great deals on USB watches, digital frames, laptops and other computer accessories, and picked them up. Soon these same people will be bringing their newly-obtained items into their homes and hooking them up to their personal (or work) computers or, if administrators are lucky, they'll be bringing them to work to display (and use) on their desktops.

Anything to worry about? Naw, I'm sure we'll be fine. There's never been any instance of malware coming from Chinese hardware.

If anyone hears about anything like this, let me know please.

Monday, August 18, 2008

Is Free Better?

I'm a geek at heart so I take part in a lot of geek-related activities. One of the ones I've gotten into in the last few years is boardgaming. Not your typical games like Monopoly, Scene-It or Risk (although I love Risk), but euro-games which, IMO, have a lot more strategy in them. It is because of this hobby that I was at a LFGS the other night playing games with the local boardgaming group.

We were playing a game of Arkham Horror and in between turns one of my fellow gamers and I were talking about the laptop he had just bought and was playing with. He said it was mostly set up, but he had to go out and buy the latest AV suite to make sure it was protected. I mentioned that there was free AV software available which, IMO, was just as good as the commercial software. His response was that he had used it before and had liked it, but wanted the assurance he felt when he purchased the AV software. I was a little dumbfounded by his comment.

From his perspective, he felt safer paying $50+ for an AV software suite than using free AV software which, by his own admission, would protect him just as well. I've seen this mentality in the corporate world as well. Corporations would rather shell out large amounts of cash for security suites or devices than use free software that is just as good or better, because they feel safer paying for it. After all, if they are paying for it and it fails, they have someone to sue.

This post isn't meant to start a fight over commercial vs free software. I'm just confused by the perception out there in the corporate world, or in the first case the user world, that paying for something will get you more protection than using free software. I guess I'm just surprised that this point of view is taken by end-users as well.

Has anyone else seen examples of this? Any good stories to share?

Thursday, July 31, 2008

Misc MA Stuff

Been busy but I still wanted to post something quick.

If you didn't know, F-Secure's yearly reverse engineering challenge, Khallenge, is about to start. It works in levels - you download a binary, figure out how to reverse it, and get the password to the next level. I've done it in years past and have gotten to the last level but not beyond. Maybe I'll try this year. It starts August 1, 2008 at midnight (EEST) which should be about 5PM July 31, 2008 EST. It lasts only two days so get out there and start reversing! http://www.khallenge.com/

There are many excellent blogs out there with a lot of great information on reverse engineering and malware analysis. However, I want to call out one on which I consistently find excellent information on new threats and how to perform malware analysis: The Websense Security Labs Blog.

I'll be the first to admit that when I think of Websense I think of content filtering and not malware analysis. (Actually I think of some poor fool who has to visit every site that gets submitted to categorize it.) However, they have some really smart people working for them who do a lot of great malware analysis work. They constantly publish excellent blog entries about different aspects of malware. This is one of the blogs which I will ensure I read whenever they publish something new. Their RSS feed is here and I highly recommend it.

Anyone have any good resources they'd like to mention?

Friday, July 11, 2008

My First Malware Analysis

For some reason I was thinking about one of the first malware analysis/reverse engineering attempts I made. It was about 7 or 8 years ago (wow) and I was looking at a RedHat 6.2 web server. I had been given an account there by a local business in order to make sure the security of the server was up to par.

I tried to recreate this as much as possible to give a visual reference. Since I'm going by memory I'm sure I missed something so please forgive me.

While looking over the server I noticed there was an interesting account in /etc/passwd with a UID of 0 named toor. For those not in the know, a UID of 0 on a *nix box is the super-user account on the system. The games user, which no one should be able to log in as, had a password assigned to it. Soon, I found a directory named "..." in /dev, and within that directory were a number of tools.
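The first of those checks (an extra UID-0 account) is easy to script. Here's a minimal sketch in Python, assuming the standard /etc/passwd field layout; the sample data below is made up to mirror what I found:

```python
# Minimal sketch: flag any UID-0 account besides root in
# /etc/passwd-format data. Fields follow the standard
# name:passwd:UID:GID:gecos:home:shell layout; the sample is invented.

def rogue_root_accounts(passwd_text):
    """Return names of UID-0 accounts other than root."""
    rogues = []
    for line in passwd_text.splitlines():
        if not line or line.startswith("#"):
            continue
        fields = line.split(":")
        if len(fields) >= 3 and fields[2] == "0" and fields[0] != "root":
            rogues.append(fields[0])
    return rogues

sample = (
    "root:x:0:0:root:/root:/bin/bash\n"
    "toor:x:0:0::/:/bin/bash\n"
    "games:ab3xZ9:5:60:games:/usr/games:/bin/sh\n"
)
print(rogue_root_accounts(sample))  # ['toor']
```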


Additionally, while most of the system logs had mysteriously disappeared, those that were left had a number of messages stating the server was in "promiscuous mode". This server had been compromised.


This was really my first attempt at forensics and at the time there were no computer forensic resources available to non-LE folk (or at least I didn't know of any). In other words, looking back, I did a number of things I wouldn't do now.

I copied the tools down to my local computer and started looking them over. The files I remember were:
  • A psyBNC IRC bouncer
  • A sniffer which would scan traffic for credentials and save them to a file. A file was also present which contained a few credentials which had been captured.
  • A number of log cleaning utilities
  • A rootkit.tgz file which contained compiled binaries and not source code.
The rootkit tgz file was the most interesting to me since I had never run across one before. Through some research, I discovered that it was LRKv5 (Linux Rootkit version 5) and downloaded the source code. (You can still download the source code for it today.)

Through reading the files which came with the source I found that the rootkit would backdoor the login process (and a number of other files) such that anyone could log in or become root by knowing a secret, hard-coded password. I had the overwhelming urge to figure out what that password was. The default password of "satori" did not work, so I had to come up with a way to figure it out.

So, I started to reverse the backdoor program in order to determine how to find the password. I still use these techniques when RE'ing malware today.

I ran strings on the program to see if I could pick out the password. Nothing jumped out at me as a potential password, but I did see a number of gcc and glibc version numbers within the binary which told me it had been compiled on a RedHat 6.2 box. In my mind, that meant I could take the source, change the password to something I knew and look for it in the binary. With any luck, I could look in the same place on the rootkit'd server and find the attacker's password.
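For the curious, the core of what strings does can be sketched in a few lines of Python. The blob below is a toy stand-in for the login binary, but the GCC tag is the kind of compiler fingerprint that survives in a binary even when the password doesn't:

```python
import re

def strings(data, min_len=4):
    """Rough equivalent of the Unix strings tool: pull out runs of
    printable ASCII at least min_len bytes long."""
    pattern = rb"[ -~]{%d,}" % min_len
    return [m.group().decode("ascii") for m in re.finditer(pattern, data)]

# Toy blob standing in for a compiled login binary (contents invented).
blob = b"\x7fELF\x01\x01\x00GCC: (GNU) egcs-2.91.66\x00\x02/bin/login\x00"
print(strings(blob))  # ['GCC: (GNU) egcs-2.91.66', '/bin/login']
```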

So that's what I did. I changed the password in the source code to one I knew and compiled the login binary. Due to the way the binary stored the backdoor password, strings did not see it.

I opened up the binary in a hex editor and started searching for my password. Eventually, I was able to find the password and record its offset. In the picture below, the password starts at offset 0x196f and each letter is a few bytes after.


I grabbed the backdoor'd file on the compromised server, threw it in a hex editor and found what looked like it could be the attacker's password. (In the picture below the password is "pebcak".)


To test it, I telnet'd to the compromised box and logged in as a normal user with the backdoor password...successfully! I then su'd using the backdoor password and I was root! Needless to say, I sat there for a few minutes with my eyes wide open, amazed that it had actually worked.
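The offset-matching trick can be sketched like so. This is a toy reconstruction, not the original code: the fixed stride between letters, the buffer contents and both passwords are assumptions for illustration, and it assumes the attacker's password is the same length as the known one (in practice you'd read until a terminator):

```python
# Hedged sketch of the technique: compile a reference binary with a
# password you control, locate the offsets where its letters land,
# then read those same offsets out of the attacker's binary.

def recover_password(reference, target, known):
    """Search `reference` for `known` stored one letter every `stride`
    bytes; on a hit, read the same offsets from `target`."""
    n = len(known)
    for stride in range(1, 9):                 # try small strides
        for start in range(len(reference) - stride * (n - 1)):
            if all(reference[start + i * stride] == known[i]
                   for i in range(n)):
                return bytes(target[start + i * stride]
                             for i in range(n)).decode("latin-1")
    return None

# Toy buffers standing in for the two login binaries:
ref = bytearray(64)
tgt = bytearray(64)
for i, (a, b) in enumerate(zip(b"hunter", b"pebcak")):
    ref[16 + 3 * i] = a                        # our known password
    tgt[16 + 3 * i] = b                        # attacker's password
print(recover_password(bytes(ref), bytes(tgt), b"hunter"))  # pebcak
```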

Would this work now on a modern Linux system with the same type of rootkit? Possibly.

In a few days, I'll post part two of this where I analyze what I did wrong forensically and how it should have been handled.

Wednesday, May 14, 2008

Infected eBay watch

Just got back from the Ohio HTCIA 2008 conference and saw that Dave over at his Securi-D blog posted about an MP3 watch he bought off of eBay from a Chinese seller. When he plugged it into his computer, his AV detected a virus on the watch. Too funny.

Unfortunately, this isn't an isolated case. Within the last year, we've started to see more and more products appear which come infected with malware out of the box, and I see this becoming more of a problem in the future.

Friday, May 2, 2008

Race to Zero Controversy

A week ago I blogged about a new contest called Race to Zero at Defcon. The goal of the contest is to obfuscate malware enough such that when it is uploaded through a portal and scanned with AV there is a zero-percent detection rate. As expected, the AV community is up in arms about this.

My original intent was to play devil's advocate about this contest and talk about the reasons why it is not as bad as the AV vendors are saying. However, Dancho Danchev posted something which says it best. Read that. :)

I still have my own opinions on the contest and how easy it is to obfuscate malware enough to bypass signature-based AV. However, I feel I would probably be beating a dead horse and so am going to forget about the whole thing.

Tuesday, April 29, 2008

Kraken Botnet Infiltration

When the Kraken botnet was "exposed" at the RSA conference this year, a lot of controversy surrounded it within the MA community. Was this really a new botnet? Was it really as big as the speakers said it was? Why weren't samples shared beforehand? And so on.

Despite this controversy, a lot of interesting information has come out about it. One of the most interesting pieces I've read is from two analysts at TippingPoint who infiltrated the Kraken botnet. Yesterday, they posted two blog entries which discuss how they did it - from both a high level and a technical level.

They are very good reads and I recommend reading them.

Kraken Botnet Infiltration (high level)
Owning Kraken Zombies, a Detailed Discussion (technical)