[EAS] Password Cracking Basics
Alex Hartman
goober at goobe.net
Fri Feb 15 14:11:22 CST 2013
See inline....
> 1. I should have said as part of a "defense in depth" approach that you provide people with username/password combinations that give them the lowest level of access (user in windows). I know this is a hassle, because it means that the admin's routine password/username will give them low-level access, but that's a misreading: the admin could have the highest level of access routinely. If you are the admin, that means that you could be the way "in."
This is exactly why user-level restriction is done on automation systems.
For instance, a user might have access to three programs while you remove
system functions, disallow a command prompt, etc. It may be a hassle, but
it keeps the kids with no attention span focused on the automation instead
of Solitaire or Facebook on the automation system; simple stuff like this
is an easy "in" for viruses and such. I have been in stations where the
automation system was infected with malware, browser hijackers, 30
toolbars, etc., because of the access levels the "admin" allowed to the
system. It's not only the purpose-specific program to protect; it's the
entire machine and possibly the network. Most systems I deal with also use
the Windows credentials to log into the automation system, so it's a
little difficult to completely separate them.
> 2. Computers running Windows Update have had about zero code-based intrusions in the last decade. For the most part, vectors in recent years have been Java, Flash, etc. Nobody's perfect, but Stuxnet got in because the Iranians don't use legit Windows copies and therefore can't use Windows Update. Sony's PlayStation breach was caused by a never-updated Apache server running Unix, and users were actually running SQL queries on their PS devices, which told even script kiddies all they needed to know to exploit three different levels of insecurity. Stupid is as stupid does.
...Except this one:
http://www.cvedetails.com/cve/CVE-2011-5046/
and this one:
http://www.cvedetails.com/cve/CVE-2011-0096/
and this one:
http://www.cvedetails.com/cve/CVE-2010-4701/
and this one:
http://www.cvedetails.com/cve/CVE-2010-4398/
and the 173 they found last year, and the 70 they've found this year
already...(source: http://www.cvedetails.com/vendor/26/Microsoft.html)
You get the idea... Windows is, as you said, a stitched-up system these
days; most of the zero-day vulnerabilities come from previous "stitchings"
that are antiquated and no longer in use. There's a delay from the time an
exploit is found to the point where MS pushes the patch, and that window
is when the OS is most susceptible to attack. Pretty basic 101 stuff, and
a bit of common sense: it takes time to write a patch without breaking the
rest of whatever is exploitable.
> 3. We have to assume that the hackers read the same sources of information as we do, including this list. Ideally, all security systems should be exposed to the wild, it's just the keys and username/passwords we need to protect.
Absolutely they do, and then some. As they say, you can make a bigger
lock, but the other guy will just make an even bigger hammer.
> 4. Beware of systems that require you to change the password, and then don't even check to make sure that the new one doesn't contain the old one. The code to do this test is about 5 lines, and it can be done with (properly) hashed (semi-encrypted) passwords, not just plain text.
>
Exactly why I think it'd be important to maintain this in any future
firmware for "critical" devices. If they're running a *nix OS, a simple
stored PAM hash is usually good enough.
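The substring check from point 4 really is only a few lines, even with
hashed storage: at change time the new password is still in plaintext, so
you can hash each substring of it and compare against the stored hash of
the old one. A minimal Python sketch, assuming a PBKDF2-style hash stands
in for whatever the firmware actually stores (the function names and the
low iteration count are mine, for illustration only):

```python
import hashlib

def hash_password(password: str, salt: bytes) -> bytes:
    """PBKDF2 stand-in for the stored hash (e.g. a PAM hash).
    Iteration count kept low here purely for illustration."""
    return hashlib.pbkdf2_hmac("sha256", password.encode(), salt, 10_000)

def new_contains_old(new_plain: str, old_hash: bytes, salt: bytes,
                     min_len: int = 4) -> bool:
    """At change time the NEW password is available in plaintext, so we
    can hash every substring of it (down to min_len chars) and compare
    each against the stored hash of the OLD password."""
    n = len(new_plain)
    for i in range(n):
        for j in range(i + min_len, n + 1):
            if hash_password(new_plain[i:j], salt) == old_hash:
                return True
    return False
```

Only a substring equal to the entire old password can match its hash, which
is exactly the "new contains old" test, with no plaintext ever stored.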
> 5. I specified a metric based on how many accesses a particular machine can take in within a certain period of time. That's the important metric, not the number of computers that can be directed to attack that target. If your device can only take 10 login attempts per second and there are 100 computers trying that often against the target, legitimate access will be degraded and the hacking attempt will be easily disclosed. I should also point out that many hosting providers (mine costs $59 per year) will detect this activity and filter out the hackers. Usually.
Correct. And most hosting providers do this to protect their
investment first, not always yours. In the case of an EAS box or STL
however, that's not a hosted service, which is why it has to rest in
the hands of the station management/IT/Engineer staff. It's about
protecting the house, not the city. ;)
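Point 5's attempts-per-second metric is essentially a rate limit, and a
token bucket is the classic way to enforce one on the box itself. A
generic sketch (the class name and numbers are mine, not from any EAS
device):

```python
import time

class TokenBucket:
    """Token-bucket limiter: the box only serves `rate` login attempts
    per second (with a small burst allowance), so a flood both throttles
    the attacker and degrades visibly, disclosing the attempt."""

    def __init__(self, rate: float, burst: int):
        self.rate = rate              # tokens added per second
        self.capacity = burst         # maximum stored tokens
        self.tokens = float(burst)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1.0:
            self.tokens -= 1.0
            return True
        return False
```

With rate=10 and burst=10, a burst of 100 rapid attempts gets all but
about ten rejected, which is exactly the degradation the point describes.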
> 6. " When your code gets to a few thousand lines, and upgrades happen without cleaning code, then you have problems. Programmers can be (and typically are) lazy. They'll just comment out or leave in older code while just adding new stuff and these are where the exploits come in play." This is so wrong, on so many levels. I don't know how to clean code, my codebase is over a million lines. Upgrades just replace the entire code base, and should be sandboxed. (I know it's hard to do this in *nix.) However, if the specification permits the user to use default passwords, then it's not the programmer's fault they didn't code that. I'm not necessarily totally-up-to-date on programming and coding practices, but even then, everything in this sounds very much like an ignorant slur. Or, somebody has extensive knowledge of programming practices in a device or organization and has reverse-engineered code. Which would be a violation of a license agreement ...
Okay, this is where I take issue with modern programmers. Stitching
modules together is fine if you read the other guy's code and verify it;
the exploit can easily come from their module through no fault of your
own. Most programs are "cracked" this way simply because a module (a .dll
file, for instance) is exploitable. My automation vendor is also the
coder; his code is a few million lines as well, all hand-written, because
he's simply anal that way and doesn't like other people's coding styles
(programmers are weird like that). Your codebase should be entirely
sanitized, even when it includes other people's modules. And it's pretty
easy to do in *nix; writing a shared object file in *nix is no different
than writing a DLL for Windows. Most of the time it's the same language,
just different platforms (mostly C++ and other platform-independent
languages, obviously not VB or the like that are proprietary to Windows).
Then there's the reverse-engineering factor. If the codebase is secure,
most don't have much to worry about. If it's published under, say, the
Apache license, BSD license, or GPL, then the code has to be provided
(with exceptions; it varies from license to license). Closed source is
typically the safest method, of course (no code, and decompilation is
usually very incomplete); it requires more footwork from an attacker.
Again, lowest-hanging fruit: if I have the source, I can obviously scan it
for flaws and potential exploits. People set up honeypots all the time to
test these before attempting the attack.
> 7. There is a common misconception that each (or even any) IP port is "bound" to a particular piece of software. This just isn't the case. If your transport is not UDP/IP, then the input and output port will almost always be bound to a particular piece of software. But, that can change in the course of a minute or second. There is no way to tell. If the transport is UDP/IP, there is no need to bind ports to software, and there are good reasons (multiple access on the output side) to not so bind the port. But, this misses the point, since UNBOUND and UNCOMMITTED ports are more dangerous than ports bound to a particular application or service.
Depends entirely on the software.
In the *nix world there are a bajillion ports left over from years past.
Who uses the echo port any more? How about chargen? UUCP? They're still in
there, though, because older RFCs require them for compliance. They're
typically unbound once the service that uses them is shut off, but equally
dangerous, like you mentioned. In recent years, MS and OSX have given up
on compliance and done their own thing. It's actually quite amazing how
many ports are open by default on an OSX machine.
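Point 7's claim that UDP ports need not be bound to software is easy to
demonstrate: a UDP sender never has to bind explicitly, and the OS only
assigns an ephemeral source port at send time. A small Python
illustration (the destination, port 9, is the legacy "discard" service
from those same old RFCs; no listener needs to exist):

```python
import socket

# A UDP socket is created with no bound port at all...
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

# ...and only gets an ephemeral source port implicitly, at send time.
# Port 9 is the legacy "discard" port; the datagram just vanishes.
sock.sendto(b"ping", ("127.0.0.1", 9))

src_addr, src_port = sock.getsockname()  # OS-chosen ephemeral port
sock.close()
```

A different process could claim that same ephemeral port a second later,
which is why you can't assume a port maps to one piece of software.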
> 8. I could put money down that the method Dave described to crack a computer-based access control list (ACL) on a different computer is not possible to accomplish within any reasonable amount of time, at least as pertains to Windows. I seem to remember that there are three levels of encryption on the basic windoze ACL. Each level would need to be cracked.
ACLs typically require brute force, which is why exploits are
typically preferred. Windows (according to the site sourced above) has
plenty of gaping holes in it. It's not that it can't be done, it's
just typically resource heavy and time consuming as you mentioned.
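To put numbers on "resource heavy and time consuming": even a crude
keyspace estimate shows why brute force is the last resort and exploits
are preferred. Illustrative figures only; the guess rate here is an
assumption, not a measurement of any real cracking rig:

```python
# Brute-force cost estimate -- illustrative numbers, not tied to any
# real ACL or hash format.
alphabet = 62            # a-z, A-Z, 0-9
length = 8               # password length
guesses_per_sec = 1e9    # assumed offline cracking rate

keyspace = alphabet ** length          # total candidate passwords
seconds = keyspace / guesses_per_sec   # worst-case time at that rate
print(f"{keyspace:.2e} candidates, ~{seconds / 86400:.1f} days worst case")
```

Add the multiple encryption levels mentioned above, or an online-only
attack throttled to a few guesses per second, and the time balloons from
days to centuries, which is the whole point.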
> 9. I still maintain that lock-outs (as opposed to slow-down periods after repeated unsuccessful login attempts) are a very bad idea that will lead to infuriated, under-pressure customers making irate calls to customer support at the worst of times. How about, after 5 unsuccessful login attempts, a "forgot your password" link comes up that sends a "reset password" message to your registered email account. And, this one time only, when you reset your password, it can be the same as the existing password you "forgot?" I forget passwords all the time. I don't want to be embarrassed about that, I want to get into the machine quickly without human interaction. I tend to think that customers diagnose and fix EAS systems under pressure.
I have a lockout on my VPN, the router has a lockout, etc. I spent years
resetting passwords on email boxes, and that's where the social-engineering
aspect comes into play. A lockout-based system can start with a small,
finite delay the first go-around (say 2 minutes), then 5 minutes the
second time, and so on. If you've forgotten the password and tried 20
different ones, chances are by now you're looking for the factory-reset
button and will just deal with it. Using something like KeePass or other
password software is helpful for those prone to forgetfulitis.
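That escalating schedule (2 minutes, then 5, and so on) is simple to
implement. A sketch of the idea; the class, thresholds, and return
strings are mine, for illustration:

```python
# Escalating lockout: after each run of MAX_TRIES failures, lock for the
# next delay in the schedule (2 min, 5 min, 15 min, then repeat 15 min).
LOCKOUT_SCHEDULE = [120, 300, 900]   # seconds
MAX_TRIES = 5

class LoginGuard:
    def __init__(self):
        self.failures = 0        # failures since last lockout/success
        self.lockouts = 0        # how many lockouts have occurred
        self.locked_until = 0.0  # timestamp when the lock expires

    def attempt(self, ok: bool, now: float) -> str:
        if now < self.locked_until:
            return "locked"
        if ok:
            self.failures = 0
            self.lockouts = 0
            return "success"
        self.failures += 1
        if self.failures >= MAX_TRIES:
            delay = LOCKOUT_SCHEDULE[
                min(self.lockouts, len(LOCKOUT_SCHEDULE) - 1)]
            self.locked_until = now + delay
            self.lockouts += 1
            self.failures = 0
            return f"locked for {delay}s"
        return "failed"
```

The delays are short enough that a legitimate but forgetful user is only
inconvenienced, while an online brute-force attack slows to a crawl.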
EAS is typically repaired under pressure, as is any "critical" system
that's malfunctioning. That doesn't mean we should ignore security, for
the sake of avoiding stress, during the 99% of the time when it's working
fine and mostly ignored.
There's my nits to your nits. :)
--
Alex Hartman