
 

Selling Security

Fear Leads to . . . the Dark Side


by Marcus J. Ranum
<mjr@nfr.net>

Marcus J. Ranum is CEO of Network Flight Recorder, Inc. <https://www.nfr.net>.




When I first started working with computer security, I used to wish sometimes, "If only people were more aware of the problem. Then I wouldn't have to start off by having to convince them they needed to do something — we could get right down to solving problems." Be careful what you wish for. It might come true. Today it seems that awareness of security is at an all-time high. But I'm afraid it's more because of hype and scare tactics than because of positive, useful awareness and education. In short, I think we're gaining in the "fear, uncertainty, and doubt" department and losing in the "clue" department. Is ignorance better than fear? As Yoda says, "Fear leads to anger, anger leads to hate, and hate leads to the Dark Side."

Computer security makes the news every day, or close to it. As I write this, the current brouhaha surrounds some undocumented and unexplained features of Microsoft's cryptography API: one of the variables, a public-key component, was named with a prefix of NSA. Is it a trapdoor in the crypto, installed at the request of the National Security Agency, or is it something more innocent? Does the "NSA" prefix refer to the National Security Agency or to the "Next Security Attribute"? We don't know yet. But speculation is flying, covering the complete range from "it's probably nothing" to "it's a plot spearheaded by the orbital mind-control lasers." The truth is out there. And it's probably banal, as truth often turns out to be. By the time you read this, we'll probably know more about what's really going on.

What's interesting to me about the NSAKEY incident isn't the existence of the key; it's the way in which the public was informed about the existence of the key. Basically, the information was dumped onto the Internet in the form of a conjecture — a perfect recipe for a tempest of conspiracy theories and wild speculation. I'm not trying to say that the person who discovered the variable did anything wrong, but I'd love to know why someone didn't dig deeper into the matter before going public with it. In the "hard sciences" we have textbook examples of what happens when you go public with a result before you've double-checked and done some quiet peer review: e.g., room temperature fusion.

So, what's really going on here? Are we trying to educate people about security, or are we hyping them with late-breaking news that hasn't been fully researched? I'm starting to lean toward the latter. I went to the Web site of the company that found the "NSA back door," read its FAQs on the issue, and browsed around a bit. The company consults on matters related to public-key infrastructures. And by its own FAQ's admission, the NSAKEY "almost certainly doesn't pose a threat to individual users." Oh, I see.

When "Joe User" reads about this stuff, he's going to conclude one of two things:

  • So what? These security guys are always making mountains out of molehills.

  • So what? Nothing is safe. So I'll just forget security.

What I fear is that, while trying to promote awareness of security, we're actually damaging our case by raising fear, uncertainty, and doubt over problems to which we can't offer solutions! Security guys are constantly amazed that, in spite of huge security problems, 99.9% of the world is comfortable with Windows. It's not that they're comfortable, guys, it's that we haven't offered them a viable, secure alternative.

Also, let's stop assuming "Joe User" is stupid. He understands vested interests. When a consulting firm specializing in public-key infrastructures does a press release about a problem with Microsoft's use of public keys, what do you think goes through his mind? A few years ago, when I worked at a company that built smartcard systems, we were overwhelmed with customer calls when someone announced a theoretical vulnerability in smartcard systems. About a week later, when the ruckus was dying down, we got a letter from the company that had publicized the vulnerability, offering us consulting services to see if our system needed fixing. I'm sure the timing was merely a coincidence.

I think what Yoda meant to say was, "Hype leads to fear, which leads to the search for a solution, which leads to apathy when no solution is available."

Hyping Holes

There's also a responsibility issue. I know that responsibility is out of fashion, but humor me in my dotage. There's been a long and vigorous debate in the security community over the question of "full disclosure" of bugs. On one side, the argument is: "Disclosing the nature of security bugs makes us more vulnerable. Keep them secret and work quietly to fix them." On the other side, the argument runs: "Disclosing a bug forces the vendor to fix it in a timely manner. The bad guys already know about it, so let's publicize it." Tragically, many people overlook the obvious answer: both sides are right! The devil's always in the details: how you disclose the bug, how you communicate with the vendor, and what you disclose versus what you keep secret.

I'm not going to try to carry on the disclosure debate here, because that's not what this article is about. This article is about the wrong way (and, by negation, the right way) to sell security. The disclosure debate matters to that topic because disclosure has become one of the marketing strategies of security practitioners.

Probably the best recent example I can think of is an IIS bug that was publicized by a small security company a number of months ago. The bug, which was easy to exploit from their description, suddenly pulled the rug out from under all the Web sites running the latest version of IIS. Microsoft rushed a fix out, and after a while the furor died down. I read a couple of news articles about the incident, and one part really stuck in my mind: the sequence of events. The flaw was found. Notice was sent to Microsoft. Microsoft did not release a patch, and then it stopped responding to email about the problem. So our intrepid heroes did what they had to do: they published the bug on the Internet. What's interesting about that? The whole sequence of events lasted one week. These guys actually expected a vendor that has to manage releases for skadjillions of products to push a patch out, what, overnight? And they were worried that Microsoft had stopped the dialogue with them? It was probably the weekend! Exploits were available within three hours of the bug's posting, and innocent Web-site administrators' days were being ruined shortly thereafter. Not to be outdone, the discoverers released their own exploit tool within six hours.

Now, either these guys live entirely in Internet Time (where one hour equals a day) or they had an agenda beyond just helping Microsoft fix a vulnerability in its software. I suspect the latter. Frankly, I think they did it to market themselves. I doubt there was any speed at which Microsoft could have responded that would have been adequate. What they wanted wasn't security; they wanted attention, Web hits, and, by extension, money. Did I mention that they sell vulnerability-assessment tools and other security products?

People, this is not how to sell security. What this does is convince everyone that achieving security is a hopeless task, doomed to dismal failure. Possibly it teaches everyone to be extremely skeptical about security alerts, because they're all clearly a bunch of hype, just like that Y2K thing we used to hear about.

Aiding and Abetting

There's a company that sells books on how to make bombs and drugs and how to commit identity fraud, "for educational purposes only." Now, I'm a staunch defender of the Bill of Rights — even the unpopular parts of it — but I happen to think that "rights" do not absolve one from "responsibilities." One of the other ways security is being marketed is by demonstrations of security-breaking skill or know-how. The logic goes like this: "I know 300 ways to break into systems. Therefore I know how to protect yours." The guy who knows 600 ways to break in must be twice as skillful, I guess.

This has started a trend that I believe makes the problem worse rather than better. Instead of just having the "bad guys" trying to find and exploit holes in systems, now we have the "good guys" doing it too, or hiring "ex-bad guys," repackaging them as "good guys," and selling them to you for $400 an hour. Sure, the guy will help you secure your Web site; he honed his skills by making your fellow sysadmins' lives hell. No apologies necessary, it's just business, isn't it? There are security companies that brag about their SWAT teams, which consist of largely the same guys you're trying to protect yourselves against. They're not "bad guys" anymore, though. Now they've got stock options and drive Ferraris.

A plethora of sites distribute attack tools "for educational purposes." These sites are indirectly responsible for the rapidly growing population of script-kiddies who have learned how to use point-and-hack attack tools. Some of these sites preface their tools collections with mealy-mouthed self-justification about how it's a useful meeting ground between security professionals and the "other half." Thanks, guys.

What kind of signals does all this send to our young script-kiddie who is looking at starting a career breaking into systems? Develop exploits, publish them, and hype yourself. . . . It's okay to do it; everyone else is. . . . When you get tired of being on the Dark Side, join the Jedi.

It hurts reputable security professionals as well. I was speaking at a conference the other day and made a comment about how I don't know how to break into systems. Several people in the room giggled. But it's true — I don't. Why does everyone assume that in order to be a good bank guard you have to have knocked over a few liquor stores first? My hacking skills aren't even as good as the lamest script-kiddie's, yet I get two or three emails a month from hotmail.com addresses asking me if I know any secret ways through XYZ brand firewall. If I knew of any, I'd have contacted the vendor by now, and given them more than a week to fix it, too.

Let's Get Real

Since I've complained about people who create problems but don't offer solutions, I guess I need to make some suggestions about how to roll back the tide of security hype and fear. It's not easy.

First, I offer the "show me the money" litmus test for hype:

  • If you see a press release about a security problem, it's hype.

    Nobody is going to pay a newswire service good money to announce a bug unless they expect to get something out of it in return. When you read about a shocking new vulnerability found in something, research not only the vulnerability but the individual or organization that announced it. Ask yourself if they happen to sell a solution to the problem, and keep your skepticism in gear.

Second, I offer the "be part of the solution" test:

  • If the announcement/product/exploit makes things worse for people, it's part of the problem and is therefore hype. If it makes things better, then it's good.

    When someone releases a piece of software, does it improve the security of your system or erode the security of someone else's? Avoid the latter, promote the former. Distributing cracking tools "for educational purposes" is an insult when you consider the number of great, free, beneficial tools that are available. Tools like tcpwrappers, postfix, COPS, ipfilt, and others serve to protect; their authors deserve the laurels. (For a taste of what the protective style looks like, see the sketch below.)
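
    To make the contrast concrete, here is a minimal sketch of a protective tool at work: a TCP Wrappers policy that denies everything by default and then admits only the clients you trust. The service names, network, and domain below are hypothetical placeholders for illustration, not a recommendation for any particular site.

        # /etc/hosts.deny -- default stance: refuse every wrapped service
        ALL: ALL

        # /etc/hosts.allow -- then list the few clients you actually trust
        # (the network and domain here are made-up examples)
        sshd:    192.168.1.0/255.255.255.0
        in.ftpd: .trusted.example.com

    Note the posture: a default-deny policy like this hardens your own machine without handing anyone a weapon against someone else's, which is the "makes things better" test in action.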

Last, I urge you to disseminate positive information, not system-cracking techniques. Why waste time writing an article on how to stack-smash sendmail when you could do code reviews and help the authors fix their bugs? Why write tools to break into firewalls when you can write tutorials on how to improve them? Oddly, you'll make more friends in the system-administrator community by helping them solve problems than by shoving problems in their faces. If it's money you want, you'll make more money selling solutions than creating problems. If your solution is worth buying, its value will be obvious enough that you won't need to release an exploit script to scare your customers into buying it.

Obviously, I'm opinionated on these matters, and I don't expect everyone to agree with me. In any case, if you work with security, I urge you to denounce hype when you see it. As computers and networks continue to infiltrate our lives, security is going to be a bigger and bigger problem. There's no sense making it bigger still by selling fear.

     
