Thursday, July 31, 2008

The Sharp family woke up to some good news this morning: a private investigator who works with a private bank in the UK has offered to share a fortune with us - because we are lucky enough to have the same last name as a deceased client.
In his email to "undisclosed recipients", the aforementioned P.I. says that he is "not a criminal", which is good to know. He is, apparently, doing this because "the dynamics of my industry dictates that I make this move."
Folks, if you receive an email from someone - anyone - saying they have found a pile of money previously owned by a deceased person with the same name as you, don't reply. It is a scam.
If someone says in an email that they have been hired to kill you - but will forget about it if you empty your bank account in their direction - don't reply. It is a scam.
If a bank or credit union asks you to change or verify or transmit your login credentials via email, don't do it. It is a scam.
An unfortunately large number of people are still replying to these emails - and many are still being taken for a ride, to the tune of hundreds or even thousands of dollars.
I followed one of these threads to its natural conclusion a couple of years ago, and the guy on the other side - "a UK barrister" - was pretty slick. I can see how, to some folks, a deal might just seem real enough to invest a few hundred bucks.
Bottom line: if an email from a stranger - or an institution - surprises you in some unexpected way, delete it, or if you bank with the institution in question, call customer service before clicking on anything in the email.
Monday, July 28, 2008
Okay, I've been on Vista on and off since the start of the year, and as a regular user of more than three applications, I feel qualified to comment about the comparison Forrester is making between Vista and New Coke.
My two cents: New Coke sucked. Vista is just fine.
I actually have bigger problems with the Office redesign than I do with Vista - and suspect that Vista issues may not be the only reason behind the fact that 87% of corporate PCs (and I presume laptops) are still running XP (Forrester).
Whoever redesigned Office did so with little thought to the fact that the majority of new computer users would be buying laptops with wide-screen formats. And with less thought to the fact that the canvas is the most important part of the interface.
The new Office isn't so much a CPU hog as it is a real estate hog. The design shows the designers were a lot more enamored with the application than with any ideas about what magic might be done with it, in the form of documents, presentations or spreadsheets.
But Vista is a different story. After several months of using it, I would not go willingly back to XP. I like it.
Stylistically, I like the way the windows open and close. I like the Aero interface. And yes, I've even gotten used to the redesigned treeviews - to the point where the internal window "jog" has actually started to feel intuitive.
In terms of performance, on my machine - a Sony Vaio with crapware removed - Vista runs really fast, and starts up faster than any of my previous machines running XP. All my plug and play game and music stuff seems to work fine. No networking issues.
What's not to like?
As it turns out, plenty. But some of that hatred is misplaced. According to the results of Microsoft's "Mojave" experiment announced last week (in which XP users were shown a new test "post-Vista" operating system and proclaimed it to be great), users may tend to react more to "fuzz and buzz" than to actual experience.
Maybe the Vista team should take the Mojave experiment on the road...
Note: Full results of the Mojave experiment are due to be posted tomorrow here. Kudos to Microsoft PR folks - great job in thinking up this idea in the first place, and getting the word out there.
Saturday, July 26, 2008
Yesterday, data presented at Carnegie Mellon University demonstrated some of the issues that stand in the way of creating the safe Internet experience that online banking consumers are seeking.
The data comes from a University of Michigan study conducted by Atul Prakash, a professor in the Department of Electrical Engineering and Computer Science (and the author of over fifty papers in the field), together with two of his doctoral students, Laura Falk and Kevin Borders. The study examined 241 sites in 2006, including the sites of major financial institutions.
Prakash apparently initiated the study after noticing that his own interactions with financial institutions on the web were less than secure.
The results are worthy of study. As re-reported on Friday, Prakash and his team found that three quarters of consumer banking sites suffered from some form of fundamental design flaw impacting security.
"To our surprise, design flaws that could compromise security were widespread and included some of the largest banks in the country," Prakash said.
Some of the flaws uncovered by Prakash and his team included:
Placing "secure" login boxes on insecure pages
47% of banks were found to be guilty of this particular transgression. Doing this is problematic in that it exposes user names and passwords to hackers mounting man-in-the-middle attacks or siphoning data off wireless networks.
Hosting of support/help/security advice on insecure pages
55% of banks presented their support pages within a non-secure environment, allowing hackers to easily intercept support requests or even set up their own spoofed web pages and call centers using DNS redirects.
Redirecting customers to third-party sites
Prakash found that 30% of banks surveyed sent their customers to other sites in order to facilitate transactions. Unless these other sites utilize some form of identity federation or shared trust, this practice is *not good*.
SSNs and Non-Secure User IDs
Prakash and his team faulted sites utilizing social security numbers and email addresses as login credentials and user IDs, exposing this information to hackers via man-in-the-middle attacks. I agree with this "outing" of the practice.
Allowing weak passwords
Given the ease of validation methods, allowing weak passwords isn't a great idea, and doesn't save anyone any money in the long run. According to Prakash, 28% of the sites surveyed allowed weak passwords.
Emailing statements and passwords
31% of the web sites of financial institutions surveyed by Prakash were found to be emailing statements and/or passwords to customers.
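Several of the flaws above are mechanically detectable. As a hedged sketch (the hostnames are invented, and a real audit would fetch live pages over the network), here is a check for the first flaw on the list - login forms that would submit credentials to a non-HTTPS address:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class FormActionAuditor(HTMLParser):
    """Collects every <form> action on a page so insecure submit targets can be flagged."""
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.insecure_actions = []

    def handle_starttag(self, tag, attrs):
        if tag != "form":
            return
        # Resolve the action relative to the page URL, then check its scheme.
        target = urljoin(self.base_url, dict(attrs).get("action", ""))
        if urlparse(target).scheme != "https":
            self.insecure_actions.append(target)

def audit_login_page(page_url, html):
    """Return the form targets that would carry credentials over plain HTTP."""
    auditor = FormActionAuditor(page_url)
    auditor.feed(html)
    return auditor.insecure_actions

# A relative action on an HTTP page inherits the page's insecure scheme:
print(audit_login_page("http://bank.example/login",
                       '<form action="/doLogin"><input name="user"></form>'))
# ['http://bank.example/doLogin']
```

Note that even a form posting to HTTPS from an HTTP page remains unsafe - the page itself can be tampered with in transit - which is exactly the point of the "secure box on an insecure page" flaw.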
None of these design problems are issues if consumers do their banking using Authentium SafeCentral, but all should be examined/fixed anyway. The cost of fixing each of these issues is minor; the benefits are potentially significant.
The fact that we are able to protect against the exploitation of these weaknesses should not be used as a reason not to fix them. Consumers will on occasion need to use a non-secure browser. Banks should perhaps examine this list for indications their own sites could be improved.
Friday, July 25, 2008
Ten years ago, before co-founding Authentium, I traveled to Beijing to meet with China Radio International and discuss a possible joint venture with them and the satellite company I was working for.
I've always loved going to China. But this time there were several notable highlights to the trip.
One of the highlights was a tour of CRI's multistory, mid-city facility, during which we were shown several interesting items, including their concert hall, a map of the Chinese shortwave radio grid and the China National Radio Museum.
The museum was fascinating. At the time, CRI was broadcasting over its shortwave grid (and via AM repeater stations) in 49 languages, including Esperanto. Arranged in a large darkened room in glass and wood cabinets were gifts from all over the world, including operettas in Hungarian, bottles of whiskey (unopened), and hand-written song requests.
In one cabinet sat the polished gray and chrome microphone used by Chairman Mao to proclaim the new order, from the Peace Hotel in Shanghai.
I checked into the room Mao stayed in during a previous visit when I was there in 1995. The room cost me $140 for the night. The guys in the jazz band in the bar downstairs were all eighty years old.
After the museum, we ended up in the basement of the building looking at a map showing the shortwave repeater stations... but what really caught my attention was a PowerPoint slide showing the fiber being laid between the major cities.
One of the guys in our small group said something like "this is a very ambitious plan". Our translator translated this and the Chinese just shook their heads and smiled at the poor naive Westerners sitting across from them.
"This is not our plan", our guide explained. "This is our current capacity".
He went on to explain that in major cities they already had fiber passing about 70% of buildings, and broadband uptake was in double digits and growing fast. We all looked at each other, and then looked back at the maps, and wondered - could this really be the case?
Could China have really built the largest broadband network in the world?
The news out of China today - that they now have 253 million Internet users sitting in a market that is growing above 50% a year - shows that indeed they have.
I wouldn't be surprised if it is announced next week by the ITU that China already has more broadband users than the US: after all, they were ranked second by the international body at the end of 2005.
Commentators will come out in the next few days and claim these numbers are inflated. I don't think so. Based on what we saw a decade ago, I think the numbers are real, and I think the growth figures are real too.
Note: Check out the image above - yes, that really is a program guide from China Radio International in Esperanto, complete with banner ads, also in Esperanto. Don't believe me? Click on it and check out CRI's site.
A few weeks ago, I blogged about the 500,000,000 unpatched Internet browsers that the Swiss Federal Institute of Technology estimates are out there.
Yesterday, I blogged about the 10 million DNS servers now at risk because of the DNS vulnerability recently identified by Dan Kaminsky.
Now, let's assume that 20% of the DNS servers have been made compliant over the past few weeks, a number that I personally believe is a stretch. That still leaves 8 million DNS servers as targets for hackers looking to redirect Internet traffic, and 500,000,000 unpatched browsers.
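The arithmetic behind that claim, using the post's own assumed numbers:

```python
dns_servers = 10_000_000    # estimated installed base of DNS servers
patched_share = 20          # generous assumed patch rate, in percent
unpatched = dns_servers * (100 - patched_share) // 100
print(unpatched)  # 8000000 resolvers still open to redirection attacks
```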
That's a lot of potential for evil.
A friend from one of the larger online financial service providers in the US sent me a link to a quote Kaminsky made that was published yesterday. In this quote, Kaminsky is starting to sound the alarm:
"We are in a lot of trouble," said IOActive security specialist Dan Kaminsky. "This attack is very good. This attack is being weaponized out in the field. Everyone needs to patch, please. This is a big deal."
As I mentioned yesterday, we shouldn't hold our breath when it comes to hoping all the DNS servers out there are going to get patched anytime soon.
Another looming issue is the difficulty that the mainstream press is going to have in "sound-biting" a technically complex (for non-IT folks) problem so it can be made interesting for consumers.
That initial explanation of how large and small remote Domain Name Servers and local HOSTS files all work together to resolve URL requests is going to have folks reaching for their remotes pretty quickly...
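For the curious, that resolution chain reduces to a very small sketch (the hostnames and addresses below are invented for illustration):

```python
def resolve(hostname, hosts_entries, remote_lookup):
    """Toy name resolution: a local HOSTS entry short-circuits the query;
    otherwise the request goes out to the (possibly poisoned) remote DNS
    server. A bad answer at either stage silently redirects every later
    connection to that name."""
    if hostname in hosts_entries:
        return hosts_entries[hostname]
    return remote_lookup(hostname)

hosts = {"www.example-bank.com": "10.0.0.66"}   # a poisoned local entry
dns = lambda name: "203.0.113.7"                # stand-in for the remote server's answer

print(resolve("www.example-bank.com", hosts, dns))  # 10.0.0.66 - the poisoned entry wins
print(resolve("www.example.org", hosts, dns))       # 203.0.113.7 - normal remote lookup
```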
The good news is that there is a solution available. Almost five years ago, we started work on a system that would protect these requests from the origin point through to the destination server.
As I mentioned yesterday, our service, Authentium SafeCentral, bypasses the non-secure DNS infrastructure and provides a secure means of correctly connecting to transaction sites.
This patent-pending service operates securely, anywhere in the world, regardless of whether or not your ISP's DNS servers have been patched. And if you're one of the 500,000,000 who haven't updated your browser, SafeCentral will provide you with a much safer Internet.
Note: Some of you asked where the estimate of 10 million DNS servers came from. Although I thought this was clear in the original blog, the source of the number was the Infoblox DNS Report Card, which estimated there were nine million DNS servers in place at the end of 2007.
I simply took the previous year's growth and used that as a guide - which produces a total base of just slightly less than 10m servers.
Wednesday, July 23, 2008
PC Magazine just published an excellent review of Authentium SafeCentral.
Our security features all worked exactly as advertised and the reviewer had many positive things to say about the enhanced security SafeCentral offers online consumers - especially when it comes to online banking transactions.
The only negatives were lack of a password manager, lack of support for Firefox antiphishing, and slight slowness in rendering pages. All of these feature requests/issues have already been addressed for our new release.
The review focused on three main areas: phishing/spoofing, keylogging and screen-stealing, and DNS (URL lookup) security.
With respect to our antiphishing capabilities, one of the things I liked about the review was that the reviewer understood the need for a systematic, real-time approach to preventing phishing. Here's what he said about our abilities in that area:
"If you always visit your sensitive sites by launching them within SafeCentral, there's almost no chance you'll be taken in by a phishing scam."
He also tested our secure DNS lookup capabilities by hacking his test system's HOSTS file, and found that we prevent that kind of DNS poisoning.
"I added a line to make requests for www.pcmag.com go to a different site. IE and Firefox were totally fooled, but the SafeCentral browser brushed aside my amateur hacking and went directly to PC Magazine's site."
Excellent! That is exactly what is supposed to happen - poisoning of the local HOSTS file is one of the easiest hacks to pull off, and our patent-pending TSX library (now part of SafeCentral) does a great job of preventing this.
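For readers who haven't seen one: the reviewer's hack amounts to a single extra line in the Windows HOSTS file (the address shown here is invented):

```
# %SystemRoot%\System32\drivers\etc\hosts
10.13.37.1    www.pcmag.com    # invented attacker address; the machine now "resolves"
                               # pcmag.com here before any DNS server is consulted
```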
On the subject of sneaky key-loggers and screen-stealers, the reviewer used a keylogger that's "sneakier than most" (his words) and again compared us to IE and Firefox (check out the slide show on PC Mag's site for screen shots of this attempt):
"The keylogger totally captured everything I typed in IE and Firefox. It saved screenshots, it recorded data from the clipboard, and it even tracked what URLs I visited in IE. But it didn't get a single byte of information from the SafeCentral session. I tried several other keyloggers with the same result. Good job!"
The reviewer noted at the end of the review that we could do with some improvements in speed (already addressed), password manager support (also already addressed), and support for the Firefox antiphishing technology (included in the latest build).
The complete text of the review, including screenshots of SafeCentral, can be found by going to PC Mag's site, buying the magazine, or clicking here.
If you'd like to download SafeCentral for free, please go here.
Tuesday, July 22, 2008
Okay, like a lot of security guys, I speculated on what Dan Kaminsky was going to announce at Black Hat regarding the current DNS vulnerability.
Here's a quick recap of the problem, courtesy of Wired:
"The DNS flaw that Kaminsky discovered allows a hacker to conduct a "cache poisoning attack" that could be accomplished in about ten seconds, allowing an attacker to fool a DNS server into redirecting web surfers to malicious web sites..."
"A cache poisoning attack allows a hacker to... translate a website's name to a different address instead of the real address, so that when a user types in "www.amazon.com," his browser is directed to a malicious site instead, where an attacker can download malware to the user's computer or steal user names and passwords that the user enters at the fake site..."
My own speculation involved an assumption of stupid levels of randomness. But if Thomas Dullien (aka Halvar Flake) turns out to be right (and as of this writing, most people seem to think that he is), I was off by an order of magnitude - in terms of both stupidity levels and the ease with which this vulnerability can be exploited.
The vulnerability allows hackers to basically take over a DNS cache "in about ten seconds" (see above quote). Wired predicts the first root kits will be in circulation by *tomorrow*. Here's a link to the post from Dullien.
So if the problem is known, why do I say this ended badly? Because we're looking at a massive, Internet-wide problem. Even though vendor patches are available, Internet security - and DNS lookups - are going to be compromised for as long as it takes for everyone to get compliant.
There are an estimated 10 million DNS servers out there. According to the Infoblox DNS Report Card, by the end of 2006 less than two-thirds of DNS servers (61%) had been upgraded to BIND 9 - an improvement of barely 3% over 2005 levels.
With no policing forces at work (other than customer complaints and market forces), I predict that it will take years for all servers to be brought into compliance. Which means this problem - DNS insecurity - is going to be around for a while.
I wouldn't be doing my job if I didn't point out that our secure transaction service, Authentium SafeCentral, uses an independent system of secure DNS servers linked to a secure client to make sure that every request for a bank or brokerage web site goes to the right place.
Saturday, July 12, 2008
Software Development Kits are a big part of what we do at Authentium.
For more than a decade, we have packaged and released system-level tool kits, including Linux and Windows-based antivirus SDKs, personal firewall SDKs, and system-level file-hardening tools.
These tool kits have been used by many industry leaders in the security, SaaS-based managed services, and telecommunications industries to create new products and services.
Based on this experience, we have learned a lot about what tool kits need to offer engineering teams. But first, let's start with a proper definition.
Software Development Kits (SDKs) should enable developers outside of the distributing organization to access and utilize the code/intellectual property in a way that clearly defines both the scope of the intellectual property (IP), and the scope of what is allowed to be done with it.
The commercial model needs to closely match the scope of the toolkit. If your toolkit effectively allows other companies to compete with you, or includes significant ongoing service commitments (both are true of anti-virus tool kits, for example), then your model needs to price using a "co-opetition" approach and fund the ongoing service costs.
SDKs by definition also need to be well-documented, starting with the licensing scheme.
It is extremely important to let developers know up-front what can and can't be done with the code. In my opinion, Firefox does an excellent job of explaining what is covered under the General Public License (GPL) and what is owned by the third-party developer. Knowing what the tool kit owner owns, and what you could potentially own based on your use of the kit, is important when licensing in code.
Documents designed to inform engineers are the next step. There is nothing worse than "snobby" or badly-written documentation. Engineers face deadlines and have limited time to learn your code. They need to know that your engineers are dedicated to bringing them up to speed and helping them make this deadline - the quality of your documentation reflects this better than anything else.
For me, the first step in testing your documentation should be to ask someone that has never tried your toolkit to build something with it. Does the documentation clearly enable the engineer to create something using your toolkit, without resorting to calling the manufacturer? If the answer is yes, and your legal agreement is clear, proceed.
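As a concrete (and entirely hypothetical - this is not Authentium's actual SDK surface) illustration, the "first program" a toolkit's documentation enables should be about this small. Here the scan API is reduced to a toy signature match against the industry-standard EICAR test string:

```python
# The EICAR string is the standard, harmless test signature that AV engines detect.
EICAR = rb"X5O!P%@AP[4\PZX54(P^)7CC)7}$EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

def scan_bytes(data: bytes) -> bool:
    """Toy stand-in for an SDK's scan call: flag a buffer containing the test signature."""
    return EICAR in data

print(scan_bytes(b"some prefix " + EICAR))   # True - signature found
print(scan_bytes(b"clean file contents"))    # False
```

If a newcomer can't get from your documentation to something this simple unaided, the documentation isn't done.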
Features found in the product that a toolkit is based on are often not included in the SDK. In my view, this is wrong - rather than force your partners to "reinvent the wheel" you should present them with features as part of your commercial model. Include them in the code and value them correctly - that way, everyone wins.
The final thing any decent open platform code-base or SDK needs is good support, provided by people who are proud of the code and willing to help. SDKs need to be supported either by an interactive, wiki-based community, as is the case with Linux, PayPal or Firefox, or by a dedicated team of engineers prepared to answer questions from other developers.
In summary, SDKs need precise legal and commercial definitions, an appropriate and understandable commercial model, great documentation, cool features, and solid support. If you have all these, your SDK should be successful.
Note: My thanks to Vladimir Dubovik at US Bank for asking the question that led to this post.
Thursday, July 10, 2008
Imagine an attack in which the hacker controls all your Internet traffic, and is able to redirect your web site requests away from your requested destination to a spoofed web site that they control.
This scenario is called a Man-In-The-Middle (MITM) attack, and is achieved when a hacker is successful in "poisoning" or modifying the Domain Name Server cache.
Once a DNS cache is poisoned, it enables intelligent interception and redirection of web site requests to be managed from a point remote from the client (and the destination). DNS poisoning is, in many ways, a case study in online criminal efficiency.
Next month, as everyone in the security industry now knows, Dan Kaminsky is going to step up to the mic at Black Hat and talk about something everyone already knows is a big problem - DNS insecurity.
So what is Kaminsky going to tell us? The fact that an out-of-sequence patch was issued by Microsoft two nights ago (a patch that apparently kicked users of Zone Alarm firewalls off the Internet) explains where the problem probably lies.
The Register (which refers, accurately, to DNS insecurity as "the mad woman in the attic" and a "peripheral, forgotten issue") added some color today, unearthing a 2005 paper from Ian Green which makes for some interesting reading. Here's a peek at his paper:
"...as the infamous Mitnick vs Shimomura attack and other subsequent attacks have shown, many weaknesses in network protocols are a result of poor implementation rather than weaknesses in the underlying protocol. In the Mitnick attack, 'IP source address spoofing and TCP sequence number prediction were used to gain initial access'."
Hmmm. Can you tell what is coming next? Three pages later, after a few hours of research, Green writes of his target research (the XP DNS Resolver):
"The DNS transaction ID always begins at 1 and is incremented by 1 for each subsequent DNS query; and... the UDP source port of the query (which becomes the UDP destination port of the response) remains static for the entirety of a session (from startup to shutdown)."
In other words, Green has followed Mitnick's advice and found exactly what was predicted: stupid levels of predictability. The DNS transaction ID, which is allowed to be a random number 16 bits long, has been implemented in such a way it can be easily guessed ("n" + 1).
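A minimal sketch (not Green's code) of why this matters to an off-path attacker, who must forge a response carrying the right transaction ID before the real answer arrives:

```python
import random

def next_txid_xp_style(prev):
    """The flawed scheme Green documented: each query increments the last ID,
    so one observed (or guessed) ID reveals every ID that follows."""
    return (prev + 1) & 0xFFFF

def next_txid_random():
    """What the 16-bit field permits: 65,536 equally likely values per query,
    forcing the attacker to flood forged responses rather than compute one."""
    return random.getrandbits(16)

print(next_txid_xp_style(41))   # 42 - trivially predictable
```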
In his paper, Green faults Microsoft's flawed implementation of DNS in XP ("ten years after the Mitnick attack"). The Register article uses this as the basis of a theory about what Kaminsky is going to talk about - a theory that was bolstered by MSFT's out-of-sequence patch this week.
Anyway, let's assume that's right. That leaves us Internet users with a problem. Mitnick first paved the way 13 years ago. Green's paper, which was published by the SANS Institute, came out three years ago, in 2005.
If it turns out this is what Kaminsky is going to talk about, why is everyone assuming the problem will be taken care of quickly?
The truth is, it won't. Only a minority of vulnerable users will hear about this and download and install the patch - leaving lots of room for those folks looking to pull off the perfect Internet crime - the MITM, or Man In The Middle attack.
Note: It would not be proper for me to sign off without pointing out that a solution exists for XP users: Every single DNS request made inside Authentium SafeCentral is handed off to our secure DNS service.
This ensures that even users with totally compromised machines get to where they want to go, without experiencing a MITM attack.
This morning, the editors of the Hartford Courant took a walk down the Yellow Brick Road and found courage, smarts - and a heart.
In an editorial this morning entitled "Drop The Charges" the Courant challenged Connecticut prosecutors to drop the bogus charges they have lined up against Julie Amero and take the retrial off the books.
In writing the piece, they proved that it is never too late to right a wrong, or claim back some respect.
For anyone unaware of this case, Julie Amero was the schoolteacher who was kicked out of her job after pornographic pop-ups appeared on an un-patched, unprotected school computer in front of several students.
Lots of people have since looked at the exact code she was looking at at the time (thank you, archive.org) and found unmistakable evidence that this is probably among the worst cases of injustice ever perpetrated in the short history of Internet-related crimes.
Amero was without any shred of doubt very unjustly punished. There were links in the code that I saw that led to places other than those advertised, and popups that aggressively spawned new popups. And let's face it: even if Amero went everywhere the prosecutors claim, why isn't the IT guy at the school attracting attention for not keeping the school's filters up to date?
The whole idea of having filters is so that kids don't get exposed to stuff like this - no matter what actions adults take.
The Courant compares Amero's current dismal state with that of some of the borderline inmates waiting for trial in Guantanamo. This isn't nearly as crazy as it sounds. Amero is also sitting in limbo waiting for prosecutors to get off their butts and admit they don't have anything.
Hartford folks, when your local politicians come to you for re-election, please do all of us a favor and ask them where they stand on the Amero issue. Make it a local issue.
Take a brave action - like the Courant has done today - and vote some prosecutors into place that will make your community worthy again of respect.
Update: Re the last paragraph, an alert reader has pointed out to me that things are not done quite so democratically in Connecticut. Click "Comments" (and watch for future entries in the Authentium InSecurity blog) for more...
Wednesday, July 2, 2008
IBM, Google and the Swiss Federal Institute of Technology have just come out with a really interesting study. The subject was the relative security of the 1.4 billion users of the four main browsers currently in distribution.
Browser security is the hot area of study right now. Last week I wrote a piece in the blog on man-in-the-browser attacks, describing why it is so important that you use a secure browser. If you haven't read it, you should. But back to the study.
The study looked mainly at two things: the security "holes" or exploits that currently exist, and the effectiveness of the update strategies used by the four main browser developers - Mozilla (Firefox), Microsoft (IE), Opera (Opera) and Apple (Safari) - across their 1.4 billion users.
Firefox, which uses a completely automated update strategy, won the day with 83.3% of users patched up to the latest version, compared to less than 50% of IE users. IE users chose to ignore patches far more often because of IE's "permanently put-off this update" approach - leaving them more open to browser-based attacks.
As the ArsTechnica overview of the report states:
"Firefox and Opera are both credited for including an auto-update feature, but the team notes that "Firefox's auto-update was found to be way more effective than Opera's manual update download reminder strategy." How effective? Way more effective."
We like Firefox at Authentium. Authentium's SafeCentral end-to-end transaction security solution utilizes a specially-hardened version of Firefox 3 in conjunction with our system-level hardening technologies and a secure DNS system.
If you're thinking of downloading FF3, or upgrading, I'd recommend you go over to the site and get yourself a really secure browser.
Note: the Ars Technica article was entitled "40% of Surfers Don't Bother With Browser Security Updates" - for us and all the other people working in risk mitigation, the fact that there are half a billion unpatched browsers out there is one scary fact.
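The headline number squares with the earlier half-billion estimate:

```python
browser_users = 1_400_000_000             # installed base counted by the study
unpatched = browser_users * 40 // 100     # the "40% don't bother" headline figure
print(unpatched)  # 560000000 - a little over half a billion unpatched browsers
```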
Tuesday, July 1, 2008
Bank Infosecurity's Linda McGlasson has an excellent post over at her site today on what happened during a real-world, real-person penetration testing exercise at an (unnamed) financial institution.
I have had two discussions this week with CSOs at banks who said they are becoming overwhelmed with similar real-world security problems, like social engineering of their call-center staff and proper checking of vendors and hosting companies at the front desk.
The bottom line is that a lot of nice people just want to be nice - and that makes them easy targets for people looking to do "walk-in" style attacks. These nice people need to be better trained to understand that sometimes being nice involves being firm and inflexible.
In any case, locking down these vectors is the correct place to start: the right prioritization of security efforts begins with the physical premises. Putting in place advanced network security is only effective in conjunction with a robust and wide-ranging set of security policies that covers every potential attack vector.
Linda's blog can be found here.
I was in Lower Manhattan on 9/11 when the planes hit. And as the horrible events of that day unfolded, I, like many other New Yorkers, tried to help.
I went first to St. Vincent's in Greenwich Village to donate blood, watching as thousands of dust-covered refugees from the City streamed north, past white-coated doctors and nurses waiting in vain beside a line of empty gurneys.
Then, once it became clear that blood wasn't what was needed, I headed with a group of other guys over to the docks to volunteer to help dig people out - only to be turned away because they wanted people "with tools and experience" - as in, experience in digging and cutting through steel and concrete.
So I went back to the neighborhood - just as the National Guard arrived and started locking everything down, from 14th St south to Battery Park.
As it turned out, the only heroic act that I managed to perform during 9/11 was the procuring of emergency supplies and the refilling of Kristen Johnson's water cooler (it's a long story). Hardly the stuff of legend. I went to bed - late - deeply unsatisfied with my contributions.
Meanwhile, the real heroes became more heroic to us New Yorkers by the day.
That afternoon, and for all the next day, and days after that, we would cheer them on from the east side of West St, as the firefighters and cops and construction workers kept digging, looking for "the people in the pictures" as we came to call the missing in the weeks after the tragedy.
The experience was unprecedented for me. I'd never felt such insecurity - or been in the middle of a disaster scene before. I had never ever before seen real life heroes up close, working to save lives - except for doctors and nurses and mothers. This form of heroics - the disaster response - I had no ability to comprehend it beyond the obvious sacrifice happening right in front of me.
Which brings me to the subject of this blog.
In his book "The Black Swan", Nassim Taleb postulates that a forward-thinking, highly-placed politician could have prevented the tragedy of 9/11 - by forcing the adoption of laws mandating additional security in the form of terror-proof, secure cockpit doors on aircraft.
He then explains that had this additional security been put in place, 9/11 would probably not have occurred, and New York, and the WTC, would have continued much as before.
But as Taleb explains, every action has a cost. Imagine the life of our politician as he faces re-election one year after his successful legislation. His success in forward-thinking has, unexpectedly, created a large personal problem: His overwhelming success has resulted in the complete destruction of a whole class of threats.
What remains, once the threat of an attack has been removed? The cost. And only the cost. Ask George W. Bush and Dick Cheney.
And so it ends with our hero. Taleb's story concludes with our lawmaker - the politician who "prevented" the attack - being turfed out of office for imposing such a ridiculously costly and unnecessary "security burden" on the airline industry, perhaps after the running of an ad campaign explaining how "all that money" could have been "better spent".
Taleb's story (originally told to explain the theory of Black Swans, like 9/11) goes a long way to explain why CSO's and their hard-working IT security staff often feel unappreciated.
It explains why boards and governments almost never sign up for large-scale security spending. It explains why "adequate amounts of security" and "heroics" will forever be incompatible. It explains why firefighters get depressed and sometimes light fires.
It also explains a lot about the security software industry. Analysts and engineers working in antivirus facilities like Authentium's Virus Lab sometimes get frustrated when criminals and hackers get lionized by the press - especially after an all-nighter spent securing the world from the threats they've created.
Taleb's story illustrates why when insecurity is rife, as it was on 9/11, heroes are needed. But it also explains why, with few exceptions, the names of the most successful folks in the threat prevention business are seldom heard outside of the industry.
Because, by improving security, they have killed off the likelihood of threats, and the need for heroes. They have made the terrible event go away, before it could occur, and become invisible as a result.