The true cost of computer crime
As society changes, so do the crimes that people commit. And as the internet takes on an ever more important role, computer crime is emerging as the misdemeanour of choice. But just who are the victims, and how much is it costing them?
Remarkably, perhaps, we don't know. But that is about to change as researchers start to investigate the real effects of computer crime. And they have come up with some surprises. While it is well known that attacking websites and networks can prove costly for those who own them, it also hits companies such as Microsoft and Cisco that released the vulnerable software exploited by the hackers. That could provide a much-needed incentive for software vendors to produce more secure code. These attacks are also not necessarily as costly as their victims may claim, a finding that should help businesses decide how much to invest in security technologies.
Much computer crime exploits flaws or "vulnerabilities" in software that allow an attacker or a virus to gain entry to a computer, access confidential information, run malicious programs or crash the system. Last week, the UK National Infrastructure Security Coordination Centre, which protects the country's critical infrastructure from electronic threats, warned that nearly 300 government departments or essential businesses had been attacked in the past few months.
But even the possibility of an attack damages software companies whose products are found to be vulnerable. A survey by Sunil Wattal and Rahul Telang of Carnegie Mellon University in Pittsburgh, Pennsylvania, analysed the economic impact on 18 software suppliers, including Microsoft, Cisco, IBM and Red Hat. Announcing a vulnerability in one of these companies' products caused, on average, a 0.6 per cent fall in its stock price, or an $860 million fall in the company's value. "Vendors do not get off scot-free. The market reacts to vulnerabilities," says Wattal, who presented the findings to the Workshop on the Economics of Information Security (WEIS) in Cambridge, Massachusetts, earlier this month.
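Those two averages can be cross-checked against each other. As a rough back-of-envelope calculation (an illustration, not part of the Wattal and Telang study), a 0.6 per cent fall amounting to $860 million implies an average market value in the region of $143 billion, which is plausible for the large vendors surveyed:

```python
# Back-of-envelope check on the two averages quoted from the study.
PCT_DROP = 0.006      # 0.6 per cent average stock-price fall
VALUE_LOST = 860e6    # $860 million average fall in company value

# If both averages describe the same firms, the implied average
# market capitalisation is the dollar loss divided by the fraction lost.
implied_market_cap = VALUE_LOST / PCT_DROP
print(f"Implied average market value: ${implied_market_cap / 1e9:.0f} billion")
```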
There is also dispute over how to deal with a vulnerability once it has been found: should it be kept secret, or publicly disclosed so that people with the faulty software can fix it? At the moment, vendors normally disclose a flaw as soon as they have developed a patch that corrects it. While this protects customers who keep their software up to date, it also shows malicious hackers how to attack less security-conscious users.
Some benign software hackers deliberately search for vulnerabilities, seeking both kudos and more secure operating systems and products. They are encouraged to privately disclose the flaws they find to the CERT Coordination Center, which is funded by the Department of Defense and operated by Carnegie Mellon University. CERT then informs the vendor of the vulnerability, giving them 45 days in which to develop a patch. After that time, CERT makes the information public. The idea is that the 45 days gives the vendor long enough to develop a solution, and an incentive to do so quickly.
But this may not be the best way to deal with vulnerabilities, according to Eric Rescorla, founder of the security firm Network Resonance in Palo Alto, California. Last year he argued that disclosing vulnerabilities does more harm than keeping them under wraps and not issuing a patch. He claimed there is little chance that a malicious hacker will independently discover a vulnerability that one of the "good guys" has already found. Disclosing vulnerabilities does nothing to strengthen software security; it just exposes its weaknesses, he said.
Now Andy Ozment, who researches vulnerability disclosure at the University of Cambridge, has refuted both these claims. He presented data to WEIS showing there is an 8 per cent chance that one or more people will independently discover the same vulnerability before it is patched. Ozment analysed the source code of OpenBSD, an open-source operating system that runs many web servers. Updates to OpenBSD are recorded, and by looking for the exact date on which the software was patched, Ozment has been able to create his own database of vulnerabilities. This database, he says, shows that the number of vulnerabilities decreases as a result of disclosure, which encourages people to patch their computer systems.
Ozment says his assessment is more accurate than Rescorla's, which relies on a vulnerabilities mailing list called ICAT for the dates of disclosures and patches from vendors. The list does not include all vulnerabilities, and the dates are not necessarily reliable, Ozment claims. "No one is motivated to keep the list accurate as it's not intended for research."
Long-term consequences

He now plans to create a model that vendors such as Microsoft could use to evaluate the costs and benefits of disclosing vulnerabilities. A key parameter will be the average time it takes for two people to independently find a software flaw. "The likelihood of rediscovery is a really important factor," he says. If it happens quickly, vendors should release patches straight away. If it takes weeks or months, then it will be better for software firms to bundle many vulnerabilities into one big patch. These are harder for attackers to exploit, and easier for system administrators to install than a succession of patches.

Software vulnerabilities are not the only facet of computer security whose cost businesses would like to quantify. Another is denial of service (DoS) attacks, in which hackers try to shut down a website by programming thousands of computers to simultaneously request information from it. They are being used by malicious e-commerce sites to shut down rivals, and by extortionists as a blackmail tool.
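Ozment's model is not published in detail, but the decision rule he describes can be sketched as a purely hypothetical toy function. The specific threshold below, half the bundled-release interval (the average wait a newly found flaw faces before a bundled patch ships), is an assumption for illustration, not his:

```python
def should_bundle(mean_rediscovery_days: float, bundle_interval_days: float) -> bool:
    """Hypothetical sketch of the disclosure trade-off described by Ozment.

    If independent rediscovery typically takes longer than a flaw would
    wait for the next bundled release (on average half the release
    interval), bundling is the safer choice; if rediscovery is quick,
    the vendor should patch immediately.
    """
    average_wait_for_bundle = bundle_interval_days / 2
    return mean_rediscovery_days > average_wait_for_bundle

# Rediscovery takes ~2 months, bundles ship monthly: bundling is safe.
print(should_bundle(60, 30))   # True
# Rediscovery takes ~3 days: patch straight away instead.
print(should_bundle(3, 30))    # False
```

The single threshold is of course a caricature; a real model would weigh patch-development cost, installation burden on administrators, and the exposure of unpatched users, all of which the article mentions only qualitatively.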
Until recently, assessments of the potential costs of a DoS attack have only evaluated financial losses during the attack itself, such as business lost, or overheads that still have to be paid while the cash stops coming in. "The common wisdom with DoS attacks is they were not that much worse than a power outage," says Ozment.
But Avi Goldfarb at the University of Toronto in Canada wondered whether the effect of a DoS attack might be worse than that. For his analysis, Goldfarb used data from a project that monitored 2700 volunteers with dial-up connections for three months at the beginning of 2000. During that period, a hacker called Mafiaboy orchestrated a three-hour DoS attack against Yahoo. Two weeks later, many users who had been forced to switch sites during the DoS attack were still visiting Yahoo's rivals MSN, AltaVista and Excite, and seemed to have a preference for one of the alternatives (see Chart). Three months after the attack, Yahoo users were still more likely to be visiting rival sites, but by then they had no preference for a single rival, Goldfarb told the WEIS conference. They were simply punishing Yahoo for what they perceived to be bad service during the DoS attack, he says. Overall, Yahoo lost 6 million unique visitors and $250,000 in revenue.
"It's not going to break their bank, but it's big enough that it's worth trying to prevent this," Goldfarb says. Yet, even combined with the $88,854 that Goldfarb estimates were the immediate losses caused by the attack, this is nothing like the millions of dollars Yahoo claimed the attack cost. "Because people can't access the website once, it means they are less likely to come back," Goldfarb says. His paper investigates why people don't come back, he says, and offers companies a way to target their marketing strategies so that people return.
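Putting Goldfarb's two figures together makes the gap with Yahoo's claim concrete: the immediate and longer-term losses sum to well under half a million dollars, against the millions Yahoo reported.

```python
# Goldfarb's two loss estimates for the Mafiaboy DoS attack on Yahoo.
immediate_loss = 88_854      # losses during the three-hour attack itself
longer_term_loss = 250_000   # revenue lost to visitors who defected afterwards

total_loss = immediate_loss + longer_term_loss
print(f"Estimated total cost to Yahoo: ${total_loss:,}")  # $338,854
```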
Source: Eurekalert & others. Last reviewed by John M. Grohol, Psy.D., on 21 Feb 2009.
Published on PsychCentral.com. All rights reserved.