Arik Hesseldahl

2010 Was the Year the Internet Got Scary. Get Used to It.

I can’t remember a year during which computer security stories jumped so readily from the tech and business pages to the front page.

The year 2010 was bookended by two such cases. It opened with Google’s disclosure that it had come under attack in China, an apparent attempt to penetrate the Gmail accounts of certain activists and journalists.

It ended with the WikiLeaks affair, which stemmed from the alleged theft by an Army private of classified documents stored on a government network.

And let’s not forget that mid-year brought the story, as fascinating as it was sobering, of Stuxnet, a computer worm developed by parties unknown (though the smart money is on Israel) that penetrated and ultimately damaged equipment used in the Iranian nuclear program.

Computer hacking, which has for too long evoked in the public mind images of teenagers in basements taking digital joyrides, has finally revealed itself to everyone as what those in the know have long understood it to be: the domain of espionage, sabotage and, possibly, warfare.

In Google’s case, the attacks upon its systems raised questions about where it draws the line with authorities in Beijing about such matters as freedom of speech. When the attack was first disclosed, Google publicly mulled shutting down its operations in China.

Then, in protest, it stopped censoring its search results, giving users in mainland China access to the same results available to residents of Hong Kong. Beijing responded by blocking access to Google’s site.

Finally, Google and China came to a new agreement, and Google appeared the loser in the battle of wills.

Computer security is one of those things that companies and governments say they take seriously, but never really seem to get a grip on, judging by the results.

In any case, there is no firewall or software in existence that could have prevented Bradley Manning from stealing the documents that he is alleged to have given to WikiLeaks. As a low-level Army intelligence analyst, he was a trusted insider who had access to this material in the course of his day-to-day job.

So, it was not technology that failed. The failure was one of internal policies that allowed him access to data not relevant to his position.

Any employee of a midsize company can see how wrong that is. Human-resources documents are accessible only to those who work in that department. The same goes for the files of the legal office, the business-development department and so on.

But it apparently didn’t occur to anyone in government to limit access to what became the WikiLeaks cache only to people who worked for, or closely with, the State Department.
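To put the need-to-know idea in concrete terms, here is a minimal sketch of what such an access check might look like in code. The departments, clearances and the can_read function are hypothetical, made up purely for illustration; they don’t describe any real government or corporate system.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Document:
    doc_id: str
    owning_department: str        # e.g. "state", "hr", "legal"

@dataclass(frozen=True)
class Employee:
    name: str
    department: str
    clearances: frozenset         # departments whose files this person may read

def can_read(employee: Employee, document: Document) -> bool:
    """Need-to-know check: access requires an explicit clearance
    for the department that owns the document."""
    return document.owning_department in employee.clearances

# A hypothetical analyst cleared only for his own department's files
analyst = Employee("analyst", "army_intel", frozenset({"army_intel"}))
cable = Document("cable-001", "state")

print(can_read(analyst, cable))   # False: no State Department clearance
```

The point of the sketch is simply that the check has to be made at all; by most accounts, nothing like it stood between a low-level analyst and the diplomatic cables.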

If it turns out that thousands of companies are better at protecting their business secrets than the U.S. government is, then it’s not for nothing that the Central Intelligence Agency task force investigating the WikiLeaks affair bears the initials “WTF.”

Something similar was true of Stuxnet. One of the reasons the attackers, whoever they are, succeeded was that they used several so-called “zero-day” vulnerabilities in Windows.

These are flaws unknown to the software’s maker, and thus unpatched, that attackers save up for special occasions as a way to open a back door into a computer and then insert a troublemaking payload, like a worm. Zero-day exploits are a fact of life, and once they’re spotted in the wild, they’re usually patched.

The Stuxnet attackers used as many as four zero-day exploits to get their worm onto targeted computers. Microsoft, to its credit, made short work of fixing them once they came to light.

Even so, the Stuxnet worm burrowed its way from Windows machines into industrial control computers known as SCADA systems, which are widely used to run factories, power plants, pipelines and all sorts of other infrastructure essential to modern life.

The worm was designed to find a specific target: the systems controlling a set of as many as 1,000 centrifuges at the uranium-enrichment facility in Natanz, and to make them spin faster than they were supposed to.

The ability to attack industrial computers and cause them to do things they’re not supposed to do has been a lingering fear among security experts for years. Researchers at the U.S. Department of Energy in 2007 looked at the potential for attacks on SCADA systems and proved that it was possible to seize control of an electrical generator and then make it destroy itself.

They also found that many of these systems are connected to the Internet for what seem like good reasons: convenience and cost savings. But those connections have also opened them up to the same kinds of attacks that rattled the Iranian facility in Natanz.

Another Stuxnet-like worm, the thinking goes, could be used to bring down a power grid, or poison drinking water, or shut down an oil or gas pipeline. The good news is that such an attack is expensive–Stuxnet, by one estimate, cost $10 million to create–and requires a lot of specialized insider knowledge.

The bad news is that the Stuxnet source code is circulating in the wild for anyone to study. And as the WikiLeaks case shows, there are often insiders willing to take part in criminal schemes.

The other bad news? Securing these systems won’t come cheap.

If history is any guide, there will be a barrage of sales pitches from computer security companies trying to spin these incidents into opportunities. That’s what security companies do, after all.

But they usually miss the point. How can you plan for a vulnerability you’ve never seen? How can you stop an otherwise trusted insider from abusing their access to sensitive information? Both are fundamentally difficult problems for which there are no easy answers.

Spending money on last year’s security vulnerabilities is like preparing to fight the last war: Circumstances inevitably change, and they certainly will in 2011. New kinds of attacks will arise, and they will catch their targets by surprise.

And the public, like the CIA, will reasonably ask, “WTF?”

The unvarnished fact is that the networked society to which we’ve become accustomed in the last several years has a soft, vulnerable underbelly.

And the more we rely upon it, the more people with a combination of advanced technical skills and repugnant motivations are going to look for ways to turn it against us.

Some will do so as a means of making a personal profit. Others may see it as a way of advancing a political or ideological agenda.

But still others will want to use their skills to do serious harm to innocent people on a large scale.

And the events of 2010 point the way to a world where that’s a more realistic scenario than it ever was before.
