
Communications of the ACM

ACM News

Defending Against the Next Stuxnet


Anticipating a cyberattack.


Credit: marsmet526

Just over a decade ago, Iran's nuclear program was stopped in its tracks when what is widely regarded as the world's first digital weapon, the joint U.S./Israeli-developed Stuxnet worm, destroyed almost 1,000 uranium enrichment centrifuges at the nuclear fuel plant in Natanz, 150 miles south of Tehran.

What rapidly became clear after Stuxnet's existence was first revealed by alert malware investigators in Belarus in 2010 was that digital weapons had moved beyond applications in theft, espionage, and denial of service to generating devastating "kinetic" effects akin to those usually caused by high explosives.

What was not so clear, however, was how the victims of such cyberattacks might try to defend against them in the future, beyond some Iranian bluster in 2019 about "firewalls" that, it was claimed, could neutralize such sabotage. Now, however, in a rare Iranian publication on computer security research, some of its engineers have revealed one way in which they might stymie another Stuxnet.

In a paper published earlier this year in the International Journal of Critical Infrastructure Protection, Iranian researchers describe the deployment of machine learning-based anomaly detection systems in a way that they hope will quash the time advantage Stuxnet gained from what was arguably one of its most devious moves: deploying a replay attack that left the plant's operators oblivious to the worm's destructive actions.

Fueled by zero-days

To understand why they think this, it's worth recapping Stuxnet's modus operandi. The worm used multiple zero-day Windows vulnerabilities, plus two stolen digital certificates, to embed itself deeply inside the code of the air-gapped Natanz plant's Supervisory Control and Data Acquisition (SCADA) system, and the Programmable Logic Controllers (PLCs) that run the centrifuge arrays. The air gap was reportedly jumped via a contractor's USB stick.

The worm then recorded the centrifuges' operating data, such as pressures, temperatures, and rotor speeds, and played that data back to the SCADA system so the centrifuges looked to be operating normally. In the background, however, Stuxnet was forcing the PLCs to inject regular overspeed and underspeed signals to the centrifuge motors, with the accelerations and decelerations producing out-of-bounds resonant forces that shattered nearly 1,000 machines.
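The mechanics of such a replay attack can be sketched in a few lines. This is a purely illustrative toy, not Stuxnet's actual code: record a window of healthy telemetry, then loop it to the operator display while the real process is driven out of bounds.

```python
# Toy sketch of a replay attack (illustrative only, not Stuxnet's code):
# record a window of normal telemetry, then feed it back to the operator
# display on a loop while the real process is pushed out of bounds.
import itertools

def record_normal(sensor_stream, n):
    """Capture n samples of healthy telemetry to replay later."""
    return [next(sensor_stream) for _ in range(n)]

def attack(real_process, recorded):
    """Yield (what_operator_sees, what_is_really_happening) pairs."""
    loop = itertools.cycle(recorded)     # replay the recording forever
    for real_reading in real_process:
        yield next(loop), real_reading   # display lies; process suffers

# demo: normal rotor speed near 1,000 Hz, attacker ramps the real speed up
normal = iter([1000.0, 1001.0, 999.0, 1000.5])
recorded = record_normal(normal, 4)
sabotaged = iter([1000.0, 1200.0, 1400.0, 1600.0])  # real overspeed ramp

for shown, real in attack(sabotaged, recorded):
    print(f"operator sees {shown:7.1f} Hz | actual {real:7.1f} Hz")
```

The operator's screen never leaves the recorded band, which is why a purely display-side or network-side check cannot see the sabotage.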

It's the ability of a replay attack to obscure destructive activity going on in the background that Iranian computer scientists Elham Parvinnia and Mohammad Safari of the Islamic Azad University in Shiraz, and Alireza Keshavarz Haddad of Shiraz University, say they are trying to address.

Stuxnet did two key things to centrifuges, they say: it destructively changed both their internal pressure and their high-energy rotor speeds. "Both were hidden from the operator's eyes. The physical effects of Stuxnet were hidden as it exploited the targeted system," they write in their paper.

So Parvinnia et al. have come up with an answer that they hope will help defend any critical rotating machinery in industry (not just nuclear centrifuges), such as gas turbines, compressors, pumps, engines, and generators.

Their idea is to stop relying solely on a conventional antivirus-style, IT-based intrusion detection system to monitor invasions of the SCADA and PLC networks, since such a system can be gamed by a well-informed replay attack. Instead, they add another layer of sensors to the critical rotating machinery itself, then train a machine learning neural network model to recognize the machine's normal operating behaviors and use it to flag suspicious departures from normality, allowing critical machinery to be shut down even if the IT-based system is still saying 'everything's fine'.
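A minimal sketch of the underlying idea, not the authors' actual neural network model: learn the statistics of normal multi-sensor readings, then flag any new reading that strays too far from that baseline, even while a replayed, healthy-looking feed is fooling the SCADA display. The sensor names, limits, and threshold below are invented for illustration.

```python
# Minimal anomaly-detection sketch (not the paper's model): learn the mean
# and covariance of normal multi-sensor readings, then flag readings whose
# Mahalanobis distance from that baseline exceeds a threshold.
import numpy as np

class EnvelopeDetector:
    def fit(self, normal, threshold=4.0):
        """Learn the baseline from normal (n_samples, n_features) data."""
        self.mu = normal.mean(axis=0)
        self.inv_cov = np.linalg.inv(np.cov(normal, rowvar=False))
        self.threshold = threshold
        return self

    def is_anomalous(self, reading):
        """True if the reading sits outside the learned normal envelope."""
        d = reading - self.mu
        dist = float(np.sqrt(d @ self.inv_cov @ d))
        return dist > self.threshold

# toy data: pressure (bar) and rotor speed (Hz) from an instrumented machine
rng = np.random.default_rng(0)
normal = rng.normal([5.0, 1000.0], [0.1, 5.0], size=(500, 2))

det = EnvelopeDetector().fit(normal)
print(det.is_anomalous(np.array([5.0, 1001.0])))   # in-envelope reading
print(det.is_anomalous(np.array([5.0, 1400.0])))   # overspeed reading
```

Because the detector reads the machine's own sensors rather than the SCADA feed, a replayed display cannot hide the physical anomaly from it; in practice a neural network, as the paper uses, would replace this simple statistical envelope.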

In tests, the Iranian team detected direct intrusion attacks on a sensor-instrumented, three-stage centrifugal propane gas compressor. They say they were able to sense anomalies with high precision, in sufficient time to take steps to mitigate the attack, such as simply shutting the device down.

Geopolitics meets technology

However, when approached for comment, specialists in industrial machinery cyberprotection say the Iranian approach does not seem workable. One expert even refused to comment, saying the research was not rigorous enough, while another described the paper as "hype" fueled by geopolitics, not engineering requirements.

Dmitri Gazizulin, hybrid analytics engineer at Gothenburg, Sweden-based rotary machinery manufacturing firm SKF, highlighted some issues with the Iranian approach. "The idea of using machine learning is good, as long as you have enough historical data and the operational limits and conditions of the machine are well-defined, in order to define 'normal' behavior," he says. "But there are some drawbacks. For example, if the operational limits and conditions of the machine are very wide, and there are a great many features input to the machine learning model, some combinations of those features within the 'normal' framework can still be harmful for the machine."

For instance, he says, a pressure reading and the degree to which a valve is open might individually be within bounds, "but for how long can a machine operate in such a state?" he asks.
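Gazizulin's point can be made concrete with a hypothetical example (the limits and the joint rule below are invented for illustration): each reading passes its own limit check, yet the combination of high pressure with a nearly closed valve is one the machine cannot sustain.

```python
# Hypothetical illustration: per-feature limit checks pass while the
# combination of features is harmful. All limits here are invented.
PRESSURE_LIMITS = (2.0, 10.0)   # bar
VALVE_LIMITS = (0.0, 100.0)     # percent open

def within(value, limits):
    lo, hi = limits
    return lo <= value <= hi

def per_feature_ok(pressure, valve):
    """Each sensor checked against its own bounds in isolation."""
    return within(pressure, PRESSURE_LIMITS) and within(valve, VALVE_LIMITS)

def combination_ok(pressure, valve):
    """Joint rule: high pressure demands a reasonably open valve."""
    return not (pressure > 8.0 and valve < 10.0)

reading = (9.5, 5.0)  # 9.5 bar with the valve only 5% open
print(per_feature_ok(*reading))   # each feature individually in bounds
print(combination_ok(*reading))   # but the combination is harmful
```

A model trained only on wide per-feature envelopes would accept this state; catching it requires learning (or encoding) the joint behavior of the features, which is exactly where the training-data burden grows.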

The interface illusion

Replaying recorded data to hide background skulduggery is a well-known trick in the security arena, says Sujeet Shenoi, a professor of computer science and chemical engineering at Oklahoma's University of Tulsa, who helps governments and organizations worldwide secure their critical infrastructures. "We call it an interface illusion. The human-machine interface the Natanz controllers were looking at was getting bad, faked data. When the attack occurred, that was playing on a loop," he says.

Avoiding replay attacks will take a lot more than the plan hatched by the Iranian team, Shenoi says. Because manufacturing tolerances make every rotating machine slightly different, a user would have to construct a machine learning model for every single machine. Also, since each rotating machine experiences wear over time, Shenoi says, the model would need continual updating, making the plant monitoring task a profoundly complex, difficult-to-scale AI project.

Says Shenoi, "It's just not feasible to extend this to something like the large arrays of centrifuges they have in Natanz; they would need one of their intrusion detection systems on each centrifuge. And even if somehow they could do that, how do you defend that implementation, especially when you consider what Stuxnet did? It compromised everything."

Asked via email to elaborate on what they regard as innovative in their approach, the Iranian team did not reply.

However, plant security operatives at Natanz — a hardened, underground facility — have more pressing concerns than the finer points of anomaly detection architectures. Last July, a fire at the Natanz plant wrecked a centrifuge assembly building on the surface. Earlier this year, the electrical supply to the underground centrifuge halls running a new generation of faster machines was destroyed in an explosion, one thought to have been set by an Israeli-backed saboteur that Iran now claims to be seeking to track down through Interpol.

Ironically, or perhaps by design, some reports suggest the simple expedient of cutting the plant's power might have caused sudden, out-of-bounds centrifuge decelerations that destroyed the whirling machines in much the same way as the hyper-complex Stuxnet did.

 

Paul Marks is a technology journalist, writer, and editor based in London, U.K.

