Predicting the future is invariably perilous. One need only watch vintage, post-World War II movies, TV shows, and advertising videos that look ahead to the futuristic 1990s to see this in action, not to mention enjoy some hilarious entertainment. In a 1967 ad produced by Philco-Ford (http://hdmovies.org/hd-retro-a-1960s-view-of-the-future/), we see a future where a “housewife” orders products from her favorite store via a video link directly to the retailer. Not bad, actually. We then learn that a separate video system delivers the bill to her husband. He communicates with the store by scribbling a note and inserting it into a machine. Early fax or the first handwritten e-mail?
This ad and others like it depicting the future of communications and transportation get one basic technological forecast right—a move away from centralization and toward distributed actors. They also get many of the details wrong, usually by optimistically overestimating some developments and conservatively underestimating others.
Today the “future” is seen through the lens of a so-called “smart infrastructure.” It is tempting to talk with breathless enthusiasm about cars that recognize one another, bridges that detect their own weaknesses, and power grids that exchange data with home appliances. Will today’s promise of a smart infrastructure turn into reality or be as kitschy as the Jetsons-like promises from yesteryear?
You could say that calling any infrastructure “smart” is a misnomer; as we introduce next-generation technology to the physical plant, the objects themselves don’t become more intelligent. Rather, human intelligence is being extended from a centralized location and distributed outward toward the edges.
The promise is that our infrastructure will more closely match our needs and scale gracefully. The benefits could pay dividends in terms of efficiency, the economy, and the environment. The threat is that by offloading more of our human intelligence into circuitry, we also pass along our mistakes and cede control to our automated servants, sometimes with deadly consequences.
Elastic Smarts
Before looking at the particulars of smart grids, buildings, roads, and vehicles, it is worth considering the 10,000-foot view of infrastructure planning. Perhaps the single most important variable when designing any infrastructural system—from home air conditioning to a national power grid to an interstate highway—is matching capacity to demand.
Provide too little capacity and good money is wasted on poor results. The air conditioner can’t adequately cool the house. The highway with too few lanes becomes a parking lot. The power grid without enough juice subjects its customers to rolling blackouts. But provide too much capacity and you’re wasting not only money but energy, too, with potentially serious consequences. Generating unnecessary energy to power excess heating, cooling, lighting, and computing operations yields a poor return on investment for both our pocketbooks and the environment.
Today, much of the infrastructure—though impressive in its own right—can be described as “dumb” in the sense of being nonadaptive to changing needs, silent, and unconnected to a larger network. We employ models, predictions, and estimates to establish a fixed or at least relatively inelastic capacity, often resulting in waste. In contrast, most types of smart infrastructure are built around a shared principle—linking capacity to demand as closely as possible.
One way to highlight the inefficiency of the current infrastructure is to look back at March 28, 2009, when cities around the world participated in the World Wide Fund for Nature-sponsored “Earth Hour,” shutting off most of their lighting during one evening hour. Although the event arguably raised awareness about energy use, skeptics pointed out that it probably did little to reduce actual energy consumption due to the inelastic way most energy around the world is produced. Today, most power is generated by plants producing a fixed number of megawatts during a given time period. This output is calibrated by plant engineers based on historical usage patterns, but the plants cannot respond in real time to constantly fluctuating demand. Shutting off the lights for a night might reduce your own power bill (slightly), but does not reduce the amount of power (or associated pollutants) the local plant generates during that time.
If “Earth Hour” were indeed practiced nightly, there would be a cumulative pattern to justify lowering the plant’s output. While asking people to manually shut off their lights for an hour every night is not practical, the principle is something that the planned smart infrastructure hopes to tackle from both ends of the equation—within our buildings and at the power plant.
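To see how the mismatch adds up, here is a back-of-the-envelope sketch in Python that follows this article's simplified picture of a plant calibrated to historical peak demand; the demand profile and plant size are invented for illustration.

```python
# Hypothetical hourly demand (MW) for a small service area over one day.
hourly_demand_mw = [620, 580, 560, 555, 570, 640, 760, 880,
                    940, 960, 970, 990, 1000, 990, 970, 950,
                    930, 960, 980, 940, 880, 800, 720, 660]

# A "dumb" plant calibrated to the historical peak keeps producing that much all day.
fixed_output_mw = max(hourly_demand_mw)

wasted_mwh = sum(fixed_output_mw - d for d in hourly_demand_mw)
generated_mwh = fixed_output_mw * 24

print(f"Fixed output: {fixed_output_mw} MW")
print(f"Generated but never used: {wasted_mwh} MWh "
      f"({wasted_mwh / generated_mwh:.0%} of the day's generation)")
```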
Power of Smart
Any discussion of smart infrastructure begins with the electrical grid. Energy powers every other form of smart technology, but without the grid itself being smart it will be difficult to roll out the most efficient upgrades. What makes a power grid “smart”?
A power grid is not unlike a computer network; a (complex) series of nodes transmits “stuff” from a supply side to end-user clients. Instead of data, the “stuff” is electricity. On the demand side, the clients on a computer network are able to talk to the servers on the supply side through a relatively sophisticated language. This allows both ends of the network to negotiate their transactions with a high degree of specificity. But this is also where the analogy to the power grid ends.
With today’s infrastructure for electrical distribution, there is only very coarse communication between the demand side and the supply side. Think of a house as if it were sucking juice through a straw; the more electrical power the house needs, the more it pulls through the straw. Assuming there is enough power in the grid, the electricity keeps flowing. There is a meter on the house registering how much has moved through the straw, but that’s about it. What if the house could actually talk to the grid in a nuanced language beyond simply “give me more”?
Some residents of Boulder, CO, are helping answer this question through a program called “SmartGridCity” developed by regional power provider Xcel Energy. On the demand side, the hub of a smart grid system is a digital, two-way power meter. Today’s power meters simply count up the kilowatt-hours that flow to the house and require either self-reporting to the utility or, less efficiently, a human representative to visit the premises.
A two-way digital power meter reports usage directly to the utility. Beyond that, it provides real-time data to users (through a Web browser) about power consumption. This instant feedback itself can have a powerful effect on demand reduction—like printing calorie counts on fast-food menus—motivating customers to alter their behavior to lower their bills.
A digital meter can also take full advantage of variable pricing models, under which power is cheaper when demand is lower. Variable pricing also allows customers to choose their source of power; for example, they may tell the smart meter they want to buy 10-percent wind-sourced power, charged at a different rate from, say, coal-fueled power. In the reverse direction, smart meters promise to make it easier for buildings to supplement their power draws using on-site sources like solar and wind. Digital meters could accurately report back to the utility any excess power the customer is producing (rather than consuming) and coordinate power usage between home appliances and multiple sources of energy.
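As a rough illustration of what that two-way negotiation could look like in software, the sketch below models a meter that reports net usage and defers a flexible load when the utility signals a high price. The class, price threshold, and message fields are hypothetical, not Xcel's implementation.

```python
from dataclasses import dataclass

@dataclass
class PriceSignal:
    cents_per_kwh: float      # current price broadcast by the utility
    source_mix: dict          # e.g., {"wind": 0.10, "coal": 0.90}

class SmartMeter:
    """Hypothetical two-way meter: reports net usage and defers flexible
    loads (say, a water heater) when the utility's price signal is high."""

    def __init__(self, price_threshold_cents: float = 12.0):
        self.price_threshold_cents = price_threshold_cents

    def net_usage_kwh(self, consumed_kwh: float, generated_kwh: float) -> float:
        # Negative values mean the home is feeding excess solar or wind
        # production back to the grid.
        return consumed_kwh - generated_kwh

    def should_defer_flexible_load(self, signal: PriceSignal) -> bool:
        return signal.cents_per_kwh > self.price_threshold_cents

meter = SmartMeter()
signal = PriceSignal(cents_per_kwh=15.0, source_mix={"wind": 0.1, "coal": 0.9})
print(meter.net_usage_kwh(consumed_kwh=1.8, generated_kwh=0.6))  # net draw from the grid, kWh
print(meter.should_defer_flexible_load(signal))                  # True: wait for cheaper power
```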
In aggregate, across thousands or millions of customers, these digital enhancements should make demand significantly more elastic, bringing it closer in line with supply. Better yet, the two-way grids being built in Boulder, as well as another by Duke Energy in Charlotte, NC, are being designed to “know” about anomalies in the system. Today, most power outages are reported directly by customers, but, with sensors built into critical components, utilities will know when and where power has stopped flowing and, in some cases, anticipate problems before they lead to an outage.
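One simple way such anomaly detection could work, sketched below with invented data and timeouts, is for the utility to watch for meters or line sensors on the same feeder that all go silent at once.

```python
import time

# Invented data: feeder -> {meter_id: time of last heartbeat report}.
last_heartbeat = {
    "feeder-12": {"m-101": time.time() - 30, "m-102": time.time() - 45},
    "feeder-13": {"m-201": time.time() - 900, "m-202": time.time() - 870},
}

HEARTBEAT_TIMEOUT_S = 300  # treat a meter as silent after 5 minutes

def likely_outages(heartbeats, now=None):
    """Flag feeders where every meter has gone silent at once."""
    now = now or time.time()
    return [feeder for feeder, meters in heartbeats.items()
            if all(now - t > HEARTBEAT_TIMEOUT_S for t in meters.values())]

print(likely_outages(last_heartbeat))  # ['feeder-13']
```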
Building Smart
Residential and commercial buildings in the U.S. consume a third of the country’s domestic energy production. Drill down further and more than a quarter of each building’s energy consumption comes from climate control and lighting. How much of the lighting illuminates spaces that don’t actually need it? How much of the cooling is unnecessary on a given day? There are no hard figures to answer these questions directly, but it’s not a stretch to appreciate that across millions of buildings, inefficiency adds up.
A large part of the problem is again the slack between demand and supply. Consider the trusty old thermostat. In its day, the typical unit was an early example of smart infrastructure. Without one, you would have to manually turn the heating or cooling system on and off. Because the thermostat measures the current temperature, it can respond to changing conditions and can even be programmed to follow a schedule. Pretty smart, right?
But the average thermostat is still fairly coarse. For one thing, most of them measure temperature only at their physical location, which is often a distant corner and not always the ideal place for taking the temperature of a room. Plus, most are not able to communicate with each other—or with climate-control systems—beyond a simple “turn on or off” message.
Smart-energy players like the ZigBee Alliance and Trilliant are pushing next-generation thermostats that leverage the power of bidirectional communication—or networking. Remote sensors can take readings from multiple locations. Because the thermostats themselves are able to communicate over networks, they can be controlled remotely by end users, energy utilities, and sophisticated software that manages the thermostat based on conditions like weather forecasts rather than just a fixed schedule.
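The sketch below suggests the kind of decision logic such a networked thermostat could run: averaging several remote sensors and pre-cooling ahead of a forecast heat wave. The setpoints, thresholds, and forecast input are all invented for illustration.

```python
def target_setpoint_f(occupied: bool, forecast_high_f: float) -> float:
    """Choose a cooling setpoint from occupancy and the day's forecast."""
    if not occupied:
        return 82.0            # relax the setpoint in an empty building
    if forecast_high_f >= 95.0:
        return 72.0            # pre-cool ahead of a very hot afternoon
    return 75.0

def should_cool(sensor_readings_f, occupied, forecast_high_f):
    # Average several remote sensors instead of one reading in a far corner.
    room_temp = sum(sensor_readings_f) / len(sensor_readings_f)
    return room_temp > target_setpoint_f(occupied, forecast_high_f)

print(should_cool([74.5, 76.0, 77.5], occupied=True, forecast_high_f=97.0))   # True
print(should_cool([74.5, 76.0, 77.5], occupied=False, forecast_high_f=97.0))  # False
```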
The same principles can be applied to lighting. Sometimes called “integrated lighting control,” networked lighting fixtures adapt to a changing environment. Natural light sensors dim artificial lights when sunlight is available. Occupancy sensors shut off lights when no one is in the room. Remote control software coordinates light usage throughout an entire building or potentially even a group of buildings.
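The lighting control loop is conceptually similar. A minimal sketch, with illustrative target light levels and lux values, combines an occupancy sensor with daylight dimming.

```python
def light_level_percent(occupied: bool, daylight_lux: float,
                        target_lux: float = 500.0) -> float:
    """Dim artificial light in proportion to available daylight;
    switch it off entirely when the room is empty."""
    if not occupied:
        return 0.0
    shortfall = max(target_lux - daylight_lux, 0.0)
    return round(100.0 * shortfall / target_lux, 1)

print(light_level_percent(occupied=True, daylight_lux=350.0))   # 30.0 (dimmed)
print(light_level_percent(occupied=True, daylight_lux=600.0))   # 0.0 (sunlight suffices)
print(light_level_percent(occupied=False, daylight_lux=100.0))  # 0.0 (room is empty)
```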
The common theme in these smart-building technologies is communications—specifically, digital networked communications. One obstacle to ubiquitous building intelligence has been the lack of communications standards. For years, many smart-building contractors have used proprietary signaling systems, raising the cost of smart solutions and discouraging their adoption altogether. A lack of standards also limits the potential ecosystem of available products and technical support.
In September 2008 an alliance of major technology and power-production players—including Cisco, Duke Energy, and Sun Microsystems—formed the Internet Protocol for Smart Objects (IPSO) Alliance. With more than 50 members as of summer 2009, the nonprofit alliance hopes to promote a common IP-based protocol for myriad types of sensors and actuators, including those used for lighting, heating, and even pressure. The challenges of extending IP this way include supporting local networks of hundreds or thousands of nodes and running the protocol on relatively low-power devices like remote sensors.
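In practice, an IP-based sensor report can be as lightweight as a single UDP datagram. The sketch below is only notional; its address, port, and payload format are invented and are not part of any IPSO specification.

```python
import json
import socket

def send_reading(node_id: str, sensor: str, value: float,
                 controller=("192.0.2.10", 5050)):   # documentation-range address
    """Send one compact sensor report to a building controller over UDP."""
    payload = json.dumps({"node": node_id, "sensor": sensor, "value": value})
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as sock:
        sock.sendto(payload.encode("utf-8"), controller)

# Example: a duct pressure sensor reporting once per minute.
send_reading("duct-7", "pressure_pa", 101325.0)
```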
Intelligent Transportation
Whether you drive a car or take public transportation to get around, you’re likely aware how the experience degrades when demand and supply are out of sync. Is the bus on time? Is it full? When does the next subway train arrive? Is traffic backed up to a standstill on the New Jersey Turnpike? (“Yes” much too often.)
Today’s traveler can turn to a number of (mediocre) navigation tools, including toll-free phone numbers, websites, text-message updates, and radio traffic reports. Too often, though, the information is not granular enough to address the situation at hand, nor does it do much to influence the behavior of the transportation systems themselves.
For bus riders, knowing whether the bus is early or late can make the difference between catching it and having to fall back on an alternative, like walking or a taxi. A number of municipal bus services provide electronic schedules, including some available by text message, but more often than not these are published schedules, not real-time reports, though this is changing. For example, bus riders in Chicago and Gainesville, FL, use their cellphones to receive (relatively) instant updates on specific bus routes. Students on the campus of the Georgia Institute of Technology in Atlanta check solar-powered, Wi-Fi-enabled displays at each bus stop to see the exact location of their next ride.
Both remote and on-site technologies have been combined—with flair—in the EyeStop, a sleek glass bus shelter and active-signage system designed at MIT to be deployed in Florence, Italy, in 2010. In addition to text and Twitter updates for a bus, the EyeStop displays a live route map, along with LED lighting that intensifies as the bus approaches, allowing latecomers to see from a distance whether a bus is near.
Smartening up public transportation enjoys an important advantage over private vehicles; by their nature, buses and trains already function as nodes in a network, even communicating and coordinating with a central home base. The automobile, on the other hand, is a lonely island.
Before the cellphone, the car truly was an isolation chamber. Today, cars are increasingly connected to the outside world; we chat on our phones and send texts (though we shouldn’t when driving), and services like OnStar summon help if we (regrettably) crash doing these things. Curiously, we remain disconnected from one obvious network: other cars, not to mention the road itself.
Highways resemble a network topology, with cars as nodes (or clients) in the network. Unfortunately, today’s vehicular infrastructure is anything but smart. Cars know little about road conditions and nothing about other nodes. Cars have plenty of sensors to monitor conditions under the hood, but the only one that really knows what’s going on outside is the airbag sensor, which is useful only after, heaven forbid, you smash into something.
Enter V2V and V2I: vehicle-to-vehicle and vehicle-to-infrastructure communications. The underlying protocol, called dedicated short-range communications, or DSRC, operates at ranges of up to a kilometer between nodes. Today many drivers already use V2I technology via electronic tags—a form of RFID—to speed through toll booths. Vehicles that talk to one another could exchange all sorts of useful data (such as speed and acceleration) and alert drivers in real time to problems up ahead, including airbag deployments and crashes. Because a smart-vehicle network would know the position of nearby peers, drivers could be alerted when following distances are too short or given enhanced displays during low-visibility conditions.
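To make the idea concrete, the sketch below shows the sort of data a V2V beacon might carry and how a receiving car could act on it. The field names, units, and thresholds are invented; real DSRC message formats are standardized separately.

```python
import math
from dataclasses import dataclass

@dataclass
class Beacon:
    vehicle_id: str
    x_m: float               # position on a local grid, in meters
    y_m: float
    speed_mps: float
    airbag_deployed: bool

def alerts(own: Beacon, neighbors, min_gap_m: float = 25.0):
    """Turn nearby beacons into driver warnings."""
    warnings = []
    for other in neighbors:
        gap = math.hypot(other.x_m - own.x_m, other.y_m - own.y_m)
        if other.airbag_deployed:
            warnings.append(f"crash reported by {other.vehicle_id}, {gap:.0f} m ahead")
        elif gap < min_gap_m:
            warnings.append(f"{other.vehicle_id} only {gap:.0f} m away; increase following distance")
    return warnings

me = Beacon("car-A", 0.0, 0.0, 27.0, False)
nearby = [Beacon("car-B", 18.0, 2.0, 25.0, False), Beacon("car-C", 400.0, 5.0, 0.0, True)]
print(alerts(me, nearby))
```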
At MIT, math researchers have found that “phantom” traffic jams—seemingly caused by nothing—are actually the cumulative ripple effect of small variations in human judgment. Though drivers would likely resist ceding total control to a car computer, a vehicle network could someday suggest an ideal speed under current traffic conditions, assisting human decision making enough to reduce back-ups and perhaps even some crashes.
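A toy simulation conveys the ripple effect. In the sketch below, a deliberately crude car-following model with invented parameters (not the MIT researchers' model), the lead driver taps the brakes once, and cars far behind it, which never saw the original cause, also end up slowing down.

```python
N_CARS, STEPS, DT = 10, 120, 0.5
speed = [30.0] * N_CARS                    # speeds in m/s; everyone starts at 30
pos = [-40.0 * i for i in range(N_CARS)]   # car 0 leads, with 40 m headways behind it
min_speed = [30.0] * N_CARS                # lowest speed each car ever reaches

for t in range(STEPS):
    # The lead driver taps the brakes once at t == 5, then recovers gradually.
    speed[0] = 22.0 if t == 5 else min(30.0, speed[0] + 1.0 * DT)
    min_speed[0] = min(min_speed[0], speed[0])
    for i in range(1, N_CARS):
        gap = pos[i - 1] - pos[i]
        # Followers over-brake slightly when the gap closes, and re-accelerate slowly.
        desired = 30.0 if gap > 35.0 else speed[i - 1] - 2.0
        speed[i] = max(speed[i] + max(min(desired - speed[i], 1.0 * DT), -4.0 * DT), 0.0)
        min_speed[i] = min(min_speed[i], speed[i])
    for i in range(N_CARS):
        pos[i] += speed[i] * DT

# Cars near the back of the line slow down even though only car 0 braked.
print([round(s, 1) for s in min_speed])
```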
A car connected to everything, from road sensors to traffic lights to fellow drivers, is probably decades away. For one thing, car buyers are highly price sensitive, and vehicles cannot all be replaced at once. The next logical leap—autonomous cars—may still be the stuff of futuristic Disney animations like the 1958 “Magic Highway USA.” But a glimpse of car-as-network-node technology has begun to show up in the real world. “The Dash,” a portable GPS unit for cars, updates its traffic-routing data through a sort of peer-to-peer technology. It reports its speed and position in real time to a server, enabling other Dash units to infer local conditions and choose navigation routes accordingly.
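The server-side idea is simple to sketch: pool recent anonymous speed reports by road segment and hand the aggregates back to devices planning a route. The segment names, reports, and averaging scheme below are invented, not Dash's actual system.

```python
from collections import defaultdict

# Invented reports: (road segment, observed speed in mph) from many vehicles.
reports = [
    ("I-95_N_mile_12", 18), ("I-95_N_mile_12", 22), ("I-95_N_mile_12", 15),
    ("US-1_N_mile_12", 41), ("US-1_N_mile_12", 38),
]

def segment_speeds(reports):
    """Average recent speed reports for each road segment."""
    sums = defaultdict(lambda: [0.0, 0])
    for segment, speed in reports:
        sums[segment][0] += speed
        sums[segment][1] += 1
    return {seg: total / count for seg, (total, count) in sums.items()}

speeds = segment_speeds(reports)
print(speeds)
print("Slower right now:", min(speeds, key=speeds.get))
```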
Smart Safety
Developing smart infrastructure necessarily involves outsourcing some decision making from humans to machines. Computers excel at repetitive tasks without fatigue. Computers can be dedicated to certain tasks, allowing them to respond more quickly than people, potentially improving efficiency and lowering costs. Computers can also lower costs by replacing human staff (in other words, cutting jobs). But there are dangers, too. In recent months alone, two high-profile and deadly crashes have raised questions about the relationship between human operators and “smart” infrastructure. On June 22, 2009, a Washington, D.C., Metro train crashed into another train, killing nine people. And just weeks earlier, on June 1, an Air France commercial jet crashed into the Atlantic Ocean off the northeast coast of Brazil, killing all 228 aboard. Though these disasters involved different forms of transportation, they had something in common—some form of computerized control was believed to have played a role.
It must be said that this subject is ripe for tabloid hysteria, and many millions of people move about the world each day with the assistance of “smart” technology without incident. But the fact is that computer software is only as good as the intelligence coded into it, and human beings are fallible—whether at the controls of a vehicle or writing the software behind it. Although the final reports on both tragedies are pending, the evidence suggests not only that digital decision making was flawed (in the Air France case, a leading theory implicates malfunctioning speed sensors), but also that systems designed to actively limit human intervention left the operators with their hands tied before tragedy struck.
Critics of a headlong rush into smart infrastructure argue that putting too much decision making into the hands of electronic circuits makes us humans over-reliant on them. We come to trust the computer more than our own judgment, a problem humorously illustrated in a famous episode (“Dunder Mifflin Infinity”) of the American television series “The Office,” in which the dimwitted lead character so thoroughly trusts his vehicle GPS system that, despite mounting evidence to the contrary, he blindly follows its instructions into a lake.
Securing the Infrastructure
Because smart infrastructure is all about decentralization, it is ultimately an exercise in networking. Networks link sensors. Networks link controls. But converting the whole modern world into giant networks connecting countless nodes poses a risk beyond software bugs and tragically wrong digital decision making—specifically, security. With today’s relatively “dumb” infrastructure, the only way an actor with bad intent in, say, North Korea (or China or Iran or, to be fair, Canada, Idaho, or the U.K.) could disrupt the power to your home or office would be to physically cut the wires. But just as a malware script running on a server in Romania might infect a PC in Australia, networked infrastructure brings everyone closer together—the good guys and the bad guys.
U.S. government plans for the smart grid, in particular, are the focus of intense scrutiny for security vulnerabilities. In April U.S. officials reported that hackers (apparently from China and Russia) had penetrated the U.S. power grid. Though it appeared they only “looked around,” perhaps to gather information or probe vulnerabilities, there is a genuine risk that a “smarter” grid will, by virtue of increased digital sophistication and reliance on computerized control, provide even more opportunities for hackers to compromise the system or disrupt performance.
One source of potential concern in developing a new smart grid is the two-way residential meter itself. Because the meters are expected to be relatively inexpensive consumer electronics, hackers (given enough time and persistence) may eventually be able to reverse engineer them. Decoding their communications could supply useful information about the network exchanges taking place and perhaps how to interrupt or mimic the data packets. Also, because the proposed smart grid will be based on IP networks, it will likely “speak” an open, well-known language rather than a secret, closed, proprietary one.
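One standard countermeasure, sketched below in general terms and not tied to any particular smart-grid specification, is to authenticate every meter message with a keyed hash so that a spoofed or tampered packet can be rejected even when the packet format itself is public knowledge.

```python
import hashlib
import hmac
import json

SHARED_KEY = b"per-meter secret provisioned by the utility"  # illustrative only

def sign_reading(meter_id: str, kwh: float, timestamp: int) -> dict:
    """Attach a keyed hash (HMAC-SHA256) to a meter reading."""
    body = json.dumps({"meter": meter_id, "kwh": kwh, "ts": timestamp}, sort_keys=True)
    tag = hmac.new(SHARED_KEY, body.encode(), hashlib.sha256).hexdigest()
    return {"body": body, "tag": tag}

def verify_reading(message: dict) -> bool:
    """Reject any message whose body does not match its tag."""
    expected = hmac.new(SHARED_KEY, message["body"].encode(), hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, message["tag"])

msg = sign_reading("meter-4711", kwh=1.25, timestamp=1247000000)
print(verify_reading(msg))                           # True
msg["body"] = msg["body"].replace("1.25", "99.0")    # a tampered or spoofed packet
print(verify_reading(msg))                           # False: rejected
```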
Critics of IP networks say that because the technology has been around so long, hackers already have mature tools for working with it, and the underlying details are openly published. But defenders of IP say it is precisely because IP is mature and open that the best defenses can be built by researchers, the government, and utilities. With a closed and proprietary technology, black-market incentives could motivate the leaking of critical secrets. As it stands, the Obama administration’s proposal for a smart grid seems to side with the pro-IP camp.
An additional challenge to securing the U.S. power grid is that there is no one single grid, but rather a heterogeneous patchwork of interconnected local and regional grids. Some 200 utility providers, large and small, operate around the U.S. Securing the future smart grid means ensuring they adhere to the same minimum set of security standards and interoperable technologies.
Despite the complexity, early signs suggest the Obama administration has put security near the top of the smart-grid agenda. Some $4.3 billion was allocated through the American Recovery and Reinvestment Act of 2009, which President Obama signed in February. In June, a consortium of power utilities that organizes its efforts through the North American Electric Reliability Corporation announced agreements on key standards for implementing grid security. The industry is especially motivated to stay ahead of the lawmakers, perhaps to lessen the burden of regulation that might otherwise follow.
Smart Promise
It’s probably naïve to imagine a smart grid will be as secure as the legacy “dumb” grid. Likewise, smart sensors on bridges and trains might fail, and software bugs could lead to more accidents large and small. Doom and gloom aside, the future smart infrastructure represents a continuation of a deal we already accept in modern society—that, ultimately, the good outweighs the bad; that the smarter our infrastructure, the more efficiently we operate as businesses and as consumers, saving money, improving our daily lives, and extending our natural resources.
Figures
Figure. The Hearst Tower in New York collects rainwater on its roof and stores it in a basement tank for watering plants and trees inside the building. Natural light sensors are wired throughout to dim interior lights when sunlight is available.
Figure. The new I-35W St. Anthony Falls Bridge, Minneapolis, MN, includes 323 sensors for monitoring everything from an automated anti-icing system to bridge security.
Figure. The EyeStop bus pole includes touch-sensitive e-ink and screens, as well as sensing technologies and interactive services. Riders can plan a bus trip on an interactive map, surf the Web, monitor their real-time exposure to pollutants, and use their mobile devices as a shelter interface (http://senseable.mit.edu/eyestop).