
Embedded Computation Meets the World Wide Web

An infinitely accessible Web-linked physical environment united by a multitude of tiny servers could mean a life of information ease.

Two important trends are converging to help drive the radical transformation of how information flows in our world. First, the computer industry’s remarkable ability to squeeze ever-more transistors into an ever-smaller area of silicon is increasing the computational abilities of our devices while simultaneously decreasing their cost and power consumption. Second, the proliferation of wired and wireless networking—spurred by development of the Web and demand for mobile access—is enabling low-cost connectivity among computing devices. It is now possible to connect not only our desktop machines but every computing device into a truly worldwide network linking the physical world of sensors and actuators and the virtual world of information utilities and services. What amazing new applications and services will result? How will ubiquitous computation alter our everyday lives? Will the invisible computing paradigm finally be possible? We address these questions here in light of the new wave of embedded devices.

Never before have so many supporting technologies been available to assemble a network infrastructure pervading everyday life on such a scale. Moreover, over the past 10 years, we have migrated many of our work practices to electronic media. Even such mundane consumer products as ovens, toasters, and dishwashers have been automated through embedded computation. In fact, 98% of the computing devices sold today are embedded in products whose use does not reveal them to their users. Now, yet another revolution is about to take place in which we gain immense new value by connecting all these computational components [11]. But this opportunity also represents important challenges for building useful services, designing more robust and easily manageable systems, and guaranteeing user privacy and security.

The Internet is the most vital of these components. However, it is worth remembering that any new opportunity requires many pieces to be in place before a revolution can begin. For instance, the collection of networks that formed the early Internet, including NSFNET and ARPANET, was created as long ago as the 1970s. The name Internet was not even coined until the mid-1980s, as all these emerging computer links began to be viewed as a single network, unified by the TCP/IP protocol set. By 1994, the Internet had become the network of choice for the rest of the world, thanks to the simplicity of the Web’s protocols. Only then was the Net rapidly adopted outside its traditional environments of university computer science departments and corporate research laboratories. The expanded user community in turn fueled the expansion of the Internet itself: the larger the user population grew, the more attractive the Net became to individuals and businesses alike. This trend continues with an important additional accelerating factor: Moore’s Law, which predicts that the number of devices that can be fabricated on a chip doubles every 18 months [5].

In the mid-1980s, the TCP/IP protocol required the computational resources typically found on a desktop or workstation-class machine. The microprocessors that could be embedded at that time were considerably less capable than those of today, and supporting a full protocol stack at reasonable performance was beyond those underpowered computational engines. Internet connectivity was thus a costly and carefully rationed resource.

The last 15 years, with Moore’s Law in effect, have seen performance double 10 times over. A microcontroller costing just a few dollars, combined with one megabyte of memory, is as capable as a desktop computer was in 1985. Such devices can now support a compact embedded operating system, interface to a 10Mbps network, and run a TCP/IP stack and a Web server interacting through the ubiquitous HTTP protocol. We expect further advances in the miniaturization and reduced power consumption of these components. But already, the power budget for these devices can be as low as 50mW, small enough for portable devices to be battery powered. Further utility has been achieved by replacing wired with wireless networking technology. Wireless connectivity, also possible within a reasonable power budget, enables a world in which everything can be connected to everything else via a unified global network. Meanwhile, new standards and mass-produced transceivers continue to drive down the cost of wireless connectivity to levels comparable to that of the microcontrollers themselves.

These trends point to a new networked world in which we exploit the synergy afforded by literally billions of interconnected devices, thousands per person. These devices are on the verge of being embedded universally throughout our work environments; their modular composition will accomplish many tasks more efficiently than the relatively expensive monolithic solutions those tasks require today.


Communication Technologies

Here, we emphasize the technologies driving this revolutionary reorganization of our information systems, united by standardized, ubiquitous protocols. These systems gather information, deliver it to user services through wired and wireless networks, and present distilled information and events to their users.

Embedded Web servers. What benefit do we derive from embedding a Web server into an appliance? The Web’s basic functionality enables client programs and browsers to fetch Web pages (files in the HTML format) and display them in a browser window. Hyperlinks within a file can further reference other files that are either local or remote to that site. More important for an appliance, a link may also reference a Common Gateway Interface (CGI) script that executes and returns HTML to the browser. Because these scripts generate HTML dynamically, they may incorporate real-time data derived from sensors. Thus, any appliance connected in this way can be monitored through a CGI script and the results presented to a user in a convenient graphical form. Similar mechanisms can also be used to control an appliance directly from a remote browser.
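To make the pattern concrete, here is a minimal sketch in Java of what such a dynamic page amounts to: each request runs code that samples a sensor and generates HTML on the fly. This is a sketch only; the JDK’s built-in HttpServer stands in for an embedded HTTP stack, and readTemperature() is a hypothetical stand-in for a real sensor driver.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpServer;
    import java.io.OutputStream;
    import java.net.InetSocketAddress;

    // Sketch of a CGI-style dynamic page: HTML is generated per request,
    // folding in a live sensor reading.
    public class SensorPage {
        public static void main(String[] args) throws Exception {
            HttpServer server = HttpServer.create(new InetSocketAddress(8080), 0);
            server.createContext("/status", (HttpExchange ex) -> {
                String html = "<html><body><h1>Oven status</h1><p>Temperature: "
                        + readTemperature() + " &deg;C</p></body></html>";
                byte[] body = html.getBytes("UTF-8");
                ex.getResponseHeaders().set("Content-Type", "text/html");
                ex.sendResponseHeaders(200, body.length);
                OutputStream os = ex.getResponseBody();
                os.write(body);
                os.close();
            });
            server.start();
        }

        // Hypothetical sensor driver; a real device would read an ADC here.
        private static double readTemperature() { return 180.0; }
    }

Pointing a browser at /status then returns a page rendered from data sampled at that instant, which is exactly how a CGI script lets a browser monitor an appliance.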

We’ve begun to see some competition among industry and hobbyists alike as to who can build the smallest Web server, even one small enough to fit in the palm of a human hand. Figure 1 shows a Web server designed at Xerox PARC in 1998 for exploring applications supported by embedded computation in the office environment. It has connectors that attach to a 10baseT Ethernet, a serial line, and the general-purpose I/O pins of its microcontroller. It runs the Spyglass Web server on top of the VxWorks operating system and offers 16MB of DRAM and 1MB of flash memory. An alternate software architecture developed at the University of Washington in Seattle uses uCLinux and a public-domain HTTP server. We mention these details to emphasize what is possible in such a small form factor dominated largely by connectors. Computational power is no longer the constraining factor. Also worth emphasizing is that this embedded Web server, with its onboard memory, can serve up volumes of Web pages, images, and related documents, thus operating as a self-contained Web site—in the palm of your hand.

Commercial examples of embeddable Web servers are beginning to appear, including Dallas Semiconductor’s Tini (see Figure 2), which also reflects a small computational footprint, if we ignore its adapter board and connectors.

Some Web-server designs aim in a totally different direction (see Figure 3), using a serial line rather than a direct Ethernet connection. At the far end of the Web-server spectrum is the so-called Boolean server, whose sole purpose is to control or sense a single bit (in order to, say, turn a light bulb on or off or to sense the state of an electrical switch, perhaps as part of a security system). On the physical-interface side, an embedded Web server implementing bit-level control or sensing is very simple. On the network side, however, it has to use the same protocols as any more advanced server.

The challenge for designers of these micro Web servers is to implement as little of the HTTP/TCP/IP protocol stack as possible while still meeting the protocol standards; such implementations are often called “slim servers.” Some of the complexity is reduced by having precomputed packets that are transmitted by a simple state machine in response to received packets. In effect, the computation is offloaded to compile time, when all the different responses the server needs in order to communicate or report its state are calculated exhaustively. This simplification makes sense only when the state machine is very small and its possible traversals are limited. Servers whose complexity falls somewhere between a fully functional Web server and a slim server require a more skillful implementation of the full protocol suite. As Moore’s Law continues to increase memory capacity and computational abilities while decreasing power consumption, that fuller approach will make sense for an ever-greater range of systems.
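The following Java sketch illustrates the precomputed-response idea for a Boolean server. A real slim server would be a state machine in a few kilobytes of firmware rather than Java on a socket, but the control flow is the same: the responses are built once, before the serving loop starts. The /on and /off paths and the setBit() pin hook are hypothetical.

    import java.io.BufferedReader;
    import java.io.IOException;
    import java.io.InputStreamReader;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    // A Boolean "slim server": one bit of state, every HTTP response
    // precomputed before the serving loop begins.
    public class SlimServer {
        private static final byte[] ON_PAGE  = canned("bit is ON");
        private static final byte[] OFF_PAGE = canned("bit is OFF");

        private static byte[] canned(String body) {
            String page = "<html><body>" + body + "</body></html>";
            return ("HTTP/1.0 200 OK\r\nContent-Type: text/html\r\nContent-Length: "
                    + page.length() + "\r\n\r\n" + page).getBytes();
        }

        public static void main(String[] args) throws IOException {
            ServerSocket listener = new ServerSocket(8080); // port 80 on a real device
            while (true) {
                Socket s = listener.accept();
                BufferedReader in =
                    new BufferedReader(new InputStreamReader(s.getInputStream()));
                String request = in.readLine();             // e.g. "GET /on HTTP/1.0"
                boolean on = request != null && request.startsWith("GET /on");
                setBit(on);                                 // drive the output bit
                OutputStream out = s.getOutputStream();
                out.write(on ? ON_PAGE : OFF_PAGE);         // canned, no HTML generation
                out.close();
                s.close();
            }
        }

        private static void setBit(boolean on) { /* hypothetical GPIO hook */ }
    }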

Java, applets, and Jini. The CGI mechanism described earlier also has limitations in that all interactions between the user and a Web page have to return to the server to be processed. The Java programming model provides a way to bring computation to the client, so interactions achieve faster response times. A link embedded in a Web page points to a Java “applet” loaded into the browser. The program is composed of Java byte codes intended for execution by a Java virtual machine (JVM). Once received, the code can execute locally in the secure environment of the local JVM. From the standpoint of embedded processing in dedicated appliances, Java applets enable a device to export its interface to a secondary machine that could be nearby or in another part of the world. Moreover, depending on the nature of the client, the interface can be customized to suit a specific need, namely, to provide user interaction or control by another program or automatic agent. The Java model represents the key to opening up convenient interaction among all forms of embedded computation.
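A sketch of this export-the-interface idea, in the applet style of the period: the browser fetches the applet from the appliance’s own embedded Web server, and the applet polls the same server for a live reading. The temp.cgi endpoint is hypothetical.

    import java.applet.Applet;
    import java.awt.Graphics;
    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.URL;

    // An appliance's exported user interface: fetched from the device,
    // executed in the browser's JVM, polling the device for data.
    public class ThermostatApplet extends Applet {
        private String reading = "...";

        public void start() {
            try {
                // Poll the embedded server once; a real UI would repaint on a timer.
                URL u = new URL(getCodeBase(), "temp.cgi");  // hypothetical endpoint
                BufferedReader in =
                    new BufferedReader(new InputStreamReader(u.openStream()));
                reading = in.readLine();
                in.close();
            } catch (Exception e) {
                reading = "unavailable";
            }
            repaint();
        }

        public void paint(Graphics g) {
            g.drawString("Current temperature: " + reading, 10, 20);
        }
    }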

The ability to interact with an appliance is only half the battle; the other half is knowing which types of appliances and services are at hand. The Jini architecture for network-centric computing developed by Sun Microsystems [8] is an example of a “discovery” service invented to enable local appliances or services to be located by client processes in order to form ad hoc communities of devices that communicate and benefit from mutual interaction.

A Jini “lookup” service runs on a local server acting as a clearinghouse in which services and devices register, or “join,” and others come to find out what is available. Multicast network protocols efficiently locate a lookup service; when a device registers, it can also provide Java code through which a future client might communicate with it. A client discovering the service also loads this code, which is a sort of device driver implementing all aspects of the device-specific code and protocols needed to use the device. The client can then communicate directly with the device without having to go through the Jini service any further. Java and its mechanism for remote method invocation (RMI) are key technologies enabling uniform access across all services. Ultimately, the Jini service is only an enabling component for achieving the desired interaction between clients and services.
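The client side of this interaction might look like the following sketch against the Jini discovery APIs. PrinterService and its print() method are hypothetical, standing in for whatever interface a real device would register; the proxy object returned by lookup() carries the device-specific code described above.

    import java.rmi.RemoteException;
    import net.jini.core.lookup.ServiceRegistrar;
    import net.jini.core.lookup.ServiceTemplate;
    import net.jini.discovery.DiscoveryEvent;
    import net.jini.discovery.DiscoveryListener;
    import net.jini.discovery.LookupDiscovery;

    // Sketch of Jini client discovery: multicast finds a lookup service,
    // and a type-based template pulls back a matching service proxy.
    public class FindPrinter {
        public static void main(String[] args) throws Exception {
            LookupDiscovery discovery = new LookupDiscovery(LookupDiscovery.ALL_GROUPS);
            discovery.addDiscoveryListener(new DiscoveryListener() {
                public void discovered(DiscoveryEvent ev) {
                    for (ServiceRegistrar registrar : ev.getRegistrars()) {
                        try {
                            // Match on interface type; null fields are wildcards.
                            ServiceTemplate tmpl = new ServiceTemplate(
                                    null, new Class[] { PrinterService.class }, null);
                            PrinterService printer =
                                    (PrinterService) registrar.lookup(tmpl);
                            if (printer != null) {
                                // The downloaded proxy speaks the device's own
                                // protocol; the lookup service is now out of the loop.
                                printer.print("hello, world");
                            }
                        } catch (RemoteException e) { /* try the next registrar */ }
                    }
                }
                public void discarded(DiscoveryEvent ev) { }
            });
        }

        // Hypothetical service interface a printer might register.
        interface PrinterService { void print(String text) throws RemoteException; }
    }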


We can imagine scientists being able to describe new experiments and have robots carry them out automatically.


Today, devices relying on small microcontrollers and those notably limited in their power budgets cannot support a JVM. Proxy mechanisms can be used to delegate Jini protocol interactions to more powerful surrogate processors. Moore’s Law again ensures that the number of situations for which proxies are necessary will keep decreasing. The research community will face major challenges in developing truly universal methods for plug-and-play operation, permitting all these highly capable devices to aggregate their abilities and provide interesting services to the user as automatically as possible.

Wireless connectivity. Until this point in the article, we’ve assumed that clients of embedded computation are able to contact target services without significant difficulty—a fair assumption in a wired world. But if we are to fully realize the benefits of ubiquitous embedded computation, many components will lack physical connections.

Wireless connectivity among embedded devices is extremely desirable, allowing unencumbered mobility and dynamic ad hoc connections between devices. Bluetooth [2] is a recent initiative launched by a large consortium of computer and consumer-electronics companies to provide a low-cost wireless solution for connecting components separated by no more than several meters. Low-cost communication will be achieved through the complete integration of all the required analog and digital components onto a single mixed-mode chip; adding an antenna and a minimal set of discrete components would then make the transceiver ready to interface to a digital bus. Current implementations, however, are still built from a two-chip set.

The Bluetooth system is being designed to operate as a spread-spectrum device in the unregulated 2.4GHz band. It will use frequency hopping to switch among 79 frequencies separated by 1MHz, at a maximum hopping rate of about 1,600 hops/s. The system will provide a raw data rate of 1Mbps, translating to an application-level data rate of about 721Kbps. These are respectable speeds for many applications (compare them with today’s ubiquitous Web-surfing tool, the 56K modem), and for this reason Bluetooth is likely to be an attractive proposition for application builders.

The Bluetooth consortium has established both the hardware and protocol standards for this new technology. The system is more than a paper design; live demonstrations have been given at conferences, including Comdex’99 in Las Vegas. Nokia and Motorola have demonstrated cell-phone and laptop products using Bluetooth to synchronize contact lists and transfer files. There are still many opportunities for design growth in this area, and the coming years will be crucial to the spread of this nascent wireless networking standard.

Infrared communication, as standardized by the Infrared Data Association (IrDA) [4], is also a potential candidate for linking embedded computers. At data rates ranging from 9600bps to 4Mbps, there seem to be as many opportunities for IrDA applications as there are for Bluetooth. However, although the standard has existed for several years and hardware support is nearly ubiquitous on mobile computers, the related application-level software has had interoperability problems due to the many operating modes the standard tries to encompass. Infrared also requires, in most cases, line-of-sight operation. This can be an advantage for applications requiring explicit selection of the device with which communication is desired (such as pointing a universal remote control at a particular piece of audio/visual equipment), but there are other applications for which proximate non-line-of-sight operation is desirable.

Other wireless communication approaches are also likely to find a welcome place. For example, human-body-based communication schemes, sending extremely low-power data signals through a user’s skin, can be advantageous for private communication and device selection by touching or holding, rather than proximity alone, as in radio-frequency-based systems.

Moreover, many types of wireless communication will likely coexist, necessitating development of protocols supporting data movement across these heterogeneous networks. In addition, protocols and operating systems will have to be able to support intermittent connectivity. Power limitations (only partly mitigated by Moore’s Law) and short-range communication (needed to maximize bandwidth per unit volume and possibly for privacy) mean that devices will not be connected continuously to the wired infrastructure. Proxies, caches, and active networking are some of the technologies that will play important roles in this emerging world of interconnected devices.


Device Technologies

Computation in isolation from the world has only limited value beyond the computer itself. Myriad input/output devices have been or are being developed to connect users to the computational infrastructure. They range from high-resolution wall-mounted displays to handheld stylus-based PDAs to conversational speech-based interfaces. However, in order to benefit from the full value of embedded computation, it must be possible for these devices to sense and control the world directly. Until recently, deployment of sensors and actuators throughout the physical environment has been prohibitively expensive for two main reasons: the cost of the devices themselves and the cost of their interconnection, possibly a wired connection to a fully featured network. Today, devices are getting cheaper thanks to new technologies and economies of scale; connectivity will be increasingly wireless and, in many cases, intermittent. Thus, because so many longstanding barriers are being lifted, we can now begin to contemplate an extensive interface between the physical world and our virtual world of information and computation.

MEMS sensors. MEMS, or microelectromechanical systems, offer an important way to integrate sensing, computation, and communication at low cost and high accuracy. MEMS sensors are novel mechanical structures constructed directly from silicon. They can be made to the same tolerances used in the semiconductor industry, and because silicon is very strong and robust at the micrometer scale, MEMS sensors are resilient and have considerable longevity. For example, single-crystal silicon components can be flexed back and forth many millions of times without material fatigue. A common mass-market commercial application of MEMS is the accelerometer controlling airbag deployment in automobiles (see Figure 4) [1]. MEMS structures will eventually be the technique of choice for designing embedded systems requiring computation, sensing, and control in inexpensive and reliable high-volume production. Their reach is being extended to chemical and magnetic-field sensing, in addition to forces and light levels. It is conceivable that we will soon have microscopic laboratories that analyze the liquids and atmospheres in which they are immersed, spurring an age of personalized environmental sensing to accompany personal communication and computing.

Tags. The automatic identification industry, which is not directly associated with embedded computation, has been pushing the limits of computation and miniaturization for the purpose of designing electronic tags for tracking everything from courier packages to livestock. Radio frequency identification (RFID) is the name given by industry to an electronic tag inductively powered by the tag interrogator. The captured energy is used by its miniature electronics to send its identity (a unique number) back to the interrogator through a modulated carrier.

The e-tag industry benefits from the same advances in lithography driving the computer industry. Modern tags are becoming quite sophisticated; many now contain onboard memory that can be written to or read from by the interrogator, and some, such as Texas Instruments’ Tag-it system (see Figure 5) [7], even have anticollision mechanisms allowing multiple e-tags to be read in the same space. Sensors are also being integrated with e-tags, permitting real-time sensor data to be read and returned along with the unique identifier.

E-tags will soon support greater computational functionality. At some point, they may be able to offer full Web server performance. However, in the near term, if e-tag-interrogator technologies include an interface to the Internet, a tag can be part of the network infrastructure while proximate to the reader.

Location, tracking, sensing. The global positioning system (GPS) can be used to provide high-accuracy location data to a user within line of sight of several GPS satellites. Indoor location sensing is only beginning to receive serious attention from commercial developers. Tagging technologies can be used to detect not only an object’s presence but its position as well. By seeding the environment with enough interrogators, tags can be tracked as they move through a space. RF tags, whose signal strength can be measured by base stations, can be triangulated within a known coordinate system.
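As a sketch of the geometry involved (not of any particular product), the following Java method fixes a tag’s 2D position from range estimates at three non-collinear base stations; converting measured signal strength into range is the hard part and is simply assumed here. Subtracting the three circle equations pairwise yields a linear system solved by Cramer’s rule.

    // Trilateration sketch: base station i sits at (bx[i], by[i]) with an
    // estimated range d[i] to the tag. Subtracting the circle equations
    // (x - bx[i])^2 + (y - by[i])^2 = d[i]^2 pairwise gives a 2x2 linear system.
    public class Trilateration {
        public static double[] locate(double[] bx, double[] by, double[] d) {
            double a1 = 2 * (bx[1] - bx[0]), b1 = 2 * (by[1] - by[0]);
            double c1 = d[0] * d[0] - d[1] * d[1]
                      + bx[1] * bx[1] - bx[0] * bx[0] + by[1] * by[1] - by[0] * by[0];
            double a2 = 2 * (bx[2] - bx[0]), b2 = 2 * (by[2] - by[0]);
            double c2 = d[0] * d[0] - d[2] * d[2]
                      + bx[2] * bx[2] - bx[0] * bx[0] + by[2] * by[2] - by[0] * by[0];
            double det = a1 * b2 - a2 * b1;  // zero if the stations are collinear
            return new double[] { (c1 * b2 - c2 * b1) / det,
                                  (a1 * c2 - a2 * c1) / det };
        }

        public static void main(String[] args) {
            // Stations at (0,0), (10,0), (0,10); all three ranges ~7.07m.
            double[] p = locate(new double[] { 0, 10, 0 },
                                new double[] { 0, 0, 10 },
                                new double[] { 7.07, 7.07, 7.07 });
            System.out.printf("tag at (%.1f, %.1f)%n", p[0], p[1]);  // (5.0, 5.0)
        }
    }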

Important issues are the resolution of the position information and which people and applications are authorized to access it. Specially adapted technologies will be required for different applications. Knowing the position of people moving through an office environment is quite different from being able to track a folder precisely as it moves from filing cabinet to desktop to briefcase. Privacy concerns loom large with location tracking. Approaches giving users the ability to determine a position and then use that information as they see fit will have an important place, in addition to straightforward tracking approaches through which the system keeps track of the user’s location. Examples of this dichotomy have been clear since the beginning of ubiquitous computing; for example, compare the Olivetti Active Badge [10] with the Xerox PARCtab system [9].


Applications

Distributed sensors and actuators connected through the Web’s standard protocols and wireless communication media provide a powerful toolkit for developing rich applications affecting all our lives, as demonstrated in the following scenarios.

Home automation. A long-sought vision of the future has been the automation of the home. Many attempts have been made to create so-called smart houses, though few have made a compelling case for people to want to live in them. We would argue that this is mainly the result of poor return on investment at a time when few products have any kind of embedded computation. We see more and more applications of embedded processing, as microprocessors are increasingly able to perform a wider range of tasks and exploit standards providing some measure of interoperability. A prototypical example is the digital camera, which is beginning to replace traditional photography. The quality of its imaging and postprocessing is beginning to rival that of the long-established chemistry-and-paper-based photography industry. But we now have the opportunity to use photographs in ways that were never before possible: displaying them on a television screen or in an electronic picture frame, or sending them via email to a relative. We’ve even begun to see services that store and catalog our photographs, as well as organize them into graphically rich Web-based photo albums.

We can also expect common devices to interoperate. Technical-standard-setting organizations, including HomeRF [3], are proposing standards to further enable interaction beyond those that would develop gradually in the market economy. Similar interaction can be found in the business environment, involving computers, PDAs, scanners, printers, computerized whiteboards, cellular phones, and automated document tracking. Bluetooth, IrDA, and various local-area networks, such as the IEEE 802.11 wireless LAN standard, have focused on this market, and businesses are likely to reap the benefits of ubiquitous embedded computation before the home market catches on.

In any case, the home environment presents many special challenges. We can start with the problems caused by the numerous infrared-enabled remote control devices around our homes. Wouldn’t it make more sense to have generic touch-screen remote controls, resembling palm-size PDAs, that upload a user interface from the appliance itself? We would then have to ensure only that we placed one in every room of our houses and apartments. However, this goal requires a more horizontal control model in which new standards permit appliance-independent descriptions of user interfaces, rather than the current vertical model, which forces us to use the specific remote control supplied with each appliance. The Internet’s universal connectivity, along with embedded Web servers, should allow a new way of interacting with our living spaces. Environmental parameters and audio-visual displays would adjust to a room’s occupants and provide levels of energy efficiency not possible before.


The synergy fostered by interconnected embedded processors will make the much-touted digital convergence in desktop publishing and entertainment look like a blip in the history of computing technology.


A project at the University of Washington is seeking to develop a virtual neighbor for elderly relatives [6]. Sensors typically found in the home would be used to collect data on traffic and resource-use patterns in the home, so remote observers or automated agents could ensure that nothing is awry. Anomalous situations, such as doors or windows left open overnight or a lack of motion around the home for an extended period, would trigger messages to those concerned.

Crucial to this range of new applications is the ability to integrate new devices readily into home information systems. It should therefore be possible to add or remove sensors dynamically, without the software reconfiguration and maintenance (upgrading to new versions, debugging malfunctions) we put up with today. This vision of pervasive home sensor networks leads to a plug-and-play model in which sensors are as small and cheap as possible and are deployed in large numbers. Storage for the data they collect, along with the programs that look for patterns in the data, would reside at nodes in the wired infrastructure. An embedded home Web server would act as the connector between the wireless sensors and the infrastructure. Code for displaying sensor data and fusing data from multiple sensors would be developed by third parties and linked with the data through such self-describing mechanisms as the Extensible Markup Language (XML) [12]. Thus, remote users monitoring their homes through Web browsers would see a graphical rendering of the sensor data as it was being gathered. The sensor manufacturer and third-party developers would ensure the longevity of these systems by continually updating the software components responsible for data fusion.
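To suggest what such a self-describing mechanism might look like, here is a hypothetical XML fragment for a single sensor reading. The element names and the renderer link are illustrative only, not drawn from any published schema; the point is that a third-party display component can interpret the reading without prior knowledge of the device.

    <!-- Hypothetical self-describing sensor reading; names are illustrative. -->
    <sensor id="kitchen-42" type="temperature">
      <units>celsius</units>
      <value timestamp="1999-11-03T14:21:07">21.4</value>
      <!-- Where a browser can fetch third-party rendering/fusion code. -->
      <renderer href="http://vendor.example.com/render/thermo.jar"/>
    </sensor>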

The single most important challenge in this application domain is making it trivial to deploy the sensors, actuators, and the services that use them. But doing so while maintaining privacy is not easy. New approaches to security and access domains are needed.

Experiment capture. Sensor infrastructure also promises many applications outside the home. One example, which would change the way scientific results are collected and disseminated, is being pursued in the Labscape project at the University of Washington, part of the Cell Systems Initiative. The project’s motivation is that scientists seeking to understand the inner workings of the human cell find themselves hampered by the limitations of current methods for disseminating research results. There are three main obstacles: there is no unified model for integrating scientists’ collective knowledge of cell chemistry and mechanics; experiments are captured incompletely or recorded ambiguously, making them difficult to reconstruct; and the overwhelming majority of experiments go unpublished, enriching the experience of only a handful of researchers rather than the larger scientific community.

The Labscape project seeks to instrument a cell-biology laboratory so experiments can be captured to the fullest extent possible. This instrumentation effort entails integrating a variety of tagging and location-tracking technologies, so individual samples can be tracked as they are moved, mixed, heated, and centrifuged. Embedded Web servers can be used to connect laboratory instrumentation to the Web, so devices can be controlled and configurations recorded automatically.

With this level of capture, it should be possible to record everything that goes on in the laboratory, obviating the need for imprecise, error-prone, incomplete notebooks. Moreover, a huge number of applications could be based on the collected data. Besides providing a record of all experiments for all time, the data could support automated lab tutors that guide scientists, students, and others by playing back the details of previous experimental procedures. Scientists would be able to keep track of many simultaneous experiments, relieving their cognitive burden and leading to fewer errors and more effective use of time in the laboratory. We can imagine scientists being able to describe new experiments and have robots carry them out automatically.

Health monitoring. Ubiquitous sensors and internetworking will radically change our health care: its monitoring, methods, clinics, and management. When a patient needs to be observed, a physician would be able to prescribe a collection of sensors to be swallowed. Ingested daily, they would provide chemical, temperature, and physiological data, collected by a portable embedded Web server over an RF link. The physician would be able to observe how a particular therapy affects the patient continuously, rather than through the discrete samples taken today during office visits. Sensors and actuators specific to a particular medication would time-release appropriate doses only as needed, alerting physicians to side effects and unwanted interactions as they occur. Thus, we can imagine a world in which new drugs are developed along with the monitoring sensors and releasing actuators that guarantee their safe and effective use.

Privacy concerns abound in such a world. A secure Web-based architecture for patient medical data will have to be deployed. Patients should be able to keep their complete medical histories at third-party service providers. These patients would then authorize their physicians to look at subsets of that data. Similarly, insurance companies would be able to see legislated subsets. Patients would be free to purchase third-party services augmenting or backing up physician-provided services. An example is a drug-interaction service that relates possible side effects to the patient, as some pharmacists do today, but with more complete information. Promising commerce applications also abound; examples include personalized drug dosages and mixtures, as well as reminder services to ensure proper therapy and physician supervision.


Conclusion

Embedded processing is already powerful enough to tackle a range of real-world applications. Wireless and wired networking is increasingly ubiquitous, cheap, and available at such low power that we can envision the interconnection of all our embedded processors. We will finally achieve the interconnection of our physical and virtual worlds. The synergy this fosters will make the much-touted digital convergence in desktop publishing and entertainment look like a blip in the history of computing technology.

Many challenges remain, however, before this revolution improves the quality of all of our lives. Some are technical, involving new approaches to software development and deployment, along with new networking protocols, the organization of network-based services, and techniques for the self-organization, self-configuration, and self-monitoring of large distributed systems. Others are not purely technical, including privacy and security concerns that have to be addressed across a broad front involving public policy and the law. A still greater challenge is the development of new business models that would permit the horizontal interoperability of our devices and services, enabling the consumer choices and flexibility we all deserve.


Figures

Figure 1. Hydra, Xerox PARC’s embeddable Web server.

Figure 2. Dallas Semiconductor’s Tini Web server.

Figure 3. A Web server on a Microchip PIC (peripheral interface controller) processor (left) and on an even smaller Fairchild ACE1101MT8 processor (right).

Figure 4. Photomicrograph of a MEMS accelerometer from Analog Devices.

Figure 5. Texas Instruments’ Tag-it system.

References

    1. Analog Devices. Using iMEMS Accelerometers in Instrumentation Applications, tech. note (see www.analog.com/industry/iMEMS/library/imems_accl.htm).

    2. Bluetooth Special Interest Group (see www.bluetooth.com/technology/).

    3. HomeRF Working Group (see www.homerf.org).

    4. Infrared Data Association (IrDA) (see www.irda.org).

    5. Moore, G. VLSI: Some fundamental challenges. IEEE Spect. 16 (1979), 30.

    6. Portolano Project. University of Washington, 1999 (see www.cs.washington.edu/research/portolano/).

    7. TIRIS. Tag-it Inlays product bulletin, Texas Instruments (see www.tiris.com).

    8. Waldo, J. Jini Architecture Overview, tech. rep., Sun Microsystems, Palo Alto, Calif., 1998.

    9. Want, R., Schilit, B., Adams, N., Gold, R., Petersen, K., Goldberg, D., Ellis, J., and Weiser, M. An overview of the ParcTab ubiquitous computing experiment. IEEE Pers. Commun. 2, 6 (Dec. 1995), 28–34.

    10. Want, R., Hopper, A., Falcao, V., and Gibbons, J. The active badge location system. ACM Trans. Info. Syst. 10, 1 (Jan. 1992), 91–102.

    11. Weiser, M. The computer for the 21st century. Sci. Am. 265, 3 (Sept. 1991), 94–104.

    12. World Wide Web Consortium. Extensible Markup Language (XML), 1998 (see www.w3.org/XML/).
