One of the more hyped commercial opportunities these days appears to be software as a service, or SaaS. In this form of computing, a customer runs software remotely, via the Internet, using the service provider's programs and computer infrastructure. One of the first and most successful firms in the SaaS space is Salesforce.com, launched in 1999. Salesforce.com provides a customer-relationship management service. Using the service, a mobile salesperson, for example, can access the software from a laptop while on the road, and the head office is relieved of all the problems of infrastructure provision, the complexities of managing and upgrading software, and the synchronization of data from multiple sources. Another big player is Google, which now offers email and office productivity applications in its version of cloud computing.
Many people think that the future of software lies in SaaS and cloud computing. They may well be right in the medium term, but history shows that one cannot be sure that the trend will last indefinitely.
There are two main components to SaaS: the software itself and the computing infrastructure on which it runs. Customers are at least as concerned about the quality of service as they are about the software. Indeed, for providers who use freely available open source software, quality of service is their only competitive advantage.
Organizations use in-house computing facilities or SaaS largely according to the economics of the situation: whether it is cheaper to own one's software and infrastructure or to buy services on demand. This dilemma is not new. It is as old as, indeed older than, the computer industry itself.
Before computers came on the scene in the mid-1950s, the most advanced information processing equipment that organizations could buy (or lease) was the punched-card electric accounting machine, or EAM. The main vendor of this type of equipment, IBM, opened the first of several service bureaus in 1932. Customers brought their data processing needs to a bureau and came back later for the results. The bureau provided customers with advanced information processing on demand, thereby eliminating the cost of maintaining and staffing an EAM installation. Using a service bureau tended to be more expensive per transaction than using one's own installation, but it carried no fixed costs. Users therefore had a choice: with a low volume of transactions the economics favored the service bureau, but with a high volume it was cheaper to have one's own installation.
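The choice between a bureau and an in-house installation is a classic fixed-versus-marginal-cost break-even. The sketch below illustrates the arithmetic with entirely hypothetical prices (the article quotes no figures for this era): an in-house installation carries a fixed monthly cost but a low marginal cost per transaction, while the bureau charges more per transaction but nothing up front.

```python
# Hypothetical figures, purely for illustration of the break-even logic:
fixed_monthly = 1500.0     # assumed EAM rental plus operator salaries, $/month
in_house_per_txn = 0.01    # assumed in-house marginal cost per transaction
bureau_per_txn = 0.05      # assumed bureau fee per transaction

# Monthly volume at which the two options cost the same:
# fixed_monthly + in_house_per_txn * v == bureau_per_txn * v
break_even = fixed_monthly / (bureau_per_txn - in_house_per_txn)
print(f"Break-even: {break_even:,.0f} transactions/month")
```

Below the break-even volume the bureau is cheaper; above it, the fixed cost of one's own installation is amortized and in-house processing wins, which is exactly the trade-off the text describes.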
In 1949 a small firm, Automatic Payrolls Inc., was founded in New Jersey and used a variant of the service bureau business model. The firm specialized in payroll processing. It developed its own procedures, at first using bookkeeping machines, and then punched-card machines that were programmed with plug-boards. It would send a van to its customers to collect time sheets or punched cards, process the data, and drop off the results to its customers later. This made excellent business sense not only for organizations that did not want to maintain a bookkeeping machine or an EAM installation, but also for firms that simply wanted to offload the non-core activity of managing the payroll. In 1958, the company changed its name to Automatic Data Processing Inc., or simply ADP, and in 1961 it acquired an IBM 1401 computer. ADP expanded into new locations and by the mid-1960s it was using the emerging capabilities of data communications to eliminate some of the physical collection and return of data.
Many other firms began to compete with ADP, offering different services in what became the biggest sector of the "data processing services industry." In 1961 the industry formed its own trade association, ADAPSO, the Association of Data Processing Services Organizations, the ancestor of today's ITAA. By 1970 processing services accounted for more than one-quarter of total U.S. computing purchases. While firms have come and gone, ADP seems to have found the perfect niche: today it is still the world's biggest payroll processor, preparing the paychecks for one-sixth of the total U.S. work force.a
In the mid-1960s timesharing computers came on the scene. In these systems customers could access a mainframe computer remotely. Connected to a mainframe computer via a regular telephone line, users ran programs using a clunky, 10-characters-per-second, model ASR-33 teletype. It made for a noisy working environment, but on-demand computing had real benefits. Salespeople for the timesharing firms touted their systems using the computer-utility argument: firms did not maintain their own electric plants, it was argued; instead, they bought power on demand from an electric utility. Likewise, firms should not maintain mainframe computers, but instead get computing power from a "computer utility." Several national computer utility companies had emerged by the end of the 1960s. But then came the first computer recession in 1970. The computer utility model turned out to be very vulnerable to an economic downturn. Just as firms cut back on discretionary travel during a recession, they also reduced spending on computer services. There were many firm failures and bankruptcies. For example, one of the most prominent firms, University Computing, which had computer centers in 30 states and a dozen countries, saw its revenues hemorrhage, and its stock price plunged from a peak of $186 to $17.
The timesharing industry recovered, however. In the 1970s major players included General Electric, Timeshare Inc., and CDC. They built massive global computer centers that serviced thousands of users. By then those clunky teletypes had been replaced with visual display units, or "glass teletypes" as they were sometimes known. They were silent and relatively pleasant to use, giving an experience somewhat like using an early personal computer. Increasingly firms sought to differentiate their offerings by providing exclusive software. For example, they devised financial analysis programs that can now be seen as forerunners of spreadsheet software. They implemented some of the first email systems. They also hosted the products of the independent software industry, usually paying the software vendors on a royalty basis, with typically 20% of revenues going to the software provider.
The timesharing industry died a second time around 1983–1984. This time the cause was not a computer recession but the personal computer. Timesharing services cost $10 to $20 per hour, and a regular user's bill might come to $300 a month. The PC completely destroyed the economic basis of the timesharing industry. Compared with a timesharing service, a PC would pay for itself in well under a year, and it had the further advantages of eliminating the telephone connection and providing an instantaneous response. Furthermore, a standalone PC was not like a mainframe computer: it was a fuss-free, virtually maintenance-free piece of office equipment. As the timesharing industry went into decline, a few of the firms morphed into consumer networks, such as CompuServe and GE's Genie, but mostly they just faded away with their vanishing revenues.b
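The payback arithmetic behind "well under a year" can be sketched in a couple of lines. The $300-a-month timesharing bill is from the text above; the PC purchase price is an assumption (roughly what an early-1980s IBM PC configuration cost).

```python
# Assumption: ~$3,000 for an early-1980s PC; $300/month is the
# regular user's timesharing bill quoted in the text.
pc_price = 3000        # assumed purchase price of a PC, dollars
monthly_bill = 300     # typical monthly timesharing bill, dollars

payback_months = pc_price / monthly_bill
print(f"Payback period: {payback_months:.0f} months")
```

Ten months of timesharing bills cover the machine, after which the computing is effectively free, which is why the PC so quickly undercut the hourly-billing model.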
Today, the very things that killed the timesharing industry in the 1980s have been reversed. Despite falling hardware costs, computing infrastructure has become increasingly complex and expensive to maintain: for example, having to deal with security issues and frequent software upgrades. Conversely, communications costs have all but disappeared compared with the 1980s. No wonder remote computing is back on the agenda.
Cloud computing has many parallels with the 20-year reign of timesharing systems. Timesharing thrived just as long as its cost and convenience were competitive with a mainframe computer installation; the arrival of the PC changed everything. Today, cloud computing offers tremendous advantages over the in-house alternative of maintaining a cluster of servers, application programs, and database software. However, if the cost of maintaining this infrastructure were to fall dramatically (which is entirely possible in the next few years), the economic advantage of cloud computing could be reversed. The other threat to cloud computing is a major economic downturn. Now that U.S. industry is experiencing a recession, the demand for remote computing could decline, just like the demand for electric power. Further, many online services are currently funded by advertising revenues: take away the demand for advertising and there will be little to support these services.
Of course, none of this should be construed as a forecast of the impending demise of software as a service. Rather, this column is intended as a salutary reminder that nothing in IT lasts forever, and that technological evolution and economic factors can rapidly alter the trajectory of the industry.
b. For a history of the timesharing industry see: M. Campbell-Kelly and D.D. Garcia-Swartz, "Economic Perspectives on the History of the Computer Timesharing Industry, 1965–1985," IEEE Annals of the History of Computing 30, 1 (Jan. 2008), 16–36.
The Digital Library is published by the Association for Computing Machinery. Copyright © 2009 ACM, Inc.