Research and Advances
Artificial Intelligence and Machine Learning

Are Intelligent E-Commerce Agents Partners or Predators?

Mobile agents are changing the face of e-business and reshaping business models. In the process, these agents are also raising new concerns about who really owns information.

EBay made headlines three years ago by initiating a drastic new policy against third-party predatory search agents. The policy was directed against intelligent agents that would enter the auction site, search for items their issuers were looking for, and then notify the issuers about pricing and deadlines. EBay's modified user agreement would prohibit third-party sites from collecting and sharing information found on eBay's site. The problem, as reported, was that the search agents were frequently accessing eBay, sifting through auction offers, harvesting the information, and placing it on alternate Web sites known as "aggregators." The list of aggregators included names such as biddersEdge.com (now bankrupt), AuctionWatch.com (now a portal site for auction management), itrack.com (acquired by overBid), and Ruby Lane (now an auction portal). EBay claimed the aggregators' search agents were harmful in multiple ways. They would slow down eBay's transaction processing systems, thus reducing performance for all other eBay visitors. Moreover, outside search agents might not show the most up-to-date information and thus diminish auction users' purchasing experience. Executives from the third-party information aggregators were quick to point out that their systems were actually benevolent in that they served as "repeaters," or mirrors, of eBay information, thus actually lowering the load. They also stressed that while they promoted the offers, purchasing transactions were still carried out at eBay's site, so business was not taken away from the company but, in fact, promoted.

The culprits in this situation were mobile intelligent agents [8], or more specifically, one type of mobile intelligent agent. In the eBay scenario, intelligent agents were harvesting information and sending it back to their issuing company's computers, which collected, analyzed, and redistributed that information.

Are agents truly predators? EBay’s response clearly suggests so. Yet, Murch and Johnson [9] claim just the opposite, stating “It is in the interest of all companies that wish to sell over the Internet … that their information is formatted and available in such a way that it can be easily accessed by … these agents.” In other words, agents are viewed by some as having positive characteristics.

Despite recent court rulings, third-party companies are still assessing the legality of eBay's action, and hundreds of Web users are still expressing their opinions in chat rooms and newsgroups (the vast majority criticize eBay; see "Talkback post" at www.znet.com). Here, we discuss the various aspects of intelligent agents and information aggregation, focusing on auction sites as well as the implications for e-commerce more broadly.


What Do Intelligent Agents in E-Commerce Do?

Many types of agents exist in e-commerce. Maes [7] organizes e-commerce agents into three categories that correspond to stages in the buying process: product brokering, merchant brokering, and negotiation. Wang [11] classifies e-commerce agents into eight categories according to the various tasks they support. An overview of agents by application area (for example, email, competitive intelligence, banking, and investment) can be found in [9], while www.botspot.com provides free periodic reports on innovative agent applications. Turban et al. [10] offer numerous examples of real-world agent applications.

Intelligent agents can carry out numerous decision-making and problem-solving tasks that traditionally require human intelligence, such as diagnosis, data classification, planning, or negotiation. They can answer email messages, search the Internet for useful information, carry out comparisons, or even become electronic pets (such as the Tamagotchi). They can also buy and sell products or services. Among the many types of agents, the most relevant for our discussion are mobile agents that collect information from remote sites. Mobile agents are not bound in their operation to the server from which they originate. They are typically written in a platform-independent language such as Java and can travel from host to host, where they execute as if they were local programs. In the remainder of this article, the term "agent" refers to such mobile agents.
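
To make this harvest-and-report pattern concrete, the following is a minimal, self-contained sketch in Java. It only simulates mobility: instead of physically migrating, the agent is handed each "host" in turn. All names (AuctionHost, HarvestAgent, the listings themselves) are hypothetical and are not drawn from any actual agent platform.

```java
// A minimal, self-contained sketch (not a real agent platform) of the
// harvest-and-report behavior described above. All class and method
// names are hypothetical; a production agent would run on a
// mobile-agent framework and actually travel between physical hosts.
import java.util.ArrayList;
import java.util.List;
import java.util.Map;

public class HarvestAgentSketch {

    /** Stand-in for a remote auction host the agent can visit. */
    interface AuctionHost {
        String name();
        Map<String, Double> openListings();   // item description -> current price
    }

    /** The "mobile" part is simulated: the agent is handed each host in turn. */
    static class HarvestAgent {
        private final String wantedItem;
        private final List<String> report = new ArrayList<>();

        HarvestAgent(String wantedItem) { this.wantedItem = wantedItem; }

        void visit(AuctionHost host) {
            // Executes "locally" at the host: sift listings, keep only matches.
            host.openListings().forEach((item, price) -> {
                if (item.toLowerCase().contains(wantedItem.toLowerCase())) {
                    report.add(host.name() + ": " + item + " @ $" + price);
                }
            });
        }

        List<String> reportToIssuer() { return report; }  // delivered when the tour ends
    }

    public static void main(String[] args) {
        AuctionHost siteA = new AuctionHost() {
            public String name() { return "auction-site-a"; }
            public Map<String, Double> openListings() {
                return Map.of("Antique clock", 120.0, "Vintage camera", 85.0);
            }
        };
        AuctionHost siteB = new AuctionHost() {
            public String name() { return "auction-site-b"; }
            public Map<String, Double> openListings() {
                return Map.of("Antique clock, brass", 99.0, "Oak desk", 240.0);
            }
        };

        HarvestAgent agent = new HarvestAgent("clock");
        agent.visit(siteA);
        agent.visit(siteB);
        agent.reportToIssuer().forEach(System.out::println);
    }
}
```

On a real agent platform, the same harvesting logic and the accumulated report would travel with the agent as it is dispatched from host to host, and the report would be sent back to the issuer only at the end of the tour.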

Hendler [4] differentiates four types of agents by function. Problem-solving agents do what many traditional planning expert systems did, namely gather data, analyze a situation, and make a corresponding decision about how to act on the user's behalf. Purchasing agents fall into this category. User-centric agents facilitate interaction with the user. In essence, they provide a better user interface by learning about the user's preferences and tailoring the interface to those preferences. Control agents coordinate the operation of several agents in a multiagent environment. In this context, one needs to remember that agents are not only mobile but also small in size, each with a very specialized capability; hence, the interaction of several agents might be necessary to provide sufficient intelligence and capability. These are very advanced agents used mostly in research experiments. Finally, transaction agents translate information between different data standards within a heterogeneous database or file environment. Among these four types, the ones that create contention are problem-solving agents specializing in data harvesting. They may be assisted by transaction agents to access data from multiple data sources and may be controlled by control agents. Nevertheless, the critical functionality is the ability to collect and analyze information from remote sites.

From the perspective of computing paradigms, Web agents offer a new alternative that has evolved from the concepts of client/server computing and code-on-demand [6]. In client/server computing, services are implemented on a server and offered by that server to clients. Hence, the model is server-centric, and intelligence is not easily added: the server holds the know-how as well as the processing capability and resources. In a code-on-demand environment, clients have resources and processing power (such as any user's PC accessing the Internet), but often not all the necessary know-how, which can then be downloaded from a host (for example, in the form of Java applets). In the agent environment, all three—processing, resources, and know-how—can be flexibly distributed throughout the network. Agents containing know-how and possibly resources can travel from host to host, carry out tasks, gather information, and then move on again. Agents are (or should be) fairly lightweight, thus creating only a light processing load on the network environment and consuming or occupying only a few resources. Small agent size offers several advantages over other computing paradigms, namely low latency and little network load.

Although agents are light and not too resource intensive, they nevertheless require some resources. Furthermore, these resources are not provided by the host that instantiated them, but by some other server where they temporarily reside. In other words, third-party sites are appropriating resources from remote hosts without compensation. However, Jim Wilcoxson, CEO of Ruby Lane, quantifies the consumption: “Our programs … are very sophisticated, automatically slowing down or stopping if the load on eBay gets too high. The agents represent only 0.025% of eBay’s traffic.” Whether 0.025% is an acceptable level of resource consumption is subject to debate. The fact is, however, that every entity represented by a server on the Internet implicitly agrees to have some of its resources occupied or consumed by outside parties whether they are buyers or not. This is, after all, one of the new realities of the Internet world where companies are opening up their databases and transaction processing systems to anyone. The resource providers can obviously insist that their resources are only available for the direct and sole benefit of the end user but not for intermediaries, and can formulate contracts to restrict the use by intermediaries. Enforcement of such agreements is difficult, however, especially if no user registration is required for site access.
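
The "slow down or stop if the load gets too high" behavior Wilcoxson describes can be approximated with a simple politeness policy on the harvesting side. The sketch below is an illustration under stated assumptions, not Ruby Lane's actual implementation: it treats observed response time as a crude load signal, backs off exponentially when responses slow down, and stops entirely when the host appears overloaded. The URLs, thresholds, and fetch() stub are invented for the example.

```java
// A sketch of load-aware, "polite" harvesting: back off when the host
// seems busy, stop when it seems overloaded. Thresholds and the fetch()
// stub are illustrative assumptions, not any aggregator's actual code.
import java.util.concurrent.TimeUnit;

public class PoliteHarvester {

    private static final long SLOW_HOST_MS = 1500;   // back off above this response time
    private static final long STOP_HOST_MS = 4000;   // stop entirely above this
    private long delayMs = 200;                       // base politeness delay between requests

    /** Placeholder for an actual HTTP fetch of one listings page. */
    String fetch(String url) throws InterruptedException {
        TimeUnit.MILLISECONDS.sleep(300);             // simulated network time
        return "<html>...listings...</html>";
    }

    void harvest(String[] pages) throws InterruptedException {
        for (String url : pages) {
            long start = System.nanoTime();
            String page = fetch(url);
            long elapsedMs = TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);

            if (elapsedMs > STOP_HOST_MS) {
                System.out.println("Host appears overloaded; stopping harvest.");
                return;                               // give the host its resources back
            }
            if (elapsedMs > SLOW_HOST_MS) {
                delayMs = Math.min(delayMs * 2, 10_000);   // exponential back-off
            }
            System.out.println("Fetched " + url + " (" + page.length()
                    + " chars), next request in " + delayMs + " ms");
            TimeUnit.MILLISECONDS.sleep(delayMs);     // politeness pause between requests
        }
    }

    public static void main(String[] args) throws InterruptedException {
        new PoliteHarvester().harvest(new String[] {
                "https://example-auction.test/listings?page=1",
                "https://example-auction.test/listings?page=2"
        });
    }
}
```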


Is Information Sharing a Win-Win Situation?

One might argue that eBay—or other companies in the same situation—are not actually harmed by agents at work. Wilcoxson of Ruby Lane pointed out that his company's agents consume only a fraction of the resources consumed by a Web browser accessing the site. Furthermore, he stressed that by mirroring eBay's listings, Ruby Lane would in fact take some of the search load off eBay's site, while the transaction would still be completed with eBay.

To properly interpret this argument, it is necessary to understand the different archetypal business models on the Internet. Several classification schemes exist (for example, [1]); the three business models shown in Table 1 suffice to illustrate the arguments related to this topic.

As Table 1 illustrates, the first two types of Internet business models charge for their service, with the purpose of generating a profit from that service. For instance, an e-shop such as eBay will charge for products (services), while an application service provider (ASP) will charge a service fee for software use. The third model, "e-free," offers something of value, typically information or an information-based product, without charge (one of the Internet's original driving forces). However, in order to be commercially viable, e-free providers need to generate commissions or advertising fees by producing sales transactions and revenue elsewhere. Hence, they have to carry advertising, offer sales leads, or sell their customer base to others. An e-free provider may argue that all it wants is "eyeballs," not the actual transaction. So, if eBay retains the transaction (and eBay does not need the eyeballs), the result is a win-win situation.

Unfortunately, the circumstances changed with the significant decline in online advertising revenues during 2000 and 2001. Several of the companies with an e-free model, including third-party agent sites, have had to change their business models (or will need to do so very soon). As a result, auction aggregators turned into specialized auction sites (for example, Ruby Lane) or auction management companies (for example, AuctionWatch), or sold the business to a larger competitor (for example, iTrack), thus becoming potential or direct competitors.

To analyze the potential conflict between aggregators and the sites they harvest, it is also worthwhile to investigate the market capitalization contribution of the different models. At this point, few companies with a pure e-free model remain, and the lack of sales transactions is strongly reflected in market value. At the same time, many of the e-shops use their brand strength to draw advertising revenues, notably Amazon and eBay. In fact, part of the relatively high valuations of e-shops can be attributed to brand strength and its leverage in advertising (rates), sales leads, or cross-selling. Table 2, which lists stock valuations of different Internet entities, illustrates this concept.


Table 2 offers strong evidence that a company's e-shop contributes significantly more to its valuation than any e-free offering, especially following the dramatic downturn in online advertising rates. In today's market, the e-free model contributes, on a per-user basis, only single-digit ($4 for iVillage) or low double-digit dollar values ($18 for Ask) to a company's valuation. This represents a drop of more than 95% from two years ago. At that time, a Web agent that rerouted visitors from the original data-producing site (such as eBay) to an alternate site could potentially increase the valuation of the agent site by as much as a few hundred dollars per visitor. Today, the monetary values are much lower, but with e-free sites desperately fighting for survival, the stakes are even higher. Furthermore, an alternate site can cherry-pick, using its Web agents and its own Internet-based information systems to handle primarily the popular items that draw customer attention, whereas the original site has to handle all transactions. Hence, the losses to the data originator (for example, eBay) may not necessarily be made up by the extra sales generated there by referrals from the alternate (agent) site.

An additional issue is the critical mass of e-shop sites. Any seller knows that being "big enough" is very important. Even in traditional markets, companies (for example, car dealerships) co-locate in order to draw a critical mass of potential buyers. Web sites, including auction Web sites, rely on this concept even more due to the large fixed costs of infrastructure and advertising. EBay's top rank as an auction site—in number of customers, number of visits, number of transactions, and time spent at the eBay site—clearly gives it that critical mass. It also provides a strong incentive for others to replicate that critical mass by simply mirroring eBay's offerings and then augmenting them with those from other sources. The mirroring weakens the need for auction buyers or sellers to choose the eBay (or other original) site as the primary place to look for or list items. Buyers want to save time, so instead of visiting several auction sites, they will go to one site that aggregates information from several sources. Sellers can choose any auction site evaluated by Web agents, and may pick the one with the lowest commission instead of the most popular site. Thus, mirroring weakens the reinforcement provided by critical mass and may ultimately erode it.


Are Agents a Security Risk?

A major concern voiced by opponents of (mobile) intelligent agent technology is that agents can pose a security risk not only to remote hosts, but also to their original host (and to themselves). A comprehensive discussion of these risks and possible countermeasures is provided in [8]. The following potential risks have been identified, drawing in part on [6]:

Stealing data/Illegal access. Web agents may try to gain access to databases they are not supposed to access or for which there is an access fee.

Free use of resources (through masquerading). Agents always “steal” resources from remote hosts. As long as this is in line with accepted protocols, it is an acceptable practice. However, if agents masquerade as alternate processes, they may use unacceptable levels of resources. For example, a Web agent may even “borrow” resources from a remote host to send or receive email.

Unauthorized program execution (Trojan horse). Agents may also masquerade and then execute programs that are ultimately harmful to the remote hosts. Such Trojan horses have already been used repeatedly on the Internet, and an open computing environment that freely accepts agents on remote hosts creates a much larger arena for such attacks.

Data stripping or alteration (by server). Technically, it is possible to strip Web agents of their data. This is mostly a concern for a site that sends out agents to remote hosts, but it could also affect other sites. For instance, suppose Buyer has a trusted relationship with both Seller 1 and Seller 2, but a competitive relationship exists between the two sellers. An intelligent agent that originates from Buyer and travels to Seller 1 and then to Seller 2 could be stripped by Seller 2 to obtain competitive data about Seller 1.

Resource exhaustion (trashing) resulting in denial-of-service. Web agents can exhaust remote host resources to the point where the remote host can no longer function properly. Ruby Lane's CEO points out that companies spidering the eBay site consume only about 0.025% of eBay's resources, and even that only in off-peak load situations. Other agents may not be as considerate of the remote host; in fact, they can be designed to bring down the remote server, as numerous denial-of-service attacks have aptly demonstrated.

Deceitful agent behavior. Agents can mislead other agents or hosts about their intentions and can lie about transactions. For example, in agent activities that go beyond information collection, such as transaction completion, a malicious behavior would be the denial of a transaction that actually took place. The agent essentially reneges on the deal, much as a person might do in real-world transactions. This is a fundamental issue, since an increasing number of transactions makes the monitoring of each individual transaction less feasible and thus increases the need for trust in such transactions (see, for example, [3, 4]).



Benefits of Agents

Despite their limitations and risks, mobile Web agents are hailed as one of the most attractive technologies for the near future and are considered by many an absolute necessity for e-business, in light of the exponentially increasing information load on buyers and suppliers. Hence, it is worthwhile to look at some positive effects of Web agents related to this debate.

Transaction costs. One of the benefits e-commerce can provide is lowered transaction costs. Yet, in order to achieve this goal, much of the transaction processing procedure needs to be automated. If closing the deal requires negotiation, a search for information, or similar activities, it also requires intelligence, and thus provides a rich application area for Web agents.

Furthermore, the cost of processing transactions that do not add value per se needs to be lowered as much as possible, particularly customer inquiries about items or between-sales service requests. It has become so easy for customers to send out requests for information or service by email that companies are inundated with this type of mail. Companies such as Dell receive tens of thousands of email messages per day, many of which are not purchase requests. Asking a question is generally quick, easy, and inexpensive; answering it may be just the opposite. Hence, an ability to handle 80% to 90% of inquiries through AI-based automatic procedures greatly reduces the cost of staying close to customers. Answer agents or advice agents with this ability, possibly even operating in real time, are available from several vendors. Firepond, for instance, claims its answer agent ("eServicePerformer Answer") can handle "up to 80% of a customer's email with 98% accuracy" and respond within seconds.
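
The classify-then-answer idea behind such answer agents can be illustrated with a toy rule-based sketch. Commercial products, including the Firepond agent cited above, rely on far more sophisticated language analysis; the categories, keywords, and canned answers below are assumptions made purely for illustration.

```java
// A toy rule-based version of the "answer agent" idea: classify an
// incoming inquiry and answer routine categories automatically, passing
// everything else to a human. Categories, keywords, and canned answers
// are invented for illustration only.
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;

public class AnswerAgentSketch {

    // category -> trigger keywords (checked in order)
    private static final Map<String, List<String>> RULES = new LinkedHashMap<>();
    static {
        RULES.put("ORDER_STATUS", List.of("order status", "where is my order", "tracking"));
        RULES.put("RETURN_POLICY", List.of("return", "refund", "exchange"));
        RULES.put("SHIPPING_COST", List.of("shipping cost", "delivery charge"));
    }

    private static final Map<String, String> CANNED_ANSWERS = Map.of(
            "ORDER_STATUS", "You can check your order status at /orders with your order number.",
            "RETURN_POLICY", "Items may be returned within 30 days; see /returns for details.",
            "SHIPPING_COST", "Standard shipping is free on orders over $50; see /shipping.");

    static String classify(String inquiry) {
        String text = inquiry.toLowerCase();
        for (Map.Entry<String, List<String>> rule : RULES.entrySet()) {
            for (String keyword : rule.getValue()) {
                if (text.contains(keyword)) return rule.getKey();
            }
        }
        return "NEEDS_HUMAN";
    }

    static String respond(String inquiry) {
        String category = classify(inquiry);
        return CANNED_ANSWERS.getOrDefault(category,
                "Routed to a customer-service representative.");
    }

    public static void main(String[] args) {
        System.out.println(respond("Hi, where is my order? I have no tracking info."));
        System.out.println(respond("Can I get a refund on the camera I bought?"));
        System.out.println(respond("Does this laptop work with my docking station?"));
    }
}
```

Inquiries that match no rule fall through to a human representative; the routine categories are the ones an automated agent can answer.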

An example of extensive use of agents can be seen at Cisco Systems, which uses a suite of commerce agents that help customers and partners do business electronically:

  • Lead Time Agent gives customers and partners the current lead times for Cisco products.
  • Service Order Agent provides access to information on service orders.
  • Contract Agent provides information about service contracts.
  • Upgrade Agent allows requests for software or hardware upgrades and documentation.
  • Notification Agent lets users specify criteria that will result in receiving email automatically about changes in order status or pricing.
  • Configuration Agent enables users to create and price online configurations.

These agents can be integrated with the customers’ information systems.

Turnaround time. In some e-commerce applications, quick turnaround (or cycle) time is absolutely crucial. Customers of Internet brokerage firms want instantaneous order execution and also expect responses to inquiries within a very short time, that is, no longer than 24 hours. A brokerage firm has to be able to provide such service levels regardless of whether market volume is high or low. In fact, on high-volume days (that is, days with large market volatility), the number of information requests might even be disproportionately higher than on normal transaction days. Hence, the brokerage has to be able to respond to the peak load, while ideally not carrying too much overhead during low-load periods. Here again, Internet agents able to classify requests and answer routine inquiries can significantly lower the transaction volume and provide a high service level even in peak periods. An example is E*Trade's "ask" agent.

Closing the deal. Just as agents can greatly increase the efficiency of e-commerce transactions, they can also improve their effectiveness. The sheer ability to close a deal via an agent allows companies to make sales that would otherwise be impossible. A small mom-and-pop Internet store can suddenly provide 24-hour customer service and sales on a global scale, untethered by the limitations of local time zones. Businesses can also appear much more intelligent by handling order taking through intelligent agents.

Lowest-price purchase. Comparison shopping over the Internet has become one of the most popular applications of agent technology. Agents can make the search for the lowest price almost effortless for the customer. While highly advantageous for buyers, this development has raised concerns among retailers, who complain about the single-minded price orientation of intelligent agents.
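
In its simplest form, such a comparison-shopping agent ("shopbot") gathers price quotes for the same item from several merchants and selects the lowest delivered price. The sketch below uses hard-coded, fictitious merchants and prices in place of the real per-merchant lookups (HTTP requests, screen scraping, or merchant APIs) an actual shopbot would perform.

```java
// A minimal comparison-shopping sketch: collect quotes for one item from
// several (simulated) merchants and pick the lowest delivered price.
// Merchant names, prices, and the collectQuotes() stub are illustrative.
import java.util.Comparator;
import java.util.List;
import java.util.Optional;

public class ShopbotSketch {

    record Quote(String merchant, double itemPrice, double shipping) {
        double total() { return itemPrice + shipping; }
    }

    /** Placeholder for real per-merchant lookups over the network. */
    static List<Quote> collectQuotes(String item) {
        return List.of(
                new Quote("merchant-a.test", 199.00, 9.50),
                new Quote("merchant-b.test", 205.00, 0.00),
                new Quote("merchant-c.test", 189.99, 24.00));
    }

    public static void main(String[] args) {
        List<Quote> quotes = collectQuotes("digital camera");
        Optional<Quote> best = quotes.stream()
                .min(Comparator.comparingDouble(Quote::total));   // lowest delivered price
        best.ifPresent(q -> System.out.printf(
                "Best delivered price: %s at $%.2f (item $%.2f + shipping $%.2f)%n",
                q.merchant(), q.total(), q.itemPrice(), q.shipping()));
    }
}
```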


Whose Information Is It?

One of the most contentious points in the discussion that followed the original eBay story was information ownership. Many individuals argued that since the information was "in the public domain," it was no longer eBay's. Ruby Lane's CEO pointed out that his site was not copying eBay's information, but simply providing URL links to it.

Settling the issue of ownership rights—and therefore deciding whether it might be a criminal offense for others to take the information from the original site—is tricky. As the information remains on the original site, no theft takes place, but an infringement of copyright is possible. However, information itself is never copyrighted, only a particular form of expression. Furthermore, the copyright may not reside with the original site but with the customers who entered the information in the first place. These individuals might have the right, based on information privacy regulation, to refuse to let third-party sites broadcast their transaction data, or, conversely, may demand open accessibility.

More relevant for e-businesses than the legal concerns, however, may be the commercial considerations. An e-business may contractually limit information use rights with its business partners. That is the path eBay took with its September 1999 user agreement change. Such a contractual agreement, however, does not prevent nonsubscribers from exploiting any freely available data to the fullest, as nonsubscribers have no contractual agreement with eBay. To avoid giving subscribers effectively fewer rights than nonsubscribers, e-commerce sites may need (and some already have) very different information access policies, with significant information access for registered users and very little access for other parties. Yet such policies may reduce a site's attractiveness and increase the difficulty of drawing new customers.


Conclusion

We have attempted to present two opposing views of intelligent agents, stressing the associated problems but also pointing to great benefits. In drawing conclusions, it may be helpful to reflect on the potential effects of a "future with intelligent agents."

First, agents will be game spoilers. Whenever companies offer a free service and in exchange try to extract something from the customer (such as a lengthy site visit), agents can be created that take over the customer's activities. Thus, the agents extract the value without the service provider receiving the expected returns. Hence, companies may ultimately need to charge a price for what is presently offered for free, at a level that reflects the true cost of that service. Thus, e-free sites are ultimately going to disappear. We should expect this scenario to play out as soon as efficient micropayment systems become widely used.

Second, agents may create a situation traditionally known as the "tragedy of the commons." The commons were the free grazing areas to which farmers in England could send their livestock. As access was free, the best strategy for each individual farmer was to send as many animals as possible, leading to overgrazing and diminished future returns. Today's commons are the Web sites that generate information and make it available to customers free of charge, while generating the necessary revenue from transaction processing or advertisements. As more and more secondary sites harvest the information generated by primary sites, the primary sites will not receive sufficient traffic to produce enough surplus, forcing them to reduce or eliminate their free services. In this way, overharvesting by search sites may leave the original data sites poorer.

Third, although companies such as eBay may complain about data misuse by others, they may have little choice in the long run but to make the information freely available anyway. After all, one of eBay's sources of success has been the creation of an open market where buyers and sellers could easily link up and where at least some information was available to assess the trustworthiness of partners in the sales transaction. In order to maintain this source of competitive advantage, there may be little option but to "keep the books open" and to accept some level of information poaching. "Having one's cake and eating it, too," that is, keeping the information openly available while forbidding its use, may not be possible in the end. In the interim, however, eBay's "no trespassing" rules may have led to casualties among aggregators. For instance, Bidder's Edge shut down on Feb. 21, 2001, due to "market and financing conditions," following a court ruling in May 2000 that barred the company from searching eBay's information. Bidder's Edge maintains that the eBay ruling did not cause its demise. Interestingly enough, companies with a market position such as eBay's are in the best position to poach from others, augmenting their already large selection of goods with specialty offers harvested from other sites. In other words, the more feasible strategy may be not to exclude others from harvesting one's site, but to beat them at their own game. This is a somewhat different strategy from the one suggested by [2], who maintain that in the face of disintermediation, the traditional intermediary has to shift strategy.

Given the impact of agents in changing the nature of businesses, some e-businesses will likely try to shield themselves from information harvesting by frequently changing the interface to their data, thus undermining an agent’s capability to access information using old communication protocols. At the same time, any change in interface must be either invisible to human users or acceptable as an improvement in information access.

Denial-of-service, whether due to overload by benign 'bots or due to malicious attacks, will remain a key source of concern. E-business Web sites will have to protect themselves against such attacks and may introduce measures such as restricting service to agents based on system load, or offering preferential treatment based on customer and agent profiles. For example, agents issued by trusted partners or by customers with a track record may be handled on more responsive servers than those from unknown issuers.
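
Such a load- and profile-based policy can be expressed as a small admission function evaluated for each incoming agent request. The sketch below is purely illustrative; the issuer tiers, load thresholds, and service decisions are assumptions, not any site's actual policy. In practice, the load signal would come from the site's own monitoring and the issuer tier from registration or authentication of the agent's issuer.

```java
// A sketch of load- and profile-based admission control for agent requests:
// under light load everyone is served, under heavy load only trusted or
// registered issuers keep full service. Tiers and thresholds are assumptions.
public class AgentAdmissionSketch {

    enum IssuerTier { TRUSTED_PARTNER, REGISTERED_CUSTOMER, UNKNOWN }
    enum Decision { FULL_SERVICE, LOW_PRIORITY_QUEUE, REJECT }

    static Decision admit(IssuerTier tier, double serverLoad) {
        if (serverLoad < 0.60) return Decision.FULL_SERVICE;    // light load: serve everyone
        if (serverLoad < 0.85) {                                 // busy: deprioritize strangers
            return (tier == IssuerTier.UNKNOWN)
                    ? Decision.LOW_PRIORITY_QUEUE : Decision.FULL_SERVICE;
        }
        // near saturation: only trusted partners keep full service
        switch (tier) {
            case TRUSTED_PARTNER:     return Decision.FULL_SERVICE;
            case REGISTERED_CUSTOMER: return Decision.LOW_PRIORITY_QUEUE;
            default:                  return Decision.REJECT;
        }
    }

    public static void main(String[] args) {
        System.out.println(admit(IssuerTier.UNKNOWN, 0.40));             // FULL_SERVICE
        System.out.println(admit(IssuerTier.UNKNOWN, 0.90));             // REJECT
        System.out.println(admit(IssuerTier.REGISTERED_CUSTOMER, 0.90)); // LOW_PRIORITY_QUEUE
        System.out.println(admit(IssuerTier.TRUSTED_PARTNER, 0.95));     // FULL_SERVICE
    }
}
```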

Ultimately, agents provide very useful services, both to customers, who may benefit from buying at lower cost, and to companies, which also lower their search costs. As long as agents do not place an unacceptable load on remote servers, and as long as security problems can be curbed, the agents' great benefits in lowering transaction costs, accelerating cycle time, and closing the sale will result in their widespread use. However, one class of agents—those that harvest free information and reroute it to other sites—may change the nature of the e-free business and faces a significantly more turbulent and perilous future.


Tables

Table 1. Internet business models.

Table 2. Company stock valuation by revenue generation model and number of users.

References

    1. Applegate, L.M., McFarlan, F.W., and McKenney, J.L. Corporate Information Systems Management: The Challenge of Managing in the Information Age. Irwin Professional Publications, 1999.

    2. Chircu, A.M. and Kauffman, R.J. Reintermediation strategies in business-to-business electronic commerce. International Journal of Electronic Commerce 4, 4 (Summer 2000), 7–42.

    3. Falconi, R. and Firozabadi, B.S. The challenge of trust. Knowledge Engineering Review 14, 1 (1999), 81–89.

    4. Hendler, J. Making sense out of agents. IEEE Intelligent Systems (Mar./Apr. 1999), 32–37.

    5. Hoffman, D.L., Novak, T.P., and Peralta, M. Building consumer trust online. Commun. ACM 42, 4 (Apr. 1999), 80–85.

    6. Lange, D.B. and Oshima, M. Programming and Deploying Java Mobile Agents with Aglets. Addison-Wesley, Reading, MA, 1998.

    7. Maes, P., Guttman, R.H., and Moukas, A.G. Agents that buy and sell. Commun. ACM 42, 3 (Mar. 1999), 81–91.

    8. Mandry, T., Pernul, G., and Röhm, A. Mobile agents on electronic markets: Opportunities, risks, agent protection. International Journal of Electronic Commerce 5, 2 (Winter 2000–2001), 47–60.

    9. Murch, R. and Johnson, T. Intelligent Software Agents. Prentice Hall PTR, 1999.

    10. Turban, E., Lee, J., Lee, J.K., King, D., and Chung, H.M. Electronic Commerce: A Managerial Perspective. Prentice Hall, Upper Saddle River, NJ, 2002.

    11. Wang, S. Analyzing agents for electronic commerce. Information Systems Management 16, 1 (Winter 1999), 40–47.
