
From Adaptive Hypertext to Personalized Web Companions

  1. Introduction
  2. The AiA Personas: Adapting the Contents and Form of a Presentation
  3. The Inhabited Market Place I: Adapting the Character's Personality Profile and Role
  4. The Inhabited Market Place II: Adapting the Degree of Activity
  5. Conclusion
  6. References
  7. Authors
  8. Footnotes
  9. Figures

The advent of Web browsers able to execute programs embedded in Web pages has enabled the use of embodied conversational characters on the Internet. In fact, an increasing number of Web sites employ animated characters for a large variety of tasks: virtual teachers guide users through online learning activities [6], Web chauffeurs provide recommendations on which links to follow, and virtual insurance agents help users to fill in forms. Web-based sales agents do not just present the customer with various products, but try to make the visit to an online store a unique experience; see [5] for an overview. Companies such as Artificial Life, Extempo, Haptek, and Virtual Personalities1 create virtual characters specifically for the Internet.

The added value of an animated character is not just its ability to provide additional information about a Web site, as this could be done by other means just as well. More important is the creation of a new browsing experience for the user, who no longer has to surf the Web on his or her own, but is instead situated in a social context with virtual characters. Embodied conversational agents represent a great challenge to research on adaptive Web-based interfaces because they add complexity to the adaptation process and require us to rethink the design of such interfaces. In particular, the issues described here need to be considered.

Compared to conventional Web sites, conversational agents require a high degree of context-awareness. In the hypermedia community, it is still a matter of debate whether a Web page should change if the user visits it a second time. On one side of the argument, a hypermedia system should consider whether the user has already seen a page. On the other, the user might no longer be able to find a previously visited page precisely because the system has started its adaptation process. Unlike conventional Web sites, animated characters raise expectations of human-like adaptive behavior. It is rather irritating if a character utters the same comment again and again whenever the user returns to a certain Web page. In addition, monotonous and predictable behavior destroys the character’s believability.

Interaction with embodied conversational agents is inherently social in nature. A character that conveys a sympathetic impression will most likely increase the user’s trust in the application. Nass and colleagues [4] have shown that computer agents representing the user’s ethnic group are perceived as socially more attractive and trustworthy. To convey information in an effective manner, the adaptation process needs to consider all facets of the character’s identity: its audiovisual appearance, its personality profile, its social role, its interests, and so forth.

Conventional Web sites are usually restricted to adapting the content alone. Personalization of a character-based site, by contrast, also needs to include the agent’s presentation style. For instance, consider a salesperson who advertises a product by showing high-quality photographs of it. If the salesperson is unable to tailor the style of presentation to a particular customer, the presentation as a whole may still fail despite the quality of the material.

Adaptive systems have often been criticized for their unpredictable and opaque behavior. To remedy this problem, the adaptive part of the user interface is frequently separated from the nonadaptive part. Animated characters support such a design. For instance, the adaptive part may be conveyed by a character that utters suggestions and recommendations, whereas the nonadaptive part includes the Web-based material. In this way, each user is presented with the same Web-based environment, but nevertheless gets an individual tour.

Our contribution to this area of research was the development of a plan-based approach to automate both the generation of customized Web pages and the creation of scripts that are forwarded to the characters for execution [3]. This approach has been successfully applied to build a number of applications in which information is conveyed either by a single presenter or by a team of presentation agents. However, while exploring further application fields and new presentation styles, we identified some principal limitations of the scripting approach. One decisive factor is whether all the information to be conveyed by a character is available before a presentation starts. Another is the kind of user interactions that need to be supported while a presentation is displayed. We therefore decided to move from a centralized scripting approach to a character-centered approach in which a character’s behavior is determined on the fly. We will illustrate our points using actual application examples developed at the German Research Center for Artificial Intelligence (DFKI).

The AiA Personas: Adapting the Contents and Form of a Presentation

In the AiA (Adaptive InfoBahn Access) project, we developed a series of personalized information assistants that aimed at facilitating user access to the Web. Besides the presentation of Web contents, the AiA agents provide orientation assistance in a dynamically expanding navigation space. Figure 1 shows one of our applications, which is a personalized travel agent.

Suppose the user wants to travel to Hamburg and starts a query for typical travel information. To comply with the user’s request, the AiA system accesses a list of selected Web servers and databases to retrieve information about Hamburg: for example, a restaurant server, a hotel server, a weather server, and so forth. AiA then selects relevant units, restructures them, and uses an animated character to present them to the user.

AiA is based on a plan-based approach for automated script generation [3]. The basic idea is to formalize action sequences for composing multimedia material and designing scripts for presenting this material to the user as operators of a planning system. To allow for the dynamic expansion of the navigation space, the complete presentation is not scripted in advance. Instead, certain parts of a presentation are expanded only on demand. This method has the advantage that presentations can be adapted to the user’s previous navigation behavior and the information that has been conveyed so far.
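To make this idea concrete, the following sketch shows, in simplified Python, how presentation goals might be decomposed by plan operators, with selected subtrees expanded only when the user actually navigates to them. The operator and goal names are illustrative assumptions and do not reproduce the actual AiA plan library.

```python
# Illustrative sketch only: a toy hierarchical planner with on-demand expansion.
# Operator and goal names are assumptions, not taken from the AiA plan library.

from dataclasses import dataclass
from typing import List

@dataclass
class Operator:
    goal: str               # communicative goal achieved by this operator
    subgoals: List[str]     # ordered decomposition into subgoals
    deferred: bool = False  # expand only when the user actually navigates there

OPERATORS = {
    "PresentCity":     Operator("PresentCity", ["ShowHotels", "ShowRestaurants"]),
    "ShowHotels":      Operator("ShowHotels", ["DescribeHotel"], deferred=True),
    "ShowRestaurants": Operator("ShowRestaurants", ["DescribeRestaurant"], deferred=True),
}

def expand(goal: str, on_demand: bool = False) -> list:
    """Expand a goal into a flat script; deferred subtrees become placeholders."""
    op = OPERATORS.get(goal)
    if op is None:
        return [goal]                # primitive presentation act
    if op.deferred and not on_demand:
        return [("DEFERRED", goal)]  # expanded later, when the user visits it
    script = []
    for sub in op.subgoals:
        script.extend(expand(sub))
    return script

print(expand("PresentCity"))
# Later, when the user follows the corresponding link:
print(expand("ShowHotels", on_demand=True))
```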

In AiA, we started from the assumption that a character is owned by a Web-based service provider. Even though the AiA agents adapt themselves to the specific user, it is still the provider who has the final control over the character’s behavior, which is essentially determined by the provider’s and not the user’s goals. Provider-owned characters inhabit a specific Web site to which they are specialized. In contrast, user-owned characters may take the user to unknown places, make suggestions, direct the user’s attention to interesting information, or simply have a chat with the user about the place they are jointly visiting. We hypothesize that the user will more easily build up a social relationship with a character that is committed to him or her than a character owned by a company. Furthermore, users might hesitate to chat with a company-owned character because they could be afraid of a potential misuse of private data, such as their personal preferences. However, the development of personalized user-owned characters still constitutes a great challenge for future research.

The Inhabited Market Place I: Adapting the Character’s Personality Profile and Role

The objective of the Inhabited Market Place is to investigate sketches, given by a team of lifelike characters, as a new style of Web-based product presentation. The Inhabited Market Place is a virtual place in which seller agents provide product information to potential buyer agents in order to convey facts about a certain product to an observing user. Figure 2 shows a dialogue between several seller and buyer agents. One of the salesmen is trying to convince the buyers of the potential benefits of a certain car. Apart from the agent in the middle, which was created by DFKI, all agents have been taken from the Microsoft Agent Ring (see www.msagentring.org).

The Inhabited Market Place allows us to explore new forms of adaptation. Whereas AiA presents each user with the same agent, the Inhabited Market Place enables the user to put together a team of agents with certain interests and personality features and to assign roles to them. The resulting presentation is not just an enumeration of product attributes. Rather, the attributes are presented along with an evaluation that reflects the role casting chosen by the user. To illustrate this, we depict in Figure 3 two dialogue fragments that have been generated for different initial parameter settings.

The two dialogues shown in Figure 3 discuss some of the same car attributes, but from different points of view. In both cases, one of the buyers criticizes the high gas consumption of the car. But in the first case, the buyer is concerned about the environment; in the second case, the buyer is thinking of the high costs.
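The following fragment sketches, in plain Python, how the same attribute can be evaluated differently depending on the interest profile assigned to a buyer agent. The profiles, threshold, and wording are illustrative assumptions rather than the actual rules of the Inhabited Market Place.

```python
# Illustrative sketch: the same car attribute evaluated under two assumed
# buyer profiles. Thresholds and phrasings are invented for the example.

attribute, value = "gas_consumption", 12.0   # litres per 100 km (example value)

def evaluate(profile: str, name: str, val: float) -> str:
    if name == "gas_consumption" and val > 8.0:
        if profile == "environmentalist":
            return "Isn't that rather bad for the environment?"
        if profile == "cost_conscious":
            return "That will make the car expensive to run."
    return "That seems acceptable to me."

for profile in ("environmentalist", "cost_conscious"):
    print(f"{profile}: {evaluate(profile, attribute, value)}")
```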

As in the AiA project, we follow a communication-theoretic view and consider the automated generation of such scripts a planning task. Nevertheless, a number of extensions became necessary to account for the new communicative situation. Information is no longer presented by a single agent that stands for the presentation system, but is instead distributed over the members of a presentation team whose activities need to be coordinated. Another aspect is that information is not conveyed by executing presentation acts that address the user, but rather by a dialogue between several characters that he or she observes. In addition, to generate effective performances with believable dialogues, we cannot simply reuse a character that has been developed for another application. Rather, characters have to be realized as distinguishable individuals with their own areas of expertise, interest profiles, personalities, emotions, and audiovisual appearances.

To account for this, we extended our repertoire of communicative acts with dialogue acts, such as “responding to a question” or “taking a turn,” and we defined plan operators that decompose a complex communicative goal into dialogue acts for the individual agents. The character’s profile is considered by treating it as an additional filter during the selection, instantiation, and rendering of dialogue strategies (see [2] for more details).
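A minimal sketch of such a decomposition, under assumed names, might look as follows: a complex goal such as discussing a product attribute is broken down into dialogue acts assigned to individual agents, and the seller’s personality filters which response strategy is instantiated.

```python
# Illustrative sketch only: decomposing a communicative goal into dialogue
# acts for individual agents, with the personality profile acting as a
# filter on strategy selection. Names and strategies are assumptions.

from dataclasses import dataclass
from typing import Dict, List

@dataclass
class DialogueAct:
    speaker: str
    act: str        # e.g. "ask", "respond", "take-turn"
    content: str

def discuss_attribute(attribute: str, seller: Dict, buyer: Dict) -> List[DialogueAct]:
    """Decompose 'discuss attribute' into dialogue acts for a buyer and a seller."""
    acts = [DialogueAct(buyer["name"], "ask", f"What about the {attribute}?")]
    # Profile filter: an extravert seller elaborates, an introvert keeps it short.
    if seller["personality"] == "extravert":
        response = f"The {attribute} is excellent; let me show you why."
    else:
        response = f"The {attribute} is fine."
    acts.append(DialogueAct(seller["name"], "respond", response))
    return acts

seller = {"name": "Seller", "personality": "extravert"}
buyer = {"name": "Buyer", "personality": "introvert"}
for act in discuss_attribute("engine", seller, buyer):
    print(f"{act.speaker} [{act.act}]: {act.content}")
```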

The Inhabited Market Place II: Adapting the Degree of Activity

The Inhabited Market Place I allows for flexible role assignment of the agents. The user’s role, however, is fixed to that of a passive observer. We are currently investigating ways of letting the user take an active role in the performance whenever he or she wishes to do so. If the user does not want to participate in a conversation, the characters give a performance on their own; at any point in time, the user has the option of joining the discussion again. The novelty of the approach lies in the fact that it allows the user to dynamically switch between active and passive viewing styles. To illustrate this new approach, we are currently developing an interactive version of the Inhabited Market Place. The presentation task and scenario are similar to the original version. Following the principal idea of anytime interaction, our goal is now to allow the user to step into the role of an accompanying buyer or a seller who can pose questions and support, reinforce, or reject arguments made by the other agents. In Figure 2, the user, who is shown from behind, has just made a comment on the car.

Because the agents have to dynamically respond to user interactions, it is no longer possible to pre-script utterances. For such scenarios, we propose a character-centered approach in which the scripting is done by the involved characters at presentation display time based on their specific goals. Technically, we have realized this approach through a system of distributed planners (see [1] for more details).
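The following sketch, again under assumed names, illustrates the character-centered idea: each agent runs its own small decision procedure and plans its next contribution only when a dialogue event, such as a user question, actually occurs.

```python
# Illustrative sketch: each agent decides its next contribution at display
# time, reacting to incoming dialogue events instead of following a script
# fixed in advance. Event types and goals are assumptions for the example.

class Agent:
    def __init__(self, name, goals):
        self.name = name
        self.goals = list(goals)   # the agent's private presentation goals

    def react(self, event):
        """Plan the next contribution for the latest dialogue event, if any."""
        if event["type"] == "user_question":
            return {"speaker": self.name, "type": "answer",
                    "content": f"Regarding {event['topic']}: ..."}
        if event["type"] == "turn_available" and self.goals:
            return {"speaker": self.name, "type": "statement",
                    "content": self.goals.pop(0)}
        return None

agents = [Agent("Seller", ["praise the engine"]),
          Agent("Buyer", ["ask about the price"])]

events = [{"type": "user_question", "topic": "gas consumption"},
          {"type": "turn_available"},
          {"type": "turn_available"}]

for event in events:
    for agent in agents:
        contribution = agent.react(event)
        if contribution is not None:
            print(contribution)
            break   # one agent takes the turn; the others wait for the next event
```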

Conclusion

Adapting the behavior of a virtual persona is a challenging task that goes far beyond content and layout adaptation. We’ve illustrated this here by means of three systems that focus on various aspects of the adaptation process: the presentation style, the character’s role and personality profile, and the amount of user engagement. All three systems aim at providing new presentation styles for information presentation on the Web. In the context of the European project IST-NECA, we are currently preparing a modified version of the Inhabited Market Place I. By mid-2002, the system will become part of a car portal site hosted by www.freenet.com, a large British Internet provider.

Figures

Figure 1. The AiA persona.

Figure 2. The Inhabited Market Place.

Figure 3. Adapting the dialogue behavior to the characters’ profile.

    1. André, E., Rist, T., and Müller, J. Employing AI methods to control the behavior of animated interface agents. Applied Artificial Intelligence Journal 13 (1999), 415–448.

    2. André, E., Rist, T., van Mulken, S., Klesen, M., and Baldes, S. The automated design of believable dialogues for animated presentation teams. In J. Cassell, J. Sullivan, S. Prevost, and E. Churchill, Eds., Embodied Conversational Agents, MIT Press, Cambridge, MA, 2000, 220–255.

    3. André, E., Rist, T., and Baldes, S. From simulated dialogues to interactive performances. In V. Marik, O. Stepankova, H. Krautwurmova, and M. Luck, Eds., Multi-Agent Systems and Applications II, Lecture Notes in Artificial Intelligence, LNAI 2284, Springer, Heidelberg, 2002.

    4. Nass, C., Isbister, K., and Lee, E.-J. Truth is beauty: Researching embodied conversational agents. In J. Cassell, J. Sullivan, S. Prevost, and E. Churchill, Eds., Embodied Conversational Agents, MIT Press, Cambridge, MA, 1999, 374–402.

    5. Pandzic, I.S. Life on the Web. Software Focus Journal. Wiley, NY, 2001, 52–58.

    6. Shaw, E. and Johnson, W.L. Pedagogical agents on the Web. In Proceedings of the Third International Conference on Autonomous Agents, ACM Press, NY, 1999, 283–290.

    1See www.artificial-life.com; www.extempo.com; www.haptek.com; and www.vperson.com.
