Computer science has long been intertwined with society’s technological dreams. The dream of automated homes relates to ubiquitous computing, just as the dream of sentient machines relates to artificial intelligence (AI). Another of society’s dreams could be called the “Avatar Dream,” a culturally shared vision of a future in which, through the computer, people can become whomever or whatever they want to be.
Key Insights
- Using virtual identity technologies merely to look like someone different from yourself is not enough to understand that person’s experiences.
- Researchers and developers should engineer virtual identity technologies such that modeling their social and cultural implications is intrinsic to the practice of inventing them.
- Analyzing social phenomena involving virtual identity systems, and designing new virtual identity authoring tools and applications to simulate social phenomena, enables researchers and developers to better support users’ needs for more powerfully nuanced forms of what we call “technologies of self-imagination.”
Our focus is not, however, simply on developing the technologies that can support the Avatar Dream. We instead argue for the need to reimagine the Avatar Dream so that the potential social and cultural impacts of virtual identities are considered intrinsic to the engineering practices of inventing them. This article is an overview of our endeavors toward this end, along with several key results and findings that demonstrate our reimagining of the Avatar Dream.
We first define the Avatar Dream, describe its current state, and motivate our work by considering problems with current virtual identity systems. We then provide a theoretical framework for characterizing the relationships between virtual and real-world (physical) identities necessary for precise articulation of the sociocultural phenomena we study. The remainder of the article focuses on two key endeavors. The first is our computational approach to analyzing sociocultural identity phenomena in virtual identity systems; these techniques support engineers developing systems that avoid or combat negative phenomena (such as discrimination and prejudice). For example, we use AI to reveal how sexist and racist biases are embedded in a bestselling computer game, demonstrating an approach applicable to other systems. The second is our approach to simulating sociocultural identity phenomena; it includes developing technologies (such as an authoring platform called Chimeria and interactive narratives made using it that convey how individuals navigate social categories). These technologies support the aims of creating richer experiences for users, helping educate diverse learners, and conducting social-science research studies. We conclude by reflecting on this reimagined Avatar Dream.
Defining the Avatar Dream. The Avatar Dream has two elements. One is technical, enabling users to control a virtual surrogate for themselves in a virtual world. These computational surrogate selves are often computer-generated images (CGI) but can range from text descriptions in games or social media to virtual representations that engage all the senses in futuristic virtual reality environments. The second is experiential, enabling users of these virtual surrogate selves to have experiences beyond those they encounter in the physical world, ranging from having new abilities to better understanding the experiences of others (such as of another gender or even another type of creature).
The current expression of the Avatar Dream in many contemporary societies includes using virtual identities to communicate, share data, and interact in computer-based (virtual) environments. We can thus view one manifestation of this dream as the avatar,a a user-controlled representation of self in a virtual environment. Neal Stephenson’s 1992 novel Snow Crash provides a science-fictional vision of avatars as technologies to reimagine one’s self. Stephenson wrote, “Your avatar can look any way you want it to, up to the limitations of your equipment. If you’re ugly, you can make your avatar beautiful. If you’ve just gotten out of bed, your avatar can still be wearing beautiful clothes and professionally applied makeup. You can look like a gorilla or a dragon or …”36 Another manifestation of the dream is the social-media profile. Even in the heady days of the 1990s, it was understood that these profiles were distinct from our physical selves. In one of the most venerable social media sites—The Wellb—where users’ real names were available, self-imagination was a central part of the appeal. The Well purportedly offered “the freedom of projecting whatever personality you wished, along with the intriguing possibility of highlighting subtle variations of your character.”14 Virtual identities in digital media, including virtual worlds, videogames, and social media, all hearken back to the Avatar Dream.
Problems with current virtual identity systems. The need for socio-culturally informed virtual identity research is urgent; nearly everyone today has social media accounts for connecting with friends, e-commerce accounts for shopping, and videogame characters for playing. One problem in designing virtual identity systems is the need for intuitive, appropriate, and robust tools for avatar creation and customization. Just being able to edit avatar appearance is not enough to support people’s needs for self-expression when using virtual identities. It is important that avatars embody enough sociocultural nuance to convey facial expressions, body language, gait, discourse style, and personality. Users should not need to program all these forms of self-expression from scratch and should, instead, be able to express themselves with their avatars through simple interfaces. Researchers Joseph Bates, Michael Mateas, Brenda Laurel, Ken Perlin, and others1,24,31,32 have developed such tools and virtual worlds for their deployment. Furthermore, some user groups are underrepresented and/or unfairly stigmatized in virtual environments. Social-network usage percentages are often higher for underrepresented U.S. racial groups and women than for white individuals and men; 90% of African-American females and 40% of white females are depicted as victims of violence in games.7,8 Such phenomena are not only embedded in systems but also enacted by users. Julian Dibbell’s classic 1993 article “A Rape in Cyberspace,”6 describing how a user’s female avatar in a text-based environment was taken over and subjected to violent acts, presaged such negative repercussions of the Avatar Dream. Such observations undergird an important motivation for our work: Virtual identities can and should better serve the needs of diverse users.
Defining Physical, Virtual, and Blended Identities
Before describing our computational approaches, we clarify the terminology we use, in light of the ambiguities in terms like “real” and “virtual” identity.
Physical identities. For identities in the physical world, our focus includes but also goes beyond the notions often associated with identity, like gender, race, and age. We are instead concerned with users’ identity experiences, which are informed by the history, culture, and values of the physical world and manifest in the ways people behave. While it is impossible to give a comprehensive definition of everything that affects people’s identities in the real world, we offer a simplified overview useful for the discussion in this article: identity experiences include cognitively grounded,c material (such as resources), and social (such as power relationships) aspects.
Virtual identities. Our definition of virtual identities focuses on their components as technical systems. A virtual identity in this model is characterized by the data structures and algorithms deployed to provide both representation and control to the user. In videogames, a common virtual identity type is the player character, or avatar, that players take control of. Computers have long been a medium for humans to create such “second” selves37 or even many selves. Such use of computers as digital media for self-representation has become even more pervasive with the proliferation of increasingly immersive virtual environments and game worlds that enable interactions among multiple players at the same time. Each of these other selves can be viewed as an “externalization of self”37 beyond our physically embodied selves. Hence, despite the fact that avatars are sometimes narrowly regarded as mere technically constructed visual artifacts, a more expansive view holds that virtual identities serve as important ways through which people represent or express themselves.
Blended identities. We have highlighted aspects of the physical- and virtual-world identities we seek to better understand. Rather than considering each individually, however, our research focuses on their interrelationships. We particularly seek to consider how values are socially and culturally constructed, enacted, and manipulated via blends between physical and virtual identities.
James Gee’s notion of the “projective identity”10 is a useful starting point. His use of the term refers to the reflection of players’ values in how they make sense of their avatars. However, while it provides a high-level descriptive characterization of identity, it is insufficient for our needs. It fails to capture important structural phenomena like mappings between a user’s actions and the virtual identity being controlled. We thus enrich Gee’s model with an approach from cognitive science called “conceptual blending theory”9 in which blending is a proposed cognitive mechanism by which humans integrate concepts.d Specifically, we use Harrell’s notion of a “blended identity”17 in which aspects of a player’s physical identity (such as preferences, control, appearance, and understanding of social categories) are selectively projected9 with aspects of the virtual identity onto a blended identity, integrating and elaborating aspects of each (see Figure 1). Blended identities can be studied in light of the interrelationship between both worlds. We later give examples of our approaches for analyzing blended identities computationally to reveal how physical-world values can be both embedded in virtual identity systems and enacted by virtual identity users.
It is also important to note that a single blended-identity user is not restricted to a single virtual world platform but often has multiple different virtual identities and behaves differently depending on the platform, what we term “cross-platform identities.” For each platform, there is a projection of a certain set of identity features from the user.
Box effects. This article introduces the term “box effects” to refer to the experiences of people that emerge from the failure of classification systems. Box effects include, but are not limited to, such related phenomena as stereotypes, social biases, stigmas, discrimination, prejudice, racism, and sexism. In the phrase “classification system,” as used here, “system” does not refer to a technical computer system but rather to the notion of classification put forward by Geoffrey Bowker and Susan Leigh Star, who said, “A classification is a spatial, temporal or spatiotemporal segmentation of the world,” and a classification system “is a set of boxes (metaphorical or literal) into which things can be put in order to then do some kind of work—bureaucratic or knowledge production.”3 Our term “box effects” is useful because there is no common term for social phenomena with roots in classification systems. It is also useful to have an overarching term like box effects because specific terms (such as stereotyping and prejudice) have multiple definitions in different academic disciplines, as well as general, popular uses.
Better understanding existing systems and designing new systems that take box effects into account cannot be accomplished within a narrowly technical vision of the Avatar Dream. Our reimagined Avatar Dream recommends addressing box effects in several ways. First, developers must understand the ways box effects from the physical world persist in virtual environments in terms of system structure. Second, we must look at how box effects emerge from users’ behaviors in relation to categories from the physical world; for example, users might create stereotypical female characters in a game based on their own preconceptions about categories of physical-world gender. Finally, beyond needing techniques to understand and identify box effects in virtual environments, developers need tools to support addressing them (such as through more nuanced models of identity in games and interactive narratives) like those we provide with the Chimeria platform.
Analyzing Virtual Identity Phenomena
Since 2010, Harrell has led the “Advanced Identity Representation (AIR) Project,” a research initiative to better understand, design, and develop virtual identities. We use the term “advanced” with humility. Our computational systems cannot completely express the nuances of physical-world identities. Yet they provide advances over current systems in their emphasis on physical-world identity categories social scientists have identified as important for modeling user experiences (such as gender, race, and ethnicity) in addition to personality, values, and preferences. In this way, we can achieve advances in modeling phenomena (such as box effects) that increase the expressive range and utility of virtual identities. Our resultant systems are often necessarily reductive (from vast real-life experience to more limited data structures and algorithms) in order to be implementable. Yet this reduction is done knowingly, with the benefit of expanding the expressive capacity of computational systems to address social-identity phenomena. As mentioned earlier, this work includes two types of computational modeling: analyzing sociocultural phenomena involving virtual identities using AI/machine learning techniques; and simulating sociocultural identity phenomena. We next describe our work on sociocultural phenomena involving virtual identities, including values built into systems (embedded) and patterns discovered from users (emergent).
Limitations of current approaches. There are many survey-based studies of how behavioral patterns of users in virtual environments replicate identity experiences based in the physical world, but they have notable limitations. While useful for assessing subjective notions of identities expressed by users (such as preferences), self-reported survey data is often difficult to evaluate and subject to survey bias.2 Also, while surveys and interviews are useful for understanding certain user characteristics (such as articulated reasons for choosing among options), some aspects of users’ experiences (such as tacit knowledge) cannot be articulated, and eliciting them can be intrusive or mentally and physically strenuous for participants and interviewers; such aspects are often better suited to automated data collection and analysis.
New approaches to analyzing blended identities. Here, we present our approach to using AI for computationally modeling categorization, focusing on the novel use of these algorithmic techniques to address aspects of social identity often deemed challenging to quantify due to the subjectivity of their manifestations.
Clustering for computational categorization. In the field of AI, cluster analysis, or “clustering,” is the algorithmic process of grouping observations into categories (clusters) based on measurements of similarity between individual observations. An observation refers to a data point within the set of observations (dataset) and is represented through measurements of one or more of its properties, or “features.” In our work, observations often correspond to players, each characterized by features that describe aspects of their physical identities (such as biological sex) or virtual identities (such as avatar appearance and behaviors). Cluster results depend on the definition of a cluster (which features determine membership) and on the similarity measure (how features are used to quantify similarity or difference between observations). Clustering is appropriate for our aims, as it enables quantitative analysis and can reveal new or unknown categories.
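As a concrete illustration of this kind of grouping, the following minimal sketch clusters hypothetical players by their avatar attribute allocations; the feature values and the use of k-means are illustrative assumptions, not the AIRvatar pipeline itself.

```python
# Minimal clustering sketch (illustrative; not the AIRvatar pipeline).
# Each observation is one player, represented by a feature vector of
# hypothetical avatar attribute allocations.
import numpy as np
from sklearn.cluster import KMeans

# Hypothetical feature matrix: rows = players; columns = strength,
# intelligence, stealth allocations (placeholder values).
X = np.array([
    [7, 2, 3],   # fighter-leaning allocation
    [2, 7, 3],   # mage-leaning allocation
    [3, 3, 7],   # thief-leaning allocation
    [6, 3, 3],
    [2, 6, 4],
])

# Group players into k clusters by similarity of their allocations.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X)

print(labels)                    # cluster assignment per player
print(kmeans.cluster_centers_)   # centroid = "typical" allocation per cluster
```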
There are many different approaches to clustering users according to their behavior in systems using virtual identities (such as videogames).27 Our own experience includes investigating techniques like the k-means algorithm, principal component analysis, non-negative matrix factorization, and archetypal analysis for social analysis using virtual identities.28 Our focus here is on archetypal analysis, which identifies a set of key observations in a dataset called “archetypes,” or extremal points in the dataset. Other observations can be represented as mixtures of these archetypes. This approach provides insight into definable characteristics of highly distinctive virtual representations or behaviors. It is also useful for revealing patterns of users’ behavior that either conform to or subvert conventions.28 In addition, because it provides a clear way to identify marginalized individuals, defined as observations notably distant from all archetypes,e we find archetypal analysis models such phenomena more effectively than other clustering techniques.27
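For readers unfamiliar with the technique, the standard formulation of archetypal analysis can be summarized as follows; this is a general sketch, not a description of our specific implementation. Given a data matrix $X \in \mathbb{R}^{n \times d}$ with one row per observation, the method seeks $k$ archetypes $Z = BX$ that are themselves convex combinations of the observations, such that each observation is in turn approximated by a convex combination of the archetypes:

$$\min_{A,\,B}\ \lVert X - ABX \rVert_F^2 \quad \text{subject to} \quad a_{ij} \ge 0,\ \textstyle\sum_j a_{ij} = 1, \qquad b_{jl} \ge 0,\ \textstyle\sum_l b_{jl} = 1.$$

Because the archetypes lie near the boundary of the data’s convex hull, they correspond to extreme, highly distinctive profiles, and each row of $A$ gives an observation’s mixture weights over those profiles (the coordinates plotted in the ternary diagram of Figure 3).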
Analyzing systems: Revealing values embedded in technologies. We created a system called AIRvatar to perform fine-grain telemetric data collection of users’ virtual identity creation and customization behaviors. Named for the Advanced Identity Representation (AIR) Project, AIRvatar also implements the clustering approaches described earlier and has been applied to videogame and social media data.28,29 Here, we provide an example of how we reveal race and gender stereotypes through archetypal analysis. We analyzed the critically acclaimed and commercially successful videogame The Elder Scrolls IV: Oblivion, which features a notably diverse roster of player-character types; we cite it here as one example of a general phenomenon, not to single it out as especially inequitable to diverse users in contrast to other games. The upshot is that we found and validated several forms of race and gender inequity. Players may choose to play as one of 10 available races. Though fictional, some of the races are based on physical-world national, racial, and ethnic groups through their textual descriptions and visual appearances; for example, Redguards represent people broadly of African descent (a single country or subgroup is not suggested); Nords represent Norwegians; Bretons represent French people; and so on. Player characters possess eight attributes representing abilities (such as strength and intelligence). Based on the player’s choice of race and gender, these attributes are initialized with a set of default values. In previous work, Harrell observed several forms of racial and gender inequity in the game (see Figure 2). For example, Bretons are 20 points more intelligent than Redguards and Nords of either gender; female Orcs and Argonians are 10 points more intelligent than males of the same race.
Examples of system-embedded biases (stereotypes of race and gender). While Harrell previously highlighted inequity and biases in Oblivion,17 such insight is typically anecdotal and requires manual assessment. To quantitatively model these effects, we performed archetypal analysis on Oblivion‘s distribution of statistical attributes for characters within the game based on gender and race, as in Figure 2. All races’ relationships to these archetypes can be presented using a ternary plot of the results (see Figure 3). This analysis automatically revealed the typical “archetypal” roles developers intend for players to conform to, based on how the statistical attributes are distributed within the game.28 More interestingly, the analysis also revealed several box effects within the game’s design. Observing the three archetypes shows that male characters have better attributes than female characters for playing any of the most common roles. It also validates the observation that characters of African descent are optimized for strength- rather than intelligence-based roles. This quantitatively and visually represents biases that are often not as obvious, as in the following results:
Traditional game roles. Using archetypal analysis, we observed that the statistical distribution of numerical attributes corresponds with traditional role-playing game roles we call “physical-fighter,” “intelligence-mage,” and “stealth-thief”;
Key individuals. Our system identified particular races as key individuals (archetypes). For example, the Viking-like Nords and ostensibly African Redguards are stereotypically close to the physical-fighter archetype, with no characteristics of the intelligence-mage archetype, though the Redguards exhibit some stealth-thief characteristics; and
Physical-world stereotypes. Across these archetypal races, we observed a bias toward male characters, who are generally closer to the archetypes. The game is thus inequitable toward certain races and female characters in ways that replicate physical-world stereotypes.
Revealing such box effects computationally enables us to quantitatively assess virtual identity systems, providing actionable insights into how designers’ decisions affect users and assistance in developing systems that enable users to take on virtual roles while avoiding undesirable biases. Our results do not suggest all characters should have equal attributes. Rather, they can inform creative designs that are just as effective and more tightly tied to game narratives; for example, initial attributes could be based on characters’ backstories, rather than on essential characteristics of races or genders, by having players choose characters’ prior life events from categories corresponding to the archetypes (such as studious [mage], physically strenuous [fighter], or street-smart [thief] upbringings).
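To make the ternary-plot coordinates in Figure 3 concrete, the following minimal sketch expresses one character’s attribute vector as a mixture of three fixed archetypes; the archetype and attribute values are placeholders rather than the game’s actual data, and the optimization shown is a generic simplex-constrained least-squares fit, not our exact pipeline.

```python
# Illustrative sketch: expressing one character's attribute vector as a
# mixture of three fixed archetypes to obtain ternary-plot coordinates.
# All numbers are placeholders, not Oblivion's actual attribute data.
import numpy as np
from scipy.optimize import minimize

# Hypothetical archetype profiles over three condensed attributes
# (strength, intelligence, agility).
Z = np.array([
    [50.0, 30.0, 35.0],   # "physical-fighter" archetype
    [30.0, 50.0, 35.0],   # "intelligence-mage" archetype
    [35.0, 30.0, 50.0],   # "stealth-thief" archetype
])

x = np.array([45.0, 32.0, 40.0])  # one character's (placeholder) attributes

def reconstruction_error(w):
    """Squared error of approximating x by the weighted archetypes."""
    return np.sum((w @ Z - x) ** 2)

# Weights must be non-negative and sum to 1 (a point on the simplex),
# so they can be read directly as ternary-plot coordinates.
result = minimize(
    reconstruction_error,
    x0=np.full(3, 1.0 / 3.0),
    method="SLSQP",
    bounds=[(0.0, 1.0)] * 3,
    constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}],
)
print(result.x)  # mixture weights over fighter, mage, and thief archetypes
```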
Analyzing users: Revealing user-enacted values. We have revealed user values and preferences in the following ways:
Modeling identity expression from player data. Using our AIRvatar system, we built a custom avatar-customization system called Heroes of Elibca (see Figure 4)f to give us full control over aspects of data collection for our experiment design. Players were presented with an introductory sequence to provide them with a familiar computer-role-playing-game setting. Additionally, this helped us contextualize the study as a scenario in which the created avatars would be used as part of a videogame. Out of Harrell’s taxonomy of technical components of computational identity systems16—static media assets, flat text profiles, modular graphical models, statistical/numerical representations, formal annotation, and procedural/behavioral rules—the results in the following subsections focus on statistical/numerical representations.g Players customized their avatars by modifying the values of six statistical attributes—strength, endurance, dexterity, intelligence, charisma, and wisdom—on a seven-point scale; see the online appendix for descriptions based on commonly used conventions in role-playing videogames. Each attribute had a default value of 4, and there were 27 allocatable points for each avatar.
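The sketch below illustrates one way such a statistical/numerical representation might be encoded and validated; the budget check reflects our reading of the rules above (seven-point scale, 27 allocatable points) and is not Heroes of Elibca’s actual implementation.

```python
# Illustrative encoding of the statistical/numerical component of an avatar.
# The seven-point scale and 27-point budget follow our reading of the rules
# described above; this is not the actual Heroes of Elibca implementation.
from dataclasses import dataclass, field

ATTRIBUTES = ("strength", "endurance", "dexterity",
              "intelligence", "charisma", "wisdom")

@dataclass
class AvatarStats:
    # Every attribute defaults to 4 on the 1-7 scale.
    values: dict = field(
        default_factory=lambda: {name: 4 for name in ATTRIBUTES})

    def validate(self) -> None:
        for name, v in self.values.items():
            if not 1 <= v <= 7:
                raise ValueError(f"{name} must be on the 1-7 scale, got {v}")
        total = sum(self.values.values())
        if total > 27:  # assumption: 27 is the total allocatable-point budget
            raise ValueError(f"allocation uses {total} points; budget is 27")

stats = AvatarStats()
stats.values.update(strength=6, intelligence=5, wisdom=3)  # reallocate points
stats.validate()
print(stats.values)
```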
Examples of user-enacted biases (stereotypes from gender-category expectations). While previous research has shown that players exhibit gender bias toward avatars controlled by others in a virtual environment,39 here we present findings showing how biases can be modeled when players construct their own avatars. We conducted a user study with 185 participants who were asked to customize their avatars using the avatar creator Heroes of Elibca. Of the participants, 104 (56%) self-identified as male and 81 (44%) self-identified as female. We studied how they assigned statistical attributes to their avatars based on the avatars’ gender.
The results reveal the phenomenon of gender stereotyping in how some of these avatars were customized. In Figure 5, observe that female players gave male avatars significantly higher values for physical traits (such as “strength,” “endurance,” and “dexterity”) while giving them significantly lower values for intellect-related traits (such as “intelligence,” “wisdom,” and “charisma”). Female players here appear to be projecting aspects of an identity experience from the physical world (such as stereotypes of gender) onto the avatars. This demonstrates a kind of box effect, as it reflects a player’s cognitive formation of categories of gender roles, along with associated assumptions and expectations. Interestingly, we did not observe these effects in avatars created by male users, as shown in the online appendix. We observed this asymmetry in earlier results with a smaller sample,29 and other researchers have reported it as well.21 Such results are not meant to portray female players negatively; factors like the genre of the game may reward traditionally “male” behaviors like physical aggressiveness.35 These user-enacted social-identity phenomena reflect the situated nature of cognitively forming categories.
Simulating Social Identity Phenomena
The previous section focused on techniques for identifying and analyzing box effects, but analysis alone is not enough for reimagining the Avatar Dream. Developers need tools that are better able to model sociocultural identity categories and the experiences people have based on them. Here, we provide an overview of our platform, which developers can use to design and implement virtual identity systems that help users better understand box effects and/or enable more nuanced identity-category models that avoid them. Modeling box effects is necessary for the first part of the dream—being whomever you want using a virtual identity—because being someone is not just a matter of graphical appearance but of modeling systematic experiences. The second part of the Avatar Dream—understanding the experience of others—requires modeling social experiences more robustly to avoid box effects. While one may not be able to directly experience what it means to live a physical-world life as a member of another social category using a virtual identity, it is possible to use virtual identities to convey some of the patterns of experience that people in other categories face and that exist structurally in societies. Enabling users to be a virtual female superhero or even just a more suave and dandy self requires techniques to help them imagine the subjective experience of those types of identity. Our platform demonstrates the representational benefits of a gradient model of social identity; our examples demonstrate applications that aim to engender critical awareness about the nuances of social identity.
Computational models of social identity are found in a wide range of digital-media works. In computer role-playing games, racial categorization is typically used to style the visual appearance of a player’s avatar or to trigger one of several canned reactions when conversing with a non-player character. In social media, users typically join groups based on shared taste or categorize each other as “colleagues” or “family members” using privacy settings. In such systems, category membership is determined in a top-down fashion; members often slot into single, homogeneous groups with no hybrid identities, identities at the margins of groups, or identities that change over time. Such systems provide developers and users neither elegant ways to avoid box effects nor ways to simulate them in order to create more expressive virtual worlds.
Simulating and avoiding box effects. Our “Chimeria platform” (hereafter Chimeria) is a system that supports the simulation of physical-world identity phenomena in virtual identity systems ranging from social-media accounts to videogames. Such simulation augments virtual identity models with gradience and dynamics, increasing their sociocultural nuance. This additional nuance supports demonstrating how box effects are detrimental, whether by creating expressive systems (such as videogames) that reveal how forms of discrimination function or by avoiding box effects in utilitarian systems (such as social-media platforms). Chimeria does so in two primary ways: modeling the underlying structure of many social-categorization phenomena with a computational engine; and enabling users to build their own creative applications about social categorization using the engine as a backbone. The underlying engine allows for the movement of individuals within, between, and across social categories.
It also allows for category members to have varying degrees of centrality to each group, assimilate or naturalize in relation to a hegemonic group, and be members of multiple groups. These aspects of the system are grounded in theories from sociolinguistics,33 cognitive science,23,h and the sociology of classification.3 The system is thus capable of modeling complex social behaviors (such as “impression management,” addressed later). We next describe the architecture of Chimeria and two applications built with it.
Chimeria authoring platform. Chimeria supports simulating experiences based on social-group membership using a data-driven approach and consists of three main components (see Figure 6). Simulations may take different forms (such as a 2D visual-novel game, a fictitious social-network chat narrative, or a 3D virtual environmenti).
Chimeria engine. This is our implementation of a mathematical model of users’ degrees of membership across multiple categories. The Chimeria engine is designed to calculate, modify, and simulate changes to these memberships, acting as the system’s logical processing component. It models users’ category memberships as gradient values relative to the more central members,3,17,23 enabling more representational nuance than the binary member/nonmember status commonly used in applications; for example, on the social network Facebook, being someone’s friend can be viewed as a basic Boolean flag; in the physical world, however, there are varying types and levels of friendship people have with others.j Chimeria is intended to enable both a greater range of expression of such nuances and representations that better serve users.
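The following is a minimal sketch of what such a gradient membership representation could look like, contrasted with a Boolean flag; the field names, update rule, and values are our illustrative assumptions, not the Chimeria engine’s actual code.

```python
# Illustrative sketch of gradient category membership (contrast with a
# Boolean member/nonmember flag). Field names, values, and the update rule
# are our assumptions, not the Chimeria engine's implementation.
from dataclasses import dataclass, field

@dataclass
class MembershipModel:
    # Degree of membership in each category, each in [0.0, 1.0];
    # 1.0 corresponds to a maximally central (prototypical) member.
    degrees: dict = field(default_factory=dict)

    def adjust(self, category: str, delta: float) -> None:
        """Shift membership in one category, clamped to [0, 1]."""
        current = self.degrees.get(category, 0.0)
        self.degrees[category] = min(1.0, max(0.0, current + delta))

    def most_central(self) -> str:
        """Category to which the user is currently most central."""
        return max(self.degrees, key=self.degrees.get)

user = MembershipModel({"raucous rock": 0.7, "airy jazz": 0.2})
user.adjust("airy jazz", +0.3)    # e.g., the user "likes" a jazz post
print(user.most_central())        # still "raucous rock" (0.7 vs. 0.5)
```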
Chimeria application interface. This is a visual interface—a game or story interface—through which users interact with the system and experience narratives of category-membership change. The separation between the back end (Chimeria engine) and the front end (Chimeria application interface) provides the flexibility to present the same narrative trajectory of membership shifts with varying visuals. Chimeria narratives are authored by developers using an XML file format with a narrative structure, as described in Harrell et al.19
Chimeria domain epistemologies. An “epistemology” is an ontologyk that describes cultural knowledge and beliefs.18 In Chimeria, domain epistemologies are the knowledge representations of the categories being modeled. Assets used to present these categories can be author-contributed (such as graphics and text) or data-driven (such as retrieved YouTube videos).
Chimeria applications. To better illustrate the capabilities of the components within Chimeria, we describe two very different simulations of social experience created using Chimeria: Chimeria:MusicNet, a social-networking simulation application that models social categories in the domain of musical preferences,15 and Chimeria:Gatekeeper, a computer role-playing-game scenario that models a conversational narrative between the player and a non-player character.
Chimeria:MusicNet. This application, the name of which is an abbreviation of Chimeria:Musical-Identity-Social-Network, uses the Chimeria engine to model social experiences based on categories of music preference. Psychologists David Hargreaves, Dorothy Miell, and Raymond MacDonald note that the music people listen to becomes a venue for the expression and formulation of their sense of self-identity and identity portrayed toward others, or “a musical identity.”15 The system models category membership using musical preferences that are automatically constructed from a user’s set of music “likes,” or binary indications of positive valuation, on a social-network profile. These “likes” are musical artists from which the Chimeria engine extrapolates (using commercially available musical-classification data) moods (such as cheerful and gloomy), themes (such as adventure and rebellion), and genres (such as film score). This extrapolation leads to a set of musical-identity categories, that is, musical-affinity groups that provide the context for non-binary group membership and passing, or the “ability of a person to be regarded as a member of social groups other than his or her own … generally with the purpose of gaining social acceptance.”34 Each user’s set of moods, themes, and genres then affects the generated narrative in fundamental ways. The focus of Chimeria:MusicNet is not on categorizing music but on modeling musical preferences using a knowledgebase aggregated from external data. These models are used to dynamically construct narratives conveyed through a social-network interface—“conversational narratives”—structured by a model of conversation from sociolinguistics.33
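As an illustration of this kind of extrapolation, the sketch below aggregates tags over a user’s artist “likes” into a gradient musical-identity profile; the tag table and function names are placeholders standing in for the commercially available classification data the actual system draws on.

```python
# Illustrative extrapolation of a musical-identity profile from artist
# "likes". The tag table is a placeholder standing in for the commercial
# classification data the actual system draws on.
from collections import Counter

# Hypothetical artist -> tags knowledgebase.
ARTIST_TAGS = {
    "Artist A": {"genres": ["rock"], "moods": ["raucous"], "themes": ["rebellion"]},
    "Artist B": {"genres": ["jazz"], "moods": ["airy"], "themes": ["romance"]},
    "Artist C": {"genres": ["rock"], "moods": ["cheerful"], "themes": ["adventure"]},
}

def musical_identity(likes):
    """Aggregate tag counts over liked artists into a gradient profile."""
    profile = {"genres": Counter(), "moods": Counter(), "themes": Counter()}
    for artist in likes:
        for facet, tags in ARTIST_TAGS.get(artist, {}).items():
            profile[facet].update(tags)
    # Normalize counts within each facet so they read as degrees of affinity.
    return {
        facet: {tag: count / sum(counts.values()) for tag, count in counts.items()}
        for facet, counts in profile.items() if counts
    }

print(musical_identity(["Artist A", "Artist C", "Artist B"]))
# e.g., genres -> rock 0.67, jazz 0.33: the gradient musical-affinity
# categories that drive the conversational narrative.
```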
Figure 7 is a screenshot of Chimeria:MusicNet. A dynamic collage of photos, or photowall, is procedurally generated to represent the user’s musical-taste preferences, and a feed of recent updates, posts, and invitations appears in an adjacent vertical timeline. Using musical preferences drawn from the user’s Facebook music likes or entered manually, a hybrid real/fictitious conversational narrative experience progresses over time as follows. Dynamically generated posts by the user’s non-player-character friends comment on the user’s membership in multiple musical-affinity groups, as in “You’re a raucous rock fan now?” or “Want to hear some airy jazz music?” The user may “like,” “dislike,” or simply ignore these posts, resulting in group-membership changes. Some friends question newly discovered interests while others pass judgment on prior affiliations. The resulting narrative may describe passing or assimilating as a member of a new group of music listeners, reinforcing a prior group affiliation, or even being marginalized in every group.
Chimeria:Gatekeeper. This application models a common role-playing-game scenario—a player trying to gain access to the inside of a castle. The scenario illustrates a phenomenon noted by Harrell,16 who wrote, “There exists a perceived appropriateness of particular ways to present one’s self in different situations, as well as social avenues that may be closed off or accessed only with more difficulty due to externally defined social prejudices and biases. This perceived negative difference between diverse individuals and socially defined, desirable and privileged norms is called stigma.” The Chimeria:Gatekeeper scenario is based on sociologist Erving Goffman’s work on stigma.13 The unseen player character is initialized in a “discredited” (stigmatized) category, and the non-player character is initialized in an “accepted” category. The discredited category is prototypically defined as the Sylvanns race—tall, well-spoken, and wearers of fine clothing. The accepted category is prototypically defined as the Brushwoods race—short, plain-spoken, and wearers of rough-spun clothing. To gain access to the castle, the player must exhibit behaviors that convince the guard that she or he should be admitted; most players try to demonstrate that the player character fits in the accepted category, a social-identity phenomenon known as “passing.”13 Figure 8 depicts choosing a dialog option to fit into the accepted category. Actions (such as slouching to adopt the posture of a prototypical Brushwood or displaying fine Sylvann clothing) shift the non-player character’s model of the player character’s category memberships, rendering the outcome closer to gaining access or being rejected. Chimeria handles alternatives to the common strategy of intentionally passing, simulating experiences of a variety of box effects based on Goffman’s notion of impression management.13 Other simulated experiences include voluntary disclosure of stigma and slipping, or trying to pass as an accepted member but failing. These experiences capture trade-offs between gaining utilitarian access and losing self-identity.
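To show how such membership shifts might drive the outcome, here is a minimal sketch in which dialog choices nudge the gatekeeper’s estimate of the player character’s memberships; the categories, deltas, and admission threshold are placeholders, not Chimeria:Gatekeeper’s actual data or logic.

```python
# Illustrative sketch of dialog choices shifting a gatekeeper NPC's model of
# the player character's category memberships. Categories, deltas, and the
# admission threshold are placeholders, not Chimeria:Gatekeeper's data.
memberships = {"Brushwood (accepted)": 0.2, "Sylvann (discredited)": 0.8}

# Each action nudges the NPC's estimate of the player's memberships.
ACTION_EFFECTS = {
    "slouch like a Brushwood":      {"Brushwood (accepted)": +0.2,
                                     "Sylvann (discredited)": -0.1},
    "display fine Sylvann clothes": {"Brushwood (accepted)": -0.2,
                                     "Sylvann (discredited)": +0.2},
}

def apply_action(action: str) -> None:
    """Apply one action's effects, keeping each degree within [0, 1]."""
    for category, delta in ACTION_EFFECTS[action].items():
        memberships[category] = min(1.0, max(0.0, memberships[category] + delta))

# A player attempting to "pass" repeatedly performs Brushwood-like actions.
for action in ("slouch like a Brushwood", "slouch like a Brushwood"):
    apply_action(action)

# Admission when the NPC judges the player sufficiently central to the
# accepted category (the 0.5 threshold is an arbitrary placeholder).
print("admitted" if memberships["Brushwood (accepted)"] >= 0.5 else "rejected")
```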
User testing has revealed that Chimeria overcomes limitations common in virtual identity systems while enabling critical examination of how identities are negotiated in the physical world.20,l While this fantasy scenario may seem far removed from physical-world experiences of stigma like sexism and racism on the job, such tensions exist and are common; for example, in the U.S., speakers of southern dialects of English have described needing to change their speech patterns to suitably impress an employer, and female entrepreneurs and politicians have described pressure to de-emphasize stereotypically feminine characteristics to be taken more seriously by those in positions of power. Such people have described getting past “gatekeepers” as an apt metaphor for their experience.
Future Work
The outcomes of the work we have described here have led to several new projects at the intersection of computing, sociocultural identity, and imaginative cognition. The projects further model dynamic relationships between virtual identities and sociocultural identity phenomena in the physical world. These projects aim to, respectively, use avatars to support public high school students from groups currently underrepresented in STEM fields in seeing themselves as powerful learners and doers of computer science and to excite them about the field; better understand culturally specific everyday uses of virtual identities in social media and videogames around the world; and create a virtual reality system that helps engender empathy in the midst of global conflict (a collaborative project directed by war photojournalist Karim Ben Khelifa).m
Technologies we use to imagine ourselves can be powerful media for social empowerment through critical thought and social awareness. For us, this is a more urgent dream. Like dreams of ubiquitous computing and AI, the most important aspect of the Avatar Dream is not whether it is achievable but that it pushes us to consider the limits and ethics of virtual identity technology development and propels us toward innovations that benefit society.
Conclusion
This article is a result of more than seven years of research toward our reimagined Avatar Dream wherein addressing social and cultural concerns is intrinsic to its realization. The Avatar Dream is not a panacea for social-identity problems; virtual identities are mere technical components of broader phenomena of human identities and the many concepts, artifacts, and interactions that produce them. We must move beyond questions of whether the Avatar Dream is achievable and also consider whether it would be good if achieved.
Still, we have thoughts regarding whether the Avatar Dream is indeed achievable. Answering first necessitates clarifying what it means to become someone or something else using a computer. Humans have great power of self-imagination. Yet the physical world we live in is rife with individual, social, and cultural histories that affect people’s capacities to determine their own identities. Such histories constrain our ability to directly understand the experience of others. As human-created artifacts, virtual identities reflect historical, social, and cultural constraints from the physical world. Achieving the Avatar Dream requires a better understanding of the relationships between the constraints imposed by our social-identity experience in the physical world and our potential for self-imagination in virtual worlds. Ignoring these constraints on our social identities results in both system-embedded and user-enacted box effects, rendering the Avatar Dream unachievable. While the existence of negative box effects has been forcefully argued in anecdotal terms, we have presented a method for empirically demonstrating their existence through computational modeling. If virtual identities are used to reinforce cognitive or structural constraints to the detriment of individuals, achieving the Avatar Dream would be harmful, even if possible.
A child growing up in poverty imagining herself as a future successful engineer—despite having never lived as one—is a powerful act of self-imagination. If she is discriminated against because she is deemed poor (or any other identity-related reason) and denied access to the resources to become an engineer, then structural constraints have limited her ability to take on a social identity she aspires to. If she believes that becoming an engineer is not achievable because of her socioeconomic status, then cognitive constraints based on her experience of social identity have limited her capacity to self-imagine. Our work using AI to analyze blended identities aims to reveal both structural constraints embedded in systems and cognitive constraints emerging from users. We seek to support individuals’ capacities to self-imagine in empowering ways while negotiating oppressive social constraints they face. At times, this may entail supporting users to imagine themselves as whomever they want to be; at other times, it entails supporting users in realizing and negotiating constraints rooted in the physical world. This is our reimagined Avatar Dream—a socially and culturally informed vision that would be good if achieved.
Acknowledgments
This material is based on work supported by the National Science Foundation Grant #1064495 and extended under NSF Grant #1542970 and a QCRI-CSAIL Collaboration. We thank the anonymous reviewers, as well as Dominic Kao and Pablo Ortiz, for their helpful feedback.
Figures
Figure 1. A blended-identity diagram; cross-space mappings reveal aligned characteristics of the physical and virtual identities (called “input spaces” in conceptual blending theory).
Figure 2. Initial racial attribute values in The Elder Scrolls IV: Oblivion; interesting discrepancies are highlighted between races (blue) and genders (red).
Figure 3. An archetypal analysis ternary-plot of statistical attribute allocations in Oblivion; note, at Archetype 3, the Male Bosmer (red) is behind the Female Bosmer marker (black).
Figure 4. A screenshot of the interface of Heroes of Elibca, a custom avatar-creation system implemented in AIRvatar.
Figure 5. Plot of how female players allocated statistical attributes of conventional role-playing games based on the gender of their avatar (female/male); note, the error bar for male avatars’ intelligence is zero because all were assigned a value of 4 on a 7-point scale.
Figure 6. The Chimeria platform architecture.
Figure 7. Screenshot from Chimeria:MusicNet.
Figure 8. Screenshots from Chimeria:Gatekeeper.
Figure. Watch the authors discuss their work in this exclusive Communications video. https://cacm.acm.org/videos/reimagining-the-avatar-dream