IllumiRoom is a proof-of-concept system that surrounds a television with projected light, bringing video games and film experiences out of the TV screen and into the real world. IllumiRoom uses 3D scanning and projected light to change the appearance of the room, induce apparent motion, extend the field of view, and enable entirely new gaming experiences. Our system is entirely self-calibrating and is designed to work in any room. We present a detailed exploration of the design space of possible projected visualizations, and we demonstrate ways to trigger and drive these visualizations from gaming content. We also contribute specific feedback from two groups of target users (10 gamers and 15 game designers), providing insights for enhancing game and film experiences beyond the TV.
The television remains the focal point of living room entertainment today. While visual and audio quality has improved over the years, the content we watch (e.g., games, movies, TV shows) remains trapped inside our television screens. The resulting experiences are restricted by the physical size of the screen and ignore the surrounding physical environment.
In this paper, we propose a novel approach to enhance the viewing experience and blur the boundary between the on-screen content and the surrounding room (see Figure 1). We extend the visual experience outside of the television screen, using a projector that covers a wide area of the surrounding physical environment (e.g., your living room). Similar to Focus+Context displays,3 the television provides a traditional, high-resolution gaming experience, and the projector provides low-resolution information for the user’s peripheral vision. In contrast to previous work, we do not project onto a flat, white projection screen. Instead, we capture the appearance and geometry of the existing room (e.g., furniture, carpet, wallpaper) and use that information to create novel, interactive visual experiences.
In particular, our work investigates how projected visualizations in the periphery can negate, include or augment the physical environment, and thus enhance the content displayed on the television screen. We call such visualizations peripheral projected illusions. IllumiRoom uses these illusions to change the appearance of the room, induce apparent motion, extend the field of view (FOV), and enable entirely new virtual-physical gaming experiences. IllumiRoom is entirely self-calibrating, only requiring the user to point the projector at their TV, and it is designed to work in any room. The concept could be developed into a next generation game console with game content designed from the ground up, or it could be an “add-on” for existing consoles with gamers writing scripts to “mod” their favorite games. While our discussion focuses on interactive video games, the same or similar illusions could be used to enhance movies and television content.
2. Motivating Scenario
To illustrate the idea behind IllumiRoom, imagine sitting down in your living room to play a video game on your television. When the game starts, the room magically transforms to look like a cartoon. The colors of the room become supersaturated and cartoon edges appear on your bookshelves, matching the shading in the video game. You come across an enemy in the game, and suddenly a bullet flies toward your character and then out of the television. The enemy throws a grenade toward you. The grenade rolls out of the television, bounces off your coffee table, and explodes in your living room, throwing shrapnel across your bookshelf. The entire living room appears to shake as you take damage from the explosion. This is just one of many experiences made possible with IllumiRoom.
IllumiRoom uses a technology called projection mapping to bring gaming experiences out of the TV and into your living room. Normally, video projectors are used to display images on flat, white screens (e.g., PowerPoint presentations). Projection mapping points these same video projectors at non-flat objects, altering the appearance of everyday objects. Using animation and interactive rendering, everyday objects can be brought to life with projected light.7, 18–20
In academia, a number of projects have demonstrated the use of projectors to augment workspace environments,16, 18, 24 and even gaming experiences.4,11,24 Previous work has explored ways to adapt to non-white projection surfaces, through a process called radiometric compensation,19 for example, turning a curtain or a building facade into an almost ideal projection surface (e.g., Bimber et al.).5
In the last decade, projection mapping has exploded in popularity,12 being used for outdoor advertising on buildings, special effects in live venue entertainment, and in artistic installations. Currently, artists can create amazing effects by rendering a video that is specific to the scene and projector location. Inspired by these art pieces, IllumiRoom generates real-time, interactive illusions that automatically adapt to any room, any television and any game.
Large, high-resolution displays can be expensive. To create an immersive display at a lower cost, Baudisch et al. proposed Focus+Context (F+C) displays,3 which have a high-resolution computer monitor (focus) surrounded by a lower resolution projection screen (context). This display configuration matches the setup of the human visual system. Compared to the center of gaze, peripheral vision is highly sensitive to motion, but is relatively poor at distinguishing color and shape.21 This is due to the lower density of receptors and the changing distribution of receptor types outside the fovea. Peripheral vision plays an important role in the perception of motion10 and increases feelings of immersion.22 In IllumiRoom, the smaller display serves the high-resolution capability of the fovea, while the projector serves the low-resolution, motion sensitive periphery.
The stimulation of peripheral vision could alternatively be addressed by a single, very large display, an F+C screen,3 a CAVE setup,8,15 or even low-resolution LED arrays9 (e.g., Philips Ambilight TV).23 Along these lines, a recent project by the BBC explored F+C passive film experiences using a projector and a television,14 but did not have any 3D scene knowledge or interactivity. IllumiRoom advances these ideas with high-resolution projected content that is dynamically adapted to the appearance and geometry of the room. Rather than relying on separate projection screens, IllumiRoom adapts the projections to the user’s existing living room, and demonstrates a large range of possible peripheral projected illusions.
4. IllumiRoom System
Our vision for a fully developed IllumiRoom system includes an ultra-wide FOV device sitting on the user’s coffee table, projecting over a large area surrounding the television (see Figure 2a). The device would be connected wirelessly to a next generation gaming console as a secondary display. The ultra-wide FOV could be obtained with an ultra-short throw projector, or by coupling a standard projector with a spherical/parabolic mirror (as the peripheral content does not need a uniform pixel aspect ratio). The room geometry could be acquired with a depth sensor or a structured light scan (which would require only a web camera).2,5,11
Our current proof-of-concept prototype uses a commodity wide FOV projector (InFocus IN126ST) and a Microsoft Kinect for Windows sensor (see Figure 2b). The prototype is restricted by the FOV of the Kinect and projector (57° horizontal FOV). Therefore, the system is mounted above and behind the user’s head, as they are seated on a couch in front of the television.
The Kinect sensor captures the color and geometry of the scene, which is used during calibration and to render the peripheral projected illusions that respond to the living room furniture in a physically realistic manner (similar to the ideas in Benko et al.,4 Wilson et al.24). Careful calibration of the system is required in order for the illusions to tightly match the on-screen content and the physical environment. The calibration of the IllumiRoom system is fully automatic, determining the relative pose of the projector to the depth sensor and the position of the television in the projector image. Therefore, setup only requires that the projector and depth camera are placed such that they cover the area surrounding the TV.
The automatic calibration projects Gray code sequences2 to establish dense correspondences between the projector and the Kinect’s color camera. Using the default parameters for the Kinect, we transform these correspondences into 3D points. We then solve for the intrinsic and extrinsic parameters of the projector. As there is no depth data for specular surfaces, such as that of shiny, black televisions, we recover the position of the TV using a 2D homography, with virtual markers displayed in each corner of the screen. It is important to note that the calibration need only happen once. IllumiRoom could easily be extended to continuously capture the geometry, enabling ad hoc changes to the living room such as moving furniture and people.
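For illustration, the Gray-code correspondence step can be sketched in a few lines of Python (a simplified model; the function names and the one-dimensional column encoding are illustrative, not our prototype's implementation): each projector column is encoded across stripe bit planes, and each camera pixel decodes the bits it observes back into the column that illuminates it.

```python
def gray_encode(n: int) -> int:
    """Reflected Gray code of a binary index."""
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    """Invert the reflected Gray code back to a binary index."""
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def gray_patterns(width: int, n_bits: int):
    """One on/off stripe pattern per bit plane (most significant first).
    Projecting these in sequence lets every camera pixel read off, bit
    by bit, which projector column illuminates it."""
    return [[(gray_encode(x) >> b) & 1 for x in range(width)]
            for b in reversed(range(n_bits))]

def column_from_bits(bits):
    """Recover the projector column from the bits a pixel observed."""
    g = 0
    for bit in bits:
        g = (g << 1) | bit
    return gray_decode(g)
```

Reflected Gray codes change only one bit between adjacent columns, so a misread at a stripe boundary is off by at most one projector pixel, which is why they are preferred over plain binary codes for structured light.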
IllumiRoom projects on the existing living room furniture, and therefore cannot rely on the presence of a white, flat projection surface. Using radiometric compensation,5,19 the projected light may be modulated to achieve a desired color, given knowledge of the existing surface color and geometry. However, this process is limited by the brightness, dynamic range, and color primaries of the projector, as well as the existing surface color, material, and geometry. Accordingly, some desired surface colors are unachievable. For example, a saturated red object in our physical environment cannot be made to appear saturated green with our low-cost commodity projector.
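The core of the compensation can be sketched under a simple linear light-transport model (a deliberate simplification; real compensation pipelines work per pixel with calibrated response curves): the observed color is roughly the projected input scaled by the surface reflectance, plus ambient light, so the required input is the desired color minus ambient, divided by reflectance, clamped to what the projector can emit.

```python
def compensate(desired, reflectance, ambient):
    """Per-channel radiometric compensation under the linear model
    observed = input * reflectance + ambient.  Inputs that fall
    outside [0, 1] are clamped -- those target colors are physically
    unreachable on this surface with this projector."""
    out = []
    for d, r, a in zip(desired, reflectance, ambient):
        if r <= 0:  # surface reflects nothing in this channel
            out.append(0.0)
        else:
            out.append(min(1.0, max(0.0, (d - a) / r)))
    return out
```

The clamp is exactly the limit described above: for a saturated red surface (high red reflectance, near-zero green), the green channel saturates at full projector output yet still reflects almost no green light.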
Therefore, all of the illusions use the fact that they occur in peripheral vision. Illusions that may not seem realistic on close examination may be quite effective in the user’s peripheral vision. Most illusions do not attempt to accurately reproduce a scene, but aim to introduce motion in the periphery. Also, subtly modifying the existing surface color may be more effective than replacing the color entirely.
Illusions that modify the surface color and not the apparent geometry of the room are independent of viewing position, and thus can be viewed by multiple users from any position within the room. Illusions that add to or modify the geometry of the room are inherently dependent on viewing position and are most effective when viewed from a fixed point within the room. Generally, as the room’s furniture becomes more complex, effects which virtually modify the geometry are more sensitive to viewing position.
5. IllumiRoom Illusions
We envision the IllumiRoom system supporting a great variety of peripheral projected illusions, limited only by the imagination of game designers. We have prototyped illusions that we believe represent the primary dimensions of the design space (discussed in the next section). This represents an initial survey of the design space and is not exhaustive. We outline the techniques used to create the illusions, and describe the illusions that were implemented. The illusions are best understood by demonstration (see the video online at: http://youtu.be/L2w-XqW7bF4).
We present the illusions as distinct points in the design space. However, most of the illusions can be combined together, forming a rich and flexible palette for game designers.
The most obvious way to increase the user’s sense of immersion is to extend the content from the television screen out into the room, replacing the physical reality with the game’s reality. We call this F+C Full (see Figure 3a). This is essentially a Focus+Context display,3 but using a non-flat, non-white projection surface with radiometric compensation. As stated above, the ability to compensate for the existing surface color is limited, and so this effect relies on the fact that the user will directly attend to the television screen rather than the periphery.
The next illusion, F+C Edges, provides a maximum contrast representation of the peripheral content by displaying only the black-and-white edge information (Figure 3b). This illusion is more robust to ambient light in the room, and increases peripheral optical flow.
F+C Segmented extends the game only onto the rear wall surrounding the television (Figure 3c). While the furniture in the room may not be ideal for projection, most televisions are at least partially surrounded by a flat, light colored wall which provides an ideal, view independent projection surface. This rear wall could be found by a recursive RANSAC plane fitting procedure,17 seeded with the location of the television. Our prototype currently relies on a manually specified image mask identifying the rear wall.
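A minimal version of such a plane-fitting step might look as follows (a pure-Python RANSAC sketch over a small point cloud; as noted, our prototype does not implement this and the parameters are assumptions):

```python
import random

def fit_plane(p, q, r):
    """Plane through three points: unit normal n and offset d, with n.x = d."""
    u = [q[i] - p[i] for i in range(3)]
    v = [r[i] - p[i] for i in range(3)]
    n = [u[1]*v[2] - u[2]*v[1], u[2]*v[0] - u[0]*v[2], u[0]*v[1] - u[1]*v[0]]
    norm = sum(c * c for c in n) ** 0.5
    if norm == 0:  # degenerate (collinear) sample
        return None
    n = [c / norm for c in n]
    return n, sum(n[i] * p[i] for i in range(3))

def ransac_plane(points, iters=200, thresh=0.02, seed=0):
    """Find the dominant plane in a 3D point cloud -- e.g., the rear wall
    behind the TV -- by repeatedly fitting random triples and keeping the
    fit with the most inliers."""
    rng = random.Random(seed)
    best = (0, None)
    for _ in range(iters):
        cand = fit_plane(*rng.sample(points, 3))
        if cand is None:
            continue
        n, d = cand
        inliers = sum(1 for p in points
                      if abs(sum(n[i] * p[i] for i in range(3)) - d) < thresh)
        if inliers > best[0]:
            best = (inliers, (n, d))
    return best[1]
```

Running this recursively (remove the inliers, fit again) and seeding with the television's location would isolate the rear wall from the floor and other large planes.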
While the previous illusions bleed all game information into the physical environment, there are times when a more subtle approach may be appropriate. For example, when watching a horror movie, an atmosphere of immersion and suspense is created by the absence of the villain, and then their sudden appearance. When all information is present all the time, there is no opportunity for suspense or surprise. With F+C Selective only certain game elements escape the television (Figure 3d). For instance, a first-person shooter might bleed only weapons fire or explosions out of the television. Or, markers representing other characters or key items in the game may be selectively displayed, thus increasing their emphasis.
These illusions increase immersion, induce apparent motion, and provide additional information about the game content. The F+C illusions all require access to the video game’s rendering process.
If we do not have access to the rendering process, apparent motion may still be induced through peripheral visual flow, as long as we have information about the motion of the game’s camera. In Grid (Figure 4a), an infinite grid moves with the motion in the game. When the user moves or looks left or right, the grid moves in the opposite direction. When the user moves forwards or backwards, the grid zooms in or out.
Similarly, Starfield (Figure 4b) creates peripheral flow by moving 3D points to match the movement of the game’s camera. Points are randomly generated in the virtual space between the TV and the user, according to the position and orientation of the game’s camera.
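The per-frame update behind these peripheral-flow illusions can be sketched as follows (illustrative Python; the bounds and respawn rule are assumptions, not our prototype's exact behavior):

```python
import random

def update_starfield(points, cam_delta, bounds=2.0, rng=None):
    """Move the star points opposite to the game camera's translation
    (cam_delta = (dx, dy, dz)), so forward motion in the game makes the
    stars stream past the viewer.  Points that drift out of the volume
    around the TV respawn at a random position inside it."""
    rng = rng or random.Random()
    out = []
    for p in points:
        q = tuple(p[i] - cam_delta[i] for i in range(3))
        if any(abs(c) > bounds for c in q):
            q = tuple(rng.uniform(-bounds, bounds) for _ in range(3))
        out.append(q)
    return out
```

The Grid illusion is the same idea with a fixed lattice: the lattice offset is advanced by the negated camera delta, and forward motion becomes a zoom of the grid.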
The physical environment may be augmented to match the theme or mood of the game. For instance, the room colors may be super-saturated to resemble a cartoon by simply projecting the color of the surface back onto itself,6 and then drawing silhouette edges black (Figure 5). Similarly colors may be desaturated to create a black-and-white, “film noir” look. We leave more complex texture replacement techniques (i.e., procedural texturing) for future work.
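The color treatment can be sketched as a per-pixel HSV manipulation (illustrative only; the actual effect also draws silhouette edges, which this sketch omits):

```python
import colorsys

def cartoonify(rgb, boost=1.6):
    """Super-saturate a captured surface color for the cartoon
    Appearance illusion: reproject the color with its HSV saturation
    boosted, leaving hue and brightness unchanged."""
    h, s, v = colorsys.rgb_to_hsv(*rgb)
    return colorsys.hsv_to_rgb(h, min(1.0, s * boost), v)

def film_noir(rgb):
    """Desaturate to black-and-white for the 'film noir' look."""
    _, _, v = colorsys.rgb_to_hsv(*rgb)
    return (v, v, v)
```

Because the projector adds light to the surface's own color, the boosted color serves as the compensation target rather than being displayed directly.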
Imagine an explosion in a video game or film that is so powerful that it shakes your entire room. IllumiRoom can use projected light to modify the appearance of objects in the room, making them appear to move, change size, or distort in more extreme ways. Due to the projector’s limited ability to compensate for surface color, these illusions are most effective if the displacements are small and short-lived. Radial Wobble (Figure 6) distorts the apparent room geometry in a “force field” ripple effect. The illusion displaces the original surface texture in an expanding, radial sinusoidal wave.
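The displacement behind Radial Wobble can be sketched as a per-texel offset of the captured room texture (illustrative Python; the constants are assumptions chosen to keep displacements small, per the note above):

```python
import math

def radial_wobble(u, v, center, t, amplitude=0.01, wavelength=0.15, speed=1.2):
    """Offset a texture coordinate (u, v) along the radial direction
    from `center` by an expanding sinusoidal ripple at time t.
    Sampling the captured room texture at the displaced coordinate
    makes the room appear to shake; the small amplitude keeps the
    illusion plausible in peripheral vision."""
    dx, dy = u - center[0], v - center[1]
    r = math.hypot(dx, dy)
    if r == 0:
        return (u, v)
    phase = 2 * math.pi * (r - speed * t) / wavelength
    d = amplitude * math.sin(phase)
    return (u + d * dx / r, v + d * dy / r)
```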
With full control over the illumination, the lighting in the physical environment may be altered.19,20 Room lighting can change based on the mood or theme in the game. For example, the look of a scene set in space might be achieved by illuminating with point light sources and harsh shadows. The Lighting effect attempts to match the lighting conditions of the physical environment with the virtual environment (Figure 7). Our prototype implements soft shadows and a single point-light source, matching the closest point-light source in the game.
In F+C Selective elements from the game (e.g., bullets) break the screen barrier and fly into the physical environment; however, there are no physical interactions with the environment. Bounce envisions an element in the game leaving the virtual environment and interacting with physical reality. In our prototype, a grenade rolls out of the television, and then bounces and rolls around the physical environment according to a physics simulation using the room geometry (Figure 8a). From a design perspective, it is important to note that the designer does not know where the grenade will go after it leaves the TV, as it will adapt to the 3D geometry of the user’s room.
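A toy version of this simulation illustrates the idea (our prototype runs a physics simulation against the full captured room geometry; this sketch collapses that geometry to a hypothetical height-field function):

```python
def simulate_bounce(pos, vel, floor_height, dt=1/60, steps=240,
                    gravity=-9.8, restitution=0.5):
    """A grenade leaves the TV at `pos` with velocity `vel` and bounces
    on whatever surface the room scan puts beneath it.
    `floor_height(x, z)` stands in for the captured room geometry
    (coffee table, floor, ...).  Returns the projected path."""
    x, y, z = pos
    vx, vy, vz = vel
    path = []
    for _ in range(steps):
        vy += gravity * dt
        x, y, z = x + vx * dt, y + vy * dt, z + vz * dt
        h = floor_height(x, z)
        if y < h:  # hit the scanned surface: reflect and damp
            y = h
            vy = -vy * restitution
        path.append((x, y, z))
    return path
```

Because `floor_height` comes from the scan, the same grenade takes a different path in every living room, which is the design point noted above.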
In Snow, falling snow interacts with the physical environment, briefly collecting on surfaces in the room. Similar to Starfield, the snow moves according to the movement of the user in the virtual environment, allowing the user to walk or drive through snow (Figure 8b). Our prototype simulates the accumulation of snow by turning surfaces with an upward facing normal gradually white.
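The accumulation step can be sketched per surface patch (illustrative Python; the rates are assumptions, and the short-lived "melting" mentioned in the user study is the `melt` term):

```python
def accumulate_snow(surfaces, dt, rate=0.2, melt=0.05):
    """Advance the per-patch snow cover used by the Snow illusion.
    Each patch carries the upward component of its normal (n_up in
    [0, 1]) and a snow amount in [0, 1]; upward-facing patches whiten
    at `rate`, and all patches slowly melt so accumulation is brief."""
    out = []
    for n_up, snow in surfaces:
        snow = snow + rate * max(0.0, n_up) * dt - melt * dt
        out.append((n_up, min(1.0, max(0.0, snow))))
    return out
```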
6. Design Space
The range of possible peripheral projected illusions is as great as the range of visual effects in video games or film. In order to better understand the full range of illusions, we identify the primary dimensions of the design space. Illusions are defined by three primary factors: the goals of the illusion, the connection or separation from physical reality, and the level of abstraction from the game content.
Peripheral projected illusions can enhance gaming experiences in a variety of ways. All of the proposed illusions attempt to increase the user’s sense of immersion by increasing the FOV. All of the illusions increase the user’s sense of presence, making the user feel “in the game.” Additionally, the illusions can induce apparent motion, provide additional information and content, create a sense of atmosphere or theme, and support entirely new physical–virtual game mechanics. These goals are not mutually exclusive, and a single illusion may support multiple goals.
IllumiRoom enables designers to explore the entire spectrum of the reality-virtuality continuum,13 by negating, including, or augmenting the surrounding physical environment. Designers can negate the physical environment, making the living room disappear; thereby immersing the user in an entirely virtual environment (e.g., F+C Full, F+C Edges, Grid, Starfield). Or the game can be grounded in reality, by including the physical environment and selectively introducing virtual objects into physical reality (e.g., F+C Seg, F+C Sel, Bounce, Snow). Finally, IllumiRoom enables a unique middle ground, where the physical surroundings are augmented to match the virtual environment, creating a mixed reality where it is not clear what is real or virtual (e.g., Appearance, Lighting, Radial Wobble).
The illusions can be dependent or independent of the underlying game content. For instance, if we extend the exact game content outside of the TV, the illusion is entirely dependent on the game content (e.g., F+C Full, F+C Edges, F+C Seg, F+C Sel, Bounce). However, if we display an abstract visualization, such as a moving grid, then the peripheral illusion is independent of the game content (e.g., Grid, Starfield, Snow, Appearance, Lighting, and Radial Wobble).
The illusions can be connected with game content through a variety of means. Ideally, IllumiRoom would be directly integrated into a next generation console and new games would be designed for IllumiRoom from the ground up. We envision an API that enables triggering illusions, changing surface appearance, controlling room lighting, inserting objects into the physical environment, etc. In our prototype, to make the system work with numerous games, the IllumiRoom system is a separate component from the game examples, and networking messages are sent to share information.
If the source code of the game is available, then the game can be easily modified to trigger effects. Game events can trigger illusions to be toggled on and off via simple network messages. The position and orientation of the player in the virtual world can be sent to drive the Peripheral Flow illusions. The game can support F+C illusions by conducting a second rendering pass using a wider FOV. In our prototype this information is sent to the IllumiRoom system through a texture residing in shared memory.
While the source code may not be available for most games, many of the illusions can be controlled by other means. For example, the Peripheral Flow illusions need only some measurement of the player/camera movement. An approximation can be obtained by real-time image analysis of the rendered game (e.g., optical flow). In our prototype, any game’s image may be captured using a video capture card (DeckLink Intensity Pro) and subsequently analyzed with optical flow techniques to drive the illusions.
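Turning a flow field into camera motion can be sketched simply (an illustrative decomposition, not our prototype's exact analysis): the mean flow approximates camera pan, and the mean outward radial component approximates zoom, i.e., forward or backward movement.

```python
def camera_motion_from_flow(flow):
    """Estimate pan and zoom from a sparse optical-flow field.
    `flow` maps image positions (x, y), centered on the image middle,
    to flow vectors (dx, dy).  Mean flow approximates camera pan; mean
    outward (radial) flow approximates zoom (positive = moving in)."""
    n = len(flow)
    pan_x = sum(d[0] for d in flow.values()) / n
    pan_y = sum(d[1] for d in flow.values()) / n
    zoom = 0.0
    for (x, y), (dx, dy) in flow.items():
        r2 = x * x + y * y
        if r2 > 0:
            zoom += (x * dx + y * dy) / r2
    return pan_x, pan_y, zoom / n
```

These two signals are enough to drive Grid and Starfield without any access to the game itself.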
A significant amount of game state may be inferred solely from the user’s input through the controller. Given knowledge of the game and the controller mapping, controller input may be mapped directly onto illusions. For instance, if the game is a first-person shooter and the player presses the “Right Trigger” button on the controller, then the player must be shooting and the Radial Wobble illusion may be activated. In our prototype, input from the controller and vibrotactile output sent to the controller from the game can drive illusions. This works with any game. We envision users scripting and sharing “mods” of their favorite games to work with the IllumiRoom system.
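Such a "mod" script could be as simple as a lookup table from controller events to illusion triggers (hypothetical button names and illusion identifiers, shown only to illustrate the idea):

```python
# Hypothetical mapping for a first-person shooter, in the spirit of the
# shareable game "mods" described above; every name here is illustrative.
SHOOTER_MOD = {
    "RIGHT_TRIGGER": "radial_wobble",  # firing shakes the room
    "LEFT_TRIGGER":  "fc_selective",   # aiming highlights key items
    "RUMBLE":        "radial_wobble",  # vibrotactile output = taking a hit
}

def illusions_for_events(events, mapping=SHOOTER_MOD):
    """Translate one frame's controller events into illusion triggers,
    de-duplicated so an illusion fires at most once per frame."""
    return sorted({mapping[e] for e in events if e in mapping})
```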
For our prototype, games are run on the same PC as the IllumiRoom system. GlovePIE intercepts and sends game controller input to the IllumiRoom system by networking messages. Vibrotactile output at the controller is captured by using an Xbox controller modified with an Arduino.
8. User Evaluations
We evaluated the IllumiRoom concept through two user studies focused on end-user “gamers” and expert game designers respectively. In both user studies, we were primarily interested in determining user perceptions along different dimensions of the design space. Just as there is no “best visual effect” for film or video games, we were not expecting to identify a “best illusion.” Rather, we sought the strengths and weaknesses of each technique. We summarize the results here and refer the reader to the full paper for details.
We recruited 10 participants (ages 20–30, two females) who were familiar with playing first-person shooter games using an Xbox controller. Each user interacted with our 11 illusions, which we paired with matching game content to create complete immersive experiences. We used a variety of open source games: Red Eclipse, SuperTuxKart, Unity3D’s race car example, and a custom-built DirectX application for Bounce. See the video (http://youtu.be/L2w-XqW7bF4) for the complete experiences.
For each illusion, users first interacted with the game content only on the TV; then the illusion was activated. After viewing the illusions in a randomized order, participants rated and ranked them on various dimensions using a card sorting task: first on “overall satisfaction,” then in response to the prompts “It was fun,” “I felt sick,” “I felt like I was moving,” “I felt like I was in the game world,” and “The game and the physical world felt like separate spaces.”
Participants responded very positively to the illusions and the concept of IllumiRoom, see Figure 9. As expected, there was not one “winning” illusion, but a diverse set of illusions that have strengths on different dimensions of the design space. Participants found the F+C illusions some of the most useful for game play, because they naturally provide additional gaming information. For instance, one user commented that F+C Selective was “very useful for finding items, it kinda lets you see just pieces of the game that are important to know around you.”
However, the F+C illusions can also make the user overwhelmed or more prone to motion sickness. One user commented that with F+C Full, “There was just a lot going on. To see everything going on around you it was like really cool but especially in as fast-paced a game as that, it was just too much happening at once.” The F+C Selective and F+C Segmented illusions seem to represent a middle ground between immersion, adding content information and balancing user comfort. As one user said about F+C Selective, “Out of all of these that would be the most useful and immersive without being distracting.” Similarly, for Grid and Starfield, multiple users commented that they had a greater sense of motion, but missed the contextual information of the F+C illusions.
Appearance worked well at creating a sense of atmosphere for the game. As one user put it, “The appearance is awesome. Like that could just be an effect in someone’s living room all the time. That is just super cool.” Radial Wobble was another “magical” effect that “kinda made you feel like you are physically doing something in [the game].” However, some users said the effect was off-putting, because “all your stuff is shaking in the real world.” As one user stated, “It just seemed like one that you should save for really impactful moments. So I just wouldn’t want it all the time.”
Lighting created a better sense of immersion and presence for some users. “I think more so than any of the others, that one made me feel like I was in the game world.” For Bounce, multiple users referenced the grenade indicator in games like Infinity Ward’s Call of Duty, “Most of the time I don’t see the grenades at all … They always kill me, so that was a good one.” The Snow illusion created a sense of atmosphere and movement for users; however most users did not notice the physical interactions with the geometry. This is because the snow quickly “melted” after impact. One user commented that they “really wanted it to pile up more … if it was piling up it would be an indication of how long I was playing and that would be cool.”
In addition to gamers, we also elicited informal feedback about our illusions from game designers (N = 15). We showed the illusions to three groups, each composed of five game designers or game artists. Each participant had more than five years of experience in the game industry.
While individual game designers had different preferences, we extracted the commonalities among designers. Generally, the game designers thought that F+C Full was impressive at first, but “would probably not hold up to the long term play.” They thought the most promising illusions were ones that altered the mood (Appearance), altered the player’s sense of reality (Radial Wobble), or selectively showed game content without becoming overwhelming (F+C Selective, F+C Segmented, Bounce). They expressed concern over illusions that need a tight, finely tuned connection with the game (Lighting), as they were worried about destroying the illusion for the user if it was not perfectly matched to the game content. Therefore, they saw the merits of Snow and Starfield, which have much looser connections with the source content. They all suggested games where IllumiRoom could be applied (e.g., Forza, Skyrim, Halo, Portal) and enthusiastically suggested that they would like to have a setup like this at home.
While our discussion has focused on interactive video games, IllumiRoom can also extend the viewing experience of film and television. Panoramic video content can be displayed around the television using the F+C techniques (e.g., F+C Full). For example, Figure 10c shows a hockey game that we recorded, where the “focus” video is tightly held on the players, capturing their actions, while the “surround” video immerses the viewer with live panoramic views of the stadium and the fans at the game. Or in Figure 10d, we demonstrate a live fashion show where the “focus” video captures the models on the runway and the “surround” video features abstract visuals that match the theme of the outfits.
To begin experimenting with such video experiences, we built a custom dual-camera rig which consists of two standard 1080p video cameras: a narrow FOV “focus” camera and a wide FOV “surround” camera using a wide angle lens. Alternatively, IllumiRoom video content could be captured with an ultra-high-resolution (4K) camera, or traditional video content could be extrapolated (e.g., Aides et al.,1 Novy and Bove).15 Our experience with recording and watching such dual feed videos in IllumiRoom leads us to believe there are a whole new set of cinematic effects that are possible by creatively combining the two video feeds.
In this paper, we presented a proof-of-concept system for augmenting the physical environment surrounding a television to create a more magical, immersive experience. We presented 11 peripheral projected illusions that demonstrate the variety of the design space. We then elicited feedback from gamers and game designers about the illusions along dimensions of the design space. While we did not formally compare the experience of the IllumiRoom system with a standard TV setup, the very positive feedback of both user groups indicates that there is great promise in the approach.
We explored 11 points in the design space, but there are many peripheral projected illusions left to be explored. We only scratched the surface of what is possible with a shared physics representation between the virtual and physical environments (e.g., Bounce). Also, while we examined each illusion individually, future work should examine the combination of illusions and how this palette of illusions could be used by game designers. As some of our illusions distort reality, future work should explore how far users are willing to push their concept of reality, in their own living room.
Another promising direction is to further explore illusions specifically for video content (e.g., TV shows or full-length feature films). Can a grenade from the latest Bond film explode in your living room? How would such content be authored? It would be important to investigate how the movie director should deal with the fixed nature of a film and the randomness imbued by the real-time system adapting to the user’s living room.
Additionally, our methods for connecting content and illusions are not exhaustive. It should be possible to use other cues (e.g., audio) to trigger illusions. For instance, a loud explosion in a film could trigger a Radial Wobble illusion. Simple techniques such as these could enable users to “mod” existing films and games, and share the scripts.
Finally, before the IllumiRoom system can be in every living room, the final form factor, cost, and computational requirements of the system must be determined. While there are many unanswered questions about the future of gaming and film beyond the TV, we hope we have demonstrated that they are worth answering.
Figure 1. IllumiRoom is a proof-of-concept system that augments the physical environment surrounding a television to enhance interactive experiences. We explore the design space of projected visualizations that can negate, include, or augment the surrounding physical environment. (a) With a 3D scan of the physical environment we can (b) directly extend the FOV of the game, (c) selectively render scene elements, (d) augment the appearance of the physical environment, here as a cartoon. See the video online at: http://youtu.be/L2w-XqW7bF4.
Figure 2. Our vision for a productized IllumiRoom system includes an ultra-wide field of view device that sits on a coffee table (a) and projects content in the area surrounding the television. (b) Our current proof-of-concept prototype uses an off-the-shelf projector and a Kinect sensor.
Figure 3. Projecting wider field of view game content around the TV can increase immersion. (a) F+C Full: The full game content is projected with radiometric compensation onto the surrounding furniture. (b) F+C Edges: A high-contrast version where only the edges in the game content are projected. (c) F+C Segmented: Game content is projected only onto the flat, rear wall. (d) F+C Selective: Selective game content (e.g., bullets) bleed out of the TV.
Figure 4. Apparent motion may be induced without access to a game’s rendering by displaying abstract peripheral visual flow. (a) An infinite Grid moves with the game’s camera. (b) A 3D Starfield of spheres moves with the game’s camera.
Figure 5. Appearance: The appearance of the physical environment may be augmented (a), to match the theme of the game. (b) Cartoon super-saturated colors and silhouette edges. (c) Black-and-white (desaturated) colors.
Figure 8. Virtual objects can realistically interact with physical objects. (a) In Bounce a grenade rolls out of the TV and bounces around the living room. (b) In Snow, falling snow moves according to the user’s movement in the game and interacts with the physical environment.
Figure 9. Mean rating for each illusion evaluated in the user study. Participants rated “overall satisfaction” and responded to five questions on a five-point Likert scale from “strongly disagree” to “strongly agree.”
Figure 10. IllumiRoom can also be used for panoramic film and video. A custom dual-camera rig captures F+C video (a). The panoramic video content can be displayed on top of the existing living room furniture (b). Sports games can be directly extended, providing additional context (c). More abstract content can be used in the periphery, like the abstract visualizations surrounding a fashion runway video (d).