
Managing Latency and Fairness in Networked Games

Fighting propagation delays in real-time interactive applications improves playability and fairness in networked games by trading off inconsistencies and tuning the decision-points topology.
  1. Introduction
  2. Playability and Fairness
  3. Tuning the Game Infrastructure
  4. Inconsistencies
  5. Decision Points Topology
  6. Conclusion
  7. References
  8. Authors
  9. Footnotes
  10. Figures
  11. Tables

Networked games can be seen as forerunners of all kinds of participatory entertainment applications delivered through the Internet. Physically dispersed players are immersed in a common virtual environment where they interact in real time. When a user performs an action, other users must be made aware of that action. Otherwise, there is a discrepancy in the perceptions of participants about the overall state of the virtual world. This discrepancy could lead to undesirable and sometimes paradoxical outcomes. In particular, first-person shooters and, to a lesser extent, role-playing games impose stringent constraints on responsiveness and consistency.

Consequently, deploying these applications over a large-scale infrastructure presents a significant technological challenge. For example, as the geographical distances among participants increase, the unavoidable propagation delays among them may render the game unresponsive and sluggish even when abundant processing and network resources are available. Moreover, differences in game responsiveness to user input may give some players an unfair advantage.

To limit the effect of these inevitable consequences of network architecture, most games are deployed as independent virtual worlds for localized areas and served by machines dimensioned for peak-hour demand. However, the true power of these applications is to enable people to work and play together irrespective of physical separation. Confining games to small localities is analogous to having a telephone network able to handle only local calls. Nevertheless, geographical scaling of networked games is nontrivial, involving much more than network connectivity and bandwidth.

Here, we describe the various factors influencing the quality of the game experience in terms of playability and fairness. In software, different methods for synchronization and lag compensation reduce the perceptual effect of latency. We also demonstrate that careful selection and organization of game servers can be of significant value in improving playability and fairness for all players.

At any given time, the virtual world of the game is fully described by a set of parameters called the game state. They include, but are not limited to, the positions and states of avatars and other in-game objects. Players perceive and react to the game through their terminals (networked computers and game consoles) that render the virtual environment based on game-state updates. Authoritative modifications of the shared game state are made by decision points located either on dedicated servers or on (a subset of) players’ machines.

Two main architectures are possible for making decisions about the game state: central and distributed decision points. Most of today’s networked games use a central-server architecture in which a single machine is the unique decision point. In distributed architectures, two or more decision points coexist, synchronizing their game states with one another. The need for synchronization in the distributed architecture introduces additional complexity, and if extra dedicated hardware is required, it adds to the running cost for the game provider. On the other hand, distributed architectures (such as mirrored servers and peer-to-peer games) provide more flexibility for load balancing and may improve the players’ experience if the decision points are located close to the players.


Playability and Fairness

Geographical distance between decision points and/or terminals may cause their respective states to be somewhat inconsistent with one another due to the latency involved in the transmission of information. Two classes of inconsistency can be identified: those between a terminal and its relevant decision point(s) and those between different decision points. The latter applies only to the distributed architecture in which there is more than one decision point.

Response time is an example of an inconsistency caused by the propagation delay between a terminal and its decision point. Response time represents the delay between the time of the issuance of an action order by a player and the rendering of the action results on the player’s terminal. A perceptible response time frustrates users and may make the game unplayable. A second example is the presentation inconsistency [10] due to the fact that the game-state update reaching a terminal is already outdated to some degree because the real game state may have varied while the update packet was on its way. Hence, what the player perceives is slightly inconsistent with the real game state at the decision point. This inconsistency may, for example, cause a player to see other avatars at incorrect locations.

The distributed architecture may help reduce response time by bringing decision points closer to the players. However, it also introduces inter-decision-point inconsistency, or discrepancies among decision points, with respect to the “current” game state. These discrepancies can cause some decision points to evaluate actions out of order, possibly violating causality and prompting incompatible decisions. A paradox is a decision made by an inconsistent server that is incompatible with the decision it would have made if it were consistent. Paradoxes arise only due to the discrepancy among decision points and cannot happen in the central-server model. If causality is to be maintained, paradoxical game states must be “healed” by rolling back to an earlier time point. This operation is called a rollback in time, or a Timewarp [5].
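Such a rollback can be sketched in a few lines. The Python fragment below is a minimal illustration (not the full Timewarp algorithm of [5]), assuming a game state represented as a dictionary, actions that are simple key/value assignments, and a snapshot taken at every executed action:

```python
import copy

class TimewarpServer:
    """Minimal Timewarp sketch: snapshots let a late-arriving action be
    inserted at its true logical time and every later action replayed."""

    def __init__(self, initial_state):
        self.state = copy.deepcopy(initial_state)
        self.snapshots = {0: copy.deepcopy(initial_state)}  # time -> state
        self.log = []   # (time, action) pairs, kept sorted by time
        self.now = 0

    def _apply(self, state, action):
        key, value = action        # hypothetical rule: assignment actions
        state[key] = value

    def execute(self, action, t):
        """Execute an action stamped with logical time t, rolling back
        to an earlier snapshot if t precedes already-processed actions."""
        if t < self.now:
            # Paradox: heal it by rolling back and replaying the log.
            base = max(s for s in self.snapshots if s <= t)
            self.state = copy.deepcopy(self.snapshots[base])
            self.log.append((t, action))
            self.log.sort(key=lambda e: e[0])
            for lt, la in self.log:
                if lt > base:
                    self._apply(self.state, la)
        else:
            self.log.append((t, action))
            self._apply(self.state, action)
            self.snapshots[t] = copy.deepcopy(self.state)
            self.now = t
```

A late action stamped t = 15 that arrives after t = 20 has been processed forces a rollback to the t = 10 snapshot and a replay of the log, restoring causal order at the cost of visibly revising the game state.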

A game can be considered playable if its users find its performance acceptable in terms of the perceptual effect of its inevitable inconsistencies. Whereas playability is a game attribute for each individual player, varying depending on the inconsistencies players experience from their terminals or decision points, fairness is a gamewide property concerned with relative playability among all participants [2]. That is, variations in playability among players may give an unfair advantage to some players over others. If these variations are significant, the game itself may be considered unfair. A fair game, on the other hand, gives all users the same level of handicap.

Aside from artistic design and originality, the quality of the online game experience depends on the network aspect of playability and fairness. This is why the management of network-related inconsistencies in the supporting game infrastructure is crucial when scaling to wide geographical areas.


Tuning the Game Infrastructure

The infrastructure supporting a networked game can be divided into a software component, including the synchronization scheme and lag-compensation techniques, and the hardware infrastructure, or topology of the game decision points over the underlying network platform. The other parameters influencing the player experience, namely the underlying network topology itself and the players’ locations, are not controllable. A game provider can manage playability and fairness at two levels:

  • Trading off inconsistencies within the software component. For a given topology of decision points, the network delay among entities is bounded. However, it may be possible to trade off one type of inconsistency with high perceptual impact for another with low impact. This may result in an overall improvement in game quality from a player’s point of view; and
  • Selecting the decision points topology. The latency constraints that depend on the location of the decision point can be altered to influence playability and fairness.

Inconsistencies

The artifacts of inconsistency (such as long response times and numbers of rollbacks) influence user perception in different ways. For example, in the first-person shooter Unreal Tournament, [8] concluded that a round-trip delay (response time) above 60 ms seriously disturbs the players’ experience. Likewise, it is reasonable to assume that rollbacks also degrade playability.

While the latency between terminals and decision points is bounded by the propagation delay constraints of the given topology, it may be possible to trade one type of inconsistency for another. For example, terminals can use co-simulation to anticipate the decisions made by the decision points. For actions originated by a player, this co-simulation is referred to as client-side prediction [1] and reduces the perceived response time. The state of other avatars may be anticipated through dead reckoning [7], or using knowledge about the previous values of a given parameter (such as location and direction of movement of other avatars), and the physics of the virtual universe.

In both client-side prediction and dead reckoning, if the predicted parameters are the same as the authoritative updates received from the decision point, then the perceived response time or presentation consistency is significantly improved. There is a probability, however, that the authoritative decisions would have to revoke the local predictions if they were incorrect. Such revocation may be perceived by the player as a local rollback. In essence, these techniques trade improved response time and presentation consistency for increased probability of revocation. This trade-off may or may not be appropriate depending on the context of the game and the perceptual effect of each inconsistency type. These lag-compensating techniques are concerned with hiding the effect of inconsistency between terminals and decision points and apply to both central and distributed topologies.
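Both techniques reduce to short computations on the terminal. The sketch below assumes a constant-velocity dead-reckoning model and a hypothetical distance threshold for deciding when an authoritative update must revoke the local prediction:

```python
import math

def dead_reckon(last_pos, velocity, dt):
    """Extrapolate a remote avatar's position from its last authoritative
    position and velocity (first-order dead reckoning)."""
    return tuple(p + v * dt for p, v in zip(last_pos, velocity))

def reconcile(predicted, authoritative, threshold=0.5):
    """Keep the local prediction if it landed close enough to the
    authoritative update; otherwise revoke it (a local rollback)."""
    error = math.dist(predicted, authoritative)
    return predicted if error <= threshold else authoritative
```

The `threshold` knob embodies the trade-off in the text: a large value hides small server corrections but lets positions drift further before they visibly snap back.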


The distributed architecture introduces inter-decision-point inconsistency and the possibility of paradoxes. A networked game developer may adopt a conservative synchronization scheme among the decision points, eliminating the probability of paradoxes altogether; examples include conservative local lag [5] and lock-step synchronization [4]. Such schemes would, however, affect a game’s responsiveness, negating some of the benefits of using distributed architecture in the first place.

Alternatively, a more optimistic synchronization scheme may be used. In it some level of inconsistency between decision points is allowed and may even be essential for healing a paradoxical game state through rollbacks. Once again, this approach trades off one type of inconsistency for another. Figure 1 outlines how inter-decision-point inconsistency can be traded off for an increase in response time by adding local lag, assuming there is no packet loss or jitter. Partial local lag can also be used to reduce (without fully eliminating) the duration of the inter-decision-point inconsistency. The longer this duration, the greater the probability of paradox. Hence, it might be worthwhile to set the local lag to achieve the optimal balance between the perceptual effect of response time and rollbacks.
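As a rough numeric illustration (assuming known one-way inter-server delays, no jitter, and no loss), full and partial local lag might be computed as follows; the `fraction` parameter is a hypothetical knob standing in for the balance the text describes:

```python
def conservative_local_lag(inter_server_delays_ms):
    """Full local lag: delay local execution until the sync message can
    reach every peer decision point, eliminating paradoxes."""
    return max(inter_server_delays_ms)

def partial_local_lag(inter_server_delays_ms, fraction):
    """Partial local lag: shorter response time, but each peer keeps a
    residual inconsistency window during which a paradox (and hence a
    rollback) remains possible."""
    lag = fraction * conservative_local_lag(inter_server_delays_ms)
    residual = [max(0.0, d - lag) for d in inter_server_delays_ms]
    return lag, residual
```

For inter-server delays of 20, 50, and 80 ms, a fraction of 0.5 yields a 40 ms local lag and residual inconsistency windows of 0, 10, and 40 ms toward the three peers.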

Each of the various parameters of the virtual world may represent a totally different in-game concept with its own consistency and synchronization requirements [3, 9]. As an example, in most online role-playing games, an error in an avatar’s position would not typically affect the actions of other participants due to the avatar’s limited acceleration and speed. Yet players want to see their avatars react quickly once they decide to move. On the other hand, a paradox on an avatar’s life state (dead or alive) may significantly hurt the game’s playability. Therefore, actions affecting an avatar’s position could use less local lag than actions affecting its life state. In general, it could be more effective to tailor the synchronization parameters for each action type rather than bind the whole game state to the same synchronization fate.

It is always possible to increase the level of inconsistency in a game by artificially delaying information. This technique enables the equalization of inconsistencies among players, effectively improving game fairness at the cost of overall playability.
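A minimal sketch of such equalization, assuming each player's one-way delay to the relevant decision point is known, simply pads everyone up to the worst delay:

```python
def equalize_delays(player_delays_ms):
    """Artificial delay to add per player so all players experience the
    worst player's delay: fairness improves, average playability drops."""
    worst = max(player_delays_ms.values())
    return {player: worst - d for player, d in player_delays_ms.items()}
```

For delays of 20, 35, and 50 ms, the padding is 30, 15, and 0 ms, respectively.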


Decision Points Topology

The physical topology of telecommunication networks is generally static. However, because access to a network of processing locations would provide a pool of possible decision points to choose from [6], the position of the unique decision point for a central-server model could be selected to best suit the current connected players. This selection could be based on a range of objectives, including optimizing average playability and global fairness, as well as the trade-offs among them.
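The two objectives can be contrasted with a small sketch; `d_ps` is a hypothetical lookup of one-way player-to-candidate delays, and response time is simplified to the round trip to the chosen server:

```python
def best_central_server(candidates, players, d_ps, objective="average"):
    """Select the single decision point that minimizes either the average
    response time (overall playability) or the critical, i.e. worst-case,
    response time (which also bounds unfairness)."""
    def score(server):
        rts = [2 * d_ps[p][server] for p in players]
        return sum(rts) / len(rts) if objective == "average" else max(rts)
    return min(candidates, key=score)
```

The two objectives can legitimately pick different servers for the same set of players, which is exactly the playability/fairness trade-off at stake.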

Alternatively, a distributed decision points architecture composed of a subset of carefully selected processing locations tuned with suitable synchronization parameters could provide an even better trade-off than a central-server solution.

In any game, the worst-off player in terms of playability could be a major contributor to the game’s average response time and unfairness. This player’s response time is denoted the critical response time. We have developed an iterative heuristic called “minimum critical response time growth” that converges toward a set of servers with close-to-optimal playability for the game’s worst-off player, providing a solution that balances overall playability and fairness. The synchronization scheme it considers is a conservative local lag (assuming no jitter or packet loss), implying that the response time is a good indicator of playability. An absolute lower bound for the critical response time can always be calculated, giving a measure of the quality of the obtained solution. After running this heuristic 100 times over a simulated Internet-like network topology consisting of 600 nodes with 48 randomly positioned players, we found that the average gap between the final solution and the lower bound is about 5% and that the final solution requires about 7.5 decision points on average.

Figure 2 is a representative instance of the iterative evolution of the heuristic solution in terms of critical response time compared to the lower bound and the critical response time of two other selection strategies:

  • Best central server in terms of average response time, optimizing overall game playability; and
  • Best central server in terms of critical response time, balancing playability and fairness.

At each step (see the table here), the heuristic finds the worst-off player and searches for the best new decision point to be added to the current server list that would reduce this person’s response time. The heuristic ends when further improvement in this critical response time is not possible. The critical response time of the heuristic, even for only six distributed decision points, is very close to the lower bound and significantly better than even the best central-server solution. A properly designed distributed architecture is thus likely to outperform the current central server models under a range of conditions.
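The greedy spirit of the heuristic can be sketched as follows, under strong simplifications: a player's response time is modeled as the round trip to its nearest selected server plus a conservative local lag covering that server's slowest sync link, and the search is seeded with a single server rather than the two-server solution described in the table. All names and the response-time model are illustrative assumptions:

```python
def response_time(player, servers, d_ps, d_ss):
    """Round trip to the nearest selected server, plus a conservative
    local lag covering synchronization with the other selected servers."""
    s = min(servers, key=lambda x: d_ps[player][x])
    lag = max((d_ss[s][t] for t in servers if t != s), default=0.0)
    return 2 * d_ps[player][s] + lag

def critical_rt(servers, players, d_ps, d_ss):
    """Response time of the worst-off player under the current selection."""
    return max(response_time(p, servers, d_ps, d_ss) for p in players)

def min_critical_rt_growth(players, candidates, d_ps, d_ss):
    """Greedily add the candidate server that most reduces the critical
    response time; stop when no addition improves it."""
    servers = [min(candidates,
                   key=lambda s: max(2 * d_ps[p][s] for p in players))]
    while True:
        best_server, best = None, critical_rt(servers, players, d_ps, d_ss)
        for s in candidates:
            if s in servers:
                continue
            c = critical_rt(servers + [s], players, d_ps, d_ss)
            if c < best:
                best_server, best = s, c
        if best_server is None:
            return servers
        servers.append(best_server)
```

Note that adding a server can lengthen the local lag of players already served (one more sync link to cover), which is what eventually halts the growth.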


Figure 3 describes the performance of the three decision-points topology-selection strategies in the same simulated network. The horizontal and vertical axes represent the levels of playability and fairness, respectively; the closer to the origin, the better. Each strategy’s solution in each of the 100 simulations is represented as a single point in the figure, and these points form distinct clouds. The fourth cloud (average central server) represents, for comparison, the expected playability and fairness of a randomly chosen central server.

The optimal-playability central-server solutions provide consistently low response time with a high level of unfairness. The outcome of the balanced central-server solution is more variable—sometimes close to best playability, at other times with inferior response time but improved fairness. The distributed solution from the heuristic is consistently better than the other two strategies in terms of fairness at the cost of a slight increase in response time compared to the optimal playability server. All these selection strategies offer considerable improvement over the expected playability and fairness of a randomly selected central server.

Conclusion

Trading off inconsistencies and tuning the decision points topology are the two available strategies for managing playability and fairness in online games. Ideally, both of them should be implemented in real time to constantly adapt to the dynamics of the game and to players’ connections and disconnections. Unfortunately, current software and hardware platforms provide little support for cost-effective deployment of these capabilities on a large scale. This lack of support is the reason most games use a fixed central-server approach in combination with some form of latency compensation, leaving considerable room for improvement in the future.



F1 Figure 1. Examples of synchronization in distributed server architecture. In optimistic synchronization, the sync message from Server 2 can create a paradox on Server 1 if it conflicts with Player A’s actions. The local lag compensates for inter-decision-point inconsistencies (assuming there is no jitter or packet loss).

F2 Figure 2. Iterative evolution of a typical heuristic convergence. The critical response time is improved each time a server is added to the game. The final solution outperforms any central-server approach, ending up close to the calculable lower bound.

F3 Figure 3. Playability and fairness of decision points selection strategies. The set of distributed servers chosen by the heuristic outperforms the best central-server approach in terms of fairness at a marginal cost in average response times.



UT1 Table. The heuristic starts from an initial two-server solution optimized for the two most distant players (from a network point of view). It is then expanded at each iteration to minimize the response time of the worst-off player by adding a new server. The process ends when no more improvement is possible.

References

    1. Bernier, W. Latency compensating methods in client/server in-game protocol design and optimization. In Proceedings of the Game Developer Conference (San Jose, CA, Mar. 20–24). CMP Media LLC, Manhasset, NY, 2001.

    2. Brun, J., Safaei, F., and Boustead, P. Fairness and playability in online multiplayer games. In Proceedings of the Second IEEE International Workshop on Networking Issues in Multimedia Entertainment at the Third IEEE Communications and Networking Conference (Las Vegas). IEEE Communications Society, New York, 2006, 1199–1203.

    3. Brun, J., Safaei, F., and Boustead, P. Distributing network games servers for improved geographical scalability. Telecommunication Journal of Australia 55, 2 (Autumn 2005), 23–32.

    4. Chen, B. and Maheswaran, M. A fair synchronization protocol with cheat proofing for decentralized online multiplayer games. In Proceedings of the Third IEEE Symposium on Network Computing and Applications (Cambridge, MA, Aug. 30–Sept. 1). IEEE Computer Society, Washington, D.C., 2004, 372–375.

    5. Mauve, M., Vogel, J., Hilt, V., and Effelsberg, W. Local-lag and timewarp: Providing consistency for replicated continuous applications. IEEE Transactions on Multimedia 6, 1 (Feb. 2004), 47–57.

    6. Nguyen, T., Safaei, F., Boustead, P., and Chou, C. Provisioning overlay distribution networks. Elsevier Computer Networks 49, 1 (Sept. 2005), 103–118.

    7. Pantel, L. and Wolf, L. On the suitability of dead reckoning schemes for games. In Proceedings of the First Workshop on Network and System Support for Games (Braunschweig, Germany, Apr. 16–17). ACM Press, New York, 2002, 79–84.

    8. Quax, P., Monsieurs, P., Wim, L., De Vleeschauwer, D., and Degrande, N. User experience: Objective and subjective evaluation of the influence of small amounts of delay and jitter on a recent first-person shooter game. In Proceedings of the Third Workshop on Network and System Support for Games (Portland, OR, Aug. 30). ACM Press, New York, 2004.

    9. Safaei, F., Boustead, P., Nguyen, C., Brun, J., and Dowlatshahi, M. Latency-driven distribution: Infrastructure needs of participatory entertainment applications. IEEE Communications Magazine 43, 5 (May 2005), 106–112.

    10. Vaghi, I., Greenhalgh, C., and Benford, S. Coping with inconsistency due to network delays in collaborative virtual environments. In Proceedings of the ACM Symposium on Virtual Reality Software and Technology (London, U.K., Dec. 20–22). ACM Press, New York, 1999, 42–49.

    This work is supported by the Telecommunications and Information Technology Research Institute of the University of Wollongong and the Smart Internet Technology Cooperative Research Centre, Sydney, Australia.
