
Entertainment Robotics

Competing teams of autonomous robot soccer players illustrate the challenges, pleasures, and promise of developing collaborative multi-robot applications.
  1. Introduction
  2. Autonomous Robots
  3. Teams of Robots
  4. Conclusion
  5. References
  6. Author
  7. Figures

Artificial intelligence (AI) focuses on using computers to manipulate symbolic and numerical information to perform intelligent reasoning in ways similar to humans. Robotics investigates the feasibility of creating mechanical creatures—robots—that perform like humans in real-world physical environments. The two fields converge in pursuing this goal. Automated, preprogrammed robotic action has been developed extensively and successfully for industrial, inaccessible, and hazardous environments, including volcanos, the Arctic, outer space, and the deep ocean floor. But accelerating advances in computer power now lend enormous credibility to the notion that robots with humanlike AI can be fully developed. This expectation offers many new opportunities for people to interact and coexist with robots.

Recent efforts have sought to bring robots into our daily lives, including in the form of autonomous vehicles, museum tour guides, helpmates for the elderly, and robot competitions, particularly robot soccer. My remarks here about entertainment robotics are based on my involvement with my students in robot soccer research over the past six years.

The late Herbert A. Simon, a professor of computer science and psychology at Carnegie Mellon University and a founder of the AI field, concluded his lecture “Forecasting the Future or Shaping It?” at the October 2000 Earthware Symposium by saying: “Here around CMU, we have been amazed, amused, gratified, and instructed by the developments in robot soccer. For four years, and with rapidly increasing skill, computers have been playing a human game requiring skillful coordination of all the senses and motor capabilities of each player, as well as communication and coordination between players on each team, and strategic responses to the moves of the opposing team. We have seen in the soccer games an entire social drama played out with far less skill (thus far) than professional human soccer, but with all the important components of the latter clearly visible.

“Here we see, in a single example, a complex web of all the elements of intelligence and learning—interaction with the environment and social interaction, use of language—that AI has been exploring for half a century and a harbinger of its promise for continuing rapid development. Almost all of our hopes and concerns for the future can be examined in miniature in this setting, including our own role in relation to computers.” The lecture went on to forecast our interactions with computers and robots. But his impressions and assessment of robot soccer are the best introduction to entertainment robotics I know.

Robot soccer pioneered multi-robot research and entertainment. Until its development began in 1996, most robotics research focused on single-robot issues. Robot soccer presented a new horizon; teams of autonomous robots have to respond to a highly dynamic environment, including other teams of robots, to accomplish specific goals, such as getting the ball into the opponent’s goal. Moreover, choosing a popular game like soccer to explore such a rich research objective has apparently made a big difference. The entertainment component of robot soccer has been significant in attracting researchers, as well as crowds of spectators. For example, the fifth annual RoboCup International Competition, held for the first time in the U.S. last August in Seattle, included more than 500 participants, 200 robots, and thousands of spectators (see Figure 1). The ambitious official RoboCup motto is: “By the year 2050, develop a team of fully autonomous humanoid robots that can win against the human world soccer champions.” I now briefly illustrate the technical challenges faced by teams of autonomous robots dealing with such real-time dynamic tasks as robot soccer.

Autonomous Robots

To robot researchers, an autonomous robot is one capable of handling problems without help from an outside source, particularly a human. We humans are in general autonomous in our everyday lives, capable of surviving in our relatively unstructured environments. Autonomy includes three main capabilities:

  • Perception. The ability to recognize the surrounding environment, including the five human senses: vision, hearing, taste, smell, and touch.
  • Action. The ability to respond to perceived sensations, enabling one to change one’s own state or the state of the environment; many actions are available to autonomous creatures, possibly in an infinite number in some continuous space; common actions include all sorts of motion and manipulation.
  • Cognition. The ability to reason, including selecting from among the actions that are possible in response to sensations; reasoning is a complex process that can include the ability to experiment and learn from feedback from the effects of the actions selected.

Research in robotics has a very long way to go to actually achieve the level of perception, action, and cognition we humans demonstrate in our everyday lives. But the research is advancing in that direction. Inherent in this advancement is the fact that robots will be part of our lives and in particular will be able to coexist with us in entertainment tasks. Indeed, the fact that scientific and technology advances are contributing to the development of autonomous robots with perception, action, and cognition similar to our own motivates us to use our discoveries well and learn to coexist with robots.

Teams of Robots

In robot soccer, the robots face a highly dynamic and uncertain environment in which they have to achieve clear goals, like advancing the ball toward the opponent’s goal. Robot soccer teams need to integrate perception, action, and cognition effectively in real time. Each robot lives in a continuous cycle: it perceives the world, decides what to do, and acts. One of the main challenges in developing such integration is providing robots with the ability to close this autonomy cycle, so they perceive the environment, decide which actions to take, execute them in the world, and perceive again, without interruption.
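The autonomy cycle can be sketched as a simple loop. The `Robot` class and its method names here are hypothetical stand-ins for whatever sensing and actuation interface a real platform provides:

```python
import time

class Robot:
    """Minimal stand-in for a robot's sensing and actuation interface.
    (Hypothetical names; a real platform exposes its own API.)"""
    def __init__(self, cycles):
        self.cycles = cycles   # how many loop iterations to run
        self.log = []          # record of actions taken

    def active(self):
        return self.cycles > 0

    def perceive(self):
        return {"ball_seen": True}   # stubbed sensor reading

    def decide(self, state):
        return "approach_ball" if state["ball_seen"] else "search"

    def act(self, action):
        self.log.append(action)      # stand-in for motor commands
        self.cycles -= 1

def autonomy_cycle(robot, period_s=0.033):
    """Close the loop: perceive, decide, act, repeat (roughly 30 Hz)."""
    while robot.active():
        state = robot.perceive()
        action = robot.decide(state)
        robot.act(action)
        time.sleep(period_s)
```

The loop never blocks waiting for a perfect world model; it runs at a fixed pace, which is one concrete reading of "closing" the cycle.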

My students and I have been developing several different teams of autonomous robot soccer players, each reflecting a variety of challenges of perception, action, and cognition. I briefly describe two of them in the following paragraphs to illustrate the concrete challenges of multi-robot entertainment.

Multi-robot soccer teams. Robot perception is one of the main bottlenecks. Robots need to be equipped with sensors from which they can accurately and reliably infer the state of the world. Figure 2 shows small wheeled soccer-playing robots [3, 4]. Each team designs and builds its own robots under specific size constraints. The robots play with an orange golf ball on a field approximately the size of a ping-pong table. Each robot team is allowed to hang a vision camera over the playing field; processing its images in real time is itself a significant perception challenge, but it gives each robot on each team a global view of the field of play. The images can also be sent to an offboard computer that remotely controls each robot’s motion, usually via radio. Interestingly, because each robot has a complete view of the positions of all its teammates and opponents, it can effectively use this information to strategically collaborate with other team members.
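As an illustration of the global-vision idea, the sketch below finds the centroid of “orange” pixels in an image, a toy stand-in for real-time color segmentation systems such as the one in [1]. The image representation and the color test are assumptions for the example, not the actual system:

```python
def find_ball(image, is_orange):
    """Return the centroid (row, col) of orange pixels, or None.

    `image` is a 2D grid (list of rows) of (r, g, b) tuples;
    `is_orange` is a color classifier, standing in for a calibrated
    color threshold in a real vision system.
    """
    row_sum, col_sum, count = 0, 0, 0
    for r, row in enumerate(image):
        for c, pixel in enumerate(row):
            if is_orange(pixel):
                row_sum += r
                col_sum += c
                count += 1
    if count == 0:
        return None            # ball not visible in this frame
    return row_sum / count, col_sum / count
```

A real system must do this (and much more, for every tracked object) at full frame rate, which is why fast segmentation is a research topic in its own right.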

This scenario may seem far removed from human sport, as humans cannot see in all directions. But humans can share information through communication and collaboratively build complete information about the relevant world. Within this framework, these robots cannot control a ball as humans can, nor can they yet devise teamwork of comparable strategic depth.

We have also developed teams of fully autonomous legged robots with onboard vision and computational power. Figure 3 shows the legged robots we use—programmable versions of the Aibo designed and built by Sony Corp. We have used them as a hardware platform since Sony introduced its first version in 1998 [5, 6]. For each robot’s onboard processor, we have developed algorithms for image processing, localization, and control. None of the robots is remotely controlled in any way, and no communication is possible with human controllers or with other robots in this multi-robot system. The only state information available for each robot’s decision making comes from its own onboard color vision camera and from sensors reporting on the state of the robot’s body. The vision algorithm is crucial, as it provides the perception information as the observable state. Our vision system robustly computes the distance and angle from the robot to the objects it sees and assigns confidence values to its state identifications [1].
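To give a rough sense of how distance and angle can be recovered from a single camera: assuming a pinhole-camera model and an object of known size (a simplification of what an onboard vision system actually does), apparent size gives range and horizontal offset gives bearing. All names and parameters here are illustrative:

```python
def range_and_bearing(pixel_x, pixel_height, image_width,
                      hfov_rad, object_height_m, focal_px):
    """Estimate distance (m) and bearing (rad) to a known-size object.

    Pinhole model: an object of real height H at distance d appears
    h_px = focal_px * H / d pixels tall, so d = focal_px * H / h_px.
    Bearing is taken from the object's horizontal offset relative to
    the image center, scaled by the horizontal field of view.
    """
    distance = focal_px * object_height_m / pixel_height
    bearing = (pixel_x - image_width / 2) / image_width * hfov_rad
    return distance, bearing
```

A real system would also fold lens distortion, head pose, and noise models into this computation, and attach a confidence value to the result.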

The preconditions of several behaviors for each robot require knowledge of the robot’s position on the field. The localization algorithm processes the visual information from the field’s fixed colored landmarks and outputs an (x,y) location for the robot. Interestingly, the fact that these little robots operate in a highly dynamic and adversarial environment opened a completely new avenue of research in probabilistic localization. Previous algorithms assumed the only factor that could modify a robot’s position was its own motion, as robots were heavy and their environments were stationary with respect to the robot’s motion. A robot’s probabilistic position estimate was updated based on its own motion and adjusted by input from its sensors. With small robots playing a game against other robots, each one can be pushed, fall down, and even be “teleported” out of its current position into a penalty position by a referee following a foul call. The classic grid-based and point-based probabilistic localization algorithms cannot handle such situations effectively, as they update their pose (position) belief very conservatively. The real-time and adversarial aspects of robot soccer helped prompt our development of new localization algorithms that trust and use the robot’s sensors in a variety of ways, including a new sensor-resetting localization algorithm [2] that performs a nonlinear reset of the pose belief based on strong sensor readings.
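The sensor-resetting idea of [2] can be caricatured in a few lines: run the usual motion and sensor updates of a particle filter, and when no hypothesis explains the sensor readings well (the robot may have been pushed or teleported), inject particles sampled directly from the sensors. This is only a schematic sketch; the interfaces and the reset rule are assumptions, not the published algorithm:

```python
import random

def sensor_resetting_step(particles, move, sense_likelihood,
                          sample_from_sensors, reset_threshold=0.1):
    """One update of a (much simplified) sensor-resetting localizer.

    particles: list of pose hypotheses, e.g., (x, y) tuples
    move(p): applies the odometry estimate to a particle
    sense_likelihood(p): how well pose p explains the sensors, in [0, 1]
    sample_from_sensors(): a pose drawn directly from sensor readings
    """
    # 1. Motion update: shift every hypothesis by the estimated motion.
    moved = [move(p) for p in particles]
    # 2. Sensor update: weight each hypothesis by the sensor model.
    weights = [sense_likelihood(p) for p in moved]
    avg = sum(weights) / len(weights)
    # 3. Sensor resetting: the lower the average likelihood, the more
    #    particles are replaced by poses sampled from the sensors.
    n_reset = int(len(moved) * max(0.0, 1.0 - avg / reset_threshold))
    keep = len(moved) - n_reset
    resampled = (random.choices(moved, weights=weights, k=keep)
                 if avg > 0 else [])
    fresh = [sample_from_sensors()
             for _ in range(len(moved) - len(resampled))]
    return resampled + fresh
```

When the belief is consistent with the sensors, this degenerates into ordinary resampling; when it is not, the nonlinear reset lets the filter recover immediately instead of drifting back over many conservative updates.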

Finally, our behavior-based planning approach gives each robot the ability to control itself differently as a function of the accuracy of its knowledge of the world [7]. For example, a robot always approaches the ball when it sees it, aiming either directly at the opposing goal or in some other direction, depending on whether it knows its own location with high or low certainty. When near the ball, if it does not reliably know the position of the opposing goal, the robot circles the ball until it sees the goal and can align itself. We learned that robots playing a game cannot afford to stop and do nothing; the game goes on at a speed that requires rather proactive behaviors.
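The ball-approach behavior described above might be distilled into a rule set like the following. The state variables, action names, and threshold are hypothetical, chosen for the example rather than taken from the actual controller of [7]:

```python
def choose_behavior(sees_ball, near_ball, sees_goal, pose_certainty,
                    certainty_threshold=0.7):
    """Pick an action from coarse state; note there is no 'stand still'.

    pose_certainty in [0, 1] reflects how well the robot trusts its
    own localization estimate.
    """
    if not sees_ball:
        return "search_for_ball"
    if not near_ball:
        return "approach_ball"        # always go to a visible ball
    if sees_goal or pose_certainty >= certainty_threshold:
        return "kick_toward_goal"     # confident about goal direction
    return "circle_ball"              # orbit until the goal is seen
```

The key property is that every branch returns an action; uncertainty changes *which* action is taken, never whether one is taken.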

All the teams in the RoboCup legged-robot league use the same Sony hardware platform, creating a very interesting AI research problem: since all the robots have in principle the same low-level perception and motion capabilities, any differences in performance should mainly reflect differences in cognition. This is, however, not the case. Although the teams do differ in cognition, programming the robots to fully exploit their identical hardware remains a challenge in itself, with the result that some robots move faster or see better than others. Robotics researchers focus on different research directions, leading to robots that vary in performance even though they have the same physical components. This variation is similar to how we all handle the limits of our physical and cognitive abilities in different ways, achieving different results for similar tasks.

Conclusion

Robot soccer illustrates the challenges of building complete autonomous robots able to perform active perception and sensor-based planning while playing a multi-robot game. The games are not only a source of entertainment but a great source of advances in robotics research. I am confident in extrapolating that further advances in entertainment robotics will continue to serve this twofold goal.



Sony Aibo robots (2001 research version) competing in the RoboCup-2001 soccer tournament.

F1 Figure 1. Researchers and robots participating in RoboCup-2001, Seattle, August 2001.

F2 Figure 2. Carnegie Mellon soccer robots (designed by Brett Browning).

F3 Figure 3. Sony-built Aibos are programmed by Veloso’s students to play soccer in teams of three fully autonomous robots.


    1. Bruce, J., Balch, T., and Veloso, M. Fast and inexpensive color image segmentation for interactive robots. In Proceedings of 2000 International Conference on Intelligent Robots and Systems (Japan, Oct. 2000).

    2. Lenser, S. and Veloso, M. Sensor resetting localization for poorly modeled mobile robots. In Proceedings of ICRA-2000, the International Conference on Robotics and Automation (Apr. 2000).

    3. Veloso, M., Bowling, M., Achim, S., Han, K., and Stone, P. The CMUnited-98 champion small robot team. In RoboCup-98: Robot Soccer World Cup II, M. Asada and H. Kitano, Eds., Springer Verlag, Berlin, 1999, 77–92.

    4. Veloso, M., Stone, P., and Han, K. CMUnited-97: RoboCup-97 small-robot world champion team. AI Mag. 19, 3 (1998), 61–69.

    5. Veloso, M., Uther, W., and Fujita, M. Playing soccer with legged robots. In Proceedings of the 1998 International Conference on Intelligent Robots and Systems (Victoria, Canada, Oct. 1998).

    6. Veloso, M., Winner, E., Lenser, S., Bruce, J., and Balch, T. Vision-servoed localization and behavior-based planning for an autonomous quadruped legged robot. In Proceedings of the 5th International Conference on Artificial Intelligence Planning Systems (Breckenridge, CO, Apr. 2000), 387–394.

    7. Winner, E. and Veloso, M. Multi-fidelity behaviors: Acting with variable state information. In Proceedings of the 17th National Conference on Artificial Intelligence (Austin, TX, Aug. 2000).
