BLOG@CACM

Societal Impacts of Embodied AI

Will EAI enhance societal well-being, or will it exacerbate existing divides?

[Illustration: a walking robot among people in everyday life]

Embodied Artificial Intelligence (EAI) integrates AI into physical entities like robots, enabling autonomous perception, learning, and interaction with environments. Unlike earlier robotics, which focused on specific tasks, EAI fosters versatile robots capable of adapting to various situations. This discussion explores EAI’s potential societal impacts, considering its ability to reshape social structures and daily life [1].

EAI challenges existing legal frameworks, creating regulatory uncertainties as governments struggle to keep pace with technological advancements. Economically, while EAI may enhance productivity, it also risks job displacement and increasing inequality. Socially, limited access to EAI could deepen equity gaps, while over-reliance on these systems might disrupt human interactions and lead to social isolation. In education, there’s an urgent need to adapt curricula so humans can develop skills that complement AI, ensuring continued relevance in a rapidly evolving landscape. The key questions remain: Will EAI enhance societal well-being, or will it exacerbate existing divides? Are we ready to shape this future, or will we be forced to react to its consequences?

We cannot yet fully answer these questions, but we can explore EAI’s societal impacts to gain a better understanding of what lies ahead.

Existing laws are often inadequate for managing EAI-related issues like data privacy, where AI systems continuously collect and analyze personal information. The autonomous nature of EAI complicates the establishment of clear regulations, leading to a chaotic legal landscape [2]. Traditional frameworks struggle to assign liability for harm or errors caused by EAI systems, highlighting the need for new laws to ensure accountability and public safety [3]. On the policy front, nations are engaged in a race for technological supremacy, implementing policies to build a robust EAI ecosystem [4]. These efforts include investments in education and training programs, as well as incentives for startups to drive innovation.

EAI has the potential to boost economic growth by enhancing productivity and efficiency across sectors like manufacturing and logistics [5]. However, this comes with the risk of significant job displacement, particularly in manual and repetitive tasks. The rapid adoption of EAI may strain economies and societies, prompting some to advocate measures such as taxing robots to fund a Universal Basic Income (UBI) [6]. Yet recent research suggests that UBI alone may not deliver the broader social and economic benefits that come from employment, underscoring the importance of reskilling workers.

Socially, EAI threatens to deepen inequities by widening the digital divide, where access to advanced AI technologies could become a privilege of affluent communities. Marginalized groups may be excluded from the benefits of EAI, exacerbating existing disparities [7]. Additionally, the integration of EAI into daily life may disrupt social cohesion, as reliance on AI systems for routine tasks and emotional support could lead to a deterioration of communal bonds and increased social isolation. This “digital loneliness” might replace genuine human connections with artificial, superficial interactions.

As EAI advances, it is poised to surpass human capabilities in knowledge acquisition and task execution, challenging education systems to shift focus. Instead of traditional knowledge-based approaches, education must emphasize creativity, critical thinking, emotional intelligence, and ethical reasoning—areas where humans still hold an advantage. Human creativity, driven by personal experiences and emotions, generates truly novel ideas that EAI cannot replicate. Critical thinking, which involves questioning assumptions and considering multiple perspectives, remains a uniquely human strength [8]. Emotional intelligence, rooted in genuine social interactions, is beyond EAI’s capability to simulate fully [9]. Ethical reasoning, essential for navigating moral dilemmas, requires a nuanced understanding of societal values that EAI lacks.

Are we truly prepared for the ubiquitous integration of EAI into society? As we explore this critical question over the coming decades, we must address several pressing issues. First, can we achieve a global consensus on EAI regulation to ensure its smooth and ethical integration? Second, is the race towards EAI a zero-sum game, or can international collaboration yield greater benefits for all? Third, how can we balance the economic gains from EAI-driven productivity with the inevitable job displacement, and what policies are needed to ensure no one is left behind? Fourth, how do we guarantee equitable access to EAI technologies while preserving social cohesion and preventing the digital divide and “digital loneliness” from eroding community bonds? Fifth, how should education evolve to maintain human superiority in creativity, critical thinking, emotional intelligence, and ethical reasoning as EAI surpasses human capabilities in knowledge processing? Ultimately, we must ask: what is the core of humanity that will remain unthreatened by EAI?

References:

  1. Durkheim, E., 2018. The division of labor in society. In Social stratification (pp. 217-222). Routledge.
  2. Wu, W. and Liu, S., 2023. Dilemma of the artificial intelligence regulatory landscape. Communications of the ACM, 66(9), pp.28-31.
  3. Novelli, C., Taddeo, M. and Floridi, L., 2023. Accountability in artificial intelligence: what it is and how it works. AI & Society, pp.1-12.
  4. Biden, J.R., 2023. Executive order on the safe, secure, and trustworthy development and use of artificial intelligence.
  5. Liu, S., 2024. Shaping the Outlook for the Autonomy Economy. Communications of the ACM, 67(6), pp.10-12.
  6. West, D.M., 2015. What happens if robots take the jobs? The impact of emerging technologies on employment and public policy. Center for Technology Innovation at Brookings, Washington DC. 
  7. Jacobs, K.A., 2024. Digital loneliness—changes of social recognition through AI companions. Frontiers in Digital Health, 6, p.1281037.
  8. Boden, M.A., 2004. The creative mind: Myths and mechanisms. Routledge.
  9. Goleman, D., 1996. Emotional intelligence. Why it can matter more than IQ. Learning, 24(6), pp.49-50.
Shaoshan Liu, ACM U.S. Technology Policy Committee member

Shaoshan Liu is Director of Embodied AI at the Shenzhen Institute of Artificial Intelligence and Robotics for Society (AIRS). He is a member of the ACM U.S. Technology Policy Committee and a member of the U.S. National Academy of Public Administration’s Technology Leadership Panel Advisory Group. His educational background includes a Ph.D. in Computer Engineering from U.C. Irvine and a Master of Public Administration (MPA) from Harvard Kennedy School.
