Opinion

Thinking of Algorithms as Institutions

A route to democratic digital governance.

There is increasing concern about the influence algorithms exert in modern societies and the potential threats they pose to democracy. Some algorithms are considered high risk because they have significant public consequences for people’s lives and fundamental rights. To exemplify the potential impacts of algorithms in everyday life, we elaborate on the case of algorithmic recommenders: systems aimed at generating meaningful recommendations for content or products that might interest a given set of users. They shed light on ideas, products, and resources, sorting them and creating cognitive shortcuts for those who need to make decisions. In doing so, they frame ideas, shape decisions, and influence people by creating action situations, which explains why they have become a key component of contemporary problems such as the diffusion of disinformation and hate speech during political elections, pandemics, and wars.4,10

Embedded in global digital platforms, algorithmic systems have strained democracy in numerous countries, a strain that arises from the disconnection between democratic processes and technical control within a public sphere increasingly governed by code.7 Algorithmic recommenders connect human preferences with the choices people make, determining what becomes the object of human attention, rationalization, and agency. Put another way, when recommending something, algorithmic systems frame the object of attention and organize the contexts in which human action happens. If the content suggested by recommenders is selected for its ability to capture attention, it should not be surprising that these systems radically change processes of knowledge formation by promoting provocative, false, and outrageous content. Thus, some essential questions emerge: What information should be available regarding the operations of these algorithms? How can society advocate for the design of algorithms to align with democratic principles? What democratic values should be integrated into the design and development of algorithms, and what protocols can support this from a policy perspective?
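To make the attention-capture mechanism concrete before turning to those questions, consider a minimal sketch in Python. The items, scores, and weighting below are entirely hypothetical and do not represent any real platform’s ranking function; the point is only that the choice of ranking rule, engagement alone versus engagement tempered by a credibility signal, is itself an institutional decision with public consequences.

```python
# Illustrative sketch only: hypothetical items and hand-picked scores,
# not any real recommender's model or data.
from dataclasses import dataclass

@dataclass
class Item:
    title: str
    predicted_engagement: float  # model's guess at clicks/shares, in [0, 1]
    credibility: float           # hypothetical trustworthiness signal, in [0, 1]

CANDIDATES = [
    Item("Outraged take on the election results", 0.92, 0.30),
    Item("Fact-checked explainer on vote counting", 0.55, 0.95),
    Item("Sensational health rumor", 0.88, 0.10),
    Item("Local council meeting summary", 0.40, 0.90),
]

def rank_by_engagement(items):
    """Pure attention optimization: whatever is predicted to capture clicks wins."""
    return sorted(items, key=lambda it: it.predicted_engagement, reverse=True)

def rank_with_credibility(items, weight=0.5):
    """Same items, but the score also rewards credibility (one possible corrective)."""
    def score(it):
        return (1 - weight) * it.predicted_engagement + weight * it.credibility
    return sorted(items, key=score, reverse=True)

if __name__ == "__main__":
    print("Engagement-only feed:")
    for it in rank_by_engagement(CANDIDATES):
        print("  " + it.title)
    print("Engagement plus credibility feed:")
    for it in rank_with_credibility(CANDIDATES):
        print("  " + it.title)
```

Under the first rule the outrage-driven items top the feed; under the second, the fact-checked explainer does. Nothing about the code forces either outcome; the ranking rule embodies a value choice.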

Drawing from the concepts and text presented in our book Algorithmic Institutionalism, this Opinion column summarizes one of its central arguments.5 It also delves into the assertion that, akin to the democratization of other complex institutions in history, it is imperative to consider how algorithmic systems can be democratized to mitigate the risks some of them pose to modern societies.

Algorithms as Institutions

Algorithms are not just lines of code in systems. They are architectures that organize complex systems of interaction involving machines and humans. In doing so, they function like other institutions, which are conceived of as human artifacts: formal or informal norms and rules, related to collective decisions, that frame the behavior of actors in diverse situations. And like other institutions, they operate in multilayered institutional contexts with diverse levels of complexity, in assemblage-like structures. By this, we mean that institutions are nested in different layers: platforms, for instance, can be seen as broader institutions encompassing more restricted algorithmic institutions, nested structures in which contexts of interaction are framed and expected behavior is scripted. Governance for algorithmic decision-making systems can be implemented at various layers. Each layer acts as its own institution, requiring a tailored blend of approaches suited to its specific characteristics.

Institutions are present in different places in society. Families, for instance, constitute an institution, upheld by rules and roles for each of their members. Governments define rules that structure complex economic policies that impact individuals and corporations. Political institutions such as the electoral system delimit the behavior of voters and frame electoral choices. Institutions, while stable, are dynamic entities embedded in social life, shaping behaviors, strategies, and ideas. They often emerge from attempts to solve problems of collective action and establish forms of decision making. The premise of the concept of institution is that humans act based on socially created rules, which establish meanings for their action and structure social relationships and collective action. In other words, an institution is at the same time a factor of meaning for human action and a factor that structures power relations. Because algorithms are rules that are embedded in complex systems, they can create meaning for human agency. Algorithmic systems co-evolve with human agency in different contexts.

Algorithms, like other institutions, are human-created systems often designed to make decisions and solve problems. When we think of algorithms as institutions, we understand they have a specific type of capacity, which is to structure human behavior by affecting the scope of our choices. Human agency depends on how we interpret codes of conduct that are organized in different everyday situations. Algorithms are emerging institutions of contemporary society because they organize these codes of conduct for human action in various situations, in both the public and private spheres. That is, algorithms have themselves become institutions (or layers of complex institutional settings). For instance, more than half of U.S. adults (54%) say they at least sometimes get news on digital platforms run by algorithmic systems.a

A study of active adult Facebook users in the U.S., confirmed by other studies that followed, reveals that most of the content seen on the platform comes from sources that align with users’ preferences.6

In everyday life, we are not aware of how institutions relate to the ways in which we think, act, desire and, ultimately, are. Algorithms also represent power relations: not only the explicitly hierarchical kind, but mainly the subtle, pervasive, and omnipresent kind exercised through invisible networks of algorithmic systems that affect processes of subjectivity formation. Like other institutions, algorithmic systems allocate resources and power. Many countries employ algorithmic systems for immigration and asylum decisions, using tools such as facial and dialect recognition in the asylum process.2

Thinking about algorithms as institutions requires understanding the social values inscribed in them and the way they intervene, as well as the way they are transformed by the effective action of humans in the world. Like political institutions, algorithms must be related to broad principles that structure collective action, principles that can change over time. Algorithms are not ahistorical logical chains, but inscriptions of social relations in decision-making patterns. These inscriptions become very visible when algorithms, using artificial intelligence models, seek to learn from the past to act in the present and project something into the future. For example, the debate about racial biases in machine-learning software arises because these systems reproduce a racist structure that categorizes people based on racially biased data.7 Algorithms reproduce a social structure based on unequal power relations, thus reinforcing structural forms of domination.9
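A stylized numerical sketch, using synthetic data rather than any real system’s records, shows how this reproduction works: a model that simply learns approval rates from a historically biased decision record carries the same disparity into its own predictions.

```python
# Illustrative sketch only: synthetic "historical" decisions encode a group disparity,
# and a trivial model fit to them reproduces that disparity.
from collections import defaultdict

# (group, qualified, approved) triples from a hypothetical biased decision history:
# equally qualified applicants in group B were approved less often than in group A.
HISTORY = [
    ("A", True, True), ("A", True, True), ("A", True, True), ("A", False, False),
    ("B", True, True), ("B", True, False), ("B", True, False), ("B", False, False),
]

def fit_group_rates(history):
    """'Learn' the historical approval rate per group among qualified applicants."""
    counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
    for group, qualified, approved in history:
        if qualified:
            counts[group][1] += 1
            counts[group][0] += int(approved)
    return {g: approved / total for g, (approved, total) in counts.items()}

def predict(rates, group, qualified, threshold=0.5):
    """Approve a qualified applicant only if their group's historical rate clears the threshold."""
    return qualified and rates[group] >= threshold

if __name__ == "__main__":
    rates = fit_group_rates(HISTORY)
    print("Learned approval rates:", rates)                            # A: 1.0, B: about 0.33
    print("Qualified applicant, group A:", predict(rates, "A", True))  # True
    print("Qualified applicant, group B:", predict(rates, "B", True))  # False
```

Nothing in the code “intends” to discriminate; it merely mirrors the pattern inscribed in past decisions, which is precisely the sense in which algorithms reproduce, rather than invent, unequal social structures.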

Democratization of Algorithmic Institutions

We understand algorithms are a central dimension of contemporary societies and are here to stay. There is no way back to a pre-algorithmic world. However, in the same way that other institutions have been democratized throughout time, we believe that democracies now require—and depend on—the democratization of algorithmic institutions.

As institutions, algorithms exert collective influence on society. By occupying a central role in collective decision making, they offer means to rationalize societal structures, alter decision-making norms, and reshape the organization of human existence. The widespread integration of algorithms in areas such as work dynamics, electoral procedures, interpersonal relationships, and public policy choices exemplifies their transformative impact on the social fabric, giving rise to a coevolving algorithmic political framework with human agency.

Looking at algorithmic systems, and conceiving of them as institutions, we can also imagine a movement towards their democratization. These institutions are novel, they are confusing and opaque, they have ambivalent and unintended consequences, and they are pervaded by deep power asymmetries and games. The critical juncture generated by the public problematization of the political consequences of algorithms in many instances provides a window of opportunity to promote change and to bring the design, deployment, and operation of algorithmic systems closer to democratic values. To foster this shift, our argument proceeds in two steps. The first is that this change requires a discussion about the legitimacy of algorithmic decision making. The democratization of algorithmic institutions is intertwined with discussions on establishing legitimate decision-making processes in democratic settings; it is also crucial to reflect on the core values essential for fostering more democratic institutions. The second step is to align algorithms with democratic principles: they must be integrated into political dynamics guided by values such as participation, equality, pluralism, accountability, public debate, and liberty.

While algorithmic systems are a central piece of our contemporary institutional life, they lack the two main pillars of legitimation central to democracy: authorization and accountability. Decisions made through algorithmic systems usually lack clear mechanisms of authorization and accountability. Algorithms enjoy the benefits of the legitimacy expected from their outcomes without the burdens inherent in democratic decision making. They frame interactive settings and have deep collective consequences without passing the tests of authorization or going through the controls that lie at the heart of other institutions with political implications in democratic regimes. That is, the institutionalization of algorithmic systems with various forms of public consequences rarely satisfies the criteria of democratic principles and values, and there are no clear procedures for their justification in the political community.

Route to Democratic Governance

We claim that algorithmic systems embedded in institutional settings need to be democratized, but the meaning of this democratization requires further clarification. In the case of algorithms, the principle of accountability seems to be the key to democratizing algorithmic institutions, and this principle can demand new governance strategies. The key democratic values should be part of the governance framework for the democratic accountability of algorithms. The democratization of these institutions should be thought of as a normative horizon that guides practices and allows continuous criticism of existing institutions. Currently, democracies are undergoing processes of change that are interpreted as the erosion of traditional political institutions and the emergence of new ones. Traditional institutions such as elections and party systems are challenged and undermined daily. Social media platforms, with different AI-based algorithms, influence political communication, affecting human preferences, speech, and political orientations. When it comes to governing algorithmic institutions, we should draw lessons from political institutions. It is essential to establish networks of diverse institutions for the oversight and regulation of these entities. Concentrated power is never beneficial, and relying solely on self-regulation is not the solution. Moreover, the democratization of algorithmic governance requires transnational participation, capable of bringing together not only experts and governments but a broader array of citizens, who are affected by the ubiquity of high-risk algorithms in their everyday lives.

The term “governance” encompasses various meanings. In the current context, it serves as a broad term covering all types of collective steering of societal affairs. This includes institutionalized forms of collaboration between the state, society, and private organizations. Collaborative governance emerges as a response to the complexities of algorithmic society and the perceived drawbacks of state-centric administration, including inflexibility, centralization, a dearth of expertise, and overly general rules.3,8 It aims to incorporate non-governmental perspectives without being mere self-governance by companies. It seeks to harness multiple stakeholders, exploring the advantages of private-sector organizations while maintaining the accountability and legitimacy associated with public regulation and governance. In summary, the book Algorithmic Institutionalism presents a comprehensive theoretical framework to understand a changing world in uncertain times, permeated by algorithms and digital technologies.

References

1. Almeida, V., Filgueiras, F., and Mendonça, R. Algorithms and institutions: How social sciences can contribute to governance of algorithms. IEEE Internet Computing 26, 2 (2022); 10.1109/MIC.2022.31479
2. Beduschi, A. International migration management in the age of artificial intelligence. Migration Studies 9, 3 (Sept. 2021); 10.1093/migration/mnaa003
3. Gasser, U. and Almeida, V. Futures of digital governance. Commun. ACM 65, 3 (Mar. 2022); 10.1145/3477502
4. Laufer, B. and Nissenbaum, H. Algorithmic Displacement of Social Trust. Knight First Amendment Institute, 23-12 (Nov. 29, 2023); https://bit.ly/3URNKE6
5. Mendonça, R., Filgueiras, F., and Almeida, V. Algorithmic Institutionalism: The Changing Rules of Social and Political Life. Oxford University Press (2023).
6. Nyhan, B. et al. Like-minded sources on Facebook are prevalent but not polarizing. Nature (2023); 10.1038/s41586-023-06297-w
7. Runciman, D. How Democracy Ends. Basic Books, New York (2018).
8. Selbst, A. An institutional view of algorithmic impact assessments. Harvard Journal of Law and Technology 35, 1 (2021).
9. Stray, J. et al. Building human values into recommender systems: An interdisciplinary synthesis (2022); https://arxiv.org/abs/2207.10192
10. West, J.D. and Bergstrom, C.T. Misinformation in and about science. Proceedings of the National Academy of Sciences 118, 15 (2021), e1912444117.
