How does the use of artificial intelligence (AI) change organizations in practice? How can organizations improve their application of AI systems?
To answer these questions, Marleen Huysman, affiliated with the Vrije Universiteit Amsterdam (VU) in the Netherlands, leads the KIN Center for Digital Innovation, a 35-person multidisciplinary research group of computer scientists, engineers, sociologists, anthropologists, business experts, and industrial designers.
Their working method is unique: they obtain permission to embed themselves in an organization, then, working as digital anthropologists, spend months and sometimes years studying the impact of a recently introduced AI system. To date, they have studied how AI impacts the practices of radiology, predictive policing, robotic surgery, and recruitment.
Communications interviewed Huysman about the impact AI can have on organizations.
Bennie Mols: What is the key to the anthropological approach you use to study AI in organizations?
Marleen Huysman: We become part of the daily routine, walking along with employees and observing carefully how the AI changes the organization. We ask questions such as: Why did management introduce the AI? What can the AI do, and what can it not do? Does the AI replace people or augment them? How does the AI change the work of the users and of the people around them? What are the ripple effects of introducing AI?
What general conclusions can you draw about the impact of AI on organizations?
The most important conclusion is that there is a big gap between the AI developers and AI systems on one side, and the end users in the organization on the other. We expected to find a gap, but it is much bigger than we thought. This leads to AI systems being implemented too quickly, and to systems that do not fit well into daily work routines. As a consequence, we often observed that a new job function needs to be created: that of a broker, translator, or intelligence officer who tries to bridge the gap.
Can you illustrate these conclusions with some examples?
We studied the introduction of predictive policing by the Dutch police. We discovered that interpreting and filtering the AI outputs was too difficult to leave to the police officers themselves. To solve this problem, the police set up an intelligence unit that translates the AI outputs into what police officers must actually do.
We also studied the introduction of a hiring algorithm at a large multinational organization. We saw that while the algorithm was introduced to support HR in making better hiring decisions, the HR professionals became assistants to the algorithm instead. HR professionals no longer select or reject candidates; instead, they supply the algorithm with fresh data so it can make the decision on their behalf. Furthermore, they need to repair mistakes made by the algorithm and act as its intermediary, as in the case where the algorithm mistakenly rejected multiple candidates. These are all changes in the work activities of the HR team that were not anticipated when the hiring algorithm was introduced.
This illustrates another of our general findings: AI often has unexpected ripple effects on the people who have to deal with it in practice. Often new jobs are needed, and old jobs have to change.
What is the danger of implementing AI too quickly in an organization?
One of the organizations we studied was the business-to-business sales department of a large company in the Netherlands. It used a relatively simple rule-based AI to predict whether a client would need a new product, or whether a client should be contacted because a product they used was out of date. Normally a sales manager would call the client, and because the sales manager had come to know her clients personally, she would also ask personal questions: How are you doing? How is your family? In theory, the AI system was much better at predicting the best moment to contact a client, so the organization fired most of its salespeople and introduced the AI system. Soon, however, the organization discovered that the personal contact between the salespeople and the clients was far more important in selling its products than anyone had realized. The AI system couldn't make this personal contact, and therefore performed much worse than the sales managers did.
What have you learned about the role of domain experts? Are they still needed when AI is self-learning?
The promise of AI is that self-learning systems no longer need domain experts because all the expertise resides in the data. However, we have observed the opposite: AI developers need to cooperate with domain experts even more than developers of traditional knowledge systems did. This is because knowledge lies not just in data, but also in people. Knowledge is a shared property: it is distributed in the form of expertise and routines among many people in an organization. We see that AI developers continuously need to collaborate with domain experts to keep the dataset useful and of good quality.
What is the most important question you are going to investigate in the near future?
We do not just want to build scientific theories; we also want to contribute practically. At the moment, we do this by organizing workshops in the organizations we have studied and discussing our conclusions with them. However, we can contribute even more by developing a methodology that organizations can use to bridge the gap between AI developers and AI users. How can we develop such a methodology? That is, for me, the most important question. We just started a new project to find the answer.
Bennie Mols is a science and technology writer based in Amsterdam, the Netherlands.