Recommender systems are among the most pervasive machine learning applications on the Internet. Social media, audio and video streaming, news, and e-commerce are all heavily driven by the data-intensive personalization they enable, leveraging information drawn from the behavior of large user bases to offer a myriad of recommendation services. Point of Interest (PoI) recommendation is the task of recommending locations (businesses, cultural sites, natural areas) for a user to visit. This is a well-established sub-field within recommender systems, and as a domain of application, it provides a good introduction to the challenges of applying personalized recommendation in practical contexts.
An effective PoI recommender must consider a user's interests and preferences, as in any personalized system, but also practical aspects of travel: weather, congestion, hours of operation, and seasonality, to name a few. In addition, some PoI recommenders are designed to take multistakeholder aspects of the problem into account. For example, challenges arise from the rivalrous nature of recommending a limited resource across a large user base: a system that recommends the same 50-seat restaurant to 1,000 people for the same day and time will leave many users disappointed.
PoI recommender systems can be found within popular Web applications such as Yelp and Google Maps, and these are quintessential big data-driven tools. However, in the drive to build practical systems optimized for the masses, there is always the danger that sub-groups of the population will be served less well. One of the important outgrowths of the recent focus on fairness in machine learning generally, and recommendation in particular, has been the recognition that the distribution of a system's benefits (or harms) may be as important as its central tendency, especially if inequities in that distribution fall heavily on already disadvantaged or minoritized groups.
The following paper—winner of the Best Paper Award at the 28th ACM User Modeling, Adaptation and Personalization Conference in 2020—is an example of work that takes seriously the task of supporting a small group that is not well served by existing applications. The authors consider the problem of recommending PoIs specifically to adults with Autism Spectrum Disorder (ASD). The task is challenging for several reasons. First, it is by necessity highly personalized—individuals in this population often have idiosyncratic responses to their environments. Simple peer profile matching or extraction of common patterns in the form of latent factors would be likely to miss these characteristics. At the same time, the risk associated with bad recommendations is high. An individual with autism may be easily overwhelmed and traumatized by an environment that is too loud, too bright, or too crowded, depending on how the condition manifests itself.
The authors describe the significant degree of interaction with participants that was required to obtain data capturing the complexity of their PoI preferences. Because of the need for detailed data gathering, the number of participants is necessarily very limited—many orders of magnitude smaller than what would be considered necessary to train today's state-of-the-art recommendation algorithms. Instead, the authors engaged in a sensitive and detailed analysis of user requirements and environmental characteristics to provide the best representation of the needs of their user base.
The evaluation, necessarily conducted offline at this preliminary stage, reflects another truism of modern recommender systems research, namely the need to use multiple evaluation metrics to capture a multidimensional view of system performance. In this case, we see that the individualized treatment of features proposed by the authors leads to improved results for the ASD subjects, as anticipated. Thus, the algorithm is a good candidate for additional development and eventual deployment to these users. Although the results for the (presumed) neurotypical group are more mixed, they are not significantly worse than the baseline.
In the end, the authors demonstrate the value of their synthesis of AI and user modeling techniques in tackling a challenging and practical problem for the benefit of a disadvantaged and understudied group. (The authors note that most HCI research in the autism area focuses on children.) This effort is one step toward a larger and ongoing goal of creating an app providing geographic information and support to ASD users. While machine learning fairness research often concentrates on ensuring fair outcomes for a system's user base considered as a whole, this work is a reminder that real inclusivity and equity may require designs tailored to the needs of specific groups.