

Search results 111 - 120 of 3,299 for "bentley"


"All Rise for the AI Director": Eliciting Possible Futures of Voice Technology through Story Completion

How might the capabilities of voice assistants several decades in the future shape human society? To anticipate the space of possible futures for voice assistants, we asked 149 participants to each complete a story based on a brief story stem set in the year 2050 in one of five different contexts: the home, doctor's office, school, workplace, and public transit. Story completion as a method elicits participants' visions of possible futures, unconstrained by their understanding of current technological capabilities, but still reflective of current sociocultural values. Through a thematic analysis, we find these stories reveal the extremes of the capabilities and concerns of today's voice assistants (and artificial intelligence), such as improving efficiency and offering instantaneous support, but also replacing human jobs, eroding human agency, and causing harm through malfunction. We conclude by discussing how these speculative visions might inform and inspire the design of voice assistants and other artificial intelligence.

2020-07-03
https://dl.acm.org/ft_gateway.cfm?id=3395479&dwn=1

Using Remote Controlled Speech Agents to Explore Music Experience in Context

It can be difficult for user researchers to explore how people might interact with interactive systems in everyday contexts; time and space limitations make it hard to be present everywhere that technology is used. Digital music services are one domain where designing for context is important, given the myriad places people listen to music. One novel method to help design researchers embed themselves in everyday contexts is through remote-controlled speech agents. This paper describes a practitioner-centered case study of music service interaction researchers using a remote-controlled speech agent, called DJ Bot, to explore people's music interaction in the car and the home. DJ Bot allowed the team to conduct remote user research and contextual inquiry and to quickly explore new interactions. However, challenges arose in using a remote-controlled speech agent when adapting DJ Bot from the constrained environment of the car to the unconstrained environment of the home.

2020-07-03
https://dl.acm.org/ft_gateway.cfm?id=3395440&dwn=1

Making Air Quality Data Meaningful: Coupling Objective Measurement with Subjective Experience through Narration

Air pollution causes several million deaths every year. Increasing public awareness through the deployment of devices that sense air quality may be a promising step in addressing the problem; however, these wholly objective device measurements may not capture the nuanced and lived experiences people have with the air, which are often colored by perceptions, histories, imaginations, and the sociopolitical context in which people live. The gap between objective environmental realities and individuals' subjective experiences of the environment may make it difficult to form meaning from data, hindering the positive policy outcomes such deployments are intended to produce. To bridge this gap, we conducted two-phase design fieldwork to obtain an empirical understanding of the rich contours of people's experiences with the air, and we outline design strategies for making air quality data meaningful.

2020-07-03
https://dl.acm.org/ft_gateway.cfm?id=3395517&dwn=1

Visualizing Personal Rhythms: A Critical Visual Analysis of Mental Health in Flux

Visualizations of personal data in self-tracking systems can make even subtle shifts in mental and physical states observable, greatly influencing how health and wellness goals are set, pursued, and achieved. At the same time, recent work in data ethics cautions that standardized models can have unintended negative consequences for some user groups. Through collaborative design and critical visual analysis, this study contrasts conventional visualizations of personal data with the ways that vulnerable populations represent their lived experiences. Participants self-tracked to manage bipolar disorder, a mental illness characterized by severe and unpredictable mood changes. During design sessions, each created a series of timeline drawings depicting their experiences with mental health. Examples of adaptive and vernacular design, these images use both normative standards and individualized graphic modifications. Analysis shows that conventional visual encodings can support facets of self-assessment while also imposing problematic normative standards onto deeply personal experiences.

2020-07-03
https://dl.acm.org/ft_gateway.cfm?id=3395463&dwn=1

A New Approach for Pedestrian Density Estimation Using Moving Sensors and Computer Vision

An understanding of pedestrian dynamics is indispensable for numerous urban applications, including the design of transportation networks and planning for business development. Pedestrian counting typically relies on manual or technical means to count individuals at each location of interest. However, such methods do not scale to the size of a city, so we propose a new approach to fill this gap. In this project, we used a large, dense dataset of images of New York City, along with computer vision techniques, to construct a spatio-temporal map of relative person density. Due to the limitations of state-of-the-art computer vision methods, such automatic detection of people is inherently subject to errors. We model these errors as a probabilistic process, for which we provide theoretical analysis and thorough numerical simulations. We demonstrate that, within our assumptions, our methodology can supply a reasonable estimate of person densities and provide theoretical bounds for the resulting error.

2020-07-03
https://dl.acm.org/ft_gateway.cfm?id=3397575&dwn=1
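
The error-modeling step described in the pedestrian-density abstract above can be illustrated with a toy probabilistic sketch. The code below is not the authors' model: it assumes a hypothetical detector with a fixed recall and a Poisson false-positive rate, simulates noisy per-image counts, and bias-corrects them, which is the general flavor of treating detection errors as a probabilistic process.

    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical detector characteristics (not taken from the paper):
    RECALL = 0.7        # probability that a true pedestrian is detected
    FP_RATE = 1.5       # expected false positives per image

    def simulate_detections(true_counts):
        """Simulate noisy detector counts: missed detections plus false positives."""
        hits = rng.binomial(true_counts, RECALL)                 # true positives
        false_pos = rng.poisson(FP_RATE, size=true_counts.shape)
        return hits + false_pos

    def estimate_density(observed_counts):
        """Bias-correct raw counts using E[observed] = RECALL * true + FP_RATE."""
        return np.maximum(observed_counts - FP_RATE, 0) / RECALL

    true_counts = rng.poisson(lam=20, size=10_000)   # ground-truth pedestrians per image
    observed = simulate_detections(true_counts)
    estimated = estimate_density(observed)

    print(f"mean true count     : {true_counts.mean():.2f}")
    print(f"mean raw detections : {observed.mean():.2f}")
    print(f"mean corrected count: {estimated.mean():.2f}")

Under these illustrative assumptions, the corrected mean tracks the true mean even though the raw detection counts are substantially biased.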

Scalable Machine Learning on High-Dimensional Vectors: From Data Series to Deep Network Embeddings

Several applications in diverse domains have an increasingly pressing need for techniques able to analyze very large collections of static and streaming sequences (a.k.a. data series), predominantly in real time. Examples of such applications come from Internet of Things installations, neuroscience, astrophysics, and a multitude of other scientific and application domains that need to apply machine learning techniques for knowledge extraction. It is not unusual for these applications, for which similarity search is a core operation, to involve numbers of data series in the order of hundreds of millions to billions, which are seldom analyzed in their full detail due to their sheer size. Such application requirements have driven the development of novel similarity search methods that can facilitate scalable analytics in this context. At the same time, a host of other methods have been developed for similarity search of high-dimensional vectors in general. All these methods are now becoming increasingly important because of the growing popularity and size of sequence collections, as well as the growing use of high-dimensional vector representations of a large variety of objects (such as text, multimedia, images, audio and video recordings, graphs, database tables, and others) thanks to deep network embeddings. In this work, we review recent efforts in designing techniques for indexing and analyzing massive collections of data series, and argue that they are the methods of choice even for general high-dimensional vectors. Finally, we discuss the challenges and open research problems in this area.

2020-06-30
https://dl.acm.org/ft_gateway.cfm?id=3405989&dwn=1
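
As a concrete illustration of the summarize-then-prune idea behind many data-series indexes of the kind the survey above reviews (the SAX/iSAX family, for example), the sketch below uses Piecewise Aggregate Approximation and its Euclidean-distance lower bound to answer an exact nearest-neighbor query. The collection size, series length, and segment count are arbitrary choices, and a real index would organize the summaries in a tree rather than scanning them.

    import numpy as np

    rng = np.random.default_rng(1)

    N, LEN, SEGMENTS = 50_000, 256, 16     # illustrative sizes; LEN divisible by SEGMENTS

    def paa(x, segments):
        """Piecewise Aggregate Approximation: the mean of each equal-length segment."""
        return x.reshape(segments, -1).mean(axis=1)

    collection = rng.standard_normal((N, LEN))                    # stand-in data-series collection
    summaries = collection.reshape(N, SEGMENTS, -1).mean(axis=2)  # PAA summary of every series

    query = rng.standard_normal(LEN)
    q_summary = paa(query, SEGMENTS)

    # Cheap lower bounds on the true Euclidean distance, computed from summaries only.
    lower_bounds = np.sqrt(LEN / SEGMENTS) * np.linalg.norm(summaries - q_summary, axis=1)

    # Visit candidates in lower-bound order; stop once no candidate can beat the best.
    best_idx, best_dist = -1, np.inf
    for idx in np.argsort(lower_bounds):
        if lower_bounds[idx] >= best_dist:
            break                            # everything remaining is provably farther away
        dist = np.linalg.norm(query - collection[idx])
        if dist < best_dist:
            best_idx, best_dist = idx, dist

    print(f"exact nearest neighbor: series #{best_idx} at distance {best_dist:.3f}")

Because the lower bound never overestimates the true distance, the early break cannot discard the true nearest neighbor; it only skips distance computations on series that cannot win.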

Evolving robot software and hardware

This paper summarizes the keynote I gave at the SEAMS 2020 conference. Noting the power of natural evolution that makes living systems extremely adaptive, I describe how artificial evolution can be employed to solve design and optimization problems in software. Thereafter, I discuss the Evolution of Things, that is, the possibility of evolving physical artefacts, and zoom in on a (r)evolutionary way of creating 'bodies' and 'brains' of robots for engineering and fundamental research.

2020-06-29
https://dl.acm.org/ft_gateway.cfm?id=3391593&dwn=1
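
The keynote's core mechanism, artificial evolution, reduces to a generate-evaluate-select loop. The sketch below is a generic evolutionary algorithm in which a toy fitness function stands in for evaluating a robot body or controller; the population size, mutation rate, and fitness function are illustrative assumptions, not taken from the paper.

    import random

    random.seed(42)

    GENOME_LEN = 20        # e.g., 20 design parameters in [0, 1]
    POP_SIZE = 50
    GENERATIONS = 100
    MUTATION_STD = 0.1

    def fitness(genome):
        """Toy objective: closeness to an arbitrary target design (stand-in for a robot evaluation)."""
        return -sum((g - 0.75) ** 2 for g in genome)

    def mutate(genome):
        return [min(1.0, max(0.0, g + random.gauss(0, MUTATION_STD))) for g in genome]

    def crossover(a, b):
        cut = random.randrange(1, GENOME_LEN)
        return a[:cut] + b[cut:]

    population = [[random.random() for _ in range(GENOME_LEN)] for _ in range(POP_SIZE)]

    for gen in range(GENERATIONS):
        population.sort(key=fitness, reverse=True)
        parents = population[: POP_SIZE // 4]            # truncation selection
        offspring = [mutate(crossover(random.choice(parents), random.choice(parents)))
                     for _ in range(POP_SIZE - len(parents))]
        population = parents + offspring

    best = max(population, key=fitness)
    print(f"best fitness after {GENERATIONS} generations: {fitness(best):.4f}")

In evolutionary robotics the expensive step is typically the fitness evaluation (simulating or physically testing each candidate), while the loop itself stays essentially this simple.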

SLEMI: equivalence modulo input (EMI) based mutation of CPS models for finding compiler bugs in Simulink

Finding bugs in commercial cyber-physical system development tools (or "model-based design" tools) such as MathWorks's Simulink is important in practice, as these tools are widely used to generate embedded code that gets deployed in safety-critical applications such as cars and planes. Equivalence Modulo Input (EMI) based mutation is a new twist on differential testing that promises lower use of computational resources and has already been successful at finding bugs in compilers for procedural languages. To provide EMI-based mutation for differential testing of cyber-physical system (CPS) development tools, this paper develops several novel mutation techniques. These techniques deal with CPS language features that are not found in procedural languages, such as an explicit notion of execution time and zombie code, which combines properties of live and dead procedural code. In our experiments, the most closely related work (SLforge) found two bugs in the Simulink tool. In comparison, SLEMI found a superset of issues, including 9 confirmed as bugs by MathWorks Support.

2020-06-27
https://dl.acm.org/ft_gateway.cfm?id=3380381&dwn=1
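
SLEMI itself mutates Simulink models, but the underlying EMI idea can be sketched on ordinary code: mutate only regions that a chosen test input never executes, then check that behavior on that input is unchanged after compilation. The toy Python sketch below is not SLEMI's implementation; the dead region is identified by hand rather than by profiling, and the "compiler" is just Python's exec, but it shows the shape of such a differential test.

    import types

    # A toy "program under test" with a branch that is dead for the chosen input.
    ORIGINAL_SRC = """
    def program(x):
        if x > 0:
            result = x * 2
        else:
            result = -x   # unexecuted when x > 0
        return result
    """

    def compile_and_run(src, x):
        """'Compile' the source into a fresh module and run it on input x."""
        module = types.ModuleType("variant")
        exec(src, module.__dict__)
        return module.program(x)

    def emi_mutate(src):
        """Mutate only the region known to be unexecuted for the chosen input."""
        return src.replace("result = -x", "result = -x; result += 999")

    test_input = 5                         # exercises only the x > 0 branch
    original_out = compile_and_run(ORIGINAL_SRC, test_input)
    variant_out = compile_and_run(emi_mutate(ORIGINAL_SRC), test_input)

    # Equivalence modulo this input: a mismatch would point at a compiler bug.
    assert original_out == variant_out, "behavioral difference: possible compiler bug"
    print("original and EMI variant agree on the test input:", original_out)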

On Building an Automatic Identification of Country-Specific Feature Requests in Mobile App Reviews: Possibilities and Challenges

Mobile app stores are available in over 150 countries, allowing users from all over the world to leave public reviews of downloaded apps. Previous studies have shown that such reviews can serve as sources of requirements and suggested that users from different countries have different needs and expectations regarding the same app. However, the tremendous quantity of reviews from multiple countries, as well as several other factors, complicates identifying country-specific app feature requests. In this work, we present a simple NLP-based approach to address this problem and discuss some of the challenges involved in using such analysis for this task.

2020-06-27
https://dl.acm.org/ft_gateway.cfm?id=3391492&dwn=1
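
One plausible, minimal form of the NLP-based analysis mentioned above (not necessarily the authors' pipeline) is to flag likely feature requests with phrase heuristics and then compare the request vocabulary across countries. The reviews, patterns, and threshold in the sketch below are made up for illustration.

    from collections import Counter, defaultdict
    import re

    # Hypothetical review data: (country, review text). Not from the paper's dataset.
    REVIEWS = [
        ("JP", "Please add support for LINE login"),
        ("JP", "It would be great to add a Japanese keyboard option"),
        ("DE", "Can you add GDPR data export, please?"),
        ("US", "Great app, no complaints"),
        ("US", "Please add dark mode"),
    ]

    # Simple heuristic for "feature request" phrasing.
    REQUEST_PATTERN = re.compile(r"\b(please add|add support|would be great to|can you add)\b", re.I)

    def tokenize(text):
        return re.findall(r"[a-z]+", text.lower())

    requests_by_country = defaultdict(Counter)
    for country, text in REVIEWS:
        if REQUEST_PATTERN.search(text):
            requests_by_country[country].update(tokenize(text))

    # Terms frequent in one country's requests but absent elsewhere hint at country-specific needs.
    for country, counts in requests_by_country.items():
        others = Counter()
        for c, other in requests_by_country.items():
            if c != country:
                others.update(other)
        specific = [w for w in counts if w not in others and len(w) > 3]
        print(country, "->", specific)

Terms that appear in one country's requests but in no other country's (here, for example, "gdpr" or "japanese") become candidates for country-specific feature requests that an analyst could then review.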

Precfix: large-scale patch recommendation by mining defect-patch pairs

Patch recommendation is the process of identifying errors in software systems and suggesting suitable fixes for them. Patch recommendation can significantly improve developer productivity by reducing both debugging and repair time. Existing techniques usually rely on complete test suites and detailed debugging reports, which are often absent in practical industrial settings. In this paper, we propose Precfix, a pragmatic approach targeting large-scale industrial codebases and making recommendations based on previously observed debugging activities. Precfix collects defect-patch pairs from development histories, performs clustering, and extracts generic reusable patching patterns as recommendations. We conducted an experimental study on an industrial codebase with 10K projects involving diverse defect patterns. We managed to extract 3K templates of defect-patch pairs, which have been successfully applied to the entire codebase. Our approach is able to make recommendations within milliseconds and achieves a false positive rate of 22%, as confirmed by manual review. The majority (10/12) of the interviewed developers appreciated Precfix, which has been rolled out within Alibaba to support various critical businesses.

2020-06-27
https://dl.acm.org/ft_gateway.cfm?id=3381356&dwn=1
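
The mine-cluster-extract pipeline that the Precfix abstract describes can be sketched at toy scale: fingerprint each defect-patch pair by the tokens its diff introduces, cluster pairs by fingerprint similarity, and keep the tokens shared within a cluster as a rough template. The defect-patch pairs, Jaccard threshold, and greedy clustering below are illustrative assumptions, not Precfix's actual algorithm.

    from difflib import unified_diff

    # Hypothetical defect-patch pairs (buggy line, fixed line); not the paper's data.
    PAIRS = [
        ("if (obj.getName() == null) {", "if (obj == null || obj.getName() == null) {"),
        ("if (user.getId() == null) {", "if (user == null || user.getId() == null) {"),
        ("list.get(0).run();", "if (!list.isEmpty()) { list.get(0).run(); }"),
    ]

    def fingerprint(defect, patch):
        """Token set of the lines the patch introduces: a crude fingerprint of the fix."""
        added = [line[1:] for line in unified_diff([defect], [patch], lineterm="")
                 if line.startswith("+") and not line.startswith("+++")]
        return set(" ".join(added).split())

    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0

    # Greedy single-pass clustering of fixes by fingerprint similarity.
    clusters = []
    for defect, patch in PAIRS:
        fp = fingerprint(defect, patch)
        for cluster in clusters:
            if jaccard(fp, cluster["template"]) > 0.3:
                cluster["members"].append((defect, patch))
                cluster["template"] &= fp     # keep only tokens shared by the whole cluster
                break
        else:
            clusters.append({"template": fp, "members": [(defect, patch)]})

    for i, cluster in enumerate(clusters):
        print(f"cluster {i}: {len(cluster['members'])} pair(s), "
              f"shared tokens: {sorted(cluster['template'])}")

On this toy input the two null-check fixes fall into one cluster whose shared tokens amount to an "add a null guard" pattern, while the emptiness check forms a cluster of its own.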