
Communications of the ACM


Keeping science on keel when software moves

An approach to reproducibility problems related to porting software across machines and compilers.


Relative Worst-order Analysis: A Survey

The standard measure for the quality of online algorithms is the competitive ratio. This measure is generally applicable, and for some problems it works well, but for others it fails to distinguish between algorithms that have very different performance. Thus, ever since its introduction, researchers have worked on improving the measure, defining variants, or defining measures based on other concepts to improve on the situation. Relative worst-order analysis (RWOA) is one of the most thoroughly tested such proposals. With RWOA, many separations of algorithms not obtainable with competitive analysis have been found.

In RWOA, two algorithms are compared directly, rather than indirectly as is done in competitive analysis, where both algorithms are compared separately to an optimal offline algorithm. If, up to permutations of the request sequences, one algorithm is always at least as good and sometimes better than another, then the first algorithm is deemed the better algorithm by RWOA.
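A rough sketch of the underlying definitions may help (notation is illustrative and simplified, for cost-minimization problems):

```latex
% Worst-order performance of algorithm A on request sequence I:
% the worst cost A incurs over any permutation \sigma of I.
A_W(I) = \max_{\sigma} A(\sigma(I))
% A and B are then compared directly: roughly, the relative
% worst-order ratio WR_{A,B} is the best (infimum) constant c
% such that, for some additive constant b and all sequences I,
A_W(I) \le c \cdot B_W(I) + b .
% If A_W(I) \le B_W(I) + b for all I, and A_W is strictly smaller
% on some family of sequences, RWOA deems A the better algorithm.
```

This captures the "up to permutations of the request sequences" comparison in the paragraph above; the full definition (with its comparability conditions) is in the survey itself.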

We survey the most important results obtained with this technique and compare it with other quality measures. The survey includes a quite complete set of references.


Fine-grained Complexity Analysis of Two Classic TSP Variants

We analyze two classic variants of the TRAVELING SALESMAN PROBLEM (TSP) using the toolkit of fine-grained complexity.

Our first set of results is motivated by the BITONIC TSP problem: given a set of n points in the plane, compute a shortest tour consisting of two monotone chains. It is a classic dynamic-programming exercise to solve this problem in O(n^2) time. While the near-quadratic dependency of similar dynamic programs for LONGEST COMMON SUBSEQUENCE and DISCRETE FRÉCHET DISTANCE has recently been proven to be essentially optimal under the Strong Exponential Time Hypothesis, we show that bitonic tours can be found in subquadratic time. More precisely, we present an algorithm that solves bitonic TSP in O(n log^2 n) time and its bottleneck version in O(n log^3 n) time. In the more general pyramidal TSP problem, the points to be visited are labeled 1, …, n and the sequence of labels in the solution is required to have at most one local maximum. Our algorithms for the bitonic (bottleneck) TSP problem also work for the pyramidal TSP problem in the plane.
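For context, the classic O(n^2) dynamic program mentioned above can be sketched as follows (function name and representation are my own; the paper's contribution is the faster O(n log^2 n) algorithm, not this one):

```python
import math

def bitonic_tour_length(points):
    """Classic O(n^2) DP for the shortest bitonic tour (a CLRS-style
    exercise). points: list of (x, y) with distinct x-coordinates.

    b[i][j] (i < j) is the length of the shortest pair of disjoint
    x-monotone paths ending at points i and j that together visit
    all of points 0..j (after sorting by x).
    """
    pts = sorted(points)
    n = len(pts)
    if n < 3:
        # Degenerate "tour": out along the points and straight back.
        return 2 * sum(math.dist(pts[i], pts[i + 1]) for i in range(n - 1))
    d = lambda i, j: math.dist(pts[i], pts[j])
    INF = float("inf")
    b = [[INF] * n for _ in range(n)]
    b[0][1] = d(0, 1)
    for j in range(2, n):
        # Case 1: the chain ending at j-1 is extended to j.
        for i in range(j - 1):
            b[i][j] = b[i][j - 1] + d(j - 1, j)
        # Case 2: the *other* chain ends at j; it came from some k < j-1.
        b[j - 1][j] = min(b[k][j - 1] + d(k, j) for k in range(j - 1))
    # Close the tour by joining the two chain ends n-2 and n-1.
    return b[n - 2][n - 1] + d(n - 2, n - 1)
```

For three points this degenerates to the triangle perimeter, and for collinear points it returns twice the span, as expected.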

Our second set of results concerns the popular k-OPT heuristic for TSP in the graph setting. More precisely, we study the k-OPT decision problem, which asks whether a given tour can be improved by a k-OPT move that replaces k edges in the tour by k new edges. A simple algorithm solves k-OPT in O(n^k) time for fixed k. For 2-OPT, this is easily seen to be optimal. For k = 3, we prove that an algorithm with a runtime of the form Õ(n^(3−ε)) exists if and only if ALL-PAIRS SHORTEST PATHS in weighted digraphs has such an algorithm. For general k-OPT, it is known that a runtime of f(k) · n^(o(k/log k)) would contradict the Exponential Time Hypothesis. The results for k = 2, 3 may suggest that the actual time complexity of k-OPT is Θ(n^k). We show that this is not the case, by presenting an algorithm that finds the best k-move in O(n^(⌊2k/3⌋+1)) time for fixed k ≥ 3. This implies that 4-OPT can be solved in O(n^3) time, matching the best-known algorithm for 3-OPT. Finally, we show how to beat the quadratic barrier for k = 2 in two important settings, namely, for points in the plane and when we want to solve 2-OPT repeatedly.
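The simple O(n^k) algorithm for fixed k is just an exhaustive scan over candidate moves; for k = 2 a minimal sketch (identifiers are illustrative) looks like this:

```python
def best_2opt_move(tour, dist):
    """Exhaustive O(n^2) scan for the best improving 2-OPT move.

    A 2-OPT move removes edges (tour[i], tour[i+1]) and
    (tour[j], tour[j+1]) and reconnects with (tour[i], tour[j]) and
    (tour[i+1], tour[j+1]), reversing the segment in between.
    Returns (gain, i, j) for the largest positive gain, or None if
    no 2-OPT move improves the tour. dist(u, v) gives edge weights.
    """
    n = len(tour)
    best = None
    for i in range(n - 1):
        a, b = tour[i], tour[i + 1]
        for j in range(i + 2, n):
            if i == 0 and j == n - 1:
                continue  # would remove and re-add the same two edges
            c, d = tour[j], tour[(j + 1) % n]
            gain = dist(a, b) + dist(c, d) - dist(a, c) - dist(b, d)
            if gain > 1e-12 and (best is None or gain > best[0]):
                best = (gain, i, j)
    return best
```

On a unit square visited in a "crossing" order, this finds the move that uncrosses the tour; on the perimeter order it reports no improving move.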


Mapping and Taking Stock of the Personal Informatics Literature

The research community on the study and design of systems for personal informatics has grown over the past decade. To take stock of the topics the field has studied and the methods it has used, we map and label 523 publications from ACM's library, IEEE Xplore, and PubMed. We find that the literature has focused on studying and designing for health and wellness domains, has emphasized understanding and overcoming barriers to data collection and reflection, and has progressively contributed fewer artifacts. Our mapping review suggests directions future research could explore, such as identifying and resolving barriers to tracking stages beyond collection and reflection, engaging more with domain experts, and further discussing the privacy and ethical concerns around tracked data.


Multi-Modal Repairs of Conversational Breakdowns in Task-Oriented Dialogs

A major problem in task-oriented conversational agents is the lack of support for the repair of conversational breakdowns. Prior studies have shown that current repair strategies for these kinds of errors are often ineffective due to: (1) the lack of transparency about the state of the system's understanding of the user's utterance; and (2) the system's limited capabilities to understand the user's verbal attempts to repair natural language understanding errors. This paper introduces SOVITE, a new multi-modal speech plus direct manipulation interface that helps users discover, identify the causes of, and recover from conversational breakdowns using the resources of existing mobile app GUIs for grounding. SOVITE displays the system's understanding of user intents using GUI screenshots, allows users to refer to third-party apps and their GUI screens in conversations as inputs for intent disambiguation, and enables users to repair breakdowns using direct manipulation on these screenshots. The results from a remote user study with 10 users using SOVITE in 7 scenarios suggested that SOVITE's approach is usable and effective.


Explaining factors affecting telework adoption in South African organisations pre-COVID-19

The COVID-19 pandemic of 2020 saw governments across the world mandate telework for entire populations, bringing the topic of telework into sharp focus. Telework is a well-researched topic dating back some five decades. While telework provides many indisputable benefits to organisations, society, and individuals, it has not achieved the anticipated widespread adoption. While telework studies have examined multiple aspects, few have examined the organisational factors that affect telework adoption. This study is an empirical investigation of telework adoption in South African organisations, using a set of factors identified in the literature; prior studies found these factors to enable or prevent an organisation from adopting telework. The question asked in this study was thus: "Which factors enable or prevent the adoption of telework within South African organisations?" A survey with 104 valid responses was analysed using Statistica. The theoretical contribution of the study is a validated model of factors influencing the adoption of telework.


SoK: a taxonomy for anomaly detection in wireless sensor networks focused on node-level techniques

Wireless sensor networks play an important role in today's world: When measuring physical conditions, the quality of the sensor readings ultimately impacts the quality of various data analytical services. To maintain data correctness and quality, run-time measures such as anomaly detection techniques are gaining significance. In particular, the detection of threatening node anomalies caused by sensor node faults has become a crucial task.

The detection of faulty sensor nodes is a non-trivial task because wireless sensor networks typically consist of low-cost embedded systems with strictly limited resources, especially regarding their energy budget. Thus, efficient and lightweight approaches that fit the constraints of sensor networks are required.
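As an illustration of what "lightweight" can mean at node level (this detector is a generic example for context, not a technique from the surveyed literature):

```python
from collections import deque
import math

class SlidingZScoreDetector:
    """Illustrative lightweight node-level anomaly detector: flags a
    reading whose deviation from the sliding-window mean exceeds
    `threshold` standard deviations. It needs only a small fixed
    window of memory and O(window) arithmetic per reading, which
    suits resource-constrained sensor nodes.
    """
    def __init__(self, window=20, threshold=3.0):
        self.window = deque(maxlen=window)
        self.threshold = threshold

    def update(self, reading):
        """Process one reading; return True if it looks anomalous."""
        anomalous = False
        if len(self.window) == self.window.maxlen:  # warm-up finished
            mean = sum(self.window) / len(self.window)
            var = sum((x - mean) ** 2 for x in self.window) / len(self.window)
            std = math.sqrt(var)
            if std > 0 and abs(reading - mean) > self.threshold * std:
                anomalous = True
        self.window.append(reading)
        return anomalous
```

In the taxonomy's terms this would be a purely local, statistics-based technique; many of the surveyed approaches instead exploit neighborhood or network-level information.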

In this SoK paper, we contribute a novel taxonomy of anomaly detection approaches focused on wireless sensor networks and a meta-survey of related classification schemes. To the best of our knowledge, our taxonomy is a comprehensive super-set of all previously published taxonomies in this field. Based on the taxonomy, we present new insights into node-level anomaly detection approaches and the applicability of immune-inspired techniques, and we lay out related research challenges.


A New Approach for Pedestrian Density Estimation Using Moving Sensors and Computer Vision

An understanding of pedestrian dynamics is indispensable for numerous urban applications, including the design of transportation networks and planning for business development. Pedestrian counting often requires manual or technical means of counting individuals at each location of interest. However, such methods do not scale to the size of a city, and we propose a new approach to fill this gap. In this project, we used a large, dense dataset of images of New York City along with computer vision techniques to construct a spatio-temporal map of relative pedestrian density. Due to the limitations of state-of-the-art computer vision methods, such automatic detection of persons is inherently subject to errors. We model these errors as a probabilistic process, for which we provide theoretical analysis and thorough numerical simulations. We demonstrate that, within our assumptions, our methodology can supply a reasonable estimate of pedestrian densities, and we provide theoretical bounds for the resulting error.
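A toy version of such a probabilistic error model can be simulated (the recall/false-positive parameterization here is an assumption for illustration, not the paper's model):

```python
import math
import random

def poisson(rng, lam):
    # Knuth's method; fine for small lam.
    L, k, p = math.exp(-lam), 0, 1.0
    while True:
        k += 1
        p *= rng.random()
        if p <= L:
            return k - 1

def estimate_true_count(observed, recall, fp_rate):
    """Invert E[observed] = recall * true + fp_rate (a simple
    plug-in estimator under the assumed error model)."""
    return max(0.0, (observed - fp_rate) / recall)

def mean_estimate(true_count, recall, fp_rate, trials=5000, seed=7):
    """Monte Carlo check: a detector finds each real person with
    probability `recall` and adds Poisson(fp_rate) spurious
    detections per image; average the plug-in estimates."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        observed = sum(rng.random() < recall for _ in range(true_count)) \
                   + poisson(rng, fp_rate)
        total += estimate_true_count(observed, recall, fp_rate)
    return total / trials
```

Under this model the average estimate concentrates around the true count, which is the flavor of result the paper establishes rigorously with error bounds.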


VizSciFlow: A Visually Guided Scripting Framework for Supporting Complex Scientific Data Analysis

Scientific workflow management systems such as Galaxy, Taverna, and Workspace have been developed to automate scientific workflow management and are increasingly used to accelerate the specification, execution, visualization, and monitoring of data-intensive tasks. For example, the popular bioinformatics platform Galaxy is installed on over 168 servers around the world, and the social networking space myExperiment shares almost 4,000 Galaxy scientific workflows among its 10,665 members. Most of these systems offer graphical interfaces for composing workflows. However, while graphical languages are considered easier to use, graphical workflow models become harder to comprehend and maintain as they grow larger and more complex. Text-based languages are considered harder to use but have the potential to provide a clean and concise expression of workflow even for large and complex workflows. A recent study showed that some scientists prefer script- or text-based environments for performing complex scientific analysis with workflows. Unfortunately, such environments are unable to meet the needs of scientists who prefer graphical workflows. To address the needs of both types of scientists while retaining the underlying benefits of script-based workflow models, we propose a visually guided workflow modeling framework that combines interactive graphical user interface elements in an integrated development environment with the power of a domain-specific language to compose independently developed and loosely coupled services into workflows. Our domain-specific language provides scientists with a clean, concise, and abstract view of workflow to better support workflow modeling. As a proof of concept, we developed VizSciFlow, a generalized scientific workflow management system that can be customized for use in a variety of scientific domains. As a first use case, we configured and customized VizSciFlow for the bioinformatics domain.
We conducted three user studies to assess its usability, expressiveness, efficiency, and flexibility. Results are promising; in particular, our user studies show that users find VizSciFlow more desirable than either Python or Galaxy for solving complex scientific problems.
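To make the idea of a concise, fluent workflow script concrete, here is a hypothetical sketch in the spirit of such a DSL (this is not VizSciFlow's actual language; all names are illustrative):

```python
class Workflow:
    """Hypothetical sketch of a text-based workflow DSL embedded in
    Python: independently developed, loosely coupled steps are chained
    into a pipeline with a fluent interface."""
    def __init__(self, name):
        self.name, self.steps = name, []

    def step(self, func):
        self.steps.append(func)
        return self  # returning self keeps the script concise

    def run(self, data):
        for f in self.steps:
            data = f(data)  # each step consumes the previous output
        return data

# Example: a toy "bioinformatics" pipeline of loosely coupled steps.
wf = (Workflow("gc-content")
      .step(str.upper)
      .step(lambda seq: seq.replace("U", "T"))  # RNA -> DNA
      .step(lambda seq: (seq.count("G") + seq.count("C")) / len(seq)))

result = wf.run("augcgc")  # GC fraction of the input sequence
```

The design point such systems make is that this textual form stays readable as pipelines grow, while a GUI layer can render the same step list graphically.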


Understanding Reflection Needs for Personal Health Data in Diabetes

To empower users of wearable medical devices, it is important to enable methods that facilitate reflection on previous care to improve future outcomes. In this work, we conducted a two-phase user study involving patients, caregivers, and clinicians to understand gaps in current approaches that support reflection, as well as user needs for new solutions. Our results show that users desire specific summarization metrics, solutions that minimize cognitive effort, and solutions that enable data integration to support meaningful reflection on diabetes management. In addition, we developed and evaluated a visualization called PixelGrid that presents key metrics in a matrix-based plot. A majority of users (84%) found the matrix-based approach useful for identifying salient patterns related to certain times and days in blood glucose data. Through our evaluation, we identified that users desire data visualization solutions with complementary textual descriptors, concise and flexible presentation, contextually fitting content, and informative and actionable insights. We also identified directions for future research on tools that automate pattern discovery, detect abnormalities, and provide recommendations to improve care.
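The kind of aggregation behind a matrix-based plot like PixelGrid can be sketched as follows (the day-by-hour layout and function are assumptions for illustration, not the paper's implementation):

```python
from collections import defaultdict
from datetime import datetime

def day_hour_matrix(readings):
    """Illustrative aggregation for a matrix-based glucose view:
    bucket timestamped readings into a 7 (day-of-week) x 24 (hour)
    matrix of mean values, so recurring time-of-day patterns stand
    out as rows/columns. `readings` is an iterable of
    (ISO-format timestamp string, mg/dL value) pairs; empty cells
    are left as None."""
    sums = defaultdict(float)
    counts = defaultdict(int)
    for ts, value in readings:
        t = datetime.fromisoformat(ts)
        key = (t.weekday(), t.hour)  # Monday == 0
        sums[key] += value
        counts[key] += 1
    return [[sums[d, h] / counts[d, h] if counts[d, h] else None
             for h in range(24)] for d in range(7)]
```

A plotting layer would then color each cell by its mean (or another summary metric) and pair it with the textual descriptors users asked for.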