1 - 10 of 371 for bentley


Keeping science on keel when software moves

An approach to reproducibility problems related to porting software across machines and compilers.

2021-01-25
https://dl.acm.org/ft_gateway.cfm?id=3382037&dwn=1

Validity frame concept as effort-cutting technique within the verification and validation of complex cyber-physical systems

The increasing performance demands and certification needs of complex cyber-physical systems (CPS) raise the complexity of the engineering process, not only in the development phase but also in the verification and validation (V&V) phase. A proven technique for handling the complexity of CPSs is Model-Based Design (MBD). Nevertheless, the verification and validation of complex CPSs is still a labor-intensive process, and the usefulness of the models for front-loading V&V activities depends heavily on the knowledge captured in the models and on the correctness of the conducted virtual experiments. In this paper, we explore how the effort (and cost) of the V&V phase of the engineering process of complex CPSs can be reduced by enhancing the knowledge about the system components and explicitly capturing it within their corresponding validity frames. This effort reduction originates from exploiting the captured system knowledge to generate efficient V&V processes and from automating activities at different model life stages, such as the setup and execution of boundary-value or fault-injection tests. We discuss this in the context of a complex CPS: a safety-critical adaptive cruise control system.
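
As a rough illustration of the concept (the names, ranges, and API below are invented for this sketch, not the authors' tooling), a validity frame can be captured as metadata on a model component and used to mechanically generate boundary-value tests:

```python
from dataclasses import dataclass

@dataclass
class ValidityFrame:
    """Hypothetical validity frame: the input ranges over which a
    model component has been validated (names/values are invented)."""
    ranges: dict  # parameter name -> (low, high)

    def covers(self, inputs: dict) -> bool:
        """True if every input lies inside its validated range."""
        return all(lo <= inputs[name] <= hi
                   for name, (lo, hi) in self.ranges.items())

    def boundary_cases(self) -> list:
        """Derive boundary-value tests from the frame itself: one low
        and one high case per parameter, midpoints elsewhere."""
        mid = {n: (lo + hi) / 2 for n, (lo, hi) in self.ranges.items()}
        return [{**mid, name: v}
                for name, (lo, hi) in self.ranges.items()
                for v in (lo, hi)]

# Illustrative frame for a cruise-control plant model.
frame = ValidityFrame(ranges={"speed_mps": (0.0, 40.0),
                              "slope_deg": (-10.0, 10.0)})
assert frame.covers({"speed_mps": 25.0, "slope_deg": 2.0})
print(frame.boundary_cases())  # 4 auto-generated boundary-value tests
```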

2020-10-16
https://dl.acm.org/ft_gateway.cfm?id=3419226&dwn=1

Towards adaptive abstraction for continuous time models with dynamic structure

Humans often switch between multiple levels of abstraction when reasoning about salient properties of complex systems. These changes in perspective may be leveraged at runtime to improve both performance and explainability, while still producing identical answers to questions about the properties of interest. This technique, which switches between multiple abstractions based on changing conditions in the modelled system, is also known as adaptive abstraction.

The Modelica language represents systems as acausal continuous equations, which makes it well suited to modelling physical systems. Adaptive abstraction, however, requires dynamic-structure modelling. This raises many technical challenges in Modelica, since the language has poor support for modifying connections during simulation. Its equation-based nature means that all equations need to be well-formed at all times, which may not hold when switching between levels of abstraction. The initialization of models upon switching must also be carefully managed, as information is lost, or must be created, when switching abstractions [1].

One way to allow adaptive abstraction is to represent the system as a multi-mode hybrid Modelica model, a mode being an abstraction that can be switched to based on relevant criteria. Another way is to employ a co-simulation [2] approach, where modes are exported as "black boxes" and orchestrated by a central algorithm that implements adaptivity techniques to dynamically replace components when a switching condition occurs.

This talk will discuss the benefits of adaptive abstraction using Modelica, and the conceptual and technical challenges on the way to its implementation. As a stand-in for a complex cyber-physical system, we propose an electrical transmission line case study in which attenuation is studied across two abstractions of varying fidelity, selected according to the signal. We discuss our initial results, as well as our explorations towards employing Modelica models in a co-simulation context using the DEVS formalism [4]. A Modelica-only solution makes it possible to tackle complexity via decomposition, but does not improve performance, as all modes are represented within a single set of equations. The co-simulation approach might offer better performance [3], but complicates the workflow.
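
A minimal sketch of the co-simulation flavour of this idea, assuming two invented stand-in models rather than the talk's actual Modelica/DEVS setup: the orchestrator steps whichever abstraction is active and re-initializes state when the switching condition fires, which is exactly where the information loss/creation issue mentioned above shows up:

```python
class LumpedLine:
    """Low-fidelity abstraction: the whole line as one first-order lag."""
    def __init__(self):
        self.v = 0.0
    def step(self, u, dt):
        self.v += dt * (u - self.v)
        return self.v

class SegmentedLine:
    """Higher-fidelity abstraction: a chain of n lags (a crude spatial
    discretization of the same line)."""
    def __init__(self, n=10):
        self.v = [0.0] * n
    def step(self, u, dt):
        for i in range(len(self.v)):
            src = u if i == 0 else self.v[i - 1]
            self.v[i] += dt * (src - self.v[i])
        return self.v[-1]

def simulate(inputs, dt=0.01, threshold=0.5):
    """Orchestrator: step the active mode; switch on a signal condition."""
    mode, outputs = LumpedLine(), []
    for u in inputs:
        detailed = abs(u) > threshold  # invented switching condition
        if detailed and isinstance(mode, LumpedLine):
            new = SegmentedLine()
            new.v = [mode.v] * len(new.v)  # state must be created on refinement
            mode = new
        elif not detailed and isinstance(mode, SegmentedLine):
            new = LumpedLine()
            new.v = mode.v[-1]             # state is discarded on coarsening
            mode = new
        outputs.append(mode.step(u, dt))
    return outputs

print(simulate([0.1] * 5 + [1.0] * 5 + [0.1] * 5)[-1])
```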

2020-10-16
https://dl.acm.org/ft_gateway.cfm?id=3421443&dwn=1

Sochiatrist: Signals of Affect in Messaging Data

Messaging is a common mode of communication, with conversations written informally between individuals. Interpreting emotional affect from messaging data can lead to a powerful form of reflection or act as a support for clinical therapy. Existing analysis techniques for social media commonly use LIWC and VADER for automated sentiment estimation. We correlate LIWC, VADER, and ratings from human reviewers with affect scores from 25 participants. We explore differences in how and when each technique is successful. Results show that human review does better than VADER, the best automated technique, when humans are judging positive affect (r_s = 0.45 correlation when confident, r_s = 0.30 overall). Surprisingly, human reviewers only do slightly better than VADER when judging negative affect (r_s = 0.38 correlation when confident, r_s = 0.29 overall). Compared to prior literature, VADER correlates more closely with PANAS scores for private messaging than public social media. Our results indicate that while any technique that serves as a proxy for PANAS scores has moderate correlation at best, there are some areas to improve the automated techniques by better considering context and timing in conversations.
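
For context, a minimal sketch of the kind of automated estimate being compared, using the real vaderSentiment and SciPy libraries but fabricated example data (the paper's aggregation and PANAS scoring are not reproduced here):

```python
from vaderSentiment.vaderSentiment import SentimentIntensityAnalyzer
from scipy.stats import spearmanr

analyzer = SentimentIntensityAnalyzer()

# Fabricated example data: messages per participant and a made-up
# self-reported affect score for each participant.
messages = [
    ["had a great day!", "see you soon :)"],
    ["ugh, this is awful", "I'm so tired"],
    ["meeting moved to 3pm", "ok"],
]
self_reported = [4.5, 1.5, 3.0]

# Mean VADER compound score per participant as the automated estimate.
vader = [sum(analyzer.polarity_scores(m)["compound"] for m in msgs) / len(msgs)
         for msgs in messages]

# Rank correlation between the automated estimate and self-report.
r_s, p = spearmanr(vader, self_reported)
print(f"Spearman r_s = {r_s:.2f} (p = {p:.2f})")
```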

2020-10-14
https://dl.acm.org/ft_gateway.cfm?id=3415182&dwn=1

Efficient simulation of macroscopic molecular communication: the pogona simulator

Molecular communication in pipe networks is a novel technique for wireless data exchange. Simulating such networks accurately is difficult because of the complexity of fluid dynamics at centimeter scales, which existing molecular communication simulators do not model. The new simulator we present combines computational fluid dynamics simulation and particle movement predictions. It is optimized to be computationally efficient while offering a high degree of adaptability to complex fluid flows in larger pipe networks. We validate it by comparing the simulation with experimental results obtained in a real-world testbed.
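
A toy sketch of the core idea, with an invented uniform flow field standing in for interpolated CFD results (not pogona's actual data formats or physics): particles are advected by the precomputed flow plus Brownian diffusion, which is far cheaper than re-running CFD for every simulation:

```python
import numpy as np

rng = np.random.default_rng(0)

def velocity(pos):
    """Stand-in for interpolating a precomputed CFD field at `pos`:
    here, a uniform 0.1 m/s flow along the pipe axis (invented)."""
    return np.array([0.1, 0.0, 0.0])

def step_particles(positions, dt, diffusion=1e-4):
    """Advect each molecule by the local flow plus Brownian diffusion
    (standard deviation sqrt(2*D*dt) per axis)."""
    drift = np.array([velocity(p) for p in positions]) * dt
    noise = rng.normal(scale=np.sqrt(2 * diffusion * dt),
                       size=positions.shape)
    return positions + drift + noise

pos = np.zeros((1000, 3))          # 1000 molecules injected at the inlet
for _ in range(100):               # 1 s of simulated time
    pos = step_particles(pos, dt=0.01)
print("mean axial distance:", pos[:, 0].mean())  # ~0.1 m expected
```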

2020-09-23
https://dl.acm.org/ft_gateway.cfm?id=3411297&dwn=1

Constraint handling in genotype to phenotype mapping and genetic operators for project staffing

Project staffing in many organisations involves the assignment of people to multiple projects while satisfying multiple constraints. The use of a genetic algorithm with constraint handling performed during a genotype to phenotype mapping process provides a new approach. Experiments show promise for this technique.
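
A minimal sketch of constraint handling during genotype-to-phenotype mapping, with invented people, projects, and encoding (the paper's actual representation is not reproduced here): the decoder repairs violations on the fly, so every phenotype is feasible and the genetic operators never see infeasible assignments:

```python
import random

PEOPLE = ["ann", "bob", "cat", "dan"]      # invented staff pool
PROJECTS = {"p1": 2, "p2": 1}              # project -> staff required
MAX_LOAD = 1                               # max projects per person

def decode(genotype):
    """Map a genotype (one integer gene per project slot) to a staffing
    plan. If a gene points at a fully loaded person, scan forward to the
    next person with spare capacity, so the phenotype always satisfies
    the load constraint (assumes total capacity >= total demand)."""
    load = {p: 0 for p in PEOPLE}
    plan = {proj: [] for proj in PROJECTS}
    genes = iter(genotype)
    for proj, need in PROJECTS.items():
        for _ in range(need):
            idx = next(genes) % len(PEOPLE)
            for off in range(len(PEOPLE)):
                person = PEOPLE[(idx + off) % len(PEOPLE)]
                if load[person] < MAX_LOAD:
                    plan[proj].append(person)
                    load[person] += 1
                    break
    return plan

genotype = [random.randrange(100) for _ in range(sum(PROJECTS.values()))]
print(decode(genotype))  # always a constraint-satisfying assignment
```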

2020-07-08
https://dl.acm.org/ft_gateway.cfm?id=3398165&dwn=1

Spying on the Floating Point Behavior of Existing, Unmodified Scientific Applications

Scientific (and other) applications are critically dependent on calculations done using IEEE floating point arithmetic. A number of concerns have been raised about correctness in such applications given the numerous gotchas the IEEE standard presents for developers, as well as the complexity of its implementation at the hardware and compiler levels. The standard and its implementations do provide mechanisms for analyzing floating point arithmetic as it executes, making it possible to find and track problematic operations. However, this capability is seldom used in practice. In response, we have developed FPSpy, a tool that provides this capability when operating underneath existing, unmodified x64 application binaries on Linux, including those using thread- and process-level parallelism. FPSpy can observe application behavior without any cooperation from the application or developer, and can potentially be deployed as part of a job launch process. We present the design, implementation, and performance evaluation of FPSpy. FPSpy operates conservatively, getting out of the way if the application itself begins to use any of the OS or hardware features that FPSpy depends on. Its overhead can be throttled, allowing a tradeoff between which and how many unusual events are captured and the slowdown incurred by the application, with the low point providing virtually zero slowdown. We evaluated FPSpy by using it to methodically study seven widely used applications/frameworks from a range of domains (five of which are in the NSF XSEDE top-20), as well as the NAS and PARSEC benchmark suites. All told, these comprise about 7.5 million lines of source code in a wide range of languages and parallelism models (including OpenMP and MPI). FPSpy was able to produce trace information for all of them. The traces show that problematic floating point events occur in both the applications and the benchmarks. Analysis of the rounding behavior captured in our traces also suggests the feasibility of an approach to adding adaptive precision underneath existing, unmodified binaries.
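
FPSpy itself sits underneath unmodified binaries and relies on hardware/OS trap mechanisms; as a rough, language-level analogue of the IEEE event classes it observes (divide-by-zero, invalid, overflow, underflow), NumPy exposes the same flags and can turn these normally silent events into exceptions:

```python
import numpy as np

# One tiny example per IEEE event class (np.float64 scalars).
samples = {
    "divide":  lambda: np.float64(1.0) / np.float64(0.0),
    "invalid": lambda: np.float64(0.0) / np.float64(0.0),
    "over":    lambda: np.float64(1e308) * np.float64(10.0),
    "under":   lambda: np.float64(1e-308) * np.float64(1e-10),
}

with np.errstate(all="raise"):     # turn silent FP events into exceptions
    for name, op in samples.items():
        try:
            op()
            print(f"{name}: no event")
        except FloatingPointError as e:
            print(f"{name}: trapped -> {e}")
```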

2020-06-23
https://dl.acm.org/ft_gateway.cfm?id=3392673&dwn=1

APL since 1978

The Evolution of APL, the HOPL I paper by Falkoff and Iverson, recounted the fundamental design principles which shaped the implementation of the APL language in 1966, and the early uses and other influences which shaped its first decade of enhancements.

In the 40 years that have elapsed since HOPL I, several dozen APL implementations have come and gone. In the first decade or two, interpreters were typically born and buried along with the hardware or operating system that they were created for. More recently, the use of C as an implementation language provided APL interpreters with greater longevity and portability.

APL started its life on IBM mainframes which were time-shared by multiple users. As the demand for computing resources grew and costs dropped, APL first moved in-house to mainframes, then to mini- and micro-computers. Today, APL runs on PCs and tablets, Apples and Raspberry Pis, smartphones and watches.

The operating systems, and the software application platforms that APL runs on, have evolved beyond recognition. Tools like database systems have taken over many of the tasks that were initially implemented in APL or provided by the APL system, and new capabilities like parallel hardware have also changed the focus of design and implementation efforts through the years.

The first wave of significant language enhancements occurred shortly after HOPL I, resulting in so-called second-generation APL systems. The most important feature of the second generation is the addition of general arrays—in which any item of an array can be another array—and a number of new functions and operators aligned with, if not always motivated by, the new data structures.

The majority of implementations followed IBM’s path with APL2 “floating” arrays; others aligned themselves with SHARP APL and “grounded” arrays. While the APL2 style of APL interpreters came to dominate the mainstream of the APL community, two new cousins of APL descended from the SHARP APL family tree: J (created by Iverson and Hui) and k (created by Arthur Whitney).

We attempt to follow a reasonable number of threads through the last 40 years, to identify the most important factors that have shaped the evolution of APL. We will discuss the details of what we believe are the most significant language features that made it through the occasionally unnatural selection imposed by the loss of habitats, as hardware, software platforms, and business models disappeared.

The history of APL now spans six decades. It is still the case, as Falkoff and Iverson remarked at the end of the HOPL I paper, that:

Although this is not the place to discuss the future, it should be remarked that the evolution of APL is far from finished.

2020-06-12
https://dl.acm.org/ft_gateway.cfm?id=3386319&dwn=1

Rethinking Consumer Email: The Research Process for Yahoo Mail 6

This case study follows the research process of rethinking the design and functionality of a personal email client, Yahoo Mail. Over three years, we changed the focus of the product from composing emails towards automatically organizing specific categories of business to consumer email (such as deals, receipts, and travel) and creating experiences unique to each category. To achieve this, we employed iterative user research with over 1,500 in-person interviews in six countries and surveys to many thousands of people around the world. This research process culminated in the launch of Yahoo Mail 6.0 for iOS and Android devices in the fall of 2019.

2020-04-25
https://dl.acm.org/ft_gateway.cfm?id=3375224&dwn=1

Exploring the Quality, Efficiency, and Representative Nature of Responses Across Multiple Survey Panels

A common practice in HCI research is to conduct a survey to understand the generalizability of findings from smaller-scale qualitative research. These surveys are typically deployed to convenience samples on low-cost platforms such as Amazon's Mechanical Turk or Survey Monkey, or to more expensive market research panels offered by a variety of premium firms. Costs can vary widely, from hundreds of dollars to tens of thousands of dollars, depending on the platform used. We set out to understand the accuracy of ten different survey platforms/panels compared to ground-truth data, for a total of 6,007 respondents on 80 different aspects of demographic and behavioral questions. We found several panels that performed significantly better than others on certain topics, while different panels provided longer and more relevant open-ended responses. Based on this data, we highlight the benefits and pitfalls of using a variety of survey distribution options in terms of the quality, efficiency, and representative nature of the respondents and the types of responses that can be obtained.

2020-04-21
https://dl.acm.org/ft_gateway.cfm?id=3376671&dwn=1