
Communications of the ACM


Bridging pre-silicon verification and post-silicon validation

Post-silicon validation is a necessary step in a design's verification process. Pre-silicon techniques such as simulation and emulation are limited in scope and volume as compared to what can be achieved on the silicon itself. Some parts of the verification, such as full-system functional verification, cannot be practically covered with current pre-silicon technologies. This panel brings together experts from industry, academia, and EDA to review the differences and similarities between pre- and post-silicon, discuss how the fundamental aspects of verification are affected by these differences, and explore how the gaps between the two worlds can be bridged.


Scene tagging: image-based CAPTCHA using image composition and object relationships

In this paper, we propose a new form of image-based CAPTCHA we term "scene tagging". It tests the ability to recognize a relationship between multiple objects in an image that is automatically generated via composition of a background image with multiple irregularly shaped object images, resulting in a large space of possible images and questions without requiring a large object database. This composition process is accompanied by a carefully designed sequence of systematic image distortions that makes it difficult for automated attacks to locate and identify the objects present. Automated attacks must recognize all or most objects contained in the image in order to answer a question correctly; the proposed approach thus reduces attack success rates. An experimental study using several widely-used object recognition algorithms (PWD-based template matching, SIFT, SURF) shows that the system is resistant to these attacks with a 2% attack success rate, while a user study shows that the task required can be performed by average users with a 97% success rate.


Contacts 3.0: bringing together research and design teams to reinvent the phonebook

We present a narrative of the design of Contacts 3.0, a service and updated phonebook application on a mobile device that combines on-device communication with communication from online social networks to create a central hub for communication on the device. We discuss how research and design teams worked together to create design assets, technical architectures, and business cases around this concept.


Quality and perceived usefulness of process models

Modeling is now an essential ingredient in business process management and information systems development. The usefulness of models in these areas is therefore generally accepted. It is also undisputed that the quality of the models has a significant impact on their usefulness. The literature offers any number of quality metrics, but hardly any studies that investigate their relation to (perceived) usefulness, and none that consider their relative impact on usefulness. We examine some of the most frequently used quality dimensions and their relative impact on the perceived usefulness of models.


Chemotaxis-based sorting of self-organizing heterotypic agents

Cell sorting is a fundamental phenomenon in morphogenesis, which is the process that leads to shape formation in living organisms. The sorting of heterotypic cell populations is produced by a variety of inter-cellular actions, e.g. differential chemotactic response, adhesion and motility. Via a process called chemotaxis, living cells respond to chemicals released by other cells into the environment. Each cell can respond to the stimulus by moving in the direction of the gradient of the cumulative chemical field detected at its surface. Inspired by the biological phenomena of chemotaxis and cell sorting in heterotypic cell aggregates, we propose a chemotaxis-based algorithm for the sorting of self-organizing heterotypic agents. In our algorithm two types of agents are initially randomly placed in a toroidal environment. Agents emit a chemical signal and interact with nearby agents. Given the appropriate parameters, the two kinds of agents self-organize into a complex aggregate consisting of a group of one type of agents surrounded by agents of the second type. This paper describes the chemotaxis-based sorting algorithm, the behaviors of our self-organizing heterotypic agents, evaluation of the final aggregates and parametric studies of the results.
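The core loop described here (agents climbing the gradient of a cumulative chemical field on a torus) can be sketched in a few lines. This is not the authors' algorithm: the world size, the type-specific emission strengths, the 1/(1+r²) falloff of the chemical field, and the fixed-step movement rule are all illustrative assumptions.

```python
import math
import random

random.seed(0)

SIZE = 20.0                 # toroidal world is SIZE x SIZE
N_PER_TYPE = 15
STEP = 0.2                  # movement step along the gradient
EMIT = {0: 1.0, 1: 0.4}     # assumed type-specific emission strengths

def torus_delta(a, b):
    """Shortest signed displacement from a to b on the torus."""
    d = b - a
    if d > SIZE / 2:
        d -= SIZE
    elif d < -SIZE / 2:
        d += SIZE
    return d

def gradient(agent, agents):
    """Gradient (up to a constant factor) of the cumulative chemical
    field at one agent, assuming a 1/(1+r^2) falloff per emitter."""
    gx = gy = 0.0
    for other in agents:
        if other is agent:
            continue
        dx = torus_delta(agent["x"], other["x"])
        dy = torus_delta(agent["y"], other["y"])
        r2 = dx * dx + dy * dy
        w = EMIT[other["type"]] / (1.0 + r2) ** 2  # ∝ gradient magnitude / r
        gx += w * dx
        gy += w * dy
    return gx, gy

# two agent types, randomly placed in the toroidal environment
agents = [{"x": random.uniform(0, SIZE),
           "y": random.uniform(0, SIZE),
           "type": t}
          for t in (0, 1) for _ in range(N_PER_TYPE)]

# each step, every agent moves a fixed distance up the field gradient
for _ in range(200):
    for a in agents:
        gx, gy = gradient(a, agents)
        norm = math.hypot(gx, gy) or 1.0
        a["x"] = (a["x"] + STEP * gx / norm) % SIZE
        a["y"] = (a["y"] + STEP * gy / norm) % SIZE
```

With unequal emission strengths, the stronger emitters tend to cluster at the core of the aggregate, which is the qualitative outcome the abstract describes; the actual parametric conditions are studied in the paper itself.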


Curriculum Guidelines for Undergraduate Degree Programs in Information Systems

The IS 2010 report is the latest output from model curriculum work for Information Systems (IS) that began in the early 1970s. Prior to this current effort, the most recent version of the IS undergraduate model curriculum was IS 2002 (Gorgone et al., 2003), published in early 2003. IS 2002 was a relatively minor update of IS '97 (Davis et al., 1997). Both IS 2002 and IS '97 were joint efforts by ACM, AIS, and DPMA/AITP (Data Processing Management Association/Association of Information Technology Professionals). IS '97 was preceded by DPMA '90 (Longenecker and Feinstein, 1991) and the ACM Curriculum Recommendations of 1983 (ACM 1983) and 1973 (Couger 1973). IS 2002 has been widely accepted and has also served as the basis for accreditation of undergraduate programs in Information Systems. This report represents the combined effort of numerous individuals and reflects the interests of thousands of faculty and practitioners. It is grounded in the expected requirements of industry, represents the views of organizations employing the graduates, and is supported by other IS-related organizations.


Generating transparent, steerable recommendations from textual descriptions of items

We propose a recommendation technique that works by collecting text descriptions of items and using this textual aura to compute the similarity between items using techniques drawn from information retrieval. We show how this representation can be used to explain the similarities between items using terms from the textual aura, and further how it can be used to steer the recommender. We describe a system that demonstrates these techniques and detail some preliminary experiments aimed at evaluating the quality of the recommendations and the effectiveness of the explanations of item similarity.
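A "textual aura" lends itself to standard vector-space retrieval machinery. As a minimal sketch (the toy item descriptions, the raw term-count TF-IDF weighting, and the explanation heuristic are illustrative assumptions, not the paper's implementation), items can be compared by cosine similarity over term vectors, with the top shared terms serving as the explanation:

```python
import math
from collections import Counter

# toy "textual aura": text descriptions collected per item (assumed data)
aura = {
    "item_a": "warm acoustic folk guitar mellow vocals",
    "item_b": "acoustic guitar singer songwriter mellow",
    "item_c": "loud electronic dance synth beats",
}

docs = {k: Counter(v.split()) for k, v in aura.items()}
n = len(docs)
# inverse document frequency over the collected descriptions
idf = {t: math.log(n / sum(1 for d in docs.values() if t in d))
       for d in docs.values() for t in d}

def vec(d):
    """TF-IDF weight each term of one item's aura."""
    return {t: c * idf[t] for t, c in d.items()}

def cosine(u, v):
    dot = sum(u[t] * v.get(t, 0.0) for t in u)
    nu = math.sqrt(sum(w * w for w in u.values()))
    nv = math.sqrt(sum(w * w for w in v.values()))
    return dot / (nu * nv) if nu and nv else 0.0

def explain(a, b, top=3):
    """Terms from the aura that contribute most to the similarity."""
    u, v = vec(docs[a]), vec(docs[b])
    shared = {t: u[t] * v[t] for t in u if t in v}
    return sorted(shared, key=shared.get, reverse=True)[:top]

sim_ab = cosine(vec(docs["item_a"]), vec(docs["item_b"]))
sim_ac = cosine(vec(docs["item_a"]), vec(docs["item_c"]))
```

Because the explanation is drawn from the same term weights that produced the similarity score, it is transparent by construction, and boosting or suppressing individual terms gives one plausible way to "steer" such a recommender.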


The game of funding: modelling peer review for research grants

Procedures of peer review for research proposals often contain an implicit conflict of interest: academics may take the role of reviewer or submitter, and thus have the capability to affect one another's success or failure in obtaining some portion of limited funds. This work models a peer review procedure for funding from the perspective of evolutionary game theory. An analysis investigates the long-term submission and review strategies evolved by the modeled academics as they attempt to maximize their funding. Repercussions of the findings are discussed.
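Evolutionary game theory models of this kind are commonly simulated with replicator dynamics. The sketch below is a generic illustration, not the paper's model: the two strategy labels and the payoff matrix are purely hypothetical, chosen only to show how a population share evolves toward the strategy with the higher expected payoff.

```python
def replicator_step(x, payoff, dt=0.01):
    """One Euler step of two-strategy replicator dynamics.
    x: fraction of the population playing strategy 0.
    payoff[i][j]: payoff to a strategy-i player meeting strategy j."""
    f0 = payoff[0][0] * x + payoff[0][1] * (1 - x)   # fitness of strategy 0
    f1 = payoff[1][0] * x + payoff[1][1] * (1 - x)   # fitness of strategy 1
    avg = x * f0 + (1 - x) * f1                      # mean population fitness
    return x + dt * x * (f0 - avg)

# hypothetical dilemma-style payoffs where "strategic" reviewing dominates
payoff = [[3, 0],   # "fair" vs ("fair", "strategic")
          [5, 1]]   # "strategic" vs ("fair", "strategic")

x = 0.9             # start with 90% fair reviewers
for _ in range(5000):
    x = replicator_step(x, payoff)
```

Under these assumed payoffs the fair-review strategy is driven toward extinction, which is the kind of long-term equilibrium outcome such an analysis would examine.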


The challenge of irrationality: fractal protein recipes for PI

Computational development traditionally focuses on the use of an iterative, generative mapping process from genotype to phenotype in order to obtain complex phenotypes that exhibit regularity, repetition, and module reuse. This work examines whether an evolutionary computational developmental algorithm is capable of producing a phenotype with no known pattern at all: the irrational number PI. The paper summarizes the fractal protein algorithm, provides a new analysis of how fractals are exploited by the developmental process, then presents experiments, results, and analysis showing that evolution is capable of producing an approximate algorithm for PI that goes beyond the limits of precision of the data types used.


A BSP-based algorithm for dimensionally nonhomogeneous planar implicit curves with topological guarantees

Mathematical systems (e.g., Mathematica, Maple, Matlab, and DPGraph) easily plot planar algebraic curves implicitly defined by polynomial functions. However, these systems, and most algorithms found in the literature, cannot draw many implicit curves correctly; in particular, those with singularities (self-intersections, cusps, and isolated points). They do not detect sign-invariant components either, because they use numerical methods based on the Bolzano corollary; that is, they assume that the curve-describing function f flips sign somewhere in a line segment AB that crosses the curve, i.e., f(A)·f(B) < 0. To solve these problems, we have generalized the False Position (FP) method to determine two types of zeros: (i) crossing zeros and (ii) extremal zeros (local minima and maxima without function sign variation). We have called this method the Generalized False Position (GFP) method. It allows us to sample an implicit curve against the Binary Space Partitioning (BSP), say bisection lines, of a rectangular region of R². Interestingly, the GFP method can also be used to determine isolated points of the curve. The result is a general algorithm for sampling and rendering planar implicit curves with topological guarantees.
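The method being generalized here is easy to sketch. Below, false_position is the classic method, which needs the Bolzano condition f(A)·f(B) < 0, while extremal_zero illustrates one plausible reading of an "extremal zero" search: locate a stationary point via false position on a finite-difference derivative, then test whether f touches zero there without a sign change. The actual GFP method in the paper may differ in its details.

```python
def false_position(f, a, b, tol=1e-12, max_iter=200):
    """Classic false position: requires a sign change, f(a)*f(b) < 0.
    Each iterate is the x-intercept of the secant through (a, f(a))
    and (b, f(b)), keeping the subinterval that brackets the root."""
    fa, fb = f(a), f(b)
    assert fa * fb < 0, "no sign change on [a, b]"
    for _ in range(max_iter):
        c = (a * fb - b * fa) / (fb - fa)   # secant x-intercept
        fc = f(c)
        if abs(fc) < tol:
            return c
        if fa * fc < 0:
            b, fb = c, fc
        else:
            a, fa = c, fc
    return c

def extremal_zero(f, a, b, h=1e-6, tol=1e-9):
    """Sketch of an 'extremal zero' search: f has a local extremum in
    [a, b] where it may touch zero without changing sign, so apply
    false position to a central-difference derivative instead of f."""
    df = lambda x: (f(x + h) - f(x - h)) / (2 * h)
    if df(a) * df(b) >= 0:
        return None                     # no bracketed stationary point
    x = false_position(df, a, b)
    return x if abs(f(x)) < tol else None
```

For example, false_position finds the crossing zero of x² − 2 on [1, 2], while extremal_zero detects that (x − 1)² touches zero at x = 1 even though the function never changes sign, which is exactly the case the Bolzano-based samplers miss.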