Communications of the ACM

Shape grammars and grammatical evolution for evolutionary design

We describe the first steps in the adoption of Shape Grammars with Grammatical Evolution for application in Evolutionary Design. Combining the concepts of Shape Grammars and Genetic Programming opens up the exciting possibility of truly generative design-assistance tools. In this initial study we provide some background on the adoption of grammar-based Genetic Programming for Evolutionary Design, describe Shape Grammars, and give a brief overview of Grammatical Evolution, before detailing how Grammatical Evolution used Shape Grammars to successfully rediscover some benchmark target structures.
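The genotype-to-phenotype mapping at the core of Grammatical Evolution can be sketched compactly: each integer codon of the genome selects a production for the leftmost non-terminal via a modulo rule. The toy grammar below merely stands in for a shape grammar; the grammar, symbol names, and `max_wraps` limit are illustrative assumptions, not the authors' actual setup.

```python
# Sketch of Grammatical Evolution's mapping: codon i picks production
# (codon mod #productions) for the leftmost non-terminal. The tiny
# grammar below is a hypothetical stand-in for a shape grammar.

GRAMMAR = {
    "<shape>": [["<shape>", "<shape>"], ["line"], ["arc"], ["box"]],
}

def ge_map(codons, start="<shape>", max_wraps=2):
    """Expand the leftmost non-terminal using successive codons."""
    seq = [start]
    i, wraps = 0, 0
    while any(s in GRAMMAR for s in seq):
        if i == len(codons):            # wrap the codon string if exhausted
            i, wraps = 0, wraps + 1
            if wraps > max_wraps:
                return None             # mapping failed (invalid individual)
        nt = next(j for j, s in enumerate(seq) if s in GRAMMAR)
        prods = GRAMMAR[seq[nt]]
        seq[nt:nt + 1] = prods[codons[i] % len(prods)]
        i += 1
    return seq

print(ge_map([0, 1, 2]))  # -> ['line', 'arc']
```

Because the genome is just a list of integers, standard genetic operators (crossover, mutation) apply unchanged while the grammar guarantees syntactically valid shapes.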


A direct path to dependable software

Who could fault an approach that offers greater credibility at reduced cost?


Enhancing social sharing of videos: fragment, annotate, enrich, and share

Media consumption is an inherently social activity, serving to communicate ideas and emotions across both small- and large-scale communities. The migration of the media experience to personal computers retains social viewing, but typically only via a non-social, strictly personal interface. This paper presents an architecture and implementation for media content selection, content (re)organization, and content sharing within a user community that is heterogeneous in terms of both participants and devices. In addition, our application allows the user to enrich the content as a differentiated personalization activity targeted to his/her peer-group. We describe the goals, architecture and implementation of our system in this paper. In order to validate our results, we also present results from two user studies involving disjoint sets of test participants.


Flutter: directed random browsing of photo collections with a tangible interface

Large collections of photographs are commonplace, and many interfaces for viewing, sorting and organizing them have been proposed. This work describes the design and implementation of a "living photo frame" - designed not to navigate or browse collections but to create an enjoyable activity from a collection of images. Tangible interactions with a tablet-style PC are used to bind the user closely to the system. Every interaction is logged and used to gradually evolve the structure of photo collections.


Scalability of particle swarm algorithms

When dealing with complex optimisation problems, evolutionary computation techniques have proven to be very helpful. Amongst optimisation algorithms driven by evolutionary computation techniques, particle swarm algorithms have proven to be a very good alternative to genetic algorithms because of their faster convergence. However, they can still suffer from premature convergence to local optima. Premature convergence occurs when the particles of the swarm are too close to each other to enable further exploration of the space; in other words, the dispersion of the swarm throughout the search space has collapsed to a small region, with a consequent stagnation of the search process. Many strategies have been used to try to prevent convergence to a local optimum. However, little work has been done on problems of high dimensions, by which we mean dimensions of 100 and above.

This paper focuses, therefore, on the limitations of classical particle swarm algorithms when dealing with high-dimensional problems. We compare different versions of Particle Swarm Algorithms: GBest, LBest with a ring or random neighbourhood of size 3~\cite{Clerc06}, and GCPSO~\cite{VanDenBergh06}. Each of these algorithms was run twice: once with a linearly decreasing inertia weight, and once with a constriction factor. We also used two repulsion-based algorithms: Charged PSO~\cite{Blackwell02} and Predator Prey~\cite{Silva02}. Our main focus is problems of high dimensionality. In particular, we applied the different algorithms to the benchmark functions Ackley and Rosenbrock in dimensions 100, 250, and 500. Even though these represent problems of relatively small dimensionality in a real-world context, experiments on higher dimensions have not been necessary to show the limits of the algorithms studied. Each experiment was run 20 times for 10000 iterations. The swarm size was chosen from $\{30, 50, 100, 250, 500\}$ such that it was not greater than the problem size.

A one-way analysis of variance (ANOVA) was used to compare the performance of each algorithm. We found that the LBest algorithms perform significantly better when used with the constriction factor. GBest and GCPSO perform better with linearly decreasing inertia for small swarm sizes, but better with the constriction factor for large swarm sizes; the improvement of GCPSO over GBest is not statistically significant in our experiments. The LBest algorithms with the constriction factor seem to be the best algorithms for handling problems of high dimensionality, and the LBest algorithm with fixed neighbourhood seems less sensitive to swarm size than the LBest algorithm with random neighbourhood. In particular, on the Rosenbrock function of dimension 500, increasing the swarm size does not improve the performance of LBest with constriction factor and fixed neighbourhood. The algorithms based on repulsion between particles, i.e. Charged Swarm and Predator Prey, do not perform very well. Indeed, even though the Predator Prey algorithm gives quite good results, it becomes trapped in a local optimum, as its fitness stagnates at a constant value for the last 50\% of iterations. This may come from too low a level of repulsion; tuning the repulsion parameters seems to be very important for high-dimensional problems. Experiments show that almost all the algorithms managed to solve the problems in dimension 100, but none of them managed to solve the problems in dimensions 250 and 500. The LBest algorithm with random neighbourhood and constriction factor performed the best. Further work will be done on modelling the swarm size required to solve these problems, and other particle swarm algorithms will also be included.
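The constriction-factor update rule that the comparison above centres on can be sketched in a few lines. This is a minimal GBest swarm with Clerc's standard constriction settings (chi = 0.7298, c1 = c2 = 2.05); the sphere function, bounds, and run lengths are illustrative assumptions standing in for the Ackley/Rosenbrock setups of the paper.

```python
import random

# Minimal GBest particle swarm with a constriction factor (a sketch,
# not the paper's exact experimental code). Velocity update:
#   v <- chi * (v + c1*r1*(pbest - x) + c2*r2*(gbest - x))
def sphere(x):
    return sum(xi * xi for xi in x)

def pso(dim=10, swarm=30, iters=500, seed=1):
    rng = random.Random(seed)
    chi, c1, c2 = 0.7298, 2.05, 2.05
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    P = [x[:] for x in X]                        # personal bests
    pf = [sphere(x) for x in X]
    g = min(range(swarm), key=lambda i: pf[i])   # global best index
    G, gf = P[g][:], pf[g]
    for _ in range(iters):
        for i in range(swarm):
            for d in range(dim):
                V[i][d] = chi * (V[i][d]
                                 + c1 * rng.random() * (P[i][d] - X[i][d])
                                 + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] += V[i][d]
            f = sphere(X[i])
            if f < pf[i]:
                P[i], pf[i] = X[i][:], f
                if f < gf:
                    G, gf = X[i][:], f
    return gf

print(pso())  # best fitness found; shrinks toward 0 on this easy problem
```

An LBest variant differs only in replacing the single global best `G` with each particle's best neighbour, which slows information flow through the swarm and, as the study reports, helps in high dimensions.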


Cache-oblivious streaming B-trees

A streaming B-tree is a dictionary that efficiently implements insertions and range queries. We present two cache-oblivious streaming B-trees: the shuttle tree and the cache-oblivious lookahead array (COLA).

For block-transfer size B and on N elements, the shuttle tree implements searches in optimal O(log_{B+1} N) transfers, range queries of L successive elements in optimal O(log_{B+1} N + L/B) transfers, and insertions in O((log_{B+1} N)/B^{Θ(1/(log log B)^2)} + (log^2 N)/B) transfers, which is an asymptotic speedup over traditional B-trees if B ≥ (log N)^{1 + c log log log^2 N} for any constant c > 1.

A COLA implements searches in O(log N) transfers, range queries in O(log N + L/B) transfers, and insertions in amortized O((log N)/B) transfers, matching the bounds for a (cache-aware) buffered repository tree. A partially deamortized COLA matches these bounds but reduces the worst-case insertion cost to O(log N) if memory size M = Ω(log N). We also present a cache-aware version of the COLA, the lookahead array, which achieves the same bounds as Brodal and Fagerberg's (cache-aware) B^ε-tree.

We compare our COLA implementation to a traditional B-tree. Our COLA implementation runs 790 times faster for random insertions, 3.1 times slower for insertions of sorted data, and 3.5 times slower for searches.


Retroactive data structures

We introduce a new data structuring paradigm in which operations can be performed on a data structure not only in the present, but also in the past. In this new paradigm, called retroactive data structures, the historical sequence of operations performed on the data structure is not fixed. The data structure allows arbitrary insertion and deletion of operations at arbitrary times, subject only to consistency requirements. We initiate the study of retroactive data structures by formally defining the model and its variants. We prove that, unlike persistence, efficient retroactivity is not always achievable. Thus, we present efficient retroactive data structures for queues, doubly ended queues, priority queues, union-find, and decomposable search structures.
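The model can be made concrete with a deliberately naive retroactive queue: the operation history itself is editable at arbitrary past times, and queries replay it. The class and method names below are our own illustration; the efficient structures in the paper avoid this O(n) replay entirely.

```python
import bisect

# Naive replay-based retroactive queue: a sketch of the *model*, not of
# the paper's efficient constructions.
class RetroQueue:
    def __init__(self):
        self.ops = []                       # time-sorted (time, op, arg)

    def insert_op(self, t, op, arg=None):   # retroactively add an operation
        bisect.insort(self.ops, (t, op, arg))

    def delete_op(self, t):                 # retroactively remove operations at t
        self.ops = [o for o in self.ops if o[0] != t]

    def front(self, t=float("inf")):        # query the queue's front at time t
        q = []
        for time, op, arg in self.ops:
            if time > t:
                break
            if op == "enqueue":
                q.append(arg)
            elif op == "dequeue" and q:
                q.pop(0)
        return q[0] if q else None
```

For example, retroactively deleting a past dequeue changes every later answer, which is exactly the consistency-propagation problem the efficient structures must solve without replaying history.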


Deterministic sampling and range counting in geometric data streams

We present memory-efficient deterministic algorithms for constructing ε-nets and ε-approximations of streams of geometric data. Unlike probabilistic approaches, these deterministic samples provide guaranteed bounds on their approximation factors. We show how our deterministic samples can be used to answer approximate online iceberg geometric queries on data streams. We use these techniques to approximate several robust statistics of geometric data streams, including Tukey depth, simplicial depth, regression depth, the Theil-Sen estimator, and the least median of squares. Our algorithms use only a polylogarithmic amount of memory, provided the desired approximation factors are at least inverse-polylogarithmic. We also include a lower bound for non-iceberg geometric queries.
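The deterministic merge-and-reduce idea behind ε-approximations can be shown in one dimension: merging two same-weight sorted samples and keeping every other element halves the sample while perturbing any rank query by at most one unit per reduction. The code below is our own 1-D toy (block size, function names, and weighting scheme are illustrative); the paper's streaming algorithms generalize this to geometric range spaces.

```python
# 1-D sketch of deterministic merge-and-reduce for epsilon-approximations.
def merge_reduce(a, b):
    """Merge two sorted samples and keep alternate elements (halving)."""
    merged = sorted(a + b)
    return merged[::2]

def summarize(stream, block=8):
    """Keep one sample per weight level; levels[i] has weight 2**i."""
    levels, buf = [], []
    for x in stream:
        buf.append(x)
        if len(buf) == block:
            carry, i = sorted(buf), 0
            buf = []
            while i < len(levels) and levels[i]:
                carry = merge_reduce(carry, levels[i])  # promote upward
                levels[i] = []
                i += 1
            if i == len(levels):
                levels.append([])
            levels[i] = carry
    return levels, buf

def rank_estimate(levels, buf, x):
    """Approximate the number of stream elements < x."""
    est = sum(v < x for v in buf)
    for i, lvl in enumerate(levels):
        est += (2 ** i) * sum(v < x for v in lvl)
    return est
```

Only O(block · log(N/block)) elements are retained, yet every halfspace (here: threshold) count is approximated with bounded error, which is the deterministic guarantee the abstract contrasts with random sampling.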


Technically speaking: fostering the communication skills of computer science and mathematics students

The Department of Mathematics and Computer Science at Denison University has introduced a significant new oral communication component early in both majors. The sophomore computer science and mathematics majors meet together each week for a "lab" taught jointly by a computer scientist and a mathematician. There were three goals in this endeavor: (1) to prepare students for the workforce and graduate school by improving their oral communication skills, (2) to nurture future researchers in both fields by exposing them to research early in their undergraduate training, and (3) to increase computer science students' exposure to mathematics. In the following, we establish the need for such a course, describe our approach, explain how it satisfies our three goals, and discuss additional outcomes.