Communications of the ACM

From silhouettes to 3D points to mesh: towards free viewpoint video

This paper presents a system for 3D reconstruction from video sequences acquired in multi-camera environments. In particular, the 3D surfaces of foreground objects in the scene are extracted and represented by polygon meshes. Three stages are concatenated to process the multi-view data. First, a foreground segmentation method extracts silhouettes of the objects of interest. Then, a 3D reconstruction strategy obtains a cloud of oriented points that lie on the surfaces of the objects of interest within a spatially bounded volume. Finally, a fast meshing algorithm provides a topologically correct interpolation of the surface points that can be used both for visualization and for further mesh processing. For similar processing times, the quality of the results obtained by our system compares favorably against a baseline system built from state-of-the-art techniques; conversely, for similar result quality, our system requires less computation.

Practice-based CSCW Research: ECSCW bridging across the Atlantic

Practice-based CSCW research is an orientation towards empirically grounded research that embraces particular methodological approaches with the aim of creating new theory about work, collaboration, and cooperative technologies. While practice-based CSCW research has strong roots in both North America and Europe, ECSCW and Europe remain central to this tradition. In this panel we will discuss the practice-based research approach, asking questions such as: What is the nature of practice-based Computer Supported Cooperative Work research? How does it differ from other CSCW research approaches? What is the relationship between these traditions in terms of conceptual approaches, methodologies and open questions for future research? The panel will openly discuss the diversity and commonalities between different CSCW traditions, and argue that practice-based CSCW research is not something that happens only at ECSCW. ECSCW is not a geographical boundary for a certain type of research, but rather a place for a specific research tradition and approach with links to many academic places around the world.

TEXNH trees: a new course in data structures

The TEXNH method is an approach to undergraduate computer science education that is based on cognitive constructivism, in the sense of Piaget, and which invokes several course design directives, including re-combining art and science, problem-based learning, problem selection from the visual problem domain, and cognitive apprenticeship. The paper describes a new TEXNH course in data structures. It includes a full comparative assessment of the realized improvement in students' problem-solving capability and, for the first time, cognitive authenticity in problem selection, in that the course problem is a variation on a very recent research result.

Software variability: the design space of configuration languages

Software variability is a major driver in software development. In order to satisfy the increased variability requirements in today's software, several technical and non-technical variability mechanisms have been proposed. In this paper, we contribute a language-specific perspective on how to manage variability. We explain our view on the concept of configuration languages, which are languages that offer structural and behavioral program configurations through specifically tailored expressions. We present seven design dimensions of configuration languages that determine how the variability model is defined and how a program's artifacts are represented and modified. To show the applicability of the design dimensions for explaining existing configuration languages, we analyze the Linux Kernel configuration language.
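As a hypothetical miniature of the structural-configuration idea (the feature names below are invented, and the Linux Kconfig language itself is far richer), a variability model can be sketched as features with declared dependencies, where a configuration is valid only if every selected feature's dependencies are also selected:

```python
# Invented Kconfig-style variability model: each feature lists the
# features it depends on (loosely mirroring Kconfig's "depends on").
model = {
    "USB": [],
    "USB_STORAGE": ["USB"],  # storage support requires the USB feature
    "EXT4": [],
}

def is_valid(config, model):
    """A configuration (set of selected features) is valid only if every
    dependency of every selected feature is itself selected."""
    return all(dep in config for feature in config for dep in model[feature])

print(is_valid({"USB", "USB_STORAGE"}, model))  # True: dependency satisfied
print(is_valid({"USB_STORAGE"}, model))         # False: USB not selected
```

A real configuration language layers further dimensions on top of this check, e.g. default values, choice groups, and behavioral (build-time) effects of each selection.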

The game of funding: modelling peer review for research grants

Procedures of peer review for research proposals often contain an implicit conflict of interest, where academics may take the role of reviewer or submitter, and thus have the capability to affect each other's success or failure in obtaining some portion of limited funds. This work models a peer review procedure for funding from the perspective of evolutionary game theory. An analysis is performed to investigate the long-term submission and review strategies evolved by the modeled academics as they attempt to maximize their funding. Repercussions of the findings are discussed.
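The evolutionary-game framing can be illustrated with textbook replicator dynamics, in which strategies whose payoff exceeds the population average grow in frequency. The payoff matrix below is invented for illustration and is not the paper's model:

```python
def replicator_step(freqs, payoff, dt=0.1):
    """One Euler step of the replicator equation: x_i grows in proportion
    to how much its expected payoff exceeds the population average."""
    n = len(freqs)
    fitness = [sum(payoff[i][j] * freqs[j] for j in range(n)) for i in range(n)]
    avg = sum(x * f for x, f in zip(freqs, fitness))
    return [x + dt * x * (f - avg) for x, f in zip(freqs, fitness)]

# Two hypothetical review strategies; strategy 0 strictly dominates,
# so its share of the population should approach 1 over time.
payoff = [[2.0, 2.0],
          [1.0, 1.0]]
freqs = [0.5, 0.5]
for _ in range(50):
    freqs = replicator_step(freqs, payoff)
```

In the paper's setting the payoffs would instead encode funding outcomes under each combination of submission and review strategies.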

Interactive paper devices: end-user design & fabrication

We describe a family of interactive devices made from paper and simple electronics: Paper Robots, Paper Speakers and Paper Lamps. We developed construction techniques for these paper devices and the Paper Factory software with which novice users can create and build their own designs. The process and materials support DIY design and could be used with low-cost production and shipment from an external service.

Improving particle swarm optimization with differentially perturbed velocity

This paper introduces a novel scheme of improving the performance of particle swarm optimization (PSO) by a vector differential operator borrowed from differential evolution (DE). Performance comparisons of the proposed method are provided against (a) the original DE, (b) the canonical PSO, and (c) three recent, high-performance PSO-variants. The new algorithm is shown to be statistically significantly better on a seven-function test suite for the following performance measures: solution quality, time to find the solution, frequency of finding the solution, and scalability.
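The core idea of blending PSO with a DE-style operator can be sketched as follows. This is a simplified, hypothetical variant (parameter values, the social-only velocity form, and the test function are all assumptions, not the paper's exact algorithm): the usual cognitive term is replaced by a scaled difference of two randomly chosen particles, as in DE mutation.

```python
import random

def sphere(x):
    """Classic benchmark: minimum 0 at the origin."""
    return sum(xi * xi for xi in x)

def depso(f, dim=2, n_particles=20, iters=200, w=0.7, c2=1.5, beta=0.8, seed=1):
    rng = random.Random(seed)
    X = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    pbest = [x[:] for x in X]
    gbest = min(pbest, key=f)[:]
    for _ in range(iters):
        for i in range(n_particles):
            # DE-style differential term from two other random particles.
            a, b = rng.sample([j for j in range(n_particles) if j != i], 2)
            for d in range(dim):
                diff = beta * (X[a][d] - X[b][d])
                social = c2 * rng.random() * (gbest[d] - X[i][d])
                V[i][d] = w * V[i][d] + diff + social
                X[i][d] += V[i][d]
            if f(X[i]) < f(pbest[i]):
                pbest[i] = X[i][:]
                if f(pbest[i]) < f(gbest):
                    gbest = pbest[i][:]
    return gbest

best = depso(sphere)
```

The differential term injects swarm-derived perturbations whose magnitude shrinks naturally as the swarm converges, which is the intuition behind borrowing it from DE.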

Pangenome-Wide Association Studies with Frequented Regions

Connecting genetic variation (genotype) to trait variation (phenotype) is a critical but often difficult step in genetic research. A genome-wide association study (GWAS) is a common approach to connect underlying genetic variation to complex phenotypic traits, allowing for phenotypic prediction. GWAS is important in many disciplines, including identifying genetic risk factors for common, complex diseases, identifying genes underlying important traits and predicting phenotypes from genotypes. GWAS is limited, though, in that the types of variations typically studied are single nucleotide polymorphisms (SNPs) identified relative to a single reference genome. These limitations introduce bias and preclude GWAS across related species. The advent of next-generation sequencing has brought exponential growth in DNA sequence data. This has led to the more comprehensive pangenomics approach, in which the entire sequence content and variation of a population are succinctly represented independent of a reference. In prior work, we developed a method for identifying genomic regions that characterize complex variations within pangenomic data and showed that these regions provide a more general way to study genetic variation than existing approaches. This work describes our initial results in developing methods for a new branch of genomic analysis, pangenome-wide association studies (PWAS), which generalizes GWAS to pangenome datasets both within and across species. We make use of recently developed algorithms for fast compressed De Bruijn graph construction and for identifying frequented regions in these graphs, which can be used as machine-learning features to identify pangenomic regions, overlaid with gene annotations, that relate to complex phenotypic traits. Initial results on a pangenome composed of 100 yeast genomes indicate that frequented-region features provide better machine-learning regression models than SNPs for predicting phenotypic traits.
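For intuition, the underlying graph structure can be sketched with an uncompressed toy De Bruijn graph, where nodes are (k-1)-mers and each k-mer in the input contributes an edge. This is only the textbook construction; the paper relies on fast *compressed* De Bruijn graph algorithms and frequented-region detection on top of it:

```python
from collections import defaultdict

def de_bruijn(seqs, k=4):
    """Build a toy De Bruijn graph: nodes are (k-1)-mers and every k-mer
    in the input sequences adds an edge prefix -> suffix (with multiplicity)."""
    graph = defaultdict(list)
    for s in seqs:
        for i in range(len(s) - k + 1):
            kmer = s[i:i + k]
            graph[kmer[:-1]].append(kmer[1:])
    return dict(graph)

# "ACGTACG" contains the 3-mers ACG, CGT, GTA, TAC, ACG,
# so the edge AC -> CG appears twice.
graph = de_bruijn(["ACGTACG"], k=3)
```

In a pangenomic setting, many genomes are fed in together, and regions of the graph frequently traversed by subsets of genomes (frequented regions) become candidate features for association with phenotypes.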

Imbalanced big data classification: a distributed implementation of SMOTE

In machine learning, the quality of data is the most critical component for building good models. Predictive analytics is an AI stream used to predict future events from historical data and is applied in diverse fields such as detecting online fraud, oil slicks, intrusion attacks and credit defaults, and the prognosis of diseased cells. Unfortunately, in most of these cases, traditional learning models fail to generate the required results due to the imbalanced nature of the data. Here, imbalance denotes a small number of instances belonging to the class under prediction, such as fraud instances among all online transactions. Prediction in imbalanced classification is further limited by factors such as small disjuncts, which are accentuated when the data is partitioned for learning at scale. Synthetic generation of minority-class data (SMOTE) is a pioneering approach by Chawla et al. [1] to offset these limitations and generate more balanced datasets. Although a standard implementation of SMOTE exists in Python, it is unavailable for distributed computing environments handling large datasets. Bringing SMOTE to a distributed environment under Spark is the key motivation for our research. In this paper we present our algorithm, observations and results for synthetic generation of minority-class data under Spark using Locality Sensitive Hashing (LSH). We successfully demonstrate a distributed version of SMOTE on Spark that generates quality artificial samples while preserving the spatial distribution.
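The classic single-machine SMOTE idea that the paper distributes can be sketched as follows: each synthetic point is a random interpolation between a minority sample and one of its k nearest minority neighbours. This is the textbook formulation only; the paper's contribution is replacing the exact neighbour search with LSH so it scales under Spark.

```python
import math
import random

def smote(minority, k=3, n_new=10, seed=0):
    """Classic SMOTE sketch: interpolate between a minority sample and one
    of its k nearest minority-class neighbours to create a synthetic sample."""
    rng = random.Random(seed)
    synthetic = []
    for _ in range(n_new):
        x = rng.choice(minority)
        # Exact k-nearest-neighbour search; a distributed version would
        # replace this with an approximate LSH-based lookup.
        neighbours = sorted((p for p in minority if p != x),
                            key=lambda p: math.dist(x, p))[:k]
        nb = rng.choice(neighbours)
        gap = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(a + gap * (b - a) for a, b in zip(x, nb)))
    return synthetic

# Toy minority class: four points at the corners of the unit square, so
# every synthetic sample lies on a segment inside that square.
minority = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
samples = smote(minority)
```

Because each synthetic point is a convex combination of two real minority points, the oversampled data stays inside the minority class's local geometry, which is what "preserving the spatial distribution" refers to.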

Tweeting live shows: a content analysis of live-tweets from three entertainment programs

In this paper, we explored whether (and if so, how) live-tweets vary across different entertainment television programs in terms of the tweets' content. Using the 2013 Oscars, the Season 3 finale of Downton Abbey, and the 2014 Super Bowl as case studies, we collected over 200,000 live tweets sent during these three live entertainment programs and performed a content analysis of 4,400 of them. Results indicated that live-tweets, in general, reflect the features of the entertainment programs in many ways, suggesting that practitioners should adopt more tailored social media strategies to better engage audiences. Theoretical implications and limitations are discussed in detail.