Studying domestic usage contexts has become an important field of research. Recent technological advances have made media available on different devices, in different contexts, and from different places. The adoption and appropriation of new devices and technologies has led to more flexible usage behavior. However, even though we know about this behavior, many questions remain unanswered regarding how new technology changes users' media usage and how these changes affect the social structure of a household. We address this topic in our work and provide insight into how recent media consumption patterns have changed due to the appropriation of new technologies in the home. Based on a qualitative long-term Living Lab study, we present various patterns of change in media usage routines and their influence on households as social systems. The results provide a detailed understanding of how new technology is embedded within domestic life, considering potentials and conflicts that also inform further design-oriented work.
2012-07-04
People often complete tasks and to-dos not only for themselves but also for others in their household. In this work, we examine how household members share and accomplish errands both individually and together. We conducted a three-week diary study with eight households to understand the types of errands that family members and roommates share with each other. We explore their motivations for offering and requesting help with their errands and the variety of methods for doing so. Our findings reveal the challenges participants sometimes face in completing their errands, and how household members request and receive help. We learned that the cooperative performance of errands typically depends on household members' location, availability, and capability. Using these findings, we discuss design opportunities for cooperative errand-sharing systems that can assist households.
2012-02-11
In this article, we analyse the concept of design thinking with its process, the team structure, the work environment, the specific culture, and certain brainstorming rules and techniques. The goal of this work is to understand how the creative mechanisms of design thinking work and how they might be improved. For this purpose, we refer to the idea of creativity as an evolutionary process, which is determined by generation (i.e., recombination and mutation), selection, and retention of ideas. We evaluate the design thinking process in terms of its capabilities to activate these mechanisms, and we propose possible improvements. This paper contributes to a better understanding of creative design processes in general and the design thinking process in particular, and will serve as a foundation for further research about creative mechanisms.
2011-10-19
Interpreting individuals described by a set of criteria can be a difficult task when the number of criteria is large. Such individuals can be ranked, for instance in terms of their average rank across criteria as well as by each distinct criterion. We therefore investigate criteria selection methods which aim to preserve the average rank of individuals but with fewer criteria. Our experiments show that these methods perform effectively, identifying and removing redundancies within the data, and that they are best incorporated into a multi-objective algorithm.
2011-07-12
We present a number of new results on one of the most extensively studied topics in computational geometry, orthogonal range searching. All our results are in the standard word RAM model. We present two data structures for 2-d orthogonal range emptiness. The first achieves O(n lg lg n) space and O(lg lg n) query time, assuming that the n given points are in rank space. This improves the previous results by Alstrup, Brodal, and Rauhe (FOCS'00), with O(n lg^ε n) space and O(lg lg n) query time, or with O(n lg lg n) space and O(lg^2 lg n) query time. Our second data structure uses O(n) space and answers queries in O(lg^ε n) time. The best previous O(n)-space data structure, due to Nekrich (WADS'07), answers queries in O(lg n / lg lg n) time. We give a data structure for 3-d orthogonal range reporting with O(n lg^{1+ε} n) space and O(lg lg n + k) query time for points in rank space, for any constant ε > 0. This improves the previous results by Afshani (ESA'08), Karpinski and Nekrich (COCOON'09), and Chan (SODA'11), with O(n lg^3 n) space and O(lg lg n + k) query time, or with O(n lg^{1+ε} n) space and O(lg^2 lg n + k) query time. Consequently, we obtain improved upper bounds for orthogonal range reporting in all constant dimensions above 3.
Our approach also leads to a new data structure for 2-d orthogonal range minimum queries with O(n lg^ε n) space and O(lg lg n) query time for points in rank space. We give a randomized algorithm for 4-d offline dominance range reporting/emptiness with running time O(n lg n) plus the output size. This resolves two open problems (both appeared in Preparata and Shamos' seminal book): (a) given a set of n axis-aligned rectangles in the plane, we can report all k enclosure pairs (i.e., pairs (r1, r2) where rectangle r1 completely encloses rectangle r2) in O(n lg n + k) expected time; (b) given a set of n points in 4-d, we can find all maximal points (points not dominated by any other point) in O(n lg n) expected time. The most recent previous development on (a) was reported back in SoCG'95 by Gupta, Janardan, Smid, and Dasgupta, whose main result was an O((n lg n + k) lg lg n) algorithm. The best previous result on (b) was an O(n lg n lg lg n) algorithm due to Gabow, Bentley, and Tarjan---from STOC'84! As a consequence, we also obtain the current-record time bound for the maxima problem in all constant dimensions above 4.
2011-06-13
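To make the dominance terminology concrete, here is a minimal sketch of the classic 2-d sweep for maximal points. It only illustrates the definition used in the abstract (a point is maximal if no other point is at least as large in every coordinate and larger in one); the paper's contribution is the much harder 4-d case, and the function name and larger-is-better convention here are our own.

```python
def maxima_2d(points):
    """Return the maximal points of a 2-d point set.

    Sweep from largest x to smallest, keeping a point only if its y
    exceeds the best y seen so far (i.e., no already-seen point with
    larger-or-equal x also has larger-or-equal y). Runs in O(n lg n).
    """
    best_y = float("-inf")
    maxima = []
    for x, y in sorted(points, reverse=True):
        if y > best_y:
            maxima.append((x, y))
            best_y = y
    return maxima
```

For example, `maxima_2d([(1, 5), (2, 3), (3, 1), (2, 4)])` returns the staircase `[(3, 1), (2, 4), (1, 5)]`; the point (2, 3) is dropped because (2, 4) dominates it.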
We present three results related to dynamic convex hulls: (1) A fully dynamic data structure for maintaining a set of n points in the plane so that we can find the edges of the convex hull intersecting a query line, with expected query and amortized update time O(log^{1+ε} n) for an arbitrarily small constant ε > 0. This improves the previous bound of O(log^{3/2} n). (2) A fully dynamic data structure for maintaining a set of n points in the plane to support halfplane range reporting queries in O(log n + k) time with O(polylog n) expected amortized update time. A similar result holds for 3-dimensional orthogonal range reporting. For 3-dimensional halfspace range reporting, the query time increases to O(log^2 n / log log n + k). (3) A semi-online dynamic data structure for maintaining a set of n line segments in the plane, so that we can decide whether a query line segment lies completely above the lower envelope, with query time O(log n) and amortized update time O(n^ε). As a corollary, we can solve the following problem in O(n^{1+ε}) time: given a triangulated terrain in 3-d of size n, identify all faces that are partially visible from a fixed viewpoint.
2011-06-13
Through this paper we advance an initial set of 12 observations that will form the basis for developing design principles for public safety networks (PSNs), and more broadly for inter-organizational systems within the public sector. A public safety network is an interagency collaboration focused on the development and use of information and communication technologies to support the information sharing and functional interoperability needs of public safety organizations engaged in law enforcement, criminal justice, and emergency response. Our goal in presenting this initial set of PSN design observations is to: (1) encourage improved PSN systems design through the development of design principles and (2) increase the attention paid, when designing and developing these forms of information systems, to the co-design of the structures of governance and operation that PSNs entail.
2011-06-12
We consider an approximate version of a fundamental geometric search problem, polytope membership queries. Given a convex polytope P in R^d, presented as the intersection of halfspaces, the objective is to preprocess P so that, given a query point q, it is possible to determine efficiently whether q lies inside P subject to an error bound ε. Previous solutions to this problem were based on straightforward applications of classic polytope approximation techniques by Dudley (1974) and Bentley et al. (1982). The former yields minimum storage, and the latter yields constant query time. A space-time tradeoff can be obtained by interpolating between the two. We present the first significant improvements to this tradeoff. For example, using the same storage as Dudley, we reduce the query time from O(1/ε^{(d-1)/2}) to O(1/ε^{(d-1)/4}). Our approach is based on a very simple algorithm. Both lower bounds and upper bounds on the performance of the algorithm are presented.
To establish the relevance of our results, we introduce a reduction from approximate nearest neighbor searching to approximate polytope membership queries. We show that our tradeoff provides significant improvements to the best known space-time tradeoffs for approximate nearest neighbor searching. Furthermore, this is achieved with constructions that are much simpler than existing methods.
2011-06-06
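The query being sped up above can be pinned down with a brute-force check: P is given as halfspaces a·x ≤ b, and a query asks whether q satisfies them all, up to some slack. This O(m)-time sketch only fixes the semantics (the representation of halfspaces and the additive use of the slack parameter are our simplifying assumptions; the paper's ε is defined relative to the polytope's geometry), whereas the data structures in the paper answer such queries far faster after preprocessing.

```python
def polytope_membership(q, halfspaces, eps=0.0):
    """Brute-force membership test for P = {x : a.x <= b for each (a, b)}.

    halfspaces: list of (a, b) pairs, a a coefficient tuple, b a scalar.
    Returns True if q satisfies every halfspace with additive slack eps.
    """
    return all(
        sum(ai * qi for ai, qi in zip(a, q)) <= b + eps
        for a, b in halfspaces
    )
```

For the unit square, `polytope_membership((0.5, 0.5), hs)` is True, `polytope_membership((1.2, 0.5), hs)` is False, and a slightly outside point such as (1.05, 0.5) is accepted once the slack covers the violation.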
A Reverse Skyline query returns all objects whose skyline contains the query object. In this paper, we consider Reverse Skyline query processing where the distances between attribute values are not necessarily metric. We outline real-world cases that motivate Reverse Skyline processing in such scenarios. We consider various optimizations to develop efficient algorithms for Reverse Skyline processing. First, we consider block-based processing of objects to reduce IO costs. We then explore pre-processing that rearranges objects on disk to reduce both computational and IO costs. We then present our main contribution, a method that uses group-level reasoning and early pruning to micro-optimize processing by reducing attribute-level comparisons. An extensive empirical evaluation on real-world datasets and synthetic data of varying characteristics shows that our optimization techniques are indeed very effective at dramatically speeding up Reverse Skyline processing, in terms of both computational and IO costs.
2011-03-21
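As a working illustration of the definition "all objects whose skyline contains the query object", here is a naive sketch under the common dynamic-skyline formulation: p is in the reverse skyline of q if no other point p' is at least as close to p as q is on every attribute (with one strict inequality). Note this sketch uses absolute differences and is therefore only the metric special case; the paper's setting explicitly allows non-metric distances, and the function names are our own.

```python
def reverse_skyline(points, q):
    """Naive O(n^2 d) reverse skyline of query q over a list of points.

    p is kept iff q lies on p's dynamic skyline, i.e. no other point's
    coordinate-wise distances to p dominate q's distances to p.
    """
    def dominates_wrt(center, a, b):
        # Does a dominate b, measured by per-attribute distance to center?
        da = [abs(x - c) for x, c in zip(a, center)]
        db = [abs(x - c) for x, c in zip(b, center)]
        return (all(u <= v for u, v in zip(da, db))
                and any(u < v for u, v in zip(da, db)))

    return [p for p in points
            if not any(dominates_wrt(p, p2, q) for p2 in points if p2 != p)]
```

For instance, with points (0,0), (2,2), (4,4) and query (3,3), the point (0,0) is excluded because (2,2) is closer to it than the query on both attributes.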
Given a set of points with uncertain locations, we consider the problem of computing the probability of each point lying on the skyline, that is, the probability that it is not dominated by any other input point. If each point's uncertainty is described as a probability distribution over a discrete set of locations, we improve the best known exact solution. We also suggest why we believe our solution might be optimal. Next, we describe simple, near-linear time approximation algorithms for computing the probability of each point lying on the skyline. In addition, some of our methods can be adapted to construct data structures that can efficiently determine the probability of a query point lying on the skyline.
2011-03-21
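The quantity being computed above can be spelled out with a brute-force sketch: for each possible location of a point, multiply the probabilities that every other point fails to dominate it. This assumes independent points and a smaller-is-better dominance convention (both our assumptions for illustration); it is the slow exact baseline that the paper's algorithms improve upon.

```python
def dominates(p, q):
    """p dominates q: p is <= q in every coordinate, < in at least one."""
    return (all(a <= b for a, b in zip(p, q))
            and any(a < b for a, b in zip(p, q)))

def skyline_probabilities(uncertain_points):
    """Exact skyline probabilities by brute force.

    uncertain_points: one discrete distribution per point, each a list
    of (location, probability) pairs. Returns, per point, the probability
    that its realized location is dominated by no other point, assuming
    independence. Time is quadratic in the total number of locations.
    """
    probs = []
    for i, dist_i in enumerate(uncertain_points):
        total = 0.0
        for loc_i, p_i in dist_i:
            survive = 1.0
            for j, dist_j in enumerate(uncertain_points):
                if j == i:
                    continue
                # Probability that point j's realization dominates loc_i.
                dom = sum(p for loc_j, p in dist_j if dominates(loc_j, loc_i))
                survive *= 1.0 - dom
            total += p_i * survive
        probs.append(total)
    return probs
```

With two certain points (1,1) and (2,2), the first is on the skyline with probability 1 and the second with probability 0; making the first point sit at (1,1) or (3,3) with probability 0.5 each drops both skyline probabilities to 0.5.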