A new divisive algorithm for multidimensional data clustering is suggested. Based on the minimization of the sum-of-squared-errors, the proposed method produces much smaller quantization errors than the median-cut and mean-split algorithms. It is also observed that the solutions obtained from our algorithm are close to the local optimal ones derived by the k-means iterative procedure.
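The abstract gives no pseudocode; as a hedged illustration of the general divisive, SSE-driven idea (not necessarily the paper's exact splitting rule), one can repeatedly bisect the cluster with the largest sum-of-squared-errors using a small 2-means loop:

```python
def centroid(points):
    # component-wise mean of a non-empty list of points
    n = len(points)
    return tuple(sum(p[i] for p in points) / n for i in range(len(points[0])))

def sse(points):
    # sum of squared Euclidean distances to the cluster centroid
    c = centroid(points)
    return sum(sum((x - y) ** 2 for x, y in zip(p, c)) for p in points)

def two_means(points, iters=10):
    # bisect one cluster with a tiny k-means (k = 2); crude endpoint seeds
    a, b = points[0], points[-1]
    left, right = points, []
    for _ in range(iters):
        left, right = [], []
        for p in points:
            da = sum((x - y) ** 2 for x, y in zip(p, a))
            db = sum((x - y) ** 2 for x, y in zip(p, b))
            (left if da <= db else right).append(p)
        if not left or not right:
            break
        a, b = centroid(left), centroid(right)
    return left, right

def divisive_cluster(points, k):
    # greedily split the cluster with the largest SSE until k clusters exist
    clusters = [list(points)]
    while len(clusters) < k:
        i = max(range(len(clusters)), key=lambda j: sse(clusters[j]))
        left, right = two_means(clusters[i])
        if not left or not right:
            break  # the worst cluster could not be split further
        clusters[i:i + 1] = [left, right]
    return clusters
```

Unlike the k-means iterative procedure mentioned above, this sketch never revisits a split, which is what makes divisive schemes fast but only approximately optimal.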
Since the first courses were offered in programming, plagiarism has been a perplexing problem. Detection techniques, administrative procedures, and penalties vary greatly. Instructors face an increasingly legalistic system when prosecuting plagiarism cases. Panel members will discuss the prevention, detection, and prosecution aspects of program plagiarism and will present legal and administrative views of the problem.
Let Γ be a collection of n (possibly intersecting) “red” Jordan arcs of some simple shape in the plane and let Γ' be a similar collection of m “blue” arcs. We present several efficient algorithms for detecting an intersection between an arc of Γ and an arc of Γ'. (i) If the arcs of Γ' form the boundary of a simply connected region, then we can detect a “red-blue” intersection in time O(λs(m) log² m + (λs(m) + n) log(n + m)), where λs(m) is the (almost-linear) maximum length of an (m, s) Davenport-Schinzel sequence and s is a fixed parameter depending on the shape of the given arcs. Another case where we can detect an intersection in close to linear time is when the union of the arcs of Γ and the union of the arcs of Γ' are both connected. (ii) In the most general case, we can detect an intersection in time O((m√λs(n) + n√λs(m)) log^1.5(m + n)). For several special but useful cases, in which many faces in the arrangements of Γ and Γ' can be computed efficiently, we obtain randomized algorithms that are better than the general algorithm. In particular, when all arcs in Γ and Γ' are line segments, we obtain a randomized O((m + n)^(4/3+ε)) intersection detection algorithm. We apply the algorithm in (i) to obtain an O(λs(n) log² n) algorithm (for some small s > 0) for planning the motion of an n-sided simple polygon around a right-angle corner in a corridor, improving a previous O(n²) algorithm of [MY86], and to derive an efficient technique for fast collision detection for a simple polygon moving (translating and rotating) in the plane along a prescribed path.
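The sub-quadratic algorithms themselves are beyond a short sketch, but the primitive they accelerate, testing whether a red arc crosses a blue arc, reduces to orientation tests in the straight-segment case. A hedged, brute-force O(nm) illustration (helper names are ours, not from the paper):

```python
def orient(p, q, r):
    # sign of the cross product (q - p) x (r - p):
    # > 0 left turn, < 0 right turn, 0 collinear
    v = (q[0] - p[0]) * (r[1] - p[1]) - (q[1] - p[1]) * (r[0] - p[0])
    return (v > 0) - (v < 0)

def on_segment(p, q, r):
    # assumes p, q, r collinear: is r inside the bounding box of pq?
    return (min(p[0], q[0]) <= r[0] <= max(p[0], q[0]) and
            min(p[1], q[1]) <= r[1] <= max(p[1], q[1]))

def segments_intersect(s, t):
    # classic orientation-based segment intersection test
    p1, p2 = s
    p3, p4 = t
    d1, d2 = orient(p3, p4, p1), orient(p3, p4, p2)
    d3, d4 = orient(p1, p2, p3), orient(p1, p2, p4)
    if d1 != d2 and d3 != d4:
        return True
    if d1 == 0 and on_segment(p3, p4, p1): return True
    if d2 == 0 and on_segment(p3, p4, p2): return True
    if d3 == 0 and on_segment(p1, p2, p3): return True
    if d4 == 0 and on_segment(p1, p2, p4): return True
    return False

def red_blue_intersect(red, blue):
    # brute-force red-blue detection; the algorithms above beat this bound
    return any(segments_intersect(r, b) for r in red for b in blue)
```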
A data compression scheme that exploits locality of reference, such as occurs when words are used frequently over short intervals and then fall into long periods of disuse, is described. The scheme is based on a simple heuristic for self-organizing sequential search and on variable-length encodings of integers. We prove that it never performs much worse than Huffman coding and can perform substantially better; experiments on real files show that its performance is usually quite close to that of Huffman coding. Our scheme has many implementation advantages: it is simple, allows fast encoding and decoding, and requires only one pass over the data to be compressed (static Huffman coding takes two passes).
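The two ingredients named above, a self-organizing sequential-search list and variable-length integer codes, can be sketched as follows. This is a generic move-to-front illustration; using Elias gamma codes as the variable-length encoding is our assumption, not necessarily the paper's exact code:

```python
def mtf_encode(text, alphabet):
    # emit each symbol's rank in a move-to-front list
    table = list(alphabet)
    out = []
    for ch in text:
        i = table.index(ch)
        out.append(i)
        table.insert(0, table.pop(i))  # self-organize: move symbol to front
    return out

def mtf_decode(codes, alphabet):
    # inverse transform: replay the same list updates
    table = list(alphabet)
    out = []
    for i in codes:
        ch = table[i]
        out.append(ch)
        table.insert(0, table.pop(i))
    return ''.join(out)

def elias_gamma(n):
    # variable-length code for n >= 1: smaller integers get shorter codes
    b = bin(n)[2:]
    return '0' * (len(b) - 1) + b
```

A frequently used symbol stays near the front of the list, so its rank, and hence its code, stays short; that is exactly how locality of reference is exploited in one pass.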
In this paper an O(N log N) algorithm for routing through a rectangle is presented. Consider an n-by-m rectangular grid and a set of N two-terminal nets. A net is a pair of points on the boundary of the rectangle. A layout is a set of edge-disjoint paths, one for each net. Our algorithm constructs a layout, if there is one, in O(N log N) time; this contrasts favorably with the area of the layout, which may be as large as N². The layout constructed can be wired using four layers of interconnect with only O(N) contact cuts. A partial extension to multiterminal nets is also discussed.
The post office problem is the following: preprocess a set S of points in d-dimensional space so that, given an arbitrary query point p, the closest points in S to p can be found quickly.
We consider the case of this problem in which the Euclidean norm is the measure of distance. The previous best algorithm for this problem for d > 2 requires O(n^(2^(d+1))) preprocessing time to build a data structure allowing O(log n) query time. We will show that a data structure can be built in expected O(n^((d-1)(1+k))) time, for any fixed k > 0, so that closest-point queries can be answered in O(log n) worst-case time. (The constant factors depend on d and k.) The algorithm employs random sampling, so the expected time bound holds for any set of points. A variant of this algorithm (for the variant problem where only one closest point of S to the query point is desired) requires O(n^(⌈d/2⌉+ε)) preprocessing time for O(log n) worst-case query time, for any fixed ε > 0. These results approach the Ω(n^⌈d/2⌉) preprocessing time required for any algorithm constructing the Voronoi diagram of the input points. Implementation of these algorithms requires not much more than a random sampling procedure and a procedure for constructing the Voronoi diagram of that random sample.
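As a baseline for what all this preprocessing buys, the trivial alternative is a linear scan over S at query time, O(n) per query with no preprocessing at all (a minimal sketch, not the randomized data structure described above):

```python
def closest_point(S, p):
    # brute-force post office query: O(n) scan, squared Euclidean distance
    return min(S, key=lambda q: sum((qi - pi) ** 2 for qi, pi in zip(q, p)))
```

The data structures above trade heavy (but one-time) preprocessing for replacing this linear scan with an O(log n) search per query.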
We state and prove a theorem about the number of points of local nonconvexity in the union of m Minkowski sums of planar convex sets, and then apply it to planning a collision-free translational motion of a convex polygon B amidst several (convex) polygonal obstacles A1, …, Am, following a basic approach suggested by Lozano-Perez and Wesley. Assuming that the number of corners of B is fixed, the algorithm developed here runs in time O(n log² n), where n is the total number of corners of the Ai's.
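In the Lozano-Perez/Wesley approach, each obstacle Ai is grown by the reflected robot -B, so that planning for the polygon B reduces to planning for a single point among the Minkowski sums Ai ⊕ (-B). A hedged sketch of this construction for convex polygons, using the hull-of-pairwise-sums shortcut rather than the linear-time edge merge:

```python
def cross(o, a, b):
    # z-component of (a - o) x (b - o); > 0 means a left turn
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def convex_hull(pts):
    # Andrew's monotone chain; returns hull vertices in CCW order
    pts = sorted(set(pts))
    if len(pts) <= 2:
        return pts
    def chain(seq):
        h = []
        for p in seq:
            while len(h) >= 2 and cross(h[-2], h[-1], p) <= 0:
                h.pop()
            h.append(p)
        return h
    lower, upper = chain(pts), chain(reversed(pts))
    return lower[:-1] + upper[:-1]

def reflect(B):
    # -B: the robot polygon reflected through the origin (its reference point)
    return [(-x, -y) for x, y in B]

def grown_obstacle(A, B):
    # Minkowski sum A (+) (-B) for convex A and B: the set of reference-point
    # positions at which the robot B overlaps the obstacle A
    return convex_hull([(a[0] + b[0], a[1] + b[1]) for a in A for b in reflect(B)])
```

The theorem above bounds the complexity of the union of these grown obstacles, which is what makes the point-robot reduction efficient.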