I am an Associate Professor of Computer Science at North Carolina State University. My research is in geometric algorithms, and I am most interested in the intersection of geometric algorithms and topological data analysis.
Watch the video of my talk on Sensors and Samples: A Homological Approach at the Institute for Advanced Study Workshop on Topology. Here is a talk I gave on Mesh Generation and Topological Data Analysis.
Manuscripts
Manuscript 2022
From the work of Bauer and Lesnick, it is known that there is no functor from the category of pointwise finite-dimensional persistence modules to the category of barcodes and overlap matchings. In this work, we introduce sub-barcodes and show that there is a functor from the category of factorizations of persistence module homomorphisms to a poset of barcodes ordered by the sub-barcode relation. Sub-barcodes and factorizations provide a looser alternative to bottleneck matchings and interleavings that can give strong guarantees in a number of settings that arise naturally in topological data analysis. The main use of sub-barcodes is to make strong claims about an unknown barcode in the absence of an interleaving. For example, given only upper and lower bounds $g\ge f\ge\ell$ of an unknown real-valued function $f$, a sub-barcode associated with $f$ can be constructed from $\ell$ and $g$ alone. We propose a theory of sub-barcodes and observe that the subobjects in the category of functors from intervals to matchings naturally correspond to sub-barcodes.
@misc{chubet22subbarcodes, Author = {Oliver A. Chubet and Kirk P. Gardner and Donald R. Sheehy}, Title = {A Theory of Sub-Barcodes}, Doi = {10.48550/arXiv.2206.10504}, Url = {https://arxiv.org/abs/2206.10504}, Publisher = {arXiv}, Year = {2022}}
Published
SOCG: Symposium on Computational Geometry (Multimedia Session), 65:1-65:6 2023
We give several ways to derive and express classic summation problems in terms of polycubes. We visualize them with 3D printed models. The video is here: http://go.ncsu.edu/sum_of_squares.
@inproceedings{sheehy23sum, Author = {Donald R. Sheehy}, Booktitle = {Proceedings of the 39th International Symposium on Computational Geometry (SoCG 2023)}, Title = {The Sum of Squares in Polycubes}, Pages = {65:1--65:6}, Year = {2023}}
SOCG: Symposium on Computational Geometry (Multimedia Session), 64:1-64:5 2023
We illustrate the computation of a greedy permutation using finite Voronoi diagrams. We describe the neighbor graph, a sparse graph data structure that facilitates efficient point location when inserting a new Voronoi cell. The data structure does not require the metric to be Euclidean. With this method, the greedy permutation of low-dimensional data is computed in $O(n \log \Delta)$ time.
@inproceedings{chubet23greedy, Author = {Oliver A. Chubet and Paul Macnichol and Parth Parikh and Donald R. Sheehy and Siddharth S. Sheth}, Booktitle = {Proceedings of the 39th International Symposium on Computational Geometry (SoCG 2023)}, Title = {Greedy Permutations and Finite Voronoi Diagrams}, Pages = {64:1--64:5}, Year = {2023}}
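To make the object concrete, here is a brute-force Python sketch of the greedy permutation itself (the function and variable names are our own). It runs in quadratic time; the neighbor-graph method in the paper is what brings the cost down to $O(n \log \Delta)$.

import numpy as np

def greedy_permutation(points):
    # Farthest-point traversal: O(n^2) time, for illustration only.
    n = len(points)
    order = [0]  # seed with an arbitrary first point
    d_to_prefix = np.linalg.norm(points - points[0], axis=1)
    for _ in range(n - 1):
        nxt = int(np.argmax(d_to_prefix))  # farthest point from the prefix
        order.append(nxt)
        # a new center can only shrink distances to the prefix
        d_to_prefix = np.minimum(d_to_prefix,
                                 np.linalg.norm(points - points[nxt], axis=1))
    return order

pts = np.random.rand(100, 2)
print(greedy_permutation(pts)[:10])  # every prefix is a 2-approximate k-center set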
SOSA: SIAM Symposium on Simplicity in Algorithms, 2023
Over the last 50 years, there have been many data structures proposed to perform proximity search problems on metric data. Perhaps the simplest of these is the ball tree, which was independently discovered multiple times over the years. However, there is a lack of strong theoretical guarantees for standard ball trees, often leading to more complicated structures when guarantees are required. In this paper, we present the greedy tree, a simple ball tree construction for which we can prove strong theoretical guarantees for proximity search queries, matching the state of the art under reasonable assumptions. To our knowledge, this is the first ball tree construction providing such guarantees. Like a standard ball tree, it is a binary tree with the points stored in the leaves. Only a point, a radius, and an integer are stored for each node. The asymptotic running times of search algorithms in the greedy tree match those of more complicated structures regularly used in practice.
@inproceedings{chubet23proximity, title = {Proximity Search in the Greedy Tree}, author = {Oliver Chubet and Parth Parikh and Donald R. Sheehy and Siddharth Sheth}, booktitle = {Proceedings of the SIAM Symposium on Simplicity in Algorithms (SOSA 2023)}, year = {2023}}
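As a usage illustration, the sketch below shows the generic ball-tree query that the greedy tree supports: each node stores a point and a radius, and the triangle inequality prunes subtrees that cannot contain a closer point. The construction here is naive and the names are hypothetical; the paper's greedy tree is built from a greedy permutation (and stores one extra integer per node) to obtain its guarantees.

import numpy as np
from dataclasses import dataclass
from typing import Optional

@dataclass
class Ball:
    point: np.ndarray            # the point stored at this node
    radius: float                # all points in the subtree lie within radius
    left: Optional['Ball'] = None
    right: Optional['Ball'] = None

def build(points):
    # Naive construction, for illustration only.
    if len(points) == 0:
        return None
    center, rest = points[0], points[1:]
    radius = max((np.linalg.norm(p - center) for p in rest), default=0.0)
    mid = len(rest) // 2
    return Ball(center, radius, build(rest[:mid]), build(rest[mid:]))

def nearest(node, q, best=None):
    # Branch and bound: nothing below node is closer than d(q, node) - radius.
    if node is None:
        return best
    d = np.linalg.norm(q - node.point)
    if best is None or d < best[0]:
        best = (d, node.point)
    if d - node.radius < best[0]:    # the subtree may hold a closer point
        best = nearest(node.left, q, best)
        best = nearest(node.right, q, best)
    return best

pts = list(np.random.rand(200, 2))
print(nearest(build(pts), np.array([0.5, 0.5])))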
SOCG: The International Symposium on Computational Geometry, 60:1-60:15 2022
The space of persistence diagrams under bottleneck distance is known to have infinite doubling dimension. Because many metric search algorithms and data structures have bounds that depend on the dimension of the search space, the high-dimensionality makes it difficult to analyze and compare asymptotic running times of metric search algorithms on this space. We introduce the notion of nearly-doubling metrics, those that are Gromov-Hausdorff close to metric spaces of bounded doubling dimension and prove that bounded $k$-point persistence diagrams are nearly-doubling. This allows us to prove that in some ways, persistence diagrams can be expected to behave like a doubling metric space. We prove our results in great generality, studying a large class of quotient metrics (of which the persistence plane is just one example). We also prove bounds on the dimension of the $k$-point bottleneck space over such metrics. The notion of being nearly-doubling in this Gromov-Hausdorff sense is likely of more general interest. Some algorithms that have a dependence on the dimension can be analyzed in terms of the dimension of the nearby metric rather than that of the metric itself. We give a specific example of this phenomenon by analyzing an algorithm to compute metric nets, a useful operation on persistence diagrams.
@inproceedings{sheehy22nearly, title = {Nearly-Doubling Spaces of Persistence Diagrams}, author = {Donald R. Sheehy and Siddharth Sheth}, booktitle = {38th International Symposium on Computational Geometry (SoCG 2022)}, pages = {60:1--60:15}, doi = {10.4230/LIPIcs.SoCG.2022.60}, year = {2022}}
SOCG: The International Symposium on Computational Geometry, 58:1-58:16 2021
We show how a filtration of Delaunay complexes can be used to approximate the persistence diagram of the distance to a point set in $\mathbb{R}^d$. Whereas the full Delaunay complex can be used to compute this persistence diagram exactly, it may have size $O(n^{\lceil d/2 \rceil})$. In contrast, our construction uses only $O(n)$ simplices. The central idea is to connect Delaunay complexes on progressively denser subsamples by considering the flips in an incremental construction as simplices in $d+1$ dimensions. This approach leads to a very simple and straightforward proof of correctness in geometric terms, because the final filtration is dual to a $(d+1)$-dimensional Voronoi construction similar to the standard Delaunay filtration. We also show how this complex can be efficiently constructed.
@inproceedings{sheehy21sparse, title = {A Sparse Delaunay Filtration}, author = {Donald R. Sheehy}, booktitle = {37th International Symposium on Computational Geometry (SoCG 2021)}, pages = {58:1--58:16}, year = {2021}}
SOCG: The International Symposium on Computational Geometry, 57:1-57:15 2021
Given a persistence diagram with $n$ points, we give an algorithm that produces a sequence of $n$ persistence diagrams converging in bottleneck distance to the input diagram, the $i$th of which has $i$ distinct (weighted) points and is a 2-approximation to the closest persistence diagram with that many distinct points. For each approximation, we precompute the optimal matching between the $i$th and the $(i+1)$st. Perhaps surprisingly, the entire sequence of diagrams as well as the sequence of matchings can be represented in $O(n)$ space. The main approach is to use a variation of the greedy permutation of the persistence diagram to give good Hausdorff approximations and assign weights to these subsets. We give a new algorithm to efficiently compute this permutation, despite the high implicit dimension of points in a persistence diagram due to the effect of the diagonal. The sketches are also structured to permit fast (linear time) approximations to the Hausdorff distance between diagrams -- a lower bound on the bottleneck distance. For approximating the bottleneck distance, sketches can also be used to compute a linear-size neighborhood graph directly, obviating the need for geometric data structures used in state-of-the-art methods for bottleneck computation.
@inproceedings{sheehy21sketching, title={Sketching Persistence Diagrams}, author={Donald R. Sheehy and Siddharth Sheth}, booktitle = {37th International Symposium on Computational Geometry (SoCG 2021)}, pages = {57:1--57:15}, year={2021}}
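The greedy core of the sketching procedure is easy to state in code. The simplification below (our own, with hypothetical names) selects diagram points farthest-first in the $\ell_\infty$ metric, treats the diagonal as an extra site, and weights each selected point by the number of points it absorbs; the paper's algorithm additionally avoids the quadratic cost and stores all $n$ sketches in $O(n)$ space.

import numpy as np

def greedy_sketch(diagram, size):
    # diagram: (n, 2) array of (birth, death) pairs with death >= birth
    n = len(diagram)
    # distance to the nearest site; initially the diagonal, whose
    # l-infinity distance to (b, d) is (d - b) / 2
    d_to_sketch = (diagram[:, 1] - diagram[:, 0]) / 2.0
    assigned = np.full(n, -1)            # -1 means assigned to the diagonal
    chosen = []
    for _ in range(size):
        nxt = int(np.argmax(d_to_sketch))
        chosen.append(nxt)
        d_new = np.max(np.abs(diagram - diagram[nxt]), axis=1)  # l-inf dists
        closer = d_new < d_to_sketch
        d_to_sketch[closer] = d_new[closer]
        assigned[closer] = nxt
    weights = {i: int((assigned == i).sum()) for i in chosen}
    return chosen, weights, float(d_to_sketch.max())   # last: Hausdorff error

dgm = np.sort(np.random.rand(50, 2), axis=1)           # birth <= death
print(greedy_sketch(dgm, 5))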
J. Chem. Theory Comput., 16(7): 4588-4598 2020
Many surfactant-based formulations are utilized in industry as they produce desirable viscoelastic properties at low concentrations. These properties are due to the presence of worm-like micelles (WLMs), and as a result, understanding the processes that lead to WLM formation is of significant interest. Various experimental techniques have been applied with some success to this problem but can encounter issues probing key microscopic characteristics or the specific regimes of interest. The complementary use of computer simulations could provide an alternate route to accessing their structural and dynamic behavior. However, few computational methods exist for measuring key characteristics of WLMs formed in particle simulations. Further, their mathematical formulations are challenged by WLMs with sharp curvature profiles or density fluctuations along the backbone. Here, we present a new topological algorithm for identifying and characterizing WLMs in particle simulations, which has desirable mathematical properties that address shortcomings in previous techniques. We apply the algorithm to the case of sodium dodecyl sulfate micelles to demonstrate how it can be used to construct a comprehensive topological characterization of the observed structures.
@article{conchuir20efficient, author = {Breanndan O Conchuir and Kirk Gardner and Kirk E. Jordan and David J. Bray and Richard L. Anderson and Michael A. Johnston and William C. Swope and Alex Harrison and Donald R. Sheehy and Thomas J. Peters}, journal = {Journal of Chemical Theory and Computation}, number = {7}, pages = {4588--4598}, title = {An Efficient Algorithm for Topological Characterisation of Worm-Like and Branched Micelle Structures from Simulations}, volume = {16}, year = {2020}}
CCCG: Proceedings of the 32nd Canadian Conference on Computational Geometry, 221-225 2020
We adapt and generalize a heuristic for $k$-center clustering to the permutation case, where every prefix of the ordering is a guaranteed approximate solution. The one-hop greedy permutations work by choosing at each step the farthest unchosen point and then looking in its local neighborhood for a point that covers the most points at a certain scale. This balances the competing demands of reducing the coverage radius and covering as many points as possible. This idea first appeared in the work of Garcia-Diaz et al., whose algorithm required $O(n^2\log n)$ time for a fixed $k$ (i.e., not the whole permutation). We show how to use geometric data structures to approximate the entire permutation in $O(n \log \Delta)$ time for metric spaces with spread $\Delta$. Notably, this running time is asymptotically the same as the running time for computing the ordinary greedy permutation.
@inproceedings{sheehy20onehop, title = {One Hop Greedy Permutations}, author = {Donald R. Sheehy}, booktitle = {Proceedings of the 32nd Canadian Conference on Computational Geometry}, pages = {221--225}, year = {2020}}
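The following brute-force rendering may help make the heuristic concrete; the neighborhood radius and coverage scale used here are illustrative placeholders, not the constants from the paper or from Garcia-Diaz et al.

import numpy as np

def one_hop_greedy(points, hop=0.5, scale=0.5):
    # hop and scale are illustrative constants only
    D = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    n = len(points)
    order = [0]
    d_to = D[0].copy()        # distance from each point to the chosen prefix
    for _ in range(n - 1):
        f = int(np.argmax(d_to))    # the ordinary greedy (farthest) choice
        r = d_to[f]                 # current coverage radius
        # one hop: search the local neighborhood of f for a better center
        candidates = np.flatnonzero(D[f] <= hop * r)
        def gain(c):                # points newly covered at scale * r
            return int(np.sum((D[c] <= scale * r) & (d_to > scale * r)))
        best = int(max(candidates, key=gain))
        order.append(best)
        d_to = np.minimum(d_to, D[best])
    return order

print(one_hop_greedy(np.random.rand(80, 2))[:10])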
CCCG: Proceedings of the 32nd Canadian Conference on Computational Geometry, 345-351 2020
Finding the $k$th nearest neighbor to a query point is a ubiquitous operation in many types of metric computations, especially those in unsupervised machine learning. In many such cases, the distance to $k$ sample points is used as an estimate of the local density of the sample. In this paper, we give an algorithm that takes a finite metric $(P, d)$ and an integer $k$ and produces a subset $M\subseteq P$ with the property that for any $q\in P$, the distance to the second nearest point of $M$ to $q$ is a constant factor approximation to the distance to the $k$th nearest point of $P$ to $q$. Thus, the sample $M$ may be used in lieu of $P$. In addition to being much smaller than $P$, the distance queries on $M$ only require finding the second nearest neighbor instead of the $k$th nearest neighbor. This is a significant improvement, especially because theoretical guarantees on $k$th nearest neighbor methods often require $k$ to grow as a function of the input size $n$.
@inproceedings{gardner20simple, title = {A Simple Algorithm for $k$NN Sampling in General Metrics}, author = {Kirk P. Gardner and Donald R. Sheehy}, booktitle = {Proceedings of the 32nd Canadian Conference on Computational Geometry}, pages = {345--351}, year = {2020}}
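The construction itself is not reproduced here, but the stated guarantee is easy to test empirically. The sketch below (hypothetical names, Euclidean points as a stand-in for a general metric) measures how tightly the second-nearest-neighbor distance to a given sample $M$ tracks the $k$th-nearest-neighbor distance to $P$.

import numpy as np

def approximation_ratios(P, M_idx, k):
    # d_{P,k}(q): distance from q to its kth nearest point of P
    D = np.linalg.norm(P[:, None] - P[None, :], axis=2)
    d_Pk = np.sort(D, axis=1)[:, k]      # column 0 is the self-distance 0
    # d_{M,2}(q): distance from q to the second nearest point of M
    DM = np.linalg.norm(P[:, None] - P[M_idx][None, :], axis=2)
    d_M2 = np.sort(DM, axis=1)[:, 1]
    ratios = d_M2 / d_Pk
    return ratios.min(), ratios.max()    # empirical constant factors

P = np.random.rand(300, 2)
M_idx = np.arange(0, 300, 10)            # stand-in for the paper's sample M
print(approximation_ratios(P, M_idx, k=10))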
Algorithms 13(8), 200:1-200:15 2020
We generalize the local-feature size definition of adaptive sampling used in surface reconstruction to relate it to an alternative metric on Euclidean space. In the new metric, adaptive samples become uniform samples, making it simpler both to give adaptive sampling versions of homological inference results and to prove topological guarantees using the critical points theory of distance functions. This ultimately leads to an algorithm for homology inference from samples whose spacing depends on their distance to a discrete representation of the complement space.
@article{cavanna20adaptive, title = {Adaptive Metrics for Adaptive Samples}, author = {Nicholas J. Cavanna and Donald R. Sheehy}, journal = {Algorithms}, volume = {13}, number = {8}, pages = {200:1--200:15}, year = {2020}}
SODA: Proceedings of the ACM-SIAM Symposium on Discrete Algorithms, 411-425 2020
Data-sensitive metrics adapt distances locally based on the density of data points, with the goal of aligning distances with some notion of similarity. In this paper, we give the first exact algorithm for computing a data-sensitive metric called the nearest neighbor metric. In fact, we prove the surprising result that a previously published 3-approximation is an exact algorithm. The nearest neighbor metric can be viewed as a special case of a density-based distance used in machine learning, or it can be seen as an example of a manifold metric. Previous computational research on such metrics despaired of computing exact distances on account of the apparent difficulty of minimizing over all continuous paths between a pair of points. We leverage the exact computation of the nearest neighbor metric to compute sparse spanners and persistent homology. We also explore the behavior of the metric built from point sets drawn from an underlying distribution and consider the more general case of inputs that are finite collections of path-connected compact sets. The main results connect several classical theories such as the conformal change of Riemannian metrics, the theory of positive definite functions of Schoenberg, and the screw function theory of Schoenberg and von Neumann. We also develop some novel proof techniques based on the combination of screw functions and Lipschitz extensions that may be of independent interest.
@inproceedings{chu20exact, Author = {Timothy Chu and Gary L. Miller and Donald R. Sheehy}, Booktitle = {Proceedings of the ACM-SIAM Symposium on Discrete Algorithms}, Title = {Exact computation of a manifold metric, via Lipschitz Embeddings and Shortest Paths on a Graph}, Pages = {411--425}, Year = {2020}}
CCCG: The Canadian Conference on Computational Geometry, 340-345 2018
In the formal verification of dynamical systems, one often looks at a trajectory through a state space as a sample behavior of the system. Thus, metrics on trajectories give important information about the different behavior of the system given different starting states. In the important special case of linear dynamical systems, the set of trajectories forms a finite-dimensional vector space. In this paper, we exploit this vector space structure to define (semi)norms on the trajectories, give an isometric embedding from the trajectory metric into low-dimensional Euclidean space, and bound the Lipschitz constant on the map from start states to trajectories as measured in one of several different metrics. These results show that for an interesting class of trajectories, one can treat the trajectories as points while losing little or no information.
@inproceedings{duggirala18when, Author = {Parasara Sridhar Duggirala and Donald R. Sheehy}, Booktitle = {Proceedings of the Canadian Conference on Computational Geometry}, Title = {When Can We Treat Trajectories as Points?}, Pages = {340--345}, Year = {2018}}
CCCG: The Canadian Conference on Computational Geometry, 78-84 2018
We give an algorithm that computes the minimum of the bottleneck distance between two persistence diagrams over all diagonal shifts in $O(n^{3.5})$ time. When applied to log-scale persistence diagrams, this is a scale-invariant version of the bottleneck distance.
@inproceedings{cavanna18computing, Author = {Nicholas J. Cavanna and Oliver Kiselius and Donald R. Sheehy}, Booktitle = {Proceedings of the Canadian Conference on Computational Geometry}, Title = {Computing the Shift-Invariant Bottleneck Distance for Persistence Diagrams}, Pages = {78--84}, Year = {2018}}
SODA: ACM-SIAM Symposium on Discrete Algorithms, 1100-1108 2018
For a metric space $Y$, the Frechet distance is a metric on trajectories $f,g:[0,1]\to Y$ that minimizes $\max_{t\in [0,1]}d_Y(f(t), g(h(t)))$ over continuous reparameterizations $h$ of time. One can define a generalized Frechet distance between more complex objects, functions $f:X\to Y$ where $X$ is some topological space, by minimizing over homeomorphisms from $X$ to $X$. This more general definition has been studied for surfaces and often leads to computationally hard problems. We show how to compute, in polynomial time, signatures for these functions such that the resulting metric on the signatures can also be computed in polynomial time and provides a meaningful lower bound on the generalized Frechet distance. Our approach uses persistent homology and exploits the natural invariance of persistence diagrams of functions under homeomorphisms of the domain. Our algorithm for computing the signatures in Euclidean spaces uses a new method for computing the persistent homology of convex functions on simplicial complexes, which may be of independent interest.
@inproceedings{sheehy18frechet, Author = {Donald R. Sheehy}, Booktitle = {Proceedings of the ACM-SIAM Symposium on Discrete Algorithms}, Title = {Fr\'{e}chet-Stable Signatures Using Persistence Homology}, Pages = {1100--1108}, Year = {2018}}
CCCG: The Canadian Conference on Computational Geometry, 191-196 2017
We explore several problems related to ruled polygons. Given a ruling of a polygon P, we consider the Reeb graph of P induced by the ruling. We define the Reeb complexity of P, which roughly equates to the minimum number of points necessary to support P. We give asymptotically tight bounds on the Reeb complexity that are also tight up to a small additive constant. When restricted to the set of parallel rulings, we show that the Reeb complexity can be computed in polynomial time.
@inproceedings{cavanna17supporting, Author = {Nicholas J. Cavanna and Marc Khoury and Donald R. Sheehy}, Booktitle = {Proceedings of the Canadian Conference on Computational Geometry}, Title = {Supporting Ruled Polygons}, Pages = {191--196}, Year = {2017}}
SODA: ACM-SIAM Symposium on Discrete Algorithms, 2679-2690 2017
In their seminal work on homological sensor networks, de Silva and Ghrist showed the surprising fact that it is possible to certify the coverage of a coordinate-free sensor network even with very minimal knowledge of the space to be covered. Here, coverage means that every point in the domain (except possibly those very near the boundary) has a nearby sensor. More generally, their algorithm takes a pair of nested neighborhood graphs along with a labeling of vertices as either boundary or interior and computes the relative homology of a simplicial complex induced by the graphs. This approach, called the Topological Coverage Criterion (TCC), requires some assumptions about the underlying geometric domain as well as about the relationship of the input graphs to the domain. The goal of this paper is to show how the TCC can be applied to much more general domains under much weaker assumptions on the input. We give a new, simpler proof of the de Silva-Ghrist Topological Coverage Criterion that eliminates any assumptions about the smoothness of the boundary of the underlying space, allowing the results to be applied to much more general problems. The new proof factors the geometric, topological, and combinatorial aspects, allowing us to provide a coverage condition that supports thick boundaries, $k$-coverage, and weighted coverage, in which sensors have varying radii.
@inproceedings{cavanna17when, Author = {Nicholas J. Cavanna and Kirk P. Gardner and Donald R. Sheehy}, Booktitle = {Proceedings of the ACM-SIAM Symposium on Discrete Algorithms}, Title = {When and Why the Topological Coverage Criterion Works}, Pages = {2679--2690}, Year = {2017}}
Computational Geometry: Theory and Applications. 58: 70-96 2016
A new paradigm for point cloud data analysis has emerged recently, where point clouds are no longer treated as mere compact sets but rather as empirical measures. A notion of distance to such measures has been defined and shown to be stable with respect to perturbations of the measure. This distance can easily be computed pointwise in the case of a point cloud, but its sublevel-sets, which carry the geometric information about the measure, remain hard to compute or approximate. This makes it challenging to adapt many powerful techniques based on the Euclidean distance to a point cloud to the more general setting of the distance to a measure on a metric space. We propose an efficient and reliable scheme to approximate the topological structure of the family of sublevel-sets of the distance to a measure. We obtain an algorithm for approximating the persistent homology of the distance to an empirical measure that works in arbitrary metric spaces. Precise quality and complexity guarantees are given with a discussion on the behavior of our approach in practice.
@article{buchet16efficient, Title = {Efficient and Robust Persistent Homology for Measures}, Author = {Micka\"{e}l Buchet and Fr\'{e}d\'{e}ric Chazal and Steve Y. Oudot and Donald Sheehy}, Journal = {Computational Geometry: Theory and Applications}, Volume = {58}, Pages = {70--96}, Year = {2016}}
CCCG: The Canadian Conference on Computational Geometry, 107-113 2016
We show how a simple hierarchical tree called a cover tree can be turned into an asymptotically more efficient one known as a net-tree in linear time. We also introduce two linear-time operations to manipulate cover trees called coarsening and refining. These operations trade off tree height against node degree.
@inproceedings{jahanseir16transforming, Author = {Mahmoodreza Jahanseir and Donald R. Sheehy}, Booktitle = {Proceedings of the Canadian Conference on Computational Geometry}, Title = {Transforming Hierarchical Trees on Metric Spaces}, Pages = {107--113}, Year = {2016}}
CCCG: The Canadian Conference on Computational Geometry, 34-41 2016
Let $B$ be a square region in the plane. We give an efficient algorithm that takes a set $P$ of $n$ points from $B$, and produces a set $M\subset B$ with the property that the distance to the second nearest point in $M$ approximates the distance to the $k$th nearest point of $P$. That is, there are constants $\alpha, \beta\in \mathbb{R}$ such that for all $x\in B$, we have $\alpha\mathrm{d}_{P,k}(x) \le \mathrm{d}_{M,2}(x) \le \beta \mathrm{d}_{P,k}(x)$, where $\mathrm{d}_{M,2}$ and $\mathrm{d}_{P,k}$ denote the second nearest and $k$th nearest neighbor distance functions to $M$ and $P$ respectively. The algorithm is based on Delaunay refinement. The output set $M$ also has the property that its Delaunay triangulation has a lower bound on the size of the smallest angle. The primary application is in statistical density estimation and robust geometric inference.
@inproceedings{gardner16kth, Author = {Kirk P. Gardner and Donald R. Sheehy}, Booktitle = {Proceedings of the Canadian Conference on Computational Geometry}, Title = {$k$th Nearest Neighbor Sampling in the Plane}, Pages = {34--41}, Year = {2016}}
CCCG: The Canadian Conference on Computational Geometry, 128-132 2016
We generalize the local-feature size definition of adaptive sampling used in surface reconstruction to relate it to an alternative metric on Euclidean space. In the new metric, adaptive samples become uniform samples, making it simpler both to give adaptive sampling versions of homological inference results and to prove topological guarantees using the critical points theory of distance functions.
@inproceedings{cavanna16adaptive, Author = {Nicholas J. Cavanna and Donald R. Sheehy}, Booktitle = {Proceedings of the Canadian Conference on Computational Geometry}, Title = {Adaptive Metrics for Adaptive Samples}, Pages = {128--132}, Year = {2016}}
SOCG: Symposium on Computational Geometry (Multimedia Session), 69:1-69:4 2016
We present a new tool for interactively building and evaluating circle packings of planar graphs.
@inproceedings{pratt16exploring, Author = {Kevin Pratt and Connor Riley and Donald R. Sheehy}, Booktitle = {Proceedings of the 32nd International Symposium on Computational Geometry}, Title = {Exploring Circle Packing Algorithms}, Pages = {69:1--69:4}, Year = {2016}}
SOCG: Symposium on Computational Geometry (Multimedia Session), 64:1-64:5 2016
We present an open source computational geometry framework with interactive visualizations. We provide code for simple topological data structures that can describe planar geometry, polyhedral surfaces, and even more complex topological surfaces. Because the data structures use the same simple primitives at all levels of complexity, the framework provides a natural springboard for students, educators, and developers. See compugeom.github.io for an interactive demo.
@inproceedings{asselin16interactive, Author = {Lynn Asselin and Kirk P. Gardner and Donald R. Sheehy}, Booktitle = {Proceedings of the 32nd International Symposium on Computational Geometry}, Title = {Interactive Geometric Algorithm Visualization in a Browser}, Pages={64:1--64:5}, Year = {2016}}
SODA: ACM-SIAM Symposium on Discrete Algorithms, 1234-1245 2016
Nested dissection exploits the underlying topology to do matrix reductions, while persistent homology exploits matrix reductions to reveal the underlying topology. It seems natural that one should be able to combine these techniques to beat the current best bound of matrix multiplication time for computing persistent homology. However, nested dissection works by fixing a reduction order, whereas persistent homology generally constrains the ordering according to an input filtration. Despite this obstruction, we show that it is possible to combine these two theories. This shows that one can improve the computation of persistent homology if the underlying space has some additional structure. We give reasonable geometric conditions under which one can beat the matrix multiplication bound for persistent homology.
@inproceedings{kerber16persistent, Author = {Michael Kerber and Donald R. Sheehy and Primoz Skraba}, Booktitle = {Proceedings of the ACM-SIAM Symposium on Discrete Algorithms}, Title = {Persistent Homology and Nested Dissection}, Pages = {1234--1245}, Year = {2016}}
Foundations of Computational Mathematics, 15:1151-1186 2015
For points sampled near a compact set $X$, the persistence barcode of the Rips filtration built from the sample contains information about the homology of $X$ as long as $X$ satisfies some geometric assumptions. The Rips filtration is prohibitively large; however, zigzag persistence can be used to keep the size linear. We present several species of Rips-like zigzags and compare them with respect to the signal-to-noise ratio, a measure of how well the underlying homology is represented in the persistence barcode relative to the noise in the barcode at the relevant scales. Some of these Rips-like zigzags have been available as part of the Dionysus library for several years while others are new. Interestingly, we show that some species of Rips zigzags will exhibit less noise than the (non-zigzag) Rips filtration itself. Thus, Rips zigzags can offer improvements in both size complexity and signal-to-noise ratio. Along the way, we develop new techniques for manipulating and comparing persistence barcodes from zigzag modules. We give methods for reversing arrows and removing spaces from a zigzag, and we discuss factoring zigzags and a kind of interleaving of two zigzags that allows their barcodes to be compared. These techniques were developed to support our theoretical analysis of the signal-to-noise ratio of Rips-like zigzags, but they are of independent interest, as they apply to zigzag modules generally.
@article{oudot15zigzag, Title = {Zigzag Zoology: Rips Zigzags for Homology Inference}, Author = {Steve Y. Oudot and Donald R. Sheehy}, Journal = {Foundations of Computational Mathematics}, Volume = {15}, Pages = {1151--1186}, Year = {2015}}
CCCG: The Canadian Conference on Computational Geometry, 145-150 2015
An $\alpha$-complex is a subcomplex of the Delaunay triangulation of a point set $P\subset \mathbb{R}^d$ that is topologically equivalent to the union of balls of radius $\alpha$ centered at the points of $P$. In this paper, we give an output-sensitive algorithm to compute $\alpha$-complexes of $n$-point sets in constant dimensions, whose running time is $O(f\log n\log \frac{\alpha}{s})$, where $s$ is the smallest pairwise distance and $f$ is the number of simplices in the $c\alpha$-complex for a constant $c$. The algorithm is based on a refinement of a recent algorithm for computing the full Delaunay triangulation of $P$. We also extend the algorithm to work with weighted points provided the weights are appropriately bounded. The new analysis, which may be of independent interest, bounds the number of intersections of $k$-faces of a Voronoi diagram with $(d-k)$-faces of the Voronoi diagram of a carefully constructed superset.
@inproceedings{sheehy15output, Author = {Donald R. Sheehy}, Booktitle = {Proceedings of the Canadian Conference on Computational Geometry}, Title = {An Output-Sensitive Algorithm for Computing Weighted $\alpha$-Complexes}, Pages = {145--150}, Year = {2015}}
CCCG: The Canadian Conference on Computational Geometry, 116-121 2015
We present a geometric perspective on sparse filtrations used in topological data analysis. This new perspective leads to much simpler proofs, while also being more general, applying equally to Rips filtrations and Cech filtrations for any convex metric. We also give an algorithm for finding the simplices in such a filtration and prove that the vertex removal can be implemented as a sequence of elementary edge collapses.
@inproceedings{cavanna15geometric, Author = {Nicholas J. Cavanna and Mahmoodreza Jahanseir and Donald R. Sheehy}, Booktitle = {Proceedings of the Canadian Conference on Computational Geometry}, Title = {A Geometric Perspective on Sparse Filtrations}, Pages = {116--121}, Year = {2015}}
WADS: Algorithms and Data Structures Symposium, 200-211 2015
Several researchers have proposed using non-Euclidean metrics on point sets in Euclidean space for clustering noisy data. Almost always, a distance function is desired that recognizes the closeness of the points in the same cluster, even if the Euclidean cluster diameter is large. Therefore, it is preferred to assign smaller costs to the paths that stay close to the input points. In this paper, we consider a natural metric with this property, which we call the nearest neighbor metric. Given a point set $P$ and a path $\gamma$, this metric is the integral of the distance to $P$ along $\gamma$. We describe a $(3+\varepsilon)$-approximation algorithm and a more intricate $(1+\varepsilon)$-approximation algorithm to compute the nearest neighbor metric. Both approximation algorithms work in near-linear time. The former uses shortest paths on a sparse graph defined over the input points. The latter uses a sparse sample of the ambient space to find good approximate geodesic paths.
@inproceedings{cohen15approximating, Author = {Michael B. Cohen and Brittany Terese Fasy and Gary L. Miller and Amir Nayyeri and Donald R. Sheehy and Ameya Velingker}, Booktitle = {Proceedings of the Algorithms and Data Structures Symposium}, Title = {Approximating Nearest Neighbor Distances}, Pages = {200--211}, Year = {2015}}
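A conceptual sketch of the first approach: restrict attention to polygonal paths through the input points, weight each segment by a numerical estimate of the integral of the distance to $P$ along it, and run shortest paths. The paper uses a sparse spanner and proves the $(3+\varepsilon)$ guarantee; the complete graph and the sample count below are illustrative choices, not the paper's construction.

import numpy as np
from scipy.spatial import cKDTree
from scipy.sparse.csgraph import dijkstra

def nn_metric_estimates(points, samples=64):
    tree = cKDTree(points)
    n = len(points)
    t = np.linspace(0.0, 1.0, samples)
    W = np.zeros((n, n))
    for i in range(n):
        for j in range(i + 1, n):
            seg = points[i] + t[:, None] * (points[j] - points[i])
            d_to_P, _ = tree.query(seg)   # distance to P sampled along ij
            length = np.linalg.norm(points[j] - points[i])
            W[i, j] = W[j, i] = d_to_P.mean() * length   # Riemann estimate
    return dijkstra(W)   # all-pairs distances in the weighted graph

P = np.random.rand(40, 2)
print(nn_metric_estimates(P)[0, :5])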
SOCG: Symposium on Computational Geometry (Multimedia Session), 23-25 2015
Over the last few years, there have been several approaches to building sparser complexes that still give good approximations to the persistent homology. In this video, we illustrate a geometric perspective on sparse filtrations that leads to simpler proofs, more general theorems, and a more visual explanation. We hope that as these techniques become easier to understand, they will also become easier to use.
@inproceedings{cavanna15visualizing, Author = {Nicholas J. Cavanna and Mahmoodreza Jahanseir and Donald R. Sheehy}, Booktitle = {Proceedings of the 31st International Symposium on Computational Geometry}, Title = {Visualizing Sparse Filtrations}, Pages = {23--25}, Year = {2015}}
SOCG: Symposium on Computational Geometry, 328-334 2014
Given $n$ points $P$ in a Euclidean space, the Johnson-Lindenstrauss lemma guarantees that the distances between pairs of points are preserved up to a small constant factor with high probability by random projection into $O(\log n)$ dimensions. In this paper, we show that the persistent homology of the distance function to $P$ is also preserved up to a comparable constant factor. One could never hope to preserve the distance function to $P$ pointwise, but we show that it is preserved sufficiently at the critical points of the distance function to guarantee similar persistent homology. We prove these results in the more general setting of weighted $k$th nearest neighbor distances, for which $k=1$ and all weights equal to zero give the usual distance to $P$.
@inproceedings{sheehy14persistent, Title = {The Persistent Homology of Distance Functions under Random Projection}, Author = {Donald R. Sheehy}, Booktitle = {Proceedings of the 30th annual Symposium on Computational Geometry}, Pages = {328-334}, Year = {2014}}
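A minimal numerical sketch of the projection in the theorem: map the points through a Gaussian matrix into $O(\log n)$ dimensions and inspect the pairwise distortion (the constant 8 below is an arbitrary illustrative choice). The persistent homology computation itself is not shown.

import numpy as np
from scipy.spatial.distance import pdist

rng = np.random.default_rng(0)
n, d = 400, 200
P = rng.standard_normal((n, d))                # stand-in point cloud
k = int(np.ceil(8 * np.log(n)))                # target dimension O(log n)
G = rng.standard_normal((d, k)) / np.sqrt(k)   # Gaussian JL projection
Q = P @ G

ratios = pdist(Q) / pdist(P)                   # per-pair distortion
print(k, ratios.min(), ratios.max())           # concentrated near 1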
Discrete Comput Geom, 52(3): 476-491 2014
We describe a new algorithm for computing the Voronoi diagram of a set of $n$ points in constant-dimensional Euclidean space. The running time of our algorithm is $O(f \log n \log \Delta)$ where $f$ is the output complexity of the Voronoi diagram and $\Delta$ is the spread of the input, the ratio of largest to smallest pairwise distances. Despite the simplicity of the algorithm and its analysis, it improves on the state of the art for all inputs with polynomial spread and near-linear output size. The key idea is to first build the Voronoi diagram of a superset of the input points using ideas from Voronoi refinement mesh generation. Then, the extra points are removed in a straightforward way that allows the total work to be bounded in terms of the output complexity, yielding the output sensitive bound. The removal only involves local flips and is inspired by kinetic data structures.
@article{miller14new, Title = {A New Approach to Output-Sensitive Voronoi Diagrams and Delaunay Triangulations}, Author = {Gary L. Miller and Donald R. Sheehy}, Journal = {Discrete \& Computational Geometry}, Volume = {52}, Number = {3}, Pages = {476--491}, Year = {2014}}
CCCG: The Canadian Conference on Computational Geometry, 253-258 2013
A geometric separator for a set $U$ of $n$ geometric objects (usually balls) is a small (sublinear in $n$) subset whose removal disconnects the intersection graph of $U$ into roughly equal sized parts. These separators provide a natural way to do divide and conquer in geometric settings. A particularly nice geometric separator algorithm originally introduced by Miller and Thurston has three steps: compute a centerpoint in a space of one dimension higher than the input, compute a conformal transformation that ``centers'' the centerpoint, and finally, use the computed transformation to sample a sphere in the original space. The output separator is the subset of $U$ intersecting this sphere. It is both simple and elegant. We show that a change of perspective (literally) can make this algorithm even simpler by eliminating the entire middle step. By computing the centerpoint of the points lifted onto a paraboloid rather than using the stereographic map as in the original method, one can sample the desired sphere directly, without computing the conformal transformation.
@inproceedings{sheehy13geometric, Title = {Geometric Separators and the Parabolic Lift}, Author = {Donald R. Sheehy}, Booktitle = {Canadian Conference on Computational Geometry}, Pages = {253--258}, Year = {2013}}
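The last step is short enough to sketch directly: lift each point $p$ to $(p, \lVert p \rVert^2)$, take an approximate centerpoint of the lifted set, and intersect a random hyperplane through it with the paraboloid; the intersection projects back to a sphere. The coordinate-wise median below is only a placeholder for a real approximate centerpoint computation.

import numpy as np

def separator_sphere(P, rng=np.random.default_rng(1)):
    lifted = np.hstack([P, (P ** 2).sum(axis=1, keepdims=True)])
    c = np.median(lifted, axis=0)   # placeholder for an approximate centerpoint
    while True:
        normal = rng.standard_normal(P.shape[1] + 1)
        a, b = normal[:-1], normal[-1]
        if abs(b) < 1e-9:
            continue                # the hyperplane must cut the paraboloid
        # a.y + b z = normal.c  meets  z = |y|^2  in the sphere
        # |y + a/(2b)|^2 = normal.c / b + |a|^2 / (4 b^2)
        center = -a / (2 * b)
        rad2 = normal @ c / b + (a @ a) / (4 * b * b)
        if rad2 > 0:
            return center, np.sqrt(rad2)

P = np.random.rand(500, 2)
center, r = separator_sphere(P)
inside = np.linalg.norm(P - center, axis=1) < r   # the two sides of the cut
print(inside.sum(), (~inside).sum())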
SOCG: ACM Symposium on Computational Geometry, 281-288 2013
We describe a new algorithm for computing the Voronoi diagram of a set of $n$ points in constant-dimensional Euclidean space. The running time of our algorithm is $O(f \log n \log \Delta)$ where $f$ is the output complexity of the Voronoi diagram and $\Delta$ is the spread of the input, the ratio of largest to smallest pairwise distances. Despite the simplicity of the algorithm and its analysis, it improves on the state of the art for all inputs with polynomial spread and near-linear output size. The key idea is to first build the Voronoi diagram of a superset of the input points using ideas from Voronoi refinement mesh generation. Then, the extra points are removed in a straightforward way that allows the total work to be bounded in terms of the output complexity, yielding the output sensitive bound. The removal only involves local flips and is inspired by kinetic data structures.
@inproceedings{miller13new, Title = {A New Approach to Output-Sensitive Voronoi Diagrams and Delaunay Triangulations}, Author = {Gary L. Miller and Donald R. Sheehy}, Booktitle = {Proceedings of the 29th annual Symposium on Computational Geometry}, Pages = {281--288}, Year = {2013}}
SOCG: ACM Symposium on Computational Geometry, 289-298 2013
We present a new algorithm that produces a well-spaced superset of points conforming to a given input set in any dimension with guaranteed optimal output size. We also provide an approximate Delaunay graph on the output points. Our algorithm runs in expected time $O(2^{O(d)}(n\log n + m))$, where $n$ is the input size, $m$ is the output point set size, and $d$ is the ambient dimension. The constants depend only on the desired element quality bounds. To gain this new efficiency, the algorithm approximately maintains the Voronoi diagram of the current set of points by storing a superset of the Delaunay neighbors of each point. By maintaining the quality of the Voronoi diagram and avoiding the storage of the full Voronoi diagram, a simple exponential dependence on $d$ is obtained in the running time. Thus, if one only wants the approximate neighbor structure of a refined Delaunay mesh conforming to a set of input points, the algorithm returns a size-$2^{O(d)}m$ graph in $2^{O(d)}(n\log n + m)$ expected time. If $m$ is superlinear in $n$, then we can produce a hierarchically well-spaced superset of size $2^{O(d)}n$ in $2^{O(d)}n\log n$ expected time.
@inproceedings{miller13fast, Title = {A Fast Algorithm for Well-Spaced Points and Approximate Delaunay Graphs}, Author = {Gary L. Miller and Donald R. Sheehy and Ameya Velingker}, Booktitle = {Proceedings of the 29th annual Symposium on Computational Geometry}, Pages = {289--298}, Year = {2013}}
Discrete Comput Geom, 49(4): 778-796 2013
Previously appeared in SOCG 2012
The Vietoris-Rips filtration is a versatile tool in topological data analysis. Unfortunately, it is often too large to construct in full. We show how to construct an $O(n)$-size filtered simplicial complex on an $n$-point metric space such that the persistence diagram is a good approximation to that of the Vietoris-Rips filtration. The filtration can be constructed in $O(n\log n)$ time. The constants depend only on the doubling dimension of the metric space and the desired tightness of the approximation. For the first time, this makes it computationally tractable to approximate the persistence diagram of the Vietoris-Rips filtration across all scales for large data sets. Our approach uses a hierarchical net-tree to sparsify the filtration. We can either sparsify the data by throwing out points at larger scales to give a zigzag filtration, or sparsify the underlying graph by throwing out edges at larger scales to give a standard filtration. Both methods yield the same guarantees.
@article{sheehy13linear, Title = {Linear-Size Approximations to the {V}ietoris-{R}ips Filtration}, Author = {Donald R. Sheehy}, Journal = {Discrete \& Computational Geometry}, Volume = {49}, Number = {4}, Pages = {778--796}, Year = {2013}}
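To give a flavor of the second (edge-sparsification) route, here is a heavily simplified sketch: compute insertion radii from a greedy permutation, retire each point beyond a scale proportional to its radius, and keep a Rips edge only if it appears while both endpoints survive. The retirement rule and constants below are illustrative simplifications, not the paper's exact filtration (which also perturbs edge times to preserve the approximation guarantee).

import numpy as np

def sparse_rips_edges(points, eps=0.5):
    n = len(points)
    D = np.linalg.norm(points[:, None] - points[None, :], axis=2)
    # greedy permutation and insertion radii lam[i]
    lam = np.zeros(n)
    lam[0] = np.inf                   # the first point is never retired
    d_to = D[0].copy()
    for _ in range(n - 1):
        f = int(np.argmax(d_to))
        lam[f] = d_to[f]
        d_to = np.minimum(d_to, D[f])
    retire = lam / eps                # simplified retirement scale
    edges = [(i, j, D[i, j])          # (endpoint, endpoint, birth scale)
             for i in range(n) for j in range(i + 1, n)
             if D[i, j] <= min(retire[i], retire[j])]
    return edges

pts = np.random.rand(300, 2)
print(len(pts) * (len(pts) - 1) // 2, len(sparse_rips_edges(pts)))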
CCCG: The Canadian Conference on Computational Geometry, 309-314 2012
We show that filtering the barycentric decomposition of a \v Cech complex by the cardinality of the vertices captures precisely the topology of $k$-covered regions among a collection of balls for all values of $k$. Moreover, we relate this result to the Vietoris-Rips complex to get an approximation in terms of the persistent homology.
@inproceedings{sheehy12multicover, Title = {A Multicover Nerve for Geometric Inference}, Author = {Donald R. Sheehy}, Booktitle = {CCCG: Canadian Conference on Computational Geometry}, Pages = {309--314}, Year = {2012}}
Computer Graphics Forum, 31:5, 1627-1635 2012
The theory of optimal size meshes gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain. The input points define a maximal such sizing function called the feature size. This paper presents a way to bound the feature size integral in terms of an easy-to-compute property of a suitable ordering of the point set. The key idea is to consider the pacing of an ordered point set, a measure of the rate of change in the feature size as points are added one at a time. In previous work, Miller et al. showed that if an ordered point set has pacing $\phi$, then the number of vertices in an optimal mesh will be $O(\phi^d n)$, where $d$ is the input dimension. We give a new analysis of this integral showing that the output size is only $\Theta(n + n\log \phi)$. The new analysis tightens bounds from several previous results and provides matching lower bounds. Moreover, it precisely characterizes inputs that yield outputs of size $O(n)$.
@article{sheehy12new, Title = {{New Bounds on the Size of Optimal Meshes}}, Author = {Donald R. Sheehy}, Journal = {Computer Graphics Forum}, Volume= {31}, Number= {5}, Pages = {1627--1635}, Year = {2012}}
AISTATS: AI and Statistics, 64-72 2012
Often, high dimensional data lie close to a low-dimensional submanifold and it is of interest to understand the geometry of these submanifolds. The homology groups of a manifold are important topological invariants that provide an algebraic summary of the manifold. These groups contain rich topological information, for instance, about the connected components, holes, tunnels and sometimes the dimension of the manifold. In this paper, we consider the statistical problem of estimating the homology of a manifold from noisy samples under several different noise models. We derive upper and lower bounds on the minimax risk for this problem. Our upper bounds are based on estimators which are constructed from a union of balls of appropriate radius around carefully selected points. In each case we establish complementary lower bounds using Le Cam's lemma.
@article{balakrishnan12minimax, Title = {Minimax rates for homology inference}, Author = {Sivaraman Balakrishnan and Alessandro Rinaldo and Don Sheehy and Aarti Singh and Larry A. Wasserman}, Journal = {Journal of Machine Learning Research - Proceedings Track}, Volume = {22}, Pages = {64--72}, Year = {2012}}
SOCG: ACM Symposium on Computational Geometry, 321-330 2011
We present NetMesh, a new algorithm that produces a conforming Delaunay mesh for point sets in any fixed dimension with guaranteed optimal mesh size and quality. Our comparison-based algorithm runs in time $O(n\log n + m)$, where $n$ is the input size and $m$ is the output size, with constants depending only on the dimension and the desired element quality bounds. It can terminate early in $O(n\log n)$ time, returning an $O(n)$-size Voronoi diagram of a superset of $P$ with a relaxed quality bound, which again matches the known lower bounds. The previous best results in the comparison model depended on the log of the spread of the input, the ratio of the largest to smallest pairwise distance among input points. We reduce this dependence to $O(\log n)$ by using a sequence of $\epsilon$-nets to determine the input insertion order in an incremental Voronoi diagram. We generate a hierarchy of well-spaced meshes and use these to show that the complexity of the Voronoi diagram stays linear in the number of points throughout the construction.
@inproceedings{miller11beating, Title = {Beating the Spread: Time-Optimal Point Meshing}, Author = {Gary L. Miller and Todd Phillips and Donald R. Sheehy}, Booktitle = {SOCG: Proceedings of the 27th ACM Symposium on Computational Geometry}, Pages = {321--330}, Year = {2011}}
SOCG: ACM Symposium on Computational Geometry, 277-286 2010
We apply ideas from mesh generation to improve the time and space complexities of computing the full persistent homological information associated with a point cloud $P$ in Euclidean space $R^d$. Classical approaches rely on the Cech, Rips, $\alpha$-complex, or witness complex filtrations of $P$, whose complexities scale badly with $d$. For instance, the $\alpha$-complex filtration incurs the $n^{\Omega(d)}$ size of the Delaunay triangulation, where $n$ is the size of $P$. The common alternative is to truncate the filtrations when the sizes of the complexes become prohibitive, possibly before discovering the most relevant topological features. In this paper, we propose a new collection of filtrations, based on the Delaunay triangulation of a carefully-chosen superset of $P$, whose sizes are reduced to $2^{O(d^2)}n$. A nice property of these filtrations is that they are multiplicatively interleaved with the family of offsets of $P$, so that the persistence diagram of $P$ can be approximated in $2^{O(d^2)}n^3$ time in theory, with a near-linear observed running time in practice. Thus, our approach remains tractable in medium dimensions, say 4 to 10.
@inproceedings{hudson10topological, Title = {Topological Inference via Meshing}, Author = {Beno\^{i}t Hudson and Steve Y. Oudot and Gary L. Miller and Donald R. Sheehy}, Booktitle = {SOCG: Proceedings of the 26th ACM Symposium on Computational Geometry}, Pages = {277--286}, Year = {2010}}
Computational Geometry: Theory and Applications, 43(8): 647-654 2010
Previously appeared in SOCG 2009
We present the IteratedTverberg algorithm, the first deterministic algorithm for computing an approximate centerpoint of a set $S \subset R^d$ with running time subexponential in $d$. The algorithm is a derandomization of the IteratedRadon algorithm of Clarkson et al. (International Journal of Computational Geometry and Applications 6 (3) (1996) 357-377) and is guaranteed to terminate with an $\Omega(1/d^2)$-center. Moreover, it returns a polynomial-time checkable proof of the approximation guarantee, despite the coNP-completeness of testing centerpoints in general. We also explore the use of higher order Tverberg partitions to improve the running time of the deterministic algorithm and improve the approximation guarantee for the randomized algorithm. In particular, we show how to improve the $\Omega(1/d^2)$-center of the IteratedRadon algorithm to $\Omega(1/d^{\frac{r}{r-1}})$ for a cost of $O((rd)^d)$ in time for any integer $r > 1$.
@article{miller10approximate, Title = {Approximate centerpoints with proofs}, Author = {Gary L. Miller and Donald Sheehy}, Journal = {Computational Geometry: Theory and Applications}, Volume = {43}, Number = {8}, Pages = {647--654}, Year = {2010}}
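For context, the randomized scheme being derandomized fits in a few lines: a Radon point of $d+2$ points has depth 2 with respect to them, and taking Radon points of recursively computed points boosts the depth level by level. This sketch omits the checkable proof certificates that IteratedTverberg produces.

import numpy as np

def radon_point(Q):
    # Q: (d+2) x d array. Find a != 0 with sum(a) = 0 and sum(a_i Q_i) = 0;
    # the positive class then yields a point in both convex hulls.
    A = np.vstack([Q.T, np.ones(len(Q))])
    a = np.linalg.svd(A)[2][-1]       # (near-)null vector of A
    pos = a > 0
    return (a[pos] @ Q[pos]) / a[pos].sum()

def iterated_radon(P, depth, rng=np.random.default_rng(0)):
    # depth-0 nodes are random input points; each level takes the Radon
    # point of d+2 points from the level below
    if depth == 0:
        return P[rng.integers(len(P))]
    d = P.shape[1]
    level_below = np.array([iterated_radon(P, depth - 1, rng)
                            for _ in range(d + 2)])
    return radon_point(level_below)

P = np.random.rand(2000, 3)
print(iterated_radon(P, depth=4))     # an approximate centerpoint of P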
CCCG: The Canadian Conference on Computational Geometry, 79-82 2009
There are many depth measures on point sets that yield centerpoint theorems. These theorems guarantee the existence of points of a specified depth, a kind of geometric median. However, the deep point guaranteed to exist is not guaranteed to be among the input, and often, it is not. The alpha-wedge depth of a point with respect to a point set is a natural generalization of halfspace depth that replaces halfspaces with wedges (cones or cocones) of angle $\alpha$. We introduce the notion of a centervertex, a point with depth at least $n/(d+1)$ among the set $S$. We prove that for any finite subset $S$ of $R^d$, a centervertex exists. We also present a simple algorithm for computing an approximate centervertex.
@inproceedings{miller09centervertex, Title = {The Centervertex Theorem for Wedge Depth}, Author = {Gary L. Miller and Todd Phillips and Donald R. Sheehy}, Booktitle = {CCCG: Canadian Conference on Computational Geometry}, Pages = {79--82}, Year = {2009}}
SODA: ACM-SIAM Symposium on Discrete Algorithms, 1041-1047 2009
Typical volume meshes in three dimensions are designed to conform to an underlying two-dimensional surface mesh, with volume mesh element size growing larger away from the surface. The surface mesh may be uniformly spaced or highly graded, and may have fine resolution due to extrinsic mesh size concerns. When we desire that such a volume mesh have good aspect ratio, we require that some space-filling {\it scaffold} vertices be inserted off the surface. We analyze the number of scaffold vertices in a setting that encompasses many existing volume meshing algorithms. We show that under simple preconditions, the number of scaffold vertices will be linear in the number of surface vertices.
@inproceedings{hudson09size, Author = {Beno\^{i}t Hudson and Gary L. Miller and Todd Phillips and Donald R. Sheehy}, Booktitle = {SODA: ACM-SIAM Symposium on Discrete Algorithms}, Title = {Size Complexity of Volume Meshes vs. Surface Meshes}, Pages = {1041--1047}, Year = {2009}}
GeoInformatica 13: 2, 203-221 2009
Given a collection of regions on a map, we seek a method of continuously altering the regions as the scale is varied. We formalize this as a well-defined problem in homotopic deformation. We require the regions to preserve topology, area ratios, and relative position as they change over time. A solution is presented using differential methods and computational geometric techniques. Most notably, we apply this method to provide an algorithm for obtaining cartograms.
@article{danciger09shape, Author = {Jeff Danciger and Satyan Devadoss and John Mugno and Donald R. Sheehy and Rachel Ward}, Journal = {GeoInformatica}, Number = {2}, Pages = {203--221}, Title = {Shape Deformation in Continuous Map Generalization}, Volume = {13}, Year = {2009}}
CCCG: The Canadian Conference on Computational Geometry, 163-166 2008
We present the first spatially adaptive data structure that answers approximate nearest neighbor (ANN) queries to points that reside in a geometric space of any constant dimension $d$. The $L_t$-norm approximation ratio is $O(d^{1 + 1/t})$, and the running time for a query $q$ is $O(d^2 \lg \delta(p, q))$, where $p$ is the result of the preceding query and $\delta(p, q)$ is the number of input points in a suitably-sized box containing $p$ and $q$. Our data structure has $O(d n)$ size and requires $O(d^2 n \lg n)$ preprocessing time, where $n$ is the number of points in the data structure. The size of the bounding box for $\delta$ depends on $d$, and our results rely on the Random Access Machine (RAM) model with word size $\Theta(\lg n)$.
@inproceedings{derryberry08achieving, Author = {John Derryberry and Daniel D. Sleator and Donald Sheehy and Maverick Woo}, Booktitle = {CCCG: Canadian Conference on Computational Geometry}, Title = {Achieving Spatial Adaptivity while Finding Approximate Nearest Neighbors}, Pages = {163--166}, Year = {2008}}
CCCG: The Canadian Conference on Computational Geometry, 175-178 2008
Most modern meshing algorithms produce asymptotically optimal size output. However, the size of the optimal mesh may not be bounded by any function of $n$. In this paper, we introduce \emph{well-paced} point sets and prove that these will produce linear size outputs when meshed with any ``size-optimal'' meshing algorithm. This work generalizes all previous work on the linear cost of balancing quadtrees. We also present an algorithm that uses well-paced points to produce a linear size Delaunay mesh of a point set in $R^d$.
@inproceedings{miller08linear, Author = {Gary L. Miller and Todd Phillips and Donald R. Sheehy}, Booktitle = {CCCG: Canadian Conference on Computational Geometry}, Title = {Linear-size meshes}, Pages = {175--178}, Year = {2008}}
ICALP: 34th International Colloquium on Automata, Languages and Programming, 655-666 2007
We present a new meshing algorithm for the plane, Overlay Stitch Meshing (OSM), accepting as input an arbitrary Planar Straight Line Graph and producing a triangulation with all angles smaller than $170^\circ$. The output triangulation is competitive in size with any optimal-size mesh having an equally bounded largest angle. The competitive ratio is $O(\log(L/s))$ where $L$ and $s$ are respectively the largest and smallest features in the input. OSM runs in $O(n \log (L/s) + m)$ time/work, where $n$ is the input size and $m$ is the output size. The algorithm first uses Sparse Voronoi Refinement to compute a quality overlay mesh of the input points alone. This triangulation is then combined with the input edges to give the final mesh.
@inproceedings{miller07size, Author = {Gary L. Miller and Todd Phillips and Donald R. Sheehy}, Booktitle = {34th International Colloquium on Automata, Languages and Programming}, Title = {Size Competitive Meshing without Large Angles}, Pages = {655--666}, Year = {2007}}
Computational Geometry: Theory and Applications 34, 195-202 2006
We introduce series-triangular graph embeddings and show how to partition point sets with them. This result is then used to prove an upper bound on the number of Steiner points needed to obtain compatible triangulations of point sets. The problem is generalized to finding compatible triangulations for more than two point sets and we show that such triangulations can be constructed with only a linear number of Steiner points added to each point set. Moreover, the compatible triangulations constructed by these methods are regular triangulations.
@article{danciger06compatible, Author = {Jeff Danciger and Satyan Devadoss and Donald R. Sheehy}, Journal = {Computational Geometry: Theory and Applications}, Pages = {195--202}, Title = {Compatible Triangulations and Point Partitions by Series Triangular Graphs}, Volume = {34}, Year = {2006}}