Talks

Proximity Search in the Greedy Tree
SIAM Symposium on Simplicity in Algorithms (SOSA), Florence, Italy, Jan 24, 2023

Over the last 50 years, many data structures have been proposed for proximity search problems on metric data. Perhaps the simplest of these is the ball tree, which was independently discovered multiple times over the years. However, there is a lack of strong theoretical guarantees for standard ball trees, often leading to more complicated structures when guarantees are required. In this paper, we present the greedy tree, a simple ball tree construction for which we can prove strong theoretical guarantees for proximity search queries, matching the state of the art under reasonable assumptions. To our knowledge, this is the first ball tree construction providing such guarantees. Like a standard ball tree, it is a binary tree with the points stored in the leaves. Only a point, a radius, and an integer are stored for each node. The asymptotic running times of search algorithms in the greedy tree match those of more complicated structures regularly used in practice.
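
For illustration only (this is a textbook ball tree search sketch, not the construction from the paper; the names, the dist parameter, and the unused count field are my own), a node with exactly the fields described above supports the usual branch-and-bound nearest neighbor query:

```python
import math

class Node:
    """Ball tree node. Per the abstract, each node stores a point, a radius,
    and an integer (here called count; its role in the greedy tree is not
    used in this generic sketch). The data points live in the leaves."""
    def __init__(self, point, radius=0.0, count=1, left=None, right=None):
        self.point = point
        self.radius = radius
        self.count = count
        self.left = left
        self.right = right

def nearest(node, q, dist, best=(math.inf, None)):
    """Textbook branch-and-bound search: prune a subtree whenever
    dist(q, node.point) - node.radius cannot beat the best distance so far."""
    if node is None:
        return best
    d = dist(q, node.point)
    if node.left is None and node.right is None:   # leaf holds a single point
        return (d, node.point) if d < best[0] else best
    if d - node.radius >= best[0]:                 # this ball cannot contain a closer point
        return best
    for child in (node.left, node.right):
        best = nearest(child, q, dist, best)
    return best
```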

Semi-Supervised TDA
International Conference on Advances in Interdisciplinary Statistics and Combinatorics (AISC), October 8, 2022

One of the most pervasive metaphors in TDA is that homology generalizes clustering. This situates techniques like persistent homology firmly in the domain of unsupervised learning. Although less common, there is also substantial work on the estimation of a persistence barcode of an unknown function from samples---a kind of supervised TDA problem. In this talk, I will consider the question of what kinds of guarantees are possible if one has evaluations of the function at only a subset of the input points---a semi-supervised TDA problem. I will summarize the new theory of sub-barcodes and show how one can compute a barcode that is guaranteed to be contained in the barcode of every Lipschitz function that agrees with the sample data. This provides strong guarantees even with only partial data.

The Persistent Homology of Lipschitz Extensions
The Applied Algebraic Topology Research Network Vietoris-Rips Seminar, Feb 4, 2022

Let $X$ be a metric space and let $f:X\to \mathbb{R}$ be a real-valued Lipschitz function. Given a sample $S$ of $X$ and the values of $f$ at the points of $S$, we want to know something about the persistent homology of the sublevel set filtration of $f$. Under reasonable assumptions about the space $X$ and the sample $S$, it is possible to give strong guarantees about the bottleneck distance between the ground truth barcode of $f$ and a barcode computed from the sample. This talk will explore the question of what one can say in the absence of any guarantees about the quality (i.e., density) of the sample. I will present a theory of sub-barcodes that provides an alternative to bottleneck distance for theoretical guarantees on barcodes. Then I will show that it is possible to compute a barcode from Lipschitz extensions of $f(S)$ that is guaranteed to be a sub-barcode of the barcode of every Lipschitz function that agrees with $f$ on the points of $S$.
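
For background (these are the classical McShane-Whitney extensions, stated here for an $L$-Lipschitz $f$; the talk's construction works with extensions of this kind), the largest and smallest Lipschitz extensions of the sampled values are
\[
\overline{f}(x) = \min_{s\in S}\bigl(f(s) + L\,d(x,s)\bigr),
\qquad
\underline{f}(x) = \max_{s\in S}\bigl(f(s) - L\,d(x,s)\bigr),
\]
and every $L$-Lipschitz function that agrees with $f$ on $S$ lies between them.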

Discrete Voronoi Refinement
Duke Algorithms Seminar, Nov 18, 2021

Discrete Voronoi Refinement
Workshop on Computational Persistence, Purdue University, Nov 2, 2021

A Sparse Delaunay Filtration
The International Symposium on Computational Geometry, University at Buffalo, June 7, 2021

Sketching Persistence Diagrams
The International Symposium on Computational Geometry, University at Buffalo, June 7, 2021

Sketching Persistence Diagrams
Dagstuhl Seminar on Computational Geometry, May 5, 2021

Sketching Persistence Diagrams
Oxford TDA Seminar, April 30, 2021

A Sparse Delaunay Filtration
NeurIPS Workshop on TDA and Beyond, December 11, 2020

We show how a filtration of Delaunay complexes can be used to approximate the persistence diagram of the distance to a point set in $\mathbb{R}^d$. Whereas the full Delaunay complex can be used to compute this persistence diagram exactly, it may have size $O(n^{\lceil d/2 \rceil})$. In contrast, our construction uses only $O(n)$ simplices. The central idea is to connect Delaunay complexes on progressively denser subsamples by considering the flips in an incremental construction as simplices in $d+1$ dimensions. This approach leads to a very simple and straightforward proof of correctness in geometric terms, because the final filtration is dual to a $(d+1)$-dimensional Voronoi construction similar to the standard Delaunay filtration. We also show how this complex can be efficiently constructed.

A Simple Algorithm for kNN Sampling in General Metrics
The Canadian Conference on Computational Geometry, August 8, 2020

Finding the $k$th nearest neighbor to a query point is a ubiquitous operation in many types of metric computations, especially those in unsupervised machine learning. In many such cases, the distance to the $k$th nearest sample point is used as an estimate of the local density of the sample. In this paper, we give an algorithm that takes a finite metric space $(P, d)$ and an integer $k$ and produces a subset $M\subseteq P$ with the property that for any $q\in P$, the distance from $q$ to its second nearest point in $M$ is a constant-factor approximation of the distance from $q$ to its $k$th nearest point in $P$. Thus, the sample $M$ may be used in lieu of $P$. In addition to being much smaller than $P$, distance queries on $M$ only require finding the second nearest neighbor instead of the $k$th nearest neighbor. This is a significant improvement, especially because theoretical guarantees on $k$th nearest neighbor methods often require $k$ to grow as a function of the input size $n$.
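
To make the query-side property concrete, here is a minimal brute-force sketch (the sampling algorithm itself is not shown, and the function names are illustrative): the second-nearest-neighbor distance in $M$ stands in for the $k$th-nearest-neighbor distance in $P$.

```python
def kth_nn_dist(q, P, k, dist):
    """Brute-force distance from q to its k-th nearest point of P."""
    return sorted(dist(q, p) for p in P)[k - 1]

def approx_kth_nn_dist(q, M, dist):
    """Distance from q to its second nearest point of M; per the abstract,
    this is a constant-factor approximation of kth_nn_dist(q, P, k)
    for every q in P."""
    return sorted(dist(q, m) for m in M)[1]
```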

One Hop Greedy Permutations
The Canadian Conference on Computational Geometry, August 8, 2020

We adapt and generalize a heuristic for $k$-center clustering to the permutation case, where every prefix of the ordering is a guaranteed approximate solution. The one-hop greedy permutations work by choosing at each step the farthest unchosen point and then looking in its local neighborhood for a point that covers the most points at a certain scale. This balances the competing demands of reducing the coverage radius and covering as many points as possible. This idea first appeared in the work of Garcia-Diaz et al., whose algorithm required $O(n^2\log n)$ time for a fixed $k$ (i.e., not the whole permutation). We show how to use geometric data structures to approximate the entire permutation in $O(n \log \Delta)$ time for metric spaces with spread $\Delta$. Notably, this running time is asymptotically the same as the running time for computing the ordinary greedy permutation.
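
The following is a brute-force sketch of the idea as described above (quadratic time, with an illustrative choice of neighborhood radius and scale; the paper's algorithm instead uses geometric data structures to reach the stated bound):

```python
import math

def one_hop_greedy(points, alpha=2.0):
    """At each step, find the farthest unchosen point, then pick, among the
    unchosen points near it, the one that covers the most not-yet-covered
    points at scale r/alpha, where r is the current coverage radius."""
    dist = math.dist  # Euclidean for the sketch; any metric works
    n = len(points)
    order, chosen = [0], {0}
    d_to_order = [dist(points[0], p) for p in points]  # distance to the chosen prefix
    for _ in range(1, n):
        far = max(range(n), key=lambda i: d_to_order[i])
        r = d_to_order[far]
        nbrs = [i for i in range(n) if i not in chosen
                and dist(points[far], points[i]) <= r / alpha]
        def coverage(c):
            return sum(1 for j in range(n)
                       if d_to_order[j] > r / alpha
                       and dist(points[c], points[j]) <= r / alpha)
        pick = max(nbrs, key=coverage) if nbrs else far
        order.append(pick)
        chosen.add(pick)
        d_to_order = [min(d_to_order[j], dist(points[pick], points[j]))
                      for j in range(n)]
    return order
```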

Conformal Change of Metrics in Data Analysis
University of Florida Topological Data Analysis conference, January 30, 2020

I will survey a variety of algorithmic settings in which one would like to compute (or at least use) distances between points that are induced by local scaling of space. I will track this idea through topics in mesh generation, surface reconstruction, and robotics. Then, I will discuss some recent results on exact computation of such distances for the special case where the local scaling of space is proportional to the distance to the input. This gives the first example of an exact computation for a so-called density-based distance.

Conformal Change of Metrics in Data Analysis
Duke Theory Seminar, December 12, 2019

I will survey a variety of algorithmic settings in which one would like to compute (or at least use) distances between points that are induced by local scaling of space. I will track this idea through topics in mesh generation, surface reconstruction, and robotics. Then, I will discuss some recent results on exact computation of such distances for the special case where the local scaling of space is proportional to the distance to the input. This gives the first example of an exact computation for a so-called density-based distance.

The Cohomology of Impossible Figures, Revisited
TGDA@OSU Seminar, November 5, 2019

The Penrose triangle, also known as the impossible tribar, is an icon for cohomology. It is literally the icon for Čech cohomology on Wikipedia. The idea goes back to a paper by Roger Penrose in 1992, but it was first reported by Penrose several years earlier. There, he shows how the impossibility of the figure depends on a topological property of the figure. However, the impossibility is clearly not a purely topological property, as it depends crucially on the geometry of how the figure is drawn. In this talk, I will try to restore the geometry to the theory of impossible figures. The resulting theory decomposes neatly as a direct sum of two older theories: the first is the theory of reciprocal diagrams from James Clerk Maxwell in the 1860s, and the second is the acyclicity condition of Herbert Edelsbrunner from the 1990s. I will also give some hints as to how these theories are relevant in modern computational geometry for scientific computing.

The Cohomology of Impossible Figures, Revisited
NCSU Topology, October, 1, 2019

Roger Penrose is credited with identifying nontrivial cohomology as a basic requirement for a certain class of optical illusion present in the work of artist M.C. Escher. The so-called Penrose Triangle has since become an emblem for cohomology. It captures, in a discrete way, the kind of rotating vector fields whose singularities allow for the nonexistence of an antiderivative and that stand as a primary motivating example for de Rham cohomology in the first place. However, in many ways, the discrete theory is much older. In this talk, I will explore some appearances of discrete de Rham cohomology theory in the 1860s and the 1960s, and their impact on later problems in discrete and computational geometry. Then, I will argue that discrete differential forms are a powerful tool for geometric graph algorithms.

The Cohomology of Impossible Figures, Revisited
Union College Math Conference, September, 14, 2019

Roger Penrose is credited with identifying nontrivial cohomology as a basic requirement for a certain class of optical illusion present in the work of artist M.C. Escher. The so-called Penrose Triangle has since become an emblem for cohomology. It captures, in a discrete way, the kind of rotating vector fields whose singularities allow for the nonexistence of an antiderivative and that stand as a primary motivating example for de Rham cohomology in the first place. However, in many ways, the discrete theory is much older. In this talk, I will explore some appearances of discrete de Rham cohomology theory in the 1860s and the 1960s, and their impact on later problems in discrete and computational geometry. Then, I will argue that discrete differential forms are a powerful tool for geometric graph algorithms.

Topological Data Analysis
NCSU Computer Science Department Colloquium, January 17, 2019

Topological Data Analysis
Notre Dame Computer Science Department Colloquium, January 17, 2019

Frechet-Stable Signatures Using Persistent Homology
Fall Eastern Sectional Meeting, Newark, Delaware, September 29, 2018

Frechet-Stable Signatures Using Persistent Homology
ACM-SIAM Symposium on Discrete Algorithms, New Orleans, LA, January 8, 2018

Frechet-Stable Signatures Using Persistent Homology
B.I.R.S. Seminar on Topological Data Analysis, Banff, Canada, August 3, 2017

Supporting Ruled Polygons
Canadian Conference on Computational Geometry, Ottawa, Canada, July 27, 2017

Discrete Morse Theory and the Persistent Homology of Simplicial Maps
Dagstuhl Seminar on Topological Data Analysis, Wadern, Germany, July 19, 2017

What Can Persistent Homology Tell Us About Fréchet Distance?
Brown Seminar on Applied Topology, Brown University, February 23, 2017

When and Why the Topological Coverage Criterion Works
ACM-SIAM Symposium on Discrete Algorithms, Barcelona, Spain, January 18, 2017

Persistent Nerves Revisited
Union College Math Conference: Applied Topology, December 3, 2016

Transforming Hierarchical Trees on Metric Spaces
The Canadian Conference on Computational Geometry, Vancouver, Canada, August 4, 2016

We show how a simple hierarchical tree called a cover tree can be turned into an asymptotically more efficient one known as a net-tree in linear time. We also introduce two linear-time operations to manipulate cover trees called coarsening and refining. These operations make a trade-off between tree height and the node degree.

Sampling Uncertain Manifolds
CG Week Workshop on Geometric Computing on Uncertain Data, Boston, MA, June 15, 2016

In this talk, I address a new take on adaptive sampling with respect to the local feature size, i.e., the distance to the medial axis. We recently proved that such samples can be viewed as uniform samples with respect to an alternative metric on the Euclidean space. The idea of adaptive metrics gives a general way to adapt the critical point theory of distance functions to locally adaptive sampling conditions.

Some Thoughts on Sampling
Topology, Geometry, and Data Analysis Conference at Ohio State University, May 16, 2016

In this talk, I address two new ideas in sampling geometric objects. The first is a new take on adaptive sampling with respect to the local feature size, i.e., the distance to the medial axis. We recently proved that such samples can be viewed as uniform samples with respect to an alternative metric on the Euclidean space. The second is a generalization of Voronoi refinement sampling. There, one also achieves an adaptive sample while simultaneously "discovering" the underlying sizing function. We show how to construct such samples that are spaced uniformly with respect to the $k$th nearest neighbor distance function.

Characterizing the Distortion of Some Simple Euclidean Embeddings
The European Workshop on Computational Geometry, Lugano, Switzerland, March 31, 2016

This talk addresses upper and lower bound techniques for the distortion of mappings between Euclidean metric spaces, including circles, spheres, pairs of lines, triples of planes, and the union of a hyperplane and a point.

Persistent Homology and Nested Dissection
The ACM-SIAM Symposium on Discrete Algorithms, Arlington, VA, January 11, 2016

Nested dissection exploits the underlying topology to do matrix reductions, while persistent homology exploits matrix reductions to reveal the underlying topology. It seems natural that one should be able to combine these techniques to beat the currently best bound of matrix multiplication time for computing persistent homology. However, nested dissection works by fixing a reduction order, whereas persistent homology generally constrains the ordering according to an input filtration. Despite this obstruction, we show that it is possible to combine these two theories. This shows that one can improve the computation of persistent homology if the underlying space has some additional structure. We give reasonable geometric conditions under which one can beat the matrix multiplication bound for persistent homology.

Sensors and Samples: A Homological Approach
Presented at the Workshop on Topology: Identifying Order in Complex Systems, Institute for Advanced Study, Princeton, NJ

In their seminal work on homological sensor networks, de Silva and Ghrist showed the surprising fact that it is possible to certify the coverage of a coordinate-free sensor network even with very minimal knowledge of the space to be covered. We give a new, simpler proof of the de Silva-Ghrist Topological Coverage Criterion that eliminates any assumptions about the smoothness of the boundary of the underlying space, allowing the results to be applied to much more general problems. The new proof factors the geometric, topological, and combinatorial aspects of this approach. This factoring reveals an interesting new connection between the topological coverage condition and the notion of weak feature size in geometric sampling theory. We then apply this connection to show that, for a given scale, if one knows the number of connected components and the distance to the boundary, one can either infer the higher Betti numbers or provide strong evidence that more samples are needed. This is in contrast to previous work, which merely assumed a good sample and gives no guarantees if the sampling condition is not met.

The Persistent Homology of Distance Functions under Random Projection
Presented at the Symposium on Computational Geometry, Kyoto, Japan, June 2014.

Given $n$ points $P$ in a Euclidean space, the Johnson-Lindenstrauss lemma guarantees that the distances between pairs of points are preserved up to a small constant factor with high probability by random projection into $O(\log n)$ dimensions. In this paper, we show that the persistent homology of the distance function to $P$ is also preserved up to a comparable constant factor. One could never hope to preserve the distance function to $P$ pointwise, but we show that it is preserved sufficiently at the critical points of the distance function to guarantee similar persistent homology. We prove these results in the more general setting of weighted $k$th nearest neighbor distances; taking $k=1$ and all weights equal to zero gives the usual distance to $P$.
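
For reference, the projection step itself (not the persistence argument) can be sketched as follows; the target dimension would be on the order of $\log n/\epsilon^2$, and the scaling is the standard Gaussian construction.

```python
import numpy as np

def jl_project(P, target_dim, seed=0):
    """Project the rows of an (n x d) array into target_dim dimensions with a
    scaled Gaussian matrix; pairwise distances are preserved up to a small
    factor with high probability when target_dim is on the order of log n."""
    rng = np.random.default_rng(seed)
    G = rng.normal(size=(P.shape[1], target_dim)) / np.sqrt(target_dim)
    return P @ G
```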

Persistent Homology and Nested Dissection
Presented at the 23rd Fall Workshop on Computational Geometry, New York City, October 2013.

Nested dissection exploits underlying topology to do matrix reductions, while persistent homology exploits matrix reductions to reveal underlying topology. It seems natural that one should be able to combine these techniques to beat the currently best bound of matrix multiplication time for computing persistent homology. However, nested dissection works by fixing a reduction order, whereas persistent homology generally constrains the ordering. Despite this obstruction, we show that it is possible to combine these two theories. This shows that one can improve the computation of persistent homology of a filtration by exploiting information about the underlying space. It gives reasonable geometric conditions under which one can beat the matrix multiplication bound for persistent homology. This is a purely theoretical result, but it's a valuable one. We also give a topological view of nested dissection, which should be interesting to the wider theory community. For example, separators are covers, sparsification is simplification, asymmetric reduction is Morse reduction, and graph separators can be extended to complex separators.

Geometric and Topological Data Analysis
Presented at the Air Force Research Lab, Rome, New York

In this talk, I give a gentle introduction to geometric and topological data analysis and then segue into some natural questions that arise when one combines the topological view with the perhaps more well-studied linear algebraic view.

Geometric Separators and the Parabolic Lift
Presented at The Canadian Conference on Computational Geometry, Waterloo, Canada, August 2013.

A geometric separator for a set $U$ of $n$ geometric objects (usually balls) is a small (sublinear in $n$) subset whose removal disconnects the intersection graph of $U$ into roughly equal sized parts. These separators provide a natural way to do divide and conquer in geometric settings. A particularly nice geometric separator algorithm originally introduced by Miller and Thurston has three steps: compute a centerpoint in a space of one dimension higher than the input, compute a conformal transformation that ``centers'' the centerpoint, and finally, use the computed transformation to sample a sphere in the original space. The output separator is the subset of $U$ intersecting this sphere. It is both simple and elegant. We show that a change of perspective (literally) can make this algorithm even simpler by eliminating the entire middle step. By computing the centerpoint of the points lifted onto a paraboloid rather than using the stereographic map as in the original method, one can sample the desired sphere directly, without computing the conformal transformation.
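
The lift itself is a one-liner; the sketch below shows only that step (the centerpoint computation and the sphere sampling are omitted), mapping each point $x\in\mathbb{R}^d$ to $(x, \|x\|^2)$ on the paraboloid in $\mathbb{R}^{d+1}$.

```python
import numpy as np

def parabolic_lift(P):
    """Lift points in R^d onto the paraboloid in R^(d+1): x -> (x, |x|^2).
    Per the abstract, a centerpoint of the lifted points lets one sample the
    separating sphere directly in the original space."""
    return np.hstack([P, (P * P).sum(axis=1, keepdims=True)])
```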

A New Approach to Output-Sensitive Voronoi Diagrams and Delaunay Triangulations
Presented at The Symposium on Computational Geometry 2013, Rio de Janeiro, Brazil

We describe a new algorithm for computing the Voronoi diagram of a set of $n$ points in constant-dimensional Euclidean space. The running time of our algorithm is $O(f \log n \log \Delta)$ where $f$ is the output complexity of the Voronoi diagram and $\Delta$ is the spread of the input, the ratio of largest to smallest pairwise distances. Despite the simplicity of the algorithm and its analysis, it improves on the state of the art for all inputs with polynomial spread and near-linear output size. The key idea is to first build the Voronoi diagram of a superset of the input points using ideas from Voronoi refinement mesh generation. Then, the extra points are removed in a straightforward way that allows the total work to be bounded in terms of the output complexity, yielding the output sensitive bound. The removal only involves local flips and is inspired by kinetic data structures.

Optimal Meshing
Presented at the Workshop on Computational Geometry (Mesh Generation) at SoCG 2013 in Rio de Janeiro

The word optimal is used in different ways in mesh generation. It could mean that the output is, in some sense, "the best mesh," or that the algorithm is, by some measure, "the best algorithm." One might hope that the best algorithm also produces the best mesh, but maybe some tradeoffs are necessary. In this talk, I will survey several different notions of optimality in mesh generation and explore the different tradeoffs between them. The bias will be towards Delaunay/Voronoi methods.

Note that I have made some corrections from the version presented at SoCG. Specifically, I had presented the analysis of why the feature size integral counts the number of vertices in a quality mesh. This was meant to be simple, but it ignores the fact that one must first prove that the algorithm produces vertices that are spaced according to the feature size. As this may have been confusing, I updated the slides and added a note.

A New Approach to Output-Sensitive Voronoi Diagrams and Delaunay Triangulations
Ohio State University, Oct 31, 2012

Voronoi diagrams and their duals, Delaunay triangulations, are used in many areas of computing and the sciences. Starting in 3 dimensions, there is a substantial (i.e., polynomial) difference between the best-case and the worst-case complexity of these objects when starting with $n$ points. This motivates the search for algorithms that are output-sensitive rather than relying only on worst-case guarantees. In this talk, I will describe a simple, new algorithm for computing Voronoi diagrams in $d$ dimensions that runs in $O(f \log n \log \Delta)$ time, where $f$ is the output size and the spread $\Delta$ of the input points is the ratio of the diameter to the closest pair distance. For a wide range of inputs, this is the best known algorithm. The algorithm is novel in that it turns the classic algorithm of Delaunay refinement for mesh generation on its head, working backwards from a quality mesh to the Delaunay triangulation of the input. Along the way, we will see instances of several other classic problems for which no higher-dimensional results are known, including kinetic convex hulls and splitting Delaunay triangulations.

Mesh Generation and Topological Data Analysis
Banff Workshop on Topological Data Analysis and Machine Learning Theory 2012

In this talk, I show how mesh generation and topological data analysis are a natural fit.

A Multicover Nerve for Geometric Inference
Presented at the Canadian Conference on Computational Geometry 2012, PEI Canada

We show that filtering the barycentric decomposition of a Čech complex by the cardinality of the vertices captures precisely the topology of $k$-covered regions among a collection of balls for all values of $k$. Moreover, we relate this result to the Vietoris-Rips complex to get an approximation in terms of the persistent homology.
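
For concreteness (a standard definition, with notation chosen here for illustration): for a finite point set $P$ and radius $r$, the $k$-covered region is the set of points lying in at least $k$ of the balls of radius $r$,
\[
\mathrm{cov}_k(r) = \{\, x \;:\; |\{ p\in P : d(x,p)\le r \}| \ge k \,\},
\]
and the result above says that the cardinality-filtered barycentric decomposition captures the topology of $\mathrm{cov}_k(r)$ for all $k$ and $r$.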

New Bounds on the Size of Optimal Meshes
Presented at the Symposium on Geometry Processing 2012, Tallinn, Estonia

The theory of optimal size meshes gives a method for analyzing the output size (number of simplices) of a Delaunay refinement mesh in terms of the integral of a sizing function over the input domain. The input points define a maximal such sizing function called the feature size. This paper presents a way to bound the feature size integral in terms of an easy to compute property of a suitable ordering of the point set. The key idea is to consider the pacing of an ordered point set, a measure of the rate of change in the feature size as points are added one at a time. In previous work, Miller et al.\ showed that if an ordered point set has pacing $\phi$, then the number of vertices in an optimal mesh will be $O(\phi^dn)$, where $d$ is the input dimension. We give a new analysis of this integral showing that the output size is only $\Theta(n + n\log \phi)$. The new analysis tightens bounds from several previous results and provides matching lower bounds. Moreover, it precisely characterizes inputs that yield outputs of size $O(n)$.

Minimax Rates for Homology Inference
Geometrica Seminar, Inria Saclay

This talk explores upper and lower bounds on sampling rates for determining the homology of a manifold from noisy random samples.

Linear-Size Approximations to the Vietoris-Rips Filtration
Presented at The University of Muenster

The Vietoris-Rips filtration is a versatile tool in topological data analysis. Unfortunately, it is often too large to construct in full. We show how to construct an $O(n)$-size filtered simplicial complex on an $n$-point metric space such that the persistence diagram is a good approximation to that of the Vietoris-Rips filtration. The filtration can be constructed in $O(n\log n)$ time. The constants depend only on the doubling dimension of the metric space and the desired tightness of the approximation. For the first time, this makes it computationally tractable to approximate the persistence diagram of the Vietoris-Rips filtration across all scales for large data sets.

Our approach uses a hierarchical net-tree to sparsify the filtration. We can either sparsify the data by throwing out points at larger scales to give a zigzag filtration, or sparsify the underlying graph by throwing out edges at larger scales to give a standard filtration. Both methods yield the same guarantees.

Linear-Size Approximations to the Vietoris-Rips Filtration
Presented at ATMCS5: Applied and Computational Topology

The Vietoris-Rips filtration is a versatile tool in topological data analysis. Unfortunately, it is often too large to construct in full. We show how to construct an $O(n)$-size filtered simplicial complex on an $n$-point metric space such that the persistence diagram is a good approximation to that of the Vietoris-Rips filtration. The filtration can be constructed in $O(n\log n)$ time. The constants depend only on the doubling dimension of the metric space and the desired tightness of the approximation. For the first time, this makes it computationally tractable to approximate the persistence diagram of the Vietoris-Rips filtration across all scales for large data sets.

Our approach uses a hierarchical net-tree to sparsify the filtration. We can either sparsify the data by throwing out points at larger scales to give a zigzag filtration, or sparsify the underlying graph by throwing out edges at larger scales to give a standard filtration. Both methods yield the same guarantees.

Linear-Size Approximations to the Vietoris-Rips Filtration
Presented at The Symposium on Computational Geometry 2012, University of North Carolina Chapel Hill

The Vietoris-Rips filtration is a versatile tool in topological data analysis. Unfortunately, it is often too large to construct in full. We show how to construct an $O(n)$-size filtered simplicial complex on an $n$-point metric space such that the persistence diagram is a good approximation to that of the Vietoris-Rips filtration. The filtration can be constructed in $O(n\log n)$ time. The constants depend only on the doubling dimension of the metric space and the desired tightness of the approximation. For the first time, this makes it computationally tractable to approximate the persistence diagram of the Vietoris-Rips filtration across all scales for large data sets.

Our approach uses a hierarchical net-tree to sparsify the filtration. We can either sparsify the data by throwing out points at larger scales to give a zigzag filtration, or sparsify the underlying graph by throwing out edges at larger scales to give a standard filtration. Both methods yield the same guarantees.

Beating the Spread: Time-Optimal Point Meshing
Presented at Symposium on Computational Geometry, 2011, Paris, France

We present NetMesh, a new algorithm that produces a conforming Delaunay mesh for point sets in any fixed dimension with guaranteed optimal mesh size and quality. Our comparison-based algorithm runs in time $O(n\log n + m)$, where $n$ is the input size and $m$ is the output size, with constants depending only on the dimension and the desired element quality bounds. It can terminate early in $O(n\log n)$ time, returning an $O(n)$-size Voronoi diagram of a superset of $P$ with a relaxed quality bound, which again matches the known lower bounds.

The previous best results in the comparison model depended on the log of the spread of the input, the ratio of the largest to smallest pairwise distance among input points. We reduce this dependence to $O(\log n)$ by using a sequence of $\epsilon$-nets to determine input insertion order in an incremental Voronoi diagram. We generate a hierarchy of well-spaced meshes and use these to show that the complexity of the Voronoi diagram stays linear in the number of points throughout the construction.

Learning with Nets and Meshes
Computational Geometry Learning Workshop (CGL), Paris, France

In this talk, I argue that mesh generation is a good tool for preprocessing point clouds for geometric inference and discuss how to compute such meshes.

Meshes and Nets
Presented at CMU Theory Lunch, April 6, 2011

What is the difference between a mesh and a net? What is the difference between a metric space epsilon-net and a range space epsilon-net? What is the difference between geometric divide-and-conquer and combinatorial divide-and-conquer?
In this talk, I will answer these questions and discuss how these different ideas come together to finally settle the question of how to compute conforming point set meshes in optimal time. The meshing problem is to discretize space into as few pieces as possible and yet still capture the underlying density of the input points. Meshes are fundamental in scientific computing, graphics, and more recently, topological data analysis.
This is joint work with Gary Miller and Todd Phillips.

Ball Packings and Fat Voronoi Diagrams
Presented at CMU Theory Lunch, September 15, 2010

Here's a toy problem: What is the SMALLEST number of unit balls you can fit in a box such that no more will fit? In this talk, I will show how just thinking about a naive greedy approach to this problem leads to a simple derivation of several of the most important theoretical results in the field of mesh generation. We'll prove classic upper and lower bounds on both the number of balls and the complexity of their interrelationships.

Then, we'll relate this problem to a similar one called the Fat Voronoi Problem, in which we try to find point sets such that every Voronoi cell is fat (the ratio of the radii of the largest contained to smallest containing ball is bounded). This problem has tremendous promise in the future of mesh generation, as it can circumvent the classic lower bounds presented in the first half of the talk. Unfortunately, the simple approach no longer works. In the end, we will show that the number of neighbors of any cell in a Fat Voronoi Diagram in the plane is bounded by a constant (if you think that's obvious, spend a minute to try to prove it). We'll also talk a little about the higher-dimensional version of the problem and its wide range of applications.

Topological Inference via Meshing
Presented at Symposium on Computational Geometry, 2010, in Snowbird, Utah

In topological inference, the goal is to extract information about a shape given only a sample of points from it. There are many approaches to this problem, but the one we focus on is persistent homology. We get a view of the data at different scales by imagining the points are balls and considering different radii. The shape information we want comes in the form of a persistence diagram, which describes the components, cycles, bubbles, etc., in the space that persist over a range of different scales. To actually compute a persistence diagram in the geometric setting, previous work required complexes of size $n^{O(d)}$. We reduce this complexity to $O(n)$ (hiding some large constants depending on $d$) by using ideas from mesh generation. This talk will not assume any knowledge of topology. This is joint work with Gary Miller, Benoit Hudson, and Steve Oudot.

Topological Inference via Meshing (long version for Theory Lunch)
Presented at CMU Theory Lunch, March 3, 2010

In topological inference, the goal is to extract information about a shape given only a sample of points from it. There are many approaches to this problem, but the one we focus on is persistent homology. We get a view of the data at different scales by imagining the points are balls and considering different radii. The shape information we want comes in the form of a persistence diagram, which describes the components, cycles, bubbles, etc., in the space that persist over a range of different scales. To actually compute a persistence diagram in the geometric setting, previous work required complexes of size $n^{O(d)}$. We reduce this complexity to $O(n)$ (hiding some large constants depending on $d$) by using ideas from mesh generation. This talk will not assume any knowledge of topology. This is joint work with Gary Miller, Benoit Hudson, and Steve Oudot.

Prospective Students Research Talk:
Geometry, Topology and All of Your Wildest Dreams Will Come True.
Presented to CMU Prospective Grad Students, Feb 27, 2010

In this light talk, I give a high-level view of some of my recent research in using ideas from mesh generation to lower the complexity of computing persistent homology in geometric settings. Because this talk is for a general audience, I will focus on three related applications (where related is interpreted loosely) that I think have the widest appeal. The applications are: (1) Winning Nobel Peace Prizes, (2) Winning Olympic Gold Medals, and (3) Finding True Love.

The Centervertex Theorem for Wedge Depth
Presented at the Canadian Conference on Computational Geometry, 2009, in Vancouver

There are many depth measures on point sets that yield centerpoint theorems. These theorems guarantee the existence of points of a specified depth, a kind of geometric median. However, the deep point guaranteed to exist is not guaranteed to be among the input, and often, it is not. The $\alpha$-wedge depth of a point with respect to a point set is a natural generalization of halfspace depth that replaces halfspaces with wedges (cones or cocones) of angle $\alpha$. We introduce the notion of a centervertex, a point of $S$ with depth at least $n/(d+1)$. We prove that for any finite subset $S$ of $\mathbb{R}^d$, a centervertex exists. We also present a simple algorithm for computing an approximate centervertex.

Approximate Centerpoints with Proofs
Presented at the Symposium on Computational Geometry, 2009, in Aarhus, Denmark.

We show how to derandomize the Iterated-Radon algorithm to achieve the first deterministic algorithm for computing approximate centerpoints that runs in time subexponential in the dimension. The algorithm also returns a polynomial-time checkable proof of the quality of the returned centerpoint.

Planar Graphs in 2 1/2 Dimensions
Presented at Theory Lunch, Carnegie Mellon University, March 18, 2009

In 1864, James Clerk Maxwell published his now famous correspondence between equilibrium stresses and liftings of planar graphs into 3D "terrains." We will explore this classic theorem and how it has influenced many areas of modern theoretical computer science. This survey will touch on results in graph drawing, regular triangulations, network routing, rigidity theory, and spectral graph theory (time permitting).

Linear-size meshes
Presented at the Canadian Conference on Computational Geometry, 2008, in Montreal

We show how to adapt Delaunay refinement meshing technology to produce linear size Delaunay meshes of any input point set. The new technique that makes this possible is the use of well-paced points, or those that can be ordered so that consecutive prefixes have a bounded difference in local feature size.

Achieving Spatial Adaptivity while Finding Approximate Nearest Neighbors
Presented at the Canadian Conference on Computational Geometry, 2008, in Montreal

We give an algorithm for computing approximate nearest neighbors that also achieves spatial adaptivity, a kind of geometric finger search. The algorithm is general dimensional.

Cone Depth and the Center Vertex Theorem
Presented at The Fall Workshop in in Computational Geometry, October 31, 2008

We generalize the Tukey depth to use cones instead of halfspaces. We prove a generalization of the Center Point Theorem: for $S\subset \mathbb{R}^d$, there is a vertex $s\in S$ with depth at least $\frac{n}{d+1}$ for cones of half-angle $45^\circ$. This gives a notion of data depth for which an approximate median can always be found among the original set. We present a simple algorithm for computing an approximate center vertex.

Searching for the Center
Presented at Theory Lunch, Carnegie Mellon University, October 8, 2008

Convexity is a powerful tool for extracting combinatorial information from geometric objects. In this talk, I will survey several classical convexity theorems going back to the 1920s and show how they are useful in computational geometry today. In particular, I will focus on the problem of computing center points. A center point $p$ for a set $S$ is a point such that any line through $p$ divides $S$ evenly (at worst, a 1/3-2/3 cut). The existence of a center point follows independently from both Helly's Theorem (1923) and Tverberg's Theorem (1966). By borrowing intuition from both theorems, we arrive at a new deterministic algorithm for computing approximate center points in higher dimensions.

A Competitive Algorithm for No-Large-Angle Triangulation
Presented at Theory Lunch, Carnegie Mellon University, May 2, 2007

In this talk, I will present Overlay Stitch Meshing, a new algorithm that simultaneously solves two different problems in the field of triangulating planar straight-line graphs for scientific computing applications: It is the first Delaunay refinement-type triangulation algorithm that terminates with a full complement of guarantees even on degenerate inputs and with no dependence on the size of the smallest input angle. It is also the first algorithm that gives a log-competitive output size for the problem of no-large-angle triangulation with Steiner points. The workings of the algorithm are very easy to state and understand. The brunt of the talk will focus on new analytic techniques that allow us to prove strong guarantees.

Flips in Computational Geometry
Presented at Theory Lunch, Carnegie Mellon University, Nov. 15, 2006

In this talk, we will be looking at a basic primitive in computational geometry, the flip. Also known as bistellar flips, edge flips, rotations, and Pachner moves, this local change operation has been discovered and rediscovered in a variety of fields (thus the many names) and has proven useful both as an algorithmic tool and as a proof technology. For algorithm designers working outside of computational geometry, one can consider the flip move as a higher-dimensional analog of the tree rotations used in binary trees. I will survey some of the most important results about flips, with an emphasis on developing a general geometric intuition that has led to many advances.