To get started, clone this repository and all of its submodule dependencies:

    git clone --recursive http://github.com/alecjacobson/geometry-processing-mesh-reconstruction.git

See the introduction for installation, layout, and compilation instructions.

Once built, you can execute the assignment from inside the build/ directory using:

    ./mesh-reconstruction [path to point cloud]
In this assignment, we will be implementing a simplified version of the method in "Poisson Surface Reconstruction" by Kazhdan et al. 2006. (Your first "task" will be to read and understand this paper).
Many scanning technologies output a set of $n$ point samples $\mathbf{P} \in \mathbb{R}^{n \times 3}$ on the
surface of the object in question. From these points and perhaps the location
of the camera, one can also estimate normals $\mathbf{N} \in \mathbb{R}^{n \times 3}$
to the surface for each point $\mathbf{p} \in \mathbf{P}$. This image shows the
data/elephant.pwn input
data with a white dot for each point and a red line segment pointing outward for
each corresponding normal vector.
For shape analysis, visualization and other downstream geometry processing
phases, we would like to convert this finitely sampled point cloud data into
an explicit continuous surface representation: i.e., a triangle
mesh (a special case
of a polygon mesh). This image
shows the corresponding output mesh for data/elephant.pwn
input data above:
Converting the point cloud directly to a triangle mesh makes it very difficult to ensure that the mesh meets certain topological postconditions: i.e., that it is manifold, closed, and has a small number of holes.
Instead we will first convert the point cloud sampling representation into an
implicit surface representation: the unknown surface is defined as the
level-set of some function $g: \mathbb{R}^3 \rightarrow \mathbb{R}$ mapping all points in space to a scalar value. For example, we may define
the surface $\partial S$ of some solid, volumetric shape $S$ to be all points
$\mathbf{x} \in \mathbb{R}^3$ such that $g(\mathbf{x}) = \sigma$, where we may arbitrarily set
$\sigma = \frac{1}{2}$.
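
As a concrete example (not from the assignment data, just for intuition), a solid ball of radius $r$ centered at the origin could be represented by choosing $g(\mathbf{x}) = \|\mathbf{x}\|$ and the iso-value $\sigma = r$, so that

$$\partial S = \left\{ \mathbf{x} \in \mathbb{R}^3 \;\middle|\; \|\mathbf{x}\| = r \right\},$$

the sphere of radius $r$.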
On the computer, it is straightforward to discretize an implicit
function. We define a regular 3D grid of
voxels containing at least the bounding
box of $S$. At each
node in the grid $\mathbf{x}_{i,j,k}$
we store the value of the implicit function
$g_{i,j,k} = g(\mathbf{x}_{i,j,k})$. This defines $g$
everywhere in the grid via trilinear
interpolation.
For example, consider a point $\mathbf{x}$ lying in the middle of the
bottom-most, front-most, left-most cell. We know the values of $g$ at the eight
corners of that cell. Trilinear interpolation can be understood as linear
interpolation in the $x$-direction by the point's relative $x$-coordinate on each
$x$-axis-aligned edge, resulting in four values
living on the same plane. These can then be linearly interpolated in the
$y$-direction by the relative $y$-coordinate,
resulting in two values on the same line, and finally in the
$z$-direction by the relative $z$-coordinate
to get to our evaluation point $\mathbf{x}$.
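
For intuition, here is a minimal C++ sketch (not part of the assignment's starter code) of evaluating trilinear interpolation within a single cell, assuming the eight corner values and the point's relative coordinates in the cell are already known:

```cpp
#include <Eigen/Core>

// Trilinearly interpolate corner values g[i][j][k] (i,j,k in {0,1}) of one
// cell at relative coordinates t = (tx,ty,tz) in [0,1]^3 within that cell.
// Illustrative sketch only; the corner-array layout is an assumption.
double trilinear(const double g[2][2][2], const Eigen::Vector3d & t)
{
  double value = 0.0;
  for(int i = 0; i < 2; i++)
  {
    for(int j = 0; j < 2; j++)
    {
      for(int k = 0; k < 2; k++)
      {
        // Each corner contributes a product of 1D "hat" weights
        const double w =
          (i ? t(0) : 1.0 - t(0)) *
          (j ? t(1) : 1.0 - t(1)) *
          (k ? t(2) : 1.0 - t(2));
        value += w * g[i][j][k];
      }
    }
  }
  return value;
}
```

These same weights are exactly the non-zero entries that will populate each row of the sparse interpolation matrix described in the tasks below.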
An implicit surface stored as the level-set of a trilinearly interpolated grid
can be contoured into a triangle mesh via the Marching Cubes
Algorithm.
For the purposes of this assignment, we will treat this as a black
box. Instead, we focus on
determining what values for $g$ to store on the grid.
We assume that our set of points $\mathbf{P}$ lie on the surface $\partial S$ of some physical
solid object $S$. This solid object
must have some non-trivial volume that we can calculate abstractly as the
integral of unit density over the solid:

$$\int\limits_S 1 \; d\mathbf{x}.$$

We can rewrite this definite integral as an indefinite integral over all of
$\mathbb{R}^3$:

$$\int\limits_{\mathbb{R}^3} \chi_S(\mathbf{x}) \; d\mathbf{x},$$

by introducing the characteristic
function of $S$, that is
one for points inside of the shape and zero for points outside of
$S$:

$$\chi_S(\mathbf{x}) = \begin{cases} 1 & \text{if } \mathbf{x} \in S, \\ 0 & \text{otherwise.} \end{cases}$$
Compared to typical implicit surface
functions, this function
represents the surface $\partial S$ of the shape $S$
as the discontinuity between
the one values and the zero values. Awkwardly, the gradient of the
characteristic function $\nabla \chi_S$
is not defined along $\partial S$.
One of the key observations made in [Kazhdan et al. 2006] is that the gradient of an infinitesimally mollified (smoothed) characteristic function:

1. points in the direction of the surface normal near the surface $\partial S$, and
2. is zero everywhere else.
Our goal will be to use our points $\mathbf{P}$ and normals $\mathbf{N}$
to optimize an
implicit function $g$
over a regular grid, so that its gradient $\nabla g$
meets
these two criteria. In that way, our $g$
will be an approximation of the
mollified characteristic function.
For now, let's make two simplifying assumptions:

- we know how to compute $\nabla g$ at each node location $\mathbf{x}_{i,j,k}$, and
- our input points $\mathbf{p}_\ell$ all lie perfectly at grid nodes: for each $\mathbf{p}_\ell$ there exists some $\mathbf{x}_{i,j,k} = \mathbf{p}_\ell$.
We will find out that these assumptions are not realistic and we will have to relax them (i.e., we will not make these assumptions in the completion of the tasks). However, they will make the following algorithmic description easier on a first pass.
If our points lie at grid points, then our corresponding normals
also
live at grid points. This leads to a very simple set of linear equations to
define a function $g$
with a gradient equal to the surface normal at the
surface and zero gradient away from the surface:

$$\nabla g(\mathbf{x}_{i,j,k}) = \mathbf{v}_{i,j,k} := \begin{cases} \mathbf{n}_\ell & \text{if } \exists\ \mathbf{p}_\ell = \mathbf{x}_{i,j,k}, \\ \mathbf{0} & \text{otherwise.} \end{cases}$$
This is a vector-valued equation. The gradients, normals and zero-vectors are
three-dimensional (e.g., $\nabla g \in \mathbb{R}^3$). In effect, this is three equations for
every grid node.
Since we only need a single number at each grid node (the value of $g$), we
have too many equations.
Like many geometry processing algorithms confronted with such an overdetermined system, we will optimize for the solution that best minimizes the error of the above equations:

$$\left\| \nabla g(\mathbf{x}_{i,j,k}) - \mathbf{v}_{i,j,k} \right\|^2.$$

We will treat the error of each grid location equally by minimizing the sum over all grid locations:

$$\min_{\mathbf{g}} \sum_i \sum_j \sum_k \frac{1}{2} \left\| \nabla g(\mathbf{x}_{i,j,k}) - \mathbf{v}_{i,j,k} \right\|^2,$$

where $\mathbf{g}$ (written in boldface) is a vector of the unknown grid-node values,
with $g_{i,j,k} = g(\mathbf{x}_{i,j,k})$.
Part of the convenience of working on a regular grid is that we can use the
finite difference
method to approximate
the gradient $\nabla g$ on the grid.
After revisiting our assumptions, we will be able to compute
approximations of
the $x$-, $y$- and $z$-components of $\nabla g$
via a sparse
matrix multiplication of
a "gradient matrix" $\mathbf{G}$
and our vector of unknown grid values $\mathbf{g}$. We will be
able to write the
minimization problem above in matrix form:

$$\min_{\mathbf{g}} \frac{1}{2} \left\| \mathbf{G}\mathbf{g} - \mathbf{v} \right\|^2,$$
or equivalently after expanding the norm:

$$\min_{\mathbf{g}} \frac{1}{2} \mathbf{g}^{\top} \mathbf{G}^{\top} \mathbf{G} \mathbf{g} - \mathbf{g}^{\top} \mathbf{G}^{\top} \mathbf{v} + \text{constant}.$$
This is a quadratic "energy" function of the variables $\mathbf{g}$; its minimum occurs when
an infinitesimal change in $\mathbf{g}$
produces no change in the energy:

$$\frac{\partial}{\partial \mathbf{g}} \frac{1}{2} \left\| \mathbf{G}\mathbf{g} - \mathbf{v} \right\|^2 = 0.$$
Applying this derivative gives us a sparse system of linear equations:

$$\mathbf{G}^{\top} \mathbf{G} \mathbf{g} = \mathbf{G}^{\top} \mathbf{v}.$$
We will assume that we can solve this using a black box sparse solver.
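
As a rough illustration (not necessarily how the assignment's starter code organizes things), assembling and solving this system with Eigen might look like the following, where `G` and `v` are assumed to have been built already:

```cpp
#include <Eigen/Sparse>

// Solve  G^T G g = G^T v  for the grid values g.
// G and v are assumed to be already assembled (see the tasks below).
Eigen::VectorXd solve_for_grid_values(
  const Eigen::SparseMatrix<double> & G,
  const Eigen::VectorXd & v)
{
  // Form the sparse, symmetric positive semi-definite system matrix
  Eigen::SparseMatrix<double> A = G.transpose() * G;
  Eigen::VectorXd b = G.transpose() * v;
  // An iterative Krylov solver works well on these very regular matrices
  // (see also the BiCGSTAB hint near the end of this document).
  Eigen::BiCGSTAB<Eigen::SparseMatrix<double>> solver;
  solver.compute(A);
  return solver.solve(b);
}
```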
Now, let's revisit our assumptions.
The gradient of a function $g$ in 3D is nothing more than a vector containing
the partial derivatives in each coordinate direction:

$$\nabla g(\mathbf{x}) = \left( \frac{\partial g(\mathbf{x})}{\partial x},\ \frac{\partial g(\mathbf{x})}{\partial y},\ \frac{\partial g(\mathbf{x})}{\partial z} \right)^{\top}.$$

We will approximate each partial derivative individually. Let's consider the
partial derivative in the $x$-direction, $\partial g(\mathbf{x})/\partial x$, and we will assume
without loss of generality that what we derive applies symmetrically for $y$
and $z$.
The partial derivative in the $x$-direction is a one-dimensional derivative.
This couldn't be easier to do with finite differences: we approximate the
derivative of the function $g$
with respect to the $x$-direction as the
difference between the function evaluated at one grid node and at the grid node
before it in the $x$-direction, divided by the spatial distance $h$ between
adjacent nodes
(i.e., the grid step size):

$$\frac{\partial g(\mathbf{x}_{i-\frac{1}{2},j,k})}{\partial x} = \frac{g_{i,j,k} - g_{i-1,j,k}}{h},$$

where we use the index $i-\frac{1}{2}$ to indicate that this derivative in the
$x$-direction lives on a staggered
grid in between the grid
nodes where the function values for $g$ live.
The following pictures show a 2D example, where $g$ lives on the nodes of a
blue grid:

The partial derivatives of $g$ with respect to the $x$-direction
live on a
green, staggered grid:

The partial derivatives of $g$ with respect to the $y$-direction
live on a
yellow, staggered grid:
Letting $\mathbf{g}$ be a column vector of function values on the
primary grid (blue in the example pictures), we can construct a sparse matrix
$\mathbf{D}^x$ so that each row
computes the partial derivative at the corresponding
staggered grid location $\mathbf{x}_{i-\frac{1}{2},j,k}$. The
$\ell$-th entry in that row receives a
value only for neighboring primary grid nodes:

$$D^x_{i-\frac{1}{2},j,k}(\ell) = \begin{cases} -\tfrac{1}{h} & \text{if } \ell = (i-1,j,k), \\ \ \ \tfrac{1}{h} & \text{if } \ell = (i,j,k), \\ \ \ 0 & \text{otherwise.} \end{cases}$$
Now, obviously in our code we cannot index the column vector $\mathbf{g}$
by a triplet of numbers $\{i,j,k\}$
or the rows of $\mathbf{D}^x$
by the triplet $\{i-\frac{1}{2},j,k\}$. We will assume that
$g_{i,j,k}$ refers to
`g(i+j*n_x+k*n_y*n_x)`. Similarly, for the staggered grid subscripts
$\{i-\frac{1}{2},j,k\}$ we will assume that
$D^x_{i-\frac{1}{2},j,k}(\ell)$ refers to the matrix entry
`Dx(i+j*n_x+k*n_y*n_x,l)`, where the $i-\frac{1}{2}$ has been rounded down.
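
To make this indexing concrete, here is a minimal sketch of assembling `Dx` from triplets with Eigen. It is illustrative only (not a drop-in solution for the `fd_partial_derivative` task below), and it assumes the $x$-staggered grid has one fewer node in the $x$-direction, so its rows use a stride of `nx-1`:

```cpp
#include <Eigen/Sparse>
#include <vector>

// Assemble the x-direction staggered-grid derivative matrix Dx.
// Rows live on an (nx-1) x ny x nz staggered grid; columns on the
// nx x ny x nz primary grid, indexed as i + j*nx + k*ny*nx.
void build_Dx(
  const int nx, const int ny, const int nz, const double h,
  Eigen::SparseMatrix<double> & Dx)
{
  typedef Eigen::Triplet<double> T;
  std::vector<T> triplets;
  Dx.resize((nx-1)*ny*nz, nx*ny*nz);
  for(int k = 0; k < nz; k++)
  {
    for(int j = 0; j < ny; j++)
    {
      for(int i = 0; i < nx-1; i++)
      {
        // Row index on the staggered grid (the "half" index rounded down)
        const int row = i + j*(nx-1) + k*ny*(nx-1);
        // Columns of the two neighboring primary grid nodes
        const int col_left  = i     + j*nx + k*ny*nx;
        const int col_right = (i+1) + j*nx + k*ny*nx;
        triplets.emplace_back(row, col_left,  -1.0/h);
        triplets.emplace_back(row, col_right,  1.0/h);
      }
    }
  }
  Dx.setFromTriplets(triplets.begin(), triplets.end());
}
```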
We can similarly build matrices $\mathbf{D}^y$ and $\mathbf{D}^z$
and stack these matrices
vertically to create a gradient matrix $\mathbf{G}$:

$$\mathbf{G} = \begin{pmatrix} \mathbf{D}^x \\ \mathbf{D}^y \\ \mathbf{D}^z \end{pmatrix}.$$
This implies that our vector $\mathbf{v}$ of zeros and normals in our minimization
problem should not live on the primary grid, but rather it, too, should be broken
into
$x$-, $y$- and $z$-components that live on their respective staggered
grids:

$$\mathbf{v} = \begin{pmatrix} \mathbf{v}^x \\ \mathbf{v}^y \\ \mathbf{v}^z \end{pmatrix}.$$
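
One way to build `G` once `Dx`, `Dy`, and `Dz` exist is to copy their non-zeros into a single triplet list with shifted row offsets; the following is a rough sketch (names and layout assumed, not the assignment's required code). libigl also provides concatenation utilities such as `igl::cat`, which may be more convenient depending on your version.

```cpp
#include <Eigen/Sparse>
#include <vector>

// Stack Dx, Dy, Dz vertically into G = [Dx; Dy; Dz].
void stack_gradient(
  const Eigen::SparseMatrix<double> & Dx,
  const Eigen::SparseMatrix<double> & Dy,
  const Eigen::SparseMatrix<double> & Dz,
  Eigen::SparseMatrix<double> & G)
{
  typedef Eigen::Triplet<double> T;
  std::vector<T> triplets;
  const Eigen::SparseMatrix<double> * Ds[3] = {&Dx, &Dy, &Dz};
  int row_offset = 0;
  for(int d = 0; d < 3; d++)
  {
    // Copy all non-zeros of this block, shifting its rows down
    for(int outer = 0; outer < Ds[d]->outerSize(); outer++)
    {
      for(Eigen::SparseMatrix<double>::InnerIterator it(*Ds[d], outer); it; ++it)
      {
        triplets.emplace_back(row_offset + it.row(), it.col(), it.value());
      }
    }
    row_offset += Ds[d]->rows();
  }
  G.resize(row_offset, Dx.cols());
  G.setFromTriplets(triplets.begin(), triplets.end());
}
```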
This leads to addressing our second assumption.
At this point, we would have liked our input normals to be given
component-wise on the staggered grids. Then we could immediately stick
them into $\mathbf{v}$. But this doesn't make much sense, as each normal $\mathbf{n}_\ell$
lives
at its associated point $\mathbf{p}_\ell$, regardless of any grid.

To remedy this, we will distribute each component of each input normal $\mathbf{n}_\ell$
to $\mathbf{v}$
at the corresponding staggered grid node locations.
For example, consider the normal $\mathbf{n}_\ell$ at some point $\mathbf{p}_\ell$. Conceptually,
we'll think of the
$x$-component of the normal $n^x_\ell$
as floating in the
staggered grid corresponding to $\mathbf{D}^x$, in between the eight staggered grid
locations surrounding $\mathbf{p}_\ell$:

Each of these staggered grid nodes has a corresponding value in the vector $\mathbf{v}^x$.
We will distribute $n^x_\ell$ to these entries in $\mathbf{v}^x$
by adding a partial amount
of $n^x_\ell$
to each. I.e.,

$$v^x_{i-\frac{1}{2},j,k} = w_{i-\frac{1}{2},j,k}\!\left(\mathbf{p}_\ell\right)\ n^x_\ell,$$

where $w_{i-\frac{1}{2},j,k}(\mathbf{p}_\ell)$ is the trilinear interpolation weight associated with the
staggered grid node $\mathbf{x}_{i-\frac{1}{2},j,k}$
to interpolate a value at the point $\mathbf{p}_\ell$.
The trilinear interpolation weights are chosen so that interpolating $\mathbf{v}^x$ at $\mathbf{p}_\ell$ reproduces the normal component:

$$n^x_\ell = \sum_{i,j,k} w_{i-\frac{1}{2},j,k}\!\left(\mathbf{p}_\ell\right)\ v^x_{i-\frac{1}{2},j,k}.$$
Since we need to do this for the $x$-component of each input normal, we will
assemble a sparse matrix $\mathbf{W}^x \in \mathbb{R}^{n \times (n_x-1) \cdot n_y \cdot n_z}$
that interpolates $\mathbf{v}^x$
at each point $\mathbf{p}_\ell$:

$$\left( \mathbf{W}^x \mathbf{v}^x \right) \in \mathbb{R}^{n}.$$
The transpose of $\mathbf{W}^x$ is not quite its
inverse, but instead can
be interpreted as distributing values onto the staggered grid locations where $\mathbf{v}^x$
lives:

$$\mathbf{v}^x = \left(\mathbf{W}^x\right)^{\top} \mathbf{N}^x,$$

where $\mathbf{N}^x \in \mathbb{R}^{n}$ is the vector of $x$-components of the input normals.
Using this definition of $\mathbf{v}^x$ and analogously for $\mathbf{v}^y$
and $\mathbf{v}^z$,
we can
construct the vector $\mathbf{v}$
in our energy minimization problem above.
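
To make the distribution step concrete, here is a rough sketch of how the normals could be scattered onto the three staggered grids, assuming an interpolation helper with the inputs of the `fd_interpolate` task below (the exact signature, the staggered-grid sizes, and the half-step corner shifts are assumptions for illustration):

```cpp
#include <Eigen/Core>
#include <Eigen/Sparse>

// Forward declaration of the interpolation helper described in the tasks
// below (signature assumed for this sketch).
void fd_interpolate(
  const int nx, const int ny, const int nz, const double h,
  const Eigen::RowVector3d & corner,
  const Eigen::MatrixXd & P,
  Eigen::SparseMatrix<double> & W);

// Distribute the x-, y-, z-components of the input normals N (n by 3) onto
// staggered-grid vectors vx, vy, vz.
void distribute_normals(
  const int nx, const int ny, const int nz, const double h,
  const Eigen::RowVector3d & corner,
  const Eigen::MatrixXd & P,
  const Eigen::MatrixXd & N,
  Eigen::VectorXd & vx,
  Eigen::VectorXd & vy,
  Eigen::VectorXd & vz)
{
  Eigen::SparseMatrix<double> Wx, Wy, Wz;
  // Staggered grids: one fewer node in the staggered direction, corner
  // shifted by half a grid step in that direction.
  fd_interpolate(nx-1, ny, nz, h, corner + Eigen::RowVector3d(0.5*h, 0, 0), P, Wx);
  fd_interpolate(nx, ny-1, nz, h, corner + Eigen::RowVector3d(0, 0.5*h, 0), P, Wy);
  fd_interpolate(nx, ny, nz-1, h, corner + Eigen::RowVector3d(0, 0, 0.5*h), P, Wz);
  // vx = Wx^T Nx, etc.: the transpose "distributes" each normal component.
  vx = Wx.transpose() * N.col(0);
  vy = Wy.transpose() * N.col(1);
  vz = Wz.transpose() * N.col(2);
}
```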
BTW, what's Poisson got to do with it?
The discrete energy minimization problem we've written looks like the squared norm of some gradients. An analogous energy in the smooth world is the Dirichlet energy:

$$E(g) = \int_{\Omega} \left\| \nabla g \right\|^2 \; d\mathbf{x}.$$

To minimize this energy with respect to $g$ as an unknown function, we need to invoke Calculus of Variations and Green's First Identity. In doing so we find that minimizers will satisfy:

$$\nabla \cdot \nabla g = 0 \text{ on } \Omega,$$

known as Laplace's Equation.

If we instead start with a slightly different energy:

$$E(g) = \int_{\Omega} \left\| \nabla g - \mathbf{V} \right\|^2 \; d\mathbf{x},$$

where $\mathbf{V}$ is a vector-valued function, then applying the same machinery we find that minimizers will satisfy:

$$\nabla \cdot \nabla g = \nabla \cdot \mathbf{V} \text{ on } \Omega,$$

known as Poisson's equation.
Notice that if we interpret the transpose of our gradient matrix $\mathbf{G}^{\top}$
as a divergence matrix (we can and we should), then the structure of these smooth energies and equations is directly preserved in our discrete energies and equations.
This kind of structure preservation is a major criterion for judging discrete methods.
Constant functions have no gradient. This means that we can add a constant
function to our implicit function $g$ without changing its gradient:

$$\nabla (g + c) = \nabla g + \nabla c = \nabla g + 0 = \nabla g.$$

The same is true for our discrete gradient matrix $\mathbf{G}$: if the vector of grid
values $\mathbf{g}$
is constant then $\mathbf{G}\mathbf{g}$
will be a vector of zeros.
This is potentially problematic for our least squares solve: there are many
solutions, since we can just add a constant. Fortunately, we don't really
care. It's elegant to say that our surface is defined at $g = 0$, but we'd be
just as happy declaring that our surface is defined at $g = \sigma$.

To this end we just need to find a solution $\mathbf{g}$, and then to pick a good
iso-value $\sigma$.
As suggested in [Kazhdan et al. 2006], we can pick a good iso-value by
interpolating our solution $\mathbf{g}$ at each of the input points $\mathbf{p}_\ell$ (since we know
they're on the surface) and averaging their values. For an appropriate
interpolation matrix $\mathbf{W}$
on the primary (non-staggered) grid this can be
written as:

$$\sigma = \frac{1}{n} \mathbf{1}^{\top} \mathbf{W} \mathbf{g},$$

where $\mathbf{1} \in \mathbb{R}^{n}$ is a vector of ones.
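
In code, this average might be computed along the following lines (a sketch; `W` here is assumed to be the primary-grid interpolation matrix from the `fd_interpolate` task below and `g` the solved grid values):

```cpp
#include <Eigen/Core>
#include <Eigen/Sparse>

// Average the interpolated implicit function values at the n input points
// to obtain the iso-value sigma = (1/n) * 1^T W g.
double iso_value(
  const Eigen::SparseMatrix<double> & W, // n by (nx*ny*nz) interpolation matrix
  const Eigen::VectorXd & g)             // solved grid values
{
  const Eigen::VectorXd g_at_points = W * g; // g interpolated at each input point
  return g_at_points.mean();
}
```

Equivalently, this value can be subtracted from `g` and the surface contoured at zero.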
Besides the insights above, a major contribution of [Kazhdan et al. 2006] was to set up and solve this problem on an adaptive grid rather than a regular grid. They also track the "confidence" of their input data, affecting how they smooth and interpolate values. As a result, their method is one of the most highly used surface reconstruction techniques to this day.
Consider adding your own insights to the Wikipedia entry for this method.
This reading task is not directly graded, but it's expected that you read and understand this paper before moving on to the other tasks.
Given a regular finite-difference grid described by the number of nodes on each
side (`nx`, `ny` and `nz`), the grid spacing (`h`), and the location of the
bottom-left-front-most corner node (`corner`), and a list of points (`P`),
construct a sparse matrix `W` of trilinear interpolation weights so that
`P = W * x`.
Given a regular finite-difference grid described by the number of nodes on each
side (`nx`, `ny` and `nz`), the grid spacing (`h`), and a desired direction,
construct a sparse matrix `D` to compute first partial derivatives in the given
direction onto the staggered grid in that direction.
Given a regular finite-difference grid described by the number of nodes on each
side (`nx`, `ny` and `nz`), and the grid spacing (`h`), construct a sparse
matrix `G` to compute gradients with each component on its respective staggered
grid.
Use `fd_partial_derivative` to compute `Dx`, `Dy`, and `Dz` and then simply
concatenate these together to make `G`.
Given a list of points `P` and the list of corresponding normals `N`, construct
an implicit function `g` over a regular grid (built for you) using the approach
described above.
You will need to distribute the given normals `N` onto the staggered grid
values in `v` via sparse trilinear interpolation matrices `Wx`, `Wy` and `Wz`
for each staggered grid.
Then you will need to construct and solve the linear system $\mathbf{G}^{\top} \mathbf{G} \mathbf{g} = \mathbf{G}^{\top} \mathbf{v}$.
Determine the iso-level `sigma` to extract from `g`.
Feed this implicit function `g` to `igl::copyleft::marching_cubes` to contour
this function into a triangle mesh `V` and `F`.
Make use of `fd_interpolate` and `fd_grad`.
Eigen has many different sparse matrix solvers. For these very regular matrices, it seems that the conjugate gradient method will outperform direct methods such as Cholesky factorization. Try `Eigen::BiCGSTAB`.
Debug in debug mode with assertions enabled. For Unix users on the command line use:

    cmake -DCMAKE_BUILD_TYPE=Debug ../

but then try out your code in release mode for much better performance:

    cmake -DCMAKE_BUILD_TYPE=Release ../