# Posterior shape models

In this tutorial we will use Gaussian processes for regression tasks and experiment with the concept of posterior shape models. This will form the basics for the next tutorial, where we will see how these tools can be applied to construct a reconstruction of partial shapes.

##### Related resources

The following resources from our online course may provide some helpful context for this tutorial:

- The regression problem (Article)
- Gaussian process regression (Video)
- Posterior models for different kernels (Article)

##### Preparation

As in the previous tutorials, we start by importing some commonly used objects and initializing the system.
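A minimal setup, assuming the standard Scalismo imports and a fixed random seed for reproducibility (the exact import list and seed value are illustrative), might look like this:

```scala
import scalismo.geometry._
import scalismo.common._
import scalismo.common.interpolation.NearestNeighborInterpolator
import scalismo.mesh._
import scalismo.io.StatisticalModelIO
import scalismo.statisticalmodel._
import scalismo.ui.api._
import breeze.linalg.{DenseMatrix, DenseVector}

// Initialize the native libraries and seed the random number generator
scalismo.initialize()
implicit val rng = scalismo.utils.Random(42)

// Start the graphical user interface
val ui = ScalismoUI()
```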

We also load and visualize the face model:
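A sketch of this step; the file path `datasets/bfm.h5` and the group name are assumptions and should be adapted to your local data:

```scala
// Load a statistical mesh model from file (path is an assumption)
val model = StatisticalModelIO
  .readStatisticalMeshModel(new java.io.File("datasets/bfm.h5"))
  .get

// Visualize it in its own group in the UI
val modelGroup = ui.createGroup("modelGroup")
val ssmView = ui.show(modelGroup, model, "model")
```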

## Fitting observed data using Gaussian process regression

The reason we build statistical models is that we want to use them to explain data. More precisely, given some observed data, we fit the model to the data and obtain as a result a distribution over the model parameters. In our case, the model is a Gaussian process model of shape deformations, and the data are observed shape deformations, i.e. deformation vectors from the reference surface.

To illustrate this process, we simulate some data. We generate a deformation vector at the tip of the nose, which corresponds to a really long nose:
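One way to sketch this simulation; the point id `8156` for the nose tip is an illustrative assumption and depends on the model used:

```scala
// The point id of the nose tip is model-specific; 8156 is an example value
val idNoseTip = PointId(8156)
val noseTipReference = model.referenceMesh.pointSet.point(idNoseTip)
val noseTipMean = model.mean.pointSet.point(idNoseTip)

// Doubling the mean deformation at the nose tip simulates a really long nose
val noseTipDeformation = (noseTipMean - noseTipReference) * 2.0
```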

To visualize this deformation, we need to define a `DiscreteField`, which can then be passed to the `show` method of our `ui` object.
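Assuming the `noseTipReference` and `noseTipDeformation` values from the simulation step above, such a field can be constructed and shown as follows (group and object names are illustrative):

```scala
// A deformation field defined on a domain containing only the observed point
val noseTipDomain = UnstructuredPointsDomain(IndexedSeq(noseTipReference))
val noseTipDeformationField =
  DiscreteField(noseTipDomain, IndexedSeq(noseTipDeformation))

// Show the single deformation vector in the UI
val observationGroup = ui.createGroup("observation")
ui.show(observationGroup, noseTipDeformationField, "noseTip")
```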

In the next step we set up the regression. The Gaussian process model assumes that the deformation is observed only up to some uncertainty, which can be modelled using a normal distribution.

In Scalismo, the data for the regression is specified by a sequence of triples, consisting of the point of the reference, the corresponding deformation vector, as well as the noise at that point:
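Assuming the observed point and deformation from above, the training data could be assembled like this (the unit noise variance is an illustrative choice):

```scala
// Noise model: zero-mean 3D normal distribution with unit variance (an assumption)
val noise = MultivariateNormalDistribution(
  DenseVector.zeros[Double](3),
  DenseMatrix.eye[Double](3)
)

// One triple per observation: (reference point, deformation vector, noise)
val regressionData = IndexedSeq((noseTipReference, noseTipDeformation, noise))
```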

We can now obtain the regression result by feeding this data to the `regression` method of the `GaussianProcess` object:
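A sketch of the regression call, assuming the `regressionData` from above. Since the regression is defined for continuous Gaussian processes, the model's discrete GP is first interpolated (the interpolator choice and exact API may differ between Scalismo versions):

```scala
// Interpolate the model's discrete GP to obtain a continuous low-rank GP
val gp = model.gp.interpolate(NearestNeighborInterpolator())

// Gaussian process regression: condition the GP on the observed deformation
val posteriorGP = GaussianProcess.regression(gp, regressionData)
```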

Note that the result of the regression is again a Gaussian process, over the same domain as the original process. We call this the *posterior process*.
This construction is very important in Scalismo. Therefore, a convenience method is defined directly on the Gaussian process object, which lets us write the same more succinctly:
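Assuming the interpolated process `gp` and the `regressionData` from the previous steps, this reads:

```scala
// Equivalent to GaussianProcess.regression(gp, regressionData)
val posteriorGP = gp.posterior(regressionData)
```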

Independently of how you call the method, the returned object is a continuous (low-rank) Gaussian process, from which we can now sample deformations at any set of points:
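For instance, we might discretize the posterior at the points of the reference mesh and draw a few samples (assuming `posteriorGP` from above; names are illustrative):

```scala
val sampleGroup = ui.createGroup("posteriorSamples")
for (i <- 0 until 3) {
  // Sample a deformation field at all reference mesh points
  val sample = posteriorGP.sampleAtPoints(model.referenceMesh.pointSet)
  ui.show(sampleGroup, sample, s"sample-$i")
}
```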

### Posterior of a StatisticalMeshModel:

Given that the StatisticalMeshModel is merely a wrapper around a GP, the same posterior functionality is available for statistical mesh models:
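A sketch, reusing the nose-tip observation and noise model from above (the point id `8156` is again an illustrative assumption):

```scala
// The observed position of the nose tip: reference point plus deformation
val observedNoseTipPosition = noseTipReference + noseTipDeformation

// Training data for the discrete case: (point id, observed position, noise)
val discreteTrainingData = IndexedSeq((PointId(8156), observedNoseTipPosition, noise))

// The posterior is again a StatisticalMeshModel
val posteriorModel = model.posterior(discreteTrainingData)
```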

Notice that in this case, since we are working with a discrete Gaussian process, the observed data is specified in terms of the *point identifier* of the nose tip point instead of its 3D coordinates.

Let's visualize the obtained posterior model:
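For example (group and object names are illustrative):

```scala
val posteriorModelGroup = ui.createGroup("posteriorModel")
val posteriorView = ui.show(posteriorModelGroup, posteriorModel, "posteriorModel")
```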

*Exercise: sample a few random faces from the graphical interface using the random button. Notice how all faces display large noses :) with the tip of the nose remaining close to the selected landmark.*

Here again we obtain much more than just a single face instance fitting the input data: we get a full normal distribution of shapes fitting the observation. The **most probable** shape, and hence our best fit, is the **mean** of the posterior.

Sampling from the posterior model, we notice that we tend to get faces with rather large noses. This is because we chose our observation to be twice the length of the average (mean) deformation at the tip of the nose.

#### Landmark uncertainty:

When specifying the training data for the posterior GP computation, we also model the uncertainty of the input data. The variance of this noise model has a large influence on the resulting posterior distribution. We should always choose it such that it corresponds as closely as possible to the real uncertainty of our observation.

To see how this variance influences the posterior, we perform the posterior computation again, this time with a 5 times larger noise variance.
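A sketch of this variation, scaling the noise covariance from the earlier step by a factor of 5 (names are illustrative):

```scala
// Same observation as before, but with 5 times larger noise variance
val largeNoise = MultivariateNormalDistribution(
  DenseVector.zeros[Double](3),
  DenseMatrix.eye[Double](3) * 5.0
)
val largeNoiseData = IndexedSeq((PointId(8156), observedNoseTipPosition, largeNoise))
val posteriorModelLargeNoise = model.posterior(largeNoiseData)

val largeNoiseGroup = ui.createGroup("posteriorLargeNoise")
ui.show(largeNoiseGroup, posteriorModelLargeNoise, "posteriorLargeNoise")
```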

We observe that there is now much more variance left in this posterior process, which is a consequence of the larger uncertainty associated with the observed data.