hippylib.modeling package¶
Submodules¶
hippylib.modeling.PDEProblem module¶
-
class
hippylib.modeling.PDEProblem.
PDEProblem
[source]¶ Bases:
object
Consider the PDE problem: Given \(m\), find \(u\) such that
\[F(u, m, p) = ( f(u, m), p) = 0, \quad \forall p.\]Here \(F\) is linear in \(p\), but it may be nonlinear in \(u\) and \(m\).
-
apply_ij
(i, j, dir, out)[source]¶ Given \(u, m, p\); compute \(\delta_{ij} F(u, m, p; \hat{i}, \tilde{j})\) in the direction \(\tilde{j} =\)
dir
, \(\forall \hat{i}.\)
-
apply_ijk
(i, j, k, x, jdir, kdir, out)[source]¶ Given
x = [u,m,p]
; compute \(\delta_{ijk} F(u,m,p; \hat{i}, \tilde{j}, \tilde{k})\) in the direction \((\tilde{j},\tilde{k}) = (\)jdir,kdir
), \(\forall \hat{i}.\)
-
evalGradientParameter
(x, out)[source]¶ Given \(u, m, p\); evaluate \(\delta_m F(u, m, p; \hat{m}),\, \forall \hat{m}.\)
-
setLinearizationPoint
(x, gauss_newton_approx)[source]¶ Set the values of the state and parameter for the incremental forward and adjoint solvers. Set whether the Gauss-Newton approximation of the Hessian should be used.
-
solveAdj
(state, x, adj_rhs, tol)[source]¶ Solve the linear adjoint problem: Given \(m\), \(u\); find \(p\) such that
\[\delta_u F(u, m, p;\hat{u}) = 0, \quad \forall \hat{u}.\]
-
solveFwd
(state, x, tol)[source]¶ Solve the possibly nonlinear forward problem: Given \(m\), find \(u\) such that
\[\delta_p F(u, m, p;\hat{p}) = 0, \quad \forall \hat{p}.\]
-
solveIncremental
(out, rhs, is_adj, mytol)[source]¶ If
is_adj = False
:Solve the forward incremental system: Given \(u, m\), find \(\tilde{u}\) such that
\[\delta_{pu} F(u, m, p; \hat{p}, \tilde{u}) = \mbox{rhs}, \quad \forall \hat{p}.\]If
is_adj = True
:Solve the adjoint incremental system: Given \(u, m\), find \(\tilde{p}\) such that
\[\delta_{up} F(u, m, p; \hat{u}, \tilde{p}) = \mbox{rhs}, \quad \forall \hat{u}.\]
-
-
class
hippylib.modeling.PDEProblem.
PDEVariationalProblem
(Vh, varf_handler, bc, bc0, is_fwd_linear=False)[source]¶ Bases:
hippylib.modeling.PDEProblem.PDEProblem
-
apply_ij
(i, j, dir, out)[source]¶ Given \(u, m, p\); compute \(\delta_{ij} F(u, m, p; \hat{i}, \tilde{j})\) in the direction \(\tilde{j} =\)
dir
, \(\forall \hat{i}\).
-
apply_ijk
(i, j, k, x, jdir, kdir, out)[source]¶ Given
x = [u,m,p]
; compute \(\delta_{ijk} F(u,m,p; \hat{i}, \tilde{j}, \tilde{k})\) in the direction \((\tilde{j},\tilde{k}) = (\)jdir,kdir
), \(\forall \hat{i}.\)
-
evalGradientParameter
(x, out)[source]¶ Given \(u, m, p\); evaluate \(\delta_m F(u, m, p; \hat{m}),\, \forall \hat{m}.\)
-
setLinearizationPoint
(x, gauss_newton_approx)[source]¶ Set the values of the state and parameter for the incremental forward and adjoint solvers.
-
solveAdj
(adj, x, adj_rhs, tol)[source]¶ Solve the linear adjoint problem: Given \(m, u\); find \(p\) such that
\[\delta_u F(u, m, p;\hat{u}) = 0, \quad \forall \hat{u}.\]
-
solveFwd
(state, x, tol)[source]¶ Solve the possibly nonlinear forward problem: Given \(m\), find \(u\) such that
\[\delta_p F(u, m, p;\hat{p}) = 0,\quad \forall \hat{p}.\]
-
solveIncremental
(out, rhs, is_adj, mytol)[source]¶ If
is_adj == False
:Solve the forward incremental system: Given \(u, m\), find \(\tilde{u}\) such that
\[\delta_{pu} F(u, m, p; \hat{p}, \tilde{u}) = \mbox{rhs},\quad \forall \hat{p}.\]If
is_adj == True
:Solve the adjoint incremental system: Given \(u, m\), find \(\tilde{p}\) such that
\[\delta_{up} F(u, m, p; \hat{u}, \tilde{p}) = \mbox{rhs},\quad \forall \hat{u}.\]
-
hippylib.modeling.expression module¶
hippylib.modeling.misfit module¶
-
class
hippylib.modeling.misfit.
ContinuousStateObservation
(Vh, dX, bcs, form=None)[source]¶ Bases:
hippylib.modeling.misfit.Misfit
This class implements continuous state observations in a subdomain \(X \subset \Omega\) or \(X \subset \partial \Omega\).
Constructor:
Vh
: the finite element space for the state variable.dX
: the integrator on the subdomain X where observations are present. E.g.dX = dl.dx
means observations on all of \(\Omega\) anddX = dl.ds
means observations on all \(\partial \Omega\).bcs
: If the forward problem imposes Dirichlet boundary conditions \(u = u_D \mbox{ on } \Gamma_D\);bcs
is a list ofdolfin.DirichletBC
objects that prescribe homogeneous Dirichlet conditions \(u = 0 \mbox{ on } \Gamma_D\).form
: ifform = None
we compute the \(L^2(X)\) misfit: \(\int_X (u - u_d)^2 dX,\) otherwise the integrand specified in the given form will be used.-
apply_ij
(i, j, dir, out)[source]¶ Apply the second variation \(\delta_{ij}\) (
i,j = STATE,PARAMETER
) of the cost in directiondir
.
-
cost
(x)[source]¶ Given x evaluate the cost functional. Only the state u and (possibly) the parameter m are accessed.
-
-
class
hippylib.modeling.misfit.
Misfit
[source]¶ Bases:
object
Abstract class to model the misfit component of the cost functional. In the following
x
will denote the variable[u, m, p]
, denoting respectively the stateu
, the parameterm
, and the adjoint variablep
.The methods in the class misfit will usually access the state u and possibly the parameter
m
. The adjoint variables will never be accessed.-
apply_ij
(i, j, dir, out)[source]¶ Apply the second variation \(\delta_{ij}\) (
i,j = STATE,PARAMETER
) of the cost in directiondir
.
-
cost
(x)[source]¶ Given x evaluate the cost functional. Only the state u and (possibly) the parameter m are accessed.
-
-
class
hippylib.modeling.misfit.
MultiStateMisfit
(misfits)[source]¶ Bases:
hippylib.modeling.misfit.Misfit
-
apply_ij
(i, j, dir, out)[source]¶ Apply the second variation \(\delta_{ij}\) (
i,j = STATE,PARAMETER
) of the cost in directiondir
.
-
cost
(x)[source]¶ Given x evaluate the cost functional. Only the state u and (possibly) the parameter m are accessed.
-
-
class
hippylib.modeling.misfit.
PointwiseStateObservation
(Vh, obs_points)[source]¶ Bases:
hippylib.modeling.misfit.Misfit
This class implements pointwise state observations at given locations. It assumes that the state variable is a scalar function.
Constructor:
Vh
is the finite element space for the state variableobs_points
is a 2D array (number of points by geometric dimension) that stores the locations of the observations.-
apply_ij
(i, j, dir, out)[source]¶ Apply the second variation \(\delta_{ij}\) (
i,j = STATE,PARAMETER
) of the cost in directiondir
.
-
cost
(x)[source]¶ Given x evaluate the cost functional. Only the state u and (possibly) the parameter m are accessed.
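Conceptually, the pointwise misfit evaluates the state at the observation locations and forms a least-squares mismatch with the data. The sketch below illustrates that contract in plain Python for a 1D piecewise-linear state; the helper names and the noise weighting are hypothetical, not the hiPPYlib API.

```python
# Conceptual sketch (not the hiPPYlib implementation): sample a 1D
# piecewise-linear state at observation points and form the misfit.

def interpolate(nodes, values, x):
    """Evaluate a piecewise-linear function at x (hypothetical helper)."""
    for i in range(len(nodes) - 1):
        if nodes[i] <= x <= nodes[i + 1]:
            w = (x - nodes[i]) / (nodes[i + 1] - nodes[i])
            return (1 - w) * values[i] + w * values[i + 1]
    raise ValueError("point outside the mesh")

def pointwise_misfit(nodes, u, obs_points, data, noise_var=1.0):
    """0.5 / sigma^2 * sum_j (u(x_j) - d_j)^2."""
    return 0.5 / noise_var * sum(
        (interpolate(nodes, u, x) - d) ** 2
        for x, d in zip(obs_points, data))

nodes = [0.0, 0.5, 1.0]
u = [0.0, 1.0, 0.0]          # hat function on three nodes
print(pointwise_misfit(nodes, u, [0.25, 0.75], [0.5, 0.5]))  # -> 0.0
```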
-
hippylib.modeling.model module¶
-
class
hippylib.modeling.model.
Model
(problem, prior, misfit)[source]¶ This class contains the full description of the inverse problem. As inputs it takes a
PDEProblem object
, aPrior
object, and aMisfit
object.In the following we will denote with
u
the state variablem
the (model) parameter variablep
the adjoint variable
Create a model given:
- problem: the description of the forward/adjoint problem and all the sensitivities
- prior: the prior component of the cost functional
- misfit: the misfit componenent of the cost functional
-
Rsolver
()[source]¶ Return an object
Rsolver
that is a suitable solver for the regularization operator \(R\).The solver object should implement the method
Rsolver.solve(z,r)
such that \(Rz \approx r\).
-
applyC
(dm, out)[source]¶ Apply the \(C\) block of the Hessian to an (incremental) parameter variable, i.e.
out
= \(C dm\)Parameters:
dm
the (incremental) parameter variableout
the action of the \(C\) block ondm
Note
This routine assumes that
out
has the correct shape.
-
applyCt
(dp, out)[source]¶ Apply the transpose of the \(C\) block of the Hessian to an (incremental) adjoint variable.
out
= \(C^T dp\)Parameters:
dp
the (incremental) adjoint variableout
the action of the \(C^T\) block ondp
Note
This routine assumes that
out
has the correct shape.
-
applyR
(dm, out)[source]¶ Apply the regularization \(R\) to an (incremental) parameter variable.
out
= \(R dm\)Parameters:
dm
the (incremental) parameter variableout
the action of \(R\) ondm
Note
This routine assumes that
out
has the correct shape.
-
applyWmm
(dm, out)[source]¶ Apply the \(W_{mm}\) block of the Hessian to an (incremental) parameter variable.
out
= \(W_{mm} dm\)Parameters:
dm
the (incremental) parameter variableout
the action of the \(W_{mm}\) block ondm
Note
This routine assumes that
out
has the correct shape.
-
applyWmu
(du, out)[source]¶ Apply the \(W_{mu}\) block of the Hessian to an (incremental) state variable.
out
= \(W_{mu} du\)Parameters:
du
the (incremental) state variableout
the action of the \(W_{mu}\) block ondu
Note
This routine assumes that
out
has the correct shape.
-
applyWum
(dm, out)[source]¶ Apply the \(W_{um}\) block of the Hessian to an (incremental) parameter variable.
out
= \(W_{um} dm\)Parameters:
dm
the (incremental) parameter variableout
the action of the \(W_{um}\) block ondm
Note
This routine assumes that
out
has the correct shape.
-
applyWuu
(du, out)[source]¶ Apply the \(W_{uu}\) block of the Hessian to an (incremental) state variable.
out
= \(W_{uu} du\)Parameters:
du
the (incremental) state variableout
the action of the \(W_{uu}\) block ondu
Note
This routine assumes that
out
has the correct shape.
-
cost
(x)[source]¶ Given the list
x = [u,m,p]
which describes the state, parameter, and adjoint variable compute the cost functional as the sum of the misfit functional and the regularization functional.Return the list [cost functional, regularization functional, misfit functional]
Note
p
is not needed to compute the cost functional
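As a toy illustration of the return contract (a hypothetical quadratic model, not the hiPPYlib implementation), a cost method returns the list [total, regularization, misfit] and never touches the adjoint entry of x:

```python
# Toy quadratic model (assumed for illustration): misfit = 0.5*||u - d||^2,
# regularization = 0.5*delta*||m||^2, returned as [total, reg, misfit].

def cost(x, data, delta=1.0):
    u, m, p = x                      # p is never accessed, as documented
    misfit = 0.5 * sum((ui - di) ** 2 for ui, di in zip(u, data))
    reg = 0.5 * delta * sum(mi ** 2 for mi in m)
    return [misfit + reg, reg, misfit]

print(cost([[1.0, 2.0], [0.5], None], [1.0, 1.0]))  # -> [0.625, 0.125, 0.5]
```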
-
evalGradientParameter
(x, mg, misfit_only=False)[source]¶ Evaluate the gradient for the variational parameter equation at the point
x=[u,m,p]
.Parameters:
x = [u,m,p]
the point at which to evaluate the gradient.mg
the variational gradient \((g, mtest)\), mtest being a test function in the parameter space (Output parameter)
Returns the norm of the gradient in the correct inner product, \(\|g\| = \sqrt{(g,g)}\).
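A sketch of how such a norm can be computed (assuming, purely for illustration, a diagonal mass matrix M in the parameter space): solve \(M g = mg\) for the primal gradient, then take \(\sqrt{(mg, g)}\).

```python
import math

# Sketch (assumed diagonal mass matrix M, not the hiPPYlib implementation):
# the variational gradient mg is mapped to g = M^{-1} mg, and the norm in
# the correct inner product is sqrt(mg . g).

def gradient_norm(mg, M_diag):
    g = [mgi / Mi for mgi, Mi in zip(mg, M_diag)]     # solve M g = mg
    return math.sqrt(sum(mgi * gi for mgi, gi in zip(mg, g)))

print(gradient_norm([2.0, 3.0], [1.0, 1.0]))  # sqrt(13), since M = I here
```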
-
generate_vector
(component='ALL')[source]¶ By default, return the list
[u,m,p]
where:u
is any object that describes the state variablem
is adolfin.Vector
object that describes the parameter variable. (Needs to support linear algebra operations)p
is any object that describes the adjoint variable
If
component = STATE
return onlyu
If
component = PARAMETER
return onlym
If
component = ADJOINT
return onlyp
-
setPointForHessianEvaluations
(x, gauss_newton_approx=False)[source]¶ Specify the point
x = [u,m,p]
at which the Hessian operator (or the Gauss-Newton approximation) needs to be evaluated.Parameters:
x = [u,m,p]
: the point at which the Hessian or its Gauss-Newton approximation needs to be evaluated.gauss_newton_approx (bool)
: whether to use Gauss-Newton approximation (default: use Newton)
Note
This routine should either:
- simply store a copy of x and evaluate the action of the blocks of the Hessian on the fly
- or partially precompute the blocks of the Hessian (if feasible)
-
solveAdj
(out, x, tol=1e-09)[source]¶ Solve the linear adjoint problem.
Parameters:
out
: is the solution of the adjoint problem (i.e. the adjointp
) (Output parameter)x = [u, m, p]
provides- the parameter variable
m
for assembling the adjoint operator - the state variable
u
for assembling the adjoint right hand side
Note
p
is not accessed- the parameter variable
tol
is the relative tolerance for the solution of the adjoint problem. [Default 1e-9].
-
solveAdjIncremental
(sol, rhs, tol)[source]¶ Solve the incremental adjoint problem for a given right-hand side
Parameters:
sol
the solution of the incremental adjoint problem (Output)rhs
the right hand side of the linear systemtol
the relative tolerance for the linear system
-
solveFwd
(out, x, tol=1e-09)[source]¶ Solve the (possibly non-linear) forward problem.
Parameters:
out
: is the solution of the forward problem (i.e. the state) (Output parameters)x = [u,m,p]
provides- the parameter variable
m
for the solution of the forward problem - the initial guess
u
if the forward problem is non-linear
Note
p
is not accessed- the parameter variable
tol
is the relative tolerance for the solution of the forward problem. [Default 1e-9].
hippylib.modeling.modelVerify module¶
-
hippylib.modeling.modelVerify.
modelVerify
(model, m0, innerTol, is_quadratic=False, misfit_only=False, verbose=True, eps=None)[source]¶ Verify the reduced gradient and the Hessian of a model. It produces two log-log plots of the finite-difference checks for the gradient and the Hessian, and also checks the symmetry of the Hessian.
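The essence of the finite-difference gradient check can be sketched on a toy functional (a minimal illustration, not the hiPPYlib implementation): if the gradient is correct, the first-order Taylor remainder decays like \(O(\epsilon^2)\).

```python
# Finite-difference check on a toy functional f(m) = sum m_i^2
# (illustration only): |f(m + eps*d) - f(m) - eps*(grad f . d)| = O(eps^2).

def f(m):                       # toy cost functional
    return sum(mi ** 2 for mi in m)

def grad_f(m):                  # its analytic gradient
    return [2.0 * mi for mi in m]

def fd_check(m, d, eps):
    fp = f([mi + eps * di for mi, di in zip(m, d)])
    gd = sum(gi * di for gi, di in zip(grad_f(m), d))
    return abs(fp - f(m) - eps * gd)

m, d = [1.0, -2.0], [0.5, 0.25]
errs = [fd_check(m, d, 10.0 ** -k) for k in range(1, 5)]
print(errs)   # each step in eps shrinks the error quadratically
```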
hippylib.modeling.pointwiseObservation module¶
-
hippylib.modeling.pointwiseObservation.
assemblePointwiseObservation
(Vh, targets)[source]¶ Assemble the pointwise observation matrix:
Inputs
Vh
: FEniCS finite element space.targets
: observation points (numpy array).
-
hippylib.modeling.pointwiseObservation.
exportPointwiseObservation
(Vh, B, data, fname, varname='observation')[source]¶ This function writes a VTK PolyData file to visualize pointwise data.
Inputs:
Vh
: FEniCS finite element space.B
: observation operator.data
:dolfin.Vector
containing the data.fname
: filename for the file to export (without extension).varname
: name of the variable for the .vtk file.
-
hippylib.modeling.pointwiseObservation.
write_vtk
(points, data, fname, varname='observation')[source]¶ This function writes a VTK PolyData file to visualize pointwise data.
Inputs:
points
: locations of the points (numpy array of size equal to number of points times space dimension).data
: pointwise values (numpy array of size equal to number of points).fname
: filename for the .vtk file to export.varname
: name of the variable for the .vtk file.
hippylib.modeling.posterior module¶
-
class
hippylib.modeling.posterior.
GaussianLRPosterior
(prior, d, U, mean=None)[source]¶ Class for the low rank Gaussian Approximation of the Posterior. This class provides functionality for approximate Hessian apply, solve, and Gaussian sampling based on the low rank factorization of the Hessian.
In particular if \(d\) and \(U\) are the dominant eigenpairs of \(H_{\mbox{misfit}} U[:,i] = d[i] R U[:,i]\) then we have:
- low rank Hessian apply: \(y = ( R + R U D U^{T} R) x.\)
- low rank Hessian solve: \(y = (R^{-1} - U (I + D^{-1})^{-1} U^T) x.\)
- low rank Hessian Gaussian sampling: \(y = ( I - U S U^{T}) x\), where \(S = I - (I + D)^{-1/2}\) and \(x \sim \mathcal{N}(0, R^{-1}).\)
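The apply and solve formulas above are mutual inverses by the Sherman-Morrison-Woodbury identity, using \(U^T R U = I\). A tiny numeric sketch (assumed 2x2 diagonal \(R\) and a rank-1 \(U\), not the hiPPYlib implementation) checks this round trip:

```python
import math

# Round-trip check of the low-rank formulas:
#   apply: y = (R + R U D U^T R) x
#   solve: y = (R^{-1} - U (I + D^{-1})^{-1} U^T) x   (Woodbury, U^T R U = I)

R = [2.0, 3.0]                         # diagonal precision R (assumed)
U = [[1.0 / math.sqrt(2.0)], [0.0]]    # one column, normalized so U^T R U = I
D = [4.0]                              # its generalized eigenvalue

def apply_H(x):
    Rx = [Ri * xi for Ri, xi in zip(R, x)]
    utrx = sum(U[i][0] * Rx[i] for i in range(2))      # U^T R x (scalar)
    corr = [R[i] * U[i][0] * D[0] * utrx for i in range(2)]
    return [Rx[i] + corr[i] for i in range(2)]

def solve_H(y):
    Rinv_y = [yi / Ri for Ri, yi in zip(R, y)]
    uty = sum(U[i][0] * y[i] for i in range(2))        # U^T y (scalar)
    s = 1.0 / (1.0 + 1.0 / D[0])                       # (I + D^{-1})^{-1}
    return [Rinv_y[i] - U[i][0] * s * uty for i in range(2)]

x = [1.0, -1.0]
print(solve_H(apply_H(x)))   # recovers x up to round-off
```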
Construct the Gaussian approximation of the posterior. Input: -
prior
: the prior model. -d
: the dominant generalized eigenvalues of the Hessian misfit. -U
: the dominant generalized eigenvectors of the Hessian misfit \(U^T R U = I.\) -mean
: the MAP point.-
init_vector
(x, dim)[source]¶ Initialize a vector
x
to be compatible with the range/domain of \(H\). Ifdim == "noise"
inizializex
to be compatible with the size of white noise used for sampling.
-
pointwise_variance
(**kwargs)[source]¶ Compute/estimate the pointwise variance of the posterior and prior distributions, and the pointwise variance reduction informed by the data.
See
_Prior.pointwise_variance
for more details.
-
sample
(*args, **kwargs)[source]¶ possible calls:
sample(s_prior, s_post, add_mean=True)
Given a prior sample
s_prior
compute a samples_post
from the posterior.s_prior
is a sample from the prior centered at 0 (input).s_post
is a sample from the posterior (output).- if
add_mean=True
(default) then the samples will be centered at the MAP point.
sample(noise, s_prior, s_post, add_mean=True)
Given
noise
\(\sim \mathcal{N}(0, I)\) compute a samples_prior
from the prior ands_post
from the posterior.noise
is a realization of white noise (input).s_prior
is a sample from the prior (output).s_post
is a sample from the posterior.- if
add_mean=True
(default) then the prior and posterior samples will be centered at the respective means.
hippylib.modeling.prior module¶
-
class
hippylib.modeling.prior.
BiLaplacianPrior
(Vh, gamma, delta, Theta=None, mean=None, rel_tol=1e-12, max_iter=1000, robin_bc=False)[source]¶ Bases:
hippylib.modeling.prior._Prior
This class implements a prior model with covariance matrix \(C = (\delta I + \gamma \mbox{div } \Theta \nabla) ^ {-2}\).
The magnitude of \(\delta\gamma\) governs the variance of the samples, while the ratio \(\frac{\gamma}{\delta}\) governs the correlation length.
Here \(\Theta\) is a SPD tensor that models anisotropy in the covariance kernel.
Construct the prior model. Input:
Vh
: the finite element space for the parametergamma
anddelta
: the coefficients in the PDETheta
: the SPD tensor for anisotropic diffusion of the PDEmean
: the prior mean
-
class
hippylib.modeling.prior.
LaplacianPrior
(Vh, gamma, delta, mean=None, rel_tol=1e-12, max_iter=100)[source]¶ Bases:
hippylib.modeling.prior._Prior
This class implements a prior model with covariance matrix \(C = (\delta I - \gamma \Delta) ^ {-1}\).
The magnitude of \(\gamma\) governs the variance of the samples, while the ratio \(\frac{\gamma}{\delta}\) governs the correlation length.
Note
\(C\) is a trace-class operator only in 1D; it is not a valid prior in 2D and 3D.
Construct the prior model. Input:
Vh
: the finite element space for the parametergamma
anddelta
: the coefficients in the PDEmean
: the prior mean
-
class
hippylib.modeling.prior.
MollifiedBiLaplacianPrior
(Vh, gamma, delta, locations, m_true, Theta=None, pen=10.0, order=2, rel_tol=1e-12, max_iter=1000)[source]¶ Bases:
hippylib.modeling.prior._Prior
This class implements a prior model with covariance matrix \(C = \left( [\delta + \mbox{pen} \sum_i m(x - x_i) ] I + \gamma \mbox{div } \Theta \nabla\right) ^ {-2}\),
where
- \(\Theta\) is a SPD tensor that models anisotropy in the covariance kernel.
- \(x_i (i=1,...,n)\) are points where we assume the value of the parameter to be known exactly (i.e., \(m(x_i) = m_{\mbox{true}}( x_i) \mbox{ for } i=1,...,n).\)
- \(m\) is the mollifier function: \(m(x - x_i) = \exp\left( - \left[\frac{\gamma}{\delta}\| x - x_i \|_{\Theta^{-1}}\right]^{\mbox{order}} \right).\)
pen
is a penalization parameter.
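For a scalar x with \(\Theta = I\) (an assumed simplification), the mollifier can be sketched as:

```python
import math

# The mollifier above for scalar x with Theta = I (assumed simplification):
# m(x - x_i) = exp( -(gamma/delta * |x - x_i|)^order ).

def mollifier(x, xi, gamma, delta, order=2):
    return math.exp(-((gamma / delta) * abs(x - xi)) ** order)

print(mollifier(0.0, 0.0, 1.0, 1.0))   # -> 1.0 at the known point x_i
print(mollifier(3.0, 0.0, 2.0, 1.0))   # decays rapidly away from x_i
```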
The magnitude of \(\delta \gamma\) governs the variance of the samples, while the ratio \(\frac{\gamma}{\delta}\) governs the correlation length.
The prior mean is computed by solving
\[\left( [\delta + \mbox{pen} \sum_i m(x - x_i) ] I + \gamma \mbox{div } \Theta \nabla \right) m = \mbox{pen} \sum_i m(x - x_i) m_{\mbox{true}}.\]Construct the prior model. Input:
Vh
: the finite element space for the parametergamma
anddelta
: the coefficients in the PDElocations
: the points \(x_i\) at which we assume to know the true value of the parameterm_true
: the true modelTheta
: the SPD tensor for anisotropic diffusion of the PDEpen
: a penalization parameter for the mollifier
-
class
hippylib.modeling.prior.
_BilaplacianR
(A, Msolver)[source]¶ Operator that represents the action of the regularization/precision matrix for the Bilaplacian prior.
-
class
hippylib.modeling.prior.
_BilaplacianRsolver
(Asolver, M)[source]¶ Operator that represents the action of the inverse of the regularization/precision matrix for the Bilaplacian prior.
-
class
hippylib.modeling.prior.
_Prior
[source]¶ Abstract class to describe the prior model. Concrete instances of a
_Prior class
should expose the following attributes and methods.Attributes:
R
: an operator to apply the regularization/precision operator.Rsolver
: an operator to apply the inverse of the regularization/precision operator.M
: the mass matrix in the control space.mean
: the prior mean.
Methods:
init_vector(self,x,dim)
: Initialize a vectorx
to be compatible with the range/domain ofR
Ifdim == "noise"
initializex
to be compatible with the size of white noise used for sampling.sample(self, noise, s, add_mean=True)
: Givennoise
\(\sim \mathcal{N}(0, I)\) compute a sample s from the prior. Ifadd_mean==True
add the prior mean value tos
.
-
pointwise_variance
(method, k=1000000, r=200)[source]¶ Compute/estimate the prior pointwise variance.
- If
method=="Exact"
we compute the diagonal entries of \(R^{-1}\) entry by entry. This requires solving \(n\) linear systems in \(R\) (not scalable, but ok for illustration purposes).
- If
-
trace
(method='Exact', tol=0.1, min_iter=20, max_iter=100, r=200)[source]¶ Compute/estimate the trace of the prior covariance operator.
- If
method=="Exact"
we compute the trace exactly by summing the diagonal entries of \(R^{-1}M\). This requires solving \(n\) linear systems in \(R\) (not scalable, but ok for illustration purposes). - If
method=="Estimator"
use the trace estimator algorithms implemented in the classTraceEstimator
.tol
is a relative bound on the estimator standard deviation. In particular, we use enough samples in the Estimator so that the standard deviation of the estimator is less thantol
\(tr(\mbox{Prior})\).min_iter
andmax_iter
are the lower and upper bound on the number of samples to be used for the estimation of the trace.
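The "Estimator" branch is a randomized trace estimator in the spirit of Hutchinson's method: average \(v^T A v\) over random \(\pm 1\) vectors. A self-contained sketch on a toy diagonal operator (the real class TraceEstimator works with \(R^{-1}M\); the function below is an illustration, not the hiPPYlib code):

```python
import random

# Hutchinson-style randomized trace estimator (sketch):
# tr(A) ~ (1/k) sum_j v_j^T A v_j, with Rademacher (+/-1) vectors v_j.

def hutchinson_trace(apply_A, n, n_samples, rng):
    total = 0.0
    for _ in range(n_samples):
        v = [rng.choice((-1.0, 1.0)) for _ in range(n)]
        Av = apply_A(v)
        total += sum(vi * Avi for vi, Avi in zip(v, Av))
    return total / n_samples

diag = [1.0, 2.0, 3.0, 4.0]                          # toy operator A = diag(...)
apply_A = lambda v: [di * vi for di, vi in zip(diag, v)]
est = hutchinson_trace(apply_A, 4, 500, random.Random(0))
print(est)   # -> 10.0 (for a diagonal A each +/-1 sample is exact)
```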
- If
hippylib.modeling.reducedHessian module¶
-
class
hippylib.modeling.reducedHessian.
FDHessian
(model, m0, h, innerTol, misfit_only=False)[source]¶ This class implements matrix-free application of the reduced Hessian operator. The constructor takes the following parameters:
model
: the object which contains the description of the problem.m0
: the value of the parameter at which the Hessian needs to be evaluated.h
: the mesh size for FD.innerTol
: the relative tolerance for the solution of the forward and adjoint problems.misfit_only
: a boolean flag that determines whether the full Hessian or only the misfit component of the Hessian is used.
Type
help(Template)
for more information on which methods the model should implement.Construct the reduced Hessian operator.
-
init_vector
(x, dim)[source]¶ Reshape the Vector
x
so that it is compatible with the reduced Hessian operator.Parameters:
x
: the vector to reshapedim
: if 0 thenx
will be reshaped to be compatible with the range of the reduced Hessian, if 1 thenx
will be reshaped to be compatible with the domain of the reduced Hessian
Note
Since the reduced Hessian is a self-adjoint operator, the range and the domain are the same. Either way, we chose to add the parameter
dim
for consistency with the interface ofMatrix
in dolfin.
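The finite-difference Hessian action this class provides can be sketched on a toy cost with a one-sided difference of gradients (illustration only, not the hiPPYlib implementation):

```python
# Matrix-free FD Hessian action (sketch): H v ~ (grad(m0 + h v) - grad(m0)) / h,
# demonstrated on the toy cost f(m) = m0^2 + 3 m1^2.

def grad(m):                      # analytic gradient of the toy cost
    return [2.0 * m[0], 6.0 * m[1]]

def fd_hessian_apply(grad, m0, v, h=1e-5):
    gp = grad([mi + h * vi for mi, vi in zip(m0, v)])
    g0 = grad(m0)
    return [(a - b) / h for a, b in zip(gp, g0)]

print(fd_hessian_apply(grad, [1.0, 1.0], [1.0, 0.0]))  # close to [2.0, 0.0]
```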
-
class
hippylib.modeling.reducedHessian.
ReducedHessian
(model, innerTol, misfit_only=False)[source]¶ This class implements matrix-free application of the reduced Hessian operator. The constructor takes the following parameters:
model
: the object which contains the description of the problem.innerTol
: the relative tolerance for the solution of the incremental forward and adjoint problems.misfit_only
: a boolean flag that determines whether the full Hessian or only the misfit component of the Hessian is used.
Type
help(modelTemplate)
for more information on which methods the model should implement.Construct the reduced Hessian operator.
-
GNHessian
(x, y)[source]¶ Apply the Gauss-Newton approximation of the reduced Hessian to the vector
x
. Return the result iny
.
-
init_vector
(x, dim)[source]¶ Reshape the Vector
x
so that it is compatible with the reduced Hessian operator.Parameters:
x
: the vector to reshape.dim
: if 0 thenx
will be reshaped to be compatible with the range of the reduced Hessian, if 1 thenx
will be reshaped to be compatible with the domain of the reduced Hessian.
Note
Since the reduced Hessian is a self-adjoint operator, the range and the domain are the same. Either way, we chose to add the parameter
dim
for consistency with the interface ofMatrix
in dolfin.
hippylib.modeling.timeDependentVector module¶
-
class
hippylib.modeling.timeDependentVector.
TimeDependentVector
(times, tol=1e-10, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>)[source]¶ Bases:
object
A class to store time-dependent vectors. Snapshots are stored/retrieved by specifying the time of the snapshot. The times at which snapshots are taken must be specified in the constructor.
Constructor:
times
: the times at which snapshots are stored.tol
: tolerance used to identify the time of the snapshot.
-
initialize
(M, dim)[source]¶ Initialize all the snapshots to be compatible with the range/domain of an operator
M
.
-
inner
(other)[source]¶ Compute the inner products: \(a+= (\mbox{self[i]},\mbox{other[i]})\) for each snapshot.
-
retrieve
(u, t)[source]¶ Retrieve snapshot
u
relative to timet
. Ift
does not belong to the list of snapshot times, an error is raised.
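The store/retrieve-by-time contract, including the tolerance-based lookup, can be illustrated with a minimal stand-in (a toy class, not the hiPPYlib implementation):

```python
# Toy illustration of the snapshot store/retrieve contract: snapshots are
# addressed by time, matched against the constructor times within a tolerance.

class ToySnapshots:
    def __init__(self, times, tol=1e-10):
        self.times = list(times)
        self.tol = tol
        self.data = [None] * len(self.times)

    def store(self, u, t):
        self.data[self._index(t)] = list(u)

    def retrieve(self, t):
        return self.data[self._index(t)]

    def _index(self, t):
        for i, ti in enumerate(self.times):
            if abs(ti - t) < self.tol:
                return i
        raise ValueError("time %g is not a snapshot time" % t)

s = ToySnapshots([0.0, 0.5, 1.0])
s.store([1.0, 2.0], 0.5)
print(s.retrieve(0.5))   # -> [1.0, 2.0]
```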