hippylib.algorithms package
Submodules
hippylib.algorithms.NewtonCG module
hippylib.algorithms.NewtonCG.LS_ParameterList()
Generate a ParameterList for line search globalization.
Type LS_ParameterList().showMe() for default values and their descriptions.
class hippylib.algorithms.NewtonCG.ReducedSpaceNewtonCG(model, parameters=<hippylib.utils.parameterList.ParameterList object>)
Inexact Newton-CG method to solve constrained optimization problems in the reduced parameter space. The Newton system is solved inexactly by early termination of CG iterations via Eisenstat-Walker (to prevent oversolving) and Steihaug (to avoid negative curvature) criteria. Globalization is performed using one of the following methods:
- line search (LS) based on the Armijo sufficient reduction condition; or
- trust region (TR) based on the prior-preconditioned norm of the update direction.
The stopping criterion is based on a control on the norm of the gradient and a control of the inner product between the gradient and the Newton direction.
The user must provide a model that describes the forward problem, the cost functional, and all the derivatives for the gradient and the Hessian.
More specifically, the model object should implement the following methods:
- generate_vector() -> generate the object containing state, parameter, adjoint
- cost(x) -> evaluate the cost functional, reporting the regularization part and the misfit separately
- solveFwd(out, x, tol) -> solve the possibly nonlinear forward problem up to a tolerance tol
- solveAdj(out, x, tol) -> solve the linear adjoint problem
- evalGradientParameter(x, out) -> evaluate the gradient of the parameter and compute its norm
- setPointForHessianEvaluations(x) -> set the state to perform Hessian evaluations
- solveFwdIncremental(out, rhs, tol) -> solve the linearized forward problem for a given rhs
- solveAdjIncremental(out, rhs, tol) -> solve the linear adjoint problem for a given rhs
- applyC(dm, out) -> compute out \(= C_x dm\)
- applyCt(dp, out) -> compute out \(= C_x^T dp\)
- applyWuu(du, out) -> compute out \(= (W_{uu})_x du\)
- applyWum(dm, out) -> compute out \(= (W_{um})_x dm\)
- applyWmu(du, out) -> compute out \(= W_{mu} du\)
- applyR(dm, out) -> compute out \(= R dm\)
- applyWmm(dm, out) -> compute out \(= W_{mm} dm\)
- Rsolver() -> a solver for the regularization term
Type help(Model) for additional information.
Initialize the ReducedSpaceNewtonCG with the given model. Type ReducedSpaceNewtonCG_ParameterList().showMe() for a list of default parameters and their descriptions.

solve(x)
Input: x = [u, m, p] represents the initial guess (u and p may be None). x will be overwritten on return.

termination_reasons = ['Maximum number of Iteration reached', 'Norm of the gradient less than tolerance', 'Maximum number of backtracking reached', 'Norm of (g, dm) less than tolerance']
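A minimal usage sketch: it assumes an already-built hippylib Model instance model and an initial parameter vector m0 (their construction is not shown here), and that the class and its parameter list are importable from the package root::

    import hippylib as hp

    parameters = hp.ReducedSpaceNewtonCG_ParameterList()
    parameters.showMe()                  # print default values and descriptions
    solver = hp.ReducedSpaceNewtonCG(model, parameters)

    x = [None, m0, None]                 # u and p may be None
    solver.solve(x)                      # x = [u, m, p] is overwritten on return
    u, m, p = x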
hippylib.algorithms.bfgs module
class hippylib.algorithms.bfgs.BFGS(model, parameters=<hippylib.utils.parameterList.ParameterList object>)
Implements the BFGS method with backtracking inexact line search and damped updating. See Nocedal & Wright (2006), Ch. 6.2, 7.3, and 18.3.
The user must provide a model that describes the forward problem, the cost functional, and all the derivatives for the gradient and the Hessian.
More specifically, the model object should implement the following methods:
- generate_vector() -> generate the object containing state, parameter, adjoint
- cost(x) -> evaluate the cost functional, reporting the regularization part and the misfit separately
- solveFwd(out, x, tol) -> solve the possibly nonlinear forward problem up to a tolerance tol
- solveAdj(out, x, tol) -> solve the linear adjoint problem
- evalGradientParameter(x, out) -> evaluate the gradient of the parameter and compute its norm
- applyR(dm, out) -> compute out \(= R dm\)
- Rsolver() -> a solver for the regularization term
Type help(Model) for additional information.
Initialize the BFGS solver. Type BFGS_ParameterList().showMe() for default parameters and their descriptions.

solve(x, H0inv, bounds_xPARAM=None)
Solve the constrained optimization problem with initial guess x = [u, m0, p].
Note: u and p may be None. x will be overwritten.
- H0inv: the initial approximate inverse of the Hessian for the BFGS operator. It has an optional method update(x) that will update the operator based on x = [u, m, p].
- bounds_xPARAM: bound constraint (list with two entries: min and max). Each entry can be either a scalar value or a dolfin.Vector.
Return the solution [u, m, p].

termination_reasons = ['Maximum number of Iteration reached', 'Norm of the gradient less than tolerance', 'Maximum number of backtracking reached', 'Norm of (g, da) less than tolerance']
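A usage sketch mirroring the solve signature above. Passing the regularization solver model.Rsolver() as the initial inverse-Hessian approximation H0inv is an assumption of this sketch; any operator exposing solve() (and optionally update(x)) will do::

    import hippylib as hp

    solver = hp.BFGS(model, hp.BFGS_ParameterList())
    # H0inv: initial approximate inverse Hessian; model.Rsolver() is one
    # possible (assumed) choice since it exposes a solve() method.
    x = solver.solve([None, m0, None], H0inv=model.Rsolver())
    u, m, p = x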
class hippylib.algorithms.bfgs.BFGS_operator(parameters=<hippylib.utils.parameterList.ParameterList object>)

set_H0inv(H0inv)
Set the user-defined operator corresponding to H0inv.
Input:
- H0inv: FEniCS operator with a solve() method.

solve(x, b)
Solve the system \(H_{\mathrm{bfgs}}\, x = b\), where \(H_{\mathrm{bfgs}}\) is the approximation of the Hessian built by BFGS. That is, we apply
\[x = (H_{\mathrm{bfgs}})^{-1} b = H_k b,\]
where the matrix \(H_k\) is the BFGS approximation of the inverse of the Hessian. The computation is done via the double-loop algorithm.
Inputs:
- x = dolfin.Vector [out]
- b = dolfin.Vector [in]
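The double-loop algorithm is the classic two-loop recursion of Nocedal & Wright (Algorithm 7.4); below is a generic numpy sketch of that recursion, not hippylib's dolfin-backed implementation::

    import numpy as np

    def two_loop_recursion(S, Y, rho, apply_H0inv, b):
        # Apply the BFGS inverse-Hessian approximation H_k to a vector b.
        # S, Y: lists of curvature pairs s_i = x_{i+1} - x_i, y_i = g_{i+1} - g_i;
        # rho[i] = 1 / (y_i, s_i); apply_H0inv applies the initial H_0^{-1}.
        q = b.copy()
        alpha = np.zeros(len(S))
        for i in reversed(range(len(S))):    # first loop: newest to oldest pair
            alpha[i] = rho[i] * S[i].dot(q)
            q -= alpha[i] * Y[i]
        r = apply_H0inv(q)                   # apply initial inverse-Hessian guess
        for i in range(len(S)):              # second loop: oldest to newest pair
            beta = rho[i] * Y[i].dot(r)
            r += (alpha[i] - beta) * S[i]
        return r                             # r = H_k b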
hippylib.algorithms.cgsampler module
class hippylib.algorithms.cgsampler.CGSampler
This class implements the CG sampler algorithm to generate samples from \(\mathcal{N}(0, A^{-1})\).
Reference:
Albert Parker and Colin Fox, Sampling Gaussian Distributions in Krylov Spaces with Conjugate Gradients, SIAM J. Sci. Comput., Vol. 34, No. 3, pp. B312-B334.
Construct the solver with default parameters:
tolerance = 1e-4
print_level = 0
verbose = 0
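The sampling interface itself is not shown in this excerpt; in the heavily hedged sketch below, the method names set_operator and sample are assumptions about the interface, not confirmed by the documentation above::

    import hippylib as hp

    sampler = hp.CGSampler()
    sampler.parameters["tolerance"] = 1e-6   # tighten the CG tolerance
    sampler.set_operator(A)      # assumed: A is the precision of N(0, A^{-1})
    sampler.sample(noise, s)     # assumed: noise ~ N(0, I) in, sample s out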
hippylib.algorithms.cgsolverSteihaug module
class hippylib.algorithms.cgsolverSteihaug.CGSolverSteihaug(parameters=<hippylib.utils.parameterList.ParameterList object>, comm=<sphinx.ext.autodoc.importer._MockObject object>)
Solve the linear system \(Ax = b\) using the preconditioned conjugate gradient method (with preconditioner \(B\)) and the Steihaug stopping criterion:
- reason of termination 0: we reached the maximum number of iterations (no convergence)
- reason of termination 1: we reduced the residual up to the given tolerance (convergence)
- reason of termination 2: we reached a negative direction (premature termination because the matrix is not SPD)
- reason of termination 3: we reached the boundary of the trust region
The stopping criterion is based on either
- the absolute preconditioned residual norm check: \(\| r^* \|_{B^{-1}} < \mathrm{atol}\), or
- the relative preconditioned residual norm check: \(\| r^* \|_{B^{-1}} / \| r^0 \|_{B^{-1}} < \mathrm{rtol}\),
where \(r^* = b - Ax^*\) is the residual at convergence and \(r^0 = b - Ax^0\) is the initial residual.
The operator A is set using the method set_operator(A). A must provide the following two methods:
- A.mult(x, y): compute y = Ax
- A.init_vector(x, dim): initialize the vector x so that it is compatible with the range (dim = 0) or the domain (dim = 1) of A.
The preconditioner B is set using the method set_preconditioner(B). B must provide the following method:
- B.solve(z, r): z is the action of the preconditioner B on the vector r.
To solve the linear system \(Ax = b\), call self.solve(x, b). Here x and b are assumed to be dolfin.Vector objects.
Type CGSolverSteihaug_ParameterList().showMe() for default parameters and their descriptions.

reason = ['Maximum Number of Iterations Reached', 'Relative/Absolute residual less than tol', 'Reached a negative direction', 'Reached trust region boundary']
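A usage sketch based on the interface documented above. The operator A and preconditioner B (for example, a Hessian operator and a prior-based preconditioner) are assumed to be already available::

    import dolfin as dl
    import hippylib as hp

    solver = hp.CGSolverSteihaug()
    solver.set_operator(A)           # A provides mult(x, y) and init_vector(x, dim)
    solver.set_preconditioner(B)     # B provides solve(z, r)

    x = dl.Vector()
    A.init_vector(x, 1)              # make x compatible with the domain of A
    solver.solve(x, b)               # x is overwritten with the solution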
hippylib.algorithms.linalg module
class hippylib.algorithms.linalg.Operator2Solver(op, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>)

class hippylib.algorithms.linalg.Solver2Operator(S, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>, init_vector=None)

hippylib.algorithms.linalg.amg_method()
Determine which AMG preconditioner to use. If available, use ML, which is faster than the PETSc one.
hippylib.algorithms.linalg.estimate_diagonal_inv2(Asolver, k, d)
An unbiased stochastic estimator for the diagonal of \(A^{-1}\):
\[d = \Big[ \sum_{j=1}^k v_j .* A^{-1} v_j \Big] ./ \Big[ \sum_{j=1}^k v_j .* v_j \Big],\]
where
- \(v_j\) are i.i.d. \(\mathcal{N}(0, I)\) random vectors,
- \(.*\) and \(./\) represent the element-wise multiplication and division of vectors, respectively.
Reference:
Costas Bekas, Effrosyni Kokiopoulou, and Yousef Saad, An estimator for the diagonal of a matrix, Applied Numerical Mathematics, 57 (2007), pp. 1214-1229.
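A generic numpy sketch of this estimator (not hippylib's implementation); here solve stands in for Asolver and applies \(A^{-1}\) to a vector::

    import numpy as np

    def estimate_diagonal_inv(solve, n, k, seed=0):
        # Bekas-Kokiopoulou-Saad stochastic estimator for diag(A^{-1}).
        rng = np.random.default_rng(seed)
        num = np.zeros(n)                # accumulates sum_j v_j .* (A^{-1} v_j)
        den = np.zeros(n)                # accumulates sum_j v_j .* v_j
        for _ in range(k):
            v = rng.standard_normal(n)
            num += v * solve(v)
            den += v * v
        return num / den

    A = np.diag([1.0, 2.0, 4.0])
    d = estimate_diagonal_inv(lambda v: np.linalg.solve(A, v), n=3, k=50)
    print(d)   # [1.0, 0.5, 0.25] (exact here, since A is diagonal)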
hippylib.algorithms.linalg.get_diagonal(A, d)
Compute the diagonal of the square operator \(A\). Use Solver2Operator if \(A^{-1}\) is needed.
hippylib.algorithms.lowRankOperator module
class hippylib.algorithms.lowRankOperator.LowRankOperator(d, U, my_init_vector=None)
This class models the action of a low rank operator \(A = U D U^T\). Here \(D\) is a diagonal matrix, and the columns of \(U\) are orthonormal in some weighted inner product.
Note: this class only works in serial!
Construct the low rank operator given d and U.

init_vector(x, dim)
Initialize x to be compatible with the range (dim=0) or domain (dim=1) of A.

trace(W=None)
Compute the trace of A. If the weight W is given, compute the trace of \(W^{1/2} A W^{1/2}\). This is equivalent to \(\mathrm{tr}_W(A) = \sum_i \lambda_i\), where \(\lambda_i\) are the generalized eigenvalues of \(A x = \lambda W^{-1} x\).
Note: if \(U\) is a \(W\)-orthogonal matrix, then \(\mathrm{tr}_W(A) = \sum_i D(i,i)\).

trace2(W=None)
Compute the trace of \(AA\) (note: this is the square of the Frobenius norm, since \(A\) is symmetric). If the weight W is provided, compute the trace of \((AW)^2\). This is equivalent to \(\mathrm{tr}_W(A) = \sum_i \lambda_i^2\), where \(\lambda_i\) are the generalized eigenvalues of \(A x = \lambda W^{-1} x\).
Note: if \(U\) is a \(W\)-orthogonal matrix, then \(\mathrm{tr}_W(A) = \sum_i D(i,i)^2\).
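A plain numpy illustration of the low rank action and the trace identities above (standing in for the dolfin-backed LowRankOperator, with \(W = I\))::

    import numpy as np

    rng = np.random.default_rng(0)
    U, _ = np.linalg.qr(rng.standard_normal((10, 3)))   # orthonormal columns
    D = np.diag([3.0, 2.0, 1.0])
    A = U @ D @ U.T                                     # low rank operator

    x = rng.standard_normal(10)
    assert np.allclose(U @ (D @ (U.T @ x)), A @ x)      # action without forming A

    assert np.isclose(np.trace(A), D.diagonal().sum())           # tr(A)  = sum_i D(i,i)
    assert np.isclose(np.trace(A @ A), (D.diagonal()**2).sum())  # tr(AA) = sum_i D(i,i)^2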
hippylib.algorithms.multivector module

class hippylib.algorithms.multivector.MultiVector(*args, **kwargs)
Borthogonalize(B)
Returns the \(QR\) decomposition of self. \(Q\) and \(R\) satisfy the following relations in exact arithmetic:
\[QR = Z \quad (1), \qquad Q^* B Q = I \quad (2), \qquad Q^* B Z = R \quad (3), \qquad Z R^{-1} = Q \quad (4).\]
Returns:
- Bq of type MultiVector -> the \(B^{-1}\)-orthogonal vectors
- r of type ndarray -> the \(R\) factor of the QR decomposition.
Note: self is overwritten by \(Q\).
_mgs_stable(B)
Returns the \(QR\) decomposition of self, which satisfies conditions (1)-(4). Uses Modified Gram-Schmidt with re-orthogonalization (Rutishauser variant) to compute the \(B\)-orthogonal \(QR\) factorization.
References:
- A. K. Saibaba, J. Lee, and P. K. Kitanidis, Randomized algorithms for Generalized Hermitian Eigenvalue Problems with application to computing Karhunen-Loève expansion, http://arxiv.org/abs/1307.6885
- W. Gander, Algorithms for the QR decomposition, Res. Rep. 80(02), 1980.
export(Vh, filename, varname='mv', normalize=False)
Export this multivector in ParaView format.
Inputs:
- Vh: the parameter finite element space.
- filename: the name of the ParaView output file.
- varname: the name of the ParaView variable.
- normalize: if True, the vectors are rescaled such that \(\| u \|_{\infty} = 1\).
orthogonalize()
Returns the \(QR\) decomposition of self. \(Q\) and \(R\) satisfy the following relations in exact arithmetic:
\[QR = Z \quad (1), \qquad Q^* Q = I \quad (2), \qquad Q^* Z = R \quad (3), \qquad Z R^{-1} = Q \quad (4).\]
Returns:
- r of type ndarray -> the \(R\) factor of the QR decomposition.
Note: self is overwritten by \(Q\).
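A hedged usage sketch: the constructor signature is mocked in these docs, so building a MultiVector from a template dolfin.Vector and a number of columns, and filling it with hp.parRandom, are assumptions of this sketch::

    import hippylib as hp

    Omega = hp.MultiVector(x_template, 20)   # assumed: template vector + #columns
    hp.parRandom.normal(1.0, Omega)          # assumed: fill with i.i.d. N(0, 1)
    r = Omega.orthogonalize()                # Omega is overwritten by Q; returns R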
hippylib.algorithms.randomizedEigensolver module
hippylib.algorithms.randomizedEigensolver.check_g(A, B, U, d)
Test the Frobenius norm of \(U^T B U - I_k\).
Test the Frobenius norm of \((V^T A V) - I_k\), with \(V = U D^{-1/2}\).
Test the \(l_2\) norm of the residual: \(r[i] = A U[i] - d[i] B U[i]\).

hippylib.algorithms.randomizedEigensolver.check_std(A, U, d)
Test the Frobenius norm of \(U^T U - I_k\).
Test the Frobenius norm of \((V^T A V) - I_k\), with \(V = U D^{-1/2}\).
Test the \(l_2\) norm of the residual: \(r[i] = A U[i] - d[i] U[i]\).
hippylib.algorithms.randomizedEigensolver.doublePass(A, Omega, k, s, check=False)
The double pass algorithm for the HEP as presented in [1].
Inputs:
- A: the operator for which we need to estimate the dominant eigenpairs.
- Omega: a random Gaussian matrix with \(m \geq k\) columns.
- k: the number of eigenpairs to extract.
- s: the number of power iterations for selecting the subspace.
Outputs:
- d: the estimate of the \(k\) dominant eigenvalues of \(A\).
- U: the estimate of the \(k\) dominant eigenvectors of \(A\), with \(U^T U = I_k\).
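A dense numpy sketch of the double pass idea for \(s \geq 1\) (randomized range finding, then a second pass over \(A\) for the Rayleigh-Ritz projection); hippylib's implementation operates on MultiVector objects instead::

    import numpy as np

    def double_pass_dense(A, Omega, k, s):
        Y = A @ Omega                    # first pass over A
        for _ in range(s - 1):           # extra power iterations sharpen the subspace
            Y = A @ Y
        Q, _ = np.linalg.qr(Y)           # orthonormal basis of the sketch
        T = Q.T @ (A @ Q)                # second pass: project A onto the basis
        d, V = np.linalg.eigh(T)
        idx = np.argsort(np.abs(d))[::-1][:k]
        return d[idx], Q @ V[:, idx]     # k dominant eigenpairs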
hippylib.algorithms.randomizedEigensolver.doublePassG(A, B, Binv, Omega, k, s=1, check=False)
The double pass algorithm for the GHEP as presented in [2].
Inputs:
- A: the operator for which we need to estimate the dominant generalized eigenpairs.
- B: the right-hand side operator.
- Binv: the inverse of the right-hand side operator.
- Omega: a random Gaussian matrix with \(m \geq k\) columns.
- k: the number of eigenpairs to extract.
- s: the number of power iterations for selecting the subspace.
Outputs:
- d: the estimate of the \(k\) dominant eigenvalues of \(A\).
- U: the estimate of the \(k\) dominant eigenvectors of \(A\), with \(U^T B U = I_k\).
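A hedged usage sketch of a typical call: in a hippylib workflow the GHEP pair would be, for example, a data-misfit Hessian and the prior precision; the names Hmisfit, prior.R, and prior.Rsolver are assumptions, and Omega is a MultiVector with \(m \geq k\) Gaussian columns (see the multivector module above)::

    import hippylib as hp

    d, U = hp.doublePassG(Hmisfit, prior.R, prior.Rsolver, Omega, k=30, s=1)
    hp.check_g(Hmisfit, prior.R, U, d)   # optional accuracy diagnostics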
hippylib.algorithms.randomizedEigensolver.singlePass(A, Omega, k, s=1, check=False)
The single pass algorithm for the Hermitian Eigenvalue Problem (HEP) as presented in [1].
Inputs:
- A: the operator for which we need to estimate the dominant eigenpairs.
- Omega: a random Gaussian matrix with \(m \geq k\) columns.
- k: the number of eigenpairs to extract.
Outputs:
- d: the estimate of the \(k\) dominant eigenvalues of \(A\).
- U: the estimate of the \(k\) dominant eigenvectors of \(A\), with \(U^T U = I_k\).
hippylib.algorithms.randomizedEigensolver.singlePassG(A, B, Binv, Omega, k, s=1, check=False)
The single pass algorithm for the Generalized Hermitian Eigenvalue Problem (GHEP) as presented in [2].
Inputs:
- A: the operator for which we need to estimate the dominant generalized eigenpairs.
- B: the right-hand side operator.
- Binv: the inverse of the right-hand side operator.
- Omega: a random Gaussian matrix with \(m \geq k\) columns.
- k: the number of eigenpairs to extract.
- s: the number of power iterations for selecting the subspace.
Outputs:
- d: the estimate of the \(k\) dominant eigenvalues of \(A\).
- U: the estimate of the \(k\) dominant eigenvectors of \(A\), with \(U^T B U = I_k\).
hippylib.algorithms.steepestDescent module
class hippylib.algorithms.steepestDescent.SteepestDescent(model, parameters=<hippylib.utils.parameterList.ParameterList object>)
Prior-preconditioned Steepest Descent to solve constrained optimization problems in the reduced parameter space. Globalization is performed using the Armijo sufficient reduction condition (backtracking). The stopping criterion is based on a control on the norm of the gradient.
The user must provide a model that describes the forward problem, the cost functional, and all the derivatives for the gradient.
More specifically, the model object should implement the following methods:
- generate_vector() -> generate the object containing state, parameter, adjoint.
- cost(x) -> evaluate the cost functional, reporting the regularization part and the misfit separately.
- solveFwd(out, x, tol) -> solve the possibly nonlinear forward problem up to tolerance tol.
- solveAdj(out, x, tol) -> solve the linear adjoint problem.
- evalGradientParameter(x, out) -> evaluate the gradient of the parameter and compute its norm.
- Rsolver() -> a solver for the regularization term.
Type help(Model) for additional information.
Initialize the Steepest Descent solver. Type SteepestDescent_ParameterList().showMe() for a list of default parameters and their descriptions.

solve(x)
Solve the constrained optimization problem with initial guess x = [u, a, p]. Return the solution [u, a, p].
Note: x will be overwritten.

termination_reasons = ['Maximum number of Iteration reached', 'Norm of the gradient less than tolerance', 'Maximum number of backtracking reached']
hippylib.algorithms.traceEstimator module
class hippylib.algorithms.traceEstimator.TraceEstimator(A, solve_mode=False, accurancy=0.1, init_vector=None, random_engine=<function rademacher_engine>, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>)
An unbiased stochastic estimator for the trace of \(A\), \(d = \sum_{j=1}^k (v_j, A v_j)\), where
- \(v_j\) are i.i.d. Rademacher or Gaussian random vectors,
- \((\cdot,\cdot)\) represents the inner product.
The number of samples \(k\) is estimated at run time based on the variance of the estimator.
Reference: Haim Avron and Sivan Toledo, Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix, Journal of the ACM (JACM), 58 (2011), p. 17.
Constructor:
- A: an operator.
- solve_mode: if True we estimate the trace of \(A^{-1}\), otherwise of \(A\).
- accurancy: we stop when the standard deviation of the estimator is less than accurancy * tr(\(A\)).
- init_vector: use a custom function to initialize a vector compatible with the range/domain of \(A\).
- random_engine: which type of i.i.d. random variables to use (Rademacher or Gaussian).
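A generic numpy sketch of the estimator described above, with Rademacher vectors and a variance-based stopping rule (not hippylib's implementation)::

    import numpy as np

    def hutchinson_trace(matvec, n, accuracy=0.1, min_iter=10, max_iter=1000, seed=0):
        # Stop when the standard deviation of the running mean of (v, A v)
        # drops below accuracy * |estimated trace|.
        rng = np.random.default_rng(seed)
        samples = []
        for j in range(max_iter):
            v = rng.choice([-1.0, 1.0], size=n)      # Rademacher vector
            samples.append(v @ matvec(v))            # one sample of (v, A v)
            if j + 1 >= min_iter:
                mean = np.mean(samples)
                std_of_mean = np.std(samples, ddof=1) / np.sqrt(j + 1)
                if std_of_mean < accuracy * abs(mean):
                    break
        return mean, j + 1                           # trace estimate, samples used

    M = np.random.default_rng(1).standard_normal((100, 100))
    A = M @ M.T                                      # SPD test matrix
    tr_est, k = hutchinson_trace(lambda v: A @ v, n=100)
    print(tr_est, np.trace(A), k)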