hippylib.algorithms package

Submodules

hippylib.algorithms.NewtonCG module

hippylib.algorithms.NewtonCG.LS_ParameterList()[source]

Generate a ParameterList for line search globalization. Type LS_ParameterList().showMe() for default values and their descriptions.

class hippylib.algorithms.NewtonCG.ReducedSpaceNewtonCG(model, parameters=<hippylib.utils.parameterList.ParameterList object>)[source]

Inexact Newton-CG method to solve constrained optimization problems in the reduced parameter space. The Newton system is solved inexactly by early termination of CG iterations via Eisenstat-Walker (to prevent oversolving) and Steihaug (to avoid negative curvature) criteria. Globalization is performed using one of the following methods:

  • line search (LS) based on the Armijo sufficient reduction condition; or
  • trust region (TR) based on the prior preconditioned norm of the update direction.

The stopping criterion is based on a control on the norm of the gradient and on the inner product between the gradient and the Newton direction.

The user must provide a model that describes the forward problem, cost functionals, and all the derivatives for the gradient and the Hessian.

More specifically the model object should implement the following methods:

  • generate_vector() -> generate the object containing state, parameter, adjoint
  • cost(x) -> evaluate the cost functional, report regularization part and misfit separately
  • solveFwd(out, x, tol) -> solve the possibly nonlinear forward problem up to a tolerance tol
  • solveAdj(out, x, tol) -> solve the linear adjoint problem
  • evalGradientParameter(x, out) -> evaluate the gradient with respect to the parameter and compute its norm
  • setPointForHessianEvaluations(x) -> set the state at which to perform Hessian evaluations
  • solveFwdIncremental(out, rhs, tol) -> solve the linearized forward problem for a given rhs
  • solveAdjIncremental(out, rhs, tol) -> solve the linear adjoint problem for a given rhs
  • applyC(dm, out) -> Compute out = \(C_x dm\)
  • applyCt(dp, out) -> Compute out = \(C_x^T dp\)
  • applyWuu(du, out) -> Compute out = \((W_{uu})_x du\)
  • applyWum(dm, out) -> Compute out = \((W_{um})_x dm\)
  • applyWmu(du, out) -> Compute out = \((W_{mu})_x du\)
  • applyR(dm, out) -> Compute out = \(R dm\)
  • applyWmm(dm, out) -> Compute out = \(W_{mm} dm\)
  • Rsolver() -> A solver for the regularization term

Type help(Model) for additional information

Initialize the ReducedSpaceNewtonCG. Type ReducedSpaceNewtonCG_ParameterList().showMe() for list of default parameters and their descriptions.
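A minimal usage sketch (model is assumed to implement the interface above; the attribute solver.reason, used below to index termination_reasons, is an assumption to verify against the source):

    # Hedged usage sketch; `model` must implement the Model interface above.
    from hippylib.algorithms.NewtonCG import (
        ReducedSpaceNewtonCG, ReducedSpaceNewtonCG_ParameterList)

    parameters = ReducedSpaceNewtonCG_ParameterList()
    parameters.showMe()                  # inspect default tolerances and globalization settings
    solver = ReducedSpaceNewtonCG(model, parameters)

    x = solver.solve([None, m0, None])   # u and p may be None; m0 is the initial parameter guess
    print(solver.termination_reasons[solver.reason])   # assumed attribute; verify in source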

_solve_ls(x)[source]

Solve the constrained optimization problem with initial guess x using line search globalization.

_solve_tr(x)[source]

Solve the constrained optimization problem with initial guess x using trust region globalization.

solve(x)[source]

Solve the constrained optimization problem with initial guess x.

Input:

x = [u, m, p] represents the initial guess (u and p may be None). x will be overwritten on return.

termination_reasons = ['Maximum number of Iteration reached', 'Norm of the gradient less than tolerance', 'Maximum number of backtracking reached', 'Norm of (g, dm) less than tolerance']
hippylib.algorithms.NewtonCG.ReducedSpaceNewtonCG_ParameterList()[source]

Generate a ParameterList for ReducedSpaceNewtonCG. Type ReducedSpaceNewtonCG_ParameterList().showMe() for default values and their descriptions.

hippylib.algorithms.NewtonCG.TR_ParameterList()[source]

Generate a ParameterList for trust region globalization. Type TR_ParameterList().showMe() for default values and their descriptions.

hippylib.algorithms.bfgs module

class hippylib.algorithms.bfgs.BFGS(model, parameters=<hippylib.utils.parameterList.ParameterList object>)[source]

Implement the BFGS method with backtracking inexact line search and damped updating. See Nocedal & Wright (2006), Chapters 6.2, 7.3, and 18.3.

The user must provide a model that describes the forward problem, cost functionals, and all the derivatives for the gradient and the Hessian.

More specifically the model object should implement the following methods:

  • generate_vector() -> generate the object containing state, parameter, adjoint
  • cost(x) -> evaluate the cost functional, report regularization part and misfit separately
  • solveFwd(out, x, tol) -> solve the possibly nonlinear forward problem up to a tolerance tol
  • solveAdj(out, x, tol) -> solve the linear adjoint problem
  • evalGradientParameter(x, out) -> evaluate the gradient with respect to the parameter and compute its norm
  • applyR(dm, out) -> Compute out = \(R dm\)
  • Rsolver() -> A solver for the regularization term

Type help(Model) for additional information

Initialize the BFGS solver. Type BFGS_ParameterList().showMe() for default parameters and their descriptions.

solve(x, H0inv, bounds_xPARAM=None)[source]

Solve the constrained optimization problem with initial guess x = [u, m0, p].

Note

u and p may be None.

x will be overwritten.

H0inv: the initial approximation of the inverse of the Hessian for the BFGS operator. It may provide an optional method update(x) that updates the operator based on x = [u, m, p].

bounds_xPARAM: Bound constraint (list with two entries: min and max). Can be either a scalar value or a dolfin.Vector.

Return the solution [u, m, p].

termination_reasons = ['Maximum number of Iteration reached', 'Norm of the gradient less than tolerance', 'Maximum number of backtracking reached', 'Norm of (g, da) less than tolerance']
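A minimal usage sketch under the same assumptions about model; RescaledIdentity (documented below) is used as the initial inverse-Hessian approximation:

    # Hedged usage sketch; `model` must implement the interface above.
    from hippylib.algorithms.bfgs import BFGS, BFGS_ParameterList, RescaledIdentity

    solver = BFGS(model, BFGS_ParameterList())
    x = model.generate_vector()          # x = [u, m, p]; u and p may be None
    x = solver.solve(x, H0inv=RescaledIdentity())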
hippylib.algorithms.bfgs.BFGS_ParameterList()[source]

Generate a ParameterList for BFGS. Type BFGS_ParameterList().showMe() for default values and their descriptions.
class hippylib.algorithms.bfgs.BFGS_operator(parameters=<hippylib.utils.parameterList.ParameterList object>)[source]
set_H0inv(H0inv)[source]

Set user-defined operator corresponding to H0inv

Input:

H0inv: FEniCS operator with method solve()

solve(x, b)[source]

Solve the system \(H_{bfgs} x = b\), where \(H_{bfgs}\) is the approximation of the Hessian built by BFGS. That is, we apply

\[x = (H_{bfgs})^{-1} b = H_k b\]

where the matrix \(H_k\) is the BFGS approximation of the inverse of the Hessian. The computation is done via the two-loop recursion algorithm.

Inputs:

x = dolfin.Vector - [out]

b = dolfin.Vector - [in]

update(s, y)[source]

Update BFGS operator with most recent gradient update.

To handle a potential violation of the secant (curvature) condition, the update is damped.

Inputs:

s = dolfin.Vector [in] - corresponds to the update in the model parameters.

y = dolfin.Vector [in] - corresponds to the update in the gradient.
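One standard damping strategy is Powell's (Nocedal & Wright, Procedure 18.2); whether the implementation uses exactly this variant should be checked against the source. With \(B\) the current Hessian approximation, \(y\) is replaced by \(r = \theta y + (1-\theta) B s\), where

\[\theta = \begin{cases} 1 & \text{if } s^T y \geq 0.2\, s^T B s,\\ 0.8\, s^T B s / (s^T B s - s^T y) & \text{otherwise},\end{cases}\]

which guarantees \(s^T r > 0\), so the updated operator remains positive definite.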

hippylib.algorithms.bfgs.BFGSoperator_ParameterList()[source]
class hippylib.algorithms.bfgs.RescaledIdentity(init_vector=None)[source]

Bases: object

Default operator for H0inv, corresponds to applying \(d_0 I\)

init_vector(x, dim)[source]
solve(x, b)[source]

hippylib.algorithms.cgsampler module

class hippylib.algorithms.cgsampler.CGSampler[source]

This class implements the CG sampler algorithm to generate samples from \(\mathcal{N}(0, A^{-1})\).

Reference:
Albert Parker and Colin Fox, Sampling Gaussian Distributions in Krylov Spaces with Conjugate Gradients, SIAM J. Sci. Comput., Vol. 34, No. 3, pp. B312-B334.

Construct the solver with default parameters:

tolerance = 1e-4

print_level = 0

verbose = 0

sample(noise, s)[source]

Generate a sample \(s \sim \mathcal{N}(0, A^{-1})\).

noise is a numpy.array of i.i.d. normal variables used as input. For a fixed realization of noise the algorithm is fully deterministic. The size of noise determines the maximum number of CG iterations.

set_operator(A)[source]

Set the operator A, such that \(x \sim \mathcal{N}(0, A^{-1})\).

Note

A is any object that provides the methods init_vector() and mult()
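A minimal usage sketch (hedged; assumes A provides init_vector() and mult() as stated, and that the sample s is a dolfin.Vector):

    # Hedged usage sketch; A must provide init_vector() and mult().
    import numpy as np
    import dolfin as dl
    from hippylib.algorithms.cgsampler import CGSampler

    sampler = CGSampler()
    sampler.set_operator(A)              # sample from N(0, A^{-1})

    noise = np.random.randn(100)         # its length bounds the number of CG iterations
    s = dl.Vector()
    A.init_vector(s, 0)                  # make s compatible with the range of A
    sampler.sample(noise, s)             # s now holds one sample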

hippylib.algorithms.cgsolverSteihaug module

class hippylib.algorithms.cgsolverSteihaug.CGSolverSteihaug(parameters=<hippylib.utils.parameterList.ParameterList object>, comm=<sphinx.ext.autodoc.importer._MockObject object>)[source]

Solve the linear system \(A x = b\) using preconditioned conjugate gradient ( \(B\) preconditioner) and the Steihaug stopping criterion:

  • reason of termination 0: we reached the maximum number of iterations (no convergence)
  • reason of termination 1: we reduced the residual up to the given tolerance (convergence)
  • reason of termination 2: we encountered a direction of negative curvature (premature termination because the matrix is not SPD)
  • reason of termination 3: we reached the boundary of the trust region

The stopping criterion is based on either

  • the absolute preconditioned residual norm check: \(|| r^* ||_{B^{-1}} < atol\)
  • the relative preconditioned residual norm check: \(|| r^* ||_{B^{-1}}/|| r^0 ||_{B^{-1}} < rtol,\)

where \(r^* = b - Ax^*\) is the residual at convergence and \(r^0 = b - Ax^0\) is the initial residual.

The operator A is set using the method set_operator(A). A must provide the following two methods:

  • A.mult(x,y): y = Ax
  • A.init_vector(x, dim): initialize the vector x so that it is compatible with the range (dim = 0) or the domain (dim = 1) of A.

The preconditioner B is set using the method set_preconditioner(B). B must provide the following method:

  • B.solve(z,r): z is the action of the preconditioner B on the vector r

To solve the linear system \(Ax = b\) call self.solve(x,b). Here x and b are assumed to be dolfin.Vector objects.

Type CGSolverSteihaug_ParameterList().showMe() for default parameters and their descriptions
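A minimal usage sketch (hedged; A and B_solver are assumed to satisfy the interfaces above, and "rel_tolerance" is an assumed parameter name to confirm via showMe()):

    # Hedged usage sketch; A and B_solver must satisfy the interfaces above.
    import dolfin as dl
    from hippylib.algorithms.cgsolverSteihaug import (
        CGSolverSteihaug, CGSolverSteihaug_ParameterList)

    parameters = CGSolverSteihaug_ParameterList()
    parameters["rel_tolerance"] = 1e-9   # assumed parameter name; run showMe() to confirm
    solver = CGSolverSteihaug(parameters)
    solver.set_operator(A)
    solver.set_preconditioner(B_solver)

    x = dl.Vector()
    A.init_vector(x, 1)                  # x lives in the domain of A
    solver.solve(x, b)                   # b is a dolfin.Vector right-hand side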

reason = ['Maximum Number of Iterations Reached', 'Relative/Absolute residual less than tol', 'Reached a negative direction', 'Reached trust region boundary']
set_TR(radius, B_op)[source]
set_operator(A)[source]

Set the operator \(A\).

set_preconditioner(B_solver)[source]

Set the preconditioner \(B\).

solve(x, b)[source]

Solve the linear system \(Ax = b\)

update_x_with_TR(x, alpha, d)[source]
update_x_without_TR(x, alpha, d)[source]
hippylib.algorithms.cgsolverSteihaug.CGSolverSteihaug_ParameterList()[source]

Generate a ParameterList for CGSolverSteihaug. Type CGSolverSteihaug_ParameterList().showMe() for default values and their descriptions

hippylib.algorithms.linalg module

class hippylib.algorithms.linalg.DiagonalOperator(d)[source]
init_vector(x, dim)[source]
inner(x, y)[source]
mult(x, y)[source]
hippylib.algorithms.linalg.GetFromOwnedGid(v, gid)[source]
hippylib.algorithms.linalg.MatAtB(A, B)[source]

Compute the matrix-matrix product \(A^T B\).

hippylib.algorithms.linalg.MatMatMult(A, B)[source]

Compute the matrix-matrix product \(AB\).

hippylib.algorithms.linalg.MatPtAP(A, P)[source]

Compute the triple matrix product \(P^T A P\).

class hippylib.algorithms.linalg.Operator2Solver(op, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>)[source]
init_vector(x, dim)[source]
inner(x, y)[source]
solve(y, x)[source]
hippylib.algorithms.linalg.SetToOwnedGid(v, gid, val)[source]
class hippylib.algorithms.linalg.Solver2Operator(S, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>, init_vector=None)[source]
init_vector(x, dim)[source]
inner(x, y)[source]
mult(x, y)[source]
hippylib.algorithms.linalg.Transpose(A)[source]

Compute the transpose of the matrix \(A\).

hippylib.algorithms.linalg.amg_method()[source]

Determine which AMG preconditioner to use. If available, use ML, which is faster than the PETSc one.

hippylib.algorithms.linalg.estimate_diagonal_inv2(Asolver, k, d)[source]

An unbiased stochastic estimator for the diagonal of \(A^{-1}\). \(d = [ \sum_{j=1}^k v_j .* A^{-1} v_j ] ./ [ \sum_{j=1}^k v_j .* v_j ]\) where

  • \(v_j\) are i.i.d. \(\mathcal{N}(0, I)\)
  • \(.*\) and \(./\) represent the element-wise multiplication and division of vectors, respectively.
Reference:
Costas Bekas, Effrosyni Kokiopoulou, and Yousef Saad, An estimator for the diagonal of a matrix, Applied Numerical Mathematics, 57 (2007), pp. 1214-1229.
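A dense numpy sketch of this estimator, for illustration only (the solve callback stands in for the action of \(A^{-1}\) provided by Asolver):

    # Illustrative numpy sketch of the Bekas-Kokiopoulou-Saad estimator (not hippylib's API).
    import numpy as np

    def estimate_diagonal_inv(solve, n, k, rng=np.random.default_rng(0)):
        """Estimate diag(A^{-1}) from k probes; solve(v) must return A^{-1} v."""
        num = np.zeros(n)
        den = np.zeros(n)
        for _ in range(k):
            v = rng.standard_normal(n)   # v_j ~ N(0, I)
            num += v * solve(v)          # v_j .* A^{-1} v_j
            den += v * v                 # v_j .* v_j
        return num / den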
hippylib.algorithms.linalg.get_diagonal(A, d)[source]

Compute the diagonal of the square operator \(A\). Use Solver2Operator if \(A^{-1}\) is needed.

hippylib.algorithms.linalg.to_dense(A, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>)[source]

Convert a sparse matrix A to dense. For debugging only.

hippylib.algorithms.linalg.trace(A, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>)[source]

Compute the trace of a sparse matrix \(A\).

hippylib.algorithms.lowRankOperator module

class hippylib.algorithms.lowRankOperator.LowRankOperator(d, U, my_init_vector=None)[source]

This class models the action of a low rank operator \(A = U D U^T\). Here \(D\) is a diagonal matrix, and the columns of \(U\) are orthonormal in some weighted inner product.

Note

This class only works in serial!

Construct the low rank operator given d and U.

get_diagonal(diag)[source]

Compute the diagonal of A.

init_vector(x, dim)[source]

Initialize x to be compatible with the range (dim=0) or domain (dim=1) of A.

inner(x, y)[source]
mult(x, y)[source]

Compute \(y = Ax = U D U^T x\)

solve(sol, rhs)[source]

Compute \(\mbox{sol} = U D^{-1} U^T \mbox{rhs}\)

trace(W=None)[source]

Compute the trace of A. If the weight W is given, compute the trace of \(W^{1/2} A W^{1/2}\). This is equivalent to \(\mbox{tr}_W(A) = \sum_i \lambda_i\), where \(\lambda_i\) are the generalized eigenvalues of \(A x = \lambda W^{-1} x\).

Note

If \(U\) is a \(W\)-orthogonal matrix then \(\mbox{tr}_W(A) = \sum_i D(i,i)\).

trace2(W=None)[source]

Compute the trace of \(A A\) (note that this is the square of the Frobenius norm, since \(A\) is symmetric). If the weight W is provided, it will compute the trace of \((AW)^2\).

This is equivalent to \(\mbox{tr}_W(A) = \sum_i \lambda_i^2\), where \(\lambda_i\) are the generalized eigenvalues of \(A x = \lambda W^{-1} x\).

Note

If \(U\) is a \(W\)-orthogonal matrix then \(\mbox{tr}_W(A) = \sum_i D(i,i)^2\).
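A dense numpy analogue of the documented formulas (illustration only, not hippylib's API):

    # Dense numpy analogue of LowRankOperator's documented operations.
    import numpy as np

    d = np.array([4.0, 1.0])                     # diagonal of D
    U = np.linalg.qr(np.random.randn(6, 2))[0]   # orthonormal columns (identity-weighted case)
    x = np.random.randn(6)

    y = U @ (d * (U.T @ x))                      # mult:  y   = U D U^T x
    sol = U @ ((U.T @ x) / d)                    # solve: sol = U D^{-1} U^T x
    tr = d.sum()                                 # trace when U is orthonormal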

hippylib.algorithms.multivector module

hippylib.algorithms.multivector.MatMvMult(A, x, y)[source]
class hippylib.algorithms.multivector.MultiVector(*args, **kwargs)[source]

Bases: sphinx.ext.autodoc.importer._MockObject

Borthogonalize(B)[source]

Returns the \(QR\) decomposition of self. \(Q\) and \(R\) satisfy the following relations in exact arithmetic:

\[\begin{aligned} QR &= Z, && (1)\\ Q^*BQ &= I, && (2)\\ Q^*BZ &= R, && (3)\\ ZR^{-1} &= Q. && (4)\end{aligned}\]

Returns:

  • Bq of type MultiVector -> \(B^{-1}\)-orthogonal vectors.
  • r of type ndarray -> the \(R\) factor of the \(QR\) decomposition.

Note

self is overwritten by \(Q\).

_mgs_reortho()[source]
_mgs_stable(B)[source]

Returns \(QR\) decomposition of self, which satisfies conditions (1)–(4). Uses Modified Gram-Schmidt with re-orthogonalization (Rutishauser variant) for computing the \(B\)-orthogonal \(QR\) factorization.

References:
  1. A.K. Saibaba, J. Lee and P.K. Kitanidis, Randomized algorithms for Generalized Hermitian Eigenvalue Problems with application to computing the Karhunen-Loève expansion, http://arxiv.org/abs/1307.6885
  2. W. Gander, Algorithms for the QR decomposition. Res. Rep, 80(02), 1980

https://github.com/arvindks/kle
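A dense numpy sketch of modified Gram-Schmidt in a \(B\)-inner product (illustration only; the actual implementation works on MultiVector objects and adds Rutishauser-style re-orthogonalization):

    # Illustrative B-orthogonal modified Gram-Schmidt on dense arrays.
    import numpy as np

    def mgs_B(Z, B):
        """Return Q, R with Q^T B Q = I and Q R = Z (in exact arithmetic)."""
        n, k = Z.shape
        Q = Z.copy()
        R = np.zeros((k, k))
        for j in range(k):
            Bq = B @ Q[:, j]
            R[j, j] = np.sqrt(Q[:, j] @ Bq)      # B-norm of the j-th column
            Q[:, j] /= R[j, j]
            Bq /= R[j, j]
            for i in range(j + 1, k):
                R[j, i] = Bq @ Q[:, i]           # B-inner product with later columns
                Q[:, i] -= R[j, i] * Q[:, j]
        return Q, R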

dot_mv(mv)[source]
dot_v(v)[source]
export(Vh, filename, varname='mv', normalize=False)[source]

Export this MultiVector in ParaView format.

Inputs:

  • Vh: the parameter finite element space.
  • filename: the name of the paraview output file.
  • varname: the name of the paraview variable.
  • normalize: if True the vector is rescaled such that \(\| u \|_{\infty} = 1.\)

norm(norm_type)[source]

orthogonalize()[source]

Returns the \(QR\) decomposition of self. \(Q\) and \(R\) satisfy the following relations in exact arithmetic:

\[\begin{aligned} QR &= Z, && (1)\\ Q^*Q &= I, && (2)\\ Q^*Z &= R, && (3)\\ ZR^{-1} &= Q. && (4)\end{aligned}\]

Returns:

r of type ndarray -> the \(R\) factor of the \(QR\) decomposition.

Note

self is overwritten by \(Q\).

hippylib.algorithms.multivector.MvDSmatMult(X, A, Y)[source]

hippylib.algorithms.randomizedEigensolver module

hippylib.algorithms.randomizedEigensolver.check_g(A, B, U, d)[source]

Test the Frobenius norm of \(U^T B U - I_k\).

Test the Frobenius norm of \((V^T A V) - I_k\), with \(V = U D^{-1/2}\).

Test the \(l_2\) norm of the residual: \(r[i] = A U[i] - d[i] B U[i]\).

hippylib.algorithms.randomizedEigensolver.check_std(A, U, d)[source]

Test the Frobenius norm of \(U^T U - I_k\).

Test the Frobenius norm of \((V^T A V) - I_k\), with \(V = U D^{-1/2}\).

Test the \(l_2\) norm of the residual: \(r[i] = A U[i] - d[i] U[i]\).

hippylib.algorithms.randomizedEigensolver.doublePass(A, Omega, k, s, check=False)[source]

The double pass algorithm for the HEP as presented in [1].

Inputs:

  • A: the operator for which we need to estimate the dominant eigenpairs.
  • Omega: a random Gaussian matrix with \(m \geq k\) columns.
  • k: the number of eigenpairs to extract.
  • s: the number of power iterations for selecting the subspace.

Outputs:

  • d: the estimate of the \(k\) dominant eigenvalues of \(A\).
  • U: the estimate of the \(k\) dominant eigenvectors of \(A,\, U^T U = I_k\).
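A dense numpy sketch of the two-pass scheme (illustration only; hippylib operates on MultiVector objects, and the exact meaning of s here is an assumption):

    # Illustrative dense double-pass randomized eigensolver for a symmetric A.
    import numpy as np

    def double_pass(A, Omega, k, s=1):
        Y = Omega
        for _ in range(s):                       # power iterations (interpretation of s assumed)
            Y = A @ Y                            # first pass(es) over A
        Q, _ = np.linalg.qr(Y)                   # orthonormal basis of the sampled subspace
        T = Q.T @ (A @ Q)                        # second pass over A
        lam, V = np.linalg.eigh(T)
        idx = np.argsort(np.abs(lam))[::-1][:k]  # keep the k dominant eigenpairs
        return lam[idx], Q @ V[:, idx]           # d, U with U^T U = I_k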
hippylib.algorithms.randomizedEigensolver.doublePassG(A, B, Binv, Omega, k, s=1, check=False)[source]

The double pass algorithm for the GHEP as presented in [2].

Inputs:

  • A: the operator for which we need to estimate the dominant generalized eigenpairs.
  • B: the right-hand side operator.
  • Binv: the inverse of the right-hand side operator.
  • Omega: a random Gaussian matrix with \(m \geq k\) columns.
  • k: the number of eigenpairs to extract.
  • s: the number of power iterations for selecting the subspace.

Outputs:

  • d: the estimate of the \(k\) dominant eigenvalues of \(A\).
  • U: the estimate of the \(k\) dominant eigenvectors of \(A,\, U^T B U = I_k\).
hippylib.algorithms.randomizedEigensolver.singlePass(A, Omega, k, s=1, check=False)[source]

The single pass algorithm for the Hermitian Eigenvalue Problem (HEP) as presented in [1].

Inputs:

  • A: the operator for which we need to estimate the dominant eigenpairs.
  • Omega: a random Gaussian matrix with \(m \geq k\) columns.
  • k: the number of eigenpairs to extract.
  • s: the number of power iterations for selecting the subspace.

Outputs:

  • d: the estimate of the \(k\) dominant eigenvalues of \(A\).
  • U: the estimate of the \(k\) dominant eigenvectors of \(A,\, U^T U = I_k\).
hippylib.algorithms.randomizedEigensolver.singlePassG(A, B, Binv, Omega, k, s=1, check=False)[source]

The single pass algorithm for the Generalized Hermitian Eigenvalue Problem (GHEP) as presented in [2].

Inputs:

  • A: the operator for which we need to estimate the dominant generalized eigenpairs.
  • B: the right-hand side operator.
  • Binv: the inverse of the right-hand side operator.
  • Omega: a random Gaussian matrix with \(m \geq k\) columns.
  • k: the number of eigenpairs to extract.
  • s: the number of power iterations for selecting the subspace.

Outputs:

  • d: the estimate of the \(k\) dominant eigenvalues of \(A\).
  • U: the estimate of the \(k\) dominant eigenvectors of \(A,\, U^T B U = I_k\).

hippylib.algorithms.steepestDescent module

class hippylib.algorithms.steepestDescent.SteepestDescent(model, parameters=<hippylib.utils.parameterList.ParameterList object>)[source]

Prior-preconditioned Steepest Descent to solve constrained optimization problems in the reduced parameter space. Globalization is performed using the Armijo sufficient reduction condition (backtracking). The stopping criterion is based on a control on the norm of the gradient.
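A minimal sketch of the Armijo backtracking loop used for globalization (illustration only; all names and constants are hypothetical, not hippylib's internals):

    # Illustrative Armijo backtracking; cost, m, d and g_dot_d are hypothetical inputs.
    def armijo_backtracking(cost, m, d, g_dot_d, alpha0=1.0, c_armijo=1e-4, max_backtrack=10):
        """Return alpha with cost(m + alpha*d) <= cost(m) + alpha*c_armijo*g_dot_d, or None."""
        c0 = cost(m)
        alpha = alpha0
        for _ in range(max_backtrack):
            if cost(m + alpha * d) <= c0 + alpha * c_armijo * g_dot_d:
                return alpha
            alpha *= 0.5                 # halve the step and try again
        return None                      # maximum number of backtracking steps reached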

The user must provide a model that describes the forward problem, cost functionals, and all the derivatives for the gradient.

More specifically the model object should implement the following methods:

  • generate_vector() -> generate the object containing state, parameter, adjoint.
  • cost(x) -> evaluate the cost functional, report regularization part and misfit separately.
  • solveFwd(out, x, tol) -> solve the possibly nonlinear forward problem up to a tolerance tol.
  • solveAdj(out, x, tol) -> solve the linear adjoint problem.
  • evalGradientParameter(x, out) -> evaluate the gradient with respect to the parameter and compute its norm.
  • Rsolver() -> A solver for the regularization term.

Type help(Model) for additional information.

Initialize the Steepest Descent solver. Type SteepestDescent_ParameterList().showMe() for list of default parameters and their descriptions.

solve(x)[source]

Solve the constrained optimization problem with initial guess x = [u, m, p]. Return the solution [u, m, p].

Note

x will be overwritten.

termination_reasons = ['Maximum number of Iteration reached', 'Norm of the gradient less than tolerance', 'Maximum number of backtracking reached']
hippylib.algorithms.steepestDescent.SteepestDescent_ParameterList()[source]

Generate a ParameterList for SteepestDescent. Type SteepestDescent_ParameterList().showMe() for default values and their descriptions.

hippylib.algorithms.traceEstimator module

class hippylib.algorithms.traceEstimator.TraceEstimator(A, solve_mode=False, accurancy=0.1, init_vector=None, random_engine=<function rademacher_engine>, mpi_comm=<sphinx.ext.autodoc.importer._MockObject object>)[source]

An unbiased stochastic estimator for the trace of \(A\), \(d = \frac{1}{k} \sum_{j=1}^k (v_j, A v_j)\), where

  • \(v_j\) are i.i.d. Rademacher or Gaussian random vectors.
  • \((\cdot,\cdot)\) represents the inner product.

The number of samples \(k\) is estimated at run time based on the variance of the estimator.

Reference: Haim Avron and Sivan Toledo, Randomized algorithms for estimating the trace of an implicit symmetric positive semi-definite matrix, Journal of the ACM (JACM), 58 (2011), p. 17.

Constructor:

  • A: an operator.
  • solve_mode: if True we estimate the trace of \(A^{-1}\), otherwise of \(A\).
  • accurancy: we stop when the standard deviation of the estimator is less than accurancy*tr(\(A\)).
  • init_vector: use a custom function to initialize a vector compatible with the range/domain of A.
  • random_engine: which type of i.i.d. random variables to use (Rademacher or Gaussian).
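A dense numpy sketch of the underlying Hutchinson-type estimator (illustration only; it uses a fixed sample size rather than the adaptive, variance-based stopping described above):

    # Illustrative fixed-sample Hutchinson trace estimator (not hippylib's adaptive API).
    import numpy as np

    def hutchinson_trace(matvec, n, k, rng=np.random.default_rng(0)):
        """Estimate tr(A) as the average of (v_j, A v_j) over k Rademacher probes."""
        total = 0.0
        for _ in range(k):
            v = rng.integers(0, 2, size=n) * 2.0 - 1.0   # Rademacher +/-1 entries
            total += v @ matvec(v)
        return total / k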
hippylib.algorithms.traceEstimator.gaussian_engine(v)[source]

Generate a vector of \(n\) i.i.d. standard normal variables.

hippylib.algorithms.traceEstimator.rademacher_engine(v)[source]

Generate a vector of \(n\) i.i.d. Rademacher variables.

Module contents