From 44861dcbfeee041223c4aac1ee075e92fa4daa01 Mon Sep 17 00:00:00 2001
From: Stanislaw Halik
Date: Sun, 18 Sep 2016 12:42:15 +0200
Subject: update

---
 eigen/doc/TopicLinearAlgebraDecompositions.dox | 261 +++++++++++++++++++++++++
 1 file changed, 261 insertions(+)
 create mode 100644 eigen/doc/TopicLinearAlgebraDecompositions.dox

diff --git a/eigen/doc/TopicLinearAlgebraDecompositions.dox b/eigen/doc/TopicLinearAlgebraDecompositions.dox
new file mode 100644
index 0000000..8649cc2
--- /dev/null
+++ b/eigen/doc/TopicLinearAlgebraDecompositions.dox
@@ -0,0 +1,261 @@

namespace Eigen {

/** \eigenManualPage TopicLinearAlgebraDecompositions Catalogue of dense decompositions

This page presents a catalogue of the dense matrix decompositions offered by Eigen.
For an introduction on linear solvers and decompositions, check this \link TutorialLinearAlgebra page \endlink.
A short usage sketch follows the table.

\section TopicLinAlgBigTable Catalogue of decompositions offered by Eigen
<table class="manual-vl">
    <tr>
        <th></th>
        <th colspan="5">Generic information, not Eigen-specific</th>
        <th colspan="3">Eigen-specific</th>
    </tr>
    <tr>
        <th>Decomposition</th>
        <th>Requirements on the matrix</th>
        <th>Speed</th>
        <th>Algorithm reliability and accuracy</th>
        <th>Rank-revealing</th>
        <th>Allows to compute (besides linear solving)</th>
        <th>Linear solver provided by Eigen</th>
        <th>Maturity of Eigen's implementation</th>
        <th>Optimizations</th>
    </tr>

    <tr>
        <td>PartialPivLU</td><td>Invertible</td><td>Fast</td><td>Depends on condition number</td>
        <td>-</td><td>-</td><td>Yes</td><td>Excellent</td><td>Blocking, Implicit MT</td>
    </tr>
    <tr>
        <td>FullPivLU</td><td>-</td><td>Slow</td><td>Proven</td>
        <td>Yes</td><td>-</td><td>Yes</td><td>Excellent</td><td>-</td>
    </tr>
    <tr>
        <td>HouseholderQR</td><td>-</td><td>Fast</td><td>Depends on condition number</td>
        <td>-</td><td>Orthogonalization</td><td>Yes</td><td>Excellent</td><td>Blocking</td>
    </tr>
    <tr>
        <td>ColPivHouseholderQR</td><td>-</td><td>Fast</td><td>Good</td>
        <td>Yes</td><td>Orthogonalization</td><td>Yes</td><td>Excellent</td><td>Soon: blocking</td>
    </tr>
    <tr>
        <td>FullPivHouseholderQR</td><td>-</td><td>Slow</td><td>Proven</td>
        <td>Yes</td><td>Orthogonalization</td><td>Yes</td><td>Average</td><td>-</td>
    </tr>
    <tr>
        <td>LLT</td><td>Positive definite</td><td>Very fast</td><td>Depends on condition number</td>
        <td>-</td><td>-</td><td>Yes</td><td>Excellent</td><td>Blocking</td>
    </tr>
    <tr>
        <td>LDLT</td><td>Positive or negative semidefinite<sup>1</sup></td><td>Very fast</td><td>Good</td>
        <td>-</td><td>-</td><td>Yes</td><td>Excellent</td><td>Soon: blocking</td>
    </tr>

    <tr><th class="inter" colspan="9">\n Singular values and eigenvalues decompositions</th></tr>
    <tr>
        <td>JacobiSVD (two-sided)</td><td>-</td><td>Slow (but fast for small matrices)</td><td>Excellent-Proven<sup>3</sup></td>
        <td>Yes</td><td>Singular values/vectors, least squares</td><td>Yes (and does least squares)</td><td>Excellent</td><td>R-SVD</td>
    </tr>
    <tr>
        <td>SelfAdjointEigenSolver</td><td>Self-adjoint</td><td>Fast-average<sup>2</sup></td><td>Good</td>
        <td>Yes</td><td>Eigenvalues/vectors</td><td>-</td><td>Good</td><td>Closed forms for 2x2 and 3x3</td>
    </tr>
    <tr>
        <td>ComplexEigenSolver</td><td>Square</td><td>Slow-very slow<sup>2</sup></td><td>Depends on condition number</td>
        <td>Yes</td><td>Eigenvalues/vectors</td><td>-</td><td>Average</td><td>-</td>
    </tr>
    <tr>
        <td>EigenSolver</td><td>Square and real</td><td>Average-slow<sup>2</sup></td><td>Depends on condition number</td>
        <td>Yes</td><td>Eigenvalues/vectors</td><td>-</td><td>Average</td><td>-</td>
    </tr>
    <tr>
        <td>GeneralizedSelfAdjointEigenSolver</td><td>Square</td><td>Fast-average<sup>2</sup></td><td>Depends on condition number</td>
        <td>-</td><td>Generalized eigenvalues/vectors</td><td>-</td><td>Good</td><td>-</td>
    </tr>

    <tr><th class="inter" colspan="9">\n Helper decompositions</th></tr>
    <tr>
        <td>RealSchur</td><td>Square and real</td><td>Average-slow<sup>2</sup></td><td>Depends on condition number</td>
        <td>Yes</td><td>-</td><td>-</td><td>Average</td><td>-</td>
    </tr>
    <tr>
        <td>ComplexSchur</td><td>Square</td><td>Slow-very slow<sup>2</sup></td><td>Depends on condition number</td>
        <td>Yes</td><td>-</td><td>-</td><td>Average</td><td>-</td>
    </tr>
    <tr>
        <td>Tridiagonalization</td><td>Self-adjoint</td><td>Fast</td><td>Good</td>
        <td>-</td><td>-</td><td>-</td><td>Good</td><td>Soon: blocking</td>
    </tr>
    <tr>
        <td>HessenbergDecomposition</td><td>Square</td><td>Average</td><td>Good</td>
        <td>-</td><td>-</td><td>-</td><td>Good</td><td>Soon: blocking</td>
    </tr>
</table>
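Here is the usage sketch announced above. It is only a minimal illustration: the matrix \c A, the right-hand side \c b and the sizes are placeholders chosen for the example, not part of the catalogue. Apart from their requirements on the matrix, the decompositions above are all used through the same pattern: construct the decomposition from a matrix, then query it (solve, rank, singular values, eigenvalues, ...).

\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(4, 4);   // placeholder matrix
  Eigen::VectorXd b = Eigen::VectorXd::Random(4);      // placeholder right-hand side

  // PartialPivLU: fast solver for invertible matrices.
  Eigen::VectorXd x1 = A.partialPivLu().solve(b);

  // ColPivHouseholderQR: rank-revealing, and still provides a solver.
  Eigen::ColPivHouseholderQR<Eigen::MatrixXd> qr(A);
  std::cout << "rank: " << qr.rank() << "\n";
  Eigen::VectorXd x2 = qr.solve(b);

  // JacobiSVD: singular values/vectors and least-squares solving.
  Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
  std::cout << "singular values: " << svd.singularValues().transpose() << "\n";
  Eigen::VectorXd x3 = svd.solve(b);

  // SelfAdjointEigenSolver: eigenvalues/vectors of a selfadjoint matrix.
  Eigen::MatrixXd S = A.transpose() * A;                // selfadjoint by construction
  Eigen::SelfAdjointEigenSolver<Eigen::MatrixXd> es(S);
  std::cout << "eigenvalues: " << es.eigenvalues().transpose() << "\n";
}
\endcode

The one-liner \c A.partialPivLu().solve(b) is equivalent to constructing a PartialPivLU object explicitly and calling \c solve() on it.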
\b Notes:
<ul>
  <li><b>1:</b> There exist two variants of the LDLT algorithm. Eigen's one produces a pure diagonal D matrix, and therefore it cannot handle indefinite matrices, unlike Lapack's one which produces a block diagonal D matrix.</li>
  <li><b>2:</b> Eigenvalues, SVD and Schur decompositions rely on iterative algorithms. Their convergence speed depends on how well the eigenvalues are separated.</li>
  <li><b>3:</b> Our JacobiSVD is two-sided, making for proven and optimal precision for square matrices. For non-square matrices, a QR preconditioner is used first. The default choice, ColPivHouseholderQR, is already very reliable, but if you want the result to be proven, use FullPivHouseholderQR instead (see the sketch below).</li>
</ul>
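To make note 3 concrete, here is a hedged sketch (the matrix \c A and its size are placeholders): the QR preconditioner is selected through JacobiSVD's second template parameter, and the FullPivHouseholderQR preconditioner only supports full unitaries.

\code
#include <Eigen/Dense>

int main()
{
  Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 4);  // placeholder non-square matrix

  // Default QR preconditioner: ColPivHouseholderQR (very reliable).
  Eigen::JacobiSVD<Eigen::MatrixXd>
      svdDefault(A, Eigen::ComputeThinU | Eigen::ComputeThinV);

  // Proven variant: FullPivHouseholderQR preconditioner.
  // This preconditioner requires full U and V (thin unitaries are not supported).
  Eigen::JacobiSVD<Eigen::MatrixXd, Eigen::FullPivHouseholderQRPreconditioner>
      svdProven(A, Eigen::ComputeFullU | Eigen::ComputeFullV);

  // Both expose the same interface afterwards.
  Eigen::VectorXd sv = svdProven.singularValues();
}
\endcode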
\section TopicLinAlgTerminology Terminology

<dl>
  <dt><b>Selfadjoint</b></dt>
  <dd>For a real matrix, selfadjoint is a synonym for symmetric. For a complex matrix, selfadjoint is a synonym for \em hermitian.
      More generally, a matrix \f$ A \f$ is selfadjoint if and only if it is equal to its adjoint \f$ A^* \f$. The adjoint is also called the \em conjugate \em transpose.</dd>
  <dt><b>Positive/negative definite</b></dt>
  <dd>A selfadjoint matrix \f$ A \f$ is positive definite if \f$ v^* A v > 0 \f$ for any non-zero vector \f$ v \f$.
      In the same vein, it is negative definite if \f$ v^* A v < 0 \f$ for any non-zero vector \f$ v \f$.</dd>
  <dt><b>Positive/negative semidefinite</b></dt>
  <dd>A selfadjoint matrix \f$ A \f$ is positive semi-definite if \f$ v^* A v \ge 0 \f$ for any non-zero vector \f$ v \f$.
      In the same vein, it is negative semi-definite if \f$ v^* A v \le 0 \f$ for any non-zero vector \f$ v \f$ (see the LLT/LDLT sketch after this list).</dd>
  <dt><b>Blocking</b></dt>
  <dd>Means the algorithm can work per block, hence guaranteeing a good scaling of the performance for large matrices.</dd>
  <dt><b>Implicit Multi Threading (MT)</b></dt>
  <dd>Means the algorithm can take advantage of multicore processors via OpenMP. "Implicit" means the algorithm itself is not parallelized, but that it relies on parallelized matrix-matrix product routines (see the threading sketch after this list).</dd>
  <dt><b>Explicit Multi Threading (MT)</b></dt>
  <dd>Means the algorithm is explicitly parallelized to take advantage of multicore processors via OpenMP.</dd>
  <dt><b>Meta-unroller</b></dt>
  <dd>Means the algorithm is automatically and explicitly unrolled for very small fixed size matrices.</dd>
</dl>
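As referred to above, here is a minimal sketch illustrating these definitions with Eigen; the matrices are placeholders built for the example. A selfadjoint matrix equals its adjoint, LLT requires a positive definite matrix, LDLT also accepts semidefinite ones, and both report trouble through \c info().

\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  Eigen::MatrixXd M = Eigen::MatrixXd::Random(4, 4);

  Eigen::MatrixXd spd = M * M.transpose() + Eigen::MatrixXd::Identity(4, 4);  // positive definite
  Eigen::MatrixXd psd = M.leftCols(2) * M.leftCols(2).transpose();            // positive semidefinite, rank-deficient

  std::cout << "spd selfadjoint? " << spd.isApprox(spd.adjoint()) << "\n";    // selfadjoint: A equals its adjoint

  Eigen::LLT<Eigen::MatrixXd>  llt(spd);   // requires a positive definite matrix
  Eigen::LDLT<Eigen::MatrixXd> ldlt(psd);  // also accepts positive/negative semidefinite matrices

  if (llt.info() != Eigen::Success)
    std::cout << "LLT failed: the matrix is probably not positive definite\n";
  if (ldlt.info() != Eigen::Success)
    std::cout << "LDLT reported a numerical issue\n";
}
\endcode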
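And here is the threading sketch referred to above. It is only illustrative: the thread count (4) is an arbitrary placeholder, and it has an effect only if Eigen was compiled with OpenMP support (e.g. \c -fopenmp). Algorithms tagged "Implicit MT" in the table, such as PartialPivLU, inherit the parallelism of the underlying matrix-matrix product kernels.

\code
#include <Eigen/Dense>
#include <iostream>

int main()
{
  // Thread count used by Eigen's parallelized kernels (mainly matrix-matrix products).
  // Effective only when OpenMP is enabled at compile time; otherwise Eigen stays single-threaded.
  Eigen::setNbThreads(4);
  std::cout << "Eigen uses " << Eigen::nbThreads() << " thread(s)\n";

  Eigen::MatrixXd A = Eigen::MatrixXd::Random(1024, 1024);
  Eigen::MatrixXd B = Eigen::MatrixXd::Random(1024, 1024);

  Eigen::MatrixXd C = A * B;                    // explicitly parallelized product kernel
  Eigen::PartialPivLU<Eigen::MatrixXd> lu(A);   // "Implicit MT": relies on the parallel product kernel

  std::cout << C.norm() << " " << lu.determinant() << "\n";
}
\endcode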
*/

}
+ +*/ + +} -- cgit v1.2.3