| author | Stanislaw Halik <sthalik@misaki.pl> | 2017-03-25 14:17:07 +0100 |
|---|---|---|
| committer | Stanislaw Halik <sthalik@misaki.pl> | 2017-03-25 14:17:07 +0100 |
| commit | 35f7829af10c61e33dd2e2a7a015058e11a11ea0 (patch) | |
| tree | 7135010dcf8fd0a49f3020d52112709bcb883bd6 /eigen/doc/TopicLinearAlgebraDecompositions.dox | |
| parent | 6e8724193e40a932faf9064b664b529e7301c578 (diff) | |
update
Diffstat (limited to 'eigen/doc/TopicLinearAlgebraDecompositions.dox')
| -rw-r--r-- | eigen/doc/TopicLinearAlgebraDecompositions.dox | 8 |
1 file changed, 5 insertions, 3 deletions
diff --git a/eigen/doc/TopicLinearAlgebraDecompositions.dox b/eigen/doc/TopicLinearAlgebraDecompositions.dox
index 8649cc2..4914706 100644
--- a/eigen/doc/TopicLinearAlgebraDecompositions.dox
+++ b/eigen/doc/TopicLinearAlgebraDecompositions.dox
@@ -4,6 +4,7 @@ namespace Eigen {
 This page presents a catalogue of the dense matrix decompositions offered by Eigen.
 For an introduction on linear solvers and decompositions, check this \link TutorialLinearAlgebra page \endlink.
+To get an overview of the true relative speed of the different decomposition, check this \link DenseDecompositionBenchmark benchmark \endlink.
 
 \section TopicLinAlgBigTable Catalogue of decompositions offered by Eigen
 
@@ -116,7 +117,7 @@ For an introduction on linear solvers and decompositions, check this \link Tutor
         <td>JacobiSVD (two-sided)</td>
         <td>-</td>
         <td>Slow (but fast for small matrices)</td>
-        <td>Excellent-Proven<sup><a href="#note3">3</a></sup></td>
+        <td>Proven<sup><a href="#note3">3</a></sup></td>
         <td>Yes</td>
         <td>Singular values/vectors, least squares</td>
         <td>Yes (and does least squares)</td>
@@ -132,7 +133,7 @@ For an introduction on linear solvers and decompositions, check this \link Tutor
         <td>Yes</td>
         <td>Eigenvalues/vectors</td>
         <td>-</td>
-        <td>Good</td>
+        <td>Excellent</td>
         <td><em>Closed forms for 2x2 and 3x3</em></td>
       </tr>
 
@@ -249,13 +250,14 @@ For an introduction on linear solvers and decompositions, check this \link Tutor
   <dt><b>Implicit Multi Threading (MT)</b></dt>
   <dd>Means the algorithm can take advantage of multicore processors via OpenMP. "Implicit" means the algortihm itself is not parallelized, but that it relies on parallelized matrix-matrix product rountines.</dd>
   <dt><b>Explicit Multi Threading (MT)</b></dt>
-  <dd>Means the algorithm is explicitely parallelized to take advantage of multicore processors via OpenMP.</dd>
+  <dd>Means the algorithm is explicitly parallelized to take advantage of multicore processors via OpenMP.</dd>
   <dt><b>Meta-unroller</b></dt>
   <dd>Means the algorithm is automatically and explicitly unrolled for very small fixed size matrices.</dd>
   <dt><b></b></dt>
   <dd></dd>
 </dl>
 
+*/
 }
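
The two catalogue rows this patch touches are easy to exercise directly. Below is a minimal C++ sketch, not part of the commit, showing the two-sided JacobiSVD used for a least-squares solve and the closed-form self-adjoint eigensolver path for 3x3 matrices (presumably the SelfAdjointEigenSolver row whose accuracy rating the patch bumps to "Excellent"). It assumes only a standard Eigen 3 installation; matrix sizes and values are illustrative.

```cpp
// Sketch only: exercises the two decompositions whose ratings this commit adjusts.
#include <Eigen/Dense>
#include <iostream>

int main() {
    // JacobiSVD (two-sided): slow in general but fine for small matrices,
    // and it solves least-squares problems directly via solve().
    Eigen::MatrixXd A = Eigen::MatrixXd::Random(6, 3);   // overdetermined system
    Eigen::VectorXd b = Eigen::VectorXd::Random(6);
    Eigen::JacobiSVD<Eigen::MatrixXd> svd(A, Eigen::ComputeThinU | Eigen::ComputeThinV);
    Eigen::VectorXd x = svd.solve(b);                     // minimizes ||Ax - b||
    std::cout << "Least-squares solution:\n" << x << "\n";

    // Self-adjoint eigensolver: for 2x2 and 3x3 matrices the closed-form
    // path mentioned in the table is exposed via computeDirect().
    Eigen::Matrix3d S;
    S << 4, 1, 0,
         1, 3, 1,
         0, 1, 2;                                         // symmetric by construction
    Eigen::SelfAdjointEigenSolver<Eigen::Matrix3d> es;
    es.computeDirect(S);
    std::cout << "Eigenvalues:\n" << es.eigenvalues() << "\n";
    return 0;
}
```

As a design note, computeDirect() uses the closed-form 2x2/3x3 formulas the table cell refers to and trades a little accuracy for speed, whereas compute() runs the general iterative algorithm; JacobiSVD remains the slow-but-proven choice from the table for small or ill-conditioned problems.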
