This is gsl-ref.info, produced by makeinfo version 6.4 from gsl-ref.texi. GSL 2.7, May 27, 2021 The GSL Team Copyright © 1996-2021 The GSL Team INFO-DIR-SECTION Miscellaneous START-INFO-DIR-ENTRY * GSL: (gsl-ref.info). One line description of project. END-INFO-DIR-ENTRY Generated by Sphinx 3.4.1.  File: gsl-ref.info, Node: Top, Next: Introduction, Up: (dir) GNU Scientific Library ********************** GSL 2.7, May 27, 2021 The GSL Team Copyright © 1996-2021 The GSL Team * Menu: * Introduction:: * Using the Library:: * Error Handling:: * Mathematical Functions:: * Complex Numbers:: * Polynomials:: * Special Functions:: * Vectors and Matrices:: * Permutations:: * Combinations:: * Multisets:: * Sorting:: * BLAS Support:: * Linear Algebra:: * Eigensystems:: * Fast Fourier Transforms (FFTs): Fast Fourier Transforms FFTs. * Numerical Integration:: * Random Number Generation:: * Quasi-Random Sequences:: * Random Number Distributions:: * Statistics:: * Running Statistics:: * Moving Window Statistics:: * Digital Filtering:: * Histograms:: * N-tuples:: * Monte Carlo Integration:: * Simulated Annealing:: * Ordinary Differential Equations:: * Interpolation:: * Numerical Differentiation:: * Chebyshev Approximations:: * Series Acceleration:: * Wavelet Transforms:: * Discrete Hankel Transforms:: * One Dimensional Root-Finding:: * One Dimensional Minimization:: * Multidimensional Root-Finding:: * Multidimensional Minimization:: * Linear Least-Squares Fitting:: * Nonlinear Least-Squares Fitting:: * Basis Splines:: * Sparse Matrices:: * Sparse BLAS Support:: * Sparse Linear Algebra:: * Physical Constants:: * IEEE floating-point arithmetic:: * Debugging Numerical Programs:: * Contributors to GSL:: * Autoconf Macros:: * GSL CBLAS Library:: * GNU General Public License:: * GNU Free Documentation License:: * Index:: — The Detailed Node Listing — Introduction * Routines available in GSL:: * GSL is Free Software:: * Obtaining GSL:: * No Warranty:: * Reporting Bugs:: * Further Information:: * Conventions used in this manual:: Using the Library * An Example Program:: * Compiling and Linking:: * Shared Libraries:: * ANSI C Compliance:: * Inline functions:: * Long double:: * Portability functions:: * Alternative optimized functions:: * Support for different numeric types:: * Compatibility with C++:: * Aliasing of arrays:: * Thread-safety:: * Deprecated Functions:: * Code Reuse:: Compiling and Linking * Linking programs with the library:: * Linking with an alternative BLAS library:: Error Handling * Error Reporting:: * Error Codes:: * Error Handlers:: * Using GSL error reporting in your own functions:: * Examples:: Mathematical Functions * Mathematical Constants:: * Infinities and Not-a-number:: * Elementary Functions:: * Small integer powers:: * Testing the Sign of Numbers:: * Testing for Odd and Even Numbers:: * Maximum and Minimum functions:: * Approximate Comparison of Floating Point Numbers:: Complex Numbers * Representation of complex numbers:: * Complex number macros:: * Assigning complex numbers:: * Properties of complex numbers:: * Complex arithmetic operators:: * Elementary Complex Functions:: * Complex Trigonometric Functions:: * Inverse Complex Trigonometric Functions:: * Complex Hyperbolic Functions:: * Inverse Complex Hyperbolic Functions:: * References and Further Reading:: Polynomials * Polynomial Evaluation:: * Divided Difference Representation of Polynomials:: * Quadratic Equations:: * Cubic Equations:: * General Polynomial Equations:: * Examples: Examples<2>. 
* References and Further Reading: References and Further Reading<2>. Special Functions * Usage:: * The gsl_sf_result struct:: * Modes:: * Airy Functions and Derivatives:: * Bessel Functions:: * Clausen Functions:: * Coulomb Functions:: * Coupling Coefficients:: * Dawson Function:: * Debye Functions:: * Dilogarithm:: * Elementary Operations:: * Elliptic Integrals:: * Elliptic Functions (Jacobi): Elliptic Functions Jacobi. * Error Functions:: * Exponential Functions:: * Exponential Integrals:: * Fermi-Dirac Function:: * Gamma and Beta Functions:: * Gegenbauer Functions:: * Hermite Polynomials and Functions:: * Hypergeometric Functions:: * Laguerre Functions:: * Lambert W Functions:: * Legendre Functions and Spherical Harmonics:: * Logarithm and Related Functions:: * Mathieu Functions:: * Power Function:: * Psi (Digamma) Function: Psi Digamma Function. * Synchrotron Functions:: * Transport Functions:: * Trigonometric Functions:: * Zeta Functions:: * Examples: Examples<3>. * References and Further Reading: References and Further Reading<3>. Airy Functions and Derivatives * Airy Functions:: * Derivatives of Airy Functions:: * Zeros of Airy Functions:: * Zeros of Derivatives of Airy Functions:: Bessel Functions * Regular Cylindrical Bessel Functions:: * Irregular Cylindrical Bessel Functions:: * Regular Modified Cylindrical Bessel Functions:: * Irregular Modified Cylindrical Bessel Functions:: * Regular Spherical Bessel Functions:: * Irregular Spherical Bessel Functions:: * Regular Modified Spherical Bessel Functions:: * Irregular Modified Spherical Bessel Functions:: * Regular Bessel Function—Fractional Order:: * Irregular Bessel Functions—Fractional Order:: * Regular Modified Bessel Functions—Fractional Order:: * Irregular Modified Bessel Functions—Fractional Order:: * Zeros of Regular Bessel Functions:: Coulomb Functions * Normalized Hydrogenic Bound States:: * Coulomb Wave Functions:: * Coulomb Wave Function Normalization Constant:: Coupling Coefficients * 3-j Symbols:: * 6-j Symbols:: * 9-j Symbols:: Dilogarithm * Real Argument:: * Complex Argument:: Elliptic Integrals * Definition of Legendre Forms:: * Definition of Carlson Forms:: * Legendre Form of Complete Elliptic Integrals:: * Legendre Form of Incomplete Elliptic Integrals:: * Carlson Forms:: Error Functions * Error Function:: * Complementary Error Function:: * Log Complementary Error Function:: * Probability functions:: Exponential Functions * Exponential Function:: * Relative Exponential Functions:: * Exponentiation With Error Estimate:: Exponential Integrals * Exponential Integral:: * Ei(x): Ei x. * Hyperbolic Integrals:: * Ei_3(x): Ei_3 x. 
* Trigonometric Integrals:: * Arctangent Integral:: Fermi-Dirac Function * Complete Fermi-Dirac Integrals:: * Incomplete Fermi-Dirac Integrals:: Gamma and Beta Functions * Gamma Functions:: * Factorials:: * Pochhammer Symbol:: * Incomplete Gamma Functions:: * Beta Functions:: * Incomplete Beta Function:: Hermite Polynomials and Functions * Hermite Polynomials:: * Derivatives of Hermite Polynomials:: * Hermite Functions:: * Derivatives of Hermite Functions:: * Zeros of Hermite Polynomials and Hermite Functions:: Legendre Functions and Spherical Harmonics * Legendre Polynomials:: * Associated Legendre Polynomials and Spherical Harmonics:: * Conical Functions:: * Radial Functions for Hyperbolic Space:: Mathieu Functions * Mathieu Function Workspace:: * Mathieu Function Characteristic Values:: * Angular Mathieu Functions:: * Radial Mathieu Functions:: Psi (Digamma) Function * Digamma Function:: * Trigamma Function:: * Polygamma Function:: Trigonometric Functions * Circular Trigonometric Functions:: * Trigonometric Functions for Complex Arguments:: * Hyperbolic Trigonometric Functions:: * Conversion Functions:: * Restriction Functions:: * Trigonometric Functions With Error Estimates:: Zeta Functions * Riemann Zeta Function:: * Riemann Zeta Function Minus One:: * Hurwitz Zeta Function:: * Eta Function:: Vectors and Matrices * Data types:: * Blocks:: * Vectors:: * Matrices:: Blocks * Block allocation:: * Reading and writing blocks:: * Example programs for blocks:: Vectors * Vector allocation:: * Accessing vector elements:: * Initializing vector elements:: * Reading and writing vectors:: * Vector views:: * Copying vectors:: * Exchanging elements:: * Vector operations:: * Finding maximum and minimum elements of vectors:: * Vector properties:: * Example programs for vectors:: Matrices * Matrix allocation:: * Accessing matrix elements:: * Initializing matrix elements:: * Reading and writing matrices:: * Matrix views:: * Creating row and column views:: * Copying matrices:: * Copying rows and columns:: * Exchanging rows and columns:: * Matrix operations:: * Finding maximum and minimum elements of matrices:: * Matrix properties:: * Example programs for matrices:: * References and Further Reading: References and Further Reading<4>. Permutations * The Permutation struct:: * Permutation allocation:: * Accessing permutation elements:: * Permutation properties:: * Permutation functions:: * Applying Permutations:: * Reading and writing permutations:: * Permutations in cyclic form:: * Examples: Examples<4>. * References and Further Reading: References and Further Reading<5>. Combinations * The Combination struct:: * Combination allocation:: * Accessing combination elements:: * Combination properties:: * Combination functions:: * Reading and writing combinations:: * Examples: Examples<5>. * References and Further Reading: References and Further Reading<6>. Multisets * The Multiset struct:: * Multiset allocation:: * Accessing multiset elements:: * Multiset properties:: * Multiset functions:: * Reading and writing multisets:: * Examples: Examples<6>. Sorting * Sorting objects:: * Sorting vectors:: * Selecting the k smallest or largest elements:: * Computing the rank:: * Examples: Examples<7>. * References and Further Reading: References and Further Reading<7>. BLAS Support * GSL BLAS Interface:: * Examples: Examples<8>. * References and Further Reading: References and Further Reading<8>. 
GSL BLAS Interface * Level 1:: * Level 2:: * Level 3:: Linear Algebra * LU Decomposition:: * QR Decomposition:: * QR Decomposition with Column Pivoting:: * LQ Decomposition:: * QL Decomposition:: * Complete Orthogonal Decomposition:: * Singular Value Decomposition:: * Cholesky Decomposition:: * Pivoted Cholesky Decomposition:: * Modified Cholesky Decomposition:: * LDLT Decomposition:: * Tridiagonal Decomposition of Real Symmetric Matrices:: * Tridiagonal Decomposition of Hermitian Matrices:: * Hessenberg Decomposition of Real Matrices:: * Hessenberg-Triangular Decomposition of Real Matrices:: * Bidiagonalization:: * Givens Rotations:: * Householder Transformations:: * Householder solver for linear systems:: * Tridiagonal Systems:: * Triangular Systems:: * Banded Systems:: * Balancing:: * Examples: Examples<9>. * References and Further Reading: References and Further Reading<9>. QR Decomposition * Level 2 Interface:: * Triangle on Top of Rectangle:: * Triangle on Top of Triangle:: * Triangle on Top of Trapezoidal:: * Triangle on Top of Diagonal:: Banded Systems * General Banded Format:: * Symmetric Banded Format:: * Banded LU Decomposition:: * Banded Cholesky Decomposition:: * Banded LDLT Decomposition:: Eigensystems * Real Symmetric Matrices:: * Complex Hermitian Matrices:: * Real Nonsymmetric Matrices:: * Real Generalized Symmetric-Definite Eigensystems:: * Complex Generalized Hermitian-Definite Eigensystems:: * Real Generalized Nonsymmetric Eigensystems:: * Sorting Eigenvalues and Eigenvectors:: * Examples: Examples<10>. * References and Further Reading: References and Further Reading<10>. Fast Fourier Transforms (FFTs) * Mathematical Definitions:: * Overview of complex data FFTs:: * Radix-2 FFT routines for complex data:: * Mixed-radix FFT routines for complex data:: * Overview of real data FFTs:: * Radix-2 FFT routines for real data:: * Mixed-radix FFT routines for real data:: * References and Further Reading: References and Further Reading<11>. Numerical Integration * Introduction: Introduction<2>. * QNG non-adaptive Gauss-Kronrod integration:: * QAG adaptive integration:: * QAGS adaptive integration with singularities:: * QAGP adaptive integration with known singular points:: * QAGI adaptive integration on infinite intervals:: * QAWC adaptive integration for Cauchy principal values:: * QAWS adaptive integration for singular functions:: * QAWO adaptive integration for oscillatory functions:: * QAWF adaptive integration for Fourier integrals:: * CQUAD doubly-adaptive integration:: * Romberg integration:: * Gauss-Legendre integration:: * Fixed point quadratures:: * Error codes:: * Examples: Examples<11>. * References and Further Reading: References and Further Reading<12>. Introduction * Integrands without weight functions:: * Integrands with weight functions:: * Integrands with singular weight functions:: Examples * Adaptive integration example:: * Fixed-point quadrature example:: Random Number Generation * General comments on random numbers:: * The Random Number Generator Interface:: * Random number generator initialization:: * Sampling from a random number generator:: * Auxiliary random number generator functions:: * Random number environment variables:: * Copying random number generator state:: * Reading and writing random number generator state:: * Random number generator algorithms:: * Unix random number generators:: * Other random number generators:: * Performance:: * Examples: Examples<12>. * References and Further Reading: References and Further Reading<13>. 
* Acknowledgements:: Quasi-Random Sequences * Quasi-random number generator initialization:: * Sampling from a quasi-random number generator:: * Auxiliary quasi-random number generator functions:: * Saving and restoring quasi-random number generator state:: * Quasi-random number generator algorithms:: * Examples: Examples<13>. * References:: Random Number Distributions * Introduction: Introduction<3>. * The Gaussian Distribution:: * The Gaussian Tail Distribution:: * The Bivariate Gaussian Distribution:: * The Multivariate Gaussian Distribution:: * The Exponential Distribution:: * The Laplace Distribution:: * The Exponential Power Distribution:: * The Cauchy Distribution:: * The Rayleigh Distribution:: * The Rayleigh Tail Distribution:: * The Landau Distribution:: * The Levy alpha-Stable Distributions:: * The Levy skew alpha-Stable Distribution:: * The Gamma Distribution:: * The Flat (Uniform) Distribution: The Flat Uniform Distribution. * The Lognormal Distribution:: * The Chi-squared Distribution:: * The F-distribution:: * The t-distribution:: * The Beta Distribution:: * The Logistic Distribution:: * The Pareto Distribution:: * Spherical Vector Distributions:: * The Weibull Distribution:: * The Type-1 Gumbel Distribution:: * The Type-2 Gumbel Distribution:: * The Dirichlet Distribution:: * General Discrete Distributions:: * The Poisson Distribution:: * The Bernoulli Distribution:: * The Binomial Distribution:: * The Multinomial Distribution:: * The Negative Binomial Distribution:: * The Pascal Distribution:: * The Geometric Distribution:: * The Hypergeometric Distribution:: * The Logarithmic Distribution:: * The Wishart Distribution:: * Shuffling and Sampling:: * Examples: Examples<14>. * References and Further Reading: References and Further Reading<14>. Statistics * Mean, Standard Deviation and Variance: Mean Standard Deviation and Variance. * Absolute deviation:: * Higher moments (skewness and kurtosis): Higher moments skewness and kurtosis. * Autocorrelation:: * Covariance:: * Correlation:: * Weighted Samples:: * Maximum and Minimum values:: * Median and Percentiles:: * Order Statistics:: * Robust Location Estimates:: * Robust Scale Estimates:: * Examples: Examples<15>. * References and Further Reading: References and Further Reading<15>. Robust Location Estimates * Trimmed Mean:: * Gastwirth Estimator:: Robust Scale Estimates * Median Absolute Deviation (MAD): Median Absolute Deviation MAD. * S_n Statistic:: * Q_n Statistic:: Running Statistics * Initializing the Accumulator:: * Adding Data to the Accumulator:: * Current Statistics:: * Quantiles:: * Examples: Examples<16>. * References and Further Reading: References and Further Reading<16>. Moving Window Statistics * Introduction: Introduction<4>. * Handling Endpoints:: * Allocation for Moving Window Statistics:: * Moving Mean:: * Moving Variance and Standard Deviation:: * Moving Minimum and Maximum:: * Moving Sum:: * Moving Median:: * Robust Scale Estimation:: * User-defined Moving Statistics:: * Accumulators:: * Examples: Examples<17>. * References and Further Reading: References and Further Reading<17>. Robust Scale Estimation * Moving MAD:: * Moving QQR:: * Moving S_n:: * Moving Q_n:: Examples * Example 1:: * Example 2; Robust Scale: Example 2 Robust Scale. * Example 3; User-defined Moving Window: Example 3 User-defined Moving Window. Digital Filtering * Introduction: Introduction<5>. * Handling Endpoints: Handling Endpoints<2>. * Linear Digital Filters:: * Nonlinear Digital Filters:: * Examples: Examples<18>. 
* References and Further Reading: References and Further Reading<18>. Linear Digital Filters * Gaussian Filter:: Nonlinear Digital Filters * Standard Median Filter:: * Recursive Median Filter:: * Impulse Detection Filter:: Examples * Gaussian Example 1:: * Gaussian Example 2:: * Square Wave Signal Example:: * Impulse Detection Example:: Histograms * The histogram struct:: * Histogram allocation:: * Copying Histograms:: * Updating and accessing histogram elements:: * Searching histogram ranges:: * Histogram Statistics:: * Histogram Operations:: * Reading and writing histograms:: * Resampling from histograms:: * The histogram probability distribution struct:: * Example programs for histograms:: * Two dimensional histograms:: * The 2D histogram struct:: * 2D Histogram allocation:: * Copying 2D Histograms:: * Updating and accessing 2D histogram elements:: * Searching 2D histogram ranges:: * 2D Histogram Statistics:: * 2D Histogram Operations:: * Reading and writing 2D histograms:: * Resampling from 2D histograms:: * Example programs for 2D histograms:: N-tuples * The ntuple struct:: * Creating ntuples:: * Opening an existing ntuple file:: * Writing ntuples:: * Reading ntuples:: * Closing an ntuple file:: * Histogramming ntuple values:: * Examples: Examples<19>. * References and Further Reading: References and Further Reading<19>. Monte Carlo Integration * Interface:: * PLAIN Monte Carlo:: * MISER:: * VEGAS:: * Examples: Examples<20>. * References and Further Reading: References and Further Reading<20>. Simulated Annealing * Simulated Annealing algorithm:: * Simulated Annealing functions:: * Examples: Examples<21>. * References and Further Reading: References and Further Reading<21>. Examples * Trivial example:: * Traveling Salesman Problem:: Ordinary Differential Equations * Defining the ODE System:: * Stepping Functions:: * Adaptive Step-size Control:: * Evolution:: * Driver:: * Examples: Examples<22>. * References and Further Reading: References and Further Reading<22>. Interpolation * Introduction to 1D Interpolation:: * 1D Interpolation Functions:: * 1D Interpolation Types:: * 1D Index Look-up and Acceleration:: * 1D Evaluation of Interpolating Functions:: * 1D Higher-level Interface:: * 1D Interpolation Example Programs:: * Introduction to 2D Interpolation:: * 2D Interpolation Functions:: * 2D Interpolation Grids:: * 2D Interpolation Types:: * 2D Evaluation of Interpolating Functions:: * 2D Higher-level Interface:: * 2D Interpolation Example programs:: * References and Further Reading: References and Further Reading<23>. Numerical Differentiation * Functions:: * Examples: Examples<23>. * References and Further Reading: References and Further Reading<24>. Chebyshev Approximations * Definitions:: * Creation and Calculation of Chebyshev Series:: * Auxiliary Functions:: * Chebyshev Series Evaluation:: * Derivatives and Integrals:: * Examples: Examples<24>. * References and Further Reading: References and Further Reading<25>. Series Acceleration * Acceleration functions:: * Acceleration functions without error estimation:: * Examples: Examples<25>. * References and Further Reading: References and Further Reading<26>. Wavelet Transforms * Definitions: Definitions<2>. * Initialization:: * Transform Functions:: * Examples: Examples<26>. * References and Further Reading: References and Further Reading<27>. Transform Functions * Wavelet transforms in one dimension:: * Wavelet transforms in two dimension:: Discrete Hankel Transforms * Definitions: Definitions<3>. * Functions: Functions<2>. 
* References and Further Reading: References and Further Reading<28>. One Dimensional Root-Finding * Overview:: * Caveats:: * Initializing the Solver:: * Providing the function to solve:: * Search Bounds and Guesses:: * Iteration:: * Search Stopping Parameters:: * Root Bracketing Algorithms:: * Root Finding Algorithms using Derivatives:: * Examples: Examples<27>. * References and Further Reading: References and Further Reading<29>. One Dimensional Minimization * Overview: Overview<2>. * Caveats: Caveats<2>. * Initializing the Minimizer:: * Providing the function to minimize:: * Iteration: Iteration<2>. * Stopping Parameters:: * Minimization Algorithms:: * Examples: Examples<28>. * References and Further Reading: References and Further Reading<30>. Multidimensional Root-Finding * Overview: Overview<3>. * Initializing the Solver: Initializing the Solver<2>. * Providing the function to solve: Providing the function to solve<2>. * Iteration: Iteration<3>. * Search Stopping Parameters: Search Stopping Parameters<2>. * Algorithms using Derivatives:: * Algorithms without Derivatives:: * Examples: Examples<29>. * References and Further Reading: References and Further Reading<31>. Multidimensional Minimization * Overview: Overview<4>. * Caveats: Caveats<3>. * Initializing the Multidimensional Minimizer:: * Providing a function to minimize:: * Iteration: Iteration<4>. * Stopping Criteria:: * Algorithms with Derivatives:: * Algorithms without Derivatives: Algorithms without Derivatives<2>. * Examples: Examples<30>. * References and Further Reading: References and Further Reading<32>. Linear Least-Squares Fitting * Overview: Overview<5>. * Linear regression:: * Multi-parameter regression:: * Regularized regression:: * Robust linear regression:: * Large dense linear systems:: * Troubleshooting:: * Examples: Examples<31>. * References and Further Reading: References and Further Reading<33>. Linear regression * Linear regression with a constant term:: * Linear regression without a constant term:: Large dense linear systems * Normal Equations Approach:: * Tall Skinny QR (TSQR) Approach: Tall Skinny QR TSQR Approach. * Large Dense Linear Systems Solution Steps:: * Large Dense Linear Least Squares Routines:: Examples * Simple Linear Regression Example:: * Multi-parameter Linear Regression Example:: * Regularized Linear Regression Example 1:: * Regularized Linear Regression Example 2:: * Robust Linear Regression Example:: * Large Dense Linear Regression Example:: Nonlinear Least-Squares Fitting * Overview: Overview<6>. * Solving the Trust Region Subproblem (TRS): Solving the Trust Region Subproblem TRS. * Weighted Nonlinear Least-Squares:: * Tunable Parameters:: * Initializing the Solver: Initializing the Solver<3>. * Providing the Function to be Minimized:: * Iteration: Iteration<5>. * Testing for Convergence:: * High Level Driver:: * Covariance matrix of best fit parameters:: * Troubleshooting: Troubleshooting<2>. * Examples: Examples<32>. * References and Further Reading: References and Further Reading<34>. Solving the Trust Region Subproblem (TRS) * Levenberg-Marquardt:: * Levenberg-Marquardt with Geodesic Acceleration:: * Dogleg:: * Double Dogleg:: * Two Dimensional Subspace:: * Steihaug-Toint Conjugate Gradient:: Examples * Exponential Fitting Example:: * Geodesic Acceleration Example 1:: * Geodesic Acceleration Example 2:: * Comparing TRS Methods Example:: * Large Nonlinear Least Squares Example:: Basis Splines * Overview: Overview<7>. 
* Initializing the B-splines solver:: * Constructing the knots vector:: * Evaluation of B-splines:: * Evaluation of B-spline derivatives:: * Working with the Greville abscissae:: * Examples: Examples<33>. * References and Further Reading: References and Further Reading<35>. Sparse Matrices * Data types: Data types<2>. * Sparse Matrix Storage Formats:: * Overview: Overview<8>. * Allocation:: * Accessing Matrix Elements:: * Initializing Matrix Elements:: * Reading and Writing Matrices:: * Copying Matrices:: * Exchanging Rows and Columns:: * Matrix Operations:: * Matrix Properties:: * Finding Maximum and Minimum Elements:: * Compressed Format:: * Conversion Between Sparse and Dense Matrices:: * Examples: Examples<34>. * References and Further Reading: References and Further Reading<36>. Sparse Matrix Storage Formats * Coordinate Storage (COO): Coordinate Storage COO. * Compressed Sparse Column (CSC): Compressed Sparse Column CSC. * Compressed Sparse Row (CSR): Compressed Sparse Row CSR. Sparse BLAS Support * Sparse BLAS operations:: * References and Further Reading: References and Further Reading<37>. Sparse Linear Algebra * Overview: Overview<9>. * Sparse Iterative Solvers:: * Examples: Examples<35>. * References and Further Reading: References and Further Reading<38>. Sparse Iterative Solvers * Overview: Overview<10>. * Types of Sparse Iterative Solvers:: * Iterating the Sparse Linear System:: Physical Constants * Fundamental Constants:: * Astronomy and Astrophysics:: * Atomic and Nuclear Physics:: * Measurement of Time:: * Imperial Units:: * Speed and Nautical Units:: * Printers Units:: * Volume, Area and Length: Volume Area and Length. * Mass and Weight:: * Thermal Energy and Power:: * Pressure:: * Viscosity:: * Light and Illumination:: * Radioactivity:: * Force and Energy:: * Prefixes:: * Examples: Examples<36>. * References and Further Reading: References and Further Reading<39>. IEEE floating-point arithmetic * Representation of floating point numbers:: * Setting up your IEEE environment:: * References and Further Reading: References and Further Reading<40>. Debugging Numerical Programs * Using gdb:: * Examining floating point registers:: * Handling floating point exceptions:: * GCC warning options for numerical programs:: * References and Further Reading: References and Further Reading<41>. GSL CBLAS Library * Level 1: Level 1<2>. * Level 2: Level 2<2>. * Level 3: Level 3<2>. * Examples: Examples<37>.  File: gsl-ref.info, Node: Introduction, Next: Using the Library, Prev: Top, Up: Top 1 Introduction ************** The GNU Scientific Library (GSL) is a collection of routines for numerical computing. The routines have been written from scratch in C, and present a modern Applications Programming Interface (API) for C programmers, allowing wrappers to be written for very high level languages. The source code is distributed under the GNU General Public License. * Menu: * Routines available in GSL:: * GSL is Free Software:: * Obtaining GSL:: * No Warranty:: * Reporting Bugs:: * Further Information:: * Conventions used in this manual::  File: gsl-ref.info, Node: Routines available in GSL, Next: GSL is Free Software, Up: Introduction 1.1 Routines available in GSL ============================= The library covers a wide range of topics in numerical computing. 
Routines are available for the following areas,

     Complex Numbers              Roots of Polynomials
     Special Functions            Vectors and Matrices
     Permutations                 Combinations
     Sorting                      BLAS Support
     Linear Algebra               CBLAS Library
     Fast Fourier Transforms      Eigensystems
     Random Numbers               Quadrature
     Random Distributions         Quasi-Random Sequences
     Histograms                   Statistics
     Monte Carlo Integration      N-Tuples
     Differential Equations       Simulated Annealing
     Numerical Differentiation    Interpolation
     Series Acceleration          Chebyshev Approximations
     Root-Finding                 Discrete Hankel Transforms
     Least-Squares Fitting        Minimization
     IEEE Floating-Point          Physical Constants
     Basis Splines                Wavelets
     Sparse BLAS Support          Sparse Linear Algebra

The use of these routines is described in this manual. Each chapter provides detailed definitions of the functions, followed by example programs and references to the articles on which the algorithms are based. Where possible the routines have been based on reliable public-domain packages such as FFTPACK and QUADPACK, which the developers of GSL have reimplemented in C with modern coding conventions.

File: gsl-ref.info, Node: GSL is Free Software, Next: Obtaining GSL, Prev: Routines available in GSL, Up: Introduction

1.2 GSL is Free Software
========================

The subroutines in the GNU Scientific Library are “free software”; this means that everyone is free to use them, and to redistribute them in other free programs. The library is not in the public domain; it is copyrighted and there are conditions on its distribution.

These conditions are designed to permit everything that a good cooperating citizen would want to do. What is not allowed is to try to prevent others from further sharing any version of the software that they might get from you.

Specifically, we want to make sure that you have the right to share copies of programs that you are given which use the GNU Scientific Library, that you receive their source code or else can get it if you want it, that you can change these programs or use pieces of them in new free programs, and that you know you can do these things.

To make sure that everyone has such rights, we have to forbid you to deprive anyone else of these rights. For example, if you distribute copies of any code which uses the GNU Scientific Library, you must give the recipients all the rights that you have received. You must make sure that they, too, receive or can get the source code, both to the library and the code which uses it. And you must tell them their rights.

This means that the library should not be redistributed in proprietary programs.

Also, for our own protection, we must make certain that everyone finds out that there is no warranty for the GNU Scientific Library. If these programs are modified by someone else and passed on, we want their recipients to know that what they have is not what we distributed, so that any problems introduced by others will not reflect on our reputation.

The precise conditions for the distribution of software related to the GNU Scientific Library are found in the GNU General Public License(1). Further information about this license is available from the GNU Project webpage Frequently Asked Questions about the GNU GPL(2). The Free Software Foundation also operates a license consulting service for commercial users (contact details available from ‘http://www.fsf.org’).
---------- Footnotes ----------

(1) https://www.gnu.org/software/gsl/manual/html_node/GNU-General-Public-License.html#GNU-General-Public-License

(2) http://www.gnu.org/copyleft/gpl-faq.html

File: gsl-ref.info, Node: Obtaining GSL, Next: No Warranty, Prev: GSL is Free Software, Up: Introduction

1.3 Obtaining GSL
=================

The source code for the library can be obtained in different ways, by copying it from a friend, purchasing it on CDROM or downloading it from the internet. A list of public ftp servers which carry the source code can be found on the GNU website, ‘http://www.gnu.org/software/gsl/’.

The preferred platform for the library is a GNU system, which allows it to take advantage of additional features in the GNU C compiler and GNU C library. However, the library is fully portable and should compile on most systems with a C compiler.

Announcements of new releases, updates and other relevant events are made on the mailing list. To subscribe to this low-volume list, send an email of the following form:

     To: info-gsl-request@gnu.org
     Subject: subscribe

You will receive a response asking you to reply in order to confirm your subscription.

File: gsl-ref.info, Node: No Warranty, Next: Reporting Bugs, Prev: Obtaining GSL, Up: Introduction

1.4 No Warranty
===============

The software described in this manual has no warranty, it is provided “as is”. It is your responsibility to validate the behavior of the routines and their accuracy using the source code provided, or to purchase support and warranties from commercial redistributors. Consult the GNU General Public License(1) for further details.

---------- Footnotes ----------

(1) https://www.gnu.org/software/gsl/manual/html_node/GNU-General-Public-License.html#GNU-General-Public-License

File: gsl-ref.info, Node: Reporting Bugs, Next: Further Information, Prev: No Warranty, Up: Introduction

1.5 Reporting Bugs
==================

A list of known bugs can be found in the ‘BUGS’ file included in the GSL distribution or online in the GSL bug tracker. (1) Details of compilation problems can be found in the ‘INSTALL’ file.

If you find a bug which is not listed in these files, please report it to <bug-gsl@gnu.org>.

All bug reports should include:

   - The version number of GSL
   - The hardware and operating system
   - The compiler used, including version number and compilation options
   - A description of the bug behavior
   - A short program which exercises the bug

It is useful if you can check whether the same problem occurs when the library is compiled without optimization. Thank you.

Any errors or omissions in this manual can also be reported to the same address.

---------- Footnotes ----------

(1) ‘http://savannah.gnu.org/bugs/?group=gsl’

File: gsl-ref.info, Node: Further Information, Next: Conventions used in this manual, Prev: Reporting Bugs, Up: Introduction

1.6 Further Information
=======================

Additional information, including online copies of this manual, links to related projects, and mailing list archives are available from the website mentioned above.

Any questions about the use and installation of the library can be asked on the mailing list <help-gsl@gnu.org>. To subscribe to this list, send an email of the following form:

     To: help-gsl-request@gnu.org
     Subject: subscribe

This mailing list can be used to ask questions not covered by this manual, and to contact the developers of the library.

If you would like to refer to the GNU Scientific Library in a journal article, the recommended way is to cite this reference manual, e.g.:
     M. Galassi et al, GNU Scientific Library Reference Manual (3rd Ed.), ISBN 0954612078.

If you want to give a url, use “‘http://www.gnu.org/software/gsl/’”.

File: gsl-ref.info, Node: Conventions used in this manual, Prev: Further Information, Up: Introduction

1.7 Conventions used in this manual
===================================

This manual contains many examples which can be typed at the keyboard. A command entered at the terminal is shown like this:

     $ command

The first character on the line is the terminal prompt, and should not be typed. The dollar sign $ is used as the standard prompt in this manual, although some systems may use a different character.

The examples assume the use of the GNU operating system. There may be minor differences in the output on other systems. The commands for setting environment variables use the Bourne shell syntax of the standard GNU shell (‘bash’).

File: gsl-ref.info, Node: Using the Library, Next: Error Handling, Prev: Introduction, Up: Top

2 Using the Library
*******************

This chapter describes how to compile programs that use GSL, and introduces its conventions.

* Menu:

* An Example Program::
* Compiling and Linking::
* Shared Libraries::
* ANSI C Compliance::
* Inline functions::
* Long double::
* Portability functions::
* Alternative optimized functions::
* Support for different numeric types::
* Compatibility with C++::
* Aliasing of arrays::
* Thread-safety::
* Deprecated Functions::
* Code Reuse::

File: gsl-ref.info, Node: An Example Program, Next: Compiling and Linking, Up: Using the Library

2.1 An Example Program
======================

The following short program demonstrates the use of the library by computing the value of the Bessel function J_0(x) for x=5:

     #include <stdio.h>
     #include <gsl/gsl_sf_bessel.h>

     int
     main (void)
     {
       double x = 5.0;
       double y = gsl_sf_bessel_J0 (x);
       printf ("J0(%g) = %.18e\n", x, y);
       return 0;
     }

The output is shown below, and should be correct to double-precision accuracy (1),

     J0(5) = -1.775967713143382642e-01

The steps needed to compile this program are described in the following sections.

---------- Footnotes ----------

(1) The last few digits may vary slightly depending on the compiler and platform used—this is normal

File: gsl-ref.info, Node: Compiling and Linking, Next: Shared Libraries, Prev: An Example Program, Up: Using the Library

2.2 Compiling and Linking
=========================

The library header files are installed in their own ‘gsl’ directory. You should write any preprocessor include statements with a ‘gsl/’ directory prefix thus:

     #include <gsl/gsl_math.h>

If the directory is not installed on the standard search path of your compiler you will also need to provide its location to the preprocessor as a command line flag. The default location of the ‘gsl’ directory is ‘/usr/local/include/gsl’. A typical compilation command for a source file ‘example.c’ with the GNU C compiler ‘gcc’ is:

     $ gcc -Wall -I/usr/local/include -c example.c

This results in an object file ‘example.o’. The default include path for ‘gcc’ searches ‘/usr/local/include’ automatically so the ‘-I’ option can actually be omitted when GSL is installed in its default location.

* Menu:

* Linking programs with the library::
* Linking with an alternative BLAS library::

File: gsl-ref.info, Node: Linking programs with the library, Next: Linking with an alternative BLAS library, Up: Compiling and Linking

2.2.1 Linking programs with the library
---------------------------------------

The library is installed as a single file, ‘libgsl.a’.
A shared version of the library ‘libgsl.so’ is also installed on systems that support shared libraries. The default location of these files is ‘/usr/local/lib’. If this directory is not on the standard search path of your linker you will also need to provide its location as a command line flag. To link against the library you need to specify both the main library and a supporting CBLAS library, which provides standard basic linear algebra subroutines. A suitable CBLAS implementation is provided in the library ‘libgslcblas.a’ if your system does not provide one. The following example shows how to link an application with the library: $ gcc -L/usr/local/lib example.o -lgsl -lgslcblas -lm The default library path for ‘gcc’ searches ‘/usr/local/lib’ automatically so the ‘-L’ option can be omitted when GSL is installed in its default location. The option ‘-lm’ links with the system math library. On some systems it is not needed. (1) For a tutorial introduction to the GNU C Compiler and related programs, see “An Introduction to GCC” (ISBN 0954161793). (2) ---------- Footnotes ---------- (1) (2) It is not needed on MacOS X (2) (3) ‘http://www.network-theory.co.uk/gcc/intro/’  File: gsl-ref.info, Node: Linking with an alternative BLAS library, Prev: Linking programs with the library, Up: Compiling and Linking 2.2.2 Linking with an alternative BLAS library ---------------------------------------------- The following command line shows how you would link the same application with an alternative CBLAS library ‘libcblas.a’: $ gcc example.o -lgsl -lcblas -lm For the best performance an optimized platform-specific CBLAS library should be used for ‘-lcblas’. The library must conform to the CBLAS standard. The ATLAS package provides a portable high-performance BLAS library with a CBLAS interface. It is free software and should be installed for any work requiring fast vector and matrix operations. The following command line will link with the ATLAS library and its CBLAS interface: $ gcc example.o -lgsl -lcblas -latlas -lm If the ATLAS library is installed in a non-standard directory use the ‘-L’ option to add it to the search path, as described above. For more information about BLAS functions see *note BLAS Support: 11.  File: gsl-ref.info, Node: Shared Libraries, Next: ANSI C Compliance, Prev: Compiling and Linking, Up: Using the Library 2.3 Shared Libraries ==================== To run a program linked with the shared version of the library the operating system must be able to locate the corresponding ‘.so’ file at runtime. If the library cannot be found, the following error will occur: $ ./a.out ./a.out: error while loading shared libraries: libgsl.so.0: cannot open shared object file: No such file or directory To avoid this error, either modify the system dynamic linker configuration (1) or define the shell variable ‘LD_LIBRARY_PATH’ to include the directory where the library is installed. For example, in the Bourne shell (‘/bin/sh’ or ‘/bin/bash’), the library search path can be set with the following commands: $ LD_LIBRARY_PATH=/usr/local/lib $ export LD_LIBRARY_PATH $ ./example In the C-shell (‘/bin/csh’ or ‘/bin/tcsh’) the equivalent command is: % setenv LD_LIBRARY_PATH /usr/local/lib The standard prompt for the C-shell in the example above is the percent character %, and should not be typed as part of the command. To save retyping these commands each session they can be placed in an individual or system-wide login file. 
To compile a statically linked version of the program, use the ‘-static’ flag in ‘gcc’: $ gcc -static example.o -lgsl -lgslcblas -lm ---------- Footnotes ---------- (1) (4) ‘/etc/ld.so.conf’ on GNU/Linux systems  File: gsl-ref.info, Node: ANSI C Compliance, Next: Inline functions, Prev: Shared Libraries, Up: Using the Library 2.4 ANSI C Compliance ===================== The library is written in ANSI C and is intended to conform to the ANSI C standard (C89). It should be portable to any system with a working ANSI C compiler. The library does not rely on any non-ANSI extensions in the interface it exports to the user. Programs you write using GSL can be ANSI compliant. Extensions which can be used in a way compatible with pure ANSI C are supported, however, via conditional compilation. This allows the library to take advantage of compiler extensions on those platforms which support them. When an ANSI C feature is known to be broken on a particular system the library will exclude any related functions at compile-time. This should make it impossible to link a program that would use these functions and give incorrect results. To avoid namespace conflicts all exported function names and variables have the prefix ‘gsl_’, while exported macros have the prefix ‘GSL_’.  File: gsl-ref.info, Node: Inline functions, Next: Long double, Prev: ANSI C Compliance, Up: Using the Library 2.5 Inline functions ==================== The ‘inline’ keyword is not part of the original ANSI C standard (C89) so the library does not export any inline function definitions by default. Inline functions were introduced officially in the newer C99 standard but most C89 compilers have also included ‘inline’ as an extension for a long time. To allow the use of inline functions, the library provides optional inline versions of performance-critical routines by conditional compilation in the exported header files. The inline versions of these functions can be included by defining the macro ‘HAVE_INLINE’ when compiling an application: $ gcc -Wall -c -DHAVE_INLINE example.c If you use ‘autoconf’ this macro can be defined automatically. If you do not define the macro ‘HAVE_INLINE’ then the slower non-inlined versions of the functions will be used instead. By default, the actual form of the inline keyword is ‘extern inline’, which is a ‘gcc’ extension that eliminates unnecessary function definitions. If the form ‘extern inline’ causes problems with other compilers a stricter autoconf test can be used, see *note Autoconf Macros: 16. When compiling with ‘gcc’ in C99 mode (‘gcc -std=c99’) the header files automatically switch to C99-compatible inline function declarations instead of ‘extern inline’. With other C99 compilers, define the macro ‘GSL_C99_INLINE’ to use these declarations.  File: gsl-ref.info, Node: Long double, Next: Portability functions, Prev: Inline functions, Up: Using the Library 2.6 Long double =============== In general, the algorithms in the library are written for double precision only. The ‘long double’ type is not supported for actual computation. One reason for this choice is that the precision of ‘long double’ is platform dependent. The IEEE standard only specifies the minimum precision of extended precision numbers, while the precision of ‘double’ is the same on all platforms. However, it is sometimes necessary to interact with external data in long-double format, so the vector and matrix datatypes include long-double versions. 
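As a brief sketch (not taken from the original manual), the long-double vector type can hold such external data, and binary I/O, as recommended at the end of this section, avoids the formatted ‘long double’ input/output problem described next. The file name ‘data.dat’ is illustrative only:

     #include <stdio.h>
     #include <gsl/gsl_vector_long_double.h>

     int
     main (void)
     {
       /* store externally supplied long-double data in a GSL vector */
       gsl_vector_long_double *v = gsl_vector_long_double_alloc (3);

       gsl_vector_long_double_set (v, 0, 1.0L);
       gsl_vector_long_double_set (v, 1, 2.0L);
       gsl_vector_long_double_set (v, 2, 3.0L);

       /* write the data in binary form rather than with formatted output */
       FILE *f = fopen ("data.dat", "wb");
       gsl_vector_long_double_fwrite (f, v);
       fclose (f);

       gsl_vector_long_double_free (v);
       return 0;
     }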
It should be noted that in some system libraries the ‘stdio.h’ formatted input/output functions ‘printf’ and ‘scanf’ are not implemented correctly for ‘long double’. Undefined or incorrect results are avoided by testing these functions during the ‘configure’ stage of library compilation and eliminating certain GSL functions which depend on them if necessary. The corresponding line in the ‘configure’ output looks like this:

     checking whether printf works with long double... no

Consequently when ‘long double’ formatted input/output does not work on a given system it should be impossible to link a program which uses GSL functions dependent on this. If it is necessary to work on a system which does not support formatted ‘long double’ input/output then the options are to use binary formats or to convert ‘long double’ results into ‘double’ for reading and writing.

File: gsl-ref.info, Node: Portability functions, Next: Alternative optimized functions, Prev: Long double, Up: Using the Library

2.7 Portability functions
=========================

To help in writing portable applications GSL provides some implementations of functions that are found in other libraries, such as the BSD math library. You can write your application to use the native versions of these functions, and substitute the GSL versions via a preprocessor macro if they are unavailable on another platform.

For example, after determining whether the BSD function ‘hypot()’ is available you can include the following macro definitions in a file ‘config.h’ with your application:

     /* Substitute gsl_hypot for missing system hypot */

     #ifndef HAVE_HYPOT
     #define hypot gsl_hypot
     #endif

The application source files can then use the include command ‘#include <gsl/gsl_math.h>’ to replace each occurrence of ‘hypot()’ by *note gsl_hypot(): 1a. when ‘hypot()’ is not available. This substitution can be made automatically if you use ‘autoconf’, see *note Autoconf Macros: 16.

In most circumstances the best strategy is to use the native versions of these functions when available, and fall back to GSL versions otherwise, since this allows your application to take advantage of any platform-specific optimizations in the system library. This is the strategy used within GSL itself.

File: gsl-ref.info, Node: Alternative optimized functions, Next: Support for different numeric types, Prev: Portability functions, Up: Using the Library

2.8 Alternative optimized functions
===================================

The main implementation of some functions in the library will not be optimal on all architectures. For example, there are several ways to compute a Gaussian random variate and their relative speeds are platform-dependent. In cases like this the library provides alternative implementations of these functions with the same interface. If you write your application using calls to the standard implementation you can select an alternative version later via a preprocessor definition. It is also possible to introduce your own optimized functions this way while retaining portability. The following lines demonstrate the use of a platform-dependent choice of methods for sampling from the Gaussian distribution:

     #ifdef SPARC
     #define gsl_ran_gaussian gsl_ran_gaussian_ratio_method
     #endif

     #ifdef INTEL
     #define gsl_ran_gaussian my_gaussian
     #endif

These lines would be placed in the configuration header file ‘config.h’ of the application, which should then be included by all the source files.
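The following minimal sketch (not from the original manual) shows how an application source file picks up the redirection; the file name and the function ‘sample_noise()’ are illustrative only:

     /* example.c -- illustrative application source file */
     #include "config.h"            /* may redefine gsl_ran_gaussian */
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     double
     sample_noise (const gsl_rng *r, double sigma)
     {
       /* the standard name is used here; on a SPARC build the macro in
          config.h expands it to gsl_ran_gaussian_ratio_method */
       return gsl_ran_gaussian (r, sigma);
     }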
Note that the alternative implementations will not produce bit-for-bit identical results, and in the case of random number distributions will produce an entirely different stream of random variates.

File: gsl-ref.info, Node: Support for different numeric types, Next: Compatibility with C++, Prev: Alternative optimized functions, Up: Using the Library

2.9 Support for different numeric types
=======================================

Many functions in the library are defined for different numeric types. This feature is implemented by varying the name of the function with a type-related modifier—a primitive form of C++ templates. The modifier is inserted into the function name after the initial module prefix. The following table shows the function names defined for all the numeric types of an imaginary module ‘gsl_foo’ with function ‘fn()’:

     gsl_foo_fn               double
     gsl_foo_long_double_fn   long double
     gsl_foo_float_fn         float
     gsl_foo_long_fn          long
     gsl_foo_ulong_fn         unsigned long
     gsl_foo_int_fn           int
     gsl_foo_uint_fn          unsigned int
     gsl_foo_short_fn         short
     gsl_foo_ushort_fn        unsigned short
     gsl_foo_char_fn          char
     gsl_foo_uchar_fn         unsigned char

The normal numeric precision ‘double’ is considered the default and does not require a suffix. For example, the function *note gsl_stats_mean(): 1d. computes the mean of double precision numbers, while the function ‘gsl_stats_int_mean()’ computes the mean of integers.

A corresponding scheme is used for library defined types, such as ‘gsl_vector’ and ‘gsl_matrix’. In this case the modifier is appended to the type name. For example, if a module defines a new type-dependent struct or typedef ‘gsl_foo’ it is modified for other types in the following way:

     gsl_foo               double
     gsl_foo_long_double   long double
     gsl_foo_float         float
     gsl_foo_long          long
     gsl_foo_ulong         unsigned long
     gsl_foo_int           int
     gsl_foo_uint          unsigned int
     gsl_foo_short         short
     gsl_foo_ushort        unsigned short
     gsl_foo_char          char
     gsl_foo_uchar         unsigned char

When a module contains type-dependent definitions the library provides individual header files for each type. The filenames are modified as shown in the table below. For convenience the default header includes the definitions for all the types. To include only the double precision header file, or any other specific type, use its individual filename (shown here for the imaginary module ‘gsl_foo’):

     #include <gsl/gsl_foo.h>               All types
     #include <gsl/gsl_foo_double.h>        double
     #include <gsl/gsl_foo_long_double.h>   long double
     #include <gsl/gsl_foo_float.h>         float
     #include <gsl/gsl_foo_long.h>          long
     #include <gsl/gsl_foo_ulong.h>         unsigned long
     #include <gsl/gsl_foo_int.h>           int
     #include <gsl/gsl_foo_uint.h>          unsigned int
     #include <gsl/gsl_foo_short.h>         short
     #include <gsl/gsl_foo_ushort.h>        unsigned short
     #include <gsl/gsl_foo_char.h>          char
     #include <gsl/gsl_foo_uchar.h>         unsigned char

File: gsl-ref.info, Node: Compatibility with C++, Next: Aliasing of arrays, Prev: Support for different numeric types, Up: Using the Library

2.10 Compatibility with C++
===========================

The library header files automatically define functions to have ‘extern "C"’ linkage when included in C++ programs. This allows the functions to be called directly from C++.

To use C++ exception handling within user-defined functions passed to the library as parameters, the library must be built with the additional ‘CFLAGS’ compilation option ‘-fexceptions’.

File: gsl-ref.info, Node: Aliasing of arrays, Next: Thread-safety, Prev: Compatibility with C++, Up: Using the Library

2.11 Aliasing of arrays
=======================

The library assumes that arrays, vectors and matrices passed as modifiable arguments are not aliased and do not overlap with each other. This removes the need for the library to handle overlapping memory regions as a special case, and allows additional optimizations to be used.
If overlapping memory regions are passed as modifiable arguments then the results of such functions will be undefined. If the arguments will not be modified (for example, if a function prototype declares them as ‘const’ arguments) then overlapping or aliased memory regions can be safely used.  File: gsl-ref.info, Node: Thread-safety, Next: Deprecated Functions, Prev: Aliasing of arrays, Up: Using the Library 2.12 Thread-safety ================== The library can be used in multi-threaded programs. All the functions are thread-safe, in the sense that they do not use static variables. Memory is always associated with objects and not with functions. For functions which use `workspace' objects as temporary storage the workspaces should be allocated on a per-thread basis. For functions which use `table' objects as read-only memory the tables can be used by multiple threads simultaneously. Table arguments are always declared ‘const’ in function prototypes, to indicate that they may be safely accessed by different threads. There are a small number of static global variables which are used to control the overall behavior of the library (e.g. whether to use range-checking, the function to call on fatal error, etc). These variables are set directly by the user, so they should be initialized once at program startup and not modified by different threads.  File: gsl-ref.info, Node: Deprecated Functions, Next: Code Reuse, Prev: Thread-safety, Up: Using the Library 2.13 Deprecated Functions ========================= From time to time, it may be necessary for the definitions of some functions to be altered or removed from the library. In these circumstances the functions will first be declared `deprecated' and then removed from subsequent versions of the library. Functions that are deprecated can be disabled in the current release by setting the preprocessor definition ‘GSL_DISABLE_DEPRECATED’. This allows existing code to be tested for forwards compatibility.  File: gsl-ref.info, Node: Code Reuse, Prev: Deprecated Functions, Up: Using the Library 2.14 Code Reuse =============== Where possible the routines in the library have been written to avoid dependencies between modules and files. This should make it possible to extract individual functions for use in your own applications, without needing to have the whole library installed. You may need to define certain macros such as ‘GSL_ERROR’ and remove some ‘#include’ statements in order to compile the files as standalone units. Reuse of the library code in this way is encouraged, subject to the terms of the GNU General Public License.  File: gsl-ref.info, Node: Error Handling, Next: Mathematical Functions, Prev: Using the Library, Up: Top 3 Error Handling **************** This chapter describes the way that GSL functions report and handle errors. By examining the status information returned by every function you can determine whether it succeeded or failed, and if it failed you can find out what the precise cause of failure was. You can also define your own error handling functions to modify the default behavior of the library. The functions described in this section are declared in the header file ‘gsl_errno.h’. * Menu: * Error Reporting:: * Error Codes:: * Error Handlers:: * Using GSL error reporting in your own functions:: * Examples::  File: gsl-ref.info, Node: Error Reporting, Next: Error Codes, Up: Error Handling 3.1 Error Reporting =================== The library follows the thread-safe error reporting conventions of the POSIX Threads library. 
Functions return a non-zero error code to indicate an error and ‘0’ to indicate success: int status = gsl_function (...) if (status) { /* an error occurred */ ..... /* status value specifies the type of error */ } The routines report an error whenever they cannot perform the task requested of them. For example, a root-finding function would return a non-zero error code if could not converge to the requested accuracy, or exceeded a limit on the number of iterations. Situations like this are a normal occurrence when using any mathematical library and you should check the return status of the functions that you call. Whenever a routine reports an error the return value specifies the type of error. The return value is analogous to the value of the variable ‘errno’ in the C library. The caller can examine the return code and decide what action to take, including ignoring the error if it is not considered serious. In addition to reporting errors by return codes the library also has an error handler function ‘gsl_error()’. This function is called by other library functions when they report an error, just before they return to the caller. The default behavior of the error handler is to print a message and abort the program: gsl: file.c:67: ERROR: invalid argument supplied by user Default GSL error handler invoked. Aborted The purpose of the ‘gsl_error()’ handler is to provide a function where a breakpoint can be set that will catch library errors when running under the debugger. It is not intended for use in production programs, which should handle any errors using the return codes.  File: gsl-ref.info, Node: Error Codes, Next: Error Handlers, Prev: Error Reporting, Up: Error Handling 3.2 Error Codes =============== The error code numbers returned by library functions are defined in the file ‘gsl_errno.h’. They all have the prefix ‘GSL_’ and expand to non-zero constant integer values. Error codes above 1024 are reserved for applications, and are not used by the library. Many of the error codes use the same base name as the corresponding error code in the C library. Here are some of the most common error codes, -- Variable: int GSL_EDOM Domain error; used by mathematical functions when an argument value does not fall into the domain over which the function is defined (like ‘EDOM’ in the C library) -- Variable: int GSL_ERANGE Range error; used by mathematical functions when the result value is not representable because of overflow or underflow (like ‘ERANGE’ in the C library) -- Variable: int GSL_ENOMEM No memory available. The system cannot allocate more virtual memory because its capacity is full (like ‘ENOMEM’ in the C library). This error is reported when a GSL routine encounters problems when trying to allocate memory with ‘malloc()’. -- Variable: int GSL_EINVAL Invalid argument. This is used to indicate various kinds of problems with passing the wrong argument to a library function (like ‘EINVAL’ in the C library). The error codes can be converted into an error message using the function *note gsl_strerror(): 2c. -- Function: const char *gsl_strerror (const int gsl_errno) This function returns a pointer to a string describing the error code *note gsl_errno: 2c. For example: printf ("error: %s\n", gsl_strerror (status)); would print an error message like ‘error: output range error’ for a status value of *note GSL_ERANGE: 29.  
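The short program below (an illustrative sketch, not part of the original manual) shows the error codes and ‘gsl_strerror()’ in use. It calls an error-returning special function, ‘gsl_sf_log_e()’ from the Special Functions chapter, and uses ‘gsl_set_error_handler_off()’ (described in the next section) so that the error code can be examined instead of aborting:

     #include <stdio.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_sf_log.h>

     int
     main (void)
     {
       gsl_sf_result r;
       int status;

       /* disable the abort-on-error default (see the next section) */
       gsl_set_error_handler_off ();

       status = gsl_sf_log_e (-1.0, &r);   /* log(x) requires x > 0 */

       if (status)
         printf ("error: %s\n", gsl_strerror (status));

       return 0;
     }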
File: gsl-ref.info, Node: Error Handlers, Next: Using GSL error reporting in your own functions, Prev: Error Codes, Up: Error Handling 3.3 Error Handlers ================== The default behavior of the GSL error handler is to print a short message and call ‘abort()’. When this default is in use programs will stop with a core-dump whenever a library routine reports an error. This is intended as a fail-safe default for programs which do not check the return status of library routines (we don’t encourage you to write programs this way). If you turn off the default error handler it is your responsibility to check the return values of routines and handle them yourself. You can also customize the error behavior by providing a new error handler. For example, an alternative error handler could log all errors to a file, ignore certain error conditions (such as underflows), or start the debugger and attach it to the current process when an error occurs. All GSL error handlers have the type ‘gsl_error_handler_t’, which is defined in ‘gsl_errno.h’, -- Type: gsl_error_handler_t This is the type of GSL error handler functions. An error handler will be passed four arguments which specify the reason for the error (a string), the name of the source file in which it occurred (also a string), the line number in that file (an integer) and the error number (an integer). The source file and line number are set at compile time using the ‘__FILE__’ and ‘__LINE__’ directives in the preprocessor. An error handler function returns type ‘void’. Error handler functions should be defined like this: void handler (const char * reason, const char * file, int line, int gsl_errno) To request the use of your own error handler you need to call the function *note gsl_set_error_handler(): 2f. which is also declared in ‘gsl_errno.h’, -- Function: *note gsl_error_handler_t: 2e. *gsl_set_error_handler (gsl_error_handler_t *new_handler) This function sets a new error handler, *note new_handler: 2f, for the GSL library routines. The previous handler is returned (so that you can restore it later). Note that the pointer to a user defined error handler function is stored in a static variable, so there can be only one error handler per program. This function should not be used in multi-threaded programs except to set up a program-wide error handler from a master thread. The following example shows how to set and restore a new error handler: /* save original handler, install new handler */ old_handler = gsl_set_error_handler (&my_handler); /* code uses new handler */ ..... /* restore original handler */ gsl_set_error_handler (old_handler); To use the default behavior (‘abort()’ on error) set the error handler to ‘NULL’: old_handler = gsl_set_error_handler (NULL); -- Function: *note gsl_error_handler_t: 2e. *gsl_set_error_handler_off () This function turns off the error handler by defining an error handler which does nothing. This will cause the program to continue after any error, so the return values from any library routines must be checked. This is the recommended behavior for production programs. The previous handler is returned (so that you can restore it later). The error behavior can be changed for specific applications by recompiling the library with a customized definition of the ‘GSL_ERROR’ macro in the file ‘gsl_errno.h’. 
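To illustrate the handler interface described above, here is a minimal sketch (not taken from the library sources) of a handler which logs errors to ‘stderr’ instead of aborting, installed and later restored with ‘gsl_set_error_handler()’:

     #include <stdio.h>
     #include <gsl/gsl_errno.h>

     /* a handler with the gsl_error_handler_t signature shown above;
        it logs the error instead of calling abort() */
     static void
     my_handler (const char *reason, const char *file, int line, int gsl_errno)
     {
       fprintf (stderr, "gsl: %s:%d: %s (error code %d)\n",
                file, line, reason, gsl_errno);
     }

     int
     main (void)
     {
       gsl_error_handler_t *old_handler;

       /* install the custom handler, keeping the previous one */
       old_handler = gsl_set_error_handler (&my_handler);

       /* ... calls to GSL routines; errors are logged, not fatal ... */

       /* restore the original handler */
       gsl_set_error_handler (old_handler);

       return 0;
     }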
File: gsl-ref.info, Node: Using GSL error reporting in your own functions, Next: Examples, Prev: Error Handlers, Up: Error Handling 3.4 Using GSL error reporting in your own functions =================================================== If you are writing numerical functions in a program which also uses GSL code you may find it convenient to adopt the same error reporting conventions as in the library. To report an error you need to call the function ‘gsl_error()’ with a string describing the error and then return an appropriate error code from ‘gsl_errno.h’, or a special value, such as ‘NaN’. For convenience the file ‘gsl_errno.h’ defines two macros which carry out these steps: -- Macro: GSL_ERROR (reason, gsl_errno) This macro reports an error using the GSL conventions and returns a status value of ‘gsl_errno’. It expands to the following code fragment: gsl_error (reason, __FILE__, __LINE__, gsl_errno); return gsl_errno; The macro definition in ‘gsl_errno.h’ actually wraps the code in a ‘do { ... } while (0)’ block to prevent possible parsing problems. Here is an example of how the macro could be used to report that a routine did not achieve a requested tolerance. To report the error the routine needs to return the error code ‘GSL_ETOL’: if (residual > tolerance) { GSL_ERROR("residual exceeds tolerance", GSL_ETOL); } -- Macro: GSL_ERROR_VAL (reason, gsl_errno, value) This macro is the same as ‘GSL_ERROR’ but returns a user-defined value of ‘value’ instead of an error code. It can be used for mathematical functions that return a floating point value. The following example shows how to return a ‘NaN’ at a mathematical singularity using the ‘GSL_ERROR_VAL’ macro: if (x == 0) { GSL_ERROR_VAL("argument lies on singularity", GSL_ERANGE, GSL_NAN); }  File: gsl-ref.info, Node: Examples, Prev: Using GSL error reporting in your own functions, Up: Error Handling 3.5 Examples ============ Here is an example of some code which checks the return value of a function where an error might be reported: #include <stdio.h> #include <gsl/gsl_errno.h> #include <gsl/gsl_fft_complex.h> ... int status; size_t n = 37; gsl_set_error_handler_off(); status = gsl_fft_complex_radix2_forward (data, stride, n); if (status) { if (status == GSL_EINVAL) { fprintf (stderr, "invalid argument, n=%d\n", n); } else { fprintf (stderr, "failed, gsl_errno=%d\n", status); } exit (-1); } ... The function *note gsl_fft_complex_radix2_forward(): 35. only accepts integer lengths which are a power of two. If the variable ‘n’ is not a power of two then the call to the library function will return ‘GSL_EINVAL’, indicating that the length argument is invalid. The function call to *note gsl_set_error_handler_off(): 30. stops the default error handler from aborting the program. The ‘else’ clause catches any other possible errors.  File: gsl-ref.info, Node: Mathematical Functions, Next: Complex Numbers, Prev: Error Handling, Up: Top 4 Mathematical Functions ************************ This chapter describes basic mathematical functions. Some of these functions are present in system libraries, but the alternative versions given here can be used as a substitute when the system functions are not available. The functions and macros described in this chapter are defined in the header file ‘gsl_math.h’.
* Menu: * Mathematical Constants:: * Infinities and Not-a-number:: * Elementary Functions:: * Small integer powers:: * Testing the Sign of Numbers:: * Testing for Odd and Even Numbers:: * Maximum and Minimum functions:: * Approximate Comparison of Floating Point Numbers::  File: gsl-ref.info, Node: Mathematical Constants, Next: Infinities and Not-a-number, Up: Mathematical Functions 4.1 Mathematical Constants ========================== The library ensures that the standard BSD mathematical constants are defined. For reference, here is a list of the constants: ‘M_E’ The base of exponentials, e ‘M_LOG2E’ The base-2 logarithm of e, \log_2 (e) ‘M_LOG10E’ The base-10 logarithm of e, \log_{10} (e) ‘M_SQRT2’ The square root of two, \sqrt 2 ‘M_SQRT1_2’ The square root of one-half, \sqrt{1/2} ‘M_SQRT3’ The square root of three, \sqrt 3 ‘M_PI’ The constant pi, \pi ‘M_PI_2’ Pi divided by two, \pi/2 ‘M_PI_4’ Pi divided by four, \pi/4 ‘M_SQRTPI’ The square root of pi, \sqrt\pi ‘M_2_SQRTPI’ Two divided by the square root of pi, 2/\sqrt\pi ‘M_1_PI’ The reciprocal of pi, 1/\pi ‘M_2_PI’ Twice the reciprocal of pi, 2/\pi ‘M_LN10’ The natural logarithm of ten, \ln(10) ‘M_LN2’ The natural logarithm of two, \ln(2) ‘M_LNPI’ The natural logarithm of pi, \ln(\pi) ‘M_EULER’ Euler’s constant, \gamma  File: gsl-ref.info, Node: Infinities and Not-a-number, Next: Elementary Functions, Prev: Mathematical Constants, Up: Mathematical Functions 4.2 Infinities and Not-a-number =============================== -- Macro: GSL_POSINF This macro contains the IEEE representation of positive infinity, +\infty. It is computed from the expression ‘+1.0/0.0’. -- Macro: GSL_NEGINF This macro contains the IEEE representation of negative infinity, -\infty. It is computed from the expression ‘-1.0/0.0’. -- Macro: GSL_NAN This macro contains the IEEE representation of the Not-a-Number symbol, ‘NaN’. It is computed from the ratio ‘0.0/0.0’. -- Function: int gsl_isnan (const double x) This function returns 1 if *note x: 3d. is not-a-number. -- Function: int gsl_isinf (const double x) This function returns +1 if *note x: 3e. is positive infinity, -1 if *note x: 3e. is negative infinity and 0 otherwise. (1) -- Function: int gsl_finite (const double x) This function returns 1 if *note x: 3f. is a real number, and 0 if it is infinite or not-a-number. ---------- Footnotes ---------- (1) Note that the C99 standard only requires the system ‘isinf()’ function to return a non-zero value, without the sign of the infinity. The implementation in some earlier versions of GSL used the system ‘isinf()’ function and may have this behavior on some platforms. Therefore, it is advisable to test the sign of ‘x’ separately, if needed, rather than relying on the sign of the return value from *note gsl_isinf(): 3e.  File: gsl-ref.info, Node: Elementary Functions, Next: Small integer powers, Prev: Infinities and Not-a-number, Up: Mathematical Functions 4.3 Elementary Functions ======================== The following routines provide portable implementations of functions found in the BSD math library. When native versions are not available the functions described here can be used instead. The substitution can be made automatically if you use ‘autoconf’ to compile your application (see *note Portability functions: 19.). -- Function: double gsl_log1p (const double x) This function computes the value of \log(1+x) in a way that is accurate for small *note x: 41. It provides an alternative to the BSD math function ‘log1p(x)’.
-- Function: double gsl_expm1 (const double x) This function computes the value of \exp(x)-1 in a way that is accurate for small *note x: 42. It provides an alternative to the BSD math function ‘expm1(x)’. -- Function: double gsl_hypot (const double x, const double y) This function computes the value of \sqrt{x^2 + y^2} in a way that avoids overflow. It provides an alternative to the BSD math function ‘hypot(x,y)’. -- Function: double gsl_hypot3 (const double x, const double y, const double z) This function computes the value of \sqrt{x^2 + y^2 + z^2} in a way that avoids overflow. -- Function: double gsl_acosh (const double x) This function computes the value of \arccosh{(x)}. It provides an alternative to the standard math function ‘acosh(x)’. -- Function: double gsl_asinh (const double x) This function computes the value of \arcsinh{(x)}. It provides an alternative to the standard math function ‘asinh(x)’. -- Function: double gsl_atanh (const double x) This function computes the value of \arctanh{(x)}. It provides an alternative to the standard math function ‘atanh(x)’. -- Function: double gsl_ldexp (double x, int e) This function computes the value of x * 2^e. It provides an alternative to the standard math function ‘ldexp(x,e)’. -- Function: double gsl_frexp (double x, int *e) This function splits the number *note x: 48. into its normalized fraction f and exponent e, such that x = f * 2^e and 0.5 <= f < 1. The function returns f and stores the exponent in e. If x is zero, both f and e are set to zero. This function provides an alternative to the standard math function ‘frexp(x, e)’.  File: gsl-ref.info, Node: Small integer powers, Next: Testing the Sign of Numbers, Prev: Elementary Functions, Up: Mathematical Functions 4.4 Small integer powers ======================== A common complaint about the standard C library is its lack of a function for calculating (small) integer powers. GSL provides some simple functions to fill this gap. For reasons of efficiency, these functions do not check for overflow or underflow conditions. -- Function: double gsl_pow_int (double x, int n) -- Function: double gsl_pow_uint (double x, unsigned int n) These routines compute the power x^n for integer *note n: 4b. The power is computed efficiently—for example, x^8 is computed as ((x^2)^2)^2, requiring only 3 multiplications. A version of this function which also computes the numerical error in the result is available as *note gsl_sf_pow_int_e(): 4c. -- Function: double gsl_pow_2 (const double x) -- Function: double gsl_pow_3 (const double x) -- Function: double gsl_pow_4 (const double x) -- Function: double gsl_pow_5 (const double x) -- Function: double gsl_pow_6 (const double x) -- Function: double gsl_pow_7 (const double x) -- Function: double gsl_pow_8 (const double x) -- Function: double gsl_pow_9 (const double x) These functions can be used to compute small integer powers x^2, x^3, etc. efficiently. The functions will be inlined when ‘HAVE_INLINE’ is defined, so that use of these functions should be as efficient as explicitly writing the corresponding product expression: #include <gsl/gsl_math.h> double y = gsl_pow_4 (3.141); /* compute 3.141**4 */  File: gsl-ref.info, Node: Testing the Sign of Numbers, Next: Testing for Odd and Even Numbers, Prev: Small integer powers, Up: Mathematical Functions 4.5 Testing the Sign of Numbers =============================== -- Macro: GSL_SIGN (x) This macro returns the sign of ‘x’. It is defined as ‘((x) >= 0 ? 1 : -1)’.
Note that with this definition the sign of zero is positive (regardless of its IEEE sign bit).  File: gsl-ref.info, Node: Testing for Odd and Even Numbers, Next: Maximum and Minimum functions, Prev: Testing the Sign of Numbers, Up: Mathematical Functions 4.6 Testing for Odd and Even Numbers ==================================== -- Macro: GSL_IS_ODD (n) This macro evaluates to 1 if ‘n’ is odd and 0 if ‘n’ is even. The argument ‘n’ must be of integer type. -- Macro: GSL_IS_EVEN (n) This macro is the opposite of *note GSL_IS_ODD: 58. It evaluates to 1 if ‘n’ is even and 0 if ‘n’ is odd. The argument ‘n’ must be of integer type.  File: gsl-ref.info, Node: Maximum and Minimum functions, Next: Approximate Comparison of Floating Point Numbers, Prev: Testing for Odd and Even Numbers, Up: Mathematical Functions 4.7 Maximum and Minimum functions ================================= Note that the following macros perform multiple evaluations of their arguments, so they should not be used with arguments that have side effects (such as a call to a random number generator). -- Macro: GSL_MAX (a, b) This macro returns the maximum of ‘a’ and ‘b’. It is defined as ‘((a) > (b) ? (a):(b))’. -- Macro: GSL_MIN (a, b) This macro returns the minimum of ‘a’ and ‘b’. It is defined as ‘((a) < (b) ? (a):(b))’. -- Function: extern inline double GSL_MAX_DBL (double a, double b) This function returns the maximum of the double precision numbers *note a: 5d. and *note b: 5d. using an inline function. The use of a function allows for type checking of the arguments as an extra safety feature. On platforms where inline functions are not available the macro *note GSL_MAX: 5b. will be automatically substituted. -- Function: extern inline double GSL_MIN_DBL (double a, double b) This function returns the minimum of the double precision numbers *note a: 5e. and *note b: 5e. using an inline function. The use of a function allows for type checking of the arguments as an extra safety feature. On platforms where inline functions are not available the macro *note GSL_MIN: 5c. will be automatically substituted. -- Function: extern inline int GSL_MAX_INT (int a, int b) -- Function: extern inline int GSL_MIN_INT (int a, int b) These functions return the maximum or minimum of the integers *note a: 60. and *note b: 60. using an inline function. On platforms where inline functions are not available the macros *note GSL_MAX: 5b. or *note GSL_MIN: 5c. will be automatically substituted. -- Function: extern inline long double GSL_MAX_LDBL (long double a, long double b) -- Function: extern inline long double GSL_MIN_LDBL (long double a, long double b) These functions return the maximum or minimum of the long doubles *note a: 62. and *note b: 62. using an inline function. On platforms where inline functions are not available the macros *note GSL_MAX: 5b. or *note GSL_MIN: 5c. will be automatically substituted.  File: gsl-ref.info, Node: Approximate Comparison of Floating Point Numbers, Prev: Maximum and Minimum functions, Up: Mathematical Functions 4.8 Approximate Comparison of Floating Point Numbers ==================================================== It is sometimes useful to be able to compare two floating point numbers approximately, to allow for rounding and truncation errors. The following function implements the approximate floating-point comparison algorithm proposed by D.E. Knuth in Section 4.2.2 of “Seminumerical Algorithms” (3rd edition). 
-- Function: int gsl_fcmp (double x, double y, double epsilon) This function determines whether *note x: 64. and *note y: 64. are approximately equal to a relative accuracy *note epsilon: 64. The relative accuracy is measured using an interval of size 2 \delta, where \delta = 2^k \epsilon and k is the maximum base-2 exponent of x and y as computed by the function ‘frexp()’. If x and y lie within this interval, they are considered approximately equal and the function returns 0. Otherwise if x < y, the function returns -1, or if x > y, the function returns +1. Note that x and y are compared to relative accuracy, so this function is not suitable for testing whether a value is approximately zero. The implementation is based on the package ‘fcmp’ by T.C. Belding.  File: gsl-ref.info, Node: Complex Numbers, Next: Polynomials, Prev: Mathematical Functions, Up: Top 5 Complex Numbers ***************** The functions described in this chapter provide support for complex numbers. The algorithms take care to avoid unnecessary intermediate underflows and overflows, allowing the functions to be evaluated over as much of the complex plane as possible. For multiple-valued functions the branch cuts have been chosen to follow the conventions of Abramowitz and Stegun. The functions return principal values which are the same as those in GNU Calc, which in turn are the same as those in “Common Lisp, The Language (Second Edition)” (1) and the HP-28/48 series of calculators. The complex types are defined in the header file ‘gsl_complex.h’, while the corresponding complex functions and arithmetic operations are defined in ‘gsl_complex_math.h’. * Menu: * Representation of complex numbers:: * Complex number macros:: * Assigning complex numbers:: * Properties of complex numbers:: * Complex arithmetic operators:: * Elementary Complex Functions:: * Complex Trigonometric Functions:: * Inverse Complex Trigonometric Functions:: * Complex Hyperbolic Functions:: * Inverse Complex Hyperbolic Functions:: * References and Further Reading:: ---------- Footnotes ---------- (1) Note that the first edition uses different definitions.  File: gsl-ref.info, Node: Representation of complex numbers, Next: Complex number macros, Up: Complex Numbers 5.1 Representation of complex numbers ===================================== Complex numbers are represented using the type ‘gsl_complex’. The default interface defines ‘gsl_complex’ as: typedef struct { double dat[2]; } gsl_complex; The real and imaginary part are stored in contiguous elements of a two element array. This eliminates any padding between the real and imaginary parts, ‘dat[0]’ and ‘dat[1]’, allowing the struct to be mapped correctly onto packed complex arrays. If a C compiler is available which supports the C11 standard, and the ‘<complex.h>’ header file is included `prior' to ‘gsl_complex.h’, then ‘gsl_complex’ will be defined to be the native C complex type: typedef double complex gsl_complex This allows users to use ‘gsl_complex’ in ordinary operations such as: gsl_complex x = 2 + 5 * I; gsl_complex y = x + (3 - 4*I); Important: Native C support for complex numbers was introduced in the C99 standard, and additional functionality was added in C11. When ‘<complex.h>’ is included in a user’s program prior to ‘gsl_complex.h’, GSL uses the new C11 functionality to define the *note GSL_REAL: 68. and *note GSL_IMAG: 69. macros. It does not appear possible to properly define these macros using the C99 standard, and so using a C99 compiler will not define ‘gsl_complex’ to the native complex type.
Some compilers, such as the gcc 4.8 series, implement only a portion of the C11 standard and so they may fail to correctly compile GSL code when a user tries to turn on native complex functionality. A workaround for this issue is to either remove ‘<complex.h>’ from the include list, or add ‘-DGSL_COMPLEX_LEGACY’ to the compiler flags, which will use the older struct-based definition of ‘gsl_complex’.  File: gsl-ref.info, Node: Complex number macros, Next: Assigning complex numbers, Prev: Representation of complex numbers, Up: Complex Numbers 5.2 Complex number macros ========================= The following C macros offer convenient ways to manipulate complex numbers. -- Macro: GSL_REAL (z) -- Macro: GSL_IMAG (z) These macros return a memory location (lvalue) corresponding to the real and imaginary parts respectively of the complex number ‘z’. This allows users to perform operations like: gsl_complex x, y; GSL_REAL(x) = 4; GSL_IMAG(x) = 2; GSL_REAL(y) = GSL_REAL(x); GSL_IMAG(y) = GSL_REAL(x); In other words, these macros can both read and write to the real and imaginary parts of a complex variable. -- Macro: GSL_SET_COMPLEX (zp, x, y) This macro uses the Cartesian components (‘x’, ‘y’) to set the real and imaginary parts of the complex number pointed to by ‘zp’. For example: GSL_SET_COMPLEX(&z, 3, 4) sets z to be 3 + 4i.  File: gsl-ref.info, Node: Assigning complex numbers, Next: Properties of complex numbers, Prev: Complex number macros, Up: Complex Numbers 5.3 Assigning complex numbers ============================= -- Function: gsl_complex gsl_complex_rect (double x, double y) This function uses the rectangular Cartesian components (x,y) to return the complex number z = x + i y. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: gsl_complex gsl_complex_polar (double r, double theta) This function returns the complex number z = r \exp(i \theta) = r (\cos(\theta) + i \sin(\theta)) from the polar representation (*note r: 6e, *note theta: 6e.).  File: gsl-ref.info, Node: Properties of complex numbers, Next: Complex arithmetic operators, Prev: Assigning complex numbers, Up: Complex Numbers 5.4 Properties of complex numbers ================================= -- Function: double gsl_complex_arg (gsl_complex z) This function returns the argument of the complex number *note z: 70, \arg(z), where -\pi < \arg(z) <= \pi. -- Function: double gsl_complex_abs (gsl_complex z) This function returns the magnitude of the complex number *note z: 71, |z|. -- Function: double gsl_complex_abs2 (gsl_complex z) This function returns the squared magnitude of the complex number *note z: 72, |z|^2. -- Function: double gsl_complex_logabs (gsl_complex z) This function returns the natural logarithm of the magnitude of the complex number *note z: 73, \log|z|. It allows an accurate evaluation of \log|z| when |z| is close to one. The direct evaluation of ‘log(gsl_complex_abs(z))’ would lead to a loss of precision in this case.  File: gsl-ref.info, Node: Complex arithmetic operators, Next: Elementary Complex Functions, Prev: Properties of complex numbers, Up: Complex Numbers 5.5 Complex arithmetic operators ================================ -- Function: gsl_complex gsl_complex_add (gsl_complex a, gsl_complex b) This function returns the sum of the complex numbers *note a: 75. and *note b: 75, z=a+b. -- Function: gsl_complex gsl_complex_sub (gsl_complex a, gsl_complex b) This function returns the difference of the complex numbers *note a: 76. and *note b: 76, z=a-b.
-- Function: gsl_complex gsl_complex_mul (gsl_complex a, gsl_complex b) This function returns the product of the complex numbers *note a: 77. and *note b: 77, z=ab. -- Function: gsl_complex gsl_complex_div (gsl_complex a, gsl_complex b) This function returns the quotient of the complex numbers *note a: 78. and *note b: 78, z=a/b. -- Function: gsl_complex gsl_complex_add_real (gsl_complex a, double x) This function returns the sum of the complex number *note a: 79. and the real number *note x: 79, z=a+x. -- Function: gsl_complex gsl_complex_sub_real (gsl_complex a, double x) This function returns the difference of the complex number *note a: 7a. and the real number *note x: 7a, z=a-x. -- Function: gsl_complex gsl_complex_mul_real (gsl_complex a, double x) This function returns the product of the complex number *note a: 7b. and the real number *note x: 7b, z=ax. -- Function: gsl_complex gsl_complex_div_real (gsl_complex a, double x) This function returns the quotient of the complex number *note a: 7c. and the real number *note x: 7c, z=a/x. -- Function: gsl_complex gsl_complex_add_imag (gsl_complex a, double y) This function returns the sum of the complex number *note a: 7d. and the imaginary number iy, z=a+iy. -- Function: gsl_complex gsl_complex_sub_imag (gsl_complex a, double y) This function returns the difference of the complex number *note a: 7e. and the imaginary number iy, z=a-iy. -- Function: gsl_complex gsl_complex_mul_imag (gsl_complex a, double y) This function returns the product of the complex number *note a: 7f. and the imaginary number iy, z=a*(iy). -- Function: gsl_complex gsl_complex_div_imag (gsl_complex a, double y) This function returns the quotient of the complex number *note a: 80. and the imaginary number iy, z=a/(iy). -- Function: gsl_complex gsl_complex_conjugate (gsl_complex z) This function returns the complex conjugate of the complex number *note z: 81, z^* = x - i y. -- Function: gsl_complex gsl_complex_inverse (gsl_complex z) This function returns the inverse, or reciprocal, of the complex number *note z: 82, 1/z = (x - i y)/(x^2 + y^2). -- Function: gsl_complex gsl_complex_negative (gsl_complex z) This function returns the negative of the complex number *note z: 83, -z = (-x) + i(-y).  File: gsl-ref.info, Node: Elementary Complex Functions, Next: Complex Trigonometric Functions, Prev: Complex arithmetic operators, Up: Complex Numbers 5.6 Elementary Complex Functions ================================ -- Function: gsl_complex gsl_complex_sqrt (gsl_complex z) This function returns the square root of the complex number *note z: 85, \sqrt z. The branch cut is the negative real axis. The result always lies in the right half of the complex plane. -- Function: gsl_complex gsl_complex_sqrt_real (double x) This function returns the complex square root of the real number *note x: 86, where *note x: 86. may be negative. -- Function: gsl_complex gsl_complex_pow (gsl_complex z, gsl_complex a) The function returns the complex number *note z: 87. raised to the complex power *note a: 87, z^a. This is computed as \exp(\log(z)*a) using complex logarithms and complex exponentials. -- Function: gsl_complex gsl_complex_pow_real (gsl_complex z, double x) This function returns the complex number *note z: 88. raised to the real power *note x: 88, z^x. -- Function: gsl_complex gsl_complex_exp (gsl_complex z) This function returns the complex exponential of the complex number *note z: 89, \exp(z). 
-- Function: gsl_complex gsl_complex_log (gsl_complex z) This function returns the complex natural logarithm (base e) of the complex number *note z: 8a, \log(z). The branch cut is the negative real axis. -- Function: gsl_complex gsl_complex_log10 (gsl_complex z) This function returns the complex base-10 logarithm of the complex number *note z: 8b, \log_{10} (z). -- Function: gsl_complex gsl_complex_log_b (gsl_complex z, gsl_complex b) This function returns the complex base-*note b: 8c. logarithm of the complex number *note z: 8c, \log_b(z). This quantity is computed as the ratio \log(z)/\log(b).  File: gsl-ref.info, Node: Complex Trigonometric Functions, Next: Inverse Complex Trigonometric Functions, Prev: Elementary Complex Functions, Up: Complex Numbers 5.7 Complex Trigonometric Functions =================================== -- Function: gsl_complex gsl_complex_sin (gsl_complex z) This function returns the complex sine of the complex number *note z: 8e, \sin(z) = (\exp(iz) - \exp(-iz))/(2i). -- Function: gsl_complex gsl_complex_cos (gsl_complex z) This function returns the complex cosine of the complex number *note z: 8f, \cos(z) = (\exp(iz) + \exp(-iz))/2. -- Function: gsl_complex gsl_complex_tan (gsl_complex z) This function returns the complex tangent of the complex number *note z: 90, \tan(z) = \sin(z)/\cos(z). -- Function: gsl_complex gsl_complex_sec (gsl_complex z) This function returns the complex secant of the complex number *note z: 91, \sec(z) = 1/\cos(z). -- Function: gsl_complex gsl_complex_csc (gsl_complex z) This function returns the complex cosecant of the complex number *note z: 92, \csc(z) = 1/\sin(z). -- Function: gsl_complex gsl_complex_cot (gsl_complex z) This function returns the complex cotangent of the complex number *note z: 93, \cot(z) = 1/\tan(z).  File: gsl-ref.info, Node: Inverse Complex Trigonometric Functions, Next: Complex Hyperbolic Functions, Prev: Complex Trigonometric Functions, Up: Complex Numbers 5.8 Inverse Complex Trigonometric Functions =========================================== -- Function: gsl_complex gsl_complex_arcsin (gsl_complex z) This function returns the complex arcsine of the complex number *note z: 95, \arcsin(z). The branch cuts are on the real axis, less than -1 and greater than 1. -- Function: gsl_complex gsl_complex_arcsin_real (double z) This function returns the complex arcsine of the real number *note z: 96, \arcsin(z). For z between -1 and 1, the function returns a real value in the range [-\pi/2,\pi/2]. For z less than -1 the result has a real part of -\pi/2 and a positive imaginary part. For z greater than 1 the result has a real part of \pi/2 and a negative imaginary part. -- Function: gsl_complex gsl_complex_arccos (gsl_complex z) This function returns the complex arccosine of the complex number *note z: 97, \arccos(z). The branch cuts are on the real axis, less than -1 and greater than 1. -- Function: gsl_complex gsl_complex_arccos_real (double z) This function returns the complex arccosine of the real number *note z: 98, \arccos(z). For z between -1 and 1, the function returns a real value in the range [0,\pi]. For z less than -1 the result has a real part of \pi and a negative imaginary part. For z greater than 1 the result is purely imaginary and positive. -- Function: gsl_complex gsl_complex_arctan (gsl_complex z) This function returns the complex arctangent of the complex number *note z: 99, \arctan(z). The branch cuts are on the imaginary axis, below -i and above i. 
-- Function: gsl_complex gsl_complex_arcsec (gsl_complex z) This function returns the complex arcsecant of the complex number *note z: 9a, \arcsec(z) = \arccos(1/z). -- Function: gsl_complex gsl_complex_arcsec_real (double z) This function returns the complex arcsecant of the real number *note z: 9b, \arcsec(z) = \arccos(1/z). -- Function: gsl_complex gsl_complex_arccsc (gsl_complex z) This function returns the complex arccosecant of the complex number *note z: 9c, \arccsc(z) = \arcsin(1/z). -- Function: gsl_complex gsl_complex_arccsc_real (double z) This function returns the complex arccosecant of the real number *note z: 9d, \arccsc(z) = \arcsin(1/z). -- Function: gsl_complex gsl_complex_arccot (gsl_complex z) This function returns the complex arccotangent of the complex number *note z: 9e, \arccot(z) = \arctan(1/z).  File: gsl-ref.info, Node: Complex Hyperbolic Functions, Next: Inverse Complex Hyperbolic Functions, Prev: Inverse Complex Trigonometric Functions, Up: Complex Numbers 5.9 Complex Hyperbolic Functions ================================ -- Function: gsl_complex gsl_complex_sinh (gsl_complex z) This function returns the complex hyperbolic sine of the complex number *note z: a0, \sinh(z) = (\exp(z) - \exp(-z))/2. -- Function: gsl_complex gsl_complex_cosh (gsl_complex z) This function returns the complex hyperbolic cosine of the complex number *note z: a1, \cosh(z) = (\exp(z) + \exp(-z))/2. -- Function: gsl_complex gsl_complex_tanh (gsl_complex z) This function returns the complex hyperbolic tangent of the complex number *note z: a2, \tanh(z) = \sinh(z)/\cosh(z). -- Function: gsl_complex gsl_complex_sech (gsl_complex z) This function returns the complex hyperbolic secant of the complex number *note z: a3, \sech(z) = 1/\cosh(z). -- Function: gsl_complex gsl_complex_csch (gsl_complex z) This function returns the complex hyperbolic cosecant of the complex number *note z: a4, \csch(z) = 1/\sinh(z). -- Function: gsl_complex gsl_complex_coth (gsl_complex z) This function returns the complex hyperbolic cotangent of the complex number *note z: a5, \coth(z) = 1/\tanh(z).  File: gsl-ref.info, Node: Inverse Complex Hyperbolic Functions, Next: References and Further Reading, Prev: Complex Hyperbolic Functions, Up: Complex Numbers 5.10 Inverse Complex Hyperbolic Functions ========================================= -- Function: gsl_complex gsl_complex_arcsinh (gsl_complex z) This function returns the complex hyperbolic arcsine of the complex number *note z: a7, \arcsinh(z). The branch cuts are on the imaginary axis, below -i and above i. -- Function: gsl_complex gsl_complex_arccosh (gsl_complex z) This function returns the complex hyperbolic arccosine of the complex number *note z: a8, \arccosh(z). The branch cut is on the real axis, less than 1. Note that in this case we use the negative square root in formula 4.6.21 of Abramowitz & Stegun giving \arccosh(z)=\log(z-\sqrt{z^2-1}). -- Function: gsl_complex gsl_complex_arccosh_real (double z) This function returns the complex hyperbolic arccosine of the real number *note z: a9, \arccosh(z). -- Function: gsl_complex gsl_complex_arctanh (gsl_complex z) This function returns the complex hyperbolic arctangent of the complex number *note z: aa, \arctanh(z). The branch cuts are on the real axis, less than -1 and greater than 1. -- Function: gsl_complex gsl_complex_arctanh_real (double z) This function returns the complex hyperbolic arctangent of the real number *note z: ab, \arctanh(z). 
-- Function: gsl_complex gsl_complex_arcsech (gsl_complex z) This function returns the complex hyperbolic arcsecant of the complex number *note z: ac, \arcsech(z) = \arccosh(1/z). -- Function: gsl_complex gsl_complex_arccsch (gsl_complex z) This function returns the complex hyperbolic arccosecant of the complex number *note z: ad, \arccsch(z) = \arcsinh(1/z). -- Function: gsl_complex gsl_complex_arccoth (gsl_complex z) This function returns the complex hyperbolic arccotangent of the complex number *note z: ae, \arccoth(z) = \arctanh(1/z).  File: gsl-ref.info, Node: References and Further Reading, Prev: Inverse Complex Hyperbolic Functions, Up: Complex Numbers 5.11 References and Further Reading =================================== The implementations of the elementary and trigonometric functions are based on the following papers, * T. E. Hull, Thomas F. Fairgrieve, Ping Tak Peter Tang, “Implementing Complex Elementary Functions Using Exception Handling”, ACM Transactions on Mathematical Software, Volume 20 (1994), pp 215–244, Corrigenda, p553 * T. E. Hull, Thomas F. Fairgrieve, Ping Tak Peter Tang, “Implementing the complex arcsin and arccosine functions using exception handling”, ACM Transactions on Mathematical Software, Volume 23 (1997) pp 299–335 The general formulas and details of branch cuts can be found in the following books, * Abramowitz and Stegun, Handbook of Mathematical Functions, “Circular Functions in Terms of Real and Imaginary Parts”, Formulas 4.3.55–58, “Inverse Circular Functions in Terms of Real and Imaginary Parts”, Formulas 4.4.37–39, “Hyperbolic Functions in Terms of Real and Imaginary Parts”, Formulas 4.5.49–52, “Inverse Hyperbolic Functions—relation to Inverse Circular Functions”, Formulas 4.6.14–19. * Dave Gillespie, Calc Manual, Free Software Foundation, ISBN 1-882114-18-3  File: gsl-ref.info, Node: Polynomials, Next: Special Functions, Prev: Complex Numbers, Up: Top 6 Polynomials ************* This chapter describes functions for evaluating and solving polynomials. There are routines for finding real and complex roots of quadratic and cubic equations using analytic methods. An iterative polynomial solver is also available for finding the roots of general polynomials with real coefficients (of any order). The functions are declared in the header file ‘gsl_poly.h’. * Menu: * Polynomial Evaluation:: * Divided Difference Representation of Polynomials:: * Quadratic Equations:: * Cubic Equations:: * General Polynomial Equations:: * Examples: Examples<2>. * References and Further Reading: References and Further Reading<2>.  File: gsl-ref.info, Node: Polynomial Evaluation, Next: Divided Difference Representation of Polynomials, Up: Polynomials 6.1 Polynomial Evaluation ========================= The functions described here evaluate the polynomial P(x) = c[0] + c[1] x + c[2] x^2 + … + c[len-1] x^{len-1} using Horner’s method for stability. Inline versions of these functions are used when ‘HAVE_INLINE’ is defined. -- Function: double gsl_poly_eval (const double c[], const int len, const double x) This function evaluates a polynomial with real coefficients for the real variable *note x: b3. -- Function: gsl_complex gsl_poly_complex_eval (const double c[], const int len, const gsl_complex z) This function evaluates a polynomial with real coefficients for the complex variable *note z: b4. 
-- Function: gsl_complex gsl_complex_poly_complex_eval (const gsl_complex c[], const int len, const gsl_complex z) This function evaluates a polynomial with complex coefficients for the complex variable *note z: b5. -- Function: int gsl_poly_eval_derivs (const double c[], const size_t lenc, const double x, double res[], const size_t lenres) This function evaluates a polynomial and its derivatives storing the results in the array *note res: b6. of size *note lenres: b6. The output array contains the values of d^k P(x)/d x^k for the specified value of *note x: b6. starting with k = 0.  File: gsl-ref.info, Node: Divided Difference Representation of Polynomials, Next: Quadratic Equations, Prev: Polynomial Evaluation, Up: Polynomials 6.2 Divided Difference Representation of Polynomials ==================================================== The functions described here manipulate polynomials stored in Newton’s divided-difference representation. The use of divided-differences is described in Abramowitz & Stegun sections 25.1.4 and 25.2.26, and Burden and Faires, chapter 3, and discussed briefly below. Given a function f(x), an nth degree interpolating polynomial P_{n}(x) can be constructed which agrees with f at n+1 distinct points x_0,x_1,...,x_{n}. This polynomial can be written in a form known as Newton’s divided-difference representation P_{n}(x) = f(x_0) + sum_{k=1}^n [x_0,x_1,…,x_k] (x-x_0)(x-x_1) … (x-x_{k-1}) where the divided differences [x_0,x_1,...,x_k] are defined in section 25.1.4 of Abramowitz and Stegun. Additionally, it is possible to construct an interpolating polynomial of degree 2n+1 which also matches the first derivatives of f at the points x_0,x_1,...,x_n. This is called the Hermite interpolating polynomial and is defined as H_{2n+1}(x) = f(z_0) + sum_{k=1}^{2n+1} [z_0,z_1,…,z_k] (x-z_0)(x-z_1) … (x-z_{k-1}) where the elements of z = \{x_0,x_0,x_1,x_1,...,x_n,x_n\} are defined by z_{2k} = z_{2k+1} = x_k. The divided-differences [z_0,z_1,...,z_k] are discussed in Burden and Faires, section 3.4. -- Function: int gsl_poly_dd_init (double dd[], const double xa[], const double ya[], size_t size) This function computes a divided-difference representation of the interpolating polynomial for the points (x, y) stored in the arrays *note xa: b8. and *note ya: b8. of length *note size: b8. On output the divided-differences of (*note xa: b8, *note ya: b8.) are stored in the array *note dd: b8, also of length *note size: b8. Using the notation above, dd[k] = [x_0,x_1,...,x_k]. -- Function: double gsl_poly_dd_eval (const double dd[], const double xa[], const size_t size, const double x) This function evaluates the polynomial stored in divided-difference form in the arrays *note dd: b9. and *note xa: b9. of length *note size: b9. at the point *note x: b9. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: int gsl_poly_dd_taylor (double c[], double xp, const double dd[], const double xa[], size_t size, double w[]) This function converts the divided-difference representation of a polynomial to a Taylor expansion. The divided-difference representation is supplied in the arrays *note dd: ba. and *note xa: ba. of length *note size: ba. On output the Taylor coefficients of the polynomial expanded about the point *note xp: ba. are stored in the array *note c: ba. also of length *note size: ba. A workspace of length *note size: ba. must be provided in the array *note w: ba. 
-- Function: int gsl_poly_dd_hermite_init (double dd[], double za[], const double xa[], const double ya[], const double dya[], const size_t size) This function computes a divided-difference representation of the interpolating Hermite polynomial for the points (x,y) stored in the arrays *note xa: bb. and *note ya: bb. of length *note size: bb. Hermite interpolation constructs polynomials which also match first derivatives dy/dx which are provided in the array *note dya: bb. also of length *note size: bb. The first derivatives can be incorporated into the usual divided-difference algorithm by forming a new dataset z = \{x_0,x_0,x_1,x_1,...\}, which is stored in the array *note za: bb. of length 2**note size: bb. on output. On output the divided-differences of the Hermite representation are stored in the array *note dd: bb, also of length 2**note size: bb. Using the notation above, dd[k] = [z_0,z_1,...,z_k]. The resulting Hermite polynomial can be evaluated by calling *note gsl_poly_dd_eval(): b9. and using *note za: bb. for the input argument *note xa: bb.  File: gsl-ref.info, Node: Quadratic Equations, Next: Cubic Equations, Prev: Divided Difference Representation of Polynomials, Up: Polynomials 6.3 Quadratic Equations ======================= -- Function: int gsl_poly_solve_quadratic (double a, double b, double c, double *x0, double *x1) This function finds the real roots of the quadratic equation, a x^2 + b x + c = 0 The number of real roots (either zero, one or two) is returned, and their locations are stored in *note x0: bd. and *note x1: bd. If no real roots are found then *note x0: bd. and *note x1: bd. are not modified. If one real root is found (i.e. if a=0) then it is stored in *note x0: bd. When two real roots are found they are stored in *note x0: bd. and *note x1: bd. in ascending order. The case of coincident roots is not considered special. For example, (x-1)^2=0 will have two roots, which happen to have exactly equal values. The number of roots found depends on the sign of the discriminant b^2 - 4 a c. This will be subject to rounding and cancellation errors when computed in double precision, and will also be subject to errors if the coefficients of the polynomial are inexact. These errors may cause a discrete change in the number of roots. However, for polynomials with small integer coefficients the discriminant can always be computed exactly. -- Function: int gsl_poly_complex_solve_quadratic (double a, double b, double c, gsl_complex *z0, gsl_complex *z1) This function finds the complex roots of the quadratic equation, a z^2 + b z + c = 0 The number of complex roots is returned (either one or two) and the locations of the roots are stored in *note z0: be. and *note z1: be. The roots are returned in ascending order, sorted first by their real components and then by their imaginary components. If only one real root is found (i.e. if a=0) then it is stored in *note z0: be.  File: gsl-ref.info, Node: Cubic Equations, Next: General Polynomial Equations, Prev: Quadratic Equations, Up: Polynomials 6.4 Cubic Equations =================== -- Function: int gsl_poly_solve_cubic (double a, double b, double c, double *x0, double *x1, double *x2) This function finds the real roots of the cubic equation, x^3 + a x^2 + b x + c = 0 with a leading coefficient of unity. The number of real roots (either one or three) is returned, and their locations are stored in *note x0: c0, *note x1: c0. and *note x2: c0. If one real root is found then only *note x0: c0. is modified.
When three real roots are found they are stored in *note x0: c0, *note x1: c0. and *note x2: c0. in ascending order. The case of coincident roots is not considered special. For example, the equation (x-1)^3=0 will have three roots with exactly equal values. As in the quadratic case, finite precision may cause equal or closely-spaced real roots to move off the real axis into the complex plane, leading to a discrete change in the number of real roots. -- Function: int gsl_poly_complex_solve_cubic (double a, double b, double c, gsl_complex *z0, gsl_complex *z1, gsl_complex *z2) This function finds the complex roots of the cubic equation, z^3 + a z^2 + b z + c = 0 The number of complex roots is returned (always three) and the locations of the roots are stored in *note z0: c1, *note z1: c1. and *note z2: c1. The roots are returned in ascending order, sorted first by their real components and then by their imaginary components.  File: gsl-ref.info, Node: General Polynomial Equations, Next: Examples<2>, Prev: Cubic Equations, Up: Polynomials 6.5 General Polynomial Equations ================================ The roots of polynomial equations cannot be found analytically beyond the special cases of the quadratic, cubic and quartic equation. The algorithm described in this section uses an iterative method to find the approximate locations of roots of higher order polynomials. -- Type: gsl_poly_complex_workspace This workspace contains parameters used for finding roots of general polynomials -- Function: *note gsl_poly_complex_workspace: c3. *gsl_poly_complex_workspace_alloc (size_t n) This function allocates space for a *note gsl_poly_complex_workspace: c3. struct and a workspace suitable for solving a polynomial with *note n: c4. coefficients using the routine *note gsl_poly_complex_solve(): c5. The function returns a pointer to the newly allocated *note gsl_poly_complex_workspace: c3. if no errors were detected, and a null pointer in the case of error. -- Function: void gsl_poly_complex_workspace_free (gsl_poly_complex_workspace *w) This function frees all the memory associated with the workspace *note w: c6. -- Function: int gsl_poly_complex_solve (const double *a, size_t n, gsl_poly_complex_workspace *w, gsl_complex_packed_ptr z) This function computes the roots of the general polynomial P(x) = a_0 + a_1 x + a_2 x^2 + … + a_{n-1} x^{n-1} using balanced-QR reduction of the companion matrix. The parameter *note n: c5. specifies the length of the coefficient array. The coefficient of the highest order term must be non-zero. The function requires a workspace *note w: c5. of the appropriate size. The n-1 roots are returned in the packed complex array *note z: c5. of length 2(n-1), alternating real and imaginary parts. The function returns ‘GSL_SUCCESS’ if all the roots are found. If the QR reduction does not converge, the error handler is invoked with an error code of ‘GSL_EFAILED’. Note that due to finite precision, roots of higher multiplicity are returned as a cluster of simple roots with reduced accuracy. The solution of polynomials with higher-order roots requires specialized algorithms that take the multiplicity structure into account (see e.g. Z. Zeng, Algorithm 835, ACM Transactions on Mathematical Software, Volume 30, Issue 2 (2004), pp 218–236).  
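Before turning to the general solver, here is a minimal sketch (not one of the manual's own examples) of the analytic quadratic solver ‘gsl_poly_solve_quadratic()’ described earlier in this chapter:

     #include <stdio.h>
     #include <gsl/gsl_poly.h>

     int
     main (void)
     {
       /* solve x^2 - 3x + 2 = 0, whose roots are 1 and 2 */
       double x0, x1;
       int n = gsl_poly_solve_quadratic (1.0, -3.0, 2.0, &x0, &x1);

       if (n == 2)
         printf ("roots: %g %g\n", x0, x1);  /* roots are returned in ascending order */
       else if (n == 1)
         printf ("root: %g\n", x0);
       else
         printf ("no real roots\n");

       return 0;
     }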
File: gsl-ref.info, Node: Examples<2>, Next: References and Further Reading<2>, Prev: General Polynomial Equations, Up: Polynomials 6.6 Examples ============ To demonstrate the use of the general polynomial solver we will take the polynomial P(x) = x^5 - 1 which has these roots: 1, e^{2*pi i / 5}, e^{4*pi i / 5}, e^{6*pi i / 5}, e^{8*pi i / 5} The following program will find these roots. #include <stdio.h> #include <gsl/gsl_poly.h> int main (void) { int i; /* coefficients of P(x) = -1 + x^5 */ double a[6] = { -1, 0, 0, 0, 0, 1 }; double z[10]; gsl_poly_complex_workspace * w = gsl_poly_complex_workspace_alloc (6); gsl_poly_complex_solve (a, 6, w, z); gsl_poly_complex_workspace_free (w); for (i = 0; i < 5; i++) { printf ("z%d = %+.18f %+.18f\n", i, z[2*i], z[2*i+1]); } return 0; } The output of the program is z0 = -0.809016994374947673 +0.587785252292473359 z1 = -0.809016994374947673 -0.587785252292473359 z2 = +0.309016994374947507 +0.951056516295152976 z3 = +0.309016994374947507 -0.951056516295152976 z4 = +0.999999999999999889 +0.000000000000000000 which agrees with the analytic result, z_n = \exp(2 \pi n i/5).  File: gsl-ref.info, Node: References and Further Reading<2>, Prev: Examples<2>, Up: Polynomials 6.7 References and Further Reading ================================== The balanced-QR method and its error analysis are described in the following papers, * R.S. Martin, G. Peters and J.H. Wilkinson, “The QR Algorithm for Real Hessenberg Matrices”, Numerische Mathematik, 14 (1970), 219–231. * B.N. Parlett and C. Reinsch, “Balancing a Matrix for Calculation of Eigenvalues and Eigenvectors”, Numerische Mathematik, 13 (1969), 293–304. * A. Edelman and H. Murakami, “Polynomial roots from companion matrix eigenvalues”, Mathematics of Computation, Vol. 64, No. 210 (1995), 763–776. The formulas for divided differences are given in the following texts, * Abramowitz and Stegun, Handbook of Mathematical Functions, Sections 25.1.4 and 25.2.26. * R. L. Burden and J. D. Faires, Numerical Analysis, 9th edition, ISBN 0-538-73351-9, 2011.  File: gsl-ref.info, Node: Special Functions, Next: Vectors and Matrices, Prev: Polynomials, Up: Top 7 Special Functions ******************* This chapter describes the GSL special function library. The library includes routines for calculating the values of Airy functions, Bessel functions, Clausen functions, Coulomb wave functions, Coupling coefficients, the Dawson function, Debye functions, Dilogarithms, Elliptic integrals, Jacobi elliptic functions, Error functions, Exponential integrals, Fermi-Dirac functions, Gamma functions, Gegenbauer functions, Hermite polynomials and functions, Hypergeometric functions, Laguerre functions, Legendre functions and Spherical Harmonics, the Psi (Digamma) Function, Synchrotron functions, Transport functions, Trigonometric functions and Zeta functions. Each routine also computes an estimate of the numerical error in the calculated value of the function. The functions in this chapter are declared in individual header files, such as ‘gsl_sf_airy.h’, ‘gsl_sf_bessel.h’, etc. The complete set of header files can be included using the file ‘gsl_sf.h’. * Menu: * Usage:: * The gsl_sf_result struct:: * Modes:: * Airy Functions and Derivatives:: * Bessel Functions:: * Clausen Functions:: * Coulomb Functions:: * Coupling Coefficients:: * Dawson Function:: * Debye Functions:: * Dilogarithm:: * Elementary Operations:: * Elliptic Integrals:: * Elliptic Functions (Jacobi): Elliptic Functions Jacobi.
* Error Functions:: * Exponential Functions:: * Exponential Integrals:: * Fermi-Dirac Function:: * Gamma and Beta Functions:: * Gegenbauer Functions:: * Hermite Polynomials and Functions:: * Hypergeometric Functions:: * Laguerre Functions:: * Lambert W Functions:: * Legendre Functions and Spherical Harmonics:: * Logarithm and Related Functions:: * Mathieu Functions:: * Power Function:: * Psi (Digamma) Function: Psi Digamma Function. * Synchrotron Functions:: * Transport Functions:: * Trigonometric Functions:: * Zeta Functions:: * Examples: Examples<3>. * References and Further Reading: References and Further Reading<3>.  File: gsl-ref.info, Node: Usage, Next: The gsl_sf_result struct, Up: Special Functions 7.1 Usage ========= The special functions are available in two calling conventions, a `natural form' which returns the numerical value of the function and an `error-handling form' which returns an error code. The two types of function provide alternative ways of accessing the same underlying code. The `natural form' returns only the value of the function and can be used directly in mathematical expressions. For example, the following function call will compute the value of the Bessel function J_0(x): double y = gsl_sf_bessel_J0 (x); There is no way to access an error code or to estimate the error using this method. To allow access to this information the alternative error-handling form stores the value and error in a modifiable argument: gsl_sf_result result; int status = gsl_sf_bessel_J0_e (x, &result); The error-handling functions have the suffix ‘_e’. The returned status value indicates error conditions such as overflow, underflow or loss of precision. If there are no errors the error-handling functions return ‘GSL_SUCCESS’.  File: gsl-ref.info, Node: The gsl_sf_result struct, Next: Modes, Prev: Usage, Up: Special Functions 7.2 The gsl_sf_result struct ============================ The error-handling form of the special functions always calculates an error estimate along with the value of the result. Therefore, structures are provided for amalgamating a value and error estimate. These structures are declared in the header file ‘gsl_sf_result.h’. The following struct contains value and error fields. -- Type: gsl_sf_result typedef struct { double val; double err; } gsl_sf_result; The field ‘val’ contains the value and the field ‘err’ contains an estimate of the absolute error in the value. In some cases, an overflow or underflow can be detected and handled by a function. In this case, it may be possible to return a scaling exponent as well as an error/value pair in order to save the result from exceeding the dynamic range of the built-in types. The following struct contains value and error fields as well as an exponent field such that the actual result is obtained as ‘result * 10^(e10)’. -- Type: gsl_sf_result_e10 typedef struct { double val; double err; int e10; } gsl_sf_result_e10;  File: gsl-ref.info, Node: Modes, Next: Airy Functions and Derivatives, Prev: The gsl_sf_result struct, Up: Special Functions 7.3 Modes ========= The goal of the library is to achieve double precision accuracy wherever possible. However the cost of evaluating some special functions to double precision can be significant, particularly where very high order terms are required. In these cases a ‘mode’ argument, of type *note gsl_mode_t: d0. allows the accuracy of the function to be reduced in order to improve performance.
The following precision levels are available for the mode argument, -- Type: gsl_mode_t -- Macro: GSL_PREC_DOUBLE Double-precision, a relative accuracy of approximately 2 * 10^{-16}. -- Macro: GSL_PREC_SINGLE Single-precision, a relative accuracy of approximately 10^{-7}. -- Macro: GSL_PREC_APPROX Approximate values, a relative accuracy of approximately 5 * 10^{-4}. The approximate mode provides the fastest evaluation at the lowest accuracy.  File: gsl-ref.info, Node: Airy Functions and Derivatives, Next: Bessel Functions, Prev: Modes, Up: Special Functions 7.4 Airy Functions and Derivatives ================================== The Airy functions Ai(x) and Bi(x) are defined by the integral representations, Ai(x) = (1/pi) int_0^infty cos((1/3) t^3 + xt) dt Bi(x) = (1/pi) int_0^infty (e^(-(1/3) t^3 + xt) + sin((1/3) t^3 + xt)) dt For further information see Abramowitz & Stegun, Section 10.4. The Airy functions are defined in the header file ‘gsl_sf_airy.h’. * Menu: * Airy Functions:: * Derivatives of Airy Functions:: * Zeros of Airy Functions:: * Zeros of Derivatives of Airy Functions::  File: gsl-ref.info, Node: Airy Functions, Next: Derivatives of Airy Functions, Up: Airy Functions and Derivatives 7.4.1 Airy Functions -------------------- -- Function: double gsl_sf_airy_Ai (double x, gsl_mode_t mode) -- Function: int gsl_sf_airy_Ai_e (double x, gsl_mode_t mode, gsl_sf_result *result) These routines compute the Airy function Ai(x) with an accuracy specified by *note mode: d7. -- Function: double gsl_sf_airy_Bi (double x, gsl_mode_t mode) -- Function: int gsl_sf_airy_Bi_e (double x, gsl_mode_t mode, gsl_sf_result *result) These routines compute the Airy function Bi(x) with an accuracy specified by *note mode: d9. -- Function: double gsl_sf_airy_Ai_scaled (double x, gsl_mode_t mode) -- Function: int gsl_sf_airy_Ai_scaled_e (double x, gsl_mode_t mode, gsl_sf_result *result) These routines compute a scaled version of the Airy function S_A(x) Ai(x). For x > 0 the scaling factor S_A(x) is \exp(+(2/3) x^{3/2}), and is 1 for x < 0. -- Function: double gsl_sf_airy_Bi_scaled (double x, gsl_mode_t mode) -- Function: int gsl_sf_airy_Bi_scaled_e (double x, gsl_mode_t mode, gsl_sf_result *result) These routines compute a scaled version of the Airy function S_B(x) Bi(x). For x > 0 the scaling factor S_B(x) is exp(-(2/3) x^{3/2}), and is 1 for x < 0.  File: gsl-ref.info, Node: Derivatives of Airy Functions, Next: Zeros of Airy Functions, Prev: Airy Functions, Up: Airy Functions and Derivatives 7.4.2 Derivatives of Airy Functions ----------------------------------- -- Function: double gsl_sf_airy_Ai_deriv (double x, gsl_mode_t mode) -- Function: int gsl_sf_airy_Ai_deriv_e (double x, gsl_mode_t mode, gsl_sf_result *result) These routines compute the Airy function derivative Ai'(x) with an accuracy specified by *note mode: e0. -- Function: double gsl_sf_airy_Bi_deriv (double x, gsl_mode_t mode) -- Function: int gsl_sf_airy_Bi_deriv_e (double x, gsl_mode_t mode, gsl_sf_result *result) These routines compute the Airy function derivative Bi'(x) with an accuracy specified by *note mode: e2. -- Function: double gsl_sf_airy_Ai_deriv_scaled (double x, gsl_mode_t mode) -- Function: int gsl_sf_airy_Ai_deriv_scaled_e (double x, gsl_mode_t mode, gsl_sf_result *result) These routines compute the scaled Airy function derivative S_A(x) Ai'(x). For x > 0 the scaling factor S_A(x) is \exp(+(2/3) x^{3/2}), and is 1 for x < 0. 
-- Function: double gsl_sf_airy_Bi_deriv_scaled (double x, gsl_mode_t mode) -- Function: int gsl_sf_airy_Bi_deriv_scaled_e (double x, gsl_mode_t mode, gsl_sf_result *result) These routines compute the scaled Airy function derivative S_B(x) Bi'(x). For x > 0 the scaling factor S_B(x) is exp(-(2/3) x^{3/2}), and is 1 for x < 0.  File: gsl-ref.info, Node: Zeros of Airy Functions, Next: Zeros of Derivatives of Airy Functions, Prev: Derivatives of Airy Functions, Up: Airy Functions and Derivatives 7.4.3 Zeros of Airy Functions ----------------------------- -- Function: double gsl_sf_airy_zero_Ai (unsigned int s) -- Function: int gsl_sf_airy_zero_Ai_e (unsigned int s, gsl_sf_result *result) These routines compute the location of the *note s: e9.-th zero of the Airy function Ai(x). -- Function: double gsl_sf_airy_zero_Bi (unsigned int s) -- Function: int gsl_sf_airy_zero_Bi_e (unsigned int s, gsl_sf_result *result) These routines compute the location of the *note s: eb.-th zero of the Airy function Bi(x).  File: gsl-ref.info, Node: Zeros of Derivatives of Airy Functions, Prev: Zeros of Airy Functions, Up: Airy Functions and Derivatives 7.4.4 Zeros of Derivatives of Airy Functions -------------------------------------------- -- Function: double gsl_sf_airy_zero_Ai_deriv (unsigned int s) -- Function: int gsl_sf_airy_zero_Ai_deriv_e (unsigned int s, gsl_sf_result *result) These routines compute the location of the *note s: ee.-th zero of the Airy function derivative Ai'(x). -- Function: double gsl_sf_airy_zero_Bi_deriv (unsigned int s) -- Function: int gsl_sf_airy_zero_Bi_deriv_e (unsigned int s, gsl_sf_result *result) These routines compute the location of the *note s: f0.-th zero of the Airy function derivative Bi'(x).  File: gsl-ref.info, Node: Bessel Functions, Next: Clausen Functions, Prev: Airy Functions and Derivatives, Up: Special Functions 7.5 Bessel Functions ==================== The routines described in this section compute the Cylindrical Bessel functions J_n(x), Y_n(x), Modified cylindrical Bessel functions I_n(x), K_n(x), Spherical Bessel functions j_l(x), y_l(x), and Modified Spherical Bessel functions i_l(x), k_l(x). For more information see Abramowitz & Stegun, Chapters 9 and 10. The Bessel functions are defined in the header file ‘gsl_sf_bessel.h’. * Menu: * Regular Cylindrical Bessel Functions:: * Irregular Cylindrical Bessel Functions:: * Regular Modified Cylindrical Bessel Functions:: * Irregular Modified Cylindrical Bessel Functions:: * Regular Spherical Bessel Functions:: * Irregular Spherical Bessel Functions:: * Regular Modified Spherical Bessel Functions:: * Irregular Modified Spherical Bessel Functions:: * Regular Bessel Function—Fractional Order:: * Irregular Bessel Functions—Fractional Order:: * Regular Modified Bessel Functions—Fractional Order:: * Irregular Modified Bessel Functions—Fractional Order:: * Zeros of Regular Bessel Functions::  File: gsl-ref.info, Node: Regular Cylindrical Bessel Functions, Next: Irregular Cylindrical Bessel Functions, Up: Bessel Functions 7.5.1 Regular Cylindrical Bessel Functions ------------------------------------------ -- Function: double gsl_sf_bessel_J0 (double x) -- Function: int gsl_sf_bessel_J0_e (double x, gsl_sf_result *result) These routines compute the regular cylindrical Bessel function of zeroth order, J_0(x). -- Function: double gsl_sf_bessel_J1 (double x) -- Function: int gsl_sf_bessel_J1_e (double x, gsl_sf_result *result) These routines compute the regular cylindrical Bessel function of first order, J_1(x). 
-- Function: double gsl_sf_bessel_Jn (int n, double x) -- Function: int gsl_sf_bessel_Jn_e (int n, double x, gsl_sf_result *result) These routines compute the regular cylindrical Bessel function of order *note n: f8, J_n(x). -- Function: int gsl_sf_bessel_Jn_array (int nmin, int nmax, double x, double result_array[]) This routine computes the values of the regular cylindrical Bessel functions J_n(x) for n from *note nmin: f9. to *note nmax: f9. inclusive, storing the results in the array *note result_array: f9. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values.  File: gsl-ref.info, Node: Irregular Cylindrical Bessel Functions, Next: Regular Modified Cylindrical Bessel Functions, Prev: Regular Cylindrical Bessel Functions, Up: Bessel Functions 7.5.2 Irregular Cylindrical Bessel Functions -------------------------------------------- -- Function: double gsl_sf_bessel_Y0 (double x) -- Function: int gsl_sf_bessel_Y0_e (double x, gsl_sf_result *result) These routines compute the irregular cylindrical Bessel function of zeroth order, Y_0(x), for x>0. -- Function: double gsl_sf_bessel_Y1 (double x) -- Function: int gsl_sf_bessel_Y1_e (double x, gsl_sf_result *result) These routines compute the irregular cylindrical Bessel function of first order, Y_1(x), for x>0. -- Function: double gsl_sf_bessel_Yn (int n, double x) -- Function: int gsl_sf_bessel_Yn_e (int n, double x, gsl_sf_result *result) These routines compute the irregular cylindrical Bessel function of order *note n: 100, Y_n(x), for x>0. -- Function: int gsl_sf_bessel_Yn_array (int nmin, int nmax, double x, double result_array[]) This routine computes the values of the irregular cylindrical Bessel functions Y_n(x) for n from *note nmin: 101. to *note nmax: 101. inclusive, storing the results in the array *note result_array: 101. The domain of the function is x>0. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values.  File: gsl-ref.info, Node: Regular Modified Cylindrical Bessel Functions, Next: Irregular Modified Cylindrical Bessel Functions, Prev: Irregular Cylindrical Bessel Functions, Up: Bessel Functions 7.5.3 Regular Modified Cylindrical Bessel Functions --------------------------------------------------- -- Function: double gsl_sf_bessel_I0 (double x) -- Function: int gsl_sf_bessel_I0_e (double x, gsl_sf_result *result) These routines compute the regular modified cylindrical Bessel function of zeroth order, I_0(x). -- Function: double gsl_sf_bessel_I1 (double x) -- Function: int gsl_sf_bessel_I1_e (double x, gsl_sf_result *result) These routines compute the regular modified cylindrical Bessel function of first order, I_1(x). -- Function: double gsl_sf_bessel_In (int n, double x) -- Function: int gsl_sf_bessel_In_e (int n, double x, gsl_sf_result *result) These routines compute the regular modified cylindrical Bessel function of order *note n: 108, I_n(x). -- Function: int gsl_sf_bessel_In_array (int nmin, int nmax, double x, double result_array[]) This routine computes the values of the regular modified cylindrical Bessel functions I_n(x) for n from *note nmin: 109. to *note nmax: 109. inclusive, storing the results in the array *note result_array: 109. The start of the range *note nmin: 109. must be positive or zero. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values. 
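For example, a minimal fragment (with an arbitrary argument x) that fills a caller-supplied buffer with I_0(x) through I_4(x) in a single call to ‘gsl_sf_bessel_In_array’; the buffer length must equal nmax - nmin + 1.

     #include <stdio.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_sf_bessel.h>

     int
     main (void)
     {
       double In[5];   /* holds I_0(x) .. I_4(x) */
       double x = 2.0;
       int status = gsl_sf_bessel_In_array (0, 4, x, In);

       if (status == GSL_SUCCESS)
         {
           int n;
           for (n = 0; n <= 4; n++)
             printf ("I_%d(%g) = %.18g\n", n, x, In[n]);
         }
       return status;
     }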
-- Function: double gsl_sf_bessel_I0_scaled (double x) -- Function: int gsl_sf_bessel_I0_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled regular modified cylindrical Bessel function of zeroth order \exp(-|x|) I_0(x). -- Function: double gsl_sf_bessel_I1_scaled (double x) -- Function: int gsl_sf_bessel_I1_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled regular modified cylindrical Bessel function of first order \exp(-|x|) I_1(x). -- Function: double gsl_sf_bessel_In_scaled (int n, double x) -- Function: int gsl_sf_bessel_In_scaled_e (int n, double x, gsl_sf_result *result) These routines compute the scaled regular modified cylindrical Bessel function of order *note n: 10f, \exp(-|x|) I_n(x). -- Function: int gsl_sf_bessel_In_scaled_array (int nmin, int nmax, double x, double result_array[]) This routine computes the values of the scaled regular modified cylindrical Bessel functions \exp(-|x|) I_n(x) for n from *note nmin: 110. to *note nmax: 110. inclusive, storing the results in the array *note result_array: 110. The start of the range *note nmin: 110. must be positive or zero. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values.  File: gsl-ref.info, Node: Irregular Modified Cylindrical Bessel Functions, Next: Regular Spherical Bessel Functions, Prev: Regular Modified Cylindrical Bessel Functions, Up: Bessel Functions 7.5.4 Irregular Modified Cylindrical Bessel Functions ----------------------------------------------------- -- Function: double gsl_sf_bessel_K0 (double x) -- Function: int gsl_sf_bessel_K0_e (double x, gsl_sf_result *result) These routines compute the irregular modified cylindrical Bessel function of zeroth order, K_0(x), for x > 0. -- Function: double gsl_sf_bessel_K1 (double x) -- Function: int gsl_sf_bessel_K1_e (double x, gsl_sf_result *result) These routines compute the irregular modified cylindrical Bessel function of first order, K_1(x), for x > 0. -- Function: double gsl_sf_bessel_Kn (int n, double x) -- Function: int gsl_sf_bessel_Kn_e (int n, double x, gsl_sf_result *result) These routines compute the irregular modified cylindrical Bessel function of order *note n: 117, K_n(x), for x > 0. -- Function: int gsl_sf_bessel_Kn_array (int nmin, int nmax, double x, double result_array[]) This routine computes the values of the irregular modified cylindrical Bessel functions K_n(x) for n from *note nmin: 118. to *note nmax: 118. inclusive, storing the results in the array *note result_array: 118. The start of the range *note nmin: 118. must be positive or zero. The domain of the function is x>0. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values. -- Function: double gsl_sf_bessel_K0_scaled (double x) -- Function: int gsl_sf_bessel_K0_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled irregular modified cylindrical Bessel function of zeroth order \exp(x) K_0(x) for x>0. -- Function: double gsl_sf_bessel_K1_scaled (double x) -- Function: int gsl_sf_bessel_K1_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled irregular modified cylindrical Bessel function of first order \exp(x) K_1(x) for x>0.
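The scaled forms are useful when the unscaled functions would overflow or underflow the range of ‘double’.  A minimal sketch (the test point x = 800 is arbitrary) comparing ‘gsl_sf_bessel_K0_e’, which is expected to report an underflow at this argument, with ‘gsl_sf_bessel_K0_scaled_e’, whose value remains representable and, for large x, is roughly \sqrt{\pi/(2x)}:

     #include <stdio.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_sf_bessel.h>

     int
     main (void)
     {
       double x = 800.0;
       gsl_sf_result plain, scaled;
       int status;

       /* return error codes instead of invoking the default handler */
       gsl_set_error_handler_off ();

       status = gsl_sf_bessel_K0_e (x, &plain);
       printf ("K_0(%g): status = %s, val = %g\n",
               x, gsl_strerror (status), plain.val);

       status = gsl_sf_bessel_K0_scaled_e (x, &scaled);
       printf ("exp(%g) K_0(%g): status = %s, val = %g\n",
               x, x, gsl_strerror (status), scaled.val);
       return 0;
     }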
-- Function: double gsl_sf_bessel_Kn_scaled (int n, double x) -- Function: int gsl_sf_bessel_Kn_scaled_e (int n, double x, gsl_sf_result *result) These routines compute the scaled irregular modified cylindrical Bessel function of order *note n: 11e, \exp(x) K_n(x), for x>0. -- Function: int gsl_sf_bessel_Kn_scaled_array (int nmin, int nmax, double x, double result_array[]) This routine computes the values of the scaled irregular modified cylindrical Bessel functions \exp(x) K_n(x) for n from *note nmin: 11f. to *note nmax: 11f. inclusive, storing the results in the array *note result_array: 11f. The start of the range *note nmin: 11f. must be positive or zero. The domain of the function is x>0. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values.  File: gsl-ref.info, Node: Regular Spherical Bessel Functions, Next: Irregular Spherical Bessel Functions, Prev: Irregular Modified Cylindrical Bessel Functions, Up: Bessel Functions 7.5.5 Regular Spherical Bessel Functions ---------------------------------------- -- Function: double gsl_sf_bessel_j0 (double x) -- Function: int gsl_sf_bessel_j0_e (double x, gsl_sf_result *result) These routines compute the regular spherical Bessel function of zeroth order, j_0(x) = \sin(x)/x. -- Function: double gsl_sf_bessel_j1 (double x) -- Function: int gsl_sf_bessel_j1_e (double x, gsl_sf_result *result) These routines compute the regular spherical Bessel function of first order, j_1(x) = (\sin(x)/x - \cos(x))/x. -- Function: double gsl_sf_bessel_j2 (double x) -- Function: int gsl_sf_bessel_j2_e (double x, gsl_sf_result *result) These routines compute the regular spherical Bessel function of second order, j_2(x) = ((3/x^2 - 1)\sin(x) - 3\cos(x)/x)/x. -- Function: double gsl_sf_bessel_jl (int l, double x) -- Function: int gsl_sf_bessel_jl_e (int l, double x, gsl_sf_result *result) These routines compute the regular spherical Bessel function of order *note l: 128, j_l(x), for l \geq 0 and x \geq 0. -- Function: int gsl_sf_bessel_jl_array (int lmax, double x, double result_array[]) This routine computes the values of the regular spherical Bessel functions j_l(x) for l from 0 to *note lmax: 129. inclusive for lmax \geq 0 and x \geq 0, storing the results in the array *note result_array: 129. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values. -- Function: int gsl_sf_bessel_jl_steed_array (int lmax, double x, double *result_array) This routine uses Steed’s method to compute the values of the regular spherical Bessel functions j_l(x) for l from 0 to *note lmax: 12a. inclusive for lmax \geq 0 and x \geq 0, storing the results in the array *note result_array: 12a. The Steed/Barnett algorithm is described in Comp. Phys. Comm. 21, 297 (1981). Steed’s method is more stable than the recurrence used in the other functions but is also slower.  File: gsl-ref.info, Node: Irregular Spherical Bessel Functions, Next: Regular Modified Spherical Bessel Functions, Prev: Regular Spherical Bessel Functions, Up: Bessel Functions 7.5.6 Irregular Spherical Bessel Functions ------------------------------------------ -- Function: double gsl_sf_bessel_y0 (double x) -- Function: int gsl_sf_bessel_y0_e (double x, gsl_sf_result *result) These routines compute the irregular spherical Bessel function of zeroth order, y_0(x) = -\cos(x)/x.
-- Function: double gsl_sf_bessel_y1 (double x) -- Function: int gsl_sf_bessel_y1_e (double x, gsl_sf_result *result) These routines compute the irregular spherical Bessel function of first order, y_1(x) = -(\cos(x)/x + \sin(x))/x. -- Function: double gsl_sf_bessel_y2 (double x) -- Function: int gsl_sf_bessel_y2_e (double x, gsl_sf_result *result) These routines compute the irregular spherical Bessel function of second order, y_2(x) = (-3/x^3 + 1/x)\cos(x) - (3/x^2)\sin(x). -- Function: double gsl_sf_bessel_yl (int l, double x) -- Function: int gsl_sf_bessel_yl_e (int l, double x, gsl_sf_result *result) These routines compute the irregular spherical Bessel function of order *note l: 133, y_l(x), for l \geq 0. -- Function: int gsl_sf_bessel_yl_array (int lmax, double x, double result_array[]) This routine computes the values of the irregular spherical Bessel functions y_l(x) for l from 0 to *note lmax: 134. inclusive for lmax \geq 0, storing the results in the array *note result_array: 134. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values.  File: gsl-ref.info, Node: Regular Modified Spherical Bessel Functions, Next: Irregular Modified Spherical Bessel Functions, Prev: Irregular Spherical Bessel Functions, Up: Bessel Functions 7.5.7 Regular Modified Spherical Bessel Functions ------------------------------------------------- The regular modified spherical Bessel functions i_l(x) are related to the modified Bessel functions of fractional order, i_l(x) = \sqrt{\pi/(2x)} I_{l+1/2}(x) -- Function: double gsl_sf_bessel_i0_scaled (double x) -- Function: int gsl_sf_bessel_i0_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled regular modified spherical Bessel function of zeroth order, \exp(-|x|) i_0(x). -- Function: double gsl_sf_bessel_i1_scaled (double x) -- Function: int gsl_sf_bessel_i1_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled regular modified spherical Bessel function of first order, \exp(-|x|) i_1(x). -- Function: double gsl_sf_bessel_i2_scaled (double x) -- Function: int gsl_sf_bessel_i2_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled regular modified spherical Bessel function of second order, \exp(-|x|) i_2(x) -- Function: double gsl_sf_bessel_il_scaled (int l, double x) -- Function: int gsl_sf_bessel_il_scaled_e (int l, double x, gsl_sf_result *result) These routines compute the scaled regular modified spherical Bessel function of order *note l: 13d, \exp(-|x|) i_l(x) -- Function: int gsl_sf_bessel_il_scaled_array (int lmax, double x, double result_array[]) This routine computes the values of the scaled regular modified spherical Bessel functions \exp(-|x|) i_l(x) for l from 0 to *note lmax: 13e. inclusive for lmax \geq 0, storing the results in the array *note result_array: 13e. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values.  File: gsl-ref.info, Node: Irregular Modified Spherical Bessel Functions, Next: Regular Bessel Function—Fractional Order, Prev: Regular Modified Spherical Bessel Functions, Up: Bessel Functions 7.5.8 Irregular Modified Spherical Bessel Functions --------------------------------------------------- The irregular modified spherical Bessel functions k_l(x) are related to the irregular modified Bessel functions of fractional order, k_l(x) = \sqrt{\pi/(2x)} K_{l+1/2}(x). 
-- Function: double gsl_sf_bessel_k0_scaled (double x) -- Function: int gsl_sf_bessel_k0_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled irregular modified spherical Bessel function of zeroth order, \exp(x) k_0(x), for x>0. -- Function: double gsl_sf_bessel_k1_scaled (double x) -- Function: int gsl_sf_bessel_k1_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled irregular modified spherical Bessel function of first order, \exp(x) k_1(x), for x>0. -- Function: double gsl_sf_bessel_k2_scaled (double x) -- Function: int gsl_sf_bessel_k2_scaled_e (double x, gsl_sf_result *result) These routines compute the scaled irregular modified spherical Bessel function of second order, \exp(x) k_2(x), for x>0. -- Function: double gsl_sf_bessel_kl_scaled (int l, double x) -- Function: int gsl_sf_bessel_kl_scaled_e (int l, double x, gsl_sf_result *result) These routines compute the scaled irregular modified spherical Bessel function of order *note l: 147, \exp(x) k_l(x), for x>0. -- Function: int gsl_sf_bessel_kl_scaled_array (int lmax, double x, double result_array[]) This routine computes the values of the scaled irregular modified spherical Bessel functions \exp(x) k_l(x) for l from 0 to *note lmax: 148. inclusive for lmax \geq 0 and x>0, storing the results in the array *note result_array: 148. The values are computed using recurrence relations for efficiency, and therefore may differ slightly from the exact values.  File: gsl-ref.info, Node: Regular Bessel Function—Fractional Order, Next: Irregular Bessel Functions—Fractional Order, Prev: Irregular Modified Spherical Bessel Functions, Up: Bessel Functions 7.5.9 Regular Bessel Function—Fractional Order ---------------------------------------------- -- Function: double gsl_sf_bessel_Jnu (double nu, double x) -- Function: int gsl_sf_bessel_Jnu_e (double nu, double x, gsl_sf_result *result) These routines compute the regular cylindrical Bessel function of fractional order \nu, J_\nu(x). -- Function: int gsl_sf_bessel_sequence_Jnu_e (double nu, gsl_mode_t mode, size_t size, double v[]) This function computes the regular cylindrical Bessel function of fractional order \nu, J_\nu(x), evaluated at a series of x values. The array *note v: 14c. of length *note size: 14c. contains the x values. They are assumed to be strictly ordered and positive. The array is over-written with the values of J_\nu(x_i).  File: gsl-ref.info, Node: Irregular Bessel Functions—Fractional Order, Next: Regular Modified Bessel Functions—Fractional Order, Prev: Regular Bessel Function—Fractional Order, Up: Bessel Functions 7.5.10 Irregular Bessel Functions—Fractional Order -------------------------------------------------- -- Function: double gsl_sf_bessel_Ynu (double nu, double x) -- Function: int gsl_sf_bessel_Ynu_e (double nu, double x, gsl_sf_result *result) These routines compute the irregular cylindrical Bessel function of fractional order \nu, Y_\nu(x).  File: gsl-ref.info, Node: Regular Modified Bessel Functions—Fractional Order, Next: Irregular Modified Bessel Functions—Fractional Order, Prev: Irregular Bessel Functions—Fractional Order, Up: Bessel Functions 7.5.11 Regular Modified Bessel Functions—Fractional Order --------------------------------------------------------- -- Function: double gsl_sf_bessel_Inu (double nu, double x) -- Function: int gsl_sf_bessel_Inu_e (double nu, double x, gsl_sf_result *result) These routines compute the regular modified Bessel function of fractional order \nu, I_\nu(x) for x>0, \nu>0. 
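Half-integer orders have elementary closed forms, for example I_{1/2}(x) = \sqrt{2/(\pi x)} \sinh(x), which gives a simple consistency check of these fractional-order routines; the following sketch uses an arbitrary test point.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_sf_bessel.h>

     int
     main (void)
     {
       double x = 2.5;
       double direct = gsl_sf_bessel_Inu (0.5, x);
       double closed = sqrt (2.0 / (M_PI * x)) * sinh (x);

       /* the two values should agree to roughly double precision */
       printf ("I_{1/2}(%g): gsl = %.15g, closed form = %.15g\n",
               x, direct, closed);
       return 0;
     }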
-- Function: double gsl_sf_bessel_Inu_scaled (double nu, double x) -- Function: int gsl_sf_bessel_Inu_scaled_e (double nu, double x, gsl_sf_result *result) These routines compute the scaled regular modified Bessel function of fractional order \nu, \exp(-|x|)I_\nu(x) for x>0, \nu>0.  File: gsl-ref.info, Node: Irregular Modified Bessel Functions—Fractional Order, Next: Zeros of Regular Bessel Functions, Prev: Regular Modified Bessel Functions—Fractional Order, Up: Bessel Functions 7.5.12 Irregular Modified Bessel Functions—Fractional Order ----------------------------------------------------------- -- Function: double gsl_sf_bessel_Knu (double nu, double x) -- Function: int gsl_sf_bessel_Knu_e (double nu, double x, gsl_sf_result *result) These routines compute the irregular modified Bessel function of fractional order \nu, K_\nu(x) for x>0, \nu>0. -- Function: double gsl_sf_bessel_lnKnu (double nu, double x) -- Function: int gsl_sf_bessel_lnKnu_e (double nu, double x, gsl_sf_result *result) These routines compute the logarithm of the irregular modified Bessel function of fractional order \nu, \ln(K_\nu(x)) for x>0, \nu>0. -- Function: double gsl_sf_bessel_Knu_scaled (double nu, double x) -- Function: int gsl_sf_bessel_Knu_scaled_e (double nu, double x, gsl_sf_result *result) These routines compute the scaled irregular modified Bessel function of fractional order \nu, \exp(+|x|) K_\nu(x) for x>0, \nu>0.  File: gsl-ref.info, Node: Zeros of Regular Bessel Functions, Prev: Irregular Modified Bessel Functions—Fractional Order, Up: Bessel Functions 7.5.13 Zeros of Regular Bessel Functions ---------------------------------------- -- Function: double gsl_sf_bessel_zero_J0 (unsigned int s) -- Function: int gsl_sf_bessel_zero_J0_e (unsigned int s, gsl_sf_result *result) These routines compute the location of the *note s: 15e.-th positive zero of the Bessel function J_0(x). -- Function: double gsl_sf_bessel_zero_J1 (unsigned int s) -- Function: int gsl_sf_bessel_zero_J1_e (unsigned int s, gsl_sf_result *result) These routines compute the location of the *note s: 160.-th positive zero of the Bessel function J_1(x). -- Function: double gsl_sf_bessel_zero_Jnu (double nu, unsigned int s) -- Function: int gsl_sf_bessel_zero_Jnu_e (double nu, unsigned int s, gsl_sf_result *result) These routines compute the location of the *note s: 162.-th positive zero of the Bessel function J_\nu(x). The current implementation does not support negative values of *note nu: 162.  File: gsl-ref.info, Node: Clausen Functions, Next: Coulomb Functions, Prev: Bessel Functions, Up: Special Functions 7.6 Clausen Functions ===================== The Clausen function is defined by the following integral, Cl_2(x) = - int_0^x dt log( 2 sin(t/2) ) It is related to the *note dilogarithm: 164. by Cl_2(\theta) = \Im Li_2(\exp(i\theta)). The Clausen functions are declared in the header file ‘gsl_sf_clausen.h’. -- Function: double gsl_sf_clausen (double x) -- Function: int gsl_sf_clausen_e (double x, gsl_sf_result *result) These routines compute the Clausen integral Cl_2(x).  File: gsl-ref.info, Node: Coulomb Functions, Next: Coupling Coefficients, Prev: Clausen Functions, Up: Special Functions 7.7 Coulomb Functions ===================== The prototypes of the Coulomb functions are declared in the header file ‘gsl_sf_coulomb.h’. Both bound state and scattering solutions are available. 
* Menu: * Normalized Hydrogenic Bound States:: * Coulomb Wave Functions:: * Coulomb Wave Function Normalization Constant::  File: gsl-ref.info, Node: Normalized Hydrogenic Bound States, Next: Coulomb Wave Functions, Up: Coulomb Functions 7.7.1 Normalized Hydrogenic Bound States ---------------------------------------- -- Function: double gsl_sf_hydrogenicR_1 (double Z, double r) -- Function: int gsl_sf_hydrogenicR_1_e (double Z, double r, gsl_sf_result *result) These routines compute the lowest-order normalized hydrogenic bound state radial wavefunction R_1 := 2Z \sqrt{Z} \exp(-Z r). -- Function: double gsl_sf_hydrogenicR (int n, int l, double Z, double r) -- Function: int gsl_sf_hydrogenicR_e (int n, int l, double Z, double r, gsl_sf_result *result) These routines compute the *note n: 16c.-th normalized hydrogenic bound state radial wavefunction, R_n := 2 (Z^{3/2}/n^2) sqrt{(n-l-1)!/(n+l)!} exp(-Z r/n) (2Zr/n)^l L^{2l+1}_{n-l-1}(2Zr/n). where L^a_b(x) is the *note generalized Laguerre polynomial: 16d. The normalization is chosen such that the wavefunction \psi is given by \psi(n,l,r) = R_n Y_{lm}.  File: gsl-ref.info, Node: Coulomb Wave Functions, Next: Coulomb Wave Function Normalization Constant, Prev: Normalized Hydrogenic Bound States, Up: Coulomb Functions 7.7.2 Coulomb Wave Functions ---------------------------- The Coulomb wave functions F_L(\eta,x), G_L(\eta,x) are described in Abramowitz & Stegun, Chapter 14. Because there can be a large dynamic range of values for these functions, overflows are handled gracefully. If an overflow occurs, ‘GSL_EOVRFLW’ is signalled and exponent(s) are returned through the modifiable parameters ‘exp_F’, ‘exp_G’. The full solution can be reconstructed from the following relations, F_L(eta,x) = fc[k_L] * exp(exp_F) G_L(eta,x) = gc[k_L] * exp(exp_G) F_L’(eta,x) = fcp[k_L] * exp(exp_F) G_L’(eta,x) = gcp[k_L] * exp(exp_G) -- Function: int gsl_sf_coulomb_wave_FG_e (double eta, double x, double L_F, int k, gsl_sf_result *F, gsl_sf_result *Fp, gsl_sf_result *G, gsl_sf_result *Gp, double *exp_F, double *exp_G) This function computes the Coulomb wave functions F_L(\eta,x), G_{L-k}(\eta,x) and their derivatives F'_L(\eta,x), G'_{L-k}(\eta,x) with respect to x. The parameters are restricted to L, L-k > -1/2, x > 0 and integer k. Note that L itself is not restricted to being an integer. The results are stored in the parameters F, G for the function values and *note Fp: 16f, *note Gp: 16f. for the derivative values. If an overflow occurs, ‘GSL_EOVRFLW’ is returned and scaling exponents are stored in the modifiable parameters *note exp_F: 16f, *note exp_G: 16f. -- Function: int gsl_sf_coulomb_wave_F_array (double L_min, int kmax, double eta, double x, double fc_array[], double *F_exponent) This function computes the Coulomb wave function F_L(\eta,x) for L = Lmin \dots Lmin + kmax, storing the results in *note fc_array: 170. In the case of overflow the exponent is stored in *note F_exponent: 170. -- Function: int gsl_sf_coulomb_wave_FG_array (double L_min, int kmax, double eta, double x, double fc_array[], double gc_array[], double *F_exponent, double *G_exponent) This function computes the functions F_L(\eta,x), G_L(\eta,x) for L = Lmin \dots Lmin + kmax storing the results in *note fc_array: 171. and *note gc_array: 171. In the case of overflow the exponents are stored in *note F_exponent: 171. and *note G_exponent: 171. 
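To illustrate the overflow convention described above, the following sketch (with arbitrary values of eta, x and L) calls ‘gsl_sf_coulomb_wave_FG_e’ and, if ‘GSL_EOVRFLW’ is reported, reconstructs the full solution from the returned exponents.

     #include <stdio.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_sf_coulomb.h>

     int
     main (void)
     {
       double eta = 1.0, x = 5.0, L = 0.0;
       gsl_sf_result F, Fp, G, Gp;
       double exp_F, exp_G;
       int status;

       gsl_set_error_handler_off ();
       status = gsl_sf_coulomb_wave_FG_e (eta, x, L, 0,
                                          &F, &Fp, &G, &Gp,
                                          &exp_F, &exp_G);

       if (status == GSL_EOVRFLW)
         {
           /* recover the full solution as described above */
           printf ("F_L = %g * exp(%g)\n", F.val, exp_F);
           printf ("G_L = %g * exp(%g)\n", G.val, exp_G);
         }
       else
         {
           printf ("F_L(%g,%g) = %g, G_L(%g,%g) = %g\n",
                   eta, x, F.val, eta, x, G.val);
         }
       return 0;
     }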
-- Function: int gsl_sf_coulomb_wave_FGp_array (double L_min, int kmax, double eta, double x, double fc_array[], double fcp_array[], double gc_array[], double gcp_array[], double *F_exponent, double *G_exponent) This function computes the functions F_L(\eta,x), G_L(\eta,x) and their derivatives F'_L(\eta,x), G'_L(\eta,x) for L = Lmin \dots Lmin + kmax storing the results in *note fc_array: 172, *note gc_array: 172, *note fcp_array: 172. and *note gcp_array: 172. In the case of overflow the exponents are stored in *note F_exponent: 172. and *note G_exponent: 172. -- Function: int gsl_sf_coulomb_wave_sphF_array (double L_min, int kmax, double eta, double x, double fc_array[], double F_exponent[]) This function computes the Coulomb wave function divided by the argument F_L(\eta, x)/x for L = Lmin \dots Lmin + kmax, storing the results in *note fc_array: 173. In the case of overflow the exponent is stored in *note F_exponent: 173. This function reduces to spherical Bessel functions in the limit \eta \to 0.  File: gsl-ref.info, Node: Coulomb Wave Function Normalization Constant, Prev: Coulomb Wave Functions, Up: Coulomb Functions 7.7.3 Coulomb Wave Function Normalization Constant -------------------------------------------------- The Coulomb wave function normalization constant is defined in Abramowitz 14.1.7. -- Function: int gsl_sf_coulomb_CL_e (double L, double eta, gsl_sf_result *result) This function computes the Coulomb wave function normalization constant C_L(\eta) for L > -1. -- Function: int gsl_sf_coulomb_CL_array (double Lmin, int kmax, double eta, double cl[]) This function computes the Coulomb wave function normalization constant C_L(\eta) for L = Lmin \dots Lmin + kmax, Lmin > -1.  File: gsl-ref.info, Node: Coupling Coefficients, Next: Dawson Function, Prev: Coulomb Functions, Up: Special Functions 7.8 Coupling Coefficients ========================= The Wigner 3-j, 6-j and 9-j symbols give the coupling coefficients for combined angular momentum vectors. Since the arguments of the standard coupling coefficient functions are integer or half-integer, the arguments of the following functions are, by convention, integers equal to twice the actual spin value. For information on the 3-j coefficients see Abramowitz & Stegun, Section 27.9. The functions described in this section are declared in the header file ‘gsl_sf_coupling.h’. * Menu: * 3-j Symbols:: * 6-j Symbols:: * 9-j Symbols::  File: gsl-ref.info, Node: 3-j Symbols, Next: 6-j Symbols, Up: Coupling Coefficients 7.8.1 3-j Symbols ----------------- -- Function: double gsl_sf_coupling_3j (int two_ja, int two_jb, int two_jc, int two_ma, int two_mb, int two_mc) -- Function: int gsl_sf_coupling_3j_e (int two_ja, int two_jb, int two_jc, int two_ma, int two_mb, int two_mc, gsl_sf_result *result) These routines compute the Wigner 3-j coefficient, ( ja jb jc ) ( ma mb mc ) where the arguments are given in half-integer units, ja = *note two_ja: 17a./2, ma = *note two_ma: 17a./2, etc.  File: gsl-ref.info, Node: 6-j Symbols, Next: 9-j Symbols, Prev: 3-j Symbols, Up: Coupling Coefficients 7.8.2 6-j Symbols ----------------- -- Function: double gsl_sf_coupling_6j (int two_ja, int two_jb, int two_jc, int two_jd, int two_je, int two_jf) -- Function: int gsl_sf_coupling_6j_e (int two_ja, int two_jb, int two_jc, int two_jd, int two_je, int two_jf, gsl_sf_result *result) These routines compute the Wigner 6-j coefficient, { ja jb jc } { jd je jf } where the arguments are given in half-integer units, ja = *note two_ja: 17d./2, jd = ‘two_jd’/2, etc.
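For example, because the arguments are twice the physical spins, the 3-j symbol with j_a = j_b = 1, j_c = 0 and all m equal to zero, whose exact value is -1/\sqrt{3}, is obtained with the call below; the spin values are chosen only for illustration.

     #include <stdio.h>
     #include <gsl/gsl_sf_coupling.h>

     int
     main (void)
     {
       /* arguments are 2*j and 2*m: here ja = jb = 1, jc = 0, all m = 0 */
       double w3j = gsl_sf_coupling_3j (2, 2, 0, 0, 0, 0);

       printf ("( 1 1 0 ; 0 0 0 ) = %.15g\n", w3j);   /* approx -0.57735 */
       return 0;
     }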
File: gsl-ref.info, Node: 9-j Symbols, Prev: 6-j Symbols, Up: Coupling Coefficients 7.8.3 9-j Symbols ----------------- -- Function: double gsl_sf_coupling_9j (int two_ja, int two_jb, int two_jc, int two_jd, int two_je, int two_jf, int two_jg, int two_jh, int two_ji) -- Function: int gsl_sf_coupling_9j_e (int two_ja, int two_jb, int two_jc, int two_jd, int two_je, int two_jf, int two_jg, int two_jh, int two_ji, gsl_sf_result *result) These routines compute the Wigner 9-j coefficient, { ja jb jc } { jd je jf } { jg jh ji } where the arguments are given in half-integer units, ja = *note two_ja: 180./2, jd = ‘two_jd’/2, etc.  File: gsl-ref.info, Node: Dawson Function, Next: Debye Functions, Prev: Coupling Coefficients, Up: Special Functions 7.9 Dawson Function =================== The Dawson integral is defined by \exp(-x^2) \int_0^x dt \exp(t^2) A table of Dawson’s integral can be found in Abramowitz & Stegun, Table 7.5. The Dawson functions are declared in the header file ‘gsl_sf_dawson.h’. -- Function: double gsl_sf_dawson (double x) -- Function: int gsl_sf_dawson_e (double x, gsl_sf_result *result) These routines compute the value of Dawson’s integral for *note x: 183.  File: gsl-ref.info, Node: Debye Functions, Next: Dilogarithm, Prev: Dawson Function, Up: Special Functions 7.10 Debye Functions ==================== The Debye functions D_n(x) are defined by the following integral, D_n(x) = n/x^n \int_0^x dt (t^n/(e^t - 1)) For further information see Abramowitz & Stegun, Section 27.1. The Debye functions are declared in the header file ‘gsl_sf_debye.h’. -- Function: double gsl_sf_debye_1 (double x) -- Function: int gsl_sf_debye_1_e (double x, gsl_sf_result *result) These routines compute the first-order Debye function D_1(x). -- Function: double gsl_sf_debye_2 (double x) -- Function: int gsl_sf_debye_2_e (double x, gsl_sf_result *result) These routines compute the second-order Debye function D_2(x). -- Function: double gsl_sf_debye_3 (double x) -- Function: int gsl_sf_debye_3_e (double x, gsl_sf_result *result) These routines compute the third-order Debye function D_3(x). -- Function: double gsl_sf_debye_4 (double x) -- Function: int gsl_sf_debye_4_e (double x, gsl_sf_result *result) These routines compute the fourth-order Debye function D_4(x). -- Function: double gsl_sf_debye_5 (double x) -- Function: int gsl_sf_debye_5_e (double x, gsl_sf_result *result) These routines compute the fifth-order Debye function D_5(x). -- Function: double gsl_sf_debye_6 (double x) -- Function: int gsl_sf_debye_6_e (double x, gsl_sf_result *result) These routines compute the sixth-order Debye function D_6(x).  File: gsl-ref.info, Node: Dilogarithm, Next: Elementary Operations, Prev: Debye Functions, Up: Special Functions 7.11 Dilogarithm ================ The dilogarithm is defined as Li_2(z) = - \int_0^z ds log(1-s) / s The functions described in this section are declared in the header file ‘gsl_sf_dilog.h’. * Menu: * Real Argument:: * Complex Argument::  File: gsl-ref.info, Node: Real Argument, Next: Complex Argument, Up: Dilogarithm 7.11.1 Real Argument -------------------- -- Function: double gsl_sf_dilog (double x) -- Function: int gsl_sf_dilog_e (double x, gsl_sf_result *result) These routines compute the dilogarithm for a real argument. In Lewin’s notation this is Li_2(x), the real part of the dilogarithm of a real x. It is defined by the integral representation Li_2(x) = - \Re \int_0^x ds \log(1-s) / s Note that \Im(Li_2(x)) = 0 for x \le 1, and -\pi\log(x) for x > 1.
Note that Abramowitz & Stegun refer to the Spence integral S(x) = Li_2(1 - x) as the dilogarithm rather than Li_2(x).  File: gsl-ref.info, Node: Complex Argument, Prev: Real Argument, Up: Dilogarithm 7.11.2 Complex Argument ----------------------- -- Function: int gsl_sf_complex_dilog_e (double r, double theta, gsl_sf_result *result_re, gsl_sf_result *result_im) This function computes the full complex-valued dilogarithm for the complex argument z = r \exp(i \theta). The real and imaginary parts of the result are returned in *note result_re: 196, *note result_im: 196.  File: gsl-ref.info, Node: Elementary Operations, Next: Elliptic Integrals, Prev: Dilogarithm, Up: Special Functions 7.12 Elementary Operations ========================== The following functions allow for the propagation of errors when combining quantities by multiplication. The functions are declared in the header file ‘gsl_sf_elementary.h’. -- Function: double gsl_sf_multiply (double x, double y) -- Function: int gsl_sf_multiply_e (double x, double y, gsl_sf_result *result) This function multiplies *note x: 199. and *note y: 199. storing the product and its associated error in *note result: 199. -- Function: int gsl_sf_multiply_err_e (double x, double dx, double y, double dy, gsl_sf_result *result) This function multiplies *note x: 19a. and *note y: 19a. with associated absolute errors *note dx: 19a. and *note dy: 19a. The product xy \pm xy \sqrt{(dx/x)^2 +(dy/y)^2} is stored in *note result: 19a.  File: gsl-ref.info, Node: Elliptic Integrals, Next: Elliptic Functions Jacobi, Prev: Elementary Operations, Up: Special Functions 7.13 Elliptic Integrals ======================= The functions described in this section are declared in the header file ‘gsl_sf_ellint.h’. Further information about the elliptic integrals can be found in Abramowitz & Stegun, Chapter 17. * Menu: * Definition of Legendre Forms:: * Definition of Carlson Forms:: * Legendre Form of Complete Elliptic Integrals:: * Legendre Form of Incomplete Elliptic Integrals:: * Carlson Forms::  File: gsl-ref.info, Node: Definition of Legendre Forms, Next: Definition of Carlson Forms, Up: Elliptic Integrals 7.13.1 Definition of Legendre Forms ----------------------------------- The Legendre forms of elliptic integrals F(\phi,k), E(\phi,k) and \Pi(\phi,k,n) are defined by, F(\phi,k) = \int_0^\phi dt 1/\sqrt((1 - k^2 \sin^2(t))) E(\phi,k) = \int_0^\phi dt \sqrt((1 - k^2 \sin^2(t))) Pi(\phi,k,n) = \int_0^\phi dt 1/((1 + n \sin^2(t))\sqrt(1 - k^2 \sin^2(t))) The complete Legendre forms are denoted by K(k) = F(\pi/2, k) and E(k) = E(\pi/2, k). The notation used here is based on Carlson, “Numerische Mathematik” 33 (1979) 1 and differs slightly from that used by Abramowitz & Stegun, where the functions are given in terms of the parameter m = k^2 and n is replaced by -n.  
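As a numerical illustration of these definitions, the complete integral K(k) should agree with the incomplete integral at \phi = \pi/2.  A minimal sketch using ‘gsl_sf_ellint_Kcomp’ and ‘gsl_sf_ellint_F’, which are documented below; the modulus k = 0.5 is arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_sf_ellint.h>

     int
     main (void)
     {
       double k = 0.5;
       double K = gsl_sf_ellint_Kcomp (k, GSL_PREC_DOUBLE);
       double F = gsl_sf_ellint_F (M_PI / 2.0, k, GSL_PREC_DOUBLE);

       /* K(k) = F(pi/2, k), so the two values should agree */
       printf ("K(%g) = %.15g, F(pi/2, %g) = %.15g\n", k, K, k, F);
       return 0;
     }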
File: gsl-ref.info, Node: Definition of Carlson Forms, Next: Legendre Form of Complete Elliptic Integrals, Prev: Definition of Legendre Forms, Up: Elliptic Integrals 7.13.2 Definition of Carlson Forms ---------------------------------- The Carlson symmetric forms of elliptical integrals RC(x,y), RD(x,y,z), RF(x,y,z) and RJ(x,y,z,p) are defined by, RC(x,y) = 1/2 \int_0^\infty dt (t+x)^(-1/2) (t+y)^(-1) RD(x,y,z) = 3/2 \int_0^\infty dt (t+x)^(-1/2) (t+y)^(-1/2) (t+z)^(-3/2) RF(x,y,z) = 1/2 \int_0^\infty dt (t+x)^(-1/2) (t+y)^(-1/2) (t+z)^(-1/2) RJ(x,y,z,p) = 3/2 \int_0^\infty dt (t+x)^(-1/2) (t+y)^(-1/2) (t+z)^(-1/2) (t+p)^(-1)  File: gsl-ref.info, Node: Legendre Form of Complete Elliptic Integrals, Next: Legendre Form of Incomplete Elliptic Integrals, Prev: Definition of Carlson Forms, Up: Elliptic Integrals 7.13.3 Legendre Form of Complete Elliptic Integrals --------------------------------------------------- -- Function: double gsl_sf_ellint_Kcomp (double k, gsl_mode_t mode) -- Function: int gsl_sf_ellint_Kcomp_e (double k, gsl_mode_t mode, gsl_sf_result *result) These routines compute the complete elliptic integral K(k) to the accuracy specified by the mode variable *note mode: 1a0. Note that Abramowitz & Stegun define this function in terms of the parameter m = k^2. -- Function: double gsl_sf_ellint_Ecomp (double k, gsl_mode_t mode) -- Function: int gsl_sf_ellint_Ecomp_e (double k, gsl_mode_t mode, gsl_sf_result *result) These routines compute the complete elliptic integral E(k) to the accuracy specified by the mode variable *note mode: 1a2. Note that Abramowitz & Stegun define this function in terms of the parameter m = k^2. -- Function: double gsl_sf_ellint_Pcomp (double k, double n, gsl_mode_t mode) -- Function: int gsl_sf_ellint_Pcomp_e (double k, double n, gsl_mode_t mode, gsl_sf_result *result) These routines compute the complete elliptic integral \Pi(k,n) to the accuracy specified by the mode variable *note mode: 1a4. Note that Abramowitz & Stegun define this function in terms of the parameters m = k^2 and \sin^2(\alpha) = k^2, with the change of sign n \to -n.  File: gsl-ref.info, Node: Legendre Form of Incomplete Elliptic Integrals, Next: Carlson Forms, Prev: Legendre Form of Complete Elliptic Integrals, Up: Elliptic Integrals 7.13.4 Legendre Form of Incomplete Elliptic Integrals ----------------------------------------------------- -- Function: double gsl_sf_ellint_F (double phi, double k, gsl_mode_t mode) -- Function: int gsl_sf_ellint_F_e (double phi, double k, gsl_mode_t mode, gsl_sf_result *result) These routines compute the incomplete elliptic integral F(\phi,k) to the accuracy specified by the mode variable *note mode: 1a7. Note that Abramowitz & Stegun define this function in terms of the parameter m = k^2. -- Function: double gsl_sf_ellint_E (double phi, double k, gsl_mode_t mode) -- Function: int gsl_sf_ellint_E_e (double phi, double k, gsl_mode_t mode, gsl_sf_result *result) These routines compute the incomplete elliptic integral E(\phi,k) to the accuracy specified by the mode variable *note mode: 1a9. Note that Abramowitz & Stegun define this function in terms of the parameter m = k^2. -- Function: double gsl_sf_ellint_P (double phi, double k, double n, gsl_mode_t mode) -- Function: int gsl_sf_ellint_P_e (double phi, double k, double n, gsl_mode_t mode, gsl_sf_result *result) These routines compute the incomplete elliptic integral \Pi(\phi,k,n) to the accuracy specified by the mode variable *note mode: 1ab. 
Note that Abramowitz & Stegun define this function in terms of the parameters m = k^2 and \sin^2(\alpha) = k^2, with the change of sign n \to -n. -- Function: double gsl_sf_ellint_D (double phi, double k, gsl_mode_t mode) -- Function: int gsl_sf_ellint_D_e (double phi, double k, gsl_mode_t mode, gsl_sf_result *result) These functions compute the incomplete elliptic integral D(\phi,k) which is defined through the Carlson form RD(x,y,z) by the following relation, D(\phi,k) = (1/3)(\sin(\phi))^3 RD (1-\sin^2(\phi), 1-k^2 \sin^2(\phi), 1).  File: gsl-ref.info, Node: Carlson Forms, Prev: Legendre Form of Incomplete Elliptic Integrals, Up: Elliptic Integrals 7.13.5 Carlson Forms -------------------- -- Function: double gsl_sf_ellint_RC (double x, double y, gsl_mode_t mode) -- Function: int gsl_sf_ellint_RC_e (double x, double y, gsl_mode_t mode, gsl_sf_result *result) These routines compute the incomplete elliptic integral RC(x,y) to the accuracy specified by the mode variable *note mode: 1b0. -- Function: double gsl_sf_ellint_RD (double x, double y, double z, gsl_mode_t mode) -- Function: int gsl_sf_ellint_RD_e (double x, double y, double z, gsl_mode_t mode, gsl_sf_result *result) These routines compute the incomplete elliptic integral RD(x,y,z) to the accuracy specified by the mode variable *note mode: 1b2. -- Function: double gsl_sf_ellint_RF (double x, double y, double z, gsl_mode_t mode) -- Function: int gsl_sf_ellint_RF_e (double x, double y, double z, gsl_mode_t mode, gsl_sf_result *result) These routines compute the incomplete elliptic integral RF(x,y,z) to the accuracy specified by the mode variable *note mode: 1b4. -- Function: double gsl_sf_ellint_RJ (double x, double y, double z, double p, gsl_mode_t mode) -- Function: int gsl_sf_ellint_RJ_e (double x, double y, double z, double p, gsl_mode_t mode, gsl_sf_result *result) These routines compute the incomplete elliptic integral RJ(x,y,z,p) to the accuracy specified by the mode variable *note mode: 1b6.  File: gsl-ref.info, Node: Elliptic Functions Jacobi, Next: Error Functions, Prev: Elliptic Integrals, Up: Special Functions 7.14 Elliptic Functions (Jacobi) ================================ The Jacobian Elliptic functions are defined in Abramowitz & Stegun, Chapter 16. The functions are declared in the header file ‘gsl_sf_elljac.h’. -- Function: int gsl_sf_elljac_e (double u, double m, double *sn, double *cn, double *dn) This function computes the Jacobian elliptic functions sn(u|m), cn(u|m), dn(u|m) by descending Landen transformations.  File: gsl-ref.info, Node: Error Functions, Next: Exponential Functions, Prev: Elliptic Functions Jacobi, Up: Special Functions 7.15 Error Functions ==================== The error function is described in Abramowitz & Stegun, Chapter 7. The functions in this section are declared in the header file ‘gsl_sf_erf.h’. * Menu: * Error Function:: * Complementary Error Function:: * Log Complementary Error Function:: * Probability functions::  File: gsl-ref.info, Node: Error Function, Next: Complementary Error Function, Up: Error Functions 7.15.1 Error Function --------------------- -- Function: double gsl_sf_erf (double x) -- Function: int gsl_sf_erf_e (double x, gsl_sf_result *result) These routines compute the error function \erf(x), where \erf(x) = (2/\sqrt{\pi}) \int_0^x dt \exp(-t^2).  
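For example, a short fragment showing both calling conventions for the error function, including the error estimate returned by the ‘_e’ form; the argument x = 1 is arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_sf_erf.h>

     int
     main (void)
     {
       double x = 1.0;
       double y = gsl_sf_erf (x);          /* natural form */

       gsl_sf_result r;
       gsl_sf_erf_e (x, &r);               /* error-handling form */

       printf ("erf(%g) = %.15g\n", x, y);
       printf ("erf(%g) = %.15g +/- %g\n", x, r.val, r.err);
       return 0;
     }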
File: gsl-ref.info, Node: Complementary Error Function, Next: Log Complementary Error Function, Prev: Error Function, Up: Error Functions 7.15.2 Complementary Error Function ----------------------------------- -- Function: double gsl_sf_erfc (double x) -- Function: int gsl_sf_erfc_e (double x, gsl_sf_result *result) These routines compute the complementary error function \erfc(x) = 1 - \erf(x) = (2/\sqrt{\pi}) \int_x^\infty \exp(-t^2)  File: gsl-ref.info, Node: Log Complementary Error Function, Next: Probability functions, Prev: Complementary Error Function, Up: Error Functions 7.15.3 Log Complementary Error Function --------------------------------------- -- Function: double gsl_sf_log_erfc (double x) -- Function: int gsl_sf_log_erfc_e (double x, gsl_sf_result *result) These routines compute the logarithm of the complementary error function \log(\erfc(x)).  File: gsl-ref.info, Node: Probability functions, Prev: Log Complementary Error Function, Up: Error Functions 7.15.4 Probability functions ---------------------------- The probability functions for the Normal or Gaussian distribution are described in Abramowitz & Stegun, Section 26.2. -- Function: double gsl_sf_erf_Z (double x) -- Function: int gsl_sf_erf_Z_e (double x, gsl_sf_result *result) These routines compute the Gaussian probability density function Z(x) = (1/\sqrt{2\pi}) \exp(-x^2/2) -- Function: double gsl_sf_erf_Q (double x) -- Function: int gsl_sf_erf_Q_e (double x, gsl_sf_result *result) These routines compute the upper tail of the Gaussian probability function Q(x) = (1/\sqrt{2\pi}) \int_x^\infty dt \exp(-t^2/2) The `hazard function' for the normal distribution, also known as the inverse Mills’ ratio, is defined as, h(x) = Z(x)/Q(x) = \sqrt{2/\pi} \exp(-x^2 / 2) / \erfc(x/\sqrt 2) It decreases rapidly as x approaches -\infty and asymptotes to h(x) \sim x as x approaches +\infty. -- Function: double gsl_sf_hazard (double x) -- Function: int gsl_sf_hazard_e (double x, gsl_sf_result *result) These routines compute the hazard function for the normal distribution.  File: gsl-ref.info, Node: Exponential Functions, Next: Exponential Integrals, Prev: Error Functions, Up: Special Functions 7.16 Exponential Functions ========================== The functions described in this section are declared in the header file ‘gsl_sf_exp.h’. * Menu: * Exponential Function:: * Relative Exponential Functions:: * Exponentiation With Error Estimate::  File: gsl-ref.info, Node: Exponential Function, Next: Relative Exponential Functions, Up: Exponential Functions 7.16.1 Exponential Function --------------------------- -- Function: double gsl_sf_exp (double x) -- Function: int gsl_sf_exp_e (double x, gsl_sf_result *result) These routines provide an exponential function \exp(x) using GSL semantics and error checking. -- Function: int gsl_sf_exp_e10_e (double x, gsl_sf_result_e10 *result) This function computes the exponential \exp(x) using the *note gsl_sf_result_e10: ce. type to return a result with extended range. This function may be useful if the value of \exp(x) would overflow the numeric range of ‘double’. -- Function: double gsl_sf_exp_mult (double x, double y) -- Function: int gsl_sf_exp_mult_e (double x, double y, gsl_sf_result *result) These routines exponentiate *note x: 1d0. and multiply by the factor *note y: 1d0. to return the product y \exp(x). -- Function: int gsl_sf_exp_mult_e10_e (const double x, const double y, gsl_sf_result_e10 *result) This function computes the product y \exp(x) using the *note gsl_sf_result_e10: ce. 
type to return a result with extended numeric range.  File: gsl-ref.info, Node: Relative Exponential Functions, Next: Exponentiation With Error Estimate, Prev: Exponential Function, Up: Exponential Functions 7.16.2 Relative Exponential Functions ------------------------------------- -- Function: double gsl_sf_expm1 (double x) -- Function: int gsl_sf_expm1_e (double x, gsl_sf_result *result) These routines compute the quantity \exp(x)-1 using an algorithm that is accurate for small x. -- Function: double gsl_sf_exprel (double x) -- Function: int gsl_sf_exprel_e (double x, gsl_sf_result *result) These routines compute the quantity (\exp(x)-1)/x using an algorithm that is accurate for small *note x: 1d6. For small *note x: 1d6. the algorithm is based on the expansion (\exp(x)-1)/x = 1 + x/2 + x^2/(2*3) + x^3/(2*3*4) + \dots. -- Function: double gsl_sf_exprel_2 (double x) -- Function: int gsl_sf_exprel_2_e (double x, gsl_sf_result *result) These routines compute the quantity 2(\exp(x)-1-x)/x^2 using an algorithm that is accurate for small *note x: 1d8. For small *note x: 1d8. the algorithm is based on the expansion 2(\exp(x)-1-x)/x^2 = 1 + x/3 + x^2/(3*4) + x^3/(3*4*5) + \dots. -- Function: double gsl_sf_exprel_n (int n, double x) -- Function: int gsl_sf_exprel_n_e (int n, double x, gsl_sf_result *result) These routines compute the N-relative exponential, which is the *note n: 1da.-th generalization of the functions *note gsl_sf_exprel(): 1d5. and *note gsl_sf_exprel_2(): 1d7. The N-relative exponential is given by, exprel_N(x) = N!/x^N (\exp(x) - \sum_{k=0}^{N-1} x^k/k!) = 1 + x/(N+1) + x^2/((N+1)(N+2)) + ... = 1F1 (1,1+N,x)  File: gsl-ref.info, Node: Exponentiation With Error Estimate, Prev: Relative Exponential Functions, Up: Exponential Functions 7.16.3 Exponentiation With Error Estimate ----------------------------------------- -- Function: int gsl_sf_exp_err_e (double x, double dx, gsl_sf_result *result) This function exponentiates *note x: 1dc. with an associated absolute error *note dx: 1dc. -- Function: int gsl_sf_exp_err_e10_e (double x, double dx, gsl_sf_result_e10 *result) This function exponentiates a quantity *note x: 1dd. with an associated absolute error *note dx: 1dd. using the *note gsl_sf_result_e10: ce. type to return a result with extended range. -- Function: int gsl_sf_exp_mult_err_e (double x, double dx, double y, double dy, gsl_sf_result *result) This routine computes the product y \exp(x) for the quantities *note x: 1de, *note y: 1de. with associated absolute errors *note dx: 1de, *note dy: 1de. -- Function: int gsl_sf_exp_mult_err_e10_e (double x, double dx, double y, double dy, gsl_sf_result_e10 *result) This routine computes the product y \exp(x) for the quantities *note x: 1df, *note y: 1df. with associated absolute errors *note dx: 1df, *note dy: 1df. using the *note gsl_sf_result_e10: ce. type to return a result with extended range.  File: gsl-ref.info, Node: Exponential Integrals, Next: Fermi-Dirac Function, Prev: Exponential Functions, Up: Special Functions 7.17 Exponential Integrals ========================== Information on the exponential integrals can be found in Abramowitz & Stegun, Chapter 5. These functions are declared in the header file ‘gsl_sf_expint.h’. * Menu: * Exponential Integral:: * Ei(x): Ei x. * Hyperbolic Integrals:: * Ei_3(x): Ei_3 x. 
* Trigonometric Integrals:: * Arctangent Integral::  File: gsl-ref.info, Node: Exponential Integral, Next: Ei x, Up: Exponential Integrals 7.17.1 Exponential Integral --------------------------- -- Function: double gsl_sf_expint_E1 (double x) -- Function: int gsl_sf_expint_E1_e (double x, gsl_sf_result *result) These routines compute the exponential integral E_1(x), E_1(x) := \Re \int_1^\infty dt \exp(-xt)/t. -- Function: double gsl_sf_expint_E2 (double x) -- Function: int gsl_sf_expint_E2_e (double x, gsl_sf_result *result) These routines compute the second-order exponential integral E_2(x), E_2(x) := \Re \int_1^\infty dt \exp(-xt)/t^2 -- Function: double gsl_sf_expint_En (int n, double x) -- Function: int gsl_sf_expint_En_e (int n, double x, gsl_sf_result *result) These routines compute the exponential integral E_n(x) of order *note n: 1e7, E_n(x) := \Re \int_1^\infty dt \exp(-xt)/t^n.  File: gsl-ref.info, Node: Ei x, Next: Hyperbolic Integrals, Prev: Exponential Integral, Up: Exponential Integrals 7.17.2 Ei(x) ------------ -- Function: double gsl_sf_expint_Ei (double x) -- Function: int gsl_sf_expint_Ei_e (double x, gsl_sf_result *result) These routines compute the exponential integral Ei(x), Ei(x) = - PV(\int_{-x}^\infty dt \exp(-t)/t) where PV denotes the principal value of the integral.  File: gsl-ref.info, Node: Hyperbolic Integrals, Next: Ei_3 x, Prev: Ei x, Up: Exponential Integrals 7.17.3 Hyperbolic Integrals --------------------------- -- Function: double gsl_sf_Shi (double x) -- Function: int gsl_sf_Shi_e (double x, gsl_sf_result *result) These routines compute the integral Shi(x) = \int_0^x dt \sinh(t)/t -- Function: double gsl_sf_Chi (double x) -- Function: int gsl_sf_Chi_e (double x, gsl_sf_result *result) These routines compute the integral Chi(x) := \Re[ \gamma_E + \log(x) + \int_0^x dt (\cosh(t)-1)/t ] where \gamma_E is the Euler constant (available as the macro ‘M_EULER’).  File: gsl-ref.info, Node: Ei_3 x, Next: Trigonometric Integrals, Prev: Hyperbolic Integrals, Up: Exponential Integrals 7.17.4 Ei_3(x) -------------- -- Function: double gsl_sf_expint_3 (double x) -- Function: int gsl_sf_expint_3_e (double x, gsl_sf_result *result) These routines compute the third-order exponential integral Ei_3(x) = \int_0^x dt \exp(-t^3) for x \ge 0.  File: gsl-ref.info, Node: Trigonometric Integrals, Next: Arctangent Integral, Prev: Ei_3 x, Up: Exponential Integrals 7.17.5 Trigonometric Integrals ------------------------------ -- Function: double gsl_sf_Si (const double x) -- Function: int gsl_sf_Si_e (double x, gsl_sf_result *result) These routines compute the Sine integral Si(x) = \int_0^x dt \sin(t)/t -- Function: double gsl_sf_Ci (const double x) -- Function: int gsl_sf_Ci_e (double x, gsl_sf_result *result) These routines compute the Cosine integral Ci(x) = -\int_x^\infty dt \cos(t)/t for x > 0  File: gsl-ref.info, Node: Arctangent Integral, Prev: Trigonometric Integrals, Up: Exponential Integrals 7.17.6 Arctangent Integral -------------------------- -- Function: double gsl_sf_atanint (double x) -- Function: int gsl_sf_atanint_e (double x, gsl_sf_result *result) These routines compute the Arctangent integral, which is defined as AtanInt(x) = \int_0^x dt \arctan(t)/t  File: gsl-ref.info, Node: Fermi-Dirac Function, Next: Gamma and Beta Functions, Prev: Exponential Integrals, Up: Special Functions 7.18 Fermi-Dirac Function ========================= The functions described in this section are declared in the header file ‘gsl_sf_fermi_dirac.h’.
* Menu: * Complete Fermi-Dirac Integrals:: * Incomplete Fermi-Dirac Integrals::  File: gsl-ref.info, Node: Complete Fermi-Dirac Integrals, Next: Incomplete Fermi-Dirac Integrals, Up: Fermi-Dirac Function 7.18.1 Complete Fermi-Dirac Integrals ------------------------------------- The complete Fermi-Dirac integral F_j(x) is given by, F_j(x) := (1/\Gamma(j+1)) \int_0^\infty dt (t^j / (\exp(t-x) + 1)) Note that the Fermi-Dirac integral is sometimes defined without the normalisation factor in other texts. -- Function: double gsl_sf_fermi_dirac_m1 (double x) -- Function: int gsl_sf_fermi_dirac_m1_e (double x, gsl_sf_result *result) These routines compute the complete Fermi-Dirac integral with an index of -1. This integral is given by F_{-1}(x) = e^x / (1 + e^x). -- Function: double gsl_sf_fermi_dirac_0 (double x) -- Function: int gsl_sf_fermi_dirac_0_e (double x, gsl_sf_result *result) These routines compute the complete Fermi-Dirac integral with an index of 0. This integral is given by F_0(x) = \ln(1 + e^x). -- Function: double gsl_sf_fermi_dirac_1 (double x) -- Function: int gsl_sf_fermi_dirac_1_e (double x, gsl_sf_result *result) These routines compute the complete Fermi-Dirac integral with an index of 1, F_1(x) = \int_0^\infty dt (t /(\exp(t-x)+1)). -- Function: double gsl_sf_fermi_dirac_2 (double x) -- Function: int gsl_sf_fermi_dirac_2_e (double x, gsl_sf_result *result) These routines compute the complete Fermi-Dirac integral with an index of 2, F_2(x) = (1/2) \int_0^\infty dt (t^2 /(\exp(t-x)+1)). -- Function: double gsl_sf_fermi_dirac_int (int j, double x) -- Function: int gsl_sf_fermi_dirac_int_e (int j, double x, gsl_sf_result *result) These routines compute the complete Fermi-Dirac integral with an integer index of j, F_j(x) = (1/\Gamma(j+1)) \int_0^\infty dt (t^j /(\exp(t-x)+1)). -- Function: double gsl_sf_fermi_dirac_mhalf (double x) -- Function: int gsl_sf_fermi_dirac_mhalf_e (double x, gsl_sf_result *result) These routines compute the complete Fermi-Dirac integral F_{-1/2}(x). -- Function: double gsl_sf_fermi_dirac_half (double x) -- Function: int gsl_sf_fermi_dirac_half_e (double x, gsl_sf_result *result) These routines compute the complete Fermi-Dirac integral F_{1/2}(x). -- Function: double gsl_sf_fermi_dirac_3half (double x) -- Function: int gsl_sf_fermi_dirac_3half_e (double x, gsl_sf_result *result) These routines compute the complete Fermi-Dirac integral F_{3/2}(x).  File: gsl-ref.info, Node: Incomplete Fermi-Dirac Integrals, Prev: Complete Fermi-Dirac Integrals, Up: Fermi-Dirac Function 7.18.2 Incomplete Fermi-Dirac Integrals --------------------------------------- The incomplete Fermi-Dirac integral F_j(x,b) is given by, F_j(x,b) := (1/\Gamma(j+1)) \int_b^\infty dt (t^j / (\exp(t-x) + 1)) -- Function: double gsl_sf_fermi_dirac_inc_0 (double x, double b) -- Function: int gsl_sf_fermi_dirac_inc_0_e (double x, double b, gsl_sf_result *result) These routines compute the incomplete Fermi-Dirac integral with an index of zero, F_0(x,b) = \ln(1 + e^{b-x}) - (b-x)  File: gsl-ref.info, Node: Gamma and Beta Functions, Next: Gegenbauer Functions, Prev: Fermi-Dirac Function, Up: Special Functions 7.19 Gamma and Beta Functions ============================= The following routines compute the gamma and beta functions in their full and incomplete forms, as well as various kinds of factorials. The functions described in this section are declared in the header file ‘gsl_sf_gamma.h’.
* Menu: * Gamma Functions:: * Factorials:: * Pochhammer Symbol:: * Incomplete Gamma Functions:: * Beta Functions:: * Incomplete Beta Function::  File: gsl-ref.info, Node: Gamma Functions, Next: Factorials, Up: Gamma and Beta Functions 7.19.1 Gamma Functions ---------------------- The Gamma function is defined by the following integral, \Gamma(x) = \int_0^{\infty} dt t^{x-1} \exp(-t) It is related to the factorial function by \Gamma(n) = (n-1)! for positive integer n. Further information on the Gamma function can be found in Abramowitz & Stegun, Chapter 6. -- Function: double gsl_sf_gamma (double x) -- Function: int gsl_sf_gamma_e (double x, gsl_sf_result *result) These routines compute the Gamma function \Gamma(x), subject to x not being a negative integer or zero. The function is computed using the real Lanczos method. The maximum value of x such that \Gamma(x) is not considered an overflow is given by the macro ‘GSL_SF_GAMMA_XMAX’ and is 171.0. -- Function: double gsl_sf_lngamma (double x) -- Function: int gsl_sf_lngamma_e (double x, gsl_sf_result *result) These routines compute the logarithm of the Gamma function, \log(\Gamma(x)), subject to x not being a negative integer or zero. For x < 0 the real part of \log(\Gamma(x)) is returned, which is equivalent to \log(|\Gamma(x)|). The function is computed using the real Lanczos method. -- Function: int gsl_sf_lngamma_sgn_e (double x, gsl_sf_result *result_lg, double *sgn) This routine computes the sign of the gamma function and the logarithm of its magnitude, subject to x not being a negative integer or zero. The function is computed using the real Lanczos method. The value of the gamma function and its error can be reconstructed using the relation \Gamma(x) = sgn * \exp(result\_lg), taking into account the two components of *note result_lg: 216. -- Function: double gsl_sf_gammastar (double x) -- Function: int gsl_sf_gammastar_e (double x, gsl_sf_result *result) These routines compute the regulated Gamma Function \Gamma^*(x) for x > 0. The regulated gamma function is given by, \Gamma^*(x) = \Gamma(x)/(\sqrt{2\pi} x^{(x-1/2)} \exp(-x)) = (1 + (1/12x) + ...) for x \to \infty and is a useful suggestion of Temme. -- Function: double gsl_sf_gammainv (double x) -- Function: int gsl_sf_gammainv_e (double x, gsl_sf_result *result) These routines compute the reciprocal of the gamma function, 1/\Gamma(x) using the real Lanczos method. -- Function: int gsl_sf_lngamma_complex_e (double zr, double zi, gsl_sf_result *lnr, gsl_sf_result *arg) This routine computes \log(\Gamma(z)) for complex z = z_r + i z_i and z not a negative integer or zero, using the complex Lanczos method. The returned parameters are lnr = \log|\Gamma(z)| and arg = \arg(\Gamma(z)) in (-\pi,\pi]. Note that the phase part (*note arg: 21b.) is not well-determined when |z| is very large, due to inevitable roundoff in restricting to (-\pi,\pi]. This will result in a ‘GSL_ELOSS’ error when it occurs. The absolute value part (*note lnr: 21b.), however, never suffers from loss of precision.  File: gsl-ref.info, Node: Factorials, Next: Pochhammer Symbol, Prev: Gamma Functions, Up: Gamma and Beta Functions 7.19.2 Factorials ----------------- Although factorials can be computed from the Gamma function, using the relation n! = \Gamma(n+1) for non-negative integer n, it is usually more efficient to call the functions in this section, particularly for small values of n, whose factorial values are maintained in hardcoded tables. 
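For example, the relation n! = \Gamma(n+1) can be checked directly; this minimal sketch uses gsl_sf_gamma() from above together with gsl_sf_fact(), which is documented just below:

     #include <gsl/gsl_sf_gamma.h>

     double f = gsl_sf_fact (5);      /* 5! = 120, taken from the table */
     double g = gsl_sf_gamma (6.0);   /* Gamma(6) = 5! = 120 */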
-- Function: double gsl_sf_fact (unsigned int n) -- Function: int gsl_sf_fact_e (unsigned int n, gsl_sf_result *result) These routines compute the factorial n!. The factorial is related to the Gamma function by n! = \Gamma(n+1). The maximum value of n such that n! is not considered an overflow is given by the macro ‘GSL_SF_FACT_NMAX’ and is 170. -- Function: double gsl_sf_doublefact (unsigned int n) -- Function: int gsl_sf_doublefact_e (unsigned int n, gsl_sf_result *result) These routines compute the double factorial n!! = n(n-2)(n-4) \dots. The maximum value of n such that n!! is not considered an overflow is given by the macro ‘GSL_SF_DOUBLEFACT_NMAX’ and is 297. -- Function: double gsl_sf_lnfact (unsigned int n) -- Function: int gsl_sf_lnfact_e (unsigned int n, gsl_sf_result *result) These routines compute the logarithm of the factorial of *note n: 222, \log(n!). The algorithm is faster than computing \ln(\Gamma(n+1)) via *note gsl_sf_lngamma(): 214. for n < 170, but defers for larger *note n: 222. -- Function: double gsl_sf_lndoublefact (unsigned int n) -- Function: int gsl_sf_lndoublefact_e (unsigned int n, gsl_sf_result *result) These routines compute the logarithm of the double factorial of *note n: 224, \log(n!!). -- Function: double gsl_sf_choose (unsigned int n, unsigned int m) -- Function: int gsl_sf_choose_e (unsigned int n, unsigned int m, gsl_sf_result *result) These routines compute the combinatorial factor ‘n choose m’ = n!/(m!(n-m)!) -- Function: double gsl_sf_lnchoose (unsigned int n, unsigned int m) -- Function: int gsl_sf_lnchoose_e (unsigned int n, unsigned int m, gsl_sf_result *result) These routines compute the logarithm of ‘n choose m’. This is equivalent to the sum \log(n!) - \log(m!) - \log((n-m)!). -- Function: double gsl_sf_taylorcoeff (int n, double x) -- Function: int gsl_sf_taylorcoeff_e (int n, double x, gsl_sf_result *result) These routines compute the Taylor coefficient x^n / n! for x \ge 0, n \ge 0  File: gsl-ref.info, Node: Pochhammer Symbol, Next: Incomplete Gamma Functions, Prev: Factorials, Up: Gamma and Beta Functions 7.19.3 Pochhammer Symbol ------------------------ -- Function: double gsl_sf_poch (double a, double x) -- Function: int gsl_sf_poch_e (double a, double x, gsl_sf_result *result) These routines compute the Pochhammer symbol (a)_x = \Gamma(a + x)/\Gamma(a). The Pochhammer symbol is also known as the Apell symbol and sometimes written as (a,x). When a and a + x are negative integers or zero, the limiting value of the ratio is returned. -- Function: double gsl_sf_lnpoch (double a, double x) -- Function: int gsl_sf_lnpoch_e (double a, double x, gsl_sf_result *result) These routines compute the logarithm of the Pochhammer symbol, \log((a)_x) = \log(\Gamma(a + x)/\Gamma(a)). -- Function: int gsl_sf_lnpoch_sgn_e (double a, double x, gsl_sf_result *result, double *sgn) These routines compute the sign of the Pochhammer symbol and the logarithm of its magnitude. The computed parameters are result = \log(|(a)_x|) with a corresponding error term, and sgn = \sgn((a)_x) where (a)_x = \Gamma(a + x)/\Gamma(a). -- Function: double gsl_sf_pochrel (double a, double x) -- Function: int gsl_sf_pochrel_e (double a, double x, gsl_sf_result *result) These routines compute the relative Pochhammer symbol ((a)_x - 1)/x where (a)_x = \Gamma(a + x)/\Gamma(a).  
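As a brief sketch, the Pochhammer symbol returned by gsl_sf_poch() can be compared against the ratio of gamma functions that defines it (the argument values are arbitrary):

     #include <gsl/gsl_sf_gamma.h>

     double a = 3.0, x = 2.0;
     double p = gsl_sf_poch (a, x);                       /* (a)_x = Gamma(5)/Gamma(3) = 12 */
     double r = gsl_sf_gamma (a + x) / gsl_sf_gamma (a);  /* the same value, formed explicitly */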
File: gsl-ref.info, Node: Incomplete Gamma Functions, Next: Beta Functions, Prev: Pochhammer Symbol, Up: Gamma and Beta Functions 7.19.4 Incomplete Gamma Functions --------------------------------- -- Function: double gsl_sf_gamma_inc (double a, double x) -- Function: int gsl_sf_gamma_inc_e (double a, double x, gsl_sf_result *result) These functions compute the unnormalized incomplete Gamma Function \Gamma(a,x) = \int_x^\infty dt t^{(a-1)} \exp(-t) for a real and x \ge 0. -- Function: double gsl_sf_gamma_inc_Q (double a, double x) -- Function: int gsl_sf_gamma_inc_Q_e (double a, double x, gsl_sf_result *result) These routines compute the normalized incomplete Gamma Function Q(a,x) = 1/\Gamma(a) \int_x^\infty dt t^{(a-1)} \exp(-t) for a > 0, x \ge 0. -- Function: double gsl_sf_gamma_inc_P (double a, double x) -- Function: int gsl_sf_gamma_inc_P_e (double a, double x, gsl_sf_result *result) These routines compute the complementary normalized incomplete Gamma Function P(a,x) = 1 - Q(a,x) = 1/\Gamma(a) \int_0^x dt t^{(a-1)} \exp(-t) for a > 0, x \ge 0. Note that Abramowitz & Stegun call P(a,x) the incomplete gamma function (section 6.5).  File: gsl-ref.info, Node: Beta Functions, Next: Incomplete Beta Function, Prev: Incomplete Gamma Functions, Up: Gamma and Beta Functions 7.19.5 Beta Functions --------------------- -- Function: double gsl_sf_beta (double a, double b) -- Function: int gsl_sf_beta_e (double a, double b, gsl_sf_result *result) These routines compute the Beta Function, B(a,b) = \Gamma(a)\Gamma(b)/\Gamma(a+b) subject to a and b not being negative integers. -- Function: double gsl_sf_lnbeta (double a, double b) -- Function: int gsl_sf_lnbeta_e (double a, double b, gsl_sf_result *result) These routines compute the logarithm of the Beta Function, \log(B(a,b)) subject to a and b not being negative integers.  File: gsl-ref.info, Node: Incomplete Beta Function, Prev: Beta Functions, Up: Gamma and Beta Functions 7.19.6 Incomplete Beta Function ------------------------------- -- Function: double gsl_sf_beta_inc (double a, double b, double x) -- Function: int gsl_sf_beta_inc_e (double a, double b, double x, gsl_sf_result *result) These routines compute the normalized incomplete Beta function I_x(a,b) = B_x(a,b) / B(a,b) where B_x(a,b) = \int_0^x t^{a-1} (1-t)^{b-1} dt for 0 \le x \le 1. For a > 0, b > 0 the value is computed using a continued fraction expansion. For all other values it is computed using the relation I_x(a,b,x) = (1/a) x^a 2F1(a,1-b,a+1,x) / B(a,b)  File: gsl-ref.info, Node: Gegenbauer Functions, Next: Hermite Polynomials and Functions, Prev: Gamma and Beta Functions, Up: Special Functions 7.20 Gegenbauer Functions ========================= The Gegenbauer polynomials are defined in Abramowitz & Stegun, Chapter 22, where they are known as Ultraspherical polynomials. The functions described in this section are declared in the header file ‘gsl_sf_gegenbauer.h’. -- Function: double gsl_sf_gegenpoly_1 (double lambda, double x) -- Function: double gsl_sf_gegenpoly_2 (double lambda, double x) -- Function: double gsl_sf_gegenpoly_3 (double lambda, double x) -- Function: int gsl_sf_gegenpoly_1_e (double lambda, double x, gsl_sf_result *result) -- Function: int gsl_sf_gegenpoly_2_e (double lambda, double x, gsl_sf_result *result) -- Function: int gsl_sf_gegenpoly_3_e (double lambda, double x, gsl_sf_result *result) These functions evaluate the Gegenbauer polynomials C^{(\lambda)}_n(x) using explicit representations for n = 1, 2, 3. 
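For example, the error-handling form returns both the value and an error estimate for a low-order Gegenbauer polynomial; a minimal sketch with arbitrary arguments:

     #include <gsl/gsl_sf_gegenbauer.h>

     gsl_sf_result res;
     int status = gsl_sf_gegenpoly_2_e (1.0, 0.5, &res);  /* C_2^{(1)}(0.5) in res.val, error in res.err */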
-- Function: double gsl_sf_gegenpoly_n (int n, double lambda, double x) -- Function: int gsl_sf_gegenpoly_n_e (int n, double lambda, double x, gsl_sf_result *result) These functions evaluate the Gegenbauer polynomial C^{(\lambda)}_n(x) for a specific value of *note n: 24b, *note lambda: 24b, *note x: 24b. subject to \lambda > -1/2, n \ge 0. -- Function: int gsl_sf_gegenpoly_array (int nmax, double lambda, double x, double result_array[]) This function computes an array of Gegenbauer polynomials C^{(\lambda)}_n(x) for n = 0, 1, 2, \dots, nmax, subject to \lambda > -1/2, nmax \ge 0.  File: gsl-ref.info, Node: Hermite Polynomials and Functions, Next: Hypergeometric Functions, Prev: Gegenbauer Functions, Up: Special Functions 7.21 Hermite Polynomials and Functions ====================================== Hermite polynomials and functions are discussed in Abramowitz & Stegun, Chapter 22 and Szego, Gabor (1939, 1958, 1967), Orthogonal Polynomials, American Mathematical Society. The Hermite polynomials and functions are defined in the header file ‘gsl_sf_hermite.h’. * Menu: * Hermite Polynomials:: * Derivatives of Hermite Polynomials:: * Hermite Functions:: * Derivatives of Hermite Functions:: * Zeros of Hermite Polynomials and Hermite Functions::  File: gsl-ref.info, Node: Hermite Polynomials, Next: Derivatives of Hermite Polynomials, Up: Hermite Polynomials and Functions 7.21.1 Hermite Polynomials -------------------------- The Hermite polynomials exist in two variants: the physicist version H_n(x) and the probabilist version He_n(x). They are defined by the derivatives H_n(x) = (-1)^n e^{x^2} (d / dx)^n e^{-x^2} He_n(x) = (-1)^n e^{x^2/2} (d / dx)^n e^{-x^2/2} They are connected via H_n(x) = 2^{n/2} He_n(\sqrt{2} x) He_n(x) = 2^{-n/2} H_n(x / \sqrt{2}) and satisfy the ordinary differential equations H_n^{''}(x) - 2x H_n^{'}(x) + 2n H_n(x) = 0 He_n^{''}(x) - x He_n^{'}(x) + n He_n(x) = 0 -- Function: double gsl_sf_hermite (const int n, const double x) -- Function: int gsl_sf_hermite_e (const int n, const double x, gsl_sf_result *result) These routines evaluate the physicist Hermite polynomial H_n(x) of order *note n: 250. at position *note x: 250. If an overflow is detected, ‘GSL_EOVRFLW’ is returned without calling the error handler. -- Function: int gsl_sf_hermite_array (const int nmax, const double x, double *result_array) This routine evaluates all physicist Hermite polynomials H_n up to order *note nmax: 251. at position *note x: 251. The results are stored in *note result_array: 251. -- Function: double gsl_sf_hermite_series (const int n, const double x, const double *a) -- Function: int gsl_sf_hermite_series_e (const int n, const double x, const double *a, gsl_sf_result *result) These routines evaluate the series \sum_{j=0}^n a_j H_j(x) with H_j being the j-th physicist Hermite polynomial using the Clenshaw algorithm. -- Function: double gsl_sf_hermite_prob (const int n, const double x) -- Function: int gsl_sf_hermite_prob_e (const int n, const double x, gsl_sf_result *result) These routines evaluate the probabilist Hermite polynomial He_n(x) of order *note n: 255. at position *note x: 255. If an overflow is detected, ‘GSL_EOVRFLW’ is returned without calling the error handler. -- Function: int gsl_sf_hermite_prob_array (const int nmax, const double x, double *result_array) This routine evaluates all probabilist Hermite polynomials He_n(x) up to order *note nmax: 256. at position *note x: 256. The results are stored in *note result_array: 256. 
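When several orders are needed at the same point, the array routines avoid repeating the recurrence. A minimal sketch using gsl_sf_hermite_array() described above (the evaluation point is arbitrary):

     #include <gsl/gsl_sf_hermite.h>

     double H[6];  /* receives H_0(x), ..., H_5(x) */
     int status = gsl_sf_hermite_array (5, 0.75, H);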
-- Function: double gsl_sf_hermite_prob_series (const int n, const double x, const double *a) -- Function: int gsl_sf_hermite_prob_series_e (const int n, const double x, const double *a, gsl_sf_result *result) These routines evaluate the series \sum_{j=0}^n a_j He_j(x) with He_j being the j-th probabilist Hermite polynomial using the Clenshaw algorithm.  File: gsl-ref.info, Node: Derivatives of Hermite Polynomials, Next: Hermite Functions, Prev: Hermite Polynomials, Up: Hermite Polynomials and Functions 7.21.2 Derivatives of Hermite Polynomials ----------------------------------------- -- Function: double gsl_sf_hermite_deriv (const int m, const int n, const double x) -- Function: int gsl_sf_hermite_deriv_e (const int m, const int n, const double x, gsl_sf_result *result) These routines evaluate the *note m: 25b.-th derivative of the physicist Hermite polynomial H_n(x) of order *note n: 25b. at position *note x: 25b. -- Function: int gsl_sf_hermite_array_deriv (const int m, const int nmax, const double x, double *result_array) This routine evaluates the *note m: 25c.-th derivative of all physicist Hermite polynomials H_n(x) from orders 0, \dots, \text{nmax} at position *note x: 25c. The result d^m/dx^m H_n(x) is stored in ‘result_array[n]’. The output *note result_array: 25c. must have length at least ‘nmax + 1’. -- Function: int gsl_sf_hermite_deriv_array (const int mmax, const int n, const double x, double *result_array) This routine evaluates all derivative orders from 0, \dots, \text{mmax} of the physicist Hermite polynomial of order *note n: 25d, H_n, at position *note x: 25d. The result d^m/dx^m H_n(x) is stored in ‘result_array[m]’. The output *note result_array: 25d. must have length at least ‘mmax + 1’. -- Function: double gsl_sf_hermite_prob_deriv (const int m, const int n, const double x) -- Function: int gsl_sf_hermite_prob_deriv_e (const int m, const int n, const double x, gsl_sf_result *result) These routines evaluate the *note m: 25f.-th derivative of the probabilist Hermite polynomial He_n(x) of order *note n: 25f. at position *note x: 25f. -- Function: int gsl_sf_hermite_prob_array_deriv (const int m, const int nmax, const double x, double *result_array) This routine evaluates the *note m: 260.-th derivative of all probabilist Hermite polynomials He_n(x) from orders 0, \dots, \text{nmax} at position *note x: 260. The result d^m/dx^m He_n(x) is stored in ‘result_array[n]’. The output *note result_array: 260. must have length at least ‘nmax + 1’. -- Function: int gsl_sf_hermite_prob_deriv_array (const int mmax, const int n, const double x, double *result_array) This routine evaluates all derivative orders from 0, \dots, \text{mmax} of the probabilist Hermite polynomial of order *note n: 261, He_n, at position *note x: 261. The result d^m/dx^m He_n(x) is stored in ‘result_array[m]’. The output *note result_array: 261. must have length at least ‘mmax + 1’.  File: gsl-ref.info, Node: Hermite Functions, Next: Derivatives of Hermite Functions, Prev: Derivatives of Hermite Polynomials, Up: Hermite Polynomials and Functions 7.21.3 Hermite Functions ------------------------ The Hermite functions are defined by \psi_n(x) = ( 2^n n! \sqrt{\pi} )^{-1/2} e^{-x^2/2} H_n(x) and satisfy the Schrödinger equation for a quantum mechanical harmonic oscillator psi''_n(x) + (2n + 1 - x^2) psi_n(x) = 0 They are orthonormal, \int_{-\infty}^{\infty} \psi_m(x) \psi_n(x) dx = \delta_{mn} and form an orthonormal basis of L^2(\mathbb{R}). 
The Hermite functions are also eigenfunctions of the continuous Fourier transform. GSL offers two methods for evaluating the Hermite functions. The first uses the standard three-term recurrence relation which has O(n) complexity and is the most accurate. The second uses a Cauchy integral approach due to Bunck (2009) which has O(\sqrt{n}) complexity and therefore represents a significant speed improvement for large n, although it is slightly less accurate. -- Function: double gsl_sf_hermite_func (const int n, const double x) -- Function: int gsl_sf_hermite_func_e (const int n, const double x, gsl_sf_result *result) These routines evaluate the Hermite function \psi_n(x) of order *note n: 264. at position *note x: 264. using a three-term recurrence relation. The algorithm complexity is O(n). -- Function: double gsl_sf_hermite_func_fast (const int n, const double x) -- Function: int gsl_sf_hermite_func_fast_e (const int n, const double x, gsl_sf_result *result) These routines evaluate the Hermite function \psi_n(x) of order *note n: 266. at position *note x: 266. using the Cauchy integral algorithm due to Bunck (2009). The algorithm complexity is O(\sqrt{n}). -- Function: int gsl_sf_hermite_func_array (const int nmax, const double x, double *result_array) This routine evaluates all Hermite functions \psi_n(x) for orders n = 0, \dots, \textrm{nmax} at position *note x: 267, using the recurrence relation algorithm. The results are stored in *note result_array: 267. which has length at least ‘nmax + 1’. -- Function: double gsl_sf_hermite_func_series (const int n, const double x, const double *a) -- Function: int gsl_sf_hermite_func_series_e (const int n, const double x, const double *a, gsl_sf_result *result) These routines evaluate the series \sum_{j=0}^n a_j \psi_j(x) with \psi_j being the j-th Hermite function using the Clenshaw algorithm.  File: gsl-ref.info, Node: Derivatives of Hermite Functions, Next: Zeros of Hermite Polynomials and Hermite Functions, Prev: Hermite Functions, Up: Hermite Polynomials and Functions 7.21.4 Derivatives of Hermite Functions --------------------------------------- -- Function: double gsl_sf_hermite_func_der (const int m, const int n, const double x) -- Function: int gsl_sf_hermite_func_der_e (const int m, const int n, const double x, gsl_sf_result *result) These routines evaluate the *note m: 26c.-th derivative of the Hermite function \psi_n(x) of order *note n: 26c. at position *note x: 26c.  File: gsl-ref.info, Node: Zeros of Hermite Polynomials and Hermite Functions, Prev: Derivatives of Hermite Functions, Up: Hermite Polynomials and Functions 7.21.5 Zeros of Hermite Polynomials and Hermite Functions --------------------------------------------------------- These routines calculate the s-th zero of the Hermite polynomial/function of order n. Since the zeros are symmetrical around zero, only positive zeros are calculated, ordered from smallest to largest, starting from index 1. A zeroth zero exists only for odd polynomial orders; its value is always zero. -- Function: double gsl_sf_hermite_zero (const int n, const int s) -- Function: int gsl_sf_hermite_zero_e (const int n, const int s, gsl_sf_result *result) These routines evaluate the *note s: 26f.-th zero of the physicist Hermite polynomial H_n(x) of order *note n: 26f.
-- Function: double gsl_sf_hermite_prob_zero (const int n, const int s) -- Function: int gsl_sf_hermite_prob_zero_e (const int n, const int s, gsl_sf_result *result) These routines evaluate the *note s: 271.-th zero of the probabilist Hermite polynomial He_n(x) of order *note n: 271. -- Function: double gsl_sf_hermite_func_zero (const int n, const int s) -- Function: int gsl_sf_hermite_func_zero_e (const int n, const int s, gsl_sf_result *result) These routines evaluate the *note s: 273.-th zero of the Hermite function \psi_n(x) of order *note n: 273.  File: gsl-ref.info, Node: Hypergeometric Functions, Next: Laguerre Functions, Prev: Hermite Polynomials and Functions, Up: Special Functions 7.22 Hypergeometric Functions ============================= Hypergeometric functions are described in Abramowitz & Stegun, Chapters 13 and 15. These functions are declared in the header file ‘gsl_sf_hyperg.h’. -- Function: double gsl_sf_hyperg_0F1 (double c, double x) -- Function: int gsl_sf_hyperg_0F1_e (double c, double x, gsl_sf_result *result) These routines compute the hypergeometric function 0F1(c,x) -- Function: double gsl_sf_hyperg_1F1_int (int m, int n, double x) -- Function: int gsl_sf_hyperg_1F1_int_e (int m, int n, double x, gsl_sf_result *result) These routines compute the confluent hypergeometric function 1F1(m,n,x) = M(m,n,x) for integer parameters *note m: 278, *note n: 278. -- Function: double gsl_sf_hyperg_1F1 (double a, double b, double x) -- Function: int gsl_sf_hyperg_1F1_e (double a, double b, double x, gsl_sf_result *result) These routines compute the confluent hypergeometric function 1F1(a,b,x) = M(a,b,x) for general parameters *note a: 27a, *note b: 27a. -- Function: double gsl_sf_hyperg_U_int (int m, int n, double x) -- Function: int gsl_sf_hyperg_U_int_e (int m, int n, double x, gsl_sf_result *result) These routines compute the confluent hypergeometric function U(m,n,x) for integer parameters *note m: 27c, *note n: 27c. -- Function: int gsl_sf_hyperg_U_int_e10_e (int m, int n, double x, gsl_sf_result_e10 *result) This routine computes the confluent hypergeometric function U(m,n,x) for integer parameters *note m: 27d, *note n: 27d. using the *note gsl_sf_result_e10: ce. type to return a result with extended range. -- Function: double gsl_sf_hyperg_U (double a, double b, double x) -- Function: int gsl_sf_hyperg_U_e (double a, double b, double x, gsl_sf_result *result) These routines compute the confluent hypergeometric function U(a,b,x). -- Function: int gsl_sf_hyperg_U_e10_e (double a, double b, double x, gsl_sf_result_e10 *result) This routine computes the confluent hypergeometric function U(a,b,x) using the *note gsl_sf_result_e10: ce. type to return a result with extended range. -- Function: double gsl_sf_hyperg_2F1 (double a, double b, double c, double x) -- Function: int gsl_sf_hyperg_2F1_e (double a, double b, double c, double x, gsl_sf_result *result) These routines compute the Gauss hypergeometric function 2F1(a,b,c,x) = F(a,b,c,x) for |x| < 1. If the arguments (a,b,c,x) are too close to a singularity then the function can return the error code ‘GSL_EMAXITER’ when the series approximation converges too slowly. This occurs in the region of x = 1, c - a - b = m for integer m. 
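Because the series for 2F1 can converge slowly near a singularity, the error-handling form is convenient when the ‘GSL_EMAXITER’ condition should be detected rather than passed to the default error handler. A minimal sketch with arbitrary arguments:

     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_sf_hyperg.h>

     gsl_sf_result res;
     gsl_set_error_handler_off ();   /* report failures through return codes instead of aborting */
     int status = gsl_sf_hyperg_2F1_e (1.0, 2.0, 3.0, 0.5, &res);
     if (status == GSL_SUCCESS)
       {
         double value = res.val;     /* 2F1(1,2,3,0.5); estimated error in res.err */
       }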
-- Function: double gsl_sf_hyperg_2F1_conj (double aR, double aI, double c, double x) -- Function: int gsl_sf_hyperg_2F1_conj_e (double aR, double aI, double c, double x, gsl_sf_result *result) These routines compute the Gauss hypergeometric function 2F1(a_R + i a_I, aR - i aI, c, x) with complex parameters for |x| < 1. -- Function: double gsl_sf_hyperg_2F1_renorm (double a, double b, double c, double x) -- Function: int gsl_sf_hyperg_2F1_renorm_e (double a, double b, double c, double x, gsl_sf_result *result) These routines compute the renormalized Gauss hypergeometric function 2F1(a,b,c,x) / \Gamma(c) for |x| < 1. -- Function: double gsl_sf_hyperg_2F1_conj_renorm (double aR, double aI, double c, double x) -- Function: int gsl_sf_hyperg_2F1_conj_renorm_e (double aR, double aI, double c, double x, gsl_sf_result *result) These routines compute the renormalized Gauss hypergeometric function 2F1(a_R + i a_I, a_R - i a_I, c, x) / \Gamma(c) for |x| < 1. -- Function: double gsl_sf_hyperg_2F0 (double a, double b, double x) -- Function: int gsl_sf_hyperg_2F0_e (double a, double b, double x, gsl_sf_result *result) These routines compute the hypergeometric function 2F0(a,b,x) The series representation is a divergent hypergeometric series. However, for x < 0 we have 2F0(a,b,x) = (-1/x)^a U(a,1+a-b,-1/x)  File: gsl-ref.info, Node: Laguerre Functions, Next: Lambert W Functions, Prev: Hypergeometric Functions, Up: Special Functions 7.23 Laguerre Functions ======================= The generalized Laguerre polynomials, sometimes referred to as associated Laguerre polynomials, are defined in terms of confluent hypergeometric functions as L^a_n(x) = ((a+1)_n / n!) 1F1(-n,a+1,x) where (a)_n is the *note Pochhammer symbol: 22c. (rising factorial). They are related to the plain Laguerre polynomials L_n(x) by L^0_n(x) = L_n(x) and L^k_n(x) = (-1)^k (d^k/dx^k) L_{(n+k)}(x) For more information see Abramowitz & Stegun, Chapter 22. The functions described in this section are declared in the header file ‘gsl_sf_laguerre.h’. -- Function: double gsl_sf_laguerre_1 (double a, double x) -- Function: double gsl_sf_laguerre_2 (double a, double x) -- Function: double gsl_sf_laguerre_3 (double a, double x) -- Function: int gsl_sf_laguerre_1_e (double a, double x, gsl_sf_result *result) -- Function: int gsl_sf_laguerre_2_e (double a, double x, gsl_sf_result *result) -- Function: int gsl_sf_laguerre_3_e (double a, double x, gsl_sf_result *result) These routines evaluate the generalized Laguerre polynomials L^a_1(x), L^a_2(x), L^a_3(x) using explicit representations. -- Function: double gsl_sf_laguerre_n (const int n, const double a, const double x) -- Function: int gsl_sf_laguerre_n_e (int n, double a, double x, gsl_sf_result *result) These routines evaluate the generalized Laguerre polynomials L^a_n(x) for a > -1, n \ge 0.  File: gsl-ref.info, Node: Lambert W Functions, Next: Legendre Functions and Spherical Harmonics, Prev: Laguerre Functions, Up: Special Functions 7.24 Lambert W Functions ======================== Lambert’s W functions, W(x), are defined to be solutions of the equation W(x) \exp(W(x)) = x. This function has multiple branches for x < 0; however, it has only two real-valued branches. We define W_0(x) to be the principal branch, where W > -1 for x < 0, and W_{-1}(x) to be the other real branch, where W < -1 for x < 0. The Lambert functions are declared in the header file ‘gsl_sf_lambert.h’. 
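As a quick check of the defining equation W(x) \exp(W(x)) = x, the principal branch routine described below can be used (a minimal sketch):

     #include <math.h>
     #include <gsl/gsl_sf_lambert.h>

     double w = gsl_sf_lambert_W0 (1.0);  /* the omega constant, approximately 0.567143 */
     double x = w * exp (w);              /* recovers 1.0 to within rounding */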
-- Function: double gsl_sf_lambert_W0 (double x) -- Function: int gsl_sf_lambert_W0_e (double x, gsl_sf_result *result) These compute the principal branch of the Lambert W function, W_0(x). -- Function: double gsl_sf_lambert_Wm1 (double x) -- Function: int gsl_sf_lambert_Wm1_e (double x, gsl_sf_result *result) These compute the secondary real-valued branch of the Lambert W function, W_{-1}(x).  File: gsl-ref.info, Node: Legendre Functions and Spherical Harmonics, Next: Logarithm and Related Functions, Prev: Lambert W Functions, Up: Special Functions 7.25 Legendre Functions and Spherical Harmonics =============================================== The Legendre Functions and Legendre Polynomials are described in Abramowitz & Stegun, Chapter 8. These functions are declared in the header file ‘gsl_sf_legendre.h’. * Menu: * Legendre Polynomials:: * Associated Legendre Polynomials and Spherical Harmonics:: * Conical Functions:: * Radial Functions for Hyperbolic Space::  File: gsl-ref.info, Node: Legendre Polynomials, Next: Associated Legendre Polynomials and Spherical Harmonics, Up: Legendre Functions and Spherical Harmonics 7.25.1 Legendre Polynomials --------------------------- -- Function: double gsl_sf_legendre_P1 (double x) -- Function: double gsl_sf_legendre_P2 (double x) -- Function: double gsl_sf_legendre_P3 (double x) -- Function: int gsl_sf_legendre_P1_e (double x, gsl_sf_result *result) -- Function: int gsl_sf_legendre_P2_e (double x, gsl_sf_result *result) -- Function: int gsl_sf_legendre_P3_e (double x, gsl_sf_result *result) These functions evaluate the Legendre polynomials P_l(x) using explicit representations for l = 1, 2, 3. -- Function: double gsl_sf_legendre_Pl (int l, double x) -- Function: int gsl_sf_legendre_Pl_e (int l, double x, gsl_sf_result *result) These functions evaluate the Legendre polynomial P_l(x) for a specific value of *note l: 2a2, *note x: 2a2. subject to l \ge 0 and |x| \le 1. -- Function: int gsl_sf_legendre_Pl_array (int lmax, double x, double result_array[]) -- Function: int gsl_sf_legendre_Pl_deriv_array (int lmax, double x, double result_array[], double result_deriv_array[]) These functions compute arrays of Legendre polynomials P_l(x) and derivatives dP_l(x)/dx for l = 0, \dots, lmax and |x| \le 1. -- Function: double gsl_sf_legendre_Q0 (double x) -- Function: int gsl_sf_legendre_Q0_e (double x, gsl_sf_result *result) These routines compute the Legendre function Q_0(x) for x > -1 and x \ne 1. -- Function: double gsl_sf_legendre_Q1 (double x) -- Function: int gsl_sf_legendre_Q1_e (double x, gsl_sf_result *result) These routines compute the Legendre function Q_1(x) for x > -1 and x \ne 1. -- Function: double gsl_sf_legendre_Ql (int l, double x) -- Function: int gsl_sf_legendre_Ql_e (int l, double x, gsl_sf_result *result) These routines compute the Legendre function Q_l(x) for x > -1, x \ne 1 and l \ge 0.  File: gsl-ref.info, Node: Associated Legendre Polynomials and Spherical Harmonics, Next: Conical Functions, Prev: Legendre Polynomials, Up: Legendre Functions and Spherical Harmonics 7.25.2 Associated Legendre Polynomials and Spherical Harmonics -------------------------------------------------------------- The following functions compute the associated Legendre polynomials P_l^m(x) which are solutions of the differential equation (1 - x^2) d^2 P_l^m(x) / dx^2 - 2x d P_l^m(x) / dx + ( l(l+1) - m^2 / (1 - x^2) ) P_l^m(x) = 0 where the degree l and order m satisfy 0 \le l and 0 \le m \le l.
The functions P_l^m(x) grow combinatorially with l and can overflow for l larger than about 150. Alternatively, one may calculate normalized associated Legendre polynomials. There are a number of different normalization conventions, and these functions can be stably computed up to degree and order 2700. The following normalizations are provided: * Schmidt semi-normalization Schmidt semi-normalized associated Legendre polynomials are often used in the magnetics community and are defined as S_l^0(x) = P_l^0(x) S_l^m(x) = (-1)^m \sqrt((2(l-m)! / (l+m)!)) P_l^m(x), m > 0 The factor of (-1)^m is called the Condon-Shortley phase factor and can be excluded if desired by setting the parameter ‘csphase = 1’ in the functions below. * Spherical Harmonic Normalization The associated Legendre polynomials suitable for calculating spherical harmonics are defined as Y_l^m(x) = (-1)^m \sqrt((2l + 1) * (l-m)! / (4 \pi) / (l+m)!) P_l^m(x) where again the phase factor (-1)^m can be included or excluded if desired. * Full Normalization The fully normalized associated Legendre polynomials are defined as N_l^m(x) = (-1)^m \sqrt((l + 1/2) (l-m)! / (l+m)!) P_l^m(x) and have the property \int_{-1}^1 N_l^m(x)^2 dx = 1 The normalized associated Legendre routines below use a recurrence relation which is stable up to a degree and order of about 2700. Beyond this, the computed functions could suffer from underflow leading to incorrect results. Routines are provided to compute first and second derivatives dP_l^m(x)/dx and d^2 P_l^m(x)/dx^2 as well as their alternate versions d P_l^m(\cos{\theta})/d\theta and d^2 P_l^m(\cos{\theta})/d\theta^2. While there is a simple scaling relationship between the two forms, the derivatives involving \theta are heavily used in spherical harmonic expansions and so these routines are also provided. In the functions below, a parameter of type *note gsl_sf_legendre_t: 2ac. specifies the type of normalization to use. The possible values are -- Type: gsl_sf_legendre_t Value Description --------------------------------------------------------------------------------------------------------------------------- ‘GSL_SF_LEGENDRE_NONE’ The unnormalized associated Legendre polynomials P_l^m(x) ‘GSL_SF_LEGENDRE_SCHMIDT’ The Schmidt semi-normalized associated Legendre polynomials S_l^m(x) ‘GSL_SF_LEGENDRE_SPHARM’ The spherical harmonic associated Legendre polynomials Y_l^m(x) ‘GSL_SF_LEGENDRE_FULL’ The fully normalized associated Legendre polynomials N_l^m(x) -- Function: int gsl_sf_legendre_array (const gsl_sf_legendre_t norm, const size_t lmax, const double x, double result_array[]) -- Function: int gsl_sf_legendre_array_e (const gsl_sf_legendre_t norm, const size_t lmax, const double x, const double csphase, double result_array[]) These functions calculate all normalized associated Legendre polynomials for 0 \le l \le lmax and 0 \le m \le l for |x| \le 1. The *note norm: 2ae. parameter specifies which normalization is used. The normalized P_l^m(x) values are stored in *note result_array: 2ae, whose minimum size can be obtained from calling *note gsl_sf_legendre_array_n(): 2af. The array index of P_l^m(x) is obtained from calling ‘gsl_sf_legendre_array_index(l, m)’. To include or exclude the Condon-Shortley phase factor of (-1)^m, set the parameter *note csphase: 2ae. to either -1 or 1 respectively in the ‘_e’ function. This factor is excluded by default. 
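A minimal sketch of the workflow just described: query the required array size, compute all spherical-harmonic-normalized values at a point, and look up a particular element by its index (the degree and evaluation point are arbitrary):

     #include <stdlib.h>
     #include <gsl/gsl_sf_legendre.h>

     const size_t lmax = 4;
     const double x = 0.3;
     size_t n = gsl_sf_legendre_array_n (lmax);             /* minimum array length */
     double *Plm = malloc (n * sizeof (double));
     gsl_sf_legendre_array (GSL_SF_LEGENDRE_SPHARM, lmax, x, Plm);
     double Y32 = Plm[gsl_sf_legendre_array_index (3, 2)];  /* normalized P_3^2(x) */
     free (Plm);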
-- Function: int gsl_sf_legendre_deriv_array (const gsl_sf_legendre_t norm, const size_t lmax, const double x, double result_array[], double result_deriv_array[]) -- Function: int gsl_sf_legendre_deriv_array_e (const gsl_sf_legendre_t norm, const size_t lmax, const double x, const double csphase, double result_array[], double result_deriv_array[]) These functions calculate all normalized associated Legendre functions and their first derivatives up to degree *note lmax: 2b1. for |x| < 1. The parameter *note norm: 2b1. specifies the normalization used. The normalized P_l^m(x) values and their derivatives dP_l^m(x)/dx are stored in *note result_array: 2b1. and *note result_deriv_array: 2b1. respectively. To include or exclude the Condon-Shortley phase factor of (-1)^m, set the parameter *note csphase: 2b1. to either -1 or 1 respectively in the ‘_e’ function. This factor is excluded by default. -- Function: int gsl_sf_legendre_deriv_alt_array (const gsl_sf_legendre_t norm, const size_t lmax, const double x, double result_array[], double result_deriv_array[]) -- Function: int gsl_sf_legendre_deriv_alt_array_e (const gsl_sf_legendre_t norm, const size_t lmax, const double x, const double csphase, double result_array[], double result_deriv_array[]) These functions calculate all normalized associated Legendre functions and their (alternate) first derivatives up to degree *note lmax: 2b3. for |x| < 1. The normalized P_l^m(x) values and their derivatives dP_l^m(\cos{\theta})/d\theta are stored in *note result_array: 2b3. and *note result_deriv_array: 2b3. respectively. To include or exclude the Condon-Shortley phase factor of (-1)^m, set the parameter *note csphase: 2b3. to either -1 or 1 respectively in the ‘_e’ function. This factor is excluded by default. -- Function: int gsl_sf_legendre_deriv2_array (const gsl_sf_legendre_t norm, const size_t lmax, const double x, double result_array[], double result_deriv_array[], double result_deriv2_array[]) -- Function: int gsl_sf_legendre_deriv2_array_e (const gsl_sf_legendre_t norm, const size_t lmax, const double x, const double csphase, double result_array[], double result_deriv_array[], double result_deriv2_array[]) These functions calculate all normalized associated Legendre functions and their first and second derivatives up to degree *note lmax: 2b5. for |x| < 1. The parameter *note norm: 2b5. specifies the normalization used. The normalized P_l^m(x), their first derivatives dP_l^m(x)/dx, and their second derivatives d^2 P_l^m(x)/dx^2 are stored in *note result_array: 2b5, *note result_deriv_array: 2b5, and *note result_deriv2_array: 2b5. respectively. To include or exclude the Condon-Shortley phase factor of (-1)^m, set the parameter *note csphase: 2b5. to either -1 or 1 respectively in the ‘_e’ function. This factor is excluded by default. -- Function: int gsl_sf_legendre_deriv2_alt_array (const gsl_sf_legendre_t norm, const size_t lmax, const double x, double result_array[], double result_deriv_array[], double result_deriv2_array[]) -- Function: int gsl_sf_legendre_deriv2_alt_array_e (const gsl_sf_legendre_t norm, const size_t lmax, const double x, const double csphase, double result_array[], double result_deriv_array[], double result_deriv2_array[]) These functions calculate all normalized associated Legendre functions and their (alternate) first and second derivatives up to degree *note lmax: 2b7. for |x| < 1. The parameter *note norm: 2b7. specifies the normalization used. 
The normalized P_l^m(x), their first derivatives dP_l^m(\cos{\theta})/d\theta, and their second derivatives d^2 P_l^m(\cos{\theta})/d\theta^2 are stored in *note result_array: 2b7, *note result_deriv_array: 2b7, and *note result_deriv2_array: 2b7. respectively. To include or exclude the Condon-Shortley phase factor of (-1)^m, set the parameter *note csphase: 2b7. to either -1 or 1 respectively in the ‘_e’ function. This factor is excluded by default. -- Function: size_t gsl_sf_legendre_nlm (const size_t lmax) This function returns the total number of associated Legendre functions P_l^m(x) for a given *note lmax: 2b8. The number is ‘(lmax+1) * (lmax+2) / 2’. -- Function: size_t gsl_sf_legendre_array_n (const size_t lmax) This function returns the minimum array size for maximum degree *note lmax: 2af. needed for the array versions of the associated Legendre functions. Size is calculated as the total number of P_l^m(x) functions (see *note gsl_sf_legendre_nlm(): 2b8.), plus extra space for precomputing multiplicative factors used in the recurrence relations. -- Function: size_t gsl_sf_legendre_array_index (const size_t l, const size_t m) This function returns the index into ‘result_array’, ‘result_deriv_array’, or ‘result_deriv2_array’ corresponding to P_l^m(x), P_l^{'m}(x), or P_l^{''m}(x). The index is given by l(l+1)/2 + m. An inline version of this function is used if ‘HAVE_INLINE’ is defined. -- Function: double gsl_sf_legendre_Plm (int l, int m, double x) -- Function: int gsl_sf_legendre_Plm_e (int l, int m, double x, gsl_sf_result *result) These routines compute the associated Legendre polynomial P_l^m(x) for m \ge 0, l \ge m, and |x| \le 1. -- Function: double gsl_sf_legendre_sphPlm (int l, int m, double x) -- Function: int gsl_sf_legendre_sphPlm_e (int l, int m, double x, gsl_sf_result *result) These routines compute the normalized associated Legendre polynomial \sqrt{(2l+1)/(4\pi)} \sqrt{(l-m)!/(l+m)!} P_l^m(x) suitable for use in spherical harmonics. The parameters must satisfy m \ge 0, l \ge m, and |x| \le 1. These routines avoid the overflows that occur for the standard normalization of P_l^m(x). -- Function: int gsl_sf_legendre_Plm_array (int lmax, int m, double x, double result_array[]) -- Function: int gsl_sf_legendre_Plm_deriv_array (int lmax, int m, double x, double result_array[], double result_deriv_array[]) These functions are now deprecated and will be removed in a future release; see *note gsl_sf_legendre_array(): 2ad. and *note gsl_sf_legendre_deriv_array(): 2b0. -- Function: int gsl_sf_legendre_sphPlm_array (int lmax, int m, double x, double result_array[]) -- Function: int gsl_sf_legendre_sphPlm_deriv_array (int lmax, int m, double x, double result_array[], double result_deriv_array[]) These functions are now deprecated and will be removed in a future release; see *note gsl_sf_legendre_array(): 2ad. and *note gsl_sf_legendre_deriv_array(): 2b0. -- Function: int gsl_sf_legendre_array_size (const int lmax, const int m) This function is now deprecated and will be removed in a future release.  File: gsl-ref.info, Node: Conical Functions, Next: Radial Functions for Hyperbolic Space, Prev: Associated Legendre Polynomials and Spherical Harmonics, Up: Legendre Functions and Spherical Harmonics 7.25.3 Conical Functions ------------------------ The Conical Functions P^\mu_{-(1/2)+i\lambda}(x) and Q^\mu_{-(1/2)+i\lambda} are described in Abramowitz & Stegun, Section 8.12. 
-- Function: double gsl_sf_conicalP_half (double lambda, double x) -- Function: int gsl_sf_conicalP_half_e (double lambda, double x, gsl_sf_result *result) These routines compute the irregular Spherical Conical Function P^{1/2}_{-1/2 + i \lambda}(x) for x > -1. -- Function: double gsl_sf_conicalP_mhalf (double lambda, double x) -- Function: int gsl_sf_conicalP_mhalf_e (double lambda, double x, gsl_sf_result *result) These routines compute the regular Spherical Conical Function P^{-1/2}_{-1/2 + i \lambda}(x) for x > -1. -- Function: double gsl_sf_conicalP_0 (double lambda, double x) -- Function: int gsl_sf_conicalP_0_e (double lambda, double x, gsl_sf_result *result) These routines compute the conical function P^0_{-1/2 + i \lambda}(x) for x > -1. -- Function: double gsl_sf_conicalP_1 (double lambda, double x) -- Function: int gsl_sf_conicalP_1_e (double lambda, double x, gsl_sf_result *result) These routines compute the conical function P^1_{-1/2 + i \lambda}(x) for x > -1. -- Function: double gsl_sf_conicalP_sph_reg (int l, double lambda, double x) -- Function: int gsl_sf_conicalP_sph_reg_e (int l, double lambda, double x, gsl_sf_result *result) These routines compute the Regular Spherical Conical Function P^{-1/2-l}_{-1/2 + i \lambda}(x) for x > -1 and l \ge -1. -- Function: double gsl_sf_conicalP_cyl_reg (int m, double lambda, double x) -- Function: int gsl_sf_conicalP_cyl_reg_e (int m, double lambda, double x, gsl_sf_result *result) These routines compute the Regular Cylindrical Conical Function P^{-m}_{-1/2 + i \lambda}(x) for x > -1 and m \ge -1.  File: gsl-ref.info, Node: Radial Functions for Hyperbolic Space, Prev: Conical Functions, Up: Legendre Functions and Spherical Harmonics 7.25.4 Radial Functions for Hyperbolic Space -------------------------------------------- The following spherical functions are specializations of Legendre functions which give the regular eigenfunctions of the Laplacian on a 3-dimensional hyperbolic space H^3. Of particular interest is the flat limit, \lambda \to \infty, \eta \to 0, \lambda\eta fixed. -- Function: double gsl_sf_legendre_H3d_0 (double lambda, double eta) -- Function: int gsl_sf_legendre_H3d_0_e (double lambda, double eta, gsl_sf_result *result) These routines compute the zeroth radial eigenfunction of the Laplacian on the 3-dimensional hyperbolic space, L^{H3d}_0(\lambda,\eta) := {\sin(\lambda\eta) \over \lambda\sinh(\eta)} for \eta \ge 0. In the flat limit this takes the form L^{H3d}_0(\lambda,\eta) = j_0(\lambda\eta). -- Function: double gsl_sf_legendre_H3d_1 (double lambda, double eta) -- Function: int gsl_sf_legendre_H3d_1_e (double lambda, double eta, gsl_sf_result *result) These routines compute the first radial eigenfunction of the Laplacian on the 3-dimensional hyperbolic space, L^{H3d}_1(\lambda,\eta) := {1\over\sqrt{\lambda^2 + 1}} {\left(\sin(\lambda \eta)\over \lambda \sinh(\eta)\right)} \left(\coth(\eta) - \lambda \cot(\lambda\eta)\right) for \eta \ge 0 In the flat limit this takes the form L^{H3d}_1(\lambda,\eta) = j_1(\lambda\eta). -- Function: double gsl_sf_legendre_H3d (int l, double lambda, double eta) -- Function: int gsl_sf_legendre_H3d_e (int l, double lambda, double eta, gsl_sf_result *result) These routines compute the *note l: 2d6.-th radial eigenfunction of the Laplacian on the 3-dimensional hyperbolic space \eta \ge 0 and l \ge 0. In the flat limit this takes the form L^{H3d}_l(\lambda,\eta) = j_l(\lambda\eta). 
-- Function: int gsl_sf_legendre_H3d_array (int lmax, double lambda, double eta, double result_array[]) This function computes an array of radial eigenfunctions L^{H3d}_l( \lambda, \eta) for 0 \le l \le lmax.  File: gsl-ref.info, Node: Logarithm and Related Functions, Next: Mathieu Functions, Prev: Legendre Functions and Spherical Harmonics, Up: Special Functions 7.26 Logarithm and Related Functions ==================================== Information on the properties of the Logarithm function can be found in Abramowitz & Stegun, Chapter 4. The functions described in this section are declared in the header file ‘gsl_sf_log.h’. -- Function: double gsl_sf_log (double x) -- Function: int gsl_sf_log_e (double x, gsl_sf_result *result) These routines compute the logarithm of *note x: 2da, \log(x), for x > 0. -- Function: double gsl_sf_log_abs (double x) -- Function: int gsl_sf_log_abs_e (double x, gsl_sf_result *result) These routines compute the logarithm of the magnitude of *note x: 2dc, \log(|x|), for x \ne 0. -- Function: int gsl_sf_complex_log_e (double zr, double zi, gsl_sf_result *lnr, gsl_sf_result *theta) This routine computes the complex logarithm of z = z_r + i z_i. The results are returned as *note lnr: 2dd, *note theta: 2dd. such that \exp(lnr + i \theta) = z_r + i z_i, where \theta lies in the range [-\pi,\pi]. -- Function: double gsl_sf_log_1plusx (double x) -- Function: int gsl_sf_log_1plusx_e (double x, gsl_sf_result *result) These routines compute \log(1 + x) for x > -1 using an algorithm that is accurate for small *note x: 2df. -- Function: double gsl_sf_log_1plusx_mx (double x) -- Function: int gsl_sf_log_1plusx_mx_e (double x, gsl_sf_result *result) These routines compute \log(1 + x) - x for x > -1 using an algorithm that is accurate for small *note x: 2e1.  File: gsl-ref.info, Node: Mathieu Functions, Next: Power Function, Prev: Logarithm and Related Functions, Up: Special Functions 7.27 Mathieu Functions ====================== The routines described in this section compute the angular and radial Mathieu functions, and their characteristic values. Mathieu functions are the solutions of the following two differential equations: d^2y/dv^2 + (a - 2q\cos 2v)y = 0 d^2f/du^2 - (a - 2q\cosh 2u)f = 0 The angular Mathieu functions ce_r(x,q), se_r(x,q) are the even and odd periodic solutions of the first equation, which is known as Mathieu’s equation. These exist only for the discrete sequence of characteristic values a = a_r(q) (even-periodic) and a = b_r(q) (odd-periodic). The radial Mathieu functions Mc^{(j)}_{r}(z,q) and Ms^{(j)}_{r}(z,q) are the solutions of the second equation, which is referred to as Mathieu’s modified equation. The radial Mathieu functions of the first, second, third and fourth kind are denoted by the parameter j, which takes the value 1, 2, 3 or 4. For more information on the Mathieu functions, see Abramowitz and Stegun, Chapter 20. These functions are defined in the header file ‘gsl_sf_mathieu.h’. * Menu: * Mathieu Function Workspace:: * Mathieu Function Characteristic Values:: * Angular Mathieu Functions:: * Radial Mathieu Functions::  File: gsl-ref.info, Node: Mathieu Function Workspace, Next: Mathieu Function Characteristic Values, Up: Mathieu Functions 7.27.1 Mathieu Function Workspace --------------------------------- The Mathieu functions can be computed for a single order or for multiple orders, using array-based routines. The array-based routines require a preallocated workspace. 
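A minimal sketch of the array-based usage described below: allocate a workspace, compute a range of angular functions, and free the workspace again (the order limit, q and x values are arbitrary):

     #include <gsl/gsl_sf_mathieu.h>

     gsl_sf_mathieu_workspace *work = gsl_sf_mathieu_alloc (10, 5.0);
     double ce[6];                                        /* receives ce_0(q,x), ..., ce_5(q,x) */
     gsl_sf_mathieu_ce_array (0, 5, 2.0, 0.5, work, ce);
     gsl_sf_mathieu_free (work);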
-- Type: gsl_sf_mathieu_workspace Workspace required for array-based routines -- Function: *note gsl_sf_mathieu_workspace: 2e4. *gsl_sf_mathieu_alloc (size_t n, double qmax) This function returns a workspace for the array versions of the Mathieu routines. The arguments n and *note qmax: 2e5. specify the maximum order and q-value of Mathieu functions which can be computed with this workspace. -- Function: void gsl_sf_mathieu_free (gsl_sf_mathieu_workspace *work) This function frees the workspace *note work: 2e6.  File: gsl-ref.info, Node: Mathieu Function Characteristic Values, Next: Angular Mathieu Functions, Prev: Mathieu Function Workspace, Up: Mathieu Functions 7.27.2 Mathieu Function Characteristic Values --------------------------------------------- -- Function: int gsl_sf_mathieu_a (int n, double q) -- Function: int gsl_sf_mathieu_a_e (int n, double q, gsl_sf_result *result) -- Function: int gsl_sf_mathieu_b (int n, double q) -- Function: int gsl_sf_mathieu_b_e (int n, double q, gsl_sf_result *result) These routines compute the characteristic values a_n(q), b_n(q) of the Mathieu functions ce_n(q,x) and se_n(q,x), respectively. -- Function: int gsl_sf_mathieu_a_array (int order_min, int order_max, double q, gsl_sf_mathieu_workspace *work, double result_array[]) -- Function: int gsl_sf_mathieu_b_array (int order_min, int order_max, double q, gsl_sf_mathieu_workspace *work, double result_array[]) These routines compute a series of Mathieu characteristic values a_n(q), b_n(q) for n from *note order_min: 2ed. to *note order_max: 2ed. inclusive, storing the results in the array *note result_array: 2ed.  File: gsl-ref.info, Node: Angular Mathieu Functions, Next: Radial Mathieu Functions, Prev: Mathieu Function Characteristic Values, Up: Mathieu Functions 7.27.3 Angular Mathieu Functions -------------------------------- -- Function: int gsl_sf_mathieu_ce (int n, double q, double x) -- Function: int gsl_sf_mathieu_ce_e (int n, double q, double x, gsl_sf_result *result) -- Function: int gsl_sf_mathieu_se (int n, double q, double x) -- Function: int gsl_sf_mathieu_se_e (int n, double q, double x, gsl_sf_result *result) These routines compute the angular Mathieu functions ce_n(q,x) and se_n(q,x), respectively. -- Function: int gsl_sf_mathieu_ce_array (int nmin, int nmax, double q, double x, gsl_sf_mathieu_workspace *work, double result_array[]) -- Function: int gsl_sf_mathieu_se_array (int nmin, int nmax, double q, double x, gsl_sf_mathieu_workspace *work, double result_array[]) These routines compute a series of the angular Mathieu functions ce_n(q,x) and se_n(q,x) of order n from *note nmin: 2f4. to *note nmax: 2f4. inclusive, storing the results in the array *note result_array: 2f4.  File: gsl-ref.info, Node: Radial Mathieu Functions, Prev: Angular Mathieu Functions, Up: Mathieu Functions 7.27.4 Radial Mathieu Functions ------------------------------- -- Function: int gsl_sf_mathieu_Mc (int j, int n, double q, double x) -- Function: int gsl_sf_mathieu_Mc_e (int j, int n, double q, double x, gsl_sf_result *result) -- Function: int gsl_sf_mathieu_Ms (int j, int n, double q, double x) -- Function: int gsl_sf_mathieu_Ms_e (int j, int n, double q, double x, gsl_sf_result *result) These routines compute the radial *note j: 2f9.-th kind Mathieu functions Mc_n^{(j)}(q,x) and Ms_n^{(j)}(q,x) of order *note n: 2f9. The allowed values of *note j: 2f9. are 1 and 2. 
The functions for j = 3,4 can be computed as M_n^{(3)} = M_n^{(1)} + iM_n^{(2)} and M_n^{(4)} = M_n^{(1)} - iM_n^{(2)}, where M_n^{(j)} = Mc_n^{(j)} or Ms_n^{(j)}. -- Function: int gsl_sf_mathieu_Mc_array (int j, int nmin, int nmax, double q, double x, gsl_sf_mathieu_workspace *work, double result_array[]) -- Function: int gsl_sf_mathieu_Ms_array (int j, int nmin, int nmax, double q, double x, gsl_sf_mathieu_workspace *work, double result_array[]) These routines compute a series of the radial Mathieu functions of kind *note j: 2fb, with order from *note nmin: 2fb. to *note nmax: 2fb. inclusive, storing the results in the array *note result_array: 2fb.  File: gsl-ref.info, Node: Power Function, Next: Psi Digamma Function, Prev: Mathieu Functions, Up: Special Functions 7.28 Power Function =================== The following functions are equivalent to the function *note gsl_pow_int(): 4a. with an error estimate. These functions are declared in the header file ‘gsl_sf_pow_int.h’. -- Function: double gsl_sf_pow_int (double x, int n) -- Function: int gsl_sf_pow_int_e (double x, int n, gsl_sf_result *result) These routines compute the power x^n for integer *note n: 4c. The power is computed using the minimum number of multiplications. For example, x^8 is computed as ((x^2)^2)^2, requiring only 3 multiplications. For reasons of efficiency, these functions do not check for overflow or underflow conditions. The following is a simple example: #include <gsl/gsl_sf_pow_int.h> /* compute 3.0**12 */ double y = gsl_sf_pow_int(3.0, 12);  File: gsl-ref.info, Node: Psi Digamma Function, Next: Synchrotron Functions, Prev: Power Function, Up: Special Functions 7.29 Psi (Digamma) Function =========================== The polygamma functions of order n are defined by \psi^{(n)}(x) = (d/dx)^n \psi(x) = (d/dx)^{n+1} \log(\Gamma(x)) where \psi(x) = \Gamma'(x)/\Gamma(x) is known as the digamma function. These functions are declared in the header file ‘gsl_sf_psi.h’. * Menu: * Digamma Function:: * Trigamma Function:: * Polygamma Function::  File: gsl-ref.info, Node: Digamma Function, Next: Trigamma Function, Up: Psi Digamma Function 7.29.1 Digamma Function ----------------------- -- Function: double gsl_sf_psi_int (int n) -- Function: int gsl_sf_psi_int_e (int n, gsl_sf_result *result) These routines compute the digamma function \psi(n) for positive integer *note n: 301. The digamma function is also called the Psi function. -- Function: double gsl_sf_psi (double x) -- Function: int gsl_sf_psi_e (double x, gsl_sf_result *result) These routines compute the digamma function \psi(x) for general *note x: 303, x \ne 0. -- Function: double gsl_sf_psi_1piy (double y) -- Function: int gsl_sf_psi_1piy_e (double y, gsl_sf_result *result) These routines compute the real part of the digamma function on the line 1 + i y, \Re[\psi(1 + i y)].  File: gsl-ref.info, Node: Trigamma Function, Next: Polygamma Function, Prev: Digamma Function, Up: Psi Digamma Function 7.29.2 Trigamma Function ------------------------ -- Function: double gsl_sf_psi_1_int (int n) -- Function: int gsl_sf_psi_1_int_e (int n, gsl_sf_result *result) These routines compute the Trigamma function \psi'(n) for positive integer n. -- Function: double gsl_sf_psi_1 (double x) -- Function: int gsl_sf_psi_1_e (double x, gsl_sf_result *result) These routines compute the Trigamma function \psi'(x) for general *note x: 30a.
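For instance, the trigamma value \psi'(1) equals \pi^2/6, which gives a simple consistency check on the routines above (a minimal sketch; ‘M_PI’ is provided by ‘gsl_math.h’):

     #include <gsl/gsl_math.h>
     #include <gsl/gsl_sf_psi.h>

     double t   = gsl_sf_psi_1_int (1);   /* psi'(1) = pi^2/6, approximately 1.6449 */
     double ref = M_PI * M_PI / 6.0;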
File: gsl-ref.info, Node: Polygamma Function, Prev: Trigamma Function, Up: Psi Digamma Function 7.29.3 Polygamma Function ------------------------- -- Function: double gsl_sf_psi_n (int n, double x) -- Function: int gsl_sf_psi_n_e (int n, double x, gsl_sf_result *result) These routines compute the polygamma function \psi^{(n)}(x) for n \ge 0, x > 0.  File: gsl-ref.info, Node: Synchrotron Functions, Next: Transport Functions, Prev: Psi Digamma Function, Up: Special Functions 7.30 Synchrotron Functions ========================== The functions described in this section are declared in the header file ‘gsl_sf_synchrotron.h’. -- Function: double gsl_sf_synchrotron_1 (double x) -- Function: int gsl_sf_synchrotron_1_e (double x, gsl_sf_result *result) These routines compute the first synchrotron function x \int_x^\infty dt K_{5/3}(t) for x \ge 0. -- Function: double gsl_sf_synchrotron_2 (double x) -- Function: int gsl_sf_synchrotron_2_e (double x, gsl_sf_result *result) These routines compute the second synchrotron function x K_{2/3}(x) for x \ge 0.  File: gsl-ref.info, Node: Transport Functions, Next: Trigonometric Functions, Prev: Synchrotron Functions, Up: Special Functions 7.31 Transport Functions ======================== The transport functions J(n,x) are defined by the integral representations J(n,x) = \int_0^x t^n e^t /(e^t - 1)^2 dt They are declared in the header file ‘gsl_sf_transport.h’. -- Function: double gsl_sf_transport_2 (double x) -- Function: int gsl_sf_transport_2_e (double x, gsl_sf_result *result) These routines compute the transport function J(2,x). -- Function: double gsl_sf_transport_3 (double x) -- Function: int gsl_sf_transport_3_e (double x, gsl_sf_result *result) These routines compute the transport function J(3,x). -- Function: double gsl_sf_transport_4 (double x) -- Function: int gsl_sf_transport_4_e (double x, gsl_sf_result *result) These routines compute the transport function J(4,x). -- Function: double gsl_sf_transport_5 (double x) -- Function: int gsl_sf_transport_5_e (double x, gsl_sf_result *result) These routines compute the transport function J(5,x).  File: gsl-ref.info, Node: Trigonometric Functions, Next: Zeta Functions, Prev: Transport Functions, Up: Special Functions 7.32 Trigonometric Functions ============================ The library includes its own trigonometric functions in order to provide consistency across platforms and reliable error estimates. These functions are declared in the header file ‘gsl_sf_trig.h’. * Menu: * Circular Trigonometric Functions:: * Trigonometric Functions for Complex Arguments:: * Hyperbolic Trigonometric Functions:: * Conversion Functions:: * Restriction Functions:: * Trigonometric Functions With Error Estimates::  File: gsl-ref.info, Node: Circular Trigonometric Functions, Next: Trigonometric Functions for Complex Arguments, Up: Trigonometric Functions 7.32.1 Circular Trigonometric Functions --------------------------------------- -- Function: double gsl_sf_sin (double x) -- Function: int gsl_sf_sin_e (double x, gsl_sf_result *result) These routines compute the sine function \sin(x). -- Function: double gsl_sf_cos (double x) -- Function: int gsl_sf_cos_e (double x, gsl_sf_result *result) These routines compute the cosine function \cos(x). -- Function: double gsl_sf_hypot (double x, double y) -- Function: int gsl_sf_hypot_e (double x, double y, gsl_sf_result *result) These routines compute the hypotenuse function \sqrt{x^2 + y^2} avoiding overflow and underflow. 
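The error-handling forms return an error estimate alongside the value, which is the main reason these versions of the standard trigonometric functions exist. A minimal sketch with an arbitrary argument:

     #include <gsl/gsl_sf_trig.h>

     gsl_sf_result s;
     gsl_sf_sin_e (1.0e6, &s);   /* s.val holds sin(1e6), s.err an estimate of the rounding error */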
-- Function: double gsl_sf_sinc (double x) -- Function: int gsl_sf_sinc_e (double x, gsl_sf_result *result) These routines compute \sinc(x) = \sin(\pi x) / (\pi x) for any value of *note x: 325.  File: gsl-ref.info, Node: Trigonometric Functions for Complex Arguments, Next: Hyperbolic Trigonometric Functions, Prev: Circular Trigonometric Functions, Up: Trigonometric Functions 7.32.2 Trigonometric Functions for Complex Arguments ---------------------------------------------------- -- Function: int gsl_sf_complex_sin_e (double zr, double zi, gsl_sf_result *szr, gsl_sf_result *szi) This function computes the complex sine, \sin(z_r + i z_i) storing the real and imaginary parts in *note szr: 327, *note szi: 327. -- Function: int gsl_sf_complex_cos_e (double zr, double zi, gsl_sf_result *czr, gsl_sf_result *czi) This function computes the complex cosine, \cos(z_r + i z_i) storing the real and imaginary parts in *note czr: 328, *note czi: 328. -- Function: int gsl_sf_complex_logsin_e (double zr, double zi, gsl_sf_result *lszr, gsl_sf_result *lszi) This function computes the logarithm of the complex sine, \log(\sin(z_r + i z_i)) storing the real and imaginary parts in *note lszr: 329, *note lszi: 329.  File: gsl-ref.info, Node: Hyperbolic Trigonometric Functions, Next: Conversion Functions, Prev: Trigonometric Functions for Complex Arguments, Up: Trigonometric Functions 7.32.3 Hyperbolic Trigonometric Functions ----------------------------------------- -- Function: double gsl_sf_lnsinh (double x) -- Function: int gsl_sf_lnsinh_e (double x, gsl_sf_result *result) These routines compute \log(\sinh(x)) for x > 0. -- Function: double gsl_sf_lncosh (double x) -- Function: int gsl_sf_lncosh_e (double x, gsl_sf_result *result) These routines compute \log(\cosh(x)) for any *note x: 32e.  File: gsl-ref.info, Node: Conversion Functions, Next: Restriction Functions, Prev: Hyperbolic Trigonometric Functions, Up: Trigonometric Functions 7.32.4 Conversion Functions --------------------------- -- Function: int gsl_sf_polar_to_rect (double r, double theta, gsl_sf_result *x, gsl_sf_result *y) This function converts the polar coordinates (*note r: 330, *note theta: 330.) to rectilinear coordinates (*note x: 330, *note y: 330.), x = r\cos(\theta), y = r\sin(\theta). -- Function: int gsl_sf_rect_to_polar (double x, double y, gsl_sf_result *r, gsl_sf_result *theta) This function converts the rectilinear coordinates (*note x: 331, *note y: 331.) to polar coordinates (*note r: 331, *note theta: 331.), such that x = r\cos(\theta), y = r\sin(\theta). The argument *note theta: 331. lies in the range [-\pi, \pi].  File: gsl-ref.info, Node: Restriction Functions, Next: Trigonometric Functions With Error Estimates, Prev: Conversion Functions, Up: Trigonometric Functions 7.32.5 Restriction Functions ---------------------------- -- Function: double gsl_sf_angle_restrict_symm (double theta) -- Function: int gsl_sf_angle_restrict_symm_e (double *theta) These routines force the angle *note theta: 334. to lie in the range (-\pi,\pi]. Note that the mathematical value of \pi is slightly greater than ‘M_PI’, so the machine numbers ‘M_PI’ and ‘-M_PI’ are included in the range. -- Function: double gsl_sf_angle_restrict_pos (double theta) -- Function: int gsl_sf_angle_restrict_pos_e (double *theta) These routines force the angle *note theta: 336. to lie in the range [0, 2\pi). Note that the mathematical value of 2\pi is slightly greater than ‘2*M_PI’, so the machine number ‘2*M_PI’ is included in the range.  
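The following short sketch (illustrative only, assuming just the routines described above) reduces an angle to the range (-\pi,\pi] with gsl_sf_angle_restrict_symm() and converts the point (1,1) to polar coordinates with gsl_sf_rect_to_polar():

     #include <stdio.h>
     #include <gsl/gsl_math.h>    /* for M_PI */
     #include <gsl/gsl_sf_trig.h>

     int
     main (void)
     {
       gsl_sf_result r, theta;

       /* reduce an angle of 7*pi/2 to the range (-pi, pi]; expect -pi/2 */
       double reduced = gsl_sf_angle_restrict_symm (7.0 * M_PI / 2.0);

       /* convert the point (1, 1) to polar coordinates */
       gsl_sf_rect_to_polar (1.0, 1.0, &r, &theta);

       printf ("restricted angle = % .10f\n", reduced);
       printf ("r = %.10f, theta = %.10f\n", r.val, theta.val);

       return 0;
     }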
File: gsl-ref.info, Node: Trigonometric Functions With Error Estimates, Prev: Restriction Functions, Up: Trigonometric Functions 7.32.6 Trigonometric Functions With Error Estimates --------------------------------------------------- -- Function: int gsl_sf_sin_err_e (double x, double dx, gsl_sf_result *result) This routine computes the sine of an angle *note x: 338. with an associated absolute error *note dx: 338, \sin(x \pm dx). Note that this function is provided in the error-handling form only since its purpose is to compute the propagated error. -- Function: int gsl_sf_cos_err_e (double x, double dx, gsl_sf_result *result) This routine computes the cosine of an angle *note x: 339. with an associated absolute error *note dx: 339, \cos(x \pm dx). Note that this function is provided in the error-handling form only since its purpose is to compute the propagated error.  File: gsl-ref.info, Node: Zeta Functions, Next: Examples<3>, Prev: Trigonometric Functions, Up: Special Functions 7.33 Zeta Functions =================== The Riemann zeta function is defined in Abramowitz & Stegun, Section 23.2. The functions described in this section are declared in the header file ‘gsl_sf_zeta.h’. * Menu: * Riemann Zeta Function:: * Riemann Zeta Function Minus One:: * Hurwitz Zeta Function:: * Eta Function::  File: gsl-ref.info, Node: Riemann Zeta Function, Next: Riemann Zeta Function Minus One, Up: Zeta Functions 7.33.1 Riemann Zeta Function ---------------------------- The Riemann zeta function is defined by the infinite sum \zeta(s) = \sum_{k=1}^\infty k^{-s} -- Function: double gsl_sf_zeta_int (int n) -- Function: int gsl_sf_zeta_int_e (int n, gsl_sf_result *result) These routines compute the Riemann zeta function \zeta(n) for integer *note n: 33d, n \ne 1. -- Function: double gsl_sf_zeta (double s) -- Function: int gsl_sf_zeta_e (double s, gsl_sf_result *result) These routines compute the Riemann zeta function \zeta(s) for arbitrary *note s: 33f, s \ne 1.  File: gsl-ref.info, Node: Riemann Zeta Function Minus One, Next: Hurwitz Zeta Function, Prev: Riemann Zeta Function, Up: Zeta Functions 7.33.2 Riemann Zeta Function Minus One -------------------------------------- For large positive argument, the Riemann zeta function approaches one. In this region the fractional part is interesting, and therefore we need a function to evaluate it explicitly. -- Function: double gsl_sf_zetam1_int (int n) -- Function: int gsl_sf_zetam1_int_e (int n, gsl_sf_result *result) These routines compute \zeta(n) - 1 for integer *note n: 342, n \ne 1. -- Function: double gsl_sf_zetam1 (double s) -- Function: int gsl_sf_zetam1_e (double s, gsl_sf_result *result) These routines compute \zeta(s) - 1 for arbitrary *note s: 344, s \ne 1.  File: gsl-ref.info, Node: Hurwitz Zeta Function, Next: Eta Function, Prev: Riemann Zeta Function Minus One, Up: Zeta Functions 7.33.3 Hurwitz Zeta Function ---------------------------- The Hurwitz zeta function is defined by \zeta(s,q) = \sum_0^\infty (k+q)^{-s} -- Function: double gsl_sf_hzeta (double s, double q) -- Function: int gsl_sf_hzeta_e (double s, double q, gsl_sf_result *result) These routines compute the Hurwitz zeta function \zeta(s,q) for s > 1, q > 0.  
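A short sketch (illustrative only, assuming the routines described above) evaluates \zeta(2) = \pi^2/6, the equivalent Hurwitz value \zeta(2,1), and the small fractional part \zeta(30) - 1, which is best obtained from gsl_sf_zetam1():

     #include <stdio.h>
     #include <gsl/gsl_math.h>    /* for M_PI */
     #include <gsl/gsl_sf_zeta.h>

     int
     main (void)
     {
       /* zeta(2) = pi^2/6; the Hurwitz function reduces to it for q = 1 */
       double z  = gsl_sf_zeta (2.0);
       double hz = gsl_sf_hzeta (2.0, 1.0);

       /* for large s the fractional part is better computed directly */
       double frac = gsl_sf_zetam1 (30.0);

       printf ("zeta(2)      = %.15f (pi^2/6 = %.15f)\n", z, M_PI * M_PI / 6.0);
       printf ("hzeta(2,1)   = %.15f\n", hz);
       printf ("zeta(30) - 1 = %.18e\n", frac);

       return 0;
     }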
File: gsl-ref.info, Node: Eta Function, Prev: Hurwitz Zeta Function, Up: Zeta Functions

7.33.4 Eta Function
-------------------

The eta function is defined by

     \eta(s) = (1-2^{1-s}) \zeta(s)

 -- Function: double gsl_sf_eta_int (int n)
 -- Function: int gsl_sf_eta_int_e (int n, gsl_sf_result *result)

     These routines compute the eta function \eta(n) for integer *note n: 34a.

 -- Function: double gsl_sf_eta (double s)
 -- Function: int gsl_sf_eta_e (double s, gsl_sf_result *result)

     These routines compute the eta function \eta(s) for arbitrary *note s: 34c.

File: gsl-ref.info, Node: Examples<3>, Next: References and Further Reading<3>, Prev: Zeta Functions, Up: Special Functions

7.34 Examples
=============

The following example demonstrates the use of the error handling form of the special functions, in this case to compute the Bessel function J_0(5.0),

     #include <stdio.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_sf_bessel.h>

     int
     main (void)
     {
       double x = 5.0;
       gsl_sf_result result;

       double expected = -0.17759677131433830434739701;

       int status = gsl_sf_bessel_J0_e (x, &result);

       printf ("status = %s\n", gsl_strerror(status));
       printf ("J0(5.0) = %.18f\n"
               "      +/- % .18f\n", result.val, result.err);
       printf ("exact = %.18f\n", expected);

       return status;
     }

Here are the results of running the program,

     status = success
     J0(5.0) = -0.177596771314338264
           +/-  0.000000000000000193
     exact = -0.177596771314338292

The next program computes the same quantity using the natural form of the function. In this case the error term ‘result.err’ and return status are not accessible.

     #include <stdio.h>
     #include <gsl/gsl_sf_bessel.h>

     int
     main (void)
     {
       double x = 5.0;
       double expected = -0.17759677131433830434739701;

       double y = gsl_sf_bessel_J0 (x);

       printf ("J0(5.0) = %.18f\n", y);
       printf ("exact = %.18f\n", expected);

       return 0;
     }

The results of the function are the same,

     J0(5.0) = -0.177596771314338264
     exact = -0.177596771314338292

File: gsl-ref.info, Node: References and Further Reading<3>, Prev: Examples<3>, Up: Special Functions

7.35 References and Further Reading
===================================

The library follows the conventions of the following book where possible,

   * Handbook of Mathematical Functions, edited by Abramowitz & Stegun, Dover, ISBN 0486612724.

The following papers contain information on the algorithms used to compute the special functions,

   * Allan J. MacLeod, MISCFUN: A software package to compute uncommon special functions, ACM Trans. Math. Soft., Vol. 22, 1996, 288–301.

   * Bunck, B. F., A fast algorithm for evaluation of normalized Hermite functions, BIT Numer. Math., 49: 281–295, 2009.

   * G.N. Watson, A Treatise on the Theory of Bessel Functions, 2nd Edition (Cambridge University Press, 1944).

   * G. Nemeth, Mathematical Approximations of Special Functions, Nova Science Publishers, ISBN 1-56072-052-2.

   * B.C. Carlson, Special Functions of Applied Mathematics (1977).

   * N. M. Temme, Special Functions: An Introduction to the Classical Functions of Mathematical Physics (1996), ISBN 978-0471113133.

   * W.J. Thompson, Atlas for Computing Mathematical Functions, John Wiley & Sons, New York (1997).

   * Y.Y. Luke, Algorithms for the Computation of Mathematical Functions, Academic Press, New York (1977).

   * S. A. Holmes and W. E. Featherstone, A unified approach to the Clenshaw summation and the recursive computation of very high degree and order normalised associated Legendre functions, Journal of Geodesy, 76, pg. 279–299, 2002.
File: gsl-ref.info, Node: Vectors and Matrices, Next: Permutations, Prev: Special Functions, Up: Top

8 Vectors and Matrices
**********************

The functions described in this chapter provide a simple vector and matrix interface to ordinary C arrays. The memory management of these arrays is implemented using a single underlying type, known as a block. By writing your functions in terms of vectors and matrices you can pass a single structure containing both data and dimensions as an argument without needing additional function parameters. The structures are compatible with the vector and matrix formats used by BLAS routines.

* Menu:

* Data types::
* Blocks::
* Vectors::
* Matrices::

File: gsl-ref.info, Node: Data types, Next: Blocks, Up: Vectors and Matrices

8.1 Data types
==============

All the functions are available for each of the standard data-types. The versions for ‘double’ have the prefix ‘gsl_block’, ‘gsl_vector’ and ‘gsl_matrix’. Similarly the versions for single-precision ‘float’ arrays have the prefix ‘gsl_block_float’, ‘gsl_vector_float’ and ‘gsl_matrix_float’. The full list of available types is given below,

Prefix                           Type
-------------------------------------------------------------
gsl_block                        double
gsl_block_float                  float
gsl_block_long_double            long double
gsl_block_int                    int
gsl_block_uint                   unsigned int
gsl_block_long                   long
gsl_block_ulong                  unsigned long
gsl_block_short                  short
gsl_block_ushort                 unsigned short
gsl_block_char                   char
gsl_block_uchar                  unsigned char
gsl_block_complex                complex double
gsl_block_complex_float          complex float
gsl_block_complex_long_double    complex long double

Corresponding types exist for the ‘gsl_vector’ and ‘gsl_matrix’ functions.

File: gsl-ref.info, Node: Blocks, Next: Vectors, Prev: Data types, Up: Vectors and Matrices

8.2 Blocks
==========

For consistency all memory is allocated through a ‘gsl_block’ structure. The structure contains two components, the size of an area of memory and a pointer to the memory. The ‘gsl_block’ structure looks like this,

 -- Type: gsl_block

          typedef struct
          {
            size_t size;
            double * data;
          } gsl_block;

Vectors and matrices are made by `slicing' an underlying block. A slice is a set of elements formed from an initial offset and a combination of indices and step-sizes. In the case of a matrix the step-size for the column index represents the row-length. The step-size for a vector is known as the `stride'.

The functions for allocating and deallocating blocks are defined in ‘gsl_block.h’.

* Menu:

* Block allocation::
* Reading and writing blocks::
* Example programs for blocks::

File: gsl-ref.info, Node: Block allocation, Next: Reading and writing blocks, Up: Blocks

8.2.1 Block allocation
----------------------

The functions for allocating memory to a block follow the style of ‘malloc’ and ‘free’. In addition they also perform their own error checking. If there is insufficient memory available to allocate a block then the functions call the GSL error handler (with an error number of *note GSL_ENOMEM: 2a.) in addition to returning a null pointer. Thus if you use the library error handler to abort your program then it isn’t necessary to check every ‘alloc’.

 -- Function: *note gsl_block: 353. *gsl_block_alloc (size_t n)

     This function allocates memory for a block of *note n: 355. double-precision elements, returning a pointer to the block struct. The block is not initialized and so the values of its elements are undefined. Use the function *note gsl_block_calloc(): 356. if you want to ensure that all the elements are initialized to zero.
     Zero-sized requests are valid and return a non-null result. A null pointer is returned if insufficient memory is available to create the block.

 -- Function: *note gsl_block: 353. *gsl_block_calloc (size_t n)

     This function allocates memory for a block and initializes all the elements of the block to zero.

 -- Function: void gsl_block_free (gsl_block *b)

     This function frees the memory used by a block *note b: 357. previously allocated with *note gsl_block_alloc(): 355. or *note gsl_block_calloc(): 356.

File: gsl-ref.info, Node: Reading and writing blocks, Next: Example programs for blocks, Prev: Block allocation, Up: Blocks

8.2.2 Reading and writing blocks
--------------------------------

The library provides functions for reading and writing blocks to a file as binary data or formatted text.

 -- Function: int gsl_block_fwrite (FILE *stream, const gsl_block *b)

     This function writes the elements of the block *note b: 359. to the stream *note stream: 359. in binary format. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures.

 -- Function: int gsl_block_fread (FILE *stream, gsl_block *b)

     This function reads into the block *note b: 35a. from the open stream *note stream: 35a. in binary format. The block *note b: 35a. must be preallocated with the correct length since the function uses the size of *note b: 35a. to determine how many bytes to read. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture.

 -- Function: int gsl_block_fprintf (FILE *stream, const gsl_block *b, const char *format)

     This function writes the elements of the block *note b: 35b. line-by-line to the stream *note stream: 35b. using the format specifier *note format: 35b, which should be one of the ‘%g’, ‘%e’ or ‘%f’ formats for floating point numbers and ‘%d’ for integers. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file.

 -- Function: int gsl_block_fscanf (FILE *stream, gsl_block *b)

     This function reads formatted data from the stream *note stream: 35c. into the block *note b: 35c. The block *note b: 35c. must be preallocated with the correct length since the function uses the size of *note b: 35c. to determine how many numbers to read. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file.

File: gsl-ref.info, Node: Example programs for blocks, Prev: Reading and writing blocks, Up: Blocks

8.2.3 Example programs for blocks
---------------------------------

The following program shows how to allocate a block,

     #include <stdio.h>
     #include <gsl/gsl_block.h>

     int
     main (void)
     {
       gsl_block * b = gsl_block_alloc (100);

       printf ("length of block = %zu\n", b->size);
       printf ("block data address = %p\n", b->data);

       gsl_block_free (b);
       return 0;
     }

Here is the output from the program,

     length of block = 100
     block data address = 0x804b0d8

File: gsl-ref.info, Node: Vectors, Next: Matrices, Prev: Blocks, Up: Vectors and Matrices

8.3 Vectors
===========

Vectors are defined by a *note gsl_vector: 35f. structure which describes a slice of a block. Different vectors can be created which point to the same block. A vector slice is a set of equally-spaced elements of an area of memory. The *note gsl_vector: 35f.
structure contains five components, the `size', the `stride', a pointer to the memory where the elements are stored, ‘data’, a pointer to the block owned by the vector, ‘block’, if any, and an ownership flag, ‘owner’. The structure is very simple and looks like this, -- Type: gsl_vector typedef struct { size_t size; size_t stride; double * data; gsl_block * block; int owner; } gsl_vector; The ‘size’ is simply the number of vector elements. The range of valid indices runs from 0 to ‘size-1’. The ‘stride’ is the step-size from one element to the next in physical memory, measured in units of the appropriate datatype. The pointer ‘data’ gives the location of the first element of the vector in memory. The pointer ‘block’ stores the location of the memory block in which the vector elements are located (if any). If the vector owns this block then the ‘owner’ field is set to one and the block will be deallocated when the vector is freed. If the vector points to a block owned by another object then the ‘owner’ field is zero and any underlying block will not be deallocated with the vector. The functions for allocating and accessing vectors are defined in ‘gsl_vector.h’. * Menu: * Vector allocation:: * Accessing vector elements:: * Initializing vector elements:: * Reading and writing vectors:: * Vector views:: * Copying vectors:: * Exchanging elements:: * Vector operations:: * Finding maximum and minimum elements of vectors:: * Vector properties:: * Example programs for vectors::  File: gsl-ref.info, Node: Vector allocation, Next: Accessing vector elements, Up: Vectors 8.3.1 Vector allocation ----------------------- The functions for allocating memory to a vector follow the style of ‘malloc’ and ‘free’. In addition they also perform their own error checking. If there is insufficient memory available to allocate a vector then the functions call the GSL error handler (with an error number of *note GSL_ENOMEM: 2a.) in addition to returning a null pointer. Thus if you use the library error handler to abort your program then it isn’t necessary to check every ‘alloc’. -- Function: *note gsl_vector: 35f. *gsl_vector_alloc (size_t n) This function creates a vector of length n, returning a pointer to a newly initialized vector struct. A new block is allocated for the elements of the vector, and stored in the ‘block’ component of the vector struct. The block is “owned” by the vector, and will be deallocated when the vector is deallocated. Zero-sized requests are valid and return a non-null result. -- Function: *note gsl_vector: 35f. *gsl_vector_calloc (size_t n) This function allocates memory for a vector of length *note n: 362. and initializes all the elements of the vector to zero. -- Function: void gsl_vector_free (gsl_vector *v) This function frees a previously allocated vector *note v: 363. If the vector was created using *note gsl_vector_alloc(): 361. then the block underlying the vector will also be deallocated. If the vector has been created from another object then the memory is still owned by that object and will not be deallocated.  File: gsl-ref.info, Node: Accessing vector elements, Next: Initializing vector elements, Prev: Vector allocation, Up: Vectors 8.3.2 Accessing vector elements ------------------------------- Unlike Fortran compilers, C compilers do not usually provide support for range checking of vectors and matrices. (1) The functions *note gsl_vector_get(): 365. and *note gsl_vector_set(): 366. 
can perform portable range checking for you and report an error if you attempt to access elements outside the allowed range. The functions for accessing the elements of a vector or matrix are defined in ‘gsl_vector.h’ and declared ‘extern inline’ to eliminate function-call overhead. You must compile your program with the preprocessor macro ‘HAVE_INLINE’ defined to use these functions. -- Macro: GSL_RANGE_CHECK_OFF If necessary you can turn off range checking completely without modifying any source files by recompiling your program with the preprocessor definition *note GSL_RANGE_CHECK_OFF: 367. Provided your compiler supports inline functions the effect of turning off range checking is to replace calls to ‘gsl_vector_get(v,i)’ by ‘v->data[i*v->stride]’ and calls to ‘gsl_vector_set(v,i,x)’ by ‘v->data[i*v->stride]=x’. Thus there should be no performance penalty for using the range checking functions when range checking is turned off. -- Macro: GSL_C99_INLINE If you use a C99 compiler which requires inline functions in header files to be declared ‘inline’ instead of ‘extern inline’, define the macro *note GSL_C99_INLINE: 368. (see *note Inline functions: 15.). With GCC this is selected automatically when compiling in C99 mode (‘-std=c99’). -- Variable: int gsl_check_range If inline functions are not used, calls to the functions *note gsl_vector_get(): 365. and *note gsl_vector_set(): 366. will link to the compiled versions of these functions in the library itself. The range checking in these functions is controlled by the global integer variable ‘gsl_check_range’. It is enabled by default—to disable it, set ‘gsl_check_range’ to zero. Due to function-call overhead, there is less benefit in disabling range checking here than for inline functions. -- Function: double gsl_vector_get (const gsl_vector *v, const size_t i) This function returns the *note i: 365.-th element of a vector *note v: 365. If *note i: 365. lies outside the allowed range of 0 to ‘size - 1’ then the error handler is invoked and 0 is returned. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: void gsl_vector_set (gsl_vector *v, const size_t i, double x) This function sets the value of the *note i: 366.-th element of a vector *note v: 366. to *note x: 366. If *note i: 366. lies outside the allowed range of 0 to ‘size - 1’ then the error handler is invoked. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: double *gsl_vector_ptr (gsl_vector *v, size_t i) -- Function: const double *gsl_vector_const_ptr (const gsl_vector *v, size_t i) These functions return a pointer to the *note i: 36b.-th element of a vector *note v: 36b. If *note i: 36b. lies outside the allowed range of 0 to ‘size - 1’ then the error handler is invoked and a null pointer is returned. Inline versions of these functions are used when ‘HAVE_INLINE’ is defined. ---------- Footnotes ---------- (1) (1) Range checking is available in the GNU C Compiler bounds-checking extension, but it is not part of the default installation of GCC. Memory accesses can also be checked with Valgrind or the ‘gcc -fmudflap’ memory protection option.  File: gsl-ref.info, Node: Initializing vector elements, Next: Reading and writing vectors, Prev: Accessing vector elements, Up: Vectors 8.3.3 Initializing vector elements ---------------------------------- -- Function: void gsl_vector_set_all (gsl_vector *v, double x) This function sets all the elements of the vector *note v: 36d. to the value *note x: 36d. 
-- Function: void gsl_vector_set_zero (gsl_vector *v) This function sets all the elements of the vector *note v: 36e. to zero. -- Function: int gsl_vector_set_basis (gsl_vector *v, size_t i) This function makes a basis vector by setting all the elements of the vector *note v: 36f. to zero except for the *note i: 36f.-th element which is set to one.  File: gsl-ref.info, Node: Reading and writing vectors, Next: Vector views, Prev: Initializing vector elements, Up: Vectors 8.3.4 Reading and writing vectors --------------------------------- The library provides functions for reading and writing vectors to a file as binary data or formatted text. -- Function: int gsl_vector_fwrite (FILE *stream, const gsl_vector *v) This function writes the elements of the vector *note v: 371. to the stream *note stream: 371. in binary format. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. -- Function: int gsl_vector_fread (FILE *stream, gsl_vector *v) This function reads into the vector *note v: 372. from the open stream *note stream: 372. in binary format. The vector *note v: 372. must be preallocated with the correct length since the function uses the size of *note v: 372. to determine how many bytes to read. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture. -- Function: int gsl_vector_fprintf (FILE *stream, const gsl_vector *v, const char *format) This function writes the elements of the vector *note v: 373. line-by-line to the stream *note stream: 373. using the format specifier *note format: 373, which should be one of the ‘%g’, ‘%e’ or ‘%f’ formats for floating point numbers and ‘%d’ for integers. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. -- Function: int gsl_vector_fscanf (FILE *stream, gsl_vector *v) This function reads formatted data from the stream *note stream: 374. into the vector *note v: 374. The vector *note v: 374. must be preallocated with the correct length since the function uses the size of *note v: 374. to determine how many numbers to read. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file.  File: gsl-ref.info, Node: Vector views, Next: Copying vectors, Prev: Reading and writing vectors, Up: Vectors 8.3.5 Vector views ------------------ In addition to creating vectors from slices of blocks it is also possible to slice vectors and create vector views. For example, a subvector of another vector can be described with a view, or two views can be made which provide access to the even and odd elements of a vector. -- Type: gsl_vector_view -- Type: gsl_vector_const_view A vector view is a temporary object, stored on the stack, which can be used to operate on a subset of vector elements. Vector views can be defined for both constant and non-constant vectors, using separate types that preserve constness. A vector view has the type *note gsl_vector_view: 376. and a constant vector view has the type *note gsl_vector_const_view: 377. In both cases the elements of the view can be accessed as a *note gsl_vector: 35f. using the ‘vector’ component of the view object. 
A pointer to a vector of type ‘gsl_vector *’ or ‘const gsl_vector *’ can be obtained by taking the address of this component with the ‘&’ operator. When using this pointer it is important to ensure that the view itself remains in scope—the simplest way to do so is by always writing the pointer as ‘&view.vector’, and never storing this value in another variable. -- Function: *note gsl_vector_view: 376. gsl_vector_subvector (gsl_vector *v, size_t offset, size_t n) -- Function: *note gsl_vector_const_view: 377. gsl_vector_const_subvector (const gsl_vector *v, size_t offset, size_t n) These functions return a vector view of a subvector of another vector *note v: 379. The start of the new vector is offset by *note offset: 379. elements from the start of the original vector. The new vector has *note n: 379. elements. Mathematically, the ‘i’-th element of the new vector ‘v'’ is given by: v'(i) = v->data[(offset + i)*v->stride] where the index ‘i’ runs from 0 to ‘n - 1’. The ‘data’ pointer of the returned vector struct is set to null if the combined parameters (*note offset: 379, *note n: 379.) overrun the end of the original vector. The new vector is only a view of the block underlying the original vector, *note v: 379. The block containing the elements of *note v: 379. is not owned by the new vector. When the view goes out of scope the original vector *note v: 379. and its block will continue to exist. The original memory can only be deallocated by freeing the original vector. Of course, the original vector should not be deallocated while the view is still in use. The function *note gsl_vector_const_subvector(): 379. is equivalent to *note gsl_vector_subvector(): 378. but can be used for vectors which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_vector_subvector_with_stride (gsl_vector *v, size_t offset, size_t stride, size_t n) -- Function: *note gsl_vector_const_view: 377. gsl_vector_const_subvector_with_stride (const gsl_vector *v, size_t offset, size_t stride, size_t n) These functions return a vector view of a subvector of another vector *note v: 37b. with an additional stride argument. The subvector is formed in the same way as for *note gsl_vector_subvector(): 378. but the new vector has *note n: 37b. elements with a step-size of *note stride: 37b. from one element to the next in the original vector. Mathematically, the ‘i’-th element of the new vector ‘v'’ is given by: v'(i) = v->data[(offset + i*stride)*v->stride] where the index ‘i’ runs from 0 to ‘n - 1’. Note that subvector views give direct access to the underlying elements of the original vector. For example, the following code will zero the even elements of the vector *note v: 37b. of length ‘n’, while leaving the odd elements untouched: gsl_vector_view v_even = gsl_vector_subvector_with_stride (v, 0, 2, n/2); gsl_vector_set_zero (&v_even.vector); A vector view can be passed to any subroutine which takes a vector argument just as a directly allocated vector would be, using ‘&view.vector’. For example, the following code computes the norm of the odd elements of *note v: 37b. using the BLAS routine ‘dnrm2’: gsl_vector_view v_odd = gsl_vector_subvector_with_stride (v, 1, 2, n/2); double r = gsl_blas_dnrm2 (&v_odd.vector); The function *note gsl_vector_const_subvector_with_stride(): 37b. is equivalent to *note gsl_vector_subvector_with_stride(): 37a. but can be used for vectors which are declared ‘const’. -- Function: *note gsl_vector_view: 376. 
gsl_vector_complex_real (gsl_vector_complex *v) -- Function: *note gsl_vector_const_view: 377. gsl_vector_complex_const_real (const gsl_vector_complex *v) These functions return a vector view of the real parts of the complex vector *note v: 37d. The function *note gsl_vector_complex_const_real(): 37d. is equivalent to *note gsl_vector_complex_real(): 37c. but can be used for vectors which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_vector_complex_imag (gsl_vector_complex *v) -- Function: *note gsl_vector_const_view: 377. gsl_vector_complex_const_imag (const gsl_vector_complex *v) These functions return a vector view of the imaginary parts of the complex vector *note v: 37f. The function *note gsl_vector_complex_const_imag(): 37f. is equivalent to *note gsl_vector_complex_imag(): 37e. but can be used for vectors which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_vector_view_array (double *base, size_t n) -- Function: *note gsl_vector_const_view: 377. gsl_vector_const_view_array (const double *base, size_t n) These functions return a vector view of an array. The start of the new vector is given by *note base: 381. and has *note n: 381. elements. Mathematically, the ‘i’-th element of the new vector ‘v'’ is given by: v'(i) = base[i] where the index ‘i’ runs from 0 to ‘n - 1’. The array containing the elements of ‘v’ is not owned by the new vector view. When the view goes out of scope the original array will continue to exist. The original memory can only be deallocated by freeing the original pointer *note base: 381. Of course, the original array should not be deallocated while the view is still in use. The function *note gsl_vector_const_view_array(): 381. is equivalent to *note gsl_vector_view_array(): 380. but can be used for arrays which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_vector_view_array_with_stride (double *base, size_t stride, size_t n) -- Function: *note gsl_vector_const_view: 377. gsl_vector_const_view_array_with_stride (const double *base, size_t stride, size_t n) These functions return a vector view of an array *note base: 383. with an additional stride argument. The subvector is formed in the same way as for *note gsl_vector_view_array(): 380. but the new vector has *note n: 383. elements with a step-size of *note stride: 383. from one element to the next in the original array. Mathematically, the ‘i’-th element of the new vector ‘v'’ is given by: v'(i) = base[i*stride] where the index ‘i’ runs from 0 to ‘n - 1’. Note that the view gives direct access to the underlying elements of the original array. A vector view can be passed to any subroutine which takes a vector argument just as a directly allocated vector would be, using ‘&view.vector’. The function *note gsl_vector_const_view_array_with_stride(): 383. is equivalent to *note gsl_vector_view_array_with_stride(): 382. but can be used for arrays which are declared ‘const’.  File: gsl-ref.info, Node: Copying vectors, Next: Exchanging elements, Prev: Vector views, Up: Vectors 8.3.6 Copying vectors --------------------- Common operations on vectors such as addition and multiplication are available in the BLAS part of the library (see *note BLAS Support: 11.). However, it is useful to have a small number of utility functions which do not require the full BLAS code. The following functions fall into this category. -- Function: int gsl_vector_memcpy (gsl_vector *dest, const gsl_vector *src) This function copies the elements of the vector *note src: 385. 
into the vector *note dest: 385. The two vectors must have the same length. -- Function: int gsl_vector_swap (gsl_vector *v, gsl_vector *w) This function exchanges the elements of the vectors *note v: 386. and *note w: 386. by copying. The two vectors must have the same length.  File: gsl-ref.info, Node: Exchanging elements, Next: Vector operations, Prev: Copying vectors, Up: Vectors 8.3.7 Exchanging elements ------------------------- The following functions can be used to exchange, or permute, the elements of a vector. -- Function: int gsl_vector_swap_elements (gsl_vector *v, size_t i, size_t j) This function exchanges the *note i: 388.-th and *note j: 388.-th elements of the vector *note v: 388. in-place. -- Function: int gsl_vector_reverse (gsl_vector *v) This function reverses the order of the elements of the vector *note v: 389.  File: gsl-ref.info, Node: Vector operations, Next: Finding maximum and minimum elements of vectors, Prev: Exchanging elements, Up: Vectors 8.3.8 Vector operations ----------------------- -- Function: int gsl_vector_add (gsl_vector *a, const gsl_vector *b) This function adds the elements of vector *note b: 38b. to the elements of vector *note a: 38b. The result a_i \leftarrow a_i + b_i is stored in *note a: 38b. and *note b: 38b. remains unchanged. The two vectors must have the same length. -- Function: int gsl_vector_sub (gsl_vector *a, const gsl_vector *b) This function subtracts the elements of vector *note b: 38c. from the elements of vector *note a: 38c. The result a_i \leftarrow a_i - b_i is stored in *note a: 38c. and *note b: 38c. remains unchanged. The two vectors must have the same length. -- Function: int gsl_vector_mul (gsl_vector *a, const gsl_vector *b) This function multiplies the elements of vector *note a: 38d. by the elements of vector *note b: 38d. The result a_i \leftarrow a_i * b_i is stored in *note a: 38d. and *note b: 38d. remains unchanged. The two vectors must have the same length. -- Function: int gsl_vector_div (gsl_vector *a, const gsl_vector *b) This function divides the elements of vector *note a: 38e. by the elements of vector *note b: 38e. The result a_i \leftarrow a_i / b_i is stored in *note a: 38e. and *note b: 38e. remains unchanged. The two vectors must have the same length. -- Function: int gsl_vector_scale (gsl_vector *a, const double x) This function multiplies the elements of vector *note a: 38f. by the constant factor *note x: 38f. The result a_i \leftarrow x a_i is stored in *note a: 38f. -- Function: int gsl_vector_add_constant (gsl_vector *a, const double x) This function adds the constant value *note x: 390. to the elements of the vector *note a: 390. The result a_i \leftarrow a_i + x is stored in *note a: 390. -- Function: double gsl_vector_sum (const gsl_vector *a) This function returns the sum of the elements of *note a: 391, defined as \sum_{i=1}^n a_i -- Function: int gsl_vector_axpby (const double alpha, const gsl_vector *x, const double beta, gsl_vector *y) This function performs the operation y \leftarrow \alpha x + \beta y. The vectors *note x: 392. and *note y: 392. must have the same length.  File: gsl-ref.info, Node: Finding maximum and minimum elements of vectors, Next: Vector properties, Prev: Vector operations, Up: Vectors 8.3.9 Finding maximum and minimum elements of vectors ----------------------------------------------------- The following operations are only defined for real vectors. 
 -- Function: double gsl_vector_max (const gsl_vector *v)

     This function returns the maximum value in the vector *note v: 394.

 -- Function: double gsl_vector_min (const gsl_vector *v)

     This function returns the minimum value in the vector *note v: 395.

 -- Function: void gsl_vector_minmax (const gsl_vector *v, double *min_out, double *max_out)

     This function returns the minimum and maximum values in the vector *note v: 396, storing them in *note min_out: 396. and *note max_out: 396.

 -- Function: size_t gsl_vector_max_index (const gsl_vector *v)

     This function returns the index of the maximum value in the vector *note v: 397. When there are several equal maximum elements then the lowest index is returned.

 -- Function: size_t gsl_vector_min_index (const gsl_vector *v)

     This function returns the index of the minimum value in the vector *note v: 398. When there are several equal minimum elements then the lowest index is returned.

 -- Function: void gsl_vector_minmax_index (const gsl_vector *v, size_t *imin, size_t *imax)

     This function returns the indices of the minimum and maximum values in the vector *note v: 399, storing them in *note imin: 399. and *note imax: 399. When there are several equal minimum or maximum elements then the lowest indices are returned.

File: gsl-ref.info, Node: Vector properties, Next: Example programs for vectors, Prev: Finding maximum and minimum elements of vectors, Up: Vectors

8.3.10 Vector properties
------------------------

The following functions are defined for real and complex vectors. For complex vectors both the real and imaginary parts must satisfy the conditions.

 -- Function: int gsl_vector_isnull (const gsl_vector *v)
 -- Function: int gsl_vector_ispos (const gsl_vector *v)
 -- Function: int gsl_vector_isneg (const gsl_vector *v)
 -- Function: int gsl_vector_isnonneg (const gsl_vector *v)

     These functions return 1 if all the elements of the vector *note v: 39e. are zero, strictly positive, strictly negative, or non-negative respectively, and 0 otherwise.

 -- Function: int gsl_vector_equal (const gsl_vector *u, const gsl_vector *v)

     This function returns 1 if the vectors *note u: 39f. and *note v: 39f. are equal (by comparison of element values) and 0 otherwise.

File: gsl-ref.info, Node: Example programs for vectors, Prev: Vector properties, Up: Vectors

8.3.11 Example programs for vectors
-----------------------------------

This program shows how to allocate, initialize and read from a vector using the functions *note gsl_vector_alloc(): 361, *note gsl_vector_set(): 366. and *note gsl_vector_get(): 365.

     #include <stdio.h>
     #include <gsl/gsl_vector.h>

     int
     main (void)
     {
       int i;
       gsl_vector * v = gsl_vector_alloc (3);

       for (i = 0; i < 3; i++)
         {
           gsl_vector_set (v, i, 1.23 + i);
         }

       for (i = 0; i < 100; i++) /* OUT OF RANGE ERROR */
         {
           printf ("v_%d = %g\n", i, gsl_vector_get (v, i));
         }

       gsl_vector_free (v);
       return 0;
     }

Here is the output from the program. The final loop attempts to read outside the range of the vector ‘v’, and the error is trapped by the range-checking code in *note gsl_vector_get(): 365.

     $ ./a.out
     v_0 = 1.23
     v_1 = 2.23
     v_2 = 3.23
     gsl: vector_source.c:12: ERROR: index out of range
     Default GSL error handler invoked.
     Aborted (core dumped)

The next program shows how to write a vector to a file.
     #include <stdio.h>
     #include <gsl/gsl_vector.h>

     int
     main (void)
     {
       int i;
       gsl_vector * v = gsl_vector_alloc (100);

       for (i = 0; i < 100; i++)
         {
           gsl_vector_set (v, i, 1.23 + i);
         }

       {
         FILE * f = fopen ("test.dat", "w");
         gsl_vector_fprintf (f, v, "%.5g");
         fclose (f);
       }

       gsl_vector_free (v);
       return 0;
     }

After running this program the file ‘test.dat’ should contain the elements of ‘v’, written using the format specifier ‘%.5g’. The vector could then be read back in using the function ‘gsl_vector_fscanf (f, v)’ as follows:

     #include <stdio.h>
     #include <gsl/gsl_vector.h>

     int
     main (void)
     {
       int i;
       gsl_vector * v = gsl_vector_alloc (10);

       {
         FILE * f = fopen ("test.dat", "r");
         gsl_vector_fscanf (f, v);
         fclose (f);
       }

       for (i = 0; i < 10; i++)
         {
           printf ("%g\n", gsl_vector_get(v, i));
         }

       gsl_vector_free (v);
       return 0;
     }

File: gsl-ref.info, Node: Matrices, Prev: Vectors, Up: Vectors and Matrices

8.4 Matrices
============

Matrices are defined by a *note gsl_matrix: 3a2. structure which describes a generalized slice of a block. Like a vector it represents a set of elements in an area of memory, but uses two indices instead of one.

 -- Type: gsl_matrix

     The *note gsl_matrix: 3a2. structure contains six components, the two dimensions of the matrix, a physical dimension, a pointer to the memory where the elements of the matrix are stored, ‘data’, a pointer to the block owned by the matrix ‘block’, if any, and an ownership flag, ‘owner’. The physical dimension determines the memory layout and can differ from the matrix dimension to allow the use of submatrices. The *note gsl_matrix: 3a2. structure is very simple and looks like this:

          typedef struct
          {
            size_t size1;
            size_t size2;
            size_t tda;
            double * data;
            gsl_block * block;
            int owner;
          } gsl_matrix;

Matrices are stored in row-major order, meaning that each row of elements forms a contiguous block in memory. This is the standard “C-language ordering” of two-dimensional arrays. Note that Fortran stores arrays in column-major order. The number of rows is ‘size1’. The range of valid row indices runs from 0 to ‘size1 - 1’. Similarly ‘size2’ is the number of columns. The range of valid column indices runs from 0 to ‘size2 - 1’. The physical row dimension ‘tda’, or `trailing dimension', specifies the size of a row of the matrix as laid out in memory. For example, in the following matrix ‘size1’ is 3, ‘size2’ is 4, and ‘tda’ is 8. The physical memory layout of the matrix begins in the top left hand-corner and proceeds from left to right along each row in turn.

     00 01 02 03 XX XX XX XX
     10 11 12 13 XX XX XX XX
     20 21 22 23 XX XX XX XX

Each unused memory location is represented by “‘XX’”. The pointer ‘data’ gives the location of the first element of the matrix in memory. The pointer ‘block’ stores the location of the memory block in which the elements of the matrix are located (if any). If the matrix owns this block then the ‘owner’ field is set to one and the block will be deallocated when the matrix is freed. If the matrix is only a slice of a block owned by another object then the ‘owner’ field is zero and any underlying block will not be freed. The functions for allocating and accessing matrices are defined in ‘gsl_matrix.h’.
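As an illustrative sketch of this layout (assuming only the fields described above; for a directly allocated matrix ‘tda’ is simply equal to ‘size2’), the following fragment shows that the element (1,2) of such a matrix is stored at ‘data[1*tda + 2]’:

     #include <stdio.h>
     #include <gsl/gsl_matrix.h>

     int
     main (void)
     {
       /* a freshly allocated matrix owns its block and has tda == size2 */
       gsl_matrix * m = gsl_matrix_alloc (3, 4);

       gsl_matrix_set (m, 1, 2, 42.0);

       printf ("size1 = %zu, size2 = %zu, tda = %zu\n",
               m->size1, m->size2, m->tda);
       printf ("m(1,2) = %g, data[1*tda + 2] = %g\n",
               gsl_matrix_get (m, 1, 2), m->data[1 * m->tda + 2]);

       gsl_matrix_free (m);
       return 0;
     }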
* Menu: * Matrix allocation:: * Accessing matrix elements:: * Initializing matrix elements:: * Reading and writing matrices:: * Matrix views:: * Creating row and column views:: * Copying matrices:: * Copying rows and columns:: * Exchanging rows and columns:: * Matrix operations:: * Finding maximum and minimum elements of matrices:: * Matrix properties:: * Example programs for matrices:: * References and Further Reading: References and Further Reading<4>.  File: gsl-ref.info, Node: Matrix allocation, Next: Accessing matrix elements, Up: Matrices 8.4.1 Matrix allocation ----------------------- The functions for allocating memory to a matrix follow the style of ‘malloc’ and ‘free’. They also perform their own error checking. If there is insufficient memory available to allocate a matrix then the functions call the GSL error handler (with an error number of *note GSL_ENOMEM: 2a.) in addition to returning a null pointer. Thus if you use the library error handler to abort your program then it isn’t necessary to check every ‘alloc’. -- Function: *note gsl_matrix: 3a2. *gsl_matrix_alloc (size_t n1, size_t n2) This function creates a matrix of size *note n1: 3a4. rows by *note n2: 3a4. columns, returning a pointer to a newly initialized matrix struct. A new block is allocated for the elements of the matrix, and stored in the ‘block’ component of the matrix struct. The block is “owned” by the matrix, and will be deallocated when the matrix is deallocated. Requesting zero for *note n1: 3a4. or *note n2: 3a4. is valid and returns a non-null result. -- Function: *note gsl_matrix: 3a2. *gsl_matrix_calloc (size_t n1, size_t n2) This function allocates memory for a matrix of size *note n1: 3a5. rows by *note n2: 3a5. columns and initializes all the elements of the matrix to zero. -- Function: void gsl_matrix_free (gsl_matrix *m) This function frees a previously allocated matrix *note m: 3a6. If the matrix was created using *note gsl_matrix_alloc(): 3a4. then the block underlying the matrix will also be deallocated. If the matrix has been created from another object then the memory is still owned by that object and will not be deallocated.  File: gsl-ref.info, Node: Accessing matrix elements, Next: Initializing matrix elements, Prev: Matrix allocation, Up: Matrices 8.4.2 Accessing matrix elements ------------------------------- The functions for accessing the elements of a matrix use the same range checking system as vectors. You can turn off range checking by recompiling your program with the preprocessor definition *note GSL_RANGE_CHECK_OFF: 367. The elements of the matrix are stored in “C-order”, where the second index moves continuously through memory. More precisely, the element accessed by the function ‘gsl_matrix_get(m,i,j)’ and ‘gsl_matrix_set(m,i,j,x)’ is: m->data[i * m->tda + j] where ‘tda’ is the physical row-length of the matrix. -- Function: double gsl_matrix_get (const gsl_matrix *m, const size_t i, const size_t j) This function returns the (i,j)-th element of a matrix *note m: 3a8. If *note i: 3a8. or *note j: 3a8. lie outside the allowed range of 0 to ‘n1 - 1’ and 0 to ‘n2 - 1’ then the error handler is invoked and 0 is returned. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: void gsl_matrix_set (gsl_matrix *m, const size_t i, const size_t j, double x) This function sets the value of the (i,j)-th element of a matrix *note m: 3a9. to *note x: 3a9. If *note i: 3a9. or *note j: 3a9. 
lies outside the allowed range of 0 to ‘n1 - 1’ and 0 to ‘n2 - 1’ then the error handler is invoked. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: double *gsl_matrix_ptr (gsl_matrix *m, size_t i, size_t j) -- Function: const double *gsl_matrix_const_ptr (const gsl_matrix *m, size_t i, size_t j) These functions return a pointer to the (i,j)-th element of a matrix *note m: 3ab. If *note i: 3ab. or *note j: 3ab. lie outside the allowed range of 0 to ‘n1 - 1’ and 0 to ‘n2 - 1’ then the error handler is invoked and a null pointer is returned. Inline versions of these functions are used when ‘HAVE_INLINE’ is defined.  File: gsl-ref.info, Node: Initializing matrix elements, Next: Reading and writing matrices, Prev: Accessing matrix elements, Up: Matrices 8.4.3 Initializing matrix elements ---------------------------------- -- Function: void gsl_matrix_set_all (gsl_matrix *m, double x) This function sets all the elements of the matrix *note m: 3ad. to the value *note x: 3ad. -- Function: void gsl_matrix_set_zero (gsl_matrix *m) This function sets all the elements of the matrix *note m: 3ae. to zero. -- Function: void gsl_matrix_set_identity (gsl_matrix *m) This function sets the elements of the matrix *note m: 3af. to the corresponding elements of the identity matrix, m(i,j) = \delta(i,j), i.e. a unit diagonal with all off-diagonal elements zero. This applies to both square and rectangular matrices.  File: gsl-ref.info, Node: Reading and writing matrices, Next: Matrix views, Prev: Initializing matrix elements, Up: Matrices 8.4.4 Reading and writing matrices ---------------------------------- The library provides functions for reading and writing matrices to a file as binary data or formatted text. -- Function: int gsl_matrix_fwrite (FILE *stream, const gsl_matrix *m) This function writes the elements of the matrix *note m: 3b1. to the stream *note stream: 3b1. in binary format. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. -- Function: int gsl_matrix_fread (FILE *stream, gsl_matrix *m) This function reads into the matrix *note m: 3b2. from the open stream *note stream: 3b2. in binary format. The matrix *note m: 3b2. must be preallocated with the correct dimensions since the function uses the size of *note m: 3b2. to determine how many bytes to read. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture. -- Function: int gsl_matrix_fprintf (FILE *stream, const gsl_matrix *m, const char *format) This function writes the elements of the matrix *note m: 3b3. line-by-line to the stream *note stream: 3b3. using the format specifier *note format: 3b3, which should be one of the ‘%g’, ‘%e’ or ‘%f’ formats for floating point numbers and ‘%d’ for integers. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. -- Function: int gsl_matrix_fscanf (FILE *stream, gsl_matrix *m) This function reads formatted data from the stream *note stream: 3b4. into the matrix *note m: 3b4. The matrix *note m: 3b4. must be preallocated with the correct dimensions since the function uses the size of *note m: 3b4. to determine how many numbers to read. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file.  
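The following sketch (an illustration along the lines of the vector examples above; the file name ‘test.dat’ is arbitrary) writes a matrix to a text file with gsl_matrix_fprintf() and reads it back with gsl_matrix_fscanf() into a second matrix of the same dimensions:

     #include <stdio.h>
     #include <gsl/gsl_matrix.h>

     int
     main (void)
     {
       size_t i, j;
       gsl_matrix * a = gsl_matrix_alloc (2, 3);
       gsl_matrix * b = gsl_matrix_alloc (2, 3);

       for (i = 0; i < 2; i++)
         for (j = 0; j < 3; j++)
           gsl_matrix_set (a, i, j, 10 * i + j);

       {
         FILE * f = fopen ("test.dat", "w");
         gsl_matrix_fprintf (f, a, "%g");
         fclose (f);
       }

       {
         FILE * f = fopen ("test.dat", "r");
         gsl_matrix_fscanf (f, b);   /* b must already have the right dimensions */
         fclose (f);
       }

       printf ("b(1,2) = %g\n", gsl_matrix_get (b, 1, 2));

       gsl_matrix_free (a);
       gsl_matrix_free (b);
       return 0;
     }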
File: gsl-ref.info, Node: Matrix views, Next: Creating row and column views, Prev: Reading and writing matrices, Up: Matrices 8.4.5 Matrix views ------------------ -- Type: gsl_matrix_view -- Type: gsl_matrix_const_view A matrix view is a temporary object, stored on the stack, which can be used to operate on a subset of matrix elements. Matrix views can be defined for both constant and non-constant matrices using separate types that preserve constness. A matrix view has the type *note gsl_matrix_view: 3b6. and a constant matrix view has the type *note gsl_matrix_const_view: 3b7. In both cases the elements of the view can by accessed using the ‘matrix’ component of the view object. A pointer ‘gsl_matrix *’ or ‘const gsl_matrix *’ can be obtained by taking the address of the ‘matrix’ component with the ‘&’ operator. In addition to matrix views it is also possible to create vector views of a matrix, such as row or column views. -- Function: *note gsl_matrix_view: 3b6. gsl_matrix_submatrix (gsl_matrix *m, size_t k1, size_t k2, size_t n1, size_t n2) -- Function: *note gsl_matrix_const_view: 3b7. gsl_matrix_const_submatrix (const gsl_matrix *m, size_t k1, size_t k2, size_t n1, size_t n2) These functions return a matrix view of a submatrix of the matrix *note m: 3b9. The upper-left element of the submatrix is the element (*note k1: 3b9, *note k2: 3b9.) of the original matrix. The submatrix has *note n1: 3b9. rows and *note n2: 3b9. columns. The physical number of columns in memory given by ‘tda’ is unchanged. Mathematically, the (i,j)-th element of the new matrix is given by: m'(i,j) = m->data[(k1*m->tda + k2) + i*m->tda + j] where the index ‘i’ runs from 0 to ‘n1 - 1’ and the index ‘j’ runs from 0 to ‘n2 - 1’. The ‘data’ pointer of the returned matrix struct is set to null if the combined parameters (‘i’, ‘j’, *note n1: 3b9, *note n2: 3b9, ‘tda’) overrun the ends of the original matrix. The new matrix view is only a view of the block underlying the existing matrix, *note m: 3b9. The block containing the elements of *note m: 3b9. is not owned by the new matrix view. When the view goes out of scope the original matrix *note m: 3b9. and its block will continue to exist. The original memory can only be deallocated by freeing the original matrix. Of course, the original matrix should not be deallocated while the view is still in use. The function *note gsl_matrix_const_submatrix(): 3b9. is equivalent to *note gsl_matrix_submatrix(): 3b8. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_matrix_view: 3b6. gsl_matrix_view_array (double *base, size_t n1, size_t n2) -- Function: *note gsl_matrix_const_view: 3b7. gsl_matrix_const_view_array (const double *base, size_t n1, size_t n2) These functions return a matrix view of the array *note base: 3bb. The matrix has *note n1: 3bb. rows and *note n2: 3bb. columns. The physical number of columns in memory is also given by *note n2: 3bb. Mathematically, the (i,j)-th element of the new matrix is given by: m'(i,j) = base[i*n2 + j] where the index ‘i’ runs from 0 to ‘n1 - 1’ and the index ‘j’ runs from 0 to ‘n2 - 1’. The new matrix is only a view of the array *note base: 3bb. When the view goes out of scope the original array *note base: 3bb. will continue to exist. The original memory can only be deallocated by freeing the original array. Of course, the original array should not be deallocated while the view is still in use. The function *note gsl_matrix_const_view_array(): 3bb. is equivalent to *note gsl_matrix_view_array(): 3ba. 
but can be used for matrices which are declared ‘const’. -- Function: *note gsl_matrix_view: 3b6. gsl_matrix_view_array_with_tda (double *base, size_t n1, size_t n2, size_t tda) -- Function: *note gsl_matrix_const_view: 3b7. gsl_matrix_const_view_array_with_tda (const double *base, size_t n1, size_t n2, size_t tda) These functions return a matrix view of the array *note base: 3bd. with a physical number of columns *note tda: 3bd. which may differ from the corresponding dimension of the matrix. The matrix has *note n1: 3bd. rows and *note n2: 3bd. columns, and the physical number of columns in memory is given by *note tda: 3bd. Mathematically, the (i,j)-th element of the new matrix is given by: m'(i,j) = base[i*tda + j] where the index ‘i’ runs from 0 to ‘n1 - 1’ and the index ‘j’ runs from 0 to ‘n2 - 1’. The new matrix is only a view of the array *note base: 3bd. When the view goes out of scope the original array *note base: 3bd. will continue to exist. The original memory can only be deallocated by freeing the original array. Of course, the original array should not be deallocated while the view is still in use. The function *note gsl_matrix_const_view_array_with_tda(): 3bd. is equivalent to *note gsl_matrix_view_array_with_tda(): 3bc. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_matrix_view: 3b6. gsl_matrix_view_vector (gsl_vector *v, size_t n1, size_t n2) -- Function: *note gsl_matrix_const_view: 3b7. gsl_matrix_const_view_vector (const gsl_vector *v, size_t n1, size_t n2) These functions return a matrix view of the vector *note v: 3bf. The matrix has *note n1: 3bf. rows and *note n2: 3bf. columns. The vector must have unit stride. The physical number of columns in memory is also given by *note n2: 3bf. Mathematically, the (i,j)-th element of the new matrix is given by: m'(i,j) = v->data[i*n2 + j] where the index ‘i’ runs from 0 to ‘n1 - 1’ and the index ‘j’ runs from 0 to ‘n2 - 1’. The new matrix is only a view of the vector *note v: 3bf. When the view goes out of scope the original vector *note v: 3bf. will continue to exist. The original memory can only be deallocated by freeing the original vector. Of course, the original vector should not be deallocated while the view is still in use. The function *note gsl_matrix_const_view_vector(): 3bf. is equivalent to *note gsl_matrix_view_vector(): 3be. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_matrix_view: 3b6. gsl_matrix_view_vector_with_tda (gsl_vector *v, size_t n1, size_t n2, size_t tda) -- Function: *note gsl_matrix_const_view: 3b7. gsl_matrix_const_view_vector_with_tda (const gsl_vector *v, size_t n1, size_t n2, size_t tda) These functions return a matrix view of the vector *note v: 3c1. with a physical number of columns *note tda: 3c1. which may differ from the corresponding matrix dimension. The vector must have unit stride. The matrix has *note n1: 3c1. rows and *note n2: 3c1. columns, and the physical number of columns in memory is given by *note tda: 3c1. Mathematically, the (i,j)-th element of the new matrix is given by: m'(i,j) = v->data[i*tda + j] where the index ‘i’ runs from 0 to ‘n1 - 1’ and the index ‘j’ runs from 0 to ‘n2 - 1’. The new matrix is only a view of the vector *note v: 3c1. When the view goes out of scope the original vector *note v: 3c1. will continue to exist. The original memory can only be deallocated by freeing the original vector. Of course, the original vector should not be deallocated while the view is still in use. 
The function *note gsl_matrix_const_view_vector_with_tda(): 3c1. is equivalent to *note gsl_matrix_view_vector_with_tda(): 3c0. but can be used for matrices which are declared ‘const’.  File: gsl-ref.info, Node: Creating row and column views, Next: Copying matrices, Prev: Matrix views, Up: Matrices 8.4.6 Creating row and column views ----------------------------------- In general there are two ways to access an object, by reference or by copying. The functions described in this section create vector views which allow access to a row or column of a matrix by reference. Modifying elements of the view is equivalent to modifying the matrix, since both the vector view and the matrix point to the same memory block. -- Function: *note gsl_vector_view: 376. gsl_matrix_row (gsl_matrix *m, size_t i) -- Function: *note gsl_vector_const_view: 377. gsl_matrix_const_row (const gsl_matrix *m, size_t i) These functions return a vector view of the *note i: 3c4.-th row of the matrix *note m: 3c4. The ‘data’ pointer of the new vector is set to null if *note i: 3c4. is out of range. The function *note gsl_matrix_const_row(): 3c4. is equivalent to *note gsl_matrix_row(): 3c3. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_matrix_column (gsl_matrix *m, size_t j) -- Function: *note gsl_vector_const_view: 377. gsl_matrix_const_column (const gsl_matrix *m, size_t j) These functions return a vector view of the *note j: 3c6.-th column of the matrix *note m: 3c6. The ‘data’ pointer of the new vector is set to null if *note j: 3c6. is out of range. The function *note gsl_matrix_const_column(): 3c6. is equivalent to *note gsl_matrix_column(): 3c5. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_matrix_subrow (gsl_matrix *m, size_t i, size_t offset, size_t n) -- Function: *note gsl_vector_const_view: 377. gsl_matrix_const_subrow (const gsl_matrix *m, size_t i, size_t offset, size_t n) These functions return a vector view of the *note i: 3c8.-th row of the matrix *note m: 3c8. beginning at *note offset: 3c8. elements past the first column and containing *note n: 3c8. elements. The ‘data’ pointer of the new vector is set to null if *note i: 3c8, *note offset: 3c8, or *note n: 3c8. are out of range. The function *note gsl_matrix_const_subrow(): 3c8. is equivalent to *note gsl_matrix_subrow(): 3c7. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_matrix_subcolumn (gsl_matrix *m, size_t j, size_t offset, size_t n) -- Function: *note gsl_vector_const_view: 377. gsl_matrix_const_subcolumn (const gsl_matrix *m, size_t j, size_t offset, size_t n) These functions return a vector view of the *note j: 3ca.-th column of the matrix *note m: 3ca. beginning at *note offset: 3ca. elements past the first row and containing *note n: 3ca. elements. The ‘data’ pointer of the new vector is set to null if *note j: 3ca, *note offset: 3ca, or *note n: 3ca. are out of range. The function *note gsl_matrix_const_subcolumn(): 3ca. is equivalent to *note gsl_matrix_subcolumn(): 3c9. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_matrix_diagonal (gsl_matrix *m) -- Function: *note gsl_vector_const_view: 377. gsl_matrix_const_diagonal (const gsl_matrix *m) These functions return a vector view of the diagonal of the matrix *note m: 3cc. The matrix *note m: 3cc. is not required to be square. 
For a rectangular matrix the length of the diagonal is the same as the smaller dimension of the matrix. The function *note gsl_matrix_const_diagonal(): 3cc. is equivalent to *note gsl_matrix_diagonal(): 3cb. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_matrix_subdiagonal (gsl_matrix *m, size_t k) -- Function: *note gsl_vector_const_view: 377. gsl_matrix_const_subdiagonal (const gsl_matrix *m, size_t k) These functions return a vector view of the *note k: 3ce.-th subdiagonal of the matrix *note m: 3ce. The matrix *note m: 3ce. is not required to be square. The diagonal of the matrix corresponds to k = 0. The function *note gsl_matrix_const_subdiagonal(): 3ce. is equivalent to *note gsl_matrix_subdiagonal(): 3cd. but can be used for matrices which are declared ‘const’. -- Function: *note gsl_vector_view: 376. gsl_matrix_superdiagonal (gsl_matrix *m, size_t k) -- Function: *note gsl_vector_const_view: 377. gsl_matrix_const_superdiagonal (const gsl_matrix *m, size_t k) These functions return a vector view of the *note k: 3d0.-th superdiagonal of the matrix *note m: 3d0. The matrix *note m: 3d0. is not required to be square. The diagonal of the matrix corresponds to k = 0. The function *note gsl_matrix_const_superdiagonal(): 3d0. is equivalent to *note gsl_matrix_superdiagonal(): 3cf. but can be used for matrices which are declared ‘const’.  File: gsl-ref.info, Node: Copying matrices, Next: Copying rows and columns, Prev: Creating row and column views, Up: Matrices 8.4.7 Copying matrices ---------------------- -- Function: int gsl_matrix_memcpy (gsl_matrix *dest, const gsl_matrix *src) This function copies the elements of the matrix *note src: 3d2. into the matrix *note dest: 3d2. The two matrices must have the same size. -- Function: int gsl_matrix_swap (gsl_matrix *m1, gsl_matrix *m2) This function exchanges the elements of the matrices *note m1: 3d3. and *note m2: 3d3. by copying. The two matrices must have the same size.  File: gsl-ref.info, Node: Copying rows and columns, Next: Exchanging rows and columns, Prev: Copying matrices, Up: Matrices 8.4.8 Copying rows and columns ------------------------------ The functions described in this section copy a row or column of a matrix into a vector. This allows the elements of the vector and the matrix to be modified independently. Note that if the matrix and the vector point to overlapping regions of memory then the result will be undefined. The same effect can be achieved with more generality using *note gsl_vector_memcpy(): 385. with vector views of rows and columns. -- Function: int gsl_matrix_get_row (gsl_vector *v, const gsl_matrix *m, size_t i) This function copies the elements of the *note i: 3d5.-th row of the matrix *note m: 3d5. into the vector *note v: 3d5. The length of the vector must be the same as the length of the row. -- Function: int gsl_matrix_get_col (gsl_vector *v, const gsl_matrix *m, size_t j) This function copies the elements of the *note j: 3d6.-th column of the matrix *note m: 3d6. into the vector *note v: 3d6. The length of the vector must be the same as the length of the column. -- Function: int gsl_matrix_set_row (gsl_matrix *m, size_t i, const gsl_vector *v) This function copies the elements of the vector *note v: 3d7. into the *note i: 3d7.-th row of the matrix *note m: 3d7. The length of the vector must be the same as the length of the row. 
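To make the distinction between views and copies concrete, here is a short sketch (not one of the manual's own examples) which modifies a matrix through a row view and then takes an independent copy of a column with ‘gsl_matrix_get_col()’:

     #include <stdio.h>
     #include <gsl/gsl_matrix.h>
     #include <gsl/gsl_vector.h>

     int
     main (void)
     {
       size_t j;
       gsl_matrix * m = gsl_matrix_alloc (3, 3);
       gsl_vector * col = gsl_vector_alloc (3);

       gsl_matrix_set_identity (m);

       /* A row view references the matrix memory: writing through
          the view changes the matrix itself. */
       {
         gsl_vector_view row = gsl_matrix_row (m, 0);
         gsl_vector_set (&row.vector, 2, 5.0);   /* now m(0,2) == 5 */
       }

       /* gsl_matrix_get_col makes an independent copy: changing the
          vector afterwards leaves the matrix untouched. */
       gsl_matrix_get_col (col, m, 2);
       gsl_vector_set (col, 0, -1.0);            /* m(0,2) is still 5 */

       for (j = 0; j < 3; j++)
         printf ("m(0,%zu) = %g\n", j, gsl_matrix_get (m, 0, j));

       gsl_vector_free (col);
       gsl_matrix_free (m);
       return 0;
     }

Writing through the view changes ‘m’ itself, while modifying ‘col’ afterwards leaves ‘m’ unchanged.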
-- Function: int gsl_matrix_set_col (gsl_matrix *m, size_t j, const gsl_vector *v) This function copies the elements of the vector *note v: 3d8. into the *note j: 3d8.-th column of the matrix *note m: 3d8. The length of the vector must be the same as the length of the column.  File: gsl-ref.info, Node: Exchanging rows and columns, Next: Matrix operations, Prev: Copying rows and columns, Up: Matrices 8.4.9 Exchanging rows and columns --------------------------------- The following functions can be used to exchange the rows and columns of a matrix. -- Function: int gsl_matrix_swap_rows (gsl_matrix *m, size_t i, size_t j) This function exchanges the *note i: 3da.-th and *note j: 3da.-th rows of the matrix *note m: 3da. in-place. -- Function: int gsl_matrix_swap_columns (gsl_matrix *m, size_t i, size_t j) This function exchanges the *note i: 3db.-th and *note j: 3db.-th columns of the matrix *note m: 3db. in-place. -- Function: int gsl_matrix_swap_rowcol (gsl_matrix *m, size_t i, size_t j) This function exchanges the *note i: 3dc.-th row and *note j: 3dc.-th column of the matrix *note m: 3dc. in-place. The matrix must be square for this operation to be possible. -- Function: int gsl_matrix_transpose_memcpy (gsl_matrix *dest, const gsl_matrix *src) This function makes the matrix *note dest: 3dd. the transpose of the matrix *note src: 3dd. by copying the elements of *note src: 3dd. into *note dest: 3dd. This function works for all matrices provided that the dimensions of the matrix *note dest: 3dd. match the transposed dimensions of the matrix *note src: 3dd. -- Function: int gsl_matrix_transpose (gsl_matrix *m) This function replaces the matrix *note m: 3de. by its transpose by copying the elements of the matrix in-place. The matrix must be square for this operation to be possible. -- Function: int gsl_matrix_complex_conjtrans_memcpy (gsl_matrix *dest, const gsl_matrix *src) This function makes the matrix *note dest: 3df. the conjugate transpose of the matrix *note src: 3df. by copying the complex conjugate elements of *note src: 3df. into *note dest: 3df. This function works for all complex matrices provided that the dimensions of the matrix *note dest: 3df. match the transposed dimensions of the matrix *note src: 3df.  File: gsl-ref.info, Node: Matrix operations, Next: Finding maximum and minimum elements of matrices, Prev: Exchanging rows and columns, Up: Matrices 8.4.10 Matrix operations ------------------------ The following operations are defined for real and complex matrices. -- Function: int gsl_matrix_add (gsl_matrix *a, const gsl_matrix *b) This function adds the elements of matrix *note b: 3e1. to the elements of matrix *note a: 3e1. The result a(i,j) \leftarrow a(i,j) + b(i,j) is stored in *note a: 3e1. and *note b: 3e1. remains unchanged. The two matrices must have the same dimensions. -- Function: int gsl_matrix_sub (gsl_matrix *a, const gsl_matrix *b) This function subtracts the elements of matrix *note b: 3e2. from the elements of matrix *note a: 3e2. The result a(i,j) \leftarrow a(i,j) - b(i,j) is stored in *note a: 3e2. and *note b: 3e2. remains unchanged. The two matrices must have the same dimensions. -- Function: int gsl_matrix_mul_elements (gsl_matrix *a, const gsl_matrix *b) This function multiplies the elements of matrix *note a: 3e3. by the elements of matrix *note b: 3e3. The result a(i,j) \leftarrow a(i,j) * b(i,j) is stored in *note a: 3e3. and *note b: 3e3. remains unchanged. The two matrices must have the same dimensions. 
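Because these operations overwrite their first argument, a non-destructive result is usually obtained by combining them with ‘gsl_matrix_memcpy()’.  The fragment below is a minimal sketch of that idiom (the function name ‘matrix_sum’ is illustrative, not part of GSL):

     #include <gsl/gsl_matrix.h>

     /* Sketch: form c <- a + b element-wise, leaving a and b unchanged.
        All three matrices are assumed to have the same dimensions. */
     void
     matrix_sum (gsl_matrix * c, const gsl_matrix * a, const gsl_matrix * b)
     {
       gsl_matrix_memcpy (c, a);   /* c <- a     */
       gsl_matrix_add (c, b);      /* c <- c + b */
     }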
-- Function: int gsl_matrix_div_elements (gsl_matrix *a, const gsl_matrix *b) This function divides the elements of matrix *note a: 3e4. by the elements of matrix *note b: 3e4. The result a(i,j) \leftarrow a(i,j) / b(i,j) is stored in *note a: 3e4. and *note b: 3e4. remains unchanged. The two matrices must have the same dimensions. -- Function: int gsl_matrix_scale (gsl_matrix *a, const double x) This function multiplies the elements of matrix *note a: 3e5. by the constant factor *note x: 3e5. The result a(i,j) \leftarrow x a(i,j) is stored in *note a: 3e5. -- Function: int gsl_matrix_scale_columns (gsl_matrix *A, const gsl_vector *x) This function scales the columns of the M-by-N matrix *note A: 3e6. by the elements of the vector *note x: 3e6, of length N. The j-th column of *note A: 3e6. is multiplied by x_j. This is equivalent to forming A \rightarrow A X where X = \textrm{diag}(x). -- Function: int gsl_matrix_scale_rows (gsl_matrix *A, const gsl_vector *x) This function scales the rows of the M-by-N matrix *note A: 3e7. by the elements of the vector *note x: 3e7, of length M. The i-th row of *note A: 3e7. is multiplied by x_i. This is equivalent to forming A \rightarrow X A where X = \textrm{diag}(x). -- Function: int gsl_matrix_add_constant (gsl_matrix *a, const double x) This function adds the constant value *note x: 3e8. to the elements of the matrix *note a: 3e8. The result a(i,j) \leftarrow a(i,j) + x is stored in *note a: 3e8.  File: gsl-ref.info, Node: Finding maximum and minimum elements of matrices, Next: Matrix properties, Prev: Matrix operations, Up: Matrices 8.4.11 Finding maximum and minimum elements of matrices ------------------------------------------------------- The following operations are only defined for real matrices. -- Function: double gsl_matrix_max (const gsl_matrix *m) This function returns the maximum value in the matrix *note m: 3ea. -- Function: double gsl_matrix_min (const gsl_matrix *m) This function returns the minimum value in the matrix *note m: 3eb. -- Function: void gsl_matrix_minmax (const gsl_matrix *m, double *min_out, double *max_out) This function returns the minimum and maximum values in the matrix *note m: 3ec, storing them in *note min_out: 3ec. and *note max_out: 3ec. -- Function: void gsl_matrix_max_index (const gsl_matrix *m, size_t *imax, size_t *jmax) This function returns the indices of the maximum value in the matrix *note m: 3ed, storing them in *note imax: 3ed. and *note jmax: 3ed. When there are several equal maximum elements then the first element found is returned, searching in row-major order. -- Function: void gsl_matrix_min_index (const gsl_matrix *m, size_t *imin, size_t *jmin) This function returns the indices of the minimum value in the matrix *note m: 3ee, storing them in *note imin: 3ee. and *note jmin: 3ee. When there are several equal minimum elements then the first element found is returned, searching in row-major order. -- Function: void gsl_matrix_minmax_index (const gsl_matrix *m, size_t *imin, size_t *jmin, size_t *imax, size_t *jmax) This function returns the indices of the minimum and maximum values in the matrix *note m: 3ef, storing them in (*note imin: 3ef, *note jmin: 3ef.) and (*note imax: 3ef, *note jmax: 3ef.). When there are several equal minimum or maximum elements then the first elements found are returned, searching in row-major order.  
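The following short program is a sketch (not from the manual's example set) showing how ‘gsl_matrix_max()’ and ‘gsl_matrix_max_index()’ are typically used together:

     #include <stdio.h>
     #include <gsl/gsl_matrix.h>

     int
     main (void)
     {
       size_t i, j, imax, jmax;
       gsl_matrix * m = gsl_matrix_alloc (4, 4);

       /* fill the matrix with some test values */
       for (i = 0; i < 4; i++)
         for (j = 0; j < 4; j++)
           gsl_matrix_set (m, i, j, (double) i * j - (double) i);

       /* locate the largest element and its (row, column) position */
       gsl_matrix_max_index (m, &imax, &jmax);
       printf ("max = %g at (%zu,%zu)\n",
               gsl_matrix_max (m), imax, jmax);

       gsl_matrix_free (m);
       return 0;
     }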
File: gsl-ref.info, Node: Matrix properties, Next: Example programs for matrices, Prev: Finding maximum and minimum elements of matrices, Up: Matrices

8.4.12 Matrix properties
------------------------

The following functions are defined for real and complex matrices.  For complex matrices both the real and imaginary parts must satisfy the conditions.

 -- Function: int gsl_matrix_isnull (const gsl_matrix *m)
 -- Function: int gsl_matrix_ispos (const gsl_matrix *m)
 -- Function: int gsl_matrix_isneg (const gsl_matrix *m)
 -- Function: int gsl_matrix_isnonneg (const gsl_matrix *m)

     These functions return 1 if all the elements of the matrix *note m: 3f4. are zero, strictly positive, strictly negative, or non-negative respectively, and 0 otherwise.  To test whether a matrix is positive-definite, use the *note Cholesky decomposition: 3f5.

 -- Function: int gsl_matrix_equal (const gsl_matrix *a, const gsl_matrix *b)

     This function returns 1 if the matrices *note a: 3f6. and *note b: 3f6. are equal (by comparison of element values) and 0 otherwise.

 -- Function: double gsl_matrix_norm1 (const gsl_matrix *A)

     This function returns the 1-norm of the m-by-n matrix *note A: 3f7, defined as the maximum column sum,

          ||A||_1 = \textrm{max}_{1 \le j \le n} \sum_{i=1}^m |A_{ij}|

File: gsl-ref.info, Node: Example programs for matrices, Next: References and Further Reading<4>, Prev: Matrix properties, Up: Matrices

8.4.13 Example programs for matrices
------------------------------------

The program below shows how to allocate, initialize and read from a matrix using the functions *note gsl_matrix_alloc(): 3a4, *note gsl_matrix_set(): 3a9. and *note gsl_matrix_get(): 3a8.

     #include <stdio.h>
     #include <gsl/gsl_matrix.h>

     int
     main (void)
     {
       int i, j;
       gsl_matrix * m = gsl_matrix_alloc (10, 3);

       for (i = 0; i < 10; i++)
         for (j = 0; j < 3; j++)
           gsl_matrix_set (m, i, j, 0.23 + 100*i + j);

       for (i = 0; i < 100; i++)  /* OUT OF RANGE ERROR */
         for (j = 0; j < 3; j++)
           printf ("m(%d,%d) = %g\n", i, j,
                   gsl_matrix_get (m, i, j));

       gsl_matrix_free (m);

       return 0;
     }

Here is the output from the program.  The final loop attempts to read outside the range of the matrix ‘m’, and the error is trapped by the range-checking code in *note gsl_matrix_get(): 3a8.

     $ ./a.out
     m(0,0) = 0.23
     m(0,1) = 1.23
     m(0,2) = 2.23
     m(1,0) = 100.23
     m(1,1) = 101.23
     m(1,2) = 102.23
     ...
     m(9,2) = 902.23
     gsl: matrix_source.c:13: ERROR: first index out of range
     Default GSL error handler invoked.
     Aborted (core dumped)

The next program shows how to write a matrix to a file.

     #include <stdio.h>
     #include <gsl/gsl_matrix.h>

     int
     main (void)
     {
       int i, j, k = 0;
       gsl_matrix * m = gsl_matrix_alloc (100, 100);
       gsl_matrix * a = gsl_matrix_alloc (100, 100);

       for (i = 0; i < 100; i++)
         for (j = 0; j < 100; j++)
           gsl_matrix_set (m, i, j, 0.23 + i + j);

       {
         FILE * f = fopen ("test.dat", "wb");
         gsl_matrix_fwrite (f, m);
         fclose (f);
       }

       {
         FILE * f = fopen ("test.dat", "rb");
         gsl_matrix_fread (f, a);
         fclose (f);
       }

       for (i = 0; i < 100; i++)
         for (j = 0; j < 100; j++)
           {
             double mij = gsl_matrix_get (m, i, j);
             double aij = gsl_matrix_get (a, i, j);
             if (mij != aij) k++;
           }

       gsl_matrix_free (m);
       gsl_matrix_free (a);

       printf ("differences = %d (should be zero)\n", k);
       return (k > 0);
     }

After running this program the file ‘test.dat’ should contain the elements of ‘m’, written in binary format.  The matrix which is read back in using the function *note gsl_matrix_fread(): 3b2. should be exactly equal to the original matrix.

The following program demonstrates the use of vector views.  The program computes the column norms of a matrix.

     #include <math.h>
     #include <stdio.h>
     #include <gsl/gsl_matrix.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       size_t i,j;
       gsl_matrix *m = gsl_matrix_alloc (10, 10);

       for (i = 0; i < 10; i++)
         for (j = 0; j < 10; j++)
           gsl_matrix_set (m, i, j, sin (i) + cos (j));

       for (j = 0; j < 10; j++)
         {
           gsl_vector_view column = gsl_matrix_column (m, j);
           double d;

           d = gsl_blas_dnrm2 (&column.vector);

           printf ("matrix column %zu, norm = %g\n", j, d);
         }

       gsl_matrix_free (m);

       return 0;
     }

Here is the output of the program,

     matrix column 0, norm = 4.31461
     matrix column 1, norm = 3.1205
     matrix column 2, norm = 2.19316
     matrix column 3, norm = 3.26114
     matrix column 4, norm = 2.53416
     matrix column 5, norm = 2.57281
     matrix column 6, norm = 4.20469
     matrix column 7, norm = 3.65202
     matrix column 8, norm = 2.08524
     matrix column 9, norm = 3.07313

The results can be confirmed using GNU octave:

     $ octave
     GNU Octave, version 2.0.16.92
     octave> m = sin(0:9)' * ones(1,10) + ones(10,1) * cos(0:9);
     octave> sqrt(sum(m.^2))
     ans =
       4.3146  3.1205  2.1932  3.2611  2.5342  2.5728  4.2047  3.6520  2.0852  3.0731

File: gsl-ref.info, Node: References and Further Reading<4>, Prev: Example programs for matrices, Up: Matrices

8.4.14 References and Further Reading
-------------------------------------

The block, vector and matrix objects in GSL follow the ‘valarray’ model of C++.  A description of this model can be found in the following reference,

   * B. Stroustrup, The C++ Programming Language (3rd Ed), Section 22.4 Vector Arithmetic.  Addison-Wesley 1997, ISBN 0-201-88954-4.

File: gsl-ref.info, Node: Permutations, Next: Combinations, Prev: Vectors and Matrices, Up: Top

9 Permutations
**************

This chapter describes functions for creating and manipulating permutations.  A permutation p is represented by an array of n integers in the range 0 to n-1, where each value p_i occurs once and only once.  The application of a permutation p to a vector v yields a new vector v' where v'_i = v_{p_i}.  For example, the array (0,1,3,2) represents a permutation which exchanges the last two elements of a four element vector.  The corresponding identity permutation is (0,1,2,3).

Note that the permutations produced by the linear algebra routines correspond to the exchange of matrix columns, and so should be considered as applying to row-vectors in the form v' = v P rather than column-vectors, when permuting the elements of a vector.

The functions described in this chapter are defined in the header file ‘gsl_permutation.h’.

* Menu:

* The Permutation struct::
* Permutation allocation::
* Accessing permutation elements::
* Permutation properties::
* Permutation functions::
* Applying Permutations::
* Reading and writing permutations::
* Permutations in cyclic form::
* Examples: Examples<4>.
* References and Further Reading: References and Further Reading<5>.

File: gsl-ref.info, Node: The Permutation struct, Next: Permutation allocation, Up: Permutations

9.1 The Permutation struct
==========================

 -- Type: gsl_permutation

     A permutation is defined by a structure containing two components, the size of the permutation and a pointer to the permutation array.  The elements of the permutation array are all of type ‘size_t’.  The *note gsl_permutation: 3fd. structure looks like this:

          typedef struct
          {
            size_t size;
            size_t * data;
          } gsl_permutation;

File: gsl-ref.info, Node: Permutation allocation, Next: Accessing permutation elements, Prev: The Permutation struct, Up: Permutations

9.2 Permutation allocation
==========================

 -- Function: *note gsl_permutation: 3fd.
*gsl_permutation_alloc (size_t n) This function allocates memory for a new permutation of size *note n: 3ff. The permutation is not initialized and its elements are undefined. Use the function *note gsl_permutation_calloc(): 400. if you want to create a permutation which is initialized to the identity. A null pointer is returned if insufficient memory is available to create the permutation. -- Function: *note gsl_permutation: 3fd. *gsl_permutation_calloc (size_t n) This function allocates memory for a new permutation of size *note n: 400. and initializes it to the identity. A null pointer is returned if insufficient memory is available to create the permutation. -- Function: void gsl_permutation_init (gsl_permutation *p) This function initializes the permutation *note p: 401. to the identity, i.e. (0, 1, 2, \dots, n - 1). -- Function: void gsl_permutation_free (gsl_permutation *p) This function frees all the memory used by the permutation *note p: 402. -- Function: int gsl_permutation_memcpy (gsl_permutation *dest, const gsl_permutation *src) This function copies the elements of the permutation *note src: 403. into the permutation *note dest: 403. The two permutations must have the same size.  File: gsl-ref.info, Node: Accessing permutation elements, Next: Permutation properties, Prev: Permutation allocation, Up: Permutations 9.3 Accessing permutation elements ================================== The following functions can be used to access and manipulate permutations. -- Function: size_t gsl_permutation_get (const gsl_permutation *p, const size_t i) This function returns the value of the *note i: 405.-th element of the permutation *note p: 405. If *note i: 405. lies outside the allowed range of 0 to n - 1 then the error handler is invoked and 0 is returned. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: int gsl_permutation_swap (gsl_permutation *p, const size_t i, const size_t j) This function exchanges the *note i: 406.-th and *note j: 406.-th elements of the permutation *note p: 406.  File: gsl-ref.info, Node: Permutation properties, Next: Permutation functions, Prev: Accessing permutation elements, Up: Permutations 9.4 Permutation properties ========================== -- Function: size_t gsl_permutation_size (const gsl_permutation *p) This function returns the size of the permutation *note p: 408. -- Function: size_t *gsl_permutation_data (const gsl_permutation *p) This function returns a pointer to the array of elements in the permutation *note p: 409. -- Function: int gsl_permutation_valid (const gsl_permutation *p) This function checks that the permutation *note p: 40a. is valid. The ‘n’ elements should contain each of the numbers 0 to ‘n - 1’ once and only once.  File: gsl-ref.info, Node: Permutation functions, Next: Applying Permutations, Prev: Permutation properties, Up: Permutations 9.5 Permutation functions ========================= -- Function: void gsl_permutation_reverse (gsl_permutation *p) This function reverses the elements of the permutation *note p: 40c. -- Function: int gsl_permutation_inverse (gsl_permutation *inv, const gsl_permutation *p) This function computes the inverse of the permutation *note p: 40d, storing the result in *note inv: 40d. -- Function: int gsl_permutation_next (gsl_permutation *p) This function advances the permutation *note p: 40e. to the next permutation in lexicographic order and returns ‘GSL_SUCCESS’. If no further permutations are available it returns ‘GSL_FAILURE’ and leaves *note p: 40e. unmodified. 
Starting with the identity permutation and repeatedly applying this function will iterate through all possible permutations of a given order. -- Function: int gsl_permutation_prev (gsl_permutation *p) This function steps backwards from the permutation *note p: 40f. to the previous permutation in lexicographic order, returning ‘GSL_SUCCESS’. If no previous permutation is available it returns ‘GSL_FAILURE’ and leaves *note p: 40f. unmodified.  File: gsl-ref.info, Node: Applying Permutations, Next: Reading and writing permutations, Prev: Permutation functions, Up: Permutations 9.6 Applying Permutations ========================= The following functions are defined in the header files ‘gsl_permute.h’ and ‘gsl_permute_vector.h’. -- Function: int gsl_permute (const size_t *p, double *data, size_t stride, size_t n) This function applies the permutation *note p: 411. to the array *note data: 411. of size *note n: 411. with stride *note stride: 411. -- Function: int gsl_permute_inverse (const size_t *p, double *data, size_t stride, size_t n) This function applies the inverse of the permutation *note p: 412. to the array *note data: 412. of size *note n: 412. with stride *note stride: 412. -- Function: int gsl_permute_vector (const gsl_permutation *p, gsl_vector *v) This function applies the permutation *note p: 413. to the elements of the vector *note v: 413, considered as a row-vector acted on by a permutation matrix from the right, v' = v P. The j-th column of the permutation matrix P is given by the p_j-th column of the identity matrix. The permutation *note p: 413. and the vector *note v: 413. must have the same length. -- Function: int gsl_permute_vector_inverse (const gsl_permutation *p, gsl_vector *v) This function applies the inverse of the permutation *note p: 414. to the elements of the vector *note v: 414, considered as a row-vector acted on by an inverse permutation matrix from the right, v' = v P^T. Note that for permutation matrices the inverse is the same as the transpose. The j-th column of the permutation matrix P is given by the p_j-th column of the identity matrix. The permutation *note p: 414. and the vector *note v: 414. must have the same length. -- Function: int gsl_permute_matrix (const gsl_permutation *p, gsl_matrix *A) This function applies the permutation *note p: 415. to the matrix *note A: 415. from the right, A' = A P. The j-th column of the permutation matrix P is given by the p_j-th column of the identity matrix. This effectively permutes the columns of *note A: 415. according to the permutation *note p: 415, and so the number of columns of *note A: 415. must equal the size of the permutation *note p: 415. -- Function: int gsl_permutation_mul (gsl_permutation *p, const gsl_permutation *pa, const gsl_permutation *pb) This function combines the two permutations *note pa: 416. and *note pb: 416. into a single permutation *note p: 416, where p = pa * pb The permutation *note p: 416. is equivalent to applying *note pb: 416. first and then *note pa: 416.  File: gsl-ref.info, Node: Reading and writing permutations, Next: Permutations in cyclic form, Prev: Applying Permutations, Up: Permutations 9.7 Reading and writing permutations ==================================== The library provides functions for reading and writing permutations to a file as binary data or formatted text. -- Function: int gsl_permutation_fwrite (FILE *stream, const gsl_permutation *p) This function writes the elements of the permutation *note p: 418. to the stream *note stream: 418. in binary format. 
The function returns ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. -- Function: int gsl_permutation_fread (FILE *stream, gsl_permutation *p) This function reads into the permutation *note p: 419. from the open stream *note stream: 419. in binary format. The permutation *note p: 419. must be preallocated with the correct length since the function uses the size of *note p: 419. to determine how many bytes to read. The function returns ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture. -- Function: int gsl_permutation_fprintf (FILE *stream, const gsl_permutation *p, const char *format) This function writes the elements of the permutation *note p: 41a. line-by-line to the stream *note stream: 41a. using the format specifier *note format: 41a, which should be suitable for a type of ‘size_t’. In ISO C99 the type modifier ‘z’ represents ‘size_t’, so ‘"%zu\n"’ is a suitable format (1). The function returns ‘GSL_EFAILED’ if there was a problem writing to the file. -- Function: int gsl_permutation_fscanf (FILE *stream, gsl_permutation *p) This function reads formatted data from the stream *note stream: 41b. into the permutation *note p: 41b. The permutation *note p: 41b. must be preallocated with the correct length since the function uses the size of *note p: 41b. to determine how many numbers to read. The function returns ‘GSL_EFAILED’ if there was a problem reading from the file. ---------- Footnotes ---------- (1) (1) In versions of the GNU C library prior to the ISO C99 standard, the type modifier ‘Z’ was used instead.  File: gsl-ref.info, Node: Permutations in cyclic form, Next: Examples<4>, Prev: Reading and writing permutations, Up: Permutations 9.8 Permutations in cyclic form =============================== A permutation can be represented in both `linear' and `cyclic' notations. The functions described in this section convert between the two forms. The linear notation is an index mapping, and has already been described above. The cyclic notation expresses a permutation as a series of circular rearrangements of groups of elements, or `cycles'. For example, under the cycle (1 2 3), 1 is replaced by 2, 2 is replaced by 3 and 3 is replaced by 1 in a circular fashion. Cycles of different sets of elements can be combined independently, for example (1 2 3) (4 5) combines the cycle (1 2 3) with the cycle (4 5), which is an exchange of elements 4 and 5. A cycle of length one represents an element which is unchanged by the permutation and is referred to as a `singleton'. It can be shown that every permutation can be decomposed into combinations of cycles. The decomposition is not unique, but can always be rearranged into a standard `canonical form' by a reordering of elements. The library uses the canonical form defined in Knuth’s `Art of Computer Programming' (Vol 1, 3rd Ed, 1997) Section 1.3.3, p.178. The procedure for obtaining the canonical form given by Knuth is, 1. Write all singleton cycles explicitly 2. Within each cycle, put the smallest number first 3. Order the cycles in decreasing order of the first number in the cycle. For example, the linear representation (2 4 3 0 1) is represented as (1 4) (0 2 3) in canonical form. The permutation corresponds to an exchange of elements 1 and 4, and rotation of elements 0, 2 and 3. 
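This correspondence can also be checked programmatically with the conversion functions described below.  The following sketch (not one of the manual's examples) builds the linear permutation (2 4 3 0 1) and prints its canonical form, cycle count and inversion count:

     #include <stdio.h>
     #include <gsl/gsl_permutation.h>

     int
     main (void)
     {
       /* linear form (2 4 3 0 1), entered element by element */
       const size_t linear[] = { 2, 4, 3, 0, 1 };
       size_t i;

       gsl_permutation * p = gsl_permutation_alloc (5);
       gsl_permutation * q = gsl_permutation_alloc (5);

       for (i = 0; i < 5; i++)
         p->data[i] = linear[i];

       gsl_permutation_linear_to_canonical (q, p);

       printf ("canonical form (brackets omitted):");
       gsl_permutation_fprintf (stdout, q, " %zu");
       printf ("\ncycles = %zu, inversions = %zu\n",
               gsl_permutation_linear_cycles (p),
               gsl_permutation_inversions (p));

       gsl_permutation_free (p);
       gsl_permutation_free (q);
       return 0;
     }

The printed elements are simply the cycles written one after another without brackets, i.e. 1 4 0 2 3, in agreement with the canonical form (1 4) (0 2 3) worked out above.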
The important property of the canonical form is that it can be reconstructed from the contents of each cycle without the brackets.  In addition, by removing the brackets it can be considered as a linear representation of a different permutation.  In the example given above the permutation (2 4 3 0 1) would become (1 4 0 2 3).  This mapping has many applications in the theory of permutations.

 -- Function: int gsl_permutation_linear_to_canonical (gsl_permutation *q, const gsl_permutation *p)

     This function computes the canonical form of the permutation *note p: 41d. and stores it in the output argument *note q: 41d.

 -- Function: int gsl_permutation_canonical_to_linear (gsl_permutation *p, const gsl_permutation *q)

     This function converts a permutation *note q: 41e. in canonical form back into linear form storing it in the output argument *note p: 41e.

 -- Function: size_t gsl_permutation_inversions (const gsl_permutation *p)

     This function counts the number of inversions in the permutation *note p: 41f.  An inversion is any pair of elements that are not in order.  For example, the permutation 2031 has three inversions, corresponding to the pairs (2,0) (2,1) and (3,1).  The identity permutation has no inversions.

 -- Function: size_t gsl_permutation_linear_cycles (const gsl_permutation *p)

     This function counts the number of cycles in the permutation *note p: 420, given in linear form.

 -- Function: size_t gsl_permutation_canonical_cycles (const gsl_permutation *q)

     This function counts the number of cycles in the permutation *note q: 421, given in canonical form.

File: gsl-ref.info, Node: Examples<4>, Next: References and Further Reading<5>, Prev: Permutations in cyclic form, Up: Permutations

9.9 Examples
============

The example program below creates a random permutation (by shuffling the elements of the identity) and finds its inverse.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>
     #include <gsl/gsl_permutation.h>

     int
     main (void)
     {
       const size_t N = 10;
       const gsl_rng_type * T;
       gsl_rng * r;

       gsl_permutation * p = gsl_permutation_alloc (N);
       gsl_permutation * q = gsl_permutation_alloc (N);

       gsl_rng_env_setup();
       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       printf ("initial permutation:");
       gsl_permutation_init (p);
       gsl_permutation_fprintf (stdout, p, " %u");
       printf ("\n");

       printf (" random permutation:");
       gsl_ran_shuffle (r, p->data, N, sizeof(size_t));
       gsl_permutation_fprintf (stdout, p, " %u");
       printf ("\n");

       printf ("inverse permutation:");
       gsl_permutation_inverse (q, p);
       gsl_permutation_fprintf (stdout, q, " %u");
       printf ("\n");

       gsl_permutation_free (p);
       gsl_permutation_free (q);
       gsl_rng_free (r);

       return 0;
     }

Here is the output from the program:

     $ ./a.out
     initial permutation: 0 1 2 3 4 5 6 7 8 9
      random permutation: 1 3 5 2 7 6 0 4 9 8
     inverse permutation: 6 0 3 1 7 2 5 4 9 8

The random permutation ‘p[i]’ and its inverse ‘q[i]’ are related through the identity ‘p[q[i]] = i’, which can be verified from the output.

The next example program steps forwards through all possible third order permutations, starting from the identity,

     #include <stdio.h>
     #include <gsl/gsl_permutation.h>

     int
     main (void)
     {
       gsl_permutation * p = gsl_permutation_alloc (3);

       gsl_permutation_init (p);

       do
         {
           gsl_permutation_fprintf (stdout, p, " %u");
           printf ("\n");
         }
       while (gsl_permutation_next(p) == GSL_SUCCESS);

       gsl_permutation_free (p);

       return 0;
     }

Here is the output from the program:

     $ ./a.out
      0 1 2
      0 2 1
      1 0 2
      1 2 0
      2 0 1
      2 1 0

The permutations are generated in lexicographic order.
To reverse the sequence, begin with the final permutation (which is the reverse of the identity) and replace *note gsl_permutation_next(): 40e. with *note gsl_permutation_prev(): 40f.  File: gsl-ref.info, Node: References and Further Reading<5>, Prev: Examples<4>, Up: Permutations 9.10 References and Further Reading =================================== The subject of permutations is covered extensively in the following, * Donald E. Knuth, The Art of Computer Programming: Sorting and Searching (Vol 3, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896850. For the definition of the `canonical form' see, * Donald E. Knuth, The Art of Computer Programming: Fundamental Algorithms (Vol 1, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896850. Section 1.3.3, An Unusual Correspondence, p.178–179.  File: gsl-ref.info, Node: Combinations, Next: Multisets, Prev: Permutations, Up: Top 10 Combinations *************** This chapter describes functions for creating and manipulating combinations. A combination c is represented by an array of k integers in the range 0 to n - 1, where each value c_i occurs at most once. The combination c corresponds to indices of k elements chosen from an n element vector. Combinations are useful for iterating over all k-element subsets of a set. The functions described in this chapter are defined in the header file ‘gsl_combination.h’. * Menu: * The Combination struct:: * Combination allocation:: * Accessing combination elements:: * Combination properties:: * Combination functions:: * Reading and writing combinations:: * Examples: Examples<5>. * References and Further Reading: References and Further Reading<6>.  File: gsl-ref.info, Node: The Combination struct, Next: Combination allocation, Up: Combinations 10.1 The Combination struct =========================== -- Type: gsl_combination A combination is defined by a structure containing three components, the values of n and k, and a pointer to the combination array. The elements of the combination array are all of type ‘size_t’, and are stored in increasing order. The *note gsl_combination: 427. structure looks like this: typedef struct { size_t n; size_t k; size_t *data; } gsl_combination;  File: gsl-ref.info, Node: Combination allocation, Next: Accessing combination elements, Prev: The Combination struct, Up: Combinations 10.2 Combination allocation =========================== -- Function: *note gsl_combination: 427. *gsl_combination_alloc (size_t n, size_t k) This function allocates memory for a new combination with parameters *note n: 429, *note k: 429. The combination is not initialized and its elements are undefined. Use the function *note gsl_combination_calloc(): 42a. if you want to create a combination which is initialized to the lexicographically first combination. A null pointer is returned if insufficient memory is available to create the combination. -- Function: *note gsl_combination: 427. *gsl_combination_calloc (size_t n, size_t k) This function allocates memory for a new combination with parameters *note n: 42a, *note k: 42a. and initializes it to the lexicographically first combination. A null pointer is returned if insufficient memory is available to create the combination. -- Function: void gsl_combination_init_first (gsl_combination *c) This function initializes the combination *note c: 42b. to the lexicographically first combination, i.e. (0, 1, 2, \dots, k - 1). -- Function: void gsl_combination_init_last (gsl_combination *c) This function initializes the combination *note c: 42c. 
to the lexicographically last combination, i.e. (n - k, n - k + 1, \dots, n - 1). -- Function: void gsl_combination_free (gsl_combination *c) This function frees all the memory used by the combination *note c: 42d. -- Function: int gsl_combination_memcpy (gsl_combination *dest, const gsl_combination *src) This function copies the elements of the combination *note src: 42e. into the combination *note dest: 42e. The two combinations must have the same size.  File: gsl-ref.info, Node: Accessing combination elements, Next: Combination properties, Prev: Combination allocation, Up: Combinations 10.3 Accessing combination elements =================================== The following function can be used to access the elements of a combination. -- Function: size_t gsl_combination_get (const gsl_combination *c, const size_t i) This function returns the value of the *note i: 430.-th element of the combination *note c: 430. If *note i: 430. lies outside the allowed range of 0 to k - 1 then the error handler is invoked and 0 is returned. An inline version of this function is used when ‘HAVE_INLINE’ is defined.  File: gsl-ref.info, Node: Combination properties, Next: Combination functions, Prev: Accessing combination elements, Up: Combinations 10.4 Combination properties =========================== -- Function: size_t gsl_combination_n (const gsl_combination *c) This function returns the range (n) of the combination c. -- Function: size_t gsl_combination_k (const gsl_combination *c) This function returns the number of elements (k) in the combination *note c: 433. -- Function: size_t *gsl_combination_data (const gsl_combination *c) This function returns a pointer to the array of elements in the combination *note c: 434. -- Function: int gsl_combination_valid (gsl_combination *c) This function checks that the combination *note c: 435. is valid. The ‘k’ elements should lie in the range 0 to n - 1, with each value occurring once at most and in increasing order.  File: gsl-ref.info, Node: Combination functions, Next: Reading and writing combinations, Prev: Combination properties, Up: Combinations 10.5 Combination functions ========================== -- Function: int gsl_combination_next (gsl_combination *c) This function advances the combination *note c: 437. to the next combination in lexicographic order and returns ‘GSL_SUCCESS’. If no further combinations are available it returns ‘GSL_FAILURE’ and leaves *note c: 437. unmodified. Starting with the first combination and repeatedly applying this function will iterate through all possible combinations of a given order. -- Function: int gsl_combination_prev (gsl_combination *c) This function steps backwards from the combination *note c: 438. to the previous combination in lexicographic order, returning ‘GSL_SUCCESS’. If no previous combination is available it returns ‘GSL_FAILURE’ and leaves *note c: 438. unmodified.  File: gsl-ref.info, Node: Reading and writing combinations, Next: Examples<5>, Prev: Combination functions, Up: Combinations 10.6 Reading and writing combinations ===================================== The library provides functions for reading and writing combinations to a file as binary data or formatted text. -- Function: int gsl_combination_fwrite (FILE *stream, const gsl_combination *c) This function writes the elements of the combination *note c: 43a. to the stream *note stream: 43a. in binary format. The function returns ‘GSL_EFAILED’ if there was a problem writing to the file. 
Since the data is written in the native binary format it may not be portable between different architectures.

 -- Function: int gsl_combination_fread (FILE *stream, gsl_combination *c)

     This function reads elements from the open stream *note stream: 43b. into the combination *note c: 43b. in binary format.  The combination *note c: 43b. must be preallocated with correct values of n and k since the function uses the size of *note c: 43b. to determine how many bytes to read.  The function returns ‘GSL_EFAILED’ if there was a problem reading from the file.  The data is assumed to have been written in the native binary format on the same architecture.

 -- Function: int gsl_combination_fprintf (FILE *stream, const gsl_combination *c, const char *format)

     This function writes the elements of the combination *note c: 43c. line-by-line to the stream *note stream: 43c. using the format specifier *note format: 43c, which should be suitable for a type of ‘size_t’.  In ISO C99 the type modifier ‘z’ represents ‘size_t’, so ‘"%zu\n"’ is a suitable format (1).  The function returns ‘GSL_EFAILED’ if there was a problem writing to the file.

 -- Function: int gsl_combination_fscanf (FILE *stream, gsl_combination *c)

     This function reads formatted data from the stream *note stream: 43d. into the combination *note c: 43d.  The combination *note c: 43d. must be preallocated with correct values of n and k since the function uses the size of *note c: 43d. to determine how many numbers to read.  The function returns ‘GSL_EFAILED’ if there was a problem reading from the file.

   ---------- Footnotes ----------

   (1) In versions of the GNU C library prior to the ISO C99 standard, the type modifier ‘Z’ was used instead.

File: gsl-ref.info, Node: Examples<5>, Next: References and Further Reading<6>, Prev: Reading and writing combinations, Up: Combinations

10.7 Examples
=============

The example program below prints all subsets of the set {0,1,2,3} ordered by size.  Subsets of the same size are ordered lexicographically.

     #include <stdio.h>
     #include <gsl/gsl_combination.h>

     int
     main (void)
     {
       gsl_combination * c;
       size_t i;

       printf ("All subsets of {0,1,2,3} by size:\n") ;
       for (i = 0; i <= 4; i++)
         {
           c = gsl_combination_calloc (4, i);
           do
             {
               printf ("{");
               gsl_combination_fprintf (stdout, c, " %u");
               printf (" }\n");
             }
           while (gsl_combination_next (c) == GSL_SUCCESS);
           gsl_combination_free (c);
         }

       return 0;
     }

Here is the output from the program,

     All subsets of {0,1,2,3} by size:
     { }
     { 0 }
     { 1 }
     { 2 }
     { 3 }
     { 0 1 }
     { 0 2 }
     { 0 3 }
     { 1 2 }
     { 1 3 }
     { 2 3 }
     { 0 1 2 }
     { 0 1 3 }
     { 0 2 3 }
     { 1 2 3 }
     { 0 1 2 3 }

All 16 subsets are generated, and the subsets of each size are sorted lexicographically.

File: gsl-ref.info, Node: References and Further Reading<6>, Prev: Examples<5>, Up: Combinations

10.8 References and Further Reading
===================================

Further information on combinations can be found in,

   * Donald L. Kreher, Douglas R. Stinson, Combinatorial Algorithms: Generation, Enumeration and Search, 1998, CRC Press LLC, ISBN 084933988X

File: gsl-ref.info, Node: Multisets, Next: Sorting, Prev: Combinations, Up: Top

11 Multisets
************

This chapter describes functions for creating and manipulating multisets.  A multiset c is represented by an array of k integers in the range 0 to n - 1, where each value c_i may occur more than once.  The multiset c corresponds to indices of k elements chosen from an n element vector with replacement.  In mathematical terms, n is the cardinality of the multiset while k is the maximum multiplicity of any value.
Multisets are useful, for example, when iterating over the indices of a k-th order symmetric tensor in n-space. The functions described in this chapter are defined in the header file ‘gsl_multiset.h’. * Menu: * The Multiset struct:: * Multiset allocation:: * Accessing multiset elements:: * Multiset properties:: * Multiset functions:: * Reading and writing multisets:: * Examples: Examples<6>.  File: gsl-ref.info, Node: The Multiset struct, Next: Multiset allocation, Up: Multisets 11.1 The Multiset struct ======================== -- Type: gsl_multiset A multiset is defined by a structure containing three components, the values of n and k, and a pointer to the multiset array. The elements of the multiset array are all of type ‘size_t’, and are stored in increasing order. The *note gsl_multiset: 443. structure looks like this: typedef struct { size_t n; size_t k; size_t *data; } gsl_multiset;  File: gsl-ref.info, Node: Multiset allocation, Next: Accessing multiset elements, Prev: The Multiset struct, Up: Multisets 11.2 Multiset allocation ======================== -- Function: *note gsl_multiset: 443. *gsl_multiset_alloc (size_t n, size_t k) This function allocates memory for a new multiset with parameters *note n: 445, *note k: 445. The multiset is not initialized and its elements are undefined. Use the function *note gsl_multiset_calloc(): 446. if you want to create a multiset which is initialized to the lexicographically first multiset element. A null pointer is returned if insufficient memory is available to create the multiset. -- Function: *note gsl_multiset: 443. *gsl_multiset_calloc (size_t n, size_t k) This function allocates memory for a new multiset with parameters *note n: 446, *note k: 446. and initializes it to the lexicographically first multiset element. A null pointer is returned if insufficient memory is available to create the multiset. -- Function: void gsl_multiset_init_first (gsl_multiset *c) This function initializes the multiset *note c: 447. to the lexicographically first multiset element, i.e. 0 repeated k times. -- Function: void gsl_multiset_init_last (gsl_multiset *c) This function initializes the multiset *note c: 448. to the lexicographically last multiset element, i.e. n-1 repeated k times. -- Function: void gsl_multiset_free (gsl_multiset *c) This function frees all the memory used by the multiset *note c: 449. -- Function: int gsl_multiset_memcpy (gsl_multiset *dest, const gsl_multiset *src) This function copies the elements of the multiset *note src: 44a. into the multiset *note dest: 44a. The two multisets must have the same size.  File: gsl-ref.info, Node: Accessing multiset elements, Next: Multiset properties, Prev: Multiset allocation, Up: Multisets 11.3 Accessing multiset elements ================================ The following function can be used to access the elements of a multiset. -- Function: size_t gsl_multiset_get (const gsl_multiset *c, const size_t i) This function returns the value of the *note i: 44c.-th element of the multiset *note c: 44c. If *note i: 44c. lies outside the allowed range of 0 to k - 1 then the error handler is invoked and 0 is returned. An inline version of this function is used when ‘HAVE_INLINE’ is defined.  File: gsl-ref.info, Node: Multiset properties, Next: Multiset functions, Prev: Accessing multiset elements, Up: Multisets 11.4 Multiset properties ======================== -- Function: size_t gsl_multiset_n (const gsl_multiset *c) This function returns the range (n) of the multiset *note c: 44e. 
-- Function: size_t gsl_multiset_k (const gsl_multiset *c) This function returns the number of elements (k) in the multiset *note c: 44f. -- Function: size_t *gsl_multiset_data (const gsl_multiset *c) This function returns a pointer to the array of elements in the multiset *note c: 450. -- Function: int gsl_multiset_valid (gsl_multiset *c) This function checks that the multiset *note c: 451. is valid. The ‘k’ elements should lie in the range 0 to n - 1, with each value occurring in nondecreasing order.  File: gsl-ref.info, Node: Multiset functions, Next: Reading and writing multisets, Prev: Multiset properties, Up: Multisets 11.5 Multiset functions ======================= -- Function: int gsl_multiset_next (gsl_multiset *c) This function advances the multiset *note c: 453. to the next multiset element in lexicographic order and returns ‘GSL_SUCCESS’. If no further multisets elements are available it returns ‘GSL_FAILURE’ and leaves *note c: 453. unmodified. Starting with the first multiset and repeatedly applying this function will iterate through all possible multisets of a given order. -- Function: int gsl_multiset_prev (gsl_multiset *c) This function steps backwards from the multiset *note c: 454. to the previous multiset element in lexicographic order, returning ‘GSL_SUCCESS’. If no previous multiset is available it returns ‘GSL_FAILURE’ and leaves *note c: 454. unmodified.  File: gsl-ref.info, Node: Reading and writing multisets, Next: Examples<6>, Prev: Multiset functions, Up: Multisets 11.6 Reading and writing multisets ================================== The library provides functions for reading and writing multisets to a file as binary data or formatted text. -- Function: int gsl_multiset_fwrite (FILE *stream, const gsl_multiset *c) This function writes the elements of the multiset *note c: 456. to the stream *note stream: 456. in binary format. The function returns ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. -- Function: int gsl_multiset_fread (FILE *stream, gsl_multiset *c) This function reads elements from the open stream *note stream: 457. into the multiset *note c: 457. in binary format. The multiset *note c: 457. must be preallocated with correct values of n and k since the function uses the size of *note c: 457. to determine how many bytes to read. The function returns ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture. -- Function: int gsl_multiset_fprintf (FILE *stream, const gsl_multiset *c, const char *format) This function writes the elements of the multiset *note c: 458. line-by-line to the stream *note stream: 458. using the format specifier *note format: 458, which should be suitable for a type of ‘size_t’. In ISO C99 the type modifier ‘z’ represents ‘size_t’, so ‘"%zu\n"’ is a suitable format (1). The function returns ‘GSL_EFAILED’ if there was a problem writing to the file. -- Function: int gsl_multiset_fscanf (FILE *stream, gsl_multiset *c) This function reads formatted data from the stream *note stream: 459. into the multiset *note c: 459. The multiset *note c: 459. must be preallocated with correct values of n and k since the function uses the size of *note c: 459. to determine how many numbers to read. The function returns ‘GSL_EFAILED’ if there was a problem reading from the file. 
   ---------- Footnotes ----------

   (1) In versions of the GNU C library prior to the ISO C99 standard, the type modifier ‘Z’ was used instead.

File: gsl-ref.info, Node: Examples<6>, Prev: Reading and writing multisets, Up: Multisets

11.7 Examples
=============

The example program below prints all multisets elements containing the values {0,1,2,3} ordered by size.  Multiset elements of the same size are ordered lexicographically.

     #include <stdio.h>
     #include <gsl/gsl_multiset.h>

     int
     main (void)
     {
       gsl_multiset * c;
       size_t i;

       printf ("All multisets of {0,1,2,3} by size:\n") ;
       for (i = 0; i <= 4; i++)
         {
           c = gsl_multiset_calloc (4, i);
           do
             {
               printf ("{");
               gsl_multiset_fprintf (stdout, c, " %u");
               printf (" }\n");
             }
           while (gsl_multiset_next (c) == GSL_SUCCESS);
           gsl_multiset_free (c);
         }

       return 0;
     }

Here is the output from the program,

     All multisets of {0,1,2,3} by size:
     { } { 0 } { 1 } { 2 } { 3 }
     { 0 0 } { 0 1 } { 0 2 } { 0 3 } { 1 1 } { 1 2 } { 1 3 } { 2 2 } { 2 3 } { 3 3 }
     { 0 0 0 } { 0 0 1 } { 0 0 2 } { 0 0 3 } { 0 1 1 } { 0 1 2 } { 0 1 3 } { 0 2 2 } { 0 2 3 } { 0 3 3 } { 1 1 1 } { 1 1 2 } { 1 1 3 } { 1 2 2 } { 1 2 3 } { 1 3 3 } { 2 2 2 } { 2 2 3 } { 2 3 3 } { 3 3 3 }
     { 0 0 0 0 } { 0 0 0 1 } { 0 0 0 2 } { 0 0 0 3 } { 0 0 1 1 } { 0 0 1 2 } { 0 0 1 3 } { 0 0 2 2 } { 0 0 2 3 } { 0 0 3 3 } { 0 1 1 1 } { 0 1 1 2 } { 0 1 1 3 } { 0 1 2 2 } { 0 1 2 3 } { 0 1 3 3 } { 0 2 2 2 } { 0 2 2 3 } { 0 2 3 3 } { 0 3 3 3 } { 1 1 1 1 } { 1 1 1 2 } { 1 1 1 3 } { 1 1 2 2 } { 1 1 2 3 } { 1 1 3 3 } { 1 2 2 2 } { 1 2 2 3 } { 1 2 3 3 } { 1 3 3 3 } { 2 2 2 2 } { 2 2 2 3 } { 2 2 3 3 } { 2 3 3 3 } { 3 3 3 3 }

(In the actual program output each multiset element appears on its own line.)  All 70 multisets are generated and sorted lexicographically.

File: gsl-ref.info, Node: Sorting, Next: BLAS Support, Prev: Multisets, Up: Top

12 Sorting
**********

This chapter describes functions for sorting data, both directly and indirectly (using an index).  All the functions use the `heapsort' algorithm.  Heapsort is an O(N \log N) algorithm which operates in-place and does not require any additional storage.  It also provides consistent performance, the running time for its worst-case (ordered data) being not significantly longer than the average and best cases.

Note that the heapsort algorithm does not preserve the relative ordering of equal elements—it is an `unstable' sort.  However the resulting order of equal elements will be consistent across different platforms when using these functions.

* Menu:

* Sorting objects::
* Sorting vectors::
* Selecting the k smallest or largest elements::
* Computing the rank::
* Examples: Examples<7>.
* References and Further Reading: References and Further Reading<7>.

File: gsl-ref.info, Node: Sorting objects, Next: Sorting vectors, Up: Sorting

12.1 Sorting objects
====================

The following function provides a simple alternative to the standard library function ‘qsort()’.  It is intended for systems lacking ‘qsort()’, not as a replacement for it.  The function ‘qsort()’ should be used whenever possible, as it will be faster and can provide stable ordering of equal elements.  Documentation for ‘qsort()’ is available in the GNU C Library Reference Manual.

The functions described in this section are defined in the header file ‘gsl_heapsort.h’.

 -- Function: void gsl_heapsort (void *array, size_t count, size_t size, gsl_comparison_fn_t compare)

     This function sorts the *note count: 45e. elements of the array *note array: 45e, each of size *note size: 45e, into ascending order using the comparison function *note compare: 45e.
The type of the comparison function is defined by -- Type: gsl_comparison_fn_t int (*gsl_comparison_fn_t) (const void * a, const void * b) A comparison function should return a negative integer if the first argument is less than the second argument, ‘0’ if the two arguments are equal and a positive integer if the first argument is greater than the second argument. For example, the following function can be used to sort doubles into ascending numerical order. int compare_doubles (const double * a, const double * b) { if (*a > *b) return 1; else if (*a < *b) return -1; else return 0; } The appropriate function call to perform the sort is: gsl_heapsort (array, count, sizeof(double), compare_doubles); Note that unlike ‘qsort()’ the heapsort algorithm cannot be made into a stable sort by pointer arithmetic. The trick of comparing pointers for equal elements in the comparison function does not work for the heapsort algorithm. The heapsort algorithm performs an internal rearrangement of the data which destroys its initial ordering. -- Function: int gsl_heapsort_index (size_t *p, const void *array, size_t count, size_t size, gsl_comparison_fn_t compare) This function indirectly sorts the *note count: 460. elements of the array *note array: 460, each of size *note size: 460, into ascending order using the comparison function *note compare: 460. The resulting permutation is stored in *note p: 460, an array of length ‘n’. The elements of *note p: 460. give the index of the array element which would have been stored in that position if the array had been sorted in place. The first element of *note p: 460. gives the index of the least element in *note array: 460, and the last element of *note p: 460. gives the index of the greatest element in *note array: 460. The array itself is not changed.  File: gsl-ref.info, Node: Sorting vectors, Next: Selecting the k smallest or largest elements, Prev: Sorting objects, Up: Sorting 12.2 Sorting vectors ==================== The following functions will sort the elements of an array or vector, either directly or indirectly. They are defined for all real and integer types using the normal suffix rules. For example, the ‘float’ versions of the array functions are ‘gsl_sort_float()’ and ‘gsl_sort_float_index()’. The corresponding vector functions are ‘gsl_sort_vector_float()’ and ‘gsl_sort_vector_float_index()’. The prototypes are available in the header files ‘gsl_sort_float.h’ ‘gsl_sort_vector_float.h’. The complete set of prototypes can be included using the header files ‘gsl_sort.h’ and ‘gsl_sort_vector.h’. There are no functions for sorting complex arrays or vectors, since the ordering of complex numbers is not uniquely defined. To sort a complex vector by magnitude compute a real vector containing the magnitudes of the complex elements, and sort this vector indirectly. The resulting index gives the appropriate ordering of the original complex vector. -- Function: void gsl_sort (double *data, const size_t stride, size_t n) This function sorts the *note n: 462. elements of the array *note data: 462. with stride *note stride: 462. into ascending numerical order. -- Function: void gsl_sort2 (double *data1, const size_t stride1, double *data2, const size_t stride2, size_t n) This function sorts the *note n: 463. elements of the array *note data1: 463. with stride *note stride1: 463. into ascending numerical order, while making the same rearrangement of the array *note data2: 463. with stride *note stride2: 463, also of size *note n: 463. 
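As a small illustration of ‘gsl_sort2()’ (a sketch, not taken from the manual's example set), the following program sorts an array of keys while keeping an associated array of values aligned with it:

     #include <stdio.h>
     #include <gsl/gsl_sort.h>

     int
     main (void)
     {
       /* the keys and values are kept in step by gsl_sort2 */
       double key[5]   = { 3.0, 1.0, 4.0, 1.5, 2.0 };
       double value[5] = { 30.0, 10.0, 40.0, 15.0, 20.0 };
       size_t i;

       gsl_sort2 (key, 1, value, 1, 5);   /* unit stride, 5 elements */

       for (i = 0; i < 5; i++)
         printf ("%g -> %g\n", key[i], value[i]);

       return 0;
     }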
-- Function: void gsl_sort_vector (gsl_vector *v) This function sorts the elements of the vector *note v: 464. into ascending numerical order. -- Function: void gsl_sort_vector2 (gsl_vector *v1, gsl_vector *v2) This function sorts the elements of the vector *note v1: 465. into ascending numerical order, while making the same rearrangement of the vector *note v2: 465. -- Function: void gsl_sort_index (size_t *p, const double *data, size_t stride, size_t n) This function indirectly sorts the *note n: 466. elements of the array *note data: 466. with stride *note stride: 466. into ascending order, storing the resulting permutation in *note p: 466. The array *note p: 466. must be allocated with a sufficient length to store the *note n: 466. elements of the permutation. The elements of *note p: 466. give the index of the array element which would have been stored in that position if the array had been sorted in place. The array *note data: 466. is not changed. -- Function: int gsl_sort_vector_index (gsl_permutation *p, const gsl_vector *v) This function indirectly sorts the elements of the vector *note v: 467. into ascending order, storing the resulting permutation in *note p: 467. The elements of *note p: 467. give the index of the vector element which would have been stored in that position if the vector had been sorted in place. The first element of *note p: 467. gives the index of the least element in *note v: 467, and the last element of *note p: 467. gives the index of the greatest element in *note v: 467. The vector *note v: 467. is not changed.  File: gsl-ref.info, Node: Selecting the k smallest or largest elements, Next: Computing the rank, Prev: Sorting vectors, Up: Sorting 12.3 Selecting the k smallest or largest elements ================================================= The functions described in this section select the k smallest or largest elements of a data set of size N. The routines use an O(kN) direct insertion algorithm which is suited to subsets that are small compared with the total size of the dataset. For example, the routines are useful for selecting the 10 largest values from one million data points, but not for selecting the largest 100,000 values. If the subset is a significant part of the total dataset it may be faster to sort all the elements of the dataset directly with an O(N \log N) algorithm and obtain the smallest or largest values that way. -- Function: int gsl_sort_smallest (double *dest, size_t k, const double *src, size_t stride, size_t n) This function copies the *note k: 469. smallest elements of the array *note src: 469, of size *note n: 469. and stride *note stride: 469, in ascending numerical order into the array *note dest: 469. The size *note k: 469. of the subset must be less than or equal to *note n: 469. The data *note src: 469. is not modified by this operation. -- Function: int gsl_sort_largest (double *dest, size_t k, const double *src, size_t stride, size_t n) This function copies the *note k: 46a. largest elements of the array *note src: 46a, of size *note n: 46a. and stride *note stride: 46a, in descending numerical order into the array *note dest: 46a. *note k: 46a. must be less than or equal to *note n: 46a. The data *note src: 46a. is not modified by this operation. -- Function: int gsl_sort_vector_smallest (double *dest, size_t k, const gsl_vector *v) -- Function: int gsl_sort_vector_largest (double *dest, size_t k, const gsl_vector *v) These functions copy the *note k: 46c. smallest or largest elements of the vector *note v: 46c. 
into the array *note dest: 46c. *note k: 46c. must be less than or equal to the length of the vector *note v: 46c. The following functions find the indices of the k smallest or largest elements of a dataset. -- Function: int gsl_sort_smallest_index (size_t *p, size_t k, const double *src, size_t stride, size_t n) This function stores the indices of the *note k: 46d. smallest elements of the array *note src: 46d, of size *note n: 46d. and stride *note stride: 46d, in the array *note p: 46d. The indices are chosen so that the corresponding data is in ascending numerical order. *note k: 46d. must be less than or equal to *note n: 46d. The data *note src: 46d. is not modified by this operation. -- Function: int gsl_sort_largest_index (size_t *p, size_t k, const double *src, size_t stride, size_t n) This function stores the indices of the *note k: 46e. largest elements of the array *note src: 46e, of size *note n: 46e. and stride *note stride: 46e, in the array *note p: 46e. The indices are chosen so that the corresponding data is in descending numerical order. *note k: 46e. must be less than or equal to *note n: 46e. The data *note src: 46e. is not modified by this operation. -- Function: int gsl_sort_vector_smallest_index (size_t *p, size_t k, const gsl_vector *v) -- Function: int gsl_sort_vector_largest_index (size_t *p, size_t k, const gsl_vector *v) These functions store the indices of the *note k: 470. smallest or largest elements of the vector *note v: 470. in the array *note p: 470. *note k: 470. must be less than or equal to the length of the vector *note v: 470.  File: gsl-ref.info, Node: Computing the rank, Next: Examples<7>, Prev: Selecting the k smallest or largest elements, Up: Sorting 12.4 Computing the rank ======================= The `rank' of an element is its order in the sorted data. The rank is the inverse of the index permutation, p. It can be computed using the following algorithm: for (i = 0; i < p->size; i++) { size_t pi = p->data[i]; rank->data[pi] = i; } This can be computed directly from the function ‘gsl_permutation_inverse(rank,p)’. The following function will print the rank of each element of the vector v: void print_rank (gsl_vector * v) { size_t i; size_t n = v->size; gsl_permutation * perm = gsl_permutation_alloc(n); gsl_permutation * rank = gsl_permutation_alloc(n); gsl_sort_vector_index (perm, v); gsl_permutation_inverse (rank, perm); for (i = 0; i < n; i++) { double vi = gsl_vector_get(v, i); printf ("element = %d, value = %g, rank = %d\n", i, vi, rank->data[i]); } gsl_permutation_free (perm); gsl_permutation_free (rank); }  File: gsl-ref.info, Node: Examples<7>, Next: References and Further Reading<7>, Prev: Computing the rank, Up: Sorting 12.5 Examples ============= The following example shows how to use the permutation p to print the elements of the vector v in ascending order: gsl_sort_vector_index (p, v); for (i = 0; i < v->size; i++) { double vpi = gsl_vector_get (v, p->data[i]); printf ("order = %d, value = %g\n", i, vpi); } The next example uses the function *note gsl_sort_smallest(): 469. 
to select the 5 smallest numbers from 100000 uniform random variates stored in an array,

     #include <stdio.h>
     #include <stdlib.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_sort.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       size_t i, k = 5, N = 100000;

       double * x = malloc (N * sizeof(double));
       double * small = malloc (k * sizeof(double));

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       for (i = 0; i < N; i++)
         {
           x[i] = gsl_rng_uniform(r);
         }

       gsl_sort_smallest (small, k, x, 1, N);

       printf ("%zu smallest values from %zu\n", k, N);

       for (i = 0; i < k; i++)
         {
           printf ("%zu: %.18f\n", i, small[i]);
         }

       free (x);
       free (small);
       gsl_rng_free (r);

       return 0;
     }

   The output lists the 5 smallest values, in ascending order,

     5 smallest values from 100000
     0: 0.000003489200025797
     1: 0.000008199829608202
     2: 0.000008953968062997
     3: 0.000010712770745158
     4: 0.000033531803637743

File: gsl-ref.info, Node: References and Further Reading<7>, Prev: Examples<7>, Up: Sorting

12.6 References and Further Reading
===================================

The subject of sorting is covered extensively in the following,

   * Donald E. Knuth, The Art of Computer Programming: Sorting and Searching (Vol 3, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896850.

   The Heapsort algorithm is described in the following book,

   * Robert Sedgewick, Algorithms in C, Addison-Wesley, ISBN 0201514257.

File: gsl-ref.info, Node: BLAS Support, Next: Linear Algebra, Prev: Sorting, Up: Top

13 BLAS Support
***************

The Basic Linear Algebra Subprograms (BLAS) define a set of fundamental operations on vectors and matrices which can be used to create optimized higher-level linear algebra functionality. The library provides a low-level layer which corresponds directly to the C-language BLAS standard, referred to here as “CBLAS”, and a higher-level interface for operations on GSL vectors and matrices. Users who are interested in simple operations on GSL vector and matrix objects should use the high-level layer described in this chapter. The functions are declared in the file ‘gsl_blas.h’ and should satisfy the needs of most users.

   Note that GSL matrices are implemented using dense-storage so the interface only includes the corresponding dense-storage BLAS functions. The full BLAS functionality for band-format and packed-format matrices is available through the low-level CBLAS interface. Similarly, GSL vectors are restricted to positive strides, whereas the low-level CBLAS interface supports negative strides as specified in the BLAS standard (1).

   The interface for the ‘gsl_cblas’ layer is specified in the file ‘gsl_cblas.h’. This interface corresponds to the BLAS Technical Forum’s standard for the C interface to legacy BLAS implementations. Users who have access to other conforming CBLAS implementations can use these in place of the version provided by the library. Note that users who have only a Fortran BLAS library can use a CBLAS conformant wrapper to convert it into a CBLAS library. A reference CBLAS wrapper for legacy Fortran implementations exists as part of the CBLAS standard and can be obtained from Netlib. The complete set of CBLAS functions is listed in an *note appendix: 476.

   There are three levels of BLAS operations,

`Level 1'
     Vector operations, e.g. y = \alpha x + y

`Level 2'
     Matrix-vector operations, e.g. y = \alpha A x + \beta y

`Level 3'
     Matrix-matrix operations, e.g. C = \alpha A B + C

   Each routine has a name which specifies the operation, the type of matrices involved and their precisions.
Some of the most common operations and their names are given below, `DOT' scalar product, x^T y `AXPY' vector sum, \alpha x + y `MV' matrix-vector product, A x `SV' matrix-vector solve, inv(A) x `MM' matrix-matrix product, A B `SM' matrix-matrix solve, inv(A) B The types of matrices are, `GE' general `GB' general band `SY' symmetric `SB' symmetric band `SP' symmetric packed `HE' hermitian `HB' hermitian band `HP' hermitian packed `TR' triangular `TB' triangular band `TP' triangular packed Each operation is defined for four precisions, `S' single real `D' double real `C' single complex `Z' double complex Thus, for example, the name SGEMM stands for “single-precision general matrix-matrix multiply” and ZGEMM stands for “double-precision complex matrix-matrix multiply”. Note that the vector and matrix arguments to BLAS functions must not be aliased, as the results are undefined when the underlying arrays overlap (*note Aliasing of arrays: 1f.). * Menu: * GSL BLAS Interface:: * Examples: Examples<8>. * References and Further Reading: References and Further Reading<8>. ---------- Footnotes ---------- (1) (1) In the low-level CBLAS interface, a negative stride accesses the vector elements in reverse order, i.e. the i-th element is given by (N-i)*|incx| for incx < 0.  File: gsl-ref.info, Node: GSL BLAS Interface, Next: Examples<8>, Up: BLAS Support 13.1 GSL BLAS Interface ======================= GSL provides dense vector and matrix objects, based on the relevant built-in types. The library provides an interface to the BLAS operations which apply to these objects. The interface to this functionality is given in the file ‘gsl_blas.h’. * Menu: * Level 1:: * Level 2:: * Level 3::  File: gsl-ref.info, Node: Level 1, Next: Level 2, Up: GSL BLAS Interface 13.1.1 Level 1 -------------- -- Function: int gsl_blas_sdsdot (float alpha, const gsl_vector_float *x, const gsl_vector_float *y, float *result) This function computes the sum \alpha + x^T y for the vectors *note x: 479. and *note y: 479, returning the result in *note result: 479. -- Function: int gsl_blas_sdot (const gsl_vector_float *x, const gsl_vector_float *y, float *result) -- Function: int gsl_blas_dsdot (const gsl_vector_float *x, const gsl_vector_float *y, double *result) -- Function: int gsl_blas_ddot (const gsl_vector *x, const gsl_vector *y, double *result) These functions compute the scalar product x^T y for the vectors *note x: 47c. and *note y: 47c, returning the result in *note result: 47c. -- Function: int gsl_blas_cdotu (const gsl_vector_complex_float *x, const gsl_vector_complex_float *y, gsl_complex_float *dotu) -- Function: int gsl_blas_zdotu (const gsl_vector_complex *x, const gsl_vector_complex *y, gsl_complex *dotu) These functions compute the complex scalar product x^T y for the vectors *note x: 47e. and *note y: 47e, returning the result in *note dotu: 47e. -- Function: int gsl_blas_cdotc (const gsl_vector_complex_float *x, const gsl_vector_complex_float *y, gsl_complex_float *dotc) -- Function: int gsl_blas_zdotc (const gsl_vector_complex *x, const gsl_vector_complex *y, gsl_complex *dotc) These functions compute the complex conjugate scalar product x^H y for the vectors *note x: 480. and *note y: 480, returning the result in *note dotc: 480. -- Function: float gsl_blas_snrm2 (const gsl_vector_float *x) -- Function: double gsl_blas_dnrm2 (const gsl_vector *x) These functions compute the Euclidean norm ||x||_2 = \sqrt{\sum x_i^2} of the vector *note x: 482. 
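As an illustrative sketch (the vectors and their values are arbitrary), the scalar product and Euclidean norm routines above can be applied to ordinary C arrays through vector views:

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double ax[] = { 1.0, 2.0, 3.0 };
       double ay[] = { 4.0, 5.0, 6.0 };

       gsl_vector_view x = gsl_vector_view_array (ax, 3);
       gsl_vector_view y = gsl_vector_view_array (ay, 3);

       double dot;

       gsl_blas_ddot (&x.vector, &y.vector, &dot);      /* x^T y = 32 */

       printf ("x.y = %g, ||x||_2 = %g\n",
               dot, gsl_blas_dnrm2 (&x.vector));

       return 0;
     }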
-- Function: float gsl_blas_scnrm2 (const gsl_vector_complex_float *x) -- Function: double gsl_blas_dznrm2 (const gsl_vector_complex *x) These functions compute the Euclidean norm of the complex vector *note x: 484, ||x||_2 = \sqrt{\sum (\Re(x_i)^2 + \Im(x_i)^2)}. -- Function: float gsl_blas_sasum (const gsl_vector_float *x) -- Function: double gsl_blas_dasum (const gsl_vector *x) These functions compute the absolute sum \sum |x_i| of the elements of the vector *note x: 486. -- Function: float gsl_blas_scasum (const gsl_vector_complex_float *x) -- Function: double gsl_blas_dzasum (const gsl_vector_complex *x) These functions compute the sum of the magnitudes of the real and imaginary parts of the complex vector *note x: 488, \sum \left( |\Re(x_i)| + |\Im(x_i)| \right). -- Function: CBLAS_INDEX_t gsl_blas_isamax (const gsl_vector_float *x) -- Function: CBLAS_INDEX_t gsl_blas_idamax (const gsl_vector *x) -- Function: CBLAS_INDEX_t gsl_blas_icamax (const gsl_vector_complex_float *x) -- Function: CBLAS_INDEX_t gsl_blas_izamax (const gsl_vector_complex *x) These functions return the index of the largest element of the vector *note x: 48c. The largest element is determined by its absolute magnitude for real vectors and by the sum of the magnitudes of the real and imaginary parts |\Re(x_i)| + |\Im(x_i)| for complex vectors. If the largest value occurs several times then the index of the first occurrence is returned. -- Function: int gsl_blas_sswap (gsl_vector_float *x, gsl_vector_float *y) -- Function: int gsl_blas_dswap (gsl_vector *x, gsl_vector *y) -- Function: int gsl_blas_cswap (gsl_vector_complex_float *x, gsl_vector_complex_float *y) -- Function: int gsl_blas_zswap (gsl_vector_complex *x, gsl_vector_complex *y) These functions exchange the elements of the vectors *note x: 490. and *note y: 490. -- Function: int gsl_blas_scopy (const gsl_vector_float *x, gsl_vector_float *y) -- Function: int gsl_blas_dcopy (const gsl_vector *x, gsl_vector *y) -- Function: int gsl_blas_ccopy (const gsl_vector_complex_float *x, gsl_vector_complex_float *y) -- Function: int gsl_blas_zcopy (const gsl_vector_complex *x, gsl_vector_complex *y) These functions copy the elements of the vector *note x: 494. into the vector *note y: 494. -- Function: int gsl_blas_saxpy (float alpha, const gsl_vector_float *x, gsl_vector_float *y) -- Function: int gsl_blas_daxpy (double alpha, const gsl_vector *x, gsl_vector *y) -- Function: int gsl_blas_caxpy (const gsl_complex_float alpha, const gsl_vector_complex_float *x, gsl_vector_complex_float *y) -- Function: int gsl_blas_zaxpy (const gsl_complex alpha, const gsl_vector_complex *x, gsl_vector_complex *y) These functions compute the sum y = \alpha x + y for the vectors *note x: 498. and *note y: 498. -- Function: void gsl_blas_sscal (float alpha, gsl_vector_float *x) -- Function: void gsl_blas_dscal (double alpha, gsl_vector *x) -- Function: void gsl_blas_cscal (const gsl_complex_float alpha, gsl_vector_complex_float *x) -- Function: void gsl_blas_zscal (const gsl_complex alpha, gsl_vector_complex *x) -- Function: void gsl_blas_csscal (float alpha, gsl_vector_complex_float *x) -- Function: void gsl_blas_zdscal (double alpha, gsl_vector_complex *x) These functions rescale the vector *note x: 49e. by the multiplicative factor *note alpha: 49e. 
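A similarly minimal sketch of the vector update routines just described (again with arbitrary data) combines ‘gsl_blas_daxpy()’ and ‘gsl_blas_dscal()’:

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double ax[] = { 1.0, 2.0, 3.0 };
       double ay[] = { 10.0, 10.0, 10.0 };

       gsl_vector_view x = gsl_vector_view_array (ax, 3);
       gsl_vector_view y = gsl_vector_view_array (ay, 3);

       gsl_blas_daxpy (2.0, &x.vector, &y.vector);  /* y <- 2 x + y = (12, 14, 16) */
       gsl_blas_dscal (0.5, &y.vector);             /* y <- y / 2   = (6, 7, 8)    */

       printf ("y = (%g, %g, %g)\n", ay[0], ay[1], ay[2]);

       return 0;
     }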
-- Function: int gsl_blas_srotg (float a[], float b[], float c[], float s[]) -- Function: int gsl_blas_drotg (double a[], double b[], double c[], double s[]) These functions compute a Givens rotation (c,s) which zeroes the vector (a,b), [ c s ] [ a ] = [ r ] [ -s c ] [ b ] [ 0 ] The variables *note a: 4a0. and *note b: 4a0. are overwritten by the routine. -- Function: int gsl_blas_srot (gsl_vector_float *x, gsl_vector_float *y, float c, float s) -- Function: int gsl_blas_drot (gsl_vector *x, gsl_vector *y, const double c, const double s) These functions apply a Givens rotation (x', y') = (c x + s y, -s x + c y) to the vectors *note x: 4a2, *note y: 4a2. -- Function: int gsl_blas_srotmg (float d1[], float d2[], float b1[], float b2, float P[]) -- Function: int gsl_blas_drotmg (double d1[], double d2[], double b1[], double b2, double P[]) These functions compute a modified Givens transformation. The modified Givens transformation is defined in the original Level-1 BLAS specification, given in the references. -- Function: int gsl_blas_srotm (gsl_vector_float *x, gsl_vector_float *y, const float P[]) -- Function: int gsl_blas_drotm (gsl_vector *x, gsl_vector *y, const double P[]) These functions apply a modified Givens transformation.  File: gsl-ref.info, Node: Level 2, Next: Level 3, Prev: Level 1, Up: GSL BLAS Interface 13.1.2 Level 2 -------------- -- Function: int gsl_blas_sgemv (CBLAS_TRANSPOSE_t TransA, float alpha, const gsl_matrix_float *A, const gsl_vector_float *x, float beta, gsl_vector_float *y) -- Function: int gsl_blas_dgemv (CBLAS_TRANSPOSE_t TransA, double alpha, const gsl_matrix *A, const gsl_vector *x, double beta, gsl_vector *y) -- Function: int gsl_blas_cgemv (CBLAS_TRANSPOSE_t TransA, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, const gsl_vector_complex_float *x, const gsl_complex_float beta, gsl_vector_complex_float *y) -- Function: int gsl_blas_zgemv (CBLAS_TRANSPOSE_t TransA, const gsl_complex alpha, const gsl_matrix_complex *A, const gsl_vector_complex *x, const gsl_complex beta, gsl_vector_complex *y) These functions compute the matrix-vector product and sum y = \alpha op(A) x + \beta y, where op(A) = A, A^T, A^H for *note TransA: 4ab. = ‘CblasNoTrans’, ‘CblasTrans’, ‘CblasConjTrans’. -- Function: int gsl_blas_strmv (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_matrix_float *A, gsl_vector_float *x) -- Function: int gsl_blas_dtrmv (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_matrix *A, gsl_vector *x) -- Function: int gsl_blas_ctrmv (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_matrix_complex_float *A, gsl_vector_complex_float *x) -- Function: int gsl_blas_ztrmv (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_matrix_complex *A, gsl_vector_complex *x) These functions compute the matrix-vector product x = op(A) x for the triangular matrix *note A: 4af, where op(A) = A, A^T, A^H for *note TransA: 4af. = ‘CblasNoTrans’, ‘CblasTrans’, ‘CblasConjTrans’. When *note Uplo: 4af. is ‘CblasUpper’ then the upper triangle of *note A: 4af. is used, and when *note Uplo: 4af. is ‘CblasLower’ then the lower triangle of *note A: 4af. is used. If *note Diag: 4af. is ‘CblasNonUnit’ then the diagonal of the matrix is used, but if *note Diag: 4af. is ‘CblasUnit’ then the diagonal elements of the matrix *note A: 4af. are taken as unity and are not referenced. 
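To make the calling convention concrete, here is a minimal sketch of the general matrix-vector product ‘gsl_blas_dgemv()’ described above; the 2-by-2 matrix and the vectors are arbitrary example data:

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double a[] = { 1.0, 2.0,
                      3.0, 4.0 };        /* row-major 2-by-2 matrix */
       double vx[] = { 1.0, 1.0 };
       double vy[] = { 0.0, 0.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a, 2, 2);
       gsl_vector_view x = gsl_vector_view_array (vx, 2);
       gsl_vector_view y = gsl_vector_view_array (vy, 2);

       /* y = 1.0 * A x + 0.0 * y = (3, 7) */
       gsl_blas_dgemv (CblasNoTrans, 1.0, &A.matrix, &x.vector,
                       0.0, &y.vector);

       printf ("y = (%g, %g)\n", vy[0], vy[1]);

       return 0;
     }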
-- Function: int gsl_blas_strsv (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_matrix_float *A, gsl_vector_float *x) -- Function: int gsl_blas_dtrsv (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_matrix *A, gsl_vector *x) -- Function: int gsl_blas_ctrsv (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_matrix_complex_float *A, gsl_vector_complex_float *x) -- Function: int gsl_blas_ztrsv (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_matrix_complex *A, gsl_vector_complex *x) These functions compute inv(op(A)) x for *note x: 4b3, where op(A) = A, A^T, A^H for *note TransA: 4b3. = ‘CblasNoTrans’, ‘CblasTrans’, ‘CblasConjTrans’. When *note Uplo: 4b3. is ‘CblasUpper’ then the upper triangle of *note A: 4b3. is used, and when *note Uplo: 4b3. is ‘CblasLower’ then the lower triangle of *note A: 4b3. is used. If *note Diag: 4b3. is ‘CblasNonUnit’ then the diagonal of the matrix is used, but if *note Diag: 4b3. is ‘CblasUnit’ then the diagonal elements of the matrix *note A: 4b3. are taken as unity and are not referenced. -- Function: int gsl_blas_ssymv (CBLAS_UPLO_t Uplo, float alpha, const gsl_matrix_float *A, const gsl_vector_float *x, float beta, gsl_vector_float *y) -- Function: int gsl_blas_dsymv (CBLAS_UPLO_t Uplo, double alpha, const gsl_matrix *A, const gsl_vector *x, double beta, gsl_vector *y) These functions compute the matrix-vector product and sum y = \alpha A x + \beta y for the symmetric matrix *note A: 4b5. Since the matrix *note A: 4b5. is symmetric only its upper half or lower half need to be stored. When *note Uplo: 4b5. is ‘CblasUpper’ then the upper triangle and diagonal of *note A: 4b5. are used, and when *note Uplo: 4b5. is ‘CblasLower’ then the lower triangle and diagonal of *note A: 4b5. are used. -- Function: int gsl_blas_chemv (CBLAS_UPLO_t Uplo, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, const gsl_vector_complex_float *x, const gsl_complex_float beta, gsl_vector_complex_float *y) -- Function: int gsl_blas_zhemv (CBLAS_UPLO_t Uplo, const gsl_complex alpha, const gsl_matrix_complex *A, const gsl_vector_complex *x, const gsl_complex beta, gsl_vector_complex *y) These functions compute the matrix-vector product and sum y = \alpha A x + \beta y for the hermitian matrix *note A: 4b7. Since the matrix *note A: 4b7. is hermitian only its upper half or lower half need to be stored. When *note Uplo: 4b7. is ‘CblasUpper’ then the upper triangle and diagonal of *note A: 4b7. are used, and when *note Uplo: 4b7. is ‘CblasLower’ then the lower triangle and diagonal of *note A: 4b7. are used. The imaginary elements of the diagonal are automatically assumed to be zero and are not referenced. -- Function: int gsl_blas_sger (float alpha, const gsl_vector_float *x, const gsl_vector_float *y, gsl_matrix_float *A) -- Function: int gsl_blas_dger (double alpha, const gsl_vector *x, const gsl_vector *y, gsl_matrix *A) -- Function: int gsl_blas_cgeru (const gsl_complex_float alpha, const gsl_vector_complex_float *x, const gsl_vector_complex_float *y, gsl_matrix_complex_float *A) -- Function: int gsl_blas_zgeru (const gsl_complex alpha, const gsl_vector_complex *x, const gsl_vector_complex *y, gsl_matrix_complex *A) These functions compute the rank-1 update A = \alpha x y^T + A of the matrix *note A: 4bb. 
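As a sketch of the triangular-solve convention from this section (the matrix entries are arbitrary), ‘gsl_blas_dtrsv()’ overwrites the right-hand side with the solution and never references the triangle excluded by ‘Uplo’:

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       /* upper triangular system; the strictly lower part is not referenced */
       double a[] = { 2.0, 1.0,
                      0.0, 4.0 };
       double vb[] = { 4.0, 8.0 };       /* right-hand side, overwritten by x */

       gsl_matrix_view A = gsl_matrix_view_array (a, 2, 2);
       gsl_vector_view x = gsl_vector_view_array (vb, 2);

       /* solve A x = b in place: x = (1, 2) */
       gsl_blas_dtrsv (CblasUpper, CblasNoTrans, CblasNonUnit,
                       &A.matrix, &x.vector);

       printf ("x = (%g, %g)\n", vb[0], vb[1]);

       return 0;
     }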
-- Function: int gsl_blas_cgerc (const gsl_complex_float alpha, const gsl_vector_complex_float *x, const gsl_vector_complex_float *y, gsl_matrix_complex_float *A) -- Function: int gsl_blas_zgerc (const gsl_complex alpha, const gsl_vector_complex *x, const gsl_vector_complex *y, gsl_matrix_complex *A) These functions compute the conjugate rank-1 update A = \alpha x y^H + A of the matrix *note A: 4bd. -- Function: int gsl_blas_ssyr (CBLAS_UPLO_t Uplo, float alpha, const gsl_vector_float *x, gsl_matrix_float *A) -- Function: int gsl_blas_dsyr (CBLAS_UPLO_t Uplo, double alpha, const gsl_vector *x, gsl_matrix *A) These functions compute the symmetric rank-1 update A = \alpha x x^T + A of the symmetric matrix *note A: 4bf. Since the matrix *note A: 4bf. is symmetric only its upper half or lower half need to be stored. When *note Uplo: 4bf. is ‘CblasUpper’ then the upper triangle and diagonal of *note A: 4bf. are used, and when *note Uplo: 4bf. is ‘CblasLower’ then the lower triangle and diagonal of *note A: 4bf. are used. -- Function: int gsl_blas_cher (CBLAS_UPLO_t Uplo, float alpha, const gsl_vector_complex_float *x, gsl_matrix_complex_float *A) -- Function: int gsl_blas_zher (CBLAS_UPLO_t Uplo, double alpha, const gsl_vector_complex *x, gsl_matrix_complex *A) These functions compute the hermitian rank-1 update A = \alpha x x^H + A of the hermitian matrix *note A: 4c1. Since the matrix *note A: 4c1. is hermitian only its upper half or lower half need to be stored. When *note Uplo: 4c1. is ‘CblasUpper’ then the upper triangle and diagonal of *note A: 4c1. are used, and when *note Uplo: 4c1. is ‘CblasLower’ then the lower triangle and diagonal of *note A: 4c1. are used. The imaginary elements of the diagonal are automatically set to zero. -- Function: int gsl_blas_ssyr2 (CBLAS_UPLO_t Uplo, float alpha, const gsl_vector_float *x, const gsl_vector_float *y, gsl_matrix_float *A) -- Function: int gsl_blas_dsyr2 (CBLAS_UPLO_t Uplo, double alpha, const gsl_vector *x, const gsl_vector *y, gsl_matrix *A) These functions compute the symmetric rank-2 update A = \alpha x y^T + \alpha y x^T + A of the symmetric matrix *note A: 4c3. Since the matrix *note A: 4c3. is symmetric only its upper half or lower half need to be stored. When *note Uplo: 4c3. is ‘CblasUpper’ then the upper triangle and diagonal of *note A: 4c3. are used, and when *note Uplo: 4c3. is ‘CblasLower’ then the lower triangle and diagonal of *note A: 4c3. are used. -- Function: int gsl_blas_cher2 (CBLAS_UPLO_t Uplo, const gsl_complex_float alpha, const gsl_vector_complex_float *x, const gsl_vector_complex_float *y, gsl_matrix_complex_float *A) -- Function: int gsl_blas_zher2 (CBLAS_UPLO_t Uplo, const gsl_complex alpha, const gsl_vector_complex *x, const gsl_vector_complex *y, gsl_matrix_complex *A) These functions compute the hermitian rank-2 update A = \alpha x y^H + \alpha^* y x^H + A of the hermitian matrix *note A: 4c5. Since the matrix *note A: 4c5. is hermitian only its upper half or lower half need to be stored. When *note Uplo: 4c5. is ‘CblasUpper’ then the upper triangle and diagonal of *note A: 4c5. are used, and when *note Uplo: 4c5. is ‘CblasLower’ then the lower triangle and diagonal of *note A: 4c5. are used. The imaginary elements of the diagonal are automatically set to zero.  
File: gsl-ref.info, Node: Level 3, Prev: Level 2, Up: GSL BLAS Interface 13.1.3 Level 3 -------------- -- Function: int gsl_blas_sgemm (CBLAS_TRANSPOSE_t TransA, CBLAS_TRANSPOSE_t TransB, float alpha, const gsl_matrix_float *A, const gsl_matrix_float *B, float beta, gsl_matrix_float *C) -- Function: int gsl_blas_dgemm (CBLAS_TRANSPOSE_t TransA, CBLAS_TRANSPOSE_t TransB, double alpha, const gsl_matrix *A, const gsl_matrix *B, double beta, gsl_matrix *C) -- Function: int gsl_blas_cgemm (CBLAS_TRANSPOSE_t TransA, CBLAS_TRANSPOSE_t TransB, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, const gsl_matrix_complex_float *B, const gsl_complex_float beta, gsl_matrix_complex_float *C) -- Function: int gsl_blas_zgemm (CBLAS_TRANSPOSE_t TransA, CBLAS_TRANSPOSE_t TransB, const gsl_complex alpha, const gsl_matrix_complex *A, const gsl_matrix_complex *B, const gsl_complex beta, gsl_matrix_complex *C) These functions compute the matrix-matrix product and sum C = \alpha op(A) op(B) + \beta C where op(A) = A, A^T, A^H for *note TransA: 4ca. = ‘CblasNoTrans’, ‘CblasTrans’, ‘CblasConjTrans’ and similarly for the parameter *note TransB: 4ca. -- Function: int gsl_blas_ssymm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, float alpha, const gsl_matrix_float *A, const gsl_matrix_float *B, float beta, gsl_matrix_float *C) -- Function: int gsl_blas_dsymm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, double alpha, const gsl_matrix *A, const gsl_matrix *B, double beta, gsl_matrix *C) -- Function: int gsl_blas_csymm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, const gsl_matrix_complex_float *B, const gsl_complex_float beta, gsl_matrix_complex_float *C) -- Function: int gsl_blas_zsymm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, const gsl_complex alpha, const gsl_matrix_complex *A, const gsl_matrix_complex *B, const gsl_complex beta, gsl_matrix_complex *C) These functions compute the matrix-matrix product and sum C = \alpha A B + \beta C for *note Side: 4ce. is ‘CblasLeft’ and C = \alpha B A + \beta C for *note Side: 4ce. is ‘CblasRight’, where the matrix *note A: 4ce. is symmetric. When *note Uplo: 4ce. is ‘CblasUpper’ then the upper triangle and diagonal of *note A: 4ce. are used, and when *note Uplo: 4ce. is ‘CblasLower’ then the lower triangle and diagonal of *note A: 4ce. are used. -- Function: int gsl_blas_chemm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, const gsl_matrix_complex_float *B, const gsl_complex_float beta, gsl_matrix_complex_float *C) -- Function: int gsl_blas_zhemm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, const gsl_complex alpha, const gsl_matrix_complex *A, const gsl_matrix_complex *B, const gsl_complex beta, gsl_matrix_complex *C) These functions compute the matrix-matrix product and sum C = \alpha A B + \beta C for *note Side: 4d0. is ‘CblasLeft’ and C = \alpha B A + \beta C for *note Side: 4d0. is ‘CblasRight’, where the matrix *note A: 4d0. is hermitian. When *note Uplo: 4d0. is ‘CblasUpper’ then the upper triangle and diagonal of *note A: 4d0. are used, and when *note Uplo: 4d0. is ‘CblasLower’ then the lower triangle and diagonal of *note A: 4d0. are used. The imaginary elements of the diagonal are automatically set to zero. 
-- Function: int gsl_blas_strmm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, float alpha, const gsl_matrix_float *A, gsl_matrix_float *B) -- Function: int gsl_blas_dtrmm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, double alpha, const gsl_matrix *A, gsl_matrix *B) -- Function: int gsl_blas_ctrmm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, gsl_matrix_complex_float *B) -- Function: int gsl_blas_ztrmm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_complex alpha, const gsl_matrix_complex *A, gsl_matrix_complex *B) These functions compute the matrix-matrix product B = \alpha op(A) B for *note Side: 4d4. is ‘CblasLeft’ and B = \alpha B op(A) for *note Side: 4d4. is ‘CblasRight’. The matrix *note A: 4d4. is triangular and op(A) = A, A^T, A^H for *note TransA: 4d4. = ‘CblasNoTrans’, ‘CblasTrans’, ‘CblasConjTrans’. When *note Uplo: 4d4. is ‘CblasUpper’ then the upper triangle of *note A: 4d4. is used, and when *note Uplo: 4d4. is ‘CblasLower’ then the lower triangle of *note A: 4d4. is used. If *note Diag: 4d4. is ‘CblasNonUnit’ then the diagonal of *note A: 4d4. is used, but if *note Diag: 4d4. is ‘CblasUnit’ then the diagonal elements of the matrix *note A: 4d4. are taken as unity and are not referenced. -- Function: int gsl_blas_strsm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, float alpha, const gsl_matrix_float *A, gsl_matrix_float *B) -- Function: int gsl_blas_dtrsm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, double alpha, const gsl_matrix *A, gsl_matrix *B) -- Function: int gsl_blas_ctrsm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, gsl_matrix_complex_float *B) -- Function: int gsl_blas_ztrsm (CBLAS_SIDE_t Side, CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t TransA, CBLAS_DIAG_t Diag, const gsl_complex alpha, const gsl_matrix_complex *A, gsl_matrix_complex *B) These functions compute the inverse-matrix matrix product B = \alpha op(inv(A))B for *note Side: 4d8. is ‘CblasLeft’ and B = \alpha B op(inv(A)) for *note Side: 4d8. is ‘CblasRight’. The matrix *note A: 4d8. is triangular and op(A) = A, A^T, A^H for *note TransA: 4d8. = ‘CblasNoTrans’, ‘CblasTrans’, ‘CblasConjTrans’. When *note Uplo: 4d8. is ‘CblasUpper’ then the upper triangle of *note A: 4d8. is used, and when *note Uplo: 4d8. is ‘CblasLower’ then the lower triangle of *note A: 4d8. is used. If *note Diag: 4d8. is ‘CblasNonUnit’ then the diagonal of *note A: 4d8. is used, but if *note Diag: 4d8. is ‘CblasUnit’ then the diagonal elements of the matrix *note A: 4d8. are taken as unity and are not referenced. 
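A minimal sketch of ‘gsl_blas_dtrsm()’ with arbitrary example data: each column of B is replaced by the solution of a triangular system, so several right-hand sides are solved in a single call:

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       /* lower triangular A; the strictly upper part is not referenced */
       double a[] = { 2.0, 0.0,
                      1.0, 1.0 };
       /* two right-hand sides stored as the columns of B */
       double b[] = { 2.0, 4.0,
                      3.0, 5.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a, 2, 2);
       gsl_matrix_view B = gsl_matrix_view_array (b, 2, 2);

       /* B <- 1.0 * inv(A) B, i.e. solve A X = B for both columns at once */
       gsl_blas_dtrsm (CblasLeft, CblasLower, CblasNoTrans, CblasNonUnit,
                       1.0, &A.matrix, &B.matrix);

       printf ("X = [ %g %g; %g %g ]\n", b[0], b[1], b[2], b[3]);

       return 0;
     }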
-- Function: int gsl_blas_ssyrk (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, float alpha, const gsl_matrix_float *A, float beta, gsl_matrix_float *C) -- Function: int gsl_blas_dsyrk (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, double alpha, const gsl_matrix *A, double beta, gsl_matrix *C) -- Function: int gsl_blas_csyrk (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, const gsl_complex_float beta, gsl_matrix_complex_float *C) -- Function: int gsl_blas_zsyrk (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, const gsl_complex alpha, const gsl_matrix_complex *A, const gsl_complex beta, gsl_matrix_complex *C) These functions compute a rank-k update of the symmetric matrix *note C: 4dc, C = \alpha A A^T + \beta C when *note Trans: 4dc. is ‘CblasNoTrans’ and C = \alpha A^T A + \beta C when *note Trans: 4dc. is ‘CblasTrans’. Since the matrix *note C: 4dc. is symmetric only its upper half or lower half need to be stored. When *note Uplo: 4dc. is ‘CblasUpper’ then the upper triangle and diagonal of *note C: 4dc. are used, and when *note Uplo: 4dc. is ‘CblasLower’ then the lower triangle and diagonal of *note C: 4dc. are used. -- Function: int gsl_blas_cherk (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, float alpha, const gsl_matrix_complex_float *A, float beta, gsl_matrix_complex_float *C) -- Function: int gsl_blas_zherk (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, double alpha, const gsl_matrix_complex *A, double beta, gsl_matrix_complex *C) These functions compute a rank-k update of the hermitian matrix *note C: 4de, C = \alpha A A^H + \beta C when *note Trans: 4de. is ‘CblasNoTrans’ and C = \alpha A^H A + \beta C when *note Trans: 4de. is ‘CblasConjTrans’. Since the matrix *note C: 4de. is hermitian only its upper half or lower half need to be stored. When *note Uplo: 4de. is ‘CblasUpper’ then the upper triangle and diagonal of *note C: 4de. are used, and when *note Uplo: 4de. is ‘CblasLower’ then the lower triangle and diagonal of *note C: 4de. are used. The imaginary elements of the diagonal are automatically set to zero. -- Function: int gsl_blas_ssyr2k (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, float alpha, const gsl_matrix_float *A, const gsl_matrix_float *B, float beta, gsl_matrix_float *C) -- Function: int gsl_blas_dsyr2k (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, double alpha, const gsl_matrix *A, const gsl_matrix *B, double beta, gsl_matrix *C) -- Function: int gsl_blas_csyr2k (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, const gsl_matrix_complex_float *B, const gsl_complex_float beta, gsl_matrix_complex_float *C) -- Function: int gsl_blas_zsyr2k (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, const gsl_complex alpha, const gsl_matrix_complex *A, const gsl_matrix_complex *B, const gsl_complex beta, gsl_matrix_complex *C) These functions compute a rank-2k update of the symmetric matrix *note C: 4e2, C = \alpha A B^T + \alpha B A^T + \beta C when *note Trans: 4e2. is ‘CblasNoTrans’ and C = \alpha A^T B + \alpha B^T A + \beta C when *note Trans: 4e2. is ‘CblasTrans’. Since the matrix *note C: 4e2. is symmetric only its upper half or lower half need to be stored. When *note Uplo: 4e2. is ‘CblasUpper’ then the upper triangle and diagonal of *note C: 4e2. are used, and when *note Uplo: 4e2. is ‘CblasLower’ then the lower triangle and diagonal of *note C: 4e2. are used. 
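For example, a minimal sketch of the symmetric rank-k update ‘gsl_blas_dsyrk()’ (arbitrary data; only the requested triangle of C is written):

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double a[] = { 1.0, 2.0,
                      3.0, 4.0 };        /* 2-by-2 matrix A */
       double c[] = { 0.0, 0.0,
                      0.0, 0.0 };        /* will hold C = A A^T */

       gsl_matrix_view A = gsl_matrix_view_array (a, 2, 2);
       gsl_matrix_view C = gsl_matrix_view_array (c, 2, 2);

       /* C = 1.0 * A A^T + 0.0 * C; only the lower triangle of C is updated */
       gsl_blas_dsyrk (CblasLower, CblasNoTrans, 1.0, &A.matrix,
                       0.0, &C.matrix);

       printf ("C = [ %g . ; %g %g ]   (lower triangle only)\n",
               c[0], c[2], c[3]);

       return 0;
     }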
 -- Function: int gsl_blas_cher2k (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, const gsl_complex_float alpha, const gsl_matrix_complex_float *A, const gsl_matrix_complex_float *B, float beta, gsl_matrix_complex_float *C)
 -- Function: int gsl_blas_zher2k (CBLAS_UPLO_t Uplo, CBLAS_TRANSPOSE_t Trans, const gsl_complex alpha, const gsl_matrix_complex *A, const gsl_matrix_complex *B, double beta, gsl_matrix_complex *C)

     These functions compute a rank-2k update of the hermitian matrix *note C: 4e4, C = \alpha A B^H + \alpha^* B A^H + \beta C when *note Trans: 4e4. is ‘CblasNoTrans’ and C = \alpha A^H B + \alpha^* B^H A + \beta C when *note Trans: 4e4. is ‘CblasConjTrans’. Since the matrix *note C: 4e4. is hermitian only its upper half or lower half need to be stored. When *note Uplo: 4e4. is ‘CblasUpper’ then the upper triangle and diagonal of *note C: 4e4. are used, and when *note Uplo: 4e4. is ‘CblasLower’ then the lower triangle and diagonal of *note C: 4e4. are used. The imaginary elements of the diagonal are automatically set to zero.

File: gsl-ref.info, Node: Examples<8>, Next: References and Further Reading<8>, Prev: GSL BLAS Interface, Up: BLAS Support

13.2 Examples
=============

The following program computes the product of two matrices using the Level-3 BLAS function DGEMM,

     [ 0.11 0.12 0.13 ]  [ 1011 1012 ]     [ 367.76 368.12 ]
     [ 0.21 0.22 0.23 ]  [ 1021 1022 ]  =  [ 674.06 674.72 ]
                         [ 1031 1032 ]

   The matrices are stored in row major order, according to the C convention for arrays.

     #include <stdio.h>
     #include <gsl/gsl_blas.h>

     int
     main (void)
     {
       double a[] = { 0.11, 0.12, 0.13,
                      0.21, 0.22, 0.23 };

       double b[] = { 1011, 1012,
                      1021, 1022,
                      1031, 1032 };

       double c[] = { 0.00, 0.00,
                      0.00, 0.00 };

       gsl_matrix_view A = gsl_matrix_view_array(a, 2, 3);
       gsl_matrix_view B = gsl_matrix_view_array(b, 3, 2);
       gsl_matrix_view C = gsl_matrix_view_array(c, 2, 2);

       /* Compute C = A B */

       gsl_blas_dgemm (CblasNoTrans, CblasNoTrans,
                       1.0, &A.matrix, &B.matrix,
                       0.0, &C.matrix);

       printf ("[ %g, %g\n", c[0], c[1]);
       printf (" %g, %g ]\n", c[2], c[3]);

       return 0;
     }

   Here is the output from the program,

     [ 367.76, 368.12
      674.06, 674.72 ]

File: gsl-ref.info, Node: References and Further Reading<8>, Prev: Examples<8>, Up: BLAS Support

13.3 References and Further Reading
===================================

Information on the BLAS standards, including both the legacy and updated interface standards, is available online from the BLAS Homepage and BLAS Technical Forum web-site.

   * BLAS Homepage, ‘http://www.netlib.org/blas/’

   * BLAS Technical Forum, ‘http://www.netlib.org/blas/blast-forum/’

   The following papers contain the specifications for Level 1, Level 2 and Level 3 BLAS.

   * C. Lawson, R. Hanson, D. Kincaid, F. Krogh, “Basic Linear Algebra Subprograms for Fortran Usage”, ACM Transactions on Mathematical Software, Vol. 5 (1979), Pages 308–325.

   * J.J. Dongarra, J. DuCroz, S. Hammarling, R. Hanson, “An Extended Set of Fortran Basic Linear Algebra Subprograms”, ACM Transactions on Mathematical Software, Vol. 14, No. 1 (1988), Pages 1–32.

   * J.J. Dongarra, I. Duff, J. DuCroz, S. Hammarling, “A Set of Level 3 Basic Linear Algebra Subprograms”, ACM Transactions on Mathematical Software, Vol. 16 (1990), Pages 1–28.

   Postscript versions of the latter two papers are available from ‘http://www.netlib.org/blas/’. A CBLAS wrapper for Fortran BLAS libraries is available from the same location.
File: gsl-ref.info, Node: Linear Algebra, Next: Eigensystems, Prev: BLAS Support, Up: Top 14 Linear Algebra ***************** This chapter describes functions for solving linear systems. The library provides linear algebra operations which operate directly on the *note gsl_vector: 35f. and *note gsl_matrix: 3a2. objects. These routines use the standard algorithms from Golub & Van Loan’s `Matrix Computations' with Level-1 and Level-2 BLAS calls for efficiency. The functions described in this chapter are declared in the header file ‘gsl_linalg.h’. * Menu: * LU Decomposition:: * QR Decomposition:: * QR Decomposition with Column Pivoting:: * LQ Decomposition:: * QL Decomposition:: * Complete Orthogonal Decomposition:: * Singular Value Decomposition:: * Cholesky Decomposition:: * Pivoted Cholesky Decomposition:: * Modified Cholesky Decomposition:: * LDLT Decomposition:: * Tridiagonal Decomposition of Real Symmetric Matrices:: * Tridiagonal Decomposition of Hermitian Matrices:: * Hessenberg Decomposition of Real Matrices:: * Hessenberg-Triangular Decomposition of Real Matrices:: * Bidiagonalization:: * Givens Rotations:: * Householder Transformations:: * Householder solver for linear systems:: * Tridiagonal Systems:: * Triangular Systems:: * Banded Systems:: * Balancing:: * Examples: Examples<9>. * References and Further Reading: References and Further Reading<9>.  File: gsl-ref.info, Node: LU Decomposition, Next: QR Decomposition, Up: Linear Algebra 14.1 LU Decomposition ===================== A general M-by-N matrix A has an LU decomposition P A = L U where P is an M-by-M permutation matrix, L is M-by-\min(M,N) and U is \min(M,N)-by-N. For square matrices, L is a lower unit triangular matrix and U is upper triangular. For M > N, L is a unit lower trapezoidal matrix of size M-by-N. For M < N, U is upper trapezoidal of size M-by-N. For square matrices this decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = P b, U x = y), which can be solved by forward and back-substitution. Note that the LU decomposition is valid for singular matrices. -- Function: int gsl_linalg_LU_decomp (gsl_matrix *A, gsl_permutation *p, int *signum) -- Function: int gsl_linalg_complex_LU_decomp (gsl_matrix_complex *A, gsl_permutation *p, int *signum) These functions factorize the matrix *note A: 4ed. into the LU decomposition PA = LU. On output the diagonal and upper triangular (or trapezoidal) part of the input matrix *note A: 4ed. contain the matrix U. The lower triangular (or trapezoidal) part of the input matrix (excluding the diagonal) contains L. The diagonal elements of L are unity, and are not stored. The permutation matrix P is encoded in the permutation *note p: 4ed. on output. The j-th column of the matrix P is given by the k-th column of the identity matrix, where k = p_j the j-th element of the permutation vector. The sign of the permutation is given by *note signum: 4ed. It has the value (-1)^n, where n is the number of interchanges in the permutation. The algorithm used in the decomposition is Gaussian Elimination with partial pivoting (Golub & Van Loan, `Matrix Computations', Algorithm 3.4.1), combined with a recursive algorithm based on Level 3 BLAS (Peise and Bientinesi, 2016). 
-- Function: int gsl_linalg_LU_solve (const gsl_matrix *LU, const gsl_permutation *p, const gsl_vector *b, gsl_vector *x) -- Function: int gsl_linalg_complex_LU_solve (const gsl_matrix_complex *LU, const gsl_permutation *p, const gsl_vector_complex *b, gsl_vector_complex *x) These functions solve the square system A x = b using the LU decomposition of A into (*note LU: 4ef, *note p: 4ef.) given by *note gsl_linalg_LU_decomp(): 4ec. or *note gsl_linalg_complex_LU_decomp(): 4ed. as input. -- Function: int gsl_linalg_LU_svx (const gsl_matrix *LU, const gsl_permutation *p, gsl_vector *x) -- Function: int gsl_linalg_complex_LU_svx (const gsl_matrix_complex *LU, const gsl_permutation *p, gsl_vector_complex *x) These functions solve the square system A x = b in-place using the precomputed LU decomposition of A into (*note LU: 4f1, *note p: 4f1.). On input *note x: 4f1. should contain the right-hand side b, which is replaced by the solution on output. -- Function: int gsl_linalg_LU_refine (const gsl_matrix *A, const gsl_matrix *LU, const gsl_permutation *p, const gsl_vector *b, gsl_vector *x, gsl_vector *work) -- Function: int gsl_linalg_complex_LU_refine (const gsl_matrix_complex *A, const gsl_matrix_complex *LU, const gsl_permutation *p, const gsl_vector_complex *b, gsl_vector_complex *x, gsl_vector_complex *work) These functions apply an iterative improvement to *note x: 4f3, the solution of A x = b, from the precomputed LU decomposition of A into (*note LU: 4f3, *note p: 4f3.). Additional workspace of length ‘N’ is required in *note work: 4f3. -- Function: int gsl_linalg_LU_invert (const gsl_matrix *LU, const gsl_permutation *p, gsl_matrix *inverse) -- Function: int gsl_linalg_complex_LU_invert (const gsl_matrix_complex *LU, const gsl_permutation *p, gsl_matrix_complex *inverse) These functions compute the inverse of a matrix A from its LU decomposition (*note LU: 4f5, *note p: 4f5.), storing the result in the matrix *note inverse: 4f5. The inverse is computed by computing the inverses U^{-1}, L^{-1} and finally forming the product A^{-1} = U^{-1} L^{-1} P. Each step is based on Level 3 BLAS calls. It is preferable to avoid direct use of the inverse whenever possible, as the linear solver functions can obtain the same result more efficiently and reliably (consult any introductory textbook on numerical linear algebra for details). -- Function: int gsl_linalg_LU_invx (gsl_matrix *LU, const gsl_permutation *p) -- Function: int gsl_linalg_complex_LU_invx (gsl_matrix_complex *LU, const gsl_permutation *p) These functions compute the inverse of a matrix A from its LU decomposition (*note LU: 4f7, *note p: 4f7.), storing the result in-place in the matrix *note LU: 4f7. The inverse is computed by computing the inverses U^{-1}, L^{-1} and finally forming the product A^{-1} = U^{-1} L^{-1} P. Each step is based on Level 3 BLAS calls. It is preferable to avoid direct use of the inverse whenever possible, as the linear solver functions can obtain the same result more efficiently and reliably (consult any introductory textbook on numerical linear algebra for details). -- Function: double gsl_linalg_LU_det (gsl_matrix *LU, int signum) -- Function: gsl_complex gsl_linalg_complex_LU_det (gsl_matrix_complex *LU, int signum) These functions compute the determinant of a matrix A from its LU decomposition, *note LU: 4f9. The determinant is computed as the product of the diagonal elements of U and the sign of the row permutation *note signum: 4f9. 
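As a minimal sketch (with an arbitrary 2-by-2 system) of how the factorization, solve and determinant routines above fit together:

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a[] = { 2.0, 1.0,
                      1.0, 3.0 };
       double vb[] = { 3.0, 5.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a, 2, 2);
       gsl_vector_view b = gsl_vector_view_array (vb, 2);

       gsl_vector * x = gsl_vector_alloc (2);
       gsl_permutation * p = gsl_permutation_alloc (2);
       int signum;

       /* factorize A in place, then solve A x = b and form det(A) */
       gsl_linalg_LU_decomp (&A.matrix, p, &signum);
       gsl_linalg_LU_solve (&A.matrix, p, &b.vector, x);

       printf ("x = (%g, %g), det(A) = %g\n",
               gsl_vector_get (x, 0), gsl_vector_get (x, 1),
               gsl_linalg_LU_det (&A.matrix, signum));

       gsl_permutation_free (p);
       gsl_vector_free (x);

       return 0;
     }

   Note that the factorization overwrites A, so a copy should be kept (for example with ‘gsl_matrix_memcpy()’) if the original matrix is needed later, as it is by ‘gsl_linalg_LU_refine()’.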
 -- Function: double gsl_linalg_LU_lndet (gsl_matrix *LU)
 -- Function: double gsl_linalg_complex_LU_lndet (gsl_matrix_complex *LU)

     These functions compute the logarithm of the absolute value of the determinant of a matrix A, \ln|\det(A)|, from its LU decomposition, *note LU: 4fb. These functions may be useful if the direct computation of the determinant would overflow or underflow.

 -- Function: int gsl_linalg_LU_sgndet (gsl_matrix *LU, int signum)
 -- Function: gsl_complex gsl_linalg_complex_LU_sgndet (gsl_matrix_complex *LU, int signum)

     These functions compute the sign or phase factor of the determinant of a matrix A, \det(A)/|\det(A)|, from its LU decomposition, *note LU: 4fd.

File: gsl-ref.info, Node: QR Decomposition, Next: QR Decomposition with Column Pivoting, Prev: LU Decomposition, Up: Linear Algebra

14.2 QR Decomposition
=====================

A general rectangular M-by-N matrix A has a QR decomposition into the product of a unitary M-by-M square matrix Q (where Q^{\dagger} Q = I) and an M-by-N right-triangular matrix R,

     A = Q R

   This decomposition can be used to convert the square linear system A x = b into the triangular system R x = Q^{\dagger} b, which can be solved by back-substitution. Another use of the QR decomposition is to compute an orthonormal basis for a set of vectors. The first N columns of Q form an orthonormal basis for the range of A, ran(A), when A has full column rank.

   When M > N, the bottom M - N rows of R are zero, and so A can be naturally partitioned as

     A = [ Q_1 Q_2 ] [ R_1 ] = Q_1 R_1
                     [  0  ]

   where R_1 is N-by-N upper triangular, Q_1 is M-by-N, and Q_2 is M-by-(M-N). Q_1 R_1 is sometimes called the `thin' or `reduced' QR decomposition. The solution of the least squares problem \min_x || b - A x ||^2 when A has full rank is x = R_1^{-1} c_1, where c_1 is the first N elements of Q^{\dagger} b. If A is rank deficient, see *note QR Decomposition with Column Pivoting: 500. and *note Complete Orthogonal Decomposition: 501.

   GSL offers two interfaces for the QR decomposition. The first proceeds by zeroing out columns below the diagonal of A, one column at a time using Householder transforms. In this method, the factor Q is represented as a product of Householder reflectors: Q = H_n \cdots H_2 H_1 where each H_i = I - \tau_i v_i v_i^{\dagger} for a scalar \tau_i and column vector v_i. In this method, functions which compute the full matrix Q or apply Q^{\dagger} to a right hand side vector operate by applying the Householder matrices one at a time using Level 2 BLAS.

   The second interface is based on a Level 3 BLAS block recursive algorithm developed by Elmroth and Gustavson. In this case, Q is written in block form as Q = I - V T V^{\dagger} where V is an M-by-N matrix of the column vectors v_i and T is an N-by-N upper triangular matrix, whose diagonal elements are the \tau_i. Computing the full T, while requiring more flops than the Level 2 approach, offers the advantage that all standard operations can take advantage of cache efficient Level 3 BLAS operations, and so this method often performs faster than the Level 2 approach. The functions for the recursive block algorithm have a ‘_r’ suffix, and it is recommended to use this interface for performance critical applications.

 -- Function: int gsl_linalg_QR_decomp_r (gsl_matrix *A, gsl_matrix *T)
 -- Function: int gsl_linalg_complex_QR_decomp_r (gsl_matrix_complex *A, gsl_matrix_complex *T)

     These functions factor the M-by-N matrix *note A: 503.
into the QR decomposition A = Q R using the recursive Level 3 BLAS algorithm of Elmroth and Gustavson. On output the diagonal and upper triangular part of *note A: 503. contain the matrix R. The N-by-N matrix *note T: 503. stores the upper triangular factor appearing in Q. The matrix Q is given by Q = I - V T V^{\dagger}, where the elements below the diagonal of *note A: 503. contain the columns of V on output. This algorithm requires M \ge N and performs best for “tall-skinny” matrices, i.e. M \gg N. -- Function: int gsl_linalg_QR_solve_r (const gsl_matrix *QR, const gsl_matrix *T, const gsl_vector *b, gsl_vector *x) -- Function: int gsl_linalg_complex_QR_solve_r (const gsl_matrix_complex *QR, const gsl_matrix_complex *T, const gsl_vector_complex *b, gsl_vector_complex *x) These functions solve the square system A x = b using the QR decomposition of A held in (*note QR: 505, *note T: 505.). The least-squares solution for rectangular systems can be found using *note gsl_linalg_QR_lssolve_r(): 506. or *note gsl_linalg_complex_QR_lssolve_r(): 507. -- Function: int gsl_linalg_QR_lssolve_r (const gsl_matrix *QR, const gsl_matrix *T, const gsl_vector *b, gsl_vector *x, gsl_vector *work) -- Function: int gsl_linalg_complex_QR_lssolve_r (const gsl_matrix_complex *QR, const gsl_matrix_complex *T, const gsl_vector_complex *b, gsl_vector_complex *x, gsl_vector_complex *work) These functions find the least squares solution to the overdetermined system A x = b where the matrix ‘A’ has more rows than columns. The least squares solution minimizes the Euclidean norm of the residual, ||b - Ax||. The routine requires as input the QR decomposition of A into (*note QR: 507, *note T: 507.) given by *note gsl_linalg_QR_decomp_r(): 502. or *note gsl_linalg_complex_QR_decomp_r(): 503. The parameter *note x: 507. is of length M. The solution x is returned in the first N rows of *note x: 507, i.e. x = ‘x[0], x[1], ..., x[N-1]’. The last M - N rows of *note x: 507. contain a vector whose norm is equal to the residual norm || b - A x ||. This similar to the behavior of LAPACK DGELS. Additional workspace of length N is required in *note work: 507. -- Function: int gsl_linalg_QR_QTvec_r (const gsl_matrix *QR, const gsl_matrix *T, gsl_vector *v, gsl_vector *work) -- Function: int gsl_linalg_complex_QR_QHvec_r (const gsl_matrix_complex *QR, const gsl_matrix_complex *T, gsl_vector_complex *v, gsl_vector_complex *work) These functions apply the matrix Q^T (or Q^{\dagger}) encoded in the decomposition (*note QR: 509, *note T: 509.) to the vector *note v: 509, storing the result Q^T v (or Q^{\dagger} v) in *note v: 509. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T (or Q^{\dagger}). Additional workspace of size N is required in *note work: 509. -- Function: int gsl_linalg_QR_QTmat_r (const gsl_matrix *QR, const gsl_matrix *T, gsl_matrix *B, gsl_matrix *work) This function applies the matrix Q^T encoded in the decomposition (*note QR: 50a, *note T: 50a.) to the M-by-K matrix *note B: 50a, storing the result Q^T B in *note B: 50a. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T. Additional workspace of size N-by-K is required in *note work: 50a. 
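The calling sequence for this recursive interface can be sketched as follows for a small overdetermined system (the matrix entries are arbitrary example data); recall that x must have length M, with the solution in its first N entries, and work must have length N:

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       const size_t M = 4, N = 2;

       double a[] = { 1.0, 1.0,
                      1.0, 2.0,
                      1.0, 3.0,
                      1.0, 4.0 };
       double vb[] = { 1.1, 1.9, 3.1, 3.9 };

       gsl_matrix_view A = gsl_matrix_view_array (a, M, N);
       gsl_vector_view b = gsl_vector_view_array (vb, M);

       gsl_matrix * T = gsl_matrix_alloc (N, N);
       gsl_vector * x = gsl_vector_alloc (M);     /* solution in x[0..N-1] */
       gsl_vector * work = gsl_vector_alloc (N);

       gsl_linalg_QR_decomp_r (&A.matrix, T);
       gsl_linalg_QR_lssolve_r (&A.matrix, T, &b.vector, x, work);

       printf ("least squares solution: (%g, %g)\n",
               gsl_vector_get (x, 0), gsl_vector_get (x, 1));

       gsl_vector_free (work);
       gsl_vector_free (x);
       gsl_matrix_free (T);

       return 0;
     }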
-- Function: int gsl_linalg_QR_unpack_r (const gsl_matrix *QR, const gsl_matrix *T, gsl_matrix *Q, gsl_matrix *R) -- Function: int gsl_linalg_complex_QR_unpack_r (const gsl_matrix_complex *QR, const gsl_matrix_complex *T, gsl_matrix_complex *Q, gsl_matrix_complex *R) These functions unpack the encoded QR decomposition (*note QR: 50c, *note T: 50c.) as output from *note gsl_linalg_QR_decomp_r(): 502. or *note gsl_linalg_complex_QR_decomp_r(): 503. into the matrices *note Q: 50c. and *note R: 50c, where *note Q: 50c. is M-by-M and *note R: 50c. is N-by-N. Note that the full R matrix is M-by-N, however the lower trapezoidal portion is zero, so only the upper triangular factor is stored. -- Function: int gsl_linalg_QR_rcond (const gsl_matrix *QR, double *rcond, gsl_vector *work) This function estimates the reciprocal condition number (using the 1-norm) of the R factor, stored in the upper triangle of *note QR: 50d. The reciprocal condition number estimate, defined as 1 / (||R||_1 \cdot ||R^{-1}||_1), is stored in *note rcond: 50d. Additional workspace of size 3 N is required in *note work: 50d. * Menu: * Level 2 Interface:: * Triangle on Top of Rectangle:: * Triangle on Top of Triangle:: * Triangle on Top of Trapezoidal:: * Triangle on Top of Diagonal::  File: gsl-ref.info, Node: Level 2 Interface, Next: Triangle on Top of Rectangle, Up: QR Decomposition 14.2.1 Level 2 Interface ------------------------ The functions below are for the slower Level 2 interface to the QR decomposition. It is recommended to use these functions only for M < N, since the Level 3 interface above performs much faster for M \ge N. -- Function: int gsl_linalg_QR_decomp (gsl_matrix *A, gsl_vector *tau) -- Function: int gsl_linalg_complex_QR_decomp (gsl_matrix_complex *A, gsl_vector_complex *tau) These functions factor the M-by-N matrix *note A: 510. into the QR decomposition A = Q R. On output the diagonal and upper triangular part of the input matrix contain the matrix R. The vector *note tau: 510. and the columns of the lower triangular part of the matrix *note A: 510. contain the Householder coefficients and Householder vectors which encode the orthogonal matrix ‘Q’. The vector *note tau: 510. must be of length N. The matrix Q is related to these components by the product of k=min(M,N) reflector matrices, Q = H_k ... H_2 H_1 where H_i = I - \tau_i v_i v_i^{\dagger} and v_i is the Householder vector v_i = (0,...,1,A(i+1,i),A(i+2,i),...,A(m,i)). This is the same storage scheme as used by LAPACK. The algorithm used to perform the decomposition is Householder QR (Golub & Van Loan, “Matrix Computations”, Algorithm 5.2.1). -- Function: int gsl_linalg_QR_solve (const gsl_matrix *QR, const gsl_vector *tau, const gsl_vector *b, gsl_vector *x) -- Function: int gsl_linalg_complex_QR_solve (const gsl_matrix_complex *QR, const gsl_vector_complex *tau, const gsl_vector_complex *b, gsl_vector_complex *x) These functions solve the square system A x = b using the QR decomposition of A held in (*note QR: 512, *note tau: 512.). The least-squares solution for rectangular systems can be found using *note gsl_linalg_QR_lssolve(): 513. -- Function: int gsl_linalg_QR_svx (const gsl_matrix *QR, const gsl_vector *tau, gsl_vector *x) -- Function: int gsl_linalg_complex_QR_svx (const gsl_matrix_complex *QR, const gsl_vector_complex *tau, gsl_vector_complex *x) These functions solve the square system A x = b in-place using the QR decomposition of A held in (*note QR: 515, *note tau: 515.). On input *note x: 515. 
should contain the right-hand side b, which is replaced by the solution on output. -- Function: int gsl_linalg_QR_lssolve (const gsl_matrix *QR, const gsl_vector *tau, const gsl_vector *b, gsl_vector *x, gsl_vector *residual) -- Function: int gsl_linalg_complex_QR_lssolve (const gsl_matrix_complex *QR, const gsl_vector_complex *tau, const gsl_vector_complex *b, gsl_vector_complex *x, gsl_vector_complex *residual) These functions find the least squares solution to the overdetermined system A x = b where the matrix ‘A’ has more rows than columns. The least squares solution minimizes the Euclidean norm of the residual, ||Ax - b||. The routine requires as input the QR decomposition of A into (*note QR: 516, *note tau: 516.) given by *note gsl_linalg_QR_decomp(): 50f. or *note gsl_linalg_complex_QR_decomp(): 510. The solution is returned in *note x: 516. The residual is computed as a by-product and stored in *note residual: 516. -- Function: int gsl_linalg_QR_QTvec (const gsl_matrix *QR, const gsl_vector *tau, gsl_vector *v) -- Function: int gsl_linalg_complex_QR_QHvec (const gsl_matrix_complex *QR, const gsl_vector_complex *tau, gsl_vector_complex *v) These functions apply the matrix Q^T (or Q^{\dagger}) encoded in the decomposition (*note QR: 518, *note tau: 518.) to the vector *note v: 518, storing the result Q^T v (or Q^{\dagger} v) in *note v: 518. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T (or Q^{\dagger}). -- Function: int gsl_linalg_QR_Qvec (const gsl_matrix *QR, const gsl_vector *tau, gsl_vector *v) -- Function: int gsl_linalg_complex_QR_Qvec (const gsl_matrix_complex *QR, const gsl_vector_complex *tau, gsl_vector_complex *v) These functions apply the matrix Q encoded in the decomposition (*note QR: 51a, *note tau: 51a.) to the vector *note v: 51a, storing the result Q v in *note v: 51a. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q. -- Function: int gsl_linalg_QR_QTmat (const gsl_matrix *QR, const gsl_vector *tau, gsl_matrix *B) This function applies the matrix Q^T encoded in the decomposition (*note QR: 51b, *note tau: 51b.) to the M-by-K matrix *note B: 51b, storing the result Q^T B in *note B: 51b. The matrix multiplication is carried out directly using the encoding of the Householder vectors without needing to form the full matrix Q^T. -- Function: int gsl_linalg_QR_Rsolve (const gsl_matrix *QR, const gsl_vector *b, gsl_vector *x) This function solves the triangular system R x = b for *note x: 51c. It may be useful if the product b' = Q^T b has already been computed using *note gsl_linalg_QR_QTvec(): 517. -- Function: int gsl_linalg_QR_Rsvx (const gsl_matrix *QR, gsl_vector *x) This function solves the triangular system R x = b for *note x: 51d. in-place. On input *note x: 51d. should contain the right-hand side b and is replaced by the solution on output. This function may be useful if the product b' = Q^T b has already been computed using *note gsl_linalg_QR_QTvec(): 517. -- Function: int gsl_linalg_QR_unpack (const gsl_matrix *QR, const gsl_vector *tau, gsl_matrix *Q, gsl_matrix *R) This function unpacks the encoded QR decomposition (*note QR: 51e, *note tau: 51e.) into the matrices *note Q: 51e. and *note R: 51e, where *note Q: 51e. is M-by-M and *note R: 51e. is M-by-N.
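A corresponding sketch for the Level 2 interface, solving a small square system with gsl_linalg_QR_decomp() and gsl_linalg_QR_solve() (the matrix and right-hand side values are arbitrary examples):

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* solve the square 2-by-2 system A x = b with the Level 2 routines */
       double a_data[] = { 4.0, 1.0,
                           2.0, 3.0 };
       double b_data[] = { 1.0, 2.0 };
       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 2);
       gsl_vector *tau = gsl_vector_alloc (2);  /* Householder coefficients */
       gsl_vector *x = gsl_vector_alloc (2);

       gsl_linalg_QR_decomp (&A.matrix, tau);   /* A now holds R and the Householder vectors */
       gsl_linalg_QR_solve (&A.matrix, tau, &b.vector, x);

       printf ("x = (%g, %g)\n", gsl_vector_get (x, 0), gsl_vector_get (x, 1));

       gsl_vector_free (tau);
       gsl_vector_free (x);
       return 0;
     }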
-- Function: int gsl_linalg_QR_QRsolve (gsl_matrix *Q, gsl_matrix *R, const gsl_vector *b, gsl_vector *x) This function solves the system R x = Q^T b for *note x: 51f. It can be used when the QR decomposition of a matrix is available in unpacked form as (*note Q: 51f, *note R: 51f.). -- Function: int gsl_linalg_QR_update (gsl_matrix *Q, gsl_matrix *R, gsl_vector *w, const gsl_vector *v) This function performs a rank-1 update w v^T of the QR decomposition (*note Q: 520, *note R: 520.). The update is given by Q'R' = Q (R + w v^T) where the output matrices Q' and R' are also orthogonal and right triangular. Note that *note w: 520. is destroyed by the update. -- Function: int gsl_linalg_R_solve (const gsl_matrix *R, const gsl_vector *b, gsl_vector *x) This function solves the triangular system R x = b for the N-by-N matrix *note R: 521. -- Function: int gsl_linalg_R_svx (const gsl_matrix *R, gsl_vector *x) This function solves the triangular system R x = b in-place. On input *note x: 522. should contain the right-hand side b, which is replaced by the solution on output.  File: gsl-ref.info, Node: Triangle on Top of Rectangle, Next: Triangle on Top of Triangle, Prev: Level 2 Interface, Up: QR Decomposition 14.2.2 Triangle on Top of Rectangle ----------------------------------- This section provides routines for computing the QR decomposition of the specialized matrix \begin{pmatrix} U \\ A \end{pmatrix} = Q R where U is an N-by-N upper triangular matrix, and A is an M-by-N dense matrix. This type of matrix arises, for example, in the sequential TSQR algorithm. The Elmroth and Gustavson algorithm is used to efficiently factor this matrix. Due to the upper triangular factor, the Q matrix takes the form Q = I - V T V^T with V = \begin{pmatrix} I \\ Y \end{pmatrix} and Y is dense and of the same dimensions as A. -- Function: int gsl_linalg_QR_UR_decomp (gsl_matrix *U, gsl_matrix *A, gsl_matrix *T) This function computes the QR decomposition of the matrix (U ; A), where U is N-by-N upper triangular and A is M-by-N dense. On output, U is replaced by the R factor, and A is replaced by Y. The N-by-N upper triangular block reflector is stored in *note T: 524. on output.  File: gsl-ref.info, Node: Triangle on Top of Triangle, Next: Triangle on Top of Trapezoidal, Prev: Triangle on Top of Rectangle, Up: QR Decomposition 14.2.3 Triangle on Top of Triangle ---------------------------------- This section provides routines for computing the QR decomposition of the specialized matrix \begin{pmatrix} U_1 \\ U_2 \end{pmatrix} = Q R where U_1,U_2 are N-by-N upper triangular matrices. The Elmroth and Gustavson algorithm is used to efficiently factor this matrix. The Q matrix takes the form Q = I - V T V^T with V = \begin{pmatrix} I \\ Y \end{pmatrix} and Y is N-by-N upper triangular. -- Function: int gsl_linalg_QR_UU_decomp (gsl_matrix *U1, gsl_matrix *U2, gsl_matrix *T) This function computes the QR decomposition of the matrix (U_1 ; U_2), where U_1,U_2 are N-by-N upper triangular. On output, *note U1: 526. is replaced by the R factor, and *note U2: 526. is replaced by Y. The N-by-N upper triangular block reflector is stored in *note T: 526. on output. -- Function: int gsl_linalg_QR_UU_lssolve (const gsl_matrix *R, const gsl_matrix *Y, const gsl_matrix *T, const gsl_vector *b, gsl_vector *x, gsl_vector *work) This function finds the least squares solution to the overdetermined system, \min_x \left| \left| b - \begin{pmatrix} U_1 \\ U_2 \end{pmatrix} x \right| \right|^2 where U_1,U_2 are N-by-N upper triangular matrices.
The routine requires as input the QR decomposition of (U_1; U_2) into (*note R: 527, *note Y: 527, *note T: 527.) given by *note gsl_linalg_QR_UU_decomp(): 526. The parameter *note x: 527. is of length 2N. The solution x is returned in the first N rows of *note x: 527, i.e. x = ‘x[0], x[1], ..., x[N-1]’. The last N rows of *note x: 527. contain a vector whose norm is equal to the residual norm || b - (U_1; U_2) x ||. This is similar to the behavior of LAPACK DGELS. Additional workspace of length N is required in *note work: 527. -- Function: int gsl_linalg_QR_UU_QTvec (const gsl_matrix *Y, const gsl_matrix *T, gsl_vector *b, gsl_vector *work) This function computes Q^T b using the decomposition (*note Y: 528, *note T: 528.) previously computed by *note gsl_linalg_QR_UU_decomp(): 526. On input, *note b: 528. contains the vector b, and on output it will contain Q^T b. Additional workspace of length N is required in *note work: 528.  File: gsl-ref.info, Node: Triangle on Top of Trapezoidal, Next: Triangle on Top of Diagonal, Prev: Triangle on Top of Triangle, Up: QR Decomposition 14.2.4 Triangle on Top of Trapezoidal ------------------------------------- This section provides routines for computing the QR decomposition of the specialized matrix \begin{pmatrix} U \\ A \end{pmatrix} = Q R where U is an N-by-N upper triangular matrix, and A is an M-by-N upper trapezoidal matrix with M \ge N. A has the structure, A = \begin{pmatrix} A_d \\ A_u \end{pmatrix} where A_d is (M-N)-by-N dense, and A_u is N-by-N upper triangular. The Elmroth and Gustavson algorithm is used to efficiently factor this matrix. The Q matrix takes the form Q = I - V T V^T with V = \begin{pmatrix} I \\ Y \end{pmatrix} and Y is upper trapezoidal and of the same dimensions as A. -- Function: int gsl_linalg_QR_UZ_decomp (gsl_matrix *U, gsl_matrix *A, gsl_matrix *T) This function computes the QR decomposition of the matrix (U ; A), where U is N-by-N upper triangular and A is M-by-N upper trapezoidal. On output, U is replaced by the R factor, and A is replaced by Y. The N-by-N upper triangular block reflector is stored in *note T: 52a. on output.  File: gsl-ref.info, Node: Triangle on Top of Diagonal, Prev: Triangle on Top of Trapezoidal, Up: QR Decomposition 14.2.5 Triangle on Top of Diagonal ---------------------------------- This section provides routines for computing the QR decomposition of the specialized matrix \begin{pmatrix} U \\ D \end{pmatrix} = Q R where U is an N-by-N upper triangular matrix and D is an N-by-N diagonal matrix. This type of matrix arises in regularized least squares problems. The Elmroth and Gustavson algorithm is used to efficiently factor this matrix. The Q matrix takes the form Q = I - V T V^T with V = \begin{pmatrix} I \\ Y \end{pmatrix} and Y is N-by-N upper triangular. -- Function: int gsl_linalg_QR_UD_decomp (gsl_matrix *U, const gsl_vector *D, gsl_matrix *Y, gsl_matrix *T) This function computes the QR decomposition of the matrix (U ; D), where U is N-by-N upper triangular and D is N-by-N diagonal. On output, *note U: 52c. is replaced by the R factor and Y is stored in *note Y: 52c. The N-by-N upper triangular block reflector is stored in *note T: 52c. on output. -- Function: int gsl_linalg_QR_UD_lssolve (const gsl_matrix *R, const gsl_matrix *Y, const gsl_matrix *T, const gsl_vector *b, gsl_vector *x, gsl_vector *work) This function finds the least squares solution to the overdetermined system, \min_x \left| \left| b - \begin{pmatrix} U \\ D \end{pmatrix} x \right| \right|^2 where U is N-by-N upper triangular and D is N-by-N diagonal.
The routine requires as input the QR decomposition of (U; D) into (*note R: 52d, *note Y: 52d, *note T: 52d.) given by *note gsl_linalg_QR_UD_decomp(): 52c. The parameter *note x: 52d. is of length 2N. The solution x is returned in the first N rows of *note x: 52d, i.e. x = ‘x[0], x[1], ..., x[N-1]’. The last N rows of *note x: 52d. contain a vector whose norm is equal to the residual norm || b - (U; D) x ||. This is similar to the behavior of LAPACK DGELS. Additional workspace of length N is required in *note work: 52d.  File: gsl-ref.info, Node: QR Decomposition with Column Pivoting, Next: LQ Decomposition, Prev: QR Decomposition, Up: Linear Algebra 14.3 QR Decomposition with Column Pivoting ========================================== The QR decomposition of an M-by-N matrix A can be extended to the rank deficient case by introducing a column permutation P, A P = Q R The first r columns of Q form an orthonormal basis for the range of A for a matrix with column rank r. This decomposition can also be used to convert the square linear system A x = b into the triangular system R y = Q^T b, x = P y, which can be solved by back-substitution and permutation. We denote the QR decomposition with column pivoting by QRP^T since A = Q R P^T. When A is rank deficient with r = {\rm rank}(A), the matrix R can be partitioned as R = \begin{pmatrix} R_{11} & R_{12} \\ 0 & R_{22} \end{pmatrix} \approx \begin{pmatrix} R_{11} & R_{12} \\ 0 & 0 \end{pmatrix} where R_{11} is r-by-r and nonsingular. In this case, a `basic' least squares solution for the overdetermined system A x = b can be obtained as x = P \begin{pmatrix} R_{11}^{-1} c_1 \\ 0 \end{pmatrix} where c_1 consists of the first r elements of Q^T b. This basic solution is not guaranteed to be the minimum norm solution unless R_{12} = 0 (see *note Complete Orthogonal Decomposition: 501.). -- Function: int gsl_linalg_QRPT_decomp (gsl_matrix *A, gsl_vector *tau, gsl_permutation *p, int *signum, gsl_vector *norm) This function factorizes the M-by-N matrix *note A: 52f. into the QRP^T decomposition A = Q R P^T. On output the diagonal and upper triangular part of the input matrix contain the matrix R. The permutation matrix P is stored in the permutation *note p: 52f. The sign of the permutation is given by *note signum: 52f. It has the value (-1)^n, where n is the number of interchanges in the permutation. The vector *note tau: 52f. and the columns of the lower triangular part of the matrix *note A: 52f. contain the Householder coefficients and vectors which encode the orthogonal matrix ‘Q’. The vector *note tau: 52f. must be of length k=\min(M,N). The matrix Q is related to these components by, Q = Q_k ... Q_2 Q_1 where Q_i = I - \tau_i v_i v_i^T and v_i is the Householder vector v_i = (0,...,1,A(i+1,i),A(i+2,i),...,A(m,i)) This is the same storage scheme as used by LAPACK. The vector *note norm: 52f. is a workspace of length ‘N’ used for column pivoting. The algorithm used to perform the decomposition is Householder QR with column pivoting (Golub & Van Loan, “Matrix Computations”, Algorithm 5.4.1). -- Function: int gsl_linalg_QRPT_decomp2 (const gsl_matrix *A, gsl_matrix *q, gsl_matrix *r, gsl_vector *tau, gsl_permutation *p, int *signum, gsl_vector *norm) This function factorizes the matrix *note A: 530. into the decomposition A = Q R P^T without modifying *note A: 530. itself and storing the output in the separate matrices *note q: 530. and *note r: 530.
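As a short sketch, a square system could be solved with column pivoting as follows (arbitrary example data; for rectangular or rank-deficient systems the lssolve routines below would be used instead):

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>
     #include <gsl/gsl_permutation.h>

     int
     main (void)
     {
       double a_data[] = { 1.0, 2.0,
                           3.0, 4.0 };
       double b_data[] = { 5.0, 6.0 };
       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 2);
       gsl_vector *tau = gsl_vector_alloc (2);        /* length min(M,N) */
       gsl_vector *norm = gsl_vector_alloc (2);       /* pivoting workspace of length N */
       gsl_permutation *p = gsl_permutation_alloc (2);
       gsl_vector *x = gsl_vector_alloc (2);
       int signum;

       gsl_linalg_QRPT_decomp (&A.matrix, tau, p, &signum, norm);
       gsl_linalg_QRPT_solve (&A.matrix, tau, p, &b.vector, x);

       printf ("x = (%g, %g)\n", gsl_vector_get (x, 0), gsl_vector_get (x, 1));

       gsl_permutation_free (p);
       gsl_vector_free (tau);
       gsl_vector_free (norm);
       gsl_vector_free (x);
       return 0;
     }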
-- Function: int gsl_linalg_QRPT_solve (const gsl_matrix *QR, const gsl_vector *tau, const gsl_permutation *p, const gsl_vector *b, gsl_vector *x) This function solves the square system A x = b using the QRP^T decomposition of A held in (*note QR: 531, *note tau: 531, *note p: 531.) which must have been computed previously by *note gsl_linalg_QRPT_decomp(): 52f. -- Function: int gsl_linalg_QRPT_svx (const gsl_matrix *QR, const gsl_vector *tau, const gsl_permutation *p, gsl_vector *x) This function solves the square system A x = b in-place using the QRP^T decomposition of A held in (*note QR: 532, *note tau: 532, *note p: 532.). On input *note x: 532. should contain the right-hand side b, which is replaced by the solution on output. -- Function: int gsl_linalg_QRPT_lssolve (const gsl_matrix *QR, const gsl_vector *tau, const gsl_permutation *p, const gsl_vector *b, gsl_vector *x, gsl_vector *residual) This function finds the least squares solution to the overdetermined system A x = b where the matrix ‘A’ has more rows than columns and is assumed to have full rank. The least squares solution minimizes the Euclidean norm of the residual, ||b - A x||. The routine requires as input the QR decomposition of A into (*note QR: 533, *note tau: 533, *note p: 533.) given by *note gsl_linalg_QRPT_decomp(): 52f. The solution is returned in *note x: 533. The residual is computed as a by-product and stored in *note residual: 533. For rank deficient matrices, *note gsl_linalg_QRPT_lssolve2(): 534. should be used instead. -- Function: int gsl_linalg_QRPT_lssolve2 (const gsl_matrix *QR, const gsl_vector *tau, const gsl_permutation *p, const gsl_vector *b, const size_t rank, gsl_vector *x, gsl_vector *residual) This function finds the least squares solution to the overdetermined system A x = b where the matrix ‘A’ has more rows than columns and has rank given by the input *note rank: 534. If the user does not know the rank of A, the routine *note gsl_linalg_QRPT_rank(): 535. can be called to estimate it. The least squares solution is the so-called “basic” solution discussed above and may not be the minimum norm solution. The routine requires as input the QR decomposition of A into (*note QR: 534, *note tau: 534, *note p: 534.) given by *note gsl_linalg_QRPT_decomp(): 52f. The solution is returned in *note x: 534. The residual is computed as a by-product and stored in *note residual: 534. -- Function: int gsl_linalg_QRPT_QRsolve (const gsl_matrix *Q, const gsl_matrix *R, const gsl_permutation *p, const gsl_vector *b, gsl_vector *x) This function solves the square system R P^T x = Q^T b for *note x: 536. It can be used when the QR decomposition of a matrix is available in unpacked form as (*note Q: 536, *note R: 536.). -- Function: int gsl_linalg_QRPT_update (gsl_matrix *Q, gsl_matrix *R, const gsl_permutation *p, gsl_vector *w, const gsl_vector *v) This function performs a rank-1 update w v^T of the QRP^T decomposition (*note Q: 537, *note R: 537, *note p: 537.). The update is given by Q'R' = Q (R + w v^T P) where the output matrices Q' and R' are also orthogonal and right triangular. Note that *note w: 537. is destroyed by the update. The permutation *note p: 537. is not changed. -- Function: int gsl_linalg_QRPT_Rsolve (const gsl_matrix *QR, const gsl_permutation *p, const gsl_vector *b, gsl_vector *x) This function solves the triangular system R P^T x = b for the N-by-N matrix R contained in *note QR: 538. 
-- Function: int gsl_linalg_QRPT_Rsvx (const gsl_matrix *QR, const gsl_permutation *p, gsl_vector *x) This function solves the triangular system R P^T x = b in-place for the N-by-N matrix R contained in *note QR: 539. On input *note x: 539. should contain the right-hand side b, which is replaced by the solution on output. -- Function: size_t gsl_linalg_QRPT_rank (const gsl_matrix *QR, const double tol) This function estimates the rank of the triangular matrix R contained in *note QR: 535. The algorithm simply counts the number of diagonal elements of R whose absolute value is greater than the specified tolerance *note tol: 535. If the input *note tol: 535. is negative, a default value of 20 (M + N) eps(max(|diag(R)|)) is used. -- Function: int gsl_linalg_QRPT_rcond (const gsl_matrix *QR, double *rcond, gsl_vector *work) This function estimates the reciprocal condition number (using the 1-norm) of the R factor, stored in the upper triangle of *note QR: 53a. The reciprocal condition number estimate, defined as 1 / (||R||_1 \cdot ||R^{-1}||_1), is stored in *note rcond: 53a. Additional workspace of size 3 N is required in *note work: 53a.  File: gsl-ref.info, Node: LQ Decomposition, Next: QL Decomposition, Prev: QR Decomposition with Column Pivoting, Up: Linear Algebra 14.4 LQ Decomposition ===================== A general rectangular M-by-N matrix A has a LQ decomposition into the product of a lower trapezoidal M-by-N matrix L and an orthogonal N-by-N square matrix Q: A = L Q If M \le N, then L can be written as L = (L_1 \quad 0) where L_1 is M-by-M lower triangular, and A = \begin{pmatrix} L_1 & 0 \end{pmatrix} \begin{pmatrix} Q_1 \\ Q_2 \end{pmatrix} = L_1 Q_1 where Q_1 consists of the first M rows of Q, and Q_2 contains the remaining N - M rows. The LQ factorization of A is essentially the same as the *note QR factorization: 4fe. of A^T. The LQ factorization may be used to find the minimum norm solution of an underdetermined system of equations A x = b, where A is M-by-N and M \le N. The solution is x = Q^T \begin{pmatrix} L_1^{-1} b \\ 0 \end{pmatrix} -- Function: int gsl_linalg_LQ_decomp (gsl_matrix *A, gsl_vector *tau) This function factorizes the M-by-N matrix *note A: 53c. into the LQ decomposition A = L Q. On output the diagonal and lower trapezoidal part of the input matrix contain the matrix L. The vector *note tau: 53c. and the elements above the diagonal of the matrix *note A: 53c. contain the Householder coefficients and Householder vectors which encode the orthogonal matrix ‘Q’. The vector *note tau: 53c. must be of length k=\min(M,N). The matrix Q is related to these components by, Q = Q_k ... Q_2 Q_1 where Q_i = I - \tau_i v_i v_i^T and v_i is the Householder vector v_i = (0,...,1,A(i,i+1),A(i,i+2),...,A(i,N)). This is the same storage scheme as used by LAPACK. -- Function: int gsl_linalg_LQ_lssolve (const gsl_matrix *LQ, const gsl_vector *tau, const gsl_vector *b, gsl_vector *x, gsl_vector *residual) This function finds the minimum norm least squares solution to the underdetermined system A x = b, where the M-by-N matrix ‘A’ has M \le N. The routine requires as input the LQ decomposition of A into (*note LQ: 53d, *note tau: 53d.) given by *note gsl_linalg_LQ_decomp(): 53c. The solution is returned in *note x: 53d. The residual, b - Ax, is computed as a by-product and stored in *note residual: 53d. 
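For example, the minimum norm solution of a small underdetermined system might be computed along these lines (a sketch with arbitrary 2-by-3 example data):

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* underdetermined system: 2 equations in 3 unknowns */
       double a_data[] = { 1.0, 2.0, 3.0,
                           4.0, 5.0, 6.0 };
       double b_data[] = { 1.0, 1.0 };
       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 3);
       gsl_vector_view b = gsl_vector_view_array (b_data, 2);
       gsl_vector *tau = gsl_vector_alloc (2);       /* length min(M,N) = 2 */
       gsl_vector *x = gsl_vector_alloc (3);         /* minimum norm solution */
       gsl_vector *residual = gsl_vector_alloc (2);

       gsl_linalg_LQ_decomp (&A.matrix, tau);
       gsl_linalg_LQ_lssolve (&A.matrix, tau, &b.vector, x, residual);

       printf ("x = (%g, %g, %g)\n", gsl_vector_get (x, 0),
               gsl_vector_get (x, 1), gsl_vector_get (x, 2));

       gsl_vector_free (tau);
       gsl_vector_free (x);
       gsl_vector_free (residual);
       return 0;
     }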
-- Function: int gsl_linalg_LQ_unpack (const gsl_matrix *LQ, const gsl_vector *tau, gsl_matrix *Q, gsl_matrix *L) This function unpacks the encoded LQ decomposition (*note LQ: 53e, *note tau: 53e.) into the matrices *note Q: 53e. and *note L: 53e, where *note Q: 53e. is N-by-N and *note L: 53e. is M-by-N. -- Function: int gsl_linalg_LQ_QTvec (const gsl_matrix *LQ, const gsl_vector *tau, gsl_vector *v) This function applies Q^T to the vector *note v: 53f, storing the result Q^T v in *note v: 53f. on output.  File: gsl-ref.info, Node: QL Decomposition, Next: Complete Orthogonal Decomposition, Prev: LQ Decomposition, Up: Linear Algebra 14.5 QL Decomposition ===================== A general rectangular M-by-N matrix A has a QL decomposition into the product of an orthogonal M-by-M square matrix Q (where Q^T Q = I) and an M-by-N left-triangular matrix L. When M \ge N, the decomposition is given by A = Q \begin{pmatrix} 0 \\ L_1 \end{pmatrix} where L_1 is N-by-N lower triangular. When M \le N, the decomposition is given by A = Q \begin{pmatrix} L_1 & L_2 \end{pmatrix} where L_1 is a dense M-by-(N-M) matrix and L_2 is a lower triangular M-by-M matrix. -- Function: int gsl_linalg_QL_decomp (gsl_matrix *A, gsl_vector *tau) This function factorizes the M-by-N matrix *note A: 541. into the QL decomposition A = Q L. The vector *note tau: 541. must be of length N and contains the Householder coefficients on output. The matrix Q is stored in packed form in *note A: 541. on output, using the same storage scheme as LAPACK. -- Function: int gsl_linalg_QL_unpack (const gsl_matrix *QL, const gsl_vector *tau, gsl_matrix *Q, gsl_matrix *L) This function unpacks the encoded QL decomposition (*note QL: 542, *note tau: 542.) into the matrices *note Q: 542. and *note L: 542, where *note Q: 542. is M-by-M and *note L: 542. is M-by-N.  File: gsl-ref.info, Node: Complete Orthogonal Decomposition, Next: Singular Value Decomposition, Prev: QL Decomposition, Up: Linear Algebra 14.6 Complete Orthogonal Decomposition ====================================== The complete orthogonal decomposition of an M-by-N matrix A is a generalization of the QR decomposition with column pivoting, given by A P = Q \begin{pmatrix} R_{11} & 0 \\ 0 & 0 \end{pmatrix} Z^T where P is an N-by-N permutation matrix, Q is M-by-M orthogonal, R_{11} is r-by-r upper triangular, with r = {\rm rank}(A), and Z is N-by-N orthogonal. If A has full rank, then R_{11} = R, Z = I and this reduces to the QR decomposition with column pivoting. For a rank deficient least squares problem, \min_x{|| b - Ax||^2}, the solution vector x is not unique. However if we further require that ||x||^2 is minimized, then the complete orthogonal decomposition gives us the ability to compute the unique minimum norm solution, which is given by x = P Z \begin{pmatrix} R_{11}^{-1} c_1 \\ 0 \end{pmatrix} and the vector c_1 is the first r elements of Q^T b. The COD also enables a straightforward solution of regularized least squares problems in Tikhonov standard form, written as \min_x ||b - A x||^2 + \lambda^2 ||x||^2 where \lambda > 0 is a regularization parameter which represents a tradeoff between minimizing the residual norm ||b - A x|| and the solution norm ||x||. For this system, the solution is given by x = P Z \begin{pmatrix} y_1 \\ 0 \end{pmatrix} where y_1 is a vector of length r which is found by solving \begin{pmatrix} R_{11} \\ \lambda I_r \end{pmatrix} y_1 = \begin{pmatrix} c_1 \\ 0 \end{pmatrix} and c_1 is defined above. The equation above can be solved efficiently for different values of \lambda using QR factorizations of the left hand side matrix.
-- Function: int gsl_linalg_COD_decomp (gsl_matrix *A, gsl_vector *tau_Q, gsl_vector *tau_Z, gsl_permutation *p, size_t *rank, gsl_vector *work) -- Function: int gsl_linalg_COD_decomp_e (gsl_matrix *A, gsl_vector *tau_Q, gsl_vector *tau_Z, gsl_permutation *p, double tol, size_t *rank, gsl_vector *work) These functions factor the M-by-N matrix *note A: 545. into the decomposition A = Q R Z P^T. The rank of *note A: 545. is computed as the number of diagonal elements of R greater than the tolerance *note tol: 545. and output in *note rank: 545. If *note tol: 545. is not specified, a default value is used (see *note gsl_linalg_QRPT_rank(): 535.). On output, the permutation matrix P is stored in *note p: 545. The matrix R_{11} is stored in the upper *note rank: 545.-by-*note rank: 545. block of *note A: 545. The matrices Q and Z are encoded in packed storage in *note A: 545. on output. The vectors *note tau_Q: 545. and *note tau_Z: 545. contain the Householder scalars corresponding to the matrices Q and Z respectively and must be of length k = \min(M,N). The vector *note work: 545. is additional workspace of length N. -- Function: int gsl_linalg_COD_lssolve (const gsl_matrix *QRZT, const gsl_vector *tau_Q, const gsl_vector *tau_Z, const gsl_permutation *p, const size_t rank, const gsl_vector *b, gsl_vector *x, gsl_vector *residual) This function finds the unique minimum norm least squares solution to the overdetermined system A x = b where the matrix ‘A’ has more rows than columns. The least squares solution minimizes the Euclidean norm of the residual, ||b - A x|| as well as the norm of the solution ||x||. The routine requires as input the QRZT decomposition of A into (*note QRZT: 546, *note tau_Q: 546, *note tau_Z: 546, *note p: 546, *note rank: 546.) given by *note gsl_linalg_COD_decomp(): 544. The solution is returned in *note x: 546. The residual, b - Ax, is computed as a by-product and stored in *note residual: 546. -- Function: int gsl_linalg_COD_lssolve2 (const double lambda, const gsl_matrix *QRZT, const gsl_vector *tau_Q, const gsl_vector *tau_Z, const gsl_permutation *p, const size_t rank, const gsl_vector *b, gsl_vector *x, gsl_vector *residual, gsl_matrix *S, gsl_vector *work) This function finds the solution to the regularized least squares problem in Tikhonov standard form, \min_x ||b - Ax||^2 + \lambda^2 ||x||^2. The routine requires as input the QRZT decomposition of A into (*note QRZT: 547, *note tau_Q: 547, *note tau_Z: 547, *note p: 547, *note rank: 547.) given by *note gsl_linalg_COD_decomp(): 544. The parameter \lambda is supplied in *note lambda: 547. The solution is returned in *note x: 547. The residual, b - Ax, is stored in *note residual: 547. on output. *note S: 547. is additional workspace of size *note rank: 547.-by-*note rank: 547. *note work: 547. is additional workspace of length *note rank: 547. -- Function: int gsl_linalg_COD_unpack (const gsl_matrix *QRZT, const gsl_vector *tau_Q, const gsl_vector *tau_Z, const size_t rank, gsl_matrix *Q, gsl_matrix *R, gsl_matrix *Z) This function unpacks the encoded QRZT decomposition (*note QRZT: 548, *note tau_Q: 548, *note tau_Z: 548, *note rank: 548.) into the matrices *note Q: 548, *note R: 548, and *note Z: 548, where *note Q: 548. is M-by-M, *note R: 548. is M-by-N, and *note Z: 548. is N-by-N. -- Function: int gsl_linalg_COD_matZ (const gsl_matrix *QRZT, const gsl_vector *tau_Z, const size_t rank, gsl_matrix *A, gsl_vector *work) This function multiplies the input matrix *note A: 549. 
on the right by ‘Z’, A' = A Z using the encoded QRZT decomposition (*note QRZT: 549, *note tau_Z: 549, *note rank: 549.). *note A: 549. must have N columns but may have any number of rows. Additional workspace of length M is provided in *note work: 549.  File: gsl-ref.info, Node: Singular Value Decomposition, Next: Cholesky Decomposition, Prev: Complete Orthogonal Decomposition, Up: Linear Algebra 14.7 Singular Value Decomposition ================================= A general rectangular M-by-N matrix A has a singular value decomposition (SVD) into the product of an M-by-N orthogonal matrix U, an N-by-N diagonal matrix of singular values S and the transpose of an N-by-N orthogonal square matrix V, A = U S V^T The singular values \sigma_i = S_{ii} are all non-negative and are generally chosen to form a non-increasing sequence \sigma_1 >= \sigma_2 >= ... >= \sigma_N >= 0 The singular value decomposition of a matrix has many practical uses. The condition number of the matrix is given by the ratio of the largest singular value to the smallest singular value. The presence of a zero singular value indicates that the matrix is singular. The number of non-zero singular values indicates the rank of the matrix. In practice singular value decomposition of a rank-deficient matrix will not produce exact zeroes for singular values, due to finite numerical precision. Small singular values should be edited by choosing a suitable tolerance. For a rank-deficient matrix, the null space of A is given by the columns of V corresponding to the zero singular values. Similarly, the range of A is given by columns of U corresponding to the non-zero singular values. Note that the routines here compute the “thin” version of the SVD with U as M-by-N orthogonal matrix. This allows in-place computation and is the most commonly-used form in practice. Mathematically, the “full” SVD is defined with U as an M-by-M orthogonal matrix and S as an M-by-N diagonal matrix (with additional rows of zeros). -- Function: int gsl_linalg_SV_decomp (gsl_matrix *A, gsl_matrix *V, gsl_vector *S, gsl_vector *work) This function factorizes the M-by-N matrix *note A: 54b. into the singular value decomposition A = U S V^T for M \ge N. On output the matrix *note A: 54b. is replaced by U. The diagonal elements of the singular value matrix S are stored in the vector *note S: 54b. The singular values are non-negative and form a non-increasing sequence from S_1 to S_N. The matrix *note V: 54b. contains the elements of V in untransposed form. To form the product U S V^T it is necessary to take the transpose of *note V: 54b. A workspace of length ‘N’ is required in *note work: 54b. This routine uses the Golub-Reinsch SVD algorithm. -- Function: int gsl_linalg_SV_decomp_mod (gsl_matrix *A, gsl_matrix *X, gsl_matrix *V, gsl_vector *S, gsl_vector *work) This function computes the SVD using the modified Golub-Reinsch algorithm, which is faster for M \gg N. It requires the vector *note work: 54c. of length ‘N’ and the N-by-N matrix *note X: 54c. as additional working space. -- Function: int gsl_linalg_SV_decomp_jacobi (gsl_matrix *A, gsl_matrix *V, gsl_vector *S) This function computes the SVD of the M-by-N matrix *note A: 54d. using one-sided Jacobi orthogonalization for M \ge N. The Jacobi method can compute singular values to higher relative accuracy than Golub-Reinsch algorithms (see references for details). 
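As a sketch, the thin SVD might be combined with gsl_linalg_SV_solve(), described below, to obtain a least squares solution of a small overdetermined system (the 3-by-2 matrix and right-hand side are arbitrary example values):

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a_data[] = { 1.0, 0.0,
                           0.0, 2.0,
                           1.0, 1.0 };
       double b_data[] = { 1.0, 2.0, 3.0 };
       gsl_matrix_view A = gsl_matrix_view_array (a_data, 3, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 3);
       gsl_matrix *V = gsl_matrix_alloc (2, 2);
       gsl_vector *S = gsl_vector_alloc (2);
       gsl_vector *work = gsl_vector_alloc (2);
       gsl_vector *x = gsl_vector_alloc (2);

       gsl_linalg_SV_decomp (&A.matrix, V, S, work);          /* A is replaced by U */
       gsl_linalg_SV_solve (&A.matrix, V, S, &b.vector, x);   /* least squares solution */

       printf ("x = (%g, %g)\n", gsl_vector_get (x, 0), gsl_vector_get (x, 1));

       gsl_matrix_free (V);
       gsl_vector_free (S);
       gsl_vector_free (work);
       gsl_vector_free (x);
       return 0;
     }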
-- Function: int gsl_linalg_SV_solve (const gsl_matrix *U, const gsl_matrix *V, const gsl_vector *S, const gsl_vector *b, gsl_vector *x) This function solves the system A x = b using the singular value decomposition (*note U: 54e, *note S: 54e, *note V: 54e.) of A which must have been computed previously with *note gsl_linalg_SV_decomp(): 54b. Only non-zero singular values are used in computing the solution. The parts of the solution corresponding to singular values of zero are ignored. Other singular values can be edited out by setting them to zero before calling this function. In the over-determined case where ‘A’ has more rows than columns the system is solved in the least squares sense, returning the solution *note x: 54e. which minimizes ||A x - b||_2. -- Function: int gsl_linalg_SV_leverage (const gsl_matrix *U, gsl_vector *h) This function computes the statistical leverage values h_i of a matrix A using its singular value decomposition (*note U: 54f, ‘S’, ‘V’) previously computed with *note gsl_linalg_SV_decomp(): 54b. h_i are the diagonal values of the matrix A (A^T A)^{-1} A^T and depend only on the matrix *note U: 54f. which is the input to this function.  File: gsl-ref.info, Node: Cholesky Decomposition, Next: Pivoted Cholesky Decomposition, Prev: Singular Value Decomposition, Up: Linear Algebra 14.8 Cholesky Decomposition =========================== A symmetric, positive definite square matrix A has a Cholesky decomposition into a product of a lower triangular matrix L and its transpose L^T, A = L L^T This is sometimes referred to as taking the square-root of a matrix. The Cholesky decomposition can only be carried out when all the eigenvalues of the matrix are positive. This decomposition can be used to convert the linear system A x = b into a pair of triangular systems (L y = b, L^T x = y), which can be solved by forward and back-substitution. If the matrix A is near singular, it is sometimes possible to reduce the condition number and recover a more accurate solution vector x by scaling as ( S A S ) ( S^(-1) x ) = S b where S is a diagonal matrix whose elements are given by S_{ii} = 1/\sqrt{A_{ii}}. This scaling is also known as Jacobi preconditioning. There are routines below to solve both the scaled and unscaled systems. -- Function: int gsl_linalg_cholesky_decomp1 (gsl_matrix *A) -- Function: int gsl_linalg_complex_cholesky_decomp (gsl_matrix_complex *A) These functions factorize the symmetric, positive-definite square matrix *note A: 552. into the Cholesky decomposition A = L L^T (or A = L L^{\dagger} for the complex case). On input, the values from the diagonal and lower-triangular part of the matrix *note A: 552. are used (the upper triangular part is ignored). On output the diagonal and lower triangular part of the input matrix *note A: 552. contain the matrix L, while the upper triangular part contains the original matrix. If the matrix is not positive-definite then the decomposition will fail, returning the error code *note GSL_EDOM: 28. When testing whether a matrix is positive-definite, disable the error handler first to avoid triggering an error. These functions use Level 3 BLAS to compute the Cholesky factorization (Peise and Bientinesi, 2016). -- Function: int gsl_linalg_cholesky_decomp (gsl_matrix *A) This function is now deprecated and is provided only for backward compatibility. 
-- Function: int gsl_linalg_cholesky_solve (const gsl_matrix *cholesky, const gsl_vector *b, gsl_vector *x) -- Function: int gsl_linalg_complex_cholesky_solve (const gsl_matrix_complex *cholesky, const gsl_vector_complex *b, gsl_vector_complex *x) These functions solve the system A x = b using the Cholesky decomposition of A held in the matrix *note cholesky: 555. which must have been previously computed by *note gsl_linalg_cholesky_decomp(): 553. or *note gsl_linalg_complex_cholesky_decomp(): 552. -- Function: int gsl_linalg_cholesky_svx (const gsl_matrix *cholesky, gsl_vector *x) -- Function: int gsl_linalg_complex_cholesky_svx (const gsl_matrix_complex *cholesky, gsl_vector_complex *x) These functions solve the system A x = b in-place using the Cholesky decomposition of A held in the matrix *note cholesky: 557. which must have been previously computed by *note gsl_linalg_cholesky_decomp(): 553. or *note gsl_linalg_complex_cholesky_decomp(): 552. On input *note x: 557. should contain the right-hand side b, which is replaced by the solution on output. -- Function: int gsl_linalg_cholesky_invert (gsl_matrix *cholesky) -- Function: int gsl_linalg_complex_cholesky_invert (gsl_matrix_complex *cholesky) These functions compute the inverse of a matrix from its Cholesky decomposition *note cholesky: 559, which must have been previously computed by *note gsl_linalg_cholesky_decomp(): 553. or *note gsl_linalg_complex_cholesky_decomp(): 552. On output, the inverse is stored in-place in *note cholesky: 559. -- Function: int gsl_linalg_cholesky_decomp2 (gsl_matrix *A, gsl_vector *S) This function calculates a diagonal scaling transformation S for the symmetric, positive-definite square matrix *note A: 55a, and then computes the Cholesky decomposition S A S = L L^T. On input, the values from the diagonal and lower-triangular part of the matrix *note A: 55a. are used (the upper triangular part is ignored). On output the diagonal and lower triangular part of the input matrix *note A: 55a. contain the matrix L, while the upper triangular part of the input matrix is overwritten with L^T (the diagonal terms being identical for both L and L^T). If the matrix is not positive-definite then the decomposition will fail, returning the error code *note GSL_EDOM: 28. The diagonal scale factors are stored in *note S: 55a. on output. When testing whether a matrix is positive-definite, disable the error handler first to avoid triggering an error. -- Function: int gsl_linalg_cholesky_solve2 (const gsl_matrix *cholesky, const gsl_vector *S, const gsl_vector *b, gsl_vector *x) This function solves the system (S A S) (S^{-1} x) = S b using the Cholesky decomposition of S A S held in the matrix *note cholesky: 55b. which must have been previously computed by *note gsl_linalg_cholesky_decomp2(): 55a. -- Function: int gsl_linalg_cholesky_svx2 (const gsl_matrix *cholesky, const gsl_vector *S, gsl_vector *x) This function solves the system (S A S) (S^{-1} x) = S b in-place using the Cholesky decomposition of S A S held in the matrix *note cholesky: 55c. which must have been previously computed by *note gsl_linalg_cholesky_decomp2(): 55a. On input *note x: 55c. should contain the right-hand side b, which is replaced by the solution on output. 
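A minimal sketch using the unscaled routines gsl_linalg_cholesky_decomp1() and gsl_linalg_cholesky_solve() described above (the positive-definite matrix and right-hand side are arbitrary example values):

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* a small symmetric positive-definite system A x = b */
       double a_data[] = { 4.0, 1.0,
                           1.0, 3.0 };
       double b_data[] = { 1.0, 2.0 };
       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 2);
       gsl_vector *x = gsl_vector_alloc (2);

       gsl_linalg_cholesky_decomp1 (&A.matrix);   /* lower triangle of A now holds L */
       gsl_linalg_cholesky_solve (&A.matrix, &b.vector, x);

       printf ("x = (%g, %g)\n", gsl_vector_get (x, 0), gsl_vector_get (x, 1));

       gsl_vector_free (x);
       return 0;
     }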
-- Function: int gsl_linalg_cholesky_scale (const gsl_matrix *A, gsl_vector *S) This function calculates a diagonal scaling transformation of the symmetric, positive definite matrix *note A: 55d, such that S A S has a condition number within a factor of N of the matrix of smallest possible condition number over all possible diagonal scalings. On output, *note S: 55d. contains the scale factors, given by S_i = 1/\sqrt{A_{ii}}. For any A_{ii} \le 0, the corresponding scale factor S_i is set to 1. -- Function: int gsl_linalg_cholesky_scale_apply (gsl_matrix *A, const gsl_vector *S) This function applies the scaling transformation *note S: 55e. to the matrix *note A: 55e. On output, *note A: 55e. is replaced by S A S. -- Function: int gsl_linalg_cholesky_rcond (const gsl_matrix *cholesky, double *rcond, gsl_vector *work) This function estimates the reciprocal condition number (using the 1-norm) of the symmetric positive definite matrix A, using its Cholesky decomposition provided in *note cholesky: 55f. The reciprocal condition number estimate, defined as 1 / (||A||_1 \cdot ||A^{-1}||_1), is stored in *note rcond: 55f. Additional workspace of size 3 N is required in *note work: 55f.  File: gsl-ref.info, Node: Pivoted Cholesky Decomposition, Next: Modified Cholesky Decomposition, Prev: Cholesky Decomposition, Up: Linear Algebra 14.9 Pivoted Cholesky Decomposition =================================== A symmetric positive semi-definite square matrix A has an alternate Cholesky decomposition into a product of a lower unit triangular matrix L, a diagonal matrix D and L^T, given by L D L^T. For positive definite matrices, this is equivalent to the Cholesky formulation discussed above, with the standard Cholesky lower triangular factor given by L D^{1 \over 2}. For ill-conditioned matrices, it can help to use a pivoting strategy to prevent the entries of D and L from growing too large, and also ensure D_1 \ge D_2 \ge \cdots \ge D_n > 0, where D_i are the diagonal entries of D. The final decomposition is given by P A P^T = L D L^T where P is a permutation matrix. -- Function: int gsl_linalg_pcholesky_decomp (gsl_matrix *A, gsl_permutation *p) This function factors the symmetric, positive-definite square matrix *note A: 561. into the Pivoted Cholesky decomposition P A P^T = L D L^T. On input, the values from the diagonal and lower-triangular part of the matrix *note A: 561. are used to construct the factorization. On output the diagonal of the input matrix *note A: 561. stores the diagonal elements of D, and the lower triangular portion of *note A: 561. contains the matrix L. Since L has ones on its diagonal these do not need to be explicitly stored. The upper triangular portion of *note A: 561. is unmodified. The permutation matrix P is stored in *note p: 561. on output. -- Function: int gsl_linalg_pcholesky_solve (const gsl_matrix *LDLT, const gsl_permutation *p, const gsl_vector *b, gsl_vector *x) This function solves the system A x = b using the Pivoted Cholesky decomposition of A held in the matrix *note LDLT: 562. and permutation *note p: 562. which must have been previously computed by *note gsl_linalg_pcholesky_decomp(): 561. -- Function: int gsl_linalg_pcholesky_svx (const gsl_matrix *LDLT, const gsl_permutation *p, gsl_vector *x) This function solves the system A x = b in-place using the Pivoted Cholesky decomposition of A held in the matrix *note LDLT: 563. and permutation *note p: 563. which must have been previously computed by *note gsl_linalg_pcholesky_decomp(): 561.
On input, *note x: 563. contains the right hand side vector b which is replaced by the solution vector on output. -- Function: int gsl_linalg_pcholesky_decomp2 (gsl_matrix *A, gsl_permutation *p, gsl_vector *S) This function computes the pivoted Cholesky factorization of the matrix S A S, where the input matrix *note A: 564. is symmetric and positive definite, and the diagonal scaling matrix *note S: 564. is computed to reduce the condition number of *note A: 564. as much as possible. See *note Cholesky Decomposition: 3f5. for more information on the matrix *note S: 564. The Pivoted Cholesky decomposition satisfies P S A S P^T = L D L^T. On input, the values from the diagonal and lower-triangular part of the matrix *note A: 564. are used to construct the factorization. On output the diagonal of the input matrix *note A: 564. stores the diagonal elements of D, and the lower triangular portion of *note A: 564. contains the matrix L. Since L has ones on its diagonal these do not need to be explicitly stored. The upper triangular portion of *note A: 564. is unmodified. The permutation matrix P is stored in *note p: 564. on output. The diagonal scaling transformation is stored in *note S: 564. on output. -- Function: int gsl_linalg_pcholesky_solve2 (const gsl_matrix *LDLT, const gsl_permutation *p, const gsl_vector *S, const gsl_vector *b, gsl_vector *x) This function solves the system (S A S) (S^{-1} x) = S b using the Pivoted Cholesky decomposition of S A S held in the matrix *note LDLT: 565, permutation *note p: 565, and vector *note S: 565, which must have been previously computed by *note gsl_linalg_pcholesky_decomp2(): 564. -- Function: int gsl_linalg_pcholesky_svx2 (const gsl_matrix *LDLT, const gsl_permutation *p, const gsl_vector *S, gsl_vector *x) This function solves the system (S A S) (S^{-1} x) = S b in-place using the Pivoted Cholesky decomposition of S A S held in the matrix *note LDLT: 566, permutation *note p: 566. and vector *note S: 566, which must have been previously computed by *note gsl_linalg_pcholesky_decomp2(): 564. On input, *note x: 566. contains the right hand side vector b which is replaced by the solution vector on output. -- Function: int gsl_linalg_pcholesky_invert (const gsl_matrix *LDLT, const gsl_permutation *p, gsl_matrix *Ainv) This function computes the inverse of the matrix A, using the Pivoted Cholesky decomposition stored in *note LDLT: 567. and *note p: 567. On output, the matrix *note Ainv: 567. contains A^{-1}. -- Function: int gsl_linalg_pcholesky_rcond (const gsl_matrix *LDLT, const gsl_permutation *p, double *rcond, gsl_vector *work) This function estimates the reciprocal condition number (using the 1-norm) of the symmetric positive definite matrix A, using its pivoted Cholesky decomposition provided in *note LDLT: 568. The reciprocal condition number estimate, defined as 1 / (||A||_1 \cdot ||A^{-1}||_1), is stored in *note rcond: 568. Additional workspace of size 3 N is required in *note work: 568.  File: gsl-ref.info, Node: Modified Cholesky Decomposition, Next: LDLT Decomposition, Prev: Pivoted Cholesky Decomposition, Up: Linear Algebra 14.10 Modified Cholesky Decomposition ===================================== The modified Cholesky decomposition is suitable for solving systems A x = b where A is a symmetric indefinite matrix. Such matrices arise in nonlinear optimization algorithms. The standard Cholesky decomposition requires a positive definite matrix and would fail in this case.
Instead of resorting to methods like QR or SVD, which do not take into account the symmetry of the matrix, we can introduce a small perturbation to the matrix A to make it positive definite, and then use a Cholesky decomposition on the perturbed matrix. The resulting decomposition satisfies P (A + E) P^T = L D L^T where P is a permutation matrix, E is a diagonal perturbation matrix, L is unit lower triangular, and D is diagonal. If A is sufficiently positive definite, then the perturbation matrix E will be zero and this method is equivalent to the pivoted Cholesky algorithm. For indefinite matrices, the perturbation matrix E is computed to ensure that A + E is positive definite and well conditioned. -- Function: int gsl_linalg_mcholesky_decomp (gsl_matrix *A, gsl_permutation *p, gsl_vector *E) This function factors the symmetric, indefinite square matrix *note A: 56a. into the Modified Cholesky decomposition P (A + E) P^T = L D L^T. On input, the values from the diagonal and lower-triangular part of the matrix *note A: 56a. are used to construct the factorization. On output the diagonal of the input matrix *note A: 56a. stores the diagonal elements of D, and the lower triangular portion of *note A: 56a. contains the matrix L. Since L has ones on its diagonal these do not need to be explicitly stored. The upper triangular portion of *note A: 56a. is unmodified. The permutation matrix P is stored in *note p: 56a. on output. The diagonal perturbation matrix is stored in *note E: 56a. on output. The parameter *note E: 56a. may be set to NULL if it is not required. -- Function: int gsl_linalg_mcholesky_solve (const gsl_matrix *LDLT, const gsl_permutation *p, const gsl_vector *b, gsl_vector *x) This function solves the perturbed system (A + E) x = b using the Cholesky decomposition of A + E held in the matrix *note LDLT: 56b. and permutation *note p: 56b. which must have been previously computed by *note gsl_linalg_mcholesky_decomp(): 56a. -- Function: int gsl_linalg_mcholesky_svx (const gsl_matrix *LDLT, const gsl_permutation *p, gsl_vector *x) This function solves the perturbed system (A + E) x = b in-place using the Cholesky decomposition of A + E held in the matrix *note LDLT: 56c. and permutation *note p: 56c. which must have been previously computed by *note gsl_linalg_mcholesky_decomp(): 56a. On input, *note x: 56c. contains the right hand side vector b which is replaced by the solution vector on output. -- Function: int gsl_linalg_mcholesky_rcond (const gsl_matrix *LDLT, const gsl_permutation *p, double *rcond, gsl_vector *work) This function estimates the reciprocal condition number (using the 1-norm) of the perturbed matrix A + E, using its pivoted Cholesky decomposition provided in *note LDLT: 56d. The reciprocal condition number estimate, defined as 1 / (||A + E||_1 \cdot ||(A + E)^{-1}||_1), is stored in *note rcond: 56d. Additional workspace of size 3 N is required in *note work: 56d.  File: gsl-ref.info, Node: LDLT Decomposition, Next: Tridiagonal Decomposition of Real Symmetric Matrices, Prev: Modified Cholesky Decomposition, Up: Linear Algebra 14.11 LDLT Decomposition ======================== If A is a symmetric, nonsingular square matrix, then it has a unique factorization of the form A = L D L^T where L is a unit lower triangular matrix and D is diagonal. If A is positive definite, then this factorization is equivalent to the Cholesky factorization, where the lower triangular Cholesky factor is L D^{\frac{1}{2}}.
Some indefinite matrices for which no Cholesky decomposition exists have an L D L^T decomposition with negative entries in D. The L D L^T algorithm is sometimes referred to as the `square root free' Cholesky decomposition, as the algorithm does not require the computation of square roots. The algorithm is stable for positive definite matrices, but is not guaranteed to be stable for indefinite matrices. -- Function: int gsl_linalg_ldlt_decomp (gsl_matrix *A) This function factorizes the symmetric, non-singular square matrix *note A: 570. into the decomposition A = L D L^T. On input, the values from the diagonal and lower-triangular part of the matrix *note A: 570. are used. The upper triangle of *note A: 570. is used as temporary workspace. On output the diagonal of *note A: 570. contains the matrix D and the lower triangle of *note A: 570. contains the unit lower triangular matrix L. The matrix 1-norm, ||A||_1 is stored in the upper right corner on output, for later use by *note gsl_linalg_ldlt_rcond(): 571. If the matrix is detected to be singular, the function returns the error code *note GSL_EDOM: 28. -- Function: int gsl_linalg_ldlt_solve (const gsl_matrix *LDLT, const gsl_vector *b, gsl_vector *x) This function solves the system A x = b using the L D L^T decomposition of A held in the matrix *note LDLT: 572. which must have been previously computed by *note gsl_linalg_ldlt_decomp(): 570. -- Function: int gsl_linalg_ldlt_svx (const gsl_matrix *LDLT, gsl_vector *x) This function solves the system A x = b in-place using the L D L^T decomposition of A held in the matrix *note LDLT: 573. which must have been previously computed by *note gsl_linalg_ldlt_decomp(): 570. On input *note x: 573. should contain the right-hand side b, which is replaced by the solution on output. -- Function: int gsl_linalg_ldlt_rcond (const gsl_matrix *LDLT, double *rcond, gsl_vector *work) This function estimates the reciprocal condition number (using the 1-norm) of the symmetric nonsingular matrix A, using its L D L^T decomposition provided in *note LDLT: 571. The reciprocal condition number estimate, defined as 1 / (||A||_1 \cdot ||A^{-1}||_1), is stored in *note rcond: 571. Additional workspace of size 3 N is required in *note work: 571.  File: gsl-ref.info, Node: Tridiagonal Decomposition of Real Symmetric Matrices, Next: Tridiagonal Decomposition of Hermitian Matrices, Prev: LDLT Decomposition, Up: Linear Algebra 14.12 Tridiagonal Decomposition of Real Symmetric Matrices ========================================================== A symmetric matrix A can be factorized by similarity transformations into the form, A = Q T Q^T where Q is an orthogonal matrix and T is a symmetric tridiagonal matrix. -- Function: int gsl_linalg_symmtd_decomp (gsl_matrix *A, gsl_vector *tau) This function factorizes the symmetric square matrix *note A: 575. into the symmetric tridiagonal decomposition Q T Q^T. On output the diagonal and subdiagonal part of the input matrix *note A: 575. contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients *note tau: 575, encode the orthogonal matrix Q. This storage scheme is the same as used by LAPACK. The upper triangular part of *note A: 575. is not referenced. 
-- Function: int gsl_linalg_symmtd_unpack (const gsl_matrix *A, const gsl_vector *tau, gsl_matrix *Q, gsl_vector *diag, gsl_vector *subdiag) This function unpacks the encoded symmetric tridiagonal decomposition (*note A: 576, *note tau: 576.) obtained from *note gsl_linalg_symmtd_decomp(): 575. into the orthogonal matrix *note Q: 576, the vector of diagonal elements *note diag: 576. and the vector of subdiagonal elements *note subdiag: 576. -- Function: int gsl_linalg_symmtd_unpack_T (const gsl_matrix *A, gsl_vector *diag, gsl_vector *subdiag) This function unpacks the diagonal and subdiagonal of the encoded symmetric tridiagonal decomposition (*note A: 577, ‘tau’) obtained from *note gsl_linalg_symmtd_decomp(): 575. into the vectors *note diag: 577. and *note subdiag: 577.  File: gsl-ref.info, Node: Tridiagonal Decomposition of Hermitian Matrices, Next: Hessenberg Decomposition of Real Matrices, Prev: Tridiagonal Decomposition of Real Symmetric Matrices, Up: Linear Algebra 14.13 Tridiagonal Decomposition of Hermitian Matrices ===================================================== A hermitian matrix A can be factorized by similarity transformations into the form, A = U T U^{\dagger} where U is a unitary matrix and T is a real symmetric tridiagonal matrix. -- Function: int gsl_linalg_hermtd_decomp (gsl_matrix_complex *A, gsl_vector_complex *tau) This function factorizes the hermitian matrix *note A: 579. into the symmetric tridiagonal decomposition U T U^{\dagger}. On output the real parts of the diagonal and subdiagonal part of the input matrix *note A: 579. contain the tridiagonal matrix T. The remaining lower triangular part of the input matrix contains the Householder vectors which, together with the Householder coefficients *note tau: 579, encode the unitary matrix U. This storage scheme is the same as used by LAPACK. The upper triangular part of *note A: 579. and imaginary parts of the diagonal are not referenced. -- Function: int gsl_linalg_hermtd_unpack (const gsl_matrix_complex *A, const gsl_vector_complex *tau, gsl_matrix_complex *U, gsl_vector *diag, gsl_vector *subdiag) This function unpacks the encoded tridiagonal decomposition (*note A: 57a, *note tau: 57a.) obtained from *note gsl_linalg_hermtd_decomp(): 579. into the unitary matrix *note U: 57a, the real vector of diagonal elements *note diag: 57a. and the real vector of subdiagonal elements *note subdiag: 57a. -- Function: int gsl_linalg_hermtd_unpack_T (const gsl_matrix_complex *A, gsl_vector *diag, gsl_vector *subdiag) This function unpacks the diagonal and subdiagonal of the encoded tridiagonal decomposition (*note A: 57b, ‘tau’) obtained from *note gsl_linalg_hermtd_decomp(): 579. into the real vectors *note diag: 57b. and *note subdiag: 57b.  File: gsl-ref.info, Node: Hessenberg Decomposition of Real Matrices, Next: Hessenberg-Triangular Decomposition of Real Matrices, Prev: Tridiagonal Decomposition of Hermitian Matrices, Up: Linear Algebra 14.14 Hessenberg Decomposition of Real Matrices =============================================== A general real matrix A can be decomposed by orthogonal similarity transformations into the form A = U H U^T where U is orthogonal and H is an upper Hessenberg matrix, meaning that it has zeros below the first subdiagonal. The Hessenberg reduction is the first step in the Schur decomposition for the nonsymmetric eigenvalue problem, but has applications in other areas as well.
-- Function: int gsl_linalg_hessenberg_decomp (gsl_matrix *A, gsl_vector *tau) This function computes the Hessenberg decomposition of the matrix *note A: 57d. by applying the similarity transformation H = U^T A U. On output, H is stored in the upper portion of *note A: 57d. The information required to construct the matrix U is stored in the lower triangular portion of *note A: 57d. U is a product of N - 2 Householder matrices. The Householder vectors are stored in the lower portion of *note A: 57d. (below the subdiagonal) and the Householder coefficients are stored in the vector *note tau: 57d. *note tau: 57d. must be of length ‘N’. -- Function: int gsl_linalg_hessenberg_unpack (gsl_matrix *H, gsl_vector *tau, gsl_matrix *U) This function constructs the orthogonal matrix U from the information stored in the Hessenberg matrix *note H: 57e. along with the vector *note tau: 57e. *note H: 57e. and *note tau: 57e. are outputs from *note gsl_linalg_hessenberg_decomp(): 57d. -- Function: int gsl_linalg_hessenberg_unpack_accum (gsl_matrix *H, gsl_vector *tau, gsl_matrix *V) This function is similar to *note gsl_linalg_hessenberg_unpack(): 57e, except it accumulates the matrix ‘U’ into *note V: 57f, so that V' = VU. The matrix *note V: 57f. must be initialized prior to calling this function. Setting *note V: 57f. to the identity matrix provides the same result as *note gsl_linalg_hessenberg_unpack(): 57e. If *note H: 57f. is order ‘N’, then *note V: 57f. must have ‘N’ columns but may have any number of rows. -- Function: int gsl_linalg_hessenberg_set_zero (gsl_matrix *H) This function sets the lower triangular portion of *note H: 580, below the subdiagonal, to zero. It is useful for clearing out the Householder vectors after calling *note gsl_linalg_hessenberg_decomp(): 57d.  File: gsl-ref.info, Node: Hessenberg-Triangular Decomposition of Real Matrices, Next: Bidiagonalization, Prev: Hessenberg Decomposition of Real Matrices, Up: Linear Algebra 14.15 Hessenberg-Triangular Decomposition of Real Matrices ========================================================== A general real matrix pair (A, B) can be decomposed by orthogonal similarity transformations into the form A = U H V^T B = U R V^T where U and V are orthogonal, H is an upper Hessenberg matrix, and R is upper triangular. The Hessenberg-Triangular reduction is the first step in the generalized Schur decomposition for the generalized eigenvalue problem. -- Function: int gsl_linalg_hesstri_decomp (gsl_matrix *A, gsl_matrix *B, gsl_matrix *U, gsl_matrix *V, gsl_vector *work) This function computes the Hessenberg-Triangular decomposition of the matrix pair (*note A: 582, *note B: 582.). On output, H is stored in *note A: 582, and R is stored in *note B: 582. If *note U: 582. and *note V: 582. are provided (they may be null), the similarity transformations are stored in them. Additional workspace of length N is needed in *note work: 582.  File: gsl-ref.info, Node: Bidiagonalization, Next: Givens Rotations, Prev: Hessenberg-Triangular Decomposition of Real Matrices, Up: Linear Algebra 14.16 Bidiagonalization ======================= A general matrix A can be factorized by similarity transformations into the form, A = U B V^T where U and V are orthogonal matrices and B is a N-by-N bidiagonal matrix with non-zero entries only on the diagonal and superdiagonal. The size of ‘U’ is M-by-N and the size of ‘V’ is N-by-N. 
-- Function: int gsl_linalg_bidiag_decomp (gsl_matrix *A, gsl_vector *tau_U, gsl_vector *tau_V)

     This function factorizes the M-by-N matrix *note A: 584. into bidiagonal form U B V^T. The diagonal and superdiagonal of the matrix B are stored in the diagonal and superdiagonal of *note A: 584. The orthogonal matrices U and ‘V’ are stored as compressed Householder vectors in the remaining elements of *note A: 584. The Householder coefficients are stored in the vectors *note tau_U: 584. and *note tau_V: 584. The length of *note tau_U: 584. must equal the number of elements in the diagonal of *note A: 584. and the length of *note tau_V: 584. should be one element shorter.

-- Function: int gsl_linalg_bidiag_unpack (const gsl_matrix *A, const gsl_vector *tau_U, gsl_matrix *U, const gsl_vector *tau_V, gsl_matrix *V, gsl_vector *diag, gsl_vector *superdiag)

     This function unpacks the bidiagonal decomposition of *note A: 585. produced by *note gsl_linalg_bidiag_decomp(): 584, (*note A: 585, *note tau_U: 585, *note tau_V: 585.) into the separate orthogonal matrices *note U: 585, *note V: 585. and the diagonal vector *note diag: 585. and superdiagonal *note superdiag: 585. Note that *note U: 585. is stored as a compact M-by-N orthogonal matrix satisfying U^T U = I for efficiency.

-- Function: int gsl_linalg_bidiag_unpack2 (gsl_matrix *A, gsl_vector *tau_U, gsl_vector *tau_V, gsl_matrix *V)

     This function unpacks the bidiagonal decomposition of *note A: 586. produced by *note gsl_linalg_bidiag_decomp(): 584, (*note A: 586, *note tau_U: 586, *note tau_V: 586.) into the separate orthogonal matrices ‘U’, *note V: 586. and the diagonal vector ‘diag’ and superdiagonal ‘superdiag’. The matrix ‘U’ is stored in-place in *note A: 586.

-- Function: int gsl_linalg_bidiag_unpack_B (const gsl_matrix *A, gsl_vector *diag, gsl_vector *superdiag)

     This function unpacks the diagonal and superdiagonal of the bidiagonal decomposition of *note A: 587. from *note gsl_linalg_bidiag_decomp(): 584, into the diagonal vector *note diag: 587. and superdiagonal vector *note superdiag: 587.

File: gsl-ref.info, Node: Givens Rotations, Next: Householder Transformations, Prev: Bidiagonalization, Up: Linear Algebra

14.17 Givens Rotations
======================

A Givens rotation is a rotation in the plane acting on two elements of a given vector. It can be represented in matrix form as

     G(i,j,\theta) = \begin{pmatrix} 1 & \dots & 0 & \dots & 0 & \dots & 0 \\ \vdots & \ddots & \vdots & & \vdots & & \vdots \\ 0 & \dots & \cos{\theta} & \dots & -\sin{\theta} & \dots & 0 \\ \vdots & & \vdots & \ddots & \vdots & & \vdots \\ 0 & \dots & \sin{\theta} & \dots & \cos{\theta} & \dots & 0 \\ \vdots & & \vdots & & \vdots & \ddots & \vdots \\ 0 & \dots & 0 & \dots & 0 & \dots & 1 \end{pmatrix}

where the \cos{\theta} and \sin{\theta} entries appear at the intersection of the i-th and j-th rows and columns. When acting on a vector x, G(i,j,\theta) x performs a rotation of the (i,j) elements of x. Givens rotations are typically used to introduce zeros in vectors, such as during the QR decomposition of a matrix. In this case, it is typically desired to find c and s such that

     [ c  -s ] [ a ]   [ r ]
     [ s   c ] [ b ] = [ 0 ]

with r = \sqrt{a^2 + b^2}.

-- Function: void gsl_linalg_givens (const double a, const double b, double *c, double *s)

     This function computes c = \cos{\theta} and s = \sin{\theta} so that the Givens matrix G(\theta) acting on the vector (a,b) produces (r, 0), with r = \sqrt{a^2 + b^2}.

-- Function: void gsl_linalg_givens_gv (gsl_vector *v, const size_t i, const size_t j, const double c, const double s)

     This function applies the Givens rotation defined by c = \cos{\theta} and s = \sin{\theta} to the *note i: 58a. and *note j: 58a. elements of *note v: 58a. On output, (v(i),v(j)) \leftarrow G(\theta) (v(i),v(j)).
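For example, the short sketch below uses these two routines to zero the second element of a two-element vector. The headers, the input values (a,b) = (3,4) and the printed output are illustrative assumptions rather than part of the library interface.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double data[] = { 3.0, 4.0 };   /* (a, b), so r = sqrt(a^2 + b^2) = 5 */
       gsl_vector_view v = gsl_vector_view_array (data, 2);
       double c, s;

       /* compute c = cos(theta), s = sin(theta) for the rotation
          which maps (a, b) to (r, 0) */
       gsl_linalg_givens (data[0], data[1], &c, &s);

       /* apply the rotation to elements 0 and 1 of v; afterwards the
          second element should be zero to rounding error */
       gsl_linalg_givens_gv (&v.vector, 0, 1, c, s);

       printf ("c = %g, s = %g\n", c, s);
       printf ("v = (%g, %g)\n", data[0], data[1]);

       return 0;
     }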
File: gsl-ref.info, Node: Householder Transformations, Next: Householder solver for linear systems, Prev: Givens Rotations, Up: Linear Algebra 14.18 Householder Transformations ================================= A Householder transformation is a rank-1 modification of the identity matrix which can be used to zero out selected elements of a vector. A Householder matrix H takes the form, H = I - \tau v v^T where v is a vector (called the `Householder vector') and \tau = 2/(v^T v). The functions described in this section use the rank-1 structure of the Householder matrix to create and apply Householder transformations efficiently. -- Function: double gsl_linalg_householder_transform (gsl_vector *w) -- Function: gsl_complex gsl_linalg_complex_householder_transform (gsl_vector_complex *w) This function prepares a Householder transformation H = I - \tau v v^T which can be used to zero all the elements of the input vector *note w: 58d. except the first. On output the Householder vector ‘v’ is stored in *note w: 58d. and the scalar \tau is returned. The householder vector ‘v’ is normalized so that ‘v[0] = 1’, however this 1 is not stored in the output vector. Instead, ‘w[0]’ is set to the first element of the transformed vector, so that if u = H w, ‘w[0] = u[0]’ on output and the remainder of u is zero. -- Function: int gsl_linalg_householder_hm (double tau, const gsl_vector *v, gsl_matrix *A) -- Function: int gsl_linalg_complex_householder_hm (gsl_complex tau, const gsl_vector_complex *v, gsl_matrix_complex *A) This function applies the Householder matrix H defined by the scalar *note tau: 58f. and the vector *note v: 58f. to the left-hand side of the matrix *note A: 58f. On output the result H A is stored in *note A: 58f. -- Function: int gsl_linalg_householder_mh (double tau, const gsl_vector *v, gsl_matrix *A) -- Function: int gsl_linalg_complex_householder_mh (gsl_complex tau, const gsl_vector_complex *v, gsl_matrix_complex *A) This function applies the Householder matrix H defined by the scalar *note tau: 591. and the vector *note v: 591. to the right-hand side of the matrix *note A: 591. On output the result A H is stored in *note A: 591. -- Function: int gsl_linalg_householder_hv (double tau, const gsl_vector *v, gsl_vector *w) -- Function: int gsl_linalg_complex_householder_hv (gsl_complex tau, const gsl_vector_complex *v, gsl_vector_complex *w) This function applies the Householder transformation H defined by the scalar *note tau: 593. and the vector *note v: 593. to the vector *note w: 593. On output the result H w is stored in *note w: 593.  File: gsl-ref.info, Node: Householder solver for linear systems, Next: Tridiagonal Systems, Prev: Householder Transformations, Up: Linear Algebra 14.19 Householder solver for linear systems =========================================== -- Function: int gsl_linalg_HH_solve (gsl_matrix *A, const gsl_vector *b, gsl_vector *x) This function solves the system A x = b directly using Householder transformations. On output the solution is stored in *note x: 595. and *note b: 595. is not modified. The matrix *note A: 595. is destroyed by the Householder transformations. -- Function: int gsl_linalg_HH_svx (gsl_matrix *A, gsl_vector *x) This function solves the system A x = b in-place using Householder transformations. On input *note x: 596. should contain the right-hand side b, which is replaced by the solution on output. The matrix *note A: 596. is destroyed by the Householder transformations.  
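As a brief illustration, the following sketch solves a small 2-by-2 system with gsl_linalg_HH_solve(). The matrix entries, right-hand side and headers are arbitrary example values chosen here, not a prescribed usage.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* the matrix is destroyed by the Householder transformations,
          so work on a copy if the original is still needed */
       double a_data[] = { 2.0, 1.0,
                           1.0, 3.0 };
       double b_data[] = { 1.0, 2.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_vector_view b = gsl_vector_view_array (b_data, 2);
       gsl_vector *x = gsl_vector_alloc (2);

       gsl_linalg_HH_solve (&A.matrix, &b.vector, x);

       printf ("x = \n");
       gsl_vector_fprintf (stdout, x, "%g");   /* expected solution: 0.2 and 0.6 */

       gsl_vector_free (x);
       return 0;
     }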
File: gsl-ref.info, Node: Tridiagonal Systems, Next: Triangular Systems, Prev: Householder solver for linear systems, Up: Linear Algebra 14.20 Tridiagonal Systems ========================= The functions described in this section efficiently solve symmetric, non-symmetric and cyclic tridiagonal systems with minimal storage. Note that the current implementations of these functions use a variant of Cholesky decomposition, so the tridiagonal matrix must be positive definite. For non-positive definite matrices, the functions return the error code ‘GSL_ESING’. -- Function: int gsl_linalg_solve_tridiag (const gsl_vector *diag, const gsl_vector *e, const gsl_vector *f, const gsl_vector *b, gsl_vector *x) This function solves the general N-by-N system A x = b where ‘A’ is tridiagonal (N \geq 2). The super-diagonal and sub-diagonal vectors *note e: 598. and *note f: 598. must be one element shorter than the diagonal vector *note diag: 598. The form of ‘A’ for the 4-by-4 case is shown below, A = ( d_0 e_0 0 0 ) ( f_0 d_1 e_1 0 ) ( 0 f_1 d_2 e_2 ) ( 0 0 f_2 d_3 ) -- Function: int gsl_linalg_solve_symm_tridiag (const gsl_vector *diag, const gsl_vector *e, const gsl_vector *b, gsl_vector *x) This function solves the general N-by-N system A x = b where ‘A’ is symmetric tridiagonal (N \geq 2). The off-diagonal vector *note e: 599. must be one element shorter than the diagonal vector *note diag: 599. The form of ‘A’ for the 4-by-4 case is shown below, A = ( d_0 e_0 0 0 ) ( e_0 d_1 e_1 0 ) ( 0 e_1 d_2 e_2 ) ( 0 0 e_2 d_3 ) -- Function: int gsl_linalg_solve_cyc_tridiag (const gsl_vector *diag, const gsl_vector *e, const gsl_vector *f, const gsl_vector *b, gsl_vector *x) This function solves the general N-by-N system A x = b where ‘A’ is cyclic tridiagonal (N \geq 3). The cyclic super-diagonal and sub-diagonal vectors *note e: 59a. and *note f: 59a. must have the same number of elements as the diagonal vector *note diag: 59a. The form of ‘A’ for the 4-by-4 case is shown below, A = ( d_0 e_0 0 f_3 ) ( f_0 d_1 e_1 0 ) ( 0 f_1 d_2 e_2 ) ( e_3 0 f_2 d_3 ) -- Function: int gsl_linalg_solve_symm_cyc_tridiag (const gsl_vector *diag, const gsl_vector *e, const gsl_vector *b, gsl_vector *x) This function solves the general N-by-N system A x = b where ‘A’ is symmetric cyclic tridiagonal (N \geq 3). The cyclic off-diagonal vector *note e: 59b. must have the same number of elements as the diagonal vector *note diag: 59b. The form of ‘A’ for the 4-by-4 case is shown below, A = ( d_0 e_0 0 e_3 ) ( e_0 d_1 e_1 0 ) ( 0 e_1 d_2 e_2 ) ( e_3 0 e_2 d_3 )  File: gsl-ref.info, Node: Triangular Systems, Next: Banded Systems, Prev: Tridiagonal Systems, Up: Linear Algebra 14.21 Triangular Systems ======================== -- Function: int gsl_linalg_tri_invert (CBLAS_UPLO_t Uplo, CBLAS_DIAG_t Diag, gsl_matrix *T) -- Function: int gsl_linalg_complex_tri_invert (CBLAS_UPLO_t Uplo, CBLAS_DIAG_t Diag, gsl_matrix_complex *T) These functions compute the in-place inverse of the triangular matrix *note T: 59e, stored in the lower triangle when *note Uplo: 59e. = ‘CblasLower’ and upper triangle when *note Uplo: 59e. = ‘CblasUpper’. The parameter *note Diag: 59e. = ‘CblasUnit’, ‘CblasNonUnit’ specifies whether the matrix is unit triangular. -- Function: int gsl_linalg_tri_LTL (gsl_matrix *L) -- Function: int gsl_linalg_complex_tri_LHL (gsl_matrix_complex *L) These functions compute the product L^T L (or L^{\dagger} L) in-place and stores it in the lower triangle of *note L: 5a0. on output. 
-- Function: int gsl_linalg_tri_UL (gsl_matrix *LU) -- Function: int gsl_linalg_complex_tri_UL (gsl_matrix_complex *LU) These functions compute the product U L where U is upper triangular and L is unit lower triangular, stored in *note LU: 5a2, as computed by *note gsl_linalg_LU_decomp(): 4ec. or *note gsl_linalg_complex_LU_decomp(): 4ed. The product is computed in-place using Level 3 BLAS. -- Function: int gsl_linalg_tri_rcond (CBLAS_UPLO_t Uplo, const gsl_matrix *A, double *rcond, gsl_vector *work) This function estimates the 1-norm reciprocal condition number of the triangular matrix *note A: 5a3, using the lower triangle when *note Uplo: 5a3. is ‘CblasLower’ and upper triangle when *note Uplo: 5a3. is ‘CblasUpper’. The reciprocal condition number 1 / \left( \left|\left| A \right|\right|_1 \left|\left| A^{-1} \right|\right|_1 \right) is stored in *note rcond: 5a3. on output. Additional workspace of size 3N is required in *note work: 5a3.  File: gsl-ref.info, Node: Banded Systems, Next: Balancing, Prev: Triangular Systems, Up: Linear Algebra 14.22 Banded Systems ==================== Band matrices are sparse matrices whose non-zero entries are confined to a diagonal `band'. From a storage point of view, significant savings can be achieved by storing only the non-zero diagonals of a banded matrix. Algorithms such as LU and Cholesky factorizations preserve the band structure of these matrices. Computationally, working with compact banded matrices is preferable to working on the full dense matrix with many zero entries. * Menu: * General Banded Format:: * Symmetric Banded Format:: * Banded LU Decomposition:: * Banded Cholesky Decomposition:: * Banded LDLT Decomposition::  File: gsl-ref.info, Node: General Banded Format, Next: Symmetric Banded Format, Up: Banded Systems 14.22.1 General Banded Format ----------------------------- An example of a general banded matrix is given below. A = \begin{pmatrix} \alpha_1 & \beta_1 & \gamma_1 & 0 & 0 & 0 \\ \delta_1 & \alpha_2 & \beta_2 & \gamma_2 & 0 & 0 \\ 0 & \delta_2 & \alpha_3 & \beta_3 & \gamma_3 & 0 \\ 0 & 0 & \delta_3 & \alpha_4 & \beta_4 & \gamma_4 \\ 0 & 0 & 0 & \delta_4 & \alpha_5 & \beta_5 \\ 0 & 0 & 0 & 0 & \delta_5 & \alpha_6 \end{pmatrix} This matrix has a `lower bandwidth' of 1 and an `upper bandwidth' of 2. The lower bandwidth is the number of non-zero subdiagonals, and the upper bandwidth is the number of non-zero superdiagonals. A (p,q) banded matrix has a lower bandwidth p and upper bandwidth q. For example, diagonal matrices are (0,0), tridiagonal matrices are (1,1), and upper triangular matrices are (0,N-1) banded matrices. The corresponding 6-by-4 packed banded matrix looks like AB = \begin{pmatrix} * & * & \alpha_1 & \delta_1 \\ * & \beta_1 & \alpha_2 & \delta_2 \\ \gamma_1 & \beta_2 & \alpha_3 & \delta_3 \\ \gamma_2 & \beta_3 & \alpha_4 & \delta_4 \\ \gamma_3 & \beta_4 & \alpha_5 & \delta_5 \\ \gamma_4 & \beta_5 & \alpha_6 & * \end{pmatrix} where the superdiagonals are stored in columns, followed by the diagonal, followed by the subdiagonals. The entries marked by * are not referenced by the banded routines. With this format, each row of AB corresponds to the non-zero entries of the corresponding column of A. For an N-by-N matrix A, the dimension of AB will be N-by-(p+q+1).  File: gsl-ref.info, Node: Symmetric Banded Format, Next: Banded LU Decomposition, Prev: General Banded Format, Up: Banded Systems 14.22.2 Symmetric Banded Format ------------------------------- Symmetric banded matrices allow for additional storage savings. 
As an example, consider the following 6 \times 6 symmetric banded matrix with lower bandwidth p = 2: A = \begin{pmatrix} \alpha_1 & \beta_1 & \gamma_1 & 0 & 0 & 0 \\ \beta_1 & \alpha_2 & \beta_2 & \gamma_2 & 0 & 0 \\ \gamma_1 & \beta_2 & \alpha_3 & \beta_3 & \gamma_3 & 0 \\ 0 & \gamma_2 & \beta_3 & \alpha_4 & \beta_4 & \gamma_4 \\ 0 & 0 & \gamma_3 & \beta_4 & \alpha_5 & \beta_5 \\ 0 & 0 & 0 & \gamma_4 & \beta_5 & \alpha_6 \end{pmatrix} The packed symmetric banded 6 \times 3 matrix will look like: AB = \begin{pmatrix} \alpha_1 & \beta_1 & \gamma_1 \\ \alpha_2 & \beta_2 & \gamma_2 \\ \alpha_3 & \beta_3 & \gamma_3 \\ \alpha_4 & \beta_4 & \gamma_4 \\ \alpha_5 & \beta_5 & * \\ \alpha_6 & * & * \end{pmatrix} The entries marked by * are not referenced by the symmetric banded routines. The relationship between the packed format and original matrix is, AB(i,j) = A(i, i + j) = A(i + j, i) for i = 0, \dots, N - 1, j = 0, \dots, p. Conversely, A(i,j) = AB(j, i - j) for i = 0, \dots, N - 1, j = \textrm{max}(0,i-p), \dots, i. Warning: Note that this format is the transpose of the symmetric banded format used by LAPACK. In order to develop efficient routines for symmetric banded matrices, it helps to have the nonzero elements in each column in contiguous memory locations. Since C uses row-major order, GSL stores the columns in the rows of the packed banded format, while LAPACK, written in Fortran, uses the transposed format.  File: gsl-ref.info, Node: Banded LU Decomposition, Next: Banded Cholesky Decomposition, Prev: Symmetric Banded Format, Up: Banded Systems 14.22.3 Banded LU Decomposition ------------------------------- The routines in this section are designed to factor banded M-by-N matrices with an LU factorization, P A = L U. The matrix A is banded of type (p,q), i.e. a lower bandwidth of p and an upper bandwidth of q. See *note LU Decomposition: 4eb. for more information on the factorization. For banded (p,q) matrices, the U factor will have an upper bandwidth of p + q, while the L factor will have a lower bandwidth of at most p. Therefore, additional storage is needed to store the p additional bands of U. As an example, consider the M = N = 7 matrix with lower bandwidth p = 3 and upper bandwidth q = 2, A = \begin{pmatrix} \alpha_1 & \beta_1 & \gamma_1 & 0 & 0 & 0 & 0 \\ \delta_1 & \alpha_2 & \beta_2 & \gamma_2 & 0 & 0 & 0 \\ \epsilon_1 & \delta_2 & \alpha_3 & \beta_3 & \gamma_3 & 0 & 0 \\ \zeta_1 & \epsilon_2 & \delta_3 & \alpha_4 & \beta_4 & \gamma_4 & 0 \\ 0 & \zeta_2 & \epsilon_3 & \delta_4 & \alpha_5 & \beta_5 & \gamma_5 \\ 0 & 0 & \zeta_3 & \epsilon_4 & \delta_5 & \alpha_6 & \beta_6 \\ 0 & 0 & 0 & \zeta_4 & \epsilon_5 & \delta_6 & \alpha_7 \end{pmatrix} The corresponding N-by-2p + q + 1 packed banded matrix looks like AB = \begin{pmatrix} * & * & * & * & * & \alpha_1 & \delta_1 & \epsilon_1 & \zeta_1 \\ * & * & * & * & \beta_1 & \alpha_2 & \delta_2 & \epsilon_2 & \zeta_2 \\ * & * & * & \gamma_1 & \beta_2 & \alpha_3 & \delta_3 & \epsilon_3 & \zeta_3 \\ * & * & - & \gamma_2 & \beta_3 & \alpha_4 & \delta_4 & \epsilon_4 & \zeta_4 \\ * & - & - & \gamma_3 & \beta_4 & \alpha_5 & \delta_5 & \epsilon_5 & * \\ - & - & - & \gamma_4 & \beta_5 & \alpha_6 & \delta_6 & * & * \\ \undermat{p}{- & - & -} & \undermat{q}{\gamma_5 & \beta_6} & \alpha_7 & \undermat{p}{* & * & * & } \end{pmatrix} Entries marked with - are used to store the additional p diagonals of the U factor. Entries marked with * are not referenced by the banded routines. 
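The sketch below illustrates this storage scheme for a small (p,q) = (1,1) matrix: it packs column j of A into row j of the N-by-(2p+q+1) array AB, with the diagonal placed in column p+q as in the layout above, and then factors and solves the system with the banded LU routines documented next. The index mapping AB(j, p+q+i-j) = A(i,j) and the matrix entries are assumptions inferred from the layout shown above, intended only as a guide.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       const size_t N = 4;          /* system size */
       const size_t p = 1, q = 1;   /* lower and upper bandwidths */

       /* dense form of a (1,1) banded matrix, used only to make the
          packing explicit */
       double a_dense[] = { 4.0, 1.0, 0.0, 0.0,
                            1.0, 4.0, 1.0, 0.0,
                            0.0, 1.0, 4.0, 1.0,
                            0.0, 0.0, 1.0, 4.0 };
       double b_data[] = { 1.0, 2.0, 3.0, 4.0 };

       gsl_matrix_view A = gsl_matrix_view_array (a_dense, N, N);
       gsl_vector_view b = gsl_vector_view_array (b_data, N);

       /* packed band storage: N rows and 2p+q+1 columns; unused
          entries are simply left equal to zero */
       gsl_matrix *AB = gsl_matrix_calloc (N, 2*p + q + 1);
       gsl_vector_uint *piv = gsl_vector_uint_alloc (N);
       gsl_vector *x = gsl_vector_alloc (N);
       size_t i, j;

       /* pack column j of A into row j of AB; assumed mapping
          AB(j, p + q + i - j) = A(i, j) for the band entries */
       for (j = 0; j < N; ++j)
         {
           size_t imin = (j > q) ? j - q : 0;
           size_t imax = (j + p < N) ? j + p : N - 1;

           for (i = imin; i <= imax; ++i)
             gsl_matrix_set (AB, j, p + q + i - j,
                             gsl_matrix_get (&A.matrix, i, j));
         }

       gsl_linalg_LU_band_decomp (N, p, q, AB, piv);
       gsl_linalg_LU_band_solve (p, q, AB, piv, &b.vector, x);

       gsl_vector_fprintf (stdout, x, "%g");

       gsl_matrix_free (AB);
       gsl_vector_uint_free (piv);
       gsl_vector_free (x);
       return 0;
     }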
-- Function: int gsl_linalg_LU_band_decomp (const size_t M, const size_t lb, const size_t ub, gsl_matrix *AB, gsl_vector_uint *piv) This function computes the LU factorization of the banded matrix *note AB: 5a9. which is stored in packed band format (see above) and has dimension N-by-2p + q + 1. The number of rows M of the original matrix is provided in *note M: 5a9. The lower bandwidth p is provided in *note lb: 5a9. and the upper bandwidth q is provided in *note ub: 5a9. The vector *note piv: 5a9. has length \textrm{min}(M,N) and stores the pivot indices on output (for 0 \le i < \textrm{min}(M,N), row i of the matrix was interchanged with row ‘piv[i]’). On output, *note AB: 5a9. contains both the L and U factors in packed format. -- Function: int gsl_linalg_LU_band_solve (const size_t lb, const size_t ub, const gsl_matrix *LUB, const gsl_vector_uint *piv, const gsl_vector *b, gsl_vector *x) This function solves the square system Ax = b using the banded LU factorization (*note LUB: 5aa, *note piv: 5aa.) computed by *note gsl_linalg_LU_band_decomp(): 5a9. The lower and upper bandwidths are provided in *note lb: 5aa. and *note ub: 5aa. respectively. The right hand side vector is provided in *note b: 5aa. The solution vector is stored in *note x: 5aa. on output. -- Function: int gsl_linalg_LU_band_svx (const size_t lb, const size_t ub, const gsl_matrix *LUB, const gsl_vector_uint *piv, gsl_vector *x) This function solves the square system Ax = b in-place, using the banded LU factorization (*note LUB: 5ab, *note piv: 5ab.) computed by *note gsl_linalg_LU_band_decomp(): 5a9. The lower and upper bandwidths are provided in *note lb: 5ab. and *note ub: 5ab. respectively. On input, the right hand side vector b is provided in *note x: 5ab, which is replaced by the solution vector x on output. -- Function: int gsl_linalg_LU_band_unpack (const size_t M, const size_t lb, const size_t ub, const gsl_matrix *LUB, const gsl_vector_uint *piv, gsl_matrix *L, gsl_matrix *U) This function unpacks the banded LU factorization (*note LUB: 5ac, *note piv: 5ac.) previously computed by *note gsl_linalg_LU_band_decomp(): 5a9. into the matrices *note L: 5ac. and *note U: 5ac. The matrix *note U: 5ac. has dimension \textrm{min}(M,N)-by-N and stores the upper triangular factor on output. The matrix *note L: 5ac. has dimension M-by-\textrm{min}(M,N) and stores the matrix P^T L on output.  File: gsl-ref.info, Node: Banded Cholesky Decomposition, Next: Banded LDLT Decomposition, Prev: Banded LU Decomposition, Up: Banded Systems 14.22.4 Banded Cholesky Decomposition ------------------------------------- The routines in this section are designed to factor and solve N-by-N linear systems of the form A x = b where A is a banded, symmetric, and positive definite matrix with lower bandwidth p. See *note Cholesky Decomposition: 3f5. for more information on the factorization. The lower triangular factor of the Cholesky decomposition preserves the same banded structure as the matrix A, enabling an efficient algorithm which overwrites the original matrix with the L factor. -- Function: int gsl_linalg_cholesky_band_decomp (gsl_matrix *A) This function factorizes the symmetric, positive-definite square matrix *note A: 5ae. into the Cholesky decomposition A = L L^T. The input matrix *note A: 5ae. is given in *note symmetric banded format: 5a6, and has dimensions N-by-(p + 1), where p is the lower bandwidth of the matrix. On output, the entries of *note A: 5ae. are replaced by the entries of the matrix L in the same format. 
In addition, the lower right element of *note A: 5ae. is used to store the matrix 1-norm, used later by *note gsl_linalg_cholesky_band_rcond(): 5af. to calculate the reciprocal condition number. If the matrix is not positive-definite then the decomposition will fail, returning the error code *note GSL_EDOM: 28. When testing whether a matrix is positive-definite, disable the error handler first to avoid triggering an error. -- Function: int gsl_linalg_cholesky_band_solve (const gsl_matrix *LLT, const gsl_vector *b, gsl_vector *x) -- Function: int gsl_linalg_cholesky_band_solvem (const gsl_matrix *LLT, const gsl_matrix *B, gsl_matrix *X) This function solves the symmetric banded system A x = b (or A X = B) using the Cholesky decomposition of A held in the matrix *note LLT: 5b1. which must have been previously computed by *note gsl_linalg_cholesky_band_decomp(): 5ae. -- Function: int gsl_linalg_cholesky_band_svx (const gsl_matrix *LLT, gsl_vector *x) -- Function: int gsl_linalg_cholesky_band_svxm (const gsl_matrix *LLT, gsl_matrix *X) This function solves the symmetric banded system A x = b (or A X = B) in-place using the Cholesky decomposition of A held in the matrix *note LLT: 5b3. which must have been previously computed by *note gsl_linalg_cholesky_band_decomp(): 5ae. On input ‘x’ (or *note X: 5b3.) should contain the right-hand side b (or B), which is replaced by the solution on output. -- Function: int gsl_linalg_cholesky_band_invert (const gsl_matrix *LLT, gsl_matrix *Ainv) This function computes the inverse of a symmetric banded matrix from its Cholesky decomposition *note LLT: 5b4, which must have been previously computed by *note gsl_linalg_cholesky_band_decomp(): 5ae. On output, the inverse is stored in *note Ainv: 5b4, using both the lower and upper portions. -- Function: int gsl_linalg_cholesky_band_unpack (const gsl_matrix *LLT, gsl_matrix *L) This function unpacks the lower triangular Cholesky factor from *note LLT: 5b5. and stores it in the lower triangular portion of the N-by-N matrix *note L: 5b5. The upper triangular portion of *note L: 5b5. is not referenced. -- Function: int gsl_linalg_cholesky_band_scale (const gsl_matrix *A, gsl_vector *S) This function calculates a diagonal scaling transformation of the symmetric, positive definite banded matrix *note A: 5b6, such that S A S has a condition number within a factor of N of the matrix of smallest possible condition number over all possible diagonal scalings. On output, *note S: 5b6. contains the scale factors, given by S_i = 1/\sqrt{A_{ii}}. For any A_{ii} \le 0, the corresponding scale factor S_i is set to 1. -- Function: int gsl_linalg_cholesky_band_scale_apply (gsl_matrix *A, const gsl_vector *S) This function applies the scaling transformation *note S: 5b7. to the banded symmetric positive definite matrix *note A: 5b7. On output, *note A: 5b7. is replaced by S A S. -- Function: int gsl_linalg_cholesky_band_rcond (const gsl_matrix *LLT, double *rcond, gsl_vector *work) This function estimates the reciprocal condition number (using the 1-norm) of the symmetric banded positive definite matrix A, using its Cholesky decomposition provided in *note LLT: 5af. The reciprocal condition number estimate, defined as 1 / (||A||_1 \cdot ||A^{-1}||_1), is stored in *note rcond: 5af. Additional workspace of size 3 N is required in *note work: 5af.  
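For instance, the following sketch solves a symmetric positive definite tridiagonal system (lower bandwidth p = 1) stored in the symmetric banded format described earlier, using the relation AB(i,j) = A(i, i+j) so that column 0 holds the diagonal and column 1 the subdiagonal. The headers and the numerical values are illustrative assumptions.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       const size_t N = 4;
       const size_t p = 1;   /* lower bandwidth */

       /* symmetric banded storage, AB(i,j) = A(i, i+j): column 0 holds
          the diagonal and column 1 the subdiagonal; the final entry of
          column 1 is outside the band, and on output the routine uses
          the lower right element to store the matrix 1-norm */
       double ab_data[] = { 4.0, 1.0,
                            4.0, 1.0,
                            4.0, 1.0,
                            4.0, 0.0 };
       double b_data[] = { 1.0, 2.0, 3.0, 4.0 };

       gsl_matrix_view AB = gsl_matrix_view_array (ab_data, N, p + 1);
       gsl_vector_view b = gsl_vector_view_array (b_data, N);
       gsl_vector *x = gsl_vector_alloc (N);

       /* factor A = L L^T in place, then solve A x = b */
       gsl_linalg_cholesky_band_decomp (&AB.matrix);
       gsl_linalg_cholesky_band_solve (&AB.matrix, &b.vector, x);

       gsl_vector_fprintf (stdout, x, "%g");

       gsl_vector_free (x);
       return 0;
     }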
File: gsl-ref.info, Node: Banded LDLT Decomposition, Prev: Banded Cholesky Decomposition, Up: Banded Systems 14.22.5 Banded LDLT Decomposition --------------------------------- The routines in this section are designed to factor and solve N-by-N linear systems of the form A x = b where A is a banded, symmetric, and non-singular matrix with lower bandwidth p. See *note LDLT Decomposition: 56f. for more information on the factorization. The lower triangular factor of the L D L^T decomposition preserves the same banded structure as the matrix A, enabling an efficient algorithm which overwrites the original matrix with the L and D factors. -- Function: int gsl_linalg_ldlt_band_decomp (gsl_matrix *A) This function factorizes the symmetric, non-singular square matrix *note A: 5b9. into the decomposition A = L D L^T. The input matrix *note A: 5b9. is given in *note symmetric banded format: 5a6, and has dimensions N-by-(p + 1), where p is the lower bandwidth of the matrix. On output, the entries of *note A: 5b9. are replaced by the entries of the matrices D and L in the same format. If the matrix is singular then the decomposition will fail, returning the error code *note GSL_EDOM: 28. -- Function: int gsl_linalg_ldlt_band_solve (const gsl_matrix *LDLT, const gsl_vector *b, gsl_vector *x) This function solves the symmetric banded system A x = b using the L D L^T decomposition of A held in the matrix *note LDLT: 5ba. which must have been previously computed by *note gsl_linalg_ldlt_band_decomp(): 5b9. -- Function: int gsl_linalg_ldlt_band_svx (const gsl_matrix *LDLT, gsl_vector *x) This function solves the symmetric banded system A x = b in-place using the L D L^T decomposition of A held in the matrix *note LDLT: 5bb. which must have been previously computed by *note gsl_linalg_ldlt_band_decomp(): 5b9. On input *note x: 5bb. should contain the right-hand side b, which is replaced by the solution on output. -- Function: int gsl_linalg_ldlt_band_unpack (const gsl_matrix *LDLT, gsl_matrix *L, gsl_vector *D) This function unpacks the unit lower triangular factor L from *note LDLT: 5bc. and stores it in the lower triangular portion of the N-by-N matrix *note L: 5bc. The upper triangular portion of *note L: 5bc. is not referenced. The diagonal matrix D is stored in the vector *note D: 5bc. -- Function: int gsl_linalg_ldlt_band_rcond (const gsl_matrix *LDLT, double *rcond, gsl_vector *work) This function estimates the reciprocal condition number (using the 1-norm) of the symmetric banded nonsingular matrix A, using its L D L^T decomposition provided in *note LDLT: 5bd. The reciprocal condition number estimate, defined as 1 / (||A||_1 \cdot ||A^{-1}||_1), is stored in *note rcond: 5bd. Additional workspace of size 3 N is required in *note work: 5bd.  File: gsl-ref.info, Node: Balancing, Next: Examples<9>, Prev: Banded Systems, Up: Linear Algebra 14.23 Balancing =============== The process of balancing a matrix applies similarity transformations to make the rows and columns have comparable norms. This is useful, for example, to reduce roundoff errors in the solution of eigenvalue problems. Balancing a matrix A consists of replacing A with a similar matrix A' = D^{-1} A D where D is a diagonal matrix whose entries are powers of the floating point radix. -- Function: int gsl_linalg_balance_matrix (gsl_matrix *A, gsl_vector *D) This function replaces the matrix *note A: 5c0. with its balanced counterpart and stores the diagonal elements of the similarity transformation into the vector *note D: 5c0.  
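As a short illustration, the sketch below balances a 2-by-2 matrix whose entries vary widely in magnitude; the matrix values and headers are illustrative assumptions.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       /* a matrix with badly scaled rows and columns */
       double a_data[] = { 1.0e-8, 2.0e+4,
                           3.0e-4, 4.0e+8 };

       gsl_matrix_view A = gsl_matrix_view_array (a_data, 2, 2);
       gsl_vector *D = gsl_vector_alloc (2);

       /* replace A by the balanced matrix D^{-1} A D; the entries of D
          are powers of the floating point radix */
       gsl_linalg_balance_matrix (&A.matrix, D);

       printf ("balanced A:\n");
       gsl_matrix_fprintf (stdout, &A.matrix, "%g");

       printf ("D:\n");
       gsl_vector_fprintf (stdout, D, "%g");

       gsl_vector_free (D);
       return 0;
     }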
File: gsl-ref.info, Node: Examples<9>, Next: References and Further Reading<9>, Prev: Balancing, Up: Linear Algebra

14.24 Examples
==============

The following program solves the linear system A x = b. The system to be solved is,

     [ 0.18 0.60 0.57 0.96 ] [x0]   [1.0]
     [ 0.41 0.24 0.99 0.58 ] [x1] = [2.0]
     [ 0.14 0.30 0.97 0.66 ] [x2]   [3.0]
     [ 0.51 0.13 0.19 0.85 ] [x3]   [4.0]

and the solution is found using LU decomposition of the matrix A.

     #include <stdio.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       double a_data[] = { 0.18, 0.60, 0.57, 0.96,
                           0.41, 0.24, 0.99, 0.58,
                           0.14, 0.30, 0.97, 0.66,
                           0.51, 0.13, 0.19, 0.85 };

       double b_data[] = { 1.0, 2.0, 3.0, 4.0 };

       gsl_matrix_view m = gsl_matrix_view_array (a_data, 4, 4);
       gsl_vector_view b = gsl_vector_view_array (b_data, 4);

       gsl_vector *x = gsl_vector_alloc (4);

       int s;
       gsl_permutation * p = gsl_permutation_alloc (4);

       gsl_linalg_LU_decomp (&m.matrix, p, &s);
       gsl_linalg_LU_solve (&m.matrix, p, &b.vector, x);

       printf ("x = \n");
       gsl_vector_fprintf (stdout, x, "%g");

       gsl_permutation_free (p);
       gsl_vector_free (x);
       return 0;
     }

Here is the output from the program,

     x =
     -4.05205
     -12.6056
     1.66091
     8.69377

This can be verified by multiplying the solution x by the original matrix A using GNU octave,

     octave> A = [ 0.18, 0.60, 0.57, 0.96;
                   0.41, 0.24, 0.99, 0.58;
                   0.14, 0.30, 0.97, 0.66;
                   0.51, 0.13, 0.19, 0.85 ];
     octave> x = [ -4.05205; -12.6056; 1.66091; 8.69377];
     octave> A * x
     ans =
       1.0000
       2.0000
       3.0000
       4.0000

This reproduces the original right-hand side vector, b, in accordance with the equation A x = b.

File: gsl-ref.info, Node: References and Further Reading<9>, Prev: Examples<9>, Up: Linear Algebra

14.25 References and Further Reading
====================================

Further information on the algorithms described in this section can be found in the following book,

   * G. H. Golub, C. F. Van Loan, “Matrix Computations” (3rd Ed, 1996), Johns Hopkins University Press, ISBN 0-8018-5414-8.

The LAPACK library is described in the following manual,

   * ‘LAPACK Users’ Guide’ (Third Edition, 1999), Published by SIAM, ISBN 0-89871-447-8.

The LAPACK source code can be found at ‘http://www.netlib.org/lapack’, along with an online copy of the users guide. Further information on recursive Level 3 BLAS algorithms may be found in the following paper,

   * E. Peise and P. Bientinesi, “Recursive algorithms for dense linear algebra: the ReLAPACK collection”, ‘http://arxiv.org/abs/1602.06763’, 2016.

The recursive Level 3 BLAS QR decomposition is described in the following paper,

   * E. Elmroth and F. G. Gustavson, “Applying recursion to serial and parallel QR factorization leads to better performance”, IBM Journal of Research and Development, 44(4), pp. 605–624, 2000.

The Modified Golub-Reinsch algorithm is described in the following paper,

   * T. F. Chan, “An Improved Algorithm for Computing the Singular Value Decomposition”, ACM Transactions on Mathematical Software, 8 (1982), pp. 72–83.

The Jacobi algorithm for singular value decomposition is described in the following papers,

   * J. C. Nash, “A one-sided transformation method for the singular value decomposition and algebraic eigenproblem”, Computer Journal, Volume 18, Number 1 (1975), pp. 74–76.

   * J. C. Nash and S. Shlien, “Simple algorithms for the partial singular value decomposition”, Computer Journal, Volume 30 (1987), pp. 268–275.

   * J. Demmel, K. Veselic, “Jacobi’s Method is more accurate than QR”, LAPACK Working Note 15 (LAWN-15), October 1989. Available from netlib, ‘http://www.netlib.org/lapack/’ in the ‘lawns’ or ‘lawnspdf’ directories.
The algorithm for estimating a matrix condition number is described in the following paper, * N. J. Higham, “FORTRAN codes for estimating the one-norm of a real or complex matrix, with applications to condition estimation”, ACM Trans. Math. Soft., vol. 14, no. 4, pp. 381-396, December 1988.  File: gsl-ref.info, Node: Eigensystems, Next: Fast Fourier Transforms FFTs, Prev: Linear Algebra, Up: Top 15 Eigensystems *************** This chapter describes functions for computing eigenvalues and eigenvectors of matrices. There are routines for real symmetric, real nonsymmetric, complex hermitian, real generalized symmetric-definite, complex generalized hermitian-definite, and real generalized nonsymmetric eigensystems. Eigenvalues can be computed with or without eigenvectors. The hermitian and real symmetric matrix algorithms are symmetric bidiagonalization followed by QR reduction. The nonsymmetric algorithm is the Francis QR double-shift. The generalized nonsymmetric algorithm is the QZ method due to Moler and Stewart. The functions described in this chapter are declared in the header file ‘gsl_eigen.h’. * Menu: * Real Symmetric Matrices:: * Complex Hermitian Matrices:: * Real Nonsymmetric Matrices:: * Real Generalized Symmetric-Definite Eigensystems:: * Complex Generalized Hermitian-Definite Eigensystems:: * Real Generalized Nonsymmetric Eigensystems:: * Sorting Eigenvalues and Eigenvectors:: * Examples: Examples<10>. * References and Further Reading: References and Further Reading<10>.  File: gsl-ref.info, Node: Real Symmetric Matrices, Next: Complex Hermitian Matrices, Up: Eigensystems 15.1 Real Symmetric Matrices ============================ For real symmetric matrices, the library uses the symmetric bidiagonalization and QR reduction method. This is described in Golub & van Loan, section 8.3. The computed eigenvalues are accurate to an absolute accuracy of \epsilon ||A||_2, where \epsilon is the machine precision. -- Type: gsl_eigen_symm_workspace This workspace contains internal parameters used for solving symmetric eigenvalue problems. -- Function: *note gsl_eigen_symm_workspace: 5c6. *gsl_eigen_symm_alloc (const size_t n) This function allocates a workspace for computing eigenvalues of *note n: 5c7.-by-*note n: 5c7. real symmetric matrices. The size of the workspace is O(2n). -- Function: void gsl_eigen_symm_free (gsl_eigen_symm_workspace *w) This function frees the memory associated with the workspace *note w: 5c8. -- Function: int gsl_eigen_symm (gsl_matrix *A, gsl_vector *eval, gsl_eigen_symm_workspace *w) This function computes the eigenvalues of the real symmetric matrix *note A: 5c9. Additional workspace of the appropriate size must be provided in *note w: 5c9. The diagonal and lower triangular part of *note A: 5c9. are destroyed during the computation, but the strict upper triangular part is not referenced. The eigenvalues are stored in the vector *note eval: 5c9. and are unordered. -- Type: gsl_eigen_symmv_workspace This workspace contains internal parameters used for solving symmetric eigenvalue and eigenvector problems. -- Function: *note gsl_eigen_symmv_workspace: 5ca. *gsl_eigen_symmv_alloc (const size_t n) This function allocates a workspace for computing eigenvalues and eigenvectors of *note n: 5cb.-by-*note n: 5cb. real symmetric matrices. The size of the workspace is O(4n). -- Function: void gsl_eigen_symmv_free (gsl_eigen_symmv_workspace *w) This function frees the memory associated with the workspace *note w: 5cc. 
-- Function: int gsl_eigen_symmv (gsl_matrix *A, gsl_vector *eval, gsl_matrix *evec, gsl_eigen_symmv_workspace *w) This function computes the eigenvalues and eigenvectors of the real symmetric matrix *note A: 5cd. Additional workspace of the appropriate size must be provided in *note w: 5cd. The diagonal and lower triangular part of *note A: 5cd. are destroyed during the computation, but the strict upper triangular part is not referenced. The eigenvalues are stored in the vector *note eval: 5cd. and are unordered. The corresponding eigenvectors are stored in the columns of the matrix *note evec: 5cd. For example, the eigenvector in the first column corresponds to the first eigenvalue. The eigenvectors are guaranteed to be mutually orthogonal and normalised to unit magnitude.  File: gsl-ref.info, Node: Complex Hermitian Matrices, Next: Real Nonsymmetric Matrices, Prev: Real Symmetric Matrices, Up: Eigensystems 15.2 Complex Hermitian Matrices =============================== For hermitian matrices, the library uses the complex form of the symmetric bidiagonalization and QR reduction method. -- Type: gsl_eigen_herm_workspace This workspace contains internal parameters used for solving hermitian eigenvalue problems. -- Function: *note gsl_eigen_herm_workspace: 5cf. *gsl_eigen_herm_alloc (const size_t n) This function allocates a workspace for computing eigenvalues of *note n: 5d0.-by-*note n: 5d0. complex hermitian matrices. The size of the workspace is O(3n). -- Function: void gsl_eigen_herm_free (gsl_eigen_herm_workspace *w) This function frees the memory associated with the workspace *note w: 5d1. -- Function: int gsl_eigen_herm (gsl_matrix_complex *A, gsl_vector *eval, gsl_eigen_herm_workspace *w) This function computes the eigenvalues of the complex hermitian matrix *note A: 5d2. Additional workspace of the appropriate size must be provided in *note w: 5d2. The diagonal and lower triangular part of *note A: 5d2. are destroyed during the computation, but the strict upper triangular part is not referenced. The imaginary parts of the diagonal are assumed to be zero and are not referenced. The eigenvalues are stored in the vector *note eval: 5d2. and are unordered. -- Type: gsl_eigen_hermv_workspace This workspace contains internal parameters used for solving hermitian eigenvalue and eigenvector problems. -- Function: *note gsl_eigen_hermv_workspace: 5d3. *gsl_eigen_hermv_alloc (const size_t n) This function allocates a workspace for computing eigenvalues and eigenvectors of *note n: 5d4.-by-*note n: 5d4. complex hermitian matrices. The size of the workspace is O(5n). -- Function: void gsl_eigen_hermv_free (gsl_eigen_hermv_workspace *w) This function frees the memory associated with the workspace *note w: 5d5. -- Function: int gsl_eigen_hermv (gsl_matrix_complex *A, gsl_vector *eval, gsl_matrix_complex *evec, gsl_eigen_hermv_workspace *w) This function computes the eigenvalues and eigenvectors of the complex hermitian matrix *note A: 5d6. Additional workspace of the appropriate size must be provided in *note w: 5d6. The diagonal and lower triangular part of *note A: 5d6. are destroyed during the computation, but the strict upper triangular part is not referenced. The imaginary parts of the diagonal are assumed to be zero and are not referenced. The eigenvalues are stored in the vector *note eval: 5d6. and are unordered. The corresponding complex eigenvectors are stored in the columns of the matrix *note evec: 5d6. 
For example, the eigenvector in the first column corresponds to the first eigenvalue. The eigenvectors are guaranteed to be mutually orthogonal and normalised to unit magnitude.  File: gsl-ref.info, Node: Real Nonsymmetric Matrices, Next: Real Generalized Symmetric-Definite Eigensystems, Prev: Complex Hermitian Matrices, Up: Eigensystems 15.3 Real Nonsymmetric Matrices =============================== The solution of the real nonsymmetric eigensystem problem for a matrix A involves computing the Schur decomposition A = Z T Z^T where Z is an orthogonal matrix of Schur vectors and T, the Schur form, is quasi upper triangular with diagonal 1-by-1 blocks which are real eigenvalues of A, and diagonal 2-by-2 blocks whose eigenvalues are complex conjugate eigenvalues of A. The algorithm used is the double-shift Francis method. -- Type: gsl_eigen_nonsymm_workspace This workspace contains internal parameters used for solving nonsymmetric eigenvalue problems. -- Function: *note gsl_eigen_nonsymm_workspace: 5d8. *gsl_eigen_nonsymm_alloc (const size_t n) This function allocates a workspace for computing eigenvalues of *note n: 5d9.-by-*note n: 5d9. real nonsymmetric matrices. The size of the workspace is O(2n). -- Function: void gsl_eigen_nonsymm_free (gsl_eigen_nonsymm_workspace *w) This function frees the memory associated with the workspace *note w: 5da. -- Function: void gsl_eigen_nonsymm_params (const int compute_t, const int balance, gsl_eigen_nonsymm_workspace *w) This function sets some parameters which determine how the eigenvalue problem is solved in subsequent calls to *note gsl_eigen_nonsymm(): 5dc. If *note compute_t: 5db. is set to 1, the full Schur form T will be computed by *note gsl_eigen_nonsymm(): 5dc. If it is set to 0, T will not be computed (this is the default setting). Computing the full Schur form T requires approximately 1.5–2 times the number of flops. If *note balance: 5db. is set to 1, a balancing transformation is applied to the matrix prior to computing eigenvalues. This transformation is designed to make the rows and columns of the matrix have comparable norms, and can result in more accurate eigenvalues for matrices whose entries vary widely in magnitude. See *note Balancing: 5be. for more information. Note that the balancing transformation does not preserve the orthogonality of the Schur vectors, so if you wish to compute the Schur vectors with *note gsl_eigen_nonsymm_Z(): 5dd. you will obtain the Schur vectors of the balanced matrix instead of the original matrix. The relationship will be T = Q^T D^{-1} A D Q where ‘Q’ is the matrix of Schur vectors for the balanced matrix, and ‘D’ is the balancing transformation. Then *note gsl_eigen_nonsymm_Z(): 5dd. will compute a matrix ‘Z’ which satisfies T = Z^{-1} A Z with Z = D Q. Note that ‘Z’ will not be orthogonal. For this reason, balancing is not performed by default. -- Function: int gsl_eigen_nonsymm (gsl_matrix *A, gsl_vector_complex *eval, gsl_eigen_nonsymm_workspace *w) This function computes the eigenvalues of the real nonsymmetric matrix *note A: 5dc. and stores them in the vector *note eval: 5dc. If T is desired, it is stored in the upper portion of *note A: 5dc. on output. Otherwise, on output, the diagonal of *note A: 5dc. will contain the 1-by-1 real eigenvalues and 2-by-2 complex conjugate eigenvalue systems, and the rest of *note A: 5dc. is destroyed. In rare cases, this function may fail to find all eigenvalues. 
If this happens, an error code is returned and the number of converged eigenvalues is stored in ‘w->n_evals’. The converged eigenvalues are stored in the beginning of *note eval: 5dc. -- Function: int gsl_eigen_nonsymm_Z (gsl_matrix *A, gsl_vector_complex *eval, gsl_matrix *Z, gsl_eigen_nonsymm_workspace *w) This function is identical to *note gsl_eigen_nonsymm(): 5dc. except that it also computes the Schur vectors and stores them into *note Z: 5dd. -- Type: gsl_eigen_nonsymmv_workspace This workspace contains internal parameters used for solving nonsymmetric eigenvalue and eigenvector problems. -- Function: *note gsl_eigen_nonsymmv_workspace: 5de. *gsl_eigen_nonsymmv_alloc (const size_t n) This function allocates a workspace for computing eigenvalues and eigenvectors of *note n: 5df.-by-*note n: 5df. real nonsymmetric matrices. The size of the workspace is O(5n). -- Function: void gsl_eigen_nonsymmv_free (gsl_eigen_nonsymmv_workspace *w) This function frees the memory associated with the workspace *note w: 5e0. -- Function: void gsl_eigen_nonsymmv_params (const int balance, gsl_eigen_nonsymm_workspace *w) This function sets parameters which determine how the eigenvalue problem is solved in subsequent calls to *note gsl_eigen_nonsymmv(): 5e2. If *note balance: 5e1. is set to 1, a balancing transformation is applied to the matrix. See *note gsl_eigen_nonsymm_params(): 5db. for more information. Balancing is turned off by default since it does not preserve the orthogonality of the Schur vectors. -- Function: int gsl_eigen_nonsymmv (gsl_matrix *A, gsl_vector_complex *eval, gsl_matrix_complex *evec, gsl_eigen_nonsymmv_workspace *w) This function computes eigenvalues and right eigenvectors of the ‘n’-by-‘n’ real nonsymmetric matrix *note A: 5e2. It first calls *note gsl_eigen_nonsymm(): 5dc. to compute the eigenvalues, Schur form T, and Schur vectors. Then it finds eigenvectors of T and backtransforms them using the Schur vectors. The Schur vectors are destroyed in the process, but can be saved by using *note gsl_eigen_nonsymmv_Z(): 5e3. The computed eigenvectors are normalized to have unit magnitude. On output, the upper portion of *note A: 5e2. contains the Schur form T. If *note gsl_eigen_nonsymm(): 5dc. fails, no eigenvectors are computed, and an error code is returned. -- Function: int gsl_eigen_nonsymmv_Z (gsl_matrix *A, gsl_vector_complex *eval, gsl_matrix_complex *evec, gsl_matrix *Z, gsl_eigen_nonsymmv_workspace *w) This function is identical to *note gsl_eigen_nonsymmv(): 5e2. except that it also saves the Schur vectors into *note Z: 5e3.  File: gsl-ref.info, Node: Real Generalized Symmetric-Definite Eigensystems, Next: Complex Generalized Hermitian-Definite Eigensystems, Prev: Real Nonsymmetric Matrices, Up: Eigensystems 15.4 Real Generalized Symmetric-Definite Eigensystems ===================================================== The real generalized symmetric-definite eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that A x = \lambda B x where A and B are symmetric matrices, and B is positive-definite. This problem reduces to the standard symmetric eigenvalue problem by applying the Cholesky decomposition to B: A x = \lambda B x A x = \lambda L L^T x ( L^{-1} A L^{-T} ) L^T x = \lambda L^T x Therefore, the problem becomes C y = \lambda y where C = L^{-1} A L^{-T} is symmetric, and y = L^T x. The standard symmetric eigensolver can be applied to the matrix C. The resulting eigenvectors are backtransformed to find the vectors of the original problem. 
The eigenvalues and eigenvectors of the generalized symmetric-definite eigenproblem are always real. -- Type: gsl_eigen_gensymm_workspace This workspace contains internal parameters used for solving generalized symmetric eigenvalue problems. -- Function: *note gsl_eigen_gensymm_workspace: 5e5. *gsl_eigen_gensymm_alloc (const size_t n) This function allocates a workspace for computing eigenvalues of *note n: 5e6.-by-*note n: 5e6. real generalized symmetric-definite eigensystems. The size of the workspace is O(2n). -- Function: void gsl_eigen_gensymm_free (gsl_eigen_gensymm_workspace *w) This function frees the memory associated with the workspace *note w: 5e7. -- Function: int gsl_eigen_gensymm (gsl_matrix *A, gsl_matrix *B, gsl_vector *eval, gsl_eigen_gensymm_workspace *w) This function computes the eigenvalues of the real generalized symmetric-definite matrix pair (*note A: 5e8, *note B: 5e8.), and stores them in *note eval: 5e8, using the method outlined above. On output, *note B: 5e8. contains its Cholesky decomposition and *note A: 5e8. is destroyed. -- Type: gsl_eigen_gensymmv_workspace This workspace contains internal parameters used for solving generalized symmetric eigenvalue and eigenvector problems. -- Function: *note gsl_eigen_gensymmv_workspace: 5e9. *gsl_eigen_gensymmv_alloc (const size_t n) This function allocates a workspace for computing eigenvalues and eigenvectors of *note n: 5ea.-by-*note n: 5ea. real generalized symmetric-definite eigensystems. The size of the workspace is O(4n). -- Function: void gsl_eigen_gensymmv_free (gsl_eigen_gensymmv_workspace *w) This function frees the memory associated with the workspace *note w: 5eb. -- Function: int gsl_eigen_gensymmv (gsl_matrix *A, gsl_matrix *B, gsl_vector *eval, gsl_matrix *evec, gsl_eigen_gensymmv_workspace *w) This function computes the eigenvalues and eigenvectors of the real generalized symmetric-definite matrix pair (*note A: 5ec, *note B: 5ec.), and stores them in *note eval: 5ec. and *note evec: 5ec. respectively. The computed eigenvectors are normalized to have unit magnitude. On output, *note B: 5ec. contains its Cholesky decomposition and *note A: 5ec. is destroyed.  File: gsl-ref.info, Node: Complex Generalized Hermitian-Definite Eigensystems, Next: Real Generalized Nonsymmetric Eigensystems, Prev: Real Generalized Symmetric-Definite Eigensystems, Up: Eigensystems 15.5 Complex Generalized Hermitian-Definite Eigensystems ======================================================== The complex generalized hermitian-definite eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that A x = \lambda B x where A and B are hermitian matrices, and B is positive-definite. Similarly to the real case, this can be reduced to C y = \lambda y where C = L^{-1} A L^{-\dagger} is hermitian, and y = L^{\dagger} x. The standard hermitian eigensolver can be applied to the matrix C. The resulting eigenvectors are backtransformed to find the vectors of the original problem. The eigenvalues of the generalized hermitian-definite eigenproblem are always real. -- Type: gsl_eigen_genherm_workspace This workspace contains internal parameters used for solving generalized hermitian eigenvalue problems. -- Function: *note gsl_eigen_genherm_workspace: 5ee. *gsl_eigen_genherm_alloc (const size_t n) This function allocates a workspace for computing eigenvalues of *note n: 5ef.-by-*note n: 5ef. complex generalized hermitian-definite eigensystems. The size of the workspace is O(3n). 
-- Function: void gsl_eigen_genherm_free (gsl_eigen_genherm_workspace *w) This function frees the memory associated with the workspace *note w: 5f0. -- Function: int gsl_eigen_genherm (gsl_matrix_complex *A, gsl_matrix_complex *B, gsl_vector *eval, gsl_eigen_genherm_workspace *w) This function computes the eigenvalues of the complex generalized hermitian-definite matrix pair (*note A: 5f1, *note B: 5f1.), and stores them in *note eval: 5f1, using the method outlined above. On output, *note B: 5f1. contains its Cholesky decomposition and *note A: 5f1. is destroyed. -- Type: gsl_eigen_genhermv_workspace This workspace contains internal parameters used for solving generalized hermitian eigenvalue and eigenvector problems. -- Function: *note gsl_eigen_genhermv_workspace: 5f2. *gsl_eigen_genhermv_alloc (const size_t n) This function allocates a workspace for computing eigenvalues and eigenvectors of *note n: 5f3.-by-*note n: 5f3. complex generalized hermitian-definite eigensystems. The size of the workspace is O(5n). -- Function: void gsl_eigen_genhermv_free (gsl_eigen_genhermv_workspace *w) This function frees the memory associated with the workspace *note w: 5f4. -- Function: int gsl_eigen_genhermv (gsl_matrix_complex *A, gsl_matrix_complex *B, gsl_vector *eval, gsl_matrix_complex *evec, gsl_eigen_genhermv_workspace *w) This function computes the eigenvalues and eigenvectors of the complex generalized hermitian-definite matrix pair (*note A: 5f5, *note B: 5f5.), and stores them in *note eval: 5f5. and *note evec: 5f5. respectively. The computed eigenvectors are normalized to have unit magnitude. On output, *note B: 5f5. contains its Cholesky decomposition and *note A: 5f5. is destroyed.  File: gsl-ref.info, Node: Real Generalized Nonsymmetric Eigensystems, Next: Sorting Eigenvalues and Eigenvectors, Prev: Complex Generalized Hermitian-Definite Eigensystems, Up: Eigensystems 15.6 Real Generalized Nonsymmetric Eigensystems =============================================== Given two square matrices (A, B), the generalized nonsymmetric eigenvalue problem is to find eigenvalues \lambda and eigenvectors x such that A x = \lambda B x We may also define the problem as finding eigenvalues \mu and eigenvectors y such that \mu A y = B y Note that these two problems are equivalent (with \lambda = 1/\mu) if neither \lambda nor \mu is zero. If say, \lambda is zero, then it is still a well defined eigenproblem, but its alternate problem involving \mu is not. Therefore, to allow for zero (and infinite) eigenvalues, the problem which is actually solved is \beta A x = \alpha B x The eigensolver routines below will return two values \alpha and \beta and leave it to the user to perform the divisions \lambda = \alpha / \beta and \mu = \beta / \alpha. If the determinant of the matrix pencil A - \lambda B is zero for all \lambda, the problem is said to be singular; otherwise it is called regular. Singularity normally leads to some \alpha = \beta = 0 which means the eigenproblem is ill-conditioned and generally does not have well defined eigenvalue solutions. The routines below are intended for regular matrix pencils and could yield unpredictable results when applied to singular pencils. 
The solution of the real generalized nonsymmetric eigensystem problem for a matrix pair (A, B) involves computing the generalized Schur decomposition A = Q S Z^T B = Q T Z^T where Q and Z are orthogonal matrices of left and right Schur vectors respectively, and (S, T) is the generalized Schur form whose diagonal elements give the \alpha and \beta values. The algorithm used is the QZ method due to Moler and Stewart (see references). -- Type: gsl_eigen_gen_workspace This workspace contains internal parameters used for solving generalized eigenvalue problems. -- Function: *note gsl_eigen_gen_workspace: 5f7. *gsl_eigen_gen_alloc (const size_t n) This function allocates a workspace for computing eigenvalues of *note n: 5f8.-by-*note n: 5f8. real generalized nonsymmetric eigensystems. The size of the workspace is O(n). -- Function: void gsl_eigen_gen_free (gsl_eigen_gen_workspace *w) This function frees the memory associated with the workspace *note w: 5f9. -- Function: void gsl_eigen_gen_params (const int compute_s, const int compute_t, const int balance, gsl_eigen_gen_workspace *w) This function sets some parameters which determine how the eigenvalue problem is solved in subsequent calls to *note gsl_eigen_gen(): 5fb. If *note compute_s: 5fa. is set to 1, the full Schur form S will be computed by *note gsl_eigen_gen(): 5fb. If it is set to 0, S will not be computed (this is the default setting). S is a quasi upper triangular matrix with 1-by-1 and 2-by-2 blocks on its diagonal. 1-by-1 blocks correspond to real eigenvalues, and 2-by-2 blocks correspond to complex eigenvalues. If *note compute_t: 5fa. is set to 1, the full Schur form T will be computed by *note gsl_eigen_gen(): 5fb. If it is set to 0, T will not be computed (this is the default setting). T is an upper triangular matrix with non-negative elements on its diagonal. Any 2-by-2 blocks in S will correspond to a 2-by-2 diagonal block in T. The *note balance: 5fa. parameter is currently ignored, since generalized balancing is not yet implemented. -- Function: int gsl_eigen_gen (gsl_matrix *A, gsl_matrix *B, gsl_vector_complex *alpha, gsl_vector *beta, gsl_eigen_gen_workspace *w) This function computes the eigenvalues of the real generalized nonsymmetric matrix pair (*note A: 5fb, *note B: 5fb.), and stores them as pairs in (*note alpha: 5fb, *note beta: 5fb.), where *note alpha: 5fb. is complex and *note beta: 5fb. is real. If \beta_i is non-zero, then \lambda = \alpha_i / \beta_i is an eigenvalue. Likewise, if \alpha_i is non-zero, then \mu = \beta_i / \alpha_i is an eigenvalue of the alternate problem \mu A y = B y. The elements of *note beta: 5fb. are normalized to be non-negative. If S is desired, it is stored in *note A: 5fb. on output. If T is desired, it is stored in *note B: 5fb. on output. The ordering of eigenvalues in (*note alpha: 5fb, *note beta: 5fb.) follows the ordering of the diagonal blocks in the Schur forms S and T. In rare cases, this function may fail to find all eigenvalues. If this occurs, an error code is returned. -- Function: int gsl_eigen_gen_QZ (gsl_matrix *A, gsl_matrix *B, gsl_vector_complex *alpha, gsl_vector *beta, gsl_matrix *Q, gsl_matrix *Z, gsl_eigen_gen_workspace *w) This function is identical to *note gsl_eigen_gen(): 5fb. except that it also computes the left and right Schur vectors and stores them into *note Q: 5fc. and *note Z: 5fc. respectively. -- Type: gsl_eigen_genv_workspace This workspace contains internal parameters used for solving generalized eigenvalue and eigenvector problems. 
-- Function: *note gsl_eigen_genv_workspace: 5fd. *gsl_eigen_genv_alloc (const size_t n) This function allocates a workspace for computing eigenvalues and eigenvectors of *note n: 5fe.-by-*note n: 5fe. real generalized nonsymmetric eigensystems. The size of the workspace is O(7n). -- Function: void gsl_eigen_genv_free (gsl_eigen_genv_workspace *w) This function frees the memory associated with the workspace *note w: 5ff. -- Function: int gsl_eigen_genv (gsl_matrix *A, gsl_matrix *B, gsl_vector_complex *alpha, gsl_vector *beta, gsl_matrix_complex *evec, gsl_eigen_genv_workspace *w) This function computes eigenvalues and right eigenvectors of the ‘n’-by-‘n’ real generalized nonsymmetric matrix pair (*note A: 600, *note B: 600.). The eigenvalues are stored in (*note alpha: 600, *note beta: 600.) and the eigenvectors are stored in *note evec: 600. It first calls *note gsl_eigen_gen(): 5fb. to compute the eigenvalues, Schur forms, and Schur vectors. Then it finds eigenvectors of the Schur forms and backtransforms them using the Schur vectors. The Schur vectors are destroyed in the process, but can be saved by using *note gsl_eigen_genv_QZ(): 601. The computed eigenvectors are normalized to have unit magnitude. On output, (*note A: 600, *note B: 600.) contains the generalized Schur form (S, T). If *note gsl_eigen_gen(): 5fb. fails, no eigenvectors are computed, and an error code is returned. -- Function: int gsl_eigen_genv_QZ (gsl_matrix *A, gsl_matrix *B, gsl_vector_complex *alpha, gsl_vector *beta, gsl_matrix_complex *evec, gsl_matrix *Q, gsl_matrix *Z, gsl_eigen_genv_workspace *w) This function is identical to *note gsl_eigen_genv(): 600. except that it also computes the left and right Schur vectors and stores them into *note Q: 601. and *note Z: 601. respectively.  File: gsl-ref.info, Node: Sorting Eigenvalues and Eigenvectors, Next: Examples<10>, Prev: Real Generalized Nonsymmetric Eigensystems, Up: Eigensystems 15.7 Sorting Eigenvalues and Eigenvectors ========================================= -- Function: int gsl_eigen_symmv_sort (gsl_vector *eval, gsl_matrix *evec, gsl_eigen_sort_t sort_type) This function simultaneously sorts the eigenvalues stored in the vector *note eval: 603. and the corresponding real eigenvectors stored in the columns of the matrix *note evec: 603. into ascending or descending order according to the value of the parameter *note sort_type: 603, -- Type: gsl_eigen_sort_t ‘GSL_EIGEN_SORT_VAL_ASC’ ascending order in numerical value ‘GSL_EIGEN_SORT_VAL_DESC’ descending order in numerical value ‘GSL_EIGEN_SORT_ABS_ASC’ ascending order in magnitude ‘GSL_EIGEN_SORT_ABS_DESC’ descending order in magnitude -- Function: int gsl_eigen_hermv_sort (gsl_vector *eval, gsl_matrix_complex *evec, gsl_eigen_sort_t sort_type) This function simultaneously sorts the eigenvalues stored in the vector *note eval: 605. and the corresponding complex eigenvectors stored in the columns of the matrix *note evec: 605. into ascending or descending order according to the value of the parameter *note sort_type: 605. as shown above. -- Function: int gsl_eigen_nonsymmv_sort (gsl_vector_complex *eval, gsl_matrix_complex *evec, gsl_eigen_sort_t sort_type) This function simultaneously sorts the eigenvalues stored in the vector *note eval: 606. and the corresponding complex eigenvectors stored in the columns of the matrix *note evec: 606. into ascending or descending order according to the value of the parameter *note sort_type: 606. as shown above. 
Only ‘GSL_EIGEN_SORT_ABS_ASC’ and ‘GSL_EIGEN_SORT_ABS_DESC’ are supported due to the eigenvalues being complex.

 -- Function: int gsl_eigen_gensymmv_sort (gsl_vector *eval, gsl_matrix *evec, gsl_eigen_sort_t sort_type) This function simultaneously sorts the eigenvalues stored in the vector *note eval: 607. and the corresponding real eigenvectors stored in the columns of the matrix *note evec: 607. into ascending or descending order according to the value of the parameter *note sort_type: 607. as shown above.

 -- Function: int gsl_eigen_genhermv_sort (gsl_vector *eval, gsl_matrix_complex *evec, gsl_eigen_sort_t sort_type) This function simultaneously sorts the eigenvalues stored in the vector *note eval: 608. and the corresponding complex eigenvectors stored in the columns of the matrix *note evec: 608. into ascending or descending order according to the value of the parameter *note sort_type: 608. as shown above.

 -- Function: int gsl_eigen_genv_sort (gsl_vector_complex *alpha, gsl_vector *beta, gsl_matrix_complex *evec, gsl_eigen_sort_t sort_type) This function simultaneously sorts the eigenvalues stored in the vectors (*note alpha: 609, *note beta: 609.) and the corresponding complex eigenvectors stored in the columns of the matrix *note evec: 609. into ascending or descending order according to the value of the parameter *note sort_type: 609. as shown above. Only ‘GSL_EIGEN_SORT_ABS_ASC’ and ‘GSL_EIGEN_SORT_ABS_DESC’ are supported due to the eigenvalues being complex.

File: gsl-ref.info, Node: Examples<10>, Next: References and Further Reading<10>, Prev: Sorting Eigenvalues and Eigenvectors, Up: Eigensystems

15.8 Examples
=============

The following program computes the eigenvalues and eigenvectors of the 4-th order Hilbert matrix, H(i,j) = 1/(i + j + 1).

     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_eigen.h>

     int
     main (void)
     {
       double data[] = { 1.0  , 1/2.0, 1/3.0, 1/4.0,
                         1/2.0, 1/3.0, 1/4.0, 1/5.0,
                         1/3.0, 1/4.0, 1/5.0, 1/6.0,
                         1/4.0, 1/5.0, 1/6.0, 1/7.0 };

       gsl_matrix_view m = gsl_matrix_view_array (data, 4, 4);

       gsl_vector *eval = gsl_vector_alloc (4);
       gsl_matrix *evec = gsl_matrix_alloc (4, 4);

       gsl_eigen_symmv_workspace * w = gsl_eigen_symmv_alloc (4);

       gsl_eigen_symmv (&m.matrix, eval, evec, w);

       gsl_eigen_symmv_free (w);

       gsl_eigen_symmv_sort (eval, evec, GSL_EIGEN_SORT_ABS_ASC);

       {
         int i;

         for (i = 0; i < 4; i++)
           {
             double eval_i = gsl_vector_get (eval, i);
             gsl_vector_view evec_i = gsl_matrix_column (evec, i);

             printf ("eigenvalue = %g\n", eval_i);
             printf ("eigenvector = \n");
             gsl_vector_fprintf (stdout, &evec_i.vector, "%g");
           }
       }

       gsl_vector_free (eval);
       gsl_matrix_free (evec);

       return 0;
     }

Here is the beginning of the output from the program:

     $ ./a.out
     eigenvalue = 9.67023e-05
     eigenvector =
     -0.0291933
     0.328712
     -0.791411
     0.514553
     ...

This can be compared with the corresponding output from GNU octave:

     octave> [v,d] = eig(hilb(4));
     octave> diag(d)
     ans =

        9.6702e-05
        6.7383e-03
        1.6914e-01
        1.5002e+00

     octave> v
     v =

        0.029193   0.179186  -0.582076   0.792608
       -0.328712  -0.741918   0.370502   0.451923
        0.791411   0.100228   0.509579   0.322416
       -0.514553   0.638283   0.514048   0.252161

Note that the eigenvectors can differ by a change of sign, since the sign of an eigenvector is arbitrary.

The following program illustrates the use of the nonsymmetric eigensolver, by computing the eigenvalues and eigenvectors of the Vandermonde matrix V(x;i,j) = x_i^{n - j} with x = (-1,-2,3,4).
     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_eigen.h>

     int
     main (void)
     {
       double data[] = { -1.0, 1.0, -1.0, 1.0,
                         -8.0, 4.0, -2.0, 1.0,
                         27.0, 9.0, 3.0, 1.0,
                         64.0, 16.0, 4.0, 1.0 };

       gsl_matrix_view m = gsl_matrix_view_array (data, 4, 4);

       gsl_vector_complex *eval = gsl_vector_complex_alloc (4);
       gsl_matrix_complex *evec = gsl_matrix_complex_alloc (4, 4);

       gsl_eigen_nonsymmv_workspace * w = gsl_eigen_nonsymmv_alloc (4);

       gsl_eigen_nonsymmv (&m.matrix, eval, evec, w);

       gsl_eigen_nonsymmv_free (w);

       gsl_eigen_nonsymmv_sort (eval, evec, GSL_EIGEN_SORT_ABS_DESC);

       {
         int i, j;

         for (i = 0; i < 4; i++)
           {
             gsl_complex eval_i = gsl_vector_complex_get (eval, i);
             gsl_vector_complex_view evec_i = gsl_matrix_complex_column (evec, i);

             printf ("eigenvalue = %g + %gi\n",
                     GSL_REAL(eval_i), GSL_IMAG(eval_i));
             printf ("eigenvector = \n");

             for (j = 0; j < 4; ++j)
               {
                 gsl_complex z = gsl_vector_complex_get(&evec_i.vector, j);
                 printf("%g + %gi\n", GSL_REAL(z), GSL_IMAG(z));
               }
           }
       }

       gsl_vector_complex_free(eval);
       gsl_matrix_complex_free(evec);

       return 0;
     }

Here is the beginning of the output from the program:

     $ ./a.out
     eigenvalue = -6.41391 + 0i
     eigenvector =
     -0.0998822 + 0i
     -0.111251 + 0i
     0.292501 + 0i
     0.944505 + 0i
     eigenvalue = 5.54555 + 3.08545i
     eigenvector =
     -0.043487 + -0.0076308i
     0.0642377 + -0.142127i
     -0.515253 + 0.0405118i
     -0.840592 + -0.00148565i
     ...

This can be compared with the corresponding output from GNU octave:

     octave> [v,d] = eig(vander([-1 -2 3 4]));
     octave> diag(d)
     ans =

       -6.4139 + 0.0000i
        5.5456 + 3.0854i
        5.5456 - 3.0854i
        2.3228 + 0.0000i

     octave> v
     v =

     Columns 1 through 3:

       -0.09988 + 0.00000i  -0.04350 - 0.00755i  -0.04350 + 0.00755i
       -0.11125 + 0.00000i   0.06399 - 0.14224i   0.06399 + 0.14224i
        0.29250 + 0.00000i  -0.51518 + 0.04142i  -0.51518 - 0.04142i
        0.94451 + 0.00000i  -0.84059 + 0.00000i  -0.84059 - 0.00000i

     Column 4:

       -0.14493 + 0.00000i
        0.35660 + 0.00000i
        0.91937 + 0.00000i
        0.08118 + 0.00000i

Note that the eigenvectors corresponding to the eigenvalue 5.54555 + 3.08545i differ by the multiplicative constant 0.9999984 + 0.0017674i which is an arbitrary phase factor of magnitude 1.

File: gsl-ref.info, Node: References and Further Reading<10>, Prev: Examples<10>, Up: Eigensystems

15.9 References and Further Reading
===================================

Further information on the algorithms described in this section can be found in the following book,

   * G. H. Golub, C. F. Van Loan, “Matrix Computations” (3rd Ed, 1996), Johns Hopkins University Press, ISBN 0-8018-5414-8.

Further information on the generalized eigensystems QZ algorithm can be found in this paper,

   * C. Moler, G. Stewart, “An Algorithm for Generalized Matrix Eigenvalue Problems”, SIAM J. Numer. Anal., Vol 10, No 2, 1973.

Eigensystem routines for very large matrices can be found in the Fortran library LAPACK. The LAPACK library is described in,

   * LAPACK Users’ Guide (Third Edition, 1999), Published by SIAM, ISBN 0-89871-447-8.

The LAPACK source code can be found at the website ‘http://www.netlib.org/lapack’ along with an online copy of the users guide.

File: gsl-ref.info, Node: Fast Fourier Transforms FFTs, Next: Numerical Integration, Prev: Eigensystems, Up: Top

16 Fast Fourier Transforms (FFTs)
*********************************

This chapter describes functions for performing Fast Fourier Transforms (FFTs). The library includes radix-2 routines (for lengths which are a power of two) and mixed-radix routines (which work for any length). For efficiency there are separate versions of the routines for real data and for complex data.
The mixed-radix routines are a reimplementation of the FFTPACK library of Paul Swarztrauber. Fortran code for FFTPACK is available on Netlib (FFTPACK also includes some routines for sine and cosine transforms but these are currently not available in GSL). For details and derivations of the underlying algorithms consult the document “GSL FFT Algorithms” (see *note References and Further Reading: 60e.) * Menu: * Mathematical Definitions:: * Overview of complex data FFTs:: * Radix-2 FFT routines for complex data:: * Mixed-radix FFT routines for complex data:: * Overview of real data FFTs:: * Radix-2 FFT routines for real data:: * Mixed-radix FFT routines for real data:: * References and Further Reading: References and Further Reading<11>.  File: gsl-ref.info, Node: Mathematical Definitions, Next: Overview of complex data FFTs, Up: Fast Fourier Transforms FFTs 16.1 Mathematical Definitions ============================= Fast Fourier Transforms are efficient algorithms for calculating the discrete Fourier transform (DFT), x_j = \sum_{k=0}^{n-1} z_k \exp(-2 \pi i j k / n) The DFT usually arises as an approximation to the continuous Fourier transform when functions are sampled at discrete intervals in space or time. The naive evaluation of the discrete Fourier transform is a matrix-vector multiplication W\vec{z}. A general matrix-vector multiplication takes O(n^2) operations for n data-points. Fast Fourier transform algorithms use a divide-and-conquer strategy to factorize the matrix W into smaller sub-matrices, corresponding to the integer factors of the length n. If n can be factorized into a product of integers f_1 f_2 \ldots f_m then the DFT can be computed in O(n \sum f_i) operations. For a radix-2 FFT this gives an operation count of O(n \log_2 n). All the FFT functions offer three types of transform: forwards, inverse and backwards, based on the same mathematical definitions. The definition of the `forward Fourier transform', x = \hbox{FFT}(z), is, x_j = \sum_{k=0}^{n-1} z_k \exp(-2 \pi i j k / n) and the definition of the `inverse Fourier transform', x = \hbox{IFFT}(z), is, z_j = {1 \over n} \sum_{k=0}^{n-1} x_k \exp(2 \pi i j k / n). The factor of 1/n makes this a true inverse. For example, a call to *note gsl_fft_complex_forward(): 610. followed by a call to *note gsl_fft_complex_inverse(): 611. should return the original data (within numerical errors). In general there are two possible choices for the sign of the exponential in the transform/ inverse-transform pair. GSL follows the same convention as FFTPACK, using a negative exponential for the forward transform. The advantage of this convention is that the inverse transform recreates the original function with simple Fourier synthesis. Numerical Recipes uses the opposite convention, a positive exponential in the forward transform. The `backwards FFT' is simply our terminology for an unscaled version of the inverse FFT, z^{backwards}_j = \sum_{k=0}^{n-1} x_k \exp(2 \pi i j k / n) When the overall scale of the result is unimportant it is often convenient to use the backwards FFT instead of the inverse to save unnecessary divisions.  File: gsl-ref.info, Node: Overview of complex data FFTs, Next: Radix-2 FFT routines for complex data, Prev: Mathematical Definitions, Up: Fast Fourier Transforms FFTs 16.2 Overview of complex data FFTs ================================== The inputs and outputs for the complex FFT routines are `packed arrays' of floating point numbers. 
In a packed array the real and imaginary parts of each complex number are placed in alternate neighboring elements. For example, the following definition of a packed array of length 6:

     double x[3*2];
     gsl_complex_packed_array data = x;

can be used to hold an array of three complex numbers, ‘z[3]’, in the following way:

     data[0] = Re(z[0])
     data[1] = Im(z[0])
     data[2] = Re(z[1])
     data[3] = Im(z[1])
     data[4] = Re(z[2])
     data[5] = Im(z[2])

The array indices for the data have the same ordering as those in the definition of the DFT—i.e. there are no index transformations or permutations of the data.

A `stride' parameter allows the user to perform transforms on the elements ‘z[stride*i]’ instead of ‘z[i]’. A stride greater than 1 can be used to take an in-place FFT of the column of a matrix. A stride of 1 accesses the array without any additional spacing between elements.

To perform an FFT on a vector argument, such as ‘gsl_vector_complex * v’, use the following definitions (or their equivalents) when calling the functions described in this chapter:

     gsl_complex_packed_array data = v->data;
     size_t stride = v->stride;
     size_t n = v->size;

For physical applications it is important to remember that the index appearing in the DFT does not correspond directly to a physical frequency. If the time-step of the DFT is \Delta then the frequency-domain includes both positive and negative frequencies, ranging from -1/(2\Delta) through 0 to +1/(2\Delta). The positive frequencies are stored from the beginning of the array up to the middle, and the negative frequencies are stored backwards from the end of the array.

Here is a table which shows the layout of the array ‘data’, and the correspondence between the time-domain data z, and the frequency-domain data x:

     index    z               x = FFT(z)

     0        z(t = 0)        x(f = 0)
     1        z(t = 1)        x(f = 1/(n Delta))
     2        z(t = 2)        x(f = 2/(n Delta))
     .        ........        ..................
     n/2      z(t = n/2)      x(f = +1/(2 Delta), -1/(2 Delta))
     .        ........        ..................
     n-3      z(t = n-3)      x(f = -3/(n Delta))
     n-2      z(t = n-2)      x(f = -2/(n Delta))
     n-1      z(t = n-1)      x(f = -1/(n Delta))

When n is even the location n/2 contains the most positive and negative frequencies (+1/(2 \Delta), -1/(2 \Delta)) which are equivalent. If n is odd then the general structure of the table above still applies, but n/2 does not appear.

File: gsl-ref.info, Node: Radix-2 FFT routines for complex data, Next: Mixed-radix FFT routines for complex data, Prev: Overview of complex data FFTs, Up: Fast Fourier Transforms FFTs

16.3 Radix-2 FFT routines for complex data
==========================================

The radix-2 algorithms described in this section are simple and compact, although not necessarily the most efficient. They use the Cooley-Tukey algorithm to compute in-place complex FFTs for lengths which are a power of 2—no additional storage is required. The corresponding self-sorting mixed-radix routines offer better performance at the expense of requiring additional working space.

All the functions described in this section are declared in the header file ‘gsl_fft_complex.h’.
 -- Function: int gsl_fft_complex_radix2_forward (gsl_complex_packed_array data, size_t stride, size_t n)
 -- Function: int gsl_fft_complex_radix2_transform (gsl_complex_packed_array data, size_t stride, size_t n, gsl_fft_direction sign)
 -- Function: int gsl_fft_complex_radix2_backward (gsl_complex_packed_array data, size_t stride, size_t n)
 -- Function: int gsl_fft_complex_radix2_inverse (gsl_complex_packed_array data, size_t stride, size_t n)

     These functions compute forward, backward and inverse FFTs of length *note n: 616. with stride *note stride: 616, on the packed complex array *note data: 616. using an in-place radix-2 decimation-in-time algorithm. The length of the transform *note n: 616. is restricted to powers of two. For the ‘transform’ version of the function the ‘sign’ argument can be either ‘forward’ (-1) or ‘backward’ (+1).

     The functions return a value of ‘GSL_SUCCESS’ if no errors were detected, or *note GSL_EDOM: 28. if the length of the data *note n: 616. is not a power of two.

 -- Function: int gsl_fft_complex_radix2_dif_forward (gsl_complex_packed_array data, size_t stride, size_t n)
 -- Function: int gsl_fft_complex_radix2_dif_transform (gsl_complex_packed_array data, size_t stride, size_t n, gsl_fft_direction sign)
 -- Function: int gsl_fft_complex_radix2_dif_backward (gsl_complex_packed_array data, size_t stride, size_t n)
 -- Function: int gsl_fft_complex_radix2_dif_inverse (gsl_complex_packed_array data, size_t stride, size_t n)

     These are decimation-in-frequency versions of the radix-2 FFT functions.

Here is an example program which computes the FFT of a short pulse in a sample of length 128. To make the resulting Fourier transform real the pulse is defined for equal positive and negative times (-10 \dots 10), where the negative times wrap around the end of the array.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_complex.h>

     #define REAL(z,i) ((z)[2*(i)])
     #define IMAG(z,i) ((z)[2*(i)+1])

     int
     main (void)
     {
       int i;
       double data[2*128];

       for (i = 0; i < 128; i++)
         {
           REAL(data,i) = 0.0;
           IMAG(data,i) = 0.0;
         }

       REAL(data,0) = 1.0;

       for (i = 1; i <= 10; i++)
         {
           REAL(data,i) = REAL(data,128-i) = 1.0;
         }

       for (i = 0; i < 128; i++)
         {
           printf ("%d %e %e\n", i, REAL(data,i), IMAG(data,i));
         }
       printf ("\n\n");

       gsl_fft_complex_radix2_forward (data, 1, 128);

       for (i = 0; i < 128; i++)
         {
           printf ("%d %e %e\n", i,
                   REAL(data,i)/sqrt(128),
                   IMAG(data,i)/sqrt(128));
         }

       return 0;
     }

Note that we have assumed that the program is using the default error handler (which calls ‘abort()’ for any errors). If you are not using a safe error handler you would need to check the return status of *note gsl_fft_complex_radix2_forward(): 35.

The transformed data is rescaled by 1/\sqrt n so that it fits on the same plot as the input. Only the real part is shown; by the choice of the input data the imaginary part is zero. Allowing for the wrap-around of negative times at t=128, and working in units of k/n, the DFT approximates the continuum Fourier transform, giving a modulated sine function.

     \int_{-a}^{+a} e^{-2 \pi i k x} dx = {\sin(2\pi k a) \over \pi k}

The output of the example program is shown in the figure below. [Figure gsl-ref-figures/fft-complex-radix2: A pulse and its discrete Fourier transform, output from the example program.]
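To make the status-checking advice above concrete, here is a minimal sketch (not part of the original manual) of how the forward call in the example could be checked explicitly when the default error handler is not in use; it assumes the same packed array ‘data’ as in the example:

     /* Sketch only: explicit status check, for use when the default
        error handler has been disabled.  Assumes the packed array
        "data" from the example above. */
     int status = gsl_fft_complex_radix2_forward (data, 1, 128);

     if (status != GSL_SUCCESS)
       {
         /* GSL_EDOM is returned if the length is not a power of two */
         fprintf (stderr, "FFT failed: %s\n", gsl_strerror (status));
         return status;
       }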
File: gsl-ref.info, Node: Mixed-radix FFT routines for complex data, Next: Overview of real data FFTs, Prev: Radix-2 FFT routines for complex data, Up: Fast Fourier Transforms FFTs 16.4 Mixed-radix FFT routines for complex data ============================================== This section describes mixed-radix FFT algorithms for complex data. The mixed-radix functions work for FFTs of any length. They are a reimplementation of Paul Swarztrauber’s Fortran FFTPACK library. The theory is explained in the review article “Self-sorting Mixed-radix FFTs” by Clive Temperton. The routines here use the same indexing scheme and basic algorithms as FFTPACK. The mixed-radix algorithm is based on sub-transform modules—highly optimized small length FFTs which are combined to create larger FFTs. There are efficient modules for factors of 2, 3, 4, 5, 6 and 7. The modules for the composite factors of 4 and 6 are faster than combining the modules for 2*2 and 2*3. For factors which are not implemented as modules there is a fall-back to a general length-n module which uses Singleton’s method for efficiently computing a DFT. This module is O(n^2), and slower than a dedicated module would be but works for any length n. Of course, lengths which use the general length-n module will still be factorized as much as possible. For example, a length of 143 will be factorized into 11*13. Large prime factors are the worst case scenario, e.g. as found in n=2*3*99991, and should be avoided because their O(n^2) scaling will dominate the run-time (consult the document “GSL FFT Algorithms” included in the GSL distribution if you encounter this problem). The mixed-radix initialization function *note gsl_fft_complex_wavetable_alloc(): 61d. returns the list of factors chosen by the library for a given length n. It can be used to check how well the length has been factorized, and estimate the run-time. To a first approximation the run-time scales as n \sum f_i, where the f_i are the factors of n. For programs under user control you may wish to issue a warning that the transform will be slow when the length is poorly factorized. If you frequently encounter data lengths which cannot be factorized using the existing small-prime modules consult “GSL FFT Algorithms” for details on adding support for other factors. All the functions described in this section are declared in the header file ‘gsl_fft_complex.h’. -- Function: *note gsl_fft_complex_wavetable: 61e. *gsl_fft_complex_wavetable_alloc (size_t n) This function prepares a trigonometric lookup table for a complex FFT of length *note n: 61d. The function returns a pointer to the newly allocated *note gsl_fft_complex_wavetable: 61e. if no errors were detected, and a null pointer in the case of error. The length *note n: 61d. is factorized into a product of subtransforms, and the factors and their trigonometric coefficients are stored in the wavetable. The trigonometric coefficients are computed using direct calls to ‘sin’ and ‘cos’, for accuracy. Recursion relations could be used to compute the lookup table faster, but if an application performs many FFTs of the same length then this computation is a one-off overhead which does not affect the final throughput. The wavetable structure can be used repeatedly for any transform of the same length. The table is not modified by calls to any of the other FFT functions. The same wavetable can be used for both forward and backward (or inverse) transforms of a given length. 
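As a small illustration (not part of the reference text) of using the factorization information mentioned above, the ‘nf’ and ‘factor’ fields of the wavetable (documented below) can be inspected right after allocation to see how a given length will be decomposed:

     /* Sketch: inspect the factorization chosen for a length n before
        committing to a transform.  Assumes <stdio.h> and
        <gsl/gsl_fft_complex.h> are included; n = 630 is just an example. */
     size_t i, n = 630;

     gsl_fft_complex_wavetable * wt = gsl_fft_complex_wavetable_alloc (n);

     if (wt != NULL)
       {
         for (i = 0; i < wt->nf; i++)
           printf ("factor %zu: %zu\n", i, wt->factor[i]);

         gsl_fft_complex_wavetable_free (wt);
       }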
-- Function: void gsl_fft_complex_wavetable_free (gsl_fft_complex_wavetable *wavetable) This function frees the memory associated with the wavetable *note wavetable: 61f. The wavetable can be freed if no further FFTs of the same length will be needed. These functions operate on a *note gsl_fft_complex_wavetable: 61e. structure which contains internal parameters for the FFT. It is not necessary to set any of the components directly but it can sometimes be useful to examine them. For example, the chosen factorization of the FFT length is given and can be used to provide an estimate of the run-time or numerical error. The wavetable structure is declared in the header file ‘gsl_fft_complex.h’. -- Type: gsl_fft_complex_wavetable This is a structure that holds the factorization and trigonometric lookup tables for the mixed radix fft algorithm. It has the following components: ‘size_t n’ This is the number of complex data points ‘size_t nf’ This is the number of factors that the length ‘n’ was decomposed into. ‘size_t factor[64]’ This is the array of factors. Only the first ‘nf’ elements are used. ‘gsl_complex * trig’ This is a pointer to a preallocated trigonometric lookup table of ‘n’ complex elements. ‘gsl_complex * twiddle[64]’ This is an array of pointers into ‘trig’, giving the twiddle factors for each pass. -- Type: gsl_fft_complex_workspace The mixed radix algorithms require additional working space to hold the intermediate steps of the transform. -- Function: *note gsl_fft_complex_workspace: 620. *gsl_fft_complex_workspace_alloc (size_t n) This function allocates a workspace for a complex transform of length *note n: 621. -- Function: void gsl_fft_complex_workspace_free (gsl_fft_complex_workspace *workspace) This function frees the memory associated with the workspace *note workspace: 622. The workspace can be freed if no further FFTs of the same length will be needed. The following functions compute the transform, -- Function: int gsl_fft_complex_forward (gsl_complex_packed_array data, size_t stride, size_t n, const gsl_fft_complex_wavetable *wavetable, gsl_fft_complex_workspace *work) -- Function: int gsl_fft_complex_transform (gsl_complex_packed_array data, size_t stride, size_t n, const gsl_fft_complex_wavetable *wavetable, gsl_fft_complex_workspace *work, gsl_fft_direction sign) -- Function: int gsl_fft_complex_backward (gsl_complex_packed_array data, size_t stride, size_t n, const gsl_fft_complex_wavetable *wavetable, gsl_fft_complex_workspace *work) -- Function: int gsl_fft_complex_inverse (gsl_complex_packed_array data, size_t stride, size_t n, const gsl_fft_complex_wavetable *wavetable, gsl_fft_complex_workspace *work) These functions compute forward, backward and inverse FFTs of length *note n: 611. with stride *note stride: 611, on the packed complex array *note data: 611, using a mixed radix decimation-in-frequency algorithm. There is no restriction on the length *note n: 611. Efficient modules are provided for subtransforms of length 2, 3, 4, 5, 6 and 7. Any remaining factors are computed with a slow, O(n^2), general-n module. The caller must supply a *note wavetable: 611. containing the trigonometric lookup tables and a workspace *note work: 611. For the ‘transform’ version of the function the ‘sign’ argument can be either ‘forward’ (-1) or ‘backward’ (+1). The functions return a value of ‘0’ if no errors were detected. The following ‘gsl_errno’ conditions are defined for these functions: *note GSL_EDOM: 28. The length of the data *note n: 611. 
is not a positive integer (i.e. *note n: 611. is zero). *note GSL_EINVAL: 2b. The length of the data *note n: 611. and the length used to compute the given *note wavetable: 611. do not match.

Here is an example program which computes the FFT of a short pulse in a sample of length 630 (=2*3*3*5*7) using the mixed-radix algorithm.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_complex.h>

     #define REAL(z,i) ((z)[2*(i)])
     #define IMAG(z,i) ((z)[2*(i)+1])

     int
     main (void)
     {
       int i;
       const int n = 630;
       double data[2*n];

       gsl_fft_complex_wavetable * wavetable;
       gsl_fft_complex_workspace * workspace;

       for (i = 0; i < n; i++)
         {
           REAL(data,i) = 0.0;
           IMAG(data,i) = 0.0;
         }

       data[0] = 1.0;

       for (i = 1; i <= 10; i++)
         {
           REAL(data,i) = REAL(data,n-i) = 1.0;
         }

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e %e\n", i, REAL(data,i), IMAG(data,i));
         }
       printf ("\n");

       wavetable = gsl_fft_complex_wavetable_alloc (n);
       workspace = gsl_fft_complex_workspace_alloc (n);

       for (i = 0; i < (int) wavetable->nf; i++)
         {
           printf ("# factor %d: %zu\n", i, wavetable->factor[i]);
         }

       gsl_fft_complex_forward (data, 1, n, wavetable, workspace);

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e %e\n", i, REAL(data,i), IMAG(data,i));
         }

       gsl_fft_complex_wavetable_free (wavetable);
       gsl_fft_complex_workspace_free (workspace);

       return 0;
     }

Note that we have assumed that the program is using the default ‘gsl’ error handler (which calls ‘abort()’ for any errors). If you are not using a safe error handler you would need to check the return status of all the ‘gsl’ routines.

File: gsl-ref.info, Node: Overview of real data FFTs, Next: Radix-2 FFT routines for real data, Prev: Mixed-radix FFT routines for complex data, Up: Fast Fourier Transforms FFTs

16.5 Overview of real data FFTs
===============================

The functions for real data are similar to those for complex data. However, there is an important difference between forward and inverse transforms. The Fourier transform of a real sequence is not real. It is a complex sequence with a special symmetry:

     z_k = z_{n-k}^*

A sequence with this symmetry is called `conjugate-complex' or `half-complex'. This different structure requires different storage layouts for the forward transform (from real to half-complex) and inverse transform (from half-complex back to real). As a consequence the routines are divided into two sets: functions in ‘gsl_fft_real’ which operate on real sequences and functions in ‘gsl_fft_halfcomplex’ which operate on half-complex sequences.

Functions in ‘gsl_fft_real’ compute the frequency coefficients of a real sequence. The half-complex coefficients c of a real sequence x are given by Fourier analysis,

     c_k = \sum_{j=0}^{n-1} x_j \exp(-2 \pi i j k /n)

Functions in ‘gsl_fft_halfcomplex’ compute inverse or backwards transforms. They reconstruct real sequences by Fourier synthesis from their half-complex frequency coefficients, c,

     x_j = {1 \over n} \sum_{k=0}^{n-1} c_k \exp(2 \pi i j k /n)

The symmetry of the half-complex sequence implies that only half of the complex numbers in the output need to be stored. The remaining half can be reconstructed using the half-complex symmetry condition. This works for all lengths, even and odd—when the length is even the middle value where k=n/2 is also real. Thus only ‘n’ real numbers are required to store the half-complex sequence, and the transform of a real sequence can be stored in the same size array as the original data.

The precise storage arrangements depend on the algorithm, and are different for radix-2 and mixed-radix routines.
The radix-2 function operates in-place, which constrains the locations where each element can be stored. The restriction forces real and imaginary parts to be stored far apart. The mixed-radix algorithm does not have this restriction, and it stores the real and imaginary parts of a given term in neighboring locations (which is desirable for better locality of memory accesses).

File: gsl-ref.info, Node: Radix-2 FFT routines for real data, Next: Mixed-radix FFT routines for real data, Prev: Overview of real data FFTs, Up: Fast Fourier Transforms FFTs

16.6 Radix-2 FFT routines for real data
=======================================

This section describes radix-2 FFT algorithms for real data. They use the Cooley-Tukey algorithm to compute in-place FFTs for lengths which are a power of 2.

The radix-2 FFT functions for real data are declared in the header file ‘gsl_fft_real.h’.

 -- Function: int gsl_fft_real_radix2_transform (double data[], size_t stride, size_t n) This function computes an in-place radix-2 FFT of length *note n: 627. and stride *note stride: 627. on the real array *note data: 627. The output is a half-complex sequence, which is stored in-place. The arrangement of the half-complex terms uses the following scheme: for k < n/2 the real part of the k-th term is stored in location k, and the corresponding imaginary part is stored in location n-k. Terms with k > n/2 can be reconstructed using the symmetry z_k = z^*_{n-k}. The terms for k=0 and k=n/2 are both purely real, and count as a special case. Their real parts are stored in locations 0 and n/2 respectively, while their imaginary parts which are zero are not stored.

     The following table shows the correspondence between the output *note data: 627. and the equivalent results obtained by considering the input data as a complex sequence with zero imaginary part (assuming *note stride: 627. = 1):

          complex[0].real    =    data[0]
          complex[0].imag    =    0
          complex[1].real    =    data[1]
          complex[1].imag    =    data[n-1]
          ...............         ................
          complex[k].real    =    data[k]
          complex[k].imag    =    data[n-k]
          ...............         ................
          complex[n/2].real  =    data[n/2]
          complex[n/2].imag  =    0
          ...............         ................
          complex[k'].real   =    data[k]        k' = n - k
          complex[k'].imag   =   -data[n-k]
          ...............         ................
          complex[n-1].real  =    data[1]
          complex[n-1].imag  =   -data[n-1]

     Note that the output data can be converted into the full complex sequence using the function *note gsl_fft_halfcomplex_radix2_unpack(): 628. described below.

The radix-2 FFT functions for halfcomplex data are declared in the header file ‘gsl_fft_halfcomplex.h’.

 -- Function: int gsl_fft_halfcomplex_radix2_inverse (double data[], size_t stride, size_t n)
 -- Function: int gsl_fft_halfcomplex_radix2_backward (double data[], size_t stride, size_t n)

     These functions compute the inverse or backwards in-place radix-2 FFT of length *note n: 62a. and stride *note stride: 62a. on the half-complex sequence *note data: 62a. stored according to the output scheme used by ‘gsl_fft_real_radix2_transform()’. The result is a real array stored in natural order.

 -- Function: int gsl_fft_halfcomplex_radix2_unpack (const double halfcomplex_coefficient[], gsl_complex_packed_array complex_coefficient, size_t stride, size_t n) This function converts *note halfcomplex_coefficient: 628, an array of half-complex coefficients as returned by *note gsl_fft_real_radix2_transform(): 627, into an ordinary complex array, *note complex_coefficient: 628.
It fills in the complex array using the symmetry z_k = z_{n-k}^* to reconstruct the redundant elements. The algorithm for the conversion is: complex_coefficient[0].real = halfcomplex_coefficient[0]; complex_coefficient[0].imag = 0.0; for (i = 1; i < n - i; i++) { double hc_real = halfcomplex_coefficient[i*stride]; double hc_imag = halfcomplex_coefficient[(n-i)*stride]; complex_coefficient[i*stride].real = hc_real; complex_coefficient[i*stride].imag = hc_imag; complex_coefficient[(n - i)*stride].real = hc_real; complex_coefficient[(n - i)*stride].imag = -hc_imag; } if (i == n - i) { complex_coefficient[i*stride].real = halfcomplex_coefficient[(n - 1)*stride]; complex_coefficient[i*stride].imag = 0.0; }  File: gsl-ref.info, Node: Mixed-radix FFT routines for real data, Next: References and Further Reading<11>, Prev: Radix-2 FFT routines for real data, Up: Fast Fourier Transforms FFTs 16.7 Mixed-radix FFT routines for real data =========================================== This section describes mixed-radix FFT algorithms for real data. The mixed-radix functions work for FFTs of any length. They are a reimplementation of the real-FFT routines in the Fortran FFTPACK library by Paul Swarztrauber. The theory behind the algorithm is explained in the article “Fast Mixed-Radix Real Fourier Transforms” by Clive Temperton. The routines here use the same indexing scheme and basic algorithms as FFTPACK. The functions use the FFTPACK storage convention for half-complex sequences. In this convention the half-complex transform of a real sequence is stored with frequencies in increasing order, starting at zero, with the real and imaginary parts of each frequency in neighboring locations. When a value is known to be real the imaginary part is not stored. The imaginary part of the zero-frequency component is never stored. It is known to be zero (since the zero frequency component is simply the sum of the input data (all real)). For a sequence of even length the imaginary part of the frequency n/2 is not stored either, since the symmetry z_k = z_{n-k}^* implies that this is purely real too. The storage scheme is best shown by some examples. The table below shows the output for an odd-length sequence, n=5. The two columns give the correspondence between the 5 values in the half-complex sequence returned by *note gsl_fft_real_transform(): 62c, ‘halfcomplex[]’ and the values ‘complex[]’ that would be returned if the same real input sequence were passed to *note gsl_fft_complex_backward(): 624. as a complex sequence (with imaginary parts set to ‘0’): complex[0].real = halfcomplex[0] complex[0].imag = 0 complex[1].real = halfcomplex[1] complex[1].imag = halfcomplex[2] complex[2].real = halfcomplex[3] complex[2].imag = halfcomplex[4] complex[3].real = halfcomplex[3] complex[3].imag = -halfcomplex[4] complex[4].real = halfcomplex[1] complex[4].imag = -halfcomplex[2] The upper elements of the ‘complex’ array, ‘complex[3]’ and ‘complex[4]’ are filled in using the symmetry condition. The imaginary part of the zero-frequency term ‘complex[0].imag’ is known to be zero by the symmetry. The next table shows the output for an even-length sequence, n=6. 
In the even case there are two values which are purely real: complex[0].real = halfcomplex[0] complex[0].imag = 0 complex[1].real = halfcomplex[1] complex[1].imag = halfcomplex[2] complex[2].real = halfcomplex[3] complex[2].imag = halfcomplex[4] complex[3].real = halfcomplex[5] complex[3].imag = 0 complex[4].real = halfcomplex[3] complex[4].imag = -halfcomplex[4] complex[5].real = halfcomplex[1] complex[5].imag = -halfcomplex[2] The upper elements of the ‘complex’ array, ‘complex[4]’ and ‘complex[5]’ are filled in using the symmetry condition. Both ‘complex[0].imag’ and ‘complex[3].imag’ are known to be zero. All these functions are declared in the header files ‘gsl_fft_real.h’ and ‘gsl_fft_halfcomplex.h’. -- Type: gsl_fft_real_wavetable -- Type: gsl_fft_halfcomplex_wavetable These data structures contain lookup tables for an FFT of a fixed size. -- Function: *note gsl_fft_real_wavetable: 62d. *gsl_fft_real_wavetable_alloc (size_t n) -- Function: *note gsl_fft_halfcomplex_wavetable: 62e. *gsl_fft_halfcomplex_wavetable_alloc (size_t n) These functions prepare trigonometric lookup tables for an FFT of size n real elements. The functions return a pointer to the newly allocated struct if no errors were detected, and a null pointer in the case of error. The length *note n: 630. is factorized into a product of subtransforms, and the factors and their trigonometric coefficients are stored in the wavetable. The trigonometric coefficients are computed using direct calls to ‘sin’ and ‘cos’, for accuracy. Recursion relations could be used to compute the lookup table faster, but if an application performs many FFTs of the same length then computing the wavetable is a one-off overhead which does not affect the final throughput. The wavetable structure can be used repeatedly for any transform of the same length. The table is not modified by calls to any of the other FFT functions. The appropriate type of wavetable must be used for forward real or inverse half-complex transforms. -- Function: void gsl_fft_real_wavetable_free (gsl_fft_real_wavetable *wavetable) -- Function: void gsl_fft_halfcomplex_wavetable_free (gsl_fft_halfcomplex_wavetable *wavetable) These functions free the memory associated with the wavetable *note wavetable: 632. The wavetable can be freed if no further FFTs of the same length will be needed. The mixed radix algorithms require additional working space to hold the intermediate steps of the transform, -- Type: gsl_fft_real_workspace This workspace contains parameters needed to compute a real FFT. -- Function: *note gsl_fft_real_workspace: 633. *gsl_fft_real_workspace_alloc (size_t n) This function allocates a workspace for a real transform of length *note n: 634. The same workspace can be used for both forward real and inverse halfcomplex transforms. -- Function: void gsl_fft_real_workspace_free (gsl_fft_real_workspace *workspace) This function frees the memory associated with the workspace *note workspace: 635. The workspace can be freed if no further FFTs of the same length will be needed. 
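Before the transform functions below, the FFTPACK-style ordering shown in the tables above can be summarized by a short sketch (not part of the original text); it assumes ‘halfcomplex’ is an array of length n produced by ‘gsl_fft_real_transform()’ with stride 1, and ‘k’ an index with 0 <= k < n/2:

     /* Sketch: read the k-th frequency coefficient c_k from the
        half-complex storage described above (stride 1). */
     double re, im;

     if (k == 0)
       {
         re = halfcomplex[0];     /* zero-frequency term, purely real */
         im = 0.0;
       }
     else
       {
         re = halfcomplex[2*k - 1];
         im = halfcomplex[2*k];
       }

     /* for even n the term k = n/2 is also purely real and its real
        part is stored in halfcomplex[n - 1] */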
The following functions compute the transforms of real and half-complex data, -- Function: int gsl_fft_real_transform (double data[], size_t stride, size_t n, const gsl_fft_real_wavetable *wavetable, gsl_fft_real_workspace *work) -- Function: int gsl_fft_halfcomplex_transform (double data[], size_t stride, size_t n, const gsl_fft_halfcomplex_wavetable *wavetable, gsl_fft_real_workspace *work) These functions compute the FFT of *note data: 636, a real or half-complex array of length *note n: 636, using a mixed radix decimation-in-frequency algorithm. For *note gsl_fft_real_transform(): 62c. *note data: 636. is an array of time-ordered real data. For *note gsl_fft_halfcomplex_transform(): 636. *note data: 636. contains Fourier coefficients in the half-complex ordering described above. There is no restriction on the length *note n: 636. Efficient modules are provided for subtransforms of length 2, 3, 4 and 5. Any remaining factors are computed with a slow, O(n^2), general-n module. The caller must supply a *note wavetable: 636. containing trigonometric lookup tables and a workspace *note work: 636. -- Function: int gsl_fft_real_unpack (const double real_coefficient[], gsl_complex_packed_array complex_coefficient, size_t stride, size_t n) This function converts a single real array, *note real_coefficient: 637. into an equivalent complex array, *note complex_coefficient: 637, (with imaginary part set to zero), suitable for ‘gsl_fft_complex’ routines. The algorithm for the conversion is simply: for (i = 0; i < n; i++) { complex_coefficient[i*stride].real = real_coefficient[i*stride]; complex_coefficient[i*stride].imag = 0.0; } -- Function: int gsl_fft_halfcomplex_unpack (const double halfcomplex_coefficient[], gsl_complex_packed_array complex_coefficient, size_t stride, size_t n) This function converts *note halfcomplex_coefficient: 638, an array of half-complex coefficients as returned by *note gsl_fft_real_transform(): 62c, into an ordinary complex array, *note complex_coefficient: 638. It fills in the complex array using the symmetry z_k = z_{n-k}^* to reconstruct the redundant elements. The algorithm for the conversion is: complex_coefficient[0].real = halfcomplex_coefficient[0]; complex_coefficient[0].imag = 0.0; for (i = 1; i < n - i; i++) { double hc_real = halfcomplex_coefficient[(2 * i - 1)*stride]; double hc_imag = halfcomplex_coefficient[(2 * i)*stride]; complex_coefficient[i*stride].real = hc_real; complex_coefficient[i*stride].imag = hc_imag; complex_coefficient[(n - i)*stride].real = hc_real; complex_coefficient[(n - i)*stride].imag = -hc_imag; } if (i == n - i) { complex_coefficient[i*stride].real = halfcomplex_coefficient[(n - 1)*stride]; complex_coefficient[i*stride].imag = 0.0; } Here is an example program using *note gsl_fft_real_transform(): 62c. and ‘gsl_fft_halfcomplex_inverse()’. It generates a real signal in the shape of a square pulse. The pulse is Fourier transformed to frequency space, and all but the lowest ten frequency components are removed from the array of Fourier coefficients returned by *note gsl_fft_real_transform(): 62c. The remaining Fourier coefficients are transformed back to the time-domain, to give a filtered version of the square pulse. Since Fourier coefficients are stored using the half-complex symmetry both positive and negative frequencies are removed and the final filtered signal is also real. 
     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_fft_real.h>
     #include <gsl/gsl_fft_halfcomplex.h>

     int
     main (void)
     {
       int i, n = 100;
       double data[n];

       gsl_fft_real_wavetable * real;
       gsl_fft_halfcomplex_wavetable * hc;
       gsl_fft_real_workspace * work;

       for (i = 0; i < n; i++)
         {
           data[i] = 0.0;
         }

       for (i = n / 3; i < 2 * n / 3; i++)
         {
           data[i] = 1.0;
         }

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e\n", i, data[i]);
         }
       printf ("\n");

       work = gsl_fft_real_workspace_alloc (n);
       real = gsl_fft_real_wavetable_alloc (n);

       gsl_fft_real_transform (data, 1, n, real, work);

       gsl_fft_real_wavetable_free (real);

       for (i = 11; i < n; i++)
         {
           data[i] = 0;
         }

       hc = gsl_fft_halfcomplex_wavetable_alloc (n);

       gsl_fft_halfcomplex_inverse (data, 1, n, hc, work);
       gsl_fft_halfcomplex_wavetable_free (hc);

       for (i = 0; i < n; i++)
         {
           printf ("%d: %e\n", i, data[i]);
         }

       gsl_fft_real_workspace_free (work);

       return 0;
     }

The program output is shown in the figure below. [Figure gsl-ref-figures/fft-real-mixedradix: Low-pass filtered version of a real pulse, output from the example program.]

File: gsl-ref.info, Node: References and Further Reading<11>, Prev: Mixed-radix FFT routines for real data, Up: Fast Fourier Transforms FFTs

16.8 References and Further Reading
===================================

A good starting point for learning more about the FFT is the following review article,

   * P. Duhamel and M. Vetterli. Fast Fourier transforms: A tutorial review and a state of the art. Signal Processing, 19:259–299, 1990.

To find out about the algorithms used in the GSL routines you may want to consult the document “GSL FFT Algorithms” (it is included in GSL, as ‘doc/fftalgorithms.tex’). This has general information on FFTs and explicit derivations of the implementation for each routine. There are also references to the relevant literature. For convenience some of the more important references are reproduced below.

There are several introductory books on the FFT with example programs, such as “The Fast Fourier Transform” by Brigham and “DFT/FFT and Convolution Algorithms” by Burrus and Parks,

   * E. Oran Brigham. “The Fast Fourier Transform”. Prentice Hall, 1974.

   * C. S. Burrus and T. W. Parks. “DFT/FFT and Convolution Algorithms”, Wiley, 1984.

Both these introductory books cover the radix-2 FFT in some detail. The mixed-radix algorithm at the heart of the FFTPACK routines is reviewed in Clive Temperton’s paper,

   * Clive Temperton. Self-sorting mixed-radix fast Fourier transforms. Journal of Computational Physics, 52(1):1–23, 1983.

The derivation of FFTs for real-valued data is explained in the following two articles,

   * Henrik V. Sorensen, Douglas L. Jones, Michael T. Heideman, and C. Sidney Burrus. Real-valued fast Fourier transform algorithms. “IEEE Transactions on Acoustics, Speech, and Signal Processing”, ASSP-35(6):849–863, 1987.

   * Clive Temperton. Fast mixed-radix real Fourier transforms. “Journal of Computational Physics”, 52:340–350, 1983.

In 1979 the IEEE published a compendium of carefully-reviewed Fortran FFT programs in “Programs for Digital Signal Processing”. It is a useful reference for implementations of many different FFT algorithms,

   * Digital Signal Processing Committee and IEEE Acoustics, Speech, and Signal Processing Committee, editors. Programs for Digital Signal Processing. IEEE Press, 1979.

For large-scale FFT work we recommend the use of the dedicated FFTW library by Frigo and Johnson. The FFTW library is self-optimizing—it automatically tunes itself for each hardware platform in order to achieve maximum performance. It is available under the GNU GPL.
* FFTW Website, ‘http://www.fftw.org/’ The source code for FFTPACK is available from ‘http://www.netlib.org/fftpack/’  File: gsl-ref.info, Node: Numerical Integration, Next: Random Number Generation, Prev: Fast Fourier Transforms FFTs, Up: Top 17 Numerical Integration ************************ This chapter describes routines for performing numerical integration (quadrature) of a function in one dimension. There are routines for adaptive and non-adaptive integration of general functions, with specialised routines for specific cases. These include integration over infinite and semi-infinite ranges, singular integrals, including logarithmic singularities, computation of Cauchy principal values and oscillatory integrals. The library reimplements the algorithms used in QUADPACK, a numerical integration package written by Piessens, de Doncker-Kapenga, Ueberhuber and Kahaner. Fortran code for QUADPACK is available on Netlib. Also included are non-adaptive, fixed-order Gauss-Legendre integration routines with high precision coefficients, as well as fixed-order quadrature rules for a variety of weighting functions from IQPACK. The functions described in this chapter are declared in the header file ‘gsl_integration.h’. * Menu: * Introduction: Introduction<2>. * QNG non-adaptive Gauss-Kronrod integration:: * QAG adaptive integration:: * QAGS adaptive integration with singularities:: * QAGP adaptive integration with known singular points:: * QAGI adaptive integration on infinite intervals:: * QAWC adaptive integration for Cauchy principal values:: * QAWS adaptive integration for singular functions:: * QAWO adaptive integration for oscillatory functions:: * QAWF adaptive integration for Fourier integrals:: * CQUAD doubly-adaptive integration:: * Romberg integration:: * Gauss-Legendre integration:: * Fixed point quadratures:: * Error codes:: * Examples: Examples<11>. * References and Further Reading: References and Further Reading<12>.  File: gsl-ref.info, Node: Introduction<2>, Next: QNG non-adaptive Gauss-Kronrod integration, Up: Numerical Integration 17.1 Introduction ================= Each algorithm computes an approximation to a definite integral of the form, I = \int_a^b f(x) w(x) dx where w(x) is a weight function (for general integrands w(x) = 1). The user provides absolute and relative error bounds (epsabs, epsrel) which specify the following accuracy requirement, |RESULT - I| <= max(epsabs, epsrel |I|) where RESULT is the numerical approximation obtained by the algorithm. The algorithms attempt to estimate the absolute error ABSERR = |RESULT - I| in such a way that the following inequality holds, |RESULT - I| <= ABSERR <= max(epsabs, epsrel |I|) In short, the routines return the first approximation which has an absolute error smaller than epsabs or a relative error smaller than epsrel. Note that this is an `either-or' constraint, not simultaneous. To compute to a specified absolute error, set epsrel to zero. To compute to a specified relative error, set epsabs to zero. The routines will fail to converge if the error bounds are too stringent, but always return the best approximation obtained up to that stage. 
The algorithms in QUADPACK use a naming convention based on the following letters: Q - quadrature routine N - non-adaptive integrator A - adaptive integrator G - general integrand (user-defined) W - weight function with integrand S - singularities can be more readily integrated P - points of special difficulty can be supplied I - infinite range of integration O - oscillatory weight function, cos or sin F - Fourier integral C - Cauchy principal value The algorithms are built on pairs of quadrature rules, a higher order rule and a lower order rule. The higher order rule is used to compute the best approximation to an integral over a small range. The difference between the results of the higher order rule and the lower order rule gives an estimate of the error in the approximation. * Menu: * Integrands without weight functions:: * Integrands with weight functions:: * Integrands with singular weight functions::  File: gsl-ref.info, Node: Integrands without weight functions, Next: Integrands with weight functions, Up: Introduction<2> 17.1.1 Integrands without weight functions ------------------------------------------ The algorithms for general functions (without a weight function) are based on Gauss-Kronrod rules. A Gauss-Kronrod rule begins with a classical Gaussian quadrature rule of order m. This is extended with additional points between each of the abscissae to give a higher order Kronrod rule of order 2m + 1. The Kronrod rule is efficient because it reuses existing function evaluations from the Gaussian rule. The higher order Kronrod rule is used as the best approximation to the integral, and the difference between the two rules is used as an estimate of the error in the approximation.  File: gsl-ref.info, Node: Integrands with weight functions, Next: Integrands with singular weight functions, Prev: Integrands without weight functions, Up: Introduction<2> 17.1.2 Integrands with weight functions --------------------------------------- For integrands with weight functions the algorithms use Clenshaw-Curtis quadrature rules. A Clenshaw-Curtis rule begins with an n-th order Chebyshev polynomial approximation to the integrand. This polynomial can be integrated exactly to give an approximation to the integral of the original function. The Chebyshev expansion can be extended to higher orders to improve the approximation and provide an estimate of the error.  File: gsl-ref.info, Node: Integrands with singular weight functions, Prev: Integrands with weight functions, Up: Introduction<2> 17.1.3 Integrands with singular weight functions ------------------------------------------------ The presence of singularities (or other behavior) in the integrand can cause slow convergence in the Chebyshev approximation. The modified Clenshaw-Curtis rules used in QUADPACK separate out several common weight functions which cause slow convergence. These weight functions are integrated analytically against the Chebyshev polynomials to precompute `modified Chebyshev moments'. Combining the moments with the Chebyshev approximation to the function gives the desired integral. The use of analytic integration for the singular part of the function allows exact cancellations and substantially improves the overall convergence behavior of the integration.  
File: gsl-ref.info, Node: QNG non-adaptive Gauss-Kronrod integration, Next: QAG adaptive integration, Prev: Introduction<2>, Up: Numerical Integration 17.2 QNG non-adaptive Gauss-Kronrod integration =============================================== The QNG algorithm is a non-adaptive procedure which uses fixed Gauss-Kronrod-Patterson abscissae to sample the integrand at a maximum of 87 points. It is provided for fast integration of smooth functions. -- Function: int gsl_integration_qng (const gsl_function *f, double a, double b, double epsabs, double epsrel, double *result, double *abserr, size_t *neval) This function applies the Gauss-Kronrod 10-point, 21-point, 43-point and 87-point integration rules in succession until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, *note epsabs: 642. and *note epsrel: 642. The function returns the final approximation, *note result: 642, an estimate of the absolute error, *note abserr: 642. and the number of function evaluations used, *note neval: 642. The Gauss-Kronrod rules are designed in such a way that each rule uses all the results of its predecessors, in order to minimize the total number of function evaluations.  File: gsl-ref.info, Node: QAG adaptive integration, Next: QAGS adaptive integration with singularities, Prev: QNG non-adaptive Gauss-Kronrod integration, Up: Numerical Integration 17.3 QAG adaptive integration ============================= The QAG algorithm is a simple adaptive integration procedure. The integration region is divided into subintervals, and on each iteration the subinterval with the largest estimated error is bisected. This reduces the overall error rapidly, as the subintervals become concentrated around local difficulties in the integrand. These subintervals are managed by the following struct, -- Type: gsl_integration_workspace This workspace handles the memory for the subinterval ranges, results and error estimates. -- Function: *note gsl_integration_workspace: 644. *gsl_integration_workspace_alloc (size_t n) This function allocates a workspace sufficient to hold *note n: 645. double precision intervals, their integration results and error estimates. One workspace may be used multiple times as all necessary reinitialization is performed automatically by the integration routines. -- Function: void gsl_integration_workspace_free (gsl_integration_workspace *w) This function frees the memory associated with the workspace *note w: 646. -- Function: int gsl_integration_qag (const gsl_function *f, double a, double b, double epsabs, double epsrel, size_t limit, int key, gsl_integration_workspace *workspace, double *result, double *abserr) This function applies an integration rule adaptively until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, *note epsabs: 647. and *note epsrel: 647. The function returns the final approximation, *note result: 647, and an estimate of the absolute error, *note abserr: 647. The integration rule is determined by the value of *note key: 647, which should be chosen from the following symbolic names, Symbolic Name Key --------------------------------------- ‘GSL_INTEG_GAUSS15’ 1 ‘GSL_INTEG_GAUSS21’ 2 ‘GSL_INTEG_GAUSS31’ 3 ‘GSL_INTEG_GAUSS41’ 4 ‘GSL_INTEG_GAUSS51’ 5 ‘GSL_INTEG_GAUSS61’ 6 corresponding to the 15, 21, 31, 41, 51 and 61 point Gauss-Kronrod rules. 
The higher-order rules give better accuracy for smooth functions, while lower-order rules save time when the function contains local difficulties, such as discontinuities. On each iteration the adaptive integration strategy bisects the interval with the largest error estimate. The subintervals and their results are stored in the memory provided by *note workspace: 647. The maximum number of subintervals is given by *note limit: 647, which may not exceed the allocated size of the workspace.  File: gsl-ref.info, Node: QAGS adaptive integration with singularities, Next: QAGP adaptive integration with known singular points, Prev: QAG adaptive integration, Up: Numerical Integration 17.4 QAGS adaptive integration with singularities ================================================= The presence of an integrable singularity in the integration region causes an adaptive routine to concentrate new subintervals around the singularity. As the subintervals decrease in size the successive approximations to the integral converge in a limiting fashion. This approach to the limit can be accelerated using an extrapolation procedure. The QAGS algorithm combines adaptive bisection with the Wynn epsilon-algorithm to speed up the integration of many types of integrable singularities. -- Function: int gsl_integration_qags (const gsl_function *f, double a, double b, double epsabs, double epsrel, size_t limit, gsl_integration_workspace *workspace, double *result, double *abserr) This function applies the Gauss-Kronrod 21-point integration rule adaptively until an estimate of the integral of f over (a,b) is achieved within the desired absolute and relative error limits, *note epsabs: 649. and *note epsrel: 649. The results are extrapolated using the epsilon-algorithm, which accelerates the convergence of the integral in the presence of discontinuities and integrable singularities. The function returns the final approximation from the extrapolation, *note result: 649, and an estimate of the absolute error, *note abserr: 649. The subintervals and their results are stored in the memory provided by *note workspace: 649. The maximum number of subintervals is given by *note limit: 649, which may not exceed the allocated size of the workspace.  File: gsl-ref.info, Node: QAGP adaptive integration with known singular points, Next: QAGI adaptive integration on infinite intervals, Prev: QAGS adaptive integration with singularities, Up: Numerical Integration 17.5 QAGP adaptive integration with known singular points ========================================================= -- Function: int gsl_integration_qagp (const gsl_function *f, double *pts, size_t npts, double epsabs, double epsrel, size_t limit, gsl_integration_workspace *workspace, double *result, double *abserr) This function applies the adaptive integration algorithm QAGS taking account of the user-supplied locations of singular points. The array *note pts: 64b. of length *note npts: 64b. should contain the endpoints of the integration ranges defined by the integration region and locations of the singularities. For example, to integrate over the region (a,b) with break-points at x_1, x_2, x_3 (where a < x_1 < x_2 < x_3 < b) the following *note pts: 64b. array should be used: pts[0] = a pts[1] = x_1 pts[2] = x_2 pts[3] = x_3 pts[4] = b with *note npts: 64b. = 5. If you know the locations of the singular points in the integration region then this routine will be faster than *note gsl_integration_qags(): 649.  
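As a brief sketch (not part of the original text) of setting up the ‘pts’ array described above, here for an integrand with a single interior singular point at x = 1 on the interval (0, 2); ‘F’ is assumed to be a ‘gsl_function’ already prepared by the caller:

     /* Sketch: QAGP with one known interior singularity at x = 1,
        integrating F over (0, 2).  F is assumed to be a gsl_function
        set up by the caller. */
     double result, abserr;
     double pts[3] = { 0.0, 1.0, 2.0 };   /* endpoints plus singular point */

     gsl_integration_workspace * w = gsl_integration_workspace_alloc (1000);

     gsl_integration_qagp (&F, pts, 3, 0.0, 1e-7, 1000, w,
                           &result, &abserr);

     gsl_integration_workspace_free (w);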
File: gsl-ref.info, Node: QAGI adaptive integration on infinite intervals, Next: QAWC adaptive integration for Cauchy principal values, Prev: QAGP adaptive integration with known singular points, Up: Numerical Integration 17.6 QAGI adaptive integration on infinite intervals ==================================================== -- Function: int gsl_integration_qagi (gsl_function *f, double epsabs, double epsrel, size_t limit, gsl_integration_workspace *workspace, double *result, double *abserr) This function computes the integral of the function *note f: 64d. over the infinite interval (-\infty,+\infty). The integral is mapped onto the semi-open interval (0,1] using the transformation x = (1-t)/t, \int_{-\infty}^{+\infty} dx f(x) = \int_0^1 dt (f((1-t)/t) + f(-(1-t)/t))/t^2. It is then integrated using the QAGS algorithm. The normal 21-point Gauss-Kronrod rule of QAGS is replaced by a 15-point rule, because the transformation can generate an integrable singularity at the origin. In this case a lower-order rule is more efficient. -- Function: int gsl_integration_qagiu (gsl_function *f, double a, double epsabs, double epsrel, size_t limit, gsl_integration_workspace *workspace, double *result, double *abserr) This function computes the integral of the function *note f: 64e. over the semi-infinite interval (a,+\infty). The integral is mapped onto the semi-open interval (0,1] using the transformation x = a + (1-t)/t, \int_{a}^{+\infty} dx f(x) = \int_0^1 dt f(a + (1-t)/t)/t^2 and then integrated using the QAGS algorithm. -- Function: int gsl_integration_qagil (gsl_function *f, double b, double epsabs, double epsrel, size_t limit, gsl_integration_workspace *workspace, double *result, double *abserr) This function computes the integral of the function *note f: 64f. over the semi-infinite interval (-\infty,b). The integral is mapped onto the semi-open interval (0,1] using the transformation x = b - (1-t)/t, \int_{-\infty}^{b} dx f(x) = \int_0^1 dt f(b - (1-t)/t)/t^2 and then integrated using the QAGS algorithm.  File: gsl-ref.info, Node: QAWC adaptive integration for Cauchy principal values, Next: QAWS adaptive integration for singular functions, Prev: QAGI adaptive integration on infinite intervals, Up: Numerical Integration 17.7 QAWC adaptive integration for Cauchy principal values ========================================================== -- Function: int gsl_integration_qawc (gsl_function *f, double a, double b, double c, double epsabs, double epsrel, size_t limit, gsl_integration_workspace *workspace, double *result, double *abserr) This function computes the Cauchy principal value of the integral of f over (a,b), with a singularity at *note c: 651, I = \int_a^b dx f(x) / (x - c) The adaptive bisection algorithm of QAG is used, with modifications to ensure that subdivisions do not occur at the singular point x = c. When a subinterval contains the point x = c or is close to it then a special 25-point modified Clenshaw-Curtis rule is used to control the singularity. Further away from the singularity the algorithm uses an ordinary 15-point Gauss-Kronrod integration rule.  
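To make the QAWC interface concrete, the following sketch (not taken from the manual's own examples) computes a Cauchy principal value with an illustrative setup: the numerator is f(x) = 1 and the pole is placed at c = 0, so the principal value of \int_{-1}^{2} dx / x equals \log 2. The tolerances and workspace size are arbitrary choices.

#include <stdio.h>
#include <math.h>
#include <gsl/gsl_integration.h>

/* numerator f(x); the weight 1/(x - c) is supplied by QAWC itself */
double f (double x, void * params)
{
  (void) x;
  (void) params;
  return 1.0;  /* illustrative choice */
}

int main (void)
{
  gsl_integration_workspace * w
    = gsl_integration_workspace_alloc (1000);

  gsl_function F;
  double result, abserr;

  F.function = &f;
  F.params = NULL;

  /* principal value of \int_{-1}^{2} dx / x, which equals log(2) */
  gsl_integration_qawc (&F, -1.0, 2.0, 0.0, 0.0, 1e-7, 1000,
                        w, &result, &abserr);

  printf ("result = %.12f\n", result);
  printf ("exact  = %.12f\n", log (2.0));

  gsl_integration_workspace_free (w);
  return 0;
}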
File: gsl-ref.info, Node: QAWS adaptive integration for singular functions, Next: QAWO adaptive integration for oscillatory functions, Prev: QAWC adaptive integration for Cauchy principal values, Up: Numerical Integration 17.8 QAWS adaptive integration for singular functions ===================================================== The QAWS algorithm is designed for integrands with algebraic-logarithmic singularities at the end-points of an integration region. In order to work efficiently the algorithm requires a precomputed table of Chebyshev moments. -- Type: gsl_integration_qaws_table This structure contains precomputed quantities for the QAWS algorithm. -- Function: *note gsl_integration_qaws_table: 653. *gsl_integration_qaws_table_alloc (double alpha, double beta, int mu, int nu) This function allocates space for a *note gsl_integration_qaws_table: 653. struct describing a singular weight function w(x) with the parameters (\alpha, \beta, \mu, \nu), w(x) = (x - a)^\alpha (b - x)^\beta \log^\mu (x - a) \log^\nu (b - x) where \alpha > -1, \beta > -1, and \mu = 0, 1, \nu = 0, 1. The weight function can take four different forms depending on the values of \mu and \nu, Weight function w(x) (\mu,\nu) --------------------------------------------------------------------------------------- (x - a)^\alpha (b - x)^\beta (0,0) (x - a)^\alpha (b - x)^\beta \log{(x-a)} (1,0) (x - a)^\alpha (b - x)^\beta \log{(b-x)} (0,1) (x - a)^\alpha (b - x)^\beta \log{(x-a)} \log{(b-x)} (1,1) The singular points (a,b) do not have to be specified until the integral is computed, where they are the endpoints of the integration range. The function returns a pointer to the newly allocated table *note gsl_integration_qaws_table: 653. if no errors were detected, and 0 in the case of error. -- Function: int gsl_integration_qaws_table_set (gsl_integration_qaws_table *t, double alpha, double beta, int mu, int nu) This function modifies the parameters (\alpha, \beta, \mu, \nu) of an existing *note gsl_integration_qaws_table: 653. struct *note t: 655. -- Function: void gsl_integration_qaws_table_free (gsl_integration_qaws_table *t) This function frees all the memory associated with the *note gsl_integration_qaws_table: 653. struct *note t: 656. -- Function: int gsl_integration_qaws (gsl_function *f, const double a, const double b, gsl_integration_qaws_table *t, const double epsabs, const double epsrel, const size_t limit, gsl_integration_workspace *workspace, double *result, double *abserr) This function computes the integral of the function f(x) over the interval (a,b) with the singular weight function (x-a)^\alpha (b-x)^\beta \log^\mu (x-a) \log^\nu (b-x). The parameters of the weight function (\alpha, \beta, \mu, \nu) are taken from the table *note t: 657. The integral is, I = \int_a^b dx f(x) (x - a)^\alpha (b - x)^\beta \log^\mu (x - a) \log^\nu (b - x). The adaptive bisection algorithm of QAG is used. When a subinterval contains one of the endpoints then a special 25-point modified Clenshaw-Curtis rule is used to control the singularities. For subintervals which do not include the endpoints an ordinary 15-point Gauss-Kronrod integration rule is used.  
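The following sketch shows one way the table and routine above might be combined; none of the parameter choices come from the original text. Setting \alpha = 0, \beta = 0, \mu = 1, \nu = 0 selects the weight w(x) = \log(x - a), and with the illustrative choice f(x) = 1 over (0,1) the exact answer is \int_0^1 \log(x) dx = -1.

#include <stdio.h>
#include <gsl/gsl_integration.h>

/* smooth part of the integrand; the singular weight is handled by QAWS */
double f (double x, void * params)
{
  (void) x;
  (void) params;
  return 1.0;  /* illustrative choice */
}

int main (void)
{
  gsl_integration_workspace * w
    = gsl_integration_workspace_alloc (1000);

  /* weight w(x) = (x-a)^0 (b-x)^0 log(x-a) = log(x - a) */
  gsl_integration_qaws_table * t
    = gsl_integration_qaws_table_alloc (0.0, 0.0, 1, 0);

  gsl_function F;
  double result, abserr;

  F.function = &f;
  F.params = NULL;

  /* \int_0^1 log(x) dx = -1 */
  gsl_integration_qaws (&F, 0.0, 1.0, t, 0.0, 1e-7, 1000,
                        w, &result, &abserr);

  printf ("result = %.12f (exact -1)\n", result);

  gsl_integration_qaws_table_free (t);
  gsl_integration_workspace_free (w);
  return 0;
}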
File: gsl-ref.info, Node: QAWO adaptive integration for oscillatory functions, Next: QAWF adaptive integration for Fourier integrals, Prev: QAWS adaptive integration for singular functions, Up: Numerical Integration 17.9 QAWO adaptive integration for oscillatory functions ======================================================== The QAWO algorithm is designed for integrands with an oscillatory factor, \sin(\omega x) or \cos(\omega x). In order to work efficiently the algorithm requires a table of Chebyshev moments which must be pre-computed with calls to the functions below. -- Function: gsl_integration_qawo_table *gsl_integration_qawo_table_alloc (double omega, double L, enum gsl_integration_qawo_enum sine, size_t n) This function allocates space for a ‘gsl_integration_qawo_table’ struct and its associated workspace describing a sine or cosine weight function w(x) with the parameters (\omega, L), w(x) = sin(omega x) w(x) = cos(omega x) The parameter *note L: 659. must be the length of the interval over which the function will be integrated L = b - a. The choice of sine or cosine is made with the parameter *note sine: 659. which should be chosen from one of the two following symbolic values: -- Macro: GSL_INTEG_COSINE -- Macro: GSL_INTEG_SINE The ‘gsl_integration_qawo_table’ is a table of the trigonometric coefficients required in the integration process. The parameter *note n: 659. determines the number of levels of coefficients that are computed. Each level corresponds to one bisection of the interval L, so that *note n: 659. levels are sufficient for subintervals down to the length L/2^n. The integration routine *note gsl_integration_qawo(): 65c. returns the error ‘GSL_ETABLE’ if the number of levels is insufficient for the requested accuracy. -- Function: int gsl_integration_qawo_table_set (gsl_integration_qawo_table *t, double omega, double L, enum gsl_integration_qawo_enum sine) This function changes the parameters *note omega: 65d, *note L: 65d. and *note sine: 65d. of the existing workspace *note t: 65d. -- Function: int gsl_integration_qawo_table_set_length (gsl_integration_qawo_table *t, double L) This function allows the length parameter *note L: 65e. of the workspace *note t: 65e. to be changed. -- Function: void gsl_integration_qawo_table_free (gsl_integration_qawo_table *t) This function frees all the memory associated with the workspace *note t: 65f. -- Function: int gsl_integration_qawo (gsl_function *f, const double a, const double epsabs, const double epsrel, const size_t limit, gsl_integration_workspace *workspace, gsl_integration_qawo_table *wf, double *result, double *abserr) This function uses an adaptive algorithm to compute the integral of f over (a,b) with the weight function \sin(\omega x) or \cos(\omega x) defined by the table *note wf: 65c, I = int_a^b dx f(x) sin(omega x) I = int_a^b dx f(x) cos(omega x) The results are extrapolated using the epsilon-algorithm to accelerate the convergence of the integral. The function returns the final approximation from the extrapolation, *note result: 65c, and an estimate of the absolute error, *note abserr: 65c. The subintervals and their results are stored in the memory provided by *note workspace: 65c. The maximum number of subintervals is given by *note limit: 65c, which may not exceed the allocated size of the workspace. Those subintervals with “large” widths d where d\omega > 4 are computed using a 25-point Clenshaw-Curtis integration rule, which handles the oscillatory behavior. 
Subintervals with a “small” widths where d\omega < 4 are computed using a 15-point Gauss-Kronrod integration.  File: gsl-ref.info, Node: QAWF adaptive integration for Fourier integrals, Next: CQUAD doubly-adaptive integration, Prev: QAWO adaptive integration for oscillatory functions, Up: Numerical Integration 17.10 QAWF adaptive integration for Fourier integrals ===================================================== -- Function: int gsl_integration_qawf (gsl_function *f, const double a, const double epsabs, const size_t limit, gsl_integration_workspace *workspace, gsl_integration_workspace *cycle_workspace, gsl_integration_qawo_table *wf, double *result, double *abserr) This function attempts to compute a Fourier integral of the function *note f: 661. over the semi-infinite interval [a,+\infty) I = \int_a^{+\infty} dx f(x) sin(omega x) I = \int_a^{+\infty} dx f(x) cos(omega x) The parameter \omega and choice of \sin or \cos is taken from the table *note wf: 661. (the length ‘L’ can take any value, since it is overridden by this function to a value appropriate for the Fourier integration). The integral is computed using the QAWO algorithm over each of the subintervals, C_1 = [a, a + c] C_2 = [a + c, a + 2 c] ... = ... C_k = [a + (k-1) c, a + k c] where c = (2 floor(|\omega|) + 1) \pi/|\omega|. The width c is chosen to cover an odd number of periods so that the contributions from the intervals alternate in sign and are monotonically decreasing when *note f: 661. is positive and monotonically decreasing. The sum of this sequence of contributions is accelerated using the epsilon-algorithm. This function works to an overall absolute tolerance of *note abserr: 661. The following strategy is used: on each interval C_k the algorithm tries to achieve the tolerance TOL_k = u_k abserr where u_k = (1 - p)p^{k-1} and p = 9/10. The sum of the geometric series of contributions from each interval gives an overall tolerance of *note abserr: 661. If the integration of a subinterval leads to difficulties then the accuracy requirement for subsequent intervals is relaxed, TOL_k = u_k max(abserr, max_{i a Chebyshev Type 1 (a,b) 1 / \sqrt{(b - x) (x - a)} b > a Gegenbauer (a,b) ((b - x) (x - a))^{\alpha} \alpha > -1, b > a Jacobi (a,b) (b - x)^{\alpha} (x - a)^{\beta} \alpha,\beta > -1, b > a Laguerre (a,\infty) (x-a)^{\alpha} \exp{( -b (x - a) )} \alpha > -1, b > 0 Hermite (-\infty,\infty) |x-a|^{\alpha} \exp{( -b (x-a)^2 )} \alpha > -1, b > 0 Exponential (a,b) |x - (a + b)/2|^{\alpha} \alpha > -1, b > a Rational (a,\infty) (x - a)^{\alpha} (x + b)^{\beta} \alpha > -1, \alpha + \beta + 2n < 0, a + b > 0 Chebyshev Type 2 (a,b) \sqrt{(b - x) (x - a)} b > a The fixed point quadrature routines use the following workspace to store the nodes and weights, as well as additional variables for intermediate calculations: -- Type: gsl_integration_fixed_workspace This workspace is used for fixed point quadrature rules and looks like this: typedef struct { size_t n; /* number of nodes/weights */ double *weights; /* quadrature weights */ double *x; /* quadrature nodes */ double *diag; /* diagonal of Jacobi matrix */ double *subdiag; /* subdiagonal of Jacobi matrix */ const gsl_integration_fixed_type * type; } gsl_integration_fixed_workspace; -- Function: *note gsl_integration_fixed_workspace: 671. 
*gsl_integration_fixed_alloc (const gsl_integration_fixed_type *T, const size_t n, const double a, const double b, const double alpha, const double beta) This function allocates a workspace for computing integrals with interpolating quadratures using *note n: 672. quadrature nodes. The parameters *note a: 672, *note b: 672, *note alpha: 672, and *note beta: 672. specify the integration interval and/or weighting function for the various quadrature types. See the *note table: 670. above for constraints on these parameters. The size of the workspace is O(4n).
 -- Type: gsl_integration_fixed_type The type of quadrature used is specified by *note T: 672. which can be set to the following choices:
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_legendre This specifies Legendre quadrature integration. The parameters *note alpha: 672. and *note beta: 672. are ignored for this type.
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_chebyshev This specifies Chebyshev type 1 quadrature integration. The parameters *note alpha: 672. and *note beta: 672. are ignored for this type.
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_gegenbauer This specifies Gegenbauer quadrature integration. The parameter *note beta: 672. is ignored for this type.
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_jacobi This specifies Jacobi quadrature integration.
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_laguerre This specifies Laguerre quadrature integration. The parameter *note beta: 672. is ignored for this type.
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_hermite This specifies Hermite quadrature integration. The parameter *note beta: 672. is ignored for this type.
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_exponential This specifies exponential quadrature integration. The parameter *note beta: 672. is ignored for this type.
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_rational This specifies rational quadrature integration.
 -- Variable: *note gsl_integration_fixed_type: 673. *gsl_integration_fixed_chebyshev2 This specifies Chebyshev type 2 quadrature integration. The parameters *note alpha: 672. and *note beta: 672. are ignored for this type.
 -- Function: void gsl_integration_fixed_free (gsl_integration_fixed_workspace *w) This function frees the memory associated with the workspace *note w: 67d.
 -- Function: size_t gsl_integration_fixed_n (const gsl_integration_fixed_workspace *w) This function returns the number of quadrature nodes and weights.
 -- Function: double *gsl_integration_fixed_nodes (const gsl_integration_fixed_workspace *w) This function returns a pointer to an array of size ‘n’ containing the quadrature nodes x_i.
 -- Function: double *gsl_integration_fixed_weights (const gsl_integration_fixed_workspace *w) This function returns a pointer to an array of size ‘n’ containing the quadrature weights w_i.
 -- Function: int gsl_integration_fixed (const gsl_function *func, double *result, const gsl_integration_fixed_workspace *w) This function integrates the function f(x) provided in *note func: 681. using previously computed fixed quadrature rules. The integral is approximated as \sum_{i=1}^n w_i f(x_i) where w_i are the quadrature weights and x_i are the quadrature nodes computed previously by *note gsl_integration_fixed_alloc(): 672. The sum is stored in *note result: 681. on output.
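As a brief illustration of the accessor functions above (this sketch is not part of the original manual), the program below allocates a Legendre rule, applies the stored nodes and weights by hand, and checks that the same sum is produced by ‘gsl_integration_fixed()’. The integrand \sin(x) on (0, \pi) and the choice of 10 nodes are illustrative only; the exact value of the integral is 2.

#include <stdio.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_integration.h>

double f (double x, void * params)
{
  (void) params;
  return sin (x);  /* illustrative integrand */
}

int main (void)
{
  const size_t n = 10;

  /* Legendre rule on (0, pi); alpha and beta are ignored for this type */
  gsl_integration_fixed_workspace * w
    = gsl_integration_fixed_alloc (gsl_integration_fixed_legendre,
                                   n, 0.0, M_PI, 0.0, 0.0);

  const double * x = gsl_integration_fixed_nodes (w);
  const double * wt = gsl_integration_fixed_weights (w);

  gsl_function F;
  double result, sum = 0.0;
  size_t i;

  /* apply the rule by hand using the stored nodes and weights ... */
  for (i = 0; i < n; i++)
    sum += wt[i] * sin (x[i]);

  /* ... and let the library form the same sum */
  F.function = &f;
  F.params = NULL;
  gsl_integration_fixed (&F, &result, w);

  printf ("by hand = %.15f\n", sum);
  printf ("library = %.15f\n", result);

  gsl_integration_fixed_free (w);
  return 0;
}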
File: gsl-ref.info, Node: Error codes, Next: Examples<11>, Prev: Fixed point quadratures, Up: Numerical Integration 17.15 Error codes ================= In addition to the standard error codes for invalid arguments the functions can return the following values, ‘GSL_EMAXITER’ the maximum number of subdivisions was exceeded. ‘GSL_EROUND’ cannot reach tolerance because of roundoff error, or roundoff error was detected in the extrapolation table. ‘GSL_ESING’ a non-integrable singularity or other bad integrand behavior was found in the integration interval. ‘GSL_EDIVERGE’ the integral is divergent, or too slowly convergent to be integrated numerically. *note GSL_EDOM: 28. error in the values of the input arguments

File: gsl-ref.info, Node: Examples<11>, Next: References and Further Reading<12>, Prev: Error codes, Up: Numerical Integration 17.16 Examples ============== * Menu: * Adaptive integration example:: * Fixed-point quadrature example::

File: gsl-ref.info, Node: Adaptive integration example, Next: Fixed-point quadrature example, Up: Examples<11> 17.16.1 Adaptive integration example ------------------------------------ The integrator ‘QAGS’ will handle a large class of definite integrals. For example, consider the following integral, which has an algebraic-logarithmic singularity at the origin, \int_0^1 x^{-1/2} \log(x) dx = -4 The program below computes this integral to a relative accuracy bound of ‘1e-7’.

#include <stdio.h>
#include <math.h>
#include <gsl/gsl_integration.h>

double f (double x, void * params)
{
  double alpha = *(double *) params;
  double f = log(alpha*x) / sqrt(x);
  return f;
}

int main (void)
{
  gsl_integration_workspace * w
    = gsl_integration_workspace_alloc (1000);

  double result, error;
  double expected = -4.0;
  double alpha = 1.0;

  gsl_function F;
  F.function = &f;
  F.params = &alpha;

  gsl_integration_qags (&F, 0, 1, 0, 1e-7, 1000,
                        w, &result, &error);

  printf ("result = % .18f\n", result);
  printf ("exact result = % .18f\n", expected);
  printf ("estimated error = % .18f\n", error);
  printf ("actual error = % .18f\n", result - expected);
  printf ("intervals = %zu\n", w->size);

  gsl_integration_workspace_free (w);

  return 0;
}

The results below show that the desired accuracy is achieved after 8 subdivisions.

result = -4.000000000000085265
exact result = -4.000000000000000000
estimated error = 0.000000000000135447
actual error = -0.000000000000085265
intervals = 8

In fact, the extrapolation procedure used by ‘QAGS’ produces an accuracy of almost twice as many digits. The error estimate returned by the extrapolation procedure is larger than the actual error, giving a margin of safety of one order of magnitude.

File: gsl-ref.info, Node: Fixed-point quadrature example, Prev: Adaptive integration example, Up: Examples<11> 17.16.2 Fixed-point quadrature example -------------------------------------- In this example, we use a fixed-point quadrature rule to evaluate the integral \int_{-\infty}^{\infty} e^{-x^2} \left( x^m + 1 \right) dx = \left\{ \begin{array}{cc} \sqrt{\pi} + \Gamma{\left( \frac{m+1}{2} \right)}, & m \textrm{ even} \\ \sqrt{\pi}, & m \textrm{ odd} \end{array} \right. for integer m. Consulting our *note table: 670. of fixed point quadratures, we see that this integral can be evaluated with a Hermite quadrature rule, setting \alpha = 0, a = 0, b = 1. Since we are integrating a polynomial of degree m, we need to choose the number of nodes n \ge (m+1)/2 to achieve the best results.
First we will try integrating for m = 10, n = 5, which does not satisfy our criteria above:

$ ./integration2 10 5

The output is,

m = 10
intervals = 5
result = 47.468529694563351029
exact result = 54.115231635459025483
actual error = -6.646701940895674454

So, we find a large error. Now we try integrating for m = 10, n = 6, which does satisfy the criteria above:

$ ./integration2 10 6

The output is,

m = 10
intervals = 6
result = 54.115231635459096537
exact result = 54.115231635459025483
actual error = 0.000000000000071054

The program is given below.

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_integration.h>
#include <gsl/gsl_sf_gamma.h>

double f(double x, void * params)
{
  int m = *(int *) params;
  double f = gsl_pow_int(x, m) + 1.0;
  return f;
}

int main (int argc, char *argv[])
{
  gsl_integration_fixed_workspace * w;
  const gsl_integration_fixed_type * T = gsl_integration_fixed_hermite;

  int m = 10;
  int n = 6;
  double expected, result;
  gsl_function F;

  if (argc > 1)
    m = atoi(argv[1]);

  if (argc > 2)
    n = atoi(argv[2]);

  w = gsl_integration_fixed_alloc(T, n, 0.0, 1.0, 0.0, 0.0);

  F.function = &f;
  F.params = &m;

  gsl_integration_fixed(&F, &result, w);

  if (m % 2 == 0)
    expected = M_SQRTPI + gsl_sf_gamma(0.5*(1.0 + m));
  else
    expected = M_SQRTPI;

  printf ("m = %d\n", m);
  printf ("intervals = %zu\n", gsl_integration_fixed_n(w));
  printf ("result = % .18f\n", result);
  printf ("exact result = % .18f\n", expected);
  printf ("actual error = % .18f\n", result - expected);

  gsl_integration_fixed_free (w);

  return 0;
}

File: gsl-ref.info, Node: References and Further Reading<12>, Prev: Examples<11>, Up: Numerical Integration 17.17 References and Further Reading ==================================== The following book is the definitive reference for QUADPACK, and was written by the original authors. It provides descriptions of the algorithms, program listings, test programs and examples. It also includes useful advice on numerical integration and many references to the numerical integration literature used in developing QUADPACK.
* R. Piessens, E. de Doncker-Kapenga, C.W. Ueberhuber, D.K. Kahaner. QUADPACK: A subroutine package for automatic integration, Springer Verlag, 1983.
The CQUAD integration algorithm is described in the following paper:
* P. Gonnet, “Increasing the Reliability of Adaptive Quadrature Using Explicit Interpolants”, ACM Transactions on Mathematical Software, Volume 37 (2010), Issue 3, Article 26.
The fixed-point quadrature routines are based on IQPACK, described in the following papers:
* S. Elhay, J. Kautsky, Algorithm 655: IQPACK, FORTRAN Subroutines for the Weights of Interpolatory Quadrature, ACM Transactions on Mathematical Software, Volume 13, Number 4, December 1987, pages 399-415.
* J. Kautsky, S. Elhay, Calculation of the Weights of Interpolatory Quadratures, Numerische Mathematik, Volume 40, Number 3, October 1982, pages 407-422.

File: gsl-ref.info, Node: Random Number Generation, Next: Quasi-Random Sequences, Prev: Numerical Integration, Up: Top 18 Random Number Generation *************************** The library provides a large collection of random number generators which can be accessed through a uniform interface. Environment variables allow you to select different generators and seeds at runtime, so that you can easily switch between generators without needing to recompile your program. Each instance of a generator keeps track of its own state, allowing the generators to be used in multi-threaded programs.
Additional functions are available for transforming uniform random numbers into samples from continuous or discrete probability distributions such as the Gaussian, log-normal or Poisson distributions. These functions are declared in the header file ‘gsl_rng.h’. * Menu: * General comments on random numbers:: * The Random Number Generator Interface:: * Random number generator initialization:: * Sampling from a random number generator:: * Auxiliary random number generator functions:: * Random number environment variables:: * Copying random number generator state:: * Reading and writing random number generator state:: * Random number generator algorithms:: * Unix random number generators:: * Other random number generators:: * Performance:: * Examples: Examples<12>. * References and Further Reading: References and Further Reading<13>. * Acknowledgements::  File: gsl-ref.info, Node: General comments on random numbers, Next: The Random Number Generator Interface, Up: Random Number Generation 18.1 General comments on random numbers ======================================= In 1988, Park and Miller wrote a paper entitled “Random number generators: good ones are hard to find.” [Commun.: ACM, 31, 1192–1201]. Fortunately, some excellent random number generators are available, though poor ones are still in common use. You may be happy with the system-supplied random number generator on your computer, but you should be aware that as computers get faster, requirements on random number generators increase. Nowadays, a simulation that calls a random number generator millions of times can often finish before you can make it down the hall to the coffee machine and back. A very nice review of random number generators was written by Pierre L’Ecuyer, as Chapter 4 of the book: Handbook on Simulation, Jerry Banks, ed. (Wiley, 1997). The chapter is available in postscript from L’Ecuyer’s ftp site (see references). Knuth’s volume on Seminumerical Algorithms (originally published in 1968) devotes 170 pages to random number generators, and has recently been updated in its 3rd edition (1997). It is brilliant, a classic. If you don’t own it, you should stop reading right now, run to the nearest bookstore, and buy it. A good random number generator will satisfy both theoretical and statistical properties. Theoretical properties are often hard to obtain (they require real math!), but one prefers a random number generator with a long period, low serial correlation, and a tendency `not' to “fall mainly on the planes.” Statistical tests are performed with numerical simulations. Generally, a random number generator is used to estimate some quantity for which the theory of probability provides an exact answer. Comparison to this exact answer provides a measure of “randomness”.  File: gsl-ref.info, Node: The Random Number Generator Interface, Next: Random number generator initialization, Prev: General comments on random numbers, Up: Random Number Generation 18.2 The Random Number Generator Interface ========================================== It is important to remember that a random number generator is not a “real” function like sine or cosine. Unlike real functions, successive calls to a random number generator yield different return values. Of course that is just what you want for a random number generator, but to achieve this effect, the generator must keep track of some kind of “state” variable. 
Sometimes this state is just an integer (sometimes just the value of the previously generated random number), but often it is more complicated than that and may involve a whole array of numbers, possibly with some indices thrown in. To use the random number generators, you do not need to know the details of what comprises the state, and besides that varies from algorithm to algorithm. -- Type: gsl_rng_type -- Type: gsl_rng The random number generator library uses two special structs, *note gsl_rng_type: 68b. which holds static information about each type of generator and *note gsl_rng: 68c. which describes an instance of a generator created from a given *note gsl_rng_type: 68b. The functions described in this section are declared in the header file ‘gsl_rng.h’.  File: gsl-ref.info, Node: Random number generator initialization, Next: Sampling from a random number generator, Prev: The Random Number Generator Interface, Up: Random Number Generation 18.3 Random number generator initialization =========================================== -- Function: *note gsl_rng: 68c. *gsl_rng_alloc (const gsl_rng_type *T) This function returns a pointer to a newly-created instance of a random number generator of type *note T: 68e. For example, the following code creates an instance of the Tausworthe generator: gsl_rng * r = gsl_rng_alloc (gsl_rng_taus); If there is insufficient memory to create the generator then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. The generator is automatically initialized with the default seed, *note gsl_rng_default_seed: 68f. This is zero by default but can be changed either directly or by using the environment variable *note GSL_RNG_SEED: 690. The details of the available generator types are described later in this chapter. -- Function: void gsl_rng_set (const gsl_rng *r, unsigned long int s) This function initializes (or “seeds”) the random number generator. If the generator is seeded with the same value of *note s: 691. on two different runs, the same stream of random numbers will be generated by successive calls to the routines below. If different values of s \geq 1 are supplied, then the generated streams of random numbers should be completely different. If the seed *note s: 691. is zero then the standard seed from the original implementation is used instead. For example, the original Fortran source code for the ‘ranlux’ generator used a seed of 314159265, and so choosing *note s: 691. equal to zero reproduces this when using *note gsl_rng_ranlux: 692. When using multiple seeds with the same generator, choose seed values greater than zero to avoid collisions with the default setting. Note that the most generators only accept 32-bit seeds, with higher values being reduced modulo 2^{32}. For generators with smaller ranges the maximum seed value will typically be lower. -- Function: void gsl_rng_free (gsl_rng *r) This function frees all the memory associated with the generator *note r: 693.  File: gsl-ref.info, Node: Sampling from a random number generator, Next: Auxiliary random number generator functions, Prev: Random number generator initialization, Up: Random Number Generation 18.4 Sampling from a random number generator ============================================ The following functions return uniformly distributed random numbers, either as integers or double precision floating point numbers. Inline versions of these functions are used when ‘HAVE_INLINE’ is defined. 
To obtain non-uniform distributions, see *note Random Number Distributions: 695. -- Function: unsigned long int gsl_rng_get (const gsl_rng *r) This function returns a random integer from the generator *note r: 696. The minimum and maximum values depend on the algorithm used, but all integers in the range [‘min’, ‘max’] are equally likely. The values of ‘min’ and ‘max’ can be determined using the auxiliary functions *note gsl_rng_max(): 697. and *note gsl_rng_min(): 698. -- Function: double gsl_rng_uniform (const gsl_rng *r) This function returns a double precision floating point number uniformly distributed in the range [0,1). The range includes 0.0 but excludes 1.0. The value is typically obtained by dividing the result of ‘gsl_rng_get(r)’ by ‘gsl_rng_max(r) + 1.0’ in double precision. Some generators compute this ratio internally so that they can provide floating point numbers with more than 32 bits of randomness (the maximum number of bits that can be portably represented in a single ‘unsigned long int’). -- Function: double gsl_rng_uniform_pos (const gsl_rng *r) This function returns a positive double precision floating point number uniformly distributed in the range (0,1), excluding both 0.0 and 1.0. The number is obtained by sampling the generator with the algorithm of *note gsl_rng_uniform(): 699. until a non-zero value is obtained. You can use this function if you need to avoid a singularity at 0.0. -- Function: unsigned long int gsl_rng_uniform_int (const gsl_rng *r, unsigned long int n) This function returns a random integer from 0 to n-1 inclusive by scaling down and/or discarding samples from the generator *note r: 69b. All integers in the range [0,n-1] are produced with equal probability. For generators with a non-zero minimum value an offset is applied so that zero is returned with the correct probability. Note that this function is designed for sampling from ranges smaller than the range of the underlying generator. The parameter *note n: 69b. must be less than or equal to the range of the generator *note r: 69b. If *note n: 69b. is larger than the range of the generator then the function calls the error handler with an error code of *note GSL_EINVAL: 2b. and returns zero. In particular, this function is not intended for generating the full range of unsigned integer values [0,2^{32}-1]. Instead choose a generator with the maximal integer range and zero minimum value, such as *note gsl_rng_ranlxd1: 69c, *note gsl_rng_mt19937: 69d. or *note gsl_rng_taus: 69e, and sample it directly using *note gsl_rng_get(): 696. The range of each generator can be found using the auxiliary functions described in the next section.  File: gsl-ref.info, Node: Auxiliary random number generator functions, Next: Random number environment variables, Prev: Sampling from a random number generator, Up: Random Number Generation 18.5 Auxiliary random number generator functions ================================================ The following functions provide information about an existing generator. You should use them in preference to hard-coding the generator parameters into your own code. -- Function: const char *gsl_rng_name (const gsl_rng *r) This function returns a pointer to the name of the generator. For example: printf ("r is a '%s' generator\n", gsl_rng_name (r)); would print something like: r is a 'taus' generator -- Function: unsigned long int gsl_rng_max (const gsl_rng *r) This function returns the largest value that *note gsl_rng_get(): 696. can return. 
-- Function: unsigned long int gsl_rng_min (const gsl_rng *r) This function returns the smallest value that *note gsl_rng_get(): 696. can return. Usually this value is zero. There are some generators with algorithms that cannot return zero, and for these generators the minimum value is 1. -- Function: void *gsl_rng_state (const gsl_rng *r) -- Function: size_t gsl_rng_size (const gsl_rng *r) These functions return a pointer to the state of generator *note r: 6a2. and its size. You can use this information to access the state directly. For example, the following code will write the state of a generator to a stream: void * state = gsl_rng_state (r); size_t n = gsl_rng_size (r); fwrite (state, n, 1, stream); -- Function: const *note gsl_rng_type: 68b. **gsl_rng_types_setup (void) This function returns a pointer to an array of all the available generator types, terminated by a null pointer. The function should be called once at the start of the program, if needed. The following code fragment shows how to iterate over the array of generator types to print the names of the available algorithms: const gsl_rng_type **t, **t0; t0 = gsl_rng_types_setup (); printf ("Available generators:\n"); for (t = t0; *t != 0; t++) { printf ("%s\n", (*t)->name); }  File: gsl-ref.info, Node: Random number environment variables, Next: Copying random number generator state, Prev: Auxiliary random number generator functions, Up: Random Number Generation 18.6 Random number environment variables ======================================== The library allows you to choose a default generator and seed from the environment variables *note GSL_RNG_TYPE: 6a5. and *note GSL_RNG_SEED: 690. and the function *note gsl_rng_env_setup(): 6a6. This makes it easy try out different generators and seeds without having to recompile your program. -- Macro: GSL_RNG_TYPE This environment variable specifies the default random number generator. It should be the name of a generator, such as ‘taus’ or ‘mt19937’. -- Macro: GSL_RNG_SEED This environment variable specifies the default seed for the random number generator -- Variable: *note gsl_rng_type: 68b. *gsl_rng_default This global library variable specifies the default random number generator, and can be initialized from *note GSL_RNG_TYPE: 6a5. using *note gsl_rng_env_setup(): 6a6. It is defined as follows: extern const gsl_rng_type *gsl_rng_default -- Variable: unsigned long int gsl_rng_default_seed This global library variable specifies the seed for the default random number generator, and can be initialized from *note GSL_RNG_SEED: 690. using *note gsl_rng_env_setup(): 6a6. It is set to zero by default and is defined as follows: extern unsigned long int gsl_rng_default_seed -- Function: const *note gsl_rng_type: 68b. *gsl_rng_env_setup (void) This function reads the environment variables *note GSL_RNG_TYPE: 6a5. and *note GSL_RNG_SEED: 690. and uses their values to set the corresponding library variables *note gsl_rng_default: 6a7. and *note gsl_rng_default_seed: 68f. The value of *note GSL_RNG_SEED: 690. is converted to an ‘unsigned long int’ using the C library function ‘strtoul()’. If you don’t specify a generator for *note GSL_RNG_TYPE: 6a5. then *note gsl_rng_mt19937: 69d. is used as the default. The initial value of *note gsl_rng_default_seed: 68f. is zero. Here is a short program which shows how to create a global generator using the environment variables *note GSL_RNG_TYPE: 6a5. 
and *note GSL_RNG_SEED: 690,

#include <stdio.h>
#include <gsl/gsl_rng.h>

gsl_rng * r;  /* global generator */

int main (void)
{
  const gsl_rng_type * T;

  gsl_rng_env_setup();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  printf ("generator type: %s\n", gsl_rng_name (r));
  printf ("seed = %lu\n", gsl_rng_default_seed);
  printf ("first value = %lu\n", gsl_rng_get (r));

  gsl_rng_free (r);

  return 0;
}

Running the program without any environment variables uses the initial defaults, an ‘mt19937’ generator with a seed of 0,

generator type: mt19937
seed = 0
first value = 4293858116

By setting the two variables on the command line we can change the default generator and the seed:

$ GSL_RNG_TYPE="taus" GSL_RNG_SEED=123 ./a.out
GSL_RNG_TYPE=taus
GSL_RNG_SEED=123
generator type: taus
seed = 123
first value = 2720986350

File: gsl-ref.info, Node: Copying random number generator state, Next: Reading and writing random number generator state, Prev: Random number environment variables, Up: Random Number Generation 18.7 Copying random number generator state ========================================== The above methods do not expose the random number state which changes from call to call. It is often useful to be able to save and restore the state. To permit these practices, a few somewhat more advanced functions are supplied. These include:
 -- Function: int gsl_rng_memcpy (gsl_rng *dest, const gsl_rng *src) This function copies the random number generator *note src: 6a9. into the pre-existing generator *note dest: 6a9, making *note dest: 6a9. into an exact copy of *note src: 6a9. The two generators must be of the same type.
 -- Function: *note gsl_rng: 68c. *gsl_rng_clone (const gsl_rng *r) This function returns a pointer to a newly created generator which is an exact copy of the generator *note r: 6aa.

File: gsl-ref.info, Node: Reading and writing random number generator state, Next: Random number generator algorithms, Prev: Copying random number generator state, Up: Random Number Generation 18.8 Reading and writing random number generator state ====================================================== The library provides functions for reading and writing the random number state to a file as binary data.
 -- Function: int gsl_rng_fwrite (FILE *stream, const gsl_rng *r) This function writes the random number state of the random number generator *note r: 6ac. to the stream *note stream: 6ac. in binary format. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures.
 -- Function: int gsl_rng_fread (FILE *stream, gsl_rng *r) This function reads the random number state into the random number generator *note r: 6ad. from the open stream *note stream: 6ad. in binary format. The random number generator *note r: 6ad. must be preinitialized with the correct random number generator type since type information is not saved. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture.

File: gsl-ref.info, Node: Random number generator algorithms, Next: Unix random number generators, Prev: Reading and writing random number generator state, Up: Random Number Generation 18.9 Random number generator algorithms ======================================= The functions described above make no reference to the actual algorithm used.
This is deliberate so that you can switch algorithms without having to change any of your application source code. The library provides a large number of generators of different types, including simulation quality generators, generators provided for compatibility with other libraries and historical generators from the past. The following generators are recommended for use in simulation. They have extremely long periods, low correlation and pass most statistical tests. For the most reliable source of uncorrelated numbers, the second-generation RANLUX generators have the strongest proof of randomness. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_mt19937 The MT19937 generator of Makoto Matsumoto and Takuji Nishimura is a variant of the twisted generalized feedback shift-register algorithm, and is known as the “Mersenne Twister” generator. It has a Mersenne prime period of 2^{19937} - 1 (about 10^{6000}) and is equi-distributed in 623 dimensions. It has passed the DIEHARD statistical tests. It uses 624 words of state per generator and is comparable in speed to the other generators. The original generator used a default seed of 4357 and choosing ‘s’ equal to zero in *note gsl_rng_set(): 691. reproduces this. Later versions switched to 5489 as the default seed, you can choose this explicitly via *note gsl_rng_set(): 691. instead if you require it. For more information see, * Makoto Matsumoto and Takuji Nishimura, “Mersenne Twister: A 623-dimensionally equidistributed uniform pseudorandom number generator”. ACM Transactions on Modeling and Computer Simulation, Vol.: 8, No.: 1 (Jan. 1998), Pages 3–30 The generator *note gsl_rng_mt19937: 69d. uses the second revision of the seeding procedure published by the two authors above in 2002. The original seeding procedures could cause spurious artifacts for some seed values. They are still available through the alternative generators ‘gsl_rng_mt19937_1999’ and ‘gsl_rng_mt19937_1998’. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranlxs0 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranlxs1 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranlxs2 The generator ‘ranlxs0’ is a second-generation version of the RANLUX algorithm of Luscher, which produces “luxury random numbers”. This generator provides single precision output (24 bits) at three luxury levels ‘ranlxs0’, ‘ranlxs1’ and ‘ranlxs2’, in increasing order of strength. It uses double-precision floating point arithmetic internally and can be significantly faster than the integer version of ‘ranlux’, particularly on 64-bit architectures. The period of the generator is about 10^{171}. The algorithm has mathematically proven properties and can provide truly decorrelated numbers at a known level of randomness. The higher luxury levels provide increased decorrelation between samples as an additional safety margin. Note that the range of allowed seeds for this generator is [0,2^{31}-1]. Higher seed values are wrapped modulo 2^{31}. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranlxd1 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranlxd2 These generators produce double precision output (48 bits) from the RANLXS generator. The library provides two luxury levels ‘ranlxd1’ and ‘ranlxd2’, in increasing order of strength. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranlux -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranlux389 The ‘ranlux’ generator is an implementation of the original algorithm developed by Luscher. It uses a lagged-fibonacci-with-skipping algorithm to produce “luxury random numbers”. 
It is a 24-bit generator, originally designed for single-precision IEEE floating point numbers. This implementation is based on integer arithmetic, while the second-generation versions RANLXS and RANLXD described above provide floating-point implementations which will be faster on many platforms. The period of the generator is about 10^{171}. The algorithm has mathematically proven properties and it can provide truly decorrelated numbers at a known level of randomness. The default level of decorrelation recommended by Luscher is provided by *note gsl_rng_ranlux: 692, while *note gsl_rng_ranlux389: 6b3. gives the highest level of randomness, with all 24 bits decorrelated. Both types of generator use 24 words of state per generator. For more information see, * M. Luscher, “A portable high-quality random number generator for lattice field theory calculations”, Computer Physics Communications, 79 (1994) 100–110. * F. James, “RANLUX: A Fortran implementation of the high-quality pseudo-random number generator of Luscher”, Computer Physics Communications, 79 (1994) 111–114 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_cmrg This is a combined multiple recursive generator by L’Ecuyer. Its sequence is, z_n = (x_n - y_n) \mod m_1 where the two underlying generators x_n and y_n are, x_n = (a_1 x_{n-1} + a_2 x_{n-2} + a_3 x_{n-3}) mod m_1 y_n = (b_1 y_{n-1} + b_2 y_{n-2} + b_3 y_{n-3}) mod m_2 with coefficients a_1 = 0, a_2 = 63308, a_3 = -183326, b_1 = 86098, b_2 = 0, b_3 = -539608, and moduli m_1 = 2^{31} - 1 = 2147483647 and m_2 = 2145483479. The period of this generator is \hbox{lcm}(m_1^3-1, m_2^3-1), which is approximately 2^{185} (about 10^{56}). It uses 6 words of state per generator. For more information see, * P. L’Ecuyer, “Combined Multiple Recursive Random Number Generators”, Operations Research, 44, 5 (1996), 816–822. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_mrg This is a fifth-order multiple recursive generator by L’Ecuyer, Blouin and Coutre. Its sequence is, x_n = (a_1 x_{n-1} + a_5 x_{n-5}) \mod m with a_1 = 107374182, a_2 = a_3 = a_4 = 0, a_5 = 104480 and m = 2^{31}-1. The period of this generator is about 10^{46}. It uses 5 words of state per generator. More information can be found in the following paper, * P. L’Ecuyer, F. Blouin, and R. Coutre, “A search for good multiple recursive random number generators”, ACM Transactions on Modeling and Computer Simulation 3, 87–98 (1993). -- Variable: *note gsl_rng_type: 68b. *gsl_rng_taus -- Variable: *note gsl_rng_type: 68b. *gsl_rng_taus2 This is a maximally equidistributed combined Tausworthe generator by L’Ecuyer. The sequence is, x_n = (s1_n ^^ s2_n ^^ s3_n) where, s1_{n+1} = (((s1_n&4294967294)<<12)^^(((s1_n<<13)^^s1_n)>>19)) s2_{n+1} = (((s2_n&4294967288)<< 4)^^(((s2_n<< 2)^^s2_n)>>25)) s3_{n+1} = (((s3_n&4294967280)<<17)^^(((s3_n<< 3)^^s3_n)>>11)) computed modulo 2^{32}. In the formulas above \oplus denotes `exclusive-or'. Note that the algorithm relies on the properties of 32-bit unsigned integers and has been implemented using a bitmask of ‘0xFFFFFFFF’ to make it work on 64 bit machines. The period of this generator is 2^{88} (about 10^{26}). It uses 3 words of state per generator. For more information see, * P. L’Ecuyer, “Maximally Equidistributed Combined Tausworthe Generators”, Mathematics of Computation, 65, 213 (1996), 203–213. The generator *note gsl_rng_taus2: 6b6. uses the same algorithm as *note gsl_rng_taus: 69e. but with an improved seeding procedure described in the paper, * P. 
L’Ecuyer, “Tables of Maximally Equidistributed Combined LFSR Generators”, Mathematics of Computation, 68, 225 (1999), 261–269 The generator *note gsl_rng_taus2: 6b6. should now be used in preference to *note gsl_rng_taus: 69e. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_gfsr4 The ‘gfsr4’ generator is like a lagged-fibonacci generator, and produces each number as an ‘xor’’d sum of four previous values. r_n = r_{n-A} ^^ r_{n-B} ^^ r_{n-C} ^^ r_{n-D} Ziff (ref below) notes that “it is now widely known” that two-tap registers (such as R250, which is described below) have serious flaws, the most obvious one being the three-point correlation that comes from the definition of the generator. Nice mathematical properties can be derived for GFSR’s, and numerics bears out the claim that 4-tap GFSR’s with appropriately chosen offsets are as random as can be measured, using the author’s test. This implementation uses the values suggested the example on p392 of Ziff’s article: A=471, B=1586, C=6988, D=9689. If the offsets are appropriately chosen (such as the one ones in this implementation), then the sequence is said to be maximal; that means that the period is 2^D - 1, where D is the longest lag. (It is one less than 2^D because it is not permitted to have all zeros in the ‘ra[]’ array.) For this implementation with D=9689 that works out to about 10^{2917}. Note that the implementation of this generator using a 32-bit integer amounts to 32 parallel implementations of one-bit generators. One consequence of this is that the period of this 32-bit generator is the same as for the one-bit generator. Moreover, this independence means that all 32-bit patterns are equally likely, and in particular that 0 is an allowed random value. (We are grateful to Heiko Bauke for clarifying for us these properties of GFSR random number generators.) For more information see, * Robert M. Ziff, “Four-tap shift-register-sequence random-number generators”, Computers in Physics, 12(4), Jul/Aug 1998, pp 385–392.  File: gsl-ref.info, Node: Unix random number generators, Next: Other random number generators, Prev: Random number generator algorithms, Up: Random Number Generation 18.10 Unix random number generators =================================== The standard Unix random number generators ‘rand’, ‘random’ and ‘rand48’ are provided as part of GSL. Although these generators are widely available individually often they aren’t all available on the same platform. This makes it difficult to write portable code using them and so we have included the complete set of Unix generators in GSL for convenience. Note that these generators don’t produce high-quality randomness and aren’t suitable for work requiring accurate statistics. However, if you won’t be measuring statistical quantities and just want to introduce some variation into your program then these generators are quite acceptable. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_rand This is the BSD ‘rand’ generator. Its sequence is x_{n+1} = (a x_n + c) \mod m with a = 1103515245, c = 12345 and m = 2^{31}. The seed specifies the initial value, x_1. The period of this generator is 2^{31}, and it uses 1 word of storage per generator. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_random_bsd -- Variable: *note gsl_rng_type: 68b. *gsl_rng_random_libc5 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_random_glibc2 These generators implement the ‘random’ family of functions, a set of linear feedback shift register generators originally used in BSD Unix. 
There are several versions of ‘random’ in use today: the original BSD version (e.g. on SunOS4), a libc5 version (found on older GNU/Linux systems) and a glibc2 version. Each version uses a different seeding procedure, and thus produces different sequences. The original BSD routines accepted a variable length buffer for the generator state, with longer buffers providing higher-quality randomness. The ‘random’ function implemented algorithms for buffer lengths of 8, 32, 64, 128 and 256 bytes, and the algorithm with the largest length that would fit into the user-supplied buffer was used. To support these algorithms additional generators are available with the following names: gsl_rng_random8_bsd gsl_rng_random32_bsd gsl_rng_random64_bsd gsl_rng_random128_bsd gsl_rng_random256_bsd where the numeric suffix indicates the buffer length. The original BSD ‘random’ function used a 128-byte default buffer and so *note gsl_rng_random_bsd: 6ba. has been made equivalent to ‘gsl_rng_random128_bsd’. Corresponding versions of the ‘libc5’ and ‘glibc2’ generators are also available, with the names ‘gsl_rng_random8_libc5’, ‘gsl_rng_random8_glibc2’, etc. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_rand48 This is the Unix ‘rand48’ generator. Its sequence is x_{n+1} = (a x_n + c) \mod m defined on 48-bit unsigned integers with a = 25214903917, c = 11 and m = 2^{48}. The seed specifies the upper 32 bits of the initial value, x_1, with the lower 16 bits set to ‘0x330E’. The function *note gsl_rng_get(): 696. returns the upper 32 bits from each term of the sequence. This does not have a direct parallel in the original ‘rand48’ functions, but forcing the result to type ‘long int’ reproduces the output of ‘mrand48’. The function *note gsl_rng_uniform(): 699. uses the full 48 bits of internal state to return the double precision number x_n/m, which is equivalent to the function ‘drand48’. Note that some versions of the GNU C Library contained a bug in ‘mrand48’ function which caused it to produce different results (only the lower 16-bits of the return value were set).  File: gsl-ref.info, Node: Other random number generators, Next: Performance, Prev: Unix random number generators, Up: Random Number Generation 18.11 Other random number generators ==================================== The generators in this section are provided for compatibility with existing libraries. If you are converting an existing program to use GSL then you can select these generators to check your new implementation against the original one, using the same random number generator. After verifying that your new program reproduces the original results you can then switch to a higher-quality generator. Note that most of the generators in this section are based on single linear congruence relations, which are the least sophisticated type of generator. In particular, linear congruences have poor properties when used with a non-prime modulus, as several of these routines do (e.g. with a power of two modulus, 2^{31} or 2^{32}). This leads to periodicity in the least significant bits of each number, with only the higher bits having any randomness. Thus if you want to produce a random bitstream it is best to avoid using the least significant bits. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranf This is the CRAY random number generator ‘RANF’. Its sequence is x_{n+1} = (a x_n) \mod m defined on 48-bit unsigned integers with a = 44485709377909 and m = 2^{48}. 
The seed specifies the lower 32 bits of the initial value, x_1, with the lowest bit set to prevent the seed taking an even value. The upper 16 bits of x_1 are set to 0. A consequence of this procedure is that the pairs of seeds 2 and 3, 4 and 5, etc.: produce the same sequences. The generator compatible with the CRAY MATHLIB routine RANF. It produces double precision floating point numbers which should be identical to those from the original RANF. There is a subtlety in the implementation of the seeding. The initial state is reversed through one step, by multiplying by the modular inverse of a mod m. This is done for compatibility with the original CRAY implementation. Note that you can only seed the generator with integers up to 2^{32}, while the original CRAY implementation uses non-portable wide integers which can cover all 2^{48} states of the generator. The function *note gsl_rng_get(): 696. returns the upper 32 bits from each term of the sequence. The function *note gsl_rng_uniform(): 699. uses the full 48 bits to return the double precision number x_n/m. The period of this generator is 2^{46}. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_ranmar This is the RANMAR lagged-fibonacci generator of Marsaglia, Zaman and Tsang. It is a 24-bit generator, originally designed for single-precision IEEE floating point numbers. It was included in the CERNLIB high-energy physics library. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_r250 This is the shift-register generator of Kirkpatrick and Stoll. The sequence is based on the recurrence x_n = x_{n-103} ^^ x_{n-250} where \oplus denotes `exclusive-or', defined on 32-bit words. The period of this generator is about 2^{250} and it uses 250 words of state per generator. For more information see, * S. Kirkpatrick and E. Stoll, “A very fast shift-register sequence random number generator”, Journal of Computational Physics, 40, 517–526 (1981) -- Variable: *note gsl_rng_type: 68b. *gsl_rng_tt800 This is an earlier version of the twisted generalized feedback shift-register generator, and has been superseded by the development of MT19937. However, it is still an acceptable generator in its own right. It has a period of 2^{800} and uses 33 words of storage per generator. For more information see, * Makoto Matsumoto and Yoshiharu Kurita, “Twisted GFSR Generators II”, ACM Transactions on Modelling and Computer Simulation, Vol.: 4, No.: 3, 1994, pages 254–266. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_vax This is the VAX generator ‘MTH$RANDOM’. Its sequence is, x_{n+1} = (a x_n + c) \mod m with a = 69069, c = 1 and m = 2^{32}. The seed specifies the initial value, x_1. The period of this generator is 2^{32} and it uses 1 word of storage per generator. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_transputer This is the random number generator from the INMOS Transputer Development system. Its sequence is, x_{n+1} = (a x_n) \mod m with a = 1664525 and m = 2^{32}. The seed specifies the initial value, x_1. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_randu This is the IBM ‘RANDU’ generator. Its sequence is x_{n+1} = (a x_n) \mod m with a = 65539 and m = 2^{31}. The seed specifies the initial value, x_1. The period of this generator was only 2^{29}. It has become a textbook example of a poor generator. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_minstd This is Park and Miller’s “minimal standard” MINSTD generator, a simple linear congruence which takes care to avoid the major pitfalls of such algorithms. 
Its sequence is, x_{n+1} = (a x_n) \mod m with a = 16807 and m = 2^{31} - 1 = 2147483647. The seed specifies the initial value, x_1. The period of this generator is about 2^{31}. This generator was used in the IMSL Library (subroutine RNUN) and in MATLAB (the RAND function) in the past. It is also sometimes known by the acronym “GGL” (I’m not sure what that stands for). For more information see, * Park and Miller, “Random Number Generators: Good ones are hard to find”, Communications of the ACM, October 1988, Volume 31, No 10, pages 1192–1201. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_uni -- Variable: *note gsl_rng_type: 68b. *gsl_rng_uni32 This is a reimplementation of the 16-bit SLATEC random number generator RUNIF. A generalization of the generator to 32 bits is provided by *note gsl_rng_uni32: 6c8. The original source code is available from NETLIB. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_slatec This is the SLATEC random number generator RAND. It is ancient. The original source code is available from NETLIB. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_zuf This is the ZUFALL lagged Fibonacci series generator of Peterson. Its sequence is, t = u_{n-273} + u_{n-607} u_n = t - floor(t) The original source code is available from NETLIB. For more information see, * W. Petersen, “Lagged Fibonacci Random Number Generators for the NEC SX-3”, International Journal of High Speed Computing (1994). -- Variable: *note gsl_rng_type: 68b. *gsl_rng_knuthran2 This is a second-order multiple recursive generator described by Knuth in Seminumerical Algorithms, 3rd Ed., page 108. Its sequence is, x_n = (a_1 x_{n-1} + a_2 x_{n-2}) \mod m with a_1 = 271828183, a_2 = 314159269, and m = 2^{31}-1. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_knuthran2002 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_knuthran This is a second-order multiple recursive generator described by Knuth in Seminumerical Algorithms, 3rd Ed., Section 3.6. Knuth provides its C code. The updated routine *note gsl_rng_knuthran2002: 6cc. is from the revised 9th printing and corrects some weaknesses in the earlier version, which is implemented as *note gsl_rng_knuthran: 6cd. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_borosh13 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_fishman18 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_fishman20 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_lecuyer21 -- Variable: *note gsl_rng_type: 68b. *gsl_rng_waterman14 These multiplicative generators are taken from Knuth’s Seminumerical Algorithms, 3rd Ed., pages 106–108. Their sequence is, x_{n+1} = (a x_n) \mod m where the seed specifies the initial value, x_1. The parameters a and m are as follows, Borosh-Niederreiter: a = 1812433253, m = 2^{32}, Fishman18: a = 62089911, m = 2^{31}-1, Fishman20: a = 48271, m = 2^{31}-1, L’Ecuyer: a = 40692, m = 2^{31}-249, Waterman: a = 1566083941, m = 2^{32}. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_fishman2x This is the L’Ecuyer–Fishman random number generator. It is taken from Knuth’s Seminumerical Algorithms, 3rd Ed., page 108. Its sequence is, z_{n+1} = (x_n - y_n) \mod m with m = 2^{31}-1. x_n and y_n are given by the ‘fishman20’ and ‘lecuyer21’ algorithms. The seed specifies the initial value, x_1. -- Variable: *note gsl_rng_type: 68b. *gsl_rng_coveyou This is the Coveyou random number generator. It is taken from Knuth’s Seminumerical Algorithms, 3rd Ed., Section 3.2.2. Its sequence is, x_{n+1} = (x_n (x_n + 1)) \mod m with m = 2^{32}. The seed specifies the initial value, x_1.  
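As a rough sketch of the checking procedure described above, the following fragment selects one of the compatibility generators explicitly and prints its first few raw integers so that they can be compared with the output of the library being replaced. The choice of ‘gsl_rng_minstd’ and the seed value 1 here are arbitrary; any of the generators in this section can be substituted.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>

     int
     main (void)
     {
       int i;

       /* select a compatibility generator explicitly and seed it with
          the same value used by the original program */
       gsl_rng * r = gsl_rng_alloc (gsl_rng_minstd);
       gsl_rng_set (r, 1);

       for (i = 0; i < 5; i++)
         printf ("%lu\n", gsl_rng_get (r));

       gsl_rng_free (r);
       return 0;
     }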
File: gsl-ref.info, Node: Performance, Next: Examples<12>, Prev: Other random number generators, Up: Random Number Generation 18.12 Performance ================= The following table shows the relative performance of a selection of the available random number generators. The fastest simulation quality generators are ‘taus’, ‘gfsr4’ and ‘mt19937’. The generators which offer the best mathematically-proven quality are those based on the RANLUX algorithm:

     1754 k ints/sec,   870 k doubles/sec,   taus
     1613 k ints/sec,   855 k doubles/sec,   gfsr4
     1370 k ints/sec,   769 k doubles/sec,   mt19937
      565 k ints/sec,   571 k doubles/sec,   ranlxs0
      400 k ints/sec,   405 k doubles/sec,   ranlxs1
      490 k ints/sec,   389 k doubles/sec,   mrg
      407 k ints/sec,   297 k doubles/sec,   ranlux
      243 k ints/sec,   254 k doubles/sec,   ranlxd1
      251 k ints/sec,   253 k doubles/sec,   ranlxs2
      238 k ints/sec,   215 k doubles/sec,   cmrg
      247 k ints/sec,   198 k doubles/sec,   ranlux389
      141 k ints/sec,   140 k doubles/sec,   ranlxd2

 File: gsl-ref.info, Node: Examples<12>, Next: References and Further Reading<13>, Prev: Performance, Up: Random Number Generation 18.13 Examples ============== The following program demonstrates the use of a random number generator to produce uniform random numbers in the range [0.0, 1.0),

     #include <stdio.h>
     #include <gsl/gsl_rng.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       int i, n = 10;

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       for (i = 0; i < n; i++)
         {
           double u = gsl_rng_uniform (r);
           printf ("%.5f\n", u);
         }

       gsl_rng_free (r);

       return 0;
     }

Here is the output of the program,

     0.99974
     0.16291
     0.28262
     0.94720
     0.23166
     0.48497
     0.95748
     0.74431
     0.54004
     0.73995

The numbers depend on the seed used by the generator. The default seed can be changed with the *note GSL_RNG_SEED: 690. environment variable to produce a different stream of numbers. The generator itself can be changed using the environment variable *note GSL_RNG_TYPE: 6a5. Here is the output of the program using a seed value of 123 and the multiple-recursive generator ‘mrg’:

     $ GSL_RNG_SEED=123 GSL_RNG_TYPE=mrg ./a.out
     0.33050
     0.86631
     0.32982
     0.67620
     0.53391
     0.06457
     0.16847
     0.70229
     0.04371
     0.86374

 File: gsl-ref.info, Node: References and Further Reading<13>, Next: Acknowledgements, Prev: Examples<12>, Up: Random Number Generation 18.14 References and Further Reading ==================================== The subject of random number generation and testing is reviewed extensively in Knuth’s `Seminumerical Algorithms'. * Donald E. Knuth, The Art of Computer Programming: Seminumerical Algorithms (Vol 2, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896842. Further information is available in the review paper written by Pierre L’Ecuyer, * P. L’Ecuyer, “Random Number Generation”, Chapter 4 of the Handbook on Simulation, Jerry Banks Ed., Wiley, 1998, 93–137. * ‘http://www.iro.umontreal.ca/~lecuyer/papers.html’ in the file ‘handsim.ps’. The source code for the DIEHARD random number generator tests is also available online, * DIEHARD source code, G. Marsaglia, ‘http://stat.fsu.edu/pub/diehard/’ A comprehensive set of random number generator tests is available from NIST, * NIST Special Publication 800-22, “A Statistical Test Suite for the Validation of Random Number Generators and Pseudo Random Number Generators for Cryptographic Applications”.
* ‘http://csrc.nist.gov/rng/’  File: gsl-ref.info, Node: Acknowledgements, Prev: References and Further Reading<13>, Up: Random Number Generation 18.15 Acknowledgements ====================== Thanks to Makoto Matsumoto, Takuji Nishimura and Yoshiharu Kurita for making the source code to their generators (MT19937, MM&TN; TT800, MM&YK) available under the GNU General Public License. Thanks to Martin Luscher for providing notes and source code for the RANLXS and RANLXD generators.  File: gsl-ref.info, Node: Quasi-Random Sequences, Next: Random Number Distributions, Prev: Random Number Generation, Up: Top 19 Quasi-Random Sequences ************************* This chapter describes functions for generating quasi-random sequences in arbitrary dimensions. A quasi-random sequence progressively covers a d-dimensional space with a set of points that are uniformly distributed. Quasi-random sequences are also known as low-discrepancy sequences. The quasi-random sequence generators use an interface that is similar to the interface for random number generators, except that seeding is not required—each generator produces a single sequence. The functions described in this section are declared in the header file ‘gsl_qrng.h’. * Menu: * Quasi-random number generator initialization:: * Sampling from a quasi-random number generator:: * Auxiliary quasi-random number generator functions:: * Saving and restoring quasi-random number generator state:: * Quasi-random number generator algorithms:: * Examples: Examples<13>. * References::  File: gsl-ref.info, Node: Quasi-random number generator initialization, Next: Sampling from a quasi-random number generator, Up: Quasi-Random Sequences 19.1 Quasi-random number generator initialization ================================================= -- Type: gsl_qrng This is a workspace for computing quasi-random sequences. -- Function: *note gsl_qrng: 6dc. *gsl_qrng_alloc (const gsl_qrng_type *T, unsigned int d) This function returns a pointer to a newly-created instance of a quasi-random sequence generator of type *note T: 6dd. and dimension *note d: 6dd. If there is insufficient memory to create the generator then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. -- Function: void gsl_qrng_free (gsl_qrng *q) This function frees all the memory associated with the generator *note q: 6de. -- Function: void gsl_qrng_init (gsl_qrng *q) This function reinitializes the generator *note q: 6df. to its starting point. Note that quasi-random sequences do not use a seed and always produce the same set of values.  File: gsl-ref.info, Node: Sampling from a quasi-random number generator, Next: Auxiliary quasi-random number generator functions, Prev: Quasi-random number generator initialization, Up: Quasi-Random Sequences 19.2 Sampling from a quasi-random number generator ================================================== -- Function: int gsl_qrng_get (const gsl_qrng *q, double x[]) This function stores the next point from the sequence generator *note q: 6e1. in the array *note x: 6e1. The space available for *note x: 6e1. must match the dimension of the generator. The point *note x: 6e1. will lie in the range 0 < x_i < 1 for each x_i. An inline version of this function is used when ‘HAVE_INLINE’ is defined.  
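As an informal illustration of how the points returned by ‘gsl_qrng_get()’ are typically consumed, the following sketch uses a 2-dimensional Sobol sequence to estimate the area of a quarter circle; the sample size and the choice of integrand are arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_qrng.h>

     int
     main (void)
     {
       int i, n = 10000, inside = 0;
       double v[2];

       /* 2-dimensional Sobol sequence; quasi-random generators need no seed */
       gsl_qrng * q = gsl_qrng_alloc (gsl_qrng_sobol, 2);

       for (i = 0; i < n; i++)
         {
           gsl_qrng_get (q, v);
           if (v[0] * v[0] + v[1] * v[1] < 1.0)
             inside++;
         }

       /* the fraction of points inside the quarter circle approaches pi/4 */
       printf ("pi estimate = %g\n", 4.0 * (double) inside / n);

       gsl_qrng_free (q);
       return 0;
     }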
File: gsl-ref.info, Node: Auxiliary quasi-random number generator functions, Next: Saving and restoring quasi-random number generator state, Prev: Sampling from a quasi-random number generator, Up: Quasi-Random Sequences 19.3 Auxiliary quasi-random number generator functions ====================================================== -- Function: const char *gsl_qrng_name (const gsl_qrng *q) This function returns a pointer to the name of the generator. -- Function: size_t gsl_qrng_size (const gsl_qrng *q) -- Function: void *gsl_qrng_state (const gsl_qrng *q) These functions return a pointer to the state of generator ‘q’ and its size. You can use this information to access the state directly. For example, the following code will write the state of a generator to a stream:

     void * state = gsl_qrng_state (q);
     size_t n = gsl_qrng_size (q);
     fwrite (state, n, 1, stream);

 File: gsl-ref.info, Node: Saving and restoring quasi-random number generator state, Next: Quasi-random number generator algorithms, Prev: Auxiliary quasi-random number generator functions, Up: Quasi-Random Sequences 19.4 Saving and restoring quasi-random number generator state ============================================================= -- Function: int gsl_qrng_memcpy (gsl_qrng *dest, const gsl_qrng *src) This function copies the quasi-random sequence generator *note src: 6e7. into the pre-existing generator *note dest: 6e7, making *note dest: 6e7. into an exact copy of *note src: 6e7. The two generators must be of the same type. -- Function: *note gsl_qrng: 6dc. *gsl_qrng_clone (const gsl_qrng *q) This function returns a pointer to a newly created generator which is an exact copy of the generator *note q: 6e8.  File: gsl-ref.info, Node: Quasi-random number generator algorithms, Next: Examples<13>, Prev: Saving and restoring quasi-random number generator state, Up: Quasi-Random Sequences 19.5 Quasi-random number generator algorithms ============================================= The following quasi-random sequence algorithms are available, -- Type: gsl_qrng_type -- Variable: *note gsl_qrng_type: 6ea. *gsl_qrng_niederreiter_2 This generator uses the algorithm described in Bratley, Fox, Niederreiter, ACM Trans. Model. Comp. Sim. 2, 195 (1992). It is valid up to 12 dimensions. -- Variable: *note gsl_qrng_type: 6ea. *gsl_qrng_sobol This generator uses the Sobol sequence described in Antonov, Saleev, USSR Comput. Maths. Math. Phys. 19, 252 (1980). It is valid up to 40 dimensions. -- Variable: *note gsl_qrng_type: 6ea. *gsl_qrng_halton -- Variable: *note gsl_qrng_type: 6ea. *gsl_qrng_reversehalton These generators use the Halton and reverse Halton sequences described in J.H. Halton, Numerische Mathematik, 2, 84-90 (1960) and B. Vandewoestyne and R. Cools, Computational and Applied Mathematics, 189, 1&2, 341-361 (2006). They are valid up to 1229 dimensions.  File: gsl-ref.info, Node: Examples<13>, Next: References, Prev: Quasi-random number generator algorithms, Up: Quasi-Random Sequences 19.6 Examples ============= The following program prints the first 1024 points of the 2-dimensional Sobol sequence.

     #include <stdio.h>
     #include <gsl/gsl_qrng.h>

     int
     main (void)
     {
       int i;
       gsl_qrng * q = gsl_qrng_alloc (gsl_qrng_sobol, 2);

       for (i = 0; i < 1024; i++)
         {
           double v[2];
           gsl_qrng_get (q, v);
           printf ("%.5f %.5f\n", v[0], v[1]);
         }

       gsl_qrng_free (q);
       return 0;
     }

Here is the output from the program:

     $ ./a.out
     0.50000 0.50000
     0.75000 0.25000
     0.25000 0.75000
     0.37500 0.37500
     0.87500 0.87500
     0.62500 0.12500
     0.12500 0.62500
     ....
It can be seen that successive points progressively fill in the spaces between previous points. The figure below shows the distribution in the x-y plane of the first 1024 points from the Sobol sequence, [gsl-ref-figures/qrng] Figure: Distribution of the first 1024 points from the quasi-random Sobol sequence  File: gsl-ref.info, Node: References, Prev: Examples<13>, Up: Quasi-Random Sequences 19.7 References =============== The implementations of the quasi-random sequence routines are based on the algorithms described in the following paper, * P. Bratley and B.L. Fox and H. Niederreiter, “Algorithm 738: Programs to Generate Niederreiter’s Low-discrepancy Sequences”, ACM Transactions on Mathematical Software, Vol. 20, No. 4, December 1994, p. 494–495.  File: gsl-ref.info, Node: Random Number Distributions, Next: Statistics, Prev: Quasi-Random Sequences, Up: Top 20 Random Number Distributions ****************************** This chapter describes functions for generating random variates and computing their probability distributions. Samples from the distributions described in this chapter can be obtained using any of the random number generators in the library as an underlying source of randomness. In the simplest cases a non-uniform distribution can be obtained analytically from the uniform distribution of a random number generator by applying an appropriate transformation. This method uses one call to the random number generator. More complicated distributions are created by the `acceptance-rejection' method, which compares the desired distribution against a distribution which is similar and known analytically. This usually requires several samples from the generator. The library also provides cumulative distribution functions and inverse cumulative distribution functions, sometimes referred to as quantile functions. The cumulative distribution functions and their inverses are computed separately for the upper and lower tails of the distribution, allowing full accuracy to be retained for small results. The functions for random variates and probability density functions described in this section are declared in ‘gsl_randist.h’. The corresponding cumulative distribution functions are declared in ‘gsl_cdf.h’. Note that the discrete random variate functions always return a value of type ‘unsigned int’, and on most platforms this has a maximum value of 2^{32}-1 (approximately 4.29e9). They should only be called with a safe range of parameters (where there is a negligible probability of a variate exceeding this limit) to prevent incorrect results due to overflow. * Menu: * Introduction: Introduction<3>. * The Gaussian Distribution:: * The Gaussian Tail Distribution:: * The Bivariate Gaussian Distribution:: * The Multivariate Gaussian Distribution:: * The Exponential Distribution:: * The Laplace Distribution:: * The Exponential Power Distribution:: * The Cauchy Distribution:: * The Rayleigh Distribution:: * The Rayleigh Tail Distribution:: * The Landau Distribution:: * The Levy alpha-Stable Distributions:: * The Levy skew alpha-Stable Distribution:: * The Gamma Distribution:: * The Flat (Uniform) Distribution: The Flat Uniform Distribution.
* The Lognormal Distribution:: * The Chi-squared Distribution:: * The F-distribution:: * The t-distribution:: * The Beta Distribution:: * The Logistic Distribution:: * The Pareto Distribution:: * Spherical Vector Distributions:: * The Weibull Distribution:: * The Type-1 Gumbel Distribution:: * The Type-2 Gumbel Distribution:: * The Dirichlet Distribution:: * General Discrete Distributions:: * The Poisson Distribution:: * The Bernoulli Distribution:: * The Binomial Distribution:: * The Multinomial Distribution:: * The Negative Binomial Distribution:: * The Pascal Distribution:: * The Geometric Distribution:: * The Hypergeometric Distribution:: * The Logarithmic Distribution:: * The Wishart Distribution:: * Shuffling and Sampling:: * Examples: Examples<14>. * References and Further Reading: References and Further Reading<14>.  File: gsl-ref.info, Node: Introduction<3>, Next: The Gaussian Distribution, Up: Random Number Distributions 20.1 Introduction ================= Continuous random number distributions are defined by a probability density function, p(x), such that the probability of x occurring in the infinitesimal range x to x + dx is p(x) dx. The cumulative distribution function for the lower tail P(x) is defined by the integral, P(x) = \int_{-\infty}^{x} dx' p(x') and gives the probability of a variate taking a value less than x. The cumulative distribution function for the upper tail Q(x) is defined by the integral, Q(x) = \int_{x}^{+\infty} dx' p(x') and gives the probability of a variate taking a value greater than x. The upper and lower cumulative distribution functions are related by P(x) + Q(x) = 1 and satisfy 0 \le P(x) \le 1, 0 \le Q(x) \le 1. The inverse cumulative distributions, x = P^{-1}(P) and x = Q^{-1}(Q) give the values of x which correspond to a specific value of P or Q. They can be used to find confidence limits from probability values. For discrete distributions the probability of sampling the integer value k is given by p(k), where \sum_k p(k) = 1. The cumulative distribution for the lower tail P(k) of a discrete distribution is defined as, P(k) = \sum_{i \le k} p(i) where the sum is over the allowed range of the distribution less than or equal to k. The cumulative distribution for the upper tail of a discrete distribution Q(k) is defined as Q(k) = \sum_{i > k} p(i) giving the sum of probabilities for all values greater than k. These two definitions satisfy the identity P(k)+Q(k)=1. If the range of the distribution is 1 to n inclusive then P(n) = 1, Q(n) = 0 while P(1) = p(1), Q(1) = 1 - p(1).  File: gsl-ref.info, Node: The Gaussian Distribution, Next: The Gaussian Tail Distribution, Prev: Introduction<3>, Up: Random Number Distributions 20.2 The Gaussian Distribution ============================== -- Function: double gsl_ran_gaussian (const gsl_rng *r, double sigma) This function returns a Gaussian random variate, with mean zero and standard deviation *note sigma: 6f6. The probability distribution for Gaussian random variates is, p(x) dx = {1 \over \sqrt{2 \pi \sigma^2}} \exp (-x^2 / 2\sigma^2) dx for x in the range -\infty to +\infty. Use the transformation z = \mu + x on the numbers returned by *note gsl_ran_gaussian(): 6f6. to obtain a Gaussian distribution with mean \mu. This function uses the Box-Muller algorithm which requires two calls to the random number generator *note r: 6f6. -- Function: double gsl_ran_gaussian_pdf (double x, double sigma) This function computes the probability density p(x) at *note x: 6f7. 
for a Gaussian distribution with standard deviation *note sigma: 6f7, using the formula given above. [gsl-ref-figures/rand-gaussian] -- Function: double gsl_ran_gaussian_ziggurat (const gsl_rng *r, double sigma) -- Function: double gsl_ran_gaussian_ratio_method (const gsl_rng *r, double sigma) This function computes a Gaussian random variate using the alternative Marsaglia-Tsang ziggurat and Kinderman-Monahan-Leva ratio methods. The Ziggurat algorithm is the fastest available algorithm in most cases. -- Function: double gsl_ran_ugaussian (const gsl_rng *r) -- Function: double gsl_ran_ugaussian_pdf (double x) -- Function: double gsl_ran_ugaussian_ratio_method (const gsl_rng *r) These functions compute results for the unit Gaussian distribution. They are equivalent to the functions above with a standard deviation of one, ‘sigma’ = 1. -- Function: double gsl_cdf_gaussian_P (double x, double sigma) -- Function: double gsl_cdf_gaussian_Q (double x, double sigma) -- Function: double gsl_cdf_gaussian_Pinv (double P, double sigma) -- Function: double gsl_cdf_gaussian_Qinv (double Q, double sigma) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Gaussian distribution with standard deviation *note sigma: 700. -- Function: double gsl_cdf_ugaussian_P (double x) -- Function: double gsl_cdf_ugaussian_Q (double x) -- Function: double gsl_cdf_ugaussian_Pinv (double P) -- Function: double gsl_cdf_ugaussian_Qinv (double Q) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the unit Gaussian distribution.  File: gsl-ref.info, Node: The Gaussian Tail Distribution, Next: The Bivariate Gaussian Distribution, Prev: The Gaussian Distribution, Up: Random Number Distributions 20.3 The Gaussian Tail Distribution =================================== -- Function: double gsl_ran_gaussian_tail (const gsl_rng *r, double a, double sigma) This function provides random variates from the upper tail of a Gaussian distribution with standard deviation *note sigma: 706. The values returned are larger than the lower limit *note a: 706, which must be positive. The method is based on Marsaglia’s famous rectangle-wedge-tail algorithm (Ann. Math. Stat. 32, 894–899 (1961)), with this aspect explained in Knuth, v2, 3rd ed, p139,586 (exercise 11). The probability distribution for Gaussian tail random variates is, p(x) dx = {1 \over N(a;\sigma) \sqrt{2 \pi \sigma^2}} \exp (- x^2 / 2\sigma^2) dx for x > a where N(a;\sigma) is the normalization constant, N(a;\sigma) = (1/2) erfc(a / sqrt(2 sigma^2)). -- Function: double gsl_ran_gaussian_tail_pdf (double x, double a, double sigma) This function computes the probability density p(x) at *note x: 707. for a Gaussian tail distribution with standard deviation *note sigma: 707. and lower limit *note a: 707, using the formula given above. [gsl-ref-figures/rand-gaussian-tail] -- Function: double gsl_ran_ugaussian_tail (const gsl_rng *r, double a) -- Function: double gsl_ran_ugaussian_tail_pdf (double x, double a) These functions compute results for the tail of a unit Gaussian distribution. They are equivalent to the functions above with a standard deviation of one, ‘sigma’ = 1.  
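The following fragment is a minimal sketch showing how the Gaussian sampling, tail sampling and cumulative distribution functions above are typically combined; the values of ‘mu’, ‘sigma’ and the tail limit are arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       gsl_rng * r;
       double mu = 10.0, sigma = 2.0;
       int i;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       for (i = 0; i < 5; i++)
         {
           /* shift a zero-mean variate to obtain mean mu */
           double z = mu + gsl_ran_gaussian (r, sigma);
           printf ("%g  P = %g\n", z, gsl_cdf_gaussian_P (z - mu, sigma));
         }

       /* a variate from the upper tail, always greater than a = 5 */
       printf ("tail variate = %g\n", gsl_ran_gaussian_tail (r, 5.0, sigma));

       gsl_rng_free (r);
       return 0;
     }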
File: gsl-ref.info, Node: The Bivariate Gaussian Distribution, Next: The Multivariate Gaussian Distribution, Prev: The Gaussian Tail Distribution, Up: Random Number Distributions 20.4 The Bivariate Gaussian Distribution ======================================== -- Function: void gsl_ran_bivariate_gaussian (const gsl_rng *r, double sigma_x, double sigma_y, double rho, double *x, double *y) This function generates a pair of correlated Gaussian variates, with mean zero, correlation coefficient *note rho: 70b. and standard deviations *note sigma_x: 70b. and *note sigma_y: 70b. in the x and y directions. The probability distribution for bivariate Gaussian random variates is, p(x,y) dx dy = {1 \over 2 \pi \sigma_x \sigma_y \sqrt{1-\rho^2}} \exp (-(x^2/\sigma_x^2 + y^2/\sigma_y^2 - 2 \rho x y/(\sigma_x\sigma_y))/2(1-\rho^2)) dx dy for x,y in the range -\infty to +\infty. The correlation coefficient *note rho: 70b. should lie between 1 and -1. -- Function: double gsl_ran_bivariate_gaussian_pdf (double x, double y, double sigma_x, double sigma_y, double rho) This function computes the probability density p(x,y) at (*note x: 70c, *note y: 70c.) for a bivariate Gaussian distribution with standard deviations *note sigma_x: 70c, *note sigma_y: 70c. and correlation coefficient *note rho: 70c, using the formula given above. [gsl-ref-figures/rand-bivariate-gaussian]  File: gsl-ref.info, Node: The Multivariate Gaussian Distribution, Next: The Exponential Distribution, Prev: The Bivariate Gaussian Distribution, Up: Random Number Distributions 20.5 The Multivariate Gaussian Distribution =========================================== -- Function: int gsl_ran_multivariate_gaussian (const gsl_rng *r, const gsl_vector *mu, const gsl_matrix *L, gsl_vector *result) This function generates a random vector satisfying the k-dimensional multivariate Gaussian distribution with mean \mu and variance-covariance matrix \Sigma. On input, the k-vector \mu is given in *note mu: 70e, and the Cholesky factor of the k-by-k matrix \Sigma = L L^T is given in the lower triangle of *note L: 70e, as output from *note gsl_linalg_cholesky_decomp(): 553. The random vector is stored in *note result: 70e. on output. The probability distribution for multivariate Gaussian random variates is p(x_1,...,x_k) dx_1 ... dx_k = {1 \over \sqrt{(2 \pi)^k |\Sigma|}} \exp (-1/2 (x - \mu)^T \Sigma^{-1} (x - \mu)) dx_1 ... dx_k -- Function: int gsl_ran_multivariate_gaussian_pdf (const gsl_vector *x, const gsl_vector *mu, const gsl_matrix *L, double *result, gsl_vector *work) -- Function: int gsl_ran_multivariate_gaussian_log_pdf (const gsl_vector *x, const gsl_vector *mu, const gsl_matrix *L, double *result, gsl_vector *work) These functions compute p(x) or \log{p(x)} at the point *note x: 710, using mean vector *note mu: 710. and variance-covariance matrix specified by its Cholesky factor *note L: 710. using the formula above. Additional workspace of length k is required in *note work: 710. -- Function: int gsl_ran_multivariate_gaussian_mean (const gsl_matrix *X, gsl_vector *mu_hat) Given a set of n samples X_j from a k-dimensional multivariate Gaussian distribution, this function computes the maximum likelihood estimate of the mean of the distribution, given by \Hat{\mu} = {1 \over n} \sum_{j=1}^n X_j The samples X_1,X_2,\dots,X_n are given in the n-by-k matrix *note X: 711, and the maximum likelihood estimate of the mean is stored in *note mu_hat: 711. on output.
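As an informal sketch of how the sampling function ‘gsl_ran_multivariate_gaussian()’ above is typically set up, the fragment below fills a 2-by-2 covariance matrix (the mean and covariance values are arbitrary), factors it in place with ‘gsl_linalg_cholesky_decomp()’ and draws a few correlated vectors.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>
     #include <gsl/gsl_vector.h>
     #include <gsl/gsl_matrix.h>
     #include <gsl/gsl_linalg.h>

     int
     main (void)
     {
       const size_t k = 2;
       gsl_rng * r;
       gsl_vector * mu = gsl_vector_alloc (k);
       gsl_vector * x = gsl_vector_alloc (k);
       gsl_matrix * L = gsl_matrix_alloc (k, k);
       int i;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       gsl_vector_set (mu, 0, 1.0);
       gsl_vector_set (mu, 1, 2.0);

       /* variance-covariance matrix Sigma (illustrative values) */
       gsl_matrix_set (L, 0, 0, 1.0);  gsl_matrix_set (L, 0, 1, 0.5);
       gsl_matrix_set (L, 1, 0, 0.5);  gsl_matrix_set (L, 1, 1, 2.0);

       /* replace Sigma by its Cholesky factor; the lower triangle holds L */
       gsl_linalg_cholesky_decomp (L);

       for (i = 0; i < 5; i++)
         {
           gsl_ran_multivariate_gaussian (r, mu, L, x);
           printf ("%g %g\n", gsl_vector_get (x, 0), gsl_vector_get (x, 1));
         }

       gsl_vector_free (mu);
       gsl_vector_free (x);
       gsl_matrix_free (L);
       gsl_rng_free (r);
       return 0;
     }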
-- Function: int gsl_ran_multivariate_gaussian_vcov (const gsl_matrix *X, gsl_matrix *sigma_hat) Given a set of n samples X_j from a k-dimensional multivariate Gaussian distribution, this function computes the maximum likelihood estimate of the variance-covariance matrix of the distribution, given by \Hat{\Sigma} = (1 / n) \sum_{j=1}^n ( X_j - \Hat{\mu} ) ( X_j - \Hat{\mu} )^T The samples X_1,X_2,\dots,X_n are given in the n-by-k matrix *note X: 712. and the maximum likelihood estimate of the variance-covariance matrix is stored in *note sigma_hat: 712. on output.  File: gsl-ref.info, Node: The Exponential Distribution, Next: The Laplace Distribution, Prev: The Multivariate Gaussian Distribution, Up: Random Number Distributions 20.6 The Exponential Distribution ================================= -- Function: double gsl_ran_exponential (const gsl_rng *r, double mu) This function returns a random variate from the exponential distribution with mean *note mu: 714. The distribution is, p(x) dx = {1 \over \mu} \exp(-x/\mu) dx for x \ge 0. -- Function: double gsl_ran_exponential_pdf (double x, double mu) This function computes the probability density p(x) at *note x: 715. for an exponential distribution with mean *note mu: 715, using the formula given above. [gsl-ref-figures/rand-exponential] -- Function: double gsl_cdf_exponential_P (double x, double mu) -- Function: double gsl_cdf_exponential_Q (double x, double mu) -- Function: double gsl_cdf_exponential_Pinv (double P, double mu) -- Function: double gsl_cdf_exponential_Qinv (double Q, double mu) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the exponential distribution with mean *note mu: 719.  File: gsl-ref.info, Node: The Laplace Distribution, Next: The Exponential Power Distribution, Prev: The Exponential Distribution, Up: Random Number Distributions 20.7 The Laplace Distribution ============================= -- Function: double gsl_ran_laplace (const gsl_rng *r, double a) This function returns a random variate from the Laplace distribution with width *note a: 71b. The distribution is, p(x) dx = {1 \over 2 a} \exp(-|x/a|) dx for -\infty < x < \infty. -- Function: double gsl_ran_laplace_pdf (double x, double a) This function computes the probability density p(x) at *note x: 71c. for a Laplace distribution with width *note a: 71c, using the formula given above. [gsl-ref-figures/rand-laplace] -- Function: double gsl_cdf_laplace_P (double x, double a) -- Function: double gsl_cdf_laplace_Q (double x, double a) -- Function: double gsl_cdf_laplace_Pinv (double P, double a) -- Function: double gsl_cdf_laplace_Qinv (double Q, double a) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Laplace distribution with width *note a: 720.  File: gsl-ref.info, Node: The Exponential Power Distribution, Next: The Cauchy Distribution, Prev: The Laplace Distribution, Up: Random Number Distributions 20.8 The Exponential Power Distribution ======================================= -- Function: double gsl_ran_exppow (const gsl_rng *r, double a, double b) This function returns a random variate from the exponential power distribution with scale parameter *note a: 722. and exponent *note b: 722. The distribution is, p(x) dx = {1 \over 2 a \Gamma(1+1/b)} \exp(-|x/a|^b) dx for x \ge 0. For b = 1 this reduces to the Laplace distribution. For b = 2 it has the same form as a Gaussian distribution, but with a = \sqrt{2} \sigma. 
-- Function: double gsl_ran_exppow_pdf (double x, double a, double b) This function computes the probability density p(x) at *note x: 723. for an exponential power distribution with scale parameter *note a: 723. and exponent *note b: 723, using the formula given above. [gsl-ref-figures/rand-exppow] -- Function: double gsl_cdf_exppow_P (double x, double a, double b) -- Function: double gsl_cdf_exppow_Q (double x, double a, double b) These functions compute the cumulative distribution functions P(x), Q(x) for the exponential power distribution with parameters *note a: 725. and *note b: 725.  File: gsl-ref.info, Node: The Cauchy Distribution, Next: The Rayleigh Distribution, Prev: The Exponential Power Distribution, Up: Random Number Distributions 20.9 The Cauchy Distribution ============================ -- Function: double gsl_ran_cauchy (const gsl_rng *r, double a) This function returns a random variate from the Cauchy distribution with scale parameter *note a: 727. The probability distribution for Cauchy random variates is, p(x) dx = {1 \over a\pi (1 + (x/a)^2) } dx for x in the range -\infty to +\infty. The Cauchy distribution is also known as the Lorentz distribution. -- Function: double gsl_ran_cauchy_pdf (double x, double a) This function computes the probability density p(x) at *note x: 728. for a Cauchy distribution with scale parameter *note a: 728, using the formula given above. [gsl-ref-figures/rand-cauchy] -- Function: double gsl_cdf_cauchy_P (double x, double a) -- Function: double gsl_cdf_cauchy_Q (double x, double a) -- Function: double gsl_cdf_cauchy_Pinv (double P, double a) -- Function: double gsl_cdf_cauchy_Qinv (double Q, double a) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Cauchy distribution with scale parameter *note a: 72c.  File: gsl-ref.info, Node: The Rayleigh Distribution, Next: The Rayleigh Tail Distribution, Prev: The Cauchy Distribution, Up: Random Number Distributions 20.10 The Rayleigh Distribution =============================== -- Function: double gsl_ran_rayleigh (const gsl_rng *r, double sigma) This function returns a random variate from the Rayleigh distribution with scale parameter *note sigma: 72e. The distribution is, p(x) dx = {x \over \sigma^2} \exp(- x^2/(2 \sigma^2)) dx for x > 0. -- Function: double gsl_ran_rayleigh_pdf (double x, double sigma) This function computes the probability density p(x) at *note x: 72f. for a Rayleigh distribution with scale parameter *note sigma: 72f, using the formula given above. [gsl-ref-figures/rand-rayleigh] -- Function: double gsl_cdf_rayleigh_P (double x, double sigma) -- Function: double gsl_cdf_rayleigh_Q (double x, double sigma) -- Function: double gsl_cdf_rayleigh_Pinv (double P, double sigma) -- Function: double gsl_cdf_rayleigh_Qinv (double Q, double sigma) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Rayleigh distribution with scale parameter *note sigma: 733.  File: gsl-ref.info, Node: The Rayleigh Tail Distribution, Next: The Landau Distribution, Prev: The Rayleigh Distribution, Up: Random Number Distributions 20.11 The Rayleigh Tail Distribution ==================================== -- Function: double gsl_ran_rayleigh_tail (const gsl_rng *r, double a, double sigma) This function returns a random variate from the tail of the Rayleigh distribution with scale parameter *note sigma: 735. and a lower limit of *note a: 735. 
The distribution is, p(x) dx = {x \over \sigma^2} \exp ((a^2 - x^2) /(2 \sigma^2)) dx for x > a. -- Function: double gsl_ran_rayleigh_tail_pdf (double x, double a, double sigma) This function computes the probability density p(x) at *note x: 736. for a Rayleigh tail distribution with scale parameter *note sigma: 736. and lower limit *note a: 736, using the formula given above. [gsl-ref-figures/rand-rayleigh-tail]  File: gsl-ref.info, Node: The Landau Distribution, Next: The Levy alpha-Stable Distributions, Prev: The Rayleigh Tail Distribution, Up: Random Number Distributions 20.12 The Landau Distribution ============================= -- Function: double gsl_ran_landau (const gsl_rng *r) This function returns a random variate from the Landau distribution. The probability distribution for Landau random variates is defined analytically by the complex integral, p(x) = (1/(2 \pi i)) \int_{c-i\infty}^{c+i\infty} ds exp(s log(s) + x s) For numerical purposes it is more convenient to use the following equivalent form of the integral, p(x) = (1/\pi) \int_0^\infty dt \exp(-t \log(t) - x t) \sin(\pi t). -- Function: double gsl_ran_landau_pdf (double x) This function computes the probability density p(x) at *note x: 739. for the Landau distribution using an approximation to the formula given above. [gsl-ref-figures/rand-landau]  File: gsl-ref.info, Node: The Levy alpha-Stable Distributions, Next: The Levy skew alpha-Stable Distribution, Prev: The Landau Distribution, Up: Random Number Distributions 20.13 The Levy alpha-Stable Distributions ========================================= -- Function: double gsl_ran_levy (const gsl_rng *r, double c, double alpha) This function returns a random variate from the Levy symmetric stable distribution with scale *note c: 73b. and exponent *note alpha: 73b. The symmetric stable probability distribution is defined by a Fourier transform, p(x) = 1 / (2 \pi) \int_{-\infty}^{+\infty} dt \exp(-it x - |c t|^alpha) There is no explicit solution for the form of p(x) and the library does not define a corresponding ‘pdf’ function. For \alpha = 1 the distribution reduces to the Cauchy distribution. For \alpha = 2 it is a Gaussian distribution with \sigma = \sqrt{2} c. For \alpha < 1 the tails of the distribution become extremely wide. The algorithm only works for 0 < \alpha \le 2. [gsl-ref-figures/rand-levy]  File: gsl-ref.info, Node: The Levy skew alpha-Stable Distribution, Next: The Gamma Distribution, Prev: The Levy alpha-Stable Distributions, Up: Random Number Distributions 20.14 The Levy skew alpha-Stable Distribution ============================================= -- Function: double gsl_ran_levy_skew (const gsl_rng *r, double c, double alpha, double beta) This function returns a random variate from the Levy skew stable distribution with scale *note c: 73d, exponent *note alpha: 73d. and skewness parameter *note beta: 73d. The skewness parameter must lie in the range [-1,1]. The Levy skew stable probability distribution is defined by a Fourier transform, p(x) = 1 / (2 \pi) \int_{-\infty}^{+\infty} dt \exp(-it x - |c t|^alpha (1-i beta sign(t) tan(pi alpha/2))) When \alpha = 1 the term \tan(\pi \alpha/2) is replaced by -(2/\pi)\log|t|. There is no explicit solution for the form of p(x) and the library does not define a corresponding ‘pdf’ function. For \alpha = 2 the distribution reduces to a Gaussian distribution with \sigma = \sqrt{2} c and the skewness parameter has no effect. For \alpha < 1 the tails of the distribution become extremely wide. 
The symmetric distribution corresponds to \beta = 0. The algorithm only works for 0 < \alpha \le 2. The Levy alpha-stable distributions have the property that if N alpha-stable variates are drawn from the distribution p(c, \alpha, \beta) then the sum Y = X_1 + X_2 + \dots + X_N will also be distributed as an alpha-stable variate, p(N^{1/\alpha} c, \alpha, \beta). [gsl-ref-figures/rand-levyskew]  File: gsl-ref.info, Node: The Gamma Distribution, Next: The Flat Uniform Distribution, Prev: The Levy skew alpha-Stable Distribution, Up: Random Number Distributions 20.15 The Gamma Distribution ============================ -- Function: double gsl_ran_gamma (const gsl_rng *r, double a, double b) This function returns a random variate from the gamma distribution. The distribution function is, p(x) dx = {1 \over \Gamma(a) b^a} x^{a-1} e^{-x/b} dx for x > 0. The gamma distribution with an integer parameter *note a: 73f. is known as the Erlang distribution. The variates are computed using the Marsaglia-Tsang fast gamma method. This function for this method was previously called ‘gsl_ran_gamma_mt()’ and can still be accessed using this name. -- Function: double gsl_ran_gamma_knuth (const gsl_rng *r, double a, double b) This function returns a gamma variate using the algorithms from Knuth (vol 2). -- Function: double gsl_ran_gamma_pdf (double x, double a, double b) This function computes the probability density p(x) at *note x: 741. for a gamma distribution with parameters *note a: 741. and *note b: 741, using the formula given above. [gsl-ref-figures/rand-gamma] -- Function: double gsl_cdf_gamma_P (double x, double a, double b) -- Function: double gsl_cdf_gamma_Q (double x, double a, double b) -- Function: double gsl_cdf_gamma_Pinv (double P, double a, double b) -- Function: double gsl_cdf_gamma_Qinv (double Q, double a, double b) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the gamma distribution with parameters *note a: 745. and *note b: 745.  File: gsl-ref.info, Node: The Flat Uniform Distribution, Next: The Lognormal Distribution, Prev: The Gamma Distribution, Up: Random Number Distributions 20.16 The Flat (Uniform) Distribution ===================================== -- Function: double gsl_ran_flat (const gsl_rng *r, double a, double b) This function returns a random variate from the flat (uniform) distribution from *note a: 747. to *note b: 747. The distribution is, p(x) dx = {1 \over (b-a)} dx if a \le x < b and 0 otherwise. -- Function: double gsl_ran_flat_pdf (double x, double a, double b) This function computes the probability density p(x) at *note x: 748. for a uniform distribution from *note a: 748. to *note b: 748, using the formula given above. [gsl-ref-figures/rand-flat] -- Function: double gsl_cdf_flat_P (double x, double a, double b) -- Function: double gsl_cdf_flat_Q (double x, double a, double b) -- Function: double gsl_cdf_flat_Pinv (double P, double a, double b) -- Function: double gsl_cdf_flat_Qinv (double Q, double a, double b) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for a uniform distribution from *note a: 74c. to *note b: 74c.  
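As a brief sketch of how the variate, CDF and inverse-CDF routines above fit together (the interval limits are arbitrary), the following fragment draws uniform variates on [2, 5) and checks that ‘gsl_cdf_flat_Pinv’ inverts ‘gsl_cdf_flat_P’.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       gsl_rng * r;
       double a = 2.0, b = 5.0;
       int i;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       for (i = 0; i < 3; i++)
         {
           double x = gsl_ran_flat (r, a, b);
           double P = gsl_cdf_flat_P (x, a, b);
           /* the inverse cumulative distribution recovers x from P */
           printf ("x = %g  P(x) = %g  Pinv(P) = %g\n",
                   x, P, gsl_cdf_flat_Pinv (P, a, b));
         }

       gsl_rng_free (r);
       return 0;
     }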
File: gsl-ref.info, Node: The Lognormal Distribution, Next: The Chi-squared Distribution, Prev: The Flat Uniform Distribution, Up: Random Number Distributions 20.17 The Lognormal Distribution ================================ -- Function: double gsl_ran_lognormal (const gsl_rng *r, double zeta, double sigma) This function returns a random variate from the lognormal distribution. The distribution function is, p(x) dx = {1 \over x \sqrt{2 \pi \sigma^2}} \exp(-(\ln(x) - \zeta)^2/2 \sigma^2) dx for x > 0. -- Function: double gsl_ran_lognormal_pdf (double x, double zeta, double sigma) This function computes the probability density p(x) at *note x: 74f. for a lognormal distribution with parameters *note zeta: 74f. and *note sigma: 74f, using the formula given above. [gsl-ref-figures/rand-lognormal] -- Function: double gsl_cdf_lognormal_P (double x, double zeta, double sigma) -- Function: double gsl_cdf_lognormal_Q (double x, double zeta, double sigma) -- Function: double gsl_cdf_lognormal_Pinv (double P, double zeta, double sigma) -- Function: double gsl_cdf_lognormal_Qinv (double Q, double zeta, double sigma) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the lognormal distribution with parameters *note zeta: 753. and *note sigma: 753.  File: gsl-ref.info, Node: The Chi-squared Distribution, Next: The F-distribution, Prev: The Lognormal Distribution, Up: Random Number Distributions 20.18 The Chi-squared Distribution ================================== The chi-squared distribution arises in statistics. If Y_i are n independent Gaussian random variates with unit variance then the sum-of-squares, X_i = \sum_i Y_i^2 has a chi-squared distribution with n degrees of freedom. -- Function: double gsl_ran_chisq (const gsl_rng *r, double nu) This function returns a random variate from the chi-squared distribution with *note nu: 755. degrees of freedom. The distribution function is, p(x) dx = {1 \over 2 \Gamma(\nu/2) } (x/2)^{\nu/2 - 1} \exp(-x/2) dx for x \ge 0. -- Function: double gsl_ran_chisq_pdf (double x, double nu) This function computes the probability density p(x) at *note x: 756. for a chi-squared distribution with *note nu: 756. degrees of freedom, using the formula given above. [gsl-ref-figures/rand-chisq] -- Function: double gsl_cdf_chisq_P (double x, double nu) -- Function: double gsl_cdf_chisq_Q (double x, double nu) -- Function: double gsl_cdf_chisq_Pinv (double P, double nu) -- Function: double gsl_cdf_chisq_Qinv (double Q, double nu) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the chi-squared distribution with *note nu: 75a. degrees of freedom.  File: gsl-ref.info, Node: The F-distribution, Next: The t-distribution, Prev: The Chi-squared Distribution, Up: Random Number Distributions 20.19 The F-distribution ======================== The F-distribution arises in statistics. If Y_1 and Y_2 are chi-squared deviates with \nu_1 and \nu_2 degrees of freedom then the ratio, X = { (Y_1 / \nu_1) \over (Y_2 / \nu_2) } has an F-distribution F(x;\nu_1,\nu_2). -- Function: double gsl_ran_fdist (const gsl_rng *r, double nu1, double nu2) This function returns a random variate from the F-distribution with degrees of freedom *note nu1: 75c. and *note nu2: 75c. The distribution function is, p(x) dx = { \Gamma((\nu_1 + \nu_2)/2) \over \Gamma(\nu_1/2) \Gamma(\nu_2/2) } \nu_1^{\nu_1/2} \nu_2^{\nu_2/2} x^{\nu_1/2 - 1} (\nu_2 + \nu_1 x)^{-\nu_1/2 -\nu_2/2} for x \ge 0. 
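As an informal example of how the F-distribution cumulative distribution routines listed below are used in practice (the statistic and degrees of freedom here are arbitrary), the upper-tail function gives the p-value of an observed F statistic and its inverse gives a critical value.

     #include <stdio.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       double F = 3.2, nu1 = 4.0, nu2 = 20.0;

       /* upper-tail probability of the observed statistic */
       printf ("p-value = %g\n", gsl_cdf_fdist_Q (F, nu1, nu2));

       /* critical value for a test at the 5% significance level */
       printf ("F_crit  = %g\n", gsl_cdf_fdist_Qinv (0.05, nu1, nu2));

       return 0;
     }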
-- Function: double gsl_ran_fdist_pdf (double x, double nu1, double nu2) This function computes the probability density p(x) at *note x: 75d. for an F-distribution with *note nu1: 75d. and *note nu2: 75d. degrees of freedom, using the formula given above. [gsl-ref-figures/rand-fdist] -- Function: double gsl_cdf_fdist_P (double x, double nu1, double nu2) -- Function: double gsl_cdf_fdist_Q (double x, double nu1, double nu2) -- Function: double gsl_cdf_fdist_Pinv (double P, double nu1, double nu2) -- Function: double gsl_cdf_fdist_Qinv (double Q, double nu1, double nu2) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the F-distribution with *note nu1: 761. and *note nu2: 761. degrees of freedom.  File: gsl-ref.info, Node: The t-distribution, Next: The Beta Distribution, Prev: The F-distribution, Up: Random Number Distributions 20.20 The t-distribution ======================== The t-distribution arises in statistics. If Y_1 has a normal distribution and Y_2 has a chi-squared distribution with \nu degrees of freedom then the ratio, X = { Y_1 \over \sqrt{Y_2 / \nu} } has a t-distribution t(x;\nu) with \nu degrees of freedom. -- Function: double gsl_ran_tdist (const gsl_rng *r, double nu) This function returns a random variate from the t-distribution. The distribution function is, p(x) dx = {\Gamma((\nu + 1)/2) \over \sqrt{\pi \nu} \Gamma(\nu/2)} (1 + x^2/\nu)^{-(\nu + 1)/2} dx for -\infty < x < +\infty. -- Function: double gsl_ran_tdist_pdf (double x, double nu) This function computes the probability density p(x) at *note x: 764. for a t-distribution with *note nu: 764. degrees of freedom, using the formula given above. [gsl-ref-figures/rand-tdist] -- Function: double gsl_cdf_tdist_P (double x, double nu) -- Function: double gsl_cdf_tdist_Q (double x, double nu) -- Function: double gsl_cdf_tdist_Pinv (double P, double nu) -- Function: double gsl_cdf_tdist_Qinv (double Q, double nu) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the t-distribution with *note nu: 768. degrees of freedom.  File: gsl-ref.info, Node: The Beta Distribution, Next: The Logistic Distribution, Prev: The t-distribution, Up: Random Number Distributions 20.21 The Beta Distribution =========================== -- Function: double gsl_ran_beta (const gsl_rng *r, double a, double b) This function returns a random variate from the beta distribution. The distribution function is, p(x) dx = {\Gamma(a+b) \over \Gamma(a) \Gamma(b)} x^{a-1} (1-x)^{b-1} dx for 0 \le x \le 1. -- Function: double gsl_ran_beta_pdf (double x, double a, double b) This function computes the probability density p(x) at *note x: 76b. for a beta distribution with parameters *note a: 76b. and *note b: 76b, using the formula given above. [gsl-ref-figures/rand-beta] -- Function: double gsl_cdf_beta_P (double x, double a, double b) -- Function: double gsl_cdf_beta_Q (double x, double a, double b) -- Function: double gsl_cdf_beta_Pinv (double P, double a, double b) -- Function: double gsl_cdf_beta_Qinv (double Q, double a, double b) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the beta distribution with parameters *note a: 76f. and *note b: 76f.  
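Tying this back to the discussion of confidence limits in the introduction, here is a small sketch (the confidence level and degrees of freedom are arbitrary) using the t-distribution quantile functions from the previous section.

     #include <stdio.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       double nu = 9.0;   /* degrees of freedom */

       /* two-sided 95% limit: the 97.5% quantile of the t-distribution */
       double t = gsl_cdf_tdist_Pinv (0.975, nu);
       printf ("t = %g\n", t);

       /* consistency check: P(t) + Q(t) = 1 */
       printf ("P + Q = %g\n", gsl_cdf_tdist_P (t, nu) + gsl_cdf_tdist_Q (t, nu));

       return 0;
     }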
File: gsl-ref.info, Node: The Logistic Distribution, Next: The Pareto Distribution, Prev: The Beta Distribution, Up: Random Number Distributions 20.22 The Logistic Distribution =============================== -- Function: double gsl_ran_logistic (const gsl_rng *r, double a) This function returns a random variate from the logistic distribution. The distribution function is, p(x) dx = { \exp(-x/a) \over a (1 + \exp(-x/a))^2 } dx for -\infty < x < +\infty. -- Function: double gsl_ran_logistic_pdf (double x, double a) This function computes the probability density p(x) at *note x: 772. for a logistic distribution with scale parameter *note a: 772, using the formula given above. [gsl-ref-figures/rand-logistic] -- Function: double gsl_cdf_logistic_P (double x, double a) -- Function: double gsl_cdf_logistic_Q (double x, double a) -- Function: double gsl_cdf_logistic_Pinv (double P, double a) -- Function: double gsl_cdf_logistic_Qinv (double Q, double a) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the logistic distribution with scale parameter *note a: 776.  File: gsl-ref.info, Node: The Pareto Distribution, Next: Spherical Vector Distributions, Prev: The Logistic Distribution, Up: Random Number Distributions 20.23 The Pareto Distribution ============================= -- Function: double gsl_ran_pareto (const gsl_rng *r, double a, double b) This function returns a random variate from the Pareto distribution of order *note a: 778. The distribution function is, p(x) dx = (a/b) / (x/b)^{a+1} dx for x \ge b. -- Function: double gsl_ran_pareto_pdf (double x, double a, double b) This function computes the probability density p(x) at *note x: 779. for a Pareto distribution with exponent *note a: 779. and scale *note b: 779, using the formula given above. [gsl-ref-figures/rand-pareto] -- Function: double gsl_cdf_pareto_P (double x, double a, double b) -- Function: double gsl_cdf_pareto_Q (double x, double a, double b) -- Function: double gsl_cdf_pareto_Pinv (double P, double a, double b) -- Function: double gsl_cdf_pareto_Qinv (double Q, double a, double b) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Pareto distribution with exponent *note a: 77d. and scale *note b: 77d.  File: gsl-ref.info, Node: Spherical Vector Distributions, Next: The Weibull Distribution, Prev: The Pareto Distribution, Up: Random Number Distributions 20.24 Spherical Vector Distributions ==================================== The spherical distributions generate random vectors, located on a spherical surface. They can be used as random directions, for example in the steps of a random walk. -- Function: void gsl_ran_dir_2d (const gsl_rng *r, double *x, double *y) -- Function: void gsl_ran_dir_2d_trig_method (const gsl_rng *r, double *x, double *y) This function returns a random direction vector v = (*note x: 780, *note y: 780.) in two dimensions. The vector is normalized such that |v|^2 = x^2 + y^2 = 1. The obvious way to do this is to take a uniform random number between 0 and 2\pi and let *note x: 780. and *note y: 780. be the sine and cosine respectively. Two trig functions would have been expensive in the old days, but with modern hardware implementations, this is sometimes the fastest way to go. This is the case for the Pentium (but not the case for the Sun Sparcstation). One can avoid the trig evaluations by choosing *note x: 780. and *note y: 780. 
in the interior of a unit circle (choose them at random from the interior of the enclosing square, and then reject those that are outside the unit circle), and then dividing by \sqrt{x^2 + y^2}. A much cleverer approach, attributed to von Neumann (See Knuth, v2, 3rd ed, p140, exercise 23), requires neither trig nor a square root. In this approach, ‘u’ and ‘v’ are chosen at random from the interior of a unit circle, and then x=(u^2-v^2)/(u^2+v^2) and y=2uv/(u^2+v^2). -- Function: void gsl_ran_dir_3d (const gsl_rng *r, double *x, double *y, double *z) This function returns a random direction vector v = (*note x: 781, *note y: 781, *note z: 781.) in three dimensions. The vector is normalized such that |v|^2 = x^2 + y^2 + z^2 = 1. The method employed is due to Robert E. Knop (CACM 13, 326 (1970)), and explained in Knuth, v2, 3rd ed, p136. It uses the surprising fact that the distribution projected along any axis is actually uniform (this is only true for 3 dimensions). -- Function: void gsl_ran_dir_nd (const gsl_rng *r, size_t n, double *x) This function returns a random direction vector v = (x_1,x_2,\ldots,x_n) in *note n: 782. dimensions. The vector is normalized such that |v|^2 = x_1^2 + x_2^2 + \cdots + x_n^2 = 1. The method uses the fact that a multivariate Gaussian distribution is spherically symmetric. Each component is generated to have a Gaussian distribution, and then the components are normalized. The method is described by Knuth, v2, 3rd ed, p135–136, and attributed to G. W. Brown, Modern Mathematics for the Engineer (1956).  File: gsl-ref.info, Node: The Weibull Distribution, Next: The Type-1 Gumbel Distribution, Prev: Spherical Vector Distributions, Up: Random Number Distributions 20.25 The Weibull Distribution ============================== -- Function: double gsl_ran_weibull (const gsl_rng *r, double a, double b) This function returns a random variate from the Weibull distribution. The distribution function is, p(x) dx = {b \over a^b} x^{b-1} \exp(-(x/a)^b) dx for x \ge 0. -- Function: double gsl_ran_weibull_pdf (double x, double a, double b) This function computes the probability density p(x) at *note x: 785. for a Weibull distribution with scale *note a: 785. and exponent *note b: 785, using the formula given above. [gsl-ref-figures/rand-weibull] -- Function: double gsl_cdf_weibull_P (double x, double a, double b) -- Function: double gsl_cdf_weibull_Q (double x, double a, double b) -- Function: double gsl_cdf_weibull_Pinv (double P, double a, double b) -- Function: double gsl_cdf_weibull_Qinv (double Q, double a, double b) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Weibull distribution with scale *note a: 789. and exponent *note b: 789.  File: gsl-ref.info, Node: The Type-1 Gumbel Distribution, Next: The Type-2 Gumbel Distribution, Prev: The Weibull Distribution, Up: Random Number Distributions 20.26 The Type-1 Gumbel Distribution ==================================== -- Function: double gsl_ran_gumbel1 (const gsl_rng *r, double a, double b) This function returns a random variate from the Type-1 Gumbel distribution. The Type-1 Gumbel distribution function is, p(x) dx = a b \exp(-(b \exp(-ax) + ax)) dx for -\infty < x < \infty. -- Function: double gsl_ran_gumbel1_pdf (double x, double a, double b) This function computes the probability density p(x) at *note x: 78c. for a Type-1 Gumbel distribution with parameters *note a: 78c. and *note b: 78c, using the formula given above. 
[gsl-ref-figures/rand-gumbel1] -- Function: double gsl_cdf_gumbel1_P (double x, double a, double b) -- Function: double gsl_cdf_gumbel1_Q (double x, double a, double b) -- Function: double gsl_cdf_gumbel1_Pinv (double P, double a, double b) -- Function: double gsl_cdf_gumbel1_Qinv (double Q, double a, double b) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Type-1 Gumbel distribution with parameters *note a: 790. and *note b: 790.  File: gsl-ref.info, Node: The Type-2 Gumbel Distribution, Next: The Dirichlet Distribution, Prev: The Type-1 Gumbel Distribution, Up: Random Number Distributions 20.27 The Type-2 Gumbel Distribution ==================================== -- Function: double gsl_ran_gumbel2 (const gsl_rng *r, double a, double b) This function returns a random variate from the Type-2 Gumbel distribution. The Type-2 Gumbel distribution function is, p(x) dx = a b x^{-a-1} \exp(-b x^{-a}) dx for 0 < x < \infty. -- Function: double gsl_ran_gumbel2_pdf (double x, double a, double b) This function computes the probability density p(x) at *note x: 793. for a Type-2 Gumbel distribution with parameters *note a: 793. and *note b: 793, using the formula given above. [gsl-ref-figures/rand-gumbel2] -- Function: double gsl_cdf_gumbel2_P (double x, double a, double b) -- Function: double gsl_cdf_gumbel2_Q (double x, double a, double b) -- Function: double gsl_cdf_gumbel2_Pinv (double P, double a, double b) -- Function: double gsl_cdf_gumbel2_Qinv (double Q, double a, double b) These functions compute the cumulative distribution functions P(x), Q(x) and their inverses for the Type-2 Gumbel distribution with parameters *note a: 797. and *note b: 797.  File: gsl-ref.info, Node: The Dirichlet Distribution, Next: General Discrete Distributions, Prev: The Type-2 Gumbel Distribution, Up: Random Number Distributions 20.28 The Dirichlet Distribution ================================ -- Function: void gsl_ran_dirichlet (const gsl_rng *r, size_t K, const double alpha[], double theta[]) This function returns an array of *note K: 799. random variates from a Dirichlet distribution of order *note K: 799.-1. The distribution function is p(\theta_1, ..., \theta_K) d\theta_1 ... d\theta_K = (1/Z) \prod_{i=1}^K \theta_i^{\alpha_i - 1} \delta(1 -\sum_{i=1}^K \theta_i) d\theta_1 ... d\theta_K for \theta_i \ge 0 and \alpha_i > 0. The delta function ensures that \sum \theta_i = 1. The normalization factor Z is Z = {\prod_{i=1}^K \Gamma(\alpha_i) \over \Gamma( \sum_{i=1}^K \alpha_i)} The random variates are generated by sampling *note K: 799. values from gamma distributions with parameters a=\alpha_i$, $b=1, and renormalizing. See A.M. Law, W.D. Kelton, `Simulation Modeling and Analysis' (1991). -- Function: double gsl_ran_dirichlet_pdf (size_t K, const double alpha[], const double theta[]) This function computes the probability density p(\theta_1, \ldots , \theta_K) at ‘theta[K]’ for a Dirichlet distribution with parameters ‘alpha[K]’, using the formula given above. -- Function: double gsl_ran_dirichlet_lnpdf (size_t K, const double alpha[], const double theta[]) This function computes the logarithm of the probability density p(\theta_1, \ldots , \theta_K) for a Dirichlet distribution with parameters ‘alpha[K]’.  
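The following fragment is a minimal sketch of the Dirichlet interface above; the parameter values in ‘alpha’ are arbitrary.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       const size_t K = 3;
       double alpha[3] = { 1.0, 2.0, 3.0 };   /* illustrative parameters */
       double theta[3];
       gsl_rng * r;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       gsl_ran_dirichlet (r, K, alpha, theta);

       /* the components are non-negative and sum to one */
       printf ("theta = (%g, %g, %g)\n", theta[0], theta[1], theta[2]);
       printf ("density = %g\n", gsl_ran_dirichlet_pdf (K, alpha, theta));

       gsl_rng_free (r);
       return 0;
     }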
File: gsl-ref.info, Node: General Discrete Distributions, Next: The Poisson Distribution, Prev: The Dirichlet Distribution, Up: Random Number Distributions 20.29 General Discrete Distributions ==================================== Given K discrete events with different probabilities P[k], produce a random value k consistent with its probability. The obvious way to do this is to preprocess the probability list by generating a cumulative probability array with K + 1 elements: C[0] = 0 C[k+1] = C[k] + P[k] Note that this construction produces C[K] = 1. Now choose a uniform deviate u between 0 and 1, and find the value of k such that C[k] \le u < C[k+1]. Although this in principle requires of order \log K steps per random number generation, they are fast steps, and if you use something like \lfloor uK \rfloor as a starting point, you can often do pretty well. But faster methods have been devised. Again, the idea is to preprocess the probability list, and save the result in some form of lookup table; then the individual calls for a random discrete event can go rapidly. An approach invented by G. Marsaglia (Generating discrete random variables in a computer, Comm ACM 6, 37–38 (1963)) is very clever, and readers interested in examples of good algorithm design are directed to this short and well-written paper. Unfortunately, for large K, Marsaglia’s lookup table can be quite large. A much better approach is due to Alastair J. Walker (An efficient method for generating discrete random variables with general distributions, ACM Trans on Mathematical Software 3, 253–256 (1977); see also Knuth, v2, 3rd ed, p120–121,139). This requires two lookup tables, one floating point and one integer, but both only of size K. After preprocessing, the random numbers are generated in O(1) time, even for large K. The preprocessing suggested by Walker requires O(K^2) effort, but that is not actually necessary, and the implementation provided here only takes O(K) effort. In general, more preprocessing leads to faster generation of the individual random numbers, but a diminishing return is reached pretty early. Knuth points out that the optimal preprocessing is combinatorially difficult for large K. This method can be used to speed up some of the discrete random number generators below, such as the binomial distribution. To use it for something like the Poisson Distribution, a modification would have to be made, since it only takes a finite set of K outcomes. -- Type: gsl_ran_discrete_t This structure contains the lookup table for the discrete random number generator. -- Function: *note gsl_ran_discrete_t: 79d. *gsl_ran_discrete_preproc (size_t K, const double *P) This function returns a pointer to a structure that contains the lookup table for the discrete random number generator. The array *note P: 79e. contains the probabilities of the discrete events; these array elements must all be positive, but they needn’t add up to one (so you can think of them more generally as “weights”)—the preprocessor will normalize appropriately. This return value is used as an argument for the *note gsl_ran_discrete(): 79f. function below. -- Function: size_t gsl_ran_discrete (const gsl_rng *r, const gsl_ran_discrete_t *g) After the preprocessor, above, has been called, you use this function to get the discrete random numbers. -- Function: double gsl_ran_discrete_pdf (size_t k, const gsl_ran_discrete_t *g) Returns the probability P[k] of observing the variable *note k: 7a0. 
Since P[k] is not stored as part of the lookup table, it must be recomputed; this computation takes O(K), so if ‘K’ is large and you care about the original array P[k] used to create the lookup table, then you should just keep this original array P[k] around. -- Function: void gsl_ran_discrete_free (gsl_ran_discrete_t *g) De-allocates the lookup table pointed to by *note g: 7a1.  File: gsl-ref.info, Node: The Poisson Distribution, Next: The Bernoulli Distribution, Prev: General Discrete Distributions, Up: Random Number Distributions 20.30 The Poisson Distribution ============================== -- Function: unsigned int gsl_ran_poisson (const gsl_rng *r, double mu) This function returns a random integer from the Poisson distribution with mean *note mu: 7a3. The probability distribution for Poisson variates is, p(k) = {\mu^k \over k!} \exp(-\mu) for k \ge 0. -- Function: double gsl_ran_poisson_pdf (unsigned int k, double mu) This function computes the probability p(k) of obtaining *note k: 7a4. from a Poisson distribution with mean *note mu: 7a4, using the formula given above. [gsl-ref-figures/rand-poisson] -- Function: double gsl_cdf_poisson_P (unsigned int k, double mu) -- Function: double gsl_cdf_poisson_Q (unsigned int k, double mu) These functions compute the cumulative distribution functions P(k), Q(k) for the Poisson distribution with parameter *note mu: 7a6.  File: gsl-ref.info, Node: The Bernoulli Distribution, Next: The Binomial Distribution, Prev: The Poisson Distribution, Up: Random Number Distributions 20.31 The Bernoulli Distribution ================================ -- Function: unsigned int gsl_ran_bernoulli (const gsl_rng *r, double p) This function returns either 0 or 1, the result of a Bernoulli trial with probability *note p: 7a8. The probability distribution for a Bernoulli trial is, p(0) = 1 - p p(1) = p -- Function: double gsl_ran_bernoulli_pdf (unsigned int k, double p) This function computes the probability p(k) of obtaining *note k: 7a9. from a Bernoulli distribution with probability parameter *note p: 7a9, using the formula given above. [gsl-ref-figures/rand-bernoulli]  File: gsl-ref.info, Node: The Binomial Distribution, Next: The Multinomial Distribution, Prev: The Bernoulli Distribution, Up: Random Number Distributions 20.32 The Binomial Distribution =============================== -- Function: unsigned int gsl_ran_binomial (const gsl_rng *r, double p, unsigned int n) This function returns a random integer from the binomial distribution, the number of successes in *note n: 7ab. independent trials with probability *note p: 7ab. The probability distribution for binomial variates is, p(k) = {n! \over k! (n-k)!} p^k (1-p)^{n-k} for 0 \le k \le n. -- Function: double gsl_ran_binomial_pdf (unsigned int k, double p, unsigned int n) This function computes the probability p(k) of obtaining *note k: 7ac. from a binomial distribution with parameters *note p: 7ac. and *note n: 7ac, using the formula given above. [gsl-ref-figures/rand-binomial] -- Function: double gsl_cdf_binomial_P (unsigned int k, double p, unsigned int n) -- Function: double gsl_cdf_binomial_Q (unsigned int k, double p, unsigned int n) These functions compute the cumulative distribution functions P(k), Q(k) for the binomial distribution with parameters *note p: 7ae. and *note n: 7ae.  
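For illustration, the binomial sampler and its cumulative distribution can be combined as in the following minimal sketch; the values of p and n are arbitrary, and the ‘gsl/gsl_randist.h’ and ‘gsl/gsl_cdf.h’ headers are assumed from the usual GSL conventions:

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       const double p = 0.3;        /* success probability (illustrative value) */
       const unsigned int n = 20;   /* number of trials */
       unsigned int k;
       gsl_rng *r;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       /* number of successes in n independent trials */
       k = gsl_ran_binomial (r, p, n);

       printf ("k = %u\n", k);
       printf ("p(k)   = %g\n", gsl_ran_binomial_pdf (k, p, n));
       printf ("P(<=k) = %g\n", gsl_cdf_binomial_P (k, p, n));

       gsl_rng_free (r);
       return 0;
     }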
File: gsl-ref.info, Node: The Multinomial Distribution, Next: The Negative Binomial Distribution, Prev: The Binomial Distribution, Up: Random Number Distributions 20.33 The Multinomial Distribution ================================== -- Function: void gsl_ran_multinomial (const gsl_rng *r, size_t K, unsigned int N, const double p[], unsigned int n[]) This function computes a random sample *note n: 7b0. from the multinomial distribution formed by *note N: 7b0. trials from an underlying distribution ‘p[K]’. The distribution function for *note n: 7b0. is, P(n_1, n_2, ..., n_K) = (N!/(n_1! n_2! ... n_K!)) p_1^n_1 p_2^n_2 ... p_K^n_K where (n_1, n_2, \ldots, n_K) are nonnegative integers with \sum_{k=1}^{K} n_k = N, and (p_1, p_2, \ldots, p_K) is a probability distribution with \sum p_i = 1. If the array ‘p[K]’ is not normalized then its entries will be treated as weights and normalized appropriately. The arrays *note n: 7b0. and *note p: 7b0. must both be of length *note K: 7b0. Random variates are generated using the conditional binomial method (see C.S. Davis, `The computer generation of multinomial random variates', Comp. Stat. Data Anal. 16 (1993) 205–217 for details). -- Function: double gsl_ran_multinomial_pdf (size_t K, const double p[], const unsigned int n[]) This function computes the probability P(n_1, n_2, \ldots, n_K) of sampling ‘n[K]’ from a multinomial distribution with parameters ‘p[K]’, using the formula given above. -- Function: double gsl_ran_multinomial_lnpdf (size_t K, const double p[], const unsigned int n[]) This function returns the logarithm of the probability for the multinomial distribution P(n_1, n_2, \ldots, n_K) with parameters ‘p[K]’.  File: gsl-ref.info, Node: The Negative Binomial Distribution, Next: The Pascal Distribution, Prev: The Multinomial Distribution, Up: Random Number Distributions 20.34 The Negative Binomial Distribution ======================================== -- Function: unsigned int gsl_ran_negative_binomial (const gsl_rng *r, double p, double n) This function returns a random integer from the negative binomial distribution, the number of failures occurring before *note n: 7b4. successes in independent trials with probability *note p: 7b4. of success. The probability distribution for negative binomial variates is, p(k) = {\Gamma(n + k) \over \Gamma(k+1) \Gamma(n) } p^n (1-p)^k Note that n is not required to be an integer. -- Function: double gsl_ran_negative_binomial_pdf (unsigned int k, double p, double n) This function computes the probability p(k) of obtaining *note k: 7b5. from a negative binomial distribution with parameters *note p: 7b5. and *note n: 7b5, using the formula given above. [gsl-ref-figures/rand-nbinomial] -- Function: double gsl_cdf_negative_binomial_P (unsigned int k, double p, double n) -- Function: double gsl_cdf_negative_binomial_Q (unsigned int k, double p, double n) These functions compute the cumulative distribution functions P(k), Q(k) for the negative binomial distribution with parameters *note p: 7b7. and *note n: 7b7.  File: gsl-ref.info, Node: The Pascal Distribution, Next: The Geometric Distribution, Prev: The Negative Binomial Distribution, Up: Random Number Distributions 20.35 The Pascal Distribution ============================= -- Function: unsigned int gsl_ran_pascal (const gsl_rng *r, double p, unsigned int n) This function returns a random integer from the Pascal distribution. The Pascal distribution is simply a negative binomial distribution with an integer value of n. p(k) = {(n + k - 1)! \over k! (n - 1)! 
} p^n (1-p)^k for k \ge 0. -- Function: double gsl_ran_pascal_pdf (unsigned int k, double p, unsigned int n) This function computes the probability p(k) of obtaining *note k: 7ba. from a Pascal distribution with parameters *note p: 7ba. and *note n: 7ba, using the formula given above. [gsl-ref-figures/rand-pascal] -- Function: double gsl_cdf_pascal_P (unsigned int k, double p, unsigned int n) -- Function: double gsl_cdf_pascal_Q (unsigned int k, double p, unsigned int n) These functions compute the cumulative distribution functions P(k), Q(k) for the Pascal distribution with parameters *note p: 7bc. and *note n: 7bc.  File: gsl-ref.info, Node: The Geometric Distribution, Next: The Hypergeometric Distribution, Prev: The Pascal Distribution, Up: Random Number Distributions 20.36 The Geometric Distribution ================================ -- Function: unsigned int gsl_ran_geometric (const gsl_rng *r, double p) This function returns a random integer from the geometric distribution, the number of independent trials with probability *note p: 7be. until the first success. The probability distribution for geometric variates is, p(k) = p (1-p)^{k-1} for k \ge 1. Note that the distribution begins with k = 1 with this definition. There is another convention in which the exponent k - 1 is replaced by k. -- Function: double gsl_ran_geometric_pdf (unsigned int k, double p) This function computes the probability p(k) of obtaining *note k: 7bf. from a geometric distribution with probability parameter *note p: 7bf, using the formula given above. [gsl-ref-figures/rand-geometric] -- Function: double gsl_cdf_geometric_P (unsigned int k, double p) -- Function: double gsl_cdf_geometric_Q (unsigned int k, double p) These functions compute the cumulative distribution functions P(k), Q(k) for the geometric distribution with parameter *note p: 7c1.  File: gsl-ref.info, Node: The Hypergeometric Distribution, Next: The Logarithmic Distribution, Prev: The Geometric Distribution, Up: Random Number Distributions 20.37 The Hypergeometric Distribution ===================================== -- Function: unsigned int gsl_ran_hypergeometric (const gsl_rng *r, unsigned int n1, unsigned int n2, unsigned int t) This function returns a random integer from the hypergeometric distribution. The probability distribution for hypergeometric random variates is, p(k) = C(n_1, k) C(n_2, t - k) / C(n_1 + n_2, t) where C(a,b) = a!/(b!(a-b)!) and t \leq n_1 + n_2. The domain of k is \max(0, t - n_2), \ldots, \min(t, n_1) If a population contains n_1 elements of “type 1” and n_2 elements of “type 2” then the hypergeometric distribution gives the probability of obtaining k elements of “type 1” in t samples from the population without replacement. -- Function: double gsl_ran_hypergeometric_pdf (unsigned int k, unsigned int n1, unsigned int n2, unsigned int t) This function computes the probability p(k) of obtaining *note k: 7c4. from a hypergeometric distribution with parameters *note n1: 7c4, *note n2: 7c4, *note t: 7c4, using the formula given above. [gsl-ref-figures/rand-hypergeometric] -- Function: double gsl_cdf_hypergeometric_P (unsigned int k, unsigned int n1, unsigned int n2, unsigned int t) -- Function: double gsl_cdf_hypergeometric_Q (unsigned int k, unsigned int n1, unsigned int n2, unsigned int t) These functions compute the cumulative distribution functions P(k), Q(k) for the hypergeometric distribution with parameters *note n1: 7c6, *note n2: 7c6. and *note t: 7c6.  
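A minimal sketch of drawing from the hypergeometric distribution might look as follows; the population sizes and sample size are arbitrary illustrative values, and the ‘gsl/gsl_randist.h’ header is assumed:

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       /* population with 10 elements of type 1 and 40 of type 2;
          draw t = 5 samples without replacement (illustrative values) */
       const unsigned int n1 = 10, n2 = 40, t = 5;
       unsigned int k;
       gsl_rng *r;

       gsl_rng_env_setup ();
       r = gsl_rng_alloc (gsl_rng_default);

       /* number of type-1 elements obtained in the t draws */
       k = gsl_ran_hypergeometric (r, n1, n2, t);
       printf ("k = %u, p(k) = %g\n", k,
               gsl_ran_hypergeometric_pdf (k, n1, n2, t));

       gsl_rng_free (r);
       return 0;
     }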
File: gsl-ref.info, Node: The Logarithmic Distribution, Next: The Wishart Distribution, Prev: The Hypergeometric Distribution, Up: Random Number Distributions

20.38 The Logarithmic Distribution
==================================

 -- Function: unsigned int gsl_ran_logarithmic (const gsl_rng *r, double p)
     This function returns a random integer from the logarithmic distribution. The probability distribution for logarithmic random variates is,

          p(k) = {-1 \over \log(1-p)} {p^k \over k}

     for k \ge 1.

 -- Function: double gsl_ran_logarithmic_pdf (unsigned int k, double p)
     This function computes the probability p(k) of obtaining *note k: 7c9. from a logarithmic distribution with probability parameter *note p: 7c9, using the formula given above.

[gsl-ref-figures/rand-logarithmic]

 File: gsl-ref.info, Node: The Wishart Distribution, Next: Shuffling and Sampling, Prev: The Logarithmic Distribution, Up: Random Number Distributions

20.39 The Wishart Distribution
==============================

 -- Function: int gsl_ran_wishart (const gsl_rng *r, const double n, const gsl_matrix *L, gsl_matrix *result, gsl_matrix *work)
     This function computes a random symmetric p-by-p matrix from the Wishart distribution. The probability distribution for Wishart random variates is,

          p(X) = \frac{|X|^{(n-p-1)/2} e^{-tr( V^{-1} X)/2}}{2^{(np)/2} |V|^{n/2} \Gamma_p(n/2)}

     Here, n > p - 1 is the number of degrees of freedom, V is a symmetric positive definite p-by-p scale matrix, whose Cholesky factor is specified by *note L: 7cb, and *note work: 7cb. is p-by-p workspace. The p-by-p Wishart distributed matrix X is stored in *note result: 7cb. on output.

 -- Function: int gsl_ran_wishart_pdf (const gsl_matrix *X, const gsl_matrix *L_X, const double n, const gsl_matrix *L, double *result, gsl_matrix *work)
 -- Function: int gsl_ran_wishart_log_pdf (const gsl_matrix *X, const gsl_matrix *L_X, const double n, const gsl_matrix *L, double *result, gsl_matrix *work)
     These functions compute p(X) or \log{p(X)} for the p-by-p matrix *note X: 7cd, whose Cholesky factor is specified in *note L_X: 7cd. The degrees of freedom is given by *note n: 7cd, the Cholesky factor of the scale matrix V is specified in *note L: 7cd, and *note work: 7cd. is p-by-p workspace. The probability density value is returned in *note result: 7cd.

 File: gsl-ref.info, Node: Shuffling and Sampling, Next: Examples<14>, Prev: The Wishart Distribution, Up: Random Number Distributions

20.40 Shuffling and Sampling
============================

The following functions allow the shuffling and sampling of a set of objects. The algorithms rely on a random number generator as a source of randomness, and a poor quality generator can lead to correlations in the output. In particular it is important to avoid generators with a short period. For more information see Knuth, v2, 3rd ed, Section 3.4.2, “Random Sampling and Shuffling”.

 -- Function: void gsl_ran_shuffle (const gsl_rng *r, void *base, size_t n, size_t size)
     This function randomly shuffles the order of *note n: 7cf. objects, each of size *note size: 7cf, stored in the array ‘base[0..n-1]’. The output of the random number generator *note r: 7cf. is used to produce the permutation. The algorithm generates all possible n! permutations with equal probability, assuming a perfect source of random numbers.
The following code shows how to shuffle the numbers from 0 to 51:

          int a[52];

          for (i = 0; i < 52; i++)
            {
              a[i] = i;
            }

          gsl_ran_shuffle (r, a, 52, sizeof (int));

 -- Function: int gsl_ran_choose (const gsl_rng *r, void *dest, size_t k, void *src, size_t n, size_t size)
     This function fills the array ‘dest[k]’ with *note k: 7d0. objects taken randomly from the *note n: 7d0. elements of the array ‘src[0..n-1]’. The objects are each of size *note size: 7d0. The output of the random number generator *note r: 7d0. is used to make the selection. The algorithm ensures all possible samples are equally likely, assuming a perfect source of randomness.

     The objects are sampled `without' replacement, thus each object can only appear once in *note dest: 7d0. It is required that *note k: 7d0. be less than or equal to *note n: 7d0. The objects in *note dest: 7d0. will be in the same relative order as those in *note src: 7d0. You will need to call ‘gsl_ran_shuffle(r, dest, n, size)’ if you want to randomize the order.

     The following code shows how to select a random sample of three unique numbers from the set 0 to 99:

          double a[3], b[100];

          for (i = 0; i < 100; i++)
            {
              b[i] = (double) i;
            }

          gsl_ran_choose (r, a, 3, b, 100, sizeof (double));

 -- Function: void gsl_ran_sample (const gsl_rng *r, void *dest, size_t k, void *src, size_t n, size_t size)
     This function is like *note gsl_ran_choose(): 7d0. but samples *note k: 7d1. items from the original array of *note n: 7d1. items *note src: 7d1. with replacement, so the same object can appear more than once in the output sequence *note dest: 7d1. There is no requirement that *note k: 7d1. be less than *note n: 7d1. in this case.

 File: gsl-ref.info, Node: Examples<14>, Next: References and Further Reading<14>, Prev: Shuffling and Sampling, Up: Random Number Distributions

20.41 Examples
==============

The following program demonstrates the use of a random number generator to produce variates from a distribution. It prints 10 samples from the Poisson distribution with a mean of 3.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       const gsl_rng_type * T;
       gsl_rng * r;

       int i, n = 10;
       double mu = 3.0;

       /* create a generator chosen by the
          environment variable GSL_RNG_TYPE */

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       /* print n random variates chosen from
          the poisson distribution with mean
          parameter mu */

       for (i = 0; i < n; i++)
         {
           unsigned int k = gsl_ran_poisson (r, mu);
           printf (" %u", k);
         }

       printf ("\n");
       gsl_rng_free (r);
       return 0;
     }

If the library and header files are installed under ‘/usr/local’ (the default location) then the program can be compiled with these options:

     $ gcc -Wall demo.c -lgsl -lgslcblas -lm

Here is the output of the program,

     2 5 5 2 1 0 3 4 1 1

The variates depend on the seed used by the generator. The seed for the default generator type *note gsl_rng_default: 6a7. can be changed with the *note GSL_RNG_SEED: 690. environment variable to produce a different stream of variates:

     $ GSL_RNG_SEED=123 ./a.out

giving output

     4 5 6 3 3 1 4 2 5 5

The following program generates a random walk in two dimensions.

     #include <stdio.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       int i;
       double x = 0, y = 0, dx, dy;

       const gsl_rng_type * T;
       gsl_rng * r;

       gsl_rng_env_setup();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       printf ("%g %g\n", x, y);

       for (i = 0; i < 10; i++)
         {
           gsl_ran_dir_2d (r, &dx, &dy);
           x += dx;
           y += dy;
           printf ("%g %g\n", x, y);
         }

       gsl_rng_free (r);
       return 0;
     }

The figure below shows the output from the program.
[gsl-ref-figures/random-walk]

Figure: Four 10-step random walks from the origin.

The following program computes the upper and lower cumulative distribution functions for the standard normal distribution at x = 2.

     #include <stdio.h>
     #include <gsl/gsl_cdf.h>

     int
     main (void)
     {
       double P, Q;
       double x = 2.0;

       P = gsl_cdf_ugaussian_P (x);
       printf ("prob(x < %f) = %f\n", x, P);

       Q = gsl_cdf_ugaussian_Q (x);
       printf ("prob(x > %f) = %f\n", x, Q);

       x = gsl_cdf_ugaussian_Pinv (P);
       printf ("Pinv(%f) = %f\n", P, x);

       x = gsl_cdf_ugaussian_Qinv (Q);
       printf ("Qinv(%f) = %f\n", Q, x);
       return 0;
     }

Here is the output of the program,

     prob(x < 2.000000) = 0.977250
     prob(x > 2.000000) = 0.022750
     Pinv(0.977250) = 2.000000
     Qinv(0.022750) = 2.000000

 File: gsl-ref.info, Node: References and Further Reading<14>, Prev: Examples<14>, Up: Random Number Distributions

20.42 References and Further Reading
====================================

For an encyclopaedic coverage of the subject readers are advised to consult the book “Non-Uniform Random Variate Generation” by Luc Devroye. It covers every imaginable distribution and provides hundreds of algorithms.

   * Luc Devroye, “Non-Uniform Random Variate Generation”, Springer-Verlag, ISBN 0-387-96305-7. Available online at ‘http://cg.scs.carleton.ca/~luc/rnbookindex.html’.

The subject of random variate generation is also reviewed by Knuth, who describes algorithms for all the major distributions.

   * Donald E. Knuth, “The Art of Computer Programming: Seminumerical Algorithms” (Vol 2, 3rd Ed, 1997), Addison-Wesley, ISBN 0201896842.

The Particle Data Group provides a short review of techniques for generating distributions of random numbers in the “Monte Carlo” section of its Annual Review of Particle Physics.

   * Review of Particle Properties, R.M. Barnett et al., Physical Review D54, 1 (1996) ‘http://pdg.lbl.gov/’. The Review of Particle Physics is available online in postscript and pdf format.

An overview of methods used to compute cumulative distribution functions can be found in `Statistical Computing' by W.J. Kennedy and J.E. Gentle. Another general reference is `Elements of Statistical Computing' by R.A. Thisted.

   * William J. Kennedy and James E. Gentle, Statistical Computing (1980), Marcel Dekker, ISBN 0-8247-6898-1.

   * Ronald A. Thisted, Elements of Statistical Computing (1988), Chapman & Hall, ISBN 0-412-01371-1.

The cumulative distribution functions for the Gaussian distribution are based on the following papers,

   * Rational Chebyshev Approximations Using Linear Equations, W.J. Cody, W. Fraser, J.F. Hart. Numerische Mathematik 12, 242–251 (1968).

   * Rational Chebyshev Approximations for the Error Function, W.J. Cody. Mathematics of Computation 23, n107, 631–637 (July 1969).

 File: gsl-ref.info, Node: Statistics, Next: Running Statistics, Prev: Random Number Distributions, Up: Top

21 Statistics
*************

This chapter describes the statistical functions in the library. The basic statistical functions include routines to compute the mean, variance and standard deviation. More advanced functions allow you to calculate absolute deviations, skewness, and kurtosis as well as the median and arbitrary percentiles. The algorithms use recurrence relations to compute average quantities in a stable way, without large intermediate values that might overflow.

The functions are available in versions for datasets in the standard floating-point and integer types. The versions for double precision floating-point data have the prefix ‘gsl_stats’ and are declared in the header file ‘gsl_statistics_double.h’.
The versions for integer data have the prefix ‘gsl_stats_int’ and are declared in the header file ‘gsl_statistics_int.h’. All the functions operate on C arrays with a ‘stride’ parameter specifying the spacing between elements. * Menu: * Mean, Standard Deviation and Variance: Mean Standard Deviation and Variance. * Absolute deviation:: * Higher moments (skewness and kurtosis): Higher moments skewness and kurtosis. * Autocorrelation:: * Covariance:: * Correlation:: * Weighted Samples:: * Maximum and Minimum values:: * Median and Percentiles:: * Order Statistics:: * Robust Location Estimates:: * Robust Scale Estimates:: * Examples: Examples<15>. * References and Further Reading: References and Further Reading<15>.  File: gsl-ref.info, Node: Mean Standard Deviation and Variance, Next: Absolute deviation, Up: Statistics 21.1 Mean, Standard Deviation and Variance ========================================== -- Function: double gsl_stats_mean (const double data[], size_t stride, size_t n) This function returns the arithmetic mean of *note data: 1d, a dataset of length *note n: 1d. with stride *note stride: 1d. The arithmetic mean, or `sample mean', is denoted by \Hat\mu and defined as, \Hat\mu = {1 \over N} \sum x_i where x_i are the elements of the dataset *note data: 1d. For samples drawn from a gaussian distribution the variance of \Hat\mu is \sigma^2 / N. -- Function: double gsl_stats_variance (const double data[], size_t stride, size_t n) This function returns the estimated, or `sample', variance of *note data: 7d8, a dataset of length *note n: 7d8. with stride *note stride: 7d8. The estimated variance is denoted by \Hat\sigma^2 and is defined by, \Hat\sigma^2 = (1/(N-1)) \sum (x_i - \Hat\mu)^2 where x_i are the elements of the dataset *note data: 7d8. Note that the normalization factor of 1/(N-1) results from the derivation of \Hat\sigma^2 as an unbiased estimator of the population variance \sigma^2. For samples drawn from a Gaussian distribution the variance of \Hat\sigma^2 itself is 2 \sigma^4 / N. This function computes the mean via a call to *note gsl_stats_mean(): 1d. If you have already computed the mean then you can pass it directly to *note gsl_stats_variance_m(): 7d9. -- Function: double gsl_stats_variance_m (const double data[], size_t stride, size_t n, double mean) This function returns the sample variance of *note data: 7d9. relative to the given value of *note mean: 7d9. The function is computed with \Hat\mu replaced by the value of *note mean: 7d9. that you supply, \Hat\sigma^2 = (1/(N-1)) \sum (x_i - mean)^2 -- Function: double gsl_stats_sd (const double data[], size_t stride, size_t n) -- Function: double gsl_stats_sd_m (const double data[], size_t stride, size_t n, double mean) The standard deviation is defined as the square root of the variance. These functions return the square root of the corresponding variance functions above. -- Function: double gsl_stats_tss (const double data[], size_t stride, size_t n) -- Function: double gsl_stats_tss_m (const double data[], size_t stride, size_t n, double mean) These functions return the total sum of squares (TSS) of *note data: 7dd. about the mean. For *note gsl_stats_tss_m(): 7dd. the user-supplied value of *note mean: 7dd. is used, and for *note gsl_stats_tss(): 7dc. it is computed using *note gsl_stats_mean(): 1d. 
TSS = \sum (x_i - mean)^2 -- Function: double gsl_stats_variance_with_fixed_mean (const double data[], size_t stride, size_t n, double mean) This function computes an unbiased estimate of the variance of *note data: 7de. when the population mean *note mean: 7de. of the underlying distribution is known `a priori'. In this case the estimator for the variance uses the factor 1/N and the sample mean \Hat\mu is replaced by the known population mean \mu, \Hat\sigma^2 = (1/N) \sum (x_i - \mu)^2 -- Function: double gsl_stats_sd_with_fixed_mean (const double data[], size_t stride, size_t n, double mean) This function calculates the standard deviation of *note data: 7df. for a fixed population mean *note mean: 7df. The result is the square root of the corresponding variance function.  File: gsl-ref.info, Node: Absolute deviation, Next: Higher moments skewness and kurtosis, Prev: Mean Standard Deviation and Variance, Up: Statistics 21.2 Absolute deviation ======================= -- Function: double gsl_stats_absdev (const double data[], size_t stride, size_t n) This function computes the absolute deviation from the mean of *note data: 7e1, a dataset of length *note n: 7e1. with stride *note stride: 7e1. The absolute deviation from the mean is defined as, absdev = (1/N) \sum |x_i - \Hat\mu| where x_i are the elements of the dataset *note data: 7e1. The absolute deviation from the mean provides a more robust measure of the width of a distribution than the variance. This function computes the mean of *note data: 7e1. via a call to *note gsl_stats_mean(): 1d. -- Function: double gsl_stats_absdev_m (const double data[], size_t stride, size_t n, double mean) This function computes the absolute deviation of the dataset *note data: 7e2. relative to the given value of *note mean: 7e2, absdev = (1/N) \sum |x_i - mean| This function is useful if you have already computed the mean of *note data: 7e2. (and want to avoid recomputing it), or wish to calculate the absolute deviation relative to another value (such as zero, or the median).  File: gsl-ref.info, Node: Higher moments skewness and kurtosis, Next: Autocorrelation, Prev: Absolute deviation, Up: Statistics 21.3 Higher moments (skewness and kurtosis) =========================================== -- Function: double gsl_stats_skew (const double data[], size_t stride, size_t n) This function computes the skewness of *note data: 7e4, a dataset of length *note n: 7e4. with stride *note stride: 7e4. The skewness is defined as, skew = (1/N) \sum ((x_i - \Hat\mu)/\Hat\sigma)^3 where x_i are the elements of the dataset *note data: 7e4. The skewness measures the asymmetry of the tails of a distribution. The function computes the mean and estimated standard deviation of *note data: 7e4. via calls to *note gsl_stats_mean(): 1d. and *note gsl_stats_sd(): 7da. -- Function: double gsl_stats_skew_m_sd (const double data[], size_t stride, size_t n, double mean, double sd) This function computes the skewness of the dataset *note data: 7e5. using the given values of the mean *note mean: 7e5. and standard deviation *note sd: 7e5, skew = (1/N) \sum ((x_i - mean)/sd)^3 These functions are useful if you have already computed the mean and standard deviation of *note data: 7e5. and want to avoid recomputing them. -- Function: double gsl_stats_kurtosis (const double data[], size_t stride, size_t n) This function computes the kurtosis of *note data: 7e6, a dataset of length *note n: 7e6. with stride *note stride: 7e6. 
The kurtosis is defined as, kurtosis = ((1/N) \sum ((x_i - \Hat\mu)/\Hat\sigma)^4) - 3 The kurtosis measures how sharply peaked a distribution is, relative to its width. The kurtosis is normalized to zero for a Gaussian distribution. -- Function: double gsl_stats_kurtosis_m_sd (const double data[], size_t stride, size_t n, double mean, double sd) This function computes the kurtosis of the dataset *note data: 7e7. using the given values of the mean *note mean: 7e7. and standard deviation *note sd: 7e7, kurtosis = ((1/N) \sum ((x_i - mean)/sd)^4) - 3 This function is useful if you have already computed the mean and standard deviation of *note data: 7e7. and want to avoid recomputing them.  File: gsl-ref.info, Node: Autocorrelation, Next: Covariance, Prev: Higher moments skewness and kurtosis, Up: Statistics 21.4 Autocorrelation ==================== -- Function: double gsl_stats_lag1_autocorrelation (const double data[], const size_t stride, const size_t n) This function computes the lag-1 autocorrelation of the dataset *note data: 7e9. a_1 = {\sum_{i = 2}^{n} (x_{i} - \Hat\mu) (x_{i-1} - \Hat\mu) \over \sum_{i = 1}^{n} (x_{i} - \Hat\mu) (x_{i} - \Hat\mu)} -- Function: double gsl_stats_lag1_autocorrelation_m (const double data[], const size_t stride, const size_t n, const double mean) This function computes the lag-1 autocorrelation of the dataset *note data: 7ea. using the given value of the mean *note mean: 7ea.  File: gsl-ref.info, Node: Covariance, Next: Correlation, Prev: Autocorrelation, Up: Statistics 21.5 Covariance =============== -- Function: double gsl_stats_covariance (const double data1[], const size_t stride1, const double data2[], const size_t stride2, const size_t n) This function computes the covariance of the datasets *note data1: 7ec. and *note data2: 7ec. which must both be of the same length *note n: 7ec. covar = (1/(n - 1)) \sum_{i = 1}^{n} (x_i - \Hat x) (y_i - \Hat y) -- Function: double gsl_stats_covariance_m (const double data1[], const size_t stride1, const double data2[], const size_t stride2, const size_t n, const double mean1, const double mean2) This function computes the covariance of the datasets *note data1: 7ed. and *note data2: 7ed. using the given values of the means, *note mean1: 7ed. and *note mean2: 7ed. This is useful if you have already computed the means of *note data1: 7ed. and *note data2: 7ed. and want to avoid recomputing them.  File: gsl-ref.info, Node: Correlation, Next: Weighted Samples, Prev: Covariance, Up: Statistics 21.6 Correlation ================ -- Function: double gsl_stats_correlation (const double data1[], const size_t stride1, const double data2[], const size_t stride2, const size_t n) This function efficiently computes the Pearson correlation coefficient between the datasets *note data1: 7ef. and *note data2: 7ef. which must both be of the same length *note n: 7ef. r = cov(x, y) / (\Hat\sigma_x \Hat\sigma_y) = {1/(n-1) \sum (x_i - \Hat x) (y_i - \Hat y) \over \sqrt{1/(n-1) \sum (x_i - \Hat x)^2} \sqrt{1/(n-1) \sum (y_i - \Hat y)^2} } -- Function: double gsl_stats_spearman (const double data1[], const size_t stride1, const double data2[], const size_t stride2, const size_t n, double work[]) This function computes the Spearman rank correlation coefficient between the datasets *note data1: 7f0. and *note data2: 7f0. which must both be of the same length *note n: 7f0. Additional workspace of size 2 * *note n: 7f0. is required in *note work: 7f0. 
The Spearman rank correlation between vectors x and y is equivalent to the Pearson correlation between the ranked vectors x_R and y_R, where ranks are defined to be the average of the positions of an element in the ascending order of the values.  File: gsl-ref.info, Node: Weighted Samples, Next: Maximum and Minimum values, Prev: Correlation, Up: Statistics 21.7 Weighted Samples ===================== The functions described in this section allow the computation of statistics for weighted samples. The functions accept an array of samples, x_i, with associated weights, w_i. Each sample x_i is considered as having been drawn from a Gaussian distribution with variance \sigma_i^2. The sample weight w_i is defined as the reciprocal of this variance, w_i = 1/\sigma_i^2. Setting a weight to zero corresponds to removing a sample from a dataset. -- Function: double gsl_stats_wmean (const double w[], size_t wstride, const double data[], size_t stride, size_t n) This function returns the weighted mean of the dataset *note data: 7f2. with stride *note stride: 7f2. and length *note n: 7f2, using the set of weights *note w: 7f2. with stride *note wstride: 7f2. and length *note n: 7f2. The weighted mean is defined as, \Hat\mu = (\sum w_i x_i) / (\sum w_i) -- Function: double gsl_stats_wvariance (const double w[], size_t wstride, const double data[], size_t stride, size_t n) This function returns the estimated variance of the dataset *note data: 7f3. with stride *note stride: 7f3. and length *note n: 7f3, using the set of weights *note w: 7f3. with stride *note wstride: 7f3. and length *note n: 7f3. The estimated variance of a weighted dataset is calculated as, \Hat\sigma^2 = ((\sum w_i)/((\sum w_i)^2 - \sum (w_i^2))) \sum w_i (x_i - \Hat\mu)^2 Note that this expression reduces to an unweighted variance with the familiar 1/(N-1) factor when there are N equal non-zero weights. -- Function: double gsl_stats_wvariance_m (const double w[], size_t wstride, const double data[], size_t stride, size_t n, double wmean) This function returns the estimated variance of the weighted dataset *note data: 7f4. using the given weighted mean *note wmean: 7f4. -- Function: double gsl_stats_wsd (const double w[], size_t wstride, const double data[], size_t stride, size_t n) The standard deviation is defined as the square root of the variance. This function returns the square root of the corresponding variance function *note gsl_stats_wvariance(): 7f3. above. -- Function: double gsl_stats_wsd_m (const double w[], size_t wstride, const double data[], size_t stride, size_t n, double wmean) This function returns the square root of the corresponding variance function *note gsl_stats_wvariance_m(): 7f4. above. -- Function: double gsl_stats_wvariance_with_fixed_mean (const double w[], size_t wstride, const double data[], size_t stride, size_t n, const double mean) This function computes an unbiased estimate of the variance of the weighted dataset *note data: 7f7. when the population mean *note mean: 7f7. of the underlying distribution is known `a priori'. In this case the estimator for the variance replaces the sample mean \Hat\mu by the known population mean \mu, \Hat\sigma^2 = (\sum w_i (x_i - \mu)^2) / (\sum w_i) -- Function: double gsl_stats_wsd_with_fixed_mean (const double w[], size_t wstride, const double data[], size_t stride, size_t n, const double mean) The standard deviation is defined as the square root of the variance. This function returns the square root of the corresponding variance function above. 
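As a brief illustration of the weighting convention above (a sketch with invented measurement values, not part of the function reference), the weights can be formed directly from per-point uncertainties \sigma_i using w_i = 1/\sigma_i^2; the header ‘gsl/gsl_statistics_double.h’ is the one named in the chapter introduction:

     #include <stdio.h>
     #include <gsl/gsl_statistics_double.h>

     int
     main (void)
     {
       /* measurements with individual uncertainties sigma_i (illustrative data) */
       double x[4]     = { 10.2, 9.8, 10.5, 10.1 };
       double sigma[4] = { 0.1, 0.2, 0.1, 0.3 };
       double w[4];
       size_t i;

       /* weights are the reciprocals of the variances */
       for (i = 0; i < 4; i++)
         w[i] = 1.0 / (sigma[i] * sigma[i]);

       printf ("weighted mean     = %g\n", gsl_stats_wmean (w, 1, x, 1, 4));
       printf ("weighted variance = %g\n", gsl_stats_wvariance (w, 1, x, 1, 4));
       printf ("weighted sd       = %g\n", gsl_stats_wsd (w, 1, x, 1, 4));

       return 0;
     }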
-- Function: double gsl_stats_wtss (const double w[], const size_t wstride, const double data[], size_t stride, size_t n) -- Function: double gsl_stats_wtss_m (const double w[], const size_t wstride, const double data[], size_t stride, size_t n, double wmean) These functions return the weighted total sum of squares (TSS) of *note data: 7fa. about the weighted mean. For *note gsl_stats_wtss_m(): 7fa. the user-supplied value of *note wmean: 7fa. is used, and for *note gsl_stats_wtss(): 7f9. it is computed using *note gsl_stats_wmean(): 7f2. TSS = \sum w_i (x_i - wmean)^2 -- Function: double gsl_stats_wabsdev (const double w[], size_t wstride, const double data[], size_t stride, size_t n) This function computes the weighted absolute deviation from the weighted mean of *note data: 7fb. The absolute deviation from the mean is defined as, absdev = (\sum w_i |x_i - \Hat\mu|) / (\sum w_i) -- Function: double gsl_stats_wabsdev_m (const double w[], size_t wstride, const double data[], size_t stride, size_t n, double wmean) This function computes the absolute deviation of the weighted dataset *note data: 7fc. about the given weighted mean *note wmean: 7fc. -- Function: double gsl_stats_wskew (const double w[], size_t wstride, const double data[], size_t stride, size_t n) This function computes the weighted skewness of the dataset *note data: 7fd. skew = (\sum w_i ((x_i - \Hat x)/\Hat \sigma)^3) / (\sum w_i) -- Function: double gsl_stats_wskew_m_sd (const double w[], size_t wstride, const double data[], size_t stride, size_t n, double wmean, double wsd) This function computes the weighted skewness of the dataset *note data: 7fe. using the given values of the weighted mean and weighted standard deviation, *note wmean: 7fe. and *note wsd: 7fe. -- Function: double gsl_stats_wkurtosis (const double w[], size_t wstride, const double data[], size_t stride, size_t n) This function computes the weighted kurtosis of the dataset *note data: 7ff. kurtosis = ((\sum w_i ((x_i - \Hat x)/\Hat \sigma)^4) / (\sum w_i)) - 3 -- Function: double gsl_stats_wkurtosis_m_sd (const double w[], size_t wstride, const double data[], size_t stride, size_t n, double wmean, double wsd) This function computes the weighted kurtosis of the dataset *note data: 800. using the given values of the weighted mean and weighted standard deviation, *note wmean: 800. and *note wsd: 800.  File: gsl-ref.info, Node: Maximum and Minimum values, Next: Median and Percentiles, Prev: Weighted Samples, Up: Statistics 21.8 Maximum and Minimum values =============================== The following functions find the maximum and minimum values of a dataset (or their indices). If the data contains ‘NaN’-s then a ‘NaN’ will be returned, since the maximum or minimum value is undefined. For functions which return an index, the location of the first ‘NaN’ in the array is returned. -- Function: double gsl_stats_max (const double data[], size_t stride, size_t n) This function returns the maximum value in *note data: 802, a dataset of length *note n: 802. with stride *note stride: 802. The maximum value is defined as the value of the element x_i which satisfies x_i \ge x_j for all j. If you want instead to find the element with the largest absolute magnitude you will need to apply ‘fabs()’ or ‘abs()’ to your data before calling this function. -- Function: double gsl_stats_min (const double data[], size_t stride, size_t n) This function returns the minimum value in *note data: 803, a dataset of length *note n: 803. with stride *note stride: 803. 
The minimum value is defined as the value of the element x_i which satisfies x_i \le x_j for all j. If you want instead to find the element with the smallest absolute magnitude you will need to apply ‘fabs()’ or ‘abs()’ to your data before calling this function.

 -- Function: void gsl_stats_minmax (double *min, double *max, const double data[], size_t stride, size_t n)
     This function finds both the minimum and maximum values *note min: 804, *note max: 804. in *note data: 804. in a single pass.

 -- Function: size_t gsl_stats_max_index (const double data[], size_t stride, size_t n)
     This function returns the index of the maximum value in *note data: 805, a dataset of length *note n: 805. with stride *note stride: 805. The maximum value is defined as the value of the element x_i which satisfies x_i \ge x_j for all j. When there are several equal maximum elements then the first one is chosen.

 -- Function: size_t gsl_stats_min_index (const double data[], size_t stride, size_t n)
     This function returns the index of the minimum value in *note data: 806, a dataset of length *note n: 806. with stride *note stride: 806. The minimum value is defined as the value of the element x_i which satisfies x_i \le x_j for all j. When there are several equal minimum elements then the first one is chosen.

 -- Function: void gsl_stats_minmax_index (size_t *min_index, size_t *max_index, const double data[], size_t stride, size_t n)
     This function returns the indexes *note min_index: 807, *note max_index: 807. of the minimum and maximum values in *note data: 807. in a single pass.

 File: gsl-ref.info, Node: Median and Percentiles, Next: Order Statistics, Prev: Maximum and Minimum values, Up: Statistics

21.9 Median and Percentiles
===========================

The median and percentile functions described in this section operate on sorted data in O(1) time. There is also a routine for computing the median of an unsorted input array in average O(n) time using the quickselect algorithm. For convenience we use `quantiles', measured on a scale of 0 to 1, instead of percentiles (which use a scale of 0 to 100).

 -- Function: double gsl_stats_median_from_sorted_data (const double sorted_data[], const size_t stride, const size_t n)
     This function returns the median value of *note sorted_data: 809, a dataset of length *note n: 809. with stride *note stride: 809. The elements of the array must be in ascending numerical order. There are no checks to see whether the data are sorted, so the function *note gsl_sort(): 462. should always be used first.

     When the dataset has an odd number of elements the median is the value of element (n-1)/2. When the dataset has an even number of elements the median is the mean of the two nearest middle values, elements (n-1)/2 and n/2. Since the algorithm for computing the median involves interpolation this function always returns a floating-point number, even for integer data types.

 -- Function: double gsl_stats_median (double data[], const size_t stride, const size_t n)
     This function returns the median value of *note data: 80a, a dataset of length *note n: 80a. with stride *note stride: 80a. The median is found using the quickselect algorithm. The input array does not need to be sorted, but note that the algorithm rearranges the array and so the input is not preserved on output.
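For example, the following sketch (illustrative data only; the ‘gsl/gsl_statistics_double.h’ header follows the chapter introduction) computes the median with the quickselect-based routine while preserving the original array by operating on a copy:

     #include <stdio.h>
     #include <string.h>
     #include <gsl/gsl_statistics_double.h>

     int
     main (void)
     {
       double data[5] = { 17.2, 18.1, 16.5, 18.3, 12.6 };
       double tmp[5];
       double median;

       /* gsl_stats_median() rearranges its input, so work on a copy
          if the original ordering is still needed */
       memcpy (tmp, data, sizeof (data));
       median = gsl_stats_median (tmp, 1, 5);

       printf ("median = %g\n", median);
       printf ("original data is unchanged: %g %g %g %g %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       return 0;
     }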
 -- Function: double gsl_stats_quantile_from_sorted_data (const double sorted_data[], size_t stride, size_t n, double f)
     This function returns a quantile value of *note sorted_data: 80b, a double-precision array of length *note n: 80b. with stride *note stride: 80b. The elements of the array must be in ascending numerical order. The quantile is determined by *note f: 80b, a fraction between 0 and 1. For example, to compute the value of the 75th percentile *note f: 80b. should have the value 0.75. There are no checks to see whether the data are sorted, so the function *note gsl_sort(): 462. should always be used first.

     The quantile is found by interpolation, using the formula

          quantile = (1 - \delta) x_i + \delta x_{i+1}

     where i is ‘floor((n - 1)f)’ and \delta is (n-1)f - i. Thus the minimum value of the array (‘data[0*stride]’) is given by *note f: 80b. equal to zero, the maximum value (‘data[(n-1)*stride]’) is given by *note f: 80b. equal to one and the median value is given by *note f: 80b. equal to 0.5. Since the algorithm for computing quantiles involves interpolation this function always returns a floating-point number, even for integer data types.

 File: gsl-ref.info, Node: Order Statistics, Next: Robust Location Estimates, Prev: Median and Percentiles, Up: Statistics

21.10 Order Statistics
======================

The k-th `order statistic' of a sample is equal to its k-th smallest value. The k-th order statistic of a set of n values x = \left\{ x_i \right\}, 1 \le i \le n is denoted x_{(k)}. The median of the set x is equal to x_{\left( \frac{n+1}{2} \right)} if n is odd, or the average of x_{\left( \frac{n}{2} \right)} and x_{\left( \frac{n}{2} + 1 \right)} if n is even. The k-th smallest element of a length n vector can be found in average O(n) time using the quickselect algorithm.

 -- Function: double gsl_stats_select (double data[], const size_t stride, const size_t n, const size_t k)
     This function finds the *note k: 80d.-th smallest element of the input array *note data: 80d. of length *note n: 80d. and stride *note stride: 80d. using the quickselect method. The algorithm rearranges the elements of *note data: 80d. and so the input array is not preserved on output.

 File: gsl-ref.info, Node: Robust Location Estimates, Next: Robust Scale Estimates, Prev: Order Statistics, Up: Statistics

21.11 Robust Location Estimates
===============================

A `location estimate' refers to a typical or central value which best describes a given dataset. The mean and median are both examples of location estimators. However, the mean has a severe sensitivity to data outliers and can give erroneous values when even a small number of outliers are present. The median, on the other hand, has a strong insensitivity to data outliers, but due to its non-smoothness it can behave unexpectedly in certain situations. GSL offers the following alternative location estimators, which are robust to the presence of outliers.

* Menu:

* Trimmed Mean::
* Gastwirth Estimator::

 File: gsl-ref.info, Node: Trimmed Mean, Next: Gastwirth Estimator, Up: Robust Location Estimates

21.11.1 Trimmed Mean
--------------------

The trimmed mean, or `truncated mean', discards a certain number of smallest and largest samples from the input vector before computing the mean of the remaining samples. The amount of trimming is specified by a factor \alpha \in [0,0.5]. Then the number of samples discarded from both ends of the input vector is \left\lfloor \alpha n \right\rfloor, where n is the length of the input.
So to discard 25% of the samples from each end, one would set \alpha = 0.25. -- Function: double gsl_stats_trmean_from_sorted_data (const double alpha, const double sorted_data[], const size_t stride, const size_t n) This function returns the trimmed mean of *note sorted_data: 810, a dataset of length *note n: 810. with stride *note stride: 810. The elements of the array must be in ascending numerical order. There are no checks to see whether the data are sorted, so the function *note gsl_sort(): 462. should always be used first. The trimming factor \alpha is given in *note alpha: 810. If \alpha \ge 0.5, then the median of the input is returned.  File: gsl-ref.info, Node: Gastwirth Estimator, Prev: Trimmed Mean, Up: Robust Location Estimates 21.11.2 Gastwirth Estimator --------------------------- Gastwirth’s location estimator is a weighted sum of three order statistics, gastwirth = 0.3 * Q_{1/3} + 0.4 * Q_{1/2} + 0.3 * Q_{2/3} where Q_{\frac{1}{3}} is the one-third quantile, Q_{\frac{1}{2}} is the one-half quantile (i.e. median), and Q_{\frac{2}{3}} is the two-thirds quantile. -- Function: double gsl_stats_gastwirth_from_sorted_data (const double sorted_data[], const size_t stride, const size_t n) This function returns the Gastwirth location estimator of *note sorted_data: 812, a dataset of length *note n: 812. with stride *note stride: 812. The elements of the array must be in ascending numerical order. There are no checks to see whether the data are sorted, so the function *note gsl_sort(): 462. should always be used first.  File: gsl-ref.info, Node: Robust Scale Estimates, Next: Examples<15>, Prev: Robust Location Estimates, Up: Statistics 21.12 Robust Scale Estimates ============================ A `robust scale estimate', also known as a robust measure of scale, attempts to quantify the statistical dispersion (variability, scatter, spread) in a set of data which may contain outliers. In such datasets, the usual variance or standard deviation scale estimate can be rendered useless by even a single outlier. * Menu: * Median Absolute Deviation (MAD): Median Absolute Deviation MAD. * S_n Statistic:: * Q_n Statistic::  File: gsl-ref.info, Node: Median Absolute Deviation MAD, Next: S_n Statistic, Up: Robust Scale Estimates 21.12.1 Median Absolute Deviation (MAD) --------------------------------------- The median absolute deviation (MAD) is defined as MAD = 1.4826 median { | x_i - median(x) | } In words, first the median of all samples is computed. Then the median is subtracted from all samples in the input to find the deviation of each sample from the median. The median of all absolute deviations is then the MAD. The factor 1.4826 makes the MAD an unbiased estimator of the standard deviation for Gaussian data. The median absolute deviation has an asymptotic efficiency of 37%. -- Function: double gsl_stats_mad0 (const double data[], const size_t stride, const size_t n, double work[]) -- Function: double gsl_stats_mad (const double data[], const size_t stride, const size_t n, double work[]) These functions return the median absolute deviation of *note data: 817, a dataset of length *note n: 817. and stride *note stride: 817. The ‘mad0’ function calculates \textrm{median} \left\{ \left| x_i - \textrm{median} \left( x \right) \right| \right\} (i.e. the MAD statistic without the bias correction scale factor). These functions require additional workspace of size ‘n’ provided in *note work: 817.  
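For illustration, the following sketch (with invented data containing one gross outlier) contrasts the ordinary standard deviation with the MAD; note the additional workspace of size n, and the ‘gsl/gsl_statistics_double.h’ header named in the chapter introduction:

     #include <stdio.h>
     #include <gsl/gsl_statistics_double.h>

     int
     main (void)
     {
       /* dataset with one obvious outlier (illustrative values) */
       double data[6] = { 1.0, 1.1, 0.9, 1.2, 1.0, 100.0 };
       double work[6];               /* workspace of size n */
       double sd, mad;

       sd  = gsl_stats_sd (data, 1, 6);
       mad = gsl_stats_mad (data, 1, 6, work);

       /* the standard deviation is dominated by the outlier, while the MAD
          stays close to the spread of the bulk of the data */
       printf ("sd  = %g\n", sd);
       printf ("mad = %g\n", mad);

       return 0;
     }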
File: gsl-ref.info, Node: S_n Statistic, Next: Q_n Statistic, Prev: Median Absolute Deviation MAD, Up: Robust Scale Estimates

21.12.2 S_n Statistic
---------------------

The S_n statistic developed by Croux and Rousseeuw is defined as

     S_n = 1.1926 * c_n * median_i { median_j ( | x_i - x_j | ) }

For each sample x_i, 1 \le i \le n, the median of the values \left| x_i - x_j \right| is computed for all x_j, 1 \le j \le n. This yields n values, whose median then gives the final S_n. The factor 1.1926 makes S_n an unbiased estimate of the standard deviation for Gaussian data. The factor c_n is a correction factor to correct bias in small sample sizes. S_n has an asymptotic efficiency of 58%.

 -- Function: double gsl_stats_Sn0_from_sorted_data (const double sorted_data[], const size_t stride, const size_t n, double work[])
 -- Function: double gsl_stats_Sn_from_sorted_data (const double sorted_data[], const size_t stride, const size_t n, double work[])
     These functions return the S_n statistic of *note sorted_data: 81b, a dataset of length *note n: 81b. with stride *note stride: 81b. The elements of the array must be in ascending numerical order. There are no checks to see whether the data are sorted, so the function *note gsl_sort(): 462. should always be used first. The ‘Sn0’ function calculates \textrm{median}_i \left\{ \textrm{median}_j \left( \left| x_i - x_j \right| \right) \right\} (i.e. the S_n statistic without the bias correction scale factors). These functions require additional workspace of size ‘n’ provided in *note work: 81b.

 File: gsl-ref.info, Node: Q_n Statistic, Prev: S_n Statistic, Up: Robust Scale Estimates

21.12.3 Q_n Statistic
---------------------

The Q_n statistic developed by Croux and Rousseeuw is defined as

     Q_n = 2.21914 * d_n * { | x_i - x_j |, i < j }_{(k)}

The factor 2.21914 makes Q_n an unbiased estimate of the standard deviation for Gaussian data. The factor d_n is a correction factor to correct bias in small sample sizes. The order statistic used is

     k = { \lfloor n/2 \rfloor + 1 \choose 2 }

Q_n has an asymptotic efficiency of 82%.

 -- Function: double gsl_stats_Qn0_from_sorted_data (const double sorted_data[], const size_t stride, const size_t n, double work[], int work_int[])
 -- Function: double gsl_stats_Qn_from_sorted_data (const double sorted_data[], const size_t stride, const size_t n, double work[], int work_int[])
     These functions return the Q_n statistic of *note sorted_data: 81f, a dataset of length *note n: 81f. with stride *note stride: 81f. The elements of the array must be in ascending numerical order. There are no checks to see whether the data are sorted, so the function *note gsl_sort(): 462. should always be used first. The ‘Qn0’ function calculates \left\{ \left| x_i - x_j \right|, i < j \right\}_{(k)} (i.e. Q_n without the bias correction scale factors). These functions require additional workspace of size ‘3n’ provided in *note work: 81f. and integer workspace of size ‘5n’ provided in *note work_int: 81f.
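A short sketch combining the two estimators might look as follows; the data are the same invented outlier example used above, and the ‘gsl/gsl_sort.h’ header is assumed for the required sorting step:

     #include <stdio.h>
     #include <gsl/gsl_sort.h>
     #include <gsl/gsl_statistics_double.h>

     int
     main (void)
     {
       double data[6] = { 1.0, 1.1, 0.9, 1.2, 1.0, 100.0 };
       double sn_work[6];            /* workspace of size n for S_n */
       double qn_work[3 * 6];        /* workspace of size 3n for Q_n */
       int qn_work_int[5 * 6];       /* integer workspace of size 5n for Q_n */
       double sn, qn;

       /* both estimators require the data in ascending order */
       gsl_sort (data, 1, 6);

       sn = gsl_stats_Sn_from_sorted_data (data, 1, 6, sn_work);
       qn = gsl_stats_Qn_from_sorted_data (data, 1, 6, qn_work, qn_work_int);

       printf ("S_n = %g\n", sn);
       printf ("Q_n = %g\n", qn);

       return 0;
     }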
File: gsl-ref.info, Node: Examples<15>, Next: References and Further Reading<15>, Prev: Robust Scale Estimates, Up: Statistics

21.13 Examples
==============

Here is a basic example of how to use the statistical functions:

     #include <stdio.h>
     #include <gsl/gsl_statistics.h>

     int
     main(void)
     {
       double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
       double mean, variance, largest, smallest;

       mean     = gsl_stats_mean(data, 1, 5);
       variance = gsl_stats_variance(data, 1, 5);
       largest  = gsl_stats_max(data, 1, 5);
       smallest = gsl_stats_min(data, 1, 5);

       printf ("The dataset is %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       printf ("The sample mean is %g\n", mean);
       printf ("The estimated variance is %g\n", variance);
       printf ("The largest value is %g\n", largest);
       printf ("The smallest value is %g\n", smallest);
       return 0;
     }

The program should produce the following output,

     The dataset is 17.2, 18.1, 16.5, 18.3, 12.6
     The sample mean is 16.54
     The estimated variance is 5.373
     The largest value is 18.3
     The smallest value is 12.6

Here is an example using sorted data,

     #include <stdio.h>
     #include <gsl/gsl_sort.h>
     #include <gsl/gsl_statistics.h>

     int
     main(void)
     {
       double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
       double median, upperq, lowerq;

       printf ("Original dataset: %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       gsl_sort (data, 1, 5);

       printf ("Sorted dataset: %g, %g, %g, %g, %g\n",
               data[0], data[1], data[2], data[3], data[4]);

       median = gsl_stats_median_from_sorted_data (data, 1, 5);

       upperq = gsl_stats_quantile_from_sorted_data (data, 1, 5, 0.75);
       lowerq = gsl_stats_quantile_from_sorted_data (data, 1, 5, 0.25);

       printf ("The median is %g\n", median);
       printf ("The upper quartile is %g\n", upperq);
       printf ("The lower quartile is %g\n", lowerq);
       return 0;
     }

This program should produce the following output,

     Original dataset: 17.2, 18.1, 16.5, 18.3, 12.6
     Sorted dataset: 12.6, 16.5, 17.2, 18.1, 18.3
     The median is 17.2
     The upper quartile is 18.1
     The lower quartile is 16.5

 File: gsl-ref.info, Node: References and Further Reading<15>, Prev: Examples<15>, Up: Statistics

21.14 References and Further Reading
====================================

The standard reference for almost any topic in statistics is the multi-volume `Advanced Theory of Statistics' by Kendall and Stuart.

   * Maurice Kendall, Alan Stuart, and J. Keith Ord. `The Advanced Theory of Statistics' (multiple volumes) reprinted as `Kendall’s Advanced Theory of Statistics'. Wiley, ISBN 047023380X.

Many statistical concepts can be more easily understood by a Bayesian approach. The following book by Gelman, Carlin, Stern and Rubin gives a comprehensive coverage of the subject.

   * Andrew Gelman, John B. Carlin, Hal S. Stern, Donald B. Rubin. `Bayesian Data Analysis'. Chapman & Hall, ISBN 0412039915.

For physicists the Particle Data Group provides useful reviews of Probability and Statistics in the “Mathematical Tools” section of its Annual Review of Particle Physics.

   * `Review of Particle Properties', R.M. Barnett et al., Physical Review D54, 1 (1996)

The Review of Particle Physics is available online at the website ‘http://pdg.lbl.gov/’.

The following papers describe robust scale estimation,

   * C. Croux and P. J. Rousseeuw, `Time-Efficient algorithms for two highly robust estimators of scale', Comp. Stat., Physica, Heidelberg, 1992.

   * P. J. Rousseeuw and C. Croux, `Explicit scale estimators with high breakdown point', L1-Statistical Analysis and Related Methods, pp. 77-92, 1992.
File: gsl-ref.info, Node: Running Statistics, Next: Moving Window Statistics, Prev: Statistics, Up: Top

22 Running Statistics
*********************

This chapter describes routines for computing running statistics, also known as online statistics, of data. These routines are suitable for handling large datasets which may be inconvenient or impractical to store in memory all at once. The data can be processed in a single pass, one point at a time. Each time a data point is added to the accumulator, internal parameters are updated in order to compute the current mean, variance, standard deviation, skewness, and kurtosis. These statistics are exact, and are updated with numerically stable single-pass algorithms. The median and arbitrary quantiles are also available; however, these calculations use approximate algorithms which grow more accurate as more data are added to the accumulator.

The functions described in this chapter are declared in the header file ‘gsl_rstat.h’.

* Menu:

* Initializing the Accumulator::
* Adding Data to the Accumulator::
* Current Statistics::
* Quantiles::
* Examples: Examples<16>.
* References and Further Reading: References and Further Reading<16>.

File: gsl-ref.info, Node: Initializing the Accumulator, Next: Adding Data to the Accumulator, Up: Running Statistics

22.1 Initializing the Accumulator
=================================

 -- Type: gsl_rstat_workspace
     This workspace contains parameters used to calculate various statistics, which are updated after each data point is added to the accumulator.

 -- Function: *note gsl_rstat_workspace: 825. *gsl_rstat_alloc (void)
     This function allocates a workspace for computing running statistics. The size of the workspace is O(1).

 -- Function: void gsl_rstat_free (gsl_rstat_workspace *w)
     This function frees the memory associated with the workspace *note w: 827.

 -- Function: int gsl_rstat_reset (gsl_rstat_workspace *w)
     This function resets the workspace *note w: 828. to its initial state, so it can begin working on a new set of data.

File: gsl-ref.info, Node: Adding Data to the Accumulator, Next: Current Statistics, Prev: Initializing the Accumulator, Up: Running Statistics

22.2 Adding Data to the Accumulator
===================================

 -- Function: int gsl_rstat_add (const double x, gsl_rstat_workspace *w)
     This function adds the data point *note x: 82a. to the statistical accumulator, updating calculations of the mean, variance, standard deviation, skewness, kurtosis, and median.

 -- Function: size_t gsl_rstat_n (const gsl_rstat_workspace *w)
     This function returns the number of data points added to the accumulator so far.

File: gsl-ref.info, Node: Current Statistics, Next: Quantiles, Prev: Adding Data to the Accumulator, Up: Running Statistics

22.3 Current Statistics
=======================

 -- Function: double gsl_rstat_min (const gsl_rstat_workspace *w)
     This function returns the minimum value added to the accumulator.

 -- Function: double gsl_rstat_max (const gsl_rstat_workspace *w)
     This function returns the maximum value added to the accumulator.
 -- Function: double gsl_rstat_mean (const gsl_rstat_workspace *w)
     This function returns the mean of all data added to the accumulator, defined as

     \Hat\mu = (1/N) \sum x_i

 -- Function: double gsl_rstat_variance (const gsl_rstat_workspace *w)
     This function returns the variance of all data added to the accumulator, defined as

     \Hat\sigma^2 = (1/(N-1)) \sum (x_i - \Hat\mu)^2

 -- Function: double gsl_rstat_sd (const gsl_rstat_workspace *w)
     This function returns the standard deviation of all data added to the accumulator, defined as the square root of the variance given above.

 -- Function: double gsl_rstat_sd_mean (const gsl_rstat_workspace *w)
     This function returns the standard deviation of the mean, defined as

     sd_mean = \Hat\sigma / \sqrt{N}

 -- Function: double gsl_rstat_rms (const gsl_rstat_workspace *w)
     This function returns the root mean square of all data added to the accumulator, defined as

     rms = \sqrt{{1 \over N} \sum x_i^2}

 -- Function: double gsl_rstat_skew (const gsl_rstat_workspace *w)
     This function returns the skewness of all data added to the accumulator, defined as

     skew = (1/N) \sum ((x_i - \Hat\mu)/\Hat\sigma)^3

 -- Function: double gsl_rstat_kurtosis (const gsl_rstat_workspace *w)
     This function returns the kurtosis of all data added to the accumulator, defined as

     kurtosis = ((1/N) \sum ((x_i - \Hat\mu)/\Hat\sigma)^4) - 3

 -- Function: double gsl_rstat_median (gsl_rstat_workspace *w)
     This function returns an estimate of the median of the data added to the accumulator.

File: gsl-ref.info, Node: Quantiles, Next: Examples<16>, Prev: Current Statistics, Up: Running Statistics

22.4 Quantiles
==============

The functions in this section estimate quantiles dynamically without storing the entire dataset, using the algorithm of Jain and Chlamtac, 1985. Only five points (markers) are stored which represent the minimum and maximum of the data, as well as current estimates of the p/2-, p-, and (1+p)/2-quantiles. Each time a new data point is added, the marker positions and heights are updated.

 -- Type: gsl_rstat_quantile_workspace
     This workspace contains parameters for estimating quantiles of the current dataset.

 -- Function: *note gsl_rstat_quantile_workspace: 838. *gsl_rstat_quantile_alloc (const double p)
     This function allocates a workspace for the dynamic estimation of *note p: 839.-quantiles, where *note p: 839. is between 0 and 1. The median corresponds to p = 0.5. The size of the workspace is O(1).

 -- Function: void gsl_rstat_quantile_free (gsl_rstat_quantile_workspace *w)
     This function frees the memory associated with the workspace *note w: 83a.

 -- Function: int gsl_rstat_quantile_reset (gsl_rstat_quantile_workspace *w)
     This function resets the workspace *note w: 83b. to its initial state, so it can begin working on a new set of data.

 -- Function: int gsl_rstat_quantile_add (const double x, gsl_rstat_quantile_workspace *w)
     This function updates the estimate of the p-quantile with the new data point *note x: 83c.

 -- Function: double gsl_rstat_quantile_get (gsl_rstat_quantile_workspace *w)
     This function returns the current estimate of the p-quantile.
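As a quick illustration (this sketch is not part of the original manual text; a fuller example appears in the Examples section below), the quantile accumulator might be used as follows to track a running estimate of the median.

#include <stdio.h>
#include <gsl/gsl_rstat.h>

int main(void)
{
  /* track a running estimate of the median (p = 0.5) */
  gsl_rstat_quantile_workspace *w = gsl_rstat_quantile_alloc(0.5);
  double data[] = { 1.0, 4.0, 2.0, 8.0, 5.0, 7.0 };
  size_t i;

  for (i = 0; i < sizeof(data) / sizeof(data[0]); ++i)
    gsl_rstat_quantile_add(data[i], w);

  printf("estimated median = %g\n", gsl_rstat_quantile_get(w));

  gsl_rstat_quantile_free(w);
  return 0;
}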
File: gsl-ref.info, Node: Examples<16>, Next: References and Further Reading<16>, Prev: Quantiles, Up: Running Statistics

22.5 Examples
=============

Here is a basic example of how to use the statistical functions:

#include <stdio.h>
#include <gsl/gsl_rstat.h>

int main(void)
{
  double data[5] = {17.2, 18.1, 16.5, 18.3, 12.6};
  double mean, variance, largest, smallest, sd,
         rms, sd_mean, median, skew, kurtosis;
  gsl_rstat_workspace *rstat_p = gsl_rstat_alloc();
  size_t i, n;

  /* add data to rstat accumulator */
  for (i = 0; i < 5; ++i)
    gsl_rstat_add(data[i], rstat_p);

  mean     = gsl_rstat_mean(rstat_p);
  variance = gsl_rstat_variance(rstat_p);
  largest  = gsl_rstat_max(rstat_p);
  smallest = gsl_rstat_min(rstat_p);
  median   = gsl_rstat_median(rstat_p);
  sd       = gsl_rstat_sd(rstat_p);
  sd_mean  = gsl_rstat_sd_mean(rstat_p);
  skew     = gsl_rstat_skew(rstat_p);
  rms      = gsl_rstat_rms(rstat_p);
  kurtosis = gsl_rstat_kurtosis(rstat_p);
  n        = gsl_rstat_n(rstat_p);

  printf ("The dataset is %g, %g, %g, %g, %g\n",
          data[0], data[1], data[2], data[3], data[4]);

  printf ("The sample mean is %g\n", mean);
  printf ("The estimated variance is %g\n", variance);
  printf ("The largest value is %g\n", largest);
  printf ("The smallest value is %g\n", smallest);
  printf( "The median is %g\n", median);
  printf( "The standard deviation is %g\n", sd);
  printf( "The root mean square is %g\n", rms);
  printf( "The standard deviation of the mean is %g\n", sd_mean);
  printf( "The skew is %g\n", skew);
  printf( "The kurtosis is %g\n", kurtosis);
  printf( "There are %zu items in the accumulator\n", n);

  gsl_rstat_reset(rstat_p);
  n = gsl_rstat_n(rstat_p);
  printf( "There are %zu items in the accumulator\n", n);

  gsl_rstat_free(rstat_p);

  return 0;
}

The program should produce the following output,

The dataset is 17.2, 18.1, 16.5, 18.3, 12.6
The sample mean is 16.54
The estimated variance is 5.373
The largest value is 18.3
The smallest value is 12.6
The median is 17.2
The standard deviation is 2.31797
The root mean square is 16.6694
The standard deviation of the mean is 1.03663
The skew is -0.829058
The kurtosis is -1.2217
There are 5 items in the accumulator
There are 0 items in the accumulator

This next program estimates the lower quartile, median and upper quartile from 10,000 samples of a random Rayleigh distribution, using the P^2 algorithm of Jain and Chlamtac. For comparison, the exact values are also computed from the sorted dataset.
#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_rstat.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_sort.h>
#include <gsl/gsl_statistics.h>

int main(void)
{
  const size_t N = 10000;
  double *data = malloc(N * sizeof(double));
  gsl_rstat_quantile_workspace *work_25 = gsl_rstat_quantile_alloc(0.25);
  gsl_rstat_quantile_workspace *work_50 = gsl_rstat_quantile_alloc(0.5);
  gsl_rstat_quantile_workspace *work_75 = gsl_rstat_quantile_alloc(0.75);
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  double exact_p25, exact_p50, exact_p75;
  double val_p25, val_p50, val_p75;
  size_t i;

  /* add data to quantile accumulators; also store data for exact
   * comparisons */
  for (i = 0; i < N; ++i)
    {
      data[i] = gsl_ran_rayleigh(r, 1.0);
      gsl_rstat_quantile_add(data[i], work_25);
      gsl_rstat_quantile_add(data[i], work_50);
      gsl_rstat_quantile_add(data[i], work_75);
    }

  /* exact values */
  gsl_sort(data, 1, N);
  exact_p25 = gsl_stats_quantile_from_sorted_data(data, 1, N, 0.25);
  exact_p50 = gsl_stats_quantile_from_sorted_data(data, 1, N, 0.5);
  exact_p75 = gsl_stats_quantile_from_sorted_data(data, 1, N, 0.75);

  /* estimated values */
  val_p25 = gsl_rstat_quantile_get(work_25);
  val_p50 = gsl_rstat_quantile_get(work_50);
  val_p75 = gsl_rstat_quantile_get(work_75);

  printf ("The dataset is %g, %g, %g, %g, %g, ...\n",
          data[0], data[1], data[2], data[3], data[4]);

  printf ("0.25 quartile: exact = %.5f, estimated = %.5f, error = %.6e\n",
          exact_p25, val_p25, (val_p25 - exact_p25) / exact_p25);
  printf ("0.50 quartile: exact = %.5f, estimated = %.5f, error = %.6e\n",
          exact_p50, val_p50, (val_p50 - exact_p50) / exact_p50);
  printf ("0.75 quartile: exact = %.5f, estimated = %.5f, error = %.6e\n",
          exact_p75, val_p75, (val_p75 - exact_p75) / exact_p75);

  gsl_rstat_quantile_free(work_25);
  gsl_rstat_quantile_free(work_50);
  gsl_rstat_quantile_free(work_75);
  gsl_rng_free(r);
  free(data);

  return 0;
}

The program should produce the following output,

The dataset is 0.00645272, 0.0074002, 0.0120706, 0.0207256, 0.0227282, ...
0.25 quartile: exact = 0.75766, estimated = 0.75580, error = -2.450209e-03
0.50 quartile: exact = 1.17508, estimated = 1.17438, error = -5.995912e-04
0.75 quartile: exact = 1.65347, estimated = 1.65696, error = 2.110571e-03

File: gsl-ref.info, Node: References and Further Reading<16>, Prev: Examples<16>, Up: Running Statistics

22.6 References and Further Reading
===================================

The algorithm used to dynamically estimate p-quantiles is described in the paper,

   * R. Jain and I. Chlamtac. `The P^2 algorithm for dynamic calculation of quantiles and histograms without storing observations', Communications of the ACM, Volume 28 (October), Number 10, 1985, p. 1076-1085.

File: gsl-ref.info, Node: Moving Window Statistics, Next: Digital Filtering, Prev: Running Statistics, Up: Top

23 Moving Window Statistics
***************************

This chapter describes routines for computing `moving window statistics' (also called rolling statistics and running statistics), using a window around a sample which is used to calculate various local statistical properties of an input data stream. The window is then slid forward by one sample to process the next data point and so on.

The functions described in this chapter are declared in the header file ‘gsl_movstat.h’.

* Menu:

* Introduction: Introduction<4>.
* Handling Endpoints::
* Allocation for Moving Window Statistics::
* Moving Mean::
* Moving Variance and Standard Deviation::
* Moving Minimum and Maximum::
* Moving Sum::
* Moving Median::
* Robust Scale Estimation::
* User-defined Moving Statistics::
* Accumulators::
* Examples: Examples<17>.
* References and Further Reading: References and Further Reading<17>.  File: gsl-ref.info, Node: Introduction<4>, Next: Handling Endpoints, Up: Moving Window Statistics 23.1 Introduction ================= This chapter is concerned with calculating various statistics from subsets of a given dataset. The main idea is to compute statistics in the vicinity of a given data sample by defining a `window' which includes the sample itself as well as some specified number of samples before and after the sample in question. For a sample x_i, we define a window W_i^{H,J} as W_i^{H,J} = {x_{i-H},...,x_i,...,x_{i+J}} The parameters H and J are non-negative integers specifying the number of samples to include before and after the sample x_i. Statistics such as the mean and standard deviation of the window W_i^{H,J} may be computed, and then the window is shifted forward by one sample to focus on x_{i+1}. The total number of samples in the window is K = H + J + 1. To define a symmetric window centered on x_i, one would set H = J = \left\lfloor K / 2 \right\rfloor.  File: gsl-ref.info, Node: Handling Endpoints, Next: Allocation for Moving Window Statistics, Prev: Introduction<4>, Up: Moving Window Statistics 23.2 Handling Endpoints ======================= When processing samples near the ends of the input signal, there will not be enough samples to fill the window W_i^{H,J} defined above. Therefore the user must specify how to construct the windows near the end points. This is done by passing an input argument of type *note gsl_movstat_end_t: 844.: -- Type: gsl_movstat_end_t This data type specifies how to construct windows near end points and can be selected from the following choices: -- Macro: GSL_MOVSTAT_END_PADZERO With this option, a full window of length K will be constructed by inserting zeros into the window near the signal end points. Effectively, the input signal is modified to x~ = {0, ..., 0, x_1, x_2, ..., x_{n-1}, x_n, 0, ..., 0} to ensure a well-defined window for all x_i. -- Macro: GSL_MOVSTAT_END_PADVALUE With this option, a full window of length K will be constructed by padding the window with the first and last sample in the input signal. Effectively, the input signal is modified to x~ = {x_1, ..., x_1, x_1, x_2, ..., x_{n-1}, x_n, x_n, ..., x_n} -- Macro: GSL_MOVSTAT_END_TRUNCATE With this option, no padding is performed, and the windows are simply truncated as the end points are approached.  File: gsl-ref.info, Node: Allocation for Moving Window Statistics, Next: Moving Mean, Prev: Handling Endpoints, Up: Moving Window Statistics 23.3 Allocation for Moving Window Statistics ============================================ -- Type: gsl_movstat_workspace The moving window statistical routines use a common workspace. -- Function: *note gsl_movstat_workspace: 849. *gsl_movstat_alloc (const size_t K) This function allocates a workspace for computing symmetric, centered moving statistics with a window length of K samples. In this case, H = J = \left\lfloor K/2 \right\rfloor. The size of the workspace is O(7K). -- Function: *note gsl_movstat_workspace: 849. *gsl_movstat_alloc2 (const size_t H, const size_t J) This function allocates a workspace for computing moving statistics using a window with H samples prior to the current sample, and J samples after the current sample. The total window size is K = H + J + 1. The size of the workspace is O(7K). -- Function: void *gsl_movstat_free (gsl_movstat_workspace *w) This function frees the memory associated with *note w: 84c.  
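The following minimal sketch (not part of the original manual text) illustrates both allocation routines; the call to gsl_movstat_mean used here is described in the next section, and the endpoint option GSL_MOVSTAT_END_PADZERO is described above.

#include <stdio.h>
#include <gsl/gsl_movstat.h>
#include <gsl/gsl_vector.h>

int main(void)
{
  const size_t N = 100;
  gsl_vector *x = gsl_vector_alloc(N);
  gsl_vector *y = gsl_vector_alloc(N);

  /* symmetric window: K = 11, so H = J = 5 */
  gsl_movstat_workspace *w_sym = gsl_movstat_alloc(11);

  /* asymmetric (causal) window: 10 samples before, none after */
  gsl_movstat_workspace *w_causal = gsl_movstat_alloc2(10, 0);

  size_t i;
  for (i = 0; i < N; ++i)
    gsl_vector_set(x, i, (double) i);

  /* moving mean with zero-padded endpoints, first with the symmetric
     window and then with the causal window */
  gsl_movstat_mean(GSL_MOVSTAT_END_PADZERO, x, y, w_sym);
  gsl_movstat_mean(GSL_MOVSTAT_END_PADZERO, x, y, w_causal);

  printf("last causal moving mean = %g\n", gsl_vector_get(y, N - 1));

  gsl_movstat_free(w_sym);
  gsl_movstat_free(w_causal);
  gsl_vector_free(x);
  gsl_vector_free(y);
  return 0;
}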
File: gsl-ref.info, Node: Moving Mean, Next: Moving Variance and Standard Deviation, Prev: Allocation for Moving Window Statistics, Up: Moving Window Statistics 23.4 Moving Mean ================ The moving window mean calculates the mean of the values of each window W_i^{H,J}. \hat{\mu}_i = 1/| W_i^{H,J} | \sum_{x_m \in W_i^{H,J}} x_m Here, \left| W_i^{H,J} \right| represents the number of elements in the window W_i^{H,J}. This will normally be K, unless the ‘GSL_MOVSTAT_END_TRUNCATE’ option is selected, in which case it could be less than K near the signal end points. -- Function: int gsl_movstat_mean (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_movstat_workspace *w) This function computes the moving window mean of the input vector *note x: 84e, storing the output in *note y: 84e. The parameter *note endtype: 84e. specifies how windows near the ends of the input should be handled. It is allowed to have *note x: 84e. = *note y: 84e. for an in-place moving mean.  File: gsl-ref.info, Node: Moving Variance and Standard Deviation, Next: Moving Minimum and Maximum, Prev: Moving Mean, Up: Moving Window Statistics 23.5 Moving Variance and Standard Deviation =========================================== The moving window variance calculates the `sample variance' of the values of each window W_i^{H,J}, defined by \hat{\sigma}_i^2 = 1/(|W_i^{H,J}| - 1) \sum_{x_m \in W_i^{H,J}} ( x_m - \hat{\mu}_i )^2 where \hat{\mu}_i is the mean of W_i^{H,J} defined above. The standard deviation \hat{\sigma}_i is the square root of the variance. -- Function: int gsl_movstat_variance (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_movstat_workspace *w) This function computes the moving window variance of the input vector *note x: 850, storing the output in *note y: 850. The parameter *note endtype: 850. specifies how windows near the ends of the input should be handled. It is allowed to have *note x: 850. = *note y: 850. for an in-place moving variance. -- Function: int gsl_movstat_sd (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_movstat_workspace *w) This function computes the moving window standard deviation of the input vector *note x: 851, storing the output in *note y: 851. The parameter *note endtype: 851. specifies how windows near the ends of the input should be handled. It is allowed to have *note x: 851. = *note y: 851. for an in-place moving standard deviation.  File: gsl-ref.info, Node: Moving Minimum and Maximum, Next: Moving Sum, Prev: Moving Variance and Standard Deviation, Up: Moving Window Statistics 23.6 Moving Minimum and Maximum =============================== The moving minimum/maximum calculates the minimum and maximum values of each window W_i^{H,J}. y_i^{min} = \min W_i^{H,J} y_i^{max} = \max W_i^{H,J} -- Function: int gsl_movstat_min (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_movstat_workspace *w) This function computes the moving minimum of the input vector *note x: 853, storing the result in *note y: 853. The parameter *note endtype: 853. specifies how windows near the ends of the input should be handled. It is allowed to have *note x: 853. = *note y: 853. for an in-place moving minimum. -- Function: int gsl_movstat_max (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_movstat_workspace *w) This function computes the moving maximum of the input vector *note x: 854, storing the result in *note y: 854. The parameter *note endtype: 854. 
specifies how windows near the ends of the input should be handled. It is allowed to have *note x: 854. = *note y: 854. for an in-place moving maximum. -- Function: int gsl_movstat_minmax (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *y_min, gsl_vector *y_max, gsl_movstat_workspace *w) This function computes the moving minimum and maximum of the input vector *note x: 855, storing the window minimums in *note y_min: 855. and the window maximums in *note y_max: 855. The parameter *note endtype: 855. specifies how windows near the ends of the input should be handled.  File: gsl-ref.info, Node: Moving Sum, Next: Moving Median, Prev: Moving Minimum and Maximum, Up: Moving Window Statistics 23.7 Moving Sum =============== The moving window sum calculates the sum of the values of each window W_i^{H,J}. y_i = \sum_{x_m \in W_i^{H,J}} x_m -- Function: int gsl_movstat_sum (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_movstat_workspace *w) This function computes the moving window sum of the input vector *note x: 857, storing the output in *note y: 857. The parameter *note endtype: 857. specifies how windows near the ends of the input should be handled. It is allowed to have *note x: 857. = *note y: 857. for an in-place moving sum.  File: gsl-ref.info, Node: Moving Median, Next: Robust Scale Estimation, Prev: Moving Sum, Up: Moving Window Statistics 23.8 Moving Median ================== The moving median calculates the median of the window W_i^{H,J} for each sample x_i: y_i = median(W_i^{H,J}) -- Function: int gsl_movstat_median (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_movstat_workspace *w) This function computes the moving median of the input vector *note x: 859, storing the output in *note y: 859. The parameter *note endtype: 859. specifies how windows near the ends of the input should be handled. It is allowed for *note x: 859. = *note y: 859. for an in-place moving window median.  File: gsl-ref.info, Node: Robust Scale Estimation, Next: User-defined Moving Statistics, Prev: Moving Median, Up: Moving Window Statistics 23.9 Robust Scale Estimation ============================ A common problem in statistics is to quantify the dispersion (also known as the variability, scatter, and spread) of a set of data. Often this is done by calculating the variance or standard deviation. However these statistics are strongly influenced by outliers, and can often provide erroneous results when even a small number of outliers are present. Several useful statistics have emerged to provide robust estimates of scale which are not as susceptible to data outliers. A few of these statistical scale estimators are described below. * Menu: * Moving MAD:: * Moving QQR:: * Moving S_n:: * Moving Q_n::  File: gsl-ref.info, Node: Moving MAD, Next: Moving QQR, Up: Robust Scale Estimation 23.9.1 Moving MAD ----------------- The median absolute deviation (MAD) for the window W_i^{H,J} is defined to be the median of the absolute deviations from the window’s median: MAD_i = 1.4826 * median[ |W_i^{H,J} - median(W_i^{H,J})| ] The factor of 1.4826 makes the MAD an unbiased estimator of the standard deviation for Gaussian data. The MAD has an efficiency of 37%. See *note here: 815. for more information. 
-- Function: int gsl_movstat_mad0 (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *xmedian, gsl_vector *xmad, gsl_movstat_workspace *w) -- Function: int gsl_movstat_mad (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *xmedian, gsl_vector *xmad, gsl_movstat_workspace *w) These functions compute the moving MAD of the input vector *note x: 85d. and store the result in *note xmad: 85d. The medians of each window W_i^{H,J} are stored in *note xmedian: 85d. on output. The inputs *note x: 85d, *note xmedian: 85d, and *note xmad: 85d. must all be the same length. The parameter *note endtype: 85d. specifies how windows near the ends of the input should be handled. The function ‘mad0’ does not include the scale factor of 1.4826, while the function ‘mad’ does include this factor.  File: gsl-ref.info, Node: Moving QQR, Next: Moving S_n, Prev: Moving MAD, Up: Robust Scale Estimation 23.9.2 Moving QQR ----------------- The q-quantile range (QQR) is the difference between the (1-q) and q quantiles of a set of data, QQR = Q_{1-q} - Q_q The case q = 0.25 corresponds to the well-known `interquartile range (IQR)', which is the difference between the 75th and 25th percentiles of a set of data. The QQR is a `trimmed estimator', the main idea being to discard the largest and smallest values in a data window and compute a scale estimate from the remaining middle values. In the case of the IQR, the largest and smallest 25% of the data are discarded and the scale is estimated from the remaining (middle) 50%. -- Function: int gsl_movstat_qqr (const gsl_movstat_end_t endtype, const gsl_vector *x, const double q, gsl_vector *xqqr, gsl_movstat_workspace *w) This function computes the moving QQR of the input vector *note x: 85f. and stores the q-quantile ranges of each window W_i^{H,J} in *note xqqr: 85f. The quantile parameter *note q: 85f. must be between 0 and 0.5. The input q = 0.25 corresponds to the IQR. The inputs *note x: 85f. and *note xqqr: 85f. must be the same length. The parameter *note endtype: 85f. specifies how windows near the ends of the input should be handled.  File: gsl-ref.info, Node: Moving S_n, Next: Moving Q_n, Prev: Moving QQR, Up: Robust Scale Estimation 23.9.3 Moving S_n ----------------- The S_n statistic proposed by Croux and Rousseeuw is based on pairwise differences between all samples in the window. It has an efficiency of 58%, significantly higher than the MAD. See *note here: 819. for more information. -- Function: int gsl_movstat_Sn (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *xscale, gsl_movstat_workspace *w) This function computes the moving S_n of the input vector *note x: 861. and stores the output in *note xscale: 861. The inputs *note x: 861. and *note xscale: 861. must be the same length. The parameter *note endtype: 861. specifies how windows near the ends of the input should be handled. It is allowed for *note x: 861. = *note xscale: 861. for an in-place moving window S_n.  File: gsl-ref.info, Node: Moving Q_n, Prev: Moving S_n, Up: Robust Scale Estimation 23.9.4 Moving Q_n ----------------- The Q_n statistic proposed by Croux and Rousseeuw is loosely based on the Hodges-Lehmann location estimator. It has a relatively high efficiency of 82%. See *note here: 81d. for more information. -- Function: int gsl_movstat_Qn (const gsl_movstat_end_t endtype, const gsl_vector *x, gsl_vector *xscale, gsl_movstat_workspace *w) This function computes the moving Q_n of the input vector *note x: 863. 
and stores the output in *note xscale: 863. The inputs *note x: 863. and *note xscale: 863. must be the same length. The parameter *note endtype: 863. specifies how windows near the ends of the input should be handled. It is allowed for *note x: 863. = *note xscale: 863. for an in-place moving window Q_n.

File: gsl-ref.info, Node: User-defined Moving Statistics, Next: Accumulators, Prev: Robust Scale Estimation, Up: Moving Window Statistics

23.10 User-defined Moving Statistics
====================================

GSL offers an interface for users to define their own moving window statistics functions, without needing to implement the edge-handling and accumulator machinery. This can be done by explicitly constructing the windows W_i^{H,J} for a given input signal (*note gsl_movstat_fill(): 865.), or by calculating a user-defined function for each window automatically. In order to apply a user-defined function to each window, users must define a variable of type *note gsl_movstat_function: 866. to pass into *note gsl_movstat_apply(): 867. This structure is defined as follows.

 -- Type: gsl_movstat_function
     Structure specifying a user-defined moving window statistical function:

     typedef struct
     {
       double (* function) (const size_t n, double x[], void * params);
       void * params;
     } gsl_movstat_function;

     This structure contains a pointer to the user-defined function as well as possible parameters to pass to the function.

 -- Member: double (*function) (const size_t n, double x[], void *params)
     This function returns the user-defined statistic of the array ‘x’ of length ‘n’. User-specified parameters are passed in via *note params: 869. It is allowed to modify the array ‘x’.

 -- Member: void *params
     User-specified parameters to be passed into the function.

 -- Function: int gsl_movstat_apply (const gsl_movstat_end_t endtype, const gsl_movstat_function *F, const gsl_vector *x, gsl_vector *y, gsl_movstat_workspace *w)
     This function applies the user-defined moving window statistic specified in *note F: 867. to the input vector *note x: 867, storing the output in *note y: 867. The parameter *note endtype: 867. specifies how windows near the ends of the input should be handled. It is allowed for *note x: 867. = *note y: 867. for an in-place moving window calculation.

 -- Function: size_t gsl_movstat_fill (const gsl_movstat_end_t endtype, const gsl_vector *x, const size_t idx, const size_t H, const size_t J, double *window)
     This function explicitly constructs the sliding window for the input vector *note x: 865. which is centered on the sample *note idx: 865. On output, the array *note window: 865. will contain W_{idx}^{H,J}. The number of samples to the left and right of the sample *note idx: 865. are specified by *note H: 865. and *note J: 865. respectively. The parameter *note endtype: 865. specifies how windows near the ends of the input should be handled. The function returns the size of the window.

File: gsl-ref.info, Node: Accumulators, Next: Examples<17>, Prev: User-defined Moving Statistics, Up: Moving Window Statistics

23.11 Accumulators
==================

Many of the algorithms of this chapter are based on an accumulator design, which processes the input vector one sample at a time, updating calculations of the desired statistic for the current window.
Each accumulator is stored in the following structure: -- Type: gsl_movstat_accum Structure specifying accumulator for moving window statistics: typedef struct { size_t (* size) (const size_t n); int (* init) (const size_t n, void * vstate); int (* insert) (const double x, void * vstate); int (* delete) (void * vstate); int (* get) (void * params, double * result, const void * vstate); } gsl_movstat_accum; The structure contains function pointers responsible for performing different tasks for the accumulator. -- Member: size_t (*size) (const size_t n) This function returns the size of the workspace (in bytes) needed by the accumulator for a moving window of length ‘n’. -- Member: int (*init) (const size_t n, void *vstate) This function initializes the workspace ‘vstate’ for a moving window of length ‘n’. -- Member: int (*insert) (const double x, void *vstate) This function inserts a single sample ‘x’ into the accumulator, updating internal calculations of the desired statistic. If the accumulator is full (i.e. n samples have already been inserted), then the oldest sample is deleted from the accumulator. -- Member: int (*delete) (void *vstate) This function deletes the oldest sample from the accumulator, updating internal calculations of the desired statistic. -- Member: int (*get) (void *params, double *result, const void *vstate) This function stores the desired statistic for the current window in ‘result’. The input ‘params’ specifies optional parameters for calculating the statistic. The following accumulators of type *note gsl_movstat_accum: 86b. are defined by GSL to perform moving window statistics calculations. -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_min -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_max -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_minmax These accumulators calculate moving window minimum/maximums efficiently, using the algorithm of D. Lemire. -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_mean -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_sd -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_variance These accumulators calculate the moving window mean, standard deviation, and variance, using the algorithm of B. P. Welford. -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_median This accumulator calculates the moving window median using the min/max heap algorithm of Härdle and Steiger. -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_Sn -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_Qn These accumulators calculate the moving window S_n and Q_n statistics developed by Croux and Rousseeuw. -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_sum This accumulator calculates the moving window sum. -- Variable: *note gsl_movstat_accum: 86b. *gsl_movstat_accum_qqr This accumulator calculates the moving window q-quantile range.  File: gsl-ref.info, Node: Examples<17>, Next: References and Further Reading<17>, Prev: Accumulators, Up: Moving Window Statistics 23.12 Examples ============== * Menu: * Example 1:: * Example 2; Robust Scale: Example 2 Robust Scale. * Example 3; User-defined Moving Window: Example 3 User-defined Moving Window.  File: gsl-ref.info, Node: Example 1, Next: Example 2 Robust Scale, Up: Examples<17> 23.12.1 Example 1 ----------------- The following example program computes the moving mean, minimum and maximum of a noisy sinusoid signal of length N = 500 with a symmetric moving window of size K = 11. 
[gsl-ref-figures/movstat1]
Figure: Original signal time series (gray) with moving mean (green), moving minimum (blue), and moving maximum (orange).

The program is given below.

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_movstat.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_vector.h>

int main(void)
{
  const size_t N = 500; /* length of time series */
  const size_t K = 11;  /* window size */
  gsl_movstat_workspace * w = gsl_movstat_alloc(K);
  gsl_vector *x = gsl_vector_alloc(N);
  gsl_vector *xmean = gsl_vector_alloc(N);
  gsl_vector *xmin = gsl_vector_alloc(N);
  gsl_vector *xmax = gsl_vector_alloc(N);
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  size_t i;

  for (i = 0; i < N; ++i)
    {
      double xi = cos(4.0 * M_PI * i / (double) N);
      double ei = gsl_ran_gaussian(r, 0.1);

      gsl_vector_set(x, i, xi + ei);
    }

  /* compute moving statistics */
  gsl_movstat_mean(GSL_MOVSTAT_END_PADVALUE, x, xmean, w);
  gsl_movstat_minmax(GSL_MOVSTAT_END_PADVALUE, x, xmin, xmax, w);

  /* print results */
  for (i = 0; i < N; ++i)
    {
      printf("%zu %f %f %f %f\n",
             i,
             gsl_vector_get(x, i),
             gsl_vector_get(xmean, i),
             gsl_vector_get(xmin, i),
             gsl_vector_get(xmax, i));
    }

  gsl_vector_free(x);
  gsl_vector_free(xmean);
  gsl_vector_free(xmin);
  gsl_vector_free(xmax);
  gsl_rng_free(r);
  gsl_movstat_free(w);

  return 0;
}

File: gsl-ref.info, Node: Example 2 Robust Scale, Next: Example 3 User-defined Moving Window, Prev: Example 1, Up: Examples<17>

23.12.2 Example 2: Robust Scale
-------------------------------

The following example program analyzes a time series of length N = 1000 composed of Gaussian random variates with zero mean whose standard deviation changes in a piecewise constant fashion as shown in the table below.

Sample Range   \sigma
---------------------
1-200          1.0
201-450        5.0
451-600        1.0
601-850        3.0
851-1000       5.0

Additionally, about 1% of the samples are perturbed to represent outliers by adding \pm 15 to the random Gaussian variate. The program calculates the moving statistics MAD, IQR, S_n, Q_n, and the standard deviation using a symmetric moving window of length K = 41. The results are shown in the figure below.

[gsl-ref-figures/movstat2]
Figure: Top: time series of piecewise constant variance. Bottom: scale estimates using a moving window; the true sigma value is in light blue, MAD in green, IQR in red, S_n in yellow, and Q_n in dark blue. The moving standard deviation is shown in gray.

The robust statistics follow the true standard deviation piecewise changes well, without being influenced by the outliers. The moving standard deviation (gray curve) is heavily influenced by the presence of the outliers. The program is given below.

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_movstat.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_vector.h>

int main(void)
{
  const size_t N = 1000;                                 /* length of time series */
  const double sigma[] = { 1.0, 5.0, 1.0, 3.0, 5.0 };    /* standard deviations */
  const size_t N_sigma[] = { 200, 450, 600, 850, 1000 }; /* samples where sigma changes */
  const size_t K = 41;                                   /* window size */
  gsl_vector *x = gsl_vector_alloc(N);
  gsl_vector *xmedian = gsl_vector_alloc(N);
  gsl_vector *xmad = gsl_vector_alloc(N);
  gsl_vector *xiqr = gsl_vector_alloc(N);
  gsl_vector *xSn = gsl_vector_alloc(N);
  gsl_vector *xQn = gsl_vector_alloc(N);
  gsl_vector *xsd = gsl_vector_alloc(N);
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  gsl_movstat_workspace * w = gsl_movstat_alloc(K);
  size_t idx = 0;
  size_t i;

  for (i = 0; i < N; ++i)
    {
      double gi = gsl_ran_gaussian(r, sigma[idx]);
      double u = gsl_rng_uniform(r);
      double outlier = (u < 0.01) ? 15.0*GSL_SIGN(gi) : 0.0;
      double xi = gi + outlier;

      gsl_vector_set(x, i, xi);

      if (i == N_sigma[idx] - 1)
        ++idx;
    }

  /* compute moving statistics */
  gsl_movstat_mad(GSL_MOVSTAT_END_TRUNCATE, x, xmedian, xmad, w);
  gsl_movstat_qqr(GSL_MOVSTAT_END_TRUNCATE, x, 0.25, xiqr, w);
  gsl_movstat_Sn(GSL_MOVSTAT_END_TRUNCATE, x, xSn, w);
  gsl_movstat_Qn(GSL_MOVSTAT_END_TRUNCATE, x, xQn, w);
  gsl_movstat_sd(GSL_MOVSTAT_END_TRUNCATE, x, xsd, w);

  /* scale IQR by factor to approximate standard deviation */
  gsl_vector_scale(xiqr, 0.7413);

  /* print results */
  idx = 0;
  for (i = 0; i < N; ++i)
    {
      printf("%zu %f %f %f %f %f %f %f\n",
             i,
             gsl_vector_get(x, i),
             sigma[idx],
             gsl_vector_get(xmad, i),
             gsl_vector_get(xiqr, i),
             gsl_vector_get(xSn, i),
             gsl_vector_get(xQn, i),
             gsl_vector_get(xsd, i));

      if (i == N_sigma[idx] - 1)
        ++idx;
    }

  gsl_vector_free(x);
  gsl_vector_free(xmedian);
  gsl_vector_free(xmad);
  gsl_vector_free(xiqr);
  gsl_vector_free(xSn);
  gsl_vector_free(xQn);
  gsl_vector_free(xsd);
  gsl_rng_free(r);
  gsl_movstat_free(w);

  return 0;
}

File: gsl-ref.info, Node: Example 3 User-defined Moving Window, Prev: Example 2 Robust Scale, Up: Examples<17>

23.12.3 Example 3: User-defined Moving Window
---------------------------------------------

This example program illustrates how a user can define their own moving window function to apply to an input vector. It constructs a random noisy time series of length N = 1000 with some outliers added. Then it applies a moving window trimmed mean to the time series with trim parameter \alpha = 0.1. The length of the moving window is K = 11, so the smallest and largest sample of each window is discarded prior to computing the mean. The results are shown in the figure below.

[gsl-ref-figures/movstat3]
Figure: Noisy time series data (black) with moving window trimmed mean (red)

The program is given below.

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_movstat.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_sort.h>
#include <gsl/gsl_statistics.h>

double
func(const size_t n, double x[], void * params)
{
  const double alpha = *(double *) params;

  gsl_sort(x, 1, n);

  return gsl_stats_trmean_from_sorted_data(alpha, x, 1, n);
}

int main(void)
{
  const size_t N = 1000;               /* length of time series */
  const size_t K = 11;                 /* window size */
  double alpha = 0.1;                  /* trimmed mean parameter */
  gsl_vector *x = gsl_vector_alloc(N); /* input vector */
  gsl_vector *y = gsl_vector_alloc(N); /* filtered output vector */
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  gsl_movstat_workspace *w = gsl_movstat_alloc(K);
  gsl_movstat_function F;
  size_t i;
  double sum = 0.0;

  /* generate input signal */
  for (i = 0; i < N; ++i)
    {
      double ui = gsl_ran_gaussian(r, 1.0);
      double outlier = (gsl_rng_uniform(r) < 0.01) ? 10.0*GSL_SIGN(ui) : 0.0;

      sum += ui;
      gsl_vector_set(x, i, sum + outlier);
    }

  /* apply moving window function */
  F.function = func;
  F.params = &alpha;
  gsl_movstat_apply(GSL_MOVSTAT_END_PADVALUE, &F, x, y, w);

  /* print results */
  for (i = 0; i < N; ++i)
    {
      double xi = gsl_vector_get(x, i);
      double yi = gsl_vector_get(y, i);

      printf("%f %f\n", xi, yi);
    }

  gsl_vector_free(x);
  gsl_vector_free(y);
  gsl_rng_free(r);
  gsl_movstat_free(w);

  return 0;
}

File: gsl-ref.info, Node: References and Further Reading<17>, Prev: Examples<17>, Up: Moving Window Statistics

23.13 References and Further Reading
====================================

The following publications are relevant to the algorithms described in this chapter,

   * W. Härdle and W. Steiger, `Optimal Median Smoothing', Appl. Statist., 44 (2), 1995.

   * D.
Lemire, `Streaming Maximum-Minimum Filter Using No More than Three Comparisons per Element', Nordic Journal of Computing, 13 (4), 2006 (‘https://arxiv.org/abs/cs/0610046’). * B. P. Welford, `Note on a method for calculating corrected sums of squares and products', Technometrics, 4 (3), 1962.  File: gsl-ref.info, Node: Digital Filtering, Next: Histograms, Prev: Moving Window Statistics, Up: Top 24 Digital Filtering ******************** * Menu: * Introduction: Introduction<5>. * Handling Endpoints: Handling Endpoints<2>. * Linear Digital Filters:: * Nonlinear Digital Filters:: * Examples: Examples<18>. * References and Further Reading: References and Further Reading<18>.  File: gsl-ref.info, Node: Introduction<5>, Next: Handling Endpoints<2>, Up: Digital Filtering 24.1 Introduction ================= The filters discussed in this chapter are based on the following moving data window which is centered on i-th sample: W_i^H = { x_{i-H}, ..., x_i, ..., x_{i+H} } Here, H is a non-negative integer called the `window half-length', which represents the number of samples before and after sample i. The total window length is K = 2 H + 1.  File: gsl-ref.info, Node: Handling Endpoints<2>, Next: Linear Digital Filters, Prev: Introduction<5>, Up: Digital Filtering 24.2 Handling Endpoints ======================= When processing samples near the ends of the input signal, there will not be enough samples to fill the window W_i^H defined above. Therefore the user must specify how to construct the windows near the end points. This is done by passing an input argument of type *note gsl_filter_end_t: 888.: -- Type: gsl_filter_end_t This data type specifies how to construct windows near end points and can be selected from the following choices: -- Macro: GSL_FILTER_END_PADZERO With this option, a full window of length K will be constructed by inserting zeros into the window near the signal end points. Effectively, the input signal is modified to x~ = { 0, ..., 0, x_1, x_2, ..., x_{n-1}, x_n, 0, ..., 0 } to ensure a well-defined window for all x_i. -- Macro: GSL_FILTER_END_PADVALUE With this option, a full window of length K will be constructed by padding the window with the first and last sample in the input signal. Effectively, the input signal is modified to x~ = { x_1, ..., x_1, x_1, x_2, ..., x_{n-1}, x_n, x_n, ..., x_n } -- Macro: GSL_FILTER_END_TRUNCATE With this option, no padding is performed, and the windows are simply truncated as the end points are approached.  File: gsl-ref.info, Node: Linear Digital Filters, Next: Nonlinear Digital Filters, Prev: Handling Endpoints<2>, Up: Digital Filtering 24.3 Linear Digital Filters =========================== * Menu: * Gaussian Filter::  File: gsl-ref.info, Node: Gaussian Filter, Up: Linear Digital Filters 24.3.1 Gaussian Filter ---------------------- The Gaussian filter convolves the input signal with a Gaussian kernel or window. This filter is often used as a smoothing or noise reduction filter. The Gaussian kernel is defined by G(k) = e^{-1/2 ( \alpha k/((K-1)/2) )^2} = e^{-k^2/2\sigma^2} for -(K-1)/2 \le k \le (K-1)/2, and K is the size of the kernel. The parameter \alpha specifies the number of standard deviations \sigma desired in the kernel. So for example setting \alpha = 3 would define a Gaussian window of length K which spans \pm 3 \sigma. 
It is often more convenient to specify the parameter \alpha rather than the standard deviation \sigma when constructing the kernel, since a fixed value of \alpha would correspond to the same shape of Gaussian regardless of the size K. The appropriate value of the standard deviation depends on K and is related to \alpha as

\sigma = (K - 1)/(2 \alpha)

The routines below accept \alpha as an input argument instead of \sigma.

The Gaussian filter offers a convenient way of differentiating and smoothing an input signal in a single pass. Using the derivative property of a convolution,

d/dt ( G * x ) = dG/dt * x

the input signal x(t) can be smoothed and differentiated at the same time by convolution with a derivative Gaussian kernel, which can be readily computed from the analytic expression above. The same principle applies to higher order derivatives.

 -- Function: gsl_filter_gaussian_workspace *gsl_filter_gaussian_alloc (const size_t K)
     This function initializes a workspace for Gaussian filtering using a kernel of size *note K: 88e. Here, H = K / 2. If K is even, it is rounded up to the next odd integer to ensure a symmetric window. The size of the workspace is O(K).

 -- Function: void gsl_filter_gaussian_free (gsl_filter_gaussian_workspace *w)
     This function frees the memory associated with *note w: 88f.

 -- Function: int gsl_filter_gaussian (const gsl_filter_end_t endtype, const double alpha, const size_t order, const gsl_vector *x, gsl_vector *y, gsl_filter_gaussian_workspace *w)
     This function applies a Gaussian filter parameterized by *note alpha: 890. to the input vector *note x: 890, storing the output in *note y: 890. The derivative order is specified by *note order: 890, with ‘0’ corresponding to a Gaussian, ‘1’ corresponding to a first derivative Gaussian, and so on. The parameter *note endtype: 890. specifies how the signal end points are handled. It is allowed for *note x: 890. = *note y: 890. for an in-place filter.

 -- Function: int gsl_filter_gaussian_kernel (const double alpha, const size_t order, const int normalize, gsl_vector *kernel)
     This function constructs a Gaussian kernel parameterized by *note alpha: 891. and stores the output in *note kernel: 891. The parameter *note order: 891. specifies the derivative order, with ‘0’ corresponding to a Gaussian, ‘1’ corresponding to a first derivative Gaussian, and so on. If *note normalize: 891. is set to ‘1’, then the kernel will be normalized to sum to one on output. If *note normalize: 891. is set to ‘0’, no normalization is performed.

File: gsl-ref.info, Node: Nonlinear Digital Filters, Next: Examples<18>, Prev: Linear Digital Filters, Up: Digital Filtering

24.4 Nonlinear Digital Filters
==============================

The nonlinear digital filters described below are based on the window median, which is given by

m_i = median { W_i^H } = median { x_{i-H}, ..., x_i, ..., x_{i+H} }

The median is considered robust to local outliers, unlike the mean. Median filters can preserve sharp edges while at the same time removing signal noise, and are used in a wide range of applications.

* Menu:

* Standard Median Filter::
* Recursive Median Filter::
* Impulse Detection Filter::

File: gsl-ref.info, Node: Standard Median Filter, Next: Recursive Median Filter, Up: Nonlinear Digital Filters

24.4.1 Standard Median Filter
-----------------------------

The `standard median filter' (SMF) simply replaces the sample x_i by the median m_i of the window W_i^H:

y_i = m_i

This filter has one tuning parameter given by H.
The standard median filter is considered highly resistant to local outliers and local noise in the data sequence \{x_i\}. -- Function: gsl_filter_median_workspace *gsl_filter_median_alloc (const size_t K) This function initializes a workspace for standard median filtering using a symmetric centered moving window of size *note K: 894. Here, H = K / 2. If K is even, it is rounded up to the next odd integer to ensure a symmetric window. The size of the workspace is O(7K). -- Function: void gsl_filter_median_free (gsl_filter_median_workspace *w) This function frees the memory associated with *note w: 895. -- Function: int gsl_filter_median (const gsl_filter_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_filter_median_workspace *w) This function applies a standard median filter to the input *note x: 896, storing the output in *note y: 896. The parameter *note endtype: 896. specifies how the signal end points are handled. It is allowed to have *note x: 896. = *note y: 896. for an in-place filter.  File: gsl-ref.info, Node: Recursive Median Filter, Next: Impulse Detection Filter, Prev: Standard Median Filter, Up: Nonlinear Digital Filters 24.4.2 Recursive Median Filter ------------------------------ The `recursive median filter' (RMF) is a modification of the SMF to include previous filter outputs in the window before computing the median. The filter’s response is y_i = median ( y_{i-H}, ..., y_{i-1}, x_i, x_{i+1}, ..., x_{i+H} ) Sometimes, the SMF must be applied several times in a row to achieve adequate smoothing (i.e. a cascade filter). The RMF, on the other hand, converges to a `root sequence' in one pass, and can sometimes provide a smoother result than several passes of the SMF. A root sequence is an input which is left unchanged by the filter. So there is no need to apply a recursive median filter twice to an input vector. -- Function: gsl_filter_rmedian_workspace *gsl_filter_rmedian_alloc (const size_t K) This function initializes a workspace for recursive median filtering using a symmetric centered moving window of size *note K: 898. Here, H = K / 2. If K is even, it is rounded up to the next odd integer to ensure a symmetric window. The size of the workspace is O(K). -- Function: void gsl_filter_rmedian_free (gsl_filter_rmedian_workspace *w) This function frees the memory associated with *note w: 899. -- Function: int gsl_filter_rmedian (const gsl_filter_end_t endtype, const gsl_vector *x, gsl_vector *y, gsl_filter_rmedian_workspace *w) This function applies a recursive median filter to the input *note x: 89a, storing the output in *note y: 89a. The parameter *note endtype: 89a. specifies how the signal end points are handled. It is allowed to have *note x: 89a. = *note y: 89a. for an in-place filter.  File: gsl-ref.info, Node: Impulse Detection Filter, Prev: Recursive Median Filter, Up: Nonlinear Digital Filters 24.4.3 Impulse Detection Filter ------------------------------- Impulsive noise is characterized by short sequences of data points distinct from those in the surrounding neighborhood. This section describes a powerful class of filters, also known as `impulse rejection filters' and `decision-based filters', designed to detect and remove such outliers from data. 
The filter’s response is given by y_i = { x_i, |x_i - m_i| <= t * S_i { m_i, |x_i - m_i| > t * S_i where m_i is the median value of the window W_i^H, S_i is a robust estimate of the scatter or dispersion for the window W_i^H, and t is a tuning parameter specifying the number of scale factors needed to determine that a point is an outlier. The main idea is that the median m_i will be unaffected by a small number of outliers in the window, and so a given sample x_i is tested to determine how far away it is from the median in terms of the local scale estimate S_i. Samples which are more than t scale estimates away from the median are labeled as outliers and replaced by the window median m_i. Samples which are less than t scale estimates from the median are left unchanged by the filter. Note that when t = 0, the impulse detection filter is equivalent to the standard median filter. When t \rightarrow \infty, it becomes the identity filter. This means the impulse detection filter can be viewed as a “less aggressive” version of the standard median filter, becoming less aggressive as t is increased. Note that this filter modifies only samples identified as outliers, while the standard median filter changes all samples to the local median, regardless of whether they are outliers. This fact, plus the additional flexibility offered by the additional tuning parameter t can make the impulse detection filter a better choice for some applications. It is important to have a robust and accurate scale estimate S_i in order to detect impulse outliers even in the presence of noise. The window standard deviation is not typically a good choice, as it can be significantly perturbed by the presence of even one outlier. GSL offers the following choices (specified by a parameter of type *note gsl_filter_scale_t: 89c.) for computing the scale estimate S_i, all of which are robust to the presence of impulse outliers. -- Type: gsl_filter_scale_t This type specifies how the scale estimate S_i of the window W_i^H is calculated. -- Macro: GSL_FILTER_SCALE_MAD This option specifies the median absolute deviation (MAD) scale estimate, defined by S_i = 1.4826 median { | W_i^H - m_i | } This choice of scale estimate is also known as the `Hampel filter' in the statistical literature. See *note here: 815. for more information. -- Macro: GSL_FILTER_SCALE_IQR This option specifies the interquartile range (IQR) scale estimate, defined as the difference between the 75th and 25th percentiles of the window W_i^H, S_i = 0.7413 ( Q_{0.75} - Q_{0.25} ) where Q_p is the p-quantile of the window W_i^H. The idea is to throw away the largest and smallest 25% of the window samples (where the outliers would be), and estimate a scale from the middle 50%. The factor 0.7413 provides an unbiased estimate of the standard deviation for Gaussian data. -- Macro: GSL_FILTER_SCALE_SN This option specifies the so-called S_n statistic proposed by Croux and Rousseeuw. See *note here: 819. for more information. -- Macro: GSL_FILTER_SCALE_QN This option specifies the so-called Q_n statistic proposed by Croux and Rousseeuw. See *note here: 81d. for more information. Warning: While the scale estimates defined above are much less sensitive to outliers than the standard deviation, they can suffer from an effect called `implosion'. The standard deviation of a window W_i^H will be zero if and only if all samples in the window are equal. However, it is possible for the MAD of a window to be zero even if all the samples in the window are not equal. 
For example, if K/2 + 1 or more of the K samples in the window are equal to some value x^{*}, then the window median will be equal to x^{*}. Consequently, at least K/2 + 1 of the absolute deviations |x_j - x^{*}| will be zero, and so the MAD will be zero. In such a case, the Hampel filter will act like the standard median filter regardless of the value of t. Caution should also be exercised if dividing by S_i.

 -- Function: gsl_filter_impulse_workspace *gsl_filter_impulse_alloc (const size_t K)
     This function initializes a workspace for impulse detection filtering using a symmetric moving window of size *note K: 8a1. Here, H = K / 2. If K is even, it is rounded up to the next odd integer to ensure a symmetric window. The size of the workspace is O(6K).

 -- Function: void gsl_filter_impulse_free (gsl_filter_impulse_workspace *w)
     This function frees the memory associated with *note w: 8a2.

 -- Function: int gsl_filter_impulse (const gsl_filter_end_t endtype, const gsl_filter_scale_t scale_type, const double t, const gsl_vector *x, gsl_vector *y, gsl_vector *xmedian, gsl_vector *xsigma, size_t *noutlier, gsl_vector_int *ioutlier, gsl_filter_impulse_workspace *w)
     These functions apply an impulse detection filter to the input vector *note x: 8a3, storing the filtered output in *note y: 8a3. The tuning parameter t is provided in *note t: 8a3. The window medians m_i are stored in *note xmedian: 8a3. and the S_i are stored in *note xsigma: 8a3. on output. The number of outliers detected is stored in *note noutlier: 8a3. on output, while the locations of flagged outliers are stored in the boolean array *note ioutlier: 8a3. The input *note ioutlier: 8a3. may be ‘NULL’ if not desired. It is allowed to have *note x: 8a3. = *note y: 8a3. for an in-place filter.

File: gsl-ref.info, Node: Examples<18>, Next: References and Further Reading<18>, Prev: Nonlinear Digital Filters, Up: Digital Filtering

24.5 Examples
=============

* Menu:

* Gaussian Example 1::
* Gaussian Example 2::
* Square Wave Signal Example::
* Impulse Detection Example::

File: gsl-ref.info, Node: Gaussian Example 1, Next: Gaussian Example 2, Up: Examples<18>

24.5.1 Gaussian Example 1
-------------------------

This example program illustrates the Gaussian filter applied to smoothing a time series of length N = 500 with a kernel size of K = 51. Three filters are applied with parameters \alpha = 0.5, 3, 10. The results are shown in the figure below.

[gsl-ref-figures/gaussfilt]
Figure: Top panel: Gaussian kernels (unnormalized) for \alpha = 0.5, 3, 10. Bottom panel: Time series (gray) with Gaussian filter output for same \alpha values.

We see that the filter corresponding to \alpha = 0.5 applies the most smoothing, while \alpha = 10 corresponds to the least amount of smoothing. The program is given below.
#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_filter.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_vector.h>

int main(void)
{
  const size_t N = 500;                        /* length of time series */
  const size_t K = 51;                         /* window size */
  const double alpha[3] = { 0.5, 3.0, 10.0 };  /* alpha values */

  gsl_vector *x = gsl_vector_alloc(N);         /* input vector */
  gsl_vector *y1 = gsl_vector_alloc(N);        /* filtered output vector for alpha1 */
  gsl_vector *y2 = gsl_vector_alloc(N);        /* filtered output vector for alpha2 */
  gsl_vector *y3 = gsl_vector_alloc(N);        /* filtered output vector for alpha3 */
  gsl_vector *k1 = gsl_vector_alloc(K);        /* Gaussian kernel for alpha1 */
  gsl_vector *k2 = gsl_vector_alloc(K);        /* Gaussian kernel for alpha2 */
  gsl_vector *k3 = gsl_vector_alloc(K);        /* Gaussian kernel for alpha3 */
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  gsl_filter_gaussian_workspace *gauss_p = gsl_filter_gaussian_alloc(K);
  size_t i;
  double sum = 0.0;

  /* generate input signal */
  for (i = 0; i < N; ++i)
    {
      double ui = gsl_ran_gaussian(r, 1.0);
      sum += ui;
      gsl_vector_set(x, i, sum);
    }

  /* compute kernels without normalization */
  gsl_filter_gaussian_kernel(alpha[0], 0, 0, k1);
  gsl_filter_gaussian_kernel(alpha[1], 0, 0, k2);
  gsl_filter_gaussian_kernel(alpha[2], 0, 0, k3);

  /* apply filters */
  gsl_filter_gaussian(GSL_FILTER_END_PADVALUE, alpha[0], 0, x, y1, gauss_p);
  gsl_filter_gaussian(GSL_FILTER_END_PADVALUE, alpha[1], 0, x, y2, gauss_p);
  gsl_filter_gaussian(GSL_FILTER_END_PADVALUE, alpha[2], 0, x, y3, gauss_p);

  /* print kernels */
  for (i = 0; i < K; ++i)
    {
      double k1i = gsl_vector_get(k1, i);
      double k2i = gsl_vector_get(k2, i);
      double k3i = gsl_vector_get(k3, i);

      printf("%e %e %e\n", k1i, k2i, k3i);
    }

  printf("\n\n");

  /* print filter results */
  for (i = 0; i < N; ++i)
    {
      double xi = gsl_vector_get(x, i);
      double y1i = gsl_vector_get(y1, i);
      double y2i = gsl_vector_get(y2, i);
      double y3i = gsl_vector_get(y3, i);

      printf("%.12e %.12e %.12e %.12e\n", xi, y1i, y2i, y3i);
    }

  gsl_vector_free(x);
  gsl_vector_free(y1);
  gsl_vector_free(y2);
  gsl_vector_free(y3);
  gsl_vector_free(k1);
  gsl_vector_free(k2);
  gsl_vector_free(k3);
  gsl_rng_free(r);
  gsl_filter_gaussian_free(gauss_p);

  return 0;
}

File: gsl-ref.info, Node: Gaussian Example 2, Next: Square Wave Signal Example, Prev: Gaussian Example 1, Up: Examples<18>

24.5.2 Gaussian Example 2
-------------------------

A common application of the Gaussian filter is to detect edges, or sudden jumps, in a noisy input signal. It is used both for 1D edge detection in time series, as well as 2D edge detection in images. Here we will examine a noisy time series of length N = 1000 with a single edge. The input signal is defined as

x(n) = e(n) + { 0,   n <= N/2
              { 0.5, n > N/2

where e(n) is Gaussian random noise. The program smooths the input signal with order 0, 1, and 2 Gaussian filters of length K = 61 with \alpha = 3. For comparison, the program also computes finite differences of the input signal without smoothing. The results are shown in the figure below.

[gsl-ref-figures/gaussfilt2]
Figure: Top row: original input signal x(n) (black) with Gaussian smoothed signal in red. Second row: First finite differences of input signal. Third row: Input signal smoothed with a first order Gaussian filter. Fourth row: Input signal smoothed with a second order Gaussian filter.

The finite difference approximation of the first derivative (second row) shows the common problem with differentiating a noisy signal. The noise is amplified and makes it extremely difficult to detect the sharp gradient at sample 500.
The third row shows the first order Gaussian smoothed signal with a clear peak at the location of the edge. Alternatively, one could examine the second order Gaussian smoothed signal (fourth row) and look for zero crossings to determine the edge location. The program is given below.

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_filter.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_vector.h>

int
main(void)
{
  const size_t N = 1000;     /* length of time series */
  const size_t K = 61;       /* window size */
  const double alpha = 3.0;  /* Gaussian kernel has +/- 3 standard deviations */

  gsl_vector *x = gsl_vector_alloc(N);    /* input vector */
  gsl_vector *y = gsl_vector_alloc(N);    /* filtered output vector */
  gsl_vector *dy = gsl_vector_alloc(N);   /* first derivative filtered vector */
  gsl_vector *d2y = gsl_vector_alloc(N);  /* second derivative filtered vector */
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  gsl_filter_gaussian_workspace *gauss_p = gsl_filter_gaussian_alloc(K);
  size_t i;

  /* generate input signal */
  for (i = 0; i < N; ++i)
    {
      double xi = (i > N / 2) ? 0.5 : 0.0;
      double ei = gsl_ran_gaussian(r, 0.1);

      gsl_vector_set(x, i, xi + ei);
    }

  /* apply filters */
  gsl_filter_gaussian(GSL_FILTER_END_PADVALUE, alpha, 0, x, y, gauss_p);
  gsl_filter_gaussian(GSL_FILTER_END_PADVALUE, alpha, 1, x, dy, gauss_p);
  gsl_filter_gaussian(GSL_FILTER_END_PADVALUE, alpha, 2, x, d2y, gauss_p);

  /* print results */
  for (i = 0; i < N; ++i)
    {
      double xi = gsl_vector_get(x, i);
      double yi = gsl_vector_get(y, i);
      double dyi = gsl_vector_get(dy, i);
      double d2yi = gsl_vector_get(d2y, i);
      double dxi;

      /* compute finite difference of x vector */
      if (i == 0)
        dxi = gsl_vector_get(x, i + 1) - xi;
      else if (i == N - 1)
        dxi = gsl_vector_get(x, i) - gsl_vector_get(x, i - 1);
      else
        dxi = 0.5 * (gsl_vector_get(x, i + 1) - gsl_vector_get(x, i - 1));

      printf("%.12e %.12e %.12e %.12e %.12e\n", xi, yi, dxi, dyi, d2yi);
    }

  gsl_vector_free(x);
  gsl_vector_free(y);
  gsl_vector_free(dy);
  gsl_vector_free(d2y);
  gsl_rng_free(r);
  gsl_filter_gaussian_free(gauss_p);

  return 0;
}


File: gsl-ref.info, Node: Square Wave Signal Example, Next: Impulse Detection Example, Prev: Gaussian Example 2, Up: Examples<18>

24.5.3 Square Wave Signal Example
---------------------------------

The following example program illustrates the median filters on a noisy square wave signal. Median filters are well known for preserving sharp edges in the input signal while reducing noise. The program constructs a 5 Hz square wave signal with Gaussian noise added. The signal is then filtered with a standard median filter and a recursive median filter, both using a symmetric window of length K = 7. The results are shown in the figure below.

[gsl-ref-figures/filt_edge]

Figure: The original time series is in gray, the standard median filter output is in green, and the recursive median filter output is in red.

Both filters preserve the sharp signal edges while reducing the noise. The recursive median filter achieves a smoother result than the standard median filter. The “blocky” nature of the output is characteristic of all median filters. The program is given below.
#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_filter.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_vector.h>

int
main(void)
{
  const size_t N = 1000;  /* length of time series */
  const size_t K = 7;     /* window size */
  const double f = 5.0;   /* frequency of square wave in Hz */

  gsl_filter_median_workspace *median_p = gsl_filter_median_alloc(K);
  gsl_filter_rmedian_workspace *rmedian_p = gsl_filter_rmedian_alloc(K);
  gsl_vector *t = gsl_vector_alloc(N);          /* time */
  gsl_vector *x = gsl_vector_alloc(N);          /* input vector */
  gsl_vector *y_median = gsl_vector_alloc(N);   /* median filtered output */
  gsl_vector *y_rmedian = gsl_vector_alloc(N);  /* recursive median filtered output */
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  size_t i;

  /* generate input signal */
  for (i = 0; i < N; ++i)
    {
      double ti = (double) i / (N - 1.0);
      double tmp = sin(2.0 * M_PI * f * ti);
      double xi = (tmp >= 0.0) ? 1.0 : -1.0;
      double ei = gsl_ran_gaussian(r, 0.1);

      gsl_vector_set(t, i, ti);
      gsl_vector_set(x, i, xi + ei);
    }

  gsl_filter_median(GSL_FILTER_END_PADVALUE, x, y_median, median_p);
  gsl_filter_rmedian(GSL_FILTER_END_PADVALUE, x, y_rmedian, rmedian_p);

  /* print results */
  for (i = 0; i < N; ++i)
    {
      double ti = gsl_vector_get(t, i);
      double xi = gsl_vector_get(x, i);
      double medi = gsl_vector_get(y_median, i);
      double rmedi = gsl_vector_get(y_rmedian, i);

      printf("%f %f %f %f\n", ti, xi, medi, rmedi);
    }

  gsl_vector_free(t);
  gsl_vector_free(x);
  gsl_vector_free(y_median);
  gsl_vector_free(y_rmedian);
  gsl_rng_free(r);
  gsl_filter_median_free(median_p);
  gsl_filter_rmedian_free(rmedian_p);

  return 0;
}


File: gsl-ref.info, Node: Impulse Detection Example, Prev: Square Wave Signal Example, Up: Examples<18>

24.5.4 Impulse Detection Example
--------------------------------

The following example program illustrates the impulse detection filter. First, it constructs a sinusoid signal of length N = 1000 with Gaussian noise added. Then, about 1% of the data are perturbed to represent large outliers. An impulse detecting filter is applied with a window size K = 25 and tuning parameter t = 4, using the Q_n statistic as the robust measure of scale. The results are plotted in the figure below.

[gsl-ref-figures/impulse]

Figure: The original time series is in blue, the filter output is in green, and the upper and lower intervals for detecting outliers are in red and yellow respectively. Detected outliers are marked with squares.

The program is given below.

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_filter.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_vector.h>

int
main(void)
{
  const size_t N = 1000;  /* length of time series */
  const size_t K = 25;    /* window size */
  const double t = 4.0;   /* number of scale factors for outlier detection */

  gsl_vector *x = gsl_vector_alloc(N);        /* input vector */
  gsl_vector *y = gsl_vector_alloc(N);        /* output (filtered) vector */
  gsl_vector *xmedian = gsl_vector_alloc(N);  /* window medians */
  gsl_vector *xsigma = gsl_vector_alloc(N);   /* window scale estimates */
  gsl_vector_int *ioutlier = gsl_vector_int_alloc(N);  /* outlier detected? */
  gsl_filter_impulse_workspace * w = gsl_filter_impulse_alloc(K);
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  size_t noutlier;
  size_t i;

  /* generate input signal */
  for (i = 0; i < N; ++i)
    {
      double xi = 10.0 * sin(2.0 * M_PI * i / (double) N);
      double ei = gsl_ran_gaussian(r, 2.0);
      double u = gsl_rng_uniform(r);
      double outlier = (u < 0.01) ?
                       15.0 * GSL_SIGN(ei) : 0.0;

      gsl_vector_set(x, i, xi + ei + outlier);
    }

  /* apply impulse detection filter */
  gsl_filter_impulse(GSL_FILTER_END_TRUNCATE, GSL_FILTER_SCALE_QN, t, x, y,
                     xmedian, xsigma, &noutlier, ioutlier, w);

  /* print results */
  for (i = 0; i < N; ++i)
    {
      double xi = gsl_vector_get(x, i);
      double yi = gsl_vector_get(y, i);
      double xmedi = gsl_vector_get(xmedian, i);
      double xsigmai = gsl_vector_get(xsigma, i);
      int outlier = gsl_vector_int_get(ioutlier, i);

      printf("%zu %f %f %f %f %d\n",
             i, xi, yi, xmedi + t * xsigmai, xmedi - t * xsigmai, outlier);
    }

  gsl_vector_free(x);
  gsl_vector_free(y);
  gsl_vector_free(xmedian);
  gsl_vector_free(xsigma);
  gsl_vector_int_free(ioutlier);
  gsl_filter_impulse_free(w);
  gsl_rng_free(r);

  return 0;
}


File: gsl-ref.info, Node: References and Further Reading<18>, Prev: Examples<18>, Up: Digital Filtering

24.6 References and Further Reading
===================================

The following publications are relevant to the algorithms described in this chapter,

   * F. J. Harris, ‘On the use of windows for harmonic analysis with the discrete Fourier transform’, Proceedings of the IEEE, 66 (1), 1978.

   * S-J. Ko, Y-H. Lee, and A. T. Fam, ‘Efficient implementation of one-dimensional recursive median filters’, IEEE Transactions on Circuits and Systems, 37 (11), 1990, 1447-1450.

   * R. K. Pearson and M. Gabbouj, ‘Nonlinear Digital Filtering with Python: An Introduction’, CRC Press, 2015.


File: gsl-ref.info, Node: Histograms, Next: N-tuples, Prev: Digital Filtering, Up: Top

25 Histograms
*************

This chapter describes functions for creating histograms. Histograms provide a convenient way of summarizing the distribution of a set of data. A histogram consists of a set of `bins' which count the number of events falling into a given range of a continuous variable x. In GSL the bins of a histogram contain floating-point numbers, so they can be used to record both integer and non-integer distributions. The bins can use arbitrary sets of ranges (uniformly spaced bins are the default). Both one and two-dimensional histograms are supported.

Once a histogram has been created it can also be converted into a probability distribution function. The library provides efficient routines for selecting random samples from probability distributions. This can be useful for generating simulations based on real data.

The functions are declared in the header files ‘gsl_histogram.h’ and ‘gsl_histogram2d.h’.

* Menu:

* The histogram struct::
* Histogram allocation::
* Copying Histograms::
* Updating and accessing histogram elements::
* Searching histogram ranges::
* Histogram Statistics::
* Histogram Operations::
* Reading and writing histograms::
* Resampling from histograms::
* The histogram probability distribution struct::
* Example programs for histograms::
* Two dimensional histograms::
* The 2D histogram struct::
* 2D Histogram allocation::
* Copying 2D Histograms::
* Updating and accessing 2D histogram elements::
* Searching 2D histogram ranges::
* 2D Histogram Statistics::
* 2D Histogram Operations::
* Reading and writing 2D histograms::
* Resampling from 2D histograms::
* Example programs for 2D histograms::


File: gsl-ref.info, Node: The histogram struct, Next: Histogram allocation, Up: Histograms

25.1 The histogram struct
=========================

A histogram is defined by the following struct,

 -- Type: gsl_histogram

     ‘size_t n’
          This is the number of histogram bins.

     ‘double * range’
          The ranges of the bins are stored in an array of ‘n+1’ elements pointed to by ‘range’.
     ‘double * bin’
          The counts for each bin are stored in an array of ‘n’ elements pointed to by ‘bin’. The bins are floating-point numbers, so you can increment them by non-integer values if necessary.

The range for ‘bin[i]’ is given by ‘range[i]’ to ‘range[i+1]’. For n bins there are ‘n+1’ entries in the array ‘range’. Each bin is inclusive at the lower end and exclusive at the upper end. Mathematically this means that the bins are defined by the following inequality,

     bin[i] corresponds to range[i] <= x < range[i+1]

Here is a diagram of the correspondence between ranges and bins on the number-line for x:

        [ bin[0] )[ bin[1] )[ bin[2] )[ bin[3] )[ bin[4] )
     ---|---------|---------|---------|---------|---------|---  x
       r[0]      r[1]      r[2]      r[3]      r[4]      r[5]

In this picture the values of the ‘range’ array are denoted by r. On the left-hand side of each bin the square bracket ‘[’ denotes an inclusive lower bound (r \le x), and the round parentheses ‘)’ on the right-hand side denote an exclusive upper bound (x < r). Thus any samples which fall on the upper end of the histogram are excluded. If you want to include this value for the last bin you will need to add an extra bin to your histogram.

The *note gsl_histogram: 8b1. struct and its associated functions are defined in the header file ‘gsl_histogram.h’.


File: gsl-ref.info, Node: Histogram allocation, Next: Copying Histograms, Prev: The histogram struct, Up: Histograms

25.2 Histogram allocation
=========================

The functions for allocating memory to a histogram follow the style of ‘malloc()’ and ‘free()’. In addition they also perform their own error checking. If there is insufficient memory available to allocate a histogram then the functions call the error handler (with an error number of *note GSL_ENOMEM: 2a.) in addition to returning a null pointer. Thus if you use the library error handler to abort your program then it isn’t necessary to check every histogram ‘alloc’.

 -- Function: *note gsl_histogram: 8b1. *gsl_histogram_alloc (size_t n)

     This function allocates memory for a histogram with *note n: 8b3. bins, and returns a pointer to a newly created *note gsl_histogram: 8b1. struct. If insufficient memory is available a null pointer is returned and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. The bins and ranges are not initialized, and should be prepared using one of the range-setting functions below in order to make the histogram ready for use.

 -- Function: int gsl_histogram_set_ranges (gsl_histogram *h, const double range[], size_t size)

     This function sets the ranges of the existing histogram *note h: 8b4. using the array *note range: 8b4. of size *note size: 8b4. The values of the histogram bins are reset to zero. The *note range: 8b4. array should contain the desired bin limits. The ranges can be arbitrary, subject to the restriction that they are monotonically increasing.

     The following example shows how to create a histogram with logarithmic bins with ranges [1,10), [10,100) and [100,1000):

          gsl_histogram * h = gsl_histogram_alloc (3);

          /* bin[0] covers the range 1 <= x < 10 */
          /* bin[1] covers the range 10 <= x < 100 */
          /* bin[2] covers the range 100 <= x < 1000 */

          double range[4] = { 1.0, 10.0, 100.0, 1000.0 };

          gsl_histogram_set_ranges (h, range, 4);

     Note that the size of the *note range: 8b4. array should be defined to be one element bigger than the number of bins. The additional element is required for the upper value of the final bin.
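For larger numbers of bins the range array is usually generated in a loop rather than written out by hand. The following short sketch (added here for illustration, not part of the library examples) builds ten logarithmically spaced bins covering [1, 10^10), one decade per bin; the number of bins and the limits are arbitrary choices:

#include <math.h>
#include <gsl/gsl_histogram.h>

int
main (void)
{
  double range[11];   /* n + 1 entries for n = 10 bins */
  size_t i;
  gsl_histogram * h = gsl_histogram_alloc (10);

  /* range[i] = 10^i, so bin[i] covers 10^i <= x < 10^(i+1) */
  for (i = 0; i <= 10; ++i)
    range[i] = pow (10.0, (double) i);

  gsl_histogram_set_ranges (h, range, 11);

  gsl_histogram_free (h);
  return 0;
}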
-- Function: int gsl_histogram_set_ranges_uniform (gsl_histogram *h, double xmin, double xmax) This function sets the ranges of the existing histogram *note h: 8b5. to cover the range *note xmin: 8b5. to *note xmax: 8b5. uniformly. The values of the histogram bins are reset to zero. The bin ranges are shown in the table below, bin[0] corresponds to xmin <= x < xmin + d bin[1] corresponds to xmin + d <= x < xmin + 2 d ...... bin[n-1] corresponds to xmin + (n-1)d <= x < xmax where d is the bin spacing, d = (xmax-xmin)/n. -- Function: void gsl_histogram_free (gsl_histogram *h) This function frees the histogram *note h: 8b6. and all of the memory associated with it.  File: gsl-ref.info, Node: Copying Histograms, Next: Updating and accessing histogram elements, Prev: Histogram allocation, Up: Histograms 25.3 Copying Histograms ======================= -- Function: int gsl_histogram_memcpy (gsl_histogram *dest, const gsl_histogram *src) This function copies the histogram *note src: 8b8. into the pre-existing histogram *note dest: 8b8, making *note dest: 8b8. into an exact copy of *note src: 8b8. The two histograms must be of the same size. -- Function: *note gsl_histogram: 8b1. *gsl_histogram_clone (const gsl_histogram *src) This function returns a pointer to a newly created histogram which is an exact copy of the histogram *note src: 8b9.  File: gsl-ref.info, Node: Updating and accessing histogram elements, Next: Searching histogram ranges, Prev: Copying Histograms, Up: Histograms 25.4 Updating and accessing histogram elements ============================================== There are two ways to access histogram bins, either by specifying an x coordinate or by using the bin-index directly. The functions for accessing the histogram through x coordinates use a binary search to identify the bin which covers the appropriate range. -- Function: int gsl_histogram_increment (gsl_histogram *h, double x) This function updates the histogram *note h: 8bb. by adding one (1.0) to the bin whose range contains the coordinate *note x: 8bb. If *note x: 8bb. lies in the valid range of the histogram then the function returns zero to indicate success. If *note x: 8bb. is less than the lower limit of the histogram then the function returns *note GSL_EDOM: 28, and none of bins are modified. Similarly, if the value of *note x: 8bb. is greater than or equal to the upper limit of the histogram then the function returns *note GSL_EDOM: 28, and none of the bins are modified. The error handler is not called, however, since it is often necessary to compute histograms for a small range of a larger dataset, ignoring the values outside the range of interest. -- Function: int gsl_histogram_accumulate (gsl_histogram *h, double x, double weight) This function is similar to *note gsl_histogram_increment(): 8bb. but increases the value of the appropriate bin in the histogram *note h: 8bc. by the floating-point number *note weight: 8bc. -- Function: double gsl_histogram_get (const gsl_histogram *h, size_t i) This function returns the contents of the *note i: 8bd.-th bin of the histogram *note h: 8bd. If *note i: 8bd. lies outside the valid range of indices for the histogram then the error handler is called with an error code of *note GSL_EDOM: 28. and the function returns 0. -- Function: int gsl_histogram_get_range (const gsl_histogram *h, size_t i, double *lower, double *upper) This function finds the upper and lower range limits of the *note i: 8be.-th bin of the histogram *note h: 8be. If the index *note i: 8be. 
is valid then the corresponding range limits are stored in *note lower: 8be. and *note upper: 8be. The lower limit is inclusive (i.e. events with this coordinate are included in the bin) and the upper limit is exclusive (i.e. events with the coordinate of the upper limit are excluded and fall in the neighboring higher bin, if it exists). The function returns 0 to indicate success. If *note i: 8be. lies outside the valid range of indices for the histogram then the error handler is called and the function returns an error code of *note GSL_EDOM: 28. -- Function: double gsl_histogram_max (const gsl_histogram *h) -- Function: double gsl_histogram_min (const gsl_histogram *h) -- Function: size_t gsl_histogram_bins (const gsl_histogram *h) These functions return the maximum upper and minimum lower range limits and the number of bins of the histogram *note h: 8c1. They provide a way of determining these values without accessing the *note gsl_histogram: 8b1. struct directly. -- Function: void gsl_histogram_reset (gsl_histogram *h) This function resets all the bins in the histogram *note h: 8c2. to zero.  File: gsl-ref.info, Node: Searching histogram ranges, Next: Histogram Statistics, Prev: Updating and accessing histogram elements, Up: Histograms 25.5 Searching histogram ranges =============================== The following functions are used by the access and update routines to locate the bin which corresponds to a given x coordinate. -- Function: int gsl_histogram_find (const gsl_histogram *h, double x, size_t *i) This function finds and sets the index *note i: 8c4. to the bin number which covers the coordinate *note x: 8c4. in the histogram *note h: 8c4. The bin is located using a binary search. The search includes an optimization for histograms with uniform range, and will return the correct bin immediately in this case. If *note x: 8c4. is found in the range of the histogram then the function sets the index *note i: 8c4. and returns ‘GSL_SUCCESS’. If *note x: 8c4. lies outside the valid range of the histogram then the function returns *note GSL_EDOM: 28. and the error handler is invoked.  File: gsl-ref.info, Node: Histogram Statistics, Next: Histogram Operations, Prev: Searching histogram ranges, Up: Histograms 25.6 Histogram Statistics ========================= -- Function: double gsl_histogram_max_val (const gsl_histogram *h) This function returns the maximum value contained in the histogram bins. -- Function: size_t gsl_histogram_max_bin (const gsl_histogram *h) This function returns the index of the bin containing the maximum value. In the case where several bins contain the same maximum value the smallest index is returned. -- Function: double gsl_histogram_min_val (const gsl_histogram *h) This function returns the minimum value contained in the histogram bins. -- Function: size_t gsl_histogram_min_bin (const gsl_histogram *h) This function returns the index of the bin containing the minimum value. In the case where several bins contain the same minimum value the smallest index is returned. -- Function: double gsl_histogram_mean (const gsl_histogram *h) This function returns the mean of the histogrammed variable, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. The accuracy of the result is limited by the bin width. 
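As a brief illustration of the updating and statistics routines described above, the following sketch (added for illustration, not part of the original manual) fills a small uniform histogram and prints its mean and most populated bin; the sample values are arbitrary:

#include <stdio.h>
#include <gsl/gsl_histogram.h>

int
main (void)
{
  double data[] = { 0.1, 0.2, 0.2, 0.7, 0.75, 0.9 };  /* arbitrary sample data */
  size_t i;

  gsl_histogram * h = gsl_histogram_alloc (5);
  gsl_histogram_set_ranges_uniform (h, 0.0, 1.0);     /* 5 bins covering [0,1) */

  for (i = 0; i < sizeof (data) / sizeof (data[0]); ++i)
    gsl_histogram_increment (h, data[i]);

  printf ("mean = %g, largest bin = %zu\n",
          gsl_histogram_mean (h), gsl_histogram_max_bin (h));

  gsl_histogram_free (h);
  return 0;
}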
-- Function: double gsl_histogram_sigma (const gsl_histogram *h) This function returns the standard deviation of the histogrammed variable, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. The accuracy of the result is limited by the bin width. -- Function: double gsl_histogram_sum (const gsl_histogram *h) This function returns the sum of all bin values. Negative bin values are included in the sum.  File: gsl-ref.info, Node: Histogram Operations, Next: Reading and writing histograms, Prev: Histogram Statistics, Up: Histograms 25.7 Histogram Operations ========================= -- Function: int gsl_histogram_equal_bins_p (const gsl_histogram *h1, const gsl_histogram *h2) This function returns 1 if the all of the individual bin ranges of the two histograms are identical, and 0 otherwise. -- Function: int gsl_histogram_add (gsl_histogram *h1, const gsl_histogram *h2) This function adds the contents of the bins in histogram *note h2: 8cf. to the corresponding bins of histogram *note h1: 8cf, i.e. h'_1(i) = h_1(i) + h_2(i). The two histograms must have identical bin ranges. -- Function: int gsl_histogram_sub (gsl_histogram *h1, const gsl_histogram *h2) This function subtracts the contents of the bins in histogram *note h2: 8d0. from the corresponding bins of histogram *note h1: 8d0, i.e. h'_1(i) = h_1(i) - h_2(i). The two histograms must have identical bin ranges. -- Function: int gsl_histogram_mul (gsl_histogram *h1, const gsl_histogram *h2) This function multiplies the contents of the bins of histogram *note h1: 8d1. by the contents of the corresponding bins in histogram *note h2: 8d1, i.e. h'_1(i) = h_1(i) * h_2(i). The two histograms must have identical bin ranges. -- Function: int gsl_histogram_div (gsl_histogram *h1, const gsl_histogram *h2) This function divides the contents of the bins of histogram *note h1: 8d2. by the contents of the corresponding bins in histogram *note h2: 8d2, i.e. h'_1(i) = h_1(i) / h_2(i). The two histograms must have identical bin ranges. -- Function: int gsl_histogram_scale (gsl_histogram *h, double scale) This function multiplies the contents of the bins of histogram *note h: 8d3. by the constant *note scale: 8d3, i.e. h'_1(i) = h_1(i) * scale -- Function: int gsl_histogram_shift (gsl_histogram *h, double offset) This function shifts the contents of the bins of histogram *note h: 8d4. by the constant *note offset: 8d4, i.e. h'_1(i) = h_1(i) + offset  File: gsl-ref.info, Node: Reading and writing histograms, Next: Resampling from histograms, Prev: Histogram Operations, Up: Histograms 25.8 Reading and writing histograms =================================== The library provides functions for reading and writing histograms to a file as binary data or formatted text. -- Function: int gsl_histogram_fwrite (FILE *stream, const gsl_histogram *h) This function writes the ranges and bins of the histogram *note h: 8d6. to the stream *note stream: 8d6. in binary format. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. -- Function: int gsl_histogram_fread (FILE *stream, gsl_histogram *h) This function reads into the histogram *note h: 8d7. from the open stream *note stream: 8d7. in binary format. The histogram *note h: 8d7. must be preallocated with the correct size since the function uses the number of bins in *note h: 8d7. 
to determine how many bytes to read. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture. -- Function: int gsl_histogram_fprintf (FILE *stream, const gsl_histogram *h, const char *range_format, const char *bin_format) This function writes the ranges and bins of the histogram *note h: 8d8. line-by-line to the stream *note stream: 8d8. using the format specifiers *note range_format: 8d8. and *note bin_format: 8d8. These should be one of the ‘%g’, ‘%e’ or ‘%f’ formats for floating point numbers. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. The histogram output is formatted in three columns, and the columns are separated by spaces, like this: range[0] range[1] bin[0] range[1] range[2] bin[1] range[2] range[3] bin[2] .... range[n-1] range[n] bin[n-1] The values of the ranges are formatted using *note range_format: 8d8. and the value of the bins are formatted using *note bin_format: 8d8. Each line contains the lower and upper limit of the range of the bins and the value of the bin itself. Since the upper limit of one bin is the lower limit of the next there is duplication of these values between lines but this allows the histogram to be manipulated with line-oriented tools. -- Function: int gsl_histogram_fscanf (FILE *stream, gsl_histogram *h) This function reads formatted data from the stream *note stream: 8d9. into the histogram *note h: 8d9. The data is assumed to be in the three-column format used by *note gsl_histogram_fprintf(): 8d8. The histogram *note h: 8d9. must be preallocated with the correct length since the function uses the size of *note h: 8d9. to determine how many numbers to read. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file.  File: gsl-ref.info, Node: Resampling from histograms, Next: The histogram probability distribution struct, Prev: Reading and writing histograms, Up: Histograms 25.9 Resampling from histograms =============================== A histogram made by counting events can be regarded as a measurement of a probability distribution. Allowing for statistical error, the height of each bin represents the probability of an event where the value of x falls in the range of that bin. The probability distribution function has the one-dimensional form p(x)dx where, p(x) = n_i / (N w_i) In this equation n_i is the number of events in the bin which contains x, w_i is the width of the bin and N is the total number of events. The distribution of events within each bin is assumed to be uniform.  File: gsl-ref.info, Node: The histogram probability distribution struct, Next: Example programs for histograms, Prev: Resampling from histograms, Up: Histograms 25.10 The histogram probability distribution struct =================================================== The probability distribution function for a histogram consists of a set of `bins' which measure the probability of an event falling into a given range of a continuous variable x. A probability distribution function is defined by the following struct, which actually stores the cumulative probability distribution function. This is the natural quantity for generating samples via the inverse transform method, because there is a one-to-one mapping between the cumulative probability distribution and the range [0,1]. 
It can be shown that by taking a uniform random number in this range and finding its corresponding coordinate in the cumulative probability distribution we obtain samples with the desired probability distribution.

 -- Type: gsl_histogram_pdf

     ‘size_t n’
          This is the number of bins used to approximate the probability distribution function.

     ‘double * range’
          The ranges of the bins are stored in an array of ‘n + 1’ elements pointed to by ‘range’.

     ‘double * sum’
          The cumulative probability for the bins is stored in an array of ‘n’ elements pointed to by ‘sum’.

The following functions allow you to create a *note gsl_histogram_pdf: 8dc. struct which represents this probability distribution and generate random samples from it.

 -- Function: *note gsl_histogram_pdf: 8dc. *gsl_histogram_pdf_alloc (size_t n)

     This function allocates memory for a probability distribution with *note n: 8dd. bins and returns a pointer to a newly initialized *note gsl_histogram_pdf: 8dc. struct. If insufficient memory is available a null pointer is returned and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a.

 -- Function: int gsl_histogram_pdf_init (gsl_histogram_pdf *p, const gsl_histogram *h)

     This function initializes the probability distribution *note p: 8de. with the contents of the histogram *note h: 8de. If any of the bins of *note h: 8de. are negative then the error handler is invoked with an error code of *note GSL_EDOM: 28. because a probability distribution cannot contain negative values.

 -- Function: void gsl_histogram_pdf_free (gsl_histogram_pdf *p)

     This function frees the probability distribution function *note p: 8df. and all of the memory associated with it.

 -- Function: double gsl_histogram_pdf_sample (const gsl_histogram_pdf *p, double r)

     This function uses *note r: 8e0, a uniform random number between zero and one, to compute a single random sample from the probability distribution *note p: 8e0. The algorithm used to compute the sample s is given by the following formula,

          s = range[i] + delta * (range[i+1] - range[i])

     where i is the index which satisfies sum[i] \le r < sum[i+1] and delta is (r - sum[i])/(sum[i+1] - sum[i]).


File: gsl-ref.info, Node: Example programs for histograms, Next: Two dimensional histograms, Prev: The histogram probability distribution struct, Up: Histograms

25.11 Example programs for histograms
=====================================

The following program shows how to make a simple histogram of a column of numerical data supplied on ‘stdin’. The program takes three arguments, specifying the upper and lower bounds of the histogram and the number of bins. It then reads numbers from ‘stdin’, one line at a time, and adds them to the histogram. When there is no more data to read it prints out the accumulated histogram using *note gsl_histogram_fprintf(): 8d8.

#include <stdio.h>
#include <stdlib.h>
#include <gsl/gsl_histogram.h>

int
main (int argc, char **argv)
{
  double a, b;
  size_t n;

  if (argc != 4)
    {
      printf ("Usage: gsl-histogram xmin xmax n\n"
              "Computes a histogram of the data "
              "on stdin using n bins from xmin "
              "to xmax\n");
      exit (0);
    }

  a = atof (argv[1]);
  b = atof (argv[2]);
  n = atoi (argv[3]);

  {
    double x;
    gsl_histogram * h = gsl_histogram_alloc (n);
    gsl_histogram_set_ranges_uniform (h, a, b);

    while (fscanf (stdin, "%lg", &x) == 1)
      {
        gsl_histogram_increment (h, x);
      }

    gsl_histogram_fprintf (stdout, h, "%g", "%g");
    gsl_histogram_free (h);
  }

  exit (0);
}

Here is an example of the program in use.
We generate 10000 random samples from a Cauchy distribution with a width of 30 and histogram them over the range -100 to 100, using 200 bins:

     $ gsl-randist 0 10000 cauchy 30 | gsl-histogram -- -100 100 200 > histogram.dat

The figure below shows the familiar shape of the Cauchy distribution and the fluctuations caused by the finite sample size.

[gsl-ref-figures/histogram]

Figure: Histogram output from example program


File: gsl-ref.info, Node: Two dimensional histograms, Next: The 2D histogram struct, Prev: Example programs for histograms, Up: Histograms

25.12 Two dimensional histograms
================================

A two dimensional histogram consists of a set of `bins' which count the number of events falling in a given area of the (x,y) plane. The simplest way to use a two dimensional histogram is to record two-dimensional position information, n(x,y). Another possibility is to form a `joint distribution' by recording related variables. For example a detector might record both the position of an event (x) and the amount of energy it deposited E. These could be histogrammed as the joint distribution n(x,E).


File: gsl-ref.info, Node: The 2D histogram struct, Next: 2D Histogram allocation, Prev: Two dimensional histograms, Up: Histograms

25.13 The 2D histogram struct
=============================

Two dimensional histograms are defined by the following struct,

 -- Type: gsl_histogram2d

     ‘size_t nx, ny’
          This is the number of histogram bins in the x and y directions.

     ‘double * xrange’
          The ranges of the bins in the x-direction are stored in an array of ‘nx + 1’ elements pointed to by ‘xrange’.

     ‘double * yrange’
          The ranges of the bins in the y-direction are stored in an array of ‘ny + 1’ elements pointed to by ‘yrange’.

     ‘double * bin’
          The counts for each bin are stored in an array pointed to by ‘bin’. The bins are floating-point numbers, so you can increment them by non-integer values if necessary. The array ‘bin’ stores the two dimensional array of bins in a single block of memory according to the mapping ‘bin(i,j)’ = ‘bin[i * ny + j]’.

The range for ‘bin(i,j)’ is given by ‘xrange[i]’ to ‘xrange[i+1]’ in the x-direction and ‘yrange[j]’ to ‘yrange[j+1]’ in the y-direction. Each bin is inclusive at the lower end and exclusive at the upper end. Mathematically this means that the bins are defined by the following inequality,

     bin(i,j) corresponds to xrange[i] <= x < xrange[i+1]
                         and yrange[j] <= y < yrange[j+1]

Note that any samples which fall on the upper sides of the histogram are excluded. If you want to include these values for the side bins you will need to add an extra row or column to your histogram.

The *note gsl_histogram2d: 8e5. struct and its associated functions are defined in the header file ‘gsl_histogram2d.h’.


File: gsl-ref.info, Node: 2D Histogram allocation, Next: Copying 2D Histograms, Prev: The 2D histogram struct, Up: Histograms

25.14 2D Histogram allocation
=============================

The functions for allocating memory to a 2D histogram follow the style of ‘malloc()’ and ‘free()’. In addition they also perform their own error checking. If there is insufficient memory available to allocate a histogram then the functions call the error handler (with an error number of *note GSL_ENOMEM: 2a.) in addition to returning a null pointer. Thus if you use the library error handler to abort your program then it isn’t necessary to check every 2D histogram ‘alloc’.

 -- Function: *note gsl_histogram2d: 8e5.
*gsl_histogram2d_alloc (size_t nx, size_t ny) This function allocates memory for a two-dimensional histogram with *note nx: 8e7. bins in the x direction and *note ny: 8e7. bins in the y direction. The function returns a pointer to a newly created *note gsl_histogram2d: 8e5. struct. If insufficient memory is available a null pointer is returned and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. The bins and ranges must be initialized with one of the functions below before the histogram is ready for use. -- Function: int gsl_histogram2d_set_ranges (gsl_histogram2d *h, const double xrange[], size_t xsize, const double yrange[], size_t ysize) This function sets the ranges of the existing histogram *note h: 8e8. using the arrays *note xrange: 8e8. and *note yrange: 8e8. of size *note xsize: 8e8. and *note ysize: 8e8. respectively. The values of the histogram bins are reset to zero. -- Function: int gsl_histogram2d_set_ranges_uniform (gsl_histogram2d *h, double xmin, double xmax, double ymin, double ymax) This function sets the ranges of the existing histogram *note h: 8e9. to cover the ranges *note xmin: 8e9. to *note xmax: 8e9. and *note ymin: 8e9. to *note ymax: 8e9. uniformly. The values of the histogram bins are reset to zero. -- Function: void gsl_histogram2d_free (gsl_histogram2d *h) This function frees the 2D histogram *note h: 8ea. and all of the memory associated with it.  File: gsl-ref.info, Node: Copying 2D Histograms, Next: Updating and accessing 2D histogram elements, Prev: 2D Histogram allocation, Up: Histograms 25.15 Copying 2D Histograms =========================== -- Function: int gsl_histogram2d_memcpy (gsl_histogram2d *dest, const gsl_histogram2d *src) This function copies the histogram *note src: 8ec. into the pre-existing histogram *note dest: 8ec, making *note dest: 8ec. into an exact copy of *note src: 8ec. The two histograms must be of the same size. -- Function: *note gsl_histogram2d: 8e5. *gsl_histogram2d_clone (const gsl_histogram2d *src) This function returns a pointer to a newly created histogram which is an exact copy of the histogram *note src: 8ed.  File: gsl-ref.info, Node: Updating and accessing 2D histogram elements, Next: Searching 2D histogram ranges, Prev: Copying 2D Histograms, Up: Histograms 25.16 Updating and accessing 2D histogram elements ================================================== You can access the bins of a two-dimensional histogram either by specifying a pair of (x,y) coordinates or by using the bin indices (i,j) directly. The functions for accessing the histogram through (x,y) coordinates use binary searches in the x and y directions to identify the bin which covers the appropriate range. -- Function: int gsl_histogram2d_increment (gsl_histogram2d *h, double x, double y) This function updates the histogram *note h: 8ef. by adding one (1.0) to the bin whose x and y ranges contain the coordinates (*note x: 8ef, *note y: 8ef.). If the point (x,y) lies inside the valid ranges of the histogram then the function returns zero to indicate success. If (x,y) lies outside the limits of the histogram then the function returns *note GSL_EDOM: 28, and none of the bins are modified. The error handler is not called, since it is often necessary to compute histograms for a small range of a larger dataset, ignoring any coordinates outside the range of interest. -- Function: int gsl_histogram2d_accumulate (gsl_histogram2d *h, double x, double y, double weight) This function is similar to *note gsl_histogram2d_increment(): 8ef. 
but increases the value of the appropriate bin in the histogram *note h: 8f0. by the floating-point number *note weight: 8f0. -- Function: double gsl_histogram2d_get (const gsl_histogram2d *h, size_t i, size_t j) This function returns the contents of the (*note i: 8f1, *note j: 8f1.)-th bin of the histogram *note h: 8f1. If (*note i: 8f1, *note j: 8f1.) lies outside the valid range of indices for the histogram then the error handler is called with an error code of *note GSL_EDOM: 28. and the function returns 0. -- Function: int gsl_histogram2d_get_xrange (const gsl_histogram2d *h, size_t i, double *xlower, double *xupper) -- Function: int gsl_histogram2d_get_yrange (const gsl_histogram2d *h, size_t j, double *ylower, double *yupper) These functions find the upper and lower range limits of the ‘i’-th and *note j: 8f3.-th bins in the x and y directions of the histogram *note h: 8f3. The range limits are stored in ‘xlower’ and ‘xupper’ or *note ylower: 8f3. and *note yupper: 8f3. The lower limits are inclusive (i.e. events with these coordinates are included in the bin) and the upper limits are exclusive (i.e. events with the value of the upper limit are not included and fall in the neighboring higher bin, if it exists). The functions return 0 to indicate success. If ‘i’ or *note j: 8f3. lies outside the valid range of indices for the histogram then the error handler is called with an error code of *note GSL_EDOM: 28. -- Function: double gsl_histogram2d_xmax (const gsl_histogram2d *h) -- Function: double gsl_histogram2d_xmin (const gsl_histogram2d *h) -- Function: size_t gsl_histogram2d_nx (const gsl_histogram2d *h) -- Function: double gsl_histogram2d_ymax (const gsl_histogram2d *h) -- Function: double gsl_histogram2d_ymin (const gsl_histogram2d *h) -- Function: size_t gsl_histogram2d_ny (const gsl_histogram2d *h) These functions return the maximum upper and minimum lower range limits and the number of bins for the x and y directions of the histogram *note h: 8f9. They provide a way of determining these values without accessing the *note gsl_histogram2d: 8e5. struct directly. -- Function: void gsl_histogram2d_reset (gsl_histogram2d *h) This function resets all the bins of the histogram *note h: 8fa. to zero.  File: gsl-ref.info, Node: Searching 2D histogram ranges, Next: 2D Histogram Statistics, Prev: Updating and accessing 2D histogram elements, Up: Histograms 25.17 Searching 2D histogram ranges =================================== The following functions are used by the access and update routines to locate the bin which corresponds to a given (x,y) coordinate. -- Function: int gsl_histogram2d_find (const gsl_histogram2d *h, double x, double y, size_t *i, size_t *j) This function finds and sets the indices *note i: 8fc. and *note j: 8fc. to the bin which covers the coordinates (*note x: 8fc, *note y: 8fc.). The bin is located using a binary search. The search includes an optimization for histograms with uniform ranges, and will return the correct bin immediately in this case. If (x,y) is found then the function sets the indices (*note i: 8fc, *note j: 8fc.) and returns ‘GSL_SUCCESS’. If (x,y) lies outside the valid range of the histogram then the function returns *note GSL_EDOM: 28. and the error handler is invoked.  
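As an illustration of the 2D update and search routines described above, the following sketch (added for illustration, not part of the original manual) adds a single event to a uniform 10-by-10 histogram and then locates the bin covering the same coordinate; the coordinates used are arbitrary:

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_histogram2d.h>

int
main (void)
{
  size_t i, j;

  gsl_histogram2d * h = gsl_histogram2d_alloc (10, 10);
  gsl_histogram2d_set_ranges_uniform (h, 0.0, 1.0, 0.0, 1.0);

  /* add one event at (x,y) = (0.25, 0.75) */
  gsl_histogram2d_increment (h, 0.25, 0.75);

  /* locate the bin covering the same coordinate and read it back */
  if (gsl_histogram2d_find (h, 0.25, 0.75, &i, &j) == GSL_SUCCESS)
    printf ("bin (%zu,%zu) contains %g events\n",
            i, j, gsl_histogram2d_get (h, i, j));

  gsl_histogram2d_free (h);
  return 0;
}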
File: gsl-ref.info, Node: 2D Histogram Statistics, Next: 2D Histogram Operations, Prev: Searching 2D histogram ranges, Up: Histograms 25.18 2D Histogram Statistics ============================= -- Function: double gsl_histogram2d_max_val (const gsl_histogram2d *h) This function returns the maximum value contained in the histogram bins. -- Function: void gsl_histogram2d_max_bin (const gsl_histogram2d *h, size_t *i, size_t *j) This function finds the indices of the bin containing the maximum value in the histogram *note h: 8ff. and stores the result in (*note i: 8ff, *note j: 8ff.). In the case where several bins contain the same maximum value the first bin found is returned. -- Function: double gsl_histogram2d_min_val (const gsl_histogram2d *h) This function returns the minimum value contained in the histogram bins. -- Function: void gsl_histogram2d_min_bin (const gsl_histogram2d *h, size_t *i, size_t *j) This function finds the indices of the bin containing the minimum value in the histogram *note h: 901. and stores the result in (*note i: 901, *note j: 901.). In the case where several bins contain the same maximum value the first bin found is returned. -- Function: double gsl_histogram2d_xmean (const gsl_histogram2d *h) This function returns the mean of the histogrammed x variable, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. -- Function: double gsl_histogram2d_ymean (const gsl_histogram2d *h) This function returns the mean of the histogrammed y variable, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. -- Function: double gsl_histogram2d_xsigma (const gsl_histogram2d *h) This function returns the standard deviation of the histogrammed x variable, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. -- Function: double gsl_histogram2d_ysigma (const gsl_histogram2d *h) This function returns the standard deviation of the histogrammed y variable, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. -- Function: double gsl_histogram2d_cov (const gsl_histogram2d *h) This function returns the covariance of the histogrammed x and y variables, where the histogram is regarded as a probability distribution. Negative bin values are ignored for the purposes of this calculation. -- Function: double gsl_histogram2d_sum (const gsl_histogram2d *h) This function returns the sum of all bin values. Negative bin values are included in the sum.  File: gsl-ref.info, Node: 2D Histogram Operations, Next: Reading and writing 2D histograms, Prev: 2D Histogram Statistics, Up: Histograms 25.19 2D Histogram Operations ============================= -- Function: int gsl_histogram2d_equal_bins_p (const gsl_histogram2d *h1, const gsl_histogram2d *h2) This function returns 1 if all the individual bin ranges of the two histograms are identical, and 0 otherwise. -- Function: int gsl_histogram2d_add (gsl_histogram2d *h1, const gsl_histogram2d *h2) This function adds the contents of the bins in histogram *note h2: 90a. to the corresponding bins of histogram *note h1: 90a, i.e. h'_1(i,j) = h_1(i,j) + h_2(i,j). The two histograms must have identical bin ranges. 
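As a minimal sketch of the arithmetic operations in this section (added for illustration, not part of the original manual), the following helper accumulates one 2D histogram into another after checking that their bin ranges agree; both histograms are assumed to have been allocated and filled elsewhere:

#include <gsl/gsl_histogram2d.h>

/* accumulate the contents of h2 into htotal; the two histograms
   must have been created with identical bin ranges */
static void
accumulate_total (gsl_histogram2d * htotal, const gsl_histogram2d * h2)
{
  if (gsl_histogram2d_equal_bins_p (htotal, h2))
    gsl_histogram2d_add (htotal, h2);   /* htotal(i,j) += h2(i,j) */
}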
-- Function: int gsl_histogram2d_sub (gsl_histogram2d *h1, const gsl_histogram2d *h2) This function subtracts the contents of the bins in histogram *note h2: 90b. from the corresponding bins of histogram *note h1: 90b, i.e. h'_1(i,j) = h_1(i,j) - h_2(i,j). The two histograms must have identical bin ranges. -- Function: int gsl_histogram2d_mul (gsl_histogram2d *h1, const gsl_histogram2d *h2) This function multiplies the contents of the bins of histogram *note h1: 90c. by the contents of the corresponding bins in histogram *note h2: 90c, i.e. h'_1(i,j) = h_1(i,j) * h_2(i,j). The two histograms must have identical bin ranges. -- Function: int gsl_histogram2d_div (gsl_histogram2d *h1, const gsl_histogram2d *h2) This function divides the contents of the bins of histogram *note h1: 90d. by the contents of the corresponding bins in histogram *note h2: 90d, i.e. h'_1(i,j) = h_1(i,j) / h_2(i,j). The two histograms must have identical bin ranges. -- Function: int gsl_histogram2d_scale (gsl_histogram2d *h, double scale) This function multiplies the contents of the bins of histogram *note h: 90e. by the constant *note scale: 90e, i.e. h'_1(i,j) = h_1(i,j) scale -- Function: int gsl_histogram2d_shift (gsl_histogram2d *h, double offset) This function shifts the contents of the bins of histogram *note h: 90f. by the constant *note offset: 90f, i.e. h'_1(i,j) = h_1(i,j) + offset  File: gsl-ref.info, Node: Reading and writing 2D histograms, Next: Resampling from 2D histograms, Prev: 2D Histogram Operations, Up: Histograms 25.20 Reading and writing 2D histograms ======================================= The library provides functions for reading and writing two dimensional histograms to a file as binary data or formatted text. -- Function: int gsl_histogram2d_fwrite (FILE *stream, const gsl_histogram2d *h) This function writes the ranges and bins of the histogram *note h: 911. to the stream *note stream: 911. in binary format. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. -- Function: int gsl_histogram2d_fread (FILE *stream, gsl_histogram2d *h) This function reads into the histogram *note h: 912. from the stream *note stream: 912. in binary format. The histogram *note h: 912. must be preallocated with the correct size since the function uses the number of x and y bins in *note h: 912. to determine how many bytes to read. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture. -- Function: int gsl_histogram2d_fprintf (FILE *stream, const gsl_histogram2d *h, const char *range_format, const char *bin_format) This function writes the ranges and bins of the histogram *note h: 913. line-by-line to the stream *note stream: 913. using the format specifiers *note range_format: 913. and *note bin_format: 913. These should be one of the ‘%g’, ‘%e’ or ‘%f’ formats for floating point numbers. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. The histogram output is formatted in five columns, and the columns are separated by spaces, like this: xrange[0] xrange[1] yrange[0] yrange[1] bin(0,0) xrange[0] xrange[1] yrange[1] yrange[2] bin(0,1) xrange[0] xrange[1] yrange[2] yrange[3] bin(0,2) .... 
xrange[0] xrange[1] yrange[ny-1] yrange[ny] bin(0,ny-1) xrange[1] xrange[2] yrange[0] yrange[1] bin(1,0) xrange[1] xrange[2] yrange[1] yrange[2] bin(1,1) xrange[1] xrange[2] yrange[1] yrange[2] bin(1,2) .... xrange[1] xrange[2] yrange[ny-1] yrange[ny] bin(1,ny-1) .... xrange[nx-1] xrange[nx] yrange[0] yrange[1] bin(nx-1,0) xrange[nx-1] xrange[nx] yrange[1] yrange[2] bin(nx-1,1) xrange[nx-1] xrange[nx] yrange[1] yrange[2] bin(nx-1,2) .... xrange[nx-1] xrange[nx] yrange[ny-1] yrange[ny] bin(nx-1,ny-1) Each line contains the lower and upper limits of the bin and the contents of the bin. Since the upper limits of the each bin are the lower limits of the neighboring bins there is duplication of these values but this allows the histogram to be manipulated with line-oriented tools. -- Function: int gsl_histogram2d_fscanf (FILE *stream, gsl_histogram2d *h) This function reads formatted data from the stream *note stream: 914. into the histogram *note h: 914. The data is assumed to be in the five-column format used by *note gsl_histogram2d_fprintf(): 913. The histogram *note h: 914. must be preallocated with the correct lengths since the function uses the sizes of *note h: 914. to determine how many numbers to read. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file.  File: gsl-ref.info, Node: Resampling from 2D histograms, Next: Example programs for 2D histograms, Prev: Reading and writing 2D histograms, Up: Histograms 25.21 Resampling from 2D histograms =================================== As in the one-dimensional case, a two-dimensional histogram made by counting events can be regarded as a measurement of a probability distribution. Allowing for statistical error, the height of each bin represents the probability of an event where (x, y) falls in the range of that bin. For a two-dimensional histogram the probability distribution takes the form p(x,y) dx dy where, p(x,y) = n_{ij} / (N A_{ij}) In this equation n_{ij} is the number of events in the bin which contains (x,y), A_{ij} is the area of the bin and N is the total number of events. The distribution of events within each bin is assumed to be uniform. -- Type: gsl_histogram2d_pdf ‘size_t nx, ny’ This is the number of histogram bins used to approximate the probability distribution function in the x and y directions. ‘double * xrange’ The ranges of the bins in the x-direction are stored in an array of ‘nx + 1’ elements pointed to by ‘xrange’. ‘double * yrange’ The ranges of the bins in the y-direction are stored in an array of ‘ny + 1’ pointed to by ‘yrange’. ‘double * sum’ The cumulative probability for the bins is stored in an array of ‘nx’ * ‘ny’ elements pointed to by ‘sum’. The following functions allow you to create a *note gsl_histogram2d_pdf: 916. struct which represents a two dimensional probability distribution and generate random samples from it. -- Function: *note gsl_histogram2d_pdf: 916. *gsl_histogram2d_pdf_alloc (size_t nx, size_t ny) This function allocates memory for a two-dimensional probability distribution of size *note nx: 917.-by-*note ny: 917. and returns a pointer to a newly initialized *note gsl_histogram2d_pdf: 916. struct. If insufficient memory is available a null pointer is returned and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. -- Function: int gsl_histogram2d_pdf_init (gsl_histogram2d_pdf *p, const gsl_histogram2d *h) This function initializes the two-dimensional probability distribution calculated *note p: 918. 
from the histogram *note h: 918. If any of the bins of *note h: 918. are negative then the error handler is invoked with an error code of *note GSL_EDOM: 28. because a probability distribution cannot contain negative values.

 -- Function: void gsl_histogram2d_pdf_free (gsl_histogram2d_pdf *p)

     This function frees the two-dimensional probability distribution function *note p: 919. and all of the memory associated with it.

 -- Function: int gsl_histogram2d_pdf_sample (const gsl_histogram2d_pdf *p, double r1, double r2, double *x, double *y)

     This function uses two uniform random numbers between zero and one, *note r1: 91a. and *note r2: 91a, to compute a single random sample from the two-dimensional probability distribution *note p: 91a.


File: gsl-ref.info, Node: Example programs for 2D histograms, Prev: Resampling from 2D histograms, Up: Histograms

25.22 Example programs for 2D histograms
========================================

This program demonstrates two features of two-dimensional histograms. First a 10-by-10 two-dimensional histogram is created with x and y running from 0 to 1. Then a few sample points are added to the histogram, at (0.3,0.3) with a height of 1, at (0.8,0.1) with a height of 5 and at (0.7,0.9) with a height of 0.5. This histogram with three events is used to generate a random sample of 1000 simulated events, which are printed out.

#include <stdio.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_histogram2d.h>

int
main (void)
{
  const gsl_rng_type * T;
  gsl_rng * r;

  gsl_histogram2d * h = gsl_histogram2d_alloc (10, 10);

  gsl_histogram2d_set_ranges_uniform (h, 0.0, 1.0, 0.0, 1.0);

  gsl_histogram2d_accumulate (h, 0.3, 0.3, 1);
  gsl_histogram2d_accumulate (h, 0.8, 0.1, 5);
  gsl_histogram2d_accumulate (h, 0.7, 0.9, 0.5);

  gsl_rng_env_setup ();

  T = gsl_rng_default;
  r = gsl_rng_alloc (T);

  {
    int i;
    gsl_histogram2d_pdf * p = gsl_histogram2d_pdf_alloc (h->nx, h->ny);

    gsl_histogram2d_pdf_init (p, h);

    for (i = 0; i < 1000; i++)
      {
        double x, y;
        double u = gsl_rng_uniform (r);
        double v = gsl_rng_uniform (r);

        gsl_histogram2d_pdf_sample (p, u, v, &x, &y);

        printf ("%g %g\n", x, y);
      }

    gsl_histogram2d_pdf_free (p);
  }

  gsl_histogram2d_free (h);
  gsl_rng_free (r);

  return 0;
}

The following plot shows the distribution of the simulated events. Using a higher resolution grid we can see the original underlying histogram and also the statistical fluctuations caused by the events being uniformly distributed over the area of the original bins.

[gsl-ref-figures/histogram2d]

Figure: Distribution of simulated events from example program


File: gsl-ref.info, Node: N-tuples, Next: Monte Carlo Integration, Prev: Histograms, Up: Top

26 N-tuples
***********

This chapter describes functions for creating and manipulating `ntuples', sets of values associated with events. The ntuples are stored in files. Their values can be extracted in any combination and `booked' in a histogram using a selection function.

The values to be stored are held in a user-defined data structure, and an ntuple is created associating this data structure with a file. The values are then written to the file (normally inside a loop) using the ntuple functions described below.

A histogram can be created from ntuple data by providing a selection function and a value function. The selection function specifies whether an event should be included in the subset to be analyzed or not. The value function computes the entry to be added to the histogram for each event.

All the ntuple functions are defined in the header file ‘gsl_ntuple.h’.
* Menu: * The ntuple struct:: * Creating ntuples:: * Opening an existing ntuple file:: * Writing ntuples:: * Reading ntuples:: * Closing an ntuple file:: * Histogramming ntuple values:: * Examples: Examples<19>. * References and Further Reading: References and Further Reading<19>.  File: gsl-ref.info, Node: The ntuple struct, Next: Creating ntuples, Up: N-tuples 26.1 The ntuple struct ====================== -- Type: gsl_ntuple Ntuples are manipulated using the *note gsl_ntuple: 91f. struct. This struct contains information on the file where the ntuple data is stored, a pointer to the current ntuple data row and the size of the user-defined ntuple data struct: typedef struct { FILE * file; void * ntuple_data; size_t size; } gsl_ntuple;  File: gsl-ref.info, Node: Creating ntuples, Next: Opening an existing ntuple file, Prev: The ntuple struct, Up: N-tuples 26.2 Creating ntuples ===================== -- Function: *note gsl_ntuple: 91f. *gsl_ntuple_create (char *filename, void *ntuple_data, size_t size) This function creates a new write-only ntuple file *note filename: 921. for ntuples of size *note size: 921. and returns a pointer to the newly created ntuple struct. Any existing file with the same name is truncated to zero length and overwritten. A pointer to memory for the current ntuple row *note ntuple_data: 921. must be supplied—this is used to copy ntuples in and out of the file.  File: gsl-ref.info, Node: Opening an existing ntuple file, Next: Writing ntuples, Prev: Creating ntuples, Up: N-tuples 26.3 Opening an existing ntuple file ==================================== -- Function: *note gsl_ntuple: 91f. *gsl_ntuple_open (char *filename, void *ntuple_data, size_t size) This function opens an existing ntuple file *note filename: 923. for reading and returns a pointer to a corresponding ntuple struct. The ntuples in the file must have size *note size: 923. A pointer to memory for the current ntuple row *note ntuple_data: 923. must be supplied—this is used to copy ntuples in and out of the file.  File: gsl-ref.info, Node: Writing ntuples, Next: Reading ntuples, Prev: Opening an existing ntuple file, Up: N-tuples 26.4 Writing ntuples ==================== -- Function: int gsl_ntuple_write (gsl_ntuple *ntuple) This function writes the current ntuple ‘ntuple->ntuple_data’ of size ‘ntuple->size’ to the corresponding file. -- Function: int gsl_ntuple_bookdata (gsl_ntuple *ntuple) This function is a synonym for *note gsl_ntuple_write(): 925.  File: gsl-ref.info, Node: Reading ntuples, Next: Closing an ntuple file, Prev: Writing ntuples, Up: N-tuples 26.5 Reading ntuples ==================== -- Function: int gsl_ntuple_read (gsl_ntuple *ntuple) This function reads the current row of the ntuple file for *note ntuple: 928. and stores the values in ‘ntuple->data’.  File: gsl-ref.info, Node: Closing an ntuple file, Next: Histogramming ntuple values, Prev: Reading ntuples, Up: N-tuples 26.6 Closing an ntuple file =========================== -- Function: int gsl_ntuple_close (gsl_ntuple *ntuple) This function closes the ntuple file *note ntuple: 92a. and frees its associated allocated memory.  File: gsl-ref.info, Node: Histogramming ntuple values, Next: Examples<19>, Prev: Closing an ntuple file, Up: N-tuples 26.7 Histogramming ntuple values ================================ Once an ntuple has been created its contents can be histogrammed in various ways using the function *note gsl_ntuple_project(): 92c. 
Two user-defined functions must be provided, a function to select events and a function to compute scalar values. The selection function and the value function both accept the ntuple row as a first argument and other parameters as a second argument. -- Type: gsl_ntuple_select_fn The `selection function' determines which ntuple rows are selected for histogramming. It is defined by the following struct: typedef struct { int (* function) (void * ntuple_data, void * params); void * params; } gsl_ntuple_select_fn; The struct component ‘function’ should return a non-zero value for each ntuple row that is to be included in the histogram. -- Type: gsl_ntuple_value_fn The `value function' computes scalar values for those ntuple rows selected by the selection function: typedef struct { double (* function) (void * ntuple_data, void * params); void * params; } gsl_ntuple_value_fn; In this case the struct component ‘function’ should return the value to be added to the histogram for the ntuple row. -- Function: int gsl_ntuple_project (gsl_histogram *h, gsl_ntuple *ntuple, gsl_ntuple_value_fn *value_func, gsl_ntuple_select_fn *select_func) This function updates the histogram *note h: 92c. from the ntuple *note ntuple: 92c. using the functions *note value_func: 92c. and *note select_func: 92c. For each ntuple row where the selection function *note select_func: 92c. is non-zero the corresponding value of that row is computed using the function *note value_func: 92c. and added to the histogram. Those ntuple rows where *note select_func: 92c. returns zero are ignored. New entries are added to the histogram, so subsequent calls can be used to accumulate further data in the same histogram.  File: gsl-ref.info, Node: Examples<19>, Next: References and Further Reading<19>, Prev: Histogramming ntuple values, Up: N-tuples 26.8 Examples ============= The following example programs demonstrate the use of ntuples in managing a large dataset. The first program creates a set of 10,000 simulated “events”, each with 3 associated values (x,y,z). These are generated from a Gaussian distribution with unit variance, for demonstration purposes, and written to the ntuple file ‘test.dat’. #include #include #include struct data { double x; double y; double z; }; int main (void) { const gsl_rng_type * T; gsl_rng * r; struct data ntuple_row; int i; gsl_ntuple *ntuple = gsl_ntuple_create ("test.dat", &ntuple_row, sizeof (ntuple_row)); gsl_rng_env_setup (); T = gsl_rng_default; r = gsl_rng_alloc (T); for (i = 0; i < 10000; i++) { ntuple_row.x = gsl_ran_ugaussian (r); ntuple_row.y = gsl_ran_ugaussian (r); ntuple_row.z = gsl_ran_ugaussian (r); gsl_ntuple_write (ntuple); } gsl_ntuple_close (ntuple); gsl_rng_free (r); return 0; } The next program analyses the ntuple data in the file ‘test.dat’. The analysis procedure is to compute the squared-magnitude of each event, E^2=x^2+y^2+z^2, and select only those which exceed a lower limit of 1.5. The selected events are then histogrammed using their E^2 values. 
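The writer program above uses the ntuple, random number generator and Gaussian distribution interfaces, while the analysis program listed next also uses the one-dimensional histogram interface. The include lines below are therefore a plausible set for the two programs, inferred from the functions they call rather than taken from the listings; both programs are linked in the usual way with ‘-lgsl -lgslcblas -lm’ (see Compiling and Linking):

     /* plausible headers for the event-generating program above */
     #include <gsl/gsl_ntuple.h>
     #include <gsl/gsl_rng.h>
     #include <gsl/gsl_randist.h>

     /* plausible headers for the analysis program below */
     #include <math.h>
     #include <gsl/gsl_ntuple.h>
     #include <gsl/gsl_histogram.h>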
#include #include #include struct data { double x; double y; double z; }; int sel_func (void *ntuple_data, void *params); double val_func (void *ntuple_data, void *params); int main (void) { struct data ntuple_row; gsl_ntuple *ntuple = gsl_ntuple_open ("test.dat", &ntuple_row, sizeof (ntuple_row)); double lower = 1.5; gsl_ntuple_select_fn S; gsl_ntuple_value_fn V; gsl_histogram *h = gsl_histogram_alloc (100); gsl_histogram_set_ranges_uniform(h, 0.0, 10.0); S.function = &sel_func; S.params = &lower; V.function = &val_func; V.params = 0; gsl_ntuple_project (h, ntuple, &V, &S); gsl_histogram_fprintf (stdout, h, "%f", "%f"); gsl_histogram_free (h); gsl_ntuple_close (ntuple); return 0; } int sel_func (void *ntuple_data, void *params) { struct data * data = (struct data *) ntuple_data; double x, y, z, E2, scale; scale = *(double *) params; x = data->x; y = data->y; z = data->z; E2 = x * x + y * y + z * z; return E2 > scale; } double val_func (void *ntuple_data, void *params) { (void)(params); /* avoid unused parameter warning */ struct data * data = (struct data *) ntuple_data; double x, y, z; x = data->x; y = data->y; z = data->z; return x * x + y * y + z * z; } Fig. %s shows the distribution of the selected events. Note the cut-off at the lower bound. [gsl-ref-figures/ntuple] Figure: Distribution of selected events  File: gsl-ref.info, Node: References and Further Reading<19>, Prev: Examples<19>, Up: N-tuples 26.9 References and Further Reading =================================== Further information on the use of ntuples can be found in the documentation for the CERN packages PAW and HBOOK (available online).  File: gsl-ref.info, Node: Monte Carlo Integration, Next: Simulated Annealing, Prev: N-tuples, Up: Top 27 Monte Carlo Integration ************************** This chapter describes routines for multidimensional Monte Carlo integration. These include the traditional Monte Carlo method and adaptive algorithms such as VEGAS and MISER which use importance sampling and stratified sampling techniques. Each algorithm computes an estimate of a multidimensional definite integral of the form, I = \int_{x_l}^{x_u} dx \int_{y_l}^{y_u} dy ... f(x, y, ...) over a hypercubic region ((x_l,x_u), (y_l,y_u), ...) using a fixed number of function calls. The routines also provide a statistical estimate of the error on the result. This error estimate should be taken as a guide rather than as a strict error bound—random sampling of the region may not uncover all the important features of the function, resulting in an underestimate of the error. The functions are defined in separate header files for each routine, ‘gsl_monte_plain.h’, ‘gsl_monte_miser.h’ and ‘gsl_monte_vegas.h’. * Menu: * Interface:: * PLAIN Monte Carlo:: * MISER:: * VEGAS:: * Examples: Examples<20>. * References and Further Reading: References and Further Reading<20>.  File: gsl-ref.info, Node: Interface, Next: PLAIN Monte Carlo, Up: Monte Carlo Integration 27.1 Interface ============== All of the Monte Carlo integration routines use the same general form of interface. There is an allocator to allocate memory for control variables and workspace, a routine to initialize those control variables, the integrator itself, and a function to free the space when done. Each integration function requires a random number generator to be supplied, and returns an estimate of the integral and its standard deviation. The accuracy of the result is determined by the number of function calls specified by the user. 
If a known level of accuracy is required this can be achieved by calling the integrator several times and averaging the individual results until the desired accuracy is obtained. Random sample points used within the Monte Carlo routines are always chosen strictly within the integration region, so that endpoint singularities are automatically avoided. The function to be integrated has its own datatype, defined in the header file ‘gsl_monte.h’. -- Type: gsl_monte_function This data type defines a general function with parameters for Monte Carlo integration. ‘double (* f) (double * x, size_t dim, void * params)’ this function should return the value f(x,params) for the argument ‘x’ and parameters ‘params’, where ‘x’ is an array of size ‘dim’ giving the coordinates of the point where the function is to be evaluated. ‘size_t dim’ the number of dimensions for ‘x’. ‘void * params’ a pointer to the parameters of the function. Here is an example for a quadratic function in two dimensions, f(x,y) = a x^2 + b x y + c y^2 with a = 3, b = 2, c = 1. The following code defines a *note gsl_monte_function: 935. ‘F’ which you could pass to an integrator: struct my_f_params { double a; double b; double c; }; double my_f (double x[], size_t dim, void * p) { struct my_f_params * fp = (struct my_f_params *)p; if (dim != 2) { fprintf (stderr, "error: dim != 2"); abort (); } return fp->a * x[0] * x[0] + fp->b * x[0] * x[1] + fp->c * x[1] * x[1]; } gsl_monte_function F; struct my_f_params params = { 3.0, 2.0, 1.0 }; F.f = &my_f; F.dim = 2; F.params = &params; The function f(x) can be evaluated using the following macro: #define GSL_MONTE_FN_EVAL(F,x) (*((F)->f))(x,(F)->dim,(F)->params)  File: gsl-ref.info, Node: PLAIN Monte Carlo, Next: MISER, Prev: Interface, Up: Monte Carlo Integration 27.2 PLAIN Monte Carlo ====================== The plain Monte Carlo algorithm samples points randomly from the integration region to estimate the integral and its error. Using this algorithm the estimate of the integral E(f; N) for N randomly distributed points x_i is given by, E(f; N) = V \langle f \rangle = (V / N) \sum_i^N f(x_i) where V is the volume of the integration region and \langle f \rangle is the sample mean of the function over the N points. The error on this estimate \sigma(E;N) is calculated from the estimated variance of the mean, \sigma^2 (E; N) = (V^2 / N^2) \sum_i^N (f(x_i) - \langle f \rangle)^2. For large N this variance decreases asymptotically as \Var(f)/N, where \Var(f) is the true variance of the function over the integration region. The error estimate itself should decrease as \sigma(f)/\sqrt{N}. The familiar law of errors decreasing as 1/\sqrt{N} applies—to reduce the error by a factor of 10 requires a 100-fold increase in the number of sample points. The functions described in this section are declared in the header file ‘gsl_monte_plain.h’. -- Type: gsl_monte_plain_state This is a workspace for plain Monte Carlo integration -- Function: *note gsl_monte_plain_state: 937. *gsl_monte_plain_alloc (size_t dim) This function allocates and initializes a workspace for Monte Carlo integration in *note dim: 938. dimensions. -- Function: int gsl_monte_plain_init (gsl_monte_plain_state *s) This function initializes a previously allocated integration state. This allows an existing workspace to be reused for different integrations. -- Function: int gsl_monte_plain_integrate (gsl_monte_function *f, const double xl[], const double xu[], size_t dim, size_t calls, gsl_rng *r, gsl_monte_plain_state *s, double *result, double *abserr) This routine uses the plain Monte Carlo algorithm to integrate the function *note f: 93a.
over the *note dim: 93a.-dimensional hypercubic region defined by the lower and upper limits in the arrays *note xl: 93a. and *note xu: 93a, each of size *note dim: 93a. The integration uses a fixed number of function calls *note calls: 93a, and obtains random sampling points using the random number generator *note r: 93a. A previously allocated workspace *note s: 93a. must be supplied. The result of the integration is returned in *note result: 93a, with an estimated absolute error *note abserr: 93a. -- Function: void gsl_monte_plain_free (gsl_monte_plain_state *s) This function frees the memory associated with the integrator state *note s: 93b.  File: gsl-ref.info, Node: MISER, Next: VEGAS, Prev: PLAIN Monte Carlo, Up: Monte Carlo Integration 27.3 MISER ========== The MISER algorithm of Press and Farrar is based on recursive stratified sampling. This technique aims to reduce the overall integration error by concentrating integration points in the regions of highest variance. The idea of stratified sampling begins with the observation that for two disjoint regions a and b with Monte Carlo estimates of the integral E_a(f) and E_b(f) and variances \sigma_a^2(f) and \sigma_b^2(f), the variance \Var(f) of the combined estimate E(f) = {1\over 2} (E_a(f) + E_b(f)) is given by, \Var(f) = (\sigma_a^2(f) / 4 N_a) + (\sigma_b^2(f) / 4 N_b). It can be shown that this variance is minimized by distributing the points such that, N_a / (N_a + N_b) = \sigma_a / (\sigma_a + \sigma_b). Hence the smallest error estimate is obtained by allocating sample points in proportion to the standard deviation of the function in each sub-region. The MISER algorithm proceeds by bisecting the integration region along one coordinate axis to give two sub-regions at each step. The direction is chosen by examining all d possible bisections and selecting the one which will minimize the combined variance of the two sub-regions. The variance in the sub-regions is estimated by sampling with a fraction of the total number of points available to the current step. The same procedure is then repeated recursively for each of the two half-spaces from the best bisection. The remaining sample points are allocated to the sub-regions using the formula for N_a and N_b. This recursive allocation of integration points continues down to a user-specified depth where each sub-region is integrated using a plain Monte Carlo estimate. These individual values and their error estimates are then combined upwards to give an overall result and an estimate of its error. The functions described in this section are declared in the header file ‘gsl_monte_miser.h’. -- Type: gsl_monte_miser_state This workspace is used for MISER Monte Carlo integration -- Function: *note gsl_monte_miser_state: 93d. *gsl_monte_miser_alloc (size_t dim) This function allocates and initializes a workspace for Monte Carlo integration in *note dim: 93e. dimensions. The workspace is used to maintain the state of the integration. -- Function: int gsl_monte_miser_init (gsl_monte_miser_state *s) This function initializes a previously allocated integration state. This allows an existing workspace to be reused for different integrations. -- Function: int gsl_monte_miser_integrate (gsl_monte_function *f, const double xl[], const double xu[], size_t dim, size_t calls, gsl_rng *r, gsl_monte_miser_state *s, double *result, double *abserr) This routines uses the MISER Monte Carlo algorithm to integrate the function *note f: 940. 
over the *note dim: 940.-dimensional hypercubic region defined by the lower and upper limits in the arrays *note xl: 940. and *note xu: 940, each of size *note dim: 940. The integration uses a fixed number of function calls *note calls: 940, and obtains random sampling points using the random number generator *note r: 940. A previously allocated workspace *note s: 940. must be supplied. The result of the integration is returned in *note result: 940, with an estimated absolute error *note abserr: 940. -- Function: void gsl_monte_miser_free (gsl_monte_miser_state *s) This function frees the memory associated with the integrator state *note s: 941. The MISER algorithm has several configurable parameters which can be changed using the following two functions (1). -- Function: void gsl_monte_miser_params_get (const gsl_monte_miser_state *s, gsl_monte_miser_params *params) This function copies the parameters of the integrator state into the user-supplied *note params: 942. structure. -- Function: void gsl_monte_miser_params_set (gsl_monte_miser_state *s, const gsl_monte_miser_params *params) This function sets the integrator parameters based on values provided in the *note params: 943. structure. Typically the values of the parameters are first read using *note gsl_monte_miser_params_get(): 942, the necessary changes are made to the fields of the ‘params’ structure, and the values are copied back into the integrator state using *note gsl_monte_miser_params_set(): 943. The functions use the *note gsl_monte_miser_params: 944. structure which contains the following fields: -- Type: gsl_monte_miser_params -- Variable: double estimate_frac This parameter specifies the fraction of the currently available number of function calls which are allocated to estimating the variance at each recursive step. The default value is 0.1. -- Variable: size_t min_calls This parameter specifies the minimum number of function calls required for each estimate of the variance. If the number of function calls allocated to the estimate using *note estimate_frac: 945. falls below *note min_calls: 946. then *note min_calls: 946. are used instead. This ensures that each estimate maintains a reasonable level of accuracy. The default value of *note min_calls: 946. is ‘16 * dim’. -- Variable: size_t min_calls_per_bisection This parameter specifies the minimum number of function calls required to proceed with a bisection step. When a recursive step has fewer calls available than *note min_calls_per_bisection: 947. it performs a plain Monte Carlo estimate of the current sub-region and terminates its branch of the recursion. The default value of this parameter is ‘32 * min_calls’. -- Variable: double alpha This parameter controls how the estimated variances for the two sub-regions of a bisection are combined when allocating points. With recursive sampling the overall variance should scale better than 1/N, since the values from the sub-regions will be obtained using a procedure which explicitly minimizes their variance. To accommodate this behavior the MISER algorithm allows the total variance to depend on a scaling parameter \alpha, \Var(f) = {\sigma_a \over N_a^\alpha} + {\sigma_b \over N_b^\alpha}. The authors of the original paper describing MISER recommend the value \alpha = 2 as a good choice, obtained from numerical experiments, and this is used as the default value in this implementation. -- Variable: double dither This parameter introduces a random fractional variation of size *note dither: 949. 
into each bisection, which can be used to break the symmetry of integrands which are concentrated near the exact center of the hypercubic integration region. The default value of dither is zero, so no variation is introduced. If needed, a typical value of *note dither: 949. is 0.1. ---------- Footnotes ---------- (1) The previous method of accessing these fields directly through the *note gsl_monte_miser_state: 93d. struct is now deprecated.  File: gsl-ref.info, Node: VEGAS, Next: Examples<20>, Prev: MISER, Up: Monte Carlo Integration 27.4 VEGAS ========== The VEGAS algorithm of Lepage is based on importance sampling. It samples points from the probability distribution described by the function |f|, so that the points are concentrated in the regions that make the largest contribution to the integral. In general, if the Monte Carlo integral of f is sampled with points distributed according to a probability distribution described by the function g, we obtain an estimate E_g(f; N), E_g(f; N) = E(f/g; N) with a corresponding variance, \Var_g(f; N) = \Var(f/g; N) If the probability distribution is chosen as g = |f|/I(|f|) then it can be shown that the variance V_g(f; N) vanishes, and the error in the estimate will be zero. In practice it is not possible to sample from the exact distribution g for an arbitrary function, so importance sampling algorithms aim to produce efficient approximations to the desired distribution. The VEGAS algorithm approximates the exact distribution by making a number of passes over the integration region while histogramming the function f. Each histogram is used to define a sampling distribution for the next pass. Asymptotically this procedure converges to the desired distribution. In order to avoid the number of histogram bins growing like K^d the probability distribution is approximated by a separable function: g(x_1, x_2, \ldots) = g_1(x_1) g_2(x_2) \ldots so that the number of bins required is only K d. This is equivalent to locating the peaks of the function from the projections of the integrand onto the coordinate axes. The efficiency of VEGAS depends on the validity of this assumption. It is most efficient when the peaks of the integrand are well-localized. If an integrand can be rewritten in a form which is approximately separable this will increase the efficiency of integration with VEGAS. VEGAS incorporates a number of additional features, and combines both stratified sampling and importance sampling. The integration region is divided into a number of “boxes”, with each box getting a fixed number of points (the goal is 2). Each box can then have a fractional number of bins, but if the ratio of bins-per-box is less than two, VEGAS switches to a kind of variance reduction (rather than importance sampling). -- Type: gsl_monte_vegas_state This workspace is used for VEGAS Monte Carlo integration -- Function: *note gsl_monte_vegas_state: 94b. *gsl_monte_vegas_alloc (size_t dim) This function allocates and initializes a workspace for Monte Carlo integration in *note dim: 94c. dimensions. The workspace is used to maintain the state of the integration. -- Function: int gsl_monte_vegas_init (gsl_monte_vegas_state *s) This function initializes a previously allocated integration state. This allows an existing workspace to be reused for different integrations.
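In code, the read-modify-write sequence described above for the MISER parameters might look like the following fragment (the particular values assigned are illustrative only; the VEGAS integrator provides an analogous pair of parameter functions, described below):

     #include <gsl/gsl_monte_miser.h>

     gsl_monte_miser_state *s = gsl_monte_miser_alloc (3);
     gsl_monte_miser_params params;

     /* read the current parameter values from the integrator state */
     gsl_monte_miser_params_get (s, &params);

     /* adjust the fields of interest (illustrative values) */
     params.estimate_frac = 0.2;
     params.min_calls_per_bisection = 64 * params.min_calls;

     /* copy the modified values back into the integrator state */
     gsl_monte_miser_params_set (s, &params);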
-- Function: int gsl_monte_vegas_integrate (gsl_monte_function *f, double xl[], double xu[], size_t dim, size_t calls, gsl_rng *r, gsl_monte_vegas_state *s, double *result, double *abserr) This routines uses the VEGAS Monte Carlo algorithm to integrate the function *note f: 94e. over the *note dim: 94e.-dimensional hypercubic region defined by the lower and upper limits in the arrays *note xl: 94e. and *note xu: 94e, each of size *note dim: 94e. The integration uses a fixed number of function calls *note calls: 94e, and obtains random sampling points using the random number generator *note r: 94e. A previously allocated workspace *note s: 94e. must be supplied. The result of the integration is returned in *note result: 94e, with an estimated absolute error *note abserr: 94e. The result and its error estimate are based on a weighted average of independent samples. The chi-squared per degree of freedom for the weighted average is returned via the state struct component, ‘s->chisq’, and must be consistent with 1 for the weighted average to be reliable. -- Function: void gsl_monte_vegas_free (gsl_monte_vegas_state *s) This function frees the memory associated with the integrator state *note s: 94f. The VEGAS algorithm computes a number of independent estimates of the integral internally, according to the ‘iterations’ parameter described below, and returns their weighted average. Random sampling of the integrand can occasionally produce an estimate where the error is zero, particularly if the function is constant in some regions. An estimate with zero error causes the weighted average to break down and must be handled separately. In the original Fortran implementations of VEGAS the error estimate is made non-zero by substituting a small value (typically ‘1e-30’). The implementation in GSL differs from this and avoids the use of an arbitrary constant—it either assigns the value a weight which is the average weight of the preceding estimates or discards it according to the following procedure, * current estimate has zero error, weighted average has finite error The current estimate is assigned a weight which is the average weight of the preceding estimates. * current estimate has finite error, previous estimates had zero error The previous estimates are discarded and the weighted averaging procedure begins with the current estimate. * current estimate has zero error, previous estimates had zero error The estimates are averaged using the arithmetic mean, but no error is computed. The convergence of the algorithm can be tested using the overall chi-squared value of the results, which is available from the following function: -- Function: double gsl_monte_vegas_chisq (const gsl_monte_vegas_state *s) This function returns the chi-squared per degree of freedom for the weighted estimate of the integral. The returned value should be close to 1. A value which differs significantly from 1 indicates that the values from different iterations are inconsistent. In this case the weighted error will be under-estimated, and further iterations of the algorithm are needed to obtain reliable results. -- Function: void gsl_monte_vegas_runval (const gsl_monte_vegas_state *s, double *result, double *sigma) This function returns the raw (unaveraged) values of the integral *note result: 951. and its error *note sigma: 951. from the most recent iteration of the algorithm. The VEGAS algorithm is highly configurable. Several parameters can be changed using the following two functions. 
-- Function: void gsl_monte_vegas_params_get (const gsl_monte_vegas_state *s, gsl_monte_vegas_params *params) This function copies the parameters of the integrator state into the user-supplied *note params: 952. structure. -- Function: void gsl_monte_vegas_params_set (gsl_monte_vegas_state *s, const gsl_monte_vegas_params *params) This function sets the integrator parameters based on values provided in the *note params: 953. structure. Typically the values of the parameters are first read using *note gsl_monte_vegas_params_get(): 952, the necessary changes are made to the fields of the ‘params’ structure, and the values are copied back into the integrator state using *note gsl_monte_vegas_params_set(): 953. The functions use the *note gsl_monte_vegas_params: 954. structure which contains the following fields: -- Type: gsl_monte_vegas_params -- Variable: double alpha The parameter *note alpha: 955. controls the stiffness of the rebinning algorithm. It is typically set between one and two. A value of zero prevents rebinning of the grid. The default value is 1.5. -- Variable: size_t iterations The number of iterations to perform for each call to the routine. The default value is 5 iterations. -- Variable: int stage Setting this determines the `stage' of the calculation. Normally, ‘stage = 0’ which begins with a new uniform grid and empty weighted average. Calling VEGAS with ‘stage = 1’ retains the grid from the previous run but discards the weighted average, so that one can “tune” the grid using a relatively small number of points and then do a large run with ‘stage = 1’ on the optimized grid. Setting ‘stage = 2’ keeps the grid and the weighted average from the previous run, but may increase (or decrease) the number of histogram bins in the grid depending on the number of calls available. Choosing ‘stage = 3’ enters at the main loop, so that nothing is changed, and is equivalent to performing additional iterations in a previous call. -- Variable: int mode The possible choices are ‘GSL_VEGAS_MODE_IMPORTANCE’, ‘GSL_VEGAS_MODE_STRATIFIED’, ‘GSL_VEGAS_MODE_IMPORTANCE_ONLY’. This determines whether VEGAS will use importance sampling or stratified sampling, or whether it can pick on its own. In low dimensions VEGAS uses strict stratified sampling (more precisely, stratified sampling is chosen if there are fewer than 2 bins per box). -- Variable: int verbose -- Variable: FILE *ostream These parameters set the level of information printed by VEGAS. All information is written to the stream *note ostream: 95a. The default setting of *note verbose: 959. is ‘-1’, which turns off all output. A *note verbose: 959. value of ‘0’ prints summary information about the weighted average and final result, while a value of ‘1’ also displays the grid coordinates. A value of ‘2’ prints information from the rebinning procedure for each iteration. The above fields and the ‘chisq’ value can also be accessed directly in the *note gsl_monte_vegas_state: 94b. but such use is deprecated.  File: gsl-ref.info, Node: Examples<20>, Next: References and Further Reading<20>, Prev: VEGAS, Up: Monte Carlo Integration 27.5 Examples ============= The example program below uses the Monte Carlo routines to estimate the value of the following 3-dimensional integral from the theory of random walks, I = \int_{-pi}^{+pi} {dk_x/(2 pi)} \int_{-pi}^{+pi} {dk_y/(2 pi)} \int_{-pi}^{+pi} {dk_z/(2 pi)} 1 / (1 - cos(k_x)cos(k_y)cos(k_z)). The analytic value of this integral can be shown to be I = \Gamma(1/4)^4/(4 \pi^3) = 1.393203929685676859.... 
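As a rough numerical check of this constant, keeping six significant figures,

     \Gamma(1/4) \approx 3.62561,   \Gamma(1/4)^4 \approx 172.792,
     4 \pi^3 \approx 124.025,   I \approx 172.792 / 124.025 \approx 1.39320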
The integral gives the mean time spent at the origin by a random walk on a body-centered cubic lattice in three dimensions. For simplicity we will compute the integral over the region (0,0,0) to (\pi,\pi,\pi) and multiply by 8 to obtain the full result. The integral is slowly varying in the middle of the region but has integrable singularities at the corners (0,0,0), (0,\pi,\pi), (\pi,0,\pi) and (\pi,\pi,0). The Monte Carlo routines only select points which are strictly within the integration region and so no special measures are needed to avoid these singularities. #include #include #include #include #include #include /* Computation of the integral, I = int (dx dy dz)/(2pi)^3 1/(1-cos(x)cos(y)cos(z)) over (-pi,-pi,-pi) to (+pi, +pi, +pi). The exact answer is Gamma(1/4)^4/(4 pi^3). This example is taken from C.Itzykson, J.M.Drouffe, "Statistical Field Theory - Volume 1", Section 1.1, p21, which cites the original paper M.L.Glasser, I.J.Zucker, Proc.Natl.Acad.Sci.USA 74 1800 (1977) */ /* For simplicity we compute the integral over the region (0,0,0) -> (pi,pi,pi) and multiply by 8 */ double exact = 1.3932039296856768591842462603255; double g (double *k, size_t dim, void *params) { (void)(dim); /* avoid unused parameter warnings */ (void)(params); double A = 1.0 / (M_PI * M_PI * M_PI); return A / (1.0 - cos (k[0]) * cos (k[1]) * cos (k[2])); } void display_results (char *title, double result, double error) { printf ("%s ==================\n", title); printf ("result = % .6f\n", result); printf ("sigma = % .6f\n", error); printf ("exact = % .6f\n", exact); printf ("error = % .6f = %.2g sigma\n", result - exact, fabs (result - exact) / error); } int main (void) { double res, err; double xl[3] = { 0, 0, 0 }; double xu[3] = { M_PI, M_PI, M_PI }; const gsl_rng_type *T; gsl_rng *r; gsl_monte_function G = { &g, 3, 0 }; size_t calls = 500000; gsl_rng_env_setup (); T = gsl_rng_default; r = gsl_rng_alloc (T); { gsl_monte_plain_state *s = gsl_monte_plain_alloc (3); gsl_monte_plain_integrate (&G, xl, xu, 3, calls, r, s, &res, &err); gsl_monte_plain_free (s); display_results ("plain", res, err); } { gsl_monte_miser_state *s = gsl_monte_miser_alloc (3); gsl_monte_miser_integrate (&G, xl, xu, 3, calls, r, s, &res, &err); gsl_monte_miser_free (s); display_results ("miser", res, err); } { gsl_monte_vegas_state *s = gsl_monte_vegas_alloc (3); gsl_monte_vegas_integrate (&G, xl, xu, 3, 10000, r, s, &res, &err); display_results ("vegas warm-up", res, err); printf ("converging...\n"); do { gsl_monte_vegas_integrate (&G, xl, xu, 3, calls/5, r, s, &res, &err); printf ("result = % .6f sigma = % .6f " "chisq/dof = %.1f\n", res, err, gsl_monte_vegas_chisq (s)); } while (fabs (gsl_monte_vegas_chisq (s) - 1.0) > 0.5); display_results ("vegas final", res, err); gsl_monte_vegas_free (s); } gsl_rng_free (r); return 0; } With 500,000 function calls the plain Monte Carlo algorithm achieves a fractional error of 1%. The estimated error ‘sigma’ is roughly consistent with the actual error–the computed result differs from the true result by about 1.4 standard deviations: plain ================== result = 1.412209 sigma = 0.013436 exact = 1.393204 error = 0.019005 = 1.4 sigma The MISER algorithm reduces the error by a factor of four, and also correctly estimates the error: miser ================== result = 1.391322 sigma = 0.003461 exact = 1.393204 error = -0.001882 = 0.54 sigma In the case of the VEGAS algorithm the program uses an initial warm-up run of 10,000 function calls to prepare, or “warm up”, the grid. 
This is followed by a main run with five iterations of 100,000 function calls. The chi-squared per degree of freedom for the five iterations are checked for consistency with 1, and the run is repeated if the results have not converged. In this case the estimates are consistent on the first pass: vegas warm-up ================== result = 1.392673 sigma = 0.003410 exact = 1.393204 error = -0.000531 = 0.16 sigma converging... result = 1.393281 sigma = 0.000362 chisq/dof = 1.5 vegas final ================== result = 1.393281 sigma = 0.000362 exact = 1.393204 error = 0.000077 = 0.21 sigma If the value of ‘chisq’ had differed significantly from 1 it would indicate inconsistent results, with a correspondingly underestimated error. The final estimate from VEGAS (using a similar number of function calls) is significantly more accurate than the other two algorithms.  File: gsl-ref.info, Node: References and Further Reading<20>, Prev: Examples<20>, Up: Monte Carlo Integration 27.6 References and Further Reading =================================== The MISER algorithm is described in the following article by Press and Farrar, * W.H. Press, G.R. Farrar, `Recursive Stratified Sampling for Multidimensional Monte Carlo Integration', Computers in Physics, v4 (1990), pp190–195. The VEGAS algorithm is described in the following papers, * G.P. Lepage, `A New Algorithm for Adaptive Multidimensional Integration', Journal of Computational Physics 27, 192–203, (1978) * G.P. Lepage, `VEGAS: An Adaptive Multi-dimensional Integration Program', Cornell preprint CLNS 80-447, March 1980  File: gsl-ref.info, Node: Simulated Annealing, Next: Ordinary Differential Equations, Prev: Monte Carlo Integration, Up: Top 28 Simulated Annealing ********************** Stochastic search techniques are used when the structure of a space is not well understood or is not smooth, so that techniques like Newton’s method (which requires calculating Jacobian derivative matrices) cannot be used. In particular, these techniques are frequently used to solve combinatorial optimization problems, such as the traveling salesman problem. The goal is to find a point in the space at which a real valued `energy function' (or `cost function') is minimized. Simulated annealing is a minimization technique which has given good results in avoiding local minima; it is based on the idea of taking a random walk through the space at successively lower temperatures, where the probability of taking a step is given by a Boltzmann distribution. The functions described in this chapter are declared in the header file ‘gsl_siman.h’. * Menu: * Simulated Annealing algorithm:: * Simulated Annealing functions:: * Examples: Examples<21>. * References and Further Reading: References and Further Reading<21>.  File: gsl-ref.info, Node: Simulated Annealing algorithm, Next: Simulated Annealing functions, Up: Simulated Annealing 28.1 Simulated Annealing algorithm ================================== The simulated annealing algorithm takes random walks through the problem space, looking for points with low energies; in these random walks, the probability of taking a step is determined by the Boltzmann distribution, p = e^{-(E_{i+1} - E_i)/(kT)} if E_{i+1} > E_i, and p = 1 when E_{i+1} \le E_i. In other words, a step will occur if the new energy is lower. If the new energy is higher, the transition can still occur, and its likelihood is proportional to the temperature T and inversely proportional to the energy difference E_{i+1} - E_i. 
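The acceptance rule above can be made concrete with a small helper; this is only a sketch of the formula, not part of the library interface, and the names ‘accept_step’, ‘E_old’, ‘E_new’ and ‘u’ are illustrative (‘u’ is a uniform random number in [0,1), for example from ‘gsl_rng_uniform()’):

     #include <math.h>

     /* Accept downhill steps always; accept uphill steps with
        probability p = exp(-(E_new - E_old)/(k*T)). */
     static int
     accept_step (double E_old, double E_new, double k, double T, double u)
     {
       if (E_new <= E_old)
         return 1;        /* p = 1 when the energy does not increase */

       return u < exp (-(E_new - E_old) / (k * T));
     }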
The temperature T is initially set to a high value, and a random walk is carried out at that temperature. Then the temperature is lowered very slightly according to a `cooling schedule', for example: T \rightarrow T/\mu_T where \mu_T is slightly greater than 1. The slight probability of taking a step that gives higher energy is what allows simulated annealing to frequently get out of local minima.  File: gsl-ref.info, Node: Simulated Annealing functions, Next: Examples<21>, Prev: Simulated Annealing algorithm, Up: Simulated Annealing 28.2 Simulated Annealing functions ================================== -- Function: void gsl_siman_solve (const gsl_rng *r, void *x0_p, gsl_siman_Efunc_t Ef, gsl_siman_step_t take_step, gsl_siman_metric_t distance, gsl_siman_print_t print_position, gsl_siman_copy_t copyfunc, gsl_siman_copy_construct_t copy_constructor, gsl_siman_destroy_t destructor, size_t element_size, gsl_siman_params_t params) This function performs a simulated annealing search through a given space. The space is specified by providing the functions *note Ef: 961. and *note distance: 961. The simulated annealing steps are generated using the random number generator *note r: 961. and the function *note take_step: 961. The starting configuration of the system should be given by *note x0_p: 961. The routine offers two modes for updating configurations, a fixed-size mode and a variable-size mode. In the fixed-size mode the configuration is stored as a single block of memory of size *note element_size: 961. Copies of this configuration are created, copied and destroyed internally using the standard library functions ‘malloc()’, ‘memcpy()’ and ‘free()’. The function pointers *note copyfunc: 961, *note copy_constructor: 961. and *note destructor: 961. should be null pointers in fixed-size mode. In the variable-size mode the functions *note copyfunc: 961, *note copy_constructor: 961. and *note destructor: 961. are used to create, copy and destroy configurations internally. The variable *note element_size: 961. should be zero in the variable-size mode. The *note params: 961. structure (described below) controls the run by providing the temperature schedule and other tunable parameters to the algorithm. On exit the best result achieved during the search is placed in *note x0_p: 961. If the annealing process has been successful this should be a good approximation to the optimal point in the space. If the function pointer *note print_position: 961. is not null, a debugging log will be printed to ‘stdout’ with the following columns: #-iter #-evals temperature position energy best_energy and the output of the function *note print_position: 961. itself. If *note print_position: 961. is null then no information is printed. The simulated annealing routines require several user-specified functions to define the configuration space and energy function. The prototypes for these functions are given below. 
-- Type: gsl_siman_Efunc_t This function type should return the energy of a configuration ‘xp’: double (*gsl_siman_Efunc_t) (void *xp) -- Type: gsl_siman_step_t This function type should modify the configuration ‘xp’ using a random step taken from the generator ‘r’, up to a maximum distance of ‘step_size’: void (*gsl_siman_step_t) (const gsl_rng *r, void *xp, double step_size) -- Type: gsl_siman_metric_t This function type should return the distance between two configurations ‘xp’ and ‘yp’: double (*gsl_siman_metric_t) (void *xp, void *yp) -- Type: gsl_siman_print_t This function type should print the contents of the configuration ‘xp’: void (*gsl_siman_print_t) (void *xp) -- Type: gsl_siman_copy_t This function type should copy the configuration ‘source’ into ‘dest’: void (*gsl_siman_copy_t) (void *source, void *dest) -- Type: gsl_siman_copy_construct_t This function type should create a new copy of the configuration ‘xp’: void * (*gsl_siman_copy_construct_t) (void *xp) -- Type: gsl_siman_destroy_t This function type should destroy the configuration ‘xp’, freeing its memory: void (*gsl_siman_destroy_t) (void *xp) -- Type: gsl_siman_params_t These are the parameters that control a run of *note gsl_siman_solve(): 961. This structure contains all the information needed to control the search, beyond the energy function, the step function and the initial guess. ‘int n_tries’ The number of points to try for each step. ‘int iters_fixed_T’ The number of iterations at each temperature. ‘double step_size’ The maximum step size in the random walk. ‘double k, t_initial, mu_t, t_min’ The parameters of the Boltzmann distribution and cooling schedule.  File: gsl-ref.info, Node: Examples<21>, Next: References and Further Reading<21>, Prev: Simulated Annealing functions, Up: Simulated Annealing 28.3 Examples ============= The simulated annealing package is clumsy, and it has to be because it is written in C, for C callers, and tries to be polymorphic at the same time. But here we provide some examples which can be pasted into your application with little change and should make things easier. * Menu: * Trivial example:: * Traveling Salesman Problem::  File: gsl-ref.info, Node: Trivial example, Next: Traveling Salesman Problem, Up: Examples<21> 28.3.1 Trivial example ---------------------- The first example, in one dimensional Cartesian space, sets up an energy function which is a damped sine wave; this has many local minima, but only one global minimum, somewhere between 1.0 and 1.5. The initial guess given is 15.5, which is several local minima away from the global minimum. #include #include #include #include /* set up parameters for this simulated annealing run */ /* how many points do we try before stepping */ #define N_TRIES 200 /* how many iterations for each T? 
*/ #define ITERS_FIXED_T 1000 /* max step size in random walk */ #define STEP_SIZE 1.0 /* Boltzmann constant */ #define K 1.0 /* initial temperature */ #define T_INITIAL 0.008 /* damping factor for temperature */ #define MU_T 1.003 #define T_MIN 2.0e-6 gsl_siman_params_t params = {N_TRIES, ITERS_FIXED_T, STEP_SIZE, K, T_INITIAL, MU_T, T_MIN}; /* now some functions to test in one dimension */ double E1(void *xp) { double x = * ((double *) xp); return exp(-pow((x-1.0),2.0))*sin(8*x); } double M1(void *xp, void *yp) { double x = *((double *) xp); double y = *((double *) yp); return fabs(x - y); } void S1(const gsl_rng * r, void *xp, double step_size) { double old_x = *((double *) xp); double new_x; double u = gsl_rng_uniform(r); new_x = u * 2 * step_size - step_size + old_x; memcpy(xp, &new_x, sizeof(new_x)); } void P1(void *xp) { printf ("%12g", *((double *) xp)); } int main(void) { const gsl_rng_type * T; gsl_rng * r; double x_initial = 15.5; gsl_rng_env_setup(); T = gsl_rng_default; r = gsl_rng_alloc(T); gsl_siman_solve(r, &x_initial, E1, S1, M1, P1, NULL, NULL, NULL, sizeof(double), params); gsl_rng_free (r); return 0; } Fig. %s is generated by running ‘siman_test’ in the following way: $ ./siman_test | awk '!/^#/ {print $1, $4}' | graph -y 1.34 1.4 -W0 -X generation -Y position | plot -Tps > siman-test.eps Fig. %s is generated by running ‘siman_test’ in the following way: $ ./siman_test | awk '!/^#/ {print $1, $5}' | graph -y -0.88 -0.83 -W0 -X generation -Y energy | plot -Tps > siman-energy.eps [gsl-ref-figures/siman-test] Figure: Example of a simulated annealing run: at higher temperatures (early in the plot) you see that the solution can fluctuate, but at lower temperatures it converges. [gsl-ref-figures/siman-energy] Figure: Simulated annealing energy vs generation  File: gsl-ref.info, Node: Traveling Salesman Problem, Prev: Trivial example, Up: Examples<21> 28.3.2 Traveling Salesman Problem --------------------------------- The TSP (`Traveling Salesman Problem') is the classic combinatorial optimization problem. I have provided a very simple version of it, based on the coordinates of twelve cities in the southwestern United States. This should maybe be called the `Flying Salesman Problem', since I am using the great-circle distance between cities, rather than the driving distance. Also: I assume the earth is a sphere, so I don’t use geoid distances. The *note gsl_siman_solve(): 961. routine finds a route which is 3490.62 Kilometers long; this is confirmed by an exhaustive search of all possible routes with the same initial city. The full code is given below. /* siman/siman_tsp.c * * Copyright (C) 1996, 1997, 1998, 1999, 2000 Mark Galassi * * This program is free software; you can redistribute it and/or modify * it under the terms of the GNU General Public License as published by * the Free Software Foundation; either version 3 of the License, or (at * your option) any later version. * * This program is distributed in the hope that it will be useful, but * WITHOUT ANY WARRANTY; without even the implied warranty of * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU * General Public License for more details. * * You should have received a copy of the GNU General Public License * along with this program; if not, write to the Free Software * Foundation, Inc., 51 Franklin Street, Fifth Floor, Boston, MA 02110-1301, USA. 
*/ #include #include #include #include #include #include #include #include /* set up parameters for this simulated annealing run */ #define N_TRIES 200 /* how many points do we try before stepping */ #define ITERS_FIXED_T 2000 /* how many iterations for each T? */ #define STEP_SIZE 1.0 /* max step size in random walk */ #define K 1.0 /* Boltzmann constant */ #define T_INITIAL 5000.0 /* initial temperature */ #define MU_T 1.002 /* damping factor for temperature */ #define T_MIN 5.0e-1 gsl_siman_params_t params = {N_TRIES, ITERS_FIXED_T, STEP_SIZE, K, T_INITIAL, MU_T, T_MIN}; struct s_tsp_city { const char * name; double lat, longitude; /* coordinates */ }; typedef struct s_tsp_city Stsp_city; void prepare_distance_matrix(void); void exhaustive_search(void); void print_distance_matrix(void); double city_distance(Stsp_city c1, Stsp_city c2); double Etsp(void *xp); double Mtsp(void *xp, void *yp); void Stsp(const gsl_rng * r, void *xp, double step_size); void Ptsp(void *xp); /* in this table, latitude and longitude are obtained from the US Census Bureau, at http://www.census.gov/cgi-bin/gazetteer */ Stsp_city cities[] = {{"Santa Fe", 35.68, 105.95}, {"Phoenix", 33.54, 112.07}, {"Albuquerque", 35.12, 106.62}, {"Clovis", 34.41, 103.20}, {"Durango", 37.29, 107.87}, {"Dallas", 32.79, 96.77}, {"Tesuque", 35.77, 105.92}, {"Grants", 35.15, 107.84}, {"Los Alamos", 35.89, 106.28}, {"Las Cruces", 32.34, 106.76}, {"Cortez", 37.35, 108.58}, {"Gallup", 35.52, 108.74}}; #define N_CITIES (sizeof(cities)/sizeof(Stsp_city)) double distance_matrix[N_CITIES][N_CITIES]; /* distance between two cities */ double city_distance(Stsp_city c1, Stsp_city c2) { const double earth_radius = 6375.000; /* 6000KM approximately */ /* sin and cos of lat and long; must convert to radians */ double sla1 = sin(c1.lat*M_PI/180), cla1 = cos(c1.lat*M_PI/180), slo1 = sin(c1.longitude*M_PI/180), clo1 = cos(c1.longitude*M_PI/180); double sla2 = sin(c2.lat*M_PI/180), cla2 = cos(c2.lat*M_PI/180), slo2 = sin(c2.longitude*M_PI/180), clo2 = cos(c2.longitude*M_PI/180); double x1 = cla1*clo1; double x2 = cla2*clo2; double y1 = cla1*slo1; double y2 = cla2*slo2; double z1 = sla1; double z2 = sla2; double dot_product = x1*x2 + y1*y2 + z1*z2; double angle = acos(dot_product); /* distance is the angle (in radians) times the earth radius */ return angle*earth_radius; } /* energy for the travelling salesman problem */ double Etsp(void *xp) { /* an array of N_CITIES integers describing the order */ int *route = (int *) xp; double E = 0; unsigned int i; for (i = 0; i < N_CITIES; ++i) { /* use the distance_matrix to optimize this calculation; it had better be allocated!! */ E += distance_matrix[route[i]][route[(i + 1) % N_CITIES]]; } return E; } double Mtsp(void *xp, void *yp) { int *route1 = (int *) xp, *route2 = (int *) yp; double distance = 0; unsigned int i; for (i = 0; i < N_CITIES; ++i) { distance += ((route1[i] == route2[i]) ? 
0 : 1); } return distance; } /* take a step through the TSP space */ void Stsp(const gsl_rng * r, void *xp, double step_size) { int x1, x2, dummy; int *route = (int *) xp; step_size = 0 ; /* prevent warnings about unused parameter */ /* pick the two cities to swap in the matrix; we leave the first city fixed */ x1 = (gsl_rng_get (r) % (N_CITIES-1)) + 1; do { x2 = (gsl_rng_get (r) % (N_CITIES-1)) + 1; } while (x2 == x1); dummy = route[x1]; route[x1] = route[x2]; route[x2] = dummy; } void Ptsp(void *xp) { unsigned int i; int *route = (int *) xp; printf(" ["); for (i = 0; i < N_CITIES; ++i) { printf(" %d ", route[i]); } printf("] "); } int main(void) { int x_initial[N_CITIES]; unsigned int i; const gsl_rng * r = gsl_rng_alloc (gsl_rng_env_setup()) ; gsl_ieee_env_setup (); prepare_distance_matrix(); /* set up a trivial initial route */ printf("# initial order of cities:\n"); for (i = 0; i < N_CITIES; ++i) { printf("# \"%s\"\n", cities[i].name); x_initial[i] = i; } printf("# distance matrix is:\n"); print_distance_matrix(); printf("# initial coordinates of cities (longitude and latitude)\n"); /* this can be plotted with */ /* ./siman_tsp > hhh ; grep city_coord hhh | awk '{print $2 " " $3}' | xyplot -ps -d "xy" > c.eps */ for (i = 0; i < N_CITIES+1; ++i) { printf("###initial_city_coord: %g %g \"%s\"\n", -cities[x_initial[i % N_CITIES]].longitude, cities[x_initial[i % N_CITIES]].lat, cities[x_initial[i % N_CITIES]].name); } /* exhaustive_search(); */ gsl_siman_solve(r, x_initial, Etsp, Stsp, Mtsp, Ptsp, NULL, NULL, NULL, N_CITIES*sizeof(int), params); printf("# final order of cities:\n"); for (i = 0; i < N_CITIES; ++i) { printf("# \"%s\"\n", cities[x_initial[i]].name); } printf("# final coordinates of cities (longitude and latitude)\n"); /* this can be plotted with */ /* ./siman_tsp > hhh ; grep city_coord hhh | awk '{print $2 " " $3}' | xyplot -ps -d "xy" > c.eps */ for (i = 0; i < N_CITIES+1; ++i) { printf("###final_city_coord: %g %g %s\n", -cities[x_initial[i % N_CITIES]].longitude, cities[x_initial[i % N_CITIES]].lat, cities[x_initial[i % N_CITIES]].name); } printf("# "); fflush(stdout); #if 0 system("date"); #endif /* 0 */ fflush(stdout); return 0; } void prepare_distance_matrix() { unsigned int i, j; double dist; for (i = 0; i < N_CITIES; ++i) { for (j = 0; j < N_CITIES; ++j) { if (i == j) { dist = 0; } else { dist = city_distance(cities[i], cities[j]); } distance_matrix[i][j] = dist; } } } void print_distance_matrix() { unsigned int i, j; for (i = 0; i < N_CITIES; ++i) { printf("# "); for (j = 0; j < N_CITIES; ++j) { printf("%15.8f ", distance_matrix[i][j]); } printf("\n"); } } /* [only works for 12] search the entire space for solutions */ static double best_E = 1.0e100, second_E = 1.0e100, third_E = 1.0e100; static int best_route[N_CITIES]; static int second_route[N_CITIES]; static int third_route[N_CITIES]; static void do_all_perms(int *route, int n); void exhaustive_search() { static int initial_route[N_CITIES] = {0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11}; printf("\n# "); fflush(stdout); #if 0 system("date"); #endif fflush(stdout); do_all_perms(initial_route, 1); printf("\n# "); fflush(stdout); #if 0 system("date"); #endif /* 0 */ fflush(stdout); printf("# exhaustive best route: "); Ptsp(best_route); printf("\n# its energy is: %g\n", best_E); printf("# exhaustive second_best route: "); Ptsp(second_route); printf("\n# its energy is: %g\n", second_E); printf("# exhaustive third_best route: "); Ptsp(third_route); printf("\n# its energy is: %g\n", third_E); } /* James Theiler's recursive 
algorithm for generating all routes */ static void do_all_perms(int *route, int n) { if (n == (N_CITIES-1)) { /* do it! calculate the energy/cost for that route */ double E; E = Etsp(route); /* TSP energy function */ /* now save the best 3 energies and routes */ if (E < best_E) { third_E = second_E; memcpy(third_route, second_route, N_CITIES*sizeof(*route)); second_E = best_E; memcpy(second_route, best_route, N_CITIES*sizeof(*route)); best_E = E; memcpy(best_route, route, N_CITIES*sizeof(*route)); } else if (E < second_E) { third_E = second_E; memcpy(third_route, second_route, N_CITIES*sizeof(*route)); second_E = E; memcpy(second_route, route, N_CITIES*sizeof(*route)); } else if (E < third_E) { third_E = E; memcpy(route, third_route, N_CITIES*sizeof(*route)); } } else { int new_route[N_CITIES]; unsigned int j; int swap_tmp; memcpy(new_route, route, N_CITIES*sizeof(*route)); for (j = n; j < N_CITIES; ++j) { swap_tmp = new_route[j]; new_route[j] = new_route[n]; new_route[n] = swap_tmp; do_all_perms(new_route, n+1); } } } Below are some plots generated in the following way: $ ./siman_tsp > tsp.output $ grep -v "^#" tsp.output | awk '{print $1, $NF}' | graph -y 3300 6500 -W0 -X generation -Y distance -L "TSP - 12 southwest cities" | plot -Tps > 12-cities.eps $ grep initial_city_coord tsp.output | awk '{print $2, $3}' | graph -X "longitude (- means west)" -Y "latitude" -L "TSP - initial-order" -f 0.03 -S 1 0.1 | plot -Tps > initial-route.eps $ grep final_city_coord tsp.output | awk '{print $2, $3}' | graph -X "longitude (- means west)" -Y "latitude" -L "TSP - final-order" -f 0.03 -S 1 0.1 | plot -Tps > final-route.eps This is the output showing the initial order of the cities; longitude is negative, since it is west and I want the plot to look like a map: # initial coordinates of cities (longitude and latitude) ###initial_city_coord: -105.95 35.68 Santa Fe ###initial_city_coord: -112.07 33.54 Phoenix ###initial_city_coord: -106.62 35.12 Albuquerque ###initial_city_coord: -103.2 34.41 Clovis ###initial_city_coord: -107.87 37.29 Durango ###initial_city_coord: -96.77 32.79 Dallas ###initial_city_coord: -105.92 35.77 Tesuque ###initial_city_coord: -107.84 35.15 Grants ###initial_city_coord: -106.28 35.89 Los Alamos ###initial_city_coord: -106.76 32.34 Las Cruces ###initial_city_coord: -108.58 37.35 Cortez ###initial_city_coord: -108.74 35.52 Gallup ###initial_city_coord: -105.95 35.68 Santa Fe The optimal route turns out to be: # final coordinates of cities (longitude and latitude) ###final_city_coord: -105.95 35.68 Santa Fe ###final_city_coord: -103.2 34.41 Clovis ###final_city_coord: -96.77 32.79 Dallas ###final_city_coord: -106.76 32.34 Las Cruces ###final_city_coord: -112.07 33.54 Phoenix ###final_city_coord: -108.74 35.52 Gallup ###final_city_coord: -108.58 37.35 Cortez ###final_city_coord: -107.87 37.29 Durango ###final_city_coord: -107.84 35.15 Grants ###final_city_coord: -106.62 35.12 Albuquerque ###final_city_coord: -106.28 35.89 Los Alamos ###final_city_coord: -105.92 35.77 Tesuque ###final_city_coord: -105.95 35.68 Santa Fe [gsl-ref-figures/siman-initial-route] Figure: Initial route for the 12 southwestern cities Flying Salesman Problem. [gsl-ref-figures/siman-final-route] Figure: Final (optimal) route for the 12 southwestern cities Flying Salesman Problem. 
Here’s a plot of the cost function (energy) versus generation (point in the calculation at which a new temperature is set) for this problem: [gsl-ref-figures/siman-12-cities] Figure: Example of a simulated annealing run for the 12 southwestern cities Flying Salesman Problem.  File: gsl-ref.info, Node: References and Further Reading<21>, Prev: Examples<21>, Up: Simulated Annealing 28.4 References and Further Reading =================================== Further information is available in the following book, * `Modern Heuristic Techniques for Combinatorial Problems', Colin R. Reeves (ed.), McGraw-Hill, 1995 (ISBN 0-07-709239-2).  File: gsl-ref.info, Node: Ordinary Differential Equations, Next: Interpolation, Prev: Simulated Annealing, Up: Top 29 Ordinary Differential Equations ********************************** This chapter describes functions for solving ordinary differential equation (ODE) initial value problems. The library provides a variety of low-level methods, such as Runge-Kutta and Bulirsch-Stoer routines, and higher-level components for adaptive step-size control. The components can be combined by the user to achieve the desired solution, with full access to any intermediate steps. A driver object can be used as a high level wrapper for easy use of low level functions. These functions are declared in the header file ‘gsl_odeiv2.h’. This is a new interface in version 1.15 and uses the prefix ‘gsl_odeiv2’ for all functions. It is recommended over the previous ‘gsl_odeiv’ implementation defined in ‘gsl_odeiv.h’ The old interface has been retained under the original name for backwards compatibility. * Menu: * Defining the ODE System:: * Stepping Functions:: * Adaptive Step-size Control:: * Evolution:: * Driver:: * Examples: Examples<22>. * References and Further Reading: References and Further Reading<22>.  File: gsl-ref.info, Node: Defining the ODE System, Next: Stepping Functions, Up: Ordinary Differential Equations 29.1 Defining the ODE System ============================ The routines solve the general n-dimensional first-order system, dy_i(t)/dt = f_i(t, y_1(t), ..., y_n(t)) for i = 1, \dots, n. The stepping functions rely on the vector of derivatives f_i and the Jacobian matrix, J_{ij} = df_i(t,y(t)) / dy_j A system of equations is defined using the *note gsl_odeiv2_system: 973. datatype. -- Type: gsl_odeiv2_system This data type defines a general ODE system with arbitrary parameters. ‘int (* function) (double t, const double y[], double dydt[], void * params)’ This function should store the vector elements f_i(t,y,params) in the array ‘dydt’, for arguments (‘t’, ‘y’) and parameters ‘params’. The function should return ‘GSL_SUCCESS’ if the calculation was completed successfully. Any other return value indicates an error. A special return value ‘GSL_EBADFUNC’ causes ‘gsl_odeiv2’ routines to immediately stop and return. If ‘function’ is modified (for example contents of ‘params’), the user must call an appropriate reset function (*note gsl_odeiv2_driver_reset(): 974, *note gsl_odeiv2_evolve_reset(): 975. or *note gsl_odeiv2_step_reset(): 976.) before continuing. Use return values distinct from standard GSL error codes to distinguish your function as the source of the error. 
‘int (* jacobian) (double t, const double y[], double * dfdy, double dfdt[], void * params)’ This function should store the vector of derivative elements df_i(t,y,params)/dt in the array ‘dfdt’ and the Jacobian matrix J_{ij} in the array ‘dfdy’, regarded as a row-ordered matrix ‘J(i,j) = dfdy[i * dimension + j]’ where ‘dimension’ is the dimension of the system. Not all of the stepper algorithms of ‘gsl_odeiv2’ make use of the Jacobian matrix, so it may not be necessary to provide this function (the ‘jacobian’ element of the struct can be replaced by a null pointer for those algorithms). The function should return ‘GSL_SUCCESS’ if the calculation was completed successfully. Any other return value indicates an error. A special return value ‘GSL_EBADFUNC’ causes ‘gsl_odeiv2’ routines to immediately stop and return. If ‘jacobian’ is modified (for example contents of ‘params’), the user must call an appropriate reset function (*note gsl_odeiv2_driver_reset(): 974, *note gsl_odeiv2_evolve_reset(): 975. or *note gsl_odeiv2_step_reset(): 976.) before continuing. Use return values distinct from standard GSL error codes to distinguish your function as the source of the error. ‘size_t dimension’ This is the dimension of the system of equations. ‘void * params’ This is a pointer to the arbitrary parameters of the system.  File: gsl-ref.info, Node: Stepping Functions, Next: Adaptive Step-size Control, Prev: Defining the ODE System, Up: Ordinary Differential Equations 29.2 Stepping Functions ======================= The lowest level components are the `stepping functions' which advance a solution from time t to t+h for a fixed step-size h and estimate the resulting local error. -- Type: gsl_odeiv2_step This contains internal parameters for a stepping function. -- Function: *note gsl_odeiv2_step: 978. *gsl_odeiv2_step_alloc (const gsl_odeiv2_step_type *T, size_t dim) This function returns a pointer to a newly allocated instance of a stepping function of type *note T: 979. for a system of *note dim: 979. dimensions. Please note that if you use a stepper method that requires access to a driver object, it is advisable to use a driver allocation method, which automatically allocates a stepper, too. -- Function: int gsl_odeiv2_step_reset (gsl_odeiv2_step *s) This function resets the stepping function *note s: 976. It should be used whenever the next use of *note s: 976. will not be a continuation of a previous step. -- Function: void gsl_odeiv2_step_free (gsl_odeiv2_step *s) This function frees all the memory associated with the stepping function *note s: 97a. -- Function: const char *gsl_odeiv2_step_name (const gsl_odeiv2_step *s) This function returns a pointer to the name of the stepping function. For example: printf ("step method is '%s'\n", gsl_odeiv2_step_name (s)); would print something like ‘step method is 'rkf45'’. -- Function: unsigned int gsl_odeiv2_step_order (const gsl_odeiv2_step *s) This function returns the order of the stepping function on the previous step. The order can vary if the stepping function itself is adaptive. -- Function: int gsl_odeiv2_step_set_driver (gsl_odeiv2_step *s, const gsl_odeiv2_driver *d) This function sets a pointer of the driver object *note d: 97d. for stepper *note s: 97d, to allow the stepper to access control (and evolve) object through the driver object. This is a requirement for some steppers, to get the desired error level for internal iteration of stepper. Allocation of a driver object calls this function automatically. 
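To make the system definition from the previous section concrete, here is a sketch of a simple harmonic oscillator, dy_0/dt = y_1, dy_1/dt = -omega^2 y_0, written in the form expected by ‘gsl_odeiv2_system’ (the names ‘oscillator_func’, ‘oscillator_jac’ and the parameter ‘omega’ are illustrative). Several of the implicit steppers listed below require the Jacobian; for an explicit stepper the ‘jacobian’ element could instead be a null pointer:

     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_odeiv2.h>

     /* dy0/dt = y1, dy1/dt = -omega^2 * y0 */
     int
     oscillator_func (double t, const double y[], double f[], void *params)
     {
       double omega = *(double *) params;
       (void) t;   /* the system does not depend explicitly on time */
       f[0] = y[1];
       f[1] = -omega * omega * y[0];
       return GSL_SUCCESS;
     }

     /* Jacobian J(i,j) stored row-ordered as dfdy[i * 2 + j]; df/dt = 0 */
     int
     oscillator_jac (double t, const double y[], double *dfdy,
                     double dfdt[], void *params)
     {
       double omega = *(double *) params;
       (void) t;
       (void) y;
       dfdy[0] = 0.0;             dfdy[1] = 1.0;
       dfdy[2] = -omega * omega;  dfdy[3] = 0.0;
       dfdt[0] = 0.0;
       dfdt[1] = 0.0;
       return GSL_SUCCESS;
     }

     double omega = 1.0;
     gsl_odeiv2_system sys = { oscillator_func, oscillator_jac, 2, &omega };

     /* a stepper for this system could then be allocated with, e.g.,
        gsl_odeiv2_step_alloc (gsl_odeiv2_step_rkf45, 2) */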
-- Function: int gsl_odeiv2_step_apply (gsl_odeiv2_step *s, double t, double h, double y[], double yerr[], const double dydt_in[], double dydt_out[], const gsl_odeiv2_system *sys) This function applies the stepping function *note s: 97e. to the system of equations defined by *note sys: 97e, using the step-size *note h: 97e. to advance the system from time *note t: 97e. and state *note y: 97e. to time *note t: 97e. + *note h: 97e. The new state of the system is stored in *note y: 97e. on output, with an estimate of the absolute error in each component stored in *note yerr: 97e. If the argument *note dydt_in: 97e. is not null it should point to an array containing the derivatives for the system at time *note t: 97e. on input. This is optional as the derivatives will be computed internally if they are not provided, but allows the reuse of existing derivative information. On output the new derivatives of the system at time *note t: 97e. + *note h: 97e. will be stored in *note dydt_out: 97e. if it is not null. The stepping function returns ‘GSL_FAILURE’ if it is unable to compute the requested step. Also, if the user-supplied functions defined in the system *note sys: 97e. return a status other than ‘GSL_SUCCESS’ the step will be aborted. In that case, the elements of *note y: 97e. will be restored to their pre-step values and the error code from the user-supplied function will be returned. Failure may be due to a singularity in the system or a step-size *note h: 97e. that is too large. In that case the step should be attempted again with a smaller step-size, e.g. *note h: 97e. / 2. If the driver object is not appropriately set via *note gsl_odeiv2_step_set_driver(): 97d. for those steppers that need it, the stepping function returns ‘GSL_EFAULT’. If the user-supplied functions defined in the system *note sys: 97e. return ‘GSL_EBADFUNC’, the function returns immediately with the same return code. In this case the user must call *note gsl_odeiv2_step_reset(): 976. before calling this function again. The following algorithms are available. Please note that algorithms which use step doubling for error estimation apply the more accurate values from two half steps instead of values from a single step for the new state ‘y’. -- Type: gsl_odeiv2_step_type -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_rk2 Explicit embedded Runge-Kutta (2, 3) method. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_rk4 Explicit 4th order (classical) Runge-Kutta. Error estimation is carried out by the step doubling method. For a more efficient estimate of the error, use the embedded methods described below. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_rkf45 Explicit embedded Runge-Kutta-Fehlberg (4, 5) method. This method is a good general-purpose integrator. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_rkck Explicit embedded Runge-Kutta Cash-Karp (4, 5) method. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_rk8pd Explicit embedded Runge-Kutta Prince-Dormand (8, 9) method. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_rk1imp Implicit Gaussian first order Runge-Kutta. Also known as the implicit Euler or backward Euler method. Error estimation is carried out by the step doubling method. This algorithm requires the Jacobian and access to the driver object via *note gsl_odeiv2_step_set_driver(): 97d. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_rk2imp Implicit Gaussian second order Runge-Kutta.
Also known as implicit mid-point rule. Error estimation is carried out by the step doubling method. This stepper requires the Jacobian and access to the driver object via *note gsl_odeiv2_step_set_driver(): 97d. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_rk4imp Implicit Gaussian 4th order Runge-Kutta. Error estimation is carried out by the step doubling method. This algorithm requires the Jacobian and access to the driver object via *note gsl_odeiv2_step_set_driver(): 97d. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_bsimp Implicit Bulirsch-Stoer method of Bader and Deuflhard. The method is generally suitable for stiff problems. This stepper requires the Jacobian. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_msadams A variable-coefficient linear multistep Adams method in Nordsieck form. This stepper uses explicit Adams-Bashforth (predictor) and implicit Adams-Moulton (corrector) methods in P(EC)^m functional iteration mode. Method order varies dynamically between 1 and 12. This stepper requires the access to the driver object via *note gsl_odeiv2_step_set_driver(): 97d. -- Variable: *note gsl_odeiv2_step_type: 97f. *gsl_odeiv2_step_msbdf A variable-coefficient linear multistep backward differentiation formula (BDF) method in Nordsieck form. This stepper uses the explicit BDF formula as predictor and implicit BDF formula as corrector. A modified Newton iteration method is used to solve the system of non-linear equations. Method order varies dynamically between 1 and 5. The method is generally suitable for stiff problems. This stepper requires the Jacobian and the access to the driver object via *note gsl_odeiv2_step_set_driver(): 97d.  File: gsl-ref.info, Node: Adaptive Step-size Control, Next: Evolution, Prev: Stepping Functions, Up: Ordinary Differential Equations 29.3 Adaptive Step-size Control =============================== The control function examines the proposed change to the solution produced by a stepping function and attempts to determine the optimal step-size for a user-specified level of error. -- Type: gsl_odeiv2_control This is a workspace for controlling step size. -- Type: gsl_odeiv2_control_type This specifies the control type. -- Function: *note gsl_odeiv2_control: 98c. *gsl_odeiv2_control_standard_new (double eps_abs, double eps_rel, double a_y, double a_dydt) The standard control object is a four parameter heuristic based on absolute and relative errors *note eps_abs: 98e. and *note eps_rel: 98e, and scaling factors *note a_y: 98e. and *note a_dydt: 98e. for the system state y(t) and derivatives y'(t) respectively. The step-size adjustment procedure for this method begins by computing the desired error level D_i for each component, D_i = eps_abs + eps_rel * (a_y |y_i| + a_dydt h |y\prime_i|) and comparing it with the observed error E_i = |yerr_i|. If the observed error ‘E’ exceeds the desired error level ‘D’ by more than 10% for any component then the method reduces the step-size by an appropriate factor, h_{new} = h_{old} * S * (E/D)^{-1/q} where q is the consistency order of the method (e.g. q=4 for 4(5) embedded RK), and S is a safety factor of 0.9. The ratio E/D is taken to be the maximum of the ratios E_i/D_i. 
If the observed error E is less than 50% of the desired error level ‘D’ for the maximum ratio E_i/D_i then the algorithm takes the opportunity to increase the step-size to bring the error in line with the desired level, h_{new} = h_{old} * S * (E/D)^{-1/(q+1)} This encompasses all the standard error scaling methods. To avoid uncontrolled changes in the stepsize, the overall scaling factor is limited to the range 1/5 to 5. -- Function: *note gsl_odeiv2_control: 98c. *gsl_odeiv2_control_y_new (double eps_abs, double eps_rel) This function creates a new control object which will keep the local error on each step within an absolute error of *note eps_abs: 98f. and relative error of *note eps_rel: 98f. with respect to the solution y_i(t). This is equivalent to the standard control object with ‘a_y’ = 1 and ‘a_dydt’ = 0. -- Function: *note gsl_odeiv2_control: 98c. *gsl_odeiv2_control_yp_new (double eps_abs, double eps_rel) This function creates a new control object which will keep the local error on each step within an absolute error of *note eps_abs: 990. and relative error of *note eps_rel: 990. with respect to the derivatives of the solution y'_i(t). This is equivalent to the standard control object with ‘a_y’ = 0 and ‘a_dydt’ = 1. -- Function: *note gsl_odeiv2_control: 98c. *gsl_odeiv2_control_scaled_new (double eps_abs, double eps_rel, double a_y, double a_dydt, const double scale_abs[], size_t dim) This function creates a new control object which uses the same algorithm as *note gsl_odeiv2_control_standard_new(): 98e. but with an absolute error which is scaled for each component by the array *note scale_abs: 991. The formula for D_i for this control object is, D_i = eps_abs * s_i + eps_rel * (a_y |y_i| + a_dydt h |y\prime_i|) where s_i is the i-th component of the array *note scale_abs: 991. The same error control heuristic is used by the Matlab ODE suite. -- Function: *note gsl_odeiv2_control: 98c. *gsl_odeiv2_control_alloc (const gsl_odeiv2_control_type *T) This function returns a pointer to a newly allocated instance of a control function of type *note T: 992. This function is only needed for defining new types of control functions. For most purposes the standard control functions described above should be sufficient. -- Function: int gsl_odeiv2_control_init (gsl_odeiv2_control *c, double eps_abs, double eps_rel, double a_y, double a_dydt) This function initializes the control function *note c: 993. with the parameters *note eps_abs: 993. (absolute error), *note eps_rel: 993. (relative error), *note a_y: 993. (scaling factor for y) and *note a_dydt: 993. (scaling factor for derivatives). -- Function: void gsl_odeiv2_control_free (gsl_odeiv2_control *c) This function frees all the memory associated with the control function *note c: 994. -- Function: int gsl_odeiv2_control_hadjust (gsl_odeiv2_control *c, gsl_odeiv2_step *s, const double y[], const double yerr[], const double dydt[], double *h) This function adjusts the step-size *note h: 995. using the control function *note c: 995, and the current values of *note y: 995, *note yerr: 995. and *note dydt: 995. The stepping function ‘step’ is also needed to determine the order of the method. If the error in the y-values *note yerr: 995. is found to be too large then the step-size *note h: 995. is reduced and the function returns ‘GSL_ODEIV_HADJ_DEC’. If the error is sufficiently small then *note h: 995. may be increased and ‘GSL_ODEIV_HADJ_INC’ is returned. The function returns ‘GSL_ODEIV_HADJ_NIL’ if the step-size is unchanged. 
The goal of the function is to estimate the largest step-size which satisfies the user-specified accuracy requirements for the current point. -- Function: const char *gsl_odeiv2_control_name (const gsl_odeiv2_control *c) This function returns a pointer to the name of the control function. For example: printf ("control method is '%s'\n", gsl_odeiv2_control_name (c)); would print something like ‘control method is 'standard'’ -- Function: int gsl_odeiv2_control_errlevel (gsl_odeiv2_control *c, const double y, const double dydt, const double h, const size_t ind, double *errlev) This function calculates the desired error level of the *note ind: 997.-th component to *note errlev: 997. It requires the value (*note y: 997.) and value of the derivative (*note dydt: 997.) of the component, and the current step size *note h: 997. -- Function: int gsl_odeiv2_control_set_driver (gsl_odeiv2_control *c, const gsl_odeiv2_driver *d) This function sets a pointer of the driver object *note d: 998. for control object *note c: 998.  File: gsl-ref.info, Node: Evolution, Next: Driver, Prev: Adaptive Step-size Control, Up: Ordinary Differential Equations 29.4 Evolution ============== The evolution function combines the results of a stepping function and control function to reliably advance the solution forward one step using an acceptable step-size. -- Type: gsl_odeiv2_evolve This workspace contains parameters for controlling the evolution function -- Function: *note gsl_odeiv2_evolve: 99a. *gsl_odeiv2_evolve_alloc (size_t dim) This function returns a pointer to a newly allocated instance of an evolution function for a system of *note dim: 99b. dimensions. -- Function: int gsl_odeiv2_evolve_apply (gsl_odeiv2_evolve *e, gsl_odeiv2_control *con, gsl_odeiv2_step *step, const gsl_odeiv2_system *sys, double *t, double t1, double *h, double y[]) This function advances the system (*note e: 99c, *note sys: 99c.) from time *note t: 99c. and position *note y: 99c. using the stepping function *note step: 99c. The new time and position are stored in *note t: 99c. and *note y: 99c. on output. The initial step-size is taken as *note h: 99c. The control function *note con: 99c. is applied to check whether the local error estimated by the stepping function *note step: 99c. using step-size *note h: 99c. exceeds the required error tolerance. If the error is too high, the step is retried by calling *note step: 99c. with a decreased step-size. This process is continued until an acceptable step-size is found. An estimate of the local error for the step can be obtained from the components of the array ‘e->yerr[]’. If the user-supplied functions defined in the system *note sys: 99c. returns ‘GSL_EBADFUNC’, the function returns immediately with the same return code. In this case the user must call *note gsl_odeiv2_step_reset(): 976. and *note gsl_odeiv2_evolve_reset(): 975. before calling this function again. Otherwise, if the user-supplied functions defined in the system *note sys: 99c. or the stepping function *note step: 99c. return a status other than ‘GSL_SUCCESS’, the step is retried with a decreased step-size. If the step-size decreases below machine precision, a status of ‘GSL_FAILURE’ is returned if the user functions returned ‘GSL_SUCCESS’. Otherwise the value returned by user function is returned. If no acceptable step can be made, *note t: 99c. and *note y: 99c. will be restored to their pre-step values and *note h: 99c. contains the final attempted step-size. 
If the step is successful the function returns a suggested step-size for the next step in *note h: 99c. The maximum time *note t1: 99c. is guaranteed not to be exceeded by the time-step. On the final time-step the value of *note t: 99c. will be set to *note t1: 99c. exactly. -- Function: int gsl_odeiv2_evolve_apply_fixed_step (gsl_odeiv2_evolve *e, gsl_odeiv2_control *con, gsl_odeiv2_step *step, const gsl_odeiv2_system *sys, double *t, const double h, double y[]) This function advances the ODE-system (*note e: 99d, *note sys: 99d, *note con: 99d.) from time *note t: 99d. and position *note y: 99d. using the stepping function *note step: 99d. by a specified step size *note h: 99d. If the local error estimated by the stepping function exceeds the desired error level, the step is not taken and the function returns ‘GSL_FAILURE’. Otherwise the value returned by user function is returned. -- Function: int gsl_odeiv2_evolve_reset (gsl_odeiv2_evolve *e) This function resets the evolution function *note e: 975. It should be used whenever the next use of *note e: 975. will not be a continuation of a previous step. -- Function: void gsl_odeiv2_evolve_free (gsl_odeiv2_evolve *e) This function frees all the memory associated with the evolution function *note e: 99e. -- Function: int gsl_odeiv2_evolve_set_driver (gsl_odeiv2_evolve *e, const gsl_odeiv2_driver *d) This function sets a pointer of the driver object *note d: 99f. for evolve object *note e: 99f. If a system has discontinuous changes in the derivatives at known points, it is advisable to evolve the system between each discontinuity in sequence. For example, if a step-change in an external driving force occurs at times t_a, t_b and t_c then evolution should be carried out over the ranges (t_0,t_a), (t_a,t_b), (t_b,t_c), and (t_c,t_1) separately and not directly over the range (t_0,t_1).  File: gsl-ref.info, Node: Driver, Next: Examples<22>, Prev: Evolution, Up: Ordinary Differential Equations 29.5 Driver =========== The driver object is a high level wrapper that combines the evolution, control and stepper objects for easy use. -- Function: gsl_odeiv2_driver *gsl_odeiv2_driver_alloc_y_new (const gsl_odeiv2_system *sys, const gsl_odeiv2_step_type *T, const double hstart, const double epsabs, const double epsrel) -- Function: gsl_odeiv2_driver *gsl_odeiv2_driver_alloc_yp_new (const gsl_odeiv2_system *sys, const gsl_odeiv2_step_type *T, const double hstart, const double epsabs, const double epsrel) -- Function: gsl_odeiv2_driver *gsl_odeiv2_driver_alloc_standard_new (const gsl_odeiv2_system *sys, const gsl_odeiv2_step_type *T, const double hstart, const double epsabs, const double epsrel, const double a_y, const double a_dydt) -- Function: gsl_odeiv2_driver *gsl_odeiv2_driver_alloc_scaled_new (const gsl_odeiv2_system *sys, const gsl_odeiv2_step_type *T, const double hstart, const double epsabs, const double epsrel, const double a_y, const double a_dydt, const double scale_abs[]) These functions return a pointer to a newly allocated instance of a driver object. The functions automatically allocate and initialise the evolve, control and stepper objects for ODE system *note sys: 9a4. using stepper type *note T: 9a4. The initial step size is given in *note hstart: 9a4. The rest of the arguments follow the syntax and semantics of the control functions with same name (‘gsl_odeiv2_control_*_new’). -- Function: int gsl_odeiv2_driver_set_hmin (gsl_odeiv2_driver *d, const double hmin) The function sets a minimum for allowed step size *note hmin: 9a5. 
for driver *note d: 9a5. Default value is 0. -- Function: int gsl_odeiv2_driver_set_hmax (gsl_odeiv2_driver *d, const double hmax) The function sets a maximum for allowed step size *note hmax: 9a6. for driver *note d: 9a6. Default value is ‘GSL_DBL_MAX’. -- Function: int gsl_odeiv2_driver_set_nmax (gsl_odeiv2_driver *d, const unsigned long int nmax) The function sets a maximum for allowed number of steps *note nmax: 9a7. for driver *note d: 9a7. Default value of 0 sets no limit for steps. -- Function: int gsl_odeiv2_driver_apply (gsl_odeiv2_driver *d, double *t, const double t1, double y[]) This function evolves the driver system *note d: 9a8. from *note t: 9a8. to *note t1: 9a8. Initially vector *note y: 9a8. should contain the values of dependent variables at point *note t: 9a8. If the function is unable to complete the calculation, an error code from *note gsl_odeiv2_evolve_apply(): 99c. is returned, and *note t: 9a8. and *note y: 9a8. contain the values from last successful step. If maximum number of steps is reached, a value of ‘GSL_EMAXITER’ is returned. If the step size drops below minimum value, the function returns with ‘GSL_ENOPROG’. If the user-supplied functions defined in the system ‘sys’ returns ‘GSL_EBADFUNC’, the function returns immediately with the same return code. In this case the user must call *note gsl_odeiv2_driver_reset(): 974. before calling this function again. -- Function: int gsl_odeiv2_driver_apply_fixed_step (gsl_odeiv2_driver *d, double *t, const double h, const unsigned long int n, double y[]) This function evolves the driver system *note d: 9a9. from *note t: 9a9. with *note n: 9a9. steps of size *note h: 9a9. If the function is unable to complete the calculation, an error code from *note gsl_odeiv2_evolve_apply_fixed_step(): 99d. is returned, and *note t: 9a9. and *note y: 9a9. contain the values from last successful step. -- Function: int gsl_odeiv2_driver_reset (gsl_odeiv2_driver *d) This function resets the evolution and stepper objects. -- Function: int gsl_odeiv2_driver_reset_hstart (gsl_odeiv2_driver *d, const double hstart) The routine resets the evolution and stepper objects and sets new initial step size to *note hstart: 9aa. This function can be used e.g. to change the direction of integration. -- Function: int gsl_odeiv2_driver_free (gsl_odeiv2_driver *d) This function frees the driver object, and the related evolution, stepper and control objects.  File: gsl-ref.info, Node: Examples<22>, Next: References and Further Reading<22>, Prev: Driver, Up: Ordinary Differential Equations 29.6 Examples ============= The following program solves the second-order nonlinear Van der Pol oscillator equation, u''(t) + \mu u'(t) (u(t)^2 - 1) + u(t) = 0 This can be converted into a first order system suitable for use with the routines described in this chapter by introducing a separate variable for the velocity, v = u'(t), u' = v v' = -u + \mu v (1-u^2) The program begins by defining functions for these derivatives and their Jacobian. The main function uses driver level functions to solve the problem. The program evolves the solution from (u, v) = (1, 0) at t = 0 to t = 100. The step-size h is automatically adjusted by the controller to maintain an absolute accuracy of 10^{-6} in the function values (u, v). The loop in the example prints the solution at the points t_i = 1, 2, \dots, 100. 
#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_odeiv2.h>

int
func (double t, const double y[], double f[], void *params)
{
  (void)(t); /* avoid unused parameter warning */
  double mu = *(double *)params;
  f[0] = y[1];
  f[1] = -y[0] - mu*y[1]*(y[0]*y[0] - 1);
  return GSL_SUCCESS;
}

int
jac (double t, const double y[], double *dfdy, double dfdt[], void *params)
{
  (void)(t); /* avoid unused parameter warning */
  double mu = *(double *)params;
  gsl_matrix_view dfdy_mat = gsl_matrix_view_array (dfdy, 2, 2);
  gsl_matrix * m = &dfdy_mat.matrix;
  gsl_matrix_set (m, 0, 0, 0.0);
  gsl_matrix_set (m, 0, 1, 1.0);
  gsl_matrix_set (m, 1, 0, -2.0*mu*y[0]*y[1] - 1.0);
  gsl_matrix_set (m, 1, 1, -mu*(y[0]*y[0] - 1.0));
  dfdt[0] = 0.0;
  dfdt[1] = 0.0;
  return GSL_SUCCESS;
}

int
main (void)
{
  double mu = 10;
  gsl_odeiv2_system sys = {func, jac, 2, &mu};

  gsl_odeiv2_driver * d =
    gsl_odeiv2_driver_alloc_y_new (&sys, gsl_odeiv2_step_rk8pd,
                                   1e-6, 1e-6, 0.0);
  int i;
  double t = 0.0, t1 = 100.0;
  double y[2] = { 1.0, 0.0 };

  for (i = 1; i <= 100; i++)
    {
      double ti = i * t1 / 100.0;
      int status = gsl_odeiv2_driver_apply (d, &t, ti, y);

      if (status != GSL_SUCCESS)
        {
          printf ("error, return value=%d\n", status);
          break;
        }

      printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
    }

  gsl_odeiv2_driver_free (d);
  return 0;
}

The user can work with the lower-level functions directly, as in the following example. In this case an intermediate result is printed after each successful step instead of equidistant time points.

int
main (void)
{
  const gsl_odeiv2_step_type * T = gsl_odeiv2_step_rk8pd;

  gsl_odeiv2_step * s = gsl_odeiv2_step_alloc (T, 2);
  gsl_odeiv2_control * c = gsl_odeiv2_control_y_new (1e-6, 0.0);
  gsl_odeiv2_evolve * e = gsl_odeiv2_evolve_alloc (2);

  double mu = 10;
  gsl_odeiv2_system sys = {func, jac, 2, &mu};

  double t = 0.0, t1 = 100.0;
  double h = 1e-6;
  double y[2] = { 1.0, 0.0 };

  while (t < t1)
    {
      int status = gsl_odeiv2_evolve_apply (e, c, s, &sys, &t, t1, &h, y);

      if (status != GSL_SUCCESS)
        break;

      printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
    }

  gsl_odeiv2_evolve_free (e);
  gsl_odeiv2_control_free (c);
  gsl_odeiv2_step_free (s);
  return 0;
}

For functions with multiple parameters, the appropriate information can be passed in through the ‘params’ argument in the *note gsl_odeiv2_system: 973. definition (‘mu’ in this example) by using a pointer to a struct. [gsl-ref-figures/ode-vdp] Figure: Numerical solution of the Van der Pol oscillator equation using Prince-Dormand 8th order Runge-Kutta. It is also possible to work with a non-adaptive integrator, using only the stepping function itself, *note gsl_odeiv2_driver_apply_fixed_step(): 9a9. or *note gsl_odeiv2_evolve_apply_fixed_step(): 99d. The following program uses the driver level function, with the fourth-order Runge-Kutta stepping function and a fixed step-size of 0.001.
int
main (void)
{
  double mu = 10;
  gsl_odeiv2_system sys = { func, jac, 2, &mu };

  gsl_odeiv2_driver *d =
    gsl_odeiv2_driver_alloc_y_new (&sys, gsl_odeiv2_step_rk4,
                                   1e-3, 1e-8, 1e-8);
  double t = 0.0;
  double y[2] = { 1.0, 0.0 };
  int i, s;

  for (i = 0; i < 100; i++)
    {
      s = gsl_odeiv2_driver_apply_fixed_step (d, &t, 1e-3, 1000, y);

      if (s != GSL_SUCCESS)
        {
          printf ("error: driver returned %d\n", s);
          break;
        }

      printf ("%.5e %.5e %.5e\n", t, y[0], y[1]);
    }

  gsl_odeiv2_driver_free (d);
  return s;
}

 File: gsl-ref.info, Node: References and Further Reading<22>, Prev: Examples<22>, Up: Ordinary Differential Equations 29.7 References and Further Reading =================================== * Ascher, U.M., Petzold, L.R., `Computer Methods for Ordinary Differential and Differential-Algebraic Equations', SIAM, Philadelphia, 1998. * Hairer, E., Norsett, S. P., Wanner, G., `Solving Ordinary Differential Equations I: Nonstiff Problems', Springer, Berlin, 1993. * Hairer, E., Wanner, G., `Solving Ordinary Differential Equations II: Stiff and Differential-Algebraic Problems', Springer, Berlin, 1996. Many of the basic Runge-Kutta formulas can be found in the Handbook of Mathematical Functions, * Abramowitz & Stegun (eds.), `Handbook of Mathematical Functions', Section 25.5. The implicit Bulirsch-Stoer algorithm ‘bsimp’ is described in the following paper, * G. Bader and P. Deuflhard, “A Semi-Implicit Mid-Point Rule for Stiff Systems of Ordinary Differential Equations.”, Numer. Math. 41, 373–398, 1983. The Adams and BDF multistep methods ‘msadams’ and ‘msbdf’ are based on the following articles, * G. D. Byrne and A. C. Hindmarsh, “A Polyalgorithm for the Numerical Solution of Ordinary Differential Equations.”, ACM Trans. Math. Software, 1, 71–96, 1975. * P. N. Brown, G. D. Byrne and A. C. Hindmarsh, “VODE: A Variable-coefficient ODE Solver.”, SIAM J. Sci. Stat. Comput. 10, 1038–1051, 1989. * A. C. Hindmarsh, P. N. Brown, K. E. Grant, S. L. Lee, R. Serban, D. E. Shumaker and C. S. Woodward, “SUNDIALS: Suite of Nonlinear and Differential/Algebraic Equation Solvers.”, ACM Trans. Math. Software 31, 363–396, 2005.  File: gsl-ref.info, Node: Interpolation, Next: Numerical Differentiation, Prev: Ordinary Differential Equations, Up: Top 30 Interpolation **************** This chapter describes functions for performing interpolation. The library provides a variety of interpolation methods, including Cubic, Akima, and Steffen splines. The interpolation types are interchangeable, allowing different methods to be used without recompiling. Interpolations can be defined for both normal and periodic boundary conditions. Additional functions are available for computing derivatives and integrals of interpolating functions. Routines are provided for interpolating both one and two dimensional datasets. These interpolation methods produce curves that pass through each datapoint. To interpolate noisy data with a smoothing curve see *note Basis Splines: 9b1. The functions described in this section are declared in the header files ‘gsl_interp.h’ and ‘gsl_spline.h’.
* Menu: * Introduction to 1D Interpolation:: * 1D Interpolation Functions:: * 1D Interpolation Types:: * 1D Index Look-up and Acceleration:: * 1D Evaluation of Interpolating Functions:: * 1D Higher-level Interface:: * 1D Interpolation Example Programs:: * Introduction to 2D Interpolation:: * 2D Interpolation Functions:: * 2D Interpolation Grids:: * 2D Interpolation Types:: * 2D Evaluation of Interpolating Functions:: * 2D Higher-level Interface:: * 2D Interpolation Example programs:: * References and Further Reading: References and Further Reading<23>.  File: gsl-ref.info, Node: Introduction to 1D Interpolation, Next: 1D Interpolation Functions, Up: Interpolation 30.1 Introduction to 1D Interpolation ===================================== Given a set of data points (x_1, y_1) \dots (x_n, y_n) the routines described in this section compute a continuous interpolating function y(x) such that y(x_i) = y_i. The interpolation is piecewise smooth, and its behavior at the end-points is determined by the type of interpolation used.  File: gsl-ref.info, Node: 1D Interpolation Functions, Next: 1D Interpolation Types, Prev: Introduction to 1D Interpolation, Up: Interpolation 30.2 1D Interpolation Functions =============================== The interpolation function for a given dataset is stored in a *note gsl_interp: 9b4. object. These are created by the following functions. -- Type: gsl_interp Workspace for 1D interpolation -- Function: *note gsl_interp: 9b4. *gsl_interp_alloc (const gsl_interp_type *T, size_t size) This function returns a pointer to a newly allocated interpolation object of type *note T: 9b5. for *note size: 9b5. data-points. -- Function: int gsl_interp_init (gsl_interp *interp, const double xa[], const double ya[], size_t size) This function initializes the interpolation object *note interp: 9b6. for the data (*note xa: 9b6, *note ya: 9b6.) where *note xa: 9b6. and *note ya: 9b6. are arrays of size *note size: 9b6. The interpolation object (*note gsl_interp: 9b4.) does not save the data arrays *note xa: 9b6. and *note ya: 9b6. and only stores the static state computed from the data. The *note xa: 9b6. data array is always assumed to be strictly ordered, with increasing x values; the behavior for other arrangements is not defined. -- Function: void gsl_interp_free (gsl_interp *interp) This function frees the interpolation object *note interp: 9b7.  File: gsl-ref.info, Node: 1D Interpolation Types, Next: 1D Index Look-up and Acceleration, Prev: 1D Interpolation Functions, Up: Interpolation 30.3 1D Interpolation Types =========================== The interpolation library provides the following interpolation types: -- Type: gsl_interp_type -- Variable: *note gsl_interp_type: 9b9. *gsl_interp_linear Linear interpolation. This interpolation method does not require any additional memory. -- Variable: *note gsl_interp_type: 9b9. *gsl_interp_polynomial Polynomial interpolation. This method should only be used for interpolating small numbers of points because polynomial interpolation introduces large oscillations, even for well-behaved datasets. The number of terms in the interpolating polynomial is equal to the number of points. -- Variable: *note gsl_interp_type: 9b9. *gsl_interp_cspline Cubic spline with natural boundary conditions. The resulting curve is piecewise cubic on each interval, with matching first and second derivatives at the supplied data-points. The second derivative is chosen to be zero at the first point and last point. -- Variable: *note gsl_interp_type: 9b9. 
*gsl_interp_cspline_periodic Cubic spline with periodic boundary conditions. The resulting curve is piecewise cubic on each interval, with matching first and second derivatives at the supplied data-points. The derivatives at the first and last points are also matched. Note that the last point in the data must have the same y-value as the first point, otherwise the resulting periodic interpolation will have a discontinuity at the boundary. -- Variable: *note gsl_interp_type: 9b9. *gsl_interp_akima Non-rounded Akima spline with natural boundary conditions. This method uses the non-rounded corner algorithm of Wodicka. -- Variable: *note gsl_interp_type: 9b9. *gsl_interp_akima_periodic Non-rounded Akima spline with periodic boundary conditions. This method uses the non-rounded corner algorithm of Wodicka. -- Variable: *note gsl_interp_type: 9b9. *gsl_interp_steffen Steffen’s method guarantees the monotonicity of the interpolating function between the given data points. Therefore, minima and maxima can only occur exactly at the data points, and there can never be spurious oscillations between data points. The interpolated function is piecewise cubic in each interval. The resulting curve and its first derivative are guaranteed to be continuous, but the second derivative may be discontinuous. The following related functions are available: -- Function: const char *gsl_interp_name (const gsl_interp *interp) This function returns the name of the interpolation type used by *note interp: 9c1. For example: printf ("interp uses '%s' interpolation.\n", gsl_interp_name (interp)); would print something like: interp uses 'cspline' interpolation. -- Function: unsigned int gsl_interp_min_size (const gsl_interp *interp) -- Function: unsigned int gsl_interp_type_min_size (const gsl_interp_type *T) These functions return the minimum number of points required by the interpolation object ‘interp’ or interpolation type *note T: 9c3. For example, Akima spline interpolation requires a minimum of 5 points.  File: gsl-ref.info, Node: 1D Index Look-up and Acceleration, Next: 1D Evaluation of Interpolating Functions, Prev: 1D Interpolation Types, Up: Interpolation 30.4 1D Index Look-up and Acceleration ====================================== The state of searches can be stored in a *note gsl_interp_accel: 9c5. object, which is a kind of iterator for interpolation lookups. -- Type: gsl_interp_accel This workspace stores state variables for interpolation lookups. It caches the previous value of an index lookup. When the subsequent interpolation point falls in the same interval its index value can be returned immediately. -- Function: size_t gsl_interp_bsearch (const double x_array[], double x, size_t index_lo, size_t index_hi) This function returns the index i of the array *note x_array: 9c6. such that ‘x_array[i] <= x < x_array[i+1]’. The index is searched for in the range [*note index_lo: 9c6, *note index_hi: 9c6.]. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: *note gsl_interp_accel: 9c5. *gsl_interp_accel_alloc (void) This function returns a pointer to an accelerator object, which is a kind of iterator for interpolation lookups. It tracks the state of lookups, thus allowing for application of various acceleration strategies. When multiple interpolants are in use, the same accelerator object may be used for all datasets with the same domain (‘x_array’), but different accelerators should be used for data defined on different domains. 
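For illustration, the following short fragment is a sketch (not one of the library's shipped examples) of the bracketing performed by gsl_interp_bsearch(); the array values are arbitrary.

#include <stdio.h>
#include <gsl/gsl_interp.h>

int
main (void)
{
  /* a strictly increasing array of abscissae */
  double xa[] = { 0.0, 1.0, 2.0, 3.0, 4.0 };

  /* find i such that xa[i] <= 2.5 < xa[i+1], searching in [0, 4] */
  size_t i = gsl_interp_bsearch (xa, 2.5, 0, 4);

  /* prints "2.5 lies in interval 2: [2, 3)" */
  printf ("2.5 lies in interval %zu: [%g, %g)\n", i, xa[i], xa[i + 1]);
  return 0;
}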
-- Function: size_t gsl_interp_accel_find (gsl_interp_accel *a, const double x_array[], size_t size, double x) This function performs a lookup action on the data array *note x_array: 9c8. of size *note size: 9c8, using the given accelerator *note a: 9c8. This is how lookups are performed during evaluation of an interpolation. The function returns an index i such that ‘x_array[i] <= x < x_array[i+1]’. An inline version of this function is used when ‘HAVE_INLINE’ is defined. -- Function: int gsl_interp_accel_reset (gsl_interp_accel *acc); This function reinitializes the accelerator object *note acc: 9c9. It should be used when the cached information is no longer applicable—for example, when switching to a new dataset. -- Function: void gsl_interp_accel_free (gsl_interp_accel *acc) This function frees the accelerator object *note acc: 9ca.  File: gsl-ref.info, Node: 1D Evaluation of Interpolating Functions, Next: 1D Higher-level Interface, Prev: 1D Index Look-up and Acceleration, Up: Interpolation 30.5 1D Evaluation of Interpolating Functions ============================================= -- Function: double gsl_interp_eval (const gsl_interp *interp, const double xa[], const double ya[], double x, gsl_interp_accel *acc) -- Function: int gsl_interp_eval_e (const gsl_interp *interp, const double xa[], const double ya[], double x, gsl_interp_accel *acc, double *y) These functions return the interpolated value of *note y: 9cd. for a given point *note x: 9cd, using the interpolation object *note interp: 9cd, data arrays *note xa: 9cd. and *note ya: 9cd. and the accelerator *note acc: 9cd. When *note x: 9cd. is outside the range of *note xa: 9cd, the error code *note GSL_EDOM: 28. is returned with a value of *note GSL_NAN: 3c. for *note y: 9cd. -- Function: double gsl_interp_eval_deriv (const gsl_interp *interp, const double xa[], const double ya[], double x, gsl_interp_accel *acc) -- Function: int gsl_interp_eval_deriv_e (const gsl_interp *interp, const double xa[], const double ya[], double x, gsl_interp_accel *acc, double *d) These functions return the derivative *note d: 9cf. of an interpolated function for a given point *note x: 9cf, using the interpolation object *note interp: 9cf, data arrays *note xa: 9cf. and *note ya: 9cf. and the accelerator *note acc: 9cf. -- Function: double gsl_interp_eval_deriv2 (const gsl_interp *interp, const double xa[], const double ya[], double x, gsl_interp_accel *acc) -- Function: int gsl_interp_eval_deriv2_e (const gsl_interp *interp, const double xa[], const double ya[], double x, gsl_interp_accel *acc, double *d2) These functions return the second derivative *note d2: 9d1. of an interpolated function for a given point *note x: 9d1, using the interpolation object *note interp: 9d1, data arrays *note xa: 9d1. and *note ya: 9d1. and the accelerator *note acc: 9d1. -- Function: double gsl_interp_eval_integ (const gsl_interp *interp, const double xa[], const double ya[], double a, double b, gsl_interp_accel *acc) -- Function: int gsl_interp_eval_integ_e (const gsl_interp *interp, const double xa[], const double ya[], double a, double b, gsl_interp_accel *acc, double *result) These functions return the numerical integral *note result: 9d3. of an interpolated function over the range [*note a: 9d3, *note b: 9d3.], using the interpolation object *note interp: 9d3, data arrays *note xa: 9d3. and *note ya: 9d3. and the accelerator *note acc: 9d3.  
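As a small sketch of the low-level interface described above (not one of the library's shipped examples, with arbitrary data values), the following fragment evaluates a cubic spline at a single point. Note that the data arrays must be passed again on every evaluation call.

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_interp.h>

int
main (void)
{
  /* arbitrary illustration data */
  double xa[] = { 0.0, 1.0, 2.0, 3.0, 4.0 };
  double ya[] = { 0.0, 1.0, 0.5, 2.0, 1.0 };
  size_t n = 5;

  gsl_interp *interp = gsl_interp_alloc (gsl_interp_cspline, n);
  gsl_interp_accel *acc = gsl_interp_accel_alloc ();

  gsl_interp_init (interp, xa, ya, n);

  /* the xa and ya arrays are supplied on each call */
  printf ("y(1.5) = %g\n", gsl_interp_eval (interp, xa, ya, 1.5, acc));

  gsl_interp_accel_free (acc);
  gsl_interp_free (interp);
  return 0;
}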
File: gsl-ref.info, Node: 1D Higher-level Interface, Next: 1D Interpolation Example Programs, Prev: 1D Evaluation of Interpolating Functions, Up: Interpolation 30.6 1D Higher-level Interface ============================== The functions described in the previous sections required the user to supply pointers to the x and y arrays on each call. The following functions are equivalent to the corresponding *note gsl_interp: 9b4. functions but maintain a copy of this data in the *note gsl_spline: 9d5. object. This removes the need to pass both ‘xa’ and ‘ya’ as arguments on each evaluation. These functions are defined in the header file ‘gsl_spline.h’. -- Type: gsl_spline This workspace provides a higher level interface for the *note gsl_interp: 9b4. object -- Function: *note gsl_spline: 9d5. *gsl_spline_alloc (const gsl_interp_type *T, size_t size) -- Function: int gsl_spline_init (gsl_spline *spline, const double xa[], const double ya[], size_t size) -- Function: void gsl_spline_free (gsl_spline *spline) -- Function: const char *gsl_spline_name (const gsl_spline *spline) -- Function: unsigned int gsl_spline_min_size (const gsl_spline *spline) -- Function: double gsl_spline_eval (const gsl_spline *spline, double x, gsl_interp_accel *acc) -- Function: int gsl_spline_eval_e (const gsl_spline *spline, double x, gsl_interp_accel *acc, double *y) -- Function: double gsl_spline_eval_deriv (const gsl_spline *spline, double x, gsl_interp_accel *acc) -- Function: int gsl_spline_eval_deriv_e (const gsl_spline *spline, double x, gsl_interp_accel *acc, double *d) -- Function: double gsl_spline_eval_deriv2 (const gsl_spline *spline, double x, gsl_interp_accel *acc) -- Function: int gsl_spline_eval_deriv2_e (const gsl_spline *spline, double x, gsl_interp_accel *acc, double *d2) -- Function: double gsl_spline_eval_integ (const gsl_spline *spline, double a, double b, gsl_interp_accel *acc) -- Function: int gsl_spline_eval_integ_e (const gsl_spline *spline, double a, double b, gsl_interp_accel *acc, double *result)  File: gsl-ref.info, Node: 1D Interpolation Example Programs, Next: Introduction to 2D Interpolation, Prev: 1D Higher-level Interface, Up: Interpolation 30.7 1D Interpolation Example Programs ====================================== The following program demonstrates the use of the interpolation and spline functions. It computes a cubic spline interpolation of the 10-point dataset (x_i, y_i) where x_i = i + \sin(i)/2 and y_i = i + \cos(i^2) for i = 0 \dots 9.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_spline.h>

int
main (void)
{
  int i;
  double xi, yi, x[10], y[10];

  printf ("#m=0,S=17\n");

  for (i = 0; i < 10; i++)
    {
      x[i] = i + 0.5 * sin (i);
      y[i] = i + cos (i * i);
      printf ("%g %g\n", x[i], y[i]);
    }

  printf ("#m=1,S=0\n");

  {
    gsl_interp_accel *acc = gsl_interp_accel_alloc ();
    gsl_spline *spline = gsl_spline_alloc (gsl_interp_cspline, 10);

    gsl_spline_init (spline, x, y, 10);

    for (xi = x[0]; xi < x[9]; xi += 0.01)
      {
        yi = gsl_spline_eval (spline, xi, acc);
        printf ("%g %g\n", xi, yi);
      }

    gsl_spline_free (spline);
    gsl_interp_accel_free (acc);
  }

  return 0;
}

The output is designed to be used with the GNU plotutils ‘graph’ program:

$ ./a.out > interp.dat
$ graph -T ps < interp.dat > interp.ps

[gsl-ref-figures/interp] Figure: Cubic spline interpolation The figure shows a smooth interpolation of the original points. The interpolation method can be changed simply by varying the first argument of *note gsl_spline_alloc(): 9d6. The next program demonstrates a periodic cubic spline with 4 data points.
Note that the first and last points must be supplied with the same y-value for a periodic spline.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_spline.h>

int
main (void)
{
  int N = 4;
  double x[4] = {0.00, 0.10, 0.27, 0.30};
  double y[4] = {0.15, 0.70, -0.10, 0.15}; /* Note: y[0] == y[3] for periodic data */

  gsl_interp_accel *acc = gsl_interp_accel_alloc ();
  const gsl_interp_type *t = gsl_interp_cspline_periodic;
  gsl_spline *spline = gsl_spline_alloc (t, N);

  int i;
  double xi, yi;

  printf ("#m=0,S=5\n");

  for (i = 0; i < N; i++)
    {
      printf ("%g %g\n", x[i], y[i]);
    }

  printf ("#m=1,S=0\n");

  gsl_spline_init (spline, x, y, N);

  for (i = 0; i <= 100; i++)
    {
      xi = (1 - i / 100.0) * x[0] + (i / 100.0) * x[N-1];
      yi = gsl_spline_eval (spline, xi, acc);
      printf ("%g %g\n", xi, yi);
    }

  gsl_spline_free (spline);
  gsl_interp_accel_free (acc);
  return 0;
}

The output can be plotted with GNU ‘graph’:

$ ./a.out > interp.dat
$ graph -T ps < interp.dat > interp.ps

[gsl-ref-figures/interpp] Figure: Periodic cubic spline interpolation The figure shows a periodic interpolation of the original points. The slope of the fitted curve is the same at the beginning and end of the data, and so is the second derivative. The next program illustrates the difference between the cubic spline, Akima, and Steffen interpolation types on a difficult dataset.

#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_spline.h>

int
main(void)
{
  size_t i;
  const size_t N = 9;

  /* this dataset is taken from
   * J. M. Hyman, Accurate Monotonicity preserving cubic interpolation,
   * SIAM J. Sci. Stat. Comput. 4, 4, 1983. */
  const double x[] = { 7.99, 8.09, 8.19, 8.7, 9.2,
                       10.0, 12.0, 15.0, 20.0 };
  const double y[] = { 0.0, 2.76429e-5, 4.37498e-2,
                       0.169183, 0.469428, 0.943740,
                       0.998636, 0.999919, 0.999994 };

  gsl_interp_accel *acc = gsl_interp_accel_alloc();
  gsl_spline *spline_cubic = gsl_spline_alloc(gsl_interp_cspline, N);
  gsl_spline *spline_akima = gsl_spline_alloc(gsl_interp_akima, N);
  gsl_spline *spline_steffen = gsl_spline_alloc(gsl_interp_steffen, N);

  gsl_spline_init(spline_cubic, x, y, N);
  gsl_spline_init(spline_akima, x, y, N);
  gsl_spline_init(spline_steffen, x, y, N);

  for (i = 0; i < N; ++i)
    printf("%g %g\n", x[i], y[i]);

  printf("\n\n");

  for (i = 0; i <= 100; ++i)
    {
      double xi = (1 - i / 100.0) * x[0] + (i / 100.0) * x[N-1];
      double yi_cubic = gsl_spline_eval(spline_cubic, xi, acc);
      double yi_akima = gsl_spline_eval(spline_akima, xi, acc);
      double yi_steffen = gsl_spline_eval(spline_steffen, xi, acc);

      printf("%g %g %g %g\n", xi, yi_cubic, yi_akima, yi_steffen);
    }

  gsl_spline_free(spline_cubic);
  gsl_spline_free(spline_akima);
  gsl_spline_free(spline_steffen);
  gsl_interp_accel_free(acc);

  return 0;
}

[gsl-ref-figures/interp_compare] Figure: Comparison of different 1D interpolation methods The output is shown in the figure above. The cubic method exhibits a local maximum between the 6th and 7th data points and continues oscillating for the rest of the data. Akima also shows a local maximum but recovers and follows the data well after the 7th grid point. Steffen preserves monotonicity in all intervals and does not exhibit oscillations, at the expense of having a discontinuous second derivative.
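The higher-level functions listed in the previous section can also be used to query derivatives and integrals of the fitted spline. The following fragment is a small sketch (not one of the library's shipped examples) with arbitrary data values.

#include <stdio.h>
#include <gsl/gsl_errno.h>
#include <gsl/gsl_spline.h>

int
main (void)
{
  /* arbitrary illustration data */
  double x[] = { 0.0, 1.0, 2.0, 3.0, 4.0 };
  double y[] = { 0.0, 1.0, 0.5, 2.0, 1.0 };

  gsl_interp_accel *acc = gsl_interp_accel_alloc ();
  gsl_spline *spline = gsl_spline_alloc (gsl_interp_cspline, 5);

  gsl_spline_init (spline, x, y, 5);

  /* value, first and second derivative at x = 2.5, and the
     integral of the interpolant over [0, 4] */
  printf ("y(2.5)   = %g\n", gsl_spline_eval (spline, 2.5, acc));
  printf ("y'(2.5)  = %g\n", gsl_spline_eval_deriv (spline, 2.5, acc));
  printf ("y''(2.5) = %g\n", gsl_spline_eval_deriv2 (spline, 2.5, acc));
  printf ("int(y)   = %g\n", gsl_spline_eval_integ (spline, 0.0, 4.0, acc));

  gsl_spline_free (spline);
  gsl_interp_accel_free (acc);
  return 0;
}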
File: gsl-ref.info, Node: Introduction to 2D Interpolation, Next: 2D Interpolation Functions, Prev: 1D Interpolation Example Programs, Up: Interpolation 30.8 Introduction to 2D Interpolation ===================================== Given a set of x coordinates x_1,...,x_m and a set of y coordinates y_1,...,y_n, each in increasing order, plus a set of function values z_{ij} for each grid point (x_i,y_j), the routines described in this section compute a continuous interpolation function z(x,y) such that z(x_i,y_j) = z_{ij}.  File: gsl-ref.info, Node: 2D Interpolation Functions, Next: 2D Interpolation Grids, Prev: Introduction to 2D Interpolation, Up: Interpolation 30.9 2D Interpolation Functions =============================== The interpolation function for a given dataset is stored in a *note gsl_interp2d: 9e9. object. These are created by the following functions. -- Type: gsl_interp2d Workspace for 2D interpolation -- Function: *note gsl_interp2d: 9e9. *gsl_interp2d_alloc (const gsl_interp2d_type *T, const size_t xsize, const size_t ysize) This function returns a pointer to a newly allocated interpolation object of type *note T: 9ea. for *note xsize: 9ea. grid points in the x direction and *note ysize: 9ea. grid points in the y direction. -- Function: int gsl_interp2d_init (gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const size_t xsize, const size_t ysize) This function initializes the interpolation object *note interp: 9eb. for the data (*note xa: 9eb, *note ya: 9eb, *note za: 9eb.) where *note xa: 9eb. and *note ya: 9eb. are arrays of the x and y grid points of size *note xsize: 9eb. and *note ysize: 9eb. respectively, and *note za: 9eb. is an array of function values of size *note xsize: 9eb. * *note ysize: 9eb. The interpolation object (*note gsl_interp2d: 9e9.) does not save the data arrays *note xa: 9eb, *note ya: 9eb, and *note za: 9eb. and only stores the static state computed from the data. The *note xa: 9eb. and *note ya: 9eb. data arrays are always assumed to be strictly ordered, with increasing x,y values; the behavior for other arrangements is not defined. -- Function: void gsl_interp2d_free (gsl_interp2d *interp) This function frees the interpolation object *note interp: 9ec.  File: gsl-ref.info, Node: 2D Interpolation Grids, Next: 2D Interpolation Types, Prev: 2D Interpolation Functions, Up: Interpolation 30.10 2D Interpolation Grids ============================ The 2D interpolation routines access the function values z_{ij} with the following ordering: z_{ij} = za[j*xsize + i] with i = 0,...,xsize-1 and j = 0,...,ysize-1. However, for ease of use, the following functions are provided to add and retrieve elements from the function grid without requiring knowledge of the internal ordering. -- Function: int gsl_interp2d_set (const gsl_interp2d *interp, double za[], const size_t i, const size_t j, const double z) This function sets the value z_{ij} for grid point (*note i: 9ee, *note j: 9ee.) of the array *note za: 9ee. to *note z: 9ee. -- Function: double gsl_interp2d_get (const gsl_interp2d *interp, const double za[], const size_t i, const size_t j) This function returns the value z_{ij} for grid point (*note i: 9ef, *note j: 9ef.) stored in the array *note za: 9ef. -- Function: size_t gsl_interp2d_idx (const gsl_interp2d *interp, const size_t i, const size_t j) This function returns the index corresponding to the grid point (*note i: 9f0, *note j: 9f0.). The index is given by j*xsize + i.  
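As an illustration of the grid ordering described above, the following fragment stores one grid value both through the accessor and by direct indexing; it is a sketch (not one of the library's shipped examples) and the grid values are arbitrary.

#include <stdio.h>
#include <gsl/gsl_interp2d.h>

int
main (void)
{
  const size_t xsize = 2, ysize = 2;
  double za[4] = { 0.0 };

  gsl_interp2d *interp = gsl_interp2d_alloc (gsl_interp2d_bilinear,
                                             xsize, ysize);

  /* store z_{ij} for i = 1, j = 0 via the accessor ... */
  gsl_interp2d_set (interp, za, 1, 0, 0.25);

  /* ... which corresponds to the index j*xsize + i = 1 */
  size_t idx = gsl_interp2d_idx (interp, 1, 0);
  printf ("za[%zu] = %g\n", idx, za[idx]);   /* prints "za[1] = 0.25" */

  gsl_interp2d_free (interp);
  return 0;
}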
File: gsl-ref.info, Node: 2D Interpolation Types, Next: 2D Evaluation of Interpolating Functions, Prev: 2D Interpolation Grids, Up: Interpolation 30.11 2D Interpolation Types ============================ -- Type: gsl_interp2d_type The interpolation library provides the following 2D interpolation types: -- Variable: *note gsl_interp2d_type: 9f2. *gsl_interp2d_bilinear Bilinear interpolation. This interpolation method does not require any additional memory. -- Variable: *note gsl_interp2d_type: 9f2. *gsl_interp2d_bicubic Bicubic interpolation. -- Function: const char *gsl_interp2d_name (const gsl_interp2d *interp) This function returns the name of the interpolation type used by *note interp: 9f5. For example: printf ("interp uses '%s' interpolation.\n", gsl_interp2d_name (interp)); would print something like: interp uses 'bilinear' interpolation. -- Function: unsigned int gsl_interp2d_min_size (const gsl_interp2d *interp) -- Function: unsigned int gsl_interp2d_type_min_size (const gsl_interp2d_type *T) These functions return the minimum number of points required by the interpolation object ‘interp’ or interpolation type *note T: 9f7. For example, bicubic interpolation requires a minimum of 4 points.  File: gsl-ref.info, Node: 2D Evaluation of Interpolating Functions, Next: 2D Higher-level Interface, Prev: 2D Interpolation Types, Up: Interpolation 30.12 2D Evaluation of Interpolating Functions ============================================== -- Function: double gsl_interp2d_eval (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_interp2d_eval_e (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *z) These functions return the interpolated value of *note z: 9fa. for a given point (*note x: 9fa, *note y: 9fa.), using the interpolation object *note interp: 9fa, data arrays *note xa: 9fa, *note ya: 9fa, and *note za: 9fa. and the accelerators *note xacc: 9fa. and *note yacc: 9fa. When *note x: 9fa. is outside the range of *note xa: 9fa. or *note y: 9fa. is outside the range of *note ya: 9fa, the error code *note GSL_EDOM: 28. is returned. -- Function: double gsl_interp2d_eval_extrap (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_interp2d_eval_extrap_e (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *z) These functions return the interpolated value of *note z: 9fc. for a given point (*note x: 9fc, *note y: 9fc.), using the interpolation object *note interp: 9fc, data arrays *note xa: 9fc, *note ya: 9fc, and *note za: 9fc. and the accelerators *note xacc: 9fc. and *note yacc: 9fc. The functions perform no bounds checking, so when *note x: 9fc. is outside the range of *note xa: 9fc. or *note y: 9fc. is outside the range of *note ya: 9fc, extrapolation is performed. 
-- Function: double gsl_interp2d_eval_deriv_x (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_interp2d_eval_deriv_x_e (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) These functions return the interpolated value *note d: 9fe. = \partial z / \partial x for a given point (*note x: 9fe, *note y: 9fe.), using the interpolation object *note interp: 9fe, data arrays *note xa: 9fe, *note ya: 9fe, and *note za: 9fe. and the accelerators *note xacc: 9fe. and *note yacc: 9fe. When *note x: 9fe. is outside the range of *note xa: 9fe. or *note y: 9fe. is outside the range of *note ya: 9fe, the error code *note GSL_EDOM: 28. is returned. -- Function: double gsl_interp2d_eval_deriv_y (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_interp2d_eval_deriv_y_e (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) These functions return the interpolated value *note d: a00. = \partial z / \partial y for a given point (*note x: a00, *note y: a00.), using the interpolation object *note interp: a00, data arrays *note xa: a00, *note ya: a00, and *note za: a00. and the accelerators *note xacc: a00. and *note yacc: a00. When *note x: a00. is outside the range of *note xa: a00. or *note y: a00. is outside the range of *note ya: a00, the error code *note GSL_EDOM: 28. is returned. -- Function: double gsl_interp2d_eval_deriv_xx (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_interp2d_eval_deriv_xx_e (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) These functions return the interpolated value *note d: a02. = \partial^2 z / \partial x^2 for a given point (*note x: a02, *note y: a02.), using the interpolation object *note interp: a02, data arrays *note xa: a02, *note ya: a02, and *note za: a02. and the accelerators *note xacc: a02. and *note yacc: a02. When *note x: a02. is outside the range of *note xa: a02. or *note y: a02. is outside the range of *note ya: a02, the error code *note GSL_EDOM: 28. is returned. -- Function: double gsl_interp2d_eval_deriv_yy (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_interp2d_eval_deriv_yy_e (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) These functions return the interpolated value *note d: a04. = \partial^2 z / \partial y^2 for a given point (*note x: a04, *note y: a04.), using the interpolation object *note interp: a04, data arrays *note xa: a04, *note ya: a04, and *note za: a04. and the accelerators *note xacc: a04. and *note yacc: a04. When *note x: a04. is outside the range of *note xa: a04. or *note y: a04. 
is outside the range of *note ya: a04, the error code *note GSL_EDOM: 28. is returned. -- Function: double gsl_interp2d_eval_deriv_xy (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_interp2d_eval_deriv_xy_e (const gsl_interp2d *interp, const double xa[], const double ya[], const double za[], const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) These functions return the interpolated value *note d: a06. = \partial^2 z / \partial x \partial y for a given point (*note x: a06, *note y: a06.), using the interpolation object *note interp: a06, data arrays *note xa: a06, *note ya: a06, and *note za: a06. and the accelerators *note xacc: a06. and *note yacc: a06. When *note x: a06. is outside the range of *note xa: a06. or *note y: a06. is outside the range of *note ya: a06, the error code *note GSL_EDOM: 28. is returned.  File: gsl-ref.info, Node: 2D Higher-level Interface, Next: 2D Interpolation Example programs, Prev: 2D Evaluation of Interpolating Functions, Up: Interpolation 30.13 2D Higher-level Interface =============================== The functions described in the previous sections required the user to supply pointers to the x, y, and z arrays on each call. The following functions are equivalent to the corresponding ‘gsl_interp2d’ functions but maintain a copy of this data in the *note gsl_spline2d: a08. object. This removes the need to pass ‘xa’, ‘ya’, and ‘za’ as arguments on each evaluation. These functions are defined in the header file ‘gsl_spline2d.h’. -- Type: gsl_spline2d This workspace provides a higher level interface for the *note gsl_interp2d: 9e9. object -- Function: *note gsl_spline2d: a08. 
*gsl_spline2d_alloc (const gsl_interp2d_type *T, size_t xsize, size_t ysize) -- Function: int gsl_spline2d_init (gsl_spline2d *spline, const double xa[], const double ya[], const double za[], size_t xsize, size_t ysize) -- Function: void gsl_spline2d_free (gsl_spline2d *spline) -- Function: const char *gsl_spline2d_name (const gsl_spline2d *spline) -- Function: unsigned int gsl_spline2d_min_size (const gsl_spline2d *spline) -- Function: double gsl_spline2d_eval (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_spline2d_eval_e (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *z) -- Function: double gsl_spline2d_eval_extrap (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_spline2d_eval_extrap_e (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *z) -- Function: double gsl_spline2d_eval_deriv_x (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_spline2d_eval_deriv_x_e (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) -- Function: double gsl_spline2d_eval_deriv_y (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_spline2d_eval_deriv_y_e (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) -- Function: double gsl_spline2d_eval_deriv_xx (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_spline2d_eval_deriv_xx_e (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) -- Function: double gsl_spline2d_eval_deriv_yy (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_spline2d_eval_deriv_yy_e (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) -- Function: double gsl_spline2d_eval_deriv_xy (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc) -- Function: int gsl_spline2d_eval_deriv_xy_e (const gsl_spline2d *spline, const double x, const double y, gsl_interp_accel *xacc, gsl_interp_accel *yacc, double *d) -- Function: int gsl_spline2d_set (const gsl_spline2d *spline, double za[], const size_t i, const size_t j, const double z) -- Function: double gsl_spline2d_get (const gsl_spline2d *spline, const double za[], const size_t i, const size_t j) This function returns the value z_{ij} for grid point (*note i: a1d, *note j: a1d.) stored in the array *note za: a1d.  File: gsl-ref.info, Node: 2D Interpolation Example programs, Next: References and Further Reading<23>, Prev: 2D Higher-level Interface, Up: Interpolation 30.14 2D Interpolation Example programs ======================================= The following example performs bilinear interpolation on the unit square, using z values of (0,1,0.5,1) going clockwise around the square. 
     #include <stdio.h>
     #include <stdlib.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_interp2d.h>
     #include <gsl/gsl_spline2d.h>

     int
     main()
     {
       const gsl_interp2d_type *T = gsl_interp2d_bilinear;
       const size_t N = 100;             /* number of points to interpolate */
       const double xa[] = { 0.0, 1.0 }; /* define unit square */
       const double ya[] = { 0.0, 1.0 };
       const size_t nx = sizeof(xa) / sizeof(double); /* x grid points */
       const size_t ny = sizeof(ya) / sizeof(double); /* y grid points */
       double *za = malloc(nx * ny * sizeof(double));
       gsl_spline2d *spline = gsl_spline2d_alloc(T, nx, ny);
       gsl_interp_accel *xacc = gsl_interp_accel_alloc();
       gsl_interp_accel *yacc = gsl_interp_accel_alloc();
       size_t i, j;

       /* set z grid values */
       gsl_spline2d_set(spline, za, 0, 0, 0.0);
       gsl_spline2d_set(spline, za, 0, 1, 1.0);
       gsl_spline2d_set(spline, za, 1, 1, 0.5);
       gsl_spline2d_set(spline, za, 1, 0, 1.0);

       /* initialize interpolation */
       gsl_spline2d_init(spline, xa, ya, za, nx, ny);

       /* interpolate N values in x and y and print out grid for plotting */
       for (i = 0; i < N; ++i)
         {
           double xi = i / (N - 1.0);

           for (j = 0; j < N; ++j)
             {
               double yj = j / (N - 1.0);
               double zij = gsl_spline2d_eval(spline, xi, yj, xacc, yacc);

               printf("%f %f %f\n", xi, yj, zij);
             }
           printf("\n");
         }

       gsl_spline2d_free(spline);
       gsl_interp_accel_free(xacc);
       gsl_interp_accel_free(yacc);
       free(za);

       return 0;
     }

The results of the interpolation are shown in the figure below, where the corners are labeled with their fixed z values.

[gsl-ref-figures/interp2d]

Figure: 2D interpolation example


File: gsl-ref.info, Node: References and Further Reading<23>, Prev: 2D Interpolation Example programs, Up: Interpolation

30.15 References and Further Reading
====================================

Descriptions of the interpolation algorithms and further references can be found in the following publications:

   * C.W. Ueberhuber, `Numerical Computation (Volume 1), Chapter 9 “Interpolation”', Springer (1997), ISBN 3-540-62058-3.

   * D.M. Young, R.T. Gregory, `A Survey of Numerical Mathematics (Volume 1), Chapter 6.8', Dover (1988), ISBN 0-486-65691-8.

   * M. Steffen, `A simple method for monotonic interpolation in one dimension', Astron. Astrophys. 239, 443-450, 1990.


File: gsl-ref.info, Node: Numerical Differentiation, Next: Chebyshev Approximations, Prev: Interpolation, Up: Top

31 Numerical Differentiation
****************************

The functions described in this chapter compute numerical derivatives by finite differencing.  An adaptive algorithm is used to find the best choice of finite difference and to estimate the error in the derivative.  These functions are declared in the header file ‘gsl_deriv.h’.

* Menu:

* Functions::
* Examples: Examples<23>.
* References and Further Reading: References and Further Reading<24>.


File: gsl-ref.info, Node: Functions, Next: Examples<23>, Up: Numerical Differentiation

31.1 Functions
==============

 -- Function: int gsl_deriv_central (const gsl_function *f, double x, double h, double *result, double *abserr)

     This function computes the numerical derivative of the function *note f: a24. at the point *note x: a24. using an adaptive central difference algorithm with a step-size of *note h: a24.  The derivative is returned in *note result: a24. and an estimate of its absolute error is returned in *note abserr: a24.

     The initial value of *note h: a24. is used to estimate an optimal step-size, based on the scaling of the truncation error and round-off error in the derivative calculation.
The derivative is computed using a 5-point rule for equally spaced abscissae at x - h, x - h/2, x, x + h/2, x+h, with an error estimate taken from the difference between the 5-point rule and the corresponding 3-point rule x-h, x, x+h. Note that the value of the function at x does not contribute to the derivative calculation, so only 4-points are actually used. -- Function: int gsl_deriv_forward (const gsl_function *f, double x, double h, double *result, double *abserr) This function computes the numerical derivative of the function *note f: a25. at the point *note x: a25. using an adaptive forward difference algorithm with a step-size of *note h: a25. The function is evaluated only at points greater than *note x: a25, and never at *note x: a25. itself. The derivative is returned in *note result: a25. and an estimate of its absolute error is returned in *note abserr: a25. This function should be used if f(x) has a discontinuity at *note x: a25, or is undefined for values less than *note x: a25. The initial value of *note h: a25. is used to estimate an optimal step-size, based on the scaling of the truncation error and round-off error in the derivative calculation. The derivative at x is computed using an “open” 4-point rule for equally spaced abscissae at x+h/4, x + h/2, x + 3h/4, x+h, with an error estimate taken from the difference between the 4-point rule and the corresponding 2-point rule x+h/2, x+h. -- Function: int gsl_deriv_backward (const gsl_function *f, double x, double h, double *result, double *abserr) This function computes the numerical derivative of the function *note f: a26. at the point *note x: a26. using an adaptive backward difference algorithm with a step-size of *note h: a26. The function is evaluated only at points less than *note x: a26, and never at *note x: a26. itself. The derivative is returned in *note result: a26. and an estimate of its absolute error is returned in *note abserr: a26. This function should be used if f(x) has a discontinuity at *note x: a26, or is undefined for values greater than *note x: a26. This function is equivalent to calling *note gsl_deriv_forward(): a25. with a negative step-size.  File: gsl-ref.info, Node: Examples<23>, Next: References and Further Reading<24>, Prev: Functions, Up: Numerical Differentiation 31.2 Examples ============= The following code estimates the derivative of the function f(x) = x^{3/2} at x = 2 and at x = 0. The function f(x) is undefined for x < 0 so the derivative at x=0 is computed using *note gsl_deriv_forward(): a25. 
     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_deriv.h>

     double f (double x, void * params)
     {
       (void)(params); /* avoid unused parameter warning */
       return pow (x, 1.5);
     }

     int
     main (void)
     {
       gsl_function F;
       double result, abserr;

       F.function = &f;
       F.params = 0;

       printf ("f(x) = x^(3/2)\n");

       gsl_deriv_central (&F, 2.0, 1e-8, &result, &abserr);
       printf ("x = 2.0\n");
       printf ("f'(x) = %.10f +/- %.10f\n", result, abserr);
       printf ("exact = %.10f\n\n", 1.5 * sqrt(2.0));

       gsl_deriv_forward (&F, 0.0, 1e-8, &result, &abserr);
       printf ("x = 0.0\n");
       printf ("f'(x) = %.10f +/- %.10f\n", result, abserr);
       printf ("exact = %.10f\n", 0.0);

       return 0;
     }

Here is the output of the program,

     f(x) = x^(3/2)
     x = 2.0
     f'(x) = 2.1213203120 +/- 0.0000005006
     exact = 2.1213203436

     x = 0.0
     f'(x) = 0.0000000160 +/- 0.0000000339
     exact = 0.0000000000


File: gsl-ref.info, Node: References and Further Reading<24>, Prev: Examples<23>, Up: Numerical Differentiation

31.3 References and Further Reading
===================================

The algorithms used by these functions are described in the following sources:

   * Abramowitz and Stegun, `Handbook of Mathematical Functions', Section 25.3.4, and Table 25.5 (Coefficients for Differentiation).

   * S.D. Conte and Carl de Boor, `Elementary Numerical Analysis: An Algorithmic Approach', McGraw-Hill, 1972.


File: gsl-ref.info, Node: Chebyshev Approximations, Next: Series Acceleration, Prev: Numerical Differentiation, Up: Top

32 Chebyshev Approximations
***************************

This chapter describes routines for computing Chebyshev approximations to univariate functions.  A Chebyshev approximation is a truncation of the series f(x) = \sum c_n T_n(x), where the Chebyshev polynomials T_n(x) = \cos(n \arccos x) provide an orthogonal basis of polynomials on the interval [-1,1] with the weight function 1 / \sqrt{1-x^2}.  The first few Chebyshev polynomials are, T_0(x) = 1, T_1(x) = x, T_2(x) = 2 x^2 - 1.  For further information see Abramowitz & Stegun, Chapter 22.

The functions described in this chapter are declared in the header file ‘gsl_chebyshev.h’.

* Menu:

* Definitions::
* Creation and Calculation of Chebyshev Series::
* Auxiliary Functions::
* Chebyshev Series Evaluation::
* Derivatives and Integrals::
* Examples: Examples<24>.
* References and Further Reading: References and Further Reading<25>.


File: gsl-ref.info, Node: Definitions, Next: Creation and Calculation of Chebyshev Series, Up: Chebyshev Approximations

32.1 Definitions
================

 -- Type: gsl_cheb_series

     A Chebyshev series is stored using the following structure:

          typedef struct
          {
            double * c;   /* coefficients c[0] .. c[order] */
            int order;    /* order of expansion            */
            double a;     /* lower interval point          */
            double b;     /* upper interval point          */
            ...
          } gsl_cheb_series

The approximation is made over the range [a,b] using ‘order’ + 1 terms, including the coefficient c[0].  The series is computed using the following convention,

     f(x) = (c_0 / 2) + \sum_{n=1} c_n T_n(x)

which is needed when accessing the coefficients directly.


File: gsl-ref.info, Node: Creation and Calculation of Chebyshev Series, Next: Auxiliary Functions, Prev: Definitions, Up: Chebyshev Approximations

32.2 Creation and Calculation of Chebyshev Series
=================================================

 -- Function: *note gsl_cheb_series: a2c. *gsl_cheb_alloc (const size_t n)

     This function allocates space for a Chebyshev series of order *note n: a2e. and returns a pointer to a new *note gsl_cheb_series: a2c. struct.
-- Function: void gsl_cheb_free (gsl_cheb_series *cs) This function frees a previously allocated Chebyshev series *note cs: a2f. -- Function: int gsl_cheb_init (gsl_cheb_series *cs, const gsl_function *f, const double a, const double b) This function computes the Chebyshev approximation *note cs: a30. for the function *note f: a30. over the range (a,b) to the previously specified order. The computation of the Chebyshev approximation is an O(n^2) process, and requires n function evaluations.  File: gsl-ref.info, Node: Auxiliary Functions, Next: Chebyshev Series Evaluation, Prev: Creation and Calculation of Chebyshev Series, Up: Chebyshev Approximations 32.3 Auxiliary Functions ======================== The following functions provide information about an existing Chebyshev series. -- Function: size_t gsl_cheb_order (const gsl_cheb_series *cs) This function returns the order of Chebyshev series *note cs: a32. -- Function: size_t gsl_cheb_size (const gsl_cheb_series *cs) -- Function: double *gsl_cheb_coeffs (const gsl_cheb_series *cs) These functions return the size of the Chebyshev coefficient array ‘c[]’ and a pointer to its location in memory for the Chebyshev series *note cs: a34.  File: gsl-ref.info, Node: Chebyshev Series Evaluation, Next: Derivatives and Integrals, Prev: Auxiliary Functions, Up: Chebyshev Approximations 32.4 Chebyshev Series Evaluation ================================ -- Function: double gsl_cheb_eval (const gsl_cheb_series *cs, double x) This function evaluates the Chebyshev series *note cs: a36. at a given point *note x: a36. -- Function: int gsl_cheb_eval_err (const gsl_cheb_series *cs, const double x, double *result, double *abserr) This function computes the Chebyshev series *note cs: a37. at a given point *note x: a37, estimating both the series *note result: a37. and its absolute error *note abserr: a37. The error estimate is made from the first neglected term in the series. -- Function: double gsl_cheb_eval_n (const gsl_cheb_series *cs, size_t order, double x) This function evaluates the Chebyshev series *note cs: a38. at a given point *note x: a38, to (at most) the given order *note order: a38. -- Function: int gsl_cheb_eval_n_err (const gsl_cheb_series *cs, const size_t order, const double x, double *result, double *abserr) This function evaluates a Chebyshev series *note cs: a39. at a given point *note x: a39, estimating both the series *note result: a39. and its absolute error *note abserr: a39, to (at most) the given order *note order: a39. The error estimate is made from the first neglected term in the series.  File: gsl-ref.info, Node: Derivatives and Integrals, Next: Examples<24>, Prev: Chebyshev Series Evaluation, Up: Chebyshev Approximations 32.5 Derivatives and Integrals ============================== The following functions allow a Chebyshev series to be differentiated or integrated, producing a new Chebyshev series. Note that the error estimate produced by evaluating the derivative series will be underestimated due to the contribution of higher order terms being neglected. -- Function: int gsl_cheb_calc_deriv (gsl_cheb_series *deriv, const gsl_cheb_series *cs) This function computes the derivative of the series *note cs: a3b, storing the derivative coefficients in the previously allocated *note deriv: a3b. The two series *note cs: a3b. and *note deriv: a3b. must have been allocated with the same order. 
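For illustration, the short program below sketches one way ‘gsl_cheb_calc_deriv()’ might be used in practice.  It is not taken from the library's own examples: the test function f(x) = \sin(x), the order 40 and the evaluation point x = 1 are arbitrary choices made for this sketch.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_chebyshev.h>

     /* example function for this sketch: f(x) = sin(x) */
     double
     f_sin (double x, void *params)
     {
       (void)(params); /* avoid unused parameter warning */
       return sin (x);
     }

     int
     main (void)
     {
       gsl_cheb_series *cs = gsl_cheb_alloc (40);
       gsl_cheb_series *deriv = gsl_cheb_alloc (40); /* same order as cs */
       gsl_function F;

       F.function = &f_sin;
       F.params = 0;

       gsl_cheb_init (cs, &F, 0.0, 2.0 * M_PI);
       gsl_cheb_calc_deriv (deriv, cs);

       /* d/dx sin(x) at x = 1 should be close to cos(1) */
       printf ("%.10f %.10f\n", gsl_cheb_eval (deriv, 1.0), cos (1.0));

       gsl_cheb_free (deriv);
       gsl_cheb_free (cs);
       return 0;
     }

The integration function described next follows the same calling pattern.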
 -- Function: int gsl_cheb_calc_integ (gsl_cheb_series *integ, const gsl_cheb_series *cs)

     This function computes the integral of the series *note cs: a3c, storing the integral coefficients in the previously allocated *note integ: a3c.  The two series *note cs: a3c. and *note integ: a3c. must have been allocated with the same order.  The lower limit of the integration is taken to be the left hand end of the range ‘a’.


File: gsl-ref.info, Node: Examples<24>, Next: References and Further Reading<25>, Prev: Derivatives and Integrals, Up: Chebyshev Approximations

32.6 Examples
=============

The following example program computes Chebyshev approximations to a step function.  This is an extremely difficult approximation to make, due to the discontinuity, and was chosen as an example where approximation error is visible.  For smooth functions the Chebyshev approximation converges extremely rapidly and errors would not be visible.

     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_chebyshev.h>

     double
     f (double x, void *p)
     {
       (void)(p); /* avoid unused parameter warning */

       if (x < 0.5)
         return 0.25;
       else
         return 0.75;
     }

     int
     main (void)
     {
       int i, n = 10000;

       gsl_cheb_series *cs = gsl_cheb_alloc (40);

       gsl_function F;

       F.function = f;
       F.params = 0;

       gsl_cheb_init (cs, &F, 0.0, 1.0);

       for (i = 0; i < n; i++)
         {
           double x = i / (double)n;
           double r10 = gsl_cheb_eval_n (cs, 10, x);
           double r40 = gsl_cheb_eval (cs, x);

           printf ("%g %g %g %g\n", x, GSL_FN_EVAL (&F, x), r10, r40);
         }

       gsl_cheb_free (cs);

       return 0;
     }

The figure below shows output from the program with the original function, 10-th order approximation and 40-th order approximation, all sampled at intervals of 0.001 in x.

[gsl-ref-figures/cheb]

Figure: Chebyshev approximations to a step function


File: gsl-ref.info, Node: References and Further Reading<25>, Prev: Examples<24>, Up: Chebyshev Approximations

32.7 References and Further Reading
===================================

The following paper describes the use of Chebyshev series,

   * R. Broucke, “Ten Subroutines for the Manipulation of Chebyshev Series [C1] (Algorithm 446)”.  `Communications of the ACM' 16(4), 254–256 (1973).


File: gsl-ref.info, Node: Series Acceleration, Next: Wavelet Transforms, Prev: Chebyshev Approximations, Up: Top

33 Series Acceleration
**********************

The functions described in this chapter accelerate the convergence of a series using the Levin u-transform.  This method takes a small number of terms from the start of a series and uses a systematic approximation to compute an extrapolated value and an estimate of its error.  The u-transform works for both convergent and divergent series, including asymptotic series.

These functions are declared in the header file ‘gsl_sum.h’.

* Menu:

* Acceleration functions::
* Acceleration functions without error estimation::
* Examples: Examples<25>.
* References and Further Reading: References and Further Reading<26>.


File: gsl-ref.info, Node: Acceleration functions, Next: Acceleration functions without error estimation, Up: Series Acceleration

33.1 Acceleration functions
===========================

The following functions compute the full Levin u-transform of a series with its error estimate.  The error estimate is computed by propagating rounding errors from each term through to the final extrapolation.

These functions are intended for summing analytic series where each term is known to high accuracy, and the rounding errors are assumed to originate from finite precision.  They are taken to be relative errors of order ‘GSL_DBL_EPSILON’ for each term.
The calculation of the error in the extrapolated value is an O(N^2) process, which is expensive in time and memory.  A faster but less reliable method which estimates the error from the convergence of the extrapolated value is described in the next section.  For the method described here a full table of intermediate values and derivatives through to O(N) must be computed and stored, but this does give a reliable error estimate.

 -- Type: gsl_sum_levin_u_workspace

     Workspace for a Levin u-transform.

 -- Function: *note gsl_sum_levin_u_workspace: a43. *gsl_sum_levin_u_alloc (size_t n)

     This function allocates a workspace for a Levin u-transform of *note n: a44. terms.  The size of the workspace is O(2n^2 + 3n).

 -- Function: void gsl_sum_levin_u_free (gsl_sum_levin_u_workspace *w)

     This function frees the memory associated with the workspace *note w: a45.

 -- Function: int gsl_sum_levin_u_accel (const double *array, size_t array_size, gsl_sum_levin_u_workspace *w, double *sum_accel, double *abserr)

     This function takes the terms of a series in *note array: a46. of size *note array_size: a46. and computes the extrapolated limit of the series using a Levin u-transform.  Additional working space must be provided in *note w: a46.  The extrapolated sum is stored in *note sum_accel: a46, with an estimate of the absolute error stored in *note abserr: a46.  The actual term-by-term sum is returned in ‘w->sum_plain’.

     The algorithm calculates the truncation error (the difference between two successive extrapolations) and round-off error (propagated from the individual terms) to choose an optimal number of terms for the extrapolation.  All the terms of the series passed in through *note array: a46. should be non-zero.


File: gsl-ref.info, Node: Acceleration functions without error estimation, Next: Examples<25>, Prev: Acceleration functions, Up: Series Acceleration

33.2 Acceleration functions without error estimation
=====================================================

The functions described in this section compute the Levin u-transform of series and attempt to estimate the error from the “truncation error” in the extrapolation, the difference between the final two approximations.  Using this method avoids the need to compute an intermediate table of derivatives because the error is estimated from the behavior of the extrapolated value itself.  Consequently this algorithm is an O(N) process and only requires O(N) terms of storage.  If the series converges sufficiently fast then this procedure can be acceptable.  It is appropriate to use this method when there is a need to compute many extrapolations of series with similar convergence properties at high speed.  For example, when numerically integrating a function defined by a parameterized series where the parameter varies only slightly.  A reliable error estimate should be computed first using the full algorithm described above in order to verify the consistency of the results.

 -- Type: gsl_sum_levin_utrunc_workspace

     Workspace for a Levin u-transform without error estimation.

 -- Function: *note gsl_sum_levin_utrunc_workspace: a48. *gsl_sum_levin_utrunc_alloc (size_t n)

     This function allocates a workspace for a Levin u-transform of *note n: a49. terms, without error estimation.  The size of the workspace is O(3n).

 -- Function: void gsl_sum_levin_utrunc_free (gsl_sum_levin_utrunc_workspace *w)

     This function frees the memory associated with the workspace *note w: a4a.
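As an illustrative sketch only (not one of the library's documented examples), the truncated variant is driven in the same way as the full transform.  The program below applies it to the \zeta(2) series used in the example later in this chapter, using ‘gsl_sum_levin_utrunc_accel()’, which is documented next.

     #include <stdio.h>
     #include <gsl/gsl_sum.h>

     #define N 20

     int
     main (void)
     {
       double t[N], sum_accel, err_trunc;
       int n;

       gsl_sum_levin_utrunc_workspace *w = gsl_sum_levin_utrunc_alloc (N);

       /* terms of zeta(2) = \sum 1/n^2, as in the chapter example below */
       for (n = 0; n < N; n++)
         t[n] = 1.0 / ((n + 1.0) * (n + 1.0));

       gsl_sum_levin_utrunc_accel (t, N, w, &sum_accel, &err_trunc);

       printf ("accelerated sum  = %.16f\n", sum_accel);
       printf ("truncation error = %.16f\n", err_trunc);

       gsl_sum_levin_utrunc_free (w);
       return 0;
     }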
 -- Function: int gsl_sum_levin_utrunc_accel (const double *array, size_t array_size, gsl_sum_levin_utrunc_workspace *w, double *sum_accel, double *abserr_trunc)

     This function takes the terms of a series in *note array: a4b. of size *note array_size: a4b. and computes the extrapolated limit of the series using a Levin u-transform.  Additional working space must be provided in *note w: a4b.  The extrapolated sum is stored in *note sum_accel: a4b.  The actual term-by-term sum is returned in ‘w->sum_plain’.  The algorithm terminates when the difference between two successive extrapolations reaches a minimum or is sufficiently small.  The difference between these two values is used as an estimate of the error and is stored in *note abserr_trunc: a4b.

     To improve the reliability of the algorithm the extrapolated values are replaced by moving averages when calculating the truncation error, smoothing out any fluctuations.


File: gsl-ref.info, Node: Examples<25>, Next: References and Further Reading<26>, Prev: Acceleration functions without error estimation, Up: Series Acceleration

33.3 Examples
=============

The following code calculates an estimate of \zeta(2) = \pi^2 / 6 using the series,

     \zeta(2) = 1 + 1/2^2 + 1/3^2 + 1/4^2 + \dots

After ‘N’ terms the error in the sum is O(1/N), making direct summation of the series converge slowly.

     #include <stdio.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_sum.h>

     #define N 20

     int
     main (void)
     {
       double t[N];
       double sum_accel, err;
       double sum = 0;
       int n;

       gsl_sum_levin_u_workspace * w = gsl_sum_levin_u_alloc (N);

       const double zeta_2 = M_PI * M_PI / 6.0;

       /* terms for zeta(2) = \sum_{n=1}^{\infty} 1/n^2 */
       for (n = 0; n < N; n++)
         {
           double np1 = n + 1.0;
           t[n] = 1.0 / (np1 * np1);
           sum += t[n];
         }

       gsl_sum_levin_u_accel (t, N, w, &sum_accel, &err);

       printf ("term-by-term sum = % .16f using %d terms\n", sum, N);
       printf ("term-by-term sum = % .16f using %zu terms\n", w->sum_plain, w->terms_used);
       printf ("exact value      = % .16f\n", zeta_2);
       printf ("accelerated sum  = % .16f using %zu terms\n", sum_accel, w->terms_used);
       printf ("estimated error  = % .16f\n", err);
       printf ("actual error     = % .16f\n", sum_accel - zeta_2);

       gsl_sum_levin_u_free (w);
       return 0;
     }

The output below shows that the Levin u-transform is able to obtain an estimate of the sum to 1 part in 10^{10} using the first 13 terms of the series.  The error estimate returned by the function is also accurate, giving the correct number of significant digits.

     term-by-term sum = 1.5961632439130233 using 20 terms
     term-by-term sum = 1.5759958390005426 using 13 terms
     exact value      = 1.6449340668482264
     accelerated sum  = 1.6449340669228176 using 13 terms
     estimated error  = 0.0000000000888360
     actual error     = 0.0000000000745912

Note that a direct summation of this series would require 10^{10} terms to achieve the same precision as the accelerated sum does in 13 terms.


File: gsl-ref.info, Node: References and Further Reading<26>, Prev: Examples<25>, Up: Series Acceleration

33.4 References and Further Reading
===================================

The algorithms used by these functions are described in the following papers,

   * T. Fessler, W.F. Ford, D.A. Smith, HURRY: An acceleration algorithm for scalar sequences and series, `ACM Transactions on Mathematical Software', 9(3):346–354, 1983, and Algorithm 602, 9(3):355–357, 1983.

The theory of the u-transform was presented by Levin,

   * D. Levin, Development of Non-Linear Transformations for Improving Convergence of Sequences, `Intern. J. Computer Math.' B3:371–388, 1973.
A review paper on the Levin Transform is available online, * Herbert H. H. Homeier, Scalar Levin-Type Sequence Transformations, ‘http://arxiv.org/abs/math/0005209’  File: gsl-ref.info, Node: Wavelet Transforms, Next: Discrete Hankel Transforms, Prev: Series Acceleration, Up: Top 34 Wavelet Transforms ********************* This chapter describes functions for performing Discrete Wavelet Transforms (DWTs). The library includes wavelets for real data in both one and two dimensions. The wavelet functions are declared in the header files ‘gsl_wavelet.h’ and ‘gsl_wavelet2d.h’. * Menu: * Definitions: Definitions<2>. * Initialization:: * Transform Functions:: * Examples: Examples<26>. * References and Further Reading: References and Further Reading<27>.  File: gsl-ref.info, Node: Definitions<2>, Next: Initialization, Up: Wavelet Transforms 34.1 Definitions ================ The continuous wavelet transform and its inverse are defined by the relations, w(s, \tau) = \int_{-\infty}^\infty f(t) * \psi^*_{s,\tau}(t) dt and, f(t) = \int_0^\infty ds \int_{-\infty}^\infty w(s, \tau) * \psi_{s,\tau}(t) d\tau where the basis functions \psi_{s,\tau} are obtained by scaling and translation from a single function, referred to as the `mother wavelet'. The discrete version of the wavelet transform acts on equally-spaced samples, with fixed scaling and translation steps (s, \tau). The frequency and time axes are sampled `dyadically' on scales of 2^j through a level parameter j. The resulting family of functions \{\psi_{j,n}\} constitutes an orthonormal basis for square-integrable signals. The discrete wavelet transform is an O(N) algorithm, and is also referred to as the `fast wavelet transform'.  File: gsl-ref.info, Node: Initialization, Next: Transform Functions, Prev: Definitions<2>, Up: Wavelet Transforms 34.2 Initialization =================== -- Type: gsl_wavelet This structure contains the filter coefficients defining the wavelet and any associated offset parameters. -- Function: *note gsl_wavelet: a52. *gsl_wavelet_alloc (const gsl_wavelet_type *T, size_t k) This function allocates and initializes a wavelet object of type *note T: a53. The parameter *note k: a53. selects the specific member of the wavelet family. A null pointer is returned if insufficient memory is available or if a unsupported member is selected. The following wavelet types are implemented: -- Type: gsl_wavelet_type -- Variable: *note gsl_wavelet_type: a54. *gsl_wavelet_daubechies -- Variable: *note gsl_wavelet_type: a54. *gsl_wavelet_daubechies_centered This is the Daubechies wavelet family of maximum phase with k/2 vanishing moments. The implemented wavelets are k=4, 6, \dots, 20, with ‘k’ even. -- Variable: *note gsl_wavelet_type: a54. *gsl_wavelet_haar -- Variable: *note gsl_wavelet_type: a54. *gsl_wavelet_haar_centered This is the Haar wavelet. The only valid choice of k for the Haar wavelet is k=2. -- Variable: *note gsl_wavelet_type: a54. *gsl_wavelet_bspline -- Variable: *note gsl_wavelet_type: a54. *gsl_wavelet_bspline_centered This is the biorthogonal B-spline wavelet family of order (i,j). The implemented values of k = 100*i + j are 103, 105, 202, 204, 206, 208, 301, 303, 305 307, 309. The centered forms of the wavelets align the coefficients of the various sub-bands on edges. Thus the resulting visualization of the coefficients of the wavelet transform in the phase plane is easier to understand. 
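As a minimal sketch (the choice of the Daubechies family with k = 4 is arbitrary, and the functions ‘gsl_wavelet_name()’ and ‘gsl_wavelet_free()’ used here are described below), a wavelet object might be created and inspected as follows:

     #include <stdio.h>
     #include <gsl/gsl_wavelet.h>

     int
     main (void)
     {
       /* k = 4 selects the member of the Daubechies family
          with k/2 = 2 vanishing moments */
       gsl_wavelet *w = gsl_wavelet_alloc (gsl_wavelet_daubechies, 4);

       if (w == NULL)   /* documented null return on failure */
         {
           fprintf (stderr, "failed to allocate wavelet\n");
           return 1;
         }

       printf ("wavelet family: %s\n", gsl_wavelet_name (w));

       gsl_wavelet_free (w);
       return 0;
     }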
-- Function: const char *gsl_wavelet_name (const gsl_wavelet *w) This function returns a pointer to the name of the wavelet family for *note w: a5b. -- Function: void gsl_wavelet_free (gsl_wavelet *w) This function frees the wavelet object *note w: a5c. -- Type: gsl_wavelet_workspace This structure contains scratch space of the same size as the input data and is used to hold intermediate results during the transform. -- Function: *note gsl_wavelet_workspace: a5d. *gsl_wavelet_workspace_alloc (size_t n) This function allocates a workspace for the discrete wavelet transform. To perform a one-dimensional transform on *note n: a5e. elements, a workspace of size *note n: a5e. must be provided. For two-dimensional transforms of *note n: a5e.-by-*note n: a5e. matrices it is sufficient to allocate a workspace of size *note n: a5e, since the transform operates on individual rows and columns. A null pointer is returned if insufficient memory is available. -- Function: void gsl_wavelet_workspace_free (gsl_wavelet_workspace *work) This function frees the allocated workspace *note work: a5f.  File: gsl-ref.info, Node: Transform Functions, Next: Examples<26>, Prev: Initialization, Up: Wavelet Transforms 34.3 Transform Functions ======================== This sections describes the actual functions performing the discrete wavelet transform. Note that the transforms use periodic boundary conditions. If the signal is not periodic in the sample length then spurious coefficients will appear at the beginning and end of each level of the transform. * Menu: * Wavelet transforms in one dimension:: * Wavelet transforms in two dimension::  File: gsl-ref.info, Node: Wavelet transforms in one dimension, Next: Wavelet transforms in two dimension, Up: Transform Functions 34.3.1 Wavelet transforms in one dimension ------------------------------------------ -- Function: int gsl_wavelet_transform (const gsl_wavelet *w, double *data, size_t stride, size_t n, gsl_wavelet_direction dir, gsl_wavelet_workspace *work) -- Function: int gsl_wavelet_transform_forward (const gsl_wavelet *w, double *data, size_t stride, size_t n, gsl_wavelet_workspace *work) -- Function: int gsl_wavelet_transform_inverse (const gsl_wavelet *w, double *data, size_t stride, size_t n, gsl_wavelet_workspace *work) These functions compute in-place forward and inverse discrete wavelet transforms of length *note n: a64. with stride *note stride: a64. on the array *note data: a64. The length of the transform *note n: a64. is restricted to powers of two. For the ‘transform’ version of the function the argument ‘dir’ can be either ‘forward’ (+1) or ‘backward’ (-1). A workspace *note work: a64. of length *note n: a64. must be provided. For the forward transform, the elements of the original array are replaced by the discrete wavelet transform f_i \rightarrow w_{j,k} in a packed triangular storage layout, where ‘j’ is the index of the level j = 0 \dots J-1 and ‘k’ is the index of the coefficient within each level, k = 0 \dots 2^j - 1. The total number of levels is J = \log_2(n). The output data has the following form, (s_{-1,0}, d_{0,0}, d_{1,0}, d_{1,1}, d_{2,0},\cdots, d_{j,k},\cdots, d_{J-1,2^{J-1} - 1}) where the first element is the smoothing coefficient s_{-1,0}, followed by the detail coefficients d_{j,k} for each level j. The backward transform inverts these coefficients to obtain the original data. These functions return a status of ‘GSL_SUCCESS’ upon successful completion. *note GSL_EINVAL: 2b. is returned if *note n: a64. 
is not an integer power of 2 or if insufficient workspace is provided.  File: gsl-ref.info, Node: Wavelet transforms in two dimension, Prev: Wavelet transforms in one dimension, Up: Transform Functions 34.3.2 Wavelet transforms in two dimension ------------------------------------------ The library provides functions to perform two-dimensional discrete wavelet transforms on square matrices. The matrix dimensions must be an integer power of two. There are two possible orderings of the rows and columns in the two-dimensional wavelet transform, referred to as the “standard” and “non-standard” forms. The “standard” transform performs a complete discrete wavelet transform on the rows of the matrix, followed by a separate complete discrete wavelet transform on the columns of the resulting row-transformed matrix. This procedure uses the same ordering as a two-dimensional Fourier transform. The “non-standard” transform is performed in interleaved passes on the rows and columns of the matrix for each level of the transform. The first level of the transform is applied to the matrix rows, and then to the matrix columns. This procedure is then repeated across the rows and columns of the data for the subsequent levels of the transform, until the full discrete wavelet transform is complete. The non-standard form of the discrete wavelet transform is typically used in image analysis. The functions described in this section are declared in the header file ‘gsl_wavelet2d.h’. -- Function: int gsl_wavelet2d_transform (const gsl_wavelet *w, double *data, size_t tda, size_t size1, size_t size2, gsl_wavelet_direction dir, gsl_wavelet_workspace *work) -- Function: int gsl_wavelet2d_transform_forward (const gsl_wavelet *w, double *data, size_t tda, size_t size1, size_t size2, gsl_wavelet_workspace *work) -- Function: int gsl_wavelet2d_transform_inverse (const gsl_wavelet *w, double *data, size_t tda, size_t size1, size_t size2, gsl_wavelet_workspace *work) These functions compute two-dimensional in-place forward and inverse discrete wavelet transforms in standard form on the array *note data: a68. stored in row-major form with dimensions *note size1: a68. and *note size2: a68. and physical row length *note tda: a68. The dimensions must be equal (square matrix) and are restricted to powers of two. For the ‘transform’ version of the function the argument ‘dir’ can be either ‘forward’ (+1) or ‘backward’ (-1). A workspace *note work: a68. of the appropriate size must be provided. On exit, the appropriate elements of the array *note data: a68. are replaced by their two-dimensional wavelet transform. The functions return a status of ‘GSL_SUCCESS’ upon successful completion. *note GSL_EINVAL: 2b. is returned if *note size1: a68. and *note size2: a68. are not equal and integer powers of 2, or if insufficient workspace is provided. -- Function: int gsl_wavelet2d_transform_matrix (const gsl_wavelet *w, gsl_matrix *m, gsl_wavelet_direction dir, gsl_wavelet_workspace *work) -- Function: int gsl_wavelet2d_transform_matrix_forward (const gsl_wavelet *w, gsl_matrix *m, gsl_wavelet_workspace *work) -- Function: int gsl_wavelet2d_transform_matrix_inverse (const gsl_wavelet *w, gsl_matrix *m, gsl_wavelet_workspace *work) These functions compute the two-dimensional in-place wavelet transform on a matrix *note m: a6b. 
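For illustration, the following sketch (not part of the library's documented examples) performs a forward standard-form transform of a small square ‘gsl_matrix’ and then inverts it; the dimension 8, the Haar wavelet and the test pattern are arbitrary choices made for this sketch.

     #include <stdio.h>
     #include <gsl/gsl_matrix.h>
     #include <gsl/gsl_wavelet.h>
     #include <gsl/gsl_wavelet2d.h>

     int
     main (void)
     {
       const size_t n = 8;   /* dimension must be a power of two */
       size_t i, j;

       gsl_matrix *m = gsl_matrix_alloc (n, n);
       gsl_wavelet *w = gsl_wavelet_alloc (gsl_wavelet_haar, 2);
       gsl_wavelet_workspace *work = gsl_wavelet_workspace_alloc (n);

       /* fill the matrix with a simple test pattern */
       for (i = 0; i < n; i++)
         for (j = 0; j < n; j++)
           gsl_matrix_set (m, i, j, (double) (i + j));

       /* forward standard-form transform, in place */
       gsl_wavelet2d_transform_matrix_forward (w, m, work);

       /* inverting recovers the original data up to numerical error */
       gsl_wavelet2d_transform_matrix_inverse (w, m, work);

       printf ("m(0,0) after round trip = %g\n", gsl_matrix_get (m, 0, 0));

       gsl_wavelet_workspace_free (work);
       gsl_wavelet_free (w);
       gsl_matrix_free (m);
       return 0;
     }

A single workspace of length n suffices here because the transform operates on one row or column at a time.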
 -- Function: int gsl_wavelet2d_nstransform (const gsl_wavelet *w, double *data, size_t tda, size_t size1, size_t size2, gsl_wavelet_direction dir, gsl_wavelet_workspace *work)

 -- Function: int gsl_wavelet2d_nstransform_forward (const gsl_wavelet *w, double *data, size_t tda, size_t size1, size_t size2, gsl_wavelet_workspace *work)

 -- Function: int gsl_wavelet2d_nstransform_inverse (const gsl_wavelet *w, double *data, size_t tda, size_t size1, size_t size2, gsl_wavelet_workspace *work)

     These functions compute the two-dimensional wavelet transform in non-standard form.

 -- Function: int gsl_wavelet2d_nstransform_matrix (const gsl_wavelet *w, gsl_matrix *m, gsl_wavelet_direction dir, gsl_wavelet_workspace *work)

 -- Function: int gsl_wavelet2d_nstransform_matrix_forward (const gsl_wavelet *w, gsl_matrix *m, gsl_wavelet_workspace *work)

 -- Function: int gsl_wavelet2d_nstransform_matrix_inverse (const gsl_wavelet *w, gsl_matrix *m, gsl_wavelet_workspace *work)

     These functions compute the non-standard form of the two-dimensional in-place wavelet transform on a matrix *note m: a71.


File: gsl-ref.info, Node: Examples<26>, Next: References and Further Reading<27>, Prev: Transform Functions, Up: Wavelet Transforms

34.4 Examples
=============

The following program demonstrates the use of the one-dimensional wavelet transform functions.  It computes an approximation to an input signal (of length 256) using the 20 largest components of the wavelet transform, while setting the others to zero.

     #include <stdio.h>
     #include <stdlib.h>
     #include <math.h>
     #include <gsl/gsl_sort.h>
     #include <gsl/gsl_wavelet.h>

     int
     main (int argc, char **argv)
     {
       (void)(argc); /* avoid unused parameter warning */
       int i, n = 256, nc = 20;
       double *orig_data = malloc (n * sizeof (double));
       double *data = malloc (n * sizeof (double));
       double *abscoeff = malloc (n * sizeof (double));
       size_t *p = malloc (n * sizeof (size_t));

       FILE * f;
       gsl_wavelet *w;
       gsl_wavelet_workspace *work;

       w = gsl_wavelet_alloc (gsl_wavelet_daubechies, 4);
       work = gsl_wavelet_workspace_alloc (n);

       f = fopen (argv[1], "r");
       for (i = 0; i < n; i++)
         {
           fscanf (f, "%lg", &orig_data[i]);
           data[i] = orig_data[i];
         }
       fclose (f);

       gsl_wavelet_transform_forward (w, data, 1, n, work);

       for (i = 0; i < n; i++)
         {
           abscoeff[i] = fabs (data[i]);
         }

       gsl_sort_index (p, abscoeff, 1, n);

       for (i = 0; (i + nc) < n; i++)
         data[p[i]] = 0;

       gsl_wavelet_transform_inverse (w, data, 1, n, work);

       for (i = 0; i < n; i++)
         {
           printf ("%g %g\n", orig_data[i], data[i]);
         }

       gsl_wavelet_free (w);
       gsl_wavelet_workspace_free (work);

       free (data);
       free (orig_data);
       free (abscoeff);
       free (p);

       return 0;
     }

The output can be used with the GNU plotutils ‘graph’ program:

     $ ./a.out ecg.dat > dwt.txt
     $ graph -T ps -x 0 256 32 -h 0.3 -a dwt.txt > dwt.ps

The figure below shows an original and compressed version of a sample ECG recording from the MIT-BIH Arrhythmia Database, part of the PhysioNet archive of public-domain medical datasets.

[gsl-ref-figures/dwt]

Figure: Original (upper) and wavelet-compressed (lower) ECG signals, using the 20 largest components of the Daubechies(4) discrete wavelet transform.


File: gsl-ref.info, Node: References and Further Reading<27>, Prev: Examples<26>, Up: Wavelet Transforms

34.5 References and Further Reading
===================================

The mathematical background to wavelet transforms is covered in the original lectures by Daubechies,

   * Ingrid Daubechies.  Ten Lectures on Wavelets.  `CBMS-NSF Regional Conference Series in Applied Mathematics' (1992), SIAM, ISBN 0898712742.
An easy to read introduction to the subject with an emphasis on the application of the wavelet transform in various branches of science is, * Paul S. Addison. `The Illustrated Wavelet Transform Handbook'. Institute of Physics Publishing (2002), ISBN 0750306920. For extensive coverage of signal analysis by wavelets, wavelet packets and local cosine bases see, * S. G. Mallat. `A wavelet tour of signal processing' (Second edition). Academic Press (1999), ISBN 012466606X. The concept of multiresolution analysis underlying the wavelet transform is described in, * S. G. Mallat. Multiresolution Approximations and Wavelet Orthonormal Bases of L^2(R). `Transactions of the American Mathematical Society', 315(1), 1989, 69–87. * S. G. Mallat. A Theory for Multiresolution Signal Decomposition—The Wavelet Representation. `IEEE Transactions on Pattern Analysis and Machine Intelligence', 11, 1989, 674–693. The coefficients for the individual wavelet families implemented by the library can be found in the following papers, * I. Daubechies. Orthonormal Bases of Compactly Supported Wavelets. `Communications on Pure and Applied Mathematics', 41 (1988) 909–996. * A. Cohen, I. Daubechies, and J.-C. Feauveau. Biorthogonal Bases of Compactly Supported Wavelets. `Communications on Pure and Applied Mathematics', 45 (1992) 485–560. The PhysioNet archive of physiological datasets can be found online at ‘http://www.physionet.org/’ and is described in the following paper, * Goldberger et al. PhysioBank, PhysioToolkit, and PhysioNet: Components of a New Research Resource for Complex Physiologic Signals. `Circulation' 101(23):e215-e220 2000.  File: gsl-ref.info, Node: Discrete Hankel Transforms, Next: One Dimensional Root-Finding, Prev: Wavelet Transforms, Up: Top 35 Discrete Hankel Transforms ***************************** This chapter describes functions for performing Discrete Hankel Transforms (DHTs). The functions are declared in the header file ‘gsl_dht.h’. * Menu: * Definitions: Definitions<3>. * Functions: Functions<2>. * References and Further Reading: References and Further Reading<28>.  File: gsl-ref.info, Node: Definitions<3>, Next: Functions<2>, Up: Discrete Hankel Transforms 35.1 Definitions ================ The discrete Hankel transform acts on a vector of sampled data, where the samples are assumed to have been taken at points related to the zeros of a Bessel function of fixed order; compare this to the case of the discrete Fourier transform, where samples are taken at points related to the zeroes of the sine or cosine function. Starting with its definition, the Hankel transform (or Bessel transform) of order \nu of a function f with \nu > -1/2 is defined as (see Johnson, 1987 and Lemoine, 1994) F_\nu(u) = \int_0^\infty f(t) J_\nu(u t) t dt If the integral exists, F_\nu is called the Hankel transformation of f. The reverse transform is given by f(t) = \int_0^\infty F_\nu(u) J_\nu(u t) u du where \int_0^\infty f(t) t^{1/2} dt must exist and be absolutely convergent, and where f(t) satisfies Dirichlet’s conditions (of limited total fluctuations) in the interval [0,\infty]. Now the discrete Hankel transform works on a discrete function f, which is sampled on points n=1...M located at positions t_n=(j_{\nu,n}/j_{\nu,M}) X in real space and at u_n=j_{\nu,n}/X in reciprocal space. Here, j_{\nu,m} are the m-th zeros of the Bessel function J_\nu(x) arranged in ascending order. Moreover, the discrete functions are assumed to be band limited, so f(t_n)=0 and F(u_n)=0 for n>M. 
Accordingly, the function f is defined on the interval [0,X]. Following the work of Johnson, 1987 and Lemoine, 1994, the discrete Hankel transform is given by F_\nu(u_m) = (2 X^2 / j_(\nu,M)^2) \sum_{k=1}^{M-1} f(j_(\nu,k) X/j_(\nu,M)) (J_\nu(j_(\nu,m) j_(\nu,k) / j_(\nu,M)) / J_(\nu+1)(j_(\nu,k))^2). It is this discrete expression which defines the discrete Hankel transform calculated by GSL. In GSL, forward and backward transforms are defined equally and calculate F_\nu(u_m). Following Johnson, the backward transform reads f(t_k) = (2 / X^2) \sum_{m=1}^{M-1} F(j_(\nu,m)/X) (J_\nu(j_(\nu,m) j_(\nu,k) / j_(\nu,M)) / J_(\nu+1)(j_(\nu,m))^2). Obviously, using the forward transform instead of the backward transform gives an additional factor X^4/j_{\nu,M}^2=t_m^2/u_m^2. The kernel in the summation above defines the matrix of the \nu-Hankel transform of size M-1. The coefficients of this matrix, being dependent on \nu and M, must be precomputed and stored; the *note gsl_dht: a78. object encapsulates this data. The allocation function *note gsl_dht_alloc(): a79. returns a *note gsl_dht: a78. object which must be properly initialized with *note gsl_dht_init(): a7a. before it can be used to perform transforms on data sample vectors, for fixed \nu and M, using the *note gsl_dht_apply(): a7b. function. The implementation allows to define the length X of the fundamental interval, for convenience, while discrete Hankel transforms are often defined on the unit interval instead of [0,X]. Notice that by assumption f(t) vanishes at the endpoints of the interval, consistent with the inversion formula and the sampling formula given above. Therefore, this transform corresponds to an orthogonal expansion in eigenfunctions of the Dirichlet problem for the Bessel differential equation.  File: gsl-ref.info, Node: Functions<2>, Next: References and Further Reading<28>, Prev: Definitions<3>, Up: Discrete Hankel Transforms 35.2 Functions ============== -- Type: gsl_dht Workspace for computing discrete Hankel transforms -- Function: *note gsl_dht: a78. *gsl_dht_alloc (size_t size) This function allocates a Discrete Hankel transform object of size *note size: a79. -- Function: int gsl_dht_init (gsl_dht *t, double nu, double xmax) This function initializes the transform *note t: a7a. for the given values of *note nu: a7a. and *note xmax: a7a. -- Function: *note gsl_dht: a78. *gsl_dht_new (size_t size, double nu, double xmax) This function allocates a Discrete Hankel transform object of size *note size: a7d. and initializes it for the given values of *note nu: a7d. and *note xmax: a7d. -- Function: void gsl_dht_free (gsl_dht *t) This function frees the transform *note t: a7e. -- Function: int gsl_dht_apply (const gsl_dht *t, double *f_in, double *f_out) This function applies the transform *note t: a7b. to the array *note f_in: a7b. whose size is equal to the size of the transform. The result is stored in the array *note f_out: a7b. which must be of the same length. Applying this function to its output gives the original data multiplied by (X^2/j_{\nu,M})^2, up to numerical errors. -- Function: double gsl_dht_x_sample (const gsl_dht *t, int n) This function returns the value of the *note n: a7f.-th sample point in the unit interval, {({j_{\nu,n+1}} / {j_{\nu,M}}}) X. These are the points where the function f(t) is assumed to be sampled. -- Function: double gsl_dht_k_sample (const gsl_dht *t, int n) This function returns the value of the *note n: a80.-th sample point in “k-space”, {{j_{\nu,n+1}} / X}.  
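The typical calling sequence is sketched below.  This is an illustration rather than one of the library's documented examples; the sample function f(x) = 1 - x^2, the size M = 64 and the interval length X = 1 are arbitrary choices.

     #include <stdio.h>
     #include <gsl/gsl_dht.h>

     #define M 64

     int
     main (void)
     {
       double f_in[M], f_out[M];
       int n;

       /* transform of order nu = 0 on the interval [0, X] with X = 1 */
       gsl_dht *t = gsl_dht_new (M, 0.0, 1.0);

       /* sample f(x) = 1 - x^2 at the prescribed points t_n */
       for (n = 0; n < M; n++)
         {
           double x = gsl_dht_x_sample (t, n);
           f_in[n] = 1.0 - x * x;
         }

       gsl_dht_apply (t, f_in, f_out);

       /* print the transform against the reciprocal-space sample points */
       for (n = 0; n < M; n++)
         printf ("%g %g\n", gsl_dht_k_sample (t, n), f_out[n]);

       gsl_dht_free (t);
       return 0;
     }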
File: gsl-ref.info, Node: References and Further Reading<28>, Prev: Functions<2>, Up: Discrete Hankel Transforms

35.3 References and Further Reading
===================================

The algorithms used by these functions are described in the following papers,

   * H. Fisk Johnson, Comp. Phys. Comm. 43, 181 (1987).

   * D. Lemoine, J. Chem. Phys. 101, 3936 (1994).


File: gsl-ref.info, Node: One Dimensional Root-Finding, Next: One Dimensional Minimization, Prev: Discrete Hankel Transforms, Up: Top

36 One Dimensional Root-Finding
*******************************

This chapter describes routines for finding roots of arbitrary one-dimensional functions.  The library provides low level components for a variety of iterative solvers and convergence tests.  These can be combined by the user to achieve the desired solution, with full access to the intermediate steps of the iteration.  Each class of methods uses the same framework, so that you can switch between solvers at runtime without needing to recompile your program.  Each instance of a solver keeps track of its own state, allowing the solvers to be used in multi-threaded programs.

The header file ‘gsl_roots.h’ contains prototypes for the root finding functions and related declarations.

* Menu:

* Overview::
* Caveats::
* Initializing the Solver::
* Providing the function to solve::
* Search Bounds and Guesses::
* Iteration::
* Search Stopping Parameters::
* Root Bracketing Algorithms::
* Root Finding Algorithms using Derivatives::
* Examples: Examples<27>.
* References and Further Reading: References and Further Reading<29>.


File: gsl-ref.info, Node: Overview, Next: Caveats, Up: One Dimensional Root-Finding

36.1 Overview
=============

One-dimensional root finding algorithms can be divided into two classes, `root bracketing' and `root polishing'.  Algorithms which proceed by bracketing a root are guaranteed to converge.  Bracketing algorithms begin with a bounded region known to contain a root.  The size of this bounded region is reduced, iteratively, until it encloses the root to a desired tolerance.  This provides a rigorous error estimate for the location of the root.

The technique of `root polishing' attempts to improve an initial guess to the root.  These algorithms converge only if started “close enough” to a root, and sacrifice a rigorous error bound for speed.  By approximating the behavior of a function in the vicinity of a root they attempt to find a higher order improvement of an initial guess.  When the behavior of the function is compatible with the algorithm and a good initial guess is available a polishing algorithm can provide rapid convergence.

In GSL both types of algorithm are available in similar frameworks.  The user provides a high-level driver for the algorithms, and the library provides the individual functions necessary for each of the steps.  There are three main phases of the iteration.  The steps are,

   * initialize solver state, ‘s’, for algorithm ‘T’

   * update ‘s’ using the iteration ‘T’

   * test ‘s’ for convergence, and repeat iteration if necessary

The state for bracketing solvers is held in a *note gsl_root_fsolver: a85. struct.  The updating procedure uses only function evaluations (not derivatives).  The state for root polishing solvers is held in a *note gsl_root_fdfsolver: a86. struct.  The updates require both the function and its derivative (hence the name ‘fdf’) to be supplied by the user.
File: gsl-ref.info, Node: Caveats, Next: Initializing the Solver, Prev: Overview, Up: One Dimensional Root-Finding 36.2 Caveats ============ Note that root finding functions can only search for one root at a time. When there are several roots in the search area, the first root to be found will be returned; however it is difficult to predict which of the roots this will be. `In most cases, no error will be reported if you try to find a root in an area where there is more than one.' Care must be taken when a function may have a multiple root (such as f(x) = (x-x_0)^2 or f(x) = (x-x_0)^3. It is not possible to use root-bracketing algorithms on even-multiplicity roots. For these algorithms the initial interval must contain a zero-crossing, where the function is negative at one end of the interval and positive at the other end. Roots with even-multiplicity do not cross zero, but only touch it instantaneously. Algorithms based on root bracketing will still work for odd-multiplicity roots (e.g. cubic, quintic, …). Root polishing algorithms generally work with higher multiplicity roots, but at a reduced rate of convergence. In these cases the `Steffenson algorithm' can be used to accelerate the convergence of multiple roots. While it is not absolutely required that f have a root within the search region, numerical root finding functions should not be used haphazardly to check for the `existence' of roots. There are better ways to do this. Because it is easy to create situations where numerical root finders can fail, it is a bad idea to throw a root finder at a function you do not know much about. In general it is best to examine the function visually by plotting before searching for a root.  File: gsl-ref.info, Node: Initializing the Solver, Next: Providing the function to solve, Prev: Caveats, Up: One Dimensional Root-Finding 36.3 Initializing the Solver ============================ -- Type: gsl_root_fsolver This is a workspace for finding roots using methods which do not require derivatives. -- Type: gsl_root_fdfsolver This is a workspace for finding roots using methods which require derivatives. -- Function: *note gsl_root_fsolver: a85. *gsl_root_fsolver_alloc (const gsl_root_fsolver_type *T) This function returns a pointer to a newly allocated instance of a solver of type *note T: a89. For example, the following code creates an instance of a bisection solver: const gsl_root_fsolver_type * T = gsl_root_fsolver_bisection; gsl_root_fsolver * s = gsl_root_fsolver_alloc (T); If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. -- Function: *note gsl_root_fdfsolver: a86. *gsl_root_fdfsolver_alloc (const gsl_root_fdfsolver_type *T) This function returns a pointer to a newly allocated instance of a derivative-based solver of type *note T: a8a. For example, the following code creates an instance of a Newton-Raphson solver: const gsl_root_fdfsolver_type * T = gsl_root_fdfsolver_newton; gsl_root_fdfsolver * s = gsl_root_fdfsolver_alloc (T); If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. -- Function: int gsl_root_fsolver_set (gsl_root_fsolver *s, gsl_function *f, double x_lower, double x_upper) This function initializes, or reinitializes, an existing solver *note s: a8b. to use the function *note f: a8b. 
and the initial search interval [*note x_lower: a8b, *note x_upper: a8b.]. -- Function: int gsl_root_fdfsolver_set (gsl_root_fdfsolver *s, gsl_function_fdf *fdf, double root) This function initializes, or reinitializes, an existing solver *note s: a8c. to use the function and derivative *note fdf: a8c. and the initial guess *note root: a8c. -- Function: void gsl_root_fsolver_free (gsl_root_fsolver *s) -- Function: void gsl_root_fdfsolver_free (gsl_root_fdfsolver *s) These functions free all the memory associated with the solver *note s: a8e. -- Function: const char *gsl_root_fsolver_name (const gsl_root_fsolver *s) -- Function: const char *gsl_root_fdfsolver_name (const gsl_root_fdfsolver *s) These functions return a pointer to the name of the solver. For example: printf ("s is a '%s' solver\n", gsl_root_fsolver_name (s)); would print something like ‘s is a 'bisection' solver’.  File: gsl-ref.info, Node: Providing the function to solve, Next: Search Bounds and Guesses, Prev: Initializing the Solver, Up: One Dimensional Root-Finding 36.4 Providing the function to solve ==================================== You must provide a continuous function of one variable for the root finders to operate on, and, sometimes, its first derivative. In order to allow for general parameters the functions are defined by the following data types: -- Type: gsl_function This data type defines a general function with parameters. ‘double (* function) (double x, void * params)’ this function should return the value f(x,params) for argument ‘x’ and parameters ‘params’ ‘void * params’ a pointer to the parameters of the function Here is an example for the general quadratic function, f(x) = a x^2 + b x + c with a = 3, b = 2, c = 1. The following code defines a *note gsl_function: a93. ‘F’ which you could pass to a root finder as a function pointer: struct my_f_params { double a; double b; double c; }; double my_f (double x, void * p) { struct my_f_params * params = (struct my_f_params *)p; double a = (params->a); double b = (params->b); double c = (params->c); return (a * x + b) * x + c; } gsl_function F; struct my_f_params params = { 3.0, 2.0, 1.0 }; F.function = &my_f; F.params = ¶ms; The function f(x) can be evaluated using the macro ‘GSL_FN_EVAL(&F,x)’ defined in ‘gsl_math.h’. -- Type: gsl_function_fdf This data type defines a general function with parameters and its first derivative. ‘double (* f) (double x, void * params)’ this function should return the value of f(x,params) for argument ‘x’ and parameters ‘params’ ‘double (* df) (double x, void * params)’ this function should return the value of the derivative of ‘f’ with respect to ‘x’, f'(x,params), for argument ‘x’ and parameters ‘params’ ‘void (* fdf) (double x, void * params, double * f, double * df)’ this function should set the values of the function ‘f’ to f(x,params) and its derivative ‘df’ to f'(x,params) for argument ‘x’ and parameters ‘params’. This function provides an optimization of the separate functions for f(x) and f'(x)—it is always faster to compute the function and its derivative at the same time. 
‘void * params’ a pointer to the parameters of the function Here is an example where f(x) = \exp(2x): double my_f (double x, void * params) { return exp (2 * x); } double my_df (double x, void * params) { return 2 * exp (2 * x); } void my_fdf (double x, void * params, double * f, double * df) { double t = exp (2 * x); *f = t; *df = 2 * t; /* uses existing value */ } gsl_function_fdf FDF; FDF.f = &my_f; FDF.df = &my_df; FDF.fdf = &my_fdf; FDF.params = 0; The function f(x) can be evaluated using the macro ‘GSL_FN_FDF_EVAL_F(&FDF,x)’ and the derivative f'(x) can be evaluated using the macro ‘GSL_FN_FDF_EVAL_DF(&FDF,x)’. Both the function y = f(x) and its derivative dy = f'(x) can be evaluated at the same time using the macro ‘GSL_FN_FDF_EVAL_F_DF(&FDF,x,y,dy)’. The macro stores f(x) in its ‘y’ argument and f'(x) in its ‘dy’ argument—both of these should be pointers to ‘double’.  File: gsl-ref.info, Node: Search Bounds and Guesses, Next: Iteration, Prev: Providing the function to solve, Up: One Dimensional Root-Finding 36.5 Search Bounds and Guesses ============================== You provide either search bounds or an initial guess; this section explains how search bounds and guesses work and how function arguments control them. A guess is simply an x value which is iterated until it is within the desired precision of a root. It takes the form of a ‘double’. Search bounds are the endpoints of an interval which is iterated until the length of the interval is smaller than the requested precision. The interval is defined by two values, the lower limit and the upper limit. Whether the endpoints are intended to be included in the interval or not depends on the context in which the interval is used.  File: gsl-ref.info, Node: Iteration, Next: Search Stopping Parameters, Prev: Search Bounds and Guesses, Up: One Dimensional Root-Finding 36.6 Iteration ============== The following functions drive the iteration of each algorithm. Each function performs one iteration to update the state of any solver of the corresponding type. The same functions work for all solvers so that different methods can be substituted at runtime without modifications to the code. -- Function: int gsl_root_fsolver_iterate (gsl_root_fsolver *s) -- Function: int gsl_root_fdfsolver_iterate (gsl_root_fdfsolver *s) These functions perform a single iteration of the solver *note s: a98. If the iteration encounters an unexpected problem then an error code will be returned, ‘GSL_EBADFUNC’ the iteration encountered a singular point where the function or its derivative evaluated to ‘Inf’ or ‘NaN’. ‘GSL_EZERODIV’ the derivative of the function vanished at the iteration point, preventing the algorithm from continuing without a division by zero. The solver maintains a current best estimate of the root at all times. The bracketing solvers also keep track of the current best interval bounding the root. This information can be accessed with the following auxiliary functions, -- Function: double gsl_root_fsolver_root (const gsl_root_fsolver *s) -- Function: double gsl_root_fdfsolver_root (const gsl_root_fdfsolver *s) These functions return the current estimate of the root for the solver *note s: a9a. -- Function: double gsl_root_fsolver_x_lower (const gsl_root_fsolver *s) -- Function: double gsl_root_fsolver_x_upper (const gsl_root_fsolver *s) These functions return the current bracketing interval for the solver *note s: a9c.  
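As an illustrative sketch of this iteration loop (the test function f(x) = \cos(x), the bracket [0, 3] and the tolerances are arbitrary choices; the Brent solver and the convergence test ‘gsl_root_test_interval()’ used here are described in the following sections):

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_errno.h>
     #include <gsl/gsl_math.h>
     #include <gsl/gsl_roots.h>

     /* example function for this sketch: f(x) = cos(x), root at pi/2 */
     double
     my_cos (double x, void *params)
     {
       (void)(params); /* avoid unused parameter warning */
       return cos (x);
     }

     int
     main (void)
     {
       int status, iter = 0, max_iter = 100;
       double r, x_lo = 0.0, x_hi = 3.0;

       gsl_function F;
       F.function = &my_cos;
       F.params = 0;

       gsl_root_fsolver *s = gsl_root_fsolver_alloc (gsl_root_fsolver_brent);
       gsl_root_fsolver_set (s, &F, x_lo, x_hi);

       do
         {
           iter++;
           status = gsl_root_fsolver_iterate (s);
           r = gsl_root_fsolver_root (s);
           x_lo = gsl_root_fsolver_x_lower (s);
           x_hi = gsl_root_fsolver_x_upper (s);

           /* convergence test, described in the next section */
           status = gsl_root_test_interval (x_lo, x_hi, 0.0, 1e-6);
         }
       while (status == GSL_CONTINUE && iter < max_iter);

       printf ("root = %.10f (pi/2 = %.10f)\n", r, M_PI / 2.0);

       gsl_root_fsolver_free (s);
       return 0;
     }

The same loop structure applies to the derivative-based solvers, with ‘gsl_root_fdfsolver_iterate()’ and ‘gsl_root_fdfsolver_root()’ in place of the bracketing calls.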
File: gsl-ref.info, Node: Search Stopping Parameters, Next: Root Bracketing Algorithms, Prev: Iteration, Up: One Dimensional Root-Finding 36.7 Search Stopping Parameters =============================== A root finding procedure should stop when one of the following conditions is true: * A root has been found to within the user-specified precision. * A user-specified maximum number of iterations has been reached. * An error has occurred. The handling of these conditions is under user control. The functions below allow the user to test the precision of the current result in several standard ways. -- Function: int gsl_root_test_interval (double x_lower, double x_upper, double epsabs, double epsrel) This function tests for the convergence of the interval [*note x_lower: a9e, *note x_upper: a9e.] with absolute error *note epsabs: a9e. and relative error *note epsrel: a9e. The test returns ‘GSL_SUCCESS’ if the following condition is achieved, |a - b| < epsabs + epsrel min(|a|,|b|) when the interval x = [a,b] does not include the origin. If the interval includes the origin then \min(|a|,|b|) is replaced by zero (which is the minimum value of |x| over the interval). This ensures that the relative error is accurately estimated for roots close to the origin. This condition on the interval also implies that any estimate of the root r in the interval satisfies the same condition with respect to the true root r^*, |r - r^*| < epsabs + epsrel r^* assuming that the true root r^* is contained within the interval. -- Function: int gsl_root_test_delta (double x1, double x0, double epsabs, double epsrel) This function tests for the convergence of the sequence *note x0: a9f, *note x1: a9f. with absolute error *note epsabs: a9f. and relative error *note epsrel: a9f. The test returns ‘GSL_SUCCESS’ if the following condition is achieved, |x_1 - x_0| < epsabs + epsrel |x_1| and returns ‘GSL_CONTINUE’ otherwise. -- Function: int gsl_root_test_residual (double f, double epsabs) This function tests the residual value *note f: aa0. against the absolute error bound *note epsabs: aa0. The test returns ‘GSL_SUCCESS’ if the following condition is achieved, |f| < epsabs and returns ‘GSL_CONTINUE’ otherwise. This criterion is suitable for situations where the precise location of the root, x, is unimportant provided a value can be found where the residual, |f(x)|, is small enough.  File: gsl-ref.info, Node: Root Bracketing Algorithms, Next: Root Finding Algorithms using Derivatives, Prev: Search Stopping Parameters, Up: One Dimensional Root-Finding 36.8 Root Bracketing Algorithms =============================== The root bracketing algorithms described in this section require an initial interval which is guaranteed to contain a root—if a and b are the endpoints of the interval then f(a) must differ in sign from f(b). This ensures that the function crosses zero at least once in the interval. If a valid initial interval is used then these algorithms cannot fail, provided the function is well-behaved. Note that a bracketing algorithm cannot find roots of even degree, since these do not cross the x-axis. -- Type: gsl_root_fsolver_type -- Variable: *note gsl_root_fsolver_type: aa2. *gsl_root_fsolver_bisection The `bisection algorithm' is the simplest method of bracketing the roots of a function. It is the slowest algorithm provided by the library, with linear convergence. On each iteration, the interval is bisected and the value of the function at the midpoint is calculated.
The sign of this value is used to determine which half of the interval does not contain a root. That half is discarded to give a new, smaller interval containing the root. This procedure can be continued indefinitely until the interval is sufficiently small. At any time the current estimate of the root is taken as the midpoint of the interval. -- Variable: *note gsl_root_fsolver_type: aa2. *gsl_root_fsolver_falsepos The `false position algorithm' is a method of finding roots based on linear interpolation. Its convergence is linear, but it is usually faster than bisection. On each iteration a line is drawn between the endpoints (a,f(a)) and (b,f(b)) and the point where this line crosses the x-axis is taken as a “midpoint”. The value of the function at this point is calculated and its sign is used to determine which side of the interval does not contain a root. That side is discarded to give a new, smaller interval containing the root. This procedure can be continued indefinitely until the interval is sufficiently small. The best estimate of the root is taken from the linear interpolation of the interval on the current iteration. -- Variable: *note gsl_root_fsolver_type: aa2. *gsl_root_fsolver_brent The `Brent-Dekker method' (referred to here as `Brent’s method') combines an interpolation strategy with the bisection algorithm. This produces a fast algorithm which is still robust. On each iteration Brent’s method approximates the function using an interpolating curve. On the first iteration this is a linear interpolation of the two endpoints. For subsequent iterations the algorithm uses an inverse quadratic fit to the last three points, for higher accuracy. The intercept of the interpolating curve with the x-axis is taken as a guess for the root. If it lies within the bounds of the current interval then the interpolating point is accepted, and used to generate a smaller interval. If the interpolating point is not accepted then the algorithm falls back to an ordinary bisection step. The best estimate of the root is taken from the most recent interpolation or bisection.  File: gsl-ref.info, Node: Root Finding Algorithms using Derivatives, Next: Examples<27>, Prev: Root Bracketing Algorithms, Up: One Dimensional Root-Finding 36.9 Root Finding Algorithms using Derivatives ============================================== The root polishing algorithms described in this section require an initial guess for the location of the root. There is no absolute guarantee of convergence—the function must be suitable for this technique and the initial guess must be sufficiently close to the root for it to work. When these conditions are satisfied then convergence is quadratic. These algorithms make use of both the function and its derivative. -- Type: gsl_root_fdfsolver_type -- Variable: *note gsl_root_fdfsolver_type: aa7. *gsl_root_fdfsolver_newton Newton’s Method is the standard root-polishing algorithm. The algorithm begins with an initial guess for the location of the root. On each iteration, a line tangent to the function f is drawn at that position. The point where this line crosses the x-axis becomes the new guess. The iteration is defined by the following sequence, x_{i+1} = x_i - f(x_i)/f'(x_i) Newton’s method converges quadratically for single roots, and linearly for multiple roots. -- Variable: *note gsl_root_fdfsolver_type: aa7. *gsl_root_fdfsolver_secant The `secant method' is a simplified version of Newton’s method which does not require the computation of the derivative on every step.
On its first iteration the algorithm begins with Newton’s method, using the derivative to compute a first step, x_1 = x_0 - f(x_0)/f'(x_0) Subsequent iterations avoid the evaluation of the derivative by replacing it with a numerical estimate, the slope of the line through the previous two points, x_{i+1} = x_i - f(x_i) / f'_{est} where f'_{est} = (f(x_i) - f(x_{i-1})) / (x_i - x_{i-1}) When the derivative does not change significantly in the vicinity of the root the secant method gives a useful saving. Asymptotically the secant method is faster than Newton’s method whenever the cost of evaluating the derivative is more than 0.44 times the cost of evaluating the function itself. As with all methods of computing a numerical derivative the estimate can suffer from cancellation errors if the separation of the points becomes too small. On single roots, the method has a convergence of order (1 + \sqrt 5)/2 (approximately 1.62). It converges linearly for multiple roots. -- Variable: *note gsl_root_fdfsolver_type: aa7. *gsl_root_fdfsolver_steffenson The `Steffenson Method' (1) provides the fastest convergence of all the routines. It combines the basic Newton algorithm with an Aitken “delta-squared” acceleration. If the Newton iterates are x_i then the acceleration procedure generates a new sequence R_i, R_i = x_i - (x_{i+1} - x_i)^2 / (x_{i+2} - 2 x_{i+1} + x_{i}) which converges faster than the original sequence under reasonable conditions. The new sequence requires three terms before it can produce its first value so the method returns accelerated values on the second and subsequent iterations. On the first iteration it returns the ordinary Newton estimate. The Newton iterate is also returned if the denominator of the acceleration term ever becomes zero. As with all acceleration procedures this method can become unstable if the function is not well-behaved. ---------- Footnotes ---------- (1) J.F. Steffensen (1873–1961). The spelling used in the name of the function is slightly incorrect, but has been preserved to avoid incompatibility.  File: gsl-ref.info, Node: Examples<27>, Next: References and Further Reading<29>, Prev: Root Finding Algorithms using Derivatives, Up: One Dimensional Root-Finding 36.10 Examples ============== For any root finding algorithm we need to prepare the function to be solved. For this example we will use the general quadratic equation described earlier.
We first need a header file (‘demo_fn.h’) to define the function parameters, struct quadratic_params { double a, b, c; }; double quadratic (double x, void *params); double quadratic_deriv (double x, void *params); void quadratic_fdf (double x, void *params, double *y, double *dy); We place the function definitions in a separate file (‘demo_fn.c’), double quadratic (double x, void *params) { struct quadratic_params *p = (struct quadratic_params *) params; double a = p->a; double b = p->b; double c = p->c; return (a * x + b) * x + c; } double quadratic_deriv (double x, void *params) { struct quadratic_params *p = (struct quadratic_params *) params; double a = p->a; double b = p->b; return 2.0 * a * x + b; } void quadratic_fdf (double x, void *params, double *y, double *dy) { struct quadratic_params *p = (struct quadratic_params *) params; double a = p->a; double b = p->b; double c = p->c; *y = (a * x + b) * x + c; *dy = 2.0 * a * x + b; } The first program uses the function solver ‘gsl_root_fsolver_brent’ for Brent’s method and the general quadratic defined above to solve the following equation, x^2 - 5 = 0 with solution x = \sqrt 5 = 2.236068... #include <stdio.h> #include <gsl/gsl_errno.h> #include <gsl/gsl_math.h> #include <gsl/gsl_roots.h> #include "demo_fn.h" #include "demo_fn.c" int main (void) { int status; int iter = 0, max_iter = 100; const gsl_root_fsolver_type *T; gsl_root_fsolver *s; double r = 0, r_expected = sqrt (5.0); double x_lo = 0.0, x_hi = 5.0; gsl_function F; struct quadratic_params params = {1.0, 0.0, -5.0}; F.function = &quadratic; F.params = &params; T = gsl_root_fsolver_brent; s = gsl_root_fsolver_alloc (T); gsl_root_fsolver_set (s, &F, x_lo, x_hi); printf ("using %s method\n", gsl_root_fsolver_name (s)); printf ("%5s [%9s, %9s] %9s %10s %9s\n", "iter", "lower", "upper", "root", "err", "err(est)"); do { iter++; status = gsl_root_fsolver_iterate (s); r = gsl_root_fsolver_root (s); x_lo = gsl_root_fsolver_x_lower (s); x_hi = gsl_root_fsolver_x_upper (s); status = gsl_root_test_interval (x_lo, x_hi, 0, 0.001); if (status == GSL_SUCCESS) printf ("Converged:\n"); printf ("%5d [%.7f, %.7f] %.7f %+.7f %.7f\n", iter, x_lo, x_hi, r, r - r_expected, x_hi - x_lo); } while (status == GSL_CONTINUE && iter < max_iter); gsl_root_fsolver_free (s); return status; } Here are the results of the iterations: $ ./a.out using brent method iter [ lower, upper] root err err(est) 1 [1.0000000, 5.0000000] 1.0000000 -1.2360680 4.0000000 2 [1.0000000, 3.0000000] 3.0000000 +0.7639320 2.0000000 3 [2.0000000, 3.0000000] 2.0000000 -0.2360680 1.0000000 4 [2.2000000, 3.0000000] 2.2000000 -0.0360680 0.8000000 5 [2.2000000, 2.2366300] 2.2366300 +0.0005621 0.0366300 Converged: 6 [2.2360634, 2.2366300] 2.2360634 -0.0000046 0.0005666 If the program is modified to use the bisection solver instead of Brent’s method, by changing ‘gsl_root_fsolver_brent’ to ‘gsl_root_fsolver_bisection’ the slower convergence of the bisection method can be observed: $ ./a.out using bisection method iter [ lower, upper] root err err(est) 1 [0.0000000, 2.5000000] 1.2500000 -0.9860680 2.5000000 2 [1.2500000, 2.5000000] 1.8750000 -0.3610680 1.2500000 3 [1.8750000, 2.5000000] 2.1875000 -0.0485680 0.6250000 4 [2.1875000, 2.5000000] 2.3437500 +0.1076820 0.3125000 5 [2.1875000, 2.3437500] 2.2656250 +0.0295570 0.1562500 6 [2.1875000, 2.2656250] 2.2265625 -0.0095055 0.0781250 7 [2.2265625, 2.2656250] 2.2460938 +0.0100258 0.0390625 8 [2.2265625, 2.2460938] 2.2363281 +0.0002601 0.0195312 9 [2.2265625, 2.2363281] 2.2314453 -0.0046227 0.0097656 10 [2.2314453, 2.2363281] 2.2338867 -0.0021813 0.0048828 11
[2.2338867, 2.2363281] 2.2351074 -0.0009606 0.0024414 Converged: 12 [2.2351074, 2.2363281] 2.2357178 -0.0003502 0.0012207 The next program solves the same function using a derivative solver instead. #include <stdio.h> #include <gsl/gsl_errno.h> #include <gsl/gsl_math.h> #include <gsl/gsl_roots.h> #include "demo_fn.h" #include "demo_fn.c" int main (void) { int status; int iter = 0, max_iter = 100; const gsl_root_fdfsolver_type *T; gsl_root_fdfsolver *s; double x0, x = 5.0, r_expected = sqrt (5.0); gsl_function_fdf FDF; struct quadratic_params params = {1.0, 0.0, -5.0}; FDF.f = &quadratic; FDF.df = &quadratic_deriv; FDF.fdf = &quadratic_fdf; FDF.params = &params; T = gsl_root_fdfsolver_newton; s = gsl_root_fdfsolver_alloc (T); gsl_root_fdfsolver_set (s, &FDF, x); printf ("using %s method\n", gsl_root_fdfsolver_name (s)); printf ("%-5s %10s %10s %10s\n", "iter", "root", "err", "err(est)"); do { iter++; status = gsl_root_fdfsolver_iterate (s); x0 = x; x = gsl_root_fdfsolver_root (s); status = gsl_root_test_delta (x, x0, 0, 1e-3); if (status == GSL_SUCCESS) printf ("Converged:\n"); printf ("%5d %10.7f %+10.7f %10.7f\n", iter, x, x - r_expected, x - x0); } while (status == GSL_CONTINUE && iter < max_iter); gsl_root_fdfsolver_free (s); return status; } Here are the results for Newton’s method: $ ./a.out using newton method iter root err err(est) 1 3.0000000 +0.7639320 -2.0000000 2 2.3333333 +0.0972654 -0.6666667 3 2.2380952 +0.0020273 -0.0952381 Converged: 4 2.2360689 +0.0000009 -0.0020263 Note that the error can be estimated more accurately by taking the difference between the current iterate and next iterate rather than the previous iterate. The other derivative solvers can be investigated by changing ‘gsl_root_fdfsolver_newton’ to ‘gsl_root_fdfsolver_secant’ or ‘gsl_root_fdfsolver_steffenson’.  File: gsl-ref.info, Node: References and Further Reading<29>, Prev: Examples<27>, Up: One Dimensional Root-Finding 36.11 References and Further Reading ==================================== For information on the Brent-Dekker algorithm see the following two papers, * R. P. Brent, “An algorithm with guaranteed convergence for finding a zero of a function”, `Computer Journal', 14 (1971) 422–425 * J. C. P. Bus and T. J. Dekker, “Two Efficient Algorithms with Guaranteed Convergence for Finding a Zero of a Function”, `ACM Transactions on Mathematical Software', Vol. 1, No. 4 (1975) 330–345  File: gsl-ref.info, Node: One Dimensional Minimization, Next: Multidimensional Root-Finding, Prev: One Dimensional Root-Finding, Up: Top 37 One Dimensional Minimization ******************************* This chapter describes routines for finding minima of arbitrary one-dimensional functions. The library provides low level components for a variety of iterative minimizers and convergence tests. These can be combined by the user to achieve the desired solution, with full access to the intermediate steps of the algorithms. Each class of methods uses the same framework, so that you can switch between minimizers at runtime without needing to recompile your program. Each instance of a minimizer keeps track of its own state, allowing the minimizers to be used in multi-threaded programs. The header file ‘gsl_min.h’ contains prototypes for the minimization functions and related declarations. To use the minimization algorithms to find the maximum of a function simply invert its sign. * Menu: * Overview: Overview<2>. * Caveats: Caveats<2>. * Initializing the Minimizer:: * Providing the function to minimize:: * Iteration: Iteration<2>.
* Stopping Parameters:: * Minimization Algorithms:: * Examples: Examples<28>. * References and Further Reading: References and Further Reading<30>.  File: gsl-ref.info, Node: Overview<2>, Next: Caveats<2>, Up: One Dimensional Minimization 37.1 Overview ============= The minimization algorithms begin with a bounded region known to contain a minimum. The region is described by a lower bound a and an upper bound b, with an estimate of the location of the minimum x, as shown in the figure below. [gsl-ref-figures/min-interval] Figure: Function with lower and upper bounds with an estimate of the minimum. The value of the function at x must be less than the value of the function at the ends of the interval, f(a) > f(x) < f(b) This condition guarantees that a minimum is contained somewhere within the interval. On each iteration a new point x' is selected using one of the available algorithms. If the new point is a better estimate of the minimum, i.e. where f(x') < f(x), then the current estimate of the minimum x is updated. The new point also allows the size of the bounded interval to be reduced, by choosing the most compact set of points which satisfies the constraint f(a) > f(x) < f(b). The interval is reduced until it encloses the true minimum to a desired tolerance. This provides a best estimate of the location of the minimum and a rigorous error estimate. Several bracketing algorithms are available within a single framework. The user provides a high-level driver for the algorithm, and the library provides the individual functions necessary for each of the steps. There are three main phases of the iteration. The steps are, * initialize minimizer state, ‘s’, for algorithm ‘T’ * update ‘s’ using the iteration ‘T’ * test ‘s’ for convergence, and repeat iteration if necessary The state for the minimizers is held in a *note gsl_min_fminimizer: ab1. struct. The updating procedure uses only function evaluations (not derivatives).  File: gsl-ref.info, Node: Caveats<2>, Next: Initializing the Minimizer, Prev: Overview<2>, Up: One Dimensional Minimization 37.2 Caveats ============ Note that minimization functions can only search for one minimum at a time. When there are several minima in the search area, the first minimum to be found will be returned; however it is difficult to predict which of the minima this will be. `In most cases, no error will be reported if you try to find a minimum in an area where there is more than one.' With all minimization algorithms it can be difficult to determine the location of the minimum to full numerical precision. The behavior of the function in the region of the minimum x^* can be approximated by a Taylor expansion, y = f(x^*) + (1/2) f''(x^*) (x - x^*)^2 and the second term of this expansion can be lost when added to the first term at finite precision. This magnifies the error in locating x^*, making it proportional to \sqrt \epsilon (where \epsilon is the relative accuracy of the floating point numbers). For functions with higher order minima, such as x^4, the magnification of the error is correspondingly worse. The best that can be achieved is to converge to the limit of numerical accuracy in the function values, rather than the location of the minimum itself.  File: gsl-ref.info, Node: Initializing the Minimizer, Next: Providing the function to minimize, Prev: Caveats<2>, Up: One Dimensional Minimization 37.3 Initializing the Minimizer =============================== -- Type: gsl_min_fminimizer This is a workspace for minimizing functions.
-- Function: *note gsl_min_fminimizer: ab1. *gsl_min_fminimizer_alloc (const gsl_min_fminimizer_type *T) This function returns a pointer to a newly allocated instance of a minimizer of type *note T: ab4. For example, the following code creates an instance of a golden section minimizer: const gsl_min_fminimizer_type * T = gsl_min_fminimizer_goldensection; gsl_min_fminimizer * s = gsl_min_fminimizer_alloc (T); If there is insufficient memory to create the minimizer then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. -- Function: int gsl_min_fminimizer_set (gsl_min_fminimizer *s, gsl_function *f, double x_minimum, double x_lower, double x_upper) This function sets, or resets, an existing minimizer *note s: ab5. to use the function *note f: ab5. and the initial search interval [*note x_lower: ab5, *note x_upper: ab5.], with a guess for the location of the minimum *note x_minimum: ab5. If the interval given does not contain a minimum, then the function returns an error code of *note GSL_EINVAL: 2b. -- Function: int gsl_min_fminimizer_set_with_values (gsl_min_fminimizer *s, gsl_function *f, double x_minimum, double f_minimum, double x_lower, double f_lower, double x_upper, double f_upper) This function is equivalent to *note gsl_min_fminimizer_set(): ab5. but uses the values *note f_minimum: ab6, *note f_lower: ab6. and *note f_upper: ab6. instead of computing ‘f(x_minimum)’, ‘f(x_lower)’ and ‘f(x_upper)’. -- Function: void gsl_min_fminimizer_free (gsl_min_fminimizer *s) This function frees all the memory associated with the minimizer *note s: ab7. -- Function: const char *gsl_min_fminimizer_name (const gsl_min_fminimizer *s) This function returns a pointer to the name of the minimizer. For example: printf ("s is a '%s' minimizer\n", gsl_min_fminimizer_name (s)); would print something like ‘s is a 'brent' minimizer’.  File: gsl-ref.info, Node: Providing the function to minimize, Next: Iteration<2>, Prev: Initializing the Minimizer, Up: One Dimensional Minimization 37.4 Providing the function to minimize ======================================= You must provide a continuous function of one variable for the minimizers to operate on. In order to allow for general parameters the functions are defined by a *note gsl_function: a93. data type (*note Providing the function to solve: a91.).  File: gsl-ref.info, Node: Iteration<2>, Next: Stopping Parameters, Prev: Providing the function to minimize, Up: One Dimensional Minimization 37.5 Iteration ============== The following functions drive the iteration of each algorithm. Each function performs one iteration to update the state of any minimizer of the corresponding type. The same functions work for all minimizers so that different methods can be substituted at runtime without modifications to the code. -- Function: int gsl_min_fminimizer_iterate (gsl_min_fminimizer *s) This function performs a single iteration of the minimizer *note s: abb. If the iteration encounters an unexpected problem then an error code will be returned, ‘GSL_EBADFUNC’ the iteration encountered a singular point where the function evaluated to ‘Inf’ or ‘NaN’. ‘GSL_FAILURE’ the algorithm could not improve the current best approximation or bounding interval. The minimizer maintains a current best estimate of the position of the minimum at all times, and the current interval bounding the minimum. 
This information can be accessed with the following auxiliary functions, -- Function: double gsl_min_fminimizer_x_minimum (const gsl_min_fminimizer *s) This function returns the current estimate of the position of the minimum for the minimizer *note s: abc. -- Function: double gsl_min_fminimizer_x_upper (const gsl_min_fminimizer *s) -- Function: double gsl_min_fminimizer_x_lower (const gsl_min_fminimizer *s) These functions return the current upper and lower bound of the interval for the minimizer *note s: abe. -- Function: double gsl_min_fminimizer_f_minimum (const gsl_min_fminimizer *s) -- Function: double gsl_min_fminimizer_f_upper (const gsl_min_fminimizer *s) -- Function: double gsl_min_fminimizer_f_lower (const gsl_min_fminimizer *s) These functions return the value of the function at the current estimate of the minimum and at the upper and lower bounds of the interval for the minimizer *note s: ac1.  File: gsl-ref.info, Node: Stopping Parameters, Next: Minimization Algorithms, Prev: Iteration<2>, Up: One Dimensional Minimization 37.6 Stopping Parameters ======================== A minimization procedure should stop when one of the following conditions is true: * A minimum has been found to within the user-specified precision. * A user-specified maximum number of iterations has been reached. * An error has occurred. The handling of these conditions is under user control. The function below allows the user to test the precision of the current result. -- Function: int gsl_min_test_interval (double x_lower, double x_upper, double epsabs, double epsrel) This function tests for the convergence of the interval [*note x_lower: ac3, *note x_upper: ac3.] with absolute error *note epsabs: ac3. and relative error *note epsrel: ac3. The test returns ‘GSL_SUCCESS’ if the following condition is achieved, |a - b| < epsabs + epsrel min(|a|,|b|) when the interval x = [a,b] does not include the origin. If the interval includes the origin then \min(|a|,|b|) is replaced by zero (which is the minimum value of |x| over the interval). This ensures that the relative error is accurately estimated for minima close to the origin. This condition on the interval also implies that any estimate of the minimum x_m in the interval satisfies the same condition with respect to the true minimum x_m^*, |x_m - x_m^*| < epsabs + epsrel x_m^* assuming that the true minimum x_m^* is contained within the interval.  File: gsl-ref.info, Node: Minimization Algorithms, Next: Examples<28>, Prev: Stopping Parameters, Up: One Dimensional Minimization 37.7 Minimization Algorithms ============================ The minimization algorithms described in this section require an initial interval which is guaranteed to contain a minimum—if a and b are the endpoints of the interval and x is an estimate of the minimum then f(a) > f(x) < f(b). This ensures that the function has at least one minimum somewhere in the interval. If a valid initial interval is used then these algorithms cannot fail, provided the function is well-behaved. -- Type: gsl_min_fminimizer_type -- Variable: *note gsl_min_fminimizer_type: ac5. *gsl_min_fminimizer_goldensection The `golden section algorithm' is the simplest method of bracketing the minimum of a function. It is the slowest algorithm provided by the library, with linear convergence. On each iteration, the algorithm first compares the subintervals from the endpoints to the current minimum.
The larger subinterval is divided in a golden section (using the famous ratio (3-\sqrt 5)/2 \approx 0.3819660) and the value of the function at this new point is calculated. The new value is used with the constraint f(a') > f(x') < f(b') to select a new interval containing the minimum, by discarding the least useful point. This procedure can be continued indefinitely until the interval is sufficiently small. Choosing the golden section as the bisection ratio can be shown to provide the fastest convergence for this type of algorithm. -- Variable: *note gsl_min_fminimizer_type: ac5. *gsl_min_fminimizer_brent The `Brent minimization algorithm' combines a parabolic interpolation with the golden section algorithm. This produces a fast algorithm which is still robust. The outline of the algorithm can be summarized as follows: on each iteration Brent’s method approximates the function using an interpolating parabola through three existing points. The minimum of the parabola is taken as a guess for the minimum. If it lies within the bounds of the current interval then the interpolating point is accepted, and used to generate a smaller interval. If the interpolating point is not accepted then the algorithm falls back to an ordinary golden section step. The full details of Brent’s method include some additional checks to improve convergence. -- Variable: *note gsl_min_fminimizer_type: ac5. *gsl_min_fminimizer_quad_golden This is a variant of Brent’s algorithm which uses the safeguarded step-length algorithm of Gill and Murray.  File: gsl-ref.info, Node: Examples<28>, Next: References and Further Reading<30>, Prev: Minimization Algorithms, Up: One Dimensional Minimization 37.8 Examples ============= The following program uses the Brent algorithm to find the minimum of the function f(x) = \cos(x) + 1, which occurs at x = \pi. The starting interval is (0,6), with an initial guess for the minimum of 2. #include <stdio.h> #include <gsl/gsl_errno.h> #include <gsl/gsl_math.h> #include <gsl/gsl_min.h> double fn1 (double x, void * params) { (void)(params); /* avoid unused parameter warning */ return cos(x) + 1.0; } int main (void) { int status; int iter = 0, max_iter = 100; const gsl_min_fminimizer_type *T; gsl_min_fminimizer *s; double m = 2.0, m_expected = M_PI; double a = 0.0, b = 6.0; gsl_function F; F.function = &fn1; F.params = 0; T = gsl_min_fminimizer_brent; s = gsl_min_fminimizer_alloc (T); gsl_min_fminimizer_set (s, &F, m, a, b); printf ("using %s method\n", gsl_min_fminimizer_name (s)); printf ("%5s [%9s, %9s] %9s %10s %9s\n", "iter", "lower", "upper", "min", "err", "err(est)"); printf ("%5d [%.7f, %.7f] %.7f %+.7f %.7f\n", iter, a, b, m, m - m_expected, b - a); do { iter++; status = gsl_min_fminimizer_iterate (s); m = gsl_min_fminimizer_x_minimum (s); a = gsl_min_fminimizer_x_lower (s); b = gsl_min_fminimizer_x_upper (s); status = gsl_min_test_interval (a, b, 0.001, 0.0); if (status == GSL_SUCCESS) printf ("Converged:\n"); printf ("%5d [%.7f, %.7f] " "%.7f %+.7f %.7f\n", iter, a, b, m, m - m_expected, b - a); } while (status == GSL_CONTINUE && iter < max_iter); gsl_min_fminimizer_free (s); return status; } Here are the results of the minimization procedure.
using brent method iter [ lower, upper] min err err(est) 0 [0.0000000, 6.0000000] 2.0000000 -1.1415927 6.0000000 1 [2.0000000, 6.0000000] 3.5278640 +0.3862713 4.0000000 2 [2.0000000, 3.5278640] 3.1748217 +0.0332290 1.5278640 3 [2.0000000, 3.1748217] 3.1264576 -0.0151351 1.1748217 4 [3.1264576, 3.1748217] 3.1414743 -0.0001183 0.0483641 5 [3.1414743, 3.1748217] 3.1415930 +0.0000004 0.0333474 Converged: 6 [3.1414743, 3.1415930] 3.1415927 +0.0000000 0.0001187  File: gsl-ref.info, Node: References and Further Reading<30>, Prev: Examples<28>, Up: One Dimensional Minimization 37.9 References and Further Reading =================================== Further information on Brent’s algorithm is available in the following book, * Richard Brent, `Algorithms for minimization without derivatives', Prentice-Hall (1973), republished by Dover in paperback (2002), ISBN 0-486-41998-3.  File: gsl-ref.info, Node: Multidimensional Root-Finding, Next: Multidimensional Minimization, Prev: One Dimensional Minimization, Up: Top 38 Multidimensional Root-Finding ******************************** This chapter describes functions for multidimensional root-finding (solving nonlinear systems with n equations in n unknowns). The library provides low level components for a variety of iterative solvers and convergence tests. These can be combined by the user to achieve the desired solution, with full access to the intermediate steps of the iteration. Each class of methods uses the same framework, so that you can switch between solvers at runtime without needing to recompile your program. Each instance of a solver keeps track of its own state, allowing the solvers to be used in multi-threaded programs. The solvers are based on the original Fortran library MINPACK. The header file ‘gsl_multiroots.h’ contains prototypes for the multidimensional root finding functions and related declarations. * Menu: * Overview: Overview<3>. * Initializing the Solver: Initializing the Solver<2>. * Providing the function to solve: Providing the function to solve<2>. * Iteration: Iteration<3>. * Search Stopping Parameters: Search Stopping Parameters<2>. * Algorithms using Derivatives:: * Algorithms without Derivatives:: * Examples: Examples<29>. * References and Further Reading: References and Further Reading<31>.  File: gsl-ref.info, Node: Overview<3>, Next: Initializing the Solver<2>, Up: Multidimensional Root-Finding 38.1 Overview ============= The problem of multidimensional root finding requires the simultaneous solution of n equations, f_i, in n variables, x_i, f_i (x_1, ..., x_n) = 0 for i = 1 ... n. In general there are no bracketing methods available for n dimensional systems, and no way of knowing whether any solutions exist. All algorithms proceed from an initial guess using a variant of the Newton iteration, x -> x' = x - J^{-1} f(x) where x, f are vector quantities and J is the Jacobian matrix J_{ij} = \partial f_i / \partial x_j. Additional strategies can be used to enlarge the region of convergence. These include requiring a decrease in the norm |f| on each step proposed by Newton’s method, or taking steepest-descent steps in the direction of the negative gradient of |f|. Several root-finding algorithms are available within a single framework. The user provides a high-level driver for the algorithms, and the library provides the individual functions necessary for each of the steps. There are three main phases of the iteration. 
The steps are, * initialize solver state, ‘s’, for algorithm ‘T’ * update ‘s’ using the iteration ‘T’ * test ‘s’ for convergence, and repeat iteration if necessary The evaluation of the Jacobian matrix can be problematic, either because programming the derivatives is intractable or because computation of the n^2 terms of the matrix becomes too expensive. For these reasons the algorithms provided by the library are divided into two classes according to whether the derivatives are available or not. The state for solvers with an analytic Jacobian matrix is held in a *note gsl_multiroot_fdfsolver: ace. struct. The updating procedure requires both the function and its derivatives to be supplied by the user. The state for solvers which do not use an analytic Jacobian matrix is held in a *note gsl_multiroot_fsolver: acf. struct. The updating procedure uses only function evaluations (not derivatives). The algorithms estimate the matrix J or J^{-1} by approximate methods.  File: gsl-ref.info, Node: Initializing the Solver<2>, Next: Providing the function to solve<2>, Prev: Overview<3>, Up: Multidimensional Root-Finding 38.2 Initializing the Solver ============================ The following functions initialize a multidimensional solver, either with or without derivatives. The solver itself depends only on the dimension of the problem and the algorithm and can be reused for different problems. -- Type: gsl_multiroot_fsolver This is a workspace for multidimensional root-finding without derivatives. -- Type: gsl_multiroot_fdfsolver This is a workspace for multidimensional root-finding with derivatives. -- Function: *note gsl_multiroot_fsolver: acf. *gsl_multiroot_fsolver_alloc (const gsl_multiroot_fsolver_type *T, size_t n) This function returns a pointer to a newly allocated instance of a solver of type *note T: ad1. for a system of *note n: ad1. dimensions. For example, the following code creates an instance of a hybrid solver, to solve a 3-dimensional system of equations: const gsl_multiroot_fsolver_type * T = gsl_multiroot_fsolver_hybrid; gsl_multiroot_fsolver * s = gsl_multiroot_fsolver_alloc (T, 3); If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. -- Function: *note gsl_multiroot_fdfsolver: ace. *gsl_multiroot_fdfsolver_alloc (const gsl_multiroot_fdfsolver_type *T, size_t n) This function returns a pointer to a newly allocated instance of a derivative solver of type *note T: ad2. for a system of *note n: ad2. dimensions. For example, the following code creates an instance of a Newton-Raphson solver, for a 2-dimensional system of equations: const gsl_multiroot_fdfsolver_type * T = gsl_multiroot_fdfsolver_newton; gsl_multiroot_fdfsolver * s = gsl_multiroot_fdfsolver_alloc (T, 2); If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. -- Function: int gsl_multiroot_fsolver_set (gsl_multiroot_fsolver *s, gsl_multiroot_function *f, const gsl_vector *x) -- Function: int gsl_multiroot_fdfsolver_set (gsl_multiroot_fdfsolver *s, gsl_multiroot_function_fdf *fdf, const gsl_vector *x) These functions set, or reset, an existing solver *note s: ad4. to use the function ‘f’ or function and derivative *note fdf: ad4, and the initial guess *note x: ad4. Note that the initial position is copied from *note x: ad4, this argument is not modified by subsequent iterations. 
-- Function: void gsl_multiroot_fsolver_free (gsl_multiroot_fsolver *s) -- Function: void gsl_multiroot_fdfsolver_free (gsl_multiroot_fdfsolver *s) These functions free all the memory associated with the solver *note s: ad6. -- Function: const char *gsl_multiroot_fsolver_name (const gsl_multiroot_fsolver *s) -- Function: const char *gsl_multiroot_fdfsolver_name (const gsl_multiroot_fdfsolver *s) These functions return a pointer to the name of the solver. For example: printf ("s is a '%s' solver\n", gsl_multiroot_fdfsolver_name (s)); would print something like ‘s is a 'newton' solver’.  File: gsl-ref.info, Node: Providing the function to solve<2>, Next: Iteration<3>, Prev: Initializing the Solver<2>, Up: Multidimensional Root-Finding 38.3 Providing the function to solve ==================================== You must provide n functions of n variables for the root finders to operate on. In order to allow for general parameters the functions are defined by the following data types: -- Type: gsl_multiroot_function This data type defines a general system of functions with parameters. ‘int (* f) (const gsl_vector * x, void * params, gsl_vector * f)’ this function should store the vector result f(x,params) in ‘f’ for argument ‘x’ and parameters ‘params’, returning an appropriate error code if the function cannot be computed. ‘size_t n’ the dimension of the system, i.e. the number of components of the vectors ‘x’ and ‘f’. ‘void * params’ a pointer to the parameters of the function. Here is an example using Powell’s test function, f_1(x) = A x_0 x_1 - 1, f_2(x) = exp(-x_0) + exp(-x_1) - (1 + 1/A) with A = 10^4. The following code defines a *note gsl_multiroot_function: ada. system ‘F’ which you could pass to a solver: struct powell_params { double A; }; int powell (const gsl_vector * x, void * p, gsl_vector * f) { struct powell_params * params = (struct powell_params *)p; const double A = (params->A); const double x0 = gsl_vector_get(x,0); const double x1 = gsl_vector_get(x,1); gsl_vector_set (f, 0, A * x0 * x1 - 1); gsl_vector_set (f, 1, (exp(-x0) + exp(-x1) - (1.0 + 1.0/A))); return GSL_SUCCESS; } gsl_multiroot_function F; struct powell_params params = { 10000.0 }; F.f = &powell; F.n = 2; F.params = &params; -- Type: gsl_multiroot_function_fdf This data type defines a general system of functions with parameters and the corresponding Jacobian matrix of derivatives, ‘int (* f) (const gsl_vector * x, void * params, gsl_vector * f)’ this function should store the vector result f(x,params) in ‘f’ for argument ‘x’ and parameters ‘params’, returning an appropriate error code if the function cannot be computed. ‘int (* df) (const gsl_vector * x, void * params, gsl_matrix * J)’ this function should store the ‘n’-by-‘n’ matrix result J_ij = d f_i(x,params) / d x_j in ‘J’ for argument ‘x’ and parameters ‘params’, returning an appropriate error code if the function cannot be computed. ‘int (* fdf) (const gsl_vector * x, void * params, gsl_vector * f, gsl_matrix * J)’ This function should set the values of the ‘f’ and ‘J’ as above, for arguments ‘x’ and parameters ‘params’. This function provides an optimization of the separate functions for f(x) and J(x)—it is always faster to compute the function and its derivative at the same time. ‘size_t n’ the dimension of the system, i.e. the number of components of the vectors ‘x’ and ‘f’. ‘void * params’ a pointer to the parameters of the function.
The example of Powell’s test function defined above can be extended to include analytic derivatives using the following code: int powell_df (const gsl_vector * x, void * p, gsl_matrix * J) { struct powell_params * params = (struct powell_params *)p; const double A = (params->A); const double x0 = gsl_vector_get(x,0); const double x1 = gsl_vector_get(x,1); gsl_matrix_set (J, 0, 0, A * x1); gsl_matrix_set (J, 0, 1, A * x0); gsl_matrix_set (J, 1, 0, -exp(-x0)); gsl_matrix_set (J, 1, 1, -exp(-x1)); return GSL_SUCCESS; } int powell_fdf (const gsl_vector * x, void * p, gsl_vector * f, gsl_matrix * J) { struct powell_params * params = (struct powell_params *)p; const double A = (params->A); const double x0 = gsl_vector_get(x,0); const double x1 = gsl_vector_get(x,1); const double u0 = exp(-x0); const double u1 = exp(-x1); gsl_vector_set (f, 0, A * x0 * x1 - 1); gsl_vector_set (f, 1, u0 + u1 - (1 + 1/A)); gsl_matrix_set (J, 0, 0, A * x1); gsl_matrix_set (J, 0, 1, A * x0); gsl_matrix_set (J, 1, 0, -u0); gsl_matrix_set (J, 1, 1, -u1); return GSL_SUCCESS; } gsl_multiroot_function_fdf FDF; FDF.f = &powell; FDF.df = &powell_df; FDF.fdf = &powell_fdf; FDF.n = 2; FDF.params = 0; Note that the function ‘powell_fdf’ is able to reuse existing terms from the function when calculating the Jacobian, thus saving time.  File: gsl-ref.info, Node: Iteration<3>, Next: Search Stopping Parameters<2>, Prev: Providing the function to solve<2>, Up: Multidimensional Root-Finding 38.4 Iteration ============== The following functions drive the iteration of each algorithm. Each function performs one iteration to update the state of any solver of the corresponding type. The same functions work for all solvers so that different methods can be substituted at runtime without modifications to the code. -- Function: int gsl_multiroot_fsolver_iterate (gsl_multiroot_fsolver *s) -- Function: int gsl_multiroot_fdfsolver_iterate (gsl_multiroot_fdfsolver *s) These functions perform a single iteration of the solver *note s: ade. If the iteration encounters an unexpected problem then an error code will be returned, ‘GSL_EBADFUNC’ the iteration encountered a singular point where the function or its derivative evaluated to ‘Inf’ or ‘NaN’. ‘GSL_ENOPROG’ the iteration is not making any progress, preventing the algorithm from continuing. The solver maintains a current best estimate of the root ‘s->x’ and its function value ‘s->f’ at all times. This information can be accessed with the following auxiliary functions, -- Function: *note gsl_vector: 35f. *gsl_multiroot_fsolver_root (const gsl_multiroot_fsolver *s) -- Function: *note gsl_vector: 35f. *gsl_multiroot_fdfsolver_root (const gsl_multiroot_fdfsolver *s) These functions return the current estimate of the root for the solver *note s: ae0, given by ‘s->x’. -- Function: *note gsl_vector: 35f. *gsl_multiroot_fsolver_f (const gsl_multiroot_fsolver *s) -- Function: *note gsl_vector: 35f. *gsl_multiroot_fdfsolver_f (const gsl_multiroot_fdfsolver *s) These functions return the function value f(x) at the current estimate of the root for the solver *note s: ae2, given by ‘s->f’. -- Function: *note gsl_vector: 35f. *gsl_multiroot_fsolver_dx (const gsl_multiroot_fsolver *s) -- Function: *note gsl_vector: 35f. *gsl_multiroot_fdfsolver_dx (const gsl_multiroot_fdfsolver *s) These functions return the last step dx taken by the solver *note s: ae4, given by ‘s->dx’.
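As with the one-dimensional solvers, the user combines these functions into a driver loop; the Examples section later in this chapter shows complete programs. The fragment below is only a minimal sketch, not one of the manual's example programs: it assumes a solver ‘s’ has already been allocated and initialized with ‘gsl_multiroot_fsolver_set’ as described above, and it uses the residual test described in the next section.

     int status;
     size_t iter = 0;

     do
       {
         iter++;
         status = gsl_multiroot_fsolver_iterate (s);   /* one step of the algorithm */

         if (status)    /* the solver reported a problem and cannot proceed */
           break;

         /* test the residual s->f at the current estimate of the root */
         status = gsl_multiroot_test_residual (gsl_multiroot_fsolver_f (s), 1e-7);
       }
     while (status == GSL_CONTINUE && iter < 1000);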
File: gsl-ref.info, Node: Search Stopping Parameters<2>, Next: Algorithms using Derivatives, Prev: Iteration<3>, Up: Multidimensional Root-Finding 38.5 Search Stopping Parameters =============================== A root finding procedure should stop when one of the following conditions is true: * A multidimensional root has been found to within the user-specified precision. * A user-specified maximum number of iterations has been reached. * An error has occurred. The handling of these conditions is under user control. The functions below allow the user to test the precision of the current result in several standard ways. -- Function: int gsl_multiroot_test_delta (const gsl_vector *dx, const gsl_vector *x, double epsabs, double epsrel) This function tests for the convergence of the sequence by comparing the last step *note dx: ae6. with the absolute error *note epsabs: ae6. and relative error *note epsrel: ae6. to the current position *note x: ae6. The test returns ‘GSL_SUCCESS’ if the following condition is achieved, |dx_i| < epsabs + epsrel |x_i| for each component of *note x: ae6. and returns ‘GSL_CONTINUE’ otherwise. -- Function: int gsl_multiroot_test_residual (const gsl_vector *f, double epsabs) This function tests the residual value *note f: ae7. against the absolute error bound *note epsabs: ae7. The test returns ‘GSL_SUCCESS’ if the following condition is achieved, \sum_i |f_i| < epsabs and returns ‘GSL_CONTINUE’ otherwise. This criterion is suitable for situations where the precise location of the root, x, is unimportant provided a value can be found where the residual is small enough.  File: gsl-ref.info, Node: Algorithms using Derivatives, Next: Algorithms without Derivatives, Prev: Search Stopping Parameters<2>, Up: Multidimensional Root-Finding 38.6 Algorithms using Derivatives ================================= The root finding algorithms described in this section make use of both the function and its derivative. They require an initial guess for the location of the root, but there is no absolute guarantee of convergence—the function must be suitable for this technique and the initial guess must be sufficiently close to the root for it to work. When the conditions are satisfied then convergence is quadratic. -- Type: gsl_multiroot_fdfsolver_type The following are available algorithms for solving systems of equations using derivatives. -- Variable: *note gsl_multiroot_fdfsolver_type: ae9. *gsl_multiroot_fdfsolver_hybridsj This is a modified version of Powell’s Hybrid method as implemented in the HYBRJ algorithm in MINPACK. MINPACK was written by Jorge J. Moré, Burton S. Garbow and Kenneth E. Hillstrom. The Hybrid algorithm retains the fast convergence of Newton’s method but will also reduce the residual when Newton’s method is unreliable. The algorithm uses a generalized trust region to keep each step under control. In order to be accepted a proposed new position x' must satisfy the condition |D (x' - x)| < \delta, where D is a diagonal scaling matrix and \delta is the size of the trust region. The components of D are computed internally, using the column norms of the Jacobian to estimate the sensitivity of the residual to each component of x. This improves the behavior of the algorithm for badly scaled functions. On each iteration the algorithm first determines the standard Newton step by solving the system J dx = - f. If this step falls inside the trust region it is used as a trial step in the next stage.
If not, the algorithm uses the linear combination of the Newton and gradient directions which is predicted to minimize the norm of the function while staying inside the trust region, dx = - \alpha J^{-1} f(x) - \beta \nabla |f(x)|^2 This combination of Newton and gradient directions is referred to as a `dogleg step'. The proposed step is now tested by evaluating the function at the resulting point, x'. If the step reduces the norm of the function sufficiently then it is accepted and the size of the trust region is increased. If the proposed step fails to improve the solution then the size of the trust region is decreased and another trial step is computed. The speed of the algorithm is increased by computing the changes to the Jacobian approximately, using a rank-1 update. If two successive attempts fail to reduce the residual then the full Jacobian is recomputed. The algorithm also monitors the progress of the solution and returns an error if several steps fail to make any improvement, ‘GSL_ENOPROG’ the iteration is not making any progress, preventing the algorithm from continuing. ‘GSL_ENOPROGJ’ re-evaluations of the Jacobian indicate that the iteration is not making any progress, preventing the algorithm from continuing. -- Variable: *note gsl_multiroot_fdfsolver_type: ae9. *gsl_multiroot_fdfsolver_hybridj This algorithm is an unscaled version of HYBRIDSJ. The steps are controlled by a spherical trust region |x' - x| < \delta, instead of a generalized region. This can be useful if the generalized region estimated by HYBRIDSJ is inappropriate. -- Variable: *note gsl_multiroot_fdfsolver_type: ae9. *gsl_multiroot_fdfsolver_newton Newton’s Method is the standard root-polishing algorithm. The algorithm begins with an initial guess for the location of the solution. On each iteration a linear approximation to the function F is used to estimate the step which will zero all the components of the residual. The iteration is defined by the following sequence, x -> x' = x - J^{-1} f(x) where the Jacobian matrix J is computed from the derivative functions provided by ‘f’. The step dx is obtained by solving the linear system, J dx = - f(x) using LU decomposition. If the Jacobian matrix is singular, an error code of *note GSL_EDOM: 28. is returned. -- Variable: *note gsl_multiroot_fdfsolver_type: ae9. *gsl_multiroot_fdfsolver_gnewton This is a modified version of Newton’s method which attempts to improve global convergence by requiring every step to reduce the Euclidean norm of the residual, |f(x)|. If the Newton step leads to an increase in the norm then a reduced step of relative size, t = (\sqrt{1 + 6 r} - 1) / (3 r) is proposed, with r being the ratio of norms |f(x')|^2/|f(x)|^2. This procedure is repeated until a suitable step size is found.  File: gsl-ref.info, Node: Algorithms without Derivatives, Next: Examples<29>, Prev: Algorithms using Derivatives, Up: Multidimensional Root-Finding 38.7 Algorithms without Derivatives =================================== The algorithms described in this section do not require any derivative information to be supplied by the user. Any derivatives needed are approximated by finite differences. Note that if the finite-differencing step size chosen by these routines is inappropriate, an explicit user-supplied numerical derivative can always be used with the algorithms described in the previous section. -- Type: gsl_multiroot_fsolver_type The following are available algorithms for solving systems of equations without derivatives.
-- Variable: *note gsl_multiroot_fsolver_type: aef. *gsl_multiroot_fsolver_hybrids This is a version of the Hybrid algorithm which replaces calls to the Jacobian function by its finite difference approximation. The finite difference approximation is computed using ‘gsl_multiroots_fdjac()’ with a relative step size of ‘GSL_SQRT_DBL_EPSILON’. Note that this step size will not be suitable for all problems. -- Variable: *note gsl_multiroot_fsolver_type: aef. *gsl_multiroot_fsolver_hybrid This is a finite difference version of the Hybrid algorithm without internal scaling. -- Variable: *note gsl_multiroot_fsolver_type: aef. *gsl_multiroot_fsolver_dnewton The `discrete Newton algorithm' is the simplest method of solving a multidimensional system. It uses the Newton iteration x -> x - J^{-1} f(x) where the Jacobian matrix J is approximated by taking finite differences of the function ‘f’. The approximation scheme used by this implementation is, J_{ij} = (f_i(x + \delta_j) - f_i(x)) / \delta_j where \delta_j is a step of size \sqrt\epsilon |x_j| with \epsilon being the machine precision (\epsilon \approx 2.22 \times 10^{-16}). The order of convergence of Newton’s algorithm is quadratic, but the finite differences require n^2 function evaluations on each iteration. The algorithm may become unstable if the finite differences are not a good approximation to the true derivatives. -- Variable: *note gsl_multiroot_fsolver_type: aef. *gsl_multiroot_fsolver_broyden The `Broyden algorithm' is a version of the discrete Newton algorithm which attempts to avoid the expensive update of the Jacobian matrix on each iteration. The changes to the Jacobian are also approximated, using a rank-1 update, J^{-1} \to J^{-1} - (J^{-1} df - dx) dx^T J^{-1} / (dx^T J^{-1} df) where the vectors dx and df are the changes in x and f. On the first iteration the inverse Jacobian is estimated using finite differences, as in the discrete Newton algorithm. This approximation gives a fast update but is unreliable if the changes are not small, and the estimate of the inverse Jacobian becomes worse as time passes. The algorithm has a tendency to become unstable unless it starts close to the root. The Jacobian is refreshed if this instability is detected (consult the source for details). This algorithm is included only for demonstration purposes, and is not recommended for serious use.  File: gsl-ref.info, Node: Examples<29>, Next: References and Further Reading<31>, Prev: Algorithms without Derivatives, Up: Multidimensional Root-Finding 38.8 Examples ============= The multidimensional solvers are used in a similar way to the one-dimensional root finding algorithms. This first example demonstrates the HYBRIDS scaled-hybrid algorithm, which does not require derivatives. The program solves the Rosenbrock system of equations, f_1 (x, y) = a (1 - x) f_2 (x, y) = b (y - x^2) with a = 1, b = 10. The solution of this system lies at (x,y) = (1,1) in a narrow valley.
The first stage of the program is to define the system of equations: #include <stdlib.h> #include <stdio.h> #include <gsl/gsl_vector.h> #include <gsl/gsl_multiroots.h> struct rparams { double a; double b; }; int rosenbrock_f (const gsl_vector * x, void *params, gsl_vector * f) { double a = ((struct rparams *) params)->a; double b = ((struct rparams *) params)->b; const double x0 = gsl_vector_get (x, 0); const double x1 = gsl_vector_get (x, 1); const double y0 = a * (1 - x0); const double y1 = b * (x1 - x0 * x0); gsl_vector_set (f, 0, y0); gsl_vector_set (f, 1, y1); return GSL_SUCCESS; } The main program begins by creating the function object ‘f’, with the arguments ‘(x,y)’ and parameters ‘(a,b)’. The solver ‘s’ is initialized to use this function, with the ‘gsl_multiroot_fsolver_hybrids’ method: int print_state (size_t iter, gsl_multiroot_fsolver * s); /* defined below */ int main (void) { const gsl_multiroot_fsolver_type *T; gsl_multiroot_fsolver *s; int status; size_t i, iter = 0; const size_t n = 2; struct rparams p = {1.0, 10.0}; gsl_multiroot_function f = {&rosenbrock_f, n, &p}; double x_init[2] = {-10.0, -5.0}; gsl_vector *x = gsl_vector_alloc (n); gsl_vector_set (x, 0, x_init[0]); gsl_vector_set (x, 1, x_init[1]); T = gsl_multiroot_fsolver_hybrids; s = gsl_multiroot_fsolver_alloc (T, 2); gsl_multiroot_fsolver_set (s, &f, x); print_state (iter, s); do { iter++; status = gsl_multiroot_fsolver_iterate (s); print_state (iter, s); if (status) /* check if solver is stuck */ break; status = gsl_multiroot_test_residual (s->f, 1e-7); } while (status == GSL_CONTINUE && iter < 1000); printf ("status = %s\n", gsl_strerror (status)); gsl_multiroot_fsolver_free (s); gsl_vector_free (x); return 0; } Note that it is important to check the return status of each solver step, in case the algorithm becomes stuck. If an error condition is detected, indicating that the algorithm cannot proceed, then the error can be reported to the user, a new starting point chosen or a different algorithm used. The intermediate state of the solution is displayed by the following function. The solver state contains the vector ‘s->x’ which is the current position, and the vector ‘s->f’ with corresponding function values: int print_state (size_t iter, gsl_multiroot_fsolver * s) { printf ("iter = %3u x = % .3f % .3f " "f(x) = % .3e % .3e\n", iter, gsl_vector_get (s->x, 0), gsl_vector_get (s->x, 1), gsl_vector_get (s->f, 0), gsl_vector_get (s->f, 1)); } Here are the results of running the program. The algorithm is started at (-10,-5) far from the solution. Since the solution is hidden in a narrow valley the earliest steps follow the gradient of the function downhill, in an attempt to reduce the large value of the residual. Once the root has been approximately located, on iteration 8, the Newton behavior takes over and convergence is very rapid: iter = 0 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03 iter = 1 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03 iter = 2 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01 iter = 3 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01 iter = 4 x = -3.976 24.827 f(x) = 4.976e+00 9.020e+01 iter = 5 x = -1.274 -5.680 f(x) = 2.274e+00 -7.302e+01 iter = 6 x = -1.274 -5.680 f(x) = 2.274e+00 -7.302e+01 iter = 7 x = 0.249 0.298 f(x) = 7.511e-01 2.359e+00 iter = 8 x = 0.249 0.298 f(x) = 7.511e-01 2.359e+00 iter = 9 x = 1.000 0.878 f(x) = 1.268e-10 -1.218e+00 iter = 10 x = 1.000 0.989 f(x) = 1.124e-11 -1.080e-01 iter = 11 x = 1.000 1.000 f(x) = 0.000e+00 0.000e+00 status = success Note that the algorithm does not update the location on every iteration.
Some iterations are used to adjust the trust-region parameter, after trying a step which was found to be divergent, or to recompute the Jacobian, when poor convergence behavior is detected. The next example program adds derivative information, in order to accelerate the solution. There are two derivative functions ‘rosenbrock_df’ and ‘rosenbrock_fdf’. The latter computes both the function and its derivative simultaneously. This allows the optimization of any common terms. For simplicity we substitute calls to the separate ‘f’ and ‘df’ functions at this point in the code below: int rosenbrock_df (const gsl_vector * x, void *params, gsl_matrix * J) { const double a = ((struct rparams *) params)->a; const double b = ((struct rparams *) params)->b; const double x0 = gsl_vector_get (x, 0); const double df00 = -a; const double df01 = 0; const double df10 = -2 * b * x0; const double df11 = b; gsl_matrix_set (J, 0, 0, df00); gsl_matrix_set (J, 0, 1, df01); gsl_matrix_set (J, 1, 0, df10); gsl_matrix_set (J, 1, 1, df11); return GSL_SUCCESS; } int rosenbrock_fdf (const gsl_vector * x, void *params, gsl_vector * f, gsl_matrix * J) { rosenbrock_f (x, params, f); rosenbrock_df (x, params, J); return GSL_SUCCESS; } The main program now makes calls to the corresponding ‘fdfsolver’ versions of the functions: int main (void) { const gsl_multiroot_fdfsolver_type *T; gsl_multiroot_fdfsolver *s; int status; size_t i, iter = 0; const size_t n = 2; struct rparams p = {1.0, 10.0}; gsl_multiroot_function_fdf f = {&rosenbrock_f, &rosenbrock_df, &rosenbrock_fdf, n, &p}; double x_init[2] = {-10.0, -5.0}; gsl_vector *x = gsl_vector_alloc (n); gsl_vector_set (x, 0, x_init[0]); gsl_vector_set (x, 1, x_init[1]); T = gsl_multiroot_fdfsolver_gnewton; s = gsl_multiroot_fdfsolver_alloc (T, n); gsl_multiroot_fdfsolver_set (s, &f, x); print_state (iter, s); do { iter++; status = gsl_multiroot_fdfsolver_iterate (s); print_state (iter, s); if (status) break; status = gsl_multiroot_test_residual (s->f, 1e-7); } while (status == GSL_CONTINUE && iter < 1000); printf ("status = %s\n", gsl_strerror (status)); gsl_multiroot_fdfsolver_free (s); gsl_vector_free (x); return 0; } The addition of derivative information to the ‘gsl_multiroot_fsolver_hybrids’ solver does not make any significant difference to its behavior, since it is able to approximate the Jacobian numerically with sufficient accuracy. To illustrate the behavior of a different derivative solver we switch to ‘gsl_multiroot_fdfsolver_gnewton’. This is a traditional Newton solver with the constraint that it scales back its step if the full step would lead “uphill”. Here is the output for the ‘gsl_multiroot_fdfsolver_gnewton’ algorithm: iter = 0 x = -10.000 -5.000 f(x) = 1.100e+01 -1.050e+03 iter = 1 x = -4.231 -65.317 f(x) = 5.231e+00 -8.321e+02 iter = 2 x = 1.000 -26.358 f(x) = -8.882e-16 -2.736e+02 iter = 3 x = 1.000 1.000 f(x) = -2.220e-16 -4.441e-15 status = success The convergence is much more rapid, but takes a wide excursion out to the point (-4.23,-65.3). This could cause the algorithm to go astray in a realistic application. The hybrid algorithm follows the downhill path to the solution more reliably.  File: gsl-ref.info, Node: References and Further Reading<31>, Prev: Examples<29>, Up: Multidimensional Root-Finding 38.9 References and Further Reading =================================== The original version of the Hybrid method is described in the following articles by Powell, * M.J.D.
Powell, “A Hybrid Method for Nonlinear Equations” (Chap 6, p 87–114) and “A Fortran Subroutine for Solving systems of Nonlinear Algebraic Equations” (Chap 7, p 115–161), in `Numerical Methods for Nonlinear Algebraic Equations', P. Rabinowitz, editor. Gordon and Breach, 1970. The following papers are also relevant to the algorithms described in this section, * J.J. Moré, M.Y. Cosnard, “Numerical Solution of Nonlinear Equations”, `ACM Transactions on Mathematical Software', Vol 5, No 1, (1979), p 64–85 * C.G. Broyden, “A Class of Methods for Solving Nonlinear Simultaneous Equations”, `Mathematics of Computation', Vol 19 (1965), p 577–593 * J.J. Moré, B.S. Garbow, K.E. Hillstrom, “Testing Unconstrained Optimization Software”, ACM Transactions on Mathematical Software, Vol 7, No 1 (1981), p 17–41  File: gsl-ref.info, Node: Multidimensional Minimization, Next: Linear Least-Squares Fitting, Prev: Multidimensional Root-Finding, Up: Top 39 Multidimensional Minimization ******************************** This chapter describes routines for finding minima of arbitrary multidimensional functions. The library provides low level components for a variety of iterative minimizers and convergence tests. These can be combined by the user to achieve the desired solution, while providing full access to the intermediate steps of the algorithms. Each class of methods uses the same framework, so that you can switch between minimizers at runtime without needing to recompile your program. Each instance of a minimizer keeps track of its own state, allowing the minimizers to be used in multi-threaded programs. The minimization algorithms can be used to maximize a function by inverting its sign. The header file ‘gsl_multimin.h’ contains prototypes for the minimization functions and related declarations. * Menu: * Overview: Overview<4>. * Caveats: Caveats<3>. * Initializing the Multidimensional Minimizer:: * Providing a function to minimize:: * Iteration: Iteration<4>. * Stopping Criteria:: * Algorithms with Derivatives:: * Algorithms without Derivatives: Algorithms without Derivatives<2>. * Examples: Examples<30>. * References and Further Reading: References and Further Reading<32>.  File: gsl-ref.info, Node: Overview<4>, Next: Caveats<3>, Up: Multidimensional Minimization 39.1 Overview ============= The problem of multidimensional minimization requires finding a point x such that the scalar function, f(x_1, \dots, x_n) takes a value which is lower than at any neighboring point. For smooth functions the gradient g = \nabla f vanishes at the minimum. In general there are no bracketing methods available for the minimization of n-dimensional functions. The algorithms proceed from an initial guess using a search algorithm which attempts to move in a downhill direction. Algorithms making use of the gradient of the function perform a one-dimensional line minimisation along this direction until the lowest point is found to a suitable tolerance. The search direction is then updated with local information from the function and its derivatives, and the whole process repeated until the true n-dimensional minimum is found. Algorithms which do not require the gradient of the function use different strategies. For example, the Nelder-Mead Simplex algorithm maintains n+1 trial parameter vectors as the vertices of a n-dimensional simplex. On each iteration it tries to improve the worst vertex of the simplex by geometrical transformations. The iterations are continued until the overall size of the simplex has decreased sufficiently. 
Both types of algorithms use a standard framework. The user provides a high-level driver for the algorithms, and the library provides the individual functions necessary for each of the steps. There are three main phases of the iteration. The steps are, * initialize minimizer state, ‘s’, for algorithm ‘T’ * update ‘s’ using the iteration ‘T’ * test ‘s’ for convergence, and repeat iteration if necessary Each iteration step consists either of an improvement to the line-minimisation in the current direction or an update to the search direction itself. The state for the minimizers is held in a *note gsl_multimin_fdfminimizer: af9. struct or a *note gsl_multimin_fminimizer: afa. struct.  File: gsl-ref.info, Node: Caveats<3>, Next: Initializing the Multidimensional Minimizer, Prev: Overview<4>, Up: Multidimensional Minimization 39.2 Caveats ============ Note that the minimization algorithms can only search for one local minimum at a time. When there are several local minima in the search area, the first minimum to be found will be returned; however it is difficult to predict which of the minima this will be. In most cases, no error will be reported if you try to find a local minimum in an area where there is more than one. It is also important to note that the minimization algorithms find local minima; there is no way to determine whether a minimum is a global minimum of the function in question.  File: gsl-ref.info, Node: Initializing the Multidimensional Minimizer, Next: Providing a function to minimize, Prev: Caveats<3>, Up: Multidimensional Minimization 39.3 Initializing the Multidimensional Minimizer ================================================ The following function initializes a multidimensional minimizer. The minimizer itself depends only on the dimension of the problem and the algorithm and can be reused for different problems. -- Type: gsl_multimin_fdfminimizer This is a workspace for minimizing functions using derivatives. -- Type: gsl_multimin_fminimizer This is a workspace for minimizing functions without derivatives. -- Function: *note gsl_multimin_fdfminimizer: af9. *gsl_multimin_fdfminimizer_alloc (const gsl_multimin_fdfminimizer_type *T, size_t n) -- Function: *note gsl_multimin_fminimizer: afa. *gsl_multimin_fminimizer_alloc (const gsl_multimin_fminimizer_type *T, size_t n) This function returns a pointer to a newly allocated instance of a minimizer of type *note T: afe. for an *note n: afe.-dimension function. If there is insufficient memory to create the minimizer then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a. -- Function: int gsl_multimin_fdfminimizer_set (gsl_multimin_fdfminimizer *s, gsl_multimin_function_fdf *fdf, const gsl_vector *x, double step_size, double tol) -- Function: int gsl_multimin_fminimizer_set (gsl_multimin_fminimizer *s, gsl_multimin_function *f, const gsl_vector *x, const gsl_vector *step_size) The function *note gsl_multimin_fdfminimizer_set(): aff. initializes the minimizer *note s: b00. to minimize the function ‘fdf’ starting from the initial point *note x: b00. The size of the first trial step is given by *note step_size: b00. The accuracy of the line minimization is specified by ‘tol’. The precise meaning of this parameter depends on the method used. Typically the line minimization is considered successful if the gradient of the function g is orthogonal to the current search direction p to a relative accuracy of ‘tol’, where p \cdot g < tol |p| |g|. 
A ‘tol’ value of 0.1 is suitable for most purposes, since line minimization only needs to be carried out approximately. Note that setting ‘tol’ to zero will force the use of “exact” line-searches, which are extremely expensive. The function *note gsl_multimin_fminimizer_set(): b00. initializes the minimizer *note s: b00. to minimize the function *note f: b00, starting from the initial point *note x: b00. The size of the initial trial steps is given in vector *note step_size: b00. The precise meaning of this parameter depends on the method used. -- Function: void gsl_multimin_fdfminimizer_free (gsl_multimin_fdfminimizer *s) -- Function: void gsl_multimin_fminimizer_free (gsl_multimin_fminimizer *s) This function frees all the memory associated with the minimizer *note s: b02. -- Function: const char *gsl_multimin_fdfminimizer_name (const gsl_multimin_fdfminimizer *s) -- Function: const char *gsl_multimin_fminimizer_name (const gsl_multimin_fminimizer *s) This function returns a pointer to the name of the minimizer. For example: printf ("s is a '%s' minimizer\n", gsl_multimin_fdfminimizer_name (s)); would print something like ‘s is a 'conjugate_pr' minimizer’.  File: gsl-ref.info, Node: Providing a function to minimize, Next: Iteration<4>, Prev: Initializing the Multidimensional Minimizer, Up: Multidimensional Minimization 39.4 Providing a function to minimize ===================================== You must provide a parametric function of n variables for the minimizers to operate on. You may also need to provide a routine which calculates the gradient of the function and a third routine which calculates both the function value and the gradient together. In order to allow for general parameters the functions are defined by the following data types: -- Type: gsl_multimin_function_fdf This data type defines a general function of n variables with parameters and the corresponding gradient vector of derivatives, ‘double (* f) (const gsl_vector * x, void * params)’ this function should return the result f(x,params) for argument ‘x’ and parameters ‘params’. If the function cannot be computed, an error value of *note GSL_NAN: 3c. should be returned. ‘void (* df) (const gsl_vector * x, void * params, gsl_vector * g)’ this function should store the ‘n’-dimensional gradient g_i = d f(x,params) / d x_i in the vector ‘g’ for argument ‘x’ and parameters ‘params’, returning an appropriate error code if the function cannot be computed. ‘void (* fdf) (const gsl_vector * x, void * params, double * f, gsl_vector * g)’ This function should set the values of the ‘f’ and ‘g’ as above, for arguments ‘x’ and parameters ‘params’. This function provides an optimization of the separate functions for f(x) and g(x)—it is always faster to compute the function and its derivative at the same time. ‘size_t n’ the dimension of the system, i.e. the number of components of the vectors ‘x’. ‘void * params’ a pointer to the parameters of the function. -- Type: gsl_multimin_function This data type defines a general function of n variables with parameters, ‘double (* f) (const gsl_vector * x, void * params)’ this function should return the result f(x,params) for argument ‘x’ and parameters ‘params’. If the function cannot be computed, an error value of *note GSL_NAN: 3c. should be returned. ‘size_t n’ the dimension of the system, i.e. the number of components of the vectors ‘x’. ‘void * params’ a pointer to the parameters of the function. 
The following example function defines a simple two-dimensional paraboloid with five parameters, /* Paraboloid centered on (p[0],p[1]), with scale factors (p[2],p[3]) and minimum p[4] */ double my_f (const gsl_vector *v, void *params) { double x, y; double *p = (double *)params; x = gsl_vector_get(v, 0); y = gsl_vector_get(v, 1); return p[2] * (x - p[0]) * (x - p[0]) + p[3] * (y - p[1]) * (y - p[1]) + p[4]; } /* The gradient of f, df = (df/dx, df/dy). */ void my_df (const gsl_vector *v, void *params, gsl_vector *df) { double x, y; double *p = (double *)params; x = gsl_vector_get(v, 0); y = gsl_vector_get(v, 1); gsl_vector_set(df, 0, 2.0 * p[2] * (x - p[0])); gsl_vector_set(df, 1, 2.0 * p[3] * (y - p[1])); } /* Compute both f and df together. */ void my_fdf (const gsl_vector *x, void *params, double *f, gsl_vector *df) { *f = my_f(x, params); my_df(x, params, df); } The function can be initialized using the following code: gsl_multimin_function_fdf my_func; /* Paraboloid center at (1,2), scale factors (10, 20), minimum value 30 */ double p[5] = { 1.0, 2.0, 10.0, 20.0, 30.0 }; my_func.n = 2; /* number of function components */ my_func.f = &my_f; my_func.df = &my_df; my_func.fdf = &my_fdf; my_func.params = (void *)p;  File: gsl-ref.info, Node: Iteration<4>, Next: Stopping Criteria, Prev: Providing a function to minimize, Up: Multidimensional Minimization 39.5 Iteration ============== The following function drives the iteration of each algorithm. The function performs one iteration to update the state of the minimizer. The same function works for all minimizers so that different methods can be substituted at runtime without modifications to the code. -- Function: int gsl_multimin_fdfminimizer_iterate (gsl_multimin_fdfminimizer *s) -- Function: int gsl_multimin_fminimizer_iterate (gsl_multimin_fminimizer *s) These functions perform a single iteration of the minimizer *note s: b0b. If the iteration encounters an unexpected problem then an error code will be returned. The error code ‘GSL_ENOPROG’ signifies that the minimizer is unable to improve on its current estimate, either due to numerical difficulty or because a genuine local minimum has been reached. The minimizer maintains a current best estimate of the minimum at all times. This information can be accessed with the following auxiliary functions, -- Function: *note gsl_vector: 35f. *gsl_multimin_fdfminimizer_x (const gsl_multimin_fdfminimizer *s) -- Function: *note gsl_vector: 35f. *gsl_multimin_fminimizer_x (const gsl_multimin_fminimizer *s) -- Function: double gsl_multimin_fdfminimizer_minimum (const gsl_multimin_fdfminimizer *s) -- Function: double gsl_multimin_fminimizer_minimum (const gsl_multimin_fminimizer *s) -- Function: *note gsl_vector: 35f. *gsl_multimin_fdfminimizer_gradient (const gsl_multimin_fdfminimizer *s) -- Function: *note gsl_vector: 35f. *gsl_multimin_fdfminimizer_dx (const gsl_multimin_fdfminimizer *s) -- Function: double gsl_multimin_fminimizer_size (const gsl_multimin_fminimizer *s) These functions return the current best estimate of the location of the minimum, the value of the function at that point, its gradient, the last step increment of the estimate, and minimizer specific characteristic size for the minimizer *note s: b12. -- Function: int gsl_multimin_fdfminimizer_restart (gsl_multimin_fdfminimizer *s) This function resets the minimizer *note s: b13. to use the current point as a new starting point.  
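To make the use of these functions concrete, the following minimal sketch shows a typical driver loop built from ‘gsl_multimin_fdfminimizer_iterate()’, the accessor functions above, and the gradient test described in the next section. It assumes a two-dimensional problem and a minimizer ‘s’ that has already been allocated and initialized with ‘gsl_multimin_fdfminimizer_set()’; the tolerance 1e-3 and the limit of 100 iterations are arbitrary illustrative choices, not values prescribed by the library.

     /* Sketch of a driver loop for an fdfminimizer `s' that has
        already been set up.  The tolerance and iteration limit are
        arbitrary illustrative choices. */
     size_t iter = 0;
     int status;

     do
       {
         iter++;
         status = gsl_multimin_fdfminimizer_iterate (s);

         if (status)   /* e.g. GSL_ENOPROG: no further progress possible */
           break;

         /* stop when the gradient norm falls below the tolerance */
         status = gsl_multimin_test_gradient
                    (gsl_multimin_fdfminimizer_gradient (s), 1e-3);
       }
     while (status == GSL_CONTINUE && iter < 100);

     printf ("minimum value %g found at (%g, %g)\n",
             gsl_multimin_fdfminimizer_minimum (s),
             gsl_vector_get (gsl_multimin_fdfminimizer_x (s), 0),
             gsl_vector_get (gsl_multimin_fdfminimizer_x (s), 1));

A complete program of this form, including the setup of the minimizer, is given in the Examples section of this chapter.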
File: gsl-ref.info, Node: Stopping Criteria, Next: Algorithms with Derivatives, Prev: Iteration<4>, Up: Multidimensional Minimization 39.6 Stopping Criteria ====================== A minimization procedure should stop when one of the following conditions is true: * A minimum has been found to within the user-specified precision. * A user-specified maximum number of iterations has been reached. * An error has occurred. The handling of these conditions is under user control. The functions below allow the user to test the precision of the current result. -- Function: int gsl_multimin_test_gradient (const gsl_vector *g, double epsabs) This function tests the norm of the gradient *note g: b15. against the absolute tolerance *note epsabs: b15. The gradient of a multidimensional function goes to zero at a minimum. The test returns ‘GSL_SUCCESS’ if the following condition is achieved, |g| < epsabs and returns ‘GSL_CONTINUE’ otherwise. A suitable choice of *note epsabs: b15. can be made from the desired accuracy in the function for small variations in x. The relationship between these quantities is given by \delta{f} = g\,\delta{x}. -- Function: int gsl_multimin_test_size (const double size, double epsabs) This function tests the minimizer specific characteristic size (if applicable to the used minimizer) against absolute tolerance *note epsabs: b16. The test returns ‘GSL_SUCCESS’ if the size is smaller than tolerance, otherwise ‘GSL_CONTINUE’ is returned.  File: gsl-ref.info, Node: Algorithms with Derivatives, Next: Algorithms without Derivatives<2>, Prev: Stopping Criteria, Up: Multidimensional Minimization 39.7 Algorithms with Derivatives ================================ There are several minimization methods available. The best choice of algorithm depends on the problem. The algorithms described in this section use the value of the function and its gradient at each evaluation point. -- Type: gsl_multimin_fdfminimizer_type This type specifies a minimization algorithm using gradients. -- Variable: *note gsl_multimin_fdfminimizer_type: b18. *gsl_multimin_fdfminimizer_conjugate_fr This is the Fletcher-Reeves conjugate gradient algorithm. The conjugate gradient algorithm proceeds as a succession of line minimizations. The sequence of search directions is used to build up an approximation to the curvature of the function in the neighborhood of the minimum. An initial search direction ‘p’ is chosen using the gradient, and line minimization is carried out in that direction. The accuracy of the line minimization is specified by the parameter ‘tol’. The minimum along this line occurs when the function gradient ‘g’ and the search direction ‘p’ are orthogonal. The line minimization terminates when p\cdot g < tol |p| |g|. The search direction is updated using the Fletcher-Reeves formula p' = g' - \beta p where \beta=-|g'|^2/|g|^2, and the line minimization is then repeated for the new search direction. -- Variable: *note gsl_multimin_fdfminimizer_type: b18. *gsl_multimin_fdfminimizer_conjugate_pr This is the Polak-Ribiere conjugate gradient algorithm. It is similar to the Fletcher-Reeves method, differing only in the choice of the coefficient \beta. Both methods work well when the evaluation point is close enough to the minimum of the objective function that it is well approximated by a quadratic hypersurface. -- Variable: *note gsl_multimin_fdfminimizer_type: b18. *gsl_multimin_fdfminimizer_vector_bfgs2 -- Variable: *note gsl_multimin_fdfminimizer_type: b18. 
*gsl_multimin_fdfminimizer_vector_bfgs These methods use the vector Broyden-Fletcher-Goldfarb-Shanno (BFGS) algorithm. This is a quasi-Newton method which builds up an approximation to the second derivatives of the function f using the difference between successive gradient vectors. By combining the first and second derivatives the algorithm is able to take Newton-type steps towards the function minimum, assuming quadratic behavior in that region. The ‘bfgs2’ version of this minimizer is the most efficient version available, and is a faithful implementation of the line minimization scheme described in Fletcher’s `Practical Methods of Optimization', Algorithms 2.6.2 and 2.6.4. It supersedes the original ‘bfgs’ routine and requires substantially fewer function and gradient evaluations. The user-supplied tolerance ‘tol’ corresponds to the parameter \sigma used by Fletcher. A value of 0.1 is recommended for typical use (larger values correspond to less accurate line searches). -- Variable: *note gsl_multimin_fdfminimizer_type: b18. *gsl_multimin_fdfminimizer_steepest_descent The steepest descent algorithm follows the downhill gradient of the function at each step. When a downhill step is successful the step-size is increased by a factor of two. If the downhill step leads to a higher function value then the algorithm backtracks and the step size is decreased using the parameter ‘tol’. A suitable value of ‘tol’ for most applications is 0.1. The steepest descent method is inefficient and is included only for demonstration purposes.  File: gsl-ref.info, Node: Algorithms without Derivatives<2>, Next: Examples<30>, Prev: Algorithms with Derivatives, Up: Multidimensional Minimization 39.8 Algorithms without Derivatives =================================== The algorithms described in this section use only the value of the function at each evaluation point. -- Type: gsl_multimin_fminimizer_type This type specifies minimization algorithms which do not use gradients. -- Variable: *note gsl_multimin_fminimizer_type: b1f. *gsl_multimin_fminimizer_nmsimplex2 -- Variable: *note gsl_multimin_fminimizer_type: b1f. *gsl_multimin_fminimizer_nmsimplex These methods use the Simplex algorithm of Nelder and Mead. Starting from the initial vector x = p_0, the algorithm constructs an additional n vectors p_i using the step size vector s = step\_size as follows: p_0 = (x_0, x_1, ... , x_n) p_1 = (x_0 + s_0, x_1, ... , x_n) p_2 = (x_0, x_1 + s_1, ... , x_n) ... = ... p_n = (x_0, x_1, ... , x_n + s_n) These vectors form the n+1 vertices of a simplex in n dimensions. On each iteration the algorithm uses simple geometrical transformations to update the vector corresponding to the highest function value. The geometric transformations are reflection, reflection followed by expansion, contraction and multiple contraction. Using these transformations the simplex moves through the space towards the minimum, where it contracts itself. After each iteration, the best vertex is returned. Note that, due to the nature of the algorithm, not every step improves the current best parameter vector. Usually several iterations are required. The minimizer-specific characteristic size is calculated as the average distance from the geometrical center of the simplex to all its vertices. This size can be used as a stopping criterion, as the simplex contracts itself near the minimum. The size is returned by the function *note gsl_multimin_fminimizer_size(): b12. The *note gsl_multimin_fminimizer_nmsimplex2: b20.
version of this minimiser is a new O(N) operations implementation of the earlier O(N^2) operations *note gsl_multimin_fminimizer_nmsimplex: b21. minimiser. It uses the same underlying algorithm, but the simplex updates are computed more efficiently for high-dimensional problems. In addition, the size of the simplex is calculated as the RMS distance of each vertex from the center rather than the mean distance, allowing a linear update of this quantity on each step. The memory usage is O(N^2) for both algorithms. -- Variable: *note gsl_multimin_fminimizer_type: b1f. *gsl_multimin_fminimizer_nmsimplex2rand This method is a variant of *note gsl_multimin_fminimizer_nmsimplex2: b20. which initialises the simplex around the starting point ‘x’ using a randomly-oriented set of basis vectors instead of the fixed coordinate axes. The final dimensions of the simplex are scaled along the coordinate axes by the vector ‘step_size’. The randomisation uses a simple deterministic generator so that repeated calls to *note gsl_multimin_fminimizer_set(): b00. for a given solver object will vary the orientation in a well-defined way.  File: gsl-ref.info, Node: Examples<30>, Next: References and Further Reading<32>, Prev: Algorithms without Derivatives<2>, Up: Multidimensional Minimization 39.9 Examples ============= This example program finds the minimum of the *note paraboloid function: b08. defined earlier. The location of the minimum is offset from the origin in x and y, and the function value at the minimum is non-zero. The main program is given below; it requires the example function given earlier in this chapter. int main (void) { size_t iter = 0; int status; const gsl_multimin_fdfminimizer_type *T; gsl_multimin_fdfminimizer *s; /* Position of the minimum (1,2), scale factors 10,20, height 30. */ double par[5] = { 1.0, 2.0, 10.0, 20.0, 30.0 }; gsl_vector *x; gsl_multimin_function_fdf my_func; my_func.n = 2; my_func.f = my_f; my_func.df = my_df; my_func.fdf = my_fdf; my_func.params = par; /* Starting point, x = (5,7) */ x = gsl_vector_alloc (2); gsl_vector_set (x, 0, 5.0); gsl_vector_set (x, 1, 7.0); T = gsl_multimin_fdfminimizer_conjugate_fr; s = gsl_multimin_fdfminimizer_alloc (T, 2); gsl_multimin_fdfminimizer_set (s, &my_func, x, 0.01, 1e-4); do { iter++; status = gsl_multimin_fdfminimizer_iterate (s); if (status) break; status = gsl_multimin_test_gradient (s->gradient, 1e-3); if (status == GSL_SUCCESS) printf ("Minimum found at:\n"); printf ("%5d %.5f %.5f %10.5f\n", iter, gsl_vector_get (s->x, 0), gsl_vector_get (s->x, 1), s->f); } while (status == GSL_CONTINUE && iter < 100); gsl_multimin_fdfminimizer_free (s); gsl_vector_free (x); return 0; } The initial step-size is chosen as 0.01, a conservative estimate in this case, and the line minimization parameter is set at 0.0001. The program terminates when the norm of the gradient has been reduced below 0.001. The output of the program is shown below, x y f 1 4.99629 6.99072 687.84780 2 4.98886 6.97215 683.55456 3 4.97400 6.93501 675.01278 4 4.94429 6.86073 658.10798 5 4.88487 6.71217 625.01340 6 4.76602 6.41506 561.68440 7 4.52833 5.82083 446.46694 8 4.05295 4.63238 261.79422 9 3.10219 2.25548 75.49762 10 2.85185 1.62963 67.03704 11 2.19088 1.76182 45.31640 12 0.86892 2.02622 30.18555 Minimum found at: 13 1.00000 2.00000 30.00000 Note that the algorithm gradually increases the step size as it successfully moves downhill, as can be seen by plotting the successive points in the figure below.
[gsl-ref-figures/multimin] Figure: Function contours with path taken by minimization algorithm The conjugate gradient algorithm finds the minimum on its second direction because the function is purely quadratic. Additional iterations would be needed for a more complicated function. Here is another example using the Nelder-Mead Simplex algorithm to minimize the same example objective function as above. int main(void) { double par[5] = {1.0, 2.0, 10.0, 20.0, 30.0}; const gsl_multimin_fminimizer_type *T = gsl_multimin_fminimizer_nmsimplex2; gsl_multimin_fminimizer *s = NULL; gsl_vector *ss, *x; gsl_multimin_function minex_func; size_t iter = 0; int status; double size; /* Starting point */ x = gsl_vector_alloc (2); gsl_vector_set (x, 0, 5.0); gsl_vector_set (x, 1, 7.0); /* Set initial step sizes to 1 */ ss = gsl_vector_alloc (2); gsl_vector_set_all (ss, 1.0); /* Initialize method and iterate */ minex_func.n = 2; minex_func.f = my_f; minex_func.params = par; s = gsl_multimin_fminimizer_alloc (T, 2); gsl_multimin_fminimizer_set (s, &minex_func, x, ss); do { iter++; status = gsl_multimin_fminimizer_iterate(s); if (status) break; size = gsl_multimin_fminimizer_size (s); status = gsl_multimin_test_size (size, 1e-2); if (status == GSL_SUCCESS) { printf ("converged to minimum at\n"); } printf ("%5d %10.3e %10.3e f() = %7.3f size = %.3f\n", iter, gsl_vector_get (s->x, 0), gsl_vector_get (s->x, 1), s->fval, size); } while (status == GSL_CONTINUE && iter < 100); gsl_vector_free(x); gsl_vector_free(ss); gsl_multimin_fminimizer_free (s); return status; } The minimum search stops when the Simplex size drops to 0.01. The output is shown below. 1 6.500e+00 5.000e+00 f() = 512.500 size = 1.130 2 5.250e+00 4.000e+00 f() = 290.625 size = 1.409 3 5.250e+00 4.000e+00 f() = 290.625 size = 1.409 4 5.500e+00 1.000e+00 f() = 252.500 size = 1.409 5 2.625e+00 3.500e+00 f() = 101.406 size = 1.847 6 2.625e+00 3.500e+00 f() = 101.406 size = 1.847 7 0.000e+00 3.000e+00 f() = 60.000 size = 1.847 8 2.094e+00 1.875e+00 f() = 42.275 size = 1.321 9 2.578e-01 1.906e+00 f() = 35.684 size = 1.069 10 5.879e-01 2.445e+00 f() = 35.664 size = 0.841 11 1.258e+00 2.025e+00 f() = 30.680 size = 0.476 12 1.258e+00 2.025e+00 f() = 30.680 size = 0.367 13 1.093e+00 1.849e+00 f() = 30.539 size = 0.300 14 8.830e-01 2.004e+00 f() = 30.137 size = 0.172 15 8.830e-01 2.004e+00 f() = 30.137 size = 0.126 16 9.582e-01 2.060e+00 f() = 30.090 size = 0.106 17 1.022e+00 2.004e+00 f() = 30.005 size = 0.063 18 1.022e+00 2.004e+00 f() = 30.005 size = 0.043 19 1.022e+00 2.004e+00 f() = 30.005 size = 0.043 20 1.022e+00 2.004e+00 f() = 30.005 size = 0.027 21 1.022e+00 2.004e+00 f() = 30.005 size = 0.022 22 9.920e-01 1.997e+00 f() = 30.001 size = 0.016 23 9.920e-01 1.997e+00 f() = 30.001 size = 0.013 converged to minimum at 24 9.920e-01 1.997e+00 f() = 30.001 size = 0.008 The simplex size first increases, while the simplex moves towards the minimum. After a while the size begins to decrease as the simplex contracts around the minimum.  File: gsl-ref.info, Node: References and Further Reading<32>, Prev: Examples<30>, Up: Multidimensional Minimization 39.10 References and Further Reading ==================================== The conjugate gradient and BFGS methods are described in detail in the following book, * R. Fletcher, `Practical Methods of Optimization (Second Edition)' Wiley (1987), ISBN 0471915475. A brief description of multidimensional minimization algorithms and more recent references can be found in, * C.W.
Ueberhuber, `Numerical Computation (Volume 2)', Chapter 14, Section 4.4 “Minimization Methods”, p. 325–335, Springer (1997), ISBN 3-540-62057-5. The simplex algorithm is described in the following paper, * J.A. Nelder and R. Mead, `A simplex method for function minimization', Computer Journal vol. 7 (1965), 308–313.  File: gsl-ref.info, Node: Linear Least-Squares Fitting, Next: Nonlinear Least-Squares Fitting, Prev: Multidimensional Minimization, Up: Top 40 Linear Least-Squares Fitting ******************************* This chapter describes routines for performing least squares fits to experimental data using linear combinations of functions. The data may be weighted or unweighted, i.e. with known or unknown errors. For weighted data the functions compute the best fit parameters and their associated covariance matrix. For unweighted data the covariance matrix is estimated from the scatter of the points, giving a variance-covariance matrix. The functions are divided into separate versions for simple one- or two-parameter regression and multiple-parameter fits. * Menu: * Overview: Overview<5>. * Linear regression:: * Multi-parameter regression:: * Regularized regression:: * Robust linear regression:: * Large dense linear systems:: * Troubleshooting:: * Examples: Examples<31>. * References and Further Reading: References and Further Reading<33>.  File: gsl-ref.info, Node: Overview<5>, Next: Linear regression, Up: Linear Least-Squares Fitting 40.1 Overview ============= Least-squares fits are found by minimizing \chi^2 (chi-squared), the weighted sum of squared residuals over n experimental datapoints (x_i, y_i) for the model Y(c,x), \chi^2 = \sum_i w_i (y_i - Y(c, x_i))^2 The p parameters of the model are c = \{c_0, c_1, \dots\}. The weight factors w_i are given by w_i = 1/\sigma_i^2 where \sigma_i is the experimental error on the data-point y_i. The errors are assumed to be Gaussian and uncorrelated. For unweighted data the chi-squared sum is computed without any weight factors. The fitting routines return the best-fit parameters c and their p \times p covariance matrix. The covariance matrix measures the statistical errors on the best-fit parameters resulting from the errors on the data, \sigma_i, and is defined as C_{ab} = <\delta c_a \delta c_b> where \langle \, \rangle denotes an average over the Gaussian error distributions of the underlying datapoints. The covariance matrix is calculated by error propagation from the data errors \sigma_i. The change in a fitted parameter \delta c_a caused by a small change in the data \delta y_i is given by \delta c_a = \sum_i (dc_a/dy_i) \delta y_i allowing the covariance matrix to be written in terms of the errors on the data, C_{ab} = \sum_{i,j} (dc_a/dy_i) (dc_b/dy_j) <\delta y_i \delta y_j> For uncorrelated data the fluctuations of the underlying datapoints satisfy <\delta y_i \delta y_j> = \sigma_i^2 \delta_{ij} giving a corresponding parameter covariance matrix of C_{ab} = \sum_i (1/w_i) (dc_a/dy_i) (dc_b/dy_i) When computing the covariance matrix for unweighted data, i.e. data with unknown errors, the weight factors w_i in this sum are replaced by the single estimate w = 1/\sigma^2, where \sigma^2 is the computed variance of the residuals about the best-fit model, \sigma^2 = \sum (y_i - Y(c,x_i))^2 / (n-p). This is referred to as the `variance-covariance matrix'. The standard deviations of the best-fit parameters are given by the square root of the corresponding diagonal elements of the covariance matrix, \sigma_{c_a} = \sqrt{C_{aa}}.
The correlation coefficient of the fit parameters c_a and c_b is given by \rho_{ab} = C_{ab} / \sqrt{C_{aa} C_{bb}}.  File: gsl-ref.info, Node: Linear regression, Next: Multi-parameter regression, Prev: Overview<5>, Up: Linear Least-Squares Fitting 40.2 Linear regression ====================== The functions in this section are used to fit simple one or two parameter linear regression models. The functions are declared in the header file ‘gsl_fit.h’. * Menu: * Linear regression with a constant term:: * Linear regression without a constant term::  File: gsl-ref.info, Node: Linear regression with a constant term, Next: Linear regression without a constant term, Up: Linear regression 40.2.1 Linear regression with a constant term --------------------------------------------- The functions described in this section can be used to perform least-squares fits to a straight line model, Y(c,x) = c_0 + c_1 x. -- Function: int gsl_fit_linear (const double *x, const size_t xstride, const double *y, const size_t ystride, size_t n, double *c0, double *c1, double *cov00, double *cov01, double *cov11, double *sumsq) This function computes the best-fit linear regression coefficients (*note c0: b2c, *note c1: b2c.) of the model Y = c_0 + c_1 X for the dataset (*note x: b2c, *note y: b2c.), two vectors of length *note n: b2c. with strides *note xstride: b2c. and *note ystride: b2c. The errors on *note y: b2c. are assumed unknown so the variance-covariance matrix for the parameters (*note c0: b2c, *note c1: b2c.) is estimated from the scatter of the points around the best-fit line and returned via the parameters (*note cov00: b2c, *note cov01: b2c, *note cov11: b2c.). The sum of squares of the residuals from the best-fit line is returned in *note sumsq: b2c. Note: the correlation coefficient of the data can be computed using *note gsl_stats_correlation(): 7ef, it does not depend on the fit. -- Function: int gsl_fit_wlinear (const double *x, const size_t xstride, const double *w, const size_t wstride, const double *y, const size_t ystride, size_t n, double *c0, double *c1, double *cov00, double *cov01, double *cov11, double *chisq) This function computes the best-fit linear regression coefficients (*note c0: b2d, *note c1: b2d.) of the model Y = c_0 + c_1 X for the weighted dataset (*note x: b2d, *note y: b2d.), two vectors of length *note n: b2d. with strides *note xstride: b2d. and *note ystride: b2d. The vector *note w: b2d, of length *note n: b2d. and stride *note wstride: b2d, specifies the weight of each datapoint. The weight is the reciprocal of the variance for each datapoint in *note y: b2d. The covariance matrix for the parameters (*note c0: b2d, *note c1: b2d.) is computed using the weights and returned via the parameters (*note cov00: b2d, *note cov01: b2d, *note cov11: b2d.). The weighted sum of squares of the residuals from the best-fit line, \chi^2, is returned in *note chisq: b2d. -- Function: int gsl_fit_linear_est (double x, double c0, double c1, double cov00, double cov01, double cov11, double *y, double *y_err) This function uses the best-fit linear regression coefficients *note c0: b2e, *note c1: b2e. and their covariance *note cov00: b2e, *note cov01: b2e, *note cov11: b2e. to compute the fitted function *note y: b2e. and its standard deviation *note y_err: b2e. for the model Y = c_0 + c_1 X at the point *note x: b2e.  
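As a brief illustration of these functions, the following sketch fits a straight line to a small set of unweighted data points and then evaluates the fitted line at an intermediate point. The data values themselves are arbitrary and chosen purely for demonstration; the function calls follow the prototypes given above.

     #include <stdio.h>
     #include <gsl/gsl_fit.h>

     int
     main (void)
     {
       /* arbitrary illustrative data with unknown errors */
       const size_t n = 4;
       double x[4] = { 1.0, 2.0, 3.0, 4.0 };
       double y[4] = { 2.1, 2.9, 4.2, 4.9 };

       double c0, c1, cov00, cov01, cov11, sumsq;
       double yf, yf_err;

       /* fit Y = c_0 + c_1 X; the stride is 1 for contiguous arrays */
       gsl_fit_linear (x, 1, y, 1, n,
                       &c0, &c1, &cov00, &cov01, &cov11, &sumsq);

       printf ("best fit: Y = %g + %g X (sumsq = %g)\n", c0, c1, sumsq);

       /* evaluate the fitted line and its standard error at x = 2.5 */
       gsl_fit_linear_est (2.5, c0, c1, cov00, cov01, cov11, &yf, &yf_err);
       printf ("Y(2.5) = %g +/- %g\n", yf, yf_err);

       return 0;
     }

If the measurement errors on the data are known, the same fit can be performed with ‘gsl_fit_wlinear()’ by supplying the vector of weights w_i = 1/\sigma_i^2, as described above.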
File: gsl-ref.info, Node: Linear regression without a constant term, Prev: Linear regression with a constant term, Up: Linear regression 40.2.2 Linear regression without a constant term ------------------------------------------------ The functions described in this section can be used to perform least-squares fits to a straight line model without a constant term, Y = c_1 X. -- Function: int gsl_fit_mul (const double *x, const size_t xstride, const double *y, const size_t ystride, size_t n, double *c1, double *cov11, double *sumsq) This function computes the best-fit linear regression coefficient *note c1: b30. of the model Y = c_1 X for the datasets (*note x: b30, *note y: b30.), two vectors of length *note n: b30. with strides *note xstride: b30. and *note ystride: b30. The errors on *note y: b30. are assumed unknown so the variance of the parameter *note c1: b30. is estimated from the scatter of the points around the best-fit line and returned via the parameter *note cov11: b30. The sum of squares of the residuals from the best-fit line is returned in *note sumsq: b30. -- Function: int gsl_fit_wmul (const double *x, const size_t xstride, const double *w, const size_t wstride, const double *y, const size_t ystride, size_t n, double *c1, double *cov11, double *sumsq) This function computes the best-fit linear regression coefficient *note c1: b31. of the model Y = c_1 X for the weighted datasets (*note x: b31, *note y: b31.), two vectors of length *note n: b31. with strides *note xstride: b31. and *note ystride: b31. The vector *note w: b31, of length *note n: b31. and stride *note wstride: b31, specifies the weight of each datapoint. The weight is the reciprocal of the variance for each datapoint in *note y: b31. The variance of the parameter *note c1: b31. is computed using the weights and returned via the parameter *note cov11: b31. The weighted sum of squares of the residuals from the best-fit line, \chi^2, is returned in ‘chisq’. -- Function: int gsl_fit_mul_est (double x, double c1, double cov11, double *y, double *y_err) This function uses the best-fit linear regression coefficient *note c1: b32. and its covariance *note cov11: b32. to compute the fitted function *note y: b32. and its standard deviation *note y_err: b32. for the model Y = c_1 X at the point *note x: b32.  File: gsl-ref.info, Node: Multi-parameter regression, Next: Regularized regression, Prev: Linear regression, Up: Linear Least-Squares Fitting 40.3 Multi-parameter regression =============================== This section describes routines which perform least squares fits to a linear model by minimizing the cost function \chi^2 = \sum_i w_i (y_i - \sum_j X_{ij} c_j)^2 = || y - Xc ||_W^2 where y is a vector of n observations, X is an n-by-p matrix of predictor variables, c is a vector of the p unknown best-fit parameters to be estimated, and ||r||_W^2 = r^T W r. The matrix W = \diag(w_1,w_2,...,w_n) defines the weights or uncertainties of the observation vector. This formulation can be used for fits to any number of functions and/or variables by preparing the n-by-p matrix X appropriately. For example, to fit to a p-th order polynomial in ‘x’, use the following matrix, X_{ij} = x_i^j where the index i runs over the observations and the index j runs from 0 to p-1. 
To fit to a set of p sinusoidal functions with fixed frequencies \omega_1, \omega_2, \ldots, \omega_p, use, X_{ij} = \sin(\omega_j x_i) To fit to p independent variables x_1, x_2, \ldots, x_p, use, X_{ij} = x_j(i) where x_j(i) is the i-th value of the predictor variable x_j. The solution of the general linear least-squares system requires an additional working space for intermediate results, such as the singular value decomposition of the matrix X. These functions are declared in the header file ‘gsl_multifit.h’. -- Type: gsl_multifit_linear_workspace This workspace contains internal variables for fitting multi-parameter models. -- Function: *note gsl_multifit_linear_workspace: b34. *gsl_multifit_linear_alloc (const size_t n, const size_t p) This function allocates a workspace for fitting a model to a maximum of *note n: b35. observations using a maximum of *note p: b35. parameters. The user may later supply a smaller least squares system if desired. The size of the workspace is O(np + p^2). -- Function: void gsl_multifit_linear_free (gsl_multifit_linear_workspace *work) This function frees the memory associated with the workspace ‘w’. -- Function: int gsl_multifit_linear_svd (const gsl_matrix *X, gsl_multifit_linear_workspace *work) This function performs a singular value decomposition of the matrix *note X: b37. and stores the SVD factors internally in *note work: b37. -- Function: int gsl_multifit_linear_bsvd (const gsl_matrix *X, gsl_multifit_linear_workspace *work) This function performs a singular value decomposition of the matrix *note X: b38. and stores the SVD factors internally in *note work: b38. The matrix *note X: b38. is first balanced by applying column scaling factors to improve the accuracy of the singular values. -- Function: int gsl_multifit_linear (const gsl_matrix *X, const gsl_vector *y, gsl_vector *c, gsl_matrix *cov, double *chisq, gsl_multifit_linear_workspace *work) This function computes the best-fit parameters *note c: b39. of the model y = X c for the observations *note y: b39. and the matrix of predictor variables *note X: b39, using the preallocated workspace provided in *note work: b39. The p-by-p variance-covariance matrix of the model parameters *note cov: b39. is set to \sigma^2 (X^T X)^{-1}, where \sigma is the standard deviation of the fit residuals. The sum of squares of the residuals from the best-fit, \chi^2, is returned in *note chisq: b39. If the coefficient of determination is desired, it can be computed from the expression R^2 = 1 - \chi^2 / TSS, where the total sum of squares (TSS) of the observations *note y: b39. may be computed from *note gsl_stats_tss(): 7dc. The best-fit is found by singular value decomposition of the matrix *note X: b39. using the modified Golub-Reinsch SVD algorithm, with column scaling to improve the accuracy of the singular values. Any components which have zero singular value (to machine precision) are discarded from the fit. -- Function: int gsl_multifit_linear_tsvd (const gsl_matrix *X, const gsl_vector *y, const double tol, gsl_vector *c, gsl_matrix *cov, double *chisq, size_t *rank, gsl_multifit_linear_workspace *work) This function computes the best-fit parameters *note c: b3a. of the model y = X c for the observations *note y: b3a. and the matrix of predictor variables *note X: b3a, using a truncated SVD expansion. Singular values which satisfy s_i \le tol \times s_0 are discarded from the fit, where s_0 is the largest singular value. The p-by-p variance-covariance matrix of the model parameters *note cov: b3a. 
is set to \sigma^2 (X^T X)^{-1}, where \sigma is the standard deviation of the fit residuals. The sum of squares of the residuals from the best-fit, \chi^2, is returned in *note chisq: b3a. The effective rank (number of singular values used in solution) is returned in *note rank: b3a. If the coefficient of determination is desired, it can be computed from the expression R^2 = 1 - \chi^2 / TSS, where the total sum of squares (TSS) of the observations *note y: b3a. may be computed from *note gsl_stats_tss(): 7dc. -- Function: int gsl_multifit_wlinear (const gsl_matrix *X, const gsl_vector *w, const gsl_vector *y, gsl_vector *c, gsl_matrix *cov, double *chisq, gsl_multifit_linear_workspace *work) This function computes the best-fit parameters *note c: b3b. of the weighted model y = X c for the observations *note y: b3b. with weights *note w: b3b. and the matrix of predictor variables *note X: b3b, using the preallocated workspace provided in *note work: b3b. The p-by-p covariance matrix of the model parameters *note cov: b3b. is computed as (X^T W X)^{-1}. The weighted sum of squares of the residuals from the best-fit, \chi^2, is returned in *note chisq: b3b. If the coefficient of determination is desired, it can be computed from the expression R^2 = 1 - \chi^2 / WTSS, where the weighted total sum of squares (WTSS) of the observations *note y: b3b. may be computed from *note gsl_stats_wtss(): 7f9. -- Function: int gsl_multifit_wlinear_tsvd (const gsl_matrix *X, const gsl_vector *w, const gsl_vector *y, const double tol, gsl_vector *c, gsl_matrix *cov, double *chisq, size_t *rank, gsl_multifit_linear_workspace *work) This function computes the best-fit parameters *note c: b3c. of the weighted model y = X c for the observations *note y: b3c. with weights *note w: b3c. and the matrix of predictor variables *note X: b3c, using a truncated SVD expansion. Singular values which satisfy s_i \le tol \times s_0 are discarded from the fit, where s_0 is the largest singular value. The p-by-p covariance matrix of the model parameters *note cov: b3c. is computed as (X^T W X)^{-1}. The weighted sum of squares of the residuals from the best-fit, \chi^2, is returned in *note chisq: b3c. The effective rank of the system (number of singular values used in the solution) is returned in *note rank: b3c. If the coefficient of determination is desired, it can be computed from the expression R^2 = 1 - \chi^2 / WTSS, where the weighted total sum of squares (WTSS) of the observations *note y: b3c. may be computed from *note gsl_stats_wtss(): 7f9. -- Function: int gsl_multifit_linear_est (const gsl_vector *x, const gsl_vector *c, const gsl_matrix *cov, double *y, double *y_err) This function uses the best-fit multilinear regression coefficients *note c: b3d. and their covariance matrix *note cov: b3d. to compute the fitted function value *note y: b3d. and its standard deviation *note y_err: b3d. for the model y = x.c at the point *note x: b3d. -- Function: int gsl_multifit_linear_residuals (const gsl_matrix *X, const gsl_vector *y, const gsl_vector *c, gsl_vector *r) This function computes the vector of residuals r = y - X c for the observations *note y: b3e, coefficients *note c: b3e. and matrix of predictor variables *note X: b3e. -- Function: size_t gsl_multifit_linear_rank (const double tol, const gsl_multifit_linear_workspace *work) This function returns the rank of the matrix X which must first have its singular value decomposition computed. 
The rank is computed by counting the number of singular values \sigma_j which satisfy \sigma_j > tol \times \sigma_0, where \sigma_0 is the largest singular value.  File: gsl-ref.info, Node: Regularized regression, Next: Robust linear regression, Prev: Multi-parameter regression, Up: Linear Least-Squares Fitting 40.4 Regularized regression =========================== Ordinary weighted least squares models seek a solution vector c which minimizes the residual \chi^2 = || y - Xc ||_W^2 where y is the n-by-1 observation vector, X is the n-by-p design matrix, c is the p-by-1 solution vector, W = \diag(w_1,...,w_n) is the data weighting matrix, and ||r||_W^2 = r^T W r. In cases where the least squares matrix X is ill-conditioned, small perturbations (ie: noise) in the observation vector could lead to widely different solution vectors c. One way of dealing with ill-conditioned matrices is to use a “truncated SVD” in which small singular values, below some given tolerance, are discarded from the solution. The truncated SVD method is available using the functions *note gsl_multifit_linear_tsvd(): b3a. and *note gsl_multifit_wlinear_tsvd(): b3c. Another way to help solve ill-posed problems is to include a regularization term in the least squares minimization \chi^2 = || y - Xc ||_W^2 + \lambda^2 || L c ||^2 for a suitably chosen regularization parameter \lambda and matrix L. This type of regularization is known as Tikhonov, or ridge, regression. In some applications, L is chosen as the identity matrix, giving preference to solution vectors c with smaller norms. Including this regularization term leads to the explicit “normal equations” solution c = ( X^T W X + \lambda^2 L^T L )^-1 X^T W y which reduces to the ordinary least squares solution when L = 0. In practice, it is often advantageous to transform a regularized least squares system into the form \chi^2 = || y~ - X~ c~ ||^2 + \lambda^2 || c~ ||^2 This is known as the Tikhonov “standard form” and has the normal equations solution \tilde{c} = ( \tilde{X}^T \tilde{X} + \lambda^2 I )^{-1} \tilde{X}^T \tilde{y} For an m-by-p matrix L which is full rank and has m >= p (ie: L is square or has more rows than columns), we can calculate the “thin” QR decomposition of L, and note that ||L c|| = ||R c|| since the Q factor will not change the norm. Since R is p-by-p, we can then use the transformation X~ = sqrt(W) X R^-1 y~ = sqrt(W) y c~ = R c to achieve the standard form. For a rectangular matrix L with m < p, a more sophisticated approach is needed (see Hansen 1998, chapter 2.3). In practice, the normal equations solution above is not desirable due to numerical instabilities, and so the system is solved using the singular value decomposition of the matrix \tilde{X}. The matrix L is often chosen as the identity matrix, or as a first or second finite difference operator, to ensure a smoothly varying coefficient vector c, or as a diagonal matrix to selectively damp each model parameter differently. If L \ne I, the user must first convert the least squares problem to standard form using *note gsl_multifit_linear_stdform1(): b42. or *note gsl_multifit_linear_stdform2(): b43, solve the system, and then backtransform the solution vector to recover the solution of the original problem (see *note gsl_multifit_linear_genform1(): b44. and *note gsl_multifit_linear_genform2(): b45.). In many regularization problems, care must be taken when choosing the regularization parameter \lambda. 
Since both the residual norm ||y - X c|| and solution norm ||L c|| are being minimized, the parameter \lambda represents a tradeoff between minimizing either the residuals or the solution vector. A common tool for visualizing the compromise between the minimization of these two quantities is known as the L-curve. The L-curve is a log-log plot of the residual norm ||y - X c|| on the horizontal axis and the solution norm ||L c|| on the vertical axis. This curve nearly always has an L-shaped appearance, with a distinct corner separating the horizontal and vertical sections of the curve. The regularization parameter corresponding to this corner is often chosen as the optimal value. GSL provides routines to calculate the L-curve for all relevant regularization parameters as well as locating the corner. Another method of choosing the regularization parameter is known as Generalized Cross Validation (GCV). This method is based on the idea that if an arbitrary element y_i is left out of the right hand side, the resulting regularized solution should predict this element accurately. This leads to choosing the parameter \lambda which minimizes the GCV function G(\lambda) = (||y - X c_{\lambda}||^2) / Tr(I_n - X X_{\lambda}^I)^2 where X_{\lambda}^I is the matrix which relates the solution c_{\lambda} to the right hand side y, ie: c_{\lambda} = X_{\lambda}^I y. GSL provides routines to compute the GCV curve and its minimum. For most applications, the steps required to solve a regularized least squares problem are as follows: 1. Construct the least squares system (X, y, W, L) 2. Transform the system to standard form (\tilde{X}, \tilde{y}). This step can be skipped if L = I_p and W = I_n. 3. Calculate the SVD of \tilde{X}. 4. Determine an appropriate regularization parameter \lambda (using for example L-curve or GCV analysis). 5. Solve the standard form system using the chosen \lambda and the SVD of \tilde{X}. 6. Backtransform the standard form solution \tilde{c} to recover the original solution vector c. A code sketch illustrating these steps for the case of a diagonal matrix L is given below, following the description of the standard form transformation functions. -- Function: int gsl_multifit_linear_stdform1 (const gsl_vector *L, const gsl_matrix *X, const gsl_vector *y, gsl_matrix *Xs, gsl_vector *ys, gsl_multifit_linear_workspace *work) -- Function: int gsl_multifit_linear_wstdform1 (const gsl_vector *L, const gsl_matrix *X, const gsl_vector *w, const gsl_vector *y, gsl_matrix *Xs, gsl_vector *ys, gsl_multifit_linear_workspace *work) These functions define a regularization matrix L = \diag(l_0,l_1,...,l_{p-1}). The diagonal matrix element l_i is provided by the i-th element of the input vector *note L: b46. The n-by-p least squares matrix *note X: b46. and vector *note y: b46. of length n are then converted to standard form as described above and the parameters (\tilde{X}, \tilde{y}) are stored in *note Xs: b46. and *note ys: b46. on output. *note Xs: b46. and *note ys: b46. have the same dimensions as *note X: b46. and *note y: b46. Optional data weights may be supplied in the vector *note w: b46. of length n. In order to apply this transformation, L^{-1} must exist and so none of the l_i may be zero. After the standard form system has been solved, use *note gsl_multifit_linear_genform1(): b44. to recover the original solution vector. It is allowed to have *note X: b46. = *note Xs: b46. and *note y: b46. = *note ys: b46. for an in-place transform. In order to perform a weighted regularized fit with L = I, the user may call *note gsl_multifit_linear_applyW(): b47. to convert to standard form.
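The following sketch illustrates the workflow listed above for the simple case of a diagonal regularization matrix (here L = I, i.e. ridge regression) with unweighted data. The matrix ‘X’ (n-by-p), the vector ‘y’ (length n) and a suitable regularization parameter ‘lambda’ are assumed to have been prepared by the caller; error checking, the choice of ‘lambda’ (step 4), and deallocation of the individual vectors and matrices are omitted for brevity.

     /* Sketch: Tikhonov (ridge) regularized least squares with L = I.
        Assumes X (n-by-p), y (length n) and lambda are already set up. */
     gsl_multifit_linear_workspace *work = gsl_multifit_linear_alloc (n, p);

     gsl_vector *L  = gsl_vector_alloc (p);   /* diagonal of L */
     gsl_matrix *Xs = gsl_matrix_alloc (n, p);
     gsl_vector *ys = gsl_vector_alloc (n);
     gsl_vector *cs = gsl_vector_alloc (p);   /* standard form solution */
     gsl_vector *c  = gsl_vector_alloc (p);   /* solution of original problem */
     double rnorm, snorm;

     gsl_vector_set_all (L, 1.0);             /* L = I */

     /* steps 1-2: convert (X, y, L) to standard form (Xs, ys) */
     gsl_multifit_linear_stdform1 (L, X, y, Xs, ys, work);

     /* step 3: SVD of the standard form matrix */
     gsl_multifit_linear_svd (Xs, work);

     /* step 5: solve for the chosen regularization parameter */
     gsl_multifit_linear_solve (lambda, Xs, ys, cs, &rnorm, &snorm, work);

     /* step 6: recover the solution of the original problem */
     gsl_multifit_linear_genform1 (L, cs, c, work);

     gsl_multifit_linear_free (work);

For a general rectangular matrix L the analogous sequence uses ‘gsl_multifit_linear_L_decomp()’, ‘gsl_multifit_linear_stdform2()’ and ‘gsl_multifit_linear_genform2()’, which are described below.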
-- Function: int gsl_multifit_linear_L_decomp (gsl_matrix *L, gsl_vector *tau) This function factors the m-by-p regularization matrix *note L: b48. into a form needed for the later transformation to standard form. *note L: b48. may have any number of rows m. If m \ge p the QR decomposition of *note L: b48. is computed and stored in *note L: b48. on output. If m < p, the QR decomposition of L^T is computed and stored in *note L: b48. on output. On output, the Householder scalars are stored in the vector *note tau: b48. of size MIN(m,p). These outputs will be used by *note gsl_multifit_linear_wstdform2(): b49. to complete the transformation to standard form. -- Function: int gsl_multifit_linear_stdform2 (const gsl_matrix *LQR, const gsl_vector *Ltau, const gsl_matrix *X, const gsl_vector *y, gsl_matrix *Xs, gsl_vector *ys, gsl_matrix *M, gsl_multifit_linear_workspace *work) -- Function: int gsl_multifit_linear_wstdform2 (const gsl_matrix *LQR, const gsl_vector *Ltau, const gsl_matrix *X, const gsl_vector *w, const gsl_vector *y, gsl_matrix *Xs, gsl_vector *ys, gsl_matrix *M, gsl_multifit_linear_workspace *work) These functions convert the least squares system (*note X: b49, *note y: b49, ‘W’, L) to standard form (\tilde{X}, \tilde{y}) which are stored in *note Xs: b49. and *note ys: b49. respectively. The m-by-p regularization matrix ‘L’ is specified by the inputs *note LQR: b49. and *note Ltau: b49, which are outputs from *note gsl_multifit_linear_L_decomp(): b48. The dimensions of the standard form parameters (\tilde{X}, \tilde{y}) depend on whether m is larger or less than p. For m \ge p, *note Xs: b49. is n-by-p, *note ys: b49. is n-by-1, and *note M: b49. is not used. For m < p, *note Xs: b49. is (n - p + m)-by-m, *note ys: b49. is (n - p + m)-by-1, and *note M: b49. is additional n-by-p workspace, which is required to recover the original solution vector after the system has been solved (see *note gsl_multifit_linear_genform2(): b45.). Optional data weights may be supplied in the vector *note w: b49. of length n, where W = \diag(w). -- Function: int gsl_multifit_linear_solve (const double lambda, const gsl_matrix *Xs, const gsl_vector *ys, gsl_vector *cs, double *rnorm, double *snorm, gsl_multifit_linear_workspace *work) This function computes the regularized best-fit parameters \tilde{c} which minimize the cost function \chi^2 = || \tilde{y} - \tilde{X} \tilde{c} ||^2 + \lambda^2 || \tilde{c} ||^2 which is in standard form. The least squares system must therefore be converted to standard form prior to calling this function. The observation vector \tilde{y} is provided in *note ys: b4a. and the matrix of predictor variables \tilde{X} in *note Xs: b4a. The solution vector \tilde{c} is returned in *note cs: b4a, which has length min(m,p). The SVD of *note Xs: b4a. must be computed prior to calling this function, using *note gsl_multifit_linear_svd(): b37. The regularization parameter \lambda is provided in *note lambda: b4a. The residual norm || \tilde{y} - \tilde{X} \tilde{c} || = ||y - X c||_W is returned in *note rnorm: b4a. The solution norm || \tilde{c} || = ||L c|| is returned in *note snorm: b4a. -- Function: int gsl_multifit_linear_genform1 (const gsl_vector *L, const gsl_vector *cs, gsl_vector *c, gsl_multifit_linear_workspace *work) After a regularized system has been solved with L = \diag(\l_0,\l_1,...,\l_{p-1}), this function backtransforms the standard form solution vector *note cs: b44. to recover the solution vector of the original problem *note c: b44. 
The diagonal matrix elements l_i are provided in the vector *note L: b44. It is allowed to have *note c: b44. = *note cs: b44. for an in-place transform. -- Function: int gsl_multifit_linear_genform2 (const gsl_matrix *LQR, const gsl_vector *Ltau, const gsl_matrix *X, const gsl_vector *y, const gsl_vector *cs, const gsl_matrix *M, gsl_vector *c, gsl_multifit_linear_workspace *work) -- Function: int gsl_multifit_linear_wgenform2 (const gsl_matrix *LQR, const gsl_vector *Ltau, const gsl_matrix *X, const gsl_vector *w, const gsl_vector *y, const gsl_vector *cs, const gsl_matrix *M, gsl_vector *c, gsl_multifit_linear_workspace *work) After a regularized system has been solved with a general rectangular matrix L, specified by (*note LQR: b4b, *note Ltau: b4b.), this function backtransforms the standard form solution *note cs: b4b. to recover the solution vector of the original problem, which is stored in *note c: b4b, of length p. The original least squares matrix and observation vector are provided in *note X: b4b. and *note y: b4b. respectively. *note M: b4b. is the matrix computed by *note gsl_multifit_linear_stdform2(): b43. For weighted fits, the weight vector *note w: b4b. must also be supplied. -- Function: int gsl_multifit_linear_applyW (const gsl_matrix *X, const gsl_vector *w, const gsl_vector *y, gsl_matrix *WX, gsl_vector *Wy) For weighted least squares systems with L = I, this function may be used to convert the system to standard form by applying the weight matrix W = \diag(w) to the least squares matrix *note X: b47. and observation vector *note y: b47. On output, *note WX: b47. is equal to W^{1/2} X and *note Wy: b47. is equal to W^{1/2} y. It is allowed for *note WX: b47. = *note X: b47. and *note Wy: b47. = *note y: b47. for an in-place transform. -- Function: int gsl_multifit_linear_lcurve (const gsl_vector *y, gsl_vector *reg_param, gsl_vector *rho, gsl_vector *eta, gsl_multifit_linear_workspace *work) This function computes the L-curve for a least squares system using the right hand side vector *note y: b4c. and the SVD decomposition of the least squares matrix ‘X’, which must be provided to *note gsl_multifit_linear_svd(): b37. prior to calling this function. The output vectors *note reg_param: b4c, *note rho: b4c, and *note eta: b4c. must all be the same size, and will contain the regularization parameters \lambda_i, residual norms ||y - X c_i||, and solution norms || L c_i || which compose the L-curve, where c_i is the regularized solution vector corresponding to \lambda_i. The user may determine the number of points on the L-curve by adjusting the size of these input arrays. The regularization parameters \lambda_i are estimated from the singular values of ‘X’, and chosen to represent the most relevant portion of the L-curve. -- Function: int gsl_multifit_linear_lcurvature (const gsl_vector *y, const gsl_vector *reg_param, const gsl_vector *rho, const gsl_vector *eta, gsl_vector *kappa, gsl_multifit_linear_workspace *work) This function computes the curvature of the L-curve as a function of the regularization parameter \lambda, using the right hand side vector *note y: b4d, the vector of regularization parameters, *note reg_param: b4d, vector of residual norms, *note rho: b4d, and vector of solution norms, *note eta: b4d. The arrays *note reg_param: b4d, *note rho: b4d, and *note eta: b4d. can be computed by *note gsl_multifit_linear_lcurve(): b4c. 
The curvature is defined as

     \kappa(\lambda) = \frac{\hat{\rho}' \hat{\eta}'' - \hat{\rho}'' \hat{\eta}'}{\left( (\hat{\rho}')^2 + (\hat{\eta}')^2 \right)^{\frac{3}{2}}}

where \hat{\rho}(\lambda) = \log{||y - X c_{\lambda}||} and \hat{\eta}(\lambda) = \log{|| L c_{\lambda} ||}.  The curvature values are stored in *note kappa: b4d. on output.

 -- Function: int gsl_multifit_linear_lcorner (const gsl_vector *rho, const gsl_vector *eta, size_t *idx)

     This function attempts to locate the corner of the L-curve (||y - X c||, ||L c||) defined by the *note rho: b4e. and *note eta: b4e. input arrays respectively.  The corner is defined as the point of maximum curvature of the L-curve in log-log scale.  The *note rho: b4e. and *note eta: b4e. arrays can be outputs of *note gsl_multifit_linear_lcurve(): b4c.  The algorithm used simply fits a circle to 3 consecutive points on the L-curve and uses the circle’s radius to determine the curvature at the middle point.  Therefore, the input array sizes must be \ge 3.  With more points provided for the L-curve, a better estimate of the curvature can be obtained.  The array index corresponding to maximum curvature (ie: the corner) is returned in *note idx: b4e.  If the input arrays contain collinear points, this function could fail and return *note GSL_EINVAL: 2b.

 -- Function: int gsl_multifit_linear_lcorner2 (const gsl_vector *reg_param, const gsl_vector *eta, size_t *idx)

     This function attempts to locate the corner of an alternate L-curve (\lambda^2, ||L c||^2) studied by Rezghi and Hosseini, 2009.  This alternate L-curve can provide better estimates of the regularization parameter for smooth solution vectors.  The regularization parameters \lambda and solution norms ||L c|| are provided in the *note reg_param: b4f. and *note eta: b4f. input arrays respectively.  The corner is defined as the point of maximum curvature of this alternate L-curve in linear scale.  The *note reg_param: b4f. and *note eta: b4f. arrays can be outputs of *note gsl_multifit_linear_lcurve(): b4c.  The algorithm used simply fits a circle to 3 consecutive points on the L-curve and uses the circle’s radius to determine the curvature at the middle point.  Therefore, the input array sizes must be \ge 3.  With more points provided for the L-curve, a better estimate of the curvature can be obtained.  The array index corresponding to maximum curvature (ie: the corner) is returned in *note idx: b4f.  If the input arrays contain collinear points, this function could fail and return *note GSL_EINVAL: 2b.

 -- Function: int gsl_multifit_linear_gcv_init (const gsl_vector *y, gsl_vector *reg_param, gsl_vector *UTy, double *delta0, gsl_multifit_linear_workspace *work)

     This function performs some initialization in preparation for computing the GCV curve and its minimum.  The right hand side vector is provided in *note y: b50.  On output, *note reg_param: b50. is set to a vector of regularization parameters in decreasing order and may be of any size.  The vector *note UTy: b50. of size p is set to U^T y.  The parameter *note delta0: b50. is needed for subsequent steps of the GCV calculation.

 -- Function: int gsl_multifit_linear_gcv_curve (const gsl_vector *reg_param, const gsl_vector *UTy, const double delta0, gsl_vector *G, gsl_multifit_linear_workspace *work)

     This function calculates the GCV curve G(\lambda) and stores it in *note G: b51. on output, which must be the same size as *note reg_param: b51.  The inputs *note reg_param: b51, *note UTy: b51. and *note delta0: b51.
     are computed in *note gsl_multifit_linear_gcv_init(): b50.

 -- Function: int gsl_multifit_linear_gcv_min (const gsl_vector *reg_param, const gsl_vector *UTy, const gsl_vector *G, const double delta0, double *lambda, gsl_multifit_linear_workspace *work)

     This function computes the value of the regularization parameter which minimizes the GCV curve G(\lambda) and stores it in *note lambda: b52.  The input *note G: b52. is calculated by *note gsl_multifit_linear_gcv_curve(): b51. and the inputs *note reg_param: b52, *note UTy: b52. and *note delta0: b52. are computed by *note gsl_multifit_linear_gcv_init(): b50.

 -- Function: double gsl_multifit_linear_gcv_calc (const double lambda, const gsl_vector *UTy, const double delta0, gsl_multifit_linear_workspace *work)

     This function returns the value of the GCV curve G(\lambda) corresponding to the input *note lambda: b53.

 -- Function: int gsl_multifit_linear_gcv (const gsl_vector *y, gsl_vector *reg_param, gsl_vector *G, double *lambda, double *G_lambda, gsl_multifit_linear_workspace *work)

     This function combines the steps ‘gcv_init’, ‘gcv_curve’, and ‘gcv_min’ defined above into a single function.  The input *note y: b54. is the right hand side vector.  On output, *note reg_param: b54. and *note G: b54, which must be the same size, are set to vectors of \lambda and G(\lambda) values respectively.  The output *note lambda: b54. is set to the optimal value of \lambda which minimizes the GCV curve.  The minimum value of the GCV curve is returned in *note G_lambda: b54.

 -- Function: int gsl_multifit_linear_Lk (const size_t p, const size_t k, gsl_matrix *L)

     This function computes the discrete approximation to the derivative operator L_k of order *note k: b55. on a regular grid of *note p: b55. points and stores it in *note L: b55.  The dimensions of *note L: b55. are (p-k)-by-p.

 -- Function: int gsl_multifit_linear_Lsobolev (const size_t p, const size_t kmax, const gsl_vector *alpha, gsl_matrix *L, gsl_multifit_linear_workspace *work)

     This function computes the regularization matrix *note L: b56. corresponding to the weighted Sobolev norm ||L c||^2 = \sum_k \alpha_k^2 ||L_k c||^2 where L_k approximates the derivative operator of order k.  This regularization norm can be useful in applications where it is necessary to smooth several derivatives of the solution.  *note p: b56. is the number of model parameters, *note kmax: b56. is the highest derivative to include in the summation above, and *note alpha: b56. is the vector of weights of size *note kmax: b56. + 1, where ‘alpha[k]’ = \alpha_k is the weight assigned to the derivative of order k.  The output matrix *note L: b56. is size *note p: b56.-by-*note p: b56. and upper triangular.

 -- Function: double gsl_multifit_linear_rcond (const gsl_multifit_linear_workspace *work)

     This function returns the reciprocal condition number of the least squares matrix X, defined as the ratio of the smallest and largest singular values, rcond = \sigma_{min}/\sigma_{max}.  The routine *note gsl_multifit_linear_svd(): b37. must first be called to compute the SVD of X.


File: gsl-ref.info, Node: Robust linear regression, Next: Large dense linear systems, Prev: Regularized regression, Up: Linear Least-Squares Fitting

40.5 Robust linear regression
=============================

Ordinary least squares (OLS) models are often heavily influenced by the presence of outliers.  Outliers are data points which do not follow the general trend of the other observations, although there is strictly no precise definition of an outlier.
Robust linear regression refers to regression algorithms which are robust to outliers.  The most common type of robust regression is M-estimation.  The general M-estimator minimizes the objective function

     \sum_i \rho(e_i) = \sum_i \rho (y_i - Y(c, x_i))

where e_i = y_i - Y(c, x_i) is the residual of the i-th data point, and \rho(e_i) is a function which should have the following properties:

   * \rho(e) \ge 0

   * \rho(0) = 0

   * \rho(-e) = \rho(e)

   * \rho(e_1) > \rho(e_2) for |e_1| > |e_2|

The special case of ordinary least squares is given by \rho(e_i) = e_i^2.  Letting \psi = \rho' be the derivative of \rho, differentiating the objective function with respect to the coefficients c and setting the partial derivatives to zero produces the system of equations

     \sum_i \psi(e_i) X_i = 0

where X_i is a vector containing row i of the design matrix X.  Next, we define a weight function w(e) = \psi(e)/e, and let w_i = w(e_i):

     \sum_i w_i e_i X_i = 0

This system of equations is equivalent to solving a weighted ordinary least squares problem, minimizing \chi^2 = \sum_i w_i e_i^2.  The weights, however, depend on the residuals e_i, which depend on the coefficients c, which depend on the weights.  Therefore, an iterative solution is used, called Iteratively Reweighted Least Squares (IRLS).

  1. Compute initial estimates of the coefficients c^{(0)} using ordinary least squares.

  2. For iteration k, form the residuals e_i^{(k)} = (y_i - X_i c^{(k-1)})/(t \sigma^{(k)} \sqrt{1 - h_i}), where t is a tuning constant depending on the choice of \psi, and h_i are the statistical leverages (diagonal elements of the matrix X (X^T X)^{-1} X^T).  Including t and h_i in the residual calculation has been shown to improve the convergence of the method.  The residual standard deviation is approximated as \sigma^{(k)} = MAD / 0.6745, where MAD is the Median-Absolute-Deviation of the n-p largest residuals from the previous iteration.

  3. Compute new weights w_i^{(k)} = \psi(e_i^{(k)})/e_i^{(k)}.

  4. Compute new coefficients c^{(k)} by solving the weighted least squares problem with weights w_i^{(k)}.

  5. Steps 2 through 4 are iterated until the coefficients converge or until some maximum iteration limit is reached.  Coefficients are tested for convergence using the criterion

          |c_i^(k) - c_i^(k-1)| <= \epsilon * max(|c_i^(k)|, |c_i^(k-1)|)

     for all 0 \le i < p, where \epsilon is a small tolerance factor.

The key to this method lies in selecting the function \psi(e_i) to assign smaller weights to large residuals, and larger weights to smaller residuals.  As the iteration proceeds, outliers are assigned smaller and smaller weights, eventually having very little or no effect on the fitted model.

 -- Type: gsl_multifit_robust_workspace

     This workspace is used for robust least squares fitting.

 -- Function: *note gsl_multifit_robust_workspace: b59. *gsl_multifit_robust_alloc (const gsl_multifit_robust_type *T, const size_t n, const size_t p)

     This function allocates a workspace for fitting a model to *note n: b5a. observations using *note p: b5a. parameters.  The size of the workspace is O(np + p^2).  The type *note T: b5a. specifies the function \psi and can be selected from the following choices.

 -- Type: gsl_multifit_robust_type

 -- Variable: *note gsl_multifit_robust_type: b5b. *gsl_multifit_robust_default

     This specifies the *note gsl_multifit_robust_bisquare: b5d. type (see below) and is a good general purpose choice for robust regression.

 -- Variable: *note gsl_multifit_robust_type: b5b.
     *gsl_multifit_robust_bisquare

     This is Tukey’s biweight (bisquare) function and is a good general purpose choice for robust regression.  The weight function is given by

          w(e) = { (1 - e^2)^2,  |e| <= 1
                 {      0,       |e| > 1

     and the default tuning constant is t = 4.685.

 -- Variable: *note gsl_multifit_robust_type: b5b. *gsl_multifit_robust_cauchy

     This is Cauchy’s function, also known as the Lorentzian function.  This function does not guarantee a unique solution, meaning different choices of the coefficient vector ‘c’ could minimize the objective function.  Therefore this option should be used with care.  The weight function is given by

          w(e) = 1 / (1 + e^2)

     and the default tuning constant is t = 2.385.

 -- Variable: *note gsl_multifit_robust_type: b5b. *gsl_multifit_robust_fair

     This is the fair \rho function, which guarantees a unique solution and has continuous derivatives to three orders.  The weight function is given by

          w(e) = 1 / (1 + |e|)

     and the default tuning constant is t = 1.400.

 -- Variable: *note gsl_multifit_robust_type: b5b. *gsl_multifit_robust_huber

     This specifies Huber’s \rho function, which is a parabola in the vicinity of zero and increases linearly above a given threshold |e| > t.  This function is also considered an excellent general purpose robust estimator; however, occasional difficulties can be encountered due to the discontinuous first derivative of the \psi function.  The weight function is given by

          w(e) = 1/max(1,|e|)

     and the default tuning constant is t = 1.345.

 -- Variable: *note gsl_multifit_robust_type: b5b. *gsl_multifit_robust_ols

     This specifies the ordinary least squares solution, which can be useful for quickly checking the difference between the various robust and OLS solutions.  The weight function is given by

          w(e) = 1

     and the default tuning constant is t = 1.

 -- Variable: *note gsl_multifit_robust_type: b5b. *gsl_multifit_robust_welsch

     This specifies the Welsch function, which can perform well in cases where the residuals have an exponential distribution.  The weight function is given by

          w(e) = \exp{(-e^2)}

     and the default tuning constant is t = 2.985.

 -- Function: void gsl_multifit_robust_free (gsl_multifit_robust_workspace *w)

     This function frees the memory associated with the workspace *note w: b63.

 -- Function: const char *gsl_multifit_robust_name (const gsl_multifit_robust_workspace *w)

     This function returns the name of the robust type ‘T’ specified to *note gsl_multifit_robust_alloc(): b5a.

 -- Function: int gsl_multifit_robust_tune (const double tune, gsl_multifit_robust_workspace *w)

     This function sets the tuning constant t used to adjust the residuals at each iteration to *note tune: b65.  Decreasing the tuning constant increases the downweight assigned to large residuals, while increasing the tuning constant decreases the downweight assigned to large residuals.

 -- Function: int gsl_multifit_robust_maxiter (const size_t maxiter, gsl_multifit_robust_workspace *w)

     This function sets the maximum number of iterations in the iteratively reweighted least squares algorithm to *note maxiter: b66.  By default, this value is set to 100 by *note gsl_multifit_robust_alloc(): b5a.

 -- Function: int gsl_multifit_robust_weights (const gsl_vector *r, gsl_vector *wts, gsl_multifit_robust_workspace *w)

     This function assigns weights to the vector *note wts: b67. using the residual vector *note r: b67. and previously specified weighting function.  The output weights are given by wts_i = w(r_i / (t \sigma)), where the weighting functions w are detailed in *note gsl_multifit_robust_alloc(): b5a.
     \sigma is an estimate of the residual standard deviation based on the Median-Absolute-Deviation and t is the tuning constant.  This function is useful if the user wishes to implement their own robust regression rather than using the supplied *note gsl_multifit_robust(): b68. routine below.

 -- Function: int gsl_multifit_robust (const gsl_matrix *X, const gsl_vector *y, gsl_vector *c, gsl_matrix *cov, gsl_multifit_robust_workspace *w)

     This function computes the best-fit parameters *note c: b68. of the model y = X c for the observations *note y: b68. and the matrix of predictor variables *note X: b68, attempting to reduce the influence of outliers using the algorithm outlined above.  The p-by-p variance-covariance matrix of the model parameters *note cov: b68. is estimated as \sigma^2 (X^T X)^{-1}, where \sigma is an approximation of the residual standard deviation using the theory of robust regression.  Special care must be taken when estimating \sigma and other statistics such as R^2, and so these are computed internally and are available by calling the function *note gsl_multifit_robust_statistics(): b69.

     If the coefficients do not converge within the maximum iteration limit, the function returns ‘GSL_EMAXITER’.  In this case, the current estimates of the coefficients and covariance matrix are returned in *note c: b68. and *note cov: b68. and the internal fit statistics are computed with these estimates.

 -- Function: int gsl_multifit_robust_est (const gsl_vector *x, const gsl_vector *c, const gsl_matrix *cov, double *y, double *y_err)

     This function uses the best-fit robust regression coefficients *note c: b6a. and their covariance matrix *note cov: b6a. to compute the fitted function value *note y: b6a. and its standard deviation *note y_err: b6a. for the model y = x \cdot c at the point *note x: b6a.

 -- Function: int gsl_multifit_robust_residuals (const gsl_matrix *X, const gsl_vector *y, const gsl_vector *c, gsl_vector *r, gsl_multifit_robust_workspace *w)

     This function computes the vector of studentized residuals r_i = {y_i - (X c)_i \over \sigma \sqrt{1 - h_i}} for the observations *note y: b6b, coefficients *note c: b6b. and matrix of predictor variables *note X: b6b.  The routine *note gsl_multifit_robust(): b68. must first be called to compute the statistical leverages h_i of the matrix *note X: b6b. and residual standard deviation estimate \sigma.

 -- Function: *note gsl_multifit_robust_stats: b6c. gsl_multifit_robust_statistics (const gsl_multifit_robust_workspace *w)

     This function returns a structure containing relevant statistics from a robust regression.  The function *note gsl_multifit_robust(): b68. must be called first to perform the regression and calculate these statistics.  The returned *note gsl_multifit_robust_stats: b6c. structure contains the following fields.

 -- Type: gsl_multifit_robust_stats

     ‘double sigma_ols’
          This contains the standard deviation of the residuals as computed from ordinary least squares (OLS).

     ‘double sigma_mad’
          This contains an estimate of the standard deviation of the final residuals using the Median-Absolute-Deviation statistic.

     ‘double sigma_rob’
          This contains an estimate of the standard deviation of the final residuals from the theory of robust regression (see Street et al, 1988).

     ‘double sigma’
          This contains an estimate of the standard deviation of the final residuals by attempting to reconcile ‘sigma_rob’ and ‘sigma_ols’ in a reasonable way.

     ‘double Rsq’
          This contains the R^2 coefficient of determination statistic using the estimate ‘sigma’.
     ‘double adj_Rsq’
          This contains the adjusted R^2 coefficient of determination statistic using the estimate ‘sigma’.

     ‘double rmse’
          This contains the root mean squared error of the final residuals.

     ‘double sse’
          This contains the residual sum of squares taking into account the robust covariance matrix.

     ‘size_t dof’
          This contains the number of degrees of freedom n - p.

     ‘size_t numit’
          Upon successful convergence, this contains the number of iterations performed.

     ‘gsl_vector * weights’
          This contains the final weight vector of length ‘n’.

     ‘gsl_vector * r’
          This contains the final residual vector of length ‘n’, r = y - X c.


File: gsl-ref.info, Node: Large dense linear systems, Next: Troubleshooting, Prev: Robust linear regression, Up: Linear Least-Squares Fitting

40.6 Large dense linear systems
===============================

This module is concerned with solving large dense least squares systems X c = y where the n-by-p matrix X has n >> p (ie: many more rows than columns).  This type of matrix is called a “tall skinny” matrix, and for some applications, it may not be possible to fit the entire matrix in memory at once to use the standard SVD approach.  Therefore, the algorithms in this module are designed to allow the user to construct smaller blocks of the matrix X and accumulate those blocks into the larger system one at a time.  The algorithms in this module never need to store the entire matrix X in memory.  The large linear least squares routines support data weights and Tikhonov regularization, and are designed to minimize the residual

     \chi^2 = || y - X c ||_W^2 + \lambda^2 || L c ||^2

where y is the n-by-1 observation vector, X is the n-by-p design matrix, c is the p-by-1 solution vector, W = \diag(w_1,...,w_n) is the data weighting matrix, L is an m-by-p regularization matrix, \lambda is a regularization parameter, and ||r||_W^2 = r^T W r.  In the discussion which follows, we will assume that the system has been converted into Tikhonov standard form,

     \chi^2 = || \tilde{y} - \tilde{X} \tilde{c} ||^2 + \lambda^2 || \tilde{c} ||^2

and we will drop the tilde characters from the various parameters.  For a discussion of the transformation to standard form, see *note Regularized regression: b41.  The basic idea is to partition the matrix X and observation vector y as

     [ X_1 ]       [ y_1 ]
     [ X_2 ]       [ y_2 ]
     [ X_3 ] c  =  [ y_3 ]
     [ ... ]       [ ... ]
     [ X_k ]       [ y_k ]

into k blocks, where each block (X_i,y_i) may have any number of rows, but each X_i has p columns.  The sections below describe the methods available for solving this partitioned system.  The functions are declared in the header file ‘gsl_multilarge.h’.

* Menu:

* Normal Equations Approach::
* Tall Skinny QR (TSQR) Approach: Tall Skinny QR TSQR Approach.
* Large Dense Linear Systems Solution Steps::
* Large Dense Linear Least Squares Routines::


File: gsl-ref.info, Node: Normal Equations Approach, Next: Tall Skinny QR TSQR Approach, Up: Large dense linear systems

40.6.1 Normal Equations Approach
--------------------------------

The normal equations approach to the large linear least squares problem described above is popular due to its speed and simplicity.  Since the normal equations solution to the problem is given by

     c = ( X^T X + \lambda^2 I )^{-1} X^T y

only the p-by-p matrix X^T X and p-by-1 vector X^T y need to be stored.  Using the partition scheme described above, these are given by

     X^T X = \sum_i X_i^T X_i
     X^T y = \sum_i X_i^T y_i

Since the matrix X^T X is symmetric, only half of it needs to be calculated.
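As a conceptual illustration of these update formulas (this is not the library’s internal implementation), a single block (X_i, y_i) could be accumulated into the running sums using the BLAS support routines; the names accumulate_block, Xi, yi, XtX and Xty below are placeholders.

     #include <gsl/gsl_blas.h>

     /* Sketch: XtX <- XtX + Xi^T Xi and Xty <- Xty + Xi^T yi, where XtX is
        p-by-p, Xty has length p, and only the upper triangle of the
        symmetric matrix XtX is updated. */
     static void
     accumulate_block (const gsl_matrix *Xi, const gsl_vector *yi,
                       gsl_matrix *XtX, gsl_vector *Xty)
     {
       gsl_blas_dsyrk (CblasUpper, CblasTrans, 1.0, Xi, 1.0, XtX);
       gsl_blas_dgemv (CblasTrans, 1.0, Xi, yi, 1.0, Xty);
     }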
Once all of the blocks (X_i,y_i) have been accumulated into the final X^T X and X^T y, the system can be solved with a Cholesky factorization of the X^T X matrix.  The X^T X matrix is first transformed via a diagonal scaling transformation to attempt to reduce its condition number as much as possible to recover a more accurate solution vector.

The normal equations approach is the fastest method for solving the large least squares problem, and is accurate for well-conditioned matrices X.  However, for ill-conditioned matrices, as is often the case for large systems, this method can suffer from numerical instabilities (see Trefethen and Bau, 1997).  The number of operations for this method is O(np^2 + {1 \over 3}p^3).


File: gsl-ref.info, Node: Tall Skinny QR TSQR Approach, Next: Large Dense Linear Systems Solution Steps, Prev: Normal Equations Approach, Up: Large dense linear systems

40.6.2 Tall Skinny QR (TSQR) Approach
-------------------------------------

An algorithm which has better numerical stability for ill-conditioned problems is known as the Tall Skinny QR (TSQR) method.  This method is based on computing the thin QR decomposition of the least squares matrix X = Q R, where Q is an n-by-p matrix with orthogonal columns, and R is a p-by-p upper triangular matrix.  Once these factors are calculated, the residual becomes

     \chi^2 = || Q^T y - R c ||^2 + \lambda^2 || c ||^2

which can be written as the matrix equation

     [     R     ]       [ Q^T y ]
     [ \lambda I ] c  =  [   0   ]

The matrix on the left hand side is now a much smaller 2p-by-p matrix, and the resulting system can be solved with a standard SVD approach.  The Q matrix is just as large as the original matrix X, however it does not need to be explicitly constructed.  The TSQR algorithm computes only the p-by-p matrix R and the p-by-1 vector Q^T y, and updates these quantities as new blocks are added to the system.  Each time a new block of rows (X_i,y_i) is added, the algorithm performs a QR decomposition of the matrix

     [ R_{i-1} ]
     [   X_i   ]

where R_{i-1} is the upper triangular R factor for the matrix

     [ X_1     ]
     [ ...     ]
     [ X_{i-1} ]

This QR decomposition is done efficiently, taking into account the sparse structure of R_{i-1}.  See Demmel et al, 2008 for more details on how this is accomplished.  The number of operations for this method is O(2np^2 - {2 \over 3}p^3).


File: gsl-ref.info, Node: Large Dense Linear Systems Solution Steps, Next: Large Dense Linear Least Squares Routines, Prev: Tall Skinny QR TSQR Approach, Up: Large dense linear systems

40.6.3 Large Dense Linear Systems Solution Steps
------------------------------------------------

The typical steps required to solve large regularized linear least squares problems are as follows:

  1. Choose the regularization matrix L.

  2. Construct a block of rows of the least squares matrix, right hand side vector, and weight vector (X_i, y_i, w_i).

  3. Transform the block to standard form (\tilde{X_i}, \tilde{y_i}).  This step can be skipped if L = I and W = I.

  4. Accumulate the standard form block (\tilde{X_i}, \tilde{y_i}) into the system.

  5. Repeat steps 2-4 until the entire matrix and right hand side vector have been accumulated.

  6. Determine an appropriate regularization parameter \lambda (using for example L-curve analysis).

  7. Solve the standard form system using the chosen \lambda.

  8. Backtransform the standard form solution \tilde{c} to recover the original solution vector c.
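As a rough sketch of steps 2 through 7 for the simplest case L = I and W = I (steps 1, 3 and 8 are then trivial), the accumulation loop and solve could be organized as follows; build_block is a caller-supplied placeholder, not a GSL routine, and a complete example program is given at the end of this chapter.

     #include <gsl/gsl_multilarge.h>

     /* Sketch: accumulate 'nblocks' blocks of 'rows_per_block' rows each into
        a TSQR workspace and solve for the p coefficients in c. */
     int
     solve_in_blocks (const size_t nblocks, const size_t rows_per_block,
                      const size_t p, const double lambda, gsl_vector *c,
                      void (*build_block) (size_t idx, gsl_matrix *Xi,
                                           gsl_vector *yi))
     {
       gsl_multilarge_linear_workspace *w =
         gsl_multilarge_linear_alloc (gsl_multilarge_linear_tsqr, p);
       gsl_matrix *Xi = gsl_matrix_alloc (rows_per_block, p);
       gsl_vector *yi = gsl_vector_alloc (rows_per_block);
       double rnorm, snorm;
       size_t i;

       for (i = 0; i < nblocks; ++i)
         {
           build_block (i, Xi, yi);                       /* step 2 */
           gsl_multilarge_linear_accumulate (Xi, yi, w);  /* step 4 */
         }

       gsl_multilarge_linear_solve (lambda, c, &rnorm, &snorm, w);  /* step 7 */

       gsl_vector_free (yi);
       gsl_matrix_free (Xi);
       gsl_multilarge_linear_free (w);

       return 0;
     }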
File: gsl-ref.info, Node: Large Dense Linear Least Squares Routines, Prev: Large Dense Linear Systems Solution Steps, Up: Large dense linear systems 40.6.4 Large Dense Linear Least Squares Routines ------------------------------------------------ -- Type: gsl_multilarge_linear_workspace This workspace contains parameters for solving large linear least squares problems. -- Function: *note gsl_multilarge_linear_workspace: b72. *gsl_multilarge_linear_alloc (const gsl_multilarge_linear_type *T, const size_t p) This function allocates a workspace for solving large linear least squares systems. The least squares matrix X has *note p: b73. columns, but may have any number of rows. -- Type: gsl_multilarge_linear_type The parameter *note T: b73. specifies the method to be used for solving the large least squares system and may be selected from the following choices -- Variable: *note gsl_multilarge_linear_type: b74. *gsl_multilarge_linear_normal This specifies the normal equations approach for solving the least squares system. This method is suitable in cases where performance is critical and it is known that the least squares matrix X is well conditioned. The size of this workspace is O(p^2). -- Variable: *note gsl_multilarge_linear_type: b74. *gsl_multilarge_linear_tsqr This specifies the sequential Tall Skinny QR (TSQR) approach for solving the least squares system. This method is a good general purpose choice for large systems, but requires about twice as many operations as the normal equations method for n >> p. The size of this workspace is O(p^2). -- Function: void gsl_multilarge_linear_free (gsl_multilarge_linear_workspace *w) This function frees the memory associated with the workspace *note w: b77. -- Function: const char *gsl_multilarge_linear_name (gsl_multilarge_linear_workspace *w) This function returns a string pointer to the name of the multilarge solver. -- Function: int gsl_multilarge_linear_reset (gsl_multilarge_linear_workspace *w) This function resets the workspace *note w: b79. so it can begin to accumulate a new least squares system. -- Function: int gsl_multilarge_linear_stdform1 (const gsl_vector *L, const gsl_matrix *X, const gsl_vector *y, gsl_matrix *Xs, gsl_vector *ys, gsl_multilarge_linear_workspace *work) -- Function: int gsl_multilarge_linear_wstdform1 (const gsl_vector *L, const gsl_matrix *X, const gsl_vector *w, const gsl_vector *y, gsl_matrix *Xs, gsl_vector *ys, gsl_multilarge_linear_workspace *work) These functions define a regularization matrix L = \diag(l_0,l_1,...,l_{p-1}). The diagonal matrix element l_i is provided by the i-th element of the input vector *note L: b7b. The block (*note X: b7b, *note y: b7b.) is converted to standard form and the parameters (\tilde{X}, \tilde{y}) are stored in *note Xs: b7b. and *note ys: b7b. on output. *note Xs: b7b. and *note ys: b7b. have the same dimensions as *note X: b7b. and *note y: b7b. Optional data weights may be supplied in the vector *note w: b7b. In order to apply this transformation, L^{-1} must exist and so none of the l_i may be zero. After the standard form system has been solved, use *note gsl_multilarge_linear_genform1(): b7c. to recover the original solution vector. It is allowed to have *note X: b7b. = *note Xs: b7b. and *note y: b7b. = *note ys: b7b. for an in-place transform. -- Function: int gsl_multilarge_linear_L_decomp (gsl_matrix *L, gsl_vector *tau) This function calculates the QR decomposition of the m-by-p regularization matrix *note L: b7d. *note L: b7d. must have m \ge p. 
On output, the Householder scalars are stored in the vector *note tau: b7d. of size p. These outputs will be used by *note gsl_multilarge_linear_wstdform2(): b7e. to complete the transformation to standard form. -- Function: int gsl_multilarge_linear_stdform2 (const gsl_matrix *LQR, const gsl_vector *Ltau, const gsl_matrix *X, const gsl_vector *y, gsl_matrix *Xs, gsl_vector *ys, gsl_multilarge_linear_workspace *work) -- Function: int gsl_multilarge_linear_wstdform2 (const gsl_matrix *LQR, const gsl_vector *Ltau, const gsl_matrix *X, const gsl_vector *w, const gsl_vector *y, gsl_matrix *Xs, gsl_vector *ys, gsl_multilarge_linear_workspace *work) These functions convert a block of rows (*note X: b7e, *note y: b7e, *note w: b7e.) to standard form (\tilde{X}, \tilde{y}) which are stored in *note Xs: b7e. and *note ys: b7e. respectively. *note X: b7e, *note y: b7e, and *note w: b7e. must all have the same number of rows. The m-by-p regularization matrix ‘L’ is specified by the inputs *note LQR: b7e. and *note Ltau: b7e, which are outputs from *note gsl_multilarge_linear_L_decomp(): b7d. *note Xs: b7e. and *note ys: b7e. have the same dimensions as *note X: b7e. and *note y: b7e. After the standard form system has been solved, use *note gsl_multilarge_linear_genform2(): b80. to recover the original solution vector. Optional data weights may be supplied in the vector *note w: b7e, where W = \diag(w). -- Function: int gsl_multilarge_linear_accumulate (gsl_matrix *X, gsl_vector *y, gsl_multilarge_linear_workspace *w) This function accumulates the standard form block (X,y) into the current least squares system. *note X: b81. and *note y: b81. have the same number of rows, which can be arbitrary. *note X: b81. must have p columns. For the TSQR method, *note X: b81. and *note y: b81. are destroyed on output. For the normal equations method, they are both unchanged. -- Function: int gsl_multilarge_linear_solve (const double lambda, gsl_vector *c, double *rnorm, double *snorm, gsl_multilarge_linear_workspace *w) After all blocks (X_i,y_i) have been accumulated into the large least squares system, this function will compute the solution vector which is stored in *note c: b82. on output. The regularization parameter \lambda is provided in *note lambda: b82. On output, *note rnorm: b82. contains the residual norm ||y - X c||_W and *note snorm: b82. contains the solution norm ||L c||. -- Function: int gsl_multilarge_linear_genform1 (const gsl_vector *L, const gsl_vector *cs, gsl_vector *c, gsl_multilarge_linear_workspace *work) After a regularized system has been solved with L = \diag(\l_0,\l_1,...,\l_{p-1}), this function backtransforms the standard form solution vector *note cs: b7c. to recover the solution vector of the original problem *note c: b7c. The diagonal matrix elements l_i are provided in the vector *note L: b7c. It is allowed to have *note c: b7c. = *note cs: b7c. for an in-place transform. -- Function: int gsl_multilarge_linear_genform2 (const gsl_matrix *LQR, const gsl_vector *Ltau, const gsl_vector *cs, gsl_vector *c, gsl_multilarge_linear_workspace *work) After a regularized system has been solved with a regularization matrix L, specified by (*note LQR: b80, *note Ltau: b80.), this function backtransforms the standard form solution *note cs: b80. to recover the solution vector of the original problem, which is stored in *note c: b80, of length p. 
-- Function: int gsl_multilarge_linear_lcurve (gsl_vector *reg_param, gsl_vector *rho, gsl_vector *eta, gsl_multilarge_linear_workspace *work) This function computes the L-curve for a large least squares system after it has been fully accumulated into the workspace *note work: b83. The output vectors *note reg_param: b83, *note rho: b83, and *note eta: b83. must all be the same size, and will contain the regularization parameters \lambda_i, residual norms ||y - X c_i||, and solution norms || L c_i || which compose the L-curve, where c_i is the regularized solution vector corresponding to \lambda_i. The user may determine the number of points on the L-curve by adjusting the size of these input arrays. For the TSQR method, the regularization parameters \lambda_i are estimated from the singular values of the triangular R factor. For the normal equations method, they are estimated from the eigenvalues of the X^T X matrix. -- Function: const *note gsl_matrix: 3a2. *gsl_multilarge_linear_matrix_ptr (const gsl_multilarge_linear_workspace *work) For the normal equations method, this function returns a pointer to the X^T X matrix. For the TSQR method, this function returns a pointer to the upper triangular R matrix. -- Function: const *note gsl_vector: 35f. *gsl_multilarge_linear_rhs_ptr (const gsl_multilarge_linear_workspace *work) For the normal equations method, this function returns a pointer to the X^T y right hand side vector. For the TSQR method, this function returns a pointer to the Q^T y right hand side vector. -- Function: int gsl_multilarge_linear_rcond (double *rcond, gsl_multilarge_linear_workspace *work) This function computes the reciprocal condition number, stored in *note rcond: b86, of the least squares matrix after it has been accumulated into the workspace *note work: b86. For the TSQR algorithm, this is accomplished by calculating the SVD of the R factor, which has the same singular values as the matrix X. For the normal equations method, this is done by computing the eigenvalues of X^T X, which could be inaccurate for ill-conditioned matrices X.  File: gsl-ref.info, Node: Troubleshooting, Next: Examples<31>, Prev: Large dense linear systems, Up: Linear Least-Squares Fitting 40.7 Troubleshooting ==================== When using models based on polynomials, care should be taken when constructing the design matrix X. If the x values are large, then the matrix X could be ill-conditioned since its columns are powers of x, leading to unstable least-squares solutions. In this case it can often help to center and scale the x values using the mean and standard deviation: x' = (x - mu)/sigma and then construct the X matrix using the transformed values x'.  File: gsl-ref.info, Node: Examples<31>, Next: References and Further Reading<33>, Prev: Troubleshooting, Up: Linear Least-Squares Fitting 40.8 Examples ============= The example programs in this section demonstrate the various linear regression methods. 
* Menu:

* Simple Linear Regression Example::
* Multi-parameter Linear Regression Example::
* Regularized Linear Regression Example 1::
* Regularized Linear Regression Example 2::
* Robust Linear Regression Example::
* Large Dense Linear Regression Example::


File: gsl-ref.info, Node: Simple Linear Regression Example, Next: Multi-parameter Linear Regression Example, Up: Examples<31>

40.8.1 Simple Linear Regression Example
---------------------------------------

The following program computes a least squares straight-line fit to a simple dataset, and outputs the best-fit line and its associated one standard-deviation error bars.

     #include <stdio.h>
     #include <gsl/gsl_fit.h>

     int
     main (void)
     {
       int i, n = 4;
       double x[4] = { 1970, 1980, 1990, 2000 };
       double y[4] = { 12, 11, 14, 13 };
       double w[4] = { 0.1, 0.2, 0.3, 0.4 };

       double c0, c1, cov00, cov01, cov11, chisq;

       gsl_fit_wlinear (x, 1, w, 1, y, 1, n,
                        &c0, &c1, &cov00, &cov01, &cov11,
                        &chisq);

       printf ("# best fit: Y = %g + %g X\n", c0, c1);
       printf ("# covariance matrix:\n");
       printf ("# [ %g, %g\n# %g, %g]\n", cov00, cov01, cov01, cov11);
       printf ("# chisq = %g\n", chisq);

       for (i = 0; i < n; i++)
         printf ("data: %g %g %g\n", x[i], y[i], 1/sqrt(w[i]));

       printf ("\n");

       for (i = -30; i < 130; i++)
         {
           double xf = x[0] + (i/100.0) * (x[n-1] - x[0]);
           double yf, yf_err;

           gsl_fit_linear_est (xf, c0, c1, cov00, cov01, cov11, &yf, &yf_err);

           printf ("fit: %g %g\n", xf, yf);
           printf ("hi : %g %g\n", xf, yf + yf_err);
           printf ("lo : %g %g\n", xf, yf - yf_err);
         }

       return 0;
     }

The following commands extract the data from the output of the program and display it using the GNU plotutils “graph” utility:

     $ ./demo > tmp
     $ more tmp
     # best fit: Y = -106.6 + 0.06 X
     # covariance matrix:
     # [ 39602, -19.9
     # -19.9, 0.01]
     # chisq = 0.8

     $ for n in data fit hi lo ; do grep "^$n" tmp | cut -d: -f2 > $n ; done
     $ graph -T X -X x -Y y -y 0 20 -m 0 -S 2 -Ie data -S 0 -I a -m 1 fit -m 2 hi -m 2 lo

The result is shown in Fig. %s.

[gsl-ref-figures/fit-wlinear]

Figure: Straight line fit with 1-\sigma error bars


File: gsl-ref.info, Node: Multi-parameter Linear Regression Example, Next: Regularized Linear Regression Example 1, Prev: Simple Linear Regression Example, Up: Examples<31>

40.8.2 Multi-parameter Linear Regression Example
------------------------------------------------

The following program performs a quadratic fit y = c_0 + c_1 x + c_2 x^2 to a weighted dataset using the generalised linear fitting function *note gsl_multifit_wlinear(): b3b.  The model matrix X for a quadratic fit is given by,

     X = [ 1 , x_0 , x_0^2 ;
           1 , x_1 , x_1^2 ;
           1 , x_2 , x_2^2 ;
           ... , ... , ... ]

where the column of ones corresponds to the constant term c_0.  The two remaining columns correspond to the terms c_1 x and c_2 x^2.  The program reads ‘n’ lines of data in the format (‘x’, ‘y’, ‘err’) where ‘err’ is the error (standard deviation) in the value ‘y’.
     #include <stdio.h>
     #include <gsl/gsl_multifit.h>

     int
     main (int argc, char **argv)
     {
       int i, n;
       double xi, yi, ei, chisq;
       gsl_matrix *X, *cov;
       gsl_vector *y, *w, *c;

       if (argc != 2)
         {
           fprintf (stderr,"usage: fit n < data\n");
           exit (-1);
         }

       n = atoi (argv[1]);

       X = gsl_matrix_alloc (n, 3);
       y = gsl_vector_alloc (n);
       w = gsl_vector_alloc (n);

       c = gsl_vector_alloc (3);
       cov = gsl_matrix_alloc (3, 3);

       for (i = 0; i < n; i++)
         {
           int count = fscanf (stdin, "%lg %lg %lg", &xi, &yi, &ei);

           if (count != 3)
             {
               fprintf (stderr, "error reading file\n");
               exit (-1);
             }

           printf ("%g %g +/- %g\n", xi, yi, ei);

           gsl_matrix_set (X, i, 0, 1.0);
           gsl_matrix_set (X, i, 1, xi);
           gsl_matrix_set (X, i, 2, xi*xi);

           gsl_vector_set (y, i, yi);
           gsl_vector_set (w, i, 1.0/(ei*ei));
         }

       {
         gsl_multifit_linear_workspace * work = gsl_multifit_linear_alloc (n, 3);
         gsl_multifit_wlinear (X, w, y, c, cov, &chisq, work);
         gsl_multifit_linear_free (work);
       }

     #define C(i) (gsl_vector_get(c,(i)))
     #define COV(i,j) (gsl_matrix_get(cov,(i),(j)))

       {
         printf ("# best fit: Y = %g + %g X + %g X^2\n", C(0), C(1), C(2));

         printf ("# covariance matrix:\n");
         printf ("[ %+.5e, %+.5e, %+.5e \n", COV(0,0), COV(0,1), COV(0,2));
         printf (" %+.5e, %+.5e, %+.5e \n", COV(1,0), COV(1,1), COV(1,2));
         printf (" %+.5e, %+.5e, %+.5e ]\n", COV(2,0), COV(2,1), COV(2,2));
         printf ("# chisq = %g\n", chisq);
       }

       gsl_matrix_free (X);
       gsl_vector_free (y);
       gsl_vector_free (w);
       gsl_vector_free (c);
       gsl_matrix_free (cov);

       return 0;
     }

A suitable set of data for fitting can be generated using the following program.  It outputs a set of points with gaussian errors from the curve y = e^x in the region 0 < x < 2.

     #include <stdio.h>
     #include <math.h>
     #include <gsl/gsl_randist.h>

     int
     main (void)
     {
       double x;
       const gsl_rng_type * T;
       gsl_rng * r;

       gsl_rng_env_setup ();

       T = gsl_rng_default;
       r = gsl_rng_alloc (T);

       for (x = 0.1; x < 2; x+= 0.1)
         {
           double y0 = exp (x);
           double sigma = 0.1 * y0;
           double dy = gsl_ran_gaussian (r, sigma);

           printf ("%g %g %g\n", x, y0 + dy, sigma);
         }

       gsl_rng_free(r);

       return 0;
     }

The data can be prepared by running the resulting executable program:

     $ GSL_RNG_TYPE=mt19937_1999 ./generate > exp.dat
     $ more exp.dat
     0.1 0.97935 0.110517
     0.2 1.3359 0.12214
     0.3 1.52573 0.134986
     0.4 1.60318 0.149182
     0.5 1.81731 0.164872
     0.6 1.92475 0.182212
     ....

To fit the data use the previous program, with the number of data points given as the first argument.  In this case there are 19 data points:

     $ ./fit 19 < exp.dat
     0.1 0.97935 +/- 0.110517
     0.2 1.3359 +/- 0.12214
     ...
     # best fit: Y = 1.02318 + 0.956201 X + 0.876796 X^2
     # covariance matrix:
     [ +1.25612e-02, -3.64387e-02, +1.94389e-02
       -3.64387e-02, +1.42339e-01, -8.48761e-02
       +1.94389e-02, -8.48761e-02, +5.60243e-02 ]
     # chisq = 23.0987

The parameters of the quadratic fit match the coefficients of the expansion of e^x, taking into account the errors on the parameters and the O(x^3) difference between the exponential and quadratic functions for the larger values of x.  The errors on the parameters are given by the square-root of the corresponding diagonal elements of the covariance matrix.  The chi-squared per degree of freedom is 1.4, indicating a reasonable fit to the data.  Fig. %s shows the resulting fit.
[gsl-ref-figures/fit-wlinear2] Figure: Weighted fit example with error bars  File: gsl-ref.info, Node: Regularized Linear Regression Example 1, Next: Regularized Linear Regression Example 2, Prev: Multi-parameter Linear Regression Example, Up: Examples<31> 40.8.3 Regularized Linear Regression Example 1 ---------------------------------------------- The next program demonstrates the difference between ordinary and regularized least squares when the design matrix is near-singular. In this program, we generate two random normally distributed variables u and v, with v = u + noise so that u and v are nearly colinear. We then set a third dependent variable y = u + v + noise and solve for the coefficients c_1,c_2 of the model Y(c_1,c_2) = c_1 u + c_2 v. Since u \approx v, the design matrix X is nearly singular, leading to unstable ordinary least squares solutions. Here is the program output: matrix condition number = 1.025113e+04 === Unregularized fit === best fit: y = -43.6588 u + 45.6636 v residual norm = 31.6248 solution norm = 63.1764 chisq/dof = 1.00213 === Regularized fit (L-curve) === optimal lambda: 4.51103 best fit: y = 1.00113 u + 1.0032 v residual norm = 31.6547 solution norm = 1.41728 chisq/dof = 1.04499 === Regularized fit (GCV) === optimal lambda: 0.0232029 best fit: y = -19.8367 u + 21.8417 v residual norm = 31.6332 solution norm = 29.5051 chisq/dof = 1.00314 We see that the ordinary least squares solution is completely wrong, while the L-curve regularized method with the optimal \lambda = 4.51103 finds the correct solution c_1 \approx c_2 \approx 1. The GCV regularized method finds a regularization parameter \lambda = 0.0232029 which is too small to give an accurate solution, although it performs better than OLS. The L-curve and its computed corner, as well as the GCV curve and its minimum are plotted in Fig. %s. [gsl-ref-figures/regularized] Figure: L-curve and GCV curve for example program. The program is given below. 
#include #include #include #include #include #include int main() { const size_t n = 1000; /* number of observations */ const size_t p = 2; /* number of model parameters */ size_t i; gsl_rng *r = gsl_rng_alloc(gsl_rng_default); gsl_matrix *X = gsl_matrix_alloc(n, p); gsl_vector *y = gsl_vector_alloc(n); for (i = 0; i < n; ++i) { /* generate first random variable u */ double ui = 5.0 * gsl_ran_gaussian(r, 1.0); /* set v = u + noise */ double vi = ui + gsl_ran_gaussian(r, 0.001); /* set y = u + v + noise */ double yi = ui + vi + gsl_ran_gaussian(r, 1.0); /* since u =~ v, the matrix X is ill-conditioned */ gsl_matrix_set(X, i, 0, ui); gsl_matrix_set(X, i, 1, vi); /* rhs vector */ gsl_vector_set(y, i, yi); } { const size_t npoints = 200; /* number of points on L-curve and GCV curve */ gsl_multifit_linear_workspace *w = gsl_multifit_linear_alloc(n, p); gsl_vector *c = gsl_vector_alloc(p); /* OLS solution */ gsl_vector *c_lcurve = gsl_vector_alloc(p); /* regularized solution (L-curve) */ gsl_vector *c_gcv = gsl_vector_alloc(p); /* regularized solution (GCV) */ gsl_vector *reg_param = gsl_vector_alloc(npoints); gsl_vector *rho = gsl_vector_alloc(npoints); /* residual norms */ gsl_vector *eta = gsl_vector_alloc(npoints); /* solution norms */ gsl_vector *G = gsl_vector_alloc(npoints); /* GCV function values */ double lambda_l; /* optimal regularization parameter (L-curve) */ double lambda_gcv; /* optimal regularization parameter (GCV) */ double G_gcv; /* G(lambda_gcv) */ size_t reg_idx; /* index of optimal lambda */ double rcond; /* reciprocal condition number of X */ double chisq, rnorm, snorm; /* compute SVD of X */ gsl_multifit_linear_svd(X, w); rcond = gsl_multifit_linear_rcond(w); fprintf(stderr, "matrix condition number = %e\n\n", 1.0 / rcond); /* unregularized (standard) least squares fit, lambda = 0 */ gsl_multifit_linear_solve(0.0, X, y, c, &rnorm, &snorm, w); chisq = pow(rnorm, 2.0); fprintf(stderr, "=== Unregularized fit ===\n"); fprintf(stderr, "best fit: y = %g u + %g v\n", gsl_vector_get(c, 0), gsl_vector_get(c, 1)); fprintf(stderr, "residual norm = %g\n", rnorm); fprintf(stderr, "solution norm = %g\n", snorm); fprintf(stderr, "chisq/dof = %g\n", chisq / (n - p)); /* calculate L-curve and find its corner */ gsl_multifit_linear_lcurve(y, reg_param, rho, eta, w); gsl_multifit_linear_lcorner(rho, eta, ®_idx); /* store optimal regularization parameter */ lambda_l = gsl_vector_get(reg_param, reg_idx); /* regularize with lambda_l */ gsl_multifit_linear_solve(lambda_l, X, y, c_lcurve, &rnorm, &snorm, w); chisq = pow(rnorm, 2.0) + pow(lambda_l * snorm, 2.0); fprintf(stderr, "\n=== Regularized fit (L-curve) ===\n"); fprintf(stderr, "optimal lambda: %g\n", lambda_l); fprintf(stderr, "best fit: y = %g u + %g v\n", gsl_vector_get(c_lcurve, 0), gsl_vector_get(c_lcurve, 1)); fprintf(stderr, "residual norm = %g\n", rnorm); fprintf(stderr, "solution norm = %g\n", snorm); fprintf(stderr, "chisq/dof = %g\n", chisq / (n - p)); /* calculate GCV curve and find its minimum */ gsl_multifit_linear_gcv(y, reg_param, G, &lambda_gcv, &G_gcv, w); /* regularize with lambda_gcv */ gsl_multifit_linear_solve(lambda_gcv, X, y, c_gcv, &rnorm, &snorm, w); chisq = pow(rnorm, 2.0) + pow(lambda_gcv * snorm, 2.0); fprintf(stderr, "\n=== Regularized fit (GCV) ===\n"); fprintf(stderr, "optimal lambda: %g\n", lambda_gcv); fprintf(stderr, "best fit: y = %g u + %g v\n", gsl_vector_get(c_gcv, 0), gsl_vector_get(c_gcv, 1)); fprintf(stderr, "residual norm = %g\n", rnorm); fprintf(stderr, "solution norm = %g\n", snorm); 
fprintf(stderr, "chisq/dof = %g\n", chisq / (n - p)); /* output L-curve and GCV curve */ for (i = 0; i < npoints; ++i) { printf("%e %e %e %e\n", gsl_vector_get(reg_param, i), gsl_vector_get(rho, i), gsl_vector_get(eta, i), gsl_vector_get(G, i)); } /* output L-curve corner point */ printf("\n\n%f %f\n", gsl_vector_get(rho, reg_idx), gsl_vector_get(eta, reg_idx)); /* output GCV curve corner minimum */ printf("\n\n%e %e\n", lambda_gcv, G_gcv); gsl_multifit_linear_free(w); gsl_vector_free(c); gsl_vector_free(c_lcurve); gsl_vector_free(reg_param); gsl_vector_free(rho); gsl_vector_free(eta); gsl_vector_free(G); } gsl_rng_free(r); gsl_matrix_free(X); gsl_vector_free(y); return 0; }  File: gsl-ref.info, Node: Regularized Linear Regression Example 2, Next: Robust Linear Regression Example, Prev: Regularized Linear Regression Example 1, Up: Examples<31> 40.8.4 Regularized Linear Regression Example 2 ---------------------------------------------- The following example program minimizes the cost function ||y - X c||^2 + \lambda^2 ||x||^2 where X is the 10-by-8 Hilbert matrix whose entries are given by X_{ij} = 1 / (i + j - 1) and the right hand side vector is given by y = [1,-1,1,-1,1,-1,1,-1,1,-1]^T. Solutions are computed for \lambda = 0 (unregularized) as well as for optimal parameters \lambda chosen by analyzing the L-curve and GCV curve. Here is the program output: matrix condition number = 3.565872e+09 === Unregularized fit === residual norm = 2.15376 solution norm = 2.92217e+09 chisq/dof = 2.31934 === Regularized fit (L-curve) === optimal lambda: 7.11407e-07 residual norm = 2.60386 solution norm = 424507 chisq/dof = 3.43565 === Regularized fit (GCV) === optimal lambda: 1.72278 residual norm = 3.1375 solution norm = 0.139357 chisq/dof = 4.95076 Here we see the unregularized solution results in a large solution norm due to the ill-conditioned matrix. The L-curve solution finds a small value of \lambda = 7.11e-7 which still results in a badly conditioned system and a large solution norm. The GCV method finds a parameter \lambda = 1.72 which results in a well-conditioned system and small solution norm. The L-curve and its computed corner, as well as the GCV curve and its minimum are plotted in Fig. %s. [gsl-ref-figures/regularized2] Figure: L-curve and GCV curve for example program. The program is given below. 
#include #include #include #include #include static int hilbert_matrix(gsl_matrix * m) { const size_t N = m->size1; const size_t M = m->size2; size_t i, j; for (i = 0; i < N; i++) { for (j = 0; j < M; j++) { gsl_matrix_set(m, i, j, 1.0/(i+j+1.0)); } } return GSL_SUCCESS; } int main() { const size_t n = 10; /* number of observations */ const size_t p = 8; /* number of model parameters */ size_t i; gsl_matrix *X = gsl_matrix_alloc(n, p); gsl_vector *y = gsl_vector_alloc(n); /* construct Hilbert matrix and rhs vector */ hilbert_matrix(X); { double val = 1.0; for (i = 0; i < n; ++i) { gsl_vector_set(y, i, val); val *= -1.0; } } { const size_t npoints = 200; /* number of points on L-curve and GCV curve */ gsl_multifit_linear_workspace *w = gsl_multifit_linear_alloc(n, p); gsl_vector *c = gsl_vector_alloc(p); /* OLS solution */ gsl_vector *c_lcurve = gsl_vector_alloc(p); /* regularized solution (L-curve) */ gsl_vector *c_gcv = gsl_vector_alloc(p); /* regularized solution (GCV) */ gsl_vector *reg_param = gsl_vector_alloc(npoints); gsl_vector *rho = gsl_vector_alloc(npoints); /* residual norms */ gsl_vector *eta = gsl_vector_alloc(npoints); /* solution norms */ gsl_vector *G = gsl_vector_alloc(npoints); /* GCV function values */ double lambda_l; /* optimal regularization parameter (L-curve) */ double lambda_gcv; /* optimal regularization parameter (GCV) */ double G_gcv; /* G(lambda_gcv) */ size_t reg_idx; /* index of optimal lambda */ double rcond; /* reciprocal condition number of X */ double chisq, rnorm, snorm; /* compute SVD of X */ gsl_multifit_linear_svd(X, w); rcond = gsl_multifit_linear_rcond(w); fprintf(stderr, "matrix condition number = %e\n", 1.0 / rcond); /* unregularized (standard) least squares fit, lambda = 0 */ gsl_multifit_linear_solve(0.0, X, y, c, &rnorm, &snorm, w); chisq = pow(rnorm, 2.0); fprintf(stderr, "\n=== Unregularized fit ===\n"); fprintf(stderr, "residual norm = %g\n", rnorm); fprintf(stderr, "solution norm = %g\n", snorm); fprintf(stderr, "chisq/dof = %g\n", chisq / (n - p)); /* calculate L-curve and find its corner */ gsl_multifit_linear_lcurve(y, reg_param, rho, eta, w); gsl_multifit_linear_lcorner(rho, eta, ®_idx); /* store optimal regularization parameter */ lambda_l = gsl_vector_get(reg_param, reg_idx); /* regularize with lambda_l */ gsl_multifit_linear_solve(lambda_l, X, y, c_lcurve, &rnorm, &snorm, w); chisq = pow(rnorm, 2.0) + pow(lambda_l * snorm, 2.0); fprintf(stderr, "\n=== Regularized fit (L-curve) ===\n"); fprintf(stderr, "optimal lambda: %g\n", lambda_l); fprintf(stderr, "residual norm = %g\n", rnorm); fprintf(stderr, "solution norm = %g\n", snorm); fprintf(stderr, "chisq/dof = %g\n", chisq / (n - p)); /* calculate GCV curve and find its minimum */ gsl_multifit_linear_gcv(y, reg_param, G, &lambda_gcv, &G_gcv, w); /* regularize with lambda_gcv */ gsl_multifit_linear_solve(lambda_gcv, X, y, c_gcv, &rnorm, &snorm, w); chisq = pow(rnorm, 2.0) + pow(lambda_gcv * snorm, 2.0); fprintf(stderr, "\n=== Regularized fit (GCV) ===\n"); fprintf(stderr, "optimal lambda: %g\n", lambda_gcv); fprintf(stderr, "residual norm = %g\n", rnorm); fprintf(stderr, "solution norm = %g\n", snorm); fprintf(stderr, "chisq/dof = %g\n", chisq / (n - p)); /* output L-curve and GCV curve */ for (i = 0; i < npoints; ++i) { printf("%e %e %e %e\n", gsl_vector_get(reg_param, i), gsl_vector_get(rho, i), gsl_vector_get(eta, i), gsl_vector_get(G, i)); } /* output L-curve corner point */ printf("\n\n%f %f\n", gsl_vector_get(rho, reg_idx), gsl_vector_get(eta, reg_idx)); /* output GCV curve corner 
minimum */ printf("\n\n%e %e\n", lambda_gcv, G_gcv); gsl_multifit_linear_free(w); gsl_vector_free(c); gsl_vector_free(c_lcurve); gsl_vector_free(reg_param); gsl_vector_free(rho); gsl_vector_free(eta); gsl_vector_free(G); } gsl_matrix_free(X); gsl_vector_free(y); return 0; }  File: gsl-ref.info, Node: Robust Linear Regression Example, Next: Large Dense Linear Regression Example, Prev: Regularized Linear Regression Example 2, Up: Examples<31> 40.8.5 Robust Linear Regression Example --------------------------------------- The next program demonstrates the advantage of robust least squares on a dataset with outliers. The program generates linear (x,y) data pairs on the line y = 1.45 x + 3.88, adds some random noise, and inserts 3 outliers into the dataset. Both the robust and ordinary least squares (OLS) coefficients are computed for comparison. #include #include #include int dofit(const gsl_multifit_robust_type *T, const gsl_matrix *X, const gsl_vector *y, gsl_vector *c, gsl_matrix *cov) { int s; gsl_multifit_robust_workspace * work = gsl_multifit_robust_alloc (T, X->size1, X->size2); s = gsl_multifit_robust (X, y, c, cov, work); gsl_multifit_robust_free (work); return s; } int main (int argc, char **argv) { size_t i; size_t n; const size_t p = 2; /* linear fit */ gsl_matrix *X, *cov; gsl_vector *x, *y, *c, *c_ols; const double a = 1.45; /* slope */ const double b = 3.88; /* intercept */ gsl_rng *r; if (argc != 2) { fprintf (stderr,"usage: robfit n\n"); exit (-1); } n = atoi (argv[1]); X = gsl_matrix_alloc (n, p); x = gsl_vector_alloc (n); y = gsl_vector_alloc (n); c = gsl_vector_alloc (p); c_ols = gsl_vector_alloc (p); cov = gsl_matrix_alloc (p, p); r = gsl_rng_alloc(gsl_rng_default); /* generate linear dataset */ for (i = 0; i < n - 3; i++) { double dx = 10.0 / (n - 1.0); double ei = gsl_rng_uniform(r); double xi = -5.0 + i * dx; double yi = a * xi + b; gsl_vector_set (x, i, xi); gsl_vector_set (y, i, yi + ei); } /* add a few outliers */ gsl_vector_set(x, n - 3, 4.7); gsl_vector_set(y, n - 3, -8.3); gsl_vector_set(x, n - 2, 3.5); gsl_vector_set(y, n - 2, -6.7); gsl_vector_set(x, n - 1, 4.1); gsl_vector_set(y, n - 1, -6.0); /* construct design matrix X for linear fit */ for (i = 0; i < n; ++i) { double xi = gsl_vector_get(x, i); gsl_matrix_set (X, i, 0, 1.0); gsl_matrix_set (X, i, 1, xi); } /* perform robust and OLS fit */ dofit(gsl_multifit_robust_ols, X, y, c_ols, cov); dofit(gsl_multifit_robust_bisquare, X, y, c, cov); /* output data and model */ for (i = 0; i < n; ++i) { double xi = gsl_vector_get(x, i); double yi = gsl_vector_get(y, i); gsl_vector_view v = gsl_matrix_row(X, i); double y_ols, y_rob, y_err; gsl_multifit_robust_est(&v.vector, c, cov, &y_rob, &y_err); gsl_multifit_robust_est(&v.vector, c_ols, cov, &y_ols, &y_err); printf("%g %g %g %g\n", xi, yi, y_rob, y_ols); } #define C(i) (gsl_vector_get(c,(i))) #define COV(i,j) (gsl_matrix_get(cov,(i),(j))) { printf ("# best fit: Y = %g + %g X\n", C(0), C(1)); printf ("# covariance matrix:\n"); printf ("# [ %+.5e, %+.5e\n", COV(0,0), COV(0,1)); printf ("# %+.5e, %+.5e\n", COV(1,0), COV(1,1)); } gsl_matrix_free (X); gsl_vector_free (x); gsl_vector_free (y); gsl_vector_free (c); gsl_vector_free (c_ols); gsl_matrix_free (cov); gsl_rng_free(r); return 0; } The output from the program is shown in Fig. %s. [gsl-ref-figures/robust] Figure: Linear fit to dataset with outliers.  
File: gsl-ref.info, Node: Large Dense Linear Regression Example, Prev: Robust Linear Regression Example, Up: Examples<31>

40.8.6 Large Dense Linear Regression Example
--------------------------------------------

The following program demonstrates the large dense linear least squares solvers.  This example is adapted from Trefethen and Bau, and fits the function f(t) = \exp(\sin^3(10t)) on the interval [0,1] with a degree 15 polynomial.  The program generates n = 50000 equally spaced points t_i on this interval, calculates the function value and adds random noise to determine the observation value y_i.  The entries of the least squares matrix are X_{ij} = t_i^j, representing a polynomial fit.  The matrix is highly ill-conditioned, with a condition number of about 2.4 \cdot 10^{11}.  The program accumulates the matrix into the least squares system in 5 blocks, each with 10000 rows.  This way the full matrix X is never stored in memory.  We solve the system with both the normal equations and TSQR methods.

The results are shown in the figure below.  In the top left plot, the TSQR solution is in reasonable agreement with the exact solution, while the normal equations method fails completely, since its Cholesky factorization breaks down due to the ill-conditioning of the matrix.  In the bottom left plot, we show the L-curve calculated from TSQR, which exhibits multiple corners.  In the top right panel, we plot a regularized solution using \lambda = 10^{-5}.  The TSQR and normal equations solutions now agree; however, they are unable to provide a good fit due to the damping.  This indicates that for some ill-conditioned problems, regularizing the normal equations does not improve the solution.  This is further illustrated in the bottom right panel, where we plot the L-curve calculated from the normal equations.  The curve agrees with the TSQR curve for larger damping parameters, but for small \lambda, the normal equations approach cannot provide accurate solution vectors, leading to numerical inaccuracies in the left portion of the curve.

[gsl-ref-figures/multilarge]

Figure: Top left: unregularized solutions; top right: regularized solutions; bottom left: L-curve for TSQR method; bottom right: L-curve from normal equations method.
#include <stdio.h>
#include <stdlib.h>
#include <math.h>
#include <gsl/gsl_math.h>
#include <gsl/gsl_vector.h>
#include <gsl/gsl_matrix.h>
#include <gsl/gsl_rng.h>
#include <gsl/gsl_randist.h>
#include <gsl/gsl_blas.h>
#include <gsl/gsl_multilarge.h>

/* function to be fitted */
double
func(const double t)
{
  double x = sin(10.0 * t);
  return exp(x*x*x);
}

/* construct a row of the least squares matrix */
int
build_row(const double t, gsl_vector *row)
{
  const size_t p = row->size;
  double Xj = 1.0;
  size_t j;

  for (j = 0; j < p; ++j)
    {
      gsl_vector_set(row, j, Xj);
      Xj *= t;
    }

  return 0;
}

int
solve_system(const int print_data, const gsl_multilarge_linear_type * T,
             const double lambda, const size_t n, const size_t p,
             gsl_vector * c)
{
  const size_t nblock = 5;         /* number of blocks to accumulate */
  const size_t nrows = n / nblock; /* number of rows per block */
  gsl_multilarge_linear_workspace * w = gsl_multilarge_linear_alloc(T, p);
  gsl_matrix *X = gsl_matrix_alloc(nrows, p);
  gsl_vector *y = gsl_vector_alloc(nrows);
  gsl_rng *r = gsl_rng_alloc(gsl_rng_default);
  const size_t nlcurve = 200;
  gsl_vector *reg_param = gsl_vector_alloc(nlcurve);
  gsl_vector *rho = gsl_vector_calloc(nlcurve);
  gsl_vector *eta = gsl_vector_calloc(nlcurve);
  size_t rowidx = 0;
  double rnorm, snorm, rcond;
  double t = 0.0;
  double dt = 1.0 / (n - 1.0);

  while (rowidx < n)
    {
      size_t nleft = n - rowidx;         /* number of rows left to accumulate */
      size_t nr = GSL_MIN(nrows, nleft); /* number of rows in this block */
      gsl_matrix_view Xv = gsl_matrix_submatrix(X, 0, 0, nr, p);
      gsl_vector_view yv = gsl_vector_subvector(y, 0, nr);
      size_t i;

      /* build (X,y) block with 'nr' rows */
      for (i = 0; i < nr; ++i)
        {
          gsl_vector_view row = gsl_matrix_row(&Xv.matrix, i);
          double fi = func(t);
          double ei = gsl_ran_gaussian (r, 0.1 * fi); /* noise */
          double yi = fi + ei;

          /* construct this row of LS matrix */
          build_row(t, &row.vector);

          /* set right hand side value with added noise */
          gsl_vector_set(&yv.vector, i, yi);

          if (print_data && (i % 100 == 0))
            printf("%f %f\n", t, yi);

          t += dt;
        }

      /* accumulate (X,y) block into LS system */
      gsl_multilarge_linear_accumulate(&Xv.matrix, &yv.vector, w);

      rowidx += nr;
    }

  if (print_data)
    printf("\n\n");

  /* compute L-curve */
  gsl_multilarge_linear_lcurve(reg_param, rho, eta, w);

  /* solve large LS system and store solution in c */
  gsl_multilarge_linear_solve(lambda, c, &rnorm, &snorm, w);

  /* compute reciprocal condition number */
  gsl_multilarge_linear_rcond(&rcond, w);

  fprintf(stderr, "=== Method %s ===\n", gsl_multilarge_linear_name(w));
  fprintf(stderr, "condition number = %e\n", 1.0 / rcond);
  fprintf(stderr, "residual norm = %e\n", rnorm);
  fprintf(stderr, "solution norm = %e\n", snorm);

  /* output L-curve */
  {
    size_t i;
    for (i = 0; i < nlcurve; ++i)
      {
        printf("%.12e %.12e %.12e\n",
               gsl_vector_get(reg_param, i),
               gsl_vector_get(rho, i),
               gsl_vector_get(eta, i));
      }
    printf("\n\n");
  }

  gsl_matrix_free(X);
  gsl_vector_free(y);
  gsl_multilarge_linear_free(w);
  gsl_rng_free(r);
  gsl_vector_free(reg_param);
  gsl_vector_free(rho);
  gsl_vector_free(eta);

  return 0;
}

int
main(int argc, char *argv[])
{
  const size_t n = 50000;   /* number of observations */
  const size_t p = 16;      /* polynomial order + 1 */
  double lambda = 0.0;      /* regularization parameter */
  gsl_vector *c_tsqr = gsl_vector_calloc(p);
  gsl_vector *c_normal = gsl_vector_calloc(p);

  if (argc > 1)
    lambda = atof(argv[1]);

  /* turn off error handler so normal equations method won't abort */
  gsl_set_error_handler_off();

  /* solve system with TSQR method */
  solve_system(1, gsl_multilarge_linear_tsqr, lambda, n, p, c_tsqr);

  /* solve system with Normal equations method */
  solve_system(0, gsl_multilarge_linear_normal, lambda, n, p, c_normal);

  /* output solutions */
  {
    gsl_vector *v = gsl_vector_alloc(p);
    double t;

    for (t = 0.0; t <= 1.0; t += 0.01)
      {
        double f_exact = func(t);
        double f_tsqr, f_normal;

        build_row(t, v);
        gsl_blas_ddot(v, c_tsqr, &f_tsqr);
        gsl_blas_ddot(v, c_normal, &f_normal);

        printf("%f %e %e %e\n", t, f_exact, f_tsqr, f_normal);
      }

    gsl_vector_free(v);
  }

  gsl_vector_free(c_tsqr);
  gsl_vector_free(c_normal);

  return 0;
}


File: gsl-ref.info, Node: References and Further Reading<33>, Prev: Examples<31>, Up: Linear Least-Squares Fitting

40.9 References and Further Reading
===================================

A summary of formulas and techniques for least squares fitting can be found in the “Statistics” chapter of the Annual Review of Particle Physics prepared by the Particle Data Group,

   * ‘Review of Particle Properties’, R.M. Barnett et al., Physical Review D54, 1 (1996) ‘http://pdg.lbl.gov’

The Review of Particle Physics is available online at the website given above.

The tests used to prepare these routines are based on the NIST Statistical Reference Datasets.  The datasets and their documentation are available from NIST at the following website,

   ‘http://www.nist.gov/itl/div898/strd/index.html’

More information on Tikhonov regularization can be found in

   * Hansen, P. C. (1998), Rank-Deficient and Discrete Ill-Posed Problems: Numerical Aspects of Linear Inversion.  SIAM Monogr. on Mathematical Modeling and Computation, Society for Industrial and Applied Mathematics

   * M. Rezghi and S. M. Hosseini (2009), A new variant of L-curve for Tikhonov regularization, Journal of Computational and Applied Mathematics, Volume 231, Issue 2, pages 914-924.

The GSL implementation of robust linear regression closely follows the publications

   * DuMouchel, W. and F. O’Brien (1989), “Integrating a robust option into a multiple regression computing environment,” Computer Science and Statistics: Proceedings of the 21st Symposium on the Interface, American Statistical Association

   * Street, J.O., R.J. Carroll, and D. Ruppert (1988), “A note on computing robust regression estimates via iteratively reweighted least squares,” The American Statistician, v. 42, pp. 152-154.

More information about the normal equations and TSQR approach for solving large linear least squares systems can be found in the publications

   * Trefethen, L. N. and Bau, D. (1997), “Numerical Linear Algebra”, SIAM.

   * Demmel, J., Grigori, L., Hoemmen, M. F., and Langou, J. “Communication-optimal parallel and sequential QR and LU factorizations”, UCB Technical Report No. UCB/EECS-2008-89, 2008.


File: gsl-ref.info, Node: Nonlinear Least-Squares Fitting, Next: Basis Splines, Prev: Linear Least-Squares Fitting, Up: Top

41 Nonlinear Least-Squares Fitting
**********************************

This chapter describes functions for multidimensional nonlinear least-squares fitting.  There are generally two classes of algorithms for solving nonlinear least squares problems: line search methods and trust region methods.  GSL currently implements only trust region methods and provides the user with full access to intermediate steps of the iteration.  The user also has the ability to tune a number of parameters affecting low-level aspects of the algorithm, which can help to accelerate convergence for the specific problem at hand.  GSL provides two separate interfaces for nonlinear least squares fitting.  The first is designed for small to moderate sized problems, and the second is designed for very large problems, which may or may not have significant sparse structure.
The header file ‘gsl_multifit_nlinear.h’ contains prototypes for the multidimensional nonlinear fitting functions and related declarations relating to the small to moderate sized systems. The header file ‘gsl_multilarge_nlinear.h’ contains prototypes for the multidimensional nonlinear fitting functions and related declarations relating to large systems. * Menu: * Overview: Overview<6>. * Solving the Trust Region Subproblem (TRS): Solving the Trust Region Subproblem TRS. * Weighted Nonlinear Least-Squares:: * Tunable Parameters:: * Initializing the Solver: Initializing the Solver<3>. * Providing the Function to be Minimized:: * Iteration: Iteration<5>. * Testing for Convergence:: * High Level Driver:: * Covariance matrix of best fit parameters:: * Troubleshooting: Troubleshooting<2>. * Examples: Examples<32>. * References and Further Reading: References and Further Reading<34>.  File: gsl-ref.info, Node: Overview<6>, Next: Solving the Trust Region Subproblem TRS, Up: Nonlinear Least-Squares Fitting 41.1 Overview ============= The problem of multidimensional nonlinear least-squares fitting requires the minimization of the squared residuals of n functions, f_i, in p parameters, x_i, \Phi(x) = (1/2) || f(x) ||^2 = (1/2) \sum_{i=1}^{n} f_i(x_1, ..., x_p)^2 In trust region methods, the objective (or cost) function \Phi(x) is approximated by a model function m_k(\delta) in the vicinity of some point x_k. The model function is often simply a second order Taylor series expansion around the point x_k, ie: \Phi(x_k + \delta) ~=~ m_k(\delta) = \Phi(x_k) + g_k^T \delta + 1/2 \delta^T B_k \delta where g_k = \nabla \Phi(x_k) = J^T f is the gradient vector at the point x_k, B_k = \nabla^2 \Phi(x_k) is the Hessian matrix at x_k, or some approximation to it, and J is the n-by-p Jacobian matrix J_{ij} = d f_i / d x_j In order to find the next step \delta, we minimize the model function m_k(\delta), but search for solutions only within a region where we trust that m_k(\delta) is a good approximation to the objective function \Phi(x_k + \delta). In other words, we seek a solution of the trust region subproblem (TRS) \min_(\delta \in R^p) m_k(\delta), s.t. || D_k \delta || <= \Delta_k where \Delta_k > 0 is the trust region radius and D_k is a scaling matrix. If D_k = I, then the trust region is a ball of radius \Delta_k centered at x_k. In some applications, the parameter vector x may have widely different scales. For example, one parameter might be a temperature on the order of 10^3 K, while another might be a length on the order of 10^{-6} m. In such cases, a spherical trust region may not be the best choice, since if \Phi changes rapidly along directions with one scale, and more slowly along directions with a different scale, the model function m_k may be a poor approximation to \Phi along the rapidly changing directions. In such problems, it may be best to use an elliptical trust region, by setting D_k to a diagonal matrix whose entries are designed so that the scaled step D_k \delta has entries of approximately the same order of magnitude. The trust region subproblem above normally amounts to solving a linear least squares system (or multiple systems) for the step \delta. Once \delta is computed, it is checked whether or not it reduces the objective function \Phi(x). 
A useful statistic for this is to look at the ratio

   \rho_k = ( \Phi(x_k) - \Phi(x_k + \delta_k) ) / ( m_k(0) - m_k(\delta_k) )

where the numerator is the actual reduction of the objective function due to the step \delta_k, and the denominator is the predicted reduction due to the model m_k.  If \rho_k is negative, it means that the step \delta_k increased the objective function and so it is rejected.  If \rho_k is positive, then we have found a step which reduced the objective function and it is accepted.  Furthermore, if \rho_k is close to 1, then this indicates that the model function is a good approximation to the objective function in the trust region, and so on the next iteration the trust region is enlarged in order to take more ambitious steps.  When a step is rejected, the trust region is made smaller and the TRS is solved again.  An outline for the general trust region method used by GSL can now be given.

‘Trust Region Algorithm’

  1. Initialize: given x_0, construct m_0(\delta), D_0 and \Delta_0 > 0

  2. For k = 0, 1, 2, …

     a. If converged, then stop

     b. Solve TRS for trial step \delta_k

     c. Evaluate trial step by computing \rho_k

        1). if step is accepted, set x_{k+1} = x_k + \delta_k and increase radius, \Delta_{k+1} = \alpha \Delta_k

        2). if step is rejected, set x_{k+1} = x_k and decrease radius, \Delta_{k+1} = {\Delta_k \over \beta}; goto 2(b)

     d. Construct m_{k+1}(\delta) and D_{k+1}

GSL offers the user a number of different algorithms for solving the trust region subproblem in 2(b), as well as different choices of scaling matrices D_k and different methods of updating the trust region radius \Delta_k.  Therefore, while reasonable default methods are provided, the user has a lot of control to fine-tune the various steps of the algorithm for their specific problem.


File: gsl-ref.info, Node: Solving the Trust Region Subproblem TRS, Next: Weighted Nonlinear Least-Squares, Prev: Overview<6>, Up: Nonlinear Least-Squares Fitting

41.2 Solving the Trust Region Subproblem (TRS)
==============================================

Below we describe the methods available for solving the trust region subproblem; they provide either exact or approximate solutions.  In all algorithms below, the Hessian matrix B_k is approximated as B_k \approx J_k^T J_k, where J_k = J(x_k).  In all methods, the solution of the TRS involves solving a linear least squares system involving the Jacobian matrix.  For small to moderate sized problems (‘gsl_multifit_nlinear’ interface), this is accomplished by factoring the full Jacobian matrix, which is provided by the user, with the Cholesky, QR, or SVD decompositions.  For large systems (‘gsl_multilarge_nlinear’ interface), the user has two choices.  One is to solve the system iteratively, without needing to store the full Jacobian matrix in memory.  With this method, the user must provide a routine to calculate the matrix-vector products J u or J^T u for a given vector u.  This iterative method is particularly useful for systems where the Jacobian has sparse structure, since forming matrix-vector products can be done cheaply.  The second option for large systems involves forming the normal equations matrix J^T J and then factoring it using a Cholesky decomposition.  The normal equations matrix is p-by-p, typically much smaller than the full n-by-p Jacobian, and can usually be stored in memory even if the full Jacobian matrix cannot.  This option is useful for large, dense systems, or if the iterative method has difficulty converging.
* Menu: * Levenberg-Marquardt:: * Levenberg-Marquardt with Geodesic Acceleration:: * Dogleg:: * Double Dogleg:: * Two Dimensional Subspace:: * Steihaug-Toint Conjugate Gradient::  File: gsl-ref.info, Node: Levenberg-Marquardt, Next: Levenberg-Marquardt with Geodesic Acceleration, Up: Solving the Trust Region Subproblem TRS 41.2.1 Levenberg-Marquardt -------------------------- There is a theorem which states that if \delta_k is a solution to the trust region subproblem given above, then there exists \mu_k \ge 0 such that ( B_k + \mu_k D_k^T D_k ) \delta_k = -g_k with \mu_k (\Delta_k - ||D_k \delta_k||) = 0. This forms the basis of the Levenberg-Marquardt algorithm, which controls the trust region size by adjusting the parameter \mu_k rather than the radius \Delta_k directly. For each radius \Delta_k, there is a unique parameter \mu_k which solves the TRS, and they have an inverse relationship, so that large values of \mu_k correspond to smaller trust regions, while small values of \mu_k correspond to larger trust regions. With the approximation B_k \approx J_k^T J_k, on each iteration, in order to calculate the step \delta_k, the following linear least squares problem is solved: [J_k; sqrt(mu_k) D_k] \delta_k = - [f_k; 0] If the step \delta_k is accepted, then \mu_k is decreased on the next iteration in order to take a larger step, otherwise it is increased to take a smaller step. The Levenberg-Marquardt algorithm provides an exact solution of the trust region subproblem, but typically has a higher computational cost per iteration than the approximate methods discussed below, since it may need to solve the least squares system above several times for different values of \mu_k.  File: gsl-ref.info, Node: Levenberg-Marquardt with Geodesic Acceleration, Next: Dogleg, Prev: Levenberg-Marquardt, Up: Solving the Trust Region Subproblem TRS 41.2.2 Levenberg-Marquardt with Geodesic Acceleration ----------------------------------------------------- This method applies a so-called geodesic acceleration correction to the standard Levenberg-Marquardt step \delta_k (Transtrum et al, 2011). By interpreting \delta_k as a first order step along a geodesic in the model parameter space (ie: a velocity \delta_k = v_k), the geodesic acceleration a_k is a second order correction along the geodesic which is determined by solving the linear least squares system [J_k; sqrt(mu_k) D_k] a_k = - [f_vv(x_k); 0] where f_{vv} is the second directional derivative of the residual vector in the velocity direction v, f_{vv}(x) = D_v^2 f = \sum_{\alpha\beta} v_{\alpha} v_{\beta} \partial_{\alpha} \partial_{\beta} f(x), where \alpha and \beta are summed over the p parameters. The new total step is then \delta_k' = v_k + {1 \over 2}a_k. The second order correction a_k can be calculated with a modest additional cost, and has been shown to dramatically reduce the number of iterations (and expensive Jacobian evaluations) required to reach convergence on a variety of different problems. In order to utilize the geodesic acceleration, the user must supply a function which provides the second directional derivative vector f_{vv}(x), or alternatively the library can use a finite difference method to estimate this vector with one additional function evaluation of f(x + h v) where h is a tunable step size (see the ‘h_fvv’ parameter description).  
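Both of the augmented least squares systems above (for the velocity step \delta_k and for the acceleration a_k) are, in the least squares sense, equivalent to damped normal equations of the form

   ( J_k^T J_k + \mu_k D_k^T D_k ) z = -J_k^T r

with r = f_k for the velocity step and r = f_{vv}(x_k) for the acceleration.  Taking B_k \approx J_k^T J_k and g_k = J_k^T f_k, the velocity case is exactly the condition ( B_k + \mu_k D_k^T D_k ) \delta_k = -g_k quoted at the start of the Levenberg-Marquardt section.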
File: gsl-ref.info, Node: Dogleg, Next: Double Dogleg, Prev: Levenberg-Marquardt with Geodesic Acceleration, Up: Solving the Trust Region Subproblem TRS 41.2.3 Dogleg ------------- This is Powell’s dogleg method, which finds an approximate solution to the trust region subproblem, by restricting its search to a piecewise linear “dogleg” path, composed of the origin, the Cauchy point which represents the model minimizer along the steepest descent direction, and the Gauss-Newton point, which is the overall minimizer of the unconstrained model. The Gauss-Newton step is calculated by solving J_k \delta_{gn} = -f_k which is the main computational task for each iteration, but only needs to be performed once per iteration. If the Gauss-Newton point is inside the trust region, it is selected as the step. If it is outside, the method then calculates the Cauchy point, which is located along the gradient direction. If the Cauchy point is also outside the trust region, the method assumes that it is still far from the minimum and so proceeds along the gradient direction, truncating the step at the trust region boundary. If the Cauchy point is inside the trust region, with the Gauss-Newton point outside, the method uses a dogleg step, which is a linear combination of the gradient direction and the Gauss-Newton direction, stopping at the trust region boundary.  File: gsl-ref.info, Node: Double Dogleg, Next: Two Dimensional Subspace, Prev: Dogleg, Up: Solving the Trust Region Subproblem TRS 41.2.4 Double Dogleg -------------------- This method is an improvement over the classical dogleg algorithm, which attempts to include information about the Gauss-Newton step while the iteration is still far from the minimum. When the Cauchy point is inside the trust region and the Gauss-Newton point is outside, the method computes a scaled Gauss-Newton point and then takes a dogleg step between the Cauchy point and the scaled Gauss-Newton point. The scaling is calculated to ensure that the reduction in the model m_k is about the same as the reduction provided by the Cauchy point.  File: gsl-ref.info, Node: Two Dimensional Subspace, Next: Steihaug-Toint Conjugate Gradient, Prev: Double Dogleg, Up: Solving the Trust Region Subproblem TRS 41.2.5 Two Dimensional Subspace ------------------------------- The dogleg methods restrict the search for the TRS solution to a 1D curve defined by the Cauchy and Gauss-Newton points. An improvement to this is to search for a solution using the full two dimensional subspace spanned by the Cauchy and Gauss-Newton directions. The dogleg path is of course inside this subspace, and so this method solves the TRS at least as accurately as the dogleg methods. Since this method searches a larger subspace for a solution, it can converge more quickly than dogleg on some problems. Because the subspace is only two dimensional, this method is very efficient and the main computation per iteration is to determine the Gauss-Newton point.  File: gsl-ref.info, Node: Steihaug-Toint Conjugate Gradient, Prev: Two Dimensional Subspace, Up: Solving the Trust Region Subproblem TRS 41.2.6 Steihaug-Toint Conjugate Gradient ---------------------------------------- One difficulty of the dogleg methods is calculating the Gauss-Newton step when the Jacobian matrix is singular. The Steihaug-Toint method also computes a generalized dogleg step, but avoids solving for the Gauss-Newton step directly, instead using an iterative conjugate gradient algorithm. 
This method performs well at points where the Jacobian is singular, and is also suitable for large-scale problems where factoring the Jacobian matrix could be prohibitively expensive.


File: gsl-ref.info, Node: Weighted Nonlinear Least-Squares, Next: Tunable Parameters, Prev: Solving the Trust Region Subproblem TRS, Up: Nonlinear Least-Squares Fitting

41.3 Weighted Nonlinear Least-Squares
=====================================

Weighted nonlinear least-squares fitting minimizes the function

   \Phi(x) = (1/2) || f(x) ||_W^2 = (1/2) \sum_{i=1}^{n} w_i f_i(x_1, ..., x_p)^2

where W = \diag(w_1,w_2,...,w_n) is the weighting matrix, and ||f||_W^2 = f^T W f.  The weights w_i are commonly defined as w_i = 1/\sigma_i^2, where \sigma_i is the error in the i-th measurement.  A simple change of variables \tilde{f} = W^{1 \over 2} f yields \Phi(x) = {1 \over 2} ||\tilde{f}||^2, which is in the same form as the unweighted case.  The user can either perform this transform directly on their function residuals and Jacobian, or use the *note gsl_multifit_nlinear_winit(): ba1. interface which automatically performs the correct scaling.  To manually perform this transformation, the residuals and Jacobian should be modified according to

   f~_i = f_i / \sigma_i

   J~_ij = (1 / \sigma_i) df_i/dx_j

For large systems, the user must perform their own weighting.


File: gsl-ref.info, Node: Tunable Parameters, Next: Initializing the Solver<3>, Prev: Weighted Nonlinear Least-Squares, Up: Nonlinear Least-Squares Fitting

41.4 Tunable Parameters
=======================

The user can tune nearly all aspects of the iteration at allocation time.  For the ‘gsl_multifit_nlinear’ interface, the user may modify the *note gsl_multifit_nlinear_parameters: ba4. structure, which is defined as follows:

 -- Type: gsl_multifit_nlinear_parameters

     typedef struct
     {
       const gsl_multifit_nlinear_trs *trs;        /* trust region subproblem method */
       const gsl_multifit_nlinear_scale *scale;    /* scaling method */
       const gsl_multifit_nlinear_solver *solver;  /* solver method */
       gsl_multifit_nlinear_fdtype fdtype;         /* finite difference method */
       double factor_up;                           /* factor for increasing trust radius */
       double factor_down;                         /* factor for decreasing trust radius */
       double avmax;                               /* max allowed |a|/|v| */
       double h_df;                                /* step size for finite difference Jacobian */
       double h_fvv;                               /* step size for finite difference fvv */
     } gsl_multifit_nlinear_parameters;

For the ‘gsl_multilarge_nlinear’ interface, the user may modify the *note gsl_multilarge_nlinear_parameters: ba5. structure, which is defined as follows:

 -- Type: gsl_multilarge_nlinear_parameters

     typedef struct
     {
       const gsl_multilarge_nlinear_trs *trs;        /* trust region subproblem method */
       const gsl_multilarge_nlinear_scale *scale;    /* scaling method */
       const gsl_multilarge_nlinear_solver *solver;  /* solver method */
       gsl_multilarge_nlinear_fdtype fdtype;         /* finite difference method */
       double factor_up;                             /* factor for increasing trust radius */
       double factor_down;                           /* factor for decreasing trust radius */
       double avmax;                                 /* max allowed |a|/|v| */
       double h_df;                                  /* step size for finite difference Jacobian */
       double h_fvv;                                 /* step size for finite difference fvv */
       size_t max_iter;                              /* maximum iterations for trs method */
       double tol;                                   /* tolerance for solving trs */
     } gsl_multilarge_nlinear_parameters;

Each of these parameters is discussed in further detail below.
-- Type: gsl_multifit_nlinear_trs -- Type: gsl_multilarge_nlinear_trs The parameter ‘trs’ determines the method used to solve the trust region subproblem, and may be selected from the following choices, -- Variable: *note gsl_multifit_nlinear_trs: ba6. *gsl_multifit_nlinear_trs_lm -- Variable: *note gsl_multilarge_nlinear_trs: ba7. *gsl_multilarge_nlinear_trs_lm This selects the Levenberg-Marquardt algorithm. -- Variable: *note gsl_multifit_nlinear_trs: ba6. *gsl_multifit_nlinear_trs_lmaccel -- Variable: *note gsl_multilarge_nlinear_trs: ba7. *gsl_multilarge_nlinear_trs_lmaccel This selects the Levenberg-Marquardt algorithm with geodesic acceleration. -- Variable: *note gsl_multifit_nlinear_trs: ba6. *gsl_multifit_nlinear_trs_dogleg -- Variable: *note gsl_multilarge_nlinear_trs: ba7. *gsl_multilarge_nlinear_trs_dogleg This selects the dogleg algorithm. -- Variable: *note gsl_multifit_nlinear_trs: ba6. *gsl_multifit_nlinear_trs_ddogleg -- Variable: *note gsl_multilarge_nlinear_trs: ba7. *gsl_multilarge_nlinear_trs_ddogleg This selects the double dogleg algorithm. -- Variable: *note gsl_multifit_nlinear_trs: ba6. *gsl_multifit_nlinear_trs_subspace2D -- Variable: *note gsl_multilarge_nlinear_trs: ba7. *gsl_multilarge_nlinear_trs_subspace2D This selects the 2D subspace algorithm. -- Variable: *note gsl_multilarge_nlinear_trs: ba7. *gsl_multilarge_nlinear_trs_cgst This selects the Steihaug-Toint conjugate gradient algorithm. This method is available only for large systems. -- Type: gsl_multifit_nlinear_scale -- Type: gsl_multilarge_nlinear_scale The parameter ‘scale’ determines the diagonal scaling matrix D and may be selected from the following choices, -- Variable: *note gsl_multifit_nlinear_scale: bb3. *gsl_multifit_nlinear_scale_more -- Variable: *note gsl_multilarge_nlinear_scale: bb4. *gsl_multilarge_nlinear_scale_more This damping strategy was suggested by Moré, and corresponds to D^T D = \max(\diag(J^T J)), in other words the maximum elements of \diag(J^T J) encountered thus far in the iteration. This choice of D makes the problem scale-invariant, so that if the model parameters x_i are each scaled by an arbitrary constant, \tilde{x}_i = a_i x_i, then the sequence of iterates produced by the algorithm would be unchanged. This method can work very well in cases where the model parameters have widely different scales (ie: if some parameters are measured in nanometers, while others are measured in degrees Kelvin). This strategy has been proven effective on a large class of problems and so it is the library default, but it may not be the best choice for all problems. -- Variable: *note gsl_multifit_nlinear_scale: bb3. *gsl_multifit_nlinear_scale_levenberg -- Variable: *note gsl_multilarge_nlinear_scale: bb4. *gsl_multilarge_nlinear_scale_levenberg This damping strategy was originally suggested by Levenberg, and corresponds to D^T D = I. This method has also proven effective on a large class of problems, but is not scale-invariant. However, some authors (e.g. Transtrum and Sethna 2012) argue that this choice is better for problems which are susceptible to parameter evaporation (ie: parameters go to infinity) -- Variable: *note gsl_multifit_nlinear_scale: bb3. *gsl_multifit_nlinear_scale_marquardt -- Variable: *note gsl_multilarge_nlinear_scale: bb4. *gsl_multilarge_nlinear_scale_marquardt This damping strategy was suggested by Marquardt, and corresponds to D^T D = \diag(J^T J). 
This method is scale-invariant, but it is generally considered inferior to both the Levenberg and Moré strategies, though may work well on certain classes of problems. -- Type: gsl_multifit_nlinear_solver -- Type: gsl_multilarge_nlinear_solver Solving the trust region subproblem on each iteration almost always requires the solution of the following linear least squares system [J; sqrt(mu) D] \delta = - [f; 0] The ‘solver’ parameter determines how the system is solved and can be selected from the following choices: -- Variable: *note gsl_multifit_nlinear_solver: bbb. *gsl_multifit_nlinear_solver_qr This method solves the system using a rank revealing QR decomposition of the Jacobian J. This method will produce reliable solutions in cases where the Jacobian is rank deficient or near-singular but does require about twice as many operations as the Cholesky method discussed below. -- Variable: *note gsl_multifit_nlinear_solver: bbb. *gsl_multifit_nlinear_solver_cholesky -- Variable: *note gsl_multilarge_nlinear_solver: bbc. *gsl_multilarge_nlinear_solver_cholesky This method solves the alternate normal equations problem ( J^T J + \mu D^T D ) \delta = -J^T f by using a Cholesky decomposition of the matrix J^T J + \mu D^T D. This method is faster than the QR approach, however it is susceptible to numerical instabilities if the Jacobian matrix is rank deficient or near-singular. In these cases, an attempt is made to reduce the condition number of the matrix using Jacobi preconditioning, but for highly ill-conditioned problems the QR approach is better. If it is known that the Jacobian matrix is well conditioned, this method is accurate and will perform faster than the QR approach. -- Variable: *note gsl_multifit_nlinear_solver: bbb. *gsl_multifit_nlinear_solver_mcholesky -- Variable: *note gsl_multilarge_nlinear_solver: bbc. *gsl_multilarge_nlinear_solver_mcholesky This method solves the alternate normal equations problem ( J^T J + \mu D^T D ) \delta = -J^T f by using a modified Cholesky decomposition of the matrix J^T J + \mu D^T D. This is more suitable for the dogleg methods where the parameter \mu = 0, and the matrix J^T J may be ill-conditioned or indefinite causing the standard Cholesky decomposition to fail. This method is based on Level 2 BLAS and is thus slower than the standard Cholesky decomposition, which is based on Level 3 BLAS. -- Variable: *note gsl_multifit_nlinear_solver: bbb. *gsl_multifit_nlinear_solver_svd This method solves the system using a singular value decomposition of the Jacobian J. This method will produce the most reliable solutions for ill-conditioned Jacobians but is also the slowest solver method. -- Type: gsl_multifit_nlinear_fdtype The parameter ‘fdtype’ specifies whether to use forward or centered differences when approximating the Jacobian. This is only used when an analytic Jacobian is not provided to the solver. This parameter may be set to one of the following choices. -- Macro: GSL_MULTIFIT_NLINEAR_FWDIFF This specifies a forward finite difference to approximate the Jacobian matrix. The Jacobian matrix will be calculated as J_ij = 1 / \Delta_j ( f_i(x + \Delta_j e_j) - f_i(x) ) where \Delta_j = h |x_j| and e_j is the standard j-th Cartesian unit basis vector so that x + \Delta_j e_j represents a small (forward) perturbation of the j-th parameter by an amount \Delta_j. The perturbation \Delta_j is proportional to the current value |x_j| which helps to calculate an accurate Jacobian when the various parameters have different scale sizes. 
The value of h is specified by the ‘h_df’ parameter. The accuracy of this method is O(h), and evaluating this matrix requires an additional p function evaluations. -- Macro: GSL_MULTIFIT_NLINEAR_CTRDIFF This specifies a centered finite difference to approximate the Jacobian matrix. The Jacobian matrix will be calculated as J_ij = 1 / \Delta_j ( f_i(x + 1/2 \Delta_j e_j) - f_i(x - 1/2 \Delta_j e_j) ) See above for a description of \Delta_j. The accuracy of this method is O(h^2), but evaluating this matrix requires an additional 2p function evaluations. ‘double factor_up’ When a step is accepted, the trust region radius will be increased by this factor. The default value is 3. ‘double factor_down’ When a step is rejected, the trust region radius will be decreased by this factor. The default value is 2. ‘double avmax’ When using geodesic acceleration to solve a nonlinear least squares problem, an important parameter to monitor is the ratio of the acceleration term to the velocity term, |a| / |v| If this ratio is small, it means the acceleration correction is contributing very little to the step. This could be because the problem is not “nonlinear” enough to benefit from the acceleration. If the ratio is large (> 1) it means that the acceleration is larger than the velocity, which shouldn’t happen since the step represents a truncated series and so the second order term a should be smaller than the first order term v to guarantee convergence. Therefore any steps with a ratio larger than the parameter ‘avmax’ are rejected. ‘avmax’ is set to 0.75 by default. For problems which experience difficulty converging, this threshold could be lowered. ‘double h_df’ This parameter specifies the step size for approximating the Jacobian matrix with finite differences. It is set to \sqrt{\epsilon} by default, where \epsilon is ‘GSL_DBL_EPSILON’. ‘double h_fvv’ When using geodesic acceleration, the user must either supply a function to calculate f_{vv}(x) or the library can estimate this second directional derivative using a finite difference method. When using finite differences, the library must calculate f(x + h v) where h represents a small step in the velocity direction. The parameter ‘h_fvv’ defines this step size and is set to 0.02 by default.  File: gsl-ref.info, Node: Initializing the Solver<3>, Next: Providing the Function to be Minimized, Prev: Tunable Parameters, Up: Nonlinear Least-Squares Fitting 41.5 Initializing the Solver ============================ -- Type: gsl_multifit_nlinear_type This structure specifies the type of algorithm which will be used to solve a nonlinear least squares problem. It may be selected from the following choices, -- Variable: *note gsl_multifit_nlinear_type: bc7. *gsl_multifit_nlinear_trust This specifies a trust region method. It is currently the only implemented nonlinear least squares method. -- Function: gsl_multifit_nlinear_workspace *gsl_multifit_nlinear_alloc (const gsl_multifit_nlinear_type *T, const gsl_multifit_nlinear_parameters *params, const size_t n, const size_t p) -- Function: gsl_multilarge_nlinear_workspace *gsl_multilarge_nlinear_alloc (const gsl_multilarge_nlinear_type *T, const gsl_multilarge_nlinear_parameters *params, const size_t n, const size_t p) These functions return a pointer to a newly allocated instance of a derivative solver of type *note T: bca. for *note n: bca. observations and *note p: bca. parameters. The *note params: bca. 
input specifies a tunable set of parameters which will affect important details in each iteration of the trust region subproblem algorithm.  It is recommended to start with the suggested default parameters (see *note gsl_multifit_nlinear_default_parameters(): bcb. and *note gsl_multilarge_nlinear_default_parameters(): bcc.) and then tune the parameters once the code is working correctly.  See *note Tunable Parameters: ba2. for descriptions of the various parameters.  For example, the following code creates an instance of a Levenberg-Marquardt solver for 100 data points and 3 parameters, using suggested defaults:

     const gsl_multifit_nlinear_type * T = gsl_multifit_nlinear_trust;
     gsl_multifit_nlinear_parameters params =
       gsl_multifit_nlinear_default_parameters();
     gsl_multifit_nlinear_workspace * w =
       gsl_multifit_nlinear_alloc (T, &params, 100, 3);

The number of observations *note n: bca. must be greater than or equal to the number of parameters *note p: bca.  If there is insufficient memory to create the solver then the function returns a null pointer and the error handler is invoked with an error code of *note GSL_ENOMEM: 2a.

 -- Function: *note gsl_multifit_nlinear_parameters: ba4. gsl_multifit_nlinear_default_parameters (void)
 -- Function: *note gsl_multilarge_nlinear_parameters: ba5. gsl_multilarge_nlinear_default_parameters (void)

     These functions return a set of recommended default parameters for use in solving nonlinear least squares problems.  The user can tune each parameter to improve the performance on their particular problem, see *note Tunable Parameters: ba2.

 -- Function: int gsl_multifit_nlinear_init (const gsl_vector *x, gsl_multifit_nlinear_fdf *fdf, gsl_multifit_nlinear_workspace *w)
 -- Function: int gsl_multifit_nlinear_winit (const gsl_vector *x, const gsl_vector *wts, gsl_multifit_nlinear_fdf *fdf, gsl_multifit_nlinear_workspace *w)
 -- Function: int gsl_multilarge_nlinear_init (const gsl_vector *x, gsl_multilarge_nlinear_fdf *fdf, gsl_multilarge_nlinear_workspace *w)

     These functions initialize, or reinitialize, an existing workspace *note w: bce. to use the system *note fdf: bce. and the initial guess *note x: bce.  See *note Providing the Function to be Minimized: bcf. for a description of the *note fdf: bce. structure.  Optionally, a weight vector ‘wts’ can be given to perform a weighted nonlinear regression.  Here, the weighting matrix is W = \diag(w_1,w_2,...,w_n).

 -- Function: void gsl_multifit_nlinear_free (gsl_multifit_nlinear_workspace *w)
 -- Function: void gsl_multilarge_nlinear_free (gsl_multilarge_nlinear_workspace *w)

     These functions free all the memory associated with the workspace *note w: bd1.

 -- Function: const char *gsl_multifit_nlinear_name (const gsl_multifit_nlinear_workspace *w)
 -- Function: const char *gsl_multilarge_nlinear_name (const gsl_multilarge_nlinear_workspace *w)

     These functions return a pointer to the name of the solver.  For example:

          printf ("w is a '%s' solver\n", gsl_multifit_nlinear_name (w));

     would print something like ‘w is a 'trust-region' solver’.

 -- Function: const char *gsl_multifit_nlinear_trs_name (const gsl_multifit_nlinear_workspace *w)
 -- Function: const char *gsl_multilarge_nlinear_trs_name (const gsl_multilarge_nlinear_workspace *w)

     These functions return a pointer to the name of the trust region subproblem method.  For example:

          printf ("w is a '%s' solver\n", gsl_multifit_nlinear_trs_name (w));

     would print something like ‘w is a 'levenberg-marquardt' solver’.
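As a further illustration, the defaults can be adjusted before allocation to select a different trust region method or solver.  The following fragment is a minimal sketch rather than one of the manual's examples: the problem size of 100 observations and 3 parameters is arbitrary, and only the fields being overridden differ from the defaults.

     #include <gsl/gsl_multifit_nlinear.h>

     int
     main (void)
     {
       const gsl_multifit_nlinear_type * T = gsl_multifit_nlinear_trust;
       gsl_multifit_nlinear_parameters params =
         gsl_multifit_nlinear_default_parameters();
       gsl_multifit_nlinear_workspace * w;

       /* select Levenberg-Marquardt with geodesic acceleration and the
          Cholesky solver; all other fields keep their default values */
       params.trs = gsl_multifit_nlinear_trs_lmaccel;
       params.solver = gsl_multifit_nlinear_solver_cholesky;

       w = gsl_multifit_nlinear_alloc (T, &params, 100, 3);

       /* ... initialize the workspace with gsl_multifit_nlinear_init()
          and drive the iteration here ... */

       gsl_multifit_nlinear_free (w);

       return 0;
     }
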
File: gsl-ref.info, Node: Providing the Function to be Minimized, Next: Iteration<5>, Prev: Initializing the Solver<3>, Up: Nonlinear Least-Squares Fitting 41.6 Providing the Function to be Minimized =========================================== The user must provide n functions of p variables for the minimization algorithm to operate on. In order to allow for arbitrary parameters the functions are defined by the following data types: -- Type: gsl_multifit_nlinear_fdf This data type defines a general system of functions with arbitrary parameters, the corresponding Jacobian matrix of derivatives, and optionally the second directional derivative of the functions for geodesic acceleration. ‘int (* f) (const gsl_vector * x, void * params, gsl_vector * f)’ This function should store the n components of the vector f(x) in ‘f’ for argument ‘x’ and arbitrary parameters ‘params’, returning an appropriate error code if the function cannot be computed. ‘int (* df) (const gsl_vector * x, void * params, gsl_matrix * J)’ This function should store the ‘n’-by-‘p’ matrix result J_ij = d f_i(x) / d x_j in ‘J’ for argument ‘x’ and arbitrary parameters ‘params’, returning an appropriate error code if the matrix cannot be computed. If an analytic Jacobian is unavailable, or too expensive to compute, this function pointer may be set to ‘NULL’, in which case the Jacobian will be internally computed using finite difference approximations of the function ‘f’. ‘int (* fvv) (const gsl_vector * x, const gsl_vector * v, void * params, gsl_vector * fvv)’ When geodesic acceleration is enabled, this function should store the n components of the vector f_{vv}(x) = \sum_{\alpha\beta} v_{\alpha} v_{\beta} {\partial \over \partial x_{\alpha}} {\partial \over \partial x_{\beta}} f(x), representing second directional derivatives of the function to be minimized, into the output ‘fvv’. The parameter vector is provided in ‘x’ and the velocity vector is provided in ‘v’, both of which have p components. The arbitrary parameters are given in ‘params’. If analytic expressions for f_{vv}(x) are unavailable or too difficult to compute, this function pointer may be set to ‘NULL’, in which case f_{vv}(x) will be computed internally using a finite difference approximation. ‘size_t n’ the number of functions, i.e. the number of components of the vector ‘f’. ‘size_t p’ the number of independent variables, i.e. the number of components of the vector ‘x’. ‘void * params’ a pointer to the arbitrary parameters of the function. ‘size_t nevalf’ This does not need to be set by the user. It counts the number of function evaluations and is initialized by the ‘_init’ function. ‘size_t nevaldf’ This does not need to be set by the user. It counts the number of Jacobian evaluations and is initialized by the ‘_init’ function. ‘size_t nevalfvv’ This does not need to be set by the user. It counts the number of f_{vv}(x) evaluations and is initialized by the ‘_init’ function. -- Type: gsl_multilarge_nlinear_fdf This data type defines a general system of functions with arbitrary parameters, a function to compute J u or J^T u for a given vector u, the normal equations matrix J^T J, and optionally the second directional derivative of the functions for geodesic acceleration. ‘int (* f) (const gsl_vector * x, void * params, gsl_vector * f)’ This function should store the n components of the vector f(x) in ‘f’ for argument ‘x’ and arbitrary parameters ‘params’, returning an appropriate error code if the function cannot be computed. 
‘int (* df) (CBLAS_TRANSPOSE_t TransJ, const gsl_vector * x, const gsl_vector * u, void * params, gsl_vector * v, gsl_matrix * JTJ)’ If ‘TransJ’ is equal to ‘CblasNoTrans’, then this function should compute the matrix-vector product J u and store the result in ‘v’. If ‘TransJ’ is equal to ‘CblasTrans’, then this function should compute the matrix-vector product J^T u and store the result in ‘v’. Additionally, the normal equations matrix J^T J should be stored in the lower half of ‘JTJ’. The input matrix ‘JTJ’ could be set to ‘NULL’, for example by iterative methods which do not require this matrix, so the user should check for this prior to constructing the matrix. The input ‘params’ contains the arbitrary parameters. ‘int (* fvv) (const gsl_vector * x, const gsl_vector * v, void * params, gsl_vector * fvv)’ When geodesic acceleration is enabled, this function should store the n components of the vector f_{vv}(x) = \sum_{\alpha\beta} v_{\alpha} v_{\beta} {\partial \over \partial x_{\alpha}} {\partial \over \partial x_{\beta}} f(x), representing second directional derivatives of the function to be minimized, into the output ‘fvv’. The parameter vector is provided in ‘x’ and the velocity vector is provided in ‘v’, both of which have p components. The arbitrary parameters are given in ‘params’. If analytic expressions for f_{vv}(x) are unavailable or too difficult to compute, this function pointer may be set to ‘NULL’, in which case f_{vv}(x) will be computed internally using a finite difference approximation. ‘size_t n’ the number of functions, i.e. the number of components of the vector ‘f’. ‘size_t p’ the number of independent variables, i.e. the number of components of the vector ‘x’. ‘void * params’ a pointer to the arbitrary parameters of the function. ‘size_t nevalf’ This does not need to be set by the user. It counts the number of function evaluations and is initialized by the ‘_init’ function. ‘size_t nevaldfu’ This does not need to be set by the user. It counts the number of Jacobian matrix-vector evaluations (J u or J^T u) and is initialized by the ‘_init’ function. ‘size_t nevaldf2’ This does not need to be set by the user. It counts the number of J^T J evaluations and is initialized by the ‘_init’ function. ‘size_t nevalfvv’ This does not need to be set by the user. It counts the number of f_{vv}(x) evaluations and is initialized by the ‘_init’ function. Note that when fitting a non-linear model against experimental data, the data is passed to the functions above using the ‘params’ argument and the trial best-fit parameters through the ‘x’ argument.  File: gsl-ref.info, Node: Iteration<5>, Next: Testing for Convergence, Prev: Providing the Function to be Minimized, Up: Nonlinear Least-Squares Fitting 41.7 Iteration ============== The following functions drive the iteration of each algorithm. Each function performs one iteration of the trust region method and updates the state of the solver. -- Function: int gsl_multifit_nlinear_iterate (gsl_multifit_nlinear_workspace *w) -- Function: int gsl_multilarge_nlinear_iterate (gsl_multilarge_nlinear_workspace *w) These functions perform a single iteration of the solver *note w: bdb. If the iteration encounters an unexpected problem then an error code will be returned. The solver workspace maintains a current estimate of the best-fit parameters at all times. The solver workspace ‘w’ contains the following entries, which can be used to track the progress of the solution: ‘gsl_vector * x’ The current position, length p. 
‘gsl_vector * f’ The function residual vector at the current position f(x), length n. ‘gsl_matrix * J’ The Jacobian matrix at the current position J(x), size n-by-p (only for ‘gsl_multifit_nlinear’ interface). ‘gsl_vector * dx’ The difference between the current position and the previous position, i.e. the last step \delta, taken as a vector, length p. These quantities can be accessed with the following functions, -- Function: *note gsl_vector: 35f. *gsl_multifit_nlinear_position (const gsl_multifit_nlinear_workspace *w) -- Function: *note gsl_vector: 35f. *gsl_multilarge_nlinear_position (const gsl_multilarge_nlinear_workspace *w) These functions return the current position x (i.e. best-fit parameters) of the solver *note w: bdd. -- Function: *note gsl_vector: 35f. *gsl_multifit_nlinear_residual (const gsl_multifit_nlinear_workspace *w) -- Function: *note gsl_vector: 35f. *gsl_multilarge_nlinear_residual (const gsl_multilarge_nlinear_workspace *w) These functions return the current residual vector f(x) of the solver *note w: bdf. For weighted systems, the residual vector includes the weighting factor \sqrt{W}. -- Function: *note gsl_matrix: 3a2. *gsl_multifit_nlinear_jac (const gsl_multifit_nlinear_workspace *w) This function returns a pointer to the n-by-p Jacobian matrix for the current iteration of the solver *note w: be0. This function is available only for the ‘gsl_multifit_nlinear’ interface. -- Function: size_t gsl_multifit_nlinear_niter (const gsl_multifit_nlinear_workspace *w) -- Function: size_t gsl_multilarge_nlinear_niter (const gsl_multilarge_nlinear_workspace *w) These functions return the number of iterations performed so far. The iteration counter is updated on each call to the ‘_iterate’ functions above, and reset to 0 in the ‘_init’ functions. -- Function: int gsl_multifit_nlinear_rcond (double *rcond, const gsl_multifit_nlinear_workspace *w) -- Function: int gsl_multilarge_nlinear_rcond (double *rcond, const gsl_multilarge_nlinear_workspace *w) This function estimates the reciprocal condition number of the Jacobian matrix at the current position x and stores it in *note rcond: be4. The computed value is only an estimate to give the user a guideline as to the conditioning of their particular problem. Its calculation is based on which factorization method is used (Cholesky, QR, or SVD). * For the Cholesky solver, the matrix J^T J is factored at each iteration. Therefore this function will estimate the 1-norm condition number rcond^2 = 1/(||J^T J||_1 \cdot ||(J^T J)^{-1}||_1) * For the QR solver, J is factored as J = Q R at each iteration. For simplicity, this function calculates the 1-norm conditioning of only the R factor, rcond = 1 / (||R||_1 \cdot ||R^{-1}||_1). This can be computed efficiently since R is upper triangular. * For the SVD solver, in order to efficiently solve the trust region subproblem, the matrix which is factored is J D^{-1}, instead of J itself. The resulting singular values are used to provide the 2-norm reciprocal condition number, as rcond = \sigma_{min} / \sigma_{max}. Note that when using Moré scaling, D \ne I and the resulting *note rcond: be4. estimate may be significantly different from the true *note rcond: be4. of J itself. -- Function: double gsl_multifit_nlinear_avratio (const gsl_multifit_nlinear_workspace *w) -- Function: double gsl_multilarge_nlinear_avratio (const gsl_multilarge_nlinear_workspace *w) This function returns the current ratio |a| / |v| of the acceleration correction term to the velocity step term. 
The acceleration term is computed only by the ‘gsl_multifit_nlinear_trs_lmaccel’ and ‘gsl_multilarge_nlinear_trs_lmaccel’ methods, so this ratio will be zero for other TRS methods.


File: gsl-ref.info, Node: Testing for Convergence, Next: High Level Driver, Prev: Iteration<5>, Up: Nonlinear Least-Squares Fitting

41.8 Testing for Convergence
============================

A minimization procedure should stop when one of the following conditions is true:

   * A minimum has been found to within the user-specified precision.

   * A user-specified maximum number of iterations has been reached.

   * An error has occurred.

The handling of these conditions is under user control.  The functions below allow the user to test the current estimate of the best-fit parameters in several standard ways.

 -- Function: int gsl_multifit_nlinear_test (const double xtol, const double gtol, const double ftol, int *info, const gsl_multifit_nlinear_workspace *w)
 -- Function: int gsl_multilarge_nlinear_test (const double xtol, const double gtol, const double ftol, int *info, const gsl_multilarge_nlinear_workspace *w)

     These functions test for convergence of the minimization method using the following criteria:

        * Testing for a small step size relative to the current parameter vector

             |\delta_i| \le xtol (|x_i| + xtol)

          for each 0 <= i < p.  Each element of the step vector \delta is tested individually in case the different parameters have widely different scales.  Adding *note xtol: be9. to |x_i| helps the test avoid breaking down in situations where the true solution value x_i = 0.  If this test succeeds, *note info: be9. is set to 1 and the function returns ‘GSL_SUCCESS’.

          A general guideline for selecting the step tolerance is to choose xtol = 10^{-d} where d is the number of accurate decimal digits desired in the solution x.  See Dennis and Schnabel for more information.

        * Testing for a small gradient (g = \nabla \Phi(x) = J^T f) indicating a local function minimum:

             ||g||_inf <= gtol

          This expression tests whether the ratio (\nabla \Phi)_i x_i / \Phi is small.  Testing this scaled gradient is better than \nabla \Phi alone since it is a dimensionless quantity and so independent of the scale of the problem.  The ‘max’ arguments help ensure the test doesn’t break down in regions where x_i or \Phi(x) are close to 0.  If this test succeeds, *note info: be9. is set to 2 and the function returns ‘GSL_SUCCESS’.

          A general guideline for choosing the gradient tolerance is to set ‘gtol = GSL_DBL_EPSILON^(1/3)’.  See Dennis and Schnabel for more information.

     If none of the tests succeed, *note info: be9. is set to 0 and the function returns ‘GSL_CONTINUE’, indicating further iterations are required.


File: gsl-ref.info, Node: High Level Driver, Next: Covariance matrix of best fit parameters, Prev: Testing for Convergence, Up: Nonlinear Least-Squares Fitting

41.9 High Level Driver
======================

These routines provide a high level wrapper that combines the iteration and convergence testing for easy use.
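For comparison with the driver below, the iteration and convergence-test functions of the two previous sections are typically combined by hand in a loop of the following form.  This is an illustrative sketch only: it assumes a workspace ‘w’ that has already been allocated and initialized with the system to be solved, and the tolerances and iteration limit are arbitrary example values.

     /* sketch of driving the solver manually (assumes an initialized
        workspace 'w'); the tolerance values are illustrative only */
     const double xtol = 1.0e-8;
     const double gtol = 1.0e-8;
     const double ftol = 0.0;
     const size_t maxiter = 200;
     size_t iter = 0;
     int info = 0;
     int status;

     do
       {
         /* perform a single trust region iteration */
         status = gsl_multifit_nlinear_iterate (w);
         if (status)
           break;   /* the solver could not find an acceptable step */

         /* test the step size and gradient for convergence */
         status = gsl_multifit_nlinear_test (xtol, gtol, ftol, &info, w);
       }
     while (status == GSL_CONTINUE && ++iter < maxiter);

The driver functions below package this loop, together with the iteration limit and error handling, into a single call.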
 -- Function: int gsl_multifit_nlinear_driver (const size_t maxiter, const double xtol, const double gtol, const double ftol, void (*callback)(const size_t iter, void *params, const gsl_multifit_nlinear_workspace *w), void *callback_params, int *info, gsl_multifit_nlinear_workspace *w)
 -- Function: int gsl_multilarge_nlinear_driver (const size_t maxiter, const double xtol, const double gtol, const double ftol, void (*callback)(const size_t iter, void *params, const gsl_multilarge_nlinear_workspace *w), void *callback_params, int *info, gsl_multilarge_nlinear_workspace *w)

     These functions iterate the nonlinear least squares solver *note w: bec. for a maximum of *note maxiter: bec. iterations.  After each iteration, the system is tested for convergence with the error tolerances *note xtol: bec, *note gtol: bec. and *note ftol: bec.  Additionally, the user may supply a callback function *note callback: bec. which is called after each iteration, so that the user may save or print relevant quantities for each iteration.  The parameter *note callback_params: bec. is passed to the *note callback: bec. function.  The parameters *note callback: bec. and *note callback_params: bec. may be set to ‘NULL’ to disable this feature.  Upon successful convergence, the function returns ‘GSL_SUCCESS’ and sets *note info: bec. to the reason for convergence (see *note gsl_multifit_nlinear_test(): be8.).  If the function has not converged after *note maxiter: bec. iterations, ‘GSL_EMAXITER’ is returned.  In rare cases, during an iteration the algorithm may be unable to find a new acceptable step \delta to take.  In this case, ‘GSL_ENOPROG’ is returned indicating no further progress can be made.  If your problem is having difficulty converging, see *note Troubleshooting: bed. for further guidance.


File: gsl-ref.info, Node: Covariance matrix of best fit parameters, Next: Troubleshooting<2>, Prev: High Level Driver, Up: Nonlinear Least-Squares Fitting

41.10 Covariance matrix of best fit parameters
==============================================

 -- Function: int gsl_multifit_nlinear_covar (const gsl_matrix *J, const double epsrel, gsl_matrix *covar)
 -- Function: int gsl_multilarge_nlinear_covar (gsl_matrix *covar, gsl_multilarge_nlinear_workspace *w)

     This function computes the covariance matrix of best-fit parameters using the Jacobian matrix ‘J’ and stores it in *note covar: bf0.  The parameter ‘epsrel’ is used to remove linear-dependent columns when ‘J’ is rank deficient.  The covariance matrix is given by,

          C = (J^T J)^{-1}

     or in the weighted case,

          C = (J^T W J)^{-1}

     and is computed using the factored form of the Jacobian (Cholesky, QR, or SVD).  Any columns of R which satisfy |R_{kk}| \leq epsrel |R_{11}| are considered linearly-dependent and are excluded from the covariance matrix (the corresponding rows and columns of the covariance matrix are set to zero).  If the minimisation uses the weighted least-squares function f_i = (Y(x, t_i) - y_i) / \sigma_i then the covariance matrix above gives the statistical error on the best-fit parameters resulting from the Gaussian errors \sigma_i on the underlying data y_i.
This can be verified from the relation \delta f = J \delta c and the fact that the fluctuations in f from the data y_i are normalised by \sigma_i and so satisfy <\delta f \delta f^T> = I For an unweighted least-squares function f_i = (Y(x, t_i) - y_i) the covariance matrix above should be multiplied by the variance of the residuals about the best-fit \sigma^2 = \sum (y_i - Y(x,t_i))^2 / (n-p) to give the variance-covariance matrix \sigma^2 C. This estimates the statistical error on the best-fit parameters from the scatter of the underlying data. For more information about covariance matrices see *note Linear Least-Squares Overview: b29.  File: gsl-ref.info, Node: Troubleshooting<2>, Next: Examples<32>, Prev: Covariance matrix of best fit parameters, Up: Nonlinear Least-Squares Fitting 41.11 Troubleshooting ===================== When developing a code to solve a nonlinear least squares problem, here are a few considerations to keep in mind. 1. The most common difficulty is the accurate implementation of the Jacobian matrix. If the analytic Jacobian is not properly provided to the solver, this can hinder and many times prevent convergence of the method. When developing a new nonlinear least squares code, it often helps to compare the program output with the internally computed finite difference Jacobian and the user supplied analytic Jacobian. If there is a large difference in coefficients, it is likely the analytic Jacobian is incorrectly implemented. 2. If your code is having difficulty converging, the next thing to check is the starting point provided to the solver. The methods of this chapter are local methods, meaning if you provide a starting point far away from the true minimum, the method may converge to a local minimum or not converge at all. Sometimes it is possible to solve a linearized approximation to the nonlinear problem, and use the linear solution as the starting point to the nonlinear problem. 3. If the various parameters of the coefficient vector x vary widely in magnitude, then the problem is said to be badly scaled. The methods of this chapter do attempt to automatically rescale the elements of x to have roughly the same order of magnitude, but in extreme cases this could still cause problems for convergence. In these cases it is recommended for the user to scale their parameter vector x so that each parameter spans roughly the same range, say [-1,1]. The solution vector can be backscaled to recover the original units of the problem.  File: gsl-ref.info, Node: Examples<32>, Next: References and Further Reading<34>, Prev: Troubleshooting<2>, Up: Nonlinear Least-Squares Fitting 41.12 Examples ============== The following example programs demonstrate the nonlinear least squares fitting capabilities. * Menu: * Exponential Fitting Example:: * Geodesic Acceleration Example 1:: * Geodesic Acceleration Example 2:: * Comparing TRS Methods Example:: * Large Nonlinear Least Squares Example::  File: gsl-ref.info, Node: Exponential Fitting Example, Next: Geodesic Acceleration Example 1, Up: Examples<32> 41.12.1 Exponential Fitting Example ----------------------------------- The following example program fits a weighted exponential model with background to experimental data, Y = A \exp(-\lambda t) + b. The first part of the program sets up the functions ‘expb_f()’ and ‘expb_df()’ to calculate the model and its Jacobian. 
The appropriate fitting function is given by, f_i = (A \exp(-\lambda t_i) + b) - y_i where we have chosen t_i = i T / (N - 1), where N is the number of data points fitted, so that t_i \in [0, T]. The Jacobian matrix J is the derivative of these functions with respect to the three parameters (A, \lambda, b). It is given by, J_{ij} = d f_i / d x_j where x_0 = A, x_1 = \lambda and x_2 = b. The i-th row of the Jacobian is therefore J(i,:) = [ \exp(-\lambda t_i) ; -t_i A \exp(-\lambda t_i) ; 1 ] The main part of the program sets up a Levenberg-Marquardt solver and some simulated random data. The data uses the known parameters (5.0,1.5,1.0) combined with Gaussian noise (standard deviation = 0.1) with a maximum time T = 3 and N = 100 timesteps. The initial guess for the parameters is chosen as (1.0, 1.0, 0.0). The iteration terminates when the relative change in x is smaller than 10^{-8}, or when the magnitude of the gradient falls below 10^{-8}. Here are the results of running the program: iter 0: A = 1.0000, lambda = 1.0000, b = 0.0000, cond(J) = inf, |f(x)| = 88.4448 iter 1: A = 4.5109, lambda = 2.5258, b = 1.0704, cond(J) = 26.2686, |f(x)| = 24.0646 iter 2: A = 4.8565, lambda = 1.7442, b = 1.1669, cond(J) = 23.7470, |f(x)| = 11.9797 iter 3: A = 4.9356, lambda = 1.5713, b = 1.0767, cond(J) = 17.5849, |f(x)| = 10.7355 iter 4: A = 4.8678, lambda = 1.4838, b = 1.0252, cond(J) = 16.3428, |f(x)| = 10.5000 iter 5: A = 4.8118, lambda = 1.4481, b = 1.0076, cond(J) = 15.7925, |f(x)| = 10.4786 iter 6: A = 4.7983, lambda = 1.4404, b = 1.0041, cond(J) = 15.5840, |f(x)| = 10.4778 iter 7: A = 4.7967, lambda = 1.4395, b = 1.0037, cond(J) = 15.5396, |f(x)| = 10.4778 iter 8: A = 4.7965, lambda = 1.4394, b = 1.0037, cond(J) = 15.5344, |f(x)| = 10.4778 iter 9: A = 4.7965, lambda = 1.4394, b = 1.0037, cond(J) = 15.5339, |f(x)| = 10.4778 iter 10: A = 4.7965, lambda = 1.4394, b = 1.0037, cond(J) = 15.5339, |f(x)| = 10.4778 iter 11: A = 4.7965, lambda = 1.4394, b = 1.0037, cond(J) = 15.5339, |f(x)| = 10.4778 summary from method 'trust-region/levenberg-marquardt' number of iterations: 11 function evaluations: 16 Jacobian evaluations: 12 reason for stopping: small gradient initial |f(x)| = 88.444756 final |f(x)| = 10.477801 chisq/dof = 1.1318 A = 4.79653 +/- 0.18704 lambda = 1.43937 +/- 0.07390 b = 1.00368 +/- 0.03473 status = success The approximate values of the parameters are found correctly, and the chi-squared value indicates a good fit (the chi-squared per degree of freedom is approximately 1). In this case the errors on the parameters can be estimated from the square roots of the diagonal elements of the covariance matrix. If the chi-squared value shows a poor fit (i.e. \chi^2/(n-p) \gg 1 then the error estimates obtained from the covariance matrix will be too small. In the example program the error estimates are multiplied by \sqrt{\chi^2/(n-p)} in this case, a common way of increasing the errors for a poor fit. Note that a poor fit will result from the use of an inappropriate model, and the scaled error estimates may then be outside the range of validity for Gaussian errors. Additionally, we see that the condition number of J(x) stays reasonably small throughout the iteration. This indicates we could safely switch to the Cholesky solver for speed improvement, although this particular system is too small to really benefit. Fig. %s shows the fitted curve with the original data. 
[gsl-ref-figures/fit-exp] Figure: Exponential fitted curve with data #include #include #include #include #include #include #include #include #define N 100 /* number of data points to fit */ #define TMAX (3.0) /* time variable in [0,TMAX] */ struct data { size_t n; double * t; double * y; }; int expb_f (const gsl_vector * x, void *data, gsl_vector * f) { size_t n = ((struct data *)data)->n; double *t = ((struct data *)data)->t; double *y = ((struct data *)data)->y; double A = gsl_vector_get (x, 0); double lambda = gsl_vector_get (x, 1); double b = gsl_vector_get (x, 2); size_t i; for (i = 0; i < n; i++) { /* Model Yi = A * exp(-lambda * t_i) + b */ double Yi = A * exp (-lambda * t[i]) + b; gsl_vector_set (f, i, Yi - y[i]); } return GSL_SUCCESS; } int expb_df (const gsl_vector * x, void *data, gsl_matrix * J) { size_t n = ((struct data *)data)->n; double *t = ((struct data *)data)->t; double A = gsl_vector_get (x, 0); double lambda = gsl_vector_get (x, 1); size_t i; for (i = 0; i < n; i++) { /* Jacobian matrix J(i,j) = dfi / dxj, */ /* where fi = (Yi - yi)/sigma[i], */ /* Yi = A * exp(-lambda * t_i) + b */ /* and the xj are the parameters (A,lambda,b) */ double e = exp(-lambda * t[i]); gsl_matrix_set (J, i, 0, e); gsl_matrix_set (J, i, 1, -t[i] * A * e); gsl_matrix_set (J, i, 2, 1.0); } return GSL_SUCCESS; } void callback(const size_t iter, void *params, const gsl_multifit_nlinear_workspace *w) { gsl_vector *f = gsl_multifit_nlinear_residual(w); gsl_vector *x = gsl_multifit_nlinear_position(w); double rcond; /* compute reciprocal condition number of J(x) */ gsl_multifit_nlinear_rcond(&rcond, w); fprintf(stderr, "iter %2zu: A = %.4f, lambda = %.4f, b = %.4f, cond(J) = %8.4f, |f(x)| = %.4f\n", iter, gsl_vector_get(x, 0), gsl_vector_get(x, 1), gsl_vector_get(x, 2), 1.0 / rcond, gsl_blas_dnrm2(f)); } int main (void) { const gsl_multifit_nlinear_type *T = gsl_multifit_nlinear_trust; gsl_multifit_nlinear_workspace *w; gsl_multifit_nlinear_fdf fdf; gsl_multifit_nlinear_parameters fdf_params = gsl_multifit_nlinear_default_parameters(); const size_t n = N; const size_t p = 3; gsl_vector *f; gsl_matrix *J; gsl_matrix *covar = gsl_matrix_alloc (p, p); double t[N], y[N], weights[N]; struct data d = { n, t, y }; double x_init[3] = { 1.0, 1.0, 0.0 }; /* starting values */ gsl_vector_view x = gsl_vector_view_array (x_init, p); gsl_vector_view wts = gsl_vector_view_array(weights, n); gsl_rng * r; double chisq, chisq0; int status, info; size_t i; const double xtol = 1e-8; const double gtol = 1e-8; const double ftol = 0.0; gsl_rng_env_setup(); r = gsl_rng_alloc(gsl_rng_default); /* define the function to be minimized */ fdf.f = expb_f; fdf.df = expb_df; /* set to NULL for finite-difference Jacobian */ fdf.fvv = NULL; /* not using geodesic acceleration */ fdf.n = n; fdf.p = p; fdf.params = &d; /* this is the data to be fitted */ for (i = 0; i < n; i++) { double ti = i * TMAX / (n - 1.0); double yi = 1.0 + 5 * exp (-1.5 * ti); double si = 0.1 * yi; double dy = gsl_ran_gaussian(r, si); t[i] = ti; y[i] = yi + dy; weights[i] = 1.0 / (si * si); printf ("data: %g %g %g\n", ti, y[i], si); }; /* allocate workspace with default parameters */ w = gsl_multifit_nlinear_alloc (T, &fdf_params, n, p); /* initialize solver with starting point and weights */ gsl_multifit_nlinear_winit (&x.vector, &wts.vector, &fdf, w); /* compute initial cost function */ f = gsl_multifit_nlinear_residual(w); gsl_blas_ddot(f, f, &chisq0); /* solve the system with a maximum of 100 iterations */ status = gsl_multifit_nlinear_driver(100, xtol, 
gtol, ftol, callback, NULL, &info, w); /* compute covariance of best fit parameters */ J = gsl_multifit_nlinear_jac(w); gsl_multifit_nlinear_covar (J, 0.0, covar); /* compute final cost */ gsl_blas_ddot(f, f, &chisq); #define FIT(i) gsl_vector_get(w->x, i) #define ERR(i) sqrt(gsl_matrix_get(covar,i,i)) fprintf(stderr, "summary from method '%s/%s'\n", gsl_multifit_nlinear_name(w), gsl_multifit_nlinear_trs_name(w)); fprintf(stderr, "number of iterations: %zu\n", gsl_multifit_nlinear_niter(w)); fprintf(stderr, "function evaluations: %zu\n", fdf.nevalf); fprintf(stderr, "Jacobian evaluations: %zu\n", fdf.nevaldf); fprintf(stderr, "reason for stopping: %s\n", (info == 1) ? "small step size" : "small gradient"); fprintf(stderr, "initial |f(x)| = %f\n", sqrt(chisq0)); fprintf(stderr, "final |f(x)| = %f\n", sqrt(chisq)); { double dof = n - p; double c = GSL_MAX_DBL(1, sqrt(chisq / dof)); fprintf(stderr, "chisq/dof = %g\n", chisq / dof); fprintf (stderr, "A = %.5f +/- %.5f\n", FIT(0), c*ERR(0)); fprintf (stderr, "lambda = %.5f +/- %.5f\n", FIT(1), c*ERR(1)); fprintf (stderr, "b = %.5f +/- %.5f\n", FIT(2), c*ERR(2)); } fprintf (stderr, "status = %s\n", gsl_strerror (status)); gsl_multifit_nlinear_free (w); gsl_matrix_free (covar); gsl_rng_free (r); return 0; }  File: gsl-ref.info, Node: Geodesic Acceleration Example 1, Next: Geodesic Acceleration Example 2, Prev: Exponential Fitting Example, Up: Examples<32> 41.12.2 Geodesic Acceleration Example 1 --------------------------------------- The following example program minimizes a modified Rosenbrock function, which is characterized by a narrow canyon with steep walls. The starting point is selected high on the canyon wall, so the solver must first find the canyon bottom and then navigate to the minimum. The problem is solved both with and without using geodesic acceleration for comparison. The cost function is given by Phi(x) = 1/2 (f1^2 + f2^2) f1 = 100 ( x2 - x1^2 ) f2 = 1 - x1 The Jacobian matrix is J = [ -200*x1 100 ] [ -1 0 ] In order to use geodesic acceleration, the user must provide the second directional derivative of each residual in the velocity direction, D_v^2 f_i = \sum_{\alpha\beta} v_{\alpha} v_{\beta} \partial_{\alpha} \partial_{\beta} f_i. The velocity vector v is provided by the solver. For this example, these derivatives are fvv = [ -200 v1^2 ] [ 0 ] The solution of this minimization problem is x* = [ 1 ; 1 ] Phi(x*) = 0 The program output is shown below: === Solving system without acceleration === NITER = 53 NFEV = 56 NJEV = 54 NAEV = 0 initial cost = 2.250225000000e+04 final cost = 6.674986031430e-18 final x = (9.999999974165e-01, 9.999999948328e-01) final cond(J) = 6.000096055094e+02 === Solving system with acceleration === NITER = 15 NFEV = 17 NJEV = 16 NAEV = 16 initial cost = 2.250225000000e+04 final cost = 7.518932873279e-19 final x = (9.999999991329e-01, 9.999999982657e-01) final cond(J) = 6.000097233278e+02 [gsl-ref-figures/nlfit2] Figure: Paths taken by solver for Rosenbrock function We can see that enabling geodesic acceleration requires less than a third of the number of Jacobian evaluations in order to locate the minimum. The path taken by both methods is shown in Fig. %s. The contours show the cost function \Phi(x_1,x_2). We see that both methods quickly find the canyon bottom, but the geodesic acceleration method navigates along the bottom to the solution with significantly fewer iterations. The program is given below. 
#include #include #include #include #include #include int func_f (const gsl_vector * x, void *params, gsl_vector * f) { double x1 = gsl_vector_get(x, 0); double x2 = gsl_vector_get(x, 1); gsl_vector_set(f, 0, 100.0 * (x2 - x1*x1)); gsl_vector_set(f, 1, 1.0 - x1); return GSL_SUCCESS; } int func_df (const gsl_vector * x, void *params, gsl_matrix * J) { double x1 = gsl_vector_get(x, 0); gsl_matrix_set(J, 0, 0, -200.0*x1); gsl_matrix_set(J, 0, 1, 100.0); gsl_matrix_set(J, 1, 0, -1.0); gsl_matrix_set(J, 1, 1, 0.0); return GSL_SUCCESS; } int func_fvv (const gsl_vector * x, const gsl_vector * v, void *params, gsl_vector * fvv) { double v1 = gsl_vector_get(v, 0); gsl_vector_set(fvv, 0, -200.0 * v1 * v1); gsl_vector_set(fvv, 1, 0.0); return GSL_SUCCESS; } void callback(const size_t iter, void *params, const gsl_multifit_nlinear_workspace *w) { gsl_vector * x = gsl_multifit_nlinear_position(w); /* print out current location */ printf("%f %f\n", gsl_vector_get(x, 0), gsl_vector_get(x, 1)); } void solve_system(gsl_vector *x0, gsl_multifit_nlinear_fdf *fdf, gsl_multifit_nlinear_parameters *params) { const gsl_multifit_nlinear_type *T = gsl_multifit_nlinear_trust; const size_t max_iter = 200; const double xtol = 1.0e-8; const double gtol = 1.0e-8; const double ftol = 1.0e-8; const size_t n = fdf->n; const size_t p = fdf->p; gsl_multifit_nlinear_workspace *work = gsl_multifit_nlinear_alloc(T, params, n, p); gsl_vector * f = gsl_multifit_nlinear_residual(work); gsl_vector * x = gsl_multifit_nlinear_position(work); int info; double chisq0, chisq, rcond; /* initialize solver */ gsl_multifit_nlinear_init(x0, fdf, work); /* store initial cost */ gsl_blas_ddot(f, f, &chisq0); /* iterate until convergence */ gsl_multifit_nlinear_driver(max_iter, xtol, gtol, ftol, callback, NULL, &info, work); /* store final cost */ gsl_blas_ddot(f, f, &chisq); /* store cond(J(x)) */ gsl_multifit_nlinear_rcond(&rcond, work); /* print summary */ fprintf(stderr, "NITER = %zu\n", gsl_multifit_nlinear_niter(work)); fprintf(stderr, "NFEV = %zu\n", fdf->nevalf); fprintf(stderr, "NJEV = %zu\n", fdf->nevaldf); fprintf(stderr, "NAEV = %zu\n", fdf->nevalfvv); fprintf(stderr, "initial cost = %.12e\n", chisq0); fprintf(stderr, "final cost = %.12e\n", chisq); fprintf(stderr, "final x = (%.12e, %.12e)\n", gsl_vector_get(x, 0), gsl_vector_get(x, 1)); fprintf(stderr, "final cond(J) = %.12e\n", 1.0 / rcond); printf("\n\n"); gsl_multifit_nlinear_free(work); } int main (void) { const size_t n = 2; const size_t p = 2; gsl_vector *f = gsl_vector_alloc(n); gsl_vector *x = gsl_vector_alloc(p); gsl_multifit_nlinear_fdf fdf; gsl_multifit_nlinear_parameters fdf_params = gsl_multifit_nlinear_default_parameters(); /* print map of Phi(x1, x2) */ { double x1, x2, chisq; double *f1 = gsl_vector_ptr(f, 0); double *f2 = gsl_vector_ptr(f, 1); for (x1 = -1.2; x1 < 1.3; x1 += 0.1) { for (x2 = -0.5; x2 < 2.1; x2 += 0.1) { gsl_vector_set(x, 0, x1); gsl_vector_set(x, 1, x2); func_f(x, NULL, f); chisq = (*f1) * (*f1) + (*f2) * (*f2); printf("%f %f %f\n", x1, x2, chisq); } printf("\n"); } printf("\n\n"); } /* define function to be minimized */ fdf.f = func_f; fdf.df = func_df; fdf.fvv = func_fvv; fdf.n = n; fdf.p = p; fdf.params = NULL; /* starting point */ gsl_vector_set(x, 0, -0.5); gsl_vector_set(x, 1, 1.75); fprintf(stderr, "=== Solving system without acceleration ===\n"); fdf_params.trs = gsl_multifit_nlinear_trs_lm; solve_system(x, &fdf, &fdf_params); fprintf(stderr, "=== Solving system with acceleration ===\n"); fdf_params.trs = gsl_multifit_nlinear_trs_lmaccel; 
solve_system(x, &fdf, &fdf_params); gsl_vector_free(f); gsl_vector_free(x); return 0; }

File: gsl-ref.info, Node: Geodesic Acceleration Example 2, Next: Comparing TRS Methods Example, Prev: Geodesic Acceleration Example 1, Up: Examples<32>

41.12.3 Geodesic Acceleration Example 2
---------------------------------------

The following example fits a set of data to a Gaussian model using the Levenberg-Marquardt method with geodesic acceleration.  The cost function is

     Phi(x) = 1/2 \sum_i f_i^2
     f_i = y_i - Y(a,b,c;t_i)

where y_i is the measured data point at time t_i, and the model is specified by

     Y(a,b,c;t) = a exp(-1/2 ((t-b)/c)^2)

The parameters a,b,c represent the amplitude, mean, and width of the Gaussian respectively.  The program below generates the y_i data on [0,1] using the values a = 5, b = 0.4, c = 0.15 and adding random noise.  The i-th row of the Jacobian is

     J(i,:) = ( -e_i  -(a/c)*z_i*e_i  -(a/c)*z_i^2*e_i )

where

     z_i = (t_i - b) / c
     e_i = \exp(-1/2 z_i^2)

In order to use geodesic acceleration, we need the second directional derivative of the residuals in the velocity direction, D_v^2 f_i = \sum_{\alpha\beta} v_{\alpha} v_{\beta} \partial_{\alpha} \partial_{\beta} f_i, where v is provided by the solver.  To compute this, it is helpful to make a table of all second derivatives of the residuals f_i with respect to each combination of model parameters.  This table is

     (d/da)^2 f_i     = 0
     (d/da)(d/db) f_i = -z_i e_i / c
     (d/da)(d/dc) f_i = -z_i^2 e_i / c
     (d/db)^2 f_i     = a e_i (1 - z_i^2) / c^2
     (d/db)(d/dc) f_i = a z_i e_i (2 - z_i^2) / c^2
     (d/dc)^2 f_i     = a z_i^2 e_i (3 - z_i^2) / c^2

The lower half of the table is omitted since it is symmetric.  Then, the second directional derivative of f_i is

     D_v^2 f_i = v_a^2 (d/da)^2 f_i + 2 v_a v_b (d/da) (d/db) f_i + 2 v_a v_c (d/da) (d/dc) f_i + v_b^2 (d/db)^2 f_i + 2 v_b v_c (d/db) (d/dc) f_i + v_c^2 (d/dc)^2 f_i

The factors of 2 come from the symmetry of the mixed second partial derivatives.  The iteration is started using the initial guess a = 1, b = 0, c = 1.  The program output is shown below:

     iter 0: a = 1.0000, b = 0.0000, c = 1.0000, |a|/|v| = 0.0000 cond(J) = inf, |f(x)| = 35.4785
     iter 1: a = 1.5708, b = 0.5321, c = 0.5219, |a|/|v| = 0.3093 cond(J) = 29.0443, |f(x)| = 31.1042
     iter 2: a = 1.7387, b = 0.4040, c = 0.4568, |a|/|v| = 0.1199 cond(J) = 3.5256, |f(x)| = 28.7217
     iter 3: a = 2.2340, b = 0.3829, c = 0.3053, |a|/|v| = 0.3308 cond(J) = 4.5121, |f(x)| = 23.8074
     iter 4: a = 3.2275, b = 0.3952, c = 0.2243, |a|/|v| = 0.2784 cond(J) = 8.6499, |f(x)| = 15.6003
     iter 5: a = 4.3347, b = 0.3974, c = 0.1752, |a|/|v| = 0.2029 cond(J) = 15.1732, |f(x)| = 7.5908
     iter 6: a = 4.9352, b = 0.3992, c = 0.1536, |a|/|v| = 0.1001 cond(J) = 26.6621, |f(x)| = 4.8402
     iter 7: a = 5.0716, b = 0.3994, c = 0.1498, |a|/|v| = 0.0166 cond(J) = 34.6922, |f(x)| = 4.7103
     iter 8: a = 5.0828, b = 0.3994, c = 0.1495, |a|/|v| = 0.0012 cond(J) = 36.5422, |f(x)| = 4.7095
     iter 9: a = 5.0831, b = 0.3994, c = 0.1495, |a|/|v| = 0.0000 cond(J) = 36.6929, |f(x)| = 4.7095
     iter 10: a = 5.0831, b = 0.3994, c = 0.1495, |a|/|v| = 0.0000 cond(J) = 36.6975, |f(x)| = 4.7095
     iter 11: a = 5.0831, b = 0.3994, c = 0.1495, |a|/|v| = 0.0000 cond(J) = 36.6976, |f(x)| = 4.7095
     NITER         = 11
     NFEV          = 18
     NJEV          = 12
     NAEV          = 17
     initial cost  = 1.258724737288e+03
     final cost    = 2.217977560180e+01
     final x       = (5.083101559156e+00, 3.994484109594e-01, 1.494898e-01)
     final cond(J) = 3.669757713403e+01

We see the method converges after 11 iterations.  For comparison, the standard Levenberg-Marquardt method requires 26 iterations and so the Gaussian fitting problem benefits substantially from the geodesic acceleration correction.
The column marked ‘|a|/|v|’ above shows the ratio of the acceleration term to the velocity term as the iteration progresses. Larger values of this ratio indicate that the geodesic acceleration correction term is contributing substantial information to the solver relative to the standard LM velocity step. The data and fitted model are shown in Fig. %s. [gsl-ref-figures/nlfit2b] Figure: Gaussian model fitted to data The program is given below. #include #include #include #include #include #include #include #include struct data { double *t; double *y; size_t n; }; /* model function: a * exp( -1/2 * [ (t - b) / c ]^2 ) */ double gaussian(const double a, const double b, const double c, const double t) { const double z = (t - b) / c; return (a * exp(-0.5 * z * z)); } int func_f (const gsl_vector * x, void *params, gsl_vector * f) { struct data *d = (struct data *) params; double a = gsl_vector_get(x, 0); double b = gsl_vector_get(x, 1); double c = gsl_vector_get(x, 2); size_t i; for (i = 0; i < d->n; ++i) { double ti = d->t[i]; double yi = d->y[i]; double y = gaussian(a, b, c, ti); gsl_vector_set(f, i, yi - y); } return GSL_SUCCESS; } int func_df (const gsl_vector * x, void *params, gsl_matrix * J) { struct data *d = (struct data *) params; double a = gsl_vector_get(x, 0); double b = gsl_vector_get(x, 1); double c = gsl_vector_get(x, 2); size_t i; for (i = 0; i < d->n; ++i) { double ti = d->t[i]; double zi = (ti - b) / c; double ei = exp(-0.5 * zi * zi); gsl_matrix_set(J, i, 0, -ei); gsl_matrix_set(J, i, 1, -(a / c) * ei * zi); gsl_matrix_set(J, i, 2, -(a / c) * ei * zi * zi); } return GSL_SUCCESS; } int func_fvv (const gsl_vector * x, const gsl_vector * v, void *params, gsl_vector * fvv) { struct data *d = (struct data *) params; double a = gsl_vector_get(x, 0); double b = gsl_vector_get(x, 1); double c = gsl_vector_get(x, 2); double va = gsl_vector_get(v, 0); double vb = gsl_vector_get(v, 1); double vc = gsl_vector_get(v, 2); size_t i; for (i = 0; i < d->n; ++i) { double ti = d->t[i]; double zi = (ti - b) / c; double ei = exp(-0.5 * zi * zi); double Dab = -zi * ei / c; double Dac = -zi * zi * ei / c; double Dbb = a * ei / (c * c) * (1.0 - zi*zi); double Dbc = a * zi * ei / (c * c) * (2.0 - zi*zi); double Dcc = a * zi * zi * ei / (c * c) * (3.0 - zi*zi); double sum; sum = 2.0 * va * vb * Dab + 2.0 * va * vc * Dac + vb * vb * Dbb + 2.0 * vb * vc * Dbc + vc * vc * Dcc; gsl_vector_set(fvv, i, sum); } return GSL_SUCCESS; } void callback(const size_t iter, void *params, const gsl_multifit_nlinear_workspace *w) { gsl_vector *f = gsl_multifit_nlinear_residual(w); gsl_vector *x = gsl_multifit_nlinear_position(w); double avratio = gsl_multifit_nlinear_avratio(w); double rcond; (void) params; /* not used */ /* compute reciprocal condition number of J(x) */ gsl_multifit_nlinear_rcond(&rcond, w); fprintf(stderr, "iter %2zu: a = %.4f, b = %.4f, c = %.4f, |a|/|v| = %.4f cond(J) = %8.4f, |f(x)| = %.4f\n", iter, gsl_vector_get(x, 0), gsl_vector_get(x, 1), gsl_vector_get(x, 2), avratio, 1.0 / rcond, gsl_blas_dnrm2(f)); } void solve_system(gsl_vector *x, gsl_multifit_nlinear_fdf *fdf, gsl_multifit_nlinear_parameters *params) { const gsl_multifit_nlinear_type *T = gsl_multifit_nlinear_trust; const size_t max_iter = 200; const double xtol = 1.0e-8; const double gtol = 1.0e-8; const double ftol = 1.0e-8; const size_t n = fdf->n; const size_t p = fdf->p; gsl_multifit_nlinear_workspace *work = gsl_multifit_nlinear_alloc(T, params, n, p); gsl_vector * f = gsl_multifit_nlinear_residual(work); gsl_vector * y = 
gsl_multifit_nlinear_position(work); int info; double chisq0, chisq, rcond; /* initialize solver */ gsl_multifit_nlinear_init(x, fdf, work); /* store initial cost */ gsl_blas_ddot(f, f, &chisq0); /* iterate until convergence */ gsl_multifit_nlinear_driver(max_iter, xtol, gtol, ftol, callback, NULL, &info, work); /* store final cost */ gsl_blas_ddot(f, f, &chisq); /* store cond(J(x)) */ gsl_multifit_nlinear_rcond(&rcond, work); gsl_vector_memcpy(x, y); /* print summary */ fprintf(stderr, "NITER = %zu\n", gsl_multifit_nlinear_niter(work)); fprintf(stderr, "NFEV = %zu\n", fdf->nevalf); fprintf(stderr, "NJEV = %zu\n", fdf->nevaldf); fprintf(stderr, "NAEV = %zu\n", fdf->nevalfvv); fprintf(stderr, "initial cost = %.12e\n", chisq0); fprintf(stderr, "final cost = %.12e\n", chisq); fprintf(stderr, "final x = (%.12e, %.12e, %12e)\n", gsl_vector_get(x, 0), gsl_vector_get(x, 1), gsl_vector_get(x, 2)); fprintf(stderr, "final cond(J) = %.12e\n", 1.0 / rcond); gsl_multifit_nlinear_free(work); } int main (void) { const size_t n = 300; /* number of data points to fit */ const size_t p = 3; /* number of model parameters */ const double a = 5.0; /* amplitude */ const double b = 0.4; /* center */ const double c = 0.15; /* width */ const gsl_rng_type * T = gsl_rng_default; gsl_vector *f = gsl_vector_alloc(n); gsl_vector *x = gsl_vector_alloc(p); gsl_multifit_nlinear_fdf fdf; gsl_multifit_nlinear_parameters fdf_params = gsl_multifit_nlinear_default_parameters(); struct data fit_data; gsl_rng * r; size_t i; gsl_rng_env_setup (); r = gsl_rng_alloc (T); fit_data.t = malloc(n * sizeof(double)); fit_data.y = malloc(n * sizeof(double)); fit_data.n = n; /* generate synthetic data with noise */ for (i = 0; i < n; ++i) { double t = (double)i / (double) n; double y0 = gaussian(a, b, c, t); double dy = gsl_ran_gaussian (r, 0.1 * y0); fit_data.t[i] = t; fit_data.y[i] = y0 + dy; } /* define function to be minimized */ fdf.f = func_f; fdf.df = func_df; fdf.fvv = func_fvv; fdf.n = n; fdf.p = p; fdf.params = &fit_data; /* starting point */ gsl_vector_set(x, 0, 1.0); gsl_vector_set(x, 1, 0.0); gsl_vector_set(x, 2, 1.0); fdf_params.trs = gsl_multifit_nlinear_trs_lmaccel; solve_system(x, &fdf, &fdf_params); /* print data and model */ { double A = gsl_vector_get(x, 0); double B = gsl_vector_get(x, 1); double C = gsl_vector_get(x, 2); for (i = 0; i < n; ++i) { double ti = fit_data.t[i]; double yi = fit_data.y[i]; double fi = gaussian(A, B, C, ti); printf("%f %f %f\n", ti, yi, fi); } } gsl_vector_free(f); gsl_vector_free(x); gsl_rng_free(r); return 0; }  File: gsl-ref.info, Node: Comparing TRS Methods Example, Next: Large Nonlinear Least Squares Example, Prev: Geodesic Acceleration Example 2, Up: Examples<32> 41.12.4 Comparing TRS Methods Example ------------------------------------- The following program compares all available nonlinear least squares trust-region subproblem (TRS) methods on the Branin function, a common optimization test problem. The cost function is \Phi(x) &= 1/2 (f_1^2 + f_2^2) f_1 &= x_2 + a_1 x_1^2 + a_2 x_1 + a_3 f_2 &= sqrt(a_4) sqrt(1 + (1 - a_5) cos(x_1)) with a_1 = -{5.1 \over 4 \pi^2}, a_2 = {5 \over \pi}, a_3 = -6, a_4 = 10, a_5 = {1 \over 8\pi}. There are three minima of this function in the range (x_1,x_2) \in [-5,15] \times [-5,15]. The program below uses the starting point (x_1,x_2) = (6,14.5) and calculates the solution with all available nonlinear least squares TRS methods. 
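As a quick consistency check on the expected answer (a hand calculation using only the constants defined above): at the point (x_1, x_2) = (\pi, 2.275) the first residual vanishes,

     f_1 = 2.275 + a_1 \pi^2 + a_2 \pi + a_3 = 2.275 - 5.1/4 + 5 - 6 = 0

while the second residual contributes

     f_2^2 = a_4 (1 + (1 - a_5) \cos\pi) = a_4 a_5 = 10 / (8\pi) \approx 0.3979

so the squared residual norm f_1^2 + f_2^2 \approx 0.3979 at this minimum, matching the final cost reported by each solver in the output below.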
The program output is shown below: Method NITER NFEV NJEV Initial Cost Final cost Final cond(J) Final x levenberg-marquardt 20 27 21 1.9874e+02 3.9789e-01 6.1399e+07 (-3.14e+00, 1.23e+01) levenberg-marquardt+accel 27 36 28 1.9874e+02 3.9789e-01 1.4465e+07 (3.14e+00, 2.27e+00) dogleg 23 64 23 1.9874e+02 3.9789e-01 5.0692e+08 (3.14e+00, 2.28e+00) double-dogleg 24 69 24 1.9874e+02 3.9789e-01 3.4879e+07 (3.14e+00, 2.27e+00) 2D-subspace 23 54 24 1.9874e+02 3.9789e-01 2.5142e+07 (3.14e+00, 2.27e+00) The first row of output above corresponds to standard Levenberg-Marquardt, while the second row includes geodesic acceleration. We see that the standard LM method converges to the minimum at (-\pi,12.275) and also uses the least number of iterations and Jacobian evaluations. All other methods converge to the minimum (\pi,2.275) and perform similarly in terms of number of Jacobian evaluations. We see that J is fairly ill-conditioned at both minima, indicating that the QR (or SVD) solver is the best choice for this problem. Since there are only two parameters in this optimization problem, we can easily visualize the paths taken by each method, which are shown in Fig. %s. The figure shows contours of the cost function \Phi(x_1,x_2) which exhibits three global minima in the range [-5,15] \times [-5,15]. The paths taken by each solver are shown as colored lines. [gsl-ref-figures/nlfit3] Figure: Paths taken for different TRS methods for the Branin function The program is given below. #include #include #include #include #include #include /* parameters to model */ struct model_params { double a1; double a2; double a3; double a4; double a5; }; /* Branin function */ int func_f (const gsl_vector * x, void *params, gsl_vector * f) { struct model_params *par = (struct model_params *) params; double x1 = gsl_vector_get(x, 0); double x2 = gsl_vector_get(x, 1); double f1 = x2 + par->a1 * x1 * x1 + par->a2 * x1 + par->a3; double f2 = sqrt(par->a4) * sqrt(1.0 + (1.0 - par->a5) * cos(x1)); gsl_vector_set(f, 0, f1); gsl_vector_set(f, 1, f2); return GSL_SUCCESS; } int func_df (const gsl_vector * x, void *params, gsl_matrix * J) { struct model_params *par = (struct model_params *) params; double x1 = gsl_vector_get(x, 0); double f2 = sqrt(par->a4) * sqrt(1.0 + (1.0 - par->a5) * cos(x1)); gsl_matrix_set(J, 0, 0, 2.0 * par->a1 * x1 + par->a2); gsl_matrix_set(J, 0, 1, 1.0); gsl_matrix_set(J, 1, 0, -0.5 * par->a4 / f2 * (1.0 - par->a5) * sin(x1)); gsl_matrix_set(J, 1, 1, 0.0); return GSL_SUCCESS; } int func_fvv (const gsl_vector * x, const gsl_vector * v, void *params, gsl_vector * fvv) { struct model_params *par = (struct model_params *) params; double x1 = gsl_vector_get(x, 0); double v1 = gsl_vector_get(v, 0); double c = cos(x1); double s = sin(x1); double f2 = sqrt(par->a4) * sqrt(1.0 + (1.0 - par->a5) * c); double t = 0.5 * par->a4 * (1.0 - par->a5) / f2; gsl_vector_set(fvv, 0, 2.0 * par->a1 * v1 * v1); gsl_vector_set(fvv, 1, -t * (c + s*s/f2) * v1 * v1); return GSL_SUCCESS; } void callback(const size_t iter, void *params, const gsl_multifit_nlinear_workspace *w) { gsl_vector * x = gsl_multifit_nlinear_position(w); double x1 = gsl_vector_get(x, 0); double x2 = gsl_vector_get(x, 1); /* print out current location */ printf("%f %f\n", x1, x2); } void solve_system(gsl_vector *x0, gsl_multifit_nlinear_fdf *fdf, gsl_multifit_nlinear_parameters *params) { const gsl_multifit_nlinear_type *T = gsl_multifit_nlinear_trust; const size_t max_iter = 200; const double xtol = 1.0e-8; const double gtol = 1.0e-8; const double ftol = 
1.0e-8; const size_t n = fdf->n; const size_t p = fdf->p; gsl_multifit_nlinear_workspace *work = gsl_multifit_nlinear_alloc(T, params, n, p); gsl_vector * f = gsl_multifit_nlinear_residual(work); gsl_vector * x = gsl_multifit_nlinear_position(work); int info; double chisq0, chisq, rcond; printf("# %s/%s\n", gsl_multifit_nlinear_name(work), gsl_multifit_nlinear_trs_name(work)); /* initialize solver */ gsl_multifit_nlinear_init(x0, fdf, work); /* store initial cost */ gsl_blas_ddot(f, f, &chisq0); /* iterate until convergence */ gsl_multifit_nlinear_driver(max_iter, xtol, gtol, ftol, callback, NULL, &info, work); /* store final cost */ gsl_blas_ddot(f, f, &chisq); /* store cond(J(x)) */ gsl_multifit_nlinear_rcond(&rcond, work); /* print summary */ fprintf(stderr, "%-25s %-6zu %-5zu %-5zu %-13.4e %-12.4e %-13.4e (%.2e, %.2e)\n", gsl_multifit_nlinear_trs_name(work), gsl_multifit_nlinear_niter(work), fdf->nevalf, fdf->nevaldf, chisq0, chisq, 1.0 / rcond, gsl_vector_get(x, 0), gsl_vector_get(x, 1)); printf("\n\n"); gsl_multifit_nlinear_free(work); } int main (void) { const size_t n = 2; const size_t p = 2; gsl_vector *f = gsl_vector_alloc(n); gsl_vector *x = gsl_vector_alloc(p); gsl_multifit_nlinear_fdf fdf; gsl_multifit_nlinear_parameters fdf_params = gsl_multifit_nlinear_default_parameters(); struct model_params params; params.a1 = -5.1 / (4.0 * M_PI * M_PI); params.a2 = 5.0 / M_PI; params.a3 = -6.0; params.a4 = 10.0; params.a5 = 1.0 / (8.0 * M_PI); /* print map of Phi(x1, x2) */ { double x1, x2, chisq; for (x1 = -5.0; x1 < 15.0; x1 += 0.1) { for (x2 = -5.0; x2 < 15.0; x2 += 0.1) { gsl_vector_set(x, 0, x1); gsl_vector_set(x, 1, x2); func_f(x, ¶ms, f); gsl_blas_ddot(f, f, &chisq); printf("%f %f %f\n", x1, x2, chisq); } printf("\n"); } printf("\n\n"); } /* define function to be minimized */ fdf.f = func_f; fdf.df = func_df; fdf.fvv = func_fvv; fdf.n = n; fdf.p = p; fdf.params = ¶ms; /* starting point */ gsl_vector_set(x, 0, 6.0); gsl_vector_set(x, 1, 14.5); fprintf(stderr, "%-25s %-6s %-5s %-5s %-13s %-12s %-13s %-15s\n", "Method", "NITER", "NFEV", "NJEV", "Initial Cost", "Final cost", "Final cond(J)", "Final x"); fdf_params.trs = gsl_multifit_nlinear_trs_lm; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multifit_nlinear_trs_lmaccel; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multifit_nlinear_trs_dogleg; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multifit_nlinear_trs_ddogleg; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multifit_nlinear_trs_subspace2D; solve_system(x, &fdf, &fdf_params); gsl_vector_free(f); gsl_vector_free(x); return 0; }  File: gsl-ref.info, Node: Large Nonlinear Least Squares Example, Prev: Comparing TRS Methods Example, Up: Examples<32> 41.12.5 Large Nonlinear Least Squares Example --------------------------------------------- The following program illustrates the large nonlinear least squares solvers on a system with significant sparse structure in the Jacobian. The cost function is \Phi(x) &= 1/2 \sum_{i=1}^{p+1} f_i^2 f_i &= \sqrt{\alpha} (x_i - 1), 1 \le i \le p f_{p+1} &= ||x||^2 - 1/4 with \alpha = 10^{-5}. The residual f_{p+1} imposes a constraint on the p parameters x, to ensure that ||x||^2 \approx {1 \over 4}. 
The (p+1)-by-p Jacobian for this system is J(x) = [ \sqrt{alpha} I_p; 2 x^T ] and the normal equations matrix is J^T J = \alpha I_p + 4 x x^T Finally, the second directional derivative of f for the geodesic acceleration method is fvv = [ 0 ] [ 2 ||v||^2 ] Since the upper p-by-p block of J is diagonal, this sparse structure should be exploited in the nonlinear solver. For comparison, the following program solves the system for p = 2000 using the dense direct Cholesky solver based on the normal equations matrix J^T J, as well as the iterative Steihaug-Toint solver, based on sparse matrix-vector products J u and J^T u. The program output is shown below: Method NITER NFEV NJUEV NJTJEV NAEV Init Cost Final cost cond(J) Final |x|^2 Time (s) levenberg-marquardt 25 31 26 26 0 7.1218e+18 1.9555e-02 447.50 2.5044e-01 46.28 levenberg-marquardt+accel 22 23 45 23 22 7.1218e+18 1.9555e-02 447.64 2.5044e-01 33.92 dogleg 37 87 36 36 0 7.1218e+18 1.9555e-02 447.59 2.5044e-01 56.05 double-dogleg 35 88 34 34 0 7.1218e+18 1.9555e-02 447.62 2.5044e-01 52.65 2D-subspace 37 88 36 36 0 7.1218e+18 1.9555e-02 447.71 2.5044e-01 59.75 steihaug-toint 35 88 345 0 0 7.1218e+18 1.9555e-02 inf 2.5044e-01 0.09 The first five rows use methods based on factoring the dense J^T J matrix while the last row uses the iterative Steihaug-Toint method. While the number of Jacobian matrix-vector products (NJUEV) is less for the dense methods, the added time to construct and factor the J^T J matrix (NJTJEV) results in a much larger runtime than the iterative method (see last column). The program is given below. #include #include #include #include #include #include #include #include #include /* parameters for functions */ struct model_params { double alpha; gsl_spmatrix *J; }; /* penalty function */ int penalty_f (const gsl_vector * x, void *params, gsl_vector * f) { struct model_params *par = (struct model_params *) params; const double sqrt_alpha = sqrt(par->alpha); const size_t p = x->size; size_t i; double sum = 0.0; for (i = 0; i < p; ++i) { double xi = gsl_vector_get(x, i); gsl_vector_set(f, i, sqrt_alpha*(xi - 1.0)); sum += xi * xi; } gsl_vector_set(f, p, sum - 0.25); return GSL_SUCCESS; } int penalty_df (CBLAS_TRANSPOSE_t TransJ, const gsl_vector * x, const gsl_vector * u, void * params, gsl_vector * v, gsl_matrix * JTJ) { struct model_params *par = (struct model_params *) params; const size_t p = x->size; size_t j; /* store 2*x in last row of J */ for (j = 0; j < p; ++j) { double xj = gsl_vector_get(x, j); gsl_spmatrix_set(par->J, p, j, 2.0 * xj); } /* compute v = op(J) u */ if (v) gsl_spblas_dgemv(TransJ, 1.0, par->J, u, 0.0, v); if (JTJ) { gsl_vector_view diag = gsl_matrix_diagonal(JTJ); /* compute J^T J = [ alpha*I_p + 4 x x^T ] */ gsl_matrix_set_zero(JTJ); /* store 4 x x^T in lower half of JTJ */ gsl_blas_dsyr(CblasLower, 4.0, x, JTJ); /* add alpha to diag(JTJ) */ gsl_vector_add_constant(&diag.vector, par->alpha); } return GSL_SUCCESS; } int penalty_fvv (const gsl_vector * x, const gsl_vector * v, void *params, gsl_vector * fvv) { const size_t p = x->size; double normv = gsl_blas_dnrm2(v); gsl_vector_set_zero(fvv); gsl_vector_set(fvv, p, 2.0 * normv * normv); (void)params; /* avoid unused parameter warning */ return GSL_SUCCESS; } void solve_system(const gsl_vector *x0, gsl_multilarge_nlinear_fdf *fdf, gsl_multilarge_nlinear_parameters *params) { const gsl_multilarge_nlinear_type *T = gsl_multilarge_nlinear_trust; const size_t max_iter = 200; const double xtol = 1.0e-8; const double gtol = 1.0e-8; const double ftol = 
1.0e-8; const size_t n = fdf->n; const size_t p = fdf->p; gsl_multilarge_nlinear_workspace *work = gsl_multilarge_nlinear_alloc(T, params, n, p); gsl_vector * f = gsl_multilarge_nlinear_residual(work); gsl_vector * x = gsl_multilarge_nlinear_position(work); int info; double chisq0, chisq, rcond, xsq; struct timeval tv0, tv1; gettimeofday(&tv0, NULL); /* initialize solver */ gsl_multilarge_nlinear_init(x0, fdf, work); /* store initial cost */ gsl_blas_ddot(f, f, &chisq0); /* iterate until convergence */ gsl_multilarge_nlinear_driver(max_iter, xtol, gtol, ftol, NULL, NULL, &info, work); gettimeofday(&tv1, NULL); /* store final cost */ gsl_blas_ddot(f, f, &chisq); /* compute final ||x||^2 */ gsl_blas_ddot(x, x, &xsq); /* store cond(J(x)) */ gsl_multilarge_nlinear_rcond(&rcond, work); /* print summary */ fprintf(stderr, "%-25s %-5zu %-4zu %-5zu %-6zu %-4zu %-10.4e %-10.4e %-7.2f %-11.4e %.2f\n", gsl_multilarge_nlinear_trs_name(work), gsl_multilarge_nlinear_niter(work), fdf->nevalf, fdf->nevaldfu, fdf->nevaldf2, fdf->nevalfvv, chisq0, chisq, 1.0 / rcond, xsq, (tv1.tv_sec - tv0.tv_sec) + 1.0e-6 * (tv1.tv_usec - tv0.tv_usec)); gsl_multilarge_nlinear_free(work); } int main (void) { const size_t p = 2000; const size_t n = p + 1; gsl_vector *f = gsl_vector_alloc(n); gsl_vector *x = gsl_vector_alloc(p); /* allocate sparse Jacobian matrix with 2*p non-zero elements in triplet format */ gsl_spmatrix *J = gsl_spmatrix_alloc_nzmax(n, p, 2 * p, GSL_SPMATRIX_TRIPLET); gsl_multilarge_nlinear_fdf fdf; gsl_multilarge_nlinear_parameters fdf_params = gsl_multilarge_nlinear_default_parameters(); struct model_params params; size_t i; params.alpha = 1.0e-5; params.J = J; /* define function to be minimized */ fdf.f = penalty_f; fdf.df = penalty_df; fdf.fvv = penalty_fvv; fdf.n = n; fdf.p = p; fdf.params = ¶ms; for (i = 0; i < p; ++i) { /* starting point */ gsl_vector_set(x, i, i + 1.0); /* store sqrt(alpha)*I_p in upper p-by-p block of J */ gsl_spmatrix_set(J, i, i, sqrt(params.alpha)); } fprintf(stderr, "%-25s %-4s %-4s %-5s %-6s %-4s %-10s %-10s %-7s %-11s %-10s\n", "Method", "NITER", "NFEV", "NJUEV", "NJTJEV", "NAEV", "Init Cost", "Final cost", "cond(J)", "Final |x|^2", "Time (s)"); fdf_params.scale = gsl_multilarge_nlinear_scale_levenberg; fdf_params.trs = gsl_multilarge_nlinear_trs_lm; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multilarge_nlinear_trs_lmaccel; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multilarge_nlinear_trs_dogleg; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multilarge_nlinear_trs_ddogleg; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multilarge_nlinear_trs_subspace2D; solve_system(x, &fdf, &fdf_params); fdf_params.trs = gsl_multilarge_nlinear_trs_cgst; solve_system(x, &fdf, &fdf_params); gsl_vector_free(f); gsl_vector_free(x); gsl_spmatrix_free(J); return 0; }  File: gsl-ref.info, Node: References and Further Reading<34>, Prev: Examples<32>, Up: Nonlinear Least-Squares Fitting 41.13 References and Further Reading ==================================== The following publications are relevant to the algorithms described in this section, * J.J. Moré, `The Levenberg-Marquardt Algorithm: Implementation and Theory', Lecture Notes in Mathematics, v630 (1978), ed G. Watson. * H. B. Nielsen, “Damping Parameter in Marquardt’s Method”, IMM Department of Mathematical Modeling, DTU, Tech. Report IMM-REP-1999-05 (1999). * K. Madsen and H. B. Nielsen, “Introduction to Optimization and Data Fitting”, IMM Department of Mathematical Modeling, DTU, 2010. * J. E. 
Dennis and R. B. Schnabel, Numerical Methods for Unconstrained Optimization and Nonlinear Equations, SIAM, 1996.

   * M. K. Transtrum, B. B. Machta, and J. P. Sethna, Geometry of nonlinear least squares with applications to sloppy models and optimization, Phys. Rev. E 83, 036701, 2011.

   * M. K. Transtrum and J. P. Sethna, Improvements to the Levenberg-Marquardt algorithm for nonlinear least-squares minimization, arXiv:1201.5885, 2012.

   * J.J. Moré, B.S. Garbow, K.E. Hillstrom, “Testing Unconstrained Optimization Software”, ACM Transactions on Mathematical Software, Vol 7, No 1 (1981), p 17–41.

   * H. B. Nielsen, “UCTP Test Problems for Unconstrained Optimization”, IMM Department of Mathematical Modeling, DTU, Tech. Report IMM-REP-2000-17 (2000).

File: gsl-ref.info, Node: Basis Splines, Next: Sparse Matrices, Prev: Nonlinear Least-Squares Fitting, Up: Top

42 Basis Splines
****************

This chapter describes functions for the computation of smoothing basis splines (B-splines).  A smoothing spline differs from an interpolating spline in that the resulting curve is not required to pass through each datapoint.  For information about interpolating splines, see *note Interpolation: 9b0.

The header file ‘gsl_bspline.h’ contains the prototypes for the bspline functions and related declarations.

* Menu:

* Overview: Overview<7>.
* Initializing the B-splines solver::
* Constructing the knots vector::
* Evaluation of B-splines::
* Evaluation of B-spline derivatives::
* Working with the Greville abscissae::
* Examples: Examples<33>.
* References and Further Reading: References and Further Reading<35>.

File: gsl-ref.info, Node: Overview<7>, Next: Initializing the B-splines solver, Up: Basis Splines

42.1 Overview
=============

B-splines are commonly used as basis functions to fit smoothing curves to large data sets.  To do this, the abscissa axis is broken up into some number of intervals, where the endpoints of each interval are called `breakpoints'.  These breakpoints are then converted to `knots' by imposing various continuity and smoothness conditions at each interface.  Given a nondecreasing knot vector t = \{t_0, t_1, \dots, t_{n+k-1}\}, the n basis splines of order k are defined by

     B_(i,1)(x) = 1   if t_i <= x < t_(i+1)
                  0   otherwise

     B_(i,k)(x) = [(x - t_i)/(t_(i+k-1) - t_i)] B_(i,k-1)(x)
                    + [(t_(i+k) - x)/(t_(i+k) - t_(i+1))] B_(i+1,k-1)(x)

for i = 0, \ldots, n-1.  The common case of cubic B-splines is given by k = 4.  The above recurrence relation can be evaluated in a numerically stable way by the de Boor algorithm.

If we define appropriate knots on an interval [a,b] then the B-spline basis functions form a complete set on that interval.  Therefore we can expand a smoothing function as

     f(x) = \sum_{i=0}^{n-1} c_i B_{i,k}(x)

given enough (x_j, f(x_j)) data pairs.  The coefficients c_i can be readily obtained from a least-squares fit.

File: gsl-ref.info, Node: Initializing the B-splines solver, Next: Constructing the knots vector, Prev: Overview<7>, Up: Basis Splines

42.2 Initializing the B-splines solver
======================================

 -- Type: gsl_bspline_workspace

     The computation of B-spline functions requires a preallocated workspace.

 -- Function: *note gsl_bspline_workspace: c01. *gsl_bspline_alloc (const size_t k, const size_t nbreak)

     This function allocates a workspace for computing B-splines of order *note k: c02.  The number of breakpoints is given by *note nbreak: c02.  This leads to n = nbreak + k - 2 basis functions.  Cubic B-splines are specified by k = 4.
The size of the workspace is O(2k^2 + 5k + nbreak). -- Function: void gsl_bspline_free (gsl_bspline_workspace *w) This function frees the memory associated with the workspace *note w: c03.  File: gsl-ref.info, Node: Constructing the knots vector, Next: Evaluation of B-splines, Prev: Initializing the B-splines solver, Up: Basis Splines 42.3 Constructing the knots vector ================================== -- Function: int gsl_bspline_knots (const gsl_vector *breakpts, gsl_bspline_workspace *w) This function computes the knots associated with the given breakpoints and stores them internally in ‘w->knots’. -- Function: int gsl_bspline_knots_uniform (const double a, const double b, gsl_bspline_workspace *w) This function assumes uniformly spaced breakpoints on [a,b] and constructs the corresponding knot vector using the previously specified ‘nbreak’ parameter. The knots are stored in ‘w->knots’.  File: gsl-ref.info, Node: Evaluation of B-splines, Next: Evaluation of B-spline derivatives, Prev: Constructing the knots vector, Up: Basis Splines 42.4 Evaluation of B-splines ============================ -- Function: int gsl_bspline_eval (const double x, gsl_vector *B, gsl_bspline_workspace *w) This function evaluates all B-spline basis functions at the position *note x: c08. and stores them in the vector *note B: c08, so that the i-th element is B_i(x). The vector *note B: c08. must be of length n = nbreak + k - 2. This value may also be obtained by calling *note gsl_bspline_ncoeffs(): c09. Computing all the basis functions at once is more efficient than computing them individually, due to the nature of the defining recurrence relation. -- Function: int gsl_bspline_eval_nonzero (const double x, gsl_vector *Bk, size_t *istart, size_t *iend, gsl_bspline_workspace *w) This function evaluates all potentially nonzero B-spline basis functions at the position *note x: c0a. and stores them in the vector *note Bk: c0a, so that the i-th element is B_{(istart+i)}(x). The last element of *note Bk: c0a. is B_{iend}(x). The vector *note Bk: c0a. must be of length k. By returning only the nonzero basis functions, this function allows quantities involving linear combinations of the B_i(x) to be computed without unnecessary terms (such linear combinations occur, for example, when evaluating an interpolated function). -- Function: size_t gsl_bspline_ncoeffs (gsl_bspline_workspace *w) This function returns the number of B-spline coefficients given by n = nbreak + k - 2.  File: gsl-ref.info, Node: Evaluation of B-spline derivatives, Next: Working with the Greville abscissae, Prev: Evaluation of B-splines, Up: Basis Splines 42.5 Evaluation of B-spline derivatives ======================================= -- Function: int gsl_bspline_deriv_eval (const double x, const size_t nderiv, gsl_matrix *dB, gsl_bspline_workspace *w) This function evaluates all B-spline basis function derivatives of orders 0 through *note nderiv: c0c. (inclusive) at the position *note x: c0c. and stores them in the matrix *note dB: c0c. The (i,j)-th element of *note dB: c0c. is d^jB_i(x)/dx^j. The matrix *note dB: c0c. must be of size n = nbreak + k - 2 by nderiv + 1. The value n may also be obtained by calling *note gsl_bspline_ncoeffs(): c09. Note that function evaluations are included as the zeroth order derivatives in *note dB: c0c. Computing all the basis function derivatives at once is more efficient than computing them individually, due to the nature of the defining recurrence relation. 
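As a brief illustration of these evaluation routines, the following fragment (a minimal sketch, not one of the library examples) tabulates the value and first derivative of every cubic basis function at an assumed point x = 0.37, for an assumed set of 10 uniform breakpoints on [0,1].  It requires the headers ‘gsl_bspline.h’, ‘gsl_matrix.h’ and ‘stdio.h’:

     const size_t k = 4;                        /* cubic B-splines */
     const size_t nbreak = 10;                  /* assumed number of breakpoints */
     const size_t nderiv = 1;                   /* values and first derivatives */
     gsl_bspline_workspace *w = gsl_bspline_alloc (k, nbreak);
     const size_t n = gsl_bspline_ncoeffs (w);  /* n = nbreak + k - 2 */
     gsl_matrix *dB = gsl_matrix_alloc (n, nderiv + 1);
     double x = 0.37;                           /* assumed evaluation point */
     size_t i;

     gsl_bspline_knots_uniform (0.0, 1.0, w);   /* uniform breakpoints on [0,1] */
     gsl_bspline_deriv_eval (x, nderiv, dB, w); /* dB(i,j) = d^j B_i(x) / dx^j */

     for (i = 0; i < n; ++i)
       printf ("B_%zu(%g) = %g, B_%zu'(%g) = %g\n",
               i, x, gsl_matrix_get (dB, i, 0),
               i, x, gsl_matrix_get (dB, i, 1));

     gsl_matrix_free (dB);
     gsl_bspline_free (w);

Only k of the n basis functions are nonzero at any given x; the ‘_nonzero’ variants below return just those entries.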
-- Function: int gsl_bspline_deriv_eval_nonzero (const double x, const size_t nderiv, gsl_matrix *dB, size_t *istart, size_t *iend, gsl_bspline_workspace *w) This function evaluates all potentially nonzero B-spline basis function derivatives of orders 0 through *note nderiv: c0d. (inclusive) at the position *note x: c0d. and stores them in the matrix *note dB: c0d. The (i,j)-th element of *note dB: c0d. is d^jB_{(istart+i)}(x)/dx^j. The last row of *note dB: c0d. contains d^jB_{iend}(x)/dx^j. The matrix *note dB: c0d. must be of size k by at least nderiv + 1. Note that function evaluations are included as the zeroth order derivatives in *note dB: c0d. By returning only the nonzero basis functions, this function allows quantities involving linear combinations of the B_i(x) and their derivatives to be computed without unnecessary terms.  File: gsl-ref.info, Node: Working with the Greville abscissae, Next: Examples<33>, Prev: Evaluation of B-spline derivatives, Up: Basis Splines 42.6 Working with the Greville abscissae ======================================== The Greville abscissae are defined to be the mean location of k-1 consecutive knots in the knot vector for each basis spline function of order k. With the first and last knots in the *note gsl_bspline_workspace: c01. knot vector excluded, there are *note gsl_bspline_ncoeffs(): c09. Greville abscissae for any given B-spline basis. These values are often used in B-spline collocation applications and may also be called Marsden-Schoenberg points. -- Function: double gsl_bspline_greville_abscissa (size_t i, gsl_bspline_workspace *w) Returns the location of the i-th Greville abscissa for the given B-spline basis. For the ill-defined case when k = 1, the implementation chooses to return breakpoint interval midpoints.  File: gsl-ref.info, Node: Examples<33>, Next: References and Further Reading<35>, Prev: Working with the Greville abscissae, Up: Basis Splines 42.7 Examples ============= The following program computes a linear least squares fit to data using cubic B-spline basis functions with uniform breakpoints. The data is generated from the curve y(x) = \cos{(x)} \exp{(-x/10)} on the interval [0, 15] with Gaussian noise added. 
#include #include #include #include #include #include #include #include /* number of data points to fit */ #define N 200 /* number of fit coefficients */ #define NCOEFFS 12 /* nbreak = ncoeffs + 2 - k = ncoeffs - 2 since k = 4 */ #define NBREAK (NCOEFFS - 2) int main (void) { const size_t n = N; const size_t ncoeffs = NCOEFFS; const size_t nbreak = NBREAK; size_t i, j; gsl_bspline_workspace *bw; gsl_vector *B; double dy; gsl_rng *r; gsl_vector *c, *w; gsl_vector *x, *y; gsl_matrix *X, *cov; gsl_multifit_linear_workspace *mw; double chisq, Rsq, dof, tss; gsl_rng_env_setup(); r = gsl_rng_alloc(gsl_rng_default); /* allocate a cubic bspline workspace (k = 4) */ bw = gsl_bspline_alloc(4, nbreak); B = gsl_vector_alloc(ncoeffs); x = gsl_vector_alloc(n); y = gsl_vector_alloc(n); X = gsl_matrix_alloc(n, ncoeffs); c = gsl_vector_alloc(ncoeffs); w = gsl_vector_alloc(n); cov = gsl_matrix_alloc(ncoeffs, ncoeffs); mw = gsl_multifit_linear_alloc(n, ncoeffs); /* this is the data to be fitted */ for (i = 0; i < n; ++i) { double sigma; double xi = (15.0 / (N - 1)) * i; double yi = cos(xi) * exp(-0.1 * xi); sigma = 0.1 * yi; dy = gsl_ran_gaussian(r, sigma); yi += dy; gsl_vector_set(x, i, xi); gsl_vector_set(y, i, yi); gsl_vector_set(w, i, 1.0 / (sigma * sigma)); printf("%f %f\n", xi, yi); } /* use uniform breakpoints on [0, 15] */ gsl_bspline_knots_uniform(0.0, 15.0, bw); /* construct the fit matrix X */ for (i = 0; i < n; ++i) { double xi = gsl_vector_get(x, i); /* compute B_j(xi) for all j */ gsl_bspline_eval(xi, B, bw); /* fill in row i of X */ for (j = 0; j < ncoeffs; ++j) { double Bj = gsl_vector_get(B, j); gsl_matrix_set(X, i, j, Bj); } } /* do the fit */ gsl_multifit_wlinear(X, w, y, c, cov, &chisq, mw); dof = n - ncoeffs; tss = gsl_stats_wtss(w->data, 1, y->data, 1, y->size); Rsq = 1.0 - chisq / tss; fprintf(stderr, "chisq/dof = %e, Rsq = %f\n", chisq / dof, Rsq); printf("\n\n"); /* output the smoothed curve */ { double xi, yi, yerr; for (xi = 0.0; xi < 15.0; xi += 0.1) { gsl_bspline_eval(xi, B, bw); gsl_multifit_linear_est(B, c, cov, &yi, &yerr); printf("%f %f\n", xi, yi); } } gsl_rng_free(r); gsl_bspline_free(bw); gsl_vector_free(B); gsl_vector_free(x); gsl_vector_free(y); gsl_matrix_free(X); gsl_vector_free(c); gsl_vector_free(w); gsl_matrix_free(cov); gsl_multifit_linear_free(mw); return 0; } /* main() */ The output is shown below: $ ./a.out > bspline.txt chisq/dof = 1.118217e+00, Rsq = 0.989771 The data and fitted model are shown in Fig. %s. [gsl-ref-figures/bspline] Figure: Data (black) and fitted model (red)  File: gsl-ref.info, Node: References and Further Reading<35>, Prev: Examples<33>, Up: Basis Splines 42.8 References and Further Reading =================================== Further information on the algorithms described in this section can be found in the following book, * C. de Boor, `A Practical Guide to Splines' (1978), Springer-Verlag, ISBN 0-387-90356-9. Further information of Greville abscissae and B-spline collocation can be found in the following paper, * Richard W. Johnson, Higher order B-spline collocation at the Greville abscissae. `Applied Numerical Mathematics'. vol.: 52, 2005, 63–75. A large collection of B-spline routines is available in the PPPACK library available at ‘http://www.netlib.org/pppack’, which is also part of SLATEC.  
File: gsl-ref.info, Node: Sparse Matrices, Next: Sparse BLAS Support, Prev: Basis Splines, Up: Top 43 Sparse Matrices ****************** This chapter describes functions for the construction and manipulation of sparse matrices, matrices which are populated primarily with zeros and contain only a few non-zero elements. Sparse matrices often appear in the solution of partial differential equations. It is beneficial to use specialized data structures and algorithms for storing and working with sparse matrices, since dense matrix algorithms and structures can be prohibitively slow and use huge amounts of memory when applied to sparse matrices. The header file ‘gsl_spmatrix.h’ contains the prototypes for the sparse matrix functions and related declarations. * Menu: * Data types: Data types<2>. * Sparse Matrix Storage Formats:: * Overview: Overview<8>. * Allocation:: * Accessing Matrix Elements:: * Initializing Matrix Elements:: * Reading and Writing Matrices:: * Copying Matrices:: * Exchanging Rows and Columns:: * Matrix Operations:: * Matrix Properties:: * Finding Maximum and Minimum Elements:: * Compressed Format:: * Conversion Between Sparse and Dense Matrices:: * Examples: Examples<34>. * References and Further Reading: References and Further Reading<36>.  File: gsl-ref.info, Node: Data types<2>, Next: Sparse Matrix Storage Formats, Up: Sparse Matrices 43.1 Data types =============== All the functions are available for each of the standard data-types. The versions for ‘double’ have the prefix ‘gsl_spmatrix’, Similarly the versions for single-precision ‘float’ arrays have the prefix ‘gsl_spmatrix_float’. The full list of available types is given below, Prefix Type -------------------------------------------------------------- gsl_spmatrix double gsl_spmatrix_float float gsl_spmatrix_long_double long double gsl_spmatrix_int int gsl_spmatrix_uint unsigned int gsl_spmatrix_long long gsl_spmatrix_ulong unsigned long gsl_spmatrix_short short gsl_spmatrix_ushort unsigned short gsl_spmatrix_char char gsl_spmatrix_uchar unsigned char gsl_spmatrix_complex complex double gsl_spmatrix_complex_float complex float gsl_spmatrix_complex_long_double complex long double  File: gsl-ref.info, Node: Sparse Matrix Storage Formats, Next: Overview<8>, Prev: Data types<2>, Up: Sparse Matrices 43.2 Sparse Matrix Storage Formats ================================== GSL currently supports three storage formats for sparse matrices: the coordinate (COO) representation, compressed sparse column (CSC) and compressed sparse row (CSR) formats. These are discussed in more detail below. In order to illustrate the different storage formats, the following sections will reference this M-by-N sparse matrix, with M=4 and N=5: \begin{pmatrix} 9 & 0 & 0 & 0 & -3 \\ 4 & 7 & 0 & 0 & 0 \\ 0 & 8 & -1 & 8 & 0 \\ 4 & 0 & 5 & 6 & 0 \end{pmatrix} The number of non-zero elements in the matrix, also abbreviated as ‘nnz’ is equal to 10 in this case. * Menu: * Coordinate Storage (COO): Coordinate Storage COO. * Compressed Sparse Column (CSC): Compressed Sparse Column CSC. * Compressed Sparse Row (CSR): Compressed Sparse Row CSR.  File: gsl-ref.info, Node: Coordinate Storage COO, Next: Compressed Sparse Column CSC, Up: Sparse Matrix Storage Formats 43.2.1 Coordinate Storage (COO) ------------------------------- The coordinate storage format, also known as `triplet format', stores triplets (i,j,x) for each non-zero element of the matrix. This notation means that the (i,j) element of the matrix A is A_{ij} = x. 
The matrix is stored using three arrays of the same length, representing the row indices, column indices, and matrix data. For the reference matrix above, one possible storage format is: data 9 7 4 8 -3 -1 8 5 6 4 row 0 1 1 2 0 2 2 3 3 3 col 0 1 0 1 4 2 3 2 3 0 Note that this representation is not unique - the coordinate triplets may appear in any ordering and would still represent the same sparse matrix. The length of the three arrays is equal to the number of non-zero elements in the matrix, ‘nnz’, which in this case is 10. The coordinate format is extremely convenient for sparse matrix `assembly', the process of adding new elements, or changing existing elements, in a sparse matrix. However, it is generally not suitable for the efficient implementation of matrix-matrix products, or matrix factorization algorithms. For these applications it is better to use one of the compressed formats discussed below. In order to faciliate efficient sparse matrix assembly, GSL stores the coordinate data in a balanced binary search tree, specifically an AVL tree, in addition to the three arrays discussed above. This allows GSL to efficiently determine whether an entry (i,j) already exists in the matrix, and to replace an existing matrix entry with a new value, without needing to search unsorted arrays.  File: gsl-ref.info, Node: Compressed Sparse Column CSC, Next: Compressed Sparse Row CSR, Prev: Coordinate Storage COO, Up: Sparse Matrix Storage Formats 43.2.2 Compressed Sparse Column (CSC) ------------------------------------- Compressed sparse column storage stores each column of non-zero values in the sparse matrix in a continuous memory block, keeping pointers to the beginning of each column in that memory block, and storing the row indices of each non-zero element. For the reference matrix above, these arrays look like data 9 4 4 7 8 -1 5 8 6 -3 row 0 1 3 1 2 2 3 2 3 0 col_ptr 0 3 5 7 9 10 The ‘data’ and ‘row’ arrays are of length ‘nnz’ and are the same as the COO storage format. The ‘col_ptr’ array has length N+1, and ‘col_ptr[j]’ gives the index in ‘data’ of the start of column ‘j’. Therefore, the j-th column of the matrix is stored in ‘data[col_ptr[j]]’, ‘data[col_ptr[j] + 1]’, …, ‘data[col_ptr[j+1] - 1]’. The last element of ‘col_ptr’ is ‘nnz’.  File: gsl-ref.info, Node: Compressed Sparse Row CSR, Prev: Compressed Sparse Column CSC, Up: Sparse Matrix Storage Formats 43.2.3 Compressed Sparse Row (CSR) ---------------------------------- Compressed row storage stores each row of non-zero values in a continuous memory block, keeping pointers to the beginning of each row in the block and storing the column indices of each non-zero element. For the reference matrix above, these arrays look like data 9 -3 4 7 8 -1 8 4 5 6 col 0 4 0 1 1 2 3 0 2 3 row_ptr 0 2 4 7 10 The ‘data’ and ‘col’ arrays are of length ‘nnz’ and are the same as the COO storage format. The ‘row_ptr’ array has length M+1, and ‘row_ptr[i]’ gives the index in ‘data’ of the start of row ‘i’. Therefore, the i-th row of the matrix is stored in ‘data[row_ptr[i]]’, ‘data[row_ptr[i] + 1]’, …, ‘data[row_ptr[i+1] - 1]’. The last element of ‘row_ptr’ is ‘nnz’.  File: gsl-ref.info, Node: Overview<8>, Next: Allocation, Prev: Sparse Matrix Storage Formats, Up: Sparse Matrices 43.3 Overview ============= These routines provide support for constructing and manipulating sparse matrices in GSL, using an API similar to *note gsl_matrix: 3a2. The basic structure is called *note gsl_spmatrix: c1e. 
-- Type: gsl_spmatrix This structure is defined as: typedef struct { size_t size1; size_t size2; int *i; double *data; int *p; size_t nzmax; size_t nz; [ ... variables for binary tree and memory management ... ] size_t sptype; } gsl_spmatrix; This defines a ‘size1’-by-‘size2’ sparse matrix. The number of non-zero elements currently in the matrix is given by ‘nz’. For the triplet representation, ‘i’, ‘p’, and ‘data’ are arrays of size ‘nz’ which contain the row indices, column indices, and element value, respectively. So if data[k] = A(i,j), then i = i[k] and j = p[k]. For compressed column storage, ‘i’ and ‘data’ are arrays of size ‘nz’ containing the row indices and element values, identical to the triplet case. ‘p’ is an array of size ‘size2’ + 1 where ‘p[j]’ points to the index in ‘data’ of the start of column ‘j’. Thus, if data[k] = A(i,j), then i = i[k] and p[j] <= k < p[j+1]. For compressed row storage, ‘i’ and ‘data’ are arrays of size ‘nz’ containing the column indices and element values, identical to the triplet case. ‘p’ is an array of size ‘size1’ + 1 where ‘p[i]’ points to the index in ‘data’ of the start of row ‘i’. Thus, if data[k] = A(i,j), then j = i[k] and p[i] <= k < p[i+1]. There are additional variables in the *note gsl_spmatrix: c1e. structure related to binary tree storage and memory management. The GSL implementation of sparse matrices uses balanced AVL trees to sort matrix elements in the triplet representation. This speeds up element searches and duplicate detection during the matrix assembly process. The *note gsl_spmatrix: c1e. structure also contains additional workspace variables needed for various operations like converting from triplet to compressed storage. ‘sptype’ indicates the type of storage format being used (COO, CSC or CSR). The compressed storage format defined above makes it very simple to interface with sophisticated external linear solver libraries which accept compressed storage input. The user can simply pass the arrays ‘i’, ‘p’, and ‘data’ as the inputs to external libraries.  File: gsl-ref.info, Node: Allocation, Next: Accessing Matrix Elements, Prev: Overview<8>, Up: Sparse Matrices 43.4 Allocation =============== The functions for allocating memory for a sparse matrix follow the style of ‘malloc()’ and ‘free()’. They also perform their own error checking. If there is insufficient memory available to allocate a matrix then the functions call the GSL error handler with an error code of *note GSL_ENOMEM: 2a. in addition to returning a null pointer. -- Function: *note gsl_spmatrix: c1e. *gsl_spmatrix_alloc (const size_t n1, const size_t n2) This function allocates a sparse matrix of size *note n1: c20.-by-*note n2: c20. and initializes it to all zeros. If the size of the matrix is not known at allocation time, both *note n1: c20. and *note n2: c20. may be set to 1, and they will automatically grow as elements are added to the matrix. This function sets the matrix to the triplet representation, which is the easiest for adding and accessing matrix elements. This function tries to make a reasonable guess for the number of non-zero elements (‘nzmax’) which will be added to the matrix by assuming a sparse density of 10\%. The function *note gsl_spmatrix_alloc_nzmax(): c21. can be used if this number is known more accurately. The workspace is of size O(nzmax). -- Function: *note gsl_spmatrix: c1e. 
*gsl_spmatrix_alloc_nzmax (const size_t n1, const size_t n2, const size_t nzmax, const size_t sptype) This function allocates a sparse matrix of size *note n1: c21.-by-*note n2: c21. and initializes it to all zeros. If the size of the matrix is not known at allocation time, both *note n1: c21. and *note n2: c21. may be set to 1, and they will automatically grow as elements are added to the matrix. The parameter *note nzmax: c21. specifies the maximum number of non-zero elements which will be added to the matrix. It does not need to be precisely known in advance, since storage space will automatically grow using *note gsl_spmatrix_realloc(): c22. if *note nzmax: c21. is not large enough. Accurate knowledge of this parameter reduces the number of reallocation calls required. The parameter *note sptype: c21. specifies the storage format of the sparse matrix. Possible values are -- Macro: GSL_SPMATRIX_COO This flag specifies coordinate (triplet) storage. -- Macro: GSL_SPMATRIX_CSC This flag specifies compressed sparse column storage. -- Macro: GSL_SPMATRIX_CSR This flag specifies compressed sparse row storage. The allocated *note gsl_spmatrix: c1e. structure is of size O(nzmax). -- Function: int gsl_spmatrix_realloc (const size_t nzmax, gsl_spmatrix *m) This function reallocates the storage space for *note m: c22. to accomodate *note nzmax: c22. non-zero elements. It is typically called internally by *note gsl_spmatrix_set(): c26. if the user wants to add more elements to the sparse matrix than the previously specified *note nzmax: c22. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: void gsl_spmatrix_free (gsl_spmatrix *m) This function frees the memory associated with the sparse matrix *note m: c27. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  File: gsl-ref.info, Node: Accessing Matrix Elements, Next: Initializing Matrix Elements, Prev: Allocation, Up: Sparse Matrices 43.5 Accessing Matrix Elements ============================== -- Function: double gsl_spmatrix_get (const gsl_spmatrix *m, const size_t i, const size_t j) This function returns element (*note i: c29, *note j: c29.) of the matrix *note m: c29. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_set (gsl_spmatrix *m, const size_t i, const size_t j, const double x) This function sets element (*note i: c26, *note j: c26.) of the matrix *note m: c26. to the value *note x: c26. Input matrix formats supported: *note COO: c18. -- Function: double *gsl_spmatrix_ptr (gsl_spmatrix *m, const size_t i, const size_t j) This function returns a pointer to the (*note i: c2a, *note j: c2a.) element of the matrix *note m: c2a. If the (*note i: c2a, *note j: c2a.) element is not explicitly stored in the matrix, a null pointer is returned. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  File: gsl-ref.info, Node: Initializing Matrix Elements, Next: Reading and Writing Matrices, Prev: Accessing Matrix Elements, Up: Sparse Matrices 43.6 Initializing Matrix Elements ================================= Since the sparse matrix format only stores the non-zero elements, it is automatically initialized to zero upon allocation. The function *note gsl_spmatrix_set_zero(): c2c. may be used to re-initialize a matrix to zero after elements have been added to it. -- Function: int gsl_spmatrix_set_zero (gsl_spmatrix *m) This function sets (or resets) all the elements of the matrix *note m: c2c. 
to zero. For CSC and CSR matrices, the cost of this operation is O(1). For COO matrices, the binary tree structure must be dismantled, so the cost is O(nz). Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  File: gsl-ref.info, Node: Reading and Writing Matrices, Next: Copying Matrices, Prev: Initializing Matrix Elements, Up: Sparse Matrices 43.7 Reading and Writing Matrices ================================= -- Function: int gsl_spmatrix_fwrite (FILE *stream, const gsl_spmatrix *m) This function writes the elements of the matrix *note m: c2e. to the stream *note stream: c2e. in binary format. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. Since the data is written in the native binary format it may not be portable between different architectures. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_fread (FILE *stream, gsl_spmatrix *m) This function reads into the matrix *note m: c2f. from the open stream *note stream: c2f. in binary format. The matrix *note m: c2f. must be preallocated with the correct storage format, dimensions and have a sufficiently large ‘nzmax’ in order to read in all matrix elements, otherwise ‘GSL_EBADLEN’ is returned. The return value is 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file. The data is assumed to have been written in the native binary format on the same architecture. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_fprintf (FILE *stream, const gsl_spmatrix *m, const char *format) This function writes the elements of the matrix *note m: c30. line-by-line to the stream *note stream: c30. using the format specifier *note format: c30, which should be one of the ‘%g’, ‘%e’ or ‘%f’ formats for floating point numbers. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem writing to the file. The input matrix *note m: c30. may be in any storage format, and the output file will be written in MatrixMarket format. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: *note gsl_spmatrix: c1e. *gsl_spmatrix_fscanf (FILE *stream) This function reads sparse matrix data in the MatrixMarket format from the stream *note stream: c31. and stores it in a newly allocated matrix which is returned in *note COO: c18. format. The function returns 0 for success and ‘GSL_EFAILED’ if there was a problem reading from the file. The user should free the returned matrix when it is no longer needed.  File: gsl-ref.info, Node: Copying Matrices, Next: Exchanging Rows and Columns, Prev: Reading and Writing Matrices, Up: Sparse Matrices 43.8 Copying Matrices ===================== -- Function: int gsl_spmatrix_memcpy (gsl_spmatrix *dest, const gsl_spmatrix *src) This function copies the elements of the sparse matrix *note src: c33. into *note dest: c33. The two matrices must have the same dimensions and be in the same storage format. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  File: gsl-ref.info, Node: Exchanging Rows and Columns, Next: Matrix Operations, Prev: Copying Matrices, Up: Sparse Matrices 43.9 Exchanging Rows and Columns ================================ -- Function: int gsl_spmatrix_transpose_memcpy (gsl_spmatrix *dest, const gsl_spmatrix *src) This function copies the transpose of the sparse matrix *note src: c35. into *note dest: c35. 
The dimensions of *note dest: c35. must match the transpose of the matrix *note src: c35. Also, both matrices must use the same sparse storage format. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_transpose (gsl_spmatrix *m) This function replaces the matrix *note m: c36. by its transpose, but changes the storage format for compressed matrix inputs. Since compressed column storage is the transpose of compressed row storage, this function simply converts a CSC matrix to CSR and vice versa. This is the most efficient way to transpose a compressed storage matrix, but the user should note that the storage format of their compressed matrix will change on output. For COO matrix inputs, the output matrix is also in COO storage. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  File: gsl-ref.info, Node: Matrix Operations, Next: Matrix Properties, Prev: Exchanging Rows and Columns, Up: Sparse Matrices 43.10 Matrix Operations ======================= -- Function: int gsl_spmatrix_scale (gsl_spmatrix *m, const double x) This function scales all elements of the matrix *note m: c38. by the constant factor *note x: c38. The result m(i,j) \leftarrow x m(i,j) is stored in *note m: c38. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_scale_columns (gsl_spmatrix *A, const gsl_vector *x) This function scales the columns of the M-by-N sparse matrix *note A: c39. by the elements of the vector *note x: c39, of length N. The j-th column of *note A: c39. is multiplied by ‘x[j]’. This is equivalent to forming A \rightarrow A X where X = \textrm{diag}(x). Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_scale_rows (gsl_spmatrix *A, const gsl_vector *x) This function scales the rows of the M-by-N sparse matrix *note A: c3a. by the elements of the vector *note x: c3a, of length M. The i-th row of *note A: c3a. is multiplied by ‘x[i]’. This is equivalent to forming A \rightarrow X A where X = \textrm{diag}(x). Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_add (gsl_spmatrix *c, const gsl_spmatrix *a, const gsl_spmatrix *b) This function computes the sum c = a + b. The three matrices must have the same dimensions. Input matrix formats supported: *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_dense_add (gsl_matrix *a, const gsl_spmatrix *b) This function adds the elements of the sparse matrix *note b: c3c. to the elements of the dense matrix *note a: c3c. The result a(i,j) \leftarrow a(i,j) + b(i,j) is stored in *note a: c3c. and *note b: c3c. remains unchanged. The two matrices must have the same dimensions. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_dense_sub (gsl_matrix *a, const gsl_spmatrix *b) This function subtracts the elements of the sparse matrix *note b: c3d. from the elements of the dense matrix *note a: c3d. The result a(i,j) \leftarrow a(i,j) - b(i,j) is stored in *note a: c3d. and *note b: c3d. remains unchanged. The two matrices must have the same dimensions. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  
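The following short program is a minimal sketch of how these operations fit together: it scales a matrix while it is still in COO format and then forms the sum of two matrices with ‘gsl_spmatrix_add()’, which requires compressed inputs.  It relies only on routines described in this chapter (‘gsl_spmatrix_compress()’, ‘gsl_spmatrix_alloc_nzmax()’, ‘gsl_spmatrix_nnz()’); the matrix sizes and values are arbitrary and chosen only for illustration.

     #include <stdio.h>
     #include <gsl/gsl_spmatrix.h>

     int
     main (void)
     {
       /* two small triplet (COO) matrices with a few non-zero entries */
       gsl_spmatrix *A = gsl_spmatrix_alloc(3, 3);
       gsl_spmatrix *B = gsl_spmatrix_alloc(3, 3);
       gsl_spmatrix *Ac, *Bc, *C;

       gsl_spmatrix_set(A, 0, 0, 2.0);
       gsl_spmatrix_set(A, 1, 2, -1.0);
       gsl_spmatrix_set(B, 0, 0, 3.0);
       gsl_spmatrix_set(B, 2, 1, 5.0);

       /* scale A by 10 while it is still in COO format */
       gsl_spmatrix_scale(A, 10.0);

       /* gsl_spmatrix_add() requires compressed inputs of the same format */
       Ac = gsl_spmatrix_compress(A, GSL_SPMATRIX_CSR);
       Bc = gsl_spmatrix_compress(B, GSL_SPMATRIX_CSR);

       /* allocate the result with enough room for all non-zeros of Ac and Bc */
       C = gsl_spmatrix_alloc_nzmax(3, 3,
                                    gsl_spmatrix_nnz(Ac) + gsl_spmatrix_nnz(Bc),
                                    GSL_SPMATRIX_CSR);

       gsl_spmatrix_add(C, Ac, Bc);                        /* C = Ac + Bc */

       printf("C(0,0) = %g\n", gsl_spmatrix_get(C, 0, 0)); /* 10*2 + 3 = 23 */

       gsl_spmatrix_free(A);
       gsl_spmatrix_free(B);
       gsl_spmatrix_free(Ac);
       gsl_spmatrix_free(Bc);
       gsl_spmatrix_free(C);

       return 0;
     }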
File: gsl-ref.info, Node: Matrix Properties, Next: Finding Maximum and Minimum Elements, Prev: Matrix Operations, Up: Sparse Matrices 43.11 Matrix Properties ======================= -- Function: const char *gsl_spmatrix_type (const gsl_spmatrix *m) This function returns a string describing the sparse storage format of the matrix *note m: c3f. For example: printf ("matrix is '%s' format.\n", gsl_spmatrix_type (m)); would print something like: matrix is 'CSR' format. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: size_t gsl_spmatrix_nnz (const gsl_spmatrix *m) This function returns the number of non-zero elements in *note m: c40. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_equal (const gsl_spmatrix *a, const gsl_spmatrix *b) This function returns 1 if the matrices *note a: c41. and *note b: c41. are equal (by comparison of element values) and 0 otherwise. The matrices *note a: c41. and *note b: c41. must be in the same sparse storage format for comparison. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: double gsl_spmatrix_norm1 (const gsl_spmatrix *A) This function returns the 1-norm of the m-by-n matrix *note A: c42, defined as the maximum column sum, ||A||_1 = \textrm{max}_{1 \le j \le n} \sum_{i=1}^m |A_{ij}| Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  File: gsl-ref.info, Node: Finding Maximum and Minimum Elements, Next: Compressed Format, Prev: Matrix Properties, Up: Sparse Matrices 43.12 Finding Maximum and Minimum Elements ========================================== -- Function: int gsl_spmatrix_minmax (const gsl_spmatrix *m, double *min_out, double *max_out) This function returns the minimum and maximum elements of the matrix *note m: c44, storing them in *note min_out: c44. and *note max_out: c44, and searching only the non-zero values. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c. -- Function: int gsl_spmatrix_min_index (const gsl_spmatrix *m, size_t *imin, size_t *jmin) This function returns the indices of the minimum value in the matrix *note m: c45, searching only the non-zero values, and storing them in *note imin: c45. and *note jmin: c45. When there are several equal minimum elements then the first element found is returned. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  File: gsl-ref.info, Node: Compressed Format, Next: Conversion Between Sparse and Dense Matrices, Prev: Finding Maximum and Minimum Elements, Up: Sparse Matrices 43.13 Compressed Format ======================= These routines calculate a compressed matrix from a coordinate representation. -- Function: int gsl_spmatrix_csc (gsl_spmatrix *dest, const gsl_spmatrix *src) This function creates a sparse matrix in *note compressed sparse column: c1a. format from the input sparse matrix *note src: c47. which must be in COO format. The compressed matrix is stored in *note dest: c47. Input matrix formats supported: *note COO: c18. -- Function: int gsl_spmatrix_csr (gsl_spmatrix *dest, const gsl_spmatrix *src) This function creates a sparse matrix in *note compressed sparse row: c1c. format from the input sparse matrix *note src: c48. which must be in COO format. The compressed matrix is stored in *note dest: c48. Input matrix formats supported: *note COO: c18. -- Function: *note gsl_spmatrix: c1e. 
*gsl_spmatrix_compress (const gsl_spmatrix *src, const int sptype) This function allocates a new sparse matrix, and stores *note src: c49. into it using the format specified by *note sptype: c49. The input *note sptype: c49. can be one of ‘GSL_SPMATRIX_COO’, ‘GSL_SPMATRIX_CSC’, or ‘GSL_SPMATRIX_CSR’. A pointer to the newly allocated matrix is returned, and must be freed by the caller when no longer needed.  File: gsl-ref.info, Node: Conversion Between Sparse and Dense Matrices, Next: Examples<34>, Prev: Compressed Format, Up: Sparse Matrices 43.14 Conversion Between Sparse and Dense Matrices ================================================== The *note gsl_spmatrix: c1e. structure can be converted into the dense *note gsl_matrix: 3a2. format and vice versa with the following routines. -- Function: int gsl_spmatrix_d2sp (gsl_spmatrix *S, const gsl_matrix *A) This function converts the dense matrix *note A: c4b. into sparse COO format and stores the result in *note S: c4b. Input matrix formats supported: *note COO: c18. -- Function: int gsl_spmatrix_sp2d (gsl_matrix *A, const gsl_spmatrix *S) This function converts the sparse matrix *note S: c4c. into a dense matrix and stores the result in *note A: c4c. Input matrix formats supported: *note COO: c18, *note CSC: c1a, *note CSR: c1c.  File: gsl-ref.info, Node: Examples<34>, Next: References and Further Reading<36>, Prev: Conversion Between Sparse and Dense Matrices, Up: Sparse Matrices 43.15 Examples ============== The following example program builds a 5-by-4 sparse matrix and prints it in coordinate, compressed column, and compressed row format. The matrix which is constructed is [ 0 0 3.1 4.6 ] [ 1 0 7.2 0 ] [ 0 0 0 0 ] [ 2.1 2.9 0 8.5 ] [ 4.1 0 0 0 ] The output of the program is: printing all matrix elements: A(0,0) = 0 A(0,1) = 0 A(0,2) = 3.1 A(0,3) = 4.6 A(1,0) = 1 . . . A(4,0) = 4.1 A(4,1) = 0 A(4,2) = 0 A(4,3) = 0 matrix in triplet format (i,j,Aij): (0, 2, 3.1) (0, 3, 4.6) (1, 0, 1.0) (1, 2, 7.2) (3, 0, 2.1) (3, 1, 2.9) (3, 3, 8.5) (4, 0, 4.1) matrix in compressed column format: i = [ 1, 3, 4, 3, 0, 1, 0, 3, ] p = [ 0, 3, 4, 6, 8, ] d = [ 1, 2.1, 4.1, 2.9, 3.1, 7.2, 4.6, 8.5, ] matrix in compressed row format: i = [ 2, 3, 0, 2, 0, 1, 3, 0, ] p = [ 0, 2, 4, 4, 7, 8, ] d = [ 3.1, 4.6, 1, 7.2, 2.1, 2.9, 8.5, 4.1, ] We see in the compressed column output, the data array stores each column contiguously, the array i stores the row index of the corresponding data element, and the array p stores the index of the start of each column in the data array. Similarly, for the compressed row output, the data array stores each row contiguously, the array i stores the column index of the corresponding data element, and the p array stores the index of the start of each row in the data array. 
     #include <stdio.h>
     #include <stdlib.h>
     #include <gsl/gsl_spmatrix.h>

     int
     main()
     {
       gsl_spmatrix *A = gsl_spmatrix_alloc(5, 4); /* triplet format */
       gsl_spmatrix *B, *C;
       size_t i, j;

       /* build the sparse matrix */
       gsl_spmatrix_set(A, 0, 2, 3.1);
       gsl_spmatrix_set(A, 0, 3, 4.6);
       gsl_spmatrix_set(A, 1, 0, 1.0);
       gsl_spmatrix_set(A, 1, 2, 7.2);
       gsl_spmatrix_set(A, 3, 0, 2.1);
       gsl_spmatrix_set(A, 3, 1, 2.9);
       gsl_spmatrix_set(A, 3, 3, 8.5);
       gsl_spmatrix_set(A, 4, 0, 4.1);

       printf("printing all matrix elements:\n");
       for (i = 0; i < 5; ++i)
         for (j = 0; j < 4; ++j)
           printf("A(%zu,%zu) = %g\n", i, j,
                  gsl_spmatrix_get(A, i, j));

       /* print out elements in triplet format */
       printf("matrix in triplet format (i,j,Aij):\n");
       gsl_spmatrix_fprintf(stdout, A, "%.1f");

       /* convert to compressed column format */
       B = gsl_spmatrix_ccs(A);

       printf("matrix in compressed column format:\n");
       printf("i = [ ");
       for (i = 0; i < B->nz; ++i)
         printf("%d, ", B->i[i]);
       printf("]\n");

       printf("p = [ ");
       for (i = 0; i < B->size2 + 1; ++i)
         printf("%d, ", B->p[i]);
       printf("]\n");

       printf("d = [ ");
       for (i = 0; i < B->nz; ++i)
         printf("%g, ", B->data[i]);
       printf("]\n");

       /* convert to compressed row format */
       C = gsl_spmatrix_crs(A);

       printf("matrix in compressed row format:\n");
       printf("i = [ ");
       for (i = 0; i < C->nz; ++i)
         printf("%d, ", C->i[i]);
       printf("]\n");

       printf("p = [ ");
       for (i = 0; i < C->size1 + 1; ++i)
         printf("%d, ", C->p[i]);
       printf("]\n");

       printf("d = [ ");
       for (i = 0; i < C->nz; ++i)
         printf("%g, ", C->data[i]);
       printf("]\n");

       gsl_spmatrix_free(A);
       gsl_spmatrix_free(B);
       gsl_spmatrix_free(C);

       return 0;
     }


File: gsl-ref.info,  Node: References and Further Reading<36>,  Prev: Examples<34>,  Up: Sparse Matrices

43.16 References and Further Reading
====================================

The algorithms used by these functions are described in the following sources,

   * Davis, T. A., Direct Methods for Sparse Linear Systems, SIAM, 2006.

   * CSparse software library, ‘https://www.cise.ufl.edu/research/sparse/CSparse’


File: gsl-ref.info,  Node: Sparse BLAS Support,  Next: Sparse Linear Algebra,  Prev: Sparse Matrices,  Up: Top

44 Sparse BLAS Support
**********************

The Sparse Basic Linear Algebra Subprograms (BLAS) define a set of fundamental operations on vectors and sparse matrices which can be used to create optimized higher-level linear algebra functionality.  GSL supports a limited number of BLAS operations for sparse matrices.  The header file ‘gsl_spblas.h’ contains the prototypes for the sparse BLAS functions and related declarations.

* Menu:

* Sparse BLAS operations::
* References and Further Reading: References and Further Reading<37>.


File: gsl-ref.info,  Node: Sparse BLAS operations,  Next: References and Further Reading<37>,  Up: Sparse BLAS Support

44.1 Sparse BLAS operations
===========================

 -- Function: int gsl_spblas_dgemv (const CBLAS_TRANSPOSE_t TransA, const double alpha, const gsl_spmatrix *A, const gsl_vector *x, const double beta, gsl_vector *y)
     This function computes the matrix-vector product and sum y \leftarrow \alpha op(A) x + \beta y, where op(A) = A, A^T for *note TransA: c52. = ‘CblasNoTrans’, ‘CblasTrans’.  In-place computations are not supported, so *note x: c52. and *note y: c52. must be distinct vectors.  The matrix *note A: c52. may be in triplet or compressed format.

 -- Function: int gsl_spblas_dgemm (const double alpha, const gsl_spmatrix *A, const gsl_spmatrix *B, gsl_spmatrix *C)
     This function computes the sparse matrix-matrix product C = \alpha A B.  The matrices must be in compressed format.
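As a minimal sketch of calling ‘gsl_spblas_dgemv()’, the following program multiplies a small triplet-format matrix by a vector.  It assumes that ‘CblasNoTrans’ is made available through ‘gsl_blas.h’ (which declares the CBLAS enumerations); the matrix and vector values are arbitrary and chosen only for illustration.

     #include <stdio.h>
     #include <gsl/gsl_blas.h>      /* for CblasNoTrans */
     #include <gsl/gsl_vector.h>
     #include <gsl/gsl_spmatrix.h>
     #include <gsl/gsl_spblas.h>

     int
     main (void)
     {
       gsl_spmatrix *A = gsl_spmatrix_alloc(2, 2); /* triplet format */
       gsl_vector *x = gsl_vector_alloc(2);
       gsl_vector *y = gsl_vector_calloc(2);       /* y initialized to 0 */

       /* A = [ 2 1 ; 0 3 ] */
       gsl_spmatrix_set(A, 0, 0, 2.0);
       gsl_spmatrix_set(A, 0, 1, 1.0);
       gsl_spmatrix_set(A, 1, 1, 3.0);

       gsl_vector_set(x, 0, 1.0);
       gsl_vector_set(x, 1, 2.0);

       /* y <- 1.0 * A x + 0.0 * y */
       gsl_spblas_dgemv(CblasNoTrans, 1.0, A, x, 0.0, y);

       printf("y = (%g, %g)\n",
              gsl_vector_get(y, 0), gsl_vector_get(y, 1)); /* (4, 6) */

       gsl_spmatrix_free(A);
       gsl_vector_free(x);
       gsl_vector_free(y);

       return 0;
     }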
File: gsl-ref.info, Node: References and Further Reading<37>, Prev: Sparse BLAS operations, Up: Sparse BLAS Support 44.2 References and Further Reading =================================== The algorithms used by these functions are described in the following sources: * Davis, T. A., Direct Methods for Sparse Linear Systems, SIAM, 2006. * CSparse software library, ‘https://www.cise.ufl.edu/research/sparse/CSparse’  File: gsl-ref.info, Node: Sparse Linear Algebra, Next: Physical Constants, Prev: Sparse BLAS Support, Up: Top 45 Sparse Linear Algebra ************************ This chapter describes functions for solving sparse linear systems of equations. The library provides linear algebra routines which operate directly on the *note gsl_spmatrix: c1e. and *note gsl_vector: 35f. objects. The functions described in this chapter are declared in the header file ‘gsl_splinalg.h’. * Menu: * Overview: Overview<9>. * Sparse Iterative Solvers:: * Examples: Examples<35>. * References and Further Reading: References and Further Reading<38>.  File: gsl-ref.info, Node: Overview<9>, Next: Sparse Iterative Solvers, Up: Sparse Linear Algebra 45.1 Overview ============= This chapter is primarily concerned with the solution of the linear system A x = b where A is a general square n-by-n non-singular sparse matrix, x is an unknown n-by-1 vector, and b is a given n-by-1 right hand side vector. There exist many methods for solving such sparse linear systems, which broadly fall into either direct or iterative categories. Direct methods include LU and QR decompositions, while iterative methods start with an initial guess for the vector x and update the guess through iteration until convergence. GSL does not currently provide any direct sparse solvers.  File: gsl-ref.info, Node: Sparse Iterative Solvers, Next: Examples<35>, Prev: Overview<9>, Up: Sparse Linear Algebra 45.2 Sparse Iterative Solvers ============================= * Menu: * Overview: Overview<10>. * Types of Sparse Iterative Solvers:: * Iterating the Sparse Linear System::  File: gsl-ref.info, Node: Overview<10>, Next: Types of Sparse Iterative Solvers, Up: Sparse Iterative Solvers 45.2.1 Overview --------------- Many practical iterative methods of solving large n-by-n sparse linear systems involve projecting an approximate solution for ‘x’ onto a subspace of {\bf R}^n. If we define a m-dimensional subspace {\cal K} as the subspace of approximations to the solution ‘x’, then m constraints must be imposed to determine the next approximation. These m constraints define another m-dimensional subspace denoted by {\cal L}. The subspace dimension m is typically chosen to be much smaller than n in order to reduce the computational effort needed to generate the next approximate solution vector. The many iterative algorithms which exist differ mainly in their choice of {\cal K} and {\cal L}.  File: gsl-ref.info, Node: Types of Sparse Iterative Solvers, Next: Iterating the Sparse Linear System, Prev: Overview<10>, Up: Sparse Iterative Solvers 45.2.2 Types of Sparse Iterative Solvers ---------------------------------------- The sparse linear algebra library provides the following types of iterative solvers: -- Type: gsl_splinalg_itersolve_type -- Variable: *note gsl_splinalg_itersolve_type: c5b. *gsl_splinalg_itersolve_gmres This specifies the Generalized Minimum Residual Method (GMRES). 
This is a projection method using {\cal K} = {\cal K}_m and {\cal L} = A {\cal K}_m where {\cal K}_m is the m-th Krylov subspace K_m = span( r_0, A r_0, ..., A^(m-1) r_0) and r_0 = b - A x_0 is the residual vector of the initial guess x_0. If m is set equal to n, then the Krylov subspace is {\bf R}^n and GMRES will provide the exact solution ‘x’. However, the goal is for the method to arrive at a very good approximation to ‘x’ using a much smaller subspace {\cal K}_m. By default, the GMRES method selects m = MIN(n,10) but the user may specify a different value for m. The GMRES storage requirements grow as O(n(m+1)) and the number of flops grow as O(4 m^2 n - 4 m^3 / 3). In the below function *note gsl_splinalg_itersolve_iterate(): c5d, one GMRES iteration is defined as projecting the approximate solution vector onto each Krylov subspace {\cal K}_1, ..., {\cal K}_m, and so m can be kept smaller by “restarting” the method and calling *note gsl_splinalg_itersolve_iterate(): c5d. multiple times, providing the updated approximation ‘x’ to each new call. If the method is not adequately converging, the user may try increasing the parameter m. GMRES is considered a robust general purpose iterative solver, however there are cases where the method stagnates if the matrix is not positive-definite and fails to reduce the residual until the very last projection onto the subspace {\cal K}_n = {\bf R}^n. In these cases, preconditioning the linear system can help, but GSL does not currently provide any preconditioners.  File: gsl-ref.info, Node: Iterating the Sparse Linear System, Prev: Types of Sparse Iterative Solvers, Up: Sparse Iterative Solvers 45.2.3 Iterating the Sparse Linear System ----------------------------------------- The following functions are provided to allocate storage for the sparse linear solvers and iterate the system to a solution. -- Function: gsl_splinalg_itersolve *gsl_splinalg_itersolve_alloc (const gsl_splinalg_itersolve_type *T, const size_t n, const size_t m) This function allocates a workspace for the iterative solution of *note n: c5f.-by-*note n: c5f. sparse matrix systems. The iterative solver type is specified by *note T: c5f. The argument *note m: c5f. specifies the size of the solution candidate subspace {\cal K}_m. The dimension *note m: c5f. may be set to 0 in which case a reasonable default value is used. -- Function: void gsl_splinalg_itersolve_free (gsl_splinalg_itersolve *w) This function frees the memory associated with the workspace *note w: c60. -- Function: const char *gsl_splinalg_itersolve_name (const gsl_splinalg_itersolve *w) This function returns a string pointer to the name of the solver. -- Function: int gsl_splinalg_itersolve_iterate (const gsl_spmatrix *A, const gsl_vector *b, const double tol, gsl_vector *x, gsl_splinalg_itersolve *w) This function performs one iteration of the iterative method for the sparse linear system specfied by the matrix *note A: c5d, right hand side vector *note b: c5d. and solution vector *note x: c5d. On input, *note x: c5d. must be set to an initial guess for the solution. On output, *note x: c5d. is updated to give the current solution estimate. The parameter *note tol: c5d. specifies the relative tolerance between the residual norm and norm of *note b: c5d. in order to check for convergence. When the following condition is satisfied: || A x - b || <= tol * || b || the method has converged, the function returns ‘GSL_SUCCESS’ and the final solution is provided in *note x: c5d. 
Otherwise, the function returns ‘GSL_CONTINUE’ to signal that more iterations are required.  Here, || \cdot || represents the Euclidean norm.  The input matrix *note A: c5d. may be in triplet or compressed format.

 -- Function: double gsl_splinalg_itersolve_normr (const gsl_splinalg_itersolve *w)
     This function returns the current residual norm ||r|| = ||A x - b||, which is updated after each call to *note gsl_splinalg_itersolve_iterate(): c5d.


File: gsl-ref.info,  Node: Examples<35>,  Next: References and Further Reading<38>,  Prev: Sparse Iterative Solvers,  Up: Sparse Linear Algebra

45.3 Examples
=============

This example program demonstrates the sparse linear algebra routines on the solution of a simple 1D Poisson equation on [0,1]:

     u''(x) = f(x) = -\pi^2 \sin(\pi x)

with boundary conditions u(0) = u(1) = 0.  The analytic solution of this simple problem is u(x) = \sin(\pi x).  We will solve this problem by finite differencing the left hand side to give

     1/h^2 ( u_{i+1} - 2 u_i + u_{i-1} ) = f_i

defining a grid of N points with spacing h = 1/(N-1).  In the finite difference equation above, u_0 = u_{N-1} = 0 are known from the boundary conditions, so we will only put the equations for i = 1, ..., N-2 into the matrix system.  The resulting (N-2)-by-(N-2) tridiagonal matrix equation is

     1/h^2 [ -2  1  0 ...  0  0 ] [ u_1     ]   [ f_1     ]
           [  1 -2  1 ...  0  0 ] [ u_2     ]   [ f_2     ]
           [  ...   ...    ...  ] [  ...    ] = [  ...    ]
           [  0  0 ... 1  -2  1 ] [ u_{N-3} ]   [ f_{N-3} ]
           [  0  0 ... 0   1 -2 ] [ u_{N-2} ]   [ f_{N-2} ]

An example program which constructs and solves this system is given below.  The system is solved using the iterative GMRES solver.  Here is the output of the program:

     iter 0 residual = 4.297275996844e-11
     Converged

showing that the method converged in a single iteration.  The calculated solution is shown in the figure below.

[Figure: Solution of PDE]

     #include <stdio.h>
     #include <stdlib.h>
     #include <math.h>

     #include <gsl/gsl_math.h>
     #include <gsl/gsl_vector.h>
     #include <gsl/gsl_spmatrix.h>
     #include <gsl/gsl_splinalg.h>

     int
     main()
     {
       const size_t N = 100;                       /* number of grid points */
       const size_t n = N - 2;                     /* subtract 2 to exclude boundaries */
       const double h = 1.0 / (N - 1.0);           /* grid spacing */
       gsl_spmatrix *A = gsl_spmatrix_alloc(n, n); /* triplet format */
       gsl_spmatrix *C;                            /* compressed format */
       gsl_vector *f = gsl_vector_alloc(n);        /* right hand side vector */
       gsl_vector *u = gsl_vector_alloc(n);        /* solution vector */
       size_t i;

       /* construct the sparse matrix for the finite difference equation */

       /* construct first row */
       gsl_spmatrix_set(A, 0, 0, -2.0);
       gsl_spmatrix_set(A, 0, 1, 1.0);

       /* construct rows [1:n-2] */
       for (i = 1; i < n - 1; ++i)
         {
           gsl_spmatrix_set(A, i, i + 1, 1.0);
           gsl_spmatrix_set(A, i, i, -2.0);
           gsl_spmatrix_set(A, i, i - 1, 1.0);
         }

       /* construct last row */
       gsl_spmatrix_set(A, n - 1, n - 1, -2.0);
       gsl_spmatrix_set(A, n - 1, n - 2, 1.0);

       /* scale by h^2 */
       gsl_spmatrix_scale(A, 1.0 / (h * h));

       /* construct right hand side vector */
       for (i = 0; i < n; ++i)
         {
           double xi = (i + 1) * h;
           double fi = -M_PI * M_PI * sin(M_PI * xi);
           gsl_vector_set(f, i, fi);
         }

       /* convert to compressed column format */
       C = gsl_spmatrix_ccs(A);

       /* now solve the system with the GMRES iterative solver */
       {
         const double tol = 1.0e-6;  /* solution relative tolerance */
         const size_t max_iter = 10; /* maximum iterations */
         const gsl_splinalg_itersolve_type *T = gsl_splinalg_itersolve_gmres;
         gsl_splinalg_itersolve *work = gsl_splinalg_itersolve_alloc(T, n, 0);
         size_t iter = 0;
         double residual;
         int status;

         /* initial guess u = 0 */
         gsl_vector_set_zero(u);

         /* solve the system A u = f */
         do
           {
             status = gsl_splinalg_itersolve_iterate(C, f, tol, u, work);

             /* print out residual norm ||A*u - f|| */
             residual = gsl_splinalg_itersolve_normr(work);
             fprintf(stderr, "iter %zu residual = %.12e\n", iter, residual);

             if (status == GSL_SUCCESS)
               fprintf(stderr,
"Converged\n"); } while (status == GSL_CONTINUE && ++iter < max_iter); /* output solution */ for (i = 0; i < n; ++i) { double xi = (i + 1) * h; double u_exact = sin(M_PI * xi); double u_gsl = gsl_vector_get(u, i); printf("%f %.12e %.12e\n", xi, u_gsl, u_exact); } gsl_splinalg_itersolve_free(work); } gsl_spmatrix_free(A); gsl_spmatrix_free(C); gsl_vector_free(f); gsl_vector_free(u); return 0; } /* main() */  File: gsl-ref.info, Node: References and Further Reading<38>, Prev: Examples<35>, Up: Sparse Linear Algebra 45.4 References and Further Reading =================================== The implementation of the GMRES iterative solver closely follows the publications * H. F. Walker, Implementation of the GMRES method using Householder transformations, SIAM J. Sci. Stat. Comput. 9(1), 1988. * Y. Saad, Iterative methods for sparse linear systems, 2nd edition, SIAM, 2003.  File: gsl-ref.info, Node: Physical Constants, Next: IEEE floating-point arithmetic, Prev: Sparse Linear Algebra, Up: Top 46 Physical Constants ********************* This chapter describes macros for the values of physical constants, such as the speed of light, c, and gravitational constant, G. The values are available in different unit systems, including the standard MKSA system (meters, kilograms, seconds, amperes) and the CGSM system (centimeters, grams, seconds, gauss), which is commonly used in Astronomy. The definitions of constants in the MKSA system are available in the file ‘gsl_const_mksa.h’. The constants in the CGSM system are defined in ‘gsl_const_cgsm.h’. Dimensionless constants, such as the fine structure constant, which are pure numbers are defined in ‘gsl_const_num.h’. The full list of constants is described briefly below. Consult the header files themselves for the values of the constants used in the library. * Menu: * Fundamental Constants:: * Astronomy and Astrophysics:: * Atomic and Nuclear Physics:: * Measurement of Time:: * Imperial Units:: * Speed and Nautical Units:: * Printers Units:: * Volume, Area and Length: Volume Area and Length. * Mass and Weight:: * Thermal Energy and Power:: * Pressure:: * Viscosity:: * Light and Illumination:: * Radioactivity:: * Force and Energy:: * Prefixes:: * Examples: Examples<36>. * References and Further Reading: References and Further Reading<39>.  File: gsl-ref.info, Node: Fundamental Constants, Next: Astronomy and Astrophysics, Up: Physical Constants 46.1 Fundamental Constants ========================== -- Macro: GSL_CONST_MKSA_SPEED_OF_LIGHT The speed of light in vacuum, c. -- Macro: GSL_CONST_MKSA_VACUUM_PERMEABILITY The permeability of free space, \mu_0. This constant is defined in the MKSA system only. -- Macro: GSL_CONST_MKSA_VACUUM_PERMITTIVITY The permittivity of free space, \epsilon_0. This constant is defined in the MKSA system only. -- Macro: GSL_CONST_MKSA_PLANCKS_CONSTANT_H Planck’s constant, h. -- Macro: GSL_CONST_MKSA_PLANCKS_CONSTANT_HBAR Planck’s constant divided by 2\pi, \hbar. -- Macro: GSL_CONST_NUM_AVOGADRO Avogadro’s number, N_a. -- Macro: GSL_CONST_MKSA_FARADAY The molar charge of 1 Faraday. -- Macro: GSL_CONST_MKSA_BOLTZMANN The Boltzmann constant, k. -- Macro: GSL_CONST_MKSA_MOLAR_GAS The molar gas constant, R_0. -- Macro: GSL_CONST_MKSA_STANDARD_GAS_VOLUME The standard gas volume, V_0. -- Macro: GSL_CONST_MKSA_STEFAN_BOLTZMANN_CONSTANT The Stefan-Boltzmann radiation constant, \sigma. -- Macro: GSL_CONST_MKSA_GAUSS The magnetic field of 1 Gauss.  
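Each of these macros expands to the value of the constant in the indicated unit system, so the constants can be combined directly in ordinary arithmetic.  As a minimal sketch (using only ‘GSL_CONST_MKSA_BOLTZMANN’ from the list above and ‘GSL_CONST_MKSA_ELECTRON_VOLT’ from the Atomic and Nuclear Physics section below), the following program expresses the thermal energy kT at 300 K in electron volts:

     #include <stdio.h>
     #include <gsl/gsl_const_mksa.h>

     int
     main (void)
     {
       double k  = GSL_CONST_MKSA_BOLTZMANN;     /* J / K */
       double eV = GSL_CONST_MKSA_ELECTRON_VOLT; /* J */
       double T  = 300.0;                        /* temperature in K */

       /* thermal energy k T, converted from joules to electron volts */
       printf("kT at %g K = %.4f eV\n", T, k * T / eV);

       return 0;
     }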
File: gsl-ref.info, Node: Astronomy and Astrophysics, Next: Atomic and Nuclear Physics, Prev: Fundamental Constants, Up: Physical Constants 46.2 Astronomy and Astrophysics =============================== -- Macro: GSL_CONST_MKSA_ASTRONOMICAL_UNIT The length of 1 astronomical unit (mean earth-sun distance), au. -- Macro: GSL_CONST_MKSA_GRAVITATIONAL_CONSTANT The gravitational constant, G. -- Macro: GSL_CONST_MKSA_LIGHT_YEAR The distance of 1 light-year, ly. -- Macro: GSL_CONST_MKSA_PARSEC The distance of 1 parsec, pc. -- Macro: GSL_CONST_MKSA_GRAV_ACCEL The standard gravitational acceleration on Earth, g. -- Macro: GSL_CONST_MKSA_SOLAR_MASS The mass of the Sun.  File: gsl-ref.info, Node: Atomic and Nuclear Physics, Next: Measurement of Time, Prev: Astronomy and Astrophysics, Up: Physical Constants 46.3 Atomic and Nuclear Physics =============================== -- Macro: GSL_CONST_MKSA_ELECTRON_CHARGE The charge of the electron, e. -- Macro: GSL_CONST_MKSA_ELECTRON_VOLT The energy of 1 electron volt, eV. -- Macro: GSL_CONST_MKSA_UNIFIED_ATOMIC_MASS The unified atomic mass, amu. -- Macro: GSL_CONST_MKSA_MASS_ELECTRON The mass of the electron, m_e. -- Macro: GSL_CONST_MKSA_MASS_MUON The mass of the muon, m_\mu. -- Macro: GSL_CONST_MKSA_MASS_PROTON The mass of the proton, m_p. -- Macro: GSL_CONST_MKSA_MASS_NEUTRON The mass of the neutron, m_n. -- Macro: GSL_CONST_NUM_FINE_STRUCTURE The electromagnetic fine structure constant \alpha. -- Macro: GSL_CONST_MKSA_RYDBERG The Rydberg constant, Ry, in units of energy. This is related to the Rydberg inverse wavelength R_\infty by Ry = h c R_\infty. -- Macro: GSL_CONST_MKSA_BOHR_RADIUS The Bohr radius, a_0. -- Macro: GSL_CONST_MKSA_ANGSTROM The length of 1 angstrom. -- Macro: GSL_CONST_MKSA_BARN The area of 1 barn. -- Macro: GSL_CONST_MKSA_BOHR_MAGNETON The Bohr Magneton, \mu_B. -- Macro: GSL_CONST_MKSA_NUCLEAR_MAGNETON The Nuclear Magneton, \mu_N. -- Macro: GSL_CONST_MKSA_ELECTRON_MAGNETIC_MOMENT The absolute value of the magnetic moment of the electron, \mu_e. The physical magnetic moment of the electron is negative. -- Macro: GSL_CONST_MKSA_PROTON_MAGNETIC_MOMENT The magnetic moment of the proton, \mu_p. -- Macro: GSL_CONST_MKSA_THOMSON_CROSS_SECTION The Thomson cross section, \sigma_T. -- Macro: GSL_CONST_MKSA_DEBYE The electric dipole moment of 1 Debye, D.  File: gsl-ref.info, Node: Measurement of Time, Next: Imperial Units, Prev: Atomic and Nuclear Physics, Up: Physical Constants 46.4 Measurement of Time ======================== -- Macro: GSL_CONST_MKSA_MINUTE The number of seconds in 1 minute. -- Macro: GSL_CONST_MKSA_HOUR The number of seconds in 1 hour. -- Macro: GSL_CONST_MKSA_DAY The number of seconds in 1 day. -- Macro: GSL_CONST_MKSA_WEEK The number of seconds in 1 week.  File: gsl-ref.info, Node: Imperial Units, Next: Speed and Nautical Units, Prev: Measurement of Time, Up: Physical Constants 46.5 Imperial Units =================== -- Macro: GSL_CONST_MKSA_INCH The length of 1 inch. -- Macro: GSL_CONST_MKSA_FOOT The length of 1 foot. -- Macro: GSL_CONST_MKSA_YARD The length of 1 yard. -- Macro: GSL_CONST_MKSA_MILE The length of 1 mile. -- Macro: GSL_CONST_MKSA_MIL The length of 1 mil (1/1000th of an inch).  File: gsl-ref.info, Node: Speed and Nautical Units, Next: Printers Units, Prev: Imperial Units, Up: Physical Constants 46.6 Speed and Nautical Units ============================= -- Macro: GSL_CONST_MKSA_KILOMETERS_PER_HOUR The speed of 1 kilometer per hour. -- Macro: GSL_CONST_MKSA_MILES_PER_HOUR The speed of 1 mile per hour. 
-- Macro: GSL_CONST_MKSA_NAUTICAL_MILE The length of 1 nautical mile. -- Macro: GSL_CONST_MKSA_FATHOM The length of 1 fathom. -- Macro: GSL_CONST_MKSA_KNOT The speed of 1 knot.  File: gsl-ref.info, Node: Printers Units, Next: Volume Area and Length, Prev: Speed and Nautical Units, Up: Physical Constants 46.7 Printers Units =================== -- Macro: GSL_CONST_MKSA_POINT The length of 1 printer’s point (1/72 inch). -- Macro: GSL_CONST_MKSA_TEXPOINT The length of 1 TeX point (1/72.27 inch).  File: gsl-ref.info, Node: Volume Area and Length, Next: Mass and Weight, Prev: Printers Units, Up: Physical Constants 46.8 Volume, Area and Length ============================ -- Macro: GSL_CONST_MKSA_MICRON The length of 1 micron. -- Macro: GSL_CONST_MKSA_HECTARE The area of 1 hectare. -- Macro: GSL_CONST_MKSA_ACRE The area of 1 acre. -- Macro: GSL_CONST_MKSA_LITER The volume of 1 liter. -- Macro: GSL_CONST_MKSA_US_GALLON The volume of 1 US gallon. -- Macro: GSL_CONST_MKSA_CANADIAN_GALLON The volume of 1 Canadian gallon. -- Macro: GSL_CONST_MKSA_UK_GALLON The volume of 1 UK gallon. -- Macro: GSL_CONST_MKSA_QUART The volume of 1 quart. -- Macro: GSL_CONST_MKSA_PINT The volume of 1 pint.  File: gsl-ref.info, Node: Mass and Weight, Next: Thermal Energy and Power, Prev: Volume Area and Length, Up: Physical Constants 46.9 Mass and Weight ==================== -- Macro: GSL_CONST_MKSA_POUND_MASS The mass of 1 pound. -- Macro: GSL_CONST_MKSA_OUNCE_MASS The mass of 1 ounce. -- Macro: GSL_CONST_MKSA_TON The mass of 1 ton. -- Macro: GSL_CONST_MKSA_METRIC_TON The mass of 1 metric ton (1000 kg). -- Macro: GSL_CONST_MKSA_UK_TON The mass of 1 UK ton. -- Macro: GSL_CONST_MKSA_TROY_OUNCE The mass of 1 troy ounce. -- Macro: GSL_CONST_MKSA_CARAT The mass of 1 carat. -- Macro: GSL_CONST_MKSA_GRAM_FORCE The force of 1 gram weight. -- Macro: GSL_CONST_MKSA_POUND_FORCE The force of 1 pound weight. -- Macro: GSL_CONST_MKSA_KILOPOUND_FORCE The force of 1 kilopound weight. -- Macro: GSL_CONST_MKSA_POUNDAL The force of 1 poundal.  File: gsl-ref.info, Node: Thermal Energy and Power, Next: Pressure, Prev: Mass and Weight, Up: Physical Constants 46.10 Thermal Energy and Power ============================== -- Macro: GSL_CONST_MKSA_CALORIE The energy of 1 calorie. -- Macro: GSL_CONST_MKSA_BTU The energy of 1 British Thermal Unit, btu. -- Macro: GSL_CONST_MKSA_THERM The energy of 1 Therm. -- Macro: GSL_CONST_MKSA_HORSEPOWER The power of 1 horsepower.  File: gsl-ref.info, Node: Pressure, Next: Viscosity, Prev: Thermal Energy and Power, Up: Physical Constants 46.11 Pressure ============== -- Macro: GSL_CONST_MKSA_BAR The pressure of 1 bar. -- Macro: GSL_CONST_MKSA_STD_ATMOSPHERE The pressure of 1 standard atmosphere. -- Macro: GSL_CONST_MKSA_TORR The pressure of 1 torr. -- Macro: GSL_CONST_MKSA_METER_OF_MERCURY The pressure of 1 meter of mercury. -- Macro: GSL_CONST_MKSA_INCH_OF_MERCURY The pressure of 1 inch of mercury. -- Macro: GSL_CONST_MKSA_INCH_OF_WATER The pressure of 1 inch of water. -- Macro: GSL_CONST_MKSA_PSI The pressure of 1 pound per square inch.  File: gsl-ref.info, Node: Viscosity, Next: Light and Illumination, Prev: Pressure, Up: Physical Constants 46.12 Viscosity =============== -- Macro: GSL_CONST_MKSA_POISE The dynamic viscosity of 1 poise. -- Macro: GSL_CONST_MKSA_STOKES The kinematic viscosity of 1 stokes.  
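The unit macros in the preceding sections are conversion factors into MKSA units, so converting a quantity between two non-SI units amounts to multiplying by one factor and dividing by another.  The following minimal sketch (the input value is arbitrary and chosen only for illustration) converts a pressure given in pounds per square inch into bars and standard atmospheres:

     #include <stdio.h>
     #include <gsl/gsl_const_mksa.h>

     int
     main (void)
     {
       double p_psi = 14.7;                                 /* pressure in psi */
       double p_pa  = p_psi * GSL_CONST_MKSA_PSI;           /* convert to pascals (MKSA) */
       double p_bar = p_pa / GSL_CONST_MKSA_BAR;            /* express in bar */
       double p_atm = p_pa / GSL_CONST_MKSA_STD_ATMOSPHERE; /* express in atmospheres */

       printf("%.1f psi = %.4f bar = %.4f atm\n", p_psi, p_bar, p_atm);

       return 0;
     }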
File: gsl-ref.info, Node: Light and Illumination, Next: Radioactivity, Prev: Viscosity, Up: Physical Constants 46.13 Light and Illumination ============================ -- Macro: GSL_CONST_MKSA_STILB The luminance of 1 stilb. -- Macro: GSL_CONST_MKSA_LUMEN The luminous flux of 1 lumen. -- Macro: GSL_CONST_MKSA_LUX The illuminance of 1 lux. -- Macro: GSL_CONST_MKSA_PHOT The illuminance of 1 phot. -- Macro: GSL_CONST_MKSA_FOOTCANDLE The illuminance of 1 footcandle. -- Macro: GSL_CONST_MKSA_LAMBERT The luminance of 1 lambert. -- Macro: GSL_CONST_MKSA_FOOTLAMBERT The luminance of 1 footlambert.  File: gsl-ref.info, Node: Radioactivity, Next: Force and Energy, Prev: Light and Illumination, Up: Physical Constants 46.14 Radioactivity =================== -- Macro: GSL_CONST_MKSA_CURIE The activity of 1 curie. -- Macro: GSL_CONST_MKSA_ROENTGEN The exposure of 1 roentgen. -- Macro: GSL_CONST_MKSA_RAD The absorbed dose of 1 rad.  File: gsl-ref.info, Node: Force and Energy, Next: Prefixes, Prev: Radioactivity, Up: Physical Constants 46.15 Force and Energy ====================== -- Macro: GSL_CONST_MKSA_NEWTON The SI unit of force, 1 Newton. -- Macro: GSL_CONST_MKSA_DYNE The force of 1 Dyne = 10^{-5} Newton. -- Macro: GSL_CONST_MKSA_JOULE The SI unit of energy, 1 Joule. -- Macro: GSL_CONST_MKSA_ERG The energy 1 erg = 10^{-7} Joule.  File: gsl-ref.info, Node: Prefixes, Next: Examples<36>, Prev: Force and Energy, Up: Physical Constants 46.16 Prefixes ============== These constants are dimensionless scaling factors. -- Macro: GSL_CONST_NUM_YOTTA 10^{24} -- Macro: GSL_CONST_NUM_ZETTA 10^{21} -- Macro: GSL_CONST_NUM_EXA 10^{18} -- Macro: GSL_CONST_NUM_PETA 10^{15} -- Macro: GSL_CONST_NUM_TERA 10^{12} -- Macro: GSL_CONST_NUM_GIGA 10^9 -- Macro: GSL_CONST_NUM_MEGA 10^6 -- Macro: GSL_CONST_NUM_KILO 10^3 -- Macro: GSL_CONST_NUM_MILLI 10^{-3} -- Macro: GSL_CONST_NUM_MICRO 10^{-6} -- Macro: GSL_CONST_NUM_NANO 10^{-9} -- Macro: GSL_CONST_NUM_PICO 10^{-12} -- Macro: GSL_CONST_NUM_FEMTO 10^{-15} -- Macro: GSL_CONST_NUM_ATTO 10^{-18} -- Macro: GSL_CONST_NUM_ZEPTO 10^{-21} -- Macro: GSL_CONST_NUM_YOCTO 10^{-24}  File: gsl-ref.info, Node: Examples<36>, Next: References and Further Reading<39>, Prev: Prefixes, Up: Physical Constants 46.17 Examples ============== The following program demonstrates the use of the physical constants in a calculation. In this case, the goal is to calculate the range of light-travel times from Earth to Mars. The required data is the average distance of each planet from the Sun in astronomical units (the eccentricities and inclinations of the orbits will be neglected for the purposes of this calculation). The average radius of the orbit of Mars is 1.52 astronomical units, and for the orbit of Earth it is 1 astronomical unit (by definition). These values are combined with the MKSA values of the constants for the speed of light and the length of an astronomical unit to produce a result for the shortest and longest light-travel times in seconds. The figures are converted into minutes before being displayed. 
     #include <stdio.h>
     #include <gsl/gsl_const_mksa.h>

     int
     main (void)
     {
       double c  = GSL_CONST_MKSA_SPEED_OF_LIGHT;
       double au = GSL_CONST_MKSA_ASTRONOMICAL_UNIT;
       double minutes = GSL_CONST_MKSA_MINUTE;

       /* distance stored in meters */
       double r_earth = 1.00 * au;
       double r_mars  = 1.52 * au;

       double t_min, t_max;

       t_min = (r_mars - r_earth) / c;
       t_max = (r_mars + r_earth) / c;

       printf ("light travel time from Earth to Mars:\n");
       printf ("minimum = %.1f minutes\n", t_min / minutes);
       printf ("maximum = %.1f minutes\n", t_max / minutes);

       return 0;
     }

Here is the output from the program,

     light travel time from Earth to Mars:
     minimum = 4.3 minutes
     maximum = 21.0 minutes


File: gsl-ref.info,  Node: References and Further Reading<39>,  Prev: Examples<36>,  Up: Physical Constants

46.18 References and Further Reading
====================================

The authoritative sources for physical constants are the 2006 CODATA recommended values, published in the article below.  Further information on the values of physical constants is also available from the NIST website.

   * P.J. Mohr, B.N. Taylor, D.B. Newell, “CODATA Recommended Values of the Fundamental Physical Constants: 2006”, Reviews of Modern Physics, 80(2), pp. 633–730 (2008).

   * ‘http://www.physics.nist.gov/cuu/Constants/index.html’

   * ‘http://physics.nist.gov/Pubs/SP811/appenB9.html’


File: gsl-ref.info,  Node: IEEE floating-point arithmetic,  Next: Debugging Numerical Programs,  Prev: Physical Constants,  Up: Top

47 IEEE floating-point arithmetic
*********************************

This chapter describes functions for examining the representation of floating point numbers and controlling the floating point environment of your program.  The functions described in this chapter are declared in the header file ‘gsl_ieee_utils.h’.

* Menu:

* Representation of floating point numbers::
* Setting up your IEEE environment::
* References and Further Reading: References and Further Reading<40>.


File: gsl-ref.info,  Node: Representation of floating point numbers,  Next: Setting up your IEEE environment,  Up: IEEE floating-point arithmetic

47.1 Representation of floating point numbers
=============================================

The IEEE Standard for Binary Floating-Point Arithmetic defines binary formats for single and double precision numbers.  Each number is composed of three parts: a `sign bit' (s), an `exponent' (E) and a `fraction' (f).  The numerical value of the combination (s,E,f) is given by the following formula,

     (-1)^s (1.fffff...) 2^E

The sign bit is either zero or one.  The exponent ranges from a minimum value E_{min} to a maximum value E_{max} depending on the precision.  The exponent is converted to an unsigned number e, known as the `biased exponent', for storage by adding a `bias' parameter,

     e = E + bias

The sequence fffff... represents the digits of the binary fraction f.  The binary digits are stored in `normalized form', by adjusting the exponent to give a leading digit of 1.  Since the leading digit is always 1 for normalized numbers it is assumed implicitly and does not have to be stored.  Numbers smaller than 2^{E_{min}} are stored in `denormalized form' with a leading zero,

     (-1)^s (0.fffff...) 2^(E_min)

This allows gradual underflow down to 2^{E_{min} - p} for p bits of precision.  A zero is encoded with the special exponent of 2^{E_{min}-1} and infinities with the exponent of 2^{E_{max}+1}.
The format for single precision numbers uses 32 bits divided in the following way:

     seeeeeeeefffffffffffffffffffffff

     s = sign bit, 1 bit
     e = exponent, 8 bits  (E_min=-126, E_max=127, bias=127)
     f = fraction, 23 bits

The format for double precision numbers uses 64 bits divided in the following way:

     seeeeeeeeeeeffffffffffffffffffffffffffffffffffffffffffffffffffff

     s = sign bit, 1 bit
     e = exponent, 11 bits  (E_min=-1022, E_max=1023, bias=1023)
     f = fraction, 52 bits

It is often useful to be able to investigate the behavior of a calculation at the bit-level and the library provides functions for printing the IEEE representations in a human-readable form.

 -- Function: void gsl_ieee_fprintf_float (FILE *stream, const float *x)
 -- Function: void gsl_ieee_fprintf_double (FILE *stream, const double *x)
     These functions output a formatted version of the IEEE floating-point number pointed to by *note x: cf2. to the stream *note stream: cf2.  A pointer is used to pass the number indirectly, to avoid any undesired promotion from ‘float’ to ‘double’.  The output takes one of the following forms,

     ‘NaN’
          the Not-a-Number symbol
     ‘Inf, -Inf’
          positive or negative infinity
     ‘1.fffff...*2^E, -1.fffff...*2^E’
          a normalized floating point number
     ‘0.fffff...*2^E, -0.fffff...*2^E’
          a denormalized floating point number
     ‘0, -0’
          positive or negative zero

     The output can be used directly in GNU Emacs Calc mode by preceding it with ‘2#’ to indicate binary.

 -- Function: void gsl_ieee_printf_float (const float *x)
 -- Function: void gsl_ieee_printf_double (const double *x)
     These functions output a formatted version of the IEEE floating-point number pointed to by *note x: cf4. to the stream ‘stdout’.

The following program demonstrates the use of the functions by printing the single and double precision representations of the fraction 1/3.  For comparison the representation of the value promoted from single to double precision is also printed.

     #include <stdio.h>
     #include <gsl/gsl_ieee_utils.h>

     int
     main (void)
     {
       float f = 1.0/3.0;
       double d = 1.0/3.0;

       double fd = f; /* promote from float to double */

       printf (" f="); gsl_ieee_printf_float(&f);
       printf ("\n");

       printf ("fd="); gsl_ieee_printf_double(&fd);
       printf ("\n");

       printf (" d="); gsl_ieee_printf_double(&d);
       printf ("\n");

       return 0;
     }

The binary representation of 1/3 is 0.01010101....  The output below shows that the IEEE format normalizes this fraction to give a leading digit of 1:

      f= 1.01010101010101010101011*2^-2
     fd= 1.0101010101010101010101100000000000000000000000000000*2^-2
      d= 1.0101010101010101010101010101010101010101010101010101*2^-2

The output also shows that a single-precision number is promoted to double-precision by adding zeros in the binary representation.
Unfortunately in the past there has been no universal API for controlling their behavior; each system has had its own low-level way of accessing them. To help you write portable programs GSL allows you to specify modes in a platform-independent way using the environment variable *note GSL_IEEE_MODE: cf6. The library then takes care of all the necessary machine-specific initializations for you when you call the function *note gsl_ieee_env_setup(): cf7.

 -- Macro: GSL_IEEE_MODE

     Environment variable which specifies IEEE mode.

 -- Function: void gsl_ieee_env_setup ()

     This function reads the environment variable *note GSL_IEEE_MODE: cf6. and attempts to set up the corresponding specified IEEE modes. The environment variable should be a list of keywords, separated by commas, like this:

          GSL_IEEE_MODE = "keyword, keyword, ..."

     where ‘keyword’ is one of the following mode-names:

          single-precision
          double-precision
          extended-precision
          round-to-nearest
          round-down
          round-up
          round-to-zero
          mask-all
          mask-invalid
          mask-denormalized
          mask-division-by-zero
          mask-overflow
          mask-underflow
          trap-inexact
          trap-common

     If *note GSL_IEEE_MODE: cf6. is empty or undefined then the function returns immediately and no attempt is made to change the system’s IEEE mode. When the modes from *note GSL_IEEE_MODE: cf6. are turned on the function prints a short message showing the new settings to remind you that the results of the program will be affected.

     If the requested modes are not supported by the platform being used then the function calls the error handler and returns an error code of ‘GSL_EUNSUP’.

     When options are specified using this method, the resulting mode is based on a default setting of the highest available precision (double precision or extended precision, depending on the platform) in round-to-nearest mode, with all exceptions enabled apart from the INEXACT exception. The INEXACT exception is generated whenever rounding occurs, so it must generally be disabled in typical scientific calculations. All other floating-point exceptions are enabled by default, including underflows and the use of denormalized numbers, for safety. They can be disabled with the individual ‘mask-’ settings or together using ‘mask-all’.

     The following adjusted combination of modes is convenient for many purposes:

          GSL_IEEE_MODE="double-precision,"\
                          "mask-underflow,"\
                            "mask-denormalized"

     This choice ignores any errors relating to small numbers (either denormalized, or underflowing to zero) but traps overflows, division by zero and invalid operations.

     Note that on the x86 series of processors this function sets both the original x87 mode and the newer MXCSR mode, which controls SSE floating-point operations. The SSE floating-point units do not have a precision-control bit, and always work in double-precision. The single-precision and extended-precision keywords have no effect in this case.

To demonstrate the effects of different rounding modes consider the following program which computes e, the base of natural logarithms, by summing a rapidly-decreasing series,

   e = 1 + 1/1! + 1/2! + 1/3! + 1/4! + ... = 2.71828182846...

   #include <stdio.h>
   #include <gsl/gsl_math.h>
   #include <gsl/gsl_ieee_utils.h>

   int
   main (void)
   {
     double x = 1, oldsum = 0, sum = 0;
     int i = 0;

     gsl_ieee_env_setup (); /* read GSL_IEEE_MODE */

     do
       {
         i++;

         oldsum = sum;
         sum += x;
         x = x / i;

         printf ("i=%2d sum=%.18f error=%g\n", i, sum, sum - M_E);

         if (i > 30) break;
       }
     while (sum != oldsum);

     return 0;
   }

Here are the results of running the program in ‘round-to-nearest’ mode.
This is the IEEE default so it isn’t really necessary to specify it here:

   $ GSL_IEEE_MODE="round-to-nearest" ./a.out
   i= 1 sum=1.000000000000000000 error=-1.71828
   i= 2 sum=2.000000000000000000 error=-0.718282
   ....
   i=18 sum=2.718281828459045535 error=4.44089e-16
   i=19 sum=2.718281828459045535 error=4.44089e-16

After nineteen terms the sum converges to within 4 \times 10^{-16} of the correct value. If we now change the rounding mode to ‘round-down’ the final result is less accurate:

   $ GSL_IEEE_MODE="round-down" ./a.out
   i= 1 sum=1.000000000000000000 error=-1.71828
   ....
   i=19 sum=2.718281828459041094 error=-3.9968e-15

The result is about 4 \times 10^{-15} below the correct value, an order of magnitude worse than the result obtained in the ‘round-to-nearest’ mode.

If we change the rounding mode to ‘round-up’ then the final result is higher than the correct value (when we add each term to the sum the final result is always rounded up, which increases the sum by at least one tick until the added term underflows to zero). To avoid this problem we would need to use a safer convergence criterion, such as ‘while (fabs(sum - oldsum) > epsilon)’, with a suitably chosen value of epsilon.

Finally we can see the effect of computing the sum using single-precision rounding, in the default ‘round-to-nearest’ mode. In this case the program thinks it is still using double precision numbers but the CPU rounds the result of each floating point operation to single-precision accuracy. This simulates the effect of writing the program using single-precision ‘float’ variables instead of ‘double’ variables. The iteration stops after about half the number of iterations and the final result is much less accurate:

   $ GSL_IEEE_MODE="single-precision" ./a.out
   ....
   i=12 sum=2.718281984329223633 error=1.5587e-07

with an error of O(10^{-7}), which corresponds to single precision accuracy (about 1 part in 10^7). Continuing the iterations further does not decrease the error because all the subsequent results are rounded to the same value.


File: gsl-ref.info, Node: References and Further Reading<40>, Prev: Setting up your IEEE environment, Up: IEEE floating-point arithmetic

47.3 References and Further Reading
===================================

The reference for the IEEE standard is,

   * ANSI/IEEE Std 754-1985, IEEE Standard for Binary Floating-Point Arithmetic.

A more pedagogical introduction to the standard can be found in the following paper,

   * David Goldberg: What Every Computer Scientist Should Know About Floating-Point Arithmetic. `ACM Computing Surveys', Vol. 23, No. 1 (March 1991), pages 5–48.

   * Corrigendum: `ACM Computing Surveys', Vol. 23, No. 3 (September 1991), page 413.

and see also the sections by B. A. Wichmann and Charles B. Dunham in Surveyor’s Forum: “What Every Computer Scientist Should Know About Floating-Point Arithmetic”. `ACM Computing Surveys', Vol. 24, No. 3 (September 1992), page 319.

A detailed textbook on IEEE arithmetic and its practical use is available from SIAM Press,

   * Michael L. Overton, `Numerical Computing with IEEE Floating Point Arithmetic', SIAM Press, ISBN 0898715717.


File: gsl-ref.info, Node: Debugging Numerical Programs, Next: Contributors to GSL, Prev: IEEE floating-point arithmetic, Up: Top

48 Debugging Numerical Programs
*******************************

This chapter describes some tips and tricks for debugging numerical programs which use GSL.
* Menu: * Using gdb:: * Examining floating point registers:: * Handling floating point exceptions:: * GCC warning options for numerical programs:: * References and Further Reading: References and Further Reading<41>.  File: gsl-ref.info, Node: Using gdb, Next: Examining floating point registers, Up: Debugging Numerical Programs 48.1 Using gdb ============== Any errors reported by the library are passed to the function ‘gsl_error()’. By running your programs under gdb and setting a breakpoint in this function you can automatically catch any library errors. You can add a breakpoint for every session by putting: break gsl_error into your ‘.gdbinit’ file in the directory where your program is started. If the breakpoint catches an error then you can use a backtrace (‘bt’) to see the call-tree, and the arguments which possibly caused the error. By moving up into the calling function you can investigate the values of variables at that point. Here is an example from the program ‘fft/test_trap’, which contains the following line: status = gsl_fft_complex_wavetable_alloc (0, &complex_wavetable); The function *note gsl_fft_complex_wavetable_alloc(): 61d. takes the length of an FFT as its first argument. When this line is executed an error will be generated because the length of an FFT is not allowed to be zero. To debug this problem we start ‘gdb’, using the file ‘.gdbinit’ to define a breakpoint in ‘gsl_error()’: $ gdb test_trap GDB is free software and you are welcome to distribute copies of it under certain conditions; type "show copying" to see the conditions. There is absolutely no warranty for GDB; type "show warranty" for details. GDB 4.16 (i586-debian-linux), Copyright 1996 Free Software Foundation, Inc. Breakpoint 1 at 0x8050b1e: file error.c, line 14. When we run the program this breakpoint catches the error and shows the reason for it: (gdb) run Starting program: test_trap Breakpoint 1, gsl_error (reason=0x8052b0d "length n must be positive integer", file=0x8052b04 "c_init.c", line=108, gsl_errno=1) at error.c:14 14 if (gsl_error_handler) The first argument of ‘gsl_error()’ is always a string describing the error. Now we can look at the backtrace to see what caused the problem: (gdb) bt #0 gsl_error (reason=0x8052b0d "length n must be positive integer", file=0x8052b04 "c_init.c", line=108, gsl_errno=1) at error.c:14 #1 0x8049376 in gsl_fft_complex_wavetable_alloc (n=0, wavetable=0xbffff778) at c_init.c:108 #2 0x8048a00 in main (argc=1, argv=0xbffff9bc) at test_trap.c:94 #3 0x80488be in ___crt_dummy__ () We can see that the error was generated in the function *note gsl_fft_complex_wavetable_alloc(): 61d. when it was called with an argument of ‘n = 0’. The original call came from line 94 in the file ‘test_trap.c’. By moving up to the level of the original call we can find the line that caused the error: (gdb) up #1 0x8049376 in gsl_fft_complex_wavetable_alloc (n=0, wavetable=0xbffff778) at c_init.c:108 108 GSL_ERROR ("length n must be positive integer", GSL_EDOM); (gdb) up #2 0x8048a00 in main (argc=1, argv=0xbffff9bc) at test_trap.c:94 94 status = gsl_fft_complex_wavetable_alloc (0, &complex_wavetable); Thus we have found the line that caused the problem. From this point we could also print out the values of other variables such as ‘complex_wavetable’.  
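To try out this workflow without the GSL test programs, any call that violates a domain restriction will do. The following fragment is only an illustration (it is not taken from the manual); it uses ‘gsl_sf_gamma()’, whose poles at non-positive integers cause the default error handler to be invoked (and the program to abort), so the ‘gsl_error()’ breakpoint above is hit when it is run under gdb:

   #include <stdio.h>
   #include <gsl/gsl_sf_gamma.h>

   int
   main (void)
   {
     /* Gamma(x) has poles at zero and the negative integers, so this
        call reaches gsl_error() via the default error handler. */
     double y = gsl_sf_gamma (-1.0);

     printf ("not reached with the default error handler: %g\n", y);
     return 0;
   }

Compile with ‘gcc demo.c -lgsl -lgslcblas -lm’ and run the result under gdb as described above to see the backtrace leading into the library.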
File: gsl-ref.info, Node: Examining floating point registers, Next: Handling floating point exceptions, Prev: Using gdb, Up: Debugging Numerical Programs 48.2 Examining floating point registers ======================================= The contents of floating point registers can be examined using the command ‘info float’ (on supported platforms): (gdb) info float st0: 0xc4018b895aa17a945000 Valid Normal -7.838871e+308 st1: 0x3ff9ea3f50e4d7275000 Valid Normal 0.0285946 st2: 0x3fe790c64ce27dad4800 Valid Normal 6.7415931e-08 st3: 0x3ffaa3ef0df6607d7800 Spec Normal 0.0400229 st4: 0x3c028000000000000000 Valid Normal 4.4501477e-308 st5: 0x3ffef5412c22219d9000 Zero Normal 0.9580257 st6: 0x3fff8000000000000000 Valid Normal 1 st7: 0xc4028b65a1f6d243c800 Valid Normal -1.566206e+309 fctrl: 0x0272 53 bit; NEAR; mask DENOR UNDER LOS; fstat: 0xb9ba flags 0001; top 7; excep DENOR OVERF UNDER LOS ftag: 0x3fff fip: 0x08048b5c fcs: 0x051a0023 fopoff: 0x08086820 fopsel: 0x002b Individual registers can be examined using the variables ‘$reg’, where ‘reg’ is the register name: (gdb) p $st1 $1 = 0.02859464454261210347719  File: gsl-ref.info, Node: Handling floating point exceptions, Next: GCC warning options for numerical programs, Prev: Examining floating point registers, Up: Debugging Numerical Programs 48.3 Handling floating point exceptions ======================================= It is possible to stop the program whenever a ‘SIGFPE’ floating point exception occurs. This can be useful for finding the cause of an unexpected infinity or ‘NaN’. The current handler settings can be shown with the command ‘info signal SIGFPE’: (gdb) info signal SIGFPE Signal Stop Print Pass to program Description SIGFPE Yes Yes Yes Arithmetic exception Unless the program uses a signal handler the default setting should be changed so that SIGFPE is not passed to the program, as this would cause it to exit. The command ‘handle SIGFPE stop nopass’ prevents this: (gdb) handle SIGFPE stop nopass Signal Stop Print Pass to program Description SIGFPE Yes Yes No Arithmetic exception Depending on the platform it may be necessary to instruct the kernel to generate signals for floating point exceptions. For programs using GSL this can be achieved using the *note GSL_IEEE_MODE: cf6. environment variable in conjunction with the function *note gsl_ieee_env_setup(): cf7. as described in *note IEEE floating-point arithmetic: cee.: (gdb) set env GSL_IEEE_MODE=double-precision  File: gsl-ref.info, Node: GCC warning options for numerical programs, Next: References and Further Reading<41>, Prev: Handling floating point exceptions, Up: Debugging Numerical Programs 48.4 GCC warning options for numerical programs =============================================== Writing reliable numerical programs in C requires great care. The following GCC warning options are recommended when compiling numerical programs: gcc -ansi -pedantic -Werror -Wall -W -Wmissing-prototypes -Wstrict-prototypes -Wconversion -Wshadow -Wpointer-arith -Wcast-qual -Wcast-align -Wwrite-strings -Wnested-externs -fshort-enums -fno-common -Dinline= -g -O2 For details of each option consult the manual `Using and Porting GCC'. The following table gives a brief explanation of what types of errors these options catch. ‘-ansi -pedantic’ Use ANSI C, and reject any non-ANSI extensions. These flags help in writing portable programs that will compile on other systems. ‘-Werror’ Consider warnings to be errors, so that compilation stops. 
This prevents warnings from scrolling off the top of the screen and being lost. You won’t be able to compile the program until it is completely warning-free. ‘-Wall’ This turns on a set of warnings for common programming problems. You need ‘-Wall’, but it is not enough on its own. ‘-O2’ Turn on optimization. The warnings for uninitialized variables in ‘-Wall’ rely on the optimizer to analyze the code. If there is no optimization then these warnings aren’t generated. ‘-W’ This turns on some extra warnings not included in ‘-Wall’, such as missing return values and comparisons between signed and unsigned integers. ‘-Wmissing-prototypes -Wstrict-prototypes’ Warn if there are any missing or inconsistent prototypes. Without prototypes it is harder to detect problems with incorrect arguments. ‘-Wconversion’ The main use of this option is to warn about conversions from signed to unsigned integers. For example, ‘unsigned int x = -1’. If you need to perform such a conversion you can use an explicit cast. ‘-Wshadow’ This warns whenever a local variable shadows another local variable. If two variables have the same name then it is a potential source of confusion. ‘-Wpointer-arith -Wcast-qual -Wcast-align’ These options warn if you try to do pointer arithmetic for types which don’t have a size, such as ‘void’, if you remove a ‘const’ cast from a pointer, or if you cast a pointer to a type which has a different size, causing an invalid alignment. ‘-Wwrite-strings’ This option gives string constants a ‘const’ qualifier so that it will be a compile-time error to attempt to overwrite them. ‘-fshort-enums’ This option makes the type of ‘enum’ as short as possible. Normally this makes an ‘enum’ different from an ‘int’. Consequently any attempts to assign a pointer-to-int to a pointer-to-enum will generate a cast-alignment warning. ‘-fno-common’ This option prevents global variables being simultaneously defined in different object files (you get an error at link time). Such a variable should be defined in one file and referred to in other files with an ‘extern’ declaration. ‘-Wnested-externs’ This warns if an ‘extern’ declaration is encountered within a function. ‘-Dinline=’ The ‘inline’ keyword is not part of ANSI C. Thus if you want to use ‘-ansi’ with a program which uses inline functions you can use this preprocessor definition to remove the ‘inline’ keywords. ‘-g’ It always makes sense to put debugging symbols in the executable so that you can debug it using ‘gdb’. The only effect of debugging symbols is to increase the size of the file, and you can use the ‘strip’ command to remove them later if necessary.  File: gsl-ref.info, Node: References and Further Reading<41>, Prev: GCC warning options for numerical programs, Up: Debugging Numerical Programs 48.5 References and Further Reading =================================== The following books are essential reading for anyone writing and debugging numerical programs with ‘gcc’ and ‘gdb’. * R.M. Stallman, `Using and Porting GNU CC', Free Software Foundation, ISBN 1882114388 * R.M. Stallman, R.H. Pesch, `Debugging with GDB: The GNU Source-Level Debugger', Free Software Foundation, ISBN 1882114779 For a tutorial introduction to the GNU C Compiler and related programs, see * B.J. 
Gough, ‘http://www.network-theory.co.uk/gcc/intro/’,’ `An Introduction to GCC', Network Theory Ltd, ISBN 0954161793  File: gsl-ref.info, Node: Contributors to GSL, Next: Autoconf Macros, Prev: Debugging Numerical Programs, Up: Top 49 Contributors to GSL ********************** (See the ‘AUTHORS’ file in the distribution for up-to-date information.) Mark Galassi Conceived GSL (with James Theiler) and wrote the design document. Wrote the simulated annealing package and the relevant chapter in the manual. James Theiler Conceived GSL (with Mark Galassi). Wrote the random number generators and the relevant chapter in this manual. Jim Davies Wrote the statistical routines and the relevant chapter in this manual. Brian Gough FFTs, numerical integration, random number generators and distributions, root finding, minimization and fitting, polynomial solvers, complex numbers, physical constants, permutations, vector and matrix functions, histograms, statistics, ieee-utils, revised CBLAS Level 2 & 3, matrix decompositions, eigensystems, cumulative distribution functions, testing, documentation and releases. Reid Priedhorsky Wrote and documented the initial version of the root finding routines while at Los Alamos National Laboratory, Mathematical Modeling and Analysis Group. Gerard Jungman Special Functions, Series acceleration, ODEs, BLAS, Linear Algebra, Eigensystems, Hankel Transforms. Patrick Alken Implementation of nonsymmetric and generalized eigensystems, B-splines, linear and nonlinear least squares, matrix decompositions, associated Legendre functions, running statistics, sparse matrices, and sparse linear algebra. Mike Booth Wrote the Monte Carlo library. Jorma Olavi Tähtinen Wrote the initial complex arithmetic functions. Thomas Walter Wrote the initial heapsort routines and Cholesky decomposition. Fabrice Rossi Multidimensional minimization. Carlo Perassi Implementation of the random number generators in Knuth’s `Seminumerical Algorithms', 3rd Ed. Szymon Jaroszewicz Wrote the routines for generating combinations. Nicolas Darnis Wrote the cyclic functions and the initial functions for canonical permutations. Jason H. Stover Wrote the major cumulative distribution functions. Ivo Alxneit Wrote the routines for wavelet transforms. Tuomo Keskitalo Improved the implementation of the ODE solvers and wrote the ode-initval2 routines. Lowell Johnson Implementation of the Mathieu functions. Rhys Ulerich Wrote the multiset routines. Pavel Holoborodko Wrote the fixed order Gauss-Legendre quadrature routines. Pedro Gonnet Wrote the CQUAD integration routines. Thanks to Nigel Lowry for help in proofreading the manual. The non-symmetric eigensystems routines contain code based on the LAPACK linear algebra library. LAPACK is distributed under the following license: Copyright (c) 1992-2006 The University of Tennessee. All rights reserved. Redistribution and use in source and binary forms, with or without modification, are permitted provided that the following conditions are met: * Redistributions of source code must retain the above copyright notice, this list of conditions and the following disclaimer. * Redistributions in binary form must reproduce the above copyright notice, this list of conditions and the following disclaimer listed in this license in the documentation and/or other materials provided with the distribution. 
* Neither the name of the copyright holders nor the names of its contributors may be used to endorse or promote products derived from this software without specific prior written permission. THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS ``AS IS'' AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.  File: gsl-ref.info, Node: Autoconf Macros, Next: GSL CBLAS Library, Prev: Contributors to GSL, Up: Top 50 Autoconf Macros ****************** For applications using ‘autoconf’ the standard macro ‘AC_CHECK_LIB’ can be used to link with GSL automatically from a ‘configure’ script. The library itself depends on the presence of a CBLAS and math library as well, so these must also be located before linking with the main ‘libgsl’ file. The following commands should be placed in the ‘configure.ac’ file to perform these tests: AC_CHECK_LIB([m],[cos]) AC_CHECK_LIB([gslcblas],[cblas_dgemm]) AC_CHECK_LIB([gsl],[gsl_blas_dgemm]) It is important to check for ‘libm’ and ‘libgslcblas’ before ‘libgsl’, otherwise the tests will fail. Assuming the libraries are found the output during the configure stage looks like this: checking for cos in -lm... yes checking for cblas_dgemm in -lgslcblas... yes checking for gsl_blas_dgemm in -lgsl... yes If the library is found then the tests will define the macros ‘HAVE_LIBGSL’, ‘HAVE_LIBGSLCBLAS’, ‘HAVE_LIBM’ and add the options ‘-lgsl -lgslcblas -lm’ to the variable ‘LIBS’. The tests above will find any version of the library. They are suitable for general use, where the versions of the functions are not important. An alternative macro is available in the file ‘gsl.m4’ to test for a specific version of the library. To use this macro simply add the following line to your ‘configure.in’ file instead of the tests above: AX_PATH_GSL(GSL_VERSION, [action-if-found], [action-if-not-found]) The argument ‘GSL_VERSION’ should be the two or three digit ‘major.minor’ or ‘major.minor.micro’ version number of the release you require. A suitable choice for ‘action-if-not-found’ is: AC_MSG_ERROR(could not find required version of GSL) Then you can add the variables ‘GSL_LIBS’ and ‘GSL_CFLAGS’ to your Makefile.am files to obtain the correct compiler flags. ‘GSL_LIBS’ is equal to the output of the ‘gsl-config --libs’ command and ‘GSL_CFLAGS’ is equal to ‘gsl-config --cflags’ command. For example: libfoo_la_LDFLAGS = -lfoo $(GSL_LIBS) -lgslcblas Note that the macro ‘AX_PATH_GSL’ needs to use the C compiler so it should appear in the ‘configure.in’ file before the macro ‘AC_LANG_CPLUSPLUS’ for programs that use C++. To test for ‘inline’ the following test should be placed in your ‘configure.in’ file: AC_C_INLINE if test "$ac_cv_c_inline" != no ; then AC_DEFINE(HAVE_INLINE,1) AC_SUBST(HAVE_INLINE) fi and the macro will then be defined in the compilation flags or by including the file ‘config.h’ before any library headers. 
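As a sketch of how these definitions are picked up (the file name ‘config.h’ and the arrangement below are the conventional autoconf ones, assumed here rather than mandated by GSL), an application source file includes ‘config.h’ before any GSL headers so that ‘HAVE_INLINE’ and the portability substitutions described next take effect:

   /* example.c -- config.h must come before the GSL headers so that
      HAVE_INLINE and any substitutions defined there are visible */
   #include <config.h>

   #include <stdio.h>
   #include <gsl/gsl_math.h>

   int
   main (void)
   {
     /* gsl_hypot() is always available; with the hypot substitution
        described below, a call to hypot() would also resolve to it on
        systems which lack the BSD function */
     printf ("%g\n", gsl_hypot (3.0, 4.0));
     return 0;
   }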
The following autoconf test will check for ‘extern inline’: dnl Check for "extern inline", using a modified version dnl of the test for AC_C_INLINE from acspecific.mt dnl AC_CACHE_CHECK([for extern inline], ac_cv_c_extern_inline, [ac_cv_c_extern_inline=no AC_TRY_COMPILE([extern $ac_cv_c_inline double foo(double x); extern $ac_cv_c_inline double foo(double x) { return x+1.0; }; double foo (double x) { return x + 1.0; };], [ foo(1.0) ], [ac_cv_c_extern_inline="yes"]) ]) if test "$ac_cv_c_extern_inline" != no ; then AC_DEFINE(HAVE_INLINE,1) AC_SUBST(HAVE_INLINE) fi The substitution of portability functions can be made automatically if you use ‘autoconf’. For example, to test whether the BSD function ‘hypot()’ is available you can include the following line in the configure file ‘configure.in’ for your application: AC_CHECK_FUNCS(hypot) and place the following macro definitions in the file ‘config.h.in’: /* Substitute gsl_hypot for missing system hypot */ #ifndef HAVE_HYPOT #define hypot gsl_hypot #endif The application source files can then use the include command ‘#include ’ to substitute *note gsl_hypot(): 1a. for each occurrence of ‘hypot()’ when ‘hypot()’ is not available.  File: gsl-ref.info, Node: GSL CBLAS Library, Next: GNU General Public License, Prev: Autoconf Macros, Up: Top 51 GSL CBLAS Library ******************** The prototypes for the low-level CBLAS functions are declared in the file ‘gsl_cblas.h’. For the definition of the functions consult the documentation available from Netlib (*note see BLAS References and Further Reading: 4e7.). * Menu: * Level 1: Level 1<2>. * Level 2: Level 2<2>. * Level 3: Level 3<2>. * Examples: Examples<37>.  File: gsl-ref.info, Node: Level 1<2>, Next: Level 2<2>, Up: GSL CBLAS Library 51.1 Level 1 ============ -- Function: float cblas_sdsdot (const int N, const float alpha, const float *x, const int incx, const float *y, const int incy) -- Function: double cblas_dsdot (const int N, const float *x, const int incx, const float *y, const int incy) -- Function: float cblas_sdot (const int N, const float *x, const int incx, const float *y, const int incy) -- Function: double cblas_ddot (const int N, const double *x, const int incx, const double *y, const int incy) -- Function: void cblas_cdotu_sub (const int N, const void *x, const int incx, const void *y, const int incy, void *dotu) -- Function: void cblas_cdotc_sub (const int N, const void *x, const int incx, const void *y, const int incy, void *dotc) -- Function: void cblas_zdotu_sub (const int N, const void *x, const int incx, const void *y, const int incy, void *dotu) -- Function: void cblas_zdotc_sub (const int N, const void *x, const int incx, const void *y, const int incy, void *dotc) -- Function: float cblas_snrm2 (const int N, const float *x, const int incx) -- Function: float cblas_sasum (const int N, const float *x, const int incx) -- Function: double cblas_dnrm2 (const int N, const double *x, const int incx) -- Function: double cblas_dasum (const int N, const double *x, const int incx) -- Function: float cblas_scnrm2 (const int N, const void *x, const int incx) -- Function: float cblas_scasum (const int N, const void *x, const int incx) -- Function: double cblas_dznrm2 (const int N, const void *x, const int incx) -- Function: double cblas_dzasum (const int N, const void *x, const int incx) -- Function: CBLAS_INDEX cblas_isamax (const int N, const float *x, const int incx) -- Function: CBLAS_INDEX cblas_idamax (const int N, const double *x, const int incx) -- Function: CBLAS_INDEX 
cblas_icamax (const int N, const void *x, const int incx) -- Function: CBLAS_INDEX cblas_izamax (const int N, const void *x, const int incx) -- Function: void cblas_sswap (const int N, float *x, const int incx, float *y, const int incy) -- Function: void cblas_scopy (const int N, const float *x, const int incx, float *y, const int incy) -- Function: void cblas_saxpy (const int N, const float alpha, const float *x, const int incx, float *y, const int incy) -- Function: void cblas_dswap (const int N, double *x, const int incx, double *y, const int incy) -- Function: void cblas_dcopy (const int N, const double *x, const int incx, double *y, const int incy) -- Function: void cblas_daxpy (const int N, const double alpha, const double *x, const int incx, double *y, const int incy) -- Function: void cblas_cswap (const int N, void *x, const int incx, void *y, const int incy) -- Function: void cblas_ccopy (const int N, const void *x, const int incx, void *y, const int incy) -- Function: void cblas_caxpy (const int N, const void *alpha, const void *x, const int incx, void *y, const int incy) -- Function: void cblas_zswap (const int N, void *x, const int incx, void *y, const int incy) -- Function: void cblas_zcopy (const int N, const void *x, const int incx, void *y, const int incy) -- Function: void cblas_zaxpy (const int N, const void *alpha, const void *x, const int incx, void *y, const int incy) -- Function: void cblas_srotg (float *a, float *b, float *c, float *s) -- Function: void cblas_srotmg (float *d1, float *d2, float *b1, const float b2, float *P) -- Function: void cblas_srot (const int N, float *x, const int incx, float *y, const int incy, const float c, const float s) -- Function: void cblas_srotm (const int N, float *x, const int incx, float *y, const int incy, const float *P) -- Function: void cblas_drotg (double *a, double *b, double *c, double *s) -- Function: void cblas_drotmg (double *d1, double *d2, double *b1, const double b2, double *P) -- Function: void cblas_drot (const int N, double *x, const int incx, double *y, const int incy, const double c, const double s) -- Function: void cblas_drotm (const int N, double *x, const int incx, double *y, const int incy, const double *P) -- Function: void cblas_sscal (const int N, const float alpha, float *x, const int incx) -- Function: void cblas_dscal (const int N, const double alpha, double *x, const int incx) -- Function: void cblas_cscal (const int N, const void *alpha, void *x, const int incx) -- Function: void cblas_zscal (const int N, const void *alpha, void *x, const int incx) -- Function: void cblas_csscal (const int N, const float alpha, void *x, const int incx) -- Function: void cblas_zdscal (const int N, const double alpha, void *x, const int incx)  File: gsl-ref.info, Node: Level 2<2>, Next: Level 3<2>, Prev: Level 1<2>, Up: GSL CBLAS Library 51.2 Level 2 ============ -- Function: void cblas_sgemv (const enum CBLAS_ORDER order, const enum CBLAS_TRANSPOSE TransA, const int M, const int N, const float alpha, const float *A, const int lda, const float *x, const int incx, const float beta, float *y, const int incy) -- Function: void cblas_sgbmv (const enum CBLAS_ORDER order, const enum CBLAS_TRANSPOSE TransA, const int M, const int N, const int KL, const int KU, const float alpha, const float *A, const int lda, const float *x, const int incx, const float beta, float *y, const int incy) -- Function: void cblas_strmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG 
Diag, const int N, const float *A, const int lda, float *x, const int incx) -- Function: void cblas_stbmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const int K, const float *A, const int lda, float *x, const int incx) -- Function: void cblas_stpmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const float *Ap, float *x, const int incx) -- Function: void cblas_strsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const float *A, const int lda, float *x, const int incx) -- Function: void cblas_stbsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const int K, const float *A, const int lda, float *x, const int incx) -- Function: void cblas_stpsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const float *Ap, float *x, const int incx) -- Function: void cblas_dgemv (const enum CBLAS_ORDER order, const enum CBLAS_TRANSPOSE TransA, const int M, const int N, const double alpha, const double *A, const int lda, const double *x, const int incx, const double beta, double *y, const int incy) -- Function: void cblas_dgbmv (const enum CBLAS_ORDER order, const enum CBLAS_TRANSPOSE TransA, const int M, const int N, const int KL, const int KU, const double alpha, const double *A, const int lda, const double *x, const int incx, const double beta, double *y, const int incy) -- Function: void cblas_dtrmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const double *A, const int lda, double *x, const int incx) -- Function: void cblas_dtbmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const int K, const double *A, const int lda, double *x, const int incx) -- Function: void cblas_dtpmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const double *Ap, double *x, const int incx) -- Function: void cblas_dtrsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const double *A, const int lda, double *x, const int incx) -- Function: void cblas_dtbsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const int K, const double *A, const int lda, double *x, const int incx) -- Function: void cblas_dtpsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const double *Ap, double *x, const int incx) -- Function: void cblas_cgemv (const enum CBLAS_ORDER order, const enum CBLAS_TRANSPOSE TransA, const int M, const int N, const void *alpha, const void *A, const int lda, const void *x, const int incx, const void *beta, void *y, const int incy) -- Function: void cblas_cgbmv (const enum CBLAS_ORDER order, const enum CBLAS_TRANSPOSE TransA, const int M, const int N, const int KL, const int KU, const void *alpha, const void *A, const int lda, const void *x, const int incx, const void *beta, 
void *y, const int incy) -- Function: void cblas_ctrmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const void *A, const int lda, void *x, const int incx) -- Function: void cblas_ctbmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const int K, const void *A, const int lda, void *x, const int incx) -- Function: void cblas_ctpmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const void *Ap, void *x, const int incx) -- Function: void cblas_ctrsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const void *A, const int lda, void *x, const int incx) -- Function: void cblas_ctbsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const int K, const void *A, const int lda, void *x, const int incx) -- Function: void cblas_ctpsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const void *Ap, void *x, const int incx) -- Function: void cblas_zgemv (const enum CBLAS_ORDER order, const enum CBLAS_TRANSPOSE TransA, const int M, const int N, const void *alpha, const void *A, const int lda, const void *x, const int incx, const void *beta, void *y, const int incy) -- Function: void cblas_zgbmv (const enum CBLAS_ORDER order, const enum CBLAS_TRANSPOSE TransA, const int M, const int N, const int KL, const int KU, const void *alpha, const void *A, const int lda, const void *x, const int incx, const void *beta, void *y, const int incy) -- Function: void cblas_ztrmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const void *A, const int lda, void *x, const int incx) -- Function: void cblas_ztbmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const int K, const void *A, const int lda, void *x, const int incx) -- Function: void cblas_ztpmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const void *Ap, void *x, const int incx) -- Function: void cblas_ztrsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const void *A, const int lda, void *x, const int incx) -- Function: void cblas_ztbsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const int K, const void *A, const int lda, void *x, const int incx) -- Function: void cblas_ztpsv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int N, const void *Ap, void *x, const int incx) -- Function: void cblas_ssymv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const float alpha, const float *A, const int lda, const float *x, const int incx, const float beta, float *y, const int incy) -- Function: void cblas_ssbmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const int K, const float alpha, const float 
*A, const int lda, const float *x, const int incx, const float beta, float *y, const int incy) -- Function: void cblas_sspmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const float alpha, const float *Ap, const float *x, const int incx, const float beta, float *y, const int incy) -- Function: void cblas_sger (const enum CBLAS_ORDER order, const int M, const int N, const float alpha, const float *x, const int incx, const float *y, const int incy, float *A, const int lda) -- Function: void cblas_ssyr (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const float alpha, const float *x, const int incx, float *A, const int lda) -- Function: void cblas_sspr (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const float alpha, const float *x, const int incx, float *Ap) -- Function: void cblas_ssyr2 (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const float alpha, const float *x, const int incx, const float *y, const int incy, float *A, const int lda) -- Function: void cblas_sspr2 (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const float alpha, const float *x, const int incx, const float *y, const int incy, float *A) -- Function: void cblas_dsymv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const double alpha, const double *A, const int lda, const double *x, const int incx, const double beta, double *y, const int incy) -- Function: void cblas_dsbmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const int K, const double alpha, const double *A, const int lda, const double *x, const int incx, const double beta, double *y, const int incy) -- Function: void cblas_dspmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const double alpha, const double *Ap, const double *x, const int incx, const double beta, double *y, const int incy) -- Function: void cblas_dger (const enum CBLAS_ORDER order, const int M, const int N, const double alpha, const double *x, const int incx, const double *y, const int incy, double *A, const int lda) -- Function: void cblas_dsyr (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const double alpha, const double *x, const int incx, double *A, const int lda) -- Function: void cblas_dspr (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const double alpha, const double *x, const int incx, double *Ap) -- Function: void cblas_dsyr2 (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const double alpha, const double *x, const int incx, const double *y, const int incy, double *A, const int lda) -- Function: void cblas_dspr2 (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const double alpha, const double *x, const int incx, const double *y, const int incy, double *A) -- Function: void cblas_chemv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const void *alpha, const void *A, const int lda, const void *x, const int incx, const void *beta, void *y, const int incy) -- Function: void cblas_chbmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const int K, const void *alpha, const void *A, const int lda, const void *x, const int incx, const void *beta, void *y, const int incy) -- Function: void cblas_chpmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const void *alpha, const void *Ap, const void *x, const int incx, const void *beta, 
void *y, const int incy) -- Function: void cblas_cgeru (const enum CBLAS_ORDER order, const int M, const int N, const void *alpha, const void *x, const int incx, const void *y, const int incy, void *A, const int lda) -- Function: void cblas_cgerc (const enum CBLAS_ORDER order, const int M, const int N, const void *alpha, const void *x, const int incx, const void *y, const int incy, void *A, const int lda) -- Function: void cblas_cher (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const float alpha, const void *x, const int incx, void *A, const int lda) -- Function: void cblas_chpr (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const float alpha, const void *x, const int incx, void *A) -- Function: void cblas_cher2 (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const void *alpha, const void *x, const int incx, const void *y, const int incy, void *A, const int lda) -- Function: void cblas_chpr2 (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const void *alpha, const void *x, const int incx, const void *y, const int incy, void *Ap) -- Function: void cblas_zhemv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const void *alpha, const void *A, const int lda, const void *x, const int incx, const void *beta, void *y, const int incy) -- Function: void cblas_zhbmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const int K, const void *alpha, const void *A, const int lda, const void *x, const int incx, const void *beta, void *y, const int incy) -- Function: void cblas_zhpmv (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const void *alpha, const void *Ap, const void *x, const int incx, const void *beta, void *y, const int incy) -- Function: void cblas_zgeru (const enum CBLAS_ORDER order, const int M, const int N, const void *alpha, const void *x, const int incx, const void *y, const int incy, void *A, const int lda) -- Function: void cblas_zgerc (const enum CBLAS_ORDER order, const int M, const int N, const void *alpha, const void *x, const int incx, const void *y, const int incy, void *A, const int lda) -- Function: void cblas_zher (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const double alpha, const void *x, const int incx, void *A, const int lda) -- Function: void cblas_zhpr (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const double alpha, const void *x, const int incx, void *A) -- Function: void cblas_zher2 (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const void *alpha, const void *x, const int incx, const void *y, const int incy, void *A, const int lda) -- Function: void cblas_zhpr2 (const enum CBLAS_ORDER order, const enum CBLAS_UPLO Uplo, const int N, const void *alpha, const void *x, const int incx, const void *y, const int incy, void *Ap)  File: gsl-ref.info, Node: Level 3<2>, Next: Examples<37>, Prev: Level 2<2>, Up: GSL CBLAS Library 51.3 Level 3 ============ -- Function: void cblas_sgemm (const enum CBLAS_ORDER Order, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_TRANSPOSE TransB, const int M, const int N, const int K, const float alpha, const float *A, const int lda, const float *B, const int ldb, const float beta, float *C, const int ldc) -- Function: void cblas_ssymm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const int M, const int N, const float alpha, const float *A, const int lda, 
const float *B, const int ldb, const float beta, float *C, const int ldc) -- Function: void cblas_ssyrk (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const float alpha, const float *A, const int lda, const float beta, float *C, const int ldc) -- Function: void cblas_ssyr2k (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const float alpha, const float *A, const int lda, const float *B, const int ldb, const float beta, float *C, const int ldc) -- Function: void cblas_strmm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int M, const int N, const float alpha, const float *A, const int lda, float *B, const int ldb) -- Function: void cblas_strsm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int M, const int N, const float alpha, const float *A, const int lda, float *B, const int ldb) -- Function: void cblas_dgemm (const enum CBLAS_ORDER Order, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_TRANSPOSE TransB, const int M, const int N, const int K, const double alpha, const double *A, const int lda, const double *B, const int ldb, const double beta, double *C, const int ldc) -- Function: void cblas_dsymm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const int M, const int N, const double alpha, const double *A, const int lda, const double *B, const int ldb, const double beta, double *C, const int ldc) -- Function: void cblas_dsyrk (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const double alpha, const double *A, const int lda, const double beta, double *C, const int ldc) -- Function: void cblas_dsyr2k (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const double alpha, const double *A, const int lda, const double *B, const int ldb, const double beta, double *C, const int ldc) -- Function: void cblas_dtrmm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int M, const int N, const double alpha, const double *A, const int lda, double *B, const int ldb) -- Function: void cblas_dtrsm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int M, const int N, const double alpha, const double *A, const int lda, double *B, const int ldb) -- Function: void cblas_cgemm (const enum CBLAS_ORDER Order, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_TRANSPOSE TransB, const int M, const int N, const int K, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const void *beta, void *C, const int ldc) -- Function: void cblas_csymm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const int M, const int N, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const void *beta, void *C, const int ldc) -- Function: void cblas_csyrk (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const void *alpha, const void *A, 
const int lda, const void *beta, void *C, const int ldc) -- Function: void cblas_csyr2k (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const void *beta, void *C, const int ldc) -- Function: void cblas_ctrmm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int M, const int N, const void *alpha, const void *A, const int lda, void *B, const int ldb) -- Function: void cblas_ctrsm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int M, const int N, const void *alpha, const void *A, const int lda, void *B, const int ldb) -- Function: void cblas_zgemm (const enum CBLAS_ORDER Order, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_TRANSPOSE TransB, const int M, const int N, const int K, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const void *beta, void *C, const int ldc) -- Function: void cblas_zsymm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const int M, const int N, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const void *beta, void *C, const int ldc) -- Function: void cblas_zsyrk (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const void *alpha, const void *A, const int lda, const void *beta, void *C, const int ldc) -- Function: void cblas_zsyr2k (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const void *beta, void *C, const int ldc) -- Function: void cblas_ztrmm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int M, const int N, const void *alpha, const void *A, const int lda, void *B, const int ldb) -- Function: void cblas_ztrsm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE TransA, const enum CBLAS_DIAG Diag, const int M, const int N, const void *alpha, const void *A, const int lda, void *B, const int ldb) -- Function: void cblas_chemm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const int M, const int N, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const void *beta, void *C, const int ldc) -- Function: void cblas_cherk (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const float alpha, const void *A, const int lda, const float beta, void *C, const int ldc) -- Function: void cblas_cher2k (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const float beta, void *C, const int ldc) -- Function: void cblas_zhemm (const enum CBLAS_ORDER Order, const enum CBLAS_SIDE Side, const enum CBLAS_UPLO Uplo, const int M, const int N, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const void *beta, void *C, const int ldc) -- 
Function: void cblas_zherk (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const double alpha, const void *A, const int lda, const double beta, void *C, const int ldc)
 -- Function: void cblas_zher2k (const enum CBLAS_ORDER Order, const enum CBLAS_UPLO Uplo, const enum CBLAS_TRANSPOSE Trans, const int N, const int K, const void *alpha, const void *A, const int lda, const void *B, const int ldb, const double beta, void *C, const int ldc)
 -- Function: void cblas_xerbla (int p, const char *rout, const char *form, ...)


File: gsl-ref.info, Node: Examples<37>, Prev: Level 3<2>, Up: GSL CBLAS Library

51.4 Examples
=============

The following program computes the product of two matrices using the Level-3 BLAS function SGEMM,

   [ 0.11 0.12 0.13 ]   [ 1011 1012 ]     [ 367.76 368.12 ]
   [ 0.21 0.22 0.23 ]   [ 1021 1022 ]  =  [ 674.06 674.72 ]
                        [ 1031 1032 ]

The matrices are stored in row major order but could be stored in column major order if the first argument of the call to *note cblas_sgemm(): d79. was changed to ‘CblasColMajor’.

   #include <stdio.h>
   #include <gsl/gsl_cblas.h>

   int
   main (void)
   {
     int lda = 3;

     float A[] = { 0.11, 0.12, 0.13,
                   0.21, 0.22, 0.23 };

     int ldb = 2;

     float B[] = { 1011, 1012,
                   1021, 1022,
                   1031, 1032 };

     int ldc = 2;

     float C[] = { 0.00, 0.00,
                   0.00, 0.00 };

     /* Compute C = A B */

     cblas_sgemm (CblasRowMajor,
                  CblasNoTrans, CblasNoTrans, 2, 2, 3,
                  1.0, A, lda, B, ldb, 0.0, C, ldc);

     printf ("[ %g, %g\n", C[0], C[1]);
     printf ("  %g, %g ]\n", C[2], C[3]);

     return 0;
   }

To compile the program use the following command line:

   $ gcc -Wall demo.c -lgslcblas

There is no need to link with the main library ‘-lgsl’ in this case as the CBLAS library is an independent unit. Here is the output from the program,

   [ 367.76, 368.12
     674.06, 674.72 ]


File: gsl-ref.info, Node: GNU General Public License, Next: GNU Free Documentation License, Prev: GSL CBLAS Library, Up: Top

52 GNU General Public License
*****************************

                    GNU GENERAL PUBLIC LICENSE
                     Version 3, 29 June 2007

Copyright (C) 2007 Free Software Foundation, Inc.

Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed.

                            Preamble

The GNU General Public License is a free, copyleft license for software and other kinds of works.

The licenses for most software and other practical works are designed to take away your freedom to share and change the works. By contrast, the GNU General Public License is intended to guarantee your freedom to share and change all versions of a program--to make sure it remains free software for all its users. We, the Free Software Foundation, use the GNU General Public License for most of our software; it applies also to any other work released this way by its authors. You can apply it to your programs, too.

When we speak of free software, we are referring to freedom, not price. Our General Public Licenses are designed to make sure that you have the freedom to distribute copies of free software (and charge for them if you wish), that you receive source code or can get it if you want it, that you can change the software or use pieces of it in new free programs, and that you know you can do these things.

To protect your rights, we need to prevent others from denying you these rights or asking you to surrender the rights. Therefore, you have certain responsibilities if you distribute copies of the software, or if you modify it: responsibilities to respect the freedom of others.
For example, if you distribute copies of such a program, whether gratis or for a fee, you must pass on to the recipients the same freedoms that you received. You must make sure that they, too, receive or can get the source code. And you must show them these terms so they know their rights. Developers that use the GNU GPL protect your rights with two steps: (1) assert copyright on the software, and (2) offer you this License giving you legal permission to copy, distribute and/or modify it. For the developers' and authors' protection, the GPL clearly explains that there is no warranty for this free software. For both users' and authors' sake, the GPL requires that modified versions be marked as changed, so that their problems will not be attributed erroneously to authors of previous versions. Some devices are designed to deny users access to install or run modified versions of the software inside them, although the manufacturer can do so. This is fundamentally incompatible with the aim of protecting users' freedom to change the software. The systematic pattern of such abuse occurs in the area of products for individuals to use, which is precisely where it is most unacceptable. Therefore, we have designed this version of the GPL to prohibit the practice for those products. If such problems arise substantially in other domains, we stand ready to extend this provision to those domains in future versions of the GPL, as needed to protect the freedom of users. Finally, every program is threatened constantly by software patents. States should not allow patents to restrict development and use of software on general-purpose computers, but in those that do, we wish to avoid the special danger that patents applied to a free program could make it effectively proprietary. To prevent this, the GPL assures that patents cannot be used to render the program non-free. The precise terms and conditions for copying, distribution and modification follow. TERMS AND CONDITIONS 0. Definitions. "This License" refers to version 3 of the GNU General Public License. "Copyright" also means copyright-like laws that apply to other kinds of works, such as semiconductor masks. "The Program" refers to any copyrightable work licensed under this License. Each licensee is addressed as "you". "Licensees" and "recipients" may be individuals or organizations. To "modify" a work means to copy from or adapt all or part of the work in a fashion requiring copyright permission, other than the making of an exact copy. The resulting work is called a "modified version" of the earlier work or a work "based on" the earlier work. A "covered work" means either the unmodified Program or a work based on the Program. To "propagate" a work means to do anything with it that, without permission, would make you directly or secondarily liable for infringement under applicable copyright law, except executing it on a computer or modifying a private copy. Propagation includes copying, distribution (with or without modification), making available to the public, and in some countries other activities as well. To "convey" a work means any kind of propagation that enables other parties to make or receive copies. Mere interaction with a user through a computer network, with no transfer of a copy, is not conveying. 
An interactive user interface displays "Appropriate Legal Notices" to the extent that it includes a convenient and prominently visible feature that (1) displays an appropriate copyright notice, and (2) tells the user that there is no warranty for the work (except to the extent that warranties are provided), that licensees may convey the work under this License, and how to view a copy of this License. If the interface presents a list of user commands or options, such as a menu, a prominent item in the list meets this criterion. 1. Source Code. The "source code" for a work means the preferred form of the work for making modifications to it. "Object code" means any non-source form of a work. A "Standard Interface" means an interface that either is an official standard defined by a recognized standards body, or, in the case of interfaces specified for a particular programming language, one that is widely used among developers working in that language. The "System Libraries" of an executable work include anything, other than the work as a whole, that (a) is included in the normal form of packaging a Major Component, but which is not part of that Major Component, and (b) serves only to enable use of the work with that Major Component, or to implement a Standard Interface for which an implementation is available to the public in source code form. A "Major Component", in this context, means a major essential component (kernel, window system, and so on) of the specific operating system (if any) on which the executable work runs, or a compiler used to produce the work, or an object code interpreter used to run it. The "Corresponding Source" for a work in object code form means all the source code needed to generate, install, and (for an executable work) run the object code and to modify the work, including scripts to control those activities. However, it does not include the work's System Libraries, or general-purpose tools or generally available free programs which are used unmodified in performing those activities but which are not part of the work. For example, Corresponding Source includes interface definition files associated with source files for the work, and the source code for shared libraries and dynamically linked subprograms that the work is specifically designed to require, such as by intimate data communication or control flow between those subprograms and other parts of the work. The Corresponding Source need not include anything that users can regenerate automatically from other parts of the Corresponding Source. The Corresponding Source for a work in source code form is that same work. 2. Basic Permissions. All rights granted under this License are granted for the term of copyright on the Program, and are irrevocable provided the stated conditions are met. This License explicitly affirms your unlimited permission to run the unmodified Program. The output from running a covered work is covered by this License only if the output, given its content, constitutes a covered work. This License acknowledges your rights of fair use or other equivalent, as provided by copyright law. You may make, run and propagate covered works that you do not convey, without conditions so long as your license otherwise remains in force. 
You may convey covered works to others for the sole purpose of having them make modifications exclusively for you, or provide you with facilities for running those works, provided that you comply with the terms of this License in conveying all material for which you do not control copyright. Those thus making or running the covered works for you must do so exclusively on your behalf, under your direction and control, on terms that prohibit them from making any copies of your copyrighted material outside their relationship with you. Conveying under any other circumstances is permitted solely under the conditions stated below. Sublicensing is not allowed; section 10 makes it unnecessary. 3. Protecting Users' Legal Rights From Anti-Circumvention Law. No covered work shall be deemed part of an effective technological measure under any applicable law fulfilling obligations under article 11 of the WIPO copyright treaty adopted on 20 December 1996, or similar laws prohibiting or restricting circumvention of such measures. When you convey a covered work, you waive any legal power to forbid circumvention of technological measures to the extent such circumvention is effected by exercising rights under this License with respect to the covered work, and you disclaim any intention to limit operation or modification of the work as a means of enforcing, against the work's users, your or third parties' legal rights to forbid circumvention of technological measures. 4. Conveying Verbatim Copies. You may convey verbatim copies of the Program's source code as you receive it, in any medium, provided that you conspicuously and appropriately publish on each copy an appropriate copyright notice; keep intact all notices stating that this License and any non-permissive terms added in accord with section 7 apply to the code; keep intact all notices of the absence of any warranty; and give all recipients a copy of this License along with the Program. You may charge any price or no price for each copy that you convey, and you may offer support or warranty protection for a fee. 5. Conveying Modified Source Versions. You may convey a work based on the Program, or the modifications to produce it from the Program, in the form of source code under the terms of section 4, provided that you also meet all of these conditions: a) The work must carry prominent notices stating that you modified it, and giving a relevant date. b) The work must carry prominent notices stating that it is released under this License and any conditions added under section 7. This requirement modifies the requirement in section 4 to "keep intact all notices". c) You must license the entire work, as a whole, under this License to anyone who comes into possession of a copy. This License will therefore apply, along with any applicable section 7 additional terms, to the whole of the work, and all its parts, regardless of how they are packaged. This License gives no permission to license the work in any other way, but it does not invalidate such permission if you have separately received it. d) If the work has interactive user interfaces, each must display Appropriate Legal Notices; however, if the Program has interactive interfaces that do not display Appropriate Legal Notices, your work need not make them do so. 
A compilation of a covered work with other separate and independent works, which are not by their nature extensions of the covered work, and which are not combined with it such as to form a larger program, in or on a volume of a storage or distribution medium, is called an "aggregate" if the compilation and its resulting copyright are not used to limit the access or legal rights of the compilation's users beyond what the individual works permit. Inclusion of a covered work in an aggregate does not cause this License to apply to the other parts of the aggregate. 6. Conveying Non-Source Forms. You may convey a covered work in object code form under the terms of sections 4 and 5, provided that you also convey the machine-readable Corresponding Source under the terms of this License, in one of these ways: a) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by the Corresponding Source fixed on a durable physical medium customarily used for software interchange. b) Convey the object code in, or embodied in, a physical product (including a physical distribution medium), accompanied by a written offer, valid for at least three years and valid for as long as you offer spare parts or customer support for that product model, to give anyone who possesses the object code either (1) a copy of the Corresponding Source for all the software in the product that is covered by this License, on a durable physical medium customarily used for software interchange, for a price no more than your reasonable cost of physically performing this conveying of source, or (2) access to copy the Corresponding Source from a network server at no charge. c) Convey individual copies of the object code with a copy of the written offer to provide the Corresponding Source. This alternative is allowed only occasionally and noncommercially, and only if you received the object code with such an offer, in accord with subsection 6b. d) Convey the object code by offering access from a designated place (gratis or for a charge), and offer equivalent access to the Corresponding Source in the same way through the same place at no further charge. You need not require recipients to copy the Corresponding Source along with the object code. If the place to copy the object code is a network server, the Corresponding Source may be on a different server (operated by you or a third party) that supports equivalent copying facilities, provided you maintain clear directions next to the object code saying where to find the Corresponding Source. Regardless of what server hosts the Corresponding Source, you remain obligated to ensure that it is available for as long as needed to satisfy these requirements. e) Convey the object code using peer-to-peer transmission, provided you inform other peers where the object code and Corresponding Source of the work are being offered to the general public at no charge under subsection 6d. A separable portion of the object code, whose source code is excluded from the Corresponding Source as a System Library, need not be included in conveying the object code work. A "User Product" is either (1) a "consumer product", which means any tangible personal property which is normally used for personal, family, or household purposes, or (2) anything designed or sold for incorporation into a dwelling. In determining whether a product is a consumer product, doubtful cases shall be resolved in favor of coverage. 
For a particular product received by a particular user, "normally used" refers to a typical or common use of that class of product, regardless of the status of the particular user or of the way in which the particular user actually uses, or expects or is expected to use, the product. A product is a consumer product regardless of whether the product has substantial commercial, industrial or non-consumer uses, unless such uses represent the only significant mode of use of the product. "Installation Information" for a User Product means any methods, procedures, authorization keys, or other information required to install and execute modified versions of a covered work in that User Product from a modified version of its Corresponding Source. The information must suffice to ensure that the continued functioning of the modified object code is in no case prevented or interfered with solely because modification has been made. If you convey an object code work under this section in, or with, or specifically for use in, a User Product, and the conveying occurs as part of a transaction in which the right of possession and use of the User Product is transferred to the recipient in perpetuity or for a fixed term (regardless of how the transaction is characterized), the Corresponding Source conveyed under this section must be accompanied by the Installation Information. But this requirement does not apply if neither you nor any third party retains the ability to install modified object code on the User Product (for example, the work has been installed in ROM). The requirement to provide Installation Information does not include a requirement to continue to provide support service, warranty, or updates for a work that has been modified or installed by the recipient, or for the User Product in which it has been modified or installed. Access to a network may be denied when the modification itself materially and adversely affects the operation of the network or violates the rules and protocols for communication across the network. Corresponding Source conveyed, and Installation Information provided, in accord with this section must be in a format that is publicly documented (and with an implementation available to the public in source code form), and must require no special password or key for unpacking, reading or copying. 7. Additional Terms. "Additional permissions" are terms that supplement the terms of this License by making exceptions from one or more of its conditions. Additional permissions that are applicable to the entire Program shall be treated as though they were included in this License, to the extent that they are valid under applicable law. If additional permissions apply only to part of the Program, that part may be used separately under those permissions, but the entire Program remains governed by this License without regard to the additional permissions. When you convey a copy of a covered work, you may at your option remove any additional permissions from that copy, or from any part of it. (Additional permissions may be written to require their own removal in certain cases when you modify the work.) You may place additional permissions on material, added by you to a covered work, for which you have or can give appropriate copyright permission. 
Notwithstanding any other provision of this License, for material you add to a covered work, you may (if authorized by the copyright holders of that material) supplement the terms of this License with terms: a) Disclaiming warranty or limiting liability differently from the terms of sections 15 and 16 of this License; or b) Requiring preservation of specified reasonable legal notices or author attributions in that material or in the Appropriate Legal Notices displayed by works containing it; or c) Prohibiting misrepresentation of the origin of that material, or requiring that modified versions of such material be marked in reasonable ways as different from the original version; or d) Limiting the use for publicity purposes of names of licensors or authors of the material; or e) Declining to grant rights under trademark law for use of some trade names, trademarks, or service marks; or f) Requiring indemnification of licensors and authors of that material by anyone who conveys the material (or modified versions of it) with contractual assumptions of liability to the recipient, for any liability that these contractual assumptions directly impose on those licensors and authors. All other non-permissive additional terms are considered "further restrictions" within the meaning of section 10. If the Program as you received it, or any part of it, contains a notice stating that it is governed by this License along with a term that is a further restriction, you may remove that term. If a license document contains a further restriction but permits relicensing or conveying under this License, you may add to a covered work material governed by the terms of that license document, provided that the further restriction does not survive such relicensing or conveying. If you add terms to a covered work in accord with this section, you must place, in the relevant source files, a statement of the additional terms that apply to those files, or a notice indicating where to find the applicable terms. Additional terms, permissive or non-permissive, may be stated in the form of a separately written license, or stated as exceptions; the above requirements apply either way. 8. Termination. You may not propagate or modify a covered work except as expressly provided under this License. Any attempt otherwise to propagate or modify it is void, and will automatically terminate your rights under this License (including any patent licenses granted under the third paragraph of section 11). However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, you do not qualify to receive new licenses for the same material under section 10. 9. 
Acceptance Not Required for Having Copies. You are not required to accept this License in order to receive or run a copy of the Program. Ancillary propagation of a covered work occurring solely as a consequence of using peer-to-peer transmission to receive a copy likewise does not require acceptance. However, nothing other than this License grants you permission to propagate or modify any covered work. These actions infringe copyright if you do not accept this License. Therefore, by modifying or propagating a covered work, you indicate your acceptance of this License to do so. 10. Automatic Licensing of Downstream Recipients. Each time you convey a covered work, the recipient automatically receives a license from the original licensors, to run, modify and propagate that work, subject to this License. You are not responsible for enforcing compliance by third parties with this License. An "entity transaction" is a transaction transferring control of an organization, or substantially all assets of one, or subdividing an organization, or merging organizations. If propagation of a covered work results from an entity transaction, each party to that transaction who receives a copy of the work also receives whatever licenses to the work the party's predecessor in interest had or could give under the previous paragraph, plus a right to possession of the Corresponding Source of the work from the predecessor in interest, if the predecessor has it or can get it with reasonable efforts. You may not impose any further restrictions on the exercise of the rights granted or affirmed under this License. For example, you may not impose a license fee, royalty, or other charge for exercise of rights granted under this License, and you may not initiate litigation (including a cross-claim or counterclaim in a lawsuit) alleging that any patent claim is infringed by making, using, selling, offering for sale, or importing the Program or any portion of it. 11. Patents. A "contributor" is a copyright holder who authorizes use under this License of the Program or a work on which the Program is based. The work thus licensed is called the contributor's "contributor version". A contributor's "essential patent claims" are all patent claims owned or controlled by the contributor, whether already acquired or hereafter acquired, that would be infringed by some manner, permitted by this License, of making, using, or selling its contributor version, but do not include claims that would be infringed only as a consequence of further modification of the contributor version. For purposes of this definition, "control" includes the right to grant patent sublicenses in a manner consistent with the requirements of this License. Each contributor grants you a non-exclusive, worldwide, royalty-free patent license under the contributor's essential patent claims, to make, use, sell, offer for sale, import and otherwise run, modify and propagate the contents of its contributor version. In the following three paragraphs, a "patent license" is any express agreement or commitment, however denominated, not to enforce a patent (such as an express permission to practice a patent or covenant not to sue for patent infringement). To "grant" such a patent license to a party means to make such an agreement or commitment not to enforce a patent against the party. 
If you convey a covered work, knowingly relying on a patent license, and the Corresponding Source of the work is not available for anyone to copy, free of charge and under the terms of this License, through a publicly available network server or other readily accessible means, then you must either (1) cause the Corresponding Source to be so available, or (2) arrange to deprive yourself of the benefit of the patent license for this particular work, or (3) arrange, in a manner consistent with the requirements of this License, to extend the patent license to downstream recipients. "Knowingly relying" means you have actual knowledge that, but for the patent license, your conveying the covered work in a country, or your recipient's use of the covered work in a country, would infringe one or more identifiable patents in that country that you have reason to believe are valid. If, pursuant to or in connection with a single transaction or arrangement, you convey, or propagate by procuring conveyance of, a covered work, and grant a patent license to some of the parties receiving the covered work authorizing them to use, propagate, modify or convey a specific copy of the covered work, then the patent license you grant is automatically extended to all recipients of the covered work and works based on it. A patent license is "discriminatory" if it does not include within the scope of its coverage, prohibits the exercise of, or is conditioned on the non-exercise of one or more of the rights that are specifically granted under this License. You may not convey a covered work if you are a party to an arrangement with a third party that is in the business of distributing software, under which you make payment to the third party based on the extent of your activity of conveying the work, and under which the third party grants, to any of the parties who would receive the covered work from you, a discriminatory patent license (a) in connection with copies of the covered work conveyed by you (or copies made from those copies), or (b) primarily for and in connection with specific products or compilations that contain the covered work, unless you entered into that arrangement, or that patent license was granted, prior to 28 March 2007. Nothing in this License shall be construed as excluding or limiting any implied license or other defenses to infringement that may otherwise be available to you under applicable patent law. 12. No Surrender of Others' Freedom. If conditions are imposed on you (whether by court order, agreement or otherwise) that contradict the conditions of this License, they do not excuse you from the conditions of this License. If you cannot convey a covered work so as to satisfy simultaneously your obligations under this License and any other pertinent obligations, then as a consequence you may not convey it at all. For example, if you agree to terms that obligate you to collect a royalty for further conveying from those to whom you convey the Program, the only way you could satisfy both those terms and this License would be to refrain entirely from conveying the Program. 13. Use with the GNU Affero General Public License. Notwithstanding any other provision of this License, you have permission to link or combine any covered work with a work licensed under version 3 of the GNU Affero General Public License into a single combined work, and to convey the resulting work. 
The terms of this License will continue to apply to the part which is the covered work, but the special requirements of the GNU Affero General Public License, section 13, concerning interaction through a network will apply to the combination as such. 14. Revised Versions of this License. The Free Software Foundation may publish revised and/or new versions of the GNU General Public License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. Each version is given a distinguishing version number. If the Program specifies that a certain numbered version of the GNU General Public License "or any later version" applies to it, you have the option of following the terms and conditions either of that numbered version or of any later version published by the Free Software Foundation. If the Program does not specify a version number of the GNU General Public License, you may choose any version ever published by the Free Software Foundation. If the Program specifies that a proxy can decide which future versions of the GNU General Public License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Program. Later license versions may give you additional or different permissions. However, no additional obligations are imposed on any author or copyright holder as a result of your choosing to follow a later version. 15. Disclaimer of Warranty. THERE IS NO WARRANTY FOR THE PROGRAM, TO THE EXTENT PERMITTED BY APPLICABLE LAW. EXCEPT WHEN OTHERWISE STATED IN WRITING THE COPYRIGHT HOLDERS AND/OR OTHER PARTIES PROVIDE THE PROGRAM "AS IS" WITHOUT WARRANTY OF ANY KIND, EITHER EXPRESSED OR IMPLIED, INCLUDING, BUT NOT LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR PURPOSE. THE ENTIRE RISK AS TO THE QUALITY AND PERFORMANCE OF THE PROGRAM IS WITH YOU. SHOULD THE PROGRAM PROVE DEFECTIVE, YOU ASSUME THE COST OF ALL NECESSARY SERVICING, REPAIR OR CORRECTION. 16. Limitation of Liability. IN NO EVENT UNLESS REQUIRED BY APPLICABLE LAW OR AGREED TO IN WRITING WILL ANY COPYRIGHT HOLDER, OR ANY OTHER PARTY WHO MODIFIES AND/OR CONVEYS THE PROGRAM AS PERMITTED ABOVE, BE LIABLE TO YOU FOR DAMAGES, INCLUDING ANY GENERAL, SPECIAL, INCIDENTAL OR CONSEQUENTIAL DAMAGES ARISING OUT OF THE USE OR INABILITY TO USE THE PROGRAM (INCLUDING BUT NOT LIMITED TO LOSS OF DATA OR DATA BEING RENDERED INACCURATE OR LOSSES SUSTAINED BY YOU OR THIRD PARTIES OR A FAILURE OF THE PROGRAM TO OPERATE WITH ANY OTHER PROGRAMS), EVEN IF SUCH HOLDER OR OTHER PARTY HAS BEEN ADVISED OF THE POSSIBILITY OF SUCH DAMAGES. 17. Interpretation of Sections 15 and 16. If the disclaimer of warranty and limitation of liability provided above cannot be given local legal effect according to their terms, reviewing courts shall apply local law that most closely approximates an absolute waiver of all civil liability in connection with the Program, unless a warranty or assumption of liability accompanies a copy of the Program in return for a fee. END OF TERMS AND CONDITIONS How to Apply These Terms to Your New Programs If you develop a new program, and you want it to be of the greatest possible use to the public, the best way to achieve this is to make it free software which everyone can redistribute and change under these terms. To do so, attach the following notices to the program. 
It is safest to attach them to the start of each source file to most effectively state the exclusion of warranty; and each file should have at least the "copyright" line and a pointer to where the full notice is found.

     <one line to give the program's name and a brief idea of what it does.>
     Copyright (C) <year>  <name of author>

     This program is free software: you can redistribute it and/or modify it under the terms of the GNU General Public License as published by the Free Software Foundation, either version 3 of the License, or (at your option) any later version.

     This program is distributed in the hope that it will be useful, but WITHOUT ANY WARRANTY; without even the implied warranty of MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the GNU General Public License for more details.

     You should have received a copy of the GNU General Public License along with this program. If not, see <http://www.gnu.org/licenses/>.

Also add information on how to contact you by electronic and paper mail. If the program does terminal interaction, make it output a short notice like this when it starts in an interactive mode:

     <program>  Copyright (C) <year>  <name of author>
     This program comes with ABSOLUTELY NO WARRANTY; for details type `show w'.
     This is free software, and you are welcome to redistribute it under certain conditions; type `show c' for details.

The hypothetical commands `show w' and `show c' should show the appropriate parts of the General Public License. Of course, your program's commands might be different; for a GUI interface, you would use an "about box". You should also get your employer (if you work as a programmer) or school, if any, to sign a "copyright disclaimer" for the program, if necessary. For more information on this, and how to apply and follow the GNU GPL, see <http://www.gnu.org/licenses/>. The GNU General Public License does not permit incorporating your program into proprietary programs. If your program is a subroutine library, you may consider it more useful to permit linking proprietary applications with the library. If this is what you want to do, use the GNU Lesser General Public License instead of this License. But first, please read <http://www.gnu.org/philosophy/why-not-lgpl.html>.

File: gsl-ref.info, Node: GNU Free Documentation License, Next: Index, Prev: GNU General Public License, Up: Top

53 GNU Free Documentation License
*********************************

GNU Free Documentation License Version 1.3, 3 November 2008 Copyright (C) 2000, 2001, 2002, 2007, 2008 Free Software Foundation, Inc. Everyone is permitted to copy and distribute verbatim copies of this license document, but changing it is not allowed. 0. PREAMBLE The purpose of this License is to make a manual, textbook, or other functional and useful document "free" in the sense of freedom: to assure everyone the effective freedom to copy and redistribute it, with or without modifying it, either commercially or noncommercially. Secondarily, this License preserves for the author and publisher a way to get credit for their work, while not being considered responsible for modifications made by others. This License is a kind of "copyleft", which means that derivative works of the document must themselves be free in the same sense. It complements the GNU General Public License, which is a copyleft license designed for free software. We have designed this License in order to use it for manuals for free software, because free software needs free documentation: a free program should come with manuals providing the same freedoms that the software does. But this License is not limited to software manuals; it can be used for any textual work, regardless of subject matter or whether it is published as a printed book.
We recommend this License principally for works whose purpose is instruction or reference. 1. APPLICABILITY AND DEFINITIONS This License applies to any manual or other work, in any medium, that contains a notice placed by the copyright holder saying it can be distributed under the terms of this License. Such a notice grants a world-wide, royalty-free license, unlimited in duration, to use that work under the conditions stated herein. The "Document", below, refers to any such manual or work. Any member of the public is a licensee, and is addressed as "you". You accept the license if you copy, modify or distribute the work in a way requiring permission under copyright law. A "Modified Version" of the Document means any work containing the Document or a portion of it, either copied verbatim, or with modifications and/or translated into another language. A "Secondary Section" is a named appendix or a front-matter section of the Document that deals exclusively with the relationship of the publishers or authors of the Document to the Document's overall subject (or to related matters) and contains nothing that could fall directly within that overall subject. (Thus, if the Document is in part a textbook of mathematics, a Secondary Section may not explain any mathematics.) The relationship could be a matter of historical connection with the subject or with related matters, or of legal, commercial, philosophical, ethical or political position regarding them. The "Invariant Sections" are certain Secondary Sections whose titles are designated, as being those of Invariant Sections, in the notice that says that the Document is released under this License. If a section does not fit the above definition of Secondary then it is not allowed to be designated as Invariant. The Document may contain zero Invariant Sections. If the Document does not identify any Invariant Sections then there are none. The "Cover Texts" are certain short passages of text that are listed, as Front-Cover Texts or Back-Cover Texts, in the notice that says that the Document is released under this License. A Front-Cover Text may be at most 5 words, and a Back-Cover Text may be at most 25 words. A "Transparent" copy of the Document means a machine-readable copy, represented in a format whose specification is available to the general public, that is suitable for revising the document straightforwardly with generic text editors or (for images composed of pixels) generic paint programs or (for drawings) some widely available drawing editor, and that is suitable for input to text formatters or for automatic translation to a variety of formats suitable for input to text formatters. A copy made in an otherwise Transparent file format whose markup, or absence of markup, has been arranged to thwart or discourage subsequent modification by readers is not Transparent. An image format is not Transparent if used for any substantial amount of text. A copy that is not "Transparent" is called "Opaque". Examples of suitable formats for Transparent copies include plain ASCII without markup, Texinfo input format, LaTeX input format, SGML or XML using a publicly available DTD, and standard-conforming simple HTML, PostScript or PDF designed for human modification. Examples of transparent image formats include PNG, XCF and JPG. 
Opaque formats include proprietary formats that can be read and edited only by proprietary word processors, SGML or XML for which the DTD and/or processing tools are not generally available, and the machine-generated HTML, PostScript or PDF produced by some word processors for output purposes only. The "Title Page" means, for a printed book, the title page itself, plus such following pages as are needed to hold, legibly, the material this License requires to appear in the title page. For works in formats which do not have any title page as such, "Title Page" means the text near the most prominent appearance of the work's title, preceding the beginning of the body of the text. The "publisher" means any person or entity that distributes copies of the Document to the public. A section "Entitled XYZ" means a named subunit of the Document whose title either is precisely XYZ or contains XYZ in parentheses following text that translates XYZ in another language. (Here XYZ stands for a specific section name mentioned below, such as "Acknowledgements", "Dedications", "Endorsements", or "History".) To "Preserve the Title" of such a section when you modify the Document means that it remains a section "Entitled XYZ" according to this definition. The Document may include Warranty Disclaimers next to the notice which states that this License applies to the Document. These Warranty Disclaimers are considered to be included by reference in this License, but only as regards disclaiming warranties: any other implication that these Warranty Disclaimers may have is void and has no effect on the meaning of this License. 2. VERBATIM COPYING You may copy and distribute the Document in any medium, either commercially or noncommercially, provided that this License, the copyright notices, and the license notice saying this License applies to the Document are reproduced in all copies, and that you add no other conditions whatsoever to those of this License. You may not use technical measures to obstruct or control the reading or further copying of the copies you make or distribute. However, you may accept compensation in exchange for copies. If you distribute a large enough number of copies you must also follow the conditions in section 3. You may also lend copies, under the same conditions stated above, and you may publicly display copies. 3. COPYING IN QUANTITY If you publish printed copies (or copies in media that commonly have printed covers) of the Document, numbering more than 100, and the Document's license notice requires Cover Texts, you must enclose the copies in covers that carry, clearly and legibly, all these Cover Texts: Front-Cover Texts on the front cover, and Back-Cover Texts on the back cover. Both covers must also clearly and legibly identify you as the publisher of these copies. The front cover must present the full title with all words of the title equally prominent and visible. You may add other material on the covers in addition. Copying with changes limited to the covers, as long as they preserve the title of the Document and satisfy these conditions, can be treated as verbatim copying in other respects. If the required texts for either cover are too voluminous to fit legibly, you should put the first ones listed (as many as fit reasonably) on the actual cover, and continue the rest onto adjacent pages. 
If you publish or distribute Opaque copies of the Document numbering more than 100, you must either include a machine-readable Transparent copy along with each Opaque copy, or state in or with each Opaque copy a computer-network location from which the general network-using public has access to download using public-standard network protocols a complete Transparent copy of the Document, free of added material. If you use the latter option, you must take reasonably prudent steps, when you begin distribution of Opaque copies in quantity, to ensure that this Transparent copy will remain thus accessible at the stated location until at least one year after the last time you distribute an Opaque copy (directly or through your agents or retailers) of that edition to the public. It is requested, but not required, that you contact the authors of the Document well before redistributing any large number of copies, to give them a chance to provide you with an updated version of the Document. 4. MODIFICATIONS You may copy and distribute a Modified Version of the Document under the conditions of sections 2 and 3 above, provided that you release the Modified Version under precisely this License, with the Modified Version filling the role of the Document, thus licensing distribution and modification of the Modified Version to whoever possesses a copy of it. In addition, you must do these things in the Modified Version: A. Use in the Title Page (and on the covers, if any) a title distinct from that of the Document, and from those of previous versions (which should, if there were any, be listed in the History section of the Document). You may use the same title as a previous version if the original publisher of that version gives permission. B. List on the Title Page, as authors, one or more persons or entities responsible for authorship of the modifications in the Modified Version, together with at least five of the principal authors of the Document (all of its principal authors, if it has fewer than five), unless they release you from this requirement. C. State on the Title page the name of the publisher of the Modified Version, as the publisher. D. Preserve all the copyright notices of the Document. E. Add an appropriate copyright notice for your modifications adjacent to the other copyright notices. F. Include, immediately after the copyright notices, a license notice giving the public permission to use the Modified Version under the terms of this License, in the form shown in the Addendum below. G. Preserve in that license notice the full lists of Invariant Sections and required Cover Texts given in the Document's license notice. H. Include an unaltered copy of this License. I. Preserve the section Entitled "History", Preserve its Title, and add to it an item stating at least the title, year, new authors, and publisher of the Modified Version as given on the Title Page. If there is no section Entitled "History" in the Document, create one stating the title, year, authors, and publisher of the Document as given on its Title Page, then add an item describing the Modified Version as stated in the previous sentence. J. Preserve the network location, if any, given in the Document for public access to a Transparent copy of the Document, and likewise the network locations given in the Document for previous versions it was based on. These may be placed in the "History" section. 
You may omit a network location for a work that was published at least four years before the Document itself, or if the original publisher of the version it refers to gives permission. K. For any section Entitled "Acknowledgements" or "Dedications", Preserve the Title of the section, and preserve in the section all the substance and tone of each of the contributor acknowledgements and/or dedications given therein. L. Preserve all the Invariant Sections of the Document, unaltered in their text and in their titles. Section numbers or the equivalent are not considered part of the section titles. M. Delete any section Entitled "Endorsements". Such a section may not be included in the Modified Version. N. Do not retitle any existing section to be Entitled "Endorsements" or to conflict in title with any Invariant Section. O. Preserve any Warranty Disclaimers. If the Modified Version includes new front-matter sections or appendices that qualify as Secondary Sections and contain no material copied from the Document, you may at your option designate some or all of these sections as invariant. To do this, add their titles to the list of Invariant Sections in the Modified Version's license notice. These titles must be distinct from any other section titles. You may add a section Entitled "Endorsements", provided it contains nothing but endorsements of your Modified Version by various parties--for example, statements of peer review or that the text has been approved by an organization as the authoritative definition of a standard. You may add a passage of up to five words as a Front-Cover Text, and a passage of up to 25 words as a Back-Cover Text, to the end of the list of Cover Texts in the Modified Version. Only one passage of Front-Cover Text and one of Back-Cover Text may be added by (or through arrangements made by) any one entity. If the Document already includes a cover text for the same cover, previously added by you or by arrangement made by the same entity you are acting on behalf of, you may not add another; but you may replace the old one, on explicit permission from the previous publisher that added the old one. The author(s) and publisher(s) of the Document do not by this License give permission to use their names for publicity for or to assert or imply endorsement of any Modified Version. 5. COMBINING DOCUMENTS You may combine the Document with other documents released under this License, under the terms defined in section 4 above for modified versions, provided that you include in the combination all of the Invariant Sections of all of the original documents, unmodified, and list them all as Invariant Sections of your combined work in its license notice, and that you preserve all their Warranty Disclaimers. The combined work need only contain one copy of this License, and multiple identical Invariant Sections may be replaced with a single copy. If there are multiple Invariant Sections with the same name but different contents, make the title of each such section unique by adding at the end of it, in parentheses, the name of the original author or publisher of that section if known, or else a unique number. Make the same adjustment to the section titles in the list of Invariant Sections in the license notice of the combined work. In the combination, you must combine any sections Entitled "History" in the various original documents, forming one section Entitled "History"; likewise combine any sections Entitled "Acknowledgements", and any sections Entitled "Dedications". 
You must delete all sections Entitled "Endorsements". 6. COLLECTIONS OF DOCUMENTS You may make a collection consisting of the Document and other documents released under this License, and replace the individual copies of this License in the various documents with a single copy that is included in the collection, provided that you follow the rules of this License for verbatim copying of each of the documents in all other respects. You may extract a single document from such a collection, and distribute it individually under this License, provided you insert a copy of this License into the extracted document, and follow this License in all other respects regarding verbatim copying of that document. 7. AGGREGATION WITH INDEPENDENT WORKS A compilation of the Document or its derivatives with other separate and independent documents or works, in or on a volume of a storage or distribution medium, is called an "aggregate" if the copyright resulting from the compilation is not used to limit the legal rights of the compilation's users beyond what the individual works permit. When the Document is included in an aggregate, this License does not apply to the other works in the aggregate which are not themselves derivative works of the Document. If the Cover Text requirement of section 3 is applicable to these copies of the Document, then if the Document is less than one half of the entire aggregate, the Document's Cover Texts may be placed on covers that bracket the Document within the aggregate, or the electronic equivalent of covers if the Document is in electronic form. Otherwise they must appear on printed covers that bracket the whole aggregate. 8. TRANSLATION Translation is considered a kind of modification, so you may distribute translations of the Document under the terms of section 4. Replacing Invariant Sections with translations requires special permission from their copyright holders, but you may include translations of some or all Invariant Sections in addition to the original versions of these Invariant Sections. You may include a translation of this License, and all the license notices in the Document, and any Warranty Disclaimers, provided that you also include the original English version of this License and the original versions of those notices and disclaimers. In case of a disagreement between the translation and the original version of this License or a notice or disclaimer, the original version will prevail. If a section in the Document is Entitled "Acknowledgements", "Dedications", or "History", the requirement (section 4) to Preserve its Title (section 1) will typically require changing the actual title. 9. TERMINATION You may not copy, modify, sublicense, or distribute the Document except as expressly provided under this License. Any attempt otherwise to copy, modify, sublicense, or distribute it is void, and will automatically terminate your rights under this License. However, if you cease all violation of this License, then your license from a particular copyright holder is reinstated (a) provisionally, unless and until the copyright holder explicitly and finally terminates your license, and (b) permanently, if the copyright holder fails to notify you of the violation by some reasonable means prior to 60 days after the cessation. 
Moreover, your license from a particular copyright holder is reinstated permanently if the copyright holder notifies you of the violation by some reasonable means, this is the first time you have received notice of violation of this License (for any work) from that copyright holder, and you cure the violation prior to 30 days after your receipt of the notice. Termination of your rights under this section does not terminate the licenses of parties who have received copies or rights from you under this License. If your rights have been terminated and not permanently reinstated, receipt of a copy of some or all of the same material does not give you any rights to use it. 10. FUTURE REVISIONS OF THIS LICENSE The Free Software Foundation may publish new, revised versions of the GNU Free Documentation License from time to time. Such new versions will be similar in spirit to the present version, but may differ in detail to address new problems or concerns. See http://www.gnu.org/copyleft/. Each version of the License is given a distinguishing version number. If the Document specifies that a particular numbered version of this License "or any later version" applies to it, you have the option of following the terms and conditions either of that specified version or of any later version that has been published (not as a draft) by the Free Software Foundation. If the Document does not specify a version number of this License, you may choose any version ever published (not as a draft) by the Free Software Foundation. If the Document specifies that a proxy can decide which future versions of this License can be used, that proxy's public statement of acceptance of a version permanently authorizes you to choose that version for the Document. 11. RELICENSING "Massive Multiauthor Collaboration Site" (or "MMC Site") means any World Wide Web server that publishes copyrightable works and also provides prominent facilities for anybody to edit those works. A public wiki that anybody can edit is an example of such a server. A "Massive Multiauthor Collaboration" (or "MMC") contained in the site means any set of copyrightable works thus published on the MMC site. "CC-BY-SA" means the Creative Commons Attribution-Share Alike 3.0 license published by Creative Commons Corporation, a not-for-profit corporation with a principal place of business in San Francisco, California, as well as future copyleft versions of that license published by that same organization. "Incorporate" means to publish or republish a Document, in whole or in part, as part of another Document. An MMC is "eligible for relicensing" if it is licensed under this License, and if all works that were first published under this License somewhere other than this MMC, and subsequently incorporated in whole or in part into the MMC, (1) had no cover texts or invariant sections, and (2) were thus incorporated prior to November 1, 2008. The operator of an MMC Site may republish an MMC contained in the site under CC-BY-SA on the same site at any time before August 1, 2009, provided the MMC is eligible for relicensing. ADDENDUM: How to use this License for your documents To use this License in a document you have written, include a copy of the License in the document and put the following copyright and license notices just after the title page: Copyright (c) YEAR YOUR NAME. 
Permission is granted to copy, distribute and/or modify this document under the terms of the GNU Free Documentation License, Version 1.3 or any later version published by the Free Software Foundation; with no Invariant Sections, no Front-Cover Texts, and no Back-Cover Texts. A copy of the license is included in the section entitled "GNU Free Documentation License". If you have Invariant Sections, Front-Cover Texts and Back-Cover Texts, replace the "with...Texts." line with this: with the Invariant Sections being LIST THEIR TITLES, with the Front-Cover Texts being LIST, and with the Back-Cover Texts being LIST. If you have Invariant Sections without Cover Texts, or some other combination of the three, merge those two alternatives to suit the situation. If your document contains nontrivial examples of program code, we recommend releasing these examples in parallel under your choice of free software license, such as the GNU General Public License, to permit their use in free software. * genindex  File: gsl-ref.info, Node: Index, Prev: GNU Free Documentation License, Up: Top Index ***** [index] * Menu: * $, shell prompt: Further Information. (line 26) * 2D histograms: Example programs for histograms. (line 64) * 2D random direction vector: Spherical Vector Distributions. (line 10) * 3-j symbols: Coupling Coefficients. (line 6) * 3D random direction vector: Spherical Vector Distributions. (line 33) * 6-j symbols: Coupling Coefficients. (line 6) * 9-j symbols: Coupling Coefficients. (line 6) * acceleration of series: References and Further Reading<25>. (line 12) * acosh: Elementary Functions. (line 36) * Adams method: Stepping Functions. (line 164) * Adaptive step-size control, differential equations: Stepping Functions. (line 187) * Ai(x): Airy Functions and Derivatives. (line 6) * Airy functions: Airy Functions and Derivatives. (line 6) * Akima splines: 1D Interpolation Types. (line 42) * aliasing of arrays: Compatibility with C++. (line 13) * alternative optimized functions: Portability functions. (line 34) * AMAX, Level-1 BLAS: Level 1. (line 69) * Angular Mathieu Functions: Angular Mathieu Functions. (line 6) * angular reduction: Restriction Functions. (line 6) * ANSI C, use of: Conventions used in this manual. (line 20) * Apell symbol, see Pochhammer symbol: Pochhammer Symbol. (line 6) * approximate comparison of floating point numbers: Approximate Comparison of Floating Point Numbers. (line 12) * arctangent integral: Arctangent Integral. (line 6) * argument of complex number: Properties of complex numbers. (line 6) * arithmetic exceptions: Representation of floating point numbers. (line 135) * asinh: Elementary Functions. (line 41) * astronomical constants: Fundamental Constants. (line 55) * ASUM, Level-1 BLAS: Level 1. (line 56) * atanh: Elementary Functions. (line 46) * atomic physics, constants: Astronomy and Astrophysics. (line 30) * autoconf, using with GSL: Contributors to GSL. (line 145) * AXPY, Level-1 BLAS: Level 1. (line 105) * B-spline wavelets: Initialization. (line 39) * Bader and Deuflhard, Bulirsch-Stoer method.: Stepping Functions. (line 157) * balancing matrices: Banded LDLT Decomposition. (line 62) * banded Cholesky Decomposition: Banded LU Decomposition. (line 77) * banded general matrices: Banded Systems. (line 14) * banded LDLT decomposition: Banded Cholesky Decomposition. (line 99) * banded LU Decomposition: Symmetric Banded Format. (line 35) * banded matrices: Triangular Systems. (line 44) * banded symmetric matrices: General Banded Format. 
(line 27) * Basic Linear Algebra Subroutines (BLAS): References and Further Reading<7>. (line 14) * Basic Linear Algebra Subroutines (BLAS) <1>: Autoconf Macros. (line 106) * basis splines, B-splines: References and Further Reading<34>. (line 38) * basis splines, derivatives: Evaluation of B-splines. (line 35) * basis splines, evaluation: Constructing the knots vector. (line 18) * basis splines, examples: Working with the Greville abscissae. (line 20) * basis splines, Greville abscissae: Evaluation of B-spline derivatives. (line 37) * basis splines, initializing: Overview<7>. (line 34) * basis splines, Marsden-Schoenberg points: Evaluation of B-spline derivatives. (line 36) * basis splines, overview: Basis Splines. (line 15) * BDF method: Stepping Functions. (line 175) * Bernoulli trial, random variates: The Bernoulli Distribution. (line 6) * Bessel functions: Bessel Functions. (line 6) * Bessel Functions, Fractional Order: Regular Bessel Function—Fractional Order. (line 6) * best-fit parameters, covariance: High Level Driver. (line 40) * Beta distribution: The Beta Distribution. (line 6) * Beta function: Beta Functions. (line 6) * Beta function, incomplete normalized: Incomplete Beta Function. (line 6) * BFGS algorithm, minimization: Algorithms with Derivatives. (line 45) * Bi(x): Airy Functions and Derivatives. (line 6) * bias, IEEE format: IEEE floating-point arithmetic. (line 11) * bicubic interpolation: 2D Interpolation Types. (line 16) * bidiagonalization of real matrices: Hessenberg-Triangular Decomposition of Real Matrices. (line 26) * bilinear interpolation: 2D Interpolation Types. (line 11) * binning data: References and Further Reading<18>. (line 18) * Binomial random variates: The Binomial Distribution. (line 6) * biorthogonal wavelets: Initialization. (line 39) * bisection algorithm for finding roots: Root Bracketing Algorithms. (line 18) * Bivariate Gaussian distribution: The Bivariate Gaussian Distribution. (line 6) * Bivariate Gaussian distribution <1>: The Multivariate Gaussian Distribution. (line 6) * BLAS: References and Further Reading<7>. (line 15) * BLAS, Low-level C interface: Autoconf Macros. (line 107) * BLAS, sparse: References and Further Reading<36>. (line 13) * blocks: Data types. (line 60) * bounds checking, extension to GCC: Vector allocation. (line 36) * breakpoints: Debugging Numerical Programs. (line 9) * Brent’s method for finding minima: Minimization Algorithms. (line 34) * Brent’s method for finding roots: Root Bracketing Algorithms. (line 54) * Broyden algorithm for multidimensional roots: Algorithms without Derivatives. (line 57) * BSD random number generator: Unix random number generators. (line 17) * bug-gsl: No Warranty. (line 11) * bug-gsl mailing list: No Warranty. (line 12) * bugs: No Warranty. (line 12) * Bulirsch-Stoer method: Stepping Functions. (line 157) * C extensions, compatible use of: Conventions used in this manual. (line 20) * C++, compatibility: Support for different numeric types. (line 67) * C99, inline keyword: ANSI C Compliance. (line 25) * Carlson forms of Elliptic integrals: Definition of Carlson Forms. (line 6) * Cash-Karp, Runge-Kutta method: Stepping Functions. (line 121) * Cauchy distribution: The Cauchy Distribution. (line 6) * Cauchy principal value, by numerical quadrature: QAWC adaptive integration for Cauchy principal values. (line 6) * CBLAS: References and Further Reading<7>. (line 15) * CBLAS, Low-level interface: Autoconf Macros. (line 107) * cblas_caxpy (C function): Level 1<2>. 
(line 90) * cblas_ccopy (C function): Level 1<2>. (line 87) * cblas_cdotc_sub (C function): Level 1<2>. (line 21) * cblas_cdotu_sub (C function): Level 1<2>. (line 18) * cblas_cgbmv (C function): Level 2<2>. (line 93) * cblas_cgemm (C function): Level 3<2>. (line 76) * cblas_cgemv (C function): Level 2<2>. (line 88) * cblas_cgerc (C function): Level 2<2>. (line 263) * cblas_cgeru (C function): Level 2<2>. (line 259) * cblas_chbmv (C function): Level 2<2>. (line 249) * cblas_chemm (C function): Level 3<2>. (line 146) * cblas_chemv (C function): Level 2<2>. (line 244) * cblas_cher (C function): Level 2<2>. (line 267) * cblas_cher2 (C function): Level 2<2>. (line 275) * cblas_cher2k (C function): Level 3<2>. (line 157) * cblas_cherk (C function): Level 3<2>. (line 152) * cblas_chpmv (C function): Level 2<2>. (line 254) * cblas_chpr (C function): Level 2<2>. (line 271) * cblas_chpr2 (C function): Level 2<2>. (line 280) * cblas_cscal (C function): Level 1<2>. (line 131) * cblas_csscal (C function): Level 1<2>. (line 137) * cblas_cswap (C function): Level 1<2>. (line 84) * cblas_csymm (C function): Level 3<2>. (line 82) * cblas_csyr2k (C function): Level 3<2>. (line 93) * cblas_csyrk (C function): Level 3<2>. (line 88) * cblas_ctbmv (C function): Level 2<2>. (line 104) * cblas_ctbsv (C function): Level 2<2>. (line 119) * cblas_ctpmv (C function): Level 2<2>. (line 109) * cblas_ctpsv (C function): Level 2<2>. (line 124) * cblas_ctrmm (C function): Level 3<2>. (line 99) * cblas_ctrmv (C function): Level 2<2>. (line 99) * cblas_ctrsm (C function): Level 3<2>. (line 105) * cblas_ctrsv (C function): Level 2<2>. (line 114) * cblas_dasum (C function): Level 1<2>. (line 39) * cblas_daxpy (C function): Level 1<2>. (line 81) * cblas_dcopy (C function): Level 1<2>. (line 78) * cblas_ddot (C function): Level 1<2>. (line 15) * cblas_dgbmv (C function): Level 2<2>. (line 52) * cblas_dgemm (C function): Level 3<2>. (line 41) * cblas_dgemv (C function): Level 2<2>. (line 47) * cblas_dger (C function): Level 2<2>. (line 221) * cblas_dnrm2 (C function): Level 1<2>. (line 36) * cblas_drot (C function): Level 1<2>. (line 119) * cblas_drotg (C function): Level 1<2>. (line 113) * cblas_drotm (C function): Level 1<2>. (line 122) * cblas_drotmg (C function): Level 1<2>. (line 116) * cblas_dsbmv (C function): Level 2<2>. (line 211) * cblas_dscal (C function): Level 1<2>. (line 128) * cblas_dsdot (C function): Level 1<2>. (line 9) * cblas_dspmv (C function): Level 2<2>. (line 216) * cblas_dspr (C function): Level 2<2>. (line 230) * cblas_dspr2 (C function): Level 2<2>. (line 239) * cblas_dswap (C function): Level 1<2>. (line 75) * cblas_dsymm (C function): Level 3<2>. (line 47) * cblas_dsymv (C function): Level 2<2>. (line 206) * cblas_dsyr (C function): Level 2<2>. (line 226) * cblas_dsyr2 (C function): Level 2<2>. (line 234) * cblas_dsyr2k (C function): Level 3<2>. (line 58) * cblas_dsyrk (C function): Level 3<2>. (line 53) * cblas_dtbmv (C function): Level 2<2>. (line 63) * cblas_dtbsv (C function): Level 2<2>. (line 78) * cblas_dtpmv (C function): Level 2<2>. (line 68) * cblas_dtpsv (C function): Level 2<2>. (line 83) * cblas_dtrmm (C function): Level 3<2>. (line 64) * cblas_dtrmv (C function): Level 2<2>. (line 58) * cblas_dtrsm (C function): Level 3<2>. (line 70) * cblas_dtrsv (C function): Level 2<2>. (line 73) * cblas_dzasum (C function): Level 1<2>. (line 51) * cblas_dznrm2 (C function): Level 1<2>. (line 48) * cblas_icamax (C function): Level 1<2>. (line 60) * cblas_idamax (C function): Level 1<2>. 
(line 57) * cblas_isamax (C function): Level 1<2>. (line 54) * cblas_izamax (C function): Level 1<2>. (line 63) * cblas_sasum (C function): Level 1<2>. (line 33) * cblas_saxpy (C function): Level 1<2>. (line 72) * cblas_scasum (C function): Level 1<2>. (line 45) * cblas_scnrm2 (C function): Level 1<2>. (line 42) * cblas_scopy (C function): Level 1<2>. (line 69) * cblas_sdot (C function): Level 1<2>. (line 12) * cblas_sdsdot (C function): Level 1<2>. (line 6) * cblas_sgbmv (C function): Level 2<2>. (line 11) * cblas_sgemm (C function): Level 3<2>. (line 6) * cblas_sgemv (C function): Level 2<2>. (line 6) * cblas_sger (C function): Level 2<2>. (line 185) * cblas_snrm2 (C function): Level 1<2>. (line 30) * cblas_srot (C function): Level 1<2>. (line 107) * cblas_srotg (C function): Level 1<2>. (line 102) * cblas_srotm (C function): Level 1<2>. (line 110) * cblas_srotmg (C function): Level 1<2>. (line 104) * cblas_ssbmv (C function): Level 2<2>. (line 175) * cblas_sscal (C function): Level 1<2>. (line 125) * cblas_sspmv (C function): Level 2<2>. (line 180) * cblas_sspr (C function): Level 2<2>. (line 193) * cblas_sspr2 (C function): Level 2<2>. (line 202) * cblas_sswap (C function): Level 1<2>. (line 66) * cblas_ssymm (C function): Level 3<2>. (line 12) * cblas_ssymv (C function): Level 2<2>. (line 170) * cblas_ssyr (C function): Level 2<2>. (line 189) * cblas_ssyr2 (C function): Level 2<2>. (line 197) * cblas_ssyr2k (C function): Level 3<2>. (line 23) * cblas_ssyrk (C function): Level 3<2>. (line 18) * cblas_stbmv (C function): Level 2<2>. (line 22) * cblas_stbsv (C function): Level 2<2>. (line 37) * cblas_stpmv (C function): Level 2<2>. (line 27) * cblas_stpsv (C function): Level 2<2>. (line 42) * cblas_strmm (C function): Level 3<2>. (line 29) * cblas_strmv (C function): Level 2<2>. (line 17) * cblas_strsm (C function): Level 3<2>. (line 35) * cblas_strsv (C function): Level 2<2>. (line 32) * cblas_xerbla (C function): Level 3<2>. (line 180) * cblas_zaxpy (C function): Level 1<2>. (line 99) * cblas_zcopy (C function): Level 1<2>. (line 96) * cblas_zdotc_sub (C function): Level 1<2>. (line 27) * cblas_zdotu_sub (C function): Level 1<2>. (line 24) * cblas_zdscal (C function): Level 1<2>. (line 140) * cblas_zgbmv (C function): Level 2<2>. (line 134) * cblas_zgemm (C function): Level 3<2>. (line 111) * cblas_zgemv (C function): Level 2<2>. (line 129) * cblas_zgerc (C function): Level 2<2>. (line 303) * cblas_zgeru (C function): Level 2<2>. (line 299) * cblas_zhbmv (C function): Level 2<2>. (line 289) * cblas_zhemm (C function): Level 3<2>. (line 163) * cblas_zhemv (C function): Level 2<2>. (line 284) * cblas_zher (C function): Level 2<2>. (line 307) * cblas_zher2 (C function): Level 2<2>. (line 315) * cblas_zher2k (C function): Level 3<2>. (line 174) * cblas_zherk (C function): Level 3<2>. (line 169) * cblas_zhpmv (C function): Level 2<2>. (line 294) * cblas_zhpr (C function): Level 2<2>. (line 311) * cblas_zhpr2 (C function): Level 2<2>. (line 320) * cblas_zscal (C function): Level 1<2>. (line 134) * cblas_zswap (C function): Level 1<2>. (line 93) * cblas_zsymm (C function): Level 3<2>. (line 117) * cblas_zsyr2k (C function): Level 3<2>. (line 128) * cblas_zsyrk (C function): Level 3<2>. (line 123) * cblas_ztbmv (C function): Level 2<2>. (line 145) * cblas_ztbsv (C function): Level 2<2>. (line 160) * cblas_ztpmv (C function): Level 2<2>. (line 150) * cblas_ztpsv (C function): Level 2<2>. (line 165) * cblas_ztrmm (C function): Level 3<2>. (line 134) * cblas_ztrmv (C function): Level 2<2>. 
(line 140) * cblas_ztrsm (C function): Level 3<2>. (line 140) * cblas_ztrsv (C function): Level 2<2>. (line 155) * CDFs, cumulative distribution functions: References. (line 14) * ce(q,x), Mathieu function: Angular Mathieu Functions. (line 6) * Chebyshev series: References and Further Reading<24>. (line 15) * checking combination for validity: Combination properties. (line 20) * checking multiset for validity: Multiset properties. (line 20) * checking permutation for validity: Permutation properties. (line 15) * Chi(x): Hyperbolic Integrals. (line 6) * Chi-squared distribution: The Chi-squared Distribution. (line 14) * Cholesky decomposition: Singular Value Decomposition. (line 97) * Cholesky decomposition, banded: Banded LU Decomposition. (line 78) * Cholesky decomposition, modified: Pivoted Cholesky Decomposition. (line 111) * Cholesky decomposition, pivoted: Cholesky Decomposition. (line 155) * Cholesky decomposition, square root free: Modified Cholesky Decomposition. (line 68) * Ci(x): Trigonometric Integrals. (line 6) * Clausen functions: Clausen Functions. (line 6) * Clenshaw-Curtis quadrature: Integrands with weight functions. (line 6) * CMRG, combined multiple recursive random number generator: Random number generator algorithms. (line 103) * code reuse in applications: Deprecated Functions. (line 14) * combinations: References and Further Reading<5>. (line 16) * combinatorial factor C(m: Factorials. (line 43) * combinatorial optimization: References and Further Reading<20>. (line 21) * comparison functions, definition: Sorting objects. (line 16) * compatibility: Conventions used in this manual. (line 19) * compiling programs, include paths: An Example Program. (line 29) * compiling programs, library paths: Compiling and Linking. (line 24) * complementary incomplete Gamma function: Incomplete Gamma Functions. (line 22) * complete Fermi-Dirac integrals: Complete Fermi-Dirac Integrals. (line 6) * complete orthogonal decomposition: QL Decomposition. (line 36) * complex arithmetic: Properties of complex numbers. (line 28) * complex cosine function, special functions: Trigonometric Functions for Complex Arguments. (line 12) * Complex Gamma function: Gamma Functions. (line 60) * complex hermitian matrix, eigensystem: Complex Hermitian Matrices. (line 9) * complex log sine function, special functions: Trigonometric Functions for Complex Arguments. (line 19) * complex numbers: Approximate Comparison of Floating Point Numbers. (line 30) * complex sinc function, special functions: Circular Trigonometric Functions. (line 23) * complex sine function, special functions: Trigonometric Functions for Complex Arguments. (line 6) * confluent hypergeometric function: Laguerre Functions. (line 6) * confluent hypergeometric functions: Hypergeometric Functions. (line 6) * conical functions: Legendre Functions and Spherical Harmonics. (line 6) * Conjugate gradient algorithm, minimization: Algorithms with Derivatives. (line 15) * conjugate of complex number: Complex arithmetic operators. (line 66) * constant matrix: Accessing matrix elements. (line 48) * constants, fundamental: Physical Constants. (line 22) * constants, mathematical (defined as macros): Mathematical Functions. (line 14) * constants, physical: References and Further Reading<38>. (line 15) * constants, prefixes: Force and Energy. (line 21) * contacting the GSL developers: Reporting Bugs. (line 30) * conventions, used in manual: Further Information. (line 27) * convergence, accelerating a series: References and Further Reading<25>. 
(line 11) * conversion of units: References and Further Reading<38>. (line 15) * cooling schedule: Simulated Annealing algorithm. (line 20) * COPY, Level-1 BLAS: Level 1. (line 94) * correlation, of two datasets: Covariance. (line 25) * cosine function, special functions: Circular Trigonometric Functions. (line 11) * cosine of complex number: Complex Trigonometric Functions. (line 11) * cost function: References and Further Reading<20>. (line 20) * Coulomb wave functions: Coulomb Functions. (line 6) * coupling coefficients: Coupling Coefficients. (line 6) * covariance matrix, from linear regression: Linear regression with a constant term. (line 9) * covariance matrix, linear fits: Overview<5>. (line 18) * covariance matrix, nonlinear fits: High Level Driver. (line 39) * covariance, of two datasets: Autocorrelation. (line 22) * cquad, doubly-adaptive integration: QAWF adaptive integration for Fourier integrals. (line 60) * CRAY random number generator, RANF: Other random number generators. (line 22) * cubic equation, solving: Quadratic Equations. (line 44) * cubic splines: 1D Interpolation Types. (line 23) * cumulative distribution functions (CDFs): References. (line 14) * Cylindrical Bessel Functions: Regular Cylindrical Bessel Functions. (line 6) * Daubechies wavelets: Initialization. (line 24) * Dawson function: Dawson Function. (line 6) * DAXPY, Level-1 BLAS: Level 1. (line 105) * debugging numerical programs: Debugging Numerical Programs. (line 9) * Debye functions: Debye Functions. (line 6) * denormalized form, IEEE format: Representation of floating point numbers. (line 14) * deprecated functions: Thread-safety. (line 21) * derivatives, calculating numerically: References and Further Reading<23>. (line 18) * determinant of a matrix, by LU decomposition: LU Decomposition. (line 111) * Deuflhard and Bader, Bulirsch-Stoer method.: Stepping Functions. (line 157) * DFTs, see FFT: References and Further Reading<10>. (line 27) * diagonal, of a matrix: Creating row and column views. (line 70) * differential equations, initial value problems: References and Further Reading<21>. (line 11) * differentiation of functions, numeric: References and Further Reading<23>. (line 18) * digamma function: Psi Digamma Function. (line 6) * dilogarithm: Dilogarithm. (line 6) * direction vector, random 2D: Spherical Vector Distributions. (line 10) * direction vector, random 3D: Spherical Vector Distributions. (line 33) * direction vector, random N-dimensional: Spherical Vector Distributions. (line 44) * Dirichlet distribution: The Dirichlet Distribution. (line 6) * discontinuities, in ODE systems: Evolution. (line 92) * Discrete Fourier Transforms, see FFT: References and Further Reading<10>. (line 28) * discrete Hankel transforms: References and Further Reading<27>. (line 54) * Discrete Newton algorithm for multidimensional roots: Algorithms without Derivatives. (line 34) * Discrete random numbers: General Discrete Distributions. (line 49) * Discrete random numbers <1>: General Discrete Distributions. (line 66) * Discrete random numbers <2>: General Discrete Distributions. (line 72) * Discrete random numbers <3>: General Discrete Distributions. (line 81) * Discrete random numbers, preprocessing: General Discrete Distributions. (line 49) * divided differences, polynomials: Polynomial Evaluation. (line 39) * division by zero, IEEE exceptions: Representation of floating point numbers. (line 135) * Dogleg algorithm: Levenberg-Marquardt with Geodesic Acceleration. (line 30) * Dogleg algorithm, double: Dogleg. 
(line 27) * dollar sign $, shell prompt: Conventions used in this manual. (line 6) * DOT, Level-1 BLAS: Level 1. (line 6) * double Dogleg algorithm: Dogleg. (line 27) * double factorial: Factorials. (line 19) * double precision, IEEE format: Representation of floating point numbers. (line 43) * downloading GSL: GSL is Free Software. (line 46) * DWT initialization: Definitions<2>. (line 27) * DWT, mathematical definition: Wavelet Transforms. (line 11) * DWT, one dimensional: Transform Functions. (line 12) * DWT, see wavelet transforms: References and Further Reading<26>. (line 25) * DWT, two dimensional: Wavelet transforms in one dimension. (line 41) * e, defined as a macro: Mathematical Constants. (line 9) * E1(x): Exponential Integral. (line 6) * E2(x): Exponential Integral. (line 6) * Ei(x): Exponential Integral. (line 6) * eigenvalues and eigenvectors: References and Further Reading<9>. (line 64) * elementary functions: Examples. (line 39) * elementary operations: Elementary Operations. (line 6) * elliptic functions (Jacobi): Elliptic Functions Jacobi. (line 6) * elliptic integrals: Elliptic Integrals. (line 6) * energy function: References and Further Reading<20>. (line 21) * energy, units of: Mass and Weight. (line 50) * erf(x): Error Functions. (line 6) * erfc(x): Error Functions. (line 6) * Erlang distribution: The Gamma Distribution. (line 6) * error codes: Error Codes. (line 13) * error codes, reserved: Error Reporting. (line 44) * error function: Error Functions. (line 6) * error handlers: Error Codes. (line 50) * error handling: Code Reuse. (line 14) * error handling macros: Error Handlers. (line 82) * estimated standard deviation: References and Further Reading<14>. (line 52) * estimated variance: References and Further Reading<14>. (line 52) * estimation, location: Order Statistics. (line 22) * estimation, scale: Gastwirth Estimator. (line 24) * Eta Function: Eta Function. (line 6) * euclidean distance function, hypot: Elementary Functions. (line 24) * euclidean distance function, hypot3: Elementary Functions. (line 30) * Euler’s constant, defined as a macro: Mathematical Constants. (line 9) * evaluation of polynomials: Polynomials. (line 13) * evaluation of polynomials, in divided difference form: Polynomial Evaluation. (line 38) * examples, conventions used in: Further Information. (line 27) * exceptions, C++: Support for different numeric types. (line 66) * exceptions, floating point: Examining floating point registers. (line 32) * exceptions, IEEE arithmetic: Representation of floating point numbers. (line 135) * exchanging permutation elements: Accessing permutation elements. (line 18) * exp: Exponential Functions. (line 6) * expm1: Elementary Functions. (line 18) * exponent, IEEE format: IEEE floating-point arithmetic. (line 11) * Exponential distribution: The Exponential Distribution. (line 6) * exponential function: Exponential Functions. (line 6) * exponential integrals: Exponential Integrals. (line 6) * Exponential power distribution: The Exponential Power Distribution. (line 6) * exponential, difference from 1 computed accurately: Elementary Functions. (line 18) * exponentiation of complex number: Elementary Complex Functions. (line 17) * extern inline: ANSI C Compliance. (line 24) * F-distribution: The F-distribution. (line 13) * factorial: Factorials. (line 6) * factorial <1>: Factorials. (line 11) * factorization of matrices: References and Further Reading<8>. (line 32) * false position algorithm for finding roots: Root Bracketing Algorithms. 
(line 35) * Fast Fourier Transforms, see FFT: References and Further Reading<10>. (line 28) * Fehlberg method, differential equations: Stepping Functions. (line 115) * Fermi-Dirac function: Fermi-Dirac Function. (line 6) * FFT: References and Further Reading<10>. (line 28) * FFT mathematical definition: Fast Fourier Transforms FFTs. (line 18) * FFT of complex data, mixed-radix algorithm: Radix-2 FFT routines for complex data. (line 120) * FFT of complex data, radix-2 algorithm: Overview of complex data FFTs. (line 73) * FFT of real data: Mixed-radix FFT routines for complex data. (line 226) * FFT of real data, mixed-radix algorithm: Radix-2 FFT routines for real data. (line 98) * FFT of real data, radix-2 algorithm: Overview of real data FFTs. (line 50) * FFT, complex data: Mathematical Definitions. (line 54) * finding minima: References and Further Reading<29>. (line 17) * finding roots: References and Further Reading<28>. (line 15) * finding zeros: References and Further Reading<28>. (line 15) * fits, multi-parameter linear: Linear regression without a constant term. (line 50) * fitting: References and Further Reading<32>. (line 24) * fitting, using Chebyshev polynomials: References and Further Reading<24>. (line 15) * Fj(x), Fermi-Dirac integral: Complete Fermi-Dirac Integrals. (line 6) * Fj(x,b), incomplete Fermi-Dirac integral: Incomplete Fermi-Dirac Integrals. (line 6) * flat distribution: The Flat Uniform Distribution. (line 6) * Fletcher-Reeves conjugate gradient algorithm, minimization: Algorithms with Derivatives. (line 15) * floating point exceptions: Examining floating point registers. (line 31) * floating point numbers, approximate comparison: Approximate Comparison of Floating Point Numbers. (line 12) * floating point registers: Using gdb. (line 89) * force and energy: Radioactivity. (line 18) * Fortran range checking, equivalent in gcc: Vector allocation. (line 35) * Four-tap Generalized Feedback Shift Register: Random number generator algorithms. (line 181) * Fourier integrals, numerical: QAWF adaptive integration for Fourier integrals. (line 6) * Fourier Transforms, see FFT: References and Further Reading<10>. (line 28) * Fractional Order Bessel Functions: Regular Bessel Function—Fractional Order. (line 6) * free software, explanation of: Routines available in GSL. (line 53) * frexp: Elementary Functions. (line 56) * functions, numerical differentiation: References and Further Reading<23>. (line 18) * fundamental constants: Physical Constants. (line 22) * Gamma distribution: The Gamma Distribution. (line 6) * gamma functions: Gamma Functions. (line 6) * Gastwirth estimator: Trimmed Mean. (line 25) * Gauss-Kronrod quadrature: Introduction<2>. (line 57) * Gaussian distribution: The Gaussian Distribution. (line 6) * Gaussian distribution, bivariate: The Bivariate Gaussian Distribution. (line 6) * Gaussian distribution, bivariate <1>: The Multivariate Gaussian Distribution. (line 6) * Gaussian Tail distribution: The Gaussian Tail Distribution. (line 6) * gcc extensions, range-checking: Vector allocation. (line 36) * gcc warning options: Handling floating point exceptions. (line 30) * gdb: Debugging Numerical Programs. (line 9) * Gegenbauer functions: Gegenbauer Functions. (line 6) * GEMM, Level-3 BLAS: Level 3. (line 6) * GEMV, Level-2 BLAS: Level 2. (line 6) * general polynomial equations, solving: Cubic Equations. (line 37) * generalized eigensystems: Real Generalized Nonsymmetric Eigensystems. 
(line 6) * generalized hermitian definite eigensystems: Complex Generalized Hermitian-Definite Eigensystems. (line 6) * generalized symmetric eigensystems: Real Generalized Symmetric-Definite Eigensystems. (line 6) * Geometric random variates: The Geometric Distribution. (line 6) * Geometric random variates <1>: The Hypergeometric Distribution. (line 6) * GER, Level-2 BLAS: Level 2. (line 103) * GERC, Level-2 BLAS: Level 2. (line 117) * GERU, Level-2 BLAS: Level 2. (line 103) * Givens rotation: Bidiagonalization. (line 57) * Givens Rotation, BLAS: Level 1. (line 130) * Givens Rotation, Modified, BLAS: Level 1. (line 152) * gmres: Types of Sparse Iterative Solvers. (line 11) * GNU General Public License: Top. (line 12) * golden section algorithm for finding minima: Minimization Algorithms. (line 15) * gsl_acosh (C function): Elementary Functions. (line 36) * gsl_asinh (C function): Elementary Functions. (line 41) * gsl_atanh (C function): Elementary Functions. (line 46) * gsl_blas_caxpy (C function): Level 1. (line 105) * gsl_blas_ccopy (C function): Level 1. (line 94) * gsl_blas_cdotc (C function): Level 1. (line 33) * gsl_blas_cdotu (C function): Level 1. (line 24) * gsl_blas_cgemm (C function): Level 3. (line 6) * gsl_blas_cgemv (C function): Level 2. (line 6) * gsl_blas_cgerc (C function): Level 2. (line 117) * gsl_blas_cgeru (C function): Level 2. (line 103) * gsl_blas_chemm (C function): Level 3. (line 50) * gsl_blas_chemv (C function): Level 2. (line 85) * gsl_blas_cher (C function): Level 2. (line 140) * gsl_blas_cher2 (C function): Level 2. (line 168) * gsl_blas_cher2k (C function): Level 3. (line 187) * gsl_blas_cherk (C function): Level 3. (line 144) * gsl_blas_cscal (C function): Level 1. (line 117) * gsl_blas_csscal (C function): Level 1. (line 117) * gsl_blas_cswap (C function): Level 1. (line 83) * gsl_blas_csymm (C function): Level 3. (line 27) * gsl_blas_csyr2k (C function): Level 3. (line 162) * gsl_blas_csyrk (C function): Level 3. (line 121) * gsl_blas_ctrmm (C function): Level 3. (line 68) * gsl_blas_ctrmv (C function): Level 2. (line 25) * gsl_blas_ctrsm (C function): Level 3. (line 94) * gsl_blas_ctrsv (C function): Level 2. (line 48) * gsl_blas_dasum (C function): Level 1. (line 56) * gsl_blas_daxpy (C function): Level 1. (line 105) * gsl_blas_dcopy (C function): Level 1. (line 94) * gsl_blas_ddot (C function): Level 1. (line 13) * gsl_blas_dgemm (C function): Level 3. (line 6) * gsl_blas_dgemv (C function): Level 2. (line 6) * gsl_blas_dger (C function): Level 2. (line 103) * gsl_blas_dnrm2 (C function): Level 1. (line 42) * gsl_blas_drot (C function): Level 1. (line 144) * gsl_blas_drotg (C function): Level 1. (line 130) * gsl_blas_drotm (C function): Level 1. (line 161) * gsl_blas_drotmg (C function): Level 1. (line 152) * gsl_blas_dscal (C function): Level 1. (line 117) * gsl_blas_dsdot (C function): Level 1. (line 13) * gsl_blas_dswap (C function): Level 1. (line 83) * gsl_blas_dsymm (C function): Level 3. (line 27) * gsl_blas_dsymv (C function): Level 2. (line 70) * gsl_blas_dsyr (C function): Level 2. (line 127) * gsl_blas_dsyr2 (C function): Level 2. (line 154) * gsl_blas_dsyr2k (C function): Level 3. (line 162) * gsl_blas_dsyrk (C function): Level 3. (line 121) * gsl_blas_dtrmm (C function): Level 3. (line 68) * gsl_blas_dtrmv (C function): Level 2. (line 25) * gsl_blas_dtrsm (C function): Level 3. (line 94) * gsl_blas_dtrsv (C function): Level 2. (line 48) * gsl_blas_dzasum (C function): Level 1. (line 62) * gsl_blas_dznrm2 (C function): Level 1. 
(line 48) * gsl_blas_icamax (C function): Level 1. (line 69) * gsl_blas_idamax (C function): Level 1. (line 69) * gsl_blas_isamax (C function): Level 1. (line 69) * gsl_blas_izamax (C function): Level 1. (line 69) * gsl_blas_sasum (C function): Level 1. (line 56) * gsl_blas_saxpy (C function): Level 1. (line 105) * gsl_blas_scasum (C function): Level 1. (line 62) * gsl_blas_scnrm2 (C function): Level 1. (line 48) * gsl_blas_scopy (C function): Level 1. (line 94) * gsl_blas_sdot (C function): Level 1. (line 13) * gsl_blas_sdsdot (C function): Level 1. (line 6) * gsl_blas_sgemm (C function): Level 3. (line 6) * gsl_blas_sgemv (C function): Level 2. (line 6) * gsl_blas_sger (C function): Level 2. (line 103) * gsl_blas_snrm2 (C function): Level 1. (line 42) * gsl_blas_srot (C function): Level 1. (line 144) * gsl_blas_srotg (C function): Level 1. (line 130) * gsl_blas_srotm (C function): Level 1. (line 161) * gsl_blas_srotmg (C function): Level 1. (line 152) * gsl_blas_sscal (C function): Level 1. (line 117) * gsl_blas_sswap (C function): Level 1. (line 83) * gsl_blas_ssymm (C function): Level 3. (line 27) * gsl_blas_ssymv (C function): Level 2. (line 70) * gsl_blas_ssyr (C function): Level 2. (line 127) * gsl_blas_ssyr2 (C function): Level 2. (line 154) * gsl_blas_ssyr2k (C function): Level 3. (line 162) * gsl_blas_ssyrk (C function): Level 3. (line 121) * gsl_blas_strmm (C function): Level 3. (line 68) * gsl_blas_strmv (C function): Level 2. (line 25) * gsl_blas_strsm (C function): Level 3. (line 94) * gsl_blas_strsv (C function): Level 2. (line 48) * gsl_blas_zaxpy (C function): Level 1. (line 105) * gsl_blas_zcopy (C function): Level 1. (line 94) * gsl_blas_zdotc (C function): Level 1. (line 33) * gsl_blas_zdotu (C function): Level 1. (line 24) * gsl_blas_zdscal (C function): Level 1. (line 117) * gsl_blas_zgemm (C function): Level 3. (line 6) * gsl_blas_zgemv (C function): Level 2. (line 6) * gsl_blas_zgerc (C function): Level 2. (line 117) * gsl_blas_zgeru (C function): Level 2. (line 103) * gsl_blas_zhemm (C function): Level 3. (line 50) * gsl_blas_zhemv (C function): Level 2. (line 85) * gsl_blas_zher (C function): Level 2. (line 140) * gsl_blas_zher2 (C function): Level 2. (line 168) * gsl_blas_zher2k (C function): Level 3. (line 187) * gsl_blas_zherk (C function): Level 3. (line 144) * gsl_blas_zscal (C function): Level 1. (line 117) * gsl_blas_zswap (C function): Level 1. (line 83) * gsl_blas_zsymm (C function): Level 3. (line 27) * gsl_blas_zsyr2k (C function): Level 3. (line 162) * gsl_blas_zsyrk (C function): Level 3. (line 121) * gsl_blas_ztrmm (C function): Level 3. (line 68) * gsl_blas_ztrmv (C function): Level 2. (line 25) * gsl_blas_ztrsm (C function): Level 3. (line 94) * gsl_blas_ztrsv (C function): Level 2. (line 48) * gsl_block (C type): Blocks. (line 10) * gsl_block_alloc (C function): Block allocation. (line 14) * gsl_block_calloc (C function): Block allocation. (line 26) * gsl_block_fprintf (C function): Reading and writing blocks. (line 28) * gsl_block_fread (C function): Reading and writing blocks. (line 17) * gsl_block_free (C function): Block allocation. (line 31) * gsl_block_fscanf (C function): Reading and writing blocks. (line 38) * gsl_block_fwrite (C function): Reading and writing blocks. (line 9) * gsl_bspline_alloc (C function): Initializing the B-splines solver. (line 11) * gsl_bspline_deriv_eval (C function): Evaluation of B-spline derivatives. (line 6) * gsl_bspline_deriv_eval_nonzero (C function): Evaluation of B-spline derivatives. 
(line 21) * gsl_bspline_eval (C function): Evaluation of B-splines. (line 6) * gsl_bspline_eval_nonzero (C function): Evaluation of B-splines. (line 18) * gsl_bspline_free (C function): Initializing the B-splines solver. (line 20) * gsl_bspline_greville_abscissa (C function): Working with the Greville abscissae. (line 14) * gsl_bspline_knots (C function): Constructing the knots vector. (line 6) * gsl_bspline_knots_uniform (C function): Constructing the knots vector. (line 12) * gsl_bspline_ncoeffs (C function): Evaluation of B-splines. (line 31) * gsl_bspline_workspace (C type): Initializing the B-splines solver. (line 6) * GSL_C99_INLINE: ANSI C Compliance. (line 25) * GSL_C99_INLINE (C macro): Accessing vector elements. (line 29) * gsl_cdf_beta_P (C function): The Beta Distribution. (line 22) * gsl_cdf_beta_Pinv (C function): The Beta Distribution. (line 22) * gsl_cdf_beta_Q (C function): The Beta Distribution. (line 22) * gsl_cdf_beta_Qinv (C function): The Beta Distribution. (line 22) * gsl_cdf_binomial_P (C function): The Binomial Distribution. (line 26) * gsl_cdf_binomial_Q (C function): The Binomial Distribution. (line 26) * gsl_cdf_cauchy_P (C function): The Cauchy Distribution. (line 24) * gsl_cdf_cauchy_Pinv (C function): The Cauchy Distribution. (line 24) * gsl_cdf_cauchy_Q (C function): The Cauchy Distribution. (line 24) * gsl_cdf_cauchy_Qinv (C function): The Cauchy Distribution. (line 24) * gsl_cdf_chisq_P (C function): The Chi-squared Distribution. (line 31) * gsl_cdf_chisq_Pinv (C function): The Chi-squared Distribution. (line 31) * gsl_cdf_chisq_Q (C function): The Chi-squared Distribution. (line 31) * gsl_cdf_chisq_Qinv (C function): The Chi-squared Distribution. (line 31) * gsl_cdf_exponential_P (C function): The Exponential Distribution. (line 22) * gsl_cdf_exponential_Pinv (C function): The Exponential Distribution. (line 22) * gsl_cdf_exponential_Q (C function): The Exponential Distribution. (line 22) * gsl_cdf_exponential_Qinv (C function): The Exponential Distribution. (line 22) * gsl_cdf_exppow_P (C function): The Exponential Power Distribution. (line 27) * gsl_cdf_exppow_Q (C function): The Exponential Power Distribution. (line 27) * gsl_cdf_fdist_P (C function): The F-distribution. (line 36) * gsl_cdf_fdist_Pinv (C function): The F-distribution. (line 36) * gsl_cdf_fdist_Q (C function): The F-distribution. (line 36) * gsl_cdf_fdist_Qinv (C function): The F-distribution. (line 36) * gsl_cdf_flat_P (C function): The Flat Uniform Distribution. (line 23) * gsl_cdf_flat_Pinv (C function): The Flat Uniform Distribution. (line 23) * gsl_cdf_flat_Q (C function): The Flat Uniform Distribution. (line 23) * gsl_cdf_flat_Qinv (C function): The Flat Uniform Distribution. (line 23) * gsl_cdf_gamma_P (C function): The Gamma Distribution. (line 36) * gsl_cdf_gamma_Pinv (C function): The Gamma Distribution. (line 36) * gsl_cdf_gamma_Q (C function): The Gamma Distribution. (line 36) * gsl_cdf_gamma_Qinv (C function): The Gamma Distribution. (line 36) * gsl_cdf_gaussian_P (C function): The Gaussian Distribution. (line 45) * gsl_cdf_gaussian_Pinv (C function): The Gaussian Distribution. (line 45) * gsl_cdf_gaussian_Q (C function): The Gaussian Distribution. (line 45) * gsl_cdf_gaussian_Qinv (C function): The Gaussian Distribution. (line 45) * gsl_cdf_geometric_P (C function): The Geometric Distribution. (line 27) * gsl_cdf_geometric_Q (C function): The Geometric Distribution. (line 27) * gsl_cdf_gumbel1_P (C function): The Type-1 Gumbel Distribution. 
(line 23) * gsl_cdf_gumbel1_Pinv (C function): The Type-1 Gumbel Distribution. (line 23) * gsl_cdf_gumbel1_Q (C function): The Type-1 Gumbel Distribution. (line 23) * gsl_cdf_gumbel1_Qinv (C function): The Type-1 Gumbel Distribution. (line 23) * gsl_cdf_gumbel2_P (C function): The Type-2 Gumbel Distribution. (line 23) * gsl_cdf_gumbel2_Pinv (C function): The Type-2 Gumbel Distribution. (line 23) * gsl_cdf_gumbel2_Q (C function): The Type-2 Gumbel Distribution. (line 23) * gsl_cdf_gumbel2_Qinv (C function): The Type-2 Gumbel Distribution. (line 23) * gsl_cdf_hypergeometric_P (C function): The Hypergeometric Distribution. (line 31) * gsl_cdf_hypergeometric_Q (C function): The Hypergeometric Distribution. (line 31) * gsl_cdf_laplace_P (C function): The Laplace Distribution. (line 22) * gsl_cdf_laplace_Pinv (C function): The Laplace Distribution. (line 22) * gsl_cdf_laplace_Q (C function): The Laplace Distribution. (line 22) * gsl_cdf_laplace_Qinv (C function): The Laplace Distribution. (line 22) * gsl_cdf_logistic_P (C function): The Logistic Distribution. (line 22) * gsl_cdf_logistic_Pinv (C function): The Logistic Distribution. (line 22) * gsl_cdf_logistic_Q (C function): The Logistic Distribution. (line 22) * gsl_cdf_logistic_Qinv (C function): The Logistic Distribution. (line 22) * gsl_cdf_lognormal_P (C function): The Lognormal Distribution. (line 24) * gsl_cdf_lognormal_Pinv (C function): The Lognormal Distribution. (line 24) * gsl_cdf_lognormal_Q (C function): The Lognormal Distribution. (line 24) * gsl_cdf_lognormal_Qinv (C function): The Lognormal Distribution. (line 24) * gsl_cdf_negative_binomial_P (C function): The Negative Binomial Distribution. (line 27) * gsl_cdf_negative_binomial_Q (C function): The Negative Binomial Distribution. (line 27) * gsl_cdf_pareto_P (C function): The Pareto Distribution. (line 23) * gsl_cdf_pareto_Pinv (C function): The Pareto Distribution. (line 23) * gsl_cdf_pareto_Q (C function): The Pareto Distribution. (line 23) * gsl_cdf_pareto_Qinv (C function): The Pareto Distribution. (line 23) * gsl_cdf_pascal_P (C function): The Pascal Distribution. (line 25) * gsl_cdf_pascal_Q (C function): The Pascal Distribution. (line 25) * gsl_cdf_poisson_P (C function): The Poisson Distribution. (line 23) * gsl_cdf_poisson_Q (C function): The Poisson Distribution. (line 23) * gsl_cdf_rayleigh_P (C function): The Rayleigh Distribution. (line 23) * gsl_cdf_rayleigh_Pinv (C function): The Rayleigh Distribution. (line 23) * gsl_cdf_rayleigh_Q (C function): The Rayleigh Distribution. (line 23) * gsl_cdf_rayleigh_Qinv (C function): The Rayleigh Distribution. (line 23) * gsl_cdf_tdist_P (C function): The t-distribution. (line 30) * gsl_cdf_tdist_Pinv (C function): The t-distribution. (line 30) * gsl_cdf_tdist_Q (C function): The t-distribution. (line 30) * gsl_cdf_tdist_Qinv (C function): The t-distribution. (line 30) * gsl_cdf_ugaussian_P (C function): The Gaussian Distribution. (line 54) * gsl_cdf_ugaussian_Pinv (C function): The Gaussian Distribution. (line 54) * gsl_cdf_ugaussian_Q (C function): The Gaussian Distribution. (line 54) * gsl_cdf_ugaussian_Qinv (C function): The Gaussian Distribution. (line 54) * gsl_cdf_weibull_P (C function): The Weibull Distribution. (line 23) * gsl_cdf_weibull_Pinv (C function): The Weibull Distribution. (line 23) * gsl_cdf_weibull_Q (C function): The Weibull Distribution. (line 23) * gsl_cdf_weibull_Qinv (C function): The Weibull Distribution. (line 23) * gsl_cheb_alloc (C function): Creation and Calculation of Chebyshev Series. 
(line 6) * gsl_cheb_calc_deriv (C function): Derivatives and Integrals. (line 12) * gsl_cheb_calc_integ (C function): Derivatives and Integrals. (line 20) * gsl_cheb_coeffs (C function): Auxiliary Functions. (line 13) * gsl_cheb_eval (C function): Chebyshev Series Evaluation. (line 6) * gsl_cheb_eval_err (C function): Chebyshev Series Evaluation. (line 11) * gsl_cheb_eval_n (C function): Chebyshev Series Evaluation. (line 19) * gsl_cheb_eval_n_err (C function): Chebyshev Series Evaluation. (line 26) * gsl_cheb_free (C function): Creation and Calculation of Chebyshev Series. (line 13) * gsl_cheb_init (C function): Creation and Calculation of Chebyshev Series. (line 18) * gsl_cheb_order (C function): Auxiliary Functions. (line 9) * gsl_cheb_series (C type): Definitions. (line 6) * gsl_cheb_size (C function): Auxiliary Functions. (line 13) * gsl_check_range (C var): Accessing vector elements. (line 37) * gsl_combination (C type): The Combination struct. (line 6) * gsl_combination_alloc (C function): Combination allocation. (line 6) * gsl_combination_calloc (C function): Combination allocation. (line 17) * gsl_combination_data (C function): Combination properties. (line 15) * gsl_combination_fprintf (C function): Reading and writing combinations. (line 30) * gsl_combination_fread (C function): Reading and writing combinations. (line 18) * gsl_combination_free (C function): Combination allocation. (line 36) * gsl_combination_fscanf (C function): Reading and writing combinations. (line 40) * gsl_combination_fwrite (C function): Reading and writing combinations. (line 9) * gsl_combination_get (C function): Accessing combination elements. (line 9) * gsl_combination_init_first (C function): Combination allocation. (line 25) * gsl_combination_init_last (C function): Combination allocation. (line 30) * gsl_combination_k (C function): Combination properties. (line 10) * gsl_combination_memcpy (C function): Combination allocation. (line 41) * gsl_combination_n (C function): Combination properties. (line 6) * gsl_combination_next (C function): Combination functions. (line 6) * gsl_combination_prev (C function): Combination functions. (line 15) * gsl_combination_valid (C function): Combination properties. (line 20) * gsl_complex: Complex Numbers. (line 21) * gsl_complex_abs (C function): Properties of complex numbers. (line 11) * gsl_complex_abs2 (C function): Properties of complex numbers. (line 16) * gsl_complex_add (C function): Complex arithmetic operators. (line 6) * gsl_complex_add_imag (C function): Complex arithmetic operators. (line 46) * gsl_complex_add_real (C function): Complex arithmetic operators. (line 26) * gsl_complex_arccos (C function): Inverse Complex Trigonometric Functions. (line 21) * gsl_complex_arccosh (C function): Inverse Complex Hyperbolic Functions. (line 12) * gsl_complex_arccosh_real (C function): Inverse Complex Hyperbolic Functions. (line 20) * gsl_complex_arccos_real (C function): Inverse Complex Trigonometric Functions. (line 27) * gsl_complex_arccot (C function): Inverse Complex Trigonometric Functions. (line 61) * gsl_complex_arccoth (C function): Inverse Complex Hyperbolic Functions. (line 46) * gsl_complex_arccsc (C function): Inverse Complex Trigonometric Functions. (line 51) * gsl_complex_arccsch (C function): Inverse Complex Hyperbolic Functions. (line 41) * gsl_complex_arccsc_real (C function): Inverse Complex Trigonometric Functions. (line 56) * gsl_complex_arcsec (C function): Inverse Complex Trigonometric Functions. 
(line 41) * gsl_complex_arcsech (C function): Inverse Complex Hyperbolic Functions. (line 36) * gsl_complex_arcsec_real (C function): Inverse Complex Trigonometric Functions. (line 46) * gsl_complex_arcsin (C function): Inverse Complex Trigonometric Functions. (line 6) * gsl_complex_arcsinh (C function): Inverse Complex Hyperbolic Functions. (line 6) * gsl_complex_arcsin_real (C function): Inverse Complex Trigonometric Functions. (line 12) * gsl_complex_arctan (C function): Inverse Complex Trigonometric Functions. (line 35) * gsl_complex_arctanh (C function): Inverse Complex Hyperbolic Functions. (line 25) * gsl_complex_arctanh_real (C function): Inverse Complex Hyperbolic Functions. (line 31) * gsl_complex_arg (C function): Properties of complex numbers. (line 6) * gsl_complex_conjugate (C function): Complex arithmetic operators. (line 66) * gsl_complex_cos (C function): Complex Trigonometric Functions. (line 11) * gsl_complex_cosh (C function): Complex Hyperbolic Functions. (line 11) * gsl_complex_cot (C function): Complex Trigonometric Functions. (line 31) * gsl_complex_coth (C function): Complex Hyperbolic Functions. (line 31) * gsl_complex_csc (C function): Complex Trigonometric Functions. (line 26) * gsl_complex_csch (C function): Complex Hyperbolic Functions. (line 26) * gsl_complex_div (C function): Complex arithmetic operators. (line 21) * gsl_complex_div_imag (C function): Complex arithmetic operators. (line 61) * gsl_complex_div_real (C function): Complex arithmetic operators. (line 41) * gsl_complex_exp (C function): Elementary Complex Functions. (line 28) * gsl_complex_inverse (C function): Complex arithmetic operators. (line 71) * gsl_complex_log (C function): Elementary Complex Functions. (line 33) * gsl_complex_log10 (C function): Elementary Complex Functions. (line 39) * gsl_complex_logabs (C function): Properties of complex numbers. (line 21) * gsl_complex_log_b (C function): Elementary Complex Functions. (line 44) * gsl_complex_mul (C function): Complex arithmetic operators. (line 16) * gsl_complex_mul_imag (C function): Complex arithmetic operators. (line 56) * gsl_complex_mul_real (C function): Complex arithmetic operators. (line 36) * gsl_complex_negative (C function): Complex arithmetic operators. (line 76) * gsl_complex_polar (C function): Assigning complex numbers. (line 12) * gsl_complex_poly_complex_eval (C function): Polynomial Evaluation. (line 25) * gsl_complex_pow (C function): Elementary Complex Functions. (line 17) * gsl_complex_pow_real (C function): Elementary Complex Functions. (line 23) * gsl_complex_rect (C function): Assigning complex numbers. (line 6) * gsl_complex_sec (C function): Complex Trigonometric Functions. (line 21) * gsl_complex_sech (C function): Complex Hyperbolic Functions. (line 21) * gsl_complex_sin (C function): Complex Trigonometric Functions. (line 6) * gsl_complex_sinh (C function): Complex Hyperbolic Functions. (line 6) * gsl_complex_sqrt (C function): Elementary Complex Functions. (line 6) * gsl_complex_sqrt_real (C function): Elementary Complex Functions. (line 12) * gsl_complex_sub (C function): Complex arithmetic operators. (line 11) * gsl_complex_sub_imag (C function): Complex arithmetic operators. (line 51) * gsl_complex_sub_real (C function): Complex arithmetic operators. (line 31) * gsl_complex_tan (C function): Complex Trigonometric Functions. (line 16) * gsl_complex_tanh (C function): Complex Hyperbolic Functions. (line 16) * GSL_CONST_MKSA_ACRE (C macro): Volume Area and Length. 
(line 14) * GSL_CONST_MKSA_ANGSTROM (C macro): Atomic and Nuclear Physics. (line 47) * GSL_CONST_MKSA_ASTRONOMICAL_UNIT (C macro): Astronomy and Astrophysics. (line 6) * GSL_CONST_MKSA_BAR (C macro): Pressure. (line 6) * GSL_CONST_MKSA_BARN (C macro): Atomic and Nuclear Physics. (line 51) * GSL_CONST_MKSA_BOHR_MAGNETON (C macro): Atomic and Nuclear Physics. (line 55) * GSL_CONST_MKSA_BOHR_RADIUS (C macro): Atomic and Nuclear Physics. (line 43) * GSL_CONST_MKSA_BOLTZMANN (C macro): Fundamental Constants. (line 36) * GSL_CONST_MKSA_BTU (C macro): Thermal Energy and Power. (line 10) * GSL_CONST_MKSA_CALORIE (C macro): Thermal Energy and Power. (line 6) * GSL_CONST_MKSA_CANADIAN_GALLON (C macro): Volume Area and Length. (line 26) * GSL_CONST_MKSA_CARAT (C macro): Mass and Weight. (line 30) * GSL_CONST_MKSA_CURIE (C macro): Radioactivity. (line 6) * GSL_CONST_MKSA_DAY (C macro): Measurement of Time. (line 14) * GSL_CONST_MKSA_DEBYE (C macro): Atomic and Nuclear Physics. (line 76) * GSL_CONST_MKSA_DYNE (C macro): Force and Energy. (line 10) * GSL_CONST_MKSA_ELECTRON_CHARGE (C macro): Atomic and Nuclear Physics. (line 6) * GSL_CONST_MKSA_ELECTRON_MAGNETIC_MOMENT (C macro): Atomic and Nuclear Physics. (line 63) * GSL_CONST_MKSA_ELECTRON_VOLT (C macro): Atomic and Nuclear Physics. (line 10) * GSL_CONST_MKSA_ERG (C macro): Force and Energy. (line 18) * GSL_CONST_MKSA_FARADAY (C macro): Fundamental Constants. (line 32) * GSL_CONST_MKSA_FATHOM (C macro): Speed and Nautical Units. (line 18) * GSL_CONST_MKSA_FOOT (C macro): Imperial Units. (line 10) * GSL_CONST_MKSA_FOOTCANDLE (C macro): Light and Illumination. (line 22) * GSL_CONST_MKSA_FOOTLAMBERT (C macro): Light and Illumination. (line 30) * GSL_CONST_MKSA_GAUSS (C macro): Fundamental Constants. (line 52) * GSL_CONST_MKSA_GRAM_FORCE (C macro): Mass and Weight. (line 34) * GSL_CONST_MKSA_GRAVITATIONAL_CONSTANT (C macro): Astronomy and Astrophysics. (line 10) * GSL_CONST_MKSA_GRAV_ACCEL (C macro): Astronomy and Astrophysics. (line 22) * GSL_CONST_MKSA_HECTARE (C macro): Volume Area and Length. (line 10) * GSL_CONST_MKSA_HORSEPOWER (C macro): Thermal Energy and Power. (line 18) * GSL_CONST_MKSA_HOUR (C macro): Measurement of Time. (line 10) * GSL_CONST_MKSA_INCH (C macro): Imperial Units. (line 6) * GSL_CONST_MKSA_INCH_OF_MERCURY (C macro): Pressure. (line 22) * GSL_CONST_MKSA_INCH_OF_WATER (C macro): Pressure. (line 26) * GSL_CONST_MKSA_JOULE (C macro): Force and Energy. (line 14) * GSL_CONST_MKSA_KILOMETERS_PER_HOUR (C macro): Speed and Nautical Units. (line 6) * GSL_CONST_MKSA_KILOPOUND_FORCE (C macro): Mass and Weight. (line 42) * GSL_CONST_MKSA_KNOT (C macro): Speed and Nautical Units. (line 22) * GSL_CONST_MKSA_LAMBERT (C macro): Light and Illumination. (line 26) * GSL_CONST_MKSA_LIGHT_YEAR (C macro): Astronomy and Astrophysics. (line 14) * GSL_CONST_MKSA_LITER (C macro): Volume Area and Length. (line 18) * GSL_CONST_MKSA_LUMEN (C macro): Light and Illumination. (line 10) * GSL_CONST_MKSA_LUX (C macro): Light and Illumination. (line 14) * GSL_CONST_MKSA_MASS_ELECTRON (C macro): Atomic and Nuclear Physics. (line 18) * GSL_CONST_MKSA_MASS_MUON (C macro): Atomic and Nuclear Physics. (line 22) * GSL_CONST_MKSA_MASS_NEUTRON (C macro): Atomic and Nuclear Physics. (line 30) * GSL_CONST_MKSA_MASS_PROTON (C macro): Atomic and Nuclear Physics. (line 26) * GSL_CONST_MKSA_METER_OF_MERCURY (C macro): Pressure. (line 18) * GSL_CONST_MKSA_METRIC_TON (C macro): Mass and Weight. (line 18) * GSL_CONST_MKSA_MICRON (C macro): Volume Area and Length. 
(line 6) * GSL_CONST_MKSA_MIL (C macro): Imperial Units. (line 22) * GSL_CONST_MKSA_MILE (C macro): Imperial Units. (line 18) * GSL_CONST_MKSA_MILES_PER_HOUR (C macro): Speed and Nautical Units. (line 10) * GSL_CONST_MKSA_MINUTE (C macro): Measurement of Time. (line 6) * GSL_CONST_MKSA_MOLAR_GAS (C macro): Fundamental Constants. (line 40) * GSL_CONST_MKSA_NAUTICAL_MILE (C macro): Speed and Nautical Units. (line 14) * GSL_CONST_MKSA_NEWTON (C macro): Force and Energy. (line 6) * GSL_CONST_MKSA_NUCLEAR_MAGNETON (C macro): Atomic and Nuclear Physics. (line 59) * GSL_CONST_MKSA_OUNCE_MASS (C macro): Mass and Weight. (line 10) * GSL_CONST_MKSA_PARSEC (C macro): Astronomy and Astrophysics. (line 18) * GSL_CONST_MKSA_PHOT (C macro): Light and Illumination. (line 18) * GSL_CONST_MKSA_PINT (C macro): Volume Area and Length. (line 38) * GSL_CONST_MKSA_PLANCKS_CONSTANT_H (C macro): Fundamental Constants. (line 20) * GSL_CONST_MKSA_PLANCKS_CONSTANT_HBAR (C macro): Fundamental Constants. (line 24) * GSL_CONST_MKSA_POINT (C macro): Printers Units. (line 6) * GSL_CONST_MKSA_POISE (C macro): Viscosity. (line 6) * GSL_CONST_MKSA_POUNDAL (C macro): Mass and Weight. (line 46) * GSL_CONST_MKSA_POUND_FORCE (C macro): Mass and Weight. (line 38) * GSL_CONST_MKSA_POUND_MASS (C macro): Mass and Weight. (line 6) * GSL_CONST_MKSA_PROTON_MAGNETIC_MOMENT (C macro): Atomic and Nuclear Physics. (line 68) * GSL_CONST_MKSA_PSI (C macro): Pressure. (line 30) * GSL_CONST_MKSA_QUART (C macro): Volume Area and Length. (line 34) * GSL_CONST_MKSA_RAD (C macro): Radioactivity. (line 14) * GSL_CONST_MKSA_ROENTGEN (C macro): Radioactivity. (line 10) * GSL_CONST_MKSA_RYDBERG (C macro): Atomic and Nuclear Physics. (line 38) * GSL_CONST_MKSA_SOLAR_MASS (C macro): Astronomy and Astrophysics. (line 26) * GSL_CONST_MKSA_SPEED_OF_LIGHT (C macro): Fundamental Constants. (line 6) * GSL_CONST_MKSA_STANDARD_GAS_VOLUME (C macro): Fundamental Constants. (line 44) * GSL_CONST_MKSA_STD_ATMOSPHERE (C macro): Pressure. (line 10) * GSL_CONST_MKSA_STEFAN_BOLTZMANN_CONSTANT (C macro): Fundamental Constants. (line 48) * GSL_CONST_MKSA_STILB (C macro): Light and Illumination. (line 6) * GSL_CONST_MKSA_STOKES (C macro): Viscosity. (line 10) * GSL_CONST_MKSA_TEXPOINT (C macro): Printers Units. (line 10) * GSL_CONST_MKSA_THERM (C macro): Thermal Energy and Power. (line 14) * GSL_CONST_MKSA_THOMSON_CROSS_SECTION (C macro): Atomic and Nuclear Physics. (line 72) * GSL_CONST_MKSA_TON (C macro): Mass and Weight. (line 14) * GSL_CONST_MKSA_TORR (C macro): Pressure. (line 14) * GSL_CONST_MKSA_TROY_OUNCE (C macro): Mass and Weight. (line 26) * GSL_CONST_MKSA_UK_GALLON (C macro): Volume Area and Length. (line 30) * GSL_CONST_MKSA_UK_TON (C macro): Mass and Weight. (line 22) * GSL_CONST_MKSA_UNIFIED_ATOMIC_MASS (C macro): Atomic and Nuclear Physics. (line 14) * GSL_CONST_MKSA_US_GALLON (C macro): Volume Area and Length. (line 22) * GSL_CONST_MKSA_VACUUM_PERMEABILITY (C macro): Fundamental Constants. (line 10) * GSL_CONST_MKSA_VACUUM_PERMITTIVITY (C macro): Fundamental Constants. (line 15) * GSL_CONST_MKSA_WEEK (C macro): Measurement of Time. (line 18) * GSL_CONST_MKSA_YARD (C macro): Imperial Units. (line 14) * GSL_CONST_NUM_ATTO (C macro): Prefixes. (line 60) * GSL_CONST_NUM_AVOGADRO (C macro): Fundamental Constants. (line 28) * GSL_CONST_NUM_EXA (C macro): Prefixes. (line 16) * GSL_CONST_NUM_FEMTO (C macro): Prefixes. (line 56) * GSL_CONST_NUM_FINE_STRUCTURE (C macro): Atomic and Nuclear Physics. (line 34) * GSL_CONST_NUM_GIGA (C macro): Prefixes. 
(line 28) * GSL_CONST_NUM_KILO (C macro): Prefixes. (line 36) * GSL_CONST_NUM_MEGA (C macro): Prefixes. (line 32) * GSL_CONST_NUM_MICRO (C macro): Prefixes. (line 44) * GSL_CONST_NUM_MILLI (C macro): Prefixes. (line 40) * GSL_CONST_NUM_NANO (C macro): Prefixes. (line 48) * GSL_CONST_NUM_PETA (C macro): Prefixes. (line 20) * GSL_CONST_NUM_PICO (C macro): Prefixes. (line 52) * GSL_CONST_NUM_TERA (C macro): Prefixes. (line 24) * GSL_CONST_NUM_YOCTO (C macro): Prefixes. (line 68) * GSL_CONST_NUM_YOTTA (C macro): Prefixes. (line 8) * GSL_CONST_NUM_ZEPTO (C macro): Prefixes. (line 64) * GSL_CONST_NUM_ZETTA (C macro): Prefixes. (line 12) * gsl_deriv_backward (C function): Functions. (line 46) * gsl_deriv_central (C function): Functions. (line 6) * gsl_deriv_forward (C function): Functions. (line 25) * gsl_dht (C type): Functions<2>. (line 6) * gsl_dht_alloc (C function): Functions<2>. (line 10) * gsl_dht_apply (C function): Functions<2>. (line 31) * gsl_dht_free (C function): Functions<2>. (line 27) * gsl_dht_init (C function): Functions<2>. (line 15) * gsl_dht_k_sample (C function): Functions<2>. (line 49) * gsl_dht_new (C function): Functions<2>. (line 20) * gsl_dht_x_sample (C function): Functions<2>. (line 42) * GSL_EDOM (C var): Error Codes. (line 13) * gsl_eigen_gen (C function): Real Generalized Nonsymmetric Eigensystems. (line 89) * gsl_eigen_genherm (C function): Complex Generalized Hermitian-Definite Eigensystems. (line 38) * gsl_eigen_genhermv (C function): Complex Generalized Hermitian-Definite Eigensystems. (line 67) * gsl_eigen_genhermv_alloc (C function): Complex Generalized Hermitian-Definite Eigensystems. (line 53) * gsl_eigen_genhermv_free (C function): Complex Generalized Hermitian-Definite Eigensystems. (line 61) * gsl_eigen_genhermv_sort (C function): Sorting Eigenvalues and Eigenvectors. (line 58) * gsl_eigen_genhermv_workspace (C type): Complex Generalized Hermitian-Definite Eigensystems. (line 48) * gsl_eigen_genherm_alloc (C function): Complex Generalized Hermitian-Definite Eigensystems. (line 24) * gsl_eigen_genherm_free (C function): Complex Generalized Hermitian-Definite Eigensystems. (line 32) * gsl_eigen_genherm_workspace (C type): Complex Generalized Hermitian-Definite Eigensystems. (line 19) * gsl_eigen_gensymm (C function): Real Generalized Symmetric-Definite Eigensystems. (line 44) * gsl_eigen_gensymmv (C function): Real Generalized Symmetric-Definite Eigensystems. (line 72) * gsl_eigen_gensymmv_alloc (C function): Real Generalized Symmetric-Definite Eigensystems. (line 58) * gsl_eigen_gensymmv_free (C function): Real Generalized Symmetric-Definite Eigensystems. (line 66) * gsl_eigen_gensymmv_sort (C function): Sorting Eigenvalues and Eigenvectors. (line 49) * gsl_eigen_gensymmv_workspace (C type): Real Generalized Symmetric-Definite Eigensystems. (line 53) * gsl_eigen_gensymm_alloc (C function): Real Generalized Symmetric-Definite Eigensystems. (line 31) * gsl_eigen_gensymm_free (C function): Real Generalized Symmetric-Definite Eigensystems. (line 38) * gsl_eigen_gensymm_workspace (C type): Real Generalized Symmetric-Definite Eigensystems. (line 26) * gsl_eigen_genv (C function): Real Generalized Nonsymmetric Eigensystems. (line 134) * gsl_eigen_genv_alloc (C function): Real Generalized Nonsymmetric Eigensystems. (line 122) * gsl_eigen_genv_free (C function): Real Generalized Nonsymmetric Eigensystems. (line 129) * gsl_eigen_genv_QZ (C function): Real Generalized Nonsymmetric Eigensystems. 
(line 152) * gsl_eigen_genv_sort (C function): Sorting Eigenvalues and Eigenvectors. (line 67) * gsl_eigen_genv_workspace (C type): Real Generalized Nonsymmetric Eigensystems. (line 117) * gsl_eigen_gen_alloc (C function): Real Generalized Nonsymmetric Eigensystems. (line 54) * gsl_eigen_gen_free (C function): Real Generalized Nonsymmetric Eigensystems. (line 61) * gsl_eigen_gen_params (C function): Real Generalized Nonsymmetric Eigensystems. (line 66) * gsl_eigen_gen_QZ (C function): Real Generalized Nonsymmetric Eigensystems. (line 109) * gsl_eigen_gen_workspace (C type): Real Generalized Nonsymmetric Eigensystems. (line 49) * gsl_eigen_herm (C function): Complex Hermitian Matrices. (line 26) * gsl_eigen_hermv (C function): Complex Hermitian Matrices. (line 55) * gsl_eigen_hermv_alloc (C function): Complex Hermitian Matrices. (line 43) * gsl_eigen_hermv_free (C function): Complex Hermitian Matrices. (line 50) * gsl_eigen_hermv_sort (C function): Sorting Eigenvalues and Eigenvectors. (line 29) * gsl_eigen_hermv_workspace (C type): Complex Hermitian Matrices. (line 38) * gsl_eigen_herm_alloc (C function): Complex Hermitian Matrices. (line 14) * gsl_eigen_herm_free (C function): Complex Hermitian Matrices. (line 21) * gsl_eigen_herm_workspace (C type): Complex Hermitian Matrices. (line 9) * gsl_eigen_nonsymm (C function): Real Nonsymmetric Matrices. (line 72) * gsl_eigen_nonsymmv (C function): Real Nonsymmetric Matrices. (line 123) * gsl_eigen_nonsymmv_alloc (C function): Real Nonsymmetric Matrices. (line 99) * gsl_eigen_nonsymmv_free (C function): Real Nonsymmetric Matrices. (line 106) * gsl_eigen_nonsymmv_params (C function): Real Nonsymmetric Matrices. (line 112) * gsl_eigen_nonsymmv_sort (C function): Sorting Eigenvalues and Eigenvectors. (line 38) * gsl_eigen_nonsymmv_workspace (C type): Real Nonsymmetric Matrices. (line 94) * gsl_eigen_nonsymmv_Z (C function): Real Nonsymmetric Matrices. (line 139) * gsl_eigen_nonsymm_alloc (C function): Real Nonsymmetric Matrices. (line 22) * gsl_eigen_nonsymm_free (C function): Real Nonsymmetric Matrices. (line 29) * gsl_eigen_nonsymm_params (C function): Real Nonsymmetric Matrices. (line 35) * gsl_eigen_nonsymm_workspace (C type): Real Nonsymmetric Matrices. (line 17) * gsl_eigen_nonsymm_Z (C function): Real Nonsymmetric Matrices. (line 87) * gsl_eigen_symm (C function): Real Symmetric Matrices. (line 29) * gsl_eigen_symmv (C function): Real Symmetric Matrices. (line 56) * gsl_eigen_symmv_alloc (C function): Real Symmetric Matrices. (line 44) * gsl_eigen_symmv_free (C function): Real Symmetric Matrices. (line 51) * gsl_eigen_symmv_sort (C function): Sorting Eigenvalues and Eigenvectors. (line 6) * gsl_eigen_symmv_sort.gsl_eigen_sort_t (C type): Sorting Eigenvalues and Eigenvectors. (line 15) * gsl_eigen_symmv_workspace (C type): Real Symmetric Matrices. (line 39) * gsl_eigen_symm_alloc (C function): Real Symmetric Matrices. (line 17) * gsl_eigen_symm_free (C function): Real Symmetric Matrices. (line 24) * gsl_eigen_symm_workspace (C type): Real Symmetric Matrices. (line 12) * GSL_EINVAL (C var): Error Codes. (line 32) * GSL_ENOMEM (C var): Error Codes. (line 25) * GSL_ERANGE (C var): Error Codes. (line 19) * GSL_ERROR (C macro): Using GSL error reporting in your own functions. (line 15) * gsl_error_handler_t (C type): Error Handlers. (line 23) * GSL_ERROR_VAL (C macro): Using GSL error reporting in your own functions. (line 36) * gsl_expm1 (C function): Elementary Functions. 
(line 18) * gsl_fcmp (C function): Approximate Comparison of Floating Point Numbers. (line 12) * gsl_fft_complex_backward (C function): Mixed-radix FFT routines for complex data. (line 121) * gsl_fft_complex_forward (C function): Mixed-radix FFT routines for complex data. (line 121) * gsl_fft_complex_inverse (C function): Mixed-radix FFT routines for complex data. (line 121) * gsl_fft_complex_radix2_backward (C function): Radix-2 FFT routines for complex data. (line 16) * gsl_fft_complex_radix2_dif_backward (C function): Radix-2 FFT routines for complex data. (line 38) * gsl_fft_complex_radix2_dif_forward (C function): Radix-2 FFT routines for complex data. (line 38) * gsl_fft_complex_radix2_dif_inverse (C function): Radix-2 FFT routines for complex data. (line 38) * gsl_fft_complex_radix2_dif_transform (C function): Radix-2 FFT routines for complex data. (line 38) * gsl_fft_complex_radix2_forward (C function): Radix-2 FFT routines for complex data. (line 16) * gsl_fft_complex_radix2_inverse (C function): Radix-2 FFT routines for complex data. (line 16) * gsl_fft_complex_radix2_transform (C function): Radix-2 FFT routines for complex data. (line 16) * gsl_fft_complex_transform (C function): Mixed-radix FFT routines for complex data. (line 121) * gsl_fft_complex_wavetable (C type): Mixed-radix FFT routines for complex data. (line 80) * gsl_fft_complex_wavetable_alloc (C function): Mixed-radix FFT routines for complex data. (line 44) * gsl_fft_complex_wavetable_free (C function): Mixed-radix FFT routines for complex data. (line 65) * gsl_fft_complex_workspace (C type): Mixed-radix FFT routines for complex data. (line 101) * gsl_fft_complex_workspace_alloc (C function): Mixed-radix FFT routines for complex data. (line 106) * gsl_fft_complex_workspace_free (C function): Mixed-radix FFT routines for complex data. (line 112) * gsl_fft_halfcomplex_radix2_backward (C function): Radix-2 FFT routines for real data. (line 57) * gsl_fft_halfcomplex_radix2_inverse (C function): Radix-2 FFT routines for real data. (line 57) * gsl_fft_halfcomplex_radix2_unpack (C function): Radix-2 FFT routines for real data. (line 68) * gsl_fft_halfcomplex_transform (C function): Mixed-radix FFT routines for real data. (line 133) * gsl_fft_halfcomplex_unpack (C function): Mixed-radix FFT routines for real data. (line 169) * gsl_fft_halfcomplex_wavetable (C type): Mixed-radix FFT routines for real data. (line 72) * gsl_fft_halfcomplex_wavetable_alloc (C function): Mixed-radix FFT routines for real data. (line 78) * gsl_fft_halfcomplex_wavetable_free (C function): Mixed-radix FFT routines for real data. (line 100) * gsl_fft_real_radix2_transform (C function): Radix-2 FFT routines for real data. (line 13) * gsl_fft_real_transform (C function): Mixed-radix FFT routines for real data. (line 133) * gsl_fft_real_unpack (C function): Mixed-radix FFT routines for real data. (line 153) * gsl_fft_real_wavetable (C type): Mixed-radix FFT routines for real data. (line 72) * gsl_fft_real_wavetable_alloc (C function): Mixed-radix FFT routines for real data. (line 78) * gsl_fft_real_wavetable_free (C function): Mixed-radix FFT routines for real data. (line 100) * gsl_fft_real_workspace (C type): Mixed-radix FFT routines for real data. (line 112) * gsl_fft_real_workspace_alloc (C function): Mixed-radix FFT routines for real data. (line 116) * gsl_fft_real_workspace_free (C function): Mixed-radix FFT routines for real data. (line 123) * gsl_filter_end_t (C type): Handling Endpoints<2>. 
(line 12) * gsl_filter_end_t.GSL_FILTER_END_PADVALUE (C macro): Handling Endpoints<2>. (line 27) * gsl_filter_end_t.GSL_FILTER_END_PADZERO (C macro): Handling Endpoints<2>. (line 17) * gsl_filter_end_t.GSL_FILTER_END_TRUNCATE (C macro): Handling Endpoints<2>. (line 36) * gsl_filter_gaussian (C function): Gaussian Filter. (line 50) * gsl_filter_gaussian_alloc (C function): Gaussian Filter. (line 37) * gsl_filter_gaussian_free (C function): Gaussian Filter. (line 45) * gsl_filter_gaussian_kernel (C function): Gaussian Filter. (line 62) * gsl_filter_impulse (C function): Impulse Detection Filter. (line 115) * gsl_filter_impulse_alloc (C function): Impulse Detection Filter. (line 101) * gsl_filter_impulse_free (C function): Impulse Detection Filter. (line 110) * gsl_filter_median (C function): Standard Median Filter. (line 25) * gsl_filter_median_alloc (C function): Standard Median Filter. (line 11) * gsl_filter_median_free (C function): Standard Median Filter. (line 20) * gsl_filter_rmedian (C function): Recursive Median Filter. (line 33) * gsl_filter_rmedian_alloc (C function): Recursive Median Filter. (line 19) * gsl_filter_rmedian_free (C function): Recursive Median Filter. (line 28) * gsl_filter_scale_t (C type): Impulse Detection Filter. (line 45) * gsl_filter_scale_t.GSL_FILTER_SCALE_IQR (C macro): Impulse Detection Filter. (line 61) * gsl_filter_scale_t.GSL_FILTER_SCALE_MAD (C macro): Impulse Detection Filter. (line 50) * gsl_filter_scale_t.GSL_FILTER_SCALE_QN (C macro): Impulse Detection Filter. (line 81) * gsl_filter_scale_t.GSL_FILTER_SCALE_SN (C macro): Impulse Detection Filter. (line 75) * gsl_finite (C function): Infinities and Not-a-number. (line 30) * gsl_fit_linear (C function): Linear regression with a constant term. (line 9) * gsl_fit_linear_est (C function): Linear regression with a constant term. (line 47) * gsl_fit_mul (C function): Linear regression without a constant term. (line 10) * gsl_fit_mul_est (C function): Linear regression without a constant term. (line 43) * gsl_fit_wlinear (C function): Linear regression with a constant term. (line 27) * gsl_fit_wmul (C function): Linear regression without a constant term. (line 24) * gsl_frexp (C function): Elementary Functions. (line 56) * gsl_function (C type): Providing the function to solve. (line 11) * gsl_function_fdf (C type): Providing the function to solve. (line 54) * gsl_heapsort (C function): Sorting objects. (line 16) * gsl_heapsort.gsl_comparison_fn_t (C type): Sorting objects. (line 24) * gsl_heapsort_index (C function): Sorting objects. (line 58) * gsl_histogram (C type): The histogram struct. (line 8) * gsl_histogram2d (C type): The 2D histogram struct. (line 8) * gsl_histogram2d_accumulate (C function): Updating and accessing 2D histogram elements. (line 27) * gsl_histogram2d_add (C function): 2D Histogram Operations. (line 12) * gsl_histogram2d_alloc (C function): 2D Histogram allocation. (line 14) * gsl_histogram2d_clone (C function): Copying 2D Histograms. (line 14) * gsl_histogram2d_cov (C function): 2D Histogram Statistics. (line 60) * gsl_histogram2d_div (C function): 2D Histogram Operations. (line 36) * gsl_histogram2d_equal_bins_p (C function): 2D Histogram Operations. (line 6) * gsl_histogram2d_find (C function): Searching 2D histogram ranges. (line 9) * gsl_histogram2d_fprintf (C function): Reading and writing 2D histograms. (line 31) * gsl_histogram2d_fread (C function): Reading and writing 2D histograms. (line 19) * gsl_histogram2d_free (C function): 2D Histogram allocation. 
(line 43) * gsl_histogram2d_fscanf (C function): Reading and writing 2D histograms. (line 70) * gsl_histogram2d_fwrite (C function): Reading and writing 2D histograms. (line 9) * gsl_histogram2d_get (C function): Updating and accessing 2D histogram elements. (line 34) * gsl_histogram2d_get_xrange (C function): Updating and accessing 2D histogram elements. (line 43) * gsl_histogram2d_get_yrange (C function): Updating and accessing 2D histogram elements. (line 43) * gsl_histogram2d_increment (C function): Updating and accessing 2D histogram elements. (line 12) * gsl_histogram2d_max_bin (C function): 2D Histogram Statistics. (line 11) * gsl_histogram2d_max_val (C function): 2D Histogram Statistics. (line 6) * gsl_histogram2d_memcpy (C function): Copying 2D Histograms. (line 6) * gsl_histogram2d_min_bin (C function): 2D Histogram Statistics. (line 24) * gsl_histogram2d_min_val (C function): 2D Histogram Statistics. (line 19) * gsl_histogram2d_mul (C function): 2D Histogram Operations. (line 28) * gsl_histogram2d_nx (C function): Updating and accessing 2D histogram elements. (line 60) * gsl_histogram2d_ny (C function): Updating and accessing 2D histogram elements. (line 60) * gsl_histogram2d_pdf (C type): Resampling from 2D histograms. (line 20) * gsl_histogram2d_pdf_alloc (C function): Resampling from 2D histograms. (line 42) * gsl_histogram2d_pdf_free (C function): Resampling from 2D histograms. (line 61) * gsl_histogram2d_pdf_init (C function): Resampling from 2D histograms. (line 52) * gsl_histogram2d_pdf_sample (C function): Resampling from 2D histograms. (line 66) * gsl_histogram2d_reset (C function): Updating and accessing 2D histogram elements. (line 73) * gsl_histogram2d_scale (C function): 2D Histogram Operations. (line 44) * gsl_histogram2d_set_ranges (C function): 2D Histogram allocation. (line 26) * gsl_histogram2d_set_ranges_uniform (C function): 2D Histogram allocation. (line 35) * gsl_histogram2d_shift (C function): 2D Histogram Operations. (line 52) * gsl_histogram2d_sub (C function): 2D Histogram Operations. (line 20) * gsl_histogram2d_sum (C function): 2D Histogram Statistics. (line 67) * gsl_histogram2d_xmax (C function): Updating and accessing 2D histogram elements. (line 60) * gsl_histogram2d_xmean (C function): 2D Histogram Statistics. (line 32) * gsl_histogram2d_xmin (C function): Updating and accessing 2D histogram elements. (line 60) * gsl_histogram2d_xsigma (C function): 2D Histogram Statistics. (line 46) * gsl_histogram2d_ymax (C function): Updating and accessing 2D histogram elements. (line 60) * gsl_histogram2d_ymean (C function): 2D Histogram Statistics. (line 39) * gsl_histogram2d_ymin (C function): Updating and accessing 2D histogram elements. (line 60) * gsl_histogram2d_ysigma (C function): 2D Histogram Statistics. (line 53) * gsl_histogram_accumulate (C function): Updating and accessing histogram elements. (line 27) * gsl_histogram_add (C function): Histogram Operations. (line 12) * gsl_histogram_alloc (C function): Histogram allocation. (line 14) * gsl_histogram_bins (C function): Updating and accessing histogram elements. (line 58) * gsl_histogram_clone (C function): Copying Histograms. (line 14) * gsl_histogram_div (C function): Histogram Operations. (line 36) * gsl_histogram_equal_bins_p (C function): Histogram Operations. (line 6) * gsl_histogram_find (C function): Searching histogram ranges. (line 9) * gsl_histogram_fprintf (C function): Reading and writing histograms. (line 30) * gsl_histogram_fread (C function): Reading and writing histograms. 
(line 19) * gsl_histogram_free (C function): Histogram allocation. (line 66) * gsl_histogram_fscanf (C function): Reading and writing histograms. (line 57) * gsl_histogram_fwrite (C function): Reading and writing histograms. (line 9) * gsl_histogram_get (C function): Updating and accessing histogram elements. (line 34) * gsl_histogram_get_range (C function): Updating and accessing histogram elements. (line 43) * gsl_histogram_increment (C function): Updating and accessing histogram elements. (line 11) * gsl_histogram_max (C function): Updating and accessing histogram elements. (line 58) * gsl_histogram_max_bin (C function): Histogram Statistics. (line 11) * gsl_histogram_max_val (C function): Histogram Statistics. (line 6) * gsl_histogram_mean (C function): Histogram Statistics. (line 28) * gsl_histogram_memcpy (C function): Copying Histograms. (line 6) * gsl_histogram_min (C function): Updating and accessing histogram elements. (line 58) * gsl_histogram_min_bin (C function): Histogram Statistics. (line 22) * gsl_histogram_min_val (C function): Histogram Statistics. (line 17) * gsl_histogram_mul (C function): Histogram Operations. (line 28) * gsl_histogram_pdf (C type): The histogram probability distribution struct. (line 18) * gsl_histogram_pdf_alloc (C function): The histogram probability distribution struct. (line 36) * gsl_histogram_pdf_free (C function): The histogram probability distribution struct. (line 54) * gsl_histogram_pdf_init (C function): The histogram probability distribution struct. (line 45) * gsl_histogram_pdf_sample (C function): The histogram probability distribution struct. (line 59) * gsl_histogram_reset (C function): Updating and accessing histogram elements. (line 67) * gsl_histogram_scale (C function): Histogram Operations. (line 44) * gsl_histogram_set_ranges (C function): Histogram allocation. (line 24) * gsl_histogram_set_ranges_uniform (C function): Histogram allocation. (line 51) * gsl_histogram_shift (C function): Histogram Operations. (line 51) * gsl_histogram_sigma (C function): Histogram Statistics. (line 35) * gsl_histogram_sub (C function): Histogram Operations. (line 20) * gsl_histogram_sum (C function): Histogram Statistics. (line 43) * gsl_hypot (C function): Elementary Functions. (line 24) * gsl_hypot3 (C function): Elementary Functions. (line 30) * gsl_ieee_env_setup (C function): Setting up your IEEE environment. (line 27) * gsl_ieee_fprintf_double (C function): Representation of floating point numbers. (line 56) * gsl_ieee_fprintf_float (C function): Representation of floating point numbers. (line 56) * GSL_IEEE_MODE (C macro): Setting up your IEEE environment. (line 23) * gsl_ieee_printf_double (C function): Representation of floating point numbers. (line 89) * gsl_ieee_printf_float (C function): Representation of floating point numbers. (line 89) * GSL_IMAG (C macro): Complex number macros. (line 9) * gsl_integration_cquad (C function): CQUAD doubly-adaptive integration. (line 36) * gsl_integration_cquad_workspace: CQUAD doubly-adaptive integration. (line 20) * gsl_integration_cquad_workspace_alloc (C function): CQUAD doubly-adaptive integration. (line 20) * gsl_integration_cquad_workspace_free (C function): CQUAD doubly-adaptive integration. (line 30) * gsl_integration_fixed (C function): Fixed point quadratures. (line 170) * gsl_integration_fixed_alloc (C function): Fixed point quadratures. (line 74) * gsl_integration_fixed_alloc.gsl_integration_fixed_type (C type): Fixed point quadratures. 
(line 87) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_chebyshev (C var): Fixed point quadratures. (line 99) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_chebyshev2 (C var): Fixed point quadratures. (line 140) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_exponential (C var): Fixed point quadratures. (line 129) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_gegenbauer (C var): Fixed point quadratures. (line 106) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_hermite (C var): Fixed point quadratures. (line 123) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_jacobi (C var): Fixed point quadratures. (line 112) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_laguerre (C var): Fixed point quadratures. (line 117) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_legendre (C var): Fixed point quadratures. (line 92) * gsl_integration_fixed_alloc.gsl_integration_fixed_type.gsl_integration_fixed_rational (C var): Fixed point quadratures. (line 135) * gsl_integration_fixed_free (C function): Fixed point quadratures. (line 147) * gsl_integration_fixed_n (C function): Fixed point quadratures. (line 153) * gsl_integration_fixed_nodes (C function): Fixed point quadratures. (line 158) * gsl_integration_fixed_weights (C function): Fixed point quadratures. (line 164) * gsl_integration_fixed_workspace (C type): Fixed point quadratures. (line 59) * gsl_integration_glfixed (C function): Gauss-Legendre integration. (line 23) * gsl_integration_glfixed_point (C function): Gauss-Legendre integration. (line 29) * gsl_integration_glfixed_table: Gauss-Legendre integration. (line 14) * gsl_integration_glfixed_table <1>: Gauss-Legendre integration. (line 40) * gsl_integration_glfixed_table_alloc (C function): Gauss-Legendre integration. (line 14) * gsl_integration_glfixed_table_free (C function): Gauss-Legendre integration. (line 40) * gsl_integration_qag (C function): QAG adaptive integration. (line 33) * gsl_integration_qagi (C function): QAGI adaptive integration on infinite intervals. (line 6) * gsl_integration_qagil (C function): QAGI adaptive integration on infinite intervals. (line 36) * gsl_integration_qagiu (C function): QAGI adaptive integration on infinite intervals. (line 22) * gsl_integration_qagp (C function): QAGP adaptive integration with known singular points. (line 6) * gsl_integration_qags (C function): QAGS adaptive integration with singularities. (line 15) * gsl_integration_qawc (C function): QAWC adaptive integration for Cauchy principal values. (line 6) * gsl_integration_qawf (C function): QAWF adaptive integration for Fourier integrals. (line 6) * gsl_integration_qawo (C function): QAWO adaptive integration for oscillatory functions. (line 59) * gsl_integration_qawo_table: QAWO adaptive integration for oscillatory functions. (line 11) * gsl_integration_qawo_table_alloc (C function): QAWO adaptive integration for oscillatory functions. (line 11) * gsl_integration_qawo_table_alloc.GSL_INTEG_COSINE (C macro): QAWO adaptive integration for oscillatory functions. (line 27) * gsl_integration_qawo_table_alloc.GSL_INTEG_SINE (C macro): QAWO adaptive integration for oscillatory functions. (line 29) * gsl_integration_qawo_table_free (C function): QAWO adaptive integration for oscillatory functions. 
(line 53) * gsl_integration_qawo_table_set (C function): QAWO adaptive integration for oscillatory functions. (line 40) * gsl_integration_qawo_table_set_length (C function): QAWO adaptive integration for oscillatory functions. (line 47) * gsl_integration_qaws (C function): QAWS adaptive integration for singular functions. (line 68) * gsl_integration_qaws_table (C type): QAWS adaptive integration for singular functions. (line 11) * gsl_integration_qaws_table_alloc (C function): QAWS adaptive integration for singular functions. (line 16) * gsl_integration_qaws_table_free (C function): QAWS adaptive integration for singular functions. (line 62) * gsl_integration_qaws_table_set (C function): QAWS adaptive integration for singular functions. (line 54) * gsl_integration_qng (C function): QNG non-adaptive Gauss-Kronrod integration. (line 10) * gsl_integration_romberg (C function): Romberg integration. (line 36) * gsl_integration_romberg_alloc (C function): Romberg integration. (line 21) * gsl_integration_romberg_free (C function): Romberg integration. (line 30) * gsl_integration_workspace: QAG adaptive integration. (line 18) * gsl_integration_workspace (C type): QAG adaptive integration. (line 13) * gsl_integration_workspace_alloc (C function): QAG adaptive integration. (line 18) * gsl_integration_workspace_free (C function): QAG adaptive integration. (line 27) * gsl_interp (C type): 1D Interpolation Functions. (line 9) * gsl_interp2d (C type): 2D Interpolation Functions. (line 10) * gsl_interp2d_alloc (C function): 2D Interpolation Functions. (line 14) * gsl_interp2d_eval (C function): 2D Evaluation of Interpolating Functions. (line 6) * gsl_interp2d_eval_deriv_x (C function): 2D Evaluation of Interpolating Functions. (line 41) * gsl_interp2d_eval_deriv_xx (C function): 2D Evaluation of Interpolating Functions. (line 77) * gsl_interp2d_eval_deriv_xx_e (C function): 2D Evaluation of Interpolating Functions. (line 77) * gsl_interp2d_eval_deriv_xy (C function): 2D Evaluation of Interpolating Functions. (line 113) * gsl_interp2d_eval_deriv_xy_e (C function): 2D Evaluation of Interpolating Functions. (line 113) * gsl_interp2d_eval_deriv_x_e (C function): 2D Evaluation of Interpolating Functions. (line 41) * gsl_interp2d_eval_deriv_y (C function): 2D Evaluation of Interpolating Functions. (line 59) * gsl_interp2d_eval_deriv_yy (C function): 2D Evaluation of Interpolating Functions. (line 95) * gsl_interp2d_eval_deriv_yy_e (C function): 2D Evaluation of Interpolating Functions. (line 95) * gsl_interp2d_eval_deriv_y_e (C function): 2D Evaluation of Interpolating Functions. (line 59) * gsl_interp2d_eval_e (C function): 2D Evaluation of Interpolating Functions. (line 6) * gsl_interp2d_eval_extrap (C function): 2D Evaluation of Interpolating Functions. (line 23) * gsl_interp2d_eval_extrap_e (C function): 2D Evaluation of Interpolating Functions. (line 23) * gsl_interp2d_free (C function): 2D Interpolation Functions. (line 39) * gsl_interp2d_get (C function): 2D Interpolation Grids. (line 21) * gsl_interp2d_idx (C function): 2D Interpolation Grids. (line 27) * gsl_interp2d_init (C function): 2D Interpolation Functions. (line 22) * gsl_interp2d_min_size (C function): 2D Interpolation Types. (line 31) * gsl_interp2d_name (C function): 2D Interpolation Types. (line 20) * gsl_interp2d_set (C function): 2D Interpolation Grids. (line 15) * gsl_interp2d_type (C type): 2D Interpolation Types. (line 6) * gsl_interp2d_type.gsl_interp2d_bicubic (C var): 2D Interpolation Types. 
(line 16) * gsl_interp2d_type.gsl_interp2d_bilinear (C var): 2D Interpolation Types. (line 11) * gsl_interp2d_type_min_size (C function): 2D Interpolation Types. (line 31) * gsl_interp_accel (C type): 1D Index Look-up and Acceleration. (line 9) * gsl_interp_accel_alloc (C function): 1D Index Look-up and Acceleration. (line 25) * gsl_interp_accel_find (C function): 1D Index Look-up and Acceleration. (line 36) * gsl_interp_accel_free (C function): 1D Index Look-up and Acceleration. (line 52) * gsl_interp_accel_reset (C function): 1D Index Look-up and Acceleration. (line 46) * gsl_interp_alloc (C function): 1D Interpolation Functions. (line 13) * gsl_interp_bsearch (C function): 1D Index Look-up and Acceleration. (line 16) * gsl_interp_eval (C function): 1D Evaluation of Interpolating Functions. (line 6) * gsl_interp_eval_deriv (C function): 1D Evaluation of Interpolating Functions. (line 20) * gsl_interp_eval_deriv2 (C function): 1D Evaluation of Interpolating Functions. (line 32) * gsl_interp_eval_deriv2_e (C function): 1D Evaluation of Interpolating Functions. (line 32) * gsl_interp_eval_deriv_e (C function): 1D Evaluation of Interpolating Functions. (line 20) * gsl_interp_eval_e (C function): 1D Evaluation of Interpolating Functions. (line 6) * gsl_interp_eval_integ (C function): 1D Evaluation of Interpolating Functions. (line 44) * gsl_interp_eval_integ_e (C function): 1D Evaluation of Interpolating Functions. (line 44) * gsl_interp_free (C function): 1D Interpolation Functions. (line 31) * gsl_interp_init (C function): 1D Interpolation Functions. (line 19) * gsl_interp_min_size (C function): 1D Interpolation Types. (line 77) * gsl_interp_name (C function): 1D Interpolation Types. (line 66) * gsl_interp_type (C type): 1D Interpolation Types. (line 8) * gsl_interp_type.gsl_interp_akima (C var): 1D Interpolation Types. (line 42) * gsl_interp_type.gsl_interp_akima_periodic (C var): 1D Interpolation Types. (line 47) * gsl_interp_type.gsl_interp_cspline (C var): 1D Interpolation Types. (line 23) * gsl_interp_type.gsl_interp_cspline_periodic (C var): 1D Interpolation Types. (line 31) * gsl_interp_type.gsl_interp_linear (C var): 1D Interpolation Types. (line 10) * gsl_interp_type.gsl_interp_polynomial (C var): 1D Interpolation Types. (line 15) * gsl_interp_type.gsl_interp_steffen (C var): 1D Interpolation Types. (line 53) * gsl_interp_type_min_size (C function): 1D Interpolation Types. (line 77) * gsl_isinf (C function): Infinities and Not-a-number. (line 25) * gsl_isnan (C function): Infinities and Not-a-number. (line 21) * GSL_IS_EVEN (C macro): Testing for Odd and Even Numbers. (line 11) * GSL_IS_ODD (C macro): Testing for Odd and Even Numbers. (line 6) * gsl_ldexp (C function): Elementary Functions. (line 51) * gsl_linalg_balance_matrix (C function): Balancing. (line 17) * gsl_linalg_bidiag_decomp (C function): Bidiagonalization. (line 15) * gsl_linalg_bidiag_unpack (C function): Bidiagonalization. (line 28) * gsl_linalg_bidiag_unpack2 (C function): Bidiagonalization. (line 40) * gsl_linalg_bidiag_unpack_B (C function): Bidiagonalization. (line 50) * gsl_linalg_cholesky_band_decomp (C function): Banded Cholesky Decomposition. (line 14) * gsl_linalg_cholesky_band_invert (C function): Banded Cholesky Decomposition. (line 54) * gsl_linalg_cholesky_band_rcond (C function): Banded Cholesky Decomposition. (line 89) * gsl_linalg_cholesky_band_scale (C function): Banded Cholesky Decomposition. (line 71) * gsl_linalg_cholesky_band_scale_apply (C function): Banded Cholesky Decomposition. 
(line 82) * gsl_linalg_cholesky_band_solve (C function): Banded Cholesky Decomposition. (line 32) * gsl_linalg_cholesky_band_solvem (C function): Banded Cholesky Decomposition. (line 32) * gsl_linalg_cholesky_band_svx (C function): Banded Cholesky Decomposition. (line 42) * gsl_linalg_cholesky_band_svxm (C function): Banded Cholesky Decomposition. (line 42) * gsl_linalg_cholesky_band_unpack (C function): Banded Cholesky Decomposition. (line 63) * gsl_linalg_cholesky_decomp (C function): Cholesky Decomposition. (line 49) * gsl_linalg_cholesky_decomp1 (C function): Cholesky Decomposition. (line 29) * gsl_linalg_cholesky_decomp2 (C function): Cholesky Decomposition. (line 89) * gsl_linalg_cholesky_invert (C function): Cholesky Decomposition. (line 79) * gsl_linalg_cholesky_rcond (C function): Cholesky Decomposition. (line 145) * gsl_linalg_cholesky_scale (C function): Cholesky Decomposition. (line 127) * gsl_linalg_cholesky_scale_apply (C function): Cholesky Decomposition. (line 138) * gsl_linalg_cholesky_solve (C function): Cholesky Decomposition. (line 54) * gsl_linalg_cholesky_solve2 (C function): Cholesky Decomposition. (line 108) * gsl_linalg_cholesky_svx (C function): Cholesky Decomposition. (line 66) * gsl_linalg_cholesky_svx2 (C function): Cholesky Decomposition. (line 117) * gsl_linalg_COD_decomp (C function): Complete Orthogonal Decomposition. (line 49) * gsl_linalg_COD_decomp_e (C function): Complete Orthogonal Decomposition. (line 49) * gsl_linalg_COD_lssolve (C function): Complete Orthogonal Decomposition. (line 70) * gsl_linalg_COD_lssolve2 (C function): Complete Orthogonal Decomposition. (line 86) * gsl_linalg_COD_matZ (C function): Complete Orthogonal Decomposition. (line 113) * gsl_linalg_COD_unpack (C function): Complete Orthogonal Decomposition. (line 103) * gsl_linalg_complex_cholesky_decomp (C function): Cholesky Decomposition. (line 29) * gsl_linalg_complex_cholesky_invert (C function): Cholesky Decomposition. (line 79) * gsl_linalg_complex_cholesky_solve (C function): Cholesky Decomposition. (line 54) * gsl_linalg_complex_cholesky_svx (C function): Cholesky Decomposition. (line 66) * gsl_linalg_complex_householder_hm (C function): Householder Transformations. (line 30) * gsl_linalg_complex_householder_hv (C function): Householder Transformations. (line 50) * gsl_linalg_complex_householder_mh (C function): Householder Transformations. (line 40) * gsl_linalg_complex_householder_transform (C function): Householder Transformations. (line 17) * gsl_linalg_complex_LU_decomp (C function): LU Decomposition. (line 19) * gsl_linalg_complex_LU_det (C function): LU Decomposition. (line 111) * gsl_linalg_complex_LU_invert (C function): LU Decomposition. (line 77) * gsl_linalg_complex_LU_invx (C function): LU Decomposition. (line 94) * gsl_linalg_complex_LU_lndet (C function): LU Decomposition. (line 120) * gsl_linalg_complex_LU_refine (C function): LU Decomposition. (line 64) * gsl_linalg_complex_LU_sgndet (C function): LU Decomposition. (line 129) * gsl_linalg_complex_LU_solve (C function): LU Decomposition. (line 43) * gsl_linalg_complex_LU_svx (C function): LU Decomposition. (line 54) * gsl_linalg_complex_QR_decomp (C function): Level 2 Interface. (line 10) * gsl_linalg_complex_QR_decomp_r (C function): QR Decomposition. (line 63) * gsl_linalg_complex_QR_lssolve (C function): Level 2 Interface. (line 51) * gsl_linalg_complex_QR_lssolve_r (C function): QR Decomposition. (line 91) * gsl_linalg_complex_QR_QHvec (C function): Level 2 Interface. 
(line 69) * gsl_linalg_complex_QR_QHvec_r (C function): QR Decomposition. (line 113) * gsl_linalg_complex_QR_Qvec (C function): Level 2 Interface. (line 81) * gsl_linalg_complex_QR_solve (C function): Level 2 Interface. (line 30) * gsl_linalg_complex_QR_solve_r (C function): QR Decomposition. (line 79) * gsl_linalg_complex_QR_svx (C function): Level 2 Interface. (line 41) * gsl_linalg_complex_QR_unpack_r (C function): QR Decomposition. (line 137) * gsl_linalg_complex_tri_invert (C function): Triangular Systems. (line 6) * gsl_linalg_complex_tri_LHL (C function): Triangular Systems. (line 17) * gsl_linalg_complex_tri_UL (C function): Triangular Systems. (line 24) * gsl_linalg_givens (C function): Givens Rotations. (line 21) * gsl_linalg_givens_gv (C function): Givens Rotations. (line 28) * gsl_linalg_hermtd_decomp (C function): Tridiagonal Decomposition of Hermitian Matrices. (line 14) * gsl_linalg_hermtd_unpack (C function): Tridiagonal Decomposition of Hermitian Matrices. (line 27) * gsl_linalg_hermtd_unpack_T (C function): Tridiagonal Decomposition of Hermitian Matrices. (line 37) * gsl_linalg_hessenberg_decomp (C function): Hessenberg Decomposition of Real Matrices. (line 16) * gsl_linalg_hessenberg_set_zero (C function): Hessenberg Decomposition of Real Matrices. (line 49) * gsl_linalg_hessenberg_unpack (C function): Hessenberg Decomposition of Real Matrices. (line 29) * gsl_linalg_hessenberg_unpack_accum (C function): Hessenberg Decomposition of Real Matrices. (line 37) * gsl_linalg_hesstri_decomp (C function): Hessenberg-Triangular Decomposition of Real Matrices. (line 17) * gsl_linalg_HH_solve (C function): Householder solver for linear systems. (line 6) * gsl_linalg_HH_svx (C function): Householder solver for linear systems. (line 14) * gsl_linalg_householder_hm (C function): Householder Transformations. (line 30) * gsl_linalg_householder_hv (C function): Householder Transformations. (line 50) * gsl_linalg_householder_mh (C function): Householder Transformations. (line 40) * gsl_linalg_householder_transform (C function): Householder Transformations. (line 17) * gsl_linalg_ldlt_band_decomp (C function): Banded LDLT Decomposition. (line 14) * gsl_linalg_ldlt_band_rcond (C function): Banded LDLT Decomposition. (line 53) * gsl_linalg_ldlt_band_solve (C function): Banded LDLT Decomposition. (line 26) * gsl_linalg_ldlt_band_svx (C function): Banded LDLT Decomposition. (line 34) * gsl_linalg_ldlt_band_unpack (C function): Banded LDLT Decomposition. (line 44) * gsl_linalg_ldlt_decomp (C function): LDLT Decomposition. (line 22) * gsl_linalg_ldlt_rcond (C function): LDLT Decomposition. (line 54) * gsl_linalg_ldlt_solve (C function): LDLT Decomposition. (line 37) * gsl_linalg_ldlt_svx (C function): LDLT Decomposition. (line 45) * gsl_linalg_LQ_decomp (C function): LQ Decomposition. (line 27) * gsl_linalg_LQ_lssolve (C function): LQ Decomposition. (line 41) * gsl_linalg_LQ_QTvec (C function): LQ Decomposition. (line 60) * gsl_linalg_LQ_unpack (C function): LQ Decomposition. (line 53) * gsl_linalg_LU_band_decomp (C function): Banded LU Decomposition. (line 28) * gsl_linalg_LU_band_solve (C function): Banded LU Decomposition. (line 43) * gsl_linalg_LU_band_svx (C function): Banded LU Decomposition. (line 54) * gsl_linalg_LU_band_unpack (C function): Banded LU Decomposition. (line 66) * gsl_linalg_LU_decomp (C function): LU Decomposition. (line 19) * gsl_linalg_LU_det (C function): LU Decomposition. (line 111) * gsl_linalg_LU_invert (C function): LU Decomposition. 
(line 77) * gsl_linalg_LU_invx (C function): LU Decomposition. (line 94) * gsl_linalg_LU_lndet (C function): LU Decomposition. (line 120) * gsl_linalg_LU_refine (C function): LU Decomposition. (line 64) * gsl_linalg_LU_sgndet (C function): LU Decomposition. (line 129) * gsl_linalg_LU_solve (C function): LU Decomposition. (line 43) * gsl_linalg_LU_svx (C function): LU Decomposition. (line 54) * gsl_linalg_mcholesky_decomp (C function): Modified Cholesky Decomposition. (line 25) * gsl_linalg_mcholesky_rcond (C function): Modified Cholesky Decomposition. (line 59) * gsl_linalg_mcholesky_solve (C function): Modified Cholesky Decomposition. (line 41) * gsl_linalg_mcholesky_svx (C function): Modified Cholesky Decomposition. (line 49) * gsl_linalg_pcholesky_decomp (C function): Pivoted Cholesky Decomposition. (line 20) * gsl_linalg_pcholesky_decomp2 (C function): Pivoted Cholesky Decomposition. (line 53) * gsl_linalg_pcholesky_invert (C function): Pivoted Cholesky Decomposition. (line 94) * gsl_linalg_pcholesky_rcond (C function): Pivoted Cholesky Decomposition. (line 101) * gsl_linalg_pcholesky_solve (C function): Pivoted Cholesky Decomposition. (line 35) * gsl_linalg_pcholesky_solve2 (C function): Pivoted Cholesky Decomposition. (line 73) * gsl_linalg_pcholesky_svx (C function): Pivoted Cholesky Decomposition. (line 43) * gsl_linalg_pcholesky_svx2 (C function): Pivoted Cholesky Decomposition. (line 83) * gsl_linalg_QL_decomp (C function): QL Decomposition. (line 22) * gsl_linalg_QL_unpack (C function): QL Decomposition. (line 30) * gsl_linalg_QRPT_decomp (C function): QR Decomposition with Column Pivoting. (line 33) * gsl_linalg_QRPT_decomp2 (C function): QR Decomposition with Column Pivoting. (line 59) * gsl_linalg_QRPT_lssolve (C function): QR Decomposition with Column Pivoting. (line 85) * gsl_linalg_QRPT_lssolve2 (C function): QR Decomposition with Column Pivoting. (line 100) * gsl_linalg_QRPT_QRsolve (C function): QR Decomposition with Column Pivoting. (line 117) * gsl_linalg_QRPT_rank (C function): QR Decomposition with Column Pivoting. (line 149) * gsl_linalg_QRPT_rcond (C function): QR Decomposition with Column Pivoting. (line 159) * gsl_linalg_QRPT_Rsolve (C function): QR Decomposition with Column Pivoting. (line 135) * gsl_linalg_QRPT_Rsvx (C function): QR Decomposition with Column Pivoting. (line 141) * gsl_linalg_QRPT_solve (C function): QR Decomposition with Column Pivoting. (line 68) * gsl_linalg_QRPT_svx (C function): QR Decomposition with Column Pivoting. (line 77) * gsl_linalg_QRPT_update (C function): QR Decomposition with Column Pivoting. (line 125) * gsl_linalg_QR_decomp (C function): Level 2 Interface. (line 10) * gsl_linalg_QR_decomp_r (C function): QR Decomposition. (line 63) * gsl_linalg_QR_lssolve (C function): Level 2 Interface. (line 51) * gsl_linalg_QR_lssolve_r (C function): QR Decomposition. (line 91) * gsl_linalg_QR_QRsolve (C function): Level 2 Interface. (line 124) * gsl_linalg_QR_QTmat (C function): Level 2 Interface. (line 92) * gsl_linalg_QR_QTmat_r (C function): QR Decomposition. (line 127) * gsl_linalg_QR_QTvec (C function): Level 2 Interface. (line 69) * gsl_linalg_QR_QTvec_r (C function): QR Decomposition. (line 113) * gsl_linalg_QR_Qvec (C function): Level 2 Interface. (line 81) * gsl_linalg_QR_rcond (C function): QR Decomposition. (line 151) * gsl_linalg_QR_Rsolve (C function): Level 2 Interface. (line 101) * gsl_linalg_QR_Rsvx (C function): Level 2 Interface. (line 108) * gsl_linalg_QR_solve (C function): Level 2 Interface. 
(line 30) * gsl_linalg_QR_solve_r (C function): QR Decomposition. (line 79) * gsl_linalg_QR_svx (C function): Level 2 Interface. (line 41) * gsl_linalg_QR_UD_decomp (C function): Triangle on Top of Diagonal. (line 26) * gsl_linalg_QR_UD_lssolve (C function): Triangle on Top of Diagonal. (line 35) * gsl_linalg_QR_unpack (C function): Level 2 Interface. (line 117) * gsl_linalg_QR_unpack_r (C function): QR Decomposition. (line 137) * gsl_linalg_QR_update (C function): Level 2 Interface. (line 131) * gsl_linalg_QR_UR_decomp (C function): Triangle on Top of Rectangle. (line 27) * gsl_linalg_QR_UU_decomp (C function): Triangle on Top of Triangle. (line 25) * gsl_linalg_QR_UU_lssolve (C function): Triangle on Top of Triangle. (line 34) * gsl_linalg_QR_UU_QTvec (C function): Triangle on Top of Triangle. (line 54) * gsl_linalg_QR_UZ_decomp (C function): Triangle on Top of Trapezoidal. (line 31) * gsl_linalg_R_solve (C function): Level 2 Interface. (line 140) * gsl_linalg_R_svx (C function): Level 2 Interface. (line 146) * gsl_linalg_solve_cyc_tridiag (C function): Tridiagonal Systems. (line 41) * gsl_linalg_solve_symm_cyc_tridiag (C function): Tridiagonal Systems. (line 56) * gsl_linalg_solve_symm_tridiag (C function): Tridiagonal Systems. (line 28) * gsl_linalg_solve_tridiag (C function): Tridiagonal Systems. (line 13) * gsl_linalg_SV_decomp (C function): Singular Value Decomposition. (line 38) * gsl_linalg_SV_decomp_jacobi (C function): Singular Value Decomposition. (line 61) * gsl_linalg_SV_decomp_mod (C function): Singular Value Decomposition. (line 53) * gsl_linalg_SV_leverage (C function): Singular Value Decomposition. (line 87) * gsl_linalg_SV_solve (C function): Singular Value Decomposition. (line 69) * gsl_linalg_symmtd_decomp (C function): Tridiagonal Decomposition of Real Symmetric Matrices. (line 13) * gsl_linalg_symmtd_unpack (C function): Tridiagonal Decomposition of Real Symmetric Matrices. (line 26) * gsl_linalg_symmtd_unpack_T (C function): Tridiagonal Decomposition of Real Symmetric Matrices. (line 36) * gsl_linalg_tri_invert (C function): Triangular Systems. (line 6) * gsl_linalg_tri_LTL (C function): Triangular Systems. (line 17) * gsl_linalg_tri_rcond (C function): Triangular Systems. (line 33) * gsl_linalg_tri_UL (C function): Triangular Systems. (line 24) * gsl_log1p (C function): Elementary Functions. (line 12) * gsl_matrix (C type): Matrices. (line 11) * gsl_matrix_add (C function): Matrix operations. (line 8) * gsl_matrix_add_constant (C function): Matrix operations. (line 68) * gsl_matrix_alloc (C function): Matrix allocation. (line 14) * gsl_matrix_calloc (C function): Matrix allocation. (line 25) * gsl_matrix_column (C function): Creating row and column views. (line 26) * gsl_matrix_complex_conjtrans_memcpy (C function): Exchanging rows and columns. (line 43) * gsl_matrix_const_column (C function): Creating row and column views. (line 26) * gsl_matrix_const_diagonal (C function): Creating row and column views. (line 70) * gsl_matrix_const_ptr (C function): Accessing matrix elements. (line 38) * gsl_matrix_const_row (C function): Creating row and column views. (line 13) * gsl_matrix_const_subcolumn (C function): Creating row and column views. (line 54) * gsl_matrix_const_subdiagonal (C function): Creating row and column views. (line 84) * gsl_matrix_const_submatrix (C function): Matrix views. (line 21) * gsl_matrix_const_subrow (C function): Creating row and column views. (line 39) * gsl_matrix_const_superdiagonal (C function): Creating row and column views. 
(line 98) * gsl_matrix_const_view (C type): Matrix views. (line 6) * gsl_matrix_const_view_array (C function): Matrix views. (line 57) * gsl_matrix_const_view_array_with_tda (C function): Matrix views. (line 84) * gsl_matrix_const_view_vector (C function): Matrix views. (line 112) * gsl_matrix_const_view_vector_with_tda (C function): Matrix views. (line 139) * gsl_matrix_diagonal (C function): Creating row and column views. (line 70) * gsl_matrix_div_elements (C function): Matrix operations. (line 30) * gsl_matrix_equal (C function): Matrix properties. (line 21) * gsl_matrix_fprintf (C function): Reading and writing matrices. (line 28) * gsl_matrix_fread (C function): Reading and writing matrices. (line 17) * gsl_matrix_free (C function): Matrix allocation. (line 32) * gsl_matrix_fscanf (C function): Reading and writing matrices. (line 38) * gsl_matrix_fwrite (C function): Reading and writing matrices. (line 9) * gsl_matrix_get (C function): Accessing matrix elements. (line 20) * gsl_matrix_get_col (C function): Copying rows and columns. (line 20) * gsl_matrix_get_row (C function): Copying rows and columns. (line 13) * gsl_matrix_isneg (C function): Matrix properties. (line 10) * gsl_matrix_isnonneg (C function): Matrix properties. (line 10) * gsl_matrix_isnull (C function): Matrix properties. (line 10) * gsl_matrix_ispos (C function): Matrix properties. (line 10) * gsl_matrix_max (C function): Finding maximum and minimum elements of matrices. (line 8) * gsl_matrix_max_index (C function): Finding maximum and minimum elements of matrices. (line 23) * gsl_matrix_memcpy (C function): Copying matrices. (line 6) * gsl_matrix_min (C function): Finding maximum and minimum elements of matrices. (line 12) * gsl_matrix_minmax (C function): Finding maximum and minimum elements of matrices. (line 16) * gsl_matrix_minmax_index (C function): Finding maximum and minimum elements of matrices. (line 39) * gsl_matrix_min_index (C function): Finding maximum and minimum elements of matrices. (line 31) * gsl_matrix_mul_elements (C function): Matrix operations. (line 22) * gsl_matrix_norm1 (C function): Matrix properties. (line 27) * gsl_matrix_ptr (C function): Accessing matrix elements. (line 38) * gsl_matrix_row (C function): Creating row and column views. (line 13) * gsl_matrix_scale (C function): Matrix operations. (line 38) * gsl_matrix_scale_columns (C function): Matrix operations. (line 44) * gsl_matrix_scale_rows (C function): Matrix operations. (line 56) * gsl_matrix_set (C function): Accessing matrix elements. (line 29) * gsl_matrix_set_all (C function): Initializing matrix elements. (line 6) * gsl_matrix_set_col (C function): Copying rows and columns. (line 34) * gsl_matrix_set_identity (C function): Initializing matrix elements. (line 16) * gsl_matrix_set_row (C function): Copying rows and columns. (line 27) * gsl_matrix_set_zero (C function): Initializing matrix elements. (line 11) * gsl_matrix_sub (C function): Matrix operations. (line 15) * gsl_matrix_subcolumn (C function): Creating row and column views. (line 54) * gsl_matrix_subdiagonal (C function): Creating row and column views. (line 84) * gsl_matrix_submatrix (C function): Matrix views. (line 21) * gsl_matrix_subrow (C function): Creating row and column views. (line 39) * gsl_matrix_superdiagonal (C function): Creating row and column views. (line 98) * gsl_matrix_swap (C function): Copying matrices. (line 13) * gsl_matrix_swap_columns (C function): Exchanging rows and columns. 
(line 15) * gsl_matrix_swap_rowcol (C function): Exchanging rows and columns. (line 21) * gsl_matrix_swap_rows (C function): Exchanging rows and columns. (line 9) * gsl_matrix_transpose (C function): Exchanging rows and columns. (line 37) * gsl_matrix_transpose_memcpy (C function): Exchanging rows and columns. (line 28) * gsl_matrix_view (C type): Matrix views. (line 6) * gsl_matrix_view_array (C function): Matrix views. (line 57) * gsl_matrix_view_array_with_tda (C function): Matrix views. (line 84) * gsl_matrix_view_vector (C function): Matrix views. (line 112) * gsl_matrix_view_vector_with_tda (C function): Matrix views. (line 139) * GSL_MAX (C macro): Maximum and Minimum functions. (line 10) * GSL_MAX_DBL (C function): Maximum and Minimum functions. (line 20) * GSL_MAX_INT (C function): Maximum and Minimum functions. (line 38) * GSL_MAX_LDBL (C function): Maximum and Minimum functions. (line 46) * GSL_MIN (C macro): Maximum and Minimum functions. (line 15) * GSL_MIN_DBL (C function): Maximum and Minimum functions. (line 29) * gsl_min_fminimizer (C type): Initializing the Minimizer. (line 6) * gsl_min_fminimizer_alloc (C function): Initializing the Minimizer. (line 10) * gsl_min_fminimizer_free (C function): Initializing the Minimizer. (line 46) * gsl_min_fminimizer_f_lower (C function): Iteration<2>. (line 46) * gsl_min_fminimizer_f_minimum (C function): Iteration<2>. (line 46) * gsl_min_fminimizer_f_upper (C function): Iteration<2>. (line 46) * gsl_min_fminimizer_iterate (C function): Iteration<2>. (line 12) * gsl_min_fminimizer_name (C function): Initializing the Minimizer. (line 51) * gsl_min_fminimizer_set (C function): Initializing the Minimizer. (line 24) * gsl_min_fminimizer_set_with_values (C function): Initializing the Minimizer. (line 36) * gsl_min_fminimizer_type (C type): Minimization Algorithms. (line 13) * gsl_min_fminimizer_type.gsl_min_fminimizer_brent (C var): Minimization Algorithms. (line 34) * gsl_min_fminimizer_type.gsl_min_fminimizer_goldensection (C var): Minimization Algorithms. (line 15) * gsl_min_fminimizer_type.gsl_min_fminimizer_quad_golden (C var): Minimization Algorithms. (line 52) * gsl_min_fminimizer_x_lower (C function): Iteration<2>. (line 38) * gsl_min_fminimizer_x_minimum (C function): Iteration<2>. (line 32) * gsl_min_fminimizer_x_upper (C function): Iteration<2>. (line 38) * GSL_MIN_INT (C function): Maximum and Minimum functions. (line 38) * GSL_MIN_LDBL (C function): Maximum and Minimum functions. (line 46) * gsl_min_test_interval (C function): Stopping Parameters. (line 18) * gsl_mode_t (C type): Modes. (line 14) * gsl_mode_t.GSL_PREC_APPROX (C macro): Modes. (line 26) * gsl_mode_t.GSL_PREC_DOUBLE (C macro): Modes. (line 16) * gsl_mode_t.GSL_PREC_SINGLE (C macro): Modes. (line 21) * gsl_monte_function (C type): Interface. (line 27) * gsl_monte_miser_alloc (C function): MISER. (line 48) * gsl_monte_miser_free (C function): MISER. (line 77) * gsl_monte_miser_init (C function): MISER. (line 55) * gsl_monte_miser_integrate (C function): MISER. (line 61) * gsl_monte_miser_params (C type): MISER. (line 104) * gsl_monte_miser_params.alpha (C var): MISER. (line 133) * gsl_monte_miser_params.dither (C var): MISER. (line 151) * gsl_monte_miser_params.estimate_frac (C var): MISER. (line 106) * gsl_monte_miser_params.min_calls (C var): MISER. (line 113) * gsl_monte_miser_params.min_calls_per_bisection (C var): MISER. (line 123) * gsl_monte_miser_params_get (C function): MISER. (line 85) * gsl_monte_miser_params_set (C function): MISER. 
(line 91) * gsl_monte_miser_state (C type): MISER. (line 44) * gsl_monte_plain_alloc (C function): PLAIN Monte Carlo. (line 33) * gsl_monte_plain_free (C function): PLAIN Monte Carlo. (line 61) * gsl_monte_plain_init (C function): PLAIN Monte Carlo. (line 39) * gsl_monte_plain_integrate (C function): PLAIN Monte Carlo. (line 45) * gsl_monte_plain_state (C type): PLAIN Monte Carlo. (line 29) * gsl_monte_vegas_alloc (C function): VEGAS. (line 54) * gsl_monte_vegas_chisq (C function): VEGAS. (line 124) * gsl_monte_vegas_free (C function): VEGAS. (line 86) * gsl_monte_vegas_init (C function): VEGAS. (line 61) * gsl_monte_vegas_integrate (C function): VEGAS. (line 67) * gsl_monte_vegas_params (C type): VEGAS. (line 163) * gsl_monte_vegas_params.alpha (C var): VEGAS. (line 165) * gsl_monte_vegas_params.iterations (C var): VEGAS. (line 172) * gsl_monte_vegas_params.mode (C var): VEGAS. (line 193) * gsl_monte_vegas_params.ostream (C var): VEGAS. (line 203) * gsl_monte_vegas_params.stage (C var): VEGAS. (line 177) * gsl_monte_vegas_params.verbose (C var): VEGAS. (line 203) * gsl_monte_vegas_params_get (C function): VEGAS. (line 144) * gsl_monte_vegas_params_set (C function): VEGAS. (line 150) * gsl_monte_vegas_runval (C function): VEGAS. (line 134) * gsl_monte_vegas_state (C type): VEGAS. (line 50) * gsl_movstat_accum (C type): Accumulators. (line 11) * gsl_movstat_accum.delete (C member): Accumulators. (line 45) * gsl_movstat_accum.get (C member): Accumulators. (line 50) * gsl_movstat_accum.init (C member): Accumulators. (line 32) * gsl_movstat_accum.insert (C member): Accumulators. (line 37) * gsl_movstat_accum.size (C member): Accumulators. (line 27) * gsl_movstat_accum_max (C var): Accumulators. (line 60) * gsl_movstat_accum_mean (C var): Accumulators. (line 67) * gsl_movstat_accum_median (C var): Accumulators. (line 74) * gsl_movstat_accum_min (C var): Accumulators. (line 60) * gsl_movstat_accum_minmax (C var): Accumulators. (line 60) * gsl_movstat_accum_Qn (C var): Accumulators. (line 79) * gsl_movstat_accum_qqr (C var): Accumulators. (line 89) * gsl_movstat_accum_sd (C var): Accumulators. (line 67) * gsl_movstat_accum_Sn (C var): Accumulators. (line 79) * gsl_movstat_accum_sum (C var): Accumulators. (line 85) * gsl_movstat_accum_variance (C var): Accumulators. (line 67) * gsl_movstat_alloc (C function): Allocation for Moving Window Statistics. (line 10) * gsl_movstat_alloc2 (C function): Allocation for Moving Window Statistics. (line 18) * gsl_movstat_apply (C function): User-defined Moving Statistics. (line 41) * gsl_movstat_end_t (C type): Handling Endpoints. (line 12) * gsl_movstat_end_t.GSL_MOVSTAT_END_PADVALUE (C macro): Handling Endpoints. (line 27) * gsl_movstat_end_t.GSL_MOVSTAT_END_PADZERO (C macro): Handling Endpoints. (line 17) * gsl_movstat_end_t.GSL_MOVSTAT_END_TRUNCATE (C macro): Handling Endpoints. (line 36) * gsl_movstat_fill (C function): User-defined Moving Statistics. (line 52) * gsl_movstat_free (C function): Allocation for Moving Window Statistics. (line 26) * gsl_movstat_function (C type): User-defined Moving Statistics. (line 16) * gsl_movstat_function.function (C member): User-defined Moving Statistics. (line 30) * gsl_movstat_function.params (C member): User-defined Moving Statistics. (line 37) * gsl_movstat_mad (C function): Moving MAD. (line 15) * gsl_movstat_mad0 (C function): Moving MAD. (line 15) * gsl_movstat_max (C function): Moving Minimum and Maximum. (line 21) * gsl_movstat_mean (C function): Moving Mean. 
(line 16) * gsl_movstat_median (C function): Moving Median. (line 11) * gsl_movstat_min (C function): Moving Minimum and Maximum. (line 12) * gsl_movstat_minmax (C function): Moving Minimum and Maximum. (line 30) * gsl_movstat_Qn (C function): Moving Q_n. (line 10) * gsl_movstat_qqr (C function): Moving QQR. (line 19) * gsl_movstat_sd (C function): Moving Variance and Standard Deviation. (line 23) * gsl_movstat_Sn (C function): Moving S_n. (line 11) * gsl_movstat_sum (C function): Moving Sum. (line 11) * gsl_movstat_variance (C function): Moving Variance and Standard Deviation. (line 14) * gsl_movstat_workspace (C type): Allocation for Moving Window Statistics. (line 6) * gsl_multifit_linear (C function): Multi-parameter regression. (line 76) * gsl_multifit_linear_alloc (C function): Multi-parameter regression. (line 48) * gsl_multifit_linear_applyW (C function): Regularized regression. (line 234) * gsl_multifit_linear_bsvd (C function): Multi-parameter regression. (line 68) * gsl_multifit_linear_est (C function): Multi-parameter regression. (line 157) * gsl_multifit_linear_free (C function): Multi-parameter regression. (line 56) * gsl_multifit_linear_gcv (C function): Regularized regression. (line 365) * gsl_multifit_linear_gcv_calc (C function): Regularized regression. (line 358) * gsl_multifit_linear_gcv_curve (C function): Regularized regression. (line 336) * gsl_multifit_linear_gcv_init (C function): Regularized regression. (line 324) * gsl_multifit_linear_gcv_min (C function): Regularized regression. (line 346) * gsl_multifit_linear_genform1 (C function): Regularized regression. (line 201) * gsl_multifit_linear_genform2 (C function): Regularized regression. (line 213) * gsl_multifit_linear_lcorner (C function): Regularized regression. (line 284) * gsl_multifit_linear_lcorner2 (C function): Regularized regression. (line 302) * gsl_multifit_linear_lcurvature (C function): Regularized regression. (line 264) * gsl_multifit_linear_lcurve (C function): Regularized regression. (line 246) * gsl_multifit_linear_Lk (C function): Regularized regression. (line 378) * gsl_multifit_linear_Lsobolev (C function): Regularized regression. (line 386) * gsl_multifit_linear_L_decomp (C function): Regularized regression. (line 143) * gsl_multifit_linear_rank (C function): Multi-parameter regression. (line 174) * gsl_multifit_linear_rcond (C function): Regularized regression. (line 403) * gsl_multifit_linear_residuals (C function): Multi-parameter regression. (line 167) * gsl_multifit_linear_solve (C function): Regularized regression. (line 182) * gsl_multifit_linear_stdform1 (C function): Regularized regression. (line 117) * gsl_multifit_linear_stdform2 (C function): Regularized regression. (line 157) * gsl_multifit_linear_svd (C function): Multi-parameter regression. (line 61) * gsl_multifit_linear_tsvd (C function): Multi-parameter regression. (line 99) * gsl_multifit_linear_wgenform2 (C function): Regularized regression. (line 213) * gsl_multifit_linear_workspace (C type): Multi-parameter regression. (line 43) * gsl_multifit_linear_wstdform1 (C function): Regularized regression. (line 117) * gsl_multifit_linear_wstdform2 (C function): Regularized regression. (line 157) * gsl_multifit_nlinear_alloc (C function): Initializing the Solver<3>. (line 18) * gsl_multifit_nlinear_avratio (C function): Iteration<5>. (line 109) * gsl_multifit_nlinear_covar (C function): Covariance matrix of best fit parameters. (line 6) * gsl_multifit_nlinear_default_parameters (C function): Initializing the Solver<3>. 
(line 53) * gsl_multifit_nlinear_driver (C function): High Level Driver. (line 9) * gsl_multifit_nlinear_fdf (C type): Providing the Function to be Minimized. (line 10) * gsl_multifit_nlinear_fdtype (C type): Tunable Parameters. (line 211) * gsl_multifit_nlinear_fdtype.GSL_MULTIFIT_NLINEAR_CTRDIFF (C macro): Tunable Parameters. (line 236) * gsl_multifit_nlinear_fdtype.GSL_MULTIFIT_NLINEAR_FWDIFF (C macro): Tunable Parameters. (line 218) * gsl_multifit_nlinear_free (C function): Initializing the Solver<3>. (line 82) * gsl_multifit_nlinear_init (C function): Initializing the Solver<3>. (line 63) * gsl_multifit_nlinear_iterate (C function): Iteration<5>. (line 10) * gsl_multifit_nlinear_jac (C function): Iteration<5>. (line 61) * gsl_multifit_nlinear_name (C function): Initializing the Solver<3>. (line 90) * gsl_multifit_nlinear_niter (C function): Iteration<5>. (line 68) * gsl_multifit_nlinear_parameters (C type): Tunable Parameters. (line 11) * gsl_multifit_nlinear_position (C function): Iteration<5>. (line 44) * gsl_multifit_nlinear_rcond (C function): Iteration<5>. (line 77) * gsl_multifit_nlinear_residual (C function): Iteration<5>. (line 52) * gsl_multifit_nlinear_scale (C type): Tunable Parameters. (line 97) * gsl_multifit_nlinear_solver (C type): Tunable Parameters. (line 147) * gsl_multifit_nlinear_test (C function): Testing for Convergence. (line 19) * gsl_multifit_nlinear_trs (C type): Tunable Parameters. (line 49) * gsl_multifit_nlinear_trs_name (C function): Initializing the Solver<3>. (line 102) * gsl_multifit_nlinear_type (C type): Initializing the Solver<3>. (line 6) * gsl_multifit_nlinear_type.gsl_multifit_nlinear_trust (C var): Initializing the Solver<3>. (line 12) * gsl_multifit_nlinear_winit (C function): Initializing the Solver<3>. (line 63) * gsl_multifit_robust (C function): Robust linear regression. (line 217) * gsl_multifit_robust_alloc (C function): Robust linear regression. (line 81) * gsl_multifit_robust_alloc.gsl_multifit_robust_type (C type): Robust linear regression. (line 90) * gsl_multifit_robust_alloc.gsl_multifit_robust_type.gsl_multifit_robust_bisquare (C var): Robust linear regression. (line 99) * gsl_multifit_robust_alloc.gsl_multifit_robust_type.gsl_multifit_robust_cauchy (C var): Robust linear regression. (line 111) * gsl_multifit_robust_alloc.gsl_multifit_robust_type.gsl_multifit_robust_default (C var): Robust linear regression. (line 92) * gsl_multifit_robust_alloc.gsl_multifit_robust_type.gsl_multifit_robust_fair (C var): Robust linear regression. (line 125) * gsl_multifit_robust_alloc.gsl_multifit_robust_type.gsl_multifit_robust_huber (C var): Robust linear regression. (line 136) * gsl_multifit_robust_alloc.gsl_multifit_robust_type.gsl_multifit_robust_ols (C var): Robust linear regression. (line 151) * gsl_multifit_robust_alloc.gsl_multifit_robust_type.gsl_multifit_robust_welsch (C var): Robust linear regression. (line 163) * gsl_multifit_robust_est (C function): Robust linear regression. (line 239) * gsl_multifit_robust_free (C function): Robust linear regression. (line 174) * gsl_multifit_robust_maxiter (C function): Robust linear regression. (line 195) * gsl_multifit_robust_name (C function): Robust linear regression. (line 180) * gsl_multifit_robust_residuals (C function): Robust linear regression. (line 249) * gsl_multifit_robust_statistics (C function): Robust linear regression. (line 261) * gsl_multifit_robust_statistics.gsl_multifit_robust_stats (C type): Robust linear regression. 
(line 272) * gsl_multifit_robust_tune (C function): Robust linear regression. (line 186) * gsl_multifit_robust_weights (C function): Robust linear regression. (line 203) * gsl_multifit_robust_workspace (C type): Robust linear regression. (line 77) * gsl_multifit_wlinear (C function): Multi-parameter regression. (line 120) * gsl_multifit_wlinear_tsvd (C function): Multi-parameter regression. (line 136) * gsl_multilarge_linear_accumulate (C function): Large Dense Linear Least Squares Routines. (line 117) * gsl_multilarge_linear_alloc (C function): Large Dense Linear Least Squares Routines. (line 11) * gsl_multilarge_linear_alloc.gsl_multilarge_linear_type (C type): Large Dense Linear Least Squares Routines. (line 19) * gsl_multilarge_linear_alloc.gsl_multilarge_linear_type.gsl_multilarge_linear_normal (C var): Large Dense Linear Least Squares Routines. (line 25) * gsl_multilarge_linear_alloc.gsl_multilarge_linear_type.gsl_multilarge_linear_tsqr (C var): Large Dense Linear Least Squares Routines. (line 34) * gsl_multilarge_linear_free (C function): Large Dense Linear Least Squares Routines. (line 44) * gsl_multilarge_linear_genform1 (C function): Large Dense Linear Least Squares Routines. (line 138) * gsl_multilarge_linear_genform2 (C function): Large Dense Linear Least Squares Routines. (line 150) * gsl_multilarge_linear_lcurve (C function): Large Dense Linear Least Squares Routines. (line 160) * gsl_multilarge_linear_L_decomp (C function): Large Dense Linear Least Squares Routines. (line 85) * gsl_multilarge_linear_matrix_ptr (C function): Large Dense Linear Least Squares Routines. (line 178) * gsl_multilarge_linear_name (C function): Large Dense Linear Least Squares Routines. (line 50) * gsl_multilarge_linear_rcond (C function): Large Dense Linear Least Squares Routines. (line 194) * gsl_multilarge_linear_reset (C function): Large Dense Linear Least Squares Routines. (line 56) * gsl_multilarge_linear_rhs_ptr (C function): Large Dense Linear Least Squares Routines. (line 186) * gsl_multilarge_linear_solve (C function): Large Dense Linear Least Squares Routines. (line 127) * gsl_multilarge_linear_stdform1 (C function): Large Dense Linear Least Squares Routines. (line 62) * gsl_multilarge_linear_stdform2 (C function): Large Dense Linear Least Squares Routines. (line 95) * gsl_multilarge_linear_workspace (C type): Large Dense Linear Least Squares Routines. (line 6) * gsl_multilarge_linear_wstdform1 (C function): Large Dense Linear Least Squares Routines. (line 62) * gsl_multilarge_linear_wstdform2 (C function): Large Dense Linear Least Squares Routines. (line 95) * gsl_multilarge_nlinear_alloc (C function): Initializing the Solver<3>. (line 18) * gsl_multilarge_nlinear_avratio (C function): Iteration<5>. (line 109) * gsl_multilarge_nlinear_covar (C function): Covariance matrix of best fit parameters. (line 6) * gsl_multilarge_nlinear_default_parameters (C function): Initializing the Solver<3>. (line 53) * gsl_multilarge_nlinear_driver (C function): High Level Driver. (line 9) * gsl_multilarge_nlinear_fdf (C type): Providing the Function to be Minimized. (line 85) * gsl_multilarge_nlinear_free (C function): Initializing the Solver<3>. (line 82) * gsl_multilarge_nlinear_init (C function): Initializing the Solver<3>. (line 63) * gsl_multilarge_nlinear_iterate (C function): Iteration<5>. (line 10) * gsl_multilarge_nlinear_name (C function): Initializing the Solver<3>. (line 90) * gsl_multilarge_nlinear_niter (C function): Iteration<5>. 
(line 68) * gsl_multilarge_nlinear_parameters (C type): Tunable Parameters. (line 30) * gsl_multilarge_nlinear_position (C function): Iteration<5>. (line 44) * gsl_multilarge_nlinear_rcond (C function): Iteration<5>. (line 77) * gsl_multilarge_nlinear_residual (C function): Iteration<5>. (line 52) * gsl_multilarge_nlinear_scale (C type): Tunable Parameters. (line 97) * gsl_multilarge_nlinear_scale.gsl_multifit_nlinear_scale_levenberg (C var): Tunable Parameters. (line 123) * gsl_multilarge_nlinear_scale.gsl_multifit_nlinear_scale_marquardt (C var): Tunable Parameters. (line 136) * gsl_multilarge_nlinear_scale.gsl_multifit_nlinear_scale_more (C var): Tunable Parameters. (line 103) * gsl_multilarge_nlinear_scale.gsl_multilarge_nlinear_scale_levenberg (C var): Tunable Parameters. (line 123) * gsl_multilarge_nlinear_scale.gsl_multilarge_nlinear_scale_marquardt (C var): Tunable Parameters. (line 136) * gsl_multilarge_nlinear_scale.gsl_multilarge_nlinear_scale_more (C var): Tunable Parameters. (line 103) * gsl_multilarge_nlinear_solver (C type): Tunable Parameters. (line 147) * gsl_multilarge_nlinear_solver.gsl_multifit_nlinear_solver_cholesky (C var): Tunable Parameters. (line 167) * gsl_multilarge_nlinear_solver.gsl_multifit_nlinear_solver_mcholesky (C var): Tunable Parameters. (line 186) * gsl_multilarge_nlinear_solver.gsl_multifit_nlinear_solver_qr (C var): Tunable Parameters. (line 158) * gsl_multilarge_nlinear_solver.gsl_multifit_nlinear_solver_svd (C var): Tunable Parameters. (line 203) * gsl_multilarge_nlinear_solver.gsl_multilarge_nlinear_solver_cholesky (C var): Tunable Parameters. (line 167) * gsl_multilarge_nlinear_solver.gsl_multilarge_nlinear_solver_mcholesky (C var): Tunable Parameters. (line 186) * gsl_multilarge_nlinear_test (C function): Testing for Convergence. (line 19) * gsl_multilarge_nlinear_trs (C type): Tunable Parameters. (line 49) * gsl_multilarge_nlinear_trs.gsl_multifit_nlinear_trs_ddogleg (C var): Tunable Parameters. (line 77) * gsl_multilarge_nlinear_trs.gsl_multifit_nlinear_trs_dogleg (C var): Tunable Parameters. (line 70) * gsl_multilarge_nlinear_trs.gsl_multifit_nlinear_trs_lm (C var): Tunable Parameters. (line 55) * gsl_multilarge_nlinear_trs.gsl_multifit_nlinear_trs_lmaccel (C var): Tunable Parameters. (line 62) * gsl_multilarge_nlinear_trs.gsl_multifit_nlinear_trs_subspace2D (C var): Tunable Parameters. (line 84) * gsl_multilarge_nlinear_trs.gsl_multilarge_nlinear_trs_cgst (C var): Tunable Parameters. (line 91) * gsl_multilarge_nlinear_trs.gsl_multilarge_nlinear_trs_ddogleg (C var): Tunable Parameters. (line 77) * gsl_multilarge_nlinear_trs.gsl_multilarge_nlinear_trs_dogleg (C var): Tunable Parameters. (line 70) * gsl_multilarge_nlinear_trs.gsl_multilarge_nlinear_trs_lm (C var): Tunable Parameters. (line 55) * gsl_multilarge_nlinear_trs.gsl_multilarge_nlinear_trs_lmaccel (C var): Tunable Parameters. (line 62) * gsl_multilarge_nlinear_trs.gsl_multilarge_nlinear_trs_subspace2D (C var): Tunable Parameters. (line 84) * gsl_multilarge_nlinear_trs_name (C function): Initializing the Solver<3>. (line 102) * gsl_multimin_fdfminimizer (C type): Initializing the Multidimensional Minimizer. (line 10) * gsl_multimin_fdfminimizer_alloc (C function): Initializing the Multidimensional Minimizer. (line 18) * gsl_multimin_fdfminimizer_dx (C function): Iteration<4>. (line 27) * gsl_multimin_fdfminimizer_free (C function): Initializing the Multidimensional Minimizer. (line 58) * gsl_multimin_fdfminimizer_gradient (C function): Iteration<4>. 
(line 27) * gsl_multimin_fdfminimizer_iterate (C function): Iteration<4>. (line 11) * gsl_multimin_fdfminimizer_minimum (C function): Iteration<4>. (line 27) * gsl_multimin_fdfminimizer_name (C function): Initializing the Multidimensional Minimizer. (line 66) * gsl_multimin_fdfminimizer_restart (C function): Iteration<4>. (line 47) * gsl_multimin_fdfminimizer_set (C function): Initializing the Multidimensional Minimizer. (line 31) * gsl_multimin_fdfminimizer_type (C type): Algorithms with Derivatives. (line 11) * gsl_multimin_fdfminimizer_type.gsl_multimin_fdfminimizer_conjugate_fr (C var): Algorithms with Derivatives. (line 15) * gsl_multimin_fdfminimizer_type.gsl_multimin_fdfminimizer_conjugate_pr (C var): Algorithms with Derivatives. (line 35) * gsl_multimin_fdfminimizer_type.gsl_multimin_fdfminimizer_steepest_descent (C var): Algorithms with Derivatives. (line 69) * gsl_multimin_fdfminimizer_type.gsl_multimin_fdfminimizer_vector_bfgs (C var): Algorithms with Derivatives. (line 45) * gsl_multimin_fdfminimizer_type.gsl_multimin_fdfminimizer_vector_bfgs2 (C var): Algorithms with Derivatives. (line 45) * gsl_multimin_fdfminimizer_x (C function): Iteration<4>. (line 27) * gsl_multimin_fminimizer (C type): Initializing the Multidimensional Minimizer. (line 14) * gsl_multimin_fminimizer_alloc (C function): Initializing the Multidimensional Minimizer. (line 18) * gsl_multimin_fminimizer_free (C function): Initializing the Multidimensional Minimizer. (line 58) * gsl_multimin_fminimizer_iterate (C function): Iteration<4>. (line 11) * gsl_multimin_fminimizer_minimum (C function): Iteration<4>. (line 27) * gsl_multimin_fminimizer_name (C function): Initializing the Multidimensional Minimizer. (line 66) * gsl_multimin_fminimizer_set (C function): Initializing the Multidimensional Minimizer. (line 31) * gsl_multimin_fminimizer_size (C function): Iteration<4>. (line 27) * gsl_multimin_fminimizer_type (C type): Algorithms without Derivatives<2>. (line 9) * gsl_multimin_fminimizer_type.gsl_multimin_fminimizer_nmsimplex (C var): Algorithms without Derivatives<2>. (line 14) * gsl_multimin_fminimizer_type.gsl_multimin_fminimizer_nmsimplex2 (C var): Algorithms without Derivatives<2>. (line 14) * gsl_multimin_fminimizer_type.gsl_multimin_fminimizer_nmsimplex2rand (C var): Algorithms without Derivatives<2>. (line 62) * gsl_multimin_fminimizer_x (C function): Iteration<4>. (line 27) * gsl_multimin_function (C type): Providing a function to minimize. (line 53) * gsl_multimin_function_fdf (C type): Providing a function to minimize. (line 13) * gsl_multimin_test_gradient (C function): Stopping Criteria. (line 18) * gsl_multimin_test_size (C function): Stopping Criteria. (line 33) * gsl_multiroot_fdfsolver (C type): Initializing the Solver<2>. (line 16) * gsl_multiroot_fdfsolver_alloc (C function): Initializing the Solver<2>. (line 37) * gsl_multiroot_fdfsolver_dx (C function): Iteration<3>. (line 51) * gsl_multiroot_fdfsolver_f (C function): Iteration<3>. (line 43) * gsl_multiroot_fdfsolver_free (C function): Initializing the Solver<2>. (line 65) * gsl_multiroot_fdfsolver_iterate (C function): Iteration<3>. (line 12) * gsl_multiroot_fdfsolver_name (C function): Initializing the Solver<2>. (line 72) * gsl_multiroot_fdfsolver_root (C function): Iteration<3>. (line 35) * gsl_multiroot_fdfsolver_set (C function): Initializing the Solver<2>. (line 54) * gsl_multiroot_fdfsolver_type (C type): Algorithms using Derivatives. 
(line 13) * gsl_multiroot_fdfsolver_type.gsl_multiroot_fdfsolver_gnewton (C var): Algorithms using Derivatives. (line 105) * gsl_multiroot_fdfsolver_type.gsl_multiroot_fdfsolver_hybridj (C var): Algorithms using Derivatives. (line 76) * gsl_multiroot_fdfsolver_type.gsl_multiroot_fdfsolver_hybridsj (C var): Algorithms using Derivatives. (line 18) * gsl_multiroot_fdfsolver_type.gsl_multiroot_fdfsolver_newton (C var): Algorithms using Derivatives. (line 84) * gsl_multiroot_fsolver (C type): Initializing the Solver<2>. (line 11) * gsl_multiroot_fsolver_alloc (C function): Initializing the Solver<2>. (line 21) * gsl_multiroot_fsolver_dx (C function): Iteration<3>. (line 51) * gsl_multiroot_fsolver_f (C function): Iteration<3>. (line 43) * gsl_multiroot_fsolver_free (C function): Initializing the Solver<2>. (line 65) * gsl_multiroot_fsolver_iterate (C function): Iteration<3>. (line 12) * gsl_multiroot_fsolver_name (C function): Initializing the Solver<2>. (line 72) * gsl_multiroot_fsolver_root (C function): Iteration<3>. (line 35) * gsl_multiroot_fsolver_set (C function): Initializing the Solver<2>. (line 54) * gsl_multiroot_fsolver_type (C type): Algorithms without Derivatives. (line 13) * gsl_multiroot_fsolver_type.gsl_multiroot_fsolver_broyden (C var): Algorithms without Derivatives. (line 57) * gsl_multiroot_fsolver_type.gsl_multiroot_fsolver_dnewton (C var): Algorithms without Derivatives. (line 34) * gsl_multiroot_fsolver_type.gsl_multiroot_fsolver_hybrid (C var): Algorithms without Derivatives. (line 28) * gsl_multiroot_fsolver_type.gsl_multiroot_fsolver_hybrids (C var): Algorithms without Derivatives. (line 18) * gsl_multiroot_function (C type): Providing the function to solve<2>. (line 10) * gsl_multiroot_function_fdf (C type): Providing the function to solve<2>. (line 62) * gsl_multiroot_test_delta (C function): Search Stopping Parameters<2>. (line 20) * gsl_multiroot_test_residual (C function): Search Stopping Parameters<2>. (line 34) * gsl_multiset (C type): The Multiset struct. (line 6) * gsl_multiset_alloc (C function): Multiset allocation. (line 6) * gsl_multiset_calloc (C function): Multiset allocation. (line 17) * gsl_multiset_data (C function): Multiset properties. (line 15) * gsl_multiset_fprintf (C function): Reading and writing multisets. (line 29) * gsl_multiset_fread (C function): Reading and writing multisets. (line 18) * gsl_multiset_free (C function): Multiset allocation. (line 37) * gsl_multiset_fscanf (C function): Reading and writing multisets. (line 39) * gsl_multiset_fwrite (C function): Reading and writing multisets. (line 9) * gsl_multiset_get (C function): Accessing multiset elements. (line 8) * gsl_multiset_init_first (C function): Multiset allocation. (line 26) * gsl_multiset_init_last (C function): Multiset allocation. (line 31) * gsl_multiset_k (C function): Multiset properties. (line 10) * gsl_multiset_memcpy (C function): Multiset allocation. (line 42) * gsl_multiset_n (C function): Multiset properties. (line 6) * gsl_multiset_next (C function): Multiset functions. (line 6) * gsl_multiset_prev (C function): Multiset functions. (line 15) * gsl_multiset_valid (C function): Multiset properties. (line 20) * GSL_NAN (C macro): Infinities and Not-a-number. (line 16) * GSL_NEGINF (C macro): Infinities and Not-a-number. (line 11) * gsl_ntuple (C type): The ntuple struct. (line 6) * gsl_ntuple_bookdata (C function): Writing ntuples. (line 11) * gsl_ntuple_close (C function): Closing an ntuple file. (line 6) * gsl_ntuple_create (C function): Creating ntuples. 
(line 6) * gsl_ntuple_open (C function): Opening an existing ntuple file. (line 6) * gsl_ntuple_project (C function): Histogramming ntuple values. (line 41) * gsl_ntuple_read (C function): Reading ntuples. (line 6) * gsl_ntuple_select_fn (C type): Histogramming ntuple values. (line 13) * gsl_ntuple_value_fn (C type): Histogramming ntuple values. (line 27) * gsl_ntuple_write (C function): Writing ntuples. (line 6) * gsl_odeiv2_control (C type): Adaptive Step-size Control. (line 10) * gsl_odeiv2_control_alloc (C function): Adaptive Step-size Control. (line 88) * gsl_odeiv2_control_errlevel (C function): Adaptive Step-size Control. (line 137) * gsl_odeiv2_control_free (C function): Adaptive Step-size Control. (line 105) * gsl_odeiv2_control_hadjust (C function): Adaptive Step-size Control. (line 110) * gsl_odeiv2_control_init (C function): Adaptive Step-size Control. (line 97) * gsl_odeiv2_control_name (C function): Adaptive Step-size Control. (line 127) * gsl_odeiv2_control_scaled_new (C function): Adaptive Step-size Control. (line 72) * gsl_odeiv2_control_set_driver (C function): Adaptive Step-size Control. (line 146) * gsl_odeiv2_control_standard_new (C function): Adaptive Step-size Control. (line 18) * gsl_odeiv2_control_type (C type): Adaptive Step-size Control. (line 14) * gsl_odeiv2_control_yp_new (C function): Adaptive Step-size Control. (line 63) * gsl_odeiv2_control_y_new (C function): Adaptive Step-size Control. (line 54) * gsl_odeiv2_driver_alloc_scaled_new (C function): Driver. (line 9) * gsl_odeiv2_driver_alloc_standard_new (C function): Driver. (line 9) * gsl_odeiv2_driver_alloc_yp_new (C function): Driver. (line 9) * gsl_odeiv2_driver_alloc_y_new (C function): Driver. (line 9) * gsl_odeiv2_driver_apply (C function): Driver. (line 52) * gsl_odeiv2_driver_apply_fixed_step (C function): Driver. (line 71) * gsl_odeiv2_driver_free (C function): Driver. (line 93) * gsl_odeiv2_driver_reset (C function): Driver. (line 82) * gsl_odeiv2_driver_reset_hstart (C function): Driver. (line 86) * gsl_odeiv2_driver_set_hmax (C function): Driver. (line 39) * gsl_odeiv2_driver_set_hmin (C function): Driver. (line 33) * gsl_odeiv2_driver_set_nmax (C function): Driver. (line 45) * gsl_odeiv2_evolve (C type): Evolution. (line 10) * gsl_odeiv2_evolve_alloc (C function): Evolution. (line 15) * gsl_odeiv2_evolve_apply (C function): Evolution. (line 21) * gsl_odeiv2_evolve_apply_fixed_step (C function): Evolution. (line 63) * gsl_odeiv2_evolve_free (C function): Evolution. (line 81) * gsl_odeiv2_evolve_reset (C function): Evolution. (line 75) * gsl_odeiv2_evolve_set_driver (C function): Evolution. (line 86) * gsl_odeiv2_step (C type): Stepping Functions. (line 10) * gsl_odeiv2_step_alloc (C function): Stepping Functions. (line 14) * gsl_odeiv2_step_apply (C function): Stepping Functions. (line 61) * gsl_odeiv2_step_free (C function): Stepping Functions. (line 29) * gsl_odeiv2_step_name (C function): Stepping Functions. (line 34) * gsl_odeiv2_step_order (C function): Stepping Functions. (line 44) * gsl_odeiv2_step_reset (C function): Stepping Functions. (line 23) * gsl_odeiv2_step_set_driver (C function): Stepping Functions. (line 51) * gsl_odeiv2_step_type (C type): Stepping Functions. (line 102) * gsl_odeiv2_step_type.gsl_odeiv2_step_bsimp (C var): Stepping Functions. (line 157) * gsl_odeiv2_step_type.gsl_odeiv2_step_msadams (C var): Stepping Functions. (line 164) * gsl_odeiv2_step_type.gsl_odeiv2_step_msbdf (C var): Stepping Functions. 
(line 175) * gsl_odeiv2_step_type.gsl_odeiv2_step_rk1imp (C var): Stepping Functions. (line 131) * gsl_odeiv2_step_type.gsl_odeiv2_step_rk2 (C var): Stepping Functions. (line 104) * gsl_odeiv2_step_type.gsl_odeiv2_step_rk2imp (C var): Stepping Functions. (line 140) * gsl_odeiv2_step_type.gsl_odeiv2_step_rk4 (C var): Stepping Functions. (line 108) * gsl_odeiv2_step_type.gsl_odeiv2_step_rk4imp (C var): Stepping Functions. (line 149) * gsl_odeiv2_step_type.gsl_odeiv2_step_rk8pd (C var): Stepping Functions. (line 126) * gsl_odeiv2_step_type.gsl_odeiv2_step_rkck (C var): Stepping Functions. (line 121) * gsl_odeiv2_step_type.gsl_odeiv2_step_rkf45 (C var): Stepping Functions. (line 115) * gsl_odeiv2_system (C type): Defining the ODE System. (line 18) * gsl_permutation (C type): The Permutation struct. (line 6) * gsl_permutation_alloc (C function): Permutation allocation. (line 6) * gsl_permutation_calloc (C function): Permutation allocation. (line 16) * gsl_permutation_canonical_cycles (C function): Permutations in cyclic form. (line 73) * gsl_permutation_canonical_to_linear (C function): Permutations in cyclic form. (line 51) * gsl_permutation_data (C function): Permutation properties. (line 10) * gsl_permutation_fprintf (C function): Reading and writing permutations. (line 29) * gsl_permutation_fread (C function): Reading and writing permutations. (line 18) * gsl_permutation_free (C function): Permutation allocation. (line 29) * gsl_permutation_fscanf (C function): Reading and writing permutations. (line 39) * gsl_permutation_fwrite (C function): Reading and writing permutations. (line 9) * gsl_permutation_get (C function): Accessing permutation elements. (line 9) * gsl_permutation_init (C function): Permutation allocation. (line 24) * gsl_permutation_inverse (C function): Permutation functions. (line 11) * gsl_permutation_inversions (C function): Permutations in cyclic form. (line 58) * gsl_permutation_linear_cycles (C function): Permutations in cyclic form. (line 67) * gsl_permutation_linear_to_canonical (C function): Permutations in cyclic form. (line 45) * gsl_permutation_memcpy (C function): Permutation allocation. (line 34) * gsl_permutation_mul (C function): Applying Permutations. (line 56) * gsl_permutation_next (C function): Permutation functions. (line 17) * gsl_permutation_prev (C function): Permutation functions. (line 26) * gsl_permutation_reverse (C function): Permutation functions. (line 6) * gsl_permutation_size (C function): Permutation properties. (line 6) * gsl_permutation_swap (C function): Accessing permutation elements. (line 18) * gsl_permutation_valid (C function): Permutation properties. (line 15) * gsl_permute (C function): Applying Permutations. (line 9) * gsl_permute_inverse (C function): Applying Permutations. (line 16) * gsl_permute_matrix (C function): Applying Permutations. (line 45) * gsl_permute_vector (C function): Applying Permutations. (line 23) * gsl_permute_vector_inverse (C function): Applying Permutations. (line 33) * gsl_poly_complex_eval (C function): Polynomial Evaluation. (line 19) * gsl_poly_complex_solve (C function): General Polynomial Equations. (line 34) * gsl_poly_complex_solve_cubic (C function): Cubic Equations. (line 25) * gsl_poly_complex_solve_quadratic (C function): Quadratic Equations. (line 31) * gsl_poly_complex_workspace (C type): General Polynomial Equations. (line 11) * gsl_poly_complex_workspace_alloc (C function): General Polynomial Equations. (line 16) * gsl_poly_complex_workspace_free (C function): General Polynomial Equations. 
(line 28) * gsl_poly_dd_eval (C function): Divided Difference Representation of Polynomials. (line 42) * gsl_poly_dd_hermite_init (C function): Divided Difference Representation of Polynomials. (line 62) * gsl_poly_dd_init (C function): Divided Difference Representation of Polynomials. (line 32) * gsl_poly_dd_taylor (C function): Divided Difference Representation of Polynomials. (line 50) * gsl_poly_eval (C function): Polynomial Evaluation. (line 13) * gsl_poly_eval_derivs (C function): Polynomial Evaluation. (line 31) * gsl_poly_solve_cubic (C function): Cubic Equations. (line 6) * gsl_poly_solve_quadratic (C function): Quadratic Equations. (line 6) * GSL_POSINF (C macro): Infinities and Not-a-number. (line 6) * gsl_pow_2 (C function): Small integer powers. (line 20) * gsl_pow_3 (C function): Small integer powers. (line 20) * gsl_pow_4 (C function): Small integer powers. (line 20) * gsl_pow_5 (C function): Small integer powers. (line 20) * gsl_pow_6 (C function): Small integer powers. (line 20) * gsl_pow_7 (C function): Small integer powers. (line 20) * gsl_pow_8 (C function): Small integer powers. (line 20) * gsl_pow_9 (C function): Small integer powers. (line 20) * gsl_pow_int (C function): Small integer powers. (line 11) * gsl_pow_uint (C function): Small integer powers. (line 11) * gsl_qrng (C type): Quasi-random number generator initialization. (line 6) * gsl_qrng_alloc (C function): Quasi-random number generator initialization. (line 10) * gsl_qrng_clone (C function): Saving and restoring quasi-random number generator state. (line 13) * gsl_qrng_free (C function): Quasi-random number generator initialization. (line 19) * gsl_qrng_get (C function): Sampling from a quasi-random number generator. (line 6) * gsl_qrng_init (C function): Quasi-random number generator initialization. (line 24) * gsl_qrng_memcpy (C function): Saving and restoring quasi-random number generator state. (line 6) * gsl_qrng_name (C function): Auxiliary quasi-random number generator functions. (line 6) * gsl_qrng_size (C function): Auxiliary quasi-random number generator functions. (line 10) * gsl_qrng_state (C function): Auxiliary quasi-random number generator functions. (line 10) * gsl_qrng_type (C type): Quasi-random number generator algorithms. (line 8) * gsl_qrng_type.gsl_qrng_halton (C var): Quasi-random number generator algorithms. (line 22) * gsl_qrng_type.gsl_qrng_niederreiter_2 (C var): Quasi-random number generator algorithms. (line 10) * gsl_qrng_type.gsl_qrng_reversehalton (C var): Quasi-random number generator algorithms. (line 22) * gsl_qrng_type.gsl_qrng_sobol (C var): Quasi-random number generator algorithms. (line 16) * GSL_RANGE_CHECK_OFF (C macro): Accessing vector elements. (line 17) * gsl_ran_bernoulli (C function): The Bernoulli Distribution. (line 6) * gsl_ran_bernoulli_pdf (C function): The Bernoulli Distribution. (line 16) * gsl_ran_beta (C function): The Beta Distribution. (line 6) * gsl_ran_beta_pdf (C function): The Beta Distribution. (line 15) * gsl_ran_binomial (C function): The Binomial Distribution. (line 6) * gsl_ran_binomial_pdf (C function): The Binomial Distribution. (line 18) * gsl_ran_bivariate_gaussian (C function): The Bivariate Gaussian Distribution. (line 6) * gsl_ran_bivariate_gaussian_pdf (C function): The Bivariate Gaussian Distribution. (line 20) * gsl_ran_cauchy (C function): The Cauchy Distribution. (line 6) * gsl_ran_cauchy_pdf (C function): The Cauchy Distribution. (line 17) * gsl_ran_chisq (C function): The Chi-squared Distribution. 
(line 14) * gsl_ran_chisq_pdf (C function): The Chi-squared Distribution. (line 24) * gsl_ran_choose (C function): Shuffling and Sampling. (line 34) * gsl_ran_dirichlet (C function): The Dirichlet Distribution. (line 6) * gsl_ran_dirichlet_lnpdf (C function): The Dirichlet Distribution. (line 33) * gsl_ran_dirichlet_pdf (C function): The Dirichlet Distribution. (line 26) * gsl_ran_dir_2d (C function): Spherical Vector Distributions. (line 10) * gsl_ran_dir_2d_trig_method (C function): Spherical Vector Distributions. (line 10) * gsl_ran_dir_3d (C function): Spherical Vector Distributions. (line 33) * gsl_ran_dir_nd (C function): Spherical Vector Distributions. (line 44) * gsl_ran_discrete (C function): General Discrete Distributions. (line 66) * gsl_ran_discrete_free (C function): General Discrete Distributions. (line 81) * gsl_ran_discrete_pdf (C function): General Discrete Distributions. (line 72) * gsl_ran_discrete_preproc (C function): General Discrete Distributions. (line 54) * gsl_ran_discrete_t (C type): General Discrete Distributions. (line 49) * gsl_ran_exponential (C function): The Exponential Distribution. (line 6) * gsl_ran_exponential_pdf (C function): The Exponential Distribution. (line 15) * gsl_ran_exppow (C function): The Exponential Power Distribution. (line 6) * gsl_ran_exppow_pdf (C function): The Exponential Power Distribution. (line 19) * gsl_ran_fdist (C function): The F-distribution. (line 13) * gsl_ran_fdist_pdf (C function): The F-distribution. (line 28) * gsl_ran_flat (C function): The Flat Uniform Distribution. (line 6) * gsl_ran_flat_pdf (C function): The Flat Uniform Distribution. (line 16) * gsl_ran_gamma (C function): The Gamma Distribution. (line 6) * gsl_ran_gamma_knuth (C function): The Gamma Distribution. (line 23) * gsl_ran_gamma_pdf (C function): The Gamma Distribution. (line 29) * gsl_ran_gaussian (C function): The Gaussian Distribution. (line 6) * gsl_ran_gaussian_pdf (C function): The Gaussian Distribution. (line 20) * gsl_ran_gaussian_ratio_method (C function): The Gaussian Distribution. (line 27) * gsl_ran_gaussian_tail (C function): The Gaussian Tail Distribution. (line 6) * gsl_ran_gaussian_tail_pdf (C function): The Gaussian Tail Distribution. (line 25) * gsl_ran_gaussian_ziggurat (C function): The Gaussian Distribution. (line 27) * gsl_ran_geometric (C function): The Geometric Distribution. (line 6) * gsl_ran_geometric_pdf (C function): The Geometric Distribution. (line 20) * gsl_ran_gumbel1 (C function): The Type-1 Gumbel Distribution. (line 6) * gsl_ran_gumbel1_pdf (C function): The Type-1 Gumbel Distribution. (line 16) * gsl_ran_gumbel2 (C function): The Type-2 Gumbel Distribution. (line 6) * gsl_ran_gumbel2_pdf (C function): The Type-2 Gumbel Distribution. (line 16) * gsl_ran_hypergeometric (C function): The Hypergeometric Distribution. (line 6) * gsl_ran_hypergeometric_pdf (C function): The Hypergeometric Distribution. (line 23) * gsl_ran_landau (C function): The Landau Distribution. (line 6) * gsl_ran_landau_pdf (C function): The Landau Distribution. (line 19) * gsl_ran_laplace (C function): The Laplace Distribution. (line 6) * gsl_ran_laplace_pdf (C function): The Laplace Distribution. (line 15) * gsl_ran_levy (C function): The Levy alpha-Stable Distributions. (line 6) * gsl_ran_levy_skew (C function): The Levy skew alpha-Stable Distribution. (line 6) * gsl_ran_logarithmic (C function): The Logarithmic Distribution. (line 6) * gsl_ran_logarithmic_pdf (C function): The Logarithmic Distribution. 
(line 17) * gsl_ran_logistic (C function): The Logistic Distribution. (line 6) * gsl_ran_logistic_pdf (C function): The Logistic Distribution. (line 15) * gsl_ran_lognormal (C function): The Lognormal Distribution. (line 6) * gsl_ran_lognormal_pdf (C function): The Lognormal Distribution. (line 16) * gsl_ran_multinomial (C function): The Multinomial Distribution. (line 6) * gsl_ran_multinomial_lnpdf (C function): The Multinomial Distribution. (line 36) * gsl_ran_multinomial_pdf (C function): The Multinomial Distribution. (line 29) * gsl_ran_multivariate_gaussian (C function): The Multivariate Gaussian Distribution. (line 6) * gsl_ran_multivariate_gaussian_log_pdf (C function): The Multivariate Gaussian Distribution. (line 21) * gsl_ran_multivariate_gaussian_mean (C function): The Multivariate Gaussian Distribution. (line 34) * gsl_ran_multivariate_gaussian_pdf (C function): The Multivariate Gaussian Distribution. (line 21) * gsl_ran_multivariate_gaussian_vcov (C function): The Multivariate Gaussian Distribution. (line 47) * gsl_ran_negative_binomial (C function): The Negative Binomial Distribution. (line 6) * gsl_ran_negative_binomial_pdf (C function): The Negative Binomial Distribution. (line 19) * gsl_ran_pareto (C function): The Pareto Distribution. (line 6) * gsl_ran_pareto_pdf (C function): The Pareto Distribution. (line 16) * gsl_ran_pascal (C function): The Pascal Distribution. (line 6) * gsl_ran_pascal_pdf (C function): The Pascal Distribution. (line 17) * gsl_ran_poisson (C function): The Poisson Distribution. (line 6) * gsl_ran_poisson_pdf (C function): The Poisson Distribution. (line 16) * gsl_ran_rayleigh (C function): The Rayleigh Distribution. (line 6) * gsl_ran_rayleigh_pdf (C function): The Rayleigh Distribution. (line 16) * gsl_ran_rayleigh_tail (C function): The Rayleigh Tail Distribution. (line 6) * gsl_ran_rayleigh_tail_pdf (C function): The Rayleigh Tail Distribution. (line 17) * gsl_ran_sample (C function): Shuffling and Sampling. (line 63) * gsl_ran_shuffle (C function): Shuffling and Sampling. (line 13) * gsl_ran_tdist (C function): The t-distribution. (line 14) * gsl_ran_tdist_pdf (C function): The t-distribution. (line 23) * gsl_ran_ugaussian (C function): The Gaussian Distribution. (line 37) * gsl_ran_ugaussian_pdf (C function): The Gaussian Distribution. (line 37) * gsl_ran_ugaussian_ratio_method (C function): The Gaussian Distribution. (line 37) * gsl_ran_ugaussian_tail (C function): The Gaussian Tail Distribution. (line 34) * gsl_ran_ugaussian_tail_pdf (C function): The Gaussian Tail Distribution. (line 34) * gsl_ran_weibull (C function): The Weibull Distribution. (line 6) * gsl_ran_weibull_pdf (C function): The Weibull Distribution. (line 16) * gsl_ran_wishart (C function): The Wishart Distribution. (line 6) * gsl_ran_wishart_log_pdf (C function): The Wishart Distribution. (line 21) * gsl_ran_wishart_pdf (C function): The Wishart Distribution. (line 21) * GSL_REAL (C macro): Complex number macros. (line 9) * gsl_rng (C type): The Random Number Generator Interface. (line 18) * gsl_rng_alloc (C function): Random number generator initialization. (line 6) * gsl_rng_borosh13 (C var): Other random number generators. (line 193) * gsl_rng_clone (C function): Copying random number generator state. (line 18) * gsl_rng_cmrg (C var): Random number generator algorithms. (line 103) * gsl_rng_coveyou (C var): Other random number generators. (line 222) * gsl_rng_default (C var): Random number environment variables. 
(line 23) * gsl_rng_default_seed (C var): Random number environment variables. (line 31) * gsl_rng_env_setup (C function): Random number environment variables. (line 40) * gsl_rng_fishman18 (C var): Other random number generators. (line 193) * gsl_rng_fishman20 (C var): Other random number generators. (line 193) * gsl_rng_fishman2x (C var): Other random number generators. (line 211) * gsl_rng_fread (C function): Reading and writing random number generator state. (line 18) * gsl_rng_free (C function): Random number generator initialization. (line 48) * gsl_rng_fwrite (C function): Reading and writing random number generator state. (line 9) * gsl_rng_get (C function): Sampling from a random number generator. (line 12) * gsl_rng_gfsr4 (C var): Random number generator algorithms. (line 181) * gsl_rng_knuthran (C var): Other random number generators. (line 183) * gsl_rng_knuthran2 (C var): Other random number generators. (line 173) * gsl_rng_knuthran2002 (C var): Other random number generators. (line 183) * gsl_rng_lecuyer21 (C var): Other random number generators. (line 193) * gsl_rng_max (C function): Auxiliary random number generator functions. (line 21) * gsl_rng_memcpy (C function): Copying random number generator state. (line 11) * gsl_rng_min (C function): Auxiliary random number generator functions. (line 26) * gsl_rng_minstd (C var): Other random number generators. (line 123) * gsl_rng_mrg (C var): Random number generator algorithms. (line 126) * gsl_rng_mt19937 (C var): Random number generator algorithms. (line 19) * gsl_rng_name (C function): Auxiliary random number generator functions. (line 10) * gsl_rng_r250 (C var): Other random number generators. (line 62) * gsl_rng_rand (C var): Unix random number generators. (line 17) * gsl_rng_rand48 (C var): Unix random number generators. (line 59) * gsl_rng_random_bsd (C var): Unix random number generators. (line 27) * gsl_rng_random_glibc2 (C var): Unix random number generators. (line 27) * gsl_rng_random_libc5 (C var): Unix random number generators. (line 27) * gsl_rng_randu (C var): Other random number generators. (line 113) * gsl_rng_ranf (C var): Other random number generators. (line 22) * gsl_rng_ranlux (C var): Random number generator algorithms. (line 74) * gsl_rng_ranlux389 (C var): Random number generator algorithms. (line 74) * gsl_rng_ranlxd1 (C var): Random number generator algorithms. (line 67) * gsl_rng_ranlxd2 (C var): Random number generator algorithms. (line 67) * gsl_rng_ranlxs0 (C var): Random number generator algorithms. (line 47) * gsl_rng_ranlxs1 (C var): Random number generator algorithms. (line 47) * gsl_rng_ranlxs2 (C var): Random number generator algorithms. (line 47) * gsl_rng_ranmar (C var): Other random number generators. (line 55) * GSL_RNG_SEED (C macro): Random number environment variables. (line 18) * gsl_rng_set (C function): Random number generator initialization. (line 26) * gsl_rng_size (C function): Auxiliary random number generator functions. (line 33) * gsl_rng_slatec (C var): Other random number generators. (line 153) * gsl_rng_state (C function): Auxiliary random number generator functions. (line 33) * gsl_rng_taus (C var): Random number generator algorithms. (line 144) * gsl_rng_taus2 (C var): Random number generator algorithms. (line 144) * gsl_rng_transputer (C var): Other random number generators. (line 103) * gsl_rng_tt800 (C var): Other random number generators. (line 79) * GSL_RNG_TYPE (C macro): Random number environment variables. 
(line 12) * gsl_rng_type (C type): The Random Number Generator Interface. (line 18) * gsl_rng_types_setup (C function): Auxiliary random number generator functions. (line 45) * gsl_rng_uni (C var): Other random number generators. (line 145) * gsl_rng_uni32 (C var): Other random number generators. (line 145) * gsl_rng_uniform (C function): Sampling from a random number generator. (line 20) * gsl_rng_uniform_int (C function): Sampling from a random number generator. (line 40) * gsl_rng_uniform_pos (C function): Sampling from a random number generator. (line 31) * gsl_rng_vax (C var): Other random number generators. (line 93) * gsl_rng_waterman14 (C var): Other random number generators. (line 193) * gsl_rng_zuf (C var): Other random number generators. (line 158) * gsl_root_fdfsolver (C type): Initializing the Solver. (line 11) * gsl_root_fdfsolver_alloc (C function): Initializing the Solver. (line 30) * gsl_root_fdfsolver_free (C function): Initializing the Solver. (line 58) * gsl_root_fdfsolver_iterate (C function): Iteration. (line 12) * gsl_root_fdfsolver_name (C function): Initializing the Solver. (line 64) * gsl_root_fdfsolver_root (C function): Iteration. (line 35) * gsl_root_fdfsolver_set (C function): Initializing the Solver. (line 51) * gsl_root_fdfsolver_type (C type): Root Finding Algorithms using Derivatives. (line 15) * gsl_root_fdfsolver_type.gsl_root_fdfsolver_newton (C var): Root Finding Algorithms using Derivatives. (line 17) * gsl_root_fdfsolver_type.gsl_root_fdfsolver_secant (C var): Root Finding Algorithms using Derivatives. (line 32) * gsl_root_fdfsolver_type.gsl_root_fdfsolver_steffenson (C var): Root Finding Algorithms using Derivatives. (line 64) * gsl_root_fsolver (C type): Initializing the Solver. (line 6) * gsl_root_fsolver_alloc (C function): Initializing the Solver. (line 16) * gsl_root_fsolver_free (C function): Initializing the Solver. (line 58) * gsl_root_fsolver_iterate (C function): Iteration. (line 12) * gsl_root_fsolver_name (C function): Initializing the Solver. (line 64) * gsl_root_fsolver_root (C function): Iteration. (line 35) * gsl_root_fsolver_set (C function): Initializing the Solver. (line 44) * gsl_root_fsolver_type (C type): Root Bracketing Algorithms. (line 16) * gsl_root_fsolver_type.gsl_root_fsolver_bisection (C var): Root Bracketing Algorithms. (line 18) * gsl_root_fsolver_type.gsl_root_fsolver_brent (C var): Root Bracketing Algorithms. (line 54) * gsl_root_fsolver_type.gsl_root_fsolver_falsepos (C var): Root Bracketing Algorithms. (line 35) * gsl_root_fsolver_x_lower (C function): Iteration. (line 42) * gsl_root_fsolver_x_upper (C function): Iteration. (line 42) * gsl_root_test_delta (C function): Search Stopping Parameters. (line 43) * gsl_root_test_interval (C function): Search Stopping Parameters. (line 19) * gsl_root_test_residual (C function): Search Stopping Parameters. (line 55) * gsl_rstat_add (C function): Adding Data to the Accumulator. (line 6) * gsl_rstat_alloc (C function): Initializing the Accumulator. (line 12) * gsl_rstat_free (C function): Initializing the Accumulator. (line 17) * gsl_rstat_kurtosis (C function): Current Statistics. (line 55) * gsl_rstat_max (C function): Current Statistics. (line 10) * gsl_rstat_mean (C function): Current Statistics. (line 14) * gsl_rstat_median (C function): Current Statistics. (line 62) * gsl_rstat_min (C function): Current Statistics. (line 6) * gsl_rstat_n (C function): Adding Data to the Accumulator. (line 12) * gsl_rstat_quantile_add (C function): Quantiles. 
(line 38) * gsl_rstat_quantile_alloc (C function): Quantiles. (line 18) * gsl_rstat_quantile_free (C function): Quantiles. (line 26) * gsl_rstat_quantile_get (C function): Quantiles. (line 44) * gsl_rstat_quantile_reset (C function): Quantiles. (line 32) * gsl_rstat_quantile_workspace (C type): Quantiles. (line 13) * gsl_rstat_reset (C function): Initializing the Accumulator. (line 22) * gsl_rstat_rms (C function): Current Statistics. (line 41) * gsl_rstat_sd (C function): Current Statistics. (line 28) * gsl_rstat_sd_mean (C function): Current Statistics. (line 34) * gsl_rstat_skew (C function): Current Statistics. (line 48) * gsl_rstat_variance (C function): Current Statistics. (line 21) * gsl_rstat_workspace (C type): Initializing the Accumulator. (line 6) * GSL_SET_COMPLEX (C macro): Complex number macros. (line 27) * gsl_set_error_handler (C function): Error Handlers. (line 43) * gsl_set_error_handler_off (C function): Error Handlers. (line 69) * gsl_sf_airy_Ai (C function): Airy Functions. (line 6) * gsl_sf_airy_Ai_deriv (C function): Derivatives of Airy Functions. (line 6) * gsl_sf_airy_Ai_deriv_e (C function): Derivatives of Airy Functions. (line 6) * gsl_sf_airy_Ai_deriv_scaled (C function): Derivatives of Airy Functions. (line 20) * gsl_sf_airy_Ai_deriv_scaled_e (C function): Derivatives of Airy Functions. (line 20) * gsl_sf_airy_Ai_e (C function): Airy Functions. (line 6) * gsl_sf_airy_Ai_scaled (C function): Airy Functions. (line 20) * gsl_sf_airy_Ai_scaled_e (C function): Airy Functions. (line 20) * gsl_sf_airy_Bi (C function): Airy Functions. (line 13) * gsl_sf_airy_Bi_deriv (C function): Derivatives of Airy Functions. (line 13) * gsl_sf_airy_Bi_deriv_e (C function): Derivatives of Airy Functions. (line 13) * gsl_sf_airy_Bi_deriv_scaled (C function): Derivatives of Airy Functions. (line 29) * gsl_sf_airy_Bi_deriv_scaled_e (C function): Derivatives of Airy Functions. (line 29) * gsl_sf_airy_Bi_e (C function): Airy Functions. (line 13) * gsl_sf_airy_Bi_scaled (C function): Airy Functions. (line 28) * gsl_sf_airy_Bi_scaled_e (C function): Airy Functions. (line 28) * gsl_sf_airy_zero_Ai (C function): Zeros of Airy Functions. (line 6) * gsl_sf_airy_zero_Ai_deriv (C function): Zeros of Derivatives of Airy Functions. (line 6) * gsl_sf_airy_zero_Ai_deriv_e (C function): Zeros of Derivatives of Airy Functions. (line 6) * gsl_sf_airy_zero_Ai_e (C function): Zeros of Airy Functions. (line 6) * gsl_sf_airy_zero_Bi (C function): Zeros of Airy Functions. (line 13) * gsl_sf_airy_zero_Bi_deriv (C function): Zeros of Derivatives of Airy Functions. (line 13) * gsl_sf_airy_zero_Bi_deriv_e (C function): Zeros of Derivatives of Airy Functions. (line 13) * gsl_sf_airy_zero_Bi_e (C function): Zeros of Airy Functions. (line 13) * gsl_sf_angle_restrict_pos (C function): Restriction Functions. (line 16) * gsl_sf_angle_restrict_pos_e (C function): Restriction Functions. (line 16) * gsl_sf_angle_restrict_symm (C function): Restriction Functions. (line 6) * gsl_sf_angle_restrict_symm_e (C function): Restriction Functions. (line 6) * gsl_sf_atanint (C function): Arctangent Integral. (line 6) * gsl_sf_atanint_e (C function): Arctangent Integral. (line 6) * gsl_sf_bessel_I0 (C function): Regular Modified Cylindrical Bessel Functions. (line 6) * gsl_sf_bessel_I0_e (C function): Regular Modified Cylindrical Bessel Functions. (line 6) * gsl_sf_bessel_I0_scaled (C function): Regular Modified Cylindrical Bessel Functions. 
(line 36) * gsl_sf_bessel_i0_scaled (C function): Regular Modified Spherical Bessel Functions. (line 10) * gsl_sf_bessel_I0_scaled_e (C function): Regular Modified Cylindrical Bessel Functions. (line 36) * gsl_sf_bessel_i0_scaled_e (C function): Regular Modified Spherical Bessel Functions. (line 10) * gsl_sf_bessel_I1 (C function): Regular Modified Cylindrical Bessel Functions. (line 12) * gsl_sf_bessel_I1_e (C function): Regular Modified Cylindrical Bessel Functions. (line 12) * gsl_sf_bessel_I1_scaled (C function): Regular Modified Cylindrical Bessel Functions. (line 43) * gsl_sf_bessel_i1_scaled (C function): Regular Modified Spherical Bessel Functions. (line 17) * gsl_sf_bessel_I1_scaled_e (C function): Regular Modified Cylindrical Bessel Functions. (line 43) * gsl_sf_bessel_i1_scaled_e (C function): Regular Modified Spherical Bessel Functions. (line 17) * gsl_sf_bessel_i2_scaled (C function): Regular Modified Spherical Bessel Functions. (line 24) * gsl_sf_bessel_i2_scaled_e (C function): Regular Modified Spherical Bessel Functions. (line 24) * gsl_sf_bessel_il_scaled (C function): Regular Modified Spherical Bessel Functions. (line 31) * gsl_sf_bessel_il_scaled_array (C function): Regular Modified Spherical Bessel Functions. (line 38) * gsl_sf_bessel_il_scaled_e (C function): Regular Modified Spherical Bessel Functions. (line 31) * gsl_sf_bessel_In (C function): Regular Modified Cylindrical Bessel Functions. (line 18) * gsl_sf_bessel_Inu (C function): Regular Modified Bessel Functions—Fractional Order. (line 6) * gsl_sf_bessel_Inu_e (C function): Regular Modified Bessel Functions—Fractional Order. (line 6) * gsl_sf_bessel_Inu_scaled (C function): Regular Modified Bessel Functions—Fractional Order. (line 13) * gsl_sf_bessel_Inu_scaled_e (C function): Regular Modified Bessel Functions—Fractional Order. (line 13) * gsl_sf_bessel_In_array (C function): Regular Modified Cylindrical Bessel Functions. (line 25) * gsl_sf_bessel_In_e (C function): Regular Modified Cylindrical Bessel Functions. (line 18) * gsl_sf_bessel_In_scaled (C function): Regular Modified Cylindrical Bessel Functions. (line 50) * gsl_sf_bessel_In_scaled_array (C function): Regular Modified Cylindrical Bessel Functions. (line 57) * gsl_sf_bessel_In_scaled_e (C function): Regular Modified Cylindrical Bessel Functions. (line 50) * gsl_sf_bessel_J0 (C function): Regular Cylindrical Bessel Functions. (line 6) * gsl_sf_bessel_j0 (C function): Regular Spherical Bessel Functions. (line 6) * gsl_sf_bessel_J0_e (C function): Regular Cylindrical Bessel Functions. (line 6) * gsl_sf_bessel_j0_e (C function): Regular Spherical Bessel Functions. (line 6) * gsl_sf_bessel_J1 (C function): Regular Cylindrical Bessel Functions. (line 12) * gsl_sf_bessel_j1 (C function): Regular Spherical Bessel Functions. (line 12) * gsl_sf_bessel_J1_e (C function): Regular Cylindrical Bessel Functions. (line 12) * gsl_sf_bessel_j1_e (C function): Regular Spherical Bessel Functions. (line 12) * gsl_sf_bessel_j2 (C function): Regular Spherical Bessel Functions. (line 18) * gsl_sf_bessel_j2_e (C function): Regular Spherical Bessel Functions. (line 18) * gsl_sf_bessel_jl (C function): Regular Spherical Bessel Functions. (line 24) * gsl_sf_bessel_jl_array (C function): Regular Spherical Bessel Functions. (line 31) * gsl_sf_bessel_jl_e (C function): Regular Spherical Bessel Functions. (line 24) * gsl_sf_bessel_jl_steed_array (C function): Regular Spherical Bessel Functions. (line 41) * gsl_sf_bessel_Jn (C function): Regular Cylindrical Bessel Functions. 
(line 18) * gsl_sf_bessel_Jnu (C function): Regular Bessel Function—Fractional Order. (line 6) * gsl_sf_bessel_Jnu_e (C function): Regular Bessel Function—Fractional Order. (line 6) * gsl_sf_bessel_Jn_array (C function): Regular Cylindrical Bessel Functions. (line 25) * gsl_sf_bessel_Jn_e (C function): Regular Cylindrical Bessel Functions. (line 18) * gsl_sf_bessel_K0 (C function): Irregular Modified Cylindrical Bessel Functions. (line 6) * gsl_sf_bessel_K0_e (C function): Irregular Modified Cylindrical Bessel Functions. (line 6) * gsl_sf_bessel_K0_scaled (C function): Irregular Modified Cylindrical Bessel Functions. (line 36) * gsl_sf_bessel_k0_scaled (C function): Irregular Modified Spherical Bessel Functions. (line 10) * gsl_sf_bessel_K0_scaled_e (C function): Irregular Modified Cylindrical Bessel Functions. (line 36) * gsl_sf_bessel_k0_scaled_e (C function): Irregular Modified Spherical Bessel Functions. (line 10) * gsl_sf_bessel_K1 (C function): Irregular Modified Cylindrical Bessel Functions. (line 12) * gsl_sf_bessel_K1_e (C function): Irregular Modified Cylindrical Bessel Functions. (line 12) * gsl_sf_bessel_K1_scaled (C function): Irregular Modified Cylindrical Bessel Functions. (line 43) * gsl_sf_bessel_k1_scaled (C function): Irregular Modified Spherical Bessel Functions. (line 17) * gsl_sf_bessel_K1_scaled_e (C function): Irregular Modified Cylindrical Bessel Functions. (line 43) * gsl_sf_bessel_k1_scaled_e (C function): Irregular Modified Spherical Bessel Functions. (line 17) * gsl_sf_bessel_k2_scaled (C function): Irregular Modified Spherical Bessel Functions. (line 24) * gsl_sf_bessel_k2_scaled_e (C function): Irregular Modified Spherical Bessel Functions. (line 24) * gsl_sf_bessel_kl_scaled (C function): Irregular Modified Spherical Bessel Functions. (line 31) * gsl_sf_bessel_kl_scaled_array (C function): Irregular Modified Spherical Bessel Functions. (line 38) * gsl_sf_bessel_kl_scaled_e (C function): Irregular Modified Spherical Bessel Functions. (line 31) * gsl_sf_bessel_Kn (C function): Irregular Modified Cylindrical Bessel Functions. (line 18) * gsl_sf_bessel_Knu (C function): Irregular Modified Bessel Functions—Fractional Order. (line 6) * gsl_sf_bessel_Knu_e (C function): Irregular Modified Bessel Functions—Fractional Order. (line 6) * gsl_sf_bessel_Knu_scaled (C function): Irregular Modified Bessel Functions—Fractional Order. (line 21) * gsl_sf_bessel_Knu_scaled_e (C function): Irregular Modified Bessel Functions—Fractional Order. (line 21) * gsl_sf_bessel_Kn_array (C function): Irregular Modified Cylindrical Bessel Functions. (line 25) * gsl_sf_bessel_Kn_e (C function): Irregular Modified Cylindrical Bessel Functions. (line 18) * gsl_sf_bessel_Kn_scaled (C function): Irregular Modified Cylindrical Bessel Functions. (line 50) * gsl_sf_bessel_Kn_scaled_array (C function): Irregular Modified Cylindrical Bessel Functions. (line 57) * gsl_sf_bessel_Kn_scaled_e (C function): Irregular Modified Cylindrical Bessel Functions. (line 50) * gsl_sf_bessel_lnKnu (C function): Irregular Modified Bessel Functions—Fractional Order. (line 13) * gsl_sf_bessel_lnKnu_e (C function): Irregular Modified Bessel Functions—Fractional Order. (line 13) * gsl_sf_bessel_sequence_Jnu_e (C function): Regular Bessel Function—Fractional Order. (line 13) * gsl_sf_bessel_Y0 (C function): Irregular Cylindrical Bessel Functions. (line 6) * gsl_sf_bessel_y0 (C function): Irregular Spherical Bessel Functions. (line 6) * gsl_sf_bessel_Y0_e (C function): Irregular Cylindrical Bessel Functions. 
(line 6) * gsl_sf_bessel_y0_e (C function): Irregular Spherical Bessel Functions. (line 6) * gsl_sf_bessel_Y1 (C function): Irregular Cylindrical Bessel Functions. (line 12) * gsl_sf_bessel_y1 (C function): Irregular Spherical Bessel Functions. (line 12) * gsl_sf_bessel_Y1_e (C function): Irregular Cylindrical Bessel Functions. (line 12) * gsl_sf_bessel_y1_e (C function): Irregular Spherical Bessel Functions. (line 12) * gsl_sf_bessel_y2 (C function): Irregular Spherical Bessel Functions. (line 18) * gsl_sf_bessel_y2_e (C function): Irregular Spherical Bessel Functions. (line 18) * gsl_sf_bessel_yl (C function): Irregular Spherical Bessel Functions. (line 24) * gsl_sf_bessel_yl_array (C function): Irregular Spherical Bessel Functions. (line 31) * gsl_sf_bessel_yl_e (C function): Irregular Spherical Bessel Functions. (line 24) * gsl_sf_bessel_Yn (C function): Irregular Cylindrical Bessel Functions. (line 18) * gsl_sf_bessel_Ynu (C function): Irregular Bessel Functions—Fractional Order. (line 6) * gsl_sf_bessel_Ynu_e (C function): Irregular Bessel Functions—Fractional Order. (line 6) * gsl_sf_bessel_Yn_array (C function): Irregular Cylindrical Bessel Functions. (line 25) * gsl_sf_bessel_Yn_e (C function): Irregular Cylindrical Bessel Functions. (line 18) * gsl_sf_bessel_zero_J0 (C function): Zeros of Regular Bessel Functions. (line 6) * gsl_sf_bessel_zero_J0_e (C function): Zeros of Regular Bessel Functions. (line 6) * gsl_sf_bessel_zero_J1 (C function): Zeros of Regular Bessel Functions. (line 13) * gsl_sf_bessel_zero_J1_e (C function): Zeros of Regular Bessel Functions. (line 13) * gsl_sf_bessel_zero_Jnu (C function): Zeros of Regular Bessel Functions. (line 20) * gsl_sf_bessel_zero_Jnu_e (C function): Zeros of Regular Bessel Functions. (line 20) * gsl_sf_beta (C function): Beta Functions. (line 6) * gsl_sf_beta_e (C function): Beta Functions. (line 6) * gsl_sf_beta_inc (C function): Incomplete Beta Function. (line 6) * gsl_sf_beta_inc_e (C function): Incomplete Beta Function. (line 6) * gsl_sf_Chi (C function): Hyperbolic Integrals. (line 13) * gsl_sf_Chi_e (C function): Hyperbolic Integrals. (line 13) * gsl_sf_choose (C function): Factorials. (line 43) * gsl_sf_choose_e (C function): Factorials. (line 43) * gsl_sf_Ci (C function): Trigonometric Integrals. (line 13) * gsl_sf_Ci_e (C function): Trigonometric Integrals. (line 13) * gsl_sf_clausen (C function): Clausen Functions. (line 14) * gsl_sf_clausen_e (C function): Clausen Functions. (line 14) * gsl_sf_complex_cos_e (C function): Trigonometric Functions for Complex Arguments. (line 12) * gsl_sf_complex_dilog_e (C function): Complex Argument. (line 6) * gsl_sf_complex_logsin_e (C function): Trigonometric Functions for Complex Arguments. (line 19) * gsl_sf_complex_log_e (C function): Logarithm and Related Functions. (line 22) * gsl_sf_complex_sin_e (C function): Trigonometric Functions for Complex Arguments. (line 6) * gsl_sf_conicalP_0 (C function): Conical Functions. (line 24) * gsl_sf_conicalP_0_e (C function): Conical Functions. (line 24) * gsl_sf_conicalP_1 (C function): Conical Functions. (line 31) * gsl_sf_conicalP_1_e (C function): Conical Functions. (line 31) * gsl_sf_conicalP_cyl_reg (C function): Conical Functions. (line 46) * gsl_sf_conicalP_cyl_reg_e (C function): Conical Functions. (line 46) * gsl_sf_conicalP_half (C function): Conical Functions. (line 10) * gsl_sf_conicalP_half_e (C function): Conical Functions. (line 10) * gsl_sf_conicalP_mhalf (C function): Conical Functions. 
(line 17) * gsl_sf_conicalP_mhalf_e (C function): Conical Functions. (line 17) * gsl_sf_conicalP_sph_reg (C function): Conical Functions. (line 38) * gsl_sf_conicalP_sph_reg_e (C function): Conical Functions. (line 38) * gsl_sf_cos (C function): Circular Trigonometric Functions. (line 11) * gsl_sf_cos_e (C function): Circular Trigonometric Functions. (line 11) * gsl_sf_cos_err_e (C function): Trigonometric Functions With Error Estimates. (line 14) * gsl_sf_coulomb_CL_array (C function): Coulomb Wave Function Normalization Constant. (line 15) * gsl_sf_coulomb_CL_e (C function): Coulomb Wave Function Normalization Constant. (line 9) * gsl_sf_coulomb_wave_FGp_array (C function): Coulomb Wave Functions. (line 52) * gsl_sf_coulomb_wave_FG_array (C function): Coulomb Wave Functions. (line 42) * gsl_sf_coulomb_wave_FG_e (C function): Coulomb Wave Functions. (line 20) * gsl_sf_coulomb_wave_F_array (C function): Coulomb Wave Functions. (line 34) * gsl_sf_coulomb_wave_sphF_array (C function): Coulomb Wave Functions. (line 64) * gsl_sf_coupling_3j (C function): 3-j Symbols. (line 6) * gsl_sf_coupling_3j_e (C function): 3-j Symbols. (line 6) * gsl_sf_coupling_6j (C function): 6-j Symbols. (line 6) * gsl_sf_coupling_6j_e (C function): 6-j Symbols. (line 6) * gsl_sf_coupling_9j (C function): 9-j Symbols. (line 6) * gsl_sf_coupling_9j_e (C function): 9-j Symbols. (line 6) * gsl_sf_dawson (C function): Dawson Function. (line 14) * gsl_sf_dawson_e (C function): Dawson Function. (line 14) * gsl_sf_debye_1 (C function): Debye Functions. (line 13) * gsl_sf_debye_1_e (C function): Debye Functions. (line 13) * gsl_sf_debye_2 (C function): Debye Functions. (line 18) * gsl_sf_debye_2_e (C function): Debye Functions. (line 18) * gsl_sf_debye_3 (C function): Debye Functions. (line 23) * gsl_sf_debye_3_e (C function): Debye Functions. (line 23) * gsl_sf_debye_4 (C function): Debye Functions. (line 28) * gsl_sf_debye_4_e (C function): Debye Functions. (line 28) * gsl_sf_debye_5 (C function): Debye Functions. (line 33) * gsl_sf_debye_5_e (C function): Debye Functions. (line 33) * gsl_sf_debye_6 (C function): Debye Functions. (line 38) * gsl_sf_debye_6_e (C function): Debye Functions. (line 38) * gsl_sf_dilog (C function): Real Argument. (line 6) * gsl_sf_dilog_e (C function): Real Argument. (line 6) * gsl_sf_doublefact (C function): Factorials. (line 19) * gsl_sf_doublefact_e (C function): Factorials. (line 19) * gsl_sf_ellint_D (C function): Legendre Form of Incomplete Elliptic Integrals. (line 37) * gsl_sf_ellint_D_e (C function): Legendre Form of Incomplete Elliptic Integrals. (line 37) * gsl_sf_ellint_E (C function): Legendre Form of Incomplete Elliptic Integrals. (line 16) * gsl_sf_ellint_Ecomp (C function): Legendre Form of Complete Elliptic Integrals. (line 15) * gsl_sf_ellint_Ecomp_e (C function): Legendre Form of Complete Elliptic Integrals. (line 15) * gsl_sf_ellint_E_e (C function): Legendre Form of Incomplete Elliptic Integrals. (line 16) * gsl_sf_ellint_F (C function): Legendre Form of Incomplete Elliptic Integrals. (line 6) * gsl_sf_ellint_F_e (C function): Legendre Form of Incomplete Elliptic Integrals. (line 6) * gsl_sf_ellint_Kcomp (C function): Legendre Form of Complete Elliptic Integrals. (line 6) * gsl_sf_ellint_Kcomp_e (C function): Legendre Form of Complete Elliptic Integrals. (line 6) * gsl_sf_ellint_P (C function): Legendre Form of Incomplete Elliptic Integrals. (line 26) * gsl_sf_ellint_Pcomp (C function): Legendre Form of Complete Elliptic Integrals. 
(line 24) * gsl_sf_ellint_Pcomp_e (C function): Legendre Form of Complete Elliptic Integrals. (line 24) * gsl_sf_ellint_P_e (C function): Legendre Form of Incomplete Elliptic Integrals. (line 26) * gsl_sf_ellint_RC (C function): Carlson Forms. (line 6) * gsl_sf_ellint_RC_e (C function): Carlson Forms. (line 6) * gsl_sf_ellint_RD (C function): Carlson Forms. (line 14) * gsl_sf_ellint_RD_e (C function): Carlson Forms. (line 14) * gsl_sf_ellint_RF (C function): Carlson Forms. (line 22) * gsl_sf_ellint_RF_e (C function): Carlson Forms. (line 22) * gsl_sf_ellint_RJ (C function): Carlson Forms. (line 30) * gsl_sf_ellint_RJ_e (C function): Carlson Forms. (line 30) * gsl_sf_elljac_e (C function): Elliptic Functions Jacobi. (line 10) * gsl_sf_erf (C function): Error Function. (line 6) * gsl_sf_erfc (C function): Complementary Error Function. (line 6) * gsl_sf_erfc_e (C function): Complementary Error Function. (line 6) * gsl_sf_erf_e (C function): Error Function. (line 6) * gsl_sf_erf_Q (C function): Probability functions. (line 15) * gsl_sf_erf_Q_e (C function): Probability functions. (line 15) * gsl_sf_erf_Z (C function): Probability functions. (line 9) * gsl_sf_erf_Z_e (C function): Probability functions. (line 9) * gsl_sf_eta (C function): Eta Function. (line 16) * gsl_sf_eta_e (C function): Eta Function. (line 16) * gsl_sf_eta_int (C function): Eta Function. (line 10) * gsl_sf_eta_int_e (C function): Eta Function. (line 10) * gsl_sf_exp (C function): Exponential Function. (line 6) * gsl_sf_expint_3 (C function): Ei_3 x. (line 6) * gsl_sf_expint_3_e (C function): Ei_3 x. (line 6) * gsl_sf_expint_E1 (C function): Exponential Integral. (line 6) * gsl_sf_expint_E1_e (C function): Exponential Integral. (line 6) * gsl_sf_expint_E2 (C function): Exponential Integral. (line 13) * gsl_sf_expint_E2_e (C function): Exponential Integral. (line 13) * gsl_sf_expint_Ei (C function): Ei x. (line 6) * gsl_sf_expint_Ei_e (C function): Ei x. (line 6) * gsl_sf_expint_En (C function): Exponential Integral. (line 21) * gsl_sf_expint_En_e (C function): Exponential Integral. (line 21) * gsl_sf_expm1 (C function): Relative Exponential Functions. (line 6) * gsl_sf_expm1_e (C function): Relative Exponential Functions. (line 6) * gsl_sf_exprel (C function): Relative Exponential Functions. (line 12) * gsl_sf_exprel_2 (C function): Relative Exponential Functions. (line 20) * gsl_sf_exprel_2_e (C function): Relative Exponential Functions. (line 20) * gsl_sf_exprel_e (C function): Relative Exponential Functions. (line 12) * gsl_sf_exprel_n (C function): Relative Exponential Functions. (line 28) * gsl_sf_exprel_n_e (C function): Relative Exponential Functions. (line 28) * gsl_sf_exp_e (C function): Exponential Function. (line 6) * gsl_sf_exp_e10_e (C function): Exponential Function. (line 12) * gsl_sf_exp_err_e (C function): Exponentiation With Error Estimate. (line 6) * gsl_sf_exp_err_e10_e (C function): Exponentiation With Error Estimate. (line 12) * gsl_sf_exp_mult (C function): Exponential Function. (line 19) * gsl_sf_exp_mult_e (C function): Exponential Function. (line 19) * gsl_sf_exp_mult_e10_e (C function): Exponential Function. (line 26) * gsl_sf_exp_mult_err_e (C function): Exponentiation With Error Estimate. (line 19) * gsl_sf_exp_mult_err_e10_e (C function): Exponentiation With Error Estimate. (line 26) * gsl_sf_fact (C function): Factorials. (line 11) * gsl_sf_fact_e (C function): Factorials. (line 11) * gsl_sf_fermi_dirac_0 (C function): Complete Fermi-Dirac Integrals. 
(line 21) * gsl_sf_fermi_dirac_0_e (C function): Complete Fermi-Dirac Integrals. (line 21) * gsl_sf_fermi_dirac_1 (C function): Complete Fermi-Dirac Integrals. (line 28) * gsl_sf_fermi_dirac_1_e (C function): Complete Fermi-Dirac Integrals. (line 28) * gsl_sf_fermi_dirac_2 (C function): Complete Fermi-Dirac Integrals. (line 35) * gsl_sf_fermi_dirac_2_e (C function): Complete Fermi-Dirac Integrals. (line 35) * gsl_sf_fermi_dirac_3half (C function): Complete Fermi-Dirac Integrals. (line 64) * gsl_sf_fermi_dirac_3half_e (C function): Complete Fermi-Dirac Integrals. (line 64) * gsl_sf_fermi_dirac_half (C function): Complete Fermi-Dirac Integrals. (line 57) * gsl_sf_fermi_dirac_half_e (C function): Complete Fermi-Dirac Integrals. (line 57) * gsl_sf_fermi_dirac_inc_0 (C function): Incomplete Fermi-Dirac Integrals. (line 10) * gsl_sf_fermi_dirac_inc_0_e (C function): Incomplete Fermi-Dirac Integrals. (line 10) * gsl_sf_fermi_dirac_int (C function): Complete Fermi-Dirac Integrals. (line 42) * gsl_sf_fermi_dirac_int_e (C function): Complete Fermi-Dirac Integrals. (line 42) * gsl_sf_fermi_dirac_m1 (C function): Complete Fermi-Dirac Integrals. (line 13) * gsl_sf_fermi_dirac_m1_e (C function): Complete Fermi-Dirac Integrals. (line 13) * gsl_sf_fermi_dirac_mhalf (C function): Complete Fermi-Dirac Integrals. (line 50) * gsl_sf_fermi_dirac_mhalf_e (C function): Complete Fermi-Dirac Integrals. (line 50) * gsl_sf_gamma (C function): Gamma Functions. (line 14) * gsl_sf_gammainv (C function): Gamma Functions. (line 54) * gsl_sf_gammainv_e (C function): Gamma Functions. (line 54) * gsl_sf_gammastar (C function): Gamma Functions. (line 43) * gsl_sf_gammastar_e (C function): Gamma Functions. (line 43) * gsl_sf_gamma_e (C function): Gamma Functions. (line 14) * gsl_sf_gamma_inc (C function): Incomplete Gamma Functions. (line 6) * gsl_sf_gamma_inc_e (C function): Incomplete Gamma Functions. (line 6) * gsl_sf_gamma_inc_P (C function): Incomplete Gamma Functions. (line 22) * gsl_sf_gamma_inc_P_e (C function): Incomplete Gamma Functions. (line 22) * gsl_sf_gamma_inc_Q (C function): Incomplete Gamma Functions. (line 14) * gsl_sf_gamma_inc_Q_e (C function): Incomplete Gamma Functions. (line 14) * gsl_sf_gegenpoly_1 (C function): Gegenbauer Functions. (line 11) * gsl_sf_gegenpoly_1_e (C function): Gegenbauer Functions. (line 11) * gsl_sf_gegenpoly_2 (C function): Gegenbauer Functions. (line 11) * gsl_sf_gegenpoly_2_e (C function): Gegenbauer Functions. (line 11) * gsl_sf_gegenpoly_3 (C function): Gegenbauer Functions. (line 11) * gsl_sf_gegenpoly_3_e (C function): Gegenbauer Functions. (line 11) * gsl_sf_gegenpoly_array (C function): Gegenbauer Functions. (line 32) * gsl_sf_gegenpoly_n (C function): Gegenbauer Functions. (line 24) * gsl_sf_gegenpoly_n_e (C function): Gegenbauer Functions. (line 24) * gsl_sf_hazard (C function): Probability functions. (line 29) * gsl_sf_hazard_e (C function): Probability functions. (line 29) * gsl_sf_hermite (C function): Hermite Polynomials. (line 23) * gsl_sf_hermite_array (C function): Hermite Polynomials. (line 32) * gsl_sf_hermite_array_deriv (C function): Derivatives of Hermite Polynomials. (line 15) * gsl_sf_hermite_deriv (C function): Derivatives of Hermite Polynomials. (line 6) * gsl_sf_hermite_deriv_array (C function): Derivatives of Hermite Polynomials. (line 24) * gsl_sf_hermite_deriv_e (C function): Derivatives of Hermite Polynomials. (line 6) * gsl_sf_hermite_e (C function): Hermite Polynomials. (line 23) * gsl_sf_hermite_func (C function): Hermite Functions. 
(line 28) * gsl_sf_hermite_func_array (C function): Hermite Functions. (line 46) * gsl_sf_hermite_func_der (C function): Derivatives of Hermite Functions. (line 6) * gsl_sf_hermite_func_der_e (C function): Derivatives of Hermite Functions. (line 6) * gsl_sf_hermite_func_e (C function): Hermite Functions. (line 28) * gsl_sf_hermite_func_fast (C function): Hermite Functions. (line 36) * gsl_sf_hermite_func_fast_e (C function): Hermite Functions. (line 36) * gsl_sf_hermite_func_series (C function): Hermite Functions. (line 54) * gsl_sf_hermite_func_series_e (C function): Hermite Functions. (line 54) * gsl_sf_hermite_func_zero (C function): Zeros of Hermite Polynomials and Hermite Functions. (line 26) * gsl_sf_hermite_func_zero_e (C function): Zeros of Hermite Polynomials and Hermite Functions. (line 26) * gsl_sf_hermite_prob (C function): Hermite Polynomials. (line 48) * gsl_sf_hermite_prob_array (C function): Hermite Polynomials. (line 57) * gsl_sf_hermite_prob_array_deriv (C function): Derivatives of Hermite Polynomials. (line 42) * gsl_sf_hermite_prob_deriv (C function): Derivatives of Hermite Polynomials. (line 33) * gsl_sf_hermite_prob_deriv_array (C function): Derivatives of Hermite Polynomials. (line 51) * gsl_sf_hermite_prob_deriv_e (C function): Derivatives of Hermite Polynomials. (line 33) * gsl_sf_hermite_prob_e (C function): Hermite Polynomials. (line 48) * gsl_sf_hermite_prob_series (C function): Hermite Polynomials. (line 64) * gsl_sf_hermite_prob_series_e (C function): Hermite Polynomials. (line 64) * gsl_sf_hermite_prob_zero (C function): Zeros of Hermite Polynomials and Hermite Functions. (line 19) * gsl_sf_hermite_prob_zero_e (C function): Zeros of Hermite Polynomials and Hermite Functions. (line 19) * gsl_sf_hermite_series (C function): Hermite Polynomials. (line 39) * gsl_sf_hermite_series_e (C function): Hermite Polynomials. (line 39) * gsl_sf_hermite_zero (C function): Zeros of Hermite Polynomials and Hermite Functions. (line 12) * gsl_sf_hermite_zero_e (C function): Zeros of Hermite Polynomials and Hermite Functions. (line 12) * gsl_sf_hydrogenicR (C function): Normalized Hydrogenic Bound States. (line 13) * gsl_sf_hydrogenicR_1 (C function): Normalized Hydrogenic Bound States. (line 6) * gsl_sf_hydrogenicR_1_e (C function): Normalized Hydrogenic Bound States. (line 6) * gsl_sf_hydrogenicR_e (C function): Normalized Hydrogenic Bound States. (line 13) * gsl_sf_hyperg_0F1 (C function): Hypergeometric Functions. (line 10) * gsl_sf_hyperg_0F1_e (C function): Hypergeometric Functions. (line 10) * gsl_sf_hyperg_1F1 (C function): Hypergeometric Functions. (line 28) * gsl_sf_hyperg_1F1_e (C function): Hypergeometric Functions. (line 28) * gsl_sf_hyperg_1F1_int (C function): Hypergeometric Functions. (line 18) * gsl_sf_hyperg_1F1_int_e (C function): Hypergeometric Functions. (line 18) * gsl_sf_hyperg_2F0 (C function): Hypergeometric Functions. (line 116) * gsl_sf_hyperg_2F0_e (C function): Hypergeometric Functions. (line 116) * gsl_sf_hyperg_2F1 (C function): Hypergeometric Functions. (line 67) * gsl_sf_hyperg_2F1_conj (C function): Hypergeometric Functions. (line 81) * gsl_sf_hyperg_2F1_conj_e (C function): Hypergeometric Functions. (line 81) * gsl_sf_hyperg_2F1_conj_renorm (C function): Hypergeometric Functions. (line 104) * gsl_sf_hyperg_2F1_conj_renorm_e (C function): Hypergeometric Functions. (line 104) * gsl_sf_hyperg_2F1_e (C function): Hypergeometric Functions. (line 67) * gsl_sf_hyperg_2F1_renorm (C function): Hypergeometric Functions. 
(line 92) * gsl_sf_hyperg_2F1_renorm_e (C function): Hypergeometric Functions. (line 92) * gsl_sf_hyperg_U (C function): Hypergeometric Functions. (line 53) * gsl_sf_hyperg_U_e (C function): Hypergeometric Functions. (line 53) * gsl_sf_hyperg_U_e10_e (C function): Hypergeometric Functions. (line 60) * gsl_sf_hyperg_U_int (C function): Hypergeometric Functions. (line 38) * gsl_sf_hyperg_U_int_e (C function): Hypergeometric Functions. (line 38) * gsl_sf_hyperg_U_int_e10_e (C function): Hypergeometric Functions. (line 45) * gsl_sf_hypot (C function): Circular Trigonometric Functions. (line 16) * gsl_sf_hypot_e (C function): Circular Trigonometric Functions. (line 16) * gsl_sf_hzeta (C function): Hurwitz Zeta Function. (line 10) * gsl_sf_hzeta_e (C function): Hurwitz Zeta Function. (line 10) * gsl_sf_laguerre_1 (C function): Laguerre Functions. (line 20) * gsl_sf_laguerre_1_e (C function): Laguerre Functions. (line 20) * gsl_sf_laguerre_2 (C function): Laguerre Functions. (line 20) * gsl_sf_laguerre_2_e (C function): Laguerre Functions. (line 20) * gsl_sf_laguerre_3 (C function): Laguerre Functions. (line 20) * gsl_sf_laguerre_3_e (C function): Laguerre Functions. (line 20) * gsl_sf_laguerre_n (C function): Laguerre Functions. (line 33) * gsl_sf_laguerre_n_e (C function): Laguerre Functions. (line 33) * gsl_sf_lambert_W0 (C function): Lambert W Functions. (line 13) * gsl_sf_lambert_W0_e (C function): Lambert W Functions. (line 13) * gsl_sf_lambert_Wm1 (C function): Lambert W Functions. (line 19) * gsl_sf_lambert_Wm1_e (C function): Lambert W Functions. (line 19) * gsl_sf_legendre_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 84) * gsl_sf_legendre_array_e (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 84) * gsl_sf_legendre_array_index (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 193) * gsl_sf_legendre_array_n (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 184) * gsl_sf_legendre_array_size (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 239) * gsl_sf_legendre_deriv2_alt_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 157) * gsl_sf_legendre_deriv2_alt_array_e (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 157) * gsl_sf_legendre_deriv2_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 136) * gsl_sf_legendre_deriv2_array_e (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 136) * gsl_sf_legendre_deriv_alt_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 118) * gsl_sf_legendre_deriv_alt_array_e (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 118) * gsl_sf_legendre_deriv_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 101) * gsl_sf_legendre_deriv_array_e (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 101) * gsl_sf_legendre_H3d (C function): Radial Functions for Hyperbolic Space. (line 35) * gsl_sf_legendre_H3d_0 (C function): Radial Functions for Hyperbolic Space. (line 11) * gsl_sf_legendre_H3d_0_e (C function): Radial Functions for Hyperbolic Space. (line 11) * gsl_sf_legendre_H3d_1 (C function): Radial Functions for Hyperbolic Space. (line 23) * gsl_sf_legendre_H3d_1_e (C function): Radial Functions for Hyperbolic Space. 
(line 23) * gsl_sf_legendre_H3d_array (C function): Radial Functions for Hyperbolic Space. (line 45) * gsl_sf_legendre_H3d_e (C function): Radial Functions for Hyperbolic Space. (line 35) * gsl_sf_legendre_nlm (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 178) * gsl_sf_legendre_P1 (C function): Legendre Polynomials. (line 6) * gsl_sf_legendre_P1_e (C function): Legendre Polynomials. (line 6) * gsl_sf_legendre_P2 (C function): Legendre Polynomials. (line 6) * gsl_sf_legendre_P2_e (C function): Legendre Polynomials. (line 6) * gsl_sf_legendre_P3 (C function): Legendre Polynomials. (line 6) * gsl_sf_legendre_P3_e (C function): Legendre Polynomials. (line 6) * gsl_sf_legendre_Pl (C function): Legendre Polynomials. (line 16) * gsl_sf_legendre_Plm (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 204) * gsl_sf_legendre_Plm_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 221) * gsl_sf_legendre_Plm_deriv_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 221) * gsl_sf_legendre_Plm_e (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 204) * gsl_sf_legendre_Pl_array (C function): Legendre Polynomials. (line 24) * gsl_sf_legendre_Pl_deriv_array (C function): Legendre Polynomials. (line 24) * gsl_sf_legendre_Pl_e (C function): Legendre Polynomials. (line 16) * gsl_sf_legendre_Q0 (C function): Legendre Polynomials. (line 32) * gsl_sf_legendre_Q0_e (C function): Legendre Polynomials. (line 32) * gsl_sf_legendre_Q1 (C function): Legendre Polynomials. (line 38) * gsl_sf_legendre_Q1_e (C function): Legendre Polynomials. (line 38) * gsl_sf_legendre_Ql (C function): Legendre Polynomials. (line 44) * gsl_sf_legendre_Ql_e (C function): Legendre Polynomials. (line 44) * gsl_sf_legendre_sphPlm (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 211) * gsl_sf_legendre_sphPlm_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 230) * gsl_sf_legendre_sphPlm_deriv_array (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 230) * gsl_sf_legendre_sphPlm_e (C function): Associated Legendre Polynomials and Spherical Harmonics. (line 211) * gsl_sf_legendre_t (C type): Associated Legendre Polynomials and Spherical Harmonics. (line 66) * gsl_sf_lnbeta (C function): Beta Functions. (line 14) * gsl_sf_lnbeta_e (C function): Beta Functions. (line 14) * gsl_sf_lnchoose (C function): Factorials. (line 50) * gsl_sf_lnchoose_e (C function): Factorials. (line 50) * gsl_sf_lncosh (C function): Hyperbolic Trigonometric Functions. (line 11) * gsl_sf_lncosh_e (C function): Hyperbolic Trigonometric Functions. (line 11) * gsl_sf_lndoublefact (C function): Factorials. (line 36) * gsl_sf_lndoublefact_e (C function): Factorials. (line 36) * gsl_sf_lnfact (C function): Factorials. (line 27) * gsl_sf_lnfact_e (C function): Factorials. (line 27) * gsl_sf_lngamma (C function): Gamma Functions. (line 23) * gsl_sf_lngamma_complex_e (C function): Gamma Functions. (line 60) * gsl_sf_lngamma_e (C function): Gamma Functions. (line 23) * gsl_sf_lngamma_sgn_e (C function): Gamma Functions. (line 32) * gsl_sf_lnpoch (C function): Pochhammer Symbol. (line 16) * gsl_sf_lnpoch_e (C function): Pochhammer Symbol. (line 16) * gsl_sf_lnpoch_sgn_e (C function): Pochhammer Symbol. (line 23) * gsl_sf_lnsinh (C function): Hyperbolic Trigonometric Functions. (line 6) * gsl_sf_lnsinh_e (C function): Hyperbolic Trigonometric Functions. 
(line 6) * gsl_sf_log (C function): Logarithm and Related Functions. (line 10) * gsl_sf_log_1plusx (C function): Logarithm and Related Functions. (line 30) * gsl_sf_log_1plusx_e (C function): Logarithm and Related Functions. (line 30) * gsl_sf_log_1plusx_mx (C function): Logarithm and Related Functions. (line 36) * gsl_sf_log_1plusx_mx_e (C function): Logarithm and Related Functions. (line 36) * gsl_sf_log_abs (C function): Logarithm and Related Functions. (line 16) * gsl_sf_log_abs_e (C function): Logarithm and Related Functions. (line 16) * gsl_sf_log_e (C function): Logarithm and Related Functions. (line 10) * gsl_sf_log_erfc (C function): Log Complementary Error Function. (line 6) * gsl_sf_log_erfc_e (C function): Log Complementary Error Function. (line 6) * gsl_sf_mathieu_a (C function): Mathieu Function Characteristic Values. (line 6) * gsl_sf_mathieu_alloc (C function): Mathieu Function Workspace. (line 14) * gsl_sf_mathieu_a_array (C function): Mathieu Function Characteristic Values. (line 16) * gsl_sf_mathieu_a_e (C function): Mathieu Function Characteristic Values. (line 6) * gsl_sf_mathieu_b (C function): Mathieu Function Characteristic Values. (line 6) * gsl_sf_mathieu_b_array (C function): Mathieu Function Characteristic Values. (line 16) * gsl_sf_mathieu_b_e (C function): Mathieu Function Characteristic Values. (line 6) * gsl_sf_mathieu_ce (C function): Angular Mathieu Functions. (line 6) * gsl_sf_mathieu_ce_array (C function): Angular Mathieu Functions. (line 16) * gsl_sf_mathieu_ce_e (C function): Angular Mathieu Functions. (line 6) * gsl_sf_mathieu_free (C function): Mathieu Function Workspace. (line 22) * gsl_sf_mathieu_Mc (C function): Radial Mathieu Functions. (line 6) * gsl_sf_mathieu_Mc_array (C function): Radial Mathieu Functions. (line 22) * gsl_sf_mathieu_Mc_e (C function): Radial Mathieu Functions. (line 6) * gsl_sf_mathieu_Ms (C function): Radial Mathieu Functions. (line 6) * gsl_sf_mathieu_Ms_array (C function): Radial Mathieu Functions. (line 22) * gsl_sf_mathieu_Ms_e (C function): Radial Mathieu Functions. (line 6) * gsl_sf_mathieu_se (C function): Angular Mathieu Functions. (line 6) * gsl_sf_mathieu_se_array (C function): Angular Mathieu Functions. (line 16) * gsl_sf_mathieu_se_e (C function): Angular Mathieu Functions. (line 6) * gsl_sf_mathieu_workspace (C type): Mathieu Function Workspace. (line 10) * gsl_sf_multiply (C function): Elementary Operations. (line 10) * gsl_sf_multiply_e (C function): Elementary Operations. (line 10) * gsl_sf_multiply_err_e (C function): Elementary Operations. (line 17) * gsl_sf_poch (C function): Pochhammer Symbol. (line 6) * gsl_sf_pochrel (C function): Pochhammer Symbol. (line 31) * gsl_sf_pochrel_e (C function): Pochhammer Symbol. (line 31) * gsl_sf_poch_e (C function): Pochhammer Symbol. (line 6) * gsl_sf_polar_to_rect (C function): Conversion Functions. (line 6) * gsl_sf_pow_int (C function): Power Function. (line 10) * gsl_sf_pow_int_e (C function): Power Function. (line 10) * gsl_sf_psi (C function): Digamma Function. (line 13) * gsl_sf_psi_1 (C function): Trigamma Function. (line 12) * gsl_sf_psi_1piy (C function): Digamma Function. (line 19) * gsl_sf_psi_1piy_e (C function): Digamma Function. (line 19) * gsl_sf_psi_1_e (C function): Trigamma Function. (line 12) * gsl_sf_psi_1_int (C function): Trigamma Function. (line 6) * gsl_sf_psi_1_int_e (C function): Trigamma Function. (line 6) * gsl_sf_psi_e (C function): Digamma Function. (line 13) * gsl_sf_psi_int (C function): Digamma Function. 
(line 6) * gsl_sf_psi_int_e (C function): Digamma Function. (line 6) * gsl_sf_psi_n (C function): Polygamma Function. (line 6) * gsl_sf_psi_n_e (C function): Polygamma Function. (line 6) * gsl_sf_rect_to_polar (C function): Conversion Functions. (line 13) * gsl_sf_result (C type): The gsl_sf_result struct. (line 13) * gsl_sf_result_e10 (C type): The gsl_sf_result struct. (line 31) * gsl_sf_Shi (C function): Hyperbolic Integrals. (line 6) * gsl_sf_Shi_e (C function): Hyperbolic Integrals. (line 6) * gsl_sf_Si (C function): Trigonometric Integrals. (line 6) * gsl_sf_sin (C function): Circular Trigonometric Functions. (line 6) * gsl_sf_sinc (C function): Circular Trigonometric Functions. (line 23) * gsl_sf_sinc_e (C function): Circular Trigonometric Functions. (line 23) * gsl_sf_sin_e (C function): Circular Trigonometric Functions. (line 6) * gsl_sf_sin_err_e (C function): Trigonometric Functions With Error Estimates. (line 6) * gsl_sf_Si_e (C function): Trigonometric Integrals. (line 6) * gsl_sf_synchrotron_1 (C function): Synchrotron Functions. (line 9) * gsl_sf_synchrotron_1_e (C function): Synchrotron Functions. (line 9) * gsl_sf_synchrotron_2 (C function): Synchrotron Functions. (line 16) * gsl_sf_synchrotron_2_e (C function): Synchrotron Functions. (line 16) * gsl_sf_taylorcoeff (C function): Factorials. (line 57) * gsl_sf_taylorcoeff_e (C function): Factorials. (line 57) * gsl_sf_transport_2 (C function): Transport Functions. (line 13) * gsl_sf_transport_2_e (C function): Transport Functions. (line 13) * gsl_sf_transport_3 (C function): Transport Functions. (line 18) * gsl_sf_transport_3_e (C function): Transport Functions. (line 18) * gsl_sf_transport_4 (C function): Transport Functions. (line 23) * gsl_sf_transport_4_e (C function): Transport Functions. (line 23) * gsl_sf_transport_5 (C function): Transport Functions. (line 28) * gsl_sf_transport_5_e (C function): Transport Functions. (line 28) * gsl_sf_zeta (C function): Riemann Zeta Function. (line 16) * gsl_sf_zetam1 (C function): Riemann Zeta Function Minus One. (line 16) * gsl_sf_zetam1_e (C function): Riemann Zeta Function Minus One. (line 16) * gsl_sf_zetam1_int (C function): Riemann Zeta Function Minus One. (line 10) * gsl_sf_zetam1_int_e (C function): Riemann Zeta Function Minus One. (line 10) * gsl_sf_zeta_e (C function): Riemann Zeta Function. (line 16) * gsl_sf_zeta_int (C function): Riemann Zeta Function. (line 10) * gsl_sf_zeta_int_e (C function): Riemann Zeta Function. (line 10) * GSL_SIGN (C macro): Testing the Sign of Numbers. (line 6) * gsl_siman_copy_construct_t (C type): Simulated Annealing functions. (line 93) * gsl_siman_copy_t (C type): Simulated Annealing functions. (line 86) * gsl_siman_destroy_t (C type): Simulated Annealing functions. (line 100) * gsl_siman_Efunc_t (C type): Simulated Annealing functions. (line 56) * gsl_siman_metric_t (C type): Simulated Annealing functions. (line 72) * gsl_siman_params_t (C type): Simulated Annealing functions. (line 107) * gsl_siman_print_t (C type): Simulated Annealing functions. (line 79) * gsl_siman_solve (C function): Simulated Annealing functions. (line 6) * gsl_siman_step_t (C type): Simulated Annealing functions. (line 63) * gsl_sort (C function): Sorting vectors. (line 22) * gsl_sort2 (C function): Sorting vectors. (line 29) * gsl_sort_index (C function): Sorting vectors. (line 49) * gsl_sort_largest (C function): Selecting the k smallest or largest elements. (line 26) * gsl_sort_largest_index (C function): Selecting the k smallest or largest elements. 
(line 58) * gsl_sort_smallest (C function): Selecting the k smallest or largest elements. (line 16) * gsl_sort_smallest_index (C function): Selecting the k smallest or largest elements. (line 48) * gsl_sort_vector (C function): Sorting vectors. (line 38) * gsl_sort_vector2 (C function): Sorting vectors. (line 43) * gsl_sort_vector_index (C function): Sorting vectors. (line 61) * gsl_sort_vector_largest (C function): Selecting the k smallest or largest elements. (line 35) * gsl_sort_vector_largest_index (C function): Selecting the k smallest or largest elements. (line 69) * gsl_sort_vector_smallest (C function): Selecting the k smallest or largest elements. (line 35) * gsl_sort_vector_smallest_index (C function): Selecting the k smallest or largest elements. (line 69) * gsl_spblas_dgemm (C function): Sparse BLAS operations. (line 17) * gsl_spblas_dgemv (C function): Sparse BLAS operations. (line 6) * gsl_splinalg_itersolve_alloc (C function): Iterating the Sparse Linear System. (line 9) * gsl_splinalg_itersolve_free (C function): Iterating the Sparse Linear System. (line 20) * gsl_splinalg_itersolve_iterate (C function): Iterating the Sparse Linear System. (line 31) * gsl_splinalg_itersolve_name (C function): Iterating the Sparse Linear System. (line 26) * gsl_splinalg_itersolve_normr (C function): Iterating the Sparse Linear System. (line 53) * gsl_splinalg_itersolve_type (C type): Types of Sparse Iterative Solvers. (line 9) * gsl_splinalg_itersolve_type.gsl_splinalg_itersolve_gmres (C var): Types of Sparse Iterative Solvers. (line 11) * gsl_spline (C type): 1D Higher-level Interface. (line 14) * gsl_spline2d (C type): 2D Higher-level Interface. (line 14) * gsl_spline2d_alloc (C function): 2D Higher-level Interface. (line 19) * gsl_spline2d_eval (C function): 2D Higher-level Interface. (line 33) * gsl_spline2d_eval_deriv_x (C function): 2D Higher-level Interface. (line 47) * gsl_spline2d_eval_deriv_xx (C function): 2D Higher-level Interface. (line 61) * gsl_spline2d_eval_deriv_xx_e (C function): 2D Higher-level Interface. (line 61) * gsl_spline2d_eval_deriv_xy (C function): 2D Higher-level Interface. (line 75) * gsl_spline2d_eval_deriv_xy_e (C function): 2D Higher-level Interface. (line 75) * gsl_spline2d_eval_deriv_x_e (C function): 2D Higher-level Interface. (line 47) * gsl_spline2d_eval_deriv_y (C function): 2D Higher-level Interface. (line 54) * gsl_spline2d_eval_deriv_yy (C function): 2D Higher-level Interface. (line 68) * gsl_spline2d_eval_deriv_yy_e (C function): 2D Higher-level Interface. (line 68) * gsl_spline2d_eval_deriv_y_e (C function): 2D Higher-level Interface. (line 54) * gsl_spline2d_eval_e (C function): 2D Higher-level Interface. (line 33) * gsl_spline2d_eval_extrap (C function): 2D Higher-level Interface. (line 40) * gsl_spline2d_eval_extrap_e (C function): 2D Higher-level Interface. (line 40) * gsl_spline2d_free (C function): 2D Higher-level Interface. (line 26) * gsl_spline2d_get (C function): 2D Higher-level Interface. (line 85) * gsl_spline2d_init (C function): 2D Higher-level Interface. (line 22) * gsl_spline2d_min_size (C function): 2D Higher-level Interface. (line 30) * gsl_spline2d_name (C function): 2D Higher-level Interface. (line 28) * gsl_spline2d_set (C function): 2D Higher-level Interface. (line 82) * gsl_spline_alloc (C function): 1D Higher-level Interface. (line 19) * gsl_spline_eval (C function): 1D Higher-level Interface. (line 32) * gsl_spline_eval_deriv (C function): 1D Higher-level Interface. 
(line 37) * gsl_spline_eval_deriv2 (C function): 1D Higher-level Interface. (line 42) * gsl_spline_eval_deriv2_e (C function): 1D Higher-level Interface. (line 42) * gsl_spline_eval_deriv_e (C function): 1D Higher-level Interface. (line 37) * gsl_spline_eval_e (C function): 1D Higher-level Interface. (line 32) * gsl_spline_eval_integ (C function): 1D Higher-level Interface. (line 47) * gsl_spline_eval_integ_e (C function): 1D Higher-level Interface. (line 47) * gsl_spline_free (C function): 1D Higher-level Interface. (line 25) * gsl_spline_init (C function): 1D Higher-level Interface. (line 22) * gsl_spline_min_size (C function): 1D Higher-level Interface. (line 29) * gsl_spline_name (C function): 1D Higher-level Interface. (line 27) * gsl_spmatrix (C type): Overview<8>. (line 10) * gsl_spmatrix_add (C function): Matrix Operations. (line 45) * gsl_spmatrix_alloc (C function): Allocation. (line 12) * gsl_spmatrix_alloc_nzmax (C function): Allocation. (line 28) * gsl_spmatrix_alloc_nzmax.GSL_SPMATRIX_COO (C macro): Allocation. (line 46) * gsl_spmatrix_alloc_nzmax.GSL_SPMATRIX_CSC (C macro): Allocation. (line 50) * gsl_spmatrix_alloc_nzmax.GSL_SPMATRIX_CSR (C macro): Allocation. (line 54) * gsl_spmatrix_compress (C function): Compressed Format. (line 29) * gsl_spmatrix_csc (C function): Compressed Format. (line 9) * gsl_spmatrix_csr (C function): Compressed Format. (line 19) * gsl_spmatrix_d2sp (C function): Conversion Between Sparse and Dense Matrices. (line 10) * gsl_spmatrix_dense_add (C function): Matrix Operations. (line 53) * gsl_spmatrix_dense_sub (C function): Matrix Operations. (line 65) * gsl_spmatrix_equal (C function): Matrix Properties. (line 28) * gsl_spmatrix_fprintf (C function): Reading and Writing Matrices. (line 32) * gsl_spmatrix_fread (C function): Reading and Writing Matrices. (line 18) * gsl_spmatrix_free (C function): Allocation. (line 73) * gsl_spmatrix_fscanf (C function): Reading and Writing Matrices. (line 46) * gsl_spmatrix_fwrite (C function): Reading and Writing Matrices. (line 6) * gsl_spmatrix_get (C function): Accessing Matrix Elements. (line 6) * gsl_spmatrix_memcpy (C function): Copying Matrices. (line 6) * gsl_spmatrix_minmax (C function): Finding Maximum and Minimum Elements. (line 6) * gsl_spmatrix_min_index (C function): Finding Maximum and Minimum Elements. (line 16) * gsl_spmatrix_nnz (C function): Matrix Properties. (line 20) * gsl_spmatrix_norm1 (C function): Matrix Properties. (line 39) * gsl_spmatrix_ptr (C function): Accessing Matrix Elements. (line 23) * gsl_spmatrix_realloc (C function): Allocation. (line 61) * gsl_spmatrix_scale (C function): Matrix Operations. (line 6) * gsl_spmatrix_scale_columns (C function): Matrix Operations. (line 15) * gsl_spmatrix_scale_rows (C function): Matrix Operations. (line 30) * gsl_spmatrix_set (C function): Accessing Matrix Elements. (line 15) * gsl_spmatrix_set_zero (C function): Initializing Matrix Elements. (line 11) * gsl_spmatrix_sp2d (C function): Conversion Between Sparse and Dense Matrices. (line 18) * gsl_spmatrix_transpose (C function): Exchanging Rows and Columns. (line 17) * gsl_spmatrix_transpose_memcpy (C function): Exchanging Rows and Columns. (line 6) * gsl_spmatrix_type (C function): Matrix Properties. (line 6) * gsl_stats_absdev (C function): Absolute deviation. (line 6) * gsl_stats_absdev_m (C function): Absolute deviation. (line 22) * gsl_stats_correlation (C function): Correlation. (line 6) * gsl_stats_covariance (C function): Covariance. 
(line 6) * gsl_stats_covariance_m (C function): Covariance. (line 16) * gsl_stats_gastwirth_from_sorted_data (C function): Gastwirth Estimator. (line 15) * gsl_stats_kurtosis (C function): Higher moments skewness and kurtosis. (line 35) * gsl_stats_kurtosis_m_sd (C function): Higher moments skewness and kurtosis. (line 48) * gsl_stats_lag1_autocorrelation (C function): Autocorrelation. (line 6) * gsl_stats_lag1_autocorrelation_m (C function): Autocorrelation. (line 16) * gsl_stats_mad (C function): Median Absolute Deviation MAD. (line 20) * gsl_stats_mad0 (C function): Median Absolute Deviation MAD. (line 17) * gsl_stats_max (C function): Maximum and Minimum values. (line 12) * gsl_stats_max_index (C function): Maximum and Minimum values. (line 42) * gsl_stats_mean (C function): Mean Standard Deviation and Variance. (line 6) * gsl_stats_median (C function): Median and Percentiles. (line 29) * gsl_stats_median_from_sorted_data (C function): Median and Percentiles. (line 13) * gsl_stats_min (C function): Maximum and Minimum values. (line 24) * gsl_stats_minmax (C function): Maximum and Minimum values. (line 36) * gsl_stats_minmax_index (C function): Maximum and Minimum values. (line 60) * gsl_stats_min_index (C function): Maximum and Minimum values. (line 51) * gsl_stats_Qn0_from_sorted_data (C function): Q_n Statistic. (line 19) * gsl_stats_Qn_from_sorted_data (C function): Q_n Statistic. (line 19) * gsl_stats_quantile_from_sorted_data (C function): Median and Percentiles. (line 38) * gsl_stats_sd (C function): Mean Standard Deviation and Variance. (line 50) * gsl_stats_sd_m (C function): Mean Standard Deviation and Variance. (line 50) * gsl_stats_sd_with_fixed_mean (C function): Mean Standard Deviation and Variance. (line 83) * gsl_stats_select (C function): Order Statistics. (line 14) * gsl_stats_skew (C function): Higher moments skewness and kurtosis. (line 6) * gsl_stats_skew_m_sd (C function): Higher moments skewness and kurtosis. (line 22) * gsl_stats_Sn0_from_sorted_data (C function): S_n Statistic. (line 17) * gsl_stats_Sn_from_sorted_data (C function): S_n Statistic. (line 21) * gsl_stats_spearman (C function): Correlation. (line 20) * gsl_stats_trmean_from_sorted_data (C function): Trimmed Mean. (line 14) * gsl_stats_tss (C function): Mean Standard Deviation and Variance. (line 59) * gsl_stats_tss_m (C function): Mean Standard Deviation and Variance. (line 59) * gsl_stats_variance (C function): Mean Standard Deviation and Variance. (line 20) * gsl_stats_variance_m (C function): Mean Standard Deviation and Variance. (line 40) * gsl_stats_variance_with_fixed_mean (C function): Mean Standard Deviation and Variance. (line 72) * gsl_stats_wabsdev (C function): Weighted Samples. (line 96) * gsl_stats_wabsdev_m (C function): Weighted Samples. (line 105) * gsl_stats_wkurtosis (C function): Weighted Samples. (line 129) * gsl_stats_wkurtosis_m_sd (C function): Weighted Samples. (line 137) * gsl_stats_wmean (C function): Weighted Samples. (line 14) * gsl_stats_wsd (C function): Weighted Samples. (line 48) * gsl_stats_wsd_m (C function): Weighted Samples. (line 56) * gsl_stats_wsd_with_fixed_mean (C function): Weighted Samples. (line 74) * gsl_stats_wskew (C function): Weighted Samples. (line 113) * gsl_stats_wskew_m_sd (C function): Weighted Samples. (line 121) * gsl_stats_wtss (C function): Weighted Samples. (line 82) * gsl_stats_wtss_m (C function): Weighted Samples. (line 82) * gsl_stats_wvariance (C function): Weighted Samples. 
(line 24) * gsl_stats_wvariance_m (C function): Weighted Samples. (line 40) * gsl_stats_wvariance_with_fixed_mean (C function): Weighted Samples. (line 62) * gsl_strerror (C function): Error Codes. (line 41) * gsl_sum_levin_utrunc_accel (C function): Acceleration functions without error estimation. (line 39) * gsl_sum_levin_utrunc_alloc (C function): Acceleration functions without error estimation. (line 26) * gsl_sum_levin_utrunc_free (C function): Acceleration functions without error estimation. (line 33) * gsl_sum_levin_utrunc_workspace (C type): Acceleration functions without error estimation. (line 22) * gsl_sum_levin_u_accel (C function): Acceleration functions. (line 38) * gsl_sum_levin_u_alloc (C function): Acceleration functions. (line 27) * gsl_sum_levin_u_free (C function): Acceleration functions. (line 33) * gsl_sum_levin_u_workspace (C type): Acceleration functions. (line 23) * gsl_vector (C type): Vectors. (line 17) * gsl_vector_add (C function): Vector operations. (line 6) * gsl_vector_add_constant (C function): Vector operations. (line 40) * gsl_vector_alloc (C function): Vector allocation. (line 14) * gsl_vector_axpby (C function): Vector operations. (line 52) * gsl_vector_calloc (C function): Vector allocation. (line 23) * gsl_vector_complex_const_imag (C function): Vector views. (line 115) * gsl_vector_complex_const_real (C function): Vector views. (line 103) * gsl_vector_complex_imag (C function): Vector views. (line 115) * gsl_vector_complex_real (C function): Vector views. (line 103) * gsl_vector_const_ptr (C function): Accessing vector elements. (line 66) * gsl_vector_const_subvector (C function): Vector views. (line 31) * gsl_vector_const_subvector_with_stride (C function): Vector views. (line 63) * gsl_vector_const_view (C type): Vector views. (line 12) * gsl_vector_const_view_array (C function): Vector views. (line 127) * gsl_vector_const_view_array_with_stride (C function): Vector views. (line 152) * gsl_vector_div (C function): Vector operations. (line 27) * gsl_vector_equal (C function): Vector properties. (line 19) * gsl_vector_fprintf (C function): Reading and writing vectors. (line 28) * gsl_vector_fread (C function): Reading and writing vectors. (line 17) * gsl_vector_free (C function): Vector allocation. (line 28) * gsl_vector_fscanf (C function): Reading and writing vectors. (line 38) * gsl_vector_fwrite (C function): Reading and writing vectors. (line 9) * gsl_vector_get (C function): Accessing vector elements. (line 48) * gsl_vector_isneg (C function): Vector properties. (line 10) * gsl_vector_isnonneg (C function): Vector properties. (line 10) * gsl_vector_isnull (C function): Vector properties. (line 10) * gsl_vector_ispos (C function): Vector properties. (line 10) * gsl_vector_max (C function): Finding maximum and minimum elements of vectors. (line 8) * gsl_vector_max_index (C function): Finding maximum and minimum elements of vectors. (line 23) * gsl_vector_memcpy (C function): Copying vectors. (line 12) * gsl_vector_min (C function): Finding maximum and minimum elements of vectors. (line 12) * gsl_vector_minmax (C function): Finding maximum and minimum elements of vectors. (line 16) * gsl_vector_minmax_index (C function): Finding maximum and minimum elements of vectors. (line 35) * gsl_vector_min_index (C function): Finding maximum and minimum elements of vectors. (line 29) * gsl_vector_mul (C function): Vector operations. (line 20) * gsl_vector_ptr (C function): Accessing vector elements. 
(line 66) * gsl_vector_reverse (C function): Exchanging elements. (line 15) * gsl_vector_scale (C function): Vector operations. (line 34) * gsl_vector_set (C function): Accessing vector elements. (line 57) * gsl_vector_set_all (C function): Initializing vector elements. (line 6) * gsl_vector_set_basis (C function): Initializing vector elements. (line 16) * gsl_vector_set_zero (C function): Initializing vector elements. (line 11) * gsl_vector_sub (C function): Vector operations. (line 13) * gsl_vector_subvector (C function): Vector views. (line 31) * gsl_vector_subvector_with_stride (C function): Vector views. (line 63) * gsl_vector_sum (C function): Vector operations. (line 47) * gsl_vector_swap (C function): Copying vectors. (line 19) * gsl_vector_swap_elements (C function): Exchanging elements. (line 9) * gsl_vector_view (C type): Vector views. (line 12) * gsl_vector_view_array (C function): Vector views. (line 127) * gsl_vector_view_array_with_stride (C function): Vector views. (line 152) * gsl_wavelet (C type): Initialization. (line 6) * gsl_wavelet2d_nstransform (C function): Wavelet transforms in two dimension. (line 67) * gsl_wavelet2d_nstransform_forward (C function): Wavelet transforms in two dimension. (line 67) * gsl_wavelet2d_nstransform_inverse (C function): Wavelet transforms in two dimension. (line 67) * gsl_wavelet2d_nstransform_matrix (C function): Wavelet transforms in two dimension. (line 80) * gsl_wavelet2d_nstransform_matrix_forward (C function): Wavelet transforms in two dimension. (line 80) * gsl_wavelet2d_nstransform_matrix_inverse (C function): Wavelet transforms in two dimension. (line 80) * gsl_wavelet2d_transform (C function): Wavelet transforms in two dimension. (line 29) * gsl_wavelet2d_transform_forward (C function): Wavelet transforms in two dimension. (line 29) * gsl_wavelet2d_transform_inverse (C function): Wavelet transforms in two dimension. (line 29) * gsl_wavelet2d_transform_matrix (C function): Wavelet transforms in two dimension. (line 56) * gsl_wavelet2d_transform_matrix_forward (C function): Wavelet transforms in two dimension. (line 56) * gsl_wavelet2d_transform_matrix_inverse (C function): Wavelet transforms in two dimension. (line 56) * gsl_wavelet_alloc (C function): Initialization. (line 11) * gsl_wavelet_free (C function): Initialization. (line 57) * gsl_wavelet_name (C function): Initialization. (line 52) * gsl_wavelet_transform (C function): Wavelet transforms in one dimension. (line 6) * gsl_wavelet_transform_forward (C function): Wavelet transforms in one dimension. (line 6) * gsl_wavelet_transform_inverse (C function): Wavelet transforms in one dimension. (line 6) * gsl_wavelet_type (C type): Initialization. (line 22) * gsl_wavelet_type.gsl_wavelet_bspline (C var): Initialization. (line 39) * gsl_wavelet_type.gsl_wavelet_bspline_centered (C var): Initialization. (line 39) * gsl_wavelet_type.gsl_wavelet_daubechies (C var): Initialization. (line 24) * gsl_wavelet_type.gsl_wavelet_daubechies_centered (C var): Initialization. (line 24) * gsl_wavelet_type.gsl_wavelet_haar (C var): Initialization. (line 32) * gsl_wavelet_type.gsl_wavelet_haar_centered (C var): Initialization. (line 32) * gsl_wavelet_workspace (C type): Initialization. (line 61) * gsl_wavelet_workspace_alloc (C function): Initialization. (line 66) * gsl_wavelet_workspace_free (C function): Initialization. (line 77) * Gumbel distribution (Type 1): The Type-1 Gumbel Distribution. (line 6) * Gumbel distribution (Type 2): The Type-2 Gumbel Distribution. 
(line 6) * Haar wavelets: Initialization. (line 32) * Hankel transforms, discrete: References and Further Reading<27>. (line 54) * HAVE_INLINE: ANSI C Compliance. (line 25) * hazard function, normal distribution: Probability functions. (line 21) * HBOOK: References and Further Reading<19>. (line 6) * header files, including: An Example Program. (line 28) * heapsort: Examples<6>. (line 111) * HEMM, Level-3 BLAS: Level 3. (line 50) * HEMV, Level-2 BLAS: Level 2. (line 85) * HER, Level-2 BLAS: Level 2. (line 140) * HER2, Level-2 BLAS: Level 2. (line 168) * HER2K, Level-3 BLAS: Level 3. (line 187) * HERK, Level-3 BLAS: Level 3. (line 144) * Hermite functions: Derivatives of Hermite Polynomials. (line 59) * Hermite functions, derivatives: Derivatives of Hermite Functions. (line 6) * Hermite functions, zeros: Zeros of Hermite Polynomials and Hermite Functions. (line 6) * Hermite polynomials: Hermite Polynomials and Functions. (line 11) * Hermite polynomials, derivatives: Derivatives of Hermite Polynomials. (line 6) * Hermite polynomials, zeros: Zeros of Hermite Polynomials and Hermite Functions. (line 6) * hermitian matrix, complex, eigensystem: Complex Hermitian Matrices. (line 9) * Hessenberg decomposition: Tridiagonal Decomposition of Hermitian Matrices. (line 44) * Hessenberg triangular decomposition: Hessenberg Decomposition of Real Matrices. (line 55) * histogram statistics: Searching histogram ranges. (line 23) * histogram, from ntuple: Histogramming ntuple values. (line 41) * histograms: References and Further Reading<18>. (line 19) * histograms, random sampling from: Resampling from histograms. (line 17) * Householder linear solver: Householder Transformations. (line 59) * Householder matrix: Givens Rotations. (line 36) * Householder transformation: Givens Rotations. (line 36) * how to report: No Warranty. (line 12) * Hurwitz Zeta Function: Hurwitz Zeta Function. (line 6) * HYBRID algorithm, unscaled without derivatives: Algorithms without Derivatives. (line 28) * HYBRID algorithms for nonlinear systems: Algorithms using Derivatives. (line 18) * HYBRIDJ algorithm: Algorithms using Derivatives. (line 76) * HYBRIDS algorithm, scaled without derivatives: Algorithms without Derivatives. (line 18) * HYBRIDSJ algorithm: Algorithms using Derivatives. (line 18) * hydrogen atom: Coulomb Functions. (line 6) * hyperbolic cosine, inverse: Elementary Functions. (line 36) * hyperbolic functions, complex numbers: Inverse Complex Trigonometric Functions. (line 65) * hyperbolic integrals: Hyperbolic Integrals. (line 6) * hyperbolic sine, inverse: Elementary Functions. (line 41) * hyperbolic space: Legendre Functions and Spherical Harmonics. (line 6) * hyperbolic tangent, inverse: Elementary Functions. (line 46) * hypergeometric functions: Hypergeometric Functions. (line 6) * hypergeometric random variates: The Hypergeometric Distribution. (line 6) * hypot: Elementary Functions. (line 24) * hypot function, special functions: Circular Trigonometric Functions. (line 16) * I(x), Bessel Functions: Regular Modified Cylindrical Bessel Functions. (line 6) * i(x), Bessel Functions: Regular Modified Spherical Bessel Functions. (line 6) * identity matrix: Accessing matrix elements. (line 48) * identity permutation: Permutation allocation. (line 24) * IEEE exceptions: Representation of floating point numbers. (line 135) * IEEE floating point: References and Further Reading<39>. (line 18) * IEEE format for floating point numbers: IEEE floating-point arithmetic. 
(line 11) * IEEE infinity, defined as a macro: Mathematical Constants. (line 59) * IEEE NaN, defined as a macro: Infinities and Not-a-number. (line 16) * illumination, units of: Viscosity. (line 13) * imperial units: Measurement of Time. (line 22) * Implicit Euler method: Stepping Functions. (line 131) * Implicit Runge-Kutta method: Stepping Functions. (line 140) * importance sampling, VEGAS: MISER. (line 160) * including GSL header files: An Example Program. (line 29) * incomplete Beta function, normalized: Incomplete Beta Function. (line 6) * incomplete Fermi-Dirac integral: Incomplete Fermi-Dirac Integrals. (line 6) * incomplete Gamma function: Incomplete Gamma Functions. (line 14) * indirect sorting: Sorting objects. (line 58) * indirect sorting, of vector elements: Sorting vectors. (line 49) * infinity, defined as a macro: Mathematical Constants. (line 60) * infinity, IEEE format: Representation of floating point numbers. (line 30) * info-gsl mailing list: GSL is Free Software. (line 45) * initial value problems, differential equations: References and Further Reading<21>. (line 11) * initializing matrices: Accessing matrix elements. (line 48) * initializing vectors: Accessing vector elements. (line 75) * inline functions: ANSI C Compliance. (line 25) * integer powers: Power Function. (line 6) * integrals, exponential: Exponential Integrals. (line 6) * integration, numerical (quadrature): References and Further Reading<11>. (line 67) * interpolating quadrature: Fixed point quadratures. (line 6) * interpolation: References and Further Reading<22>. (line 47) * interpolation, using Chebyshev polynomials: References and Further Reading<24>. (line 14) * inverse complex trigonometric functions: Complex Trigonometric Functions. (line 35) * inverse cumulative distribution functions: References. (line 14) * inverse hyperbolic cosine: Elementary Functions. (line 36) * inverse hyperbolic functions, complex numbers: Complex Hyperbolic Functions. (line 35) * inverse hyperbolic sine: Elementary Functions. (line 41) * inverse hyperbolic tangent: Elementary Functions. (line 46) * inverse of a matrix, by LU decomposition: LU Decomposition. (line 77) * inverting a permutation: Permutation functions. (line 11) * Irregular Cylindrical Bessel Functions: Irregular Cylindrical Bessel Functions. (line 6) * Irregular Modified Bessel Functions, Fractional Order: Irregular Modified Bessel Functions—Fractional Order. (line 6) * Irregular Modified Cylindrical Bessel Functions: Irregular Modified Cylindrical Bessel Functions. (line 6) * Irregular Modified Spherical Bessel Functions: Irregular Modified Spherical Bessel Functions. (line 6) * Irregular Spherical Bessel Functions: Irregular Spherical Bessel Functions. (line 6) * iterating through combinations: Combination functions. (line 6) * iterating through multisets: Multiset functions. (line 6) * iterating through permutations: Permutation functions. (line 17) * iterative refinement of solutions in linear systems: LU Decomposition. (line 64) * J(x), Bessel Functions: Regular Cylindrical Bessel Functions. (line 6) * j(x), Bessel Functions: Regular Spherical Bessel Functions. (line 6) * Jacobi elliptic functions: Elliptic Functions Jacobi. (line 6) * Jacobi orthogonalization: Singular Value Decomposition. (line 61) * Jacobian matrix, ODEs: Defining the ODE System. (line 42) * Jacobian matrix, root finding: Overview<3>. (line 41) * K(x), Bessel Functions: Irregular Modified Cylindrical Bessel Functions. 
(line 6) * k(x), Bessel Functions: Irregular Modified Spherical Bessel Functions. (line 6) * knots, basis splines: Initializing the B-splines solver. (line 24) * kurtosis: Absolute deviation. (line 34) * Laguerre functions: Laguerre Functions. (line 6) * Lambert function: Lambert W Functions. (line 6) * Landau distribution: The Landau Distribution. (line 6) * LAPACK: References and Further Reading<10>. (line 18) * Laplace distribution: The Laplace Distribution. (line 6) * large dense linear least squares: Robust linear regression. (line 335) * large linear least squares, normal equations: Large dense linear systems. (line 46) * large linear least squares, routines: Large Dense Linear Systems Solution Steps. (line 30) * large linear least squares, steps: Tall Skinny QR TSQR Approach. (line 38) * large linear least squares, TSQR: Normal Equations Approach. (line 30) * ldexp: Elementary Functions. (line 51) * LDL decomposition: Modified Cholesky Decomposition. (line 69) * LDLT decomposition: Modified Cholesky Decomposition. (line 69) * LDLT decomposition, banded: Banded Cholesky Decomposition. (line 98) * LD_LIBRARY_PATH: Linking with an alternative BLAS library. (line 25) * leading dimension, matrices: Example programs for vectors. (line 100) * least squares fit: References and Further Reading<32>. (line 24) * least squares troubleshooting: Large Dense Linear Least Squares Routines. (line 204) * least squares, covariance of best-fit parameters: High Level Driver. (line 40) * least squares, nonlinear: References and Further Reading<33>. (line 55) * least squares, regularized: Multi-parameter regression. (line 182) * least squares, robust: Regularized regression. (line 411) * Legendre forms of elliptic integrals: Definition of Legendre Forms. (line 6) * Legendre functions: Legendre Functions and Spherical Harmonics. (line 6) * Legendre polynomials: Legendre Functions and Spherical Harmonics. (line 6) * length, computed accurately using hypot: Elementary Functions. (line 24) * length, computed accurately using hypot3: Elementary Functions. (line 30) * Levenberg-Marquardt algorithm: Solving the Trust Region Subproblem TRS. (line 29) * Levenberg-Marquardt algorithm, geodesic acceleration: Levenberg-Marquardt. (line 34) * Levin u-transform: References and Further Reading<25>. (line 12) * Levy distribution: The Levy alpha-Stable Distributions. (line 6) * Levy distribution, skew: The Levy skew alpha-Stable Distribution. (line 6) * libraries, linking with: Compiling and Linking. (line 24) * libraries, shared: Linking with an alternative BLAS library. (line 26) * license of GSL: Top. (line 12) * light, units of: Viscosity. (line 14) * linear algebra: References and Further Reading<8>. (line 33) * linear algebra, BLAS: References and Further Reading<7>. (line 15) * linear algebra, sparse: References and Further Reading<37>. (line 13) * linear interpolation: 1D Interpolation Types. (line 10) * linear least squares, large: Robust linear regression. (line 334) * linear regression: Overview<5>. (line 60) * linear systems: LU Decomposition. (line 43) * linear systems, refinement of solutions: LU Decomposition. (line 64) * linking with GSL libraries: Compiling and Linking. (line 24) * location estimation: Order Statistics. (line 23) * log1p: Elementary Functions. (line 12) * logarithm and related functions: Logarithm and Related Functions. (line 6) * logarithm of Beta function: Beta Functions. (line 14) * logarithm of combinatorial factor C(m: Factorials. 
(line 50) * logarithm of complex number: Elementary Complex Functions. (line 33) * logarithm of cosh function, special functions: Hyperbolic Trigonometric Functions. (line 11) * logarithm of double factorial: Factorials. (line 36) * logarithm of factorial: Factorials. (line 27) * logarithm of Gamma function: Gamma Functions. (line 23) * logarithm of Pochhammer symbol: Pochhammer Symbol. (line 16) * logarithm of sinh function: Hyperbolic Trigonometric Functions. (line 6) * logarithm of the determinant of a matrix: LU Decomposition. (line 120) * logarithm, computed accurately near 1: Elementary Functions. (line 12) * Logarithmic random variates: The Hypergeometric Distribution. (line 39) * Logistic distribution: The Logistic Distribution. (line 6) * Lognormal distribution: The Lognormal Distribution. (line 6) * low discrepancy sequences: Acknowledgements. (line 12) * Low-level CBLAS: Autoconf Macros. (line 107) * LQ decomposition: QR Decomposition with Column Pivoting. (line 167) * LU decomposition: Linear Algebra. (line 15) * LU decomposition, banded: Symmetric Banded Format. (line 36) * macros for mathematical constants: Mathematical Functions. (line 14) * magnitude of complex number: Properties of complex numbers. (line 11) * mailing list: No Warranty. (line 12) * mailing list archives: Reporting Bugs. (line 31) * mailing list for GSL announcements: GSL is Free Software. (line 46) * mantissa, IEEE format: IEEE floating-point arithmetic. (line 11) * mass, units of: Volume Area and Length. (line 42) * mathematical constants, defined as macros: Mathematical Functions. (line 14) * mathematical functions, elementary: Examples. (line 38) * Mathieu Function Characteristic Values: Mathieu Function Characteristic Values. (line 6) * Mathieu functions: Mathieu Functions. (line 6) * matrices: Example programs for vectors. (line 100) * matrices, banded: Triangular Systems. (line 43) * matrices, initializing: Accessing matrix elements. (line 48) * matrices, range-checking: Matrix allocation. (line 40) * matrices, sparse: References and Further Reading<35>. (line 22) * matrix determinant: LU Decomposition. (line 111) * matrix diagonal: Creating row and column views. (line 70) * matrix factorization: References and Further Reading<8>. (line 33) * matrix inverse: LU Decomposition. (line 77) * matrix square root, Cholesky decomposition: Singular Value Decomposition. (line 96) * matrix subdiagonal: Creating row and column views. (line 84) * matrix superdiagonal: Creating row and column views. (line 98) * matrix, constant: Accessing matrix elements. (line 47) * matrix, identity: Accessing matrix elements. (line 48) * matrix, operations: References and Further Reading<7>. (line 15) * matrix, zero: Accessing matrix elements. (line 48) * max: References and Further Reading<14>. (line 51) * maximal phase, Daubechies wavelets: Initialization. (line 24) * maximization, see minimization: References and Further Reading<29>. (line 17) * maximum of two numbers: Maximum and Minimum functions. (line 10) * maximum value, from histogram: Searching histogram ranges. (line 23) * mean: References and Further Reading<14>. (line 52) * mean value, from histogram: Histogram Statistics. (line 28) * mean, trimmed: Robust Location Estimates. (line 16) * mean, truncated: Robust Location Estimates. (line 16) * median absolute deviation: Robust Scale Estimates. (line 12) * Mills’ ratio, inverse: Probability functions. (line 21) * min: References and Further Reading<14>. 
(line 52) * minimization, BFGS algorithm: Algorithms with Derivatives. (line 45) * minimization, caveats: Overview<2>. (line 45) * minimization, conjugate gradient algorithm: Algorithms with Derivatives. (line 15) * minimization, multidimensional: References and Further Reading<31>. (line 29) * minimization, one-dimensional: References and Further Reading<29>. (line 17) * minimization, overview: One Dimensional Minimization. (line 20) * minimization, Polak-Ribiere algorithm: Algorithms with Derivatives. (line 35) * minimization, providing a function to minimize: Initializing the Minimizer. (line 60) * minimization, simplex algorithm: Algorithms without Derivatives<2>. (line 14) * minimization, steepest descent algorithm: Algorithms with Derivatives. (line 69) * minimization, stopping parameters: Iteration<2>. (line 56) * minimum finding, Brent’s method: Minimization Algorithms. (line 34) * minimum finding, golden section algorithm: Minimization Algorithms. (line 15) * minimum of two numbers: Maximum and Minimum functions. (line 15) * minimum value, from histogram: Searching histogram ranges. (line 22) * MINPACK, minimization algorithms: Algorithms using Derivatives. (line 18) * MISCFUN: References and Further Reading<3>. (line 15) * MISER monte carlo integration: PLAIN Monte Carlo. (line 66) * Mixed-radix FFT, complex data: Radix-2 FFT routines for complex data. (line 119) * Mixed-radix FFT, real data: Radix-2 FFT routines for real data. (line 97) * Modified Bessel Functions, Fractional Order: Regular Modified Bessel Functions—Fractional Order. (line 6) * Modified Cholesky Decomposition: Pivoted Cholesky Decomposition. (line 110) * Modified Clenshaw-Curtis quadrature: Integrands with weight functions. (line 6) * Modified Cylindrical Bessel Functions: Regular Modified Cylindrical Bessel Functions. (line 6) * Modified Givens Rotation, BLAS: Level 1. (line 152) * Modified Newton’s method for nonlinear systems: Algorithms using Derivatives. (line 105) * Modified Spherical Bessel Functions: Regular Modified Spherical Bessel Functions. (line 6) * Monte Carlo integration: References and Further Reading<19>. (line 9) * moving maximum: Moving Variance and Standard Deviation. (line 32) * moving mean: Allocation for Moving Window Statistics. (line 30) * moving median: Moving Sum. (line 20) * moving median absolute deviation: Robust Scale Estimation. (line 17) * moving minimum: Moving Variance and Standard Deviation. (line 32) * moving quantile range: Moving MAD. (line 31) * moving standard deviation: Moving Mean. (line 25) * moving sum: Moving Minimum and Maximum. (line 40) * moving variance: Moving Mean. (line 25) * moving window accumulators: Moving Q_n. (line 20) * moving window statistics: References and Further Reading<16>. (line 14) * moving window, allocation: Handling Endpoints. (line 40) * MRG, multiple recursive random number generator: Random number generator algorithms. (line 126) * MT19937 random number generator: Random number generator algorithms. (line 19) * multi-parameter regression: Linear regression without a constant term. (line 51) * multidimensional integration: References and Further Reading<19>. (line 8) * multidimensional root finding, Broyden algorithm: Algorithms without Derivatives. (line 57) * multidimensional root finding, overview: Multidimensional Root-Finding. (line 20) * multidimensional root finding, providing a function to solve: Initializing the Solver<2>. (line 83) * Multimin, caveats: Overview<4>. 
(line 47) * Multinomial distribution: The Multinomial Distribution. (line 6) * multiplication: Elementary Operations. (line 6) * multisets: References and Further Reading<6>. (line 11) * multistep methods, ODEs: Stepping Functions. (line 164) * n): Factorials. (line 43) * n) <1>: Factorials. (line 50) * N-dimensional random direction vector: Spherical Vector Distributions. (line 44) * NaN, defined as a macro: Infinities and Not-a-number. (line 16) * nautical units: Imperial Units. (line 25) * Negative Binomial distribution, random variates: The Negative Binomial Distribution. (line 6) * Nelder-Mead simplex algorithm for minimization: Algorithms without Derivatives<2>. (line 14) * Newton algorithm, discrete: Algorithms without Derivatives. (line 34) * Newton algorithm, globally convergent: Algorithms using Derivatives. (line 105) * Newton’s method for finding roots: Root Finding Algorithms using Derivatives. (line 17) * Newton’s method for systems of nonlinear equations: Algorithms using Derivatives. (line 84) * Niederreiter sequence: Acknowledgements. (line 11) * NIST Statistical Reference Datasets: References and Further Reading<33>. (line 16) * non-normalized incomplete Gamma function: Incomplete Gamma Functions. (line 6) * nonlinear equation, solutions of: References and Further Reading<28>. (line 14) * nonlinear fitting, stopping parameters, convergence: Iteration<5>. (line 120) * nonlinear functions, minimization: References and Further Reading<29>. (line 16) * nonlinear least squares: References and Further Reading<33>. (line 56) * nonlinear least squares, dogleg: Levenberg-Marquardt with Geodesic Acceleration. (line 29) * nonlinear least squares, double dogleg: Dogleg. (line 26) * nonlinear least squares, levenberg-marquardt: Solving the Trust Region Subproblem TRS. (line 29) * nonlinear least squares, levenberg-marquardt, geodesic acceleration: Levenberg-Marquardt. (line 33) * nonlinear least squares, overview: Nonlinear Least-Squares Fitting. (line 27) * nonlinear systems of equations, solution of: References and Further Reading<30>. (line 13) * nonsymmetric matrix, real, eigensystem: Real Nonsymmetric Matrices. (line 6) * Nordsieck form: Stepping Functions. (line 164) * normalized form, IEEE format: Representation of floating point numbers. (line 14) * normalized incomplete Beta function: Incomplete Beta Function. (line 6) * Not-a-number, defined as a macro: Infinities and Not-a-number. (line 16) * NRM2, Level-1 BLAS: Level 1. (line 42) * ntuples: Example programs for 2D histograms. (line 72) * nuclear physics, constants: Astronomy and Astrophysics. (line 29) * numerical constants, defined as macros: Mathematical Functions. (line 14) * numerical derivatives: References and Further Reading<23>. (line 18) * numerical integration (quadrature): References and Further Reading<11>. (line 67) * obtaining GSL: GSL is Free Software. (line 46) * ODEs, initial value problems: References and Further Reading<21>. (line 10) * online statistics: References and Further Reading<15>. (line 39) * online statistics <1>: References and Further Reading<16>. (line 13) * optimization, combinatorial: References and Further Reading<20>. (line 21) * optimization, see minimization: References and Further Reading<29>. (line 17) * optimized functions, alternatives: Portability functions. (line 33) * ordering, matrix elements: Example programs for vectors. (line 99) * ordinary differential equations, initial value problem: References and Further Reading<21>. 
(line 11) * oscillatory functions, numerical integration of: QAWO adaptive integration for oscillatory functions. (line 6) * overflow, IEEE exceptions: Representation of floating point numbers. (line 134) * Pareto distribution: The Pareto Distribution. (line 6) * PAW: References and Further Reading<19>. (line 6) * permutations: References and Further Reading<4>. (line 12) * physical constants: References and Further Reading<38>. (line 15) * physical dimension, matrices: Example programs for vectors. (line 100) * pi, defined as a macro: Mathematical Constants. (line 9) * Pivoted Cholesky Decomposition: Cholesky Decomposition. (line 154) * plain Monte Carlo: Interface. (line 80) * Pochhammer symbol: Pochhammer Symbol. (line 6) * Poisson random numbers: The Poisson Distribution. (line 6) * Polak-Ribiere algorithm, minimization: Algorithms with Derivatives. (line 35) * polar form of complex numbers: Complex Numbers. (line 21) * polar to rectangular conversion: Conversion Functions. (line 6) * polygamma functions: Psi Digamma Function. (line 6) * polynomial evaluation: Polynomials. (line 13) * polynomial interpolation: 1D Interpolation Types. (line 15) * polynomials, roots of: References and Further Reading. (line 32) * power function: Power Function. (line 6) * power of complex number: Elementary Complex Functions. (line 17) * power, units of: Mass and Weight. (line 50) * precision, IEEE arithmetic: Representation of floating point numbers. (line 135) * predictor-corrector method, ODEs: Stepping Functions. (line 164) * prefixes: Force and Energy. (line 22) * pressure: Thermal Energy and Power. (line 22) * Prince-Dormand, Runge-Kutta method: Stepping Functions. (line 126) * printers units: Speed and Nautical Units. (line 25) * probability distribution, from histogram: Resampling from histograms. (line 18) * probability distributions, from histograms: Reading and writing histograms. (line 67) * projection of ntuples: Histogramming ntuple values. (line 41) * psi function: Psi Digamma Function. (line 6) * QAG quadrature algorithm: QAG adaptive integration. (line 6) * QAGI quadrature algorithm: QAGI adaptive integration on infinite intervals. (line 6) * QAGP quadrature algorithm: QAGP adaptive integration with known singular points. (line 6) * QAGS quadrature algorithm: QAGS adaptive integration with singularities. (line 6) * QAWC quadrature algorithm: QAWC adaptive integration for Cauchy principal values. (line 6) * QAWF quadrature algorithm: QAWF adaptive integration for Fourier integrals. (line 6) * QAWO quadrature algorithm: QAWO adaptive integration for oscillatory functions. (line 6) * QAWS quadrature algorithm: QAWS adaptive integration for singular functions. (line 6) * QL decomposition: LQ Decomposition. (line 65) * Qn statistic: S_n Statistic. (line 35) * QNG quadrature algorithm: QNG non-adaptive Gauss-Kronrod integration. (line 6) * QR decomposition: LU Decomposition. (line 136) * QR decomposition with column pivoting: Triangle on Top of Diagonal. (line 54) * QUADPACK: References and Further Reading<11>. (line 66) * quadratic equation, solving: Divided Difference Representation of Polynomials. (line 81) * quadrature: References and Further Reading<11>. (line 67) * quadrature, fixed point: Fixed point quadratures. (line 6) * quadrature, interpolating: Fixed point quadratures. (line 6) * quantile functions: References. (line 13) * quasi-random sequences: Acknowledgements. (line 12) * R250 shift-register random number generator: Other random number generators. 
(line 62) * Racah coefficients: Coupling Coefficients. (line 6) * Radial Mathieu Functions: Radial Mathieu Functions. (line 6) * radioactivity: Light and Illumination. (line 34) * Radix-2 FFT for real data: Overview of real data FFTs. (line 49) * Radix-2 FFT, complex data: Overview of complex data FFTs. (line 72) * rand, BSD random number generator: Unix random number generators. (line 17) * rand48 random number generator: Unix random number generators. (line 59) * random number distributions: References. (line 14) * random number generators: References and Further Reading<12>. (line 33) * random sampling from histograms: Resampling from histograms. (line 18) * RANDU random number generator: Other random number generators. (line 113) * RANF random number generator: Other random number generators. (line 22) * range: References and Further Reading<14>. (line 52) * range-checking for matrices: Matrix allocation. (line 39) * range-checking for vectors: Vector allocation. (line 36) * RANLUX random number generator: Random number generator algorithms. (line 74) * RANLXD random number generator: Random number generator algorithms. (line 67) * RANLXS random number generator: Random number generator algorithms. (line 47) * RANMAR random number generator: Other random number generators. (line 55) * RANMAR random number generator <1>: Other random number generators. (line 123) * Rayleigh distribution: The Rayleigh Distribution. (line 6) * Rayleigh Tail distribution: The Rayleigh Tail Distribution. (line 6) * real nonsymmetric matrix, eigensystem: Real Nonsymmetric Matrices. (line 6) * real symmetric matrix, eigensystem: Real Symmetric Matrices. (line 6) * Reciprocal Gamma function: Gamma Functions. (line 54) * rectangular to polar conversion: Conversion Functions. (line 6) * recursive stratified sampling, MISER: PLAIN Monte Carlo. (line 65) * reduction of angular variables: Restriction Functions. (line 6) * refinement of solutions in linear systems: LU Decomposition. (line 64) * regression, least squares: References and Further Reading<32>. (line 24) * regression, ridge: Multi-parameter regression. (line 183) * regression, robust: Regularized regression. (line 412) * regression, Tikhonov: Multi-parameter regression. (line 183) * Regular Bessel Functions, Fractional Order: Regular Bessel Function—Fractional Order. (line 6) * Regular Bessel Functions, Zeros of: Zeros of Regular Bessel Functions. (line 6) * Regular Cylindrical Bessel Functions: Regular Cylindrical Bessel Functions. (line 6) * Regular Modified Bessel Functions, Fractional Order: Regular Modified Bessel Functions—Fractional Order. (line 6) * Regular Modified Cylindrical Bessel Functions: Regular Modified Cylindrical Bessel Functions. (line 6) * Regular Modified Spherical Bessel Functions: Regular Modified Spherical Bessel Functions. (line 6) * Regular Spherical Bessel Functions: Regular Spherical Bessel Functions. (line 6) * Regulated Gamma function: Gamma Functions. (line 43) * relative Pochhammer symbol: Pochhammer Symbol. (line 31) * reporting bugs in GSL: No Warranty. (line 12) * representations of complex numbers: Complex Numbers. (line 21) * resampling from histograms: Reading and writing histograms. (line 68) * residual, in nonlinear systems of equations: Search Stopping Parameters<2>. (line 34) * reversing a permutation: Permutation functions. (line 6) * ridge regression: Multi-parameter regression. (line 183) * Riemann Zeta Function: Riemann Zeta Function. (line 6) * RK2, Runge-Kutta method: Stepping Functions. 
* RK4, Runge-Kutta method: Stepping Functions.  (line 108)
* RKF45, Runge-Kutta-Fehlberg method: Stepping Functions.  (line 115)
* robust location estimators: Order Statistics.  (line 23)
* robust regression: Regularized regression.  (line 412)
* robust scale estimators: Gastwirth Estimator.  (line 25)
* rolling maximum: Moving Variance and Standard Deviation.  (line 31)
* rolling mean: Allocation for Moving Window Statistics.  (line 29)
* rolling median: Moving Sum.  (line 19)
* rolling median absolute deviation: Robust Scale Estimation.  (line 17)
* rolling minimum: Moving Variance and Standard Deviation.  (line 32)
* rolling quantile range: Moving MAD.  (line 30)
* rolling standard deviation: Moving Mean.  (line 24)
* rolling sum: Moving Minimum and Maximum.  (line 39)
* rolling variance: Moving Mean.  (line 25)
* rolling window accumulators: Moving Q_n.  (line 19)
* root finding: References and Further Reading<28>.  (line 15)
* root finding, bisection algorithm: Root Bracketing Algorithms.  (line 18)
* root finding, Brent’s method: Root Bracketing Algorithms.  (line 54)
* root finding, caveats: Overview.  (line 39)
* root finding, false position algorithm: Root Bracketing Algorithms.  (line 35)
* root finding, initial guess: Providing the function to solve.  (line 121)
* root finding, Newton’s method: Root Finding Algorithms using Derivatives.  (line 17)
* root finding, overview: One Dimensional Root-Finding.  (line 19)
* root finding, providing a function to solve: Initializing the Solver.  (line 75)
* root finding, search bounds: Providing the function to solve.  (line 122)
* root finding, secant method: Root Finding Algorithms using Derivatives.  (line 32)
* root finding, Steffenson’s method: Root Finding Algorithms using Derivatives.  (line 64)
* root finding, stopping parameters: Iteration.  (line 49)
* root finding, stopping parameters <1>: Iteration<3>.  (line 58)
* roots: References and Further Reading<28>.  (line 15)
* ROTG, Level-1 BLAS: Level 1.  (line 130)
* rounding mode: Representation of floating point numbers.  (line 135)
* Runge-Kutta Cash-Karp method: Stepping Functions.  (line 121)
* Runge-Kutta methods, ordinary differential equations: Stepping Functions.  (line 104)
* Runge-Kutta Prince-Dormand method: Stepping Functions.  (line 126)
* running statistics: References and Further Reading<15>.  (line 40)
* safe comparison of floating point numbers: Approximate Comparison of Floating Point Numbers.  (line 12)
* safeguarded step-length algorithm: Minimization Algorithms.  (line 52)
* sampling from histograms: Reading and writing histograms.  (line 68)
* sampling from histograms <1>: Resampling from histograms.  (line 18)
* SAXPY, Level-1 BLAS: Level 1.  (line 105)
* SCAL, Level-1 BLAS: Level 1.  (line 117)
* scale estimation: Gastwirth Estimator.  (line 25)
* schedule, cooling: Simulated Annealing algorithm.  (line 20)
* se(q,x), Mathieu function: Angular Mathieu Functions.  (line 6)
* secant method for finding roots: Root Finding Algorithms using Derivatives.  (line 32)
* selection function, ntuples: Histogramming ntuple values.  (line 13)
* series, acceleration: References and Further Reading<25>.  (line 12)
* shared libraries: Linking with an alternative BLAS library.  (line 26)
* shell prompt: Further Information.  (line 27)
* Shi(x): Hyperbolic Integrals.  (line 6)
* shift-register random number generator: Other random number generators.  (line 62)
* Si(x): Trigonometric Integrals.  (line 6)
* sign bit, IEEE format: IEEE floating-point arithmetic.  (line 11)
* sign of the determinant of a matrix: LU Decomposition.  (line 129)
* simplex algorithm, minimization: Algorithms without Derivatives<2>.  (line 14)
* simulated annealing: References and Further Reading<20>.  (line 21)
* sin, of complex number: Complex Trigonometric Functions.  (line 6)
* sine function, special functions: Circular Trigonometric Functions.  (line 6)
* single precision, IEEE format: Representation of floating point numbers.  (line 34)
* singular functions, numerical integration of: QAWS adaptive integration for singular functions.  (line 6)
* singular points, specifying positions in quadrature: QAGP adaptive integration with known singular points.  (line 6)
* singular value decomposition: Complete Orthogonal Decomposition.  (line 122)
* Skew Levy distribution: The Levy skew alpha-Stable Distribution.  (line 6)
* skewness: Absolute deviation.  (line 35)
* slope, see numerical derivative: References and Further Reading<23>.  (line 17)
* Sn statistic: Median Absolute Deviation MAD.  (line 30)
* Sobol sequence: Acknowledgements.  (line 12)
* solution of: LU Decomposition.  (line 43)
* solution of linear system by Householder transformations: Householder Transformations.  (line 60)
* solution of linear systems, Ax=b: References and Further Reading<8>.  (line 33)
* solving a nonlinear equation: References and Further Reading<28>.  (line 15)
* solving nonlinear systems of equations: References and Further Reading<30>.  (line 13)
* sorting: Examples<6>.  (line 112)
* sorting eigenvalues and eigenvectors: Sorting Eigenvalues and Eigenvectors.  (line 6)
* sorting vector elements: Sorting vectors.  (line 22)
* source code, reuse in applications: Deprecated Functions.  (line 13)
* sparse BLAS: References and Further Reading<36>.  (line 14)
* sparse BLAS, references: Sparse BLAS operations.  (line 22)
* sparse linear algebra: References and Further Reading<37>.  (line 14)
* sparse linear algebra, examples: Iterating the Sparse Linear System.  (line 59)
* sparse linear algebra, iterative solvers: Overview<9>.  (line 20)
* sparse linear algebra, overview: Sparse Linear Algebra.  (line 14)
* sparse linear algebra, references: Examples<35>.  (line 136)
* sparse matrices: References and Further Reading<35>.  (line 23)
* sparse matrices, accessing elements: Allocation.  (line 80)
* sparse matrices, allocation: Overview<8>.  (line 61)
* sparse matrices, BLAS operations: Sparse BLAS Support.  (line 14)
* sparse matrices, compressed column storage: Coordinate Storage COO.  (line 39)
* sparse matrices, compressed row storage: Compressed Sparse Column CSC.  (line 27)
* sparse matrices, compressed sparse column: Coordinate Storage COO.  (line 40)
* sparse matrices, compressed sparse row: Compressed Sparse Column CSC.  (line 28)
* sparse matrices, compression: Finding Maximum and Minimum Elements.  (line 27)
* sparse matrices, conversion: Compressed Format.  (line 38)
* sparse matrices, coordinate format: Sparse Matrix Storage Formats.  (line 18)
* sparse matrices, copying: Reading and Writing Matrices.  (line 55)
* sparse matrices, data types: Sparse Matrices.  (line 18)
* sparse matrices, examples: Conversion Between Sparse and Dense Matrices.  (line 26)
* sparse matrices, exchanging rows and columns: Copying Matrices.  (line 15)
* sparse matrices, initializing elements: Accessing Matrix Elements.  (line 33)
* sparse matrices, iterative solvers: Overview<9>.  (line 20)
* sparse matrices, min/max elements: Matrix Properties.  (line 48)
* sparse matrices, operations: Exchanging Rows and Columns.  (line 30)
* sparse matrices, overview: Compressed Sparse Row CSR.  (line 26)
* sparse matrices, properties: Matrix Operations.  (line 76)
* sparse matrices, reading: Initializing Matrix Elements.  (line 21)
* sparse matrices, references: Examples<34>.  (line 133)
* sparse matrices, storage formats: Data types<2>.  (line 56)
* sparse matrices, triplet format: Sparse Matrix Storage Formats.  (line 18)
* sparse matrices, writing: Initializing Matrix Elements.  (line 20)
* sparse, iterative solvers: Overview<9>.  (line 19)
* special functions: References and Further Reading<2>.  (line 28)
* special functions <1>: Hyperbolic Trigonometric Functions.  (line 6)
* Spherical Bessel Functions: Regular Spherical Bessel Functions.  (line 6)
* spherical harmonics: Legendre Functions and Spherical Harmonics.  (line 6)
* spherical random variates, 2D: Spherical Vector Distributions.  (line 10)
* spherical random variates, 3D: Spherical Vector Distributions.  (line 33)
* spherical random variates, N-dimensional: Spherical Vector Distributions.  (line 44)
* spline: References and Further Reading<22>.  (line 46)
* splines, basis: References and Further Reading<34>.  (line 37)
* square root of a matrix, Cholesky decomposition: Singular Value Decomposition.  (line 97)
* square root of complex number: Elementary Complex Functions.  (line 6)
* standard deviation: References and Further Reading<14>.  (line 52)
* standard deviation, from histogram: Histogram Statistics.  (line 35)
* standards conformance, ANSI C: Conventions used in this manual.  (line 20)
* Statistical Reference Datasets (StRD): References and Further Reading<33>.  (line 16)
* statistics: References and Further Reading<14>.  (line 52)
* statistics, from histogram: Searching histogram ranges.  (line 23)
* statistics, moving window: References and Further Reading<16>.  (line 14)
* steepest descent algorithm, minimization: Algorithms with Derivatives.  (line 69)
* Steffenson’s method for finding roots: Root Finding Algorithms using Derivatives.  (line 64)
* stratified sampling in Monte Carlo integration: References and Further Reading<19>.  (line 9)
* stride, of vector index: Example programs for blocks.  (line 27)
* Student t-distribution: The t-distribution.  (line 14)
* subdiagonal, of a matrix: Creating row and column views.  (line 84)
* summation, acceleration: References and Further Reading<25>.  (line 12)
* superdiagonal, matrix: Creating row and column views.  (line 98)
* SVD: Complete Orthogonal Decomposition.  (line 123)
* SWAP, Level-1 BLAS: Level 1.  (line 83)
* swapping permutation elements: Accessing permutation elements.  (line 18)
* SYMM, Level-3 BLAS: Level 3.  (line 27)
* symmetric matrices, banded: General Banded Format.  (line 26)
* symmetric matrix, real, eigensystem: Real Symmetric Matrices.  (line 6)
* SYMV, Level-2 BLAS: Level 2.  (line 70)
* synchrotron functions: Synchrotron Functions.  (line 6)
* SYR, Level-2 BLAS: Level 2.  (line 127)
* SYR2, Level-2 BLAS: Level 2.  (line 154)
* SYR2K, Level-3 BLAS: Level 3.  (line 162)
* SYRK, Level-3 BLAS: Level 3.  (line 121)
* systems of equations, nonlinear: References and Further Reading<30>.  (line 12)
* t-distribution: The t-distribution.  (line 14)
* t-test: References and Further Reading<14>.  (line 52)
* tangent of complex number: Complex Trigonometric Functions.  (line 16)
* Tausworthe random number generator: Random number generator algorithms.  (line 144)
* Taylor coefficients, computation of: Factorials.  (line 57)
* testing combination for validity: Combination properties.  (line 20)
* testing multiset for validity: Multiset properties.  (line 20)
* testing permutation for validity: Permutation properties.  (line 15)
* thermal energy, units of: Mass and Weight.  (line 49)
* Tikhonov regression: Multi-parameter regression.  (line 183)
* time units: Atomic and Nuclear Physics.  (line 79)
* trailing dimension, matrices: Example programs for vectors.  (line 100)
* transformation, Householder: Givens Rotations.  (line 35)
* transforms, Hankel: References and Further Reading<27>.  (line 53)
* transforms, wavelet: References and Further Reading<26>.  (line 24)
* transport functions: Transport Functions.  (line 6)
* traveling salesman problem: Trivial example.  (line 114)
* triangular systems: Tridiagonal Systems.  (line 70)
* tridiagonal decomposition: LDLT Decomposition.  (line 63)
* tridiagonal decomposition <1>: Tridiagonal Decomposition of Real Symmetric Matrices.  (line 43)
* tridiagonal systems: Householder solver for linear systems.  (line 21)
* trigonometric functions: Trigonometric Functions.  (line 6)
* trigonometric functions of complex numbers: Elementary Complex Functions.  (line 50)
* trigonometric integrals: Trigonometric Integrals.  (line 6)
* trimmed mean: Robust Location Estimates.  (line 16)
* TRMM, Level-3 BLAS: Level 3.  (line 68)
* TRMV, Level-2 BLAS: Level 2.  (line 25)
* TRSM, Level-3 BLAS: Level 3.  (line 94)
* TRSV, Level-2 BLAS: Level 2.  (line 48)
* truncated mean: Robust Location Estimates.  (line 16)
* TSP: Trivial example.  (line 115)
* TT800 random number generator: Other random number generators.  (line 79)
* two dimensional Gaussian distribution: The Bivariate Gaussian Distribution.  (line 6)
* two dimensional Gaussian distribution <1>: The Multivariate Gaussian Distribution.  (line 6)
* two dimensional histograms: Example programs for histograms.  (line 65)
* two-sided exponential distribution: The Laplace Distribution.  (line 6)
* Type 1 Gumbel distribution, random variates: The Type-1 Gumbel Distribution.  (line 6)
* Type 2 Gumbel distribution: The Type-2 Gumbel Distribution.  (line 6)
* u-transform for series: References and Further Reading<25>.  (line 12)
* underflow, IEEE exceptions: Representation of floating point numbers.  (line 135)
* uniform distribution: The Flat Uniform Distribution.  (line 6)
* units of: Thermal Energy and Power.  (line 21)
* units of <1>: Pressure.  (line 33)
* units of <2>: Light and Illumination.  (line 33)
* units of <3>: Radioactivity.  (line 17)
* units, conversion of: References and Further Reading<38>.  (line 14)
* units, imperial: Measurement of Time.  (line 21)
* Unix random number generators, rand: Unix random number generators.  (line 17)
* Unix random number generators, rand48: Unix random number generators.  (line 17)
* unnormalized incomplete Gamma function: Incomplete Gamma Functions.  (line 6)
* unweighted linear fits: References and Further Reading<32>.  (line 23)
* value function, ntuples: Histogramming ntuple values.  (line 27)
* Van der Pol oscillator, example: Examples<22>.  (line 6)
* variance: References and Further Reading<14>.  (line 52)
* variance, from histogram: Histogram Statistics.  (line 35)
* variance-covariance matrix, linear fits: Overview<5>.  (line 48)
* VAX random number generator: Other random number generators.  (line 93)
* vector, operations: References and Further Reading<7>.  (line 15)
* vector, sorting elements of: Sorting vectors.  (line 22)
* vectors: Example programs for blocks.  (line 28)
* vectors, initializing: Accessing vector elements.  (line 76)
* vectors, range-checking: Vector allocation.  (line 36)
* VEGAS Monte Carlo integration: MISER.  (line 161)
* viscosity: Pressure.  (line 34)
* volume units: Printers Units.  (line 13)
* W function: Lambert W Functions.  (line 6)
* warning options: Handling floating point exceptions.  (line 31)
* warranty (none): Obtaining GSL.  (line 25)
* wavelet transforms: References and Further Reading<26>.  (line 25)
* website, developer information: Reporting Bugs.  (line 31)
* Weibull distribution: The Weibull Distribution.  (line 6)
* weight, units of: Volume Area and Length.  (line 41)
* weighted linear fits: References and Further Reading<32>.  (line 24)
* Wigner coefficients: Coupling Coefficients.  (line 6)
* Wishart random variates: The Logarithmic Distribution.  (line 23)
* Y(x), Bessel Functions: Irregular Cylindrical Bessel Functions.  (line 6)
* y(x), Bessel Functions: Irregular Spherical Bessel Functions.  (line 6)
* zero finding: References and Further Reading<28>.  (line 15)
* zero matrix: Accessing matrix elements.  (line 48)
* zero, IEEE format: Representation of floating point numbers.  (line 30)
* Zeros of Regular Bessel Functions: Zeros of Regular Bessel Functions.  (line 6)
* Zeta functions: Zeta Functions.  (line 6)
* Ziggurat method: The Gaussian Distribution.  (line 27)
Functions256792 Ref: specfunc restriction-functions256956 Ref: 332256956 Ref: specfunc c gsl_sf_angle_restrict_symm257015 Ref: 333257015 Ref: specfunc c gsl_sf_angle_restrict_symm_e257078 Ref: 334257078 Ref: specfunc c gsl_sf_angle_restrict_pos257404 Ref: 335257404 Ref: specfunc c gsl_sf_angle_restrict_pos_e257466 Ref: 336257466 Node: Trigonometric Functions With Error Estimates257772 Ref: specfunc trigonometric-functions-with-error-estimates257907 Ref: 337257907 Ref: specfunc c gsl_sf_sin_err_e258012 Ref: 338258012 Ref: specfunc c gsl_sf_cos_err_e258369 Ref: 339258369 Node: Zeta Functions258728 Ref: specfunc zeta-functions258849 Ref: 33a258849 Node: Riemann Zeta Function259175 Ref: specfunc riemann-zeta-function259288 Ref: 33b259288 Ref: specfunc c gsl_sf_zeta_int259447 Ref: 33c259447 Ref: specfunc c gsl_sf_zeta_int_e259492 Ref: 33d259492 Ref: specfunc c gsl_sf_zeta259664 Ref: 33e259664 Ref: specfunc c gsl_sf_zeta_e259708 Ref: 33f259708 Node: Riemann Zeta Function Minus One259881 Ref: specfunc riemann-zeta-function-minus-one260024 Ref: 340260024 Ref: specfunc c gsl_sf_zetam1_int260286 Ref: 341260286 Ref: specfunc c gsl_sf_zetam1_int_e260333 Ref: 342260333 Ref: specfunc c gsl_sf_zetam1260485 Ref: 343260485 Ref: specfunc c gsl_sf_zetam1_e260531 Ref: 344260531 Node: Hurwitz Zeta Function260684 Ref: specfunc hurwitz-zeta-function260818 Ref: 345260818 Ref: specfunc c gsl_sf_hzeta260962 Ref: 346260962 Ref: specfunc c gsl_sf_hzeta_e261017 Ref: 347261017 Node: Eta Function261194 Ref: specfunc eta-function261288 Ref: 348261288 Ref: specfunc c gsl_sf_eta_int261398 Ref: 349261398 Ref: specfunc c gsl_sf_eta_int_e261442 Ref: 34a261442 Ref: specfunc c gsl_sf_eta261594 Ref: 34b261594 Ref: specfunc c gsl_sf_eta_e261637 Ref: 34c261637 Node: Examples<3>261790 Ref: specfunc examples261921 Ref: 34d261921 Node: References and Further Reading<3>263431 Ref: specfunc references-and-further-reading263539 Ref: 34e263539 Node: Vectors and Matrices265078 Ref: vectors doc265186 Ref: 34f265186 Ref: vectors vectors-and-matrices265186 Ref: 350265186 Node: Data types265785 Ref: vectors data-types265868 Ref: 351265868 Node: Blocks268190 Ref: vectors blocks268289 Ref: 352268289 Ref: vectors c gsl_block268536 Ref: 353268536 Node: Block allocation269163 Ref: vectors block-allocation269258 Ref: 354269258 Ref: vectors c gsl_block_alloc269776 Ref: 355269776 Ref: vectors c gsl_block_calloc270352 Ref: 356270352 Ref: vectors c gsl_block_free270527 Ref: 357270527 Node: Reading and writing blocks270744 Ref: vectors reading-and-writing-blocks270875 Ref: 358270875 Ref: vectors c gsl_block_fwrite271049 Ref: 359271049 Ref: vectors c gsl_block_fread271470 Ref: 35a271470 Ref: vectors c gsl_block_fprintf272038 Ref: 35b272038 Ref: vectors c gsl_block_fscanf272545 Ref: 35c272545 Node: Example programs for blocks273003 Ref: vectors example-programs-for-blocks273109 Ref: 35d273109 Node: Vectors273630 Ref: vectors vectors273727 Ref: 35e273727 Ref: vectors c gsl_vector274295 Ref: 35f274295 Node: Vector allocation275656 Ref: vectors vector-allocation275752 Ref: 360275752 Ref: vectors c gsl_vector_alloc276274 Ref: 361276274 Ref: vectors c gsl_vector_calloc276752 Ref: 362276752 Ref: vectors c gsl_vector_free276955 Ref: 363276955 Node: Accessing vector elements277351 Ref: vectors accessing-vector-elements277484 Ref: 364277484 Ref: vectors c GSL_RANGE_CHECK_OFF278146 Ref: 367278146 Ref: vectors c GSL_C99_INLINE278771 Ref: 368278771 Ref: vectors c gsl_check_range279123 Ref: 369279123 Ref: vectors c gsl_vector_get279693 Ref: 365279693 Ref: vectors c 
gsl_vector_set280084 Ref: 366280084 Ref: vectors c gsl_vector_ptr280487 Ref: 36a280487 Ref: vectors c gsl_vector_const_ptr280550 Ref: 36b280550 Ref: Accessing vector elements-Footnote-1281013 Node: Initializing vector elements281260 Ref: vectors initializing-vector-elements281403 Ref: 36c281403 Ref: vectors c gsl_vector_set_all281474 Ref: 36d281474 Ref: vectors c gsl_vector_set_zero281641 Ref: 36e281641 Ref: vectors c gsl_vector_set_basis281781 Ref: 36f281781 Node: Reading and writing vectors282023 Ref: vectors reading-and-writing-vectors282153 Ref: 370282153 Ref: vectors c gsl_vector_fwrite282330 Ref: 371282330 Ref: vectors c gsl_vector_fread282754 Ref: 372282754 Ref: vectors c gsl_vector_fprintf283326 Ref: 373283326 Ref: vectors c gsl_vector_fscanf283836 Ref: 374283836 Node: Vector views284298 Ref: vectors vector-views284415 Ref: 375284415 Ref: vectors c gsl_vector_view284733 Ref: 376284733 Ref: vectors c gsl_vector_const_view284759 Ref: 377284759 Ref: vectors c gsl_vector_subvector285752 Ref: 378285752 Ref: vectors c gsl_vector_const_subvector285866 Ref: 379285866 Ref: vectors c gsl_vector_subvector_with_stride287309 Ref: 37a287309 Ref: vectors c gsl_vector_const_subvector_with_stride287460 Ref: 37b287460 Ref: vectors c gsl_vector_complex_real289244 Ref: 37c289244 Ref: vectors c gsl_vector_complex_const_real289344 Ref: 37d289344 Ref: vectors c gsl_vector_complex_imag289753 Ref: 37e289753 Ref: vectors c gsl_vector_complex_const_imag289853 Ref: 37f289853 Ref: vectors c gsl_vector_view_array290267 Ref: 380290267 Ref: vectors c gsl_vector_const_view_array290366 Ref: 381290366 Ref: vectors c gsl_vector_view_array_with_stride291365 Ref: 382291365 Ref: vectors c gsl_vector_const_view_array_with_stride291501 Ref: 383291501 Node: Copying vectors292652 Ref: vectors copying-vectors292761 Ref: 384292761 Ref: vectors c gsl_vector_memcpy293106 Ref: 385293106 Ref: vectors c gsl_vector_swap293352 Ref: 386293352 Node: Exchanging elements293574 Ref: vectors exchanging-elements293688 Ref: 387293688 Ref: vectors c gsl_vector_swap_elements293829 Ref: 388293829 Ref: vectors c gsl_vector_reverse294043 Ref: 389294043 Node: Vector operations294185 Ref: vectors vector-operations294331 Ref: 38a294331 Ref: vectors c gsl_vector_add294380 Ref: 38b294380 Ref: vectors c gsl_vector_sub294712 Ref: 38c294712 Ref: vectors c gsl_vector_mul295052 Ref: 38d295052 Ref: vectors c gsl_vector_div295391 Ref: 38e295391 Ref: vectors c gsl_vector_scale295726 Ref: 38f295726 Ref: vectors c gsl_vector_add_constant295969 Ref: 390295969 Ref: vectors c gsl_vector_sum296228 Ref: 391296228 Ref: vectors c gsl_vector_axpby296389 Ref: 392296389 Node: Finding maximum and minimum elements of vectors296669 Ref: vectors finding-maximum-and-minimum-elements-of-vectors296813 Ref: 393296813 Ref: vectors c gsl_vector_max296983 Ref: 394296983 Ref: vectors c gsl_vector_min297116 Ref: 395297116 Ref: vectors c gsl_vector_minmax297249 Ref: 396297249 Ref: vectors c gsl_vector_max_index297510 Ref: 397297510 Ref: vectors c gsl_vector_min_index297754 Ref: 398297754 Ref: vectors c gsl_vector_minmax_index297998 Ref: 399297998 Node: Vector properties298369 Ref: vectors vector-properties298524 Ref: 39a298524 Ref: vectors c gsl_vector_isnull298726 Ref: 39b298726 Ref: vectors c gsl_vector_ispos298784 Ref: 39c298784 Ref: vectors c gsl_vector_isneg298841 Ref: 39d298841 Ref: vectors c gsl_vector_isnonneg298898 Ref: 39e298898 Ref: vectors c gsl_vector_equal299143 Ref: 39f299143 Node: Example programs for vectors299375 Ref: vectors example-programs-for-vectors299474 Ref: 
3a0299474 Node: Matrices301729 Ref: vectors matrices301811 Ref: 3a1301811 Ref: vectors c gsl_matrix302052 Ref: 3a2302052 Node: Matrix allocation304835 Ref: vectors matrix-allocation304932 Ref: 3a3304932 Ref: vectors c gsl_matrix_alloc305442 Ref: 3a4305442 Ref: vectors c gsl_matrix_calloc306022 Ref: 3a5306022 Ref: vectors c gsl_matrix_free306282 Ref: 3a6306282 Node: Accessing matrix elements306678 Ref: vectors accessing-matrix-elements306812 Ref: 3a7306812 Ref: vectors c gsl_matrix_get307428 Ref: 3a8307428 Ref: vectors c gsl_matrix_set307864 Ref: 3a9307864 Ref: vectors c gsl_matrix_ptr308312 Ref: 3aa308312 Ref: vectors c gsl_matrix_const_ptr308385 Ref: 3ab308385 Node: Initializing matrix elements308850 Ref: vectors initializing-matrix-elements308995 Ref: 3ac308995 Ref: vectors c gsl_matrix_set_all309066 Ref: 3ad309066 Ref: vectors c gsl_matrix_set_zero309233 Ref: 3ae309233 Ref: vectors c gsl_matrix_set_identity309373 Ref: 3af309373 Node: Reading and writing matrices309704 Ref: vectors reading-and-writing-matrices309836 Ref: 3b0309836 Ref: vectors c gsl_matrix_fwrite310016 Ref: 3b1310016 Ref: vectors c gsl_matrix_fread310440 Ref: 3b2310440 Ref: vectors c gsl_matrix_fprintf311016 Ref: 3b3311016 Ref: vectors c gsl_matrix_fscanf311526 Ref: 3b4311526 Node: Matrix views311991 Ref: vectors matrix-views312124 Ref: 3b5312124 Ref: vectors c gsl_matrix_view312163 Ref: 3b6312163 Ref: vectors c gsl_matrix_const_view312189 Ref: 3b7312189 Ref: vectors c gsl_matrix_submatrix313022 Ref: 3b8313022 Ref: vectors c gsl_matrix_const_submatrix313155 Ref: 3b9313155 Ref: vectors c gsl_matrix_view_array314821 Ref: 3ba314821 Ref: vectors c gsl_matrix_const_view_array314932 Ref: 3bb314932 Ref: vectors c gsl_matrix_view_array_with_tda316050 Ref: 3bc316050 Ref: vectors c gsl_matrix_const_view_array_with_tda316182 Ref: 3bd316182 Ref: vectors c gsl_matrix_view_vector317459 Ref: 3be317459 Ref: vectors c gsl_matrix_const_view_vector317572 Ref: 3bf317572 Ref: vectors c gsl_matrix_view_vector_with_tda318728 Ref: 3c0318728 Ref: vectors c gsl_matrix_const_view_vector_with_tda318872 Ref: 3c1318872 Node: Creating row and column views320186 Ref: vectors creating-row-and-column-views320307 Ref: 3c2320307 Ref: vectors c gsl_matrix_row320729 Ref: 3c3320729 Ref: vectors c gsl_matrix_const_row320822 Ref: 3c4320822 Ref: vectors c gsl_matrix_column321305 Ref: 3c5321305 Ref: vectors c gsl_matrix_const_column321401 Ref: 3c6321401 Ref: vectors c gsl_matrix_subrow321896 Ref: 3c7321896 Ref: vectors c gsl_matrix_const_subrow322017 Ref: 3c8322017 Ref: vectors c gsl_matrix_subcolumn322683 Ref: 3c9322683 Ref: vectors c gsl_matrix_const_subcolumn322807 Ref: 3ca322807 Ref: vectors c gsl_matrix_diagonal323491 Ref: 3cb323491 Ref: vectors c gsl_matrix_const_diagonal323579 Ref: 3cc323579 Ref: vectors c gsl_matrix_subdiagonal324130 Ref: 3cd324130 Ref: vectors c gsl_matrix_const_subdiagonal324231 Ref: 3ce324231 Ref: vectors c gsl_matrix_superdiagonal324766 Ref: 3cf324766 Ref: vectors c gsl_matrix_const_superdiagonal324869 Ref: 3d0324869 Node: Copying matrices325412 Ref: vectors copying-matrices325545 Ref: 3d1325545 Ref: vectors c gsl_matrix_memcpy325592 Ref: 3d2325592 Ref: vectors c gsl_matrix_swap325837 Ref: 3d3325837 Node: Copying rows and columns326063 Ref: vectors copying-rows-and-columns326194 Ref: 3d4326194 Ref: vectors c gsl_matrix_get_row326676 Ref: 3d5326676 Ref: vectors c gsl_matrix_get_col326976 Ref: 3d6326976 Ref: vectors c gsl_matrix_set_row327282 Ref: 3d7327282 Ref: vectors c gsl_matrix_set_col327582 Ref: 3d8327582 Node: Exchanging rows and 
columns327888 Ref: vectors exchanging-rows-and-columns328020 Ref: 3d9328020 Ref: vectors c gsl_matrix_swap_rows328172 Ref: 3da328172 Ref: vectors c gsl_matrix_swap_columns328378 Ref: 3db328378 Ref: vectors c gsl_matrix_swap_rowcol328590 Ref: 3dc328590 Ref: vectors c gsl_matrix_transpose_memcpy328871 Ref: 3dd328871 Ref: vectors c gsl_matrix_transpose329313 Ref: 3de329313 Ref: vectors c gsl_matrix_complex_conjtrans_memcpy329560 Ref: 3df329560 Node: Matrix operations330051 Ref: vectors matrix-operations330207 Ref: 3e0330207 Ref: vectors c gsl_matrix_add330327 Ref: 3e1330327 Ref: vectors c gsl_matrix_sub330674 Ref: 3e2330674 Ref: vectors c gsl_matrix_mul_elements331028 Ref: 3e3331028 Ref: vectors c gsl_matrix_div_elements331400 Ref: 3e4331400 Ref: vectors c gsl_matrix_scale331769 Ref: 3e5331769 Ref: vectors c gsl_matrix_scale_columns332018 Ref: 3e6332018 Ref: vectors c gsl_matrix_scale_rows332404 Ref: 3e7332404 Ref: vectors c gsl_matrix_add_constant332781 Ref: 3e8332781 Node: Finding maximum and minimum elements of matrices333046 Ref: vectors finding-maximum-and-minimum-elements-of-matrices333192 Ref: 3e9333192 Ref: vectors c gsl_matrix_max333367 Ref: 3ea333367 Ref: vectors c gsl_matrix_min333500 Ref: 3eb333500 Ref: vectors c gsl_matrix_minmax333633 Ref: 3ec333633 Ref: vectors c gsl_matrix_max_index333894 Ref: 3ed333894 Ref: vectors c gsl_matrix_min_index334272 Ref: 3ee334272 Ref: vectors c gsl_matrix_minmax_index334650 Ref: 3ef334650 Node: Matrix properties335129 Ref: vectors matrix-properties335287 Ref: 3f0335287 Ref: vectors c gsl_matrix_isnull335491 Ref: 3f1335491 Ref: vectors c gsl_matrix_ispos335549 Ref: 3f2335549 Ref: vectors c gsl_matrix_isneg335606 Ref: 3f3335606 Ref: vectors c gsl_matrix_isnonneg335663 Ref: 3f4335663 Ref: vectors c gsl_matrix_equal336009 Ref: 3f6336009 Ref: vectors c gsl_matrix_norm1336242 Ref: 3f7336242 Node: Example programs for matrices336489 Ref: vectors example-programs-for-matrices336632 Ref: 3f8336632 Node: References and Further Reading<4>340654 Ref: vectors references-and-further-reading340771 Ref: 3f9340771 Node: Permutations341142 Ref: permutation doc341245 Ref: 3fa341245 Ref: permutation permutations341245 Ref: 3fb341245 Node: The Permutation struct342451 Ref: permutation the-permutation-struct342554 Ref: 3fc342554 Ref: permutation c gsl_permutation342609 Ref: 3fd342609 Node: Permutation allocation343038 Ref: permutation permutation-allocation343180 Ref: 3fe343180 Ref: permutation c gsl_permutation_alloc343235 Ref: 3ff343235 Ref: permutation c gsl_permutation_calloc343716 Ref: 400343716 Ref: permutation c gsl_permutation_init344027 Ref: 401344027 Ref: permutation c gsl_permutation_free344204 Ref: 402344204 Ref: permutation c gsl_permutation_memcpy344350 Ref: 403344350 Node: Accessing permutation elements344624 Ref: permutation accessing-permutation-elements344766 Ref: 404344766 Ref: permutation c gsl_permutation_get344913 Ref: 405344913 Ref: permutation c gsl_permutation_swap345326 Ref: 406345326 Node: Permutation properties345548 Ref: permutation permutation-properties345689 Ref: 407345689 Ref: permutation c gsl_permutation_size345744 Ref: 408345744 Ref: permutation c gsl_permutation_data345884 Ref: 409345884 Ref: permutation c gsl_permutation_valid346056 Ref: 40a346056 Node: Permutation functions346299 Ref: permutation permutation-functions346431 Ref: 40b346431 Ref: permutation c gsl_permutation_reverse346484 Ref: 40c346484 Ref: permutation c gsl_permutation_inverse346629 Ref: 40d346629 Ref: permutation c gsl_permutation_next346848 Ref: 40e346848 Ref: 
permutation c gsl_permutation_prev347322 Ref: 40f347322 Node: Applying Permutations347657 Ref: permutation applying-permutations347799 Ref: 410347799 Ref: permutation c gsl_permute347961 Ref: 411347961 Ref: permutation c gsl_permute_inverse348210 Ref: 412348210 Ref: permutation c gsl_permute_vector348482 Ref: 413348482 Ref: permutation c gsl_permute_vector_inverse348970 Ref: 414348970 Ref: permutation c gsl_permute_matrix349580 Ref: 415349580 Ref: permutation c gsl_permutation_mul350108 Ref: 416350108 Node: Reading and writing permutations350490 Ref: permutation reading-and-writing-permutations350638 Ref: 417350638 Ref: permutation c gsl_permutation_fwrite350826 Ref: 418350826 Ref: permutation c gsl_permutation_fread351257 Ref: 419351257 Ref: permutation c gsl_permutation_fprintf351837 Ref: 41a351837 Ref: permutation c gsl_permutation_fscanf352384 Ref: 41b352384 Ref: Reading and writing permutations-Footnote-1352894 Node: Permutations in cyclic form353014 Ref: permutation permutations-in-cyclic-form353152 Ref: 41c353152 Ref: permutation c gsl_permutation_linear_to_canonical355187 Ref: 41d355187 Ref: permutation c gsl_permutation_canonical_to_linear355436 Ref: 41e355436 Ref: permutation c gsl_permutation_inversions355703 Ref: 41f355703 Ref: permutation c gsl_permutation_linear_cycles356107 Ref: 420356107 Ref: permutation c gsl_permutation_canonical_cycles356304 Ref: 421356304 Node: Examples<4>356507 Ref: permutation examples356646 Ref: 422356646 Node: References and Further Reading<5>359064 Ref: permutation references-and-further-reading359167 Ref: 423359167 Node: Combinations359698 Ref: combination doc359790 Ref: 424359790 Ref: combination combinations359790 Ref: 425359790 Node: The Combination struct360571 Ref: combination the-combination-struct360674 Ref: 426360674 Ref: combination c gsl_combination360731 Ref: 427360731 Node: Combination allocation361218 Ref: combination combination-allocation361360 Ref: 428361360 Ref: combination c gsl_combination_alloc361417 Ref: 429361417 Ref: combination c gsl_combination_calloc361962 Ref: 42a361962 Ref: combination c gsl_combination_init_first362332 Ref: 42b362332 Ref: combination c gsl_combination_init_last362542 Ref: 42c362542 Ref: combination c gsl_combination_free362764 Ref: 42d362764 Ref: combination c gsl_combination_memcpy362910 Ref: 42e362910 Node: Accessing combination elements363184 Ref: combination accessing-combination-elements363326 Ref: 42f363326 Ref: combination c gsl_combination_get363476 Ref: 430363476 Node: Combination properties363889 Ref: combination combination-properties364030 Ref: 431364030 Ref: combination c gsl_combination_n364087 Ref: 432364087 Ref: combination c gsl_combination_k364218 Ref: 433364218 Ref: combination c gsl_combination_data364378 Ref: 434364378 Ref: combination c gsl_combination_valid364550 Ref: 435364550 Node: Combination functions364815 Ref: combination combination-functions364958 Ref: 436364958 Ref: combination c gsl_combination_next365013 Ref: 437365013 Ref: combination c gsl_combination_prev365484 Ref: 438365484 Node: Reading and writing combinations365819 Ref: combination reading-and-writing-combinations365951 Ref: 439365951 Ref: combination c gsl_combination_fwrite366141 Ref: 43a366141 Ref: combination c gsl_combination_fread366572 Ref: 43b366572 Ref: combination c gsl_combination_fprintf367173 Ref: 43c367173 Ref: combination c gsl_combination_fscanf367720 Ref: 43d367720 Ref: Reading and writing combinations-Footnote-1368237 Node: Examples<5>368357 Ref: combination examples368501 Ref: 43e368501 Node: 
References and Further Reading<6>369632 Ref: combination references-and-further-reading369735 Ref: 43f369735 Node: Multisets370015 Ref: multiset doc370102 Ref: 440370102 Ref: multiset multisets370102 Ref: 441370102 Node: The Multiset struct370947 Ref: multiset the-multiset-struct371041 Ref: 442371041 Ref: multiset c gsl_multiset371092 Ref: 443371092 Node: Multiset allocation371561 Ref: multiset multiset-allocation371691 Ref: 444371691 Ref: multiset c gsl_multiset_alloc371742 Ref: 445371742 Ref: multiset c gsl_multiset_calloc372271 Ref: 446372271 Ref: multiset c gsl_multiset_init_first372639 Ref: 447372639 Ref: multiset c gsl_multiset_init_last372840 Ref: 448372840 Ref: multiset c gsl_multiset_free373046 Ref: 449373046 Ref: multiset c gsl_multiset_memcpy373183 Ref: 44a373183 Node: Accessing multiset elements373439 Ref: multiset accessing-multiset-elements373569 Ref: 44b373569 Ref: multiset c gsl_multiset_get373710 Ref: 44c373710 Node: Multiset properties374114 Ref: multiset multiset-properties374243 Ref: 44d374243 Ref: multiset c gsl_multiset_n374294 Ref: 44e374294 Ref: multiset c gsl_multiset_k374427 Ref: 44f374427 Ref: multiset c gsl_multiset_data374578 Ref: 450374578 Ref: multiset c gsl_multiset_valid374741 Ref: 451374741 Node: Multiset functions374984 Ref: multiset multiset-functions375115 Ref: 452375115 Ref: multiset c gsl_multiset_next375164 Ref: 453375164 Ref: multiset c gsl_multiset_prev375630 Ref: 454375630 Node: Reading and writing multisets375958 Ref: multiset reading-and-writing-multisets376081 Ref: 455376081 Ref: multiset c gsl_multiset_fwrite376262 Ref: 456376262 Ref: multiset c gsl_multiset_fread376684 Ref: 457376684 Ref: multiset c gsl_multiset_fprintf377263 Ref: 458377263 Ref: multiset c gsl_multiset_fscanf377801 Ref: 459377801 Ref: Reading and writing multisets-Footnote-1378296 Node: Examples<6>378416 Ref: multiset examples378512 Ref: 45a378512 Node: Sorting380503 Ref: sort doc380590 Ref: 45b380590 Ref: sort sorting380590 Ref: 45c380590 Node: Sorting objects381481 Ref: sort sorting-objects381565 Ref: 45d381565 Ref: sort c gsl_heapsort382103 Ref: 45e382103 Ref: sort c gsl_heapsort gsl_comparison_fn_t382471 Ref: 45f382471 Ref: sort c gsl_heapsort_index383694 Ref: 460383694 Node: Sorting vectors384534 Ref: sort sorting-vectors384671 Ref: 461384671 Ref: sort c gsl_sort385706 Ref: 462385706 Ref: sort c gsl_sort2385947 Ref: 463385947 Ref: sort c gsl_sort_vector386366 Ref: 464386366 Ref: sort c gsl_sort_vector2386522 Ref: 465386522 Ref: sort c gsl_sort_index386767 Ref: 466386767 Ref: sort c gsl_sort_vector_index387421 Ref: 467387421 Node: Selecting the k smallest or largest elements388081 Ref: sort selecting-the-k-smallest-or-largest-elements388221 Ref: 468388221 Ref: sort c gsl_sort_smallest388930 Ref: 469388930 Ref: sort c gsl_sort_largest389419 Ref: 46a389419 Ref: sort c gsl_sort_vector_smallest389879 Ref: 46b389879 Ref: sort c gsl_sort_vector_largest389978 Ref: 46c389978 Ref: sort c gsl_sort_smallest_index390404 Ref: 46d390404 Ref: sort c gsl_sort_largest_index390941 Ref: 46e390941 Ref: sort c gsl_sort_vector_smallest_index391483 Ref: 46f391483 Ref: sort c gsl_sort_vector_largest_index391585 Ref: 470391585 Node: Computing the rank391931 Ref: sort computing-the-rank392067 Ref: 471392067 Node: Examples<7>393140 Ref: sort examples393265 Ref: 472393265 Node: References and Further Reading<7>394776 Ref: sort references-and-further-reading394874 Ref: 473394874 Node: BLAS Support395285 Ref: blas doc395377 Ref: 474395377 Ref: blas blas-support395377 Ref: 475395377 Ref: blas 
chap-blas-support395377 Ref: 11395377 Ref: BLAS Support-Footnote-1399327 Node: GSL BLAS Interface399500 Ref: blas gsl-blas-interface399588 Ref: 477399588 Node: Level 1399932 Ref: blas level-1400011 Ref: 478400011 Ref: blas c gsl_blas_sdsdot400042 Ref: 479400042 Ref: blas c gsl_blas_sdot400323 Ref: 47a400323 Ref: blas c gsl_blas_dsdot400435 Ref: 47b400435 Ref: blas c gsl_blas_ddot400549 Ref: 47c400549 Ref: blas c gsl_blas_cdotu400807 Ref: 47d400807 Ref: blas c gsl_blas_zdotu400946 Ref: 47e400946 Ref: blas c gsl_blas_cdotc401230 Ref: 47f401230 Ref: blas c gsl_blas_zdotc401369 Ref: 480401369 Ref: blas c gsl_blas_snrm2401663 Ref: 481401663 Ref: blas c gsl_blas_dnrm2401726 Ref: 482401726 Ref: blas c gsl_blas_scnrm2401895 Ref: 483401895 Ref: blas c gsl_blas_dznrm2401967 Ref: 484401967 Ref: blas c gsl_blas_sasum402185 Ref: 485402185 Ref: blas c gsl_blas_dasum402248 Ref: 486402248 Ref: blas c gsl_blas_scasum402414 Ref: 487402414 Ref: blas c gsl_blas_dzasum402486 Ref: 488402486 Ref: blas c gsl_blas_isamax402733 Ref: 489402733 Ref: blas c gsl_blas_idamax402805 Ref: 48a402805 Ref: blas c gsl_blas_icamax402871 Ref: 48b402871 Ref: blas c gsl_blas_izamax402961 Ref: 48c402961 Ref: blas c gsl_blas_sswap403444 Ref: 48d403444 Ref: blas c gsl_blas_dswap403530 Ref: 48e403530 Ref: blas c gsl_blas_cswap403594 Ref: 48f403594 Ref: blas c gsl_blas_zswap403696 Ref: 490403696 Ref: blas c gsl_blas_scopy403883 Ref: 491403883 Ref: blas c gsl_blas_dcopy403975 Ref: 492403975 Ref: blas c gsl_blas_ccopy404045 Ref: 493404045 Ref: blas c gsl_blas_zcopy404153 Ref: 494404153 Ref: blas c gsl_blas_saxpy404353 Ref: 495404353 Ref: blas c gsl_blas_daxpy404458 Ref: 496404458 Ref: blas c gsl_blas_caxpy404552 Ref: 497404552 Ref: blas c gsl_blas_zaxpy404691 Ref: 498404691 Ref: blas c gsl_blas_sscal404921 Ref: 499404921 Ref: blas c gsl_blas_dscal404990 Ref: 49a404990 Ref: blas c gsl_blas_cscal405054 Ref: 49b405054 Ref: blas c gsl_blas_zscal405159 Ref: 49c405159 Ref: blas c gsl_blas_csscal405252 Ref: 49d405252 Ref: blas c gsl_blas_zdscal405340 Ref: 49e405340 Ref: blas c gsl_blas_srotg405521 Ref: 49f405521 Ref: blas c gsl_blas_drotg405609 Ref: 4a0405609 Ref: blas c gsl_blas_srot405950 Ref: 4a1405950 Ref: blas c gsl_blas_drot406053 Ref: 4a2406053 Ref: blas c gsl_blas_srotmg406288 Ref: 4a3406288 Ref: blas c gsl_blas_drotmg406390 Ref: 4a4406390 Ref: blas c gsl_blas_srotm406688 Ref: 4a5406688 Ref: blas c gsl_blas_drotm406791 Ref: 4a6406791 Node: Level 2406946 Ref: blas level-2407041 Ref: 4a7407041 Ref: blas c gsl_blas_sgemv407072 Ref: 4a8407072 Ref: blas c gsl_blas_dgemv407252 Ref: 4a9407252 Ref: blas c gsl_blas_cgemv407416 Ref: 4aa407416 Ref: blas c gsl_blas_zgemv407666 Ref: 4ab407666 Ref: blas c gsl_blas_strmv408102 Ref: 4ac408102 Ref: blas c gsl_blas_dtrmv408268 Ref: 4ad408268 Ref: blas c gsl_blas_ctrmv408412 Ref: 4ae408412 Ref: blas c gsl_blas_ztrmv408594 Ref: 4af408594 Ref: blas c gsl_blas_strsv409442 Ref: 4b0409442 Ref: blas c gsl_blas_dtrsv409608 Ref: 4b1409608 Ref: blas c gsl_blas_ctrsv409752 Ref: 4b2409752 Ref: blas c gsl_blas_ztrsv409934 Ref: 4b3409934 Ref: blas c gsl_blas_ssymv410731 Ref: 4b4410731 Ref: blas c gsl_blas_dsymv410904 Ref: 4b5410904 Ref: blas c gsl_blas_chemv411535 Ref: 4b6411535 Ref: blas c gsl_blas_zhemv411778 Ref: 4b7411778 Ref: blas c gsl_blas_sger412566 Ref: 4b8412566 Ref: blas c gsl_blas_dger412697 Ref: 4b9412697 Ref: blas c gsl_blas_cgeru412811 Ref: 4ba412811 Ref: blas c gsl_blas_zgeru412995 Ref: 4bb412995 Ref: blas c gsl_blas_cgerc413258 Ref: 4bc413258 Ref: blas c gsl_blas_zgerc413442 Ref: 4bd413442 Ref: blas c 
gsl_blas_ssyr413715 Ref: 4be413715 Ref: blas c gsl_blas_dsyr413838 Ref: 4bf413838 Ref: blas c gsl_blas_cher414413 Ref: 4c0414413 Ref: blas c gsl_blas_zher414552 Ref: 4c1414552 Ref: blas c gsl_blas_ssyr2415219 Ref: 4c2415219 Ref: blas c gsl_blas_dsyr2415380 Ref: 4c3415380 Ref: blas c gsl_blas_cher2415992 Ref: 4c4415992 Ref: blas c gsl_blas_zher2416205 Ref: 4c5416205 Node: Level 3416939 Ref: blas level-3417018 Ref: 4c6417018 Ref: blas c gsl_blas_sgemm417049 Ref: 4c7417049 Ref: blas c gsl_blas_dgemm417265 Ref: 4c8417265 Ref: blas c gsl_blas_cgemm417455 Ref: 4c9417455 Ref: blas c gsl_blas_zgemm417731 Ref: 4ca417731 Ref: blas c gsl_blas_ssymm418251 Ref: 4cb418251 Ref: blas c gsl_blas_dsymm418443 Ref: 4cc418443 Ref: blas c gsl_blas_csymm418619 Ref: 4cd418619 Ref: blas c gsl_blas_zsymm418881 Ref: 4ce418881 Ref: blas c gsl_blas_chemm419606 Ref: 4cf419606 Ref: blas c gsl_blas_zhemm419868 Ref: 4d0419868 Ref: blas c gsl_blas_strmm420669 Ref: 4d1420669 Ref: blas c gsl_blas_dtrmm420867 Ref: 4d2420867 Ref: blas c gsl_blas_ctrmm421054 Ref: 4d3421054 Ref: blas c gsl_blas_ztrmm421296 Ref: 4d4421296 Ref: blas c gsl_blas_strsm422307 Ref: 4d5422307 Ref: blas c gsl_blas_dtrsm422505 Ref: 4d6422505 Ref: blas c gsl_blas_ctrsm422692 Ref: 4d7422692 Ref: blas c gsl_blas_ztrsm422934 Ref: 4d8422934 Ref: blas c gsl_blas_ssyrk423966 Ref: 4d9423966 Ref: blas c gsl_blas_dsyrk424137 Ref: 4da424137 Ref: blas c gsl_blas_csyrk424298 Ref: 4db424298 Ref: blas c gsl_blas_zsyrk424531 Ref: 4dc424531 Ref: blas c gsl_blas_cherk425312 Ref: 4dd425312 Ref: blas c gsl_blas_zherk425499 Ref: 4de425499 Ref: blas c gsl_blas_ssyr2k426342 Ref: 4df426342 Ref: blas c gsl_blas_dsyr2k426541 Ref: 4e0426541 Ref: blas c gsl_blas_csyr2k426724 Ref: 4e1426724 Ref: blas c gsl_blas_zsyr2k426993 Ref: 4e2426993 Ref: blas c gsl_blas_cher2k427850 Ref: 4e3427850 Ref: blas c gsl_blas_zher2k428101 Ref: 4e4428101 Node: Examples<8>429031 Ref: blas examples429161 Ref: 4e5429161 Node: References and Further Reading<8>430433 Ref: blas references-and-further-reading430536 Ref: 4e6430536 Ref: blas sec-blas-references430536 Ref: 4e7430536 Node: Linear Algebra431788 Ref: linalg doc431885 Ref: 4e8431885 Ref: linalg linear-algebra431885 Ref: 4e9431885 Node: LU Decomposition433189 Ref: linalg lu-decomposition433282 Ref: 4ea433282 Ref: linalg sec-lu-decomposition433282 Ref: 4eb433282 Ref: linalg c gsl_linalg_LU_decomp433942 Ref: 4ec433942 Ref: linalg c gsl_linalg_complex_LU_decomp434040 Ref: 4ed434040 Ref: linalg c gsl_linalg_LU_solve435205 Ref: 4ee435205 Ref: linalg c gsl_linalg_complex_LU_solve435338 Ref: 4ef435338 Ref: linalg c gsl_linalg_LU_svx435751 Ref: 4f0435751 Ref: linalg c gsl_linalg_complex_LU_svx435861 Ref: 4f1435861 Ref: linalg c gsl_linalg_LU_refine436262 Ref: 4f2436262 Ref: linalg c gsl_linalg_complex_LU_refine436445 Ref: 4f3436445 Ref: linalg c gsl_linalg_LU_invert436949 Ref: 4f4436949 Ref: linalg c gsl_linalg_complex_LU_invert437068 Ref: 4f5437068 Ref: linalg c gsl_linalg_LU_invx437827 Ref: 4f6437827 Ref: linalg c gsl_linalg_complex_LU_invx437917 Ref: 4f7437917 Ref: linalg c gsl_linalg_LU_det438643 Ref: 4f8438643 Ref: linalg c gsl_linalg_complex_LU_det438711 Ref: 4f9438711 Ref: linalg c gsl_linalg_LU_lndet439057 Ref: 4fa439057 Ref: linalg c gsl_linalg_complex_LU_lndet439115 Ref: 4fb439115 Ref: linalg c gsl_linalg_LU_sgndet439474 Ref: 4fc439474 Ref: linalg c gsl_linalg_complex_LU_sgndet439542 Ref: 4fd439542 Node: QR Decomposition439805 Ref: linalg linalg-qr439944 Ref: 4fe439944 Ref: linalg qr-decomposition439944 Ref: 4ff439944 Ref: linalg c 
gsl_linalg_QR_decomp_r442458 Ref: 502442458 Ref: linalg c gsl_linalg_complex_QR_decomp_r442530 Ref: 503442530 Ref: linalg c gsl_linalg_QR_solve_r443257 Ref: 504443257 Ref: linalg c gsl_linalg_complex_QR_solve_r443387 Ref: 505443387 Ref: linalg c gsl_linalg_QR_lssolve_r443872 Ref: 506443872 Ref: linalg c gsl_linalg_complex_QR_lssolve_r444032 Ref: 507444032 Ref: linalg c gsl_linalg_QR_QTvec_r445101 Ref: 508445101 Ref: linalg c gsl_linalg_complex_QR_QHvec_r445228 Ref: 509445228 Ref: linalg c gsl_linalg_QR_QTmat_r445868 Ref: 50a445868 Ref: linalg c gsl_linalg_QR_unpack_r446411 Ref: 50b446411 Ref: linalg c gsl_linalg_complex_QR_unpack_r446536 Ref: 50c446536 Ref: linalg c gsl_linalg_QR_rcond447173 Ref: 50d447173 Node: Level 2 Interface447791 Ref: linalg level-2-interface447899 Ref: 50e447899 Ref: linalg c gsl_linalg_QR_decomp448159 Ref: 50f448159 Ref: linalg c gsl_linalg_complex_QR_decomp448231 Ref: 510448231 Ref: linalg c gsl_linalg_QR_solve449261 Ref: 511449261 Ref: linalg c gsl_linalg_complex_QR_solve449391 Ref: 512449391 Ref: linalg c gsl_linalg_QR_svx449822 Ref: 514449822 Ref: linalg c gsl_linalg_complex_QR_svx449929 Ref: 515449929 Ref: linalg c gsl_linalg_QR_lssolve450328 Ref: 513450328 Ref: linalg c gsl_linalg_complex_QR_lssolve450492 Ref: 516450492 Ref: linalg c gsl_linalg_QR_QTvec451288 Ref: 517451288 Ref: linalg c gsl_linalg_complex_QR_QHvec451397 Ref: 518451397 Ref: linalg c gsl_linalg_QR_Qvec451934 Ref: 519451934 Ref: linalg c gsl_linalg_complex_QR_Qvec452042 Ref: 51a452042 Ref: linalg c gsl_linalg_QR_QTmat452514 Ref: 51b452514 Ref: linalg c gsl_linalg_QR_Rsolve452968 Ref: 51c452968 Ref: linalg c gsl_linalg_QR_Rsvx453272 Ref: 51d453272 Ref: linalg c gsl_linalg_QR_unpack453688 Ref: 51e453688 Ref: linalg c gsl_linalg_QR_QRsolve454023 Ref: 51f454023 Ref: linalg c gsl_linalg_QR_update454337 Ref: 520454337 Ref: linalg c gsl_linalg_R_solve454754 Ref: 521454754 Ref: linalg c gsl_linalg_R_svx454958 Ref: 522454958 Node: Triangle on Top of Rectangle455215 Ref: linalg triangle-on-top-of-rectangle455359 Ref: 523455359 Ref: linalg c gsl_linalg_QR_UR_decomp455963 Ref: 524455963 Node: Triangle on Top of Triangle456362 Ref: linalg triangle-on-top-of-triangle456519 Ref: 525456519 Ref: linalg c gsl_linalg_QR_UU_decomp456972 Ref: 526456972 Ref: linalg c gsl_linalg_QR_UU_lssolve457388 Ref: 527457388 Ref: linalg c gsl_linalg_QR_UU_QTec458398 Ref: 528458398 Node: Triangle on Top of Trapezoidal458846 Ref: linalg triangle-on-top-of-trapezoidal459002 Ref: 529459002 Ref: linalg c gsl_linalg_QR_UZ_decomp459659 Ref: 52a459659 Node: Triangle on Top of Diagonal460070 Ref: linalg triangle-on-top-of-diagonal460190 Ref: 52b460190 Ref: linalg c gsl_linalg_QR_UD_decomp460735 Ref: 52c460735 Ref: linalg c gsl_linalg_QR_UD_lssolve461179 Ref: 52d461179 Node: QR Decomposition with Column Pivoting462186 Ref: linalg linalg-qrpt462325 Ref: 500462325 Ref: linalg qr-decomposition-with-column-pivoting462325 Ref: 52e462325 Ref: linalg c gsl_linalg_QRPT_decomp463477 Ref: 52f463477 Ref: linalg c gsl_linalg_QRPT_decomp2464840 Ref: 530464840 Ref: linalg c gsl_linalg_QRPT_solve465248 Ref: 531465248 Ref: linalg c gsl_linalg_QRPT_svx465656 Ref: 532465656 Ref: linalg c gsl_linalg_QRPT_lssolve466067 Ref: 533466067 Ref: linalg c gsl_linalg_QRPT_lssolve2466935 Ref: 534466935 Ref: linalg c gsl_linalg_QRPT_QRsolve467912 Ref: 536467912 Ref: linalg c gsl_linalg_QRPT_update468287 Ref: 537468287 Ref: linalg c gsl_linalg_QRPT_Rsolve468805 Ref: 538468805 Ref: linalg c gsl_linalg_QRPT_Rsvx469060 Ref: 539469060 Ref: linalg c 
gsl_linalg_QRPT_rank469418 Ref: 535469418 Ref: linalg c gsl_linalg_QRPT_rcond469876 Ref: 53a469876 Node: LQ Decomposition470332 Ref: linalg lq-decomposition470471 Ref: 53b470471 Ref: linalg c gsl_linalg_LQ_decomp471297 Ref: 53c471297 Ref: linalg c gsl_linalg_LQ_lssolve472097 Ref: 53d472097 Ref: linalg c gsl_linalg_LQ_unpack472709 Ref: 53e472709 Ref: linalg c gsl_linalg_LQ_QTvec473044 Ref: 53f473044 Node: QL Decomposition473272 Ref: linalg ql-decomposition473407 Ref: 540473407 Ref: linalg c gsl_linalg_QL_decomp473885 Ref: 541473885 Ref: linalg c gsl_linalg_QL_unpack474284 Ref: 542474284 Node: Complete Orthogonal Decomposition474619 Ref: linalg complete-orthogonal-decomposition474766 Ref: 543474766 Ref: linalg linalg-cod474766 Ref: 501474766 Ref: linalg c gsl_linalg_COD_decomp476391 Ref: 544476391 Ref: linalg c gsl_linalg_COD_decomp_e476557 Ref: 545476557 Ref: linalg c gsl_linalg_COD_lssolve477636 Ref: 546477636 Ref: linalg c gsl_linalg_COD_lssolve2478547 Ref: 547478547 Ref: linalg c gsl_linalg_COD_unpack479569 Ref: 548479569 Ref: linalg c gsl_linalg_COD_matZ480069 Ref: 549480069 Node: Singular Value Decomposition480572 Ref: linalg singular-value-decomposition480725 Ref: 54a480725 Ref: linalg c gsl_linalg_SV_decomp482352 Ref: 54b482352 Ref: linalg c gsl_linalg_SV_decomp_mod483145 Ref: 54c483145 Ref: linalg c gsl_linalg_SV_decomp_jacobi483526 Ref: 54d483526 Ref: linalg c gsl_linalg_SV_solve483904 Ref: 54e483904 Ref: linalg c gsl_linalg_SV_leverage484757 Ref: 54f484757 Node: Cholesky Decomposition485224 Ref: linalg cholesky-decomposition485374 Ref: 550485374 Ref: linalg sec-cholesky-decomposition485374 Ref: 3f5485374 Ref: linalg c gsl_linalg_cholesky_decomp1486343 Ref: 551486343 Ref: linalg c gsl_linalg_complex_cholesky_decomp486405 Ref: 552486405 Ref: linalg c gsl_linalg_cholesky_decomp487385 Ref: 553487385 Ref: linalg c gsl_linalg_cholesky_solve487539 Ref: 554487539 Ref: linalg c gsl_linalg_complex_cholesky_solve487658 Ref: 555487658 Ref: linalg c gsl_linalg_cholesky_svx488102 Ref: 556488102 Ref: linalg c gsl_linalg_complex_cholesky_svx488198 Ref: 557488198 Ref: linalg c gsl_linalg_cholesky_invert488727 Ref: 558488727 Ref: linalg c gsl_linalg_complex_cholesky_invert488795 Ref: 559488795 Ref: linalg c gsl_linalg_cholesky_decomp2489221 Ref: 55a489221 Ref: linalg c gsl_linalg_cholesky_solve2490247 Ref: 55b490247 Ref: linalg c gsl_linalg_cholesky_svx2490645 Ref: 55c490645 Ref: linalg c gsl_linalg_cholesky_scale491136 Ref: 55d491136 Ref: linalg c gsl_linalg_cholesky_scale_apply491686 Ref: 55e491686 Ref: linalg c gsl_linalg_cholesky_rcond491939 Ref: 55f491939 Node: Pivoted Cholesky Decomposition492458 Ref: linalg pivoted-cholesky-decomposition492611 Ref: 560492611 Ref: linalg c gsl_linalg_pcholesky_decomp493362 Ref: 561493362 Ref: linalg c gsl_linalg_pcholesky_solve494156 Ref: 562494156 Ref: linalg c gsl_linalg_pcholesky_svx494558 Ref: 563494558 Ref: linalg c gsl_linalg_pcholesky_decomp2495071 Ref: 564495071 Ref: linalg c gsl_linalg_pcholesky_solve2496276 Ref: 565496276 Ref: linalg c gsl_linalg_pcholesky_svx2496758 Ref: 566496758 Ref: linalg c gsl_linalg_pcholesky_invert497341 Ref: 567497341 Ref: linalg c gsl_linalg_pcholesky_rcond497678 Ref: 568497678 Node: Modified Cholesky Decomposition498224 Ref: linalg modified-cholesky-decomposition498373 Ref: 569498373 Ref: linalg c gsl_linalg_mcholesky_decomp499458 Ref: 56a499458 Ref: linalg c gsl_linalg_mcholesky_solve500415 Ref: 56b500415 Ref: linalg c gsl_linalg_mcholesky_svx500829 Ref: 56c500829 Ref: linalg c gsl_linalg_mcholesky_rcond501353 Ref: 
56d501353 Node: LDLT Decomposition501895 Ref: linalg ldlt-decomposition502066 Ref: 56e502066 Ref: linalg sec-ldlt-decomposition502066 Ref: 56f502066 Ref: linalg c gsl_linalg_ldlt_decomp502863 Ref: 570502863 Ref: linalg c gsl_linalg_ldlt_solve503640 Ref: 572503640 Ref: linalg c gsl_linalg_ldlt_svx503967 Ref: 573503967 Ref: linalg c gsl_linalg_ldlt_rcond504392 Ref: 571504392 Node: Tridiagonal Decomposition of Real Symmetric Matrices504892 Ref: linalg tridiagonal-decomposition-of-real-symmetric-matrices505079 Ref: 574505079 Ref: linalg c gsl_linalg_symmtd_decomp505375 Ref: 575505375 Ref: linalg c gsl_linalg_symmtd_unpack506041 Ref: 576506041 Ref: linalg c gsl_linalg_symmtd_unpack_T506534 Ref: 577506534 Node: Tridiagonal Decomposition of Hermitian Matrices506911 Ref: linalg tridiagonal-decomposition-of-hermitian-matrices507121 Ref: 578507121 Ref: linalg c gsl_linalg_hermtd_decomp507408 Ref: 579507408 Ref: linalg c gsl_linalg_hermtd_unpack508135 Ref: 57a508135 Ref: linalg c gsl_linalg_hermtd_unpack_T508649 Ref: 57b508649 Node: Hessenberg Decomposition of Real Matrices509033 Ref: linalg hessenberg-decomposition-of-real-matrices509243 Ref: 57c509243 Ref: linalg c gsl_linalg_hessenberg_decomp509728 Ref: 57d509728 Ref: linalg c gsl_linalg_hessenberg_unpack510433 Ref: 57e510433 Ref: linalg c gsl_linalg_hessenberg_unpack_accum510809 Ref: 57f510809 Ref: linalg c gsl_linalg_hessenberg_set_zero511418 Ref: 580511418 Node: Hessenberg-Triangular Decomposition of Real Matrices511714 Ref: linalg hessenberg-triangular-decomposition-of-real-matrices511894 Ref: 581511894 Ref: linalg c gsl_linalg_hesstri_decomp512381 Ref: 582512381 Node: Bidiagonalization512917 Ref: linalg bidiagonalization513072 Ref: 583513072 Ref: linalg c gsl_linalg_bidiag_decomp513424 Ref: 584513424 Ref: linalg c gsl_linalg_bidiag_unpack514164 Ref: 585514164 Ref: linalg c gsl_linalg_bidiag_unpack2514835 Ref: 586514835 Ref: linalg c gsl_linalg_bidiag_unpack_B515348 Ref: 587515348 Node: Givens Rotations515723 Ref: linalg givens-rotations515853 Ref: 588515853 Ref: linalg c gsl_linalg_givens516495 Ref: 589516495 Ref: linalg c gsl_linalg_givens_gv516782 Ref: 58a516782 Node: Householder Transformations517148 Ref: linalg householder-transformations517298 Ref: 58b517298 Ref: linalg c gsl_linalg_householder_transform517801 Ref: 58c517801 Ref: linalg c gsl_linalg_complex_householder_transform517871 Ref: 58d517871 Ref: linalg c gsl_linalg_householder_hm518558 Ref: 58e518558 Ref: linalg c gsl_linalg_complex_householder_hm518661 Ref: 58f518661 Ref: linalg c gsl_linalg_householder_mh519032 Ref: 590519032 Ref: linalg c gsl_linalg_complex_householder_mh519135 Ref: 591519135 Ref: linalg c gsl_linalg_householder_hv519507 Ref: 592519507 Ref: linalg c gsl_linalg_complex_householder_hv519610 Ref: 593519610 Node: Householder solver for linear systems519967 Ref: linalg householder-solver-for-linear-systems520120 Ref: 594520120 Ref: linalg c gsl_linalg_HH_solve520209 Ref: 595520209 Ref: linalg c gsl_linalg_HH_svx520576 Ref: 596520576 Node: Tridiagonal Systems520935 Ref: linalg tridiagonal-systems521079 Ref: 597521079 Ref: linalg c gsl_linalg_solve_tridiag521511 Ref: 598521511 Ref: linalg c gsl_linalg_solve_symm_tridiag522141 Ref: 599522141 Ref: linalg c gsl_linalg_solve_cyc_tridiag522712 Ref: 59a522712 Ref: linalg c gsl_linalg_solve_symm_cyc_tridiag523367 Ref: 59b523367 Node: Triangular Systems523979 Ref: linalg triangular-systems524100 Ref: 59c524100 Ref: linalg c gsl_linalg_tri_invert524151 Ref: 59d524151 Ref: linalg c gsl_linalg_complex_tri_invert524255 Ref: 
59e524255 Ref: linalg c gsl_linalg_tri_LTL524736 Ref: 59f524736 Ref: linalg c gsl_linalg_complex_tri_LHL524789 Ref: 5a0524789 Ref: linalg c gsl_linalg_tri_UL525009 Ref: 5a1525009 Ref: linalg c gsl_linalg_complex_tri_UL525062 Ref: 5a2525062 Ref: linalg c gsl_linalg_tri_rcond525433 Ref: 5a3525433 Node: Banded Systems526051 Ref: linalg banded-systems526162 Ref: 5a4526162 Node: General Banded Format526807 Ref: linalg general-banded-format526912 Ref: 5a5526912 Node: Symmetric Banded Format528429 Ref: linalg sec-symmetric-banded528566 Ref: 5a6528566 Ref: linalg symmetric-banded-format528566 Ref: 5a7528566 Node: Banded LU Decomposition530197 Ref: linalg banded-lu-decomposition530342 Ref: 5a8530342 Ref: linalg c gsl_linalg_LU_band_decomp532274 Ref: 5a9532274 Ref: linalg c gsl_linalg_LU_band_solve533097 Ref: 5aa533097 Ref: linalg c gsl_linalg_LU_band_svx533691 Ref: 5ab533691 Ref: linalg c gsl_linalg_LU_band_unpack534285 Ref: 5ac534285 Node: Banded Cholesky Decomposition534930 Ref: linalg banded-cholesky-decomposition535077 Ref: 5ad535077 Ref: linalg c gsl_linalg_cholesky_band_decomp535627 Ref: 5ae535627 Ref: linalg c gsl_linalg_cholesky_band_solve536587 Ref: 5b0536587 Ref: linalg c gsl_linalg_cholesky_band_solvem536706 Ref: 5b1536706 Ref: linalg c gsl_linalg_cholesky_band_svx537081 Ref: 5b2537081 Ref: linalg c gsl_linalg_cholesky_band_svxm537177 Ref: 5b3537177 Ref: linalg c gsl_linalg_cholesky_band_invert537675 Ref: 5b4537675 Ref: linalg c gsl_linalg_cholesky_band_unpack538101 Ref: 5b5538101 Ref: linalg c gsl_linalg_cholesky_band_scale538449 Ref: 5b6538449 Ref: linalg c gsl_linalg_cholesky_band_scale_apply539011 Ref: 5b7539011 Ref: linalg c gsl_linalg_cholesky_band_rcond539304 Ref: 5af539304 Node: Banded LDLT Decomposition539825 Ref: linalg banded-ldlt-decomposition539940 Ref: 5b8539940 Ref: linalg c gsl_linalg_ldlt_band_decomp540479 Ref: 5b9540479 Ref: linalg c gsl_linalg_ldlt_band_solve541077 Ref: 5ba541077 Ref: linalg c gsl_linalg_ldlt_band_svx541431 Ref: 5bb541431 Ref: linalg c gsl_linalg_ldlt_band_unpack541888 Ref: 5bc541888 Ref: linalg c gsl_linalg_ldlt_band_rcond542313 Ref: 5bd542313 Node: Balancing542825 Ref: linalg balancing542929 Ref: 5be542929 Ref: linalg id1542929 Ref: 5bf542929 Ref: linalg c gsl_linalg_balance_matrix543354 Ref: 5c0543354 Node: Examples<9>543630 Ref: linalg examples543753 Ref: 5c1543753 Node: References and Further Reading<9>545600 Ref: linalg references-and-further-reading545705 Ref: 5c2545705 Node: Eigensystems548095 Ref: eigen doc548208 Ref: 5c3548208 Ref: eigen eigensystems548208 Ref: 5c4548208 Node: Real Symmetric Matrices549331 Ref: eigen real-symmetric-matrices549439 Ref: 5c5549439 Ref: eigen c gsl_eigen_symm_workspace549780 Ref: 5c6549780 Ref: eigen c gsl_eigen_symm_alloc549919 Ref: 5c7549919 Ref: eigen c gsl_eigen_symm_free550194 Ref: 5c8550194 Ref: eigen c gsl_eigen_symm550350 Ref: 5c9550350 Ref: eigen c gsl_eigen_symmv_workspace550879 Ref: 5ca550879 Ref: eigen c gsl_eigen_symmv_alloc551035 Ref: 5cb551035 Ref: eigen c gsl_eigen_symmv_free551329 Ref: 5cc551329 Ref: eigen c gsl_eigen_symmv551487 Ref: 5cd551487 Node: Complex Hermitian Matrices552347 Ref: eigen complex-hermitian-matrices552490 Ref: 5ce552490 Ref: eigen c gsl_eigen_herm_workspace552674 Ref: 5cf552674 Ref: eigen c gsl_eigen_herm_alloc552813 Ref: 5d0552813 Ref: eigen c gsl_eigen_herm_free553091 Ref: 5d1553091 Ref: eigen c gsl_eigen_herm553247 Ref: 5d2553247 Ref: eigen c gsl_eigen_hermv_workspace553881 Ref: 5d3553881 Ref: eigen c gsl_eigen_hermv_alloc554037 Ref: 5d4554037 Ref: eigen c 
gsl_eigen_hermv_free554334 Ref: 5d5554334 Ref: eigen c gsl_eigen_hermv554492 Ref: 5d6554492 Node: Real Nonsymmetric Matrices555473 Ref: eigen real-nonsymmetric-matrices555641 Ref: 5d7555641 Ref: eigen c gsl_eigen_nonsymm_workspace556141 Ref: 5d8556141 Ref: eigen c gsl_eigen_nonsymm_alloc556286 Ref: 5d9556286 Ref: eigen c gsl_eigen_nonsymm_free556570 Ref: 5da556570 Ref: eigen c gsl_eigen_nonsymm_params556742 Ref: 5db556742 Ref: eigen c gsl_eigen_nonsymm558452 Ref: 5dc558452 Ref: eigen c gsl_eigen_nonsymm_Z559281 Ref: 5dd559281 Ref: eigen c gsl_eigen_nonsymmv_workspace559579 Ref: 5de559579 Ref: eigen c gsl_eigen_nonsymmv_alloc559741 Ref: 5df559741 Ref: eigen c gsl_eigen_nonsymmv_free560044 Ref: 5e0560044 Ref: eigen c gsl_eigen_nonsymmv_params560218 Ref: 5e1560218 Ref: eigen c gsl_eigen_nonsymmv560760 Ref: 5e2560760 Ref: eigen c gsl_eigen_nonsymmv_Z561642 Ref: 5e3561642 Node: Real Generalized Symmetric-Definite Eigensystems561955 Ref: eigen real-generalized-symmetric-definite-eigensystems562148 Ref: 5e4562148 Ref: eigen c gsl_eigen_gensymm_workspace563043 Ref: 5e5563043 Ref: eigen c gsl_eigen_gensymm_alloc563197 Ref: 5e6563197 Ref: eigen c gsl_eigen_gensymm_free563503 Ref: 5e7563503 Ref: eigen c gsl_eigen_gensymm563675 Ref: 5e8563675 Ref: eigen c gsl_eigen_gensymmv_workspace564116 Ref: 5e9564116 Ref: eigen c gsl_eigen_gensymmv_alloc564287 Ref: 5ea564287 Ref: eigen c gsl_eigen_gensymmv_free564617 Ref: 5eb564617 Ref: eigen c gsl_eigen_gensymmv564791 Ref: 5ec564791 Node: Complex Generalized Hermitian-Definite Eigensystems565353 Ref: eigen complex-generalized-hermitian-definite-eigensystems565562 Ref: 5ed565562 Ref: eigen c gsl_eigen_genherm_workspace566272 Ref: 5ee566272 Ref: eigen c gsl_eigen_genherm_alloc566426 Ref: 5ef566426 Ref: eigen c gsl_eigen_genherm_free566740 Ref: 5f0566740 Ref: eigen c gsl_eigen_genherm566912 Ref: 5f1566912 Ref: eigen c gsl_eigen_genhermv_workspace567382 Ref: 5f2567382 Ref: eigen c gsl_eigen_genhermv_alloc567553 Ref: 5f3567553 Ref: eigen c gsl_eigen_genhermv_free567886 Ref: 5f4567886 Ref: eigen c gsl_eigen_genhermv568060 Ref: 5f5568060 Node: Real Generalized Nonsymmetric Eigensystems568649 Ref: eigen real-generalized-nonsymmetric-eigensystems568846 Ref: 5f6568846 Ref: eigen c gsl_eigen_gen_workspace570638 Ref: 5f7570638 Ref: eigen c gsl_eigen_gen_alloc570778 Ref: 5f8570778 Ref: eigen c gsl_eigen_gen_free571069 Ref: 5f9571069 Ref: eigen c gsl_eigen_gen_params571223 Ref: 5fa571223 Ref: eigen c gsl_eigen_gen572393 Ref: 5fb572393 Ref: eigen c gsl_eigen_gen_QZ573505 Ref: 5fc573505 Ref: eigen c gsl_eigen_genv_workspace573898 Ref: 5fd573898 Ref: eigen c gsl_eigen_genv_alloc574055 Ref: 5fe574055 Ref: eigen c gsl_eigen_genv_free574366 Ref: 5ff574366 Ref: eigen c gsl_eigen_genv574522 Ref: 600574522 Ref: eigen c gsl_eigen_genv_QZ575608 Ref: 601575608 Node: Sorting Eigenvalues and Eigenvectors576040 Ref: eigen sorting-eigenvalues-and-eigenvectors576198 Ref: 602576198 Ref: eigen c gsl_eigen_symmv_sort576283 Ref: 603576283 Ref: eigen c gsl_eigen_symmv_sort gsl_eigen_sort_t576703 Ref: 604576703 Ref: eigen c gsl_eigen_hermv_sort577418 Ref: 605577418 Ref: eigen c gsl_eigen_nonsymmv_sort577865 Ref: 606577865 Ref: eigen c gsl_eigen_gensymmv_sort578454 Ref: 607578454 Ref: eigen c gsl_eigen_genhermv_sort578893 Ref: 608578893 Ref: eigen c gsl_eigen_genv_sort579343 Ref: 609579343 Node: Examples<10>579978 Ref: eigen examples580128 Ref: 60a580128 Node: References and Further Reading<10>585337 Ref: eigen references-and-further-reading585442 Ref: 60b585442 Node: Fast Fourier Transforms 
gsl_wavelet1148813 Ref: a521148813 Ref: dwt c gsl_wavelet_alloc1148954 Ref: a531148954 Ref: dwt c gsl_wavelet_type1149395 Ref: a541149395 Ref: dwt c gsl_wavelet_type gsl_wavelet_daubechies1149423 Ref: a551149423 Ref: dwt c gsl_wavelet_type gsl_wavelet_daubechies_centered1149495 Ref: a561149495 Ref: dwt c gsl_wavelet_type gsl_wavelet_haar1149774 Ref: a571149774 Ref: dwt c gsl_wavelet_type gsl_wavelet_haar_centered1149840 Ref: a581149840 Ref: dwt c gsl_wavelet_type gsl_wavelet_bspline1150035 Ref: a591150035 Ref: dwt c gsl_wavelet_type gsl_wavelet_bspline_centered1150104 Ref: a5a1150104 Ref: dwt c gsl_wavelet_name1150610 Ref: a5b1150610 Ref: dwt c gsl_wavelet_free1150772 Ref: a5c1150772 Ref: dwt c gsl_wavelet_workspace1150885 Ref: a5d1150885 Ref: dwt c gsl_wavelet_workspace_alloc1151065 Ref: a5e1151065 Ref: dwt c gsl_wavelet_workspace_free1151654 Ref: a5f1151654 Node: Transform Functions1151808 Ref: dwt transform-functions1151927 Ref: a601151927 Node: Wavelet transforms in one dimension1152365 Ref: dwt wavelet-transforms-in-one-dimension1152501 Ref: a611152501 Ref: dwt c gsl_wavelet_transform1152588 Ref: a621152588 Ref: dwt c gsl_wavelet_transform_forward1152766 Ref: a631152766 Ref: dwt c gsl_wavelet_transform_inverse1152925 Ref: a641152925 Node: Wavelet transforms in two dimension1154541 Ref: dwt wavelet-transforms-in-two-dimension1154677 Ref: a651154677 Ref: dwt c gsl_wavelet2d_transform1155988 Ref: a661155988 Ref: dwt c gsl_wavelet2d_transform_forward1156183 Ref: a671156183 Ref: dwt c gsl_wavelet2d_transform_inverse1156359 Ref: a681156359 Ref: dwt c gsl_wavelet2d_transform_matrix1157522 Ref: a691157522 Ref: dwt c gsl_wavelet2d_transform_matrix_forward1157685 Ref: a6a1157685 Ref: dwt c gsl_wavelet2d_transform_matrix_inverse1157819 Ref: a6b1157819 Ref: dwt c gsl_wavelet2d_nstransform1158062 Ref: a6c1158062 Ref: dwt c gsl_wavelet2d_nstransform_forward1158259 Ref: a6d1158259 Ref: dwt c gsl_wavelet2d_nstransform_inverse1158437 Ref: a6e1158437 Ref: dwt c gsl_wavelet2d_nstransform_matrix1158711 Ref: a6f1158711 Ref: dwt c gsl_wavelet2d_nstransform_matrix_forward1158876 Ref: a701158876 Ref: dwt c gsl_wavelet2d_nstransform_matrix_inverse1159012 Ref: a711159012 Node: Examples<26>1159287 Ref: dwt examples1159426 Ref: a721159426 Ref: dwt fig-dwt1161502 Ref: a731161502 Node: References and Further Reading<27>1161677 Ref: dwt references-and-further-reading1161788 Ref: a741161788 Node: Discrete Hankel Transforms1163874 Ref: dht doc1164005 Ref: a751164005 Ref: dht discrete-hankel-transforms1164005 Ref: a761164005 Node: Definitions<3>1164352 Ref: dht definitions1164451 Ref: a771164451 Node: Functions<2>1167662 Ref: dht functions1167804 Ref: a7c1167804 Ref: dht c gsl_dht1167835 Ref: a781167835 Ref: dht c gsl_dht_alloc1167911 Ref: a791167911 Ref: dht c gsl_dht_init1168070 Ref: a7a1168070 Ref: dht c gsl_dht_new1168264 Ref: a7d1168264 Ref: dht c gsl_dht_free1168539 Ref: a7e1168539 Ref: dht c gsl_dht_apply1168639 Ref: a7b1168639 Ref: dht c gsl_dht_x_sample1169096 Ref: a7f1169096 Ref: dht c gsl_dht_k_sample1169379 Ref: a801169379 Node: References and Further Reading<28>1169563 Ref: dht references-and-further-reading1169682 Ref: a811169682 Node: One Dimensional Root-Finding1169963 Ref: roots doc1170104 Ref: a821170104 Ref: roots one-dimensional-root-finding1170104 Ref: a831170104 Node: Overview1171207 Ref: roots overview1171297 Ref: a841171297 Node: Caveats1173091 Ref: roots caveats1173213 Ref: a871173213 Node: Initializing the Solver1174827 Ref: roots initializing-the-solver1174972 Ref: a881174972 Ref: roots c 
gsl_root_fsolver1175031 Ref: a851175031 Ref: roots c gsl_root_fdfsolver1175156 Ref: a861175156 Ref: roots c gsl_root_fsolver_alloc1175276 Ref: a891175276 Ref: roots c gsl_root_fdfsolver_alloc1175896 Ref: a8a1175896 Ref: roots c gsl_root_fsolver_set1176549 Ref: a8b1176549 Ref: roots c gsl_root_fdfsolver_set1176873 Ref: a8c1176873 Ref: roots c gsl_root_fsolver_free1177169 Ref: a8d1177169 Ref: roots c gsl_root_fdfsolver_free1177232 Ref: a8e1177232 Ref: roots c gsl_root_fsolver_name1177388 Ref: a8f1177388 Ref: roots c gsl_root_fdfsolver_name1177474 Ref: a901177474 Node: Providing the function to solve1177787 Ref: roots providing-function-to-solve1177950 Ref: a911177950 Ref: roots providing-the-function-to-solve1177950 Ref: a921177950 Ref: roots c gsl_function1178252 Ref: a931178252 Ref: roots c gsl_function_fdf1179390 Ref: a941179390 Node: Search Bounds and Guesses1181484 Ref: roots search-bounds-and-guesses1181633 Ref: a951181633 Node: Iteration1182331 Ref: roots iteration1182475 Ref: a961182475 Ref: roots c gsl_root_fsolver_iterate1182801 Ref: a971182801 Ref: roots c gsl_root_fdfsolver_iterate1182866 Ref: a981182866 Ref: roots c gsl_root_fsolver_root1183696 Ref: a991183696 Ref: roots c gsl_root_fdfsolver_root1183767 Ref: a9a1183767 Ref: roots c gsl_root_fsolver_x_lower1183949 Ref: a9b1183949 Ref: roots c gsl_root_fsolver_x_upper1184033 Ref: a9c1184033 Node: Search Stopping Parameters1184213 Ref: roots search-stopping-parameters1184358 Ref: a9d1184358 Ref: roots c gsl_root_test_interval1184837 Ref: a9e1184837 Ref: roots c gsl_root_test_delta1185868 Ref: a9f1185868 Ref: roots c gsl_root_test_residual1186325 Ref: aa01186325 Node: Root Bracketing Algorithms1186853 Ref: roots root-bracketing-algorithms1187030 Ref: aa11187030 Ref: roots c gsl_root_fsolver_type1187597 Ref: aa21187597 Ref: roots c gsl_root_fsolver_type gsl_root_fsolver_bisection1187630 Ref: aa31187630 Ref: roots c gsl_root_fsolver_type gsl_root_fsolver_falsepos1188461 Ref: aa41188461 Ref: roots c gsl_root_fsolver_type gsl_root_fsolver_brent1189421 Ref: aa51189421 Node: Root Finding Algorithms using Derivatives1190582 Ref: roots root-finding-algorithms-using-derivatives1190745 Ref: aa61190745 Ref: roots c gsl_root_fdfsolver_type1191263 Ref: aa71191263 Ref: roots c gsl_root_fdfsolver_type gsl_root_fdfsolver_newton1191298 Ref: aa81191298 Ref: roots c gsl_root_fdfsolver_type gsl_root_fdfsolver_secant1191954 Ref: aa91191954 Ref: roots c gsl_root_fdfsolver_type gsl_root_fdfsolver_steffenson1193410 Ref: aaa1193410 Ref: Root Finding Algorithms using Derivatives-Footnote-11194541 Node: Examples<27>1194703 Ref: roots examples1194874 Ref: aab1194874 Node: References and Further Reading<29>1201984 Ref: roots references-and-further-reading1202105 Ref: aac1202105 Node: One Dimensional Minimization1202624 Ref: min doc1202768 Ref: aad1202768 Ref: min one-dimensional-minimization1202768 Ref: aae1202768 Node: Overview<2>1203933 Ref: min overview1204029 Ref: aaf1204029 Ref: min fig-min-interval1204280 Ref: ab01204280 Node: Caveats<2>1205789 Ref: min caveats1205920 Ref: ab21205920 Node: Initializing the Minimizer1207104 Ref: min initializing-the-minimizer1207258 Ref: ab31207258 Ref: min c gsl_min_fminimizer1207323 Ref: ab11207323 Ref: min c gsl_min_fminimizer_alloc1207405 Ref: ab41207405 Ref: min c gsl_min_fminimizer_set1208057 Ref: ab51208057 Ref: min c gsl_min_fminimizer_set_with_values1208600 Ref: ab61208600 Ref: min c gsl_min_fminimizer_free1209080 Ref: ab71209080 Ref: min c gsl_min_fminimizer_name1209238 Ref: ab81209238 Node: Providing the 
function to minimize1209557 Ref: min providing-the-function-to-minimize1209713 Ref: ab91209713 Node: Iteration<2>1210039 Ref: min iteration1210188 Ref: aba1210188 Ref: min c gsl_min_fminimizer_iterate1210520 Ref: abb1210520 Ref: min c gsl_min_fminimizer_x_minimum1211256 Ref: abc1211256 Ref: min c gsl_min_fminimizer_x_upper1211464 Ref: abd1211464 Ref: min c gsl_min_fminimizer_x_lower1211552 Ref: abe1211552 Ref: min c gsl_min_fminimizer_f_minimum1211757 Ref: abf1211757 Ref: min c gsl_min_fminimizer_f_upper1211847 Ref: ac01211847 Ref: min c gsl_min_fminimizer_f_lower1211935 Ref: ac11211935 Node: Stopping Parameters1212210 Ref: min stopping-parameters1212348 Ref: ac21212348 Ref: min c gsl_min_test_interval1212791 Ref: ac31212791 Node: Minimization Algorithms1213848 Ref: min minimization-algorithms1213986 Ref: ac41213986 Ref: min c gsl_min_fminimizer_type1214471 Ref: ac51214471 Ref: min c gsl_min_fminimizer_type gsl_min_fminimizer_goldensection1214506 Ref: ac61214506 Ref: min c gsl_min_fminimizer_type gsl_min_fminimizer_brent1215577 Ref: ac71215577 Ref: min c gsl_min_fminimizer_type gsl_min_fminimizer_quad_golden1216554 Ref: ac81216554 Node: Examples<28>1216788 Ref: min examples1216941 Ref: ac91216941 Node: References and Further Reading<30>1219506 Ref: min references-and-further-reading1219627 Ref: aca1219627 Node: Multidimensional Root-Finding1219946 Ref: multiroots doc1220091 Ref: acb1220091 Ref: multiroots multidimensional-root-finding1220091 Ref: acc1220091 Node: Overview<3>1221385 Ref: multiroots overview1221498 Ref: acd1221498 Node: Initializing the Solver<2>1223584 Ref: multiroots initializing-the-solver1223740 Ref: ad01223740 Ref: multiroots c gsl_multiroot_fsolver1224022 Ref: acf1224022 Ref: multiroots c gsl_multiroot_fdfsolver1224141 Ref: ace1224141 Ref: multiroots c gsl_multiroot_fsolver_alloc1224259 Ref: ad11224259 Ref: multiroots c gsl_multiroot_fdfsolver_alloc1225024 Ref: ad21225024 Ref: multiroots c gsl_multiroot_fsolver_set1225822 Ref: ad31225822 Ref: multiroots c gsl_multiroot_fdfsolver_set1225951 Ref: ad41225951 Ref: multiroots c gsl_multiroot_fsolver_free1226405 Ref: ad51226405 Ref: multiroots c gsl_multiroot_fdfsolver_free1226478 Ref: ad61226478 Ref: multiroots c gsl_multiroot_fsolver_name1226654 Ref: ad71226654 Ref: multiroots c gsl_multiroot_fdfsolver_name1226750 Ref: ad81226750 Node: Providing the function to solve<2>1227077 Ref: multiroots providing-the-function-to-solve1227234 Ref: ad91227234 Ref: multiroots c gsl_multiroot_function1227487 Ref: ada1227487 Ref: multiroots c gsl_multiroot_function_fdf1229021 Ref: adb1229021 Node: Iteration<3>1232103 Ref: multiroots iteration1232263 Ref: adc1232263 Ref: multiroots c gsl_multiroot_fsolver_iterate1232589 Ref: add1232589 Ref: multiroots c gsl_multiroot_fdfsolver_iterate1232674 Ref: ade1232674 Ref: multiroots c gsl_multiroot_fsolver_root1233422 Ref: adf1233422 Ref: multiroots c gsl_multiroot_fdfsolver_root1233530 Ref: ae01233530 Ref: multiroots c gsl_multiroot_fsolver_f1233760 Ref: ae11233760 Ref: multiroots c gsl_multiroot_fdfsolver_f1233865 Ref: ae21233865 Ref: multiroots c gsl_multiroot_fsolver_dx1234119 Ref: ae31234119 Ref: multiroots c gsl_multiroot_fdfsolver_dx1234225 Ref: ae41234225 Node: Search Stopping Parameters<2>1234443 Ref: multiroots search-stopping-parameters1234597 Ref: ae51234597 Ref: multiroots c gsl_multiroot_test_delta1235098 Ref: ae61235098 Ref: multiroots c gsl_multiroot_test_residual1235676 Ref: ae71235676 Node: Algorithms using Derivatives1236230 Ref: multiroots algorithms-using-derivatives1236402 Ref: 
ae81236402 Ref: multiroots c gsl_multiroot_fdfsolver_type1236880 Ref: ae91236880 Ref: multiroots c gsl_multiroot_fdfsolver_type gsl_multiroot_fdfsolver_hybridsj1237014 Ref: aea1237014 Ref: multiroots c gsl_multiroot_fdfsolver_type gsl_multiroot_fdfsolver_hybridj1239874 Ref: aeb1239874 Ref: multiroots c gsl_multiroot_fdfsolver_type gsl_multiroot_fdfsolver_newton1240266 Ref: aec1240266 Ref: multiroots c gsl_multiroot_fdfsolver_type gsl_multiroot_fdfsolver_gnewton1241148 Ref: aed1241148 Node: Algorithms without Derivatives1241780 Ref: multiroots algorithms-without-derivatives1241935 Ref: aee1241935 Ref: multiroots c gsl_multiroot_fsolver_type1242396 Ref: aef1242396 Ref: multiroots c gsl_multiroot_fsolver_type gsl_multiroot_fsolver_hybrids1242530 Ref: af01242530 Ref: multiroots c gsl_multiroot_fsolver_type gsl_multiroot_fsolver_hybrid1243030 Ref: af11243030 Ref: multiroots c gsl_multiroot_fsolver_type gsl_multiroot_fsolver_dnewton1243241 Ref: af21243241 Ref: multiroots c gsl_multiroot_fsolver_type gsl_multiroot_fsolver_broyden1244254 Ref: af31244254 Node: Examples<29>1245441 Ref: multiroots examples1245602 Ref: af41245602 Node: References and Further Reading<31>1254142 Ref: multiroots references-and-further-reading1254264 Ref: af51254264 Node: Multidimensional Minimization1255343 Ref: multimin doc1255488 Ref: af61255488 Ref: multimin multidimensional-minimization1255488 Ref: af71255488 Node: Overview<4>1256751 Ref: multimin overview1256848 Ref: af81256848 Node: Caveats<3>1258862 Ref: multimin caveats1259011 Ref: afb1259011 Node: Initializing the Multidimensional Minimizer1259596 Ref: multimin initializing-the-multidimensional-minimizer1259766 Ref: afc1259766 Ref: multimin c gsl_multimin_fdfminimizer1260058 Ref: af91260058 Ref: multimin c gsl_multimin_fminimizer1260165 Ref: afa1260165 Ref: multimin c gsl_multimin_fdfminimizer_alloc1260272 Ref: afd1260272 Ref: multimin c gsl_multimin_fminimizer_alloc1260429 Ref: afe1260429 Ref: multimin c gsl_multimin_fdfminimizer_set1260922 Ref: aff1260922 Ref: multimin c gsl_multimin_fminimizer_set1261104 Ref: b001261104 Ref: multimin c gsl_multimin_fdfminimizer_free1262521 Ref: b011262521 Ref: multimin c gsl_multimin_fminimizer_free1262612 Ref: b021262612 Ref: multimin c gsl_multimin_fdfminimizer_name1262790 Ref: b031262790 Ref: multimin c gsl_multimin_fminimizer_name1262894 Ref: b041262894 Node: Providing a function to minimize1263237 Ref: multimin providing-a-function-to-minimize1263409 Ref: b051263409 Ref: multimin c gsl_multimin_function_fdf1263846 Ref: b061263846 Ref: multimin c gsl_multimin_function1265334 Ref: b071265334 Ref: multimin multimin-paraboloid1265958 Ref: b081265958 Node: Iteration<4>1267469 Ref: multimin iteration1267615 Ref: b091267615 Ref: multimin c gsl_multimin_fdfminimizer_iterate1267919 Ref: b0a1267919 Ref: multimin c gsl_multimin_fminimizer_iterate1268012 Ref: b0b1268012 Ref: multimin c gsl_multimin_fdfminimizer_x1268644 Ref: b0c1268644 Ref: multimin c gsl_multimin_fminimizer_x1268757 Ref: b0d1268757 Ref: multimin c gsl_multimin_fdfminimizer_minimum1268866 Ref: b0e1268866 Ref: multimin c gsl_multimin_fminimizer_minimum1268968 Ref: b0f1268968 Ref: multimin c gsl_multimin_fdfminimizer_gradient1269066 Ref: b101269066 Ref: multimin c gsl_multimin_fdfminimizer_dx1269186 Ref: b111269186 Ref: multimin c gsl_multimin_fminimizer_size1269300 Ref: b121269300 Ref: multimin c gsl_multimin_fdfminimizer_restart1269669 Ref: b131269669 Node: Stopping Criteria1269873 Ref: multimin stopping-criteria1270014 Ref: b141270014 Ref: multimin c 
gsl_multimin_test_gradient1270453 Ref: b151270453 Ref: multimin c gsl_multimin_test_size1271114 Ref: b161271114 Node: Algorithms with Derivatives1271486 Ref: multimin algorithms-with-derivatives1271648 Ref: b171271648 Ref: multimin c gsl_multimin_fdfminimizer_type1271935 Ref: b181271935 Ref: multimin c gsl_multimin_fdfminimizer_type gsl_multimin_fdfminimizer_conjugate_fr1272045 Ref: b191272045 Ref: multimin c gsl_multimin_fdfminimizer_type gsl_multimin_fdfminimizer_conjugate_pr1273166 Ref: b1a1273166 Ref: multimin c gsl_multimin_fdfminimizer_type gsl_multimin_fdfminimizer_vector_bfgs21273667 Ref: b1b1273667 Ref: multimin c gsl_multimin_fdfminimizer_type gsl_multimin_fdfminimizer_vector_bfgs1273784 Ref: b1c1273784 Ref: multimin c gsl_multimin_fdfminimizer_type gsl_multimin_fdfminimizer_steepest_descent1275040 Ref: b1d1275040 Node: Algorithms without Derivatives<2>1275718 Ref: multimin algorithms-without-derivatives1275875 Ref: b1e1275875 Ref: multimin c gsl_multimin_fminimizer_type1276051 Ref: b1f1276051 Ref: multimin c gsl_multimin_fminimizer_type gsl_multimin_fminimizer_nmsimplex21276174 Ref: b201276174 Ref: multimin c gsl_multimin_fminimizer_type gsl_multimin_fminimizer_nmsimplex1276285 Ref: b211276285 Ref: multimin c gsl_multimin_fminimizer_type gsl_multimin_fminimizer_nmsimplex2rand1278672 Ref: b221278672 Node: Examples<30>1279417 Ref: multimin examples1279581 Ref: b231279581 Ref: multimin fig-multimin1282331 Ref: b241282331 Node: References and Further Reading<32>1286087 Ref: multimin references-and-further-reading1286209 Ref: b251286209 Node: Linear Least-Squares Fitting1286948 Ref: lls doc1287095 Ref: b261287095 Ref: lls linear-least-squares-fitting1287095 Ref: b271287095 Node: Overview<5>1288009 Ref: lls overview1288112 Ref: b281288112 Ref: lls sec-lls-overview1288112 Ref: b291288112 Node: Linear regression1290366 Ref: lls linear-regression1290504 Ref: b2a1290504 Node: Linear regression with a constant term1290815 Ref: lls linear-regression-with-a-constant-term1290958 Ref: b2b1290958 Ref: lls c gsl_fit_linear1291182 Ref: b2c1291182 Ref: lls c gsl_fit_wlinear1292268 Ref: b2d1292268 Ref: lls c gsl_fit_linear_est1293394 Ref: b2e1293394 Node: Linear regression without a constant term1293894 Ref: lls linear-regression-without-a-constant-term1294037 Ref: b2f1294037 Ref: lls c gsl_fit_mul1294280 Ref: b301294280 Ref: lls c gsl_fit_wmul1295060 Ref: b311295060 Ref: lls c gsl_fit_mul_est1296041 Ref: b321296041 Node: Multi-parameter regression1296424 Ref: lls multi-parameter-regression1296573 Ref: b331296573 Ref: lls c gsl_multifit_linear_workspace1297969 Ref: b341297969 Ref: lls c gsl_multifit_linear_alloc1298100 Ref: b351298100 Ref: lls c gsl_multifit_linear_free1298505 Ref: b361298505 Ref: lls c gsl_multifit_linear_svd1298674 Ref: b371298674 Ref: lls c gsl_multifit_linear_bsvd1298942 Ref: b381298942 Ref: lls c gsl_multifit_linear1299343 Ref: b391299343 Ref: lls c gsl_multifit_linear_tsvd1300645 Ref: b3a1300645 Ref: lls c gsl_multifit_wlinear1301877 Ref: b3b1301877 Ref: lls c gsl_multifit_wlinear_tsvd1302859 Ref: b3c1302859 Ref: lls c gsl_multifit_linear_est1304116 Ref: b3d1304116 Ref: lls c gsl_multifit_linear_residuals1304568 Ref: b3e1304568 Ref: lls c gsl_multifit_linear_rank1304891 Ref: b3f1304891 Node: Regularized regression1305313 Ref: lls regularized-regression1305469 Ref: b401305469 Ref: lls sec-regularized-regression1305469 Ref: b411305469 Ref: lls c gsl_multifit_linear_stdform11310776 Ref: b421310776 Ref: lls c gsl_multifit_linear_wstdform11310976 Ref: b461310976 Ref: lls c 
gsl_multifit_linear_L_decomp1312382 Ref: b481312382 Ref: lls c gsl_multifit_linear_stdform21313115 Ref: b431313115 Ref: lls c gsl_multifit_linear_wstdform21313366 Ref: b491313366 Ref: lls c gsl_multifit_linear_solve1314674 Ref: b4a1314674 Ref: lls c gsl_multifit_linear_genform11315864 Ref: b441315864 Ref: lls c gsl_multifit_linear_genform21316450 Ref: b451316450 Ref: lls c gsl_multifit_linear_wgenform21316712 Ref: b4b1316712 Ref: lls c gsl_multifit_linear_applyW1317638 Ref: b471317638 Ref: lls c gsl_multifit_linear_lcurve1318273 Ref: b4c1318273 Ref: lls c gsl_multifit_linear_lcurvature1319375 Ref: b4d1319375 Ref: lls c gsl_multifit_linear_lcorner1320455 Ref: b4e1320455 Ref: lls c gsl_multifit_linear_lcorner21321507 Ref: b4f1321507 Ref: lls c gsl_multifit_linear_gcv_init1322840 Ref: b501322840 Ref: lls c gsl_multifit_linear_gcv_curve1323489 Ref: b511323489 Ref: lls c gsl_multifit_linear_gcv_min1323986 Ref: b521323986 Ref: lls c gsl_multifit_linear_gcv_calc1324605 Ref: b531324605 Ref: lls c gsl_multifit_linear_gcv1324896 Ref: b541324896 Ref: lls c gsl_multifit_linear_Lk1325629 Ref: b551325629 Ref: lls c gsl_multifit_linear_Lsobolev1325976 Ref: b561325976 Ref: lls c gsl_multifit_linear_rcond1326934 Ref: b571326934 Node: Robust linear regression1327345 Ref: lls robust-linear-regression1327501 Ref: b581327501 Ref: lls c gsl_multifit_robust_workspace1330630 Ref: b591330630 Ref: lls c gsl_multifit_robust_alloc1330734 Ref: b5a1330734 Ref: lls c gsl_multifit_robust_alloc gsl_multifit_robust_type1331188 Ref: b5b1331188 Ref: lls c gsl_multifit_robust_alloc gsl_multifit_robust_type gsl_multifit_robust_default1331229 Ref: b5c1331229 Ref: lls c gsl_multifit_robust_alloc gsl_multifit_robust_type gsl_multifit_robust_bisquare1331523 Ref: b5d1331523 Ref: lls c gsl_multifit_robust_alloc gsl_multifit_robust_type gsl_multifit_robust_cauchy1331985 Ref: b5e1331985 Ref: lls c gsl_multifit_robust_alloc gsl_multifit_robust_type gsl_multifit_robust_fair1332591 Ref: b5f1332591 Ref: lls c gsl_multifit_robust_alloc gsl_multifit_robust_type gsl_multifit_robust_huber1332996 Ref: b601332996 Ref: lls c gsl_multifit_robust_alloc gsl_multifit_robust_type gsl_multifit_robust_ols1333682 Ref: b611333682 Ref: lls c gsl_multifit_robust_alloc gsl_multifit_robust_type gsl_multifit_robust_welsch1334125 Ref: b621334125 Ref: lls c gsl_multifit_robust_free1334538 Ref: b631334538 Ref: lls c gsl_multifit_robust_name1334714 Ref: b641334714 Ref: lls c gsl_multifit_robust_tune1334939 Ref: b651334939 Ref: lls c gsl_multifit_robust_maxiter1335358 Ref: b661335358 Ref: lls c gsl_multifit_robust_weights1335703 Ref: b671335703 Ref: lls c gsl_multifit_robust1336471 Ref: b681336471 Ref: lls c gsl_multifit_robust_est1337715 Ref: b6a1337715 Ref: lls c gsl_multifit_robust_residuals1338168 Ref: b6b1338168 Ref: lls c gsl_multifit_robust_statistics1338790 Ref: b691338790 Ref: lls c gsl_multifit_robust_statistics gsl_multifit_robust_stats1339277 Ref: b6c1339277 Node: Large dense linear systems1341211 Ref: lls large-dense-linear-systems1341360 Ref: b6d1341360 Node: Normal Equations Approach1343469 Ref: lls normal-equations-approach1343595 Ref: b6e1343595 Node: Tall Skinny QR TSQR Approach1344904 Ref: lls tall-skinny-qr-tsqr-approach1345080 Ref: b6f1345080 Node: Large Dense Linear Systems Solution Steps1346524 Ref: lls large-dense-linear-systems-solution-steps1346716 Ref: b701346716 Node: Large Dense Linear Least Squares Routines1347671 Ref: lls large-dense-linear-least-squares-routines1347826 Ref: b711347826 Ref: lls c 
gsl_multilarge_linear_workspace1347925 Ref: b721347925 Ref: lls c gsl_multilarge_linear_alloc1348063 Ref: b731348063 Ref: lls c gsl_multilarge_linear_alloc gsl_multilarge_linear_type1348413 Ref: b741348413 Ref: lls c gsl_multilarge_linear_alloc gsl_multilarge_linear_type gsl_multilarge_linear_normal1348637 Ref: b751348637 Ref: lls c gsl_multilarge_linear_alloc gsl_multilarge_linear_type gsl_multilarge_linear_tsqr1349082 Ref: b761349082 Ref: lls c gsl_multilarge_linear_free1349575 Ref: b771349575 Ref: lls c gsl_multilarge_linear_name1349755 Ref: b781349755 Ref: lls c gsl_multilarge_linear_reset1349944 Ref: b791349944 Ref: lls c gsl_multilarge_linear_stdform11350156 Ref: b7a1350156 Ref: lls c gsl_multilarge_linear_wstdform11350360 Ref: b7b1350360 Ref: lls c gsl_multilarge_linear_L_decomp1351527 Ref: b7d1351527 Ref: lls c gsl_multilarge_linear_stdform21351995 Ref: b7f1351995 Ref: lls c gsl_multilarge_linear_wstdform21352235 Ref: b7e1352235 Ref: lls c gsl_multilarge_linear_accumulate1353330 Ref: b811353330 Ref: lls c gsl_multilarge_linear_solve1353846 Ref: b821353846 Ref: lls c gsl_multilarge_linear_genform11354424 Ref: b7c1354424 Ref: lls c gsl_multilarge_linear_genform21355014 Ref: b801355014 Ref: lls c gsl_multilarge_linear_lcurve1355531 Ref: b831355531 Ref: lls c gsl_multilarge_linear_matrix_ptr1356555 Ref: b841356555 Ref: lls c gsl_multilarge_linear_rhs_ptr1356889 Ref: b851356889 Ref: lls c gsl_multilarge_linear_rcond1357239 Ref: b861357239 Node: Troubleshooting1357846 Ref: lls troubleshooting1357983 Ref: b871357983 Node: Examples<31>1358473 Ref: lls examples1358618 Ref: b881358618 Node: Simple Linear Regression Example1358996 Ref: lls simple-linear-regression-example1359128 Ref: b891359128 Ref: lls fig-fit-wlinear1361221 Ref: b8a1361221 Node: Multi-parameter Linear Regression Example1361305 Ref: lls multi-parameter-linear-regression-example1361485 Ref: b8b1361485 Ref: lls fig-fit-wlinear21366355 Ref: b8c1366355 Node: Regularized Linear Regression Example 11366434 Ref: lls regularized-linear-regression-example-11366621 Ref: b8d1366621 Ref: lls fig-regularized1368319 Ref: b8e1368319 Node: Regularized Linear Regression Example 21374090 Ref: lls regularized-linear-regression-example-21374268 Ref: b8f1374268 Ref: lls fig-regularized21375744 Ref: b901375744 Node: Robust Linear Regression Example1381081 Ref: lls robust-linear-regression-example1381257 Ref: b911381257 Ref: lls fig-robust1385004 Ref: b921385004 Node: Large Dense Linear Regression Example1385077 Ref: lls large-dense-linear-regression-example1385205 Ref: b931385205 Ref: lls fig-multilarge1387127 Ref: b941387127 Node: References and Further Reading<33>1392728 Ref: lls references-and-further-reading1392849 Ref: b951392849 Node: Nonlinear Least-Squares Fitting1395023 Ref: nls doc1395154 Ref: b961395154 Ref: nls nonlinear-least-squares-fitting1395154 Ref: b971395154 Node: Overview<6>1396931 Ref: nls overview1397059 Ref: b981397059 Node: Solving the Trust Region Subproblem TRS1401311 Ref: nls solving-the-trust-region-subproblem-trs1401480 Ref: b991401480 Node: Levenberg-Marquardt1403271 Ref: nls levenberg-marquardt1403422 Ref: b9a1403422 Node: Levenberg-Marquardt with Geodesic Acceleration1404822 Ref: nls levenberg-marquardt-with-geodesic-acceleration1404988 Ref: b9b1404988 Node: Dogleg1406499 Ref: nls dogleg1406659 Ref: b9c1406659 Node: Double Dogleg1407893 Ref: nls double-dogleg1408031 Ref: b9d1408031 Node: Two Dimensional Subspace1408623 Ref: nls two-dimensional-subspace1408788 Ref: b9e1408788 Node: Steihaug-Toint Conjugate 
Gradient1409527 Ref: nls steihaug-toint-conjugate-gradient1409670 Ref: b9f1409670 Node: Weighted Nonlinear Least-Squares1410231 Ref: nls weighted-nonlinear-least-squares1410407 Ref: ba01410407 Node: Tunable Parameters1411430 Ref: nls sec-tunable-parameters1411593 Ref: ba21411593 Ref: nls tunable-parameters1411593 Ref: ba31411593 Ref: nls c gsl_multifit_nlinear_parameters1411872 Ref: ba41411872 Ref: nls c gsl_multilarge_nlinear_parameters1412967 Ref: ba51412967 Ref: nls c gsl_multifit_nlinear_trs1414166 Ref: ba61414166 Ref: nls c gsl_multilarge_nlinear_trs1414201 Ref: ba71414201 Ref: nls c gsl_multilarge_nlinear_trs gsl_multifit_nlinear_trs_lm1414387 Ref: ba81414387 Ref: nls c gsl_multilarge_nlinear_trs gsl_multilarge_nlinear_trs_lm1414487 Ref: ba91414487 Ref: nls c gsl_multilarge_nlinear_trs gsl_multifit_nlinear_trs_lmaccel1414651 Ref: baa1414651 Ref: nls c gsl_multilarge_nlinear_trs gsl_multilarge_nlinear_trs_lmaccel1414756 Ref: bab1414756 Ref: nls c gsl_multilarge_nlinear_trs gsl_multifit_nlinear_trs_dogleg1414962 Ref: bac1414962 Ref: nls c gsl_multilarge_nlinear_trs gsl_multilarge_nlinear_trs_dogleg1415066 Ref: bad1415066 Ref: nls c gsl_multilarge_nlinear_trs gsl_multifit_nlinear_trs_ddogleg1415221 Ref: bae1415221 Ref: nls c gsl_multilarge_nlinear_trs gsl_multilarge_nlinear_trs_ddogleg1415326 Ref: baf1415326 Ref: nls c gsl_multilarge_nlinear_trs gsl_multifit_nlinear_trs_subspace2D1415489 Ref: bb01415489 Ref: nls c gsl_multilarge_nlinear_trs gsl_multilarge_nlinear_trs_subspace2D1415597 Ref: bb11415597 Ref: nls c gsl_multilarge_nlinear_trs gsl_multilarge_nlinear_trs_cgst1415761 Ref: bb21415761 Ref: nls c gsl_multifit_nlinear_scale1416000 Ref: bb31416000 Ref: nls c gsl_multilarge_nlinear_scale1416037 Ref: bb41416037 Ref: nls c gsl_multilarge_nlinear_scale gsl_multifit_nlinear_scale_more1416203 Ref: bb51416203 Ref: nls c gsl_multilarge_nlinear_scale gsl_multilarge_nlinear_scale_more1416309 Ref: bb61416309 Ref: nls c gsl_multilarge_nlinear_scale gsl_multifit_nlinear_scale_levenberg1417326 Ref: bb71417326 Ref: nls c gsl_multilarge_nlinear_scale gsl_multilarge_nlinear_scale_levenberg1417437 Ref: bb81417437 Ref: nls c gsl_multilarge_nlinear_scale gsl_multifit_nlinear_scale_marquardt1417993 Ref: bb91417993 Ref: nls c gsl_multilarge_nlinear_scale gsl_multilarge_nlinear_scale_marquardt1418104 Ref: bba1418104 Ref: nls c gsl_multifit_nlinear_solver1418531 Ref: bbb1418531 Ref: nls c gsl_multilarge_nlinear_solver1418569 Ref: bbc1418569 Ref: nls c gsl_multilarge_nlinear_solver gsl_multifit_nlinear_solver_qr1418923 Ref: bbd1418923 Ref: nls c gsl_multilarge_nlinear_solver gsl_multifit_nlinear_solver_cholesky1419365 Ref: bbe1419365 Ref: nls c gsl_multilarge_nlinear_solver gsl_multilarge_nlinear_solver_cholesky1419477 Ref: bbf1419477 Ref: nls c gsl_multilarge_nlinear_solver gsl_multifit_nlinear_solver_mcholesky1420341 Ref: bc01420341 Ref: nls c gsl_multilarge_nlinear_solver gsl_multilarge_nlinear_solver_mcholesky1420454 Ref: bc11420454 Ref: nls c gsl_multilarge_nlinear_solver gsl_multifit_nlinear_solver_svd1421158 Ref: bc21421158 Ref: nls c gsl_multifit_nlinear_fdtype1421515 Ref: bc31421515 Ref: nls c gsl_multifit_nlinear_fdtype GSL_MULTIFIT_NLINEAR_FWDIFF1421828 Ref: bc41421828 Ref: nls c gsl_multifit_nlinear_fdtype GSL_MULTIFIT_NLINEAR_CTRDIFF1422724 Ref: bc51422724 Node: Initializing the Solver<3>1424977 Ref: nls initializing-the-solver1425146 Ref: bc61425146 Ref: nls c gsl_multifit_nlinear_type1425205 Ref: bc71425205 Ref: nls c gsl_multifit_nlinear_type gsl_multifit_nlinear_trust1425416 Ref: bc81425416 
Ref: nls c gsl_multifit_nlinear_alloc1425646 Ref: bc91425646 Ref: nls c gsl_multilarge_nlinear_alloc1425865 Ref: bca1425865 Ref: nls c gsl_multifit_nlinear_default_parameters1427546 Ref: bcb1427546 Ref: nls c gsl_multilarge_nlinear_default_parameters1427661 Ref: bcc1427661 Ref: nls c gsl_multifit_nlinear_init1428043 Ref: bcd1428043 Ref: nls c gsl_multifit_nlinear_winit1428195 Ref: ba11428195 Ref: nls c gsl_multilarge_nlinear_init1428371 Ref: bce1428371 Ref: nls c gsl_multifit_nlinear_free1428980 Ref: bd01428980 Ref: nls c gsl_multilarge_nlinear_free1429071 Ref: bd11429071 Ref: nls c gsl_multifit_nlinear_name1429258 Ref: bd21429258 Ref: nls c gsl_multilarge_nlinear_name1429362 Ref: bd31429362 Ref: nls c gsl_multifit_nlinear_trs_name1429700 Ref: bd41429700 Ref: nls c gsl_multilarge_nlinear_trs_name1429808 Ref: bd51429808 Node: Providing the Function to be Minimized1430185 Ref: nls providing-the-function-to-be-minimized1430348 Ref: bd61430348 Ref: nls sec-providing-function-minimized1430348 Ref: bcf1430348 Ref: nls c gsl_multifit_nlinear_fdf1430632 Ref: bd71430632 Ref: nls c gsl_multilarge_nlinear_fdf1433759 Ref: bd81433759 Node: Iteration<5>1437617 Ref: nls iteration1437777 Ref: bd91437777 Ref: nls c gsl_multifit_nlinear_iterate1437974 Ref: bda1437974 Ref: nls c gsl_multilarge_nlinear_iterate1438067 Ref: bdb1438067 Ref: nls c gsl_multifit_nlinear_position1439088 Ref: bdc1439088 Ref: nls c gsl_multilarge_nlinear_position1439208 Ref: bdd1439208 Ref: nls c gsl_multifit_nlinear_residual1439446 Ref: bde1439446 Ref: nls c gsl_multilarge_nlinear_residual1439566 Ref: bdf1439566 Ref: nls c gsl_multifit_nlinear_jac1439874 Ref: be01439874 Ref: nls c gsl_multifit_nlinear_niter1440201 Ref: be11440201 Ref: nls c gsl_multilarge_nlinear_niter1440301 Ref: be21440301 Ref: nls c gsl_multifit_nlinear_rcond1440618 Ref: be31440618 Ref: nls c gsl_multilarge_nlinear_rcond1440730 Ref: be41440730 Ref: nls c gsl_multifit_nlinear_avratio1442285 Ref: be51442285 Ref: nls c gsl_multilarge_nlinear_avratio1442387 Ref: be61442387 Node: Testing for Convergence1442828 Ref: nls testing-for-convergence1442967 Ref: be71442967 Ref: nls c gsl_multifit_nlinear_test1443455 Ref: be81443455 Ref: nls c gsl_multilarge_nlinear_test1443629 Ref: be91443629 Node: High Level Driver1445732 Ref: nls high-level-driver1445899 Ref: bea1445899 Ref: nls c gsl_multifit_nlinear_driver1446057 Ref: beb1446057 Ref: nls c gsl_multilarge_nlinear_driver1446383 Ref: bec1446383 Node: Covariance matrix of best fit parameters1448053 Ref: nls covariance-matrix-of-best-fit-parameters1448215 Ref: bee1448215 Ref: nls c gsl_multifit_nlinear_covar1448310 Ref: bef1448310 Ref: nls c gsl_multilarge_nlinear_covar1448427 Ref: bf01448427 Node: Troubleshooting<2>1450259 Ref: nls sec-nlinear-troubleshooting1450416 Ref: bed1450416 Ref: nls troubleshooting1450416 Ref: bf11450416 Node: Examples<32>1452241 Ref: nls examples1452392 Ref: bf21452392 Node: Exponential Fitting Example1452707 Ref: nls exponential-fitting-example1452824 Ref: bf31452824 Ref: nls fig-fit-exp1456770 Ref: bf41456770 Node: Geodesic Acceleration Example 11463120 Ref: nls geodesic-acceleration-example-11463277 Ref: bf51463277 Ref: nls fig-nlfit21465048 Ref: bf61465048 Node: Geodesic Acceleration Example 21470463 Ref: nls geodesic-acceleration-example-21470622 Ref: bf71470622 Ref: nls fig-nlfit2b1474413 Ref: bf81474413 Node: Comparing TRS Methods Example1482136 Ref: nls comparing-trs-methods-example1482301 Ref: bf91482301 Ref: nls fig-nlfit31484610 Ref: bfa1484610 Node: Large Nonlinear Least Squares 
Example1490949 Ref: nls large-nonlinear-least-squares-example1491074 Ref: bfb1491074 Node: References and Further Reading<34>1499873 Ref: nls references-and-further-reading1499997 Ref: bfc1499997 Node: Basis Splines1501447 Ref: bspline doc1501565 Ref: bfd1501565 Ref: bspline basis-splines1501565 Ref: bfe1501565 Ref: bspline chap-basis-splines1501565 Ref: 9b11501565 Node: Overview<7>1502326 Ref: bspline overview1502430 Ref: bff1502430 Node: Initializing the B-splines solver1503652 Ref: bspline initializing-the-b-splines-solver1503794 Ref: c001503794 Ref: bspline c gsl_bspline_workspace1503873 Ref: c011503873 Ref: bspline c gsl_bspline_alloc1503990 Ref: c021503990 Ref: bspline c gsl_bspline_free1504417 Ref: c031504417 Node: Constructing the knots vector1504567 Ref: bspline constructing-the-knots-vector1504721 Ref: c041504721 Ref: bspline c gsl_bspline_knots1504792 Ref: c051504792 Ref: bspline c gsl_bspline_knots_uniform1505022 Ref: c061505022 Node: Evaluation of B-splines1505353 Ref: bspline evaluation-of-b-splines1505508 Ref: c071505508 Ref: bspline c gsl_bspline_eval1505567 Ref: c081505567 Ref: bspline c gsl_bspline_eval_nonzero1506167 Ref: c0a1506167 Ref: bspline c gsl_bspline_ncoeffs1506905 Ref: c091506905 Node: Evaluation of B-spline derivatives1507072 Ref: bspline evaluation-of-b-spline-derivatives1507233 Ref: c0b1507233 Ref: bspline c gsl_bspline_deriv_eval1507314 Ref: c0c1507314 Ref: bspline c gsl_bspline_deriv_eval_nonzero1508161 Ref: c0d1508161 Node: Working with the Greville abscissae1509091 Ref: bspline working-with-the-greville-abscissae1509241 Ref: c0e1509241 Ref: bspline c gsl_bspline_greville_abscissa1509774 Ref: c0f1509774 Node: Examples<33>1510076 Ref: bspline examples1510226 Ref: c101510226 Ref: bspline fig-bspline1513975 Ref: c111513975 Node: References and Further Reading<35>1514048 Ref: bspline references-and-further-reading1514154 Ref: c121514154 Node: Sparse Matrices1514844 Ref: spmatrix doc1514950 Ref: c131514950 Ref: spmatrix sparse-matrices1514950 Ref: c141514950 Node: Data types<2>1516137 Ref: spmatrix data-types1516241 Ref: c151516241 Node: Sparse Matrix Storage Formats1518441 Ref: spmatrix sparse-matrix-storage-formats1518565 Ref: c161518565 Node: Coordinate Storage COO1519403 Ref: spmatrix coordinate-storage-coo1519529 Ref: c171519529 Ref: spmatrix sec-spmatrix-coo1519529 Ref: c181519529 Node: Compressed Sparse Column CSC1521634 Ref: spmatrix compressed-sparse-column-csc1521794 Ref: c191521794 Ref: spmatrix sec-spmatrix-csc1521794 Ref: c1a1521794 Node: Compressed Sparse Row CSR1523151 Ref: spmatrix compressed-sparse-row-csr1523280 Ref: c1b1523280 Ref: spmatrix sec-spmatrix-csr1523280 Ref: c1c1523280 Node: Overview<8>1524567 Ref: spmatrix overview1524688 Ref: c1d1524688 Ref: spmatrix c gsl_spmatrix1524910 Ref: c1e1524910 Node: Allocation1527387 Ref: spmatrix allocation1527504 Ref: c1f1527504 Ref: spmatrix c gsl_spmatrix_alloc1527886 Ref: c201527886 Ref: spmatrix c gsl_spmatrix_alloc_nzmax1528789 Ref: c211528789 Ref: spmatrix c gsl_spmatrix_alloc_nzmax GSL_SPMATRIX_COO1529809 Ref: c231529809 Ref: spmatrix c gsl_spmatrix_alloc_nzmax GSL_SPMATRIX_CSC1529904 Ref: c241529904 Ref: spmatrix c gsl_spmatrix_alloc_nzmax GSL_SPMATRIX_CSR1530003 Ref: c251530003 Ref: spmatrix c gsl_spmatrix_realloc1530180 Ref: c221530180 Ref: spmatrix c gsl_spmatrix_free1530675 Ref: c271530675 Node: Accessing Matrix Elements1530912 Ref: spmatrix accessing-matrix-elements1531046 Ref: c281531046 Ref: spmatrix c gsl_spmatrix_get1531109 Ref: c291531109 Ref: spmatrix c gsl_spmatrix_set1531404 
Ref: c261531404 Ref: spmatrix c gsl_spmatrix_ptr1531693 Ref: c2a1531693 Node: Initializing Matrix Elements1532124 Ref: spmatrix initializing-matrix-elements1532276 Ref: c2b1532276 Ref: spmatrix c gsl_spmatrix_set_zero1532603 Ref: c2c1532603 Node: Reading and Writing Matrices1533009 Ref: spmatrix reading-and-writing-matrices1533152 Ref: c2d1533152 Ref: spmatrix c gsl_spmatrix_fwrite1533221 Ref: c2e1533221 Ref: spmatrix c gsl_spmatrix_fread1533750 Ref: c2f1533750 Ref: spmatrix c gsl_spmatrix_fprintf1534474 Ref: c301534474 Ref: spmatrix c gsl_spmatrix_fscanf1535181 Ref: c311535181 Node: Copying Matrices1535661 Ref: spmatrix copying-matrices1535803 Ref: c321535803 Ref: spmatrix c gsl_spmatrix_memcpy1535848 Ref: c331535848 Node: Exchanging Rows and Columns1536226 Ref: spmatrix exchanging-rows-and-columns1536357 Ref: c341536357 Ref: spmatrix c gsl_spmatrix_transpose_memcpy1536424 Ref: c351536424 Ref: spmatrix c gsl_spmatrix_transpose1536888 Ref: c361536888 Node: Matrix Operations1537589 Ref: spmatrix matrix-operations1537721 Ref: c371537721 Ref: spmatrix c gsl_spmatrix_scale1537770 Ref: c381537770 Ref: spmatrix c gsl_spmatrix_scale_columns1538114 Ref: c391538114 Ref: spmatrix c gsl_spmatrix_scale_rows1538609 Ref: c3a1538609 Ref: spmatrix c gsl_spmatrix_add1539095 Ref: c3b1539095 Ref: spmatrix c gsl_spmatrix_dense_add1539379 Ref: c3c1539379 Ref: spmatrix c gsl_spmatrix_dense_sub1539863 Ref: c3d1539863 Node: Matrix Properties1540354 Ref: spmatrix matrix-properties1540495 Ref: c3e1540495 Ref: spmatrix c gsl_spmatrix_type1540544 Ref: c3f1540544 Ref: spmatrix c gsl_spmatrix_nnz1540965 Ref: c401540965 Ref: spmatrix c gsl_spmatrix_equal1541201 Ref: c411541201 Ref: spmatrix c gsl_spmatrix_norm11541644 Ref: c421541644 Node: Finding Maximum and Minimum Elements1541986 Ref: spmatrix finding-maximum-and-minimum-elements1542127 Ref: c431542127 Ref: spmatrix c gsl_spmatrix_minmax1542214 Ref: c441542214 Ref: spmatrix c gsl_spmatrix_min_index1542611 Ref: c451542611 Node: Compressed Format1543098 Ref: spmatrix compressed-format1543266 Ref: c461543266 Ref: spmatrix c gsl_spmatrix_csc1543395 Ref: c471543395 Ref: spmatrix c gsl_spmatrix_csr1543774 Ref: c481543774 Ref: spmatrix c gsl_spmatrix_compress1544150 Ref: c491544150 Node: Conversion Between Sparse and Dense Matrices1544655 Ref: spmatrix conversion-between-sparse-and-dense-matrices1544799 Ref: c4a1544799 Ref: spmatrix c gsl_spmatrix_d2sp1545048 Ref: c4b1545048 Ref: spmatrix c gsl_spmatrix_sp2d1545315 Ref: c4c1545315 Node: Examples<34>1545617 Ref: spmatrix examples1545778 Ref: c4d1545778 Node: References and Further Reading<36>1549365 Ref: spmatrix references-and-further-reading1549473 Ref: c4e1549473 Node: Sparse BLAS Support1549794 Ref: spblas doc1549908 Ref: c4f1549908 Ref: spblas sparse-blas-support1549908 Ref: c501549908 Node: Sparse BLAS operations1550452 Ref: spblas sparse-blas-operations1550574 Ref: c511550574 Ref: spblas c gsl_spblas_dgemv1550631 Ref: c521550631 Ref: spblas c gsl_spblas_dgemm1551198 Ref: c531551198 Node: References and Further Reading<37>1551454 Ref: spblas references-and-further-reading1551576 Ref: c541551576 Node: Sparse Linear Algebra1551895 Ref: splinalg doc1552012 Ref: c551552012 Ref: splinalg sparse-linear-algebra1552012 Ref: c561552012 Node: Overview<9>1552537 Ref: splinalg overview1552640 Ref: c571552640 Node: Sparse Iterative Solvers1553266 Ref: splinalg sparse-iterative-solvers1553390 Ref: c581553390 Node: Overview<10>1553564 Ref: splinalg id11553680 Ref: c591553680 Node: Types of Sparse Iterative Solvers1554408 Ref: 
splinalg types-of-sparse-iterative-solvers1554567 Ref: c5a1554567 Ref: splinalg c gsl_splinalg_itersolve_type1554736 Ref: c5b1554736 Ref: splinalg c gsl_splinalg_itersolve_type gsl_splinalg_itersolve_gmres1554775 Ref: c5c1554775 Node: Iterating the Sparse Linear System1556803 Ref: splinalg iterating-the-sparse-linear-system1556941 Ref: c5e1556941 Ref: splinalg c gsl_splinalg_itersolve_alloc1557152 Ref: c5f1557152 Ref: splinalg c gsl_splinalg_itersolve_free1557708 Ref: c601557708 Ref: splinalg c gsl_splinalg_itersolve_name1557880 Ref: c611557880 Ref: splinalg c gsl_splinalg_itersolve_iterate1558051 Ref: c5d1558051 Ref: splinalg c gsl_splinalg_itersolve_normr1559219 Ref: c621559219 Node: Examples<35>1559480 Ref: splinalg examples1559627 Ref: c631559627 Ref: splinalg fig-splinalg-poisson1560666 Ref: c641560666 Node: References and Further Reading<38>1563957 Ref: splinalg references-and-further-reading1564071 Ref: c651564071 Node: Physical Constants1564454 Ref: const doc1564582 Ref: c661564582 Ref: const physical-constants1564582 Ref: c671564582 Node: Fundamental Constants1565911 Ref: const fundamental-constants1566023 Ref: c681566023 Ref: const c GSL_CONST_MKSA_SPEED_OF_LIGHT1566078 Ref: c691566078 Ref: const c GSL_CONST_MKSA_VACUUM_PERMEABILITY1566159 Ref: c6a1566159 Ref: const c GSL_CONST_MKSA_VACUUM_PERMITTIVITY1566307 Ref: c6b1566307 Ref: const c GSL_CONST_MKSA_PLANCKS_CONSTANT_H1566460 Ref: c6c1566460 Ref: const c GSL_CONST_MKSA_PLANCKS_CONSTANT_HBAR1566536 Ref: c6d1566536 Ref: const c GSL_CONST_NUM_AVOGADRO1566635 Ref: c6e1566635 Ref: const c GSL_CONST_MKSA_FARADAY1566702 Ref: c6f1566702 Ref: const c GSL_CONST_MKSA_BOLTZMANN1566774 Ref: c701566774 Ref: const c GSL_CONST_MKSA_MOLAR_GAS1566844 Ref: c711566844 Ref: const c GSL_CONST_MKSA_STANDARD_GAS_VOLUME1566916 Ref: c721566916 Ref: const c GSL_CONST_MKSA_STEFAN_BOLTZMANN_CONSTANT1566999 Ref: c731566999 Ref: const c GSL_CONST_MKSA_GAUSS1567107 Ref: c741567107 Node: Astronomy and Astrophysics1567177 Ref: const astronomy-and-astrophysics1567324 Ref: c751567324 Ref: const c GSL_CONST_MKSA_ASTRONOMICAL_UNIT1567389 Ref: c761567389 Ref: const c GSL_CONST_MKSA_GRAVITATIONAL_CONSTANT1567505 Ref: c771567505 Ref: const c GSL_CONST_MKSA_LIGHT_YEAR1567592 Ref: c781567592 Ref: const c GSL_CONST_MKSA_PARSEC1567670 Ref: c791567670 Ref: const c GSL_CONST_MKSA_GRAV_ACCEL1567740 Ref: c7a1567740 Ref: const c GSL_CONST_MKSA_SOLAR_MASS1567837 Ref: c7b1567837 Node: Atomic and Nuclear Physics1567902 Ref: const atomic-and-nuclear-physics1568047 Ref: c7c1568047 Ref: const c GSL_CONST_MKSA_ELECTRON_CHARGE1568112 Ref: c7d1568112 Ref: const c GSL_CONST_MKSA_ELECTRON_VOLT1568192 Ref: c7e1568192 Ref: const c GSL_CONST_MKSA_UNIFIED_ATOMIC_MASS1568274 Ref: c7f1568274 Ref: const c GSL_CONST_MKSA_MASS_ELECTRON1568357 Ref: c801568357 Ref: const c GSL_CONST_MKSA_MASS_MUON1568435 Ref: c811568435 Ref: const c GSL_CONST_MKSA_MASS_PROTON1568507 Ref: c821568507 Ref: const c GSL_CONST_MKSA_MASS_NEUTRON1568581 Ref: c831568581 Ref: const c GSL_CONST_NUM_FINE_STRUCTURE1568657 Ref: c841568657 Ref: const c GSL_CONST_MKSA_RYDBERG1568756 Ref: c851568756 Ref: const c GSL_CONST_MKSA_BOHR_RADIUS1568930 Ref: c861568930 Ref: const c GSL_CONST_MKSA_ANGSTROM1568997 Ref: c871568997 Ref: const c GSL_CONST_MKSA_BARN1569065 Ref: c881569065 Ref: const c GSL_CONST_MKSA_BOHR_MAGNETON1569123 Ref: c891569123 Ref: const c GSL_CONST_MKSA_NUCLEAR_MAGNETON1569196 Ref: c8a1569196 Ref: const c GSL_CONST_MKSA_ELECTRON_MAGNETIC_MOMENT1569275 Ref: c8b1569275 Ref: const c 
GSL_CONST_MKSA_PROTON_MAGNETIC_MOMENT1569462 Ref: c8c1569462 Ref: const c GSL_CONST_MKSA_THOMSON_CROSS_SECTION1569560 Ref: c8d1569560 Ref: const c GSL_CONST_MKSA_DEBYE1569652 Ref: c8e1569652 Node: Measurement of Time1569733 Ref: const measurement-of-time1569866 Ref: c8f1569866 Ref: const c GSL_CONST_MKSA_MINUTE1569917 Ref: c901569917 Ref: const c GSL_CONST_MKSA_HOUR1569992 Ref: c911569992 Ref: const c GSL_CONST_MKSA_DAY1570063 Ref: c921570063 Ref: const c GSL_CONST_MKSA_WEEK1570132 Ref: c931570132 Node: Imperial Units1570203 Ref: const imperial-units1570334 Ref: c941570334 Ref: const c GSL_CONST_MKSA_INCH1570375 Ref: c951570375 Ref: const c GSL_CONST_MKSA_FOOT1570435 Ref: c961570435 Ref: const c GSL_CONST_MKSA_YARD1570495 Ref: c971570495 Ref: const c GSL_CONST_MKSA_MILE1570555 Ref: c981570555 Ref: const c GSL_CONST_MKSA_MIL1570615 Ref: c991570615 Node: Speed and Nautical Units1570695 Ref: const speed-and-nautical-units1570821 Ref: c9a1570821 Ref: const c GSL_CONST_MKSA_KILOMETERS_PER_HOUR1570882 Ref: c9b1570882 Ref: const c GSL_CONST_MKSA_MILES_PER_HOUR1570970 Ref: c9c1570970 Ref: const c GSL_CONST_MKSA_NAUTICAL_MILE1571048 Ref: c9d1571048 Ref: const c GSL_CONST_MKSA_FATHOM1571126 Ref: c9e1571126 Ref: const c GSL_CONST_MKSA_KNOT1571190 Ref: c9f1571190 Node: Printers Units1571249 Ref: const printers-units1571383 Ref: ca01571383 Ref: const c GSL_CONST_MKSA_POINT1571424 Ref: ca11571424 Ref: const c GSL_CONST_MKSA_TEXPOINT1571510 Ref: ca21571510 Node: Volume Area and Length1571594 Ref: const volume-area-and-length1571719 Ref: ca31571719 Ref: const c GSL_CONST_MKSA_MICRON1571778 Ref: ca41571778 Ref: const c GSL_CONST_MKSA_HECTARE1571842 Ref: ca51571842 Ref: const c GSL_CONST_MKSA_ACRE1571906 Ref: ca61571906 Ref: const c GSL_CONST_MKSA_LITER1571964 Ref: ca71571964 Ref: const c GSL_CONST_MKSA_US_GALLON1572026 Ref: ca81572026 Ref: const c GSL_CONST_MKSA_CANADIAN_GALLON1572096 Ref: ca91572096 Ref: const c GSL_CONST_MKSA_UK_GALLON1572178 Ref: caa1572178 Ref: const c GSL_CONST_MKSA_QUART1572248 Ref: cab1572248 Ref: const c GSL_CONST_MKSA_PINT1572310 Ref: cac1572310 Node: Mass and Weight1572370 Ref: const mass-and-weight1572505 Ref: cad1572505 Ref: const c GSL_CONST_MKSA_POUND_MASS1572548 Ref: cae1572548 Ref: const c GSL_CONST_MKSA_OUNCE_MASS1572613 Ref: caf1572613 Ref: const c GSL_CONST_MKSA_TON1572678 Ref: cb01572678 Ref: const c GSL_CONST_MKSA_METRIC_TON1572734 Ref: cb11572734 Ref: const c GSL_CONST_MKSA_UK_TON1572814 Ref: cb21572814 Ref: const c GSL_CONST_MKSA_TROY_OUNCE1572876 Ref: cb31572876 Ref: const c GSL_CONST_MKSA_CARAT1572946 Ref: cb41572946 Ref: const c GSL_CONST_MKSA_GRAM_FORCE1573006 Ref: cb51573006 Ref: const c GSL_CONST_MKSA_POUND_FORCE1573078 Ref: cb61573078 Ref: const c GSL_CONST_MKSA_KILOPOUND_FORCE1573152 Ref: cb71573152 Ref: const c GSL_CONST_MKSA_POUNDAL1573234 Ref: cb81573234 Node: Thermal Energy and Power1573299 Ref: const thermal-energy-and-power1573420 Ref: cb91573420 Ref: const c GSL_CONST_MKSA_CALORIE1573483 Ref: cba1573483 Ref: const c GSL_CONST_MKSA_BTU1573549 Ref: cbb1573549 Ref: const c GSL_CONST_MKSA_THERM1573629 Ref: cbc1573629 Ref: const c GSL_CONST_MKSA_HORSEPOWER1573691 Ref: cbd1573691 Node: Pressure1573762 Ref: const pressure1573877 Ref: cbe1573877 Ref: const c GSL_CONST_MKSA_BAR1573908 Ref: cbf1573908 Ref: const c GSL_CONST_MKSA_STD_ATMOSPHERE1573968 Ref: cc01573968 Ref: const c GSL_CONST_MKSA_TORR1574055 Ref: cc11574055 Ref: const c GSL_CONST_MKSA_METER_OF_MERCURY1574117 Ref: cc21574117 Ref: const c GSL_CONST_MKSA_INCH_OF_MERCURY1574203 Ref: cc31574203 Ref: 
const c GSL_CONST_MKSA_INCH_OF_WATER1574287 Ref: cc41574287 Ref: const c GSL_CONST_MKSA_PSI1574367 Ref: cc51574367 Node: Viscosity1574445 Ref: const viscosity1574558 Ref: cc61574558 Ref: const c GSL_CONST_MKSA_POISE1574591 Ref: cc71574591 Ref: const c GSL_CONST_MKSA_STOKES1574664 Ref: cc81574664 Node: Light and Illumination1574741 Ref: const light-and-illumination1574859 Ref: cc91574859 Ref: const c GSL_CONST_MKSA_STILB1574918 Ref: cca1574918 Ref: const c GSL_CONST_MKSA_LUMEN1574983 Ref: ccb1574983 Ref: const c GSL_CONST_MKSA_LUX1575052 Ref: ccc1575052 Ref: const c GSL_CONST_MKSA_PHOT1575115 Ref: ccd1575115 Ref: const c GSL_CONST_MKSA_FOOTCANDLE1575180 Ref: cce1575180 Ref: const c GSL_CONST_MKSA_LAMBERT1575257 Ref: ccf1575257 Ref: const c GSL_CONST_MKSA_FOOTLAMBERT1575326 Ref: cd01575326 Node: Radioactivity1575403 Ref: const radioactivity1575528 Ref: cd11575528 Ref: const c GSL_CONST_MKSA_CURIE1575569 Ref: cd21575569 Ref: const c GSL_CONST_MKSA_ROENTGEN1575633 Ref: cd31575633 Ref: const c GSL_CONST_MKSA_RAD1575703 Ref: cd41575703 Node: Force and Energy1575768 Ref: const force-and-energy1575879 Ref: cd51575879 Ref: const c GSL_CONST_MKSA_NEWTON1575926 Ref: cd61575926 Ref: const c GSL_CONST_MKSA_DYNE1575998 Ref: cd71575998 Ref: const c GSL_CONST_MKSA_JOULE1576074 Ref: cd81576074 Ref: const c GSL_CONST_MKSA_ERG1576145 Ref: cd91576145 Node: Prefixes1576216 Ref: const prefixes1576326 Ref: cda1576326 Ref: const c GSL_CONST_NUM_YOTTA1576409 Ref: cdb1576409 Ref: const c GSL_CONST_NUM_ZETTA1576455 Ref: cdc1576455 Ref: const c GSL_CONST_NUM_EXA1576501 Ref: cdd1576501 Ref: const c GSL_CONST_NUM_PETA1576545 Ref: cde1576545 Ref: const c GSL_CONST_NUM_TERA1576590 Ref: cdf1576590 Ref: const c GSL_CONST_NUM_GIGA1576635 Ref: ce01576635 Ref: const c GSL_CONST_NUM_MEGA1576677 Ref: ce11576677 Ref: const c GSL_CONST_NUM_KILO1576719 Ref: ce21576719 Ref: const c GSL_CONST_NUM_MILLI1576761 Ref: ce31576761 Ref: const c GSL_CONST_NUM_MICRO1576807 Ref: ce41576807 Ref: const c GSL_CONST_NUM_NANO1576853 Ref: ce51576853 Ref: const c GSL_CONST_NUM_PICO1576898 Ref: ce61576898 Ref: const c GSL_CONST_NUM_FEMTO1576944 Ref: ce71576944 Ref: const c GSL_CONST_NUM_ATTO1576991 Ref: ce81576991 Ref: const c GSL_CONST_NUM_ZEPTO1577037 Ref: ce91577037 Ref: const c GSL_CONST_NUM_YOCTO1577084 Ref: cea1577084 Node: Examples<36>1577131 Ref: const examples1577259 Ref: ceb1577259 Node: References and Further Reading<39>1578889 Ref: const references-and-further-reading1579000 Ref: cec1579000 Node: IEEE floating-point arithmetic1579607 Ref: ieee754 doc1579742 Ref: ced1579742 Ref: ieee754 chap-ieee1579742 Ref: cee1579742 Ref: ieee754 ieee-floating-point-arithmetic1579742 Ref: cef1579742 Node: Representation of floating point numbers1580227 Ref: ieee754 representation-of-floating-point-numbers1580376 Ref: cf01580376 Ref: ieee754 c gsl_ieee_fprintf_float1582405 Ref: cf11582405 Ref: ieee754 c gsl_ieee_fprintf_double1582478 Ref: cf21582478 Ref: ieee754 c gsl_ieee_printf_float1583358 Ref: cf31583358 Ref: ieee754 c gsl_ieee_printf_double1583416 Ref: cf41583416 Node: Setting up your IEEE environment1584790 Ref: ieee754 setting-up-your-ieee-environment1584982 Ref: cf51584982 Ref: ieee754 c GSL_IEEE_MODE1586051 Ref: cf61586051 Ref: ieee754 c gsl_ieee_env_setup1586131 Ref: cf71586131 Node: References and Further Reading<40>1591925 Ref: ieee754 references-and-further-reading1592068 Ref: cf81592068 Node: Debugging Numerical Programs1593095 Ref: debug doc1593231 Ref: cf91593231 Ref: debug debugging-numerical-programs1593231 Ref: cfa1593231 Node: Using 
gdb1593608 Ref: debug using-gdb1593726 Ref: cfb1593726 Node: Examining floating point registers1597020 Ref: debug examining-floating-point-registers1597181 Ref: cfc1597181 Node: Handling floating point exceptions1598323 Ref: debug handling-floating-point-exceptions1598517 Ref: cfd1598517 Node: GCC warning options for numerical programs1599779 Ref: debug gcc-warning-options-for-numerical-programs1599973 Ref: cfe1599973 Node: References and Further Reading<41>1603906 Ref: debug references-and-further-reading1604057 Ref: cff1604057 Node: Contributors to GSL1604699 Ref: contrib doc1604820 Ref: d001604820 Ref: contrib contributors-to-gsl1604820 Ref: d011604820 Node: Autoconf Macros1609295 Ref: autoconf doc1609405 Ref: d021609405 Ref: autoconf autoconf-macros1609405 Ref: d031609405 Ref: autoconf chap-autoconf-macros1609405 Ref: 161609405 Node: GSL CBLAS Library1613475 Ref: cblas doc1613592 Ref: d041613592 Ref: cblas chap-cblas1613592 Ref: 4761613592 Ref: cblas gsl-cblas-library1613592 Ref: d051613592 Node: Level 1<2>1613974 Ref: cblas level-11614058 Ref: d061614058 Ref: cblas c cblas_sdsdot1614085 Ref: d071614085 Ref: cblas c cblas_dsdot1614226 Ref: d081614226 Ref: cblas c cblas_sdot1614348 Ref: d091614348 Ref: cblas c cblas_ddot1614468 Ref: d0a1614468 Ref: cblas c cblas_cdotu_sub1614591 Ref: d0b1614591 Ref: cblas c cblas_cdotc_sub1614725 Ref: d0c1614725 Ref: cblas c cblas_zdotu_sub1614859 Ref: d0d1614859 Ref: cblas c cblas_zdotc_sub1614993 Ref: d0e1614993 Ref: cblas c cblas_snrm21615127 Ref: d0f1615127 Ref: cblas c cblas_sasum1615216 Ref: d101615216 Ref: cblas c cblas_dnrm21615305 Ref: d111615305 Ref: cblas c cblas_dasum1615396 Ref: d121615396 Ref: cblas c cblas_scnrm21615487 Ref: d131615487 Ref: cblas c cblas_scasum1615576 Ref: d141615576 Ref: cblas c cblas_dznrm21615665 Ref: d151615665 Ref: cblas c cblas_dzasum1615755 Ref: d161615755 Ref: cblas c cblas_isamax1615845 Ref: d171615845 Ref: cblas c cblas_idamax1615941 Ref: d181615941 Ref: cblas c cblas_icamax1616038 Ref: d191616038 Ref: cblas c cblas_izamax1616133 Ref: d1a1616133 Ref: cblas c cblas_sswap1616228 Ref: d1b1616228 Ref: cblas c cblas_scopy1616336 Ref: d1c1616336 Ref: cblas c cblas_saxpy1616450 Ref: d1d1616450 Ref: cblas c cblas_dswap1616583 Ref: d1e1616583 Ref: cblas c cblas_dcopy1616693 Ref: d1f1616693 Ref: cblas c cblas_daxpy1616809 Ref: d201616809 Ref: cblas c cblas_cswap1616945 Ref: d211616945 Ref: cblas c cblas_ccopy1617051 Ref: d221617051 Ref: cblas c cblas_caxpy1617163 Ref: d231617163 Ref: cblas c cblas_zswap1617294 Ref: d241617294 Ref: cblas c cblas_zcopy1617400 Ref: d251617400 Ref: cblas c cblas_zaxpy1617512 Ref: d261617512 Ref: cblas c cblas_srotg1617643 Ref: d271617643 Ref: cblas c cblas_srotmg1617716 Ref: d281617716 Ref: cblas c cblas_srot1617819 Ref: d291617819 Ref: cblas c cblas_srotm1617956 Ref: d2a1617956 Ref: cblas c cblas_drotg1618080 Ref: d2b1618080 Ref: cblas c cblas_drotmg1618167 Ref: d2c1618167 Ref: cblas c cblas_drot1618275 Ref: d2d1618275 Ref: cblas c cblas_drotm1618416 Ref: d2e1618416 Ref: cblas c cblas_sscal1618543 Ref: d2f1618543 Ref: cblas c cblas_dscal1618644 Ref: d301618644 Ref: cblas c cblas_cscal1618747 Ref: d311618747 Ref: cblas c cblas_zscal1618847 Ref: d321618847 Ref: cblas c cblas_csscal1618947 Ref: d331618947 Ref: cblas c cblas_zdscal1619048 Ref: d341619048 Node: Level 2<2>1619150 Ref: cblas level-21619253 Ref: d351619253 Ref: cblas c cblas_sgemv1619280 Ref: d361619280 Ref: cblas c cblas_sgbmv1619560 Ref: d371619560 Ref: cblas c cblas_strmv1619878 Ref: d381619878 Ref: cblas c cblas_stbmv1620132 
Debugging Numerical Programs
   Using gdb
   Examining floating point registers
   Handling floating point exceptions
   GCC warning options for numerical programs
   References and Further Reading
Contributors to GSL
Autoconf Macros
GSL CBLAS Library
   Level 1: cblas_sdsdot, cblas_dsdot, cblas_sdot, cblas_ddot,
      cblas_cdotu_sub, cblas_cdotc_sub, cblas_zdotu_sub, cblas_zdotc_sub,
      cblas_snrm2, cblas_sasum, cblas_dnrm2, cblas_dasum, cblas_scnrm2,
      cblas_scasum, cblas_dznrm2, cblas_dzasum, cblas_isamax, cblas_idamax,
      cblas_icamax, cblas_izamax, cblas_sswap, cblas_scopy, cblas_saxpy,
      cblas_dswap, cblas_dcopy, cblas_daxpy, cblas_cswap, cblas_ccopy,
      cblas_caxpy, cblas_zswap, cblas_zcopy, cblas_zaxpy, cblas_srotg,
      cblas_srotmg, cblas_srot, cblas_srotm, cblas_drotg, cblas_drotmg,
      cblas_drot, cblas_drotm, cblas_sscal, cblas_dscal, cblas_cscal,
      cblas_zscal, cblas_csscal, cblas_zdscal
   Level 2: cblas_sgemv, cblas_sgbmv, cblas_strmv, cblas_stbmv, cblas_stpmv,
      cblas_strsv, cblas_stbsv, cblas_stpsv, cblas_dgemv, cblas_dgbmv,
      cblas_dtrmv, cblas_dtbmv, cblas_dtpmv, cblas_dtrsv, cblas_dtbsv,
      cblas_dtpsv, cblas_cgemv, cblas_cgbmv, cblas_ctrmv, cblas_ctbmv,
      cblas_ctpmv, cblas_ctrsv, cblas_ctbsv, cblas_ctpsv, cblas_zgemv,
      cblas_zgbmv, cblas_ztrmv, cblas_ztbmv, cblas_ztpmv, cblas_ztrsv,
      cblas_ztbsv, cblas_ztpsv, cblas_ssymv, cblas_ssbmv, cblas_sspmv,
      cblas_sger, cblas_ssyr, cblas_sspr, cblas_ssyr2, cblas_sspr2,
      cblas_dsymv, cblas_dsbmv, cblas_dspmv, cblas_dger, cblas_dsyr,
      cblas_dspr, cblas_dsyr2, cblas_dspr2, cblas_chemv, cblas_chbmv,
      cblas_chpmv, cblas_cgeru, cblas_cgerc, cblas_cher, cblas_chpr,
      cblas_cher2, cblas_chpr2, cblas_zhemv, cblas_zhbmv, cblas_zhpmv,
      cblas_zgeru, cblas_zgerc, cblas_zher, cblas_zhpr, cblas_zher2,
      cblas_zhpr2
   Level 3: cblas_sgemm, cblas_ssymm, cblas_ssyrk, cblas_ssyr2k,
      cblas_strmm, cblas_strsm, cblas_dgemm, cblas_dsymm, cblas_dsyrk,
      cblas_dsyr2k, cblas_dtrmm, cblas_dtrsm, cblas_cgemm, cblas_csymm,
      cblas_csyrk, cblas_csyr2k, cblas_ctrmm, cblas_ctrsm, cblas_zgemm,
      cblas_zsymm, cblas_zsyrk, cblas_zsyr2k, cblas_ztrmm, cblas_ztrsm,
      cblas_chemm, cblas_cherk, cblas_cher2k, cblas_zhemm, cblas_zherk,
      cblas_zher2k, cblas_xerbla
   Examples
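As a rough illustration of the Level 1 and Level 2 routines indexed above
(not one of the manual's own examples), the sketch below computes a dot
product with cblas_ddot() and a matrix-vector product with cblas_dgemv()
through the portable CBLAS interface; the data are made up.

     #include <stdio.h>
     #include <gsl/gsl_cblas.h>

     int
     main (void)
     {
       double x[3] = { 1.0, 2.0, 3.0 };
       double y[3] = { 4.0, 5.0, 6.0 };
       double A[9] = { 1.0, 0.0, 0.0,    /* 3x3 matrix, row-major storage */
                       0.0, 2.0, 0.0,
                       0.0, 0.0, 3.0 };
       double Ax[3];
       double d;

       /* Level 1: d = x . y */
       d = cblas_ddot (3, x, 1, y, 1);

       /* Level 2: Ax := 1.0 * A x + 0.0 * Ax */
       cblas_dgemv (CblasRowMajor, CblasNoTrans, 3, 3,
                    1.0, A, 3, x, 1, 0.0, Ax, 1);

       printf ("x.y = %g\n", d);
       printf ("Ax  = (%g, %g, %g)\n", Ax[0], Ax[1], Ax[2]);

       return 0;
     }

Link with -lgslcblas, or with any other conformant CBLAS implementation.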
GNU General Public License
GNU Free Documentation License
Index
End Tag Table