Math Kernel Library

Intel Math Kernel Library (Intel MKL) is a library of optimized math routines for science, engineering, and financial applications. Core math functions include BLAS, LAPACK, ScaLAPACK, sparse solvers, fast Fourier transforms, and vector math.[4][5]

Math Kernel Library
Developer(s): Intel
Initial release: May 9, 2003
Stable release: 2020 Update 1 / March 31, 2020[1]
Written in: C/C++, Fortran
Operating system: Microsoft Windows, Linux, macOS
Platform: Intel Xeon Phi, Intel Xeon, Intel Core, Intel Atom[2]
Type: Library and framework
License: Freeware[3]
Website: software.intel.com/mkl

The library supports Intel processors[2] and is available for Windows, Linux and macOS operating systems.[4][5][6]

History

Intel launched the Math Kernel Library on May 9, 2003, and called it blas.lib.[7] The project's development teams are located in Russia and the United States. MKL is bundled with the Intel Parallel Studio XE, Intel Cluster Studio XE, and Intel C++ and Fortran Studio XE products, as well as Canopy. Standalone versions have not been sold to new customers for years, but are now available free of charge.[8]

License

The library is available free of charge under the terms of the Intel Simplified Software License,[3] which allows redistribution.[8] Commercial support is available when the library is purchased as standalone software or as part of Intel Parallel Studio XE or Intel System Studio.

Performance

Intel MKL, like other programs generated by the Intel C++ Compiler, improves performance with a technique called function multi-versioning: a function is compiled or written for many of the x86 instruction set extensions, and at run time a "master function" uses the CPUID instruction to select the version most appropriate for the current CPU. However, when the master function detects a non-Intel CPU, it almost always chooses the most basic (and slowest) version, regardless of which instruction sets the CPU claims to support. This has earned the library the nickname "Cripple AMD" since 2009.[9]

As of 2020, Intel's MKL, which remains the numeric library installed by default along with many pre-compiled mathematical applications on Windows (such as NumPy, SymPy, and MATLAB), still significantly underperforms on AMD CPUs by ignoring their supported instruction sets.[10][11] In older versions, the undocumented environment variable MKL_DEBUG_CPU_TYPE=5 could be set to override the vendor-string-dependent code path selection and activate supported instructions up to AVX2 on AMD-based systems, resulting in equal or even better performance compared with Intel CPUs.[12][13][14] Since at least 2020 Update 1, this workaround no longer works.[10][11]
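The dispatch scheme can be pictured with a short sketch. The code below is not Intel's dispatcher; it is a generic, minimal illustration of function multi-versioning in C, using the GCC/Clang builtins __builtin_cpu_init and __builtin_cpu_supports to pick between two hypothetical kernel variants at run time based on feature flags.

    /* Minimal illustration of function multi-versioning (not Intel's dispatcher).
     * A "master function" checks CPU features once at run time and then forwards
     * every call to the chosen variant. Kernel names are hypothetical. */
    #include <stdio.h>

    /* Baseline variant: assumed to run on any x86-64 CPU. */
    static double sum_baseline(const double *x, int n)
    {
        double s = 0.0;
        for (int i = 0; i < n; ++i)
            s += x[i];
        return s;
    }

    /* "Tuned" variant: in a real library this would use AVX2 intrinsics;
     * here it merely stands in for the faster code path. */
    static double sum_avx2(const double *x, int n)
    {
        return sum_baseline(x, n);
    }

    /* Master function: selects an implementation on the first call. */
    double sum(const double *x, int n)
    {
        static double (*impl)(const double *, int);
        if (!impl) {
            __builtin_cpu_init();                       /* GCC/Clang builtin */
            impl = __builtin_cpu_supports("avx2") ? sum_avx2 : sum_baseline;
        }
        return impl(x, n);
    }

    int main(void)
    {
        double x[4] = {1.0, 2.0, 3.0, 4.0};
        printf("sum = %f\n", sum(x, 4));
        return 0;
    }

The behavior criticized above arises when a dispatcher keys on the CPU vendor string returned by CPUID rather than on the feature flags themselves, so a capable non-Intel processor is still routed to the baseline path.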

Details

Functional categories

Intel MKL has the following functional categories:[15]

  • Linear algebra: BLAS routines provide vector-vector (Level 1), matrix-vector (Level 2) and matrix-matrix (Level 3) operations for real and complex data in single and double precision. LAPACK consists of tuned LU, Cholesky and QR factorizations, eigenvalue and least-squares solvers. MKL also includes Sparse BLAS, ScaLAPACK, Sparse Solver, Extended Eigensolver, PBLAS and BLACS.
    Because MKL uses the standard interfaces for BLAS and LAPACK, applications that use other implementations can get better performance on Intel and compatible processors by re-linking with the MKL libraries (see the dgemm sketch after this list).
  • MKL includes a variety of Fast Fourier Transforms (FFTs), from one-dimensional to multidimensional, covering complex-to-complex, real-to-complex, and real-to-real transforms of arbitrary length. Applications written against the open-source FFTW API can be ported to MKL by linking with the interface wrapper libraries provided as part of MKL (a DFTI example follows this list).
    Cluster versions of LAPACK and the FFTs are also available as part of MKL, taking advantage of MPI parallelism in addition to the single-node parallelism provided by multithreading.
  • Vector math functions include computationally intensive core mathematical operations for single- and double-precision real and complex data types. These are similar to the libm functions from compiler libraries, but operate on vectors rather than scalars to provide better performance. Various controls are available for setting accuracy, error mode and denormalized-number handling to customize the behavior of the routines (see the vdExp sketch after this list).
  • Statistics functions include random number generators and probability distributions optimized for multicore processors (see the VSL sketch after this list). Also included are compute-intensive in-core and out-of-core routines to compute basic statistics, estimation of dependencies, and so on.
  • Data fitting functions include splines (linear, quadratic, cubic, look-up, stepwise constant) for 1-dimensional interpolation that can be used in data analytics, geometric modeling and surface approximation applications.
  • Deep Neural Networks
  • Partial Differential Equations
  • Nonlinear Optimization Problem Solvers
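
As an illustration of the standard BLAS interface mentioned in the first bullet, the minimal sketch below multiplies two small matrices with cblas_dgemm. It assumes MKL's C header mkl.h and linking against the MKL libraries; the call itself is portable to any CBLAS-conforming implementation.

    /* C := alpha * A * B + beta * C through the standard CBLAS interface. */
    #include <stdio.h>
    #include <mkl.h>

    int main(void)
    {
        const int m = 2, n = 2, k = 2;
        double A[] = {1.0, 2.0,
                      3.0, 4.0};          /* m x k, row-major */
        double B[] = {5.0, 6.0,
                      7.0, 8.0};          /* k x n, row-major */
        double C[4] = {0.0};              /* m x n result     */

        cblas_dgemm(CblasRowMajor, CblasNoTrans, CblasNoTrans,
                    m, n, k, 1.0, A, k, B, n, 0.0, C, n);

        printf("%6.1f %6.1f\n%6.1f %6.1f\n", C[0], C[1], C[2], C[3]);
        return 0;
    }

Relinking an existing BLAS-based application against MKL replaces the implementation behind exactly this kind of call without any source changes.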
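For the FFT category, a minimal sketch of MKL's DFTI interface is shown below, assuming the mkl_dfti.h header: it creates a one-dimensional complex-to-complex descriptor, commits it, and runs an in-place forward transform on arbitrary sample data.

    /* 1-D in-place complex-to-complex forward FFT with MKL's DFTI interface. */
    #include <stdio.h>
    #include <mkl_dfti.h>

    int main(void)
    {
        MKL_LONG n = 8;
        MKL_Complex16 data[8];                 /* double-precision complex */
        for (MKL_LONG i = 0; i < n; ++i) {
            data[i].real = (double)i;
            data[i].imag = 0.0;
        }

        DFTI_DESCRIPTOR_HANDLE h = NULL;
        MKL_LONG status;

        status = DftiCreateDescriptor(&h, DFTI_DOUBLE, DFTI_COMPLEX, 1, n);
        if (status == 0) status = DftiCommitDescriptor(h);
        if (status == 0) status = DftiComputeForward(h, data);   /* in place */
        DftiFreeDescriptor(&h);

        if (status != 0) {
            printf("DFTI error: %s\n", DftiErrorMessage(status));
            return 1;
        }
        printf("DC bin: %f + %fi\n", data[0].real, data[0].imag);
        return 0;
    }

The FFTW wrapper libraries mentioned above let code written against the FFTW API reach the same underlying transforms without switching to DFTI calls.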
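The vector math bullet can be illustrated with vdExp, which exponentiates a whole array in one call; the accuracy-mode call via vmlSetMode is optional and shown only to hint at the controls mentioned above.

    /* Vectorized exponential: y[i] = exp(a[i]) for the whole array in one call. */
    #include <stdio.h>
    #include <mkl.h>

    int main(void)
    {
        const MKL_INT n = 4;
        double a[] = {0.0, 1.0, 2.0, 3.0};
        double y[4];

        vmlSetMode(VML_HA);      /* request high-accuracy mode (optional) */
        vdExp(n, a, y);          /* libm-like exp(), applied element-wise */

        for (int i = 0; i < n; ++i)
            printf("exp(%g) = %g\n", a[i], y[i]);
        return 0;
    }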
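Finally, the random number generators from the statistics bullet can be sketched with MKL's VSL interface: create a stream for a basic generator, draw Gaussian deviates, and free the stream. The seed and sample count here are arbitrary.

    /* Draw normally distributed random numbers with MKL's VSL interface. */
    #include <stdio.h>
    #include <mkl.h>
    #include <mkl_vsl.h>

    int main(void)
    {
        VSLStreamStatePtr stream;
        double r[5];

        /* Mersenne Twister basic generator, fixed seed for reproducibility. */
        vslNewStream(&stream, VSL_BRNG_MT19937, 42);

        /* 5 samples from N(mean = 0, sigma = 1). */
        vdRngGaussian(VSL_RNG_METHOD_GAUSSIAN_ICDF, stream, 5, r, 0.0, 1.0);

        vslDeleteStream(&stream);

        for (int i = 0; i < 5; ++i)
            printf("%f\n", r[i]);
        return 0;
    }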

See also

References
