ScaLAPACK
The ScaLAPACK (Scalable LAPACK) library includes a subset of LAPACK routines redesigned for distributed-memory MIMD parallel computers. It is written in a Single-Program-Multiple-Data (SPMD) style, using explicit message passing for interprocessor communication, and assumes matrices are laid out in a two-dimensional block-cyclic decomposition.[1][2][3]
ScaLAPACK is designed for heterogeneous computing and is portable to any computer that supports MPI or PVM.
ScaLAPACK depends on PBLAS operations in the same way LAPACK depends on BLAS.
As of version 2.0, the code base directly includes PBLAS and BLACS and has dropped support for PVM.
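The two-dimensional block-cyclic decomposition mentioned above maps each global matrix entry to a process in a Pr × Pc grid by cycling blocks of rows and columns over the grid. The mapping can be illustrated with a minimal sketch (the helper names, block size, and grid shape here are illustrative assumptions, not ScaLAPACK defaults):

```python
# Sketch of the 2D block-cyclic mapping ScaLAPACK assumes.
# Block size mb x nb; process grid Pr x Pc. Names are hypothetical.

def owner(i, j, mb, nb, Pr, Pc):
    """Return (process row, process column) owning global entry (i, j)."""
    return ((i // mb) % Pr, (j // nb) % Pc)

def global_to_local(g, b, p):
    """Map a global index g to the local index on the owning process,
    given block size b and p processes along that grid dimension."""
    block = g // b                    # global block index
    return (block // p) * b + (g % b) # local block offset + within-block offset

if __name__ == "__main__":
    mb = nb = 2       # assumed block size
    Pr, Pc = 2, 3     # assumed 2 x 3 process grid
    print(owner(5, 7, mb, nb, Pr, Pc))   # entry (5, 7) lives in block (2, 3)
    print(global_to_local(5, mb, Pr))    # its local row index on that process
```

The same mapping, applied independently to rows and columns, is what gives ScaLAPACK both load balance (blocks cycle over all processes) and data locality (each block is stored contiguously).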
Examples
- Programming with Big Data in R (pbdR), an extension to R for Big Data statistical analysis, builds on ScaLAPACK and its two-dimensional block-cyclic decomposition.
References
- J. Dongarra and D. Walker. "The Design of Linear Algebra Libraries for High Performance Computers".
- J. Demmel, M. Heath, and H. van der Vorst. "Parallel Numerical Linear Algebra".
- "2d block-cyclic data layout".
External links
- The ScaLAPACK Project on Netlib.org
This article is issued from Wikipedia. The text is licensed under Creative Commons - Attribution - Sharealike. Additional terms may apply for the media files.