Multi-Threaded Dense Linear Algebra Libraries for Low-Power Asymmetric Multicore Processors
November 06, 2015 · Declared Dead · Journal of Computer Science
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Sandra CatalΓ‘n, JosΓ© R. Herrero, Francisco D. Igual, Rafael RodrΓguez-SΓ‘nchez, Enrique S. Quintana-OrtΓ
arXiv ID
1511.02171
Category
cs.MS: Mathematical Software
Cross-listed
cs.DC
Citations
5
Venue
Journal of Computer Science
Last Checked
2 months ago
Abstract
Dense linear algebra libraries, such as BLAS and LAPACK, provide a relevant collection of numerical tools for many scientific and engineering applications. While there exist high performance implementations of the BLAS (and LAPACK) functionality for many current multi-threaded architectures, the adaptation of these libraries for asymmetric multicore processors (AMPs) is still pending. In this paper we address this challenge by developing an asymmetry-aware implementation of the BLAS, based on the BLIS framework, and tailored for AMPs equipped with two types of cores: fast/power-hungry versus slow/energy-efficient. For this purpose, we integrate coarse-grain and fine-grain parallelization strategies into the library routines which, respectively, dynamically distribute the workload between the two core types and statically repartition this work among the cores of the same type. Our results on an ARM big.LITTLE processor embedded in the Exynos 5422 SoC, using the asymmetry-aware version of the BLAS and a plain migration of the legacy version of LAPACK, experimentally assess the benefits, limitations, and potential of this approach.
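To make the two-level strategy in the abstract concrete, here is a minimal illustrative sketch (not the paper's actual code, which builds on the BLIS framework in C): loop iterations are first split between the big and LITTLE clusters, then statically repartitioned among the cores of each cluster. The core counts and relative speeds below are invented for the example, and a static proportional split stands in for the paper's dynamic coarse-grain scheduling.

```python
def partition(m, big_cores=4, little_cores=4, big_speed=2.0, little_speed=1.0):
    """Split m loop iterations between two core clusters in proportion to
    their aggregate throughput (coarse grain), then statically among the
    cores of each cluster (fine grain). Returns per-core iteration counts."""
    big_cap = big_cores * big_speed          # aggregate throughput, big cluster
    little_cap = little_cores * little_speed # aggregate throughput, LITTLE cluster
    m_big = round(m * big_cap / (big_cap + little_cap))
    m_little = m - m_big
    # Static split inside each cluster: distribute the remainder one-by-one.
    per_big = [m_big // big_cores + (1 if i < m_big % big_cores else 0)
               for i in range(big_cores)]
    per_little = [m_little // little_cores + (1 if i < m_little % little_cores else 0)
                  for i in range(little_cores)]
    return per_big, per_little

# Example: 1000 iterations on a 4+4 big.LITTLE with a 2x speed ratio.
big, little = partition(1000)
# The big cluster receives roughly two thirds of the work.
```

In the paper's setting the coarse-grain split is adjusted dynamically at run time rather than fixed up front, which lets the library absorb variations in the actual speed ratio between the clusters.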
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Mathematical Software
CSR5: An Efficient Storage Format for Cross-Platform Sparse Matrix-Vector Multiplication – R.I.P. · Ghosted
Mathematical Foundations of the GraphBLAS – R.I.P. · Ghosted
The DUNE Framework: Basic Concepts and Recent Developments – R.I.P. · Ghosted
Format Abstraction for Sparse Tensor Algebra Compilers – R.I.P. · Ghosted
AMReX: Block-Structured Adaptive Mesh Refinement for Multiphysics Applications – R.I.P. · Ghosted
Died the same way – Ghosted
Language Models are Few-Shot Learners – R.I.P. · Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library – R.I.P. · Ghosted
XGBoost: A Scalable Tree Boosting System – R.I.P. · Ghosted