Expressing Sparse Matrix Computations for Productive Performance on Spatial Architectures
October 12, 2018 · Declared Dead · arXiv.org
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Hongbo Rong
arXiv ID
1810.07517
Category
cs.MS: Mathematical Software
Cross-listed
cs.PL
Citations
2
Venue
arXiv.org
Last Checked
2 months ago
Abstract
This paper addresses spatial programming of sparse matrix computations for productive performance. The challenge is how to express an irregular computation and its optimizations in a regular way. A sparse matrix has (non-zero) values and a structure. In this paper, we propose to classify the implementations of a computation on a sparse matrix into two categories: (1) the structure-driven, or top-down, approach, which traverses the structure with given row and column indices and locates the corresponding values, and (2) the values-driven, or bottom-up, approach, which loads and processes the values in parallel streams and decodes the structure to recover each value's row and column indices. On a spatial architecture like an FPGA, the values-driven approach is the norm. We show how to express a sparse matrix computation and its optimizations for a values-driven implementation. A compiler automatically synthesizes code to decode the structure. In this way, programmers focus on optimizing the processing of the values, using familiar optimizations for dense matrices, while leaving the complex, irregular structure traversal to the compiler. We also attempt to regularize the optimization of reductions over a dynamic number of values, which are common in sparse matrix computations.
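The two categories in the abstract can be sketched in plain Python on a CSR-encoded matrix. This is an illustrative sketch only, not code from the paper: the example matrix, its CSR arrays, and the function names `structure_driven` and `values_driven` are all hypothetical.

```python
# Hypothetical 2x3 sparse matrix in CSR form (not from the paper):
# [[5, 0, 7],
#  [0, 3, 0]]
values  = [5, 7, 3]   # non-zero values
col_idx = [0, 2, 1]   # column index of each value
row_ptr = [0, 2, 3]   # row i's values live in values[row_ptr[i]:row_ptr[i+1]]

def structure_driven(i, j):
    """Top-down: start from the row/column indices, traverse the
    structure, and locate the corresponding value (0 if absent)."""
    for k in range(row_ptr[i], row_ptr[i + 1]):
        if col_idx[k] == j:
            return values[k]
    return 0

def values_driven():
    """Bottom-up: stream over the values and decode the structure to
    recover each value's (row, col) coordinates."""
    row = 0
    for k, v in enumerate(values):
        while k >= row_ptr[row + 1]:  # advance row once k leaves its range
            row += 1
        yield row, col_idx[k], v

print(structure_driven(0, 2))   # -> 7
print(list(values_driven()))    # -> [(0, 0, 5), (0, 2, 7), (1, 1, 3)]
```

On an FPGA, the streaming loop in `values_driven` is the part a programmer would optimize (e.g. with parallel value streams), while the index-decoding logic inside it is what the paper's compiler would synthesize automatically.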
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Mathematical Software
CSR5: An Efficient Storage Format for Cross-Platform Sparse Matrix-Vector Multiplication
R.I.P.
👻
Ghosted
Mathematical Foundations of the GraphBLAS
R.I.P.
👻
Ghosted
The DUNE Framework: Basic Concepts and Recent Developments
R.I.P.
👻
Ghosted
Format Abstraction for Sparse Tensor Algebra Compilers
R.I.P.
👻
Ghosted
AMReX: Block-Structured Adaptive Mesh Refinement for Multiphysics Applications
R.I.P.
👻
Ghosted
Died the same way – 👻 Ghosted
Language Models are Few-Shot Learners
R.I.P.
👻
Ghosted
PyTorch: An Imperative Style, High-Performance Deep Learning Library
R.I.P.
👻
Ghosted
XGBoost: A Scalable Tree Boosting System
R.I.P.
👻
Ghosted