Distributed-memory $\mathcal{H}$-matrix Algebra I: Data Distribution and Matrix-vector Multiplication

August 28, 2020 · Declared Dead · 🏛 arXiv.org

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Yingzhou Li, Jack Poulson, Lexing Ying
arXiv ID: 2008.12441
Category: math.NA (Numerical Analysis)
Cross-listed: cs.DC
Citations: 3
Venue: arXiv.org
Last checked: 2 months ago
Abstract
We introduce a data distribution scheme for $\mathcal{H}$-matrices and a distributed-memory algorithm for $\mathcal{H}$-matrix-vector multiplication. Our data distribution scheme avoids an expensive $\Omega(P^2)$ scheduling procedure used in previous work, where $P$ is the number of processes, while data balancing is well-preserved. Based on the data distribution, our distributed-memory algorithm evenly distributes all computations among $P$ processes and adopts a novel tree-communication algorithm to reduce the latency cost. The overall complexity of our algorithm is $O\Big(\frac{N \log N}{P} + \alpha \log P + \beta \log^2 P\Big)$ for $\mathcal{H}$-matrices under the weak admissibility condition, where $N$ is the matrix size, $\alpha$ denotes the latency, and $\beta$ denotes the inverse bandwidth. Numerically, our algorithm is applied to both two- and three-dimensional problems of various sizes on various numbers of processes. Good parallel efficiency is still observed on thousands of processes.
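To make the abstract concrete: under the weak admissibility condition, every off-diagonal block of the matrix is stored in low-rank factored form, and only the leaf diagonal blocks are dense, which is what yields the serial $O(N \log N)$ matrix-vector product that the paper then distributes over $P$ processes. The sketch below is a minimal serial HODLR-style illustration of that idea, not the authors' implementation; the class and helper names are hypothetical, and the parallel data distribution and tree communication from the paper are omitted.

```python
import numpy as np

def _low_rank(B, k):
    # Rank-k truncated SVD of an off-diagonal block: B ~ U @ V.
    U, s, Vt = np.linalg.svd(B, full_matrices=False)
    return U[:, :k] * s[:k], Vt[:k, :]

class HODLR:
    """Toy H-matrix under weak admissibility (illustrative, serial)."""

    def __init__(self, A, leaf_size=4, rank=3):
        n = A.shape[0]
        if n <= leaf_size:
            # Leaf diagonal blocks stay dense.
            self.dense, self.children = A.copy(), None
        else:
            m = n // 2
            self.dense, self.split = None, m
            # Both off-diagonal blocks are compressed to rank `rank`.
            self.U12, self.V12 = _low_rank(A[:m, m:], rank)
            self.U21, self.V21 = _low_rank(A[m:, :m], rank)
            self.children = (HODLR(A[:m, :m], leaf_size, rank),
                             HODLR(A[m:, m:], leaf_size, rank))

    def matvec(self, x):
        # Recurse on diagonal blocks; apply low-rank factors elsewhere.
        if self.children is None:
            return self.dense @ x
        m = self.split
        y1 = self.children[0].matvec(x[:m]) + self.U12 @ (self.V12 @ x[m:])
        y2 = self.children[1].matvec(x[m:]) + self.U21 @ (self.V21 @ x[:m])
        return np.concatenate([y1, y2])

# Usage: a smooth kernel matrix is numerically low-rank off the diagonal.
n = 16
pts = np.linspace(0.0, 1.0, n)
A = 1.0 / (1.0 + np.abs(pts[:, None] - pts[None, :]))
x = np.random.default_rng(0).standard_normal(n)
H = HODLR(A, leaf_size=4, rank=3)
err = np.linalg.norm(H.matvec(x) - A @ x) / np.linalg.norm(A @ x)
```

In the distributed setting of the paper, the per-level partial sums produced by such a recursion are what get combined across processes, which is where the $\alpha \log P$ and $\beta \log^2 P$ communication terms arise.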
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt – Numerical Analysis

R.I.P. 👻 Ghosted

Tensor Ring Decomposition

Qibin Zhao, Guoxu Zhou, ... (+3 more)

math.NA 🏛 arXiv 📚 427 cites 9 years ago

Died the same way – 👻 Ghosted