Computing Tropical Prevarieties in Parallel
May 01, 2017 · Declared Dead · PASCO@ISSAC
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Anders Jensen, Jeff Sommars, Jan Verschelde
arXiv ID
1705.00720
Category
cs.MS: Mathematical Software
Cross-listed
cs.CG,
cs.DC,
math.AG,
math.CO
Citations
7
Venue
PASCO@ISSAC
Last Checked
2 months ago
Abstract
The computation of the tropical prevariety is the first step in the application of polyhedral methods to compute positive dimensional solution sets of polynomial systems. In particular, pretropisms are candidate leading exponents for the power series developments of the solutions. The computation of the power series may start as soon as one pretropism is available, so our parallel computation of the tropical prevariety has an application in a pipelined solver. We present a parallel implementation of dynamic enumeration. Our first distributed memory implementation with forked processes achieved good speedups, but quite often resulted in large variations in the execution times of the processes. The shared memory multithreaded version applies work stealing to reduce the variability of the run time. Our implementation applies the thread safe Parma Polyhedral Library (PPL), in exact arithmetic with the GNU Multiprecision Arithmetic Library (GMP), aided by the fast memory allocations of TCMalloc. Our parallel implementation is capable of computing the tropical prevariety of the cyclic 16-roots problem. We also report on computational experiments on the $n$-body and $n$-vortex problems; our computational results compare favorably with Gfan.
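The core object in the abstract is easy to illustrate with a toy brute-force check (my own sketch, not the paper's PPL-based dynamic enumeration): a vector w lies in the tropical prevariety when, for every polynomial in the system, the minimum of the inner products of w with the exponent vectors of that polynomial's support is attained at least twice. The system below is a made-up two-variable example; a real solver such as Gfan or the paper's implementation intersects polyhedral cones instead of sampling a grid.

```python
from itertools import product

def attains_min_twice(w, support):
    """Tropical vanishing: the minimum of <w, a> over the exponent
    vectors a in the support must be attained at least twice."""
    vals = [sum(wi * ai for wi, ai in zip(w, a)) for a in support]
    return vals.count(min(vals)) >= 2

def in_tropical_prevariety(w, supports):
    """w lies in the tropical prevariety if it tropically
    vanishes on every polynomial of the system."""
    return all(attains_min_twice(w, s) for s in supports)

# Toy system (only the supports matter):
#   f1 = x + y + 1,  f2 = x^2 + x*y + y^2
supports = [
    [(1, 0), (0, 1), (0, 0)],  # support of f1
    [(2, 0), (1, 1), (0, 2)],  # support of f2
]

# Brute-force scan over a small grid of integer directions;
# the nonzero hits all lie on the ray through (-1, -1),
# the single pretropism of this toy system.
hits = [w for w in product(range(-2, 3), repeat=2)
        if w != (0, 0) and in_tropical_prevariety(w, supports)]
print(hits)  # [(-2, -2), (-1, -1)]
```

Each hit is a candidate leading-exponent direction for a power series development of a solution curve, which is why the paper can feed pretropisms into a pipelined solver as soon as each one is found.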
Similar Papers

In the same crypt: Mathematical Software
- CSR5: An Efficient Storage Format for Cross-Platform Sparse Matrix-Vector Multiplication · Old Age
- Mathematical Foundations of the GraphBLAS · R.I.P. · Ghosted
- The DUNE Framework: Basic Concepts and Recent Developments · R.I.P. · Ghosted
- Format Abstraction for Sparse Tensor Algebra Compilers · R.I.P. · Ghosted
- AMReX: Block-Structured Adaptive Mesh Refinement for Multiphysics Applications · R.I.P. · Ghosted
Died the same way: Ghosted
- Language Models are Few-Shot Learners · R.I.P. · Ghosted
- PyTorch: An Imperative Style, High-Performance Deep Learning Library · R.I.P. · Ghosted
- XGBoost: A Scalable Tree Boosting System · R.I.P. · Ghosted