Improving Efficiency of Parallel Across the Method Spectral Deferred Corrections
March 27, 2024 · Declared Dead · SIAM Journal on Scientific Computing
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Gayatri Čaklović, Thibaut Lunet, Sebastian Götschel, Daniel Ruprecht
arXiv ID
2403.18641
Category
math.NA: Numerical Analysis
Cross-listed
cs.DC
Citations
2
Venue
SIAM Journal on Scientific Computing
Last Checked
2 months ago
Abstract
Parallel-across-the method time integration can provide small scale parallelism when solving initial value problems. Spectral deferred corrections (SDC) with a diagonal sweeper, which is closely related to iterated Runge-Kutta methods proposed by Van der Houwen and Sommeijer, can use a number of threads equal to the number of quadrature nodes in the underlying collocation method. However, convergence speed, efficiency and stability depends critically on the used coefficients. Previous approaches have used numerical optimization to find good parameters. Instead, we propose an ansatz that allows to find optimal parameters analytically. We show that the resulting parallel SDC methods provide stability domains and convergence order very similar to those of well established serial SDC variants. Using a model for computational cost that assumes 80% efficiency of an implementation of parallel SDC we show that our variants are competitive with serial SDC, previously published parallel SDC coefficients as well as Picard iteration, explicit RKM-4 and an implicit fourth-order diagonally implicit Runge-Kutta method.
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt – Numerical Analysis
👻 Ghosted – Deep learning-based numerical methods for high-dimensional parabolic partial differential equations and backward stochastic differential equations
👻 Ghosted – PDE-Net: Learning PDEs from Data
👻 Ghosted – Efficient tensor completion for color image and video recovery: Low-rank tensor train
👻 Ghosted – Tensor Ring Decomposition
👻 Ghosted – Machine learning approximation algorithms for high-dimensional fully nonlinear partial differential equations and second-order backward stochastic differential equations
Died the same way – Ghosted
👻 Ghosted – Language Models are Few-Shot Learners
👻 Ghosted – PyTorch: An Imperative Style, High-Performance Deep Learning Library
👻 Ghosted – XGBoost: A Scalable Tree Boosting System