Towards Fair and Efficient Learning-based Congestion Control
March 04, 2024 · Declared Dead · European Conference on Computer Systems
"No code URL or promise found in abstract"
Evidence collected by the PWNC Scanner
Authors
Xudong Liao, Han Tian, Chaoliang Zeng, Xinchen Wan, Kai Chen
arXiv ID
2403.01798
Category
cs.NI: Networking & Internet
Cross-listed
cs.LG
Citations
23
Venue
European Conference on Computer Systems
Last Checked
2 months ago
Abstract
Recent years have witnessed a plethora of learning-based solutions for congestion control (CC) that demonstrate better performance than traditional TCP schemes. However, they fail to provide consistently good convergence properties, including fairness, fast convergence, and stability, due to the mismatch between their objective functions and these properties. Despite being intuitive, integrating these properties into existing learning-based CC is challenging, because: 1) their training environments are designed for the performance optimization of a single flow but are incapable of cooperative multi-flow optimization, and 2) there is no directly measurable metric to represent these properties in the training objective function. We present Astraea, a new learning-based congestion control that ensures fast convergence to fairness with stability. At the heart of Astraea is a multi-agent deep reinforcement learning framework that explicitly optimizes these convergence properties during the training process by enabling the learning of interactive policy between multiple competing flows, while maintaining high performance. We further build a faithful multi-flow environment that emulates the competing behaviors of concurrent flows, explicitly expressing convergence properties to enable their optimization during training. We have fully implemented Astraea, and our comprehensive experiments show that Astraea can quickly converge to the fairness point and exhibit better stability than its counterparts. For example, Astraea achieves near-optimal bandwidth sharing (i.e., fairness) when multiple flows compete for the same bottleneck, and delivers up to 8.4× faster convergence speed and 2.8× smaller throughput deviation, while achieving comparable or even better performance than prior solutions.
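The abstract measures convergence by bandwidth-sharing fairness and throughput deviation. The paper's exact metric definitions are not given on this page, so the following is an illustrative sketch using two standard quantities from the CC literature — Jain's fairness index and the standard deviation of a flow's throughput — not Astraea's actual formulation.

```python
# Illustrative sketch (not from the paper): standard ways to quantify the
# convergence properties the abstract names, for flows sharing a bottleneck.

def jains_fairness(throughputs):
    """Jain's fairness index: 1.0 means perfectly equal bandwidth sharing."""
    n = len(throughputs)
    total = sum(throughputs)
    sq = sum(x * x for x in throughputs)
    return (total * total) / (n * sq) if sq else 1.0

def throughput_deviation(samples):
    """Std-dev of one flow's throughput over time: lower means more stable."""
    n = len(samples)
    mean = sum(samples) / n
    var = sum((x - mean) ** 2 for x in samples) / n
    return var ** 0.5

# Two flows sharing a 100 Mbps bottleneck, equally vs. unequally:
print(jains_fairness([50.0, 50.0]))   # fair split -> index 1.0
print(jains_fairness([90.0, 10.0]))   # skewed split -> index ~0.61
print(throughput_deviation([48.0, 52.0, 50.0, 50.0]))  # small = stable
```

Under this framing, "8.4× faster convergence" would mean the fairness index reaches a near-1.0 threshold 8.4× sooner, and "2.8× smaller throughput deviation" refers to the steady-state standard deviation.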
Community Contributions
Found the code? Know the venue? Think something is wrong? Let us know!
Similar Papers
In the same crypt · Networking & Internet
R.I.P. 👻 Ghosted · Federated Learning in Mobile Edge Networks: A Comprehensive Survey
R.I.P. 👻 Ghosted · A Survey of Indoor Localization Systems and Technologies
R.I.P. 👻 Ghosted · Survey of Important Issues in UAV Communication Networks
R.I.P. 👻 Ghosted · Network Function Virtualization: State-of-the-art and Research Challenges
R.I.P. 👻 Ghosted · Applications of Deep Reinforcement Learning in Communications and Networking: A Survey
Died the same way · 👻 Ghosted
R.I.P. 👻 Ghosted · Language Models are Few-Shot Learners
R.I.P. 👻 Ghosted · PyTorch: An Imperative Style, High-Performance Deep Learning Library
R.I.P. 👻 Ghosted · XGBoost: A Scalable Tree Boosting System