PREMA: A Predictive Multi-task Scheduling Algorithm For Preemptible Neural Processing Units

September 06, 2019 · Declared Dead · 🏛 International Symposium on High-Performance Computer Architecture

👻 CAUSE OF DEATH: Ghosted
No code link whatsoever

"No code URL or promise found in abstract"

Evidence collected by the PWNC Scanner

Authors: Yujeong Choi, Minsoo Rhu
arXiv ID: 1909.04548
Category: cs.DC (Distributed Computing)
Cross-listed: cs.LG, cs.NE
Citations: 155
Venue: International Symposium on High-Performance Computer Architecture
Last Checked: 2 months ago
Abstract
To amortize cost, cloud vendors providing DNN acceleration as a service to end-users employ consolidation and virtualization to share the underlying resources among multiple DNN service requests. This paper makes a case for a "preemptible" neural processing unit (NPU) and a "predictive" multi-task scheduler to meet the latency demands of high-priority inference while maintaining high throughput. We evaluate both the mechanisms that enable NPUs to be preemptible and the policies that utilize them to meet scheduling objectives. We show that preemptive NPU multi-tasking can achieve an average 7.8x, 1.4x, and 4.8x improvement in latency, throughput, and SLA satisfaction, respectively.
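The core idea in the abstract is that a high-priority inference request should be able to preempt a lower-priority task already running on the NPU, with the scheduler using predicted runtimes to decide what to run. The toy simulation below is our own illustrative sketch of that scheduling pattern, not the paper's PREMA algorithm: the names (`Task`, `PreemptiveScheduler`), the time-slice model, and the `remaining` field (standing in for a predicted runtime) are all assumptions made for this example.

```python
import heapq
from dataclasses import dataclass, field

@dataclass(order=True)
class Task:
    # Lower value = higher priority; only `priority` participates in ordering.
    priority: int
    name: str = field(compare=False)
    remaining: int = field(compare=False)  # predicted time slices still needed

class PreemptiveScheduler:
    """Runs the highest-priority ready task each time slice; a newly
    arrived high-priority task preempts the current one at the next
    slice boundary (a stand-in for an NPU checkpoint/restore)."""

    def __init__(self):
        self.ready = []   # min-heap keyed by priority
        self.trace = []   # which task ran in each time slice

    def submit(self, task):
        heapq.heappush(self.ready, task)

    def run(self, slices):
        for _ in range(slices):
            if not self.ready:
                break
            task = heapq.heappop(self.ready)
            self.trace.append(task.name)
            task.remaining -= 1
            if task.remaining > 0:
                heapq.heappush(self.ready, task)  # preempted/unfinished: resume later

sched = PreemptiveScheduler()
sched.submit(Task(priority=1, name="batch", remaining=3))
sched.run(1)   # low-priority batch job occupies the NPU first
sched.submit(Task(priority=0, name="latency", remaining=2))
sched.run(4)   # high-priority latency job preempts, batch resumes afterwards
print(sched.trace)  # ['batch', 'latency', 'latency', 'batch', 'batch']
```

The trace shows the latency-critical task cutting in ahead of the batch job mid-flight, which is the behavior the paper's 7.8x average latency improvement for high-priority inference depends on; the paper's actual contribution is the hardware mechanism and predictive policy that make such preemption cheap on an NPU.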
Community shame:
Not yet rated
Community Contributions

Found the code? Know the venue? Think something is wrong? Let us know!

📜 Similar Papers

In the same crypt — Distributed Computing

Died the same way — 👻 Ghosted