SWE-Lancer: Can Frontier LLMs Earn $1 Million from Real-World Freelance Software Engineering?

February 17, 2025 · Declared Dead · 🏛 International Conference on Machine Learning

📜 CAUSE OF DEATH: Death by README
Repo has only a README

Repo contents: README.md

Authors: Samuel Miserendino, Michele Wang, Tejal Patwardhan, Johannes Heidecke
arXiv ID: 2502.12115
Category: cs.LG (Machine Learning)
Cross-listed: cs.SE
Citations: 67
Venue: International Conference on Machine Learning
Repository: https://github.com/openai/SWELancer-Benchmark (⭐ 1439)
Last checked: 1 month ago
Abstract
We introduce SWE-Lancer, a benchmark of over 1,400 freelance software engineering tasks from Upwork, valued at $1 million USD in total real-world payouts. SWE-Lancer encompasses both independent engineering tasks, ranging from $50 bug fixes to $32,000 feature implementations, and managerial tasks, where models choose between technical implementation proposals. Independent tasks are graded with end-to-end tests triple-verified by experienced software engineers, while managerial decisions are assessed against the choices of the original hired engineering managers. We evaluate model performance and find that frontier models are still unable to solve the majority of tasks. To facilitate future research, we open-source a unified Docker image and a public evaluation split, SWE-Lancer Diamond (https://github.com/openai/SWELancer-Benchmark). By mapping model performance to monetary value, we hope SWE-Lancer enables greater research into the economic impact of AI model development.
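The headline metric described in the abstract is simple to state: a model "earns" a task's full payout only if its solution passes the end-to-end tests (or, for managerial tasks, matches the hired manager's choice). Below is a minimal Python sketch of that scoring scheme; the task records, field names, and example payouts are illustrative assumptions, not the benchmark's actual data format or API.

```python
# Hypothetical sketch of an "earned dollars" scoring scheme like the one
# SWE-Lancer describes: a model's score is the total real-world payout of
# the tasks it solves. All names and values here are assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    task_id: str
    kind: str          # "independent" or "managerial" (assumed labels)
    payout_usd: float  # real-world payout attached to the task
    passed: bool       # e2e tests passed / matched the hired manager's pick

def earned_usd(tasks: list[Task]) -> float:
    """Sum the payouts of all solved tasks; unsolved tasks earn nothing."""
    return sum(t.payout_usd for t in tasks if t.passed)

def pass_rate(tasks: list[Task]) -> float:
    """Fraction of tasks solved, independent of dollar value."""
    return sum(t.passed for t in tasks) / len(tasks) if tasks else 0.0

if __name__ == "__main__":
    results = [
        Task("upwork-001", "independent", 50.0, True),     # small bug fix
        Task("upwork-002", "independent", 32000.0, False), # large feature, unsolved
        Task("upwork-003", "managerial", 500.0, True),     # matched manager's choice
    ]
    total = sum(t.payout_usd for t in results)
    print(f"Earned: ${earned_usd(results):,.2f} of ${total:,.2f}")
    print(f"Pass rate: {pass_rate(results):.0%}")
```

Note how the dollar-weighted score and the plain pass rate can diverge sharply: in the toy run above the model solves two of three tasks (67%) but earns only $550 of $32,550, because the unsolved task carries almost all of the value.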
📜 Similar Papers

In the same crypt · Machine Learning

Died the same way · 📜 Death by README