ByteDance's Doubao Large Model Team has officially open-sourced the Multi-SWE-bench dataset, the first multi-language benchmark designed to evaluate and improve the automatic bug-fixing capabilities of large models. The dataset spans eight popular programming languages and was built from GitHub issues collected and curated over nearly a year. Multi-SWE-bench aims to advance automatic programming research and push large models toward more sophisticated programming intelligence.
