talnarchives

A French-language digital archive of research articles in Natural Language Processing.

Evaluating LLMs Efficiency Using Successive Attempts on Binary-Outcome Tasks

Mohamed Amine El Yagouby, Mehdi Zekroum, Abdelkader Lahmadi, Mounir Ghogho, Olivier Festor

Abstract: Evaluating Large Language Models (LLMs) with single-attempt metrics such as Success Rate (SR) overlooks their capacity for iterative problem solving. In tasks with binary outcomes (success or failure), such as coding or planning, LLMs often benefit from multiple attempts. Existing multi-attempt metrics such as pass@k and success@k account for eventual success but ignore how efficiently it is achieved, making them more costly to apply. We propose a new evaluation method based on Successive Multiple Attempts, in which a maximum number of retries is fixed, and introduce our Success Efficiency (SE) metric, which captures both success and efficiency in a single value by rewarding earlier successes and penalizing delays. Evaluated on the HumanEval dataset across six LLMs, SE captures how quickly an LLM solves tasks, which existing metrics do not offer. This work complements existing evaluation methods by measuring not only whether LLMs succeed but also how efficiently they do so.
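The abstract does not give the exact SE formula, so the following is only a minimal sketch of the successive-attempts protocol it describes: each task is retried up to a fixed maximum, earlier successes score higher, and failures within the budget score zero. The linear weighting, the function names (success_efficiency, evaluate), and the max_attempts value are illustrative assumptions, not the authors' definition.

```python
def success_efficiency(first_success_attempt, max_attempts):
    """Hypothetical per-task score: 1.0 for a first-attempt success,
    decaying linearly as more attempts are needed, and 0.0 if the task
    is never solved within max_attempts. (Assumed weighting, not the
    paper's formula.)"""
    if first_success_attempt is None:
        return 0.0
    return (max_attempts - first_success_attempt + 1) / max_attempts

def evaluate(first_success_attempts, max_attempts=5):
    """first_success_attempts: one entry per task, giving the 1-based
    attempt index of the first success, or None if all attempts failed.
    Returns (success rate within the attempt budget, mean SE-style score)."""
    n = len(first_success_attempts)
    solved = sum(1 for a in first_success_attempts if a is not None)
    se = sum(success_efficiency(a, max_attempts) for a in first_success_attempts) / n
    return solved / n, se

# Example: three tasks, solved on attempts 1 and 3, one unsolved.
# Both models that solve 2/3 tasks get the same success rate, but the
# efficiency score distinguishes how quickly they succeeded.
print(evaluate([1, 3, None], max_attempts=5))  # (0.667, 0.533)
```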

Keywords: LLM Evaluation, Success Efficiency. Accepted at EvalLLM 2025.