talnarchives

A French-language digital archive of research articles in Natural Language Processing.

Towards a Robust Detection of Language Model-Generated Text: Is ChatGPT that easy to detect?

Wissam Antoun, Virginie Mouilleron, Benoît Sagot, Djamé Seddah

Abstract : Recent advances in natural language processing (NLP) have led to the development of large language models (LLMs) such as ChatGPT. This paper proposes a methodology for developing and evaluating ChatGPT detectors for French text, with a focus on investigating their robustness on out-of-domain data and against common attack schemes. The proposed method involves translating an English dataset into French and training a classifier on the translated data. Results show that the detectors can effectively detect ChatGPT-generated text, with a degree of robustness against basic attack techniques in in-domain settings. However, vulnerabilities are evident in out-of-domain contexts, highlighting the challenge of detecting adversarial text. The study emphasizes caution when applying in-domain testing results to a wider variety of content. We provide our translated datasets and models as open-source resources.
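The pipeline described in the abstract (translate an English dataset into French, then train a binary human-vs-ChatGPT classifier on the translated text) can be illustrated with a minimal sketch. Everything below is an assumption for illustration only: the `camembert-base` encoder, the toy in-memory dataset, the column names, and the hyperparameters are not taken from the paper, which should be consulted for the actual models and training setup.

```python
# Minimal sketch: fine-tune a French pretrained encoder as a binary
# (0 = human, 1 = ChatGPT) text classifier on translated data.
# Model name, data, and hyperparameters are illustrative assumptions.
from datasets import Dataset
from transformers import (
    AutoTokenizer,
    AutoModelForSequenceClassification,
    TrainingArguments,
    Trainer,
)

MODEL_NAME = "camembert-base"  # assumed French encoder, not confirmed by the paper

# Toy stand-in for the translated dataset: French text plus a binary label.
train_ds = Dataset.from_dict({
    "text": [
        "Exemple de texte écrit par un humain.",
        "Exemple de texte généré par ChatGPT.",
    ],
    "label": [0, 1],
})

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)

def tokenize(batch):
    # Truncate to the encoder's maximum context length.
    return tokenizer(batch["text"], truncation=True, max_length=512)

train_ds = train_ds.map(tokenize, batched=True)

model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

args = TrainingArguments(
    output_dir="chatgpt-detector-fr",
    per_device_train_batch_size=16,
    num_train_epochs=3,
    learning_rate=2e-5,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=train_ds,
    tokenizer=tokenizer,  # enables dynamic padding via the default data collator
)
trainer.train()
```

Evaluating such a classifier on out-of-domain text and on adversarially perturbed inputs, as the abstract describes, would use the same model with held-out datasets drawn from other domains.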

Keywords : ChatGPT, text generation, detection of machine-generated text, robustness