ONLINE Session of SYFLAT Research Methodology Seminars 2024/2025
Time: Saturday 18 January, 2025 at 10:00 a.m. (Tunisia time)

The Program:

10:00-10:45 – Speaker 1: Dr. Basma Bouziri
 
Title: A Mixed-Method Approach to Reliability in Corpus-Based Discourse Studies
Abstract: One major challenge in corpus-based discourse studies is subjectivity, which affects their quality and calls their methodological rigor into question. This is especially the case when coding decisions are inadequately reported. To reduce subjectivity and guarantee consistency, assessing reliability is essential. Some factors, however, can prevent researchers from measuring reliability, including practical constraints and/or a lack of awareness of the significant role reliability plays in enhancing research quality. In this presentation, I advocate for a mixed-method approach that combines traditional quantitative measures with a qualitative reliability assessment. The presentation starts with an overview of existing reliability measures. It then highlights a qualitative aspect of the reliability construct, focusing on transparency and comprehensiveness in reliability reports. The use of a mixed approach to reliability is illustrated through a sample reliability report. The presentation ends with strategies for implementing reliability in non-funded or under-funded research contexts.
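
As a rough illustration of one traditional quantitative reliability measure of the kind the talk surveys, the sketch below computes Cohen's kappa for two coders labelling the same discourse segments. The coding categories and labels are hypothetical and are not drawn from the speaker's data; the talk itself covers a broader mixed-method approach.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels over the same set of items."""
    assert len(coder_a) == len(coder_b), "coders must label the same items"
    n = len(coder_a)
    # Observed agreement: proportion of items given identical labels.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected chance agreement, from each coder's marginal label distribution.
    dist_a, dist_b = Counter(coder_a), Counter(coder_b)
    p_e = sum((dist_a[c] / n) * (dist_b[c] / n)
              for c in set(coder_a) | set(coder_b))
    return (p_o - p_e) / (1 - p_e)

# Hypothetical coding decisions for ten discourse segments.
coder_1 = ["hedge", "booster", "hedge", "hedge", "booster",
           "hedge", "booster", "hedge", "hedge", "hedge"]
coder_2 = ["hedge", "booster", "hedge", "booster", "booster",
           "hedge", "hedge", "hedge", "hedge", "hedge"]
print(f"Cohen's kappa: {cohens_kappa(coder_1, coder_2):.2f}")
```

A kappa value is typically read against conventional benchmarks (e.g., values above roughly 0.6 treated as substantial agreement), which is exactly the kind of quantitative evidence the qualitative reliability report would then contextualize.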
 
10:45-11:30 – Speaker 2: Dr. Mimoun Melliti
 
Title: AI in MA Thesis Writing: The Use of Lexical Patterns to Study the Influence of ChatGPT
Abstract: This paper investigates the use of Artificial Intelligence (AI) in MA thesis writing, addressing a notable gap in existing research. While AI's role in undergraduate essays and general academic writing has been explored, its use in the genre of MA theses, characterized by rigorous academic inquiry and advanced scholarly engagement, remains underexplored. This study examines the frequency and contextual usage of specific lexical items in 53 MA theses in linguistics, literature, discourse, and culture studies, aiming to identify patterns indicative of AI-generated content. Employing a systematic comparison of MA theses defended before and after the release of AI text generators, the research tracks the usage of targeted lexical items to identify deviations suggestive of AI influence. By analyzing these patterns, the study seeks to provide empirical insights into the integration of AI technologies into graduate-level writing, contributing to theoretical understanding and offering practical implications for educational institutions and policymakers. The findings indicate a dramatic increase in the salience of specific lexical items frequently used by ChatGPT compared to the frequency of their use before the release of this text generator. The findings inform the ethical considerations and pedagogical strategies necessary for responsibly incorporating AI into graduate writing instruction, ensuring the integrity of scholarly communication practices.
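
As a minimal sketch of the kind of before/after lexical comparison the abstract describes, the snippet below computes normalized frequencies of a few target words across two corpora of thesis texts. The word list, corpora, and variable names are illustrative assumptions, not the items or data analyzed in the study.

```python
import re
from collections import Counter

# Illustrative lexical items often associated with ChatGPT-style prose
# (an assumption for this sketch, not the study's actual item list).
TARGET_ITEMS = ["delve", "moreover", "furthermore", "multifaceted", "nuanced"]

def normalized_frequency(texts, targets, per=1_000_000):
    """Frequency of each target word per `per` tokens across a list of texts."""
    tokens = [t for text in texts for t in re.findall(r"[a-z']+", text.lower())]
    counts = Counter(tokens)
    total = len(tokens)
    return {w: counts[w] / total * per for w in targets}

# Hypothetical corpora: full texts of theses defended before and after
# the release of AI text generators.
pre_theses = ["This thesis explores discourse markers in learner writing ..."]
post_theses = ["Moreover, this thesis delves into the multifaceted nature of discourse ..."]

before = normalized_frequency(pre_theses, TARGET_ITEMS)
after = normalized_frequency(post_theses, TARGET_ITEMS)
for w in TARGET_ITEMS:
    print(f"{w:>14}: {before[w]:8.1f} -> {after[w]:8.1f} per million tokens")
```

Normalizing per million tokens keeps the pre- and post-release corpora comparable even when they differ in size, which is the usual precaution in corpus frequency comparisons of this sort.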