
Introduction

The emergence of generative artificial intelligence (GenAI) tools such as ChatGPT, GrammarlyGO, and Microsoft Copilot has brought significant change to academic writing. Whereas a decade ago the primary focus of writing evaluation was on students’ mastery of grammar, structure, argumentation, and citation, today we as educators must also consider how these technologies affect both the process and the product of writing. In the past, we sought perfection in students’ academic assignments; today, we often look for imperfection as evidence that the work is genuinely the student’s own. As a lecturer working with postgraduates, I have observed a growing need to rethink assessment practices so that they both accommodate and critically evaluate GenAI-influenced academic writing.

The Changing Nature of Academic Writing in the GenAI Era

I recall reading one of the earliest academic discussions of large language models (LLMs) such as GPT-3.5 back in 2023, when research in this area was still limited; Barrot’s (2023) study was among the few exploring GenAI in second language writing. There were, and still are, contradictions surrounding GenAI use in writing; as Warschauer et al. (2023) put it, the strongest student writers may be suspected by their teachers of using GenAI when they produce texts that seem “too good to be true.” In the short time since, GenAI has developed at a remarkable pace, and the volume of studies on its educational applications has surged; it is now common to receive daily Google Scholar alerts on new work examining GenAI in writing. We cannot deny that GenAI is becoming a partner for students, with many using it to help them write in English, regardless of the extent or purpose of that use. This reflects a shift in assessment philosophy from the pursuit of error-free output toward valuing authentic, human expression. The crucial question remains: How should we assess students’ writing in the digital era?

Generative AI can now perform tasks that were once considered clear indicators of individual writing competence: generating coherent sentences, paraphrasing, summarizing, and even developing an argument. Its ability to produce text rapidly and convincingly raises doubts about students’ authenticity, originality, and writing skill development. As Kim and Danilina (2025) note, educators must now distinguish between students demonstrating their own writing competence and students orchestrating GenAI input to create a polished product. This is not a trivial distinction, given that traditional markers of language proficiency (clarity, organization, coherence and cohesion, and language correctness) may now reflect the competence of the GenAI tool rather than the student. That said, GenAI is not inherently a threat to language learning, especially writing. When used ethically and critically, it can support the development of ideas, expand vocabulary, and help students refine complex arguments. Our assessment methods should therefore promote responsible use while maintaining the integrity of academic discourse.

Key Strategies for Writing Evaluation

Process-Oriented Assessment

One useful approach is to evaluate the process of writing (e.g., planning, drafting, revising) rather than only the final written product. By incorporating drafting stages, reflective commentaries, and annotated outlines, instructors can gain insight into how students construct their work (Barrot, 2023; Chuang & Yan, 2025). Asking students to submit early drafts and notes, along with records of how they engaged with AI tools, can also make their cognitive and linguistic contributions visible. As Boud and Molloy (2013) suggest, assessment should be a learning process, not merely an audit of the final output.

Critical Engagement with Sources

While GenAI tools can retrieve and summarize sources, evaluating students’ ability to assess and integrate evidence remains paramount. Writing tasks should require students to justify why particular sources were used, articulate connections between readings, and critique previous studies. An effective rubric can explicitly value critical analysis over superficial aggregation. In so doing, we as teachers can shift the focus from surface-level correctness to deep intellectual engagement.

Reflective and Metacognitive Tasks

To foster students’ own cognitive skills, writing teachers can include reflective writing in their classes. Reflective writing allows students to explain their argument choices, assess the strengths and weaknesses of their drafts, and discuss the role of GenAI in their writing process. This metacognitive dimension not only reveals authenticity but also helps students become more self-aware writers (Zimmerman, 2002).

Critical GenAI Literacy Assessment

The ability to use AI tools responsibly, knowing their capabilities and limitations, should itself be a learning outcome in writing courses. With the emergence of LLMs and their integration into writing, recent research has shifted attention from feedback literacy to GenAI literacy, a concept that has been defined and conceptualized in different ways but that commonly stresses the need to develop students’ capacity to evaluate the strengths and limitations of GenAI content (e.g., Dang & Wang, 2024; Darvin, 2025; Hwang et al., 2025). Evaluating these competencies positions writing assessment within the realities of contemporary authorship.

Challenges in Practice

Despite the usefulness of these assessment strategies in writing courses, implementation will require effort and institutional support. Time constraints, large cohorts, and varying disciplinary norms can be barriers to process-based or oral assessment approaches. Detecting GenAI content is another challenge, as AI detectors are still imperfect and risk false positives. There is also a pedagogical tension between penalizing students for using GenAI and encouraging them to use it constructively; I believe both extremes are counterproductive, which is why writing teachers and instructors must create an encouraging learning environment that fosters trust and ethical norms. A balanced policy should recognize GenAI as part of the scholarly environment, much like spell checkers or referencing software, but it must demand transparent integration. For instance, students might include an “AI Use Statement” describing which tools were used, for what purpose, and how the content was verified.

Conclusion

Assessing students’ academic writing in the GenAI era requires moving beyond an exclusive focus on the final written product toward a broader view of cognition, authorship, and ethical engagement. Writing assessment practices should value engagement in the writing process, foster critical thinking, and encourage transparency in GenAI use. Ultimately, the goal is not to quarantine students from GenAI but to ensure they retain, and continue to develop, the intellectual skills of argumentation, synthesis, and reflection that define scholarly work. If students emerge from our writing courses able to write with clarity, substance, and integrity, whether or not GenAI is involved, we will have met our responsibility as educators. The GenAI revolution is not merely a challenge to academic writing; it is an opportunity to deepen our teaching and create more authentic, engaging ways to assess the next generation of scholars.

Barrot, J. S. (2023). Using ChatGPT for second language writing: Pitfalls and potentials. Assessing Writing, 57, 100745.

Boud, D., & Molloy, E. (2013). Feedback in Higher and Professional Education: Understanding it and Doing it Well. Routledge.

Chuang, P. L., & Yan, X. (2025). Language assessment in the era of generative artificial intelligence: Opportunities, challenges, and future directions. System, 103846.

Dang, A., & Wang, H. (2024). Ethical use of generative AI for writing practices: Addressing linguistically diverse students in US Universities’ AI statements. Journal of Second Language Writing, 66, 101157.

Darvin, R. (2025). The need for critical digital literacies in generative AI-mediated L2 writing. Journal of Second Language Writing, 67, 101186.

Hwang, H., Chang, X., & Sun, J. (2025). Generative AI is useful for second language writing, but when, why, and for how long do learners use it? Journal of Second Language Writing, 69, 101230.

Kim, J., & Danilina, E. (2025). Towards inclusive and equitable assessment practices in the age of GenAI: Revisiting academic literacies for multilingual students in academic writing. Innovations in Education and Teaching International, 1-5.

Warschauer, M., Tseng, W., Yim, S., Webster, T., Jacob, S., Du, Q., & Tate, T. (2023). The affordances and contradictions of AI-generated text for writers of English as a second or foreign language. Journal of Second Language Writing, 62.

Zimmerman, B. J. (2002). Becoming a self-regulated learner: An overview. Theory Into Practice, 41(2), 64-70.

Dr. Murad Abdu Saeed is a Senior Lecturer in the Department of English Language, Universiti Malaya, Malaysia. He holds a Ph.D. in English Language Studies from Universiti Kebangsaan Malaysia and has over a decade of teaching and research experience across Yemen, Saudi Arabia, and Malaysia. His research focuses on EFL writing, academic writing pedagogy, collaborative and technology-enhanced learning, and the integration of AI in education. Dr. Saeed has published widely in leading journals such as Assessment & Evaluation in Higher Education, Assessing Writing, Journal of Second Language Writing, Language Learning & Technology, Computers and Composition, and The Language Learning Journal. He supervises numerous postgraduate theses and leads funded projects on AI literacy, feedback practices, and digital pedagogy, contributing significantly to the intersection of academic writing, supervision, and emerging technologies.
