Visibility into AI use in student writing has a role to play in ensuring student equity.
When some students are perceived to prosper through deliberate AI misconduct, it can demotivate fellow students who adhere strictly to integrity policies. Meanwhile, applying the same consequences to unintentional AI use as to deliberate misconduct can similarly undermine engagement and, paradoxically, increase the likelihood of well-meaning students resorting to cheating to level the playing field.
Building a trusted space for student writing, including clear expectations around AI use in the writing process, can help relieve these tensions. A trusted space can support ethical practices, reduce the likelihood of academic misconduct, and reinforce student motivation. Trusted spaces are one element of the broader shift towards ‘designing for integrity’ in education – a strategic, pedagogical response to the growing influence of AI.
So what is a ‘trusted space’ for student writing, and – if the focus is on trust – what is the role of AI detection within it?
What is a trusted space for student writing?
A trusted space is an environment where students are given the psychological safety to experiment, make mistakes, and refine their academic practices without fear of punishment.
The intersection of AI and academic integrity is difficult for students to navigate, especially amid a maelstrom of marketing that promotes ‘homework help’ and unsolicited offers of answers that appear inside learning tools. While missteps are inevitable, misconduct is not, and institutional responses need to reflect this.
A trusted space for writing allows students to compose assignments using the tools at their disposal, including AI, and then receive formative feedback on both the product and process of their practices. This allows educators to discuss the implications of different AI uses – from text refinement to using unedited AI content in assignments – and build towards ethical AI integration and academic integrity.
It isn’t about policing; it’s about transparency. Students need to develop AI literacy and academic integrity through feedback and guidance, rather than fear and punishment.
Why is a trusted space important for academic integrity?
Research consistently shows that educative, dialogic, feedback-based approaches to academic integrity are more effective than punitive ones (Sbaffi and Zhao, 2022).
While punishment-focused systems tend to isolate and stigmatize students, restorative practices are shown to strengthen student connection to the academic community and foster behavioral change (KPU). Trusted spaces allow for these constructive conversations at a time when a lack of clarity around AI may lead to inadvertent policy breaches.
It’s important for educators to be able to differentiate between AI missteps and misconduct because students view unintentional errors as distinct from deliberate plagiarism (Tight, 2024). Treating them the same way can undermine engagement and motivation. By identifying and addressing the use of AI with educative feedback rather than punitive measures, institutions can preserve supportive student-tutor relationships, which are repeatedly shown to reduce the likelihood of academic misconduct (Bretag and Harper, 2019).
Not only that, but building a trusted space also ensures that fairness in assessment is visible to students. This is essential for ongoing engagement and motivation.
As Miles et al. (2022) note: ‘A student who does not cheat, thereby not receiving any additional assistance, can be disadvantaged compared to those students who do cheat and are not caught. It is essential that there is equality for all students within the learning environment and students need to know that the institution actively promotes equal opportunities for all students to succeed fairly.’
By providing clarity, educative feedback, and effective remediation, institutions demonstrate that AI missteps and misconduct are handled fairly, reducing perceptions of unfair advantage. This is important because ‘cheating is contagious’. When students observe or overestimate cheating by peers, this reduces their own likelihood of compliance with academic integrity policies, creating a ‘vicious cycle’ (Chacko et al., 2024).
Put simply, building a trusted space for student writing creates dialogue, increases AI literacy, improves motivation and engagement, and mitigates the compounding risks of academic misconduct.
What role does AI detection play in a ‘trusted’ space?
To have an educative dialogue with students about their use of generative AI, educators need to see how and when students may have used it. At first glance, this may seem anathema to the concept of a ‘trusted’ space. But in this context, AI detection is simply used to identify instances of AI use, not to infer the intent behind it.
Here, ‘trust’ isn’t about educators having blind faith that students will always behave perfectly. It is about students trusting they will be treated fairly and equitably as they navigate new learning tools and processes, and receive constructive guidance to build their AI literacy and critical engagement.
For educators interested in building a trusted space for student composition, a combination of writing transparency and human judgment is key – and that means using AI detection tools to highlight areas for review and discussion.
That isn’t to negate the importance of ‘detection as deterrent’, which, alongside institutional sanctions, remains a consideration in student integrity decisions (Ortiz-Bonnin and Blahopoulou, 2025).
However, with research indicating that almost half of students (HEPI, Turnitin/Vanson Bourne) avoid using AI altogether for fear of being accused of cheating, an AI detector doesn’t just serve to correct over-reliance. When deployed as a way to start conversations about appropriate use, it also supports more confident AI engagement and adoption among students who might otherwise avoid it, further improving student equity.
If AI detection isn’t 100% accurate, is it worth using?
One hundred percent accurate AI detection will never exist, especially as generative AI tools become increasingly sophisticated. By its very nature, generative AI content is difficult to identify.
This doesn’t render AI detectors useless. Indeed, AI checkers are highly effective at supporting quality, original writing. It just means they are designed to be used within a broader framework of safeguards.
Although no single AI detection solution is perfect in isolation, when combined with human judgment, an AI detector is a critical component in the overall risk response.
Think of AI detection software like a diagnostic tool in medicine. A test may highlight areas of potential concern, but it takes a trained professional to interpret the results and determine the appropriate course of action.
It is the same with AI detection tools. The goal isn’t perfect detection and punishment. It is to provide writing transparency that ensures student equity, validates authentic effort, and builds confidence in appropriate AI use.
Put simply, the data is there to start a dialogue. So what does this look like in practice?
Turning data into dialogue: Educative AI conversations in practice
Student A: Submits an assignment that has an average composition time and minimal flags for similarity to existing or AI content. No intervention needed; the process audit trail confirms the student is creating authentic and original work.
Student B: Submits an assignment with shorter-than-average composition time and a minor flag for similarity to AI content. On review, the educator discovers the student has relied too heavily on source material with limited interpretation. This prompts a supportive conversation about how to use and cite source material, helping develop student skills and confidence.
Student C: Submits an assignment with a low composition time, numerous flags for similarity to existing content, and high instances of pasted content. This triggers a discussion about the student’s researching and writing practices, including guidance on appropriate AI use, citation practices, and the value of authentic composition for cognitive engagement.
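For readers who want to see the triage logic behind these three scenarios made concrete, here is a minimal, hypothetical sketch in Python. The signal names (composition_minutes, ai_similarity_flags, source_similarity_flags, pasted_segments), the thresholds, and the suggested actions are illustrative assumptions rather than features of any particular detection product; the output is only a prompt for the human conversation described above.

```python
from dataclasses import dataclass

# Hypothetical signals an educator might review alongside a submission.
# Field names and thresholds are illustrative assumptions, not product features.
@dataclass
class SubmissionSignals:
    composition_minutes: float    # time spent composing, per the process audit trail
    ai_similarity_flags: int      # segments flagged as similar to AI-generated text
    source_similarity_flags: int  # segments flagged as similar to existing sources
    pasted_segments: int          # large blocks pasted rather than typed

def triage(signals: SubmissionSignals, cohort_avg_minutes: float) -> str:
    """Suggest a follow-up action; a human always interprets the flags."""
    short_composition = signals.composition_minutes < 0.5 * cohort_avg_minutes
    heavy_flags = (signals.source_similarity_flags >= 5
                   or signals.pasted_segments >= 5)

    if short_composition and heavy_flags:
        # Student C pattern: discuss research and writing practices in depth.
        return "schedule a conversation on AI use, citation, and authentic composition"
    if signals.ai_similarity_flags > 0 or signals.source_similarity_flags > 0:
        # Student B pattern: minor flags prompt supportive, formative feedback.
        return "offer formative feedback on using and citing source material"
    # Student A pattern: the audit trail supports authentic, original work.
    return "no intervention needed"
```

In each case, the returned suggestion only frames the kind of conversation an educator might initiate; the judgment about intent and next steps stays with the person, not the tool.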
Building a trusted space for student writing is one step towards increased fairness in assessment, particularly in the age of generative AI. Through AI detection and educative feedback, institutions can help students navigate the grey areas of new technology and strengthen their academic integrity.
By removing opportunities for students to gain an unfair advantage through undetected AI misuse, institutions improve both student equity and trust in fairness in assessment. This creates a strong foundation for cognitive engagement, ensuring that the brain (like a muscle) is exercised through authentic inquiry, ultimately leading to deeper learning outcomes.