Generative AI trends paint a nuanced picture of how students incorporate artificial intelligence into their learning practices.
As generative AI reshapes how students engage with learning, executive leaders have an opportunity to align institutional policies and practices with these emerging trends. Gaining visibility into AI usage can be a strategic asset for institutional planning, policy-making, and maintaining trust.
However, some common assumptions about student AI adoption are misapprehensions that risk undermining student outcomes and institutional reputation.
In this article, we break down the key themes emerging from three major research reports on generative AI trends in student behavior.
- Kortext/Higher Education Policy Institute – Student Generative AI Survey (2025)
- Vanson Bourne/Turnitin – Crossroads: Navigating the Intersection of AI in Education (2025)
- Common Sense Media and Hopelab/Harvard University Center for Digital Thriving – Teen and Young Adult Perspectives on Generative AI (2024)
Plus, we explore what these insights could mean for AI policy in schools and universities.
What do 2025 generative AI trends tell us about student integrity?
Recent generative AI trends show that our current thinking on student AI attitudes may be ill-founded.
Since ChatGPT emerged on the education landscape in 2022, the prevailing narrative has been that AI-assisted academic misconduct is on the rise, and that AI access equals abuse.
While AI misconduct investigations have risen since LLMs (large language models) became widely available, they currently stand at just 5.1 per 1,000 students (ibid), and the assumption that access to AI tools automatically undermines student academic integrity may be unfounded.
It’s possible that AI isn’t the wholesale invitation to plagiarism that many have feared. Rather, it could be the latest in a long line of opportunities available to students set on misconduct – from copying work and contract cheating in previous decades to misuse of generative AI and AI-human rewriter services today.
However, unlike previous avenues for misconduct, artificial intelligence also offers considerable benefits to students entering an AI-focused workforce. As such, it may call for an institutional response that both robustly protects against misconduct and supports AI literacy.
Multiple studies find students have a more critical, strategic, and ethical approach to using AI in education than institutions may have first thought. This in itself presents an emerging consideration for student outcomes, graduate employability, and institutional reputation – one which leaders might consider addressing proactively.
Key findings on student use of generative AI in 2025
1. Students appear more cautious about AI than we might think
Research finds students are not necessarily rushing to abuse AI. They seem to have strong views on cheating, high levels of concern about AI’s role in education, and mixed opinions on its impact on their lives.
- 63% of students say using AI to write an entire piece of work is cheating – more than faculty at 55% or administrators at 45% (Turnitin)
- 64% of students are worried about AI’s use in education, versus only 50% of educators and 41% of academic administrators (Turnitin)
- 41% of teens believe AI will have both positive and negative impacts on their lives in the next 10 years (CfDT)
- Students are also aware of AI’s limitations, with 51% saying AI hallucinations discourage them from using AI (HEPI) and 47% concerned about misinformation (Turnitin)
Insight: This suggests that students are more ethically aware and AI-savvy than institutions may assume. They appear to be adapting rapidly, but could benefit from institutional support to do it well.
2. Student AI use is widespread but may not constitute misconduct
Data suggests that AI is mostly being used as a learning companion, not a way to cheat the system. Beyond saving time and improving quality, the top reasons students use AI are to get instant, personalized, or out-of-hours support (HEPI), pointing to a strong appetite for AI to supplement learning.
- 88% of students have used generative AI in assignments, but that doesn’t mean cheating (HEPI)
- The most popular academic uses of AI are: explaining concepts (58%), summarizing articles (48%), and suggesting research ideas (41%). Only 18% use AI-generated text directly in assessments (HEPI)
- 59% of students are worried AI could reduce their critical thinking, and 49% are worried about becoming over-reliant on AI tools (Turnitin)
Insight: Institutional policies could benefit from reflecting the reality that most students appear to be using AI to supplement – not subvert – their learning. To support effective AI use, institutions may find value in having stronger visibility into how students integrate AI into their workflows.
3. Current AI policies deter use, but with unintended consequences
Efforts to maintain academic integrity through prohibitive AI policies and detection systems appear to be successful in influencing student behavior. However, while deterrents may reduce misuse, they also risk limiting student opportunities to develop essential digital skills.
- 67% of students believe AI proficiency will enhance their employability (Turnitin), something supported by industry reports on top employability skills
- However, many students are opting not to use AI at all, even for legitimate academic support
- Around half of students hesitate to use AI in their learning for fear of being accused of cheating (53% in HEPI, 47% in Turnitin research)
- 76% believe their institution would detect AI use (HEPI)
A key consideration: There’s a possibility that well-intentioned policies could be deterring students from legitimate AI use and undermining their AI literacy. Tools that provide visibility into how students compose work can eliminate the guesswork and create the foundation for meaningful conversations between students and faculty, ultimately leading to policies centered on effective learning environments.
4. Students seem to want guidance on using AI but might not be getting it
Student experimentation with AI is likely inevitable, and missteps can occur. Research finds that some students are conflicted about their use of AI and may need greater guidance.
- Students say they’re confused by inconsistencies in how different staff members use AI (HEPI)
- 50% of students want to use AI in their studies, but don’t know how to get the most benefit from it (Turnitin)
- Only 42% of students say staff are equipped to help them use AI effectively (HEPI), and 39% of educators say they don’t know how to use it in their role (Turnitin)
- 35% of students say they receive support from their institution to develop AI skills, while 31% say AI is banned or discouraged (HEPI)
Insight: Students appear to be navigating the AI landscape alone, which can lead to confusion. Greater institutional alignment and training on AI could be a powerful way to support students effectively. Providing approved AI tools may also support clarity for both staff and students, as well as equity.
5. Inequity in student AI use appears to be emerging
Attitudes toward, and access to, AI tools are not always equal across the student population, potentially dividing AI proficiency along gender, socioeconomic, and ethnic lines.
- Only 24% of students say their institution provides access to AI tools, despite 53% believing they should – up from 30% who believed that in 2024 (HEPI)
- Students from higher-income households use AI more strategically, while those from lower-income groups are less likely to use AI academically (HEPI)
- Men are 14 percentage points more likely than women to have used AI before university (HEPI)
- 28% of LGBTQ+ students fear AI will negatively impact their lives vs 17% of cisgender/straight peers (CfDT)
Insight: Without intentional support, AI has the potential to become a new vector for digital inequality.
6. Academic disciplines may determine student views of AI
Student experience with, and attitudes toward, AI seem to vary significantly by subject area – with potential implications for teaching and assessment design. Variations in exposure could affect student engagement, not only with AI but with their program overall.
- 40-50% of STEM and Health students believe AI content would perform well in their subject, compared to 20% in Arts and Humanities (HEPI)
- Students in STEM and computing report higher AI use in school, and greater AI confidence on arrival at university (HEPI)
- Students are divided on AI grading and marking, with 29% saying they’d work less hard if their work wasn’t assessed by a human vs 34% who said they’d work harder (HEPI)
Insight: Student AI starting points differ – policies and AI literacy initiatives might benefit from recognizing this. AI tools are equally important in the humanities: in student writing, for example, they can support critical thinking, structuring arguments, and mastering tone.
How might education leaders respond to generative AI trends in 2025?
1. Review institutional AI policies and guidance
- Acknowledge the many benefits of AI use in education
- Shift from a tone of suspicion and prohibition to support and guidance
- Provide positive examples and use cases to support student clarity
- Support students to navigate AI hallucinations, risks, and biases effectively
2. Support faculty to support students
- Facilitate internal alignment on how AI could be integrated into pedagogic practice and student workflows
- Define what’s allowed and what’s not – for coursework, exams, revision, etc. – and communicate this clearly and consistently to students
- Introduce AI-assisted grading mindfully to maintain engagement
- Leverage appropriate tools to gain visibility into student AI use, which can support both AI literacy and academic integrity
3. Ensure equal access
- Consider closing the access gap by providing institutionally supported AI tools
- Think about providing targeted AI training and support for students from marginalized groups
- Note disciplinary differences in student AI exposure and confidence
Don’t let AI ambiguity become a risk to your reputation
Generative AI is fundamentally changing how students engage with information and assignments, moving far beyond simple copy-paste behavior. Without visibility into these new AI-driven behaviors, institutions could be operating with a significant blind spot, potentially hindering informed strategic decisions about the future of teaching and learning.
Understanding how and why students are using AI is a foundational step toward developing effective academic integrity policies and supportive pedagogical strategies.
AI detection tools and other learning transparency solutions can provide the critical visibility that helps to protect institutional reputation, support educators, and ensure students are developing authentic skills.
The goal isn't necessarily to ‘catch’ students out; it's to gain the necessary insight to guide the institution with confidence through a major technological shift. This can be achieved through tools that provide:
- A secure platform to create and administer more misconduct-resistant assessments
- Tools to detect common forms of misconduct, which can help guide student composition practices
- An audit trail to evidence and expedite misconduct investigations if they arise
Turnitin Feedback Studio – now enhanced with Turnitin Clarity for greater visibility into the student writing process – offers transparency into how students write, revise, and engage with their work. It provides actionable insights to adapt teaching, safeguard integrity, and build trust across the academic community.