The Hidden Risks of AI-Content Detection — and How Teachers Can Respond Responsibly

When “AI Detection” Looks Like a Solution—but Isn’t

As AI becomes part of students’ daily learning, it is natural for teachers to wonder how to tell whether a piece of writing was produced by the student or by an AI tool.

AI-content detection platforms promise quick, reliable answers, yet the research tells a very different story. In practice, these systems often create serious risks, particularly for learners of English.


What the Research Shows

False positives against non-native writers

A Stanford-led study found that 61% of genuine TOEFL essays were incorrectly flagged as AI-generated, while essays by native speakers were almost always classified correctly. The reason is that these detectors treat predictable, formulaic word choice as a sign of machine writing, so the careful, textbook-style English typical of language learners is precisely what triggers a flag. At that rate, a class set of 30 genuine ESL essays could see roughly 18 wrongly flagged as AI-generated.

Easy evasion by AI tools

Conversely, texts genuinely written by AI can often avoid detection entirely. Simple paraphrasing or minor rewording can reduce a detector’s accuracy from around 70% to below 5%.

Universities are stepping back

Leading institutions such as MIT, Cornell, Vanderbilt, and the University of Pittsburgh have publicly discouraged or discontinued the use of AI-detection tools in their teaching, citing evidence of unreliability, bias, and potential harm to students.


Legal and Ethical Context

Under the EU AI Act, AI systems used in education to evaluate learning outcomes or to monitor and detect prohibited behaviour during assessments are classified as high-risk (Annex III). High-risk status brings binding requirements for human oversight, transparency, and accuracy.

Under GDPR Article 22, students have the right not to be subject to a decision based solely on automated processing where that decision produces legal or similarly significant effects, as a misconduct finding clearly does. In other words, a detection score can never serve as the only basis for academic or disciplinary action.


Responsible Teacher Behaviour

At Penmate, AI detection is available only as an optional investigative aid. It must never be used as the sole evidence of academic misconduct.

Teachers who choose to enable detection should:

  1. Acknowledge the limitations. Detection results are probabilities, not proof.

  2. Seek corroborating evidence. Review drafts, revision histories, or discuss the work with the student.

  3. Ensure due process. No disciplinary decision should be made without human review.

  4. Be transparent. Inform students when detection is used and explain its purpose.

  5. Use extra caution with ESL/EFL writing. Writing at Cambridge B1–C2 levels is particularly prone to false positives.


Better Alternatives to Detection

Instead of relying on unreliable detection scores, teachers can adopt more effective pedagogical approaches:

  • Process-based assessment – require outlines, drafts, reflections, and peer feedback to show authentic development.

  • Personalised prompts – design assignments that connect to personal experience, class discussions, or current events.

  • Portfolio evaluation – assess progress across several pieces of work rather than one essay.

  • Open dialogue – talk with students about ethical AI use and demonstrate how AI can support learning rather than replace it.


Why It Matters

False accusations can cause long-term harm to students’ confidence and trust. Protecting students from wrongful suspicion is part of maintaining a safe and supportive classroom culture. Integrity in education is best achieved through fairness, transparency, and human judgment—not through flawed automated systems.


Penmate’s Policy

AI detection in Penmate is optional and must be activated manually.

Teachers are required to confirm that they understand the limitations of current detection technology and agree not to make disciplinary decisions based solely on detection scores.