Careful Assessments Needed when Using AI-Detection Tools


Most teachers have used an artificial intelligence (AI) detection program to assess whether a student’s work was completed with the assistance of generative AI, according to a new survey of educators conducted by the Center for Democracy & Technology and reported by Education Slice.

But the usefulness of these programs is limited, according to Victor Lee, an associate professor of learning sciences and technology design and STEM education at the Stanford Graduate School of Education. “They are fallible, you can work around them,” he says. “And there is a serious harm risk associated in that an incorrect accusation is a very serious accusation to make.”

Adding to the risk is the finding that only a quarter of teachers in the survey say they are “very effective” at discerning whether assignments were written by their students or by an AI tool.

Half of teachers say generative AI has made them more distrustful of whether students’ schoolwork is actually their own.

Professor Lee says schools should take care when using AI to police students’ use of generative AI. “It could put a label on a student that could have longer term effects on the student’s standing or disciplinary record,” he says. “It could also alienate them from school, because if it was not AI-produced text, and they wrote it and were told it’s bad, that is not a very affirming message.”
