How to Safeguard Student Privacy in the World of AI

In a fall 2023 survey of 1,020 teachers conducted by the nonprofit RAND Corporation, 18% of teachers reported using AI for teaching. Among those teachers, 53% said they use chatbots like ChatGPT at least once a week, and another 16% reported using AI grading tools with the same frequency.

Many AI companies offer services like AI-powered tutors for students, and AI chatbots and platforms that serve as teaching assistants. But many do not sufficiently protect students’ personal data, according to an article in Chalkbeat Philadelphia.

Without guidance from their districts, teachers who experiment with AI could lack crucial understanding of these platforms’ privacy risks. Personal student information could be exposed, with repercussions lasting for years.

Many districts have been slow to prepare teachers for the new learning environment. According to an Education Week survey last year, 58% of educators had received no training on AI.

The risks depend on the platform and how teachers use it. Most common AI platforms were not designed specifically for use in education.

Tools created specifically for educational purposes have more safeguards in place, but they still depend heavily on teachers being cautious about what information they input.

Tips and information for ensuring student privacy:

1) Before entering any information into a platform, know whether it will use the data to target ads to students, whether it will share data with third parties, and how long it will keep student data, says Anjali Nambiar, an education research manager at Learning Collider.

2) Be aware that giving AI platforms personally identifiable student data, such as attendance records, grades, or even student work, can lead to discrimination against those students later in life, such as when they look for jobs. Private information uploaded to these platforms, like parents’ names or Social Security numbers, can enable identity theft.

3) Some platforms designed specifically for education include mechanisms to reduce privacy risks. Khanmigo and MagicSchool show multiple messages alerting teachers not to disclose students’ personal data. They also try to identify and delete any sensitive information that teachers load into the platform.

“We have an agreement that no student or teacher data is used to train their model,” says Kristen DiCerbo, Khan Academy’s chief learning officer. “We also anonymize all the information that is sent to their model.”

4) Various federal and other laws protect student data like students’ names and family information, as well as attendance and behavioral records, disabilities, and disciplinary history. Statutes can vary from state to state. Some states are discussing bills to regulate AI.

5) Congress passed the Family Educational Rights and Privacy Act, or FERPA, in 1974. Under the law, schools are ultimately responsible for student data. FERPA sets conditions under which schools may disclose students’ information to third parties like contractors or technology vendors, including that those parties must be “under the direct control of the agency or institution with respect to the use and maintenance of education records.”

6) Understanding the potential risks is a complex task that teachers shouldn’t be expected to navigate by themselves, according to Randi Weingarten, president of the American Federation of Teachers (AFT).

“(It) cannot become the responsibility of only a few teachers,” Weingarten says.

7) It’s fundamental that school and district technology departments take the lead in vetting the tools that educators can use, according to the AFT’s Commonsense Guardrails for Using Advanced Technology in Schools.

The education technology department in Moore Public Schools outside Oklahoma City uses these strategies:

  • Provide teachers and school administrators with training to understand the risks and responsibilities involved in using these platforms, says Brandon Wilmarth, the district’s director of educational technology.
  • Following the release of ChatGPT, one training session focused on how principals could use the language model to help them write behavioral reports. “We very openly said: You must omit any personally identifiable information,” Wilmarth recalls. “You can write down students’ behaviors, but do not include their real names.”
  • After using the tool, principals transfer the AI response into the district’s template for these documents, review the quality of the output, and make any necessary adjustments. The AI assistance makes the process faster and gives principals helpful insight.

“The AI analysis of the cases was really spot on,” Wilmarth says. Many principals struggle to stay objective because of their personal relationships with students; entering just the facts allowed them to get objective feedback.

  • Many professional development sessions explore the potential, limitations, and risks of using AI.
  • Teachers have easy access to information on which AI platforms are vetted as safe for use.
  • A process allows teachers to submit requests to have the district vet new software before they start using it. “Everyone is pretty aware that you don’t sign up for an account with your Moore Schools email unless it’s an approved tool and that you never get students to sign up for anything that hasn’t been approved,” Wilmarth says.

Chalkbeat Philadelphia
