School district technology leaders have four pieces of advice for protecting student data and avoiding bias when selecting new AI apps and platforms, according to an article in K-12 Dive.
1) Go slow, be cautious and get feedback
Washington’s Peninsula School District is easing into AI implementation. It now permits the use of 11 AI tools, ranging from popular options such as ChatGPT to products built specifically for education, says Kris Hagel, the district’s chief information officer.
Hagel wants to be cautious when adopting AI tools. Peninsula School District already uses more than 1,000 different ed tech tools. The district will significantly clean up its approval system in the next year, he says.
Ed tech tools have proliferated throughout the district, Hagel says. Moving forward, any approved tools — including AI — must actually improve the district’s instruction or operations. There needs to be a plan, he says. It’s not a matter of selecting a new tool because someone “went to a conference” and “saw a shiny thing.”
It’s important that districts “start small” and “go slow” when beginning to approve AI tools, says Andrew Fenstermaker, instructional technology coordinator at Iowa City Community School District. Districts should be hands-on and get feedback from teachers using the AI tools.
“If it is a product that’s more catered towards the elementary classroom space, make sure to get some insights from classroom teachers that are in the elementary space,” Fenstermaker says.
2) Align AI options with instructional and operational goals
Any AI tool a district implements should align with its broader strategies, including instructional and operational goals.
Teachers at Iowa City Community School District must submit a form when they want a new AI tool or another ed tech product for curricular use, Fenstermaker says. A coordinator in the district’s curriculum and instruction department then reviews the request to see whether the tool aligns with instructional goals. An important consideration: Will the product provide data to inform teaching decisions?
If the curriculum coordinator approves the tool, it then goes to the technology department to examine for any data privacy concerns, Fenstermaker says.
Don’t create an entirely separate vetting process for AI tools if you already have one for adopting ed tech, particularly for data privacy and instructional alignment, Hagel says. Lean on the existing systems and make minor adjustments to include AI, “because everything is going to have AI in it, and if it doesn’t already, it will shortly,” he says.
3) Protecting student data is more important than ever
Lynwood Unified School District in California has an 18-point AI fact sheet for ed tech vendors to fill out before the district adopts a new tool, says Patrick Gittisriboongul, assistant superintendent of technology and innovation. Vendors are asked to detail the purpose of the platform, the training data being used, the source of the training data, and the type of AI model the tool relies on.
Identifying the model type is “much more important now, because every foundational model is competing with each other,” Gittisriboongul says. “So being able to navigate between the models is something that, as IT directors and CTOs [chief technology officers], we want to know to make sure that we protect our organization and understand how some of those models have been trained and maintained and benchmarked.”
But getting vendors to fill out the fact sheet can be a challenge, the district acknowledges.
Transparency is critical when it comes to a district’s AI products, any related data sets, and the privacy terms an AI platform articulates, Fenstermaker says. That transparency helps districts protect themselves from future data breaches and ensures student data stays secure while AI tools are in use, he says.
To help with bandwidth, Fenstermaker says, Iowa City Community School District often leans on the global nonprofit 1EdTech to vet privacy policies for tools the organization has already evaluated. By doing so, the district can quickly spot red flags in a tool’s data privacy policy while streamlining its own process for keeping student data protected, he says.
4) Be aware of AI biases
Any technology, but especially AI, is built with some form of bias, Hagel says. That bias may already exist in society, or it may come from the people building the tool, he says.
Schools must account for those biases and make sure the tools deliver the results they want, Hagel says. Knowing that bias exists means districts can take steps to compensate for it.
When asking vendors about potential biases in an AI tool, Gittisriboongul says, Lynwood USD investigates how the tool aims to treat students fairly and without prejudice. The district also looks for third-party studies that rate a tool’s potential bias and the model’s fairness and accuracy, and it works to ensure staff and students can report any issues they encounter with an AI tool.
In Iowa City Community School District, administrators look for any third-party certifications an AI tool may hold, Fenstermaker says. One example is the nonprofit Digital Promise’s Responsibly Designed AI certification. To earn it, a vendor’s AI tools are examined to ensure they are inclusive and represent all demographics across populations, he says.
Source: K-12 Dive