Four Keys to Balance AI Skill-Building against Realistic Concerns

A complete overreliance on opaque, biased, and poorly regulated AI systems is the worst-case outcome for artificial intelligence in K-12 schools, according to an article in Education Week. In that scenario, teachers become mere facilitators because they lack adequate training and an understanding of the technology’s limitations. They blindly accept AI-generated assessments, lesson plans, and even evaluations of their own performance. The human elements of empathy, nuanced understanding, and social-emotional development are diminished. Students are left unprepared for complex human interactions and critical discernment in an AI-dominated world.

This nightmare scenario could lead some educators to discourage beneficial, innovative uses of the technology. They see AI as a clear threat to their professional existence and a dehumanizing force at home and at work. They want to brake hard on the use of AI in schools.

Young people also have growing concerns that AI automation will destroy jobs, as pointed out in the Wired magazine article titled “The AI Backlash Keeps Growing Stronger.” Wired writer Reece Rogers notes, “Right now, though a growing number of Americans use ChatGPT, many people are sick of AI’s encroachment into their lives and are ready to fight back.”

As a growing number of teachers use AI tools to plan lessons, communicate with parents, and even teach writing skills, is outright hostility or even quiet resistance toward the fast-evolving technology really the right approach? Or does resistance risk denying critical learning opportunities and skill building for a generation of students? Here are four keys to answering these questions.

1) Use AI as a brainstorming tool for students

Kristina Peterson, who has taught English for 17 years at Exeter High School in New Hampshire, uses a “workshop style” instructional approach in her classes that emphasizes the process of learning rather than just the outcome. Students read physical books and collaborate in small groups to help each other — unmediated by technology.

Peterson’s students are also empowered to use school-approved AI tools to brainstorm—for instance, talking to an AI chatbot that represents Atticus Finch, the lead character in To Kill a Mockingbird, when they are reading that classic novel.

Educators should view AI as a brainstorming tool bolstered by meaningful guardrails and best practices, she argues. Teaching students to be healthy skeptics of anything AI produces is one way to build those guardrails. For instance, after one of her classes finishes reading a chapter in a novel, students come to class the next day and Peterson gives them a chapter summary generated by ChatGPT. Those AI-generated summaries typically contain factual errors, which Peterson confirms before handing them to the students. Students then go through the summary and highlight the errors.

“I do that to double check that they actually read and understood the chapter,” says Peterson. “But more importantly, to show them that even as advanced as AI is becoming, it still can hallucinate. It still can get things wrong. They love to point out what AI got wrong. And they push back on AI far more than they push back on me.”

Students and teachers must learn to put a human touch on everything AI produces. But schools get hung up trying to balance strict compliance around AI tools against curiosity about how those tools work. Teachers are more concerned with monitoring or catching students using AI to write an essay than with teaching them how to use those technologies appropriately to tackle class assignments. This compliance culture stymies students because most of them will be expected to use AI tools when they enter the workforce. This type of AI ecosystem can produce overprescriptive, innovation-killing policies and classroom rules, as well as misguided use of AI plagiarism detectors.

The result is a split screen for AI-skill building. “If some students are getting really thoughtful and guided opportunities to explore and play with AI, and others are just getting the shutdown,” Peterson says, “then we’re just deepening that [skill] divide even more.”

The world schools operate in now is complex and fast-paced, packed with decisions about technology and learning and the unintended consequences that come with those decisions. Districts realize that putting their heads in the sand and hoping this fast-emerging technology fades away is simply not an option.

2) Schools must encourage experimentation

In today’s environment, schools need to tolerate a wider range of experimentation around AI use, suggests Rafe Steinhauer, an instructional assistant professor in the school of engineering at Dartmouth College. “I would be nervous about any school district saying, ‘We’re going all in [on AI].’ And I would be nervous about any school district [banning the technology],” he says. “We know already that there are tremendous risks to student learning and we know that there are tremendous opportunities with generative AI.”

This cautionary approach leads to a concept called “containment.” It applies to all kinds of sectors—business, the military, and government, as well as K-12 education—and is articulated by Microsoft AI CEO Mustafa Suleyman in his New York Times bestselling book, The Coming Wave: AI, Power, and Our Future.

Suleyman explains that “containment is about meaningful control, the capability to stop a use case, change a research direction, or deny access to harmful actors.” For education this means directing AI use toward strategies that enhance student learning, make teachers’ jobs easier, and protect the massive amounts of student and educator data flowing through AI programs.

Using AI in developmentally appropriate ways for different age groups represents meaningful containment in K-12 education. Kindergartners through 2nd graders are more likely than older kids to attribute human qualities to AI technologies and may even trust AI responses more than those of their teachers. For high schoolers, educators must teach students the limitations of the tools and the need to be skeptical about the accuracy of what they generate. This can’t happen if teachers are not allowed and encouraged to experiment with AI in meaningful ways first.

3) Use AI to teach kids how to tackle complex, real-world problems

Educators should also teach high school students—and maybe middle schoolers, too—how AI can be used to solve some of the most complex problems facing society today. This is real-world learning that builds skills needed to succeed in the workplace.

Clayton Dagler, a teacher at Franklin High School in Elk Grove, Calif., is teaching students how to tackle complex societal problems by pairing a computer-coding language commonly used in artificial intelligence technologies with math concepts. Students learn valuable math, problem-solving, and AI skills simultaneously while also learning the building blocks for the algorithms that drive AI.

Using AI in more strategic and thoughtful ways to help kids develop critical-thinking skills is also a high priority for Justin Reich, an associate professor of digital media at the Massachusetts Institute of Technology. Reich says his biggest worry about the technology is the threat of “bypassing learning.” Not necessarily cheating but simply not learning.

His team of researchers has interviewed about 100 teachers and 40 middle and high school students from around the country to understand their perspectives on AI and the questions they have about its use. During the study, a male high school student described a typical experience in his science class: All the students had their laptops open while the teacher was addressing questions to the entire class. Most of the students were plugging the questions into AI and repeating what the chatbot spat out. The teacher kept chugging along, picking up the pace, as the questions were answered. One student said he was genuinely trying to process the concepts the teacher was presenting, but he couldn’t because the class was moving too quickly thanks to reliance on AI out of the teacher’s eye line.

This is bad teaching. But the problem is that AI can amplify both good and bad outcomes, making poor teaching approaches even worse.

4) Recognize and respond to advances in artificial intelligence

The knowledge gap between AI experts and novices grows wider, Reich notes, when the knowledgeable ones can use AI to increase their expertise exponentially while the novices are manipulated by bad or inaccurate AI-generated information.

“When people are learning to write, they really shouldn’t use this stuff much, because it bypasses their thinking,” he says. “Just like when young people are learning to [do math]. Really good math teachers don’t let them use calculators.”

“There’s no technology which has been developed, which I am aware of, where any school system has raced to be an early adopter and seen widespread advantages from winning that race,” he says.

But containing the technology primarily for good will be difficult. Some might even say impossible. Is it worth the time and effort? Schools don’t have a choice. Education leaders need to follow what Steinhauer from Dartmouth and Reich from MIT recommend: Encourage teachers to experiment with AI in small but meaningful ways. Look hard at the benefits and drawbacks of the technology. Be curious about how students use artificial intelligence in and outside school. Then, share those critical lessons learned with others.

K-12 schools need to seize this moment and figure out how to use the technology smartly to their advantage. Not doing so would be a huge mistake.

Education Week
