K-12 Public Education Insights: Empowering Parents of Color — Trends, Tactics, and Topics That Impact POC
Raising kids can be tough! I know because I’ve been a single mom who raised two kids on my own. And when they get in the K-12 public education system, learning the ins and outs of that system can get you all tangled up, especially when you’re a parent of color (POC). You need to be aware of the current trends, tactics, and topics, as well as the necessary resources to navigate within the system. That’s what the K-12 Public Education Insights: Empowering Parents of Color podcast is all about — providing you with tools, information, and practical actions to help you and your children succeed within the complexities of K-12 public education.
Episode 163: Your Kid’s “Tutor” Might Be Making Them Dumber
AI is showing up in every corner of K‑12—from lesson planning and grading to chat-based “tutors” that promise instant help. I pull back the curtain on what gets lost when schools rush in without guardrails: weakened critical thinking, strained teacher-student connection, and new safety threats like deepfakes that can spread faster than adults can respond. You’ll hear what the data actually say, why many districts still lack clear policies, and how missteps can happen.
I share how teachers and students use AI differently, where hallucinations and bias creep in, and why academic integrity now lives in a gray zone. Then we get practical. I outline simple, high-impact habits for families: treat AI as a brainstorming companion, build healthy skepticism into every prompt, and bring the human elements—curiosity, empathy, voice—back to the center. For educators, I focus on embedding AI literacy inside real subjects, using models to draft, critique, and verify ideas rather than shortcut the thinking itself.
This conversation is for parents who want clarity, teachers who want workable guardrails, and anyone who cares about keeping learning human while still preparing students for an AI-shaped future. By the end, you’ll have a grounded view of risks, a playbook for safer use, and a checklist districts can adopt. If this resonated, subscribe, share with a parent or educator who needs it, and leave a review with your take: where should schools draw the line with AI?
Love my show? Consider being a regular subscriber! Just go to https://tinyurl.com/podcastsupport.
- Thanks for listening! For more information about the show, episodes, and ways to support, check out these websites: https://k12educationinsights.buzzsprout.com or https://www.liberationthrougheducation.com/podcast
- Subscribe on Buzzsprout to receive a shout out on an upcoming episode
- You can also support me with ratings, kind words of encouragement, and by sharing this podcast with friends and family
- Contact me with any specific questions you have at: kim@liberationthrougheducation.com
Welcome to another episode of the K-12 Public Education Insights: Empowering Parents of Color podcast, the podcast that converges at the intersection of educational research and parental action. It's about making the trends, topics, and theories in public education understandable so that you can implement them as practical, actionable strategies that work for your children. My name is Dr. Kim J. Fields, former corporate manager turned education researcher and advocate, and I'm the host of this podcast. I got into this space after dealing with some frustrating interactions with school educators and administrators, as well as experiencing the microaggressions that I faced as an African-American mom raising my two kids who were in the public school system. I really wanted to understand how teachers were trained and what the research said about the challenges of the public education system. Once I gained the information and insights I needed, I was equipped to successfully support my children in their educational progress. This battle-tested experience is what I provide as action steps for you to take. It's like enjoying a bowl of educational research with a sprinkling of motherhood wisdom on top. If you're looking to find out more about the current information and issues in education that could affect you or your children, and the action steps you can take to give your children the advantages they need, then you're in the right place. Thanks for tuning in today. I know that staying informed about K-12 public education trends and topics is important to you, so keep listening. Give me 30 minutes or less, and I'll provide insights on the latest trends, issues, and topics in this constantly evolving K-12 public education environment. AI is everywhere, especially in K-12 schools. AI has been around for decades, but attention to it spiked following the release of ChatGPT in 2022.
As with most technology, there are benefits and detriments. You may be concerned about how teachers are using AI to teach your children, or you may be concerned about how your children are using AI to complete assignments. Both are valid concerns. But the question is whether AI is being misused in schools and what the negative effects on students are. In this episode, I discuss the downsides and negative impacts on students when AI is misused, and I provide a real-life example of what happens when schools overzealously embrace AI instead of strategically utilizing it. I then discuss some things for you to keep in mind when you or your children use artificial intelligence. Let's gain some insight on this. Just to make sure we're on the same page, the AI that I'm talking about is not just ChatGPT. I'm talking about the AI that powers popular education tools like Khan Academy's Khanmigo, Google Gemini, Microsoft Copilot, and iReady, as well as the AI tools that track student engagement and design curricula. In any case, it's not the type of AI that's deployed that's the concern. It's the way AI is used, and the negative consequences if it's used inappropriately or without guardrails, that's the real concern. Teachers' and students' use of AI in K-12 classrooms continues to increase at a rapid rate, and this prompts serious concerns about the potentially negative effects on students, according to a report on schools' embrace of AI and its connection to increased risk, covering AI use in the 2024 school year and released last fall by the nonprofit Center for Democracy and Technology. One of the negative consequences that artificial intelligence is having on students is that it's hurting their ability to develop meaningful relationships with their teachers. According to that report, half of the students agreed that using AI in class made them feel less connected to their teachers. A decrease in peer-to-peer connections was also a result of relying heavily on AI use.
70% of teachers reported that AI weakens critical thinking and research skills in their students. While AI is being hyped as a way to transform education, the negative impact on students remains a real possibility. AI use in schools also comes with other risks, like large-scale data breaches, technology-fueled sexual harassment and bullying, and treating students unfairly. So, what can be done to mitigate these potentially negative effects on students? There are a couple of ways to address these concerns. One is for schools to develop AI training, and the second is to develop and implement policies that put meaningful guardrails around the use of AI in schools. Schools need to help teachers and students use AI tools in the most beneficial way. Now, teachers use AI for curriculum and content development, student engagement, professional development, and grading. AI helps teachers do their jobs more effectively and efficiently because it's a potential time saver when developing personalized learning and improving their teaching methods and skills. The idea is that AI gives teachers more time to interact with students directly. On the other hand, teachers have indicated that student use of AI has created an additional burden on them to determine whether a student's work is their own or AI generated. Students don't use AI for efficiency per se, because there isn't any guarantee that they can use AI to learn faster. For the most part, students have been using AI for tutoring and for college and career advice. The problem is that these uses can quickly turn into students seeking advice on relationships or mental health support. And given AI's tendency to hallucinate, this may not result in the best outcomes for the students.
The best way for schools and districts to address the risks and concerns that come with the increased use of AI tools in school is by providing professional development for teachers and AI literacy lessons for students. Unfortunately, this standard is not being implemented, and schools and districts are lagging behind. Even though most teachers and students are already using AI, less than half of them have received any training or information about the technology from their schools or districts. Less than half of teachers have participated in any AI-specific training or professional development provided by their schools and districts, and less than half of students said that anyone at their school provided them with information on the risks and ethical use of AI for schoolwork or personal use. So there's a gap in the training and knowledge needed to use AI effectively. AI is certainly not going away. All of us have the potential to level up the work we do with the help of AI. Schools just need to make sure that students level up their work as well, with the right guidance and understanding. And teachers need specific AI training to help students do just that. Another negative outcome of the use of AI in schools is the growing use of deep fakes. Boys as young as 14 have used artificial intelligence to create fake, yet lifelike, pornographic images of their female classmates and shared them on social media sites like Snapchat. Students aren't the only targets of deep fakes, though. In early 2024, the athletic director of Pikesville High School in Baltimore used an off-the-shelf $1,900 software program to create a fake audio clip of his principal. The fake principal could be heard spouting harmful and racist stereotypes about his Black and Jewish students. The real principal was absolved of any wrongdoing, but the district placed him at another school following the strong reaction from Pikesville's students and parent community to the fake clip.
The spike in the misuse of AI-generated deep fakes has stunned school districts, which are now trying to catch up to curb this type of behavior. Deep fakes are really the next iteration of online bullying. A nationally representative survey of more than 1,100 teachers, principals, and district leaders indicated that 67% of them believed that their students had been misled by a deep fake, according to an Education Week Research Center report from September 2024. The data from the survey also showed that schools haven't adopted a uniform approach to training their staff on the dangers of deep fakes. When faced with a deep fake incident, most school districts resort to expulsions, suspensions, and firings, as in the case of the athletic director I mentioned earlier. In cases that involved pornographic deep fakes, districts turned to law enforcement to investigate. It's critical that schools emphasize that students could face disciplinary and even legal action if they create deep fakes intended to harass and bully another student or staff member. Students need to be coached on how to respond from the moment they receive an explicit photo or a piece of harmful information. Schools should focus on what students should do, not solely on what they shouldn't do. School districts have been quick to launch investigations into deep fake incidents, but the results of these investigations have varied. In some cases, the victims have felt like too little was done, and if anything was done, it seemed to be too late. Twenty states have passed bills that add AI-generated imagery depicting the faces of real adults or children to the statutes that criminalize the creation and possession of such deep fakes.
A nationwide survey of students, teachers, and parents by the Center for Democracy and Technology showed a chasm in awareness: 40% of students were aware of deep fakes associated with someone they knew at school, compared with 29% of teachers and just 17% of parents. This gap more than likely means that deep fake incidents are highly underreported. Teaching the ethical use and purpose of AI is one way to address the negative impacts of using generative AI tools. However, educators should be teaching students how to use AI for learning, and especially focus on teaching AI literacy in the context of a subject. Talking about how an AI tool can be programmed to demonstrate how it generates information is a lot less meaningful and useful than putting it in the context of learning, helping children know how the tool works when they're trying to get ready for a history test, build a project for science, or write an opinion paper in English class. The problem is that most teachers say their districts haven't made their artificial intelligence policies clear to them or their students. Many schools are hesitant to develop clear policies for AI usage because there's a fear of doing it, quote unquote, wrong, or setting a precedent that may need to be revised later. But this leaves educators and students in a gray area because they're unsure about what's acceptable. The policies need to balance ethical considerations, academic integrity, and innovation, but these fears of missteps are what's holding progress back. Districts and state education agencies across the country have been grappling with how to leverage the rapidly evolving technology, but they often don't have the expertise they need to figure it out. So the use, and often misuse, continues.
AI may promise efficiency, but in education, efficiency comes at a cost, because when students rely on generative AI to complete assignments, for example, they lose essential skills like critical thinking, wrestling with ideas, making mistakes, and building understanding through productive struggle. AI not only replaces these necessary cognitive skills, but it can also erode the social skills that schools are meant to cultivate. Schools should be cautious before fully integrating AI tools. Instead of outright banning them or embracing them wholesale, teachers should educate students on how to use AI as a cognitive companion. Here's a real-life incident that shows what happens when AI implementation and use goes bad. In March 2024, the Los Angeles Unified School District was touted as a trailblazer for its embrace of artificial intelligence, and during that time, it unveiled a custom-designed chatbot. The superintendent even called the tool a game changer that would accelerate learning at a level never seen before. But in just five months, LAUSD went from enviable AI pioneer to cautionary tale. The district has since temporarily turned off its once-celebrated chatbot, called Ed. That decision appeared to have been prompted by turmoil at AllHere, the company the district hired to create the tool at a cost of up to six million dollars over five years. AllHere furloughed most of its workers, and its CEO and founder, Joanna Smith-Griffin, is no longer with the company. In addition, a company whistleblower raised serious privacy concerns about the platform. LAUSD has now become a poster district for what not to do in harnessing AI for K-12 education. This is what happened. For starters, the district didn't appear to have a tightly defined problem statement that it was trying to fix with the technology.
Plus, the district selected an inexperienced vendor and set an overly ambitious timetable for the project, without proper protections for student data, all of which indicates that its leaders bought uncritically into the AI hype. Another thing: the district has made it clear that it's not giving up on this specific AI tool implementation because the tool belongs to the district. Sure, that makes sense, since you spent six million dollars on something that was botched so badly. Let's continue to claim that it will eventually provide a one-of-a-kind resource to students and families. Yeah, in the upside-down world, that makes total sense. In any case, there are lessons to be learned for other school districts developing an AI tool as an offering for students and staff. One, be clear about the problem that you're trying to solve with AI. Two, vet the education technology company carefully. Three, start small and work on a reasonable timetable. And four, make data privacy, especially student data privacy, a top priority. And just so you know, this is the second time this school district has bungled a cutting-edge technology initiative. Under different leadership in 2013, the district rolled out a one-to-one iPad computing program that was a complete disaster. This doesn't look good for the second-largest school district in the country. The bottom line is that AI is here to stay. The key is understanding how it best supports learning and using it properly, because students need this competency to be successful in the future workforce. So, what can you do with the information that I just shared? Here are the action steps you can take regarding the negative impacts and misuse of AI in K-12 schools. This discussion was mostly about bringing awareness to you regarding the misuse and downsides of using AI in schools, but there are certain things for you to keep in mind. One, use AI as a brainstorming tool that needs meaningful guardrails and best practices.
One way to build those guardrails is to teach your children how to be healthy skeptics of anything AI produces, regardless of the subject. Two, encourage your children to bring the human elements to their learning. This includes empathy and curiosity. These must be balanced with the use of any technology tool, especially AI tools. And three, find strategic and powerful ways to help your children develop critical thinking skills by having them think deeply about an issue that's important to them, ask generative AI tools about that specific complex societal problem, and then question or challenge the outputs produced by the tool. Be curious about how your children use artificial intelligence in and outside of school, and share those critical lessons with other parents. This is what community is all about. Here are this episode's takeaways. 70% of teachers reported that AI weakens critical thinking and research skills in their students. While AI is being hyped as a way to transform education, the negative impact on students remains a real possibility. AI use in schools also comes with other risks, like large-scale data breaches, technology-fueled sexual harassment and bullying, and treating students unfairly. Another negative outcome of the use of AI in schools is the growing use of deep fakes, as when boys as young as 14 have used artificial intelligence to create fake yet lifelike pornographic images of their female classmates and shared them on social media sites like Snapchat. AI may promise efficiency, but in education, efficiency comes at a cost, because when students rely on generative AI to complete assignments, for example, they lose essential skills like critical thinking, wrestling with ideas, making mistakes, and building understanding through productive struggle. AI also not only replaces these necessary cognitive skills, but it can erode the social skills that schools are meant to cultivate.
Teaching the ethical use and purpose of AI is one way to address the negative impacts of using generative AI tools. However, educators should be teaching students how to use AI for learning, and especially focus on teaching AI literacy in the context of a subject. Now it's my turn to hear from you. What are your thoughts about how AI is being used in schools? Let me know by leaving me a text comment on the podcast website at k12educationinsights.buzzsprout.com. Here's how you can leave that text comment. Go to the episode description page and click on the Send Me a Text Message link. Again, share your thoughts on this vital topic of the negative impacts of AI on your students as they use this tool in school by going to k12educationinsights.buzzsprout.com and leaving me a text comment. If you enjoyed this episode, why not listen to another episode from my catalog? It could take as little as 15 minutes of your day. And remember, new episodes come out every Tuesday. Thanks for listening today. Be sure to come back for more insights on K-12 educational topics that impact you and your children. And remember to share my podcast with anyone you think would find it valuable. That includes your friends, your family, and your community. Until next time, learn something new every day.