The Real Lesson AI Is Teaching Education: Ethics Must Come Before Adoption
By Michelle Collins, Chief Operating Officer, CodeBaby
Every once in a while, two stories land in my feed the same week and I can’t stop thinking about the thread connecting them.
The first one made my stomach drop. Students at a school had used AI to create deepfake videos of their teachers. Not because they had some sophisticated technical skill set. Because they had a phone and a few minutes. That’s all it takes now.
The second story was quieter but just as important. New York City schools released a set of policies about how teachers can and can’t use AI in their classrooms. Guidelines around grading, around lesson planning, and around where the human has to stay in the loop.
At first glance, these seem like completely different conversations. One is about kids behaving badly with new tools. The other is about institutions trying to get ahead of the curve.
But they’re actually about the same thing. Not capability. Not innovation. They’re about responsibility. And education might be the first place where we’re being forced to answer the question of who owns that responsibility in a way that actually sticks.
When the Tools Outrun the Rules
The deepfake story bothers me, and not just in the obvious way. Yes, it’s disturbing that a student can damage a teacher’s reputation with a few taps on a screen. But what really unsettles me is what it says about the environment we’ve created.
Trust used to be a given in the classroom. You showed up, your teacher showed up, and there was an understood social contract between you. That contract didn’t need to be spelled out because the tools available to students couldn’t fundamentally violate it.
That’s not the world we live in anymore.
We haven’t established clear expectations about how these tools should be used, what the boundaries are, or what happens when someone crosses them. Schools have taught digital literacy for years. Now they have to teach digital ethics, and those are very different things. Knowing how to use a tool and knowing when you shouldn’t are not the same skill.
The NYC Policy Gets Something Right
Here’s what the New York City policy actually reveals. The question isn’t whether AI can perform a given task. It’s who takes responsibility when it does.
Think about it this way. An AI tool can grade a quiz. It can probably do it consistently and quickly. But if a parent questions that grade, the school can’t ask them to take it up with the software. Accountability has to live with a person. That’s not a limitation of the technology. That’s just how trust works.
People are generally willing to accept support from technology. They expect accountability from humans. And the NYC policy clearly draws that line. AI can help with preparation. It can support drafting and research. But when a decision affects a student’s record or their future, a human has to own that decision.
I think that principle is going to end up being the foundation for responsible AI adoption well beyond education. Healthcare, customer service, hospitality, you name it. The organizations that figure out where to draw that line and actually hold it are the ones that will earn long-term trust.
Two Risks, One Root Cause
Both of these stories point to something that I think a lot of organizations are still catching up to. AI introduces two very different kinds of risk, and they come from the same place.
The first is misuse. That’s the deepfake story. Powerful tools in the hands of people who haven’t been taught (or don’t care) about the consequences.
The second is overreliance. That’s what happens when institutions let AI make decisions that really need human judgment behind them. When the efficiency gains are so appealing that we stop asking whether a human should still be in the loop.
Both risks trace back to the same root issue. We introduced powerful tools without fully building the ethical frameworks around them. And technology almost never waits for policy to catch up. So ethics has to move fast.
What I Think Responsible AI in Education Looks Like
If conversational AI is going to play a positive role in education, and I believe it can, we have to be clear about its role from the start. Not after something goes wrong. Not after we’ve already blurred the lines between what AI decides and what humans decide.
At CodeBaby, we think about this a lot because our work sits at the intersection of students, educators, and institutions. That’s a space where trust matters deeply, and where getting it wrong has real consequences for real people.
So we’ve anchored our work around a few principles that I think apply broadly. Students should always know when they’re interacting with AI. Full stop. AI should support instruction, not replace educator authority. When conversations reach complex or sensitive territory, the system should route students to a human. And when the AI doesn’t know something, it should say so rather than guessing.
None of that is technically difficult, by the way. These aren’t engineering challenges. They’re ethical choices that get built into the technical design. They take more time, sure. But the alternative, shipping fast and figuring out the ethics later, is how you end up with students making deepfakes of their teachers and no one knowing what to do about it.
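To make the shape of those choices concrete, here’s a minimal sketch of what that kind of routing logic can look like. Everything in it is a hypothetical illustration I’m using for this post, not our actual implementation: the topic list, the confidence threshold, and the function names are all invented for the example.

```python
# A minimal sketch of the guardrail pattern described above. All names here
# (SENSITIVE_TOPICS, CONFIDENCE_FLOOR, GuardedResponse, respond) are
# hypothetical illustrations, not a real product's implementation.
from dataclasses import dataclass

# Topics that should always route to a human, per the "complex or sensitive
# territory goes to a person" principle. Illustrative list only.
SENSITIVE_TOPICS = {"self_harm", "abuse", "grade_dispute", "medical"}

# Below this confidence, the assistant admits uncertainty instead of guessing.
CONFIDENCE_FLOOR = 0.7


@dataclass
class GuardedResponse:
    text: str
    route_to_human: bool


def respond(topic: str, draft_answer: str, confidence: float) -> GuardedResponse:
    """Apply the three guardrails: disclose, escalate, admit uncertainty."""
    # Principle: students should always know they're talking to AI,
    # so the disclosure is unconditional, not a branch.
    disclosure = "[You are chatting with an AI assistant.] "

    # Principle: sensitive territory routes to a human, every time.
    if topic in SENSITIVE_TOPICS:
        return GuardedResponse(
            disclosure + "This is something a person should help you with. "
            "I'm connecting you to a staff member now.",
            route_to_human=True,
        )

    # Principle: when the AI doesn't know, it says so rather than guessing.
    if confidence < CONFIDENCE_FLOOR:
        return GuardedResponse(
            disclosure + "I'm not sure about that one. Let me flag it for "
            "your teacher so you get a reliable answer.",
            route_to_human=True,
        )

    return GuardedResponse(disclosure + draft_answer, route_to_human=False)
```

Notice that the disclosure isn’t a conditional. That’s the point: “students should always know” is the default path, not a feature flag, and the escalation rules overrule the AI’s answer rather than the other way around.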
The Hybrid Classroom Is Coming
To be clear, this isn’t a humans vs. AI conversation. Education is moving toward a hybrid model where both play important roles, and that’s a good thing.
AI can offer consistency, availability, and scalability in ways that humans simply can’t match. Teachers bring judgment, mentorship, and the kind of accountability that no algorithm can replicate. When those roles stay clear, AI becomes a genuinely powerful support system. When those roles start to blur, trust erodes. And once trust erodes in a classroom, it’s incredibly hard to get back.
The best implementations I’ve seen treat AI like a teaching assistant, not a teacher. A guide, not a decision maker. A support system, not an authority. That distinction protects both students and educators, and it’s one we need to guard carefully as the technology gets more capable.
Why Education Is the Canary in the Coal Mine
Education tends to become the testing ground for big societal shifts because the stakes are so immediate and so personal. When something affects kids, we pay closer attention. We ask harder questions, and we demand better answers.
Right now, education is showing every other industry what responsible AI adoption is going to require. It’s forcing us to ask what role AI should play, who is accountable when things go sideways, and what choices we need to make right now to preserve trust.
Those questions aren’t going away. They’re going to follow AI into healthcare, into customer service, into every field where human trust is on the line.
What I Keep Coming Back To
AI is already changing education. That’s not a prediction; it’s just what’s happening. The thing that will determine whether that change is positive isn’t the sophistication of the technology. It’s the strength of the ethical frameworks we build around it.
At CodeBaby, we believe conversational AI should make education more human, not less. It should help educators do their best work. It should help students access support when they need it. And it should always operate within guardrails that are clear, intentional, and built to protect trust.
Technology will keep moving fast. It always does. Our job is to make sure our ethics move faster.