Screen Time Isn’t the Problem. Design Is.
What the AAP’s new guidance means for AI, learning, and responsibility
By Michelle Collins, Chief Operating Officer, CodeBaby
The American Academy of Pediatrics quietly did something important recently: it updated its screen time guidance in a way that, for the first time in what feels like forever, stops pretending the whole conversation can be reduced to a single number.
For years, we all repeated the same line. Two hours a day became the shorthand for responsible parenting and thoughtful teaching, and it gave everyone something to point to in a world that was moving faster than most of us could keep up with. But anyone who has actually watched a kid interact with technology knows that number was doing a lot of heavy lifting it was never built to do. A child watching short-form videos for an hour is simply not having the same experience as a child working through a math problem with a patient tutor, and most of us understood that intuitively, even when we did not have a better framework to offer in its place.
Now we have the start of one.
The AAP is asking us to think about what the child is actually doing on the device, how the experience is designed, whether an adult is involved in any meaningful way, and whether the screen is displacing sleep, relationships, or time outside. Think about it this way. It is not a limit. It is a lens. And that shift in framing opens up a much bigger question for anyone building or using AI in education.
Why the Old Rule Stopped Working
The two-hour rule was never really about accuracy; it was about simplicity. It gave us a way to set limits without asking harder questions about what was actually on the screen, and that simplicity came at a cost. It let us skip the parts that actually mattered, like whether a given tool was helping students think or helping them finish, whether it was supporting the work of learning or quietly replacing it, whether its design was pulling attention in useful directions or just pulling attention.
Those questions take more effort to sit with, but they also lead to much better decisions. The updated guidance pushes us toward that harder work by acknowledging something educators have been saying quietly for years. A well-designed experience can genuinely support a child’s development, and a poorly designed one, no matter how educational it looks on paper, can do the opposite.
That distinction matters even more now that AI is in the conversation.
AI Changes What “Screen Time” Even Means
When we talked about screen time in the past, we were mostly talking about content. A video played, a game was played, a student watched or clicked, and the technology on the other side of that interaction was essentially static. Conversational AI is a different kind of thing entirely, because it responds, adapts, and nudges, and it can shape a student’s next thought in real time based on the one they just had. That makes the design of these systems enormously consequential.
Picture two students, each spending thirty minutes with an AI tutor. The first uses a system that hands over the answer as soon as the student seems stuck, so the student finishes the homework, feels relieved, and moves on. The second uses a system that asks guiding questions, waits, gently redirects, and only offers help once the student has genuinely tried. That student closes the laptop, having actually wrestled with something, and the thing they learned is not just about the math.
Same amount of screen time, completely different lesson.
This is the part of the conversation I think we are still catching up to. The duration of the interaction tells you almost nothing, while the design of the interaction tells you almost everything.
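To make the contrast between the two tutors concrete, here is a minimal sketch of what a scaffolded-help policy could look like in code. This is a hypothetical illustration, not CodeBaby's implementation or any real product's logic; the support levels and the `HintPolicy` class are invented for the example. The point is simply that "give the answer immediately" versus "escalate support only after genuine attempts" is a design decision you can see in a few lines.

```python
# Hypothetical sketch of a scaffolded hint policy (illustrative only).
# The design idea: escalate support gradually, and reveal a full
# explanation only after the student has made real attempts.

from dataclasses import dataclass, field


@dataclass
class HintPolicy:
    """Chooses the next kind of help based on how many genuine attempts a student has made."""
    attempts: int = 0
    history: list = field(default_factory=list)

    def next_support(self, student_tried: bool) -> str:
        # Only count interactions where the student actually tried something.
        if student_tried:
            self.attempts += 1

        if self.attempts == 0:
            support = "guiding_question"   # nudge the student to attempt it first
        elif self.attempts == 1:
            support = "targeted_hint"      # point at the likely sticking point
        elif self.attempts == 2:
            support = "worked_step"        # show one step, not the whole answer
        else:
            support = "full_explanation"   # only after sustained effort

        self.history.append(support)
        return support
```

An answer-first tutor is the degenerate version of this policy, the one that returns `"full_explanation"` on every call. Same screen time either way; the difference is entirely in which branch the design makes easy.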
What Actually Matters
When you look closely at the AAP guidance, three things come into focus.
Quality is the most familiar of the three. Is the information trustworthy, and is the experience built around real learning rather than just keeping attention? Those are not the same goal, and we have spent enough time pretending they are.
Context is about whether a teacher or parent is still involved in any meaningful way, or whether the technology has become the whole interaction. Technology that supports a human in the room is a fundamentally different thing from technology that stands in for one, and conflating the two is how a lot of well-intentioned ed-tech ends up missing the mark.
Design is the quietest of the three, and also the most influential. Design decides what the system makes easy, what it rewards, and what it quietly discourages, and students are remarkably good at figuring out what a piece of technology wants from them and adjusting accordingly. If a tool makes it easy to skip thinking, students will skip thinking. If it rewards effort and curiosity, they will offer more of both. None of this is mysterious; it is just rarely named out loud.
The Urge to Overreact
Every time new research or new guidance comes out, there is a strong pull in two directions at once. Some people want to embrace the new technology fully, and others want to remove it from classrooms entirely, and we are watching that play out in real time right now. Concerns about reading, attention, and student well-being are real, and they are pushing some schools to question whether any of this belongs in front of kids at all.
The AAP guidance is not really pointing us toward a ban, though. It is pointing us toward better judgment, asking schools, parents, and the companies building these tools to take design seriously, to think about context, and to stop outsourcing the harder questions to a stopwatch. That is a much more useful conversation than the one we have been having.
What This Changes for Schools
For educators and administrators, this shift means the evaluation criteria for ed-tech tools need to get sharper. Instead of asking how much time a tool will take up in the day, the better questions are whether it encourages real thinking or routes around it, whether it supports the teacher or tries to stand in for the teacher, and whether it builds a student’s confidence over time or quietly creates a dependency the student will eventually have to unlearn. Does the way the tool actually works match the way students actually learn? Those are harder questions to sit with, and they put the responsibility exactly where it belongs, on the people designing these systems and the institutions choosing to bring them in.
First, Do No Harm
This is a phrase I keep coming back to in conversations about AI, and I think it applies here as much as it does in healthcare.
At CodeBaby, we build conversational AI for classrooms, among other places, and our guiding principle is that these systems should make learning more human, not less. In practice, that means designing avatars that encourage effort instead of shortcutting it, that prompt curiosity instead of handing over completion, and that make it unmistakably clear where a student still needs a teacher, a parent, or a peer. Those are not features you bolt on at the end. They are decisions that have to be made at the start, because students begin learning from a system the moment they start using it, and what they are learning is not only the material. They are also learning a set of habits about what effort is worth and what is acceptable to skip. If we are not careful, the habits are what stick.
The Bottom Line
The AAP did not say screen time is fine. It said we have been measuring it wrong. What a student does with the screen, how the experience is shaped, and whether it supports the rest of their development all matter more than the clock, and that is a harder standard to hold ourselves to than a simple time limit was. It is also a more honest one.
Because in the end, the design of the experience is the lesson. Students do not just learn from the content; they learn from the shape of the interaction itself, and that is a responsibility worth taking seriously, whether you are a parent, a teacher, or someone building the tools that end up in a student’s hands.