Screen Time Isn’t the Problem. Design Is.
For years, we measured screen time with a simple number. But new guidance from the American Academy of Pediatrics makes it clear that the real issue is not how long students use technology, but how it is designed and used. As conversational AI becomes more integrated into education, the focus must shift to quality, context, and experience. In this article, Michelle Collins explores why ethical design, not time limits, will determine whether technology supports or undermines learning.
Using AI to Extend the Impact of Great Teachers
AI is already shaping how students learn, often outside the classroom. The challenge for schools is not whether to adopt it, but how to do so responsibly. This article outlines four essential considerations to ensure AI tutors protect students, reinforce teacher-led instruction, and deliver meaningful, personalized learning outcomes.
Ethical AI Starts in Product Design, Not Policy
Schools are rushing to create AI policies, but the most important ethical decisions happen much earlier in the product design process. From how AI tutors respond to mistakes to how they encourage critical thinking, every interaction teaches something. In this article, Alexa Carpentier explores why responsible educational AI must build guardrails into the learning experience from the very first interaction.
The Real Lesson AI Is Teaching Education: Ethics Must Come Before Adoption
AI is moving into classrooms faster than policies can keep up. From deepfakes to new school AI policies, the real question is not what AI can do. It is who remains responsible for what it does. In this article, Michelle Collins explores why ethical guardrails, human accountability, and thoughtful design will determine whether AI strengthens trust in education or undermines it.
What Most People Get Wrong About Real-Time Avatar Performance
When people evaluate digital humans, they often focus on the wrong things: skin texture, facial detail, or photorealistic rendering. But in real-time conversational AI, realism isn’t primarily a graphics problem. It’s a performance problem. In this article, CodeBaby’s Creative Director explores why micro-movement, conversational timing, and subtle behavioral cues—not hyper-real visuals—are what truly determine whether an avatar feels natural, trustworthy, and human-centered.
Why Hospitality AI Must Feel Human Without Pretending to Be
Hospitality has always been about how guests feel—not how advanced the technology behind the scenes might be. As hotels, resorts, and entertainment venues invest in AI concierges and digital humans, many are focused on making these systems appear more human. But that may be the wrong goal. In this piece, Michelle Collins explores why hospitality AI should feel warm, clear, and supportive without pretending to be human—and why transparency, trust, and knowing when to hand off to a real person matter more than realism.
The Science of Trust: What Makes People Engage More Deeply With an Avatar
What makes people trust an avatar? It’s not photorealism; it’s behavior. From eye gaze and micro-expressions to cadence and emotional rhythm, the research shows that trust is built through subtle, human-like signals that help people feel seen, supported, and understood. In this article, Michelle Collins breaks down the science behind why people engage more deeply with digital humans—and why ethical design matters more than ever.
When Technology Meets Grief: Hard Questions About AI Avatars of the Deceased
Technology has always changed the way we remember the people we’ve lost, but AI that lets us “talk” to the deceased takes us into completely new emotional territory. In this reflection, Michelle Collins explores the complex mix of comfort, uncertainty, and ethical responsibility surrounding AI-generated avatars of loved ones who have passed. Instead of easy answers, she offers a more human one: grief deserves humility, transparency, and serious questioning before we build tools for people at their most vulnerable.
When AI Should Say “I Don’t Know”: The Ethics of Honesty in Conversational AI
As conversational AI gets smarter, one of the most ethical design choices may be teaching it to say, “I don’t know.” In this article, CodeBaby’s COO Michelle Collins explores why humility builds more trust than perfection and how designing for honesty helps AI become more human, reliable, and ethically grounded.
Prompting Isn’t Just an Input: It’s the Conversation That Shapes Everything
Most people treat prompts like search queries: they type, hope, and pray for a good result. But prompting isn’t just an input; it’s the framework that shapes every AI interaction. In this article, CodeBaby’s COO explores why prompt management is essential for consistency, trust, and meaningful human-AI communication.