CodeBaby

Screen Time Isn’t the Problem. Design Is.

For years, we measured screen time with a simple number. But new guidance from the American Academy of Pediatrics makes it clear that the real issue is not how long students use technology, but how it is designed and used. As conversational AI becomes more integrated into education, the focus must shift to quality, context, and experience. In this article, Michelle Collins explores why ethical design, not time limits, will determine whether technology supports or undermines learning.

Using AI to Extend the Impact of Great Teachers

AI is already shaping how students learn, often outside the classroom. The challenge for schools is not whether to adopt it, but how to do so responsibly. This article outlines four essential considerations to ensure AI tutors protect students, reinforce teacher-led instruction, and deliver meaningful, personalized learning outcomes.

Ethical AI Starts in Product Design, Not Policy

Schools are rushing to create AI policies, but the most important ethical decisions happen much earlier in the product design process. From how AI tutors respond to mistakes to how they encourage critical thinking, every interaction teaches something. In this article, Alexa Carpentier explores why responsible educational AI must build guardrails into the learning experience from the very first interaction.

The Real Lesson AI Is Teaching Education: Ethics Must Come Before Adoption

AI is moving into classrooms faster than policies can keep up. From deepfakes to new school AI policies, the real question is not what AI can do. It is who remains responsible when it does. In this article, Michelle Collins explores why ethical guardrails, human accountability, and thoughtful design will determine whether AI strengthens trust in education or undermines it.

What Most People Get Wrong About Real-Time Avatar Performance

When people evaluate digital humans, they often focus on the wrong things: skin texture, facial detail, or photorealistic rendering. But in real-time conversational AI, realism isn’t primarily a graphics problem. It’s a performance problem. In this article, CodeBaby’s Creative Director explores why micro-movement, conversational timing, and subtle behavioral cues—not hyper-real visuals—are what truly determine whether an avatar feels natural, trustworthy, and human-centered.

Why Hospitality AI Must Feel Human Without Pretending to Be

Hospitality has always been about how guests feel—not how advanced the technology behind the scenes might be. As hotels, resorts, and entertainment venues invest in AI concierges and digital humans, many are focused on making these systems appear more human. But that may be the wrong goal. In this piece, Michelle Collins explores why hospitality AI should feel warm, clear, and supportive without pretending to be human—and why transparency, trust, and knowing when to hand off to a real person matter more than realism.

The Science of Trust: What Makes People Engage More Deeply With an Avatar

What makes people trust an avatar? It’s not photorealism; it’s behavior. From eye gaze and micro-expressions to cadence and emotional rhythm, research shows that trust is built through subtle, human-like signals that help people feel seen, supported, and understood. In this article, Michelle Collins breaks down the science behind why people engage more deeply with digital humans—and why ethical design matters more than ever.

When Technology Meets Grief: Hard Questions About AI Avatars of the Deceased

Technology has always changed the way we remember the people we’ve lost, but AI that lets us “talk” to the deceased takes us into completely new emotional territory. In this reflection, Michelle Collins explores the complex mix of comfort, uncertainty, and ethical responsibility surrounding AI-generated avatars of loved ones who have passed. Instead of easy answers, she offers a more human one: grief deserves humility, transparency, and serious questioning before we build tools for people at their most vulnerable.

AI in the Workforce: Stop Asking If It Will Take Your Job—Ask How It Can Make Your Job Better

How can we deploy AI to help us do our jobs better?

Here’s the thing about the “Will AI take my job?” conversation: we’re asking the wrong question. It’s like asking whether a calculator will replace an accountant. Sure, it handles the arithmetic, but that just means the accountant can focus on analyzing trends, advising clients, and making strategic recommendations instead of adding up columns of numbers all day. The real question isn’t whether AI will replace workers—it’s whether we’re smart enough to deploy it in ways that actually help people do their jobs better.

When AI Replaces the Teacher: Why Connection Still Matters More Than Efficiency

AI can play an important role in advancing education, augmenting a teacher’s strengths and empowering students to overcome obstacles, solve problems, and develop the curiosity to dig deeper into their learning. But we shouldn’t underestimate the importance of human connection in education. The most important human capabilities aren’t about being more creative or contrarian than machines. They’re about being empathetic, collaborative, ethical, and wise. They’re about developing the judgment to know when to trust technology and when to question it. They’re about learning to work with people who are different from you, to navigate ambiguity, and to persist through challenges that don’t have clear solutions. Those capabilities develop through relationship and community, not through individual optimization.