Is Your AI Telling the Truth? Designing for Transparency in a Hallucination-Prone Landscape

By Michelle Collins, Chief Revenue Officer, CodeBaby

The future of AI isn’t just about what it can do—it’s about whether we can trust it.

From spiritual delusions to confidently wrong answers, the headlines this month paint a sobering picture of a rapidly advancing technology that still struggles with something fundamental: the truth. In Futurism, we see developers worrying that newer AI models, while more sophisticated, are also getting better at “hallucinating” false information. Over at VICE, a story on AI’s religious inclinations spirals into questions about identity, perception, and whether our machines are starting to believe their own hype.

This isn’t science fiction. It’s the reality of generative AI in 2025—and it’s a wake-up call for those of us building in this space.

At CodeBaby, we believe that creating emotionally intelligent AI also means creating accountable AI. That’s why transparency isn’t just a feature—it’s the foundation.

When the Delivery Outpaces the Facts

Let’s be clear: hallucination isn’t a bug that only shows up in fringe use cases. It’s baked into the architecture of most large language models. These systems are prediction engines. They don’t “know” facts—they generate the most statistically likely next word. And without the right constraints, they’ll do that with all the confidence of a PhD and the accuracy of a magic 8-ball.
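To make that concrete, here is a toy sketch of next-word prediction. The probabilities and the prompt below are invented purely for illustration—no real model works from a lookup table—but the failure mode is the same: a likely-but-wrong continuation wins whenever it dominates the training data.

```python
import random

# Toy illustration (not a real model): a language model scores possible
# next words by probability, not by truth. This distribution is invented
# for demonstration purposes.
next_word_probs = {
    "The capital of Australia is": {
        "Sydney": 0.55,    # common in training text, but wrong
        "Canberra": 0.40,  # correct, yet written less often
        "Melbourne": 0.05,
    }
}

def sample_next_word(prompt: str) -> str:
    """Pick the next word in proportion to its probability.

    Nothing here checks facts: fluency and frequency decide,
    not accuracy.
    """
    dist = next_word_probs[prompt]
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights, k=1)[0]

print(sample_next_word("The capital of Australia is"))
# More often than not: "Sydney" -- fluent, confident, and incorrect.
```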

That’s a problem—especially when those words are coming from a friendly, lifelike avatar designed to build trust.

Why Transparency Can’t Be an Afterthought

Trust is earned, not engineered. That means AI systems need to do more than sound human—they need to communicate boundaries clearly. At CodeBaby, we’ve built our platform around the principle of clear disclosure: avatars that identify themselves as AI, that provide citations or reference material when needed, and that are programmed to say “I don’t know” instead of faking an answer.

It also means building in human oversight. Our digital humans aren’t operating in isolation—they’re designed to escalate, defer, or connect users with live professionals when the conversation requires nuance, expertise, or empathy beyond the machine’s scope.
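As a rough illustration of that escalation pattern, here is a hypothetical routing function. The topic list, the confidence threshold, and the upstream confidence score it assumes are all placeholders for illustration, not CodeBaby's actual triggers.

```python
from dataclasses import dataclass

# Hypothetical escalation logic: thresholds and topics are illustrative.
SENSITIVE_TOPICS = {"medical", "legal", "billing_dispute"}
CONFIDENCE_FLOOR = 0.75

@dataclass
class Turn:
    topic: str
    model_confidence: float  # assumed score from an upstream classifier

def route(turn: Turn) -> str:
    """Decide whether the avatar answers or hands off to a person."""
    if turn.topic in SENSITIVE_TOPICS:
        # Nuance, expertise, or empathy beyond the machine's scope.
        return "escalate_to_human"
    if turn.model_confidence < CONFIDENCE_FLOOR:
        # Never fake an answer when the model is unsure.
        return "say_i_dont_know_and_offer_human"
    return "answer_with_citation"

print(route(Turn(topic="medical", model_confidence=0.90)))   # escalate_to_human
print(route(Turn(topic="shipping", model_confidence=0.50)))  # say_i_dont_know...
```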

Designing for Accountability

While others may rush to make AI indistinguishable from human interaction, we believe the opposite is the goal: good AI should be transparent that it’s AI. One of our core principles as we build our avatars and the experiences they enable is that we aren’t trying to convince people our avatars are human; instead, we aim to design interactions so meaningful that people don’t mind they’re not talking with a real person.

That doesn’t make the experience less engaging—it makes it more ethical, and by extension, more trustworthy. By preserving transparency, we empower users to make informed choices, question outputs, and understand what’s behind the curtain. And we create space for AI to be helpful without being deceptive.

We’ve implemented a layered design approach that combines user-customized elements with fixed system prompts to keep conversations aligned with intended goals and boundaries. This includes temperature settings to manage response variability and fallback algorithms designed to handle off-topic or sensitive questions effectively.
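Here is a minimal sketch of what that layering can look like. Everything in it is assumed for illustration: the fixed system prompt text, the `is_on_topic` check, and the `call_model` stub standing in for a real chat-completions call.

```python
# Sketch of the layered-prompt idea: a fixed system prompt that deployers
# cannot remove, a customizable layer appended after it, a conservative
# temperature, and a fallback path for off-topic input.

FIXED_SYSTEM_PROMPT = (
    "You are an AI assistant. Always disclose that you are AI, "
    "cite sources when you state facts, and say 'I don't know' "
    "rather than guess."
)

def call_model(messages: list[dict], temperature: float) -> str:
    """Stand-in for a real LLM call (any chat-completions-style API)."""
    return f"[model reply at temperature={temperature}]"

def build_messages(customer_prompt: str, user_input: str) -> list[dict]:
    # Fixed layer first, so later layers cannot override its rules.
    return [
        {"role": "system", "content": FIXED_SYSTEM_PROMPT},
        {"role": "system", "content": customer_prompt},  # customizable layer
        {"role": "user", "content": user_input},
    ]

def is_on_topic(user_input: str) -> bool:
    # Placeholder check; a production system would use a classifier.
    return "off-limits" not in user_input.lower()

def respond(customer_prompt: str, user_input: str) -> str:
    if not is_on_topic(user_input):
        # Fallback path for off-topic or sensitive questions.
        return "I can't help with that, but I can connect you with a person."
    messages = build_messages(customer_prompt, user_input)
    return call_model(messages, temperature=0.2)  # low temperature = less drift

print(respond("You help customers of Acme Travel.", "When does my flight board?"))
```

The ordering is the point of the design: because the fixed layer comes first and cannot be edited, customer customization can shape tone and scope but never strip out the disclosure and honesty rules.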

The Ethical Advantage

In an era when AI is being used to plan and execute social-engineering and phishing schemes, clarity is no longer optional—it’s a competitive differentiator.

Customers and users are beginning to ask harder questions: “Where did this information come from?” “Can I rely on this source?”

If you can’t answer those questions confidently, your AI shouldn’t be answering theirs.

Where We Go from Here

AI hallucinations aren’t going away anytime soon. And large language models are trained on an extremely wide range of data, some of which may be unreliable. But that doesn’t mean we have to accept deception as the cost of innovation. As builders, we have the responsibility—and frankly, the opportunity—to lead with integrity.

At CodeBaby, that means designing avatars that know their limits, stay within the lines, and elevate—not replace—the human behind the brand.

Because the real future of AI isn’t just smart. It’s honest.