CodeBaby's features help our avatars shine.


We strive for the highest level of accessibility. Just for starters, we support both audio and text for inputs as well as outputs!


We believe that our animated characters not only provide one of the most intuitive UIs available, but that their capacity to listen and convey empathy makes for experiences that are easier and truly more supportive.


Our core technology relies on Artificial Intelligence-powered Natural Language Processing (NLP), Speech Recognition, Synthesized Speech, and Animation Generation. While we have default services for all of those functions, we've built our platform to be provider-agnostic. So if your organization uses Amazon Lex for NLP instead of Google Dialogflow, or Amazon Polly for synthesized speech, we can work with you on that integration.
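A provider-agnostic design like this is typically built around a thin adapter layer, so backends can be swapped without touching the avatar itself. Here's a minimal Python sketch of that idea; all class and method names are hypothetical illustrations, not CodeBaby's actual API:

```python
from abc import ABC, abstractmethod

class NLPProvider(ABC):
    """Hypothetical adapter interface: any NLP backend maps user text to an intent."""

    @abstractmethod
    def detect_intent(self, utterance: str) -> str:
        ...

class DialogflowAdapter(NLPProvider):
    """Stand-in for one provider (a real integration would call its API)."""
    def detect_intent(self, utterance: str) -> str:
        return "greeting" if "hello" in utterance.lower() else "fallback"

class LexAdapter(NLPProvider):
    """A drop-in alternative backend; the avatar code never changes."""
    def detect_intent(self, utterance: str) -> str:
        return "greeting" if "hi" in utterance.lower() else "fallback"

class Avatar:
    def __init__(self, nlp: NLPProvider):
        self.nlp = nlp  # swap providers here without touching avatar logic

    def respond(self, utterance: str) -> str:
        intent = self.nlp.detect_intent(utterance)
        return {"greeting": "Hi there!", "fallback": "Could you rephrase that?"}[intent]

avatar = Avatar(DialogflowAdapter())
print(avatar.respond("Hello!"))  # prints "Hi there!"
```

Because every backend satisfies the same interface, switching providers is a one-line change at construction time.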

Conversational AI

We can work with you to develop conversations and FAQ-style databases. We then train the Natural Language Processing (NLP) engine to understand expected user inputs and match them with the appropriate responses, and we continually monitor and improve the training and response data so the avatar's conversational ability becomes more robust and accurate the longer it is active. In other words, the avatar's conversational skills get smarter the more it works for you.
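The matching step can be pictured as a similarity lookup against trained example phrases, with low-confidence matches flagged for retraining. A toy Python sketch follows; the data and scoring method are purely illustrative, not our production NLP engine:

```python
from difflib import SequenceMatcher

# Hypothetical FAQ training data: example user phrasings mapped to responses.
FAQ = {
    "what are your opening hours": "We're open 9am-5pm, Monday to Friday.",
    "how do i reset my password": "Click 'Forgot password' on the sign-in page.",
}

def best_response(utterance: str, threshold: float = 0.5) -> str:
    """Match the user's input to the closest trained phrase."""
    scored = [(SequenceMatcher(None, utterance.lower(), q).ratio(), a)
              for q, a in FAQ.items()]
    score, answer = max(scored)
    # In a real system, low-confidence matches would be logged and used to
    # retrain the model, so accuracy improves the longer the avatar runs.
    return answer if score >= threshold else "Let me connect you with a human."

print(best_response("What are your opening hours?"))
```

Production NLP engines use far richer models than string similarity, but the loop is the same: match, respond, and feed misses back into training.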


Our avatars can be integrated with their environment, allowing them to detect the context of customer behavior. This context lets the avatar adapt based on customer insight and deliver a greater level of personalization.
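Environmental context can be modeled as structured signals passed to the avatar at runtime. A minimal illustration in Python; the field names and rules here are hypothetical:

```python
def contextual_greeting(context: dict) -> str:
    """Adapt the avatar's opening line to observed customer behavior."""
    # Example signal: a customer lingering on the checkout page may need help.
    if context.get("page") == "checkout" and context.get("idle_seconds", 0) > 30:
        return "Need a hand finishing your order?"
    if context.get("returning_visitor"):
        return "Welcome back!"
    return "Hi! How can I help today?"

print(contextual_greeting({"page": "checkout", "idle_seconds": 45}))
# -> "Need a hand finishing your order?"
```

The same pattern extends to any signal the host page can observe, such as referral source or items in a cart.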

Video Output

For our customers who’d like to integrate an animated character into their static content – such as e-learning lessons or marketing materials – we offer the ability to output your avatar’s “scenes” as transparent WebM video.

Dynamic Output

Dynamic content is crucial for conveying easy-to-update, detailed, personalized information from the avatar to your users. There's no waiting, either: when you make changes to your conversation, you can push them out as soon as you're ready.