We’ve put a face (and a body) to ChatGPT

Like the majority of tech-minded people (and even those who don’t pay much attention to the IT world), we were blown away by the early December announcement of ChatGPT and by our first forays into the tool. I attended a networking event the night of the announcement, and discussions about what it could do – and the implications for business – were plentiful.

Would it destroy content-creation companies? Was it ethical? Was it accurate? How could it be used in business? How could we use it to take care of some of the more tiresome, repetitive tasks we do on a daily basis? And since those very early days (we’re still just a couple of months in), the questions and discussions continue to grow.

Within days of the announcement, our amazing dev team had an avatar using ChatGPT for conversations. Within weeks of that, we had internal tooling to connect ChatGPT to Google Dialogflow as our default fallback, to create new avatars connected to the technology, and to help us with training data generation and model training. As we continue to experiment with ChatGPT and other projects from OpenAI, we’re committed to being transparent and mindful of the fact that, well, to put it simply, “ChatGPT lies. Quite a bit. And very convincingly.”
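For the technically curious, here’s a rough sketch of what that training data generation and model training can look like with OpenAI’s GPT-3 fine-tuning: prompt/completion pairs in a JSONL file, uploaded and then used to kick off a fine-tune. This is a hypothetical illustration using the openai Python library (the 0.x series available in early 2023); the example Q&A pairs, file name, and base model choice are assumptions, not our actual data or pipeline.

```python
# Hypothetical sketch: generating GPT-3 fine-tuning data and starting a fine-tune
# with the openai Python library (0.x series, as available in early 2023).
# The example Q&A pairs, file name, and base model are illustrative assumptions.
import json
import openai

openai.api_key = "sk-..."  # your OpenAI API key

# GPT-3 fine-tunes expect JSONL: one {"prompt": ..., "completion": ...} object per line.
# OpenAI's guidance at the time was to end each prompt with a fixed separator and
# each completion with a stop sequence.
examples = [
    {"prompt": "What does the CodeBaby avatar do?\n\n###\n\n",
     "completion": " It guides visitors through the site and answers product questions. END"},
    {"prompt": "How do I get started?\n\n###\n\n",
     "completion": " Request a demo on codebaby.com. END"},
]

with open("training_data.jsonl", "w") as f:
    for example in examples:
        f.write(json.dumps(example) + "\n")

# Upload the JSONL file, then kick off a fine-tune against a GPT-3 base model.
upload = openai.File.create(file=open("training_data.jsonl", "rb"), purpose="fine-tune")
fine_tune = openai.FineTune.create(training_file=upload.id, model="davinci")
print(f"Started fine-tune: {fine_tune.id}")
```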

We’ve started with two applications of the technology on our own website. Our primary avatar, which helps users through our site and answers any questions they might have, still runs exclusively on Google Dialogflow. That gives us the utmost control over the information people are receiving about our company and products. But if you hit the default fallback – by asking a question that doesn’t match one of our intents – you’re asked to sign up to take a look at our ChatGPT options.
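To make that handoff concrete, here’s a minimal sketch of a Dialogflow ES fulfillment webhook that routes Default Fallback Intent questions to GPT-3. It isn’t our production code: the endpoint path, prompt wording, disclaimer text, and model choice are assumptions for illustration, and a real deployment would sit behind the sign-up and throttling described below.

```python
# Hypothetical sketch of a Dialogflow ES fulfillment webhook that hands
# Default Fallback Intent questions to GPT-3. Endpoint path, prompt wording,
# disclaimer text, and model choice are illustrative assumptions.
import openai
from flask import Flask, request, jsonify

app = Flask(__name__)
openai.api_key = "sk-..."  # your OpenAI API key

@app.route("/dialogflow-webhook", methods=["POST"])
def webhook():
    body = request.get_json()
    intent = body["queryResult"]["intent"]["displayName"]
    user_text = body["queryResult"]["queryText"]

    if intent != "Default Fallback Intent":
        # Matched intents keep their curated, authoritative Dialogflow responses.
        return jsonify({"fulfillmentText": body["queryResult"].get("fulfillmentText", "")})

    # Anything our own intents can't answer goes to GPT-3, clearly labeled
    # as generated (and therefore not always reliable).
    completion = openai.Completion.create(
        model="text-davinci-003",
        prompt=f"Answer the website visitor's question briefly:\n\n{user_text}\n",
        max_tokens=150,
        temperature=0.7,
    )
    answer = completion.choices[0].text.strip()
    return jsonify({"fulfillmentText": f"(AI-generated, may be inaccurate) {answer}"})

if __name__ == "__main__":
    app.run(port=8080)
```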

Why are we using a sign-up? We want to make sure people are very clear that they’re getting not-always-reliable data and that it isn’t confused with the authoritative information we’re providing about our company. We also want to, at least initially, throttle requests so we don’t run afoul of any OpenAI usage policies. We’d love for you to take a look and provide any feedback about our implementation: https://codebaby.com/resources/demos/demo-chatgpt-enabled-agent/.
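And as a rough illustration of that throttling, here’s a minimal per-user rate-limit sketch. The 60-second window and five-request cap are arbitrary placeholders, not our actual limits.

```python
# Hypothetical sketch of simple per-user throttling, so fallback traffic to
# OpenAI stays within usage limits. The 60-second window and five-request cap
# are arbitrary placeholders, not our actual limits.
import time
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_REQUESTS_PER_WINDOW = 5
_recent = defaultdict(deque)  # user_id -> timestamps of recent requests

def allow_request(user_id: str) -> bool:
    """Return True if this signed-up user may make another GPT request right now."""
    now = time.time()
    timestamps = _recent[user_id]
    # Drop timestamps that have aged out of the window.
    while timestamps and now - timestamps[0] > WINDOW_SECONDS:
        timestamps.popleft()
    if len(timestamps) >= MAX_REQUESTS_PER_WINDOW:
        return False
    timestamps.append(now)
    return True
```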

We’re also really curious to see the types of interactions people are trying to have with the ChatGPT-enabled avatars. Are they just going to use it to try to get around school or municipal restrictions against accessing the ChatGPT software? Or are they asking questions that should already be part of our conversational corpus? Maybe people are just using it to have fun – we’re OK with that, too.

As we go forward and introduce more products using OpenAI tooling, we’re committed to acting as ethically and transparently as possible. We’re in the process of putting together our internal guidelines for generative AI, and we’re eagerly anticipating the release of GPT-4 in the hope that models trained against it will be more reliable than the ones we’ve trained against GPT-3.

And, as always, we welcome feedback from the community. Whether you’re asking if the technology can be used for a particular use case or raising more technical questions… no ideas are bad ones, and we’re beyond excited to have those conversations.

In the meantime, please stop by the demo and feel free to reach out. We’ll also be in Chattanooga, TN for Project Voice 2023 in late April, and can’t wait to have some of these conversations in person.

This technology is all about conversations. Let’s get one started.

Michelle Collins
Director of Marketing & Product Development at CodeBaby
