The Importance of Building a Responsible AI: Is Your Conversational AI a Threat to Your Brand?
Rogue chatbots at DPD and Air Canada show the huge brand risk of conversational AI. Marketers must champion Responsible AI (the 'how'), not just Ethical AI (the 'why'). Learn how to protect your brand and build customer trust.
Building your own AI and giving it a voice in your business means you trust it to speak in your name. From that point onwards, everything your brand new conversational AI says, it says on behalf of the company.
While you, as a marketer, will not be tasked with the technical side of building an AI tool, you must be well aware of its importance and of the risks AI carries if it is not approached correctly.
There are numerous stories of AI failing in exactly this way, and they cannot be written off as technical coding mistakes. What they do is damage your brand. So, as the person in charge of building that brand, the fallout will inevitably be your responsibility.
Examples of AI Failing a Brand
Mistakes can and do happen. Over the years of its deployment, AI has had its fair share of moments where it did the exact opposite of what it was designed to do.
One of the most familiar examples of an AI gone rogue is the DPD chatbot. After some deliberate prompting by a frustrated customer, it started swearing, complaining about DPD’s poor service, and even wrote a haiku about how bad that service was. DPD blamed an AI update, but the damage to the brand had already been done.
DPD Chatbot fail (Source: X, @ashbeauchamp)
Another stark example is Air Canada's costly chatbot experience. A customer was incorrectly told by the airline's chatbot that he could apply for a bereavement fare after booking his flight. When the airline refused to honour this promise, the customer took them to court. The tribunal ruled in the customer's favour, forcing Air Canada to pay damages and stating that the airline was responsible for all information on its website, whether it came from a human or a chatbot.
A similar example was the Chevrolet chatbot, which, as customers quickly realised, could be easily manipulated into saying almost anything they wanted, including, in one widely shared exchange, agreeing to sell a car for a single dollar.
Chevrolet chatbot fail (Source: X, @ChrisJBakke)
These instances are more than just tech malfunctions. They are failures to prioritise ethics and responsibility. What was left in the end? Customers who were dissatisfied, disappointed, and sometimes even strongly offended by the AI that was meant to help them.
What Is Ethical AI? What Is Responsible AI?
We have now mentioned these two slightly abstract terms a couple of times, so it’s time to explain what they really mean. Understanding them will let you ask the right questions about your AI project and keep your brand and your customers safe.
Think of it this way:
Ethical AI is the 'Why'.
It's the moral compass. It’s about doing the right thing and aligning AI with human values like fairness and justice. A failure in ethics is when an AI system produces a fundamentally unfair outcome.
For example, Amazon had to scrap an AI recruiting tool because it learned from historical data to penalise resumes that included the word "women's," systematically discriminating against female candidates. The tool worked technically, but its outcome was unethical.
Responsible AI is the 'How'.
This is where philosophy becomes practice. Responsible AI is the operational framework: the concrete policies, tests, and safety nets you build to put your ethical principles into action. A failure in responsibility is when a lack of proper governance leads to a harmful or brand-damaging event.
The Air Canada chatbot inventing a bereavement fare policy is a perfect example. The company failed in the 'how' by not having the responsible processes in place to ensure its AI was robust, accurate, and accountable for the information it provided.
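To make the 'how' tangible, here is a minimal sketch in Python of one such safety net: the chatbot may only state policy it can ground in pre-approved wording, and it escalates to a human otherwise. Everything here, from the `APPROVED_POLICIES` table to the naive containment check, is an illustrative assumption, not any vendor's real API.

```python
# A minimal sketch of one Responsible AI "safety net": only release a
# chatbot answer if it can be grounded in approved policy wording, and
# escalate to a human otherwise. All names are illustrative assumptions.

APPROVED_POLICIES = {
    "bereavement_fare": (
        "Bereavement fares must be requested before travel and "
        "cannot be applied retroactively."
    ),
}

def is_grounded(draft_answer: str, topic: str) -> bool:
    """Naive grounding check: the draft must quote the approved wording.

    A production system would use semantic comparison or a second
    reviewing model; this containment check just marks the control point.
    """
    policy = APPROVED_POLICIES.get(topic)
    return policy is not None and policy in draft_answer

def respond(draft_answer: str, topic: str) -> str:
    """Release the model's draft only if it passes the grounding check."""
    if is_grounded(draft_answer, topic):
        return draft_answer
    # Never improvise policy: hand off instead of guessing.
    return ("I want to be sure you get accurate information. "
            "Let me connect you with an agent who can confirm our policy.")

# An Air Canada-style failure: the model drafts a promise that appears
# in no approved policy, so the guardrail escalates instead of answering.
print(respond("You can request a bereavement fare after booking.",
              "bereavement_fare"))
```

A real deployment would replace the containment check with something far more robust, but the control point stays the same: no ungrounded policy claim ever reaches the customer.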
Why This Matters to You, the Marketer
This distinction is crucial because an AI system itself cannot be "ethical"; it has no morals. But a company can, and legally must, be responsible for the outcomes of its AI. Framing the challenge around "Responsible AI" shifts the conversation from an abstract debate to a practical business discussion about accountability and risk management. This is the other half of the conversation we started in our last blog post. Once you've made the case for AI's strategic value, you must also prove you can manage its risks.
As the brand's primary custodian, this is your domain. You are not expected to code the algorithms, but you are expected to protect the customer and the brand's reputation. Understanding this framework allows you to ask the right questions of your tech teams and vendors:
- "How are we testing this tool for bias to ensure it treats all customer segments fairly?"
- "What is our process if the chatbot gives a customer incorrect information?"
- "How can we explain why the AI recommended this product to this specific person?"
Conclusion: Your Brand's New Guardian
Ultimately, championing Responsible AI is not just an IT task. It is central to the core marketing mission of building and protecting brand value.
In an era of rising customer scepticism and ever-growing customer expectations, a clear commitment to using AI safely and transparently is no longer just a defensive measure.
Instead, it is one of the most powerful competitive advantages you can build, transforming a complex risk into a compelling reason for customers to choose, trust, and advocate for your brand over others.
The future belongs to marketers who don't just ask if their AI is smart, but demand that it is responsible.