Top 3 Gen-AI Concerns From Customers
Potential customers have reasonable concerns when adopting AI products. In this essay, we'll dive into the top three concerns AI customers raise, with examples and suggested solutions.
Intro
We're continuing our series on lessons learned from companies that have been working with Large Language Models (LLMs) since 2018. Last week, I wrote about the top three mistakes companies often make when working with gen-ai. This week, by reader suggestion, I asked engineers and leaders at these companies about the main concerns their customers raise. Let’s dive in.
Concern 1: Reliability and Quality of the AI
Reliability and quality of the AI are potential customers’ biggest concerns. To better understand this, let's look at two real stories from customers.
The first one said, "If I put your chatbot in front of my customers, will it say anything it shouldn't?" This concern stems from AI not always behaving as we expect. When companies fine-tune their models on past customer chats, the models can pick up rude or inappropriate language from that data. Teams can mitigate this by cleaning the training data (for example, normalizing text with stemming) and by filtering for specific keywords after the AI has generated its response. Still, these steps won't catch everything. Language changes over time, and what might be okay to say today could be offensive tomorrow.
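As a rough sketch of the keyword-filtering step, here is what a post-generation check might look like. The blocklist and function names are illustrative, not from any particular product; a real deployment would pair a maintained blocklist with a dedicated moderation model.

```python
import re

# Illustrative blocklist only; production systems use maintained,
# regularly updated lists plus a moderation model.
BLOCKLIST = {"stupid", "idiot", "shut up"}

def violates_policy(response: str) -> bool:
    """Flag a model response that contains a blocked phrase."""
    lowered = response.lower()
    return any(re.search(r"\b" + re.escape(term) + r"\b", lowered)
               for term in BLOCKLIST)

def safe_reply(response: str,
               fallback: str = "Let me connect you with a human agent.") -> str:
    """Return the response unchanged, or a fallback if it trips the filter."""
    return fallback if violates_policy(response) else response
```

As the article notes, this is inherently leaky: a blocklist written today encodes today's norms, which is why filtering is a mitigation rather than a guarantee.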
The second worry is about whether the AI is giving accurate information. One customer wondered, "If I let my analysts use your AI, how can I be sure it will give them the right information?" If information is incorrect, it could lead to poor decisions with far-reaching consequences for the business. Again, while there is no perfect solution here, teams can combat this by investing in prompt engineering, creating user feedback loops from the analysts themselves, and using that feedback for reinforcement learning.
Teams should also run robust monitoring and observability services that can measure the engagement with and accuracy of responses.
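As a minimal illustration of what "measuring engagement and accuracy" can mean in practice, the sketch below tracks per-response feedback in memory. The class and metric names are hypothetical; a production setup would feed these counts into a real observability stack and use the feedback signal for the reinforcement learning loop mentioned above.

```python
from collections import Counter

class ResponseMetrics:
    """Tiny in-memory tracker for response engagement and accuracy.
    Illustrative only; production systems export these counts to an
    observability backend instead of holding them in memory."""

    def __init__(self) -> None:
        self.counts: Counter = Counter()

    def record(self, helpful: bool, factually_correct: bool) -> None:
        # Booleans count as 0/1, so summing them gives per-metric tallies.
        self.counts["total"] += 1
        self.counts["helpful"] += helpful
        self.counts["correct"] += factually_correct

    def accuracy(self) -> float:
        total = self.counts["total"]
        return self.counts["correct"] / total if total else 0.0

    def engagement(self) -> float:
        total = self.counts["total"]
        return self.counts["helpful"] / total if total else 0.0
```

Even a simple tally like this makes regressions visible: if accuracy drops after a model update, the feedback loop catches it before customers do.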
Concern 2: Workflow Integration
The second major concern customers have is the integration of gen-ai products into their existing workflows. Change is hard, and the more a new tool disrupts the current way of doing things, the more resistance it's likely to face. One customer explained it this way, “If integrating your AI tool is going to cause a lot of disruption in our work routine, why would we want to adopt it?”
It’s difficult to convince customers of the merits of improved efficiency or lower costs before they've seen positive results. Often, customers have built their workflows over years, refining them to be efficient and to meet their specific needs. Throwing a new, complex AI tool into the mix can upset that balance. It's not just about learning how to use the tool itself. It's also about re-shaping processes, re-training employees, and re-adjusting to a new “normal.” This is why ease of integration is such a significant factor in the adoption of new technologies.
To address this, companies need to focus on user-friendly design and flexible features that can adapt to a variety of workflows rather than forcing the workflow to adapt to the tool. This may seem to contradict last week's advice about owning the entire stack, but the right choice depends on the industry. In general, if your industry has a well-defined go-to tool like Figma or Adobe, then it may make sense to adapt your tools into that specific workflow. But if your market doesn’t yet have a tool with high NPS, as in the legal field, it may make sense to start by vertically integrating the process.
Concern 3: Security and Privacy
A third concern revolves around the security and privacy implications of using gen-ai products. As AI applications often need to train on sensitive data, customers ask, “How secure is your AI? Can it guarantee the privacy of our data?”
Customers need their data to be handled responsibly. They need reassurance that the AI won't retain or share confidential information from one interaction to the next. For example, if a model is trained on past customer conversations, customers will want to know how their data is anonymized during this process.
To address these concerns, gen-ai companies must focus on robust security measures. When fine-tuning models, teams should strip out customer-identifying information. Customers will also often want full control of their data, including the ability to access or delete it whenever they need to. I recommend setting up separate Kubernetes clusters for each customer to simplify this task. Lastly, SOC 2 certification will go a long way toward building trust in the healthcare and finance industries.
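A minimal sketch of the PII-stripping step, assuming simple regex patterns for a few common identifier types. Real anonymization pipelines typically combine pattern matching like this with named-entity-recognition models and human review; the patterns and labels here are illustrative.

```python
import re

# Illustrative patterns for common US-style identifiers; real pipelines
# cover many more formats and add NER-based detection.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+"),
    "PHONE": re.compile(r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b"),
    "SSN":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
}

def redact(text: str) -> str:
    """Replace common PII with typed placeholders before fine-tuning."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text
```

Typed placeholders like `[EMAIL]` (rather than blanking the text) preserve sentence structure, which keeps the redacted transcripts usable as fine-tuning data.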
Other Concerns
“If I use this product, people will be concerned about the status of their jobs.”
“We’ve tried a similar product from X and didn’t find it helpful.”
Conclusion
In conclusion, the adoption of generative AI products comes with its own unique challenges. We've examined three significant customer concerns: the reliability and quality of the AI, ease of integration with existing workflows, and security and privacy measures. If you have any feedback, thoughts, or questions, please comment below or reach out via Twitter DMs.