Ad hoc use of genAI won’t deliver strategic benefits

Marketers and sales execs blithely experimenting with genAI risk repeating CRM mistakes of the past, where companies lost potentially valuable data capture opportunities.

If you look at industry surveys about the adoption of generative AI, you will find a lot of marketers dipping a toe in the water, or even a whole foot.

But easy access to these productivity tools means adoption can sometimes be rather ad hoc and fragmented.

For example, Adobe’s 2024 trends survey found only a quarter of client-side senior executives had already conducted skill-building programmes on genAI. And yet new Marketing Week data shows almost half of marketers are using AI for market research, and more than two in five for audience segmentation and creative testing.

Anecdotally, I’m sure we have all spoken to people, within marketing and without, who have cracked on with using new tools to create imagery and copy, and advised others to do the same.

Is there any problem with having an open mind and keeping abreast of new technology? Isn’t this what any good marketer should do?

Is it benefitting the wider organisation?

The danger to look out for is when ad hoc experiments deliver small local benefits while missing the opportunity to capture larger benefits for the wider organisation.

I spoke to one experienced data and analytics leader earlier this year who used the analogy of the early days of CRM systems in B2B businesses, and the difficulty in ensuring adoption by salespeople.

“For 10-plus years, [businesses said] manage your contacts and key sales processes in CRM. They never did that properly. And then suddenly they realised they can [manage contacts] on their smartphones. They can find a customer, they can use WhatsApp, they can do their own small campaign within minutes. They create contacts, groups of customers… So, basically the company has not got its hands on this customer data asset. It’s on people’s devices. It’s fragmented. It’s lost,” they told me.

“If they had strategised properly on a CRM system and found ways to get that data in, they would be very rich with customer data. They would have a unified data set. Their customers would be happy, the sales reps will have all the productivity they need, the asset will be managed as a corporate asset.

“I’ve seen a lot of companies that missed that opportunity. The same could happen in AI.”

The risks of free rein

The potential downsides of allowing a salesperson, marketer or customer service associate to add large language models (LLMs) to their workflow have been well documented.

CEOs may perceive genAI to be a threat to both compliance and competitiveness, with issues such as hallucinations and privacy breaches often front of mind.

The challenge of aligning genAI usage with company strategy, and of bolstering institutional knowledge, is one that vendors are well aware of too.

Salesforce, for example, on marketing pages for its Einstein Trust Layer, talks about ‘dynamic grounding’, which it describes as adding “domain-specific knowledge and customer information” to prompts to generate more accurate responses, such as using “CRM data, knowledge articles, service chats, and more” to “reduce the chance of hallucinations”. There is also detail on “data masking” and “zero retention architecture”, both aimed at shielding customer data from external LLMs.
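For readers who want a concrete picture of what ‘dynamic grounding’ and ‘data masking’ mean in practice, here is a minimal Python sketch of the general pattern: retrieve relevant first-party records, redact personally identifiable information, and only then assemble the prompt. This is an illustration of the technique, not Salesforce’s implementation; every function name, record and regex below is hypothetical.

```python
import re

def tokens(text):
    """Lowercase word tokens with punctuation stripped."""
    return set(re.findall(r"[a-z0-9]+", text.lower()))

def mask_pii(text):
    """Redact email addresses and phone numbers (crude, illustrative patterns)."""
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", "[EMAIL]", text)
    text = re.sub(r"\+?\d[\d\s()-]{7,}\d", "[PHONE]", text)
    return text

def retrieve(question, records):
    """Naive keyword-overlap retrieval, standing in for real search."""
    q = tokens(question)
    return [r for r in records if q & tokens(r)]

def grounded_prompt(question, records):
    """Assemble a prompt grounded in masked, retrieved first-party context."""
    context = "\n".join(mask_pii(r) for r in retrieve(question, records))
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\nQuestion: {question}"
    )

if __name__ == "__main__":
    crm_records = [  # stand-ins for CRM data and knowledge articles
        "Acme Ltd renewal due in June; contact jane.doe@acme.example, +44 20 7946 0958.",
        "Knowledge article: renewal discounts are capped at 10% for the enterprise tier.",
    ]
    print(grounded_prompt("What discount applies to the Acme renewal?", crm_records))
```

In production, the keyword lookup would be replaced by proper retrieval and the regexes by a real PII-detection service; the point is simply that grounding and masking happen before the prompt ever leaves the organisation.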

One might argue, in the case of ad hoc use of LLMs, that productivity tools are, by their nature, dispersed, and that employees already need to take personal responsibility for their use of everything from Google search and social media to enterprise software.

This is undoubtedly true, and the first step in training is to make sure employees are aware of what constitutes responsible use of new tools. However, the broader point still stands: is there a need for AI to solve a business problem, or not? If there is, the solution will need to be aligned with company strategy.

Cassie Kozyrkov, CEO at Data Scientific and former chief decision scientist at Google, wrote in a post on LinkedIn last month that AI “should be what you try after traditional programming fails. When you have something to automate, but you aren’t able to do it with your existing bag of tricks. When the need is so critical that you’re willing to add complexity and the reduction of control that comes with it.”

In the comments underneath her post, when asked whether genAI tools improve efficiency for all organisations, Kozyrkov pointed out, “It has been shown to decrease efficiency when mismanaged.”

Expectations for genAI are sky-high, with two-thirds of senior executives in Adobe’s recent survey saying they are optimistic the technology will deliver business transformation across analytics, content, customer service and sales. But without strategic oversight, and with the self-starters simply left to follow their noses, those expectations are likely to be unmet.

Ben Davis is insights editor at Econsultancy, which provides e-learning, live-learning online workshops and skills mapping in digital, marketing and ecommerce.
