
Do Consumers Actually Want AI-Driven Personalization?
AI personalization has become one of the most widely accepted assumptions in contemporary digital marketing strategy. Across strategy decks, product roadmaps, and vendor narratives, personalization is presented as both inevitable and universally welcomed: a rational response to information overload, a driver of relevance, and a prerequisite for competitive advantage. Artificial intelligence, in this context, is framed as the long-awaited enabler—finally capable of tailoring content, offers, and experiences at scale.
Yet the apparent consensus conceals a more fragile reality. While personalization technologies have advanced rapidly, consumer attitudes toward them remain ambivalent, context-dependent, and often poorly understood. What is frequently described as “what consumers want” is, in practice, a projection of organisational incentives, platform economics, and measurement regimes that privilege short-term optimisation over deeper behavioural understanding.
This article examines whether consumers actually want AI-driven personalization, and under what conditions. It distinguishes between expressed preferences and revealed behaviour, between convenience and consent, and between relevance and perceived control. The aim is not to reject personalization as such, but to interrogate the assumptions that currently surround it—and to clarify why many personalization initiatives fail not technically, but psychologically and strategically.
Personalization entered mainstream digital strategy as a corrective to mass communication. In its early forms—segmented email campaigns, location-based offers, rule-based recommendations—it addressed a tangible problem: generic messages in a fragmented media environment. Over time, however, personalization has expanded from a tactical adjustment to a normative expectation. Experiences that are not personalised are increasingly framed as deficient, inefficient, or even disrespectful of user attention.
Artificial intelligence accelerated this inflation. Machine learning models promised to infer preferences without explicit input, anticipate needs before they are articulated, and continuously optimise interactions in real time. In doing so, they shifted personalization from a choice to a background condition—something that happens by default, often invisibly.
This shift matters. When personalization is framed as a neutral improvement in relevance, questions of agency, interpretation, and legitimacy recede. Consumers are no longer asked what they want; systems infer what they are assumed to want. The distinction between service and surveillance becomes blurred, not because consumers cannot perceive it, but because it is rarely presented as a decision point.
A recurring source of confusion in debates about personalization lies in the interpretation of consumer data. Surveys often show that users appreciate “relevant” recommendations and dislike “irrelevant” advertising. These findings are frequently cited as evidence of demand for personalization. Yet such results are, at best, incomplete.
First, expressed preferences are shaped by framing. When respondents are asked whether they prefer relevant content over irrelevant content, the answer is unsurprising. The question does not address how relevance is achieved, what data is used, or what trade-offs are involved. Convenience is evaluated in isolation from cost.
Second, much of the behavioural data used to justify personalization reflects tolerance rather than desire. Users may continue to engage with personalised systems because opting out is difficult, alternatives are limited, or the perceived cost of resistance is higher than the discomfort of compliance. Continued usage, in this sense, is not equivalent to endorsement.
This distinction becomes visible in moments of rupture. Public reactions to certain personalization practices—such as overly intimate recommendations, predictive inferences about sensitive attributes, or ads that appear immediately after private conversations—are often framed as anomalies or communication failures. In reality, they reveal underlying expectations about boundaries. Consumers may accept personalization up to a point, but that point is neither fixed nor purely individual; it is shaped by social norms, cultural context, and trust in the actor deploying the system.
The strategic rhetoric around personalization assumes that relevance is an unqualified good. More relevance, achieved through more data and better models, is presumed to lead to better outcomes for both consumers and organisations. This linear logic ignores the relational nature of relevance itself.
Relevance is not merely a function of accuracy. A recommendation can be factually aligned with a user’s past behaviour and still feel intrusive, manipulative, or premature. In such cases, the issue is not that the system is wrong, but that it is too confident—or too opaque—about how it knows what it knows.
Empirical research in consumer psychology suggests that perceived autonomy plays a critical role in how personalised experiences are evaluated. When users feel that a system is guiding them while preserving a sense of choice, personalization can enhance satisfaction. When they feel steered, categorised, or predicted in ways that limit exploration, the same mechanisms can provoke resistance.
This helps explain why certain forms of personalization are widely accepted—such as recommendation lists framed as suggestions—while others trigger discomfort, even if they are technically similar. The difference lies in how much interpretive space is left to the user, and whether the system’s influence is perceived as assistive or directive.
Another persistent belief is that increasing data volume necessarily improves personalization quality. From this perspective, consumer resistance is treated as a temporary lag—something that will diminish as models become more accurate and experiences more seamless.
This belief conflates prediction with understanding. Machine learning systems are effective at identifying patterns across large datasets, but they do not comprehend intent, meaning, or context in the human sense. They infer correlations, not motivations. As a result, they often over-generalise from partial signals, reinforcing narrow behavioural loops.
For consumers, this can translate into a feeling of being reduced to a profile—a stable set of inferred preferences that fails to account for change, ambiguity, or contradiction. While such reduction may improve short-term metrics, it can erode longer-term trust, particularly when users notice that systems struggle to accommodate shifts in identity, taste, or circumstance.
Importantly, this erosion does not always manifest as overt backlash. More often, it appears as disengagement: ignored recommendations, muted interaction, or a gradual withdrawal from features perceived as overly prescriptive. These outcomes are rarely attributed to personalization itself, because they do not register as clear failures within existing performance frameworks.
Consumer expectations around personalization have not emerged in a vacuum. They have been shaped by dominant platforms whose economic models depend on behavioural prediction and attention optimisation. Over time, these platforms have normalised a level of data-driven adaptation that would have been considered excessive in other contexts.
However, normalisation does not imply desire. Many users accept personalization on large platforms because of perceived inevitability: the sense that participation requires acquiescence. This dynamic differs significantly from contexts in which consumers interact with smaller brands, public institutions, or services associated with higher trust expectations.
In such contexts, the same personalization practices can feel disproportionate. An AI-driven recommendation from a global entertainment platform may be perceived as convenient; a similar inference by a financial service or healthcare provider may raise concerns about overreach. Consumers adjust their expectations based on perceived power asymmetries and the stakes involved.
This variability challenges the idea that there is a single consumer attitude toward personalization. What exists instead is a set of situational judgments, influenced by who is personalising, for what purpose, and with what degree of transparency.
One of the more counterintuitive findings in practice is that personalization can, under certain conditions, reduce trust rather than enhance it. This occurs not when systems are inaccurate, but when they appear too knowing without sufficient justification.
Trust, in this sense, is not a function of performance alone. It depends on whether users feel respected as agents rather than treated as predictable objects. Systems that adapt without explanation, that anticipate needs without invitation, or that operate beyond the user’s perceived zone of legitimacy risk triggering suspicion—even if they deliver objectively “better” outcomes.
This risk is amplified by AI systems that learn continuously. As personalization becomes more dynamic, the logic behind it becomes harder to reconstruct. Users may struggle to understand why certain content appears, or why options seem constrained. In the absence of intelligible cues, they may attribute intent where none exists, interpreting optimisation as manipulation.
Such interpretations are not irrational. They reflect an attempt to make sense of opaque systems that nonetheless exert influence over choice. From a strategic perspective, dismissing these reactions as misunderstandings misses the point. Perception, not intention, determines trust.
Within organisations, personalization is often discussed at a level of abstraction that obscures its experiential consequences. Strategy documents emphasise efficiency, relevance, and scalability, while metrics focus on conversion rates, engagement, and retention. What remains under-examined is how personalization feels over time, especially to users who do not conform neatly to inferred categories.
This gap can be observed in cases where companies invest heavily in personalization infrastructure, only to find that incremental gains plateau or reverse. The typical response is further optimisation: more data sources, more granular segmentation, more complex models. Rarely is the underlying assumption questioned—that consumers want deeper personalization in the first place.
From a behavioural standpoint, this response is understandable. Organisational learning is shaped by what is measurable, and short-term uplift often rewards intensification. Yet without a parallel inquiry into consumer expectations and boundaries, such intensification risks becoming self-defeating.
The question, then, is not whether consumers want personalization in an absolute sense. It is whether they want the forms of personalization currently being deployed, at the depth and opacity with which they are often implemented.
Evidence suggests that consumers value personalization when it is clearly beneficial in a specific context, proportionate to the relationship, and aligned with their sense of control.
They are far less comfortable with personalization that feels extractive, speculative, or difficult to contest. Importantly, these judgments are not static. They evolve as norms shift, as regulatory frameworks intervene, and as public awareness of data practices increases.
For strategy, this implies a shift in emphasis. Instead of asking how to personalise more effectively, organisations may need to ask when not to personalise, or how to design restraint into adaptive systems. Such questions are less amenable to automation, but more central to long-term legitimacy.
Treating personalization as an unquestioned good has led many organisations to over-invest in technical capability while under-investing in conceptual clarity. As AI systems become more powerful, this imbalance becomes more consequential.
A more sustainable approach begins with acknowledging that consumer desire is conditional, not absolute. It recognises that relevance must be negotiated, not imposed, and that trust cannot be optimised in the same way as click-through rates. It also accepts that some forms of personalization may deliver short-term gains at the expense of longer-term relationships.
For senior decision-makers, this perspective reframes personalization from a default strategy to a design choice—one that requires judgment, contextual sensitivity, and an understanding of human behaviour that extends beyond data patterns.
AI has made personalization technically feasible at unprecedented scale, but it has not resolved the underlying question of desirability. Consumers do not uniformly want AI-driven personalization; they evaluate it through the lenses of trust, agency, and appropriateness. Where these dimensions are respected, personalization can enhance experience. Where they are ignored, it can quietly erode confidence.
The prevailing misconception is that resistance to personalization reflects a lack of understanding or a temporary discomfort with new technology. In many cases, it reflects something more fundamental: an intuitive response to systems that overstep perceived boundaries. Recognising this does not require abandoning personalization, but it does require re-situating it within a broader strategic and ethical frame.
As digital environments continue to evolve, the organisations that will retain credibility are likely to be those that treat personalization not as an inevitability to be maximised, but as a relationship to be carefully governed.
Article by Dario Sipos.
Dario Sipos, Ph.D., is a Digital Marketing Strategist, Branding Expert, Keynote Public Speaker, Business Columnist, and author of the highly acclaimed books Digital Personal Branding and Digital Retail Marketing.
Readers who wish to explore the underlying research, citations, and peer-reviewed publications can find them via his Google Scholar Profile.
His verified academic identifier is available through ORCID.
