
DMA, DSA, and the End of Dark Patterns in Digital Marketing
The Digital Markets Act (DMA), the Digital Services Act (DSA), and dark patterns in digital marketing now define a new regulatory boundary for persuasive design in the European Union.
More data in personalization strategy does not automatically produce better personalization outcomes. In contemporary digital marketing, the accumulation of behavioural, transactional, and contextual data is often treated as a proxy for strategic sophistication. The prevailing assumption is linear: more signals generate more accurate models, which generate more relevant experiences, which generate higher conversion and retention.
This assumption appears intuitively plausible. Artificial intelligence systems thrive on data. Machine learning models improve predictive accuracy when exposed to larger and more diverse datasets. However, the strategic leap from predictive improvement to experiential improvement is far less straightforward. Personalization is not solely a technical function of input volume. It is a relational phenomenon shaped by interpretation, context, and boundary perception.
The question, therefore, is not whether data improves models. It is whether more data improves the user’s experience in ways that are strategically sustainable.
Data accumulation holds structural appeal for organisations. It promises clarity in uncertain environments. When markets fragment and consumer behaviour becomes less predictable, expanding data collection appears to reduce ambiguity. Dashboards grow more detailed, segmentation becomes more granular, and predictive systems grow more confident.
Within this logic, insufficient performance is often attributed to insufficient data. If personalization results plateau, the response is to integrate additional sources—cross-device tracking, third-party enrichment, contextual inference, behavioural clustering. The underlying belief is that deeper insight necessarily leads to better outcomes. This belief often reflects a broader pattern of AI evangelism in business rather than disciplined strategic evaluation.
Yet this belief conflates two distinct processes: prediction and perception. While predictive accuracy may increase with data volume, user perception of personalization is shaped by more than statistical alignment.
As data density increases, so does the likelihood of inferential complexity. Systems draw connections across browsing history, purchasing patterns, location signals, device interactions, and sometimes external datasets. From a computational standpoint, this integration enhances probability estimation.
From a user standpoint, it may produce interpretive overreach.
Personalization that draws on signals the user does not consciously associate with a brand interaction can feel disproportionate. A recommendation that reflects a recent, salient purchase may appear helpful. A recommendation based on indirect behavioural correlation may feel intrusive or inexplicable.
The issue is not accuracy alone. It is the visibility of inference. When users cannot reconstruct how a conclusion was reached, personalization may be interpreted as surveillance rather than service.
More data increases the probability of such opacity. As inference chains lengthen, transparency diminishes.
In performance optimisation environments, segmentation is frequently refined until audiences become narrowly defined clusters. Micro-segmentation promises precise targeting and tailored messaging.
However, hyper-granularity can generate diminishing returns.
First, small segments reduce statistical robustness. Models trained on highly specific behavioural slices may overfit, performing well in limited contexts but failing to generalise. Second, excessive segmentation complicates creative coherence. Brands risk fragmenting their narrative into multiple micro-messages, weakening identity consistency.
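The robustness point can be illustrated with a minimal simulation. The conversion rate and segment sizes below are hypothetical, chosen only to show how estimate variance widens as segments shrink:

```python
import random
import statistics

random.seed(7)
TRUE_RATE = 0.05  # assumed true conversion rate (illustrative)

def estimated_rate(segment_size: int) -> float:
    """Estimate conversion rate from one simulated segment."""
    conversions = sum(random.random() < TRUE_RATE for _ in range(segment_size))
    return conversions / segment_size

def spread(segment_size: int, trials: int = 500) -> float:
    """Standard deviation of the estimate across many resampled segments."""
    return statistics.stdev(estimated_rate(segment_size) for _ in range(trials))

# Micro-segments produce far noisier estimates than broad ones.
print(f"n=50:   spread = {spread(50):.3f}")
print(f"n=5000: spread = {spread(5000):.3f}")
```

A model tuned against the noisy small-segment estimates will chase sampling error rather than preference, which is the overfitting risk described above.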
Third, from a psychological perspective, over-targeting can reduce perceived autonomy. When offers appear too closely aligned with recent behaviour, users may feel steered rather than supported.
Thus, while data volume enables granularity, strategic value may decline beyond a certain threshold.
A recurring blind spot in personalization strategy is the distinction between data quality and data quantity. Large datasets do not guarantee reliability. Inaccurate, outdated, or contextually misaligned data can degrade model performance regardless of volume.
Moreover, not all behavioural signals carry equal meaning. A single intentional purchase may reveal more about preference than dozens of passive browsing events. Systems that treat all signals as equivalent risk amplifying noise.
Investment in governance—clean data pipelines, contextual tagging, interpretive discipline—often produces greater personalization gains than sheer expansion of data capture. Yet governance improvements lack the symbolic appeal of scale.
Strategic maturity involves recognising that better personalization depends as much on curation as on accumulation.
As organisations collect more data, they move closer to the boundaries of perceived privacy. Even when data practices comply with regulatory standards, perception may diverge from legality.
Users evaluate personalization not only by relevance but by proportionality. When the depth of inference exceeds the perceived scope of interaction, trust may weaken. This is particularly evident when personalization reflects cross-context aggregation that users did not anticipate.
The paradox is clear: the very data expansion intended to improve personalization can undermine the relational foundation upon which personalization depends.
Trust, unlike predictive accuracy, does not scale linearly with data volume.
Artificial intelligence systems trained on large datasets often generate probabilistic outputs expressed with high confidence scores. Internally, these probabilities reflect statistical patterns. Externally, they may be interpreted as certainty.
Overreliance on expansive datasets can create an illusion of predictive completeness. Organisations may assume that behavioural history sufficiently captures future intention. In reality, human preferences are dynamic, situational, and sometimes contradictory.
When personalization systems anchor too heavily in historical data, they risk reinforcing outdated representations of identity. Users may experience a narrowing of exposure, encountering repeated suggestions that reflect past behaviour rather than emerging interests. Such dynamics raise important questions about marketing after automation and the preservation of human judgment.
More data intensifies this anchoring effect unless deliberately counterbalanced with mechanisms for exploration and reset.
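One such counterbalancing mechanism is to reserve a fraction of recommendation slots for exploration outside the user's history. A minimal epsilon-greedy sketch, where the 10% exploration rate and item names are illustrative assumptions:

```python
import random

def recommend(history_ranked: list, catalog: list,
              epsilon: float = 0.1, rng: random.Random = None) -> str:
    """With probability epsilon, explore outside the user's history;
    otherwise exploit the top history-based prediction."""
    rng = rng or random.Random()
    unseen = [item for item in catalog if item not in history_ranked]
    if unseen and rng.random() < epsilon:
        return rng.choice(unseen)   # exploration: surface a fresh item
    return history_ranked[0]        # exploitation: anchor on past behaviour

rng = random.Random(42)
history = ["crime_novels", "biographies"]
catalog = ["crime_novels", "biographies", "cookbooks", "travel_guides"]
picks = [recommend(history, catalog, epsilon=0.1, rng=rng) for _ in range(1000)]
print(picks.count("crime_novels") / 1000)  # roughly 0.9
```

Roughly one recommendation in ten steps outside the historical profile, which limits the narrowing-of-exposure effect without abandoning predictive relevance.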
Expanding data ecosystems increases operational complexity. Multiple data sources require integration, harmonisation, security management, and regulatory oversight. As systems grow more intricate, internal alignment becomes more challenging.
Marketing teams may struggle to interpret model outputs derived from opaque pipelines. Decision accountability can blur: was a recommendation driven by behavioural trend, algorithmic bias, or data artefact?
Complexity introduces risk. Errors propagate more easily across interconnected systems. Minor inaccuracies in one dataset can influence downstream decisions.
Strategically, complexity consumes managerial attention. Resources directed toward maintaining expansive data infrastructure may divert focus from creative differentiation and customer value development.
A more sustainable approach to personalization strategy emphasises sufficiency rather than maximization. Instead of asking how much data can be collected, organisations might ask how much data is necessary to deliver meaningful relevance.
This reframing shifts emphasis from expansion to calibration. It prioritises clarity of purpose, contextual understanding, and proportional inference.
Mechanisms such as data minimization, transparent user controls, and periodic model resets can enhance both trust and adaptability. By limiting inferential depth to what is defensible and explainable, organisations reduce the risk of interpretive overreach.
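Data minimization can be enforced mechanically at the point of feature construction, by bounding both how far back history reaches and which fields the model may see. A sketch, where the 90-day window and field names are illustrative policy choices, not standards:

```python
from datetime import datetime, timedelta, timezone

RETENTION = timedelta(days=90)  # illustrative retention policy

def minimized_events(events: list, now: datetime) -> list:
    """Keep only recent events and only the fields the model needs,
    dropping stale history and incidental attributes."""
    keep_fields = ("type", "category", "timestamp")
    return [
        {k: e[k] for k in keep_fields}
        for e in events
        if now - e["timestamp"] <= RETENTION
    ]

now = datetime(2024, 6, 1, tzinfo=timezone.utc)
events = [
    {"type": "purchase", "category": "shoes", "device_id": "abc",
     "timestamp": now - timedelta(days=10)},
    {"type": "page_view", "category": "hats", "device_id": "abc",
     "timestamp": now - timedelta(days=200)},
]
print(minimized_events(events, now=now))
# Only the 10-day-old purchase survives, stripped of device_id.
```

Because the filter runs before modelling, inferential depth is capped by construction: the system cannot anchor on data it no longer holds.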
In this model, personalization becomes an exercise in disciplined judgment rather than data accumulation.
More data in personalization strategy does not inherently produce better outcomes. While expanded datasets can improve predictive models, they also introduce interpretive opacity, psychological discomfort, operational complexity, and trust vulnerability.
Effective personalization requires balance. It depends on high-quality signals, contextual sensitivity, and governance discipline. It recognises that human identity evolves and that prediction must coexist with autonomy.
In a digital environment increasingly attentive to regulation, ethics, and user agency, competitive advantage may derive not from maximal data extraction but from calibrated, intelligible personalization.
Precision without excess may prove more durable than scale without restraint.
Article by Dario Sipos.
Dario Sipos, Ph.D., is a Digital Marketing Strategist, Branding Expert, Keynote Public Speaker, Business Columnist, and author of the highly acclaimed books Digital Personal Branding and Digital Retail Marketing.
Readers who wish to explore the underlying research, citations, and peer-reviewed publications can find them via his Google Scholar Profile.
His verified academic identifier is available through ORCID.
