
The AI Act and marketing strategy are becoming structurally intertwined as artificial intelligence moves from experimentation to regulated deployment within the European Union. With the introduction of the AI Act, the regulatory environment governing digital systems expands beyond data protection and platform competition into the domain of algorithmic design, risk classification, and accountability.
For marketing, this represents a shift in perspective. Artificial intelligence is no longer only a tool for optimisation or personalisation. It becomes a regulated system subject to defined boundaries regarding how it can be developed, deployed, and applied to influence behaviour. The distinction between acceptable and prohibited practices is no longer left solely to organisational judgment or market tolerance. It is increasingly codified.
This article examines what the AI Act means for marketing, focusing on what is likely to become illegal, what will remain permissible, and how the boundary between the two reshapes strategic decision-making.
Previous regulatory frameworks, particularly the General Data Protection Regulation (GDPR), focused primarily on data—how it is collected, stored, and processed. The AI Act introduces a complementary layer: it regulates systems that use data to generate decisions, predictions, or behavioural influence.
This distinction is significant. Marketing practices that were previously evaluated through the lens of data consent must now also be assessed in terms of system behaviour. An AI-driven targeting model, recommendation engine, or content optimisation system is not neutral infrastructure; it becomes a subject of regulatory classification.
The AI Act operates through a risk-based framework. Systems are categorised according to their potential impact on fundamental rights and societal outcomes. For marketing, most applications fall into the categories of limited or high risk, depending on how they are deployed.
Strategically, this introduces a new requirement: understanding not only what data is used, but how algorithmic systems interpret and act upon that data.
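How such a risk classification might translate into day-to-day practice can be sketched in code. The tier names below (unacceptable, high, limited, minimal) follow the AI Act's published risk framework, but the mapping of specific marketing use-cases to tiers, and the controls attached to each tier, are illustrative assumptions, not legal guidance:

```python
# Hypothetical internal register mapping marketing AI use-cases to
# AI Act risk tiers. The tier names follow the Act's framework; the
# example mappings are illustrative assumptions, not legal advice.

RISK_REGISTER = {
    "subliminal_behavioural_nudging": "unacceptable",  # prohibited practice
    "credit_offer_targeting": "high",       # intersects a regulated domain
    "ai_generated_ad_copy": "limited",      # transparency obligations apply
    "aggregate_campaign_analytics": "minimal",
}

def required_controls(use_case: str) -> list[str]:
    """Return the oversight controls a given risk tier implies."""
    tier = RISK_REGISTER.get(use_case, "unclassified")
    controls = {
        "unacceptable": ["do not deploy"],
        "high": ["documentation", "human oversight",
                 "risk mitigation", "transparency"],
        "limited": ["disclosure to users"],
        "minimal": [],
        # Unknown use-cases are routed to review rather than deployed.
        "unclassified": ["escalate to legal review"],
    }
    return controls[tier]

print(required_controls("ai_generated_ad_copy"))  # ['disclosure to users']
```

The design point is the default branch: a use-case absent from the register is escalated, not deployed, which mirrors the Act's requirement that classification precede deployment.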
The AI Act defines a set of prohibited practices—applications considered unacceptable due to their potential to manipulate behaviour, exploit vulnerability, or undermine autonomy.
For marketing, the most relevant prohibitions concern:
Subliminal or manipulative techniques that materially distort behaviour in ways users cannot consciously perceive or resist.
Exploitation of vulnerable groups, particularly where AI systems target individuals based on age, disability, or socio-economic conditions to influence decisions disproportionately.
Social scoring mechanisms, where individuals are evaluated or ranked in ways that affect access to services or opportunities beyond the original context of data collection.
While some of these practices may appear distant from mainstream marketing, elements of behavioural manipulation have historically existed in less explicit forms. The AI Act narrows tolerance for such practices, particularly when amplified through automated systems.
The key shift is not only prohibition but definitional clarity. What was previously ambiguous becomes legally bounded.
Certain AI applications fall into the category of “high-risk systems,” particularly when they influence access to essential services, employment, credit, or other socio-economic opportunities. While most marketing applications may not be directly classified as high-risk, overlaps are possible.
For example, AI-driven profiling used in financial services marketing, insurance segmentation, or employment-related targeting may intersect with regulated domains. In such cases, additional obligations apply: transparency, documentation, human oversight, and risk mitigation procedures.
The strategic implication is that marketing cannot be isolated from product or service context. The same personalisation logic may be permissible in one domain and restricted in another.
This requires coordination between marketing, legal, and product teams. AI deployment decisions must consider classification outcomes, not only performance objectives.
Despite its restrictive elements, the AI Act does not eliminate AI-driven marketing. Many applications remain permissible, particularly those grounded in transparency, proportionality, and respect for user autonomy. Personalisation, segmentation, and predictive analytics continue to be viable strategies. The difference lies in how they are implemented and communicated.
Permissible AI is not defined by absence of influence, but by the presence of interpretability and proportionality. Users must be able to understand, at least in principle, how and why they are being targeted.
One of the central principles of the AI Act is transparency. Users should be informed when they are interacting with AI systems, particularly when those systems generate content, recommendations, or decisions that affect them.
For marketing, this introduces operational adjustments. AI-generated content, automated messaging systems, and recommendation engines may require disclosure. The form of this disclosure is still evolving, but the principle is clear: opacity is no longer defensible.
Transparency affects not only communication but design. Systems must be constructed in ways that allow explanation. Black-box optimisation becomes harder to justify when accountability mechanisms demand interpretability.
This does not eliminate complexity, but it introduces friction where opacity once provided advantage.
A central challenge in applying the AI Act to marketing lies in distinguishing influence from manipulation. Marketing, by definition, seeks to influence behaviour. The regulatory framework does not prohibit persuasion; it restricts distortion.
The boundary is contextual. It depends on factors such as whether users can perceive and resist the influence, the vulnerability of the audience, and the setting in which persuasion occurs.
AI systems complicate this boundary by scaling influence. What might be acceptable in individual interaction can become problematic when automated across millions of users with adaptive precision.
Strategically, this requires recalibration. Techniques that optimise engagement must be evaluated against their interpretive transparency. Influence must remain intelligible.
The AI Act introduces compliance not only as a legal requirement but as a design constraint. Systems must be built with risk classification, transparency, and oversight in mind from the outset.
This affects the innovation process. Instead of developing capabilities and retrofitting compliance, organisations may need to integrate regulatory considerations at the design stage. Certain ideas may be abandoned not because they are technically infeasible, but because they are legally untenable.
At the same time, compliance functions as a strategic filter. It reduces the space of possible interventions, focusing attention on approaches that are sustainable under scrutiny.
This constraint may improve strategic clarity. By removing borderline practices, organisations can concentrate on value creation that does not depend on ambiguity.
The AI Act’s risk-based approach introduces gradation into marketing strategy. Not all AI applications are treated equally. The level of oversight, documentation, and accountability varies according to potential impact.
Marketing teams must therefore develop sensitivity to risk classification. Decisions about targeting, personalisation, and automation are no longer purely tactical; they carry regulatory implications.
This shifts marketing closer to governance. Campaign design, data usage, and system selection become part of a broader institutional framework.
The role of the marketer expands from performance optimisation to risk-aware strategic design.
The AI Act does not eliminate AI from marketing. It defines the conditions under which it can be used. By establishing boundaries around manipulation, transparency, and system accountability, it transforms legality into a strategic parameter.
For organisations, the challenge is not simply to avoid prohibited practices. It is to understand how regulatory boundaries reshape competitive dynamics. Strategies that rely on opacity, asymmetry, or behavioural exploitation may become untenable. Strategies grounded in transparency, proportionality, and interpretability may gain resilience.
The distinction between what becomes illegal and what remains permissible is not static. It will evolve through enforcement, interpretation, and technological change. Marketing strategy must therefore remain adaptive.
In a regulated digital environment, legality is no longer an external constraint. It is embedded within the architecture of strategic possibility.
Article by Dario Sipos.
Dario Sipos, Ph.D., is a Digital Marketing Strategist, Branding Expert, Keynote Public Speaker, Business Columnist, Author of the highly acclaimed books Digital Personal Branding and Digital Retail Marketing.
Readers who wish to explore the underlying research, citations, and peer-reviewed publications can find them via his Google Scholar Profile.
His verified academic identifier is available through ORCID.
