
The AI feature problem — why bolting intelligence onto reporting does not produce intelligence

Every analytics vendor now claims AI. Most of what is being sold is pattern recognition applied to existing measurement outputs — a faster way to describe what happened, not a structurally different way to understand why. The distinction matters more than the marketing suggests.

Illustration: an abstract split visualization. On the left, a glowing 'AI' label sits on top of an unchanged reporting surface — same bars, same funnel, now with a chatbot overlay. On the right, a different architecture altogether — signals flowing into an inference layer, then out to structured explanation. Dark background, gold for the architectural layer, cream for the reporting surface — Veinera's visual system.

The difference between AI as a feature — added to an existing reporting surface — and AI as architecture — built around a different starting question.


Spend ten minutes reviewing the marketing pages of analytics, BI, and campaign performance vendors in 2026 and one word appears on almost every one of them: AI.

AI-powered insights. AI-driven recommendations. AI analytics. Intelligent dashboards. Agentic reporting. The vocabulary is everywhere and the underlying products vary enormously — from genuine architectural changes in how analysis works to, in many cases, a natural language summary layer placed over a dashboard that has not changed at all.

This is not a new problem. Markets always converge on the vocabulary of whatever is generating the most investment attention. But when the word "AI" gets applied to both genuinely different analytical approaches and to cosmetic feature additions on existing reporting tools, the buyer has to do work that the market language is no longer doing for them.

This blog is an attempt to do that work clearly.


The naming problem has a name

Gartner coined the term "agent washing" in 2025 to describe a specific practice: the rebranding of existing products — chatbots, robotic process automation, rule-based alerting systems — as "AI agents" without any substantive change to the underlying capability. Their analysis estimated that of the thousands of vendors claiming agentic AI capabilities, only approximately 130 are doing work that genuinely qualifies as agentic under any meaningful definition.

The same dynamic exists one layer down, in the marketing analytics and campaign performance space, though it has not yet received an equally memorable label.

It works like this. A vendor builds a reporting platform that measures campaign outputs — impressions, clicks, conversions, ROAS. The data is aggregated, the dashboards are built, and the reports ship on cadence. Then a large language model is connected to the output of that system, enabling users to ask questions in natural language and receive answers. Or an anomaly detection algorithm is added to flag metric deviations from historical baselines. Or a summary generator is connected to produce weekly executive digests.

Each of these additions gets described as "AI." None of them changes what the underlying system is for.

The system was built to measure outputs. It still measures outputs. The AI layer describes those outputs in a more conversational format, or flags when they move outside expected ranges, or summarizes them automatically. These are useful features. They are not behavioral intelligence. And they do not produce explanation.


What AI on top of reporting actually produces

The key question to ask of any AI-enabled analytics product is not whether it uses AI. It is what the AI is applied to, and what question it is designed to answer.

If the AI is applied to measurement outputs — campaign metrics, funnel rates, platform-reported ROAS — it can do several things well. It can identify patterns in historical data faster than manual analysis. It can flag statistically significant deviations. It can generate natural language summaries of what metrics moved and in which direction. It can answer direct questions about the data it has access to.
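To make the "AI as a feature" pattern concrete, here is a minimal sketch, with invented numbers, of the kind of deviation flagging described above: a z-score test against a historical baseline. It tells you that a metric moved; it says nothing about why.

```python
# Hypothetical illustration: flagging a metric that deviates from its
# historical baseline. This is pattern recognition on measurement outputs,
# not causal explanation. All figures are made up.
from statistics import mean, stdev

def flag_anomalies(history, latest, threshold=3.0):
    """Return True if the latest value deviates from the historical
    baseline by more than `threshold` standard deviations."""
    mu = mean(history)
    sigma = stdev(history)
    if sigma == 0:
        return latest != mu
    return abs(latest - mu) / sigma > threshold

daily_ctr = [0.021, 0.019, 0.022, 0.020, 0.021, 0.018, 0.020]
print(flag_anomalies(daily_ctr, 0.009))  # True: a sharp CTR drop gets flagged
```

The flag is useful, but notice what it consumes and emits: platform metrics in, a description of a deviation out. Nothing in this loop reaches toward a cause.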

What it cannot do, regardless of how capable the underlying model is, is answer questions that the data it is applied to was never designed to answer. If the data is click-based attribution, the AI can make click-based attribution more accessible and faster to read. It cannot make click-based attribution capable of measuring offline sell-through. If the data is aggregated platform reporting, the AI can surface patterns in that aggregation. It cannot construct a causal inference from that aggregation without the analytical methods and data connections required for causal inference.

Clevertouch's State of Martech 2025 research puts the practical consequence directly: point conversational AI at a structurally broken or fragmented data model, and the result is "confident, articulate, completely wrong answers." The AI amplifies what is in the data. If what is in the data is structurally incomplete — because it was organized for reporting needs rather than analytical ones — the AI amplifies the incompleteness.

Gartner's research on AI-ready data found that 57% of organizations estimate their data is not AI-ready for the specific use cases they are pursuing. The data exists. The AI capability exists. The structural connection between the question the organization wants to answer and the data that could answer it has not been built.


The investment reality behind the marketing

The gap between what AI is claimed to deliver and what it actually produces in enterprise settings is documented at a scale that is striking when set against the marketing language of the same period.

Gartner's research published in July 2024 predicted that at least 30% of generative AI projects would be abandoned after proof of concept by the end of 2025, citing poor data quality, inadequate risk controls, escalating costs, and unclear business value. By June 2025, they had updated the outlook for agentic AI specifically: over 40% of agentic AI projects will be canceled by end of 2027 due to the same set of causes.

MIT Project NANDA's research, published in July 2025 and covering over 300 AI initiatives through practitioner interviews and structured surveys, found that 95% of organizations deploying generative AI saw zero measurable P&L return. Not low return. Zero. Their finding: the failure is almost never the model. It is data readiness, workflow integration, and the absence of a clearly defined outcome before the build starts.

Gartner's survey of 782 infrastructure and operations leaders, fielded in late 2025 and published in April 2026, found that only 28% of AI use cases fully succeeded and met ROI expectations, while 20% failed outright. Among those that failed, 57% were attributed to expecting too much, too fast from AI applied to contexts where the foundational conditions for success had not been established.

Despite an average organizational spend of $1.9 million on GenAI initiatives in 2024, fewer than 30% of AI leaders report that their CEOs are satisfied with the return on that investment, according to Gartner.

The pattern across these data points is consistent. AI investment is substantial. Marketing around AI capability is pervasive. Actual value realization is concentrated in a small minority of deployments — and the common factor among those that succeed is not the sophistication of the AI model. It is whether the system was built around a clearly defined business problem with the data infrastructure to match.


The distinction that actually matters

The line between AI as a feature and AI as architecture is not a line between more and less AI. It is a line between two different starting questions.

AI as a feature starts with an existing system — a measurement platform, a reporting dashboard, a data warehouse — and adds AI capabilities to make that system faster, more conversational, or better at surfacing patterns. The fundamental structure of what the system measures and what question it is designed to answer remains unchanged.

AI as architecture starts with the business question — the outcome that needs to be understood — and designs the data pipeline, the inference method, and the output governance around producing a structured answer to that specific question. The AI is not a layer on top of an existing system. It is the organizing principle of a system built for a different purpose.

For campaign performance specifically, the difference is this. An AI feature on a reporting platform answers: "What did my campaign metrics look like, and what patterns can I identify in them?" Faster. More conversationally. With anomaly flags.

An AI-architected behavioral intelligence system answers: "What caused the commercial outcome this campaign produced — across content, distribution, conversion, and execution — and what should change to produce a different one?" This requires causal inference methods, not pattern recognition on measurement outputs. It requires offline commercial data, not just platform attribution. It requires a system designed to examine the relationship between consumer behavior and commercial outcomes, not to describe the metrics that proxied for those outcomes.

These are different analytical tasks. They require different data. They produce different outputs. And they deliver different organizational value.

The former makes existing reporting more accessible. The latter creates a new category of decision support.


Why "AI-powered" is not Veinera's lead

This distinction is why Veinera does not use "AI-powered" as its primary descriptor, despite AI being core to how the platform works.

The AI in Veinera's architecture operates at three levels: the setup layer that classifies campaign data and maps signal environments automatically; the inference layer that applies causal methods — geographic difference-in-differences, Bayesian structural time series, behavioral attribution across content tiers — to connect campaign signals to commercial outcomes; and the output layer that translates those causal findings into structured direction for the teams making decisions about budget, creative, and channel allocation.

These are architectural roles. The AI is not describing what happened in the metrics. It is building and interrogating the causal model that explains why the commercial outcome emerged, and what the model implies should happen next.
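The difference-in-differences idea behind the inference layer can be sketched in a few lines. This is a simplified illustration with invented numbers, not Veinera's implementation: compare the change in an offline commercial outcome in regions exposed to the campaign against the change in comparable unexposed regions.

```python
# Hypothetical illustration of geographic difference-in-differences:
# the campaign's estimated effect is the treated regions' change minus
# the control regions' change over the same period. All figures invented.

def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Estimate the causal effect as (treated change) - (control change)."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Average weekly offline sell-through per store (units), before and after launch.
effect = diff_in_diff(treated_pre=1200, treated_post=1380,
                      control_pre=1150, control_post=1190)
print(effect)  # 140 units/week attributable to the campaign, under the
               # parallel-trends assumption
```

The point of the sketch is the shape of the question, not the arithmetic: the input is real-world outcome data split by exposure, and the output is an effect estimate, not a description of a metric. That is what separates an inference layer from a summary layer.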

Calling this "AI-powered" is technically accurate and strategically misleading — because in the current market, "AI-powered" has become the term for the features described earlier: conversational summaries, anomaly flags, natural language queries over existing dashboards. Veinera is not that. It is a different starting question, a different analytical method, and a different category of output.

The category is behavioral intelligence. The AI is how behavioral intelligence is produced at the speed and scale that operating teams require. Those are not the same claim, and conflating them would put Veinera in the wrong competitive set.


A practical test for buyers

For any enterprise organization evaluating AI-enabled analytics or campaign intelligence tools, the questions that matter are not about the technology. They are about the starting question and the analytical design.

What is the system designed to measure? If the answer is campaign metrics, platform-reported ROAS, or funnel rates, the AI is most likely applied to those metrics. The question is whether those metrics, no matter how elegantly surfaced, are the right outcome variable for the decisions you actually need to make.

What data is the AI analyzing? If the data is platform-reported attribution, click-based conversion tracking, or aggregated BI outputs, the AI is working within the limits of those data systems. The intelligence produced is constrained by the completeness and structure of the underlying data. Gartner's finding that 57% of organizations judge their data not AI-ready for the use cases they are pursuing reflects how common this constraint is.

What question does the output answer? A system that tells you what happened faster is a reporting improvement. A system that tells you why the commercial outcome emerged, with a methodology that connects campaign signals to real-world purchase behavior, is a different category of product. The difference is not in the AI label. It is in the analytical architecture underneath it.

Can the system measure what matters most to your business? For brands whose commercial outcomes are primarily offline, the most important question is whether the intelligence system can reach those outcomes. A system built to measure e-commerce conversion rates, regardless of how much AI is applied to it, will not tell you what is happening at Indomaret, at Guardian, or in the distributor pipeline. The constraint is not the AI. It is the question the system was designed to answer.


What this means for the category

The current moment in AI-enabled analytics is characterized by a specific tension: massive investment in AI features applied to unchanged analytical frameworks, producing well-documented disappointment at the enterprise level, alongside genuine architectural progress in a smaller set of systems designed for specific outcomes.

The disappointment is predictable. Gartner's 40% cancellation forecast for agentic AI projects, MIT's 95% zero-ROI finding, and the pattern of AI investment failing to produce CEO-level satisfaction despite $1.9 million average spend — these are the outcome of a market that applied AI to existing systems before establishing whether those systems were asking the right questions.

The progress will come from the opposite direction: systems built around clearly defined business questions, with the data infrastructure and analytical methodology to match, that use AI as architecture rather than as a feature. The category shift is from AI as a way to improve the reporting surface to AI as a way to build intelligence that the reporting surface was never designed for.

That is the direction behavioral intelligence is moving. And it is the reason the distinction between AI as a feature and AI as architecture is not an academic one. For the organizations that get it right, it will determine what they actually know about how their campaigns produce commercial outcomes — not what their dashboards tell them about the proxies.


Sources and references

  • Gartner. Gartner Predicts Over 40% of Agentic AI Projects Will Be Canceled by End of 2027. Press release, June 25, 2025. Including Gartner estimate that approximately 130 of thousands of agentic AI vendors are genuine.
  • Gartner. Gartner Predicts 30% of Generative AI Projects Will Be Abandoned After Proof of Concept By End of 2025. Press release, July 29, 2024.
  • Gartner. Gartner Says AI Projects in I&O Stall Ahead of Meaningful ROI Returns. Survey of 782 I&O leaders, November-December 2025. Press release, April 2026.
  • Gartner. Hype Cycle for Artificial Intelligence, 2025. Finding: 57% of organizations estimate their data is not AI-ready. Average GenAI initiative spend of $1.9 million in 2024; less than 30% of AI leaders report CEO satisfaction with return.
  • MIT Project NANDA. The GenAI Divide: State of AI in Business 2025. July 2025. 95% of organizations deploying generative AI saw zero measurable P&L return. Coverage of 300+ AI initiatives.
  • Clevertouch Consulting. The Future of Marketing Intelligence: From Dashboards to Decisions. Including: "Point conversational analytics at a messy data model and you'll get confident, articulate, completely wrong answers." State of Martech 2025 findings: 96% of marketers satisfied with martech, only 25% have well-integrated systems. March 2026.
  • LayerFive. AI Data Analytics for Marketing. Including the distinction: "A system that can only generate a natural-language summary of a dashboard you could have read yourself is AI-branded, not AI-powered." March 2026.
  • IAB. State of Data 2024. 73% of companies expect their ability to attribute campaign performance to decrease as privacy regulations tighten.

Want to understand how Veinera's analytical architecture differs from AI-feature dashboards? Book a 30-minute walkthrough with a Veinera specialist — against your actual campaign environment, no commitment.

Book a Demo · Back to Blog


Related reading

  • Explanation over reporting — why analytics is due for a rewrite · Mar 28, 2026
  • Behavioral intelligence as a new decision layer · Apr 12, 2026
  • The offline data desert — why the most valuable behavioral signal is the hardest to reach · Apr 19, 2026