Portfolio MMM Strategy Cockpit
Paid Media Portfolio Planning Tool

A planning-grade MMM cockpit built around the paid media team's actual concerns: reducing outside-agency black-box dependence, translating full-funnel performance for executive stakeholders, and pressure-testing media decisions against incremental value instead of last-click comfort.

What it fixes: Shows credited versus incremental outcomes so brand search and retargeting can no longer hide inside platform ROAS.
How it helps: Turns spend, CAC, LTV, fee drag, ecosystem halo, and funnel balance into leadership-ready recommendations.
Built for operators: Supports Sportsbook, Casino, or combined planning and reflects a realistic 45/55 brand-lower-funnel operating posture.
What to do next: Use it to frame budget moves, then validate major swings with geo-lift, holdouts, and a stronger measurement stack once the data layer matures.

Portfolio Output

Designed to answer the question: "How do we know it is actually working?" with something sharper than a black-box slide.

Total weekly investment: $0 (across modeled paid channels)
Working media: $0 (after agency fee drag)
Blended incremental ROAS: 0.00x (primary decision metric)
Incremental FTDs: 0 (adjusted for overlap and adstock)
Portfolio payback: 0.0 mo (compared with target window)
Measurement range: $0 (low-high incremental revenue band)

Credited vs incremental reality

These bars make over-crediting visible. The pale bar is platform-credited ROAS; the red bar is modeled incremental ROAS.

Funnel balance

This view checks whether the spend mix is behaving like a healthy full-funnel portfolio rather than a last-click-heavy capture machine.

Stage allocation: Awareness 0% · Hybrid 0% · Performance 0% · Harvest 0%
Brand mix health: On plan
Measured against a 45% brand / 55% lower-funnel planning benchmark.
Working media retained: 0%
This is the cleanest expression of the operating model argument: how much spend actually reaches media.

Diminishing Returns View

Stylized spend-versus-iROAS curves for two anchor channels to make saturation visible instead of implied.

Non-Brand Search

Fast-payback demand capture should still flatten as incremental reach gets more expensive.


CTV / Streaming

Higher-adstock channels can absorb more spend, but saturation still shows up as slower marginal returns.

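The two curves above can be sketched with a standard transform pair used in most MMM tools: geometric adstock (carryover) followed by Hill-style saturation. This is a minimal sketch; the decay, half-saturation, and shape values below are illustrative choices, not calibrated parameters.

```python
# Illustrative spend-response transform: geometric adstock + Hill saturation.
# All parameter values are hypothetical; a real MMM estimates them from data.

def adstock(spend, decay):
    """Carry a fraction of past spend pressure into each subsequent week."""
    carried = 0.0
    out = []
    for s in spend:
        carried = s + decay * carried
        out.append(carried)
    return out

def hill(x, half_saturation, shape):
    """Diminishing-returns response curve that approaches 1.0 as spend grows."""
    return x**shape / (x**shape + half_saturation**shape)

weekly_spend = [100, 100, 100, 100]            # arbitrary units
pressure = adstock(weekly_spend, decay=0.6)     # higher decay = longer tail (e.g. CTV)
response = [hill(p, half_saturation=150, shape=1.5) for p in pressure]
```

A high-adstock channel like CTV keeps earning response after spend stops; a capture channel like non-brand search sits near the steep part of the Hill curve until incremental reach pushes it past the half-saturation point, where marginal returns flatten.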

Channel intelligence

Each row surfaces whether a channel is truly creating value, merely harvesting demand, or carrying hidden pressure on payback and confidence.

Channel | Weekly Spend | Working Media | Credited ROAS | Incremental ROAS | Payback | Confidence | Action

Recommended moves

Short, concrete actions for the paid media team and leadership conversations.

Executive narrative

Ready-to-use summary language for finance, growth leadership, or executive stakeholders.

This scenario should be treated as a directional planning layer. Large reallocations still need geo-lift or holdout validation, especially for tentpoles, loyalty-led campaigns, and any state-specific launch plan.

State-by-State Planning Inputs

Weight the portfolio by market importance, market efficiency, and compliance friction so the topline story reflects real operating differences by state.

State planning view

This turns one portfolio assumption set into a state-level readout for budget, expected value, and execution risk.

State | Spend Share | Weighted Spend | Incremental Revenue | Incremental ROAS | Risk Note

Model-Ready Data Plan

Use one canonical data design across every paid channel so MMM tools learn from comparable exposure, spend, and outcome signals instead of platform-specific reporting noise.

1. Standardize the grain

Keep raw extracts intact, but force every modeled record into one reporting grain before it reaches MMM.

Raw layer: ingest daily platform exports with source IDs and untouched source names for auditability.
Canonical layer: normalize to date x geo x brand x business unit x channel x subchannel x platform x campaign.
MMM layer: aggregate to week_start_date x geo x brand x business unit x channel x subchannel using one fixed week definition.
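The daily-to-weekly rollup in the MMM layer can be sketched in pandas. This assumes a canonical daily fact with the column names from the schema below; brand and business_unit are omitted for brevity, and the Monday week start is one possible fixed week definition.

```python
import pandas as pd

# Hypothetical daily canonical extract (brand/business_unit omitted for brevity).
daily = pd.DataFrame({
    "date": pd.to_datetime(["2024-01-01", "2024-01-02", "2024-01-08"]),
    "geo": ["NJ", "NJ", "NJ"],
    "channel": ["search", "search", "search"],
    "subchannel": ["non_brand", "non_brand", "non_brand"],
    "spend_working_media": [100.0, 50.0, 75.0],
    "impressions": [1000, 500, 750],
})

# One fixed week definition: snap every date to its Monday week start.
daily["week_start_date"] = daily["date"].dt.to_period("W-SUN").dt.start_time

# Aggregate to the MMM grain: week_start_date x geo x channel x subchannel.
weekly = (
    daily.groupby(["week_start_date", "geo", "channel", "subchannel"], as_index=False)
         .agg(spend_working_media=("spend_working_media", "sum"),
              impressions=("impressions", "sum"))
)
```

The point of forcing this grain before modeling is that a channel reported on a Sunday-start calendar and one reported on a Monday-start calendar would otherwise look like they have different timing effects.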

2. Separate taxonomy from facts

Do not let naming chaos inside ad platforms define the model. Map it once, then reuse the mapping everywhere.

Dimensions to map: channel, subchannel, publisher, objective, funnel stage, audience type, buy type, creative format, partner, and market.
Mapping key: use stable source IDs first, naming rules second, and manual overrides only when needed.
Governance: lock taxonomy definitions quarterly so historical rows do not drift when teams rename campaigns.
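The mapping-key precedence above can be sketched as a simple lookup chain. Every ID, name, and category here is hypothetical; this reads manual overrides as the "only when needed" escape hatch, taking precedence over both ID mappings and naming rules.

```python
# Hypothetical taxonomy lookup: manual overrides win (used sparingly),
# stable source IDs are the preferred path, naming rules are the fallback.

ID_MAP = {"cmp_123": {"channel": "search", "funnel_stage": "performance"}}
OVERRIDES = {"cmp_999": {"channel": "ctv", "funnel_stage": "awareness"}}

def classify(campaign_id, campaign_name):
    if campaign_id in OVERRIDES:           # manual override, rare by design
        return OVERRIDES[campaign_id]
    if campaign_id in ID_MAP:              # stable ID mapping, preferred
        return ID_MAP[campaign_id]
    name = campaign_name.lower()           # naming rules, fallback only
    if "brand" in name:
        return {"channel": "search", "funnel_stage": "harvest"}
    return {"channel": "unmapped", "funnel_stage": "unmapped"}
```

Because the lookup keys on IDs rather than names, a team renaming "Brand Exact US" to "BX-US-Q3" cannot silently reclassify historical rows, which is exactly the drift the quarterly taxonomy lock is meant to prevent.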

3. Train on clean economics

MMM accuracy improves when spend, outcomes, and controls are separated cleanly and reconciled to the business ledger.

Media fact: store working media spend separately from agency fees, platform credits, and other non-working costs.
Outcome fact: model against one business KPI by week and geo, such as new customers, net revenue, or gross profit.
Control fact: add promotions, seasonality, tentpoles, pricing shifts, product launches, and market availability outside the media table.
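The media-fact split above can be sketched with a single helper. The 12% fee rate is an illustrative assumption standing in for whatever the actual agency terms are; the point is that fees are stored alongside, never blended into, working media.

```python
# Illustrative split of booked spend into working media vs. fee drag.
# fee_rate is a hypothetical agency fee assumption, not a real contract term.

def split_spend(spend_total, fee_rate=0.12):
    """Return the three spend fields the media fact table stores separately."""
    fees = round(spend_total * fee_rate, 2)
    return {"spend_working_media": round(spend_total - fees, 2),
            "spend_fees": fees,
            "spend_total": spend_total}
```

Training the model on spend_working_media while reporting spend_total to finance is what keeps "fee drag" visible as an operating-model cost instead of it silently deflating every channel's modeled ROI.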

Canonical MMM Schema

These are the fields worth enforcing across all channels before any teaching set is handed to an MMM tool.

Column Group | Required Fields | Rule | Why It Matters
Keys | week_start_date, geo, brand, business_unit, channel, subchannel | Use the same week start, market hierarchy, and business rollups for every source. | Prevents calendar and market mismatches from looking like media effects.
Delivery | impressions, clicks, reach, video_views | Keep units consistent and leave null when a metric is not truly available instead of backfilling zeros. | Lets the model compare exposure strength without inventing false precision.
Spend | spend_working_media, spend_fees, spend_total, currency | Reconcile weekly totals to finance and convert currency before aggregation. | Separates true media pressure from operating drag and avoids distorted ROI.
Quality | source_system, load_timestamp, is_estimated, quality_status | Flag imputed, late, or partial data explicitly. | Stops bad rows from silently teaching the model the wrong signal.
Taxonomy | publisher, objective, funnel_stage, audience_type, creative_format | Populate from a shared mapping table, not directly from raw campaign naming. | Creates comparability across search, social, video, affiliate, audio, display, and offline media.
Diagnostics | platform_conversions, platform_revenue, landing_sessions | Keep attributed metrics for QA and storytelling, but do not use them as the primary MMM target. | Prevents the model from inheriting platform attribution bias as truth.
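Two of the rules in this schema can be enforced with a row validator: the Keys fields must always be present, and a row flagged is_estimated must carry the matching quality_status. This is a minimal sketch; the field names follow the schema, but the validation shape is an assumption.

```python
# Sketch of row-level schema enforcement for the canonical MMM table.

REQUIRED_KEYS = ["week_start_date", "geo", "brand", "business_unit",
                 "channel", "subchannel"]

def validate_row(row):
    """Return a list of human-readable problems; an empty list means the row passes."""
    problems = []
    for key in REQUIRED_KEYS:
        if row.get(key) in (None, ""):
            problems.append(f"missing key field: {key}")
    # Quality flags must agree: an estimated row cannot claim clean status.
    if row.get("is_estimated") and row.get("quality_status") != "estimated":
        problems.append("is_estimated row must carry quality_status='estimated'")
    return problems
```

Note what the validator deliberately does not do: it never rejects a null delivery metric, because per the Delivery rule a null is the honest state for an unavailable metric and only a fabricated zero is a defect.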

Data Hygiene Rules

These rules matter more than adding more columns. If these break, the model usually gets noisier rather than smarter.

One KPI: choose one modeling outcome and keep its definition fixed across the training period.
One calendar: enforce a single business week, one time zone rule, and one late-arriving data policy.
No blended metrics: never mix gross and net spend, web and app conversions, or new and existing customer outcomes in the same target series.
Explicit missingness: use flags for partial data, tracking outages, or unavailable metrics instead of hiding them inside zeros.
Finance check: set a reconciliation threshold so weekly modeled spend cannot drift materially from booked spend.
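The finance check above can be sketched as a weekly reconciliation pass. The 2% tolerance is an illustrative threshold, not a recommended value; the week keys are plain date strings for simplicity.

```python
# Hygiene rule sketch: weekly modeled spend must reconcile to booked spend
# within a tolerance. The 2% default here is an illustrative choice.

def reconcile(modeled_weekly_spend, booked_weekly_spend, tolerance=0.02):
    """Return the weeks whose modeled spend drifts beyond tolerance from finance."""
    breaches = []
    for week, modeled in modeled_weekly_spend.items():
        booked = booked_weekly_spend.get(week, 0.0)
        # A week with no booked spend but modeled spend is always a breach.
        if booked == 0.0 or abs(modeled - booked) / booked > tolerance:
            breaches.append(week)
    return breaches
```

Running this before every training run means a late-arriving invoice or a double-loaded export surfaces as a named breach week instead of as a mysterious coefficient shift.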

Implementation Sequence

Build the teaching set in a sequence that reduces taxonomy churn before modeling work starts.

Phase 1: land raw exports for every channel with immutable IDs and source timestamps.
Phase 2: publish a shared taxonomy mapping table and backfill it across history.
Phase 3: create a daily standardized fact table and a weekly MMM aggregate table.
Phase 4: join outcome and control tables, then run QA checks for completeness, outliers, and finance reconciliation.
Phase 5: freeze the training extract for each model run so analysts can reproduce results exactly.
The cleanest teaching setup is usually four tables: raw_media_delivery_daily, dim_media_taxonomy, fact_media_performance_daily, and fact_media_mmm_weekly, with outcome and control tables joined at the weekly market level right before modeling.
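Phase 5 (freezing the training extract) can be sketched with a deterministic fingerprint. The manifest shape below is an assumption; the idea is simply that two analysts handed the same rows produce the same hash, so any model run can name the exact extract it trained on.

```python
import hashlib
import json

# Sketch of Phase 5: fingerprint a frozen training extract so each model
# run is exactly reproducible. The manifest fields are illustrative.

def freeze_extract(rows, run_id):
    """Serialize the weekly MMM rows deterministically and fingerprint them."""
    payload = json.dumps(rows, sort_keys=True, separators=(",", ":"))
    digest = hashlib.sha256(payload.encode("utf-8")).hexdigest()
    return {"run_id": run_id, "row_count": len(rows), "sha256": digest}
```

Because serialization sorts keys, incidental column ordering differences between extracts do not change the fingerprint, while any change to an actual value does.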

Channel Inputs

Spend, CAC, and LTV are editable. Incrementality is exposed on-card so assumptions are transparent instead of buried.