Pull three reports from your CRM right now: pipeline value, win rate, and marketing-sourced revenue. Now ask your CMO, your VP of Sales, and your RevOps lead each to pull the same numbers independently. You will get three different answers to each question — and all three people will be confident that their number is correct.
This is not a technology problem. You're all using the same CRM. It's a definitions problem — and it's quietly destroying the strategic credibility of your revenue leadership team every time they sit in the same room and cite conflicting data to make competing arguments about the business.
The data definition problem is one of the most common and least-acknowledged challenges in B2B revenue operations. This article explains why it happens, which terms most commonly cause misalignment, and how to fix it with a process that will actually hold.
Why the Same CRM Produces Different Numbers
If everyone is using the same CRM, how can the numbers be different? The answer is that the CRM stores records — and humans interpret those records through filters, definitions, and report logic that vary by team, by report builder, and sometimes by the day of the week.
The most common sources of definitional divergence:
- Field definitions used inconsistently: What does "Qualified" mean in your stage field? If ten different people entered deals in your CRM, each with a slightly different understanding of when to mark a lead as qualified, your pipeline data is a composite of ten different qualification frameworks presented as if it were one.
- Different filters applied by different teams: Marketing pulls pipeline including all opportunities regardless of source. Sales pulls only opportunities they progressed. RevOps pulls only opportunities with activity in the last 30 days. All three are looking at "pipeline" — but the numbers don't match because they're actually looking at different subsets.
- Overlapping or conflicting attribution models: Marketing claims first-touch attribution. Sales claims the deal came from a rep's outbound motion. Both are technically true — but the CRM is only configured to track one source, so one team's contribution is invisible in the official report.
- Historical data entry variation: Your CRM data was entered over years, by different people, under different instructions, with different required fields at different points in time. The records are internally inconsistent in ways that make consistent reporting nearly impossible without a conscious standardization effort.
- Report logic built by different people at different times: The marketing dashboard was built by a contractor two years ago using a specific date filter. The sales pipeline report was built by a RevOps hire eight months ago using a different date filter. The numbers won't reconcile because they were never designed to.
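To make the filter divergence concrete, here is a minimal sketch of how three teams can pull three different "pipeline" numbers from identical records. The field names (source, stage, last_activity, amount) and the specific filters are illustrative assumptions, not any particular CRM's schema:

```python
from datetime import date, timedelta

# Hypothetical opportunity records, as pulled from one shared CRM.
today = date(2024, 6, 1)
opportunities = [
    {"source": "inbound",  "stage": "Discovery",   "last_activity": today - timedelta(days=5),  "amount": 40_000},
    {"source": "outbound", "stage": "Proposal",    "last_activity": today - timedelta(days=45), "amount": 60_000},
    {"source": "inbound",  "stage": "Prospecting", "last_activity": today - timedelta(days=10), "amount": 25_000},
]

# Marketing: every open opportunity, regardless of source or stage.
marketing_pipeline = sum(o["amount"] for o in opportunities)

# Sales: only deals progressed past the first stage.
sales_pipeline = sum(o["amount"] for o in opportunities
                     if o["stage"] != "Prospecting")

# RevOps: only deals with activity in the last 30 days.
revops_pipeline = sum(o["amount"] for o in opportunities
                      if (today - o["last_activity"]).days <= 30)

print(marketing_pipeline, sales_pipeline, revops_pipeline)
# → 125000 100000 65000 — three "pipeline" numbers from identical records
```

Each filter is individually defensible; the problem is that all three are labeled "pipeline" in a leadership meeting.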
The 6 Terms That Must Be Defined and Agreed in Writing
There is a specific set of revenue terms where ambiguity is most damaging. These six terms cause the most frequent, most heated, and most strategically destructive disagreements in B2B revenue teams:
- Lead: When is a record a lead? Is every new contact a lead? Only contacts who have engaged with marketing? Only contacts who meet firmographic criteria? The answer matters because it determines what gets counted in your lead volume metrics and what gets attributed to marketing.
- MQL (Marketing Qualified Lead): What score or criteria exactly? Is it a point threshold? A specific set of behaviors? Is the MQL defined by score alone, or does it also require firmographic fit? If marketing is passing MQLs that sales doesn't accept, the problem is almost always an undefined or inconsistently applied MQL definition.
- SQL (Sales Qualified Lead): What does sales accept? This is the handoff point between marketing and sales — and the definition gap here is where most of the "marketing generates junk" complaints originate. If sales's informal definition of an SQL is different from marketing's formal definition of an MQL, the gap will show up as lead quality complaints on one side and SQL rejection rates on the other.
- Opportunity: When is an opportunity real? When a lead is accepted? When a discovery call is completed? When budget is confirmed? The more rigorous your opportunity definition, the more accurate your pipeline metrics — but also the fewer opportunities you'll count, which creates its own political pressure.
- Pipeline: What deals count as pipeline? All open opportunities? Only opportunities in certain stages? Only opportunities with activity in a specific window? The pipeline number is perhaps the most cited number in revenue reviews — and it's almost never consistently defined across teams.
- Win rate: What's the denominator? Created opportunities, accepted opportunities, opportunities that reached a specific stage, or something else? Marketing and sales win rates will diverge dramatically depending on what they count in the denominator — even if the closed won numbers are identical.
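The win-rate point is easy to see with arithmetic. A sketch with invented numbers: the closed-won count is identical in all three calculations, and only the denominator changes:

```python
# Hypothetical counts for one quarter.
closed_won = 20
created_opps = 200       # every opportunity created
accepted_opps = 120      # opportunities sales accepted
reached_proposal = 50    # opportunities that reached the Proposal stage

for label, denominator in [("created", created_opps),
                           ("accepted", accepted_opps),
                           ("reached proposal", reached_proposal)]:
    print(f"win rate ({label}): {closed_won / denominator:.0%}")
# → win rate (created): 10%
# → win rate (accepted): 17%
# → win rate (reached proposal): 40%
```

Same deals, same closed-won number, and a reported win rate anywhere from 10% to 40% depending on which denominator a team picks.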
How to Run a Revenue Definitions Workshop
The fix is simple in concept but demands real discipline in execution. You need to run a structured definitions workshop with all three revenue functions in the room — marketing, sales, and RevOps — and you need to come out with written definitions that everyone has signed off on.
Here's how to run it effectively:
- Start with a pre-work exercise: Before the meeting, ask each team lead to write down, independently, their current understanding of what each term means. Don't share the answers before the session. The pre-work makes the gaps visible when you compare answers in the room.
- Reveal the answers publicly: Put each team's definition of each term on a whiteboard or shared screen. The gaps will be immediately obvious. Often, people are surprised to see how different the definitions are — even within the same team. This moment of shared recognition is important: it shifts the framing from "they're wrong" to "we haven't agreed."
- Negotiate toward a single definition for each term: This is where the facilitation matters. For each term, the goal is not the best theoretical definition — it's the one that all three teams can live with and commit to consistently applying. Sometimes that means marketing's definition wins. Sometimes sales's definition wins. Sometimes you invent a new definition that captures the intent of both.
- Document every agreed definition in writing: Immediately after the session, publish the agreed definitions in a shared document that all three teams can reference. This document becomes the source of truth when definitional disagreements resurface — and they will.
- Update CRM field descriptions to match: Every agreed definition should be reflected in the CRM field description so that reps entering data see the agreed definition at the point of entry. This closes the loop between the governance decision and the operational execution.
Building a Single Source of Truth
The workshop produces the definitions. Building a single source of truth requires ongoing governance infrastructure:
- A data dictionary: A living document (in Notion, Confluence, or your wiki of choice) that lists every revenue term with its agreed definition, the CRM field it maps to, and the last date it was reviewed. Every new hire should read this document as part of onboarding.
- CRM field standardization: Required fields enforced at stage advancement. Picklist values that enforce agreed definitions rather than free-text fields that allow interpretation. Stage names that reflect the agreed criteria, not generic labels.
- Shared report templates: A set of reports that all three teams use, built from the same logic, using the agreed definitions. When leadership asks for pipeline, everyone should be able to pull the same report and get the same number.
- Quarterly definition reviews: Markets change, product evolves, GTM strategy shifts. Definitions that were right 18 months ago may no longer reflect how you sell. A quarterly 30-minute review of your data dictionary prevents definitional drift before it becomes a crisis.
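As a sketch of what a data-dictionary entry and the quarterly-review check can look like in practice (the fields, term, and 90-day threshold are illustrative assumptions; in reality this might live in Notion or Confluence rather than code):

```python
from datetime import date

# One entry per agreed term: definition, the CRM field it maps to,
# and the last date it was reviewed.
data_dictionary = [
    {
        "term": "Pipeline",
        "definition": "Sum of open opportunities at or past Discovery "
                      "with activity in the last 30 days.",
        "crm_field": "Opportunity.Stage",
        "last_reviewed": date(2024, 1, 15),
    },
]

def stale_entries(dictionary, as_of, max_age_days=90):
    """Flag terms overdue for the quarterly definition review."""
    return [entry["term"] for entry in dictionary
            if (as_of - entry["last_reviewed"]).days > max_age_days]

print(stale_entries(data_dictionary, date(2024, 6, 1)))
# → ['Pipeline'] — last reviewed 138 days ago, overdue for review
```

The point of the structure is that every term carries its review date with it, so "has this definition drifted?" becomes a checkable question rather than a matter of memory.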
How Attribution Breaks When Definitions Break
There's a second-order consequence of the data definition problem that is especially relevant for teams using pipeline-based advertising: when definitions are inconsistent, attribution becomes meaningless.
If marketing and sales disagree on what counts as pipeline, they disagree on what the advertising is supposed to be influencing. If the pipeline number is different depending on who pulls it, the denominator for advertising ROI calculations is unstable. You cannot calculate the impact of advertising on pipeline velocity when the pipeline itself is defined differently in every report.
This is the reason RevOps alignment on definitions is not just a reporting hygiene issue — it's a strategic capability issue. Teams that have clean, agreed definitions can measure the impact of every GTM investment accurately. Teams that don't will forever be having the same argument about whether marketing is contributing to revenue, because the numbers will never add up the same way twice.
One Pipeline. One Source of Truth. One Revenue Team.
Signal connects your live CRM pipeline data to your advertising — but it only works when everyone agrees on what "pipeline" means. Book a demo to see how we help revenue teams build the infrastructure that requires clean definitions.
Book a Demo → See Pricing

Frequently Asked Questions
Why do marketing and sales always disagree on pipeline numbers?
The most common causes are: different filters applied to the same CRM data (what time period, what stages, what source), inconsistent field entry by reps who applied different criteria over time, conflicting attribution models where the same deal is counted differently by each team, and report logic built separately by different teams using different definitions. The root cause is almost always a governance failure — the terms were never formally defined and agreed in writing.
What is a revenue data dictionary and why does it matter?
A revenue data dictionary is a living document that defines every key revenue term (lead, MQL, SQL, opportunity, pipeline, win rate) with its agreed definition, the CRM field it maps to, and the last date reviewed. It matters because it makes the agreed definitions portable — available to every new hire, referenced in every reporting dispute, and reviewable when market conditions change. Without a data dictionary, definitions live in people's heads and drift over time.
How do you align marketing and sales on MQL and SQL definitions?
Run a structured definitions workshop: have both teams write down their current understanding of MQL and SQL independently, then reveal and compare. The gaps will be visible. Negotiate toward a single definition that both teams can commit to, document it in writing, update your CRM field descriptions to match, and build a shared report that uses the agreed definitions. Review quarterly to prevent definitional drift.
How does the data definition problem affect advertising attribution?
If marketing and sales disagree on what counts as pipeline, advertising attribution becomes unstable. The denominator for pipeline-influenced revenue calculations changes depending on who pulls the report. You cannot accurately measure advertising impact on pipeline velocity when the pipeline definition is inconsistent. Clean, agreed definitions are a prerequisite for meaningful attribution — not just good practice.