Stites & Associates, LLC
 
Evidence-Based Decision-Making
 


EBDM & Marketing

Madison Avenue marketeers, entertainment moguls and almost anyone chasing consumer fetishes frequently use Evidence-Based Decision-Making to tune their products and messages. Yet objective evidence is rarely part of B2B marketing. Hardly any manager, executive or salesperson wants to admit that their “knowledge” of their market is barely more than a “gut feel.” Even those who have been selling their products or services for many years rarely have objective evidence about the size of their market or the buying habits of their customers and prospects.

This is especially odd given the fickleness of consumer behavior compared to the relative stability of B2B markets. Gathering the right information to predict the buying habits of a B2B population yields a great return on marketing data investments. Sadly, few seem to embrace this.

Furthermore, one would think that, with the ubiquitous use of computers and the improved ease of data collection, evidence-based marketing decisions would be the norm throughout all markets. Hardly anything could be further from the truth. True, we now have lots of data about the number of clicks on our web pages, but we probably have less information about our markets than ever before. We have tons of disconnected, confusing and contradictory data points, but hardly any information.

One of the main reasons for such a paucity of information is the almost universal absence of effective data evaluation and experimental design. We simply do not look objectively at what we have learned and use that information to our advantage. We are especially resistant to considering the negation of our first assumptions and to asking how additional learning can improve our chances of success.

The reader is invited to think through the Monty Hall “Paradox” for a personal learning experience in this area.
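
For readers who prefer to test that intuition directly, here is a minimal Monte Carlo sketch in Python (an illustration added alongside this article, not the author’s own material). It plays the game many times under both strategies; switching wins about two-thirds of the time.

    import random

    def play(switch):
        """Play one Monty Hall round; return True if the player wins the car."""
        doors = [0, 1, 2]
        car = random.choice(doors)
        pick = random.choice(doors)
        # The host opens a door that hides a goat and is not the player's pick.
        opened = random.choice([d for d in doors if d != pick and d != car])
        if switch:
            pick = next(d for d in doors if d != pick and d != opened)
        return pick == car

    trials = 100_000
    stay = sum(play(False) for _ in range(trials)) / trials
    swap = sum(play(True) for _ in range(trials)) / trials
    print(f"win rate if we stay:   {stay:.3f}")   # about 0.333
    print(f"win rate if we switch: {swap:.3f}")   # about 0.667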

Effective Data Evaluation:

We use historical data to develop useful hypotheses. This is the first step in learning. We must gather data points in order to intuit relationships that make sense to test. Too often, we confuse data collection with learning. It is not learning. It is only when we construct clear hypotheses about relationships between data elements and test them objectively that we can begin to learn.

Until we test our hypotheses against additional (preferably controlled) data elements, they are no better than prejudices.

We learn by looking at historical data (experience), formulating hypotheses (assumptions about relationships between data elements) and then collecting additional data to test the reliability of those presumed relationships. We then apply statistical analysis to judge how well our new data support our hypotheses. In the area of marketing, we generally call this the Marketing Research Process. It is a specific application of the whole notion of Experimental Design.

Crucial steps in evaluating historical data include the following (a short code sketch of these checks appears after the list):

  1. Testing the data for accuracy – Are the data “true”?
  2. Testing the data for relevance – Are the data “important”?
  3. Testing the data for bias – Are the data “representative”?
  4. Testing the data for stability – Will the data still be “valid” when we act?
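
As a concrete illustration, the sketch below runs these four checks on a pandas DataFrame of historical sales records. The file name and column names are invented for the example; treat it as a pattern, not a prescription.

    import pandas as pd

    # Hypothetical historical dataset: one row per closed deal.
    df = pd.read_csv("sales_history.csv")  # assumed file; columns are illustrative

    # 1. Accuracy: flag impossible values and duplicated records.
    bad_values = (df["deal_value"] <= 0).sum()
    duplicates = df.duplicated(subset=["customer_id", "close_date"]).sum()
    print(f"non-positive values: {bad_values}, duplicate rows: {duplicates}")

    # 2. Relevance: restrict to the product line actually under study.
    df = df[df["product_line"] == "turbine_blades"]

    # 3. Bias: compare the sample mix against the known market mix.
    print(df["region"].value_counts(normalize=True))  # eyeball vs. industry shares

    # 4. Stability: check whether a key measure drifts over time.
    month = pd.to_datetime(df["close_date"]).dt.to_period("M")
    print(df.groupby(month)["deal_value"].mean().tail(12))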

Statistical analysis of the historical dataset can be helpful, but historical data frequently contain hidden biases that resist statistical analysis. This is especially true for sampling issues. Often the work requires investigations and judgments that can only qualify the data, improve the validity of hypotheses and set the stage for additional, unbiased sampling or experimentation.

Turning your data analysis over to neophytes is like asking the kids to paint the fence. Yes, it will be a learning experience, so have plenty of paint.

There’s a great story told about the statistician Abraham Wald. Dr. Wald was asked to help assess the damage being inflicted on Allied bombers in WWII. The military had been studying the flak damage on returning bombers and reinforcing the areas with the greatest damage, yet the survivability of the bombers did not seem to be improving. Dr. Wald looked at the problem and noted that the samples they needed were the bombers that had been shot down – not those that had returned safely. They began reinforcing the areas that had not been hit to see if survivability improved. It took some time and careful analysis, but this method slowly improved survivability.

Effective Data Modeling:

Once we have effectively evaluated historical data and obtained data that are unbiased and relevant, we are ready to compare “factors” (independent variables) and “results” (dependent variables) to see if we detect correlations. We can use statistical analysis to see whether these correlations exceed what pure chance would produce. When there seems to be a statistically significant and persistent correlation, we may reasonably hypothesize that there is some form of causality between the factors and the results.
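
As a minimal sketch of that kind of check (with synthetic, invented numbers), the snippet below computes a Pearson correlation and its p-value; a small p-value means the observed correlation would be unlikely under pure chance.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(42)

    # Hypothetical evaluated data: marketing touches (factor) vs. orders (result).
    touches = rng.poisson(lam=5, size=200)               # independent variable
    orders = 2.0 * touches + rng.normal(0, 8, size=200)  # dependent variable, noisy

    r, p_value = stats.pearsonr(touches, orders)
    print(f"correlation r = {r:.2f}, p-value = {p_value:.4g}")
    # A small p-value says the correlation is unlikely to be chance alone;
    # it does not, by itself, establish that touches cause orders.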

At this point, we probably have insufficient confidence to claim that we have “proven” causality – especially if our data were collected under imperfectly controlled conditions. We might want to “test” these correlations by setting up an experiment (test market) where we manipulate the factors and see if results really do follow our hypothesized causal relationships.
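A common way to judge such a test market is a simple contingency-table test. The sketch below (with invented counts) compares buying rates in a region where we manipulated the factor against a control region:

    from scipy import stats

    # Hypothetical test-market results: region A got the new message, region B did not.
    #            bought  did not buy
    contacts = [[  48,      452],   # region A (factor manipulated)
                [  30,      470]]   # region B (control)

    chi2, p_value, dof, expected = stats.chi2_contingency(contacts)
    print(f"chi-square = {chi2:.2f}, p-value = {p_value:.4f}")
    # A small p-value supports (but does not prove) the hypothesized causal effect.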

If the consequences of being wrong are not too high, we might skip this testing step and provisionally implement a change, hoping we are right. We would want to be very careful and collect additional data to see whether our ideas were proving to be right. We would do as Dr. Wald did in WWII and adjust our actions based on feedback from “the field.”

Regardless of the method we use (initial testing or field testing), we are creating a “data model.” We are collecting evaluated data (evidence) and testing how those data elements are related to results (relationships). If the data model is a useful (though imperfect) representation of reality, we may use it to predict (hypothesize) new outcomes worthy of future consideration. We would be constantly testing, evaluating and improving our model. We are “learning” as an institution. If we follow this method, we can begin talking about “Business Intelligence” in a useful, meaningful way.
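
To make “data model” concrete, here is a minimal sketch (synthetic data, invented factor names) that fits a predictive model and scores it on data it never saw – the statistical analogue of feedback from “the field”:

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import roc_auc_score

    rng = np.random.default_rng(0)

    # Hypothetical evidence: two standardized factors per prospect, plus whether they bought.
    X = rng.normal(size=(1000, 2))  # e.g., firm size, prior contact count
    y = (X @ np.array([1.0, 0.5]) + rng.normal(0, 1, 1000) > 0).astype(int)

    # Hold out data the model never saw, so the score mimics field feedback.
    X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)
    model = LogisticRegression().fit(X_train, y_train)

    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"held-out AUC = {auc:.2f}")  # re-check as new evidence arrives; retrain when it drifts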

In the area of marketing this approach is commonly known as the Marketing Research Process. We can summarize it simply as follows (see Burns et al., Marketing Research, 8th ed., for a more detailed 11-point process):

 

[Figure: Marketing Research Process diagram]

 

We use this “Business Intelligence” approach to test our guesses and hone our understanding of the market and the buying habits of decision makers. We continuously improve our ability to predict and affect outcomes.

In B2B markets, the decision processes can be quite complex, but the rewards for proper understanding are tremendous.

Hence, whether you are selling toys, yoga classes or turbine blades, collecting and analyzing data on customers and prospects is a key to successful marketing.
