Targeted Marketing – A common sense approach to using Big Data

Indira Rao

Senior Marketing Professional, Telecom Industry, USA

---

Over the last several years, we have seen impressive advances not only in data science, engineering, and analytics, but also in the availability of tools and applications that incorporate a variety of statistical and econometric models.

Increasingly, businesses are leveraging such sophisticated machine learning models, using hundreds or even thousands of variables, to predict customer buying behavior based on historical trends and patterns. Experts are micro-segmenting customers into like groups, allowing marketers to develop products and offers tailored to their needs.

As with any technological advancement, this exciting new world of big data frees up much of the time that was traditionally spent building and analyzing customer profiles. Instead, it allows marketers to focus on what they love most – developing a palette of offers for customers and playing with scenarios in near real time using the models, before testing them in the real world.

The possibilities seem endless, with little downside! But to ensure a high probability of success, it pays to be diligent and deliberate in planning and execution.

I am not a data engineer or scientist, so what I offer here is based on my experience as a marketing professional using machine learning and advanced analytics.


1. Data integrity

The data that is fed into modeling tools needs to be relevant, of high quality, trustworthy, and thoroughly vetted. You can use legally sourced data about your customers and supplement it with data purchased from brokers/aggregators. For example, in the US, some of the most prominent vendors of consumer data are Nielsen, Acxiom, Experian, and others – each specializing in certain aspects of a consumer profile.

As to what type of data is relevant, cast a wide net at first – demographics, household income, education level, location, type of dwelling, consumption habits, brand and frequency of purchase, etc. Generally, the larger and richer the data set, the greater the predictive power of the model – provided the inputs have been vetted as above.
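To make that vetting step concrete, here is a minimal sketch in Python, assuming pandas and two hypothetical input files; the file names, the join key, and the 40% threshold are illustrative placeholders, not a prescription:

```python
import pandas as pd

# Hypothetical inputs: first-party customer records plus purchased
# consumer attributes from a broker, joined on a shared customer ID.
customers = pd.read_csv("first_party_customers.csv")
broker = pd.read_csv("broker_attributes.csv")
df = customers.merge(broker, on="customer_id", how="left")

# Basic integrity checks before any modeling.
print("Rows:", len(df))
print("Duplicate customer IDs:", df["customer_id"].duplicated().sum())
print("Share of missing values per column:")
print(df.isna().mean().sort_values(ascending=False))

# Flag columns too sparse to be trustworthy (the 40% cutoff is a
# judgment call, not a rule).
sparse_cols = df.columns[df.isna().mean() > 0.40]
print("Columns with >40% missing (review or drop):", list(sparse_cols))
```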

2. Hypothesize, but steer clear of biases

We often pride ourselves on being experts in understanding our business and our customers, as we should! But this can also lead to preconceived expectations of what the data models will show. As long as the inputs into the model are sound, let the model do its job of finding the most correlated features. After multiple iterations, the final output may yield a few or many predictors of the likelihood to purchase a product, depending on the nature of the product or business.
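As a minimal sketch of this "let the model do its job" step, the snippet below fits a gradient boosting classifier with scikit-learn and lets it surface the most predictive features; the modeling table, its file name, and the "purchased" label are hypothetical placeholders:

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

# Hypothetical modeling table: numeric/encoded features plus a binary
# "purchased" label indicating whether the customer bought the product.
df = pd.read_csv("modeling_table.csv")
X = df.drop(columns=["purchased"])
y = df["purchased"]

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=42, stratify=y
)

model = GradientBoostingClassifier(random_state=42)
model.fit(X_train, y_train)

# Let the model rank the features rather than pre-selecting the ones
# we expect to matter.
importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
print("Holdout accuracy:", model.score(X_test, y_test))
```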

3. Marrying science with soft data

Once the analysis is completed, you may find that most or all the results are just confirming your expectations. But there may also be a few surprises – you may want to check these out by, perhaps, surveying your sales teams or even some friendly customers, to validate what the data is showing. In the absence of clear and substantive data that disproves the results of the analytics, avoid modifications to the model.

4. Customer segmentation

One of the key outputs of machine learning models is a predictive decision tree based on statistical analysis. This technique results in a logical micro-segmentation of a customer base and provides focus areas where specific actions can be taken.

Imagine a company looking to sell smart refrigerators in the US. Using machine learning techniques, they can identify specific types of households that are most likely to be interested in purchasing these products.

The segmentation model could look like this –
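A minimal sketch, assuming a shallow scikit-learn decision tree and a hypothetical household-level table (the file name, columns, and label below are placeholders, not a definitive model):

```python
import pandas as pd
from sklearn.tree import DecisionTreeClassifier, export_text

# Hypothetical household-level table with a binary label indicating
# interest in a smart refrigerator (e.g., from a past pilot or survey).
# All features are assumed to be numeric or already encoded.
df = pd.read_csv("households.csv")
features = ["household_income", "owns_home", "num_occupants",
            "smart_device_count", "appliance_age_years"]
X = df[features]
y = df["interested_in_smart_fridge"]

# A shallow tree keeps the number of segments small and interpretable.
tree = DecisionTreeClassifier(max_depth=3, min_samples_leaf=500, random_state=0)
tree.fit(X, y)

# Each leaf of the printed tree is a micro-segment with its own
# predicted propensity to purchase.
print(export_text(tree, feature_names=features))
```

Each path from the root to a leaf reads as a plain-language segment definition (for example, high-income homeowners with several smart devices), which is what makes this technique so actionable for marketers.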

5. Campaign development

In the example above, having identified the segments with a high propensity to purchase a smart refrigerator, the marketer can invest specifically in targeted advertising and offers, and can even utilize a distribution channel that is traditionally considered more expensive but may now yield higher returns. Instead of a broad-based campaign aimed at, say, 200 million households, the investment could be used more effectively by targeting the most lucrative fraction – say, 40 million.
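To make that arithmetic concrete, here is a back-of-the-envelope sketch; every figure in it is illustrative, not drawn from real campaign data:

```python
# Illustrative back-of-the-envelope comparison (all figures hypothetical).
cost_per_household = 0.50   # dollars to reach one household
margin_per_sale = 300.0     # profit per refrigerator sold

# Broad campaign: 200M households at a low baseline conversion rate.
broad_reach, broad_conv = 200_000_000, 0.0005
broad_profit = (broad_reach * broad_conv * margin_per_sale
                - broad_reach * cost_per_household)

# Targeted campaign: 40M high-propensity households at a higher rate.
target_reach, target_conv = 40_000_000, 0.0030
target_profit = (target_reach * target_conv * margin_per_sale
                 - target_reach * cost_per_household)

print(f"Broad campaign profit:    ${broad_profit:,.0f}")    # -$70,000,000
print(f"Targeted campaign profit: ${target_profit:,.0f}")   # $16,000,000
```

Under these assumed numbers, the broad campaign loses money while the targeted one turns a profit – which is the whole economic case for segmentation.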

6. Validating efficacy

Even with the best of intentions and upfront due diligence, initiatives often fail. The fault could lie with the data and analytics, with inappropriate offers and pricing, or with failures in execution. To maximize the chances of success, it is best to run focused trials, when possible, in a representative region of the country.

If the trial results show statistically significant benefits, the campaign can then be quickly scaled. 
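As a rough sketch of that significance check, a two-proportion z-test comparing trial and control conversion rates could look like this, assuming statsmodels is available (all counts are hypothetical):

```python
from statsmodels.stats.proportion import proportions_ztest

# Hypothetical results: conversions out of households reached in the
# trial (targeted) region vs. a comparable control region.
conversions = [1_450, 1_120]     # [trial, control]
households = [400_000, 400_000]  # [trial, control]

stat, p_value = proportions_ztest(count=conversions, nobs=households)
print(f"z = {stat:.2f}, p = {p_value:.4f}")

if p_value < 0.05:
    print("Uplift is statistically significant; consider scaling the campaign.")
else:
    print("Inconclusive; dig into root causes before abandoning the initiative.")
```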

Where the trial is inconclusive or substantial benefits have not been observed from the targeted marketing, the tendency within organizations is to abandon the initiative without fully understanding the root causes.

Often, the points of failure are –

  • Incorrect hypothesis
  • Corrupt or inadequate data – a critical input may have been omitted
  • Biases introduced into the study/modeling
  • Lack of trust in the results of analytics when they do not confirm expected output
  • A misreading of customer willingness to pay when developing pricing and offers
  • Trial execution failures and inadequate flexibility in applying quick learnings

Guarding against the above pitfalls throughout the initiative substantially increases the chances of success. Data and facts by themselves are unbiased; it is human intervention that determines the effectiveness of their use.

All content included in this article is strictly the personal perspective of Indira Rao and not of her employer AT&T.

Please do leave your comments at the bottom and do share with others if you like this article.
