Your Next Predictive Analytics Venture Will Fail

Here at Version 1, we have been providing consultancy services to help organisations implement predictive analytics projects for more than 20 years. As a core part of our business, we’ve seen how people and projects work in a wide variety of industries, with applications in both B2C and B2B markets. What we’ve seen and learned over this time is that many organisations suffer the same fate when they venture into a Predictive Analytics initiative.

More often than not, your Predictive Analytics project represents unfulfilled potential.

Is your next Predictive Analytics project doomed to a similar fate? Here’s why the answer could be yes.

Nobody else in the business knows what you are doing, or why.

All too often the analysis is left to the analysts, without enough engagement from stakeholders across the business. These stakeholders can provide additional context, highlight business constraints and give an overview of other initiatives already in progress, all of which are essential to understand at the outset of the project. Often the analysts consider that they have enough information already. Enough to be self-sufficient. Enough to be efficient. Without open communication and an organisation-wide collaborative spirit, however, it may be enough, but it is not optimal. Without identifying and involving the right people, it is always the one thing somebody knew but was never asked about, or the constraint somebody never shared, that causes the project to hit a major roadblock.

Line of Business Managers have their own day job – they don’t need the extra work!

An increasingly common scenario is that good models are produced, with very good business context, but they fail because the people in charge of implementing the results are simply too busy with business as usual. They were identified and involved from the start, but when it came to using the results, they couldn’t find the time. That is why understanding who is in charge of deploying the results, and the optimal time and format for those results, is imperative. When you don’t work out the actions from the start, and you don’t assign responsibilities and time, the entire business can begin to lose faith. “Those models never worked,” you hear, but in reality those models were never given, and could never reasonably have been given, a fair chance.

Nobody else can understand what your model is doing or saying.

Having been there myself, I’ve seen what happens when you strive for the perfect solution only to realise later that you made it too complex. So much so, in fact, that those responsible for deploying the results, the business managers, the key stakeholders, all grow sceptical because it is too difficult to understand what is happening or how. They gave their time, they scheduled it into their business as usual, but they just couldn’t understand it.

Now, more than ever before, a predictive model must strike a balance. A balance beginning with accuracy – the model is certainly expected to perform well. A balance that also recognises the need for understanding. Is it better to have a clear, well-understood model that could be more accurate? Or will you sacrifice the understanding for the best accuracy possible? Usability must also be considered. Will the results be presented in time, and in suitable formats, for appropriate action to be taken?
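
To make that balance concrete, here is a minimal sketch, written in Python with scikit-learn purely for illustration (the same comparison can equally be run in your SPSS workflow), that pits a model the business can read against a more opaque one on the same data. The synthetic dataset and model choices are assumptions, not a prescription.

```python
# A minimal sketch of quantifying the accuracy/understanding trade-off:
# compare a model the business can read against a more opaque one.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

# Synthetic stand-in for your own modelling data.
X, y = make_classification(n_samples=2000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# A depth-3 tree can be drawn on one slide and walked through in a meeting.
readable = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X_train, y_train)

# Boosted trees usually score higher but resist a plain-English explanation.
opaque = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

print(f"Readable tree accuracy: {readable.score(X_test, y_test):.3f}")
print(f"Boosted trees accuracy: {opaque.score(X_test, y_test):.3f}")
```

If the gap between the two scores is small, the readable model is often the better business choice; if it is large, that difference is the price of understanding, and the choice should be an explicit, shared decision rather than an analyst’s default.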

I often think of visualisation techniques here too. Whatever industry you work in, there is a growing expectation of strong visualisation of results. Without it, it is difficult to get people behind what the predictive analytics project is hoping to achieve. With it, there is an ever-increasing responsibility to tell the story and demonstrate where the real value lies.

You predicted that your predictions would be perfect. They weren’t. What on earth is going on?

Is it even possible to have a ‘perfect’ model? No model should ever be ‘perfect’. If it is, it took too long to produce and required far too many resources.

Checking inputs and data points for validity should be a core part of the predictive analytics project lifecycle. More importantly, those who will be viewing results or using predictions should be aware of this. Perhaps the first model contains predictors that didn’t make sense, or customers in a list who should never have been there. Or model development was rushed to get it finished on time. Maybe a data table contained information that was no longer accurate. The first model will, and should, always be a prototype, a first iteration that will need to be refined. Expecting otherwise is a trap that will cripple momentum.
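
As an illustration, here is a minimal sketch of the kind of automated input checks worth running before each scoring run. It uses Python and pandas for brevity; the column names, expected ranges and known segments are hypothetical assumptions standing in for whatever your own model actually expects.

```python
# A minimal sketch of automated input validation before a scoring run.
# All column names and thresholds below are hypothetical.
import pandas as pd

def validate_inputs(df: pd.DataFrame) -> list[str]:
    """Return a list of human-readable issues found in the scoring data."""
    issues = []
    # Predictors the model expects; anything missing stops the run early.
    expected = {"customer_id", "tenure_months", "monthly_spend", "segment"}
    missing = expected - set(df.columns)
    if missing:
        issues.append(f"Missing columns: {sorted(missing)}")
        return issues
    # Nulls in predictors silently degrade predictions, so flag them.
    null_counts = df[list(expected)].isna().sum()
    for col, n in null_counts[null_counts > 0].items():
        issues.append(f"{col}: {n} missing values")
    # Out-of-range values often signal a stale or corrupted source table.
    if (df["tenure_months"] < 0).any():
        issues.append("tenure_months contains negative values")
    # Categories the model never saw in training deserve a manual check.
    known_segments = {"retail", "smb", "enterprise"}
    unknown = set(df["segment"].dropna().unique()) - known_segments
    if unknown:
        issues.append(f"Unknown segments: {sorted(unknown)}")
    return issues
```

Run before scoring, an empty list means the data looks the way the model expects; anything else goes to the stakeholders before the predictions do.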

You never measured if it worked or not.

Consider your last predictive model. Do you know how well it performed? What ROI did it bring? How much effort and resource did it require? Which areas are marked for improvement? In which area did the model perform best?

Measuring the performance of your models is almost as important as building them in the first place. The organisations that thrive with predictive models are the ones that conduct A/B testing, that measure the resources and effort put into model building and offset them against the value added to the business. These organisations have distinct, measurable success criteria – based on numbers and KPIs, not just on how well you understand the results. Your work is not done when you finish creating a predictive model. In many ways, your work has just begun. If you want the next model to be more accurate, if you want this model to get more buy-in from the business, if you want this model to adapt to changes in customer behaviour, if you want this model to work, then you should be measuring the results as clearly as possible.
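
As a sketch of what measuring the results can look like, assuming a simple champion/control split and made-up numbers throughout, the example below uses Python with statsmodels to test whether the model-driven group genuinely outperformed a random control, and then offsets the value added against the project cost.

```python
# A minimal sketch of measuring whether a model-driven campaign actually
# outperformed business as usual. All counts, values and costs are
# hypothetical assumptions.
from statsmodels.stats.proportion import proportions_ztest

# Group A: customers contacted using the model's ranking.
# Group B: a held-out control group contacted at random.
conversions = [340, 255]   # hypothetical successes per group
contacts = [5000, 5000]    # hypothetical customers contacted per group

z_stat, p_value = proportions_ztest(conversions, contacts)
uplift = conversions[0] / contacts[0] - conversions[1] / contacts[1]
print(f"Uplift: {uplift:.1%} (p-value: {p_value:.4f})")

# Offset the value added against the effort the project consumed.
value_per_conversion = 120.0   # hypothetical revenue per conversion
project_cost = 8000.0          # hypothetical modelling and deployment cost
incremental_value = (conversions[0] - conversions[1]) * value_per_conversion
roi = (incremental_value - project_cost) / project_cost
print(f"ROI: {roi:.1%}")
```

The point is not this specific test; it is that the success criteria are numeric, agreed in advance, and reported back to the business.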

Even if your venture into Predictive Analytics fails, can you learn from your mistakes and ensure success for your next project? Could you really have solved all these problems, and more, at once? Or are predictive analytics projects, by nature, part of a larger, continuous, ever-improving and iterative business initiative? Is it unrealistic, each time, to expect the next model to be the ‘perfect one’? An unheralded success? Should we perhaps take the failures, some small, some large, and accept them as part of every Predictive Analytics project? Should we just strive to improve, sometimes ever so slightly, in the next venture? Next time, fail again. Fail better.

Version 1’s experienced consultants are on hand to help you find the best software and licence type for your analytical and usage requirements. Contact us to discuss your requirements and identify the best SPSS product for you.

