Why should your organization be interested in analytics in the first place? Well, let’s start with a little history: here are some lessons learned from analytics in the past that can help you succeed with analytics in the future.
Using calculation devices to increase efficiency, cut costs, and increase profits is as old as the abacus. And every computer that has ever been made has been sold with the promise of providing business executives with the real-time information they need to make better decisions—including the very first ones used for business.
A version of the Lyons Electronic Office (LEO), which became the world’s first business computer and was used to run what we would now call “predictive analytics.”
In the UK in 1951, the first computer ever designed to manage a business went into operation: LEO (short for Lyons Electronic Office). And the first application it ever ran was analytics.
Other computers existed at the time. ENIAC, an “electronic brain” built for the US Army, was designed to calculate ballistic trajectories. IBM was working with MIT on an air defense computer. But Lyons’ management decided that the new computer technology could help with business, too.
Lyons & Co. wasn’t a computer company. It didn’t even make calculating machines. Instead, it was famous around the world for cakes, tea, and sandwiches. Its 30,000 employees manufactured 36 miles of Swiss roll cake every day, and uniformed staff served over 150 million meals a year in Lyons teashops across the UK.
The company struggled with an issue that is familiar to anybody running a food business today—its bakery products were perishable. If they weren’t sold, they would be wasted, and if they ran out, valuable sales would be lost.
LEO was designed and built by Lyons’ own engineers to grapple with predictive analysis: the task of ensuring that each teashop would be restocked every day with just the right amount of bread rolls, boiled beef, and ice cream.
And Lyons was excited by the possibility that LEO could help track the performance of each teashop, and each product sold, in order to enable managers to make well-informed decisions.
The very first job to run on the new computer, in 1951, was “bakery valuations,” and by 1954 the system was fully operational, churning out standard reports for managers across the organization.
For the first time, managers could see at a glance which products were popular and which were not. LEO handled the conversion of meal orders into the individual items required to make them, such as beef, carrots, and dumplings. And reports meant managers could easily compare the sales value of goods delivered to each store with the revenue for the month to check for efficiency and fraud.
The system was designed for “management by exception,” with reports on the top ten and worst ten performing stores for different products, so that managers could make adjustments. And they could compare the sales that were forecasted with the amounts that were actually sold to see if store managers were consistently under- or over-estimating demand.
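The same “management by exception” idea is easy to sketch in modern terms. Here is a minimal illustration, assuming a list of (store, forecast, actual) records; the store names and figures are invented for the example, not Lyons data:

```python
# Minimal sketch of a "management by exception" report:
# rank stores by how far actual sales deviate from forecast,
# then show only the extremes so managers can focus on outliers.
# All store names and numbers below are illustrative.

def exception_report(records, top_n=3):
    """records: list of (store, forecast, actual) tuples."""
    # Deviation ratio: positive means the store beat its forecast.
    ranked = sorted(records, key=lambda r: (r[2] - r[1]) / r[1])
    worst = ranked[:top_n]           # biggest shortfalls vs. forecast
    best = ranked[-top_n:][::-1]     # biggest over-performance first
    return best, worst

stores = [
    ("Piccadilly", 1000, 1180),
    ("Strand", 800, 640),
    ("Oxford St", 1200, 1190),
    ("Camden", 500, 560),
    ("Brixton", 700, 480),
    ("Holborn", 900, 930),
]

best, worst = exception_report(stores, top_n=2)
print("Top performers:", [s for s, _, _ in best])    # Piccadilly, Camden
print("Needs attention:", [s for s, _, _ in worst])  # Brixton, Strand
```

Comparing forecast against actuals in the same report is also exactly how consistent under- or over-estimation by store managers becomes visible.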
LEO was even used to track the production quality of products such as the high-quality tea blends on which so much of the company’s reputation was based.
Some of the biggest cultural issues that make analytics deployments difficult today were also present in these early days.
The first problem was that the company’s choice of key performance indicators led to suboptimal practices. For example, waste was considered the ultimate sin of a store manager. The inevitable result was that the shops often had empty shelves in the afternoon, resulting in untapped revenue potential and dissatisfied customers.
The second issue was getting top managers to adapt their management style to the new possibilities. The most advanced data processing system in the world was in the hands of a conservative management whose style hadn’t changed since the previous century. The LEO team had designed the system to focus on the key figures required to run the teashops, rather than on what the managers themselves had asked for. But the managers still wanted their printouts of everything that was happening everywhere.
There’s a lot of industry hype about analytics, big data, and artificial intelligence. But it’s clear that the biggest goal of analytics—better decisions— has remained stable for a long time. And so have the biggest barriers to achieving it.
Computer manufacturers have been promising real-time information for executive decision-makers since the 1950s!
Analytics isn’t a technology problem
It’s a problem of business definitions and processes that aren’t optimized for data flow.
If LEO was able to provide business information in the 1950s, why is it still so hard today? Because it genuinely is hard, and because expectations keep rising.
Today’s computer systems have become very good at gathering and storing the kinds of data you need to run the business on a day-to-day basis.
But analyzing that data remains difficult. You need to gather information across multiple systems—not just inside your organization, but also from the market as a whole. You need to be able to compare the data to history in order to get context. Data from different systems will use different terms. Even as technology improves, the amount of data that we want to access and analyze grows with it (and beyond it!).
“Business people will always be dissatisfied with their information systems”
–Timo’s Law of Analytics
Survey after survey over the last twenty years has shown that average satisfaction with corporate information systems has stayed steady, at around 15%.
That doesn’t mean those systems were a mistake, and it doesn’t mean that the billions invested in analytics systems have been worthless.
It just means that computer systems have become more sophisticated, but haven’t kept up with executives’ needs. Ultimately, the job of an executive could be defined as “making decisions”—and that’s something computers will never do.
Decision = not enough information
A decision is a situation where information is lacking by definition.
Why? Because we only use the word “decision” when there is some uncertainty and human discretion involved.
For example, let’s imagine a simple decision where the information systems have provided all the information currently available to make that decision. There are two possibilities: either the data shows that the choice is obvious, and choosing yes or no is a formality, or there’s some need for reflection before choosing an answer.
The second option necessarily means that there are some extra factors that must be weighed that the computers cannot or have not provided. This is, by definition, the job of executives.
Of course, over time, both the data and the processing power of computers gets better, and so they can take over more of what we currently call “decisions”.
- People used to “decide” airline pricing; now computers do, and pricing has just become part of the revenue-optimization application.
- Amazon’s systems choose what books to recommend to me, but do they really “decide”?
- UPS drivers used to decide what route to take; now the system just tells them where to go, without any left-hand turns.
- Or to take it to an extreme: automatic chokes on cars replaced a “decision” that drivers used to have to make about the fuel/air mixture — yet we would never call such a simple switch a “decision system.”
The bar is constantly being raised. Any choices that are automated just disappear into the mass of everything else that has already been automated. As computers take on the lower-level tasks, people are able to move on to more complicated and strategic levels of decision.
Of course computers can help make ever-more sophisticated choices, and there can be real competitive advantages to automating decisions that are currently carried out by people (see Smart (Enough) Systems by James Taylor and Neil Raden for a good overview of approaches to Enterprise Decision Management).
But to decide (and err) is human. Recognizing this and setting expectations appropriately can help smooth the relationship between the people that consume information and the groups that provide it.
This is not just a pointless debate about semantics: there are real world impacts on IT organizations and the BI industry.
Why is analytics hard? Ultimately, it comes down to data.
Business people don’t trust the data they are provided. Poor data quality is a given, but most organizations don’t know where they are suffering or how much — you need to shine a flashlight into the dark corners. And cleansing the data isn’t enough: the process and the results must be transparent to the users. (You trust FedEx to deliver, right? So why do you care about tracking?)
Different types of data
The challenges of data are sometimes described using the “3Vs”: Volume, Velocity, and Variety, first coined by Doug Laney of Gartner over a decade ago. Since then, many others have tried to take it to 11 with additional Vs, including Validity, Veracity, Value, and Visibility.
Transactions, Interactions, and Observations. This one is from Shaun Connolly of Hortonworks. Transactions make up the majority of what we have collected, stored, and analyzed in the past. Interactions are data that come from things like people clicking on web pages. Observations are data collected automatically.
Process-Mediated Data, Human-Sourced Information, and Machine-Generated Data. This is brought to us by Barry Devlin, who co-wrote the first paper on data warehousing. It is basically the same as the above, but with clearer names.
“To make good decisions, you have to be skilled at ignoring information”
Warnings of the exponential rise of data volumes predate computers. But now that almost every device and business process is generating data 24/7, there’s certainly far more opportunity than in the past.
Over the last decade, a variety of new techniques have made it much more feasible and economically viable to process large amounts of information at high speed. These include the rise of cloud computing, in-memory processing, new open-source massively-parallel platforms, and more powerful processors based on graphics technology.
New sources like sensors mean that previously invisible business processes can be revealed: “analyzing data that was previously ignored because of technology limitations.”
Most corporate computer systems are very good at recording what’s going on in the business. But by the time the information is available to management, it’s often too late to do anything about problems.
Instead, companies can now use ‘signal’ data to anticipate what’s going to happen, and intervene to improve the situation.
Examples include tracking brand sentiment on social media (if your ‘likes’ fall off a cliff, your sales will surely follow) and predictive maintenance (complex algorithms determine when you need to replace an aircraft part, before the plane gets expensively stuck on the runway).
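The underlying pattern behind both examples is simple: watch a leading-indicator signal and raise an alert when it drops sharply below its baseline, before the lagging metric (sales, equipment failure) catches up. Here is a minimal sketch of that idea; the function name, window sizes, and daily sentiment scores are all invented for illustration:

```python
# Minimal sketch of using "signal" data as a leading indicator:
# flag when the mean of the most recent daily brand-sentiment
# scores falls well below the longer-run baseline.
# All scores and thresholds below are illustrative assumptions.

def sentiment_alert(scores, window=3, drop_ratio=0.8):
    """Return True if the mean of the last `window` scores
    falls below drop_ratio * mean of the earlier scores."""
    if len(scores) <= window:
        return False  # not enough history to compare against
    baseline = sum(scores[:-window]) / (len(scores) - window)
    recent = sum(scores[-window:]) / window
    return recent < drop_ratio * baseline

daily_scores = [0.72, 0.70, 0.74, 0.71, 0.55, 0.48, 0.42]
print(sentiment_alert(daily_scores))  # prints: True
```

In practice the trigger would feed an intervention (a marketing response, a maintenance work order) rather than a print statement, but the anticipate-then-act loop is the same.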
There has been a big increase in business awareness that data is a valuable resource for discovering useful knowledge.
The foundation for the business models of the future
It’s now way beyond just “better decisions”
It’s a loop: information was once gathered just to make better decisions, but now it powers things like customer interactions directly.
The virtuous circle of information
More data = better customer experiences = more customers = more data. The sooner you get started, the better!
In his wonderful book The Human Face of Big Data, journalist Rick Smolan says big data is “the process of helping the planet grow a nervous system, one in which we are just another, human, type of sensor.”