Gartner Data & Analytics London Day One

It’s Day One of the Gartner Data and Analytics Conference in London. Here are my thoughts on the sessions, updated through the day…

To kick things off: Business Intelligence and Analytics is more strategic than ever! It’s yet again a top priority for today’s CIOs — and has been now for 12 out of the last 13 years!

Themes of the keynote: Trust, Diversity, Complexity, and Literacy

TRUST

A lack of trust in data has reached epidemic proportions in society — there’s lots of fake news around!

How can we trust data in business? Through verification and data transparency. Doing this requires talking about metadata — data about data. That’s a concept that non-experts find hard to grasp, but it’s worth the effort.

To succeed in the future, we need to take on board the lessons of Wikipedia — instead of a centralized group that tries to create and maintain a repository of business glossaries, we need to apply the lessons of consumerization and collaboration and let people create and tag metadata as part of their daily work.
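
As a minimal sketch of that crowdsourced idea — my own illustration, not any particular catalog product — the core is just letting anyone attach tags to datasets as part of their daily work:

```python
from collections import defaultdict

# Toy crowdsourced metadata catalog: any user can tag any dataset.
# Purely illustrative -- real data catalogs layer search, lineage,
# ratings, and governance on top of this basic idea.
catalog: dict[str, set[str]] = defaultdict(set)

def tag(dataset: str, user: str, label: str) -> None:
    """Record a user-contributed tag against a dataset."""
    catalog[dataset].add(f"{label} (tagged by {user})")

tag("sales_2018_q1.csv", "maria", "revenue")
tag("sales_2018_q1.csv", "ahmed", "EMEA only")
print(sorted(catalog["sales_2018_q1.csv"]))
```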

“Data catalogues are a critical new capability — it’s the new black.” Demand is rising for these tools, which offer a fast way to categorize disorganized data (but unless well managed, they can lead to more silos).

Trust is even more important these days because of regulations such as GDPR. Are you able to explain why your predictive model treats this customer differently from this other one? You may be required to….
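
As a toy sketch of what that kind of explanation can look like — a deliberately simple linear scoring model, with all feature names and weights invented — the per-feature contributions account exactly for why two customers get different scores:

```python
# Toy GDPR-style explainability: for a linear scoring model, per-feature
# contributions explain exactly why two customers are treated differently.
# The features and weights below are invented for illustration.
WEIGHTS = {"income_k": 0.4, "tenure_years": 1.5, "late_payments": -2.0}

def score(customer: dict) -> float:
    """Linear score: weighted sum of the customer's features."""
    return sum(WEIGHTS[k] * customer[k] for k in WEIGHTS)

def explain_gap(a: dict, b: dict) -> None:
    """Break the score difference between a and b down by feature."""
    for k in WEIGHTS:
        print(f"{k}: contributes {WEIGHTS[k] * (a[k] - b[k]):+.1f} to the gap")

alice = {"income_k": 50, "tenure_years": 4, "late_payments": 0}
bob   = {"income_k": 45, "tenure_years": 1, "late_payments": 3}
print(f"alice={score(alice):.1f}, bob={score(bob):.1f}")
explain_gap(alice, bob)
```

Real models are rarely this transparent, of course — which is exactly why explainability has become a regulatory concern.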

Dumping data into a data lake doesn’t fix the problems. We need not just lifeguards (to save people when they’re out of their depth!) but also “marine biologists” who can look at the data at the “molecular level” and determine whether it’s safe to swim, or safe to drink.

“Data lakes do NOT repeat NOT replace data warehousing!”. Data lakes provide flexibility and scale, but not trust — this is where data warehouses, with a more formal approach, remain important. And it’s not just about checking data quality as it enters the system — there must be constant checks any time data is used to make decisions.

Conclusion — we need to drive more trust in data, more explicitly than ever, and this is increasingly important as diversity of data increases.

DIVERSITY

We need all types of diversity — of people, algorithms, and data.

People: many studies have shown that more diverse teams perform better — see, for example, the McKinsey report “Why Diversity Matters”. Gender, race, the differently abled, etc. — e.g. SAP’s Autism at Work program.

Algorithms: there have been high-profile examples, such as using algorithms to assess the risk of a prisoner skipping bail — where we should probably not be trusting a computer program with that much power. There are lots of examples in the (great) book “Weapons of Math Destruction”.

Data: text, video, telemetry, etc. E.g. Octo Telematics, which uses masses of data from car sensors to transform insurance markets — a win-win model: because of the savings, the claim is that bad drivers can pay the same while good drivers pay less, and everybody gets faster claims payouts.

COMPLEXITY

The world has never been so complex. We need to understand complexity and respond in a timely way. Gartner has been talking about “bimodal” IT for a while — two speeds, one about efficiency, stability, and scalability (mode 1), the other about creativity, speed, and flexibility (mode 2). But collectively we’ve hit a wall: companies have been able to do both mode 1 and mode 2, but only separately. How can we do both at the same time?

It starts with unlearning the lessons of the past. We used to use a “production line”, ERP-type approach, with different teams for different stages — ETL, data, reports, etc. — all centralized. But that’s not going to be creative or collaborative enough for the future. We have to start copying the approach of open, crowdsourced competitions — such as the eighteenth-century prize for calculating longitude, or Kaggle more recently. We need a more bottom-up approach that allows lots of experiments — lots of small, low-risk bets — and maximizes the odds of doing something great.

How do we master complexity? With more context, through better data, and lower latency, we can act faster, and use complexity to competitive advantage.

LITERACY

The classic book “How to Lie with Statistics” is more relevant than ever — and business people continue to mostly ignore its lessons: sample bias, over-reliance on averages, etc. Take the classic example where most people say a woman is more likely to be a bank teller and an activist than a bank teller alone — which makes no sense mathematically, since the probability of two things both being true can never exceed the probability of either one alone.

With training, these biases can be overcome. It turns out that poor data literacy is the second-biggest barrier to successful BI. It’s like learning a second language — there’s a specialist vocabulary, with dialects by domain (e.g. healthcare, marketing, etc.). What’s the solution? Consider creating a “driver’s license” for accessing data. And classic change efforts — blogs, office hours, training courses, etc.


David Rowan Keynote — innovation is now coming from the edge…

Too many cool examples to count here. Everywhere, in every way, people are using the latest technologies to change society — robots, AI-controlled traffic lights, facial recognition in China, e-Estonia offering an “app store” for government services… a history of how quickly AI has been progressing… the Sightcorp video, Soul Machines in Auckland (creepy!), the Nvidia CES video, ship tracking…

“Things will never go this slowly ever again!” etc., etc. — read Wired magazine UK for more 🙂

Diversify

June Sarpong — author of the book Diversify.

Story — the team of African-American “human computers” at Langley during the war effort. The “Colored” computing team — as it was then called — grew and grew, adding more gifted women. Then the Cold War started, and the US decided it had to launch a man into space. The engineers (Langley was now part of NASA) couldn’t complete the calculations, and couldn’t find anybody who could help — until they turned to the colored computing team (see the great book/movie “Hidden Figures”!).

UK example — Rosalind Franklin. Born in 1920, she was obsessed with science. She studied chemistry at Cambridge and helped create technology for gas masks, for example. While at King’s College London she worked to crack the structure of DNA, with a year’s worth of calculations. But her X-ray image was shown to Watson and Crick, who went on to win the Nobel Prize for the discovery of DNA’s structure. She never received the recognition she deserved while she was alive. Her work was also instrumental in another Nobel Prize.

It’s important for these stories to be known, as role models, so that others are encouraged to go into these industries…

Go to the website, check where you stand, and spread the word with the “-ism” calculator!

From BI to AI: Focus on Business Outcomes to Architect Your Data and Analytics

Joao Tapadinhas. Implementing data and analytics is complex and constantly changing, so it’s important to continually update the “map” to ensure you actually get the expected business outcomes. In particular, it’s all too easy to start with the technology changes, but, as ever, it’s important to start with the business outcomes — how can data support the company strategy? — and only then think about the technology and infrastructure required.

Introducing the notion of an “analytics hub” — “a high-level analytics construct that instantiates a cluster of analytics blocks with customized business context in order to deliver a target business outcome” — where each “analytics block” is “Granular, business-agnostic Analytics functions and their supporting technical and organizational components able to deliver narrow-scope analytics outcomes and, within the right business context, generate business outcomes”.

In other words, it’s useful to think about BI as a modular system: a collection of constantly updated blocks, where each block is a collection of roles and skills, analytics capabilities, data capabilities, and processes/governance that come together to create a usable set of analytics. E.g. Visual Data Discovery.
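
To make the modular idea concrete, here’s a minimal sketch — my own illustration, not Gartner’s actual toolkit — of how a block and a hub might be modelled:

```python
from dataclasses import dataclass, field

@dataclass
class AnalyticsBlock:
    """One granular, business-agnostic analytics capability."""
    name: str                                            # e.g. "Visual Data Discovery"
    roles: list[str] = field(default_factory=list)       # skills needed to run it
    analytics: list[str] = field(default_factory=list)   # analytics capabilities
    data: list[str] = field(default_factory=list)        # data capabilities
    governance: list[str] = field(default_factory=list)  # processes around it

@dataclass
class AnalyticsHub:
    """A cluster of blocks given business context to hit a target outcome."""
    target_outcome: str
    blocks: list[AnalyticsBlock] = field(default_factory=list)

# Hypothetical example: a hub aimed at reducing customer churn.
discovery = AnalyticsBlock(
    name="Visual Data Discovery",
    roles=["business analyst", "data engineer"],
    analytics=["interactive dashboards", "ad hoc exploration"],
    data=["curated customer dataset"],
    governance=["access policy", "refresh schedule"],
)
churn_hub = AnalyticsHub("Reduce customer churn", blocks=[discovery])
print(churn_hub)
```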

It’s important to emphasize that each block isn’t about the technology, but the usage. For example: one company deployed self-service, and it proved very popular — BUT they didn’t update the processes. People could look at existing data within a few hours, but any time they needed new data they had to go back to IT, which could take several months. So the users ended up creating spreadmarts in every direction, and chaos — three users in a meeting might have five different views of the data!

Lesson learned — you can’t just update the technology. It’s important to take a step back and adapt all the other elements that make up an “analytics block”.

Traditional analytics tools are very good at providing reliable corporate information, but they typically require specialist report developers, limiting their flexibility. Five years ago we started seeing “analytics workbenches” emerge, where business users can connect to data, gather it together, and explore it themselves. The third area is “data science laboratories”, which have been around for a long time, for very skilled users (data scientists). Most recently, there’s the “artificial intelligence hub” — autonomous systems that learn to perform tasks. They have a very narrow focus — chat bots, for example, or video analysis — but they’re very good at what they do, and at solving new problems.

What we need to do is offer the full range of capabilities to our business users — i.e. from BI to AI. How to do that?

The Gartner Analytics Evolution Framework. Seven steps, focused on the outcomes and value expected for the business — see the Gartner Analytics Atlas research report for more details.

  1. List and prioritize the target business outcomes, working with the business users. Ideally clear, measurable, time-bound, etc. (see the toy sketch after this list).
  2. Define the business context of target outcomes — everything will depend on what’s going on in the business, from installing a new ERP system to changing customer expectations to big-picture changes in the economy. This is the point where you should consider the links between your internal BI systems and how that data is going to be at the heart of your future business models.
  3. Select analytics blocks for clusters and embed them in the overall analytics landscape. Choose the high-priority areas to be implemented, taking into account the required integrations and overlaps between different blocks. See the Toolkit from the Gartner Analytics Atlas for a comprehensive list of analytics blocks. For example, there may be a cluster around reducing customer churn that includes dashboards, self-service analytics, machine learning etc. And then a different cluster around improving forecasting and demand management that might also include mobile BI, geospatial analysis, etc…
  4. Identify the people and data that will instantiate analytics hubs. Turn the idea into a plan — who’s going to be responsible for the different roles? Where is the data going to come from? Etc. Put it all together and use it to explain to people where you’re going…
  5. Assess the organizational readiness for the analytics blocks, clusters, and hubs. What’s likely to get in the way of executing your plan? E.g. you may not have the skills, or the budget, or the right company culture, executive sponsorship, etc…
  6. Based on those roadblocks, go back to the business users to review the business outcome priorities. “You can’t have everything you wanted, so what can we do?”
  7. Design the evolution roadmap and establish timelines. Go execute on the realistic plan! (But, of course, be ready to reevaluate and reiterate each part.) Be incremental, slowly building up your overall capabilities one project at a time.
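
As a toy illustration of step 1 — the outcomes and scores below are invented — prioritization can start as simple scoring before the real workshop conversations:

```python
# Hypothetical scoring of candidate business outcomes (step 1 above).
# In practice this is a workshop with business users, not a one-liner.
outcomes = [
    {"name": "Reduce customer churn",         "value": 8, "feasibility": 6},
    {"name": "Improve demand forecasting",    "value": 7, "feasibility": 8},
    {"name": "Automate compliance reporting", "value": 5, "feasibility": 9},
]

# Rank by a simple value-times-feasibility score.
for o in sorted(outcomes, key=lambda o: o["value"] * o["feasibility"], reverse=True):
    print(f"{o['name']}: score {o['value'] * o['feasibility']}")
```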

Cloudy With A Chance of Wind

Empowering users with analytics at Vestas, the world leader in wind turbines. Kim René Vittrup.

Wind turbine installation can be delayed by heavy winds — but once the turbines are up, wind is good news. Moving to the cloud is comparable — you can face headwinds and tailwinds. Here’s the story of how we implemented cloud analytics with SAP. I’m on the business side, not IT — a team of around ten employees in Denmark and India.

Head of Systems and Tools, Supply Chain Planning.

“In everything we do, speed matters!” says Anders Runevad, Vestas President & CEO.

By the numbers:

Our value chain — we do project planning and design, then procure and manufacture with our partners, then construct and install the turbines, and then look after operation and maintenance.

When you start projects, it can feel very complicated, with gridlock. We had to find a way around that — Business Empowerment, a program called “a need for speed”. The first step was to select visualization software; we wanted one standard that would go across the company — SAP Analytics Cloud. Next we needed to remove bottlenecks to speed up delivery time, then enhance coordination and flexibility.

The path we’ve been on started with a clear strategy in 2015.

The rollout of SAP Analytics Cloud started in September 2017. We can now create dashboards like these in a matter of hours, without any help from IT, leveraging the advanced authorizations and the data available. We’re still in the early phase of adoption — currently 200 active users, but we’re ramping up and looking to increase adoption rates. It takes time to move people away from Excel and PowerPoint! But with a more automated system, users can spend more time analyzing and less time preparing data.

We were helped by preparation. IT isn’t in control any more — the business is. We worked on key requirements, functional roles, guidelines for lead power users, standardization of templates, access procedures, and a business user forum for knowledge sharing. This was essential, because we were bombarded with questions during the rollout.

The good news — it’s very visual, people want to use it, and it’s available on mobile. Uptime has been very good (a couple of hours of downtime, which SAP noticed before our users did). The tool is very user-friendly. Some drawbacks — it still lacks some key functionality, such as conditional formatting, and it’s currently hard to get up-to-date geo maps with live connections. But there are lots of regular updates. Anyway, that’s enough talking — here’s what it looks like in practice…

This view tracks the progress of key projects. I can do a cross-analysis or linked analysis — select just three employees, for example — and do analysis to compare employees, all very easy to use. Most of what I would like to show you is very confidential, so I can’t show it to you!

In this report I can see who is using the system — I can highlight a report and see the top twenty users of a particular report…

What are the business benefits?

  • More business empowerment — dashboards and analytics are established MUCH faster
  • Digital management reporting — PowerPoints are slowly dying → more automation
  • Better business understanding — visual analytics gives employees the appetite to understand the numbers

Next: the Digital Boardroom. But we have to get the fundamentals right first, so we need higher adoption through internal campaigns. We would also like more live SQL connections, and we’re working on developing more power users.

Q&A:

  • Can you link to live on-premise data? Yes.
  • Can you blend corporate data with local personal data? Not yet, but it’s coming
  • Could trust in data be blocking faster rollout? Yes!
  • Will this replace on-premise analytics? No, the future is hybrid — both on-premise and cloud, combined.
  • How do users access it? Simple! Type “sac” in the browser, and we take you straight to it…
  • How’s performance? Was slowish today through a Danish VPN. But in general it’s great.
  • Lumira and WebI internally? Yes — we’ll use WebI where Lumira isn’t sufficient… but the on-premise implementation has gone much slower; we’re only just starting…

What’s wrong with Master Data Management?

Andrew White.

MDM has gone “from the boardroom to the basement” — big, high-profile initiatives that weren’t as successful as hoped.

ERP rollouts — the master file has master data in it, right? But the projects became too big and very expensive, trying to manage everything. Implementations were very complex, tended to be IT-driven, and few of them really touched the business processes — they didn’t really tackle the problem, just addressed some of the data issues. The result — it became slower and slower and more confusing, and ultimately stopped…

We need to be more specific — hence “application data management”, a new Gartner concept.

After 15 years, we have figured out:

  • How to align data to outcomes (well, still working on this)
  • That not all data is equal.
  • That MDM is not about data — really!
  • That MDM does not exist alone.
  • That MDM should be preceded by classifying data

In six years’ time we’re not going to be at a “data and analytics” summit — it’s going to be called something else. MDM is a business process improvement project, not a data standards project — data standards alone don’t get you anywhere. Even though the name says “data”, it’s not about data — there’s no way to get a business case for MDM on its own; it doesn’t exist. It’s only about the business process — customer relationships with and without bad data, for example.

The new acronym — why? Here’s how the conversation usually goes…

Defining Application Data Management.

A toolkit to help companies identify how much data has to be shared across different applications — e.g. between CRM, ERP, and e-commerce. As ever, start with the required outcomes and work back to what you need. The amount of real MDM data — data that needs to be shared across all the systems — should be pretty small, and it should be quick and flexible and easy to change.

Application data management is the data governance required for a specific application. It might require a different physical system — e.g. the application itself (if it’s modern, flexible, etc.).

The wrinkle — what about the data that’s shared by, say, just two systems? In the past, we were encouraged to put that into the middle zone, which made it too big, complex, and inflexible… it needs to be treated separately. “Shared application data management” — but that’s not an acronym we’re emphasizing.

So — at least three different layers to work with.

For example, take the typical lifecycle of data quality in an ERP application. It typically fluctuates over time — starts off well, then the data quality goes down, gets cleaned up, etc. But with ADM, we’re going to focus on that data from the start, and keep it stable.

So we’re going to take our “customer ERP master file” and divide it into real master data (the central zone) and the overlapping data (change it and it will break another application, but not everything). Then there might be data that is shared across modules of the ERP — e.g. order management, inventory, and finance. Etc.
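
As a rough sketch of that zoning idea — the field names and application lists below are invented — you could classify each field by how many applications actually consume it:

```python
# Toy classification of customer-master fields into the three layers
# described above. The field-to-application mappings are made up.
ALL_APPS = {"ERP", "CRM", "e-commerce", "finance"}

field_usage = {
    "customer_id":   {"ERP", "CRM", "e-commerce", "finance"},  # used everywhere
    "legal_name":    {"ERP", "CRM", "e-commerce"},
    "credit_limit":  {"ERP", "finance"},
    "delivery_note": {"ERP"},                                  # local only
}

def classify(apps: set[str]) -> str:
    if apps == ALL_APPS:
        return "real master data (central zone)"
    if len(apps) > 1:
        return "shared application data (overlapping zone)"
    return "application-local data (ADM only)"

for name, apps in field_usage.items():
    print(f"{name}: {classify(apps)}")
```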

Self-Service Data and Analytics at Scale

Self-service is a bit like plane travel — it used to be only for the elite, but now it’s available to most organizations. We can depend on it to get where we want to go, safely. So in some respects it’s now a commodity.

It’s time to take it to new heights, see it from a new perspective. Is it like commercial space travel? It’s going to take us a while to get there. But how do we scale?

Three key issues

  • How do we guide our self-service approach in the right direction?
  • What environment best supports a self-service approach?
  • What does the future of self-service data and analytics hold?

The four pillars of successful self-service:

  • A strong data foundation and governance. Letting people understand the data, and what the appropriate uses are.
  • The people — who does what? What does the business do, and what does IT do?
  • Process — how do we build an analytic culture? Move away from gut feel?
  • Technology — what are the tools we’re going to use?

But this isn’t very new — we’ve been talking about this for a long time. What’s fascinating is that we’re STILL talking about it. For successful self-service at scale, these things are still critical. Note that technology is last. We have to have the data in place before we can do anything… without that, the rest doesn’t matter. It doesn’t mean that technology isn’t important. But it’s not where we start. So many calls we get say “we’re going to do self-service — please help me choose the tool”. But, of course, self-service is about a lot more than just the tool.

For each pillar, what are the lessons learned?

Data

  • Understanding and use of data can make or break self-service initiatives. It’s not just data silos but also “analytics silos” — how can we share the “analytics artifacts” better?
  • Empower business domain users to “own” their data. We need a crowdsourcing approach, with data catalogs, etc. — through automation, but also through process.
  • Recognize that not all data is the same — so don’t govern it all as though it is; use an incremental approach. Consider a light-touch approach. Deciding NOT to govern some data is itself a type of governance decision. Customer data? Yes! But some other data, maybe not.
  • Data literacy and certification training for business people facilitates the ability to scale. Put processes in place to drive successful data use.

People

  • The most important thing is that engagement equals trust. As we scale self-service, we’re really trying to build trust. In several different ways — it’s us trusting that the users are going to use the data in appropriate ways, but also that the users trust the results that they’re getting, and can share the results in trusted ways.
  • Leverage the resources you have. Who are the business users? What are they doing? What can we share more widely? In terms of knowledge of the data, in terms of analytics, in terms of data science capabilities (e.g. if available in small pockets in business units).
  • Consider establishing a multi-tiered organizational structure. Where should the management be? In IT or the business? It should be a multi-tiered combination of both — pieces of it are pushed out as close to the business as we can, while some is very centralized.
  • Engage the users in establishing process. It’s hard to find the balance. Work with the users to determine how people get access to the data and the systems. How will new data sources be added? Help them help you define the process. What doesn’t work is for IT to try to create everything and just push it out to the business users. Time and time again, the leaders had IT and the business working side by side.
  • Teach the users to fish. Don’t just give them access to the data and say “go for it”. We have to help them use the tools to fix real business problems. And once they have the data, what do they do next to actually change the business?

Process

  • Recognize that governance is more important than ever. Nobody’s role ever goes away with self-service! If anything, you get busier.
  • Align self-service initiatives with prioritized organizational goals. Which goals are important to your organization? Focus there first — because as you grow the initiative, you want to be able to show the value.
  • Capture anecdotes about measurable benefits and successes. Understand what came out of the initiatives, and share those stories — that becomes part of your ability to scale.
  • Build incrementally and agilely. The idea is to start small and take small steps. “How do you rein in shadow IT? You don’t! You embrace it!” Find out why they’re doing that work — what is the barrier to doing it another way? Ask the questions (“show me what you do”) and work out how to do it from a more corporate point of view.

Technology

  • Think end-to-end across the comprehensive analytic process
  • Recognize that not all analytics — nor users — are the same
  • Provide a toolbox of analytic capability, as opposed to one tool.
  • Teach users to “fish” for insights.

What does the end-to-end process look like?

  • Data to Insight to Action to Impact.
  • Acquire to Organize to Analyze to Deliver to Measure.

In other words, don’t stop at analysis, getting people to look at data — work all the way through to impact. Think about using the cloud in order to scale fast. Typically today, we see a hybrid approach between cloud and on-premise.
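
A purely illustrative way to picture that end-to-end chain in code — every stage below is a stub I’ve made up:

```python
# The point of the chain: don't stop at "analyze" -- carry each result
# through to delivery and then measure the impact. All stages are stubs.
def acquire() -> str:             return "raw sales data"
def organize(data: str) -> str:   return f"curated {data}"
def analyze(data: str) -> str:    return f"insight from {data}"
def deliver(insight: str) -> str: return f"action taken on {insight}"
def measure(action: str) -> str:  return f"measured impact of {action}"

print(measure(deliver(analyze(organize(acquire())))))
```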

Four-tier analytic architecture, for example: an information portal, an analytics workbench, data science laboratory, and an artificial intelligence hub.

The future is louder and clearer

  • Pervasive machine learning
  • Augmented analytics enables access
  • It’s more than structured vs unstructured — voice, video, image, etc.
  • Citizens get down to business.


Recommendations:

  • Recognize self-service as one component of your complete data and analytics strategy
  • Think end-to-end and think BIG
  • Plan and build a self-sustaining, self-service ecosystem incorporating more than just technology.
  • Design and prepare for flexibility, scalability, and change
  • Move from self-service to empowerment

