The Business Impact of In-Memory Computing, From Run to Transform


This is the second post based on an SAP-sponsored breakfast meeting organized in Sydney earlier this year, as part of an ANZ/APJ innovation analytics tour, with speaker Donald Feinberg, Gartner VP and Distinguished Analyst, explaining the “Nexus of Forces”: social, mobile, cloud, and information.

The first post covered why in-memory is disrupting everything, and why every organization will be running in-memory in 15 to 20 years’ time.

In this post, Donald explains the business impacts of the new in-memory computing possibilities, and in the next post, how to create an in-memory action plan.

These comments are based on my notes taken from the speech, formatted for legibility.


Business impact of in-memory computing


What is the impact of in-memory computing on your business? It’s about running the business, growing the business, and transforming the business, and you need to look at the business impact of this technology across all of these.

Run the business

One of the biggest advantages of in-memory that people forget is this: right now, you have lots of applications, and today, people typically run one application per server. Let’s say your corporate applications run on ten servers today, spread out across locations because of the constraints of storage access, processor speed, application speed, and database access.

If I can consolidate that down to a single server, I‘m going to save a lot of money right off the bat. Not only power, floor space, and cooling, but the replacement costs every three to four years for ten or twenty servers are far more than for one. It’s not necessarily a single server (it may be one or two), but it’s going to be far fewer.

The people required to maintain it are going to be fewer, your maintenance costs per year are going to be lower, everything is less. So the speed of these in-memory technologies on just running your business (forget about transforming for a minute) is going to be a huge saving, because if one application runs a hundred times faster on a server, I can get more applications onto that server.

When I said you’re going to run your whole business in-memory in 10 or 15 years, I left off the fact that it’s going to be on a single server the size of what you think of as a desktop server, plugged into the wall with no special air-conditioning needs. That’s the kind of miniaturization and speed that in-memory is bringing to the table, with huge savings.

I know many of you are saying “he’s not talking about high availability or disaster recovery”. All of that is coming — and it also is miniaturized. You’re not going to run your business on one of these, you’re going to run your business on two of them, sitting next to each other, duplicating everything it does, synchronously. That’s your high availability. Then you’ll put another one somewhere else, in somebody’s home, 250 or 800 kilometers away, and that’s your disaster recovery center. You hire a disaster recovery manager in Perth, and put the disaster recovery in his house — that’s the way it will be in the future.

Transform the business

The latency with in-memory is so low that you can do things synchronously that you wouldn’t have thought to do synchronously before. It’s not only a matter of how many things you can do, and how much you can fit into this box because of the speed, but it’s also because of what the latency is going to give you.

Why is that important? Think about where information and mobile and social come together, and you need to do messaging and things like that. Because of this lower latency, I can start to do things I couldn’t even consider before, because I couldn’t get it fast enough to even think about it.

How many of you have applications that you thought about building, but because things took so long on your system, it just wasn’t reasonable to do? And I’m not only talking about the obvious ones: if you’re in the manufacturing business and your MRP run takes four to six hours overnight, now you can run it in five seconds, so you can use the application differently.

And other things that you couldn’t do at all now become possible. When we start to do sentiment analysis on social networks and build it into a planning application that runs in seconds, that’s huge in terms of how it can change your business.

Think about if somebody says to you “I want to buy 10,000 cases” and you don’t even know if you can produce that. And then he says “I want it next week.”

How long does it take your company to commit to that, and to figure out a price that may in fact be higher because I’m going to bounce other customers off the production line in order to get this done? If you can do that with a latency of seconds, it changes the way you do business.

That now is getting into “transform the business”, because an application that you view as “a forecasting package that I run overnight” is not just a forecasting package if I can run it in five seconds or two minutes. It becomes a sales tool, changing the way I’m doing business.

The example that I like to use is this: airlines want to sell you discount tickets. Most people don’t know that airlines re-price all the tickets on all their planes every night. So your company goes and buys a full-fare ticket because you need it refundable.

The next day, that flight may have two more discount tickets, because they have a yield that they need for each plane, for each flight, so they can go through a whole calculation that tells them how many discount tickets they can have. Now, why is this valuable to them? Well, if you get on to, say, Qantas today and say “I want to go to Singapore and I want a discount ticket” and there are none on the day of the flight that you want, most of you wait until tomorrow to see if there are any, right? Not true: most people don’t even know that happens. Instead, what you’re going to do is switch over to Singapore Airlines, and if they have a ticket, you’re going to buy it, and Qantas just lost the revenue.

But if Qantas could re-price every seat on every plane every time a ticket was sold, that business wouldn’t go away. If you had an application like that, which in-memory will allow you to do, and you went to the CEO of the airline and said “we have this application, do you want it?”, how much do you think they would be willing to pay? I’ll tell you — they won’t even ask how much it costs. That’s how much it transforms their business, and changes what they do. They’ll pay whatever you want.

In-memory computing technologies


So far, we’ve been talking just about in-memory DBMS. Here are some of the other ways the technology is used.

In-memory data grids have been around a long time. If any of you build web applications, you may be using some of them. Memcached is the one that comes to mind, an open-source product where your data is in memory, in the application tier, and scales across multiple computers, multiple servers. That technology has been around a long time and enables some of the biggest web applications that you’re all using, including Amazon, including eBay, and all the spinoffs of those.
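To make the pattern concrete, here is a minimal sketch of the cache-aside style that data grids like Memcached enable. It assumes a memcached server running locally on the default port 11211 and the third-party pymemcache client; `load_from_database` is a hypothetical stand-in for your system of record:

```python
# Minimal cache-aside sketch: serve hot data from memory, falling back
# to the slower system of record on a miss.
# Assumes a local memcached server on the default port 11211 and the
# third-party pymemcache client (pip install pymemcache).
from pymemcache.client.base import Client

cache = Client(("localhost", 11211))

def get_product(product_id, load_from_database):
    """Return product data, preferring the in-memory cache."""
    key = f"product:{product_id}"
    cached = cache.get(key)
    if cached is not None:
        return cached  # served from memory, no disk access
    value = load_from_database(product_id)  # slow path: system of record
    cache.set(key, value, expire=300)       # keep it in memory for 5 minutes
    return value
```

In a real grid, the cached keys are partitioned across many servers, which is what lets the pattern scale out horizontally.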

High-performance messaging infrastructure. Think about what happens if you want to send a message out to four or five thousand of your customers at a time, whether it’s an SMS message or something else: in-memory is going to be able to do that much more quickly.

Wouldn’t it be nice, if you’re an airline cancelling a flight, to get those messages out quickly? Or in retail, if you’re going to send a special pricing discount to all the customers registered on your site, and you’re a big retailer with a hundred thousand or a million customers, think about what high-performance messaging makes possible.
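As an illustrative sketch only: once the recipient list is already in memory, the fan-out itself is simple; `send_sms` here is a hypothetical stand-in for whatever SMS or email gateway you actually use:

```python
# Illustrative fan-out sketch: push one message to many recipients
# concurrently from an in-memory list. send_sms is a hypothetical
# stand-in for a real SMS/email gateway call.
from concurrent.futures import ThreadPoolExecutor

def send_sms(phone_number, text):
    print(f"-> {phone_number}: {text}")  # replace with a real gateway call

def broadcast(recipients, text, workers=50):
    # The recipient list is already in memory, so the only latency
    # left is the gateway itself; threads overlap those waits.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        for number in recipients:
            pool.submit(send_sms, number, text)

broadcast(["+61400000001", "+61400000002"], "Flight QF81 is cancelled.")
```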

Complex event processing. That’s what fraud detection is all about, whether for cloned cell phones, trading fraud, credit-card fraud, or anything else where analysis takes place in real time on streaming data coming into a computer: I make a decision on an event as it happens, and then do something about it.
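Here is a minimal sketch of that kind of in-memory event rule, with purely illustrative thresholds and event shape: flag a card that is used too many times within a short sliding window, deciding on each event as it arrives rather than in a later batch:

```python
# Minimal event-processing sketch, kept entirely in memory: flag a card
# used in too many transactions within a short sliding window.
# Thresholds and the event shape are illustrative.
from collections import defaultdict, deque

WINDOW_SECONDS = 60
MAX_EVENTS = 5

recent = defaultdict(deque)  # card_id -> timestamps of recent events

def on_transaction(card_id, timestamp):
    """Called for every incoming event; returns True if suspicious."""
    window = recent[card_id]
    window.append(timestamp)
    # Drop events that have fallen out of the sliding window.
    while window and timestamp - window[0] > WINDOW_SECONDS:
        window.popleft()
    return len(window) > MAX_EVENTS  # decide on the event as it happens
```

Production CEP engines add richer pattern matching across event streams, but the core idea is the same: the state lives in memory so the decision can happen at event speed.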

In-memory application servers. These are necessary if you’re going to consolidate all your applications onto a single box, or a pair of them. Your application servers have to be in-memory, not based on disk drives, or they won’t run as fast as all the other in-memory technology that the applications running in them depend on.

All of these together make up “in-memory technologies”. The providers of this technology are going to merge, and all of this is going to become an in-memory megadata platform over the next three to five years. Data grids are going to go away and just become part of the in-memory database; these two will be the first to merge, and they’re already merging with in-memory analytic applications and application servers.

That’s the future, as they merge together, which will enable you to run your whole business in memory.

Drivers of in-memory computing

drivers-of-in-memory

So what drives all this? Well, big data. Now remember “big data” is not just about volume. When we mention big data with respect to in-memory, people think we’re crazy, because big data is a lot of data, and people say “I’m not going to put a petabyte in memory: it’s too expensive!”

“Big data” is volume (big size), and/or velocity (how fast the data is coming in), and/or variety (unstructured data). In-memory can support velocity today; that was one of its first use cases, with high-speed data coming in through event processing, smart metering, and so on. And it can support unstructured data. As the price comes down and compression gets better, it will also start to handle larger and larger volumes of data.

Real-time analytics. For years, Gartner has said there is no such thing as “real-time”. Today, you are running analytics on data that comes from a transaction system, which means there is latency: some ETL or data-integration process has to move the data from the transaction system to the data warehouse before you can do those analytics. The only way you can do real-time analytics is if it is done on the transaction data the moment the transaction completes. So that is one of the drivers for this.
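As a rough sketch of the difference, using SQLite’s in-memory mode as a stand-in for an in-memory DBMS: the analytic query runs against the live transaction table the instant a transaction commits, with no ETL hop in between:

```python
# Sketch of the "no ETL" idea, using SQLite's in-memory mode as a
# stand-in for an in-memory DBMS: the aggregate query runs against the
# live transaction table the moment a transaction commits.
import sqlite3

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE sales (region TEXT, amount REAL)")

# A transaction commits...
db.execute("INSERT INTO sales VALUES ('ANZ', 12500.0)")
db.commit()

# ...and the analytics see it immediately, with no data movement.
for region, total in db.execute(
    "SELECT region, SUM(amount) FROM sales GROUP BY region"
):
    print(region, total)
```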

24×7 with no batch windows. If your batch window drops to zero, you’re going to have to run things very quickly. Batch is going away. That Materials Requirements Planning batch run that takes six hours? If it starts to run in three to four seconds, it’s really no longer batch.

So the whole concept of batch disappears with in-memory technology. Any time you see words like “awareness”, you’re talking about in-memory: making an application aware of things means real time, and that means you need the speed and low latency of in-memory technology to do it.

Inhibitors of in-memory computing adoption

So what’s slowing us down?

A lot of these are perceptions. Take the perception that it’s a complex architecture: it doesn’t have to be.

Or the perception that it’s unrealistic: today, this technology is emerging, and yes, it’s disruptive, but no, you can’t do everything with it, so expectations have to be set right. There are of course no standards yet, there aren’t a lot of skills, and there aren’t a lot of best practices, because all of this is just emerging. That will happen over the next few years.

So yes, there are many drivers, but at the same time there are many inhibitors, a lot of which you can change by setting expectations and perceptions correctly. IT ends up looking at all this data and asking “what do I do with it all?”, and the bottom line is: if your assumption is that you can’t do anything with it, you’re not going to do anything with it.

 

See the first post, on why in-memory changes everything, or the next post in the series: Part 3, how to create an in-memory action plan. In addition, if you’re interested in hearing Donald Feinberg talk about this, a web seminar is available (registration required).



Comments

Marc:

Hi Timo,
Great post. I especially like the airline ticket example. A question regarding in-memory analytics: what are your detailed thoughts on this? Do you really think they will merge with in-memory DBMS, or will there be a market for separate in-memory analytical appliances?
Br,
Marc

Timo Elliott:

Marc, I think in-memory will replace disk at the heart of the “new corporate infrastructure platforms”, and eventually do most of the analytics, too. The new core systems should also be able to provide cheap, flexible, easy “sandboxing” for anybody who wants to gather information from multiple sources and analyze it, removing the need for separate analytics silos, but that might take a while…
