Why In-Memory Computing Is Cheaper And Changes Everything


I attended an SAP-sponsored breakfast meeting held in Sydney earlier this year, as part of an ANZ/APJ innovation analytics tour. The meeting was hosted by Ryan Blackwood, Director of Strategy & Innovation for SAP Australia, and featured a talk by Donald Feinberg, Gartner VP and Distinguished Analyst, on the “Nexus of Forces”: social, mobile, cloud and information.

Here is the first part of the notes I took from Donald’s presentation, formatted for legibility. In Part 2, Donald talks about the business impact of in-memory technologies, and in Part 3, how to create an in-memory action plan.


Some people think that we’re talking about in-memory because of all the hype that’s been going on in the market – but it’s not just SAP. Every single vendor out there is getting involved in in-memory technology. We actually started looking at in-memory technology at Gartner back in 2009, and in-memory databases have been around in some form since the 90s.

This technology is now becoming disruptive.

Common in-memory myths


Myths:

  • Is in-memory hype spread by SAP? Not likely, because there are over 50 companies that have some type of in-memory technology.
  • It’s new and unproven? Wrong. We have been using in-memory technology since the 90s. We’re not talking here about caching in memory; we did that with 360s, with only 24K of memory. In-memory technologies that actually use memory not as a cache, but for the actual data that they fetch and change, have been around since the early 90s.
  • In-memory technology is expensive, and only those with really deep pockets can afford it? Not true. There are several in-memory technology vendors whose largest customer base is SMEs, which by definition don’t have a lot of money to spend. And the cost of the technology is coming down fast.
  • And this is not a niche technology just for analytics. We’re using it for all kinds of use cases today, such as detecting trading fraud in the financial industry, detecting telephone fraud, and gaming, where everything has to be instantaneous. Analytics will run faster, but it is not true that in-memory is only for analytics.

In-memory is going to change the way you do everything

In-memory computing will have a long-term, disruptive impact by radically changing users’ expectations, application design principles, product architectures, and vendors’ strategies.

This is going to change the way you do everything. Everybody in this room will be running their entire IT organization in-memory in the next 15 to 20 years. It’s not going to happen overnight. But within the next 15 years, you will run your whole operation in memory. You won’t have tape drives, you won’t have disk drives. You’ll be using flash and memory. Flash will be your backup, your archive, and memory is where you’re going to run everything. And that’s absolutely a fact.

What is in-memory computing?


When we talk about in-memory computing, we are talking about DRAM (dynamic RAM), which is volatile: it doesn’t hold data if you lose power. It’s not about flash or NAND memory. Flash is a form of memory, but it’s not what we’re talking about when we talk about in-memory computing.

All forms of flash today are used like disk drives. Even though we may remove the controller as a bottleneck, applications are still doing I/O to a flash drive or a flash board. Flash is getting much more reliable and cheaper, so it is going to become a persistence mechanism that replaces disk.

Today, flash lasts longer than disk drives. If you replace your hardware every three to four years, and you have both flash SSDs and disk, you will probably not see a single flash failure in that period of time, but I guarantee you that you will be changing disk drives.

When we talk about in-memory, we are talking about the physical database being in-memory rather than as it is “traditionally” done: on disk.

What is the difference? Database engines today do I/O. If they want to get a record, they read; if they want to store a record, they write, update, delete, and so on. The application, which in this case is a DBMS, thinks that it’s always writing to disk. If the record being read or written happens to be in flash, it will certainly be faster, but the engine is still reading and writing. Even if I’ve cached it in DRAM, it’s the same thing: I’m still reading and writing.

What we’re talking about here is the actual database being physically in memory. I’m doing a fetch to get data, not a read, so the logic of the database changes. That’s what in-memory is about, as opposed to the traditional types of computing.
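
To make the read-versus-fetch distinction concrete, here is a minimal sketch (in Python, with hypothetical record formats and table contents, not any vendor’s actual engine) contrasting a disk-oriented lookup, which issues I/O and deserializes pages, with an in-memory lookup, which is just a direct fetch from a structure that already lives in DRAM:

```python
# Illustrative sketch only -- not any vendor's actual engine.
import struct

PAGE_SIZE = 4096
RECORD_FMT = "i32s"                       # hypothetical fixed-size record: id + name
RECORD_SIZE = struct.calcsize(RECORD_FMT)
RECORDS_PER_PAGE = PAGE_SIZE // RECORD_SIZE

def disk_style_lookup(datafile, record_no):
    """Disk-oriented path: seek, read a whole page, then deserialize.
    Even if the file sits on flash or in the OS cache, the engine is
    still issuing read() calls and copying bytes around."""
    page_no, slot = divmod(record_no, RECORDS_PER_PAGE)
    with open(datafile, "rb") as f:
        f.seek(page_no * PAGE_SIZE)       # I/O: position on "disk"
        page = f.read(PAGE_SIZE)          # I/O: pull the page into memory
    return struct.unpack_from(RECORD_FMT, page, slot * RECORD_SIZE)

# In-memory path: the table *is* an in-DRAM structure; getting a record
# is a direct fetch (an index/pointer dereference), not a read().
in_memory_table = {7: (7, b"Jane Smith")}

def in_memory_lookup(record_no):
    return in_memory_table[record_no]     # fetch, not read
```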

Why is it time for in-memory computing?

Why now? The most important thing is this: DRAM costs are dropping about 32% every 12 months. Capacities are getting bigger, and costs are getting lower. If you looked at the price of a Dell server with a terabyte of memory three years ago, it was almost $100,000 on their web site. Today, a server with more cores (sixteen instead of twelve) and a terabyte of DRAM costs less than $40,000.
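
As a rough sanity check on those numbers (assuming the roughly 32% annual decline compounds, and taking the ~$100,000 figure as the starting point), the projected price after three years lands in the same ballpark as the sub-$40,000 server quoted above:

```python
# Rough check of the numbers quoted above; both inputs come from the
# talk, and the 32% decline is assumed to compound yearly.
price_then = 100_000        # approx. price of a 1 TB server three years ago
annual_decline = 0.32       # DRAM cost drop per 12 months
years = 3

price_now = price_then * (1 - annual_decline) ** years
print(f"Projected price after {years} years: ${price_now:,.0f}")
# ~$31,400 -- consistent with "less than $40,000" today
```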

In-memory results in lower total cost of ownership

So the cost of this stuff is not outrageous. For those of you who don’t understand storage, I always get into this argument: the total cost of acquisition of an in-memory system is likely higher than that of a storage-based system. There’s no question. But the total cost of ownership is lower, because you don’t need storage people to manage memory. There are no LUNs [logical unit numbers]: all the things your storage technicians do go away.

People cost more than hardware and software – a lot more. So the TCO is lower. And then there’s power: one IBM study showed that memory uses 99% less power than spinning disks. So unless you happen to be an electric company, that’s going to mean a lot to you. Cooling is lower; everything is lower.

So don’t let somebody tell you that you can’t go in-memory because it costs so much more money. Acquisition costs may be higher, but if you calculate out the full TCO, it’s going to be less.
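
To illustrate the argument, here is a back-of-the-envelope TCO sketch. Every figure in it is a placeholder assumption rather than data from the talk; the point is only that people and power tend to outweigh the acquisition price over a hardware lifetime:

```python
# Back-of-the-envelope TCO comparison over a hardware lifetime.
# All figures are placeholder assumptions for illustration only.

YEARS = 4

def tco(acquisition, admin_ftes, fte_cost_per_year, power_kw,
        power_cost_per_kwh=0.15):
    power_cost = power_kw * 24 * 365 * YEARS * power_cost_per_kwh
    people_cost = admin_ftes * fte_cost_per_year * YEARS
    return acquisition + people_cost + power_cost

# Disk-based system: cheaper to buy, but needs storage admins
# (LUNs, tiering, etc.) and draws more power for spinning disks.
disk_tco = tco(acquisition=250_000, admin_ftes=1.0,
               fte_cost_per_year=120_000, power_kw=8.0)

# In-memory system: higher acquisition cost, but little storage
# administration and a far lower power draw for the data itself.
memory_tco = tco(acquisition=400_000, admin_ftes=0.2,
                 fte_cost_per_year=120_000, power_kw=2.0)

print(f"Disk-based TCO over {YEARS} years: ${disk_tco:,.0f}")
print(f"In-memory TCO over {YEARS} years:  ${memory_tco:,.0f}")
```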

>>Continue to Part 2, where Donald talks about the business impact of in-memory technologies, or Part 3, how to create an in-memory action plan.

In addition: if you’re interested in hearing Donald Feinberg talk about this, a web seminar is available (registration required).

Comments



  1. Rick

    I have no background in this regard, I’m not even a techie, so forgive me if I’m asking a stupid question, but I’m trying to figure this out.

    quote [When we talk about in-memory computing, we are talking about DRAM (dynamic RAM), which is volatile: it doesn’t hold data if you lose power.]

    So this means all data is lost in case the system goes down, right? Meaning that the data also needs to be stored persistently, correct? The picture explaining the difference between traditional and in-memory computing demonstrates this by showing a DB outside of the memory for purposes of persistence, recovery, post-processing and backup. That all seems reasonable, but it indicates that one can’t actually do without a persistent repository such as a disk-based storage system; with the (additional) costs of acquisition, with the LUNs, with the expensive engineers to manage it all, and ultimately with the high TCO?

    1. Timo Elliott

      Rick, I think Donald covers one way to do backup/failover: two systems running simultaneously, each acting as the backup for the other. Otherwise, in-memory systems tend to use SSD to back up data – but that can be in the same machine, so it’s separate from the point that X times faster = fewer servers needed / fewer servers to maintain…



  2. Leo

    Could you explain what you mean by
    “I’m doing a fetch to get data and not a read. So the logic of the database changes”?
    Thanks

    1. Timo Elliott

      It was Donald Feinberg saying this, not me, but here’s a link that does a reasonable job of explaining the differences (basically, with a traditional engine you get all the overhead of pretending it’s a disk): http://www.mcobject.com/in_memory_database.