Table of Contents
- Summary
- Introduction
- Primary approaches to in-memory databases
- Historical context and trends
- An in-memory DBMS is much more than a DBMS in memory
- Summary of design principles for Microsoft’s in-memory OLTP
- Competitive approaches
- SQL Server 2014 in-memory design principles and business benefits
- Key takeaways
- Appendix A: Why an in-memory database is much more than a database in memory
- Appendix B: Case studies
- About George Gilbert
- About GigaOm
- Copyright
1. Summary
The emerging class of enterprise applications that combine systems of record and systems of engagement has geometrically growing performance requirements. These applications must capture more data per business transaction from ever-larger online user populations. They offer many capabilities similar to consumer online services such as Facebook or LinkedIn, but they need to leverage decades of enterprise investment in SQL-based technologies. Just as these new customer requirements have emerged, SQL database management system (DBMS) technology is going through its biggest change in decades. For the first time, there is enough inexpensive memory capacity on mainstream servers for SQL DBMSs to be optimized around the speed of in-memory data rather than the performance constraints of disk-based data. This new emphasis enables a new DBMS architecture.
This research report addresses two audiences.
- The first is the IT business decision-maker who has a moderate familiarity with SQL DBMSs. For them, this report explains how in-memory technology can leverage existing SQL database investments to deliver dramatic performance gains.
- The second is the IT architect who understands the performance breakthroughs possible with in-memory technology. For them, this report explains the trade-offs that determine the different sweet spots of the various vendor approaches.
There are three key takeaways.
- First, there is an emerging need for a data platform that supports a variety of workloads, such as online transaction processing (OLTP) and analytics at different performance and capacity points, so that traditional enterprises don’t need an internal software development department to build, test, and operate a multi-vendor solution.
- Second, within Microsoft’s data platform, SQL Server 2014 In-Memory OLTP not only leverages in-memory technology but also scales up to 64 virtual processor cores, delivering a 10- to 30-times gain in throughput without the challenge of partitioning data across a cluster of servers.
- Third, Oracle and IBM can scale to very high OLTP performance and capacity points, but they require a second, complementary DBMS to deliver in-memory technology. SAP’s HANA aims to deliver a single DBMS that supports the full range of analytic and OLTP workloads, and the industry is closely watching how well it optimizes performance for both. NewSQL vendors VoltDB and MemSQL are ideal for greenfield online applications that demand elastic scalability and automatic partitioning of data.