Table of Contents
- Summary
- Market Framework
- Maturity of Categories
- Considerations for Implementing Data Lake Solutions
- Vendor Review
- Near-Term Outlook
- Key Takeaways
- Methodology
- About Andrew Brust
- About GigaOm
- Copyright
1. Summary
This report covers the technologies, vendors, and use-case observations around data lakes: special repositories for structured, semi-structured, and unstructured data, often large in volume, that needs a place of its own and appropriate platforms to process and query it. Data lakes contain the data that we previously did not analyze, did not save, and maybe did not even collect. But with today’s advances in container, cloud, and analytics technology, and with a sense that data is a strategic asset critical to competitiveness, we now see this same data as meriting storage, curation, and analysis.
Almost everyone in the analytics market is aware of data lakes and has some sense of what they are. Nonetheless, the term can be malleable, with different vendors and different analysts using it in varying ways. That makes tracking and understanding the category a bit elusive, and it may lead some parties to dismiss data lakes as a “fad” or as unimportant. But that would be a big mistake.
Even if the term began as something vague and approximate, it is evolving into something much more precise: one that identifies technologies and an analytics modality growing in cohesion and importance.
The biggest use case for data lakes is ad hoc analysis of raw data, sometimes rather serendipitously gathered. Perhaps a point-of-sale system has produced a log of transactions in CSV (comma-separated values) format, and it got added to the lake. Perhaps a public open data set with weather information was added there as well. Or maybe the output of a data processing job, in Parquet format, was saved to a folder somewhere. Proactive business people may want to blend and analyze this data to, for example, correlate same-store sales performance with daily weather conditions at each location. Data lakes are great for storing such data and executing such workloads.
More Work in the Warehouse
While the same work could be done with a data warehouse and a BI tool, doing so would require a lot of up-front work, planning, and decision making. All three data feeds would have to be loaded into the warehouse, using a data prep tool, an extract, transform, and load (ETL) platform, or hand-coded SQL scripts. Tables with predefined schemas would need to be created so that the data would have someplace to land.
Moreover, a decision would have to be made, possibly by multiple parties, that the data warranted space in the warehouse to begin with. And, even if all that were done, the people who were interested in the analysis would need access to the warehouse and this particular data within it.
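For illustration, here is a minimal sketch of the schema-on-write steps the warehouse path implies; DuckDB serves here merely as a stand-in for a warehouse engine, and the table, column, and file names are hypothetical.

```python
import duckdb

# Connect to a persistent database file. (DuckDB is a stand-in for
# a warehouse engine here; all names are hypothetical.)
con = duckdb.connect("warehouse.db")

# Schema-on-write: the table and its column types must be designed
# and created up front, before any data can land.
con.execute("""
    CREATE TABLE IF NOT EXISTS sales (
        store_id  INTEGER,
        sale_date DATE,
        amount    DECIMAL(10, 2)
    )
""")

# An explicit load step (a stand-in for an ETL job or a hand-coded
# script) moves the raw file into the predefined table.
con.execute("""
    INSERT INTO sales
    SELECT * FROM read_csv_auto('pos_log.csv')
""")
```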
Compare this with firing up a data lake query engine, writing some SQL targeting one or more files in the lake, and getting back the results. Even if that query takes longer to run than it would on the warehouse (and it probably will), the time-to-value is orders of magnitude faster and the chance of a missed opportunity that much lower.
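To make that concrete, here is a minimal sketch of the lake-side approach, again using DuckDB as the query engine; the file paths and column names are hypothetical stand-ins for the point-of-sale log and weather data set described earlier.

```python
import duckdb

# Query the raw files where they sit: a CSV transaction log joined
# to a Parquet weather data set. No tables, no predefined schemas,
# no load step. (File paths and column names are hypothetical.)
result = duckdb.sql("""
    SELECT s.store_id,
           SUM(s.amount)        AS daily_sales,
           AVG(w.precip_inches) AS avg_precip
    FROM 'pos_log.csv' AS s
    JOIN 'weather.parquet' AS w
      ON s.sale_date = w.obs_date
    GROUP BY s.store_id
""")
print(result)
```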
That is the beauty of a data lake. It is more inclusive, less formal, and more expedient. It casts a wider net, and provides more versatility. A data lake is critical to data-driven organizations and those looking to join their ranks, especially those on the digital transformation journey. A data warehouse helps you answer the “known unknowns.” A data lake helps you discover and answer the “unknown unknowns.”
This report will look at the key vendors in the data lake space, along with the storage and query technologies that comprise their stacks. After reading it, you should understand the major data lake building blocks and how each vendor combines them in its own data lake solution.
Key Findings
- While data warehouse vendors will market their platforms as data lake-capable, the warehouse and the lake remain unique repositories, with distinct workloads.
- A hallmark of the data lake is the ability to store data in an “agnostic” repository to which numerous engines, both open source and proprietary, can connect in order to transform and query it.
- Establishing a “data culture” justifies the data lake investment but, for many organizations, machine learning (ML) makes the investment pay off.
- The canonical data lake technology stack has largely shifted from the Hadoop Distributed File System (HDFS) and Hive to cloud object storage and some flavor of Presto.
- Parquet is the preferred file format for data in the lake, but there are plenty of CSV files out there, and that is not likely to change.
- Apache Spark is the data lake’s augmenting technology, enabling data transformation, streaming data ingest, and training of ML models in situ (see the sketch following this list).
- The public cloud providers dominate data lake mindshare, but other vendors may have more refined, “turn-key” solutions.
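To illustrate the Spark finding above, here is a minimal PySpark sketch of in-situ transformation over files in the lake; the bucket path and column names are hypothetical.

```python
from pyspark.sql import SparkSession, functions as F

spark = SparkSession.builder.appName("lake-transform").getOrCreate()

# Read Parquet directly from object storage, the "agnostic"
# repository that other engines can also target. (The bucket path
# and column names are hypothetical.)
sales = spark.read.parquet("s3a://example-lake/sales/")

# Transform in situ and write the result back to the lake as Parquet.
daily = (sales
         .groupBy("store_id", "sale_date")
         .agg(F.sum("amount").alias("daily_sales")))

daily.write.mode("overwrite").parquet("s3a://example-lake/daily_sales/")
```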