GigaOm Names Circle CI a Leader in CI/CD for Kubernetes
https://gigaom.com/video/gigaom-names-circle-ci-a-leader-in-ci-cd-for-kubernetes/ (Tue, 07 May 2024)

Analyst Darrell Kent joins Circle CI for an in-depth webinar discussing CI/CD for Kubernetes.

The Business Case for a SaaS Management Platform (SMP)
https://gigaom.com/brief/gigabrief-the-business-case-for-a-saas-management-platform-smp/ (Fri, 11 Feb 2022)

How cloud data warehouse vendors can benefit from a price benchmark
https://gigaom.com/2021/12/10/how-cloud-data-warehouse-vendors-can-benefit-from-a-price-benchmark/ (Fri, 10 Dec 2021)

Why use a Field Test/Price Benchmark?

Over the past few years, demand for low-cost cloud solutions has risen as more organizations shifted to remote work during the Covid pandemic. Services that allow organizations to be managed remotely, such as cloud data warehouses, have become an integral part of business strategy, and this shift has accelerated an already competitive market. A price benchmark helps companies recognize the vendors that excel in the field.

Cloud data warehouses analyze data at speed to deliver analytics and actionable insights in real time. Data-driven organizations can use data warehouses and relational analytical databases for advanced analysis across many areas of the business, such as marketing, credit risk evaluation, and fraud detection. It is therefore imperative to integrate a system that delivers the highest performance for its cost, so that it meets both business requirements and budget.

A benchmark test gives an overview of a set metric for a service or infrastructure, resulting in demonstrable evidence of the solution’s ability. Customers can see how vendor solutions compare in like-for-like situations. A price benchmark establishes how costs relate to specific performance characteristics, such as speed or latency, and can produce verifiable evidence of value for money. Competing vendors are ranked from most to least costly across a series of queries, producing an overview of comparable configurations.

Vendor services can often appear similar to customers, and when marketing material comparing cost and package is based on vendor-generated statistics, customers can lose trust in the solution. Data warehouse vendors need to show how their system compares with and outperforms competitors on metrics aligned with customer needs. Financial implications and performance are critical areas of interest for customers because they affect total cost of ownership, value, and user satisfaction.

Vendors can stay ahead of the market and improve profit margins with a price benchmark. The end report gives an invaluable level of insight into the chosen competitors’ pricing and operations, allowing vendors to strategize on the cost against scope and scale of the solution design.

Price vs. Performance

Cost is one of the main drivers of business decisions, as solutions need to fit the financial situation. Customers are often drawn to lower-cost software without understanding how the price relates to performance or scalability. Although vendor literature states the importance of individual metrics, it can be hard for organizations to interpret those metrics in terms of their own business. There is less confidence in internal marketing materials, which is where third-party endorsement of value can make an impact.

The market for cloud data warehouses is large, with new start-ups continually emerging and established vendors developing new services and upgrading current packages to stay ahead. Frequent changes to services and costs can lead to frustration and confusion, particularly when vendors already offer multiple service packages at various prices. This also makes it difficult for customers to compare solutions on a like-for-like basis.

Benchmarks indicate the level at which systems operate and can target key financial metrics, such as price-per-query (regardless of time taken). In turn, literature can be designed around the market landscape for data warehouses and how solutions support specific use-cases. This can help customers understand the breakdown of costs.

Credible Testing Environments

Price-performance tests derived from the industry-standard TPC Benchmark™ DS (TPC-DS) provide a professional, fully established standard for comparing systems. Solutions are configured to optimize their performance, so outcomes reflect their maximum level of service, and cluster sizes are chosen to achieve similar hourly costs. This gives a higher level of insight into how similar levels of performance are priced by individual vendors.

Benchmark tests try to align the hardware and software systems for comparable scenarios. Because it can be difficult to compare fully managed cloud data warehouse platforms directly, price per hour can be used as the basis. By drilling down into the elapsed time of the test (beyond the fastest-running query) and the cost of the platform, analysts can draw conclusions about the overall price-performance of a platform.
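
As an illustration of how these pieces combine, the sketch below computes a simple price-performance figure from an hourly platform price, an elapsed run time, and a query count. The vendor names, hourly rates, timings, and query counts are hypothetical placeholders, not results from any GigaOm field test.

```python
# Minimal sketch of a price-performance calculation in the spirit of a
# TPC-DS-derived field test. All numbers below are hypothetical placeholders.

def price_performance(price_per_hour: float, elapsed_seconds: float, num_queries: int):
    """Return (total cost of the run, average cost per query)."""
    elapsed_hours = elapsed_seconds / 3600.0
    total_cost = price_per_hour * elapsed_hours
    return total_cost, total_cost / num_queries

platforms = {
    "Vendor A": {"price_per_hour": 16.0, "elapsed_seconds": 5400, "queries": 99},
    "Vendor B": {"price_per_hour": 12.5, "elapsed_seconds": 7200, "queries": 99},
}

for name, run in platforms.items():
    cost, per_query = price_performance(
        run["price_per_hour"], run["elapsed_seconds"], run["queries"]
    )
    print(f"{name}: total run cost ${cost:.2f}, price per query ${per_query:.3f}")
```

At comparable hourly prices, the lower cost per query is the like-for-like figure a benchmark report would surface.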

Reports that offer a fully transparent overview of how the tests are performed allow vendors to draw their own conclusions from results, as well as offering trust and credibility in how the test was carried out. A step-by-step account of the test supports vendors, or customers, in recreating the assessment themselves, to corroborate results or rerun the configuration after changes have been made. 

The Optimal Time for Cloud Data Warehouses

Marketing cloud data warehouse solutions can be challenging when organizations are looking for low-cost options or do not understand the full capabilities of the systems. Digital transformation accelerated as businesses turned to remote working during the pandemic and workers’ demand for working from home grew. It is now the optimal time to market data warehouses, as businesses look to expand their analytical capabilities, but with a market full of vendors, standing out from the crowd is difficult.

Price benchmark testing delivers evidence of performance cost against chosen competitor vendors, so customers gain a clear comprehension of how cost is calculated and why solutions outrank their rivals. Results from benchmarks can be included in marketing collateral and can be the swaying factor in a customer’s business decision.

Contact GigaOm’s sales team today to learn more about the testing environments for data warehouses.

The Benefits Of A Performance Benchmark For API Management
https://gigaom.com/2021/12/09/the-benefits-of-a-performance-benchmark-for-api-management/ (Thu, 09 Dec 2021)

Introduction

Application programming interfaces, or APIs, are now the standard infrastructure for software-to-software communication. Popular infrastructures, such as containers and Kubernetes, have increased demand for high-performance, lightweight solutions, and organizations are selecting API management as a preferred option, avoiding the cost of custom code.

But with this increase in popularity comes an increase in vendors offering solutions, filling the market with good, and not-so-good, services. Vendors need to work harder to sell their product and prove that it is high-quality, cost-effective, and efficient. Solutions can often look the same on paper but differ greatly in practice, so customers need direction in finding the best fit for their requirements.

APIs should deliver high performance, allowing complex data to be processed at speed. Benchmark testing is a way of verifying the product, providing evidence that key characteristics, such as latency, behave as they should. Using benchmarks, vendors can be tested and positioned alongside competitors, providing recognition for those who excel in the field and proof that the software meets industry standards.

Benchmark and field test reporting can form the basis of tenders and marketing collateral, giving an independent, third-party perspective for an unbiased, professional overview. Customers respond better to marketing verified by an external source, as it gives weight to the information over and above material produced internally. It can also help customers understand how two seemingly similar API management solutions differ.

Pinpointing High-Quality API Management

API latency testing measures the delay as data is transferred from source to destination through a network connection. Latency is a key product characteristic for an API system: the quicker communications can be processed, the more appealing the system is to customers. If APIs add significant latency, site responsiveness and effective bandwidth suffer, so optimizing APIs can help reduce costs, increase customer traffic, and improve site functionality.
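
As a rough illustration of what a latency measurement involves, the sketch below times repeated requests against an endpoint and reports mean, median, and 95th-percentile latency. The endpoint URL and sample count are hypothetical placeholders; a production benchmark would control far more variables (geography, payload size, warm-up, and so on).

```python
# Minimal latency probe against an HTTP endpoint, using only the Python
# standard library. The endpoint URL and sample count are hypothetical
# placeholders.
import statistics
import time
import urllib.request

ENDPOINT = "https://api.example.com/health"  # placeholder endpoint
SAMPLES = 20

latencies_ms = []
for _ in range(SAMPLES):
    start = time.perf_counter()
    with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
        response.read()  # drain the body so the full round trip is timed
    latencies_ms.append((time.perf_counter() - start) * 1000)

latencies_ms.sort()
p95 = latencies_ms[max(0, int(len(latencies_ms) * 0.95) - 1)]  # rough 95th percentile
print(f"mean {statistics.mean(latencies_ms):.1f} ms, "
      f"median {statistics.median(latencies_ms):.1f} ms, p95 {p95:.1f} ms")
```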

Customers looking for faster software may not understand the full capabilities of APIs and microservices, and creating a compelling, credible argument for the reliability of a solution can take various forms, depending on individual requirements. The market is competitive, with more companies expanding their core platforms and the popularity of API tools increasing, driven largely by the Service-Oriented Architecture movement.

Offering a product that stands out can be difficult, particularly when protocols and methods vary greatly within the infrastructures themselves, both for start-ups competing in an already established market and for long-standing enterprises needing to advance and revamp their products.

Customers often look for the cheapest option without understanding the use case for performance-oriented products that carry a higher cost. An argument needs to be made for the execution and delivery of the API solution, to help customers understand how it functions in real-life scenarios. Benchmarks allow customers to see how solutions perform before committing to the service, and provide recognition for vendors offering high-quality services.

Difficulties in Testing Performance

Benchmark testing faces predictable challenges, particularly when software runs in the cloud. Configurations may favor one vendor over another, and fully managed, as-a-service offerings leave the underlying configuration (processing power, memory, networking, and so on) unknown. A like-for-like framework for testing allows configurations to be aligned across solutions.

Benchmarking is often done behind closed doors, but transparent tests provide vendors and customers with complete knowledge of the analysis, the results and how conclusions are formed. Testing environments should be devised to be as realistic and accurate as possible, so outcomes mimic a credible scenario.

The goal of API management performance benchmark testing is to provide evidence of how well each platform withstands significant transaction loads. For large and complex organizations, it is imperative to choose a system that can process large amounts of data. A benchmark report not only gives the customer evidence of performance but also shows who is best in the market.
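
A minimal sketch of that idea follows: it pushes a fixed number of concurrent requests through an endpoint and reports sustained throughput and success rate. The endpoint, request count, and concurrency level are hypothetical placeholders; a real benchmark would use a dedicated load generator and far larger volumes.

```python
# Minimal concurrent load sketch: push a fixed number of requests through a
# thread pool and report sustained throughput and success rate. The endpoint,
# request count, and concurrency level are hypothetical placeholders.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "https://api.example.com/orders"  # placeholder endpoint
TOTAL_REQUESTS = 500
CONCURRENCY = 25

def call_once(_):
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=10) as response:
            return 200 <= response.status < 300
    except Exception:
        return False

start = time.perf_counter()
with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    results = list(pool.map(call_once, range(TOTAL_REQUESTS)))
elapsed = time.perf_counter() - start

successes = sum(results)
print(f"{successes}/{TOTAL_REQUESTS} requests succeeded, "
      f"{successes / elapsed:.1f} successful requests/sec over {elapsed:.1f}s")
```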

Conclusion

APIs are driven by performance and the reduction in latency they can offer enterprises. Demand for APIs is increasing, and in turn, older solutions are evolving to include microservices while new vendors try to find their footing in the market.

Benchmark testing can offer vendors a verified, third-party endorsement using realistic environments to test performance and corroborate information in marketing material. GigaOm’s tests offer full disclosure, where other benchmarking companies conduct the tests behind closed doors. For business decision-makers needing support to promote an API solution within their company, these transparent and honest tests provide the evidence for a use case.

The Benefits of a Price Benchmark for Data Storage
https://gigaom.com/2021/12/08/the-benefits-of-a-price-benchmark-for-data-storage/ (Wed, 08 Dec 2021)

Why Price Benchmark Data Storage?

Customers, understandably, are highly driven by budget when it comes to data storage solutions. The costs of switching, upkeep, and upgrades are high-risk factors for businesses, so decision makers need to look for longevity in their chosen solution. Many factors influence how data needs to be handled in storage, from data that is frequently accessed to rarely accessed legacy data.

Storage performance may also be shaped by geographic location, whether for remote workers or global enterprises that need to access and share data instantly, or by the need for automation. Each element presents a new price point that customers and vendors alike need to consider.

A benchmark gives a comparison of system performance based on a key performance indicator, such as latency, capacity, or throughput. Competitor systems are analyzed in like-for-like situations that optimize the solution, allowing a clear representation of the performance. Price benchmarks for data storage are ideal for marketing, showing customers exactly how much value for money a solution has against competitor vendors.

Benchmark tests reinforce marketing collateral and tenders with verifiable evidence of performance capabilities and how the transactional costs relate to them. Customers are more likely to invest in long-term solutions with demonstrable evidence that can be corroborated. Fully disclosed testing environments, processes, and results, give customers the proof they need and help vendors stand out from the crowd.

The Difficulty in Choosing

Storage solutions vary greatly, from cloud options to those that utilize on-premises software. Data warehouses have different focuses which impact the overall performance, and they can vary in their pricing and licensing models. Customers find it difficult to compare vendors when the basic data storage configurations differ and price plans vary. With so many storage structures available, it’s hard to explain to customers how output relates to price, appeal to their budget, and maintain integrity, all at the same time.

Switching storage solutions is also a costly, high-risk decision that requires careful consideration. Vendors need to create compelling and honest arguments that provide reassurance of ROI and high quality performance.

Vendors should begin by pitching their costs at the right level; they need to be profitable but also appealing to the customer. Benchmarking can give an indication of how competitor cost models are calculated, allowing vendors to make judgements on their own price plans to keep ahead of the competition. 

Outshining the Competition

Benchmark testing gives an authentic overview of storage transaction-based price-performance, carrying out the tests in environments that imitate real life. Customers gain a clearer understanding of how the product performs in terms of transactions per second, and how competitors process storage data in comparison.

The industry standard for this kind of benchmarking is the TPC Benchmark™ E (TPC-E), a recognized standard among storage vendors. Tests need to be performed in credible environments; by giving full transparency into how those environments are constructed, vendors and customers can understand how the results are derived. This also proves that each system has been configured to offer the best performance of its platform.
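
To make the transaction-based price-performance idea concrete, the sketch below divides a total system cost by sustained transactions per second, in the spirit of the cost-per-throughput ratios TPC-E-style reports use. The platform names, system costs, and throughput figures are hypothetical placeholders.

```python
# Minimal sketch of a transaction-based price-performance figure: total cost
# divided by sustained throughput. All names and numbers are hypothetical
# placeholders, not measured results.

def price_per_tps(total_system_cost: float, transactions_per_second: float) -> float:
    """Total cost divided by sustained throughput; lower is better."""
    return total_system_cost / transactions_per_second

systems = {
    "Storage platform A": {"cost": 250_000.0, "tps": 4_800.0},
    "Storage platform B": {"cost": 310_000.0, "tps": 5_500.0},
}

for name, s in systems.items():
    print(f"{name}: {s['tps']:.0f} tps, ${price_per_tps(s['cost'], s['tps']):.2f} per tps")
```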

A step-by-step account allows tests to be recreated by external parties given the information provided. This transparency in reporting provides more trustworthy and reliable outcomes that offer a higher level of insight to vendors. Readers can also examine the testing and results themselves, to draw independent conclusions.

Next Steps

Price is a driving factor for business decisions, and the selection of data storage is no different. Businesses often look toward low-cost solutions that offer high capacity, and current trends have pushed customers toward cloud solutions, which are often cheaper and more flexible. The marketplace is crowded with options: new start-ups are continually emerging, and long-serving vendors need to reinvent and upgrade their systems to keep pace.

Vendors need evidence of price-performance, so customers can be reassured that their choice will offer longevity and functionality at an affordable price point. Industry-standard benchmarking identifies how performance is impacted by price and which vendors are best in the market – the confirmation customers need to invest.

 

AIOps: The Next Big Thing in IT Operations
https://gigaom.com/2020/06/26/aiops-the-next-big-thing-in-it-operations/ (Fri, 26 Jun 2020)

Are you looking into AIOps strategies or solutions? Register to attend our free webinar on July 30th entitled, “AI Ops: Revolutionizing IT Management with Artificial Intelligence”.

IT Operations have seen huge changes in the past two decades, but none may be more important than the adoption of artificial intelligence (AI) and machine learning (ML) to speed, enhance, and automate monitoring and management of IT infrastructures. Since 2017, AIOps tools have leveraged big data and ML in day-to-day operations and promise to become an important tool for IT organizations of every size.

But what even is AIOps? Let’s take a look at the basics of the technology, explore what it was designed to do, and see how it is developing.

What is AIOps?

By leveraging big data and ML in traditional analytics tools, AIOps is able to automate some parts of IT operations and streamline other elements through insights gained from data. The aim is to reduce the time burden placed on IT ops teams by administrative and repetitive activities that are still vital to the operation of the larger enterprise.

AI-enabled Ops solutions are able to learn from the data that organizations produce about their day-to-day operations and transactions. In some cases, the tools can diagnose and correct issues using pre-programmed routines, such as restarting a server or blocking an IP address that appears to be attacking one of your servers (a minimal sketch of such routines follows the list below). This approach provides a few advantages:

  1. It removes humans from many processes, only alerting when intervention is required. This means fewer operational personnel and lower costs.
  2. It integrates AIOps with other enterprise tools, such as DevOps or governance and security operations.
  3. It can detect trends and be proactive. For example, an AIOps tool can monitor an increase in errors logged by a switch and predict that it is about to fail.
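
Here is the minimal sketch of pre-programmed remediation routines referenced above. The thresholds, host names, IP addresses, and remediation actions are hypothetical placeholders; the sketch prints the action a real AIOps tool would carry out through its integrations.

```python
# Minimal sketch of pre-programmed remediation rules of the kind described
# above. Thresholds, hosts, IPs, and actions are hypothetical placeholders;
# the sketch only prints what a real tool would execute.

ERROR_RATE_THRESHOLD = 0.05        # restart a service above 5% errors
REQUESTS_PER_MIN_THRESHOLD = 1000  # block an IP above this request rate

def remediate(metrics: dict) -> list[str]:
    actions = []
    for host, error_rate in metrics["error_rates"].items():
        if error_rate > ERROR_RATE_THRESHOLD:
            actions.append(f"restart service on {host} (error rate {error_rate:.1%})")
    for ip, rpm in metrics["requests_per_min_by_ip"].items():
        if rpm > REQUESTS_PER_MIN_THRESHOLD:
            actions.append(f"block {ip} at the firewall ({rpm} req/min)")
    return actions

sample = {
    "error_rates": {"web-01": 0.02, "web-02": 0.11},
    "requests_per_min_by_ip": {"203.0.113.7": 4200, "198.51.100.4": 80},
}

for action in remediate(sample) or ["no intervention required"]:
    print(action)
```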

AIOps Categorization

AIOps is really an existing category of tools known as CloudOps and Ops tools, repurposed with AI subsystems. This is leading to a number of new capabilities, such as:

  • Predictive failure detection: This is achieved by using ML to analyze the patterns of activity of similar servers and determine what has resulted in a failure in the past (a minimal sketch follows this list).
  • Self-Healing: Upon spotting an issue with a cloud-based or on-premises component, the tool can take pre-programmed corrective action, such as restarting a server or disconnecting a bad network device. This should address 80 percent of ops tasks, automating everything but the most critical issues.
  • Connecting to remote components: The ability to connect into remote components, such as servers and networking devices both inside and outside of public clouds, is critical to an AIOps tool being effective.
  • Customized views: Information dashboards and views should be configurable for specific roles and tasks to promote productivity.
  • Engaging infrastructure concepts: This refers to the ability to gather operational data from storage, network, compute, data, applications, and security systems, and to both manage and repair them.
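
And here is the predictive-failure sketch referenced in the list: it fits a simple least-squares slope to a switch’s recent error counts and raises an alert when the trend is steep enough. A real AIOps tool would train an ML model on historical failure patterns; the slope calculation, error counts, and threshold here are simplifying assumptions for illustration.

```python
# Minimal sketch of predictive failure detection on a switch's error log. A
# least-squares slope over recent error counts stands in for the ML model a
# real AIOps tool would train; the counts and threshold are hypothetical.

def error_trend(error_counts: list[int]) -> float:
    """Least-squares slope of errors per interval (positive = getting worse)."""
    n = len(error_counts)
    xs = range(n)
    mean_x = sum(xs) / n
    mean_y = sum(error_counts) / n
    numerator = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, error_counts))
    denominator = sum((x - mean_x) ** 2 for x in xs)
    return numerator / denominator

# Hourly error counts logged by a switch over the last eight hours.
recent_errors = [1, 2, 2, 4, 7, 11, 18, 26]
SLOPE_THRESHOLD = 2.0  # errors per hour, per hour

slope = error_trend(recent_errors)
if slope > SLOPE_THRESHOLD:
    print(f"switch trending toward failure (slope {slope:.1f}); raise an alert")
else:
    print(f"switch within normal bounds (slope {slope:.1f})")
```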

We can divide AIOps into four categories: Active, Passive, Homogeneous, and Heterogeneous:

Active
Active refers to tools that are able to self-heal system issues discovered by the AIOps system. This proactive automation, where detected issues are automatically remediated, is where the full value of AIOps exists. Active AIOps allows enterprises to hire fewer ops engineers while increasing uptime significantly.

Passive
Passive AIOps can look, but not touch. They lack the ability to take corrective action on issues they detect. However, many passive AIOps providers partner with third-party tool providers to enable autonomous action. This approach typically requires some DIY engagement from IT organizations to implement.

Passive AIOps tools are largely data-oriented and spend their time gathering information from as many data points as they can connect to. They also provide real-time and analytics-based data analysis to power detailed dashboards for operations professionals.

Homogeneous
These AIOps tools live on a single platform, for example employing AI resources native to a single cloud provider like Amazon AWS or Microsoft Azure. While the tool can manage services such as storage, data, and compute, it can only do so on that one provider’s platform. This can impair effective operational management for those servicing a hybrid or multi-cloud deployment.

Heterogeneous
Most AIOps tools are heterogeneous, meaning that they are able to monitor and manage a variety of different cloud brands, as well as native systems operating within the cloud providers. Moreover, these AIOps tools can manage traditional on-premises systems and even mainframes, as well as IoT and edge-based computing environments.

Conclusion

AIOps creates opportunities for efficiency and automation that will reduce costs for businesses and free up time for IT Operations to invest elsewhere, in more valuable activities. As the field evolves, so too will the tools, innovating and developing new abilities and consolidating existing capabilities into core services.

2 Extract Load Transform Myths and Why They’re Wrong
https://gigaom.com/2020/06/05/2-extract-load-transform-myths-and-why-theyre-wrong/ (Fri, 05 Jun 2020)

Evaluating your ETL/ELT capabilities, or your data migration and transformation needs? Register for this free on-demand GigaOm webinar, “A Cloud Maturity Model: Migration, Movement, and Transformation.”

In data warehousing, the decades-old concept of Extract, Transform, and Load (ETL) is well-known and familiar. Enterprise organizations use ETL to extract data from their packaged systems and some custom, in-house line-of-business applications; transform the structure so that the data from these separate systems can be correlated and conformed; and then load that neatened, coordinated data into the warehouse. Oftentimes, data from half a dozen systems can be integrated this way, and it works pretty well.

A new approach to pre-processing data for the warehouse has been gaining favor, however. Extract-Load-Transform (ELT) modifies the sequence, loading data before it is transformed. But it is more than an arbitrary resequencing of the same steps; it is a fundamentally different approach to pre-processing data, in terms of both architecture and philosophy.

Unfortunately, misconceptions around ELT have sprung up, and these myths can discourage its adoption. Here, we tackle the two biggest myths around ELT and explore why they are wrong and why your organization should consider ELT if it hasn’t already.

Myth #1: ELT Is Just a Gimmicky Pivot on ETL

As a general statement, ELT is not just a novel exercise to show that changing the order of operations (i.e. transforming data after loading it, rather than before) yields an equivalent result. Instead, the ELT approach acknowledges that ETL platforms, which often run on a single server, take on an undue computing burden as the number of data sources and volume of data both increase.

In the “old days” of loading data from maybe half a dozen systems, at a frequency of once per day (or less), the burden was reasonable, and running it on ETL infrastructure took that load off the warehouse itself. This division made sense… then. In the present environment, however, data sources have increased by orders of magnitude, and load frequencies have increased dramatically – in some cases running almost continuously. This change means that the ETL infrastructure that formerly reduced load and contention on the warehouse can now become a point of failure in its continuous operation.

Furthermore, ELT systems can manage load logic natively, taking on scheduling, monitoring, and exception handling without requiring dedicated coding, and eliminating the range of errors such coding can introduce. And because the jobs leverage the computing power and MPP architecture of the corporate data warehouse (CDW), they run faster and provide the greater concurrency necessary to accommodate the increase in data sources, volumes, and load frequency. Transformation jobs, meanwhile, run on the warehouse itself and can take advantage of its (often much) greater scalability. This approach conforms much more closely to the principle of using the right platform for the right job.

Far from being a simple rearrangement of a process, ELT is a transformation of it. It frees up computing power, creates efficiencies in time and power use, and allows infrastructure to handle greater load.

Myth #2: ELT Implies a Schema-on-Read Approach

Identifying and untangling this myth involves some appreciation of nuance and clearly defining our terms. When we entered the era of Big Data (which, after all, is one of ELT’s catalysts), we also began endorsing a new proposition of working with analytic data, dubbed “schema-on-read.”

This approach, which works best for ad hoc analysis, involves deferring transformation until analysis time, rather than performing it in advance. With schema-on-read, data loading takes place on its own, just as it does with ELT. But while schema-on-read and ELT share that overlap, the two are not the same thing. And the distinction is a non-trivial one, especially in the case of the data warehouse.

Schema-on-read can work very well in data lake environments, where ad hoc analysis that explores “unknown unknowns” takes place. In such circumstances, it makes sense to defer the imposition of schema, because the context of the analysis is variable.

But the data warehouse scenario is different and, by its production nature, typically disqualifies schema-on-read.

While not invalidating that approach, the data warehouse model asserts that for certain analyses, especially those that execute repeatedly (and thus require optimized performance), data must be transformed in advance of analysis. This schema-on-write approach makes the data more consumable for drill-down analysis, avoids executing the same transformations repeatedly, and makes explicit the idea that formal schema is desirable for operational use cases.

Conclusion

As it turns out, ELT does not rule out schema-on-write at all; in fact, it accommodates it quite well. With ELT, data transformation still happens and can fit right into the schema-on-write pattern. Once the load step has completed, transformation can kick off in earnest. When it does, it executes as a dedicated process, using the engine underlying the data warehouse. ELT can also leverage the data warehouse’s native language, SQL, which can effect data transformations declaratively rather than through execution loops of imperative instructions.
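
A minimal sketch of that load-then-transform pattern is shown below, using SQLite purely as a self-contained stand-in for a cloud data warehouse. The table names, columns, and sample rows are hypothetical placeholders; the point is that the transformation is a single declarative SQL statement executed by the database engine, not an imperative loop running on a separate ETL server.

```python
# Minimal ELT sketch: load raw rows first, then run one declarative SQL
# statement inside the database to produce the conformed table. sqlite3 stands
# in for a cloud data warehouse; names and rows are hypothetical placeholders.
import sqlite3

conn = sqlite3.connect(":memory:")

# Load step: raw data lands as-is, with no pre-processing on a separate server.
conn.execute("CREATE TABLE raw_orders (order_id TEXT, amount TEXT, region TEXT)")
conn.executemany(
    "INSERT INTO raw_orders VALUES (?, ?, ?)",
    [("1001", "25.50", "emea"), ("1002", "14.00", "amer"), ("1003", "99.90", "emea")],
)

# Transform step: one declarative statement, executed by the database engine,
# replaces an imperative row-by-row loop.
conn.execute("""
    CREATE TABLE orders_by_region AS
    SELECT UPPER(region)              AS region,
           COUNT(*)                   AS order_count,
           SUM(CAST(amount AS REAL))  AS total_amount
    FROM raw_orders
    GROUP BY UPPER(region)
""")

for row in conn.execute("SELECT * FROM orders_by_region ORDER BY region"):
    print(row)
```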

Because most cloud data warehouses leverage a Massively Parallel Processing (MPP) architecture, transformation jobs running on them can execute efficiently, using the divide-and-conquer approach MPP uses to scale performance. And because many cloud data warehouses use columnar storage that allows large volumes of data to be placed in memory, ELT does not lose any of the memory-based performance that many ETL platforms support.

As a bonus, in cases where customers do prefer a data-lake-like approach using schema-on-read, ELT can accommodate it. The key takeaway, however, is that it does not require it. In short, just because ELT does not enforce a schema when data is first loaded does not mean it precludes schema-on-write.

 

GigaOm Analysts Share Their 2020 Predictions for Enterprise IT
https://gigaom.com/2019/12/31/gigaom-analysts-share-their-2020-predictions-for-enterprise-it/ (Tue, 31 Dec 2019)

Here at GigaOm we’re looking at how leading-edge technologies impact the enterprise, and what organizations can do to gear up for the future. Here’s a take from several of our analysts about what to watch for in enterprise IT and beyond, this coming year and in years to come. 

From Kubernetesization to workforce automation, data center shrink and the rise of the architect, read on.

Andrew Brust, Big Data & Analytics

  • “Kubernetesization” and Containerization of the Data Analytics Stack, both open source and commercial. To a large extent, this one is obvious. But it’s giving rise to something less so: a tendency to spin up clusters (be they for big data, data warehousing, or machine learning) on a task-by-task basis. Call it extreme ephemeralism, if you’d like. It’s enabling a mentality of serverless everything. This architecture underlies the revamped Cloudera Data Platform, and it’s also being leveraged by Google for Spark on K8s on Cloud Data Proc. Ultimately, it’s enabling new workloads.
  • Warehouse & Lake Converge, But in a Fragmented Fashion — We see this with SQL Server 2019 (its “Big Data Clusters” integrate Spark), Azure Synapse Analytics (which is the revamp of Azure SQL Data Warehouse and will also integrate Spark, as well as Azure Data Lake Store) and in the Redshift Federated Query and Parquet Export features AWS announced at re:Invent.  Everyone is trying to bring warehouse and lake together, and make them operate like two different interfaces on a common data store.  But everyone’s doing it their own way (in Microsoft’s case, they’re doing it two different ways). This divergence dulls the convergence that everyone’s aiming for.
  • BI Goes Big Brand — Salesforce got Tableau, Google’s trying to close its acquisition of Looker and Microsoft’s got Power BI.  It’s getting harder to be an independent BI provider.  That’s probably why Qlik is building out its portfolio to be a comprehensive data analytics stack (including integration and data catalog) and not just BI. 
  • AI Tries to Get Its Act Together, and Has a Long Way to Go — New improvements announced for SageMaker, including AutoPilot, are steps in the right direction. Likewise many of the new features in Azure Machine Learning.  But AI is still notebook-oriented and sloppy.  It’s trying to get the DevOps religion and integrate with software development stacks overall, but it hasn’t fully happened yet.  The need here is getting more acute by the week.

Stowe Boyd, Future of Work

  • Workforce Automation — as distinct from back-office Robotic Process Automation (RPA), this is technology to automate rote, manual work, and maybe management work (like what middle managers do) in the not-too-distant future. 
  • Work Platforms. From Uber/Lyft (which have had such an impact on the transportation sector), to myriad others. Expect the range of platforms to expand across industries, both consumer-facing and within business. 
  • Work Chat — Slack has led the charge in chat-based collaboration and communication but was mainly for techies and coastal elites. Now, Microsoft Teams is taking the simplicity and effectiveness of work chat mainstream. 

JP Morgenthal, Digital Transformation & Modernization

  • Geopolitical & Market Forces — The US is heading into an election year with a President who is surrounded by controversy and continues to levy tariffs against trading partners. Key financial analysts are predicting that the bull market is slowing, which will have a profound effect on the markets. Potentially, many businesses will panic and pull resources and attention from digital transformations.
  • Increased, Yet Misguided Use of Cloud/Mobile — The above will have an effect on consumption of cloud and mobile platforms and services. Many will forge forward with a cloud strategy still believing it will save them money, even though this has been thoroughly disproved time and again over the past decade. These failures will only further inhibit these businesses from being able to compete effectively.
  • Technology-Driven Disruption — While the market will take a hit, money on the sidelines will continue to work: companies with products and offerings deemed disruptive will see big investments, as investors see the opportunity to unseat old-world monoliths. This will have a net effect of transforming the overall market landscape. Investments will be heavy in Artificial Intelligence, Digital Workers / Automation, consumer technologies, healthcare IT, and remote collaboration. Cloud will see benefits here as there will be a strong foundation for scaling in these areas.
  • Quantum comes to Deep Learning — Quantum computing will also make major leaps in 2020 and we will start to see its impact on deep learning and forecasting.

Enrico Signoretti, Data Storage & Cloud Infrastructure

  • Hybrid Cloud Is Still Considered Another Silo, Alongside On-Prem & SaaS — Enterprise organizations want to manage their data in a better way and avoid silos. They still don’t know how (especially in the EU), nor do they understand the solutions available, their limitations, and their level of maturity. 
  • Storage Sees a Major Trend Towards Data Management — Connected to the above, and as data storage is increasingly commoditized, the differentiator comes from how you can manage the data saved in these storage systems. 
  • Investments Move Towards Edge Computing & Public Cloud, Shrinking Core Datacenters — This trend will stop sooner or later and we will see a balance emerge between the three, as data and applications become easier to move (for example based on Kubernetes plus adequate supporting infrastructure layers). It will take at least two years, but many vendors are already showing some very interesting developments.

Jon Collins, DevOps, Innovation & Governance

  • DevOps Gets a Rebrand — No innovation philosophy lasts forever, and DevOps is showing its own weaknesses as a developer-centric idea centered on speed when enterprises need something balanced across stakeholder groups with value-driven innovation at its core. As the software development and operations industry matures, it too is looking for ways to meet the needs of organizations that are more complex and less able to change. Expect tooling to follow suit, encompassing broader stakeholders and an end-to-end view which better meets enterprise needs. 
  • Containerization Trumps Serverlessness — The industry has only had to wait 35 years for this one, as distributed systems finally have sufficiently powerful networking infrastructure to deliver Yourdon and Constantine (et al)’s notions of software modularity. Or, in layperson’s terms, application chunks can exist anywhere and still talk to each other. Containerization, based on Kubernetes or otherwise, is a popular manifestation of such chunking: the need for code modules to be self-contained and location-independent overrides any “please run on our serverless platform” exclusivity. 
  • Multi-Cloud Creates the Next Licensing Battle — The big cloud players have a fight on their hands and they know it. The competitor is unbranded access to commoditized compute and data storage resources, based on open and de facto standards, leveraging easy-to-shift chunks of innovation (see also: containers). Faced with the onslaught of write-once-run-anywhere, vendors have three weapons: differentiation through manageability (a good thing), data gravity (a moveable feast) and indeed, existing contracts and volume discounts. 
  • Architects & Policy-Setters Become the New Kingmakers — Even as organizations continue to pivot towards technology-based innovation at scale, just doing it becomes less and less of a differentiator (example: all automotive manufacturers will have driverless cars, then what?). As a result, attention in DevOps and elsewhere will turn away from effectiveness (doing the right thing) and back to efficiency (doing things right), manifested in terms of architectural, process and governance excellence. 
