GigaOm Sonar Report for Edge Kubernetes v1.0

An Exploration of Cutting-Edge Solutions and Technologies

Table of Contents

  1. Summary
  2. Report Methodology
  3. Overview
  4. Considerations for Adoption
  5. GigaOm Sonar
  6. Market Landscape
  7. Near-Term Roadmap
  8. Analyst’s Take

1. Summary

Edge Kubernetes addresses a growing set of requirements in modern businesses. The increase in connected and smart devices across all areas of business is changing where and how we generate data, and processing that data to enable business outcomes can be challenging with traditional centralized computing models. From autonomous vehicles to fast food, haulage and transport, healthcare, and more, the endpoints we now connect are more varied than ever.

Connectivity back to centralized data centers may be unreliable or intermittent, particularly for mobile devices. Running applications closer to the device and transferring only processed data and results back to a central location allows greater flexibility and faster response times. Cluster footprints at the edge can be as small or as large as necessary and can take advantage of ruggedized form factors suited to less hospitable environments. The most common use cases for edge Kubernetes are found in data analytics, AI/ML workflows, image and video processing, robotic process automation, IoT, and other applications that benefit from fast data processing and manipulation (see Figure 1).

Figure 1. Edge Kubernetes Overview
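
As a purely illustrative sketch of the "process locally, ship only results" pattern described above, the following Python snippet aggregates raw readings at an edge site and forwards only a compact summary upstream. The endpoint URL, site identifier, and field names are assumptions for illustration, not details from the report.

```python
import statistics
import requests  # third-party HTTP client

# Hypothetical central ingestion endpoint; not part of the report.
CENTRAL_API = "https://central.example.com/api/v1/summaries"

def summarize_and_forward(site_id: str, readings: list[float]) -> None:
    """Aggregate raw readings at the edge and ship only the summary upstream."""
    if not readings:
        return
    summary = {
        "site": site_id,
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "min": min(readings),
    }
    # Only a small payload crosses the WAN link instead of the raw stream.
    requests.post(CENTRAL_API, json=summary, timeout=5)

# Example: many local readings reduced to one small upstream payload.
summarize_and_forward("store-0042", [21.4, 21.9, 22.1, 21.7])
```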

The two most common approaches to edge Kubernetes are:

  • Software defined: The software-defined approach enables the use of a wide range of hardware devices as well as existing infrastructure components. It allows a greater degree of flexibility, so you can match the hardware to the requirements of each location.
  • Appliance/specialized hardware: This approach is similar to using hyperconverged infrastructure appliances. It combines compute, storage, and networking resources with a software platform that manages and deploys the edge clusters. It may also include specialized hardware designed for AI/ML workloads, such as GPUs or other dedicated accelerators for specific use cases.

Solutions may support a combination of these two models. Software-defined deployments are usually beneficial when existing infrastructure is already in place, possibly including existing hypervisor platforms if a bare metal deployment is not required. The appliance approach can be beneficial for greenfield deployments and for scenarios where bare metal deployments are not possible. Appliance deployments can also provide a single point of contact for purchasing and supporting hardware.
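
To make the software-defined model concrete, here is a minimal sketch, assuming the official Kubernetes Python client and kubeconfig access to an edge cluster, that labels a node as edge capacity and pins a workload to edge-labelled nodes. The node name, label key, and container image are hypothetical and not drawn from the report.

```python
from kubernetes import client, config  # official Kubernetes Python client

# Assumes kubeconfig access to the cluster; node, label, and image names are illustrative.
config.load_kube_config()

core = client.CoreV1Api()
apps = client.AppsV1Api()

# Label an existing node so schedulers can treat it as edge capacity.
core.patch_node(
    "edge-node-01",
    {"metadata": {"labels": {"topology.example.com/tier": "edge"}}},
)

# Deploy a workload pinned to edge-labelled nodes via a nodeSelector.
deployment = client.V1Deployment(
    metadata=client.V1ObjectMeta(name="edge-analytics"),
    spec=client.V1DeploymentSpec(
        replicas=1,
        selector=client.V1LabelSelector(match_labels={"app": "edge-analytics"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "edge-analytics"}),
            spec=client.V1PodSpec(
                node_selector={"topology.example.com/tier": "edge"},
                containers=[
                    client.V1Container(name="analytics", image="example/analytics:1.0"),
                ],
            ),
        ),
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)
```

Vendors typically layer their own provisioning and fleet-management tooling on top of primitives like these; the sketch only shows the underlying Kubernetes mechanism.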

How We Got Here

Over the last two decades, computing technology has improved dramatically. In fact, a modern smartphone carries more processing power than the average data center server used at the turn of the century. Along with this, the footprint of computing devices has shrunk. The advent of the Raspberry Pi and Next Unit of Computing (NUC) platforms introduced low-power devices that can run enterprise-class applications in remote locations without sacrificing performance or stability.

Other key factors in the development of the edge and IoT space were improvements in connectivity, such as 4G/5G mobile networks and SD-WAN solutions that brought enterprise-grade control and connectivity to consumer-grade internet connections.

Initial efforts in edge computing started within the virtualization space; however, the overhead of virtual machines and hypervisor software can make deployment at scale difficult. Deployments were limited to sites that could host enough resources to run the necessary applications, which often required additional power and cooling at the edge location. Then along came container technology, which allowed the deployment of only the minimal set of services and system files needed to support each application. The footprint required to deploy multiple applications shrank from traditional pizza box-style servers to devices that fit in your hand.

In recent years, Kubernetes emerged as the de facto standard for container orchestration, with wide adoption across multiple hyperscale cloud and on-premises solution providers. Bringing the scalability and orchestration capabilities of Kubernetes to the edge made running hundreds or even thousands of globally distributed data centers a reality.
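
As a rough illustration of what fleet-scale visibility can look like, the sketch below assumes one kubeconfig context per edge cluster (an assumption, not a detail from the report) and iterates over every context, reporting how many nodes are Ready in each cluster. Commercial edge Kubernetes platforms provide far richer fleet management; this only hints at the underlying mechanics.

```python
from kubernetes import client, config  # official Kubernetes Python client

# Assumes one kubeconfig context per edge cluster; context naming is illustrative.
contexts, _active = config.list_kube_config_contexts()

for ctx in contexts:
    api_client = config.new_client_from_config(context=ctx["name"])
    core = client.CoreV1Api(api_client)
    nodes = core.list_node().items
    ready = sum(
        1
        for node in nodes
        for cond in (node.status.conditions or [])
        if cond.type == "Ready" and cond.status == "True"
    )
    print(f"{ctx['name']}: {ready}/{len(nodes)} nodes Ready")
```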

About the GigaOm Sonar Report

This GigaOm report focuses on emerging technologies and market segments. It helps organizations of all sizes understand the technology, its strengths and weaknesses, and how it can fit into their overall IT strategy. The report is organized into four sections:

Overview: An overview of the technology, its major benefits, possible use cases, and relevant characteristics of different product implementations already available in the market.

Considerations for Adoption: An analysis of the potential risks and benefits of introducing products based on this technology into an enterprise IT scenario, including table stakes and key differentiating features, as well as considerations for how to integrate the new product with the existing environment.

GigaOm Sonar: A graphical representation of the market and its most important players, focused on their value propositions and their roadmaps for the future. This section also includes a breakdown of each vendor’s offering in the sector.

Near-Term Roadmap: A 12- to 18-month forecast of the future development of the technology, its ecosystem, and the major players in this market segment.
