Brody Wright, Author at GigaOm: Your industry partner in emerging technology research

Edge Bare Metal Benchmark: Lumen vs AWS
https://gigaom.com/report/edge-bare-metal-benchmark-lumen-vs-aws/
Tue, 15 Aug 2023

Edge computing has emerged as a critical enabler in many business tech stacks, reducing the cost of improving and managing the customer experience. Edge computing allows organizations to move the most essential portions of an application as close to the user as possible to improve interaction and the delivery of data and services. Reductions in latency that might go unnoticed by a single end user can prove financially beneficial, and even critical, in machine transactions or high-data-volume applications.

Edge solutions are being used to enable a wide variety of applications, including:

  • Smart manufacturing
  • Edge POS for retail
  • Vehicle telematics across industries
  • Traffic analytics for urban planning
  • Security analytics in retail, manufacturing, healthcare, and others

These use cases also allow application designs that keep datasets and processing local to the user, creating room for additional data validation and security, potentially reducing blast radius and data leakage, and delivering tighter controls on privacy.

To ensure that edge-enabled applications can provide value to the business, it is important to test the latency of proposed edge servers (bare metal and private cloud) to determine whether they can deliver the required levels of performance and responsiveness. Latency is the time it takes for data to travel between two points on a network; if latency is too high, an application or service will suffer from poor performance. Testing for latency ensures that your application or service runs smoothly and that users have a good experience, while also identifying network bottlenecks and other issues that can affect performance.
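To make this concrete, below is a minimal sketch of a latency probe. The hostname and port are hypothetical placeholders, and timing TCP connection setup is only a rough proxy for network RTT; a production benchmark would measure full application round trips under representative load.

```python
# Minimal latency probe: times TCP connection setup to a candidate edge endpoint.
# The host and port are placeholders; substitute the edge server under test.
import socket
import statistics
import time

HOST = "edge.example.com"  # hypothetical edge endpoint
PORT = 443
SAMPLES = 20

def connect_rtt_ms(host: str, port: int, timeout: float = 2.0) -> float:
    """Return the time (ms) to establish a TCP connection, a rough proxy for network RTT."""
    start = time.perf_counter()
    with socket.create_connection((host, port), timeout=timeout):
        pass
    return (time.perf_counter() - start) * 1000.0

samples = [connect_rtt_ms(HOST, PORT) for _ in range(SAMPLES)]
print(f"min {min(samples):.1f} ms, median {statistics.median(samples):.1f} ms, max {max(samples):.1f} ms")
```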

Most businesses want to avoid the overhead of owning hardware in multiple data center facilities worldwide. Edge bare metal and edge private cloud services address this concern, providing better performance, higher security, and more control over the underlying hardware. These services are consumed as cloud OpEx, maintained by the vendor like any other cloud consumption service, and used as infrastructure as a service (IaaS) for your custom workloads.

When we look at “ideal” application performance, we want to see a 20 millisecond (ms) round trip time (RTT) for application response that is visible to the end user. If we look specifically at network traffic, that RTT equates to a distance of about 1,000 miles—any greater and the data physically cannot get there quickly enough. However, when you consider the queries and responses necessary for the user to “see” a difference, the distance to achieve this RTT may in fact become far shorter. Moving the application so it resolves within this 20ms limit can offer a significant, measurable improvement in customer satisfaction and all the ancillary value that improving customer experience affords the business, without a substantial redesign of the application or stack.
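As a rough sanity check on that distance figure, the back-of-the-envelope calculation below assumes signals travel through fiber at roughly two-thirds the speed of light in a vacuum. Real paths add routing, switching, queuing, and protocol overhead, which is why the practical radius shrinks toward 1,000 miles (or less) rather than the theoretical maximum.

```python
# Back-of-the-envelope distance budget for a 20 ms round trip.
SPEED_OF_LIGHT_KM_S = 299_792          # vacuum, km/s
FIBER_FACTOR = 2 / 3                   # signal speed in fiber is ~2/3 of c
RTT_MS = 20

one_way_s = (RTT_MS / 1000) / 2        # 10 ms each way
max_km = SPEED_OF_LIGHT_KM_S * FIBER_FACTOR * one_way_s
max_miles = max_km * 0.621371

print(f"Theoretical one-way reach: ~{max_km:,.0f} km (~{max_miles:,.0f} miles)")
# ~2,000 km (~1,240 miles) before any routing, queuing, or processing delay,
# which is why ~1,000 miles is a practical rule of thumb.
```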

This report focuses on the latency of two top bare-metal server providers: Lumen Edge Cloud Services (Lumen) and Amazon Web Services (AWS). For our testing, we defined a high-performance model that mixes applications processing 1,000 transactions per second and demanding a maximum latency of 5ms, with an NGINX reverse proxy handling HTTP requests.
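As an illustration of how a model like this might be exercised, the sketch below times HTTP requests against a reverse proxy endpoint and compares the observed latencies with a 5ms ceiling. The endpoint URL, request count, and concurrency are placeholder assumptions, not the harness used in our testing.

```python
# Illustrative load sketch: measure per-request latency against a reverse proxy
# endpoint and compare it with a 5 ms latency ceiling.
import time
import urllib.request
from concurrent.futures import ThreadPoolExecutor

ENDPOINT = "http://edge-proxy.example.com/health"  # hypothetical NGINX proxy URL
REQUESTS = 1000
CONCURRENCY = 50
LATENCY_CEILING_MS = 5.0

def timed_request(url: str) -> float:
    """Issue one GET and return its wall-clock latency in milliseconds."""
    start = time.perf_counter()
    with urllib.request.urlopen(url, timeout=2) as resp:
        resp.read()
    return (time.perf_counter() - start) * 1000.0

with ThreadPoolExecutor(max_workers=CONCURRENCY) as pool:
    latencies = list(pool.map(timed_request, [ENDPOINT] * REQUESTS))

within_sla = sum(1 for ms in latencies if ms <= LATENCY_CEILING_MS)
print(f"min {min(latencies):.1f} ms, max {max(latencies):.1f} ms, "
      f"{within_sla / len(latencies):.1%} of requests within {LATENCY_CEILING_MS} ms")
```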

On edge bare metal, Lumen's transactions had a 100% success rate, zero drops, and minimal latency, with a minimum of 4.3ms. AWS also had a 100% success rate, but its transaction latency was higher, at 16ms, and it took a minimum of 23ms overall. This suggests that Lumen is the better choice for edge bare metal.

Given that we set a desired threshold of 20ms in total for application responsiveness, and that this is a simple application, it is easy to see how a gap of 11.7ms would make a significant difference to your digital and customer experiences.

Our findings: Lumen Edge Bare Metal servers offer faster response times and reduced operational costs, as well as improved scalability, security, reliability, revenue, competitive edge, and reputation. In our analysis, these findings make Lumen an ideal edge solution for IoT devices, video conferencing, surveillance systems, and artificial intelligence.

A big part of this performance advantage is Lumen's density of edge locations compared with AWS: Lumen simply offers more edge locations, owing to its primary business as a large internet service provider.

Video SDK Benchmark Comparison
https://gigaom.com/report/video-sdk-benchmark-comparison/
Wed, 24 May 2023

This GigaOm benchmark assesses two video conferencing SDKs that can be integrated into deployed applications and environments.

This GigaOm Benchmark report was commissioned by Zoom.

As the demand for agile development grows due to digital modernization, organizations are turning to software development kits (SDKs) to build real-time customer engagement solutions. These SDKs enable businesses to quickly and efficiently interact with their customers in real time, providing personalized experiences and building trust and loyalty.

Live, multi-participant audio/video SDKs are a type of software that enables multiple users to communicate with each other in real time using audio and video. These SDKs provide functionality for online meetings, webinars, and video conferencing, as well as activities like employee training and customer support. They typically provide capabilities like screen sharing, file sharing, and text chat. They are used to develop enterprise applications that require real-time video interaction and are typically delivered as cloud-hosted platforms offered by service providers.

The decision to invest in live, multi-participant audio/video application functionality is typically influenced by key factors, including time to value and total cost of ownership (TCO). We implemented the same enterprise-applicable application using two industry-leading solutions—Zoom and Jitsi.

This benchmark report provides insight into how well each product supports enterprise applications requiring real-time customer engagement. We capture time to value by assessing the steps involved in initial setup, scaled deployment, and production integration, and we assess TCO by evaluating both software and labor costs associated with a deployment. The benchmark can also be used to determine the implementation steps for each solution.
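To illustrate the shape of such a TCO comparison, the sketch below rolls software and labor costs into a single three-year figure. The line items, hourly rate, and dollar amounts are hypothetical placeholders, not the figures measured in this benchmark.

```python
# Hypothetical TCO sketch: software plus labor costs over a deployment period.
# All figures are illustrative placeholders, not measured benchmark results.
from dataclasses import dataclass

@dataclass
class SolutionCosts:
    name: str
    monthly_license_usd: float      # SDK licensing or hosting fees
    setup_hours: float              # initial setup and integration
    monthly_ops_hours: float        # ongoing maintenance and scaling work

def three_year_tco(s: SolutionCosts, hourly_rate_usd: float = 85.0) -> float:
    """Total cost over 36 months: licensing plus (setup + ongoing) labor."""
    months = 36
    software = s.monthly_license_usd * months
    labor = (s.setup_hours + s.monthly_ops_hours * months) * hourly_rate_usd
    return software + labor

for solution in (
    SolutionCosts("Solution A", monthly_license_usd=1_200, setup_hours=40, monthly_ops_hours=8),
    SolutionCosts("Solution B", monthly_license_usd=300, setup_hours=160, monthly_ops_hours=24),
):
    print(f"{solution.name}: ~${three_year_tco(solution):,.0f} over three years")
```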

New Microsoft Teams Performance Benchmark
https://gigaom.com/report/new-microsoft-teams-performance-benchmark/
Mon, 27 Mar 2023

This GigaOm Benchmark report was commissioned by Microsoft.

Microsoft Teams (Teams) is a collaboration platform that combines workplace chat, video meetings, file storage, and application integration. It is part of the Microsoft 365 suite of applications and is designed to help teams stay organized and connected. Teams allows users to communicate and collaborate in real-time, share files, and access applications from a single platform. As application performance is key to productivity, Teams has a regular cadence of releases focused on enhancements to functionality, performance, and ease of use.

This report benchmarks the performance differences between two versions of the Microsoft Teams desktop application for Windows: classic Teams (version 1.1, released June 2022) and new Teams (version 2.1, preview released March 2023). It showcases the improvements provided by new Teams by comparing the results of tests across three domains—installation behavior, application responsiveness, and resource utilization. Our findings:

  • Installation behavior: Up to 3x faster time to install
  • Application responsiveness: Up to 2x faster launching Teams, 2.5x faster joining meetings, and 1.7x faster loading chats and channels
  • Resource utilization: Overall 50% reduction in our tests of disk and memory resource consumption

New Teams outperformed classic Teams in all tests. Our timed tests showed new Teams across all three testbeds to be on average 2x faster than classic Teams when starting up, joining meetings, and switching chats and channels. Some of the results were remarkable. Join time performance more than tripled for our high-end system, while general purpose and low-end systems saw 2.5x and 2x improvements, respectively. We saw similar 2x to 3x improvements in application launch times for our two higher-end platforms, as well as a nearly 70% reduction in disk space consumption.

Our analysis of typical usage patterns shows that businesses can expect to save up to five minutes per user per week with new Teams vs. classic Teams (Figure 1). We then applied these gains across three typical organization types to gauge aggregate impacts: a 250-employee small-to-medium business (SMB), a 2,000-employee midsize enterprise, and a 55,000-employee large enterprise.

Figure 1. Aggregate Minutes Spent per Employee per Week (lower is better)
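The aggregate impact described above follows from straightforward arithmetic. The sketch below applies the full five minutes saved per user per week to the three organization profiles, assuming roughly 48 working weeks per year (an assumption for illustration; actual savings depend on usage patterns).

```python
# Aggregate time savings from up to five minutes saved per user per week,
# applied to the three organization profiles described above.
MINUTES_SAVED_PER_USER_PER_WEEK = 5
WORK_WEEKS_PER_YEAR = 48  # assumption: ~48 working weeks per year

organizations = {
    "SMB (250 employees)": 250,
    "Midsize enterprise (2,000 employees)": 2_000,
    "Large enterprise (55,000 employees)": 55_000,
}

for name, headcount in organizations.items():
    hours_per_year = headcount * MINUTES_SAVED_PER_USER_PER_WEEK * WORK_WEEKS_PER_YEAR / 60
    print(f"{name}: ~{hours_per_year:,.0f} hours saved per year")
```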

Security Information and Event Management: A MITRE ATT&CK Framework Competitive Evaluation
https://gigaom.com/report/security-information-and-event-management-a-mitre-attck-framework-competitive-evaluation/
Wed, 25 Jan 2023

Security information and event management (SIEM) technology supports threat detection, compliance, and security incident management through the collection and analysis (near real-time and historical) of security events and a wide variety of other event and contextual data sources. SIEM applications combine multiple information security data sources into a single tool to support incident detection, risk management, and compliance activities.

Configured properly, a SIEM solution can detect known threats, correlate logs, and create actions in response to threats. These capabilities provide the basis for a security monitoring and alerting strategy, which in turn drives the requirements of an information security event management solution.
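As a simple illustration of the correlation logic a SIEM detection rule encodes, the sketch below flags a source that generates repeated failed logins within a short window. It is a generic example, not a rule taken from any of the products evaluated in this report.

```python
# Generic correlation example: flag a source that generates many failed logins
# within a short window, the kind of logic a SIEM detection rule encodes.
from collections import defaultdict
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)
THRESHOLD = 10  # failed attempts within the window

def detect_brute_force(events):
    """events: iterable of (timestamp, source_ip, outcome) tuples, sorted by time."""
    failures = defaultdict(list)
    alerts = []
    for ts, source_ip, outcome in events:
        if outcome != "failure":
            continue
        recent = [t for t in failures[source_ip] if ts - t <= WINDOW]
        recent.append(ts)
        failures[source_ip] = recent
        if len(recent) >= THRESHOLD:
            alerts.append((source_ip, ts, len(recent)))
    return alerts

# Synthetic sample: 12 failed logins from one source, 10 seconds apart.
sample = [(datetime(2023, 1, 25, 12, 0) + timedelta(seconds=10 * i), "203.0.113.7", "failure")
          for i in range(12)]
print(detect_brute_force(sample))
```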

This GigaOm benchmark report aims to reveal how well vendor solutions perform in detecting attacks that leverage techniques recognized by the MITRE ATT&CK framework and the implications for real-world behavior and effectiveness. We explain the test methodology, reveal how well each vendor performed, and discuss the implications of their results. We also provide a hands-on assessment of each solution, focusing on ease of use and effective UI.

We tested four SIEM products in this report: Micro Focus ArcSight, Splunk Enterprise Security, IBM QRadar, and Microsoft Sentinel. Micro Focus ArcSight and Splunk Enterprise Security both excelled in detecting and logging the battery of attacks, each scoring 10 out of 10 in our series. IBM QRadar failed to catch many of the attacks in our tests and fell short of Micro Focus and Splunk in the quality of results presentation. Finally, we included in our evaluation Microsoft Sentinel, which at the time of this testing was equipped with a pre-release implementation of the MITRE ATT&CK framework. While we provide a hands-on assessment of the Sentinel product in this report, the tool did not produce usable results in our detection tests and therefore was not included in that portion of our evaluation.
