1. Executive Summary
Application programming interfaces, or APIs, are a ubiquitous method and de facto standard of communication among modern information technologies. The information ecosystems within large companies and complex organizations encompass a vast array of applications and systems, and many have turned to APIs for exchanging data as the glue that holds these heterogeneous pieces together. APIs have begun to replace older, more cumbersome methods of information sharing with lightweight, loosely coupled microservices. This shift allows organizations to knit together disparate systems and applications without incurring the technical debt that comes from tight coupling through custom code or proprietary, unwieldy vendor tools.
APIs and microservices also allow companies to create standards and govern the interoperability of applications—both new and old—building modularity. They broaden the scope of data exchange with the outside world, particularly mobile technology, smart devices, and the Internet of Things (IoT), because organizations can share data securely with non-fixed-location consumers and producers of information.
The popularity and proliferation of APIs and microservices have created a need to manage the multitude of services a company relies on—both internally and externally. APIs vary greatly in protocols, methods, authorization/authentication schemes, and usage patterns. Additionally, IT teams need greater control over their hosted APIs, such as rate limiting, quotas, policy enforcement, and user identification, to ensure high availability while preventing abuse and security breaches. APIs have enabled their own economy by allowing the transformation of businesses into a platform (and even a platform into a business). Exposing APIs opens the door to many partners who can co-create and expand the core platform without knowing anything about the underlying technology.
Many organizations depend on their apps, APIs, and microservices to deliver high performance and availability. For this report, we define “high performance” as companies that experience workloads of more than 1,000 transactions per second (tps) and need a maximum latency below 30 milliseconds across their landscape. For these organizations, the need for performance is equivalent to the need for management because they rely on these API transaction rates to keep up with the speed of their business operations. For them, an API management solution must not become a performance bottleneck. On the contrary, many of these companies are looking for a solution to load balance across redundant API endpoints and enable high transaction volumes. Imagine a financial institution processing 1,000 transactions per second: that translates to more than 86 million API calls in a single 24-hour day (1,000 x 86,400 seconds). So performance is a critical factor when choosing an API management solution.
This report reveals the results of performance testing we completed on these API and microservices management platforms: Kong Enterprise, Google Cloud Apigee X, and MuleSoft Anypoint Flex Gateway.
In this performance benchmark, Kong came out the clear winner, particularly because of its higher rate of transactions per second. Kong’s maximum transactions-per-second throughput, achieved with 100% success (no 5xx or 429 errors) and less than 30ms maximum latency, was 54,250. By contrast, Apigee X’s maximum throughput was 1,750, and the highest throughput we saw from MuleSoft Anypoint Flex Gateway was 1,250 transactions per second.
Testing hardware and software in the cloud is very challenging. Configurations may favor one vendor over another in feature availability, virtual machine processor generations, memory amounts, storage configurations for optimal input/output, network latencies, software and operating system versions, and the workload itself. Even more challenging is testing fully managed, as-a-service offerings for which the underlying configurations (processing power, memory, networking, etc.) are unknown. Our testing demonstrates a narrow slice of potential configurations and workloads.
As the report’s sponsor, Kong opted for a default Kong installation and API gateway configuration out-of-the-box—the solution was not tuned or altered for performance. The Anypoint Flex Gateway was also not tuned or altered for performance. The fully managed Apigee X was used “as-is” since, by virtue of being fully managed, we have no access to, visibility into, or control of its respective infrastructure.
We hope this report is informative and helpful in uncovering some of the challenges and nuances of API management platforms.
We have provided enough information in the report for anyone to reproduce this test. You are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.
2. Full Cycle API Management
This report focuses on API management platforms deployed in the cloud. The cloud enables enterprises to differentiate and innovate rapidly with microservices. API endpoints can be cloned and scaled in a matter of minutes. The cloud is a disruptive technology, offering elastic scalability vis-à-vis on-premises deployments, enabling faster server deployment and application development while allowing cost savings on compute. For these reasons and others, many companies have leveraged the cloud to maintain or gain momentum as a business.
This report examines the results of a performance benchmark test completed with three popular API management vendors: Kong, Apigee, and MuleSoft. All three are full-cycle API management platforms with practically limitless scale-out potential and architectures suited to large-scale, high-performance deployments. Despite these similarities, there are distinct differences between the platforms.
Kong
Kong was originally known as Mashape until the release of its API platform. The company kept a keen eye on delivering performance with a lightweight, cloud-native infrastructure when it based its API gateway and platform on a lightweight proxy (NGINX), known for its ability to handle more than 10,000 simultaneous connections with minimal memory usage and for reverse proxying with caching. Kong became an open-source project in 2015. Today, it is used by well over 5,000 organizations across 400,000 running instances and has had 54 million downloads from GitHub.
The Kong API Gateway is available as open-source software (OSS) with an impressive range of functionality, including open-source plugin support, load balancing, and service discovery. Kong Enterprise (KE) 2.7, the edition tested in this benchmark, features expanded functionality, such as a management dashboard, a customizable developer portal, security plugins, metrics, and 24×7 support. In this report, any mention of Kong should be understood to apply to Kong Enterprise as well.
Kong and Kong Enterprise can be deployed either in the cloud or on-premises. For us, the installation took less than 10 minutes from scratch on an Amazon Web Services (AWS) EC2 instance. Kong packages are available in the repositories of both Debian-based (apt) and Red Hat-based (yum) package managers, and Docker and CloudFormation options are also available.
Kong can operate as a single node or you can join nodes to each other to form a cluster. In a single-node configuration, the PostgreSQL or Cassandra database can live on the same instance as Kong. In a cluster configuration (as pictured below), the database is on a separate instance. Scaling horizontally is simple. Kong is stateless, so adding nodes to the cluster is as easy as pointing a new node to the external database, so it can fetch all the configuration, security, services, routes, and consumer information it needs to begin processing API requests and responses. Also characteristic of a cluster environment is a load balancer (such as Nginx or HAProxy) used at the edge to provide a single address for clients and to distribute requests among the Kong nodes using a chosen strategy (e.g., round robin or weighted).
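As an illustration of how a node joins a cluster, the following is a minimal sketch of the database-related settings in a node’s kong.conf. The host, credentials, and listen address are placeholders, not the values used in this benchmark:

# /etc/kong/kong.conf (excerpt)
# point this node at the shared PostgreSQL instance used by the cluster
database = postgres
pg_host = 10.0.1.50
pg_port = 5432
pg_user = kong
pg_password = changeme
pg_database = kong
# address the load balancer forwards API traffic to
proxy_listen = 0.0.0.0:8000

After kong start, the node reads its services, routes, plugins, and consumers from the shared database and can immediately serve traffic behind the load balancer.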
Kong has a thriving ecosystem of plugins (referred to as the Kong Hub) supporting both open-source and enterprise plugins, such as LDAP authentication, CORS, Dynamic SSL, AWS Lambda, Syslog, and others. Kong is based on Nginx and allows users to create their own plugins using LuaJIT.
Apigee X
Apigee has been around for a long time, since well before the advent of containers. Google acquired Apigee in September 2016 to give itself an API management solution to compete with the offerings of other large cloud vendors, such as Amazon API Gateway and Microsoft’s Azure API Management. Apigee’s latest microservices product, called X, was released in early 2021. Apigee X is available on-premises (a deployment they call Hybrid Cloud) and as software as a service (SaaS) on Google Cloud Platform. In fact, Apigee even exhorts potential customers on its own website to “think twice” about an on-premises deployment, calling it “an iceberg of maintenance and cost.” This might result from Google’s influence on Apigee, as Google prefers to see the product deployed in Google Cloud.
Since Google clearly recommends its fully managed Apigee X offering to customers, we tested it out-of-the-box with a Standard-level license, which permits 180 million API calls per year with no rate limiting or bandwidth reduction.
MuleSoft Anypoint
MuleSoft has been offering middleware solutions since 2006 and entered API-based development in 2013 with the acquisition of ProgrammableWeb. In 2018, Salesforce acquired MuleSoft and consolidated its offerings and services onto its Anypoint Platform, which includes various components and services. Anypoint Management Center is a web interface for centrally analyzing, managing, and monitoring APIs and integrations. Design Center provides developers an environment to design and build APIs, and Exchange is a library where providers share their APIs, templates, and other assets. MuleSoft continues to offer its Mule runtime engine as a hybrid-cloud solution. Mule runs as an agent on local infrastructure, connecting on-premises enterprise applications to one another and to Anypoint and eliminating the need for custom point-to-point integration.
For our testing, we chose Anypoint Flex Gateway. MuleSoft promotes this solution as ultrafast and designed to manage APIs running anywhere. The gateway is built to deliver the performance required for demanding applications while providing enterprise security and management. Anypoint Flex Gateway can be deployed as a Linux service, a Docker container, or a Kubernetes ingress controller. It can be managed centrally in the Anypoint API Manager web interface, and connected to existing APIs in Exchange through Connected Mode. Alternatively, Anypoint Flex Gateway can be managed locally in completely private environments.
3. GigaOm API Workload Test Setup
The benchmark was designed to test the performance of three API and microservice management platforms: Kong, Apigee, and MuleSoft. The goal was to ascertain how well each platform withstands significant transaction loads, simulating the use case of a high-performance, high-availability environment within companies that rely heavily on APIs and expect superior results from their API gateways.
API Workload Test
The GigaOm API Workload Field Test is a simple workload designed to attack an API or an API management worker node (or a load balancer in front of a cluster of worker nodes) with a barrage of identical GET requests at a constant number of requests per second (RPS).
To perform the attacks, we used the HTTP load-testing tool Vegeta, a free-to-use workload test kit available on GitHub. The Vegeta tool returns a results bin file that contains the latency and status code of every request. The attack tool measured latency as the time elapsed from the point when an individual API request was made to when the API response was received. Thus, if we tested 1,000 RPS for 60 seconds, the attack tool recorded 60,000 latency values. We used that data to compile and interpret the results of the test.
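For illustration, an attack like ours can be reproduced with commands along the following lines. The target address is the placeholder used elsewhere in this report and the file names are arbitrary; this is a sketch of the approach rather than the exact harness we ran:

echo "GET http://ipaddress/api" | vegeta attack -rate=1000 -duration=60s > results_1000rps.bin
vegeta report results_1000rps.bin
vegeta report -type=json results_1000rps.bin > results_1000rps.json

The plain-text report prints the success ratio and latency percentiles, while the JSON output retains the same figures in machine-readable form.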
The test also requires a backend API that can listen and respond to requests. In this case, our back-end API listens for a GET request such as:
http://ipaddress/api
The API would respond with a string of 1,024 pseudorandom Unicode characters, such as:
taZ3psgHkQ...
For these tests, we used a response payload size of 1KB.
The back-end API we used is further documented in the Appendix.
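As a quick sanity check before an attack (not part of the measured workload), a single request can be issued by hand to confirm that the endpoint responds and that the payload is the expected size; the address is again a placeholder:

curl -s http://ipaddress/api | wc -c

The byte count should come back as 1024, matching the fixed response payload.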
We completed three attempts per test on each platform, configuration, and request rate. We started with an attack rate of 1,000 RPS and scaled up to 2,000 RPS, 5,000 RPS, 10,000 RPS, and 20,000 RPS. We ran each test for 60 seconds. We captured the latencies at the 50th, 90th, 95th, 99th, 99.9th, and 99.99th percentiles and the maximum latency seen during the test run. We recorded the test run that resulted in the lowest maximum latency or, in the event of errors, the highest success rate. Error status codes included HTTP status code 429 “Too Many Requests” and any 5xx codes, most often 500 “Internal Server Error.” A success rate of 100% meant all requests returned a 200 “OK” status code.
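A sketch of how the full sweep might be scripted with Vegeta is shown below. The rates match those described above, but the loop structure, pause between runs, and file names are illustrative assumptions rather than the exact harness we ran:

for rate in 1000 2000 5000 10000 20000; do
  for attempt in 1 2 3; do
    echo "GET http://ipaddress/api" | vegeta attack -rate=${rate} -duration=60s > results_${rate}rps_run${attempt}.bin
    vegeta report results_${rate}rps_run${attempt}.bin
    sleep 30
  done
done

Each results bin is then reported individually, and the run with the lowest maximum latency (or the highest success rate when errors occurred) is the one recorded.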
The results are shared in the Field Test Results section.
Test Environments
A goal in setting up the benchmark environments was to create as close to an apples-to-apples comparison as possible. This is a challenge in modern cloud infrastructures compared to the closed-loop, “sterile” lab environments of traditional benchmarking. There was also the added complexity of comparing the fully managed Apigee X SaaS against the Kong and MuleSoft offerings we installed and hosted ourselves.
Both Kong’s and MuleSoft’s on-premises offerings can be obtained freely from yum and apt repositories. Apigee X is not readily available as an on-premises offering, so its SaaS offering was tested. For the benefit of our audience, it makes sense to test the software the competitors are actually selling.
Going in, we still required assurance that we were working with like-for-like infrastructure for all competitors. Apigee documents and recommends that on-premises customers deploy their message processors on machines with at least 8 CPU cores and 16GB of memory, so we can only assume that Apigee SaaS customers also get at least 8 cores and 16GB of memory. Accordingly, for Kong and MuleSoft we chose the c5n.2xlarge EC2 instance type, which has 8 cores and 21GB of RAM and therefore meets or exceeds that baseline.
Also, for Kong, we deployed the infrastructure for other required services, including:
- HAProxy load balancer
- Kong PostgreSQL database
All extra services were placed on c5n.2xlarge EC2 instances with 8 cores and 21GB of RAM. Our benchmark was designed so that these services would not be bottlenecks because we were most interested in the raw processing power of the API gateways themselves.
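To illustrate how the load-balancing tier sat in front of the Kong cluster, here is a minimal HAProxy sketch. The node addresses and ports are placeholders, and the configuration we actually ran may have differed in its details:

# haproxy.cfg (excerpt)
defaults
    mode http
    timeout connect 5s
    timeout client 30s
    timeout server 30s

frontend api_in
    bind *:80
    default_backend kong_nodes

backend kong_nodes
    balance roundrobin
    server kong1 10.0.1.11:8000 check
    server kong2 10.0.1.12:8000 check
    server kong3 10.0.1.13:8000 check

Round robin is one of the distribution strategies mentioned earlier, and the check keyword enables health checks so a failed node stops receiving traffic.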
Unfortunately, guaranteeing the same configuration was virtually impossible. Apigee X has an auto-scale feature that will scale up to meet demands. For Kong, we tested a single node and a 3-node cluster.
Additionally, Kong boasts it can run on even very small hardware configurations. Therefore, we separately tested Kong on an EC2 c5n.large instance with 2 CPU cores and 4GB of RAM.
There were 20 API endpoints built specifically for these tests. The API endpoints were built using open-source NGINX. They responded to every API request with a fixed 1KB payload of characters generated from the kernel random device (urandom). Local response times were approximately 2 microseconds.
Benchmark Configurations
We also conducted the benchmark test using a few different like-for-like configurations:
- Reverse proxy/pass-through (no authentication)
- With authentication/authorization enabled
For the first configuration, we used each platform “out-of-the-box” as a reverse proxy or pass-through without requiring authentication or authorization. Then, we implemented JSON Web Token (JWT) authentication, issued from a server on the same virtual private cloud (VPC) as the API endpoints. We also tested a third-party authentication server using OAuth with OpenID Connect, hosted by Google Identity Platform. In both cases, we created 25 OpenID users and selected them at random for each request to prevent the API gateway from caching the authorized request. Finally, we tested a multiple-plugin scenario by enabling logging and JWT authentication simultaneously.
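As an example of what this configuration involves on the Kong side, the JWT plugin can be enabled on a proxied service and a credential issued to a consumer through the Admin API. The service and consumer names below are placeholders, and Apigee X and Anypoint Flex Gateway were configured through their own management interfaces:

# enable JWT authentication on the proxied service
curl -X POST http://localhost:8001/services/benchmark-api/plugins --data "name=jwt"
# create a consumer and issue it a JWT credential (returns the key and secret used to sign tokens)
curl -X POST http://localhost:8001/consumers --data "username=user01"
curl -X POST http://localhost:8001/consumers/user01/jwt

Repeating the last two commands for each test user yields a pool of credentials from which requests can draw at random.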
Results may vary across different configurations, and again, you are encouraged to compile your own representative workloads and test compatible configurations applicable to your requirements.
4. Test Results
Latency
This section analyzes the latencies in milliseconds from the various 60-second runs of each of the scaled GigaOm API Workload Field Tests described above. A lower latency is better, meaning API responses are coming back faster. We report response times at the 50th, 90th, 95th, 99th, 99.9th, and 99.99th percentiles, along with the maximum latency. These are important values for service-level agreements (SLAs) and for determining the slowest response times a user might experience.
Figure 1 shows the results with a 1KB response and authentication turned off. Kong’s latency is minimal in this test, while Apigee X and MuleSoft show more than 10x Kong’s latency at all percentiles, including 457x at the 99.99th percentile. The maximum latency for Kong was just 1.4 ms, versus 625.5 ms for Apigee X and 659 ms for MuleSoft.
Figure 1. 1,000 Requests per Second and Success Rate
At 2,000 RPS with authentication off, there is more latency (Figure 2). Starting at the 95th percentile, Apigee X had thousands of times more latency than Kong. MuleSoft fared much better than Apigee X but still showed 100 times Kong’s latency at the 95th percentile and exceeded Apigee X’s latency starting at the 99.99th percentile.
Figure 2. 2,000 Requests per Second and Success Rate
Figure 3 shows the results at 5,000 RPS with authentication off. Here again, Kong continued to display very low latency. Apigee X and MuleSoft could not perform at this level, so no results are shown in the chart.
Figure 3. 5,000 Requests per Second
Figure 4 shows the latency results at 10,000 RPS with authentication off. Kong continued to produce very low latency, while neither Apigee X nor MuleSoft could perform at this level.
Figure 4. 10,000 Requests per Second
At 20,000 RPS with authentication off, Kong continued to produce very low latency (Figure 5). Once again, Apigee X and MuleSoft were unable to perform at this level.
Figure 5. 20,000 Requests per Second
The next test incorporates OAuthV2 at 1,000 RPS to measure system behavior (Figure 6). Here we see that, starting at the 99th percentile, Apigee X and MuleSoft had hundreds of times the latency of Kong.
Figure 6. 1,000 Requests per Second – OAuth On
The final latency test employs JWT authorization at 1,000 RPS (Figure 7). Starting at the 99th percentile, Apigee X and MuleSoft produced latencies hundreds of times greater than those produced by Kong Enterprise.
Figure 7. 1,000 Requests per Second – JWT Auth On
Maximum Throughput
The maximum transaction throughput achieved with 100% success (no 5xx or 429 errors) and with less than 30ms maximum latency is shown on the Maximum Throughput chart (Figure 8). Here we see Kong achieve a peak result of 54,250 RPS, while Apigee X achieved 1,750 and MuleSoft 1,250.
Figure 8. Maximum Throughput: Kong, MuleSoft, and Apigee X
Finally, we applied the same measure of maximum transactions-per-second throughput achieved with 100% success (no 5xx or 429 errors) and less than 30ms maximum latency to compare different Kong configurations. This test measured single-node Kong Enterprise systems outfitted with 2 cores and 4GB of RAM, 4 cores and 8GB of RAM, and 8 cores and 16GB of RAM. Figure 9 shows the results.
Figure 9. Maximum Throughput with Different Kong Configurations
5. Conclusion
This report outlines the results from a GigaOm API Workload Field Test. We experimented with different transaction loads and authentication schemes, including none, and consistently measured higher transactions per second from Kong Enterprise than from Apigee X and Anypoint Flex Gateway.
Kong’s maximum transactions per second throughput, achieved with 100% success (no 5xx or 429 errors) and with less than 30ms maximum latency, was 54,250. Apigee X’s was 1,750 and MuleSoft’s was 1,250.
For this test, using this particular workload and these particular configurations, API requests came back with the lowest latencies and highest throughput on Kong versus the Apigee X and MuleSoft solutions.
Keep in mind that further optimization is possible on all platforms as the offerings evolve or as internal tests point to different configurations.
6. Appendix: Recreating the Test
The back-end API used in this test was a custom NGINX configuration developed by GigaOm. It works by binding NGINX to port 1980 and listening for GET requests, such as:
GET http://fqdn-or-ip-address:1980
The API would respond with a string of pseudorandom Unicode characters from /dev/urandom, such as:
taZ3psgHkQ
The following is the NGINX configuration for the back-end API. You are free to use and modify it at your own discretion. GigaOm makes no warranty or claim for its use beyond the scope of this test or report.
worker_processes auto;
worker_cpu_affinity auto;
error_log /var/log/nginx/error.log;
pid /run/nginx.pid;
worker_rlimit_nofile 20480;

events {
    accept_mutex off;
    worker_connections 10620;
}

http {
    access_log off;
    server_tokens off;
    keepalive_requests 10000;
    tcp_nodelay on;

    server {
        listen 1980 reuseport;

        location / {
            return 200 "JXvkE5pBpPN3T8bknNsqaM0kKu0j8BCV0S6TkNljlpDCYi8dIdn2TL11oHv1iFkJAjj8VDnEcBoJSy73QTuCcI8oeCna3jg34beyd7n5fZ22WSZP6gynF6PF5lMKsJTRRFr1ur5trPpTU4nvzJOsbGY6O1bAoeCNTG1VpDHZXQH67wZi35mNj6flLR3glKJwkwXzdrVgbeivVbT2fOz9zjxr0U8A4SONXYRyEr4jZzCqlYG4EuV08X4e5unvkO410ZeRrt31arys9hwR8tuCSi4a6KUsVeA5eZ74GQMv2NByz7R5BFCHIg0BbtexFsxdE9RZyj2sINlqbTQHNqwuiWDRG1CSJdOrTXYNmNz98Ib9BtAGMY7ikINWTeCaH8Qjet6wsDMyLbMjDfH3TjBTeMJDVyLItqfY6MZbblEiEV0mNVBFlG0pn3s12EP0X7DzgIfSP6vU3jVdsuEWENja6DdWG0zciTAMbe4xwRpyG0GWLsmoUoEVAOPsWPeMthsLmjKO2WBQ9vUub2XV0IyO09vZKGajMaEZnXSqhblRrKYcknK7Is2TIgI6o6C0iIKEql1jhdJAl5iFj4VytPftb9k8qbA5QE4dr2wcjWp8b0Rw9wBx9xYUDIkJO6IdrZqgR1APvAF9UyokXgTkHtYycEC1QG0GSUhAT61FjGxtkZU86rV4djttr8zwJaKH7B126rSwvCVWYM82SRxZVJ2RkyQ3xOaRM9DilXg4J90LSAlYu2TUpZpkym8Uk0qOsIWPr2e9jwLkonfdh2AqRX4QS9tCrvA2pfwLEptRNxsVLKmNb2BJpt2YQ7K5OdYmW5oLwKTYtaB2sbCKQCGXWiieLfgt70gdumDsrBM8QslALQLZhX24rfadHvQ9sUKUrW7KW3rkAhxJ1cvvU1up8NHzal67KFLtFS8bJCb22cFL6L7sHynseVS9a1YxYOSroaRDhz0WX4xdW7UyJ4GrsqE9sXd66U8iAv78IaprC3M3HnJyieqyGzewvqSkAvhcnBKj";
        }
    }
}
We wanted to create a back-end API that was as performant and lightweight as possible so the latency generated by the application itself was minimized.
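For anyone recreating the fixed payload, a 1,024-character string in the same style can be generated from the kernel random device and pasted into the return directive. This is one way to produce such a string, not necessarily how the original payload was created:

head -c 2048 /dev/urandom | base64 | tr -dc 'A-Za-z0-9' | head -c 1024

The pipeline base64-encodes random bytes, strips everything but alphanumeric characters, and truncates the result to exactly 1,024 characters.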
7. Disclaimer
Performance is important, but it is only one criterion for selecting an API and microservice management platform. This test is a point-in-time check on specific performance. There are numerous other factors to consider, including administration, features and functionality, workload management, user interface, scalability, and vendor reliability. Our experience shows that performance changes over time and differs competitively across workloads. Also, a performance leader can run up against the point of diminishing returns, and viable contenders can quickly close the gap.
GigaOm runs all of its performance tests to strict ethical standards. The results of the report are the objective results of the application of load tests to the simulations described in the report. The report clearly defines the selected criteria and the process used to establish the field test, and it clearly states the tools and workloads used. The reader is left to determine how to qualify the information for their individual needs. The report does not make any claim of third-party certification; it presents the objective results received from applying the process to the criteria as described in the report. The report strictly measures performance and does not purport to evaluate other factors that potential customers may find relevant when making a purchase decision.
This is a sponsored report. Kong chose the competitors, the test, and the Kong configuration. GigaOm chose the most compatible configurations as-is, out-of-the-box, and ran the testing workloads. Choosing compatible configurations is subject to judgment. We have attempted to describe our decisions in this report.
8. About Kong
Kong makes securing, managing and orchestrating microservice APIs easier and faster than ever. That’s why it powers trillions of API transactions. That’s why technology companies, major banks, e-commerce innovators, and government agencies put Kong in front of their most important web workloads. And that’s why developers around the globe enthusiastically contribute innovations on top of the Kong platform.
Kong focuses on encompassing technology innovation for customer success. Not only does Kong Inc. build a world-class platform for powering microservice API development, it enables customers to succeed in realizing maximum value from their microservice infrastructure with comprehensive services to deliver even higher levels of agility, security, and scale.
9. About William McKnight
William McKnight is a former Fortune 50 technology executive and database engineer. An Ernst & Young Entrepreneur of the Year finalist and frequent best practices judge, he helps enterprise clients with action plans, architectures, strategies, and technology tools to manage information.
Currently, William is an analyst for GigaOm Research who takes corporate information and turns it into a bottom-line-enhancing asset. He has worked with Dong Energy, France Telecom, Pfizer, Samba Bank, ScotiaBank, Teva Pharmaceuticals, and Verizon, among many others. William focuses on delivering business value and solving business problems utilizing proven approaches in information management.
10. About Jake Dolezal
Jake Dolezal is a contributing analyst at GigaOm. He has two decades of experience in the information management field, with expertise in analytics, data warehousing, master data management, data governance, business intelligence, statistics, data modeling and integration, and visualization. Jake has solved technical problems across a broad range of industries, including healthcare, education, government, manufacturing, engineering, hospitality, and restaurants. He has a doctorate in information management from Syracuse University.
11. About GigaOm
GigaOm provides technical, operational, and business advice for IT’s strategic digital enterprise and business initiatives. Enterprise business leaders, CIOs, and technology organizations partner with GigaOm for practical, actionable, strategic, and visionary advice for modernizing and transforming their business. GigaOm’s advice empowers enterprises to successfully compete in an increasingly complicated business atmosphere that requires a solid understanding of constantly changing customer demands.
GigaOm works directly with enterprises both inside and outside of the IT organization to apply proven research and methodologies designed to avoid pitfalls and roadblocks while balancing risk and innovation. Research methodologies include but are not limited to adoption and benchmarking surveys, use cases, interviews, ROI/TCO, market landscapes, strategic trends, and technical benchmarks. Our analysts possess 20+ years of experience advising a spectrum of clients from early adopters to mainstream enterprises.
GigaOm’s perspective is that of the unbiased enterprise practitioner. Through this perspective, GigaOm connects with engaged and loyal subscribers on a deep and meaningful level.
12. Copyright
© Knowingly, Inc. 2022 "API and Microservices Management Benchmark" is a trademark of Knowingly, Inc. For permission to reproduce this report, please contact sales@gigaom.com.