Reviewing Architectural and Data Aspects of Microservices Applications

Microservices architectures have reached the mainstream. A 2016 Cloud Native Computing Foundation (CNCF) survey found that 23% of respondents had deployed containers in production; by 2020, this figure had jumped to 92%. In last year’s survey, attention turned to the proportion of applications using containers, and 44% of respondents reported that containers were used for most or all production applications.

To date, much of the focus around microservices has been on the applications being built, rather than on broader aspects like data management. To navigate a path toward better adoption, let’s start by exploring the terminology behind microservices. We can then consider their challenges and how to respond from an architecture and data management perspective.

The Terminology Behind Microservices

Microservices are built on the twin pillars of architectural principle and web-based practice. We can trace these pillars back through service-oriented architectures (SOA) and object orientation, each serving as a foundation for scalable, adaptable, and resilient applications.

The advent of the web, together with protocols such as HTTP and SOAP, catalyzed the creation of highly distributed applications that use the internet as a communications backbone and cloud providers as a processing platform. It was from this foundation that microservices emerged as the basis for cloud-native applications.

The final pieces of the microservices puzzle were put in place when engineering teams rallied around the Docker container format, giving microservices a standard shape and interface. Adoption of Kubernetes as an approach to container deployment and orchestration soon followed.

Today, container-based, Kubernetes-orchestrated microservices are the standard for building cloud-native applications, and adoption has been swift. Just three years ago, 20% of enterprise organizations had deployed Kubernetes-based microservices. At the time of writing, a majority of enterprises have deployed Kubernetes.

The advantages of microservices are legion, not only because of the modular approach to application building, but also because so many elements of the stack are readily available. A broad range of data management platforms, application libraries, security capabilities, user interfaces, development tools, and operational and third-party capabilities are built on (or support) microservices.

As a result, developers need only care about the core of their application. Teams can build and deploy new functionality much faster, responding to the urgency that organizations face around digital transformation. The drive to transform has also fanned the cloud-native flames, creating yet more impetus toward microservices.

Data Challenges of Microservices Adoption

Certain aspects of microservices create new opportunities and reset the context for application builders:

Microservices design best practice: Careful design is needed to ensure each microservice is self-contained and minimizes dependencies on others. Otherwise, individual microservices can end up too large and monolithic, or so small that the overall system becomes needlessly complex.

Stateless and stateful communication: Microservices-based applications work best when communication of state is minimized, meaning that one microservice knows very little about the condition of another. State still has to live somewhere, whether stored directly or derived from other data, so where and how it is stored becomes a design decision (see the sketch after this list).

Third-party API and library management: A huge advantage of the microservices model is that applications can build on third-party libraries, stacks, and services. These integrations are generally enabled by an application programming interface (API).

Use of existing applications and data stores: A microservices application may depend on an existing application or data store, which cannot be changed for regulatory or cost reasons. Even if accessible via API, it may not have been designed for a distributed architecture.

Rate of change: Microservices applications tend to be developed according to Agile and DevOps approaches, which set expectations for fast results and continuous delivery of new features and updates.
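
To make the stateless communication point concrete, here is a minimal sketch of a microservice externalizing its state to a shared data store instead of holding it in memory, so any replica can serve any request. It uses the redis-py client; the service's purpose, key names, and fields are hypothetical and shown only for illustration.

```python
import json

import redis

# Shared store connection; host and port are assumed defaults.
r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def save_cart(session_id: str, cart: dict) -> None:
    """Persist a session's shopping cart to the shared store (expires after one hour)."""
    r.set(f"cart:{session_id}", json.dumps(cart), ex=3600)


def load_cart(session_id: str) -> dict:
    """Fetch the cart for a session; return an empty cart if none exists."""
    raw = r.get(f"cart:{session_id}")
    return json.loads(raw) if raw else {"items": []}


if __name__ == "__main__":
    # Because no state lives inside the service process, these calls could be
    # handled by different replicas without any coordination between them.
    save_cart("abc123", {"items": [{"sku": "sku-42", "qty": 2}]})
    print(load_cart("abc123"))
```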

Each of these brings its share of challenges for engineering teams, and each has data management ramifications. These include:

Performance, Scalability, and Availability
As a microservices application becomes more complex, managing the network of states across it becomes increasingly difficult, creating communication overheads. The relationship between microservices and how data is stored and managed can become the greatest bottleneck, owing to data distribution and synchronization challenges across the architecture. Wait states at API gateways can also reduce performance, impacting scalability and creating availability risks. Legacy data stores may lack the cloud-native features that microservices require, adding further interfacing overheads.
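
One common way to relieve the data-access bottlenecks and wait states described above is a cache-aside read path: a microservice checks a fast cache first and only falls through to the slower system of record on a miss. The sketch below is a minimal, assumed illustration using redis-py; the fetch_from_primary function is a hypothetical stand-in for whatever legacy database or upstream API the application actually depends on.

```python
import json

import redis

cache = redis.Redis(host="localhost", port=6379, decode_responses=True)


def fetch_from_primary(customer_id: str) -> dict:
    # Stand-in for the slower system of record (legacy database, upstream API, ...).
    return {"id": customer_id, "name": "Example Customer"}


def get_customer(customer_id: str) -> dict:
    """Cache-aside read: try the cache first, fall back to the primary store on a miss."""
    key = f"customer:{customer_id}"
    cached = cache.get(key)
    if cached is not None:
        return json.loads(cached)

    record = fetch_from_primary(customer_id)
    # A short TTL keeps the cached copy reasonably fresh without a separate
    # invalidation mechanism.
    cache.set(key, json.dumps(record), ex=300)
    return record
```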

Maintainability and Fragility
The inherent complexity of microservices applications can make them harder to maintain, particularly if microservices are too large or too small, or if data pathways are sub-optimal. Maintenance overheads can conflict with DevOps approaches; simply put, troubleshooting and resolving issues can slow the development of new features across data management and other parts of the architecture.

Manageability and Security
The above challenges can manifest as operational overhead. Day-two operations for microservices applications require a detailed grasp of the application architecture, what is running where, and the sources of issues. Particular issues can arise around runtime APIs and legacy data stores. Meanwhile, application complexity and the use of third-party libraries expand the application's attack surface, increasing security risk.

Addressing Challenges – Review and Improve

Here are a few things to consider to ensure you start building microservices applications the right way. First, if the problem is architectural, think about the solution architecturally by taking data management and other aspects into account. Second, understand that no organization operates in a greenfield environment.

A strength of the microservices approach is its notion of right-sizing, or what we might call the Goldilocks principle; microservices can be too big or too small, but they can also be just right. This means they operate standalone, contain the right elements to function, and are developed and maintained by domain experts.

Usefully, you can apply this principle to a new design or an existing application. While it is not straightforward to get a microservices architecture right, analysis of the problem space and creation of an ideal microservices model can take place relatively quickly. This model can then be mapped to the existing architecture as a review process.

This exercise identifies areas of weakness and offers opportunities to resolve performance bottlenecks and other issues, not least of which is how data is stored and managed. It may be that a part of the application needs to be refactored, which is a decision for engineering management. For example, you could look toward architectural patterns such as caching and aggregators for data movement, strangler and façade for legacy systems, and Command Query Responsibility Segregation (CQRS) for scalability, performance, and security.
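
As an illustration of the strangler and façade idea, the sketch below shows a thin routing layer that sends migrated endpoints to a new microservice while everything else continues to reach the legacy application. This is a simplified, assumed example: the path prefixes and backend URLs are hypothetical, and in practice this routing usually lives in an API gateway or reverse proxy rather than hand-written code.

```python
# Minimal strangler-façade sketch: route migrated paths to the new microservice,
# everything else to the legacy application. Prefixes and URLs are hypothetical.
MIGRATED_PREFIXES = ("/payments", "/refunds")

LEGACY_BASE_URL = "https://legacy.internal.example.com"
NEW_SERVICE_BASE_URL = "https://payments-service.internal.example.com"


def resolve_backend(path: str) -> str:
    """Return the backend base URL that should serve the given request path."""
    if path.startswith(MIGRATED_PREFIXES):
        return NEW_SERVICE_BASE_URL
    return LEGACY_BASE_URL


if __name__ == "__main__":
    print(resolve_backend("/payments/123"))   # routed to the new microservice
    print(resolve_backend("/accounts/987"))   # still served by the legacy system
```

As more functionality is migrated, prefixes move from the legacy side to the new service until the legacy system can be retired, without client-facing URLs ever changing.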

We give an example of CQRS below, representing a payment application and using Redis as the data store. In this example, two microservices are deployed: one manages payment approvals and makes updates; the other enables queries on payment history from a cached version of the data store, so the performance impact on the write path is minimized. The Redis Data Integration capability tracks changes and refreshes the cached version in real time, further reducing the load on the microservices.
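
The sketch below illustrates the shape of that CQRS example in code. It is a simplified, assumed implementation rather than the actual application: the command side records an approved payment in the primary store, and the query side answers payment-history requests only from a per-customer read model. Here the read model is refreshed inline for brevity; in the architecture described above, Redis Data Integration would keep the cached view in sync instead, so the query service never touches the write path.

```python
import json
import time

import redis

r = redis.Redis(host="localhost", port=6379, decode_responses=True)


def approve_payment(payment_id: str, customer_id: str, amount: float) -> None:
    """Command side: record an approved payment in the primary store."""
    payment = {
        "id": payment_id,
        "customer": customer_id,
        "amount": amount,
        "approved_at": time.time(),
    }
    r.set(f"payment:{payment_id}", json.dumps(payment))
    # Inline read-model update for brevity; in the described architecture,
    # change tracking (e.g. Redis Data Integration) would refresh this instead.
    r.rpush(f"history:{customer_id}", json.dumps(payment))


def payment_history(customer_id: str) -> list:
    """Query side: read payment history from the cached read model only."""
    return [json.loads(item) for item in r.lrange(f"history:{customer_id}", 0, -1)]


if __name__ == "__main__":
    approve_payment("pay-001", "cust-42", 99.95)
    print(payment_history("cust-42"))
```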

A second facet of the architecture review is to consider the data management, API gateways, third-party services, development, and operational tooling already in use. As the example shows, a data management platform such as Redis may already have features that can support the application’s architectural needs. The same point applies across other platforms and tooling. We advise working with existing suppliers and reviewing their solutions to understand how to meet the needs of the application under review without having to deploy additional capabilities.

Conclusion

In summary, adopting microservices is not about rewriting your application in its entirety. By reviewing your architecture alongside existing third-party capabilities, you can establish a roadmap for the application that addresses existing challenges around scaling, performance, security, and more, preventing these issues from impairing application effectiveness in the future.

Ultimately, no organization can assume that microservices will “just work.” However, the modularity of microservices means that it is never too late to start applying architectural best practices to existing applications and to benefit from improved scalability, resiliency, and maintainability as a result. Plus, you may find that you can do much more with what you already have.