Microservices Architecture - Patterns




API Gateway

Microservices expose fine-grained APIs, so each client has to interact with multiple services to collect the information it needs. For example, a payment client may need to fetch data from several services such as product, customer, and billing.
The API Gateway handles this external traffic and coordinates the calls to the different services. It encapsulates the internal system architecture and provides an API tailored to each client.
The API Gateway is responsible for protocol translation, composition, and request routing.
Benefits:

  • Insulates the clients from the internal structure of the application
  • Provides a specific API to each client, reducing the number of round trips between the client and the application.
  • Simplifies the client by moving the logic for calling the various services from the client into the API Gateway.
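As a sketch of the composition role, the toy gateway endpoint below fans a single client request out to three stubbed backends and merges the results into one response. The `product_service`, `customer_service`, and `billing_service` names and their return values are illustrative stand-ins, not real APIs; a real gateway would make network calls and handle per-backend failures.

```python
# Hypothetical backend stubs; a real gateway would call these over the network.
def product_service(order_id):
    return {"product": "book"}

def customer_service(order_id):
    return {"customer": "alice"}

def billing_service(order_id):
    return {"total": 42.0}

def payment_view(order_id):
    """Gateway composition: one client round trip, three backend calls."""
    response = {}
    for backend in (product_service, customer_service, billing_service):
        response.update(backend(order_id))
    return response
```

The client makes a single call to `payment_view` instead of three separate calls, which is exactly the round-trip reduction listed above.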

Communication

In a monolithic application, components invoke one another through function calls. A microservices-based application, in contrast, is distributed across multiple services, so each service instance must interact with the others through an inter-process communication (IPC) mechanism. Services can communicate with one another either through Messaging or Remote Procedure Invocation.

Interaction Style


There are several client service interaction styles classified along two dimensions.
  • One-to-one or One-to-many
    -- One-to-one – Every client request is processed by exactly one service instance.
    -- One-to-many – Each request is processed by several service instances.
  • Synchronous or Asynchronous
    -- Synchronous – The client expects a timely response from the service and might even block while it waits.
    -- Asynchronous – The client does not block while waiting for a response, and the response is not necessarily sent immediately.

One to One Interaction Types

Request/response – A client sends a request to a service and waits for a response, expecting it to arrive in a timely fashion. The client might even block while waiting.
Notification (one-way request) – A client sends a request to a service, but no reply is expected or sent.
Request/async response – A client sends a request to a service, which responds asynchronously. The client does not block while waiting and is designed with the assumption that the response might not arrive for a while.
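The difference between blocking and non-blocking interaction can be illustrated with a small Python sketch, using a thread pool to stand in for a remote service call. The `remote_service` function and its simulated latency are made up for the example.

```python
import concurrent.futures
import time

def remote_service(x):
    """Stand-in for a remote call; the sleep simulates network latency."""
    time.sleep(0.05)
    return x * 2

# Synchronous request/response: the client blocks until the response arrives.
sync_result = remote_service(21)

# Asynchronous request/response: the client gets a future immediately,
# keeps working, and collects the response later.
with concurrent.futures.ThreadPoolExecutor() as pool:
    future = pool.submit(remote_service, 21)
    # ... the client can do other work here instead of blocking ...
    async_result = future.result()
```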
One to Many Interaction Types

Publish/subscribe – A client publishes a notification message that is consumed by zero or more interested services.
Publish/async responses – A client publishes a request message and then waits a certain amount of time for responses from the interested services.

Service Discovery & Registry

Ever wondered how the clients of a service (or a router) know about its available instances? The solution is a Service Registry.
  • Service Registry acts as a database of services, their instances, and corresponding locations.
  • Service instances are registered with the service registry on startup and deregistered on shutdown.
  • Clients of the service/router query the service registry to find the available instances of a service.
  • Popular service registries include Netflix Eureka, Apache ZooKeeper, and Consul.
Service instances have dynamically assigned network locations, and the set of instances changes dynamically due to auto-scaling, failures, and upgrades. Clients therefore need a Service Discovery mechanism to cope with this. There are two types of Service Discovery:
  • Client-side discovery
  • Server-side discovery
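The register/deregister/query lifecycle above can be sketched as a minimal in-memory registry. Real registries such as Eureka or Consul add health checks, TTLs, and replication, which this toy class omits.

```python
class ServiceRegistry:
    """Toy service registry: maps service names to instance locations."""

    def __init__(self):
        self._instances = {}  # service name -> set of "host:port" locations

    def register(self, service, location):
        # Called by a service instance on startup.
        self._instances.setdefault(service, set()).add(location)

    def deregister(self, service, location):
        # Called on shutdown (real registries also expire dead instances).
        self._instances.get(service, set()).discard(location)

    def lookup(self, service):
        # Called by clients or routers to find available instances.
        return sorted(self._instances.get(service, set()))
```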

Client-side discovery
The clients take on the responsibility of determining the network locations of available service instances by querying the service registry. This is a straightforward approach.
Clients can make intelligent, application-specific load-balancing decisions, for example using hashing. On the other hand, this client-side discovery logic must be implemented for every programming language and framework the clients use.
Netflix OSS (e.g., Eureka) is a good example of this approach.
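A sketch of the idea, assuming the registry is just a dict of instance addresses (the service name and addresses are illustrative): the client looks up the instances itself and hashes a request key, so the same key consistently maps to the same instance.

```python
# Illustrative registry contents; in practice this comes from a registry query.
registry = {"billing": ["10.0.0.1:8080", "10.0.0.2:8080", "10.0.0.3:8080"]}

def pick_instance(service, request_key):
    """Client-side, hash-based load balancing over registry entries."""
    instances = registry[service]
    return instances[hash(request_key) % len(instances)]
```

Because the choice is hash-based, requests for the same key (e.g., the same customer ID) keep hitting the same instance, which helps instance-local caching.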

Server-side discovery
The client makes a request via a load balancer. The load balancer queries the service registry and routes each request to an available service instance.
AWS Elastic Load Balancer (ELB) is an example of a server-side discovery router.
Main benefit
  • Details of discovery are abstracted from the client, thus eliminating the need to implement discovery logic for each programming language and framework.
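A round-robin sketch of such a router, assuming the registry contents are passed in as a plain dict (the service name and addresses are illustrative): the client only ever talks to the load balancer, which cycles through the registered instances.

```python
import itertools

class LoadBalancer:
    """Toy server-side discovery router: round-robins over instances."""

    def __init__(self, registry):
        # registry: service name -> list of instance addresses
        self._cycles = {name: itertools.cycle(instances)
                        for name, instances in registry.items()}

    def route(self, service):
        # Each call returns the next instance in round-robin order.
        return next(self._cycles[service])
```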


Circuit Breaker
It functions similarly to an electrical circuit breaker.
When the number of consecutive request failures crosses a threshold,
  • The circuit breaker trips for a limited timeout period, during which all attempts to invoke the remote service fail immediately.
  • After the timeout expires, the circuit breaker allows a limited number of test requests to pass through.
  • If the requests are successful, the circuit breaker continues normal operation. If there is a failure, the timeout period starts again.
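The tripping behavior described above can be sketched as a small class. The threshold and timeout defaults are illustrative, not values from any particular library; production implementations (e.g., Resilience4j) add half-open request limits and metrics.

```python
import time

class CircuitBreaker:
    def __init__(self, threshold=3, reset_timeout=30.0):
        self.threshold = threshold          # consecutive failures before tripping
        self.reset_timeout = reset_timeout  # seconds to stay open
        self.failures = 0
        self.opened_at = None               # None means the circuit is closed

    def call(self, func, *args):
        if self.opened_at is not None:
            if time.monotonic() - self.opened_at < self.reset_timeout:
                raise RuntimeError("circuit open: failing fast")
            self.opened_at = None           # half-open: allow a trial request
        try:
            result = func(*args)
        except Exception:
            self.failures += 1
            if self.failures >= self.threshold:
                self.opened_at = time.monotonic()  # trip the breaker
            raise
        self.failures = 0                   # success closes the circuit
        return result
```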
Data Management

Unlike Monolithic architecture where data resides typically in Relational Database Management Systems (RDBMS) and could be easily accessed using SQL queries, data access is a little complex in Microservices architecture. This is because data owned by each microservice is private to that microservice and can only be accessed through its API.
This data encapsulation ensures that the microservices are loosely coupled and can evolve independently of one another.

Event Driven Architecture

Microservices update their business entities in response to events and publish an event whenever a notable action occurs.
For example, a payment microservice publishes an event when something notable happens, such as an update to a business entity. Other microservices, such as inventory, subscribe to those events. When a microservice receives an event, it can update its own business entities, which might result in further events being published.


Benefits
  • It allows the implementation of transactions that span multiple services and provide eventual consistency.
Challenges
  • The programming model is more complex. Often you must implement compensating transactions to roll back from application-level failures.
  • The service subscribers must be able to detect and ignore duplicate events.
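A toy in-process version of this flow, with hypothetical payment/inventory roles, including the duplicate detection that the challenge above calls for: the subscriber remembers event IDs it has already seen and ignores redeliveries.

```python
subscribers = []

def publish(event):
    """Stand-in for a message broker: deliver the event to every subscriber."""
    for handler in subscribers:
        handler(event)

class InventorySubscriber:
    """Hypothetical inventory service reacting to payment events."""

    def __init__(self):
        self.seen = set()       # event IDs already processed
        self.handled = []       # events actually acted upon

    def __call__(self, event):
        if event["id"] in self.seen:   # duplicate delivery: ignore it
            return
        self.seen.add(event["id"])
        self.handled.append(event)     # e.g., reserve stock here

inventory = InventorySubscriber()
subscribers.append(inventory)
publish({"id": 1, "type": "OrderPaid"})
publish({"id": 1, "type": "OrderPaid"})   # broker redelivers the same event
```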
Command Query Responsibility Segregation (CQRS)

Queries in a microservice architecture are implemented through CQRS. In CQRS, the application is split into two components:
  • The command side handles create, update, and delete requests and emits events when data changes.
  • The query side executes queries against one or more materialized views, which are kept up to date by subscribing to the stream of events published when data changes.
Takeaways
  • CQRS supports event driven architecture.
  • Some complex domains may be simpler to tackle using CQRS.
  • In high-performance applications, CQRS separates the read load from the write load, allowing each to be scaled independently.
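The command/query split can be sketched in a few lines, with a hypothetical ProductCreated event: the command side only appends events, and the query side projects them into a materialized view that queries read from.

```python
events = []   # stand-in for the event stream / log

def create_product(product_id, name):
    """Command side: mutate state by emitting an event."""
    events.append(("ProductCreated", product_id, name))

view = {}     # materialized view consumed by the query side

def apply_events():
    """Query side: project the event stream into the view."""
    for kind, product_id, name in events:
        if kind == "ProductCreated":
            view[product_id] = name

create_product(1, "book")
apply_events()   # in a real system this runs as events arrive
```

The view is eventually consistent: it only reflects events that have been applied, which is exactly the trade-off noted under Event Driven Architecture.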
Service Deployment

A microservices-based application consists of tens or hundreds of services written in a variety of languages and frameworks, and each service acts as a mini-application of its own, with specific:
- Deployment requirements
- Resource requirements
- Scaling requirements
- Monitoring requirements

The number of instances of each service is based on the demand for that service.
Each service instance must be provided with appropriate CPU, memory, and I/O resources. Deploying services must also be quick, cost-effective, and reliable.
Service Deployment is classified into three types-
  • Multiple Service Instances per Host Pattern
  • Service Instance per Host Pattern
  • Service Instance per Container Pattern
Multiple Service Instances per Host Pattern

Run multiple service instances provisioned on one or more physical or virtual hosts. The two flavors of this pattern are:
  • Each service instance runs as a process or a process group
  • Multiple service instances run in the same process or process group

Service Instance per Host Pattern

Here you package each service as a virtual machine (VM) image, e.g., an Amazon EC2 AMI. Each service instance is a VM launched from that image.
Key advantages include:
  • Each service instance runs in complete isolation. It has a fixed amount of CPU and memory and can’t steal resources from other services.
  • Easy to leverage mature cloud infrastructure by leveraging features such as load balancing and auto scaling.
Netflix's video streaming service is a well-known example of this approach.

Service Instance per Container Pattern

Here you will package the service as a (Docker) container image and deploy each service instance as a container.
Containers are a virtualization mechanism at the operating-system level. A container consists of one or more processes running in a sandbox, and you can limit a container's memory and CPU resources.
Examples of container technologies include Docker and Solaris Zones.
Caching

A web cache (or HTTP cache) is an information technology for the temporary storage (caching) of web documents, such as HTML pages and images, to reduce bandwidth usage, server load, and perceived lag. A web cache system stores copies of documents passing through it; subsequent requests may be satisfied from the cache if certain conditions are met.

Caching is the capability to store data temporarily to reduce the loading times and I/O of a system, thereby improving its performance. The goals of caching in microservices are:
  • Identifying what can be cached
  • Deciding how to cache data for faster responses
REST

Key components of REST architecture style that support caching:
  • Client-Server - Separates the interface from the server
  • Client-Stateless - No client context is stored on the server between requests
  • Cacheable - Clients can cache responses, and servers must make clear what can and cannot be cached
  • Layered System - A client cannot tell whether it is connected directly to the server or through an intermediary; caches/proxies are transparent to the layers above them
  • Code on Demand - The server can transmit code to be executed on the client
  • Uniform Interface - Resources are identified by standard means such as URLs/URIs

HTTP 1.1 

HTTP/1.1 defines a set of headers that support caching. Cache-Control provides many directives that control caching for both responses and requests.
Response directives include public, private, no-cache, no-store, no-transform, proxy-revalidate, and max-age, which control the level of caching done.
Expires - indicates when the resource becomes stale.
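As a sketch, freshness under Cache-Control's max-age directive can be checked like this, assuming the header value has already been extracted from the response as a string:

```python
def parse_max_age(cache_control):
    """Return the max-age value in seconds, or None if absent."""
    for directive in cache_control.split(","):
        directive = directive.strip()
        if directive.startswith("max-age="):
            return int(directive[len("max-age="):])
    return None

def is_fresh(age_seconds, cache_control):
    """A cached response is fresh while its age is below max-age."""
    max_age = parse_max_age(cache_control)
    return max_age is not None and age_seconds < max_age
```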

No cache
In this case, there is no caching at all; the web server connects with the service directly.
Every request has to hit the service to get the relevant information. This adds overhead to the service through repeated hits and leads to performance issues.

Shared Cache
Here, multiple web servers use a shared cache/proxy to access the service.
This yields better performance, as some data is cached in between and not every request has to hit the service.

Distributed Cache
With multiple cache systems, the load is reduced further. The Internet Cache Protocol (ICP) ensures consistency among the cache systems.

Local and Remote Cache
Here, caching also happens locally within the web server, which reduces network usage and thereby provides very high performance.
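A local cache of this kind can be sketched as a small TTL cache placed in front of a service call. The `ttl` default and the `load` callback are illustrative; a real deployment would tune the TTL per resource and bound the cache size.

```python
import time

class TTLCache:
    """Toy local cache: entries expire after a time-to-live."""

    def __init__(self, ttl=60.0):
        self.ttl = ttl
        self._store = {}  # key -> (expiry time, value)

    def get(self, key, load):
        """Return the cached value, calling load(key) on a miss or expiry."""
        entry = self._store.get(key)
        now = time.monotonic()
        if entry is not None and now < entry[0]:
            return entry[1]                  # cache hit: no network call
        value = load(key)                    # cache miss: fetch from service
        self._store[key] = (now + self.ttl, value)
        return value
```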


Security

The security aspect related to identifying the requestor of a service is handled by the API Gateway. The API Gateway authenticates the user and passes an access token (e.g., a JSON Web Token) that securely identifies the user in each request to the services.
The access token is issued upon successful authentication and is then used to authenticate all subsequent requests.
A user receives a token upon logging into the application, and this token lets the other services identify the user.
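To show the idea of a signed token that any service can verify without a shared session store, here is a deliberately simplified HMAC-signed token sketch. This is not a real JWT implementation, and the `SECRET` value is illustrative; real systems use a JWT library and proper key management.

```python
import hashlib
import hmac

SECRET = b"shared-secret"   # illustrative only; never hard-code real secrets

def issue_token(user_id):
    """Gateway side: bind a user id to an HMAC signature."""
    sig = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return f"{user_id}.{sig}"

def verify_token(token):
    """Service side: recompute the signature; return the user id if valid."""
    user_id, _, sig = token.rpartition(".")
    expected = hmac.new(SECRET, user_id.encode(), hashlib.sha256).hexdigest()
    return user_id if hmac.compare_digest(sig, expected) else None
```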

Microservice Chassis

Developing any new application requires significant time to put in place the mechanisms that handle cross-cutting concerns such as:
  • Externalized configuration - credentials and the network locations of external services such as databases and message brokers.
  • Logging - configuration of a logging framework (for an audit trail) such as log4j or logback.
  • Health checks - to check the application status, a URL that a monitoring service can “ping” to determine the health of the application.
  • Metrics - track application performance measurements to know more about what and how the application is doing.
With a microservice chassis, it becomes effortless and quick to get started with developing a microservice.
Some of the popular Microservice chassis frameworks include:
  • Java - Spring Boot and Spring Cloud, Dropwizard
  • Go - Gizmo, Micro, Go kit
Challenges
  • Adopting a new programming language or framework is difficult, as a microservice chassis is needed for each programming language/framework.
Observability

Observability patterns focus on:
  • Application logging - Errors, warnings, informational and debug messages about actions are tracked in the log file.
  • Application metrics - Gathers statistics about individual operations of each service and aggregates metrics in centralized metrics service, for reporting and alerting.
  • Audit logging - Records user activity in a database.
  • Exception tracking - Report all exceptions to a centralized exception tracking service that aggregates and tracks exceptions and notifies developers.
  • Distributed tracing - Each external request is assigned a Unique ID, which is passed to all services involved in handling the request and also included in application log messages.
  • Health check API - Returns the health of the service. A load balancer, service registry, or monitoring service can ‘ping’ this API to verify service instance availability.
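The unique-ID propagation behind distributed tracing can be sketched like this; the gateway/billing stage names and the in-memory `log` list are illustrative stand-ins for real services and a real log pipeline.

```python
import uuid

log = []   # stand-in for aggregated application logs

def handle_request(trace_id=None):
    """Edge service: assign a unique ID if the request doesn't carry one."""
    trace_id = trace_id or str(uuid.uuid4())
    log.append((trace_id, "gateway: received request"))
    call_downstream(trace_id)      # the ID travels with every downstream call
    return trace_id

def call_downstream(trace_id):
    """Downstream service: include the same ID in its own log messages."""
    log.append((trace_id, "billing: processing"))
```

Because every log line carries the same trace ID, the full path of one external request can be reconstructed across services by filtering on that ID.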


