Microservices and Service Deployment


A microservice application is made up of tens or even hundreds of services, written in different languages and frameworks. Each service is a mini application that must be provided with appropriate memory, CPU, and other resources. In spite of this complexity, deploying services must be reliable, fast, and cost-effective. The process should be automated as much as possible, from Continuous Integration (CI) through Continuous Deployment (CD); this can save a lot of time and money.

Packaging Services


Every service might require a different set of dependencies for its execution. Satisfying the dependencies of all the services can be a tedious and challenging job. Hence, before deployment, you package each service together with its dependencies as a single Docker image or Virtual Machine Image (VMI).
Packaging services as separate Docker or VMI images creates isolation. These images are then used to create instances of the service. Service isolation is needed for the following reasons:
  • Deploying multiple microservices on one VM lets them influence or disturb one another.
  • One microservice might generate so much load and consume so many of the machine's resources that the other microservices die.
  • You can easily scale up a microservice running on its own VM when the load increases.
  • When all the processes running on a VM belong to one microservice, it becomes easy to spot the misbehaving one when analyzing an error.
  • You can equip the entire VM environment with all the libraries and dependencies the microservice requires, and deliver it as a single image (Virtual Machine Image).
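As a sketch of the packaging step, here is a minimal Dockerfile for a hypothetical Python service (the file names and the `service.py` entry point are illustrative assumptions, not a prescribed layout):

```dockerfile
# Base image carries the OS layer and runtime the service depends on
FROM python:3.11-slim
WORKDIR /app

# Bake the service's own dependencies into the image
COPY requirements.txt .
RUN pip install --no-cache-dir -r requirements.txt

# Add the service code itself
COPY . .

# The image is now a self-contained unit: service plus everything it needs
CMD ["python", "service.py"]
```

You build the image once (e.g. `docker build -t orders-service .`) and then create as many isolated instances as needed with `docker run`.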

VM or Container Technology


Containers act as small boxes that isolate applications and allow them to run within a single kernel and OS. They are configured with the scripts and libraries that the application depends on.
Virtual Machine Images provide stricter isolation, as each VM has its own kernel and OS.
With containers, the application is packaged first and then deployed on servers. With VMs, the VM is created first on the host machine, and applications are then deployed onto it.
Both containers and VMs are virtualization technologies, but they differ in a few areas:
  • OS: All containers share the host machine's OS, while each VM has its own OS.
  • Load: Containers are lightweight, whereas VMs are heavy.
  • Security: Containers are less secure, whereas VMs are more secure.
  • Portability: Docker containers are easily portable, but only to host machines running the same OS kernel.

Deployment Strategies


The most common challenges when deploying services from the final testing stage to the live production environment are minimizing downtime as much as possible and rolling back immediately if things do not work out as expected. You can make deployments safer, reducing downtime and risk, through the following strategies:
  • Blue-Green Deployment
  • Canary Releasing
The spirit of Blue-Green Deployment is deploying all at once; the spirit of Canary Release is deploying incrementally.

In Blue-Green Deployment, you have two identical production environments, called Blue and Green. One of them, say Blue, is live, and the other (Green) is idle and has the new version of the software. When you are confident after final testing, you switch the router to the new environment (Green) so it handles all the traffic and requests. Blue is now idle.
If things do not go as expected, the router is switched back to Blue. Otherwise, if you are happy with the deployment, Green continues to handle traffic and the Blue environment is used for the next version's deployment.
This approach is also known as Red-Black or A/B Deployment.
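The router switch at the heart of Blue-Green Deployment can be sketched as a reverse-proxy configuration; the nginx snippet below is illustrative (the `blue.internal`/`green.internal` hostnames and ports are assumptions):

```nginx
# Traffic currently flows to the Blue environment.
upstream production {
    server blue.internal:8080;      # live
    # server green.internal:8080;   # idle, running the new version
}

server {
    listen 80;
    location / {
        proxy_pass http://production;
    }
}
```

Switching environments amounts to swapping which `server` line is active and reloading the proxy; rolling back is the reverse edit, which is why the switch can be near-instant.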

In Canary Release, you gradually roll out the new software to a small group of users to verify that it is working as expected. Once you are confident in the new version, you gradually shift more traffic to it by deploying it to more servers in your infrastructure.
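A canary rollout can be sketched as weighted routing. This nginx fragment (hostnames are illustrative) sends roughly one request in ten to the canary:

```nginx
# ~10% of requests go to the canary running the new version.
upstream app {
    server stable.internal:8080 weight=9;
    server canary.internal:8080 weight=1;
}
```

As confidence grows, you increase the canary's weight until it carries all the traffic, at which point the old version can be retired.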

Deployment Patterns


There are different ways in which you can deploy your microservices.

Multiple Services in Single Server


This is a traditional approach to application deployment, where you configure each server (physical or virtual) and run multiple services on it.
Pros:
  • Efficient use of resources
  • Faster deployments: Just copy the service to host and run it.
Cons:
  • No isolation between service instances.
  • You cannot easily monitor or limit the resources used by each service instance.
  • Complexity increases because microservices can be written in different languages or frameworks; the development team will have to share many details (the dependencies and libraries needed to run each service) with the operations team.


Single Service in Single Server - Each Service as VMI

In this pattern of deployment, you run a single service on each virtual machine.
Pros:
  • Easy to monitor each service and to allocate CPU and memory to it.
  • Isolation for each service.
  • Packaging each service as a VMI makes it a black box, encapsulating the service's implementation technology.


Cons:
  • Less efficient resource utilization.
  • VMs are heavy and slow to build (except with tools such as Boxfuse).


Single Service in Single Server - Each Service In a Container


In this pattern of deployment, you run a single service in each container.

Pros:
  • Similar benefits as VM
  • Lightweight
  • Fast to build


Cons:
  • Less mature infrastructure than VM (rapidly increasing though).
  • Containers share the same kernel of host OS, which makes them less secure than VM.


Serverless Deployment


You upload your services to a public cloud service provider and run them whenever you want. The cloud provider takes care of the underlying infrastructure (physical servers, VMs, containers) and other requirements.
A few environments that you can use for Serverless Deployment:
  • AWS Lambda
  • Azure Functions
  • Google Cloud Functions


Pros:
  • Faster software releases.
  • Reduced cost of development and operations.
  • Allows developers to focus on code and deliver updates faster, with zero administration work.
Cons:
  • Security issues might arise, as the servers and resources are not under your control.
  • Because you do not control the servers, you cannot install your own monitoring software; you have to depend on the vendor's tools for monitoring and debugging your services.
  • On scaling up, the platform might need some time before it can handle requests. This delay is known as a cold start.
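As a sketch of how little code a serverless function needs, here is a minimal handler of the kind you might upload to AWS Lambda. The event shape and greeting logic are illustrative assumptions; the `handler(event, context)` signature is the one Lambda expects for Python functions:

```python
import json

def handler(event, context):
    """Entry point invoked by the platform; there is no server code of our own."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"Hello, {name}!"}),
    }

# Local invocation for testing; in production, Lambda supplies event and context.
if __name__ == "__main__":
    print(handler({"name": "microservice"}, None))
```

Everything outside this function (provisioning, scaling, patching the OS) is the provider's responsibility, which is what the pros above are describing.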


Automate Deployment


The main aim of microservices is independent deployment. Manual deployment or correction is not practical given the large number of microservices; the process has to be automated. There are different ways in which you can automate service deployment:
  • Installation Scripts
  • Deployment Tools

Installation Scripts


  • Installation scripts install the necessary software packages, generate configuration files, and create user accounts on the machine.
  • Such scripts, when run repeatedly, might fail. For example, a script that tries to create a configuration file or user account that already exists on the machine would fail, as these cannot easily be overwritten.
  • You can implement these using shell scripts.

Deployment Tools

  • You can use DevOps tools such as Puppet, Chef, and Ansible to deploy and configure your servers.
  • You describe the desired state your system should be in after installation.
  • Running the same installation (for example, an Ansible script/playbook) multiple times makes no further changes to your system once it is already in the desired state.
  • You can easily configure multiple servers at the same time.
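The declarative, idempotent style described above can be sketched with a hypothetical Ansible playbook (the host group, package, and user names are illustrative). Re-running it changes nothing once the desired state is reached:

```yaml
# playbook.yml - describes desired state, not a sequence of imperative commands
- hosts: webservers
  become: true
  tasks:
    - name: Ensure nginx is installed
      apt:
        name: nginx
        state: present

    - name: Ensure the service user exists
      user:
        name: appuser
        state: present
```

Contrast this with a shell script: `useradd appuser` fails on the second run, whereas `state: present` simply verifies the account exists and moves on.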


Inter Service Communication


Just as good communication is the key to success in life, good communication among your microservices is the key to a successfully running application.
Wait, did you just wonder how microservices can communicate when they are in isolation?
Well, you can achieve service interoperability through:
  • Synchronous Communication
  • Asynchronous Communication

Asynchronous Communication


In Asynchronous Communication, the client (think of your browser) sends a message to a service without expecting an immediate reply. The client does not get blocked, so the user can continue with other work.
Example: You can start ten message threads with your ten friends on Fresco Talk and handle the response as they come in (async).
In short, asynchronous communication does not require a response to proceed to the next task.
  • Standard protocols used in asynchronous communication are AMQP and STOMP.
  • Open-source messaging systems you can choose from: RabbitMQ, Apache Kafka, Apache ActiveMQ, and NSQ.
Pros:
  • Client and service need not be available at the same time.
  • The client need not use a service discovery mechanism to determine the location of a service instance.
  • No blocking
  • Provides good user experience.
Cons:
  • Response time is unpredictable.
  • It is more complicated: the client must match each response to its request, because the service's response is not immediate and the client may have sent multiple requests to other services in the meantime.
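The fire-and-forget pattern can be sketched with Python's standard library, using a `queue.Queue` as a stand-in for a broker such as RabbitMQ (the message shape and order IDs are illustrative):

```python
import queue
import threading

broker = queue.Queue()  # stands in for a message broker
processed = []          # records what the service has handled

def service_worker():
    """Consumes messages whenever it gets to them; the client never blocks."""
    while True:
        msg = broker.get()
        if msg is None:          # shutdown signal for the demo
            break
        processed.append(msg["order_id"])
        broker.task_done()

threading.Thread(target=service_worker, daemon=True).start()

# The client enqueues messages and immediately moves on to other work.
for i in range(3):
    broker.put({"order_id": i, "action": "confirm"})

broker.join()       # only so the demo finishes; a real client would not wait
broker.put(None)
print(processed)    # → [0, 1, 2]
```

Note that nothing in the client depends on the worker being fast, or even running at the moment the messages are sent; a real broker would also persist the queue across restarts.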

Synchronous Communication


In Synchronous Communication, the client sends a request to a service and waits for the service to respond immediately.
Example: During an online bank transaction, you are not supposed to refresh the website and cannot do any other task on the page until the response comes.
In short, in synchronous communication the response is a must before proceeding to the next task.
  • Standard protocols used in synchronous communication are REST and Thrift.
  • Open-source API design tools you can choose from: RAML and Swagger.
Pros:
  • Simple to implement.
  • You can easily test an HTTP API from your browser (using a Chrome extension such as Postman) or from the CLI (using curl).
  • This technique is firewall friendly.
  • Response is received immediately.

Cons:

  • For long-running operations, the user experience degrades.
  • Client and service must be available for the duration of the exchange.
  • Clients must know the location of the service instance, typically via service discovery.
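The blocking request/response cycle can be sketched with only the Python standard library; the `/health` path and JSON body are illustrative, and a real service would expose a full REST API:

```python
import http.server
import json
import threading
import urllib.request

class Handler(http.server.BaseHTTPRequestHandler):
    def do_GET(self):
        # The service computes its answer and returns it immediately.
        body = json.dumps({"status": "ok"}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):  # silence per-request logging for the demo
        pass

# Port 0 lets the OS pick a free port for this self-contained example.
server = http.server.HTTPServer(("127.0.0.1", 0), Handler)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# The client blocks on urlopen until the service responds.
with urllib.request.urlopen(f"http://127.0.0.1:{port}/health") as resp:
    reply = json.loads(resp.read())
print(reply)  # → {'status': 'ok'}

server.shutdown()
```

The `urlopen` call is where the "client and service must both be available" constraint shows up: if the server thread were not running, the call would fail rather than queue the request.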

Message Formats


There are two message formats that can be used to transfer data among microservices:
  • Text format: JSON, XML
  • Binary format
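The trade-off between the two formats can be sketched with the standard library: the same record serialized as self-describing JSON text versus a packed binary layout (the field layout chosen here is an assumption both sides would have to agree on):

```python
import json
import struct

record = {"user_id": 42, "balance": 1234.56}

# Text format: human-readable and self-describing, but larger on the wire.
text = json.dumps(record).encode("utf-8")

# Binary format: compact and fast to parse, but opaque; both sides must
# know the layout. Here: one unsigned 32-bit int plus one 64-bit double.
binary = struct.pack("<Id", record["user_id"], record["balance"])

print(len(text), len(binary))  # the binary form is much smaller

# Decoding the binary message requires the exact same layout string.
user_id, balance = struct.unpack("<Id", binary)
```

Real systems typically get the binary layout from a schema language (e.g. a serialization framework) rather than hand-written `struct` strings, but the size/readability trade-off is the same.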

Service Mesh


A service mesh is a dedicated infrastructure layer for handling service-to-service communication traffic. It is implemented as an array of lightweight network proxies deployed alongside application code, without the application needing to be aware of them. A few tools that implement a service mesh: Linkerd and Istio.
