
Service-Oriented Architecture -> Microservices -> Serverless Computing

The IT landscape is ever-changing and ever-moving. We have seen it shift from SOA to microservices and later to serverless computing. Now it is time to write about Kubernetes, as many companies are embracing containerization and developing microservices-style applications.
Containerization is becoming increasingly popular, and the emergence of DevOps has fuelled the growth of Kubernetes. The entire process of managing infrastructure and application deployments is changing to a fully automated one driven by declarative configuration files. Developers can move the containers running on their laptops to staging, pre-production and production, and achieve continuous integration and continuous delivery (CI/CD). Kubernetes features like ConfigMaps let you decouple configuration artifacts from an image, keeping containerized apps portable by storing all configuration externally.
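As a sketch of that decoupling, a ConfigMap can hold an application's settings outside the image (the name `app-config` and the keys below are illustrative, not from any real app):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config        # hypothetical name
data:
  APP_MODE: "production"  # plain key/value settings the app reads at runtime
  LOG_LEVEL: "info"
```

A pod can then pull these keys in as environment variables via `envFrom` or mount them as files, so the same image runs unchanged in every environment.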
You can package your entire application as an image: a snapshot of your application bundled with all its dependencies. You can use this image to create containers. Most common software is available as images on Docker Hub. Containers can be orchestrated, managed and connected to the outside world as services using Kubernetes, which has become the de facto standard for cloud-portable application architecture.
I recently stopped downloading software onto my Mac; now I run it as containers. For any new software I need, I first check whether it is available in a container registry such as Docker Hub. This lets me try software without fear of cluttering my machine with packages, since the container can simply be destroyed after use.
You can create Docker images for your application and upload them to Docker Hub or a private Docker registry. Make sure you respect the twelve-factor app standards! This makes it easier to share your application with other developers or to move it from a development to a production environment.
Most cloud providers have adopted containerization, and containers are the best way to achieve multi-cloud portability for your applications. AWS offers EKS, ECS and Fargate; Google Cloud Platform (GCP) offers Google Kubernetes Engine (GKE); and Azure offers Azure Kubernetes Service (AKS).

What exactly is Docker and how does it work?

Docker is software that runs on your computer and enables you to run containers. By creating containers, you can package your application and its dependencies as a single unit of software. First install Docker on your computer, then create a text file named Dockerfile and try some of the Docker commands.
From the Dockerfile, you can build an image and create a container from that image. For example, you can install and run the Nginx web server as a container.
Always use a minimalist base operating system for Docker images, as a small container saves memory and CPU. For example, the ubuntu image is 85.8 MB, whereas the alpine image is only 4.41 MB.
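Putting the two points together, a minimal Dockerfile for serving static content with Nginx on an Alpine base might look like this (the local `html/` directory is an assumption for illustration):

```dockerfile
# Start from the small Alpine-based Nginx image
FROM nginx:alpine
# Copy local static files into Nginx's web root
COPY html/ /usr/share/nginx/html/
# Nginx listens on port 80 by default
EXPOSE 80
```

You would build and run it with something like `docker build -t my-nginx .` followed by `docker run -p 8080:80 my-nginx`, then browse to localhost:8080.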

So, why do I need Kubernetes? I just need Docker, right?

The answer is yes, Docker alone is enough if you run your application in a single container instance. For large Docker-based deployments, you need Kubernetes. An application usually consists of many microservices, and each microservice may run in many containers. How are all of these containers coordinated and scheduled? How do the different containers in your application communicate with each other? How are container instances scaled? Kubernetes provides a standardized means of orchestrating containers and deploying distributed applications (Docker Swarm is another orchestration engine). The main value Kubernetes provides is its self-healing capability. In Kubernetes there is no separation between application and infrastructure; they go together. In a traditional setup, you create the infrastructure, configure it, and then deploy applications onto it. In the Kubernetes world, you install the Kubernetes cluster and then deploy application and infrastructure together with a single command.
In a traditional architecture, the virtual machine (or server) is the center of the universe. In Kubernetes, it is the node; you can think of a node as a virtual machine.

Try it yourself !

The best way to learn any software is to try it yourself instead of just reading the documentation. As with any other software, download the required tools from the internet and install them on your computer.
  • Install Docker for Mac.
  • Install VirtualBox for Mac
  • Install kubectl for Mac.
  • Install Minikube
  • Install Helm
  • Go to the terminal and issue the command > sudo minikube start. A single-node Kubernetes cluster (the node acts as both master and worker) will be up and running.
  • You can create a sample Nginx deployment using the command > kubectl apply -f https://k8s.io/examples/application/deployment.yaml
  • You can expose an external IP address to access an application in the cluster by creating a service from a deployment, using the command > kubectl expose deployment hello-world --type=LoadBalancer --name=my-service
The command > kubectl cluster-info provides information about the services running in the Kubernetes cluster. Try the different Kubernetes abstractions such as Services, Nodes, Pods, ReplicaSets, Namespaces, Replication Controllers, ConfigMaps and labels. Other kubectl commands can be found in the official kubectl reference.
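With Minikube running, the abstractions above can be explored with a few read-only kubectl commands (the output depends on your cluster; `<pod-name>` is a placeholder):

```
kubectl get nodes                  # list cluster nodes
kubectl get pods --all-namespaces  # pods across all namespaces
kubectl get services               # services in the current namespace
kubectl get replicasets            # replica sets created by deployments
kubectl get configmaps             # configmaps in the current namespace
kubectl describe pod <pod-name>    # detailed state of one pod
```

`kubectl get` lists resources of a kind, while `kubectl describe` shows the full state and recent events of a single resource, which is the first place to look when a pod misbehaves.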

What is Kubernetes Architecture?

A Kubernetes cluster is a set of physical or virtual machines that run your application. Each machine is called a node, and there are master and worker nodes. The machine that manages the Kubernetes cluster is called the master node; the machines that run the Docker containers are called worker nodes. Admins use the command-line tool kubectl to manage the cluster.
The Kubernetes master node consists of the API server, scheduler, controller manager and etcd (which stores the state of the system). A Kubernetes worker node consists of the kubelet (the Kubernetes node agent), kube-proxy and pods. etcd is a distributed key-value store; Kubernetes uses it as its primary data store, storing and replicating all cluster state. We communicate with the Kubernetes cluster using the command-line interface kubectl.
There are plenty of documents explaining the purpose of each component of the Kubernetes ecosystem, such as nodes, pods, etcd, the API server and the kubelet, so I will not delve into the details here.

How exactly does Kubernetes work?

Kubernetes provides an automated way of setting up infrastructure along with application deployment. The entire process can be managed using declarative configuration files, which fits perfectly with DevOps.
A Kubernetes cluster consists of one or more master nodes and worker nodes. Once the cluster is up and running, deploy your containerized application by applying a deployment configuration file with the kubectl command-line tool.
In a traditional application, you start the application with startup scripts after deploying it. In the Kubernetes world, there are no separate startup scripts: once the cluster is installed with master and worker nodes and the deployment is created, the application will be up and running as pods.
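A minimal deployment configuration might look like the following sketch (the name `hello-app` is hypothetical, and `nginx:alpine` stands in for your own application image):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-app            # hypothetical deployment name
spec:
  replicas: 3                # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: hello-app
  template:
    metadata:
      labels:
        app: hello-app       # label applied to every pod the deployment creates
    spec:
      containers:
      - name: hello-app
        image: nginx:alpine  # any container image works here
        ports:
        - containerPort: 80
```

Applying this with `kubectl apply -f deployment.yaml` creates the pods; if one dies, the ReplicaSet behind the deployment replaces it automatically, which is the self-healing behavior mentioned earlier.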
The nodes within a Kubernetes cluster communicate via kube-proxy. Pods can be grouped together using a Kubernetes resource called a Service, which creates a logical abstraction over multiple pods. By default, services are reachable only inside the Kubernetes cluster; each service has a service type, which can be set to LoadBalancer to expose the application outside the cluster. Many companies also use an API gateway (such as Apigee) to enable communication across microservices running in different Kubernetes clusters.
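A Service that groups pods by label and exposes them externally might be sketched like this (the service name and the `app: hello-app` label are illustrative):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: hello-service        # hypothetical service name
spec:
  type: LoadBalancer         # request an external entry point into the cluster
  selector:
    app: hello-app           # route traffic to all pods carrying this label
  ports:
  - port: 80                 # port the service listens on
    targetPort: 80           # port the pods listen on
```

With `type: LoadBalancer`, a cloud provider provisions an external load balancer for the service; on Minikube, `minikube service hello-service` opens the equivalent local URL.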
One of the best features of Kubernetes is its built-in DNS service, which resolves names to IP addresses. In Kubernetes, nodes, pods and services all have their own IPs, and every service deployed in the cluster automatically gets a DNS name of the form `<service>.<namespace>.svc.cluster.local`. Kubernetes also provides namespaces (virtual clusters), which can be used to separate environments; as the pattern shows, the namespace name is part of the DNS name.
In a typical application, you create a load balancer for your web or app servers and associate a domain name with it. Kubernetes has a similar concept: to expose an application outside the cluster, you set the service type (LoadBalancer, NodePort or ClusterIP, the last being internal-only) in the service definition, pair it with an external load balancer such as AWS Elastic Load Balancing (ELB) or Nginx, or expose a deployment with the kubectl expose command. Kubernetes pods are immutable and ephemeral; the birth and death of pods are controlled by a ReplicaSet.
Kubernetes allows us to create multiple Kubernetes resources from deployment configuration files, so it is good to know the purpose and use of each resource type.

Quick comparison between traditional Web application and Kubernetes-based application

Conclusion

IT has evolved from virtualization to containerization, and from containerization to serverless computing. Microservices are replacing SOA (and other architectural models) as the primary model for application architecture. Highly available, reliable multi-master Kubernetes clusters will be the new way of managing IT infrastructure and application deployments. It is important to understand Kubernetes constructs such as Services, Nodes, Pods, ReplicaSets, Namespaces, Replication Controllers, ConfigMaps and labels.
The way AWS is proceeding, I wonder whether in the near future we may have a full abstraction layer over container orchestration. AWS already has ECS, EKS and Fargate; one day we may forget Kubernetes altogether! Anyone remember the Xen project?
Instead of traditional deployments, CI/CD on Kubernetes with Jenkins (together with Ansible, Helm or other tools) is already common. Many companies have adopted the cloud-native serverless model for some computing needs, and Container as a Service (CaaS) may soon overtake Platform as a Service (PaaS) in popularity. Most serverless computing runs in the cloud; however, with the introduction of AWS Outposts and comparable VMware offerings, on-premise computing may also shift to a serverless mode!
