WHAT IS KUBERNETES?

Let’s start with a brief history of the evolution of application deployment.
Traditional Application Deployment Era: In the traditional environment, organizations used to run applications on physical servers.
There was no way to define resource boundaries in a physical server and this caused resource allocation problems.
For example, if multiple applications ran on a physical server, there were instances where one application would take up most of the resources, and as a result, the other applications would underperform. A solution was to run each application on its own physical server. But this approach did not scale: it required many physical servers, resources were underutilized, and it was expensive for organizations to maintain them.
This problem gave rise to a solution: Traditional Virtual Machines.
Figure 1: Traditional Architecture vs Virtual Architecture
Virtualization Era: Virtualization introduced a hypervisor that allows multiple virtual machines (VMs) to share the hardware resources of a single physical server. Because the VMs share the underlying hardware, resources are utilized far better, and you can run many VMs on a single physical server's CPU. Virtualization also improved scalability: applications could be added or updated easily, and hardware costs fell. But there was still a problem.
Container Deployment Era: As discussed in my previous article, traditional virtual machines had a problem of their own: each VM ran its own full operating system on top of virtualized hardware, in addition to the application. Containers overcame this. They are similar to VMs, but they have relaxed isolation properties, which means the operating system (OS) can be shared among the applications. Therefore, containers are considered lightweight. Like a VM, a container has its own filesystem, share of CPU and memory, process space, and more. But because containers are decoupled from the underlying infrastructure, they are portable across clouds and OS distributions.
Figure 2: Virtualization vs Container Technology
Containers have become popular because they have many benefits. Some of the key benefits are:
1. Very Fast Startup Time
Containers start up in a few seconds or less, compared to traditional VMs, which typically need a few minutes. This is because a container does not have to boot an entire OS before it starts functioning.
2. Highly portable
Because of their small size, containers are lightweight and easy to move across platforms.
3. Benefits of Open Source
Building on Linux containers lets you leverage a wide community of contributors. That community fosters the rapid development of a broad ecosystem of related projects that fit the needs of all sorts of organizations, big and small.
4. Fewer Resources
Containers do not consume as many resources as traditional virtual machines, and that translates directly into significant cost savings.
Containers are therefore highly relevant and very well suited to today's agile environment.
Then, what is Kubernetes and why do I need it?
Containers are a good way to bundle and run your applications.
However, there was a major problem with containers: container management.
In a production environment, you need to manage the containers that run the applications and ensure that there is continuous communication between them.
A major drawback of containers on their own is that they have to be wired together manually. For example, if you deploy, say, 5 containers, each container's ports must be configured and mapped so that the applications can talk to one another at runtime. If a container is mapped incorrectly, the containers will not be able to communicate with each other, and the applications will not perform their functions correctly.
Effective mapping of containers is not an issue when you only have to manage a few of them, say 5 to 10. But imagine running a big application that requires over 100 containers.
Could you be sure, in that case, that your containers are effectively managed?
What about when the containers need to be scaled up? Suppose you run an e-commerce platform that gets a huge surge in traffic on weekends and only moderate traffic on weekdays. Imagine scaling up and down manually every weekend! It would make container management a nightmare for your IT team!
Wouldn’t container management be easier if this task was handled by a system?
That’s how Kubernetes comes to the rescue! Kubernetes provides you with a framework to run distributed container systems resiliently. It takes care of your scaling requirements, failover, deployment patterns, and more.
So, what is Kubernetes?
Kubernetes is a container management tool that automates container deployment, container scaling (up and down), and container load balancing.
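
To give you a feel for what that automation looks like in practice, here is a minimal sketch of a Kubernetes Deployment. The name web and the nginx image are illustrative placeholders, not anything specific to this article; you would save the file and run kubectl apply -f deployment.yaml:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                # illustrative name
spec:
  replicas: 3              # Kubernetes keeps three copies of this container running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25  # the container image to run (any image works here)
        ports:
        - containerPort: 80
```

Changing replicas: 3 to, say, replicas: 10 and re-applying the file is all the manual "scaling" you ever do; Kubernetes takes care of starting, stopping, and replacing the containers.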
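And for the weekend-traffic scenario above, even that step can be automated with a HorizontalPodAutoscaler. This is a sketch under the assumption that the web Deployment above is your e-commerce front end and that CPU usage is a reasonable proxy for traffic:

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: web              # the Deployment sketched above
  minReplicas: 2           # quiet weekdays
  maxReplicas: 10          # weekend surge
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70   # add Pods when average CPU rises above 70%
```
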
Features of Kubernetes (brief configuration sketches for each of these features follow the list):
  • Service discovery and load balancing
Kubernetes can expose a container using the DNS name or using their own IP address. If traffic to a container is high, Kubernetes is able to load balance and distribute the network traffic so that the deployment is stable.
  • Storage orchestration
Kubernetes allows you to automatically mount a storage system of your choice, such as local storage, a public cloud provider's volumes, and more.
  • Automated rollouts and rollbacks
Managing updates is not something a plain container runtime does for you. With Kubernetes, you can describe the desired state for your deployed containers, and it changes the actual state to the desired state at a controlled rate. For example, you can automate Kubernetes to create new containers for your deployment, remove existing containers, and adopt all of their resources into the new containers.
  • Automatic bin packing
Kubernetes allows you to specify how much CPU and memory (RAM) each container needs. When containers have resource requests specified, Kubernetes can make better decisions to manage the resources for containers.
  • Self-healing
Kubernetes restarts containers that fail, replaces containers, kills containers that don’t respond to your user-defined health check, and doesn’t advertise them to clients until they are ready to serve.
  • Secret and configuration management
Kubernetes lets you store and manage sensitive information, such as passwords. You can deploy and update secrets and application configuration without rebuilding your container images, and without exposing secrets in your stack configuration.
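
Service discovery and load balancing: here is a minimal Service sketch for the hypothetical web Deployment from earlier. Pods labelled app: web become reachable inside the cluster under the DNS name web, and traffic is spread across all of them:

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web                # other Pods can reach this as http://web
spec:
  selector:
    app: web               # traffic is load-balanced across Pods with this label
  ports:
  - port: 80               # port the Service listens on
    targetPort: 80         # port the container listens on
```
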
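Storage orchestration: a sketch of a PersistentVolumeClaim and a Pod that mounts it. The names, the 1Gi size, and the mount path are illustrative; which storage backend actually satisfies the claim (local disk, cloud volume, and so on) depends on your cluster:

```yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: web-data
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi         # Kubernetes finds and binds a matching volume
---
apiVersion: v1
kind: Pod
metadata:
  name: web-storage-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    volumeMounts:
    - name: data
      mountPath: /usr/share/nginx/html   # where the volume appears in the container
  volumes:
  - name: data
    persistentVolumeClaim:
      claimName: web-data
```
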
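Automated rollouts and rollbacks: a sketch of the earlier web Deployment with an explicit update strategy (the strategy values are illustrative). Bumping the image version and re-applying the file triggers a gradual rollout, and kubectl rollout undo deployment/web reverts to the previous revision:

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3
  selector:
    matchLabels:
      app: web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxSurge: 1          # at most one extra Pod during the update
      maxUnavailable: 0    # never drop below the desired number of Pods
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.26  # changing this image version starts a rolling update
        ports:
        - containerPort: 80
```
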
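Automatic bin packing: resource requests and limits are declared per container. The numbers below are only examples; the requests are what the Kubernetes scheduler uses to place the Pod on a node with enough spare capacity:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-resources-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    resources:
      requests:            # used by the scheduler to choose a node
        cpu: 250m          # a quarter of a CPU core
        memory: 128Mi
      limits:              # hard caps enforced at runtime
        cpu: 500m
        memory: 256Mi
```
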
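Self-healing: a sketch of liveness and readiness probes (the paths and timings are illustrative). Kubernetes restarts the container when the liveness check keeps failing, and it withholds client traffic until the readiness check passes:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web-probe-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    ports:
    - containerPort: 80
    livenessProbe:         # restart the container when this check fails repeatedly
      httpGet:
        path: /
        port: 80
      initialDelaySeconds: 5
      periodSeconds: 10
    readinessProbe:        # do not send client traffic until this check passes
      httpGet:
        path: /
        port: 80
      periodSeconds: 5
```
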
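Secret and configuration management: a sketch of a Secret and a Pod that reads it as an environment variable. The names and the placeholder password are made up; the point is that nothing sensitive is baked into the container image:

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:                # written in plain text here, stored encoded by Kubernetes
  DB_PASSWORD: change-me
---
apiVersion: v1
kind: Pod
metadata:
  name: web-secret-demo
spec:
  containers:
  - name: web
    image: nginx:1.25
    env:
    - name: DB_PASSWORD    # injected at runtime, not built into the image
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: DB_PASSWORD
```
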
A managed service provider can help you choose and run the right Kubernetes deployment. Consider consulting General Technologies here.
