Kubernetes and its Use-Cases
Following are the topics covered in this blog:
- What is Kubernetes?
- Kubernetes Components & Architecture
- How Kubernetes works
- Why use Kubernetes?
- Features of Kubernetes
- Industry Use-Case Study of Kubernetes
What is Kubernetes?
Kubernetes is an open source system to deploy, scale, and manage containerized applications anywhere.
Kubernetes automates operational tasks of container management and includes built-in commands for deploying applications, rolling out changes to your applications, scaling your applications up and down to fit changing needs, monitoring your applications, and more — making it easier to manage applications.
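The workflow described above can be driven entirely from the command line; as an illustrative sketch (the deployment name `web` and the nginx image tags are placeholders, and these commands assume a running cluster):

```shell
# Deploy an application (name and image are illustrative)
kubectl create deployment web --image=nginx:1.25

# Scale it up to meet demand
kubectl scale deployment web --replicas=5

# Roll out a new image version without downtime, and watch the rollout
kubectl set image deployment/web nginx=nginx:1.26
kubectl rollout status deployment/web

# Roll back if the new release misbehaves
kubectl rollout undo deployment/web
```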
- Containers offer a way to package code, runtime, system tools, system libraries, and configuration together into a lightweight, standalone executable unit. This way, your application behaves the same every time, no matter where it runs (e.g., Ubuntu, Windows). Containerization is not a new concept, but it has gained immense popularity with the rise of microservices and Docker.
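As a hedged sketch of that packaging, a minimal Dockerfile for a small Python application (the file names `requirements.txt` and `app.py` are assumed) might look like:

```dockerfile
# Base image pins the runtime and system libraries
FROM python:3.12-slim
WORKDIR /app
# Dependencies and application code travel inside the image
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY app.py .
# The same command then runs identically on any host
CMD ["python", "app.py"]
```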
Kubernetes Components & Architecture
Below are the main components found on the master node:
- etcd cluster — a simple, distributed key-value store used to hold the Kubernetes cluster data (such as the number of pods, their state, namespaces, etc.), API objects, and service discovery details. For security reasons, it is only accessible through the API server. etcd notifies the cluster about configuration changes with the help of watchers: notifications are API requests on each etcd cluster node that trigger the update of information in the node's storage.
- kube-apiserver — Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replication sets/controllers and others), serving as frontend to the cluster. Also, this is the only component that communicates with the etcd cluster, making sure data is stored in etcd and is in agreement with the service details of the deployed pods.
- kube-controller-manager — runs a number of distinct controller processes in the background (for example, replication controller controls number of replicas in a pod, endpoints controller populates endpoint objects like services and pods, and others) to regulate the shared state of the cluster and perform routine tasks. When a change in a service configuration occurs (for example, replacing the image from which the pods are running, or changing parameters in the configuration yaml file), the controller spots the change and starts working towards the new desired state.
- cloud-controller-manager — responsible for managing controller processes that depend on the underlying cloud provider (if applicable). For example, when a controller needs to check whether a node was terminated, or to set up routes, load balancers, or volumes in the cloud infrastructure, all of that is handled by the cloud-controller-manager.
- kube-scheduler — schedules pods (a co-located group of containers inside which your application processes run) onto nodes based on resource availability. It reads a workload's operational requirements and places it on the best-fit node. For example, if the application needs 1 GB of memory and 2 CPU cores, its pods will be scheduled on a node with at least those resources free. The scheduler runs each time pods need to be scheduled, and it must know the total resources available as well as the resources already allocated to existing workloads on each node.
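The scheduler reads those requirements from the resource requests in the pod spec. The 1 GB / 2-core example would be expressed like this (the pod name and image are illustrative):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-app            # illustrative name
spec:
  containers:
  - name: app
    image: example/app:1.0  # placeholder image
    resources:
      requests:             # what the scheduler uses to pick a node
        memory: "1Gi"
        cpu: "2"
      limits:               # hard caps enforced at runtime
        memory: "1Gi"
        cpu: "2"
```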
Below are the main components found on a (worker) node:
- kubelet — the main service on a node, regularly taking in new or modified pod specifications (primarily through the kube-apiserver) and ensuring that pods and their containers are healthy and running in the desired state. This component also reports to the master on the health of the host where it is running.
- kube-proxy — a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.
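kube-proxy programs the routing for Service objects. As a sketch, a Service that forwards traffic to pods labeled `app: web` and exposes them on a node port might look like this (all names and port numbers are assumptions):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web               # illustrative name
spec:
  type: NodePort          # kube-proxy exposes this port on every node
  selector:
    app: web              # requests are forwarded to pods with this label
  ports:
  - port: 80              # cluster-internal port of the Service
    targetPort: 8080      # container port inside the pods
    nodePort: 30080       # externally reachable port on each worker node
```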
kubectl is a command-line tool that interacts with the kube-apiserver and sends commands to the master node. Each command is converted into an API call.
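You can watch that conversion happen by raising kubectl's verbosity; at `-v=8` it prints the underlying REST requests it sends to the kube-apiserver (assuming a configured cluster):

```shell
# Every kubectl command becomes HTTP requests to the API server;
# -v=8 prints the request URLs and response bodies, e.g. a GET against
# /api/v1/namespaces/default/pods for the command below
kubectl get pods -v=8
```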
How Kubernetes Works
Kubernetes Works Like an Operating System
Kubernetes is an example of a well-architected distributed system. It treats all the machines in a cluster as a single pool of resources. It takes up the role of a distributed operating system by effectively managing the scheduling, allocating the resources, monitoring the health of the infrastructure, and even maintaining the desired state of infrastructure and workloads. Kubernetes is an operating system capable of running modern applications across multiple clusters and infrastructures on cloud services and private data center environments.
Like any other mature distributed system, Kubernetes has two layers consisting of the head nodes and worker nodes. The head nodes typically run the control plane responsible for scheduling and managing the life cycle of workloads. The worker nodes act as the workhorses that run applications. The collection of head nodes and worker nodes becomes a cluster.
Why (and When) Use Kubernetes?
When you should use it
If your application uses a microservice architecture
If you have transitioned, or are looking to transition, to a microservice architecture, then Kubernetes will suit you well, because it's likely you're already using software like Docker to containerize your application.
If you’re suffering from slow development and deployment
If you’re unable to meet customer demands due to slow development time, then Kubernetes might help. Rather than a team of developers spending their time wrapping their heads around the development and deployment lifecycle, Kubernetes (along with Docker) can effectively manage it for you so the team can spend their time on more meaningful work that gets products out the door.
“Our internal teams have less of a need to focus on manual capacity provisioning and more time to focus on delivering features for Spotify.” — Spotify
Lower infrastructure costs
Kubernetes uses an efficient resource management model at the container, pod, and cluster level, helping you lower cloud infrastructure costs by ensuring your clusters always have available resources for running applications.
Features of Kubernetes
- You can use it to deploy your services, to roll out new releases without downtime, and to scale (or de-scale) those services.
- It is portable.
- It can run on a public or private cloud.
- It can run on-premise or in a hybrid environment.
- You can move a Kubernetes cluster from one hosting vendor to another without changing (almost) any of the deployment and management processes.
- Kubernetes can be easily extended to serve nearly any needs. You can choose which modules you’ll use, and you can develop additional features yourself and plug them in.
- Kubernetes will decide where to run something and how to maintain the state you specify.
- Kubernetes can place replicas of service on the most appropriate server, restart them when needed, replicate them, and scale them.
- Self-healing has been part of its design from the start: Kubernetes restarts failed containers and reschedules pods when nodes become unhealthy.
- Zero-downtime deployments, fault tolerance, high availability, scaling, scheduling, and self-healing are where Kubernetes adds the most value.
- You can use it to mount volumes for stateful applications.
- It allows you to store confidential information as secrets.
- You can use it to validate the health of your services.
- It can load balance requests and monitor resources.
- It provides service discovery and easy access to logs.
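Several of the features above are plain fields in a pod spec. As a hedged sketch, health checking (liveness and readiness probes) and injecting confidential information from a Secret look like this (the names, paths, and ports are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: web                     # illustrative name
spec:
  containers:
  - name: app
    image: example/app:1.0      # placeholder image
    livenessProbe:              # restart the container if this check fails
      httpGet:
        path: /healthz
        port: 8080
      periodSeconds: 10
    readinessProbe:             # only route traffic once this check passes
      httpGet:
        path: /ready
        port: 8080
    env:
    - name: DB_PASSWORD         # confidential value injected from a Secret
      valueFrom:
        secretKeyRef:
          name: db-credentials  # assumed Secret name
          key: password
```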
Industry Use-Case Study of Kubernetes
Companies can use Kubernetes to support a wide variety of applications, cutting down on hardware costs and leading to more efficient architectures. It is one of several choices among newer container platforms for bringing a higher level of innovation to the design of hardware and software environments.
Some 25,341 companies reportedly use Kubernetes. They are most often found in the United States and in the computer software industry, and Kubernetes is most often used by companies with 10–50 employees and 1M–10M dollars in revenue.
Vodafone Group is one of the world's leading global telecommunications companies, providing technology services. It has expertise in a variety of Internet of Things (IoT) and connectivity products for both consumers and businesses, as well as mobile financial services and digital transformation in emerging markets.
Vodafone struggled with an old, monolithic architecture that had incurred high levels of complexity, interdependency, and a substantial upgrade deficit. As a result, they could no longer maintain it and needed to start fresh with new technology.
In 2016, Vodafone launched a new digital strategy to deliver the best customer service. The strategy included implementing cloud-native software and the container orchestration platform Kubernetes, to enable local markets to share assets across different parts of the world.
Tinder's move to Kubernetes was meant to drive Tinder Engineering toward container architecture and immutable deployment, so engineers could focus more on their code than on operational tasks such as infrastructure, application builds, and deployment.
Launched in 2008, Spotify is one of the largest music streaming subscription services, having grown to over 200 million monthly active users across the world. Spotify aims to empower creators and enable an immersive listening experience for its customers. It containerized its microservices and, starting in 2014, managed them through an in-house container orchestration service called Helios.
Thanks for your time!