
Kubernetes - Concepts, Components & Use-Cases



Kubernetes (K8s) is an open-source container-orchestration platform that automates application deployment, scaling, and management.

Concepts

1. Kubernetes uses the concept of a pod: an object consisting of one or more containers that share a network namespace

2. Kubernetes automates deploying, scaling, and managing containerized applications on a group (cluster) of (bare-metal or virtual) servers; for example, if a container within a pod crashes, Kubernetes restarts it.

Use case:

A developer needs 5 application containers on a host. With Docker alone, this means typing the command "docker run <application_name>" individually 5 times to create 5 containers on the host machine.

What if a production environment requires 200 containers?


Even if you have an automation script that runs the above command 200 times to create 200 containers, how do you monitor them? How do you ensure the underlying host resources aren't stretched or depleted?

K8s to the rescue! It can deploy, monitor, heal and redeploy containers automatically as needed.
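As a preview of how this looks in practice, here is a minimal sketch of a Kubernetes Deployment manifest (Deployments are covered in detail below) that asks the cluster to run and maintain 200 identical containers; the application name and image are hypothetical placeholders:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: my-app                 # hypothetical application name
    spec:
      replicas: 200                # K8s keeps 200 pods running, restarting any that fail
      selector:
        matchLabels:
          app: my-app
      template:
        metadata:
          labels:
            app: my-app
        spec:
          containers:
          - name: my-app
            image: my-app:1.0      # hypothetical image

Instead of running 200 individual commands, you declare the desired state once and Kubernetes continuously reconciles the actual state against it.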


Kubernetes Components

1. API Server - Acts as the front end of the cluster. Users, management tools, and cluster components all talk to the API server to interact with the K8s cluster

2. Scheduler - Identifies the right node on which to place a container (pod), based on multiple parameters such as resource requirements and constraints

3. Controller - The brain behind orchestration. It monitors nodes, handles situations where a node becomes unavailable, and also takes care of replication

4. Container Runtime - The software that actually runs the containers (for example, containerd or Docker); each worker node hosts a container runtime

5. kubelet - The agent that runs on each node of the cluster and is responsible for communicating with the master

6. etcd - A distributed key-value store (think of it as a database) that is fast, secure, and reliable. It stores all the data about the K8s cluster (configuration, state, and so on)


Master node components:

kube-apiserver, etcd, controller manager, scheduler

Worker node components:

kubelet, kube-proxy, container runtime

(kube-proxy ensures the necessary rules are in place on each worker node so that containers can reach each other across the cluster)


Node : A node is a machine (physical or virtual) on which Kubernetes is installed. This is where containers are deployed.

Cluster : A cluster is a group of nodes. Even if one node fails, your application remains accessible from the remaining nodes.

Master : A Master is responsible for managing all worker nodes in a cluster and performs the actual orchestration.

Pod : A Pod is the basic execution unit of a Kubernetes application. It is the smallest and simplest unit in the K8s object model that you create or deploy.

    A pod represents processes running on your cluster.

    A Pod encapsulates an application's container (in some cases, multiple containers), storage, resources, a unique network identity (IP address), as well as options that govern how the containers should run.

    You can deploy multiple containers in a Pod, but those kinds of deployments are relatively rare.
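A minimal Pod definition file, as a sketch (the nginx image and all names are purely illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: nginx-pod
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx            # the single container this pod encapsulates
        image: nginx:1.25      # illustrative image and tag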


ReplicaSet

A ReplicaSet's purpose is to maintain a stable set of replica pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical pods. Even if you run only a single pod, the ReplicaSet makes sure that one pod is always available: if something goes wrong with a pod, the ReplicaSet creates a new one, thus providing high availability (HA). A ReplicaSet can span multiple nodes.
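A sketch of a ReplicaSet definition that maintains three identical pods; the labels and image are illustrative. The selector tells the ReplicaSet which pods it owns, and the template describes the pods it creates:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: frontend-rs
    spec:
      replicas: 3                  # desired number of identical pods
      selector:
        matchLabels:
          app: frontend            # pods with this label are managed by the ReplicaSet
      template:
        metadata:
          labels:
            app: frontend
        spec:
          containers:
          - name: frontend
            image: nginx:1.25      # illustrative image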


DaemonSet

A DaemonSet runs one copy of your pod on each worker node in the cluster. When a new node is added to the cluster, a copy of the pod is automatically created on that node. When a node is removed, its pod is also removed. It ensures one copy of the pod is always present on all nodes. The definition file of a DaemonSet is similar to that of a ReplicaSet; the main difference is the "kind" field in the YAML file (a DaemonSet also needs no replica count, since the number of nodes determines the number of pods).
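A sketch of a DaemonSet for a hypothetical log-collection agent; names and the image are illustrative:

    apiVersion: apps/v1
    kind: DaemonSet                # the "kind" is what distinguishes this from a ReplicaSet
    metadata:
      name: log-agent
    spec:
      selector:
        matchLabels:
          app: log-agent
      template:
        metadata:
          labels:
            app: log-agent
        spec:
          containers:
          - name: log-agent
            image: fluentd:v1.16   # hypothetical log-collection image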


Service

A Service is a way to expose an application running on a set of pods as a network service. The purpose of a Service is to group a set of pods behind a single resource. You can create many Services within a single application. This ensures that you always have access to the group of pods even as individual pods are added or torn down.
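A sketch of a Service that groups all pods labelled app: frontend behind one stable endpoint; names and ports are illustrative:

    apiVersion: v1
    kind: Service
    metadata:
      name: frontend-svc
    spec:
      selector:
        app: frontend        # any pod with this label is part of the Service
      ports:
      - port: 80             # port the Service exposes inside the cluster
        targetPort: 8080     # port the application container listens on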


Deployments

Deployments provide the capability to upgrade the underlying instances seamlessly using rolling updates, and to undo, pause, and resume changes as required.

Inside a Deployment there is a ReplicaSet; inside the ReplicaSet there are pods, and inside the pods there are containers. Creating a Deployment automatically creates a ReplicaSet named after the Deployment, and that ReplicaSet in turn creates pods whose names include both the Deployment and ReplicaSet names.
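A sketch of a Deployment with an explicit rolling-update strategy; all names are illustrative:

    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web
    spec:
      replicas: 4
      strategy:
        type: RollingUpdate
        rollingUpdate:
          maxSurge: 1            # at most one extra pod during an update
          maxUnavailable: 1      # at most one pod down during an update
      selector:
        matchLabels:
          app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
          - name: web
            image: nginx:1.25    # updating this tag triggers a rolling update

This Deployment would create a ReplicaSet named web-<hash>, which in turn creates pods named web-<hash>-<suffix>, the naming cascade described above. A rollout can be reverted with kubectl rollout undo deployment/web.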


Horizontal Pod Autoscaler

The HPA automatically scales the number of pods in a replication controller, deployment, ReplicaSet or StatefulSet based on standard or custom metrics.
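A sketch of an HPA (autoscaling/v2 API) that scales the hypothetical web Deployment above based on CPU utilization; the thresholds are illustrative:

    apiVersion: autoscaling/v2
    kind: HorizontalPodAutoscaler
    metadata:
      name: web-hpa
    spec:
      scaleTargetRef:                # the workload whose replica count the HPA adjusts
        apiVersion: apps/v1
        kind: Deployment
        name: web
      minReplicas: 2
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70   # scale out when average CPU exceeds 70%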


StatefulSet

StatefulSet pods are created and deployed in sequence. After the first pod is deployed, it must be in a running state before the next pod is deployed. Each pod is assigned a unique ordinal index, starting from 0 and incremented by one, and each pod thus gets a unique, stable name combining the StatefulSet name with its index. There are no random names for the pods. On scale-down or deletion, pods are removed in reverse order: the last instance is deleted first, then the second-to-last, and so on.
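A sketch of a StatefulSet; with the illustrative name web-sts and replicas: 3, the pods are created in order as web-sts-0, web-sts-1, web-sts-2. The headless Service name is an assumption for the example:

    apiVersion: apps/v1
    kind: StatefulSet
    metadata:
      name: web-sts
    spec:
      serviceName: web-sts-headless  # headless Service giving each pod a stable network identity
      replicas: 3                    # creates web-sts-0, then web-sts-1, then web-sts-2
      selector:
        matchLabels:
          app: web-sts
      template:
        metadata:
          labels:
            app: web-sts
        spec:
          containers:
          - name: web
            image: nginx:1.25        # illustrative image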


Volumes

Docker containers are meant to be transient, meaning they last only for a short period of time: they are called upon to process data and are destroyed once they are finished. The same is true for the data within the container. To persist data, we attach a volume to the container when it is created; the data processed by the container then remains in this volume even if the container itself is destroyed.

Similar to container data, pods are transient in nature. Hence, a volume is attached to the pod so that the data generated or processed by the pod remains even if the pod is deleted.
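A sketch of a pod with a simple hostPath volume; the image, paths, and names are illustrative. Data written to /data survives container restarts because it actually lives on the node:

    apiVersion: v1
    kind: Pod
    metadata:
      name: data-pod
    spec:
      containers:
      - name: app
        image: busybox:1.36          # illustrative image
        command: ["sh", "-c", "echo hello > /data/out.txt && sleep 3600"]
        volumeMounts:
        - name: data-volume
          mountPath: /data           # where the volume appears inside the container
      volumes:
      - name: data-volume
        hostPath:
          path: /tmp/data            # directory on the node backing the volume

A hostPath volume ties the data to one particular node, which is why the external storage solutions listed below exist.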

Kubernetes supports different types of storage solutions, including NFS, Ceph, GlusterFS, Flocker, AWS EBS, Azure Disk, and others.


Persistent Volumes

Persistent Volumes allow central management of the volumes used by pods.

1. Persistent Volume Claims (PVCs) - A Persistent Volume is part of a cluster-wide pool of storage volumes configured by an administrator for use by users deploying applications on the cluster. Users select storage from this pool using Persistent Volume Claims (PVCs); a minimal example follows this list.

2. Storage Allocation : Persistent Volumes provide a large pool of storage and allow pods to carve out storage from that pool as and when required.

3. Volume Challenges : Without Persistent Volumes, in a large environment with many pods to deploy, you would have to configure the volume and its storage for each pod in its definition file, and whenever a change to volumes or storage is needed, the user would have to make that change on every pod. Persistent Volumes centralize this configuration.
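A minimal sketch of the pair: an administrator-defined PersistentVolume and a user's PersistentVolumeClaim that carves 2Gi out of the pool; capacities, paths, and names are illustrative:

    apiVersion: v1
    kind: PersistentVolume
    metadata:
      name: pv-1
    spec:
      capacity:
        storage: 5Gi               # size contributed to the cluster-wide pool
      accessModes:
      - ReadWriteOnce
      hostPath:
        path: /mnt/data            # illustrative backing storage
    ---
    apiVersion: v1
    kind: PersistentVolumeClaim
    metadata:
      name: app-pvc
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 2Gi             # the user asks for 2Gi; K8s binds the claim to a matching PV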


Container Network Interface

CNI is a set of standards that define how a plugin should be developed to solve networking challenges in container runtime environments.

Plugin-Based Networking Solution - The goal of CNI is to create a generic plugin-based networking solution for containers.




CNI Plugin - The CNI plugin is configured in the kubelet service on each node. The kubelet looks in the CNI configuration directory (typically /etc/cni/net.d) to find which plugin needs to be used.
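As an illustration, a minimal CNI configuration file, written in JSON rather than YAML because that is what the CNI specification requires, using the reference bridge and host-local IPAM plugins; the network name, bridge name, and subnet are illustrative:

    {
      "cniVersion": "1.0.0",
      "name": "mynet",
      "type": "bridge",
      "bridge": "cni0",
      "ipam": {
        "type": "host-local",
        "subnet": "10.22.0.0/16"
      }
    }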




