Rex Andrew

Kubernetes Overview


Kubernetes Infrastructure


Kubernetes is an open-source container deployment and management platform. It offers container orchestration, integration with container runtimes, container-centric infrastructure orchestration, load balancing, self-healing mechanisms, and service discovery. Kubernetes architecture, sometimes also called Kubernetes application deployment architecture or Kubernetes client-server architecture, is used to compose, scale, deploy, and manage application containers across clusters of hosts.



Kubernetes Architecture / Components


A Kubernetes cluster is the basic unit of Kubernetes deployment architecture. The architecture consists of two parts: the control plane and the nodes (compute machines). Each node can be either a physical or a virtual machine and runs its own Linux environment. Every node also runs pods, which are composed of one or more containers.


Kubernetes architecture components (K8s components) include the Kubernetes control plane and the nodes in the cluster. The control plane components include the Kubernetes API server, the Kubernetes scheduler, the Kubernetes controller manager, and etcd. Kubernetes node components include a container runtime engine (such as Docker or containerd), the kubelet service, and the Kubernetes proxy service (kube-proxy).


Master Components


kube-apiserver – the Kubernetes API server is the central management entity that receives all REST requests for modifications (to pods, services, replica sets/controllers, and others), serving as the frontend to the cluster. It is also the only component that communicates with the etcd cluster, making sure data is stored in etcd and kept in agreement with the service details of the deployed pods.


kube-controller-manager – runs a number of distinct controller processes in the background (for example, the replication controller maintains the desired number of pod replicas, the endpoints controller populates Endpoints objects that join services and pods, and so on) to regulate the shared state of the cluster and perform routine tasks. When a change in a service configuration occurs (for example, replacing the image from which the pods are running, or changing parameters in the configuration YAML file), the controller spots the change and starts working towards the new desired state.
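
As a rough illustration of desired state, a minimal Deployment manifest might look like the following; the name, labels, and image are assumptions for the example:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web                  # illustrative name
spec:
  replicas: 3                # desired number of pod replicas the controllers keep running
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.25    # changing this image is the kind of change the controllers reconcile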


cloud-controller-manager – is responsible for managing controller processes with dependencies on the underlying cloud provider (if applicable). For example, when a controller needs to check whether a node was terminated, or to set up routes, load balancers, and volumes in the cloud infrastructure, all of that is handled by the cloud-controller-manager.


etcd – a simple, distributed key-value store used to store the Kubernetes cluster data (such as the number of pods, their state, namespaces, etc.), API objects, and service discovery details. It is only accessible from the API server for security reasons. etcd enables notifications to the cluster about configuration changes with the help of watchers. Notifications are API requests on each etcd cluster node that trigger the update of information in the node’s storage.


kube-scheduler – helps schedule the pods (a co-located group of containers inside which our application processes are running) on the various nodes based on resource utilization. It reads the service’s operational requirements and schedules it on the best fit node. For example, if the application needs 1GB of memory and 2 CPU cores, then the pods for that application will be scheduled on a node with at least those resources. The scheduler runs each time there is a need to schedule pods. The scheduler must know the total resources available as well as resources allocated to existing workloads on each node.
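
As a hedged sketch of how such requirements are expressed, a pod can declare resource requests that the scheduler matches against available node capacity; the pod name and image below are placeholders:

apiVersion: v1
kind: Pod
metadata:
  name: demo-app             # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.25        # placeholder image
    resources:
      requests:
        memory: "1Gi"        # the scheduler only considers nodes with at least 1Gi of unallocated memory
        cpu: "2"             # and at least 2 CPU cores of unallocated capacity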


Node (worker) components


kubelet – the main service on a node, regularly taking in new or modified pod specifications (primarily through the kube-apiserver) and ensuring that pods and their containers are healthy and running in the desired state. This component also reports to the master on the health of the host where it is running.


kube-proxy – a proxy service that runs on each worker node to deal with individual host subnetting and expose services to the external world. It performs request forwarding to the correct pods/containers across the various isolated networks in a cluster.



Namespaces


In Kubernetes, namespaces provide a mechanism for isolating groups of resources within a single cluster. Names of resources need to be unique within a namespace, but not across namespaces. Namespace-based scoping is applicable only for namespaced objects (e.g. Deployments, Services, etc.) and not for cluster-wide objects (e.g. StorageClass, Nodes, PersistentVolumes, etc.).


Functionalities of a Namespace in Kubernetes

  • Namespaces help with pod-to-pod communication within the same namespace.

  • Namespaces are virtual clusters that can sit on top of the same physical cluster.

  • They provide logical separation between the teams and their environments.

Kubernetes starts with four initial namespaces:


default

Kubernetes includes this namespace so that you can start using your new cluster without first creating a namespace.

kube-node-lease

This namespace holds Lease objects associated with each node. Node leases allow the kubelet to send heartbeats so that the control plane can detect node failure.

kube-public

This namespace is readable by all clients (including those not authenticated). This namespace is mostly reserved for cluster usage, in case that some resources should be visible and readable publicly throughout the whole cluster. The public aspect of this namespace is only a convention, not a requirement.

kube-system

The namespace for objects created by the Kubernetes system.


The following commands are used to manage namespaces.

$ kubectl create -f namespace.yml (create a namespace from a manifest file)
$ kubectl get namespace (list all available namespaces)
$ kubectl get namespace <namespace-name> (get a particular namespace whose name is specified in the command)
$ kubectl describe namespace <namespace-name> (describe the complete details of that namespace)
$ kubectl delete namespace <namespace-name> (delete a particular namespace from the cluster)
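
For reference, a minimal namespace.yml for the create command above could look like this; the namespace name is only an example:

apiVersion: v1
kind: Namespace
metadata:
  name: dev-team             # example namespace name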


Kubernetes Secrets


In Kubernetes, a Secret is an object that contains a small amount of sensitive data such as login usernames and passwords, tokens, keys, etc. The primary purpose of Secrets is to reduce the risk of exposing sensitive data while deploying applications on Kubernetes.


Key points on Kubernetes secrets:

  • You create Secrets outside of Pods — you create a Secret before any Pod can use it.

  • When you create a Secret, it is stored inside the Kubernetes data store (i.e., an etcd database) on the Kubernetes Control Plane.

  • When creating a Secret, you specify the data and/or stringData fields. The values for all the data field keys must be base64-encoded strings. If you don’t want to convert to base64, you can specify the stringData field instead, which accepts arbitrary strings as values.

  • When creating Secrets, you are limited to a size of 1MB per Secret. This is to discourage the creation of very large secrets that could exhaust the kube-apiserver and kubelet memory.

  • Also, when creating Secrets, you can mark them as immutable with immutable: true, preventing changes to the Secret data after creation. Marking a Secret as immutable protects against accidental or unwanted updates that could cause application outages.

  • After creating a Secret, you inject it into a Pod either by mounting it as a data volume, exposing it as environment variables, or referencing it as imagePullSecrets. A sketch of the first two approaches follows this list.
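
A minimal sketch, assuming illustrative names (db-credentials, db-client) and an arbitrary container image, showing a Secret consumed both as an environment variable and as a mounted volume:

apiVersion: v1
kind: Secret
metadata:
  name: db-credentials       # illustrative name
type: Opaque
data:
  username: YWRtaW4=         # base64 for "admin"
stringData:
  password: s3cr3t-pass      # plain string; Kubernetes base64-encodes it on creation
---
apiVersion: v1
kind: Pod
metadata:
  name: db-client            # illustrative name
spec:
  containers:
  - name: app
    image: nginx:1.25        # illustrative image
    env:
    - name: DB_USERNAME      # injected as an environment variable
      valueFrom:
        secretKeyRef:
          name: db-credentials
          key: username
    volumeMounts:
    - name: creds
      mountPath: /etc/creds  # injected as files under this path
      readOnly: true
  volumes:
  - name: creds
    secret:
      secretName: db-credentials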


Types of Secret

When creating a Secret, you can specify its type using the type field of the Secret resource, or certain equivalent kubectl command line flags (if available). The Secret type is used to facilitate programmatic handling of the Secret data.


Kubernetes provides several built-in types for some common usage scenarios. These types vary in terms of the validations performed and the constraints Kubernetes imposes on them.


Built-in Type – Usage

Opaque – arbitrary user-defined data
kubernetes.io/service-account-token – ServiceAccount token
kubernetes.io/dockercfg – serialized ~/.dockercfg file
kubernetes.io/dockerconfigjson – serialized ~/.docker/config.json file
kubernetes.io/basic-auth – credentials for basic authentication
kubernetes.io/ssh-auth – credentials for SSH authentication
kubernetes.io/tls – data for a TLS client or server
bootstrap.kubernetes.io/token – bootstrap token data
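
As a small illustration, kubectl provides shortcuts for creating some of these typed Secrets; the resource names, literals, and file paths below are placeholders:

$ kubectl create secret generic app-config --from-literal=api-key=abc123 (creates an Opaque Secret)
$ kubectl create secret tls web-tls --cert=tls.crt --key=tls.key (creates a kubernetes.io/tls Secret)
$ kubectl create secret docker-registry regcred --docker-server=registry.example.com --docker-username=user --docker-password=pass (creates a kubernetes.io/dockerconfigjson Secret)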


Kubernetes Cluster Networking


Container-to-container


Containers within a pod share the same network namespace. Each pod gets its own IP address, and all containers in that pod share that IP address but listen on different ports. Communication between containers happens within the pod itself, over localhost on different ports, so all containers in a pod can communicate with each other by default.
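
As an illustrative sketch (pod name, images, and the command loop are assumptions), two containers in one pod can reach each other over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: two-containers       # illustrative name
spec:
  containers:
  - name: web
    image: nginx:1.25        # serves on port 80 inside the shared pod network namespace
  - name: sidecar
    image: busybox:1.36
    command: ["sh", "-c", "while true; do wget -qO- http://localhost:80 > /dev/null; sleep 10; done"]   # reaches the web container via localhost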


Pod-to-Pod Networking


Each pod gets its own IP address. There are two subtypes of pod-to-pod communication:


Intra-node Pod Network – Communication of pods running on a single node.

Pods running on the same worker node can communicate by default, because each pod’s IP address is distinct and assigned from the node’s local pod network, and the pods share the same host.


Inter-node Pod Network – Communication of pods running in different nodes.

When pods run on multiple worker nodes, communication between them happens through a network plugin, which creates the necessary route tables and forwards traffic from any pod to any destination pod.


Pod-to-Service Networking


Kubernetes is designed to allow pods to be replaced dynamically, as needed. This means that pod IP addresses are not durable unless special precautions are taken, such as for stateful applications. To address this issue and ensure that communication with and between pods is maintained, Kubernetes provides services.


Kubernetes services manage pod states and enable you to track pod IP addresses over time. These services abstract pod addresses by assigning a single virtual IP (a cluster IP) to a group of pod IPs. Then, any traffic sent to the virtual IP is distributed to the associated pods.


This service IP enables pods to be created and destroyed as needed without affecting overall communications. It also enables Kubernetes services to act as in-cluster load balancers, distributing traffic as needed among associated pods.


A Service is a Kubernetes resource type that exposes an application running in a set of pods behind a stable address and, depending on its type, can also expose it outside the cluster. Pods send traffic to other pods through these services.


Internet-to-Service Networking

The final networking situation that is needed for most deployments is between the Internet and services. Whether you are using Kubernetes for internal or external applications, you generally need Internet connectivity. This connectivity enables users to access your services and distributed teams to collaborate.


When setting up external access, there are two directions of traffic to consider: egress and ingress. These can be controlled with policies that allow or deny traffic into and out of your network.


In order to access an application from outside the cluster, external traffic must be allowed to reach the services within the cluster.
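
One way to express such ingress and egress rules is a NetworkPolicy; the following is only a sketch, and the policy name, pod labels, and CIDR are assumptions:

apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: web-allow-external   # illustrative name
spec:
  podSelector:
    matchLabels:
      app: web               # applies to pods with this label
  policyTypes:
  - Ingress
  - Egress
  ingress:
  - from:
    - ipBlock:
        cidr: 0.0.0.0/0      # allow inbound traffic from any source (illustrative allowlist)
    ports:
    - protocol: TCP
      port: 80
  egress:
  - {}                       # allow all outbound traffic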



Kubernetes services


ClusterIP

  • ClusterIP is the default and most common service type.

  • Kubernetes will assign a cluster-internal IP address to ClusterIP service. This makes the service only reachable within the cluster.

  • You cannot make requests to service (pods) from outside the cluster.

  • You can optionally set the cluster IP in the service definition file (a sketch follows this list).
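
A minimal ClusterIP Service sketch, assuming pods labelled app: web listening on port 8080:

apiVersion: v1
kind: Service
metadata:
  name: web-clusterip        # illustrative name
spec:
  type: ClusterIP            # default type; may be omitted
  selector:
    app: web                 # routes to pods carrying this label
  ports:
  - port: 80                 # port exposed on the cluster-internal IP
    targetPort: 8080         # port the pod containers listen on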


NodePort

  • NodePort service is an extension of ClusterIP service. A ClusterIP Service, to which the NodePort Service routes, is automatically created.

  • It exposes the service outside of the cluster by adding a cluster-wide port on top of ClusterIP.

  • NodePort exposes the service on each Node’s IP at a static port (the NodePort). Each node proxies that port into your Service. So, external traffic has access to fixed port on each Node. It means any request to your cluster on that port gets forwarded to the service.

  • You can contact the NodePort Service, from outside the cluster, by requesting <NodeIP>:<NodePort>.

  • Node port must be in the range of 30000–32767. Manually allocating a port to the service is optional. If it is undefined, Kubernetes will automatically assign one.

  • If you are going to choose the node port explicitly, ensure that the port is not already used by another service (a sketch follows this list).
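
A minimal NodePort Service sketch under the same assumptions (pods labelled app: web on port 8080); the nodePort value is only an example:

apiVersion: v1
kind: Service
metadata:
  name: web-nodeport         # illustrative name
spec:
  type: NodePort
  selector:
    app: web
  ports:
  - port: 80                 # cluster-internal port (the ClusterIP part)
    targetPort: 8080         # container port
    nodePort: 30080          # optional; must fall in 30000-32767, auto-assigned if omitted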


LoadBalancer

  • LoadBalancer service is an extension of NodePort service. NodePort and ClusterIP Services, to which the external load balancer routes, are automatically created.

  • It integrates NodePort with cloud-based load balancers.

  • It exposes the Service externally using a cloud provider’s load balancer.

  • Each cloud provider (AWS, Azure, GCP, etc) has its own native load balancer implementation. The cloud provider will create a load balancer, which then automatically routes requests to your Kubernetes Service.

  • Traffic from the external load balancer is directed at the backend Pods. The cloud provider decides how it is load balanced.

  • The actual creation of the load balancer happens asynchronously.

  • Every time you want to expose a service to the outside world, you have to create a new LoadBalancer and get an IP address (a sketch follows this list).
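
A minimal LoadBalancer Service sketch under the same assumptions; the cloud provider provisions the external load balancer asynchronously:

apiVersion: v1
kind: Service
metadata:
  name: web-lb               # illustrative name
spec:
  type: LoadBalancer         # asks the cloud provider for an external load balancer
  selector:
    app: web
  ports:
  - port: 80
    targetPort: 8080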


ExternalName

  • Services of type ExternalName map a Service to a DNS name, not to a typical selector such as my-service.

  • You specify these Services with the `spec.externalName` parameter.

  • It maps the Service to the contents of the externalName field (e.g. foo.bar.example.com), by returning a CNAME record with its value.

  • No proxying of any kind is established (a sketch follows this list).
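
A minimal ExternalName Service sketch; the service and DNS names are placeholders:

apiVersion: v1
kind: Service
metadata:
  name: external-db              # illustrative name
spec:
  type: ExternalName
  externalName: db.example.com   # DNS lookups for this Service return a CNAME to this name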


Ingress in Kubernetes

Ingress is an object that allows access to Kubernetes services from outside the Kubernetes cluster. You can configure access by creating a collection of rules that define which inbound connections reach which services.

An Ingress does not expose arbitrary ports or protocols. Exposing services other than HTTP and HTTPS to the internet typically uses a service of type NodePort or LoadBalancer.
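
As an illustrative sketch (host, path, and backend Service name are assumptions), an Ingress rule routing HTTP traffic to a Service might look like this:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress              # illustrative name
spec:
  rules:
  - host: app.example.com        # requests for this host...
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: web-clusterip  # ...are routed to this backend Service
            port:
              number: 80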

Ingress controllers

  • Ingress controller is an application that runs in a cluster and configures an HTTP load balancer according to Ingress resources. The load balancer can be a software load balancer running in the cluster or a hardware or cloud load balancer running externally. Different load balancers require different Ingress controller implementations.

  • In order for Ingress resources to work, the cluster must have an ingress controller running.

  • You can deploy any number of ingress controllers within a cluster.

  • There are many different Ingress controllers, and there’s support for cloud-native load balancers (from GCP, AWS, and Azure).

  • e.g. Nginx, Ambassador, EnRoute, HAProxy, AWS ALB, AKS Application Gateway



Replica Set Vs Replication Controller


Replica Set and Replication Controller do almost the same thing. Both ensure that a specified number of pod replicas are running at any given time. The difference comes with the usage of selectors to replicate pods. Replica Sets use set-based selectors while replication controllers use equality-based selectors.


Equality-Based Selectors: This type of selector allows filtering by label key and value. So, in layman’s terms, an equality-based selector will only match pods whose label is exactly the specified key-value pair.

Example: Suppose your selector says app=nginx; then, with this selector, you can only match pods whose app label equals nginx.


Set-Based Selectors: This type of selector allows filtering keys according to a set of values. So, in other words, a set-based selector will match pods whose label value is one of the values mentioned in the set.

Example: Say your selector says app in (nginx, nps, apache). Then, with this selector, any pod whose app label equals nginx, nps, or apache will be treated as a match.
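
To illustrate the difference, a ReplicaSet selector can combine an equality-based clause (matchLabels) with a set-based clause (matchExpressions); the names and labels below are placeholders:

apiVersion: apps/v1
kind: ReplicaSet
metadata:
  name: web-rs                      # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      tier: frontend                # equality-based: tier must equal "frontend"
    matchExpressions:
    - key: app
      operator: In
      values: [nginx, nps, apache]  # set-based: app must be one of these values
  template:
    metadata:
      labels:
        tier: frontend
        app: nginx                  # satisfies both selector clauses
    spec:
      containers:
      - name: web
        image: nginx:1.25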


