Kubernetes Short Notes (1)

Cluster Architecture

Master Node

  • ETCD cluster
  • kube-scheduler
  • kube-controller-manager

These components communicate via the kube-apiserver

Worker Node

  • container runtime engine, e.g. Docker, rkt (Rocket), containerd
  • kubelet: the agent that runs on each node and listens for instructions from the kube-api
  • containers

The services deployed within worker nodes communicate with each other via kube-proxy

Objectives

ETCD

  • a distributed, reliable key-value store
  • client communications on port 2379
  • server-to-server communication on port 2380
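
For example, etcd can be queried directly over the client port with etcdctl (a minimal sketch assuming a kubeadm cluster with the certificates in /etc/kubernetes/pki/etcd; adjust the paths for your setup):

    # List all keys Kubernetes has stored in etcd
    ETCDCTL_API=3 etcdctl \
      --endpoints=https://127.0.0.1:2379 \
      --cacert=/etc/kubernetes/pki/etcd/ca.crt \
      --cert=/etc/kubernetes/pki/etcd/server.crt \
      --key=/etc/kubernetes/pki/etcd/server.key \
      get / --prefix --keys-only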

kube-api

  • primary management component

  • setup:

    1. using the kubeadm tool:

      • deploys kube-api as a static pod in the kube-system namespace

      • the manifest is at /etc/kubernetes/manifests/kube-apiserver.yaml

    2. in a manual (non-kubeadm) setup:

      • the options are in /etc/systemd/system/kube-apiserver.service

      • search for the kube-apiserver process on the master node to inspect its effective options

  • example: applying a deployment using kubectl (a sketch follows this list)

    1. the kube-api authenticates the user
    2. validates the HTTP request
    3. the kube-scheduler monitors changes through the kube-api, then:

      • retrieves node information from the kube-api
      • schedules the pod to a node; the instruction reaches the kubelet through the kube-api

    4. the pod info is updated in ETCD
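
From the user's side, the same flow looks like this (deployment.yaml is a placeholder file name):

    # The request goes to the kube-apiserver, which authenticates the user,
    # validates the request, and persists the object to etcd
    kubectl apply -f deployment.yaml

    # Watch the scheduler assign the new pods to nodes
    kubectl get pods -o wide --watch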

kube-controller-manager

  • continuously monitors the state of the cluster's components
  • the controllers are packaged into a single process called kube-controller-manager, which includes:
    1. deployment-controller, cronjob, service-account-controller …
    2. namespace-controller, job-controller, node-controller …
    3. endpoint-controller, replicaset-controller, replication-controller …
  • remediates the situation when the actual state drifts from the desired state (see the sketch below)
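
On a kubeadm cluster it can be inspected like the other control-plane components (a sketch; the manifest path matches kubeadm defaults):

    # The controller manager runs as a static pod under kube-system
    kubectl get pods -n kube-system | grep kube-controller-manager

    # Its manifest shows the process options, including the --controllers
    # flag that selects which controllers are enabled
    cat /etc/kubernetes/manifests/kube-controller-manager.yaml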

kube-scheduler

  • decides which pod goes to which node, in two phases:
    1. filter nodes (drop the nodes that cannot fit the pod)
    2. rank nodes (score the remaining nodes and pick the best)
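
Resource requests are one input to the filtering phase: nodes without enough free CPU are dropped before the rest are ranked (a minimal sketch; the names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: cpu-hungry
    spec:
      containers:
        - name: app
          image: nginx
          resources:
            requests:
              cpu: "2"       # nodes with less than 2 CPUs free are filtered out
              memory: 1Gi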

kubelet

  • follows the instructions from the kube-scheduler and controls the container runtime engine (e.g. Docker) to run or remove containers
  • when deploying a cluster with kubeadm, the kubelet is not installed on worker nodes by default; it must be installed manually
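
Unlike the control-plane components, the kubelet always runs as a plain process on the node, so it can be checked like any other service (a sketch assuming a systemd-based distribution):

    # kubelet runs as a systemd service on each node
    systemctl status kubelet

    # or find the process and its options directly
    ps aux | grep kubelet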

kube-proxy

  • runs on each node in the cluster
  • creates iptables rules on each node to forward traffic headed for a service's IP to the IPs of the actual pods
  • kubeadm deploys kube-proxy as a DaemonSet, one pod per node (see the sketch below)
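
Those rules can be inspected directly on a node (a sketch assuming kube-proxy runs in its default iptables mode):

    # One entry per service cluster IP
    sudo iptables -t nat -L KUBE-SERVICES | head

    # The DaemonSet itself lives in kube-system
    kubectl get daemonset kube-proxy -n kube-system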

pod

  • containers are encapsulated into a pod
  • a pod is a single instance of an application, the smallest object in k8s
  • containers in the same pod share storage and network namespaces, and are created and removed at the same time
  • multi-container pods are a rare use case
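
A minimal pod manifest (the names are illustrative):

    apiVersion: v1
    kind: Pod
    metadata:
      name: web
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80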

ReplicationController

  • supported in apiVersion v1
  • the process that monitors the pods
  • maintains HA and the specified number of pods running across the nodes
  • only cares about pods whose RestartPolicy is set to Always
  • scalable and replaceable applications should be managed by a controller
  • use cases: rolling updates, multiple release tracks (multiple replication controllers replicating the same pod but using different labels)
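
A minimal ReplicationController manifest (illustrative names; note the flat, equality-only selector):

    apiVersion: v1
    kind: ReplicationController
    metadata:
      name: web-rc
    spec:
      replicas: 3
      selector:
        app: web
      template:
        metadata:
          labels:
            app: web
        spec:
          containers:
            - name: nginx
              image: nginx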

ReplicaSets

  • the next generation of ReplicationController
  • supported in apiVersion apps/v1
  • enhances the filtering in .spec.selector with set-based requirements (the major difference; see the sketch below)
  • be aware of non-template pods that carry the same labels, as the ReplicaSet will acquire them too
  • using a Deployment instead is recommended; it owns and manages its ReplicaSets
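
A sketch of the richer selector, which ReplicationController does not support:

    apiVersion: apps/v1
    kind: ReplicaSet
    metadata:
      name: web-rs
    spec:
      replicas: 3
      selector:
        matchLabels:
          app: web
        matchExpressions:            # set-based filtering
          - key: track
            operator: In
            values: [stable, canary]
      template:
        metadata:
          labels:
            app: web
            track: stable
        spec:
          containers:
            - name: nginx
              image: nginx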

Deployment

  • provides replication via ReplicaSets, plus:
    • rolling update
    • rollout
    • pause and resume
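
Those features map directly to kubectl subcommands (web is a placeholder deployment name):

    kubectl rollout status deployment/web   # watch a rolling update
    kubectl rollout pause deployment/web    # pause to batch several changes
    kubectl rollout resume deployment/web
    kubectl rollout undo deployment/web     # roll back to the previous revision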

Namespace

  • namespaces created at cluster creation

    1. kube-system
    2. kube-public
    3. default

  • each namespace can be assigned quota of resources

  • a DNS entry in the SERVICE_NAME.NAMESPACE.svc.cluster.local format is automatically created at service creation

    1. cluster.local is the default domain name of the cluster

  • the default namespace can be configured permanently in the kubectl context, as shown below
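
A sketch (dev is an illustrative namespace name):

    # Switch the current context's default namespace
    kubectl config set-context --current --namespace=dev

    # Verify
    kubectl config view --minify | grep namespace: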

ResourceQuota

  • useful to limit the compute resources of a single namespace, for example:
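
A minimal sketch (the dev namespace and the numbers are illustrative):

    apiVersion: v1
    kind: ResourceQuota
    metadata:
      name: compute-quota
      namespace: dev
    spec:
      hard:
        pods: "10"
        requests.cpu: "4"
        requests.memory: 4Gi
        limits.cpu: "8"
        limits.memory: 8Gi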

Service

  • NodePort: listens on a port of the node and forwards requests to the pods (a sketch follows this list)
    1. the nodePort must be in the range 30000-32767
    2. only port is required; targetPort defaults to port if not specified, and nodePort can be allocated automatically
    3. the service balances the load between pods using a random algorithm
    4. the service is automatically configured by k8s to span the cluster, mapping the target port to the same node port on every node
  • ClusterIP: the default type; creates a virtual IP inside the cluster
    1. groups pods together and provides a single endpoint to access them
    2. each service gets a name and a stable IP address
    3. a default service named kubernetes is created by k8s at launch, on port 443
  • LoadBalancer: provisions an external load balancer on supported cloud providers
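
A minimal NodePort sketch (names are illustrative):

    apiVersion: v1
    kind: Service
    metadata:
      name: web-svc
    spec:
      type: NodePort
      selector:
        app: web
      ports:
        - port: 80         # cluster-internal service port; required
          targetPort: 80   # defaults to port if omitted
          nodePort: 30080  # allocated automatically if omitted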

Running a curl pod is helpful for manually testing services, for example:
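
A sketch that reuses the web-svc service above and the DNS name format from the Namespace section (curlimages/curl is just one image that ships curl):

    kubectl run curl --image=curlimages/curl -it --rm --restart=Never -- \
      sh -c 'curl -s http://web-svc.default.svc.cluster.local'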
