Explaining Kubernetes Clusters vs. Workloads Views in GCP Console

The aim of this page 📝 is to explain the difference between the Clusters and Workloads views as encountered in GCP Console → Kubernetes Engine.

Pavol Kutaj
4 min read · Nov 6, 2023
What can I get from each one?
  • The Clusters view lets you configure nodes, i.e. the machines that your pods run on
technical implementation
  • It is more a physical configuration of resources than a logical/semantic view: it is about ensuring that the servers run as they should (see the sketch below).
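To make the "physical" flavor of the Clusters view concrete, here is a minimal sketch that lists the nodes of a cluster and their allocatable capacity. Using the official Kubernetes Python client is my assumption, not anything the console prescribes, and it presumes your kubeconfig already points at the GKE cluster (e.g. after `gcloud container clusters get-credentials`).

```python
# Minimal sketch: list the nodes (machines) that the Clusters view manages.
# Assumes `pip install kubernetes` and a kubeconfig pointing at the cluster.
from kubernetes import client, config

config.load_kube_config()  # read the active kubeconfig context

core = client.CoreV1Api()
for node in core.list_node().items:
    alloc = node.status.allocatable  # e.g. {'cpu': '2', 'memory': '5968Mi', ...}
    print(f"{node.metadata.name}: cpu={alloc['cpu']}, memory={alloc['memory']}")
```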
  • In Kubernetes, a workload is an application running in one or more Kubernetes (K8s) pods, or, essentially, a way of describing a type of microservice that makes up an application.
business implementation
  • For instance, an application might have a frontend workload and a backend workload, each made up of a dozen pods spread across a Kubernetes cluster.
  • There is a 1:many relationship between an application and its workloads.
  • An application can be composed of multiple workloads, each representing a different component or microservice of the application. Each workload can be made up of multiple pods, and each pod can contain one or more containers.
  • This architecture allows for high flexibility and scalability, as each component can be scaled and managed independently.
  • Pods, then, are logical groupings of containers running in a Kubernetes cluster, managed by controllers through a control loop.
  • A controller monitors the current state of a Kubernetes resource and makes the requests necessary to change its state to the desired state.
  • Workload resources configure controllers that ensure the correct pods are running to match the desired state you have defined for your application (a simplified control-loop sketch follows below).
  • Workloads are objects that set deployment rules for pods.
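To make the controller idea less abstract, here is a toy sketch of the observe/compare/act loop. The function names and the pod bookkeeping are purely illustrative; they are not Kubernetes API calls.

```python
# Toy sketch of a controller's reconciliation loop: observe the current state,
# compare it with the desired state, and act to close the gap.
import time

def reconcile(desired_replicas, current_pods):
    """Return a pod list adjusted to match the desired replica count."""
    pods = list(current_pods)
    while len(pods) < desired_replicas:
        pods.append(f"pod-{len(pods)}")  # pretend to create a missing pod
    while len(pods) > desired_replicas:
        pods.pop()                       # pretend to delete a surplus pod
    return pods

def control_loop(get_desired, get_current, apply_state, interval_s=5.0):
    while True:
        desired = get_desired()   # desired state, e.g. replicas from a workload resource
        current = get_current()   # current state observed in the cluster
        if len(current) != desired:
            apply_state(reconcile(desired, current))
        time.sleep(interval_s)
```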
  • A single Kubernetes pod can run multiple containers, each running a different workload or component of an application.
  • This is known as a multi-container pod.
  • The containers in a multi-container pod are scheduled on the same node, share the same network namespace (including IP address and network ports), and can communicate with each other using localhost.
  • They can also share storage volumes; this pattern is useful for sidecar applications, adapters, and proxies. However, it is important to note that each container in a pod is isolated and runs its own processes (see the Deployment sketch below).
  • Based on these rules, Kubernetes performs the deployment and updates the workload with the current state of the application.
  • Workloads let you define the rules for application scheduling, scaling, and upgrading.
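Putting these pieces together, below is a minimal sketch that defines one workload (a Deployment) whose pod template runs two containers, a web server plus a log-shipping sidecar sharing a volume, again via the official Kubernetes Python client. The images, names, replica count, and the `default` namespace are all illustrative assumptions, not anything prescribed by GKE.

```python
# Minimal sketch: one workload (a Deployment) with a multi-container pod
# template (main container + sidecar sharing an emptyDir volume).
from kubernetes import client, config

config.load_kube_config()

web = client.V1Container(
    name="web",
    image="nginx:1.25",  # assumed image
    ports=[client.V1ContainerPort(container_port=80)],
    volume_mounts=[client.V1VolumeMount(name="shared-logs", mount_path="/var/log/nginx")],
)
log_shipper = client.V1Container(
    name="log-shipper",
    image="busybox:1.36",  # assumed sidecar image
    command=["sh", "-c", "tail -F /logs/access.log"],
    volume_mounts=[client.V1VolumeMount(name="shared-logs", mount_path="/logs")],
)

deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="frontend", labels={"app": "frontend"}),
    spec=client.V1DeploymentSpec(
        replicas=3,  # the controller keeps three pods of this template running
        selector=client.V1LabelSelector(match_labels={"app": "frontend"}),
        template=client.V1PodTemplateSpec(
            metadata=client.V1ObjectMeta(labels={"app": "frontend"}),
            spec=client.V1PodSpec(
                containers=[web, log_shipper],
                volumes=[client.V1Volume(name="shared-logs",
                                         empty_dir=client.V1EmptyDirVolumeSource())],
            ),
        ),
    ),
)

client.AppsV1Api().create_namespaced_deployment(namespace="default", body=deployment)
```

In terms of the 1:many relationship above: the application would be this frontend Deployment plus, say, a backend Deployment; each Deployment is a workload, and each workload runs several pods of its template.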
  • A namespace in Kubernetes is a way to group and isolate resources; we use namespaces to separate different architectural components within a given environment.
  • The same grouping concept carries over to the context of Google Cloud Platform (GCP).
  • In GCP, Config Connector uses namespaces to organize Google Cloud resources using Kubernetes configuration.
  • These resources can be organized at the Project, Folder, and Organization levels.
  • You can add annotations to your Config Connector namespaces to map resources to a Project, a Folder, or an Organization (see the namespace sketch after this list).
  • In Kubernetes, namespaces are used to manage multiple teams or projects. A namespace can be thought of as a virtual cluster inside your Kubernetes cluster.
  • You can have multiple namespaces inside a single Kubernetes cluster, and they are all logically isolated from each other.
  • On the other hand, a container can be considered roughly synonymous with a set of Linux namespaces (including the network namespace).
  • Each container runtime uses namespaces differently. For example, containers in Docker get their own namespaces, while in CoreOS’ rkt, groups of containers share namespaces; each such group is called a pod.
  • So, while both namespaces and containers use the concept of isolation and resource management, they are used in different contexts and have different functionalities.
  • Namespaces are more about logical separation and organization of resources in cloud platforms like GCP and Kubernetes, while containers are about process isolation and resource control in the Linux operating system.
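As a small illustration of namespaces as logical groupings, and of the Config Connector annotation mentioned above, here is one more sketch with the same Python client. The namespace name and the project id are made-up placeholders.

```python
# Minimal sketch: create an annotated namespace and show how one cluster
# is logically partitioned into namespaces.
from kubernetes import client, config

config.load_kube_config()
core = client.CoreV1Api()

# Config Connector reads this annotation to map resources created in the
# namespace to a GCP project ("my-gcp-project" is a placeholder).
ns = client.V1Namespace(
    metadata=client.V1ObjectMeta(
        name="analytics",
        annotations={"cnrm.cloud.google.com/project-id": "my-gcp-project"},
    )
)
core.create_namespace(body=ns)

# Same cluster, logically separated: count the pods living in each namespace.
for namespace in core.list_namespace().items:
    pods = core.list_namespaced_pod(namespace.metadata.name).items
    print(f"{namespace.metadata.name}: {len(pods)} pods")
```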
