02 Kubectl
Kubectl is almost the only tool needed to talk to Kubernetes.
It uses a config file located at ~/.kube/config which specifies the Kubernetes API server address and the path to the TLS certificates used to authenticate.
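A kubeconfig file roughly follows this shape (a minimal sketch; the cluster name, server URL, and certificate paths are placeholders, not values from any real cluster):

```yaml
apiVersion: v1
kind: Config
clusters:
- name: example-cluster                 # placeholder cluster name
  cluster:
    server: https://192.168.49.2:8443   # API server address (placeholder)
    certificate-authority: /home/user/.minikube/ca.crt     # placeholder path
users:
- name: example-user
  user:
    client-certificate: /home/user/.minikube/client.crt    # placeholder path
    client-key: /home/user/.minikube/client.key            # placeholder path
contexts:
- name: example-context
  context:
    cluster: example-cluster
    user: example-user
current-context: example-context        # context kubectl uses by default
```

Running kubectl config view prints the config currently in use (with keys redacted).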
Some common CLI commands to retrieve information within K8s:
- kubectl get <type> => returns resource lists or details, e.g. node to retrieve cluster info or pods to retrieve the pods list.
- kubectl get nodes -o wide => detailed node information.
- kubectl get no -o yaml => node details as YAML.
Once a node name is retrieved it's possible to use the describe command on it:
- kubectl describe <type>/<name> => kubectl describe node/minikube
- kubectl describe <type> <name> => kubectl describe node minikube
Resource Types
A K8s cluster includes several resource types, depending on its distribution.
Documentation
- It's possible to get the resource types list of the current cluster by running: kubectl api-resources
- It's possible to look at a resource type definition by running: kubectl explain <type>
- It's possible to view the definition of a field in a resource by running: kubectl explain node.spec
- Get an explanation of all fields and sub-fields by running: kubectl explain node --recursive
Get Commands
- kubectl get services / kubectl get svc => lists services on the cluster.
- kubectl get pods => lists pods from the default namespace.
- kubectl get namespaces / kubectl get ns => lists the namespaces on the cluster.
- kubectl get pods -A (--all-namespaces) => retrieves pods belonging to all namespaces.
- kubectl get pods --namespace=kube-system => retrieves pods from the kube-system namespace.
- watch kubectl get pods => real-time visualization of pods (re-runs the command periodically).
- kubectl get pods -w => real-time visualization of pods (watches for changes).
Namespaces
Namespaces allow us to segregate resources.
It's possible to use namespaces for:
- multi-administrator setups with fine-grained permissions
- network policies
- controlling the network layer
Usually K8s has some namespaces by default:
- default namespace
- kube-system namespace: contains the K8s control plane.
- kube-public namespace: used for cluster bootstrapping.
- kube-node-lease namespace: contains 1 Lease object per node; node leases are a way to implement node heartbeats (health checks).
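Beyond the default ones, creating a custom namespace only takes a minimal manifest (a sketch; the name staging is a placeholder):

```yaml
apiVersion: v1
kind: Namespace
metadata:
  name: staging   # placeholder name
```

Apply it with kubectl apply -f, or equivalently create it imperatively with kubectl create namespace staging.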
Deployments
When dealing with Kubernetes it isn't possible to run a container directly.
It has a run command, similar to Docker's, which runs a pod.
kubectl run pingpong --image alpine -- ping 1.1.1.1
kubectl create deployment pingpong --image alpine -- ping 1.1.1.1
When executing kubectl run, a generator is called.
A generator is something like a template (blueprint) to create resource objects.
Instead of creating the pod directly, the generator creates an abstraction named "deployment".
Technically the deployment is a K8s resource, an object in the etcd database, a configuration known to K8s as a "spec", or specification.
The deployment isn't going to create the pods directly; it contains an object named ReplicaSet that will instantiate the pods.
Deployment, ReplicaSet and Pod are all K8s resources and all of them are abstractions (layers of functionality).
Finally the Pod resource layer is going to run the container, which is a real object running in Docker; the container will be started using the ping command (ping 1.1.1.1).
The layers of abstraction manage each other:
- The Deployment resource at the top manages and watches the ReplicaSet to make sure that it's behaving correctly.
- The ReplicaSet watches the pods to make sure that the number of pods always matches the specified replicas number.
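The same Deployment → ReplicaSet → Pod layering can be expressed declaratively. A minimal manifest sketch for the pingpong example (the names and labels are assumptions):

```yaml
apiVersion: apps/v1
kind: Deployment          # top layer: manages the ReplicaSet
metadata:
  name: pingpong
spec:
  replicas: 1             # the ReplicaSet keeps this many pods running
  selector:
    matchLabels:
      app: pingpong       # must match the pod template labels below
  template:               # pod template: the ReplicaSet instantiates pods from it
    metadata:
      labels:
        app: pingpong
    spec:
      containers:
      - name: pingpong
        image: alpine
        command: ["ping", "1.1.1.1"]
```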
It's possible to delete the created resources as follows:
kubectl delete pod/pingpong
kubectl delete deployment/pingpong cronjob/sleep
Similarly to Docker, it's possible to define how to restart the container using flags such as --restart=OnFailure; --restart=Never might be useful for containers running a one-off job instead.
Whenever you delete a replicated Pod, the ReplicaSet will detect the change and respawn it.
The kubectl run command has been deprecated and replaced with kubectl create command:
kubectl create deployment
kubectl create job
kubectl create cronjob
kubectl run is now meant to be used only to start one-shot pods, it isn't a production command.
The most common practice is to create the deployment from a yaml file:
# with create, the -f flag is optional (resources can also be created imperatively)
kubectl create -f foo.yaml
# with apply, the -f flag is required
kubectl apply -f foo.yaml
It's possible to manage resources using the create, update, delete commands to explicitly say what you are changing.
When using the apply command, changes defined inside the yaml file are simply applied; everything will be created/updated/deleted based off what's written inside the yaml file.
When a command like kubectl run web --image nginx --replicas=3
is run, the following happens:
- the command is sent to the API server.
- the API server looks for the specs of the Deployment resource.
- the API server stores in etcd the ReplicaSet resource required by the Deployment specs.
- The API server looks for the specs of the ReplicaSet resource.
- The API server stores in etcd the 3 requested identical Pod resources.
- The scheduler assigns a node to each Pod based on its algorithm.
- The Kubelet on a worker node detects that a pod has been assigned to it and changes its status to creating.
- The Kubelet tells the Docker engine to create the container and launch the application.
- Once the container is started, the Kubelet updates etcd to change the pod's status to running.
Everything happens on a polling-interval basis.
Application Logs
Similarly to Docker, it's possible to inspect application logs:
kubectl logs deploy/pingpong --tail=200 --follow
The logs command is able to stream only 1 pod at a time; if more than one pod matches the log stream, then a random one is chosen.
To view the logs of a single pod it's possible to retrieve the exact pod name, including its hash, using kubectl get pods -A.
It's possible to stream the logs of multiple pods as follows:
kubectl logs -l run=pingpong --tail=200 --follow
The -l flag is the selector; a selector allows matching multiple objects based on their labels.
It's possible to stream a maximum of 8 streams concurrently.
Stern
Stern is an open-source project by Wercker.
Stern allows tailing multiple pods on K8s and multiple containers within a pod.
Each result is color coded for quicker debugging.
# logging all pods with name containing pingpong
stern pingpong
# adding timestamps to the stream
stern pingpong --timestamps
Scaling Deployments
K8s has a scale command to scale deployments up or down:
kubectl scale deploy/pingpong --replicas 3
kubectl scale deployment pingpong --replicas 3
CronJobs
In most cases, Deployments are what is necessary to bring a container to production.
If the pod containing a job is started using the --schedule flag, a CronJob (or maintenance job) is created (no Deployment resource is created for it).
kubectl run sleep --schedule="*/3 * * * *" --restart=OnFailure --image=alpine -- sleep 10
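Since the kubectl run generators are deprecated, the same CronJob can be written declaratively. A minimal sketch (the name sleep mirrors the command above; apiVersion batch/v1 assumes Kubernetes >= 1.21, older clusters use batch/v1beta1):

```yaml
apiVersion: batch/v1
kind: CronJob
metadata:
  name: sleep
spec:
  schedule: "*/3 * * * *"        # every 3 minutes
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: OnFailure
          containers:
          - name: sleep
            image: alpine
            command: ["sleep", "10"]
```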
Yaml diff
kubectl apply -f just-a-pod.yaml
# edit just-a-pod.yaml
kubectl diff -f just-a-pod.yaml
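The contents of just-a-pod.yaml aren't shown here; a minimal pod manifest that would work with the commands above might look like this (all names are assumptions):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: just-a-pod            # assumed pod name
spec:
  containers:
  - name: main                # assumed container name
    image: alpine
    command: ["ping", "1.1.1.1"]
```

After editing the file, kubectl diff shows what would change in the live object without applying anything.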