
04 App Deployments

Service discovery

Similarly to Docker, IP addresses and FQDNs aren't hard-coded in the code; we connect to a service name instead.

Service hostnames are passed to the application as configuration, usually through environment variables, rather than hard-coded.
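A hypothetical sketch of this pattern in a pod spec fragment (the `REDIS_HOST` variable name and the `worker` container are assumptions, not part of the course material):

```yaml
# fragment of a Deployment's pod template:
# the hostname reaches the app via an environment variable,
# and the value is simply the Service name resolved by cluster DNS
spec:
  containers:
  - name: worker
    image: dockercoins/worker:v0.1
    env:
    - name: REDIS_HOST   # hypothetical variable read by the app
      value: redis       # the Service name
```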

Image Registries

Kubernetes doesn't have an image registry or an image build feature built-in.

Technically, a registry is just a large HTTP-based storage service; it has a garbage collector for old images/image layers and a caching system.

There are several types of registries on the internet:

  • SaaS products like Docker Hub, Quay, Gitlab, ...
  • Cloud providers: ACR (Azure), ECR (AWS), GCR (Google), ...
  • Commercial products to run your own registry: Docker Enterprise DTR, Quay, GitLab, ...
  • Open source products to run your own registry: Quay, Portus, OpenShift, GitLab, Harbor, Kraken, ...
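As a minimal sketch of the last option, the open source Docker registry can be run as a container (assuming a local Docker daemon; `localhost:5000` is just the default port mapping chosen here):

```shell
# run the open source registry image locally
docker run -d -p 5000:5000 --name registry registry:2

# re-tag an image for the local registry, then push it
docker tag redis localhost:5000/redis
docker push localhost:5000/redis
```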

Deploying a web application

```yaml
version: "2"

services:
  rng:
    image: dockercoins/rng:v0.1

  hasher:
    image: dockercoins/hasher:v0.1

  webui:
    image: dockercoins/webui:v0.1
    ports:
    - "8000:80"

  redis:
    image: redis

  worker:
    image: dockercoins/worker:v0.1
```
To deploy this on Kubernetes:

  • create one deployment for each component (each service defined in the docker-compose.yml, for example).
  • expose the deployments that need to accept connections (hasher, redis, rng, webui) with a ClusterIP (the worker only makes outbound connections).
  • redis can be pulled from the official registry.
  • the other images must be built and pushed to some registry.
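The build-and-push step could be sketched like this (the registry address and the per-service build directories are assumptions; adapt them to your setup):

```shell
# REGISTRY is an assumption -- replace with your registry address
REGISTRY=localhost:5000

# build and push each image that isn't available on a public registry
for SERVICE in hasher rng webui worker; do
  docker build -t $REGISTRY/dockercoins/$SERVICE:v0.1 ./$SERVICE
  docker push $REGISTRY/dockercoins/$SERVICE:v0.1
done
```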
```shell
# creating deployments
kubectl create deployment redis --image=redis
# in this case images are pulled from Docker Hub
kubectl create deployment hasher --image=dockercoins/hasher:v0.1
kubectl create deployment rng --image=dockercoins/rng:v0.1
kubectl create deployment webui --image=dockercoins/webui:v0.1
kubectl create deployment worker --image=dockercoins/worker:v0.1
```
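For reference, `kubectl create deployment redis --image=redis` produces roughly this manifest (a sketch: label names follow kubectl's `app` default and may differ in your cluster):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: redis
spec:
  replicas: 1
  selector:
    matchLabels:
      app: redis
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
```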

```shell
# checking service status
kubectl logs deploy/rng

# exposing services on the cluster internal network, creating resolvable DNS names
kubectl expose deployment redis --port 6379
kubectl expose deployment rng --port 80
kubectl expose deployment hasher --port 80
# creating the webui service, which has to be reachable from outside the cluster
kubectl expose deployment webui --type=NodePort --port=80

# the service type ClusterIP exposes services internally only;
# the service type NodePort additionally opens a port on every node,
# making the service reachable from outside the cluster.

# checking which port was allocated:
kubectl get svc
```
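The `kubectl expose` command for the webui is roughly equivalent to this Service manifest (a sketch; the `app: webui` selector assumes kubectl's default labels):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: webui
spec:
  type: NodePort   # opens a port on every node
  selector:
    app: webui
  ports:
  - port: 80
```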

Once the app is deployed, it's time to scale it up:

```shell
kubectl scale deployment worker --replicas=4
```

It's necessary to analyze the deployment in order to understand how to scale its services: scaling only the worker may, beyond a certain number of instances, create a bottleneck in the services that weren't scaled.
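For example, after scaling the worker further, the services it depends on can be scaled alongside it (the replica counts here are illustrative, not from the course):

```shell
# scale the worker together with its dependencies
# to avoid moving the bottleneck onto rng and hasher
kubectl scale deployment worker --replicas=10
kubectl scale deployment rng --replicas=2
kubectl scale deployment hasher --replicas=2
```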

Testing the services using httping:

```shell
HASHER_IP=$(kubectl get svc hasher -o go-template='{{ .spec.clusterIP }}')
RNG_IP=$(kubectl get svc rng -o go-template='{{ .spec.clusterIP }}')

httping -c 3 $HASHER_IP
httping -c 3 $RNG_IP
```

If pings to one service take much longer than pings to another, that service may need to be scaled up.
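When httping isn't installed, a rough alternative is timing a plain HTTP request with curl's `--write-out` variables (this reuses the `$RNG_IP` variable set above):

```shell
# time a single HTTP request to the rng service
curl -o /dev/null -s -w 'total: %{time_total}s\n' http://$RNG_IP/
```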
