How to Run a Docker Image in Kubernetes



Why do we need Docker?
Earlier, deploying a service was slow and painful. First, the developers wrote code; then the operations team deployed it on bare-metal machines, where they had to look after library versions, patches, and language compilers for the code to work. If there were bugs or errors, the process started all over again: the developers fixed them, and the operations team deployed once more.
There was an improvement with the creation of hypervisors. A hypervisor hosts multiple virtual machines (VMs) on the same physical machine, each of which may be running or turned off. VMs greatly reduced the waiting time for deploying code and fixing bugs, but the real game changer was Docker containers.

What is Docker?
Docker is software that uses virtualization to run multiple isolated environments on the same host. Unlike hypervisors, which create virtual machines (VMs), Docker performs virtualization at the operating-system level, in so-called Docker containers.
As you can see from the difference in the image below, Docker containers run on top of the host's operating system. This improves efficiency: we can run more containers than virtual machines on the same infrastructure because containers use fewer resources.
Unlike VMs, which can communicate with the host's hardware (for example, using the Ethernet adapter to create more virtual adapters), Docker containers run in an isolated environment on top of the host's OS. Even if your host runs Windows, you can run Linux images in containers with the help of Hyper-V, which automatically creates a small VM to virtualize the system's base image, in this case Linux.

Docker Engine

Docker is a client-server application: clients talk to a server. The server is the Docker daemon, dockerd, which is the Docker engine. The daemon and the clients can run on the same host or on different hosts, and they communicate through the command-line client binary as well as a full RESTful API exposed by dockerd.

Docker Images

Docker images are the "source code" for our containers; we use them to build containers. They can have software pre-installed, which speeds up deployment. They are portable, and we can use existing images or build our own.

Registries

Docker stores the images we build in registries. There are public and private registries. Docker, the company, runs a public registry called Docker Hub, where you can also store images privately. Docker Hub hosts millions of images that you can start using right away.

Docker Containers

Containers are the organizational units of Docker. When we build an image and start it, we are running a container. The container analogy refers to the portability of the software running inside: we can move it (in other words, "ship" it), modify it, manage it, create it, or destroy it, just as cargo ships do with real containers.
Docker architecture 

Installing Docker on Linux

To install Docker, we use the Docker team's DEB packages. First, we need to install some prerequisite packages.
Step 1) Adding prerequisite Ubuntu packages
$ sudo apt-get install \
apt-transport-https \
ca-certificates curl \
software-properties-common
*The backslash "\" is not required; it continues the command on a new line. If you prefer, you can write the whole command on a single line without it.
Step 2) Add the Docker GPG key
$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
Step 3) Adding the Docker APT repository
$ sudo add-apt-repository \
"deb [arch=amd64] https://download.docker.com/linux/ubuntu \
$(lsb_release -cs) \
stable"
You may be prompted to confirm that you wish to add the repository and have the repository's GPG key automatically added to your host.
The lsb_release -cs command substitutes the codename of your host's Ubuntu release into the repository line.
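To make the substitution concrete, here is a sketch of the repository line the command above expands to, assuming a hypothetical Ubuntu 18.04 ("bionic") host; on a real system the codename comes from $(lsb_release -cs) instead of being hard-coded:

```shell
# Hypothetical codename; on a real host use: CODENAME="$(lsb_release -cs)"
CODENAME="bionic"
echo "deb [arch=amd64] https://download.docker.com/linux/ubuntu ${CODENAME} stable"
# prints: deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable
```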
Step 4) Update APT sources
$ sudo apt-get update
We can now install the Docker package itself.
Step 5) Installing the Docker packages on Ubuntu
$ sudo apt-get install docker-ce
The command above installs Docker and the other required packages. Before Docker 1.8.0, the package name was lxc-docker, and between Docker 1.8 and 1.13 the package name was docker-engine.




What is Kubernetes?

Kubernetes is a portable, extensible, open-source platform for managing containerized workloads and services that facilitates both declarative configuration and automation. It has a large, rapidly growing ecosystem. Kubernetes services, support, and tools are widely available.
Google open-sourced the Kubernetes project in 2014. Kubernetes builds upon a decade and a half of experience that Google has with running production workloads at scale, combined with best-of-breed ideas and practices from the community.

Why do I need Kubernetes and what can it do?

Kubernetes has a number of features. It can be thought of as:
  • a container platform
  • a microservices platform
  • a portable cloud platform and a lot more.
Kubernetes provides a container-centric management environment. It orchestrates computing, networking, and storage infrastructure on behalf of user workloads. This provides much of the simplicity of Platform as a Service (PaaS) with the flexibility of Infrastructure as a Service (IaaS), and enables portability across infrastructure providers.


Nodes
A node is a worker machine in Kubernetes, previously known as a minion. A node may be a VM or a physical machine, depending on the cluster. Each node contains the services necessary to run pods and is managed by the master components. The services on a node include the container runtime, kubelet, and kube-proxy.



Node status
A node’s status contains the following information:
  • Addresses
  • Condition
  • Capacity
  • Info
Master Node Communication

  • Cluster to master: All communication paths from the cluster to the master terminate at the API server (none of the other master components are designed to expose remote services). In a typical deployment, the API server is configured to listen for remote connections on a secure HTTPS port (443) with one or more forms of client authentication enabled. One or more forms of authorization should be enabled, especially if anonymous requests or service account tokens are allowed.
    Nodes should be provisioned with the public root certificate for the cluster so that they can connect securely to the API server along with valid client credentials. For example, on a default GKE deployment, the client credentials provided to the kubelet are in the form of a client certificate.
    Pods that wish to connect to the API server can do so securely by leveraging a service account: Kubernetes automatically injects the public root certificate and a valid bearer token into the pod when it is instantiated. The kubernetes service (in all namespaces) is configured with a virtual IP address that is redirected (via kube-proxy) to the HTTPS endpoint of the API server. The master components also communicate with the cluster's API server over the secure port.
    As a result, connections from the cluster (nodes, and pods running on the nodes) to the master are secured by default and can run over untrusted and/or public networks.
  • Master to cluster: There are two primary communication paths from the master (API server) to the cluster. The first is from the API server to the kubelet process, which runs on each node in the cluster. The second is from the API server to any node, pod, or service through the API server's proxy functionality.

How to package a web application in a Docker container image, and run that container image on a Google Kubernetes Engine cluster 

Objectives

To package and deploy your application on GKE, you must:
  1. Package your app into a Docker image
  2. Run the container locally on your machine (optional)
  3. Upload the image to a registry
  4. Create a container cluster
  5. Deploy your app to the cluster
  6. Expose your app to the Internet
  7. Scale up your deployment
  8. Deploy a new version of your app

Before you begin

Take the following steps to enable the Kubernetes Engine API:
  1. Visit the Kubernetes Engine page in the Google Cloud Platform Console.
  2. Create or select a project.
  3. Wait for the API and related services to be enabled. This can take several minutes.
  4. Make sure that billing is enabled for your Google Cloud Platform project.

Option A: Use Google Cloud Shell

To use Google Cloud Shell:
  1. Go to the Google Cloud Platform Console.
  2. Click the Activate Cloud Shell button at the top of the console window.

Option B: Use command-line tools locally

  1. Install the Google Cloud SDK, which includes the gcloud command-line tool.
  2. Using the gcloud command line tool, install the Kubernetes command-line tool. kubectl is used to communicate with Kubernetes, which is the cluster orchestration system of GKE clusters:
    gcloud components install kubectl
  3. Install Docker Community Edition (CE) on your workstation. You will use this to build a container image for the application.
  4. Install the Git source control tool to fetch the sample application from GitHub.


Set defaults for the gcloud command-line tool

To save time typing your project ID and Compute Engine zone options in the gcloud command-line tool, you can set the defaults:
gcloud config set project [PROJECT_ID]
gcloud config set compute/zone us-central1-b


Step 1: Build the container image

GKE accepts Docker images as the application deployment format. To build a Docker image, you need to have an application and a Dockerfile.
For this tutorial, you will deploy a sample web application called hello-app, a web server written in Go that responds to all requests with the message "Hello, World!" on port 8080.
The application is packaged as a Docker image, using the Dockerfile that contains instructions on how the image is built. You will use this Dockerfile to package your application.
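The hello-app repository ships its own Dockerfile, which you will use as-is. Purely as an illustration (not the actual file), a Dockerfile for a Go web server like this one might look along these lines:

```dockerfile
# Illustrative sketch only; the real Dockerfile lives in the hello-app repository.
# Build stage: compile the Go binary.
FROM golang:alpine AS build
WORKDIR /src
COPY . .
RUN CGO_ENABLED=0 go build -o /hello-app .

# Runtime stage: a minimal image containing just the binary.
FROM alpine
COPY --from=build /hello-app /hello-app
EXPOSE 8080
ENTRYPOINT ["/hello-app"]
```

The two-stage layout keeps the final image small, which is why the image listed later in this tutorial is only around 54 MB.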
To download the hello-app source code, run the following commands:
git clone https://github.com/GoogleCloudPlatform/kubernetes-engine-samples cd kubernetes-engine-samples/hello-app
Set the PROJECT_ID environment variable in your shell by retrieving the pre-configured project ID from gcloud with the command below:
export PROJECT_ID="$(gcloud config get-value project -q)"
The value of PROJECT_ID will be used to tag the container image for pushing it to your private Container Registry.
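To illustrate how the full image name is composed (using a hypothetical project ID, "my-project", in place of your real one):

```shell
# Hypothetical project ID; in the tutorial this comes from
# `gcloud config get-value project` instead.
PROJECT_ID="my-project"
IMAGE="gcr.io/${PROJECT_ID}/hello-app:v1"
echo "${IMAGE}"
# prints: gcr.io/my-project/hello-app:v1
```

The registry host (gcr.io), project ID, image name, and tag together form the fully qualified name that docker build, docker push, and kubectl all refer to.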
To build the container image of this application and tag it for uploading, run the following command:
docker build -t gcr.io/${PROJECT_ID}/hello-app:v1 .
This command instructs Docker to build the image using the Dockerfile in the current directory and tag it with a name, such as gcr.io/my-project/hello-app:v1. The gcr.io prefix refers to Google Container Registry, where the image will be hosted. Running this command does not upload the image yet.
You can run the docker images command to verify that the build was successful:
docker images
Output:
REPOSITORY                     TAG                 IMAGE ID            CREATED             SIZE
gcr.io/my-project/hello-app    v1                  25cfadb1bf28        10 seconds ago      54 MB

Step 2: Upload the container image

You need to upload the container image to a registry so that GKE can download and run it.
First, configure the Docker command-line tool to authenticate to Container Registry (you only need to run this once):
gcloud auth configure-docker
You can now use the Docker command-line tool to upload the image to your Container Registry:
docker push gcr.io/${PROJECT_ID}/hello-app:v1

Step 3: Run your container locally (optional)

To test your container image using your local Docker engine, run the following command:
docker run --rm -p 8080:8080 gcr.io/${PROJECT_ID}/hello-app:v1
If you're on Cloud Shell, you can click the "Web preview" button at the top right to see your application running in a browser tab. Otherwise, open a new terminal window (or a Cloud Shell tab) and run the following to verify that the container works and responds to requests with "Hello, World!":
curl http://localhost:8080
Once you've seen a successful response, you can shut down the container by pressing Ctrl+C in the tab where the docker run command is running.

Step 4: Create a container cluster

Now that the container image is stored in a registry, you need to create a container cluster to run the container image. A cluster consists of a pool of Compute Engine VM instances running Kubernetes, the open source cluster orchestration system that powers GKE.
Once you have created a GKE cluster, you use Kubernetes to deploy applications to the cluster and manage the applications’ lifecycle.
Run the following command to create a two-node cluster named hello-cluster:
gcloud container clusters create hello-cluster --num-nodes=2
It may take several minutes for the cluster to be created. Once the command has completed, run the following command to see the cluster's two worker VM instances:
gcloud compute instances list
Output:
NAME                                          ZONE           MACHINE_TYPE   PREEMPTIBLE  INTERNAL_IP  EXTERNAL_IP     STATUS
gke-hello-cluster-default-pool-07a63240-822n  us-central1-b  n1-standard-1               10.128.0.7   35.192.16.148   RUNNING
gke-hello-cluster-default-pool-07a63240-kbtq  us-central1-b  n1-standard-1               10.128.0.4   35.193.136.140  RUNNING

Step 5: Deploy your application

To deploy and manage applications on a GKE cluster, you must communicate with the Kubernetes cluster management system. You typically do this by using the kubectl command-line tool.
Kubernetes represents applications as Pods, which are units that represent a container (or group of tightly-coupled containers). The Pod is the smallest deployable unit in Kubernetes. In this tutorial, each Pod contains only your hello-app container.
The kubectl run command below causes Kubernetes to create a Deployment named hello-web on your cluster. The Deployment manages multiple copies of your application, called replicas, and schedules them to run on the individual nodes in your cluster. In this case, the Deployment will be running only one Pod of your application.
Run the following command to deploy your application, listening on port 8080:
kubectl run hello-web --image=gcr.io/${PROJECT_ID}/hello-app:v1 --port 8080
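Under the hood, this kubectl run invocation is roughly equivalent to applying a Deployment manifest. The sketch below shows what such a manifest might look like; the exact labels and selectors kubectl generates may differ:

```yaml
# Illustrative sketch of the Deployment kubectl run creates; labels are assumptions.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: hello-web
spec:
  replicas: 1
  selector:
    matchLabels:
      run: hello-web
  template:
    metadata:
      labels:
        run: hello-web
    spec:
      containers:
      - name: hello-web
        image: gcr.io/my-project/hello-app:v1   # substitute your project ID
        ports:
        - containerPort: 8080
```

Writing the Deployment out as a manifest like this is the declarative alternative: you could save it to a file and run kubectl apply -f on it instead.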
To see the Pod created by the Deployment, run the following command:
kubectl get pods
Output:
NAME                         READY     STATUS    RESTARTS   AGE
hello-web-4017757401-px7tx   1/1       Running   0          3s

Step 6: Expose your application to the Internet

By default, the containers you run on GKE are not accessible from the Internet because they do not have external IP addresses. To expose your application to traffic from the Internet, run the following command:
kubectl expose deployment hello-web --type=LoadBalancer --port 80 --target-port 8080
The kubectl expose command above creates a Service resource, which provides networking and IP support to your application's Pods. GKE creates an external IP and a Load Balancer (subject to billing) for your application.
The --port flag specifies the port number configured on the Load Balancer, and the --target-port flag specifies the port number that is used by the Pod created by the kubectl run command from the previous step.
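The Service that kubectl expose creates could be written out roughly as the following manifest sketch (the selector label is an assumption, matching what kubectl run typically generates):

```yaml
# Illustrative sketch of the Service kubectl expose creates.
apiVersion: v1
kind: Service
metadata:
  name: hello-web
spec:
  type: LoadBalancer
  selector:
    run: hello-web       # routes traffic to Pods carrying this label
  ports:
  - port: 80             # port on the load balancer (--port)
    targetPort: 8080     # port the container listens on (--target-port)
```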
Run kubectl get service hello-web and wait until the EXTERNAL-IP column shows an address (this can take a minute or two). Copy the IP address and point your browser to it (such as http://203.0.113.0) to check that your application is accessible.

Step 7: Scale up your application

You add more replicas to your application's Deployment resource by using the kubectl scale command. To add two additional replicas to your Deployment (for a total of three), run the following command:
kubectl scale deployment hello-web --replicas=3
You can see the new replicas running on your cluster by running the following commands:
kubectl get deployment hello-web
Output:
NAME        DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
hello-web   3         3         3            2           1m
kubectl get pods
Output:
NAME                         READY     STATUS    RESTARTS   AGE
hello-web-4017757401-ntgdb   1/1       Running   0          9s
hello-web-4017757401-pc4j9   1/1       Running   0          9s
hello-web-4017757401-px7tx   1/1       Running   0          1m
Now you have multiple instances of your application running independently of each other, and you can use the kubectl scale command to adjust the capacity of your application.
The load balancer you provisioned in the previous step will start routing traffic to these new replicas automatically.

Step 8: Deploy a new version of your app

GKE's rolling update mechanism ensures that your application remains up and available even as the system replaces instances of your old container image with your new one across all the running replicas.
You can create an image for the v2 version of your application by building the same source code and tagging it as v2 (or you can change the "Hello, World!" string to "Hello, GKE!" before building the image):
docker build -t gcr.io/${PROJECT_ID}/hello-app:v2 .
Then push the image to the Google Container Registry:
docker push gcr.io/${PROJECT_ID}/hello-app:v2
Now, apply a rolling update to the existing deployment with an image update:
kubectl set image deployment/hello-web hello-web=gcr.io/${PROJECT_ID}/hello-app:v2
Visit your application again at http://[EXTERNAL_IP], and observe that the changes you made have taken effect.

Cleaning up

To avoid incurring charges to your Google Cloud Platform account for the resources used in this tutorial, follow these steps after completing it:
  1. Delete the Service: This step will deallocate the Cloud Load Balancer created for your Service:
    kubectl delete service hello-web
  2. Delete the container cluster: This step will delete the resources that make up the container cluster, such as the compute instances, disks and network resources.
    gcloud container clusters delete hello-cluster
