
A Comprehensive Guide to Docker and Kubernetes: Containerization and Orchestration Made Easy


In this guide, we will cover the fundamentals of containerization with Docker and container orchestration with Kubernetes. To help you get started, we will walk through the most common commands and work through real-world examples.

Docker

Docker is a platform that allows you to run and manage containers. Containers are isolated environments that contain all the dependencies and configurations required to run an application.

a. Docker Installation:

To start using Docker, you need to install it on your system. You can download the Docker installation package from the Docker website.

b. Docker Command Line Interface:

The Docker CLI is the main interface that you use to interact with the Docker platform. The most common Docker CLI commands are:

  • docker run: This command is used to run a new container from a Docker image. For example, to run an Ubuntu image, you can use the following command:
docker run ubuntu
  • docker ps: This command lists all the running containers on your system.
  • docker images: This command lists all the Docker images stored on your system.
  • docker stop: This command stops a running container. For example, to stop a container with the ID 7ab8, you can use the following command:
docker stop 7ab8
  • docker rm: This command removes a stopped container. For example, to remove the container with the ID 7ab8, you can use the following command:
docker rm 7ab8
  • docker pull: This command downloads a Docker image from a registry to your system. For example, to download the latest version of the Ubuntu image, you can use the following command:
docker pull ubuntu
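To see how these commands fit together, here is a minimal example session (the container ID shown is illustrative; docker ps prints the real one on your system):

# Download the latest Ubuntu image from Docker Hub
docker pull ubuntu

# Start a container in the background and list running containers
docker run -d ubuntu sleep 300
docker ps

# Stop and remove the container using the ID reported by docker ps
docker stop 7ab8
docker rm 7ab8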

Deploying an Application with Docker

Now that you have a basic understanding of Docker, let’s dive into deploying an application with it.

Step 1: Choose an Application

For this guide, we’ll be deploying a simple Node.js web application. You can use any application you like; in the next step, we’ll write a Dockerfile that describes how to build its image.

Step 2: Write a Dockerfile

A Dockerfile is a script that contains all the instructions to build a Docker image. It’s used to specify the base image, application dependencies, and how the application should run in a container.

Here’s an example of a Dockerfile for a Node.js application:

# Use the official Node.js image as the base image
FROM node:16

# Set the working directory in the container to /app
WORKDIR /app

# Copy the package.json and package-lock.json files to the container
COPY package*.json ./

# Install the application dependencies
RUN npm install

# Copy the rest of the application files to the container
COPY . .

# Document the port the application listens on
EXPOSE 3000

# Specify the command to run the application
CMD ["node", "server.js"]
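The Dockerfile above assumes your project has a server.js entry point. As a reference, here is a minimal sketch of what server.js might look like, using only Node’s built-in http module so no extra dependencies are needed:

// server.js: a minimal HTTP server listening on port 3000
const http = require('http');

const server = http.createServer((req, res) => {
  res.writeHead(200, { 'Content-Type': 'text/plain' });
  res.end('Hello from my-node-app!\n');
});

server.listen(3000, () => {
  console.log('Server listening on http://localhost:3000');
});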

Step 3: Build the Docker Image

Once you’ve written your Dockerfile, you can build the Docker image using the following command:

docker build -t my-node-app .

The -t option is used to specify the name and, optionally, the tag of the image. The `.` at the end of the command sets the build context to the current directory, which is where Docker looks for the Dockerfile.
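If you want to version your images explicitly, you can add a tag after the name and then list the local images to confirm the build, for example:

docker build -t my-node-app:1.0 .
docker images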

Step 4: Run the Docker Container

Once the image is built, you can run it as a container using the following command:

docker run -p 3000:3000 my-node-app

The -p option is used to map the host’s port 3000 to the container’s port 3000. This will allow you to access the application from your host machine.
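If you prefer to keep your terminal free, you can also run the container in the background (detached mode) and give it a name; the name used here is just an example:

docker run -d -p 3000:3000 --name my-node-app-container my-node-app

# View the application logs and stop the container when you are done
docker logs my-node-app-container
docker stop my-node-app-container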

Step 5: Access the Application

You should now be able to access the application by opening a web browser and navigating to http://localhost:3000.
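You can also check it from the command line:

curl http://localhost:3000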

Kubernetes

Kubernetes is a platform for automating the deployment, scaling, and management of containerized applications. It provides a declarative approach to defining and managing the desired state of your applications and their dependencies.

a. Kubernetes Installation:

To start using Kubernetes, you need to install a cluster. You can install a cluster on your local machine using Minikube or on a cloud provider such as Google Cloud, Amazon Web Services (AWS), or Microsoft Azure.
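Whichever option you choose, you will also need the kubectl command-line tool. Assuming you go with Minikube, a quick way to confirm that both tools are installed is:

# Check the client versions of kubectl and Minikube
kubectl version --client
minikube version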

b. Kubernetes Command Line Interface:

The Kubernetes command-line interface, kubectl, is the main tool you use to interact with a Kubernetes cluster. The most common kubectl commands are:

  • kubectl run: This command runs a container image in the cluster. In current versions of kubectl it creates a single Pod (to create a Deployment, use kubectl create deployment instead). For example, to run a Pod named nginx from the Nginx image, you can use the following command:
kubectl run nginx --image=nginx
  • kubectl get: This command is used to retrieve information about the resources in a Kubernetes cluster. For example, to retrieve information about all deployments, you can use the following command:
kubectl get deployments
  • kubectl delete: This command is used to delete a resource in a Kubernetes cluster.
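For example, to delete the nginx Pod created above, you can use the following command:
kubectl delete pod nginx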

Deploying an Application with Kubernetes

Now that you’ve seen how to deploy an application with Docker, let’s look at how to deploy it with Kubernetes.

Step 1: Choose a Cluster

You can either use a cloud-based Kubernetes service like Google Kubernetes Engine (GKE) or a self-hosted solution like Minikube. For this guide, we’ll be using Minikube.

Step 2: Start the Cluster

To start a Minikube cluster, run the following command:

minikube start
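Once Minikube finishes starting, you can confirm that the cluster is up and that kubectl is pointed at it:

# The single Minikube node should be reported as Ready
kubectl get nodes
minikube status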

Step 3: Create a Kubernetes Deployment

A Kubernetes deployment is used to manage the running instances of your application. You can create a deployment using a YAML file.

Here is an example of a deployment manifest for a simple web application:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-web-app
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-web-app
  template:
    metadata:
      labels:
        app: my-web-app
    spec:
      containers:
        - name: my-web-app
          image: my-web-app:1.0
          ports:
            - containerPort: 80

This deployment manifest specifies that we want to run 3 replicas of our web application, with the label “app: my-web-app”. The template section specifies the container image we want to use for our web application, and the port that should be exposed.

To create the deployment, you can use the following command:

kubectl apply -f deployment.yaml

This command will create the deployment in the Kubernetes cluster and start the specified number of replicas. You can check the status of the deployment using the following command:

kubectl get deployments

This command will show you the status of all deployments in the cluster, including the number of replicas that are running and the status of each replica.
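For our deployment, the output will look roughly like this (the exact columns can vary between kubectl versions):

NAME         READY   UP-TO-DATE   AVAILABLE   AGE
my-web-app   3/3     3            3           45s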

Step 4: Exposing Applications with Services

Once your deployment is running, you will need to expose it to the outside world so that users can access it. This is done using a Kubernetes service. A service is a higher-level object in Kubernetes that provides a stable IP address and DNS name for your application. It also provides load balancing and proxying capabilities to help distribute traffic to your replicas.

Here is an example of a service manifest for our web application:

apiVersion: v1
kind: Service
metadata:
  name: my-web-app
spec:
  selector:
    app: my-web-app
  ports:
    - name: http
      port: 80
      targetPort: 80
  type: ClusterIP

This service manifest specifies that we want to expose our web application on port 80, with a stable IP address and DNS name. The selector section specifies that the service should route traffic to pods with the label “app: my-web-app”, which matches the label on our deployment.

To create the service, you can use the following command:

kubectl apply -f service.yaml

This command will create the service in the Kubernetes cluster and give your application a stable address inside the cluster. You can check the status of the service using the following command:

kubectl get services

This command will show you the status of all services in the cluster, including the IP address and port of each service.
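Note that a service of type ClusterIP is only reachable from inside the cluster. When running on Minikube, two simple ways to reach the application from your host machine are the minikube service command and kubectl port-forward, for example:

# Option 1: let Minikube open a tunnel and print a URL for the service
minikube service my-web-app --url

# Option 2: forward a local port on your machine to the service
kubectl port-forward service/my-web-app 8080:80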

Step 5: Scaling Applications with Deployments

One of the key benefits of using Kubernetes for container orchestration is its ability to easily scale applications. Scaling refers to the process of increasing or decreasing the number of replicas of a deployment to handle changing workloads. In Kubernetes, this can be achieved using the kubectl scale command.

To scale a deployment, you need to specify the deployment name and the number of replicas you want to have. For example, to scale a deployment named “nginx-deployment” to 5 replicas, the command would be:

kubectl scale deployment nginx-deployment --replicas=5

You can also check the current replicas of a deployment using the following command:

kubectl get deployment nginx-deployment

The output of this command includes information about the deployment, such as its name, the desired number of replicas, and the number of replicas currently running.

It’s important to note that scaling a deployment does not automatically update the resources required by the containers. To update the resources, you will need to update the deployment’s specification and apply the changes.
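For example, to set CPU and memory requests and limits, you could add a resources section to the container spec in deployment.yaml (the values below are only illustrative) and re-apply the manifest:

    spec:
      containers:
        - name: my-web-app
          image: my-web-app:1.0
          ports:
            - containerPort: 80
          resources:
            requests:
              cpu: "250m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "256Mi"

After editing the file, run kubectl apply -f deployment.yaml again and Kubernetes will roll the change out to the replicas.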

In conclusion, scaling is an important aspect of container orchestration, and Kubernetes provides an easy way to scale applications with the kubectl scale command. By using this command, you can handle changing workloads and ensure your applications are running optimally.


