What is Docker and why should you use it?
Docker is a software platform that allows you to build, run, and share applications using containers. Containers are isolated environments that package up the code, libraries, and dependencies of an application, ensuring that it works consistently across different platforms. Docker makes it easy to create, manage, and deploy containers using simple commands and a standard format.
Benefits of using Docker
Some of the benefits of using Docker are:
- Portability: You can run your application on any machine that has Docker installed, without worrying about compatibility issues or configuration changes. You can also move your application between different environments, such as development, testing, and production, with minimal effort.
- Efficiency: Docker uses a layered filesystem that caches and reuses common components, reducing the size and build time of your images. Docker also allows you to run multiple containers on a single host, sharing the same kernel and resources, which improves the performance and utilization of your system.
- Reproducibility: Docker ensures that your application runs the same way every time, regardless of where and how it is deployed. You can also version and track the changes of your images, and rollback to a previous state if needed.
- Isolation: Docker provides a high level of isolation between your containers, preventing them from interfering with each other or with the host system. You can also limit the resources and permissions of each container, enhancing the security and stability of your application.
- Collaboration: Docker makes it easy to share your images and code with others, using a central repository called Docker Hub. You can also use Docker Compose to define and run multi-container applications, and Docker Swarm to scale and orchestrate your containers across multiple hosts.
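To illustrate that last point, here is a minimal sketch of a Docker Compose file that runs a web app next to a database. The service names, image tags, and port numbers here are illustrative assumptions, not taken from a real project:

```yaml
# docker-compose.yml -- illustrative sketch, not a real project's config
services:
  web:
    image: nodejs-app        # hypothetical application image
    ports:
      - "8080:3000"          # hostPort:containerPort
    depends_on:
      - db                   # start the database before the web app
  db:
    image: postgres:16       # official PostgreSQL image from Docker Hub
    environment:
      POSTGRES_PASSWORD: example
```

With a file like this in place, a single docker compose up command starts both containers together.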
Docker vs virtual machines?
Docker and virtual machines are two different ways of running applications in isolated environments. The main difference between them is the level of abstraction and the amount of resources they use.
A virtual machine (VM) is a software emulation of a physical machine that runs a complete operating system (OS) and the applications on top of it. A VM requires a hypervisor, which is a software layer that creates and manages the VMs on a host machine. A VM has its own kernel, drivers, libraries, and binaries, and can run any OS that is compatible with the host machine’s hardware. A VM provides a high level of isolation and security, but also consumes a lot of resources and takes a long time to boot.
Docker is a software platform that uses containers to run applications. A container is a lightweight and portable package that contains the code, libraries, and dependencies of an application, but shares the kernel and the resources of the host machine. Docker uses a daemon, which is a background process that manages the containers on a host machine. Docker also uses images, which are read-only templates that define how to create and run containers. Docker provides a high level of portability and efficiency, but also relies on the host machine’s OS and kernel features.
Here is a summary of the main differences between Docker and a virtual machine:
| Docker | Virtual Machine |
|---|---|
| Uses containers to run applications | Uses hypervisors to run operating systems |
| Shares the kernel and the resources of the host machine | Has its own kernel, drivers, libraries, and binaries |
| Uses images to create and run containers | Uses disk images to create and run VMs |
| Provides a high level of portability and efficiency | Provides a high level of isolation and security |
| Relies on the host machine’s OS and kernel features | Can run any OS that is compatible with the host machine’s hardware |
How to get started with Docker
To get started with Docker, you need to install Docker on your machine, which includes the Docker Engine, the Docker CLI, and other tools.
Once you have installed Docker, you can open a terminal and run the following command to verify that it is working:
```shell
docker --version
```
This should display the version of Docker that you have installed.
Next, you can run the following command to pull and run a simple hello-world image from Docker Hub:
```shell
docker run hello-world
```
This should print a message that confirms that Docker is able to communicate with the Docker daemon, pull the image from Docker Hub, and run it in a container.
You can also run the following command to see the list of images and containers that you have on your machine:
```shell
docker images
docker ps -a
```
These commands will show you the name, tag, size, status, and other details of your images and containers.
Docker uses Dockerfiles to know how to build an image
To create a container, you need to have an image that contains your application code, libraries, configuration files, environment variables, and runtime. A Dockerfile is a text file that contains the instructions for building your image.
Creating a Dockerfile essentially comes down to writing out the commands needed to build and launch an application. Its contents differ depending on the programming stack.
In this post, I will show you how to create a Dockerfile for a simple Node.js app that uses the Express and Bootstrap frameworks.
Step 1: Create a package.json file
The first step is to create a package.json file that specifies the dependencies and scripts for your app. You can use the npm init
command to generate a basic package.json file, or you can create one manually. Here is an example of a package.json file for our app:
```json
{
  "name": "nodejs-app",
  "version": "1.0.0",
  "description": "A simple nodejs app with Docker",
  "main": "index.js",
  "scripts": {
    "start": "node index.js"
  },
  "dependencies": {
    "express": "^4.17.1",
    "bootstrap": "^5.1.3"
  }
}
```
The package.json file defines the name, version, description, main entry point, start script, and dependencies of our app. The start script tells npm how to run our app, and it is what the container will execute when it is launched. The dependencies are the modules that our app needs to function properly. In this case, we need express for creating a web server, and bootstrap for styling our website.
Step 2: Create an index.js file
The next step is to create an index.js file that contains the code for our app. The index.js file is the main entry point of our app, and it is the file that the start script in the package.json file will execute. Here is an example of an index.js file for our app:
```javascript
// Import the express module
const express = require('express');

// Create an express app
const app = express();

// Serve static files from the public directory
app.use(express.static('public'));

// Define the port number
const port = 3000;

// Start the server and listen on the port
app.listen(port, () => {
  console.log(`App listening at http://localhost:${port}`);
});
```
The index.js file imports the express module, creates an express app, serves static files from the public directory, defines the port number, and starts the server. The public directory contains the HTML, CSS, and JS files for an example website.
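The post doesn't show the website files themselves, so here is a minimal sketch of what public/index.html could look like. The file contents are an illustrative assumption, as is the detail that Bootstrap's stylesheet was copied from node_modules/bootstrap/dist/css into public/css so that Express can serve it:

```html
<!-- public/index.html -- minimal illustrative sketch -->
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="utf-8">
  <title>nodejs-app</title>
  <!-- assumes bootstrap.min.css was copied into public/css from node_modules -->
  <link rel="stylesheet" href="css/bootstrap.min.css">
</head>
<body class="container">
  <h1 class="mt-5">Hello from Docker!</h1>
</body>
</html>
```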
Step 3: Create a Dockerfile
The final step is to create a Dockerfile that contains the instructions for building our image. The Dockerfile should be in the same directory as the package.json and index.js files. Here is an example of a Dockerfile for the app:
```dockerfile
# Use the official node image as the base image
FROM node:18

# Set the working directory to /app
WORKDIR /app

# Copy the package.json and package-lock.json files to the working directory
COPY package*.json ./

# Install the dependencies
RUN npm install

# Copy the rest of the files to the working directory
COPY . .

# Expose the port that the app listens on
EXPOSE 3000

# Define the command to run the app
CMD ["npm", "start"]
```
The Dockerfile uses the node:18 image as the base image, which provides a Node environment with version 18. The WORKDIR instruction sets the working directory to /app, where we will copy our files and run our commands. The COPY instruction copies the package.json and package-lock.json files to the working directory, and the RUN instruction installs the dependencies using npm install. The COPY instruction then copies the rest of the files to the working directory, including the index.js file and the public directory. The EXPOSE instruction exposes port 3000, which is the port that our app listens on. The CMD instruction defines the command to run the app, which is npm start.
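One detail worth adding, which the steps above don't cover: because COPY . . copies everything in the build context into the image, it is common to add a .dockerignore file next to the Dockerfile. Keeping the locally installed node_modules directory out of the copy ensures the dependencies in the image come only from the RUN npm install step:

```
# .dockerignore -- paths Docker should skip when copying the build context
node_modules
npm-debug.log
```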
Step 4: Build and run the image
Now that we have our Dockerfile, we can build and run our image using the docker commands. To build the image, we need to use the docker build command with a tag name for our image. For example, we can use the following command to build our image and name it nodejs-app:
```shell
docker build -t nodejs-app .
```
The -t option specifies the tag name for our image, and the . specifies the context for the build, which is the current directory. The docker build command will read the Dockerfile and execute the instructions to create our image.
To run the image, we need to use the docker run command with the name of our image and a port mapping. For example, we can use the following command to run our image and map the port 3000 of the container to the port 8080 of the host:
```shell
docker run -p 8080:3000 nodejs-app
```
The -p option specifies the port mapping, which follows the format hostPort:containerPort. The docker run command will create and start a container from our image, and execute the CMD instruction to run the app.
The example app should now be served at http://localhost:8080.
Conclusion
I hope this blog post helped you understand what exactly Docker is. Text isn't everything, though. You can build new applications with Docker in mind, or self-host existing Docker images, such as the very Ghost blog you're reading, Plausible Analytics, Nextcloud, Uptime Kuma, and lots more!
Another neat skill to pick up is Docker Compose. Check it out!