Docker for Beginners: Complete Guide to Containers in 2026

Learn Docker from scratch. Understand containers, images, Dockerfile, Docker Compose with practical examples for web developers.

March 17, 2026 · 7 min read

Why Docker Matters in 2026

"Works on my machine" is a phrase that has haunted software development for decades. Docker solved this problem. In 2026, Docker and container-based development is not optional knowledge β€” it is a baseline expectation for any developer working on real projects. Whether you are deploying to AWS, Google Cloud, Fly.io, or a bare metal server, containers are the universal unit of deployment.

This guide is written for developers who know how to code but have not yet dived into containers. By the end, you will understand the core concepts, know how to write a Dockerfile, and be comfortable using Docker Compose for local development.

Containers vs Virtual Machines

Before Docker, teams used virtual machines (VMs) to isolate environments. A VM includes an entire operating system: kernel, drivers, system libraries, and your application on top. This works, but it is heavy: a VM can take minutes to start and use gigabytes of memory.

Containers work differently. They share the host operating system's kernel but isolate the application's filesystem, processes, and networking. The result:

| Feature | Virtual Machine | Container |
|---|---|---|
| Startup time | Minutes | Seconds |
| Size | GBs | MBs |
| OS included | Full OS | Just libraries/deps |
| Isolation level | Strong | Strong (with caveats) |
| Resource usage | High | Low |
| Portability | Good | Excellent |

A container is not a VM; it is a process with an isolated view of the filesystem and network. This makes containers much lighter and faster than VMs while providing sufficient isolation for most use cases.

Core Docker Concepts

Images

A Docker image is a read-only template that contains everything needed to run an application: code, runtime, libraries, environment variables, and configuration. Think of an image as a recipe or a snapshot.

Images are built in layers. The base layer might be Ubuntu or Alpine Linux. The next layer adds Node.js. The next adds your package.json and installs dependencies. The final layer adds your application code.
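You can see these layers for yourself. Assuming Docker is installed, `docker history` lists each layer of a local image along with the instruction that created it and its size:

```shell
# Pull an image, then inspect how it was built up layer by layer
docker pull node:20-alpine
docker history node:20-alpine
# One row per layer: the instruction that created it and the layer's size
```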

Containers

A container is a running instance of an image. You can create many containers from the same image, just like you can run the same program multiple times. Containers are ephemeral: when you stop and remove a container, the data inside it is gone (unless you use volumes).
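You can see this ephemerality for yourself with a throwaway Alpine container (requires Docker installed; `demo` is just an arbitrary container name):

```shell
# Write a file inside a container, then remove the container
docker run --name demo alpine sh -c 'echo hello > /tmp/greeting'
docker rm demo

# A new container from the same image starts with a fresh filesystem,
# so the file is gone
docker run --rm alpine cat /tmp/greeting   # error: no such file
```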

Volumes

Volumes are the mechanism for persisting data beyond a container's lifecycle. Mount a volume from your host machine into a container to keep database files, uploaded assets, or development code synchronized.
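There are two common forms, sketched below: a named volume that Docker manages, and a bind mount of a host directory (the image names reuse examples from this guide):

```shell
# Named volume: Docker manages where the data lives on the host
docker volume create app_data
docker run -d -e POSTGRES_PASSWORD=password \
  -v app_data:/var/lib/postgresql/data postgres:16-alpine

# Bind mount: the container sees your host directory directly,
# which is handy for live-editing source code during development
docker run -v "$(pwd)/src:/app/src" my-app:latest
```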

Registry

Docker Hub is the default public registry for Docker images. It hosts official images for Node.js, Python, PostgreSQL, Nginx, and thousands of other tools. You pull images from registries and push your own images to share with your team or deploy to production.
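A sketch of the round trip; `yourname` is a placeholder for your own Docker Hub username:

```shell
docker login                                    # authenticate with Docker Hub
docker tag my-app:latest yourname/my-app:1.0.0  # tag under your namespace
docker push yourname/my-app:1.0.0               # upload to the registry

# A teammate or a server can now pull the exact same image
docker pull yourname/my-app:1.0.0
```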

Installing Docker

Download Docker Desktop from docker.com. It includes the Docker daemon, the CLI, and Docker Compose. After installation, verify it works:

docker --version
# Docker version 26.x.x

docker run hello-world
# This pulls the hello-world image and runs it

Your First Dockerfile: A Node.js App

Let us create a Dockerfile for a simple Express.js application. Assume the following project structure:

my-app/
  src/
    index.js
  package.json
  package-lock.json
  Dockerfile
  .dockerignore

The src/index.js file:

const express = require('express');
const app = express();
const PORT = process.env.PORT || 3000;

app.get('/', (req, res) => {
  res.json({ message: 'Hello from Docker!', timestamp: new Date().toISOString() });
});

app.listen(PORT, () => {
  console.log(`Server running on port ${PORT}`);
});

The Dockerfile:

# Use the official Node.js 20 LTS image as the base
FROM node:20-alpine

# Set the working directory inside the container
WORKDIR /app

# Copy package files first (for layer caching optimization)
COPY package*.json ./

# Install production dependencies only
RUN npm ci --omit=dev

# Copy the rest of the application source
COPY src/ ./src/

# Expose the port the app runs on
EXPOSE 3000

# Define the command to run the application
CMD ["node", "src/index.js"]

The .dockerignore file (prevents unnecessary files from being copied):

node_modules
npm-debug.log
.git
.gitignore
README.md
.env

Build and run the image:

# Build the image and tag it
docker build -t my-app:latest .

# Run a container from the image
docker run -p 3000:3000 my-app:latest

# Visit http://localhost:3000

The -p 3000:3000 flag maps port 3000 on your machine to port 3000 in the container.
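The two numbers are independent. For example, if port 3000 is already in use on your machine, you can map a different host port to the same container port:

```shell
# Map host port 8080 to container port 3000
docker run -p 8080:3000 my-app:latest
# The app still listens on 3000 inside the container;
# you reach it at http://localhost:8080
```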

Docker Compose for Local Development

Running individual containers with long docker run commands gets unwieldy fast. Docker Compose lets you define a multi-container application in a single YAML file.

Here is a docker-compose.yml for a Node.js app with a PostgreSQL database and Redis:

services:
  app:
    build: .
    ports:
      - "3000:3000"
    environment:
      - NODE_ENV=development
      - DATABASE_URL=postgresql://postgres:password@db:5432/myapp
      - REDIS_URL=redis://redis:6379
    volumes:
      - ./src:/app/src  # Live reload during development
    depends_on:
      db:
        condition: service_healthy
      redis:
        condition: service_started
    restart: unless-stopped

  db:
    image: postgres:16-alpine
    environment:
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: password
      POSTGRES_DB: myapp
    volumes:
      - postgres_data:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5

  redis:
    image: redis:7-alpine
    volumes:
      - redis_data:/data

volumes:
  postgres_data:
  redis_data:

Start everything with one command:

docker compose up -d        # Start in detached mode
docker compose logs -f app  # Follow logs for the app service
docker compose down         # Stop and remove containers
docker compose down -v      # Also remove volumes (deletes data)

Essential Docker Commands

# Images
docker pull node:20-alpine          # Download an image
docker images                       # List local images
docker rmi my-app:latest            # Remove an image
docker image prune                  # Remove unused images

# Containers
docker ps                           # List running containers
docker ps -a                        # List all containers (including stopped)
docker stop <container-id>          # Stop a running container
docker rm <container-id>            # Remove a stopped container
docker logs <container-id>          # View container logs
docker logs -f <container-id>       # Follow logs in real time
docker exec -it <container-id> sh   # Open a shell inside a container

# Building
docker build -t myapp:v1.0 .        # Build with a tag
docker build --no-cache -t myapp .  # Build without cache

# Compose
docker compose up -d                # Start services in background
docker compose down                 # Stop services
docker compose ps                   # List service statuses
docker compose exec app sh          # Shell into a running service

Common Mistakes to Avoid

1. Running as root in containers

By default, containers run as root. Add a non-root user to your Dockerfile:

RUN addgroup -S appgroup && adduser -S appuser -G appgroup
USER appuser

2. Not using .dockerignore

Without a .dockerignore, Docker copies your node_modules directory into the build context, slowing builds dramatically and sometimes causing platform-specific binary issues.

3. Storing secrets in images

Never use ENV MY_SECRET=value in a Dockerfile for sensitive data. Instead, pass secrets at runtime:

docker run -e DATABASE_URL="$DATABASE_URL" my-app

Or use Docker secrets for production deployments.
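A common local-development pattern, sketched here, is to keep secrets in an env file that is listed in both .gitignore and .dockerignore (this guide's .dockerignore already excludes .env) and pass it at runtime:

```shell
# Values in .env never enter the image; they are injected at run time
docker run --env-file .env my-app:latest
```

In Docker Compose, the equivalent is the `env_file:` option on a service.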

4. Ignoring image layer caching

Order your Dockerfile commands from least to most frequently changed. Put COPY package*.json and RUN npm ci before COPY src/ so that dependency installation is cached unless package.json changes.
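As a sketch, compare the two orderings, using the example app from this guide:

```dockerfile
# Slow: any change under src/ invalidates the COPY layer,
# so dependencies are reinstalled on every build
COPY . .
RUN npm ci --omit=dev

# Fast: the dependency layer is reused until package*.json changes
COPY package*.json ./
RUN npm ci --omit=dev
COPY src/ ./src/
```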

5. Using latest tags in production

Pin your image versions explicitly. FROM node:20-alpine (not FROM node:latest) ensures reproducible builds.
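For even stronger guarantees, you can pin to an immutable digest. The digest below is not shown; you would look it up yourself against the image you tested:

```shell
# Print the digest of the tag you have actually tested against
docker inspect --format '{{index .RepoDigests 0}}' node:20-alpine
# node@sha256:...

# Use that digest in your Dockerfile:
#   FROM node:20-alpine@sha256:<digest-from-above>
```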

Next Steps

Once you are comfortable with the basics, explore:

  • Multi-stage builds: Separate your build environment from your runtime image for smaller production images
  • Docker Scout: Scan your images for vulnerabilities
  • Docker Swarm or Kubernetes: Orchestrate containers at scale
  • GitHub Actions + Docker: Automate building and pushing images in CI/CD
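As a taste of the first item, here is a minimal multi-stage sketch for the Express app from this guide. The build step is hypothetical; adapt it to your project:

```dockerfile
# Stage 1: install all dependencies and run any build step
FROM node:20-alpine AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY src/ ./src/
# RUN npm run build   # uncomment if your project has a build step

# Stage 2: the runtime image ships only production dependencies
FROM node:20-alpine
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY --from=build /app/src ./src
EXPOSE 3000
CMD ["node", "src/index.js"]
```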

Docker is one of those tools that feels complex at first and obvious once it clicks. The mental model is simple and powerful: package everything your app needs into a portable, reproducible unit. Invest a day to get comfortable with the commands in this guide, and you will find containers become second nature.