Docker for Beginners on a VPS: From Zero to a Deployed App

System Admin · October 15, 2020

Why Docker Belongs in Your Hosting Workflow

Docker has moved from a niche devops tool to a standard part of the deployment stack. If you manage a VPS and deploy web applications, Docker gives you portable, reproducible environments that eliminate the "it works on my machine" problem. Instead of installing dependencies directly on your server and hoping everything stays compatible, you package your application and its entire runtime into a container that runs the same way everywhere.

This guide is for hosting customers who have a VPS (or dedicated server) and want to go from zero Docker knowledge to a deployed application. No prior container experience required — just a working server with SSH access and a willingness to learn a new way of deploying software.

Containers vs Virtual Machines: A Quick Comparison

Virtual machines emulate entire operating systems, each with its own kernel, memory allocation, and disk image. Containers share the host operating system's kernel and isolate only the application layer — libraries, binaries, and configuration files. This makes containers far lighter, faster to start (seconds vs minutes), and more efficient with resources.

A single VPS that can comfortably run two or three virtual machines can run dozens of containers. This density is why containers are popular for microservices, development environments, and hosting multiple applications on a single server.

Installing Docker on Your VPS

Docker runs on all major Linux distributions. The installation process involves adding Docker's official repository and installing the Docker Engine package. Avoid installing Docker from your distribution's default package repository — the version is often outdated. Always use Docker's official installation instructions for your specific distribution.

After installation, add your user account to the docker group so you can run Docker commands without sudo. Then verify the installation with docker run hello-world, which downloads a tiny test image and prints a confirmation message. If you see the message, Docker is working.
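On most Linux distributions those two post-install steps look roughly like this (a sketch that assumes Docker Engine is already installed; the group change only takes effect after you log out and back in):

```bash
# Allow your user to talk to the Docker daemon without sudo
sudo usermod -aG docker $USER

# Log out and back in, then verify the installation
docker run hello-world
```

If the hello-world container prints its confirmation message, both the daemon and your group membership are set up correctly.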

Core Concepts You Need to Understand

Images

A Docker image is a read-only template that contains everything your application needs: the operating system base layer, runtime (Node.js, Python, PHP), your application code, and configuration. Images are built from a Dockerfile — a text file with instructions that Docker follows to assemble the image layer by layer.

Containers

A container is a running instance of an image. You can think of the image as the recipe and the container as the dish. You can run multiple containers from the same image, each isolated from the others. Containers have their own filesystem, network interfaces, and process space.

Volumes

Containers are ephemeral by default — when a container is removed, its filesystem changes are lost. Volumes provide persistent storage that survives container restarts and removals. Use volumes for database data, uploaded files, logs, and any data that must persist.
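As an illustration (the volume and container names here are placeholders), a named volume keeps database data alive across container removals:

```bash
# Create a named volume and mount it at the database's data directory
docker volume create pgdata
docker run -d --name db -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data postgres:16

# Remove the container entirely — the data survives in the volume
docker rm -f db
docker run -d --name db -e POSTGRES_PASSWORD=example \
  -v pgdata:/var/lib/postgresql/data postgres:16
```

The second container picks up exactly where the first left off, because the data lives in the volume, not in the container's writable layer.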

Networks

Docker creates isolated networks that containers use to communicate with each other. By default, containers on the same Docker network can reach each other by container name. This is how your web application container talks to your database container without exposing the database to the public internet.
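A minimal sketch of that pattern (my-app:latest is the image built later in this guide; the network and container names are placeholders):

```bash
# User-defined network: containers on it resolve each other by name
docker network create app-net
docker run -d --name db --network app-net \
  -e POSTGRES_PASSWORD=example postgres:16
docker run -d --name web --network app-net -p 80:3000 my-app:latest
```

Inside the web container, the database is reachable at hostname db on port 5432. Note that no database port is published to the host, so the database is unreachable from the public internet.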

Writing Your First Dockerfile

A Dockerfile is a set of instructions for building your image. Here is the thought process behind a typical web application Dockerfile:

  1. Start from a base image: Choose an official runtime image — node:20-alpine for Node.js, python:3.12-slim for Python, php:8.3-fpm for PHP. Alpine and slim variants are smaller and have fewer unnecessary packages (which also means a smaller attack surface).
  2. Set a working directory: Define where your application code lives inside the container.
  3. Copy dependency files first: Copy your package.json, requirements.txt, or composer.json before copying the rest of the code. This takes advantage of Docker's layer caching — dependencies only get reinstalled when the dependency file changes, not on every code change.
  4. Install dependencies: Run the appropriate install command for your package manager.
  5. Copy application code: Copy the rest of your source code into the container.
  6. Expose the port: Declare which port your application listens on.
  7. Define the start command: Specify the command that runs when the container starts.
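Put together for a hypothetical Node.js application (the entry point server.js is a placeholder for your own start file), those seven steps might look like this:

```dockerfile
# 1. Start from a small official base image
FROM node:20-alpine

# 2. Set the working directory inside the container
WORKDIR /app

# 3. Copy dependency files first to exploit layer caching
COPY package*.json ./

# 4. Install production dependencies
RUN npm ci --omit=dev

# 5. Copy the rest of the application code
COPY . .

# 6. Declare the port the application listens on
EXPOSE 3000

# 7. Define the start command
CMD ["node", "server.js"]
```

Because step 3 comes before step 5, editing application code invalidates only the layers from COPY . . onward; the dependency install layer is reused from cache.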

Build the image with docker build -t my-app:latest . and run it with docker run -d -p 80:3000 my-app:latest. That command maps port 80 on your server to port 3000 inside the container, making your application accessible via the server's public IP.

Docker Compose: Managing Multi-Container Applications

Most real applications need more than one container. A typical setup might include a web application container, a database container, and perhaps a Redis container for caching. Docker Compose lets you define all of these services in a single YAML file and manage them together.

A docker-compose.yml file describes each service, its image or build context, port mappings, volumes, environment variables, and network connections. Starting the entire stack is a single command: docker compose up -d. Stopping and removing it is docker compose down (named volumes persist unless you add --volumes).
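A minimal sketch of such a file, pairing a web application built from a local Dockerfile with a Postgres database (service names, credentials, and the db_data volume are all placeholders):

```yaml
services:
  web:
    build: .
    ports:
      - "80:3000"
    environment:
      DATABASE_URL: postgres://app:secret@db:5432/app
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      POSTGRES_USER: app
      POSTGRES_PASSWORD: secret
      POSTGRES_DB: app
    volumes:
      - db_data:/var/lib/postgresql/data

volumes:
  db_data:
```

Compose puts both services on a shared network automatically, so the web service reaches the database at hostname db, and only the web port is exposed to the outside.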

Docker Compose is the most practical tool for deploying applications on a single VPS. It handles service dependencies, networking, and volume management without the complexity of orchestration platforms like Kubernetes.

Deploying to Your VPS

Here is a straightforward deployment workflow for a Dockerized application on a VPS:

  1. Develop and test locally: Build and run your containers on your local machine. Verify that everything works.
  2. Push your code to a repository: Use Git to push your application code (including the Dockerfile and docker-compose.yml) to a remote repository.
  3. Pull on the server: SSH into your VPS, pull the latest code, and run docker compose up -d --build. Docker rebuilds changed images and restarts affected containers.
  4. Set up a reverse proxy: Use Nginx or Traefik as a reverse proxy in front of your application containers. The reverse proxy handles HTTPS termination, domain routing, and serving static files efficiently.
  5. Automate with a deploy script: Write a simple shell script or use a CI/CD pipeline that SSHs into the server, pulls the latest code, and runs the build and deploy commands. This reduces human error and makes deployments repeatable.
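Step 5 can be as simple as a short script kept in the repository and run over SSH (the path and repository layout here are placeholders):

```bash
#!/usr/bin/env bash
# Minimal deploy script: pull the latest code and rebuild the stack.
set -euo pipefail

cd /srv/my-app                 # checked-out repository on the VPS
git pull --ff-only             # refuse unexpected merges on the server
docker compose up -d --build   # rebuild changed images, restart affected services
docker image prune -f          # reclaim space from superseded image layers
```

Running the same script for every deployment makes the process repeatable and keeps manual steps (and the mistakes that come with them) out of the loop.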

Security Considerations

  • Do not run containers as root: Create a non-root user inside your Dockerfile and run the application as that user. This limits the damage if a container is compromised.
  • Keep images updated: Base images receive security patches regularly. Rebuild your images periodically to pick up these updates.
  • Do not store secrets in images: Environment variables, API keys, and database passwords should be passed to containers at runtime through environment variables or Docker secrets — never baked into the image.
  • Limit container capabilities: Docker allows you to drop Linux capabilities that your container does not need. Reducing capabilities reduces the attack surface.
  • Use a .dockerignore file: Prevent sensitive files (like .env, .git, and node_modules) from being copied into the image during build.
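Two of these points in practice, as a sketch: the node:20-alpine image ships an unprivileged node user, so dropping root is a matter of two extra lines in the Dockerfile.

```dockerfile
FROM node:20-alpine
WORKDIR /app

# Give the unprivileged user ownership of the app files, then switch to it
COPY --chown=node:node . .
USER node

CMD ["node", "server.js"]
```

A matching .dockerignore in the project root would list at least .git, .env, and node_modules, keeping secrets and local artifacts out of the build context entirely.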

Monitoring and Logging

Docker containers write logs to stdout and stderr by default, and Docker captures them. View logs with docker logs container-name. For production deployments, configure a logging driver that ships logs to a centralized logging service — this prevents logs from consuming disk space and gives you searchable, persistent records.
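Even without a centralized logging service, you can cap local disk usage by configuring log rotation for the default json-file driver in /etc/docker/daemon.json (the size and file limits below are illustrative; restart the Docker daemon after editing):

```json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
```

This keeps at most three 10 MB log files per container, which is usually enough history for debugging without risking a full disk.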

Monitor container health with docker stats for real-time CPU, memory, and network usage. For more sophisticated monitoring, tools like Prometheus and Grafana integrate well with Docker environments and provide dashboards, alerting, and historical trend analysis.

When Not to Use Docker

Docker is not the right answer for everything. If you are running a single WordPress installation on a managed hosting plan, Docker adds complexity without meaningful benefit. If your team has no container experience and your deployment process works fine, introducing Docker just for the sake of using containers is not a sound strategy. Use Docker when it solves a real problem: environment consistency, multi-service deployment, isolation, or portability.

Wrapping Up

Docker on a VPS is a powerful combination. You get the control and cost-effectiveness of a VPS with the portability and reproducibility of containers. Start with a single application, learn the basics of images, containers, and volumes, then graduate to Docker Compose for multi-service stacks. The investment in learning pays off quickly in faster deployments, fewer configuration surprises, and a cleaner server environment.
