We’ve covered quite a lot on our journey to automate the creation of one single VPS hosting multiple websites or webapps. As you can see in the diagram below, we’re deploying to a Digital Ocean droplet using Terraform and cloud-init. All code goes in git, of course. Docker and Dockerfiles have been covered in part II of this series.

Today we’ll be having fun with technology that is new to the Greater blogs: we’ll be setting up our one-page HTML and WordPress sites and putting them in containers, using a mix of Docker magic and Docker compose.

Last time

This is the status:

Covered in previous parts:

  • Git
  • GitLab
  • Digital Ocean droplet
  • Digital Ocean API
  • Terraform
  • Cloud-init
  • Docker
  • Dockerfile

Still to do:

  • Docker compose
  • One-page HTML site
  • WordPress site
  • Nextcloud app
  • Traefik reverse proxy
  • TLS

Today we’ll fill in some more blanks.

Docker compose

Docker compose has always been a separate binary, installed separately from Docker itself. For some this was a bit of a hassle, and very recently they’ve changed it to be, surprise, even more of a hassle.

For years, compose has been a separate binary called docker-compose. Now it’s still a binary called docker-compose, but you must treat it like a plugin, place it in a special location and invoke it as a docker subcommand (i.e., docker compose). It’s a bit silly in my opinion, but whatever, it’s still a pretty awesome app. Installing via the default Ubuntu repos will give you version 1.x, and we would like 2.x. This is how you would install it manually (check releases here):

```shell
cd
wget https://github.com/docker/compose/releases/download/<INSERT VERSION>/docker-compose-linux-x86_64
mv docker-compose-linux-x86_64 docker-compose
chmod +x docker-compose
sudo mkdir -p /usr/local/lib/docker/cli-plugins
sudo mv docker-compose /usr/local/lib/docker/cli-plugins
docker compose --help
```

You might have noticed that I already incorporated these steps in the user_data / cloud-init script so we’re all good to go.

As with Docker, normally you would not install this via cloud-init, simply because it is too specialized. Everything you do via cloud-init will end up on every server you roll out, and not every server is a Docker server (it might be in your case, but more often it is not).

Compose files

Preferably, we want to run as few docker commands as possible. As we did with Terraform before, we would like to put our configuration in code and then use a single command to spin up a bunch of containers that are connected and work together. Well, you’ve probably guessed it: this is where Docker compose and its compose files come in.

A compose file is in YAML format, which is hated by a lot of developers (first hit on Google) but used just about everywhere, so for compose we’ll have to work with it too. This is the general idea:

  1. Create a compose file named docker-compose.yaml

```yaml
version: '3'

services:
  plex:
    image: linuxserver/plex
    container_name: plex
    restart: always
    environment:
      - VERSION=docker
    volumes:
      - .:/media/
  cache:
    image: redis:alpine
    container_name: cache
    restart: always
    ports:
      - 6379:6379
  proxy:
    image: nginx
    container_name: proxy
    restart: always
    ports:
      - 8080:80
      - 4443:443
```

2. Start your configuration

```shell
docker compose up -d
```

3. Check your running containers

```shell
docker compose ps
```

4. Tear it down

```shell
docker compose down
```

This is just an example to show you how easy it can be, but the containers are not actually talking to each other. Let’s change this by setting up some more usable examples.

The plan

First of all, and this is maybe the most important thing regarding compose: the reference documentation is outstanding. You can find any option you’ll need to build your containers, spin them up, expose ports, use volumes, you name it.

The plan for today and the next post is to build no less than 3 webapps:

  1. A one-page HTML site
  2. A WordPress website
  3. A Nextcloud instance for Dropbox-like filesyncing

And then connect them to the outside world via Traefik, all secured with TLS. You’ll need a domain name and/or (temporary) access to its DNS records. If you don’t have a domain name, or access to one, there are plenty of offers out there where you can get one for under €5 a year. Just do some searching around.

Anyway, I’ll be using devops.tf to play around with. Let’s go!

DNS

In the previous posts we’ve created a Digital Ocean droplet with Terraform. Look up the public IP from the DO web interface or when logged in via SSH (with the command ip a s). With the IP in hand, we can easily add a few records to make sure our webapps resolve to our droplet. See the screenshot below for the general idea.
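The records themselves are plain A records pointing your (sub)domains at the droplet. As a sketch, using my devops.tf domain and a placeholder IP (203.0.113.10 is a reserved documentation address; substitute your droplet’s real public IP), they could look like:

```text
devops.tf.             A    203.0.113.10
www.devops.tf.         A    203.0.113.10
wordpress.devops.tf.   A    203.0.113.10
nextcloud.devops.tf.   A    203.0.113.10
```

How you enter these depends on your DNS provider’s interface; the names just have to match the hostnames we’ll use in this and the next post.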

One-page HTML

The onepager is of course an easy one: it’s a single website, powered by Nginx in this example. Before I forget, two assumptions/requirements:

  • You’ll put all code in git
  • Every app gets its own repo

To avoid unnecessary delay, we’ll be doing our work as root on your droplet. Just as a refresher (from Greater git posts 1 and 2) and to be completely clear:

```shell
sudo -i
mkdir /opt/devops.tf && cd /opt/devops.tf
git config --global init.defaultBranch main
git config --global user.name "Henk Batelaan"
git config --global user.email "henk@greater.nl"
git config --global pull.rebase false
git init
```

We can create a onepager in 101 ways. I like to group my stuff, which certainly pays off when dealing with more than one container (as you will see later on):

```shell
mkdir -p web/data
vim web/data/index.html
vim web/nginx.conf
```

You can put your stunning HTML in the index.html or anything you like. I simply put ‘This is my one-page HTML!’. This post is not about creating webpages, so what more can I say?
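For reference, a minimal index.html in that spirit could look like this (just a sketch, any valid HTML will do):

```html
<!DOCTYPE html>
<html lang="en">
  <head>
    <meta charset="utf-8">
    <title>devops.tf</title>
  </head>
  <body>
    <p>This is my one-page HTML!</p>
  </body>
</html>
```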

Add an nginx.conf in the web directory:

				
					server {
    listen 80;

    root /usr/share/nginx/html;
    index index.html;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    client_max_body_size 64M;
}

				
			

As with the index.html, there are many ways to configure an Nginx server and place its configuration files. This is just one of them, and all we need quite frankly.

Finally create the docker-compose.yaml in the root working directory:

```yaml
version: "3"

services:
  devops.tf:
    container_name: devops.tf
    image: nginx:1.21.5
    restart: unless-stopped
    volumes:
      - ./web/data:/usr/share/nginx/html
      - ./web/nginx.conf:/etc/nginx/conf.d/default.conf
      - /etc/localtime:/etc/localtime:ro
    ports:
      - 8181:80
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s
```

So, this is what we’ve created:

```text
tree
.
├── docker-compose.yaml
└── web
    ├── data
    │   └── index.html
    └── nginx.conf

2 directories, 3 files
```

Spin it up:

```shell
docker compose up -d
```

This is as simple as it gets. On the host itself you can curl to localhost:8181 and you can access your website via ip_number:8181 or www.yourdomain.xyz:8181 (with the DNS records we’ve created earlier). From now on, we’ll only use the latter. Don’t worry about the 8181-port thingy, we’ll get rid of it in the next blog.

Let’s break down the compose file:

  • version. Defines the compose specification and compatibility with certain compose functionality. When you simply put this on ‘3’ like I did, it is compatible with all 3.x versions
  • services. A service definition contains configuration that is applied to each container started for that service. Next to ‘version’, ‘volumes’ and ‘networks’ it lives at the root of your compose file
  • devops.tf. The name of the service. Contains your configuration and shows up in a docker compose ps
  • container_name. The name of the container. Also shows up in a docker compose ps and in a docker container ls
  • image. The image your container will be running. Here it’s a standard Nginx image from Docker hub, but you can use other repositories as well or omit it altogether when using the build directive (more explanation later on)
  • restart. You’ve got a couple of options here from which in my opinion the unless-stopped option makes most sense, i.e., “always restart a container, unless I stop it manually”
  • volumes. Mount host accessible folders (before the : ) to specific folders in the container (after the : ). The localtime is there for host to container time synchronization
  • ports. Map host accessible ports (before the : ) to specific ports in the container (after the : )
  • healthcheck. A cool little healthcheck that you can see the result of in a docker compose ps and a docker container ls. The check is run in the container. When you’re only running one container of each service, as is default on our non-distributed system, this has no direct use unfortunately. But you could do some external scripting with it
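As a sketch of such external scripting (the container name is just an example, and this assumes a container that actually defines a healthcheck), you could poll the reported health status with docker inspect and act on it:

```shell
#!/bin/sh
# Ask Docker for the container's health status ("starting", "healthy"
# or "unhealthy") and restart the container when it is unhealthy.
name="devops.tf"  # example container name
status=$(docker inspect --format '{{.State.Health.Status}}' "$name")
if [ "$status" = "unhealthy" ]; then
  docker restart "$name"
fi
```

Run it from cron every few minutes and you have a poor man’s watchdog.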

Before moving on there is a cool alternative to this setup, which is mainly the reason I did a crash course on Dockerfiles in the last post. This alternative is to replace the image directive with the build directive:

```yaml
version: "3"
services:
  devops.tf:
    container_name: devops.tf
    build:
      context: ./web
      dockerfile: ./Dockerfile
```

This will build the image from the Dockerfile in the web directory, which we must then create of course. This web/Dockerfile can be as simple as:

```dockerfile
FROM nginx:1.21.5
```

Or add some stuff of your own, as explained in the previous post:

```dockerfile
FROM nginx:1.21.5
RUN apt update -y && apt install -y \
  curl \
  wget \
  git \
  && apt clean \
  && rm -rf /var/lib/apt/lists/* /tmp/* /var/tmp/*
```

Rebuild and recreate your container after edits. Note that docker compose restart alone is not enough here: it restarts the existing container without picking up a freshly built image, so use up -d instead:

```shell
docker compose build devops.tf
docker compose up -d devops.tf
```

This certainly makes local development more flexible and depending on your needs, this can be a very handy trick. I’ll use this approach from now on so you can have more practice with it.

WordPress website

With the WordPress site, we’ll let some containers work together and do some other new and cool stuff. This is what we’ll be creating:

```text
tree -a
.
├── .env
├── db
│   └── Dockerfile
├── docker-compose.yaml
├── php
│   ├── Dockerfile
│   ├── healthcheck.sh
│   └── php-uploads.ini
└── web
    ├── Dockerfile
    └── nginx.conf
```

The WordPress application consists of a database (wp-db), the main PHP WordPress application (wp-php) and the web frontend (wp-web).

Start here:

```shell
mkdir /opt/wordpress.devops.tf && cd /opt/wordpress.devops.tf
git init
mkdir db php web
```

Next the ‘.env’ file:

```ini
MYSQL_ROOT_PASSWORD=w4dwso39KhfghWS5so6P490jkTXutphk
MYSQL_USER=main-user
MYSQL_PASSWORD=jBets2ghnnza64QZSDwrfF4qsd33
```
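Since this file will hold credentials, add it to your .gitignore right away so it can never be committed by accident:

```shell
# keep the credentials file out of version control
echo ".env" >> .gitignore
```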
As you can see, this file holds some sensitive data. We can use these values as variables in our compose file later. The compose file gets committed to the repo; the .env file absolutely not, so it goes in your .gitignore.

The db/Dockerfile:

```dockerfile
FROM mysql:8.0.28
```

The php/Dockerfile:

```dockerfile
FROM wordpress:5.8.3-fpm-alpine
RUN apk add --no-cache fcgi
RUN echo "pm.status_path = /status" >> /usr/local/etc/php-fpm.d/zz-docker.conf
COPY ./healthcheck.sh /usr/local/bin/healthcheck
```
The php/healthcheck.sh is taken directly from this page (credits to the original author). Name it healthcheck.sh and make it executable.
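If you just want the gist of it, such a check boils down to querying php-fpm’s status endpoint over FastCGI (a sketch, assuming the fcgi package and the pm.status_path = /status setting from the Dockerfile above):

```shell
#!/bin/sh
# Query php-fpm's /status endpoint over FastCGI on the port php-fpm
# listens on; a non-zero exit tells Docker the container is unhealthy.
SCRIPT_NAME=/status \
SCRIPT_FILENAME=/status \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect 127.0.0.1:9000 || exit 1
```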

The php/php-uploads.ini:

```ini
file_uploads = On
memory_limit = 64M
upload_max_filesize = 64M
post_max_size = 64M
max_execution_time = 600
```

The web/Dockerfile:

```dockerfile
FROM nginx:1.21.5-alpine
```

The web/nginx.conf:

```nginx
server {
    listen 80;

    root /var/www/html;
    index index.php;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;

    client_max_body_size 64M;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    location ~ ^/\.user\.ini {
        deny all;
    }

    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass wp-php:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
```
Mind the fastcgi_pass wp-php:9000 here: it should point to your WordPress PHP container (for our setup it is good as-is, but I wanted to mention it in case you are changing names). The rest is just Nginx configuration parameters, which I will not dive deeper into.

Finally, the glue that binds them all together, the docker-compose.yaml in your root working directory:
```yaml
version: "3"

services:
  db:
    container_name: wp-db
    build:
      context: ./db
      dockerfile: ./Dockerfile
    restart: unless-stopped
    volumes:
      - /etc/localtime:/etc/localtime:ro
      - wp_db_vol:/var/lib/mysql
    env_file: .env
    environment:
      MYSQL_DATABASE: wordpress-db
    healthcheck:
      test: ["CMD", "mysql", "--host=wp-db", "--user=$MYSQL_USER", "--password=$MYSQL_PASSWORD", "--silent", "--execute", "SELECT 1;"]
      interval: 10s
      timeout: 2s
      retries: 2
      start_period: 10s

  php:
    container_name: wp-php
    build:
      context: ./php
      dockerfile: ./Dockerfile
    depends_on:
      - db   # the service name, not the container_name (wp-db)
    restart: unless-stopped
    volumes:
      - ./php/php-uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
      - /etc/localtime:/etc/localtime:ro
      - wp_web_vol:/var/www/html
    env_file: .env
    environment:
      WORDPRESS_DB_HOST: wp-db:3306
      WORDPRESS_DB_NAME: wordpress-db
      WORDPRESS_DB_USER: $MYSQL_USER
      WORDPRESS_DB_PASSWORD: $MYSQL_PASSWORD
    healthcheck:
      test: ["CMD", "healthcheck"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

  web:
    container_name: wp-web
    build:
      context: ./web
      dockerfile: ./Dockerfile
    depends_on:
      - php   # again the service name, not wp-php
    restart: unless-stopped
    volumes:
      - ./web/nginx.conf:/etc/nginx/conf.d/default.conf
      - /etc/localtime:/etc/localtime:ro
      - wp_web_vol:/var/www/html
    ports:
      - 8282:80
    healthcheck:
      test: ["CMD", "curl", "--fail", "http://localhost"]
      interval: 30s
      timeout: 5s
      retries: 3
      start_period: 10s

volumes:
  wp_db_vol: {}
  wp_web_vol: {}
```
Some basics we already discussed with the onepager, but let’s see what nice things we’re adding into the mix:

  • services. As you can see, we have 3 services this time (which you probably also noticed from our directory structure). Different from the onepager is that we did not give the services and the container_names the same name. This is just a personal choice
  • env_file. Points to the .env file we created earlier. The MySQL container (we’ve called it wp-db) will automatically use all key-value pairs that are in there
  • environment. All environment variables our container can use in addition to the data from the .env file. In the php service, we’re just referencing our $MYSQL variables
  • healthcheck. Different services require different healthchecks and I set them up for all our services. See the usage of the $MYSQL variables in the database healthcheck. Also, to make the php healthcheck work, a little customization was needed in the php/Dockerfile
  • depends_on. An easy and effective way to make sure services start in the correct order, i.e. the database should always start first
  • ports. See how the web service is the only one with ports exposed? The containers can talk amongst each other on a private Docker (bridge) network and only the frontend needs exposure
  • volumes. We’re not simply using local folders anymore; we’re using Docker volumes to store our data now. In the services section you can spot them immediately because they don’t start with a dot or slash (e.g., wp_web_vol:/var/www/html) and at the bottom we’re defining them with default settings (e.g., wp_web_vol: {})

Let’s fire it all up and do some checks:

```shell
docker compose up -d
watch docker compose ps
docker compose logs -f
```
When all seems ready, browse to wordpress.yourdomain.xyz:8282 and be greeted with the default WordPress installation. To celebrate, we’ll be removing everything 😉 Bring your stack down with:

```shell
docker compose down
```

Since WordPress has the nasty habit of keeping the complete URL, including the port, hardcoded in the config files and database, we need to remove the Docker volumes if we want to use it later without problems with our new Traefik setup.

```shell
docker volume ls
docker volume rm wordpressdevopstf_wp_db_vol wordpressdevopstf_wp_web_vol
```

Yes, this will remove data, but if you’ve only followed these instructions, there is nothing of importance in it yet.

In my opinion this is already pretty awesome, but next time we’ll be concluding this series with the Nextcloud installation for Dropbox-like filesharing and the long-awaited Traefik reverse proxy.
