To give you a high-level overview of what we’ll be doing: the plan is to automate the creation of a single VPS hosting multiple websites or web apps. We’re putting these sites and apps in containers and distributing the incoming requests with a cloud native reverse proxy. Since I’ll be introducing quite a lot of new technologies in this greater blog series, I’m dividing it into four posts.
Although we can set up these web services on just about any platform out there (including AWS, Azure, and GCP), I would like to do it on a Digital Ocean (DO) droplet. A droplet is just a VPS with a non-standard name. Check out my ‘Cloud technology‘ post for some background on VPSs, types of cloud, the shared responsibility model, and other useful stuff.
The Plan
We’ll be using quite a lot of technologies; some we’ve already discussed but a lot are new to this series. Let’s name them:
The plan is to use Terraform and cloud-init to get a single droplet up and running in a minute or two. We’ll automatically install the needed tools, like Docker and Docker Compose. With those tools we’ll configure and spin up as many containers as your droplet can handle, every container serving one of your websites. Traefik will be our main entry point and will take care of routing to the proper sites and of TLS termination.
Digital Ocean steps
If you have an API you can talk to, you can automate stuff. Let’s create an account if you haven’t already and get this API ready to go.
- Sign up at the DO signup page https://cloud.digitalocean.com/registrations/new
- In the left bar click API and create a new token with ‘Generate New Token’ (not a ‘new Spaces Access Key’)
- Put the token in a safe place like a password manager
- Now in the left bar go to ‘Settings’ – ‘Security’ and add an SSH key
- Paste in your public key as created in the first git post, name it and save
- With Terraform we can reference this key, or create one from a local file
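Since the key now lives in your DO account, a data source can look it up by name instead of re-uploading it from a local file. A small sketch, assuming the key was saved under the name ‘SSH Key Henk Batelaan’:

```hcl
# Look up an SSH key that already exists in the Digital Ocean account
data "digitalocean_ssh_key" "existing" {
  name = "SSH Key Henk Batelaan"
}

# Elsewhere, reference it as data.digitalocean_ssh_key.existing.fingerprint
```

Below we’ll take the other route and create the key from a local file, which keeps everything in code.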
Terraform
- You declare what you want your infrastructure to look like
- You plan and apply your configuration
- Terraform keeps the configuration it knows to be operational in its state file
- At a next plan and apply, Terraform compares your declaration (the .tf files) with the state file (.tfstate) and makes adjustments accordingly
Don’t forget to exclude your *.tfplan files from git (line 30 of the earlier provided .gitignore example).

First steps

Getting started with Terraform and git, first set up your project locally:
cd && mkdir multiweb-do && cd multiweb-do
git config --global init.defaultBranch main && git init
vim README.md
vim .gitignore
Fill the .gitignore file with the earlier provided example. Next, create the providers.tf file:
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "2.16.0"
    }
  }
}

provider "digitalocean" {
  token = var.do_token
}
Provider config like this can always be found in the Terraform Registry.
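Pinning to an exact version like 2.16.0 is the most reproducible option. If you’d rather pick up bug-fix releases automatically, a pessimistic version constraint is a common alternative (an optional tweak, not part of the original setup):

```hcl
terraform {
  required_providers {
    digitalocean = {
      source  = "digitalocean/digitalocean"
      version = "~> 2.16"  # accepts any 2.x release from 2.16 on, but not 3.0
    }
  }
}
```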
After a terraform init you will see a .terraform directory containing the downloaded providers, and a dependency lock file. Let’s get this code online at GitLab. Create a new project by following the steps at ‘Remotes’ in the first git post, then commit and push your code to GitLab:
git add .
git commit -am "First commit to get us started"
git remote add origin git@gitlab.com:iohenkies/multiweb-do.git
git push -u origin main
GitLab will detect the .tf files and, from the web interface, offer to store your Terraform state in GitLab. Totally optional, but you could give it a try.

Create a droplet
Now for the actual .tf files. I like to split them up quite a bit while maintaining readability and a logical approach. For a single droplet this isn’t too important, but know that Terraform simply reads all .tf files and doesn’t care what you put where. We start with security.tf. Here we create our SSH key at DO, to be added to our droplet later. If needed, check out SSH key creation in my first git post.
# This will create the SSH key so we can log in
resource "digitalocean_ssh_key" "henk_batelaan" {
  name       = "SSH Key Henk Batelaan"
  public_key = file("/Users/iohenkies/.ssh/id_rsa.pub")
}
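A side note: the hard-coded /Users/iohenkies path ties this file to one machine. Terraform’s pathexpand function is a more portable variant (same resource, only the path handling differs):

```hcl
resource "digitalocean_ssh_key" "henk_batelaan" {
  name       = "SSH Key Henk Batelaan"
  public_key = file(pathexpand("~/.ssh/id_rsa.pub"))  # resolves ~ to the current user's home
}
```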
Next is main.tf, where we create a resource of type digitalocean_droplet called web01 with a couple of key/value pairs, and put it in a newly created project called ‘multisite’:
# We are creating a separate project for our web01 droplet
resource "digitalocean_project" "multisite" {
  name        = "multisite"
  description = "This is our multisite project with one droplet"
  purpose     = "Web Application"
  environment = "Development"
  resources   = [digitalocean_droplet.web01.urn]
}
# Create a new Web droplet for our multisite project in the Amsterdam region
resource "digitalocean_droplet" "web01" {
  image    = "ubuntu-20-04-x64"
  name     = "web01"
  region   = var.region
  size     = var.size
  tags     = var.tags
  backups  = var.backup
  ssh_keys = [digitalocean_ssh_key.henk_batelaan.fingerprint]
}
I’m not a magician creating all this stuff out of thin air. No, we’ve got some excellent documentation. For the structure and keys, check out the provider’s docs; for the possible values, the API reference guide.
Variables
In the previous files we’ve referenced several variables in the form of var.keyname. We need to declare these values. Create a variables.tf file:
# Let's enable backups by default
variable "backup" {
  type        = bool
  default     = true
  description = "The Digital Ocean default is false; we want it to be true"
}

# Declaration of the Digital Ocean API token
variable "do_token" {
  type        = string
  description = "This is our token to authorize against the Digital Ocean API"
}

# Declaration of the region variable
variable "region" {
  type        = string
  default     = "ams3"
  description = "This is our region variable. The default is Amsterdam 3"
}

# Declaration of the size variable with a default size specified
variable "size" {
  type        = string
  default     = "s-1vcpu-1gb"
  description = "This is our size variable. The default is the smallest size possible"
}

# Default droplet tags
variable "tags" {
  type        = list(string)
  default     = ["ubuntu", "docker", "terraform"]
  description = "Just some default tags going with our droplet"
}
In main.tf we’re referencing the region, size, tags, and backups. Since we’ve specified defaults, these get filled in automatically. The do_token has no default, and for good reason: a secret should never end up in git. Provide it in one of two ways:
- Use export TF_VAR_do_token="your token here" from the command line
- Create a file terraform.tfvars with the single line do_token="your token here" (and keep it out of git)
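For completeness, a terraform.tfvars file can also override any of the declared defaults in one place. A sketch with example values (only do_token is strictly required, since everything else has a default):

```hcl
do_token = "your token here"
region   = "ams3"
size     = "s-1vcpu-1gb"
tags     = ["ubuntu", "docker", "terraform"]
```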
You can verify that the token works by talking to the DO API directly:

DIGITALOCEAN_TOKEN='insert your token'
curl -X GET \
-H "Content-Type: application/json" \
-H "Authorization: Bearer $DIGITALOCEAN_TOKEN" \
"https://api.digitalocean.com/v2/droplets?page=1&per_page=1" | jq
Outputs
The last file is outputs.tf. Here we put everything Terraform needs to print after every run, or when running the terraform output command. In this case we’ll immediately get the IP address we can use to SSH in.
output "web01_ip" {
  value = digitalocean_droplet.web01.ipv4_address
}
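Outputs aren’t limited to one; anything a resource exports can be surfaced. For instance (an optional extra, not in the original files), the droplet’s URN that we already used for the project assignment:

```hcl
output "web01_urn" {
  value = digitalocean_droplet.web01.urn
}
```

After an apply, terraform output web01_urn prints just that value.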
This gets everything ready to deploy your droplet!
Format and check your code:
terraform init
terraform fmt
terraform validate
Plan and apply your code:
terraform plan -out firstrun.tfplan
terraform apply firstrun.tfplan
Although you can omit the plan file, it’s generally good practice to use it. This run will create our resources and spit out the IP address we can connect to. An FQDN (Fully Qualified Domain Name) would be better, but I can’t be sure you have a valid domain name yet.
SSH into your shiny new droplet:
ssh root@ipaddress
You can also check the web interface for the project, droplet, backup, tags, SSH key, etc. Next, we will make a couple of important additions to our script. First destroy everything:
terraform destroy
User data
Going from such a default installation to something more to your liking is easier than you might think. I do want to point out that there are multiple strategies to set up your default server, too many to sum up, really.
In this context I would like to differentiate two strategies:
- Develop – Deploy – Configure. This is the more traditional deployment model using mutable infrastructure
- Mutable as in infrastructure you deploy and can and will change afterwards
- Starting with a distro’s base image and configuring stuff with cloud-init and/or Ansible is an example here
- Develop – Configure – Deploy. This is the deployment model that has gotten more traction in the last several years, using immutable infrastructure
- Immutable as in you deploy something and never change it after deployment
- If you do need a configuration change, you edit your base image, deploy a new server, and kill off the old server
- Creation and deployment of your own images using Packer is an example here
I will probably do a post on the second option soon (specifically Packer), but for this post I’m using the first option with the more traditional Develop – Deploy – Configure model.
So now that we have the theory out of the way, let’s get practical. We started out with the default DO Ubuntu image, and I would like to add some customization. In DO and many other providers we can use the ‘user data’, where you can use cloud-init syntax to set up your instance. We can add our config to the already created ‘digitalocean_droplet’ resource as an extra configuration parameter (pasting the full code for completeness).
# Create a new Web droplet for our multisite project in the Amsterdam region
resource "digitalocean_droplet" "web01" {
  image    = "ubuntu-20-04-x64"
  name     = "web01"
  region   = var.region
  size     = var.size
  tags     = var.tags
  backups  = var.backup
  ssh_keys = [digitalocean_ssh_key.henk_batelaan.fingerprint]

  user_data = <<EOF
#cloud-config
# Create our non-root user with the same public key we uploaded to DO
users:
  - name: cloud_deploy
    ssh_authorized_keys:
      - ${file("/Users/iohenkies/.ssh/id_rsa.pub")}
    sudo: ['ALL=(ALL) NOPASSWD:ALL']
    groups: [sudo, docker]
    shell: /bin/bash
# Add the Docker apt repository so the docker-ce packages can be installed
apt:
  sources:
    docker.list:
      source: deb [arch=amd64] https://download.docker.com/linux/ubuntu $RELEASE stable
      keyid: 9DC858229FC7DD38854AE2D88D81803C0EBFCD88
packages:
  - apt-transport-https
  - ca-certificates
  - containerd.io
  - curl
  - gnupg-agent
  - dnsutils
  - docker-ce
  - docker-ce-cli
  - git
  - jq
  - nmap
  - psmisc
  - screen
  - ufw
  - vim
package_update: true
package_upgrade: true
write_files:
  - path: /etc/ssh/sshd_config
    content: |
      PrintMotd no
      Banner none
      HostKey /etc/ssh/ssh_host_rsa_key
      HostKey /etc/ssh/ssh_host_ecdsa_key
      HostKey /etc/ssh/ssh_host_ed25519_key
      SyslogFacility AUTHPRIV
      LogLevel VERBOSE
      PermitRootLogin no
      MaxAuthTries 3
      MaxSessions 2
      AuthorizedKeysFile .ssh/authorized_keys
      PasswordAuthentication no
      ChallengeResponseAuthentication no
      GSSAPIAuthentication no
      GSSAPICleanupCredentials no
      UsePAM yes
      AllowAgentForwarding no
      AllowTcpForwarding no
      X11Forwarding no
      TCPKeepAlive no
      Compression no
      ClientAliveCountMax 2
      UseDNS no
      AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
      AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
      AcceptEnv LC_IDENTIFICATION LC_ALL LANGUAGE
      AcceptEnv XMODIFIERS
      Subsystem sftp /usr/lib/openssh/sftp-server
runcmd:
  - apt-get update
  - timedatectl set-timezone Europe/Amsterdam
  - timedatectl set-ntp on
  - ufw allow 22/tcp
  - ufw enable
  - sed -i -e '/pam_motd.so/s/^/# /g' /etc/pam.d/sshd
  - sed -i -e '/#startup_message/s/^#//' /etc/screenrc
  - reboot
EOF
}
This does all kinds of simple yet awesome stuff. Just do a new terraform plan and apply as explained earlier, and after a couple of minutes log in to your new droplet:

ssh cloud_deploy@ip_address

The droplet is done in under a minute, but remember that we’re adding all kinds of stuff with cloud-init. If you log in too soon, you will end up on a system that is half configured or not reachable at all. For this reason, give it about 10 minutes to do all its magic; on the droplet itself you can run cloud-init status to see whether it has finished.
I will explain what actually happened here next time, when we continue setting up our web server. You can probably guess most options on your own, but I will elaborate on a couple.
For now, delete your instance when you’re ready:
terraform destroy
See you next time.