A while back I did some work building a Packer configuration that creates a Hashistack (Vault, Consul and Nomad) SD card image for a Raspberry Pi. Since then I’ve used the cluster for a few different things and I’m just now starting to write up the interesting stuff.
I’ve made a couple of changes to the image, for example swapping DNSmasq for Unbound and upgrading to the latest versions of the HashiCorp tools. The procedure for creating the image and bootstrapping the cluster is the same though, so here’s the previous post and the Bitbucket repository if you’re playing along at home. The image provides a functional Vault/Consul/Nomad cluster and just requires Vault to be initialised in the first instance.
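For completeness, initialising and unsealing Vault on first boot looks roughly like this (a sketch; the address and single key share are assumptions for a home lab, not values from the image):

```shell
# Point the CLI at one of the Pi nodes (address is an assumption)
export VAULT_ADDR=http://raspberry-pi-ip:8200

# Initialise Vault; a single key share keeps a home lab simple
vault operator init -key-shares=1 -key-threshold=1

# Unseal using the key printed by the init command
vault operator unseal <unseal-key>

# Authenticate with the initial root token
vault login <root-token>
```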
I needed an ingress controller that can direct traffic to all the Nomad workloads I want to run inside the cluster, and I also need something to terminate SSL connections. Traefik does both of these things, and it supports configuration from the Consul Catalog.
job "traefik-system" {
region = "global"
datacenters = ["DC0"]
type = "system"
This declares the Traefik job. The type of “system” means this job will run on all Nomad clients.
group "traefik" {
vault {
policies = ["nomad-traefik-policy"]
}
Create the task group and assign a Vault policy to the task. This will allow the secrets required in the templates below to be read on job startup.
volume "acme_volume" {
type = "host"
source = "acme_volume"
read_only = false
}
Use the Nomad host volume called acme_volume; this is where LetsEncrypt state is kept. The host volume is pre-configured in the Hashistack SD card image, in the Nomad agent configuration.
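For reference, a host volume like this is declared in the client stanza of the Nomad agent configuration; a minimal sketch (the path here is an assumption, the image may use a different one):

```hcl
client {
  host_volume "acme_volume" {
    path      = "/opt/nomad/acme" # assumed path; check the image's Nomad config
    read_only = false
  }
}
```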
task "traefik" {
driver = "docker"
volume_mount {
volume = "acme_volume"
destination = "/data"
}
config {
image = "traefik:v2.2"
network_mode = "host"
volumes = [
"local/traefik.toml:/etc/traefik/traefik.toml",
]
}
Configuration for the Traefik Docker task. The configuration file traefik.toml is rendered in a template below and mounted within the task container on startup.
template {
data = <<EOH
AWS_ACCESS_KEY_ID = "{{ with secret "kv/data/traefik/aws" }}{{.Data.data.aws_access_key_id}}{{end}}"
AWS_SECRET_ACCESS_KEY = "{{ with secret "kv/data/traefik/aws" }}{{.Data.data.aws_secret_access_key}}{{end}}"
AWS_HOSTED_ZONE_ID = "{{ with secret "kv/data/traefik/aws" }}{{.Data.data.aws_hosted_zone_id}}{{end}}"
AWS_REGION = "{{ with secret "kv/data/traefik/aws" }}{{.Data.data.aws_region}}{{end}}"
EOH
destination = "secrets/file.env"
env = true
}
A template containing AWS access secrets for ACME. The env = true setting causes Nomad to render the template and inject the resulting environment variables into the container on startup.
template {
data = <<EOF
[entryPoints]
[entryPoints.http]
address = ":80"
[entryPoints.https]
address = ":443"
[entryPoints.traefik]
address = ":81"
[api]
dashboard = true
insecure = true
[log]
level = "DEBUG"
[certificatesResolvers.{{ with secret "kv/data/traefik/acme" }}{{.Data.data.resolver_name}}{{end}}.acme]
email = "{{ with secret "kv/data/traefik/acme" }}{{.Data.data.resolver_email}}{{end}}"
storage = "/data/acme.json"
[certificatesResolvers.{{ with secret "kv/data/traefik/acme" }}{{.Data.data.resolver_name}}{{end}}.acme.dnsChallenge]
provider = "route53"
delayBeforeCheck = 0
# Enable Consul Catalog configuration backend.
[providers.consulCatalog]
prefix = "traefik"
exposedByDefault = false
[providers.consulCatalog.endpoint]
address = "127.0.0.1:8500"
scheme = "http"
EOF
destination = "local/traefik.toml"
}
A template for the Traefik configuration. ACME settings for the domain name are retrieved from Vault by Nomad at job launch. This configuration enables Traefik’s integration with the Consul Catalog; routing configuration for individual services will come from tags in their Nomad job definitions. This file is rendered and mounted inside the container on startup.
resources {
cpu = 100
memory = 128
network {
mbits = 10
port "http" {
static = 80
}
port "https" {
static = 443
}
port "api" {
static = 81
}
}
}
Resources declaration for the Traefik task, declaring the static ports to map. The api port will run the Traefik dashboard on port 81.
service {
name = "traefik"
check {
name = "alive"
type = "tcp"
port = "http"
interval = "10s"
timeout = "2s"
}
}
Nomad service check to verify the Traefik task is running as expected. If this health check fails, the service is marked unhealthy in Consul.
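If the task should also be restarted automatically when this check fails, Nomad supports a check_restart stanza inside the check; a sketch with assumed values:

```hcl
check_restart {
  limit = 3     # restart the task after three consecutive failed checks (assumed value)
  grace = "90s" # ignore check failures for 90s after task startup (assumed value)
}
```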
Write the secrets required for the job to Vault:
vault kv write kv/traefik/aws aws_access_key_id=your_access_key aws_hosted_zone_id=your_route53_zone_id aws_region=your_aws_region aws_secret_access_key=your_secret_access_key
vault kv write kv/traefik/acme resolver_email=your_email_address resolver_name=someName
The Nomad job requires a Vault policy in order to read secrets:
path "kv/data/traefik/*" {
capabilities = ["read"]
}
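The vault-tf Terraform mentioned below presumably applies this policy with something like the Vault provider's vault_policy resource; a sketch (the resource and file names are assumptions):

```hcl
resource "vault_policy" "nomad_traefik" {
  name   = "nomad-traefik-policy"
  policy = file("${path.module}/nomad-traefik-policy.hcl")
}
```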
Run Terraform from the vault-tf directory to apply the required policies:
Run Terraform from the nomad-tf directory to deploy the Nomad job for Traefik:
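The two Terraform runs above are the standard init/apply cycle; a sketch, assuming the directory layout from the repository:

```shell
cd vault-tf
terraform init && terraform apply

cd ../nomad-tf
terraform init && terraform apply
```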
Once that is applied, head to http://any-raspberry-pi-ip:81 to see the Traefik dashboard:
Since I already have Jenkins running, I’m going to route access to it through Traefik, letting Traefik handle the SSL certificates. This is simple to achieve; I just have to alter the Jenkins Nomad job definition to add the Traefik tags:
task "jenkins-task" {
driver = "docker"
service {
port = "http"
name = "jenkins"
tags = [
"traefik.enable=true",
"traefik.http.routers.jenkins.rule=Host(`jenkins.mydomain.com`)",
"traefik.http.routers.jenkins.tls.certresolver=someName"
]
}
}
Deploy this with Terraform:
Jenkins is now accessible through Traefik: https://jenkins.mydomain.com