Intro

Recently I’ve been quite busy. I was helping with the organisation of HackConf this weekend which was both awesome and exhausting.

Therefore this blog post will cover something simpler - how I created this site. I won’t cover the creation of the actual UI - it is a statically generated website built from markdown files (you can find the sources here).

Just google for a static site generator and you’ll find plenty. I’m using Hugo and I quite like it.

Step 1: Get a domain

You could say I wasn’t particularly creative when choosing the domain name, but oh well. I’m using GoDaddy as my registrar and I’m rather happy so far. Their documentation is a bit lacking, especially when it comes to more advanced DNS needs, but overall they are fine.

Step 2: Setup DNS and run test site

Next we need a DNS A record pointing to my website’s host. I am hosting the website myself on my own infrastructure, which I may describe in another blog post.
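
Once the record is in place, a quick way to confirm it has propagated is dig (assuming the dnsutils package is installed; the address in the comment is just a placeholder):

# The answer should be the public IP of the hosting server
dig +short A viktorbarzin.me
# 203.0.113.10   <- placeholder output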

Open some firewall ports and we have a running, publicly accessible website!
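
For illustration, opening them can be as simple as allowing the two standard web ports (assuming ufw is the firewall in use - adapt for iptables or your provider's equivalent otherwise):

# Allow HTTP and HTTPS traffic through the firewall (ufw assumed)
ufw allow 80/tcp
ufw allow 443/tcp
ufw status verbose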

Done? Well, if you were making a website in 2010, then maybe yes, you’re done. In 2018, however, we have DevOps and SecOps and whatnot.

As a wannabe security consultant, I’d like my site to be secure (despite it serving just static content).

I also wanted to add a few other things - most notably HTTPS and automated deployments.

Step 3: Let’s Encrypt it

If you don’t know what the Let’s Encrypt CA does and you own a website, you should get familiar with them. The TL;DR is that they issue free SSL certificates with almost no configuration required on your part.

Since most people don’t have a PhD in Cryptography, and setting up SSL may require one, I used the certbot client. If you have a general idea of what you need to do, certbot will take care of the crypto for you. It supports many platforms and webservers and makes setting up Let’s Encrypt certificates as easy as clicking through a few prompts.
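
Spinning up a throwaway nginx container for certbot to work against can look roughly like this (the container name matches the one used later in this post; the volume path is purely illustrative):

# Run the official nginx image with a local directory mounted as the webroot
docker run -d --name nginx-blog \
    -p 80:80 -p 443:443 \
    -v /path/to/static/site:/usr/share/nginx/html \
    nginx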

All good so far - I spun up an nginx container real quick and ran the installer. The nginx image is based on a lightweight Debian 9 (at the time of writing), which is supported by certbot. The installer is quite user-friendly; however, at the end of the installation I hit an error:

Like most devs, I’m quite lazy; however, this error meant I had to do some more reading about certbot and Let’s Encrypt. Notice that the error message says something about a lack of sufficient authorization - which turned out to be absolutely wrong!

I noticed the URI it was trying to access - “/.well-known/” - so I tried opening that URL and it gave me a 404. F**K!
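
For context: certbot's HTTP-01 challenge drops a token file under /.well-known/acme-challenge/ in the webroot, which the Let's Encrypt servers then fetch over plain HTTP. A quick sanity check (the token name below is made up) looks something like this:

# Drop a dummy file in the challenge directory and fetch it from the outside
mkdir -p /usr/share/nginx/html/.well-known/acme-challenge
echo test > /usr/share/nginx/html/.well-known/acme-challenge/dummy-token
curl -i http://viktorbarzin.me/.well-known/acme-challenge/dummy-token
# A 404 here means certbot's validation will fail too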

After long hours of reading the Let’s Encrypt and certbot documentation, figuring out how SSL certificates actually work and whatnot, I finally managed to narrow down the issue. Some of the complications came from the fact that the installation runs inside a container, so I wasn’t quite sure where the issue was - was it permissions, was it an invalid config …

The IP you can see on the error page was also dodgy - why was my domain resolving to another IP? I reran the certbot client a few times and the IP seemed to change in a seemingly random fashion - sometimes it would show this weird IP and other times it would resolve to the correct one. Let’s look at my domain again then:

Aha, that looks like an issue! After some more time swearing, I looked at the DNS web panel only to find this weird record pointing to @ and resolving to PARKED. So wtf is this PARKED thing?

Obvious right?…

After removing the parked record, I reran the client again, but it failed once more:

This seemed worrying. After consulting the documentation, it turned out I had reached the rate limit for this hostname and had to wait before I could issue any new certificates for this domain…
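
As an aside, certbot's --dry-run flag is handy in this situation: it exercises the whole issuance flow against the Let's Encrypt staging environment, which has far more generous rate limits (domain shown for illustration):

# Test issuance against the staging servers instead of burning production rate limits
certbot certonly --nginx --dry-run -d viktorbarzin.me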

After waiting out the rate limit, I had some more issues with the nginx config and Docker, but I finally got the craved message:

Certbot can tweak your webserver’s configuration to redirect all requests to port 443. In the case of nginx, it is really important that you set the server_name parameter to your domain - otherwise certbot will not be able to determine which server block to alter.
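
For reference, the key part of the nginx config is the server_name directive inside the server block. A minimal sketch of writing such a config (the actual default.conf shipped with the Ansible role further down may look different):

# Minimal HTTP server block - server_name must match the domain passed to certbot
cat > default.conf <<'EOF'
server {
    listen 80;
    server_name viktorbarzin.me;
    root /usr/share/nginx/html;
}
EOF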

Finally got it working with HTTPS

Step 4: Automate deployment

So all I had to do now was automate the deployment process. At that point I had a local working copy of the project on my laptop and a separate one on the hosting server. My goal was to edit a markdown file, commit, and have the changes reflected on the website. So the workflow I came up with looked like this:

Rebuilding the static files is as easy as running the hugo command. Applying the changes to the remote server without rsync-ing tar files around, however, requires some more configuration.
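
For completeness, the rebuild step is literally one command - by default Hugo writes the generated site into the public/ directory, which is what the git hook below relies on:

# Regenerate the static site; output lands in ./public by default
hugo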

Step 4.1: Setup bare repository

So what is a bare repository? This answer explains it pretty well. Basically, a repository created with git init or git clone comes with a working directory - that is what we use while developing the project. A bare repository, on the other hand, has no working directory and is used solely for sharing - the repositories GitHub hosts are the best example.

So let’s create a directory that will hold the bare repository:

mkdir blog.git
cd blog.git
git init --bare

I also need to add this as a remote in my local working copy:

git remote add production ssh://<user>@host/path/to/bare/repo.git

Step 4.2: Git hooks

Git hooks are scripts that execute on various git events - commit, push, receive and so on. They are a wonderful feature and if you haven’t used them before, you should check them out.

Most of the time I use the post-receive hook, which triggers right after code has been pushed to the bare repository. The script I wrote checks out the code from the repository and keeps only the static files:

#!/bin/bash
# Checkout repo
git --work-tree=/ansible_files/nginx/blog --git-dir=/ansible_files/nginx/blog.git/ checkout -f
# remove all but public folder
find /ansible_files/nginx/blog -maxdepth 1 | grep -v public | grep 'blog/' | xargs rm -rf
# move the freshly generated static files into the webroot
mv /ansible_files/nginx/blog/public/* /ansible_files/nginx/blog
rmdir /ansible_files/nginx/blog/public
# reload the nginx service inside the container
docker exec nginx-blog service nginx reload
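
For the hook to actually fire, the script has to live in the bare repository’s hooks/ directory and be executable - something along these lines, assuming the script above is saved on the server as post-receive:

cp post-receive /ansible_files/nginx/blog.git/hooks/post-receive
chmod +x /ansible_files/nginx/blog.git/hooks/post-receive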

Step 5: Deploy one-liner

With this setup in place, I can push changes to the live website using this simple one-liner:

hugo && git add . && git commit -m '<commit message>' && git push production

Automate the infrastructure setup

I’d like to automate the setup process so that, if I ever have to set the entire thing up again, it will be as easy as picking the right Ansible role to run.

This would save me the pain of going through all the stuff I’ve already gone through before and wasting time not learning anything new. I’d rather spend that time tweaking a role that can be reused later on.

I like to segregate my services - each thing in its own box. So instead of setting aside an entire VM just for a single web server, I run all the stuff in containers on a dedicated container host VM.

Ansible + Docker = ♥

Here is the ansible role I’m using to build the container and set everything up:

---

- name: Setup nginx data dir
  file:
    path: "{{ nginx_data_dir_path }}"
    state: directory
    mode: 0755

- name: Setup letsencrypt data dir
  file:
    path: "{{ nginx_letsencrypt_dir_path }}"
    state: directory
    mode: 0755

- name: Copy config files over
  copy:
    src: "{{ item }}"
    dest: "{{ nginx_root_dir }}"
  with_items:
    - "{{ role_path }}/files/nginx.conf"
    - "{{ role_path }}/files/default.conf"

- name: Copy SSL data if present
  when: ssl_info_tar_name is defined
  register: ssl_data
  copy:
    src: "{{ role_path }}/files/{{ ssl_info_tar_name }}"
    dest: "{{ nginx_root_dir }}"

- name: Extract SSL archive
  when: ssl_data.changed
  unarchive:
    remote_src: yes
    src: "{{ nginx_root_dir }}/{{ ssl_info_tar_name }}"
    dest: "{{ nginx_root_dir }}/"

- name: Setup nginx container
  docker_container:
    name: "{{ nginx_container_name }}"
    image: nginx
    volumes:
      - "{{ nginx_data_dir_path }}:/usr/share/nginx/html"
      - "{{ nginx_letsencrypt_dir_path }}:/etc/letsencrypt"
      - "{{ nginx_root_dir }}/nginx.conf:/etc/nginx/nginx.conf"
      - "{{ nginx_root_dir }}/default.conf:/etc/nginx/conf.d/default.conf"
    published_ports:
      - "{{ static_ip_net_gw }}:80:80/tcp"
      - "{{ static_ip_net_gw }}:443:443/tcp"
    state: started
    restart_policy: unless-stopped
    networks:
      - name: container_network
        ipv4_address: "{{ nginx_ip }}"
    purge_networks: yes

The corresponding vars file can be found on my github. This role sets up the nginx container and, if present, will add the corresponding SSL certificate to the web server.
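
Applying the role is then just a matter of including it in a play and pointing ansible-playbook at the container host (the playbook and inventory names below are made up):

# Run the play that includes the nginx role (file names are illustrative)
ansible-playbook -i inventory/hosts.ini nginx-blog.yml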

The following one-liner can be run on a Debian system to set up certbot and request a new certificate for the provided domain.

echo 'deb http://ftp.debian.org/debian stretch-backports main' >> /etc/apt/sources.list && apt update && apt-get install -y python-certbot-nginx -t stretch-backports && certbot --nginx -d viktorbarzin.me --webroot-path /usr/share/nginx/html

I just don’t feel like messing with files in containers with ansible since it’s not as trivial and I don’t really have the need for it yet.

Conclusion

I hope you found this article useful. For me it was exhilarating messing around with Let’s Encrypt and certbot, and writing playbooks is always fun, so there’s that.