How is this blog deployed?

14.06.2025

My previous blog entry explained how this site is generated with a simple ansible setup. However, generating the site is obviously not enough on its own: it also needs to be deployed somehow. In this blog post, I'll explain my thought process behind the implementation.

Overview of the environment

This site is hosted on a server that runs caddy. All external web traffic terminates at caddy, which then routes each request based on the requested URL. One of the major features of caddy is its extremely user-friendly TLS certificate automation, which is naturally leveraged here. This particular site is by no means the only web service hosted on the server, which does have an impact on the requirements and design principles.

List of design principles

Overview of the solution

As one can probably guess from the list of design principles, the only real solution is to host the site from a container. Fortunately, this is neither complex nor difficult to implement. Since I have full control of the environment anyway, I don't have to concern myself with orchestration or registries: I can simply build the container image locally and call it a day. The only real prerequisite is having the relevant packages installed, but more on that later.

Container content

$ cat templates/Containerfile.j2
FROM fedora:42
RUN dnf update -y && dnf install -y lighttpd && dnf clean all
RUN mkdir -p /var/www/site
ADD ./configuration/lighttpd.conf /etc/
ADD site/ /var/www/site/
RUN chown -R lighttpd:lighttpd /var/www/site
USER lighttpd
EXPOSE 3000/tcp
    

As shown above, the Containerfile remains quite simple. It installs lighttpd to serve HTTP traffic, copies the locally generated site and configuration files into the container, sets the necessary ownership on the content, sets the user for the container, and finally declares the port and protocol to expose.

The base image is a fedora one mostly out of personal preference, but also because it is well supported and relatively lightweight. The eventual image size is 256 MB, which is slim enough for this purpose. For comparison, the stock nginx image weighs 197 MB. I personally value using well supported and frequently updated general-purpose images over application-specific ones. This simplifies things for me, as I can keep making the same basic assumptions about the images regardless of the eventual use case. Frequent updates also tend to mean a better information security posture, which never hurts. For example, even though there is a frequently updated lighttpd image which only weighs 84.9 MB, I have absolutely no idea about the company behind it and thus no guarantee that the image will be updated and kept small in the future.

Lighttpd is the web server of choice mostly due to its very minimal configuration requirements. For now, it is enough to just serve the traffic to caddy via plain old HTTP.

$ cat templates/lighttpd.conf.j2
server.document-root = "/var/www/site"
index-file.names = ("index.html")
server.port = 3000
    

The configuration file is naturally generated from a template with ansible, and effectively consists of just three simple lines.
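
For illustration, the task that renders it could look roughly like the following sketch; the destination path is an assumption based on the ./configuration/ directory referenced in the Containerfile above, not the actual playbook content:

# Illustrative sketch of the templating task, not the actual playbook content.
- name: Generate the lighttpd configuration
  ansible.builtin.template:
    src: lighttpd.conf.j2
    dest: configuration/lighttpd.conf
    mode: "0644"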

Managing the container

Building the container is as simple as running podman build . -t onnilampi.fi:latest, and then launching it with podman run --publish 3000:3000 localhost/onnilampi.fi:latest lighttpd -D -f /etc/lighttpd.conf. This starts the container serving traffic on port 3000, which caddy then accesses via the reverse_proxy directive whenever an external request comes in.
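
On the caddy side, the corresponding site block boils down to something like the following minimal Caddyfile sketch; the exact configuration on the server is not shown here:

# Minimal sketch of the caddy site block, not the server's actual Caddyfile.
onnilampi.fi {
    # caddy takes care of the TLS certificate automation for this hostname
    reverse_proxy localhost:3000
}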

Now, building and running the container manually with podman is of course a completely adequate way to go here. Still, I prefer to explicitly manage these kinds of things via systemd, which provides an extremely neat user experience: all I have to do is run systemctl --user start onnilampi.fi and the container is started and managed as a normal systemd unit. Building the container image still has to be done by some other means, but that could also be automated via systemd. I chose not to do so, since the image is only built after generating the site content and is thus easy to automate as part of the site generation process. More on that later.

$ cat templates/onnilampi.fi.container.j2
[Unit]
Description=Container to host onnilampi.fi

[Container]
Image=localhost/onnilampi.fi:latest
Exec=lighttpd -D -f /etc/lighttpd.conf
PublishPort=3000:3000
    

The .container unit file is a very simple one, and once again templated via ansible. Systemd uses this file to generate the actual .service file that is used to launch the container.
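
For reference, quadlet picks the .container file up from its user-level search path, so deploying it by hand would boil down to roughly the following; in practice ansible ships the file to the right place:

$ cp onnilampi.fi.container ~/.config/containers/systemd/
$ systemctl --user daemon-reload
$ systemctl --user start onnilampi.fi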

Automating all of this

Even though the setup is fairly simple, there are still several files to maintain and operations to carry out. As time progresses, this will become a very tedious thing to manage unless it is documented and preferably automated well. Luckily, ansible is a tool purpose-made for this use case, and since it is already a requirement for generating the site, I might as well use it for the rest of the tasks.

A simple playbook is used to ensure that all the requirements are installed and, provided that they are, to run the tasks that build and manage the site container, as sketched below. Additionally, some systemd configuration is done on the user level. Running all of this in systemd user mode is a good way to ensure proper isolation between the service and the system it runs on, both from a runtime and a UX perspective: every management command done via systemctl has to explicitly state the --user parameter to even be aware of the service in the first place.
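
As a rough sketch, the container-related tasks could look something like the following, assuming the containers.podman collection is installed; the task names and the build context path are illustrative rather than taken from the actual playbook:

# Illustrative sketch, not the actual playbook tasks.
- name: Build the site container image locally
  containers.podman.podman_image:
    name: onnilampi.fi
    tag: latest
    path: "{{ playbook_dir }}"   # assumed build context containing the Containerfile
    force: true                  # rebuild even if the image already exists

- name: Restart the container via its user-level systemd unit
  ansible.builtin.systemd:
    scope: user
    name: onnilampi.fi.service
    state: restarted
    daemon_reload: true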

All files are templated with ansible, even if they contain no dynamic content. This ensures that every single required file has its own ansible task to ship it to the correct location and override any manual modifications. The template module also provides a neat validation feature, which can be leveraged as needed; this is how the generated HTML files are validated.
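
The validation hook is the template module's validate parameter, which runs a command of choice against the rendered file before it is moved into place. A minimal sketch follows; the file names are assumptions and tidy is only standing in for whatever validator is actually used:

# Sketch of template validation; file names and the tidy validator are examples only.
- name: Generate and validate an HTML page
  ansible.builtin.template:
    src: index.html.j2
    dest: site/index.html
    validate: tidy -errors -quiet %s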

Conclusions

This setup provides me with a pretty simple way of maintaining the site content and configuration. After modifying the content locally and pushing it to git, the workflow goes like this (sketched as shell commands after the list):
  1. SSH into the server, enter the project directory and pull the latest content from git.
  2. Set up the python environment.
  3. Install dependencies.
  4. Generate the site with ansible-playbook generator.yml
  5. Build and deploy the container with ansible-playbook setup.yml
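
Put together, and assuming a typical venv-based python setup with a requirements file (the exact paths and commands below are assumptions for illustration), the round trip looks roughly like this:

$ cd ~/onnilampi.fi && git pull              # project path is an assumption
$ python3 -m venv .venv && . .venv/bin/activate
$ pip install -r requirements.txt            # dependency file name is an assumption
$ ansible-playbook generator.yml
$ ansible-playbook setup.yml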

In the future, it might be warranted to split the deployment tasks from the setup playbook, but right now that wouldn't bring any real benefit. RSS feed generation and mTLS configuration for inter-server communications are next on the docket, which might increase the complexity quite a lot. In addition, validation and error handling are a must as the complexity increases, but for now the coverage is adequate for my personal use.

The site content and configuration are hosted in gitlab, so feel free to take a look!
