<rss version="2.0" xmlns:atom="http://www.w3.org/2005/Atom">
<channel>
 <title>blog - onnilampi</title>
 <description>RSS feed from the blog of Onni Lampi</description>
 <link>https://onnilampi.fi/blog</link>
 <atom:link href="https://onnilampi.fi/blog/feed.xml" rel="self" type="application/rss+xml" />

  <item>
 <title>Embracing music chunking</title>
 <description>
 <![CDATA[<p>
First of all, sorry about the term "music chunking".
I tried to come up with a good translation for the Finnish word "käyrätoisto" (you know, the opposite of "suoratoisto", streaming in English).
This is the best I can do.
</p>
<p>
A few friends of mine spontaneously decided to stop using all streaming services for the entirety of February.
This covered video and music alike, but the focus kind of organically gravitated towards not streaming music.
After all, it is relatively easy to go without streaming video for a month, but losing access to services such as Spotify, Apple Music and Tidal is a major change in modern daily music consumption.
Anyway, this initiative greatly inspired me, and even though I didn't dive straight into the deep end <i>(i.e. purchase a cassette Walkman for portable use. See picture.)</i>, I was invested in the scheme at the latest when one of my friends announced that he had "refined a cosmic 80's mixtape cassette from his vinyl collection".
I had to get myself a copy of that mixtape, but first I needed equipment to play said mixtape.
</p>
<img src="https://onnilampi.fi/static/MEGA-BASS.png" alt="Sony Walkman with MEBA BASS" width="650">
<p style="text-align: center;"><a href="https://jan.systems/">A friend of mine</a> rocking his Walkman</p>
<p>
So, how does one go about purchasing equipment for playing cassettes nowadays?
There are some contemporary options available, but many of them suffer from poor build quality, steep pricing and/or bad availability.
I quickly determined that the best way is to opt for the slightly riskier option of buying one second-hand from some bloke who's been resourceful enough to post theirs for sale online.
This option is a bit riskier since any device that has to physically move something in order to operate will eventually have its drive mechanism fail.
</p>
<img src="https://onnilampi.fi/static/music-chunking-shrine.png" alt="JVC KD-D2, bunch of casettes, Fisher TAD-M77" width="650">
<p style="text-align: center;">JVC KD-D2, bunch of casettes, Fisher TAD-M77</p>
<p>
Nevertheless, I managed to hunt down a reasonably priced stereo system which came equipped with everything I needed.
The <b>Fisher TAD-M77</b> is a relatively consumer-grade piece of stereo equipment from the 1980s, and this particular unit even included a turntable for playing vinyl records, in addition to having a built-in CD player, FM radio and two cassette decks.
150 euros changed hands and I was the happy owner of a brand new (relatively speaking) stereo system.
Misfortune struck almost immediately, as the last functioning cassette deck broke before I had the chance to use it at all.
I'm relatively certain that I can fix both of those decks once I find the time, but before that I had to get myself a separate cassette deck for mixtape-playing purposes.
Two days, 60 euros and a trip to Korso later, I had one in my hands, and this <b>JVC KD-D2</b> worked flawlessly.
The nice gentleman who sold it to me was also kind enough to supply me with a bunch of random cassettes, many of which contained absolute bangers!
</p>
<p>
I purposefully made sure not to even attempt to get a setup that is somehow "pure" or "sounds optimal", as my aim is to simply gain access to music chunking with as little effort as possible.
Once my collection of physical music grows, there might be reason enough to invest in a somewhat audiophile-compliant setup, but as of now my first priority is to just start accumulating music on physical media, and have the capability to play it.
As long as I don't actively damage my physical media, I'm comfortable having it sound like dogshit for the time being.
</p>
<p>
All in all, I'm pretty happy with the fact that I finally took the plunge back into the world of physical media.
Time will tell how much I'll invest my time and money into this new hobby, but knowing me and accounting for the fact that I already have been looking for both outer and inner sleeves for my vinyl records, it is a fair assumption that you'll see more of this in this blog in the years to come.]]>
 </description>
 <link>https://onnilampi.fi/blog/embracing-music-chunking</link>
 <pubDate>Thu, 5 Mar 2026 22:00:00 +0200</pubDate>
 <guid isPermaLink="false">8107fd72-2ee1-4125-ab3a-e6aec85317f0</guid>
 </item>
  <item>
 <title>Learnings from six months of PGP email usage</title>
 <description>
 <![CDATA[<p>
As I laid out in this <a href="https://onnilampi.fi/blog/my-pgp-infra">previous blog post</a>, I recently opted to set up a completely independent and self-managed PGP infrastructure to replace the previous one managed by Proton Mail.
So far, the thing has worked surprisingly well, and I've yet to figure out a reason not to keep maintaining this setup.
I recently had a bit of a stumble though, and figured it would be worth a short post here.
</p>

<p>
Even though my public PGP key is primarily used for email, there is technically nothing that prevents people from using it for other PGP-related duties.
This isn't really a problem, but recently I ran into some issues with decryption when a friend of mine sent me a file encrypted with my public PGP key.
It turned out that there is no built-in way to have Thunderbird decrypt attachments, even if it knows and fully controls the encryption key.
I had to resort to exporting the key from Thunderbird, and using GnuPG to decrypt the file, which wasn't exactly difficult, but it was a bit annoying to have to spend an hour trying to figure that out.
Thunderbird wasn't exactly helpful in its error messages.
<b>The lesson here is that you can't purely trust Thunderbird to handle all the PGP keys for you.</b>
</p>
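<p>
For future reference, the working recipe boiled down to roughly the following (filenames here are just placeholders): back up the secret key from Thunderbird's key management UI, protect the export with a passphrase of your own choosing, and then hand the rest over to GnuPG.
</p>
<pre>
$ gpg --import exported-secret-key.asc     # key backed up from Thunderbird
$ gpg --decrypt received-file.gpg > received-file
</pre>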

<p>
I even tried to simply point GnuPG to the Thunderbird directory where the keys are stored, only to be faced with a passphrase prompt.
This is to be expected, apart from the fact that none of my passphrases used with Thunderbird worked.
It turned out that this is actually a security feature within Thunderbird, explicitly designed to prevent exactly this kind of access.
Thunderbird protects each key it creates with a random passphrase, which is not accessible to the user, even if they know the Thunderbird master password.
This means that even if some malicious actor gains access to the keys, they are useless as is without the random passphrase.
Kinda neat, I guess.
This was why I had to export the keys, since exporting allowed me to assign them a known passphrase.
In the end everything worked out fine.
<b>The lesson in short is that if you lose your Thunderbird user profile, the keys themselves are useless even if you have them backed up.</b>  
</p> ]]>
 </description>
 <link>https://onnilampi.fi/blog/6-months-of-pgp-email</link>
 <pubDate>Fri, 16 Jan 2026 21:15:00 +0200</pubDate>
 <guid isPermaLink="false">1b129a87-bb85-45ed-8f8b-73ab0c9cfda1</guid>
 </item>
  <item>
 <title>Plans for Summer 2026</title>
 <description>
 <![CDATA[<p>As of writing this blog post, Summer 2026 is still over 6 months away.
Someone might think it's too early to commit to anything, but I disagree.
The earlier the commit, the better the outcome when it comes to holiday planning.
</p>

<p>I'm in an extremely privileged position to have the opportunity of taking 8 weeks of paid holiday per year.
Since three of those weeks can't be carried over from one year to the next, I need to carefully plan the usage in order to avoid having to spend them when I don't want to.
In addition to these, I usually have a bunch of flex-time hours from working long days to spend as well.
In practice, this has amounted to around 9 weeks of paid holiday annually, and allows me to have a long holiday during the summer months, while leaving a bunch of weeks to spend in the winter as well.
</p>

<p>In July 2025, I was planning to hike across the <a href="https://www.luontoon.fi/fi/kohteet/muotkatunturin-eramaa">Muotkatunturi wilderness area</a>, but had to adjust my route somewhat due to the extreme heat.
</p>

<img src="https://onnilampi.fi/static/muotkatunturi2025_temperature.png" alt="Muotkatunturi temperatures in 2025" width="650">
<p>
I still had a great time, but the fact that I didn't manage to hike across it kinda bugs me.
Therefore in 2026, I shall attempt this again, this time around a month earlier to avoid the scorching July temperatures.
While I'm at it, might as well attend the <a href="https://msfilmfestival.fi/">Midnight Sun Film Festival</a> before the actual hike.
In fact, I already booked travel for that.
I attended the festival in 2024 and can only recommend it to any fan of cinema! It is a unique experience in every positive sense of the term.
</p>

<p>
So, the preliminary plan is clear:
</p>
<ul>
<li>Midnight Sun Film Festival <b>June 10th - 14th</b></li>
<li>Hiking in Muotkatunturi from Muotkanruoktu to Karigasniemi <b>June 14th - ?</b></li>
<li>????</li>
<li>Profit</li>
</ul>

<img src="https://onnilampi.fi/static/leg1_2026.png" alt="Muotkatunturi route 2026" width="650">

<p>Looking at the plan, ambition kind of creeps up on me.
I've been pondering the possibility of extending my hike all the way up to Utsjoki, across the Paistunturi wilderness area.
This requires careful planning though, since I don't really want to carry over two weeks' worth of food with me unless that's absolutely necessary.
Last year my daily rations including fuel and whatnot weighed around 850 grams, which does add up quite quickly.
The good thing is that by making a pit stop in Karigasniemi, I could either buy more rations there, or even mail pre-made ones ahead to await pick-up.
This way, I'd only need to get more fuel and maybe some small things from Karigasniemi.
This plan would come with the benefit of being able to make the decision to hike across the <a href="https://www.luontoon.fi/en/destinations/kevo-strict-nature-reserve">Kevo Strict Nature Reserve</a> at the last possible moment in Karigasniemi.
The obvious disadvantage, however, is that I could not book the return trip well in advance, and would most likely be forced to pay a higher fare. In addition, if I ended up not continuing, the cost and trouble of mailing rations to Karigasniemi would be wasted as well.
</p>

<img src="https://onnilampi.fi/static/leg2_2026.png" alt="Potential longer extra leg after Muotkatunturi 2026" width="650">

<p>Regardless, the plans are shaping up to look like taking entire June off at least, and most likely a few weeks from July as well.
I guess the dream would be to spend most of June and the entirety of July on holiday, but that might not be possible for understandable business reasons at my employer.
</p>

<p>I'll try to update this blog whenever my plans become more concrete.
Needless to say, I'm already quite hyped!
</p>]]>
 </description>
 <link>https://onnilampi.fi/blog/summer-2026-plans</link>
 <pubDate>Sun, 23 Nov 2025 20:06:00 +0200</pubDate>
 <guid isPermaLink="false">b5bfeb5d-5ff7-45dc-88a8-d173d47ecd97</guid>
 </item>
  <item>
 <title>Overview of my PGP key infrastructure</title>
 <description>
 <![CDATA[<p>I recently added a new page for this site: <a href="https://onnilampi.fi/keys">onnilampi.fi/keys</a>.
The page contains a list of all PGP keys I've used in the past, as well as the ones currently being used by me.
Most notably, one of the old keys is the one I used with Proton Mail for almost 10 years, which I recently retired as I moved away from using Proton Mail as my "main" email provider.
The move away from Proton Mail was not directly related to PGP, but was mostly driven by an unsatisfactory UX that's been bugging me for a while.
This being said, I will keep using Proton Mail for more anonymous communications in the future as well, as I'm very happy about the service in general.
It just didn't really fit the bill for my personal preference of Thunderbird-based email.
</p>

<p>Anyway, back to the topic of PGP keys.
As I like to do, I started this small project by listing a bunch of requirements I wanted to fulfill:</p>

<ul>
    <li>Once a key is generated, it stays in place and is never moved as a part of normal operation.</li>
    <li>I want to rotate the keys often as a part of normal operations. Key rotation should therefore be easy.</li>
    <li>People who are interested in obtaining my public key need to have readily available access to it via multiple means.</li>
    <li>Preferably, all keys are explicitly revoked when retired.</li>
    <li>Generated keys need to be easily transportable and operable with multiple different tools to avoid vendor lock.</li>
</ul>

<p>Luckily, the venerable Thunderbird is nowadays equipped with pretty much everything I need to achieve those goals.
Whatever Thunderbird doesn't handle automatically, I'd pretty much need to cover with some manual process anyway.
As icing on the cake, Thunderbird stores the keys in a format that's directly usable by GnuPG.
All I have to do is declare the profile directory as the GnuPG home directory: <code>gpg --homedir ~/.thunderbird/profile_dir_name</code> and everything works seamlessly.
Now, I don't really use GnuPG to manage the email keys (more on that later), but the compatibility is handy for exporting, importing and modifying the keys, if necessary.
</p>
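<p>For example, one-off operations can be run directly against the Thunderbird keyring like this (the profile directory name naturally varies per installation):</p>
<pre>
$ gpg --homedir ~/.thunderbird/profile_dir_name --list-keys
$ gpg --homedir ~/.thunderbird/profile_dir_name --export --armor > all-public-keys.asc
</pre>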

<p>The lifecycle of a PGP key looks roughly like this in my setup:</p>

<ol>
    <li>A key is generated with Thunderbird with an expiration time of a year or two.</li>
    <li>The newly generated key is shipped to <a href="https://keys.openpgp.org">keys.openpgp.org</a> from the Thunderbird UI.</li>
    <li>I receive a link via email that's used to confirm the pairing between the email address in question, and that particular key. This immediately ensures that the key is available from that service by simply querying my email address.</li>
    <li>The key is actively used, and its public part is added as an attachment to all outgoing emails. Also, all outgoing email is cryptographically signed with the corresponding private part of the key.</li>
    <li>Once the key expiration time approaches, the key is revoked. Revoking is the only situation where the key is allowed to be moved, for example in a situation where the OS is re-installed.</li>
    <li>A new key is generated, and the old, now revoked, key becomes part of the legacy keys that are shipped alongside the new public key.</li>
    <li>The private part of the old key is left in place untouched, and eventually deleted after some time.</li>
</ol>
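<p>For reference, the rough plain-GnuPG equivalents of some of those steps would look something like the sketch below. The user ID and key ID here are placeholders, and in practice all of this happens through the Thunderbird UI.</p>
<pre>
$ gpg --quick-generate-key "Onni Lampi &lt;user@example.com&gt;" default default 2y   # step 1
$ gpg --export --armor user@example.com > public-key.asc       # public part shipped with outgoing mail
$ gpg --keyserver hkps://keys.openpgp.org --send-keys KEYID    # step 2
$ gpg --gen-revoke KEYID > revocation.asc                      # step 5
</pre>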

<p>This approach is practically 100% driven by the sublimely good Thunderbird UI, which is able to facilitate all of the mentioned steps automatically.
Only in a situation where I permanently lose access to the keys unexpectedly do I need to accept that I can't explicitly revoke a key.
Even in that situation, I'm still able to just start over, import all the public keys I had from a backup (which doesn't contain the private keys), and upload the newly created key to the keys.openpgp.org portal.
I do have to accept the fact that there is technically still a valid key out there, but a relatively short key expiration interval should limit that adequately.
I might eventually shorten that interval to something like six months, but that remains to be seen.
</p>

<p>In summary, I'm nowadays completely independent of any individual email provider when it comes to encrypting my emails, which is cumbersome but also kinda neat.
</p>]]>
 </description>
 <link>https://onnilampi.fi/blog/my-pgp-infra</link>
 <pubDate>Sat, 12 Jul 2025 21:39:28 +0300</pubDate>
 <guid isPermaLink="false">4d194cd1-2a73-4484-8e1e-63b489a819ea</guid>
 </item>
  <item>
 <title>Site tooling is now complete</title>
 <description>
 <![CDATA[<p>In the last three entries in this blog I've explained the thought process behind developing the tooling to build this static site
<a href="https://onnilampi.fi/blog/blog-generation">[1]</a>
<a href="https://onnilampi.fi/blog/blog-deployment">[2]</a>
<a href="https://onnilampi.fi/blog/rss-feed-available">[3]</a>.
The tooling is now considered complete, since all of the original design principles are met:
</p>

<ul>
    <li>Content will be created exclusively by me and thus, no CMS is required.</li>
        <ul>
        <li>Achieved with ease, all content is generated based on static files.</li>
        </ul>
    <li>Content is generated fairly rarely.</li>
        <ul>
        <li>This remains true. Content is relatively easy to create by just copy-pasting a file to the correct folder and adding an entry to <code>vars/posts.yml</code>.</li>
        </ul>
    <li>Content will mostly consist of text.</li>
        <ul>
        <li>The tooling is very much geared towards templating text files, and non-text files are only supported by mindlessly copying them from a folder to another.</li>
        </ul>
    <li>The content needs to be easily stored in git.</li>
        <ul>
        <li>This works out pretty well, although serving large files would require shipping them via git, which might not be that convenient.</li>
        </ul>
    <li>Ansible is neat.</li>
        <ul>
        <li>It sure is!</li>
        </ul>
    <li>The content needs to live inside a simple directory structure in order to facilitate serving it from a static webroot.</li>
        <ul>
        <li>This is not leveraged when storing the live site, as it's served from a container anyway. The container does serve it from a static webroot though.</li>
        </ul>
    <li>Successfully generating the site needs to be a sign of everything at least being not horribly broken.</li>
        <ul>
        <li>In order to achieve this, a few simple ansible modules had to be written. The end result is that the tooling is able to start by validating the variables used for generating the site, and finish by validating a few things about the generator output. Thus, when <code>generator.yml</code> reports a successful run, a very strong assumption can be made about everything being in order.</li>
        </ul>
    <li>Components of the pages, such as the footer, need to be shared by default.</li>
        <ul>
        <li>This is surprisingly easy to do, although I originally kind of overdid the shared templating. Now templates are primarily leveraged in the blog posts, and other pages are assumed to be a bit more "manual" to edit.</li>
        </ul>
    <li>As the server-wide caddy configuration covers several other independent applications, the deployment method can't require changes to this configuration or restarting the service upon each individual site deployment.</li>
        <ul>
        <li>This is achieved by simply managing the site container with systemd. Deployment of the site is as simple as pulling the latest changes from git, and running two ansible-playbooks.</li>
        </ul>
    <li>The deployment must be completable without any elevated privileges.</li>
        <ul>
        <li>Systemd user mode enables this pretty neatly.</li>
        </ul>
    <li><a href="https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html">Systemd+podman</a> kicks ass and should be leveraged here if possible.</li>
        <ul>
        <li>This is indeed leveraged, and it works flawlessly.</li>
        </ul>
    <li>The deployment artifact utilized by caddy needs to be immutable and all modifications to the site need to generate a new artifact instead of modifying an existing one.</li>
        <ul>
        <li>The entire site is indeed running in a container, and all changes warrant a new image to be created. No information is ever injected into the image after its generation.</li>
        </ul>
</ul>

<h2>Various remarks about the implementation</h2>

<h3>RSS feed generation</h3>

<p>In order to provide a neat interface to read the blog, an RSS feed is required.
I made the decision to exclusively leverage the same data and files that are used to generate the site itself.
This makes it so that the feed comes "for free", as long as there is a single static template file that gets rendered into feed.xml.
</p>
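<p>Stripped down to its bare bones, such a template looks roughly like the sketch below. The field names here are made up for illustration; the real structure lives in <code>vars/posts.yml</code>.</p>
<pre>
$ cat templates/feed.xml.j2   # simplified sketch
&lt;rss version="2.0"&gt;
&lt;channel&gt;
{% for path, post in posts.items() %}
  &lt;item&gt;
    &lt;title&gt;{{ post.title }}&lt;/title&gt;
    &lt;link&gt;{{ base_url }}/blog/{{ path }}&lt;/link&gt;
    &lt;guid isPermaLink="false"&gt;{{ post.guid }}&lt;/guid&gt;
    &lt;pubDate&gt;{{ post.date }}&lt;/pubDate&gt;
  &lt;/item&gt;
{% endfor %}
&lt;/channel&gt;
&lt;/rss&gt;
</pre>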
<p>The RSS feed did however trigger the need for being more strict on the data organization, and prompted a complete refactoring of the site templates.
This proved useful very quickly, and paid itself back by enabling a pretty simple data validation setup.
</p> 

<h3>Site and data validation</h3>
<p>Initially, the setup simply ensured that all generated HTML was valid.
This worked well, but had a few shortcomings: Firstly, it required manually updating the same information in several places, which was prone to manual errors, as listed in the <a href="https://onnilampi.fi/blog/blog-generation">blog post</a> about the site generator. Secondly, the HTML validation doesn't cover validating the generated RSS feed.
</p>
<p>As all the information about the posts was eventually consolidated into a simple dictionary, the input data became straightforward to validate with a few small ansible modules.
Practically all there is to do is check that the relevant values are unique.
Additionally, each individual post path is checked to actually exist in the repository.
After the site has been generated, the feed timestamps and directory structure of the site are checked.
The workflow of adding a blog post is as simple as first appending the new post details to the dictionary, and then adding the actual content to a new file.
</p> 
<p>To conclude, a successful generation guarantees that:</p>
<ul>
    <li>Blog post GUIDs are unique.</li>
    <li>Blog post links to be generated are unique.</li>
    <li>Blog posts have a separate, unique file that stores the post content.</li>
    <li>All subpages in the site contain some content, such as a dummy index.html.</li>
    <li>Timestamps in the RSS feed are in US English to appease all possible feed readers.</li>
</ul>
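<p>As a rough illustration of what those checks amount to, a uniqueness check is essentially just an assertion of this shape. The actual implementation lives in the purpose-made modules, and the <code>posts</code> structure is simplified here.</p>
<pre>
- name: Check that blog post GUIDs are unique
  ansible.builtin.assert:
    that:
      - posts.values() | map(attribute='guid') | list | unique | length == posts | length
    fail_msg: "Duplicate GUIDs found in vars/posts.yml"
</pre>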

<h3>Thank you lighttpd 2.6.2025 - 21.6.2025</h3>
<p><a href="https://www.lighttpd.net">Lighttpd</a> is replaced with <a href="https://caddyserver.com">caddy</a>.
This was initially prompted by images not being served correctly, and debugging the lighttpd configuration for that proved a bit cumbersome.
The lighttpd-based setup otherwise worked well, but it is a bit easier to use a well-maintained and well-documented server in the container.
</p>

<h3>Hidden blog posts are supported</h3>
<p>Initially, a hidden post was generated by simply omitting the post link from the blog index.
This proved cumbersome as the blog index generation was automated alongside the RSS feed, so the post inclusion is now simply controlled by a post-specific variable, which has to be explicitly set for each post.
Simply declaring the post to have the <code>public: false</code> attribute will exclude it from the blog index and the RSS feed.
</p>
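<p>For illustration, a hidden post entry could look something like this (the field names are simplified, the real schema lives in <code>vars/posts.yml</code>):</p>
<pre>
hidden-test-post:
  title: "A hidden test post"
  guid: "00000000-0000-0000-0000-000000000000"
  date: "Sat, 21 Jun 2025 12:00:00 +0300"
  public: false
</pre>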

<h3>The site only contains absolute paths</h3>
<p>A simple variable in the generator playbook determines the base URL of the site.
For local development, it is convenient to be able to set the URL to simply be <code>http://localhost:3000</code>, and automatically update everywhere with <code>ansible-playbook generator.yml --extra-vars "base_url=http://localhost:3000"</code>.
The default value remains the public URL of the site.
</p>

<h2>In conclusion</h2>
<p>I really liked this project, and am pretty happy about the eventual outcome.
I learned quite a lot of small morsels of new information, and managed to complete this project in less than three weeks.
</p>
<p>I most likely won't really add new features to the site generator, but I might do several non-functional refactorings and other similar additions, such as moving all the modules to a new purpose-made ansible collection.
</p>]]>
 </description>
 <link>https://onnilampi.fi/blog/completed-site-tooling</link>
 <pubDate>Sat, 21 Jun 2025 21:32:28 +0300</pubDate>
 <guid isPermaLink="false">580e8a74-e1ac-46db-b91f-2da6c633ace1</guid>
 </item>
  <item>
 <title>RSS feed available for the blog</title>
 <description>
 <![CDATA[        <p>Went through the trouble of adding a template file that generates an <a href="https://onnilampi.fi/blog/feed.xml">RSS feed</a> for the site.
        This feature also required some other changes, but mostly it just took a while to wrap my head around the <a href="https://www.rssboard.org/rss-specification">RSS spec</a>, which in itself is pretty simple; getting the <a href="http://www.rssboard.org/rss-validator/check.cgi?url=https%3A//onnilampi.fi/blog/feed.xml">validator</a> to be happy was the tricky part.
        For example, it turns out that RSS readers favor GUID entries for the posts, which I decided to generate manually with <code>uuidgen</code> as part of adding each new entry.</p>

        <p>Additional improvement is that now all the metadata about the blog posts lives in a <a href="https://gitlab.com/omnez/onnilampi.fi/-/blob/main/vars/posts.yml">variable file</a>.
        This same file is used to generate the blog index, which is neat.
        It is no longer possible to forget to add an entry to the index.</p>
        <p>In addition, serving static images is currently broken for whatever reason; I'll debug that someday when I have time.</p>]]>
 </description>
 <link>https://onnilampi.fi/blog/rss-feed-available</link>
 <pubDate>Sat, 14 Jun 2025 15:06:00 +0300</pubDate>
 <guid isPermaLink="false">0e059081-e628-41a1-ab21-a6e336ecdbf9</guid>
 </item>
  <item>
 <title>How is this blog deployed?</title>
 <description>
 <![CDATA[
    <p>My <a href="https://onnilampi.fi/blog/blog-generation/">previous blog entry</a> explained how this site is generated with a simple ansible setup.
    However, simply generating the site is obviously not enough, as it needs to also be deployed via some method.
    In this blog post, I'll explain my thought process behind the implementation.</p>


    <h2>Overview of the environment</h2>
    <p>This site is hosted on a server that runs <a href="https://caddyserver.com/">caddy</a>.
    All external web traffic terminates to caddy, which then does the needful based on the requested URL.
    One of the major features of caddy is the extremely user-friendly TLS certificate automation, which is naturally leveraged here.
    This particular site is by no means the only web service hosted on the server, which does have an impact on the requirements and design principles.</p>

    <h2>List of design principles</h2>
    <ul>
        <li>As the server-wide caddy configuration covers several other independent applications, the deployment method can't require changes to this configuration or restarting the service upon each individual site deployment.</li>
        <li>The deployment must be completable without any elevated privileges.</li>
        <li><a href="https://docs.podman.io/en/latest/markdown/podman-systemd.unit.5.html">Systemd+podman</a> kicks ass and should be leveraged here if possible.</li>
        <li>The deployment artifact utilized by caddy needs to be immutable and all modifications to the site need to generate a new artifact instead of modifying an existing one.</li>
    </ul>

    <h2>Overview of the solution</h2>
    <p>As one can probably guess from the list of design principles, the only real solution is to host the site from a container.
    Fortunately this is not particularly complex, and is actually pretty easy to implement.
    Since I have full control of the environment anyway, I don't have to concern myself with orchestration or registries.
    I can simply build the container locally and call it a day.
    The only real prerequisite is having the relevant packages installed; more on that later.
    </p>
    <h3>Container content</h3>
    <pre>
$ cat templates/Containerfile.j2
FROM fedora:42
RUN dnf update -y && dnf install -y lighttpd && dnf clean all
RUN mkdir -p /var/www/site
ADD ./configuration/lighttpd.conf /etc/
ADD site/ /var/www/site/
RUN chown -R lighttpd:lighttpd /var/www/site
USER lighttpd
EXPOSE 3000/tcp
    </pre>
    <p>As shown above, the Containerfile remains quite simple.
    It just installs <a href="https://www.lighttpd.net/">lighttpd</a> to serve HTTP traffic, copies the locally generated site and configuration files to the container, assigns the content with necessary permissions, sets the user for the container, and finally declares the port and protocol to expose.</p>
    <p>The base image was chosen to be a Fedora one mostly out of personal preference, but also due to it being well supported and relatively lightweight.
    The eventual image size is 256 MB, which is slim enough for this purpose.
    For example, the stock nginx image weighs 197 MB.
    I personally value using well supported and often updated general-purpose images over application specific ones.
    This simplifies things for me, as I can keep making the same basic assumptions about the images regardless of the eventual use case.
    Frequent updates often guarantee a better information security posture, which never hurts.
    For comparison, even though there is a frequently updated <a href="https://hub.docker.com/u/tgbyte">lighttpd image</a> which only weighs 84.9 MB, I have absolutely no idea about the <a href="https://www.tgbyte.de/">company</a> behind it and thus, no guarantee that the image will be updated and kept small in the future.</p>

    <p>Lighttpd is the web server of choice mostly due to its very minimal configuration requirements.
    For now, it is enough to just serve the traffic via plain old HTTP for caddy.
    </p>
    <pre>
$ cat templates/lighttpd.conf.j2
server.document-root = "/var/www/site"
index-file.names = ("index.html")
server.port = 3000
    </pre>
    <p>The configuration file is naturally generated from a template with ansible, and effectively only consists of three simple lines of configuration.</p>
    <h3>Managing the container</h3>
    <p>Building the container is as simple as running <code>podman build . -t onnilampi.fi:latest</code>, and then launching it with <code>podman run --publish 3000:3000 localhost/onnilampi.fi:latest lighttpd -D -f /etc/lighttpd.conf</code>.
    This starts the container to serve traffic from port 3000, which is accessed by caddy upon an external request with the <a href="https://caddyserver.com/docs/caddyfile/directives/reverse_proxy">reverse_proxy</a> directive.</p>

    <p>Now, building and running the container manually with podman is of course a completely adequate way to go here.
    Still, I prefer to explicitly manage these kinds of things via systemd, which provides an extremely neat user experience: all I have to do is run <code>systemctl --user start onnilampi.fi</code> and the container is started and managed as a normal systemd unit.
    Building the container image still has to be done by some other means, but that could also be automated to happen via systemd.
    I chose not to do so, since the image is only built after generating the site content and thus, easily automated as a part of the site generation process.
    More on that later.</p>
    <pre>
$ cat templates/onnilampi.fi.container.j2
[Unit]
Description=Container to host onnilampi.fi

[Container]
Image=localhost/onnilampi.fi:latest
Exec=lighttpd -D -f /etc/lighttpd.conf
PublishPort=3000:3000
    </pre>
    <p>The <code>.container</code> unit file is a very simple one, and once again templated via ansible.
    Systemd uses this file to generate the actual <code>.service</code> file that's used to launch the container.</p>
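    <p>For reference, taking the unit into use manually boils down to roughly the following; in practice the setup playbook takes care of these steps.</p>
    <pre>
$ cp onnilampi.fi.container ~/.config/containers/systemd/
$ systemctl --user daemon-reload
$ systemctl --user start onnilampi.fi
$ systemctl --user status onnilampi.fi
    </pre>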

    <h3>Automating all of this</h3>
    <p>Even though the setup is fairly simple, there are still several files to maintain, and operations to carry out.
    As time progresses, this will become a very tedious thing to manage unless it is documented and preferably automated well.
    Luckily, ansible is a tool purpose-made for this use case, and since it is already a requirement for generating the site, might as well use it for the rest of the tasks.</p>

    <p>A simple <a href="https://gitlab.com/omnez/onnilampi.fi/-/blob/main/setup.yml?ref_type=heads">playbook</a> is used to ensure that all the requirements are installed, and provided that they are, run some tasks to build and manage the site container.
    Additionally, some systemd configuration is done on user level.
    Running all of this in systemd user mode is a very good way to ensure proper isolation between the service and the system it runs on both from runtime and UX perspective: this ensures that all management commands done via <code>systemctl</code> need to explicitly state the <code>--user</code> parameter in order to be aware of the service and even work in the first place.</p>

    <p>All files are templated with ansible, even if they have no dynamic content in them.
    This is done in order to ensure that every single required file has its own ansible task to ship it into the correct location and override any manual modifications.
    The templating module also provides a neat <a href="https://docs.ansible.com/ansible/latest/collections/ansible/builtin/template_module.html#parameter-validate">validation</a> feature, which can be leveraged as per need.
    This is how the generated HTML files are validated.</p>

    <h2>Conclusions</h2>
    <p>This setup provides me with a pretty simple way of maintaining the site content and configuration with a workflow like this, after modifying the content locally and pushing it into git:</p>
    <ol start=0>
        <li>SSH into the server, enter the project directory and pull the latest content from git.</li>
        <li>Set up the Python environment.</li>
        <li>Install dependencies:
            <ul>
                <li><code>pip install -r requirements.txt</code></li>
                <li><code>ansible-galaxy install -r requirements.yml</code></li>
            </ul></li>
            <li>Generate the site with <code>ansible-playbook generator.yml</code></li>
            <li>Build and deploy the container with <code>ansible-playbook setup.yml</code></li>
    </ol>
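    <p>In shell terms the workflow above looks roughly like this (the exact host, directories and Python environment setup naturally vary):</p>
    <pre>
$ ssh user@server
$ cd onnilampi.fi
$ git pull
$ python3 -m venv .venv
$ source .venv/bin/activate
$ pip install -r requirements.txt
$ ansible-galaxy install -r requirements.yml
$ ansible-playbook generator.yml
$ ansible-playbook setup.yml
    </pre>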
    <p>In the future, it might be warranted to split the deployment tasks from the setup playbook, but right now it wouldn't really bring any real benefit.
    RSS feed generation and mTLS configuration for inter-server communications are next on the docket, which might increase the complexity quite a lot.
    In addition, validation and error handling are a must as the complexity increases, but for now the coverage is adequate for my personal use.
    </p> 
    <p>The site content and configuration is hosted in <a href="https://gitlab.com/omnez/onnilampi.fi">gitlab</a>, feel free to take a look!</p>
]]>
 </description>
 <link>https://onnilampi.fi/blog/blog-deployment</link>
 <pubDate>Sat, 14 Jun 2025 13:00:00 +0300</pubDate>
 <guid isPermaLink="false">dbbbc7c1-b15e-4cab-a180-2f652a187515</guid>
 </item>
  <item>
 <title>How is this blog generated?</title>
 <description>
 <![CDATA[        <p>As I outlined in the <a href="https://onnilampi.fi/blog/first-entry">first entry</a>, this entire site is generated using ansible templates and a simple playbook.
        For now, it fits the purpose surprisingly well.
        Sure, I can't neatly write just a bunch of .md-files and sprinkle some metadata on top, but I don't think I'd need to.
        To be honest, I'll most likely update this blog every few months at most, so the small burden of having to repeat myself and essentially write these pages in pure HTML is not that significant.</p>
        <p>In this blog post, I'll go through the setup and list down the main considerations behind it.
        I'll explain the deployment model and automation in a later post.</p>

        <h2>List of design principles</h2>
        <ul>
        <li>Content will be created exclusively by me and thus, no CMS is required.</li>
        <li>Content is generated fairly rarely.</li>
        <li>Content will mostly consist of text.</li>
        <li>The content needs to be easily stored in git.</li>
        <li>Ansible is neat.</li>
        <ul>
        <li>Who said this is a non-opinionated design? ;)</li>
        </ul>
        <li>The content needs to live inside a simple directory structure in order to facilitate serving it from a static webroot.</li>
        <li>Successfully generating the site needs to be a sign of everything at least being not horribly broken.</li>
        <li>Components of the pages, such as the footer, need to be shared by default.</li>
        </ul>

        <h2>How are these principles implemented?</h2>
        <p>Simply put, all content is stored and versioned as templates in a directory structure like this:</p>

        <pre>
$ tree templates
templates
|-- blog
|   |-- blog-generation.j2
|   |-- first-entry.j2
|-- blog.html.j2
|-- footer.j2
|-- header.j2
|-- index.html.j2
|-- list.html.j2
|-- onnilampi.fi.container.j2
        </pre>
        <p>The template directory also contains a template for managing the site container, but as said, more on that in a later post.</p>

        <p>Templates are utilized by a simple Ansible playbook, which first generates a directory in the site hierarchy, and then places an index-file in that directory. Here is an example of how the blog posts are handled:</p>
        <pre>
        
- name: Generate directories for blog entries
  ansible.builtin.file:
    path: "./site/blog/{{ item }}"
    state: directory
    mode: '0755'
  with_items:
    - first-entry
    - blog-generation

- name: Generate blog entry
  ansible.builtin.template:
    src: "blog/{{ item }}.j2"
    dest: "./site/blog/{{ item }}/index.html"
    mode: '0644'
    validate: "tidy -eq %s"
  with_items:
    - first-entry
    - blog-generation
            </pre>

        <p>The tasks above simply loop through all the declared templates, output .html-files, and use <a href="https://www.html-tidy.org/">tidy</a> to validate the output. This ensures that all output is valid HTML, and weird templating or formatting errors are caught early on.</p>

        <p>The implementation has a tiny trap in the sense that every new blog post needs to be added in three places: in two Ansible tasks and one static HTML-file.
        There really is no way around this as long as I want to keep the setup light, but luckily the potential damage of this is relatively easy to mitigate.
        In total, there are 8 different combinations of what can happen:</p>

<table class="tg"><colgroup>
<col style="width: 125px">
<col style="width: 125px">
<col style="width: 125px">
<col style="width: 199px">
</colgroup>
<thead>
  <tr>
    <th class="tg-73oq">Directory task</th>
    <th class="tg-73oq">Template task</th>
    <th class="tg-73oq">Blog index</th>
    <th class="tg-73oq">Outcome</th>
  </tr></thead>
<tbody>
  <tr>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">All works as expected</td>
  </tr>
  <tr>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">Hidden entry is generated</td>
  </tr>
  <tr>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">Empty directory created</td>
  </tr>
  <tr>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">Empty directory created</td>
  </tr>
  <tr>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">Templating will fail</td>
  </tr>
  <tr>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">Templating will fail</td>
  </tr>
  <tr>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">UPDATED</td>
    <td class="tg-73oq">Broken link is added to index</td>
  </tr>
  <tr>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">NOT UPDATED</td>
    <td class="tg-73oq">Nothing happens</td>
  </tr>
</tbody></table>
        <p>As seen in the table, several different situations can happen. 3/8 are non-intrusive, so those don't require any action.
        The two failures are desired, and the other three should preferably also fail.
        As I'm a lazy person, I simply implemented a task that fails if there are empty directories:
        </p>


        <pre> 
- name: Check that there are no empty directories
  ansible.builtin.command:
    argv:
      - find
      - ./site
      - -empty
      - -type
      - d
  register: empty_dirs
  changed_when: false
  failed_when: "empty_dirs.stdout_lines | length != 0"
        </pre>
        <p>
        Down the line, I might write an ansible module to check for the broken links.
        For now, I'll just accept the fact that it is possible to add broken links into the blog index.
        The two remaining outcomes of creating hidden entries and doing nothing are considered features for now.
        </p>

        <h2>Conclusion</h2>
        <p>I'm relatively happy with the result, to be honest.
        Most of the requirements are fulfilled quite well, and this current implementation contains a minimal amount of code.
        There are also several interesting opportunities for future development work when it comes to generating a static site with Ansible.
        Things such as automatically checking for broken links and generating an RSS feed are on the roadmap.</p>
        <p>Stay tuned for part 2, where I'll cover all the deployment tooling and infrastructure!</p>
]]>
 </description>
 <link>https://onnilampi.fi/blog/blog-generation</link>
 <pubDate>Tue, 3 Jun 2025 20:00:00 +0300</pubDate>
 <guid isPermaLink="false">c119f2eb-20d2-4d53-bd7d-594729789072</guid>
 </item>
  <item>
 <title>Hello World!</title>
 <description>
 <![CDATA[        <p>As the title suggests, this is the first entry in my new blog.
        My intention is to dredge up all the previous entries as well (within reason) so that they eventually find a home here.
        </p>
        <p>I ended up re-writing the entire thing to, well, work. The new setup is essentially just a small chunk of ansible playbooks and templates acting as a rudimentary static site generator. So far so good!</p>]]>
 </description>
 <link>https://onnilampi.fi/blog/first-entry</link>
 <pubDate>Mon, 2 Jun 2025 18:00:00 +0300</pubDate>
 <guid isPermaLink="false">97dbe05b-2a8e-45e2-b4a5-732484ba4032</guid>
 </item>
 
</channel>
</rss>
