About This Site

Rationale

I know, I know, it’s just a static site, what more can there possibly be to it? Well, there’s a bit more. Nothing special, but I think it’s worth sharing and of course reading.

Let’s start

First of all, this site has done a lot of moving around. It’s a hugo site and, at the time of this writing, it is hosted on a k3s cluster, running on a Proxmox cluster, running on 3 Lenovo ThinkStations, at my house. But that wasn’t always the case, so let’s take it from the start.

Amazon s3

This site started as an s3 static site. Writing posts was not my first intention, since I am an engineer first and a blogger after that (no siht sehrolck).

I created a hugo site and fiddled around with some templates before ending up with the theme you see now. I then created a bucket on s3, enabled public access and static website hosting, and I was ready. I built my site and uploaded it using s3cmd. After that I created a cloudfront distribution and an Amazon SSL certificate. I pointed my domain to the cloudfront distribution and the blog was live and…empty. All I had to do now was write some stuff…Where is the fun in that, right?
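
For reference, a publish round looked more or less like this (the bucket name and distribution id are placeholders):

# build the site and sync the generated files to the bucket
hugo --minify
s3cmd sync --acl-public --delete-removed public/ s3://my-blog-bucket/

# invalidate the cloudfront cache so the new content actually shows up
aws cloudfront create-invalidation --distribution-id EXAMPLE123 --paths "/*"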

So every time I write a new blog post (not very frequently, but still), I needed to rebuild the site, upload it with s3cmd and refresh the cloudfront cache.

Gitlab pages

That sounded like a lot of work, since I was dreaming of writing a lot of articles every week. No time left for all that tech stuff. So I decided to move it to gitlab pages instead. I mean, the code was already there and gitlab pages was also free. Freer than s3 and cloudfront, at least. I created a gitlab CI file for the repo as follows:

image: monachus/hugo:v0.55.6
variables:
  GIT_SUBMODULE_STRATEGY: recursive
pages:
  script:
    - hugo
  artifacts:
    paths:
      - public
  only:
    - master

That was easy. After that, I pointed my domain to gitlab pages, created the validation endpoints for my website, and the blog was ready…and empty again…

Self hosted

That remained the case for almost 4 years, until I decided to move all my repos from gitlab to github. In the meantime I did update the hugo engine a few times and did write a few posts. At that point I just wanted to try out github actions and self-hosted runners. I decided to host the blog myself, on my Proxmox cluster built on 3 Lenovo ThinkStation P330 Tiny machines! To take it a step further I decided to create a k8s cluster on top of it. K8s has been vital software for me over the past years. I have been running k8s in production for almost 7 years now and I feel more than comfortable with it. But I had never installed it myself. I had always been consuming managed clusters from Amazon, Google and Azure.

After some research, I decided that k3s would be the easiest and cleanest solution for a 3 node k8s cluster.

The first thing I did was prepare 3 VMs for k3s. I went with the latest Debian Bookworm image and applied the latest STIG guidelines using ansible-hardening: https://github.com/openstack/ansible-hardening
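
A minimal sketch of what that playbook run can look like, assuming the role has already been pulled from that repo and the 3 VMs sit in an inventory group I’m calling k3s_nodes:

# harden.yml: apply the STIG baseline to the k3s VMs
- hosts: k3s_nodes
  become: true
  roles:
    - ansible-hardening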

After that I was ready to install k3s. I decided to go with an all master/agent cluster with etcd instead of sqlite for HA.

I played a bit with the k3s install script, which I have to say makes installing k8s very easy, but since this was going to be a cluster I also wrote a small Ansible role that uses the k3s script, along with some configuration files, to bootstrap k8s on 3 nodes easily.
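
Under the hood the role does little more than run the official script with the right arguments. A rough sketch, with the token and IP as placeholders (the component tweaks live in a config file, shown further down):

# first node: bootstrap the cluster with embedded etcd
curl -sfL https://get.k3s.io | sh -s - server --cluster-init

# remaining nodes: join as additional servers
curl -sfL https://get.k3s.io | K3S_TOKEN="<node-token>" sh -s - server \
  --server https://<first-node-ip>:6443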

By default k3s installs Traefik, ServiceLB, flannel and the helm-controller. That was not exactly how I wanted the k3s cluster, so I disabled traefik and the helm-controller. For the ingress controller I decided to continue with ingress-nginx, mostly because I was more familiar with it. The helm controller made no sense for me, since I was going to be managing all helm charts and manifests with ArgoCD.
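
In k3s these tweaks can live in a config file rather than CLI flags; mine is roughly:

# /etc/rancher/k3s/config.yaml
disable:
  - traefik
disable-helm-controller: true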

Now, k8s without a proper CD is like coffee with sugar. I just don’t like it and neither should you. So I installed ArgoCD.
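
I won’t dwell on the installation; for reference, the stock manifests from the ArgoCD docs are enough to get it running:

kubectl create namespace argocd
kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj/argo-cd/stable/manifests/install.yaml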

Next stop: dockerize my hugo site. A hugo site is just some static files, so a simple nginx will do:

FROM klakegg/hugo:latest AS builder
ADD . /app
WORKDIR /app
RUN hugo --minify

FROM nginxinc/nginx-unprivileged:alpine-slim
COPY --from=builder /app/public /usr/share/nginx/html

In the first stage of this Dockerfile I use the latest hugo image to build the site. In the second stage I copy the output into a slim and rootless nginx image. I went with a multi-stage build because it makes perfect sense and helps keep the Docker image minimal and secure. It also keeps the image small enough that I don’t hit the limits of the free GitHub registry.
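
Testing the image locally is a one-liner; note that the unprivileged nginx image listens on 8080 instead of 80:

docker build -t blog:local .
docker run --rm -p 8080:8080 blog:local
# the site is now served at http://localhost:8080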

At that point I needed to create a helm chart. That’s easy: helm create blog and we are almost ready.
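
The scaffolded chart only needs a few values touched: the image coordinates and the ingress. A sketch of the relevant part of values.yaml (the repository mirrors the registry used in the CI config below, the rest is illustrative):

image:
  repository: ghcr.io/myghr/parask.me
  tag: "master"
ingress:
  enabled: true
  className: nginx
  hosts:
    - host: parask.me
      paths:
        - path: /
          pathType: Prefix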

Now to the CI part:

name: build hugo site
on: [pull_request, push]
jobs:
  build:
    runs-on: [self-hosted]
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          submodules: 'true'
      - name: login to ghcr.io
        uses: docker/login-action@v3
        with:
          registry: ghcr.io
          username: ${{ github.actor }}
          password: ${{ secrets.REGISTRY_TOKEN }}
      - name: Set output
        id: vars
        run: |
          calculatedSha=$(git rev-parse --short ${{ github.sha }})
          echo "::set-output name=short_sha::$calculatedSha"
          calculatedBranch=$(git branch --show-current)
          echo "::set-output name=branch::$calculatedBranch"
      - name: Build, tag, and push image
        if: ${{ github.event_name != 'pull_request' }}
        env:
          GHR_REGISTRY: ghcr.io/myghr
          GHR_REPOSITORY: parask.me
          COMMIT: ${{ steps.vars.outputs.short_sha }}
          BRANCH: ${{ steps.vars.outputs.branch }}
        run: |
          docker build -t $GHR_REGISTRY/$GHR_REPOSITORY:$COMMIT .
          docker push $GHR_REGISTRY/$GHR_REPOSITORY:$COMMIT
          docker tag $GHR_REGISTRY/$GHR_REPOSITORY:$COMMIT $GHR_REGISTRY/$GHR_REPOSITORY:$BRANCH
          docker push $GHR_REGISTRY/$GHR_REPOSITORY:$BRANCH

The above GH Action will check out the blog repo, build the image and push it to my GH registry. My tagging pattern is a static tag, like $BRANCH, and a versioned tag, like $COMMIT.

Next step: the helm chart. As mentioned, I created it with helm create, changed the repository and tag values to match my registry and configured the ingress. After that I committed the chart to my argocd repository. ArgoCD needs an Application resource to manage my blog helm chart:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: myblog
  namespace: argocd
  annotations:
    argocd-image-updater.argoproj.io/write-back-method: git
    argocd-image-updater.argoproj.io/write-back-target: helm
    argocd-image-updater.argoproj.io/git-branch: master
    argocd-image-updater.argoproj.io/image-list: blog=ghcr.io/myghr/parask.me:latest
    argocd-image-updater.argoproj.io/blog.helm.image-name: image.repository
    argocd-image-updater.argoproj.io/blog.helm.image-tag: image.tag
    argocd-image-updater.argoproj.io/blog.update-strategy: latest
    argocd-image-updater.argoproj.io/blog.allow-tags: regexp:^[0-9a-f]{7}$
spec:
  project: default
  source:
    repoURL: git@github.com:mygithub/infra.git
    path: charts/blog
    targetRevision: HEAD
    helm:
      releaseName: paraskme
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
    syncOptions:
      - CreateNamespace=true

In addition to argocd I installed argocd-image-updater. This sister app monitors my GHR for new container tags and updates the helm values with the latest one. You can see its configuration in the annotations above.
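
Like argocd itself, the image updater ships stock install manifests, so getting it into the argocd namespace is a single apply:

kubectl apply -n argocd -f https://raw.githubusercontent.com/argoproj-labs/argocd-image-updater/stable/manifests/install.yaml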

After a few minutes the GH Action had built my site and pushed it to GHR, the image updater had updated the helm chart values and ArgoCD had deployed my chart to k8s. My blog was available within my network, but not to the world (who cares…). Anyway….

Cloudflared

To expose my blog to the world I didn’t want to do any port forwarding or use DDNS. Instead I used cloudflared. Cloudflared is a zero trust service from cloudflare. It’s tunneling software that creates a secure connection from your network to the cloudflare network. For that I created a tunnel token in the cloudflare dashboard and, using argocd, I installed the cloudflared helm chart:

apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: cloudflared
  namespace: argocd
spec:
  project: default
  source:
    repoURL: tccr.io/truecharts
    chart: cloudflared
    targetRevision: 9.3.0
    helm:
      releaseName: cloudflared
      values: |
        image:
          repository: cloudflare/cloudflared
          pullPolicy: IfNotPresent
          tag: 2024.2.1@sha256:8124930145ba79535f2a9fb83bb9fb0abbeb8fdab94f4d72ae34deeeaee8774d
        workload:
          main:
            podSpec:
              containers:
                main:
                  args:
                    - tunnel
                    - --no-autoupdate
                    - run
                  env:
                    TUNNEL_TOKEN: "your tunnel token here"        
  destination:
    namespace: default
    server: 'https://kubernetes.default.svc'
  syncPolicy:
    automated:
      prune: true
      selfHeal: true
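
On the DNS side, routing the domain through the tunnel boils down to a single proxied record on cloudflare (the tunnel id is a placeholder):

parask.me    CNAME    <tunnel-id>.cfargotunnel.com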

I then pointed my domain parask.me to my cloudflared tunnel id and finally my blog was live again. This has been an amazing journey. I have learned so many things throughout those years of moving my blog around. If only I blogged more, I could have also learned to write better…

So that’s it for now. Of course this is not the end. The next plans include IPv6, Talos and bare-metal deployment, so stay tuned!

Thank you for stopping by!

Fin
