The full cycle developer blog

A blog by Geert van der Cruijsen about everything software development related

Hosting your Hugo blog on GitHub Pages in 1 hour

How to get a Hugo blog up and running with search and comments, for free, within an hour

I started this blog a week ago and it’s actually the 3rd blog I’ve created. I created my first blog around 2010 as a self-hosted WordPress website. Later, when doing more Azure work, I wanted to start off with a clean sheet and created a new blog in 2016, again using WordPress but hosted on Azure: https://mobilefirstcloudfirst.net. This worked, but I was never happy with it. For the last 2 years I didn’t blog at all, so when I started on my new year’s resolution of blogging again I decided I needed a new blog, a new domain name and something that used markdown for editing.

Choosing Hugo

I chose Hugo after looking into some of the common blogging platforms that are based on static site generators, such as Jekyll and Hugo. After browsing through some of the themes and doing some comparison I decided to try out Hugo, since it looked more popular, and I just wanted to see how hard it was.

A couple of hours later the website was up and running on my machine and I was already tweaking away at some UI improvements. Point proven; let’s see what I needed to do.

My blog using Hugo hosted on GitHub Pages: what’s in the box?

I wanted to create a blog website based on markdown files that I could store in git. I already had some experience with GitHub Pages, so I thought: let’s see how hard that is. After a couple of hours I had the following up and running:

  • Hugo CMS
  • Finding a theme
  • Algolia Search
  • GitHub Pages for hosting
  • GitHub Discussions for comments using Giscus
  • Automatic deployment using GitHub Actions

Hugo CMS

The Hugo CMS was quite easy. I just followed the tutorial on the Hugo website to create a new empty website. It took me a couple of minutes.
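
Creating the empty site boils down to a couple of commands (a quick sketch; I’m assuming the site lives in a folder called blog, matching the working directory used in the workflow later on):

hugo new site blog   # scaffolds the folder structure and a config file
cd blog
hugo server -D       # serves the site locally on http://localhost:1313, including drafts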

Adding a Theme

Hugo has a list of themes on its website. The Clean White Hugo Theme looked quite good, so I gave that a go. What I looked for in a theme was a nice clean interface plus options for comments and search. This theme had it all, so I installed it as a git submodule, and when I ran the hugo server command it worked from the get-go:

mkdir themes
git submodule add https://github.com/zhaohuabing/hugo-theme-cleanwhite.git themes/hugo-theme-cleanwhite
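
One step that is easy to miss: after adding the submodule you still need to tell Hugo to use the theme in your config.toml:

theme = "hugo-theme-cleanwhite"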

After my initial playing around with setting up the website I wanted to customize some things, so I forked the theme’s GitHub repo and made my changes on the fork. All my changes are open source and can be found in this repo.

Changes that I made are:

  • Showing blog previews / full posts on the homepage instead of the really small summaries
  • Added image headers to the blog previews
  • Added an option to show banners in the sidebar
  • Some small CSS tweaks
  • Added full posts to the RSS feed instead of the summary (see the template sketch below)
  • Fixed some issues in the Algolia search JSON that was being generated

All these changes were quite simple to make without me knowing anything about Hugo, themes or Go templating.
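
As an example of how small these changes are, the RSS change is essentially a one-liner in the theme’s layouts/_default/rss.xml template (a sketch of the relevant part; the surrounding markup is the standard Hugo RSS item template):

<item>
  <title>{{ .Title }}</title>
  <link>{{ .Permalink }}</link>
  <pubDate>{{ .Date.Format "Mon, 02 Jan 2006 15:04:05 -0700" }}</pubDate>
  <guid>{{ .Permalink }}</guid>
  <!-- .Content renders the full post; the default template uses .Summary here -->
  <description>{{ .Content | html }}</description>
</item>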

Hugo does not support search out of the box. Luckily the theme I chose had this covered and supports search through a free third-party service. The only things needed were creating an account, setting up the API key and creating an index.

Algolia

Algolia works by sending JSON files to the index. The JSON file is generated while the Hugo website is built, and it can then be uploaded in the Algolia dashboard for a first test. This manual labor is not something I prefer, so I automated it in the next step, where I also deploy everything to GitHub Pages.
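
For reference, wiring the theme to Algolia comes down to a few settings in config.toml. The parameter names below come from the Clean White theme’s example configuration, so treat them as illustrative if you use a different theme; the app id and API key are placeholders:

[params]
  algolia_search = true
  algolia_appId = "<YOUR_APP_ID>"
  algolia_indexName = "fullcycledeveloper-blog"
  algolia_apiKey = "<SEARCH_ONLY_API_KEY>" # the public search-only key, never the admin key

[outputs]
  home = ["HTML", "RSS", "Algolia"] # makes Hugo generate algolia.json during the build

[outputFormats.Algolia]
  baseName = "algolia"
  isPlainText = true
  mediaType = "application/json"
  notAlternative = true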

Deploying every commit to GitHub Pages and hosting it on my custom domain

GitHub Actions + Pages

I’m a developer so I wanted a fully automated workflow. My content is stored in a git repo on GitHub, and I wanted every commit to main to be deployed to the website automatically.

GitHub offers this flow with zero effort for Jekyll, but for Hugo a few small additions are needed. Before we create our workflow we need to add a specific file called CNAME to the repo so GitHub Pages links this page to our custom domain.

CNAME File

When you change the domain name for a website hosted on GitHub Pages, GitHub automatically creates a commit with a CNAME file in it. GitHub uses this file to know which domain name belongs to the website. Since my workflow pushes the entire contents of the ./public folder, GitHub was actually removing this file every time the Hugo workflow ran. Manually adding the CNAME file to the static folder solved this issue.

My CNAME file content:

fullcycledeveloper.com

Now that everything is ready we can create our workflow file. Mine looks like this. The steps are quite self-explanatory, but I’ve added a comment below each action to explain what it does.

name: github pages # Name of the workflow

on:
  push:
    branches:
      - main
  pull_request: 

jobs:
  build:
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: blog
    steps:
      - uses: actions/checkout@v2
        with:
          submodules: true  # Fetch Hugo themes (true OR recursive)
          fetch-depth: 0    # Fetch all history for .GitInfo and .Lastmod

      - name: Setup Hugo
        uses: peaceiris/actions-hugo@v2
        with:
          hugo-version: 'latest'
        # Downloads & installs the Hugo CLI

      - name: Build
        run: hugo --minify
        # Do a Hugo build of your markdown files to generate a static website that is stored in the `./public` folder

      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        if: github.ref == 'refs/heads/main'
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: ./blog/public
        # takes the public folder and pushes the changes to a new branch called gh-pages
          
      - uses: wangchucheng/algolia-uploader@master
        with:
          app_id: 9AWS4CUHVW
          admin_key: ${{ secrets.ALGOLIA_ADMIN_KEY }}
          index_name: fullcycledeveloper-blog
          index_file_path: ./blog/public/algolia.json
        # Uploads the algolia json file to the index on algolia website

After each commit the generated website is pushed to the gh-pages branch. From there we can enable GitHub Pages in the Settings menu of the repository: point it to the right branch and connect the domain name that you specified in the CNAME file. You’ll also have to configure some DNS records to prove you are the actual owner of that domain, and voilà, your website is ready to go!
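
For reference, at the time of writing the DNS records for a GitHub Pages custom domain look like this: apex A records pointing to GitHub’s Pages IP addresses, plus a CNAME record for the www subdomain.

A      @     185.199.108.153
A      @     185.199.109.153
A      @     185.199.110.153
A      @     185.199.111.153
CNAME  www   geertvdc.github.io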

GitHub Pages settings

Adding comments to the posts using Giscus (based on GitHub Discussions)

A static website is, by default, just static. That is fine for most scenarios, but for a blog I wanted comments on the posts. I used Giscus to do that. The Hugo theme I used (Clean White Hugo Theme) has built-in support for multiple comment systems, and since my main target audience is developers I chose Giscus, which stores comments in GitHub Discussions.

The only thing I needed to do was point the properties in my configuration .toml file to the right GitHub Discussions space.

[params.giscus]
  data_repo="geertvdc/geertvdc.github.io"
  data_repo_id="<REPO_ID>"
  data_category="blog-comments"
  data_category_id="<CATEGORY_ID>"
  data_mapping="blog"
  data_reactions_enabled="1"
  data_emit_metadata="0"
  data_theme="light"
  data_lang="en"
  crossorigin="anonymous"

After doing that it just works! I didn’t have to sign up for any new platform, since my code is already on GitHub, and now everything is in the same place.

Hopefully this helps other people set up a simple blog website as well. I’m really happy with how it turned out, and it really was a breeze to set up.

Happy Blogging!

Geert van der Cruijsen


Generating sandbox APIs for your ASP.NET Core Web APIs

Give your API consumers a sandbox to test your API from day 1

When building APIs I often want to share my contract as soon as possible, especially when you know your consumers and you’re open to feedback and discussion on the contract of the API.

Contract first or Code first?

Although I want to share my contract as soon as possible, I don’t like the typical API contract designer tools. My approach is often to just create an empty API in ASP.NET Core by defining the controllers and classes that make up the contract. Adding Swashbuckle to your project will then generate a Swagger file / Open API specification based on those controllers and classes that you can use to document your API.
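
For context, wiring Swashbuckle into a new project is only a few lines. A minimal .NET 6 Program.cs sketch:

var builder = WebApplication.CreateBuilder(args);

builder.Services.AddControllers();
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen(); // Swashbuckle: builds the Open API spec from controllers + classes

var app = builder.Build();

app.UseSwagger();   // serves the generated spec at /swagger/v1/swagger.json
app.UseSwaggerUI(); // serves the interactive docs at /swagger
app.MapControllers();
app.Run();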

Giving your consumers a testable solution as soon as possible

So an early API specification / contract is quite easy to create. But what if we could actually generate a working API from this specification that returns fake data? There are tools that can do this for you, like Open API Mock. This tool can generate a working API from an API specification and return fake data based on additional extension properties in the specification.

Let’s take a look at a basic swagger file (the weather forecast from the built-in template you get when creating a new ASP.NET Core Web API project). I’ve added some extension properties (x-faker) to the specification by hand so Open API Mock knows what kind of fake data it should generate. Open API Mock generates this data through a library called Faker, which has a wide range of fake data generators, from numbers to street names, phone numbers, bank details and more, many of them available in country-specific variants.

"schemas": {
  "WeatherForecast": {
    "type": "object",
    "properties": {
      "date": {
        "type": "string",
        "format": "date-time",
        "x-faker": "date.recent"
      },
      "temperatureC": {
        "type": "integer",
        "format": "int32",
        "x-faker": "datatype.number(-10,35)"
      },
      "temperatureF": {
        "type": "integer",
        "format": "int32",
        "readOnly": true,
        "x-faker": "datatype.number(105)"
      },
      "summary": {
        "type": "string",
        "nullable": true,
        "x-faker": "lorem.paragraph"
      }
    },
    "additionalProperties": false
  }
}

We can now use this swagger file and feed it to Open API Mock to generate a working API by using Docker to run the Open API Mock container.

docker run -v "[path to your]swagger.json:/app/schema.json" -p "8080:5000" jormaechea/open-api-mocker

Now when we test this API, you’ll see that Open API Mock returns fake data as specified in the faker extension properties of the specification.

[
  {
    "date":"2022-01-05T22:25:30.366Z",
    "temperatureC":-6,
    "temperatureF":41,
    "summary":"Praesentium iste natus temporibus omnis nihil perspiciatis quo. Rerum odit blanditiis quia autem et earum magnam quod. Suscipit voluptate quia voluptatibus ea reiciendis. Sed praesentium sed in est."
  }
]

Adding this functionality automatically to your Swashbuckle-generated swagger file

So manually editing a swagger file is nice, but what if you want this functionality added automatically when Swashbuckle generates the file? I’ve created a small extension to Swashbuckle, published through NuGet, called Swashbuckle.AspNetCore.ExtensionProperties. It allows you to add attributes to the classes that end up in the Open API specification generated by Swashbuckle.

If you install the NuGet package you can add attributes in the following way:

public class WeatherForecast
{
    [Faker(fakerValue:"date.recent")]
    public DateTime Date { get; set; }

    [Faker(fakerValue:"datatype.number(-10,35)")]
    public int TemperatureC { get; set; }

    [Faker(fakerValue:"datatype.number(105)")]
    public int TemperatureF => 32 + (int)(TemperatureC / 0.5556);

    [Faker(fakerValue:"lorem.paragraph")]
    public string? Summary { get; set; }
}

Generating the specification file by running the API will result in exactly the same JSON as shown earlier in this post.

This NuGet package is open source and can be found on my GitHub repo. It can also be used to add other types of extension properties to your specification file for other tools that process your swagger files.

So whenever we start a new API project we can have a running sandbox that returns fake data for our API in only a few minutes, by following the steps mentioned earlier:

  1. Create a new ASP.NET Core Web API project with controllers + classes
  2. Add Swashbuckle to the project
  3. Add the Swashbuckle.AspNetCore.ExtensionProperties NuGet package to the project
  4. Add Faker attributes to the properties in your classes
  5. Extract the .json file from your API by running the API and browsing to the Swagger UI on [api]/swagger/
  6. Run Open API Mock using your swagger file

Using this in automation / pipelines and automatically publishing it to Azure API Management

The steps above are quite easy to do, but I always try to set up all deployments through continuous delivery pipelines. So what I like to do for every change made to the API is to automatically deploy its specification to Azure API Management. There is quite some information on how to do this, either with infrastructure-as-code tech such as ARM or Terraform, or using the Azure CLI or PowerShell. I’ll skip the basics of how to import an API into API Management and instead focus on how you can work with both a sandbox and the actual API.

Blogs & documentation on Azure API Management deployments from delivery pipelines.
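
As an illustration of the Azure CLI route, importing a generated specification into API Management boils down to a single command. A sketch with placeholder resource names:

az apim api import \
  --resource-group my-rg \
  --service-name my-apim \
  --api-id weather-sandbox \
  --path weather-sandbox \
  --specification-format OpenApiJson \
  --specification-path ./swagger.json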

Sandboxes in Azure API Management.

Azure API Management has the notion of “products”. Products are a way to group certain APIs together and give a specific audience access to them. When working with sandboxes I always prefer to create 2 products: one for the sandbox APIs and one for the actual APIs. This way there are 2 entries for each of your APIs: a sandbox API in the sandbox product and the actual API in the production product.

Sandbox APIs do not contain any real data, so it’s a best practice to open them up to everyone; people can get inspired by browsing through the API list and playing around with the APIs.
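
Adding an imported API to such a product can be scripted as well (again with placeholder names, assuming a product with id sandbox exists):

az apim product api add \
  --resource-group my-rg \
  --service-name my-apim \
  --product-id sandbox \
  --api-id weather-sandbox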

How to easily get the Open API specification from an ASP.NET Core Web API project in a pipeline

The Swashbuckle project comes with a CLI tool that can generate the Open API specification file with a simple command. You can install and use the Swashbuckle.AspNetCore.Cli tool by running the following commands:

dotnet tool install -g --version 6.2.3 Swashbuckle.AspNetCore.Cli

swagger tofile --output swagger.json YourApi/bin/Debug/net6.0/YourApi.dll "v1" 

After extracting the Open API specification file you can pass it into the Open API Mock container and spin that container up somewhere in the cloud, for example on an Azure Container Instance, an Azure Web App for Containers, or an AKS cluster if you are already using one. After that, point the Azure API Management backend to your mock and you’re ready to go.
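
One low-friction way to get the mock running in Azure is to bake the specification into a small derived image and run it on an Azure Container Instance. A sketch, with placeholder resource and image names:

# Dockerfile: bake the generated spec into the mock image
FROM jormaechea/open-api-mocker
COPY swagger.json /app/schema.json

az container create \
  --resource-group my-rg \
  --name weather-sandbox-mock \
  --image myregistry.azurecr.io/weather-sandbox-mock:latest \
  --ports 5000 \
  --dns-name-label weather-sandbox-mock
# open-api-mocker listens on port 5000 inside the container, as seen in the docker run command earlier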

Conclusion

Hopefully this post, the NuGet package and the other tools I mentioned help you shift the communication of API designs with your consumers to the left, so they get an early view of the contract as well as an actual working API sandbox.

Geert van der Cruijsen


Hello world!

Welcome to the new blog by Geert van der Cruijsen about Software development

As a software developer there is only 1 title you can use for the first post on a new blog: “Hello World!” 👋

So why this blog?

It’s actually the 3rd blog website I’ve created. I started off years ago with a blog using WordPress that I hosted myself. Then, about 6 years ago, I was working a lot with Azure and wanted to move away from self-hosting, so I created a new blog (still using WordPress) but hosted it on Azure. This worked quite well at the time, but I lost my passion for blogging a couple of years ago.

Now that I wanted to start writing blog posts again I had 2 options: dust off my old blog or create a new one. I never really liked the whole WordPress experience and wanted to move to a static site generator, so this was the right time to make the switch to Hugo hosted on GitHub Pages.

The source of this blog can be found here on my Github Repo: Geertvdc/geertvdc.github.io.

Full cycle developer?

In my day job I work as a Senior Consultant at Xpirit, where I help companies build better software. I do this by improving the engineering culture, introducing new technology and coaching everyone from CTOs to developers. I believe the best software is built by teams who follow the DevOps mantra: “You build it, you run it!”. I love this mantra and try to improve teams on all aspects, from architecture, design, implementation and testing to deployments, maintenance and support. That’s why I called this blog “The full cycle developer blog”.

Sharing Knowledge

Sharing knowledge is one of my passions, whether by writing articles and blog posts or by speaking at international conferences and meetups.

What to expect on this blog?

Posts on this blog can cover all aspects of software development, from technical posts about a specific technology to non-technical posts about organizational culture or personal development.

So who am I?

My name is Geert van der Cruijsen. I’m a senior consultant at Xpirit in the Netherlands, but more importantly I’m also a husband to my wife Patty and a father to my daughters Lauren & Amber.