One Year of Wodly
8th May 2020 | Wodly, Azure, Web Optimization, Docker
This month marks one year since I launched my side project Wodly. Wodly is a performance tracking platform dedicated to functional fitness. As an overview, Wodly contains a searchable public catalog of CrossFit-style workouts. Every day, a workout is added to Wodly as the "Workout of the Day" (WOD - hence the name!). Users can log in to Wodly and record their results for the workouts. They can also create their own workouts and publish them publicly or keep them private. Wodly also provides graphs of performance over time, and users can integrate Wodly with their Fitbit account to get deeper insights into their performance.
When you are the sole owner and maintainer of a project you have to wear many hats. Over the last year I have been Wodly's only developer, infrastructure engineer, designer, customer support rep and content creator. While working on Wodly I have grown in so many different ways - as an engineer, a writer and a content creator - and I have learned so much. In the last year I also completed my CrossFit Level 1 training course, and Wodly's workout programming has improved as a result.
In this article I'm going to talk about how Wodly is set up from a tech perspective: the architecture, the deployment pipeline and some lessons learned from working on the platform for a year. Finally, I'll talk about the future of Wodly and where I want to take the platform.
If you want to check out Wodly for yourself, you can find it at wodly.net.
The architecture of Wodly is fairly simple. The platform consists of an MVC-based web application, a SQL database and a blob storage container. Wodly also has a performance analysis tool, though this is incidental to the core functionality of the platform.
From my time studying engineering, an ideal was instilled in me which I have kept for many years: an engineer should only choose a more complex solution to a problem when the simple solution stops being effective. "Keep it simple, stupid" (KISS) is a maxim to live by as an engineer. I have applied this principle to Wodly from the start, adding complexity only as and when required.
At Wodly's core is a Dotnet Core 2.1 web app running in Microsoft Azure. When I was planning how I was going to build Wodly, Dotnet Core was the obvious choice for me. I have been using Dotnet for many years both professionally and in my own projects (all of my side projects are now built in Dotnet Core, running on Linux, developed on a Mac). Up to this point I had been deploying Dotnet Core apps to Azure App Services running inside Linux Azure App Service Plans, directly from Azure's integration with GitHub. Azure App Services make setting up a professional website really simple. All Azure App Services are placed behind a load balancer, and there is a straightforward system for adding SSL bindings and specifying custom domains if you host your DNS with a provider other than Microsoft. So, App Services are pretty great to work with and fairly easy to set up. However, there are some things I don't like about the out-of-the-box deployment flow.
While the out-of-the-box setup supports continuous deployment (CD), you need to set up Visual Studio Team Services (VSTS) to get continuous integration (CI) running smoothly, with deployments gated on tests. Using VSTS might not be an issue for you; personally I don't like using VSTS that much (possibly a discussion for another time). Another issue with the out-of-the-box deployment system is the deployment script required by Azure to install the app. Azure uses Kudu to power its deployment pipeline directly from GitHub, and to deploy the app you need to provide a script that hooks into that pipeline. This script builds your app, installs dependencies and can be used to install npm packages and so on. The script is a little difficult to reason about and hard to test locally. A final issue with this approach is that the deployable asset is tightly coupled to the Kudu system and not very portable.
With the above considerations in mind, I decided to take a different approach to the deployment pipeline and tried to make it as platform agnostic as possible. Over the last few years I have been working with Docker as part of my development environment for many different projects. Docker has some nice properties that make it very portable across platforms: once you package your app and build a Docker image, it can be deployed to any environment that supports Docker. Plus one for portability here. I was happy to have a portable solution, but I still wanted to use Azure to host my app. Conveniently, Azure has a Container App Service which runs inside an App Service Plan. These behave in almost exactly the same way as the classic App Services; the only difference is that the App Service pulls the latest version of an image from one of a wide range of supported registries and spins up a new container inside the App Service Plan. This all happens behind your load balancer, with no downtime during deployments.
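Packaging a Dotnet Core 2.1 app into a small image like this usually means a multi-stage build. The following is an illustrative sketch rather than Wodly's actual Dockerfile - the project name (Wodly.Web) and output paths are my own assumptions:

```dockerfile
# Build stage: restore, build and publish the app using the full SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:2.1 AS build
WORKDIR /src
COPY . .
RUN dotnet publish -c Release -o /app

# Runtime stage: only the published output on a slim Alpine runtime image,
# which is how the final image stays so small
FROM mcr.microsoft.com/dotnet/core/aspnet:2.1-alpine
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "Wodly.Web.dll"]
```

The key idea is that the SDK image (which is large) is discarded after the build, and only the published binaries are copied into the runtime image.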
With the approach decided, I started to look into the tooling around my deployment pipeline and the end-to-end flow. The basics of the end-to-end flow are as follows:
Github Master Branch -> Circle CI -> Azure Container Registry -> Azure App Service
As part of the end-to-end flow, the Circle CI step does most of the heavy lifting. This step uses a set of custom bash scripts to build the app, install dependencies, and bundle and minify web assets. These scripts also execute a Dockerfile which assembles a very lightweight Docker image for the app (~50MB). Circle CI also runs the tests and, on success, pushes the built image to the Azure Container Registry. On a successful push to the registry, Azure executes a webhook which triggers the deployment of the image to the app container in the App Service.
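That flow maps onto a Circle CI configuration roughly like the sketch below. This is a hedged illustration, not Wodly's actual config - the registry name, image name and credential environment variables are all assumptions:

```yaml
version: 2
jobs:
  build-and-push:
    docker:
      - image: circleci/buildpack-deps:stretch
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Build the image (tests run as part of the build scripts)
          command: docker build -t wodly.azurecr.io/wodly:$CIRCLE_SHA1 .
      - run:
          name: Push to Azure Container Registry
          command: |
            echo "$ACR_PASSWORD" | docker login wodly.azurecr.io \
              -u "$ACR_USERNAME" --password-stdin
            docker push wodly.azurecr.io/wodly:$CIRCLE_SHA1
workflows:
  version: 2
  deploy:
    jobs:
      - build-and-push:
          filters:
            branches:
              only: master
```

The branch filter is what makes this a "master branch only" deployment; the registry webhook on the Azure side takes over from the push onwards.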
In the early days of Wodly, the app existed on its own with no real way for me to tell what was going on with it. This is obviously not an ideal situation to be in when your app is in the wild. To check things were working as expected, I used to click around the site every few hours... It amuses me that I used to do this, and it makes me cringe slightly. At some point I started to see more traffic on the site, and I began to think about how I might get some insight into the site's usage patterns and monitor its performance; this is when I added Azure Application Insights to the app. Around this time I had also introduced some caching to the website for some highly trafficked and infrequently modified data. After I added this, it was apparent that I had no idea if the caching was actually giving me the benefit I thought it might.
One of the first pieces of custom information I added to App Insights was metrics on the caching mentioned above. I added some logic in the bowels of the caching service to instrument cache hit and miss rates. Seeing this data in the dashboard was really useful to me. Thankfully, I learned that my caching approach was the right decision and was delivering the performance improvement I had hoped for. Another great piece of instrumentation I added was a view of which articles in the blog section of the site were being read the most, and where people were finding them. I added metrics which tracked views of individual articles and recorded the referrer on each request to see where people found the links. This has been great for tracking the performance of articles and their distribution around the internet. Interestingly, reddit.com is one of Wodly's top referrers!
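The hit/miss instrumentation idea is simple to sketch. Wodly's caching service is Dotnet, but the shape of it translates to a few lines of Python; `track_metric` here is a hypothetical stand-in for a telemetry call such as App Insights' metric tracking:

```python
class InstrumentedCache:
    """A dict-backed cache wrapper that counts hits and misses.

    track_metric is a stand-in for a telemetry client call
    (e.g. Application Insights metric tracking) - hypothetical here.
    """

    def __init__(self, track_metric):
        self._store = {}
        self._track = track_metric
        self.hits = 0
        self.misses = 0

    def get(self, key, loader):
        """Return the cached value for key, loading and storing it on a miss."""
        if key in self._store:
            self.hits += 1
            self._track("cache_hit", 1)
            return self._store[key]
        self.misses += 1
        self._track("cache_miss", 1)
        value = loader()
        self._store[key] = value
        return value

    def hit_rate(self):
        """Fraction of lookups served from the cache."""
        total = self.hits + self.misses
        return self.hits / total if total else 0.0
```

Emitting a metric on every hit and miss is what lets a dashboard chart the hit rate over time, which is exactly the question I couldn't answer before.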
In the early days I managed to piece together the UI in a way that made sense to me and that I thought looked pretty good. For the most part, I am fairly happy with the UI and have made many improvements to it over the last year. Learning the best practices for building a good frontend has been a huge learning curve for me. One tool that really helped me understand frontend web performance was Google's PageSpeed Insights. This is a fantastic tool which, when given the URL of a website, loads the page and breaks down its performance to give you a score from 0 to 100. As well as the score, it gives you a whole list of best practices and recommendations.
The first time I ran the PageSpeed Insights tool for Wodly, it was more out of interest than anything. The results shocked me. When I ran the tool I was given a very low performance score in the mid 60s. The report on the page had huge blocks of red all over it pointing out loads of performance issues. After following the recommendations I managed to vastly improve Wodly's score. As of writing this, Wodly has a page speed score of 99/100(!!!) - I am very proud of this.
So what did I do to fix these issues? There were some amazingly simple ones that I hadn't even thought of. One of the first and best changes I implemented was returning static content with an efficient caching policy. To achieve this I added a small middleware to the backend which applies a cache TTL header of one year to all static content responses. This vastly improves page load speed for subsequent requests for the same content after the initial page load. A word of warning: a large cache TTL is good practice, but if you modify your static content frequently, you run the risk of your users' browsers loading old cached content. To avoid this, you should apply cache busting to your static content, where assets are loaded with a version stamp that changes with each build.
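Both halves of that fit in a few lines. The real middleware is ASP.NET Core, but here is the idea as a minimal Python sketch - the `/static/` prefix and the version-stamp query parameter are my own illustrative choices:

```python
# One year in seconds: the long TTL applied to static assets
ONE_YEAR_SECONDS = 60 * 60 * 24 * 365  # 31536000


def cache_headers(path):
    """Return the Cache-Control header for a request path.

    Static assets get a one-year TTL; everything else is revalidated.
    """
    if path.startswith("/static/"):
        return {"Cache-Control": f"public, max-age={ONE_YEAR_SECONDS}"}
    return {"Cache-Control": "no-cache"}


def busted_url(path, build_version):
    """Append a version stamp so a new build invalidates cached copies.

    When build_version changes, the URL changes, so browsers fetch the
    new asset instead of serving the year-old cached one.
    """
    return f"{path}?v={build_version}"
```

The two pieces work together: the long TTL makes repeat visits fast, and the version stamp makes a fresh deployment safely bypass those long-lived caches.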
Another issue pointed out by PageSpeed Insights was the number of requests required to load the page content. Wodly uses a few different pieces of third-party JS and CSS, and loading each of these individually was hurting performance. This one made me feel stupid - the obvious solution is bundling the content. I already had a bundle step in my build pipeline; was I using it properly? No! After some face palming, I rewrote the bundling logic to combine my required scripts and stylesheets into single files. This cut down the number of requests and reduced the overall payload size when loading pages.
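At its core, bundling is just concatenation of assets into one file so the browser makes one request instead of many. This is a minimal illustrative sketch (the real pipeline is bash scripts and also minifies; the function is my own):

```python
from pathlib import Path


def bundle(paths, out_path):
    """Concatenate the given asset files into a single bundle file.

    Files are joined with newlines so the final statement of one file
    can't run into the first statement of the next. A real pipeline
    would also minify the result.
    """
    joined = "\n".join(Path(p).read_text() for p in paths)
    Path(out_path).write_text(joined + "\n")
    return out_path
```

A real bundler also handles ordering and source maps, but even this naive version captures why the request count drops: N script tags collapse into one.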
As I said before, front end work isn't exactly in my wheelhouse, and so the optimisations will continue as I learn more. It can always get better and you can always learn more!
The impact that building Wodly has had on me as a professional engineer and fitness fanatic can't be overstated. I have learned so much and plan to keep learning, improving and adding new features to Wodly. In general the plan is to keep growing the platform, gathering more users and gaining more social media influence. If none of this happens, I won't be upset; I will be happy to have worked so hard on something that I love and get so much satisfaction from. I have said it before and I will say it again: building something yourself just for the sake of building it is a perfectly valid reason to build anything.
In practical terms, I plan to continue programming workouts and writing articles to share my knowledge and experience with anyone who wants it. There is also a plan to continue building on Wodly's public API and build out a mobile app to improve the usability and ease of access of Wodly across platforms.
Whatever the future holds, I am excited to see what year two brings for Wodly!
Thanks for reading!