Sunday, August 20, 2017

My Blog has a New Home!

Yes! That's right. For more reasons than I can list here, I've packed up and moved on to a new blog. It's really easy to remember: https://blog.jeremylikness.com/.

There are a lot of reasons I moved, and I shared them all in this blog post.

This site will remain online as an archive of older articles. If you would like to see an article migrated to the new site, use the contact form to contact me!

Thanks,

Jeremy Likness

Tuesday, August 15, 2017

Herding Cattle with the Azure Container Service (ACS)

Docker is an amazing tool that transforms how DevOps teams build software at scale. For Docker to meet enterprise demands effectively, though, containers can’t be treated like pets: the individual care, attention, and feeding simply don’t make sense when you are dealing with hundreds or even thousands of interconnected microservices. The herd of containers needs to be wrangled, or orchestrated, by a tool.

[Image: herding]

Although several tools exist, Azure Container Service provides a single interface to set up complex orchestration clusters whether you prefer Mesos DC/OS, Docker Swarm, or Kubernetes. I recently presented this talk at the Docker Atlanta Meetup group. I covered what ACS is, walked through creating clusters for DC/OS, Swarm, and Kubernetes, and demonstrated deploying Docker containers and scaling them out.

View the deck below, and check out the accompanying walkthrough video.

To learn more about Azure Container Service, read the official documentation.

Are you running containers in production? I’d love to hear about your experience and tips in the comments below!

Regards,

Friday, August 11, 2017

Docker Containers at Scale with Azure Web App on Linux

The Azure team recently announced a new feature, currently in preview, called Azure Web App on Linux. It enables you to host web apps natively on Linux: you choose an initial size and number of instances for your hosts, then manually scale up and out or add automatic rules to respond to load.

[Image: azurecontainerwebapp]

It comes out of the box with support for several application stacks including Node.js, PHP, .NET Core, and Ruby.

Pssst. Lean a little closer. I have a little secret for you: it’s all based on Docker containers.

That’s right! What that means is that even if your platform and language of choice aren’t listed in the “supported stacks,” you can publish anything you can containerize. To prove it, I’ve made a short, five-minute video in which I take my little Go app, available as a public Docker Hub image, deploy it to Azure, and then scale it out. Check it out:

If you’re thinking, “Not another orchestrator” … think again. The goal is not to compete with mature orchestrators like Kubernetes. Instead, the web app provides a ton of features that make your life easier and streamline the DevOps experience, such as:

  • Kudu, the deployment engine that will manage deployments for you from sources ranging from GitHub and Visual Studio Team Services to local git repositories
  • Shared storage to aggregate log files
  • SSH access to your containers
  • Auto-scaling

You can even set up separate instances in different regions around the globe and load balance them using Azure’s Traffic Manager. There are many more features to review that you can learn more about by reading the Azure Web App on Linux documentation. If you have questions, comments, feedback, or corrections, don’t forget that the documentation is open source and accepting pull requests!
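For reference, pointing a Linux Web App at a public Docker Hub image from the CLI looks roughly like the following. This is a sketch, not the exact commands from the video: the resource names are placeholders, the image name is yours to supply, and the flags reflect the preview-era CLI.

az group create -n container-web-group -l westus

az appservice plan create -n container-web-plan -g container-web-group --is-linux -l westus

az webapp create -n my-container-web-app -g container-web-group -p container-web-plan

az webapp config container set -n my-container-web-app -g container-web-group \
  --docker-custom-image-name <your-docker-hub-user>/<your-image>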

Thanks,

Friday, July 21, 2017

Build and Deploy a .NET Core Web App from Linux to a Linux Web App Service on Azure

… and do it in less than ten minutes!

[Image: netcorethumb]

I love .NET Core. It allows me to apply the C# skills I’ve been using for over a decade now (combined with my experience with the .NET Framework and its myriad APIs) to cross-platform development. I can create lightweight, portable apps and websites, containerize them, run them on the familiar Windows-based IIS server, or deploy them as scalable Azure App Service endpoints.

Tools like Visual Studio Code and the Azure Command Line Interface (CLI) make it possible to leverage a consistent experience regardless of the platform I’m developing on. Although I carry a Windows 10 Surface Book laptop around, I’m just as comfortable dropping to bash on Ubuntu to build an app in Linux.

This short video demonstrates just how straightforward it is to build and deploy a .NET Core app from Linux to an Azure Web App on Linux.

There are three main steps:

  • Build the app
  • Create and configure the App Service
  • Deploy the App Service

To build the app, create the directory and use the .NET Core tool:

mkdir mymvc
cd mymvc
dotnet new mvc
dotnet restore
dotnet run

Next, create a resource group, an app service plan, and a web app (web app names must be globally unique, so you may need to pick your own). Configure the web app to run the correct runtime stack on Linux and launch your app, then set it up for git-based deployment; you can capture the git endpoint in the same step.

az group create -n my-linux-group -l westus

az appservice plan create -g my-linux-group -n my-linux-plan --is-linux -l westus

az webapp create -n my-linux-app -p my-linux-plan -g my-linux-group

az webapp config set -n my-linux-app -g my-linux-group --linux-fx-version "DOTNETCORE|1.1.0" --startup-file "dotnet mymvc.dll"

url=$(az webapp deployment source config-local-git --name my-linux-app \
--resource-group my-linux-group --query url --output tsv)

echo $url

Finally, you need a way to deploy your files. There are several options, but local git is a great one. You publish your files and initialize the git repository; subsequent updates then only deploy the changes (publish, commit the changes, and push). These steps initialize the repository, connect it to the remote endpoint, and push the deployment files:

dotnet publish -c Release

cd bin/Release/netcoreapp1.1/publish

git init

git add -A

git commit -m "Initial Deployment"

git remote add azure $url

git push azure master

At this stage, you should have a Linux app up and running! Learn more by exploring the Web Apps overview.

Friday, July 14, 2017

Cluster Management and Orchestration of Docker Containers at Scale with Azure Container Service

I recently recorded a course for Cloud Academy to introduce the Azure Container Service. Containers have already transformed the way developers approach enterprise applications and are rapidly gaining momentum in production. To manage containers at scale requires an orchestrator, or a platform to manage the deployment, scaling, and resiliency of containers across clustered hosts. The Azure Container Service simplifies standing up and managing your orchestrator of choice in the Azure cloud.

[Image: cloudacademy]

The video starts with an introduction and overview of containers and orchestrators, demonstrates how to set up a new environment using the Azure portal, then dives into creating everything you need to stand up the cluster (private networks, load balancers, and other assets) for each orchestrator using the Azure Command Line Interface (CLI). In most cases you can stand up the assets you need, including virtual machine scale sets, master nodes, and private agents, with just two commands.
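As a rough sketch (the resource and cluster names here are placeholders), standing up a Kubernetes cluster looks like this:

az group create -n my-acs-group -l eastus

az acs create -n my-acs-cluster -g my-acs-group --orchestrator-type kubernetes --generate-ssh-keys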

[Image: intro]

In addition to SSH key generation, I walk through standing up each orchestrator, deploying a container, and scaling it out to multiple instances.

The orchestrators covered include:

  • Mesos DC/OS
  • Docker Swarm
  • Kubernetes

Cloud Academy provides several subscription options and contains hours and hours of cloud-related content. You can access the course here: Introduction to the Azure Container Service. Check it out and let me know your thoughts!

Friday, June 30, 2017

Create a Developer Build Workflow with Docker and Multi-Stage Builds

I meet with my application development team every week to review how everyone is doing, go through active projects, discuss revenue and track the sales pipeline. To facilitate the meeting, I wrote an Angular 2 application that interfaces with exported files and APIs to pull together the information in a central, cohesive format. Angular is changing rapidly and sometimes that can create conflicts.

The Node.js package manager, npm, uses semantic versioning, which means a package can specify a version range (“anything greater than this version,” for example). If someone pulls down the project after its dependent packages have been updated, different users can end up with different builds on their machines. I recently ran into this trying to get the application development project to build for a team member who is taking it over. We tried installing dependent packages, re-installing Node, updating npm, and several other options, but the behavior on their machine just wasn’t the same as the behavior on mine.

That’s when it clicked. What is one of the main advantages of Docker that I discuss when I present at conferences? The fact that “it works on my machine” is no longer a valid excuse. Instead, we use:

“It works on Docker.”

If you can get your container to run in Docker, it stands to reason it should run consistently on any Docker host.

The recent Docker release supports a concept called multi-stage builds. This allows you to create interim containers to perform work, then use output from those containers to build other images. This is perfect for building the sync project (and in the future means developers won’t even need Node on their machine to build it, although who doesn’t want Node?)

First, I created a .dockerignore file. I don’t use end-to-end testing and the build machine will install its own dependencies, so I ignored both of those folders.

[Image: dockerignore]
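Based on that description, the file contains little more than these two entries:

e2e
node_modules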

Next, I got to work on the Dockerfile.

[Image: dockerfile]

The Angular Command Line Interface (CLI) uses Node, so it made sense to start with a Node image. I am giving this container the alias build so I can reference it later. On the image, I install the specific version of the CLI I am using. Then I create a directory, copy the Angular source to that directory, install dependencies and build for production. This creates a folder named dist on the container with the assets I need to run my application.

[Image: dist]

Next, in the same Dockerfile, I continue with my target image. The app runs locally for the team, so I don’t need massive scale; I started with the simple and extremely small busybox image.

[Image: busybox]

I create a directory for the web, then copy the assets from the previous image (remember, I gave it the alias build) into the new directory. I expose the HTTP port and instruct the container to run the http daemon in the foreground (so the container keeps running) on port 80 with a home directory of www.
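Putting the two stages together, the Dockerfile looks roughly like this. It’s a sketch reconstructed from the description above; the Node image tag, CLI version, and paths are assumptions:

# Stage 1: build the Angular app with a pinned version of the CLI
FROM node:7 AS build
RUN npm install -g @angular/cli@1.0.0
WORKDIR /src
COPY . /src
RUN npm install && ng build --prod

# Stage 2: copy the compiled assets into a tiny busybox web server
FROM busybox:latest
RUN mkdir /www
COPY --from=build /src/dist /www
EXPOSE 80
CMD ["httpd", "-f", "-p", "80", "-h", "/www"]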

Next is a simple command I run from the root of the project to build the image:

docker build -t ng2appdev .

[Image: docker-build]

The first build takes a while as Docker pulls the images and preps them, but eventually it succeeds and generates an image that has everything needed to run the app. I can launch it like this:

docker run -d -p 80:80 ng2appdev

That instructs Docker to run the image I just created in “detached” mode (in the background) and expose port 80. Then I browse to localhost and the app is there, ready and waiting! The image doesn’t take up too much space, either, so it’s easy to pass around:

[Image: image-size]

In fact, why even make anyone bother with the build? To improve the process we can do two things:

1. Create an image called ng2appdevbuilder that has the CLI installed. That way we don’t have to reinstall it every time or worry about it becoming deprecated in the future. It encapsulates a stable, consistent build environment.

2. On a build machine, automate it to pull down the latest application source code and assets, use the build image to build the app, then publish the result. Trigger this each time the source code is changed.

Now anyone who wants to run the latest can simply pull the most recent image and go for it. That’s the power of Docker!

Saturday, June 10, 2017

Stuffing Angular into a Tiny Docker Container (< 2 MB)

I’ve been doing a lot of work with Angular and Docker lately. In one of my workshops I demonstrate how to take an Angular app and related services then package them as containers and leverage Docker Compose to spin them up with a single command. You can read about a more involved example at Build and Deploy a MongoDB Angular NodeJS App using nginx in Three Steps with Docker.

Using the nginx container, my Angular images average several hundred megabytes in size.

[Image: bigimages]

Microservices have transformed the way modern apps are architected. Angular, for example, can produce a highly optimized production distribution of its assets as static files. The web tier of an Angular app can be incredibly simple because it relies on separate services and APIs to do the heavy lifting, and any browser-based logic is encapsulated in the JavaScript bundles that are included.

Therefore, standing up the front end should be a lot easier! In fact, even with a minimal web server, containers are easy to spin up and scale out, so load doesn’t have to be a concern of the web server itself when it can be managed by the orchestrator. As an experiment, I set out to see how small I could make an Angular app.

If you have Docker installed, you can run the tiny Angular app yourself. Instructions are available at this link.

In searching for a solution I came across BusyBox, a set of Unix commands in an extremely small container that is around a megabyte in size. BusyBox contains httpd, an HTTP daemon. Let’s see how small we can make our Angular app!

The app we’ll target is the simple Angular app I built for Music City Code. You can clone the repo from GitHub here. When you build the app, it creates a simple interactive fractal app using a bifurcation diagram.

Let’s build an optimized distribution of our Angular app. This generates about 400 KB of assets.

[Image: ngbuild]
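With the Angular CLI, that’s a single command (assuming the 2017-era CLI defaults, which emit the output to the dist folder):

ng build --prod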

Next, create a Dockerfile with these contents:

FROM busybox:latest
RUN mkdir /www
COPY /dist /www
EXPOSE 80
CMD ["httpd", "-f", "-p", "80", "-h", "/www"]

The steps are straightforward. It pulls down the latest busybox image, creates a directory, copies the Angular assets into the directory, exposes the web port and instructs the container to run the httpd daemon on startup.

Add a .dockerignore and ignore the src, e2e, and node_modules directories.

Let’s build our container:

[Image: dockerbuild]
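The command behind that output is roughly the following (the tag is inferred from the image name used below):

docker build -t jlikness/tinyng .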

To see the image that was built, you can type:

docker images | grep "tinyng"

(Replace grep with find on Windows machines.) On my machine, the image is 1.57 MB.

Running it and browsing to localhost proves it is working:

docker run -d -p 80:80 jlikness/tinyng

So I push to Docker Hub, and confirm the size there:

[Image: dockerhub]

The experiment is finished. Success! Of course now we have to try it under load and scale, but it’s good to know there is a path to optimize the size of your containers!

Wednesday, June 7, 2017

Music City Code 2017

This was my first year attending the Music City Code conference in “the Music City,” Nashville, Tennessee. It was held on the beautiful Vanderbilt University campus, where they advertise a 3-to-1 squirrel-to-student ratio. I imagine it was about 5-to-1 squirrels to conference attendees.

[Image: vanderbilt]

I presented two talks there, the last talk of the first day and the first talk of the last day.

Rapid Development with Angular and TypeScript

This talk focuses on demonstrating just how powerful Angular, TypeScript, and the Angular CLI are for rapidly building apps. Don’t be scared by this 360 photo; if you can’t see the audience, just scroll it around.

The first half of the talk covers the features Angular provides, along with an overview of TypeScript. The second half is a hands-on demo building an app using services, settings, rendering, data-binding, and a few other features.

You can access the deck and source code here.

Contain Your Excitement

The next talk focused on containers. In true “Inception” style, the container talk itself can be run from a container.

The talk briefly covers the difference between “metal” in your data center and Docker containers, then walks through building simple containers and evolves to multi-stage builds, Docker Compose, and an overview of orchestrators like Kubernetes.

As part of the talk, I took a 360-degree picture with my Samsung Gear 360 (I have the older model; there is a newer 2017 version). I updated the source with the embed link, then synced my changes to GitHub. This triggered an automated build that prepared a Node.js environment, ran unit tests, then built and published the Docker image, demonstrating the continuous integration and deployment pipeline that is possible with containers.

You can access the source code here and run the container from here.

Parting Thoughts

As far as conferences go, this is one of my favorites. There were great people, terrific speakers, friendly and helpful volunteers, and a fun venue. The food was awesome, and speakers were able to stay in the dormitories on campus, which was a fun experience. I definitely look forward to coming back in future years!

Sunday, April 2, 2017

Build and Deploy a MongoDB Angular NodeJS App using nginx in Three Steps with Docker

Docker is a pretty amazing tool.

To prove it, I want to show you how you can build, deploy, and stand up an N-tier Angular app backed by MongoDB in just three steps. Literally. Without installing any prerequisites other than Docker itself.

[Image: usdadiagram]

First, seeing is believing. Once you have Docker installed (OK, and git, too), type the following commands:

git clone https://github.com/JeremyLikness/usda-microservice.git

cd usda-microservice

docker-compose up

It will take some time for everything to spin up, but once it does you should see several services start in the console. You’ll know the application is ready when you see it has imported the USDA database:

[Image: usdaimports]

After the import you should be able to navigate to localhost and run the app. Here is an example of searching for the word “soy” and then tapping “info” on one of the results:

[Image: usdaweb]

On the console, you can see the queries in action:

[Image: usdasearch]

How easy was that?

Of course, you might find yourself asking, “What just happened?” To understand how the various steps were orchestrated, it all begins with the docker-compose.yml file.

The file declares a set of services that work together to define the app. Services can depend on each other, and each one specifies either an image to use as the baseline for its container, a Dockerfile that describes how the container is built, or both. Let’s take a look at what’s going on:

Seed

The seed service specifies a Dockerfile named Dockerfile-seed. The entire purpose of this image is to stage the USDA data flat files along with some helper scripts, then expose them through a volume so they can be imported into the database. It is based on the existing Ubuntu Linux base image.

Containers by default are black boxes: you cannot communicate with them and are unable to explore their contents. The VOLUME instruction exposes a mounting point to share data. The Dockerfile simply updates the base image to the latest packages, creates a directory, copies over a script and an archive, then unzips the archive and changes permissions.

db

The db service is the backend database. It inherits from the baseline mongo image, which provides a pre-configured, ready-to-run instance of MongoDB.

You’ll notice a command is specified to run the shell script seed.sh. This is a bash script that does the following:

  1. Launch mongodb
  2. Wait until it is running
  3. Iterate through the food database files and import them into the database
  4. Swap to the foreground so it continues running and can be connected to
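A minimal sketch of such a script follows; the file locations, database name, and import flags are assumptions rather than the exact contents of seed.sh:

#!/bin/bash
# 1. Launch mongodb in the background
mongod --fork --logpath /var/log/mongod.log
# 2. Wait until it accepts connections
until mongo --eval "db.stats()" > /dev/null 2>&1; do sleep 1; done
# 3. Iterate through the food database files and import them
for f in /data/seed/*.txt; do
  mongoimport --db usda --collection "$(basename "$f" .txt)" --type tsv --headerline --file "$f"
done
# 4. Shut down the background instance and re-launch in the foreground so the container keeps running
mongod --shutdown
exec mongod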

At this point, Docker created an interim container to stage data, then used that data to populate a mongo database that was created from the public, trusted registry and is now ready for connections and queries.

descriptions

The next container has a directory configured for the build (“./descriptions”), so you can view the Dockerfile in that directory to discern its steps. It is an incredibly simple file. It leverages a node image that contains build triggers (ONBUILD instructions), which allow the base image definition to specify how a derived image is built.

In this instance, the app is a Node app using micro. The build steps simply copy the contents into the container, run an install to load dependent packages, then commit the image. This leaves you with a container that will run the microservice exposed on port 3000 using node.
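A minimal sketch of what that Dockerfile might look like (the Node tag and onbuild variant are assumptions):

FROM node:7-onbuild
# The onbuild triggers copy the service source into the image and run npm install
EXPOSE 3000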

Going back to the compose file, there are two more directives for this service: “links” and “ports.” In this configuration, the mongodb container is not available outside of the host, or even accessible from other containers, because no ports are exposed as part of its definition.

The “link” directive allows the microservice to connect with the container running the database. This creates a secure, internal link – in other words, although the microservice can now see the database, the database is not available to any other containers that aren’t linked and not visible outside of the Docker host.

On the other hand, because this service will be called from an Angular app hosted in a user’s web browser, it must be exposed outside of the host. The “port” directive maps the internal port 3000 to an external port 3000 so the microservice is accessible.

This service exposes two functions: a list of all groups that the user can filter, and a list of descriptions of nutrients based on search text.

nutrients

Nutrients is another microservice, set up identically to descriptions. It exposes the individual nutrients for a description that was selected. The only difference in configuration is that, because it runs on the same port (3000) internally, it is mapped to a new port (3001) externally to avoid a conflict.

ngbuild

This image points to an Angular app and is used as an interim container to build the app (in production deployments it is more common to have a dedicated build box perform this step). I included this to demonstrate how powerful and flexible containers can be.

Inside the Dockerfile, the script installs Node and the Node package manager, then the specific version of the angular-cli used to build the app. Once the Angular CLI is installed, a target directory is created, dependent packages are installed using the Node package manager, and the Angular CLI is called to build a production-ready distribution with ahead-of-time compilation of templates. This produces a highly optimized bundle.

The compose file specifies a volumes directive that names “ng2”. This is a mount point to share storage between containers. The ngbuild service mounts “ng2” to “/src/dist”, which is where the build output lands.

web

Finally, the web service hosts the Angular app. There is no Dockerfile because it is completely based on an existing nginx image. The “ng2” mount points to “/usr/share/nginx/html” which is where the container serves HTML pages from by default. The “ng2” shared volume connects the output of the build from ngbuild to the input for the web server in web.

This app uses the micro-locator service I created to help locate services in apps. The environment.ts file maps configuration to endpoints, which allows you to specify different endpoints for debug vs. production builds. In this case, the root service is mapped to port 3000, while the nutrients service is mapped to port 3001.

Even though the services are running on different nodes, the micro-locator package allows the code to call a consistent set of endpoints. You can see this in the descriptions component that simply references “/descriptions” and “/groups” and uses the micro-locator service to resolve them in its constructor.

They are mapped to the same service in configuration, but if groups were later pulled out to a separate endpoint, the only thing you would need to change is the configuration of the locator itself. The calling code remains the same.

The standard web port 80 is exposed for access, and the service is set to depend on descriptions so it doesn’t get spun up until after the dependent microservices are.

Summary

The purpose of this project is to demonstrate the power and flexibility of containers. Specifically:

  1. Availability of existing, trusted images to quickly spin up instances of databases or node-based containers, for example
  2. Least privilege security by only allowing “opt-in” access to services and file systems
  3. The ability to compose services together and share common resources
  4. The ease of set up for a development environment
  5. The ability to build in a “clean” environment without having to install prerequisites on your own machine

Although these features make development and testing much easier, Docker also provides powerful features for managing production environments. These include the ability to scale services out on the fly and to plug into orchestrators like Kubernetes, which scale across hosts for zero downtime and manage controlled roll-outs (and roll-backs) of version changes.
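For example, with the compose file from this walkthrough you can add instances of one of the microservices on the fly, using the compose syntax that was current in 2017 (the service name comes from the file described above):

docker-compose up -d

docker-compose scale descriptions=3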

Happy containers!

Monday, January 9, 2017

Comprehensive End-to-End Angular 2 with Redux and Kendo UI DevOps Example

I know, the title is a mouthful. But it describes exactly what I’m sharing! Before and during the Christmas break I created a project to illustrate how to leverage Redux in Angular 2 apps. As part of the process I integrated Kendo UI and created a full DevOps solution with continuous integration and automated gated deployments to a Docker host on Azure.

To start with, read about Improving the State of your App with Redux.

[Image: reduxflow]

After you get the gist of how and why the app was built, learn how I set up DevOps Continuous Deployment with Visual Studio Team Services and Docker.

[Image: containerstack]

Finally, a common question related to Angular 2 is how to test it. On this blog, learn about Integrating Angular 2 Unit Tests with Visual Studio Team Services using PhantomJS and JUnit to report back test results.

[Image: testresults]

Even if you don’t use VSTS for your automation, many of the processes and steps described in this triad of articles will help you build your own deployment pipeline.

Happy DevOps in 2017!