Friday, July 21, 2017

Build and Deploy a .NET Core Web App from Linux to a Linux Web App Service on Azure

… and do it in less than ten minutes!


I love .NET Core. It allows me to apply the C# skills I’ve been using for over a decade now (combined with my experience with the .NET Framework and its myriad APIs) to cross-platform development. I can create lightweight, portable apps and websites and containerize them, run them on the familiar Windows-based IIS server or deploy them as scalable Azure App Service endpoints.

Tools like Visual Studio Code and the Azure Command Line Interface (CLI) make it possible to leverage a consistent experience regardless of the platform I’m developing on. Although I carry a Windows 10 Surface Book laptop around, I’m just as comfortable dropping to bash on Ubuntu to build an app in Linux.

This short video demonstrates just how straightforward it is to build and deploy a .NET Core app from Linux to an Azure Web App on Linux.

There are three main steps:

  • Build the app
  • Create and configure the App Service
  • Deploy the App Service

To build the app, create the directory and use the .NET Core tool:

mkdir mymvc
cd mymvc
dotnet new mvc
dotnet restore
dotnet run

Next, create a resource group, an app service plan, and a web app. Configure the web app to run the correct .NET Core runtime on Linux and launch your app, and set it up for git-based deployment. You can capture the git endpoint to deploy to in the same step. You may need to pick your own unique app name.

az group create -n my-linux-group -l westus

az appservice plan create -g my-linux-group -n my-linux-plan --is-linux -l westus

az webapp create -n my-linux-app -p my-linux-plan -g my-linux-group

az webapp config set -n my-linux-app -g my-linux-group --linux-fx-version "DOTNETCORE|1.1.0" --startup-file "dotnet mymvc.dll"

url=$(az webapp deployment source config-local-git --name my-linux-app \
--resource-group my-linux-group --query url --output tsv)

echo $url

Finally, you need a way to deploy your files. There are several ways, but local git is a great option. You can publish your files and initialize the git repository; subsequent updates will only deploy the changes (publish, commit the changes, then push). These steps initialize the repository, connect it to the remote endpoint, and push the deployment files:

dotnet publish -c Release

cd bin/Release/netcoreapp1.1/publish

git init

git add -A

git commit -m "Initial Deployment"

git remote add azure https://username@my-linux-app.scm.azurewebsites.net:443/my-linux-app.git

git push azure master
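
For subsequent updates, the cycle is the same publish, commit, and push; since the remote is already configured, it comes down to a handful of commands like these (paths assume the same project layout):

dotnet publish -c Release
cd bin/Release/netcoreapp1.1/publish
git add -A
git commit -m "Update"
git push azure master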

At this stage, you should have a Linux app up and running! Learn more by exploring the Web Apps overview.

Friday, July 14, 2017

Cluster Management and Orchestration of Docker Containers at Scale with Azure Container Service

I recently recorded a course for Cloud Academy to introduce the Azure Container Service. Containers have already transformed the way developers approach enterprise applications and are rapidly gaining momentum in production. To manage containers at scale requires an orchestrator, or a platform to manage the deployment, scaling, and resiliency of containers across clustered hosts. The Azure Container Service simplifies standing up and managing your orchestrator of choice in the Azure cloud.


The video starts with an introduction and overview of containers and orchestrators, demonstrates how to set up a new environment using the Azure portal, then dives into creating everything you need to stand up the cluster, private networks, load balancers and other assets for each orchestrator using the Azure Command Line Interface (CLI). In most cases you can stand up the assets needed, including virtual machine scale sets, master nodes and private agents, with just two commands.
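
For example, standing up a Kubernetes cluster with the CLI boils down to roughly this pair of commands (the group and cluster names here are placeholders):

az group create -n my-acs-group -l westus
az acs create -g my-acs-group -n my-k8s-cluster --orchestrator-type kubernetes --generate-ssh-keys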


In addition to SSH key generation, I walk through standing up each orchestrator, deploying a container, and scaling it out to multiple instances.

The orchestrators covered include:

  • DC/OS
  • Docker Swarm
  • Kubernetes

Cloud Academy provides several subscription options and contains hours and hours of cloud-related content. You can access the course here: Introduction to the Azure Container Service. Check it out and let me know your thoughts!

Friday, June 30, 2017

Create a Developer Build Workflow with Docker and Multi-Stage Builds

I meet with my application development team every week to review how everyone is doing, go through active projects, discuss revenue and track the sales pipeline. To facilitate the meeting, I wrote an Angular 2 application that interfaces with exported files and APIs to pull together the information in a central, cohesive format. Angular is changing rapidly and sometimes that can create conflicts.

The Node.js package manager npm uses semantic versioning. This means some packages might specify a version range, such as "greater than or equal to" a given version. If someone pulls down the project after its dependent packages have been updated, different users may end up with different builds on their machines. I recently ran into this trying to get the project to build for a team member who is taking it over. We tried installing dependent packages, re-installing Node, updating npm, and several other options, but the behavior on their machine just wasn't the same as the behavior on mine.

That's when it clicked. What is one of the main advantages of Docker that I discuss when I present at conferences? The fact that "it works on my machine" is no longer a valid excuse. Instead, we use:

“It works on Docker.”

If you can get your container to run in Docker, it stands to reason it should run consistently on any Docker host.

The recent Docker release supports a concept called multi-stage builds. This allows you to create interim containers to perform work, then use output from those containers to build other images. This is perfect for building the sync project (and in the future means developers won’t even need Node on their machine to build it, although who doesn’t want Node?)

First, I created a .dockerignore file. I don’t use end-to-end testing and the build machine will install its own dependencies, so I ignored both of those folders.
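
In other words, the .dockerignore is just a couple of lines, something like:

e2e
node_modules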


Next, I got to work on the Dockerfile.


The Angular Command Line Interface (CLI) uses Node, so it made sense to start with a Node image. I am giving this container the alias build so I can reference it later. On the image, I install the specific version of the CLI I am using. Then I create a directory, copy the Angular source to that directory, install dependencies and build for production. This creates a folder named dist on the container with the assets I need to run my application.


Next, in the same Dockerfile I continue with my target image. The app is run locally for the team, so I don't need massive scale; I started with the simple and extremely small busybox image.


I create a directory for the web, then copy the assets from the previous image (remember, I gave it the alias build) into the new directory. I expose the HTTP port and instruct the container to run the http daemon in the foreground (so the container keeps running) on port 80 with a home directory of www.
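
Putting the two stages together, the Dockerfile looks roughly like this (the Node base image, CLI version, and paths here are assumptions; adjust them to your project):

FROM node:6 AS build
RUN npm install -g @angular/cli@1.1.3
RUN mkdir /src
WORKDIR /src
COPY . /src
RUN npm install && ng build --prod

FROM busybox:latest
RUN mkdir /www
COPY --from=build /src/dist /www
EXPOSE 80
CMD ["httpd", "-f", "-p", "80", "-h", "/www"]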

Next is a simple command I run from the root of the project to build the image:

docker build -t ng2appdev .


The first build takes a while as Docker pulls and preps the base images, but eventually it succeeds and generates an image that has everything needed to run the app. I can launch it like this:

docker run -d -p 80:80 ng2appdev

That instructs Docker to run the image I just created in “detached” mode (in the background) and expose port 80. Then I browse to localhost and the app is there, ready and waiting! The image doesn’t take up too much space, either, so it’s easy to pass around:

[Screenshot: docker images output showing the image size]

In fact, why even make anyone bother with the build? To improve the process we can do two things:

1. Create an image called ng2appdevbuilder that has the CLI installed. That way we don’t have to reinstall it every time or worry about it becoming deprecated in the future. It encapsulates a stable, consistent build environment.

2. On a build machine, automate it to pull down the latest application source code and assets, use the build image to build the app, then publish the result. Trigger this each time the source code is changed.
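
A sketch of what step 1 might look like in practice, assuming the ng2appdevbuilder image has the CLI on its path and the project source is mounted into it:

docker run --rm -v "$(pwd)":/src -w /src ng2appdevbuilder sh -c "npm install && ng build --prod"
docker build -t ng2appdev .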

Now anyone who wants to run the latest can simply pull the most recent image and go for it. That’s the power of Docker!

Saturday, June 10, 2017

Stuffing Angular into a Tiny Docker Container (< 2 MB)

I’ve been doing a lot of work with Angular and Docker lately. In one of my workshops I demonstrate how to take an Angular app and related services then package them as containers and leverage Docker Compose to spin them up with a single command. You can read about a more involved example at Build and Deploy a MongoDB Angular NodeJS App using nginx in Three Steps with Docker.

Using the nginx container, my Angular images average several hundred megabytes in size.


Microservices have transformed the way modern apps are architected. For example, Angular can produce a highly optimized production distribution of its assets as static files. The web front end of an Angular app can be incredibly simple because it relies on separate services and APIs to do the heavy lifting, and any browser-based logic is encapsulated in the JavaScript libraries that are included.

Therefore, standing up the front end should be a lot easier! In fact, even with a minimal web server, containers are easy to spin up and scale out, so load doesn't have to be a concern of the web server itself when it can be managed by the orchestrator. As an experiment, I set out to see how small I can make an Angular app.

If you have Docker installed, you can run the tiny Angular app yourself. Instructions are available at this link.

In searching for a solution I came across BusyBox, a set of Unix commands in an extremely small container that is around a megabyte in size. BusyBox contains httpd, an HTTP daemon. Let’s see how small we can make our Angular app!

The app we’ll target is the simple Angular app I built for Music City Code. You can clone the repo from GitHub here. When you build the app, it creates a simple interactive fractal app using a bifurcation diagram.

Let's build an optimized distribution of our Angular app. This generates about 400 KB of assets.
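
The build is the standard CLI production command (the exact flags may vary by CLI version):

ng build --prod --aot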


Next, create a Dockerfile with these contents:

FROM busybox:latest
RUN mkdir /www
COPY /dist /www
EXPOSE 80
CMD ["httpd", "-f", "-p", "80", "-h", "/www"]

The steps are straightforward. It pulls down the latest busybox image, creates a directory, copies the Angular assets into the directory, exposes the web port and instructs the container to run the httpd daemon on startup.

Add a .dockerignore and ignore the src, e2e, and node_modules directories.

Let’s build our container:
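
The build command is a one-liner; something like this (the jlikness/tinyng tag matches the run and push steps below):

docker build -t jlikness/tinyng .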


To see the image that was built, you can type:

docker images | grep "tinyng"

(Replace grep with find on Windows machines.) On my machine it comes in at 1.57 MB.

Running it and browsing to localhost proves it is working:

docker run -d -p 80:80 jlikness/tinyng

So I push to Docker Hub, and confirm the size there:
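
The push itself is the standard pair of commands:

docker login
docker push jlikness/tinyng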


The experiment is finished. Success! Of course now we have to try it under load and scale, but it’s good to know there is a path to optimize the size of your containers!

Wednesday, June 7, 2017

Music City Code 2017

This was my first year to attend the Music City Code conference in “the Music City” Nashville, Tennessee. It was held on the beautiful Vanderbilt University campus, where they advertise a 3-to-1 squirrel to student ratio. I imagine it was about 5-to-1 squirrels to conference attendees.


I presented two talks there, the last talk of the first day and the first talk of the last day.

Rapid Development with Angular and TypeScript

This is a talk focused on demonstrating just how powerful Angular, TypeScript, and the Angular CLI are for rapidly building apps. Don't be scared by this 360 photo; if you can't see the audience, just scroll it around.

The first half of the talk focused on the features Angular provides, as well as an overview of TypeScript. The second half is a hands-on demo building an app using services, settings, rendering, data-binding, and a few other features.

You can access the deck and source code here.

Contain Your Excitement

The next talk focused on containers. In true “Inception” style, the container talk itself can be run from a container.

The talk briefly covers the difference between “metal” in your data center and Docker containers, then walks through building simple containers and evolves to multi-stage containers, using Docker compose, and an overview of orchestrators like Kubernetes.

As part of the talk, I took a 360 degree picture with my Samsung Gear 360 (I have the older model, there is a newer 2017 version). I updated the source with the embed link, then synced my changes to GitHub. This triggered an automated build that prepared a Node.js environment, ran unit tests, then built and published the Docker image to demonstrate the continuous integration and deployment pipeline that is possible with containers.

You can access the source code here and run the container from here.

Parting Thoughts

As far as conferences go, this is one of my favorites. There were great people, terrific speakers, friendly and helpful volunteers, and a fun venue. The food was awesome and speakers were able to stay in the dormitories on campus which was a fun experience. I definitely look forward to coming back in future years!

Friday, April 7, 2017

Create a Serverless Angular App with Azure Functions and Blob Storage

As DevOps continues to blur the lines between traditional IT operations and development, platforms and tools are rapidly evolving to embrace the new paradigm. While the use of containers explodes throughout global enterprises, another technology has been rapidly gaining momentum. On Amazon AWS it's referred to as AWS Lambda, on Azure its name is Azure Functions, and in the Node.js world a popular option is webtask.io.

The technology is referred to as “serverless” and is the ultimate abstraction of concerns like scale, elasticity and resiliency that empowers the developer to focus on one thing: code. It’s often easier to understand when you see it in action, so this post will focus on creating a completely functional Angular app with absolutely no provisioning of servers.

To follow along you’ll need an Azure subscription. If you don’t have one there is a free $200 credit available (in the U.S.) as of this writing. After you have your account, create a new resource group and give it a name. This example uses “ServerlessAngular.”


A resource group is simply a container for related resources. Groups make it easy to see aggregate cost of services, can be created and destroyed in a single step and are a common security boundary in the Azure world. Once the resource group is provisioned, add a resource to it. Use the add button, type “function” in the search box and choose “Function App.”


Now you can pick the name of your function app, choose the subscription it will bill against, assign it to a resource group, pick a location, and associate it with a storage account. Here are some of the options I chose to create the app with the name "angularsvc."


Function apps need storage, so tap the “Storage Account” option and use an existing account or create a new one. In this example I create one called “angularsvcstorage” to make it clear what the storage is for.


After you hit the create button, it may take a few seconds to several minutes to provision the assets for the function app. These include a service plan for the app, the storage, and the app itself. After everything is created, click on the function app itself to begin coding.


There are a few starter examples to choose from, but for this demo you’ll select, “Create your own custom function.” You can choose a language, scenario, and template to start with. I picked “JavaScript” for “Core” and selected “HttpTrigger-JavaScript.”


A function app may have multiple functions. For this example we’ll name the function “xIterate” and open it for anonymous consumption.


The app I’m using for this example is based on an Angular and TypeScript workshop I gave at DevNexus. The module “55-Docker” contains a sample app with an Angular project and a Node.js service that I adapted from containers for this serverless example. The app generates a bifurcation diagram. Here is the code for the function:
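
A minimal sketch of the idea, assuming a standard JavaScript HTTP trigger that reads "r" from the query string and iterates the logistic map (x → r·x·(1 − x)); the iteration count and starting value here are arbitrary:

module.exports = function (context, req) {
    // read the "r" parameter from the query string
    var r = parseFloat(req.query.r);
    var x = 0.5; // arbitrary starting point
    var iterations = [];
    // iterate the logistic map and collect the values of x
    for (var i = 0; i < 100; i++) {
        x = r * x * (1.0 - x);
        iterations.push(x);
    }
    // return the array of iterated values
    context.res = {
        status: 200,
        body: iterations
    };
    context.done();
};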

Now you can click “save and run” to test it. The script expects a parameter to be passed, so expand the right pane, choose the “Test” tab and add a parameter for “r.”


A successful run returns a status code of "200 OK" and an array of values.

To further test it, click the “get function URL” link in the upper right, and call it directly from your browser like this:

https://angularsvc.azurewebsites.net/api/xIterate?r=2.7

Now the function is ready to go. On a system with Node, NPM, and the latest Angular-CLI installed, execute this command to create a new Angular project:

ng new ng-serverless

After the project is created, navigate to its folder:

cd ng-serverless

Then generate a service to call our function:

ng g service iterations

Using your favorite code editor (mine is Visual Studio Code), open app.module.ts, import the service, and add it to the providers array:
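
A sketch of the resulting module, assuming the default project layout and the HttpModule the service will need:

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { HttpModule } from '@angular/http';

import { AppComponent } from './app.component';
import { IterationsService } from './iterations.service';

@NgModule({
  declarations: [AppComponent],
  imports: [BrowserModule, HttpModule],
  providers: [IterationsService],
  bootstrap: [AppComponent]
})
export class AppModule { }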

Next, edit the iterations.service.ts file to implement the service. Use the function URL you created earlier. The service generates several values for “r”, calls the API to get the “x” iteration values, then publishes them as tuples back to the subscriber. Be sure to update the URL for the service to match your own.
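
A rough sketch of what that service might look like (the range of "r" values, the step size, and the tuple handling are illustrative; swap in your own function URL):

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import { Observer } from 'rxjs/Observer';
import 'rxjs/add/operator/map';

@Injectable()
export class IterationsService {

  // replace with the URL of your own function
  private url = 'https://angularsvc.azurewebsites.net/api/xIterate';

  constructor(private http: Http) { }

  // emits [r, x] tuples for a range of r values
  getIterations(): Observable<[number, number]> {
    return Observable.create((observer: Observer<[number, number]>) => {
      for (let r = 2.5; r <= 4.0; r += 0.005) {
        this.http.get(this.url + '?r=' + r)
          .map(res => res.json() as number[])
          .subscribe(values => values.forEach(x => observer.next([r, x])));
      }
    });
  }
}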

Edit the app.component.html template to add a canvas that will host the diagram.

Finally, edit app.component.ts to call the service and populate the diagram.

Now, the app is ready to run. Use the Angular-CLI to launch a local server: 

ng serve

And, if it doesn’t open for you automatically, navigate to the local port in your web browser:

http://localhost:4200

You’ll see the app loads but then nothing happens. If you open the console, it will be riddled with errors like this:

[Screenshot: CORS errors in the browser console]

The problem is that Cross Origin Resource Sharing (CORS) hasn’t been configured for the function, so for security reasons the browser is blocking requests to the Azure app function from the localhost domain. To fix this, open the function app back up in the Azure Portal, tap “Function app settings” in the lower left and choose “Configure CORS.”


Add http://localhost:4200 to the list of allowed domains, save it, then refresh your browser. If all goes well, you should see something like this:

[Screenshot: the bifurcation diagram rendered at localhost]

That's great – now you have a serverless API, but what about the website itself? Fortunately the Angular app is completely client-side, so it can be exported as a static site. Let's configure Azure Blob Storage to host the website assets. Add a new resource to the existing resource group. Search for "blob" and choose the "Storage account" option.


Give it a name like “ngweb”, choose “Standard”, pick a replication level (I chose the default), and pick a region.


Once this storage is provisioned, tap on “Blobs” for the blob service. The blob service is segmented into named containers, so create a container to host the assets. I named mine “bifurc” for the app. The access type is “Blob.”


After the container is provisioned, you can choose “container properties” to get the URL of the container. Copy that down. In your Angular project, generate an optimized production build:

ng build --prod --aot

This will build the static assets for your application in the “dist” folder. Open dist/index.html and edit the file to update the base URL. This should point to your container (the final slash is important):

<base href="https://ngweb.blob.core.windows.net/bifurc/">

Back in the Azure portal, the container provides an “upload” option to upload assets. Upload all of the files in “dist” as “block blob” until your container looks something like this:

[Screenshot: the uploaded files in the blob container]
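
If you prefer the command line, the Azure CLI can push the entire folder in one shot (substitute your own storage account and container names):

az storage blob upload-batch --source dist --destination bifurc --account-name ngweb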

You can either navigate to the container URL you saved earlier or click on index.html and copy its URL. You should see the app try to load, but it will fail. CORS strikes again! Head back over to the function service and add the blob storage domain (just the domain, the path is not necessary) to the list of allowed servers. For this example, I added https://ngweb.blob.core.windows.net/.

Eureka!

At this point you should have a working application. If you want to host this for production and give it a more user-friendly name, you can use a CNAME entry in DNS to point a custom domain name to blob storage.

Of course, this demonstration only scratched the surface of what is possible. You can build authentication and authorization into Azure Functions, securely store secrets like database credentials, trigger functions based on resource changes (such as running one when a SQL table is updated or a file is uploaded into blob storage) and much more.

In just minutes developers can now provision a highly elastic, scalable, and resilient web application without having to worry about how it is load-balanced. Your boss will give you extra points for saving cash because the functions and blobs are charged based on usage – no more ongoing costs to keep a VM running when it isn’t being used. All of these resources support deployment from a continuous deployment pipeline and in Azure you can create an automation script to generate a template for building out multiple environments like Dev, QA, Staging, and Production.

Enjoy your new super powers!

Wednesday, April 5, 2017

Celebrating Twenty Years of Open Source

The recent announcement about CodePlex shutting down spurred me to migrate a few old projects and led to reflection about my past contributions to open source.

The concept of open source has been around for decades. I wrote my first line of code in 1981 and it was only a few years later that the GNU Project was launched with the vision of free software and massive collaboration. Before the Internet became ubiquitous, we would share code through local gatherings called “swap meets.”

Imagine convention halls filled with tables and power cords so computer enthusiasts (I think we were called “geeks”) could drag their twenty-pound towers and fifty-pound monitors around and copy disks from each other. OK, let’s be honest. There may have been a little bit of pirating going on at those meets, too.

Much activity in those days was focused on breaking or “cracking” increasingly sophisticated anti-copy security measures in commercial software. When successful, developers would leave their calling card by programming “intros” with personal brands and “shout-outs.” Many “crews” of developers created longer running “demos” to show off their coding ability. When I was 14 I labored with a good friend through one summer to  create a humbly named “8th wonder of the world.”

8th Wonder of the World

The capture doesn’t do it justice as much of the flicker is non-existent on a “real” breadbox (what we called the Commodore 64), but I was hardly keeping pace with groups like Crest who were able to write code that tricked the hardware into producing higher resolution graphics than it was supposed to support and could create images with 64k colors on an 8-bit machine that only had a palette of 16.

In those days we shared our creations by swapping disks or going through the painfully long process of uploading and downloading to and from bulletin board systems (BBS).

Open Source Quake?

As software evolved through phases like "Shareware" and launched successful franchises like Doom, I continued to stay passionate about programming. After Quake was released, I had a unique opportunity to dabble in gaming. I had previously avoided it due to a severe weakness with vector math and ray tracing equations, but Quake provided its own language and allowed me to focus on game logic rather than worry about trivial effects like 3D graphics and gravity.

I had some initial success with a modification I wrote under the nickname “Nikodemos” called Midnight Capture the Flag. An eager gamer shared the idea with me in a chat room but lacked the programming chops to pull it off, so I rolled up my sleeves and got to work. The concept was great because most modifications required downloading assets over extremely slow connections.

MidCTF was a “server modification” so it could be played without any downloads by the player. It took advantage of the revolutionary 3D audio Quake provided, and used an algorithm to scan levels, turn off all the lights, and equip the player with a flashlight.


Before GitHub and CodePlex we had well-known public FTP sites to upload code to, so I started uploading code and posting about it on various news sites. The MidCTF mod went viral (yep, we even had "viral" in those days) and brought me to the attention of a group that was writing a TC (short for "total conversion") of Quake.

This game, called SWAT Quake, had completely custom gameplay.


I had to write from scratch features that are now commonplace. For example, there were limited team sizes so I had to queue players who connected but couldn’t play yet. To make their lives interesting, I figured out how to anchor “cameras” in levels, use algorithms to determine which cameras had the highest density of players within sight and auto-switch to view the action.

Another fun modification was “climbing gloves.” A fugitive trying to escape security could use these to climb walls. To make this work, I wrote an action to “engage” the claws. While engaged, I shot a vector from the player forward and determined whether they were in contact with a “wall.” If they were, I applied an acceleration to the player that was slightly higher than the reverse of gravity so they would “climb.”

It was a surprisingly realistic effect, because an opponent could shoot a rocket and miss, but the explosion would knock the player away from the wall and cause them to fall quite convincingly. I consider those initial Quake modifications my first “open source” projects that were posted on public servers. But what about business-focused open source?

The Real Job

After SWAT was released, I did the unthinkable: I started dating my soon-to-be wife, and I got a “real job.” This left me little time to focus on my gaming pursuits. At the time, open source was considered by most corporations to be completely incompatible with enterprise apps. I believe there were three driving factors that slowed adoption:

  1. Share and share-alike … no one wanted to have to share their proprietary “secret sauce” code, and the perception was most libraries required this to be used.
  2. The perception that open source projects were in perpetual beta, never quite finished, not as polished as professional code and would be a nightmare to support.
  3. Security concerns.

All three are very valid reasons and haven’t gone away. What has changed however is the momentum with which projects have been adopted and accepted as a part of the enterprise. That would come much later, however. At the start of my professional career, movies like AntiTrust reflected the general sentiments at the time: corporations were greedy, code-mongering organizations who refused to give an inch back to the community, while open source developers worked out of garages with zero income.

For a decade, open source wasn’t even on my radar. Fast forward …

Sterling

On June 28th, 2010 I “checked in” the code to my first open source project to CodePlex. You can view the migrated commit on GitHub here. Like most popular projects, this wasn’t a case of building something I hoped people would use, but rather addressing a need I knew developers had. Sterling was and probably will always be my largest open source effort.


In general, a common complaint about Silverlight was that it had no local database. Although best practices at the time embraced an N-tier architecture that relied on web services to shuttle data to and from the browser, it was difficult to cache assets locally. Fortunately, Silverlight had the concept of “isolated storage” and just needed some help making it easier to store and retrieve data.

At the time I had been experimenting with reflection to dynamically create entities from JSON, and realized that it was not difficult to walk the properties of an object and use a binary writer to serialize them. Classes are just bags of properties, some of which are classes themselves. Sterling would allow you to define a database, register classes, and use lambda expressions to describe keys and indexes.

The first iteration of Sterling came fairly quickly, but it was the optimizations that made it unique. I quickly deduced that it would be faster to store properties and fields in a cache rather than use reflection each pass. The algorithms to handle arrays and dictionaries were long and unwieldy until I had a flash of insight that enabled me to refactor about 1000 lines of code that handled only a few cases to 300 lines that handled almost every case.

The introduction of Windows Phone 7 led to a major surge of interest in Sterling. I rewrote the engine to decouple storage from the database logic, so I could use a common “core” then write adapters for the phone vs. Silverlight. This made it easy to write a server-side .NET version for caching as well. I introduced triggers, decoupled logging, provided a way for custom serialization schemes to handle complex types or just make “short cuts” if their serialization was already handled, and allowed “byte interceptors” that could manipulate the stream for compression and/or encryption.

Embracing Open Source

There was no doubt in my mind that Sterling would be open source. This not only enabled adoption and helped solve a real problem developers were having, but also enabled contributions from the community. I had several collaborators do everything from pushing bug fixes to writing new adapters and even porting the code over to the Windows Runtime when it came out.

Ultimately, Sterling did well for preserving state and querying small collections but it fell apart with extremely large datasets. Eventually solutions like SQLite were ported over and helped fill the demand. I had one fleeting offer from a major tools vendor to purchase Sterling that fell apart when the technology officer left for another company.

Jounce

On October 4, 2010 I committed the initial build for Jounce, a framework for building Silverlight business applications.


Again, Jounce was not an attempt to create something people would adopt, but a reaction to a problem that needed solving. After delivering dozens of Silverlight presentations, two things became evident to me: first, a lot of developers were building the applications the wrong way. They weren’t taking advantage of built-in features and were locking in code that would make it extremely difficult to migrate and support later on.

Second, although several frameworks like Prism existed, they were overly complicated for most developers and difficult to adopt. Developers were creating dependencies on a framework and then only using 10% of it.

I recognized that the several successful Silverlight production projects I had written, including a social media analytics tool for Microsoft and a pilot slate application for Rooms To Go, had several patterns in common. They leveraged the Model-View-ViewModel (MVVM) pattern, used some simple command and messaging patterns, and relied on the Managed Extensibility Framework (MEF) for dependency resolution and dynamic module loading.

I encapsulated these features into Jounce, uploaded it to CodePlex, and created a video to show how easy it was to get started:

Jounce: Getting Started from Jeremy Likness on Vimeo.

Jounce was specifically designed for Silverlight and did not survive Silverlight’s demise just a year later. Some other powerful frameworks designed during that period, however, still live on, like MVVM Light.

Modern Times

The version control systems I used in the past centered around TFSVC. (With one exception: I did install an instance of Darcs at one company and used it for a while.) Git was a mystery to me. Some associates of mine insisted it was the future and that I should get to know it.

In January of 2014 I made my first commit to GitHub with jsEventAgg, a simple pub/sub platform for JavaScript. To really get to learn and understand git, I leveraged it for a talk at a code camp that featured an Angular Health App. I created the application and committed after each major step, then pulled the commits to show each stage of development.

This was the start of me moving entirely over to GitHub and all of my open source projects have been hosted there since.

The majority of open source projects I publish are proof-of-concepts, demos, and tutorials. There are a few exceptions. For example, jlikness.watch made it possible to see just how bad dirty checking could get in Angular 1.x apps, and qorlate made it easier to coordinate promises and even deal with them like streams (a la RxJs). jsInject is a functional JavaScript dependency-injection module designed to teach how Angular 1.x dependency injection worked, and my most recent module provides service location for microservices as a micro-locator.

For code snippets, I share JavaScript solutions via jsFiddle.net.

Containers

Perhaps the most amazing evolution of open source to me has been the introduction of containers. Although not necessarily tied to open source itself, they facilitate open source development in ways that were never possible before.

For example, if you work on Angular apps you’ve probably been hit with the migration of the Angular Command Line Interface (CLI) to a new package name. If you’ve installed the latest version and have projects made with the old tool, they won’t work … that is, unless you leverage a solution like this ng2build container. It allows you to mount a path and run the older version of the CLI inside of a container.

As of this writing there are thousands of available containers, and popular ones have been pulled millions of times by developers. Of course, the technology that makes this possible (Docker) is, well, open source.

Just five years ago I had to ask special permission to use open source tools in enterprise projects. I thought that Microsoft might revoke my MVP status because the technologies I evangelized the most were open source. Today, it is fairly common to see open source in almost every project. Microsoft has its own open source, cross-platform IDE in Visual Studio Code, collectively through its employees makes the most open source contributions on GitHub, and even has a website dedicated to "the open source discussion."

20 Years

Although several decades have gone by, one passion of mine has remained consistent. Software development transformed my life and my personal mission continues to be to empower developers to be their best. There has always been a platform for sharing code, best practices, and mentoring developers on new projects, but open source is the catalyst that not only transforms the lives of developers but boosts start-ups and injects new enthusiasm into large corporations that are learning the true meaning of “agile.”

One last bit of advice. I have done very well in my professional programming career, and open source always served to move it forward. There is a myth that “giving away” code somehow may diminish your value, but I disagree. Keep in mind there will always be proprietary algorithms and solutions that are a company’s “secret sauce” and I do not believe all code should be in the public domain. However, embracing open source and sharing common tools and libraries or even custom games gave me unique experiences and enabled me to be a part of programming history that I otherwise would not have had access to. I got to be an active participant in the longest interview.

One door closes, and other doors open. If you are not participating in an open source project or sharing your knowledge with the world, why not start today? You never know just who you might impact or how your contributions may transform technology as we know it. After all, most of the disruptive technologies we know of today (Netflix, Uber, PayPal, etc.) are largely built on and composed of open source frameworks, libraries, platforms, and tools.

Sunday, April 2, 2017

Build and Deploy a MongoDB Angular NodeJS App using nginx in Three Steps with Docker

Docker is a pretty amazing tool.

To prove it, I want to show you how you can build, deploy, and stand up an N-tier Angular app backed by MongoDB in just three steps. Literally. Without installing any prerequisites other than Docker itself.


First, seeing is believing. Once you have Docker installed (OK, and git, too), type the following commands:

git clone https://github.com/JeremyLikness/usda-microservice.git

cd usda-microservice

docker-compose up

It will take some time for everything to spin up, but once it does you should see several services start in the console. You’ll know the application is ready when you see it has imported the USDA database:

[Console output: the USDA database import]

After the import you should be able to navigate to localhost and run the app. Here is an example of searching for the word “soy” and then tapping “info” on one of the results:

[Screenshot: searching for "soy" and viewing the info for a result]

On the console, you can see the queries in action:

[Console output: the queries logged by the microservices]

How easy was that?

Of course, you might find yourself asking, “What just happened?” To understand how the various steps were orchestrated, it all begins with the docker-compose.yml file.

The file declares a set of services that work together to define the app. Services can depend on each other and often specify an image to use as a baseline for building a container, as well as a Dockerfile to describe how the container is built. Let’s take a look at what’s going on:
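
To give a feel for the shape of the file, here is a trimmed-down sketch covering three of the services described below (image names, build paths, and port numbers are illustrative; the actual file in the repository is more complete):

version: '2'
services:
  seed:
    build:
      context: .
      dockerfile: Dockerfile-seed
  db:
    image: mongo
    command: sh /data/seed.sh
    volumes_from:
      - seed
  descriptions:
    build: ./descriptions
    links:
      - db
    ports:
      - "3000:3000"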

Seed

The seed service specifies a Dockerfile named Dockerfile-seed. The entire purpose of this image is to import the USDA data from flat files along with some helper scripts, then expose the data through a volume so that they can be imported into the database. It is based on an existing lightweight Linux container called Ubuntu.

Containers by default are black boxes. You cannot communicate with them and are unable to explore their contents. The volume command exposes a mounting point to share data. The file simply updates the container to the latest version, creates a directory, copies over a script and an archive, then unzips the archive and changes permissions.

db

The db service is the backend database. It inherits from the baseline mongo image that provides a pre-configured and ready-to-run instance of mongodb.

You’ll notice a command is specified to run the shell script seed.sh. This is a bash script that does the following:

  1. Launch mongodb
  2. Wait until it is running
  3. Iterate through the food database files and import them into the database
  4. Swap to the foreground so it continues running and can be connected to
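
A rough sketch of such a script (paths, the database name, and the import flags are illustrative and depend on the actual file format):

#!/bin/bash
# 1. launch mongodb in the background
mongod &
MONGO_PID=$!
# 2. wait until it is accepting connections
until mongo --eval "db.stats()" > /dev/null 2>&1; do
  sleep 1
done
# 3. import each food database file into its own collection
for file in /usda-data/*.txt; do
  mongoimport --db usda --collection "$(basename "$file" .txt)" --file "$file"
done
# 4. wait on the background mongod so the container keeps running and accepts connections
wait $MONGO_PID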

At this point, Docker created an interim container to stage data, then used that data to populate a mongo database that was created from the public, trusted registry and is now ready for connections and queries.

descriptions

The next container has a directory configured for the build (“./descriptions”), so you can view the Dockerfile in that directory to discern its steps. This is an incredibly simple file. It leverages an image from node that contains a build trigger. This allows the image definition to specify how a derived image can be built.

In this instance, the app is a Node app using micro. The build steps simply copy the contents into the container, run an install to load dependent packages, then commit the image. This leaves you with a container that will run the microservice exposed on port 3000 using node.
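
Because the onbuild variant of the official node image declares the copy and npm install steps as build triggers, a Dockerfile along these lines is essentially all that's needed (the exact base tag is an assumption):

FROM node:7-onbuild
# the onbuild triggers copy the source and run npm install;
# the image's default CMD runs "npm start", which launches the micro service
EXPOSE 3000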

Going back to the compose file, there are two more lines called “links” and “ports” respectively. In this configuration, the mongodb container is not available outside of the host or even accessible from other containers. That is because no ports are exposed as part of its definition.

The “link” directive allows the microservice to connect with the container running the database. This creates a secure, internal link – in other words, although the microservice can now see the database, the database is not available to any other containers that aren’t linked and not visible outside of the Docker host.

On the other hand, because this service will be called from an Angular app hosted in a user’s web browser, it must be exposed outside of the host. The “port” directive maps the internal port 3000 to an external port 3000 so the microservice is accessible.

This service exposes two functions: a list of all groups that the user can filter, and a list of descriptions of nutrients based on search text.

nutrients

Nutrients is another microservice that is set up identically to descriptions. It exposes the individual nutrients for a description that was selected. The only difference in configuration is that because it runs on the same port (3000) internally, it is mapped to a new port (3001) externally to avoid a duplicate port conflict.

ngbuild

This image points to an Angular app and is used as an interim container to build the app (in production deployments it is more common to have a dedicated build box perform this step). I included this to demonstrate how powerful and flexible containers can be.

Inside the Dockerfile, the script installs node and the node package manager, then the specific version of the angular-cli used to build the app. Once the Angular CLI is installed, a target directory is created. Dependent packages are installed using the node package manager, and the Angular CLI is called to build a production-ready image with ahead-of-time compilation of templates. This produces a highly optimized bundle.

The compose file specifies a volumes directive that names "ng2". This is a mount point to share storage between containers. The ngbuild service mounts "ng2" to "/src/dist", which is where the build output lands.

web

Finally, the web service hosts the Angular app. There is no Dockerfile because it is completely based on an existing nginx image. The “ng2” mount points to “/usr/share/nginx/html” which is where the container serves HTML pages from by default. The “ng2” shared volume connects the output of the build from ngbuild to the input for the web server in web.

This app uses the micro-locator service I created to help locate services in apps. The environment.ts file maps configuration to endpoints. This allows you to specify different endpoints for debug vs. production builds. In this case the root is mapped to the descriptions service on port 3000, while nutrients is mapped to the root of the service on port 3001.

Even though the services are running on different nodes, the micro-locator package allows the code to call a consistent set of endpoints. You can see this in the descriptions component that simply references “/descriptions” and “/groups” and uses the micro-locator service to resolve them in its constructor.

They are mapped to the same service in configuration, but if groups were later pulled out to a separate endpoint, the only thing you would need to change is the configuration of the locator itself. The calling code remains the same.

The standard web port 80 is exposed for access, and the service is set to depend on descriptions so it doesn’t get spun up until after the dependent microservices are.

Summary

The purpose of this project is to demonstrate the power and flexibility of containers. Specifically:

  1. Availability of existing, trusted images to quickly spin up instances of databases or node-based containers, for example
  2. Least privilege security by only allowing “opt-in” access to services and file systems
  3. The ability to compose services together and share common resources
  4. The ease of set up for a development environment
  5. The ability to build in a “clean” environment without having to install prerequisites on your own machine

Although these features make development and testing much easier, Docker also provides powerful features for managing production environments. These include the ability to scale services out on the fly, and plug into orchestrators like Kubernetes to scale across hosts for zero downtime and to manage controlled roll-outs (and roll-backs) of version changes.

Happy containers!

Monday, January 9, 2017

Comprehensive End-to-End Angular 2 with Redux and Kendo UI DevOps Example

I know, the title is a mouthful. But it describes exactly what I’m sharing! Before and during the Christmas break I created a project to illustrate how to leverage Redux in Angular 2 apps. As part of the process I integrated Kendo UI and created a full DevOps solution with continuous integration and automated gated deployments to a Docker host on Azure.

To start with, read about Improving the State of your App with Redux.


After you get the gist of how and why the app was built, learn how I set up DevOps Continuous Deployment with Visual Studio Team Services and Docker.


Finally, a common question related to Angular 2 is how to test it. On this blog, learn about Integrating Angular 2 Unit Tests with Visual Studio Team Services using PhantomJS and JUnit to report back test results.


Even if you don’t use VSTS for your automation, many of the processes and steps described in this triad of articles will help you build your own deployment pipeline.

Happy DevOps in 2017!