Friday, April 7, 2017

Create a Serverless Angular App with Azure Functions and Blob Storage

As DevOps continues to blur the lines between traditional IT operations and development, platforms and tools are rapidly evolving to embrace the new paradigm. While the use of containers explodes throughout global enterprises, another technology has been rapidly gaining momentum. On AWS it’s called AWS Lambda, on Azure its name is Azure Functions, and in the Node.js world a popular option is webtask.io.

The technology is referred to as “serverless” and is the ultimate abstraction of concerns like scale, elasticity and resiliency that empowers the developer to focus on one thing: code. It’s often easier to understand when you see it in action, so this post will focus on creating a completely functional Angular app with absolutely no provisioning of servers.

To follow along you’ll need an Azure subscription. If you don’t have one there is a free $200 credit available (in the U.S.) as of this writing. After you have your account, create a new resource group and give it a name. This example uses “ServerlessAngular.”

step01resourcegroup

A resource group is simply a container for related resources. Groups make it easy to see the aggregate cost of services, can be created and destroyed in a single step, and serve as a common security boundary in the Azure world. Once the resource group is provisioned, add a resource to it. Use the add button, type “function” in the search box and choose “Function App.”

step02functionapp

Now you can pick a name for your function app, choose the subscription it will bill against, assign it to a resource group, pick a location, and associate it with a storage account. Here are the options I chose to create the app with the name “angularsvc.”

step03appname

Function apps need storage, so tap the “Storage Account” option and use an existing account or create a new one. In this example I create one called “angularsvcstorage” to make it clear what the storage is for.

step04fnstorage

After you hit the create button, it may take a few seconds to several minutes to provision the assets for the function app. These include a service plan for the app, the storage, and the app itself. After everything is created, click on the function app itself to begin coding.

step05intofunction

There are a few starter examples to choose from, but for this demo you’ll select “Create your own custom function.” You can choose a language, scenario, and template to start with. I picked “JavaScript” as the language, “Core” as the scenario, and selected the “HttpTrigger-JavaScript” template.

step06template

A function app may have multiple functions. For this example we’ll name the function “xIterate” and open it for anonymous consumption.

step07name

The app I’m using for this example is based on an Angular and TypeScript workshop I gave at DevNexus. The module “55-Docker” contains a sample app with an Angular project and a Node.js service, originally written for containers, that I adapted for this serverless example. The app generates a bifurcation diagram. Here is the code for the function:
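
A minimal sketch of the JavaScript HttpTrigger, assuming it iterates the logistic map x = r * x * (1 - x) for the supplied “r” (the starting value and iteration counts here are illustrative, not the exact code from the workshop):

module.exports = function (context, req) {
    // "r" can arrive on the query string or in the request body
    var r = parseFloat(req.query.r || (req.body && req.body.r));

    if (isNaN(r)) {
        context.res = { status: 400, body: "Please pass a numeric value for r" };
    } else {
        // iterate x = r * x * (1 - x), keeping only the later values
        // so the series has settled onto the attractor
        var x = 0.5;
        var iterations = [];
        for (var i = 0; i < 1000; i += 1) {
            x = r * x * (1 - x);
            if (i >= 900) {
                iterations.push(x);
            }
        }
        context.res = { status: 200, body: iterations };
    }
    context.done();
};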

Now you can click “save and run” to test it. The script expects a parameter to be passed, so expand the right pane, choose the “Test” tab and add a parameter for “r.”

step09test

The screen capture shows the result of successfully running the function. You can see it returned a status code “200 OK” and an array of values.

To further test it, click the “get function URL” link in the upper right, and call it directly from your browser like this:

https://angularsvc.azurewebsites.net/api/xIterate?r=2.7

Now the function is ready to go. On a system with Node, NPM, and the latest Angular-CLI installed, execute this command to create a new Angular project:

ng new ng-serverless

After the project is created, navigate to its folder:

cd ng-serverless

Then generate a service to call our function:

ng g service iterations

Using your favorite code editor (mine is Visual Studio Code), open app.module.ts, import the service, and add it to the providers array:
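
Assuming the defaults generated by the CLI, the edited app.module.ts could look something like this (HttpModule is listed because the service will use Http):

import { BrowserModule } from '@angular/platform-browser';
import { NgModule } from '@angular/core';
import { FormsModule } from '@angular/forms';
import { HttpModule } from '@angular/http';

import { AppComponent } from './app.component';
import { IterationsService } from './iterations.service';

@NgModule({
  declarations: [
    AppComponent
  ],
  imports: [
    BrowserModule,
    FormsModule,
    HttpModule
  ],
  providers: [IterationsService],
  bootstrap: [AppComponent]
})
export class AppModule { }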

Next, edit the iterations.service.ts file to implement the service. Use the function URL you created earlier. The service generates several values for “r”, calls the API to get the “x” iteration values, then publishes them as tuples back to the subscriber. Be sure to update the URL for the service to match your own.
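
Here is a sketch of the service built on the Angular 4-era Http module; the range and step for “r”, the method name, and the request fan-out are illustrative assumptions rather than the original implementation:

import { Injectable } from '@angular/core';
import { Http } from '@angular/http';
import { Observable } from 'rxjs/Observable';
import 'rxjs/add/observable/from';
import 'rxjs/add/operator/map';
import 'rxjs/add/operator/mergeMap';

@Injectable()
export class IterationsService {

  // update this to the URL of your own function
  private url = 'https://angularsvc.azurewebsites.net/api/xIterate?r=';

  constructor(private http: Http) { }

  // emits [r, x] tuples: one request per "r", many "x" values per response
  getIterations(): Observable<[number, number]> {
    const rValues: number[] = [];
    for (let r = 2.5; r <= 4.0; r += 0.01) {
      rValues.push(r);
    }
    return Observable.from(rValues)
      .mergeMap(r =>
        this.http.get(this.url + r)
          .map(response => response.json() as number[])
          .mergeMap(values => values.map(x => [r, x] as [number, number])));
  }
}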

Edit the app.component.html template to add a canvas that will host the diagram:
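
Assuming a template reference named #canvas (the name and dimensions are illustrative), the addition is a single element:

<canvas #canvas width="600" height="400"></canvas>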

Finally, edit app.component.ts to call the service and populate the diagram.
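
Here is a sketch of what that component could look like, assuming the #canvas reference and the getIterations() observable from the sketches above; the axis ranges and plotting logic are illustrative:

import { AfterViewInit, Component, ElementRef, ViewChild } from '@angular/core';
import { IterationsService } from './iterations.service';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html',
  styleUrls: ['./app.component.css']
})
export class AppComponent implements AfterViewInit {

  @ViewChild('canvas') canvas: ElementRef;

  constructor(private iterations: IterationsService) { }

  ngAfterViewInit() {
    const element = this.canvas.nativeElement as HTMLCanvasElement;
    const ctx = element.getContext('2d');
    // each [r, x] tuple becomes a single pixel: r maps to the horizontal axis,
    // x to the vertical axis (flipped so larger values appear higher)
    this.iterations.getIterations().subscribe(([r, x]) => {
      const px = ((r - 2.5) / 1.5) * element.width;
      const py = element.height - (x * element.height);
      ctx.fillRect(px, py, 1, 1);
    });
  }
}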

Now, the app is ready to run. Use the Angular-CLI to launch a local server: 

ng serve

And, if it doesn’t open for you automatically, navigate to the local port in your web browser:

http://localhost:4200

You’ll see that the app loads, but then nothing happens. If you open the console, it will be riddled with errors like this:

corsproblem

The problem is that Cross Origin Resource Sharing (CORS) hasn’t been configured for the function, so for security reasons the browser is blocking requests to the Azure app function from the localhost domain. To fix this, open the function app back up in the Azure Portal, tap “Function app settings” in the lower left and choose “Configure CORS.”

step10configurecors

Add http://localhost:4200 to the list of allowed domains, save it, then refresh your browser. If all goes well, you should see something like this:

localresult

That’s great – now you have a serverless API, but what about the website itself? Fortunately, the Angular app is completely client-side, so it can be exported as a static site. Let’s configure Azure Blob Storage to host the website assets. Add a new resource to the existing resource group. Search for “blob” and choose the “Storage account” option.

step11blob

Give it a name like “ngweb”, choose “Standard”, pick a replication level (I chose the default), and pick a region.

step12nameit

Once this storage is provisioned, tap on “Blobs” for the blob service. The blob service is segmented into named containers, so create a container to host the assets. I named mine “bifurc” for the app. The access type is “Blob.”

step13createblob

After the container is provisioned, you can choose “container properties” to get the URL of the container. Copy that down. In your Angular project, generate an optimized production build:

ng build --prod --aot

This will build the static assets for your application in the “dist” folder. Open dist/index.html and edit the file to update the base URL. This should point to your container (the final slash is important):

<base href="https://ngweb.blob.core.windows.net/bifurc/">

Back in the Azure portal, the container provides an “upload” option to upload assets. Upload all of the files in “dist” as “block blob” until your container looks something like this:

step14uploads

You can either navigate to the container URL you saved earlier or click on index.html and copy its URL. You should see the app try to load, but it will fail. CORS strikes again! Head back over to the function service and add the blob storage domain (just the domain, the path is not necessary) to the list of allowed servers. For this example, I added https://ngweb.blob.core.windows.net/.

Eureka!

At this point you should have a working application. If you want to host this for production and give it a more user-friendly name, you can use a CNAME entry in DNS to point a custom domain name to blob storage.

Of course, this demonstration only scratched the surface of what is possible. You can build authentication and authorization into Azure Functions, securely store secrets like database credentials, trigger functions based on resource changes (such as running one when a SQL table is updated or a file is uploaded into blob storage) and much more.

In just minutes developers can now provision a highly elastic, scalable, and resilient web application without having to worry about how it is load-balanced. Your boss will give you extra points for saving cash because the functions and blobs are charged based on usage – no more ongoing costs to keep a VM running when it isn’t being used. All of these resources support deployment from a continuous deployment pipeline and in Azure you can create an automation script to generate a template for building out multiple environments like Dev, QA, Staging, and Production.

Enjoy your new super powers!

Wednesday, April 5, 2017

Celebrating Twenty Years of Open Source

The recent announcement about CodePlex shutting down spurred me to migrate a few old projects and led to reflection about my past contributions to open source.

The concept of open source has been around for decades. I wrote my first line of code in 1981 and it was only a few years later that the GNU Project was launched with the vision of free software and massive collaboration. Before the Internet became ubiquitous, we would share code through local gatherings called “swap meets.”

Imagine convention halls filled with tables and power cords so computer enthusiasts (I think we were called “geeks”) could drag their twenty-pound towers and fifty-pound monitors around and copy disks from each other. OK, let’s be honest. There may have been a little bit of pirating going on at those meets, too.

Much activity in those days was focused on breaking or “cracking” increasingly sophisticated anti-copy security measures in commercial software. When successful, developers would leave their calling card by programming “intros” with personal brands and “shout-outs.” Many “crews” of developers created longer-running “demos” to show off their coding ability. When I was 14, I labored with a good friend through one summer to create a humbly named “8th wonder of the world.”

8th Wonder of the World

The capture doesn’t do it justice, as much of the flicker is non-existent on a “real” breadbox (what we called the Commodore 64). Even so, I was hardly keeping pace with groups like Crest, who wrote code that tricked the hardware into producing higher-resolution graphics than it was supposed to support and created images with 64k colors on an 8-bit machine that only had a palette of 16.

In those days we shared our creations by swapping disks or going through the painfully long process of uploading and downloading to and from bulletin board systems (BBS).

Open Source Quake?

As software evolved through phases like “Shareware” and launched successful franchises like Doom, I continued to stay passionate about programming. After Quake was released, I had a unique opportunity to dabble in gaming. I had previously avoided it due to a severe weakness with vector math and ray tracing equations, but Quake provided its own language and allowed me to focus on game logic rather than worry about trivial effects like 3D graphics and gravity.

I had some initial success with a modification I wrote under the nickname “Nikodemos” called Midnight Capture the Flag. An eager gamer shared the idea with me in a chat room but lacked the programming chops to pull it off, so I rolled up my sleeves and got to work. The concept was great because most modifications required downloading assets over extremely slow connections.

MidCTF was a “server modification” so it could be played without any downloads by the player. It took advantage of the revolutionary 3D audio Quake provided, and used an algorithm to scan levels, turn off all the lights, and equip the player with a flashlight.

midctf

Before GitHub and CodePlex we had well-known public FTP sites to upload code to, so I started uploading code and posting about it on various news sites. The MidCTF mod went viral (yep, we even had “viral” in those days) and brought me to the attention of a group that was writing a TC (short for “total conversion”) of Quake.

This game, called SWAT Quake, had completely custom gameplay.

swatbtn

swatani

I had to write from scratch features that are now commonplace. For example, there were limited team sizes so I had to queue players who connected but couldn’t play yet. To make their lives interesting, I figured out how to anchor “cameras” in levels, use algorithms to determine which cameras had the highest density of players within sight and auto-switch to view the action.

Another fun modification was “climbing gloves.” A fugitive trying to escape security could use these to climb walls. To make this work, I wrote an action to “engage” the gloves. While they were engaged, I shot a vector forward from the player and determined whether they were in contact with a “wall.” If they were, I applied an acceleration to the player slightly stronger than gravity in the opposite direction, so they would “climb.”
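
The original logic was written in QuakeC; a rough TypeScript rendering of the idea (the names and constants are illustrative, not the original code) looks something like this:

// simplified per-frame update for the "climbing gloves"
const GRAVITY = 800;      // downward acceleration, in game units per second squared
const CLIMB_FACTOR = 1.1; // apply slightly more than gravity so the player creeps upward

interface Player {
  velocityZ: number;      // vertical velocity
  glovesEngaged: boolean;
}

// touchingWall comes from tracing a vector forward from the player
function applyClimb(player: Player, touchingWall: boolean, dt: number): void {
  if (player.glovesEngaged && touchingWall) {
    player.velocityZ += GRAVITY * CLIMB_FACTOR * dt;
  }
  // gravity always applies, so the moment contact with the wall is lost,
  // the player starts to fall
  player.velocityZ -= GRAVITY * dt;
}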

It was a surprisingly realistic effect, because an opponent could shoot a rocket and miss, but the explosion would knock the player away from the wall and cause them to fall quite convincingly. I consider those initial Quake modifications my first “open source” projects that were posted on public servers. But what about business-focused open source?

The Real Job

After SWAT was released, I did the unthinkable: I started dating my soon-to-be wife, and I got a “real job.” This left me little time to focus on my gaming pursuits. At the time, open source was considered by most corporations to be completely incompatible with enterprise apps. I believe there were three driving factors that slowed adoption:

  1. Share and share-alike … no one wanted to share their proprietary “secret sauce” code, and the perception was that most libraries required this in order to be used.
  2. The perception that open source projects were in perpetual beta, never quite finished, not as polished as professional code and would be a nightmare to support.
  3. Security concerns.

All three are very valid reasons and haven’t gone away. What has changed however is the momentum with which projects have been adopted and accepted as a part of the enterprise. That would come much later, however. At the start of my professional career, movies like AntiTrust reflected the general sentiments at the time: corporations were greedy, code-mongering organizations who refused to give an inch back to the community, while open source developers worked out of garages with zero income.

For a decade, open source wasn’t even on my radar. Fast forward …

Sterling

On June 28th, 2010 I “checked in” the code for my first open source project to CodePlex. You can view the migrated commit on GitHub here. Like most popular projects, this wasn’t a case of building something I hoped people would use, but rather addressing a need I knew developers had. Sterling was and probably will always be my largest open source effort.

Sterling-Final-Small

A common complaint about Silverlight was that it had no local database. Although best practices at the time embraced an N-tier architecture that relied on web services to shuttle data to and from the browser, it was difficult to cache assets locally. Fortunately, Silverlight had the concept of “isolated storage” and just needed some help making it easier to store and retrieve data.

At the time I had been experimenting with reflection to dynamically create entities from JSON, and realized that it was not difficult to walk the properties of an object and use a binary writer to serialize them. Classes are just bags of properties, some of which are classes themselves. Sterling would allow you to define a database, register classes, and use lambda expressions to describe keys and indexes.

The first iteration of Sterling came fairly quickly, but it was the optimizations that made it unique. I quickly deduced that it would be faster to store properties and fields in a cache rather than use reflection each pass. The algorithms to handle arrays and dictionaries were long and unwieldy until I had a flash of insight that enabled me to refactor about 1000 lines of code that handled only a few cases to 300 lines that handled almost every case.

Home_sterlingspeed

The introduction of Windows Phone 7 led to a major surge of interest in Sterling. I rewrote the engine to decouple storage from the database logic, so I could use a common “core” then write adapters for the phone vs. Silverlight. This made it easy to write a server-side .NET version for caching as well. I introduced triggers, decoupled logging, provided a way for custom serialization schemes to handle complex types or just make “short cuts” if their serialization was already handled, and allowed “byte interceptors” that could manipulate the stream for compression and/or encryption.

Embracing Open Source

There was no doubt in my mind that Sterling would be open source. This not only enabled adoption and helped solve a real problem developers were having, but also enabled contributions from the community. I had several collaborators do everything from pushing bug fixes to writing new adapters and even porting the code over to the Windows Runtime when it came out.

Ultimately, Sterling did well for preserving state and querying small collections but it fell apart with extremely large datasets. Eventually solutions like SQLite were ported over and helped fill the demand. I had one fleeting offer from a major tools vendor to purchase Sterling that fell apart when the technology officer left for another company.

Jounce

On October 4, 2010 I committed the initial build for Jounce, a framework for building Silverlight business applications.

Jounce-Small

Again, Jounce was not an attempt to create something people would adopt, but a reaction to a problem that needed solving. After delivering dozens of Silverlight presentations, two things became evident to me: first, a lot of developers were building the applications the wrong way. They weren’t taking advantage of built-in features and were locking in code that would make it extremely difficult to migrate and support later on.

Second, although several frameworks like Prism existed, they were overly complicated for most developers and difficult to adopt. Developers were creating dependencies on a framework and then using only 10% of it.

I recognized that several successful Silverlight production projects I had written, including a social media analytics tool for Microsoft and a pilot slate application for Rooms To Go, had several patterns in common. They leveraged the Model-View-ViewModel (MVVM) pattern, used some simple command and messaging patterns, and relied on the Managed Extensibility Framework (MEF) for dependency resolution and dynamic module loading.

I encapsulated these features into Jounce, uploaded it to CodePlex, and created a video to show how easy it was to get started:

Jounce: Getting Started from Jeremy Likness on Vimeo.

Jounce was specifically designed for Silverlight and did not survive Silverlight’s demise just a year later. Some other powerful frameworks designed during that period, however, still live on, like MVVM Light.

Modern Times

The version control systems I used in the past centered around TFSVC (with one exception: I did install an instance of Darcs at one company and used it for a while). Git was a mystery to me. Some associates of mine insisted it was the future and that I should get to know it.

In January of 2014 I made my first commit to GitHub with jsEventAgg, a simple pub/sub platform for JavaScript. To really get to learn and understand git, I leveraged it for a talk at a code camp that featured an Angular Health App. I created the application and committed after each major step, then pulled the commits to show each stage of development.

This was the start of me moving entirely over to GitHub and all of my open source projects have been hosted there since.

The majority of open source projects I publish are proof-of-concepts, demos, and tutorials. There are a few exceptions. For example, jlikness.watch made it possible to see just how bad dirty checking could get in Angular 1.x apps, and qorlate made it easier to coordinate promises and even deal with them like streams (a la RxJs). jsInject is a functional JavaScript dependency-injection module designed to teach how Angular 1.x dependency injection worked, and my most recent module, micro-locator, provides service location for microservices.

For code snippets, I share JavaScript solutions via jsFiddle.net.

Containers

Perhaps the most amazing evolution of open source to me has been the introduction of containers. Although not necessarily tied to open source, containers facilitate open source development in ways that were never possible before.

For example, if you work on Angular apps you’ve probably been hit with the migration of the Angular Command Line Interface (CLI) to a new package name. If you’ve installed the latest version and have projects made with the old tool, they won’t work … that is, unless you leverage a solution like this ng2build container. It allows you to mount a path and run the older version of the CLI inside of a container.

As of this writing there are thousands of available containers, and popular ones have been pulled millions of times by developers. Of course, the technology that makes this possible (Docker) is, well, open source.

Just five years ago I had to ask special permission to use open source tools in enterprise projects. I thought that Microsoft might revoke my MVP status because the technologies I evangelized the most were open source. Today, it is fairly common to see open source in almost every project. Microsoft has its own open source, cross-platform IDE in Visual Studio Code, collectively through its employees makes the most open source contributions on GitHub, and even has a website dedicated to “the open source discussion.”

20 Years

Although several decades have gone by, one passion of mine has remained consistent. Software development transformed my life and my personal mission continues to be to empower developers to be their best. There has always been a platform for sharing code, best practices, and mentoring developers on new projects, but open source is the catalyst that not only transforms the lives of developers but boosts start-ups and injects new enthusiasm into large corporations that are learning the true meaning of “agile.”

One last bit of advice. I have done very well in my professional programming career, and open source always served to move it forward. There is a myth that “giving away” code somehow may diminish your value, but I disagree. Keep in mind there will always be proprietary algorithms and solutions that are a company’s “secret sauce” and I do not believe all code should be in the public domain. However, embracing open source and sharing common tools and libraries or even custom games gave me unique experiences and enabled me to be a part of programming history that I otherwise would not have had access to. I got to be an active participant in the longest interview.

One door closes, and other doors open. If you are not participating in an open source project or sharing your knowledge with the world, why not start today? You never know just who you might impact or how your contributions may transform technology as we know it. After all, most of the disruptive technologies we know of today (Netflix, Uber, PayPal, etc.) are largely composed of and on open source frameworks, libraries, platforms, and tools. 

Sunday, April 2, 2017

Build and Deploy a MongoDB Angular NodeJS App using nginx in Three Steps with Docker

Docker is a pretty amazing tool.

To prove it, I want to show you how you can build, deploy, and stand-up an N-tier Angular app backed by MongoDB in just three steps. Literally. Without installing any prerequisites other than Docker itself.

usdadiagram

First, seeing is believing. Once you have Docker installed (OK, and git, too), type the following commands:

git clone https://github.com/JeremyLikness/usda-microservice.git

cd usda-microservice

docker-compose up

It will take some time for everything to spin up, but once it does you should see several services start in the console. You’ll know the application is ready when you see it has imported the USDA database:

usdaimports

After the import you should be able to navigate to localhost and run the app. Here is an example of searching for the word “soy” and then tapping “info” on one of the results:

usdaweb

On the console, you can see the queries in action:

usdasearch

How easy was that?

Of course, you might find yourself asking, “What just happened?” To understand how the various steps were orchestrated, it all begins with the docker-compose.yml file.

The file declares a set of services that work together to define the app. Services can depend on each other and often specify an image to use as a baseline for building a container, as well as a Dockerfile to describe how the container is built. Let’s take a look at what’s going on:

Seed

The seed service specifies a Dockerfile named Dockerfile-seed. The entire purpose of this image is to stage the USDA data from flat files along with some helper scripts, then expose the data through a volume so that it can be imported into the database. It is based on an existing lightweight Linux image, Ubuntu.

Containers by default are black boxes. You cannot communicate with them and are unable to explore their contents. The volume command exposes a mounting point to share data. The file simply updates the container to the latest version, creates a directory, copies over a script and an archive, then unzips the archive and changes permissions.

db

The db service is the backend database. It inherits from the baseline mongo image that provides a pre-configured and ready-to-run instance of mongodb.

You’ll notice a command is specified to run the shell script seed.sh. This is a bash script that does the following:

  1. Launch mongodb
  2. Wait until it is running
  3. Iterate through the food database files and import them into the database
  4. Swap to the foreground so it continues running and can be connected to

At this point, Docker has created an interim container to stage the data, then used that data to populate a mongo database running in a container created from the public, trusted registry image, and it is now ready for connections and queries.

descriptions

The next container has a directory configured for the build (“./descriptions”), so you can view the Dockerfile in that directory to discern its steps. This is an incredibly simple file. It leverages an image from node that contains a build trigger, which allows the image definition to specify how a derived image is built.

In this instance, the app is a Node app using micro. The build steps simply copy the contents into the container, run an install to load dependent packages, then commit the image. This leaves you with a container that will run the microservice exposed on port 3000 using node.
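
Purely as an illustration, a micro-based service along these lines would work; the collection name, field names, and query shape are assumptions, the 2.x mongodb driver is assumed, and “db” is the hostname supplied by the compose link:

const { send } = require('micro');
const { parse } = require('url');
const { MongoClient } = require('mongodb');

// "db" resolves to the linked mongo container inside the compose network
const connection = 'mongodb://db:27017/usda';

module.exports = async (req, res) => {
  const { query } = parse(req.url, true);
  const db = await MongoClient.connect(connection);
  try {
    // naive case-insensitive match against the description text, capped for safety
    const filter = query.text ? { description: new RegExp(query.text, 'i') } : {};
    const docs = await db.collection('descriptions')
      .find(filter)
      .limit(100)
      .toArray();
    send(res, 200, docs);
  } finally {
    db.close();
  }
};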

Going back to the compose file, there are two more directives: “links” and “ports.” In this configuration, the mongodb container is not available outside of the host or even accessible from other containers. That is because no ports are exposed as part of its definition.

The “link” directive allows the microservice to connect with the container running the database. This creates a secure, internal link – in other words, although the microservice can now see the database, the database is not available to any other containers that aren’t linked and not visible outside of the Docker host.

On the other hand, because this service will be called from an Angular app hosted in a user’s web browser, it must be exposed outside of the host. The “port” directive maps the internal port 3000 to an external port 3000 so the microservice is accessible.

This service exposes two functions: a list of all groups that the user can filter on, and a list of food descriptions that match the search text.

nutrients

Nutrients is another microservice that is set up identically to descriptions. It exposes the individual nutrients for a description that was selected. The only difference in configuration is that because it runs on the same port (3000) internally, it is mapped to a different port (3001) externally to avoid a conflict.

ngbuild

This image points to an Angular app and is used as an interim container to build the app (in production deployments it is more common to have a dedicated build box perform this step). I included this to demonstrate how powerful and flexible containers can be.

Inside the Dockerfile, the script installs node and the node package manager, then the specific version of the angular-cli used to build the app. Once the Angular CLI is installed, a target directory is created. Dependent packages are installed using the node package manager, and the Angular CLI is called to produce a production build with ahead-of-time compilation of templates. This results in a highly optimized bundle.

The compose file specifies a volumes directive that names “ng2”. This is a mount point to share storage between containers. The ngbuild service mounts “ng2” to “/src/dist”, which is where the build is output.

web

Finally, the web service hosts the Angular app. There is no Dockerfile because it is completely based on an existing nginx image. The “ng2” mount points to “/usr/share/nginx/html” which is where the container serves HTML pages from by default. The “ng2” shared volume connects the output of the build from ngbuild to the input for the web server in web.

This app uses the micro-locator service I created to help locate services in apps. The environment.ts file maps configuration to endpoints. This allows you to specify different endpoints for debug vs. production builds. In this case the root service is mapped to port 3000, while the nutrients service is mapped to the root of port 3001.

Even though the services are running on different nodes, the micro-locator package allows the code to call a consistent set of endpoints. You can see this in the descriptions component that simply references “/descriptions” and “/groups” and uses the micro-locator service to resolve them in its constructor.

They are mapped to the same service in configuration, but if groups were later pulled out to a separate endpoint, the only thing you would need to change is the configuration of the locator itself. The calling code remains the same.

The standard web port 80 is exposed for access, and the service is set to depend on descriptions so it doesn’t get spun up until after the dependent microservices are.

Summary

The purpose of this project is to demonstrate the power and flexibility of containers. Specifically:

  1. Availability of existing, trusted images to quickly spin up instances of databases or node-based containers, for example
  2. Least privilege security by only allowing “opt-in” access to services and file systems
  3. The ability to compose services together and share common resources
  4. The ease of setting up a development environment
  5. The ability to build in a “clean” environment without having to install prerequisites on your own machine

Although these features make development and testing much easier, Docker also provides powerful features for managing production environments. These include the ability to scale services out on the fly, and plug into orchestrators like Kubernetes to scale across hosts for zero downtime and to manage controlled roll-outs (and roll-backs) of version changes.

Happy containers!