Sunday, April 6, 2014

JavaScript Chaos with Canvas

I learned about chaos theory quite by accident. On a family vacation we stopped by a campground along the Blue Ridge Parkway in the Appalachian Mountains. I was not an outdoor enthusiast at the time, so while my parents were hiking I preferred to head over to the arcade to play Donkey Kong and Tempest. One rainy day even the video games were boring, so I sauntered over to the main cabin and started leafing through their modest library of books. In between westerns and romance novels I glimpsed a book with the intriguing title Chaos: Making a New Science and ended up reading it from cover to cover.

To put things in perspective, this was not the 2008 edition with a fancy new cover. This was the original 1987 edition and the copy I grabbed was only a year old. What I loved about the book was the fact it didn’t just cover various aspects of chaos theory but also shared the actual equations and algorithms used to generate the various fractals and graphics. At the time I had a graphing calculator and was able to program most of the examples.

The beauty of chaos is that most of its functions are iterative, and since "period 3 implies chaos," some very simple equations can produce impressive results. Take, for example, the bifurcation diagram. The equation it is based on is simple:

x1 = R * x0 * (1 - x0)

That is, to get the next value of x, take the previous value, multiply it by one minus itself, and scale the result by a factor called "R." R is a sort of arbitrary constant that relates to the environment: presumably lower R means fewer resources and higher R means more resources. If you plot across values of R, what you find is that in many cases the population does what you might expect and stabilizes at a certain value. For higher R, things get interesting. The population can explode, causing over-crowding and therefore starvation, resulting in a drop that then grows again, so the population fluctuates between two levels. As R approaches 4, things get even more interesting and you end up with a seemingly random spread of values. However, within the increasing values you can find little bands or breaks in the pattern where the population stabilizes to a single value again. If you were to blow up those regions you'd find they look very similar to the overall graph, a feature referred to as self-similarity.
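In fact, you can watch the stabilization happen with a few lines of plain JavaScript before involving the canvas at all. This is just a sketch with illustrative names, but the math is exactly the equation above:

```javascript
// Iterate the logistic map x = R * x * (1 - x) and see where it settles.
function iterate(r, x, count) {
    for (var i = 0; i < count; i++) {
        x = r * x * (1 - x);
    }
    return x;
}

// For a modest R the population converges to the fixed point 1 - 1/R.
var settled = iterate(2.5, 0.5, 100); // approximately 0.6
```

At R = 2.5 the value settles at 0.6; at R = 3.2 it bounces between two levels forever, and past roughly 3.57 the same two lines of math never settle at all.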

The addition of the canvas tag makes it easy to play with these equations in the browser. Working with a canvas is straightforward: define it in your HTML and reference it in your JavaScript, grab a context – in this case I'm using a 2-dimensional context – and use the context to draw. Here are the definitions for the iterator function.

var c = document.getElementById("c"),
        ctx = c.getContext("2d"),
        w = c.width,
        h = c.height,
        st = 2 / w,
        fx = function (x1, r) {
            return r * x1 * (1 - x1);
        },
        it = function (r) {
            var idx = 0,
                x = 0.5,
                xc = w * ((r - 2) / 2);
            while (idx++ < 2000) {
                x = fx(x, r);
                ctx.fillRect(xc, h - (x * h), 1, 1);
            }
        };

Note I'm computing the step size so that the R values map one per pixel across the canvas, but otherwise I'm just running through the iterator function and plotting 1 x 1 rectangles. Next, I set the fill style then iterate over R and use the timeout function to queue up the rendering without blocking too much of the UI thread:

ctx.fillStyle = "rgba(32,64,128,0.05)";
    for (var r = 2; r < 4; r += st) {
        (function (r1) {
            setTimeout(function () {
                it(r1);
            }, 0);
        })(r);
    }

This short amount of code produces an amazingly intricate graph, as you can see here:

Another example I love for its simplicity is called "the chaos game" and generates what is referred to as Sierpinski's gasket. The game is simple. I pick three starting points that help bound/define my shape. Then I pick a random point we'll call the "game point." In each "turn" of the game, I roll a 3-sided die to determine which one of the starting points I'm going to use. I calculate a point halfway between the game point and the base point I picked, and that becomes the new game point. That's it – each turn is picking one of three base points, traveling halfway to that point from an arbitrary spot, then plotting it. Here I define the initial points (you can see they form a sort of triangle), generate my game point, provide some helper functions for getting a random number and plotting a point, and create my iterator function that will run 10,000 times.

var c = document.getElementById("c"),
        ctx = c.getContext("2d"),
        w = c.width,
        h = c.height,
        rfn = function (b) { return Math.random() * b; },
        px = [0, w / 2, w],
        py = [0, h, h / 2],
        gx = rfn(w),
        gy = rfn(h),
        pl = function (x, y) { ctx.fillRect(x, y, 1, 1); },
        iter = 0,
        i = function () {
            pl(gx, gy);
            var pt = Math.floor(rfn(3));
            gx = (gx + px[pt]) / 2;
            gy = (gy + py[pt]) / 2;
            if (++iter < 10000) {
                setTimeout(i, 0);
            }
        };

To kick things off I set the fill style then call the iterator function. The result is amazing.
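If you want to sanity-check the math without a browser, the same game runs headless on a 1 x 1 square. This is just a proof sketch with illustrative names, accumulating points in an array instead of plotting them:

```javascript
// Chaos game on a unit square: three base points, no canvas required.
var px = [0, 0.5, 1],
    py = [0, 1, 0.5],
    gx = Math.random(),
    gy = Math.random(),
    points = [];

for (var n = 0; n < 10000; n++) {
    var pt = Math.floor(Math.random() * 3); // roll the 3-sided die
    gx = (gx + px[pt]) / 2;                 // travel halfway to the base point
    gy = (gy + py[pt]) / 2;
    points.push([gx, gy]);
}
// Every point stays within the bounding square; plot them and the gasket appears.
```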

These are just a few of the simple yet elegant ways JavaScript can chart the mysterious terrains of chaos theory. I also consider this a metaphor for just how far the web has come. I still remember writing these routines in C++ on my old IBM PC and waiting for 20 minutes while the bifurcation diagram rendered on an 800 x 600 display. It is amazing how fast JavaScript is able to crunch these numbers and the browser is able to render the graphics in mere seconds. Computing power and the web have certainly come a long way. 25 years later, I still get excited about chaos theory and am reminded of why I fell in love with programming in the first place: it only takes minutes to take an abstract idea like a mathematical formula and use code to turn it into a tangible, discernable image on your display. That, in my book, is magic.

If you want to learn more magic, check out Jeff Prosise’s lesson on the HTML5 Canvas API. Use promo code LIKNESS-13 to watch it for free!

Sunday, March 30, 2014

Angular: The Modern HTML5 Answer to Silverlight’s MVVM

First, let me say that I realize Silverlight in no way "owns" the Model-View-View Model pattern. I've written about this extensively in my article "MVVM Explained." However, I believe the pattern gained its broadest public exposure through various implementations on that platform, and this is what led to its adoption on the web. In fact, you might find it strange to think that Microsoft could have influenced the sacred land of open source JavaScript, but if you trace the history of view models you'll find not only did they segue from WinForms to XAML technologies, but they initially made their way to the web via early data-binding implementations like KnockoutJS.

I have to admit I was extremely skeptical when I first saw the concept of creating a view model in a browser. As much as I loved XAML's data-binding, it felt like a square peg in a round hole. What concerned me the most was the extra work involved to take an ordinary JavaScript object (JSON) and turn it into a view model using things like ko.observable. This has been vastly improved and simplified, and as I began to adopt the strategy for the web I quickly realized it made sense. It wasn't that some developers were trying to force old habits into a new area of development. Instead, they were taking tried and true techniques for building scalable applications and development teams and porting them over to what was often considered the "wild west of JavaScript." I mean, how on earth do you domesticate those duck types?

In this article I'll compare the Silverlight technology with its modern equivalent and explain why, used correctly, these patterns should make life easy, not difficult.


XAML stands for Extensible Application Markup Language. It was born in the .NET world for WPF and Silverlight but lives on with a managed WinRT implementation in Windows 8.1 and later. It is often mistaken for a "UI language" because that's what it was great at: declaratively defining your user interface. I say "mistaken" because XAML does not have to be tied to anything visual. It is really just a simple way to create rich object graphs in a declarative fashion. For example, I could do something like this imperatively through code:

var car = new Car();
car.engine = new Hemi({ cylinders: 8 });

Or take advantage of XAML to declaratively create it like this:

<Car>
    <Car.engine>
        <Hemi cylinders="8"/>
    </Car.engine>
</Car>

The declarative method is very well suited for defining rich object graphs which is how we design most user interfaces. It is not as well suited for logic and flow control. It turns out that non-programmers can learn declarative syntax far more readily than the imperative rules of most programming languages. I assume this is why website designers outnumber web developers by a long shot, but I’m more interested in the fact that I can work with my designer and they can edit the code and understand the markup.

HTML5 is the “XAML” for Angular apps and the concepts are very similar.


Any programmer worth their salt is going to cringe when they have to write the same block of code more than a few times. The principle has been codified as Don't Repeat Yourself, or DRY, and when it comes to the UI this is done through controls, templates, and even control templates. In Silverlight you could create different types of templates, such as control templates to define the structure and look and feel of a reusable component (in Angular we do that with HTML and CSS), but you could also create data templates that showed how to transform data into a visual representation. A typical Silverlight data template might look like:

<DataTemplate x:Name="TestTemplate">
    <StackPanel Orientation="Horizontal">
        <TextBlock x:Name="tbName" Text="{Binding Name}"></TextBlock>
        <Button x:Name="btnEdit" Width="20" Content="Edit" Click="btnEdit_Click"></Button>
    </StackPanel>
</DataTemplate>

The same scenario played out in an Angular app uses directives and "mustaches" to define a template like this:

<div class="row-fluid">
    <div class="text-sm">{{name}}</div>
    <button ng-click="edit()">Edit</button>
</div>

See anything similar? Are we really talking about the difference between Mars and Venus here? I don’t think so! You could also define custom controls of various types for reuse, just as Angular enables through its directives. (It turns out this isn’t a unique problem, so JavaScript will address it directly via the Web Components specification).

Dependency Properties and Data-Binding

Of course nothing would work in the above examples if it weren't for data-binding. In Silverlight, you implemented data-binding in one of two ways. You could either implement the INotifyPropertyChanged interface that you call any time a property, well, changed, or you could derive from a special base class called DependencyObject that enabled dependency properties. It should be no surprise that the KnockoutJS library, which pioneered the JavaScript view model and sprang up within Microsoft projects, followed this approach by wrapping plain properties in "observable" functions.
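To picture the overhead involved, here is a hypothetical sketch of what an observable wrapper boils down to. This is illustrative only, not KnockoutJS's actual implementation:

```javascript
// A hypothetical observable: a function that reads when called with no
// arguments and writes (then notifies subscribers) when called with one.
function observable(initial) {
    var value = initial,
        subscribers = [];
    var accessor = function (newValue) {
        if (arguments.length === 0) {
            return value; // read
        }
        value = newValue; // write
        subscribers.forEach(function (fn) { fn(newValue); });
    };
    accessor.subscribe = function (fn) {
        subscribers.push(fn);
    };
    return accessor;
}

var name = observable("Jeremy");
name.subscribe(function (v) { console.log("Changed to " + v); });
name("Noel"); // logs "Changed to Noel"
name();       // returns "Noel"
```

Every plain property had to be hand-wrapped this way before the view could observe it, which is exactly the ceremony that put me off at first.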

Thank goodness the Angular team took a different approach!

It was precisely the amount of overhead to take an ordinary property and make it “observable” that drove most Silverlight developers nuts. We even went to great lengths to build highly custom solutions that taught the static C# language how to have some dynamic fun.

Angular uses dirty checking instead. It takes any plain object, parses your markup, and creates listeners to events and watchers that react when a part of the model mutates. These, taken together, form a digest loop that enables changes to propagate through the system. Although this can be less efficient than explicit object implementations, it is tremendously easier to implement because your existing JSON objects just work “as is.” You don’t need to transform them when you hydrate them from a service call, for example. The digest loop itself will get more efficient with new tools like Zone and will go away entirely when wizards with tall hats who wave their wands over JavaScript enable Object.observe as harmony (ECMAScript version 6) comes knocking.
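The mechanics of dirty checking are easy to sketch in plain JavaScript. This is a simplified, hypothetical model of what Angular does – real scopes handle far more – but it shows how watchers and the digest loop cooperate:

```javascript
// Simplified model of a scope with dirty checking (not Angular's actual source).
function Scope() {
    this.$$watchers = [];
}

// Register a watch function (reads a value) and a listener (reacts to changes).
Scope.prototype.$watch = function (watchFn, listenerFn) {
    this.$$watchers.push({ watchFn: watchFn, listenerFn: listenerFn, last: undefined });
};

// Keep looping until no watcher reports a change, or give up if it never settles.
Scope.prototype.$digest = function () {
    var dirty, ttl = 10;
    do {
        dirty = false;
        for (var i = 0; i < this.$$watchers.length; i++) {
            var watcher = this.$$watchers[i],
                value = watcher.watchFn(this);
            if (value !== watcher.last) {
                watcher.listenerFn(value, watcher.last, this);
                watcher.last = value;
                dirty = true;
            }
        }
        if (!(ttl--)) {
            throw new Error("Digest did not settle");
        }
    } while (dirty);
};

var scope = new Scope();
scope.name = "chaos";
scope.$watch(
    function (s) { return s.name; },
    function (newValue) { console.log("name is now " + newValue); }
);
scope.$digest(); // logs "name is now chaos"
scope.name = "order";
scope.$digest(); // logs "name is now order"
```

Notice the model is a plain object: nothing was wrapped, and the change was detected purely by comparing the current value to the last one seen.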


A neat flavor of the dependency property system in Silverlight was attached properties. These are properties you can project onto other entities. For example, a grid might want to keep track of what cell a piece of text is in. The text doesn't know it's in a grid, but the grid will project a column and row number onto it anyway to keep track. In C# this felt like magic. Attached properties made it possible, and the logical extension of that was behaviors: reusable snippets of interactivity that you could attach to elements. For example, one behavior might automatically detect a size change and crop the geometry of an object. Another might force data-bindings to update as a user typed input in a text box.

In the Angular world behaviors are first class. JavaScript, and by extension the DOM itself, is dynamic and extensible out of the box. You can easily add new functions and properties to existing functions and objects to extend them as you need. It would be nice to have some markup that could make something appear based on a condition, animate, or even format itself like a grid … and indeed, with Angular we can. In the declarative HTML world, Angular's directives once again come to the rescue, this time by providing testable, reusable behaviors.

Value Converters

In Silverlight, value converters help transform data into information. You won’t believe how many Silverlight developers would break into fisticuffs at conferences over whether it was acceptable to format a date in a view model or if it should be done in a reusable value converter that is declaratively attached in XAML. I tended to like the converters because I could write and test them independently and easily remove or add them when designers changed their mind (I know it doesn’t happen, of course, just being hypothetical).

Angular makes it extremely easy to create the same thing in what is called a "filter." The syntax is far less verbose. Here's the sample with both approaches – again, we're on the same page. Can you start to envision how your Silverlight XAML will translate to Angular yet?

<!-- Silverlight -->
<local:DateConverter x:Key="convertDate"/>
<TextBlock Text="{Binding CurDate, Converter={StaticResource convertDate}}"/>

<!-- Angular -->
<div class="row-fluid">{{curDate | date:'shortDate'}}</div>


View Models

Behind all of this data-binding is the construct known as the view model. In Silverlight, massive debates raged over precisely what each "leg" of the MVVM stool was and meant. There were the "no code-behind" purists, the "getter vs. value converter" camps, and the "model is data" vs. "model is behavior" debates. I tend to keep things simple. When I speak of MVVM, I'm referring to the model, which is most of your app: it's how you are dealing with and solving the problem your app addresses. It includes services, validation logic, business rules, etc. You can choose to pass around a "data model" to simplify things, but you have to have something that supports the rest of the app. The next piece is the view, which is the UI, pure and simple. Finally, you have the view model that serves to synchronize the state of the view. It does this through data-binding, whether through bound properties or events.

Angular keeps these aspects fairly distinct. The reason you want this separation is not to make building a page more complicated. If that happens, you've failed. It should get easier, partly because you can build pieces like the view and the view model in parallel, and partly because you can test the view model without the view, so you don't have to spin up XAML (Silverlight) or a phantom browser (Angular) just to make it work. In Angular, you can either use a controller with $scope – the two together make a view model – or, more recently, use the "controller as" convention, where the thing we call a controller really acts as a view model in the browser.


In Silverlight, view models could implement IDataErrorInfo or other versions of the same interface (depending on whether you wanted synchronous or asynchronous validation). The implementation would hold a dictionary of grievances by property that controls then recognized and rendered. A chunk of XAML that supported validations might look like this:

<TextBox Text="{Binding Name, Mode=TwoWay, ValidatesOnDataErrors=True}"/>

This would result in highlighting the input field and showing the validation exception to the side. Angular uses a combination of built-in HTML attributes and directives to facilitate declarative validations:

<input type="email" name="userEmail" ng-model="userEmail" required />

This would automatically generate two validations for the field: an email validation, and a required validation. You can also create custom error conditions and write JavaScript logic for validation. Either way, the errors are exposed for data-binding like this:

<span ng-show="form.userEmail.$error.required">Email is required.</span>
<span ng-show="form.userEmail.$error.email">You must enter a valid email.</span>

Again, very similar concepts and implementation between XAML/C# and HTML5/JavaScript.


Of course an app is only as good as what it does, and a lot of what apps do is communicate with servers. Fortunately, I've found that a well-constructed Silverlight app is also mostly ready to be a well-constructed Angular app. I've also found that the disconnect when talking about patterns often comes when one person is used to working as a team of one on a product over a period of time, versus developers who are used to working with larger teams and separate design teams in parallel. The former often bemoan the complexity of "those patterns" and want to know why I'd ruin the perfect thing they have going with jQuery. For their situation, they may be right.

The instant you start building a larger team and trying to scale out work, however, things change. I have several examples of this from my Silverlight days. On one project, the design was built in parallel to the development effort – 6 months each. On another project, the services weren’t stood up yet and were being built in parallel. In both cases, the structure of separating the design (view) from the presentation logic (view model) and business logic (services, app model) paid off.

For the former, we agreed that the design might change how something was represented, but not what would be represented. That meant I could build view models (controllers, $scope) and test them fully, as well as stand up the API for "doing stuff" independent of the design. As the design clicked into place, it was just tweaks to XAML (HTML5) and we were out the door.

For the second, I abstracted the idea of the service call from the view model (controllers, $scope). I knew I would need to get a list of widgets, or display a given widget, so our team built those facets and just mocked the service layer. Once the services were stood up, we adapted the façade to make real-time calls and cut over the project quickly. What's nice about this architecture is that this type of app is fairly easy to port over to Angular. No, I don't have some magic HTML5 generator that runs over the XAML, but we could analyze the app and see what filters (value converters) and validations were needed, what view models (controllers, $scope) looked like, and start to build out the UI without having to touch the back end. The same services were there (granted, if you are using something like WCF with SOAP or OData this is more complicated because Angular speaks best with REST out of the box, but there are plenty of client libraries for bridging those gaps as well).

What's really cool is that if you follow TDD, your specs are already written. You can take the same spec you wrote for your view model (given user, when email is invalid, then prevent submission) and write it again for your Angular controller as you port code over. How satisfying is it to see two sets of identical green specs, one from the Silverlight Unit Testing Framework and the other from Jasmine?


A lot of people invested time and energy into Silverlight over the course of its history. As HTML5 was emerging I was a very vocal proponent for Silverlight as I watched the browser wars clash over consistent implementation. The decision was taken out of my hands when browsers began refusing the Silverlight plug-in and mobile smart phones and tablets exploded. I quickly became a convert for HTML5 and JavaScript for line of business apps (initially I felt they would all need to be native) as the specification matured and frameworks like Angular emerged along with tools like TypeScript that addressed my main concerns. Although a lot of people were emotional about Silverlight, it turns out the reasons they loved it were because of features like data-binding, testability, the language it was written in, reusability, etc. These same patterns are now being brought to bear on the web and not only make it easier to build enterprise apps with large teams using JavaScript and HTML5, they also provide a path for migrating legacy systems over. My colleague Noel has written a very in-depth whitepaper about our experience with this, and I highly recommend you go grab a copy to read yourself.

Sunday, February 9, 2014

Use Zone to Trigger Angular Digest Loop for External Functions

To continue my series on the power of Zone, I'll examine another useful way you can use zones to improve your application. If this is your first time learning about Zone, read my introduction titled Taming Asynchronous Tasks in JavaScript with Zone.js. Anyone familiar with Angular apps has run into the concept of the $digest loop. This is essentially a pass to update data-binding. When the model is mutated, which can happen in a number of ways, anything observing the model is notified. The watchers may also mutate the model further, which results in a recursive call until either the changes settle or the maximum recursion depth is reached.

The problem with this approach is that Angular is only aware of changes that happen within the loop. Directives that ship with Angular are automatically called within the loop, so their changes are propagated. It is not uncommon, though, to introduce a third-party library that mutates the model somehow. When this happens, Angular is not aware of the changes because they happen outside of the loop. When you are able to intercept the update yourself, you can wrap it in a call to $apply, which will notify Angular of the change. However, you don't always have the luxury of intercepting third-party modules.

To demonstrate how Zone can help, I created a simple (contrived) scenario. Assume you have a simple clock you wish to display:

<div ng-app="myApp" ng-controller="myController">
    {{timer.time | date:'HH:mm:ss'}}
</div>

The timer, however, comes from a third-party control. It exposes a time object and updates it on an interval, but you can only kick it off – you have no control of the actual code that makes the updates. Again, to keep it simple, assume this amazing control works like this (remember, you only have access to externalTimeObj, and something else kicks it off):

var externalTimeObj = {
    time: new Date()
};

setInterval(function () {
    externalTimeObj.time = new Date();
}, 1000);

In Angular, you can capture the timer and place it on your scope:

var app = angular.module("myApp", []);
app.value("timerObj", externalTimeObj);
app.controller("myController", function ($scope, timerObj) {
    $scope.timer = timerObj;
});

But the problem is you don't see any updates (here's the proof). Even though the timer object is updating, it happens outside of the digest loop so Angular isn't aware. What's worse, the only way to make the timer call $apply is to get in and modify its source code, which is never a great idea. If only there were a way to create an execution context that would automatically run the digest loop when done. Then you could call the initialization methods for the third-party timer control from within that execution context and capture the updates. Wait, there is! We have Zone.

First, create a zone that is dedicated to initiating the digest loop when its work is finished. OK, don't, because I've already done it and it looks like this:

var digestCapture = null;
var digestZone = (function () {
    return {
        digest: function () { },
        onZoneEnter: function () {
            if (digestCapture) {
                zone.digest = digestCapture;
                zone.onZoneEnter = function () { };
            }
        },
        onZoneLeave: function () {
            zone.digest();
        }
    };
})();

The digestCapture variable is necessary to allow Angular to initialize the call into the digest loop. If we bootstrap Angular in this zone we’ll create too much overhead because Angular methods that already execute in the digest loop will trigger a redundant call when the task is complete. Notice how once the digest call is captured, the check for it is removed by replacing the onZoneEnter function with a no-op.
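The enter/leave hook idea itself is easy to model. Here is a stripped-down, synchronous sketch (hypothetical names; real zone.js also patches asynchronous APIs like setTimeout so the hooks wrap their callbacks too):

```javascript
// Stripped-down, synchronous model of zone enter/leave hooks.
function SimpleZone(spec) {
    this.onZoneEnter = spec.onZoneEnter || function () {};
    this.onZoneLeave = spec.onZoneLeave || function () {};
}

SimpleZone.prototype.run = function (task) {
    this.onZoneEnter();
    try {
        return task();
    } finally {
        this.onZoneLeave(); // fires even if the task throws
    }
};

var log = [];
new SimpleZone({
    onZoneEnter: function () { log.push("enter"); },
    onZoneLeave: function () { log.push("leave"); } // imagine $digest here
}).run(function () {
    log.push("work");
});
// log is now ["enter", "work", "leave"]
```

Swap the "leave" hook for a call into Angular's digest and you have the essence of the approach below.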

There is no need to change any of the Angular code (that's why I love this approach – open to extensibility, closed to change). Simply add the code to capture the digest function:

app.run(function ($rootScope) {
    digestCapture = function () {
        $rootScope.$digest();
    };
});

There is also no need to change the third-party control. Instead, we just move the third-party control initialization into the zone (again, we're using the interval here, but this could be any call into the third-party API; once it happens in a zone, it is captured for that zone).

zone.fork(digestZone).run(function () {
    setInterval(function () {
        externalTimeObj.time = new Date();
    }, 1000);
});

That’s it! Now we’ve got a way to initialize our third-party controls and ensure any time they return from an asynchronous task that we are able to notify Angular our model has mutated by running the digest loop. See the code running yourself.