Wednesday, December 11, 2013

The Windows Runtime and the Web

A big “thank you” to the Chattanooga .NET Users Group for hosting my talk last night.

I presented “Windows Runtime and the Web.” Although this talk is similar to one I presented earlier at DevLink, I updated the content exclusively for Windows 8.1 and added some new features. The topics I cover include:

  • The WebView Internet Explorer control
  • Using HttpClient for RESTful service calls and more advanced manipulation of HTTP-based content
  • OData / WCF Data Services Client
  • SOAP clients
  • Syndication of RSS and Atom Feeds
  • WebSockets
  • TCP Sockets
  • Windows Azure Mobile Services (WAMS)
  • Live Tiles

I uploaded the full deck to SlideShare and the source code is all available on the WinRT Examples CodePlex site.

Here is the talk summary:

The Windows Runtime drives Windows 8.1 and the new Windows Store apps. It enables developers to build rich client apps that run natively on Windows 8.1 devices. In this session, Jeremy Likness explores the various built-in components and APIs that enable Windows Store apps to connect to SOAP, REST, and OData endpoints, syndicate RSS and Atom feeds, and connect to sockets and cloud services. Learn how these tools make it easy to build Windows Store apps that are alive and connected to the internet.

Friday, October 25, 2013

Throttling Input in AngularJs Applications using UnderscoreJs Debounce

There are numerous scenarios in which you’ll want to throttle input so that you aren’t reevaluating your filters every time the input changes. The more precise term is “debounce”: essentially you wait for the input to settle before you invoke a function, so you stop bouncing requests to the server. The canonical case is a user entering input into a text box to filter a list. If your filter involves some overhead (for example, it is implemented as a REST resource that executes a query against a backend database) you don’t want to keep rerunning the query and reloading the results while the user is typing. Instead, you want to wait for the user to finish typing the filter and then perform the task once.
A simple solution to this problem is here: http://jsfiddle.net/nZdgm/
Let’s assume you have a list ($scope.list) that you expose as a filtered list ($scope.filteredList) based on anything that contains the text typed into $scope.searchText. Your form would look something like this (ignore the throttle checkbox for now):
<div data-ng-app='App'>
    <div data-ng-controller="MyCtrl">
        <form>
            <label for="searchText">Search Text:</label>
            <input data-ng-model="searchText" name="searchText" />
            <br/>
            <input type="checkbox" data-ng-model="throttle">&nbsp;Throttle
            <br/>
            <label>You typed:</label> <span>{{searchText}}</span>
        </form>
        <ul><li data-ng-repeat="item in filteredList">{{item}}</li></ul>
    </div>
</div>

The typical scenario is to watch the search text and react instantly. This method handles the filter:
var filterAction = function($scope) {
    if (_.isEmpty($scope.searchText)) {
        $scope.filteredList = $scope.list;
        return;
    }
    var searchText = $scope.searchText.toLowerCase();
    $scope.filteredList = _.filter($scope.list, function(item) {
        return item.indexOf(searchText) !== -1;
    });
};

The controller asks the scope to $watch like this:
$scope.$watch('searchText', function(){filterAction($scope);});
This will fire every time you type. To settle things down, use the built-in debounce function that comes with UnderscoreJs. The function is simple: pass it a function to debounce along with a time in milliseconds. It delays actually invoking the function you pass until at least that much time has elapsed since the last call. In other words, if we use 1 second (which I did in this example to exaggerate the effect) and the function is called repeatedly while I’m typing in the search box, it will not actually fire until I stop typing and wait for at least 1 second.
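Under the hood, a debounce helper is just a resettable timer. Here is a minimal sketch of the core idea (UnderscoreJs’s real `_.debounce` supports more options, such as leading-edge invocation and cancellation):

```javascript
// Minimal debounce sketch: each new call cancels the pending timer,
// so the wrapped function only fires after `wait` ms of silence.
function debounce(fn, wait) {
    var timer = null;
    return function () {
        var context = this, args = arguments;
        clearTimeout(timer);            // reset the clock on every call
        timer = setTimeout(function () {
            fn.apply(context, args);    // fires once the input settles
        }, wait);
    };
}
```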
You may be tempted to simply debounce the filter action like this:
var filterThrottled = _.debounce(filterAction, 1000);
$scope.$watch('searchText', function(){filterThrottled($scope);});


However, this poses a problem. The debounce uses a timer, which ends up outside of Angular’s digest loop, so nothing will be reflected in the UI because Angular doesn’t know about it. Instead, you must wrap it in a call to $apply:
var filterDelayed = function($scope) {
    $scope.$apply(function(){filterAction($scope);});
};

Then you can watch it and only react once the input settles:
var filterThrottled = _.debounce(filterDelayed, 1000);
$scope.$watch('searchText', function(){filterThrottled($scope);});


Of course the full example provides a throttle so you can see the difference between the “instant” filtering and the delayed filtering. The fiddle for this again is online at: http://jsfiddle.net/nZdgm/
Enjoy!

Friday, October 11, 2013

My XML is Alive! An Intro to XAML

I have to admit I have been working with XAML for so long that I tend to take it for granted. That's why I was excited when the Gwinnett Georgia Microsoft Users Group requested a talk that would be an introduction to XAML. It seemed a great way to get back to why XAML is so powerful - and why its counterparts on the web like AngularJS are so effective. I enjoyed putting the talk together and, more importantly, had a fantastic time delivering it. The talk includes source code that steps through XAML, including a class library that illustrates how XAML instantiates objects even if they aren't "XAML aware", an example of how data-binding would be done manually if it weren't built into the XAML system, and samples that illustrate the Visual State Manager along with the MVVM pattern.

Here is the description of the talk:

Extensible Application Markup Language, better known as XAML (pronounced “zammel”), is a language developed by Microsoft that is based on XML. It provides a declarative way to instantiate rich object graphs – in other words, through XAML you are able to create instances of classes, set properties, and define behaviors. Most commonly used to describe the user interface for technologies like Silverlight, WPF, and Windows 8.1, XAML provides a separation of concerns between the presentation and business logic for an app and gives the designer the flexibility to create experiences that interact with code through data-binding. This enables design-time data and true parallel workflows between designers and developers. Jeremy Likness will walk you through XAML, including how it is used by various technologies and the advantages it provides when building applications.

You can grab the deck for the talk at SlideShare and the link to the source is on the last slide.

Enjoy!

Jeremy Likness

Thursday, September 19, 2013

10 Reasons Web Developers Should Learn AngularJS

There is no doubt that AngularJS – the self-proclaimed “superheroic JavaScript framework” – is gaining traction. I’ll refer to it frequently as just “Angular” in this post. I’ve had the privilege of working on an enterprise web application with a large team (almost 10 developers, soon growing to over 20) using Angular for over half a year now. What’s even more interesting is that we started with a more traditional MVC/SPA approach using pure JavaScript and KnockoutJS before we switched over to the power-packed combination of TypeScript and Angular. It’s important to note that we also added comprehensive testing using Jasmine, but overall the team agrees the combination of technologies has increased our quality and efficiency: we are seeing far fewer bugs and delivering features far more quickly.

Executive summary:

If you are familiar with Angular, this post may give you some ideas you hadn’t encountered before. If you know Angular and are trying to justify its adoption at your company or on your project, this post can provide you with some background information that may help. If you have no idea what Angular is, read on, because I’ll share why it’s so powerful and then point you to resources that will get you up to speed, quickly.

I can only assume other organizations are seeing positive results after adopting Angular. According to Google Trends the popularity of AngularJS (blue) compared to KnockoutJS (red) and “Single Page Applications” (yellow) is exploding.

[Google Trends chart: AngularJS (blue) vs. KnockoutJS (red) vs. “Single Page Applications” (yellow)]

One of the first single-track AngularJS conferences, ng-conf, sold out hundreds of tickets in just a few minutes.

This post isn’t intended to bash KnockoutJS or Ember or Backbone or any of the other popular frameworks that you may already be using and are familiar with. Instead, I’d like to focus on why I believe AngularJS is gaining so much momentum so quickly and is something anyone who works on web applications should take very seriously and at least learn more about to decide if it’s the right tool to put in your box.

1. AngularJS Gives XAML Developers a Place to Go on the Web

I make this bullet a little “tongue-in-cheek” because the majority of developers using Angular probably haven’t touched XAML with a 10-foot pole. That’s OK; the reasons why XAML became popular in the Microsoft world through WPF, Silverlight, and now Windows Store app development are important to look at because they translate quite well to Angular. If you’re not familiar with XAML, it is a declarative language based on XML used to instantiate object graphs and set values. You can define various types of objects with properties and literally mark up an entire set of objects that will get created. The most common objects to create are user interface elements such as panels and controls that compose a display. XAML makes it easy to lay out complex UIs that may change over time. XAML supports inheritance (properties defined as children of parents can pick up values set higher in the tree) and bubbles events similar to the HTML DOM.

Another interesting component of XAML is the support for data-binding. This allows there to exist a declared relationship between the presentation layer and your data without creating hard dependencies between components. The XAML layer understands there is a contract – i.e. “I expect a name to be published” and the imperative code simply exposes a property without any knowledge of how it will be rendered. This enables any number of testing scenarios, decouples the UI from underlying logic in a way that allows your design to be volatile without having to refactor tons of code, and enables a truly parallel workflow between designers and developers.

This may sound like lip-service but I’ve been on many projects and have seen it in action. I recall two specific examples. One was a project with Microsoft that we had to finish in around 4 months. We estimated a solid 4 months of hands-on development and a separate design team required about 4 months of design before all was said and done – they went from wireframes to comps to interactive mock-ups and motion study and other terms that make me thankful I can let the designers do that while I focus on code. Of course if we followed the traditional, sequential approach, we would have missed our deadline and waited 8 months (4 months of design followed by 4 months of coding). XAML allowed us to work in parallel, by agreeing upon an interface for a screen – "These are the elements we’ll expose.” The developers worked on grabbing the data to make those properties available and wrote all of the tests around them, and the designers took the elements and manipulated, animated, and moved them around until they reached the desired design. It all came together brilliantly in the end.

The other real world example was a pilot program with a cable company. We were building a Silverlight-based version of their interactive guide. The only problem was that they didn’t have the APIs ready yet. We were able to design the system based on a domain model that mapped what the user would experience – listings, times, etc. – then fill those domain objects with the APIs once they were defined and available. Again, it enabled a parallel workflow that greatly improved our efficiency and the flexibility of the design.

I see these same principles reflected in the Angular framework. It enables a separation of concerns that allows a true parallel workflow between various components including the markup for the UI itself and the underlying logic that fetches and processes data.

2. AngularJS Gets Rid of Ritual and Ceremony


Picture Credit: Piotr Siedlecki

Have you ever created a text property on a model that you want to bind to your UI? How is that done in various frameworks? In Angular, this will work without any issues and immediately reflect what you type in the span:

<input data-ng-model='synchronizeThis'/><span>{{synchronizeThis}}</span>

Of course you’ll seldom have the luxury of building an app that simple, but it illustrates how easy and straightforward data-binding can be in the Angular world. There is very little ritual or ceremony involved with standing up a model that participates in data-binding. You don’t have to derive from an existing object or explicitly declare your properties and dependencies – for the most part, you can just pass something you already have to Angular and it just works. That’s very powerful. If you’re curious how it works, Angular uses dirty tracking.
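If you’re curious what dirty tracking looks like in principle, here is a toy sketch (not Angular’s actual implementation): the digest loop compares each watched value to the last value it saw, fires listeners on changes, and keeps looping until a full pass is clean.

```javascript
// Toy dirty-checking scope: $watch registers a value function and a
// listener; $digest re-runs every watcher until nothing changes.
function Scope() {
    this.$$watchers = [];
}
Scope.prototype.$watch = function (watchFn, listener) {
    this.$$watchers.push({ watchFn: watchFn, listener: listener, last: undefined });
};
Scope.prototype.$digest = function () {
    var dirty;
    do {
        dirty = false;
        for (var i = 0; i < this.$$watchers.length; i++) {
            var w = this.$$watchers[i];
            var value = w.watchFn(this);
            if (value !== w.last) {
                w.listener(value, w.last, this); // react to the change
                w.last = value;
                dirty = true; // a listener may have changed other values
            }
        }
    } while (dirty); // loop until a full pass produces no changes
};
```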

Although I understand some other frameworks have gotten better at this, moving away from our existing framework – where we had to explicitly map everything over to an interim object just to data-bind – to Angular was like a breath of fresh air. Things just started coming together more quickly and I felt like I was duplicating less code (who wants to define a contact table, then a contact domain object on the server, then a contact JSON object that then has to be passed to a contact client-side model just to, ah, display details about a contact?)

3. AngularJS Handles Dependencies

Dependency injection is something Angular does quite well. I’ll admit I was skeptical we even needed something like that on the client, but I was used to the key scenario being the dynamic loading of modules. Oh, wait – what did you say? That’s right, with libraries like RequireJS you can dynamically load JavaScript if and when you need it. Where dependency injection really shines however is two scenarios: testing and Single Page Applications.

For testing, Angular allows you to divide your app into logical modules that can have dependencies on each other but are initialized separately. This lets you take a very tactical approach to your tests by bringing in only the modules you are interested in. Then, because dependencies are injected, you can take an existing service like Angular’s $http service and swap it out with the $httpBackend mock for testing. This enables true unit testing that doesn’t rely on services to be stood up or browser UI to render, while also embracing the ability to create end-to-end tests as well.

Single Page Applications use dynamic loading to present a very “native application” feel from a web-based app. People like to shout the SPA acronym like it’s something new but we’ve been building those style apps from the days of Atlas and Ajax. It is ironic to think that Ajax today is really what drives SPA despite the fact that there is seldom any XML involved anymore as it is all JSON. What you’ll find is these apps can grow quickly with lots of dependencies on various services and modules. Angular makes it easy to organize these and grab them as needed without worrying about things like, “What namespace does it live in?” or “Did I already spin up an instance?” Instead, you just tell Angular what you need and Angular goes and gets it for you and manages the lifetime of the objects for you (so, for example, you’re not running around with 100 copies of the same simple service that fetches that contact information).
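The lifetime management described above boils down to caching: a factory runs once, and every consumer receives the cached instance. A toy sketch of that idea (not Angular’s actual injector):

```javascript
// Toy injector: factories are registered by name, invoked lazily on
// first request, and the result is cached so every consumer shares
// the same instance (no "100 copies of the same service").
function Injector() {
    this.factories = {};
    this.instances = {};
}
Injector.prototype.register = function (name, factory) {
    this.factories[name] = factory;
};
Injector.prototype.get = function (name) {
    if (!(name in this.instances)) {
        // first request: build and cache the singleton
        this.instances[name] = this.factories[name](this);
    }
    return this.instances[name];
};
```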

4. AngularJS Allows Developers to Express UI Declaratively and Reduce Side Effects

There are many advantages to a declarative UI. I mentioned several when I discussed XAML earlier in this post, and HTML is in the same boat. Having a structured UI makes it easier to understand and manipulate. Designers who aren’t necessarily programmers can learn markup far more easily than they can programming. Using jQuery you end up having to know a lot about the structure of your documents. This creates two issues: first, the result is a lot of unstable “glue” code that is tightly coupled to changes in the UI, and second, you end up with plenty of “magic,” because it’s not evident from looking at the markup just what the UI will do. In other words, you may have a lot of behaviors and animations that are wired up “behind the scenes,” so it’s not apparent from looking at the form tags that any validation or transitions are taking place.

By declaring your UI and placing markup directly in HTML, you keep the presentation logic in one place, separated from the imperative logic. Once you understand the extended markup that Angular provides, code snippets like the one above make it clear where data is being bound and what it is being bound to. The addition of tools like directives and filters makes it even clearer not only what the intent of the UI is, but also how the information is being shaped, because the shaping is done right there in the markup rather than in some isolated code.

Maintaining large systems – whether large software projects or mid-sized projects with large teams – is about reducing side effects. A side effect is when you change something with unexpected or even catastrophic results. If your jQuery depends on an id to latch onto an element and a designer changes it, you lose that binding. If you are explicitly populating options in a dropdown and the designer (or the customer, or you) decides to switch to a third party component, the code breaks. A declarative UI reduces these side effects by declaring the bindings at the source, removing the need for hidden code that glues the behaviors to the UI, and allowing data-binding to decouple the dependency on the idea (i.e. “a list”) from the presentation of the idea (i.e. a dropdown vs. a bulleted list).

5. AngularJS Embraces ‘DD … Er, Testing

It doesn’t matter if you embrace Test-Driven Development, Behavior-Driven Development, or any of the other driven-development methodologies: Angular embraces this approach to building your application. I don’t want to use this post to get into all of the advantages and reasons why you should test (I’m actually amazed that in 2013 people still question the value), but I’ve recently taken far more of a traditional “test-first” approach and it’s helped. I believe that on our project, the introduction of Jasmine and the tests we included was responsible for reducing defects by up to 4x. Maybe it’s less (or it could be more) but there was a significant drop-off. This isn’t just because of Angular – it’s a combination of the requirements, good acceptance criteria, understanding how to write tests correctly and then having the framework to run them – but it certainly was easier to build those tests. (Photo credit: George Hodan)

If you want to see what this looks like, take a look at my 6502 emulator and then browse the source code. Aside from some initial plumbing, the app was written with a pure test-first approach. That means when I want to add an op code, I write tests for the op code and then turn around and implement it. When I want to extend the compiler, I write a test for the desired outcome of compilation that fails, then I refactor the compiler to ensure the test passes. That approach saved me time and served both to change the way I structured and thought about the application and to document it – you can look at the specs yourself and understand what the code is supposed to do. The ability to mock dependencies and inject them in Angular is very important, and as you can see from the example, you can test everything from UI behaviors down to your business logic.

6. AngularJS Enables Massively Parallel Development.

One of the biggest issues we encountered early in the project was developers stepping on each other’s toes. Part of this is just discipline – even with raw JavaScript you can follow patterns that make code more modular – but Angular took it to another level. That’s not to say it completely eliminates dependencies, but it certainly makes them easier to manage. As a specific case in point, there is a massive grid in the application that is used to drive several key operations. In a traditional JavaScript application it could have been a merge nightmare to scale this across a large team. With Angular, however, it was straightforward to break the various actions down into their own services and sub-controllers that developers could independently test and code without crashing into each other as often.

Obviously for larger projects, this is key. It’s not just about the technology from the perspective of how it enables something on the client, but actually how it enables a workflow and process that empowers your company to scale the team. 

7. AngularJS Enables a Design <—> Development Workflow.

OK, who am I kidding, right? You can get this with HTML and CSS and other fun technologies. The reason this gets its own bullet, however, is because of just how well it works with Angular. The designer can add markup without completely breaking an application that depends on a certain id or structure to locate an element and perform tasks. Instead, rearranging portions of the page is as easy as moving elements around, and the corresponding code that does the binding and filtering moves with it. Although I haven’t yet seen a savvy environment where the developers share a “design contract” with the UI/UX team, I don’t think it’s far off – essentially the teams agree on the elements that will be displayed, then design goes and lays them out however they want while development wires in the $scope with their controllers and other logic, and the two pieces just come together in the end. That’s how we did it with XAML and there is nothing preventing you from doing the same with Angular.

If you’re a Microsoft developer and have worked with Blend … wouldn’t it be cool to see an IDE that understands Angular and could provide the UI to set up bindings and design-time data? The ability is there; it just needs to be built, and with the popularity I’m seeing I don’t think that will take long.

8. AngularJS Gives Developers Controls.

One of the most common complaints I heard about moving to MVC was “what do we do with all of those controls?” The early perception was that controls don’t work/wouldn’t function in the non-ASP.NET space but web developers who use other platforms know that’s just not the case. There are a variety of ways to embrace reusable code on the web, from the concept of jQuery plugins to third-party control vendors like one of my favorites, KendoUI.

Angular enables a new scenario known as a “directive” that allows you to create new HTML elements and attributes. In the earlier example, the directive for the “data-ng-model” allowed data-binding to take place. In my emulator, I use a directive to create two new tags: a “console” tag that writes the console messages and a “display” tag that uses SVG to render the pixels for the emulator (OK, by this time if you’ve checked it out I realize it’s more like a simulator). This gives developers their controls – and more importantly, control over the controls.

Our project has evolved with literally dozens of directives and they all participate in previous points:

  • Directives are testable
  • Directives can be worked on in parallel
  • Directives enable a declarative way to extend the UI, rather than using code to wire up new constructs
  • Directives reduce ritual and ceremony
  • Directives participate in dependency injection

Remember how I mentioned the huge grid that is central to the project? We happen to use a lot of grids (as does almost every enterprise web application ever written). We use the KendoUI variant, and there are several steps you must take to initialize the grid. For our purposes, many of the configuration options are consistent across grids, so why make developers type all of the code? Instead, we enable them to drop a new element (directive), tag it with a few attributes (directives), and they are up and running.
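The idea behind such a grid directive can be sketched as a simple merge of shared defaults with per-instance overrides; the option names below are illustrative, not KendoUI’s actual API:

```javascript
// Shared grid defaults: every grid in the app gets these unless the
// markup's attributes override them, so developers type only what differs.
var gridDefaults = { pageable: true, sortable: true, pageSize: 20 };

function buildGridOptions(overrides) {
    var options = {}, key;
    for (key in gridDefaults) { options[key] = gridDefaults[key]; } // copy defaults
    for (key in overrides)    { options[key] = overrides[key]; }    // apply overrides
    return options;
}
```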

9. AngularJS Helps Developers Manage State.

I hesitate to add this point because savvy and experienced web developers understand the concept of what HTTP is and how to manage their application state. It’s the “illusion” of state that was perpetuated by ASP.NET that confuses developers when they shift to MVC. I once read on a rather popular forum a self-proclaimed architect declare that MVC was an inferior approach to web design because he had to “build his own state management.” What? That just demonstrates a complete lack of understanding of how the web works. If you rely on a 15K view state for your application to work, you’re doing it wrong.

I’m referring more to client state and how you manage properties, permissions, and other common cross-cutting concerns across your app in the browser. Angular not only handles dependency injection, but it also manages the lifetime of your components for you. That means you can approach code in a very different way. Here’s a quick example to explain what I mean:

One of the portions of the application involved a complex search. It is a traditional pattern: enter your search criteria, click “search” and see a grid with the results, then click on a row to see details. The initial implementation involved two pages: first, a detailed criteria page, then a grid page with a pane that would slide in from the right to reveal the details of the currently selected row. Later in the project, this was refactored to a dialog for the search criteria that would overlay the grid itself, then a separate full screen page for the details.

In a traditional web application this would involve rewriting a bit of logic. I’d likely have some calls that would get detail information and expect to pass them on the same page to a panel for the detail, then suddenly have to refactor that to pass a detail id to a separate page and have that page make the call, etc. If you’ve developed for the web for any amount of time you’ve had to suffer through some rewrites that felt like they were a bit much for just moving things around. There are multiple pieces of “state” to manage, including the selection criteria and the identifier for the detailed record being shown.

In Angular, this was a breeze. I created a controller for the search dialog, a controller for the grid, and a controller for the detail page. A parent controller kept track of the search criteria and current detail. This meant that switching from one approach to the other really meant just reorganizing my markup. I moved the details to a new page, switched the criteria to a dialog, and the only real code I had to write was a new function to invoke the dialog when requested. All of the other logic – fetching and filtering the data and displaying it – remained the same. It was a fast refactoring. This is because my controllers weren’t concerned with how the pages were organized or flowed – they simply focused on obtaining the information and exposing it through the scope. The organization was a concern of routing and we used Angular’s routing mechanisms to “reroute” to the new approach while preserving the same controllers and logic behind the UI. Even the markup for the search criteria remained the same – it just changed from a template that was used as a full page to a template that was used within a dialog.
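The reason a parent controller can hold shared state is that Angular child scopes inherit prototypally from their parent. A toy illustration of that mechanic (the property names here are hypothetical):

```javascript
// The parent scope owns the shared state objects.
var parentScope = { criteria: { text: "" }, detailId: null };

// Child scopes (one per controller) inherit prototypally, so they
// all see the same criteria object regardless of page organization.
var gridScope = Object.create(parentScope);
var detailScope = Object.create(parentScope);

// Mutate the shared object (don't replace it) and every scope sees it.
gridScope.criteria.text = "likness";
```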

Of course, this type of refactoring was possible due to the fact the application is a hybrid Single Page Application (SPA).

10. AngularJS Supports Single Page Applications.

In case you missed it, this point continues the last one. Single Page Applications are becoming more popular for a good reason. They fill a very specific need. More functionality is being moved to the web, and the browser is finally realizing its potential as a distributed computing node. By design, SPA applications are far more responsive (even though some of that is perception). They can provide an experience that feels almost like a native app in the web. By rendering on the client they cut down load on the server as well as reduce network traffic – instead of sending a full page of markup, you can send a payload of data and turn it into markup at the client.

In our experience, large apps make more sense to build as hybrid SPA apps. By hybrid I mean that instead of treating the entire application as a single page application, you divide it into logical units of work or paths (“activities”) through the system and implement each of those as a SPA. You end up with certain areas that result in a full page refresh, but the key interactions take place in a series of different SPA modules. For example, administration might be one “mini” SPA app while configuration is another. Angular provides all of the necessary infrastructure to stand up a functional SPA, from routing (taking a URL and mapping it to dynamically loaded pages) and templates to journaling (deep linking, and allowing users to use the built-in browser controls to navigate even though the pages are not refreshing), and it is quite adept at letting you share all of the bits and pieces across individual areas or “mini-SPA” sections to give the user the experience of a single application.
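The routing piece can be pictured as a simple map from paths to template/controller pairs. This sketch only mimics the shape of Angular’s route configuration; it is not the actual API:

```javascript
// Each "mini-SPA" area declares which template and controller handle
// a given path; unknown paths fall back to a not-found template.
var routes = {
    "/search": { templateUrl: "search.html", controller: "SearchCtrl" },
    "/detail": { templateUrl: "detail.html", controller: "DetailCtrl" }
};

function resolve(path) {
    return routes[path] || { templateUrl: "notfound.html" };
}
```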

Conclusion

Whether you already knew about Angular and just wanted to see what my points were, or if you’re getting exposed for the first time, I have an easy “next step” for you to learn more. I recorded a video that lasts just over an hour covering all of the fundamentals you need to get started with writing Angular applications today. Although the video is on our “on demand training” site, I have a code you can use to get free access to both the video and the rest of the courses we have on WintellectNOW. Just head over to my Fundamentals of AngularJS video and use the code LIKNESS-13 to get your free access.

Monday, September 9, 2013

Synergy between Services and Directives in AngularJS

You’ve probably heard it a thousand times now. “AngularJS teaches HTML new tricks.” The way it does that is through directives. In my last related post I covered how to build a testable filter. Directives can be tested in a similar fashion, but what happens when they have to interact with the rest of your application? Instead of teaching your controllers how to talk to UI components, or overloading the $scope object, look to services as the mortar to hold pieces of your app together.

The console in my 6502 simulator is rendered with a single directive:

<div class="column"><console></console></div>

Instead of walking you through the directive, however, I’d like to start with the specs for the console. Here are the relevant specs:

[screenshot of the console service specs]

The size of the buffer is taken from a set of global constants that are configurable and will automatically update the tests. That’s it – that’s all I want from my console service. Notice that I don’t care how it is rendered yet, I’m just concerned about how the rest of my app will interface with it.

From the tests I was able to determine an interface for the service:

export interface IConsoleService {
    lines: string[];
    log(message: string);
}

And from the interface I could then easily implement a class that would pass the tests:

export class ConsoleService implements IConsoleService {
    public lines: string[];

    constructor() {
        this.lines = [];
    }

    public log(message: string) {
        this.lines.push(message);

        if (this.lines.length > Constants.Display.ConsoleLines) {
            this.lines.splice(0, 1);
        }
    }
}

Now any other component that needs to write to the console can simply have the service injected and log a message like this:

this.consoleService.log("CPU has been successfully reset.");
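Because consumers only depend on the logging contract, any component can take the service as a constructor dependency. This is a sketch, not code from the simulator – the CpuService name and reset method are assumptions; only the log call comes from the post:

```typescript
interface IConsole { lines: string[]; log(message: string): void; }

// Hypothetical consumer: it never knows how (or whether) the console
// is rendered; it only knows the logging contract.
class CpuService {
    constructor(private consoleService: IConsole) { }

    public reset() {
        // ...reset registers, clear memory, and so on...
        this.consoleService.log("CPU has been successfully reset.");
    }
}

// In the app, Angular injects the real service; any object satisfying
// the interface works, which is what makes this arrangement testable:
const captured: string[] = [];
const fakeConsole: IConsole = { lines: captured, log: (m) => { captured.push(m); } };
new CpuService(fakeConsole).reset();
console.log(captured[0]); // CPU has been successfully reset.
```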

What’s nice about AngularJS is that we now have a fully functional app even though we haven’t even considered the UI yet. For the UI, we’ll use a directive. The directive will have the console service injected and will watch it for changes. The template was simple enough that I decided to inline it. It is simply a styled div with a collection of spans separated by line breaks. The directive sets the scope to the array of messages. This automatically causes Angular to watch for changes and refresh/redraw the console when needed. The only other piece I added was my own watcher to scroll the div so that the latest lines are always displayed (otherwise you would just see the scrollbar grow while you are looking at the oldest lines). The entire setup is implemented like this:

public static Factory(
    consoleService: Services.ConsoleService) {
    return {
        restrict: "E",
        template:
            "<div class='console'><span ng-repeat='line in lines'>{{line}}<br/></span></div>",
        scope: {},
        link: function (scope: IConsoleScope, element) {
            var e: JQuery = angular.element(element);
            scope.lines = consoleService.lines;
            scope.$watch("lines", () => {
                var div: HTMLDivElement = <HTMLDivElement>$(e).get(0).childNodes[0];
                $(div).scrollTop(div.scrollHeight);
            }, true);
        }
    };
}

None of the components have any dependency on the console directive, so it can be changed, updated, or manipulated without impacting the rest of the app. The directive itself doesn’t even have a direct dependency on the console service – instead, the factory that returns the service has the service injected and uses it to set up the directive. You could easily mock or stub a simpler implementation of the console service to test out the directive.
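That stubbing might look like this. The angular-mocks registration is shown only in comments; the stub itself is a plain class satisfying the same interface:

```typescript
interface IConsoleService { lines: string[]; log(message: string): void; }

// A stub that satisfies the contract with canned data:
class StubConsoleService implements IConsoleService {
    public lines: string[] = ["line one", "line two"];
    public log(message: string) { this.lines.push(message); }
}

// In a Jasmine spec you would register the stub in place of the real
// service, e.g.:
//   module("app.directives");
//   module(($provide) => $provide.value("consoleService", stub));
// then compile "<console></console>" and assert that two spans render.
const stub = new StubConsoleService();
console.log(stub.lines.length); // 2
```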

For a more involved example, take a look at the display service and directive. The directive uses SVG to render a bunch of rectangles that are then colored according to the corresponding memory address, and a separate palette component generates the palette. Even with this more complex configuration, the CPU service simply informs the display service when memory changes, without any knowledge of how the display is being rendered. If I wanted to, I could switch to a canvas-based implementation and only have to change one component within the application.
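The shape of that seam can be sketched like this (all names here are assumptions based on the description; the real display service does much more):

```typescript
// The CPU only knows this contract – not SVG, canvas, or anything visual:
interface IDisplayService { draw(address: number, value: number): void; }

class CpuMemory {
    private memory: number[] = [];

    constructor(private display: IDisplayService) { }

    public poke(address: number, value: number) {
        this.memory[address] = value;
        this.display.draw(address, value); // notify; rendering is not the CPU's problem
    }
}

// Swapping SVG for canvas means replacing only the implementation below:
class RecordingDisplay implements IDisplayService {
    public drawn: Array<[number, number]> = [];
    public draw(address: number, value: number) { this.drawn.push([address, value]); }
}

const display = new RecordingDisplay();
new CpuMemory(display).poke(0x200, 0x0f);
console.log(display.drawn); // [ [ 512, 15 ] ]
```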

Although you can browse the full JavaScript source on the simulator site, the TypeScript project is available at CodePlex.

Monday, August 26, 2013

Full HP TouchSmart 15t-J000 Quad Edition Notebook Review

I finally decided to pull the trigger on a new laptop – this one is mainly for at-home use, so it’s a lot more powerful than my portable Lenovo Yoga 13 that I absolutely love. I mentioned before that I wanted to go lightweight for travel, but unfortunately I still haven’t found that ultimate laptop that has all of the specs in one package – i.e. fast SSD, high memory, thin form factor, etc. So, my trade-off will be to have a bigger laptop that I use here and take on the road only when needed, and then keep the lighter Yoga for when I travel or want to be portable around the house. The ASUS VivoTab, which has served me well (I use it every day), will go to my daughter and I’ll use the Yoga in its stead.

I’m not much of a do-it-yourself type of person, so I wanted something I could configure completely and order online. I’ve had tremendous success with electronics and, yes, even computers through Amazon.com (I am aware of NewEgg.com, TigerDirect.com, etc.) so I decided to go there. There are two different vendors I’ve worked with through the Amazon site and I’m completely satisfied with both. I received my Lenovo from Eluktronics, Inc. and the TouchSmart came from ULTRA Computers. In both cases I wanted something custom (for the Lenovo it was an i7 when the only listed model was an i5, and for the TouchSmart I wanted a graphics card added to the 500 GB SSD model). In both cases the sellers responded immediately, set up a SKU to satisfy my request, then built and shipped it promptly. Couldn’t be more pleased. I can’t recall the return policy from Eluktronics, but ULTRA Computers also offered me a 60-day return policy.

Configuration

To be completely honest, HP was not my first choice for a vendor – I’ve had a lot of success with ASUS, Dell, and Samsung products, and of course there is the MacBook series that can’t be ignored. However, after doing extensive research, the only configuration that matched what I wanted came from HP. Here are the hardware specs I ended up with:

Intel Core i7-4700MQ (2.4 GHz, 6 MB cache, 4 cores with 8 logical) – turbo boost up to 3.2 GHz
Intel HD Graphics 4600 on the board but added the NVidia GeForce GT 740 with 2 GB dedicated memory
16 GB DDR3 RAM
512 GB Samsung 840 SSD
Realtek 10/100/1000 Gigabit Ethernet LAN
Intel 802.11 b/g/n WLAN (does NOT appear to be dual-band), WiDi
4x USB 3.0 ports
1 full-size HDMI
1 RJ45
1 headphone/microphone combo
Multi-format card reader
15.6” diagonal LED-backlit touch screen 1080p
Full-size island keyboard with number pad
HP TrueVision HD WebCam
Integrated Dual Array Microphone
Beats Audio w/ 2 subwoofers, 4 speakers

Unboxing

Unlike the ASUS and Lenovo, the HP packaging was direct and to the point – not much invested in the presentation, but I really couldn’t care less about that. I want the goods. So here are the goods, sans boxes and packaging foam:

From the top:

WP_000510

Note the reflective sticker (you can see me in the reflection). I’ll share the first con here: build quality. Although this will work fine for me, some will expect a more expensive laptop to be solid throughout. The keyboard and bottom feel like solid aluminum, but tapping on the lid gives the impression it is plastic painted to look like metal. I’m not sure if it is, but it certainly gives that sound and feel. It’s a little like a cheaper Acer laptop I used to own – you can press hard on the lid and see artifacts on the display on the inside. Some people will run the other way because of this, but I’ve had plenty of laptops built this way and haven’t had an issue so it’s not a deal breaker for me.

Even though this laptop is larger (15.6”) and heavier (5+ lbs.), it feels nothing like the thick, brick-like Dell I had before. When just reading the specs I was concerned, but upon receiving it, my fears proved unfounded. There is a nice taper that, despite the power it is packing, gives it a slim face:

WP_000511

When facing the laptop, the left side has a security cable slot, vents, HDMI, USB 3.0 charging port, USB 3.0 port, digital card reader, and hard drive and power LEDs. This angle demonstrates the taper so you can see the tilt of the keyboard:

WP_000514

The right side features the dual audio out/in (microphone and headphones), 2x USB 3.0 ports, Ethernet status lights, full size RJ-45 jack, AC adapter light and the connector for the adapter.

WP_000512

Windows 8 and Support Woes

The second con was not really HP’s problem until I contacted support and discovered they were completely incapable of advanced troubleshooting. In the middle of installing my development environment, I did a reboot and suddenly found no Windows Store apps would launch. Apparently this is a widespread issue, because a few searches turned up a lot of frustrated people whose systems suddenly stopped allowing them to launch Windows Store apps. The common fixes (latest display driver, ensuring more than 1024 x 768 resolution, etc.) were already addressed on my machine. Another common tactic is to uninstall, then reinstall the apps – but my challenge was that the Windows Store app itself wouldn’t launch (even after clearing the cache). I tried the system refresh and that failed. HP Support was clueless and obviously following a script, so I ended up just reinstalling Windows 8.

This was a little challenging as well. After installing it (happens FAST due to the SSD) I went onto the HP site and asked it to identify my system. It identified the wrong system so when I installed the Intel Chipset driver it froze my system. That meant a wipe and reinstall.

This time I specifically selected the model I knew it was, and the drivers worked fine. Once I got through that step the system worked well (even better without the bloatware on it) and I was very satisfied. A major “pro” with Windows 8 is the profile synchronization. After installing the OS, most of my tiles “went live” with data. All of the Windows 8 apps remembered my settings and just started working. I was surprised by how much actually synchronizes – for example, I went to the command prompt and it was configured exactly the way I like it (I make it use a huge font so it’s easy to present with).

Keyboard and Number Pad

The larger form factor gave HP room to provide a generous keyboard.

WP_000506

The keys are laid out nicely. I love the “island style” (keys are recessed so the lid closes flush without contacting them) and the keys have good travel. Having a full-sized number pad is also great. So how is the keyboard overall? My response here will be a bit loaded, so bear with me. If I hadn’t purchased a Lenovo Yoga I would say this is a great keyboard – one of the best I’ve typed on. I have read that some people find it loses key presses, but that has not been an issue for me. It feels natural; I’m able to type extremely fast and don’t feel cramped. However, since I have used a Lenovo I can say it is still inferior to the Yoga keyboard – I still believe Lenovo makes the best keyboards in the industry. Anytime I feel the typing experience is great on the HP, I go over to my Yoga and suddenly it just feels better – the keys feel more solid, and the keyboard is a lot quieter than the HP’s.

The keyboard follows the ultrabook trend of wiring the function keys to actual functions rather than the function keys themselves, so a developer has to remember to hold down Fn + F5 when debugging for example. This can be turned off in the BIOS. The keyboard has substantial flex. When you press keys on the left side you can definitely see all of the keys around it moving. This is a deal breaker for some people. It doesn’t bother me and I wouldn’t have noticed it if I didn’t know to look for it but it is something to keep in mind (I suggest trying out a model if it’s a concern). The arrow keys are horrible – the up and down take up the space of a single key and I’m constantly missing the right one when I try to travel. I’ll need to retrain myself to use the arrows on the number pad instead. The page up/down/home/end keys are horizontal on the upper right while the Lenovo has them vertical so I am having to retrain myself there.

Touch Pad

I was pleasantly surprised by the touch pad. I’ve given up my mouse completely so I now exclusively use the touch pad, therefore it is important to me that it works well, is responsive and handles gestures for pinch, zoom, rotate, and scrolling. When I first started using the laptop I did not like the rough texture – the Yoga is glass and smooth so this felt a little odd to me. I had no issue with no separate left/right click buttons because that is how the Yoga is configured and I’m used to it. What is interesting is now that I’ve used it a couple of days, I’m more used to it now and the Yoga feels “slippery” with the glass. I guess it’s a question of what you use more.

The pad itself is incredibly responsive. I had read some concerns over quality but I haven’t found anything other than a glitch where two finger scrolling stopped working once. I had to reboot and it magically came back and hasn’t repeated itself but something to keep in mind. The Lenovo machines also suffered from these types of issues in the early days before the drivers stabilized. I use different gestures including Windows 8 app and charm bars and these all work flawlessly. Some people commented on the touch pad being off center. If you look at the image, while it is off center relative to the frame of the laptop, it is actually centered on the keyboard itself which is what is important and feels well-placed to me.

Wrist detection is fine and I have had no issues with accidentally brushing the touch pad and having the cursor jump when typing. I also haven’t experienced the converse: i.e. the touch pad becoming unresponsive because a portion of my finger is resting on it. However, I must also add that I try to type ergonomically, which means I don’t rest my wrists on the keyboard – they brush it, but I do keep them elevated.

This entire blog post was composed on the new laptop.

Display

The model I have comes with a 1080p touch display. The display itself is an interesting combination of pros and cons. So, first the cons. Again, I’m spoiled by the Yoga because it has a beautiful, clear display with incredible viewing angles. It is bright and readable from almost any angle, and while it is not fully matte (so I do get some reflections), they don’t interfere with everyday use. The HP display, on the other hand, is extremely glossy. I purposely took a picture that shows my reflection as well as a strange line of “light” across the right side, which is the sunlight reflected through the edge of some blinds behind me. It is very reflective, more so than the Yoga, so I don’t imagine it would do well on a deck in the sunshine. Fortunately, I typically have it set up in my office with no sunlight, so it’s not a deal breaker, but again something to look at.

WP_000505

The viewing angles are also limited. Vertical is not so bad (you can see from straight on to down, but don’t try to look “up” from the bottom or it will wash out) but horizontal is a very narrow range. I guess optimistically you could say it has built-in privacy protection. Some people are very passionate about their viewing angles and I thought it would be a problem, but honestly after several days of use I haven’t noticed them. I’ve used it on my desk, on a table in the office, and on my lap and it works fine. When viewed straight on, it is very bright and clear.

That segues into the pros: the display is very crisp and clear. I don’t even bump the font size despite it being 1080p because I love my large workspace and can read even 8pt fonts fine. I used a monitor calibration tool and it did very well across the board (again, provided I am in the right viewing window). For what I do – heavy development and writing – it is perfect. Honestly I feel like I fooled myself into settling for the Yoga’s 720p display because it is just so much more functional for me to have the full 1080p. When you are in the optimal viewing angles the display is superb, it only loses that quality when you are viewing at extreme angles which typically wouldn’t be the case anyway. Although the picture above shows the glossiness of the display, I worked for a full day in that same environment with no issues or distractions because it is clear and bright when viewed straight on.

Fingerprint Scanner

How cool is that? I thought this would be more of a “nice to have” that gives someone bragging rights (I have Mission Impossible on my laptop), but now I’m going to be spoiled even more. It turns out that despite the possible added security (and I know fingerprint readers have been around for quite some time), it is convenience that wins out. As a corporate computer it is constantly locked when I step away, and being able to unlock and log in with a simple swipe of my index finger is quite nice.

Audio

I didn’t really factor audio into my purchase decision at all. This is because I typically use headphones and microphones so the built-in is just a bonus. The model comes with Beats Audio which I assumed was sort of a gimmick – OK, great, two subwoofers and four speakers, wow (twirl finger in the air). Upon receiving it … WOW! The audio is fantastic. In fact, I switched from using Skype with headphones to using the speakers (I work from home, so no one to distract – if my daughter is on one of her classes in the next room I’ll go back to headphones for that). The sound is really, really clear and has a great range despite coming from a laptop. I am really impressed with how they engineered the audio. I played some Netflix with the audio jacked up all of the way and it sounded fantastic, including the lower ends. I can see this as an easy “set it on the table and watch videos without headphones” laptop.

The built-in microphones are great as well. There is an array on either side of the webcam and I’m assuming it uses those for noise cancellation, etc. I know on Skype, with the speakers blasting and me talking, the other attendees told me there was absolutely no echo and my voice came through crystal clear. So, check the box for remote team collaboration with this laptop (Skype, GTM, and WebEx now fully tested).

Backlit Keyboard

The backlit keyboard was another thing I wrote off as a “senseless perk” and never paid much attention to. It’s an option on this but mine was fully loaded so they threw it in for me and … I like it! In fact, I love it. I didn’t realize how often I’d come into the office in the dark and not want to turn on a bright light but would fumble around on the keyboard. My testing was rebuilding it while watching The Hobbit with my daughter. Lights were off for full movie effect and the backlight allowed me to tap in commands occasionally when needed. I saw one review that complained about the bleed around the keys but it’s really only evident when you are looking at the keyboard from the front (when would you do that?) From a normal position it is perfectly fine. Here’s a snapshot of the keyboard with the lights on:

WP_000517

The only forehead slap with this is the way you turn it on and off. There is a function key wired to do this (see F5 in the image), but although the wireless (airplane mode, F12) key has its own little indicator for when wireless is on or off, the backlight has no indicator. If I were to engineer this keyboard, I’d put a little light on the backlight function key so you can find it in the dark to turn on the keyboard. As it is, you have to fumble around or tilt the screen to find it; THEN, once it’s on, the keys are all illuminated.

Performance and Compatibility

So here is where it gets really exciting. The performance of this laptop is outstanding. It features the latest Intel 4th generation “Haswell” chips, a capable discrete video card (not one for extreme gamers, but fine for a development machine), and a blazing fast SSD. The SSD is the Samsung 840 series and it flies. My “non-standard” benchmarks: publishing a database that includes populating tens of thousands of sample records went from 3 – 4 minutes on my Lenovo Yoga to under one minute on this machine. Builds go from 5 – 7 minutes for a 40+ project solution down to 2 – 3 minutes. Installing Office (yes, ALL of Office) took about 5 minutes. Visual Studio took longer to download than to install. Bottom line: it performs well.

I had no driver issues with anything I plugged into it – my Targus USB dock was recognized immediately, and so was the Lenovo ThinkVision mobile monitor I use when on the road. My headphones connect over Bluetooth without any problems.

I’m sure you can look up various benchmarks and other performance specs online but here’s the Windows Experience Index from my laptop to give you an idea of how it does:

experienceindex

The Intel 4600 built-in GPU is what kicks the overall score down. You can see the NVidia is slightly better and then everything else is great. The SSD really makes a difference – it’s almost twice as fast as the one in my Yoga. When I go onsite and power on my laptop, it’s literally turn it on, wait a few seconds, and swipe my index finger. I’m there and ready to go.

Battery

This is something I haven’t tested extensively. My brief test involved switching to the integrated graphics card and then watching full-screen video. I used the Beats Audio for a portion and then switched to headphones. After 3 hours the battery showed about 45 minutes remaining, but the estimates seemed to consistently run low, so I’m guessing I would have had another few hours. It’s enough to power the laptop through a flight from Atlanta to Seattle, so that works for me.

Conclusion

I had some trepidation when I made the purchase. I had read some negative reviews (the computer is overall rated very high on HP’s own site and Amazon) and had a bad experience with the Windows Store issue, but once I worked through that and used it, I definitely think it’s a keeper. If you don’t need the memory and 4th generation processors then I’d suggest going for a Yoga or Samsung Chronos (and if you can wait, Samsung will likely come out with more powerful configurations) but otherwise this is the best performance I could find that still maintains a good form factor despite the size. The only two things I’d change would be better viewing angles on the display and a Lenovo keyboard, but until Lenovo gives me an option with the 4th generation processors that has 16 GB of RAM and is SSD equipped, I’m sticking with this one. This is definitely the development workhorse I was hoping for. I’ll give my Asus to my daughter, and my Yoga will become my “convenience” laptop that I keep downstairs for social media and quick updates or watching videos while I keep this one plugged into my Targus USB 3.0 docking station so I get a full three monitors.

Wednesday, August 14, 2013

Testable Filters with TypeScript, AngularJS and Jasmine

The T6502 Emulator displays a set of registers to indicate the status of the program counter (where in memory the CPU is looking for the next set of instructions), the values of registers (temporary storage in the CPU), and a special register called the “processor status.” The processor status packs a lot of information into a single byte because it contains flags that indicate whether the last operation dealt with a zero value, a negative value (based on “two’s complement” addition whereby the high-order bit is set), whether there was a carry from an addition, and so forth. It makes sense to display this register as a series of individual bits to see what’s going on.

The initial status is rendered like this:

PC SP A X Y NV-BDIZC Runtime IPS
0x200 0x100 0x0 0x0 0x0 00000000 0.000 0

To see how this is done through AngularJS, take a look at the source code:

<table>
    <tr><th>PC</th><th>SP</th><th>A</th><th>X</th><th>Y</th><td>NV-BDIZC</td><th>Runtime</th><th>IPS</th></tr>
    <tr>
        <td>{{cpu.rPC | hexadecimal}}</td>
        <td>{{cpu.rSP | hexadecimal}}</td>
        <td>{{cpu.rA | hexadecimal}}</td>
        <td>{{cpu.rX | hexadecimal}}</td>
        <td>{{cpu.rY | hexadecimal}}</td> 
        <td>{{cpu.rP | eightbits}}</td> 
        <td>{{cpu.elapsedMilliseconds / 1000 | number:3}}</td>               
        <td>{{cpu.instructionsPerSecond}}</td>               
    </tr>
</table>

Notice the special braces for binding. They reference an object (the CPU) and a value. The pipe then passes the contents to a filter, in this case one called “eightbits” because it unrolls the status into its individual bits. The binding happens through scope. Scope is the glue between the model and the UI. Conceptually, scope looks like this:

angularscope

The scope was set to an instance of the CPU in the main controller:

$scope.cpu = cpuService.getCpu();

A powerful feature of angular is the ability to separate the declarative UI logic from the imperative business and presentation logic. In the case of the register, we want to show the individual bits. However, in the CPU model it is truly a byte register. The model of our CPU shouldn’t have to change just because someone wants to see the register in a different way – that is a design decision (it is part of the user interface and how the user experiences the information). Exposing the individual bits on the model is the wrong approach. Instead, we want to create a filter which is designed specifically for this scenario: manipulating output.

For convenience, I created a Main module that exposes a universal “App” class with some static fields. These fields make it easy to access and register modules. Keep in mind this is not necessary – you can access a module from anywhere within an angular app simply by calling angular.module() and passing the name. However, I find for both tests and the production app having something like this eases development and makes the references fast and easy.

module Main {
    export class App {
        public static Filters: ng.IModule = angular.module("app.filters", []);
        public static Directives: ng.IModule = angular.module("app.directives", []);        
        public static Services: ng.IModule = angular.module("app.services", []);
        public static Controllers: ng.IModule = angular.module("app.controllers", ["app.services"]);    
        public static Module: ng.IModule = angular.module("app",
            ["app.filters", "app.directives", "app.services", "app.controllers"]);    
    }
}

Note the “main” module ties in all of the dependent modules, but the filters are in their own module that can run independently of everything else. I’m using definition files from Definitely Typed to make it easy to discover and type the functions I’m using within angular from TypeScript. Now the filter can be defined. First, I want to define a spec for the filter. This describes how the filter will behave. To do this, I integrated Jasmine with the solution. Jasmine is self-described as “a behavior-driven development framework for testing JavaScript code.” It is also easy to use TypeScript to generate Jasmine tests.

The test harness simply includes angular, Jasmine, the main “app” that provides the statically registered modules, and then whatever individual pieces of the app I wish to test. In the case of the filter, I decided to call it “eightbits” so the eightbits spec looks like this (all of the source is available via the T6502 CodePlex site):

Note that I really was concerned with three cases – this is by no means an exhaustive test of every possible permutation. I want to be sure that invalid input is simply echoed back, that valid input is displayed as bits, and that a small number has its bits appropriately padded with zeroes so I get 00000001 instead of 1.

module Tests {

    describe("eightbits filter", () => {
  
        var filter: any;

        beforeEach(() => {    
            module('app');          
        });

        beforeEach(() => {
            inject(($filter) => {
                filter = $filter('eightbits');
            });
        });

        describe("given invalid input when called", () => {
            it("then should return the value back", () => {        
                expect(filter('zoo')).toEqual('zoo');                
            });
        });

        describe("given valid input when called", () => {
          
            it("then should return the bits for the number", () => {
                expect(filter(0xff)).toEqual('11111111');                
            });
            
            it("with smaller number then should pad bits to 8 places", () => {
                expect(filter(0x01)).toEqual('00000001');                
            });          
        });
    });
}

Let’s break down what happened. First, I described the test suite (eightbits filter). I provided a variable to hold the instance of the filter. Before each test, I run the module alias. This is provided by angular-mocks.js and enables us to stand up modules for testing. Next, I use the inject method to handle dependencies. $filter is a specific angular service. By passing it as a parameter to the injection method, angular will look at any dependencies wired up so far and provide the service. This service allows me to ask for a filter by name, so once the filter is registered, the injector will pick it up and provide it to me.

Now that I have an instance of the filter, the test conditions play through. When the filter is passed zoo, we want zoo to bounce right back. When it is passed a byte with all bits set, we want that to reflect in the result, and when I pass a single bit set I check for the padding. Of course, we haven’t built the filter yet so all of these tests fail (but you may find it interesting that this will compile, since I’m referencing the filter via the filter service and not as a direct reference).

I can now write the filter – note that the filter registers itself with angular using the same name I used in the test.

module Filters {

    export class EightBitsFilter {
       
        public static Factory() {
            return function(input): string {
           
                var padding: string = "00000000";
                
                if (angular.isNumber(input)) {
                    var result = padding + input.toString(2);
                    return result.substring(result.length - 8, result.length);
                }

                return input;
            }
        }
    }

    Main.App.Filters.filter("eightbits", [EightBitsFilter.Factory]);
}

I am using a factory pattern that provides a way for angular to create an instance of the filter. Angular will call this, keep track of the instance, and inject it anywhere it is used. After the definition of the filter, I get a reference to the angular module for filters and call the filter method. This is passed the name of the filter and its dependencies, which in this case is just the factory to create it. The signature provided by angular is to take in a value and return a string. I check to see if the input is a number (otherwise I just return the value, as in the “zoo” case), then I convert it to binary and pad it as necessary.
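To see the conversion and padding in isolation, here is the factory’s inner function extracted as a plain standalone function (a typeof check stands in for angular.isNumber so it runs without Angular):

```typescript
// The filter logic by itself: pad to eight binary digits, or echo
// non-numeric input back unchanged.
function eightbits(input: any): string {
    const padding: string = "00000000";
    if (typeof input === "number") {
        const result = padding + input.toString(2);
        return result.substring(result.length - 8, result.length);
    }
    return input;
}

console.log(eightbits(0x0a));  // 00001010
console.log(eightbits("zoo")); // zoo
```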

Here I’ve been able to test a filter that is used in the UI. I was able to provide a spec for it that describes all of my expectations, so anyone looking at the test knows how it is supposed to behave. And, by registering the filter, I no longer have to worry about how bits are exposed by my model. Instead, I simply bind them to the scope and pass them through the filter to appear in the UI.

If you’re not convinced how powerful this feature is, imagine an enterprise customer that requires you to format dates in a special format. They are still arguing over the exact date format, so instead of waiting or risking refactoring, you build a filter. For now, it simply spits out the date. Down the road, you get the details for how to display it, but the date must be formatted differently in various areas of the application. Is that a problem? No! You can easily create a service that represents where the user is, then allow the filter to query the service and format the date accordingly. You make a change in one, possibly two, places and roll out the change, rather than having to search every place in the application you used a date to make an update. Starting to see the value in this?
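A sketch of that placeholder date filter, following the same factory-and-register pattern as eightbits – the filter name is an assumption, and the registration line is commented out so the sketch stands alone:

```typescript
class ShortDateFilter {
    public static Factory() {
        return function (input: Date): string {
            // For now, simply spit out the date; when the format is finally
            // decided (or varies by area of the app via a locale service),
            // this is the one place that changes.
            return input.toDateString();
        };
    }
}
// Main.App.Filters.filter("shortdate", [ShortDateFilter.Factory]);

const format = ShortDateFilter.Factory();
console.log(format(new Date(2013, 7, 14))); // Wed Aug 14 2013
```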

Many posts cover services, filters, and directives in isolation, so in my next angular-related post I’ll share how to use a service in combination with a directive to create a testable UI artifact that your code can interact with independent of how the UI is actually implemented.