Saturday, December 24, 2016

2016 in Review

I’ve heard a lot of complaints about 2016 in general, but for me it was an abundant year. Personally, I enjoyed a challenge I gave myself to speak more often than I did in 2015, and I believe I succeeded by giving well over a dozen talks on topics ranging from Angular 2 and TypeScript to DevOps.

This year I became a Samsung fan. I traded in my Windows Phone for a Samsung Galaxy S7. I also purchased the Oculus-powered Gear VR and have enjoyed it more than I imagined. I still spend time in VR every week and find the immersive experience far more rewarding than the augmented reality of the HoloLens.

I ditched my Microsoft Band 2 after having to replace it under warranty for the third time. I waited patiently and picked up a Samsung Gear S3.

gears3-1

After a lot of research I decided I didn’t need a hardcore fitness band, but preferred a smartwatch that had great fitness features. The Gear S3 looks sharp:

gears3-2

It has great battery life (more than a day, so I can always find time to charge), recharges wirelessly and quickly, and has great functionality. For example, you’d think setting reminders using just your voice or answering the phone through the microphone and speaker in your watch would be novelty items, but I’ve done both in practice, and they’ve helped me out in situations when I didn’t want to pull my phone from my pocket.

gears3-3

The fitness features are phenomenal. It has great detection for steps and flights of stairs, a very accurate heart rate monitor (I know because I check mine manually quite frequently), auto-detects exercise, and has a running mode that makes it easy to see your lap time, stride, and other stats “on the run.”

Trust me, I didn’t receive any of these items as promotional gear and was surprised to find myself getting so many Samsung products. The most fun I had was with my Gear 360. It boasts two fish-eye lenses, each with slightly more than 180 degrees of view, so it can simultaneously capture all directions and digitally stitch the seams together. I purchased it to capture the summits of mountains we climbed over the summer, like our first 14,000-foot peak (click the images to scroll in 360 degrees):

…and the grueling half-marathon hike up Pikes Peak:

In general I was not surprised to see our project work pick up momentum with Angular 2. I was surprised to see the strides we made with Agile and adoption of DevOps. We were able to automate deployment for several large customers and closed out the year with a strong focus on containers (Docker), mobile (with both Ionic and Xamarin), and mobile DevOps.

The Year of the Blog

This blog should get a new name because I haven’t posted nearly as much content related to C#. Most of my focus has been on front-end JavaScript and TypeScript development, with a lot of Angular 2, as well as Agile and DevOps concerns. Total sessions and page views are down from last year, but I’m not surprised because I did not post nearly as frequently. Instead, I presented dozens of talks, published several articles, and focused heavily on building the app dev practice at iVision.

2016blog

Top referrals to my personal blog came from Twitter, Stack Overflow, a previous employer, and DZone.

Demographics remain similar, with a dominantly young, male audience, the majority from the U.S. followed by India, the United Kingdom, Germany, and Russia.

Chrome continues to dominate the browsers that visit my site, edging up to 73% from last year’s 67%. Firefox holds second place at 11%, and Internet Explorer is in a near tie with Safari at around 5% each.

Despite publishing several dozen posts this year, older articles still remain the most viewed. The top three were:

  1. The Top 5 Mistakes AngularJS Developers Make (but when I give Angular talks, it seems like people are still making them)
  2. Model-View-ViewModel (MVVM) Explained (that is a six-year-old article – also translated to Spanish!)
  3. Windows 8 Icons (huh?!)

A goal of mine for 2017 is to be more consistent with this blog and balance my writing between work, this blog, and the various sites I freelance for.

The Year in GitHub

Last year I became really active on GitHub. I’m finally used to and comfortable with Git, and I do so much work with Node.js and other open source projects that it was a logical transition.

2016github

I closed out the previous year explaining how dependency injection works with the jsInject project (I plan to package that for npm soon); rewrote my Angular 1.x Health App in Angular 2 and ECMAScript 2015 with Redux and Kendo UI; ported my 6502 emulator to Angular 2 with TypeScript; wrote a text-based adventure game; created a full-day Angular 2 and TypeScript workshop; and recently published a simple microservices locator.

I’m looking forward to many more open source projects in 2017!

The Year in Twitter

This year was interesting on Twitter. I continued to gain steady followers but it was much more “two steps forward, one-and-a-half steps back” than previous years. I’m not sure if it’s due to the breadth of topics, typical attrition, or some other factor, but it has been slower growth.

jeremytwitter

To date, all of my Twitter growth has been organic. I haven’t used any “gain x followers” services and do not follow others just to get a follow back. I try to keep the list of those I follow at just under 1,000 and prune it based on signal-to-noise ratio, favoring people I genuinely know and feeds covering topics I am passionate about.

Most Viewed

Angular 2 was, with no competition, the topic on Twitter this year. All of the top views are related (even RxJS is popular due to its inclusion in the Angular 2 distribution).

Most Liked

The most liked followed the most viewed closely with the exception of:

Most Clicked

The top clicked tweets align with the top viewed and liked. The fifth most popular was this critique of Angular 2 which many told me "wasn't fair" but I like to post contrasting articles for balance in my feed:

2017 Predictions

What do I predict for 2017?

Personally, I will continue to speak as often as I can. I love reaching the developer community and all of my talks are based on real world experience. I’ve found it helps to connect when you are able to relate technology in terms of “lessons learned” and case studies “in the real world.”

I hope to continue to grow my knowledge in the DevOps space and believe we will see an exponential increase in containerized (Docker-based) workloads in 2017. With the introduction of .NET Core and SQL Server on Linux, we’re also going to see many organizations shift from traditional Windows-based infrastructure to commodity Linux machines running as VMs and Docker hosts. Finally, I believe that in 2017 the “native vs. hybrid” debate will fade and it will truly become, “What is your development tool of choice?” as options like NativeScript and Xamarin enable developers to write native mobile apps using the language and development environment of their choice.

My last prediction? I think I’ll be using a lot of Visual Studio Code next year.

I wish you a very Merry Christmas and Happy New Year.

Until next time,

Thursday, December 8, 2016

Integrating Angular 2 Unit Tests with Visual Studio Team Services

DevOps focuses on continuous delivery of value by removing barriers between application development and operations teams. A crucial component of the DevOps pipeline is continuous integration (CI). CI is the process of automating a build and tests to ensure a stable code branch. Traditionally this has been difficult to achieve in web-centric and Single Page Applications (SPA) that focus on the front-end, but modern libraries and tools make it a lot easier to “chase the unicorns.”

junitvsts

I recently published an application to GitHub that features Angular 2, Redux, and Kendo UI. It also continuously integrates and deploys to a Docker host. You can view the running app here. It is run as a Docker container, managed by a Docker host, on an Ubuntu (Linux) server hosted in Azure.

Although the source is hosted on GitHub, I connected the repository to Visual Studio Team Services (VSTS) for continuous integration and deployment. The app has over sixty (60) automated tests that are run as part of integration. It is critical to ensure that all tests pass before the Docker image is created and deployed to production. It is also important to see the detailed results of tests so that broken builds can be quickly triaged and addressed.

Good Karma

I used the Angular Command Line Interface to scaffold the project. Out of the box, the Angular-CLI leverages Jasmine to define tests and Karma as its test runner. A Jasmine test might be a simple unit test based on pure JavaScript:
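
For example, a minimal Jasmine spec needs nothing beyond describe, it, and expect (a hypothetical sketch; the actual specs live in the repository):

describe('sanity check', () => {
  it('adds numbers correctly', () => {
    expect(1 + 1).toEqual(2);
  });
});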

Angular 2 provides a test service that enables integration-style tests that interact with actual web components:
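
A sketch of what such a test looks like, assuming the TestBed API from @angular/core/testing and the default scaffolded AppComponent (the selector and expected text are illustrative):

import { TestBed } from '@angular/core/testing';
import { AppComponent } from './app.component';

describe('AppComponent', () => {
  beforeEach(() => {
    TestBed.configureTestingModule({
      declarations: [AppComponent]
    });
  });

  it('renders the title', () => {
    const fixture = TestBed.createComponent(AppComponent);
    fixture.detectChanges(); // trigger data-binding
    const h1 = fixture.nativeElement.querySelector('h1');
    expect(h1.textContent).toContain('app works!');
  });
});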

Either way, the tests “as configured” aren’t good enough for an automated build via VSTS for two reasons:

1. They depend on a browser to host the tests (Chrome, by default) that isn’t available on the build server.

2. They only generate output to the console and don’t create a file that can be parsed for test results.

The Phantom Browser

The first step is to get rid of the browser dependency. Fortunately, a project exists to provide a “headless” browser, one that runs without rendering “real” UI: PhantomJS. To include it in my project, I issued the following command:

npm i phantomjs-prebuilt --save-dev

This creates a development dependency on the pre-built version of PhantomJS so that the project can pull it down and install it as a dependency. It adds the reference to the project’s package.json file.

The next step is to add a launcher to Karma. These packages help link Karma to browsers so Karma is able to launch the host to run the tests. The Karma launcher is installed like this:

npm i karma-phantomjs-launcher --save-dev

Finally, you need to edit the karma.conf.js configuration file to include the launcher:
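
The relevant portions look something like this (an excerpt; the generated file contains many more settings):

// karma.conf.js (excerpt)
module.exports = function (config) {
  config.set({
    plugins: [
      require('karma-jasmine'),
      require('karma-phantomjs-launcher')
    ],
    browsers: ['PhantomJS']
  });
};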

Now you verify the setup by running the tests through PhantomJS:

ng test --browsers=PhantomJS

You should see the same output you normally see from Chrome, with the exception that no external browser is launched.

Note: some build machines may require you to install additional prerequisites for PhantomJS. For example, Ubuntu requires additional font libraries to be installed.

JUnit of Measure

The next requirement is to generate output that can be parsed by the build. Karma uses reporters to provide test results, and ships with a “progress” reporter that writes test results out to the command line.

testrun

By default, VSTS is able to process JavaScript unit tests in the JUnit format. Karma has a JUnit reporter that can be installed:

npm i karma-junit-reporter --save-dev

This can be added to the Karma config file the same way the PhantomJS launcher was. Now you can run tests using the --reporters=junit flag and the test run will generate a file named TESTS-browser_(platform).xml. For example, a local run on Windows 10 creates TESTS-Chrome_54.0.2840_(Windows_10_0.0.0).xml. If you open the file, you’ll see XML that defines the various test cases, how long they ran, and even a structure that holds the console output.
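
An excerpt of the reporter configuration, assuming default options:

// karma.conf.js (excerpt)
plugins: [
  require('karma-jasmine'),
  require('karma-phantomjs-launcher'),
  require('karma-junit-reporter')
],
reporters: ['progress', 'junit'],
junitReporter: {
  outputDir: '.' // the TESTS-*.xml file lands here
}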

Configuring VSTS

I assume you know how to configure builds in VSTS. If not, check out the full CI/CD article. The build steps I created look like this:

buildsteps

The first step ensures the Angular Command Line Interface is installed in the build environment. The package manager command is install and the arguments are:

-g angular-cli@<version>

(This is the version the project was built with). The second step installs the dependencies for the project itself and just uses the install command with no arguments.

With the Angular-CLI installed, we can now run a command to execute the tests and generate the output file. I use two reporters. The progress reporter allows me to see the progress of the test run in the console output for the build and will abort the build if any tests fail. The JUnit reporter writes the test results file. The tool is ng and the arguments:

test --watch=false --single-run=true --reporters=junit,progress --browsers=PhantomJS

The next step instructs the VSTS agent to read the test results file and integrate it into the build results. This is what the configuration looks like:

publishtest

Here is a snippet of the test run output:

testrunoutput

That’s it! Now the automated build can create the Angular 2 production application after verifying tests successfully ran. The VSTS build log will contain specific test results and even allow you to set up a widget to chart pass/fail percentage over time. Release Management can then take the results of a successful build and deploy to production. For a more comprehensive overview of the CI/CD process, please read DevOps: Continuous Deployment with Visual Studio Team Services and Docker.

Happy DevOps!

Saturday, July 30, 2016

An Adventure in Redux: Building redux-adventure

Redux is a “predictable state container for JavaScript apps.” If you’re like me, reading about a new technology is nice but it takes a good project to really understand it. For some reason, when I hear “state machine” I immediately think of the Z-machine that was created “on a coffee table in Pittsburgh in 1979” that revolutionized computer games in the early 80s by bringing text-based adventure games to myriad platforms.

thumbnail

I originally thought of re-factoring my 6502 emulator to use Redux, but realized it would be a far bigger task to take on so I decided to build something from scratch instead. Borrowing from an app I wrote for a book I published a few years ago, I built redux-adventure using Angular 2 and TypeScript with the Angular-CLI.

Redux Concepts

There are numerous tutorials online that cover Redux. One problem I find is that many tend to overcomplicate the description and include diagrams that make it look far more involved than it really is. Rather than re-inventing the wheel, I’ll share a simple description here and then walk through the app that uses it.

Redux is a simple state management tool. Your application may transition through multiple states. At any given time you may raise an event, or create an action, that results in a new state. State is immutable, so actions will never modify the existing model that represents your state but instead will generate a new model. This is the concept that is sometimes difficult to understand.

Redux keeps track of state for you, and offers three key services (there are other APIs, but I’m keeping this simple). A minimal sketch follows the list.

  • The ability to dispatch an action, indicating a transition in state
  • A set of reducers that respond to an action by providing the new state
  • A subscription that receives a notification any time the state changes
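
Here is the promised sketch: a trivial counter that exercises all three services (the reducer and action are hypothetical, but createStore, subscribe, and dispatch are the real Redux API):

import { createStore } from 'redux';

// a reducer: (state, action) => new state, never a mutation
const counter = (state = { count: 0 }, action) => {
  switch (action.type) {
    case 'INCREMENT':
      return { count: state.count + 1 };
    default:
      return state;
  }
};

const store = createStore(counter);
store.subscribe(() => console.log(store.getState())); // notified on every change
store.dispatch({ type: 'INCREMENT' }); // logs { count: 1 }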

The game

The redux-adventure game is fairly straightforward. You are dropped in a random room in a dungeon and must explore the dungeon to find various artifacts. You can look or travel in the four compass directions, and if an item is present you can get it to add it to your inventory. You win the game by retrieving all of the available items.

State

The state itself is really just a domain model represented by a plain-old JavaScript object (POJO). A “thing” or artifact has a name and a description. Then there are rooms that look like this:
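
A simplified sketch of the shape (property names are illustrative; see the repository for the real definitions):

interface IThing {
  name: string;        // a "thing" or artifact has a name...
  description: string; // ...and a description
}

interface IRoom {
  description: string;
  things: IThing[]; // a room may contain several items
  north: IRoom;     // neighboring rooms by compass direction;
  south: IRoom;     // a null entry marks a wall
  east: IRoom;
  west: IRoom;
  visited: boolean;
}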

Notice that a room may contain more than one inventory item. It also keeps track of other rooms based on compass direction and walls where there are no rooms to navigate to.

The world itself is represented by a dungeon class that contains rooms, the player’s inventory, the count of total items they must obtain, the current room, a console that contains the text displayed to the user, and a flag indicating whether or not the player has won.

There is also a dungeonMaster that generates the world from some seed information and randomly generates walls. Any classes or services with behavior have their own tests. Now that we have the world defined, what can we do?

Actions

The user can type in any number of commands that are represented by the action list. Although an action may start as one of these commands, based on the current state it ends up being translated into one of four key actions:

  • Move: updates the current room to the room the user has navigated to, and updates the console to indicate the movement and display the description of the new room
  • Get: transfers inventory from the current room to the user
  • Text: adds a line of text to the console
  • Won: transfers the final item of inventory to the user, sets the won flag, and updates the console to indicate the user has won

The createAction method is responsible for this logic. TypeScript allows me to write interfaces to make it more clear what an action inspects. Here is the “get” action’s interface:
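
A sketch of such an interface (the exact names in the repository may differ):

interface IGetAction {
  type: string;  // e.g. 'GET'
  thing: IThing; // the artifact to transfer from the room to the player
}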

And here is the code that takes the original action and transforms it into an internal one:
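
Condensed to its essence, the logic looks something like this (the IDungeon type and the action-creator helpers are hypothetical names):

const createGetAction = (state: IDungeon, thing: IThing) => {
  if (!thing) {
    // nothing to get: respond with a snarky console message
    return [textAction('You grasp at thin air. Nice try.')];
  }
  if (state.inventory.length + 1 === state.totalItems) {
    return [wonAction(thing)]; // the final item wins the game
  }
  return [getAction(thing), textAction('You pick up the ' + thing.name + '.')];
};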

Notice that one “incoming” action can translate to three “internal” actions: text with a snarky comment when there is nothing to get, an action to transfer the inventory to the user, and an action to indicate the user has won.

The translation of actions is fully testable. Note that to this point we’ve been working in pure TypeScript/JavaScript – none of this code depends on any external framework yet.

Reducers

Reducers may take a while to get used to, but in essence they simply return a new state based on an action and ensure the existing state isn’t mutated. The easiest way to tackle reducers is from the “bottom up,” meaning take the lower-level properties or nested objects and handle their state, then compose them into higher levels.

As an example, a room contains a set of inventory items. The “get” action transfers inventory to the user, so the things property of the room is updated with a new array that no longer contains the item. Here is the TypeScript code:
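
A minimal sketch consistent with that description:

const thingsReducer = (state: IThing[] = [], action: any): IThing[] => {
  switch (action.type) {
    case 'GET': {
      const idx = state.indexOf(action.thing);
      // return a new array without the item; the original is untouched
      return [...state.slice(0, idx), ...state.slice(idx + 1)];
    }
    default:
      return state;
  }
};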

If the ellipsis notation is confusing, it’s the spread operator from a newer ECMAScript spec that allows arrays to be composed from pieces; here it expands a portion of the array. What is returned is a new array that no longer has the item. Here is the equivalent JavaScript:
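
Roughly, the compiled output replaces the spread with slice and concat:

var thingsReducer = function (state, action) {
  state = state || [];
  switch (action.type) {
    case 'GET':
      var idx = state.indexOf(action.thing);
      // two slices concatenated instead of the spread operator
      return state.slice(0, idx).concat(state.slice(idx + 1));
    default:
      return state;
  }
};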

You can view the corresponding tests written in TypeScript here. Notice that in the tests, I use Object.freeze to ensure that the original instances are not mutated. I freeze both the individual items and the list, and then test that the item is successfully removed.

Another reducer will operate on the array of inventory items for the player. Instead of removing the item as it does from the room, it will return a new array that adds the item to the player’s inventory.

The reducer for the room calls the reducer for the things property and returns a new room with properties copied over (and, in the case of navigating to the room, sets the visited flag).

You can view the main reducer code to see the logic of handling various actions, and calling other reducers as well (i.e. main calls the reducer for the rooms list, and rooms calls the reducer for the individual room).

In the end, the tests simply validate that the state changes appropriately based on an action and doesn’t mutate the existing state.

At this stage the entire game logic is complete – all state transitions through to a win are there, and we could write some simple AI to have a robot play the game and output its results. Everything is testable and we have no dependencies on any frameworks (including Redux) yet.

This is a powerful way to build software, because now whether you decide to use Angular, React, plain JavaScript or any other framework, the main business logic and domain remains the same. The code doesn’t change, the tests are all valid and framework agnostic, and the only decision is how you render it.

The Redux Store

The purpose of Redux is to maintain the state in a store that handles the actions and applies the reducers. We’ve already done all of the legwork, all that’s left is to create the store, respond to changes in state, and dispatch actions as they occur.

The root component of the Angular application handles all of this:
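
A condensed sketch (the reducer import path and the IDungeon type are placeholders for the real ones in the repository):

import { Component } from '@angular/core';
import { createStore, Store } from 'redux';
import { dungeon } from './reducers/dungeon';

@Component({
  selector: 'app-root',
  templateUrl: './app.component.html'
})
export class AppComponent {
  private store: Store<IDungeon> = createStore(dungeon);
  public state: IDungeon;

  constructor() {
    // refresh the bound property whenever the state changes
    this.store.subscribe(() => this.state = this.store.getState());
    this.state = this.store.getState();
  }

  public dispatch(action: any): void {
    this.store.dispatch(action); // child components emit, the root dispatches
  }
}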

Notice how simple the component is! It doesn’t have to handle any business logic. It just creates the store, refreshes a property when the state changes, and dispatches actions.

The template is simple as well. It lists the console, provides a parser to receive user input if the game hasn’t been won yet, and renders a map of the rooms.

With this approach, the components themselves have no business logic at all, but simply respond to the bound data. Let’s dig a little deeper to see.

Components

Approaching the application in this fashion makes it very easy to build components. For example, this is the console component. It does just two things: exposes a list of text, and responds to changes by setting properties on the div element so that it always scrolls the latest information into view:

If you’re nervous about seeing HTML elements mixed in with the component, don’t worry! They are completely testable without the browser:

The parser component solely exists to take input and dispatch actions. The main component listens to the parser and uses the event emitter to dispatch actions to the Redux store (that code was listed earlier). The parser itself has an action to emit the input, and another action that auto-submits when the user hits ENTER from within the input box:

After playing the game I realized it would be a lot easier to test if I had a map, so I created the map component to render the grid and track progress. The map component itself simply translates the list of rooms into a matrix for rendering cells. For each cell, a green square indicates where the user is, a white square is a visited cell (with walls indicated) and a black cell is a place on the map that hasn't been explored yet.

Despite the heavy manipulation of styles to indicate background colors and walls, this component is also completely testable without relying on the browser.

Conclusion

You can view the full source code on GitHub and play the game here. Overall, building this was a great learning experience for me. Many of the articles I read had me slightly confused and left me with the feeling it was overcomplicating things, but having gone through the process I can clearly see the benefits of leveraging Redux for apps.

In general, it enables me to build a domain using vanilla TypeScript/JavaScript and declare any logic necessary on the client in a consistent way by addressing actions and reducers. These are all completely testable, so I was able to design and validate the game logic without relying on any third party framework.

Linking Redux was an easy step, and it made the logic for my components even easier. Instead of encapsulating services to drive the application, I was able to create a store, respond to changes to state within the store, and build every component as a completely testable, independent unit.

What do you think? Are you using Redux in your apps? If you are, please use the comments below to share your thoughts.

Thursday, July 21, 2016

Back to the ngFuture

Angular 2.0 is close to a production-ready release. Initially the community was in an uproar over the lack of backwards compatibility, but that has changed in recent months with the release of version 1.5 and several modules including ngUpgrade. In this talk, Jeremy Likness discusses the differences between major production versions of Angular, the options for migrating your apps to 2.0, and demonstrates how to get your apps back into the future with the tools that are available today.

Special thanks to the Atlanta AngularJS Meetup group for hosting this event! You can view the deck here:

You can also visit the GitHub repository to download the code examples or run them live in your browser.

As a fun technology aside, here's a 360 degree photo I took with my Samsung Gear 360 at the Ponce City Market in downtown Atlanta right before I presented the talk (click the photo to be able to view in 360 and use your mouse or phone to scroll around the view).

Enjoy!

Jeremy Likness

Wednesday, June 29, 2016

Learn the Angular 2 CLI Inside and Out

Angular 2 represents a major step in the evolution of modern web front-end frameworks, but it comes with a price. From TypeScript compilation to running test scripts, bundling JavaScript, and following the Angular 2 Style Guide, "ng2 developers" are faced with myriad problems to solve and challenges to overcome.
Fortunately, there exists a way to simplify the process of building Angular 2 applications. Whether your goal is to stand up a rapid prototype or build an enterprise-ready line of business application that is continuously deployed to the cloud, the Angular CLI is a tool that you don't want to code without.

» Read the full article: Rapid Cross-Platform Development with the Angular 2 CLI

Jeremy Likness

Tuesday, May 17, 2016

Tic-Tac-Toe in Angular 2 and TypeScript

I’ve built a lot of small apps and games over the years, often to either learn a new framework or platform, or to help teach it. It’s always fun to dig up old code and migrate it to new technologies. For example, I took the Silverlight C# code to generate a plasma effect from my Old School app and ported it to JavaScript with optimizations. I also recently built a bifurcation diagram with ReactJs (RxJS) and div tags.

The other day I reviewed some older projects and came across an article I wrote as an introduction to Silverlight. I used a tic-tac-toe game and built the logic to enable a computer opponent.

tictactoe

I realized this would be a perfect project for Angular 2 so I proceeded to make the port. You can play it here to test it out. I did not build it responsive (shame on me, being lazy) so I may refactor it in the future to make it easier to play on phones. Tablets and computers should be fine.

Scaffolding

To create my project I used a combination of cross-platform tools including Visual Studio Code and Node.js. I also used the Angular-CLI for just about everything. The first step is to get to a Node.js command prompt, then install the Angular CLI and initialize the project:

npm i angular-cli -g
ng new tic-tac-toe-ng2
cd tic-tac-toe-ng2
ng serve

By now I had a working project that I could navigate to, run unit tests against:

ng test

…and even run an end-to-end set of tests:

ng e2e

Great! Now to start porting the code!

Quotes

The original game had some wonky quotes that would show up in each tile that hadn’t been clicked yet. It would also give you a different, random quote if you tried to tap a cell while it was the computer’s turn. We’ll get to the computer strategy in a bit (I built in a random delay for the computer to “think”).

A reusable unit of JavaScript code in Angular 2 is referred to as a “service” and we scaffold it like this:

ng g service quotes

This will generate the service and the specification (tests) for the service. One great feature of Angular is that it addresses testing right out of the box.

I really didn’t have many specifications, other than making sure I get an actual string from the service and that multiple calls randomly return all available quotes.

(Note: For the test, I iterate a large number of times to check the quotes, but technically it’s not a “good” test because you could randomly miss a quote and the test will fail even though the service is doing what it is supposed to).

The service itself just randomly sorts an array each time a quote is requested and returns the first element.
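
As a sketch (the actual quotes are funnier):

import { Injectable } from '@angular/core';

@Injectable()
export class QuotesService {
  private quotes: string[] = [
    'Pick me!',
    'Over here!',
    'X marks the spot.'
  ];

  public getQuote(): string {
    // random sort, then take the first element
    return this.quotes.sort(() => Math.random() - 0.5)[0];
  }
}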

I repeated that pattern for the “bad quotes” (i.e. when you click out of turn) and then turned my attention to individual cells on the tic-tac-toe board.

Cell and Game States

Thinking about the game, I determined there would be exactly three states for a cell to be in:

export enum State {
    None = 0,
    X = 1,
    O = 2
}

Either “not played” or marked with an ‘X’ or an ‘O’. I created an interface for the data of a cell to represent where it is on the grid, the current state, and whether it is part of a winning row.

export interface ICell {
    row: number;
    col: number;
    state: State;
    winningCell: boolean;
}

Finally, the game flow will either allow a turn, or end in a win or a draw.

export enum GameState {
    XTurn = 0,
    OTurn = 1,
    Won = 2,
    Draw = 3
}

With these in place, I then built the component for an individual cell.

The Cell

To scaffold a component I used the following syntax:

ng g component cell

This created a sub-directory with the related files (CSS, HTML, code-behind and test). In the CSS you can see the styles to define the size (sorry, this one isn’t responsive for now, but that can be readily fixed), margins, etc.

Components have their own specifications. For example, one parameter that is input to the cell is the row and column. In the test, we create a test component that wraps the tested component, then verify the data-binding is working correctly (note the bindings in the template should match what is picked up by the component):

The “builder” is defined earlier in the source and spins up the instance of the test controller to host the component. This is all generated for you by the command line interface.

The cell itself takes several inputs, specifically the row, column, state, and whether it is a winning cell.

It also exposes an event when it is tapped. The template uses these to display either a random quote, an ‘X’, an ‘O’, and also color the background if the cell is part of a winning row.

The logic triggered when it is tapped checks to make sure it hasn’t already been set and whether it is a valid turn (the valid turn property is bound, so it can be set for testing or bound to other logic for the running application). If it is not the user’s turn, a random quote is set on the square to react to the tap. 
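
Pulling those pieces together, the component looks roughly like this (condensed; the bad-quotes service name is illustrative):

import { Component, EventEmitter, Input, Output } from '@angular/core';
import { State } from '../state';
import { QuotesService } from '../quotes.service';
import { BadQuotesService } from '../bad-quotes.service';

@Component({
  selector: 'app-cell',
  templateUrl: './cell.component.html',
  styleUrls: ['./cell.component.css']
})
export class CellComponent {
  @Input() public row: number;
  @Input() public col: number;
  @Input() public state: State = State.None;
  @Input() public winningCell = false;
  @Input() public validTurn = false;
  @Output() public tapped = new EventEmitter<CellComponent>();

  public quote: string;

  constructor(private quotes: QuotesService, private badQuotes: BadQuotesService) {
    this.quote = quotes.getQuote();
  }

  public tap(): void {
    if (this.state !== State.None) {
      return; // already played
    }
    if (!this.validTurn) {
      this.quote = this.badQuotes.getQuote(); // react to an out-of-turn tap
      return;
    }
    this.tapped.emit(this);
  }
}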

Another interesting behavior to note is the way the cell reacts to changes.

Components can implement the OnChanges interface, which fires when the bound inputs change. This is useful for responding to change without setting up individual watches (as was the case in Angular 1.x). Instead of using a timer to update quotes as I did in the old Silverlight app, I decided that I could just update the quotes randomly when changes occur.

The Matrix

“The Matrix is everywhere. It is all around us. Even now, in this very room … it is the world that has been pulled over your eyes to blind you from the truth.” – Morpheus

OK, the tic-tac-toe matrix isn’t quite as interesting. The matrix service is what manages the game state. The state machine for the game allows alternating between turns and ends at a draw or win. It is illustrated like this:

tictacstate

This is captured through the specifications for the matrix service. The service itself builds up a list of “winning rows” to make it easy to determine if a given row is a draw, a winning row, or still has open slots.

Each time the state changes, the logic first checks to see if the game was won and whether the computer or the user won it:

Next, it checks to see if any slots are available. This is the “dumb logic” for a draw. I could have eliminated this code as it was the first pass at the algorithm, but I decided a two-phase approach would be fine to illustrate, as the second pass takes a more intelligent look “row-by-row.” If it’s not a draw, it switches to the next turn.

The main component binds to the matrix service and uses this to drive the state of the individual cells.

Strategies

To drive the computer’s moves, I created two strategies. The first strategy is a simple one and simply picks a random empty cell for the computer move. Notice it is a simple function that is exported.

For the hard strategy, I devised a simple algorithm. Each row is assigned a point value based on the state of the row. The point values are listed here, with a sketch of the scoring function after the list:

  • Row is a draw (one from each) – 0 points
  • Row is empty – 1 point
  • Row has one of theirs – 10 points
  • Row has one of mine – 50 points
  • Row has two of theirs – 100 points
  • Row has two of mine – 1000 points
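
Here is that sketch, scoring a single row given counts of each player’s marks (point values from the list above):

const rankRow = (mine: number, theirs: number): number => {
  if (mine === 2) { return 1000; }
  if (theirs === 2) { return 100; }
  if (mine === 1 && theirs === 1) { return 0; } // the row is a dead draw
  if (mine === 1) { return 50; }
  if (theirs === 1) { return 10; }
  return 1; // empty row
};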

This is an aggressive (not defensive) strategy because there are more points assigned to building up a winning row than blocking the opponent’s. Each empty cell is assigned a weight based on the sum of all intersecting rows, and then the highest weighted cell wins. Here is a visualization where the computer is “O”:

tictacstrategy

The top grid shows the row values (first and last columns on the top represent the diagonals) and the bottom grid shows the computed values for the cell. Note the highest score is the cell that will win the game.

The logic is encapsulated in the hard strategy function. The pseudo-code follows:

  1. Create a matrix of cell ranks
  2. Iterate each row. If the cell is occupied, set its weight to a negative value.
  3. Sum the count of X and O values for the row.
  4. Update the cell’s weight based on the logic described earlier.
  5. Sort by weight.
  6. Create a short array of the cells with the highest weight (in case multiple cells “tie”)
  7. Pick a cell and populate it.

That’s it – a simple strategy that works well.

Putting it All Together

The main component orchestrates everything. A flag is synchronized with the strategy service to run the selected algorithm, and the matrix is consulted for the first turn. (Note the MatrixService is bootstrapped with the main component so the same copy is available throughout).

On initialization, it is determined whether the component is running with a “slow computer.” This is the default and uses a timeout to emulate time for the computer to decide its next move. It makes for more realistic gameplay. When set to false, the statements execute immediately to make testing easier.

The remaining methods simply check for the game state and pass it through to properties on the component for data-binding, and advance the state. The user is responsible for tapping a cell to trigger their move. This is handled by the stateChange method:

The template that generates the cells iterates through the grid and binds the cell attributes to each CellComponent:

The updateStats method queries the matrix to determine the game state. If it is the computer's turn, the computerMove method is called. This simply calls the strategy service to make the next move and passes control back to the user. That's pretty much it!

Bonus Opportunity: if you like challenges, the AI in this is not perfect. You can take on a two-part challenge. First, the computer is absolutely beatable when you have the first turn. If you solve it, comment here and let us know the solution! My only hint is that it does not start with placing an X in the middle. Second, once you've done that, is there a better algorithm that can beat the winning strategy?

You can view the entire project (along with projects to install and run it locally) at the tic-tac-toe-ng2 repository. I hope this helps illustrate building applications with Angular 2 and TypeScript using the Angular Command Line interface.

Please share your thoughts and comments below, and if you get bored, play some tic-tac-toe!

Until next time,

The Angular 2 CLI and TypeScript

AngularJS is the incredibly popular framework for building single-page web applications. Version 2.0 is a major leap from the 1.x version designed to address shortcomings in the original 5+ year old framework and to embrace modern browsers and language features. It is being written using TypeScript, a superset of JavaScript that allows you to build code using next generation features and compile it to JavaScript that will run on current browsers. Visual Studio Code is the perfect platform to explore Angular applications because it is free, open source, and cross-platform and supports advanced features such as extensions, code completion and IntelliSense. In this session Jeremy Likness goes hands-on to show you how to set up your environment and build your first application while teaching you about the advantages of the framework and language based on his years of in-the-field experience architecting enterprise Angular applications.

In this talk I focused on scaffolding the app using the Angular-CLI to rapidly build a reference app. Here is the video:

The deck has most references, and stay tuned for a new post that will go into more detail with building a tic-tac-toe game with a computer opponent! Here is the deck:

Jeremy Likness

Sunday, April 24, 2016

The Three D’s of Modern Web Development

Modern web development using JavaScript has evolved over the past decade to embrace patterns and good practices that are implemented through various libraries and frameworks. Although it is easy to get caught up in the excitement of frameworks like AngularJS or the KendoUI implementation of MVVM (that’s Model-View-ViewModel, which I’ll explain more about in an upcoming article), it is important to remember the fundamental patterns and repeatable practices that make development easier. In fact, it is difficult to make a qualified decision about your development stack without understanding “how” and “why” a particular tool, library, or framework may benefit the application and, more importantly, your team.

I recently authored a series of articles for the Telerik Developer Network that covers what I believe are three fundamental concepts that have revolutionized modern web app development.

You can read the series here:

  1. Declarative vs. Imperative
  2. Data-Binding
  3. Dependency Injection

The three D’s are just a few of the reasons why JavaScript development drives so many consumer and enterprise experiences today. Although the main point of this series was to demonstrate the maturity of front-end development and the reason why JavaScript development at enterprise scale is both relevant and feasible today, it is also the answer to a question I often receive. “Why use Angular 2 and TypeScript?” My answer is this: together, Angular and TypeScript provide a modern, fast, and practical implementation of the three D’s of modern web development.

Regards,

Wednesday, March 23, 2016

TypeScript 1.8 to Future-Proof JavaScript Apps

There is no denying the trend to deliver critical business apps through the browser using Single Page Application frameworks that rely heavily on JavaScript. Traditionally frowned upon as a loosely typed language not fit for large scale development or teams, JavaScript is rapidly evolving with the latest ECMAScript 2015/6 specifications. TypeScript is a technology that can help teams future proof their applications and build them at scale.

By serving as a superset of JavaScript and enabling definition files to describe existing JavaScript libraries, TypeScript can be integrated seamlessly into existing projects. It provides syntax that aligns with the current specifications and will compile to various versions of JavaScript and leverage different libraries that provide module support. Learn how TypeScript improves the development experience by providing development and compile-time checks, type safety, interfaces, true class inheritance, and other features that accelerate delivery and improve the quality and stability of Single Page Applications.

Watch the full video from a recent webinar I gave with Kliant covering TypeScript 1.8:

You can download the code examples and deck here.

Until next time,

Saturday, March 19, 2016

Lessons Learned (Re) Writing a 6502 Emulator in TypeScript with Angular 2 and RxJs

In preparation for an upcoming webinar I decided to build a sizeable Angular 2 project. I’ve always been fascinated with the Commodore 64 and the 6502 chipset is not a tough one to emulate. I also wrote it the first time several years back, so I had a good baseline of code to draw from. Thus the Angular 2 6502 emulator was born.

screenshot

Want to see it in action? Here is the running version.

Building a Console that Works

The first thing I needed was a decent console to give information to the user. The console itself isn’t too difficult. A service holds an array of strings, splices them when it gets beyond a certain size, and exposes a method to clear them.

Here is the basic code:
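
A sketch of the service (the maximum buffer size is illustrative):

import { Injectable, EventEmitter } from '@angular/core';

const MAX_LINES: number = 100;

@Injectable()
export class ConsoleService {
  public lines: string[] = [];
  public lineAdded: EventEmitter<string> = new EventEmitter<string>();

  public log(message: string): void {
    this.lines.push(message);
    if (this.lines.length > MAX_LINES) {
      // splice older entries once the buffer grows too large
      this.lines.splice(0, this.lines.length - MAX_LINES);
    }
    this.lineAdded.emit(message); // notify subscribers
  }

  public clear(): void {
    this.lines = [];
  }
}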

Now you may notice a class that may be new to some: the event emitter used to notify subscribers when a console message is sent. Angular 2 relies heavily on Reactive Extensions for JavaScript (RxJS).

RxJS

In a nutshell, this library provides a different mechanism for dealing with asynchronous workflows and streams. You essentially observe a collection and are handed “items” as with an iterator, and you can then deal with them however you choose.

You can see that emitting an event is easy, but what does consuming one look like?

Interacting with the DOM

The component for the console simply data-binds the console messages to a bunch of text contained inside of a div element. If that’s all it had to do there would be no need for an event emitter. Unfortunately because new items are appended at the bottom, the div can quickly fill up with scroll bars and new messages are no longer in view.

Fixing this is simple. First, the component needs to know when the contents of the div may have changed (via the event fired from the service). Second, it simply needs to interact with the DOM element so it can scroll into view.

How do we reference the element associated with an Angular component? Take a look at this source:
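
The relevant portion, sketched (the service import path is illustrative):

import { Component, ElementRef } from '@angular/core';
import { ConsoleService } from './console.service';

@Component({
  selector: 'app-console',
  templateUrl: './console.component.html'
})
export class ConsoleComponent {
  constructor(private element: ElementRef, private consoleService: ConsoleService) {
    // nothing is done with the element here: the view isn't rendered yet
  }
}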

There are just two steps needed to reference the element. First, import the ElementRef class. Second, include it on the constructor so it is injected. You probably noticed that the component doesn’t do anything with it in the constructor. This is because the constructor is called before the UI has been wired up and rendered, so there is nothing inside of the element.

So how do you know when the element is ready?

Angular 2 Lifecycle

Angular 2 components have a lifecycle and if certain methods are present, they are called during a specific phase. I provided a high level list at an Angular 2 talk I gave. In this case, the component implements a method that is called after the view is initialized. Once initialized, it’s possible to grab the child div because it’s rendered.

A nice feature of TypeScript is auto-completion and documentation. By casting the element to the HTMLDivElement type I get a clearly typed list of APIs available for that element. The component subscribes to the console events (notice it uses a de-bounce rate of 100ms so if a ton of messages are sent at once, it will still only fire 10 times a second to avoid locking the UI). When the event fires, a property on the div is set and that will force it to scroll to the most recent message.
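
As a method on the component sketched above, that logic might look like this:

// at the top of the file
import 'rxjs/add/operator/debounceTime';

// inside ConsoleComponent
public ngAfterViewInit(): void {
  const div = this.element.nativeElement.querySelector('div') as HTMLDivElement;
  this.consoleService.lineAdded
    .debounceTime(100) // at most ten updates per second
    .subscribe(() => div.scrollTop = div.scrollHeight); // scroll newest into view
}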

A similar lifecycle method is used in the main app to indicate everything is initialized.

Creating the Display

The next challenge was creating a graphical display. I decided to implement the emulator using a memory-mapped, palette-based display. This means a block of memory is reserved to represent “pixels” of the display, and when a value is set on a specific address, the number corresponds to a palette entry of red, green and blue values.

If you’re curious you can see the code for the algorithm I use to generate the palette. It builds up a distinct set of red, green, and blue values and then sorts them by luminosity using a well-known equation (likely a weighting such as the classic 0.299 R + 0.587 G + 0.114 B). Each item has a hexadecimal rendition, and the last five palette slots are manually built as shades of gray.

The display service simply keeps track of a pixel buffer that represents the display, then makes a callback to the component when a value changes.

The display component is a little more involved. It holds an array of values that represent rectangles that will be drawn as a “pixel.” These are literally scalable vector graphics-based rectangles inside an svg element. The entries hold x and y offsets from the upper corner of the display, the width and height (you can adjust this if you like), and the current fill color for that pixel.

The component sets a callback on the display service to be notified of any changes. In this case, a callback is used because it is up to 50x faster than using an event emitter (I tested this). The callback ensures the request is in the valid range of memory and is a byte of data, then cross-references the palette to set the proper fill value.

Finally, this is all rendered by Angular as an array of rectangles. Angular’s dirty tracking will check when the fill is updated and update the attribute. Here is the HTML for the display:
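
The template is essentially one repeated, attribute-bound rect (property names are illustrative):

<svg [attr.width]="width" [attr.height]="height">
  <rect *ngFor="let pixel of pixels"
        [attr.x]="pixel.x" [attr.y]="pixel.y"
        [attr.width]="pixel.width" [attr.height]="pixel.height"
        [attr.fill]="pixel.fill"/>
</svg>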

The CPU State

The emulator CPU provides the simplest possible mechanism for emulating the operation of instructions. It contains the memory registers, the stack, a program counter (the area of memory that the next machine instruction will be read from), and tracks things like how many instructions per second it is able to execute.

I decided to take a straightforward approach, and have the CPU manage memory, registers, stack, and program counter but create “self-aware” operation classes that would actually “do work.”

For that reason the CPU itself doesn’t have any logic like loading or adding accumulators. Instead, it knows how to look at memory, update memory (including triggering a call to the display service), set and reset flags, pop addresses and handle various modes for addressing memory.

It has instructions to run, step through code line-by-line for the debug mode, and can be halted. When halted or in an error state, the only recovery is to reset it. Basically, the workflow for the CPU is to look at the operation at the program counter, ask it to execute itself, then update the program counter and grab the next instruction.

The executeBatch() function is where I did most of the optimization. Running the program until it stops would lock the UI, and running a single instruction per event loop was slow. The CPU therefore compromises by running up to 255 instructions before it uses setTimeout to allow other events to queue.

(This is an area where using animation frames might make sense, but I haven’t explored that avenue yet).

Angular 1.x users will note I can call setTimeout directly and not rely on a $timeout service. Angular 2 is able to detect when I’ve modified the data model even from within an asynchronous function due to the magic of ZoneJS.
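
A sketch of that batching approach (member names are illustrative):

public executeBatch(): void {
  if (!this.running || this.halted) {
    return;
  }
  let count = 255; // up to 255 instructions per time slice
  while (count-- > 0 && this.running && !this.halted) {
    this.step(); // fetch, execute, advance the program counter
  }
  // yield so queued UI events can run, then continue
  setTimeout(() => this.executeBatch(), 0);
}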

Loading Op Codes

An operation is fairly simple. It exposes how it handles addresses, the numeric value of the instruction, the name, how many bytes the instruction and any parameters take up, and can compile itself to text.

There is an execute method that uses the CPU to do work.

Operation codes derive from a base class:
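
A condensed sketch of such a base class (type names are illustrative):

export abstract class BaseOpCode {
  constructor(
    public opName: string,       // e.g. 'ADC'
    public sizeBytes: number,    // instruction plus parameter bytes
    public addressingMode: number,
    public opCode: number) {     // the numeric value of the instruction
  }

  // every operation knows how to do its own work against the CPU
  public abstract execute(cpu: Cpu, address: number): void;
}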

And then implement themselves like this:

opcode

The example snippet is an operation that adds a value to the accumulator (a special memory register). It will set the carry bit if there is an overflow, and it is in immediate mode, which means the byte after the instruction contains the value to add (as opposed to obtaining it from a memory location, for example). When executed, it uses the utility methods on the OpCodes class to add with carry and passes in the instance of the cpu and the address it is at.

The OpCodes utility will pop the instruction, inspect the address mode, pull the value, perform the add, and update the program counter with the cpu’s help.

Because each operation has multiple addressing modes, and there is one class per combination of op code and address mode, there end up being several hundred classes. This presents a challenge from the perspective of wiring up the op codes into an array unless they “self-register.” In my original emulator, that’s exactly what they did, but with modern TypeScript there’s an even better way!

TypeScript Decorators

TypeScript decorators are heavily used by Angular 2. I decided to create one of my own. First, I created an array of operations to export and make available to the CPU, compiler, and other components that need it. Next, I created a function to serve as a class decorator. It simply takes the target (which is the class, or more precisely the function constructor) then creates the instance and stores it in an array.
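
A sketch of that decorator:

export const registeredOperations: BaseOpCode[] = [];

// a class decorator: it receives the constructor function,
// instantiates it, and stores the instance in the exported array
export function IsOpCode(target: { new (): BaseOpCode }): void {
  registeredOperations.push(new target());
}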

With that simple step, because the signature is correct, I can now use that function as a decorator and simply adorn each class that should register as an op code with the @IsOpCode attribute.

Take a look and see for yourself! Now when I implement the other op codes I haven’t finished yet, they will be automatically registered if I decorate them.

The Compiler

Aside from the individual op code definitions, the most logic exists in the compiler class. This is because it is a full two-pass assembly compiler that parses labels and handles both decimal and hexadecimal entries as well as label addition and memory syntax.

The compiler has to be able to take a pass and lay out how much memory each instruction takes so it can assign memory addresses to labels, then use that for “label math” (when the programmer adds or subtracts an offset from a label), then parse the code and get the instructions and memory addresses written out correctly.

Needless to say, a lot of regular expressions are involved. You can look at the source code linked earlier to see how I used interfaces and broke the compiler steps into methods to make it easier to organize all of the steps required.

The CPU Display

The CPU was the easiest component to build. It simply exposes the CPU service:

And then binds to values and cpu methods directly:

There is a compiler component that interacts with the compiler class, and uses forms to collect data. If you go beyond straight data-binding, forms are actually quite interesting.

Forms in Angular 2

The compiler component uses Angular 2’s form builder to expose the form for entering code, loading code, and setting the program counter. The form builder is used to create a group of controls. Each has a name, an initial value, and a set of validators. In the case of the program counter, a “required” validator is combined with a custom validator that parses the input to ensure it is a valid hexadecimal address.

You can see that in the “pcValidator” method. Passed validations return null, failed return … well, whatever you want except null. I reference the control group to create individual references for each control, so I can use the value of the control for compiling, setting the program counter, etc.
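
A sketch of such a validator (the control is typed loosely here to stay framework-version agnostic):

public pcValidator(control: any): any {
  const address = parseInt(control.value, 16);
  if (isNaN(address) || address < 0 || address > 0xFFFF) {
    return { invalidAddress: true }; // anything non-null signals failure
  }
  return null; // null means the value passed
}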

The controls have their own method to update values (such as loading the decompiled code into the corresponding control). I also use it to load source code for some of the pre-built programs I included. Unlike Angular 1.x, the Http service uses RxJS to return its results.

HTTP in Angular 2

There are a few steps to interact with HTTP in Angular 2 apps. First, you must include a JavaScript file if you are using bundles and not loading all of Angular 2 at once. In your app bootstrapper you’ll want to import HTTP_PROVIDERS and bootstrap them.

The component will import Http and should also import the Rx library to handle the special observable results. After that, it’s fairly straightforward.

The get call is issued and returns an observable that you can begin chaining methods onto. In this example, I’ve mapped the result to text (I’m not getting a JSON object but the entire contents of a text file) and then I subscribe.

The subscription has methods for when the next item appears, when an exception is thrown, and when the observable stream reaches the end. I leverage this to grab the loaded data and set it onto the control and then send a console message when it’s loaded.
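
Sketched out, the whole chain reads like this (the file path and control reference are illustrative):

import 'rxjs/add/operator/map';

this.http.get('programs/hello.asm')
  .map(result => result.text()) // whole file contents, not JSON
  .subscribe(
    code => this.codeControl.updateValue(code),  // next: set the control
    err => this.consoleService.log('Error: ' + err),
    () => this.consoleService.log('Program loaded.')); // end of stream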

The Final Results

After everything is said and done I was very pleased with how quickly I was able to leverage the existing code and migrate it to Angular 2 and the more current version of TypeScript. A few things I noted along the way:

  • Dynamic static properties like arrays aren’t really a good idea in the modular/asynchronous world – instead of having a static array for the op codes, I made an instance and exported it so it could be imported by other modules
  • Interfaces are useful for annotations but not very useful for dependency injection in Angular 2 – I found if I didn’t want to use magic strings, I had to still reference the implementation and use the @Inject decorator to make it work if I wanted to declare variables by their interface type
  • Performance surprised me (in a good way)

I’d like to elaborate on that last bullet. In the original implementation, I had a display service that explicitly held an array of svg elements and set the fill attribute on them directly when a value changed. With the Angular 2 version, I rely completely on Angular’s dirty tracking and let Angular re-render the element (or set the attribute) when the values change. Despite that difference, the improved data-binding is incredible and the new app keeps pace with the old one based on instructions per second.

Overall it was a great and fun experience. I still have some op codes to add to the mix and my binary-coded decimal (BCD) mode is broken, so if you want to tinker, I do accept pull requests.

Until next time,

Tuesday, March 15, 2016

Angular 2 Change Detection and HTTP Services, ASP.NET Core Authentication, VS 2005 End of Life on Tue Mar 15 2016

Understand change detection in Angular 2, look at authentication in the cross-platform ASP.NET Core, sign up for a webinar to discuss the changes to .NET and understand how to create specialized HTTP services in Angular 2.

Until next time,

Wednesday, March 9, 2016

Angular NG2, NodeJS on Raspberry Pi, Skype Collaboration and JavaScript on Wed Mar 9 2016

Angular 2 run blocks, a new level of collaboration, NodeJS on Raspberry Pi and a proposal for weak references in JavaScript.

Until next time,

Sunday, February 28, 2016

30 Years of “Hello, World”

I recently took a vacation the same week as the 4th of July and had lots of time to reflect upon my career to date. It was a little shocking to realize I’ve been writing code for nearly 30 years now! I decided to take advantage of some of the extra time off to author this nostalgic post and explore all of the languages I’ve worked with for the past 30 years. So this is my tribute to 30 years of learning new languages starting with “Hello, World.”

The first programming language I learned was TI BASIC, a special flavor of BASIC written specifically for the TI 99/4A microcomputer by Microsoft. BASIC, which stands for Beginner’s All-purpose Symbolic Instruction Code, was the perfect language for a 7-year-old to learn while stuck at home with no games. The language organized lines of code with line numbers, and to display something on the screen you “printed” it like this:

1981 – TI BASIC

ti994abasic

I spent several months writing “choose your own adventure” games using this flavor of BASIC, and even more time listening to the whistles, crackles, and hisses of a black tape cassette recorder used to save and restore data. Probably the most exciting and pivotal moment of my young life was a few years later when my parents brought home a Commodore 64. This machine provided Commodore BASIC, or PET BASIC, right out of the box. This, too, was written by Microsoft based on the 6502 Microsoft BASIC written specifically for that line of chips that also happened to service Apple machines at the time.

1984 – Commodore BASIC

c64basic

The question mark was shorthand for the PRINT command, and the weird characters afterwards were the abbreviated way to type the RUN command (R SHIFT+U - on the Commodore 64 keyboard the SHIFT characters provided cool little graphics snippets you could use to make rudimentary pictures).

I quickly discovered that BASIC didn’t do all of the things I wanted it to. The “demo scene” was thriving at the time and crews were making amazing programs that would defy the limits of the machine. They would do things like trick the video chip into drawing graphics that shouldn’t be possible or scroll content or even move data into the “off-limits” border section of the screen. Achieving these feats required exact timing that was only possible through the use of direct machine language code. So, I fired up my machine monitor (the name for the software that would allow you to type machine codes directly into memory) and wrote this little program:

1985 – 6502 Machine Code

c64machine

This little app loaded the “Y-accumulator” with an index, then spun through memory starting at $C100, sending the characters one at a time to a ROM subroutine that would print them to the display. This is the equivalent of a for loop (for y = 0; y <= 0x0d; y++) in machine code. The RTS returns from the subroutine. In order to execute the program, you had to use the built-in SYS command that would call out to the memory address (unfortunately, you had to convert from hexadecimal $C000 to decimal 49152, but otherwise it worked like a charm). I had the PETSCII characters for “HELLO, WORLD” stored at memory address $C100 (yes, the Commodore 64 had its own special character page). Here is the result:

c64machinecall 

Of course life got a little easier when I moved from raw machine code to assembly. With assembly, I could pre-plan my software, and use labels to mark areas of memory without having to memorize memory addresses. The exact same program shown above could be written like this:

1986 – 6502 Assembly

* = $C000       ;set the initial memory address
CHROUT = $FFD2  ;set the address for the character out subroutine
         LDY #$00
LOOP     LDA HELLO, Y
         CMP #$00

         BEQ END
         JSR CHROUT
         INY
         BNE LOOP
END      RTS
HELLO    ASC 'HELLO, WORLD.' ; PETSCII
HELLOEND DFB 0 ; zero byte to mark the end of the string

About that time I realized I really loved writing software. I took some courses in high school, but all they taught was a silly little Pascal language designed to make it “easy” to learn how to program. Really? Easy? After hand-coding complex programs in a machine monitor, Pascal felt like overkill. I do have to admit the syntax for “Hello, World” is straightforward.

1989 – Pascal

program HelloWorld;
begin
  writeln('Hello, World.');
end.

I thought the cool kids at the time were working with C. This was a fairly flexible language and felt more like a set of functional macros over assembly than an entirely new language. I taught myself C on the side, but used it only for a short while.

1990 – C

#include <stdio.h>

int main(void)
{
  printf("Hello World\n");
  return 0;
}

The little program includes a library that handles Standard Input/Output and then sends the text on its way. Libraries were how C made cross-platform development possible – the function was called the same thing whether you were on Windows or Linux, but the library itself implemented all of the low-level routines needed to make it work on the target machine. The above code was something I would tinker with on my Linux machine a few years later. It’s hard to describe if you weren’t into computers during this time, but it felt like you weren’t a true programmer unless you built your own custom Linux installation. By “built your own” I mean literally walked through the source and customized it to match the specific set of hardware you owned. The most fun was dealing with video cards and learning about “dot clocks” and all of the nuances of making the motherboard play nicely with the graphics chip. Anyway, I digress.

C was not really a challenge for me to learn, but I quickly figured out the cool kids were doing something different and following this paradigm known as “object-oriented programming.” Machine code and assembly are probably the farthest you can get from OO, so the shift from procedural to object-oriented was a challenge I was ready to tackle. At the time you couldn’t simply search online for content (you could, but it was using different mechanisms with far fewer hits) so I went out and bought myself a stack of C++ books. It turns out C++ supports the idea of “objects.” It even used objects to represent streams and pipes to manipulate them. This object-oriented stuff also introduced the idea of namespaces to better manage partitions of code. All said, “Hello, World” becomes:

1992 – C++

#include <iostream>
using namespace std;
int main()
{
  cout << "Hello World";
  return 0;
}

I headed off to college and was disappointed that the school did not have courses covering the “modern” languages I was interested in, like C and C++. Instead, I had to muddle through a course where homework was performed on the mainframe we called “Cypher,” using an interesting language called Fortran that actually cared about what column you put your code in! That’s right: the flavor of the language at the time designated column 1 for comments, columns 1 – 5 for statement labels, column 6 to mark a continuation, and only at column 7 could you begin to write real code. I learned enough Fortran to know I never wanted to use it.

1993 – Fortran

       PROGRAM HELLOWORLD
       PRINT *, 'Hello, World!'
       END

Because I wasn’t much into the main courses, I spent most of my evenings down in the computer lab logging onto the massive Unix machines the college had. There I discovered the Internet and learned about the “old school” way of installing software: you pull down the source, build it, inspect the errors, tweak it, fix it, and end up with a working client. Honestly, I don’t know how you could use Unix without learning how to program, given the way things ran back then, so I was constantly hacking and exploring and learning my way around the system. One fairly common task was to execute commands that would dump out enormous wads of information that you then had to parse through using “handy” command-line tools. One of the coolest languages I learned during that time was PERL. It doesn’t do the language justice to treat it with such a simple example, but here goes:

1993 – PERL

$welcome = "Hello World";
print "$welcome\n";

At the same time I quickly discovered the massive World Wide Web (yes, that’s what we called it back then … the Internet was what all of those fun programs like Gopher and Archie ran on, and the World Wide Web was just a set of documents that sat on it). HTML was yet another leap for me because it was the first time I encountered creating a declarative UI. Instead of loading up variables or literals and calling some keyword or subroutine, I could literally just organize the content on the page. You’d be surprised that 20 years later, the basic syntax of an HTML page hasn’t really changed at all.

1993 – HTML 

<html>
<head><title>Hello, World</title></head>
<body><h1>Hello, World</h1></body>
</html>

This was an interesting time for me. I had moved from my personal computers (TI-99/4A and Commodore 64, with a brief period spent on the Amiga) to mainframes, and suddenly my PC was really just a terminal for me to connect to Unix mainframes. I also ran a Linux OS on my PC because that was the fastest way to connect to the Internet at the time – the TCP/IP stack was built into the OS rather than having to sit on top like it did in the old Windows versions (remember NETCOM, anyone?). Most of my work was on mainframes.

I did realize that I was losing touch with the PC world. By then it was fairly obvious that the wild days of personal computing were over and the dust had settled around two machines: the PC, running Windows, for most of us, and the Mac for designers. That’s really what I believed. I had a roommate at the time who was all over the Mac; he designed coupons for a living. He had all of these neat graphic design programs and would often pull out Quark and ask me, “What do you have on the PC that could do this?” I would shrug and remind him that I couldn’t even draw a circle or square, so what on earth would I do with that graphics software? I liked my PC because I understood software and I understood math, so even if I couldn’t draw I could certainly use math to create fractal graphics or particle storms. Of course, doing this required a graphics card and wasn’t really practical from a TELNET session to a Unix box, so I began learning how to code on the PC. At the time, it was Win32 and C++ that did the trick. You can still create boilerplate for the stack in Visual Studio 2012 today. I won’t bore you with the details of the original “HELLO.C” for Win32 that spanned 150 lines of code.

1994 – Win32 / C++ (Example is a bit more recent)

 win32
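
Since only the screenshot survives here, a minimal sketch in the same spirit – an illustrative console flavor that talks to the raw Win32 API rather than the C runtime, not the original listing:

#include <windows.h>

int main()
{
    // Write to the console through the Win32 API instead of printf.
    HANDLE out = GetStdHandle(STD_OUTPUT_HANDLE);
    const char msg[] = "Hello, World.\r\n";
    DWORD written = 0;
    WriteConsoleA(out, msg, sizeof(msg) - 1, &written, NULL);
    return 0;
}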

Dropping to the command line and executing this gives us:

win32output

My particle streams and Mandelbrot sets weren’t doing anything for employment, however, so I had to take a different approach. Ironically, my professional start didn’t have anything to do with computers at all. I started working for an insurance company taking claims over the phone in Spanish. That’s right. In the interview for a lower-wage job I was “settling for” to pay the bills while I stayed up nights hacking on my PC, I happened to mention I spoke Spanish. They brought in their bilingual representative to interview me, I passed the test, and within a week I was in a higher-paid position, learning more Spanish in a few short calls than I had in all my years of high school.

I was young and competitive, and we were ranked based on how many claims we successfully closed in a day. I was not about to fall behind just because the software I was using tended to crash every once in a while. It was a completely new system to me – the AS/400 (now called the iSeries) – but I figured it out anyway and learned how to at least restart the claims software after a crash. The IT department quickly caught on and pulled me aside. I was afraid I was in trouble, but instead they extended me an offer to move into IT. I started doing third-shift operations, which meant maintaining the AS/400 systems and swapping print cartridges on the massive printers that churned out policy forms and claims.

When I went to operations, the process for swapping print cartridges took most of the shift. Certain forms were black ink only, but other forms had green or red highlights. The printers could only handle one ink profile, so whenever a different type of form was encountered, we’d get an alert and go swap everything out. I decided this was ridiculous, so I took the time to teach myself RPG. I wrote a program that matched print jobs to their ink color and then sorted the print queue so all of the black came together, then all of the green, and so on. This turned an 8-hour job into about a 2-hour one and gave me lots of time to study RPG. The original versions – RPG II and RPG III – were crude languages originally designed to mimic punch card systems and generate reports (the name stands for Report Program Generator). Like Fortran, RPG was a positional language.

1995 – RPG

I              'HELLO, WORLD'        C         HELO
C           HELO      DSPLY
C                     SETON                     LR

Note the different types of lines indicated by the first character (actually it would have been several columns over but I purposefully omitted some of the margin code). This defines a constant, displays it, then sets an indicator to cause the program to finish.

After working in operations I landed a second gig. The month-end accounting required quite a bit of time and effort. The original system was a Honeywell mainframe that read punch cards. A COBOL program read in a file that emulated a punch card and output another file that was then pumped into the AS/400 and processed. After this, the various accounting figures had to match. Due to rounding errors, unsupported transactions, and any number of other issues, the figures almost never matched, so the job was to investigate the process, find out where it broke, then update the code to fix it. We also had an “emergency switch” for the 11th hour that would read in the output data and generate accounting adjustments to balance the books if we were unable to find the issues. Although I didn’t do a lot of COBOL coding, I had to understand it well enough to read the Honeywell source to troubleshoot issues on the AS/400 side.

1995 – COBOL

IDENTIFICATION DIVISION.
PROGRAM-ID. HELLO.
ENVIRONMENT DIVISION.
DATA DIVISION.
WORKING-STORAGE SECTION.
01 WELCOME-MESSAGE           PIC X(12).
PROCEDURE DIVISION.
PROGRAM-BEGIN.
    MOVE "Hello World" TO WELCOME-MESSAGE.
    DISPLAY WELCOME-MESSAGE.
PROGRAM-DONE.
    STOP RUN.

It was only a short time later that the top RPG guru came to our company to give us a three-day class, because the coolest thing was happening in the AS/400 world. Not only were the AS/400 machines moving to 64-bit (and everyone knows that double bits is twice as nice, right?) but the RPG language was getting a facelift: with version IV it would embrace more procedural and almost object-oriented principles than ever before. How cool was that? We jumped into training, and I laughed because all of the old RPG developers were scratching their heads trying to muddle through this “new style of programming” while I was relieved to finally get back to the more familiar procedural style I knew from C and C++, rather than the tight, constricted, indicator- and column-based language RPG had been.

Some developers may get a kick out of one of the “features” that really knocked the socks off everyone. The language required the instructions to begin at a certain column, and inputs into the instructions preceded them. That space was very limited, so you could really only load constants of a few characters; anything longer had to be declared as a named constant or data structure and read in. The new version moved the keyword column to the right so there was more room for the “factor one” position. That meant we could now do “Hello, World” in just a few lines. The language was also more procedural, so you could end a program by returning instead of setting on an indicator (although if I remember correctly, a return from the main program really just set that indicator under the covers).

1996 – RPG/ILE

C     'HELLO, WORLD' DSPLY 
C                    RETURN

The AS/400 featured a database built into the operating system called DB2. For the longest time the database only supported direct interaction via RPG or other software and did not expose SQL syntax; that arrived as a special add-on package called SQL/400, though the underlying support was already there. I wrote one of my first published (print) articles about tapping into SQL for the AS/400 in 1998 (Create an Interactive SQL Utility). There are probably a million ways to do “Hello, World” in SQL, but perhaps the easiest is this:

1998 – SQL 

SELECT 'HELLO, WORLD' AS HELLO
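
Strictly speaking, DB2 wants a FROM clause on every SELECT; the usual trick (on today’s systems, at least) is the system dummy table:

SELECT 'HELLO, WORLD' AS HELLO
  FROM SYSIBM.SYSDUMMY1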

I apologize for stepping out of chronological order, but the SQL seemed to make sense as part of my “main” or “paid” job. At the same time I had been doing a lot of heavy gaming, starting with DOOM (the first game I was so impressed with that I actually sent in the money to purchase the full version), continuing with DOOM II and HEXEN, and culminating in Quake. If you’re not familiar with the history of first-person shooters, Quake was the game that changed the history of gaming. It offered one of the first “true” 3D worlds (its predecessors simulated 3D with 2D maps that allowed for varying floor and ceiling heights) and revolutionized the deathmatch by supporting TCP/IP and using advanced code that allowed more gamers in the same map than ever before.

It was also extremely customizable. Although I am aesthetically challenged and never caught on to creating my own models or maps, I jumped right into programming. Quake offered a C-based language called QuakeC that you would compile into a special cross-platform byte code that could run on all of the target platforms Quake did. I quickly wrote a number of modifications to do things like allow players to catch fire or cause spikes to ricochet realistically from walls. Someone in a chat room asked me to program an idea that I became famous for, called “MidnightCTF,” which took any existing map, turned off all of the lights, and equipped each player with their own flashlight. Quake was one of the first games to support true 3D sound, so this added an interesting dimension to game play.

Someone even included a code snippet from one of my modifications in the “Dictionary of Programming Languages” under the QuakeC entry. Nikodemos was the nickname I used when I played Quake. The “Hello, World” for QuakeC is really just a broadcast message that gets sent to all players currently in the game.

1996 – QuakeC

bprint("Hello World\n");
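
For flavor, the heart of the MidnightCTF flashlight was little more than toggling the engine’s built-in dynamic light effect on the player – a sketch from memory using the stock defs.qc constant, not the original source:

// Toggle a dynamic light on the player entity; EF_DIMLIGHT is the same
// glow the engine uses for powerups. (Reconstructed sketch.)
void() FlashlightToggle =
{
    if (self.effects & EF_DIMLIGHT)
        self.effects = self.effects - EF_DIMLIGHT;
    else
        self.effects = self.effects | EF_DIMLIGHT;
};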

By this time I realized the Internet was really taking off. I had been frustrated in 1993 when I discovered it at college and no one really knew what I was talking about, but just a few years later everyone was scrambling to get access (a few companies like AOL and Microsoft with MSN actually thought they could build their own version … both ended up giving in and plugging into THE Internet). I realized that my work on mainframes was going to become obsolete or at best I’d be that developer hidden in the back corner hacking on “the old system.” I wanted to get into the new stuff.

I transferred to a department that was working on the new stuff – an application designed to provide visibility across suppliers by connecting several different systems, written with VB6 (COM+) and ASP.

1998 – VB6 (COM) w/ ASP

' HelloWorld.cls – a VB6 class module compiled into MyLibrary.dll
Public Function GetText() As String
    GetText = "Hello World"
End Function

<%@ Language="VBScript" %>
<OBJECT RUNAT=SERVER SCOPE=Session ID=MyGreeting PROGID="MyLibrary.HelloWorld">
</OBJECT>
<HTML>
<HEAD><TITLE><%= MyGreeting.GetText() %></TITLE></HEAD>
<BODY><H1><%= MyGreeting.GetText() %></H1></BODY>
</HTML>

At the time I had the opportunity to work with a gifted architect who engineered a system that was pretty amazing for its day. Our COM+ components all accepted a single string parameter in the interface because incoming information was passed as XML. This enabled us to have components that could just as easily work with messages from the web site as with incoming data from a third-party system. It was a true “web service” before I really understood what the term meant. On the client, forms were parsed by JavaScript, packaged into XML, and posted down, so a “post” from the web page was no different than a post directly from the service. The services would return data as XML as well. This would be combined with a template for the UI (called PXML, for presentation XML) and then an XSLT template would transform it for display. This enabled us to tweak the UI without changing the underlying code and was almost like an inefficient XAML engine. This was before the .NET days.
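
To give a sense of the pattern, the client-side piece amounted to something like this (the names here are illustrative, not our actual schema):

// Hypothetical sketch: walk a form's fields and package them into the
// XML envelope the COM+ components expected.
function formToXml(form) {
    var xml = '<request>';
    for (var i = 0; i < form.elements.length; i++) {
        var field = form.elements[i];
        if (field.name) {
            xml += '<' + field.name + '>' + field.value + '</' + field.name + '>';
        }
    }
    return xml + '</request>';
}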

JavaScript of course was our nemesis because we had to tackle how to handle the various browsers at the time. Yes, the same problems existed 15 years ago that exist today when it comes to JavaScript and cross-browser compatibility. Fortunately, all browsers agree on the way to send a dialog to the end user.

1998 – JavaScript

alert('Hello, World.');
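
The dialog was the easy part; the rest of the time we were writing shims like this one (a representative sketch, not our actual code):

// IE exposed attachEvent while other browsers used addEventListener,
// so every project carried a helper along these lines.
function addHandler(element, type, handler) {
    if (element.addEventListener) {
        element.addEventListener(type, handler, false);
    } else if (element.attachEvent) {
        element.attachEvent('on' + type, handler);
    }
}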

A lot of our time was spent working with the Microsoft XML DLLs (yes, if you programmed back then you remember registering the MSXML parsers). MSXML3.DLL quickly became my best friend. Here’s an example of transforming XML to HTML using XSLT.

1998 – XML/XSLT to HTML

<?xml version="1.0"?>
<!-- hello.xml: the source document -->
<hello>Hello, World!</hello>

<?xml version="1.0"?>
<!-- hello.xsl: transforms the document into an HTML page -->
<xsl:stylesheet version="1.0"
      xmlns:xsl="http://www.w3.org/1999/XSL/Transform">
    <xsl:template match="hello">
        <html>
            <head><title><xsl:value-of select="."/></title></head>
            <body><h1><xsl:value-of select="."/></h1></body>
        </html>
    </xsl:template>
</xsl:stylesheet>

<%
' The ASP page loads both documents and writes out the transformed HTML
Const MSXMLClass = "MSXML2.DOMDocument"
Set XSLT = Server.CreateObject(MSXMLClass)
Set XDoc = Server.CreateObject(MSXMLClass)
XDoc.load(Server.MapPath("hello.xml"))
XSLT.load(Server.MapPath("hello.xsl"))
Response.Clear
Response.Charset = "utf-8"
Response.Write XDoc.transformNode(XSLT)
%>

I spent several years working with that paradigm. Around that time I underwent a personal transformation, shedding almost 70 pounds to drop from a 44” waist down to a 32”, and became very passionate about fitness. I started my own company “on the side” and eventually left to become Director of IT for a smaller company that provided translation services to hospitals and ran a Spanish-language online diet program. Once again I was able to tap into my Spanish-speaking ability because the translations were from English to Spanish and vice versa. I learned quite a bit about the differences between various dialects and the importance of having targeted translations.

I also rewrote an entire application that had been using ASP with embedded SQL calls, hard-coded to Spanish, turning it into a completely database-driven, white-labeled (for branding), localized app (the company was looking to branch into other languages like French). It was an exciting time, and while I used the Microsoft stack at my job, the cost of tools and servers led me to the open source community for my own company. That’s when I learned all about the LAMP stack: Linux OS, Apache HTTP Server, MySQL database, and PHP for development. Ironically, this experience later landed me one of my first consulting gigs, working for Microsoft as they reached out to the open source community in hopes it would embrace Silverlight … but that’s a different story.

2002 – PHP

<?php
$hello = 'Hello, World.';
echo "$hello";
?>

Several years passed on those platforms before I had the opportunity to move into yet another position and build the software department for a new company. I was the third employee at a small start-up that was providing wireless hotspots before the term became popular. If you’ve ever eaten at a Panera or Chick-fil-A or grabbed a cup of coffee at a Caribou Coffee, then you’ve used either the software I helped write, or a more recent version of it, to drive the hotspot experience. When I joined the company, the initial platform was written in Java. This was a language I’d done quite a bit of “tinkering” with, so it wasn’t a gigantic leap to combine my C++ and Microsoft stack skills to pick it up quickly.

2004 – Java

public class Hello {
    public static void main(String[] args) {
        System.out.println("Hello, World");
    }
}

I’ve got nothing against Java as a language, but the particular flavor we were using involved the Microsoft JVM, which was about to get shelved, and a custom server that just didn’t want to scale. I migrated the platform over to .NET, and it was amazing to see a single IIS server handling more requests than several of the dedicated Java servers could. I say “migration,” but it was really building a new platform. We looked into migrating the J++ code over to C#, but it just wasn’t practical. Fortunately C# is very close to Java, so most of the team was able to transition easily, and we simply used the existing system as the “spec” for the new system to run on Windows machines and move from MySQL to SQL Server 2005. Note how similar “Hello, World” is in C# compared to Java.

2005 – C#

public class Hello
{
   public static void Main()
   {
      System.Console.WriteLine("Hello, World!");
   }
}

Part of what made our company so successful at the time was a “control panel” that allowed us to manage all of our hotspots and access points from a central location. We could reboot them remotely, apply firmware updates, and monitor them with a heartbeat, storing history to diagnose issues. This software quickly evolved into a mobile device management (MDM) platform that is the flagship product for the company today. The company rebranded and launched the product, but our challenge was providing an extremely interactive experience in HTML that was cross-browser compatible (the prior solution used Microsoft’s custom Java applets). We succeeded in building a fairly impressive system using AJAX and HTML, but our team struggled with complex, rich UIs that had to be tested across so many browsers and platforms. While we needed to maintain this for the hotspot login experience, the management side was more flexible, so I researched some alternative solutions.

When I discovered Silverlight, I was intrigued but decided to pilot it first. I was able to stand up a POC of our monitoring dashboard in a few weeks, and everyone loved it, so we decided to go all-in. At my best guess, our team was able to go from concept to delivery about four times faster using Silverlight compared to the JavaScript and HTML stack. This was while HTML5 was still a pipe dream. We built quite a bit of Silverlight functionality before I left. By that time we were working with Apple on the MDM side, and they of course did not want Silverlight anywhere near their software. HTML5 was slowly gaining momentum, so I know the company transitioned back, but I was able to enjoy several more years building rich line-of-business applications in a language that brought the power of a declarative UI through XAML to as many browsers and platforms as were willing to allow plugins (I hear those aren’t popular anymore).

2008 – Silverlight (C# and XAML)

<UserControl x:Class="SilverlightApplication1.MainPage">
    <Grid x:Name="LayoutRoot" Background="White">
        <TextBlock x:Name="Greeting"></TextBlock>
    </Grid>
</UserControl>

public partial class MainPage : UserControl
{
    public MainPage()
    {
        InitializeComponent();
        Loaded += MainPage_Loaded;
    }

    void MainPage_Loaded(object sender, RoutedEventArgs e)
    {
        Greeting.Text = "Hello, World.";
    }
}

Silverlight of course went down like a bad stock. It was still a really useful, viable technology, but once people realized Microsoft wasn’t placing much stock in it (pardon the pun), it was dead on arrival – its demise had nothing to do with whether it was the right tool at the time, and everything to do with the perception that it was obsolete. HTML5 also did a fine job of marketing itself as “write once, run everywhere,” and hundreds of companies dove in head first before they realized their mistake (it’s really “write once, suck everywhere, then write it again for every target device”).

The parts we loved about Silverlight live on, however, in Windows 8.1 with the XAML and C# stack. For kicks and giggles here’s a version of “Hello, World” that does what the cool kids do and uses the Model-View-ViewModel (MVVM) pattern.

2011 – WinRT / C#

public class ViewModel
{
    public string Greeting
    {
        get
        {
            return "Hello, World";
        }
    }
}

<Grid Background="{StaticResource ApplicationPageBackgroundThemeBrush}">
    <Grid.DataContext>
        <local:ViewModel/>
    </Grid.DataContext>
    <TextBlock Text="{Binding Greeting}"/>
</Grid>

While Windows 8.1 has kept me occupied through my writing and side projects, it’s still something new to most companies, and they want a web-based solution. That means HTML and JavaScript, so that’s what I spend most of my time working with. That’s right: once I thought I got out, they pulled me back in. After taking a serious look at what I hate about web development with HTML and JavaScript, I decided there had to be a better way. Our team got together, looked at potential options, and found a pretty cool solution. Recently a new language called TypeScript was released as a superset of JavaScript. It doesn’t try to change the syntax, and any valid JavaScript is also valid TypeScript. The language provides development-time features such as interfaces that shape API calls and provide rich discovery (without ever appearing in the generated code), while also giving us constructs like classes with inheritance, strongly typed variables, and static modifiers that all compile to perfectly valid, cross-browser JavaScript.

Using TypeScript was an easy decision. Even though it is in beta, it produces 100% production-ready JavaScript, so if we found it wasn’t working well we knew we could pull the plug and move forward with the JavaScript itself. It turned out to be incredibly useful – even a few skeptics on the team who were JavaScript purists and hated any attempt to “modify the language” agree that TypeScript gives us an additional level of control, makes refactoring easier, supports parallel development, and has accelerated our ability to deliver quality web-based code.

2012 – TypeScript

class Greeter {
    public static greeting: string = "Hello, World";
    public setGreeting(element: HTMLElement): void {
        element.innerText = Greeter.greeting;
    }
}

var greeter: Greeter = new Greeter();
var div: HTMLElement = document.createElement("div");
greeter.setGreeting(div);
document.body.appendChild(div);
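
The part that won over the skeptics was the development-time tooling. For example, an interface can shape an API call and then vanish from the compiled output – a small illustrative sketch (the names here are hypothetical):

// The interface below exists only at development time; the generated
// JavaScript contains no trace of it.
interface Greeting {
    text: string;
    language?: string; // optional member
}

function show(greeting: Greeting): void {
    console.log(greeting.text);
}

show({ text: "Hello, World", language: "en" });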

TypeScript wasn’t the only change we made. We also wanted to remove some of the ritual and ceremony around setting up objects for data-binding. We had been using Knockout, which is a great framework, but it required more work than we wanted. Someone on our team investigated a few alternatives and settled on AngularJS. I was a skeptic at first but quickly realized this was really like XAML for the web. It gave us a way to keep the UI declarative while isolating our imperative logic, and it solved yet another problem. Our team has been happily using a stack with TypeScript and AngularJS for months now and absolutely loves it. I’m working on a module for WintellectNOW because I believe this is a big thing.

However, if 30 years have taught me anything, it’s this: here today, gone tomorrow. I’m not a C# developer, or a JavaScript developer, or an AngularJS wizard. Nope. I’m a coder. A programmer. Pure, plain, and simple. Languages are just a tool, and I happen to speak many of them. So, “Hello, World” – I hope you enjoyed the journey … here’s to the latest.

2013 – AngularJS

<div ng-app>
    <div ng-init="greeting = 'Hello, World'">
        {{greeting}}
    </div>
</div>
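
The ng-init trick above is the one-liner; in practice the imperative logic would live in a controller while the markup stays declarative. A minimal sketch (early AngularJS allowed plain global functions as controllers):

<div ng-app>
    <div ng-controller="GreetingController">
        {{greeting}}
    </div>
</div>

// GreetingController isolates the imperative logic from the declarative view.
function GreetingController($scope) {
    $scope.greeting = 'Hello, World';
}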

“Goodbye, reader.”