Friday, July 29, 2011

Reflection, Lambda, and Expression Magic

Sometimes I love how a little reflection can work magic. In this case I was building what I'll call a "non-intrusive" validation system. The project contains entities that are generated by templates, and it would be extremely difficult to crack open those templates to put in a lot of custom validation code. So I decided to approach it using a helper class that would attach to the generated entities.

As always, I work backwards. I like to have readable, fluent interfaces, so I decide what I want them to look like and then implement the classes and interfaces to make them work. In this case the entities implemented INotifyPropertyChanged and IDataErrorInfo. This makes it easy on the validation class because if it needs to validate a property, it can simply hook into the property change notification and pick up the new value for validation. Problem solved.

The validation mechanism ended up as a fluent interface: the helper attaches to an entity, and a chain of WithValidation calls registers the validators for each property.

So, how do we get there? I'm not going to give you all of the code, but I'll touch on a few salient points. First, creating a common interface for validations makes it easy for the validation helper to work through a dictionary (i.e., if property "x" comes in, run all validations for "x"). Implementing the validators with a constructor makes it easy to pass parameters; in the case of FieldIsInRange, there is a constructor that takes the minimum and maximum values. When you use reflection, you pass in the parameters and it finds the matching constructor for you. This means I can have an extension method like this:

public static ValidationHelper WithValidation<TValidator, TProperty>(
   this ValidationHelper helper,
   Expression<Func<TProperty>> propertyExpression,
   params object[] args)
   where TValidator : IMyValidator

And I can simply pass the arguments through to reflection and it will match the appropriate constructor like this:

var validator = (TValidator)Activator
    .CreateInstance(typeof(TValidator), args);
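Putting those pieces together, here is a minimal, hypothetical sketch of how the helper and extension method might fit. Only WithValidation, IMyValidator, and FieldIsInRange appear in the original; the validator's Validate signature and the helper's dictionary shape are assumptions for illustration:

```csharp
using System;
using System.Collections.Generic;
using System.Linq.Expressions;

// Common interface for validations (a sketch; the real interface is richer).
public interface IMyValidator
{
    bool Validate(object value);
}

// Validator with a (min, max) constructor, matched by Activator.CreateInstance.
public class FieldIsInRange : IMyValidator
{
    private readonly int _min, _max;

    public FieldIsInRange(int min, int max)
    {
        _min = min;
        _max = max;
    }

    public bool Validate(object value)
    {
        var number = Convert.ToInt32(value);
        return number >= _min && number <= _max;
    }
}

// The helper keeps a dictionary from property name to its validators.
public class ValidationHelper
{
    public readonly Dictionary<string, List<IMyValidator>> Validations =
        new Dictionary<string, List<IMyValidator>>();
}

public static class ValidationExtensions
{
    public static ValidationHelper WithValidation<TValidator, TProperty>(
        this ValidationHelper helper,
        Expression<Func<TProperty>> propertyExpression,
        params object[] args)
        where TValidator : IMyValidator
    {
        // Walk the expression to get the property name.
        var name = ((MemberExpression)propertyExpression.Body).Member.Name;

        // Reflection matches the constructor from the args array.
        var validator = (IMyValidator)Activator
            .CreateInstance(typeof(TValidator), args);

        List<IMyValidator> list;
        if (!helper.Validations.TryGetValue(name, out list))
        {
            list = new List<IMyValidator>();
            helper.Validations[name] = list;
        }
        list.Add(validator);

        return helper; // returning the helper is what keeps the interface fluent
    }
}
```

Registration then chains naturally, e.g. `helper.WithValidation<FieldIsInRange, int>(() => Age, 1, 100)`, where Age is a property on the view model.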

The first parameter after the extended helper instance is a property expression. You may find it familiar because that's the guidance Prism provides for strongly-typed property names, and it has been the topic of much debate. I settled on that format for my Jounce framework, and while I initially used it to get property names, it comes in very handy for validation as well.

I won't rehash extracting the property name because it's been done. There are lots of debates over the relative performance of different methods. Here's a hint: for me, most of my property change notification is so I know when the user changes a field. Last time I checked, a user can't type fast enough to make my property changes fire so often that I'm really going to lose sleep over milliseconds here or there. I'd rather have something that is strongly typed, so I don't have refactoring nightmares, and put in some of that plumbing, than have something that is convoluted to implement and code but saves a tiny fraction of time the user won't even notice.

You can see the source code in the ExtractPropertyName method by navigating to the BaseNotify class in Jounce - here is a direct link to the source for your viewing pleasure.
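For reference, a minimal sketch of that extraction (the Jounce version includes additional guard clauses) looks like this:

```csharp
using System;
using System.Linq.Expressions;

public static class PropertySupport
{
    // Pulls the property name out of an expression like () => this.MyProperty.
    public static string ExtractPropertyName<TProperty>(
        Expression<Func<TProperty>> propertyExpression)
    {
        var memberExpression = propertyExpression.Body as MemberExpression;
        if (memberExpression == null)
        {
            throw new ArgumentException(
                "The expression must access a property.",
                "propertyExpression");
        }
        return memberExpression.Member.Name;
    }
}
```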

In the validation engine, we get an added benefit. What's passed in has a specific signature: an expression that contains a function returning the property, Expression<Func<TProperty>>. Pretty cool. So we can walk the expression tree and get the property name. Once the property change notification fires, we can inspect the event arguments for the property name; if it is one we want to validate, we go ahead and perform the validation. But of course, to validate it, we need the value. How do we get it?

Often I see the solution of taking the property name that's been extracted from the expression, using reflection to find the getter, and invoking it, like this:

var value = (TProperty) typeof(T)
    .GetProperty(propertyName)
    .GetGetMethod()
    .Invoke(instance, null);

That will work, but it's like tying your arms into a pretzel and reaching around your back to scratch your nose. Doesn't really make sense when you've got the expression already — what was passed in? It's a function that resolves the value! So we use some expression magic to run the lambda expression.

Given the propertyExpression passed in with the signature Expression<Func<TProperty>>, all you have to do is compile the body of the expression. The following code extracts the value, first by compiling the expression body into a delegate, then by invoking that delegate:

var lambdaExpression = Expression
    .Lambda<Func<TProperty>>(propertyExpression.Body)
    .Compile();
var value = (TProperty) lambdaExpression();
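Here is a minimal end-to-end illustration; Source is a stand-in property invented for this example:

```csharp
using System;
using System.Linq.Expressions;

public class Demo
{
    public static string Source { get; set; }

    public static void Main()
    {
        Source = "current value";
        Expression<Func<string>> propertyExpression = () => Source;

        // Compile the expression body into a delegate and invoke it
        // to read the property's current value - no reflection needed.
        var lambdaExpression = Expression
            .Lambda<Func<string>>(propertyExpression.Body)
            .Compile();
        var value = lambdaExpression();

        Console.WriteLine(value); // prints "current value"
    }
}
```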

Now you can pass that value (which was resolved to the most current value of the property, without using reflection) into the validation routines. One of those validation routines is a generic routine to compare values. Why write this multiple times when the framework provides a nice IComparable interface for things that can be compared? The framework I built works with strings, so my first challenge was parsing the strings to the target type before comparing them. This is where we can have some fun.

Here is the code to parse any string to any type ... provided the type supports the TryParse method. First, you get the type and you create a variable to hold the output of the parse call:

public static bool TryParse<T>(string input, out T value)
{
    var type = typeof(T);
    value = default(T);

Next, you build the parameters to the call. You want the method with the signature that takes a string type for the first parameter, and a reference to the type for the output parameter. Here's where it is important to know your C# and your CLR. The "out" keyword is enforced by the compiler, and is a language construct. The framework knows nothing about it. To the framework, it's simply a reference type. You might not know that you can create special types that are tagged as "reference" - read the documentation here for more information. If you suffix a type name with an ampersand (&) you are flagging that type to be passed by reference.
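A quick stand-alone sketch shows the ampersand convention in action:

```csharp
using System;

public class ByRefDemo
{
    public static void Main()
    {
        // Appending "&" to a full type name yields the by-reference type,
        // the same thing the compiler emits for "out" and "ref" parameters.
        var byRefInt = Type.GetType("System.Int32&");

        Console.WriteLine(byRefInt.IsByRef);                        // True
        Console.WriteLine(byRefInt == typeof(int).MakeByRefType()); // True
    }
}
```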

So, here's the rest of the method down to the call:

    var parameters = new[]
    {
        typeof(string),
        System.Type.GetType(string.Format("{0}&", type.FullName))
    };

    var method = type.GetMethod("TryParse", parameters);

    if (method == null)
    {
        return false;
    }

    var invokeParameters = new object[] { input, value };
    var returnValue = (bool) method.Invoke(null, invokeParameters);

Now comes just a slight trick that will cause problems if you're not aware of it. You tagged the parameter as a reference type, but most of the time your comparisons will be dealing with value types (integers, decimals, etc.) so what happens in the call is that the types are boxed. It's a common mistake to make that call above and assume your value will now be populated if the returnValue is true. Imagine your frustration when you run this and it is always set to the default value instead! What gives? It turns out the call does update the reference — but not the one you provided. It updates the reference in the array you passed in!

The last step in the global TryParse method is therefore to pull that value back out and return the result:

public static bool TryParse<T>(string input, out T value)
{
    // ... method body shown above ...

    if (returnValue)
    {
        value = (T) invokeParameters[1];
    }

    return returnValue;
}

And there you have it ... the generic TryParse. Now if you want to compare the values, you can constrain the type to IComparable and simply use:

public int ParseAndCompare<T>(string valueOne, string valueTwo)
    where T : IComparable
{
    T parsedOne, parsedTwo;
    if (TryParse(valueOne, out parsedOne) &&
        TryParse(valueTwo, out parsedTwo))
    {
        return parsedOne.CompareTo(parsedTwo);
    }

    throw new ArgumentException("One of the values didn't parse.");
}
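For reference, here are the fragments above assembled into one compilable class; this is a sketch that mirrors the snippets rather than the author's exact source:

```csharp
using System;

public static class GenericParser
{
    public static bool TryParse<T>(string input, out T value)
    {
        var type = typeof(T);
        value = default(T);

        // Look for the static TryParse(string, out T) on the target type.
        var parameters = new[]
        {
            typeof(string),
            Type.GetType(string.Format("{0}&", type.FullName))
        };

        var method = type.GetMethod("TryParse", parameters);
        if (method == null)
        {
            return false;
        }

        var invokeParameters = new object[] { input, value };
        var returnValue = (bool) method.Invoke(null, invokeParameters);

        // The boxed result lands in the array, not in our local variable.
        if (returnValue)
        {
            value = (T) invokeParameters[1];
        }

        return returnValue;
    }

    public static int ParseAndCompare<T>(string valueOne, string valueTwo)
        where T : IComparable
    {
        T parsedOne, parsedTwo;
        if (TryParse(valueOne, out parsedOne) &&
            TryParse(valueTwo, out parsedTwo))
        {
            return parsedOne.CompareTo(parsedTwo);
        }

        throw new ArgumentException("One of the values didn't parse.");
    }
}
```

For example, GenericParser.ParseAndCompare&lt;int&gt;("5", "10") returns a negative number, and comparing "7" with "7" returns zero.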

I will admit the details of the implementation can get a bit complex if you're not familiar with expression trees or reflection. However, the beauty of this approach is that these core pieces form the basic infrastructure the validations are built on, and they are not likely to change much. The payoff is enormous: a developer can write a new validation by implementing a simple interface and then add it using the fluent syntax I showed earlier. That makes the code both readable and easy to maintain.

Jeremy Likness

Wednesday, July 13, 2011

Consumerization of IT and Silverlight Line of Business

Yesterday I had the opportunity to present a demo at the Worldwide Partner Conference (WPC) in Los Angeles. The session was called "Profiting from the Consumerization of IT with Windows Devices and Windows 7 Enterprise." The focus was on how consumer-driven trends impact the enterprise and ways to work with, rather than against, that trend to create successes.

In recent months there were two events that were perceived as blows to the success of Silverlight at large. The first was the announcement that it would not be supported on the iPhone. While this caused a lot of concern and grief, it did not bother me, because I knew the phone is a different target and it would never make sense to run the same application "out of the box" in that space. Even when Windows Phone 7 was announced with Silverlight support, developers quickly realized that while there are many assets they can share between projects that target the desktop, slate, and phone, there is still a substantial amount of development specifically targeted to the smaller form factor of the phone.

The bigger trend was when the iPad exploded into popularity. This resulted in a typical scenario: CEO purchases iPad, loves it, brings it into the office and asks, "Why can't all of our applications run on this thing?" and then IT scrambles to rewrite everything using HTML5. Of course, they quickly found that path wasn't easy, either, and although they could produce content for the iPad using HTML5, that application was not the same application they'd be using for other targets.

Our company recently completed a project for Rooms to Go, and this is what I was asked to demo on stage: our solution and why it is so important in this space. The company sells furniture with packaged deals that allow you to literally "buy the room." The problem they were solving was a customer experience issue. A salesperson would engage with a customer on the floor, but if the customer wanted more information about a package, wanted to estimate delivery costs, or even just wanted to share information, the salesperson had to break away, locate a kiosk, and log in there. It was a very disruptive process.

Rooms to Go wanted something they could carry with them and use on the floor to engage with the customer without having to break away. They wanted it to be easy and intuitive to minimize disruption with training and learning curves. They wanted something rugged so that if it happened to get dropped on the floor (or have something spilled on it) it could keep on running (can you imagine dropping an iPad on the floor?).

The device they went with is the Motion Computing CL900 and I'll let you visit the link for all of the statistics. Why did they go this route, as opposed to some of the other popular devices?

  • They needed something that had a security story. There is no security story around many of the other consumer devices. This runs Windows 7 and integrates with Active Directory, honors group policies and interfaces with all of their existing infrastructure.
  • They wanted to run their legacy applications. One distinct difference between Windows tablets and other consumer products is that they run the full Windows operating system and are not just a "phone on a slate" that doesn't make calls. This means it was easy for them to load their existing apps right on the tablet and still use them even if they weren't built with "touch first" in mind.
  • They wanted to leverage existing mindshare. By building the application in Silverlight they were able to stick with a development environment (Visual Studio) and a language (C#) they were familiar with. Our team worked shoulder to shoulder with theirs to deliver the product and ensure they understood the framework and owned the code. The application used my Jounce MEF and MVVM framework for Silverlight.

There are many more facets to the story that you can learn about in the case study and by watching the video (here is a direct link.)

If you are a Silverlight developer shaking in your boots over recent announcements, I wouldn't be. While the future is still not clear and we won't know much more until the BUILD conference, still weeks away, keep this scenario in mind: companies want an engaging, interactive, .NET- and Windows-based experience they can deliver, and the demand for Silverlight line of business applications is only growing. In my opinion it will only increase with the release of version 5. With the number of companies still on Windows XP, it would take 300,000 upgrades a day to convert them to Windows 7 before end of support ... and that means, regardless of the Windows 8 story, there is still going to be a strong platform and support base for delivering Silverlight line of business applications.

The slate is the perfect example of a use case for Silverlight Line of Business. I'll be delivering a talk about developing for slate using Silverlight at reMIX South in the Atlanta area on August 6th. I'll cover how we were able to use existing frameworks and libraries like the LightTouch library to quickly develop a comprehensive solution that solved a real world problem. I hope to see you there. If there is nothing else you take away from this, hopefully I've demonstrated the real world demand and application of solutions for line of business applications written in Silverlight. Jeremy Likness

Tuesday, July 5, 2011

Worldwide Partner Conference and Silverlight

I've seen a lot of speculation around the future of Silverlight and how it compares to HTML5 lately. If you've followed my posts and tweets you'll find that I still believe Silverlight is strong in the line of business area and in fact my company Wintellect is still doing quite a bit with it.

The interest is so strong that I'll be traveling to Los Angeles in a week to present at the Worldwide Partners Conference (WPC). The session speaks to the business side of Silverlight but with a very powerful message. It is a breakout session titled, "Profiting from Consumerization of IT with Windows Devices and Windows 7 Enterprise" and is on Thursday July 12th from 4:30pm to 5:30pm PST.

While I can't reveal all of the details just yet, my portion of the session is based on a case study from a very successful project our company recently completed. The client wanted to enhance the ability of their salespeople to interact with customers by providing an interactive touch experience on a Windows slate device. We built that experience in Silverlight and used it to interface with their existing backend systems. Silverlight made it easy and fast for us to develop a rich, interactive application with seamless integration to existing and new services on their backend. We were able to share development with their team while building a highly modular solution, and because of the parallel development and design it was completed in a very short time frame.

I'll post more details as they become available so that you can learn more about the case study and how Silverlight combined with the slate provided the solution they were looking for. The target device was Windows because the customer had an existing Windows infrastructure and also felt the security around Windows-based devices was much stronger than the story with competing devices like the iPad.

If you're at the conference and are able to join me, I look forward to seeing you there. Otherwise, stay tuned as more details are released. I love sharing case studies with Silverlight because at the end of the day I can tell you how valuable and useful I think it is, but it's the customer and their perception that ultimately determine success.

Jeremy Likness

Friday, July 1, 2011

Quick Tip: Fixing those Stubborn References

I am working on a project that uses a mixed set of assemblies. Some are in the .NET Framework 3.5, and others are in version 4.0. The project is being converted to use the Managed Extensibility Framework (MEF). In order for the parts to play nicely together, all projects must use the same version of the System.ComponentModel.Composition.dll assembly.

The problem is that MEF is a part of the core framework in 4.0. When you add a reference, there is a set of internal pathways that Visual Studio uses to resolve the location. Even if you browse to a local version of the DLL, as I did, once it is added the core framework version ends up being referenced.

The solution is simple but not straightforward. To fix any reference that keeps bouncing back to the GAC or framework version, follow these steps:

Unload the Project

We'll need to edit the project file itself, and that is not possible while it is loaded in the Solution Explorer. Right-click the project node and choose "Unload Project." The project node should turn slightly gray and show (Unavailable) next to it.

Edit the Project

Now, right-click the project again and you'll receive a different context menu. This time, choose "Edit (projectname).csproj" and you will see the XML for the project file itself. Parts of project files are grouped in a tag called ItemGroup and you should find one that contains references, like this:

<ItemGroup>
    <Reference Include="PresentationCore" />
    <Reference Include="PresentationFramework" />
    <Reference Include="System.ComponentModel.Composition" />
</ItemGroup>

Change to an Explicit Reference

The first step to locking the reference is to make it explicit. The generic reference above will resolve to the framework version. Instead, change it to the explicit version - in this case, the version of MEF that was released in February 2010, just prior to being integrated into the full framework. The new reference looks like this:

<Reference Include="System.ComponentModel.Composition, Version=2010.2.11.0, Culture=neutral, processorArchitecture=MSIL"/>

Provide a Hint Path

Finally, if the assembly is being packaged with the project, it can help to let Visual Studio know where to find it. This is done with a hint path. To add the hint, open the Reference tag and add a HintPath element containing the relative path to the assembly from the current project (if you use an absolute path, your solution won't transfer well to other machines, including the build box).

In this example, there is a higher level folder named "External" that holds external references, so the full reference with the hint path looks like this:

<Reference Include="System.ComponentModel.Composition, Version=2010.2.11.0, Culture=neutral, processorArchitecture=MSIL">
    <HintPath>..\External\System.ComponentModel.Composition.dll</HintPath>
</Reference>

Now you can save and close the file. Right-click the project and choose "Reload Project" to make it available again, then right-click the reference and view its properties to verify it is now "locked" to the version you want.

Jeremy Likness