Friday, August 27, 2010

Fluent Validation with MEF and PRISM

Validation is a topic that comes up quite frequently. One concern is how to honor DRY (don't repeat yourself) principles and create a framework that is reusable across layers. There are some very robust and tested frameworks out there, but sometimes they might be more than what you are looking for. In this post, I'll show you how to build a standalone validation library that can be shared between Silverlight and the server, then integrated via MEF and PRISM for fluent validation.

Rules and Violations

First things first. We need to lay down the validation framework. One technique I've learned when working with Silverlight is to build your main classes as Silverlight classes first, so they take advantage of the "lowest common denominator." Class libraries built in Silverlight are then easily shared on the server - the easiest, "out of the box" way is to create a mirrored .NET project and link the classes.

I suspect when a rule is violated, it might pertain to more than one property, so I created a rule violation to look like this:

public class RuleViolation 
{
    public RuleViolation(string message, IEnumerable<string> properties)
    {
        Message = message;
        Properties = properties;
    }

    public string Message { get; private set; }

    public IEnumerable<string> Properties { get; private set; }
}

Next, we need to define what a validation rule looks like. What I want to do is define various rules and tag them with MEF to export, and have a centralized place to request and execute the rules. The interface I decided to go with looks like this:

public interface IValidationRule
{
    IEnumerable<RuleViolation> Validate(string propertyName, object propertyValue);

    IEnumerable<RuleViolation> Validate(string propertyName, object propertyValue, Dictionary<string,object> parameters);        
}

We can simply pass a property and a value, possibly with additional parameters, and receive a list of violations. The parameters dictionary is one place where some might complain. Instead of having fine-grained validations that each have different signatures, I like to compromise and use an open interface. I can still type it somewhat (I'll show you how), but I prefer the flexibility of requesting the common interface over keeping track of all of the various method signatures.

Defining Rules

To explain what I mean, let's build two validation rules: one that takes parameters, and one that doesn't. The first will be a simple "this value is required" rule. Before writing it, we'll create a base validation class to take care of common tasks so our actual rules are easier to implement:

public abstract class BaseRule : IValidationRule
{
    private readonly List<RuleViolation> _violations = new List<RuleViolation>();

    protected void AddViolation(string message, string property)
    {
        AddViolation(message, new[] { property });
    }

    protected void AddViolation(string message, IEnumerable<string> property)
    {
        var violation = new RuleViolation(message, property);
        _violations.Add(violation);
    }

    protected abstract void _Validate(string propertyName, object propertyValue, Dictionary<string,object> parameters);

    public IEnumerable<RuleViolation> Validate(string propertyName, object propertyValue)
    {
        return Validate(propertyName, propertyValue, null);
    }

    public IEnumerable<RuleViolation> Validate(string propertyName, object propertyValue, Dictionary<string, object> parameters)
    {
        _violations.Clear();
        _Validate(propertyName, propertyValue, parameters);
        return new List<RuleViolation>(_violations);
    }
}

Let's get one assumption out of the way. This code is not thread-safe. You'll notice I'm using state on the class to handle the validation, by clearing the collection, then calling the derived class, and finally returning the result. If two threads entered this class, there would be an issue.

You might want to add the extra plumbing to make it thread safe, and that's fine by me. I personally am not planning to execute anywhere other than within my view model getters and setters. I'm only validating as the result of user input (except in my testing, of course, where I can control this by creating new instances) which is via databinding. That means that if I code this correctly, any validations will actually only come in one thread: the main UI thread.
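
If you do decide you need thread safety, one minimal option is to serialize access to the shared violation list - just a sketch of the idea, not something I'm adding here:

private readonly object _validationLock = new object();

public IEnumerable<RuleViolation> Validate(string propertyName, object propertyValue, Dictionary<string, object> parameters)
{
    // only one thread at a time may clear, populate, and copy the violation list
    lock (_validationLock)
    {
        _violations.Clear();
        _Validate(propertyName, propertyValue, parameters);
        return new List<RuleViolation>(_violations);
    }
}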

Here's what a required value looks like:

public class ValueIsRequired : BaseRule 
{
    protected override void _Validate(string propertyName, object propertyValue, Dictionary<string,object> dictionary)
    {
        if (propertyValue == null || string.IsNullOrEmpty(propertyValue.ToString()))
        {
            AddViolation("Value is required.", propertyName);
        }
    }
}

Notice that I'm ignoring the dictionary. When checking to see if only alpha characters are in a string (such as a title, for example) I might want to make it optional whether or not to allow spaces in between characters. Here's a rule that takes in an optional parameter:

public class ValueOnlyContainsAlpha : BaseRule 
{
    private const string PATTERN = @"[^a-zA-Z]";
    private const string PATTERNWITHSPACE = @"[^a-zA-Z ]";
        
    public const string ALLOW_SPACE = "AllowSpace";

    protected override void _Validate(string propertyName, object propertyValue, Dictionary<string,object> dictionary)
    {
        if (propertyValue is string && Regex.IsMatch(propertyValue.ToString().Trim(), 
            dictionary.GetParameter<bool>(ALLOW_SPACE) ? PATTERNWITHSPACE : PATTERN))
        {
            AddViolation("Value must be alpha only.", propertyName);                
        }
    }
}

(We could extend that a little more and have a separate message to define spaces or not ... this is just here to illustrate the point.) If you're wondering about the GetParameter method, I extended the dictionary like this:

public static T GetParameter<T>(this Dictionary<string, object> dictionary, string parameterName)
{
    return dictionary == null
                ? default(T)
                : dictionary.ContainsKey(parameterName) ? (T) dictionary[parameterName] : default(T);
}

This is just a convenience to cast the value or get the default if it is not in the dictionary.

Exporting Rules

The idea with rules is that it should be easy to export them to the system so a centralized location can pass them out. In this case, I can bind validations later in the game, but whatever is using the validations must be aware of the type of the rule. To me, that's reasonable - a range validation is only meaningful if I know a range validation rule exists.

Let's create an export attribute and some metadata for the rule, and we'll just export them by type:

[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class,AllowMultiple = false)]
public class ExportRuleAttribute : ExportAttribute
{
    public ExportRuleAttribute(Type ruleType) : base(typeof(IValidationRule))
    {
        Rule = ruleType;
    }

    public Type Rule { get; set; }
}

public interface IExportRuleMetadata
{
    Type Rule { get; }
}

Now I can export my rules like this:

[ExportRule(typeof(ValueIsRequired))]
public class ValueIsRequired : BaseRule 

We need to "catch" the rules somewhere. I'll create a signature that mirrors the validation signature, with a factory:

public interface IValidationFactory
{
    IEnumerable<RuleViolation> ValidateThat<T>(string propertyName, object propertyValue) where T : IValidationRule;

    IEnumerable<RuleViolation> ValidateThat<T>(string propertyName, object propertyValue, Dictionary<string,object> parameters) where T : IValidationRule;
}

Here is where the fluent interface starts to creep in. Notice these are not just method names, but names that are starting to make a statement ("validate that ..."). Let's implement this with MEF:

[Export(typeof(IValidationFactory))]
public class ValidationFactory : IValidationFactory
{
    [ImportMany(AllowRecomposition = true)]
    public Lazy<IValidationRule,IExportRuleMetadata>[] Rules { get; set; }

    private IValidationRule GetRule<T>() where T: IValidationRule
    {
        return (from rules in Rules
                where rules.Metadata.Rule.Equals(typeof (T))
                select rules.Value).FirstOrDefault();
    }

    public IEnumerable<RuleViolation> ValidateThat<T>(string propertyName, object propertyValue) where T : IValidationRule
    {
        return GetRule<T>().Validate(propertyName, propertyValue);
    }

    public IEnumerable<RuleViolation> ValidateThat<T>(string propertyName, object propertyValue, Dictionary<string, object> parameters) where T : IValidationRule
    {
        return GetRule<T>().Validate(propertyName, propertyValue, parameters);
    }
}

We capture all of the rules. When the client requests to "validate that..." we'll fetch the type and pass through the parameters.

Fluent Interface

Before I hook this in, I want to add a few extension methods to make the interface more fluent. Remember how I mentioned I'm using a dictionary for additional parameters, but I can type it somewhat? You saw the example in the alpha with the optional spaces. I can send in a parameter that is "true" for allowing spaces. Let's make it easier to pass that parameter, so our interface can look like this:

Validator.ValidateThat<ValueOnlyContainsAlpha>(propertyName,value,ValueOnlyContainsAlpha.ALLOW_SPACE.AsParameterWithValue(true));

That makes it a little easier to read without making a dictionary, adding the values, etc. In fact, we'll want to extend this (for example, if we need a minimum and a maximum for a range) so I'll add an extension for additional parameters. The goal is to do this:

Validator.ValidateThat<ValueIsInRange>(propertyName
   ,value
   ,ValueIsInRange.MIN.AsParameterWithValue(1)
    .WithParameter(ValueIsInRange.MAX,10));

Here's what the extension methods look like:

public static Dictionary<string, object> AsParameterWithValue(this string parameterName, object parameterValue)
{
    return new Dictionary<string, object> {{parameterName, parameterValue}};
}

public static Dictionary<string, object> WithParameter(this Dictionary<string, object> dictionary, string parameterName, object parameterValue)
{
    dictionary.Add(parameterName, parameterValue);
    return dictionary;
}
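
As an aside, I don't show the ValueIsInRange rule in this post. A sketch of what it might look like (the MIN/MAX keys match the fluent example above; the message and parsing are placeholders) is:

[ExportRule(typeof(ValueIsInRange))]
public class ValueIsInRange : BaseRule
{
    public const string MIN = "Min";
    public const string MAX = "Max";

    protected override void _Validate(string propertyName, object propertyValue, Dictionary<string,object> dictionary)
    {
        int value;

        if (propertyValue == null || !int.TryParse(propertyValue.ToString(), out value))
        {
            AddViolation("Value must be a number.", propertyName);
            return;
        }

        var min = dictionary.GetParameter<int>(MIN);
        var max = dictionary.GetParameter<int>(MAX);

        if (value < min || value > max)
        {
            AddViolation(string.Format("Value must be between {0} and {1}.", min, max), propertyName);
        }
    }
}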

The last extension I want to create for our fluent interface is to allow chaining of validations. In most situations, you'll have more than one validation. Because validations return an enumeration of violations, it is easy enough to combine them by merging their results. Here's what the goal is:

var violations = Validator.ValidateThat<ValueIsRequired>(propertyName,value)
.AndAlso(()=>Validator.ValidateThat<ValueOnlyContainsAlpha>(propertyName
   ,value
   ,ValueOnlyContainsAlpha.ALLOW_SPACE.AsParameterWithValue(true)));

We accomplish the merge like this:

public static IEnumerable<RuleViolation> AndAlso(this IEnumerable<RuleViolation> source, Func<IEnumerable<RuleViolation>> target)
{
    var list = new List<RuleViolation>(source);
    list.AddRange(target());
    return list;
}

Using PRISM to Host the Validations

Now the validation project can easily be shared/linked to .NET and used for server-side validation. We're more concerned with the immediate UI feedback. How does this engine integrate with Model-View-ViewModel?

PRISM provides MVVM guidance in their preview 4. Download it here. There is a base MVVM class in the MVVM quickstart that implements helpers for notify property changed events as well as IDataErrorInfo, a standard means for identifying validation errors. John Papa discusses this interface here.

What we want to do is hook into the supplied class and make it easy to validate. The reference implementation already has methods for setting and clearing errors against properties, so we just want to provide a validation "front-end" to hook into this feature.
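
For context, the error-tracking members I call below (SetError and ClearErrors) come from that base class. Roughly, the idea is a per-property error dictionary backing the IDataErrorInfo indexer - a sketch only, with member names assumed from the quickstart, so adapt it to whatever base class you actually use:

private readonly Dictionary<string, string> _errors = new Dictionary<string, string>();

// IDataErrorInfo indexer: the binding engine asks for errors by property name
public string this[string columnName]
{
    get
    {
        string error;
        return _errors.TryGetValue(columnName, out error) ? error : null;
    }
}

protected void SetError(string propertyName, string message)
{
    _errors[propertyName] = message;
}

protected void ClearErrors(string propertyName)
{
    _errors.Remove(propertyName);
}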

First, we'll expose our validation factory by importing it into the base view model class:

[Import]
public IValidationFactory Validator { get; set; }

Next, we'll supply a method that derived view models can use to hook into validations:

protected virtual void WithValidationFor<T>(Expression<Func<T>> propertyExpression, IEnumerable<RuleViolation> violations)
{
    var propertyName = ExtractPropertyName(propertyExpression);

    var hasViolations = false;

    foreach(var violation in violations)
    {
        hasViolations = true;
        SetError(propertyName, violation.Message);
    }

    if (!hasViolations)
    {
        ClearErrors(propertyName);
    }
}

Very simple - we iterate the violations, and if any exist, we set them on the property; otherwise we clear the errors.

Putting it All Together

I'm not supplying full source on purpose - obviously not everyone will use the PRISM reference implementation for a solution and may have another framework they'd prefer to use, and the intent here is educational. In this example, I can validate that a name is required and must be alpha only (with spaces) like this on a view model derived from the base:

private string _name;

public string Name
{
    get { return _name; }
    set
    {
        _name = value;
        RaisePropertyChanged(() => Name);
        var propertyName = ExtractPropertyName(() => Name);
        WithValidationFor(()=>Name,
            Validator.ValidateThat<ValueIsRequired>(propertyName,value)
            .AndAlso(
            ()=>Validator.ValidateThat<ValueOnlyContainsAlpha>(propertyName,value,ValueOnlyContainsAlpha.ALLOW_SPACE.AsParameterWithValue(true))));
    }
}

And there you have it: a (somewhat) fluent interface for validation that sits on top of PRISM's MVVM quickstart guidance.

Jeremy Likness

Sunday, August 22, 2010

Coroutines for Asynchronous Sequential Workflows using Reactive Extensions (Rx)

I've been doing quite a bit with Reactive Extensions (Rx) for Silverlight lately. One idea that I keep exploring is the concept of creating intuitive sequential workflows for asynchronous operations. You can read about my explorations using Wintellect's own Power Threading Library in this post along with a simple solution using an interface and enumerators in part 2 of that series. Here, I'm tackling the problem from a different angle.

First, my disclaimers: this is more of an exploratory, "Here's what can be done" post. Please don't take this as best practice or guidance, but more of another way of looking at the solution as well as an opportunity to dive deeper into Reactive Extensions. Also understand I am by no means an expert with Rx so I welcome feedback related to this experiment.

The concept is straightforward: there are often times we want an asynchronous set of operations to perform sequentially. Perhaps you must load a list from a service, then load the selected item, then trigger an animation. This can be done either by chaining the completed events or nesting lambda expressions, but is there a cleaner way?
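
For example, the nested-lambda version of "load the list, then the item, then animate" tends to look something like this (the service calls here are hypothetical, just to show the shape of the problem):

service.LoadList(list =>
    service.LoadItem(list[0], item =>
        {
            SelectedItem = item;
            HighlightStoryboard.Begin();
        }));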

To me, a cleaner solution would make it easy to see what happens sequentially, and also easy to fire the sequence. If it becomes too complex to force the sequential workflow, it buys us nothing over the traditional methods of wiring into completed events or nesting lambda expressions.

So, I started with the idea of having a helper object that I could feed operations to in sequence, then fire it off. What would that look like?

First, I made an interface to keep track of operations as they run and complete. This is by no means a perfect design, but for now it's a sort of "recursive container" for work done. The interface looks like this:

public interface ISequentialResult 
{
    List<ISequentialResult> Results { get; set; }
    object Value { get; set; }        
}

The result is simple: it aggregates all previous results, and contains a pointer to whatever is processing the current action in the sequence. Simple enough. Now on to a base class:

public abstract class BaseResult<T> : ISequentialResult 
{
    protected IObserver<ISequentialResult> Observer { get; set; }

    protected BaseResult(IObserver<ISequentialResult> observer)
    {
        Results = new List<ISequentialResult>();
        Observer = observer;
    }

    public List<ISequentialResult> Results { get; set;}

    public object Value { get; set; }       

    public T TypedValue
    {
        get { return (T) Value; }
        set { Value = value; }
    }
}

The base class does a little more for us. First, it introduces the concept of an IObserver. The observer is simply a class that receives notifications, and the type is the type of the notification. In this case, we allow for a derived type that will create the observer as well as take our "object" value and type it to a specific value. You'll see how the observer works with our sequences in a bit.
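
For reference, the IObserver<T> contract Rx supplies boils down to three notifications (shown here schematically):

public interface IObserver<T>
{
    void OnNext(T value);       // here is the next element
    void OnError(Exception ex); // the sequence faulted
    void OnCompleted();         // the sequence is finished
}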

Next, I created three implementations. One is just a default that does nothing, and the other two handle Click events and Storyboard interactions. Let's take a look at the handler for storyboards:

public class StoryboardResult : BaseResult<Storyboard>  
{
    public StoryboardResult(IObserver<ISequentialResult> observer, Storyboard sb) : base(observer)
    {
        TypedValue = sb;
        sb.Completed += SbCompleted;
        sb.Begin();
    }

    void SbCompleted(object sender, EventArgs e)
    {
        TypedValue.Completed -= SbCompleted;
        Observer.OnNext(this);
        Observer.OnCompleted();
    }       
}

You'll notice a few interesting things. We take in the story board, wire into its Completed event, then fire it off (we could have changed the interface to have an Execute method to give this more flexibility). The story board completion is the important piece to consider. First, the event is unhooked. Then, we pass the result to the observer. The observer is waiting for ISequentialResult notifications. We provide it with one, but only after the story board is completed. We also tell the observer "we're done" (an observer can listen for multiple push notifications, so we could have created a handler that never completed, or one that aggregated multiple storyboards and only completed when all story boards were done).

With that in mind, take a look at the Click handler:

public class ClickResult : BaseResult<ButtonBase>
{
    public ClickResult(IObserver<ISequentialResult> observer, ButtonBase button) : base(observer)
    {
        TypedValue = button;
        button.Click += ButtonClick;
    }

    void ButtonClick(object sender, System.Windows.RoutedEventArgs e)
    {
        TypedValue.Click -= ButtonClick;
        Observer.OnNext(this);
        Observer.OnCompleted();                 
    }
}

This is very similar to the storyboard. There is nothing to "kick off" but instead this result simply registers itself whenever the target is clicked. Why would we do that? I'll get to the example in a minute.
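
I haven't shown the "default" implementation mentioned earlier; it's the simplest possible case - it just holds a value and never notifies the observer. The sequencer below uses it as the seed result, so a minimal sketch is all that's needed:

public class DefaultResult : BaseResult<object>
{
    public DefaultResult(IObserver<ISequentialResult> observer) : base(observer)
    {
        // nothing to do: this result only carries a Value and the accumulated Results
    }
}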

The most interesting part of the workflow is the sequencer itself. This is the class that will manage the asynchronous events and coordinate them. The key is that while they run sequentially, they also run asynchronously so there is no blocking.

Here is the sequencer class:

public class Sequencer : IDisposable 
{
    private IDisposable _sequence;
    private ISequentialResult _result;

    private readonly List<Func<IObserver<ISequentialResult>,ISequentialResult,ISequentialResult>> _sequences
        = new List<Func<IObserver<ISequentialResult>, ISequentialResult,ISequentialResult>>();

    public void AddSequence(Func<IObserver<ISequentialResult>, ISequentialResult,ISequentialResult> sequence)
    {
        _sequences.Add(sequence);
    }
        
    public void Run(Action<Exception> exception, Action<ISequentialResult> completed)
    {
        _sequence = Observable.Iterate(_Sequencer)
            .Subscribe(
                next => { },
                exception,
                ()=>completed(_result));
    }

    private IEnumerable<IObservable<Object>> _Sequencer()
    {
        var x = 0;

        _result = new DefaultResult(null) {Value = this};

        while (x < _sequences.Count)
        {
            var sequence = _sequences[x];
            var step = Observable.Create<ISequentialResult>(
                observer =>
                    {
                        var newResult = sequence(observer, _result);
                        newResult.Results.Add(_result);
                        newResult.Results.AddRange(_result.Results);
                        _result = newResult;
                        return () => { };
                    }
                ).Start();

            yield return step;                

            x++;
        }

        _result.Results.Add(_result);
    }

    public void Dispose()
    {
        _sequence.Dispose();
    }
}

There's a lot going on here, so let's break it down.

First, you'll notice two references: one to a disposable object, and one to a result. More on those in a bit.

The list may seem confusing at first, but it's only because of all of the type specifiers. Each "step" is represented by a function. The sequence will call the function with an observer and the previous result, and expect to get a new result back. Think of it as "here's who is watching, and what happened the last time ... now give me what you have."

A method is provided to add these steps to the sequence.

The run method is interesting. This is where we set up our disposable reference because we're creating a long-running observer that we want to dispose of when the sequence itself is disposed. The iterate function takes a list of observable sequences and observes them sequentially. We provide an enumerator that feeds the observable sequences, and for each sequence in the "outer loop" (the main algorithm that drives the sequential work flow) we simply drive through the collection. If an exception is encountered, we'll call back with the exception. When completed, we call back with the final result.

To better understand what's going on, take a look at the enumerator. For each iteration, we create a new observable stream. We call the function I mentioned earlier and pass the observer in. When we receive the result, we stack it recursively and store it, then iterate to the next in line. The yield statement ensures the sequential operation completes (when we call the OnCompleted in our result) before the next step begins.

That's the complicated part: setting up the core framework. Now comes the easy part: plugging into it.

In the XAML I placed a large red rectangle. There are three storyboards tied to the rectangle. One changes the colors, one shrinks it, and one twists it using the plane projection. There are also three buttons. One button kicks off the story boards. One button kicks off a sequential workflow that will run the story boards in order. The final button resets the story boards by calling Stop on them.

Let's have some fun. First, kicking off the story boards is simple enough:

private void Button_Click(object sender, RoutedEventArgs e)
{
    StoryTwist.Begin();
    StoryShrink.Begin();
    StoryColor.Begin();
}

The reset button is also easy:

private void Button_Click_2(object sender, RoutedEventArgs e)
{
    StoryTwist.Stop();
    StoryShrink.Stop();
    StoryColor.Stop();
}

Now for the sequential storyboards. This is where things should get easier for us. Instead of having to wire in several completed events or nesting lambdas, let's see what it looks like to run them in order using our sequencer:

private void Button_Click_1(object sender, RoutedEventArgs e)
{
    if (_sequence != null)
    {
        _sequence.Dispose();
    }

    _sequence = new Sequencer();
    _sequence.AddSequence((o, r) => new StoryboardResult(o, StoryColor));
    _sequence.AddSequence((o, r) => new StoryboardResult(o, StoryShrink));
    _sequence.AddSequence((o, r) => new StoryboardResult(o, StoryTwist));
    _sequence.Run(ex => MessageBox.Show(ex.Message),
                    result =>
                        {
                            foreach (var storyboard in
                                result.Results.Select(sequence => sequence.Value).OfType<Storyboard>())
                            {
                                storyboard.Stop();
                            }
                            _sequence.Dispose();
                            _sequence = null;
                        });
}       

So what's going on? If we have a previous sequence running, we dispose it so we can start fresh. Then, we simply add each story board in the order we want it to fire (the observer is provided to us by the sequencer). Finally, we run it. If there is an error, show it. When the sequence is done, we iterate the results and find any result that was a story board and call the "stop" method. This means after the sequence completes, it will automatically restore the rectangle to its original state.

Finally, to show just how powerful it is to drive sequential workflows without blocking, I added one more method:

private void _WaitForThreeClicks()
{
    _buttonSequence = new Sequencer();
    _buttonSequence.AddSequence((o, r) => new ClickResult(o, ResetButton));
    _buttonSequence.AddSequence((o, r) => new ClickResult(o, ResetButton));
    _buttonSequence.AddSequence((o, r) => new ClickResult(o, ResetButton));
    _buttonSequence.Run(ex=>MessageBox.Show(ex.Message),
        result=>
            {
                MessageBox.Show("You clicked the Reset button 3 times!");
                _buttonSequence.Dispose();
                _WaitForThreeClicks();
            });
}

This is a fun method. It adds three steps to a sequence, each "waiting" for the reset button to be clicked. After the sequence completes, we show a message indicating that 3 clicks happened, then restart the sequence recursively. I'll call it the first time just after InitializeComponent:

public MainPage()
{
    InitializeComponent();  
    _WaitForThreeClicks();
}      

There you have it! When you run the code, you'll get this:


Notice that you can keep clicking the sequence to start it over: there is no blocking. And whatever order you decide to click on other buttons, the reset button always faithfully shows a "3 click" message on the third click.

You can download the full source code for this solution here.
Jeremy Likness

Thursday, August 19, 2010

Simplifying Silverlight Web Service Calls with Reactive Extensions (Rx)

I've been working with the Reactive Extensions (Rx) library quite a bit lately and am very impressed. While it is a new way of thinking about services, it certainly makes life much easier. In this example, I'll show you a way to simplify your web service calls using Rx. In fact, even if you don't use Reactive Extensions, you may benefit from the proxy wrappers that I'll describe.

I'm assuming you are familiar with Silverlight, web services, and have some exposure to the Managed Extensibility Framework. You'll also want to make sure you've got the latest version of Rx for Silverlight 4.

Let's get started! First, create a new Silverlight 4 Application. Keep all of the defaults: we do want a web project, but we aren't using RIA.

The Service: Server Side

Let's create a simple calculator service. Sure, it is a simple example, but it will make it easier to focus on the details of Rx rather than puzzling over a more complex web service example.

Create a new service and call it "Calculator." Just place it in the root of the web application. Create a contract and implement it, so that your service ends up looking like this:

namespace RxWebServices.Web
{
    [ServiceContract(Namespace = "http://csharperimage/")]
    public interface ICalculator
    {
        [OperationContract]
        long Add(int operand1, int operand2);       
    }

    [AspNetCompatibilityRequirements(RequirementsMode = AspNetCompatibilityRequirementsMode.Allowed)]
    public class Calculator : ICalculator
    {
        readonly Random _random = new Random();

        public long Add(int operand1, int operand2)
        {
            Thread.Sleep(TimeSpan.FromMilliseconds(_random.Next(1000) + 50));
            return operand1 + operand2;
        }        
    }
}

Notice I built in a delay. This is important to see how Rx helps us handle the asynchronous nature of web service calls.

Go ahead and hit CTRL+F5 to build and run without debugging. This will set up the service end point for us to grab in Silverlight. Now, in your Silverlight project, let's set some things up.

The Service: Client Side

First, we want to add some references to both Reactive Extensions (Rx) and the Managed Extensibility Framework. Below, I've highlighted the references to add:


Now we can add our service reference. Right-click References, choose "Add Service Reference" and click "Discover" to find the services in the solution. You should be able to select the calculator service. Put it in the namespace "CalculatorService" as depicted below.


Making Things Easy: ServiceProxy

Services can seem complex, but with the factory patterns provided by the framework and the support of relative paths, abstracting the creation of an end point is easy. I like to create a proxy class that manages the end points for me. In this example, I store the end point as a constant. However, you can easily make it a parameter for your Silverlight application and construct it on the fly. All my "consumer" really cares about is the service contract, not the details of how to wire in the service endpoint. So, let's make it easy. Take a look at the following class. The class itself is never instantiated directly, but it will export the service contract so that wherever I import it, I'll have a fully wired version of the proxy ready to use.

Create a folder called "Implementation" and add "ServiceProxy.cs". Your class will look like this:

namespace RxWebServices.Implementation
{
    public  class ServiceProxy
    {
        private const string CALCULATOR_SERVICE = "../Calculator.svc";
        private const string NOT_SUPPORTED = "Type {0} not supported";                

        private static readonly Dictionary<Type, Uri> _serviceMap
            = new Dictionary<Type, Uri> {{typeof (ICalculator), new Uri(CALCULATOR_SERVICE,UriKind.Relative)}};

        public static T GetProxyFor<T>()
        {
            if (!_serviceMap.ContainsKey(typeof(T)))
            {
                throw new TypeLoadException(string.Format(NOT_SUPPORTED, typeof (T).FullName));
            }

            return
                new ChannelFactory<T>(new BasicHttpBinding(), new EndpointAddress(_serviceMap[typeof (T)])).
                    CreateChannel();
        }

        [Export]
        public ICalculator CalculatorService
        {
            get { return GetProxyFor<ICalculator>(); }
        }
    }
}

Take a look. We are mapping the service contract to the end points. In our case, it is relative to the site serving the Silverlight application. Because the application is in ClientBin, we back up one level to access the service. Note this will work just as easily for a service hosted somewhere else: I would simply specify a relative or absolute uri. We only have one service, but the dictionary makes it easy to map multiple ones. The export uses the channel factory to generate an instance and return the client.

Our Internal Contract

I rarely let the rest of my application concern itself with the details of the service. Any other area of my application is simply asking for results based on input, regardless of how it is obtained. Therefore, I'll create a very light contract for the calculator service internally - one that is easy to mock and test.

Create a folder called "Contract" and add one interface, ICalculatorService. The interface looks like this:

namespace RxWebServices.Contract
{
    public interface ICalculatorService
    {
        IObservable<long> Add(int operand1, int operand2);
    }
}

Here is where things get interesting. You should be familiar with IEnumerable which we'll call a "pull" sequence of elements: you pull the values from the iterator. With Reactive Extensions, we invert this using IObservable to create a "push" sequence. With the push sequence, you subscribe and receive an event (pushed to you) when an element is available. In this case, we'll subscribe by sending in two operands, and wait to be pushed the result when it comes back.
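
As a small illustration of the push model (hypothetical usage, assuming calculatorService holds an ICalculatorService): nothing happens until you subscribe, and the result arrives later through the callback.

IObservable<long> pendingSum = calculatorService.Add(2, 3);
IDisposable subscription = pendingSum.Subscribe(result => Debug.WriteLine("2 + 3 = " + result));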

Wrapping the Service

Now we've got a service proxy and an interface. Let's satisfy the contract. I'll show you the code, then explain it. Under the implementation folder, create a Calculator.cs class and wire it up like this:

namespace RxWebServices.Implementation
{
    [Export(typeof(ICalculatorService))]
    public class Calculator : ICalculatorService, IPartImportsSatisfiedNotification
    {
        [Import]
        public ICalculator CalculatorProxy { get; set; }

        private Func<int,int,IObservable<long>> _calculatorService;            

        public IObservable<long> Add(int operand1, int operand2)
        {
            return _calculatorService(operand1, operand2);
        }

        public void OnImportsSatisfied()
        {
            _calculatorService = Observable.FromAsyncPattern<int, int, long>
                (CalculatorProxy.BeginAdd, CalculatorProxy.EndAdd);
        }
    }
}

Let's break it down. First, you'll notice we import the calculator service. This is the actual proxy we set up in the previous class. When the import is satisfied, we use a helper method provided by Rx to convert the asynchronous call into an observable list. The FromAsyncPattern takes in the types of the inputs, followed by the type of the output. It creates a function that, when called, returns an observable list of the results. In this case, we point it at the Begin/End method pair on our calculator proxy. This is the way we take the asynchronous call and turn it into an observable list.

When we actually want to use the method, we call the function with the inputs, and receive the output as the observable. Thus, we do all of the conversion internally, hide the implementation details, and just return a stream that can be subscribed to in order to fetch the results.

Take a look at the signature for the actual service:

private interface ICalculator
{
    IAsyncResult BeginAdd(int operand1, int operand2, AsyncCallback callback, object asyncState);
    
    long EndAdd(IAsyncResult result);
}

To use Rx, we want a function that takes all of the inputs up until the AsyncCallback parameter, and returns an observable list of the return value. In this case, our two inputs are int, and it returns a long, so our function signature is Func<int,int,IObservable<long>>. By using these same types on the FromAsyncPattern extension method, Rx will return us the appropriate function and expect a pointer to the methods to start and the end the call.

Fibonacci Sequence

Now we can get to the fun part: using the service. We'll use the service two different ways to illustrate how the observable lists work. In the MainPage.xaml, add some rows, a set of buttons, and a stackpanel. Generate code behind for the buttons. It will look something like this:

<Grid x:Name="LayoutRoot" Background="White">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="*"/>
    </Grid.RowDefinitions>
    <StackPanel Orientation="Horizontal" HorizontalAlignment="Center">
        <Button Content=" GO " Click="Button_Click" Margin="5"/>
        <Button Content=" GO " Click="Button_Click_1" Margin="5"/>
    </StackPanel>
    <StackPanel Orientation="Horizontal" Grid.Row="1" HorizontalAlignment="Stretch" VerticalAlignment="Stretch" x:Name="MainSurface"/>
</Grid>

Next, let's go to the code behind and wire in the first example. First, we'll add some properties we're going to be using:

[Import]
public ICalculatorService Calculator { get; set; }

private IDisposable _sequence;

private readonly Subject<long> _watcher = new Subject<long>();

private int _x, _y, _iterations;

The first piece is the service, which we import using MEF. When we subscribe to a service call, we receive a disposable subscription. In order to cancel observations in progress and start new ones, we'll keep a reference to this using the _sequence field.

What's Your Favorite Subject?

The subject is interesting. Subjects are used to set up a publisher/subscriber model. The subject here is a long. Anyone with access to the subject can publish (send it a long value) and/or subscribe (receive notifications when values are published). We'll use this to bridge between our UI and the service.
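
A tiny illustration of that publisher/subscriber relationship (not part of the sample project):

var subject = new Subject<long>();
subject.Subscribe(value => Debug.WriteLine(value)); // subscriber: receives pushed values
subject.OnNext(42);                                 // publisher: pushes a value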

Finally, we've got some local variables to use to keep track of state.

Next, we'll set everything up in the constructor:

public MainPage()
{
    InitializeComponent();

    if (DesignerProperties.IsInDesignTool) return;

    CompositionInitializer.SatisfyImports(this);

    _watcher.ObserveOnDispatcher().Subscribe(
        answer =>
            {
                var grid = new Grid
                                {
                                    Width = answer,
                                    Height = answer,
                                    Background = new SolidColorBrush(Colors.Red),
                                    Margin = new Thickness(5, 5, 5, 5)
                                };
                var tb = new TextBlock {Margin = new Thickness(2, 2, 2, 2), Text = answer.ToString()};
                grid.Children.Add(tb);
                MainSurface.Children.Add(grid);
                _Add();
            });
}

The first thing you'll notice is that if we're in the designer, all bets are off and we drop out. Otherwise, we compose the parts, which gives us our service import. Next, we'll subscribe to our subject. Notice that we don't have any service interaction yet. The subscription basically breaks down like this:

  • I'm interested in the subject with long values
  • When something happens, let me know on the dispatcher thread (as I'm going to do something with the UI)
  • When a long value is observed, give it to me: I'll make a grid as big as the value I received, put some text inside it, add it to the stack panel and then call the _Add method

That's very simple and straightforward. Now we can explore the missing method. First, let's kick things off when the user clicks the first button. I want to use the add service to compute the Fibonacci sequence (each number is the sum of the previous two, starting with 1 and 1). I'll implement the button click code-behind and add the missing method here:

private void Button_Click(object sender, RoutedEventArgs e)
{
    if (_sequence != null)
    {
        _sequence.Dispose();
    }

    MainSurface.Children.Clear();

    _x = 1;
    _y = 1;
    _iterations = 0;

    _watcher.OnNext(_x);            
}      
  
private void _Add()
{         
    // guard for the very first pass, before any subscription exists
    if (_sequence != null)
    {
        _sequence.Dispose();
    }

    if (++_iterations == 20)
    {
        return;
    }

    _sequence = Calculator.Add(_x, _y).Subscribe(answer =>
                                            {
                                                _x = _y;
                                                _y = (int)answer;
                                                _watcher.OnNext(answer);                                                     
                                            });
}

So the first part should be straightforward. If we had another sequence, dispose it. This will cancel any observations in progress. Clear the surface, initialize our variables, and then call the OnNext function on our subject. What's that? Simple: we just published a number. The subject will receive the number (1) and then push it to any subscriptions. We subscribed earlier, so we'll create a 1x1 grid and call the _Add method.

This method is even more interesting. First, we stop after 20 iterations. No sense in going to infinity. Next, we subscribe to the calculator service. Subscriptions to observable lists are the same as subscriptions to subjects. We're asking to watch for a value, and initiating the "watch" by sending in our first values (1 and 1). When we receive the answer, we shift the numbers to continue the sequence, and then publish the number to the subject.

This allows us to "daisy chain" service calls. We wait until we receive the first answer before we ask the next question. At this point, if you hit F5 (or CTRL-F5) to run it, and click the first button, you should see this:

Note if you keep clicking while it is rendering, it will start over. There will be no "hanging" calls because the calls are daisy chained. We are also not blocking the UI while waiting, or you wouldn't be able to click the button again. You can clearly see the delays on the server as the results are returned.

Here is a simplified overview of what is happening:

Random Addition

Now we'll throw another function into the mix. It's time to set up the second button. For this button, we're going to add two methods. The first is an enumerable that returns nothing but random numbers. It loops infinitely so we obtain as many numbers as we like, and we'll receive them in tuples:

private static IEnumerable<Tuple<int,int>> _RandomNumbers()
{
    var random = new Random();

    while (true)
    {
        yield return Tuple.Create(random.Next(100), random.Next(100));                
    }
}

In the event handler for the second button, add this bit of code:

private void Button_Click_1(object sender, RoutedEventArgs e)
{
    if (_sequence != null)
    {
        _sequence.Dispose();
    }

    MainSurface.Children.Clear();

    _sequence = _RandomNumbers()
        .ToObservable()
        .Take(20)                
        .Subscribe(numbers => Calculator.Add(numbers.Item1, numbers.Item2)
                                    .ObserveOnDispatcher()
                                    .Subscribe(result =>
                                                    {
                                                        var text = string.Format("{0}+{1}={2}", numbers.Item1,
                                                                                numbers.Item2, result);
                                                        var tb = new TextBlock
                                                                    {Margin = new Thickness(5, 5, 5, 5), Text = text};
                                                        MainSurface.Children.Add(tb);
                                                    }));                
}    

This is a little different. First, we're taking the enumerable list of random numbers and turning it into an observable list so the values will be pushed to us. This is just by way of demonstration; we could have just as easily iterated the list with a foreach loop instead. What's interesting here is that I can limit how many I grab with the Take(20) extension. I subscribe like I do to any other observable list, and when I receive the next number pair, I turn around and subscribe to the calculator service to add the numbers for me. Instead of publishing the result to the subject, I'm handling it myself. I observe on the dispatcher thread, then add a text block with the addition statement to the stack panel.

Go ahead and run the application, click the button, and you'll receive output that looks like this:

Observations (Pardon the Pun)

If you run this and click the go button, you might notice something interesting. No matter how many times you click, you get the full sequence of numbers. In other words, if I let 5 numbers come back, then click go, I'll receive a sequence of 35 numbers, not 25.

Even more interesting is if you click the second go button, wait until most (but not all) of the 20 numbers return, then click the first go button. You'll see the screen clear, but you'll receive a few sequences of added numbers before the Fibonacci sequence starts.

What's Going On?

But we disposed of the subscription, right? Not exactly. In this implementation, we are always working against the same service subscription. The subscription we cancel is the outer observation. To better understand this, load up a tool like Fiddler and watch the service calls. In the first example, the call is made, there is a delay, it returns, and then the next call is made.

In the second example, however, almost all of the calls are made at once. They return at different rates due to the delays on the server. So, when you start a new sequence, you subscribe to the same service and therefore get to watch the results return that hadn't made it back from the initial calls.

This is important to understand as you are building your extensions, because in some cases you might want a new observable sequence, while in others it makes sense to keep the existing one. It depends on your needs and the desired behavior.
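
If you wanted disposing of the outer subscription to also stop observing pending service results, one variation (an untested sketch, not what this sample does) is to flatten the inner calls into the outer sequence with SelectMany, so everything hangs off the single subscription:

_sequence = _RandomNumbers()
    .ToObservable()
    .Take(20)
    .SelectMany(numbers => Calculator.Add(numbers.Item1, numbers.Item2)
        .Select(result => string.Format("{0}+{1}={2}", numbers.Item1, numbers.Item2, result)))
    .ObserveOnDispatcher()
    .Subscribe(text => MainSurface.Children.Add(
        new TextBlock { Margin = new Thickness(5, 5, 5, 5), Text = text }));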

Hopefully this will help open some new doors of understanding about Reactive Extensions!

Click here to download the source code for this article

Jeremy Likness

Tuesday, August 17, 2010

Managed Extensibility Framework Quickstart: Hello, MEF

I've been working on quickstarts for a community team that I'm a member of. The team is called MEFContrib. We write extensions to the Managed Extensibility Framework as well as supporting manuals and documentation. I've been tasked with the quickstarts and as I release them I'll post them for you.

Obviously, a quickstart should be, well, quick, and easy. We also decided quickstarts would be independent so that you can jump into any topic without having to read the prior ones. Today's post is simply a "Hello, MEF" for a quick introduction to using MEF. I will also follow up with a version specifically for Silverlight.

I've embedded the quickstart video below, and you can read the full article (with video included) by clicking here.

Enjoy!


Jeremy Likness

Tuesday, August 10, 2010

Sharing Silverlight Commands with the Managed Extensibility Framework

View models can expose commands that are bound to controls. Commands encapsulate an action and a permission, and the advantage to binding is that controls that support commands will automatically disable when the command cannot execute. Commands can also be easily added to view models and tested.
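
For example, here is a minimal sketch using Prism's DelegateCommand<T> (the command type used later in this post; the method and property names are hypothetical) - the second delegate supplies the "permission," and bound controls disable automatically when it returns false:

var saveCommand = new DelegateCommand<object>(
    obj => SaveChanges(),      // the action
    obj => HasPendingChanges); // the permission (CanExecute)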

It is common to find commands that are used universally. In Windows Presentation Foundation (WPF), this is managed by a set of global commands that can be accessed via static classes. Silverlight does not provide the equivalent, but it is easy enough to compose and share commands using MEF.

There were specifically two scenarios I needed to tackle in a recent project.

The First Scenario: Command Sharing

It is quite common to have a command that can be executed from multiple places. An "expand" command, for example, represents a generic behavior that is applied to specific controls. If the behavior can be abstracted from the control itself, why duplicate it across multiple controls? It is far more efficient to define the behavior in one place and then share it where it is needed.

The Second Scenario: Late Command Binding

Sometimes commands may provide a layer of de-coupling. Specifically, I wanted to provide a command to switch to a different type of view. Because my view models are shared by different views, I didn't want to create a tight coupling by referencing the views directly. In fact, the views were being dynamically loaded in a separate XAP file, so there was no reference I could create in the view model. To solve this I used MEF to late-bind the command when the view becomes available.

The Setup

The setup is quite simple. Because I want to import the command directly, I'm not going to use metadata. Instead, I'll create a common static class that is shared across my project that exposes some constants to define the commands. I'll use the constants as the contract name for importing and exporting. It looks like this:

public static class Commands 
{
   public const string EXPORT_COMMAND = "ExportCommand";
}

Now I can request the command in my view model, by importing it directly:

public class MyViewModel
{
   [Import(Commands.EXPORT_COMMAND, AllowDefault=true)]
   public DelegateCommand<object> ImportedCommand { get; set; }
}

The "allow default" is important. If I don't have the command available, MEF will throw an exception when trying to set the import. With allow default, the property simply remains as null until the import can be satisfied. Once my dynamic module loads with the corresponding export, it will be wired in for me. Note that I can import the same command in multiple places to satisfy the need to share it.

Now I can export the command. I might create a class specifically for this or place it in a view model in the imported XAP file. Regardless, an exported command looks something like this:

public class Exports
{
   [Import]
   public INavigation Navigation { get; set; }

   [Export(Commands.EXPORT_COMMAND)]
   public DelegateCommand<object> ExportedCommand 
   {
      get 
      {
         return new DelegateCommand<object>(
            obj => Navigation.NavigateTo(typeof(SomeView)));
      }
   }
}

This way I'm able to expose a command for navigation without knowing what the view or navigation looks like ... instead, I let the XAP that does know handle it for me.
Jeremy Likness

Thursday, August 5, 2010

Unit Testing XAML Data-Bindings in Silverlight

In an interview earlier this year with MSDN geekSpeak, I discussed unit testing for Silverlight and some of the frameworks that are available. One audience member raised a very important question: "How do we test the XAML?" Specifically, what happens when we hand off XAML to a designer or another developer, and they accidentally remove a data-binding or other critical element, then pass it back to us?

The Problem

Many developers don't realize this, but the view is not decoupled from the underlying code. Unless you are using a convention-based approach (and then an affinity still exists, albeit expressed via the convention), you are likely either wiring in events and managing the view from code-behind, or using data-binding to bind a view to an underlying view model or binding model. One mistake people tend to make is the assumption that the view is completely decoupled from the view model.

The truth is, the data-bindings are a double-edged sword. First, make no mistake, you are explicitly referring to a contract when you specify data-binding. You might not need to know the specific type of the underlying data context, but you are pointing to specific sources and property paths on "some object" that must be satisfied. The "double-edge" is that the data-binding framework will silently swallow any exceptions if you happen to bind to the wrong path. Fat-finger "country" as "county" and you may find a big blank label appearing where you were hoping to see "United States."

Even more likely is the fact that often you may have a designer creating XAML assets, an integrator moving those assets into the project and then a developer working on the actual code. At any phase of the hand-off, someone might get fancy and decide to replace a combo box with a list box and forget to update the data-bindings. The compiler won't complain at all, because there is no check for the absence of data-binding, so you'll likely not catch the error until you begin testing the application.

The Solution

While there is no easy way to "type" the view and force the compiler to emit errors when the data-binding is incorrect, you can create unit tests on those views to test the expected bindings. You can also test for attached behaviors and other artifacts in the XAML that are expected for the application to run successfully.

Let's take a look at a simple view model I like to use when demonstrating the Model-View-ViewModel (MVVM) pattern. I call it Cascadia because it facilitates the classic case of cascading dropdowns. The view model sits on top of a service for fetching countries and states or provinces, that is injected via the Managed Extensibility Framework. MEF is outside the scope of this post, and we won't need it for testing anyway because we'll mock the dependencies. Here is my view model:

[Export]
public class CascadiaViewModel : INotifyPropertyChanged, IPartImportsSatisfiedNotification 
{
    [Import]
    public IAddressService AddressService { get; set; }

    public CascadiaViewModel()
    {
        Countries = new ObservableCollection<Country>();
        States = new ObservableCollection<State>();

        if (!DesignerProperties.IsInDesignTool) return;

        // design-time controls

        var country = new Country {CountryCode = "US", CountryName = "United States"};
        var state = new State {StateCode = "GA", StateName = "Georgia", StateCountry = country};
        Countries.Add(country);
        States.Add(state);
        CurrentCountry = country;
        CurrentState = state;
    }

    public ObservableCollection<Country> Countries { get; set; }

    private Country _country;

    public Country CurrentCountry
    {
        get { return _country; }
        set
        {
            _country = value;
            _UpdateStates();
            _RaisePropertyChanged("CurrentCountry");
        }
    }

    private void _UpdateStates()
    {
        if (DesignerProperties.IsInDesignTool)
        {
            return;
        }

        States.Clear();
        AddressService.GetStatesForCountry(CurrentCountry.CountryCode, states=>
           {
              foreach(var state in states)              
              { States.Add(state); }
              CurrentState = States[0];
           });
    }

    public ObservableCollection<State> States { get; set; }

    private State _state;

    public State CurrentState
    {
        get { return _state; }
        set
        {
            _state = value;
            _RaisePropertyChanged("CurrentState");
        }
    }

    private void _RaisePropertyChanged(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;

    public void OnImportsSatisfied()
    {
        AddressService.GetCountries(countries =>
        {
            foreach (var country in countries)
            {
                Countries.Add(country);
            }
            CurrentCountry = Countries[0];
        });
    }
}

Notice that I am providing some design-time support right in the view model so it will appear in the designer when a view is databound. There may be some arguments about the concept of a "designer" leaking into my view model, but in my opinion, the view model is specific to views. It is not "business logic" but an entity responsible for coordinating the hand-off between underlying business logic and the view itself. Therefore, why not have the intelligence to provide some sample data during design time?

The behavior is straightforward. We populate a list of countries, synchronize a default country, then populate a list of states for the selected country and synchronize a default state. The expected behavior is that I can change the selected state, and when I change the selected country, my state list is refreshed and a new default picked. All of this can be tested before we even worry about the service or the view, in my opinion one of the advantages of the MVVM pattern.

Next, let's take a look at the XAML for a view that binds to the view model. Here is the markup:

<UserControl x:Uid="UserControl_1" x:Class="Cascadia.MainPage"
   xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
   xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
   xmlns:d="http://schemas.microsoft.com/expression/blend/2008"
   xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006"
   mc:Ignorable="d" 
   d:DesignHeight="300" 
   d:DesignWidth="400">
  <Grid x:Uid="LayoutRoot" x:Name="LayoutRoot" 
      Background="White" DataContext="{Binding Source={StaticResource VMLocator},Path=Cascadia}">
    <Grid.ColumnDefinitions>
      <ColumnDefinition x:Uid="ColumnDefinition_1" Width="Auto" />
      <ColumnDefinition x:Uid="ColumnDefinition_2" Width="Auto" />
    </Grid.ColumnDefinitions>
    <Grid.RowDefinitions>
      <RowDefinition x:Uid="RowDefinition_1" Height="Auto" />
      <RowDefinition x:Uid="RowDefinition_2" Height="Auto" />
      <RowDefinition x:Uid="RowDefinition_3" Height="Auto" />
      <RowDefinition x:Uid="RowDefinition_4" Height="Auto" />
    </Grid.RowDefinitions>
    <TextBlock x:Uid="TextBlock_1" Text="Select Your Country:" Grid.Row="0" Grid.Column="0" AutomationProperties.AutomationId="TextBlock_1" />
    <ComboBox x:Uid="cbCountries" x:Name="cbCountries" Grid.Row="0" Grid.Column="1" ItemsSource="{Binding Countries}" SelectedItem="{Binding CurrentCountry,Mode=TwoWay}" DisplayMemberPath="CountryName" />
    <TextBlock x:Uid="TextBlock_2" Text="Select Your State:" Grid.Row="1" Grid.Column="0" AutomationProperties.AutomationId="TextBlock_2" />
    <ComboBox x:Uid="cbStates" x:Name="cbStates" ItemsSource="{Binding States}" SelectedItem="{Binding CurrentState,Mode=TwoWay}" Grid.Row="1" Grid.Column="1" DisplayMemberPath="StateName" />
    <TextBlock x:Uid="TextBlock_3" Text="Selected Country:" Grid.Row="2" Grid.Column="0" AutomationProperties.AutomationId="TextBlock_3" />
    <TextBlock x:Uid="tbSelectedCountry" x:Name="tbSelectedCountry" Text="{Binding Path=CurrentCountry.CountryName}" Grid.Row="2" Grid.Column="1" Height="16" VerticalAlignment="Bottom" />
    <TextBlock x:Uid="TextBlock_4" Text="Selected State:" Grid.Row="3" Grid.Column="0" AutomationProperties.AutomationId="TextBlock_4" />
    <TextBlock x:Uid="tbSelectedState" x:Name="tbSelectedState" Text="{Binding Path=CurrentState.StateName}" Grid.Row="3" Grid.Column="1" />
  </Grid>
</UserControl>

This basically shows two drop-downs, with two labels that synchronize with the selected items. The "affinity" or "coupling" is introduced when we specify a data-binding path and expect something specific, such as CurrentState.StateName, to be present and correct. Our application is not functioning correctly if these bindings break.

If you're wondering about the Uid and AutomationProperties, both are important for globalization/localization as well as accessibility. In fact, I'm excited to be on one of the teams previewing UI Automation for Silverlight. It allows coded UI tests to be recorded or created (similar to WPF) and then run against Silverlight applications. I've been very pleased with what the tool can do so far and am excited about the upcoming public release (this is not the same as the "manual" automation that I blogged about here).

The Test

Now we can get down to writing a test to ensure that the bindings are working properly. In The Art of Unit Testing, author Roy Osherove lists several properties of a good unit test; two of them are that it should run at the push of a button and that it should run quickly. In this case, the workflow is that we receive XAML from a designer or integrator. Upon check-in, either an automated build process runs our unit tests, or we pull the change down, set the test page as the startup page, and hit F5 to run the tests manually. Either way, we can check very quickly whether or not our bindings were preserved.

What does a test look like? In the setup for our test, we'll mock out the service and specify the countries and states we want. I'll also override the view locator specified in XAML and directly bind to the view model for our test. This is the test initialize method:

private MainPage _target;
private CascadiaViewModel _viewModel;
private Mock<IAddressService> _service;

private Country[] _countries;
private State[] _states;

[TestInitialize]
public void TestInit()
{
    var us = new Country {CountryCode = "US", CountryName = "United States"};
    var georgia = new State {StateCode = "GA", StateCountry = us, StateName = "Georgia"};
    var mexico = new Country {CountryCode = "MX", CountryName = "Mexico"};
    var oaxaca = new State {StateCode = "OA", StateCountry = mexico, StateName = "Oaxaca"};

    _target = new MainPage();

    _countries = new[] {us, mexico};
    _states = new[] {georgia, oaxaca};

    _viewModel = new CascadiaViewModel();

    _service = new Mock<IAddressService>();
    _viewModel.AddressService = _service.Object;

    _service.Setup(s => s.GetCountries(It.IsAny<Action<IEnumerable<Country>>>()))
        .Callback((Action<IEnumerable<Country>> action) => action(_countries));

    _service.Setup(s => s.GetStatesForCountry(It.IsAny<string>(), It.IsAny<Action<IEnumerable<State>>>()))
        .Callback(
            (string countryCode, Action<IEnumerable<State>> action) =>
            action(from s in _states where s.StateCountry.CountryCode.Equals(countryCode) select s));

    GetUiElement<Grid>("LayoutRoot").DataContext = _viewModel;
}

private T GetUiElement<T>(string name) where T : UIElement
{
    return (T) _target.FindName(name);
}

Notice at this point we are just setting up the mocks and data-binding the view model. Next, we'll test the actual bindings:

[Asynchronous]
[TestMethod]
public void TestCountrySelection_UpdatesViewModelAndDataBindings()
{
    _target.Loaded +=
        (o, e) =>
            {
                _viewModel.OnImportsSatisfied(); // set this up

                var comboBox = GetUiElement<ComboBox>("cbCountries");
                var stateComboBox = GetUiElement<ComboBox>("cbStates");
                var textBlock =
                    GetUiElement<TextBlock>("tbSelectedCountry");
                var stateTextBlock =
                    GetUiElement<TextBlock>("tbSelectedState");


                var defaultStates = from s in _states
                                    where
                                        s.StateCountry.CountryCode.Equals(
                                            _countries[0].CountryCode)
                                    select s;

                // defaults
                Assert.AreEqual(_countries[0], _viewModel.CurrentCountry,
                                "Country combo box failed to initialize current country.");
                Assert.AreEqual(_countries[0].CountryName, textBlock.Text,
                                "Country combo box failed to initialize the text block.");
                Assert.AreEqual(_states[0], _viewModel.CurrentState,
                                "Failed to initialize current state.");
                Assert.AreEqual(_states[0].StateName, stateTextBlock.Text,
                                "Failed to initialize the state text block.");

                CollectionAssert.AreEquivalent(comboBox.Items, _countries,
                                                "Failed to data-bind countries.");
                CollectionAssert.AreEquivalent(stateComboBox.Items,
                                                defaultStates.ToArray(),
                                                "Failed to data-bind states.");
                        
                comboBox.SelectedItem = _countries[1];

                var mexicoStates = from s in _states
                                    where
                                        s.StateCountry.CountryCode.Equals(
                                            _countries[1].CountryCode)
                                    select s;

                // defaults
                Assert.AreEqual(_countries[1], _viewModel.CurrentCountry,
                                "Country combo box failed to update current country.");
                Assert.AreEqual(_countries[1].CountryName, textBlock.Text,
                                "Country combo box failed to update the text block.");
                Assert.AreEqual(_states[1], _viewModel.CurrentState,
                                "Failed to update current state.");
                Assert.AreEqual(_states[1].StateName, stateTextBlock.Text,
                                "Failed to update the state text block.");

                CollectionAssert.AreEquivalent(stateComboBox.Items,
                                                mexicoStates.ToArray(),
                                                "Failed to update data-binding for states.");

                EnqueueTestComplete();
            };

    TestPanel.Children.Add(_target);
}

We are doing a few things here. Unit tests follow the pattern: arrange, act, assert. I am arranging the view model and services at the beginning. Note that we wait until the target view is loaded before acting on it, so that data-binding has a chance to take effect. We are placing the view on a test surface, but tools like StatLight can effectively mock the surface so the tests can run offline in an automated fashion. The test is asynchronous because we must wait for the Loaded event to fire before acting and asserting, and the final step is to add the view to the test surface, which triggers that event.

When the view is loaded, I'm first testing some pre-conditions. These are the default bindings we'd expect from the view model being initialized. Then, I change the selection and assert that the changes happened as expected.

This test will function on several levels. First, because I'm grabbing controls by name, the test will fail if the names change (for example, if we change a combo box to a list box and rename it to "lbCountries"). If I don't have an affinity for named controls (i.e. everything happens through data-binding), I can change my strategy to use the automation id instead. I'll call the control something generic like "CountryList" and find it regardless of the control type or name.
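A rough sketch of that strategy is below; the helper is my own illustration rather than part of the test code above. Searching for a base type such as Selector means the test keeps working even if a ComboBox is later swapped for a ListBox:

private static T FindByAutomationId<T>(DependencyObject root, string automationId)
    where T : DependencyObject
{
    // Depth-first walk of the visual tree, matching on
    // AutomationProperties.AutomationId rather than x:Name.
    var count = VisualTreeHelper.GetChildrenCount(root);
    for (var i = 0; i < count; i++)
    {
        var child = VisualTreeHelper.GetChild(root, i);

        if (child is T && AutomationProperties.GetAutomationId(child) == automationId)
        {
            return (T)child;
        }

        var match = FindByAutomationId<T>(child, automationId);
        if (match != null)
        {
            return match;
        }
    }

    return null;
}

With a helper like that, the test would ask for something like FindByAutomationId<Selector>(_target, "CountryList") instead of GetUiElement<ComboBox>("cbCountries").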

Second, we have some very specific behaviors we expect, such as changing a country and having the state list and default state update as well. This is fully tested and if any of the data-bindings are broken, the test will fail. I can quickly determine if the XAML was corrupted and fix the problem before it gets into production. I think most developers will agree that the easiest way to fix a bug is to find it as close to the source as possible.

What if I needed to test a custom control, or even do something simple like emulate a button click to test data-binding to an ICommand object? No problem. This is where we would use automation peers. For example, to simulate a button click, I can include the following code:

var btn = GetUiElement<Button>("btnCommand");
var automation = new ButtonAutomationPeer(btn);
var provider = automation.GetPattern(PatternInterface.Invoke) as IInvokeProvider;
provider.Invoke();

The above code will simulate a button click, and then I can test my command to ensure it fired correctly. For custom controls, you can provide your own automation peer, which is a good idea anyway for accessibility.
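As a rough sketch of what that might look like (the control and peer names here are hypothetical, not from this project), the control overrides OnCreateAutomationPeer to return its peer, and the peer exposes the patterns it supports, in this case Invoke:

public class TogglePanel : Control
{
    public void Toggle()
    {
        // the control's real behavior would go here
    }

    protected override AutomationPeer OnCreateAutomationPeer()
    {
        return new TogglePanelAutomationPeer(this);
    }
}

public class TogglePanelAutomationPeer : FrameworkElementAutomationPeer, IInvokeProvider
{
    public TogglePanelAutomationPeer(TogglePanel owner) : base(owner)
    {
    }

    public override object GetPattern(PatternInterface patternInterface)
    {
        // Expose the Invoke pattern so tests (and assistive technologies)
        // can "click" the control without simulating mouse input.
        return patternInterface == PatternInterface.Invoke
                   ? this
                   : base.GetPattern(patternInterface);
    }

    public void Invoke()
    {
        ((TogglePanel)Owner).Toggle();
    }
}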

Conclusion

Hopefully this article demonstrated another way to catch issues fast, close to the source, and before the code even makes it to your QA machine. Indirectly, I've shown how MVVM can improve the development process through both testability and workflow (i.e. being able to work on independent pieces, including XAML and view logic, separately). You've also seen how unit testing adds value to the development process and why, most of the time, the net result is a faster path to "production ready" in spite of the overhead of writing tests.
Jeremy Likness

Tuesday, August 3, 2010

Using Reactive Extensions (Rx) to Simplify Asynchronous Tests

Reactive Extensions (Rx) is a product from Microsoft Research that simplifies the management and composition of asynchronous events. If you read my earlier post on Asynchronous Workflows, you'll understand why the asynchronous programming model can sometimes lead to confusing and hard-to-maintain code.

There is nothing wrong with the model, but the fact that you must subscribe to an event or register a callback means your code can end up nested and scattered to the winds. Add to that the complexity of aggregating events (for example, firing multiple asynchronous events and then waiting for them all to complete before going on) and it's enough to require a visit to the pharmacy. OK, that was a bad lead-in because of course that's where you can get your Rx (prescription).

There is a lot to explore in the framework, but you can start by visiting the main site and downloading the extensions (they are available for .NET 3.5, .NET 4.0, Silverlight 3, Silverlight 4 and JavaScript). Probably the best and easiest way to learn Rx is to follow the hands-on labs, which you can download in PDF format from this blog post.

I'd like to give a gentle introduction by demonstrating how it solved a very specific need of mine. A very common pattern is what I'll call "synchronized lists," where a change in one list triggers a change in the other. The classic example is selecting a country to get a list of states for that country. Selecting a country means we fetch or filter a new list of states, and if we're kind to the end user we'll also give them a default (either a state or perhaps a "select one" prompt). One test would be changing the country and validating that the list of states updated correctly. Another test, important for data-binding, is to ensure that setting the country raises property changed events for both the country and the state. If someone were to come along and change the setter to write to the private backing field directly, the property change notification would no longer fire and we'd lose our data-binding.

The test is straightforward to describe: set up the view model, register for the event, then listen and confirm we get the properties we are looking for. Implementing it is another story. The traditional approach would involve registering a lambda that adds to a list, waiting a certain amount of time (to make sure all of the events have fired), and then checking the list. There is no "clean" way to do it without timers or a counter, or without making assumptions about the order in which the properties arrive.
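For contrast, here is a rough sketch of that manual approach; it is my own illustration, reusing the _target and _countries fixtures from the Rx test below and leaning on the test framework's EnqueueConditional to wait on a counter:

[TestMethod]
[Asynchronous]
public void TestChangeCurrentCountry_RaisesPropertyChanged_Manually()
{
    var properties = new List<string>();
    var expected = new[] { "CurrentCountry", "CurrentState" };

    // set it up
    _target.OnImportsSatisfied();

    // collect every notification by hand
    _target.PropertyChanged += (o, e) => properties.Add(e.PropertyName);

    _target.CurrentCountry = _countries[1];

    // wait until (at least) two notifications arrive, then assert
    EnqueueConditional(() => properties.Count >= 2);
    EnqueueCallback(() => CollectionAssert.AreEquivalent(expected, properties,
        "Invalid properties were raised on the notify change event."));
    EnqueueTestComplete();
}

It works, but the intent is buried in the plumbing, and the event handler is never unhooked.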

The Reactive Extensions simplify this tremendously. Take a look at the following code:

[TestMethod]
[Asynchronous]
public void TestChangeCurrentCountry_RaisesPropertyChanged()
{
    var properties = new List<string>();
    var expected = new[] {"CurrentCountry", "CurrentState"};

    // set it up
    _target.OnImportsSatisfied();

    var propertyChanged = (from evt 
      in Observable.FromEvent<PropertyChangedEventArgs>(_target, "PropertyChanged")
      select evt.EventArgs.PropertyName).Take(2);
                
    using (propertyChanged.Subscribe(
      properties.Add,
      ex => Assert.Fail(ex.Message),
      () =>
         {                                                 
            CollectionAssert.AreEquivalent(expected, properties,
            "Invalid properties were raised on the notify change event.");
            EnqueueTestComplete();
         }))
    {
        _target.CurrentCountry = _countries[1];
    }
}

What's going on here? I set up a list to grab properties as they are raised, and I initialize a list of expected properties. This makes the conditions of the test very clear. I'm using MEF, so what happens in unit tests is that I set up the view model with a bunch of mock objects to return my data: i.e. "if you ask for a country, I'll give you this" and so forth. I explicitly call "OnImportsSatisfied" to simulate what would happen if I were actually composing the view model with MEF (in this case, however, my test is composing it).

In this case, the view model will call the country service to populate the list, then the state service, and set defaults. All of these are fed based on the mock configurations.

Once we have the view model prepared, Rx steps in. I set up an observable sequence based on the property changed event. Notice we pass the expected EventArgs type, the object to inspect, and the name of the event to listen for. The power of Rx is that it builds the list for me asynchronously as the event is raised, in this case yielding the property names (because of my select statement). I am only expecting two, so I can use the standard LINQ "Take(2)" to stop after the second property changed event (otherwise, it would keep listening and building out the list ad infinitum).

The Subscribe call inside the using statement provides an action to run as items arrive (note I'm using a method group so each property name is passed straight to the Add method of the properties list). If an exception is thrown, I fail the test gracefully with an assert. The last parameter is the action to perform on completion, which happens after both properties have been received; there, I compare the two lists and let the Silverlight Unit Testing Framework know the test is complete. Wrapping the subscription in a using statement also ensures it is disposed when the test finishes.

The entire statement wraps the functionality I'll use to trigger the behaviors - in this case, changing the country to set off the property change notification.

While this is a very simple example, I encourage you to download the hands-on labs to see more complex examples that compose multiple asynchronous events together in a single collection. It's a powerful framework that deserves a close look!

Jeremy Likness