Friday, February 26, 2010

MEF instead of PRISM for Silverlight 3 Part 1 of 2: Dynamic Module Loading

Recently I've been having lots of conversations about the Managed Extensibility Framework (MEF), the Composite Application Library (CAL or PRISM), and how they relate. One point of confusion for many people comes when they try to force the two solutions to work together. In a recent conversation, I mentioned that PRISM has some great features, but that if you are only using it for dynamic module loading and view management, MEF should do fine. Then I promised to post a blog with a reference project ... and here it is.

Click here to read the Spanish version (Leer en español)

Download the source for this project

First, let me share that I love PRISM and have been working with it in almost all of my projects for the past year. My Wintellect colleague Rik Robinson has an excellent article on PRISM you can read here. You can also scan this blog with the PRISM tag. However, I've started to really enjoy working with MEF and believe it is quickly becoming the solution of choice for composite Silverlight applications ... especially with its inclusion in the upcoming 4.0 release. In these two posts I'll show you how to tackle dynamic module loading and region management using exclusively MEF instead of PRISM.

I'm working with preview 9 of the MEF bits and showing what can be done in the current production release of Silverlight, which is version 3. To start, I simply create a new Silverlight Application. I add a folder called "MEF" and drop the bits in there, which are really just two DLLs: System.ComponentModel.Composition and System.ComponentModel.Composition.Initialization. I reference those.

The "Bootstrapper" in MEF

Next I create a new empty user control called Shell.xaml (sound familiar?) that just has a grid and a text block so I know it's there. If you are familiar with PRISM, you are familiar with the concept of the Bootstrapper class to wire everything up. With MEF, we'll do it a little differently. First, I go inside the shell and simply decorate it with the "export" attribute:

[Export]
public partial class Shell
{
    public Shell()
    {
        InitializeComponent();
    }
}

Next, I go into my main application class (App.xaml.cs) and add a property to import the Shell, like this:

...
[Import]
public Shell RootView { get; set; }
...

Finally, in the application start-up method, instead of newing up a "main page" (which no longer exists), I simply ask MEF to satisfy my import of the shell, then assign it to the root visual:

private void Application_Startup(object sender, StartupEventArgs e)
{
    CompositionInitializer.SatisfyImports(this);
    RootVisual = RootView;
}

That's it! When I hit F5 I see the text I placed in the shell, so I know it's being placed in the root visual. We're good to go! I'm going to tackle dynamic loading first, then look at region management. I'll need some buttons to trigger the loading, so we'll want to roll a command behavior. This is one of the nice things that comes with PRISM (the commands), but let's see how easy or difficult it is to roll our own.

Custom Commands

Microsoft Expression Blend provides a set of base triggers and behaviors that make it easy to add new functionality like commanding. If you don't have the full tool, don't despair - these are also included in the free SDK you can download. I'm including a reference to System.Windows.Interactivity.

I want to do a trigger action. A trigger action allows me to act on any event, which I prefer over hard-coding just to a button click.

Before I get ahead of myself, though, let's create our command. While Silverlight 3 doesn't have a built-in command object, it does provide the ICommand interface. I'm going to do a partial implementation. I say "partial" because the interface allows a parameter to be passed to the command, and I'm simply raising the command and assuming null for the sake of this blog post:

public class CommandAction : ICommand 
{
    private readonly Func<bool> _canExecute;

    private readonly Action _execute; 

    public CommandAction(Action action, Func<bool> canExecute)
    {
        _execute = action;
        _canExecute = canExecute;
    }

    public bool CanExecute(object parameter)
    {
        return _canExecute();
    }

    public void Execute(object parameter)
    {
        _execute();
    }

    public void RaiseCanExecuteChanged()
    {
        EventHandler handler = CanExecuteChanged;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }

    public event EventHandler CanExecuteChanged;
}

While I haven't looked at the code in PRISM, I assume this is very close to how they wire the DelegateCommand. We hold a function that determines if the command is enabled, and an action to call when the command is invoked. Whoever creates the CommandAction is responsible for passing those delegates in and raising "can execute changed" when appropriate.
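To make that contract concrete, here is a quick sanity check you could run in plain C# (assuming the CommandAction class above is available; the variable names are purely illustrative). Subscribers to CanExecuteChanged, like the behavior we build next, re-query CanExecute whenever the owner raises the event:

```csharp
// Illustrative only: exercise CommandAction outside of any XAML.
bool enabled = true;

var command = new CommandAction(
    () => enabled = false,   // the action disables further execution
    () => enabled);          // the command is enabled while the flag is true

command.CanExecuteChanged += (s, e) =>
    Console.WriteLine("CanExecute is now {0}", command.CanExecute(null));

command.Execute(null);             // runs the action; the flag is now false
command.RaiseCanExecuteChanged();  // subscriber prints "CanExecute is now False"
```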

Now I can start to work on the command behavior. Because of my unique command type, I'm just going to allow you to pass the name of the command property on the data-bound object and I'll dissect the rest. If you bind a view model with an "ActionCommand" property, I'll let you wire it up that way. Because we're using MEF to bind our view models, my behavior will get wired up well before the bindings happen. I'll need to know when the data context changes, so I build a data context helper. It uses dependency properties to listen to a "dummy" binding for changes, then calls an action when a change happens. It looks like this and is loosely based on a previous post I made on data context changed events:

public static class DataContextChangedHandler
{
    private const string INTERNAL_CONTEXT = "InternalDataContext";
    private const string CONTEXT_CHANGED = "DataContextChanged";

    public static readonly DependencyProperty InternalDataContextProperty =
        DependencyProperty.RegisterAttached(INTERNAL_CONTEXT,
                                    typeof(Object),
                                    typeof(DataContextChangedHandler),
                                    new PropertyMetadata(_DataContextChanged));

    public static readonly DependencyProperty DataContextChangedProperty =
        DependencyProperty.RegisterAttached(CONTEXT_CHANGED,
                                    typeof(Action<Control>),
                                    typeof(DataContextChangedHandler),
                                    null);

    
    private static void _DataContextChanged(object sender, DependencyPropertyChangedEventArgs e)
    {
        var control = (Control)sender;
        var handler = (Action<Control>) control.GetValue(DataContextChangedProperty);
        if (handler != null)
        {
            handler(control); 
        }
    }

    public static void Bind(Control control, Action<Control> dataContextChanged)
    {
        control.SetBinding(InternalDataContextProperty, new Binding());
        control.SetValue(DataContextChangedProperty, dataContextChanged); 
    }
}

In this case, I'm scoping specifically to a control, which is where I believe the lowest level of "is enabled" gets implemented for controls that can react to changes, so that makes sense for our behavior. When I call bind, I pass it a control and a delegate to call when that control's data context changes. Now we can build in our behavior:

public class CommandBehavior : TriggerAction<Control>
{
    public static readonly DependencyProperty CommandBindingProperty = DependencyProperty.Register(
        "CommandBinding",
        typeof(string),
        typeof(CommandBehavior),
        null);

    public string CommandBinding
    {
        get { return (string)GetValue(CommandBindingProperty); }
        set { SetValue(CommandBindingProperty, value); }
    }

    private CommandAction _action; 

    protected override void OnAttached()
    {
        DataContextChangedHandler.Bind(AssociatedObject, obj => _ProcessCommand());
    }

    private void _ProcessCommand()
    {
        if (AssociatedObject != null)
        {

            var dataContext = AssociatedObject.DataContext;

            if (dataContext != null)
            {
                var property = dataContext.GetType().GetProperty(CommandBinding);
                if (property != null)
                {
                    var value = property.GetValue(dataContext, null);
                    if (value is CommandAction)
                    {
                        _action = (CommandAction)value;
                        AssociatedObject.IsEnabled = _action.CanExecute(null);
                        _action.CanExecuteChanged += (o, e) => AssociatedObject.IsEnabled = _action.CanExecute(null);
                    }
                }
            }
        }
    }

    protected override void Invoke(object parameter)
    {
        if (_action != null && _action.CanExecute(null))
        {
            _action.Execute(null);
        }
    }      
}

OK, let's step through it. I'm using the System.Windows.Interactivity namespace to define a trigger action, which can be bound to any event, and I'm scoping it to a control. When my behavior is attached, I bind to the data context changed event and ask it to call my method to process the command when the data context changes (presumably to bind our command). When that fires, I grab the property named in my behavior from the data context, cast it to a command, and wire it up so that whether the host control is enabled automatically tracks the CanExecute method of the command. When my behavior is invoked, I check CanExecute again and then execute.

ViewModel Glue

There's been a lot of discussion (including in this blog) around how to glue the view model to the view. I personally like to keep it simple and straightforward. Here's a view model stubbed out for the shell that lets me click a button to dynamically load a view. I want to disable the button once clicked so the user can't load the view more than once.

[Export]
public class ShellViewModel
{
    public ShellViewModel()
    {
        ViewEnabled = true;
        ViewClick = new CommandAction(_ViewRequested,
                                      () => ViewEnabled);
    }

    private bool _viewEnabled;

    public bool ViewEnabled
    {
        get { return _viewEnabled; }
        set
        {
            _viewEnabled = value;
            
            if (ViewClick != null)
            {
                ViewClick.RaiseCanExecuteChanged();
            }
        }
    }

    public CommandAction ViewClick { get; set; }

    private void _ViewRequested()
    {
        ViewEnabled = false;
    }
}

Note I use a property to determine if the command is enabled, then call a method to process it when fired. We'll add more to that method in a minute. Let's get this into our shell. Because the shell is wired up in the application through the initializer, there is no need to initialize or compose again; composition satisfies the imports of imported parts recursively. So, let me add a little bit to my shell:

<UserControl x:Class="RegionsWithMEF.Shell"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
    xmlns:interactivity="clr-namespace:System.Windows.Interactivity;assembly=System.Windows.Interactivity"
    xmlns:command="clr-namespace:RegionsWithMEF.Common.Command;assembly=RegionsWithMEF.Common"
    >
    <Grid x:Name="LayoutRoot" Background="White">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition/>
        </Grid.RowDefinitions>
        <TextBlock Text="Main MEF Shell"/>
        <Button Content="Load Dynamic View into Region" Width="Auto" Height="Auto" Grid.Row="1">
            <interactivity:Interaction.Triggers>
                <interactivity:EventTrigger EventName="Click">
                    <command:CommandBehavior CommandBinding="ViewClick"/>
                </interactivity:EventTrigger>
            </interactivity:Interaction.Triggers>
        </Button>
    </Grid>
</UserControl>

Notice how I use the interactivity namespace to define a trigger for the button. The trigger is for the "click" event, but the way we built the behavior, it can easily fire based on other events as well. I bring in my command behavior and point it at the "ViewClick" property on my view model. A more advanced implementation would turn that into a full-blown binding property and allow binding syntax, but for now we'll stick with the simple property name.

In the code behind, I wire in my view model:

[Import]
public ShellViewModel ViewModel
{
    get { return LayoutRoot.DataContext as ShellViewModel; }
    set { LayoutRoot.DataContext = value; }
}

That's it! I run the project to test it, and get a nice text block with a button. When I click the button, it disables immediately. Now let's tackle that dynamic view!

Controlling the Catalog and Container

We want to dynamically load some modules, but first we'll need to tweak our container. By default, MEF is going to create a container in our main application and compose parts to it. Unfortunately, that means when we load other modules/xap files, they won't have access to the container! We need to fix this.

First, I'll create a service to expose an AggregateCatalog that I can add other catalogs to. I want to be able to get the catalog and add catalogs to it:

public interface ICatalogService
{
    void Add(ComposablePartCatalog catalog);

    AggregateCatalog GetCatalog();

}

Next, I will implement this by creating an aggregate catalog. I'm going to assume that when the service is created we want to include the currently loaded assemblies. This assumption may be wrong, and down the road we might inject the catalog, but for now we'll iterate the current deployment (the set of running assemblies) and pull each assembly into our catalog:

public class CatalogService : ICatalogService
{
    private readonly AggregateCatalog _catalog = new AggregateCatalog();

    public CatalogService()
    {
        foreach (AssemblyPart ap in Deployment.Current.Parts)
        {
            StreamResourceInfo sri = Application.GetResourceStream(new Uri(ap.Source, UriKind.Relative));

            if (sri != null)
            {
                Assembly assembly = ap.Load(sri.Stream);
                _catalog.Catalogs.Add(new AssemblyCatalog(assembly));
            }
        }
    }    

    public void Add(ComposablePartCatalog catalog)
    {
        _catalog.Catalogs.Add(catalog);
    }

    public AggregateCatalog GetCatalog()
    {
        return _catalog; 
    }
}

Addendum: thanks to Glenn Block for pointing out a fix here. I am manually loading the assemblies, and this is no longer necessary. I kept the block above to be consistent with the code download, but here is the "fix": you can have it load the existing assemblies with an empty DeploymentCatalog:

public class CatalogService : ICatalogService
{
    private readonly AggregateCatalog _catalog = new AggregateCatalog();

    public CatalogService()
    {
       // empty deployment catalog parses existing XAP by default
       Add(new DeploymentCatalog());
    }    

    public void Add(ComposablePartCatalog catalog)
    {
        _catalog.Catalogs.Add(catalog);
    }

    public AggregateCatalog GetCatalog()
    {
        return _catalog; 
    }
}

Now we just need to tweak the main application. We'll instantiate the service, then tell it how to expose itself (sounds weird, I know, but when I bring in a module, I want the module to be able to add itself to the aggregate catalog, so the catalog service must be exported). We'll let MEF know we want to use this specific container moving forward, and then we'll initialize as before. The new method looks like this:

private void Application_Startup(object sender, StartupEventArgs e)
{
    ICatalogService service = new CatalogService();
    var container = new CompositionContainer(service.GetCatalog());
    container.ComposeExportedValue(service);
    CompositionHost.Initialize(container); 
    CompositionInitializer.SatisfyImports(this);
    RootVisual = RootView;
}

Dynamic Loading of Modules

MEF Preview 9 comes with a deployment catalog that gives us what we need. First, I want to hide the implementation details of loading a view. I'll create an enumeration of views that are available (in this case, only one) and an interface to call when I'm ready to navigate to a view.

For now, I want to just grab the view and verify it's loaded. Then we'll wrap up for today and tackle the region management in the next post.

I extended my shell view model to anticipate two views. In fact, to make the example more interesting, I will load the second view into the first view. Therefore, the second button is only enabled once the first button has been clicked, like this:

SecondViewClick = new CommandAction(
    _SecondViewRequested,
    () => !ViewEnabled && SecondViewEnabled);

Here are our views:

public enum ViewType
{
    MainView,
    SecondView
}

And the interface for our view navigation:

public interface INavigation
{
    void NavigateToView(ViewType view);
}

So now I can go back to my view model and wire in the navigation, as well as make the call. Here is the modified code:

[Import]
public INavigation Navigation { get; set; }

private void _ViewRequested()
{
    ViewEnabled = false;
    Navigation.NavigateToView(ViewType.MainView);
}

private void _SecondViewRequested()
{
    SecondViewEnabled = false;
    Navigation.NavigateToView(ViewType.SecondView);
}

Now I will add my modules. As I mentioned, I just want them to load for now; I can deal with the views later. So I create two more Silverlight applications in my solution called "DynamicModule" and "SecondDynamicModule.cs" ... yes, the second is a typo because I thought I was adding a class, and was too lazy to refactor it. So there. I imagine when I load a module I'll want it to perform some "introductory" functions, so let's define an initialization interface:

public interface IModuleInitializer
{
    void InitModule();
}

In both of my dynamic modules, I add a class that implements this interface and export it. I gave the classes different names to show it's the interface that matters, but here's what one looks like (right now it just returns, but I can set a breakpoint there to test when and if the module is loaded and called):

[Export(typeof(IModuleInitializer))]
public class ModuleInitializer : IModuleInitializer
{
    public void InitModule()
    {
        return; 
    }              
}

OK, now I can implement my INavigation interface. We're going to do two things. First, we'll map views to modules so we can dynamically load them using the DeploymentCatalog. Second, we'll import a collection of module initializers. This is the super-cool feature of MEF: recomposition. When we load the module and put it in our main catalog, it will recompose. This should fire a change to our collection of initializers. We can take the new items and call them, and we'll be initialized. Here's what it looks like:

[Export(typeof(INavigation))]
public class ViewNavigator : INavigation 
{        
    [Import]
    public ICatalogService CatalogService { get; set; }

    [ImportMany(AllowRecomposition = true)]
    public ObservableCollection<IModuleInitializer> Initializers { get; set; }

    private readonly List<IModuleInitializer> _modules = new List<IModuleInitializer>();

    private readonly Dictionary<ViewType, string> _viewMap =
        new Dictionary<ViewType, string>
            {
                { ViewType.MainView, "DynamicModule.xap" },
                { ViewType.SecondView, "SecondDynamicModule.cs.xap" }
            };   

    private readonly List<string> _downloadedModules = new List<string>();

    public ViewNavigator()
    {
        Initializers = new ObservableCollection<IModuleInitializer>();
        Initializers.CollectionChanged += Initializers_CollectionChanged;
    }
    
    void Initializers_CollectionChanged(object sender, System.Collections.Specialized.NotifyCollectionChangedEventArgs e)
    {
        if (e.NewItems != null)
        {
            foreach(var item in e.NewItems)
            {
                var initializer = item as IModuleInitializer; 
                if (initializer != null)
                {
                    if (!_modules.Contains(initializer))
                    {
                        initializer.InitModule();
                        _modules.Add(initializer);
                    }
                }
            }
        }
    }                                                                                              
  
    public void NavigateToView(ViewType view)
    {
        if (!_downloadedModules.Contains(_viewMap[view]))
        {
            var catalog = new DeploymentCatalog(_viewMap[view]);
            CatalogService.Add(catalog);
            catalog.DownloadAsync();
            _downloadedModules.Add(_viewMap[view]);
        }
    }               
}

Right now NavigateToView isn't very flexible. It loads the catalogs but doesn't do anything with the views, and I'm not handling errors when a XAP fails to load. We are tracking modules and XAPs that were already loaded, so we don't load them again or re-initialize the module. We'll get to the views in the next installment when we wire in regions. For now, we can put a breakpoint on the "return" in the module initializers, fire up the application, and click our buttons to watch the modules get dynamically loaded into the system (you can use Fiddler to watch the packets come over the wire). Pretty exciting!
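One way I might plug the error-handling gap later looks something like this (a sketch against the preview 9 bits, not production code): wait for the DeploymentCatalog's DownloadCompleted event, which passes standard AsyncCompletedEventArgs, and only add the catalog (triggering recomposition) once the download actually succeeded:

```csharp
public void NavigateToView(ViewType view)
{
    var xap = _viewMap[view];

    if (_downloadedModules.Contains(xap))
    {
        return;
    }

    var catalog = new DeploymentCatalog(xap);

    catalog.DownloadCompleted += (s, e) =>
    {
        if (e.Error == null)
        {
            // Adding the catalog now triggers recomposition with the new parts
            CatalogService.Add(catalog);
            _downloadedModules.Add(xap);
        }
        else
        {
            // A real application would log this or surface it to the user
            System.Diagnostics.Debug.WriteLine(e.Error.Message);
        }
    };

    catalog.DownloadAsync();
}
```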

In the next installment, I'll show you how MEF will handle region management.

Download the source for this project

Jeremy Likness

Thursday, February 25, 2010

Vancouver Olympics - How'd We Do That?

The Silverlight team recently posted a blog entry entitled "Vancouver Olympics - How'd we do That?" in which they detailed the massive effort across multiple partners to pull together the on-line solution for streaming HD videos, both live and on demand.

This was an exciting post for Wintellect and me because it detailed the effort we contributed to making the project a success. What's more important to me for the Silverlight community, however, is that you don't overlook the key detail they posted on the blog:

Development of a Silverlight application to interface with the service agents via WCF. This tool allows Microsoft to quickly publish new configuration files (such as web.config and clientaccesspolicy.xml) to groups of servers, as well as dynamically manipulate configuration settings through a visual interface (for example, turning off compression on a remote smooth streaming server when required) on multiple remote servers at once.

This is important because as a Silverlight developer I am constantly bombarded with questions about how viable Silverlight is, if it is ready for the enterprise and what practical application it has for business. This is a specific example where it was the right tool for the right job.

I won't go into more details about the tool other than what Microsoft has publicly shared, but I wanted to emphasize that this is another example of how Silverlight helps to solve real world problems. When looking at the requirements for the health monitoring, we had several options for a solution. In this case, WPF wouldn't have worked because they would have had to remote into the environment in order to access the tool. However, using ASP.NET and JavaScript would have required jumping through some major hoops and the use of heavy controls to correctly render the user interface they required. The tool essentially parses any XML-based configuration file and allows on-line navigation and updates to that document. While this would have been possible using script, it would have taken much longer and required an extended cycle of testing to stabilize.

Due to Silverlight's unique data-binding model and full LINQ to XML support, we were able to build a solution quickly and effectively. We were also able to use the unit testing framework to build hundreds of unit tests that helped ensure a stable, thorough, and well-tested solution. Again, using scripting would have required more complicated tools that simulated user-clicks, etc. With Silverlight, we can place controls on a test surface and simulate the click events to test everything from behaviors down to internal class methods.

PRISM, or the Composite Application Library, was also crucial to keeping the solution lightweight. There were literally dozens of functions the tool could provide but only a few might be used at any given time, so we separated the functions into independent modules that dynamically load as needed to provide a streamlined experience.

While I was excited to work on the project with my company, I am even more excited that it demonstrates yet again how Silverlight is the right tool, right here, right now for enterprise-class applications. This is another "case study" to add to your arsenal as you answer questions like "Is it ready?" or "Has it been done before?" I believe the fact that it was so crucial both in the front end (with the live streaming experience) and the back end (with the management tools) for an incredible event like the Olympics really proves how mature and ready Silverlight is for business-class applications.

Jeremy Likness

Tuesday, February 23, 2010

Top 10 Silverlight Myths and the Facts to Bust Them

Silverlight is a client side plug-in based technology that has been in production since late 2007. I've been a web developer for well over a decade now, and recently have focused almost exclusively on Silverlight since version 3.0 was released. It astounds me how many people still resist Silverlight because they either don't understand what it is or haven't taken the time to research the capabilities it provides. Silverlight is a strong, mature technology that is being used on production sites to deliver powerful applications right here, right now. It's not emerging and it's not experimental.

From my experience answering questions about Silverlight on both Twitter and various forums and discussion groups, I've come up with ten common myths that I hear over and over again. My hope is this one post can serve as a central place to address those myths, provide the facts that bust the myths, and help potential Silverlight users and developers better understand and take advantage of what I consider to be amazing technology. Feel free to share your thoughts in the comments at the end of this post and share this link with anyone who will gain value learning about the truth!

Myth: "Silverlight is mainly for video."

Fact: Video is only the tip of the iceberg.

Silverlight is a cross-browser, cross-platform, and cross-device plug-in used for creating rich applications on the Internet. In addition to a powerful video stack that makes it easy to deliver video using most of the widely available codecs, Silverlight also boasts a powerful client networking stack (making it easy to connect to third-party services like Facebook and Twitter, using SOAP, REST, or even raw TCP sockets). It has a robust data access model that uses a concept known as data-binding to render data. This makes it ideal for line of business applications due to the relative ease of taking business classes and exposing the data through a rich, interactive user experience. Silverlight also boasts a very robust layout and styling engine and comes with literally hundreds of controls and behaviors ready to be integrated into your applications.

To see what's possible with Silverlight, take a look at the Silverlight Showcase. For an example of how Silverlight provides an effective "line of business" experience, check out Microsoft's Health CUI Patient Journey Demonstrator. Here is another list of 25 inspiring Silverlight projects.

Myth: "Silverlight requires Microsoft web servers to run."

Fact: Silverlight applications can be served from any web server that supports configuring MIME types.

Silverlight is packaged into a file called a XAP file. This file is actually a special zip file that contains the content necessary to run a Silverlight application. Because this content is completely self-contained, there are only two steps required to successfully run the application: first, the server must provide the content to the end user via their browser, and second, the browser must load and run the content via the Silverlight plug-in.

Take a look at Embedding Silverlight in PHP. If you wish to host a Silverlight application on a server such as Apache, you simply need to store the XAP file on the server and map the .xap extension to the MIME type application/x-silverlight-app; the user's browser will do the rest!
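On Apache, for example, the mapping can be a single directive (shown as a sketch; where it goes depends on how your server is configured, e.g. httpd.conf or an .htaccess file):

```apache
# Serve XAP files with the Silverlight MIME type
AddType application/x-silverlight-app .xap
```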

Myth: "Microsoft tools for Silverlight are expensive."

Fact: You can develop fully functional Silverlight applications at no cost.

Actually, you can grab everything you need to develop Silverlight absolutely free. Take a look at the Silverlight Get Started page. There, you can download everything you need to build fully functional Silverlight applications. There is an SDK, toolkits, extensions, and even links to helpful tutorials to get started. While there are trial versions of additional tool sets such as Expression to help you get started, these are not required to build your applications.

Are you more of an open source person and prefer a different IDE? Check out the Eclipse Tools for Silverlight, an open source IDE that allows Java developers to build Silverlight applications.

Keep in mind the SDK works perfectly well with the Express (free) edition of the Visual Studio IDE that is available for download here. You can grab the 2008 edition for Silverlight 3 development and the upcoming 2010 edition for Silverlight 4.

Myth: "Silverlight doesn't work on ... (Mac, Chrome, etc.)"

Fact: Silverlight is supported on all of the most popular browser and operating system combinations.

This is a common myth. Silverlight support is broad, and while there are some obscure combinations that are not supported yet, all of the most widely used browser and platform combinations are supported.

According to the W3Schools Browser Statistics, the most popular browsers are Internet Explorer (versions 6, 7, and 8), followed by Firefox, Chrome, and Safari. Opera trails at 2.2% of the market.

The top 5 browser operating systems are all Windows operating systems, which together account for 90% of browsers that visit the website. Mac commands 6.8% and Linux 4.6% as of January 2010.

Based on these statistics, Silverlight is supported on the major operating systems and browsers (Firefox, IE, and Safari). While there is no official Microsoft support for Chrome until Silverlight 4, most report it runs just fine in that browser as well. The plug-in is supported on Intel-based Macs for both Safari and Firefox. While there is currently no Linux support, the Moonlight project is working toward that goal with Microsoft's full blessing. There is no PowerPC support for anything beyond version 1.0, but the majority of Mac computers are Intel-based according to this chart.

Myth: "Silverlight is buggy."

Fact: Silverlight is in its third production release and is used on many production websites without issue.

This is one of those claims I believe comes from two sources: people who simply don't want to install or play with Silverlight, so they call websites "buggy" when they can't access them due to lack of the Silverlight plug-in, and people who don't understand the difference between a buggy framework or plug-in and bad code.

Like any other platform, Silverlight is simply a staging area for applications. It's up to the development community to write those applications, and some developers are certainly more skilled than others. So far I have yet to find a viable production "bug" in the Silverlight runtime itself (as opposed to errors or bugs introduced by developers). That doesn't mean they don't exist, but I haven't found one, and I've written and supported a lot of code.

If you're concerned about how stable Silverlight is, consider this: it is robust enough to power Sunday Night Football, the 2008 Beijing Olympics (yes, that was two years ago) and the 2010 Winter Olympics. Artist Alicia Keys used Silverlight to stream her "Live from the Apollo" concert.

More importantly, there are a host of companies, including Fortune 500 companies, that use Silverlight every day to drive critical business operations. As an example, read about how my company, Wintellect, worked with a financial solutions provider to build a web-based check deposit solution using Silverlight. Then click here to launch a Bing search that shows several more case studies for your review.

Myth: "HTML5 is going to kill Silverlight."

Fact: HTML5 and Silverlight are different technologies that provide solutions to different problems.

This is a common topic of conversation on the web, due mostly to the myth that Silverlight is only for video. The fact that many web browsers are adopting the new <video> tag, and that major sites like YouTube now support the standard, has led some to believe that both Flash and Silverlight are going away.

This couldn't be farther from the truth. According to several tracking sites like RIAStats.com, the adoption rate of Silverlight is growing dramatically. Meanwhile, some browsers have adopted certain portions of the HTML5 standard and tout their support of "open standards" despite the fact that HTML5 isn't actually a standard yet. Take a look at the W3C schedule for this. It won't even be an official recommendation until 2012, and isn't expected to be finalized until 2022. I'm not making this up, read the facts for yourself!

So as much as the dream is alive, any adoption now is not support for open standards, but premature support for draft standards that are subject to change.

Don't get me wrong, I'm not against HTML5. I think it is fantastic. But what HTML5 does is provide advanced markup for media. Silverlight, as you've read in this post, does much more than provide pretty graphics or stream video. For more on this topic, read Why HTML5 won't kill Flash or Silverlight.

Myth: "Silverlight is about smashing open standards."

Fact: Silverlight is a platform that functions in addition to, not against or contrary to, open standards.

In a previous myth, I mentioned how slow the HTML5 ratification process is. However, there are browsers pushing support for these features, and I'm not saying that is a bad thing. In fact, Silverlight doesn't conflict with these standards. The latest version of Silverlight, version 4, which will soon be released, has full support for HTML and rich content. This means it can host HTML code and even execute JavaScript and plug-ins within the application. It fully supports the open standards used to render web content.

One of the main arguments that Silverlight is "against open standards" centers on video support. Opponents say that the HTML5 <video> tag is the way to go. While this sounds good in principle, there are a few issues with it. Although the tag is adopted in several browsers, it is currently good mainly for playing, stopping, starting, and pausing videos. There is no built-in support for captioning (take a look at this article to see how difficult it is to even attempt captioning using HTML5). There is no support for adaptive streaming technologies (like IIS Smooth Streaming), which improve the user experience by degrading gracefully to lower bit rates when the network slows, rather than stopping the video, showing the annoying "buffering" icon, and forcing the user to wait 15 seconds.

Silverlight makes it easy to provide videos with rich content, including chapter navigation and support for captioning. Furthermore, with the full DOM support, it is also easy to fall back to other video rendering methods. In a recent project I worked on, I built a Silverlight control to play videos in a PHP page. I used JavaScript to detect if the user had the plug-in installed. If not, I'd simply launch the direct video instead so that it could still be played on browsers that don't support Silverlight.

To further complicate the matter are dubious claims on the web about "open standards achievements" that, well, aren't really "open." Take a look at the world's 1st HTML audio generated music (their claim, not mine.) It sounds great because it is entirely open standards, right? Then why are they showing it with a video clip instead of making it play in your browser? The kicker comes in the "try it yourself" section, where you find out you need a custom "audio write enabled version of the FireFox browser." That hardly sounds like an open standard to me. I don't need a custom browser to run my Silverlight applications that handle audio and video just fine!

Again, I believe the main reason for this myth is really the perception that Silverlight is only good for playing videos and therefore is somehow competing with the HTML5 specification. Once you realize there is much, much more that can be done with Silverlight, you'll realize it is a way to get things done now, instead of having to resort to exotic JavaScript hacks or waiting for the open standards to catch up.

For another example of just how powerful the Silverlight experience can be, take a look at Bing Maps and zoom into the streets of Seattle, Washington. While zoomed in, click the little blue person to go to the street level view. These types of experiences just aren't possible with the current "open standards" and therefore provide an immediate solution until the standards can come to terms with technology advances.

Are you still holding out on installing the plug-in? You can view the Bing Maps experience in a video posted at TED. This one, by the way, is streamed using the most popular "standard" for showing video today: the completely proprietary Flash plug-in that was produced not by an open standards committee, but a corporation called Adobe. (It's an interesting fact that the people who are extremely vocal about HTML5 being the video standard still watch most of their videos using Flash ... only recently have a few major websites such as YouTube actually adopted the "HTML5 standard.")

Myth: "It's hard to develop for Silverlight."

Fact: Silverlight is based on the Common Language Runtime (CLR) and can therefore be written in multiple languages.

There is no need to reinvent the wheel: .NET developers will find native C# and VB.NET support. While the XAML markup language comes with a learning curve, most people admit it is a powerful tool and that the time spent mastering it is well worth the experience it provides. Silverlight can also be written using IronPython, and I've already mentioned the project to make it available to Java programmers.

The learning curve is no greater than most other languages, and I would argue that as a framework it is vastly superior for web programming. One of the reasons I began building Silverlight applications was because I could create richer real-time applications with better performance in a faster time frame using Silverlight over traditional HTML plus JQuery web pages. I didn't have to write exotic solutions to maintain state between page calls or support multiple browsers. Once you write the code, it runs wherever the plug-in is supported. The data-binding and XAML markup make it incredibly easy to develop very complex user interfaces, complete with validation, in a short amount of time.

The IDE for Silverlight supports features such as drag-and-drop behaviors and tons of widgets and controls. In the next version to be released, there is even drag-and-drop data access support. I don't necessarily consider this a good thing: the argument that it is difficult to develop is typically because it's something new to learn. Unfortunately, many developers have been spoiled by easy "drag and drop" frameworks that might build websites quickly while sacrificing some scalability, usability, and performance. Most developers who are used to building solid architectures find that Silverlight helps, rather than hinders, their development efforts. Developers who depend on crutches like completely auto-generated websites based on database models, on the other hand, will struggle with learning best practices for multi-tiered applications.

Community Feedback Links (see comments):

Myth: "There aren't good materials to learn Silverlight."

Fact: The Silverlight community is teeming with individuals and websites designed to help you learn it as quickly and effectively as possible.

One complaint I hear is that there aren't enough books out on Silverlight or various design patterns such as MVVM (model-view-view model) that are used in Silverlight. One of the issues is that the technology is moving at a very fast pace ... just a few years after launch, and we're moving to version 4.0 ... so the web is often the best way to get the latest information.

The main Silverlight site is loaded with tutorials to teach you how to write Silverlight, with topics ranging from basic to advanced. Then there are videos like Victor Gaudioso's on-line tutorials and the consistently updated Silverlight TV. The Silverlight Cream website keeps you up to date with the latest articles, tutorials, and blog posts related to Silverlight. Pose a question to the Silverlight Forums and you are likely to receive a response within minutes. Take a look at all of the Silverlight Users' Groups.

Need more advanced training? Our own company has in depth training available from Jeff Prosise's Silverlight Party to Silverlight for Designers and our flagship Devscovery Conference (we have a full day devoted to Silverlight in New York).

There is plenty of material out there if you are willing to read, watch, and attend.

Community Feedback Links (see comments):

Myth: "There aren't many practical Silverlight applications."

Fact: Silverlight is right here, right now, with satisfied customers using production applications to streamline business process, provide a media presence, offer gaming experiences, and more.

This is by far the main myth that is shattered by the others. Going from assuming Silverlight is "just for videos" to this is an important step in understanding what Silverlight can do. I've already mentioned the Bing Maps experience, pointed you to the showcase of applications and the sample application Microsoft provides for healthcare. There are sites devoted to Silverlight games. You can view Darwin's field notes on-line and browse Hard Rock memorabilia. This is all in addition to the various line-of-business applications on intranets that monitor the health of servers, provide interactive interfaces to upload, transport, and archive data, and give real-time visual feedback on the statistics businesses care about.

If I've appeared somewhat biased in my discourse on Silverlight, it's because I am. I've written PHP, AJAX, wired interactive sites with JQuery and used XML and XSLT as "open standards" to drive content-based websites. In all of my years working on the web, I've found Silverlight to be one of the most compelling platforms for creating truly amazing experiences both on-line and, with support from the newer releases, offline as well.

Having said that, I also don't believe Silverlight is the only programming platform. There are many cases where other tools and languages are required, whether because of the platform (e.g. a mobile device or desktop application), the content (for example, I wouldn't write a mostly text-based content management system in Silverlight), or the target audience. However, it's important that people understand what is possible so they don't get lost in pointless and irrational rants against technologies they haven't taken the time to understand.

Please share your thoughts and feedback below!

Jeremy Likness

Thursday, February 18, 2010

A Fluent RSS Reader for Silverlight Part 2: NDepends on What you Need

NDepend is a product that analyzes large code bases and provides information about dependencies, complexity of code, best practices, and more. While designed to help manage large code bases, it also works well as a "reality check" as you are developing new projects. The latest release supports Silverlight and I'll be using it to clean up my RSS reader a little bit.

The first thing I did was download and install the product. There is an option to integrate with Visual Studio. I chose this option, then opened the RSS reader solution. In the lower right I now have an option to create a new NDepend project and link it to my solution:

A new dialog pops up that prompts me for the projects/assemblies to scan and some other information. In this case, I simply take all of the defaults and click OK.

This creates the new project and analyzes my project immediately. Eventually, I'm brought to a new web page that has a detailed report of the analysis of my solution.

The report is very comprehensive. For example, I receive summary statistics at the top:

Number of IL instructions: 1003
Number of lines of code: 120
Number of lines of comment: 265
Percentage comment: 68
Number of assemblies: 3
Number of classes: 15
Number of types: 16
Number of abstract classes: 0
Number of interfaces: 1
Number of value types: 0
Number of exception classes: 0
Number of attribute classes: 1
Number of delegate classes: 0
Number of enumerations classes: 0
Number of generic type definitions: 1
Number of generic method definitions: 6
Percentage of public types: 81.25% 
Percentage of public methods: 76.81% 
Percentage of classes with at least one public field: 6.25% 

The visual map gives me a quick idea of what the dependencies of the project are (click for a larger view):

Visual NDepend

I also get a nice chart showing the dependencies ... you can imagine how powerful this visualization becomes when you have far more projects than the ones in our little RSS reader:

OK, that's all good and pretty, but how does this help us? We need to dive deeper into the report. Scrolling down, the first thing I see is that I am scolded for not commenting my SecurityToken method well enough: there is a lot of code, but I'm not explaining what it does. That's fair, and better comments will certainly help other developers down the road, but what about performance and code quality?

The report indicates that my Item, RSSElementAttribute, and Channel classes are candidates to be sealed. Why would I want to seal the classes? Many C# developers mistakenly believe that you only seal a class when you want to prevent someone else from deriving from it. Not true.

As Wintellect's own Jeffrey Richter explains in CLR via C#, there are three key reasons why a sealed class is better than an unsealed one:

  1. Versioning — if you need to unseal it in the future, you can do so without breaking compatibility
  2. Performance — because the class is sealed, the compiler can generate non-virtual calls (which have less overhead) rather than virtual calls (which require a runtime lookup through the type's method table to find the correct override to invoke)
  3. Security and Predictability — classes should manage and protect their state. Allowing a class to be derived exposes that class to corruption via the deriving code.
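To make the suggestion concrete, here is a minimal sketch of what sealing looks like (the Item class here is abbreviated from the version we built in Part 1):

```csharp
// Sealed: no other class can derive from Item, so the compiler is free
// to emit non-virtual calls, and the class fully controls its own state.
public sealed class Item
{
    public string Title { get; set; }

    public string Description { get; set; }
}
```

Attempting `class ExtendedItem : Item` now produces a compile-time error, which is exactly the versioning and security guarantee Richter describes.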

Turns out NDepend has given us a pretty good pointer there. We'll come back to these classes in a bit.

Next, I find out two of my classes, Channel and RssReader, are candidates for structures. Structures have certain performance benefits (read more about them here) but if they are too large they may actually degrade performance. A class that is relatively small, has no derived types, and derives directly from System.Object is a candidate for a structure. For the Channel class, we can kill two birds with one stone: structures are also implicitly sealed.
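As a sketch of the difference (FeedStatistics is a hypothetical type for illustration, not part of the reader): a small type that derives directly from System.Object and has no derived types can be declared as a struct, and structs are implicitly sealed:

```csharp
using System;

// A value type: small, allocated inline rather than on the heap,
// and implicitly sealed -- no type can ever derive from a struct.
public struct FeedStatistics
{
    public int ItemCount { get; set; }

    public DateTime LastUpdated { get; set; }
}
```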

I could go on and on ... my next step is to continue finding areas that the Code Query Language constraints are fired and see if it makes sense to correct them. You can read more about the CQL here. This language allows you to write your own constraints for quality checking your code. Let's jump out of the constraints and look at a few more metrics that the report provides.

Fortunately, most of the items on the report contain hyperlinks to explanations of what the metric or query is. The list of basic metrics is explained here. These are important ones to look into:

Afferent Coupling — this is how many types outside of an assembly depend on the current type. A high afferent coupling means the type has a larger responsibility because more external types depend on it. In our case, only the common project generates this type of dependency.

Efferent Coupling — this is simply how many external types to the current assembly are depended upon by types within the assembly. A high efferent coupling means we have many dependencies on other types. There will always be efferent coupling, because in the basic sense your string, int, and other framework types have a dependency on the core library.

Poor design results in efferent coupling increasing as you move from the database to the presentation layer. This is because more dependencies on database, business, and other types accumulate. A quality design will ensure consistency across the layers, because the presentation layer will only couple to models and types defined in the business layer, but will be completely abstracted from the data layer, etc.

Relational Cohesion — this refers to interdependencies within an assembly. Typically, related types should be grouped together. A low relational cohesion means there are very few interdependencies. This is not always bad. For example, if you have your domain models in a single assembly, the relational cohesion will depend on how many of those classes aggregate or compose other classes ... it is perfectly normal to have several unrelated types defined. On the flip side, a high cohesion means perhaps the classes are too interrelated and should be broken out for better de-coupling. There is no hard-and-fast rule here; it is really a judgment call based on the specific project being analyzed.

Instability is an important metric to consider. It is the ratio of efferent coupling to total coupling. On a scale from 0 to 1, this describes the "resistance to change" for the assembly. A 0 means it is highly resistant to change, while a 1 means it is extremely unstable, or highly sensitive to changes. Because I have grouped models, services, and interfaces together in my project, it fails with a score of 1. This means I need to re-factor and break some elements out so they are more independent.
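The formula itself is straightforward: instability I = Ce / (Ca + Ce), where Ce is efferent coupling and Ca is afferent coupling. A quick sketch:

```csharp
public static class CouplingMetrics
{
    // I = Ce / (Ca + Ce): 0 is maximally stable (others depend on you,
    // you depend on nothing); 1 is maximally unstable (the reverse).
    public static double Instability(int efferent, int afferent)
    {
        int total = efferent + afferent;
        return total == 0 ? 0.0 : (double)efferent / total;
    }
}
```

An assembly that depends on other types but has no external dependents scores Instability(ce, 0) = 1, which is why lumping models, services, and interfaces together fails the check.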

Abstractness is another important measure. This measures the ratio of abstract (interfaces and abstract classes) types to concrete types. It is again on a scale from 0 to 1. I try to target a combination of 0 and 1 for this metric ... the "1" or completely abstract being independent projects that define interfaces, with the "0" being the implementations of those interfaces. We'll re-factor the RSS reader to pull the interfaces and abstract classes to separate projects and then re-analyze. This will also help me with my testing efforts because it's very hard to write quality unit tests for highly coupled classes.

Based on the report, I need to do some refactoring and change the types and modifiers of some of my entities, as well as split these out into separate projects.

Doing this from within the project is simple. You can click on the NDepend icon and get a summary (click for full-sized view):

Next, I click on one of the warning rows to view the detail (for example unused code -> potentially dead methods). This gives me the detail of the violation and where potential violations occurred. I can easily click on the member or class to navigate to the code and fix it directly:

NDepend Navigation

Following the advice of the tool, I created a new "model" project where I moved the basic domain entities (item, channel) and the custom attribute. I followed the advice to seal the classes but left them as classes due to the reflection used to generate instances from the RSS feeds.

A new service project was created to implement the service. This is important: for unit tests of classes that depend on the service contract, I can now simply mock the interface rather than pulling in an assembly with an actual implementation (read about Using Moq for Silverlight for Advanced Unit Tests).

I also flattened my structure a bit. I had namespaces with only one class simply because I was dumping classes into matching folders; it is better to group them together and refactor down the road when it makes sense.

After all of this, I ended up with the new dependency diagram:

My new assemblies no longer have instability ratings of "1" which means the solution as a whole is moving toward better stability and long term maintainability.

Obviously this was a small project with few changes. The power of the tool increases exponentially as you build more complex and involved projects. It can help you sort dependencies, optimize your code, enforce naming conventions, and much more.

Now we've got the reader a little better suited for future expansion. My next step will be to build the unit tests (that should have gotten there a lot sooner, I know, I know) and then see if it makes sense to introduce any type of dependency injection.

Jeremy Likness

Monday, February 15, 2010

Windows Phone 7 Series (Formerly called Windows Mobile 7)

There has been a lot of buzz around Microsoft's latest mobile phone operating system. I hope to be involved with this product as much as possible and will blog the details that I can as they are made available. While the buzz has referred to this as "Windows Mobile 7," the platform has been officially renamed to "Windows Phone" and the next release is named "7 Series."

Highlights of the announcement include a completely revamped user interface with Zune and XBox Live integration.

To get an overview of what the phone looks like, see demos, and catch updates, visit the official website at http://windowsphone7series.com/.

According to Steve Ballmer during the press conference at Mobile World Congress 2010, the phone will be available to purchase by Christmas 2010.

Here are some Twitter feeds of individuals related to the project and tags to follow for official updates:

Take a look at the Channel 9 first look "hands on demo": First Look Windows Phone 7 Series Hands on Demo.

The development platform and related toolsets will be announced at MIX 2010. Take a look at this exclusive offer to learn how to build applications for the new platform for attendees: Windows Phone 7 Series Offer for MIX10 Attendees.

That's all I've got, everything that is known right now can be found on these various sites. As I mentioned, I hope to be involved with this platform as much as possible over the next few months and will blog updates as they become generally available to the public.

Enjoy!

Jeremy Likness

Sunday, February 7, 2010

A Fluent RSS Reader for Silverlight Part 1: Proof of Concept

One of the most common examples to help learn a language or framework is an RSS Reader. This is an ideal mini-project because it includes networking, parsing XML, and binding to data elements such as lists. I wanted to provide an example that shows some more interesting solutions that are possible using C# in Silverlight. This is the first part in a series. By the end of this post, we'll have a working reader. What I'll then do is add some more detailed error handling, provide unit tests, tackle skinning and visual states, and hopefully end up with a fairly advanced client.

Today, let's focus on getting a client where we can enter a feed and get some listings. A perfect way to test this will be a Twitter feed. If you would like to read my recent tweets as RSS, for example, you can simply navigate to http://twitter.com/statuses/user_timeline/46210370.rss. By viewing the source, you can see the basic structure of an RSS feed. It is fairly straightforward.

Download the source for this project

Domain Entities

The first step is to define some domain entities. These are the "models" for our application, and represent the actual data we are working with. With an RSS feed, it is fairly simple. We'll work our way backward from the item up to the channel. Our item to start with looks like this:

 
public class Item
{
    public string Title { get; set; }

    public string Description { get; set; }

    public DateTime PublishDate { get; set; }

    public string Guid { get; set; }

    public Uri Link { get; set; }
}

A channel is simply some header information with a collection of items. For starters, we'll make it look like this:

public class Channel
{
    public Channel()
    {
        Items = new ObservableCollection<Item>();
    }

    public string Title { get; set; }

    public Uri Link { get; set; }

    public string Description { get; set; }

    public ObservableCollection<Item> Items { get; set; }
}

Note that I build my entities based on the actual content, not how the content is delivered. In other words, even though the RSS stream may encode a link or a date as text, I want my domain entity to be able to work with the property as it was intended. Therefore, there are no "hacks" to force something into a Uri that doesn't belong: we use that type for the links. Similarly, the date time is handled as a date, and we can worry about how to display it when the time is right. The point is this keeps the data we're dealing with abstracted from the way we receive it, so if down the road we are reading a database or connecting to a different type of service, we only need to change the way we adapt from the service into our domain.

De-serialization

One of the challenges we'll face with RSS feeds is taking the feed data and turning it into our domain objects (de-serializing the data). There are a number of different approaches, but I wanted to present one that I think makes it extremely easy to expand and extend entities that are mapped to XML documents. It involves using Silverlight's powerful LINQ to XML features.

The concept is relatively simple: XML data is mostly strings in elements and attributes. For RSS feeds, the data is entirely in elements. Therefore, it should be relatively straightforward to parse an XML element into an entity. We'll know the target property and the type of that property, so all we need is the element and the conversion.

To tackle the element, I decided to create a custom attribute called RssElement that allows me to tag a property with the name of the element the property's value is contained within. Here is the attribute:

[AttributeUsage(AttributeTargets.Property,AllowMultiple = false)]
public class RssElementAttribute : System.Attribute
{
    public readonly string ElementName; 

    public RssElementAttribute(string elementName)
    {
        ElementName = elementName;
    }
}

Now I can circle back to my entities and tag them with the source elements. First, the item:

public class Item
{
    [RssElement("title")]
    public string Title { get; set; }

    [RssElement("description")]
    public string Description { get; set; }

    [RssElement("pubDate")]
    public DateTime PublishDate { get; set; }

    [RssElement("guid")]
    public string Guid { get; set; }

    [RssElement("link")]
    public Uri Link { get; set; }
}

And then the channel:

public class Channel
{
    public Channel()
    {
        Items = new ObservableCollection<Item>();
    }

    [RssElement("title")]
    public string Title { get; set; }

    [RssElement("link")]
    public Uri Link { get; set; }

    [RssElement("description")]
    public string Description { get; set; }

    public ObservableCollection<Item> Items { get; set; }
}

Now that we have a way of tagging the properties, we can create a fluent extension to move from an XElement (what LINQ gives us) to an actual type.

I created a static class called LinqExtensions to extend my LINQ classes. Again, working backward, let's consider what happens to the value of an element in XML when we move it to our entity. We'll need to take that value (which is a string) and convert it based on the target type. It has a consistent method signature: receive a string, return a type. Therefore, I decided to implement a dictionary to map the conversion process to the target types. Working with the types I know I'll need, my dictionary looks like this:

public static class LinqExtensions
{
    private static readonly Dictionary<Type, Func<string, object>> _propertyMap;

    static LinqExtensions()
    {
        _propertyMap = new Dictionary<Type, Func<string, object>>
                           {
                               {typeof (string), s => s},
                               {typeof (Uri), s => new Uri(s)},
                               {typeof (DateTime), s => DateTime.Parse(s)}
                           };
    }

This is very simple when you see where I'm going. Everything is mapped to a function that takes a string and returns an object. A string is a pass-through (s => s). A Uri converts the string to a Uri (s => new Uri(s)) and a date time attempts to parse the date (s => DateTime.Parse(s)). By using the type as the key to the dictionary, I can do this:

...
DateTime date = (DateTime)_propertyMap[typeof(DateTime)]("1/1/2010"); 
...

Of course, part of what we can do down the road is add some better error handling and contract checking. Next, let's use this conversion. I am taking in an element, then using the property information that I have to set the value onto an instance of the target class. Given a class instance T, an element, and the information about a property, I can inspect the custom attribute to find the element name, then use the conversion for the type of the property to set the value. My method looks like this:

private static void _ProcessProperty<T>(T instance, PropertyInfo property, XContainer element)
{
    var attribute =
        (RssElementAttribute) (property.GetCustomAttributes(true)
                                  .Where(a => a is RssElementAttribute))
                                  .SingleOrDefault();

    XElement targetElement = element.Element(attribute.ElementName);

    if (targetElement != null && _propertyMap.ContainsKey(property.PropertyType))
    {
        property.SetValue(instance, _propertyMap[property.PropertyType](targetElement.Value), null);
    }
}

As you can see, we grab the element, find the value in the XML fragment we're passed, then use the property's SetValue method to set the value based on the conversion we find in the property map.

How do we get the property info? Take a look at this fluent extension. It takes an element and is typed to a target class. It will create a new instance of the class and iterate the properties to set the values. By using it as a fluent extension, I can write this in code, which I think makes it very readable:

...
var widget = xmlElement.ParseAs<Widget>(); 
...

The "parse as" method looks like this:

public static T ParseAs<T>(this XElement element) where T : new()
{
    var retVal = new T();

    ((from p in typeof (T).GetProperties()
      where (from a in p.GetCustomAttributes(true) where a is RssElementAttribute select a).Count() > 0
      select p).AsEnumerable()).ForEach(p => _ProcessProperty(retVal, p, element));

    return retVal;
}

I basically grab all properties with the custom attribute, then pass the class, element, and property to the method we developed earlier to set the value. Finally, I return the new instance of the class. The ForEach is available with the toolkit, but you can roll your own enumerator extension like this:

public static class IteratorExtensions
{
    public static void ForEach<T>(this IEnumerable<T> list, Action<T> action)
    {
        foreach(var item in list)
        {
            action(item);
        }
    }
}

To me, the ForEach extension saves code and makes it easier to read what's going on when all you need is to perform a simple action on each item.

While we're having fun with our fluent extensions, we're almost ready to take an RSS feed and turn it into a group of objects. We need a way to get the text we receive from the call into an XDocument, so I made this extension:

public static XDocument AsXDocument(this string xml)
{
    return XDocument.Parse(xml);
}

We also need to take the XDocument and iterate the elements to build channels and items. Because we already built our ParseAs method and have tagged the entities for channels and elements, this becomes as simple as:

public static List<Channel> AsRssChannelList(this XDocument rssDoc)
{
    var retVal = new List<Channel>();

    if (rssDoc.Root != null)
    {
        foreach(var c in rssDoc.Root.Elements("channel"))
        {
            var channel = c.ParseAs<Channel>();
            c.Elements("item").ForEach(item => channel.Items.Add(item.ParseAs<Item>()));
            retVal.Add(channel);                    
        }
    }

    return retVal;
}

Again, I believe the fluent extensions are making our life easier because one logical layer builds on the next. We already have our entities tagged and our parsing worked out, so now we simply iterate the channels and parse those as channels, then iterate the items and parse those as items. Could it be any easier? Now, if I have the text of an RSS feed in a variable called rssFeed, the only thing I need to do in order to turn it into a list of channels and items is this:

...
var channelList = rssFeed.AsXDocument().AsRssChannelList(); 
...
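If you'd like to experiment with this approach outside of Silverlight, here is a minimal sketch of the same idea in Python (purely illustrative; the mapping tables, field names, and function names are my own, not the entities from the post): declare which XML element feeds each field, then let one generic routine walk the declarations.

```python
# Illustrative Python analogue of the attribute-driven parsing extensions.
import xml.etree.ElementTree as ET

# field name -> (element name, converter), standing in for RssElementAttribute
# plus the property map of type converters
CHANNEL_MAP = {"title": ("title", str), "link": ("link", str), "description": ("description", str)}
ITEM_MAP = {"title": ("title", str), "link": ("link", str), "description": ("description", str)}

def parse_as(element, mapping):
    """Generic equivalent of ParseAs<T>: build a dict from the declared elements."""
    result = {}
    for field, (tag, convert) in mapping.items():
        child = element.find(tag)  # direct child only, like element.Element(name)
        if child is not None and child.text is not None:
            result[field] = convert(child.text)
    return result

def as_rss_channel_list(xml_text):
    """Equivalent of AsXDocument().AsRssChannelList(): channels with nested items."""
    root = ET.fromstring(xml_text)
    channels = []
    for c in root.findall("channel"):
        channel = parse_as(c, CHANNEL_MAP)
        channel["items"] = [parse_as(i, ITEM_MAP) for i in c.findall("item")]
        channels.append(channel)
    return channels
```

Feeding it a tiny RSS document yields a list of channel dicts, each carrying its items, mirroring the fluent chain above.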

We've done most of the heavy lifting, so let's tackle actually going out and fetching the feed. If you've worked with Silverlight and tried to fetch any type of information from another website, you'll be familiar with the strict cross-domain policies that Silverlight has. You can read the Microsoft article on Network Security Access Restrictions in Silverlight to get the full scoop.

In a nutshell, it means that most likely we won't be able to reach out directly from our Silverlight client to the RSS feed and bring the information back. How do you like that twist? Fortunately, there is a workaround.

Using a Handler to Bypass Cross Domain Concerns

Silverlight is allowed to pull information from the same server that it is hosted on. One simple way to bypass the issue of going directly from the client to the feed is to create a proxy. The server doesn't share the browser's cross-domain restrictions, so we can have Silverlight "ask" the server to pull the feed for us and deliver it back.

This, however, has its own issues. I can make a handler that takes a Uri and then returns the data, but this would expose a potential security risk in my web application. It would open the door for malicious attackers to hit other websites and make it look like it is coming from my server! It's sort of like the old "call-forwarding" techniques that old school phreakers would use to hide their illicit telecommunications activities.

I am going to use the handler approach, but make it a little more difficult to abuse. In a production system, I'd implement a full-blown encryption scheme. The issue with the Silverlight side is that it gets downloaded to the browser, so a would-be hacker can easily pull apart the code and dissect what's going on. You'll want a fairly strong algorithm to circumvent foul play. For this blog example, I'm going to implement a very light security piece that will thwart casual foul play by preventing someone from pulling a feed without knowing the algorithm. OK, so it's not very secret because I'm letting you in on the deal.

For the sake of illustration, I'm simply going to have the Silverlight client generate a GUID, then generate a hash of the GUID using SHA1. Again, for a more secure solution, I'd add a salt and keywords, etc., but this at least shows a way to secure the communication and should get you thinking about security concerns when you start exposing services for Silverlight to consume.

In order for this to work, both the client and the server need to implement the same algorithm. I am a big fan of DRY (Don't Repeat Yourself) so I don't want to duplicate the algorithm in both places. Instead, I'll build this simple piece on the Silverlight side:

public static class SecurityToken
{        
    public static string GenerateToken(string seed)
    {
        Encoder enc = Encoding.Unicode.GetEncoder();

        var unicodeText = new byte[seed.Length * 2];
        enc.GetBytes(seed.ToCharArray(), 0, seed.Length, unicodeText, 0, true);

        System.Security.Cryptography.SHA1 sha = new System.Security.Cryptography.SHA1Managed();

        byte[] result = sha.ComputeHash(unicodeText);

        var sb = new StringBuilder();

        for (int i = 0; i < result.Length; i++)
        {
            sb.Append(result[i].ToString());
        }
        return sb.ToString();

    }        
}

It's a simple method that takes a string and returns the hash as digits. On the web server side, I go into my project and use "add existing item."

Then, I navigate to the security class in my Silverlight project, highlight it, then choose "add as link."

This allows me to share the code between the two sides. I can make a change once and ensure both are always in sync. I call this "poor man's WCF RIA." OK, bad joke. Let's move along.
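For the curious, the shared algorithm is easy to reproduce in other languages. Here's a runnable Python sketch (illustrative only; the function names are my own) of the same steps — encode the seed as UTF-16 little-endian ("Unicode" in .NET), hash with SHA-1, then append each hash byte's decimal value — plus the comparison the server side will perform:

```python
# Illustrative Python analogue of the shared SecurityToken class.
import hashlib
import uuid

def generate_token(seed: str) -> str:
    unicode_text = seed.encode("utf-16-le")       # Encoding.Unicode in .NET
    digest = hashlib.sha1(unicode_text).digest()  # SHA1Managed.ComputeHash
    # each byte rendered as its decimal value, matching result[i].ToString()
    return "".join(str(b) for b in digest)

def is_valid_request(guid: str, key: str) -> bool:
    """What the server side does: recompute the token from the guid and compare."""
    return key == generate_token(guid)

# Client side: make a guid and its matching key to send along with the request
guid = str(uuid.uuid4())
key = generate_token(guid)
```

Note that, as the post says, this is a deterrent against casual abuse, not real security: the decimal bytes aren't even zero-padded.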

In the ClientBin folder, I add a handler. I do it here because it makes it easy for Silverlight to reference the handler using a relative Uri. Otherwise, I'll have to hack around with the application root and parse strings and do lots of work I just don't care to do. This way, I can reference it as:

...
var handlerUri = new Uri("RssHandler.ashx"); 
...

And be done with it. Silverlight will pass the guid, the key, and the desired Uri. My handler will generate the key from the guid, make sure it matches what was passed, and then proxy the feed. The code looks like this:

public class RssHandler : IHttpHandler
{
    public void ProcessRequest(HttpContext context)
    {
        context.Response.ContentType = "text/xml";

        string guid = context.Request.QueryString["guid"];
        string key = context.Request.QueryString["key"];
        string rssUri = context.Request.QueryString["rssuri"];

        if (guid != null && key != null && rssUri != null &&
            key.Equals(SecurityToken.GenerateToken(guid)))
        {
            using (var client = new WebClient())
            {
                context.Response.Write(client.DownloadString(rssUri));
            }
        }
    }

    public bool IsReusable
    {
        get
        {
            return false;
        }
    }
}

Now we need to build the service on the Silverlight side.

The Reader Service

First, let's start with the interface for the service. What we want is to pass in the URL of the RSS feed, and receive a list of channels with corresponding items that were fetched from the feed. That ends up looking like this:

public interface IRssReader
{
    void GetFeed(Uri feedUri, Action<List<Channel>> result);
}

Notice I'm making life easy for classes that use the service. They won't have to worry about the asynchronous nature of the service calls or how the result comes back, etc. All that is needed is to pass a method that will accept the list of channels when it becomes available.

As I mentioned earlier, we already did most of the heavy lifting with parsing out the feed, so wiring in the service is very straightforward. The implementation looks like this:

public class RssReader : IRssReader 
{
    private const string PROXY = "RssHandler.ashx?guid={0}&key={1}&rssuri={2}";

    public void GetFeed(Uri feedUri, Action<List<Channel>> result)
    {

        var client = new WebClient();

        Guid guid = Guid.NewGuid();
       
        client.DownloadStringCompleted += ClientDownloadStringCompleted;
        client.DownloadStringAsync(new Uri(
                                       string.Format(PROXY,
                                                     guid,
                                                     SecurityToken.GenerateToken(guid.ToString()),
                                                     // escape the feed address so URLs with their own query strings survive the round trip
                                                     Uri.EscapeDataString(feedUri.ToString())), UriKind.Relative), result);
    }

    static void ClientDownloadStringCompleted(object sender, DownloadStringCompletedEventArgs e)
    {
        var client = sender as WebClient;
        
        if (client != null)
        {
            client.DownloadStringCompleted -= ClientDownloadStringCompleted;
        }

        if (e.Error != null)
        {
            throw e.Error; 
        }

        var result = e.UserState as Action<List<Channel>>; 

        if (result != null)
        {
            result(e.Result.AsXDocument().AsRssChannelList()); 
        }
    }
}

Walking through the steps, we first get the Uri and the method to call when we're done. We wire up a web client and pass it the URL of our handler. This has parameters for the guid and our key, as well as the Uri we want to fetch. We also pass the method to call when we're done. The asynchronous calls all provide a generic object to pass state, so when the call returns we know which version of the call we're dealing with (nice of them, hey?)

When the call is completed, we unhook the event to avoid any dangling references. If there was an error, we throw it. (Yeah, part of our refactoring in future posts will be dealing with all of the errors I'm either throwing or not catching throughout the example). We take the state object and turn it back into an action, then pass the received value as the list of channels using our fluent extensions.

So now we can fetch the feed and load it into our models. We need a view model to host all of this!

The View Model

For the first pass on this, I'm going to wire everything myself. No fancy PRISM or MEF extensions, we can refactor those later. Let's get to showing something on the screen.

Our view model should be able to accept the URL of a feed, let the user click "load" and possibly "refresh" on the feed, then show it to us. We want to give them some feedback if the feed isn't a valid Uri, and we probably should keep the load button disabled until there is a valid feed. I ended up creating this for phase one:

public class RssViewModel : INotifyPropertyChanged
{
    private readonly IRssReader _reader; 

    public RssViewModel(IRssReader reader)
    {
        _reader = reader;
        Channels = new ObservableCollection<Channel>();
    }

    private Uri _feedUri;

    private string _uri; 

    public string FeedUri
    {
        get { return _uri; }
        set
        {
            // this will throw an error if invalid, for validation
            _feedUri = new Uri(value);
            _uri = value;
            RaisePropertyChanged("FeedUri");
            RaisePropertyChanged("CanRefresh");
        }
    }

    public bool CanRefresh { get { return _feedUri != null; }}

    public void RefreshCommand()
    {
        if (CanRefresh)
        {
            Channels.Clear();
            _reader.GetFeed(_feedUri, LoadChannels);
        }
    }

    public void LoadChannels(List<Channel> channelList)
    {
        channelList.ForEach(channel=>Channels.Add(channel));
    }

    public ObservableCollection<Channel> Channels { get; set; }

    protected void RaisePropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

The view model implements INotifyPropertyChanged to support binding. In lieu of using a converter for the Uri, I decided to deal with it as a string, then construct a Uri from it. If the conversion fails, the constructor throws an exception, so we can use the Silverlight validation engine to display the error. Only when the text converts to a valid Uri do we enable the refresh button. When it's clicked, we clear the list of channels and call the service, then populate the list on the return call.
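The exception-as-validation trick is not Silverlight-specific. Here is a small illustrative analogue in Python (the class and property names are my own), where the setter rejects text that doesn't parse as an absolute URL and leaves the last good value in place — the same signal that ValidatesOnExceptions turns into a binding error:

```python
# Illustrative analogue of FeedUri's throw-in-the-setter validation.
from urllib.parse import urlparse

class FeedModel:
    def __init__(self):
        self._uri = None

    @property
    def feed_uri(self):
        return self._uri

    @feed_uri.setter
    def feed_uri(self, value):
        # validate before assignment, like "new Uri(value)" throwing first
        parsed = urlparse(value)
        if not (parsed.scheme and parsed.netloc):
            raise ValueError("Invalid feed URI: %s" % value)
        self._uri = value

    @property
    def can_refresh(self):
        # only enable refresh once a valid URI has been accepted
        return self._uri is not None
```

Because the exception is raised before the field is assigned, a bad entry never clobbers a previously valid one, which is exactly how the C# setter behaves.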

A Basic Page

Now we can throw together some XAML to show it. In this first iteration I'm not trying to be stylish or fancy, so I tossed together something fast. I'm using grids in lieu of stack panels so things auto-size well, and an ItemsControl style that adds a vertical scroll bar. We simply wrap the items presenter in a scroll viewer using a style, like this:

<Style 
    TargetType="ItemsControl" 
    BasedOn="{StaticResource ItemsControlStyle}"
    x:Name="ItemsControlOuterStyle">
    <Setter Property="ItemsControl.Template">
        <Setter.Value>
            <ControlTemplate>
                <ScrollViewer x:Name="ScrollViewer" Padding="{TemplateBinding Padding}">
                    <ItemsPresenter />
                </ScrollViewer>
            </ControlTemplate>
        </Setter.Value>
    </Setter>            
</Style>

I know the high priests of MVVM are going to try to burn me for blasphemy with this next move, but we're still in the proof of concept, OK? So far we have no testing, no IoC, and no beautiful command extensions. Yes, you are about to see a plain old code-behind event:

<TextBlock
    Grid.Column="0"
    Margin="5"
    Text="RSS Feed URL:"/>
<TextBox 
    Grid.Column="1"
    Width="200"
    Margin="5"
    Text="{Binding Path=FeedUri,Mode=TwoWay,NotifyOnValidationError=True,ValidatesOnExceptions=True}"/>
<Button
    Grid.Column="2"
    Margin="5"
    IsEnabled="{Binding Path=CanRefresh}"
    Content="Refresh"
    Click="Button_Click"/>

Here you can enter the feed. When you tab out, it will validate it and show an error if it is invalid. If it passes, the button will be enabled.

This next chunk of XAML shows the items. We'll do more refactoring as well. I will introduce the concept of formatters that will parse things like hyperlinks and Twitter hash tags to provide more interactive functionality, but for now let's just get it out there:

<ItemsControl
    Style="{StaticResource ItemsControlOuterStyle}"
    Grid.Row="1" 
    ItemsSource="{Binding Channels}">
    <ItemsControl.ItemTemplate>
        <DataTemplate>
            <Grid 
                Margin="5"
                HorizontalAlignment="Stretch"
                VerticalAlignment="Top">
                <Grid.RowDefinitions>
                    <RowDefinition/>
                    <RowDefinition/>
                    <RowDefinition/>
                </Grid.RowDefinitions>
                    <HyperlinkButton 
                        Grid.Row="0"
                        Margin="5"
                        FontWeight="Bold"
                        Content="{Binding Title}"
                        NavigateUri="{Binding Link}"/>
                    <TextBlock 
                        Grid.Row="1"
                        Margin="5"
                        TextWrapping="Wrap"
                        Text="{Binding Description}"/>
                <ItemsControl 
                    Grid.Row="2"
                    Style="{StaticResource ItemsControlStyle}"
                    ItemsSource="{Binding Items}">
                    <ItemsControl.ItemTemplate>
                        <DataTemplate>
                            <Grid HorizontalAlignment="Stretch"
                                  VerticalAlignment="Top"
                                  Margin="5">
                                <Grid.RowDefinitions>
                                    <RowDefinition/>
                                    <RowDefinition/>
                                </Grid.RowDefinitions>
                                <Grid.ColumnDefinitions>
                                    <ColumnDefinition Width="Auto"/>
                                    <ColumnDefinition Width="Auto"/>
                                </Grid.ColumnDefinitions>
                                <TextBlock 
                                    Margin="5"
                                    Grid.Row="0"
                                    Grid.Column="0"
                                    Text="{Binding PublishDate}"/>
                                <HyperlinkButton
                                    Margin="5"
                                    Grid.Row="0"
                                    Grid.Column="1" 
                                    Content="{Binding Title}"
                                    NavigateUri="{Binding Link}"/>
                                <TextBlock 
                                    Grid.Row="1"
                                    Grid.Column="0"
                                    Grid.ColumnSpan="2"
                                    TextWrapping="Wrap"
                                    Text="{Binding Description}"/>
                            </Grid>
                        </DataTemplate>
                    </ItemsControl.ItemTemplate>
                </ItemsControl>
            </Grid>                   
        </DataTemplate>
    </ItemsControl.ItemTemplate>
</ItemsControl>

Whew! Now we just need to put in a little bit of code-behind. We'll enrage the dependency injection purists by creating hard-coded instances and further stoke the ire of MVVM evangelists by wiring in a code-behind handler for the button click:

public partial class MainPage
{
    public MainPage()
    {
        InitializeComponent();
        LayoutRoot.DataContext = new RssViewModel(new RssReader());
    }

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        ((RssViewModel)LayoutRoot.DataContext).RefreshCommand();
    }
}

Upon publishing and hitting the service, I was able to pull up my Twitter feed:

That's it. Now we have a proof of concept, and it worked. We have a lot more to do ... making it expandable to handle different types of feeds including HTML, styling it, perhaps looking at using multiple feeds, tests, and the lot (yes, I know, I know, the tests should have come earlier). I hope you enjoyed this first look into my process of creating a basic reader and we'll continue to evolve the project with time!

Download the source for this project

Jeremy Likness

Tuesday, February 2, 2010

Using Moq with Silverlight for Advanced Unit Tests

Moq is a library that assists with unit testing by providing easily mocked objects that implement interfaces and abstract classes. You can learn more about Moq on their website. There is a distribution for Silverlight, and in this post I'll focus on ways to use Moq for more involved testing scenarios.

Download the source code for the example project

I started with the Simple Dialog Service in Silverlight and extended the example a bit. In the post, I promised that abstracting the dialog function behind an interface would facilitate unit testing. In this post, I'll deliver on the promise.

Adhering to the pattern I described in Simplifying Asynchronous Calls in Silverlight using Action, I created the interface for a "Thesaurus" service. In this contrived example, we are showing a list of words. You can highlight a word and see the related synonyms in another list. You may also delete the words from the list.

Because we assume that in the real world there are thousands upon thousands of available words and synonyms, the view model won't try to hold a complex object graph. I won't have a special "word" entity that has a nested list of synonyms. Instead, I'll maintain two lists: one of words, and one of the current synonyms. The service will be called when a word is selected to retrieve the list of synonyms for that word.

Enough set up ... let's take a refresher and look at the dialog service interface, along with the new thesaurus interface:

public interface IDialogService
{
   void ShowDialog(string title, string message, bool allowCancel, Action<bool> response);
}

public interface IThesaurus
{
    void GetSynonyms(string word, Action<List<string>> result);

    void ListWords(Action<List<string>> result);

    void Delete(string word);
}

As you can see, the interface is straightforward. In the sample project, I've implemented the dialog using the same code as my previous post (which involves a ChildWindow). The "implementation" of the thesaurus service actually only provides a few words and synonyms but is enough to run the application and see how it would function in the real world.

To run the actual "live" code, right-click on SimpleDialog.Web and choose "Set as startup project." Then, right-click on SimpleDialogTestPage.aspx and choose "Set as start page." Hit F5 and you'll be in business. In this screenshot, I've selected a word that we run into quite often in the development world, and you can see that it has synonyms listed for it:

The view model is straightforward. It expects an implementation of the dialog service and the thesaurus service, then manages well on its own:

public class ViewModel : INotifyPropertyChanged
{
    private readonly IThesaurus _thesaurus;

    private readonly IDialogService _dialog;

    public ViewModel()
    {
        Words = new ObservableCollection<string>();
        Synonyms = new ObservableCollection<string>();
       
        DeleteCommand = new DelegateCommand<object>( o=>ConfirmDelete(), o=>!string.IsNullOrEmpty(CurrentWord));
    }

   public ViewModel(IThesaurus thesaurus, IDialogService dialogService) : this()
    {
        _thesaurus = thesaurus;
        _dialog = dialogService;

        _thesaurus.ListWords(PutWords); 
    }

    public DelegateCommand<object> DeleteCommand { get; set; }

    public ObservableCollection<string> Words { get; set; }

    public void PutWords(List<string> words)
    {
        Words.Clear();
        foreach(string word in words)
        {
            Words.Add(word);
        }

        // default first word
        if (Words.Count > 0)
        {
            CurrentWord = Words[0]; 
        }
    }

    public ObservableCollection<string> Synonyms { get; set; }

    public void PutSynonyms(List<string> synonyms)
    {
        Synonyms.Clear();
        foreach (string synonym in synonyms)
        {
            Synonyms.Add(synonym);
        }
    }

    public void ConfirmDelete()
    {
        _dialog.ShowDialog("Confirm Delete", string.Format("Do you really want to delete {0}?", CurrentWord),
            true, b =>
                      {
                          if (b)
                          {
                              _thesaurus.Delete(CurrentWord);
                              _thesaurus.ListWords(PutWords);
                          }
                      });
    }

    private string _currentWord = string.Empty; 

    public string CurrentWord
    {
        get { return _currentWord; }
        set
        {
            _currentWord = value;
            _thesaurus.GetSynonyms(value, PutSynonyms);
            OnPropertyChanged("CurrentWord");
            DeleteCommand.RaiseCanExecuteChanged();
        }
    }

    protected void OnPropertyChanged(string propertyName)
    {
        PropertyChangedEventHandler handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }

    public event PropertyChangedEventHandler PropertyChanged;
}

Let's walk through it briefly. The constructor initializes the lists and the delete command, which requires that there is a CurrentWord selected. When the services are passed in, it requests the list of words and asks the service to call PutWords with the list. This will load the results and set a default "current word" based on the first item in the list.

The current word is interesting, because when it is set, it goes out to request a list of synonyms, which are then populated into the synonym list. When the view model is constructed and the list of words is received, it will automatically fire a request for the synonyms by setting the current word. The current word also updates the DeleteCommand so it can be enabled or disabled based on whether or not a word is currently selected.

The delete command itself calls the dialog service for confirmation, and only if the user confirms will it then call the service, delete the word, and then request a new list of words.

This view model is written so that I need almost zero code behind in my user control. The XAML looks like this, and is driven completely by data-binding:

<UserControl x:Class="SimpleDialog.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
    xmlns:cal="clr-namespace:Microsoft.Practices.Composite.Presentation.Commands;assembly=Microsoft.Practices.Composite.Presentation"
    mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480">
  <Grid x:Name="LayoutRoot" 
        HorizontalAlignment="Center"
        VerticalAlignment="Center">
        <Grid.RowDefinitions>
            <RowDefinition/>
            <RowDefinition/>
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition/>
            <ColumnDefinition/>
        </Grid.ColumnDefinitions> 
        <ListBox 
            HorizontalAlignment="Center"
            VerticalAlignment="Center"
            ItemsSource="{Binding Words}"
            SelectedItem="{Binding CurrentWord, Mode=TwoWay}"
            SelectionMode="Single"
            Grid.Row="0"
            Grid.Column="0"/>
        <ListBox
            HorizontalAlignment="Center"
            VerticalAlignment="Center"            
            Width="Auto"
            ItemsSource="{Binding Synonyms}"
            Grid.Row="0"
            Grid.Column="1"/>
        <Button 
            HorizontalAlignment="Center"
            VerticalAlignment="Center"            
            Width="Auto"
            Content="Delete"
            Grid.Row="1"
            Grid.Column="0"
            cal:Click.Command="{Binding DeleteCommand}"/>
  </Grid>
</UserControl>

The key here is the bindings from the delete button to the delete command, and the list box of words to the "current word." This allows all of the other updates to cascade accordingly. I said I had "almost no" code behind. This is because I'm traveling light and decided not to drag an IoC container into the example. Therefore, in the code behind, I manually wire up the view model like this:

public MainPage()
{
    InitializeComponent();
    DataContext = new ViewModel(new Thesaurus(), new DialogService());
}  

Now that we've set the stage, the real focus of this post is how to test what we've just created. I have a view model that does quite a bit, and want to make sure it is behaving. To make this happen, I'll do two things. First, I'll add the Silverlight Unit Testing Framework. I first discussed it in my post about unit testing views AND view models. Next, I'll navigate to the Moq page and download the Silverlight bits (all of this is included in the reference project).

My test class is "view model tests." The first thing I need to do is set up some support for the tests. We want a pretend list of words and synonyms, and mocks for the services the view model depends on. Our class starts out like this:

[TestClass]
public class ViewModelTests
{
    private readonly List<string> _words = new List<string> {"test", "mock"};

    private readonly List<string> _synonyms = new List<string> {"evaluation", "experiment"};

    private readonly List<String> _mockSynonyms = new List<string> {"counterfeit", "ersatz", "pseudo", "substitute"};

    private Mock<IDialogService> _dialogMock;

    private Mock<IThesaurus> _thesaurusMock;

    private ViewModel _target;
}

We created two words with synonym lists. Mocking the services is as simple as declaring Mock<T>. There is more to it, which we'll cover, but this gets us started. I like to always keep a reference to the object I am testing in a field named _target to make it clear what the test is exercising.

Now we will need to set up the view model for all of our tests. Just like with the standard unit testing framework, we can flag a method as TestInitialize to do set up before each test is run. Take a look at my setup:

[TestInitialize]
public void InitializeTests()
{
    _dialogMock = new Mock<IDialogService>();

    _thesaurusMock = new Mock<IThesaurus>();

    _thesaurusMock.Setup(
        t => t.ListWords(It.IsAny<Action<List<string>>>()))
        .Callback((Action<List<string>> action) => action(_words));

    _thesaurusMock.Setup(
        t => t.GetSynonyms(It.Is((string s) => s.Equals(_words[0])), It.IsAny<Action<List<string>>>()))
        .Callback((string word, Action<List<string>> action) => action(_synonyms));

    _thesaurusMock.Setup(
        t => t.GetSynonyms(It.Is((string s) => s.Equals(_words[1])), It.IsAny<Action<List<string>>>()))
        .Callback((string word, Action<List<string>> action) => action(_mockSynonyms));

    _target = new ViewModel(_thesaurusMock.Object, _dialogMock.Object);
}

We create a new mock for every test so the internal counters are reset. Then, it gets a little interesting. The Setup method allows you to describe an expected call and provide results. Many methods return values, and there is a Returns method for supplying what those values should be. In our case, however, the methods do not return anything because they invoke the Action when complete. Therefore, we set up a callback so that we can perform an action when the method is fired.

To see how that works, take a look at the first setup. We are setting up our mock for the thesaurus service. The lambda expression indicates we expect ListWords to be called, and the parameter can be any action that acts on a list of strings. Now the important part: the callback. The callback has the same signature as the method. When the method is called, Moq will fire the callback and send the parameters. In our case, we take the action that was passed in, and call it with our list of words. This will mock the process of the view model requesting the word list, then having the PutWords method called with the actual list.

Next, we set up the GetSynonyms method twice. This is because we want to pass a different result based on the word that the method is called with. Instead of IsAny, we'll use Is with a lambda that determines what will pass. In this way, we can compare specific values or even ranges. If the passed value is the first word, we call the action with the synonym list. If the passed value is the second word ("mock"), we call the action with the list of mocks. If any other value is passed in, we simply don't care so we haven't set that up.
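If you want to see the same callback-driven setup outside of Moq, here is an illustrative sketch using Python's standard unittest.mock (method and variable names are my own), where side_effect plays the role of Moq's Callback: when the fake ListWords/GetSynonyms is invoked, it calls the action the caller passed in with canned data.

```python
# Illustrative analogue of the Moq Setup/Callback/Verify pattern.
from unittest.mock import Mock

words = ["test", "mock"]
synonyms = {"test": ["evaluation", "experiment"],
            "mock": ["counterfeit", "ersatz", "pseudo", "substitute"]}

thesaurus = Mock()
# like Setup(...).Callback(...): invoke the caller's action with canned data
thesaurus.list_words.side_effect = lambda action: action(words)
thesaurus.get_synonyms.side_effect = lambda word, action: action(synonyms.get(word, []))

# A view model would call these and receive the canned results via its callback:
received = []
thesaurus.list_words(received.extend)

# And, like Moq's Verify with Times.Exactly(1), we can inspect the mock afterward:
thesaurus.list_words.assert_called_once()
```

The one-setup-per-expected-argument pattern shows up here too: the dictionary lookup stands in for the two It.Is setups keyed on which word was requested.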

The first test we'll make is for setup. In other words, after the view model is constructed, we simply want to test that everything was set up appropriately. The view model promises to provide us with lists and commands, so we need to ensure these will be available when we bind to it.

[TestMethod]
public void TestConstructor()
{
    // make sure it made the call
    _thesaurusMock.Verify(t => t.ListWords(_target.PutWords), Times.Exactly(1),
                          "List words method was not called.");
    _thesaurusMock.Verify(
        t => t.GetSynonyms(It.Is((string s) => s.Equals(_words[0])), It.IsAny<Action<List<string>>>()),
        Times.Exactly(1),
        "Synonym method never called for first word.");
    Assert.AreEqual(_target.CurrentWord, _words[0], "View model did not default target word.");
    Assert.IsNotNull(_target.DeleteCommand, "Delete command was not initialized.");
    Assert.IsTrue(_target.DeleteCommand.CanExecute(null),
                  "Delete command is disabled when selected word exists.");
    CollectionAssert.AreEquivalent(_words, _target.Words, "Words do not match.");
    CollectionAssert.AreEquivalent(_synonyms, _target.Synonyms, "Synonyms do not match.");
}

The Setup calls "rigged" what we expect to happen when a method is called. If these were the only actions we took, this would really be a "stub" (something that exists just to make the test run) rather than a mock (something that changes and can be inspected after the test to determine the outcome). With Moq, we use Verify to inspect the state of the mocked object. First, we verify that the word list method was called exactly once. Next, we verify that the synonym method was called specifically with the first word. Finally, we compare values, lists, and commands to make sure they are as we expect. For example, the view model should already have populated a list of synonyms matching the first word, so we use CollectionAssert to make sure the lists are equivalent.

Now we can write a slightly more in-depth test. Let's test selection. When we select the second word by setting the CurrentWord property, the view model should call the thesaurus service to retrieve the list of synonyms for "mock." The test looks like this:

[TestMethod]
public void TestSelection()
{
    _target.CurrentWord = _words[1]; 

    CollectionAssert.AreEquivalent(_mockSynonyms, _target.Synonyms, "Synonyms did not update.");
}

Technically, I could have verified the call as well, and if you feel that is necessary, then by all means include that verification. In this case, I know the synonym list won't update itself, so if it ends up changing to the list I'm expecting, I'm confident the call took place. Therefore, I simply test that the view model's collection of synonyms was updated to the mock synonyms.

What about a more complicated test? Our delete command requires confirmation, so the user doesn't accidentally delete a word. Therefore, we need to set up the dialog service to return the result we want. We'll have two tests: one that ensures the delete service is called when the user confirms, and another that ensures it is NOT called when the user cancels. The confirm test looks like this:

[TestMethod]
public void TestDeleteConfirm()
{
    _thesaurusMock.Setup(t => t.Delete(It.IsAny<string>())); 

    _dialogMock.Setup(
        d => d.ShowDialog(It.IsAny<string>(),
                          It.IsAny<string>(),
                          It.IsAny<bool>(),
                          It.IsAny<Action<bool>>()))
        .Callback(
        (string title, string message, bool cancel, Action<bool> action) => action(true));

    _target.DeleteCommand.Execute(null);

    _thesaurusMock.Verify(
        t => t.Delete(It.Is((string s) => s.Equals(_words[0]))),
        Times.Exactly(1),
        "The delete method was not called on the service.");
}

First we set up the service to expect the delete call. Next, we set up the dialog. We don't care about the parameters passed in. All we care about is that we call the action with "true" to simulate a user confirming the dialog. Once this is set up, we trigger the delete command, and verify that the delete service was called for the appropriate word.

For the cancel case, we pass false to the action and use Times.Never() in the verification to ensure the delete service was not called.
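That cancel test, mirroring the structure of the confirm test above, might look like this (the test name is my own; everything else follows the confirm test):

```csharp
[TestMethod]
public void TestDeleteCancel()
{
    _thesaurusMock.Setup(t => t.Delete(It.IsAny<string>()));

    // Simulate the user pressing "Cancel" by invoking the action with false.
    _dialogMock.Setup(
        d => d.ShowDialog(It.IsAny<string>(),
                          It.IsAny<string>(),
                          It.IsAny<bool>(),
                          It.IsAny<Action<bool>>()))
        .Callback(
        (string title, string message, bool cancel, Action<bool> action) => action(false));

    _target.DeleteCommand.Execute(null);

    // The user cancelled, so the service must never be asked to delete.
    _thesaurusMock.Verify(
        t => t.Delete(It.IsAny<string>()),
        Times.Never(),
        "The delete method should not have been called on the service.");
}
```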

Finally, because our delete command is only enabled when a current word is selected, we need to test that it is disabled with no words:

[TestMethod]
public void TestDeleteEnabled()
{
    _target.CurrentWord = string.Empty; 

    Assert.IsFalse(
        _target.DeleteCommand.CanExecute(null),
        "Delete command did not disable with empty word selection.");
}

We already tested for the positive case in the constructor. Now, we simply clear the selection and confirm that the command is no longer allowed to execute.

If you ran the application earlier, you simply need to set the test project as the startup project to run the unit tests. They will run directly in the browser.

As you can see, using the MVVM pattern appropriately ensures straightforward XAML (mostly binding-driven without cluttered code-behind) and very granular testing. Tools like Moq can make it much easier to test complicated scenarios by allowing you to easily set up dependencies, mock returns, and ultimately validate that the view model is behaving the way it is expected to.

Download the source code for the example project

Jeremy Likness