Friday, January 29, 2010

Programmatically Accessing the Live Smooth Streaming API

Live Smooth Streaming is a Microsoft technology that allows you to take a live, encoded, incoming video stream and rebroadcast it using Smooth Streaming. Smooth Streaming delivers the video over HTTP in short segments encoded at multiple bitrates. The stream can then be played with a Silverlight-based client such as the built-in MediaElement or a more advanced player like the Silverlight Media Framework.

The incoming streams could come from your web cam, from third-party sources, or even from a file encoded on disk that is being webcast at a particular point in time. Live Smooth Streaming allows the client to adjust to the current network load and gracefully degrade quality when the network slows, avoiding a pause in playback that would force the user to wait for a new buffer. The user can also rewind the "live" content, and there are options to archive it for on-demand playback at a later date.

A content network with many live streams might have multiple servers and cabinets in its data centers to serve content. In that scenario, programmatic management of endpoints becomes essential to scaling the operation effectively. If you find yourself needing to automate some aspects of starting, stopping, shutting down, or creating publishing points for live streaming, you're in luck!

Let's start with a few entities to help us handle publishing points. A publishing point has a stream state (the status of the encoding stream it is listening to), an archive state (the status of the stream it is archiving to), and a fragment state (the status of the fragment stream that enables insertion of other content into the stream). These various nodes on the publishing point share a common set of states:


public enum State
{
    Disabled,
    Started,
    Stopped,
    Unknown
}

These are fairly self-explanatory. "Unknown" simply means the publishing point is in a state, such as stopped or shut down, in which it doesn't make sense for the node to have a valid status.

The publishing point itself has a more involved list of states. For an excellent overview, read this article on creating and managing publishing points. Here are the states:


public enum PublishingPointState
{
    Idle,
    Starting,
    Started,
    Stopping,
    Stopped,
    ShuttingDown,
    Error,
    Unknown
}

Again, these are fairly self-explanatory. Idle means the publishing point has been created but no action has been taken against it, or it has been shut down. Starting indicates it is ready to receive a feed, and Started means it is actively processing one.

Let's go ahead and build an entity to represent the publishing point:


public class PublishingPoint
{        
    public State StreamState { get; set; }

    public State ArchiveState { get; set; }

    public State FragmentState { get; set; }

    public PublishingPointState PubState { get; set; }

    public string SiteName { get; set; }

    public string Path { get; set; }

    public string Name { get; set; }
}

The first few properties are the various states of the nodes and of the publishing point itself. The SiteName is the web site it is configured on; this will usually be "Default Web Site." The Path is the virtual path (not the physical path) the publishing point resides in (also known as the application), and the Name is the actual name of the publishing point.

While there is no strongly typed API to interface directly with the Live Smooth Streaming configuration as of this writing, we can easily reach it through the new administration classes. With IIS 7.0 and Smooth Streaming installed, you should be able to find Microsoft.Web.Administration.dll in your %windir%\system32\inetsrv directory. Reference this DLL, and you can programmatically access web sites. You can also interact with the configuration through a reflection-style interface that we'll cover here.
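For example, enumerating the configured web sites takes only a few lines. This is a quick sketch assuming Microsoft.Web.Administration.dll is referenced (the ServerManager and Site types live in the Microsoft.Web.Administration namespace) and the code runs with administrative rights:

```csharp
using System.Diagnostics;
using Microsoft.Web.Administration;

public static class SiteLister
{
    public static void PrintSites()
    {
        using (var serverManager = new ServerManager())
        {
            foreach (Site site in serverManager.Sites)
            {
                // Site.State reports whether the site is started or stopped
                Debug.Print("Site: {0} (state: {1})", site.Name, site.State);
            }
        }
    }
}
```

On a default installation this prints at least "Default Web Site," which is the site most publishing points will live under.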

Get the List of Publishing Points

Our first task is to get a list of publishing points.


private const string LIVESTREAMINGSECTION = "system.webServer/media/liveStreaming";
private const string METHODGETPUBPOINTS = "GetPublishingPoints";
private const string ATTR_SITENAME = "siteName";
private const string ATTR_VIRTUALPATH = "virtualPath";
private const string ATTR_NAME = "name";
private const string ATTR_ARCHIVES = "archives";
private const string ATTR_FRAGMENTS = "fragments";
private const string ATTR_STREAMS = "streams";
private const string ATTR_STATE = "state";
        
public List<PublishingPoint> GetPublishingPoints()
{
    var retVal = new List<PublishingPoint>();

    using (var serverManager = new ServerManager())
    {
        Configuration appHost = serverManager.GetApplicationHostConfiguration();

        try
        {
            ConfigurationSection liveStreamingConfig = appHost.GetSection(LIVESTREAMINGSECTION);

            foreach (Site site in serverManager.Sites)
            {
                foreach (Application application in site.Applications)
                {
                    try
                    {
                        ConfigurationMethodInstance instance =
                            liveStreamingConfig.Methods[METHODGETPUBPOINTS].CreateInstance();

                        instance.Input[ATTR_SITENAME] = site.Name;
                        instance.Input[ATTR_VIRTUALPATH] = application.Path;

                        instance.Execute();

                        ConfigurationElement collection = instance.Output.GetCollection();

                        foreach (var item in collection.GetCollection())
                        {
                            retVal.Add(new PublishingPoint
                                           {
                                               SiteName = site.Name,
                                               Path = application.Path,
                                               Name = item.Attributes[ATTR_NAME].Value.ToString(),
                                               ArchiveState = (State) item.Attributes[ATTR_ARCHIVES].Value,
                                               FragmentState = (State) item.Attributes[ATTR_FRAGMENTS].Value,
                                               StreamState = (State) item.Attributes[ATTR_STREAMS].Value,
                                               PubState =
                                                   (PublishingPointState) item.Attributes[ATTR_STATE].Value
                                           });
                        }
                    }
                    catch (COMException ce)
                    {
                        Debug.Print(ce.Message);
                    }
                }
            }
        }
        catch (COMException ce)
        {
            Debug.Print(ce.Message);
        }
    }

    return retVal;
}

The ServerManager is our hook into administration. All of the configuration for Live Smooth Streaming lives in applicationHost.config, which we fetch at the beginning of the method by grabbing the configuration and then navigating to the system.webServer/media/liveStreaming section.

The method to fetch publishing points requires a web site and an application (virtual directory). We iterate through these and use a ConfigurationMethodInstance to ask for the list. The new IIS configuration extensions allow methods and API hook points to be defined in the configuration itself, with a set of inputs and outputs; you can read more about the ConfigurationMethod class in the IIS documentation. We pass in the web site name and application path, execute the method, and receive the list of publishing points. Notice how we get a generic collection of ConfigurationElement and reference the attributes to build our more strongly typed PublishingPoint object.

Of course, there are multiple places where exceptions may be thrown. In the example, I capture the COM errors and just print them to the debug output. You can interrogate the error code and provide more specific feedback as to why the call failed.
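For instance, a COMException carries the HRESULT of the failed call in its ErrorCode property, which you could log or switch on. This helper is a sketch (the method name is my own; which HRESULT values matter depends on your environment):

```csharp
using System.Diagnostics;
using System.Runtime.InteropServices;

public static class ComErrorLogger
{
    public static void LogComError(COMException ce)
    {
        // ce.ErrorCode carries the HRESULT from the underlying configuration call;
        // format it in hex so it can be looked up easily
        Debug.Print("Live streaming call failed (HRESULT 0x{0:X8}): {1}",
                    ce.ErrorCode, ce.Message);
    }
}
```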

Starting, Stopping, and Shutting Down Publishing Points

Now that we have a publishing point, we can stop, start, and shut it down. This is also done through a configuration method. The method names for the action are:


public enum PublishingPointCommand
{
    Start,
    Stop,
    Shutdown
}

The enum value names match the configuration method names exactly, so we can call ToString() on the enum to get the command to pass. Issuing a command against a specific publishing point now looks like this:


private static void _IssueCommand(PublishingPoint publishingPoint, PublishingPointCommand command)
{
    using (var serverManager = new ServerManager())
    {
        Configuration appHost = serverManager.GetApplicationHostConfiguration();

        ConfigurationSection liveStreamingConfig = appHost.GetSection(LIVESTREAMINGSECTION);

        if (liveStreamingConfig == null)
        {
            throw new InvalidOperationException("Couldn't get to the live streaming section.");
        }

        ConfigurationMethodInstance instance =
            liveStreamingConfig.Methods[METHODGETPUBPOINTS].CreateInstance();

        instance.Input[ATTR_SITENAME] = publishingPoint.SiteName;
        instance.Input[ATTR_VIRTUALPATH] = publishingPoint.Path;

        instance.Execute();

        // Gets the PublishingPointCollection associated with the method output
        ConfigurationElement collection = instance.Output.GetCollection();

        foreach (var item in collection.GetCollection())
        {
            if (item.Attributes[ATTR_NAME].Value.ToString().Equals(publishingPoint.Name))
            {
                var method = item.Methods[command.ToString()];
                var methodInstance = method.CreateInstance();
                methodInstance.Execute();
                break;
            }
        }
    }
}

Of course, you can probably see a place where it makes sense to refactor some code, because we're spinning through the collection again. This time, instead of turning each element into a concrete type, we take the specific item in the collection for the publishing point we're targeting, create an instance of the command method, and fire it off. Simple as that!
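One possible refactoring extracts the lookup so both methods can share it. This is a sketch using the same constants as above; the helper name _FindPublishingPointElement is my own:

```csharp
private static ConfigurationElement _FindPublishingPointElement(
    Configuration appHost, PublishingPoint publishingPoint)
{
    ConfigurationSection liveStreamingConfig = appHost.GetSection(LIVESTREAMINGSECTION);

    ConfigurationMethodInstance instance =
        liveStreamingConfig.Methods[METHODGETPUBPOINTS].CreateInstance();

    instance.Input[ATTR_SITENAME] = publishingPoint.SiteName;
    instance.Input[ATTR_VIRTUALPATH] = publishingPoint.Path;
    instance.Execute();

    // Scan the output collection for the matching publishing point name
    foreach (ConfigurationElement item in instance.Output.GetCollection().GetCollection())
    {
        if (item.Attributes[ATTR_NAME].Value.ToString().Equals(publishingPoint.Name))
        {
            return item;
        }
    }

    return null;
}
```

With this in place, _IssueCommand shrinks to a null check plus `item.Methods[command.ToString()].CreateInstance().Execute()`, and GetPublishingPoints can reuse the same execution pattern.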

Creating the Publishing Point

So now we can manipulate and gather information about a publishing point, but what about creating one? If you are frustrated trying to find where in the API you can call a "Create" method, take a deep breath and step away from the configuration collection. It's not there. To create a publishing point, we simply drop a file on the system! That's right. If you create a new publishing point in IIS Manager, you can navigate to the file system and see an .isml file. Open it in Notepad and you'll find the format is straightforward:

<?xml version="1.0" encoding="utf-8"?>
   <smil xmlns="http://www.w3.org/2001/SMIL20/Language">
      <head>
         <meta name="title" content="Pub Point Title" />
         <meta name="module" content="liveSmoothStreaming" />
         <meta name="sourceType" content="Push" /><!-- or Pull -->
         <meta name="publishing" content="Fragments;Streams;Archives" />
         <meta name="estimatedTime" content="00:11:22" />
         <meta name="lookaheadChunks" content="2" />
         <meta name="manifestWindowLength" content="0" />
         <meta name="startOnFirstRequest" content="False" />
         <meta name="archiveSegmentLength" content="0" />
      </head>
   <body/>
</smil>

That's it ... very straightforward. Give it a title, make it push or pull, and estimate the time. Of course, you can manipulate some of the more advanced features as well, but for a basic interface, this is all we need. We'll store the XML in a constant called ISML_TEMPLATE, with format placeholders ({0}, {1}, {2}) substituted for the title, sourceType, and estimatedTime values. When the user requests a new publishing point, we'll find the right virtual directory, get the physical path, and then write out the file:

public enum LiveSourceType
{
    Push,
    Pull
}

public bool CreatePublishingPoint(PublishingPoint publishingPoint, string title, string duration, LiveSourceType type)
{

    bool retVal = false;

    using (var manager = new ServerManager())
    {
        Site site = manager.Sites[publishingPoint.SiteName];

        Application application = site.Applications[publishingPoint.Path];

        string template =
            string.Format(ISML_TEMPLATE, title, type, duration ?? string.Empty);

        try
        {
            string path = string.Format(@"{0}\{1}", application.VirtualDirectories[0].PhysicalPath,
                                        publishingPoint.Name);

            File.WriteAllText(path, template);

            retVal = true;
        }
        catch (Exception ex)
        {
            Debug.Print(ex.Message);
        }
    }

    return retVal;
}

This is kept simple to illustrate the point. In production, you'll need to test for nulls (i.e., if the site doesn't exist, a null template is passed in, etc.) and may want to validate that the publishing point was created successfully by enumerating the list after creation. I'd also recommend building the XML with an XmlDocument or XDocument for more advanced titles ... in this example, using string.Format is fast, but we must make sure the title is clean or escape it before making it part of the source XML.
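A sketch of the XDocument approach, which escapes the title for us (the SMIL namespace and meta names come from the template above; the method name is my own, and only a subset of the meta entries is shown):

```csharp
using System.Xml.Linq;

public static class IsmlBuilder
{
    public static string BuildIsml(string title, string sourceType, string duration)
    {
        XNamespace smil = "http://www.w3.org/2001/SMIL20/Language";

        var doc = new XDocument(
            new XElement(smil + "smil",
                new XElement(smil + "head",
                    new XElement(smil + "meta",
                        new XAttribute("name", "title"),
                        new XAttribute("content", title)), // XML-escaped automatically
                    new XElement(smil + "meta",
                        new XAttribute("name", "module"),
                        new XAttribute("content", "liveSmoothStreaming")),
                    new XElement(smil + "meta",
                        new XAttribute("name", "sourceType"),
                        new XAttribute("content", sourceType)),
                    new XElement(smil + "meta",
                        new XAttribute("name", "estimatedTime"),
                        new XAttribute("content", duration ?? string.Empty))),
                new XElement(smil + "body")));

        // Note: ToString() omits the <?xml ...?> declaration;
        // use doc.Save(...) if the declaration is required
        return doc.ToString();
    }
}
```

A title like `My "Live" Show & More` comes out properly entity-encoded, which the string.Format approach would not guarantee.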

That's it ... in a nutshell, you now have all of the bits and pieces needed to interface programmatically with the Live Smooth Streaming APIs. I have no sample project because this is intended to be a guideline for your own explorations and use of the system. I would also suggest the excellent blog by Ezequiel Jadib which contains loads of information about smooth streaming.

Jeremy Likness

Wednesday, January 27, 2010

Introduction to Debugging Silverlight Applications with WinDbg

I've had a few users ask me about finding memory leaks and understanding what happens with references in Silverlight. One very powerful tool to use when debugging Silverlight applications is the Windows Debugging Tools. You can download the 32-bit (x86) version side-by-side with the 64-bit (x64) version.

Both WPF and Silverlight ship with an extension DLL you can load called SOS. This extension contains many powerful commands.

In the video, Silverlight Debugging with WinDbg (30 minutes long), I walk through a debugging scenario using my Fractal Koch Snowflakes application.

I show how to dump the heap, walk object references, see which references are targeted for garbage collection and which ones still have root references, examine event handlers and delegates to trace back to the target methods, and unroll lists and arrays. It's sort of a "firehose" approach, but with a real-world example that I hope helps drive home how powerful it can be.

Watch the video by clicking here (30 minutes).

Jeremy Likness

Saturday, January 23, 2010

Simple Dialog Service in Silverlight

I noticed on the forums there are a lot of users not comfortable with asynchronous programming who struggle a bit in Silverlight with getting their arms around the concept of a dialog box. In other environments, you can simply shoot out a dialog, wait for the response, and continue. In Silverlight, of course, the action is asynchronous. I would argue it should be this way most of the time.

The problem is that many people tend to take the approach of trying to force the process into a synchronous one, instead of changing the way they think and approach the application. There is a reason why processes are asynchronous in Silverlight. There is one main UI thread, and a blocking process would block the entire thread and effectively freeze the Silverlight application. Having the process asynchronous allows you to continue to render graphics elements, perform actions in the background, even display a soothing animation while you await the user's response.

I spoke to a more highly decoupled approach in a post a while ago that was more an experiment with the event aggregator: Decoupled Child Window Dialogs with Silverlight and PRISM. Here, I want to show the more straightforward approach.

The first step is to choose what the dialog will be displayed with. In Silverlight, the ChildWindow makes perfect sense because it is a modal dialog that appears, waits for the user's response, and then reports the result. We'll create a new child window and call it Dialog.xaml. It looks like this:

<controls:ChildWindow x:Class="SimpleDialog.Dialog"
           xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
           xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
           xmlns:controls="clr-namespace:System.Windows.Controls;assembly=System.Windows.Controls"
           Width="400" Height="300" 
           Title="{Binding Title}">
    <Grid x:Name="LayoutRoot" Margin="2">
        <Grid.RowDefinitions>
            <RowDefinition />
            <RowDefinition Height="Auto" />
        </Grid.RowDefinitions>
        <TextBlock TextWrapping="Wrap" Grid.Row="0" Text="{Binding Message}"/>
        <Button x:Name="CancelButton" Content="Cancel" Click="CancelButton_Click" Width="75" Height="23" HorizontalAlignment="Right" Margin="0,12,0,0" Grid.Row="1" />
        <Button x:Name="OKButton" Content="OK" Click="OKButton_Click" Width="75" Height="23" HorizontalAlignment="Right" Margin="0,12,79,0" Grid.Row="1" />
    </Grid>
</controls:ChildWindow>

You'll notice I made very few changes from the provided template. The key here is that I changed the title to bind with the Title property, and added a text block to display a message from the Message property.

Because this is a simple dialog box, I really feel a view model is overkill. Some purists will insist on one, but I'd argue the functionality is simple enough that we don't need the extra overhead. We'll be introducing an interface to the solution shortly that we can easily unit test, and that makes the dialog itself (which lives entirely in the UI) a self-contained implementation.

Next, I made a few changes to the code behind:

public partial class Dialog : ChildWindow
{
    public Action<bool> CloseAction { get; set; }

    public static readonly DependencyProperty MessageProperty = DependencyProperty.Register(
        "Message",
        typeof(string),
        typeof(Dialog),
        null);


    public string Message
    {
        // cast rather than ToString() so a null value doesn't throw
        get { return (string)GetValue(MessageProperty); }
        set { SetValue(MessageProperty, value); }
    }

    public Dialog()
    {
        InitializeComponent();
        DataContext = this;
    }        

    public Dialog(bool allowCancel)
        : this()
    {
        CancelButton.Visibility = allowCancel ? Visibility.Visible : Visibility.Collapsed;
    }

    private void OKButton_Click(object sender, RoutedEventArgs e)
    {
        this.DialogResult = true;            
    }

    private void CancelButton_Click(object sender, RoutedEventArgs e)
    {
        this.DialogResult = false;           
    }
}

Again, not too many changes here. I added the Message property as a dependency property, and in the constructor I set the data context to the dialog itself. This allows me to bind to the title and message, so we can simply set the message on the dialog and have it appear. I also added an action for the dialog to retain when closed; this stores a callback so it can notify the host of the user's final action. Finally, there is a constructor overload that controls whether or not a cancel button is shown. For alerts, we'll simply show an OK button. For confirmations, we'll show the Cancel button as well.

Next is a simple interface to the dialog. Nothing in our application should know or care how the dialog is displayed. There is simply a mechanism to display the dialog and possibly acquire a response. The interface looks like this:

public interface IDialogService
{
    void ShowDialog(string title, string message, bool allowCancel, Action<bool> response);
}

In our simple demonstration, I'm allowing for a title, a message, whether or not you want to show the cancel button for confirmations, and then an action to call with the response. With this simple interface we have all we need to wire in unit tests for services and controls that rely on the dialog. You can create your own mock object that implements the IDialogService interface and returns whatever response you want to stage for the unit test.

In the production application, we'll need to show a real dialog. Here is the class that handles it:

public class DialogService : IDialogService
{
    #region IDialogService Members

    public void ShowDialog(string title, string message, bool allowCancel, Action<bool> response)
    {
        var dialog = new Dialog(allowCancel) {Title = title, Message = message, CloseAction = response};
        dialog.Closed += DialogClosed;
        dialog.Show();
    }

    static void DialogClosed(object sender, EventArgs e)
    {
        var dialog = sender as Dialog;
        if (dialog != null)
        {
            dialog.Closed -= DialogClosed; 
            dialog.CloseAction(dialog.DialogResult == true);
        }
    }

    #endregion
}

This is fairly straightforward. When called, we'll create an instance of the dialog and set the various properties, including the callback. We wire into the closed event. Whether the user responds by clicking a button or closing the dialog, this event is fired.

There is a reason why we use the snippet DialogResult == true. The result is nullable, so we cannot simply refer to the value itself and make a true/false decision (null, by definition, is unknown). Only if the user explicitly provides a true response by clicking the OK button will the expression evaluate to true. Otherwise, we assume false and call back with the response.
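A quick illustration of the nullable comparison, with hypothetical values just to show the three cases:

```csharp
using System.Diagnostics;

bool? closedWithoutClicking = null;  // user closed the window
bool? clickedOk = true;              // user clicked OK
bool? clickedCancel = false;         // user clicked Cancel

// Only an explicit true satisfies the comparison;
// both null == true and false == true evaluate to false.
Debug.Assert((closedWithoutClicking == true) == false);
Debug.Assert((clickedOk == true) == true);
Debug.Assert((clickedCancel == true) == false);
```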

There is also a subtle but important step to recognize here. Many beginners miss it and end up with memory leaks. When the Closed event fires, the dialog box effectively closes. Typically, there would be no more references to it (it is no longer in the visual tree) and it can be cleaned up by garbage collection. However, there is a mistake you can make that keeps objects alive far longer than you expect.

That mistake has to do with how events work. When we register the Closed event, the thought process is often that the dialog box "knows" to call back to the dialog service and nothing more. In fact, the subscription stores a delegate in the dialog's event, creating a reference between the dialog and the subscriber. As long as that link remains, the object graph connecting the two stays reachable. To break the link cleanly when the dialog closes, we must unregister the event like this:

dialog.Closed -= DialogClosed;

Otherwise, the subscription lingers, and in a long-running application lingering event subscriptions are a classic source of leaked objects. This line is not included in the code sample download. I encourage you to run with WinDbg or even step through the debugger in Visual Studio and watch the object graphs and reference counts as you fire multiple dialogs. Then, add the line of code above and re-run it to see the difference when you appropriately unregister the event.

To demonstrate the use of the service, I put together a quick little page with a few buttons and a text block:

<UserControl x:Class="SimpleDialog.MainPage"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
    xmlns:d="http://schemas.microsoft.com/expression/blend/2008" xmlns:mc="http://schemas.openxmlformats.org/markup-compatibility/2006" 
    mc:Ignorable="d" d:DesignWidth="640" d:DesignHeight="480">
  <Grid x:Name="LayoutRoot">
        <StackPanel Orientation="Vertical">
            <Button x:Name="ToggleEnabled" 
                    HorizontalAlignment="Center"
                    Margin="5" Width="Auto" Content="Disable Dialog Button" Click="ToggleEnabled_Click"/>
            <Button x:Name="DialogButton" 
                    HorizontalAlignment="Center"
                    Margin="5" Width="Auto" Content="DialogButton" Click="DialogButton_Click"/>
            <TextBlock x:Name="TextDialog" 
                       HorizontalAlignment="Center"
                       Margin="5" Width="Auto" Text="Result Will Show Here"/>
        </StackPanel>
    </Grid>
</UserControl>

The first button is a toggle to enable or disable the second button. It shows how to receive a response and act based on the user's input. If the user confirms, the second button is enabled or disabled. If the user cancels or closes the dialog, the state of the button remains the same. The second button raises a dialog and simply shows the result. Because it is an alert dialog (no Cancel button), we know a false response indicates the user closed the window instead of clicking OK.

Here is the code behind:

public partial class MainPage : UserControl
{
    private bool _toggleState = true;
    private readonly IDialogService _dialog = new DialogService();

    public MainPage()
    {
        InitializeComponent();
    }

    private void ToggleEnabled_Click(object sender, RoutedEventArgs e)
    {
        _dialog.ShowDialog(
            _toggleState ? "Disable Button" : "Enable Button",
            _toggleState
                ? "Are you sure you want to disable the dialog button?"
                : "Are you sure you want to enable the dialog button?",
            true,
            r =>
                {
                    if (r)
                    {
                        _toggleState = !_toggleState;
                        DialogButton.IsEnabled = _toggleState;
                        ToggleEnabled.Content = _toggleState ? "Disable Dialog Button" : "Enable Dialog Button";
                    }
                });
    }

    private void DialogButton_Click(object sender, RoutedEventArgs e)
    {
        _dialog.ShowDialog(
            "Confirm This",
            "You have to either close the window or click OK to confirm this.",
            false,
            r => { TextDialog.Text = r ? "You clicked OK." : "You closed the dialog."; });
    }
}

Notice that in the click events, we call the dialog service and pass it anonymous methods as the callbacks. We could point to real methods on the class as well. The important part is to change your thought process from the synchronous step one => wait => step two to the asynchronous step one => delegate step two, where step two is called when the response is ready.

In a production application, instead of creating an instance of the dialog box, we would either register it with a container:

Container.RegisterInstance<IDialogService>(Container.Resolve<DialogService>());

And inject it in a constructor, or use something like MEF and import the dialog service:

[Import]
public IDialogService Dialog { get; set; }

We would simply flag the implementation as the export or use the InheritedExport attribute on the interface itself.
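A sketch of the two MEF options (attribute placement only; the composition container setup is omitted, and these declarations stand in for the types defined earlier):

```csharp
using System;
using System.ComponentModel.Composition;

// Option 1: export the concrete implementation explicitly
[Export(typeof(IDialogService))]
public class DialogService : IDialogService
{
    public void ShowDialog(string title, string message,
                           bool allowCancel, Action<bool> response)
    {
        /* ... show the ChildWindow as before ... */
    }
}

// Option 2: mark the interface so every implementation is exported automatically
[InheritedExport]
public interface IDialogService
{
    void ShowDialog(string title, string message, bool allowCancel, Action<bool> response);
}
```

With either option, the [Import] shown above is satisfied when the part is composed.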

Click here to download the source code for this example.

Jeremy Likness

Wednesday, January 20, 2010

New Silverlight 3 Version Released (3.0.50106.0)

A new version of Silverlight 3 has been released.

You can download the latest control here. If you are a developer, then you'll want to use the developer installer that is available here.

This version fixes some issues related to hardware acceleration in the graphics processing unit (GPU), certain cases that cause Deep Zoom to take on high CPU, and issues with downloads from Silverlight applications.

Read Microsoft's Knowledge Base article for this release: Description of the update for Silverlight: January 19, 2010.

Jeremy Likness

Saturday, January 16, 2010

Making Your Own 8K Homegrown Inversion of Control Container

If you develop software, chances are you've worked with Inversion of Control containers and the Dependency Injection pattern. Many frameworks exist to address how to marry concrete implementations to abstract classes and interfaces; popular ones on the .NET platform include Unity, Castle Windsor, StructureMap, Ninject, and Autofac.

Download the Source for this Example

There are a few reasons why I decided to play with the concept of building my own dependency injection solution. First, I am never content to just "talk to the framework." I like to know what lies beneath, and with so many open source projects, it's not difficult to go behind the scenes and understand just how these frameworks are glued together. Second, sometimes a larger framework in a smaller project just doesn't make sense (sort of like hitting a tack with a sledgehammer), so understanding some principles of how to roll my own lightweight container will come in handy. Of course, I could also just bust out a simple Service Locator or Factory and achieve similar results, which brings me to my third point: it's fun.

What I managed to throw together was an 8K DLL that performs auto-discovery. Nothing fancy here: it simply scans the assemblies you hand it for concrete implementations of the interfaces and types you ask for. Again, this is more an exercise in pulling back the cover and looking inside the CLR than an attempt to rival the robust, mature frameworks out there.

The result doesn't support open generics, isn't attribute-based, doesn't support lifetime management (e.g., singletons), doesn't have named containers, and doesn't perform stable composition. It does, however, do the trick of marrying a simple implementation to an interface or an abstract class, and it knows how to hierarchically wire up dependencies as deep as it needs to go.

With this small solution, I was able to make a simple call to a method on an interface:


...
ResolverEngine.Resolve<ICoordinator>().Coordinate();
...

And also loop through several implementations of another interface:


...
ResolverEngine.ResolveAll<IPlugin>().ForEach(p => p.DoSomething());   
...

So how do we go from an interface to the actual implementation of that interface, and create an instance of a class that might have multiple parameters in a constructor we don't know about beforehand?

The way I decided to implement this was to store references to the assemblies we want to scan for dependencies (so that we're not inspecting mscorlib if we don't need to, for example), to resolve on demand, and to cache the resolutions for future lookups.

From that information, we can at least start with our assembly cache and our type-mapper cache (think of an interface or base class type mapping to multiple implementation types).


private static readonly List<Assembly> _assemblyCache = new List<Assembly>();
private static readonly Dictionary<Type, List<Type>> _typeCache = new Dictionary<Type, List<Type>>();
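Registering the assemblies to scan can then be as simple as adding to the cache. The method name RegisterAssembly is my own (the article doesn't show this part), so treat this as a sketch:

```csharp
using System.Reflection;

public static void RegisterAssembly(Assembly assembly)
{
    // Only scan each assembly once
    if (!_assemblyCache.Contains(assembly))
    {
        _assemblyCache.Add(assembly);
    }
}
```

A typical bootstrap call would be `RegisterAssembly(Assembly.GetExecutingAssembly())` before the first Resolve.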

Now for the resolver. Here's something interesting to sort out: generic type parameters only work with an actual type known at compile time, not a variable pointing to a type. In other words:


// this is fine
IGenericInterface<MyClass> myClassInterface = Factory.GetInterface<MyClass>();

// this is not valid
Type type = typeof(MyClass); 
IGenericInterface<type> myClassInterface = Factory.GetInterface<type>(); 

To be able to recursively resolve dependencies (for example, when constructor injection is used), we'll need to support the explicit runtime type name. Therefore, our shell for the resolve method ends up looking like this:


public static T Resolve<T>() where T : class
{
    return Resolve(typeof(T).AssemblyQualifiedName) as T;
}

public static object Resolve(string type) 
{
    Type srcType = Type.GetType(type);
    ...
}

Now we can use the generic call or the runtime call without issue.

The algorithm to resolve the type has two parts.

The first part determines how to map the current type to an implementing type. If the current type is a concrete class, it's simple: we just want to instantiate it. If the current type is an interface or an abstract base class, we need to scan the registered assemblies for public, concrete types that implement the interface or derive from the base class.

The second part scans the type for the most verbose constructor (we don't have to do it this way, but it's the way many IoC containers work, so I decided to implement it like this). Once we find the most verbose constructor, we recursively resolve the types specified in its parameters and then activate the instance with those arguments.

Let's filter the types first.


private static Type[] _FilterTypes(Type srcType, Assembly a)
{
    return a.GetTypes().Where(
                t => t.IsClass && t.IsPublic && !t.IsAbstract && !t.IsInterface && 
                    (srcType.IsInterface && t.GetInterface(srcType.FullName) != null
                    || srcType.IsAbstract && t.IsSubclassOf(srcType))
                ).ToArray();
}

This reads "scan the assembly for all types that are public classes, are not abstract or interfaces, and either implement the source interface or derive from the base class."

Once we have the mapped type, we need to find the most verbose constructor and then recursively satisfy the dependencies. The _Activate method does this for us:


private static object _Activate(Type t)
{
    object retVal = null; 

    // find the main constructor
    ConstructorInfo ci = (from c in t.GetConstructors() 
              orderby c.GetParameters().Length descending 
              where c.IsPublic && !c.IsStatic
              select c).FirstOrDefault();

    if (ci != null)
    {
        if (ci.GetParameters().Length == 0)
        {
            retVal = Activator.CreateInstance(t);
        }
        else
        {
            ParameterInfo[] parameterInfo = ci.GetParameters();

            object[] parameters = new object[parameterInfo.Length];

            int parameterIndex = 0;
            foreach (ParameterInfo parameter in parameterInfo)
            {
                parameters[parameterIndex++] = Resolve(parameter.ParameterType.AssemblyQualifiedName);
            }

            retVal = Activator.CreateInstance(t, parameters);
        }
    }
    return retVal; 
}

We grab the constructors that aren't static and are public. If we have none or the one we find has no parameters, we simply activate the instance. Otherwise, we build an object collection and recursively resolve the dependencies, adding the resolved instances to the collection. We then activate the instance by passing in the collection, which finds and applies the collection to the appropriate constructor.

Now, we can back up to our resolution method and expand it:


public static object Resolve(string type) 
{
    Type srcType = Type.GetType(type);

    if (srcType == null || (!srcType.IsClass && !srcType.IsInterface && !srcType.IsAbstract))
    {
        throw new ArgumentException("The type could not be resolved.", "type"); 
    }

    object retVal = null;

    if (!srcType.IsInterface && !srcType.IsAbstract)
    {
        retVal = _Activate(srcType);
    }
    else if (_typeCache.ContainsKey(srcType))
    {
        retVal = _Activate(_typeCache[srcType][0]);
    }
    else
    {
        foreach (Assembly a in _assemblyCache)
        {
            // get the mappable types in the assembly 
            foreach(Type t in _FilterTypes(srcType, a))
            {
                // activate and cache it
                object instance = _ActivateAndCache(srcType, t);       
         
                // if we were able to activate and it is appropriate, load it up
                if (instance != null)
                {
                    // finally, return value is the first type we come across
                    retVal = retVal ?? instance;
                }
            }
        }

    }
    return retVal; 
}

We check the type to make sure it is valid. If it's a concrete class, we simply activate it. If it is an abstract class or interface, we first check the cache and take the first mapping (this is where full frameworks offer much more, because they let you explicitly map which implementation you want; to keep things simple, we just grab the first available). If it's not in the cache, we get all of the matching types and insert them into the cache with _ActivateAndCache.

The _ActivateAndCache method does a reality check on the instance to make sure it is assignable from the interface or abstract class. Only if this passes does it return the instance and cache it:


private static object _ActivateAndCache(Type srcType, Type t)
{
    object retVal = null;

    object instance = _Activate(t);

    if (instance != null && srcType.IsAssignableFrom(instance.GetType()))
    {
        retVal = instance;

        // cache the mapping for future resolutions
        if (!_typeCache.ContainsKey(srcType))
        {
            _typeCache.Add(srcType, new List<Type>());
        }
        if (!_typeCache[srcType].Contains(t))
        {
            _typeCache[srcType].Add(t);
        }
    }

    return retVal;
}

At this stage we've pretty much built all we need. The RegisterAssembly method does a little bit more than just store the assembly reference. Because we may have already cached a type, when a new assembly is registered we'll need to scan the existing types and then add any implementations of those types from the new assembly. That code looks like this:


public static void RegisterAssembly(Assembly assembly)
{
    bool resolve = false; 

    lock (_assemblyCache)
    {
        if (!_assemblyCache.Contains(assembly))
        {
            _assemblyCache.Add(assembly);
            resolve = true;
        }                               
    }

    if (resolve)
    {
        foreach (Type type in _typeCache.Keys)
        {
            foreach (Type t in _FilterTypes(type, assembly))
            {
                _ActivateAndCache(type, t);
            }
        }
    }            
}

Finally, I added a few methods to resolve all instances instead of just one.
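The full code is in the download; a minimal sketch of what a ResolveAll might look like (assuming the _typeCache, _Activate, and Resolve members shown earlier) is:

```csharp
// Sketch only: resolves every cached implementation of T rather than the first.
// Assumes the _typeCache dictionary and _Activate method defined earlier.
public static List<T> ResolveAll<T>() where T : class
{
    List<T> instances = new List<T>();
    Type srcType = typeof(T);

    // force a scan so the cache is populated for this type
    Resolve(srcType.AssemblyQualifiedName);

    if (_typeCache.ContainsKey(srcType))
    {
        foreach (Type t in _typeCache[srcType])
        {
            T instance = _Activate(t) as T;
            if (instance != null)
            {
                instances.Add(instance);
            }
        }
    }

    return instances;
}
```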

Where does that leave us? The test project introduces two very simple interfaces:


public interface IPlugin
{
    void DoSomething();
}

public interface ICoordinator
{
    void Coordinate();
}

In an implementation project, I set up a straightforward implementation of IPlugin, then a base class that implements the interface and another class that derives from the base class. This is all completely contrived to demonstrate what the resolver can do:


public class Plugin1 : IPlugin
{
    public void DoSomething()
    {
        Console.WriteLine("This is the first plugin.");
    }

}

public abstract class PluginBase : IPlugin
{

    public virtual void DoSomething()
    {
        Console.WriteLine("You found me."); 
    }

}

public class PluginMain : PluginBase
{
    public override void DoSomething()
    {
        Console.WriteLine("This is the override, now calling base class.");
        base.DoSomething();
    }
}

The coordinate class demonstrates how the resolver will recursively resolve dependencies. It takes in the interface as well as the base class:


public class Coordinator : ICoordinator
{
    IPlugin _interface;
    PluginBase _base;

    public Coordinator()
    {
    }

    public Coordinator(IPlugin interfaceInstance, PluginBase impl)
    {
        _interface = interfaceInstance;
        _base = impl;
    }

    public void Coordinate()
    {
        Console.WriteLine("Interface:");
        _interface.DoSomething();

        Console.WriteLine("Base class:"); 
        _base.DoSomething();
    }

}

Finally, I added another implementation of the plugin to the main program to show how it doesn't resolve until that assembly is registered. With all of these together, the main thread looks like this:


static void Main(string[] args)
{
    ResolverEngine.RegisterAssembly(typeof(PluginMain).Assembly);

    Console.WriteLine("Coordinator...");

    ResolverEngine.Resolve<ICoordinator>().Coordinate();
    
    Console.WriteLine("Plugins...");

    ResolverEngine.ResolveAll<IPlugin>().ForEach(p => p.DoSomething());             

    Console.WriteLine("Now registering this, and running plugins again...");

    ResolverEngine.RegisterAssembly(typeof(Program).Assembly);

    ResolverEngine.ResolveAll<IPlugin>().ForEach(p => p.DoSomething());           

    Console.ReadLine();
}

While it is fun to compile and run the application, the real power comes from stepping through in the debugger and watching the various steps of parsing out types and constructors. Looking at ConstructorInfo and ParameterInfo will reveal a lot about how C# interacts with the underlying CLR. Even if you never use this code, it will help you better understand and interact with the various dependency injection frameworks that are available, and learn a little more about what they are doing for you behind the scenes. This only scratches the surface ... pulling down an open source project and wading through the code will reveal how much more powerful these frameworks are.

Download the Source for this Example

Jeremy Likness

Tuesday, January 12, 2010

Quick Tip: Embedding and Accessing Schemas in .NET

I have a project I'm working on that requires the use of some extensive XML manipulation. While XML can be very powerful, XML without a schema is like JavaScript objects: no strong typing and the wild west as far as standards are concerned. A good, solid XML document will have a valid schema to validate against.

It's really not that difficult, either. There are several online tools that can help generate a template of the schema from a sample XML document, like this one. Visual Studio provides full intellisense for building out your schema.

You can choose to actually publish the schema to a website for validation if it's an "open schema." If it's closed and you are simply using it for internal validation, then it makes sense to embed as a resource.

A best practice is to validate your method parameters before acting on them. In my methods that take in the XML document, I want to validate that it maps to the schema or throw an exception right away.

First, I'll add the schema to the project, then right-click and change it to an embedded resource. This ensures it is compiled into the distributable binary.

Next, I create a helper class to access the schema. I'm almost always going to want to validate it as a schema set, so instead of having to remember how to parse the manifest and get the resource stream each time, why not do it like this:


public static class SchemaAccess
{
    private const string SCHEMA = "MySchema.xsd";
    private const string SCHEMANAMESPACE = "http://mycompany.com/MySchema.xsd"; 

    public static XmlSchemaSet GetSchemaSet()
    {
        // replace this fully qualified name with the schema file
        string schemaPath = typeof(SchemaAccess).FullName.Replace("SchemaAccess", SCHEMA);
        XmlSchemaSet schemas = new XmlSchemaSet();
        schemas.Add(SCHEMANAMESPACE, XmlReader.Create(typeof(SchemaAccess).Assembly.GetManifestResourceStream(schemaPath)));
        return schemas;
    }
}

Instead of hard-coding the path to the schema, I make sure my helper class is at the same level. The embedded resources are always accessed by the full namespace followed by the file name, so I can simply take the fully qualified name of the current helper class and then replace the class name with the resource name. If my helper class is at "Foo.Bar" then it will be "Foo.Bar.SchemaAccess." The replace will turn that into "Foo.Bar.MySchema.xsd" which is what I need to access the schema.

Then, I simply add it to the set and return it. Of course, if I'm dealing with multiple schemas, I can add those, too.

With LINQ, it's very easy to validate a document against a schema. Simply add System.Xml.Schema to your usings, then validate your XDocument using the Validate extension method. Here is a sample unit test that prints the errors to the debug console:


XmlSchemaSet schemas = SchemaAccess.GetSchemaSet();
bool validationErrors = false;
myDoc.Validate(schemas, (o, e) =>
    {
        validationErrors = true;
        Debug.Print(e.Message);
    });
Assert.IsFalse(validationErrors, "The xml was invalid. View the debug console for details."); 

In production, I'll just throw the exception. That's it!
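The production version is just a sketch away: reuse the helper and throw from the validation callback (XmlSchemaValidationException lives in System.Xml.Schema):

```csharp
// Sketch: fail fast in production by throwing on the first validation error.
myDoc.Validate(SchemaAccess.GetSchemaSet(), (o, e) =>
{
    throw new XmlSchemaValidationException(e.Message, e.Exception);
});
```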

Jeremy Likness

Monday, January 11, 2010

Auto-Discoverable Views using Fluent PRISM in Silverlight

One reason a developer would use a technology like MEF is to, as the name implies, make an application extensible through a process called discovery. Discovery is simply a method for locating classes, types, or other resources in an assembly. MEF uses the Export tag to flag items for discovery, and the composition process then aggregates those items and provides them to the entities requesting them via the Import tag.

Download the source code for this example

It occurred to me when working with PRISM and MEF (see the recap of my short series here) that some of this can be done through traditional means and I might be abusing the overhead of a framework if all I'm doing is something simple like marrying a view to a region.

Challenges with PRISM include both determining how views make it into regions and how to avoid magic strings and have strict type compliance when dealing with regions. This post will address one possible solution using custom attributes and some fluent interfaces.

Fluent Interfaces

"Fluent interfaces" simply refers to the practice of using the built-in features of an object-oriented language to create more "human-readable" code. It's really a topic in and of itself, but I felt it made sense to simplify some of the steps in this post and introduce some higher level concepts and examples along the way.

There are several ways to provide fluent interfaces. One that is built into the C# language is simply the object initializer feature. Instead of this:


public class MyClass 
{
   public string Foo { get; set; }
   public string Bar { get; set; }

   public MyClass() 
   {
   }

   public MyClass(string foo, string bar) 
   {
       Foo = foo; 
       Bar = bar;
   } 
}

Which results in code like this:


...
MyClass myClass = new MyClass(string1, string2); 
...

Question: which string is foo, and which string is bar, based on the above snippet? Would you consider this to be more readable and "self-documenting"?


...
MyClass myClass = new MyClass { Foo = string1, Bar = string2 };
...

It works for me! So let's do something simple in our PRISM project. If you've worked with PRISM, then you'll know the pattern of creating a Shell and then assigning it to the root visual in a Bootstrapper. The typical code looks like this in your Bootstrapper class:


protected override DependencyObject CreateShell()
{
   Shell shell = new Shell(); 
   Application.Current.RootVisual = shell;
   return shell; 
}

That's nice, but wouldn't it also be nice if you could do something simple and readable, like this? Keep in mind we're not cutting down on generated code (in fact, fluent interfaces can sometimes increase the amount of generated code, which is a consideration to keep in mind); we're focused on the maintainability and readability of the source code.


protected override DependencyObject CreateShell()
{
   return Container.Resolve<Shell>().AsRootVisual();             
}

In one line of code I'm asking the container to provide me with the shell (I do this as a common practice as opposed to creating a new instance so that any dependencies I may have in the shell will be resolved), then return it "as root visual." I think that is pretty readable, but how do we get there?

The answer in this case is using extension methods. In my "common" project I created a static class called Fluent which contains my fluent interfaces (this is for the example only and would not scale in production ... you will want to segregate your interfaces into separate classes related to the modules they act upon). In this static class, I create the following extension method:


public static UserControl AsRootVisual(this UserControl control)
{
    Application.Current.RootVisual = control;
    return control;
}

An extension method does a few things. By using the keyword this on the first parameter, it tells the compiler the method extends that type. The semantics in the code make it look like you are calling something on the UserControl, but the compiler is really taking the user control instance and passing it into the method on the static class. It is common for extension methods to return the same instance so they can be chained. In this case, we simply assign the control to the root visual, then return it so it can be used elsewhere. We're really doing the same thing we did before, just adding a second method call, in order to make the code that much more readable.

One important concern to address with fluent interfaces is the potential for "hidden magic." What I mean by this is that the extension methods aren't available on the base class and only appear when you include a reference to the class with the extensions. This may make them less discoverable depending on how you organize your code. It also means you will see methods that aren't part of the type's declared interface. Fortunately, it's not difficult to determine where a method comes from: Intellisense flags extension methods as extensions, and you can always right-click and "go to definition" to see where the method was declared.

Auto-discoverable Views

I have two main goals with this project: the first is to be able to tag views so they are automatically discovered and placed into a region, and the second is to type the region so I'm not using magic strings all over the place. My ideal solution would allow me to add a view to a project, tag it with a region, and run it, and have it magically appear in that region. Possible? Of course!

Typing the Regions

I am going to type the regions to avoid magic strings. Because the main shell defines the regions, I'm fine with the strings there ... that is sort of the "overall definition," but then I want to make sure elsewhere in the code I can't accidentally refer to a region that doesn't exist. My first step is to create a common project that all other modules can reference, and then add an enumeration for the regions. The enumeration for this example is simple:


namespace ViewDiscovery.Common
{
    public enum Regions
    {
        TopLeft,
        TopRight,
        BottomLeft,
        BottomRight
    }
}

Enumerations are nice because I can call ToString() and turn it into the string value of the enumeration itself. I decided to adopt the convention "Region." when tagging it in the shell, so my shell looks like this:


<UserControl x:Class="ViewDiscovery.Shell"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
    xmlns:region="clr-namespace:Microsoft.Practices.Composite.Presentation.Regions;assembly=Microsoft.Practices.Composite.Presentation"
    >
    <Grid x:Name="LayoutRoot" Background="White">
        <Grid.RowDefinitions>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
            <RowDefinition Height="Auto"/>
        </Grid.RowDefinitions>
        <Grid.ColumnDefinitions>
            <ColumnDefinition Width="Auto"/>
            <ColumnDefinition Width="Auto"/>
        </Grid.ColumnDefinitions>
        <TextBlock Grid.Row="0" Grid.Column="0" Grid.ColumnSpan="2" HorizontalAlignment="Center" Text="View Discovery"/>       
        <ItemsControl Grid.Row="1" Grid.Column="0" region:RegionManager.RegionName="Region.TopLeft"/>
        <ItemsControl Grid.Row="1" Grid.Column="1" region:RegionManager.RegionName="Region.TopRight"/>
        <ItemsControl Grid.Row="2" Grid.Column="0" region:RegionManager.RegionName="Region.BottomLeft"/>
        <ItemsControl Grid.Row="2" Grid.Column="1" region:RegionManager.RegionName="Region.BottomRight"/>
    </Grid>
</UserControl>

This is just a simple 2x2 grid with a region per cell. I used the ItemsControl so each cell can host multiple views. We're off to a good start! Now let's figure out how to tag our views.

The Custom Attribute

Custom attributes are powerful and easy to implement. I want to be able to tag a view as a region using my enumeration, so I define this custom attribute:


namespace ViewDiscovery.Common
{
    [AttributeUsage(AttributeTargets.Class,AllowMultiple=false)]
    public class RegionAttribute : System.Attribute
    {
        const string REGIONTEMPLATE = "Region.{0}";

        public readonly string Region;

        public RegionAttribute(Regions region)
        {
            Region = region.ToString().FormattedWith(REGIONTEMPLATE);              
        }
    }
}

Notice that I don't allow multiple attributes and that this attribute is only valid when placed on a class. Attributes can take both positional parameters (defined in the constructor) and named parameters (defined as properties). In this case, I only have one value, so I chose to make it positional. When the region enumeration is passed in, I convert it to a string and then format it with the prefix, so that Regions.TopLeft becomes the string Region.TopLeft. Notice I snuck in another fluent interface, FormattedWith. To me, that's a sight prettier than string.Format if I'm only dealing with a single parameter. The extension to make this happen looks like this:


public static string FormattedWith(this string src, string template)
{
    return string.Format(template, src);
}

Now that we have a tag, we can create a new module and get it wired in. I created a new project as a Silverlight Class Library (sorry, this example doesn't do any fancy dynamic module loading), built a folder for views, and tossed in a view. The view simply contains a grid with some text:


<Grid x:Name="LayoutRoot" Background="White">
        <Grid.RowDefinitions>
            <RowDefinition/>
            <RowDefinition/>
        </Grid.RowDefinitions>
        <TextBlock Text="I am in ModuleOne." Grid.Row="0"/>
        <TextBlock Text="I want to be at the top left." Grid.Row="1"/>
    </Grid>

Tagging the view was simple. I went into the code-behind, added a using statement to reference the common project where my custom attribute is defined and then tagged the view with the attribute. Here's the code-behind with the tag:


namespace ViewDiscovery.ModuleOne.Views
{
    [Region(Regions.TopLeft)] 
    public partial class View : UserControl
    {
        public View()
        {
            InitializeComponent();
        }       
    }
}

So now it's clear where we want the view to go. Now how do we get it there?

Discovering the Views

The pattern in PRISM for injecting a module is for the module to have an initialization class that implements IModule and then adds the views in the module to the region. We want to do this through discovery. To facilitate this, I created a base abstract class for any module that wants auto-discovered views. The class looks like this:


public abstract class ViewModuleBase : IModule
{
    protected IRegionManager _regionManager;

    public ViewModuleBase(IRegionManager regionManager)
    {
        _regionManager = regionManager;
    }

    #region IModule Members

    public virtual void Initialize()
    {
        IEnumerable<Type> views = GetType().Assembly.GetTypes().Where(t => t.HasRegionAttribute());

        foreach (Type view in views)
        {
            RegionAttribute regionAttr = view.GetRegionAttribute(); 
            _regionManager.RegisterViewWithRegion(regionAttr.Region, view); 
        }
    }

    #endregion
}

The code should be very readable. We enforce that the region manager must be passed in by creating a constructor that takes it and stores it. We implement Initialize as virtual so it can be overridden when needed. First, we get the assembly the module lives in, then grab a collection of types that have our custom attribute. Yes, our fluent interface makes this obvious because we can do type.HasRegionAttribute(). The extension method looks like this:


public static bool HasRegionAttribute(this Type t)
{
    return t.GetCustomAttributes(true).Where(a => a is RegionAttribute).Count() > 0; 
}

This takes the type, grabs the collection of custom attributes (using inheritance in case we're dealing with a derived type) and returns true if the count of our attribute, the RegionAttribute, is greater than zero.

Next, we iterate those types and get the region attribute, again with a nice, friendly interface (GetRegionAttribute) that looks like this:


public static RegionAttribute GetRegionAttribute(this Type t)
{
    return (RegionAttribute)t.GetCustomAttributes(true).Where(a => a is RegionAttribute).SingleOrDefault();
}

Now we have exactly what we need to place the view into the region: the region it belongs to, and the type. So, we register the view with the region and we're good to go!

In my module, I add a class for the module initializer called ModuleInit. I'm only using the auto-discovery so there is nothing more than an implementation of the base class that passes the region manager down:


namespace ViewDiscovery.ModuleOne
{
    public class ModuleInit : ViewModuleBase
    {
        public ModuleInit(IRegionManager regionManager)
            : base(regionManager)
        {
        }
    }
}

Now we go back to the main project and wire in the module catalog. I'm not using dynamic modules so I just reference my modules from the main project and register them by type:


protected override IModuleCatalog GetModuleCatalog()
{
    return new ModuleCatalog()
        .WithModule(typeof(ModuleOne.ModuleInit).AssemblyQualifiedName.AsModuleWithName("Module One"));            
}      

OK, so I had some fun here as well. I wanted to extend the module catalog to allow chaining WithModule for adding multiple modules, and be able to take a type name as a string, then make it a module with a name. Working backwards, we turn a string into a named ModuleInfo class like this:


public static ModuleInfo AsModuleWithName(this string strType, string moduleName)
{
    return new ModuleInfo(moduleName, strType);
}

Next, we extend the catalog to allow chaining on new modules like this:


public static ModuleCatalog WithModule(this ModuleCatalog catalog, ModuleInfo module)
{
    catalog.AddModule(module);
    return catalog;
}

Notice this simply adds the module then returns the original catalog.

At this point, we can run the project and see that the view appears in the upper left.

View Discovery with One View

I then added a second view with a rectangle:


<UserControl x:Class="ViewDiscovery.ModuleOne.Views.Rectangle"
    xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation" 
    xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml" 
    >
    <Grid x:Name="LayoutRoot" Background="White">
        <Rectangle Width="100" Height="100" Fill="Red" Stroke="Black"/>
    </Grid>
</UserControl>

... and tagged it:


namespace ViewDiscovery.ModuleOne.Views
{
    [Region(Regions.TopRight)] 
    public partial class Rectangle : UserControl
    {
        public Rectangle()
        {
            InitializeComponent();
        }
    }
}

And finally I compiled and re-ran it. The rectangle shows up in the upper right, as expected:

View Discovery with Two Views

Next, I added a second module with several views ... including a few registered to the same cell. Adding the new module to the catalog was easy with the extension for chaining modules:


protected override IModuleCatalog GetModuleCatalog()
{
    return new ModuleCatalog()
        .WithModule(typeof(ModuleOne.ModuleInit).AssemblyQualifiedName.AsModuleWithName("Module One"))
        .WithModule(typeof(ModuleTwo.ModuleInit).AssemblyQualifiedName.AsModuleWithName("Module Two"));            
}   

Compiling and running this gives me the final result:

View Discovery with Multiple Views

And now we've successfully created auto-discoverable views that we can strongly type to a region and feel confident will end up being rendered where they belong.

Download the source code for this example

Jeremy Likness

Saturday, January 9, 2010

Automated Silverlight Unit Testing Using StatLight

One concern with the Silverlight Unit Testing Framework is that it runs on a UI thread and requires a browser to function. This makes it difficult to integrate into automated or continuous integration testing. Difficult, but not impossible.

A solution is provided by a project called StatLight, which not only supports Silverlight testing automation but actually integrates with several different unit test providers. I am focusing on the Silverlight Unit Testing Framework for this post. The program is very flexible and will even run as a continuous test server by watching a XAP file for changes and triggering the tests when it changes. It will output to the console and to an XML-formatted file, making it easy to grab and process test results.

I decided to run the tests I made in the Unit Testing Asynchronous Behaviors in Silverlight post. First, I headed over to the StatLight web site and downloaded the project. I simply unzipped it into a folder to "install."

Next, I copied the XAP file for my tests, called PRISMMEF.Tests.xap, into the folder with StatLight (you can reference the full path, this was just for convenience). I created a subdirectory called reports.

I launched a command line as administrator. StatLight hosts its own web service, so elevated permissions are required to allow it to bind to the appropriate ports. At the command line, I ran the following. It started the test run, printed a dot to my screen for every test as it ran, and then completed.

Silverlight Unit Test Automation

This resulted in the following output as a file called report.xml:

<StatLightTestResults xapFileName="PRISMMEF.Tests.xap" total="6" ignored="0" failed="0" dateRun="2010-01-08 19:15:18">
  <tests>
    <test name="PRISMMEF.Tests.Common.PartModuleTest.TestConstructor" passed="True" timeToComplete="00:00:00.0290016" />
    <test name="PRISMMEF.Tests.Common.PartModuleTest.TestInitialization" passed="True" timeToComplete="00:00:00.1300074" />
    <test name="PRISMMEF.Tests.Common.ViewModelBehaviorTest.TestAttach" passed="True" timeToComplete="00:00:00.0340020" />
    <test name="PRISMMEF.Tests.Common.ViewModelBehaviorTest.TestFromXaml" passed="True" timeToComplete="00:00:00.1100063" />
    <test name="PRISMMEF.Tests.Common.ViewModelProviderTest.TestConstruction" passed="True" timeToComplete="00:00:00.0630036" />
    <test name="PRISMMEF.Tests.Common.ViewModelProviderTest.TestPropertyChange" passed="True" timeToComplete="00:00:00.0520030" />
  </tests>
</StatLightTestResults>

From this, you can quickly see what I do with my Friday evenings. Seriously, it's a nice, simple, easy-to-parse XML format that I can plug into my build scripts, process with a program, or transform with simple XSLT to generate automated reports of the test results, and I can even fail a build based on failed results.
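As an illustration of that kind of processing, here is a sketch that pulls the pass/fail counts out of the report with LINQ to XML (the report is inlined and abbreviated so the snippet stands alone; in a build script you would load the file instead):

```csharp
using System;
using System.Linq;
using System.Xml.Linq;

class ReportCheck
{
    static void Main()
    {
        // Inlined, abbreviated copy of the report.xml shown above.
        string xml = @"<StatLightTestResults xapFileName=""PRISMMEF.Tests.xap"" total=""6"" ignored=""0"" failed=""0"" dateRun=""2010-01-08 19:15:18"">
  <tests>
    <test name=""PRISMMEF.Tests.Common.PartModuleTest.TestConstructor"" passed=""True"" timeToComplete=""00:00:00.0290016"" />
  </tests>
</StatLightTestResults>";

        XDocument report = XDocument.Parse(xml);
        int total = (int)report.Root.Attribute("total");
        int failed = (int)report.Root.Attribute("failed");

        // Collect the names of any failing tests so a script can log them.
        var failures = report.Descendants("test")
            .Where(t => (string)t.Attribute("passed") != "True")
            .Select(t => (string)t.Attribute("name"))
            .ToList();

        foreach (string name in failures)
        {
            Console.WriteLine("FAILED: " + name);
        }

        Console.WriteLine("Total: {0}, Failed: {1}", total, failed);
        // A build script could exit non-zero here when failed > 0.
    }
}
```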

Jeremy Likness

Thursday, January 7, 2010

Silverlight Unit Testing Framework: Asynchronous Testing of Behaviors

Last month, I blogged about Unit Testing ViewModels AND Views using the Silverlight Unit Testing Framework. I wanted to take that post a step further and talk about some more advanced testing scenarios that are possible.

The site itself provides a lot of information about how to get started and what is available with the framework. One thing to keep in mind, and a radical shift from other testing frameworks, is that the Silverlight testing framework runs on the UI thread. This means it does not spawn multiple threads for multiple tests and in fact requires the tests to run "one at a time" so they can take advantage of the test surface that is supplied.

This is a bit different than other frameworks but in my opinion, makes a lot of sense when dealing with Silverlight. The framework provides incredible flexibility for configuring and categorizing your tests.

If you are searching for a very comprehensive example of the framework in use, look no further than the Silverlight Toolkit. This comes with all source code and in fact uses the testing framework for its tests. You will find not only advanced scenarios for testing, but thousands upon thousands of tests! (I also am guessing that a new standard for PC performance has been invented by mistake ... let's all compile the entire toolkit and compare how long it takes!)

Tagging Tests

One thing you'll find if you run the toolkit tests is that you can enter a tag to filter tests. For example, type in "Accordion" to only run the 800 or so unit tests for accordion-type controls.

To use tag functionality, simply "tag" your test like this:


[TestClass] 
[Tag("MEF")]
public class PartModuleTest 
{
}

I've tagged the test to be a MEF-related test. When I wire up the framework, I can filter the tag like this:


UnitTestSettings settings = UnitTestSystem.CreateDefaultSettings();
settings.TagExpression = "MEF";
this.RootVisual = UnitTestSystem.CreateTestPage(settings);  

When I run the tests, only my tests tagged with MEF will run! The toolkit provides an example of a UI that allows you to select the tag, then run the test.

Asynchronous Tests

It is often necessary to test methods that are asynchronous or require event coordination. An example may be a service that must wait on return values, or a user control that must be loaded into the framework before you can test it. The Silverlight Unit Testing Framework provides the Asynchronous tag to facilitate this type of test. This tells the framework not to move on to the next test, nor consider the current test method complete, until an explicit call to TestComplete is made.

There are several "helper" methods supplied for asynchronous processing that we'll explore in a minute. Using these methods requires inheriting from one of the base test classes, such as SilverlightTest, which provides the methods as well as the test surface to add controls to.
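To give you a feel for those helpers up front, here is a sketch of a test that uses the queued work-item style the base class provides (the Button and the assertions here are placeholders, not part of the project we're testing):

```csharp
[TestMethod]
[Asynchronous]
public void TestWithQueuedWorkItems()
{
    bool loaded = false;
    Button button = new Button();
    button.Loaded += (o, e) => loaded = true;

    // Work items are queued up and executed in order on the UI thread
    EnqueueCallback(() => TestPanel.Children.Add(button));

    // The framework polls this condition and won't continue until it is true
    EnqueueConditional(() => loaded);

    EnqueueCallback(() => Assert.IsTrue(loaded, "Button never loaded."));

    // Signal that the asynchronous test is finished
    EnqueueTestComplete();
}
```

EnqueueConditional is particularly handy when you have no event to hook and simply need to wait for some state to become true.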

In PRISM, MEF, and MVVM Part 1 of 3: Unity Glue I explored various options for binding the view model to the view. The third and final method I reviewed was using an attached behavior. I would like to write some unit tests for that behavior (indeed, if I were a test-driven development or TDD purist, I would have written those tests first).

In order to test the behavior, I need to attach it to a FrameworkElement and then validate it has done what I expected it to do. But how do I go about doing that in our unit test environment?

Attached Behaviors

Similar to other controls in other frameworks, Silverlight controls have a distinct life cycle. It varies slightly depending on whether the control has been generated in XAML or through code. There is a great summary table of these events on Dave's Blog. What's important to note is that values and properties are set as soon as you, well, set them, but bindings don't take effect until they are inserted into the visual tree. In XAML, the XAML node becomes part of the tree and fires the Loaded event once it is fully integrated. In code, this happens after the element is added as the child of some other element that is in the tree. This allows Silverlight to parse the hierarchy and propagate dependency properties.

So what we essentially want to do is take our behavior, attach it to an element, and then wait for the Loaded event to fire so we can inspect the element and see that it has been modified accordingly (in this case, we expect that the DataContext property has been set to our view model).

Setting up the Project

The testing framework provides some handy templates for getting started. I add a new project and select the Silverlight Test Project template. I then add references to the projects I'll be testing and the supporting frameworks like PRISM and MEF.

Next, I'll want to build some helper classes to help me test my functionality.

Helper Classes

I like to create a folder called Helper and place my stubs, mocks, and other helper classes there. These may be utilities, like the Exception Expected utility I use, or classes that are used for the testing framework.

First, I'll create a test view model with a simple string and string collection property for testing:


public class TestViewModel 
{
    public TestViewModel()
    {
        ListOfItems = new List<string>();
    }

    public TestViewModel(List<string> items)
    {
        ListOfItems = items;            
    }

    public string Property { get; set; }

    public List<string> ListOfItems { get; set; }
}

If my view models have common methods described in a base class or interface, I might use a mocking framework to mock the class instead.
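For instance, if the view model implemented a hypothetical IViewModel interface, a mock (sketched here with the Moq library as one example of a Silverlight-compatible mocking framework) could stand in for the concrete class:

```csharp
// Hypothetical sketch: IViewModel is an assumed interface with a Property getter,
// and Moq is just one example of a mocking framework
var viewModelMock = new Mock<IViewModel>();
viewModelMock.SetupGet(vm => vm.Property).Returns("Test Property");

// Register the mock so anything resolving IViewModel gets this instance
_container.RegisterInstance<IViewModel>(viewModelMock.Object);
```

For this example, the concrete TestViewModel is simple enough that a mock would be overkill.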

The Test Class

The behavior I created has an affinity to the Unity inversion of control (IoC) container. It could be refactored otherwise, but it made sense for the sake of the demonstration. Therefore, I'll need to have a container for testing, as well as the view model. My test class starts out looking like this (notice I base it on SilverlightTest):


[TestClass]
public class ViewModelBehaviorTest : SilverlightTest
{
    const string TESTPROP = "Test Property";

    IUnityContainer _container;

    TestViewModel _viewModel; 

    [ClassInitialize]
    public void ClassInitialize()
    {
        _container = new UnityContainer();

        _viewModel = new TestViewModel() { Property = TESTPROP }; 
        _container.RegisterInstance<TestViewModel>(_viewModel); 

        ViewModelBehavior.Container = _container; 
    }
}

I hold class-level references to the container and the test view model. When the class is initialized (this is one-time setup, before any tests are run), I create a container and a view model, and tell the container to hand back that specific instance anytime someone asks for the view model. I also set the container on the view model behavior type, so it knows what to use when resolving the view model.

The Code Behind Test

For my first test, I'll programmatically attach the behavior and test that it works. The view model behavior takes in a string that is the fully qualified type name for the view model, and then uses the unity container to resolve it. Therefore, my test looks like this:


[TestMethod]
[Asynchronous]
[Description("Test creating an element and attaching in code behind.")]
public void TestAttach()
{
    TextBlock textBlock = new TextBlock();
    textBlock.SetValue(ViewModelBehavior.ViewModelProperty, typeof(TestViewModel).AssemblyQualifiedName);
                
    textBlock.Loaded += (o, e) => 
    {
        Assert.IsNotNull(textBlock.DataContext, "The data context was never bound.");
        Assert.AreSame(textBlock.DataContext, _viewModel, "The data context was not bound to the correct view model.");

        EnqueueTestComplete();
    };

    TestPanel.Children.Add(textBlock);                                  
}

There are a few things going on here, so let's break them down!

The TestMethod attribute tags this method to be run by the framework. It is decorated with a description, which I can view in the output when the test is run and which helps make the test more, ah, descriptive. The first thing I do is create a text block and attach the view model property. Here, I take the fully qualified name of the test view model type and use it to set the attached property. We want to make sure everything works fine and there are no errors during binding, so this is where the asynchronous pieces come into play.

The Asynchronous tag tells the framework that we're waiting on events, so don't consider this test complete until we explicitly tell the framework it's complete. When the text block fires the Loaded event, we confirm that the data context is not null and that it in fact contains the exact instance of the view model we created in the class initialization. Then we tell the framework the test is complete by calling EnqueueTestComplete, which is provided by the base class.

Finally, if you were to run this without the last line, the test would stall because the text block would never get loaded. We add it as a child of the test surface, and this injects it into the visual tree and fires the loaded event.

The XAML Test

I'm not completely satisfied with this test because the whole reason for creating a behavior was so I could attach the view model in XAML and not use code behind. Therefore, I should really test attaching this behavior through XAML. So, at the top of the test class we'll create the necessary XAML and wrap it in a UserControl:


const string TESTXAML = 
    "<UserControl " +
        "xmlns=\"http://schemas.microsoft.com/winfx/2006/xaml/presentation\" "  +
        "xmlns:x=\"http://schemas.microsoft.com/winfx/2006/xaml\" " +
        "xmlns:vm=\"clr-namespace:PRISMMEF.Common.Behavior;assembly=PRISMMEF.Common\">" +
            "<Grid x:Name=\"LayoutRoot\" Background=\"White\" " +
                "vm:ViewModelBehavior.ViewModel=\"PRISMMEF.Tests.Helper.TestViewModel, PRISMMEF.Tests, Version=1.0.0.0\">" +
                "<ListBox x:Name=\"ListBox\" ItemsSource=\"{Binding ListOfItems}\"/>" +
    "</Grid></UserControl>";

If you think the constant is ugly, you can always add an actual XAML file, set it as an embedded resource, then read it in instead. That would give you the full functionality of the editor to tweak the test code. Here, we simply create a control with a grid and a list box. The list box uses the attached behavior and also binds the list.
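If you take the embedded-resource route, the loading code is short; here is a minimal sketch (the resource name PRISMMEF.Tests.TestControl.xaml is hypothetical, following the usual default-namespace-plus-filename convention):

```csharp
using System.IO;
using System.Reflection;

// Sketch: read an embedded .xaml file out of the test assembly.
// Assumes the file's Build Action is set to Embedded Resource.
private static string LoadXamlResource(string resourceName)
{
    Assembly assembly = Assembly.GetExecutingAssembly();
    using (Stream stream = assembly.GetManifestResourceStream(resourceName))
    using (StreamReader reader = new StreamReader(stream))
    {
        return reader.ReadToEnd();
    }
}

// Usage (hypothetical resource name):
// string testXaml = LoadXamlResource("PRISMMEF.Tests.TestControl.xaml");
```

The string you get back can be fed straight to XamlReader.Load just like the constant.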

I want to test the list binding as well, so I add a collection to my test class:


// ... other test class members ...
private static readonly List<string> _testCollection = new List<string> { "test1", "test2" };

In the class initialize method, I'll pass this into the view model's constructor so it is set on the ListOfItems property.
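With that change, the class initialization looks something like this, using the second constructor on TestViewModel to seed the collection:

```csharp
[ClassInitialize]
public void ClassInitialize()
{
    _container = new UnityContainer();

    // Pass the static test collection through the constructor so
    // ListOfItems is populated for the XAML binding test
    _viewModel = new TestViewModel(_testCollection) { Property = TESTPROP };
    _container.RegisterInstance<TestViewModel>(_viewModel);

    ViewModelBehavior.Container = _container;
}
```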

Now, we can create the control from XAML, load it, and test it:


[TestMethod]
[Asynchronous]
[Description("Test creating from XAML")]
public void TestFromXaml()
{
    UserControl control = XamlReader.Load(TESTXAML) as UserControl;
    
    control.Loaded += (o, e) => 
    {
        ListBox listBox = control.FindName("ListBox") as ListBox;

        Assert.IsNotNull(listBox, "ListBox was not found.");
        Assert.IsNotNull(listBox.DataContext, "The data context was never bound.");
        Assert.AreSame(listBox.DataContext, _viewModel, "The data context was not bound to the correct view model.");

        IEnumerable<string> list = listBox.ItemsSource as IEnumerable<string>; 
        List<string> targetList = new List<string>(list); 
        CollectionAssert.AreEquivalent(targetList, _testCollection, "Collection not properly bound."); 

        EnqueueTestComplete(); 
    };

    TestPanel.Children.Add(control);
}

Now we load the control from XAML and wire up the Loaded event to test the data context and the instance. Then I take the items from the list box itself and compare them with the original list using CollectionAssert. AreEquivalent performs a set comparison, so ordering doesn't matter. Then we signal that the test is complete.

There's no code download for this example because it was very straightforward, and I'll likely be posting a more comprehensive example in the future as the result of a talk I'll be giving. Be sure to tune in to MSDN's geekSpeak on Wednesday, February 17th, 2010, when I will be the guest covering the Silverlight Unit Testing Framework exclusively (the talks are all stored on the site in case you read this after the event).

Thanks!

Jeremy Likness

Sunday, January 3, 2010

PRISM, MEF, and MVVM Part 3 of 3: Dynamic MEF Modules in PRISM

Series recap:

In the final part of this series, I will show a dynamically loaded module (using PRISM) that takes full advantage of MEF.

Here is a preview of the final product, illustrating the different modules and areas I have pulled together to demonstrate (click for a full resolution view):

Download the Source for this Example

Why even bother with dynamic modules? Dynamic module loading can be a powerful way to build scalable (and more stable) applications in Silverlight. By creating dynamic modules, you ensure the user only loads what they need, when they need it. The dynamic module only comes into play when the user demands a feature that requires the module.

Before we dive into the new MEF-based module, let's break down a few best practices when using dynamically loaded modules:

  • The best option I've found for this so far is to use PRISM's "on demand" functionality for loading modules.
  • Typically, you will want to put your main dependencies in the "main" or "host" Silverlight project.
  • Create a new dynamic module by adding a Silverlight Application project, not a Silverlight Class Library.
  • Use a XAML catalog to reference the dynamic modules. This gives PRISM something to resolve when the new module is requested, but avoids direct dependencies on the satellite modules by the main application.
  • When referencing projects or DLLs that are part of the main application in a satellite module, be sure to set "copy local" to false. This ensures these are not compiled into the XAP, because the module can leverage the fact that the main module has already loaded the dependencies. This leads to very lightweight XAP files.
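To make that last point concrete, the reference in the satellite module's project file ends up looking roughly like this (the PRISMMEF.Common path and name are illustrative, taken from the shared project used in this series):

```xml
<!-- In the satellite module's .csproj: a shared reference with "Copy Local"
     turned off, so the assembly is not compiled into the satellite XAP -->
<ProjectReference Include="..\PRISMMEF.Common\PRISMMEF.Common.csproj">
  <Name>PRISMMEF.Common</Name>
  <Private>False</Private>
</ProjectReference>
```

The Private element is the MSBuild equivalent of setting "Copy Local" to false in the Visual Studio properties window.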

When everything comes together as planned, you can scan traffic with a tool like Fiddler to see what happens. Take a look at this example. Here, you can clearly see the satellite modules are loaded dynamically. In fact, the MEF module, which displays a list and dynamically displays a child view, weighs in at only 5,000 bytes!

Click here to view the Fiddler snapshot

I started by adding a new Silverlight Application and calling it MEFModule.

The MEF ViewModel

The MEF view model looks like this:


[Export]
public class MoreStuffViewModel : IPartImportsSatisfiedNotification
{        
    private IService _service; 

    [Import]
    public IService Service
    {
        get { return _service; }
        set
        {
            _service = value;
            if (_service != null)
            {
                _service.GetMoreStuff(_ServiceLoaded); 
            }
        }
    }

    [ImportMany("MoreStuff", AllowRecomposition = true)]
    public List<string> ImportedStuff { get; set; }

    [Import]
    public IRegionManager RegionManager { get; set; }

    public DelegateCommand<object> DynamicViewCommand { get; set; }

    [Import("DynamicView")] 
    public object DynamicView { get; set; }

    private ObservableCollection<string> _listOfMoreStuff;

    public MoreStuffViewModel()
    {
        // initialize all lists
        _listOfMoreStuff = new ObservableCollection<string>();
        DynamicViewCommand = new DelegateCommand<object>(o =>
            {
                RegionManager.RegisterViewWithRegion("MainRegion", () => DynamicView);
            });
    }

    private void _ServiceLoaded(List<string> values)
    {
        foreach (string value in values)
        {
            _listOfMoreStuff.Add(value);
        }
    }
       
    public ObservableCollection<string> ListOfMoreStuff 
    {
        get
        {
            return _listOfMoreStuff;
        }

        set
        {
            _listOfMoreStuff = value; 
        }
    }

    #region IPartImportsSatisfiedNotification Members

    public void OnImportsSatisfied()
    {
        foreach (string importedValue in ImportedStuff)
        {
            _listOfMoreStuff.Add(importedValue);
        }
    }

    #endregion
}

I've purposefully loaded this view model with tons of goodies to help you understand MEF and take advantage of it to its fullest. You'll notice two things right away about the view model: it is exported using the Export attribute, and it implements an interface called IPartImportsSatisfiedNotification. When you implement this interface, MEF will automatically call OnImportsSatisfied once composition has completed, allowing you to react to the new imports.

You'll notice I have a collection called ImportedStuff that imports a contract with a magic string (tip: in a production project this would probably be a type, an enumeration, or at least a static class with a constant to allow for type checking) called MoreStuff. We'll supply this collection's exports somewhere else; for now, it simply waits for imports, and when those imports are satisfied, we merge them into our master ListOfMoreStuff collection.
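For the record, the constant-based alternative to the magic string might look like this (Contracts is a hypothetical class name, not part of the sample project):

```csharp
// Hypothetical central home for MEF contract names, instead of scattering
// magic strings through the code base
public static class Contracts
{
    public const string MoreStuff = "MoreStuff";
    public const string DynamicView = "DynamicView";
}

// The import would then become:
// [ImportMany(Contracts.MoreStuff, AllowRecomposition = true)]
// public List<string> ImportedStuff { get; set; }
```

A typo in a constant fails at compile time; a typo in a magic string just quietly imports nothing.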

We also reference IService. If you remember, when we set that up back in part 1, we went ahead and added an Export tag. Here, we import it. Notice that on the setter, when the import happens, I go ahead and call the GetMoreStuff method, which returns me the Spanish numbers. When these are loaded, we merge them into our master ListOfMoreStuff. So the list will contain the results of the service call and any MoreStuff imports we may find.

We also want to dynamically display another view, but we don't know what that view is yet. Remember in part 2 I explicitly set the export value of IRegionManager using Unity to resolve it? This is where we will import it!

The DynamicViewCommand gives us a command to bind to in order to show that view. The view gets imported with another magic string (again, just stating I know we don't like them, but they are there for the sake of brevity in this example only) called DynamicView. In the constructor for the view model, we bind the command to a call to RegisterViewWithRegion and pass in the dynamic view.

This demonstrates how powerful MEF really is: we are able to specify an action to dynamically show a view here in the view model without any prior knowledge of where the view is or even what it is composed of!

The rest of the view model code simply initializes lists, combines them, etc. We've got our view model in place. How about some views that use it?

The MEF Views

In the views folder, I created two views. The first or "main" view is simply called MoreStuffView and it binds to the ListOfMoreStuff collection as well as the dynamic view command:


<ListBox ItemsSource="{Binding ListOfMoreStuff}">
   <ListBox.ItemTemplate>
        <DataTemplate>
            <TextBlock Text="{Binding}"/>
        </DataTemplate>
    </ListBox.ItemTemplate>
</ListBox>
<Button Grid.Column="1" cal:Click.Command="{Binding DynamicViewCommand}" Content="Add View"/>

The code behind requires two pieces: first, we need to import the view model and bind it:


[Import]
public MoreStuffViewModel ViewModel
{
   get { return (MoreStuffViewModel)LayoutRoot.DataContext; }
   set { LayoutRoot.DataContext = value; }
}

Next, we need to call our friend, the PartInitializer class, to satisfy all of the imports in the constructor:


public MoreStuffView()
{            
    InitializeComponent();           
    PartInitializer.SatisfyImports(this);                
}

And that's it ... all of the services, exports, etc. will be wired in elsewhere. I created a second view called DynamicView. This view simply contains a text block indicating it was dynamically loaded. Remember how we had our "magic string" import for a view in the view model? Here, we export the dynamic view in the code behind. There is only one change to the auto-generated code behind file, and that is to add the Export attribute:


[Export("DynamicView")]
public partial class DynamicView : UserControl
{
    public DynamicView()
    {
        InitializeComponent();
    }
}

That's it for the view model and the views!

Providing Exports

The exports can obviously be supplied in a variety of ways. For this example, I simply created a class and exported a few values to demonstrate that the imports were working and merging with the main list:


public class MoreStuffExports
{        
    [Export("MoreStuff")]
    public string Item1
    {
        get { return "MEF Export 1"; }
    }

    [Export("MoreStuff")]
    public string Item2
    {
        get { return "MEF Export 2"; }
    }            
}

The MEF Module Initializer

Now it's time to wire everything together. There is actually almost nothing to the module initializer class. Because we base it on the PartModule we defined in the last post, the MEF-specific actions happen as part of the base class. In this class, we simply register the main view:


public class MEFInit : PartModule
{
    IRegionManager _regionManager;

    public MEFInit(IRegionManager regionManager)
    {
        _regionManager = regionManager;
    }

    #region IModule Members

    public override void Initialize()
    {
        base.Initialize(); // gotta call base for the MEF magic to happen
        _regionManager.RegisterViewWithRegion("MainRegion", typeof(MoreStuffView));
    }

    #endregion
}

That's it!

Conclusion

The goal of this small series is to demonstrate how well PRISM and MEF work together, and how accessible modular and extensible applications can be in Silverlight 3. You've learned several ways to marry the view model to the view using different types of dependency injection containers and patterns. You've also seen how to take advantage of dynamically loaded modules to create application extensions that have an extremely small footprint but can easily integrate with the flexibility provided by both Unity and MEF in the context of PRISM.

Download the Source Code for this example

Jeremy Likness