Tuesday, May 25, 2010

Silverlight Out of Browser Dynamic Modules in Offline Mode

Silverlight Out of Browser (OOB) applications are becoming more and more popular due to the convenience of being able to install and launch them locally. As Silverlight applications become larger and more composable, advanced techniques such as dynamically loading modules are also becoming more popular.

The "out of the box" Managed Extensibility Framework provision for dynamic modules is the DeploymentCatalog. This will download a XAP file based on a URI and integrate it with the current solution. It also works in OOB mode and will attempt to retrieve the URI from the same location as the in-browser version (the only caveat is that you must specify the absolute, rather than relative, URI).

What happens if the user is running on their desktop, and offline? This gets quite interesting. It turns out that most requests will simply fall back to the browser cache, so if the XAPs are cached they will load with no problem. However, if the cache has been cleared, you can run into problems.

To address this issue, I created the OfflineCatalog. This MEF catalog behaves like the DeploymentCatalog with a few exceptions. First, it will save any XAP file to isolated storage whenever it retrieves one, and second, if the application is OOB and offline, it will automatically load the XAPs from isolated storage instead of trying to fetch them from the web.

Instead of building my own catalog from scratch, I decided to cheat a little bit and use some of the existing catalogs "under the covers." To start with, we'll base the class on ComposablePartCatalog. I'm setting up some helpers — an aggregate catalog to aggregate the parts I discover, a list of assemblies to load from the XAP, and a static list of parts so that if I use multiple catalogs I won't ever try to load the same assembly more than once. It looks like this:

public class OfflineCatalog : ComposablePartCatalog
{
    private readonly AggregateCatalog _typeCatalogs = new AggregateCatalog();

    private readonly List<Assembly> _assemblies = new List<Assembly>();

    private static readonly List<string> _parts = new List<string>();

    public Uri Uri { get; private set; }

    public OfflineCatalog(string uri)
    {
        Uri = new Uri(uri, UriKind.RelativeOrAbsolute);
    }

    public OfflineCatalog(Uri uri)
    {
        Uri = uri;
    }

    public override IQueryable<ComposablePartDefinition> Parts
    {
        get { return _typeCatalogs.Parts; }
    }
}

The catalog loads asynchronously, so I provide an event to "listen to" for when the loading is complete:

...
public event EventHandler<AsyncCompletedEventArgs> DownloadCompleted;
...

Now I can wire up the download - it will simply try to download the XAP using a web client if the application is online, and read it from isolated storage if the application is offline:

public void DownloadAsync()
{
    if (NetworkInterface.GetIsNetworkAvailable())
    {
        Debug.WriteLine("Begin async download of XAP {0}", Uri);
        var webClient = new WebClient();
        webClient.OpenReadCompleted += WebClientOpenReadCompleted;
        webClient.OpenReadAsync(Uri);
    }
    else
    {
        _ReadFromIso();
    }
}

For this example, I just take the full URI and replace some of the non-friendly characters with dots to make a filename - that is how I'll store/retrieve the catalog from isolated storage:

private string _AsFileName()
{
    return Uri.ToString().Replace(':', '.').Replace('/', '.');
}

Now I can easily read in the file and send the stream off for processing:

private void _ReadFromIso()
{
    Debug.WriteLine("Attempting to retrieve XAP {0} from isolated storage.", Uri);

    using (var iso = IsolatedStorageFile.GetUserStoreForApplication())
    {
        if (iso.FileExists(_AsFileName()))
        {
            _ProcessXap(iso.OpenFile(_AsFileName(), FileMode.Open, FileAccess.Read));
        }
        else
        {
            if (DownloadCompleted != null)
            {
                DownloadCompleted(this, new AsyncCompletedEventArgs(
                                            new Exception(
                                                string.Format(
                                                    "The requested XAP was not found in isolated storage: {0}",
                                                    Uri)), false, null));
            }
        }
    }
}

Now it's simple to wire in the download event. Once downloaded, I simply write to isolated storage and then call the same method to parse it back out:

private void WebClientOpenReadCompleted(object sender, OpenReadCompletedEventArgs e)
{
    Debug.WriteLine("Download of xap {0} completed.", Uri);

    if (e.Error != null)
    {
        // will try to read from ISO as a fallback 
        Debug.WriteLine("Catalog load failed: {0}", e.Error.Message);                
    }
    else
    {
        var isoName = _AsFileName();

        Debug.WriteLine("Attempting to store XAP {0} to local file {1}", Uri, isoName);

        using (var iso = IsolatedStorageFile.GetUserStoreForApplication())
        {
            using (var br = new BinaryReader(e.Result))
            {
                using (var bw = new BinaryWriter(iso.OpenFile(isoName, FileMode.Create, FileAccess.Write)))
                {
                    bw.Write(br.ReadBytes((int) e.Result.Length));
                }
            }
        }
    }

    _ReadFromIso();
}

Notice that if the load fails, I still try to read the isolated storage version as a fallback. Now we need to take a look at the "meat" of the catalog: the method that processes the XAP stream. The _ProcessXap method does several things. The AppManifest.xaml provides us with a list of the parts (assemblies) the XAP contains. We parse that into a LINQ XML document, iterate it, and call a method that loads each part into the assembly space and adds it to the list of assemblies.

I add all of the assemblies first because I want to make sure any dependencies are already loaded before I start putting the parts into the MEF catalogs; otherwise, MEF will choke if I try to add an assembly that references another assembly that hasn't been loaded yet. You can also see how I take advantage of the existing MEF catalogs: for each assembly, I simply call the GetTypes method, pass those types into a TypeCatalog, and add it to the aggregate catalog. When MEF asks for the parts, I simply tell the aggregate catalog to pass along its parts. Take a look at the main loop:

private void _ProcessXap(Stream stream)
{
    var manifestStr = new
        StreamReader(
        Application.GetResourceStream(new StreamResourceInfo(stream, null),
                                        new Uri("AppManifest.xaml", UriKind.Relative))
            .Stream).ReadToEnd();

    var deploymentRoot = XDocument.Parse(manifestStr).Root;

    if (deploymentRoot == null)
    {
        Debug.WriteLine("Unable to find manifest for XAP {0}", Uri);
        if (DownloadCompleted != null)
        {
            DownloadCompleted(this,
                                new AsyncCompletedEventArgs(new Exception("Could not find manifest root in XAP"),
                                                            false, null));
        }
        return;
    }

    var parts = (from p in deploymentRoot.Elements().Elements() select p).ToList();

    foreach (var src in
        from part in parts
        select part.Attribute("Source")
        into srcAttr where srcAttr != null select srcAttr.Value)
    {
        _ProcessPart(src, stream);
    }

    foreach(var assembly in _assemblies)
    {
        try
        {
            _typeCatalogs.Catalogs.Add(new TypeCatalog(assembly.GetTypes()));
        }
        catch (ReflectionTypeLoadException ex)
        {
            Debug.WriteLine("Exception encountered loading types: {0}", ex.Message);

            if (Debugger.IsAttached)
            {
                foreach (var item in ex.LoaderExceptions)
                {
                    Debug.WriteLine("With exception: {0}", item.Message);
                }
            }

            throw;
        }
    }
    
    Debug.WriteLine("Xap file {0} successfully loaded and processed.", Uri);

    if (DownloadCompleted != null)
    {
        DownloadCompleted(this, new AsyncCompletedEventArgs(null, false, null));
    }

}

So how do we process the parts? The AssemblyPart provided by the framework takes care of it for us, as you can see here:

private void _ProcessPart(string src, Stream stream)
{
    Debug.WriteLine("Offline catalog is parsing assembly part {0}", src);

    var assemblyPart = new AssemblyPart();

    var srcInfo = Application.GetResourceStream(new StreamResourceInfo(stream, "application/binary"),
                                                new Uri(src, UriKind.Relative));

    lock (((ICollection)_parts).SyncRoot)
    {
        if (_parts.Contains(src))
        {
            return;
        }

        _parts.Add(src);

        if (src.EndsWith(".dll"))
        {
            var assembly = assemblyPart.Load(srcInfo.Stream);
            _assemblies.Add(assembly);                    
        }
        else
        {
            assemblyPart.Load(srcInfo.Stream);
        }
    }
}      

Notice I am locking on the main list to make sure I don't load a duplicate.

That's it - now we can simply pass one or many of these catalogs to the composition host and we're good to go (basically, take a look at any examples that use the deployment catalog and use this in its place).
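For reference, here's a rough sketch of how one of these catalogs might be wired up (this isn't from a working sample - the module URI is made up, and where you call SatisfyImports depends on how your application composes itself):

// Compose against an aggregate that contains the offline catalog, then start the download.
var catalog = new AggregateCatalog(new DeploymentCatalog()); // parts already in the main XAP
var offlineCatalog = new OfflineCatalog(new Uri("http://www.example.com/ClientBin/DynamicModule.xap"));
catalog.Catalogs.Add(offlineCatalog);

CompositionHost.Initialize(new CompositionContainer(catalog));

offlineCatalog.DownloadCompleted += (o, e) =>
{
    if (e.Error == null)
    {
        // the dynamic parts are now available, so imports can be satisfied
        CompositionInitializer.SatisfyImports(this);
    }
};

offlineCatalog.DownloadAsync();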

Now once the user has the application, they can run it offline even though the modules are dynamic. Of course, you'll have to download all of the modules at least once first - you can put in a check to see whether the application is running on the desktop and force a download to make that happen. The catalog will also automatically fetch fresh XAP files when the application is back online, so you can release updates to modules independently of the fully composed application.

I don't have a project example for this but hope I've provided enough source for you to piece together the catalog yourself and take advantage of it in your Silverlight OOB applications.

Jeremy Likness

Tuesday, May 18, 2010

Making the ScrollViewer Talk in Silverlight 4

Recently I came across the requirement to react to the fact that a user had scrolled a view to the bottom. It sounded easy at first because I imagined hooking into a scroll viewer changed event, listening to the event args, and then reacting when it was done. The only problem was that I couldn't find the appropriate event to bind to!

A quick search found that many people are having the same problem. The "important" events are hidden inside the ScrollViewer, down at the bars. A few solutions walk the visual tree down to the bars and hook in, but I wanted something that was a little more generic and clean.

Fortunately, we can get what we want by listening to properties. I'm going to give an example for the "VerticalOffset" property, which tells us how far the content has been scrolled vertically; when it reaches the ScrollableHeight, the viewer has been scrolled to the bottom. This example, however, isn't just for scroll bars - it will work with any dependency property you want to listen to.

The trick is to create a binding for the property you want to listen to. Then, attach that binding to a dependency property of your own and listen for that property to change!

In our case, we want to watch the vertical offset for a ScrollViewer. Here is the binding:

...
var binding = new Binding("VerticalOffset") { Source = myScrollViewer }; 
...

Next, I want to listen to that binding and react when it changes. I can, for example, interrogate the scroll viewer and see if it's reached the bottom (note that if I wanted, I could also fire a command and indirectly bind the property change to an ICommand on a view model).

...
var offsetChangeListener = DependencyProperty.RegisterAttached(
   "ListenerOffset",
   typeof(object),
   typeof(UserControl),
   new PropertyMetadata(_OnScrollChanged)); 
myScrollViewer.SetBinding(offsetChangeListener, binding);
...
public void _OnScrollChanged(DependencyObject sender, DependencyPropertyChangedEventArgs e) 
{
   // do something with the scroll viewer 
}
...

That's it! You can wrap this into a behavior that you simply drop onto the ScrollViewer and you can make even the most stubborn ones sing.
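If you do go the behavior route, here's a rough sketch of what it might look like (not from the original post - it assumes the System.Windows.Interactivity assembly from the Blend SDK, and the behavior name, attached property, and ScrolledToBottom event are all mine):

public class ScrollToBottomBehavior : Behavior<ScrollViewer>
{
    // A throwaway attached property that exists only so we have something to bind
    // the ScrollViewer's VerticalOffset to and receive change notifications from.
    private static readonly DependencyProperty VerticalOffsetListenerProperty =
        DependencyProperty.RegisterAttached(
            "VerticalOffsetListener",
            typeof(double),
            typeof(ScrollToBottomBehavior),
            new PropertyMetadata(OnVerticalOffsetChanged));

    public event EventHandler ScrolledToBottom;

    protected override void OnAttached()
    {
        base.OnAttached();
        var binding = new Binding("VerticalOffset") { Source = AssociatedObject };
        AssociatedObject.SetBinding(VerticalOffsetListenerProperty, binding);
    }

    private static void OnVerticalOffsetChanged(DependencyObject d, DependencyPropertyChangedEventArgs e)
    {
        var viewer = (ScrollViewer)d;

        // "bottom" here means the offset has caught up with the scrollable height
        if (viewer.VerticalOffset < viewer.ScrollableHeight) return;

        foreach (var behavior in Interaction.GetBehaviors(viewer).OfType<ScrollToBottomBehavior>())
        {
            var handler = behavior.ScrolledToBottom;
            if (handler != null)
            {
                handler(behavior, EventArgs.Empty);
            }
        }
    }
}

Drop it on the ScrollViewer in Blend (or add it to Interaction.Behaviors in XAML) and handle ScrolledToBottom wherever it makes sense.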

Jeremy Likness

Friday, May 14, 2010

WebClient and DeploymentCatalog gotchas in Silverlight OOB

This is a quick post to share a gotcha I found that may be impacting others.

I am in the process of building a large composable application for a customer using Silverlight 4 and the Managed Extensibility Framework (MEF). The application has a framework that supports multiple line of business applications that are dynamically loaded and registered to the main application.

This application works both in browser and out-of-browser, and even in offline mode. The DeploymentCatalog in MEF uses the web client stack under the hood, so it functions the same way as if you requested the files yourself and pulled them down: if online, it requests them from the web; otherwise it looks to the browser cache. If the cache is empty, you're stuck.

The first caveat with working out of browser is that relative URLs no longer work - they are relative to your install location, so they will not find the XAP. There are a few ways to resolve this, but the easiest I've found is to detect when the application is running in the browser (i.e. the first time it is launched) and save the URI to IsolatedStorageSettings. Then, in out-of-browser mode, you can pull it out, reconstruct the URI to the place the host XAP was located, and specify a fully qualified URI:

if (Current.IsRunningOutOfBrowser)
{
    Source = IsolatedStorageSettings.ApplicationSettings[SOURCEURI] as Uri;
}
else
{
    var src = Current.Host.Source.ToString();
    src = src.Substring(0, src.LastIndexOf('/')+1);
    Source = new Uri(src);
    IsolatedStorageSettings.ApplicationSettings[SOURCEURI] = Source;
    IsolatedStorageSettings.ApplicationSettings.Save();
}

Now you can easily make a full URI like this:

var uri = new Uri(Source, "local.xap");

After that point it is mostly business as usual, unless you run into a problem like I did. My application was working perfectly in the browser, but in OOB mode it would not load. I started debugging and found that all of my WebClient and DeploymentCatalog downloads would fire ... but then nothing would happen. They'd never return, not even with an error.

This was very puzzling to me and I researched it far and wide and could not find a resolution. I made a sample side project as a proof of concept and everything worked fine. What was different?

I actually reached out to my team for help when it hit me ... something a little "different" I was doing.

The App.xaml.cs typically sets the RootVisual of your application. You might create a new control and set it directly, or use composition to import it. Because I had multiple modules to load, I wanted to delay setting the root visual until everything was loaded. So I created a special loader class to do this and held off on assigning the root visual until composition was complete.

It turns out this was the culprit. While it worked fine in the web browser, something in OOB was expecting the root visual to be set. When I set it to an empty grid, everything suddenly worked fine. So, I put the empty grid in place and then add the composed control as a child of the grid when composition finishes.
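In code, the workaround looks roughly like this (a sketch only - the Loader class, its Composed event, and its MainView property are placeholders for however you drive composition):

private void Application_Startup(object sender, StartupEventArgs e)
{
    // Give the runtime a root visual right away so the OOB dispatcher/network
    // plumbing has something to latch onto ...
    var root = new Grid();
    RootVisual = root;

    // ... then swap the real, composed view in once MEF composition completes.
    var loader = new Loader();
    loader.Composed += (o, args) => root.Children.Add(loader.MainView);
    loader.BeginComposition();
}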

We were going to add this anyway (to provide a nice status to the user as the parts were being initialized) but I never suspected it would kill my ability to do anything on the network. If you have experienced similar quirkiness in your applications, this may be one place to look.

A colleague believes it may be due to the fact that WebClient and DeploymentCatalog (via web client) both return via Dispatcher, and with no root visual, there may be no dispatcher to latch to ... sounds plausible to me.

Also wanted to share that a case study for an important Silverlight project was officially released by Microsoft:

Vancouver Winter Olympics Case Study Featuring Wintellect's Work with Microsoft and NBC

Learn about Wintellect's work with Microsoft and NBC/CTV to support real time video for the 2010 Vancouver Winter Olympics video site using Smooth Streaming and Silverlight. Read the recently released case study and learn about the massive effort across multiple partners to pull together the on-line solution for streaming HD videos, both live and on demand for the Vancouver Winter Olympics. Get an inside view of Wintellect's contribution to making this project a success with coverage of 360 events, 4.4 million hours of video, and much more!

Here is the link: Winter Olympics Case Study

Jeremy Likness

Sunday, May 9, 2010

MVVM Coding by Convention (Convention over Configuration)

Convention-based programming is an interesting model. In essence, it attempts to reduce the potential for error by handling most scenarios based on conventions or standards, and allowing the developers to focus on the exceptions. Probably one of the most thorough public resources I've seen for the convention-based model is Rob Eisenberg's Build your Own MVVM Framework which he dubs "a framework with an opinion."

Click here to download the source for this example.

I've not been a huge fan or advocate of convention-based programming in the past, but that is quickly changing. What I didn't like was the secrets it keeps. In other words, the convention-based model is great, if and when you know and understand the convention. A lot of "magic" happens. I am a fan of self-documenting code, and if you aren't careful, a convention-based model might make things happen without it ever becoming clear how it happened.

I've also been taking a look at the convention-based model to gain a clearer understanding and see some definite advantages. Perhaps the most intriguing "find" has been that we are always coding by convention anyway ... it's just a question of which convention and how far we take it.

Let's take the simple example of taking text from a text block in Silverlight and sending it off to a service.

The Old-Fashioned Code-Behind Way

One possible way to do this is to simply wire an event in the code-behind and make it happen. Let's take a common scenario: enter some text, then click a button. We want the button to remain disabled until text is available, then submit the text to some service. This is the contract for the service:

public interface ISubmit
{
    void Submit(string text);
}

And our reference implementation is dead simple:

public class SubmissionService : ISubmit 
{
    public void Submit(string text)
    {
        MessageBox.Show(text);
    }
}

Now here is a sample control:

<Grid x:Name="LayoutRoot" Height="50" Background="White">
    <StackPanel Orientation="Horizontal" Margin="5">
        <TextBlock Text="Enter Some Text: " Margin="5"></TextBlock>
        <TextBox x:Name="TextInput" Margin="5" Width="200"/>
        <Button x:Name="SubmitButton" Content="Submit" Margin="5" Click="Button_Click"/>
    </StackPanel>
</Grid>

Next, we wire it all up in the code-behind, like this:

public partial class CodeBehind
{
    public CodeBehind()
    {
        InitializeComponent();
        SubmitButton.IsEnabled = false;
        TextInput.TextChanged += TextInput_TextChanged;
    }

    void TextInput_TextChanged(object sender, System.Windows.Controls.TextChangedEventArgs e)
    {
        SubmitButton.IsEnabled = !string.IsNullOrEmpty(TextInput.Text);
    }

    private void Button_Click(object sender, RoutedEventArgs e)
    {
        ISubmit submit = new SubmissionService();
        submit.Submit(TextInput.Text);
    }
}

That's where most people start. I've often been asked, "Jeremy, it's quick and it's simple ... what's wrong with it?" Nothing, if all you ever do is work with that single text box and single event.

The problem comes when you are composing larger applications. We lose several important aspects of productive development when our code and design is intermingled like this. For example ...

  • Reusability — with the view and logic intertwined, this code can only ever do one thing. We can never reuse the view for something different (perhaps there is a different set up logic you'd want to apply to a view with a text box and a button) and we cannot re-use the business logic because it's all tied into the control.
  • Extensibility — there is no real way to extend this model; we can only modify it.
  • Presentation Separation — this is less evident with the textbox example, but a combo box or radio button makes it clearer: what happens when I change my radio button to a check box, or my combo box to a list box? Without separation, it means I have to modify everything.
  • Testability — for larger applications, catching bugs as early in the process as possible is paramount. This "thing" can only be tested one way. With a cleaner separation, we could test UI and logic separately, and isolate the fixes to one place, rather than having them impact both sides every time and possibly propagate to other controls that aren't re-using the pattern.
  • Designer/Developer Workflow — this is also huge. More and more shops have a design team and a development team, and their schedules don't always synchronize. With this model, you are forced to wait for design to toss something over the fence before development can even get started, and then you end up in a mess if elements change. Wouldn't it be nice if developers could build even before the design team was done, and the design team could toss over results without impacting what was being developed?

Those are just a few reasons I shy from that approach when dealing with large, enterprise projects that are complex and require composability, extensibility, scalability and performance (not to mention the ability for many developers to contribute simultaneously to the success of the project).

The Control Freak

One method to attempt to separate these concerns a little is the control/controller pattern. I built a few applications like this and actually preferred it for ASP.NET WebForms applications (before MVC) to help separate business concerns from control logic. In Silverlight, the pattern might look a little like this ... first, the control:

<Grid x:Name="LayoutRoot" Height="50" Background="White">
    <StackPanel Orientation="Horizontal" Margin="5">
        <TextBlock Text="Enter Some Text: " Margin="5"></TextBlock>
        <TextBox x:Name="TextInput" Margin="5" Width="200"/>
        <Button x:Name="ButtonSubmit" Content="Submit" Margin="5"/>
    </StackPanel>
</Grid>

The controller contract exposes ways we interact with the controller to initialize it, have it save state, etc. For our example, we keep it simple:

public interface IController<in T> where T: UserControl 
{
    void Init(T userControl);
}

Notice that our type is contravariant ... that is, we are allowing the generic to be strongly typed to the actual type of the view itself (going from generic to more specific).

This is implemented for our view like this:

public class ViewController : IController<Controller>
{
    private readonly ISubmit _submit;
    private Controller _controllerView;

    public ViewController(ISubmit submit)
    {
        _submit = submit;
    }

    public void Init(Controller userControl)
    {
        _controllerView = userControl;
        userControl.ButtonSubmit.Click += ButtonSubmit_Click;
        userControl.ButtonSubmit.IsEnabled = _HasText();
        userControl.TextInput.TextChanged += (o, e) => userControl.ButtonSubmit.IsEnabled = _HasText();
    }

    bool _HasText()
    {
        return !string.IsNullOrEmpty(_controllerView.TextInput.Text);
    }

    void ButtonSubmit_Click(object sender, System.Windows.RoutedEventArgs e)
    {
        _submit.Submit(_controllerView.TextInput.Text);
    }
}

In this case, we do manage to separate the logic from the view. We can even abstract the marrying of the controller to the view. There are different ways of doing this, but even a simple factory will do:

public static class ControllerFactory
{
    public static void InitController<T>(T userControl) where T: UserControl 
    {
        if (typeof(T).Equals(typeof(Controller)))
        {
            var controller = (IController<T>)new ViewController(new SubmissionService());
            controller.Init(userControl);
        }                      
    }
}

This checks the type and initializes the appropriate controller, so the code-behind becomes this simple:

public partial class Controller
{
    public Controller()
    {
        InitializeComponent();
        Loaded += (o,e) => ControllerFactory.InitController(this);
    }       
}

(Another popular method is to init the controller, have it create the control and then return or inject it somewhere). While this allowed us to separate the logic from the code-behind, it is still an illusion. The controller knows too much about the view and how it is implemented. There is still tight coupling. I have to drag the view along everywhere the controller goes, so I've really just moved it into a separate file.

This pattern can be evolved with a few steps, such as providing a view contract and then acting on the interface instead of the view. That improves testability, for example. Even with this extra abstraction, however, the contract ends up mirroring events and attributes on the view, and sometimes it might be perceived as extra work that doesn't really gain much.

MVVM Comes to Town

So let's take a look at the popular pattern that I've been using in my Silverlight business applications and teaching for some time now, the Model-View-ViewModel (MVVM) pattern. I've seen people roll their eyes or cringe when MVVM is mentioned because the perception is that it can entail a lot of work. In my experience, however, the benefits far outweigh the necessary infrastructure. In fact, it's like the difference between leasing and buying with a large down payment. With MVVM, you do some heavy lifting (big down payment) up front. Once the plumbing is in place, however, building out and extending becomes very easy (small monthly payments). Other architectures might get you to the first screen faster, but come at a maintenance and extensibility cost.

Here's a peek at the MVVM control:

<UserControl.Resources>
    <ViewModel:TextViewModel x:Key="VM"/>
</UserControl.Resources>
<Grid x:Name="LayoutRoot" DataContext="{StaticResource VM}" Height="50" Background="White">
    <StackPanel Orientation="Horizontal" Margin="5">
        <TextBlock Text="Enter Some Text: " Margin="5"></TextBlock>
        <TextBox x:Name="TextInput" Margin="5" Width="200" Text="{Binding InputText, Mode=TwoWay}"/>
        <Button x:Name="ButtonSubmit" Command="{Binding CommandSubmit}" CommandParameter="{Binding ElementName=TextInput,Path=Text}" Content="Submit" Margin="5"/>
    </StackPanel>
</Grid>

The cost of MVVM is already apparent. We've introduced another dependency in the view, namely, the view model. It's nice we can create it in XAML but now we've essentially coupled the two together. Furthermore, we now have some extra markup. This is the "glue" that holds the pieces together, instructing the controls how to bind with the underlying view model.

What does that view model look like?

public class TextViewModel : INotifyPropertyChanged
{
    public TextViewModel()
    {
        var cmd = new SubmitCommand { Submit = new SubmissionService() };
        CommandSubmit = cmd;        
    }

    private string _text; 

    public string InputText
    {
        get { return _text; }
        set
        {
            _text = value;

            if (PropertyChanged == null) return;

            PropertyChanged(this, new PropertyChangedEventArgs("InputText"));
            ((SubmitCommand)CommandSubmit).RaiseCanExecuteChanged();
        }
    }
      
    public ICommand CommandSubmit { get; set; }        

    public event PropertyChangedEventHandler PropertyChanged;
}

This is a fairly straightforward model. Instead of the business logic mixed in with our code-behind or controller, we now have a more data-centric class. What's nice is that our code doesn't deal with pulling fields from a text box or responding to events. Instead, this is all handled by the framework. We don't go after a text box control to get a string; instead, we simply deal with a string property. We don't have to know whether that string is in a text box or a password box or a custom third-party control. Heck, you can have all three and reuse the same view model. We can test the view model in isolation, and because we are responding to commands, not events, we can also fire a "submit command" without screen-scraping or trying to force a mouse click.

We've seen that a "con" for this approach is some extra overhead and binding noise in the markup, but the benefit is that we don't have to worry about keeping track of changes in controls or moving values into or out of the UI; we just deal with properties, and that "noise" is what glues them together. We can build our view model independently, test it, wire it into services, and much more.
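The SubmitCommand referenced in the constructor isn't shown here (it's in the downloadable source), but a minimal version of such a command - assuming it simply delegates to the ISubmit service and is enabled whenever the command parameter is non-empty text - might look like this:

public class SubmitCommand : ICommand
{
    public ISubmit Submit { get; set; }

    public bool CanExecute(object parameter)
    {
        return !string.IsNullOrEmpty(parameter as string);
    }

    public void Execute(object parameter)
    {
        Submit.Submit((string)parameter);
    }

    public event EventHandler CanExecuteChanged;

    public void RaiseCanExecuteChanged()
    {
        var handler = CanExecuteChanged;
        if (handler != null)
        {
            handler(this, EventArgs.Empty);
        }
    }
}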

Now we come to the convention-based approach. I must admit that this appeared to me to be something a little "trendy" so I considered it with caution. My one fear of using convention-based models is that "magic" happens without really understanding it. With convention programming, the convention of how you structure and name controls can drive how they interact and behave. If you don't know the convention, this can create a bit of mystery.

One moment of epiphany for me came, however, when I was teaching the concept of data-binding to someone not familiar with Silverlight. They suddenly made it very clear that data-binding was a convention they had to learn. In this case, there is a convention of how we get the view model bound to the view. There is also a convention for data-binding itself. We must make sure we have the appropriate command, that we set the mode (one way, two way, etc.) and that the path matches the view model. So in essence, most of us are already programming with convention-based models.

So why not simplify things a bit?

Convention to the Rescue

With a convention-based engine, my control ended up looking like this:

<Grid x:Name="LayoutRoot"  Height="50" Background="White">
    <StackPanel Orientation="Horizontal" Margin="5">
        <TextBlock Text="Enter Some Text: " Margin="5"></TextBlock>
        <TextBox x:Name="InputText" Margin="5" Width="200"/>
        <Button x:Name="Submit" Content="Submit" Margin="5"/>
    </StackPanel>
</Grid>

Notice there is no data-binding and no view model. This will work perfectly well in Blend and the designers can modify it to their heart's content. I only ask that my convention is followed, which in this case means the UI elements are named in accordance with the view model.

First, let's look at the code-behind. I only do one small thing here (and I can easily make a behavior and do it in XAML instead): I tag the control with a convention. I'm doing a string here, but it can easily be an enumeration or something else. I call it "example":

[ControlTag("Example")]
public partial class ConventionMVVM
{
    public ConventionMVVM()
    {
        InitializeComponent();
    }
}
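The ControlTag attribute (along with VMTag and RegionTag, which you'll see shortly) isn't shown in this post, but for MEF to discover the pieces by tag they need to be export attributes that also carry metadata. A rough reconstruction of what ControlTag might look like - the other two would follow the same pattern, exporting Object and ContentControl respectively - is:

// Hypothetical sketch: a custom export attribute carrying a Tag metadata value,
// plus the metadata view interface the convention manager imports against.
public interface IControlTagCapabilities
{
    string Tag { get; }
}

[MetadataAttribute]
[AttributeUsage(AttributeTargets.Class, AllowMultiple = false)]
public class ControlTagAttribute : ExportAttribute
{
    public ControlTagAttribute(string tag)
        : base(typeof(UserControl))
    {
        Tag = tag;
    }

    public string Tag { get; private set; }
}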

Now let's take a peek at the view model. Remember, we didn't wire in a data context or instance it in our view. In fact, there is no direct relationship between the two. When you look at the view model, you'll notice that I've created some methods and properties that match the names in the control; this is their "affinity" and is no more coupled than the path on a data-binding statement. I also tag the view model with the same tag as the view:

[VMTag("Example")]
public class ConventionViewModel : INotifyPropertyChanged 
{
    [Import]
    public ISubmit SubmitService { get; set; }

    private string _text;

    public string InputText
    {
        get { return _text; }
        set
        {
            _text = value;

            if (PropertyChanged == null) return;

            PropertyChanged(this, new PropertyChangedEventArgs("InputText"));
            PropertyChanged(this, new PropertyChangedEventArgs("CanSubmit"));
        }
    }

    public void Submit()
    {
        if (CanSubmit)
        {
            SubmitService.Submit(_text);
        }
    }

    public bool CanSubmit
    {
        get { return !string.IsNullOrEmpty(_text); }
    }
        
    public event PropertyChangedEventHandler PropertyChanged;
}

Notice that I'm not using commands. I implement the notify property changed event and I import the submission service using the Managed Extensibility Framework so I can code against the contract. That's it!

It may seem even more puzzling when you look at my main page. I've hosted all four of the patterns represented here, but only three of the controls are referenced directly:

<Grid x:Name="LayoutRoot" Background="White">
    <StackPanel Orientation="Vertical">
        <Views:CodeBehind/>
        <Views:Controller/>
        <Views:MVVM/>
        <ContentControl x:Name="ConventionRegion"/>
    </StackPanel>
</Grid>

Notice I don't have a reference to the view, but I do have a content control that can be filled. In fact, in the code-behind, I tag it the same way I did the view and the view model:

public partial class MainPage
{
    [RegionTag("Example")]
    public ContentControl ExampleRegion
    {
        get { return ConventionRegion; }
    }

    public MainPage()
    {
        InitializeComponent();
    }
}

Hmmm ... so what is going on here?

Here we decided we wanted to minimize our monthly payments, so we put a big down payment on the car up front. In the convention framework, the bulk of the work goes into the common scenarios. We build these out and isolate them to a single place where we can tweak and troubleshoot. We publish our convention, and when developers follow it, things "just work." The conventions won't cover every situation, of course, but scenarios that aren't covered can be addressed and dealt with as one-offs. And when we see a pattern emerge that is repeated, we can revisit our convention and integrate it back in.

The first piece of the convention model handles gluing view models to views and populating them into regions. This is handled by a ConventionManager class that imports and routes the pieces:

[Export]
public class ConventionManager : IPartImportsSatisfiedNotification
{
    [ImportMany]
    public Lazy<ContentControl, IRegionTagCapabilities>[] Regions { get; set; }

    [ImportMany]
    public Lazy<UserControl, IControlTagCapabilities>[] Views { get; set; }

    [ImportMany]
    public Lazy<Object, IVMTagCapabilities>[] ViewModels { get; set; }

    public void OnImportsSatisfied()
    {
        foreach(var regionImport in Regions)
        {
            var tag = regionImport.Metadata.Tag;
                
            var viewTag = (from v in Views where v.Metadata.Tag.Equals(tag) select v).FirstOrDefault();

            if (viewTag == null) continue;

            var viewModelTag =
                (from vm in ViewModels where vm.Metadata.Tag.Equals(tag) select vm).FirstOrDefault();

            if (viewModelTag == null) continue;

            var view = viewTag.Value;
            _Bind(view, viewModelTag.Value);
            regionImport.Value.Content = view;
        }
    }
}

So far this piece simply routes and glues. The regions, views, and view models with matching tags are bound and routed together. Of course, this is a place where the convention engine can easily be extended. For example, your regions would most likely have panels and items controls so that multiple views could be routed through them. Views and view models might have different metadata tags and other rules for marrying them together. For our simple example, this works. Now let's take a look at the _Bind method ...

private static void _Bind(FrameworkElement view, object viewModel)
{
    var elements = _Elements(view.FindName("LayoutRoot") as Panel).ToList(); 

    foreach(var button in (from e in elements where e is Button select e as Button).ToList())
    {
        var nameProperty = button.GetValue(FrameworkElement.NameProperty);

        if (nameProperty == null) continue;

        var name = nameProperty.ToString();

        var actionMethod = (from m in viewModel.GetType().GetMethods()
                            where
                                m.Name.Equals(name)
                            select m).FirstOrDefault();

        if (actionMethod != null)
        {
            button.Click += (o, e) => actionMethod.Invoke(viewModel, new object[] {});
        }

        var enabledProperty = (from p in viewModel.GetType().GetProperties()
                                where
                                    p.Name.Equals("Can" + name) 
                                select p).FirstOrDefault();

        if (enabledProperty == null) continue;

        button.IsEnabled = (bool) enabledProperty.GetGetMethod().Invoke(viewModel,                   
                                                        new object[] {});
                
        var button1 = button;
        ((INotifyPropertyChanged) viewModel)
            .PropertyChanged +=
            (o, e) =>
                {
                    if (e.PropertyName.Equals("Can" + name))
                    {
                        button1.IsEnabled = (bool) enabledProperty.GetGetMethod()
                                                        .Invoke(viewModel, new object[] {});
                    }
                };
    }

    foreach(var propertyInfo in viewModel.GetType().GetProperties())
    {
        if (propertyInfo.GetGetMethod() == null || propertyInfo.GetSetMethod() == null) continue;

        var propName = propertyInfo.Name;

        var element =
            (from e in elements where propName.Equals(e.GetValue(FrameworkElement.NameProperty)) select e).
                FirstOrDefault();

        if (element == null) continue;

        if (element is TextBox)
        {
            var binding = new Binding
                                {
                                    Source = viewModel,
                                    Path = new PropertyPath(propName),
                                    Mode = BindingMode.TwoWay
                                };
            ((TextBox) element).SetBinding(TextBox.TextProperty, binding);
        }
    }
}

private static IEnumerable<UIElement> _Elements(Panel panel)
{
    yield return panel; 

    foreach(var element in panel.Children)
    {
        if (element is Panel)
        {
            foreach(var child in _Elements((Panel)element))
            {
                yield return child;
            }
        }
        else
        {
            yield return element;
        }
    }
}

This convention focuses on buttons and text boxes. It makes use of iterator functions to simplify recursion: the recursion over controls uses yield statements to flatten the discovered controls into a single sequence. If we were searching or filtering, this would stop once the desired element was found, so it provides some performance benefit as well (in this case we just blow through the whole list for the simple example).

First, we scan the buttons on the form. We look for a view model method with the same name as the button and wire the click event to invoke that method. Furthermore, if a property exists whose name is "Can" followed by the button name, we use it to set the IsEnabled property of the button. We could simply bind directly to a command object, but I wanted to demonstrate the flexibility of convention-based solutions and how it can end up being a simple method and property without having to introduce commands.

Next, we scan the properties on the view model. For each property, we look for an element with the same name. If it exists and is a text box, we create a binding. A full-blown convention model would bind to password boxes, text blocks, toggle buttons (radio buttons and check boxes), combo boxes, and more.
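As one example of how the convention could be extended (this isn't in the sample code), handling a check box would just mean adding another branch to the element loop and binding the toggle state to a bool property of the same name:

// Hypothetical addition to the property loop in _Bind:
if (element is CheckBox)
{
    var binding = new Binding
                        {
                            Source = viewModel,
                            Path = new PropertyPath(propName),
                            Mode = BindingMode.TwoWay
                        };
    ((CheckBox) element).SetBinding(ToggleButton.IsCheckedProperty, binding);
}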

Conclusion

What we've done with the convention model is invested some extra effort up front to handle the common scenarios (button click and text box input, as well as routing of views and view models to regions). Once this is in place, it becomes very easy to add a new view or view model. We simply tag these and then name them. We're still creating a coupling between the view and the view model, but instead of requiring a longer, complex data-binding statement, we've simplified it to a simple name and don't have to remember the binding direction or path syntax, etc.

What I really like about this model is that it truly frees the design team to move forward with minimal impact on the development workflow. In one project I've worked with, the design project is completely separate. The only "noise" is the exports of the controls, which can also be done externally. This means development can proceed with building view models and unit tests and once design submits the result, it's a simple question of matching names and properties.

As I mentioned earlier in this post, this is just a taste of convention. For a more fully developed solution, be sure to refer to Rob's presentation.

Here is the application for you to play with, in the order of: code behind, controller, MVVM, and convention. The source code can be downloaded here.

Jeremy Likness

Sunday, May 2, 2010

Silverlight Out of Browser (OOB) Versions, Images, and Isolated Storage

This is a quick and simple post to address three very common questions I receive about Silverlight Out-of-Browser (OOB) applications. In case you haven't heard, applications made with Silverlight version 3 and later can be installed locally to your machine (whether it is a Windows machine or a Mac) and run "out of the browser" as a local application. In Silverlight 4, you can also run with elevated trust. That is not the focus of this post.

Creating an out of browser application is quite easy. Open the project properties for your Silverlight application. You will find a checkbox that allows for installation when out of browser:

Silverlight Out of Browser

Simply check the box and you are on your way! Of course, you can also click the button to set other properties, such as the name and description for the application, provide icons for installation, and decide whether you want it to run with elevated trust.

Question One: How do I publish new versions of my Silverlight Out-of-Browser Application?

This is easy. Silverlight does not do version checking by itself. To Silverlight, your "version" is basically the Uri of the XAP file from which it was originally downloaded. If you publish a new XAP file for the web version, when the user revisits, it automatically pulls down the newest version (sometimes this doesn't happen immediately because the XAP file may be cached by the browser, similar to images and other resources). The out of browser application is the same: when the application starts, it automatically polls the server. If a new XAP file is detected, the bits are downloaded and the application is updated.

Of course, this sort of "forces" the update out of the box. Jeff Wilcox's blog has an excellent article about Out of Browser that demonstrates how to handle the update experience programmatically by prompting the user.
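The programmatic approach boils down to the CheckAndDownloadUpdateAsync API. A minimal sketch (the prompt text is just an example) looks like this:

// Somewhere early in the application lifetime, e.g. the App startup:
Application.Current.CheckAndDownloadUpdateCompleted += (o, e) =>
{
    if (e.Error == null && e.UpdateAvailable)
    {
        // The new XAP has already been downloaded; it will be used on the next launch.
        MessageBox.Show("An update has been installed. Please restart the application.");
    }
};

Application.Current.CheckAndDownloadUpdateAsync();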

Question Two: Do I need to embed my images for out-of-browser to work?

The answer is "no." Out-of-browser works very similar to "in the browser." Images can fatten XAP files so most people place the images in the ClientBin folder where the XAP file resides. This allows you to reference the image with a relative path:

...
<Image Source="foo.jpg"/>
...

If foo.jpg is located in the root of ClientBin on the server, it will load just fine. But what about with OOB? The same thing happens. The installed application retains a pointer to the Uri where it was installed from (this is how it can check for new versions). When a relative image reference is found, it will simply download it relative to the XAP file directory, the same as the online application. Therefore, you do not have to package the images in the XAP for them to become available.

But what about when the application is offline?

This is tricky territory. The word on the street is that it will still use your browser's cache to retrieve the images. This makes it very unpredictable, because you never know when the cache will clear and in my tests, you'll often end up with an application that has no images. If you are writing an application that spends a good deal of time in offline mode, you will probably want to embed your images in the XAP to make sure they are available all of the time.

Question Three: Do the web and desktop versions of my application share the same isolated storage?

Yes, they do.

This one may seem confusing because the default size of isolated storage for a web application is 1 megabyte (MB) while the default for out-of-browser applications is 25 megabytes. However, despite the difference in quota, both versions point to the same isolated storage location.
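If the default quota is too small, you can ask the user for more from code - the call has to happen in response to a user action such as a button click. A quick sketch:

using (var iso = IsolatedStorageFile.GetUserStoreForApplication())
{
    const long fiveMegabytes = 5 * 1024 * 1024;

    // AvailableFreeSpace and Quota report the current state of the shared store
    if (iso.AvailableFreeSpace < fiveMegabytes)
    {
        // prompts the user; returns true only if they accept the larger quota
        bool granted = iso.IncreaseQuotaTo(iso.Quota + fiveMegabytes);
    }
}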

The Example

To illustrate the image and isolated storage points, I made a very simple out of browser application that you can play with here. It contains an image file that is external to the XAP, so you can see how it is fetched the same way both in and out of browser. It also automatically detects the mode it is running in. In the web browser mode, you can enter text and click a "serialize" button to write that text to isolated storage. In the OOB mode, it has a "deserialize" button that reads from isolated storage and writes into the text box. You can run the web and out of browser versions at the same time, serializing data from the web version and immediately deserializing it from the OOB version to see how they share the same isolated storage.

To install the application to your desktop, simply right-click and select the option to install:

Installing an OOB application

The code for this is simple. Note that I did not do any error checking, so you'll want to serialize some text before attempting to deserialize it. This is the code behind for the application:

public partial class MainPage
{
    private const string FILE = "testserialization.txt";
        
    public MainPage()
    {
        InitializeComponent();
        Application.Current.InstallStateChanged += Current_InstallStateChanged;
        ISOButton.Content = Application.Current.IsRunningOutOfBrowser ? "Deserialize" : "Serialize";
    }

    void Current_InstallStateChanged(object sender, System.EventArgs e)
    {
        ISOButton.Content = Application.Current.IsRunningOutOfBrowser ? "Deserialize" : "Serialize";
    }

    private void IsoButtonClick(object sender, RoutedEventArgs e)
    {
        if (Application.Current.IsRunningOutOfBrowser)
        {
            using (var iso = IsolatedStorageFile.GetUserStoreForApplication())
            {
                if (iso.FileExists(FILE))
                {
                    using (var br = new BinaryReader(iso.OpenFile(FILE, FileMode.Open, FileAccess.Read)))
                    {
                        Text.Text = br.ReadString();
                    }
                }
            }
        }
        else
        {
            using (var iso = IsolatedStorageFile.GetUserStoreForApplication())
            {
                using (var bw = new BinaryWriter(iso.OpenFile(FILE, FileMode.Create, FileAccess.Write)))
                {
                    bw.Write(Text.Text);
                }
            }
            Text.Text = "--- wrote to iso ---";
        }
    }
}

Notice that the content and behavior of the button changes based on whether or not the application is running out of browser (OOB). It also hooks into the InstallStateChanged event to update "on the fly" when the user installs.

Hopefully this short post demonstrates some useful concepts about running your applications out of browser. You can download the source for the example here.

Jeremy Likness

Saturday, May 1, 2010

MEF: DLL Versions and Multiple Exports for a Class

During my talk about the Managed Extensibility Framework (MEF) at Devscovery this past week, I had two very good questions asked by the audience and promised I'd get an answer.

The first one was about exporting in MEF. I was under the impression that a MEF part could have only one export, but I was mistaken. The confusion came from this thread, which doesn't question multiple exports but rather the shared policy (whether there is a single instance of an export, or multiple instances created). It's one of those cases where I had read that, somehow got it in my head it wasn't supported, and then simply didn't have a reason to need it, so I never investigated further.

So, thanks for the great question.

Answer One: you can export a single class under multiple interfaces. Not a problem. And, depending on your creation policy, you can ensure that only one instance is shared across the exports, or that a separate instance is created for the different contracts.
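To make that concrete, here's a small illustrative example - the interfaces are made up, but the attributes are the standard MEF ones:

public interface ILogger { void Log(string message); }

public interface IAuditor { void Audit(string message); }

// One class, two exports. With a Shared creation policy, every importer of
// ILogger or IAuditor receives the same single instance.
[Export(typeof(ILogger))]
[Export(typeof(IAuditor))]
[PartCreationPolicy(CreationPolicy.Shared)]
public class LoggingService : ILogger, IAuditor
{
    public void Log(string message) { Debug.WriteLine(message); }

    public void Audit(string message) { Debug.WriteLine("AUDIT: " + message); }
}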

The second one was a little more interesting. It was about versioning and DLLs. The question was if I import an assembly that has a part, then update the assembly on disk, can I pull in the new version?

Answer Two: no, you cannot reload a newer version of the assembly when it is updated on disk. This isn't a MEF limitation, but part of the CLR itself. If you think about it, you cannot do this without MEF, either. You might overwrite the DLL, but the new version will not get loaded (if the old one already is) until you restart the application. Technically, you can unload the entire app domain or launch a new app domain, but you cannot unload an individual assembly within an app domain.

Wintellect's Jeffrey Richter explains it well in this .NET Rocks interview (PDF transcript). He says:

No you cannot and we will probably never offer that feature. If you think about it, if you load an assembly into an app domain and then you start executing code in that assembly, that may cause other assemblies to get loaded into that app domain as well. And then if you unloaded the first one, we don't know to unload all the others that got loaded accidentally, if you will. So that is one of the reasons why we have app domains, is so that you can do this unloading and then all the other assemblies that got loaded as a side effect get unloaded at the same time. And we are probably going to keep that. We have heard that request from many people that if they want to unload an assembly, may be some day we'll add it, but I think it's very unlikely.

Thanks to everyone who attended and hope this answers your questions.

Jeremy Likness