Thursday, April 28, 2011

Dynamic Types to Simplify Property Change Notification in Silverlight 4 and 5

The biggest problem with data-binding is the requirement to implement the INotifyPropertyChanged interface. There are dozens of solutions out there that try to simplify the process with techniques ranging from parsing lambda expressions and walking the stack frame to using IL weaving to modify classes at compile time. The most popular approach is to derive from a base class and call a base method to handle the event.

The frustration often comes from mapping data objects that don't implement the interface to view models that do. Wouldn't it be nice to have a simple, straightforward way to manage this without duplicating properties and writing tedious mapping code? It turns out there is.

For this particular problem, I started with the solution. Given a model, say, a ContactModel, I wanted to be able to do this:

public PropertyNotifier<ContactModel> Contact { get; set; } 
public void SetContact(ContactModel contact) 
{
   Contact = new PropertyNotifier<ContactModel>(contact); 
}

In other words, a nice type would wrap the object and expose it with full property change glory, and little effort on my part.

So, where to start? To begin with I created a simple base class that allows for property change notification. For now I'm going to ignore some of the interesting ways to actually call the notification.

public abstract class BaseNotify : INotifyPropertyChanged 
{
    public event PropertyChangedEventHandler PropertyChanged;

    public void RaisePropertyChange(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

This class is really all you need to have your own MVVM framework. Next is some heavy lifting. Because the solution uses dynamic types and heavy reflection, it will not work on Windows Phone 7. It will, however, work with Silverlight 4, and there is perhaps an even more elegant solution to be derived from this work in Silverlight 5 by adding ICustomTypeProvider to the mix.
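To show how the base class is consumed on its own, a minimal hand-written view model might look like the sketch below (PersonViewModel and its Name property are illustrative; BaseNotify is repeated so the sketch is self-contained):

```csharp
using System.ComponentModel;

public abstract class BaseNotify : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    public void RaisePropertyChange(string propertyName)
    {
        var handler = PropertyChanged;
        if (handler != null)
        {
            handler(this, new PropertyChangedEventArgs(propertyName));
        }
    }
}

// A hand-written view model built on BaseNotify
public class PersonViewModel : BaseNotify
{
    private string _name;

    public string Name
    {
        get { return _name; }
        set
        {
            _name = value;
            RaisePropertyChange("Name"); // notify any bound controls
        }
    }
}
```

Any control bound to Name will refresh whenever the setter runs, which is exactly the behavior the dynamic proxy will generate automatically later in this post.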

How can this create a bindable object in Silverlight 4 or 5? First, create the shell of the view model. It should create the proxy class with property change notification. It should allow you to pass in a template and have that template mirrored by the proxy. Ideally, it should be easy to get the template back out (i.e. yank out the original model to send on its way after it has been modified). Here's the start:

public class PropertyNotifier<TTemplate> : BaseNotify where TTemplate : class
{
   public TTemplate Instance { get; set; }
   public INotifyPropertyChanged NotifyInstance { get; set; }
}    

Simple enough. Not sure if the notifier instance really deserves a public setter... but it is there for now. Now comes the fun part!

The type must be created on the fly, so it needs a dynamic assembly and module to host the type. There is no sense in creating a new one for each type, so these can be static properties that live on the notifier. There should also be a type dictionary to map the source type to the proxy type (to avoid recreating the proxy type) and a mutex to avoid collisions with the dictionary (thread safety).

private static readonly ModuleBuilder _builder;
private static readonly Dictionary<Type, Type> _types = new Dictionary<Type, Type>();
private static readonly object _mutex = new object();        

static PropertyNotifier()
{
    var assemblyName = new AssemblyName("PropertyNotifier");
    var currentDomain = AppDomain.CurrentDomain;
    var builder = currentDomain.DefineDynamicAssembly(assemblyName, AssemblyBuilderAccess.Run);

    _builder = builder.DefineDynamicModule("PropertyChangeModels");
}

If you are afraid of collisions you can give the assembly a more creative name, such as a GUID or some appended random text. I kept a simple name here, which makes it nice and readable in the debugger. The assembly is created in the current domain and the module is defined to host the dynamic types.

Without understanding the details of how the type is actually built, you can still wire in the constructor and put in a placeholder, like this:

public PropertyNotifier()
{
    Monitor.Enter(_mutex);
    try
    {
        if (!_types.ContainsKey(typeof (TTemplate)))
        {
            _types.Add(typeof(TTemplate), _BuildType()); 
        }                                
    }
    finally
    {
        Monitor.Exit(_mutex);
    }

    var type = _types[typeof (TTemplate)];
            
    NotifyInstance = (INotifyPropertyChanged)Activator.CreateInstance(type);                    
}

public PropertyNotifier(TTemplate instance) : this()
{
    Instance = instance;
}

If the type has not been created, it is built. An overloaded constructor will take in an instance and then set it.

Next, assuming the type is built (we'll get into the gory details later), a few methods will help with mapping properties. First, define a delegate for the getter and setter. Then, define a dictionary of dictionaries. The key to the outer dictionary will be the type, and the inner dictionary will map the property name to the getter or setter method.

private delegate void Setter(object target, object value);

private delegate object Getter(object target);

private static readonly Dictionary<Type, Dictionary<string,Setter>> _setterCache = new Dictionary<Type, Dictionary<string,Setter>>();
private static readonly Dictionary<Type, Dictionary<string,Getter>> _getterCache = new Dictionary<Type, Dictionary<string, Getter>>();

The helper methods will inspect the type for the property information and use reflection to grab the getter or setter. They will then store these in the cache for future look ups:

private static object _GetValue(object target, string property)
{
    Monitor.Enter(_mutex);
    try
    {
        if (!_getterCache[target.GetType()].ContainsKey(property))
        {
            var method = target.GetType().GetProperty(property).GetGetMethod();
            _getterCache[target.GetType()].Add(property, obj => method.Invoke(obj, new object[] {}));
        }
    }
    finally
    {
        Monitor.Exit(_mutex);
    }

    return _getterCache[target.GetType()][property](target);                
}

private static void _SetValue(object target, string property, object value)
{
    Monitor.Enter(_mutex);
    try
    {
        if (!_setterCache[target.GetType()].ContainsKey(property))
        {
            var method = target.GetType().GetProperty(property).GetSetMethod();
            _setterCache[target.GetType()].Add(property, (obj,val) => method.Invoke(obj, new[] { val }));
        }
    }
    finally
    {
        Monitor.Exit(_mutex);
    }

    _setterCache[target.GetType()][property](target, value);
}

You can call the first with an object and the property name to get the value. Call the second with the object, the property name, and the property value to set it. Subsequent calls will not require inspection of the properties as the methods will be cached to call directly (and if you really want to speed it up, look into dynamic methods).
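The parenthetical hint about speeding things up can be sketched with expression trees, which Silverlight 4 supports: compile the getter once into a delegate so later calls skip MethodInfo.Invoke entirely. The FastAccessor and Sample names below are illustrative, not part of this post's code:

```csharp
using System;
using System.Linq.Expressions;
using System.Reflection;

public static class FastAccessor
{
    // Compile a property getter once; subsequent calls avoid reflection Invoke.
    public static Func<object, object> BuildGetter(Type type, string property)
    {
        PropertyInfo info = type.GetProperty(property);
        ParameterExpression target = Expression.Parameter(typeof(object), "target");

        // (object target) => (object)((T)target).Property
        Expression body = Expression.Convert(
            Expression.Property(Expression.Convert(target, type), info),
            typeof(object));

        return Expression.Lambda<Func<object, object>>(body, target).Compile();
    }
}

// Illustrative type for exercising the compiled getter.
public class Sample
{
    public string Name { get; set; }
}
```

A delegate produced this way could be stored in the same getter cache shown above in place of the Invoke-based lambda.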

So the proxy still hasn't been built yet, but that's more complicated. First, get the simple stuff out of the way. When the instance is passed in, automatically wire the properties to the proxy. When the proxy is created, hook into the property change notification to automatically push changes back to the original instance:

private TTemplate _instance;

// original object
public TTemplate Instance
{
    get { return _instance; }
    set
    {                
        _instance = value;
        NotifyInstance = (INotifyPropertyChanged)Activator.CreateInstance(_types[typeof (TTemplate)]);

        foreach(var p in typeof(TTemplate).GetProperties())
        {
            var sourceValue = _GetValue(value, p.Name);
            _SetValue(NotifyInstance, p.Name, sourceValue);
        }

        RaisePropertyChange("Instance");
    }
}

// proxy object
private INotifyPropertyChanged _notifyInstance;

public INotifyPropertyChanged NotifyInstance
{
    get { return _notifyInstance; }
    set
    {
        if (_notifyInstance != null)
        {
            _notifyInstance.PropertyChanged -= _NotifyInstancePropertyChanged;
        }

        _notifyInstance = value;
        _notifyInstance.PropertyChanged += _NotifyInstancePropertyChanged;

        RaisePropertyChange("NotifyInstance");                
    }
}

void _NotifyInstancePropertyChanged(object sender, PropertyChangedEventArgs e)
{
    if (Instance == null)
    {
        return;
    }           

    if (_setterCache[typeof (TTemplate)].ContainsKey(e.PropertyName))
    {
        _SetValue(Instance, e.PropertyName, _GetValue(NotifyInstance, e.PropertyName));
    }
}

OK, all of the proxy and marshalling is in place. Now it's time to build the type! First step is to define the type name and set the parent so it derives from the BaseNotify object:

private static Type _BuildType()
{
    var typeBuilder =
        _builder.DefineType(string.Format("{0}Notifier", typeof (TTemplate).Name), TypeAttributes.Class | TypeAttributes.Public);

    typeBuilder.SetParent(typeof(BaseNotify));
}

Next, grab a handle to the property change method from the base class and set up a dictionary to cache the getters and setters on the template type:

var propertyChange = typeof(BaseNotify).GetMethod("RaisePropertyChange", new[] { typeof(string)});

_getterCache.Add(typeof(TTemplate), new Dictionary<string, Getter>());
_setterCache.Add(typeof(TTemplate), new Dictionary<string, Setter>());
                        

Now comes the fun part, looping through the properties and caching the getters/setters (this is from the template):

foreach (var p in typeof(TTemplate).GetProperties())
{
    var getterInfo = p.GetGetMethod();
    _getterCache[typeof(TTemplate)].Add(p.Name, obj => getterInfo.Invoke(obj, new object[] { }));

    var setterInfo = p.GetSetMethod();
    _setterCache[typeof(TTemplate)].Add(p.Name, (obj, value) => setterInfo.Invoke(obj, new[] { value }));
}

Each property has a private backing field, so create the field on the proxy type:

var field = typeBuilder.DefineField(string.Format("_{0}", p.Name), p.PropertyType, FieldAttributes.Private);                

Next, define the property.

var property = typeBuilder.DefineProperty(p.Name, PropertyAttributes.HasDefault, p.PropertyType,null);

The property needs a getter. This is where the code is a little more interesting because it requires emitting IL code. Fortunately, you can build a sample class and use ILDASM.EXE to disassemble it and learn what the proper op codes are. Here is the getter method:

var getter = typeBuilder.DefineMethod(string.Format("get_{0}", p.Name),
    MethodAttributes.Public |
    MethodAttributes.SpecialName | 
    MethodAttributes.HideBySig,
    p.PropertyType, Type.EmptyTypes);
var getterCode = getter.GetILGenerator();

getterCode.Emit(OpCodes.Ldarg_0);
getterCode.Emit(OpCodes.Ldfld, field);
getterCode.Emit(OpCodes.Ret);

Next is the setter method. The setter method has some extra code that loads the property name and then calls the property change method. That is why the handle to the method was captured earlier.

var setter = typeBuilder.DefineMethod(string.Format("set_{0}", p.Name),  
    MethodAttributes.Public |
    MethodAttributes.SpecialName |
    MethodAttributes.HideBySig, null,
    new[] { p.PropertyType });

var setterCode = setter.GetILGenerator();

setterCode.Emit(OpCodes.Ldarg_0);
setterCode.Emit(OpCodes.Ldarg_1);
setterCode.Emit(OpCodes.Stfld, field);


// property change
// put the property name on the stack
setterCode.Emit(OpCodes.Nop);
setterCode.Emit(OpCodes.Ldarg_0);
setterCode.Emit(OpCodes.Ldstr, p.Name);
setterCode.Emit(OpCodes.Call, propertyChange);
setterCode.Emit(OpCodes.Nop);                

setterCode.Emit(OpCodes.Ret);

Now that the methods have been generated, they must be attached to the property:

property.SetGetMethod(getter);
property.SetSetMethod(setter);

That's the hard part! The easy part is to define a default constructor (calls down to the base) and create the actual type. Remember, this is the method called in the constructor so the type is returned and stored in the dictionary, then the activator is used to create the instance. Also, go ahead and set up the getter and setter cache:

typeBuilder.DefineDefaultConstructor(MethodAttributes.Public);            
            
var type = typeBuilder.CreateType();
            
_getterCache.Add(type,new Dictionary<string, Getter>());
_setterCache.Add(type,new Dictionary<string, Setter>());
            
return type;

Believe it or not, that's what it takes to build a proxy, assuming the base class contains simple properties and no complex nested types or structures. Here's a simple template to test the proxy with:

public class ContactTemplate
{
    public int Id { get; set; }
    public string FirstName { get; set; }
    public string LastName { get; set; }        
}

Here's a view model that is based on the template. It uses the property notifier to wrap the properties with property change notification. It also creates a default template in the constructor just to give you some information to work with when the application runs:

public class ContactViewModel : PropertyNotifier<ContactTemplate>
{       
    public ContactViewModel()
    {
        var template = new ContactTemplate
                            {
                                Id = 1,
                                FirstName = "Jeremy",
                                LastName = "Likness"
                            };
        Instance = template;
    }       
}

Now some XAML to bind it all together:

<Grid x:Name="LayoutRoot" Background="White">
    <Grid.DataContext>
        <ViewModels:ContactViewModel/>
    </Grid.DataContext>
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="Auto"/>
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="Auto"/>
        <ColumnDefinition Width="Auto"/>
    </Grid.ColumnDefinitions>
    <TextBlock Text="First Name: "/>
    <TextBlock Text="Last Name: " Grid.Row="1"/>
    <TextBlock Text="Edit First Name: " Grid.Row="2"/>
    <TextBlock Text="{Binding NotifyInstance.FirstName}" Grid.Column="1"/>
    <TextBlock Text="{Binding NotifyInstance.LastName}" Grid.Row="1" Grid.Column="1"/>
    <TextBox Text="{Binding NotifyInstance.FirstName,Mode=TwoWay}" Grid.Row="2" TextChanged="TextBox_TextChanged" Grid.Column="1" Width="200"/>
</Grid>

When you run the application, you'll find the property change works just fine. Now, with this helper class, anytime you need to take a simple data object and implement property change notification, you can just wrap it in the property notifier and bind to the NotifyInstance property. This works perfectly well in Silverlight 4.

Of course, I won't leave you hanging. Here's the source code to play with!

Jeremy Likness

Wednesday, April 27, 2011

Supporting Non-Supported Entities like IEnumerable in Sterling

While the Sterling NoSQL database supports many items out of the box, there are certain item types it just doesn't know enough about. For example, consider a property defined as IEnumerable. Unlike a list, an enumerable is not bounded. It simply exposes an iterator, and for all we know, that iterator might be an enumerable method that produces random numbers in an infinite loop. The property doesn't reveal what the backing store is, and while Sterling could guess that it is a list or array or other structure, guessing doesn't cut it when faithfully persisting an object graph.

Fortunately, Sterling was designed with extensibility in mind. If it can't infer how to handle a structure, you can train it using custom serializers. With the latest changeset (Sterling Changeset 76881 - part of the path from the 1.4 beta to 1.5 RTM) some key internals to Sterling were exposed that help this process. In the 1.0 version of Sterling, a custom serializer had to manage all serialization from that node. There was no "re-entry" into the recursive model that Sterling uses to walk object graphs, handle foreign keys and manage lists, dictionaries, and other structures. This limited the flexibility of custom serializers. The latest changeset, however, exposes the "serialization helper" that performs most of Sterling's heavy lifting and makes it easy to create extension points to save IEnumerable lists or even teach Sterling how to populate private collections (by default, Sterling only handles public fields and public properties with getters and setters).

Here is an example taken directly from the core Sterling tests. The hypothetical "non-supported" class is based on IEnumerable:

public class NotSupportedList : IEnumerable<TestModel>
{
    private readonly List<TestModel> _list = new List<TestModel>();

    public void Add(IEnumerable<TestModel> newItems)
    {
        _list.AddRange(newItems);
    }

    public IEnumerator<TestModel> GetEnumerator()
    {
        return _list.GetEnumerator();
    }

    IEnumerator IEnumerable.GetEnumerator()
    {
        return GetEnumerator();
    }
}

Obviously this is a contrived example, but it is common to use this pattern to provide read-only lists and hide the management of the list from the processing or iterating of it. Here is a non-supported class that holds the list (non-supported because the list property would ordinarily cause Sterling to choke):

public class NotSupportedClass
{
    public NotSupportedClass()
    {
        InnerList = new NotSupportedList();
    }

    public int Id { get; set; }

    public NotSupportedList InnerList { get; set; }
}

The TestModel is a helper class defined for Sterling tests that demonstrates a lot of the built-in features. It contains nested classes, various data types, sub-lists and more. Serializing it means traversing a rich object graph. In addition, it is defined as a table in the database, so saving a class with these instances should result in being able to query the test models directly and load them individually.

Now that the class is defined, Sterling just needs to "learn" how to manage the non-supported list. This is done by a custom serializer. Custom serializers derive from BaseSerializer. They indicate the types they support by overriding the CanSerialize method. A serializer can handle one or many types. For example, if you wanted to create a serializer that manages any instance that implements an interface ICustomInterface, you would specify support by returning:

return typeof(ICustomInterface).IsAssignableFrom(targetType);

This example will just support the "non-supported" list. To serialize the property, it is simply cast into a list (which Sterling understands) and is passed off to the serializer. This is where the power of the recursive serialization comes into play, because if any other classes or tables exist in the list, Sterling will automatically handle those directly without further intervention from the developer. This will be evidenced by the recursive parsing of the TestModel object graph. Here is the custom serializer:

public class SupportSerializer : BaseSerializer
{
    public override bool CanSerialize(Type targetType)
    {
        // only support the "non-supported" list
        return targetType.Equals(typeof (NotSupportedList));
    }

    public override void Serialize(object target, BinaryWriter writer)
    {
        // turn it into a list and save it 
        var list = new List<TestModel>((NotSupportedList) target);

        // this takes advantage of the special save wrapper for injecting into the stream
        TestCustomSerializer.DatabaseInstance.Helper.Save(list, writer);
    }

    public override object Deserialize(Type type, BinaryReader reader)
    {
        // grab it as a list - again, unwrapped from a node and returned
        var list = TestCustomSerializer.DatabaseInstance.Helper.Load<List<TestModel>>(reader);
        return new NotSupportedList {list};
    }
}

Note that collection initializers automatically bind to the public "Add" method. That is why the NotSupportedList can be returned using the collection-initializer notation: the list is passed to the public Add method and then stored internally.
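That compiler behavior is easy to demonstrate in isolation. The WrappedList type below is an illustrative stand-in (not Sterling's NotSupportedList): the braces in the initializer compile down to a call to the public Add method, even though Add takes a whole sequence rather than a single item:

```csharp
using System.Collections;
using System.Collections.Generic;

public class WrappedList : IEnumerable<int>
{
    private readonly List<int> _list = new List<int>();

    // Collection initializers bind to any accessible method named Add,
    // so "new WrappedList { items }" becomes "tmp.Add(items)".
    public void Add(IEnumerable<int> newItems)
    {
        _list.AddRange(newItems);
    }

    public IEnumerator<int> GetEnumerator() { return _list.GetEnumerator(); }
    IEnumerator IEnumerable.GetEnumerator() { return GetEnumerator(); }
}
```

So `new WrappedList { new[] { 1, 2, 3 } }` constructs the list and then passes the array to Add, which is exactly the trick the deserializer uses above.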

As you can see, very straightforward - the only need is to convert the non-supported enumerable to a list for saving, then extract it and set it back to the property on deserialization. The helper method exposes the functionality needed. Sterling does not require that you modify your types in any way to be used, instead you create a database instance and describe the objects and keys to Sterling. Here is the test database definition:

public class CustomSerializerDatabase : BaseDatabaseInstance
{        
    protected override List<ITableDefinition> RegisterTables()
    {
        return new List<ITableDefinition>
                        {
                            CreateTableDefinition<NotSupportedClass, int>(t=>t.Id),
                            CreateTableDefinition<TestModel,int>(t=>t.Key)
                        };
    }
}

The definition includes the type of the class and the type of the key, then a lambda expression that exposes how to access the key. Saving and loading is simple, as you will see in the test. Setting up the database is as simple as registering the database type as well as registering the custom serializer:

[TestInitialize]
public void TestInit()
{
    _engine = new SterlingEngine();
    _engine.SterlingDatabase.RegisterSerializer<SupportSerializer>();            
    _engine.Activate();
    DatabaseInstance = _engine.SterlingDatabase.RegisterDatabase<CustomSerializerDatabase>();
    DatabaseInstance.Purge();
}

The purge is just there to clear any previous data from prior test runs and would not be part of a normal production application. Now for the test. The test creates the object and stores several test models, then saves it. It confirms that not only was the main class saved, but that independent test model instances were automatically saved as well using Sterling's built-in foreign key functionality. It then reloads the class and confirms that it reloaded correctly - in this case just validating the keys on the test models, as the full object graph is validated with other tests.

[TestMethod]
public void TestCustomSaveAndLoad()
{
    var expectedList = new[] {TestModel.MakeTestModel(), TestModel.MakeTestModel(), TestModel.MakeTestModel()};
    var expected = new NotSupportedClass {Id = 1};
    expected.InnerList.Add(expectedList);

    var key = DatabaseInstance.Save(expected);

    // confirm the test models were saved as "foreign keys" 
    var count = DatabaseInstance.Query<TestModel, int>().Count();

    Assert.AreEqual(expectedList.Length, count, "Load failed: test models were not saved independently.");

    var actual = DatabaseInstance.Load<NotSupportedClass>(key);
    Assert.IsNotNull(actual, "Load failed: instance is null.");
    Assert.AreEqual(expected.Id, actual.Id, "Load failed: key mismatch.");

    // cast to list
    var actualList = new List<TestModel>(actual.InnerList);

    Assert.AreEqual(expectedList.Length, actualList.Count, "Load failed: mismatch in list.");

    foreach (var matchingItem in
        expectedList.Select(item => (from i in actualList where i.Key.Equals(item.Key) select i.Key).FirstOrDefault()).Where(matchingItem => matchingItem < 1))
    {
        Assert.Fail("Test failed: matching models not loaded.");
    }
}        

That's it. As you can see, the current Sterling model makes it extremely simple and easy to save even complicated object graphs and retrieve them with simple queries and load statements. The 1.5 release will be available before the end of May.

Jeremy Likness

Saturday, April 16, 2011

Performance Optimization of Silverlight Applications using Visual Studio 2010 SP1

Today I decided to work on some miscellaneous performance optimization tasks for my Sterling NoSQL Database that runs on both Silverlight and Windows Phone 7. The database will serialize any class to any level of depth without requiring you to use attributes or derive from other classes, and despite doing this while tracking both keys and relationships between classes, it manages to serialize and deserialize almost as fast as native methods that don't provide any of the database features:


Obviously the query column is lightning fast because Sterling uses in-memory indexes and keys. I decided to start focusing on some areas that can really help speed up the performance of the engine. While there are a lot of fancy profiling tools on the market, there really isn't much need to look beyond the tools built into Visual Studio 2010. To perform my analysis, I decided to run the profile against the unit tests for the in-memory version of the database. This will take isolated storage out of the picture and really help me focus on the issues that are part of the application logic itself and not artifacts of blocking file I/O.

The performance profiling wizard is easy to use. With my tests set as the start project, I simply launched the performance wizard:

I chose the CPU sampling. This will give me an idea of where most of the work is being performed. While it's not as precise as knowing time spent in functions, when a sample rate goes down you can be reasonably assured that the function is executing more quickly and less time is being spent there.

Next, I picked the test project.

Finally, I chose to launch profiling so I could get started right away and finished the process.

The profiler ran my application and I had it execute all of the tests. When done, I closed the browser window and the profiler automatically generated a report. The report showed CPU samples over time, which wasn't as useful to me as the section that reads "Functions doing the most individual work." That is where most of the time is spent. Some of the methods weren't a surprise because they involved reflection, but one was a list "contains" method, which implies issues with looking up items in a collection.

Clicking on the offending function quickly showed me where the issues lay. It turns out a lot of time was being spent in my index collections and key collections. This is expected, as it is a primary function of the database, but it is certainly an area worth optimizing.

Clicking on this takes me directly to the source in question, with color-coded lines of source indicating time spent (number of samples):

Swapping my view to the functions view, I can bring the functions with the most samples to the top. A snapshot paints an interesting picture. Of course the contains is a side effect of having to parse the list, which uses the equals and hash codes. Sure enough, you can see Sterling is spending a lot of time evaluating these:

So diving into the code, the first thing I looked at was the Equals method. For the key, it basically said "what I'm being compared to must also be the same type of key, and then the value of the key I hold must also be the same."

public override bool Equals(object obj)
{
    return obj is TableKey<T, TKey> && ((TableKey<T, TKey>) obj).Key.Equals(Key);
}

Because these keys are mainly being compared via LINQ functions and other queries on the same table, I'm fairly confident there won't be an issue with the types not matching. So, I removed that condition from the keys and indexes and ran the performance wizard again to see if there were any significant gains.

This time the list stayed on the top, but the table index compare feature also bubbled up:

One nice feature of the profiler is that you can compare two sessions. I highlighted both sessions and chose the comparison feature:

The report actually showed more samples now for the index, still remaining one of the most problematic areas of code:

Obviously, my issue is with the comparison of keys. In Sterling you can define anything as your key, so Sterling has no way of necessarily optimizing the comparison. Or does it? One comparison guaranteed to be fast is integer comparison. All objects have a hash code. Instead of always comparing the key values, why not evaluate the hash code for the key, then compare that first to eliminate any mismatches? Then only if the hash codes are equal, have Sterling compare the actual keys themselves as well to be sure.

I made those tweaks, and the new equals function:

public override bool Equals(object obj)
{
    return obj.GetHashCode() == _hashCode && ((TableKey<T, TKey>) obj).Key.Equals(Key);
}

Notice the fast check for the hash code first, then evaluation down to the key only if the hash code check succeeds. What's nice about this approach is that Sterling probably won't be doing much unless something interesting like a match happens in the first place, so the extra check only takes place when that "interesting event" happens. All of the boring events now benefit from a faster comparison as they are quickly rejected.
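The _hashCode field used above is presumably computed once when the key is constructed, rather than on every comparison. Here is a minimal self-contained sketch of that idea; the constructor and property shapes are assumptions for illustration, not Sterling's actual source:

```csharp
using System;

public class TableKey<T, TKey>
{
    private readonly int _hashCode;

    public TableKey(TKey key)
    {
        Key = key;
        _hashCode = key.GetHashCode(); // computed once at construction, reused forever
    }

    public TKey Key { get; private set; }

    public override bool Equals(object obj)
    {
        // cheap integer rejection first; the full key comparison only runs on a hash match
        return obj.GetHashCode() == _hashCode && ((TableKey<T, TKey>)obj).Key.Equals(Key);
    }

    public override int GetHashCode()
    {
        return _hashCode;
    }
}
```

Overriding GetHashCode to return the cached value keeps the integer check consistent when two table keys are compared against each other.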

That's the theory, anyway. What did the profiler say? I executed a final profiler run. This time there was definite improvement in the equality function and the list lookup:

Of course, the only caveat here is that the samples aren't statistically significant. The improvement is slight, with a significant margin of error. Fortunately, I was able to run the profiler multiple times against both the old and new code, and consistently saw improvements, so there is some gain, and even a little counts because this is the area where Sterling spends most of its time. After evaluating this, a HashSet<T> may outperform the list, but it is not available on the Windows Phone 7 version, so I have some more tweaking and testing to do.

I also managed to find a few other nuggets, including converting the reflection cache to use dynamic methods, and taking an expensive operation that was evaluating types and caching the lookup in a dictionary instead. After tweaking and checking in these improvements, next up will be looking at the isolated storage driver and potential bottlenecks there.

As you can see, the built-in profiling is very powerful and a useful tool to help improve application performance and identify issues before they become serious.

Jeremy Likness

Thursday, April 14, 2011

Creating a Markup Extension for MEF with Silverlight 5

One exciting new feature of Silverlight 5 is the ability to create custom markup extensions. Markup extensions are a part of XAML. Whenever you see a notation that starts and ends with a brace, you are viewing a markup extension. Markup extensions exist to pass information to properties in the XAML that the XAML parser can use when generating the object graph.

A commonly used example of markup extensions is the data-binding syntax. In the following XAML snippet, the data-binding uses a markup extension to specify that a binding is being defined.

<TextBlock Text="{Binding Path=Name,Mode=TwoWay}"/>

The brace begins the extension, followed by the name of the class that will handle the extension, followed by any parameters you wish to pass. You can do the same thing in code-behind by creating a Binding object directly and setting its properties.
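For reference, here is what that equivalent code-behind looks like as a sketch. The MainPage class and the NameText element name are illustrative assumptions; the snippet assumes a TextBlock with that name exists in the page's XAML:

```csharp
using System.Windows.Controls;
using System.Windows.Data;

public partial class MainPage : UserControl
{
    // Equivalent of Text="{Binding Path=Name, Mode=TwoWay}" built in code-behind.
    private void WireNameBinding()
    {
        var binding = new Binding("Name") { Mode = BindingMode.TwoWay };
        NameText.SetBinding(TextBlock.TextProperty, binding);
    }
}
```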

So how are markup extensions useful? They provide a hook to perform functionality that would be impossible or overly complicated using traditional approaches such as attached properties or behaviors. One popular use of markup extensions is to provide access to static classes that otherwise are not available in XAML. A common reason to do that would be to access a service locator - for example, to generate your view model.

To illustrate how to create your own, I decided to use the Managed Extensibility Framework as an example. It is often challenging to insert classes that are managed by MEF into XAML because they require a step known as "composition" to wire dependencies. With a custom markup extension, however, that limitation goes away because the extension can perform the composition call!

Because MEF does not work the same way in the designer as it does in the runtime, I also wanted to provide a fallback mechanism for design-time views. The markup extension will take a component type (a fully qualified type name) and an optional design type. In the designer, if the design type is specified, it creates an instance and returns it. If not, it creates an instance of the component and returns it. During runtime, when not in the designer tool, the extension takes the additional step of calling MEF to compose the dependencies on the target component.

While the following example view model could be created in XAML, the properties would never get set because they are imported from somewhere else:

public class MainViewModel
{
    [Import("Name")]
    public string Name { get; set; }

    [Import("Email")]
    public string Email { get; set; }
}

The values are exported in another class (just for the sake of this example):

public class Exports
{
    [Export("Name")]
    public string Name
    {
        get { return "Jeremy Likness"; }
    }

    [Export("Email")]
    public string Email
    {
        get { return "[email protected]"; }
    }
}

The design-time view model returns some sample data:

public class DesignViewModel
{
    public string Name { get { return "Joe Designer"; } }
    public string Email { get { return "[email protected]"; } }
}

Now for the custom markup extension. It should inherit from MarkupExtension and the name should end with Extension (similar to the way dependency property definitions end with the word Property). There is just one method to implement and you can provide your own properties. Here is the MEF composition extension:

public class MefComposerExtension : MarkupExtension 
{
    public string Component { get; set; }
    public string Designer { get; set; }

    public override object ProvideValue(IServiceProvider serviceProvider)
    {            
        if (Component == null)
        {
            throw new Exception("Component required.");
        }
            
        if (DesignerProperties.IsInDesignTool)
        {
            return Activator.CreateInstance(Type.GetType(string.IsNullOrEmpty(Designer) ? Component : Designer));
        }

        var instance = Activator.CreateInstance(Type.GetType(Component));
        CompositionInitializer.SatisfyImports(instance);
        return instance;
    }
}

Notice that the first check is whether it is in the design tool. If so, it will return an instance of either the design parameter or the component. If not in the design tool, it will create an instance of the component, and then use MEF to satisfy the imports. To use the custom markup extension, you simply add a reference to the namespace and then reference the name of the extension inside braces. Here is the XAML for the namespace:

<UserControl ... xmlns:local="clr-namespace:TypeConverter" .../>

And here is the XAML for the main page that uses the markup extension to set the data context:

<Grid x:Name="LayoutRoot" Background="White" 
        DataContext="{local:MefComposer Component='TypeConverter.MainViewModel',Designer='TypeConverter.DesignViewModel'}">
    <Grid.RowDefinitions>
        <RowDefinition Height="Auto"/>
        <RowDefinition Height="Auto"/>            
    </Grid.RowDefinitions>
    <Grid.ColumnDefinitions>
        <ColumnDefinition Width="Auto"/>
        <ColumnDefinition Width="Auto"/>
    </Grid.ColumnDefinitions>
    <TextBlock Grid.Row="0" Grid.Column="0" Text="Name: "/>
    <TextBlock Grid.Row="1" Grid.Column="0" Text="Email: "/>
    <TextBlock Grid.Row="0" Grid.Column="1" Text="{Binding Name}"/>
    <TextBlock Grid.Row="1" Grid.Column="1" Text="{Binding Email}"/>        
</Grid>

As you can see, it is fairly straightforward - simply the namespace and the name of the extension, wherever the returned value should be inserted. This works fine in the preview version of Blend. You can see the grid's DataContext property correctly resolve the design view model:

And the design-time data on the design surface:

Of course, the most important step is to run this and see it in action. Sure enough, at runtime it will not only create the view model instance, but compose the parts through MEF and find the imported properties:

Obviously this feature is extremely powerful. While it has existed for some time in WPF, it is now available to Silverlight developers and promises to open many new possibilities moving forward.

Grab the source code for this project here.

Jeremy Likness

Wednesday, April 13, 2011

The MIX 2011 Keynote on Wednesday 4/13 Recap

Today was an exciting day during what many have dubbed "the real" MIX 2011 keynote. There were announcements about Windows Phone 7, Silverlight, and Kinect. So what were my thoughts sitting in the audience?

The awesome community-created Windows Phone 7 video was incredible and I appreciate the recognition they gave the author. Please share that with as many people as you can to help turn it into a bona fide commercial for the phone. Joe Belfiore started with an apology for the update fiasco ... followed by what wasn't necessarily an apology but definitely was a targeted appearance for Qt developers from Nokia. Then the "Mango" goodness began.

There's almost too much to list but needless to say, the pace of the phone perhaps exceeds what Microsoft managed to pull off with the earlier versions of Silverlight. We've got IE9 with HTML5 support (same core engine as the desktop version), reach to more countries for customers and developers, more language support, faster rendering, sockets support (as evidenced by an IRC demo), built-in SQL (don't worry, I'll share what that means for my Sterling Database in future posts), ringtones, motion sensor support and more. Add to that developer tools with accelerometer simulation and location simulation, an amazing built-in performance analyzer, shared XNA and Silverlight, direct camera support and more, and I'd have to say that was one heck of a wallop.

Then came Silverlight 5. Honestly, I have to admit disappointment here. There wasn't much new for them to mention other than the fact that the beta can now be downloaded, as most information was shared during the December firestarter earlier in the year. My disappointment came from the demos and the message that came across. To me Silverlight is a serious enterprise line-of-business platform and Microsoft has had a strong message supporting its use in LOB apps. The representation at MIX was more about web sites and media experiences. 3D is cool, but it's not really line of business except for some specialized applications. I would have loved to see more emphasis on the new power for building line-of-business applications, which is what my upcoming talks at CodeStock and DevLink will focus on.

Finally, the Kinect was announced with some cool SDKs and APIs. The crowd roared when it was announced that attendees would receive a free sensor. Now that's exciting! I believe this keynote packed the punch it needed to. While there wasn't as much play there for Silverlight 5, it hit a home run for Windows Phone 7 and really shows Microsoft's commitment to making the phone a true contender in the smartphone space.

Now onto the sessions!

#msmix11s

Jeremy Likness

Text Search with Ancestor Binding and Child Windows in Silverlight 5

In case you missed it, the release of the Silverlight 5 beta was officially announced at MIX today. Over the next few days in addition to blogging about my experiences at MIX, I'll also be sharing with you some new features that are available in the new Silverlight beta. In this post, I'd like to cover a handful of features that I think are exciting and will make it easier to create line of business applications.

To demonstrate these features, we'll use a sample out-of-browser application that simply shows a list of icons with titles. I chose some social media-style icons just for the sake of a simple demonstration. The features this short demo will show include:

  • Child Windows in Out of Browser (OOB) applications
  • Relative ancestor binding
  • Text (keyboard) selection in list boxes

Stay tuned for more on custom markup extensions, implicit data templates, typed weak references and more.

Of course, for any of this to work you'll need to grab the Silverlight 5 beta. It will run side-by-side with Silverlight 4 (you'll just need to make sure you specify Silverlight 4 as the target version for production applications!). You can download the new bits here.

To get started, I loaded several icons to the project. A simple type encapsulates the path to the icon and a friendly name for it, called ImageItem:

public class ImageItem
{        
    public string Path { get; set; }
    public string Name { get; set; }
}

Next is a simple view model that creates the list of images.

public class MainViewModel
{        
    public ObservableCollection<ImageItem> Images { get; private set; }

    public MainViewModel()
    {
        Images = new ObservableCollection<ImageItem>
        {
            new ImageItem { Path = "/TextSearch;component/facebook-icon.png", Name = "Facebook" },
            new ImageItem { Path = "/TextSearch;component/linkedin-icon.png", Name = "LinkedIn" },
            new ImageItem { Path = "/TextSearch;component/new-rss-xml-feed-icon.png", Name = "RSS" },
            new ImageItem { Path = "/TextSearch;component/twittericon.png", Name = "Twitter" },
            new ImageItem { Path = "/TextSearch;component/YouTube_icon.png", Name = "YouTube" }
        };
    }
}

Now you can pop the view model into a grid and render the images. Here is the first little addition in Silverlight 5: TextSearch. Even though the list box is bound to a complex data type, the text search attached property allows you to specify a path to a logical "text" representation. Why would you want to do that? To be able to select a list box item with the keyboard! This basic functionality has been around in things like combo boxes on the web for ages; it's subtle, but it will be very powerful when wiring up your list boxes. No more scroll and click, just type to jump ahead!

<Grid x:Name="LayoutRoot" Background="White">
    <Grid.DataContext>
        <TextSearch:MainViewModel/>
    </Grid.DataContext>
    <ListBox ItemsSource="{Binding Images}" TextSearch.TextPath="Name">
        <ListBox.ItemTemplate>
            <DataTemplate>
                <Grid>
                    <Grid.ColumnDefinitions>
                        <ColumnDefinition Width="70"/>
                        <ColumnDefinition Width="*"/>
                    </Grid.ColumnDefinitions>
                    <Image                          
                    Width="64" Height="64" Source="{Binding Path}" Tag="{Binding Name}"/>
                    <TextBlock Text="{Binding Name}" FontSize="50" Grid.Column="1"/>
                </Grid>
            </DataTemplate>
        </ListBox.ItemTemplate>
    </ListBox>           
</Grid>

Now when you run the application and get this listbox:

You can click on the list box to give it focus, then type "f" for Facebook, "t" for Twitter, and so forth. The listener uses a timer and accepts multi-key sequences, so for long lists you can type out part of a word.

Next, switch the project to OOB and set elevated trust. The next cool feature to add is child windows. These are standalone windows that can exist on your desktop separate from the main application window. In this example the windows will just show a full-size image of the icon with the name in the title. First, a command to show the window:

public class ItemShowCommand : ICommand 
{        
    public bool CanExecute(object parameter)
    {
        return parameter != null;
    }

    public void Execute(object parameter)
    {
        var item = (ImageItem)parameter;
        var image = new BitmapImage(new Uri(item.Path, UriKind.Relative))
                        {
                            CreateOptions = BitmapCreateOptions.None
                        };
        image.ImageOpened += (o, e) => new Window
                        {
                            Title = item.Name,
                            Height = image.PixelHeight + 20,
                            Width = image.PixelWidth + 20,
                            TopMost = true,
                            Content = new Image { Source = image },
                            Visibility = Visibility.Visible
                        };
    }

    public event EventHandler CanExecuteChanged;
}

Notice that the command constructs an image and forces it to load immediately so the child window can be sized to it. The child window creation is straightforward. By default, the window is created collapsed, so we explicitly set the visibility and add a small margin to the size. Creating the window makes it active - there is no assignment or other step necessary. Once wired in, an example child window will look like this:

Now the command can be added to the main view model with the definition and creation in the constructor:

public ICommand ShowCommand { get; private set; }

public MainViewModel()
{
    ShowCommand = new ItemShowCommand();
    ...
}

Great. Now we can bind to a mouse left button down on the grid ... but wait! Here is a classic problem: the binding for an item in the list is an ImageItem, but the command is on the MainViewModel! How can you get to the command? Fortunately, Silverlight 5 now provides support for ancestor binding. To bind to an ancestor in the visual tree, you simply specify the type and how many levels up. The binding will find that element for you and let you bind to any of its properties. Here is the binding:

<Interactivity:Interaction.Triggers>
    <Interactivity:EventTrigger EventName="MouseLeftButtonDown">
        <Interactivity:InvokeCommandAction Command="{Binding RelativeSource={RelativeSource AncestorType=Grid, AncestorLevel=2}, Path=DataContext.ShowCommand}"
                                            CommandParameter="{Binding}"/>                                                                  
    </Interactivity:EventTrigger>
</Interactivity:Interaction.Triggers>

The type to bind to is a grid. Because the trigger is a child of the grid that holds the list box item, that grid would be "level 1." The parent of that is the list box, and then the list box parent is the grid we want, or "level 2." By specifying this, you can now run the application and:

  1. Type a letter to select an item
  2. Click an item to pop up a new, totally separate window

Now, that's a fairly simple example but hopefully it will help you get started with some fun experiments using the new beta. Feel free to post them here!

Grab the source for this project here.

Jeremy Likness

MIX Sessions Recap for Day One

My first day of MIX was an exciting one. I expected the keynote would cover the non-Silverlight and Windows Phone 7 topics so contrary to the reports of "oh no" you'll invariably hear, it only served to stretch my own excitement as I look forward to today's keynote and what it will have to offer. I already reported on the keynote, so I thought I'd share a bit about sessions.

The first session I attended was HTML5 for Silverlight Developers. It was very eye-opening to me. The speaker had experience on both the Silverlight product teams for earlier versions and the phone, as well as more recent tooling for HTML5 so he spoke from a position of authority. It turns out there are far more similarities between the two than I expected. The comparison, however, also served to draw out the fact we'll hear time and time again: the lack of good tooling for HTML5.

The most interesting part for me was when he designed a full path in Expression Blend, then took the path expression and pasted it into SVG and demonstrated the same drawing in HTML5. That was very interesting, as well as some tools and plug-ins that would generate the code from existing products. He also provided a large chart comparing features between the two ... for example, SVG mapping to XAML, Timer vs. DispatcherTimer, etc. I enjoyed the session and learned quite a bit.

My next session was a Deep Dive MVVM with Laurent Bugnion. Laurent did a great job because there was a ton of information to pack into a very short window of time. While there were some things I didn't necessarily agree 100% with on the approach, the point wasn't about a debate around "what's the pure or right way" but instead, "here is your tool box and these are the challenges people claim to face, so here's a way to solve them." In that sense I think the talk drove home that you shouldn't overcomplicate matters and sometimes it's OK to have a little code-behind (yes, his demo had code-behind!). He managed to hit most of the questions people tend to ask about MVVM with a solution within the framework.

The final session was Pragmatic JavaScript, jQuery, and AJAX with ASP.NET. It was again a very informative session. I learned new ways the Microsoft tools support jQuery and even help provide support for extensions, intellisense, and more. He also tackled some various challenges you might face such as long-polling and how to resolve it using the combination of jQuery and the ASP.NET tools. At the end of the meeting I felt like the open web (meaning tags and JavaScript) looks very fun again and I want to get more involved with it in addition to Silverlight.

As fun as that day was, I know this day will be even more exciting. Stay tuned on my twitter feed (@JeremyLikness) as I report from the keynote floor for what I expect to be some amazing announcements in the Silverlight and Windows Phone 7 space.

#msmix11sw

Jeremy Likness

Tuesday, April 12, 2011

The MIX 2011 Tuesday 4/12 Keynote Recap

Today's keynote was all about the plug-in free web. The keynote started with a barrage of IE9 statistics and demonstrations showing how "native support" is important. By native support, the team is referring to minimizing the layers between the web and the underlying hardware. It was very impressive to see the amazing performance of HTML5 sites rendered within IE9. The flipside, however, is that to make a truly cross-platform experience, developers will now have to consider the slower browsers as the "reference standard" for a "lowest common denominator" to create a consistent experience, and then figure out if and how they might tack on "extras" to take advantage of the bells and whistles in IE9. The time from preview to launch was impressive, and they are now starting a new cycle by providing the IE10 preview at http://ietestdrive.com. They demonstrated SVG animations, videos projected onto canvases and even the world's largest PacMan game for the 30-year anniversary. A little gem was the fact that many of the demos were done on a build running on an ARM processor.

While digesting the idea of web browsers providing an experience "closer to the metal" as IE has set out to do, the focus shifted to the new MVC tools available at http://asp.net/mvc. It was also very impressive to see a website built quickly from scratch with full HTML5 support. The Entity Framework 4.1 allowed the presenters to create some POCO classes and literally generate the database on the fly. While this was impressive to watch, something even more interesting and subtle worked its way into the presentation: the open community support.

By far the most impressive thing I took away from the keynote was the amazing speed at which the open source community has embraced the MVC and WebRazor ecosystem. There are templates you can download and install, widgets, shopping carts, and so much more. In fact, forget the cool technology and HTML5 compatibility. There is a world of support evolving around the toolset that I have not seen anywhere else. The integration with the tools makes it easier than ever before to say, "Hmmm, I wonder if I have to build my own shopping cart," then search and find out that there is already one available, and with only a few short clicks integrate it into your own application. This is powerful, and this is something I think is desperately missing from the Silverlight ecosystem.

Sure, we have open source projects and initiatives, but the volume and level to which the tools embrace these is immature at best. If nothing else, what I took away from the keynote was that there is already a growing and thriving open source ecosystem on the MVC platform and the Silverlight community needs to pay close attention or be left behind. Not for the sake of the future of the technology, but for creating the additional ease in the developer workflow to be able to pick and choose first class components that integrate to create a seamless and phenomenal experience on the web.

I'll be looking forward to what no doubt will be a Silverlight-focused morning of keynote nuggets tomorrow while I consider pulling down and playing with the HTML5 web that has been made so much more accessible by the open source community.

Now I'm onto a session about HTML5 for Silverlight developers!

#msmix11w

Jeremy Likness

Friday, April 8, 2011

Jounce Part 15: Asynchronous Sequential Workflows

One common complexity developers face when writing Silverlight applications is the fact that most operations are forced to be asynchronous. When using the Asynchronous Programming Model (APM) or an event-driven model, these operations force the developer to consider the "chain of events" and provide the right code to correctly manage the sequences. Microsoft will address the issue with the async CTP, and there is always the Reactive Extensions (Rx) library. Jounce provides a very simple and lightweight solution to fill the gap between what is readily available and what you will receive from future libraries and updates.

The Jounce support for asynchronous workflows uses the concept of a "coroutine." To be completely transparent, this is a technique that I learned from Rob Eisenberg and is available in his Caliburn.Micro framework. This is a more simplified version that is intended to be extremely lightweight and specifically address the scenario of handling sequential workflows with asynchronous steps.

To understand how it works, take a look at the IWorkflow interface:

public interface IWorkflow
{
    void Invoke();
    Action Invoked { get; set; }
}

It's a very straightforward interface. The Invoke method is called to start the process. When the process is finished, the Invoked action is called. That process from start to finish is a single asynchronous step.
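To make the contract concrete, here is a minimal sketch of a custom step (illustrative only, not part of Jounce) whose work is synchronous, so it signals completion immediately:

```csharp
// A minimal custom IWorkflow step (hypothetical, not from Jounce):
// the work is synchronous, so Invoke signals completion right away.
public class WorkflowImmediate : IWorkflow
{
    private readonly Action _work;

    public WorkflowImmediate(Action work)
    {
        _work = work;
    }

    public void Invoke()
    {
        _work();    // do the (synchronous) work
        Invoked();  // tell the controller to advance to the next step
    }

    public Action Invoked { get; set; }
}
```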

To manage the workflow, Jounce relies on the fact that iterators in C# function as state machines. You can access this state machine through the enumerator. In order to create a workflow, you chain together a sequence of IWorkflow nodes using an enumerable method. A more detailed explanation of how this works can be read in my coroutines blog post, but the net process ends up looking like this (taken directly from a quick start included with the Jounce source):

private IEnumerable<IWorkflow> Workflow()
{
    Button.Visibility = Visibility.Visible;
    Text.Text = "Initializing...";
    Button.Content = "Not Yet";
    Button.IsEnabled = false;

    yield return new WorkflowDelay(TimeSpan.FromSeconds(2));

    Button.IsEnabled = true;
    Text.Text = "First Click";
    Button.Content = "Click Me!";

    yield return new WorkflowRoutedEvent(() => { }, h => Button.Click += h, h => Button.Click -= h);

    Text.Text = "Now we'll start counting!";
    Button.Content = "Click Me!";

    yield return new WorkflowRoutedEvent(() => { }, h => Button.Click += h, h => Button.Click -= h);

    Button.IsEnabled = false;
    Button.Content = "Counting...";

    yield return new WorkflowBackgroundWorker(_BackgroundWork, _BackgroundProgress);

    Text.Text = "We're Done.";
    Button.Visibility = Visibility.Collapsed;
}

As you can see, several events are happening in a sequential fashion. Some of those events are asynchronous, such as time delays and events. It is all placed in a nice, sequential method without resorting to multiple method calls to chain the events or nested lambda expressions. The biggest benefit here isn't performance, but readability and maintainability of your code.

The whole process is kicked off using the workflow controller, like this:

WorkflowController.Begin(Workflow(),
                            ex => JounceHelper.ExecuteOnUI(() => MessageBox.Show(ex.Message)));

In this case the workflow is begun and any errors are issued using the message box command. You can of course trap errors and handle them as you see fit in your workflow. The quick start includes a text box and some buttons to show that the UI thread is not blocked while the work flow executes. It is a long running task and can even be used to handle navigation or state transitions that occur over multiple screens and input cycles. The state is all maintained by the iterator.
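Under the hood, a controller like this only needs to advance the iterator each time a step signals completion. The following is a simplified sketch of that idea, not Jounce's actual implementation (it omits details such as UI-thread marshaling):

```csharp
// Simplified sketch of a workflow controller: each yielded step's
// Invoked callback advances the enumerator to the next step.
public static class SimpleWorkflowController
{
    public static void Begin(IEnumerable<IWorkflow> workflow, Action<Exception> error)
    {
        var enumerator = workflow.GetEnumerator();
        Action moveNext = null;
        moveNext = () =>
        {
            try
            {
                if (!enumerator.MoveNext())
                {
                    return; // workflow finished
                }
                var step = enumerator.Current;
                step.Invoked = moveNext; // continue once the step completes
                step.Invoke();           // start the (possibly asynchronous) step
            }
            catch (Exception ex)
            {
                error(ex);
            }
        };
        moveNext();
    }
}
```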

To make workflows easy to build, Jounce provides a few "out of the box" implementations.

Delay Workflow

The delay workflow is perhaps the easiest to use to understand the pattern. The controller will call Invoke, and you are responsible for calling Invoked when your work is done. Take a look at the delay:

public class WorkflowDelay : IWorkflow 
{
    private readonly DispatcherTimer _timer;

    public WorkflowDelay(TimeSpan interval)
    {
        if (interval <= TimeSpan.Zero)
        {
            throw new ArgumentOutOfRangeException("interval");
        }

        _timer = new DispatcherTimer {Interval = interval};
    }

    void TimerTick(object sender, EventArgs e)
    {
        _timer.Stop();
        _timer.Tick -= TimerTick;
        Invoked();
    }

    public void Invoke()
    {
        _timer.Tick += TimerTick;            
        _timer.Start();
    }

    public Action Invoked { get; set; }        
}

As you can see, in this case the workflow simply starts a timer. When the timer ticks, it is shut down and invoked is called. Yielding one of these in your workflow simply creates a delay. This is useful, for example, when you have "toasts" or notifications that you want to show for a minimum period of time, or during testing when you want to simulate network delays.

Event Workflow

Another workflow encapsulates events. The invoke method registers for the event, and invoked is called once the event is fired. The constructor takes in delegates to hook and unhook the handler so it can write itself out of the equation when the event is done.

public class WorkflowEvent : IWorkflow 
{
    private readonly Action _begin;

    private readonly EventHandler _handler;

    private readonly Action<EventHandler> _unregister;
        
    public WorkflowEvent(Action begin, Action<EventHandler> register, Action<EventHandler> unregister)
    {
        _begin = begin;
        _unregister = unregister;
        _handler = Completed;
        register(_handler);            
    }

    public void Completed(object sender, EventArgs args)
    {
        Result = args;
        _unregister(_handler);
        Invoked();
    }

    public EventArgs Result { get; private set; }

    public void Invoke()
    {
        _begin();
    }

    public Action Invoked { get; set; }
}

public class WorkflowEvent<T> : IWorkflow where T: EventArgs
{
    private readonly Action _begin;

    private readonly EventHandler<T> _handler;

    private readonly Action<EventHandler<T>> _unregister;

    public WorkflowEvent(Action begin, Action<EventHandler<T>> register, Action<EventHandler<T>> unregister)
    {
        _begin = begin;
        _unregister = unregister;
        _handler = Completed;
        register(_handler);
    }

    public void Completed(object sender, T args)
    {
        Result = args;
        _unregister(_handler);
        Invoked();
    }

    public T Result { get; private set; }

    public void Invoke()
    {
        _begin();
    }

    public Action Invoked { get; set; }
}

There is also a routed event version that does the same. If you need to wait for a control to be loaded, for example, you can yield a routed workflow and hook/unhook into the loaded event. This line from the quick start shows registering and waiting for a click before continuing:

yield return new WorkflowRoutedEvent(() => { }, h => Button.Click += h, h => Button.Click -= h);

Background Worker

You can yield a background worker workflow to start a background worker and wait until it is finished before continuing. The class has support for reporting progress. Again, from the quickstart, you can simply pass what method is called to do the work and what method is called to report progress:

yield return new WorkflowBackgroundWorker(_BackgroundWork, _BackgroundProgress);
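For reference, the two methods passed in plausibly follow the standard BackgroundWorker delegate shapes. The bodies below are illustrative assumptions, not the quick start's actual code:

```csharp
// Illustrative shapes for the delegates passed to WorkflowBackgroundWorker,
// assuming they map onto the standard BackgroundWorker events.
private void _BackgroundWork(object sender, DoWorkEventArgs e)
{
    for (var pct = 0; pct <= 100; pct += 10)
    {
        Thread.Sleep(100); // simulate a unit of work
        ((BackgroundWorker)sender).ReportProgress(pct);
    }
}

private void _BackgroundProgress(object sender, ProgressChangedEventArgs e)
{
    // Called as progress is reported; update the UI here.
}
```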

Action Workflow

By far the most flexible workflow is the action workflow. A great example of how to use this is when dealing with callbacks. I like to simplify asynchronous calls using actions when dealing with a service layer. This lets me easily manage what happens when the service call is complete without using any awkward event or APM code. During testing, it is also easy to mock the callback and not have to bring in additional overhead. What does a service call look like using the workflow model?

Consider a service method that has this signature:

void DoSomethingService(string parameter, Action<Exception,string> callback);

The method initiates an action, taking in a parameter. When completed, you are called back with an exception (only if one occurred) and a result as a string. To wire this up in a workflow, you simply need to remember the pattern: Invoke is called to start the process, and Invoked is called when it is finished. Here is a piece of the workflow that will inject the call and capture the exception and return value:

var doSomethingStep = new WorkflowAction();
            
Exception exception = null;
string result = string.Empty;
            
doSomethingStep.Execute = () => ServiceProvider.DoSomethingService(myParameter, (ex, res) =>
                            {
                                exception = ex;
                                result = res;
                                doSomethingStep.Invoked();
                            });
            
yield return doSomethingStep;

As you can see, the action is wired up to fire the service call. Upon returning, the results are captured and then the "invoked" step is called. In the workflow, once the yield completes, you will have the exception and/or result available to process for your next steps.
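For reference, a WorkflowAction along these lines is only a few lines of code. This sketch is inferred from the usage above rather than taken from Jounce's source:

```csharp
// Sketch of an action-based step, inferred from the usage above:
// Execute kicks off the asynchronous call, and the service callback
// is responsible for firing Invoked when the results arrive.
public class WorkflowAction : IWorkflow
{
    public Action Execute { get; set; }

    public void Invoke()
    {
        Execute();
    }

    public Action Invoked { get; set; }
}
```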

Work it Out

In conclusion, the IWorkflow engine in Jounce is designed to be an extremely lightweight and flexible way to handle asynchronous processes in a sequential fashion, making your code easier to read and more maintainable (and also facilitating long-running transactions that don't block). For more complex scenarios of generating, reading, and reacting, you should certainly consider frameworks like Reactive Extensions (Rx) and the upcoming Async features. Sometimes, however, those libraries are overkill when you just need a few simple steps to execute in sequence, and that's where the workflows in Jounce step in.

Jeremy Likness

Thursday, April 7, 2011

The Art of Debugging

Mastery of the art of debugging is rare. I know this from years of experience working on enterprise systems. If it were simple, more people would be doing it and everyone would be able to track down bugs. The reality is that most shops have that one "go-to" person known as "The Exterminator" who is called in to sweep the place for the bugs no one else was able to track down. At Wintellect, our "Chief Bugslayer" is John Robbins, and with him and the rest of our team, a significant part of our business revolves around finding nasty bugs and fixing them.

I've been working on bug fixes for decades now. One of my first feature articles published in print was a piece called "The Exterminator's Guide." One thing I've found is that effective bug hunting requires a combination of skills. It's not enough to know the technology. There is a method to the madness. There are certain steps that can be learned, and as you encounter more systems during your career, experience only adds to the mix. What has always amazed me is the gap between those who are good at finding defects and those who aren't. You'd think it would be a continuous spectrum of skills, but what I've found is that either people get it or they don't - the ones who do, do it quickly and consistently. So what is the secret?

Train Your Eyes

Do me a favor and take a quick pop quiz. Read the quote below and quickly count the number of F's in the passage.

Finished files are the result
of years of scientific study
combined with the experience
of years.

I'll come back to the answer for that in a second. It would be too easy if I put it there. Just note down what you thought it was, and then let's take something a little more involved. Here's another set of instructions, and trust me, this is all leading up to something. Are you ready for another contest?

I want you to watch a very short movie. Don't click before reading these important instructions! I want you to watch the movie once. Only once - that's it. No more. Otherwise, you're cheating, and we don't like cheaters. But when you watch it, you'll have an assignment.

In the video you are going to watch, you're going to see a group of basketball players. Some are wearing white. Some are wearing black. They are passing two balls around. Got it? White, black, and balls being passed. Here's your mission:

Count how many times the ball is passed by the players wearing white.

That's it. If you think this is an exercise about focus and attention to detail, you're right. So again, when you click the link, watch it only once and count the number of passes by players wearing white.

Here is the link. Go ahead and watch it, then write down your score.

Step One: Click Here and Start Counting!

By now I hope you are starting to see my point, and the first step to mastering the art of debugging. In my experience, the majority of developers don't debug code the right way. When they hit F5 and start stepping through the program, they're not watching what is going on.

What? Am I kidding? They've got breakpoints set. There are watch windows. They are dutifully hammering F10 and F11 to step over and into subroutines. What do I mean? Here's the problem:

They are waiting for the program to do what they expect it to do. And it's hard not to, especially when you are the one who wrote the program! So when you step through that block of code and go, "Yeah, yeah, I'm just initializing some variables here" and quickly hit F10, you've just missed it because that one string literal was spelled incorrectly or you referenced the wrong constant.

The answer to the "F" quiz is 6. Most people count 3 because they sound the words in their head and listen for the "f" sound, rather than just looking at the letters. And that's what people do when they debug - they feel out the program, rather than watching what it is really doing.

Seriously, Train Your Eyes

Did you see the gorilla? Most people won't see it their first time. It's because they are following instructions. They are counting passes, which is exactly what the exercise was about. But can you believe how obvious it is (and yes, now you have permission to watch the video again) once you see it and know what you're looking for? How could you miss something like that?

Hopefully by now we've established that your mind has a pretty good filter and is going to try to give you what you want. So when you step through code with expectations, guess what? You'll see the debugger doing what you expect, and miss out on what is really happening that may be causing the bug.

So What's the Next Step?

There are several things you can do to help hone your debug skills, and I encourage you to try these all out.

Have someone else debug your code, and offer to debug theirs. The best way to understand how to look at code and see what it is doing is to step through code you're not familiar with. It may seem tedious at first, but it's a discipline and skill that can help you learn how to walk through the code the right way and not make any assumptions.

Try not to take in the code as blocks. In other words, when you have a routine that is initializing variables, don't step over it as the "block of initialization stuff." Step through and consider each statement. Don't look at the statements as sentences, but get back to your programming roots and see a set of symbols to the left of the equals sign and a set of symbols to the right of the equals sign. You'll be surprised how this can help you home in quickly on a wrong or duplicate assignment. It's common in MVVM, for example, for developers to cut and paste and end up with code like this:

private string _lastName; 
private string _firstName; 

public string FirstName 
{
   get { return _firstName; }
   set { _firstName = value; RaisePropertyChanged(()=>FirstName); }
}

public string LastName 
{
   get { return _firstName; }
   set { _lastName = value; RaisePropertyChanged(()=>LastName); }
}

Did you spot the bug? If not, take some time and you will. This is far more difficult when it's code you've written because that expectation is there for it to "just work."

Get Back to the Basics

With all of the fancy tools that tell us how to refactor code and scan classes for us, sometimes we forget about the basic tools we have to troubleshoot.

I was working with a client troubleshooting a memory leak issue and found myself staring at huge graphs of dependencies, handles, and instances. I could see certain objects were being created too many times, but looking at the code, it just looked right. Where were the other instances coming from?

So, I got back to the basics. I put a debug statement in the constructor and ran it again. Suddenly I realized that some of the instances were faithfully reporting themselves, and others weren't. How on earth? Ahhh ... the class was derived from a base class. So I put another debug statement in the base class. Sure enough, it was being instantiated as well. A quick dump of the call stack and the problem was resolved ... not by graphs and charts and refactoring tools, but by good old detective work.
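The technique is as low-tech as it sounds. Here is a minimal sketch, where Widget and WidgetBase are hypothetical names standing in for the classes from that engagement:

    public abstract class WidgetBase
    {
        protected WidgetBase()
        {
            // Report every construction, including which derived type it is
            Debug.WriteLine("WidgetBase constructed as {0}", GetType().Name);

            // Dump the call stack to see who is doing the creating
            // (available in full .NET; support varies in Silverlight)
            Debug.WriteLine(new StackTrace(true).ToString());
        }
    }

    public class Widget : WidgetBase
    {
        public Widget()
        {
            Debug.WriteLine("Widget constructed");
        }
    }

Because the base constructor always runs, a single Debug.WriteLine there catches every instantiation of every derived type, which is exactly what the fancy graphs were obscuring.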

Make it Unique

Simple little steps can go a long way. If you are dealing with multiple instances of the same object and all of the properties and fields are the same, don't pull your hair out in frustration (you see what it did to me). Instead, do something simple and easy: put a Guid inside the class and then override ToString() to print the Guid, or use it in your debug statements. Now you'll be able to trace where each statement is coming from.
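A minimal sketch of the Guid-tagging trick, using a hypothetical Contact class:

    public class Contact
    {
        // Unique per instance, assigned at construction
        private readonly Guid _id = Guid.NewGuid();

        public string Name { get; set; }

        public override string ToString()
        {
            // A short prefix of the Guid is usually enough to tell
            // otherwise-identical instances apart in debug output
            return string.Format("Contact {0} [{1}]",
                Name, _id.ToString().Substring(0, 8));
        }
    }

Any Debug.WriteLine or watch window that renders the object now shows which instance you are looking at, even when every other field is identical.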

The First Debugging Tool is Your Mind

Finally, I'm going to give you the same advice my mentor gave me many years ago when I started troubleshooting my first enterprise issues. He told me the goal should be to never have to fire up the debugger. Every debugging session should start with a logical walkthrough of the code. You should analyze what you expect it to do, and walk through it virtually ... if I pass this here, I'll get that, and then that goes there, and this will loop like that ... this exercise will do more than help you comprehend the code. Nine times out of ten I squash bugs by walking through source code and never have to hit F5.

When I do hit F5, I now have an expectation of what the code should do. When it does something different, it's often far easier to pinpoint where the plan went wrong and how the executing code went off script. This skill is especially important in the many production environments that don't allow you to run a debugger at all. I was taught, and have since followed, the philosophy that the combination of source code, well-placed trace statements, and deep thought is all that is needed to fix even the ugliest of bugs.

It's an Art you CAN Learn!

While I can't guarantee you'll be able to squash defects like the Wintellect Bugslayer himself, debugging is an art that can be learned with patience, focus, and experience. I hope the earlier exercises helped you understand the filters that sometimes block your efforts to fix code, and that these tips will help you think differently the next time you are faced with an issue. Remember, there is no defect that can't be fixed ... and no debugger more powerful than the one between your ears.
Jeremy Likness

Tuesday, April 5, 2011

Silverlight 5 Beta Here, HTML 5, Sterling Feedback and More

I have a few things to cover on this post. I want to start with Silverlight 5 but stay tuned for some information about how you can learn more and ask direct questions about the Sterling database.

Silverlight 5 Beta is Here

The Silverlight Team just officially announced that the Silverlight 5 beta will be available next week at MIX 2011. This is an exciting milestone because the new version will close many gaps. I provided my own detailed analysis when the version was first announced (From Silver to Gold in One Release). The recent announcement, however, came with a deeper message that is evident in its title, "Standards-based web, plugins, and Silverlight."

Already reactions are visible on Twitter and in various blog posts. For now, I think we need to take the post at face value and look at the key points discussed, essentially:

  • Silverlight is king for plug-in based experiences
  • Silverlight with XNA are the building blocks for the Windows Phone 7 platform
  • HTML 5 is a solution for "many scenarios"

I think this is great news. I read into this that as an open standard, HTML 5 will grow in reach. The community certainly wants that to happen, and for good reason. There are more devices in play than ever before now that the smart phone market is growing and the playing field isn't just computers and laptops, but includes mobile devices and even gaming consoles. So why not embrace a technology that reaches across platforms and experiences? I think Silverlight developers would be insane not to at least pay attention to HTML 5 and learn what is there in order to make better decisions in the future about what technologies to use.

Having said that, I still believe that for applications (not web-based experiences) technologies like Silverlight make perfect sense. I also believe that while you will see more HTML 5 based platforms, that the native technologies for iOS, RIM, Windows Phone 7, etc. will still play an important role in the high touch, native experience. I don't believe that HTML 5 development kits are going to replace or come close to creating the same experience that an Objective-C, Java, Silverlight, or XNA based application can deliver. It's the same thing we've known in the computer industry for decades: the right tool for the right job.

HTML 5 Tooling

Most developers I know who have worked on multiple platforms are fans of the tooling that Microsoft provides. For any shortcomings, there are plenty of benefits. It's exciting to see part of the message focused on HTML 5 tools. I anticipate we'll hear announcements, and perhaps even see previews at MIX, of tools that make it easier to build HTML 5 applications. I hope the Silverlight community receives this as an exciting step toward building web-based technologies and doesn't turn it into another "oh my, Silverlight is going away" series of defensive rants. I see many projects that would make perfect sense on an HTML-based platform, and my biggest complaint about that platform right now is tooling. I'm excited to see those tools and to gain more options for building the right solution. I continue to be a Silverlight fan and know there are many projects that only make sense in that type of rich environment, but I also want another option for the projects where it doesn't make sense.

The Death of Silverlight for Cross-Platform

I don't believe Silverlight will die, but I do read from the message that the goal of having Silverlight supported on as many devices and platforms as possible is going away. I could be wrong, but it appears that while it will continue to be a first-class citizen in the browser on the desktop and on Windows Phone (and perhaps on other devices such as Xbox if the rumors are true), it is not going to be an active goal of the team to put it in other environments like Android and iOS.

This is disappointing to me because one of the original promises of Silverlight was the ability to write an application once and have it run everywhere. It's a goal that has been pursued time and time again, but no one has achieved it in the past four decades, so why would now be any different? I think the problem is that the barriers to entry in the smartphone market were simply too great. Many decisions outside of Microsoft's control prevented the plug-in from making it to certain platforms. That's not an excuse, because there are certainly places it could be done (Android, Linux) and is being done through some valiant open source and not-so-open-source efforts, but if you can't hit all of the platforms, does it make sense to continue with that agenda, or to pull back and see what does make sense?

I think the combination of native languages and platforms (Java, XNA, Silverlight, Objective-C) for high touch experiences, and standards-based platforms (HTML 5 and JavaScript) will be key moving forward. I do NOT believe HTML 5 will EVER be the only, de-facto standard. I feel it will grow and more solutions will be based on it, but I don't see the native high-touch experiences going away. In fact, I see the open source community rallying behind tools like MonoTouch and MonoDroid to create a best of breed compromise between writing once but deploying natively with the full suite of platform features.

So What Does it Mean?

I think this means Silverlight developers can stay calm and comfortable in the knowledge that the platform is here to stay. I'm certainly involved in many projects using the technology that I don't see going away — in fact, there are many that just couldn't be written in HTML 5 as it stands today. Windows Phone 7 will continue to gain traction and create an environment for development with Silverlight as well, and with the gaps closed by the version 5 release we'll see more line of business, enterprise applications than ever before.

I do think the Silverlight professional who is focused on providing the best possible solution for their customers will no longer remain versed in only Silverlight, but will branch out and understand and embrace HTML 5. If your customers require a presence on smart phones then growing to understand those native platforms will only help you provide a solution that reaches all of the end user devices and isn't limited to the places where the Silverlight plug-in lives. I know I'll be brushing up on my HTML 5 knowledge and skills and finding the time to investigate solutions for other platforms as well so I can speak intelligently to the best possible solution when a customer asks, "How do we solve this problem?"

The future looks bright and I'm excited to hear the announcements that will come next week.

Sterling Feedback

That leads me to my second point. Next week at MIX I'll be participating in the Open Source Fest on April 11th starting at 6pm PST. I'll be there representing Sterling. Please stop by, let me know your experience and feel free to ask any questions, provide any feedback and share any feature requests you have! I'll also be happy to discuss Jounce. It's a great opportunity to learn more about all of the open source projects available while having direct access to the creators to share how we can make those projects better for you.

In addition, I'll be participating in a Microsoft geekSpeak show on April 20th at 12:00pm PST / 3:00pm EST. You are free to call in and ask any questions you like, so it will be a highly interactive session. This is the power of the Internet: you can participate from anywhere in the world, so if you are interested, please register and join me there.

Thanks, I appreciate all of you who read and support this blog. Now that I've shared my thoughts on the upcoming Silverlight 5 release and Microsoft's position, what are your thoughts? Share them through the comments below.

Jeremy Likness