NDepend is a product that analyzes large code bases and provides information about dependencies, complexity of code, best practices, and more. While designed to help manage large code bases, it also works well as a "reality check" as you are developing new projects. The latest release supports Silverlight and I'll be using it to clean up my RSS reader a little bit.
The first thing I did was download and install the product. There is an option to integrate with Visual Studio. I chose this option, then opened the RSS reader solution. In the lower right I now have an option to create a new NDepend project and link it to my solution:
A new dialog pops up that prompts me for the projects/assemblies to scan and some other information. In this case, I simply take all of the defaults and click OK.
This creates the new NDepend project and immediately runs the analysis. When it finishes, I'm brought to a web page with a detailed report on my solution.
The report is very comprehensive. For example, I receive summary statistics at the top:
- Number of IL instructions: 1003
- Number of lines of code: 120
- Number of lines of comment: 265
- Percentage comment: 68
- Number of assemblies: 3
- Number of classes: 15
- Number of types: 16
- Number of abstract classes: 0
- Number of interfaces: 1
- Number of value types: 0
- Number of exception classes: 0
- Number of attribute classes: 1
- Number of delegate classes: 0
- Number of enumerations classes: 0
- Number of generic type definitions: 1
- Number of generic method definitions: 6
- Percentage of public types: 81.25%
- Percentage of public methods: 76.81%
- Percentage of classes with at least one public field: 6.25%
The visual map gives me a quick idea of what the dependencies of the project are (click for a larger view):
I also get a nice chart showing the dependencies ... you can imagine how powerful this visualization becomes when you have far more projects than the ones in our little RSS reader:
OK, that's all good and pretty, but how does this help us? We need to dive deeper into the report. Scrolling down, the first thing I find is that I'm scolded for not commenting my SecurityToken method well enough. There is a lot of code but I'm not explaining what it's doing ... fair enough. Better comments will certainly help other developers down the road, but what about performance and code quality?
The report indicates that my Item, RSSElementAttribute, and Channel classes are candidates to be sealed. Why would I want to seal the classes? Many C# developers mistakenly believe that you only seal a class when you want to prevent someone else from deriving from it. Not true.
As Wintellect's own Jeffrey Richter explains in CLR via C#, there are three key reasons why a sealed class is better than an unsealed one:
- Versioning — if you need to unseal it in the future, you can do so without breaking compatibility
- Performance — because the class is sealed, the JIT compiler can generate non-virtual calls (which have less overhead) rather than virtual calls, which must walk the inheritance chain at run time to find the correct override to invoke
- Security and Predictability — classes should manage and protect their state. Allowing a class to be derived from exposes that state to corruption by the deriving code.
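To make this concrete, here is a minimal sketch of the change NDepend is suggesting. The Channel shown is a simplified stand-in, not the reader's actual class definition:

```csharp
// Simplified stand-in for the reader's Channel class, sealed per NDepend's suggestion.
public sealed class Channel
{
    public string Title { get; set; }
    public string Link { get; set; }

    public override string ToString()
    {
        // Because the type is sealed, the JIT can emit direct (non-virtual) calls to its members.
        return string.Format("{0} ({1})", Title, Link);
    }
}

// This no longer compiles once Channel is sealed:
// public class PodcastChannel : Channel { }
```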
Turns out NDepend has given us a pretty good pointer there. We'll come back to these classes in a bit.
Next, I find out two of my classes, Channel and RssReader, are candidates for structures. Structures have certain performance benefits (read more about them here), but if they are too large they may actually degrade performance. A class that is relatively small, has no derived types, and derives directly from System.Object is a candidate for a structure. For the Channel class, we can kill two birds with one stone: structures are also implicitly sealed.
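For illustration, here is roughly the shape of a good structure candidate. FeedStatistics is a hypothetical type, not one from the reader: small, immutable, no derived types, and implicitly sealed.

```csharp
using System;

// Hypothetical example of a structure candidate: small, immutable,
// no derived types, and implicitly sealed because it is a struct.
public struct FeedStatistics
{
    public readonly int ItemCount;
    public readonly DateTime LastUpdated;

    public FeedStatistics(int itemCount, DateTime lastUpdated)
    {
        ItemCount = itemCount;
        LastUpdated = lastUpdated;
    }
}
```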
I could go on and on ... my next step is to continue finding areas where the Code Query Language (CQL) constraints fire and see whether it makes sense to correct them. You can read more about CQL here. This language allows you to write your own constraints for quality-checking your code. Let's jump out of the constraints and look at a few more metrics that the report provides.
Fortunately, most of the items on the report contain hyperlinks to explanations of what the metric or query is. The list of basic metrics is explained here. These are important ones to look into:
Afferent Coupling — this is how many types outside the assembly depend on a given type. A high afferent coupling means the type has a larger responsibility, because more external types depend on it. In our case, only the common project generates this type of dependency.
Efferent Coupling — this is how many types outside the current assembly are depended upon by types within it. A high efferent coupling means we have many dependencies on other types. There will always be some efferent coupling, because even your string, int, and other framework types create a dependency on the core library.
Poor design results in efferent coupling increasing as you move from the database to the presentation layer. This is because more dependencies on database, business, and other types accumulate. A quality design will ensure consistency across the layers, because the presentation layer will only couple to models and types defined in the business layer, but will be completely abstracted from the data layer, etc.
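A hypothetical two-assembly sketch makes the two directions easier to picture. Reader.Model and Reader.Views are made-up names, not the reader's actual layout:

```csharp
namespace Reader.Model
{
    // Types outside this assembly depend on Channel, so the model assembly
    // accumulates afferent coupling: more responsibility, and changes ripple outward.
    public sealed class Channel
    {
        public string Title { get; set; }
    }
}

namespace Reader.Views
{
    // The view assembly depends on the model assembly (efferent coupling).
    // Clean layering keeps the presentation coupled only to models and business
    // contracts, never directly to the data layer.
    public class ChannelHeader
    {
        public string Render(Reader.Model.Channel channel)
        {
            return channel.Title;
        }
    }
}
```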
Relational Cohesion — this refers to interdependencies within an assembly. Typically, related types should be grouped together. A low relational cohesion means there are very few interdependencies. This is not always bad. For example, if you have your domain models in a single assembly, the relational cohesion will depend on how many of those classes aggregate or compose other classes ... it is perfectly normal to have several unrelated types defined. On the flip side, a high cohesion may mean the classes are too interrelated and should be broken out for better de-coupling. There is no hard and fast rule here; it is really a judgment call based on the specific project being analyzed.
Instability is an important metric to consider. It is the ratio of efferent coupling to total coupling (I = Ce / (Ce + Ca)), on a scale from 0 to 1, and it describes the assembly's "resistance to change." A 0 means it is highly resistant to change, while a 1 means it is extremely unstable, or highly sensitive to changes. Because I have grouped models, services, and interfaces together in my project, it fails with a score of 1. This means I need to refactor and break some elements out so they are more independent.
Abstractness is another important measure. It is the ratio of abstract types (interfaces and abstract classes) to the total number of types in the assembly, again on a scale from 0 to 1. I try to target a combination of 0 and 1 for this metric ... the "1," or completely abstract, assemblies being independent projects that define interfaces, and the "0" assemblies being the implementations of those interfaces. We'll refactor the RSS reader to pull the interfaces and abstract classes into separate projects and then re-analyze. This will also help me with my testing efforts, because it's very hard to write quality unit tests for highly coupled classes.
Based on the report, I need to do some refactoring and change the types and modifiers of some of my entities, as well as split these out into separate projects.
Doing this from within the project is simple. You can click on the NDepend icon and get a summary (click for full-sized view):
Next, I click on one of the warning rows to view the detail (for example unused code -> potentially dead methods). This gives me the detail of the violation and where potential violations occurred. I can easily click on the member or class to navigate to the code and fix it directly:
Following the advice of the tool, I created a new "model" project where I moved the basic domain entities (item, channel) and the custom attribute. I followed the advice to seal the classes but left them as classes due to the reflection used to generate instances from the RSS feeds.
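For context, the hydration code is roughly in this spirit; this is a hedged sketch, and FeedHydrator and its members are hypothetical names rather than the reader's actual code. Sealing the entity classes doesn't interfere with reflection-based creation, while value types would be hydrated as boxed copies, so the entities stay (sealed) classes:

```csharp
using System;

// Hypothetical sketch of reflection-based entity creation. Activator.CreateInstance
// and reflection-driven property setting work the same on a sealed class as on an
// unsealed one.
public static class FeedHydrator
{
    public static object CreateEntity(Type entityType, string title)
    {
        var entity = Activator.CreateInstance(entityType);
        entityType.GetProperty("Title").SetValue(entity, title, null);
        return entity;
    }
}
```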
A new service project was created to implement the service. This is important: for unit tests of classes that depend on the service contract, I can now simply mock the interface rather than pulling in an assembly with an actual implementation (read about Using Moq for Silverlight for Advanced Unit Tests).
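As a hedged sketch of that payoff — IRssService, GetChannelTitle, and ChannelViewModel are hypothetical names standing in for the reader's actual contract and consumers:

```csharp
using Moq;

// Hypothetical contract living in its own project; the real reader's interface
// will look different.
public interface IRssService
{
    string GetChannelTitle(string feedUrl);
}

// A consumer that depends only on the contract, never on the implementation.
public class ChannelViewModel
{
    private readonly IRssService _service;

    public ChannelViewModel(IRssService service)
    {
        _service = service;
    }

    public string LoadTitle(string feedUrl)
    {
        return _service.GetChannelTitle(feedUrl);
    }
}

// In a unit test the contract is simply mocked; no reference to the concrete
// service assembly is required.
public static class ChannelViewModelTestSketch
{
    public static void LoadTitle_ReturnsTitleFromService()
    {
        var mock = new Mock<IRssService>();
        mock.Setup(s => s.GetChannelTitle("http://example.com/feed"))
            .Returns("Test Feed");

        var viewModel = new ChannelViewModel(mock.Object);
        string title = viewModel.LoadTitle("http://example.com/feed");
        // title == "Test Feed"
    }
}
```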
I also flattened my structure a bit. I had namespaces containing only one class simply because I was dumping classes into folders, rather than grouping them together and refactoring down the road when it makes sense.
After all of this, I ended up with the new dependency diagram:
My new assemblies no longer have instability ratings of "1", which means the solution as a whole is moving toward better stability and long-term maintainability.
Obviously this was a small project with few changes. The tool becomes far more valuable as you build larger and more involved projects. It can help you sort out dependencies, optimize your code, enforce naming conventions, and much more.
Now we've got the reader a little better suited for future expansion. My next step will be to build the unit tests (they should have been there a lot sooner, I know, I know) and then see if it makes sense to introduce any type of dependency injection.