Archive for December, 2007

Fun with C# Extension Methods: Easy Ranges

I’m not a real Ruby on Rails developer, but I’ve tried to learn it, just to broaden my perspective. Coming from a C# background, I’m impressed by how easy it is to read Ruby code. In fact, it is usually so compact and self-descriptive, you can understand it just by reading the code. Imagine not having to write comments because your code is so clear! That’s what you can do with Ruby.

And now, using extension methods, it’s almost as easy to write C# code which is just as self-descriptive as Ruby code. I’m going to try to demonstrate that in this post.

Before we start, if you want to create your own extension methods, you need to define a static class to put them in:

public static class Extensions

I generally make my extension methods public, because I intend them to be usable for all my projects. I’m putting them all in a common project which I reference in most of my work.

Easy Ranges

.Net 3.5 includes a new static method on the Enumerable class called Range. Its first parameter is the starting integer, and its second is the number of integers to count. So to get a range from 1 to 10, you write:

Enumerable.Range(1, 10)

It might be just me, but personally, I find that syntax a bit counter-intuitive. If I want the range 100 to 200, I’d have to write:

Enumerable.Range(100, 101);

Could I read that later, and easily understand what was being done? Not without a comment or two. When I read that code, it’s not obvious at all what range is being used.

But, let’s not complain. It’s easy to implement our own interpretation and extend an integer with a new method. Let’s make it more Ruby-like.

In Ruby you declare a range from 1 to 10 like this: (1..10). A range of 100 to 200 looks like this: (100..200). That’s nice, but pretty much impossible for us to define in C#. I think it would be just as good though if I could write something like this in my programs:

IEnumerable<int> range = 1.To(3);

For me, that compares quite nicely to the Ruby syntax. Plus, I get intellisense for the method on an integer, so it works well.

To implement that, I just have to define a new extension method like so:

public static IEnumerable<int> To(this int first, int last)
{
  return Enumerable.Range(first, last - first + 1);
}

Using this new method and the new C# LINQ syntax, I could select all even numbers from 1000 to 2000 like this:

var evens = from i in 1000.To(2000)
            where i % 2 == 0
            select i;

foreach (var i in evens)
  Console.WriteLine(i);

This gives 501 items, as you’d expect.

What’s next?

Ruby has a nice little syntax for defining a quick loop. Rails uses it a lot. I’m going to demonstrate how to create the same syntax in C# in my next post.
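As a taste of what I mean, here is the kind of thing I have in mind. The `Times` name and its exact shape are just my sketch of where this is heading, not the final version:

```csharp
using System;

public static class LoopExtensions
{
    // Ruby-style loop: 3.Times(i => ...) invokes the action with
    // i = 0, 1 and 2, much like Ruby's 3.times { |i| ... }.
    public static void Times(this int count, Action<int> action)
    {
        for (int i = 0; i < count; i++)
            action(i);
    }
}

public class Program
{
    public static void Main()
    {
        3.Times(i => Console.WriteLine("Pass number {0}", i));
    }
}
```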


Refactoring C# Series: Use Automatic Property




You have a property in a class which just wraps a field of the same type, and simply returns or sets that field.

private string _field1;
public string Field1
{
  get { return _field1; }
  set { _field1 = value; }
}


Replace it with an automatic property:

public string Field1 { get; set; }

C# Version: 3.0



Encapsulation is quite possibly the key principle of object-oriented design. It is common practice in C# to encapsulate fields by wrapping them in a property.

When a class has many properties, much of the class is taken up by the same coding pattern for a property:

private string _field1;
public string Field1
{
  get { return _field1; }
  set { _field1 = value; }
}

In C# 3.0 this block of code can be removed by using automatic properties. If you declare just the property name with empty get and set accessors, the compiler generates the same backing field and accessor code you would have written yourself.

public string Field1 { get; set; }

Reducing the property code like this makes it much easier to understand the code later, as only the necessary details are defined. Even with a terse formatting style, as in the previous example, you save 5 lines of code per property. With the way I usually format my code, with extra line breaks, I save 9 lines for each one. When a class has a lot of properties, that is a large amount of code which can be removed.

Removing code isn’t just good for making it easier to understand; it makes it easier to test too. You don’t have to test whether properties simply set and get the correct values, because the compiler is doing the work for you. So if you have unit tests, you might be able to remove lots of property testing code.
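For instance, a round-trip test like the following (the Account class and the test name are just made up for illustration) only exercises code the compiler now generates for you, so it can safely go:

```csharp
using System;

class Account
{
    public long ID { get; set; }
}

class PropertyTests
{
    // This get/set round-trip check only verifies compiler-generated
    // accessors, so with automatic properties it adds no real coverage.
    public static void ID_RoundTrips()
    {
        var account = new Account { ID = 42 };
        if (account.ID != 42)
            throw new Exception("ID did not round-trip");
    }

    static void Main()
    {
        ID_RoundTrips();
        Console.WriteLine("Test passed (and proved nothing new)");
    }
}
```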

Of course, if the property does not simply set or get a field value, you must use the previous C# 1.0 syntax for properties. In addition, you cannot specify get or set alone, but you can add an access modifier to make, for example, the setter private:

// The following are not valid:
// Automatically implemented properties must define both get and set accessors
public string GetOnly { get; } // Not valid
public string SetOnly { set; } // Not valid either

// But this is okay
public string GetWithPrivateSet { get; private set; }


  • Remove the bodies of the property accessors, replacing the code inside the get and set blocks with a semicolon (;).
  • Compile.
  • If there is a compiler warning that the field is no longer being used, you can now simply remove the field which the property was wrapping.
  • If there is no compiler warning, other methods or properties are using the field directly. You can remove the field and recompile anyway; the compiler errors will show you each location where the field is used. Replace each reference with a reference to the property instead.
  • Compile.

Repeat for each property.

If the field was a protected field, not private, then you might have subclasses which access the field. They will have to be changed to access the property instead of directly accessing the field.

Tip: For new code, you can simply use the code snippet “prop” in Visual Studio 2008 to get an automatic property.


I’ll just do an example with one property, even though it looks a lot better with a longer class.

class Account
{
  private long _id;
  public long ID
  {
    get { return _id; }
    set { _id = value; }
  }
}

First, remove the body of the property getter and setter (and apply a bit of reformatting too, if you like):

class Account
{
  private long _id;
  public long ID { get; set; }
}

Compile, to make sure the code is happy, then remove the field:

class Account
{
  public long ID { get; set; }
}


A Mixin for IComparable

Following on from my other posts on C# Mixins, here’s a short one to demonstrate the benefits of Mixins using IComparable<T>.

I don’t know about you, but I can never remember how the CompareTo method of IComparable<T> works. If I remember correctly, it gives back -1 if the value of the compared object is less than the value of the called object, and +1 if the compared object is greater than the value of the called object.

No, wait! That’s the wrong way round! See what I mean?

The CompareTo method is defined like this:

int CompareTo(T other)

According to the MSDN Library, the value it returns is:

  • Less than zero: This object is less than the other parameter.
  • Zero: This object is equal to other.
  • Greater than zero: This object is greater than other.
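One way to pin the convention down is to try CompareTo on plain integers, which already implement IComparable&lt;int&gt;:

```csharp
using System;

class Program
{
    static void Main()
    {
        // The sign of the result describes the object you call
        // CompareTo on, relative to the argument.
        Console.WriteLine(5.CompareTo(10) < 0);  // True: 5 is less than 10
        Console.WriteLine(10.CompareTo(10));     // 0: they are equal
        Console.WriteLine(10.CompareTo(5) > 0);  // True: 10 is greater than 5
    }
}
```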

Now I don’t know about you, but I always have trouble with that. Mix-ins to the rescue!

What I really need is to define my own methods on the IComparable&lt;T&gt; interface: something like LessThan, MoreThan and ValueEquals. That would be much more readable. I could define those methods in a superclass, and have all my new classes inherit from that superclass. But that would bind me to a certain structure, reduce the coherence of my classes, and make me feel bad. If instead I implement IComparable&lt;T&gt; with its sole method, I can use a Mixin to take advantage of that method and add the new functionality I need, without affecting the structure of my code.

Here’s an example using the Temperature class shamelessly lifted from the MSDN Library.

The class Temperature is defined as:

public class Temperature : IComparable<Temperature>
{
  // Implement the CompareTo method. For the parameter type, use
  // the type specified for the type parameter of the generic
  // IComparable interface.
  public int CompareTo(Temperature other)
  {
    // The temperature comparison depends on the comparison of
    // the underlying Double values. Because the CompareTo method is
    // strongly typed, it is not necessary to test for the correct
    // object type.
    return m_value.CompareTo(other.m_value);
  }

  // The underlying temperature value, in degrees Kelvin.
  protected double m_value = 0.0;

  public Temperature(double degreesKelvin)
  {
    m_value = degreesKelvin;
  }
}
Note that Temperature implements IComparable<Temperature>.

So now if you define a static extension class for IComparable<T>, you should also be able to use it with IComparable<Temperature>.

Here are the extension methods I’m going to add – LessThan, MoreThan, and ValueEquals:

public static class IComparableExtensions
{
  public static bool LessThan<T>(this IComparable<T> comparable, T other)
  {
    return comparable.CompareTo(other) < 0;
  }

  public static bool MoreThan<T>(this IComparable<T> comparable, T other)
  {
    return comparable.CompareTo(other) > 0;
  }

  public static bool ValueEquals<T>(this IComparable<T> comparable, T other)
  {
    return comparable.CompareTo(other) == 0;
  }
}

My program now references the IComparableExtensions class by importing its namespace. All I had to do was implement IComparable&lt;T&gt; with its one CompareTo method (which was trivial), and I automatically get the extension methods LessThan, MoreThan and ValueEquals mixed in to my Temperature class. I can then write:

var t1 = new Temperature(273);
var t2 = new Temperature(100);

if (t1.LessThan(t2))
  Console.WriteLine("t1 is less than t2");
else if (t1.MoreThan(t2))
  Console.WriteLine("t1 is more than t2");

And this doesn’t just work with IComparable<Temperature>, it works with anything which implements IComparable<T>. Classes implementing IComparable<int> would also get access to the new methods, for example.
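For example, System.String implements IComparable&lt;string&gt;, so the same mixin shows up on plain strings too. I'm repeating just the LessThan method here so the snippet stands alone:

```csharp
using System;

public static class IComparableExtensions
{
    public static bool LessThan<T>(this IComparable<T> comparable, T other)
    {
        return comparable.CompareTo(other) < 0;
    }
}

class Program
{
    static void Main()
    {
        // String implements IComparable<string>, so LessThan is
        // mixed in to every string automatically.
        Console.WriteLine("apple".LessThan("banana")); // True
    }
}
```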

Here’s the output of the code:

t1 is more than t2

… as you would expect.

And voila! That’s why mix-ins can be so useful.

I’ve intentionally done this example using a well-known interface, to make it easier to understand. But imagine what you could do by inheriting from a particular class of your own, defining your own interface, and then adding in more functionality using a mixin. You’ve nearly got multiple inheritance.

More to come on this…


A not-so-simple Mixin with C# 3.0

My last post gave a simple idea of how to do a Mixin with C#. Rather than repeating what someone else has already done, if you want to see a more complex example of what can be done, check out Create Mixins with Interfaces and Extension Methods by Bill Wagner.


A simple Mix-in with C# 3.0

Heard of mix-ins? They’re an alternative to multiple inheritance, made popular recently by Ruby.

Basically, you can use them to “mix in” methods from an interface with their implementations into a class.

In Ruby you can do this by including a module in a class. In C#, you do it by implementing an interface and defining an extension method for the interface.

Here’s a simple example.

First, define the interface. In this case, it won’t have any special features, so the interface is empty. We’ll call it IDebug, as it is going to let us call a method to get details of the object it is implemented on.

public interface IDebug
{
}

After the interface is set, define a static class with an extension method for the interface. We’ll just define one method here, called “GetTypeInfo”.

public static class DebugExtensions
{
  public static string GetTypeInfo(this IDebug debug)
  {
    return String.Format("{0} ({1}): {2}",
        debug.GetType().Name,
        debug.GetHashCode().ToString(),
        debug.ToString());
  }
}

The method returns the name of the class (not the interface) where the interface is implemented, plus a few extra bits of information.

Now implement a couple of classes which implement the interface.

class MyClass : IDebug
{
  public override string ToString()
  {
    return "I am an instance of MyClass";
  }
}

class MyOtherClass : IDebug
{
  public override string ToString()
  {
    return "I am an instance of MyOtherClass";
  }
}

Now, magically, the method “GetTypeInfo” is included with the class as an extension method.

In the method you call this from, you then need to add a "using" directive for the namespace of the extension class.

After you’ve done that you can call the method from the mix-in.

var myObj = new MyClass();
var myObj2 = new MyOtherClass();

Console.WriteLine(myObj.GetTypeInfo());
Console.WriteLine(myObj2.GetTypeInfo());

The output of this is:

MyClass (7995840): I am an instance of MyClass
MyOtherClass (56251872): I am an instance of MyOtherClass


New series: Refactoring C# 1.0 code to C# 3.0

I really like Scott Hanselman’s idea to write an indefinite series of posts about reading code to become a better developer. I’m going to copy his idea, and write a series of my own.

Since its first version, C# has evolved from being a Java clone to something much more dynamic. I’ve noticed that developers often find themselves stuck on long projects, and it’s sometimes hard to keep up with all the changes. I know a lot of developers who are still using .Net 1.1 because the project they are working on forces them to. For them, C# is still very much like Java.

So for all those who want to know what has changed since the first version, I’ve decided to make a new series of posts called “Refactoring from C# 1.0 to C# 3.0”. I will show through examples how you can make your code easier to understand and maintain by using the new features in C#. I’m not necessarily going to do it in historical order – I won’t show any preference for C# 2.0 Generics over C# 3.0 Extension Methods, for example. And C# 2.0 anonymous methods will take second place to C# 3.0 lambda expressions, which generally replace them. I’m going to try to show how things have changed, and when you should or should not use the new features.

Some of the things I’ll cover are:

  • Anonymous types
  • Anonymous methods and lambda expressions
  • Extension Methods
  • Yield statements and iterators
  • Generics
  • List comprehensions (à la LINQ)
  • Mixins
  • Partial types
  • Type and Array inference
  • Property visibility
  • Automatic properties
  • Static classes
  • The Global namespace
  • Object, Collection and Dictionary initializers

I’ll try to treat each one as a refactoring opportunity, and not as a “must” or “must-not”. The idea is to write more maintainable code using the new features, not just go along with the trends.

Note: I’m not going to treat the .Net base library at all. Just the C# language.


Learning LINQ with LINQPad

I’ve been trying to learn LINQ. Chris Sells recommended C# 3.0 in a Nutshell, which has turned out to be really good. The name of the book doesn’t really do the LINQ part of it any justice – it could have been called C# 3.0 and LINQ, as the LINQ section is so good. If you want to learn LINQ in depth, with an easy to follow explanation, C# 3.0 in a Nutshell is a great option.

Along with the book, the author created a small application called LINQPad. It’s absolutely fantastic for learning LINQ with. And best of all, it’s free.

And if that weren’t enough, tucked deep inside the LINQPad samples is a link to a very helpful diagram for learning C# LINQ comprehension syntax. And that’s free too!

Not only is the author good at writing, he’s extremely generous too.


Qualities of a .Net Application Design

I’m often asked to produce design documents for new applications. I can’t do this without discussing the advantages and disadvantages to each part of the design. A great way to do this is to focus on the desired qualities of the system you’re trying to build. Define the qualities you are trying to achieve, and design each part of the system to match those qualities as best as possible.

It helps to have a list of things to consider. Here’s a list of some of the qualities you need to address for a typical .Net application:

Development-Time Qualities

  • Modifiability – (Often called “flexibility“, and especially important for agile development.) How easily can the software be modified to accommodate additional requirements and changes? Can changes be localized to as little code as possible? Do public interfaces have to change? How are the parts of the application dependent on one another? Do other parts of the application depend on the areas that are most likely to change? To what extent are concerns separated?
  • Reusability – Can units of the application be reused elsewhere? Does the code fit the “DRY” principle (Don’t Repeat Yourself)? Does the system make good use of existing standards? Reusing code from other frameworks and systems can help keep testing and maintenance costs down, but may reduce modifiability.
  • Portability – Could the system be easily ported to other platforms? This is often not an issue for most enterprise applications, as business requirements typically change more frequently than the technical platforms, and the .Net framework takes care of most of the issues. For a web application, portability could also include browser compatibility. What happens if the users suddenly receive an upgrade to their browsers?
  • Testability – How easy is it to prove that the application functions correctly, and that the various parts of the application do what they are supposed to do in isolation of one another? Can the application be tested automatically? Can the application eventually be tested easily end-to-end as part of an integration test? Can the application be load-tested?
  • Buildability – Can the application be built on systems other than those in the development environment? How often can it be built? The more often it can be built, the better. What tools can be used to do the builds? Can they implement continuous integration by building at every check-in? Which third-party controls or components are used? What part do licenses for third-party products play in the ability to build centrally?
  • Maintainability – How easy is it to understand the software for people other than the original developers? How easy is it to correct defects? Is the system documented enough to make it understandable?
  • Debugability – How easy is it to find out where and why there is an error in the system? How easy is it to find what is going wrong in a live, productive system?
  • Extensibility – How easy is it to extend the functionality of the system beyond its original specification?

Runtime Qualities

  • Functionality – How well does the software help the users to do their work? Are there missing features which make the software useless? Does the software provide end-to-end support for a particular process?
  • Usability – How easy is it for users to understand and learn to use the software?  Is the interface intuitive? Is there a supportive help system? Does the user interface match the users’ needs? Are basic usability standards and conventions adhered to?
  • Performance – Does the system perform fast enough? How does it perform when multiple users are active? Where are the bottlenecks and latency in the system?
  • Concurrency – Is it possible for more than one user to use the system at a time? What is the performance impact of multiple sessions? Is state shared across the application, making the system unreliable?
  • Security – Does the system prevent unauthorized access or misuse? What happens if access is wrongly denied to a user? How easily can security be administered? Does the STRIDE threat model show up any weaknesses? What is the potential outcome of a successful attack?
  • Integrity – This can be related to security, but it also refers to the data integrity of the system. Is the data always kept in a consistent state, or is it sometimes possible to infer different states of the system because of inconsistencies in the system? Do transactions pass the ACID test?
  • Reliability – Can the users rely on the software? Will it always perform correctly? Is it still usable at peak usage periods?
  • Availability – How often and for how long is the system available for use? How do upgrades affect its availability? Is the system meant to be used across time-zones?
  • Scalability – How does the software cope with adding more users over time? Can the system cope with an increase in the volume of data? What happens if the users start to use the system more often? Must the software scale up or can it scale out?
  • Deployability – How easy is it to deploy the initial release of the software? How easy is it to release future releases? Does software need to be replaced, or just patched? How does the deployment affect scalability?
  • Upgradability – Can a new version be deployed without stopping the normal operation of the system? What must be changed or upgraded when a new version is released? What third-party software may be upgraded which would affect the software?
  • Correctness – Does the software do exactly what was specified?
  • Conceptual Integrity – How balanced, simple, elegant, and practical is the whole system? Is there a clear vision as to what the software should do? Is the design consistent?
  • User Responsiveness – Related to performance and usability; how does the user perceive the application to be responding? Does the system stop responding for long periods of time, especially when retrieving data? Does the latency of parts of the system lead to decreased usability? Does the software use multiple threads to improve responsiveness?
  • Interoperability – How easily can the system be used with other systems?
  • Robustness – How does the system react to abnormal conditions? Does it crash, or recover gracefully? Do error messages baffle users and decrease usability?


Software qualities must always be balanced against one another. There’s no point in having better performance than is needed, for example, especially if achieving it makes maintenance harder. Each quality has a particular importance for the system being developed.

In general the highest priority qualities are:

  • Maintainability
  • Correctness
  • Reliability

Also important for .Net Object-Oriented design are:

  • Reusability
  • Extensibility

Most of my day-to-day work focuses on good maintainability and modifiability. Changes come along all the time, and the people paying for the software want them done fast. The faster you can adapt to their needs, the more they trust you, and the better your business relationship.


ASP.Net MVC Framework leads you to extension method heaven

The first pre-release of the new ASP.Net MVC (ahem, Ruby on Rails for .Net) framework has just been made public.

I find it really exciting that Scott Guthrie and his team are listening to what the people want. Webforms is really quite heavy, especially in comparison to Ruby on Rails, so by offering new frameworks Microsoft will gain new developers. And new developers equals more servers, so it’s a good business model, methinks.

A major plus for me is seeing how the Framework shows off the new C# 3.0 features. I love Python and Ruby, but with no Visual Studio support for them yet, they’re not easy to use. Until now, the expressiveness of Python and Ruby has been missing from C# – it grew up from Java after all – but C# 3.0 is now moving towards that expressiveness too. With XAML and C#, things have really changed for developers over the past 5 or so years.

For example, check out this post from Rob Conery. He demonstrates an extension method which allows you to create a list of attributes as a string. You just call "ToAttributeList()" on any object, and you get a string back like field1="0" field2="1".

I can already think of loads of places where I would like to use that. There’s already a “ToDictionary()” method in the .Net 3.5 Framework, which can be used to create a dictionary from a list of objects, using a field as the key. But I can go further and create a dictionary directly from a single object, using its property names as keys and its property values as values.
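To see the difference, here's the existing ToDictionary in action, keyed on one field of each object (the anonymous types here are just for illustration):

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

class Program
{
    static void Main()
    {
        // ToDictionary turns a sequence of objects into a dictionary,
        // using one field of each object as the key.
        var people = new[]
        {
            new { Name = "Frank", Age = 5 },
            new { Name = "Richard", Age = 32 }
        };

        Dictionary<string, int> ages = people.ToDictionary(p => p.Name, p => p.Age);
        Console.WriteLine(ages["Richard"]); // prints 32
    }
}
```

The ToPropertyHash method described above goes the other way: a single object in, its property names as keys.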

public static Dictionary<string, object> ToPropertyHash(this object item)
{
  var props = from property in item.GetType().GetProperties()
              select new { Name = property.Name, Value = property.GetValue(item, null) };

  var dict = new Dictionary<string, object>();
  foreach (var prop in props)
    dict.Add(prop.Name, prop.Value);

  return dict;
}

Here’s testing code which demonstrates what you get.

var myObject = new { Name = "Frank", Age = 5 };
var dict = myObject.ToPropertyHash();
CollectionAssert.Contains(dict.Keys, "Name");
CollectionAssert.Contains(dict.Keys, "Age");
Assert.AreEqual(dict["Name"], "Frank");
Assert.AreEqual(dict["Age"], 5);

Maybe that’s not so interesting for you, but I am developing Windows Workflow Foundation apps, and a workflow instance always requires you to build a new dictionary with the names of the properties as keys. This could be very, very helpful to cut down on code for that.

Instead of …

Dictionary<string, object> args = new Dictionary<string, object>();
args.Add("Name", "Richard Bushnell");
args.Add("Age", 32);
args.Add("NoOfKids", 5);

… I use:

var args = new { Name = "Richard Bushnell", Age = 32, NoOfKids = 5 }.ToPropertyHash();

What I can’t yet understand is why Microsoft has to copy Ruby on Rails. Don’t get me wrong, I love Rails. But in some places it just doesn’t suit every kind of development. Developers already have their own ways of working, and have bought controls, etc. Why force them to abandon those if they just want to tidy up their code with a new MVC model? Wouldn’t it be better for everyone if Microsoft developed another kind of framework? The focus could be on testing, as with MVC, but more directed towards developers who already use WinForms and WebForms.

Microsoft can develop independently. Take LINQ, for example. The .Net team could have simply provided list comprehensions as implemented in Python, but instead, they came up with a totally new model which no other development framework has ever had. And while XAML is similar to the old Delphi DFM files, it’s still taken some good leaps forward. I don’t see that yet with the ASP.Net MVC Framework. They’re still playing catch-up with Rails.
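To make that comparison concrete, here's how a Python-style list comprehension reads in LINQ's query syntax (a small illustration of my own):

```csharp
using System;
using System.Linq;

class Program
{
    static void Main()
    {
        // Python would express this as: [x * x for x in range(10) if x % 2 == 0]
        var squares = from x in Enumerable.Range(0, 10)
                      where x % 2 == 0
                      select x * x;

        Console.WriteLine(String.Join(", ",
            squares.Select(x => x.ToString()).ToArray()));
        // prints 0, 4, 16, 36, 64
    }
}
```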

In general though, I’m really excited to watch it developing.