Nerdy tidbits from my life as a software engineer

Saturday, June 20, 2009

Random Weekend Thoughts

I’ve been rather busy at work, but have accumulated a number of thoughts over the last few months that I figured I’d share:

1. For a software engineer, I know remarkably little about A records, CNAMEs, and other DNS-related items.  Hence, when the blog went down about a week ago, I was unable to diagnose it.  And my resolution was to give up.  Forget it.  www.michaelbraude.com now just forwards to www.michaelbraude.blogspot.com.  Because it’s easier this way – and because, frankly, Yahoo’s domain hosting is terrible and Blogger’s help section is absolutely useless when it comes to setting up domains and resolving issues.  My advice is to use anybody else.

2. Bing is better.  Yes I’m biased, but it’s still better.  There is a palpable excitement within Microsoft after the Bing launch.  It’s fun to be a part of that.

3. On the heels of my last rant, a few things have occurred to me.  First, one big reason people like thin clients such as web pages is that the number of alternatives right now is low.  There are no rich clients that connect to the cloud and do what Facebook or Twitter do.  If there were, people would prefer the rich client, because it would be superior to the web page in every way.  Second, I am increasingly convinced that we have reached the end of what’s possible with HTML and JavaScript.  There’s just not much more room for improvement.  Google Docs will never surpass Excel because, frankly, the technology is too limiting.  And lastly, even if there were a bunch of killer cloud-based Windows applications that were superior to web pages, people would need a trivial way to a) find them and b) run them.  Which is why we need a ClickOnce application store for Windows applications.  Apple has shown that the App Store model is a good one.  It makes it easy for people to find fun applications to run.  Windows needs something similar.

4. We need to give people a reason to want more powerful computers.  If you’re browsing the internet, your 6 year old Athlon 1.4 GHz computer from 2003 will be fine.  Having a library of rich, cloud-based applications will spur demand for faster machines (a 3D WPF-based Facebook app would be awesome!).

5. Why is it so hard to find small laptops with high screen resolutions?  Who wants a 16” screen with 1280x800 resolution?

6. I love my Xbox, but mostly because Nintendo is unable to publish enough games themselves to keep a gamer busy.  Why is it that every time I go to GameStop, the entire Wii wall is filled with games for 9-year-old girls?  Clearly there are adults who own Wiis.  Why isn’t anybody developing games for them?

7. What crazy company would ever ship competitors’ products with its own?  Imagine if Toyota gave you a “ballot” when you bought a car from them, from which you could “vote” on which car stereo you preferred – a Toyota stereo, or an after-market brand.  No, this insanity makes no sense to anybody.  That the EU is complaining about the browserless Windows 7 solution is an admission that, yes, a web browser is an integral part of the operating system.  Which makes the whole bundling argument rather weak.  But ship competitors’ products with your own?  Forget it.  I’d rather ship nothing.

Friday, June 12, 2009

XmlSerializer Can’t Handle Interfaces (With a Workaround)

It didn’t occur to me until it was a bit too late, but XmlSerializer can’t handle interfaces.  I understand why this is: how would the serializer know what class to instantiate to return an instance of the interface type?  But the lack of a built-in workaround here is a bit of an annoyance.  Since I like to abstract things in ways that allow me to mock them, I separated the implementation of some of my objects behind interfaces, without realizing that this would break serialization.  Even having your interface implement IXmlSerializable doesn’t solve the problem – you just end up with a different error that says something to the effect of, “System.InvalidOperationException: [class] cannot be serialized because it does not have a parameterless constructor.”  This is because the serializer expects that whatever implements IXmlSerializable is a class with a parameterless constructor.  Obviously, interfaces won’t work in these cases.

For instance, this property will fail to serialize:

/// <summary>
/// Gets / sets the working folder shims.
/// </summary>
[XmlElement]
public virtual IWorkingFolderShim[] Folders
{
    get;
    set;
}

So in order to get around this, I had to do something hackish.  It’s not that pretty, but it works:

/// <summary>
/// Gets / sets the working folder shims.
/// </summary>
[XmlElement]
public virtual WorkingFolderShim[] FolderShims
{
    get;
    set;
}

/// <summary>
/// Gets / sets the folder shims.
/// </summary>
[XmlIgnore]
public virtual IWorkingFolderShim[] Folders
{
    get
    {
        return this.FolderShims;
    }
}

The object where these properties are defined implements an interface that exposes the Folders property – not the FolderShims property.  That way, so long as people reference the object by its abstraction, the FolderShims property is hidden from them.  The proper thing to do would be to make the FolderShims property internal, but I hate unit testing with InternalsVisibleTo unless it’s absolutely necessary.
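To see the whole workaround end-to-end, here’s a minimal, self-contained sketch.  (WorkspaceShim and the Path property are names I’m making up for illustration – they aren’t from my actual code.)

```csharp
using System;
using System.IO;
using System.Xml.Serialization;

// Hypothetical minimal versions of the interface and its implementation.
public interface IWorkingFolderShim
{
    string Path { get; set; }
}

public class WorkingFolderShim : IWorkingFolderShim
{
    public string Path { get; set; }
}

public class WorkspaceShim
{
    // Serialized: a concrete type, so XmlSerializer knows what to instantiate.
    [XmlElement]
    public WorkingFolderShim[] FolderShims { get; set; }

    // Ignored by the serializer: the interface-typed view that consumers use.
    [XmlIgnore]
    public IWorkingFolderShim[] Folders
    {
        get { return this.FolderShims; }
    }
}

public static class Program
{
    public static void Main()
    {
        var workspace = new WorkspaceShim
        {
            FolderShims = new[] { new WorkingFolderShim { Path = @"C:\src" } }
        };

        // Serializes cleanly; with IWorkingFolderShim[] this would throw.
        var serializer = new XmlSerializer(typeof(WorkspaceShim));
        using (var writer = new StringWriter())
        {
            serializer.Serialize(writer, workspace);
            Console.WriteLine(writer.ToString());
        }
    }
}
```

Note that the getter can return the concrete array directly because C# array covariance lets a WorkingFolderShim[] stand in for an IWorkingFolderShim[].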

Wednesday, June 10, 2009

Snippets are a Life Saver

I keep forgetting how critical snippets are to my productivity.  I actually keep a bunch of snippets on my SkyDrive so that I can port them over from one machine to another.  With the help of snippets, some incredibly tedious tasks that would normally drive me nuts with boredom can be largely skipped with automation.

There are a few major snippets that I use constantly – particularly with WPF applications.  The first is the absolutely critical Dependency Property snippet (yes, I know Visual Studio ships with one of these, but I like mine much, much more):

/// <summary>
/// Defines the $Name$Property dependency property.
/// </summary>
public static readonly DependencyProperty $Name$Property = DependencyProperty.Register("$Name$", 
    typeof($Type$), 
    typeof($Owner$));

/// <summary>
/// Gets / sets the $Name$ property.
/// </summary>
public virtual $Type$ $Name$
{
    get { return ($Type$)GetValue($Name$Property); }
    set { SetValue($Name$Property, value); }
}
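Expanded with concrete values, the snippet produces code like this (Title, string, and MyControl are placeholder names I’m inventing for illustration):

```csharp
using System.Windows;
using System.Windows.Controls;

public class MyControl : Control
{
    /// <summary>
    /// Defines the TitleProperty dependency property.
    /// </summary>
    public static readonly DependencyProperty TitleProperty = DependencyProperty.Register("Title",
        typeof(string),
        typeof(MyControl));

    /// <summary>
    /// Gets / sets the Title property.
    /// </summary>
    public virtual string Title
    {
        get { return (string)GetValue(TitleProperty); }
        set { SetValue(TitleProperty, value); }
    }
}
```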

This alone saves me hours of development time, since I don’t need to cut and paste this same code over and over again.  But today, as I was painfully filling out some code to support the INotifyPropertyChanged interface, I realized that I should really make a snippet out of it to save me time.  And so, I came up with this:

/// <summary>
/// Defines the $Name$ property value.
/// </summary>
private $Type$ m$Name$;

/// <summary>
/// Gets / sets the $Name$ property.
/// </summary>
public $Type$ $Name$
{
    get
    {
        return m$Name$;
    }
    set
    {
        this.m$Name$ = value;
        this.FirePropertyChanged("$Name$");
    }
}

Of course, this assumes that you’ve defined the FirePropertyChanged method somewhere in your class, which is very simple and usually just looks like this:

/// <summary>
/// Fires the PropertyChanged event.
/// </summary>
/// <param name="propertyName">The name of the property that changed</param>
private void FirePropertyChanged(string propertyName)
{
    if (this.PropertyChanged != null)
        this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
}
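Put together, a class built from the snippet plus the helper looks roughly like this (Person and Name are made-up names for illustration):

```csharp
using System.ComponentModel;

public class Person : INotifyPropertyChanged
{
    public event PropertyChangedEventHandler PropertyChanged;

    /// <summary>
    /// Defines the Name property value.
    /// </summary>
    private string mName;

    /// <summary>
    /// Gets / sets the Name property.
    /// </summary>
    public string Name
    {
        get
        {
            return mName;
        }
        set
        {
            this.mName = value;
            this.FirePropertyChanged("Name");
        }
    }

    /// <summary>
    /// Fires the PropertyChanged event.
    /// </summary>
    /// <param name="propertyName">The name of the property that changed</param>
    private void FirePropertyChanged(string propertyName)
    {
        if (this.PropertyChanged != null)
            this.PropertyChanged(this, new PropertyChangedEventArgs(propertyName));
    }
}
```

Anything bound to Name in XAML will now refresh automatically whenever the setter runs.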

So this experience has reminded me that whenever I end up doing some mindless, repetitive task over and over again, I should write a snippet for it.  Too bad the snippet editor isn’t part of Visual Studio 2008.  It’s relatively baffling that we have a built-in snippets manager, but not a snippets editor.  I should double check what we’ve got coming up in 2010…

Thursday, June 4, 2009

How I’m Running My First Sprint

The project I’m working on has just started to get serious.  When projects are small, keeping track of your tasks and priorities is easy, because the scope of your project is small and what you need to do is obvious.  But as soon as things get larger and people start to expect certain things from what you’re doing, that ad-hoc approach to software development doesn’t work any more.  That’s when your process really becomes valuable.

Before I came to Microsoft, I worked with a team that did an excellent job executing an agile process.  Our scrums were well-run and it was always clear what our status was.  If we were falling behind, problems and roadblocks were immediately obvious.  What I love about agile is that it frees you from bureaucracy but still chains you to the ground.  You can’t get too far ahead of yourself either by going off on unnecessary tangents, but you also don’t need to burden yourself by demanding that you know everything ahead of time.  It’s both liberating and grounding at the same time.

The product that I’m developing right now is largely unknown, so agile is the perfect process to develop my project with.  And while I might be the only person working on it right now, I don’t anticipate it being that way forever.  So in order to make my process more formal and organized, I decided to hold my own scrums and manage my own sprint – even though there’s only one person involved right now.  I figure that it’s good practice for the future when more people are involved.

So the first steps I took towards running my own scrums were the following:

  1. Enter requirements into TFS
My requirements engineering process in this case was pretty half-hearted.  Most of my requirements are still very undefined.  I wrote a number of use cases a while ago, which are sitting in a OneNote notebook on my computer.  These helped me make some basic architecture decisions.  But I’m past that stage now.  The requirements, or “Scenarios” as TFS wants to call them, are work items that describe overarching goals.  It’s critical that these get written down somewhere.  Otherwise, the next step has much less meaning.
  2. Put items in a backlog
Next, I created a list of task work items and put them in an iteration path that ended in Backlog.  I tied each work item to a requirement.  This is a very important step.  If a work item did not fall into the domain of a requirement, then I either had to question the wisdom of the work item or add a requirement that I hadn’t defined yet.  The important rule for me is that every work item in my backlog corresponds to a requirement.  This ensures that I never put work into my queue that is not part of an overarching goal – so I can’t get distracted.  The other benefit is that if I open a requirement work item, I can see a list of every work item associated with that requirement.  Even better: when I check things into TFS and attach the changeset to work items, I can follow the chain all the way back to a requirement.  Very cool.
  3. Design a sprint template
Now here’s where things get strange.  There are of course a number of free templates on the internet that you can draw from, but I found that most of these are tailored to someone else’s specific process.  My process is inherently different from other people’s.  For my money, there are only a few things I really care about graphing: the burn-down line, and the amount of time left to complete my tasks.  Initial estimates are interesting, but to me they are ultimately only useful from a post-mortem perspective.  They don’t help me from a tracking and planning perspective.

Velocity is an interesting statistic to track.  But I ultimately don’t care so much about it, because I figure that my time-left estimate should take my velocity into account.  For instance, if a task would normally take me half a day to complete, but I’m swamped with some other issue that will delay me by two days, I would expect my time-left on that task to be 2.5 days – not 0.5 days.  So velocity, as a statistic, should be baked into my time-left numbers.

    My underlying principle for the sprint template was: keep it simple.  Don’t overburden the sprint with too much data.  All I’m really interested in is, am I on track to finishing these work items by the end of the sprint or not?  Other things I don’t care so much about.

So I started by importing a TFS query into an Excel spreadsheet, which automatically populated it with my current sprint work items.  The only columns I’m interested in are: work item ID, title, assigned to, and remaining work.  I then added a simple table that I fill in every day with my current work estimates.  Lastly, I replaced the “remaining work” values in the TFS table with an Excel formula that automatically fills them in with my latest estimates.  That way, when I save the spreadsheet, the work items get updated with my latest estimates.

The end result looks like this:

[Chart: sprint burn-down and daily remaining-work estimates]

I anticipate, of course, that I will continue to refine this as time goes on.  But for now, this process is working very well.  As you can see from my chart, I’ve already learned a few important things so far about my current sprint.  First, I have a much higher capacity than I thought I did when I started the sprint.  Second, my initial estimates tend to be more pessimistic than they should be.  I suppose I’m a fast worker.  But without metrics like the chart above, I would never know for sure.  In a few days, depending on where I am, I will know whether I have the time to add more tasks into this sprint or not.