Michael Braude's Technical Blog

Nerdy tidbits from my life as a software engineer

Wednesday, August 19, 2009

On The Netbook Craze

All this talk about netbooks has me a bit perplexed.  What is a netbook?  From what I can tell, it’s a super slim, lightweight notebook that’s optimized for portability, price, and battery life.

But then we go off into this whole sub-discussion about what sort of operating system a netbook should run and what its primary purpose is, and this discussion makes no real sense to me. The idea that a netbook is only useful for surfing the Internet is based on the theory that because it's super-cheap, it is incapable of doing anything else. This might be true today, but it clearly won't be true tomorrow as hardware continues to get more powerful and less expensive – as it always has. And this makes the whole idea of an Internet-only notebook silly to me. Why not just call it a really cheap, low-end laptop?

The earliest netbooks might have had 512MB of RAM and were incapable of running Vista. But obviously, within a year or two (if it's not already the case), 2GB of RAM will be as cheap as or cheaper than 512MB is today (or else the marginal difference will be so small that it won't matter). So whatever netbook you can buy today for $300.00, tomorrow you will be able to buy a far more powerful notebook for the same price – if not cheaper. And it will be fully capable of running just about anything you want it to. So why pretend that the purpose of these machines is to limit ourselves to just browsing the Internet?

So I fail to see how this is any more of a threat to Windows than the Internet already is.  It may be a threat to Microsoft’s margins, but really nothing else.

Backported From MSDN

On the heels of the backlash from my web-guy rant and Mr. Atwood's tirade against me, I decided to move my blog back to Blogger from MSDN. Maybe this was a knee-jerk reaction to a bout of criticism that I hadn't experienced before on the Internet, but I felt that it was appropriate for me to try, as much as possible, to separate myself and my opinions from my employer. It was disheartening to see people associate my rants with those of Microsoft as a whole; my wacky opinions do not pervade the entire company.

I understand that whatever I write or say, I will always be representing Microsoft. But to me, the veil of blogging 'under' Microsoft's blog-hosting service associates my opinions with those of my coworkers in a way that I'm not really comfortable with. Also, I want to feel free to be critical of Microsoft, and I felt somehow restricted when I was writing on MSDN. This is quite a struggle for me, actually. As thrilling as it is to work here, there are plenty of things about our products, services, and corporate behavior that infuriate me. Should I stand up and bite the hand that feeds me? I have been struggling with that question ever since I started here, but I want to at least feel more freedom to say what I want.

Lastly, call me vain, but I want to keep using my domain name, and I was unable to port that to MSDN. Bummer. That would have been nice. Not to mention that there's nearly three years' worth of blogging history here that I would hate to leave behind.

Anyways, there are 6 posts from July that I ported over this morning.  It’s unfortunate that I can’t port the comments, too.  But you can go to the original site and read those if you want.  I won’t delete them.

I’ve considered cross-posting, but I hate the idea of having the same post in multiple places.

Monday, August 17, 2009

Re: All Programming is Web Programming

Jeff Atwood’s tirade against me is at least partially justified.  I knew that my rant against web-development was inflammatory when I posted it.  But in his diatribe, he omits a few important details from my post and then makes the same sort of sweeping generalizations that he’s criticizing me for making.

First, while he takes a lengthy quote from my article, he happily omits the line where I acknowledge that, obviously, there are smart people who develop web pages:

OK, so that’s not an entirely fair accusation to make.  Of course there are smart people who like to work on the web, and who find challenging things about it.

The premise for my rant is that this is not a challenging medium for me. I'm happy that other people find this environment challenging. To me it's tedious and frustrating. Call me old-fashioned and outdated, but I much prefer to develop in my full-featured desktop environment – the one that

Computer Science [has] spent the last forty years making … as powerful as possible.[1]

Is it so bad for me to lament this movement backwards towards simpler technologies? Forty years' worth of research and development have gone into creating an environment where we get to focus on things that matter (such as the best way to architect a program so that it accomplishes our goals) and less on things that don't (such as how to get this to work in a webpage). For me, the former is a much more challenging problem to solve.

[Note that by the term “web development”, I am specifically referring to the presentation layer of a webpage – not the back-end services that do all sorts of complex stuff.  That’s not web-development – that’s just normal server-side development, and it encompasses all platforms and presentation layers, so it has nothing to do with the web, specifically]

Next, Jeff goes on to rant about why the movement to the web makes perfect business sense (which I never disagreed with), and continues by saying that he does web-dev because it allows him to write software that gets used (which is another reason that I work for Microsoft).  I understand why the world is moving in this direction; I just wish they were using more elegant technologies to do it.

But my primary complaint is more due to the fact that the web-world is filled with terrible software engineers.  I know that’s a dangerous accusation to make, but look: I taught myself DHTML when I was 15 with a used book that my father gave me – and I was quite good at it.  We all know this stuff is easier than C++/C#/Java development – as politically incorrect as that may be to say – because it requires far less training to be effective at it and because there is no need to understand the underlying technologies and principles that they are built on.  That doesn’t mean there aren’t brilliant people who work in this space, or that there aren’t amazing things being done on the web, or that all of the people who work with these technologies are dumb.  But if Jeff can make laws, so can I:

Braude’s Law: The easier it is to learn a given technology, the larger the percentage of bad engineers who will work with it.

Can we really dispute this?  Isn’t it self-evident?  Yes, there are exceptions; but clearly, it takes more training and understanding to be a C++ guru than a DHTML guru.  And because I want to work with people who can challenge me and who I can learn from, I choose to work with technologies where, crass as it may be to say, the bar is higher.

But Jeff acknowledges that this is true:

Web programming is far from perfect. It's downright kludgy. It's true that any J. Random Coder can plop out a terrible web application, and 99% of web applications are absolute crap. But this also means the truly brilliant programmers are now getting their code in front of hundreds, thousands, maybe even millions of users that they would have had absolutely no hope of reaching pre-web. There's nothing sadder, for my money, than code that dies unknown and unloved. Recasting software into web applications empowers programmers to get their software in front of someone, somewhere. Even if it sucks.

I am not willing to sacrifice the quality of my development environment or the intelligence of my coworkers just so that I can get my code in front of people.  He may be happiest writing programs that get used; but I am happiest writing elegant solutions to difficult problems with intelligent people.  And that’s why, so long as the web is built on DHTML, I will never be a web-guy.  It’s also why I work for Microsoft: here, I get to write applications using sophisticated technologies (many of which are invented here) to develop quality software that also gets in front of millions of people.

Lots of people have pointed out that much of the DHTML stuff we see on the web these days is in fact auto-generated using more sophisticated technologies. Those I have less of a problem working with. But what does it say about the quality of a development environment that we have to go out of our way to hide it from ourselves? Ultimately, everything is still built on technologies that, as Jeff admits, suck.

Lastly, I disagree with Jeff’s bad news:

I hate to have to be the one to break the bad news to Michael, but for an increasingly large percentage of users, the desktop application is already dead. Most desktop applications typical users need have been replaced by web applications for years now. And more are replaced every day, as web browsers evolve to become more robust, more capable, more powerful.

You hope everything doesn't "move to the web"? Wake the hell up! It's already happened!

Sure, lots of traditional desktop applications have moved to the web.  But I’ve got some bad news for Jeff: not all programming is web programming, and as much as he may wish it to be true, it clearly hasn’t happened already.

Want an obvious example? How about iTunes? Here's a desktop application that hundreds of millions of people are using, so it's clearly not a niche. Can we replace it with a web-app? Theoretically we probably could (we could also write it on punch cards or re-write it in assembly). But to do that, we would need to not only replicate the iTunes application in a web browser using JavaScript, we would also need to provide a platform-agnostic way for this web-app to talk to an iPod via a web browser. Perhaps this is possible, but let's ask ourselves a question: is there any tangible advantage to going down this path? Are we really gaining anything, or are we just showing off how good we are at hacking a square peg into a round hole?

And we can say the same thing about just about every prominent desktop application that, again, could theoretically be written in JavaScript. Do we really want a web-based version of Photoshop? Would we honestly prefer to develop applications in a web browser instead of Visual Studio or Eclipse? Would it please you more to design a PowerPoint presentation in the comfy confines of Firefox? Why are we so intent on short-changing our user experience for the sake of cross-platform convenience? What, so that we don't need Windows any more – is that really what all of this is about?

But Jeff acknowledges as much when he writes:

Writing Photoshop, Word, or Excel in JavaScript makes zero engineering sense, but it’s inevitable.

This may be one thing that we agree on, but it doesn’t make me any happier.  As much as I want everybody to use the products from the company that I work for, I am more interested in ensuring that our user-experience is as fantastic as it can possibly be.  So it saddens me to watch us all happily trade down just so that we can get away from Windows.  Truthfully, I chalk this gleeful movement up to my employer’s inability to continue creating exciting products and our lack of an app-store for Windows.  But I’ve opined about that somewhere else.

While the web-movement is obviously here to stay and accelerate, I am not convinced that all programming will be web programming.  A large number of things will have to change before that can happen.  And if it does come true that all programming becomes this DHTML mess that we find ourselves in right now, then I will gladly change professions – or at least say hello to middle-management.

Monday, July 27, 2009

The Importance Of Code Organization

If there is one thing I dislike about C#, it's that it allows you to place definition statements wherever you want. While there is technically a similar freedom in C++, the nature of header files and visibility blocks encourages people to group, say, member variables and public methods together. This encouragement is lost in C#, because you have the freedom to scatter your member variables throughout your code files in any manner you want.

To me, this freedom promotes some bad habits which make it difficult to understand and navigate through a large program.  The reason is that if you scatter your member variables, properties, and methods around your source code in a random manner, it becomes difficult to figure out where they are.  The result is a large amount of wasted time as people hunt through your source code to find what they are looking for.

So I have one rule for anything I ever write: given the fully qualified name of a class and its members, it should be immediately obvious to anybody where in your source code that member, property, or method is defined. Nobody should need to search for it. Just from the namespace, they should be able to deduce – within a handful of clicks – where something is defined in your file structure.

For instance, say I have an int whose fully qualified name is:

int My.Program.DataTypes.SomeData.mID

If my solution contains two projects called My.Program and My.Program.DataTypes, I should expect to see the definition for mID in the My.Program.DataTypes project in a file called SomeData.cs.  Furthermore, I should expect to see the mID member variable defined in a particular section of SomeData.cs so that there is no need to hunt through the file to find it.

Any given class file can contain large numbers of member variables, properties, methods, event handlers, interface implementations, etc.  How would a random person know where in a given file a member variable is defined if you chose to scatter these definitions randomly throughout the source file?  In order to make it immediately obvious where to go to find a given member variable, I always group them together and – optionally – surround them with #region elements.  This way, if you open SomeData.cs and want to find the ID property, you can quickly browse to it by expanding the “Public Properties” region and scrolling down until you find it.  No search box necessary – it is obvious where the property is defined.
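
To make this concrete, here is a minimal sketch of how SomeData.cs might be laid out under this convention (the members beyond mID are made up for illustration, and the region names are just one reasonable grouping):

// SomeData.cs – lives in the My.Program.DataTypes project.
namespace My.Program.DataTypes
{
    public class SomeData
    {
        #region Member Variables

        private int mID;
        private string mName;

        #endregion

        #region Public Properties

        public int ID
        {
            get { return mID; }
        }

        public string Name
        {
            get { return mName; }
            set { mName = value; }
        }

        #endregion

        #region Public Methods

        public override string ToString()
        {
            return string.Format("{0} ({1})", mName, mID);
        }

        #endregion
    }
}

Anybody looking for the ID property expands "Public Properties" and finds it immediately; anybody looking for mID expands "Member Variables".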

Why is this important?  Two reasons.  First, it makes your code more readable because things are laid out in an order that makes sense.  And second, because it saves a large amount of time and overhead.  The cost of searching for something in a large program is high enough that it should be avoided.  You should simply not have to search your code in order to discover where things are defined.

This is also my principal complaint about public inner classes. The problem with inner classes is that it is not clear where they live just from looking at their fully-qualified name. For instance, I would expect:

var myObject = new My.Program.ObjectModel.SomeObject();

to reside in a project called My.Program, within a folder called ObjectModel, in a file called SomeObject.cs. But if SomeObject is an inner class of ObjectModel, it is not clear from looking at the name of the class where its code is defined, because it's not obvious whether ObjectModel is a class or a namespace. When things get super inner-nested, this entanglement becomes even more confusing.

(Inner classes also prevent you from leveraging using statements to reduce long-winded class names, which in turn makes your code less readable because it becomes cluttered with lots of unnecessary scoping.)
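
To illustrate the ambiguity (with hypothetical names), both of the following declare a type whose fully qualified name is My.Program.ObjectModel.SomeObject, but only the first one tells you where to look for the file:

// Option 1: ObjectModel is a namespace – SomeObject.cs sits in an ObjectModel folder.
namespace My.Program.ObjectModel
{
    public class SomeObject { }
}

// Option 2: ObjectModel is a class – SomeObject is buried inside ObjectModel.cs.
namespace My.Program
{
    public class ObjectModel
    {
        public class SomeObject { }
    }
}
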
It is easy to write code in an organized manner. Simply put new source files in locations that correspond to their namespaces, and define your members, properties, and methods in consistent sections of your code files. That's not too hard, is it? It is much harder to take a program that was not written this way and separate things out into locations that are logical. And it is even harder to understand a project where everything is defined in scattered, unorganized source files.

So my advice is to do it right the first time. Avoid future headaches from the start, make it easy for other engineers to collaborate with you, and reduce the overhead required to alter your work. It's a quick investment that continues paying dividends long after you've moved on to something else.

Wednesday, July 15, 2009

Why The Web Will Always Be Second Best

For all of the euphoria surrounding the exciting things coming out on the Internet these days, I think it's important to remind ourselves of the limitations that web technologies naturally impose on us.

All code needs to be translated into machine instructions before it can run on the local machine, and the process looks roughly like this: the source text is scanned into a stream of tokens, the tokens are parsed into a syntax tree, and the tree is then either compiled into native (or intermediate) code or evaluated directly.

Now, I don't profess to be an expert on compilers, but I know enough to draw this conclusion about JavaScript: it will never be as fast as native or intermediate code. The reason is that in order to execute a super-heavy JavaScript library, every one of those steps has to happen as soon as you open the webpage.

This may not seem like a big deal, but remember that text parsing is actually incredibly slow. For the uninitiated, the process of converting source code into a recognizable stream of tokens (i.e. keywords such as "int" and "class") is done via regular expressions. Regular expression matching is time consuming, and there is quite simply no way around this. Perhaps the algorithm can be sped up to some degree, but its complexity cannot be reduced: for a given text file of N characters, every character has to be scanned – and that takes O(N) time.
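
As a rough illustration (a C# sketch, not how any real JavaScript engine is implemented), here is a hand-rolled scanner that turns a snippet of source text into keyword, identifier, and number tokens. The point is simply that it has to visit every character once, so the work grows linearly with the size of the source:

using System;
using System.Collections.Generic;

static class TinyLexer
{
    static readonly HashSet<string> Keywords = new HashSet<string> { "int", "class", "var", "function" };

    // Scans the entire input left to right: O(N) in the number of characters.
    public static IEnumerable<string> Tokenize(string source)
    {
        int i = 0;
        while (i < source.Length)
        {
            char c = source[i];
            if (char.IsWhiteSpace(c)) { i++; continue; }

            if (char.IsLetter(c))
            {
                int start = i;
                while (i < source.Length && char.IsLetterOrDigit(source[i])) i++;
                string word = source.Substring(start, i - start);
                yield return (Keywords.Contains(word) ? "keyword: " : "identifier: ") + word;
            }
            else if (char.IsDigit(c))
            {
                int start = i;
                while (i < source.Length && char.IsDigit(source[i])) i++;
                yield return "number: " + source.Substring(start, i - start);
            }
            else
            {
                yield return "symbol: " + c;
                i++;
            }
        }
    }

    static void Main()
    {
        foreach (var token in Tokenize("var total = 42 + rate;"))
            Console.WriteLine(token);
    }
}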

The parser then takes this stream of tokens and converts it into a syntax tree, which can then be converted into native code (or evaluated, in the case of JavaScript). If we imagine that an entire program can be converted into one big tree of some unknown height, we can conclude that the complexity of walking this tree and executing it is proportional to its height. Or, roughly O(log(N)), for some log base that I can only estimate.

I'm just guesstimating here, but I imagine that the total complexity of compiling an application is about O(N log(N)), which makes it roughly equal to the complexity of quicksort. So in addition to downloading an entire JavaScript application – in its verbose, text-based, uncompiled form – a JavaScript application needs to go through all of the overhead above. For small applications, this additional overhead is just about negligible. But as applications grow larger and larger, it will become more and more pronounced. In the end, it will be the largest barrier that prevents JavaScript from becoming the language of choice for highly-featured web-based applications.

Keep in mind that the largest JavaScript applications on the Internet are a few megabytes or so in size.  Loading and running these applications right now, while fast, still takes noticeable time (look at the progress bar on GMail, for instance).  But if you consider that most large commercial applications consist of many tens of millions of lines of code which take up many gigabytes of space and take many hours to compile, you can start to see the natural limitations of JavaScript.  A JavaScript application of that size, while perhaps theoretically possible, would take so long to load that it wouldn’t be usable – no matter what tricks you use to speed it up.

You may think that this is a limitation that will go away over time as new technologies and techniques arrive that speed things up. But years from now, future applications will be even larger than they are today. So even if JavaScript applications can eventually catch up with today's desktop applications, the bar will rise, our standards will increase, and today's applications will look puny by tomorrow's standards. Of course, there are technologies emerging that speed JavaScript up significantly, including many within Microsoft. These are exciting and will no doubt raise the limit of what can be done in a browser. But ultimately, no advancement will ever bring the two environments on par with each other, because the overhead of compiling or interpreting on the fly never goes away.

I think it’s time we recognize JavaScript for what it is: a scripting language that is being used for purposes beyond what it was conceived for.  If we really want rich applications to be delivered over the internet and hosted in a web browser, we will need to think of a better technology for doing so.

Monday, July 13, 2009

WCF Duplex Bindings With Silverlight

I have had the hardest time getting my self-hosted WCF service to play nicely with Silverlight. I thought this was supposed to be simple, but it turns out that exposing a self-hosted WCF service with a callback contract to a Silverlight application is just about impossible. Here are the five hurdles you have to overcome to get this to work:

  1. First, you have to explicitly give your service permission to open an endpoint on localhost at a specific URL.  I’m sure this can be automated somehow, but probably not easily.
  2. You have to host a clientaccesspolicy.xml file on the service in order to give the Silverlight runtime permission to call your service. This involves writing another WCF service just to return basic documents via HTTP. It's not too tough, but very annoying (see the sketch after this list).
  3. Silverlight does not support the WSDualHttpBinding binding.  To get around this, the server needs to expose itself via a custom endpoint that is configured to use a PollingDuplexElement object.   How you would ever figure this out without the help of this MSDN article, I have no idea.
  4. Next, the Silverlight application needs to be configured with another custom binding that can communicate with this strange, bastardized endpoint that you have exposed on the server.  Svcutil.exe does not pick this up for you: you’ve got to define this endpoint manually.  Another MSDN article explains this nastiness.  Good luck finding this out by yourself.
  5. Whatever ServiceContract you had before you decided to add a Silverlight client will now need to change to send and receive Message objects. For me, this was a deal-breaker. All of your nice, strongly-typed DataContracts go away and get replaced with generic SOAP messages. Terrible. And all of your other clients now need to be updated to deal with these objects instead of the strongly-typed data structures that make WCF so powerful to use in the first place.
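
Roughly, the policy-serving service from hurdle #2 can be sketched like this. This is an illustration, not the exact code from my project: the contract name and the wide-open policy XML are placeholders, and you still have to host it at the root of the same address your real service listens on.

using System;
using System.IO;
using System.ServiceModel;
using System.ServiceModel.Web;
using System.Text;

// Hypothetical contract: serves clientaccesspolicy.xml so Silverlight will allow cross-domain calls.
[ServiceContract]
public interface IPolicyProvider
{
    [OperationContract]
    [WebGet(UriTemplate = "/clientaccesspolicy.xml")]
    Stream GetPolicy();
}

public class PolicyProvider : IPolicyProvider
{
    private const string Policy = @"<?xml version=""1.0"" encoding=""utf-8""?>
<access-policy>
  <cross-domain-access>
    <policy>
      <allow-from http-request-headers=""*"">
        <domain uri=""*""/>
      </allow-from>
      <grant-to>
        <resource path=""/"" include-subpaths=""true""/>
      </grant-to>
    </policy>
  </cross-domain-access>
</access-policy>";

    public Stream GetPolicy()
    {
        WebOperationContext.Current.OutgoingResponse.ContentType = "application/xml";
        return new MemoryStream(Encoding.UTF8.GetBytes(Policy));
    }
}

class Program
{
    static void Main()
    {
        // WebServiceHost adds a webHttpBinding endpoint at the base address, so
        // http://localhost:8000/clientaccesspolicy.xml returns the policy document.
        using (var host = new WebServiceHost(typeof(PolicyProvider), new Uri("http://localhost:8000/")))
        {
            host.Open();
            Console.WriteLine("Policy service running. Press Enter to exit.");
            Console.ReadLine();
        }
    }
}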

All of these hurdles have caused me to put off the Silverlight client until further notice.  The only way I can think of writing this is to do a very nasty double-hop scenario.  And I really hate that idea.

Wednesday, July 8, 2009

Calling Self Hosted WCF Services from Silverlight

I have an application that self-hosts a WCF service.  Now I want to add an HTTP endpoint to that application and have a Silverlight application call my service.  Sounds easy.

Except that it's not, because the Silverlight app is trying to do a cross-domain web service call (since the endpoint is self-hosted), and for that to work, the endpoint needs to return a file called clientaccesspolicy.xml when the Silverlight app asks for it. But since my application isn't running in IIS (and I don't want it to be), returning this file when that HTTP request comes in is not a trivial thing to do. In fact, I don't think it can be done. A self-hosted WCF service is not a web server – it's just an endpoint.

So I’m a bit stuck, and a bit more perplexed.  There must be a way to call a self-hosted WCF service from a Silverlight application, don’t you think?  Or maybe not, which would be very frustrating, because then I’d either have to do it in JavaScript or I’d need to do some super-nasty webservice-that-calls-a-WCF-service architecture.  And thinking about that just makes me cringe.  But if it’s what I have to do, then I guess that’s what I’ll do.

I don’t like this cross-domain restriction.  I’m sure there’s a good reason behind it, but it seems to create more problems than it solves.

UPDATE: There is, actually, a way to do this – though it’s not as intuitive as you might think.  Check out the solution here.

The Beauty of Data-Driven Applications

A common problem I run into when writing applications is this:

I have a situation where a series of tasks need to be assembled and arranged in a way that does something complicated.  Each individual section may be simple, but the process as a whole is complicated.  The pieces of this process need to be arranged dynamically, and I want the ability to update them and slot new pieces in without disrupting the system as a whole.  What’s the best way to design such a system?

Of course, no matter what, you want something with lots of abstractions – ways that disconnected pieces can plug into each other without really knowing who their neighbors are.  That much is a given.  But where do you define the process as a whole?  In other words, where do you physically say, “For the process that I call ‘A’, first do Task1,  then do Task2, then do Task3”, etc.?

Perhaps the easiest and most obvious way to do this would be to use a simple object hierarchy.  Something like this:

Now, your library of Task objects will grow whenever you need to add some new small block of functionality. And your library of Process objects will grow whenever you need to define a new process. An individual Process object may be very stupid, and could simply look like this:

public class ProcessA : BaseProcess
{
    public ProcessA()
    {
        // The only job of this subclass is to supply the data: which tasks run, and in what order.
        this.Tasks = new BaseTask[]
        {
            new TaskA(),
            new TaskB(),
            new TaskC(),
        };
    }
}

All of the business logic for how to execute a process can be contained inside the generic BaseProcess object, so the only thing the subclasses need to do is define the array of tasks that get executed and their order. In other words, the only purpose of the subclasses is to define the data that the parent class needs in order to execute.

Things get trickier, however, when more complicated connections need to be defined. Just defining a sequence of tasks may not be enough. Maybe we also need to define which output from one task goes into the input of another task. Where do we define that logic? How do we represent it? Potentially, we could just shove it into our current model and everything would be fine. But we could soon find ourselves writing a lot of code that just glues these things together. And that makes me wonder: how much decoupling have we really achieved by separating these tasks into separate procedures instead of just strongly coupling everything together in the first place? After all, the whole purpose of this design is to decouple the tasks from one another so that we can arrange them in any number of ways. All we're really doing in this case is moving that coupling from the Task library to the Process library.

To some extent, we will never really get around this problem. We may like to pretend that TaskB is decoupled from TaskA, but if TaskB requires some input that can only come out of TaskA, then this really isn't the case. The important thing to note, however, is that TaskB shouldn't care where this input comes from – so long as it gets it. The other important thing to note is that if TaskA produces this input, it shouldn't care who uses it or what its purpose is. So from task A and B's perspective, this dependency doesn't exist. But from the process's perspective, it does. The question is: where is the best place to define this dependency?

I say, put this logic in external data instead of in your code. Rather than create a large, complicated, compiled hierarchy of Process classes, define an XML schema and create a library of documents that define these bindings for you. Then, define an adapter or a factory that generates a generic Process object by parsing these XML files.
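
As a rough sketch of what I mean (the schema and class names here are made up for illustration, not the actual code from my project), a process definition and the factory that hydrates it might look something like this:

using System;
using System.Collections.Generic;
using System.Linq;
using System.Xml.Linq;

// Hypothetical process definition – this would normally live in its own .xml file:
//   <Process name="A">
//     <Task type="TaskA" />
//     <Task type="TaskB" />
//     <Task type="TaskC" />
//   </Process>

public abstract class BaseTask
{
    public abstract void Execute();
}

// One generic Process class replaces the whole compiled hierarchy of ProcessA, ProcessB, ...
public class Process
{
    private readonly List<BaseTask> mTasks;

    public Process(IEnumerable<BaseTask> tasks)
    {
        mTasks = new List<BaseTask>(tasks);
    }

    public void Run()
    {
        foreach (BaseTask task in mTasks)
            task.Execute();
    }
}

public static class ProcessFactory
{
    // Builds a Process by reading the task list out of the XML and instantiating each
    // task by name via reflection (the type names must be resolvable at runtime).
    public static Process FromXml(XDocument document)
    {
        var tasks = document.Root
            .Elements("Task")
            .Select(e => (BaseTask)Activator.CreateInstance(Type.GetType((string)e.Attribute("type"), true)));

        return new Process(tasks);
    }
}

Loading a process is then just ProcessFactory.FromXml(XDocument.Load("ProcessA.xml")).Run(), and the same factory works whether the XML comes from disk, a database, or a web server.
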
Understand that both solutions are functionally equivalent.  But making your application data-driven has a few distinct advantages:
  1. You can now alter the behavior of a process object without recompiling it.  This means you can easily distribute hot-fixes and additional functionality.
  2. Third parties can more easily integrate with your application and extend it.
  3. The source of a Process’ XML can now come from any location.  Loading them from a web server or a database instead of a local file system will have no impact on your system.
  4. You can easily write a library of adapters which can deserialize the process object from any number of formats.  You are no longer tied down to any one data representation.

Most importantly, however, your application now only reacts to changes in data. This is the way I think of it: imagine you have two machines that build houses based on schematics. One machine has a number of buttons on it. Each button builds a different house. If you want to build additional houses, you need to buy a new machine. Contrast that with a rival machine, which has only one button but also has a scanner. The scanner can read schematics directly from any piece of paper, so long as the schematic adheres to a certain standard, and it can build any house that can be specified in a schematic.

Wouldn't you rather have the second machine? The beauty of writing data-driven applications is that at their core, you have created something akin to the second machine. You have decoupled the dependencies from your application so much that your program is now simply responding to input rather than replaying set procedures. This makes it far more versatile, and it's why programming in WPF is so much more pleasant than writing WinForms applications – because you get to focus on modifying the data and the UI separately from each other. There is still a contract that the two sides need to adhere to, but your programming paradigm becomes much cleaner.

Which is why I always try to make my applications as data-driven as possible.

Tuesday, July 7, 2009

How Much Should You Mock?

I managed to incite a small riot a few weeks ago when I got involved in a somewhat heated internal debate about how far one should go with unit testing. How much should you be mocking in your unit tests? Should you mock the file system? Should you mock a database? Should you mock calls to a web service?

The basic problem is that at the lowest layers, you end up reaching a point where you must interact with an outside dependency. This is the point where there's nothing left to mock: you have abstracted everything you possibly can, and now you must interact with the outside world. The arguments many people make are that, because the interaction with the external dependency is something you don't own, you can't possibly simulate its behavior and therefore can't test it; or that, because the dependency can't be abstracted any further, testing it is so difficult that the extra work isn't worth it. Both of these are no doubt true on at least a few levels. What people prefer to do is wrap these dependencies with abstractions, and then test those abstractions rather than the boundaries. The last bit, at least, I agree with completely.

Here's my problem. Oftentimes, the reason that external dependencies are so hard to test is that they're not designed to be tested. Take the Tfs Client API, for instance. This is an insanely difficult library to test because it is filled with sealed objects that have non-virtual methods and private setters. Ack! The only way to test against it is to mimic the hierarchy with a nicely designed bridge pattern and reference the object model via our own abstractions. But this is not ideal. Why would we choose to wrap an API with a testable object model instead of using the actual object model? If we could mock the original hierarchy, all of that wrapping would be a large waste of time and would serve no purpose.

Sadly, many of the boundaries in our applications run into walls just like this.  And to me, this is the major reason that mocking them becomes so difficult.  It’s not that there is no value in mocking these dependencies, it’s just that there’s no real practical way to do it most of the time. 

…but if there was, my question is: why would we choose not to?  Just because it’s not our system doesn’t mean we shouldn’t try and test our interaction with it.  Somewhere in our code, we are making assumptions that external systems are going to behave a certain way.  They’re going to throw certain exceptions in certain cases, they’re going to return different values based on different inputs, etc.  Because we have to code against these behaviors, why would we choose not to test against them, too?  There are, perhaps, limits to our zealotry – but I don’t think that means we shouldn’t try.

So my answer to the question is this: you should test and mock everything that you reasonably can.  This is not a substitute for an integration test or a nicely layered design – it’s a companion to it.  And my other plea is: remember that other people will be writing code against yours, so if you want their code to be robust, make sure your API is testable.  If only it were easier to test the boundaries of my applications, that riot might have been avoided.
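
To make that plea concrete, here is a hedged sketch (with made-up names, not the Tfs Client API itself) of the difference between an API that forces consumers to build a bridge and one they can mock directly:

using System;

// Hard to test: sealed, non-virtual, no seams. Callers cannot substitute a fake.
public sealed class SourceControlClient
{
    public string GetLatest(string path)
    {
        // ... talks to a real server ...
        throw new NotImplementedException();
    }
}

// Testable: the same capability exposed through an interface that consumers can mock.
public interface ISourceControlClient
{
    string GetLatest(string path);
}

public class BuildVerifier
{
    private readonly ISourceControlClient mClient;

    public BuildVerifier(ISourceControlClient client)
    {
        mClient = client;
    }

    public bool SourcesAreAvailable(string path)
    {
        try
        {
            return mClient.GetLatest(path) != null;
        }
        catch (TimeoutException)
        {
            // This is exactly the kind of assumption about an external system
            // that we want to be able to exercise in a unit test.
            return false;
        }
    }
}

// A hand-rolled mock for a unit test – no real server required.
public class ThrowingClient : ISourceControlClient
{
    public string GetLatest(string path)
    {
        throw new TimeoutException();
    }
}

A test can then hand BuildVerifier a ThrowingClient and assert that SourcesAreAvailable returns false – which is exactly the sort of interaction with the outside world that I'm arguing is worth covering.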

Saturday, June 20, 2009

Random Weekend Thoughts

I’ve been rather busy at work, but have accumulated a number of thoughts over the last few months that I figured I’d share:

1. For a software engineer, I know remarkably little about A records, CNAMEs, and other DNS-related items. Hence, when the blog went down about a week ago, I was unable to diagnose it. And my resolution was to give up. Forget it. www.michaelbraude.com now just forwards to www.michaelbraude.blogspot.com. Because it's easier this way – and because, frankly, Yahoo's domain hosting is terrible, and Blogger's domain-hosting help section is absolutely useless when it comes to setting up domains and resolving issues. My advice is to use anybody else.

2. Bing is better.  Yes I’m biased, but it’s still better.  There is a palpable excitement within Microsoft after the Bing launch.  It’s fun to be a part of that.

3. On the heels of my last rant, a few things have occurred to me. First, one big reason people like thin clients such as web pages is that the number of alternatives right now is low. There are no rich clients that connect to the cloud and do the things that Facebook or Twitter do. If there were, then people would prefer the rich client, because it would be superior to the webpage in every way. Second, I am increasingly convinced that we have reached the end of what's possible with HTML and JavaScript. There's just not much more room for improvement. Google Docs will never surpass Excel because, frankly, the technology is too limiting. And lastly, even if there were a bunch of killer cloud-based Windows applications that were superior to web pages, people would need a trivial way to a) find them and b) run them. Which is why we need a ClickOnce application store for Windows applications. Apple has shown that the app-store model is a good one. It makes it easy for people to find fun applications to run. Windows needs something similar.

4. We need to give people a reason to want more powerful computers. If you're just browsing the Internet, your six-year-old 1.4 GHz Athlon from 2003 will be fine. Having a library of rich, cloud-based applications will spur demand for faster machines (a 3D WPF-based Facebook app would be awesome!).

5. Why is it so hard to find small laptops with high screen resolutions?  Who wants a 16” screen with 1280x800 resolution?

6. I love my Xbox, but mostly because Nintendo is unable to publish enough games on its own to keep a gamer busy. Why is it that every time I go to GameStop, the entire Wii wall is filled with games for 9-year-old girls? Clearly there are adults who own Wiis. Why isn't anybody developing games for them?

7. What crazy company would ever ship competitors’ products with their own?  Imagine if Toyota gave you a “ballot” when you bought a car from them, from which you could “vote” on which car stereo you preferred – a Toyota stereo, or an after-market brand?  No, this insanity makes no sense to anybody.  That the EU is complaining about the browserless Windows 7 solution is an admission that, yes, a web browser is an integral part of the operating system.  Which makes the whole bundling argument rather weak.  But ship competitors products with your own?  Forget it.  I’d rather ship nothing.