Wednesday, November 29, 2006

Wii can learn something from Nintendo...

If you live in the U.S. (or nearly anywhere, for that matter) and follow the consumer tech market even at a glance, you surely could not have missed all the goings-on surrounding Sony's and Nintendo's recent release of new gaming consoles, the PS3 and Wii, respectively.  What is interesting is that while all the crazy hoopla seemed to focus on Sony's frontal assault on Microsoft's Xbox 360, Nintendo [relatively speaking] quietly introduced the Wii, a much lower priced, less featured gaming system with not-nearly-as-good graphics.

On paper, the Wii is a huge “why bother.”  However, on further examination, there is some interesting genius at work here.  Sony and Microsoft are bent on being #1, owning the living room, and being a vehicle for all entertainment and media.  Gaming is beginning to take a back seat.  Nintendo, however, is emerging as the one player in the console gaming market that clearly knows where its bread is buttered.  They do gaming consoles, handheld game platforms, and have a much larger library of Nintendo-produced games.  That's it.  They're not out to “be the do-all end-all media device.”  It is also worth noting that Nintendo seems to be the only one of the three that is actually very profitable (in the console gaming business).

What prompted this post was that I came across this interesting article in The New Yorker magazine.  What really struck me was this:

“A recent survey of the evidence on market share by J. Scott Armstrong and Kesten C. Green found that companies that adopt what they call 'competitor-oriented objectives' actually end up hurting their own profitability. In other words, the more a company focusses on beating its competitors, rather than on the bottom line, the worse it is likely to do.”

I guess the way I'm going to somehow tie all of this back to CodeGear is to say that, as a much smaller company with a single-minded focus, we should really take a few notes here.  We must make sure we focus on what we're purporting to be all about.  We're going to have to have “developer-oriented objectives,” and not “competitor-oriented objectives.”  By doing that, I have little doubt that we can thrive and have a profound impact on the lives of developers.

To steal a line from the Nintendo Wii TV commercials, “CodeGear wants to play.”

Tuesday, November 28, 2006

Optionitis can kill if left untreated...

If you've been around these parts for a while you've probably heard the above reply to common feature requests.  Many times these requests are at odds between several groups of users, so their typical solution to this impasse is the ever so simple, “Well, then just make it an option.”  I'm sure there have been those who've scoffed at my response and put me in the “closed-minded-dolt” category ;-).  Well, it seems I may have gained a little vindication.  Apparently “Joel on Software” seems to agree.  One could certainly argue that development tools are targeted at a different level of user than Windows itself or your typical word processor or spreadsheet application.  To that I say, why?  Why do development tools have to have a million and one little options for this and that?  We already have different keybindings, tabs on, tabs off, auto indent on/off, code completion on/off, etc...  There are a lot of end user reasons to limit the options, as Joel so deftly argues, in a somewhat comical fashion, at the expense of that faceless team of individuals working on the “start menu.”

However, there are also a lot of practical development-side reasons to limit the choices as well.  Many times, just the simple act of adding a single on/off boolean option actually can double your testing effort!  OK, that was probably a bit of hyperbole, but it will double a portion of your testing effort.  So now you have to test everything related to that option with it on and with it off.  Add another option, and you're quadrupling the effort.  Think binary here.  In practice there are plenty of relatively safe testing “shortcuts” and ways to minimize that impact.  My point here is that glibly introducing an optional new feature because “those 5 people I talked with last week said it was a good idea” is probably not the most responsible thing to do.  Sure, for smaller applications targeting smaller markets, you do have to try and cater to as many as you can.  However, I've seen many, many cases where a better, workable, non-optional solution is born out of looking at a whole aggregated class of similar problems rather than any single request.
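Just to put some rough numbers behind the “think binary” point, here's a tiny, purely illustrative console program (nothing from any real product) showing how fast an exhaustive test matrix grows as independent on/off options pile up:

program OptionCombinations;
{$APPTYPE CONSOLE}

var
  Options: Integer;
  Configurations: Int64;
begin
  Configurations := 1;
  for Options := 1 to 10 do
  begin
    // Each independent on/off option doubles the exhaustive test matrix: 2^n.
    Configurations := Configurations * 2;
    Writeln(Options, ' option(s) -> ', Configurations, ' configuration(s) to test');
  end;
end.

Ten innocent little checkboxes and you're already looking at 1,024 combinations, which is exactly why those testing “shortcuts” exist in the first place.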

So when you're asked to “just add this little new feature,” and are inclined to make it optional, just step back and consider whether or not there are other problems within this particular class.  Is this new option worth the extra testing burden?  Is the feature useful to a wider cross-section of your customer base, such that maybe it shouldn't be optional but always enabled?  Don't use the option as a “get out of jail free” card.  Sure, if customer X says they just can't stand that feature, you can reply with a simple, “Just turn it off.”  You sure dodged that bullet... or did you?  What if there was some use case that customer brought to the table that you had not considered?  Adding an option should not be used as an excuse to not have to thoroughly think through a problem.  Consciously or not, many times that is exactly what is happening.

So for now, I'll still hold that “Optionitis can kill if left untreated.”  Now... at some point we still have to add that error message, “Programmer expected...”

Monday, November 27, 2006

Cough... cough...

Groundwork

Now that the dust is beginning to settle and some of the initial euphoric/shocked/stunned reactions are beginning to subside regarding the CodeGear announcement, I figured I'd weigh in with my perspective.  I specifically wanted to hold off till this point mainly because I wanted some time to fully digest and evaluate what this all means and how I think it will play out in the coming months.  To be fair, it is all still sinking in and there are still a fair number of questions we have yet to answer.  Being as close to this whole process as I've been has given me a decidedly unique perspective.  First of all, being a technical kinda guy all my life with little to no desire to ever delve into “business” allowed me to take a kind of “layman” approach.  Of course, I'd also like to think that I'm no spring chicken and that I still possess a keen ability to analyze and verify a lot of the information I've been able to see.

If you've ever dealt with folks in the finance/business world, they have their own language and speak at a level of abstraction that tends to baffle most folks.  Wait... that sounds oddly familiar, doesn't it?  Isn't that exactly how everyone tends to describe us, in the high-tech world?  We have URLs, ASTs, QuickSorts, etc...  They have EBITDA, Rev-Rec and Cost-Models.  So?  The point I'm trying to make here is that many in the high-tech world tend to eschew all things business.  While this is clearly an oversimplification and probably too broad of a statement, I'm only trying to highlight that the reverse is not necessarily true.  The business side understands that there is value and a market for the high-tech side.  It's their job to recognize and figure out how to monetize and capitalize on those things.

Another aspect of the business side of things, often maligned and thought of as only “for those other guys,” is marketing.  I guess one of the reasons for that is that marketing is actually about psychology.  I'm not saying that they aren't out there, but I don't know any programmers, software engineers, etc... who also have psychology degrees.  Human factors is close to what I'd consider human psychology mixed with technology.  Psychology is in many ways more of a meta-science than, say, physics, mathematics, or biology.  This is probably one reason that marketing has really been misunderstood.  We all know when marketing is trying too hard, is just plain bad, or misses its mark.  However, when marketing is successful, the target audience doesn't actually feel like they've been marketed to.  The message is clear, resonates, and makes sense.  This is how CodeGear needs to handle marketing.

My Take

As I started this post, I wanted to make it clear that there is a lot of machinery behind this endeavor.  It is also not a “started in the garage” kind of venture.  So the first item is the CodeGear announcement itself and what it means.  During the months following the February 8th announcement of Borland's intention to divest itself of the Developer Tools Group (DTG), a huge internal effort began.  This all started with creating a credible and achievable plan for the next 3 years.  These plans not only included the existing products, but also plans for growing the business and moving into other developer-focused markets.  Probably one of the hardest parts was determining which parts would go with the DTG, which parts would be licensed from Borland, and how to handle the overall transition.  There were the obvious items: Delphi, JBuilder, C++Builder, InterBase, etc...  But there were also some technologies that had been spread across all the Borland product lines.  An example of this is the licensing.  I've read a bit about how some folks were nervous that they wouldn't be able to activate their legacy products.  I assure you that this was an item discussed all the way to the top.  It was imperative that we, CodeGear and Borland, not allow that to happen.

With all the late evenings and long weekends, we came down to the CodeGear announcement: that Borland intends to make the DTG a wholly owned subsidiary called CodeGear.  My first thought was... “hmmm... OK... that's interesting.”  I remember meeting with many potential investors and going through all the long presentation sessions.  I did many follow-up diligence sessions.  I discussed the customers, the products, the roadmaps, the teams, the history, the good, the bad, and the ugly.  I must say that nearly all of the folks I met were cordial, engaged, interested and, above all, shrewd and analytical.  So after all of that, it did seem somewhat anti-climactic.  We had been diligently preparing for one specific outcome and something slightly different happened.  I remember, early on, personally resolving to approach this whole process with an open and non-judgmental attitude.  Whatever the outcome, I was going to, as much as I'm able, do whatever it takes to make this a success.  It isn't every day that one gets to participate in the genesis of a new company, in whatever form.

So were the last 9 months a waste?  Absolutely NOT!  As a matter of fact, we're in a far better position to be successful and run this business.  We have spent the last months shining a bright light on every deep, dark corner of the business.  We've questioned everything.  Quite frankly, we had to relearn this business in the operational sense; I'm sure it's no real secret that it had been left to its own devices for a very long time.  We also have to take into account market shifts and other dynamics.  The great thing is that we no longer have to sit on the sidelines and watch all the action.  Now we have the chance to get in on it.  What is that action?  Things like web development, dynamic/scripting languages, and continued traction in the Win32/Win64 native markets.  Let's not forget the whole .NET side of things as well.  There is the quickly maturing open source movement and a whole ecosystem surrounding Java and Eclipse.  We still have a lot to offer those markets in terms of experience, wisdom, and insights.  This isn't just a whole lot of “been there, done that,” but a chance to actively apply a huge amount of what we've learned over the years regarding what developers need and want.  This is a chance to help shepherd in these new technological advances by making them more accessible to the average developer.  Over the coming months/years, I'm certain that our offerings in those spaces will look very different than they do today.

Balance

This is a hard lesson... for anybody.  Part of why I put this in here is that this is one thing I know developers struggle with.  While CodeGear is clearly focused on the developer, we also know that you can't be all things to everybody.  So in many ways, CodeGear will have to find the right “balance” as it comes out of the gate.  There is a time for whipping out the shotgun, blasting away at a market, and hoping that something will hit.  There are also plenty more times where the sniper rifle is far more effective.  The balance comes from knowing when to use which approach.  So, you will probably see some use of the shotgun and a good amount of the sniper rifle as well.

So while the landscape is not exactly how we originally envisioned it, it is very, very close.  CodeGear will be allowed to operate in near total autonomy.  We'll have control over what products we produce and when we release those products.  We also control how we approach new emerging markets/technologies.  We'll have control over our own expenses.  If something costs too much, we either do it differently or decide not to do it at all.  Pretty simple.  We get to decide where to re-invest the profits.  We get to decide with whom we'll form partnerships.

Many of you may remain unconvinced, and that's OK.  I have no delusions that we're going to make everyone happy.  A lot of mere talk isn't going to convince some people.  I know that.  So all I ask is that you watch carefully and be patient.  Things are going to begin to happen in the coming weeks.  This will be especially true next quarter, when we get all the nitty-gritty details of the CodeGear business arrangements settled and announced.  The great thing is that I've got more irons in the fire now than I've ever had while at Borland.  We're also at a point where it isn't a question of what direction to go or what to do, because we know that it must fit with focusing on the developer.  We're certainly not at a loss for ideas and direction; it is now just a matter of deciding what to do first and when.  Much of that has already been decided, as you'll see in the upcoming weeks/months.  For my part, I'll keep rambling on...  And please excuse the dust as we remodel.

Monday, November 20, 2006

CodeGear Borland, an example

How many times in the past (the Borland past, that is) has the director of IT actually posted a message in the newsgroups?  To the best of my knowledge, a grand total of ZERO times.  There was a thread over in the Delphi.non-tech group about the new CodeGear site, and Mark Trenchard, the CodeGear IT director, actually posted a message!  Mark was recently brought on board and was previously with the networking group at HP.  This is certainly a good sign that things truly will be different here at CodeGear.  I will continue to encourage all CodeGear employees to interact with the community where appropriate.

Tuesday, November 14, 2006

CodeGear = new Company();

Various other links:

Letter to our customers, partners, and fans from Ben Smith, CEO CodeGear.

Borland Spins Off Its Tools Unit.

Borland Launches CodeGear to Supply Developers with Tools of the Trade.

Borland forms CodeGear - FAQ.

 

CodeGear := TCompany.Create;

And you thought it'd never happen! :-)  Well, as of today, I'm officially a CodeGear employee.  CodeGear is the new developer company born of Borland.  We're about, for, and by developers.  In the coming days and weeks, we'll be talking more about what we're about, where we're going to take this new venture, and how we're going to get there.  Delphi, C++Builder, JBuilder, and InterBase are the core products around which this company will be built.  But that isn't all we're about.  Be sure to stay tuned as we roll out more and more information.  You can read the press release here.  Also, you can Digg this link as well.  Full disclosure: I did submit the story to Digg.

Friday, November 3, 2006

try...finally.

There were quite a few interesting comments made to this post from the other day that seemed to indicate that there is a little confusion out there regarding exceptions.  More specifically, the try...finally construct.

I've seen some rather interesting twists on the use of try...finally that have made me pause and wonder why it was done that way.  When programming with exceptions, you always have to be aware that at nearly any point and without warning, control could be whisked away to some far-off place.  In many ways you have to approach and think about the problem very similarly to how you would in a multi-threaded scenario.  In either case, you always have to keep in the back of your mind two (or more, in the multi-threaded case) potential code paths.  The first one is easy since that is the order in which you're writing the code statements that define the overall logic and intent of your program.  The other, and often forgotten, code path is the exception execution path.

When an exception is raised (or "thrown" in the parlance of C++ and other similar languages), a lot of behind-the-scenes work is set into motion.  I won't be going into the details of exactly what is going on since that tends to be platform and language dependent.  I'll focus more on what happens from the programmer's perspective, and even more specifically on the try...finally construct.  One way to think of the try...finally block is that it is the programmer's way of "getting in the way" of that secondary code execution path.  However, it is also unique in that it "gets in the way" of the normal code execution path as well.  In the grand scheme of things, the try...finally block is one of the most used (and often misused) block types for programming with exceptions.  I'd venture to say that in the typical application the ratio of try...finally blocks to try...except blocks is on the order of 100:1.  But why use them at all, and what are they good for?  It's all about resource management.  Memory, files, handles, locks, etc... are all examples of the various resources your application uses and interacts with.  The whole point of the try...finally block is to ensure an acquired resource is properly handed back regardless of which execution path the application takes.

Let's look at some common misuses of the try...finally block and examine them more closely.

var
  Obj: TMyClass;
begin
  try
    Obj := TMyClass.Create;
    ...
  finally
    Obj.Free;
  end;
end;

There's a subtle problem here... Let's follow the normal execution flow first.  Control enters the try block, an instance of TMyClass is allocated and the constructor is called, control returns and the local variable, Obj, is assigned.  Some operations are done with the Obj instance (the "..."), then control enters the finally block and the Obj instance is freed.  So far so good, right?  I mean, the memory is allocated and freed and all is well with the heap along with any other resources needed by the TMyClass instance (assuming it is a properly designed class, that is).


Now let's look at the other possible flow of control.  Control enters the try block, an instance of TMyClass is allocated and the constructor is called.  Here is where something can go horribly wrong.  When programming with exceptions, the possibilities are nearly endless as to what can happen during the call to the constructor.  The most obvious is the case where the memory manager is unable to allocate enough space on the heap to hold the instance data for the new instance.   An "Out of Memory" exception is raised.  Hey, but that's OK because the finally block will get in the way of the exception control flow, right?  Yep, that's right.  So memory was never allocated, the constructor was never called, the local variable Obj was never assigned, and control is passed to the finally block which contains... uh... Obj.Free;  Do you see it now?  Yep, that's right, the Obj reference was never properly set.  Because of that, the call to Obj.Free; is not good for the health of your application.  Chances are that another exception is going to be raised which will supersede the original exception (most likely a more fatal and nasty one).


So how do we fix this?  Hey, I know!  What if we just made sure to pre-initialize the Obj reference to nil (Obj := nil;)?  Sure, you could do that, but that just adds another line of code to your function.  How can we arrange the above code so that Obj is guaranteed to hold a valid reference every time control is passed to the finally block, regardless of which path of execution is used to get there?  It's actually very simple.  Here's the same block with that subtle change:

var
  Obj: TMyClass;
begin
  Obj := TMyClass.Create;
  try
    ...
  finally
    Obj.Free;
  end;
end;


But now the construction of the TMyClass instance isn't protected!  It doesn't have to be, and let's examine why.  In some of my previous posts regarding exception safety and the use of Assigned, we alluded to the fact that while the constructor of an object is executing, if any exception is raised the destructor will automatically be called, the object will be freed, and the exception is allowed to continue on its merry way.  There are essentially two main areas where things can go wrong.  The first is during the actual allocation of the memory for the instance.  If the memory manager is unable to find a block of free memory large enough to hold that instance, it will raise an "Out of Memory" exception.  Since the instance was never actually allocated, there is no need to ever execute the Obj.Free; line of code.  Since the try...finally block was never entered, Obj.Free will never be called.  The other place where things can go wrong is in the object's constructor.  In this case the memory manager allocated the memory and then control was passed off to the constructor, which would begin to set up the instance.   If something fatal happened along the way there, we know from those past articles that the destructor will automatically be called and the instance memory handed back to the memory manager for later re-use.  Since the object was already freed and, again, control never entered the try...finally block, the Obj.Free; line is never executed.  So in both of those scenarios, there was no need to touch the local Obj variable reference since the resources associated with the TMyClass instance were taken care of.  If the memory allocation succeeds and the constructor runs to completion, control will return, the local variable Obj will be assigned, and control will then enter the try...finally block.  It is only at this point that you always want to make sure the TMyClass instance referenced by Obj is freed regardless of what happens next.
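To make the same point with a resource other than memory, here's a minimal sketch of the identical pattern applied to a file (the file name is made up, and it assumes Classes and SysUtils are in the uses clause):

var
  Stream: TFileStream;
  Size: Int64;
begin
  // Acquire the resource first; only enter the try once it is valid.
  Stream := TFileStream.Create('data.bin', fmOpenRead or fmShareDenyWrite);
  try
    Size := Stream.Size;  // any of the work in here may raise
    // ... do something useful with the file ...
  finally
    // Runs on both the normal and the exception path, handing the
    // file handle back to the OS no matter what happened in between.
    Stream.Free;
  end;
end;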


There are a lot of other interesting "twists" on the usage of try...finally and try...except blocks that I've seen over the years.  The above case stands out in my mind as the most common mistake made.  I'll address some of the other cases in following posts.  If you have any questions about certain common idioms and patterns that you're not sure are exception aware, post some comments and maybe I'll address them in future posts.

More videos.

No! Not that kind!  Steve Trefethen has posted an excellent video demonstrating some of the little-known Code Completion, Code Insight, Code Parameters, Code Browsing and Live Template features.  Some of these features have been in the product for several releases, and some are new for BDS2006 (and the Turbos).

Wednesday, November 1, 2006

Exceptional Safety

Back in late 1992 or 1993 or so, we had a dilemma.  We wanted to add exceptions to Turbo Pascal (what was soon to become the basis for Delphi's programming language).  Windows NT was under full-swing development.  With this new OS came a new-fangled thingy called Structured Exception Handling (SEH).  So what was the dilemma here?  If you'll recall, the first release of Delphi was targeting Windows 3.1, aka Win16.  There was no OS-level support for SEH.  Also, Windows NT wasn't going to be released as a mass-market, consumer-based OS.  Again, what's the problem?  Just add your own implementation of SEH to the language and move on.  The problem was all about "safety."  So we, of course, added our own specific implementation of SEH to the 16-bit version of the Delphi language.  Aren't exceptions supposed to make your code more safe?  OK, I'm being obtuse and a little evasive here.

The fundamental problem we faced was the notion of a partially constructed object.  What if halfway through an object's constructor an exception was raised?  How do you know how far into the execution of the constructor you got by the time the exception handler is executed and the object's destructor is called?  The constructor is executing merrily along, initializing fields, constructing other objects, allocating memory buffers, etc...  Suddenly, BAM!  One of those operations fails (can't open a file, bad pointer passed in as a constructor parameter, etc...).  Since the compiler had already injected an implicit exception handler around the execution of the constructor, it catches the exception and promptly and dutifully calls the destructor.  Once that is complete and the partially constructed object is destroyed and all the resources are freed, the exception is allowed to continue; in other words, it is re-raised.  The problem in this scenario is the fact that the destructor really has absolutely no clue why it got called (sure, it could see that an exception is currently in play, but so what?).  The destructor doesn't know if the instance was ever fully constructed and, if not, how much got constructed.

The solution turned out to be remarkably simple and somewhat clever at the same time.  Since all classes in Delphi have an associated virtual method table (VMT), each instance must be initialized to point to that table.  Since Delphi classes allow you to make virtual method calls from within the constructor, that VMT pointer has to be initialized before the constructor is allowed to execute.  If the VMT pointer has to be set before the constructor executes, why not just initialize the entire instance to some known state?  This is exactly what is done.  The entire instance is initialized to zero (0), the VMT pointer is set, and if the object implements interfaces, those VMT pointers are also set.  That way, once the user's code in the object constructor begins to execute, you know that the instance data is in a known state.  By using this fact, the destructor can easily tell how far in the object's initialization sequence things got before the world went haywire.  Remember yesterday's post where I mentioned the FreeAndNil procedure?  Another item to note is the non-virtual TObject.Free method.  You can assume that if an instance field contains a non-nil or non-zero value, it must have been successfully initialized, so it should also be OK to de-initialize it.  This is especially important for any memory allocations or object constructions that took place in the constructor (or any other object method, for that matter).  The destructor has to know when a valid pointer is in that field, so a non-nil value means: go ahead and free the memory or destroy the object.

We realized, too, that it would be very tedious and error-prone for the user to always have to remember to do this pattern: if FField <> nil then FField.Destroy;  Enter TObject.Free.  If you open System.pas and look at the implementation of Free, it simply does: if Self <> nil then Destroy;  So you can safely call Free on a nil, or unassigned, instance; the check is done within the method itself.  All you need to do is FField.Free; and your destructor is now "exception safe."  The same thing can be done for memory allocated with GetMem.  You can safely call FreeMem(FField), even if FField is nil.   It just returns.  Finally, it should be noted that certain "managed types" such as strings, interfaces, dynamic arrays, and variants are also automatically handled through some compiler-generated metadata.  Just before an object instance's memory is freed, an RTL function is called that takes the instance and this metadata table, which contains field types and offsets, so it knows how to finalize those instance fields.  Again, if a particular field is nil, it is simply passed over with no action needed.
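Putting those pieces together, here's a minimal sketch (TWidget is a made-up class; it assumes Classes is in the uses clause for TList) of a constructor that can fail partway through and a destructor that stays safe purely because the instance starts out zeroed and Free/FreeMem tolerate nil:

type
  TWidget = class
  private
    FBuffer: Pointer;
    FItems: TList;
  public
    constructor Create;
    destructor Destroy; override;
  end;

constructor TWidget.Create;
begin
  inherited Create;
  GetMem(FBuffer, 1024);   // may raise an "Out of Memory" exception
  FItems := TList.Create;  // may also raise; if so, Destroy is called for us
end;

destructor TWidget.Destroy;
begin
  // Any field never assigned is still nil thanks to the zeroed instance.
  FItems.Free;       // safe even if FItems was never created
  FreeMem(FBuffer);  // safe even if FBuffer was never allocated
  inherited Destroy;
end;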

What about the FreeAndNil thingy?  The astute among you have probably noticed that the implementation of that procedure actually sets the passed-in reference to nil first and then destroys the instance.  Shouldn't the name actually be NilAndFree?  Yeah, probably.  But it just doesn't roll off the tongue very well and is equally confusing:  "So if you set the reference to nil first... how can you destroy it?"  So why was it implemented in this manner?  Exception safety is one big reason.  Another, significantly more obscure, reason involves intimately linked objects.  Suppose you have one object that holds a list of other objects which in turn hold a reference back to the "owner" object.  Depending on the order in which the objects get destroyed, they may need to notify other objects of their impending doom.  Since the owner and the owned objects are intimately linked, they directly call methods on each other throughout their lifetimes.  However, during destruction, it could be very dangerous to willy-nilly call methods on the other instance while it is in the throes of death.  By setting the instance pointer to nil before destroying the object, a simple nil-check can be employed to make sure no method calls are made while the other instance is being destroyed.
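Here's a minimal sketch of that kind of intimately linked pair (TOwner and TChild are made-up names; it assumes SysUtils is in the uses clause for FreeAndNil).  Because the owner clears its FChild reference before the child's destructor runs, the child can use a simple nil-check to avoid calling back into an object that is already being torn down:

type
  TChild = class;

  TOwner = class
  private
    FChild: TChild;
  public
    constructor Create;
    destructor Destroy; override;
    procedure ChildGoingAway;
  end;

  TChild = class
  private
    FOwner: TOwner;
  public
    constructor Create(AOwner: TOwner);
    destructor Destroy; override;
  end;

constructor TOwner.Create;
begin
  inherited Create;
  FChild := TChild.Create(Self);
end;

destructor TOwner.Destroy;
begin
  // Clear the reference *before* the child is destroyed...
  FreeAndNil(FChild);
  inherited Destroy;
end;

procedure TOwner.ChildGoingAway;
begin
  FChild := nil;  // forget the child once it tells us it is going away
end;

constructor TChild.Create(AOwner: TOwner);
begin
  inherited Create;
  FOwner := AOwner;
end;

destructor TChild.Destroy;
begin
  // ...so this call back to the owner only happens when the owner is still
  // alive and did not initiate the destruction itself.
  if (FOwner <> nil) and (FOwner.FChild = Self) then
    FOwner.ChildGoingAway;
  inherited Destroy;
end;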

So there you have it: a few little tips on ensuring your objects are "exception safe" and a couple of hints about when you should use FreeAndNil.  By peeking under the hood and examining the code, you can get a better understanding of why and how things are implemented.  So, you could always use the if FField <> nil then FField.Destroy; pattern, but why bother when FField.Free; does all the work for you?  Given that, if FField <> nil then FField.Free; is redundant, as is if Assigned(FField) then FField.Free;