Wednesday, December 27, 2006

A year in review.

Officially, CodeGear (and Borland) are in the midst of a holiday shutdown.  The week between Christmas and New Year's is never a very productive one, with many people taking the time to spend it with family and friends.  With the new year approaching, I thought it might be good to spend some time reflecting on all that has transpired throughout the venerable 2006.

January 3rd, 2006 - Met with Tod Nielsen for one of his 100 one-on-one's in 100 days.  I vaguely discussed the content of that meeting here.  One thing I did not discuss in that blog post was that the subject of spinning off Delphi (and friends) into some kind of separate company or subsidiary was actually mentioned.  It was mainly couched in terms of “fantasy” and “what would the world be like if.”  I related the many conversations among Gary Whizin, Chuck Jazdzewski, Anders Hejlsberg and myself where we would all wax poetic about this subject.  Little did I know ;-).

February 7th, 2006 - Received a phone call from Rick Jackson (then the acting VP of R&D at Borland) letting me know about an announcement about to go over the wire the next morning.  Began working on a blog entry to be posted immediately following the actual public announcement.  Didn't sleep very much that night, that's for sure.

February 8th, 2006 - Fly! Be Free!  This is a date that will stick in my mind for many years to come.  Borland announced both the acquisition of Segue Software and its intention of selling the Developer Tools Group to a yet-to-be-named investor or entity.  I still believe that Borland pre-announcing the intent to sell the DTG was the best approach.  There is simply no way to keep something that big, involving that many people, secret for very long.  The rumors and speculation would have been far more distracting and frustrating for everyone involved.  How do you even keep something like this from the rank-and-file employees when you're asking them for information about how to separate internal computing systems, divide up the sales and support staff, and handle the hundreds of thousands of little details?  A wholesale acquisition of a company can be held much closer to the vest, but for the DTG business, which has been so intertwined with everything in Borland, you just could not keep it sufficiently “under wraps” for very long.

February 23rd, 2006 - The first of my “DevCo” - XX days after announcement posts.  The intent was not only to keep the customers informed, but also to make sure everyone was aware that there are real people behind these transactions.  I also wanted to make sure there was some trickle of information to let folks know that, while the content and substance of all the “behind the scenes” work could not be divulged in detail, it was clear that work continues.  One statement I made in that post was that “Once the deal closes, we can remove the Cluetraining wheels.”  As we move into 2007, those training wheels are now off.

March 16th, 2006 - Things you always wanted to know... This is an interesting post not just because of the content but because the reference to “Ed” is actually none other than CodeGear's very own Ben Smith!  So now you know ;-).

March 21st, 2006 - "DevCo" - 5 weeks after spin off announcement.  This post caused a bit of a stir among the “bankers” involved with the whole spin-off.  Apparently my progress posts had become the only real source of information for the press, since our PR firm was regularly contacted for clarification about certain posts I'd made.  The main sticking point was the mention of Bear Stearns and their reactions ;-).  From this point on, when I'd walk into the room for a meeting and the “bankers who shall not be named” were present, I'd get the obligatory “No blogging about this meeting!”  It actually became a kind of running joke for many months... with a hint of seriousness tossed in.  One thing I will note is that Ben Smith was, and still is, a supporter of my blogging, and rarely did I ever get a “you stepped over the line” from him.  I'll admit that I certainly did push hard on that line throughout the period!

March 24th, 2006 - "DevCo" - Ping....  This was one of the first posts to highlight that even during the whole spin-off process, we were committed to appropriately growing the team.  We'd just hired a new compiler engineer to help on the C++ compiler.

April 1st, 2006 - "DevCo" - 1 year after spin out announcement...  I'm certainly happy that it didn't actually take more than one year to get to CodeGear.  However, that didn't stop me from having a little fun at our own expense.  I will, however, point out that while I was having some fun with that post, there was some drama happening behind the scenes regarding how we were going to carve up certain shared technologies and services.  I just took the opportunity to poke some fun at the process.

April 17th, 2006 - One Intern, Two Intern... Red Intern, Blue Intern....  We like interns.  Especially the “Asok” variety ;-)  One thing to note here is that we did hire an R&D intern over the summer, and have now offered him a full-time entry-level position.  So, if you're in the S.F. Bay area and are a C.S. or C.E. student, let us know.  CodeGear can use you, and you can get some valuable real-world experience (at least as “real-world” as it is around here ;-).

May 4th, 2006 - Borland announces "DevCo" progress...  The first official word from Borland that the Developer Tools Group was an independently functioning entity within Borland.  This was also the first time I really felt like I was in a different company.  I mentioned the company (Borland) meeting and how I really felt more like an outsider than actually a part of Borland.  From this point on, that separation just became more and more pronounced.

May 26th, 2006 - Questions, comments, suggestions, smart remarks....  This was the start of the Developer Tools Group soliciting customers and the general public to send in suggestions on what we should call this new company/entity.  CodeGear was actually born out of the myriad suggestions we received.  I don't think anyone actually mentioned CodeGear specifically, but there were a lot of CodeXXXX and XXXGear suggestions.  Michael Swindell simply culled through all the suggestions and came up with CodeGear.  There were about 4 other top candidates for the name.  Since we do own the domains for those other names, I'll not mention them here; we may actually decide to use them for other things in the future.  In an internal Developer Tools Group poll, CodeGear stood head-and-shoulders above the other suggestions.

June 14th, 2006 - Nick-alodean.  This was an excellent day!  We finally convinced the ever-present, tireless Nick Hodges to join the Developer Tools Group!  I've known Nick ever since he posted, arguably, the first “Open Source” Delphi component, TSmiley.  As a long-time member of TeamB and a staunch Delphi supporter, it seemed only natural to invite him to take the helm of Delphi.  Nick has never been one to shy away from controversy; he attacks the issues head-on.  Since joining the Delphi team, sipping from the proverbial firehose, and being tossed into the deep end of the pool, he's become a clear asset to the Delphi product and the team.

July 7th, 2006 - It's a little quiet around here... NOT!  This is one of my first posts in reference to entering a “quiet period” during the spin-off process.  This is merely a period imposed both internally and by the rules governing a publicly held corporation to ensure that the whole process goes as smoothly as possible.  As it was explained to me, we could not have any information from this point on leak out through unofficial channels (and, yes folks, this blog is about as “unofficial” as it gets).  Those deeply involved wanted to spend their time on the details of the actual process, not get distracted fielding questions about some trickle of information.  Leaks only serve to make all those involved nervous and distract from the core of the transaction.  We could not even hint at how many and what type of interested parties there were.  Standard fare for these kinds of deals.

August 7th, 2006 - They're baaaack....  Even during the “quiet period” imposed around the details of the spin-off, we here at the Developer Tools Group (CodeGear) were anything but quiet.  This marked the release of the Turbo editions of the Delphi and C++ products that were part of the Borland Developer Studio.  We also introduced the Explorer editions of the Turbo products as a freely available download.  To date, hundreds of thousands of the Turbo Explorer editions have been downloaded, and they continue to be very popular.

September 12th, 2006 - Will the real "DevCo" please stand up.  I just couldn't pass this up.  As some colleagues and I walked back from lunch, we saw this truck entering the campus parking lot through the entry gate.  As we walked around to get back into the building, I snapped this photo with my camera phone.

October 30th, 2006 - New Delphi Survey.  The annual Delphi survey is posted.  The results of this survey, along with many other sources, have a very profound effect on our product plans and roadmaps.  One small change to the roadmap, hinted at since Nick Hodges presented it at a user group in Amsterdam, is the movement of Unicode support for Win32 to the release following Highlander instead of Compact Framework.  Other changes are coming, so stay tuned.

November 14th, 2006 - CodeGear := TCompany.Create;  Finally!  Yes, I know, I know.  It was not exactly what we'd planned on, but it is certainly close!  After having been allowed to “peek behind the curtain” throughout the whole spin-off process, I'm still excited and totally stoked about being able to control our own destiny.  Yes, I know that this isn't going to be easy.  Yes, there is risk.  And, yes, I'm a little scared!  However, it is a motivating fear, not a paralyzing fear!  A little bit of fear is actually good!  When you mix a little bit of fear with confidence, you have a recipe for success.  Without the formation of CodeGear and all the other events of the last year, I can scarcely imagine what state Delphi, JBuilder, C++Builder and InterBase would be in.  I choose not to dwell on such things and remain thankful that we are where we are.

November 27th, 2006 - Cough... cough....  As the dust clouds began to settle and the initial reactions to the CodeGear announcement began to wane, I offered my opinions on what this all means.  The good thing is that we can all put it behind us, pick up what we've learned, and charge ahead into 2007!

Well, there you have it!  It's been quite a year, and those were only the highlights.  I haven't mentioned that we've hired and re-hired a lot of new and established talent.  We've been very hard at work on the product roadmaps to better adjust and align them with customer demands.  The dev teams have remained focused and are working hard to meet the roadmap goals and milestones.  We all here at CodeGear are beginning to feel more in control and closer to what drives our success.  The distance between our CEO and the rank-and-file CodeGear employee is a mere three steps; at Borland it could easily be twice that or more.  The CEO and our head sales VP sit up here on the third floor with all the developers!  In fact, Nick Hodges is sandwiched between them!  After the first of the year, there will be no Borland employees here on the CodeGear Scotts Valley campus.  Sometime in Q1 '07 we'll be officially “launching” the CodeGear company.  In the meantime, our marketing team is hard at work getting ready for this event.  As a matter of fact, last week I attended a meeting with the marketing team and an outside “branding firm” that will be helping us get the message out about what CodeGear is and what it stands for.  I found it interesting that the head of this outside firm was actually part of the Borland marketing team during the heyday of the developer tools.  I think this firm may actually have a clue about developers.

I want to take the time to thank all of you who read this blog (all two or three of you ;-).  I've tried to be as forthcoming with information and my own insights as I can.  I've also appreciated all the comments and reactions over the last year, even the ones I don't agree with :-).  Happy New Year!  I wish you all a productive, successful, and enriching 2007!

Friday, December 22, 2006

Holiday wishes.

With the holidays approaching, let me take this opportunity to wish everyone a happy and exciting Christmas and new year.  2007 is shaping up to be an interesting year for all of us at CodeGear, so be sure to keep watching for upcoming announcements about the official launch.  Stay safe and enjoy spending time with your families.

Sunday, December 10, 2006

Delphi, from a fresh perspective...

Here it is December 10th and I guess I've been so busy helping build CodeGear and getting plans in place for Q1 '07 and beyond that I completely missed an excellent blog post by Steve Shaughnessy, the new Delphi database architect.  Steve outlines his experience in learning Delphi, the language, and the VCL framework.  I bring this up because, as Steve states, he's been with Borland for 17 years and had somehow avoided ever having the privilege of using Delphi (shock!  horror!  oh, the humanity ;-).  I have truly appreciated his fresh perspective and insight.  A healthy team not only looks forward but also looks back, re-evaluating past decisions and implementations.  While it is good to look at what can be improved upon, it is also very good to celebrate and highlight all those things that the team did right!  So I encourage you to read Steve's post and feel good about your decision to use Delphi.  And if you're evaluating Delphi, Steve's insights into learning it after extensive experience with many other languages can be valuable.

Wednesday, November 29, 2006

Wii can learn something from Nintendo...

If you live in the U.S. (or nearly anywhere, for that matter) and follow the consumer tech market even at a glance, you surely could not have missed all the goings-on surrounding Sony's and Nintendo's recent release of new gaming consoles, the PS3 and Wii, respectively.  What is interesting is that while all the crazy hoopla seemed to focus on Sony's frontal assault on Microsoft's XBox 360, Nintendo [relatively speaking] quietly introduced the Wii, a much lower priced, less featured, not-nearly-as-good-graphics gaming system.

On paper, the Wii is a huge “why bother.”  However, on further examination, there is some interesting genius at work here.  Sony and Microsoft are bent on being #1, owning the living room, and being a vehicle for all entertainment and media.  Gaming is beginning to take a back seat.  Nintendo, however, is emerging as the one player in the console gaming market that clearly knows where its bread is buttered.  They do gaming consoles, handheld game platforms, and have a much larger library of Nintendo-produced games.  That's it.  They're not out to “be the do-all end-all media device.”  It is also worth noting that Nintendo seems to be the only one of the three that is actually very profitable (in the console gaming business).

What prompted this post was that I came across this interesting article in The New Yorker magazine.  What really struck me was this:

“A recent survey of the evidence on market share by J. Scott Armstrong and Kesten C. Green found that companies that adopt what they call 'competitor-oriented objectives' actually end up hurting their own profitability. In other words, the more a company focusses on beating its competitors, rather than on the bottom line, the worse it is likely to do.”

I guess the way I'm going to somehow tie all of this back to CodeGear is to say that, as a much smaller company with a single-minded focus, we should really take a few notes here.  We must make sure we focus on what we're purporting to be all about.  We're going to have to have “developer-oriented objectives,” not “competitor-oriented objectives.”  By doing that, I have little doubt that we can thrive and have a profound impact on the lives of developers.

To steal a line from the Nintendo Wii TV commercials, “CodeGear wants to play.”

Tuesday, November 28, 2006

Optionitis can kill if left untreated...

If you've been around these parts for a while you've probably heard the above reply to common feature requests.  Many times these requests are at odds between several groups of users.  The typical solution to this impasse is the ever so simple, “Well, then just make it an option.”  I'm sure there have been those who've scoffed at my response and put me in the “closed-minded-dolt” category ;-).  Well, it seems I may have gained a little vindication.  Apparently “Joel on Software” agrees.  One could certainly argue that development tools are targeted at a different level of user than Windows itself or your typical word processor or spreadsheet application.  To that I say, why?  Why do development tools have to have a million and one little options for this and that?  We already have different keybindings, tabs on, tabs off, auto-indent on/off, code completion on/off, etc...  There are a lot of end-user reasons to limit the options, as Joel so deftly demonstrates in a somewhat comical fashion at the expense of that faceless team of individuals working on the “start menu.”

However, there are also a lot of practical development-side reasons to limit the choices as well.  Many times, just the simple act of adding a single on/off boolean option can actually double your testing effort!  OK, that was probably a bit of hyperbole, but it will double a portion of your testing effort.  So now you have to test everything related to that option with it on and with it off.  Add another option, and you're quadrupling the effort.  Think binary here.  In practice there are plenty of relatively safe testing “shortcuts” and ways to minimize that impact.  My point here is that glibly introducing an optional new feature because “those 5 people I talked with last week said it was a good idea” is probably not the most responsible thing to do.  Sure, for smaller applications targeting smaller markets, you do have to try and cater to as many as you can.  However, I've seen many, many cases where a better, workable, non-optional solution to a problem (or more appropriately a class of problems) is born out of a whole aggregated class of similar problems.
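To make the “think binary” point concrete, here's a tiny sketch (illustrative only, in Delphi-style Object Pascal) of how independent on/off options multiply the number of configurations a thorough test pass must cover:

```pascal
program OptionMatrix;
{$APPTYPE CONSOLE}
// Each independent boolean option doubles the configuration count:
// with N options there are 2^N on/off combinations to exercise.
var
  Options: Integer;
begin
  for Options := 0 to 4 do
    // 1 shl N computes 2^N: 1, 2, 4, 8, 16...
    Writeln(Options, ' option(s) -> ', 1 shl Options, ' configurations');
end.
```

Four options already mean sixteen combinations; equivalence classing and other testing shortcuts trim this in practice, but the exponential shape is the point.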

So when you're asked to “just add this little new feature,” and are inclined to make it optional, just step back and consider whether or not there are other problems within this particular class.  Is this new option worth the extra testing burden?  Is the feature useful to a wider cross-section of your customer base, such that maybe it shouldn't be optional but always enabled?  Don't use the option as a “get out of jail free” card.  Sure, if customer X says they just can't stand that feature, you can reply with a simple “Just turn it off.”  You sure dodged that bullet... or did you?  What if there was some use case that that customer brought to the table that you had not considered?  Adding an option should not be used as an excuse to not thoroughly think through a problem.  Consciously or not, many times that is what is happening.

So for now, I'll still hold that “Optionitis can kill if left untreated.”  Now... at some point we still have to add that error message, “Programmer expected...”

Monday, November 27, 2006

Cough... cough...


Now that the dust is beginning to settle and some of the initial euphoric/shocked/stunned reactions are beginning to subside regarding the CodeGear announcement, I figured I'd weigh in with my perspective.  I specifically wanted to hold off till this point mainly because I wanted some time to fully digest and evaluate what this all means and how I think it will play out in the coming months.  To be fair, it is all still sinking in and there are still a fair number of questions we have yet to answer.  Being as close to this whole process as I've been has given me a decidedly unique perspective.  First of all, being a technical kinda guy all my life with little to no desire to ever delve into “business” allowed me to take a kind of “layman” approach.  Now of course I'd also like to think that I'm no spring chicken and still possess a keen ability to analyze and verify a lot of the information I've been able to see.

If you've ever dealt with folks in the finance/business world, they have their own language and speak at a level of abstraction that tends to baffle most folks.  Wait... that sounds oddly familiar, doesn't it?  Isn't that exactly how everyone tends to describe us in the high-tech world?  We have URLs, ASTs, QuickSorts, etc...  They have EBITDA, Rev-Rec and Cost-Models.  So?  The point I'm trying to make here is that many in the high-tech world tend to eschew all things business.  While this is clearly an oversimplification and probably too broad a statement, I'm only trying to highlight that the reverse is not necessarily true.  The business side understands that there is value and a market in the high-tech side.  It's their job to recognize and figure out how to monetize and capitalize on those things.

Another aspect of the business side of things, often maligned and thought of as only “for those other guys,” is marketing.  I guess one of the reasons for that is that marketing is actually about psychology.  I'm not saying that they aren't out there, but I don't know any programmers, software engineers, etc... who also have psychology degrees.  Human factors is close to what I'd consider human psychology mixed with technology.  Psychology is in many ways more of a meta-science than, say, physics, mathematics or biology.  This is probably one reason that marketing has been so misunderstood.  We all know when marketing is trying too hard, is just plain bad, and misses its mark.  However, when marketing is successful... the target audience doesn't actually feel like they've been marketed to.  The message is clear, resonates, and makes sense.  This is how CodeGear needs to handle marketing.

My Take

As I started this post, I wanted to make it clear that there is a lot of machinery behind this endeavor.  It is also not a “started in the garage” kind of venture.  So the first item is the CodeGear announcement itself and what it means.  During the months following the February 8th announcement of Borland's intention to divest itself of the Developer Tools Group (DTG), a huge internal effort began.  This all started with creating a credible and achievable plan for the next 3 years.  These plans included not only the existing products, but also plans for growing the business and moving into other developer-focused markets.  Probably one of the hardest parts was determining what would go with the DTG, what would be licensed from Borland, and how to handle the overall transition.  There were the obvious items: Delphi, JBuilder, C++Builder, InterBase, etc...  But there were also some technologies that had been spread across all the Borland product lines.  An example of this is licensing.  I've read a bit about how some folks were nervous that they wouldn't be able to activate their legacy products.  I assure you that this was an item discussed all the way to the top.  It was imperative that we, CodeGear and Borland, not allow that to happen.

With all the late evenings and long weekends, we came down to the CodeGear announcement: Borland intends to make the DTG a wholly owned subsidiary called CodeGear.  My first thought was... “hmmm... OK... that's interesting.”  I remember meeting with many potential investors and going through all the long presentation sessions.  I did many follow-up diligence sessions.  I discussed the customers, the products, the roadmaps, the teams, the history, the good, the bad, and the ugly.  I must say that nearly all of the folks I met were cordial, engaged, interested and, above all, shrewd and analytical.  So after all of that, it did seem somewhat anti-climactic.  We had been diligently preparing for one specific outcome and something slightly different happened.  I remember early on personally resolving to approach this whole process with an open and non-judgmental attitude.  Whatever the outcome, I was going to, as much as I'm able, do whatever it takes to make this a success.  It isn't every day that one gets to participate in the genesis of a new company, in whatever form.

So were the last 9 months a waste?  Absolutely NOT!  As a matter of fact, we're in a far better position to be successful and run this business.  We have spent the last months shining a bright light on every deep, dark corner of the business.  We've questioned everything.  Quite frankly, we had to relearn this business in the operational sense, and I'm sure this is no real secret: it's been left to its own devices for a very long time.  We also have to take into account market shifts and other dynamics.  The great thing is that we no longer have to sit on the sidelines and watch all the action.  Now we have the chance to get in on it.  What is that action?  Things like web development, dynamic/scripting languages, and continued traction in the Win32/Win64 native markets.  Let's not forget the whole .NET side of things as well.  There is the quickly maturing open source movement and a whole ecosystem surrounding Java and Eclipse.  We still have a lot to offer those markets in terms of experience, wisdom, and insights.  This isn't just a whole lot of “been there, done that,” but a chance to actively apply a huge amount of what we've learned over the years about what developers need and want.  This is a chance to help shepherd in these new technological advances by making them more accessible to the average developer.  Over the coming months/years, I'm certain that the shape of our offerings in those spaces will be very different than it is today.


This is a hard lesson... for anybody.  Part of why I put this in here is that this is one thing I know developers struggle with.  While CodeGear is clearly focused on the developer, we also know that you can't be all things to everybody.  So in many ways, CodeGear will have to find the right “balance” as it comes out of the gate.  There is a time for whipping out the shotgun, blasting away at a market, and hoping that something will hit.  There are also plenty more times where the sniper rifle is far more effective.  The balance comes from knowing when to use which approach.  So, you will probably see some use of the shotgun and a good amount of the sniper rifle as well.

So while the landscape is not exactly how we originally envisioned it, it is very, very close.  CodeGear will be allowed to operate in near-total autonomy.  We'll have control over what products we produce and when we release those products.  We also control how we approach new emerging markets/technologies.  We'll have control over our own expenses.  If something costs too much, we either do it differently or decide not to do it at all.  Pretty simple.  We get to decide where to re-invest the profits.  We get to decide with whom we'll form partnerships.

Many of you may remain unconvinced, and that's OK.  I have no delusions that we're going to make everyone happy.  A lot of mere talk isn't going to convince some people.  I know that.  So all I ask is that you watch carefully and be patient.  Things are going to begin to happen in the coming weeks.  This will be especially true next quarter, when we get all the nitty-gritty details of the CodeGear business arrangements settled and announced.  The great thing is that I've got more irons in the fire now than I ever had while at Borland.  We're also at a point where it isn't a question of what direction to go and what to do, because we know that it must fit with focusing on the developer.  We're certainly not at a loss for ideas and direction; it is now just a matter of deciding what to do first and when.  Much of that has already been decided, as you'll see in the upcoming weeks/months.  For my part, I'll keep rambling on...  And please excuse the dust as we remodel.

Monday, November 20, 2006

CodeGear Borland, an example

How many times in the past (the Borland past, that is) has the director of IT actually posted a message in the newsgroups?  To the best of my knowledge, a grand total of ZERO times.  There was a thread over in the Delphi.non-tech group about the new CodeGear site, and Mark Trenchard, the CodeGear IT director, actually posted a message!  Mark was recently brought on board and was previously with the networking group at HP.  This is certainly a good sign that things truly will be different here at CodeGear.  I will continue to encourage all CodeGear employees to interact with the community where appropriate.

Tuesday, November 14, 2006

CodeGear = new Company();

Various other links:

Letter to our customers, partners, and fans from Ben Smith, CEO CodeGear.

Borland Spins Off Its Tools Unit.

Borland Launches CodeGear to Supply Developers with Tools of the Trade.

Borland forms CodeGear - FAQ.


CodeGear := TCompany.Create;

And you thought it'd never happen! :-)  However, as of today, I'm officially a CodeGear employee.  CodeGear is the new developer company born of Borland.  We're about, for, and by developers.  In the coming days and weeks, we'll be talking more about what we're about, where we're going to take this new venture, and how we're going to get there.  Delphi, C++Builder, JBuilder, and InterBase are all core products around which this company will be built.  But that isn't all we're about.  Be sure to stay tuned as we roll out more and more information.  You can read the press release here.  You can also Digg this link.  Full disclosure: I did submit the story to Digg.

Friday, November 3, 2006


There were quite a few interesting comments made to this post from the other day that seemed to indicate that there is a little confusion out there regarding exceptions.  More specifically, the try...finally construct.

I've seen some rather interesting twists on the use of try...finally that have made me pause and wonder why it was done that way.  When programming with exceptions, you always have to be aware that at nearly any point, and without warning, control could be whisked away to some far-off place.  In many ways you have to approach and think about the problem very similarly to how you would in a multi-threaded scenario.  In either case, you always have to keep in the back of your mind two (or more, in the multi-threaded case) potential code paths.  The first one is easy, since that is the order in which you're writing the code statements that define the overall logic and intent of your program.  The other, and often forgotten, code path is the exception execution path.

When an exception is raised (or "thrown" in the parlance of C++ and other similar languages) a lot of behind-the-scenes work is set into motion.  I won't be going into the details of exactly what is going on since that tends to be platform- and language-dependent.  I'll focus more on what happens from the programmer's perspective and even more specifically the try...finally construct.  One way to think of the try...finally block is that it is the programmer's way of "getting in the way" of that secondary code execution path.  However, it is also unique in that it also "gets in the way" of the normal code execution path.  In the grand scheme of things, the try...finally block is one of the most used (and often misused) block types for programming with exceptions.  I'd venture to say that in the typical application the ratio of try...finally blocks to try...except blocks is on the order of 100:1.  But why use them at all and what are they good for?  It's all about resource management.  Memory, files, handles, locks, etc... are all examples of the various resources your application uses and interacts with.  The whole point of the try...finally block is to ensure an acquired resource is also properly handed back regardless of which execution path the application takes.
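As a minimal sketch of that resource pattern (using a hypothetical text file as the resource), the release goes in the finally block so it runs on both execution paths:

```pascal
var
  F: TextFile;
begin
  AssignFile(F, 'data.txt');
  Reset(F);                 // acquire the resource (open the file)
  try
    // ... work with F; an exception raised anywhere in here
    // still passes through the finally block below
  finally
    CloseFile(F);           // handed back on both execution paths
  end;
end;
```

Note that the acquisition happens before the try; the misuse examples below show why that ordering matters.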

Let's look at some common misuses of the try...finally block and examine them more closely.

var
  Obj: TMyClass;
begin
  try
    Obj := TMyClass.Create;
    ...
  finally
    Obj.Free;
  end;
end;

There's a subtle problem here... Let's follow the normal execution flow first.  Control enters the try block, an instance of TMyClass is allocated and the constructor is called, control returns and the local variable, Obj, is assigned.  Some operations are done with the Obj instance (the "..."), then control enters the finally block and the Obj instance is freed.  So far so good, right?  I mean, the memory is allocated and freed and all is well with the heap along with any other resources needed by the TMyClass instance (assuming it is a properly designed class, that is).

Now let's look at the other possible flow of control.  Control enters the try block, an instance of TMyClass is allocated and the constructor is called.  Here is where something can go horribly wrong.  When programming with exceptions, the possibilities are nearly endless as to what can happen during the call to the constructor.  The most obvious is the case where the memory manager is unable to allocate enough space on the heap to hold the instance data for the new instance.  An "Out of Memory" exception is raised.  Hey, but that's OK because the finally block will get in the way of the exception control flow, right?  Yep, that's right.  So memory was never allocated, the constructor was never called, the local variable Obj was never assigned, and control is passed to the finally block which contains... uh... Obj.Free;  Do you see it now?  Yep, that's right, the Obj reference was never properly set.  Because of that, the call to Obj.Free; is not good for the health of your application.  Chances are that another exception is going to be raised which will supersede the original exception (most likely a more fatal and nasty one).

So how do we fix this?  Hey, I know!  What if we just made sure to pre-initialize the Obj reference to nil (Obj := nil;)?  Sure.  You could do that, but that just adds another line of code to your function.  How can we rearrange the above code to ensure that the Obj reference is in a known, valid state every time control is passed to the finally block, regardless of which path of execution is used to get there?  It's actually very simple.  Here's the same block with that subtle change:

var
  Obj: TMyClass;
begin
  Obj := TMyClass.Create;
  try
    ...
  finally
    Obj.Free;
  end;
end;

But now the construction of the TMyClass instance isn't protected!  It doesn't have to be, and let's examine why.  In some of my previous posts regarding exception safety and the use of Assigned, we alluded to the fact that while the constructor of an object is executing, if any exception is raised, the destructor will automatically be called, the object will be freed and the exception is allowed to continue on its merry way.  There are essentially two main areas where things can go wrong.  The first is during the actual allocation of the memory for the instance.  If the memory manager is unable to find a block of free memory large enough to hold that instance, it will raise an "Out of Memory" exception.  Since the instance was never actually allocated and the try...finally block was never entered, the Obj.Free; line of code is never executed.  The other place where things can go wrong is in the object's constructor.  In this case the memory manager allocated the memory and then control was passed off to the constructor, which would begin to set up the instance.  If something fatal happened along the way there, we know from those past articles that the destructor will automatically be called and the instance memory handed back to the memory manager for later re-use.  Since the object was already freed and, again, control never entered the try...finally block, the Obj.Free; line is never executed.  So in both of those scenarios, there was no need to touch the local Obj variable reference since the resources associated with the TMyClass instance were taken care of.  If the memory allocation succeeds and the constructor runs to completion, control will return and the local variable Obj will be assigned, after which control will enter the try...finally block.  It is only at this point that you always want to make sure the TMyClass instance referenced by Obj is freed regardless of what happens next.
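When more than one object is involved, the same pattern simply nests.  Here's a sketch (TMyClass and the variable names are just illustrative):

```pascal
var
  A, B: TMyClass;
begin
  A := TMyClass.Create;
  try
    B := TMyClass.Create;
    try
      // ... work with A and B ...
    finally
      B.Free;  // runs whether or not an exception escaped the inner block
    end;
  finally
    A.Free;  // if B's constructor failed, control still arrives here and A is freed
  end;
end;
```

Note that each Create sits just *outside* the try block it pairs with, for exactly the reasons described above: if B's constructor fails, B was never assigned and its finally block is never entered, but A's finally block still is.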

There are a lot of other interesting "twists" on the usage of try...finally and try...except blocks that I've seen over the years.  The above case stands out in my mind as the most common mistake made.  I'll address some of the other cases in following posts.  If you have any questions about certain common idioms and patterns that you're not sure are exception aware, post some comments and maybe I'll address them in future posts.

More videos.

No! Not that kind!  Steve Trefethen has posted an excellent video demonstrating some of the little-known Code Completion, Code Insight, Code Parameters, Code Browsing and Live Template features.  Some of these features have been in the product for several releases, and some are new for BDS2006 (and the Turbos).

Wednesday, November 1, 2006

Exceptional Safety

Back in late 1992 or 1993 or so, we had a dilemma.  We wanted to add exceptions to the Turbo Pascal language (which was soon to become the basis for Delphi's programming language).  Windows NT was under full-swing development, and with this new OS came a new-fangled thingy called Structured Exception Handling (SEH).  So what was the dilemma here?  If you'll recall, the first release of Delphi was targeting Windows 3.1, aka Win16.  There was no OS-level support for SEH.  Also, Windows NT wasn't going to be released as a mass-market consumer-based OS.  Again, what's the problem?  Just add your own implementation of SEH to the language and move on.  And we did, of course, add our own specific implementation of SEH to the 16-bit version of the Delphi language.  The real problem was all about "safety."  Aren't exceptions supposed to make your code more safe?  OK, I'm being obtuse and a little evasive here.

The fundamental problem we faced was the notion of a partially constructed object.  What if halfway through an object's constructor an exception was raised?  How do you know how far into the execution of the constructor you got by the time the exception handler is executed and the object's destructor is called?  The constructor is executing merrily along, initializing fields, constructing other objects, allocating memory buffers, etc...  Suddenly, BAM!  One of those operations fails (can't open a file, bad pointer passed in as a constructor parameter, etc...).  Since the compiler had already injected an implicit exception handler around the execution of the constructor, it catches the exception and promptly and dutifully calls the destructor.  Once that is complete and the partially constructed object is destroyed and all the resources are freed, the exception is allowed to continue; in other words, it is re-raised.  The problem in this scenario is the fact that the destructor really has absolutely no clue why it got called (sure, it could see that an exception is currently in play, but so what?).  The destructor doesn't know if the instance was ever fully constructed and, if not, how much got constructed.

The solution turned out to be remarkably simple and somewhat clever at the same time.  Since all classes in Delphi have an associated virtual method table (VMT), each instance must be initialized to point to that table.  Since Delphi classes allow you to make virtual method calls from within the constructor, that VMT pointer has to be initialized before the constructor is allowed to execute.  If the VMT pointer has to be set before the constructor executes, why not just initialize the entire instance to some known state?  This is exactly what is done.  The entire instance is initialized to zero (0), the VMT pointer is set, and if the object implements interfaces, those VMT pointers are also set.  That way, once the user's code in the object constructor begins to execute, you know that the instance data is in a known state.  By using this fact, the destructor can easily tell how far into the object's initialization sequence things got before the world went haywire.  Remember yesterday's post where I mentioned the FreeAndNil procedure?  Another item to note is the non-virtual TObject.Free method.  Since you can assume that if an instance field contains a non-nil or non-zero value it must have been successfully initialized, it should also be OK to de-initialize it.  This is most important for any memory allocations or object constructions that took place in the constructor (or any other object method, for that matter).  The destructor has to know when a valid pointer is in that field.  So a non-nil value means: go ahead and free the memory or destroy the object.

We realized, too, that it would be very tedious and error-prone for the user to always have to remember to follow this pattern: if FField <> nil then FField.Destroy;  Enter TObject.Free.  If you opened System.pas and looked at the implementation of Free, it simply does if Self <> nil then Destroy;  So you can safely call Free on a nil, or unassigned, instance.  That is because the check is done within that method.  All you need to do is FField.Free; and your destructor is now "exception safe."  The same thing can be done for memory allocated with GetMem.  You can safely call FreeMem(FField), even if FField is nil; it just returns.  Finally, it should be noted that certain "managed types" such as strings, interfaces, dynamic arrays and variants are also automatically handled through some compiler-generated meta-data.  Just before an object instance's memory is freed, an RTL function is called with the instance and a meta-data table containing field types and offsets, so the RTL function knows how to free those instance fields.  Again, if a particular field is nil, it is simply passed over with no action needed.
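Putting those pieces together, a destructor written this way is safe even for a partially constructed instance.  Here's a sketch (the class and field names are hypothetical):

```pascal
type
  TMyClass = class(TObject)
  private
    FChild: TObject;
    FBuffer: Pointer;
  public
    constructor Create;
    destructor Destroy; override;
  end;

constructor TMyClass.Create;
begin
  inherited Create;
  // The instance data was zero-filled before user code runs,
  // so FChild and FBuffer both start out nil.
  FChild := TObject.Create;   // if this raises, Destroy runs with FBuffer = nil
  GetMem(FBuffer, 1024);      // if this raises, Destroy runs and frees FChild
end;

destructor TMyClass.Destroy;
begin
  FChild.Free;                // Free tolerates a nil reference
  if FBuffer <> nil then
    FreeMem(FBuffer);         // FreeMem also tolerates nil, so the check is optional
  inherited Destroy;
end;
```

No matter where the constructor bailed out, every field is either nil (and safely skipped) or valid (and properly freed).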

What about the FreeAndNil thingy?  The astute among you have probably noticed that the implementation of that procedure actually sets the passed-in reference to nil first and then destroys the instance.  Shouldn't the name actually be NilAndFree?  Yeah, probably.  But it just doesn't roll off the tongue very well and is equally confusing.  "So if you set the reference to nil first... how can you destroy it?"  So why was it implemented in this manner?  Exception safety is one big reason.  Another, significantly more obscure, reason involves intimately linked objects.  Suppose you have one object that holds a list of other objects which in turn hold a reference back to the "owner" object.  Depending on the order in which the objects get destroyed, they may need to notify other objects of their impending doom.  Since the owner and the owned objects are intimately linked, they directly call methods on each other throughout their lifetime.  However, during destruction, it could be very dangerous to willy-nilly call methods on the other instance while it is in the throes of death.  By setting the instance pointer to nil before destroying the object, a simple nil-check can be employed to make sure no method calls are made while the other instance is being destroyed.
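As described, the trick is that the caller's variable goes to nil before the destructor ever runs.  Paraphrased, the implementation looks roughly like this:

```pascal
procedure FreeAndNil(var Obj);
var
  Temp: TObject;
begin
  Temp := TObject(Obj);  // hang on to the instance reference
  Pointer(Obj) := nil;   // nil the caller's variable first...
  Temp.Free;             // ...then destroy the instance
end;
```

So any code that runs during the destruction sequence and checks the original variable against nil will correctly conclude that the object is no longer available.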

So there you have it; a few little tips on ensuring your objects are "exception safe" and a couple of hints as to when you should use FreeAndNil.  By peeking under the hood and examining the code, you can get a better understanding of why and how things are implemented.  You could always use the if FField <> nil then FField.Destroy pattern, but why, when FField.Free; does all the work for you?  The pattern if FField <> nil then FField.Free; is redundant, as is if Assigned(FField) then FField.Free;

Tuesday, October 31, 2006

Assigned or not Assigned, that is the question...

There's a rather interesting discussion taking place in the news:// newsgroups about whether or not Assigned is better than simply testing for nil.  There's also some discussion in that same thread about FreeAndNil, which I can cover in another post.  As for Assigned, I thought I'd shed some light on why it was even introduced.  This is also a bit of a peek under the hood (bonnet for those of you across the pond) into some inner workings of the Delphi (BDS) IDE's VCL designer.

The statement 'if PointerVar <> nil then' has long been used to check whether a pointer variable holds a useful value (assuming, of course, it had been initialized to nil previously).  This statement generates reasonably good machine code, is clear, and serves the intended purpose just fine.  So why add "Assigned"?  Starting with Delphi 1, we introduced to the language the notion of a "method pointer," or what some call a "closure" (although it is not *quite* what a "closure" in the pure sense actually is).  A method pointer is simply a run-time binding consisting of a method and, this is very key, a specific instance of an object.  It is interesting to note that the actual type of the object instance is arbitrary.  The only type checking that needs to take place is to make sure the method signature matches that of the method pointer type.  This is how Delphi achieves its delegation model.

OK, back to Assigned.  The implementation of a method pointer for native code (Win16 and Win32, and the upcoming Win64 implementation) consists of two pointers.  One points to the address of the method and the other to an object instance.  It turns out that the simple if methodpointer <> nil then statement would check that both pointers were nil, which seems logical, right?  And it is.  However, there is a bit of a hitch here.  At design-time, the native VCL form designer pulls a few tricks out of its hat.  We needed a way to somehow assign a value to a component's method pointer instance, but also make sure the component didn't actually think anything was assigned to it and promptly try to call the method (BOOM!!).  Enter Assigned.  By adding the standard function Assigned, we could preserve the <> nil semantics and also introduce a twist.  It turns out, and the adventurous among you can verify this, that Assigned only checks one of the two pointers in a method pointer.  The other pointer is never tested.  So when you "assign" a method to the method pointer from within the IDE's Object Inspector, the designer is actually jamming an internally created index into the other (non-Assigned-tested) pointer within the method pointer structure.  As long as your component uses if Assigned(methodpointer) then before calling through the method pointer, your component code will never misbehave at design-time.  So ever since Delphi 1, we've diligently drilled into everyone's head, "Use Assigned... Use Assigned... Use Assigned...."  I think it worked.  For component writers it is critical that Assigned be used when testing the value of published method pointers before calling through them.  For all other cases, it is a little muddier and less defined.  Personally, I still use the <> nil pattern all the time.  Maybe it's because I'm "old skool", I don't know...
I do, however, always use Assigned for testing method pointers, for the reasons given above.
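For component writers, the idiom looks something like this sketch (using the standard TNotifyEvent shape; TMyComponent and the field names are just illustrative):

```pascal
type
  TNotifyEvent = procedure(Sender: TObject) of object;

  TMyComponent = class(TObject)
  private
    FOnChange: TNotifyEvent;  // the published method pointer (event)
  public
    procedure Change;
    property OnChange: TNotifyEvent read FOnChange write FOnChange;
  end;

procedure TMyComponent.Change;
begin
  // Assigned tests only one of the two pointers in the method pointer,
  // so a design-time "fake" assignment is never actually called through.
  if Assigned(FOnChange) then
    FOnChange(Self);
end;
```

A plain FOnChange <> nil comparison here could see the designer's internal index as "something assigned" and call through it, which is exactly the design-time BOOM described above.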

I've seen some rather strange arguments as to why you should always use Assigned, even on normal instance references and pointers.  One that I found kinda strange was that "in the future, 'nil' may be defined as something other than a pointer value with all zeros (0)."  If that were going to be the case, don't you think that the "value" of nil would also change to reflect that condition?  Another factor is, why on earth would you ever need a non-zero-valued nil?  That, to me, should be one of those immutable laws of computing: just as PChars (char*) are always terminated with a zero (0), nil should always be a zero (0) value.  Now, I do realize that the Pascal definition of 'nil' doesn't necessarily mean zero (0), but it has been so convenient and consistent that any new platform or architecture that wanted to change it would need an extremely compelling reason to redefine it.  At some point, I'll set the WABAC machine and we can all go back to investigate some other obscure factoid and try to dig up the reasoning behind certain decisions and designs related to Delphi, VCL, and the IDE.  For now, keep on using "Assigned(someinstance)" or "<> nil" with impunity... however, for method pointers, it's if Assigned(methodpointer) then all the way!

Monday, October 30, 2006

New Delphi Survey.

Be sure to fill out the newly posted survey.  Your feedback is highly valued and greatly appreciated.  I would suggest you block out about 30-45 minutes to completely fill out this survey.  I'd say that it's one of the more comprehensive surveys we've had in a while.

Friday, October 27, 2006

Firefox and Cake

I use Firefox almost exclusively, and any time I use IE it's in the context of the Firefox plugin "IE Tab."  I've also made sure the whole family is using Firefox as well, mainly because the number of nefarious exploits out there for Firefox is significantly lower than for IE (though that is slowly changing as FF gains popularity).  Well, I just installed FF 2.0 and so far it's been working great.  The UI updates are very subtle, but I can tell the overall performance is definitely better.  Memory footprint is about the same, though.

I happened across this posting where the Internet Explorer team sent a cake to the Firefox team congratulating them on the release of FF 2.0.  Of course Slashdot is filled with a myriad of conspiracy theories, negative spin, and outright paranoia... however my initial thought was how classy that was for the IE team to do that.  The FF folks should be proud of their accomplishment and the fact that a competitive team from MS is recognizing the tireless efforts of the team working on an open source project, speaks volumes... about both teams.

Thursday, October 26, 2006

You know it's going to be an interesting day when...

After a long dry spell... you post twice in one day?  No, that's not it.

After fighting with the product build, unit tests, and automated smoke tests for two days, you finally get the build working and start allowing commits to the repository... and the first commit after all of that breaks the build again... Gahhh!!  It's a good thing I was there to warn the culprit before the engineer who had spent the last two days (and nights) getting everything back in shape came storming into this guy's office... armed to the teeth with one of the monster Nerf cannons lying around here.  The hallways sure did light up for a few moments ;-).

I'm highlighting all of this because no matter what processes you put into place, and how many emails, wiki posts, and meetings you have, ultimately it will boil down to simple human error.  Just like the old adage: the more foolproof you make something, the more the world seems to hand you smarter fools :-).  I know I've certainly been just as guilty as the next guy of breaking the build.  Are there any of you out there with some great (infamous, funny, or just plain scary) stories of how, against all odds, a member of your team managed to bypass all the safeguards and promptly broke the build?  We all have stories of some "former team member" who did some of the craziest stuff, but what about your most embarrassing moments?  These are the kinds of moments that, while infuriating at the time, you will look back on and find the sheer humor and irony of the entire situation.  What is your WTF story?

Reaching into the grab bag...

It's been a while since I've posted to my blog, so I'd like to apologize for that... not that anyone is really out there hanging on my every word ;-).  So this post is going to be a bit of a grab-bag of things.

Any news???

The first and most obvious thing to comment on is actually something I cannot comment on so any comments I make should in no way be construed as commenting on anything that is worth commenting on at all.  Huh??!!  Right.  Now that that is out of the way...

Installers are important...

Duh!  Of course they are important.  How else can you deliver your product and ensure that it is placed properly on the end-user's system and is configured correctly?  The problem is that many companies tend to think of installers as an afterthought.  "We just built this great killer-app... now how do we deliver it?"  Because of this tendency to push it off until late in the process, it is often short-changed.  A flexible build, delivery and install process is key to making sure that your "integration" team does not become a bottleneck.

How you're going to deliver your product is something that needs a decent amount of up-front consideration.  It can, and often does, affect how you actually architect and build the product itself.  You have to ask questions like:

  • Who is going to install the product?  One of your highly trained sales engineers or the clumsy end-user?
  • Is your product going to be delivered via direct marketing, an indirect channel, or offered via a shopping website with immediate downloads?
  • How many different "editions" or SKUs (Stock Keeping Units) will you have?  (Beginner, intermediate, advanced?)
  • Will you be offering a trial version?
  • Will your product be localized into other languages?  If so, are all languages delivered together or separately?
  • Will you offer a lightweight free version?

Nearly all of those questions above, and probably a few others I can't think of right now, are ones we have to ask ourselves all the time.  "Integration" is the process of bringing together all the various bits and pieces from other teams and external third-parties and "integrating" them into an installable image.  For many years there have been a plethora of tools out there that serve to ease the construction of these installation images.  However, not many (if any at all) actually help in streamlining the whole integration process.  Not that they should, mind you, but there is often a lot more to delivering a product than simply pressing F9, grabbing the resulting executable, jamming it into an installer project, building the installer, and burning a CD or uploading to your shop site.  For many folks those steps are actually all that is needed due to the limited complexity of the product.  When you compare that to what we in the DTG have to do, that is far from an adequate process.

Over the years we've developed quite a few internal tools (built with Delphi and sometimes InterBase, of course!) that help make this process easier to maintain and, this is a critical point, repeatable.  If you cannot ensure that each time the process is run it goes through the same steps in the same order every single time, how can you be sure of anything in the output?  For instance, no "official" build is ever delivered from a developer's machine.  Official builds are always done on a single-purpose, isolated system that is dedicated to that one and only task of building the product.  Developers rarely, if ever, have direct access to this system.  The "integration" team is responsible for the care and feeding of that system.  That team is also responsible for maintaining what we call the "delivery database," which is exactly what it sounds like.  It is a huge centralized database that describes which files get "delivered" where and what kind of processing needs to be done to them on the way.  By extension, this same database is also used to generate scripts that are used by the installer software to create an installation image.

I'll try and talk a little more about this whole process in some upcoming posts, but for now I'd like to hear from folks: what kinds of processes have you set up to streamline and speed the delivery and building of installers?  Also, what installers out there on the market are your favorites?  For several releases we've used InstallShield from Macrovision, but there are many others and we're always evaluating new versions or new entries into the market.  A few of the others we've looked at over the years are Wise, InstallAnywhere, and a new, interesting entrant into the whole installer domain, InstallAware.  There are many others; some free, some open source, and some commercial offerings.  What is interesting about InstallAware is that it is written entirely in Delphi!  That's really cool!


Peloton

Huh?  What's that?  It's a term used to describe a group of bicycle riders, usually in some kind of race, such as the Tour de France.  Peloton is also the code name for the upcoming release of JBuilder.  (What!  This is a Delphi blog!  Why are you talking about Java??!)  Yeah, yeah, I know.  However, since the formation of the Developer Tools Group here at Borland, I've had the privilege of working closer with the Peloton team here in Scotts Valley, CA.  In many ways my role has expanded a bit and I now keep an eye on all the various projects going on in the DTG.  Closely watching these other teams (this includes the InterBase team) has been very enlightening.

The Peloton team had a daunting task ahead of them.  They had to, essentially, recreate almost the whole JBuilder experience on top of the open source Eclipse platform!  The face and character of the Java market is shifting dramatically and the move is on to open-source.  The maturity and usefulness of all the various bits and pieces of open-source are finally reaching a critical mass.  The problem, however, is that this new world order in the Java space doesn't really have "order."  It's probably best characterized as being the "wild-west."  The possibilities are nearly endless, but the dangers and pitfalls are many.  There needs to be a trailblazer and a regional "marshal" to help establish some order and sanity to this new frontier.  Take the Eclipse effort.  Clearly the Eclipse IDE was first and foremost built to be a Java IDE.  However, some of that is changing with the uptake in the adoption of the RCP (Rich Client Platform) for creating non-IDE type products. There is also the CDT (C/C++ Developer Toolkit), which is a cross platform C/C++ IDE tool. 

However, there is a bit of chaos, and enterprises don't like chaos.  Even many small to medium developer shops cannot afford added chaos on their development teams.  By chaos, I'm referring to the whole open nature of the Eclipse platform.  This is not an argument against open source at all, but merely an observation of reality.  We've been speaking with many current and past JBuilder customers; the ones that have moved to Eclipse are finding that "free" is by no means "free!"  There are even some companies that have some of their top talent dedicated to defining the official Eclipse platform and plug-ins their development teams are going to use.  That is amazing to me!  You now have companies where development tools are not their core business; for most of them, software isn't even their core business.  They're banks, investment firms, hospitals, government agencies.  They're not in the business of selling software, let alone assembling or building development tools!  Software is merely a necessary tool that the company uses to remain competitive in their respective markets.  This is where Peloton (aka JBuilder) will step in and fill the gap.  By providing a certified and preassembled Eclipse environment, we can help many of these companies get back to focusing on what makes their businesses successful.

Peloton is not just a simple thin coating on top of the Eclipse you can go download for free right now; it is also the evolution of the whole JBuilder product line itself!  It is interesting to note that the first 3 versions of JBuilder were actually built on top of the Delphi IDE! (Hah!! There's your Delphi reference!!)  Starting with version 4 (there was a 3.5 in there... but let's keep this simple), JBuilder shifted to what was called the PrimeTime platform, which was an all-Java IDE platform.  So from version 4 up to JBuilder 2006, it was based on the PrimeTime core.  Now we're doing the same thing again, moving to a new IDE core, and this time we've chosen the Eclipse platform.  There was significant effort and emphasis placed on making sure that your existing JBuilder projects can be easily and seamlessly carried over to this new version.  The DTG has always been about making sure our customers are not left behind and are provided an avenue to the latest technologies.  Just like Delphi and C++Builder, JBuilder has held to this as well.  If you're at all interested in the upcoming release of JBuilder or have been using an older version, I would strongly suggest you look into purchasing JBuilder 2006 with Software Assurance.  Everyone who is on SA when Peloton is released will get it as part of their SA agreement.


There has been some recent brouhaha surrounding the DTG's published BDS roadmap.  I will say that we're currently in the process of review, re-alignment, adjustment and/or clarification of the currently published roadmap.  Industry landscapes can change and priorities can shift, so maintaining a stagnant roadmap is not in anyone's best interest.  Likewise, radical departures from an existing published roadmap are equally, if not more, damaging.  "Balance" is the watchword of the day...  Very soon we'll be announcing the opening of another online Delphi survey.  Nick Hodges has been frantically gathering input about what to place on it from all those involved from the various teams.  Changes are not made on a whim, but we are actively evaluating everything we're doing and where we're going.  This is, in fact, a continuous process and something we've done for many, many years.

Busy, busy, busy

I just want to reiterate that the DTG continues to be filled with a flurry of activity.  For instance, we've just hired a new head of marketing who is fully and solely dedicated to the DTG!  Yeah, folks, it's been a while on this one...  We've also opened many positions on the documentation team.  So if you're a technical writer, here is one of the positions to send in your résumé or CV.  We also recently filled a position on the Delphi compiler team.  There are some openings available on our integration team, so if you're experienced in writing installers and would like to help define new, modern ways to install, update, and deliver Borland Developer Studio, be sure to send your résumé here.

Monday, September 18, 2006

Hotfix!! Get yer Hotfix, here!

Hot on the heels of the Turbo product release, we've just now signed off a rollup of the previous BDS2006 hotfixes 1-6 (except 2) and have added three previously unreleased hotfixes, 7-9!  This hotfix rollup will work for all BDS2006 Update 2 editions and all the Turbo releases, all languages.  You'll be able to download it soon, and it should be up on CodeCentral here.  I'll update this posting as more sites come online.  It will be, ahem... interesting to note that the Borland download site will probably take a little longer to come online than the sites under DTG's more direct control...

Update: There's now an article on BDN and the site has been updated.

Update: The following is the MD5 hash for BDS2006HotfixRollup.exe: 4FE838F389ABB7AD0D477F03192F5330  If this doesn't match what you've downloaded, then you've gotten a corrupted version.  Again, this is for the above-mentioned executable, not the .zip or any other download container it may be in.  There are many MD5 hash generators/checkers available so you can verify the above hash code.

Update: You can download just the .exe here.

Thursday, September 14, 2006

TurboMan, The Game.

In case you missed it, up-and-comer David Lock has just posted the source to his TurboMan game in CodeCentral.  Be sure to point it out to all those young budding programmers you know out there that want to get started in game writing.  So, let the Middle/Junior/High School programming teachers know about this and pass it along to their students.

Tuesday, September 12, 2006

Will the real "DevCo" please stand up.

While walking back from lunch at the local Japanese/sushi restaurant, I saw that this truck had just shown up here at the Scotts Valley campus.  Please take note of the name stamped on the side.  Why is it here?  Well, housed in that large concrete structure on the other side of the truck is a huge diesel generator.  My guess is that this truck is here to fill the fuel tanks.  Anyway, I just thought it was funny and figured I'd share it.  Oh, and as you can guess from the picture, it is a beautiful, hot, sunny, summer day out there.

Friday, September 8, 2006

Treating your developers right...

This is, uhmmm... for the benefit of "someone" who I know tends to read this blog.  I also know many of you already know all of this and have experienced it first hand, so please bear with me for a moment.  For the record, we developers here in the Developer Tools Group at Borland do have a lot of private offices.  The reasons seem very clear from this "Joel on Software" article.

Tuesday, September 5, 2006

It's T-Day!

In case you haven't heard yet..., it's T-Day!  The Turbo Explorer and Turbo Professional products are now available for download.  Along with the availability of these exciting new products, there's a wealth of information emerging out there to assist new and long-time programmers in getting up to speed with the products, especially Turbo Delphi.  For instance, you can visit Nick Hodges' blog for his series of "30 Camtasias in 30 days."  Just posted moments ago, Huw Collingbourne of Bitwise Magazine has also gotten into the act with these articles: Introduction to Delphi, Learn to Program Delphi Part One, and Delphi Study Guide.  Finally, Neil J. Rubenking has written this review of Turbo Delphi: 4.5 out of 5.

Wednesday, August 23, 2006

Shhhh.... It's vewy, vewy, qwiet awound here... eheheheheh!

OK.. OK.. OK.. I know I've not been as active lately on blogging.  I've been kinda waiting for certain, ahem, events to go down so I'd be able to really talk about that big bag of beans that just got spilled.  However, there's been some non-spin-off related news that I should really point out.  Our very own David Lock has been working on a little side project to create a TurboMan video game!  You can read about it on his blog starting with this post.  Also, unless you've been under a rock for the last few weeks, Nick Hodges has been doing some excellent introductory videos using the soon to be released Turbo Delphi product.  So stay tuned to the Turbo Explorer site and David's and Nick's blogs for some fun and exciting things.  I'll try to keep folks up to date on anything that may be happening surrounding the fact that you "may have heard that Borland is planning on divesting itself of the Developer Tools Group."  I guess you can count this as the calm before the storm?

Tuesday, August 8, 2006

Some really good press amidst all the Turbo-mania...

This article on FTPOnline, Borland Brings Back Turbo has a very insightful statement:

"For those who might have been worried that Borland's spin-off of the developer tools to a separate company (whose name is still yet to be announced) might herald a sunset of minimum additional development and maximum milking of customers, this announcement should be a welcome relief. This is clearly a long-term strategy. Borland (and the successor company for the developer tools) is not milking the installed base, but rather trying to regrow the community. It is a bold and risky strategy in this era of commodity developer tools, but perhaps the best alternative for Borland and the successor developer tools company to remain relevant."

Bold and risky, indeed.

The Adventures of TurboMan - Part 1

Watch the new adventures of TurboMan!

More Turbo stuff...

Go Digg this article

ComputerWorld Australia: Borland Revives 'Turbo' for developer tools

Long-time Turbo Pascal and Delphi supporter, columnist, and book author, Neil J. Rubenking: New Borland Line Salutes Turbo Pascal Spirit

The Official Press Release: Borland's Developer Tools Group Announces Plans to Rev Up Classic Turbo(tm)

Update 1:

Even Slashdot is getting into the fray: Borland Announces the Return of the Turbo Products, with Video (Remember, this is Slashdot we're talking about here... so there are bound to be some... ahem... rather clueless and uninformed comments ;-)

Update 2:

Now The Register has their entry in the Turbo-mania thing: Borland enlists Turboman for Windows tools

Monday, August 7, 2006

They're baaaack...

Yep.  The rumors are actually true.  There will be "Turbo" versions of Delphi, Delphi for .NET, C++, and C#.  You can read some information about it here:

and here:

Yes, there will be free versions of the Turbos.  They're the Explorer editions.  Go to the Turbo Explorer site to get information and news about the availability of these editions, along with tutorials, games, and contests.

Monday, July 24, 2006

More conspiracy... AMD & ATI...

OK... it's getting wacky, folks.  If AMD starts folding the high-performance ATI GPU cores into either the CPU (interesting...) or into the chipsets (so last century), they could end up spanking Intel's current integrated graphics solutions... which are marginal at best.  Of course, they'll probably use that nice low-performance shared memory :-(.  2006 is shaping up to be an interesting year on so many levels...

Now... NVidia fan-boys, start slamming ATI, and ATI fan-boys, start slamming AMD and/or NVidia...  For me, I marginally prefer ATI over NVidia... however my current laptop (a Dell Latitude D820) has an NVidia chipset, but my last few laptops all had ATI (Dell Latitude D810, IBM Thinkpad A31p).

So does Intel snap up NVidia as a defensive move, or do they beef up "Viiv" (is that veev, or vive?)?  I expect something will happen.

Friday, July 21, 2006

Fun, Funny and The Nod

Recently I've been trying to read the Creating Passionate Users blog more regularly since they always seem to have some valuable insight into topics that range from user behavior to group dynamics to making the mundane fun.  I really liked the post about Usability through fun, which says that it's OK for something that is not normally associated with "fun" to actually be fun.  To some folks, it may be unfathomable to think that some accountant spending all day twiddling with a spreadsheet would ever equate that with "fun."  Likewise, a developer spending hours on end trying to crack the mysteries of some webservice, a database schema, a UML model, or just trying to reduce a poorly written algorithm from order n² to n log n, may also refer to the experience as tedious and frustrating, but in the end rate the overall experience as "fun."  Developers are certainly an interesting crew.  The rest of society just looks at us as those poorly groomed geeks hunched over a keyboard in some dark room writing nothing but gibberish into the computer.

So why do so many of us consider this "fun?"  We would most surely not think of it as "funny."  (OK, maybe the word picture of some tie-dyed, long-haired, bearded hippie may seem "funny."  Oh wait... I work with David I ;-).  I remember the first time I encountered a computer and computer programming.  There was a feeling of accomplishment and power after writing my first "Hello World" program.  And maybe that's also part of why we do what we do: we're a bunch of narcissistic control freaks ;-)?  Maybe it's just that we're also a highly creative group that is able to easily tap into both the left and right sides of our brains?

This brings me to something I've been thinking about for a while.  How can we capture, identify, and otherwise articulate the notion that programming can be a "fun" and pleasurable experience?  How can we, as this new Borland spin-out (currently referred to as "DevCo"), get the current and next generations interested in pursuing a career in software development?  What about providing low- to no-cost versions of Delphi, C++Builder, and C#Builder and blanketing the earth?  Clearly Microsoft has begun to go down this path with their no-cost Express editions of Visual Studio and the Coding 4 Fun site.  I applaud their efforts, since the more you can grow the total number of developers, the bigger the whole pie is, and thus the larger "DevCo's" slice can become.  One thing that may hinder some of this effort from Microsoft is their sheer size.  They're a huge, monolithic, faceless, unapproachable corporation.  Sure, they've been actively trying to present their kinder, gentler side to the world through all the various bloggers and their massive PR machine.  But how truly "genuine" is this effort?  I'd like "DevCo" to be the company that is approachable, honest, innovative, highly relevant, and above all FUN!  Being associated with "DevCo," either as an employee or as a customer, should be regarded as fun.  I think we can be a serious contender and have fun at the same time.

Which leads to my next point: The Nod.  How many of you regard Delphi or C++Builder as your "secret weapon?"  It's your "edge," right?  It'd be cool to walk into a coffee shop and see someone with their laptop open, BDS up, pounding out some code.  You'd be able to give them "The Nod."  That unverbalized communication that tells the other person that "you just know" and "you're both in with an elite bunch."  I've had this happen in many other aspects of life.  What about the time you just bought that new car and you knew it was one of the first for that model year?  The first time you're driving around town and you see some stranger in the same model and year... you make eye contact and get "The Nod."  I remember when I had just gotten a brand new Ford Mustang SVT Cobra.  Since only about 3500 or so were built in that model year, it's a pretty exclusive group.  So when you drive around and see someone else in the same kind of car, you get "The Nod."  This still happens to this day.  I've gotten and have given "The Nod" many times over the last few years.  Remember the J.D. Hildebrand quote in the old Windows Tech Journal, "It's going to change our lives, you know."?  What is interesting for this discussion is the following quote: "He cocked his head for a moment and then grinned back at me, nodding. 'I know.'" (emphasis mine).

Is all of this just some crazed lunatic waxing poetically or just pining for the "good ole' days"?  Maybe ;-)... but try and reserve judgement for a few weeks and see if "DevCo" doesn't start taking steps toward fulfilling some of what I'm describing above.  The "DevCo" team has quite a few rounds left in the clip...

Wednesday, July 19, 2006


Nick pointed to this application, Foxit, which is a light-weight PDF reader... Well, I was digging around the site and found out that they have an SDK that allows you to embed a PDF reader into your application.  They even mention that it is fully usable from Delphi.  It's actually really easy if you look at the... ahem... PDF document on how to call into the DLL.  It's just a simple matter of passing in a DC.  So you could just create a TCustomControl descendant, override the Paint method, and pass the Canvas.Handle property to the FPDF_RenderPage() function.  I wonder if Nick realized that they support Delphi?
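The TCustomControl approach described above might look something like the following minimal sketch.  Note this is an illustration, not working code: the DLL name, the exact FPDF_RenderPage parameter list, and how you obtain the page handle are assumptions here, so check the SDK's own documentation for the real declarations.

```delphi
unit PdfView;

interface

uses
  Windows, Classes, Controls, Graphics;

type
  // A hypothetical control that paints one PDF page via the Foxit DLL.
  TPdfViewer = class(TCustomControl)
  private
    FPage: Pointer;  // page handle obtained from the SDK's page-loading calls
  protected
    procedure Paint; override;
  public
    property Page: Pointer read FPage write FPage;
  end;

implementation

// Assumed import; the DLL name and parameter list must be taken from
// the SDK's documentation -- this is only a plausible shape.
procedure FPDF_RenderPage(DC: HDC; Page: Pointer;
  StartX, StartY, SizeX, SizeY, Rotate, Flags: Integer); stdcall;
  external 'fpdfview.dll';

procedure TPdfViewer.Paint;
begin
  // Clear the background, then hand the control's DC straight to the renderer.
  Canvas.Brush.Color := clWhite;
  Canvas.FillRect(ClientRect);
  if Assigned(FPage) then
    FPDF_RenderPage(Canvas.Handle, FPage, 0, 0, ClientWidth, ClientHeight, 0, 0);
end;

end.
```

The nice part of this design is that all the usual VCL machinery (scrolling, resizing, invalidation) comes for free; the external DLL only ever sees a device context.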

Tuesday, July 18, 2006

Conspiracy Theorists Unite!! Microsoft Acquires SysInternals...

I'm leaning toward this not really being a "good thing" in the near term... however it may end up being so in the long run.  I wonder if Mark will still use Delphi for his MMC plugins?

Monday, July 17, 2006

Meanwhile, back at the ranch...

While we're in a holding pattern regarding the final fate and disposition of the Developer Tools Group (aka DevCo) at Borland, there continues to be more good news on the hiring front, or should I say re-hiring.  Today marked the return of another former Borland developer to the BDS team.  Some of you may or may not know him, but Lee Cantey has just rejoined the Developer Tools Group.  Lee had been with Borland for many, many years and came up through the tech support ranks.  He's also one of the key developers on the C++ compiler and tools team, which includes BCC, ILINK, TDUMP, etc...  Lee not only carries with him a huge amount of talent, but also brings back a wealth of institutional knowledge.  I'm sure there are going to be more than a few occasions where he's going to hear, "Hey Lee, do you remember where the source code is for <some internally developed tool>?"

It's great to have Lee back on board!

Wednesday, July 12, 2006

Aftermath...Revenue...Culture...Kool-Aid...Red-Pills...What's in a name?

The past two days were filled with marathon meetings.  Not just any meetings but, as I mentioned in this post, the first of many DevCo operations meetings, where we got all the leading players together from all the regions where we operate (Americas, EMEA, APAC) to sort out everything from our Q3 revenue plans, to transition priorities, to what direction we want to drive the new company's culture.  I can't talk about the first item, the second is plain boring (what financial system are we going to use for invoicing, inventory tracking, etc...), but the third item was actually interesting.  While it wasn't about products, and not specifically about operations, it was about our most valuable assets: the customers and the employees.  We broke up into several smaller groups and had to write down as many words, phrases, concepts, etc... as we felt described an effective corporate culture.  It was great to see that, without exception, everyone held to a common theme.  Everyone was committed and passionate about what kind of culture we want to have.

Now of course, you cannot dictate or form policy for a culture, but you can influence it.  There was a lot of talk about reviving some of the old Borland cultural mojo.  This is because the vast majority of those on the extended leadership team have been with Borland for a long time and remember "the old days."  We were also not a bunch of old geezers pining for the "glory days"; we recognized that while there were a lot of good things about the old Borland culture, times have changed, and a lot of what worked great back then doesn't work now.  So we're looking to the past culture as a foundation on which we'll build a brand new and unique "DevCo culture."  From my point of view, and I know that Nick is on board with this, part of this new endeavor (and it will be if I have anything to do with it) is that everyone here needs to climb on board the Cluetrain, drink the Kool-Aid, and take the red-pill.

On another note, we've been getting lots and lots of DevCo company name submissions through the email addresses that Michael Swindell set up.  During our ops meetings we had a chance to go through a lot of the submissions from the customers and employees.  There were a lot of... ummm, let's just say "interesting" names.  Some of the names were downright hilarious.  Others just didn't translate well across all languages and cultures.  Still more were just real "head scratchers."  There were a lot of submissions that were very similar or even identical, some even identical to names suggested internally.

The activity around the name submissions is another example of one of the major themes we constantly highlight: our customer community is vibrant, passionate, motivated, highly vocal, and very active.  Talk about an asset!  As we discussed this business with "some people," that was something that was very apparent; these folks took special note of it.  While you cannot easily put any kind of price tag on that (it is a nearly priceless asset, in my mind), it factors into the value of all the related bits you can quantify.

For now, we're all on the edge of our seats waiting for any kind of news and information regarding the spin-off.  I get questions from the team almost daily about whether or not I've heard anything and what's going on.  I imagine that we are all feeling a little like the NASA engineers and managers during the Apollo missions of the 1960s and '70s, the first time they orbited the moon.  As the module passed to the dark side of the moon, all communication stopped, and folks were just listening to the radio for any signal.  When the signal finally comes, we can all breathe a huge sigh of relief.  So right now, the spin-off module is on the dark side of the moon...

Friday, July 7, 2006

It's a little quiet around here...NOT!

Just a quick little note about the DevCo spin-off.  Borland is still in a "quiet period" before any announcements regarding a buyer can be made.  However, in spite of that, and since we just finished Q2, the business keeps functioning and moving ahead.  The good thing is that as more time passes, the Developer Tools Group (aka DevCo) is operating more and more as an independent entity.  Only not quite as independent as a separate company would be, since our revenue is still on Borland's P&L statements; we are still responsible to Borland.  For instance, we just held the first Developer Tools Group operations meeting with Tod Nielsen this morning.  This was a chance to have all the heads of the different regions (Americas, EMEA, APAC) present their recaps of Q2 results and their plans and forecasts for Q3 (NO, I can't tell you that information... the SEC really frowns on that kind of thing ;-).  I must say it was abundantly clear that all the region managers are excited and pumped up about the spin-off.  Some of you may know of or have met some of these folks, most notably Jason Vokes and Malcolm Groves.  I always look forward to them coming into town and hearing all the great stories about how things are out in the field.  This time was no exception.

Next week we'll be spending several days in intense on- and off-site meetings among the whole extended DevCo leadership team.  This will be the first time we've all gotten together to really lay down our own plans and map out our own destiny.  One key element we're going to be covering is getting down to the finer details of executing a, get this, Marketing Plan!  Yes, folks, there are some cool things coming surrounding some of the products.  The product road maps are published, and we're proceeding full speed ahead and executing on them.

While we're being very quiet about the spin-off status, rest assured that we're making, and will be making, plenty of noise in other areas.

Tuesday, June 27, 2006

It's Obvious...

I usually steer clear of controversial subjects such as the notion of software patents, mainly because I am somewhat conflicted on the subject myself, since I've filed for patents here at Borland.  On one hand, one should be able to gain patent protection for a new and unique invention (or, in the case of software, a process).  On the other hand, there are the seemingly wacky patents that have been granted over the years for what many agree are obvious uses of existing technology (can you say, OneClick?).  Well, it seems that the U.S. Supreme Court is actually going to hear a case and possibly rule on what should be considered "obvious to a person of ordinary skill in the art."  This article on ArsTechnica outlines the case.  There are also some links to the actual petition.  What is really interesting in this case is that the petitioner actually won their case in the U.S. Court of Appeals!  However, they felt that the court didn't rule on the premise of their argument, which is based on a 1952 federal law regarding the obviousness of an idea or invention.

Regardless of which side one falls on in the whole debate over whether software should be patented, the fact of the matter is that software is patented.  So if you refuse to file for patent protection on your new and unique ideas simply on principle, you may find that principle is all you've got to fall back on when someone becomes wildly successful based on your idea.  Or even worse, they come after you with an infringement suit because they patented the idea.  Until the current U.S. patent system is reformed, or at least clarified, your only choice is to take a defensive stance and patent as much as you can.  In many cases the best you can hope for is a stalemate if someone comes after you with an infringement suit.  If you have your own portfolio of patents, chances are the person filing suit against you is infringing on one of yours, so your recourse is to cross-license the patents.  If you had nothing in your portfolio, then you could be liable for major damages, including on-going license fees.  The really nasty kicker is that if the plaintiff can show that you had prior knowledge of a particular patent and infringed anyway, you could be liable for triple damages!

Whether or not this case presented to the U.S. Supreme Court has any effect on the patent system remains to be seen.  It does, however, look to be a step in the right direction.