Thursday, February 24, 2005
More Delphi memories...
Wednesday, February 23, 2005
Danny's live interview with The Server Side
The Server Side finally posted its live interview with Danny. Lots of good information...even if it is a little stale. This interview was done last year prior to the release of Delphi 2005.
Monday, February 14, 2005
First Ship boxes
Today is February 14th, 2005. The date of the 10th anniversary of the release of Delphi. It was on this date, in 1995, that Delphi was introduced to the public during a special evening event at the Software Development West conference. Since then, Delphi has gone through many revisions and has even appeared on the Linux platform (in the form of Kylix). Everyone on the development team received their own personal copy of that release of Delphi along with a special sticker denoting that it is a “First Ship” box. While not actually the first boxes off the production line, they do come from the first production run. Here's a picture of a lot of the boxes I've received over the years. Again, this is with my wimpy phone camera, so you can't really see that some of the older boxes are actually a little faded.
Tuesday, February 8, 2005
10 Years of Delphi
That's right, on February 14th, St. Valentine's Day, Delphi will become 10 years old. Delphi 1.0 was released to a standing-room-only crowd at the 1995 Software Development West conference. I remember cringing when Anders Hejlsberg demoed the exception handling capabilities by dereferencing a nil pointer. I had done all the work to map the hardware exceptions (via the Windows 3.x toolhelp.dll) into language exceptions. Windows 3.x at the time had no OS-supported exception mechanism. Remember all those “General Protection Fault” dialogs? brrrrr... Here's a picture of an October 1994 pre-release CD of what was then known as “Delphi '95.” Yep, “Delphi” was still only a codename at that point.
If you look closely you can see a reflection of me snapping this awful picture using my cell phone ;-).. Oh, and the postcard to the bottom left is one that was sent out to all the Turbo Pascal customers introducing Borland Pascal 7.0, which was the first product I worked on after joining Borland.
Thursday, February 3, 2005
The Delphi Eclipse...
One item in my previous post that I specifically didn't address is the notion of using Eclipse instead of VS. Again, this is something we'd be remiss to ignore. However, the primary ding against Eclipse is its reliance on a JVM. Were it not for the fact that we are trying to support the .NET platform, this might be a reasonable path to pursue. I just don't see how we could create a reasonable tool that hosted not only the JVM, but also the CLR, in the same process! Yes, Eclipse would be much less encumbering from a business perspective than VS, but I have to think about the developer experience. I, as a developer, would find it very hard to swallow if I had an IDE that fired up both the JVM and the CLR in process.
Many of the same issues apply here surrounding what Eclipse gives us and what we'd still have to build. In fact, you can substitute Eclipse for VS in much of that post. Of course some of it doesn't apply, but you can get an idea of the scope of things involved.
Finally, there is the sheer fact that there are nearly 12 years invested in a lot of the current Delphi/Galileo IDE codebase. It's not perfect. It has some rough edges. However, I will point out that it was also the first IDE framework ever released by Borland that supports multiple languages out of the box. I owe a lot of this to the dedication and talent that we have, and have had, on the Delphi team. Now we've announced our intention of folding C++Builder into the Galileo IDE, which has only been made possible by the work we've been doing over the past 2-3 years.
Wednesday, February 2, 2005
Whither or not Delphi....
OK.. I've had a few days to digest, mull over, and otherwise consume Julian's post regarding the reasons Delphi should be migrated into Visual Studio. All in all, it was a well-reasoned article, and on the surface it looks like a “no-brainer.” However, I'm going to toss the proverbial “spanner” into the works.
First of all, does anybody really think that Borland hasn't ever looked into what it would look like for Delphi to be hosted in the Visual Studio IDE? I mean, we're in business to make money. And as much of it as we think we can. It only makes sense to explore every avenue of opportunity, no matter what your emotional ties are to a particular way of doing things. Of course, I cannot comment on anything specific regarding any kind of future plans for Delphi and Visual Studio. I will, however, outline a few things in order to dispel several myths surrounding the notion that moving to VS somehow magically gives us a huge boost.
First, let's outline some rules of engagement. I'm not talking about a “Delphi language plug-in.” On the surface that seems like a very simple and “no-brainer” kind of thing to do. However, I'm not interested in having Delphi be simply another “also-ran” in the sea of Visual Studio plugins and third-party languages. Delphi has an identity all its own, both in its market and its overall look and feel. Delphi would have to retain a lot of this identity. This raises the bar to a level beyond simply installing Delphi into an existing VS installation. Yes, Microsoft does offer a program that allows third parties to deliver and install the core VS bits in addition to that vendor's specific enhancements. The problem is that it still has the Microsoft identity plastered all over it in such a way that Delphi would still be relegated to “also-ran” status.
OK, so we've outlined some of the up-front intangible costs associated with moving Delphi into VS. Now let's look at some of the real costs. Let's define what Visual Studio really is. Basically, VS is simply a shell that provides some core services, such as a text editor, menu and window/docking management, some core debugger services, and some core project management services. Other things, like the compilers, expression evaluators, syntax highlighting, Intellisense, etc., are all items that are pushed out to the “language plug-in.” These are things Borland would have to provide. What about the designers, like the WinForm and ASP.NET designers, and, while we're at it, the CF designers? So far it is unclear whether or not these items are actually part of the core VS redist bits. Especially the CF designer bits. You know that it won't include any kind of support for VCL (Win32 or .NET), so that is something only Borland can provide. Then there is the notion of the smart device emulators (for running and debugging CF applications without the need for a physical device). These don't come with the core VS redist bits.
So far, you might be saying, “So? Just require that the user purchase VS to get all those extra bits.” Hmm... good idea... not! How can we, with a straight face, tell our customers, you, that you need to toss a chunk of cash at MS and at Borland? No, you want to buy one product, install it, and be good to go. So then we're stuck with selling only into VS shops, which has its own brand of issues... well, you get the picture.
So let's do a little math and check the score:
Visual Studio Core
- Editor
- Debugger (Win32/.NET/CF)
- Menus/Windows/docking
- Project Manager
Galileo Core (present Delphi IDE core)
- Editor
- Debugger (Win32/.NET)
- Menus/Windows/docking
- Project Manager
I'm sure that's not quite an exhaustive list, but you get the idea here. There are probably several things that the Galileo core provides that the VS core does not, and vice versa. Now let's look at what Borland must supply:
Visual Studio Core
- Delphi Compilers (.NET & Win32)
- Delphi language bindings (syntax highlighting, error insight, Intellisense, etc.)
- ASP.NET Designer
- WinForm Designer
- Expression evaluator (for debugging)
- VCL design-time package management
- VCL/Win32 Designer
- VCL/.NET Designer
- CodeDOM
- ECO
- Modelling
- Refactoring engine and code for specific refactorings
Galileo Core
- Delphi Compilers (.NET & Win32)
- Delphi language bindings (syntax highlighting, error insight, Intellisense, etc.)
- ASP.NET Designer
- WinForm Designer
- Expression evaluator (for debugging)
- VCL design-time package management
- VCL/Win32 Designer
- VCL/.NET Designer
- CodeDOM
- ECO
- Modelling
- Refactoring engine and code for specific refactorings
Hm... That's interesting. I'm not seeing the advantage here. The VS core and the Galileo core are essentially done. They're sunk costs. What the above list doesn't depict is the team cost involved in just moving what we have today onto VS. That would be a very large task to just get to where we are today. That doesn't include any new features. Sure, there would be some increase in feature-set simply from the move to VS, but what those features would be I can't say because I don't know. Yes, we'd be freed from the task of maintaining the IDE core, but to put that into perspective, in the Delphi 2005 product release, I imagine that there was only about 1-2 man-months of real core IDE work. There was some core work to add some features, but quite honestly these were features that would not have come from some VS core either.
Now comes the argument, “What about all those third-party VS add-ins you can now leverage?” Sounds great in theory; however, in practice I'm much more skeptical. I know how development works and I know developers. I can see the following scenario becoming all too common: I install some whiz-bang VS add-in and point it at some Delphi source file. I tell the add-in to “do its thing”... Now I watch in horror as I see C# or VB code injected into my Delphi source code module! Then there's this one: I try to point some hot new add-in at the VCL form designer, and all it can do is sit there twiddling its thumbs.
How about, “What happens if MS slips the next release of VS?” Well, that's a good question. Borland has a fiduciary responsibility to its stockholders to be profitable. Much of the ability to make a profit comes from being in control of one's own destiny. If we cannot meet revenue targets, we let down not only ourselves and our customers, but also our stockholders. So now we're relegated to always delivering on the previous rev of VS. Not a very attractive prospect in today's market. Of course, we may end up wrestling with a very similar issue given the recent unsubstantiated rumor that MS has slipped the delivery of .NET 2.0 into late summer or early fall of 2005!
Here's another one I've heard, “Oh that Allen guy is the Galileo IDE architect. He's just protecting his code and his job.” No. I'd like to think that I'm not that shallow or egocentric. Sure I have an ego, and it can be bruised, but I'm also very pragmatic. As soon as I see the benefit to Delphi, the product, and to Borland the company, you bet I'd be on board.
Finally, and I hesitate to even mention this because it tends to bring out the worst in folks, there is the question of whether or not our existing Delphi customers would accept a Delphi release built on VS. What would they be willing to pay? What if they already have VS, do they pay the same price? These are not questions I'm prepared to answer or even profess to be able to answer. I know what my gut tells me. I'm a developer, so I'd like to think I know developers and have a little insight into what makes them tick. We share many of the same passions and ideas. I don't know, maybe the Delphi folks will surprise me.
Oh.. BTW, Julian used to work for Microsoft in the Visual Studio group... but he, like me, is certainly not biased in any way ;-)..
Thursday, January 20, 2005
Charlie clarifies
Here's my response to Charlie's response that I posted as a comment:
Charlie,
As always, your wordsmithing abilities have far outstripped my feeble attempts at articulating my thoughts. Upon re-examination I can certainly see where my interpretation may have come from. In the third paragraph you mentioned how the election officials were told by the manufacturer that their machines could tally 10,000 votes. I can see how the officials probably looked at this and thought... "we only have about 8,000 registered voters in our precinct, so 10,000 should be plenty." However, when the machine was delivered, it could only tally 3,005 votes. In light of that, the last sentence seemed to convey, to me at least, that the manufacturer *intentionally* delivered the less capable machine. I was only scoffing at the appearance that your statements seemed to place some level of hubris on the part of the manufacturer. I understand now that you were merely highlighting the incompetence of the folks involved.
Now, as far as open source being the "solution" to this problem, let me toss in a few more kinks. I don't think that this is only a software issue. I think that it comes down to a hardware *and* a software issue. For instance, in this case, suppose *all* voting machines from this manufacturer contain the exact same core software (or firmware). Then, based on what physical hardware is installed, the system knows how many votes it can store. What if the manufacturer simply grabbed the wrong machine (i.e. one without the correct number of flash memory devices) from the assembly line and shipped it out? Incompetence number one. Then, when the machine was delivered to the voting authority, only a few test votes were cast to make sure the system functioned, and nobody double-checked that the machine "sub-model" was in fact the correct one. Incompetence number two.
Most of these ideas about the various ways in which the machine could have failed have little to do with the software. In fact, prior to working for Borland, I used to design, build, and program magnetic stripe encoding and access control equipment. In order to save significant manufacturing costs, the embedded software (which I wrote entirely in 6800 assembly) contained *all* the features that a customer could possibly order. They could also order varying levels of card storage and "store-and-forward" buffering. By using a single master ROM, we'd burn several hundred copies. By default, only the core functionality was enabled. By placing the chips into a specialized EPROM burner, the tech could burn certain memory addresses in order to enable the extended functionality. If the customer ordered more than a few units, all with the same features, then a new temporary master was created, and it was off to the gang programmer to burn the custom ROMs.
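The firmware side of that trick is trivial, by the way. Here's a tiny sketch of the idea in Delphi rather than 6800 assembly, with made-up names and option bits, so treat it purely as an illustration: one master image that checks a few "option" bytes, set after the fact by the burner, to decide which features to light up.

program FeatureFlags;
{$APPTYPE CONSOLE}

const
  // Hypothetical "option byte" that would live at a fixed ROM address.
  // In the real device the technician's EPROM burner sets these bits;
  // here the value is simply hard-coded for the sake of the example.
  OptionByte: Byte = $03;

  FEATURE_STORE_AND_FORWARD = $01;  // bit 0
  FEATURE_EXTENDED_CARD_DB  = $02;  // bit 1
  FEATURE_AUDIT_TRAIL       = $04;  // bit 2

function FeatureEnabled(Mask: Byte): Boolean;
begin
  Result := (OptionByte and Mask) <> 0;
end;

begin
  // The same master image runs everywhere; behavior differs only by
  // which option bits were burned for this particular unit.
  Writeln('Store-and-forward: ', FeatureEnabled(FEATURE_STORE_AND_FORWARD));
  Writeln('Extended card DB : ', FeatureEnabled(FEATURE_EXTENDED_CARD_DB));
  Writeln('Audit trail      : ', FeatureEnabled(FEATURE_AUDIT_TRAIL));
end.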
I would imagine that most of these voting machines are in fact embedded systems that, in order to keep costs way down, use much older technology. In fact, they are probably not running any version of Windows, Linux, or whatever... Since they are single-purpose systems, you can do a whole lot to cut costs and increase manufacturing efficiency, all without compromising the core functionality of the machine.
Yes, there is room for programmer error. There is also room for manufacturing defects. And finally, there is room for configuration errors. The latter two items have little to nothing to do with the embedded firmware. Also, since these hardware platforms are themselves proprietary, unless you have a full understanding of the environment under which the actual voting code is running, having the code may simply not be enough. About all you can hope to get from being open source is simple code reviews that find some of the more blatant errors, most of which should be caught by simple in-house peer reviews and fundamental QA testing anyway.
Finally, your ideas for making things verifiable and trackable are pretty good. However, I'd take it a step further. There should be hardware keys burned into the actual device that uniquely identify that machine. These keys should be similar to those funky parallel or USB port dongles that folks tend to loathe. What is interesting about these devices is that they contain some "write-only" memory. Huh? What good is "write-only" memory? Well, what you do is write a private key into this memory so that when you send data through the device (or onboard chip), it encrypts that information with the key. Since *only* that machine has that particular private key, *and* there is no way to ever retrieve it without destroying the device itself, you can be certain that a particular vote is from a valid machine. This is because the public key is well known and available to anyone. So basically, as long as you can decrypt the data with a specific public key, you know that the vote could have only come from a valid machine. Not even the manufacturer or the election officials should ever have this private key. That is just off the top of my head...
I better stop, or I'll start being accused of being too "wordy." ;-)...
Wednesday, January 19, 2005
Open source and e-voting.
Charlie Calvert has an interesting bit on CodeFez about an electronic voting machine failure. I haven't really formed an opinion regarding all the brouhaha surrounding proprietary vs. open source e-voting solutions. I think both sides have very valid arguments. However, I do take some issue with the assumptions that Charlie has made in this editorial piece. I had to read the following quote several times to believe that it had just been said. Especially that last sentence!
What do they mean the machine could only handle 3,005 votes? In this day of 32 bit operating systems, where the standard limit for an Integer value is over 2 billion, exactly how did they manage to create a limit of 3,005 votes? A failure on this magnitude takes real work to achieve! It is something only a proprietary software company, intentionally trying to cripple their software, would be likely to achieve. [emphasis mine]
Wow. Unless I totally misinterpreted this, it certainly looks like Charlie has flat-out accused the voting machine vendor of intentional voter fraud! There are way too many variables to simply make that kind of judgement. For instance, according to Charlie, the vendor stated that the machine “had the capacity to record 10,000 votes.” What if that machine could be configured in various ways to store more or less verification data with each vote? Each different configuration would affect the total vote storage capacity. Voting machines are more than simple counters that accumulate a tally. They have to store transactional data, timestamps, and other bits of verification data (obviously short of associating a particular voter with a specific vote!). I would imagine that each precinct would be able to dial up or down the level of verification data stored depending upon their state's or precinct's rules regarding election verification.
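Just to make that concrete with some purely made-up numbers (I have no idea what the real machine's storage size or record layout are), the back-of-the-envelope arithmetic is simply a fixed amount of non-volatile storage divided by a per-vote record size that grows as more verification data is enabled:

program VoteCapacity;
{$APPTYPE CONSOLE}

const
  // Invented figure: the machine's non-volatile vote store.
  StorageBytes = 384 * 1024;

function Capacity(BytesPerVote: Integer): Integer;
begin
  Result := StorageBytes div BytesPerVote;
end;

begin
  // Minimal record: little more than a running tally entry.
  Writeln('Small per-vote record (38 bytes):  ', Capacity(38), ' votes');
  // Add timestamps, machine ID, and audit data and capacity drops sharply.
  Writeln('Full audit record (128 bytes):     ', Capacity(128), ' votes');
end.

Same machine, same memory, wildly different "capacity" depending on how it is configured.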
I would be quicker to pin this problem on the sales and support teams than on the programmers! Either the salesman didn't properly convey that dialing up the verification data decreases the total vote storage capacity, or the folks charged with setting up the machines didn't RTFM! Open source would not fix that problem one bit!
What about the software that runs the Space Shuttle? Should that be “Open Sourced” as well? It would have made little difference to the Columbia crew. In fact, all the reports I read or heard talk about how the Shuttle kept correcting the yaw introduced by the extra drag created by that gaping hole in the RCC panel. Even to the point of firing attitude thrusters. The software that runs the flight control systems has been through rigorous testing and performed flawlessly. It is proprietary. Sure, you can argue that that software is running on mission critical systems. What about the software that runs the respirator that is helping keep your relative alive while the doctors are performing a triple bypass? Yep. Proprietary. Sure, no one died when the voting machines failed, but it *does* attack the very core foundation of what this country was built on. I just don't see how open source would have been the “magic bullet” to solve all these problems. You can apply that same argument to all the other cases where software is a critical component, but I don't see an outcry from the “open source” proponents to have GE Medical Systems open source their defibrillator firmware. I admit that is a bit of hyperbole, but I just want to point out that closed-source systems do work and do provide significant value to our society.
Regarding Charlie's statement about intentionally crippling the software, I have to wonder what that company's motivation would be. Have criminal charges been filed against the voting machine company? A company is in business to make money, not to make a few quick bucks by defrauding the voters in some North Carolina county and then go to jail for voting fraud. Some grand conspiracy is a little far-fetched. Almost to the level of Roswell cover-ups and alien autopsies.
In all the articles I've read regarding the machine failure, I find nothing about whether the failure was in software or hardware. They simply state that it was a “voting machine failure.” It very well could have been a bad bank of flash memory, where the software thought it was writing the proper tracking data but it just flew out the bit-bucket. Sure, the software should properly verify that it was writing the data correctly, and if an error is detected it should block all further voting and alert the polling place staff. I'd be interested in seeing a reference to some article that outlines the specifics of the machine failure. I couldn't find any in Charlie's piece.
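For what it's worth, that kind of write-then-read-back check is cheap to do in any environment. Here's a little Delphi-flavored sketch of the idea; the record layout and the in-memory "store" are invented for illustration, and real firmware obviously wouldn't be structured this way:

program WriteVerify;
{$APPTYPE CONSOLE}
uses
  SysUtils, Classes;

type
  // Hypothetical vote record layout, invented for illustration.
  TVoteRecord = packed record
    BallotId: Integer;
    Choice: Integer;
    TimeStamp: TDateTime;
  end;

// Write a record, then read it back and compare. If the read-back
// doesn't match, the caller should halt voting and alert the staff
// rather than silently losing votes to a bad memory bank.
function WriteAndVerify(Stream: TStream; const Rec: TVoteRecord): Boolean;
var
  SavedPos: Int64;
  Check: TVoteRecord;
begin
  SavedPos := Stream.Position;
  Stream.WriteBuffer(Rec, SizeOf(Rec));
  Stream.Position := SavedPos;
  Stream.ReadBuffer(Check, SizeOf(Check));
  Result := CompareMem(@Rec, @Check, SizeOf(Rec));
end;

var
  Store: TMemoryStream;
  Vote: TVoteRecord;
begin
  Store := TMemoryStream.Create;
  try
    Vote.BallotId := 42;
    Vote.Choice := 1;
    Vote.TimeStamp := Now;
    if WriteAndVerify(Store, Vote) then
      Writeln('Vote stored and verified.')
    else
      Writeln('Storage failure - stop accepting votes!');
  finally
    Store.Free;
  end;
end.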
Finally, I like Charlie. I have a lot of respect for him. He's certainly a better writer than I'll ever be. But, we don't have to agree on everything ;-)... Besides, it appears that the courts have finally decided the race.
Friday, January 14, 2005
Danny is now finally on blogs.borland.com
Thursday, January 13, 2005
Free lunches, memory, and Moore's law..
I've been reading a lot of the various comments and observing the overall gasps and guffaws surrounding Herb Sutter's DDJ and C/C++ Users Journal articles regarding the end of the line for Moore's Law. Granted, this is actually the first time in nearly 30 years that processor speeds haven't grown at the “normal” meteoric rate. However, I don't think this should be cause for immediate concern. In fact, this should be seen as a good thing. As the old proverb goes, “necessity is the mother of invention,” so too do I think that this may be a good time for the PC vendors to focus on other aspects of the whole system. Rather than focusing on the core processor for all the speed gains, they should turn their attention to the memory bus. This is the number one bottleneck of a system. What if all the memory (RAM, that is) ran on the same clock as the core CPU? What if a new value could be fetched from memory in a single clock cycle? Sure, there are small amounts of memory that do use the internal CPU clock: the level 1 cache. But those are just cheats and tricks. In practice, the cache does a fairly good job of keeping the CPU monster fed with new instructions and data to push around, but as soon as the level 1 cache became exhausted, they added a cache for the cache, the level 2 cache. This cache is usually 2-4 times the size of the level 1 cache and runs approximately that much slower. Some systems even add a level 3 cache. What's next? A level 4 cache... oh wait... that's the main system memory ;-)..
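You can actually see the memory bus at work from plain Delphi code, no special tools required. Here's a rough, unscientific sketch; the array size and timings are completely machine-dependent, but the sequential (cache-friendly) walk should handily beat the strided (cache-hostile) one:

program CacheDemo;
{$APPTYPE CONSOLE}
uses
  Windows;

const
  N = 2000;

var
  Data: array of array of Integer;
  i, j: Integer;
  Sum: Int64;
  T0: DWORD;
begin
  SetLength(Data, N, N);

  Sum := 0;
  T0 := GetTickCount;
  for i := 0 to N - 1 do       // row-major walk: consecutive addresses
    for j := 0 to N - 1 do
      Inc(Sum, Data[i][j]);
  Writeln('Row-major walk:    ', GetTickCount - T0, ' ms');

  T0 := GetTickCount;
  for j := 0 to N - 1 do       // column-major walk: strides across rows,
    for i := 0 to N - 1 do     // defeating the caches on each access
      Inc(Sum, Data[i][j]);
  Writeln('Column-major walk: ', GetTickCount - T0, ' ms');

  if Sum <> 0 then
    Writeln(Sum);              // use Sum so the loops aren't dead code
end.

Same arithmetic, same number of memory touches; the only difference is how kindly the access pattern treats the cache hierarchy.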
Now there are some new architectures being used. For instance, Non-Uniform Memory Access, or NUMA, is an interesting technique for multiple-CPU systems to reduce cross-CPU contention. By giving each CPU its own chunk of system memory that only it can access directly, each CPU can run without too much worry that another CPU may be accessing that memory at the same time, mainly because the CPUs have to negotiate with each other in order to get access to memory they don't control.
In a nutshell, that is the state of the hardware. What about the software that runs on these systems? I think Julian Bucknall has covered that detail quite well. Basically, it would be a good idea to get used to writing code that takes advantage of multi-processor architectures. This will present a whole new raft of problems, for sure, but there is a multitude of techniques for solving them.
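Just as a taste, even the stock TThread class that has been in the RTL for years lets you split work across processors today. Here's a deliberately simplistic sketch (the data size and the two-way split are arbitrary; real code has to worry about shared state, synchronization, and how many CPUs are actually present):

program ParallelSum;
{$APPTYPE CONSOLE}
uses
  Classes;

type
  // Each thread sums its own slice of the shared array; no locking is
  // needed because the slices don't overlap and results are per-thread.
  TSumThread = class(TThread)
  private
    FLo, FHi: Integer;
    FSum: Int64;
  protected
    procedure Execute; override;
  public
    constructor Create(Lo, Hi: Integer);
    property Sum: Int64 read FSum;
  end;

var
  Data: array of Integer;

constructor TSumThread.Create(Lo, Hi: Integer);
begin
  FLo := Lo;
  FHi := Hi;
  inherited Create(False);  // start running immediately
end;

procedure TSumThread.Execute;
var
  i: Integer;
begin
  FSum := 0;
  for i := FLo to FHi do
    Inc(FSum, Data[i]);
end;

var
  i, Mid: Integer;
  A, B: TSumThread;
begin
  SetLength(Data, 1000000);
  for i := 0 to High(Data) do
    Data[i] := 1;

  Mid := Length(Data) div 2;
  A := TSumThread.Create(0, Mid - 1);
  B := TSumThread.Create(Mid, High(Data));
  try
    A.WaitFor;
    B.WaitFor;
    Writeln('Total: ', A.Sum + B.Sum);
  finally
    A.Free;
    B.Free;
  end;
end.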
We, on the Delphi team, are looking into various things we can do to help users write better code for these architectures. Everything from RTL/VCL support to language enhancements is on the plate for us to look into. Again, “necessity is the mother of invention,” and this current stalling of Moore's Law may just be the catalyst the software tools industry needs to step in and lend a hand. This is an opportunity.