Archive for February 2007

Introduction to CSLA – lunch presentation at ODNC


The Ottawa .NET Community organized an interesting lunch-and-learn session today: the first of two sessions introducing Rocky Lhotka’s CSLA framework. Usual location – the Glacier Room at the downtown Microsoft office. I decided to attend for two reasons. The first was that I knew the presenter, David Campbell. David is a great guy – a consultant with 20 years of history in the software development business, running his own company. Last year, when we were looking for people with CSLA experience in Ottawa, David’s name came up first.

My second reason was educational. No, I did not really expect to learn something new about CSLA. We have been using the framework on two projects since summer 2006; I have read the book (and agree with David that it is good, but a pretty dry read) and have also read and written quite a lot of code that uses the framework. I certainly do not think that I know enough about CSLA (as the German proverb says, man lernt nie aus), but it is hard for an introductory session to go to the level of detail that uncovers anything new to active users of CSLA. What I was looking for was inspiration on how to present the framework to developers who are new to it – and I was not disappointed. And btw, I *did* learn something completely new about CSLA: that SmartDate trick with “.”, “+” and “-” is really neat (see the source code of the SmartDate unit tests).

What I always enjoy at the ODNC sessions is the discussion during and after the presentation. It was like that last time (Adam Machanic) and it was like that today. People ask great questions (OK – with one exception – if you are an ODNC regular, you know who I am talking about).

We have had lots of in-house discussion about the relative pros and cons of using CSLA. In our projects, portions of the CSLA functionality are not so important: we do not really need multi-level undo, for one example. On the other hand, the location-independent business objects and the scalability they give you are really nice. Yes – CSLA forces you to do things a certain way, which may not be considered ideal, but at least it results in a consistent approach across the codebase.

CSLA has a pretty steep learning curve, even with the books available, and its way of doing things can look strange to a seasoned object-oriented designer. The heavy use of generics and the design of the core framework classes force you to use very flat object hierarchies. Instead of inheritance, it pushes you either towards sharing functionality by wrapping or towards code generation. I am not exactly crazy about the read-only/read-write object dichotomy – without inheritance, it often leads to code duplication.

Also, the book example with Projects and Resources is IMHO not the most illustrative one: it puts too much emphasis on dealing with lists of objects and does not illustrate many important aspects of dealing with root objects and switchable (root/child) objects. I had trouble using this example for in-house training and mentoring: it is not simple enough to make the obvious things really obvious, and not comprehensive enough to cover many everyday situations.

Despite all that, our experience with CSLA has been overall positive: we were very pleased with the performance compared to plain datasets, and after a few initial hiccups the framework allows you to create a very solid, reusable, scalable layer of mobile business objects.

David is going to do Part Deux of the CSLA intro, which should be a practical exercise in creating an address book application based on CSLA with multiple UIs. Looking forward to it – maybe that example will fill the gap …

And btw – thanks, David.

Webbits 2007-05


It is not Friday today and I am obviously late with the next link collection. Somehow I completely missed the weekend – so much was happening at work that I barely had time to notice what was going on on the Web. My Google Reader is overflowing with new links and there are lots of new ones in delicious. Let’s start.

Merlin Mann of 43folders has started doing video podcasts – have a look (in the second edition he interviews Jonathan Coulton, famous creator of the Code Monkey song). This little utility allows you to synchronize contacts and tasks between Thunderbird and a PocketPC. An interesting article on how to make Gmail your personal information centre.

A good article on the Zen-like “productivity zone” state and how to reach it. A tutorial on how to sync an address book via IMAP. The .NET Addict published two more articles in the “Windows developer learning OS X programming” series: this time he compares NIB with XAML, and Objective-C 2.0 categories with C# 3 LINQ extensions. Hmmm!

Some new software: for Windows, Virtual PC 2007 is out. ASP.NET developers – this ViewState viewer is really useful. Version 5.0.3 of the fantastic Reflector is available. For Web workers – top web tools for college students – you will find them useful even if you are not one. For Mac users – automating network share mounting based on your location, and a tagging add-on for iTunes.

An interesting series of articles on software maintenance. A guide on how to switch from Windows to Linux (from IBM). The Register suggests that if you are buying a new laptop, you should go for a Mac (Milos/Gabo – they say almost exactly what we were talking about today …). With few exceptions, if you want to avoid Vista, a Mac + XP in Parallels is your best choice, as most manufacturers are offering only Vista notebooks.

Scripting, repeatability and GUI


Most programmers love GUIs. They are so easy, so convenient: just point and click here, click there, drag and drop, repeat a few times – and problem solved. Who in their sane mind would bother typing and writing code when the GUI can do it all? Convenience rules. If you need to add or change something, it takes just a few more clicks. Not a big deal, just a few more easy steps. No problem – right? Wrong!

What is wrong with this approach is that with a few clicks here and a few clicks there, the whole process becomes more and more complicated. It may need to be done differently, depending on conditions. It may produce different results if the order of clicking changes. If it needs to be done on more than a single system, it starts to be more of a chore than a shortcut. Something that needs to be done over and over is always very sensitive to human error. And because the clicks do not leave an audit trail, only the guy with the mouse in hand has any knowledge of what is going on. It is also very hard to keep a sequence of mouse clicks in CVS :-). And it is close to impossible to tell, after the fact, what the difference was between two series of clicks – one of which worked and one of which did not.

To get repeatability, you need a process that starts and runs unattended and, based on input parameters, performs the task at hand. It can be either a compiled program or a script – but a script is faster to create and easier to maintain. Hence “scripting”. A script can be version controlled, contains explicit decisions, and is the minimal documentation of the task. Without a script that starts from a well-defined state and properly handles all the tasks/transformations, you will never know in which state you have ended up.
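The shape of such a script can be sketched in a few lines of shell. The “deploy a directory tree” task and all the names below are hypothetical stand-ins for the real job; the pattern is the point – explicit parameters, an abort on the first error, and a run that always starts from a well-defined state.

```shell
#!/bin/sh
# Sketch of a repeatable, parameterized task script (the task here is
# a made-up example: copying a source tree to a destination directory).
set -e   # abort on the first error instead of continuing from an unknown state

deploy() {
    src=$1
    dest=$2
    # Start from a well-defined state: recreate the destination on every run.
    rm -rf "$dest"
    mkdir -p "$dest"
    # The actual task -- explicit, auditable, and diff-able in version control.
    cp -R "$src"/. "$dest"/
    echo "deployed $src -> $dest"
}

# Self-contained demo run with throwaway directories.
demo_src=$(mktemp -d)
demo_dest=$(mktemp -d)/out
echo "hello" > "$demo_src/readme.txt"
deploy "$demo_src" "$demo_dest"
```

Because the whole procedure lives in one file, it can be checked into CVS, diffed between revisions and re-run unattended – none of which is possible with a sequence of mouse clicks.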

Not all scripts were created equal. Many GUI tools allow you to perform the required tasks and then generate scripts that (in theory) do the same thing. This sounds like an ideal combination of both worlds: you can keep using the GUI and, by generating scripts, you keep a trail of the changes for version control. Right? Not quite, unfortunately.

The problem with generated scripts is, well, that they are generated, not written. They contain lots of repetitive information, many function arguments that are not really relevant to the task, and no parameters. They are bad documentation because of the verbosity, and they can lead to problems with version control – a small change in a GUI setting may produce a very different script.

Generated scripts *can* be used as a starting point to accelerate the initial steps, though. It is time consuming to start with an empty file – but it is very often a very educational and sobering experience. Generate the first version of the script with the tool, then let the developer take over: change the script structure, extract parameters, refactor repetitive steps into functions/procedures, add comments. After this, the GUI should not be used as the primary tool again; from that point on, running the script should be the only way to perform the task. If a change is needed, edit the script and re-run it. This type of script is actually very useful: it is both documentation and tool, it can grow and evolve as the problem evolves, and it helps you understand what is being done.

A typical area where this is extremely visible is database development. In the Microsoft world, Management Studio (the former Query Analyzer) is a tool that allows you to do most configuration tasks via the GUI. It will also generate the SQL for the task – but the SQL will not be as flexible as it should be: it may contain hardcoded references to the database, and it may not be complete (e.g. the tool will generate ALTER PROCEDURE, which will fail if the procedure does not exist, rather than test for existence, drop the procedure if it exists, and then re-create it). Formatting will depend on settings – e.g. you may or may not get brackets around identifiers – which makes version control of such scripts an interesting exercise.
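As a sketch of the difference, here is roughly what the hand-written, re-runnable variant of such a database script looks like (the procedure name GetCustomers and the table are made up; the sqlcmd deployment command is only indicated in a comment, since it needs a real server):

```shell
#!/bin/sh
# Write out an idempotent T-SQL script that can be kept in version control
# and re-run safely, instead of relying on a generated ALTER PROCEDURE.
set -e

sql_file=$(mktemp)
cat > "$sql_file" <<'SQL'
-- Hand-written on purpose: it tests for existence, drops the procedure
-- if it is there, then re-creates it -- so it works whether or not the
-- procedure already exists, unlike a generated ALTER PROCEDURE.
IF OBJECT_ID('dbo.GetCustomers', 'P') IS NOT NULL
    DROP PROCEDURE dbo.GetCustomers;
GO
CREATE PROCEDURE dbo.GetCustomers
AS
    SELECT CustomerId, Name FROM dbo.Customers;
GO
SQL

# Deployment then becomes a single repeatable command, e.g.:
#   sqlcmd -S myserver -d mydb -i "$sql_file"
echo "wrote $sql_file"
```

The script is the same every time it is rendered, takes no formatting settings into account, and a diff between two revisions shows exactly what changed – which is the whole point.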

GUI tools can be a great help when they are used by a skilled person who understands their power as well as their limitations, and who uses the GUI to speed up development but does not rely on it exclusively. It usually takes some experience to recognize the GUI trap and to develop skills/habits outside of the GUI tool. Therefore, when looking for e.g. a senior database developer, try to test him/her with a simple repeatable task that requires working without, or outside of, the GUI. If he/she has problems writing scripts by hand, using command line tools and scripting in general, it is not a senior developer, regardless of how many years of experience are on the resume.

Good book on coding and software development


I picked it up two days ago in Chapters – Code Craft by Pete Goodliffe. Somewhat similar in the topics covered to Rapid Development or Coder to Developer – but newer. It is divided into six parts:

Part I – At the codeface – defensive programming techniques, naming, formatting, commenting, error handling

Part II – The secret life of code – code construction tools, testing and debugging methods, building

Part III – The Shape of the code – code design, architecture, source code growth and decay over time

Part IV – The Herd of Programmers – development practices and programming skills, psychological aspects of programming, communication

Part V – Part of the Process – rites and rituals of software development, specifications, estimation, code reviews

Part VI – View from the top – high level view, software development methodologies.

I started reading in a rather unstructured way, with chapter 16 (Code Monkeys), dealing with the typology of programmers, and chapter 17, covering team and communication issues. I liked it a lot. An easy read, funny, entertaining and very practical. Often just common sense, but already formulated and properly packaged, ready to be communicated.

I have decided to read the full book. I will post a review after I am done.

More great programming quotes


It has been said that the great scientific disciplines are examples of giants standing on the shoulders of other giants. It has also been said that the software industry is an example of midgets standing on the toes of other midgets. [Alan Cooper]

And the users exclaimed with a laugh and a taunt: “It’s just what we asked for but not what we want.”

For a successful technology, reality must take precedence over public relations, for nature cannot be fooled. [Richard Feynman]

There’s no sense being exact about something if you don’t even know what you’re talking about. [John von Neumann]

You cannot bullshit a compiler. [Anon]

… the cost of adding a feature isn’t just the time it takes to code it. The cost also includes the addition of an obstacle to future expansion. … The trick is to pick the features that don’t fight each other. [John Carmack]

Learning is not compulsory. Neither is survival. [W. Edwards Deming]

You can’t communicate complexity, only an awareness of it. [Alan J Perlis]

If you don’t think carefully, you might think that programming is just typing statements in a programming language. [Ward Cunningham]

“If we wish to count lines of code, we should not regard them as lines produced but as lines spent.” [Edsger W. Dijkstra]

Quote of the day


We build software line by line, idea by idea, side by side. Our software is an expression of ourselves, our best moments, our toughest challenges, our greatest hopes.

This wonderful quote is from a handcrafted copy of Vista, which was distributed to everybody who worked on the product. It is a Microsoft tradition that people who worked on a project get a copy of the product when it ships.

What a wonderful idea.

Read more on Larry Osterman’s blog.

Hiring programmers – or Degrees of Done


Back in the old country, I once had a programmer working for me who made himself famous with the following quote:

“I have fixed that bug, do you want me to compile it too?”

In his mind, he was done as soon as he identified the bug and put the fix in the code. All the rest was trivial and unimportant routine.

Over the last 20 years I have worked with many very different developers and learned the hard way how many different meanings the word “done” can have. For some, done means that the coding has just finished, and that he/she successfully compiled the project and stepped through some of the code paths in the debugger with only a negligible number of major crashes. Whether the code is in CVS, commented – who cares?

For others it means that unit tests were written and run, the code is commented and documented, everything is properly tagged and versioned in the source control system, built and deployed via the automated build system (now do NOT laugh, the latter are not mythical creatures – I really have met a few guys like that). The problem is that it takes some time to find out who is where on the done-ness scale, and making sure that the whole team is at the same degree of “done-ness”, or at least in agreement about what done means, can present quite a challenge and consume a considerable amount of the project manager’s and/or team lead’s time.

“What is your degree of done?” is a great question to ask when you are building a new team and hiring developers. Connie and I had an interesting discussion on the topic of hiring criteria, and we both came to the conclusion that the biggest mistake you can make is to put too much weight on the best technical skill-set match. The longer the project, the less emphasis should be placed on the skill-set match – simply because skills will evolve, but non-technical personality traits do not change – and in the long run, these may cause the most trouble. For a smart person with a good education and wide enough experience, it is much easier to fully master some special area of expertise, especially when there is some time available. On the other hand, things like teamwork, work ethic, social skills, communication or the “degree of done” are very hard to acquire and even harder to fix/develop.

The other reason why hiring only by skill match can be tricky is: how do you evaluate a supposed expert’s expertise? If you do not have another expert (ideally a better one), how do you validate how much of the claimed experience is truly there and how much is just nice wrapping, an empty shell lacking depth? The risk in hiring somebody with boasted experience but a lack of depth is that this person will inevitably become the key design/implementation influencer and decision maker in the given area of expertise. Eventually, as the project moves ahead, other team members will catch up, understand more, and start to question some of the expert decisions made – and the fun begins.

Another important “must have” for a good hire is a multicultural background. No, I do not mean *that* type of multiculturalism the politicians like to talk about: I mean information technology cultures such as platforms, operating systems and languages, and the person’s exposure to multiple of them.

If you have a guy who has worked on non-Microsoft platforms, it will be much easier to make him understand why automated builds, scripted installs and all that old-fashioned command line stuff are so important – because that person will have seen the value and power of scripting, repeatability and automation. This is very hard to explain to a guy who has spent all his life just clicking buttons and checkboxes in GUI tools, and whom the black box of the command line prompt either scares or annoys. (OK, let’s be fair here: until PowerShell’s availability, the command prompt in Windows was both scary in its ineptitude and annoying compared to e.g. Bash.)

If you have a seasoned Oracle database guy who has used the database on Solaris, Linux and Windows, you will never have to mention, let alone explain, that you need scripts for things like creating the schema, loading demo data etc. You will get nice, handwritten, commented, parametrized scripts for everything, whether you ask for them or not. These guys just operate that way, because until recently there was no GUI and because this is the fastest way (hi Bob :-)). On the other hand – ask a SQL Server database developer to work using the “script first” approach and you may find out that it is not so obvious, that the guy does not get it (he just keeps on clicking in Management Studio), and that the scripts you eventually get are far from good – because they are generated from changes done via the GUI, not really written. To be left maintaining such code – good luck.

So what is the best hire? Smart people with a good education and problem-solving capabilities, with solid verifiable experience, a wide multicultural background (in the computing sense), with a proven record of working in teams of various sizes, ideally well versed in the business area you are working in, with a work ethic compatible with your organization, a shared communication culture and a similar degree of done. It does not really matter how many keywords the resume matches: if you need a great C# programmer and have a choice between an experienced Java guy who never touched .NET and a 10-year senior Windows programmer who spent most of his/her career on Visual Basic-like platforms, it will typically end up like this: pick the good Java guy, give him 4-6 weeks, and the reward will be beautiful, maintainable code with very natural C-sharpness. Pick the VB guy and you will “save” the start-up time – but you will more likely get average, harder-to-manage, less scalable code, because the object design and object thinking are just not there to the same degree as in the Java case. The longer the time spent on VB version 6 or less, the worse.

An apology in advance to all my VB.NET-loving, VB.NET-writing friends – please, no flame wars: all of you absolutely are the exceptions confirming the rule, you are the crowd who gets OOP, and besides, we all know that VB.NET is just syntactic sugar on top of C# – of course the “better” type of sugar :-). I am not picking on you. I said VB but I really meant Perl …

And btw, Tidlo – if you are reading this, send me an email. I hope you are still programming, and occasionally compiling after you fix a bug.

DotNet Development Toolbox – II


Continuing from here.

Ch-7: Digging into the source code

In this chapter, Mike describes the tools that work with, around or towards source code: Ildasm, Reflector and FxCop. The book was written for Visual Studio 2003, so some references are slightly outdated – but there is one utility that stands out and is more useful than ever: Lutz Roeder’s Reflector. It works with Visual Studio 2005 as well (albeit the book covers an older version than the one you can get on the Web site).

Here, in a nutshell, is what it does for you: it lets you load a compiled assembly (.exe or .dll) and shows the content of all the classes, functions, etc. contained in that assembly. It contains a built-in disassembler, which is amazingly good. It also contains an analyzer, which shows you – at the binary level – which functions/methods are called from a given class method and which classes call that method, as well as the inter-assembly dependencies. Extremely valuable when debugging low-level DLL loading issues.

Reflector has also inspired many add-ins: see Lutz’s blog, or the Add-Ins page on CodePlex. The add-ins do lots of different things: compute and display code metrics, do binary compares of different versions of the same assembly, graphically display interdependencies between assemblies, generate unit test stubs … take your pick.

Evolution of the math problem or dumbing down the recent grads



1960

A logger sells a truckload of lumber for $100. His cost of production is 4/5 of this price. What is his profit?

1970 (Traditional)

A logger sells a truckload of lumber for $100. His cost of production
is 4/5 of this price, or in other words, $80. What is his profit?

1970 (New Math)

A logger exchanges a set L of lumber for a set M of money. The cardinality of set M is 100 and each element is worth a dollar. Make a square array of 100 dots to represent the elements of set M. The set C of the cost of production contains 20 fewer elements than set M. Represent set C as a subset of set M and answer the following question:
What is the cardinality of the set P of profits?


1980

A logger sells a truckload of lumber for $100. His cost of production is $80 and his profit is $20. Your assignment is to underline the number 20.

1990 (Outcomes-based destreamed integrated Math)

By cutting down beautiful forest trees, an environmentally ignorant logger makes a profit of $20. What do you think of this way of making a living? In your group, use role play to determine how the forest birds and squirrels feel.


DotNet Development Toolbox


I recently picked up a very interesting book that I first read about a year ago: Coder to Developer by Mike Gunderloy, with the subtitle Tools and Strategies for Delivering Your Software. It is an excellent book and I highly recommend giving it a look if you are in the software development business on the Microsoft platform. An easy read, practical, useful. When I read it back in 2006, it was before we had set up our own development lab and started the biometric project. With this recent experience, I was re-reading the book from quite a different point of view: unlike before, I knew what we had tried, what worked and what did not. Unlike before, I now have broader experience with what it means to develop software and run a large project inside an organization, to create and maintain infrastructure for a team of developers, and to lead the project design and implementation.

The book has 14 chapters, addressing various areas from starting and organizing a new project to using tools such as source control, unit testing, the IDE, bug tracking, logging, build tools etc. While starting up the lab and the biometric project, we had to go through pretty much every chapter of the book. Sometimes we made the same choice as Mike recommends, sometimes a different one. Here is our toolbox, in the order of the book chapters:

Ch-3 – Source code control

Mike mentions several source code control systems: BitKeeper, ClearCase, CVS, Subversion, Perforce, VSS. We excluded VSS (because of reliability) and ClearCase (because of complexity and price), and the final selection was between CVS and Subversion. Actually, between CVS-NT and Subversion. We decided to adopt both, starting with CVS, because it was more familiar to the majority of the team members.

At the beginning we were also considering VSTS, but the prohibitive price, the complexity and the low version number were the reasons why we decided to wait for at least Service Pack 2 before considering it again.

Ch-5 – Unit Testing

Compared to other projects, we managed to be quite successful in implementing unit tests and TDD. The BO layer has over 100 unit tests, which helped to catch several pretty vicious bugs in the early days. We settled on MbUnit instead of NUnit because of very useful extensions such as row tests. MbUnit is pretty much a superset of NUnit – see more on the project Wiki page. MbUnit works very well; the only disadvantage is the very limited documentation.

Ch-6 – IDE

Visual Studio – what else? In version 2005 it provides a very good feature set. The refactoring support is still not quite as comprehensive as in Eclipse, but it is very good nevertheless. Some developers use special plugins – e.g. Visual AssistX – but others (including myself) found it rather counterintuitive and liked plain VS.NET better.

One of the plugins that we evaluated, and actually liked, was TestDriven.NET – a nice VS add-in for MbUnit, csUnit and NUnit. Unfortunately, the publisher has a very strange licensing policy: the more licenses you buy, the more you pay. A Professional license costs $95, but if you need more than one, you must purchase the Enterprise version ($135). Out of principle, to be “voting with our dollars”, we decided not to go with TestDriven.NET. Let’s hope that more people will do the same and the author will eventually get the message.

To be continued