feyeleanor
On Agility

It's long been anecdotally known that the best programmers are those who work bottom-up in an exploratory manner against the backdrop of a broad understanding of what they're trying to achieve. This classic hacker approach to development comes naturally to people who've written lots of code in a variety of languages and very likely do this as much for the pleasure of coding as for the paycheque. Because of their experience these hackers become hard-wired to write good and effective code with minimal overheads, although that doesn't completely preclude defects and errors in judgement. However, a combination of hard work, dumb luck, insatiable curiosity, native aptitude and intense personal interest has given them an appearance of genius which everyone else in the field either craves or affects.

These are the people every software manager wants on his team, churning out high-quality code in record time and keeping costs to the bare minimum.

Unfortunately, due to the high bar for achieving this level of skill, these programmers are few and far between. Assuming a bell curve similar to that used for rating intelligence, perhaps we're looking at between two and five percent of all coders. I'll be generous and say five. These people possess great agility of the kind displayed by mountain goats and it's amazing where they can get to under their own steam - although like mountain goats it's arguable as to whether most people would want to follow their lead too closely.

That's not to say the other ninety-five or so percent of developers are bad, just that they're not going to work in the same way, and as we get closer to the median we're going to see considerably higher costs associated with their performance. They need more structure, more assistance, a safety net.

This is where methodology comes into the frame. It's an attempt to make programmers of an average skill level, working together in a structured manner, perform to approximately the same standard as that hypothetical top five percent.

Traditional techniques like SSADM accepted that the developers using them were competent craftsmen (i.e. software engineers) but required no particular talent from them apart from a conscientious attention to detail. This sometimes resulted in ponderous and bureaucratic life-cycles depending on the environment in which they were utilised and made changing requirements mid-development somewhat painful, although iterative strategies like the Boehm spiral largely mitigated the impact of these tendencies.

Role specialisation allowed individual developers to fall into their favoured role - analyst, coder, tester, architect - and there were benefits to this: someone who spends ten years focused on testing will become pretty good at ripping apart code and finding obscure bugs just as a skilled analyst can turn poorly understood requirements into a cohesive whole. Unfortunately role specialisation is expensive in head count and many companies make such a hash of internal communications that inter-role politics further inhibit performance.

The Agile mindset is supposed to solve these problems by forming all of the talent required to develop a project into a cohesive team, and then stepping back and letting them get on with building a system. It also embraces the iterative strategy, encouraging developers to work on smaller groups of requirements and solve these well whilst remaining open to the possibility that requirements might change between iterations.

Everyone gets to do a bit of everything in an Agile team so work becomes more interesting and varied whilst building a mutual appreciation of the concerns specific to each given role. Ideally this means that all the roles work well together, improving productivity whilst keeping a relatively high level of quality and flexibility. There will of course be some additional overhead as the formalised communication between roles is replaced with a more interactive model, but that's to some extent counterbalanced by the reduced antagonism such easy access provides.

All good stuff.

If this were all that Agile methodologies concerned themselves with there would be a big gain for average teams. Their working methods would approximate those of the top slice, and where they individually lack the full skill set that would usually require, together they operate as a hybrid entity of similar agility but with the advantage of more physical resources to pursue their goal. This is a pack hunting strategy and nature demonstrates time and again that this can be highly effective.

Unfortunately to adopt a consciously Agile approach most teams look for more than just a mix of abilities and a focus on iterative delivery. The very lack of experience that differentiates the vast majority of developers from the top slice leaves many teams fretful as to whether they're really capable of producing high quality code or meeting agreed deadlines, and consequently there is a hunger for assurance and guarantees of success.

I'm good friends with quite a few developers in that top slice, most of whom seem to share my scepticism regarding the popular Agile methodologies. Now some of this is doubtless down to the desire (and this is where I'm forced to concede my own bias) to work with minimal interference, some of it down to often iconoclastic natures, and some of it is down to enjoying that thrill of uncertainty when taking on a new and potentially insoluble problem.

One thing the top five percent are not looking for is a sense of assurance, as they know the ignorance and uncertainty that characterises all new projects will soon be replaced by the sureties of solid code. Further, experience has taught them that with a lot of ad hoc debugging and some ripe expletives this code will grow into a solution fit for purpose. That fitness may be predicated on flexibility, on scalability, on runtime performance or any other property or set of properties that is desirable for the problem in question and the environment in which the solution must operate. But whatever the particulars, fitness is good :)

Therefore what the top slice are fundamentally looking for in any development methodology is the minimum of friction between what makes code fit for purpose and what allows them to get the system completed in a timely and cost effective fashion. They're looking for an asymmetric relationship between effort invested (preferably low) and quality of system produced (preferably high) and the ability to achieve that is what defines them in the first place.

Their assurance comes from the willingness to put in as much effort as necessary both to code the system and to figure out its requirements.

Now my natural prejudice is to analyse all of the faddish Agile methodologies I meet to understand how they work, chuckle at the sleight of hand involved in marketing them, and then to get on with doing things my own way. But I have to ask myself whether or not there really is some value in them that I'm not seeing.

For one thing Agile is becoming the orthodoxy in web development, an area where the majority of projects conform to a simple template with the complexities of building a web application buried in a framework such as Ruby on Rails. These are not particularly complex systems from a computer science perspective and their mission criticality is often marginal, but thanks to the crufty mish-mash of front end platforms that they have to work with there is an obvious role for automated functional testing. Likewise the desire of graphic designers to change the look and feel of a web site/application at the drop of a hat introduces a need for flexibility.

Clearly Agile is a good fit for these kinds of system. So then we have to ask ourselves what brand of Agile does the job best? I can only speak as an outside observer having sworn off all methodologies some years ago, but I'm going to look at some of the techniques common to the main Agile flavours that are currently popular in London and then see if any of them match my 'top slice' criteria.

First up there's the Behaviour/Test Driven Development (henceforth [B|T]DD) model that seems to be the most popular approach at the time of writing.

Testing is essential to the development of good software because it ensures that the software does what it's meant to do and performs as intended. In a traditional structured methodology this would account for approximately forty percent of the total development life-cycle and occur after implementation and before maintenance. In practice very few projects have really divided work up in this manner because much of what would normally be subsumed into implementation is ad hoc testing by coders, but the separation of testing into these two different aspects is useful.

TDD proceeds on the assumption that writing tests and then writing code which passes those tests ensures that a codebase is both focused on the requirements of a project and of better quality than writing code to fulfil the requirements and then testing it. This is a wonderfully rational approach that has long troubled me.
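
To make the cycle concrete, here's a minimal sketch using Ruby's Test::Unit and an invented ShoppingCart class (the names and behaviour are mine, purely for illustration): the test is written first and fails, then just enough code is written to make it pass.

    require 'test/unit'

    # Step one: the test exists before the class it exercises.
    class CartTest < Test::Unit::TestCase
      def test_total_sums_item_prices
        cart = ShoppingCart.new
        cart.add(:apple, 0.40)
        cart.add(:bread, 1.20)
        assert_in_delta 1.60, cart.total, 0.001
      end
    end

    # Step two: the simplest implementation that satisfies the test.
    class ShoppingCart
      def initialize
        @items = {}
      end

      def add(name, price)
        @items[name] = price
      end

      def total
        @items.values.inject(0.0) { |sum, price| sum + price }
      end
    end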

The test suites which TDD utilises have a tendency to grow geometrically in proportion to the functionality of the codebase being tested, so the goal of unity (i.e. complete) test coverage is in itself a large investment of resources, often out of proportion to the cost of ad hoc testing for any given level of code robustness. This becomes particularly pronounced if requirements change following implementation, making the previous tests obsolete. More disturbingly though, TDD lacks mechanisms for confirming the correctness of the test suite itself, leaving it prone to the same density of defects as any other codebase. Defects in a test suite have the unfortunate side-effect of forcing defects into their associated codebase which would not exist otherwise.
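
Reusing the hypothetical ShoppingCart from the sketch above, a single wrong expectation is enough to show how this happens: two items at 0.40 and 1.20 should total 1.60, but a developer coding strictly to the test will be pushed towards an implementation that reproduces the error.

    require 'test/unit'

    # The same hypothetical ShoppingCart as above, repeated so this file stands alone.
    class ShoppingCart
      def initialize; @items = {}; end
      def add(name, price); @items[name] = price; end
      def total; @items.values.inject(0.0) { |sum, price| sum + price }; end
    end

    # A defective test: the expected total is simply wrong.
    class CartTest < Test::Unit::TestCase
      def test_total_sums_item_prices
        cart = ShoppingCart.new
        cart.add(:apple, 0.40)
        cart.add(:bread, 1.20)
        # 0.40 + 1.20 is 1.60, but the assertion demands 2.00; code written to
        # satisfy this test must now contain a matching defect.
        assert_in_delta 2.00, cart.total, 0.001
      end
    end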

Another problem with TDD is that its effectiveness varies greatly depending on the level of domain knowledge possessed by the developers. Companies such as ThoughtWorks who use TDD extensively rectify this shortcoming by bringing domain experts into their teams and extracting as much knowledge from them as possible, usually via pair programming (which I'll come to in a moment).

The usual benefits ascribed to TDD are an increase in code quality and ease of system integration. Code quality means different things to different people and I don't doubt that a good test suite will remove certain kinds of defects from a codebase, so I'll take the assurance at face value. However, whilst tests can catch erroneous code they're not much good for catching ugly and inefficient code, so there's much more to ensuring code quality than just running an automated test suite: at a bare minimum code needs to be aggressively refactored and reviewed by developers other than the author to make sure it's conceptually clean and readable.

The final problem I'll mention with TDD on this particular occasion is that it makes architectural change difficult. The momentum embodied in a test suite which leads development makes it difficult to pursue hunches which don't necessarily have an identifiable set of pre-existing conditions. This is not how the majority of developers work but it's an example of where TDD in its pure form fails to meet my 'top slice' criteria.

Some of the shortcomings of TDD are solved by BDD, an extension to TDD which concerns itself with requirements. In BDD requirements are expressed as an executable specification which can be run against the current code base, generating tests to confirm that the requirements are met, which I guess makes these specifications Meta Tests. This approach draws much of its inspiration from Domain Driven Design, which in layman's terms is about creating a ubiquitous language describing a problem domain which is shared by developers and non-developers alike when discussing it. Ideally this ubiquitous language can be expressed as executable code in the form of a Domain Specific Language (which is probably why Ruby boasts so many BDD advocates).
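
As a rough illustration (the Account class and its behaviour are invented, and the exact syntax varies between RSpec versions), an executable specification reads something like this: the spec states the required behaviour in domain language, and running it against the codebase confirms whether that behaviour currently holds.

    require 'rspec/autorun'

    # Hypothetical domain class, present only so the spec has something to run against.
    class Account
      attr_reader :balance

      def initialize(balance = 0)
        @balance = balance
      end

      def withdraw(amount)
        raise "insufficient funds" if amount > @balance
        @balance -= amount
      end
    end

    RSpec.describe Account do
      it "refuses a withdrawal that exceeds the balance" do
        account = Account.new(50)
        expect { account.withdraw(80) }.to raise_error("insufficient funds")
        expect(account.balance).to eq(50)
      end
    end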

When I first met tools like RSpec I was very excited as they seemed to solve a problem we're all aware of - the difficulty of ensuring that code matches requirements. Unfortunately having made a few attempts at learning this approach to development I've found it generally cumbersome. I don't know whether this is down to the tools or to me, or a combination of the two, but I generally find that when I develop a meaningful Domain Specific Language for a problem domain the need for specifications becomes minimal anyway: the code becomes self-documenting and the volume of defects decreases significantly.
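
For what it's worth, the sort of thing I mean might look like this sketch of a small internal DSL (every name here is invented): once a little plumbing is in place, the domain statement itself reads much like the original requirement and serves as its own documentation.

    # Hypothetical plumbing for a tiny internal DSL.
    class Recipe
      attr_reader :name, :steps

      def initialize(name, &block)
        @name, @steps = name, []
        instance_eval(&block)
      end

      def step(description)
        @steps << description
      end
    end

    def recipe(name, &block)
      Recipe.new(name, &block)
    end

    # The domain statement reads almost as plainly as the requirement it encodes.
    deployment = recipe "deploy the web application" do
      step "run the test suite"
      step "package the release"
      step "restart the application servers"
    end

    puts deployment.steps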

My biggest concern about BDD though is that it's naturally biased towards functional requirements. Whilst there's no escaping the need to fulfil all functional requirements when developing a software system, at the same time there are more qualitative requirements which also need to be addressed if the system is to be fit for purpose. With regard to web applications the classic non-functional requirement which tends to be ignored during development is scalability.

Perhaps BDD will evolve mechanisms for dealing with this aspect of fitness in which case I'll be keen to reappraise it as there's definitely something useful in the underlying concept.

Another common Agile practice is pair programming. Pairing has its origins in the eXtreme Programming methodology and is an attempt to realise in real time the benefits of code review whilst also transferring knowledge between the paired programmers. The idea is that at any one time one of them will be operating the keyboard and inputting code whilst the other will be reviewing the code and commenting on it: together they are supposed to operate at a higher overall level of ability than they would individually, resulting in code which is simpler and thus less error prone. Studies have shown a statistical effect when inexperienced programmers work as pairs, but the jury is still out on whether it makes any practical difference once a high level of individual ability is attained, whilst even highly skilled developers benefit from off-line code reviews - even if they're reviewing their own code.

I've tried formal pairing once, and that was enough to convince me that it's not a technique I personally find enjoyable in a commercial setting. This isn't to say I've never paired informally - looking back twenty-five years to when I was hacking on various home micros I often shared a keyboard with friends, just as we used to clump around Operation Wolf or Gauntlet in Pleasurama. Likewise at Uni we often shared terminals for various nefarious purposes and I recall this being a lot of fun at the time. I've even been known to spend half-an-hour with spikyblackcat hammering out the quirks in some bit of code or other and being able to solve problems collaboratively is a useful technique to have in your toolbox.

However I think that for pairing to work well as the default coding strategy there needs to be a basic natural chemistry between the programmers involved as well as a shared approach to problem solving - or at the very least a willingness to let someone else solve problems their own way and share the insights. If that chemistry or open-mindedness is lacking the process is likely to become deadlocked, and looking at the teams I know of who adopt the practice successfully I'd say that pair programming shops are something of an all or nothing affair.

Putting all your eggs in one basket isn't necessarily a bad thing, but it had better be a good basket handled with care. There are definitely risk management considerations in any monoculture, and as the primary source of risk in software development is customer requirements perhaps we should take a look at how the various Agile methods deal with this.

User stories have become a very common method of describing requirements in Agile circles. These are a high-level view of a requirement designed to allow the development team to estimate the effort involved. In a BDD development team these user stories will also show a strong equivalence to the executable specification although the latter may well involve more than one specification for an individual user story.

Prior to the Agile revolution most commercial development was driven by use cases which describe requirements based upon a request-response cycle between a user of the system and the system itself. Use cases are generally more complex than user stories due to the lower level of abstraction they represent and they can often contain implementation details, constraining the way developers solve the broader problem.
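
To make the contrast concrete, here's the same invented requirement expressed both ways (the wording is mine, not drawn from any particular methodology text):

    User story:
      As a registered user I want to reset my forgotten password
      so that I can regain access to my account.

    Use case: Reset forgotten password
      Actor: registered user
      Main flow:
        1. The user requests a password reset from the login page.
        2. The system emails the user a time-limited reset link.
        3. The user follows the link and supplies a new password.
        4. The system stores the new password hash and confirms the change.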

As the main difference between these two approaches to specifying requirements is their level of abstraction (user stories are higher level abstractions than use cases) I suspect they're fairly interchangeable for many projects, even projects with unstable requirements. However user stories do seem to offer greater flexibility.

Of course a further wrinkle to user stories is that they should be implementable within a single iteration of any iterative technique. There are a lot of different ways of handling this but the one I currently find most intuitive - which isn't any recommendation to use it, just an acceptance that it's not a million miles removed from my own informal habits - is Scrum. Now Scrum is more a management technique than a development methodology in that it doesn't specify how a team of developers should implement a project, instead focusing on how requirements are catalogued as user stories and which user stories should be implemented during each iterative cycle (known in Scrum speak as a sprint).

From what I can glean there's no Scrum orthodoxy regarding testing or pair programming, just two lists of user stories and a team of developers insulated from outside interference by the Scrum Master (effectively the project manager). One list is the product backlog, which approximates a requirements catalogue, and the other is the sprint backlog, which lists the current set of 'live' stories. This is pretty much how most good developers work when left to themselves - they keep lists of requirements and bugs which they tick off as they're resolved.
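
Sketched as plain data (a hypothetical arrangement rather than anything the Scrum literature prescribes), the two lists are about as simple as they sound:

    # Hypothetical representation of the two Scrum lists.
    Story = Struct.new(:title, :points, :done)

    product_backlog = [
      Story.new("visitor can register an account", 3, false),
      Story.new("user can reset a forgotten password", 2, false),
      Story.new("admin can suspend an account", 5, false)
    ]

    # At sprint planning the team pulls the stories it believes it can finish.
    sprint_backlog = product_backlog.take(2)

    # During the sprint, stories are ticked off as they're resolved.
    sprint_backlog.first.done = true

    remaining = sprint_backlog.reject(&:done)
    puts "#{remaining.length} story remaining in this sprint"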

There is one bit of voodoo with Scrum that's popular in a number of Agile methodologies: the burn chart. This provides a graphic rundown of the number of stories implemented at any given point in time and ideally shows them accumulating at a fairly linear rate over a project's duration. Like a lot of Agile practices the idea is to give clients greater confidence in the development team than is often the case. As a developer I'm suspicious of this sort of marketing aspect of Scrum, as there are some requirements that even when broken down into their simplest components will still require a much greater investment of time than others, but the information is closely related to the content of the sprint backlog so it doesn't really involve any additional overhead.
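
A throwaway sketch with invented numbers shows the general shape of such a chart - the cumulative count of stories completed at the end of each sprint, which clients like to see rising in a roughly straight line:

    # Cumulative stories completed at the end of each sprint (invented figures).
    completed = [3, 7, 11, 16, 20, 24]

    # A crude text rendering of the burn chart: one row per sprint.
    completed.each_with_index do |count, sprint|
      puts format("sprint %d %s %2d", sprint + 1, "*" * count, count)
    end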

I have of course skipped over a number of salient details of this particular methodology as I'm more interested in demonstrating the areas of equivalence to the methods of the 'top slice' than I am in the nitty gritty of running Scrum on a daily basis. The point is that by making the actual implementation strategy a black box with just one stack input and a memory store, Scrum manages to be Agile without demanding any particular coding practices. So whilst its method isn't explicitly that adopted by the top-flight hackers we all aspire to being, it at least leaves a space where those methods could flourish.

Unfortunately Scrum is normally seen as an approach most suitable for development teams of between five and nine developers, which breaks the Paul Graham rule that any system can be developed by three clever hackers in a basement. But I guess that's a discussion for another day.

today I am mostly: contemplative
