feyeleanor
Agile and the Death of Education
This is rough: I'll try and remember to edit it later for readability :)

Much of the political chatter this week is about the demise of maths and science teaching in Britain's schools, a subject on which those of us who a) lived through the old O'Level system and b) have seen our children through the modern GCSE system often have strong opinions.

This year is the twentieth anniversary of my transition from school to polytechnic, made almost exclusively on the strength of a General Studies A'Level, and of my first steps down the path of über-hackerdom that's made me the easy-to-get-along-with person I am today. It was a tough transition even back then, and critics claimed that standards had already slipped badly since the 1960s. In fact they were so obsessed with how much standards had slipped that the education system was already developing the standardised testing methodology which has in recent years led to grade inflation in schools and dissatisfaction amongst admissions officers in universities.

Now I'm going to plunge into some murky waters with what follows, and some of you may well ask whether it's appropriate to put both education and software development under the same microscope. However, I believe there is much more similarity between the two processes than is often recognised. Both focus on discovery and learning, both focus on the development of vocabulary, and at their best both encourage independent thought and the creation of robust models of reality. For many in our society education is falling far short of these goals, and the same is true of the methodologies peddled in the software world.

Core to these deficits is the obsession with process over outcome. In our schools there is a tendency to discourage the intellectual rigour of the hard sciences and mathematics in favour of the relativist methods of critical thinking. Likewise, in software development the emphasis is on juggling ever-changing business requirements rather than solving the underlying problems which give rise to those changes. In both cases we've abandoned the empirical, experimental technique which is the basis of our science in favour of superficiality. And finding that what we then produce is not what we claim to aspire to, we've turned to standardised testing in the hope that we can enforce supposed standards of excellence which, when studied in depth, have very little relation to the real-world outcomes we desire.

Testing is a strange beast in that it's intimately coupled with that which is tested. This is a well-known characteristic in the world of physics, where anyone educated to A'Level standard has to deal with Heisenberg's Uncertainty relationship, a fundamental property of quantum systems: the act of observing quantum entities causes their properties to change in a bounded but non-deterministic manner. Application of this knowledge leads to the Quantum Zeno Effect and the Inverse Quantum Zeno Effect, in which a series of observations allows a system either to be held in a given state against statistical odds, or else to be guided towards a different state (Quantum Evolution). A simple example of the latter is a classic optics experiment in which a series of polarised lenses are arranged in sequence, each with a slightly different angle of polarisation, and a beam of polarised light is passed through them. The more lenses, the more transparent the apparatus becomes to the polarised light.
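For the numerically inclined, here's a minimal sketch of that optics experiment (my own illustration rather than part of the original argument), using Malus's law to estimate how much of the beam survives n polarisers spread evenly across a 90° rotation:

```python
from math import cos, pi

def transmission(n):
    """Fraction of the beam surviving n polarisers, each rotated by
    (90/n) degrees relative to the previous one (Malus's law per step)."""
    step = (pi / 2) / n           # angle between successive polarisers
    return cos(step) ** (2 * n)   # cos^2 attenuation applied n times

for n in (1, 2, 5, 10, 100, 1000):
    print(f"{n:5d} polarisers -> {transmission(n):.3f} of the beam transmitted")
```

With a single polariser at 90° nothing gets through, but as the same rotation is split across more and more intermediate polarisers the transmitted fraction climbs towards 1: the sense in which repeated observation guides the system into a new state.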

The same principles can be applied to quantised information spaces, especially when they're based on statistical distributions (such as the Gaussian bell curves so beloved of exam boards) which disguise our lack of understanding of the entities we're testing. The more tests performed, the easier it is to create an Inverse Zeno Effect that will ensure the aggregate results tend towards one or other end of the curve. This is a basic principle behind the Quantum Mechanics of Information, or as it's known in the political world, Tractor Production Metrics. The results bear little relation to the real world because all quantum systems are in a sense virtual - full of potential but only actualised by observation.

So, getting back to testing. Tests are supposed to verify that a system performs the way it is intended to, which presupposes that the system has already been validated as conforming to the desired intent. Tests without validation are meaningless. This is true in education, and it's equally true in the world of software. The best validation for a system is that it produces outputs which require minimal manipulation to fit the inputs of the other systems with which it is coupled. For education we could measure this in terms of how much additional work is required to get an A'Level graduate up to speed for a degree course in the same subject, and then form a distribution showing how far the core skills also apply to related and unrelated academic subjects.

For software components we have very similar standards which can be applied, based upon Cohesion (how focused a component is on implementing its particular task) and Coupling (how tightly two components are connected). Good software in general consists of highly cohesive components which are very loosely coupled. This is very old news to anyone who's studied computer science.
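To make the jargon concrete, here's a minimal sketch (the class names are invented for illustration): each component does one focused job, and the two meet only through a simple value rather than through each other's internals.

```python
class Gradebook:
    """Cohesive: concerned only with recording and averaging marks."""
    def __init__(self):
        self._marks = []

    def record(self, mark):
        self._marks.append(mark)

    def average(self):
        return sum(self._marks) / len(self._marks) if self._marks else 0.0


class ReportWriter:
    """Cohesive: concerned only with formatting a number for output.
    Loosely coupled: it needs a plain number, not a Gradebook."""
    def write(self, average):
        return f"Average mark: {average:.1f}"


# The components are joined only at a simple, stable interface (a float),
# so either can be replaced without touching the other.
book = Gradebook()
for mark in (62, 71, 58):
    book.record(mark)
print(ReportWriter().write(book.average()))
```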

Unfortunately, in both software development and education those paying the bills apply very different criteria for success: arbitrary budgetary constraints and politically motivated targets take what should be a free market in implementation (one which would use an Inverse Zeno Effect to create ideal software components or highly capable students) and instead skew it in favour of resource budgeting. On paper this sounds laudable: let's keep costs down as much as possible. But in practice it channels the evolution of the system away from the desired results and makes development a hit-and-miss affair.

In both modern Agile Development (i.e. the commercial 'brand' rather than the underlying principles) and in the education system, an obsession with cost has made standardised testing ubiquitous, and with it guaranteed that instead of an evolutionary Zeno effect giving us quality we get measurability, which is then hailed as quality. That's the principle of Continuous Integration.

The myth of Continuous Integration is a noble one. By testing every change we will ensure that the system under development (whether the mind of a child or the structure of an application) will never be left in a confused and error-prone state. Each change will be the best that it can be, and if a better change comes along later we can backtrack and integrate it universally. It's a lovely idea.
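As a toy sketch of that principle (not any particular CI product, just the gate it describes): apply each candidate change, run every test, and keep the change only if the whole suite still passes.

```python
def integrate(system, change, tests):
    """Apply a candidate change, run every test, and keep the result only
    if all the tests still pass; otherwise leave the system as it was."""
    candidate = change(system)
    if all(test(candidate) for test in tests):
        return candidate          # change accepted into the mainline
    return system                 # change rejected, mainline untouched

# Toy example: the 'system' is a sorted list, changes append to it, and
# the only test we have is that the list stays sorted.
tests = [lambda s: s == sorted(s)]
system = [1, 3, 5]
system = integrate(system, lambda s: s + [7], tests)   # passes, so kept
system = integrate(system, lambda s: s + [2], tests)   # fails, so rolled back
print(system)   # [1, 3, 5, 7]
```

The catch, as the next point makes clear, is that such a gate can only ever reject or accept what you already thought to try.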

But it doesn't work.

Evolution through natural selection can only prune the possibilities you already have available. To really advance your position you need to throw in a randomising element that will take you outside your existing paradigm (in information terminology, shift you to a problem space where your axioms are less likely to fall foul of Gödel's Incompleteness) and open up new possibilities.
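A toy illustration of the difference (invented for this post, with an arbitrary fitness landscape): pure selection can only pick the best of the candidates it already has, while adding a random mutation step lets the search wander outside the original cluster and find the better peak.

```python
import math
import random

random.seed(1)   # reproducible run

def fitness(x):
    # Two humps: a mediocre one near x=0 and a much better one near x=4.
    return math.exp(-x ** 2) + 2 * math.exp(-(x - 4) ** 2)

# The possibilities we already have: a population clustered around x=0.
population = [random.uniform(-1, 1) for _ in range(20)]

# Pure selection can only prune that pool down to its fittest member.
selected = max(population, key=fitness)

# Add a randomising element: mutate the current best and keep improvements.
best = selected
for _ in range(1000):
    candidate = best + random.uniform(-5, 5)   # step outside the cluster
    if fitness(candidate) > fitness(best):
        best = candidate

print(f"selection only: x = {selected:.2f}, fitness = {fitness(selected):.2f}")
print(f"with mutation : x = {best:.2f}, fitness = {fitness(best):.2f}")
```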

So Continuous Integration could work well if your methodology also included an experimental, empirical element, but used purely as a verification tool it is inadequate for the task.

And now I have a job interview to be getting off to.

today I am mostly: summer
