The Geek Factor: Why They Aren’t Buying Your Agile And How To Make Them Love It

Gave this presentation at this year’s Agile Vancouver conference. Thank you to everybody who attended – the discussions afterwards provided much food for thought and many ideas. The slides can be found at .

If Agile works, why isn’t everyone doing it? Or, since Agile has become fashionable of late, why all the lip service without the expected amount of real change? This talk argues that it comes down to trust, and presents tools and examples for building and keeping it. The focus is on how to plan projects and design applications in a way which, wherever possible, avoids putting stakeholders into situations that require trust in the first place.

In a nutshell:

  • Own your stack. If possible, use one co-located team per bounded context and architect systems in a way which allows developers to easily write end-to-end tests and do releases.
  • Make sure you and your stakeholders have a shared definition of what success looks like. Are you measuring the same things?  For example:  “Diligence in planning” vs. “Adaptability when responding to new knowledge”:  In light of the former, finding something unplanned-for is failure. For the latter, it’s learning and success is measured by how well the new knowledge is assimilated.

November 7, 2011 at 7:24 am

Making commitments: Meeting deadlines vs. maximizing throughput

For those of us who build software, being able to make commitments to business stakeholders hinges on our ability to deliver features fast enough for the business to react quickly to new opportunities and changed market conditions.

And yet – in more than two decades in the industry, practically every single project I have ever been involved in was optimized for adherence to schedule (using waterfall, Scrum, “Agile”, whatever – it doesn’t matter) rather than for maximizing throughput.

Where is the disconnect? What am I smoking?

August 21, 2011 at 7:12 am

LiskovCheck: Semi-Automatic Liskov Substitution Principle Adherence Check

LiskovCheck.exe is a command line utility which scans .NET assemblies for inheritance relationships and lists them in English sentences such as (from nunit.framework.dll):

  • “It looks like a LessThanConstraint and behaves like a ComparisonConstraint.”
  • “It looks like an AssertionHelper and behaves like a ConstraintFactory.”

The first sentence sounds about right, but the second one raises eyebrows: One could swap things and say “It looks like a ComparisonConstraint and behaves like a LessThanConstraint.”  However, “It looks like a ConstraintFactory and behaves like an AssertionHelper” makes less sense. (Disclaimer: I haven’t looked at the nunit source – no idea what this is.)
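LiskovCheck itself scans .NET assemblies via reflection; as a rough illustration of the idea, here is a hypothetical Python analogue (the function names, the toy classes and the article handling are mine, not the tool’s):

```python
import inspect
import sys

# Toy classes mirroring the Zoo.dll example from the scenarios below.
class Animal: ...
class Duck(Animal): ...                    # plausible subtyping
class TransistorRadio: ...
class MerganserDuck(TransistorRadio): ...  # suspicious subtyping

def article(name):
    """Pick 'a' or 'an' for a class name."""
    return "an" if name[:1].upper() in "AEIOU" else "a"

def liskov_sentences(module):
    """List every direct inheritance relationship in a module
    as a LiskovCheck-style English sentence."""
    out = []
    for _, cls in inspect.getmembers(module, inspect.isclass):
        for base in cls.__bases__:
            if base is object:
                continue
            out.append(f"It looks like {article(cls.__name__)} {cls.__name__} "
                       f"and behaves like {article(base.__name__)} {base.__name__}.")
    return out

for sentence in liskov_sentences(sys.modules[__name__]):
    print(sentence)
```

The point is that the listing is only semi-automatic: a human still has to read each sentence and decide whether it sounds right or raises eyebrows.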

From LiskovCheck’s SpecFlow acceptance tests:

Feature: Semi-automatic check for adherence to the Liskov Substitution Principle.

Definition of the Liskov Substitution Principle:

“Liskov substitution principle (LSP) is a particular definition of a
subtyping relation, called (strong) behavioral subtyping.”

…also known as:

“If it looks like a duck, quacks like a duck, but needs batteries – you probably have the wrong abstraction.”
(from )

Scenario: A subtype is more likely to adhere to the Liskov Substitution Principle.

Given a DLL named Zoo.dll with a “Duck” class which inherits from “Animal”
When I run “liskovcheck Zoo.dll”
Then the words “It looks like a Duck and behaves like an Animal” should be on the screen.

Scenario: A subtype is less likely to adhere to the Liskov Substitution Principle

Given a DLL named Zoo.dll with a “MerganserDuck” class which inherits from “TransistorRadio”
When I run liskovcheck.exe with the argument “Zoo.dll”
Then the words “It looks like a MerganserDuck and behaves like a TransistorRadio” should be on the screen.

The project is on GitHub at

To try it out:

  • On click “Downloads” and download
  • Unzip the file.
  • Open a command window in the LiskovCheck-0.0.1 folder and run (e.g.) ‘liskovcheck Ninject.dll’.
  • You may need to “Unblock” a downloaded .dll file  first, by right clicking it, going to “Properties” and selecting “Unblock”.

I wrote LiskovCheck as an excuse to play around with Ninject, SpecFlow and OpenWrap.

For more information, see



February 13, 2011 at 9:50 pm

Zombie Agile: Bad Things Done Right

I currently have the pleasure of working in one of those “waterfall-ish” shops where things actually work: Profitable projects get delivered in fairly predictable timeframes by sane people. They have a fairly “traditional” setup with business analysts who write old-style requirements documents (… few if any new-fangled “user stories” …). These are handed to development for implementation and then manually QA’d by the QA team. A few lone rangers notwithstanding, practically nobody seems to write any tests.

How can this be … ? Aren’t we supposed to TDD/BDD/scrummerize everything, practicing kanbanilism for good measure? Blasphemous whispers in the Church Of Agile; Petty doubts: Have I joined a fringe cult …. ?

So what’s the difference? How do they get away with it? The situations where I’ve encountered “traditional” software development life cycles that actually worked all shared the same combination of traits:

A healthy “that was then, this is now” attitude to requirements.  A recent project is a good example: The “traditional” business analyst–produced requirements remained a living document which was updated throughout the delivery of the project. It could thus be relied upon to closely match what was implemented in the code. As much as anything, the habit of keeping it up to date led to many more conversations between BAs and developers than would otherwise have taken place, greatly helping everybody to understand the problem domain.  Very importantly, the document also served as vital input for doing the second part of the trick:

Relentless QA. Unsurprisingly, given the absence of automated test coverage, there were many bugs. QA used the “living” requirements document as their guide for writing manual test plans. They were very well-organized in executing them and very proactive in sharing the results with the developers. The resulting back-and-forth further helped to grow a shared understanding of the system and its requirements.

Smooth Deployments. Calling it “continuous integration” would be a stretch, but they had their builds and deployments organized in a way which allowed for one or two QA releases per week.

Awareness. They were aware of test driven development, the potential advantages of Agile and many of the techniques. However, they had a large legacy code base and many organizational structures and developer habits which did not lend themselves to “Agile” without major changes and a lot of experimentation.  It was a matter of “Koennen vor Lachen” (… German for “Easier said than done”.  It doesn’t translate well.).  Yes, sounds great, but how do we start … ?

Hard to say. Trying to do “Agile” without test-driven development is an extremely hazardous idea.  At the same time doing TDD in legacy situations is a tall order. For a team of developers to start consistently writing tests in brownfield projects takes a lot of new ways of thinking and technical savvy which I feel absolutely has to go hand in hand with appropriate ways of managing the process: Without knowledgeable support and commitment by non-technical stakeholders the effort will die. (Very common scenario/attitude: “We don’t have time to write tests.”)
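For what it’s worth, one low-risk first step in a brownfield codebase – and a gentler on-ramp than full TDD – is characterization tests: tests that pin down what the code currently does, right or wrong, before anyone changes it. A minimal hypothetical sketch in Python (the `legacy_price` function is an invented stand-in, not real project code):

```python
import unittest

def legacy_price(quantity, unit_price):
    """Stand-in for tangled legacy code nobody dares to touch."""
    total = quantity * unit_price
    if quantity > 10:
        total *= 0.95  # undocumented bulk discount, discovered while testing
    return round(total, 2)

class CharacterizationTests(unittest.TestCase):
    """These tests assert what the code DOES today, not what it SHOULD do.
    They form the safety net that makes refactoring possible."""

    def test_small_order(self):
        self.assertEqual(legacy_price(2, 9.99), 19.98)

    def test_bulk_order_gets_surprise_discount(self):
        # Found by running the code, not by reading a spec.
        self.assertEqual(legacy_price(20, 9.99), 189.81)
```

Run with `python -m unittest`. Crucially, this needs no buy-in to any methodology yet – it just makes the next change safer, which is an argument even deadline-driven stakeholders tend to accept.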

The practices of “waterfall done right” could serve as initial steps when iterating towards “real agile”. Compared to a seasoned SCRUM / Lean team at their best,  “Zombie Agile” with living requirements, relentless QA and smoothed-out deployments is expensive, inefficient and somewhat dead. Nonetheless, it seems to be delivering a surprising number of good results.

June 19, 2010 at 10:51 am

Slides from “Refactoring Towards Sanity”

See below:

PrairieDevCon 2010 – Refactoring Towards Sanity

The free CodeRush VS add-in, which seems to do some of what Resharper does, is at:

I haven’t tried it yet, but I look forward to giving it a whirl.

Thanks everybody.

June 2, 2010 at 1:59 pm

PrairieDevCon: Sample code for “Refactoring Towards Sanity”

The sample code for the “Refactoring Towards Sanity” can be found here:

IMPORTANT (and awkward): It turns out that WordPress doesn’t allow uploading .zip attachments, meaning that you’ll have to jump through the extra hoop of right-clicking the link, selecting “Save Link As…”, and renaming the file to

See you at the session.

June 2, 2010 at 5:33 am

PrairieDevCon 2010 Presentations

I’ll be in Regina for PrairieDevCon in June. It sounds like a lot of fun.

Started preparing for my “chalk talk” on refactoring project management and for the “dojo” on refactoring (… arguably easier to handle …) actual code. I’m looking forward to these sessions; both have great potential for lively exchanges of in-the-trenches experiences. I have much to learn.

See below for the abstracts:

Moving Towards Lean In A Waterfall World

Moving from traditional waterfall-centric push project management to lean pull-based approaches takes more than putting a kanban board on a wall. Using a “real world” example, the audience is invited to help with identifying waste, maximizing flow and (perhaps most importantly of all) exploring ways of getting buy-in from management and other stakeholders to whom this way of doing things is completely alien. Just for fun, we’ll try to take nice, easy to understand techie concepts and apply them to running projects: Refactoring (… gradually change old ways of doing things, in small easy to digest steps), anti-corruption layers (… how to work with other parts of the organization who are in no position to try all this new-fangled stuff), etc.

Refactoring Towards Sanity

Working with legacy code is a fact of life. Re-engineering it all is costly and frequently impractical. This session takes participants through a minimalist example of how to carve “areas of sanity” from tightly coupled Big Balls Of Mud: Instead of re-writing the complete application, focus on the core domain. Gradually isolate key functionality from dependencies and side effects which affect the rest of the system, thus creating “safe” areas where code is under test, SOLID, etc.

(Tools: Visual Studio & Resharper)
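To give a flavour of what “carving out an area of sanity” can look like, here is a hypothetical, much-simplified sketch (in Python rather than the session’s C#; all names are invented): pull the pure domain logic out of a function that mixes it with I/O, and inject the side effects from outside.

```python
# Before (sketched as comments): core pricing logic tangled with I/O.
#   def checkout(order_id):
#       order = database.load(order_id)               # side effect
#       total = sum(i.price * i.qty for i in order.items)
#       emailer.send_receipt(order.customer, total)   # side effect

from dataclasses import dataclass

@dataclass
class LineItem:
    price: float
    qty: int

def order_total(items):
    """Pure core-domain function: no dependencies, trivially testable."""
    return sum(item.price * item.qty for item in items)

def checkout(order_id, load_order, send_receipt):
    """Thin shell: the side effects are injected, so the core stays 'sane'."""
    order = load_order(order_id)
    total = order_total(order.items)
    send_receipt(order, total)
    return total
```

The pure `order_total` can now be put under test exhaustively, while `checkout` remains a thin shell whose collaborators can be faked – one small “safe” area carved out of the Big Ball Of Mud without rewriting it.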

April 13, 2010 at 6:15 pm
