In an ideal environment you're either already using agile dev practices, or you have enough support across the project team that trying them is considered a good thing. This post is about when we don't have that kind of environment.

There are occasions where "agile" has either gained a bad reputation, or triggers various fears, or maybe just elicits a complete lack of interest. When that happens, for whatever reason, attempts to be openly agile on subsequent projects can cause problems.

I'm going to talk about the situation for one of the projects I found myself on as the sole developer with one business owner (client) and one manager-of-all-things.

For the moment, I'm not interested in the whys or hows of how we got into that situation. History is important, but at the time I had code to deliver to a friendly client as fast as possible, for the best/biggest impact.

I had two big advantages to work with:

  1. a friendly and available (internal) client to chat with
  2. an overly busy manager who didn't have time to do paperwork

From that starting point, these are the main tactics I used to start introducing agile development practices to the project, without triggering nervousness or complaints from any technical or business managers.

If people have negative experiences or worries about "agile" for whatever reason, they will have picked up some of the jargon - that may even be the source of some of their negativity.

So don't use the jargon. Instead I used more innocuous and general wording to express the same kinds of things. Rather than Backlogs, I talked about TO-DO Lists. Rather than planning the next sprint, I was doing the prep for the next chunk of work. I didn't pair-program; instead I worked with X on a problem, and so on.

I think this is the most important point: Choose language to make people comfortable - preferably language they would choose to use themselves.

Once they're comfortable that the new process is working, you can start to point out what names various methodologies use and how their practices work.

You know you're going to be doing regular iterations; at this stage others may not. So pick a duration that suits you and decide the overall shape of those iterations. This needs to be done with an eye to how the others can be persuaded to interact with you on the project.

If you can only get the client to agree to "chatting for an hour every fortnight", that's your timebox. Put those meetings in your calendars up front; each one is going to cover planning, review, sign-offs and everything else - so get ready to do some serious pruning and condensing. Also make sure you have something new to show at every one of those meetings.

In my case the client was happy for us to call each other whenever we had anything to say, but he was pretty busy, so we roughly settled on a call about once a week, with me sending an email a day before to set an agenda (most of which he could tick off before the call).

The manager would initially call/email/IM frequently each day, but this did slacken off as the project progressed. I like to think they got more relaxed about the process we were using. They did at least mention iterative development and close conversation with the business as good practices for future projects.

For people used to PRINCE2 or any of the other large-ceremony, waterfall-type processes, there's no surprise in the idea of spending a large chunk of time early in the project on planning and organisation. So we can take advantage of that with a "zeroth iteration".

This can be explained as a 2 or 3 week chunk of work (whatever your iteration length is), to sort out development environments and tools.

This iteration isn't intended to produce anything visible or deliverable to the client. After all they aren't expecting anything yet, and the spec documents are probably still being discussed. Instead, its deliverable is for you.

The deliverable is a toolchain - which includes a suitably configured project, in source control, with a build process (preferably automated) that builds an empty app, with a test harness (that doesn't test anything yet), with a dev server that the app will deploy to.

At the end of this iteration, you should be able to demo to yourself (and any other devs) that you can pull a (nearly empty) pile of code from source control, build it with a version number, run tests on it, deploy & configure it on a server and mark it as ready to release. As a side effect, you should also have a list of the people you need for a release and some kind of agreed release process arranged with them.

It doesn't have to be perfect, but it needs to work and it needs to be as painless as possible, because from now on you're going to be using this toolchain and these processes to deliver stuff regularly.

It is important that this iteration actually produces a deliverable, working system. Ideally into a live-ish looking environment, with pseudo-approval.

The fact that it's not going to be used is no excuse for getting 90% of the way and hand-waving that the rest is trivial. That "trivial" stuff is where the delays and frustration build up. That's why we want to see and fix all of that early on, before things get frantic.
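The zeroth-iteration deliverable above can be captured as a single script that exercises the whole chain, end to end, and fails loudly at any gap. This is only a minimal sketch: the step names and the `git`/`make` commands are placeholder assumptions, to be swapped for whatever your project actually uses.

```python
"""Sketch of a zeroth-iteration pipeline check.

Each step's command is a placeholder assumption -- replace them
with your project's real build, test and deploy tooling.
"""
import subprocess

# Hypothetical commands; substitute your own.
STEPS = [
    ("pull",   ["git", "pull", "--ff-only"]),
    ("build",  ["make", "build", "VERSION=0.0.1"]),
    ("test",   ["make", "test"]),
    ("deploy", ["make", "deploy", "TARGET=dev-server"]),
]

def run_pipeline(steps, runner=subprocess.run):
    """Run each step in order, stop at the first failure.

    Returns the names of the steps that completed, so a partial
    result tells you exactly where the toolchain still has a gap.
    """
    done = []
    for name, cmd in steps:
        result = runner(cmd)
        if result.returncode != 0:
            print(f"FAILED at step '{name}' - fix this before iteration 1")
            break
        done.append(name)
    return done

# Usage (for real): run_pipeline(STEPS) and fix whatever breaks first.
```

The `runner` parameter is injected purely so the sketch can be dry-run without a real repo or build system; the point is that one command proves the whole chain works, which is exactly the "no hand-waving the last 10%" rule above.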

For my project I was lucky enough to have an older prototype app which I could copy, modernise and polish slightly for the first "release". That first (sorry, zeroth) iteration appeared on my dev box, a dev server and an integration test server.

After discussions with the business and test teams signing off the releases, we agreed that future iterations would only go as far as the dev server (tagged and prepped as if for release), until their teams were ready for a formal release process. But that first shot meant we knew what the deployment/release process would be and could write it up for future use.

This is the core of agile/iterative development - you're going to be developing manageable chunks of functionality and making them ready to use at regular intervals. Then picking the next chunk of stuff from the remaining pile of things that need doing.

That doesn't necessarily mean anyone outside the project team needs to know it's happening. Initially it may also be ok for most people inside the team to ignore it too (and they probably will). But still, set out that commitment, because the intent is for everyone to be able to rely on it. That cycle needs to be solid and reliable, like a nicely ticking clock.

What you're promising here is that every N weeks/days, there will be a new working version of the deliverable system available, somewhere that anyone can take a look if they care to. Ideally on a shared server. Along with that new version, your promise includes a list of the new stuff it implements.

We're trying to be stealthy here, especially with people's feelings. That means:

  • Don't try to enforce anyone else caring about the new version.
  • Don't expect them to give feedback, or even (initially) to take a look.
  • It's your commitment, not theirs. It's there to help you. Don't try to make them commit to anything in return.

For my project, I committed to the cycle by saying that I'd send an email to everyone every other Friday, announcing a new version on the dev server with an invite to try it and give feedback. After a few weeks of that email, referencing it and the links in it, others involved in the project started referencing it too. Once I started seeing screenshots being shared as part of discussing bugs/features, I knew they were actually looking at it and taking an interest in the difference between the versions.

I cover the details of that email in Ending an agile iteration.

When it comes to the agile manifesto stuff about valuing "Individuals and interactions over processes and tools", I find it helps enormously to bring things into a human scale. I also find it helps at the design stage to anthropomorphise the modules/components being discussed. This helps put some relatable context around the scenarios.

For documentation: In projects following quite a formal process at the management levels, the documentation is often written deliberately to be impersonal, third person, referencing things in abstract and generic terms. There may be many reasons for this, but when it eventually boils down to people discussing what they want the system to do, talking about the people themselves (or talking about the other systems as people) makes things feel more concrete.

So, even if the formal docs talk about "the customer", when trying to nail down the edge case of some complex feature, pick a name and describe them doing the actions you're working out. For example:

Steve Aardvark used his work email address (s.aard@apple.com) to sign up with us. But he moved to Intel so his email address changed to steve1234@intel.com. His brother Simon replaced him at Apple, and they gave him the s.aard@apple.com email address. When Simon signs up with us, we follow this process: ...

You may also find it works for you to assign some kind of persistent personality to your fictional people. Maybe Steve is always the one doing complicated stuff, Gemma is always the one with the tablet/phone requirements, Anton is the hacker, and so on.

For analysis: This humanising technique is commonly used for writing user stories (or use cases, or sometimes requirements), user guides and anything else which needs to make new concepts easily accessible to the intended audience. User stories can be initiated by questions like:

I'm Harriet, never used anything like this before. I'm sat in front of the computer my granddaughter bought for me, and I've clicked the blue 'e' like she said. What do I do next?

I'm Dave, sales coordinator, how do I get that report emailed to me every day before 8am, as a PDF, with my name on the front page, to deliver to my boss?

Getting user stories like this into the formal project spec documents might not fly. Don't push for it, just add them to whatever system you're using to track the technical issues, referencing back to the formal feature request they deal with.

For planning: I find the humanising approach works really well for project planning too. One of my common phrases when feature priorities were being discussed was:

I'm going to come in on Monday, pick a task from this pile and implement it. Which one would you like me to pick? If you don't care, I'll pick one myself. (Repeat until you have enough to keep you going until the next meeting.)

For system design: Extending this kind of talk to the components of a system may be taking things a bit too far for some people, but I find it helps, even if it's just making a scenario a bit whimsical:

Poor old Database Connection Manager! You've made an army of a hundred workers each needing 10 connections. So they made Mr DCM open a thousand connections when he's used to seeing about 20. Then they got bored and went away again because he took too long to send the first response to the first one. Then they came back again a minute later to start all over again. No wonder he's taken to staring in horror at the horde of zombies each demanding a new unique ID then dying before he can get them a ticket!

Again, it's unlikely you'll get happy faces if bug descriptions are worded like that in your formal issue tracking system, but in conversations (and especially in design meetings) it can help make things recognisable and start to make solutions/decisions easier to discuss (Adding Mr DCM2 won't help, they're using the same ticket machine. Second ticket machine? Better make sure the ticket rolls don't overlap. Put a bouncer on the door? etc)
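Translating the whimsy back into code, the usual fix for Mr DCM's predicament is a bounded pool: workers borrow one of a fixed number of connections instead of each opening their own. This is a minimal sketch under assumed names (`BoundedConnectionPool`, the injected `connect` factory); it is not taken from the project described here.

```python
import queue
import threading

class BoundedConnectionPool:
    """Hand out at most `size` connections, however many workers ask.

    `connect` is whatever actually opens a connection; it's injected
    here so the sketch stays database-free.
    """
    def __init__(self, size, connect):
        self._pool = queue.Queue()
        for _ in range(size):
            self._pool.put(connect())

    def acquire(self):
        return self._pool.get()      # blocks until a connection is free

    def release(self, conn):
        self._pool.put(conn)

# Tiny demo: 100 workers share a pool of 20, so Mr DCM only ever
# sees 20 open connections instead of a thousand.
opened = []
pool = BoundedConnectionPool(20, connect=lambda: opened.append(1) or object())

in_use = 0
peak = 0
lock = threading.Lock()

def worker():
    global in_use, peak
    conn = pool.acquire()
    with lock:
        in_use += 1
        peak = max(peak, in_use)
    # ... do some work with conn ...
    with lock:
        in_use -= 1
    pool.release(conn)

threads = [threading.Thread(target=worker) for _ in range(100)]
for t in threads:
    t.start()
for t in threads:
    t.join()
```

The "second ticket machine" question from the conversation maps onto the same sketch: two pools only help if they don't hand out overlapping IDs, which is why the bounded single pool is usually the first thing to try.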

Hopefully after the project has been running a while, the iteration cycles will be ticking along nicely, the team will be confident to poke at the latest release and discuss where to steer from here and you'll have a fairly good shared idea of when and how to do the next formal release.

Once the dust has cleared from a release, it's time to consider ending the stealth approach. This isn't the time for a Scooby-Doo reveal; you're aiming more for that trick with the picture of a vase which is also two faces, depending on how you look at it.

If things have gone well - in the eyes of the sceptics - this shouldn't be too much of a problem. Start showing how the "agile jargon" matches the things you've been doing, maybe discuss some of the differences between various forms of agile practices, with a view to agreeing to adapt or adopt a few more in the future. If that works - hey, you've just done a retrospective!

If things haven't gone well, then maybe try a few different tactics for the next few iterations. Look at where the pain started and think about what could be done differently at those points. Basically, keep being agile: if it's working, do a bit more of it; if it isn't, do less of it and try something else.

For our project, it became clear quite early on that we were following a few "lightweight agile" practices, so we just started building on that as everyone got comfortable. After a fair number of releases, it was declared "good enough for now" and we all moved on to other projects, with recommendations to use mostly the same approaches on those projects too. And that's about as good as reviews get for a software project!

© Me. Best viewed with a sense of humour and a beer in hand.