To set the scene: three years ago I started on a large software development project for a football club, working as the sole developer. I employed the release-early-and-often approach, for a number of reasons. When the club got to use the software early on, they could see other areas it could be used for, and so a couple of major additions were requested.
There wasn't much in the way of a spec, as the club wanted a prototyping approach so they could see how things worked before deciding how to proceed. It's not the most efficient way, but this was the first software project the club had commissioned. Hey, at least it wasn't waterfall.
The software was being used in production whilst development continued, which drove further requirements. After a year or so we had a stable release, and development stopped. The project was considered a success, and was being used around the world.
Being just me, I was the developer, business analyst, and project manager. Being mainly a developer, that was where I focussed my attention. I considered project management a lower priority, as I was the only “resource” on the project (n.b. I don't like being called a resource).
My process then was to keep track of the requirements in FogBugz. New features and bugs were either emailed or telephoned in by the one stakeholder I dealt with at the club. We'd discuss which ones were the highest priority, and I'd update FogBugz accordingly. As a single developer, I could use the free version of FogBugz, and there didn't seem much point paying for a second user so the stakeholder could have access.
Once I'd finished a few tasks, I'd deploy a release to the staging server, email the stakeholder with an overview of the changes I'd made, and ask him to check it over before it went live. Usually this was fine, but there were times when things were overlooked, and bugs or misunderstandings were uncovered only when users started using the live version in earnest. As I had different branches in git for live, staging and main development, it was quick and easy to fix and deploy a new version.
So, once finished, I moved on to other projects for a couple of years. A month or so ago, the club asked me for a new phase of development. They had been collecting some new requirements: small improvements to existing features, and some brand new things too.
I guess I've become a little wiser in the last couple of years (and, looking at the code, a much better developer too) and wanted to improve the process. The first thing we did was to meet up and talk through all the requirements in detail. This involved putting their list of requirements into a Google Drive document, and me asking lots of difficult questions: what exactly does this mean, but hang on, if we do that, then what happens with this other thing, and oh yeah, maybe we don't mean that, but actually we do, so we'll have to work round that by doing this, but perhaps there's a better way, and so on. I didn't let him off the hook on anything, so if he didn't understand what he wanted after all that, we got rid of it.
We were editing the document as we discussed each feature, which meant a really tight feedback loop. I've seen people read a spec or requirements document, ask for some changes, and a week later, get another draft, and get bored because it's mostly similar to the last one, and even with changes tracked, it's hard to read and notice the differences, and it's really dull. This way, we did it all in real time. The next step was to convert this document into tasks; into a plan.
In one of my subsequent projects, working with a team of around half a dozen developers, a tester and a product manager, in a fairly agile environment, we used Pivotal Tracker to manage the workload. We found a few problems with Pivotal Tracker, and perhaps we weren't being as agile as we wanted to be, but it was mostly workable.
Since then, I've been using Trello to keep track of what I'm doing, but really only as a to-do list for myself. This time, however, I want to make the process much more transparent to the client, and for him to be more involved. Trello being free, I invited the client to it, and gave him access to the board for the project. A board is basically a list of lists.
I then added the following columns, or lists, to the board:
- Backlog
- Current phase
- In development
- Code complete
- In staging to test
- Testing failed in staging
- Testing passed in staging
- Live
- Done
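To show how the board hangs together, here's a minimal Python sketch of the "list of lists" idea. The names and data shapes are my own illustration, not Trello's actual data model or API: cards live in named columns and migrate left to right.

```python
# Illustrative model of a Trello-style board: the board is an ordered
# set of named columns, and each column holds cards. This is my own
# sketch, not Trello's data model or API.

COLUMNS = [
    "Backlog",
    "Current phase",
    "In development",
    "Code complete",
    "In staging to test",
    "Testing failed in staging",
    "Testing passed in staging",
    "Live",
    "Done",
]

def new_board():
    """A board is basically a list of lists: column name -> cards."""
    return {name: [] for name in COLUMNS}

def add_card(board, column, title, description=""):
    """A card carries a brief title plus the pasted requirement text."""
    card = {"title": title, "description": description, "checklists": {}}
    board[column].append(card)
    return card

def move_card(board, card, to_column):
    """Migrate a card to another column (usually one step to the right)."""
    for cards in board.values():
        if card in cards:
            cards.remove(card)
            break
    board[to_column].append(card)

# Usage: a requirement enters the backlog, is picked for the current
# phase, then moves into development. (The card title is made up.)
board = new_board()
card = add_card(board, "Backlog", "Fixture list",
                description="Requirement text pasted from the spec document")
move_card(board, card, "Current phase")
move_card(board, card, "In development")
```

Nothing here is clever; the point is that the whole process is just cards changing columns, which is what makes the board readable at a glance.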
I went through the requirements/spec document, and for each discrete task, I added a card to the backlog column. I gave the card a brief title, and copied and pasted the requirement text from the Google document into the card description. Each Trello card has its own URL, so I copied and pasted that back into the spec document to allow easy cross-referencing.
The idea is for the cards to migrate from left to right across the board's columns. Initially, we agree which cards from the backlog I should work on next, and put them in the current phase column. This week, I let the client do this himself!
I pick whichever card from the current phase column takes my fancy, and move it into the third column, in development. At this point, I may add a checklist to the card for the specific development tasks, so I can tick them off when I've done them. It makes me feel happy, and the card seems less daunting. Whilst I'm developing, I may discover more things I need to add to the list.
Once I've finished development, the card goes to the right again. I'll add another checklist to the card, for the acceptance tests, which I'll describe in a bit. I'll then work on the next card in the current phase. Once they're all done, then I'm ready for a staging release. The client may not be ready, however, if he hasn't finished testing the last staging release. This is fine, as he has plenty of his own work to do. I'll just add another item or two from the backlog into the current phase and continue developing.
## In staging to test
Once we're both ready for a staging release, I rebase the staging branch in the git repository, and deploy to the server. I'll then move all the cards from the code complete column, each of which has an acceptance-test checklist, into this column.
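The card-shuffling half of a staging release is mechanical, so here's a small, self-contained Python sketch of it (my own illustrative names and card titles, not Trello's API): every code complete card goes across to in staging to test in one go.

```python
# Illustrative sketch (my own names, not Trello's API) of the staging
# release step: every card in "Code complete" moves across to
# "In staging to test" together, ready for the client to work through.

def release_to_staging(board):
    """Move all code-complete cards into the testing column."""
    moved = board["Code complete"]
    board["In staging to test"].extend(moved)
    board["Code complete"] = []
    return moved

# Usage: two finished features go out in one staging release.
# (Feature names and tests are made up for the example.)
board = {
    "Code complete": [
        {"title": "League table", "acceptance tests": ["sorts by points"]},
        {"title": "Fixture list", "acceptance tests": ["shows kick-off times"]},
    ],
    "In staging to test": [],
}
released = release_to_staging(board)
```

In Trello itself this is just dragging the cards across; the sketch only makes explicit that the release and the column move happen together.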
To avoid the problems I used to have with bugs appearing on the live site, I've made a bigger deal of the client testing on the staging site. A quick look over is no longer good enough. Each card's checklist has enough tests to cover the feature, based on the requirements, which are handily at the top of the card. The client can add extra tests too. He then works through the tests on the card and ticks them off when they pass.
## Testing failed in staging
If any one of the acceptance tests fails, the card is moved here, with a comment from the client explaining why. There have been other reasons for cards ending up here, which have been useful too. The client has noticed that a feature wasn't completely defined, or once he's actually used it, something else needs changing. Technically it isn't a bug, or perhaps even a legitimate reason to fail a card, but it raises the problem, so it's a good thing. In these cases, I've added a new card to the current phase describing the enhancements, which then follows the usual process to the right. The original failing card is then considered to have passed, and moves to the next column.
Valid test failures are then moved back to the current phase. If the testing hasn't started immediately after a staging release, this could be a problem, as I'll have already started a new phase, and have new cards in the current phase and in development columns. I saw two ways round it: a new column for failed cards in development, or distinguishing the cards in the existing columns. Using Trello's labels, I mark failed cards that are back in the current phase with an orange label, so I can clearly see the difference. I check out the staging branch in git, and work on the fixes as a priority.
When all the rework is complete, I do another staging release.
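To make the rework path concrete, here's a small, self-contained Python sketch (my own illustrative names and card titles, not Trello's API) of a valid failure: the card gets a comment and an orange label, and moves back to the current phase alongside the new phase's cards.

```python
# Illustrative sketch (my own names, not Trello's API) of how a failed
# acceptance test sends a card back to the current phase with an orange
# label, so it stands out among the new phase's cards.

def fail_card(board, card, comment):
    """Client fails a card in staging: comment, label, move back left."""
    card.setdefault("comments", []).append(comment)
    card.setdefault("labels", []).append("orange")  # marks it as rework
    board["In staging to test"].remove(card)
    board["Current phase"].append(card)

# Usage: one card fails testing while a new phase is already under way.
board = {
    "Current phase": [{"title": "New feature"}],        # new phase work
    "In staging to test": [{"title": "Fixture list"}],  # last release
}
failed = board["In staging to test"][0]
fail_card(board, failed, "Dates display in the wrong time zone")

# Rework cards are now distinguishable from new-phase cards by label.
rework = [c for c in board["Current phase"] if "orange" in c.get("labels", [])]
```

The label is doing the real work here: both rework and new-phase cards share a column, and the orange marker is what keeps the two streams legible.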
## Testing passed in staging
Once all the tests have passed, we are, in theory, ready for a live release. I make sure I ask the client if he really is ready, then I rebase the live git branch from the staging branch and deploy.
The cards are moved into the live column. In the last release with delayed testing, another phase was ready for staging, but had to wait, like a train at a signal. Once the code was deployed to live, I released the next version onto staging for testing straight away.
There is the possibility that something breaks the live site, so I keep a separate column for done. The most recently deployed cards are left in the live column, as a reminder for me in the event of a bug report from the live site. As it happens, there haven't been any yet. Before doing a new live release, I'll move everything from the previous release in the live column to the done column.
I've made an example board with a single card you can look at.
I've found the benefits of this approach are that the client has more visibility into exactly what I'm doing, and the current progress and status of the project, which helps when his boss asks how things are going.
The releases are more stable, and the client is more confident of the quality of the code.
I'm happier, and more relaxed, as some of the responsibility is now with the client. We all know that developers are the worst people to test their own code. In the absence of a trained software tester, a happy client is the next best thing.
In all, I wanted to explain how, as a one-man band, I manage a project. I'm not claiming it is the right way, nor that it should be labelled agile, nor that I won't change it. If, having read this, you choose to review your process, whether by taking on some of my ideas or working out your own, then that's great.
Really, the previous paragraph is partly a pre-emptive strike against potential commenters wanting to tell me I'm doing it wrong. I feel a little vulnerable admitting past mistakes, and possibly current ones too, but this stuff doesn't come as easily to me as coding.