The Lean, Mean, Learning Machine: Developing a Custom Agile Methodology

By Helen Crosby, COO, Geonomics

Agile is a very popular methodology. It can streamline development, foster meaningful innovation, and change your team’s processes for the better. It’s also hugely complicated, massively expensive, and a colossal pain to implement – or so some would have you believe.

The truth is that agile methodology is based on very simple principles. You don’t need to devour an entire library’s worth of books or hire a crack squad of business change experts. Instead, find a way to take the fundamental (and still very good) ideas behind agile and apply them to your company’s processes.

This is what we’ve tried to do with our Learning Machine methodology, and we’ve seen some success with it. Over 13 weeks, we’ve conducted 63 separate experiments across our products, acquisition, and CRM teams. Of those, 17 have been definite wins, 24 haven’t made any difference at all, and the remaining 22 are still up in the air. It may not be perfect, but I would have taken a 10 percent win rate at the start, so I’m certainly not complaining.

How did we develop it? Well, it started with ideas, which are essential to the development of new products and the improvement of existing ones. But not all ideas will make an actual difference to a product’s success, and you don’t know which ones will work until you try them.

We wanted something that could test as many things as possible in a production environment with real customers – and with the least amount of effort. The Learning Machine was created to allow us to quickly discard dead-end ideas – and build on the ones that might provide a measurable benefit to the business.

The rationale

The end goal of the Learning Machine is a methodology that can power through as many learning loops in as short a space of time as possible. In theory, the result should be continuous, speedy improvement of our software.

Of course, there needed to be a bit more to it than theory. While changing things on the fly was important, it was necessary to establish some basic tenets to keep the team from veering too far off-course. Momentum is essential to any methodology, but for the Learning Machine it would be especially critical: establishing a regular rhythm for experimentation can ensure that improvements are made at a speedy, but sustainable, pace.

We were also committed to giving our developers autonomy, but only up to a point – and with full accountability for their work. While they would, for the most part, decide what they worked on (within a given remit), they would be expected to explain their reasoning – and challenged if it was found wanting.

Everybody wants to work on cool stuff, and everyone wants to make cool things cooler – but if an experiment doesn’t lead to demonstrable improvement and move the company closer to its business goals, it’s probably not worth implementing.

The process

We required something scalable and changeable that would allow talented developers to get on with it – within certain parameters. The shape of a project can and will shift, but it needs to remain faithful to overarching goals. In order to meet them, improvements need to be delivered and validated on a regular basis. Just saying “we’ll have it when it’s ready, Steve” isn’t likely to encourage productivity: there should be freedom, and there should be trust – but you still need a structure to glue everything together and ensure everyone is pulling in the same direction.

We kept this idea in mind when we created the Learning Machine methodology. Running in week-long cycles, it’s designed to encourage creativity from all sources.

Anyone in the business can submit a hypothesis for improving our product (along with how we would test it in a short, one-week experiment). Marketers, software engineers, the executive suite – if it’s a good suggestion, we want it. We explicitly didn’t want to foster an environment where opportunities for improvement are rejected just because the team leader wasn’t the one who thought of them.

For example, the Head of Security and Technical Compliance submitted an idea called “Chain Reaction” for our flagship product, a pirate-themed online game where you “dig” for real money prizes across a virtual map. The notion was that, by including a feature whereby one dig can set off a chain reaction to uncover prizes in consecutive, linked squares, total digs might increase by 10 percent. In Week 1 of the experiment (where the feature was tested against a control game), digs actually doubled; in Week 2, the pattern held up. With the feature’s worth validated, we could move on to things like improving the design and functionality.

A product-improving, revenue-boosting, literally game-changing idea – and from someone whose day job was not software development.
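
To make the shape of a submission concrete, here’s a rough sketch of the information one might carry, using Chain Reaction as the example. The field names and structure are purely illustrative – we’re not prescribing a template, just showing the sort of detail that makes an idea testable within a week.

```python
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One experiment proposal, as anyone in the business might submit it."""
    submitted_by: str       # who had the idea
    name: str               # short label for the idea
    change: str             # what we would change in the product
    metric: str             # the number we expect the change to move
    expected_uplift: float  # e.g. 0.10 for a 10 percent increase
    test_plan: str          # how to test it within a one-week experiment

chain_reaction = Hypothesis(
    submitted_by="Head of Security and Technical Compliance",
    name="Chain Reaction",
    change="One dig can set off a chain reaction across consecutive, linked squares",
    metric="total digs",
    expected_uplift=0.10,
    test_plan="Run against a control game for one week and compare digs",
)
```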

So at the beginning of each week, our product team has a 60-90 minute brainstorm, where they review submissions from all quarters of the business. Each team shares with the whole business: what they’ll be working on, the way they intend to run their various experiments, and what exactly they’re trying to improve.

With that done, priorities are defined:

  • Probability. Is the idea likely to succeed – and quickly?
  • Impact. Will it have a meaningful impact – and will it be immediately visible?
  • Resources. Will it be resource-intensive or not? Sometimes, the (literally) game-changing ideas don’t cost an arm, a leg, a spleen and a pancreas to implement (see the rough scoring sketch below).
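
There’s no rigid formula behind this triage, but if it helps to picture it, the sketch below scores each proposal against the three questions and ranks them. The 1–5 scores here are invented for illustration.

```python
# Each proposal gets a rough 1-5 score per criterion (the scores here are invented).
proposals = {
    "Chain Reaction": {"probability": 4, "impact": 4, "resources": 2},
    "Leaderboards":   {"probability": 3, "impact": 2, "resources": 4},
}

def priority(scores: dict) -> float:
    # Favour likely, high-impact ideas; penalise resource-hungry ones.
    return scores["probability"] * scores["impact"] / scores["resources"]

for name, scores in sorted(proposals.items(), key=lambda kv: priority(kv[1]), reverse=True):
    print(f"{name}: priority {priority(scores):.1f}")
```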

Small improvements made quickly: it’s a defining agile principle, and while each improvement may not amount to much on its own, the cumulative effect is a better product.

With these things decided, the product teams have another brainstorm – this time focused on testing. What is the minimum viable test for each idea? Will we need to conduct an experiment with landing pages or beta slots? Will a survey do, or will it require user testing and player panels? Or can we develop a lean solution that we can test out live in production?
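
When the answer is a live production test, the lightest-weight version is often just a stable split of players into a test group and a control group. The sketch below shows one common way of doing that with a deterministic hash; the bucketing scheme and the 10 percent exposure are illustrative assumptions rather than a description of our actual platform.

```python
import hashlib

def in_experiment(player_id: str, experiment: str, exposure: float = 0.10) -> bool:
    """Deterministically assign a player to an experiment's test group.

    Hashing the player id together with the experiment name keeps each
    player's assignment stable between sessions and keeps different
    experiments independent of one another.
    """
    digest = hashlib.sha256(f"{experiment}:{player_id}".encode()).hexdigest()
    bucket = int(digest[:8], 16) / 0xFFFFFFFF  # map the hash to a value between 0 and 1
    return bucket < exposure

# Show the experimental feature to roughly 10 percent of players;
# everyone else keeps playing the control game.
if in_experiment("player-12345", "chain-reaction"):
    print("serve the Chain Reaction variant")
else:
    print("serve the control game")
```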

When a decision is reached, we run the tests and gather the results at the end of the week. Our analysis reviews the accuracy of our predictions and assumptions against the hard figures. If the idea is working, we’ll iterate on it further and try to implement it in the best form possible. If it isn’t, and looks unlikely to, we’ll kill it stone dead. We treat unsuccessful experiments as learning experiences – it’s important to understand the reasoning behind a triumph or a failure – which makes them just as valuable as successful ones.
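
The end-of-week review can be as simple as putting the observed numbers next to the prediction and applying a rule of thumb. Here’s a minimal sketch, assuming digs per player as the metric; the thresholds and the keep/iterate/kill rule are invented, and a real review would also want a proper significance check.

```python
def review_experiment(control_digs: list, variant_digs: list, predicted_uplift: float) -> str:
    """Compare the week's hard figures against what the hypothesis predicted.

    The decision rule is illustrative; a real review would also check sample
    sizes and statistical significance before trusting the uplift number.
    """
    control_mean = sum(control_digs) / len(control_digs)
    variant_mean = sum(variant_digs) / len(variant_digs)
    observed_uplift = (variant_mean - control_mean) / control_mean
    print(f"predicted uplift {predicted_uplift:.0%}, observed {observed_uplift:.0%}")

    if observed_uplift >= predicted_uplift:
        return "iterate"           # it's working: refine it and implement it properly
    if observed_uplift > 0:
        return "run another week"  # promising but unproven: keep watching
    return "kill"                  # no measurable benefit: learn why, then move on

# e.g. digs per player over the week, control game vs. Chain Reaction variant
print(review_experiment([3, 5, 4, 6, 2], [9, 11, 8, 10, 12], predicted_uplift=0.10))
```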

When we’ve identified the developments that we need to explore further, we systemise them. Here’s where the one-week cycle can be a little limiting, because implementation can, from time to time, take a bit longer than seven days, especially with more complex improvements.

We’re fortunate that, in most cases, we have the in-house engineering skills and discipline needed to do it pretty quickly, but there are a few things to keep in mind if you’re going to adopt this approach. You’ll want to consider QA testing and maintainability, and burndown charts are an absolute must: time is always of the essence if you intend to keep improvements coming at a steady pace.
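
If burndown charts aren’t already part of your routine, the idea is just to make slippage visible day by day. A minimal sketch, with invented figures:

```python
def burndown(total_points: int, days: int, done_per_day: list) -> None:
    """Print ideal vs. actual remaining work for a short implementation cycle."""
    remaining = total_points
    for day in range(1, days + 1):
        ideal = total_points * (1 - day / days)
        line = f"day {day}: ideal {ideal:4.1f} pts remaining"
        if day <= len(done_per_day):
            remaining -= done_per_day[day - 1]
            line += f", actual {remaining} pts"
        print(line)

# e.g. a 20-point implementation over a 5-day week, three days in
burndown(20, 5, done_per_day=[3, 4, 2])
```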

But don’t overcook it: we’ve engineered things to death in the past, and it’s almost always a massive waste of time. Find the best architectural solution…and then use it. Done.

Measuring success

With the Learning Machine, we’ve attempted to merge the best of Lean UX and agile to foster a culture of experimental iteration and continuous improvement. It’s not perfect, and we won’t pretend it works for everyone: I’m reasonably confident that at least some of you will read about the multiple 60-90 minute meetings and run away screaming.

Still, we’ve found that it’s allowed us to direct resources and energy to the work that matters most to the business. For example, there was a time when we thought leaderboards might make a significant difference to our game. The Geonomics of old would have spent weeks on the required concept, implementation, coding, and design work. With the Learning Machine, it only took us a few days to discover that nobody at all cared about it – saving us considerable amounts of wasted time and effort.

We certainly don’t claim that following it to the letter will generate incredible results for your company: all we’re saying is that we like it, it works for us, and it serves as a testament to the versatility of agile. Building our Learning Machine has given us better products – but only because we tailored it to the way we work. It’s worth keeping that in mind when you build your own.

About the author

Helen Crosby is Chief Operating Officer at Geonomics. She joined the company in 2011, bringing significant senior management experience from her time as Chief Technology Officer and Director at Patientline, the communication and entertainment service for the NHS, and as General Manager and Director of Service Solutions at BT Openreach, the infrastructure division of BT Group. At Geonomics she is responsible for a diverse array of operations, from project and product management to legal and finance. Helen has a degree in French and German from the University of Bristol and an MBA from Imperial College London.