Cutting time to market in 1/2

Lately I’m able to deliver commercial MVPs with customers in half the time or less, and there is nothing artificial about it. It’s all about product work and validation, and I thought it would be worth sharing.

First of all, I am a brain for hire: I build software with teams and companies, and I have done so for a very long time. Many people do that, but my approach differs somewhat, and that difference is what this post is about.

My approach to new functionality follows these steps:

  1. Build the full user experience first (Commercial MVP, NOT technical MVP)
  2. Perform the Quality Assurance process to validate commercial fit and user experience before building the underlying system
  3. Iterate on steps 1 and 2 until we:
    1. Have discovered our commercial MVP
    2. Understand our cost/value factor
  4. Build the entire system

The first two steps are quite similar to many other prototyping phases. However, in the workflow I use it’s not a prototype: it’s the user experience code that will actually hit production later. That is, IF it is deemed commercially viable with an acceptable cost/value.

So, what is the consequence of working this way and what are the trade-offs? Let’s start with the potentially controversial changes to how people work.

Postpone all backend implementation

In this approach I postpone 98% of all backend implementation until AFTER we have run QA on the full user experience of a potentially quite complex system. However, this does not mean that the application has no logic; it will actually have all the working logic and functionality required to run QA on commercial fit and cost/value calculations.

Before you think I have gone completely mad, let me just say that I have actually built systems and tooling to achieve this. They allow me to build full user experiences that mock out all backend functionality, and later make it easy to implement the backend systems in step 4. The tool is peercolab.com btw.
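
To make that concrete, here is a minimal sketch of the kind of seam this depends on. The names and shapes below are invented for illustration; they are not the actual peercolab.com API:

```typescript
// The user experience code only ever talks to this contract.
// Whether the implementation is a mock or a real backend is
// invisible to the UI.
interface LineItem {
  sku: string;
  quantity: number;
}

interface Order {
  id: string;
  items: LineItem[];
  status: "pending" | "shipped";
}

interface OrderService {
  listOrders(): Promise<Order[]>;
  placeOrder(items: LineItem[]): Promise<Order>;
}

// Used during steps 1-3: all the logic works, nothing persists,
// and there is no database, queue or network to slow iteration down.
class InMemoryOrderService implements OrderService {
  private orders: Order[] = [];

  async listOrders(): Promise<Order[]> {
    return [...this.orders];
  }

  async placeOrder(items: LineItem[]): Promise<Order> {
    const order: Order = {
      id: `ord-${this.orders.length + 1}`,
      items,
      status: "pending",
    };
    this.orders.push(order);
    return order;
  }
}
```

Because the user experience code only ever sees the OrderService contract, swapping the in-memory mock for a real implementation in step 4 does not disturb anything QA has already validated.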

So why is this a good idea? For a few reasons:

  • The first iteration of a solution is rarely the one that solves the problem
    • And it will often reveal that we didn’t even understand the problem
  • Integration is the killer of iteration speed
    • This is why we have been led to believe that going from microservices to monolith is the solution to our problems. As if we had no problems before microservices became a thing…
  • Persistent state management and concurrency are a waste of time before you have validated that
    • You understand what the problem is
    • You understand what the solution to the problem is

Having the discipline to postpone this decision is often hard because it feels unfamiliar. However, it is the single most rewarding step in my approach. It allows us not to fear change, to learn, to re-write functionality and to discover the right commercial MVP. To be honest, most projects I see accumulate legacy long before they even hit production, because large structural systems are implemented and designed prematurely, before we even know whether they solve the real-world problem.

Separate problem and solution breakdown

This is a big one too. The way I define it, the problem concerns what real people do in their real lives, regardless of whether they have an app or a software system. In the problem breakdown there is no software system. There are people, behaviours and criteria for what it takes to recognize a solution.

The Dunning-Kruger effect has an enormous impact on the problem breakdown, because the problem breakdown for a commercial MVP is usually quite big. This is why separating the problem and the solution breakdown is so important. Although our natural instinct is to understand a problem by visualizing a solution to it, we need to understand that bringing the concept of a solution into understanding the problem increases the complexity by orders of magnitude.

I use a systematic approach to problem breakdown that I call Clear Thinking. It’s really nothing fancy; it’s just a systematic way of breaking down a problem while still being able to postpone the solution breakdown.

Build a quality user experience to learn, not necessarily to release

When I build the user experience, I build it to validate, not to ship. This does not mean that it has low quality or is some thrown-together prototype. It means that it is built with quality, BUT with the purpose of being able to rip it apart at any point in time when our quality assurance and validation iterations reveal learning in the problem and/or solution domain. One of the outcomes could be that the solution is not worth the investment, and it would be great to discover that ahead of making the investment.

And technically, again, this still means doing data breakdowns, API modelling and all the things that we are used to doing. However, we do them so that we can build and validate the user experience and run commercial viability tests first, not because we want to build the underlying system yet. We will simulate it instead!
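
As a rough illustration of what that simulation can look like (the endpoint, types and data here are made up for the example):

```typescript
// A simulated stand-in for GET /customers: same shapes as the
// modelled API, but the answers come from canned data with
// realistic latency so QA exercises real loading states.
type CustomerSummary = {
  id: string;
  name: string;
  openOrders: number;
};

const cannedCustomers: CustomerSummary[] = [
  { id: "c-1", name: "Acme AS", openOrders: 2 },
  { id: "c-2", name: "Nordic Retail", openOrders: 0 },
];

function withLatency<T>(value: T, ms = 300): Promise<T> {
  return new Promise((resolve) => setTimeout(() => resolve(value), ms));
}

// The UI calls this exactly as it would call the real endpoint later.
export async function fetchCustomers(): Promise<CustomerSummary[]> {
  return withLatency(cannedCustomers);
}
```

Since the simulated call has the same shape and timing characteristics as the eventual real one, the QA iterations exercise genuine loading and waiting states rather than an idealized, instant backend.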

Conclusion

As I said at the start, the reason I wanted to share this is the effect it has. I take companies and teams to a commercial MVP in usually less than half the time they are used to. It also leaves them with much lower maintenance cost and, the real beauty, alignment! Because of the problem breakdown approach and the iterations, the teams end up almost strangely aligned. Which, after all, is not strange at all. The only thing that everyone in a project can align on is the user experience, and by approaching the user experience first you give everyone a seat at the table.

-Svein Arne Ackenahusen
