Editor’s note: Joe Procopio is the Chief Product Officer at Get Spiffy and the founder of teachingstartup.com. Joe has a long entrepreneurial history in the Triangle that includes Automated Insights, ExitEvent, and Intrepid Media.

RESEARCH TRIANGLE PARK – Making a mistake launching a new product feature is costly, and I’ve done it at least a dozen times. Never again.

We’re all familiar with the Minimum Viable Product (MVP) strategy. It mandates that we “fake” components of a new product by making many of its processes manual at first release. We do this for a few reasons:

  1. We want to get our product idea out to customers as quickly as possible, so we can validate its reason for being.
  2. We want to discover where the product is going to break, so we can focus our limited time and resources on de-risking.
  3. We want to determine how our customers are going to accept and use the product, so we can build out those features first.

There are other valid reasons for adopting an MVP strategy, like getting to revenue as quickly as possible, but those are the big three.


MVP isn’t a new concept, per se, but its adoption has exploded with the lowered barriers to entry brought about by the Internet and Software as a Service (SaaS).

What’s gaining traction now is the strategy of repeating the MVP process with every new feature, even down to every new version.

How do we do that?

Soft Launching and A/B Testing

There are already a number of ways to test the viability of a finished feature.

We can soft launch or A/B test by singling out a certain small segment of our customer population, turning on the feature for them, and either following the data or contacting them directly to see how they respond.

We partition that customer segment by usage, engagement, location, demographics, or basically any dimension that corresponds to the thesis we’re trying to prove out. If the new feature is there to reduce clicks, we’ll choose the most frequent users. If the new feature is an add-on for revenue, we’ll choose the most engaged users, and so on.
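To make that concrete, here’s a rough sketch in Python of how we might gate a test cohort and split it into A and B. The user record, cohort rule, and feature name are all hypothetical; the details depend on your stack.

```python
import hashlib

# Hypothetical user record: only "id" and "weekly_sessions" are assumed here.
def in_test_cohort(user: dict, min_weekly_sessions: int = 10) -> bool:
    """Pick the segment that matches the thesis -- here, the most frequent users."""
    return user["weekly_sessions"] >= min_weekly_sessions

def ab_variant(user_id: str, feature: str) -> str:
    """Deterministically bucket a user into A or B for a given feature.

    Hashing the user id together with the feature name keeps the assignment
    stable across sessions without storing any extra state.
    """
    digest = hashlib.sha256(f"{feature}:{user_id}".encode()).hexdigest()
    return "A" if int(digest, 16) % 2 == 0 else "B"

# Usage: turn the feature on for frequent users only, then split them A/B.
user = {"id": "u-123", "weekly_sessions": 14}
if in_test_cohort(user):
    print(f"show variant {ab_variant(user['id'], 'one-click-reorder')} to {user['id']}")
```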

This testing is done for a couple of reasons. In the case of a soft launch, we want to avoid disaster, and making mistakes with a smaller audience is preferable to making them at full launch. When we A/B test, we’re trying to choose between options to gain greater user acceptance: do they like it better this way or that way?

But if we take the time and collect the right data, we may also be able to draw the same kinds of conclusions we might find with an MVP:

  • The new feature isn’t valid.
  • The new feature breaks under certain stress.
  • The new feature is being used in a way we weren’t expecting.

With a finished feature, this can be an expensive discovery. So why not use a Minimum Viable Feature (MVF)?

Finished Feature vs. MVF

Many years ago, when I was working on a successful product that was YouTube before YouTube, I was building out an online video repository when executive management got all hot on the idea of video email. Super-sexy idea for sure, especially back in the early oughts, but there was one huge problem.

Bandwidth.

These were still the days of mostly dial-up, and while most people had broadband at the office, my gut told me that the primary use case for video email wasn’t going to be office to office. We needed to do one of two things:

  1. Limit the video, Twitter/Vine style, to a meme-ready six seconds.
  2. Build a simple web form to act like an email client that just sends a link to the recipient to stream the video from our servers.

Neither of those options was frictionless or particularly easy to build (security, new tech, lingering bandwidth issues), but at least we could put something up relatively quickly to prove me right or wrong, and then we could build the new feature properly.

We didn’t. We dove headfirst instead.

When’s the last time you emailed someone a video?

By building an MVF, we could have saved ourselves a lot of time and money and probably also enhanced the value proposition of our product by giving people who weren’t video auteurs a reason to put a video, however short or useless, online.

As it played out, once everyone realized the primary use case, we went ahead and rebuilt something that looked a lot like option 2 above and the relaunch was successful. It was just really expensive to get there.
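For the curious, the bones of option 2 could have been as small as the sketch below: a web form endpoint that emails the recipient a streaming link instead of the video itself. This is my illustration, not what we actually shipped; the route, field names, addresses, and URLs are all placeholders.

```python
import smtplib
from email.message import EmailMessage

from flask import Flask, request

app = Flask(__name__)
STREAM_BASE = "https://example.com/watch"  # assumed streaming endpoint

@app.route("/send-video", methods=["POST"])
def send_video():
    # The web form posts a recipient address and the id of an uploaded video.
    recipient = request.form["recipient"]
    video_id = request.form["video_id"]

    # "Video email" that is really just email: the payload is a link.
    msg = EmailMessage()
    msg["From"] = "noreply@example.com"
    msg["To"] = recipient
    msg["Subject"] = "Someone sent you a video"
    msg.set_content(f"Watch it here: {STREAM_BASE}/{video_id}")

    # A local or relay SMTP server stands in for a real email service.
    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)
    return "sent", 200
```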

Crawl > Walk > Run With Small Tech

I’m a huge fan of learning to crawl before we learn to run, and every feature we consider should have three stages of evolution. Three is a solid number and two will work, but going beyond three is probably wasting time and may annoy our customers.

Building using a Minimum Viable Feature strategy looks a lot like building using an MVP strategy. We’ll be replacing some of the automation with small tech: a term for the least intrusive, not-as-robust, easiest-to-integrate tech we can find. It should also be cheap, if not free.

At my current startup, we have a small tech fallback network around existing parts of the business. We have an in-house support team that uses voice, text, chat, email, Slack, and even some proprietary messaging within our software to do their job. We ride this fake network when we MVF new features.

If we don’t already have a network built, we can slap one together with people, phones, email, chat, Slack, Zapier, Google Docs, whatever we need for collaboration and communication both inside the company and outside it, for example with customers and partners.
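To show what riding that fake network can look like in code, here’s a minimal sketch: the customer-facing “feature” is a thin stub that drops each request into a channel where a human does the work. It posts to a Slack incoming webhook; the URL and message format are placeholders.

```python
import requests

# Placeholder: a real Slack incoming webhook URL comes from your workspace.
SLACK_WEBHOOK_URL = "https://hooks.slack.com/services/XXX/YYY/ZZZ"

def handle_feature_request(customer_id: str, details: str) -> None:
    """The 'feature' from the customer's side; a person fulfills it behind the scenes."""
    requests.post(
        SLACK_WEBHOOK_URL,
        json={"text": f"[MVF] New request from {customer_id}: {details}"},
        timeout=5,
    )

# Usage: to the customer, the stub looks like automation.
handle_feature_request("cust-42", "add a second vehicle to Tuesday's wash")
```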

Once we prove the feature concepts are viable, we can spend the time and money to build the proper tech to replace the small tech. If the feature concepts aren’t viable, we lose next to nothing. But what we’re really hoping to gain is knowledge about the latter two MVP concepts: Where does the feature break and how is it being used/misused?

Basic Tenets of an MVF

When does a feature require an MVF? It’s usually a judgment call. In some cases, the feature is small enough or critical enough to skip the MVF process and just push to production.

But let’s turn that question around. When is a feature not even a feature? In other words, some features are small enough or non-critical enough that they may even be left as a prototype. One question I always ask about feature ideas and requests: Will a fully formed feature add to our intellectual property, or is it just a band-aid? If it’s the latter, it might not get polished.

This is especially true for internal features, like when we need “a technical way to do X.” We usually don’t need a ton of tech to solve it, so we use small tech to see if we’re actually solving the root cause of the problem or if the problem pops up elsewhere. If the problem gets solved and turns out to be neither frequent nor critical, the need for a technical solution usually dies down once the small tech is in place.

If the feature is indeed external and critical, an MVF also helps us solve not just for the one use case, but for several. Unlike the rigidity of a finished feature, we can try out several use cases at once, switching on the fly at little or no cost.

But the most important aspect we can carry over from an MVP into an MVF is measurement. The same data capture mechanisms, feedback loops, and kill switches should be in place when we bring the new feature to customers. We should be continually listening and adjusting to the patterns we see. This allows us to measure more than twice, and build once.
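A minimal sketch of that plumbing, assuming a flags file and an append-only event log stand in for a real config service and analytics pipeline: every MVF call site checks a kill switch and records what users actually do.

```python
import json
import time

FLAGS_FILE = "feature_flags.json"   # swap for a remote config service
EVENTS_FILE = "feature_events.log"  # swap for a real analytics pipeline

def feature_enabled(name: str) -> bool:
    """Kill switch: flipping the flag off disables the feature everywhere."""
    try:
        with open(FLAGS_FILE) as f:
            return json.load(f).get(name, False)
    except FileNotFoundError:
        return False  # fail closed: no flags file means no new features

def record_event(feature: str, user_id: str, action: str) -> None:
    """Append-only feedback loop we can watch while the MVF is live."""
    with open(EVENTS_FILE, "a") as f:
        f.write(json.dumps({"ts": time.time(), "feature": feature,
                            "user": user_id, "action": action}) + "\n")

# Usage at a call site:
if feature_enabled("video-email"):
    record_event("video-email", "u-123", "opened_composer")
```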

Which is exactly what we need to do to avoid those costly mistakes.

Hey! If you found this post actionable or insightful, please consider signing up for my weekly newsletter at joeprocopio.com so you don’t miss any new posts. It’s short and to the point.