Editor’s note: Joe Procopio is the Chief Product Officer at Get Spiffy and the founder of teachingstartup.com. Joe has a long entrepreneurial history in the Triangle that includes Automated Insights, ExitEvent, and Intrepid Media. His columns are published on Tuesdays.

RESEARCH TRIANGLE PARK – Let’s talk about how we use data to make a better product, raise our margins, and generate more revenue.

Here’s the common knowledge: Our product needs an input cycle. We need to be collecting data on our product, analyzing that data, then acting on that data. This produces more data, and the cycle starts all over again.

Here’s the issue: In over 20 years building products and companies, as well as leading and advising others to do the same, I’m constantly running into three problems with that process.

  1. We’ll build out part of the input cycle and stop there. Maybe we collect a bunch of data, look at it, get all happy or frustrated, and then forget about it.
  2. We’ll build the input cycle with the wrong data, and assure ourselves with false positives that everything is awesome.
  3. We’ll follow the input cycle all the way through, but then do the same thing over and over and expect different results.

First, Let’s Build an Input Cycle

An input cycle is a little different than a feedback loop and, in fact, can include any and all feedback loops. What we want with our input cycle is to capture every lesson learned about our product at every point in the lifecycle.

That lifecycle starts with what we’re building and ends a little bit beyond revenue, ideally at recurring revenue. In other words, the final question of this exercise is how we get existing customers to spend more money on the existing product.

The lessons learned are high-level, and they can come from customer feedback, professional input (mentors, advisors, investors, partners), and most importantly, our day-to-day experience building the product and/or running the company.

Obviously, this works best for a product that already has revenue or at least revenue streams, because revenue is the penultimate metric. From that we can dial back to profit, margins, all the way back to market.

If we’re not at revenue yet, we can use other key performance indicators, or KPIs, but until we’re talking about actual dollars, any other metric could produce false signs of success. That’s the biggest mistake early startups usually make, and they wind up running out of runway.

With all that understood, we’re basically going to post-mortem or retro our product on a continual basis.

Let’s Make a Spreadsheet

Don’t sleep on the flexible and powerful spreadsheet. This process is going to be fluid. We’re using a spreadsheet so we can create one, fill it, tweak it, save it, try something else, score it, save another version, and start over. Startup is all about reinventing and keeping going, and this model fits that ethos.

Along the top of the spreadsheet, we’re going to break the product process down into sets of customer actions that are influenced by our marketing, our feature set, and customer success. In other words, this is the funnel that takes the customer from a random person to a repeat customer.

Try these as a starting point:

Awareness: This is how random people find us, including our marketing plan as well as the first impression our product makes on them.

Interest: This is how we turn a random person into a prospective customer. It’s the tail end of marketing and everything about the product up to actual interaction.

Decision: This is how we get the customer to consider making a purchase. It’s how we define the product and all the steps the customer might take up until they can actually decide to own it.

Trial: This is optional, more for a freemium model or trial version. It’s everything we do while the customer is using the product up until they have to pay for it.

Conversion: This is the purchase transaction. Easy to define, hardest to make happen. Includes everything from UI/UX to usage to value.

Rating: This is how we turn customers into loyal customers and then repeat customers. It starts with value and includes incentives and loyalty to increase their usage.

Feedback > Input > Results > Lessons

Now we have a place to extract the results of all the input — and again, this includes customer feedback, professional feedback, and internal discovery.

Once we’ve distilled the input into results, we categorize each result into one of our sets of customer actions, and we extract what we’ve learned (or think we’ve learned) from those results into one of three categories:

  1. We know this works.
  2. We think this works.
  3. We know this doesn’t work.

In our spreadsheet, this is three columns under each of the six customer actions (so 18 columns in all).

Now actually, I don’t care how you format it, as long as we have landing places for the entries. You can color code each entry green, yellow, and red if you’d like (I’m a fan of heat mapping), then use one column for each set of customer actions. You can add entries to the bottom and sort them by time, or manually put the highest-importance entries at the top. You can change the format over time. Keep this flexible.
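
To make the layout concrete, here’s a rough sketch in Python of one way a blank sheet could be generated: six customer actions across the top, with the three lesson categories nested under each. The stage and category names come straight from this article; the file name and everything else about the format are placeholders, not a prescription.

    import csv

    # The six customer actions across the top of the spreadsheet.
    STAGES = ["Awareness", "Interest", "Decision", "Trial", "Conversion", "Rating"]

    # The three lesson categories nested under each action (6 x 3 = 18 columns).
    CATEGORIES = ["Know it works", "Think it works", "Know it doesn't work"]

    def new_lesson_sheet(path="lessons.csv"):
        """Write an empty sheet with one column per action/category pair (18 in all)."""
        header = [f"{stage}: {category}" for stage in STAGES for category in CATEGORIES]
        with open(path, "w", newline="") as f:
            csv.writer(f).writerow(header)

    new_lesson_sheet()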

In any case, we’ll want to take an initial run at this, and that’ll take some time. We can either do a top-down review of the entire product lifecycle all at once, or we can just start from today and do this five minutes at a time.

Learning from the Lessons

We can do this in five minutes, whether it’s daily, weekly, or whenever we review our input. It could be after our regular customer data reporting process, after a board or advisor meeting, after a roll to production, whenever. I have one morning a week dedicated to reviewing product data, and I do this at the end.

Once we’re done analyzing the results of said input, we’ll take what we’ve learned to our spreadsheet and document it at a high level. Keep each entry to one lesson from one result, in a few words. Here are some examples, but please understand they may or may not relate exactly to how you run your business; this needs to work for you.

Awareness: Promoting the DIY aspects of our tool doesn’t attract millennials. (Know it doesn’t work)

Decision: Adding a trial version direct link to our email prospect form increased downloads by 75%. (Know it works)

Rating: Cutting the results review feature from 4 steps to 2 is probably positively influencing ratings by up to half a point. (Think it works)
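
If it helps to see those same example entries as structured records instead of cells in a grid, here’s a minimal sketch; the field names are my own invention, not a required schema.

    from datetime import date

    # One record per lesson: a result distilled into a few words, tagged by
    # customer action and by how sure we are about it.
    lessons = [
        {"logged": date.today(), "stage": "Awareness",
         "category": "Know it doesn't work",
         "lesson": "Promoting the DIY aspects of our tool doesn't attract millennials"},
        {"logged": date.today(), "stage": "Decision",
         "category": "Know it works",
         "lesson": "Trial version link on the email prospect form lifted downloads 75%"},
        {"logged": date.today(), "stage": "Rating",
         "category": "Think it works",
         "lesson": "Cutting the results review from 4 steps to 2 may add half a point"},
    ]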

As we enter new entries, keep a few things in mind. These are lessons, not opinions and not ideas, and we shouldn’t overload the spreadsheet. We should also be reviewing older entries as we add new ones, because new lessons may just be updates of old lessons, or they may even contradict or cancel out old lessons in light of new information.

Keep it flexible.

And here’s the payoff: how this makes more money. Every time we have an idea or an experiment to grow vertically (building a better product for the customers we have) or horizontally (extending the product to attract new customers), we should review that idea or experiment against these lessons.

This will help us push what we know works, avoid what we know doesn’t work, and as quickly as possible turn what we think into what we know.
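
Here’s a minimal sketch of that review step, assuming the lesson records from the earlier example; the function name and the wording of the recommendations are mine, just to make the logic concrete.

    def review_idea(stage, lessons):
        """Weigh a new idea or experiment against the lessons already logged
        for the customer action it touches."""
        relevant = [l for l in lessons if l["stage"] == stage]
        if any(l["category"] == "Know it doesn't work" for l in relevant):
            return "Check it against the 'doesn't work' lessons before building."
        if any(l["category"] == "Know it works" for l in relevant):
            return "Push: it can build on something we know works."
        if any(l["category"] == "Think it works" for l in relevant):
            return "Experiment: a chance to turn a 'think' into a 'know'."
        return "No lessons for this stage yet; run it and log the result."

    # e.g. review_idea("Decision", lessons) using the records logged above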

When we do that, using revenue as the penultimate metric and recurring revenue as the goal, we’ll have some very good insight into what our roadmap should look like. We can release features customers want, with confidence, and best of all we’ll stop doing the same thing and expecting a different result.

Hey! If you found this post actionable or insightful, please consider signing up for my weekly newsletter at joeprocopio.com so you don’t miss any new posts. It’s short and to the point.