Introducing Replicated Instance Insights: Key Metrics for Customer-Hosted Applications

Dexter Horthy and Mariel Wilding | Jan 12, 2023

As an independent software vendor (ISV), it's important for you to have a clear understanding of what success looks like for your business. Over the last 7 years, we have been working to enable some of the most successful software vendors in the world to deliver their cloud-native applications to customer environments. During this time, we have identified specific roles, tools, processes, and key metrics that top performers commonly use to measure and understand their performance in delivering and distributing their applications to customers.

This is the first article of a series in which we’ll dive into using strategic metrics to drive excellence in commercial off-the-shelf software. It’s important to align on what to measure and how before launching any customer-hosted or on-prem application initiative. Establishing clear goals and deciding how you’ll measure progress enables your teams to understand whether adjustments to people, process, and tools are having the intended impact. Before we get into any specific metrics, let’s briefly review what goes into designing and measuring good software delivery metrics.

Designing and leveraging performance metrics

Figure: Instance Insights Cycle (define, evaluate, set a goal, measure, adjust)

Measuring success is crucial for any business, and as an ISV you need a clear picture of what software delivery success looks like for your company. The steps below outline the process of designing and leveraging performance metrics to work towards becoming a top performer in software delivery.

  • Define the metric - while we will go into more detail on useful key performance indicators that we have identified, each ISV should go through the process of fully defining what each metric means to them and ensuring it is relevant to their business and to what success looks like.

    Example: We define uptime as the amount of time that a software application is available and functioning properly (e.g., as reported by Resource Statuses), measured as a percentage of total instance lifetime. A minimal sketch of this calculation follows this list.
  • Evaluate current performance - it’s important to look at the current state of the business and how it’s performing against the defined metrics. This benchmark helps ISVs understand strengths and weaknesses, and identify areas for improvement.

    Example: Our current overall uptime is 75%.

  • Set a goal - this involves establishing specific, measurable, achievable, relevant, and time-bound targets for each metric for the business to work towards. This helps ISVs focus their efforts and resources on specific outcomes and provides a clear direction for the business.

    Example: Achieve an overall uptime of 90% by the end of Q2.

  • Measure performance - once you’ve set your goals, it’s important to regularly track and review progress in order to determine whether your team(s) are on track to achieve them. While ISVs may have established methods for measuring various metrics already, we are working on new telemetry and reporting pages to help our ISVs measure the health and status of end-customer instances and gain deeper insight. You can view the status of these updates on our roadmap.

    Example: Through tracking our uptime, we’ve improved from 75% to 80%, but need to continue working to meet our goal of 90%.

  • Make adjustments - by analyzing progress towards goals over time, ISVs can better understand areas of strength and weakness and adjust their strategies accordingly. Evaluating the effectiveness of those strategies and making data-driven decisions helps them stay competitive in the market and achieve goals more efficiently. Changes can be made to the product, process, team, and/or the goals themselves. We’ll share specific areas where adjustments can be made for each metric throughout this series.

    Example: After observing that the majority of downtime is due to issues with the database we ship with our application, we’ve added the ability for end users to bring their own PostgreSQL instance instead of using an embedded in-cluster database.
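
To make the uptime example above concrete, here is a minimal sketch of the calculation in Python. The InstanceWindow record and its numbers are hypothetical, not a Replicated API; the sketch only assumes you collect per-instance availability data of some kind.

    from dataclasses import dataclass

    @dataclass
    class InstanceWindow:
        # Hypothetical record: one instance's observed lifetime, in hours,
        # and how many of those hours the application reported a healthy status.
        total_hours: float
        ready_hours: float

    def uptime_percent(windows: list[InstanceWindow]) -> float:
        # Uptime = time available and functioning properly,
        # as a percentage of total instance lifetime across all instances.
        total = sum(w.total_hours for w in windows)
        ready = sum(w.ready_hours for w in windows)
        return 100.0 * ready / total if total else 0.0

    # Two example instances: 750/1,000 and 375/500 healthy hours -> 75% overall
    fleet = [InstanceWindow(1000, 750), InstanceWindow(500, 375)]
    print(f"Overall uptime: {uptime_percent(fleet):.0f}%")

Rerunning the same calculation each reporting period is what turns the definition into the benchmark (75%), the goal (90% by end of Q2), and the progress check (80%) in the steps above.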

Key metrics for customer-hosted software

There are many ways to measure performance, and we’ll continue to evolve this list, but the following key metrics are ones we, at Replicated, have found anecdotally to be highly valuable over the last 7 years. By measuring them, you can get a detailed picture of how your business is performing and identify areas for improvement (a brief sketch of how a couple of them could be computed follows the list):

  • Installation Success - this includes metrics like Time to Install, the end-to-end time it takes a customer to progress from an initial installation attempt to having live software in production, and Install Success Rate, the percentage of attempted installations that result in live software running in production. Measuring these metrics and continually working on them can make delivery more efficient and vastly improve the experience of end customers.

    Best in class: 80% of installs complete in under 2 hours.
    Best in class: 90% install success rate.

  • Adoption - the median age of deployed software and upgrade success rate are key indicators of the ease and frequency of updates to new versions of your software. Straightforward, timely, and reliable upgrades mean customers are getting value from new features and bug fixes shipped by your product team(s).

    Best in class: median age of deployed software < 60 days.
    Best in class: 99% of attempted upgrades complete without downtime.

  • Velocity - this includes Release Frequency, how often stable releases reach production, and Cycle Time, the time from first commit to production release. These velocity metrics gauge how efficiently you deliver value to customers.

    Best in class: Release to production customers monthly.

  • Reliability - we recommend monitoring uptime, mean time between failures, mean time to recover, and support burden to get a comprehensive picture of the reliability of your software and how efficiently you can support customers.

    Best in class: 99% uptime.
    Best in class: 1 support hour / instance / month.

  • Revenue - While many executives will measure revenue metrics like conversion rate and churn in other systems, it can be valuable to understand the leading indicators of these events and metrics by examining how easily customers can install applications for evaluation, as well as when they decommission software instances. A great installation experience can be just as impactful to revenue metrics as an airtight sales process or strong product-market fit, and a poor experience can easily detract from the perceived value of an otherwise great core product.

    Best in class: Trial install success rate > 80%, Trial conversion rate > 50%.
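
As a rough illustration of how a couple of these metrics could be derived, here is a short Python sketch. The customer records are made up for the example rather than pulled from any real Replicated data source; it computes Install Success Rate as the share of attempted installations that reach production, and the median age of the software versions customers are currently running.

    from datetime import date
    from statistics import median

    # Hypothetical records; in practice these would come from your own telemetry.
    install_attempts = [
        {"customer": "acme", "reached_production": True},
        {"customer": "globex", "reached_production": True},
        {"customer": "initech", "reached_production": False},
    ]
    deployed_versions = [
        {"customer": "acme", "version_released_on": date(2022, 11, 1)},
        {"customer": "globex", "version_released_on": date(2022, 12, 15)},
    ]

    def install_success_rate(attempts) -> float:
        # Percentage of attempted installations that result in live software in production.
        return 100.0 * sum(a["reached_production"] for a in attempts) / len(attempts)

    def median_deployed_age_days(deployments, today: date) -> float:
        # Median age, in days, of the versions currently running at customers.
        return median((today - d["version_released_on"]).days for d in deployments)

    print(f"Install success rate: {install_success_rate(install_attempts):.0f}%")  # 67%
    print(f"Median deployed age: {median_deployed_age_days(deployed_versions, date(2023, 1, 12)):.0f} days")  # 50 days

The other metrics in the list follow the same pattern; reliability figures like mean time between failures and mean time to recover are simply different aggregations over the same kind of instance event data.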

In future articles in this series, we’ll delve deeper into each of these metrics and discuss how to define and measure them, what best-in-class looks like, and what adjustments can help improve performance on each one. By setting and tracking quantitative goals around these metrics, you can work strategically to reach them and drive business success. You can't improve what you can't measure, and our hope is that you, as a Replicated vendor, make this data a critical part of measuring the ease of use and overall quality of the software you distribute.

Stay tuned for additional insights into these key indicators: