Crystal Widjaja (Gojek) Unlocks the Secrets of Unmeasurable Metrics and Extracts Lessons from Convoy's Downfall!

Unsolicited Feedback - A podcast by Brian Balfour & Fareed Mosavat


This week, we are joined by Crystal Widjaja, a data-driven product expert. Crystal has held product and data leadership roles at companies including Kumu and Gojek, the Southeast Asian super app and delivery and logistics platform, where she led both data and product teams. Crystal is also a prolific contributor to Reforge, sharing valuable insights through our programs, blog, and artifacts. If you enjoy what you hear in the podcast, you can find more of Crystal's rich insights at Reforge.com.

This week, we will be discussing:

Jason Cohen's blog post on "Metrics That Cannot Be Measured Even in Retrospect" and the challenges faced by data-driven product leaders. 📊

The lessons that product leaders can learn from the failure of Convoy, a major player in the freight brokerage business. 💡

Challenges of Being a Data-Driven Product Leader

We're starting with Jason Cohen's article, in which he makes three key points:

1) Some widely discussed metrics, such as the impact of a single feature on product revenue, are not easily quantifiable. 🔒

Why? Customers often ask for many features during the buying process but end up not using them. That doesn't mean these features don't affect revenue or aren't important. ❓💰❗

Our take? It's a bit crazy how many product management books and blogs tell you to measure the impact of a feature on acquisition, retention, and monetization. 📚📈 Instead, use TARS, a framework that stands for Target Audience, Adoption, Retention, and Satisfaction. ✨🎯😃 Or, rather than measuring a feature's positive impact on revenue, Shishir Mehrotra suggests measuring the churn rate you would see if you removed the feature. ⏪➡️⚖️ (A rough sketch of how the TARS numbers might be computed appears at the end of these notes.)

2) Measuring the impact of incremental activities on customer churn can be challenging.

Why? There's often a long lag between an action happening and the customer churning, making it effectively impossible to isolate the single action that caused the churn.

Crystal thinks this is the wrong point to make. In general, metrics sit on a sliding scale from easy to difficult to measure, but nothing is truly impossible. The real question, for the "impossible" end of the scale, is: can we come up with a proxy that's good enough? Do I really need perfect data?

"You can come up with a proxy for everything, right?" - Crystal

3) Measuring the probability of risks is more of a "cover your ass" activity than a genuinely useful one. 🤔

Why? Whether something has a 30% or a 70% probability of happening, it could still happen. So, "don't put probabilities on the slide at all. Only list the risks that you feel are so important that they either merit action or awareness." ❌📊

Fareed agrees - there are only two types of risks that matter:

Ones with a very high probability of happening ❗

Ones that are so severe that their impact is existential ❗

Anything other than those two should be a "deal with it when it happens" situation. 💼🕒

Do a pre-mortem. Just sit down in a room and ask: if this project fails, why would it have failed? Then figure out which of those failure points you want to preempt or solve against. 💭💡

Avoid a "Bike Shedding Discussion." "You are designing a nuclear factory, but everyone's spending all this time deciding, where should we put the bike storage shed? That must be the most important thing to talk about and define, and I'm just gonna force the conversation on this smaller piece, versus the like, building of the nuclear factory." - Crystal 🚳🚲🏭

Want to hear more on Crystal's metaverse approach to data analytics and discover why Brian, Fareed, and Crystal dislike benchmarks? How about hearing the hard truths about Convoy's downfall? Listen to the full episode! It's time to level up your product decisions! 💥
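To make the TARS framework mentioned above a bit more concrete, here is a minimal, hypothetical Python sketch of how a team might compute the four TARS numbers for a single feature from usage and survey data. The table columns, the retention definition (used the feature again after week 1), and the 1-5 survey scale are illustrative assumptions, not definitions taken from the episode or from Reforge's materials.

```python
# Hypothetical sketch of TARS-style feature metrics: Target Audience,
# Adoption, Retention, Satisfaction. Data and thresholds are made up.
import pandas as pd

# One row per user: is the feature relevant to them, plus an optional
# satisfaction survey score (1-5).
users = pd.DataFrame({
    "user_id":      [1, 2, 3, 4, 5, 6],
    "in_target":    [True, True, True, True, False, False],
    "survey_score": [5, 4, None, None, None, 3],
})

# One row per time a user used the feature.
feature_events = pd.DataFrame({
    "user_id": [1, 1, 2, 3, 6],
    "week":    [1, 4, 1, 1, 2],
})

target = users[users["in_target"]]
adopted = target[target["user_id"].isin(feature_events["user_id"])]

# Retained = adopted users who used the feature again after week 1.
later_use = feature_events[feature_events["week"] > 1]["user_id"]
retained = adopted[adopted["user_id"].isin(later_use)]

print("Target audience size:", len(target))
print("Adoption:", len(adopted) / len(target))                   # adopted / target
print("Retention:", len(retained) / max(len(adopted), 1))        # retained / adopted
print("Satisfaction:", adopted["survey_score"].dropna().mean())  # avg survey score
```

In practice these numbers would come out of your analytics warehouse rather than hand-built DataFrames, but the ratios per feature are the same idea.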
