AA241 - Product Risk: How Assumptions Kill Your Product (and Your Company, Eventually)
Arguing Agile | December 17, 2025
241
00:53:01 | 36.45 MB

Stop wasting time building the wrong thing faster!

In this episode of Arguing Agile, Product Manager: Brian Orlando and Business Agility Coach to THE STARS: Om Patel respond to yet another listener question, discussing Product Risk Analysis in agile environments! 

Listen or watch as they challenge the common misconception that analyzing risks upfront is "waterfall" and reveal why ignoring product risks until you've burned three sprints is how teams end up building features nobody wants.

Listen as the hosts break down Marty Cagan's four critical product risks (Value, Usability, Feasibility, and Business Viability), then stick around for the conversation on why most teams focus on execution risks while the real product killers are hiding in plain sight!

The topics covered are:
- Difference between product risks and execution risks
- Why traditional risk registers are theater
- "Speed-to-death" prioritization for testing assumptions
- Handling team skill gaps as feasibility risks
- Aligning stakeholders who fixate on the wrong risks
- Why business viability (pricing, unit economics) is the most ignored yet most dangerous risk

This episode is great for product managers, agile coaches, and team members who want to stop building things people don't want.

#ProductManagement #Agile #RiskAnalysis

REFERENCES

- Marty Cagan, Inspired
- Melissa Perri, Escaping the Build Trap
- Teresa Torres, Continuous Discovery Habits
- L. David Marquet, Turn the Ship Around!
- Eric Ries, The Lean Startup
- Product School blog

LINKS
YouTube

Website

Spotify: https://open.spotify.com/show/362QvYORmtZRKAeTAE57v3
Apple: https://podcasts.apple.com/us/podcast/agile-podcast/id1568557596

INTRO MUSIC
Toronto Is My Beat
By Whitewolf (Source: https://ccmixter.org/files/whitewolf225/60181)
CC BY 4.0 DEED (https://creativecommons.org/licenses/by/4.0/deed.en)

Brian:

So Om, you're telling me teams should spend more time analyzing risk upfront. Isn't that waterfall?

Om:

It sounds like it, right? But no, I'm not saying that. I'm saying ignoring product risk until you've burned three sprints is how you end up building features that nobody wants.

Brian:

Oh, okay. But aren't we validating with users every sprint?

Om:

Validating, sure. But what are we validating? We're validating the wrong risks. You're testing usability when nobody's willing to pay for, or even wants, the stuff you're building in the first place.

Brian:

I mean, yes. We are.

Om:

We're going to production every sprint,

Brian:

Well, we're going to production with the things that nobody wants every sprint. Yeah. So how do you know what risks to prioritize over other risks to test with the audience?

Om:

Ah, I'm glad you asked, 'cause that's the whole point of this podcast.

Brian:

Welcome back to Arguing Agile. If this is your first time, I'm your host Brian Orlando, product manager extraordinaire, and this is my co-host, Enterprise Agility coach and the Sultan of Swing, Om Patel! Today we have another listener question to answer. The question was about product risk analysis in agile environments. Most teams will waste weeks or months building the wrong thing because they're not managing risks, or building something in an incorrect manner for the market. Again, it goes back to the four Marty Cagan -ilities that we're gonna cover in this podcast. But that was the spirit of the question: how can you do product risk analysis and not get yourself in trouble with the executives?

Om:

Oftentimes delivery teams do look at risks, but they're not product risks; they're execution risks. So that's where we're gonna separate ourselves a little bit here.

Brian:

Or you get stuck just talking about technical risks. Yeah. But there are more than just tech debt risks. So these are all gonna be part of this podcast. Our commitment by the end of this episode: you'll know exactly how to identify product risks that actually have the potential of killing your product. You'll know how to prioritize them without reverting to waterfall, 'cause that's my biggest concern going into this podcast. And then you'll have some sort of structure to better communicate with your stakeholders, instead of creating a 47-page risk register, whatever the heck a risk register is.

Om:

Hopefully you'll be better positioned to hit the dart board, or at least be in the same zip code as the dart board. We'll see.

Brian:

Alright, let's dig into the four risks that actually kill products. And actually, a lot of teams don't manage risks; they just do what they're told because they wanna keep their jobs. That's what I'm saying. While you're worried about technical debt and sprint velocity and things like that, there are other threats to the product out there, hiding in plain sight. Marty Cagan wrote books about them. So what is product risk? It's the possibility that something about the product (what it does, how it's built, how it's received) could lead to product failure. That's the easy version of product risk. So we have a few points here for the steel man, or the case against, depending on what world you're living in. I much prefer steel man. I'll say right now, I much prefer our normal steel man in red with the special font; it's so much better than this. Anyway, we're trying new things. The case against here is two points. Number one, risk management creates analysis paralysis and causes delays. The second one is that agile teams discover risks through iteration, which I feel is a strong contender for a steel man: hey, get all the way off my back with all this, we're discovering risks as we go along. And also, we can't do all the risks upfront, because you'll have 87 risks and we'll spend a month and a half just

Speaker 3:

Yeah.

Brian:

dealing with risks. So those are the two steelman points. Let's talk about those.

Om:

Sure. So this first one, analysis paralysis: I mean, that's real, right? But at the same time, I would say you don't just go in with eyes closed. There is a balance to be hit. Analysis paralysis happens when, to your point, you compile an exhaustive list of all the risks, regardless of impact or probability, and go through them all. You'll be in meetings all day long instead of doing real work. So I can understand stakeholders getting frustrated when all they're seeing is this sort of stuff happening instead of real progress. That's one of those things where I think every team has to figure it out. But I'm not talking about execution risks here; I'm talking about product risks. So you work with your product people, and involve sales, because they know the landscape out there: the competitive landscape, what the market looks like. And look at what might hit you next, through iterative discovery, as opposed to not planning. Again, on this one too, you don't just close your eyes and say, we'll just do stuff and then we'll find things and deal with them, 'cause it's often too late. You've already run into walls you could perhaps have avoided. So on that one as well, if you have a risk repo of some kind, a list, whatever it might be, classify these risks: what is the impact of each risk? What's the probability of it happening in the first place? And based on those two things you can see what you should be talking about now. And "now" doesn't necessarily mean before you start work; this is ongoing. So let's say your engine's running, the teams are rolling along, delivering stuff. Maybe set aside a little bit of time to look at the risks that are high impact, high probability, and just those. You shouldn't be in meetings all day with those. If you are, you've missed something further back.

Brian:

I was at a company one time as the product manager, and what I did was I tried to add this at the epic level. It never stuck organizationally; the rest of the product managers, the director of product or VP of product, whatever it was at that organization, never adopted it as a strategy. But what I did was, in each epic, before we started spinning off a bunch of stories, I had a test for each of the four Marty Cagan -ilities that I'm showing on the screen right now: value, usability, feasibility, and business viability. I had them as tests: how are we gonna mark off that the customers actually care about these things? Well, I guess customers don't care about every single one. Customers care about value and usability. Feasibility is our technical team. Business viability is our quote "business", air quotes, more sales and product than anyone else. But they are distributed. Why I have some trepidation with this category is that I could see these four things turning into phase gates, and now we're back in waterfall. I'm very concerned about that. But it's just four little things, like: hey, you said I need this new feature to be competitive in this market, or the market's changing, Brian, you need a new AI-powered X, Y, Z, whatever it is. Well, I can still run these four categories against the AI-powered X, Y, Z feature to see if that's really what people want, before I go all in and spend three months with my dev team.

Om:

Yeah. The value often is two-sided, isn't it? There's what the customer sees as value, and internally to the org it's profitability, maybe. So to your point, the slippery slope to waterfall is real. So you put these in the epic, let's say. I'm wondering what kind of success you had, even just at the team level, because when people see that and there's visibility there, they're gonna look at it and say, we're not gonna work on this till all of those things are checked off, as opposed to really hitting it quickly. And you're not saying that we stop looking at these once you start your epic, right? This is ongoing. It's integral to the work that you're doing until the whole epic's delivered, because at any point you could encounter something new. Perhaps feasibility, let's say: the tool sets that your teams are using suddenly get deprecated. Anything could happen is my point. So I'm interested in two things. One is that often teams have a separate artifact to track risks, and that artifact has no bearing on the backlog.

Speaker 3:

Mm-hmm.

Om:

So the development team doesn't see those. They're just there for maybe the PMO, the project manager, et cetera, to deal with. So I think there's two things, right? One is, what success did you experience moving this stuff into the epic where the teams can see it? And two, why didn't the organization like that enough to adopt it at a greater scale?

Brian:

So in this section we gave you the four categories, the four Marty Cagan categories. We hit those early, because ignoring the basics doesn't make you more agile; it just makes you more reckless. That's what I'm saying.

Om:

Yeah, you can go somewhere fast, but where are you going?

Brian:

Yeah. You know, hey, we gotta move fast and break the law, Om. That's what I'm saying. So, takeaways. Let's get to some takeaways here. The takeaway I wrote down was a quick 15-minute risk triage exercise. Risk triage! Everyone loves a good triage. That's what I'm saying. It's like the old consulting two-by-two quad. Here we go, two by two. That's right. So: list your current product assumptions. You write down, hey, what has to be true for our product to succeed? Then you map your assumptions to a risk type, the four Marty Cagan -ility risk types; you just do a quick mapping or grouping or whatever you wanna call it. Then you rank them by speed to death, top to bottom: if we get this wrong, this is how fast we die. And then you get a good idea of the most important thing to test first. You take that and you test it. And if you're looking at me saying, well Brian, how do I even test whether people will buy at this price or whatever? Congratulations, you've got a real test on your hands. If your question is, I don't even know if I'm the right person to do that test? Yeah, that's the test you should be doing.
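(Show-notes aside: the 15-minute triage Brian describes maps naturally to a few lines of code. This is only an illustrative sketch; the assumption texts, risk-type names, and months-to-death numbers below are invented, not from the episode.)

```python
# Sketch of the speed-to-death triage: list assumptions, tag each with one
# of the four Cagan risk types, then rank by how fast being wrong kills you.
# All data here is hypothetical, for illustration only.

CAGAN_RISKS = {"value", "usability", "feasibility", "viability"}

def speed_to_death_triage(assumptions):
    """Rank assumptions so the one that kills you fastest is tested first.

    Each assumption is a tuple: (text, risk_type, months_to_death_if_wrong).
    """
    for _, risk_type, _ in assumptions:
        if risk_type not in CAGAN_RISKS:
            raise ValueError(f"unknown risk type: {risk_type}")
    # Fewer months until death means higher urgency, so sort ascending.
    return sorted(assumptions, key=lambda a: a[2])

assumptions = [
    ("Customers will pay $20/month", "viability", 3),
    ("The team can ship the AI-powered feature", "feasibility", 9),
    ("New users can onboard without hand-holding", "usability", 6),
]

for text, risk, months in speed_to_death_triage(assumptions):
    print(f"{months:>2} mo  [{risk}] {text}")
```

The first row that prints is the assumption to go test this week; everything else waits.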

Om:

Yeah, I agree with that. You don't even know what's on the other side if you don't do that test, so.

Brian:

That's true. Section one's interesting because, again, you could easily see a lot of what we just talked about backsliding into waterfall.

Om:

Yeah, that is a slippery slope to be avoided.

Brian:

Let us know in the comments: is it a slippery slope? Is it easier than we're thinking? Let us know. Okay, let's move on to the next section: why risk registers are theater and what to do instead. So, I don't even know what a risk register is. Your stakeholders want a risk register. Your PMO wants a RAID log. Your executives want a risk mitigation plan. I've been here many times where people ask me, I need a risk mitigation plan. And I will tell you right now, I look at them and I'm like, what are you talking about? What planet are you from right now?

Om:

Yeah. Most of those emanate from the execution side rather than product.

Brian:

Yeah, I think it comes from the executive side of wanting to cover our behind when we're going into something risky. That's where I think it comes from.

Om:

There's a lot of truth in that.

Brian:

Because on the product management side of the house, none of these even help me. They don't help me like what we did in section one, where we said we'll take the four -ilities, do a map, and find out what's the quickest speed to death if we get it wrong. And then we'll just start going out into the field and validating those assumptions one by one. And if the first assumption is that there's nobody out there willing to pay $20 a month for this thing, we're done, right? Let's move on. Which is a hard pill to swallow for a lot of people. And that's where we end up here: well, no, I need you to do a whole lot more bureaucracy because I don't believe you.

Om:

You mentioned the word theater. I think that's exactly what happens with these risk registers. A lot of time is spent on the artifact itself, updating it and all of that, instead of looking outward. Treating things as hypotheses and running experiments? None of that really happens, most often. It's just, what are we doing about this risk? Or ranking these and re-ranking these. And what other risks can there be? So now we're ideating on risks. We don't even know what risks we're gonna have in the future.

Brian:

Well, I'm glad you brought that up, because in Escaping the Build Trap, Melissa Perri argues that traditional risk documentation creates a false sense of security while delaying the actual learning. And as you know, on this podcast no one gets to throw stones at Melissa Perri.

Om:

Yeah, I think that's a solid, strong point she makes there. Traditional risk documentation lets you hide behind the facade of "we're managing risks." And what happens at the team level? There are risks on the execution side; we're ignoring those for now. Product people are doing things, and they're simply creating these updates to leadership, and things get diluted.

Brian:

Well, you're going straight into the steelman, which I will tell you I think is a strong steelman point: documented risks create organizational alignment. I understand what Melissa Perri is saying, and I'm not saying she's wrong, 'cause I would never dare say that on a podcast. But especially if you have a PMO, they're gonna want these things. They want these risks documented, and they're gonna want to hand these risks off to people. Even in the world I started with in section one, where I said we have these -ility risks and we're gonna test 'em one by one: you can take that concept, fast-forward it, and easily end up with, well Brian, what's the problem with just putting them in a spreadsheet and calling it a plan? That's where I'm going with this one.

Om:

So, yeah, this is where most teams devolve, right? And so then the updates to product leadership would be, we've got this, exactly right. And then the updates from those leaders, the mid-level I'll call 'em, to the senior execs would be, there are no risks. Because that's not what you want to relay up to the chiefs, right?

Brian:

We've ROAMed them all. Yeah, that's right.

Om:

When in ROAM!

Brian:

When in ROAM. And then the other side of this is executives. Executives need visibility into what could go wrong. This is not even to mention that some companies might start a financial clock on betting on these features, and there might be a financial cost to things: capitalized expenses that need to go on one budget or get written off, the tech tax type of stuff that companies do. All of that's in the steel man: listen, you need to document the risks in order for us to organizationally start some processes going, and for departments and leaders to have accountability. And the other side of it is executives need visibility, and without some kind of documented plan of how long it's gonna take and how much it's gonna cost, these traditional documents that I rail against on the podcast... you know, it's tough. "You're not living in the real world." That would be the case against.

Om:

Yeah, definitely. So, what is our position?

Brian:

I don't even know what a risk register is. I've done ROAM and I've done, like, a RAID log and ROAM or whatever they're called. But again, they're all nonsense, because your most damaging risk is the first one you want to tackle. Hey, no one's gonna buy this at this price point, or no one's gonna buy this without these features in place. And then you go test it on the market. You have to actually go talk to people.

Om:

The risk register is nothing more than a list of your risks, but classified according to the probability of the risk happening and the impact should the risk materialize. So if you have no idea about either of those two, the probability or the impact, you need to go do some legwork. And that's the experiments you were talking about earlier: go test against the market until you have a good sense of the probability, and of what the impact is should it happen. The highest-impact ones are the ones we're saying should be tackled first. So your risk register could be four pages deep, but you're not going to worry about all of those. You just look at the top ones, however many: a few, a handful, the ones you can actually run with. And that's a continuous exercise you do on a routine basis; that's really what you can do about risks. What's gonna happen is new risks will come up, and then you'll just reshuffle the order of what's in your risk register.
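(Show-notes aside: one minimal way to sketch the probability/impact ranking Om describes. The 1-to-5 scales, field names, and example risks are assumptions made for illustration, not a standard.)

```python
# Hypothetical sketch of a risk register ranked by exposure, where
# exposure = probability x impact (a common convention, not the only one).

from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: int  # 1 = unlikely .. 5 = near certain
    impact: int       # 1 = annoying .. 5 = kills the product

    @property
    def exposure(self) -> int:
        return self.probability * self.impact

def top_risks(register, handful=3):
    """The register can be four pages deep; only the top handful get worked."""
    return sorted(register, key=lambda r: r.exposure, reverse=True)[:handful]

register = [
    Risk("Nobody pays at this price point", probability=4, impact=5),
    Risk("Key library gets deprecated", probability=2, impact=3),
    Risk("Competitor ships first", probability=3, impact=4),
    Risk("Onboarding confuses new users", probability=3, impact=2),
]

for r in top_risks(register):
    print(f"{r.exposure:>2}  {r.name}")
```

When a new risk comes up, you append it and re-sort; the "reshuffle" is just the sort running again.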

Brian:

It's time for that part of the podcast where I say what I think I heard. What I think I heard was that rather than the risk register, this risk document, this risk Excel spreadsheet, being a one-time snapshot, you could fit your experiments into that format. If your PMO is really pressing you, you could just categorize by risk: these are the experiments we have run over time to prove, disprove, or steer these risks.

Om:

Right, and yes, I agree. And that's really what stakeholders mean when they ask for the risk mitigation plan: we've looked at this, we think these are not real, or these are going to happen and here's what we're gonna do about 'em when they do.

Brian:

Yeah. If you're in a more traditional organization and they want a risk mitigation plan, do you think they would push back if you were to ask them: hey, we can give you a risk mitigation plan, but it's gonna take time for that plan to build. Basically, I need to take the team offline to go explore each risk a little bit. That is the plan.

Om:

So for every row in this spreadsheet, every risk is what I'm saying, you have those attributes: the probability and the impact. And right next to that, in the adjacent cell, would be the risk mitigation plan, or next steps. And that's where you would say things like, well, we need a couple of weeks to test this out, whatever it might be. But this document, whatever it is, needs to be kept updated. So two weeks later you say, here's what we learned; the impact is now higher or lower.

Brian:

Yeah. See, my story about this one: I've been involved in organizations that were very waterfall-based, where they wanna do all their planning in this one period of time. And after the planning phase closed, and they wrote it down and their book said the planning phase is over, then they were in the implementation phase, and you weren't gonna go back to planning during the implementation phase, right? Because there are financials accompanying the software development, the SDLC in this case, in those organizations. And maybe that's an outdated experience; modern startups and the like would say, what are you talking about right now, Brian? But larger organizations do this. They say, before I commit multimillion dollars to this project, I need you to do all the risks upfront and put them in the plan. There's gotta be an in-between. That's my war story here: I've been on projects where they're trying to, quote, be agile, but the specter of this is still over their head organizationally. Maybe they have a PMO that's exerting influence, so there's a constant fight going on behind the scenes: when a risk crops up that wasn't handled six months ago or three months ago or whatever, now there's a big fight about it, everyone's upset, and they're blaming agile. But I'm like, it's not agile; we would've run into these risks no matter what we were doing.

Om:

These risks are basically like trains; they're gonna come at you, right? So when a PMO or some such is exerting that kind of influence, it just makes me wanna not put anything on the risk register that I don't have to defend, which is terrible, because now you're running with your eyes closed. As opposed to having a dynamic type of approach, whether there's a PMO or not: put those risks down, classify them, go figure out what to do about 'em. And that's gonna be an ongoing exercise throughout, so your risk register will always be changing.

Brian:

I have a takeaway I wrote down; I'm interested in what you think about it. I didn't run this by Om before I put it up here, so I want to know what you think. This is my version of a risk register. I call it a learning board, and it has three columns: assumption, test, and what we learned. That board should be transparent to everyone. Anyone that's interested in seeing how I'm dealing with risks can look at my learning board, or risk mitigation board, whatever you wanna call it, and I'm gonna rank the rows top down. They can look through all my stuff, but the priorities I'm gonna consider are my top three, using the exercise from part one of this podcast: what would kill us the fastest? I'm gonna update it weekly, 'cause we've got software development efforts going on, we've got other things going on. I'm not gonna update it daily, I'm not gonna commit to that, but weekly, ready for my weekly status meeting or whatever we have at the company, I'm gonna update this with our experiments. Hopefully it'll be real time, but I'll commit to weekly. And this is what I'll present instead of those artifacts about how we're managing risks: hey, this is my board; this is a risk that we're showing we're mitigating. I don't know if that would be enough for most people. I dunno if they want more bureaucracy than that, more documentation.
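(Show-notes aside: the three column names come straight from the episode; everything else below, the row data and the plain-text rendering, is a hypothetical sketch of what such a board could look like as a radiator.)

```python
# Sketch of the three-column learning board: assumption | test | what we learned.
# Row order is priority order: the top row is what would kill us fastest.

def render_board(rows):
    """Render (assumption, test, learned) rows as a plain-text table."""
    header = ("Assumption", "Test", "What we learned")
    # Each column is as wide as its widest cell (header included).
    widths = [max(len(r[i]) for r in [header, *rows]) for i in range(3)]
    out = []
    for row in [header, *rows]:
        out.append(" | ".join(cell.ljust(w) for cell, w in zip(row, widths)))
        if row is header:
            out.append("-+-".join("-" * w for w in widths))
    return "\n".join(out)

board = [
    ("Users will pay $20/mo", "Pricing page smoke test", "12% clicked buy"),
    ("Sync is feasible offline", "One-week spike", "Works, but conflicts are hard"),
]
print(render_board(board))
```

Post the rendered table wherever your weekly status lives; it replaces the risk register artifact rather than adding to it.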

Om:

I think it depends on the organization. Essentially, what you've just outlined here is just-in-time discovery and updating of the risks. So every week, whatever it is that you can do to update it is fine. You learn about a new risk this morning? Are you gonna wait a week? No, you're gonna go update it now, because presumably this is a radiator that's available for everyone to see at all times. It's not something you just whip out during the risk review meeting; by the time you meet, however often that is, a risk might have already been actioned because it was something substantial. So whether it's this or an elaborate spreadsheet, if you're just looking at the items that matter most, that's really what we're saying, right? The items that matter most in terms of impact and in terms of what we've learned. Update that, and then those that were high impact would presumably diminish; the impact will either diminish or go away altogether, 'cause you validated it successfully based on a hypothesis-driven experiment.

Brian:

So I'm gonna throw it to the audience now. What do you think: am I being overly harsh on the risk register? I'm interested to know what you think. So, you've identified a bajillion risks. Your backlog is full of risks, you have these registers we're talking about, and your stakeholders want to address all the risks upfront before shipping. How do you actually operate in that environment?

Om:

Keep the cards close to your chest before dealing them, I guess. Yeah. So stakeholders are really looking for a comfort level; that's what they're looking for. So instead of worrying about all the risks, and I think this is almost a repeat of the previous point, just look at the ones that are emerging and are high impact now, and deal with those. You can't possibly deal with all the risks. You don't even know all the risks. I mean, one approach is, you know, go back to those pesky stakeholders and say, well, why don't you go make a list of all the things that could potentially go wrong? You can't do it.

Brian:

This category is about prioritization, because you already know you can't deal with all the risks. Truly, all of them. So how do we prioritize? Do we have some suggestions?

Om:

I think the risk that's gonna kill you the fastest is obviously front and center, so deal with that first. Prioritize by speed to death: high impact, high probability. Those two measures mean you need to deal with those first.

Brian:

The against points here: obviously we should write down and note all risks. That's your PMO; we don't wanna be caught out by anything, 'cause it could surprise us and come back, and then we get this documentation culture. And then the other one is skipping risk analysis: the idea that once you start skipping one risk analysis, or saying, oh, these are low-priority risks, then once the risks are open to interpretation, you get to basically skip whatever you want. You don't need to worry that customers won't buy this solution; just build what I say, customers would

Om:

always buy it, right?

Brian:

Because...

Om:

nobody ever got fired for buying IBM.

Brian:

Those are the steelman points. The thing about this "all risks deserve attention" and "how dare you not document a risk" stance is that we're relying on your judgment about what to test on the market and what not to test, because we feel good about it. This is sort of the intuition-versus-evidence conversation, right? A little bit, just a tiny bit. But I did wanna bring my own perspective to this category and ask the question: are we testing against the market here, or are we playing a bunch of games? What about the low-probability risks, the ones that based on our best evidence and best understanding of the market are low probability, if we're in this PMO, gotta-check-all-the-boxes culture, or in a low-trust culture where you gotta CYA all over the place? Yeah. The speed-to-death thing I agree with, but you're gonna be so overwhelmed with all these little boxes you're trying to check.

Om:

I agree. since this is the first time on this podcast where our advice is to keep that resume updated, if you're in that kind of situation,

Brian:

It is. I have some quick notes here that I just wanna talk about. This whole section, and the advice that follows in the takeaways, is gonna follow an impact/confidence matrix: impact on one axis, confidence on the other. That's a real loose-and-fast framework you can use to decide what things we're gonna test and in what order. Again, you have speed to death, and speed to death should follow this same framework, right? High impact, high confidence: let me test that stuff and get it out quick. And then every sprint you should be retiring a risk, if you absolutely have to deal with all the risks first, if you're in that kind of environment,

Om:

you'll never get started.

Brian:

you won't, yeah. You'll never get started. But if you can knock one out a sprint... I mean, if a sprint is two weeks and you're knocking out one risk a sprint, that's a snail's pace in my opinion. If I were working in that organization and you weren't even moving that fast, I'd be worried, just like I'd be worried if you weren't dealing with any of these risks and were just saying, wow, our VP told us this is important, so we should do it

Om:

first, right? Yeah. The bottom line is you've gotta be whittling down those risks on a regular basis; for the most part it should be every sprint.
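(Show-notes aside: the impact/confidence matrix from this section, sketched as four quadrants. The quadrant labels, the 0-to-1 scales, and the 0.5 cutoff are arbitrary assumptions for illustration.)

```python
# Sketch of the two-axis impact/confidence matrix: which risk tests
# to run first, which need discovery, and which boxes not to check.

def quadrant(impact: float, confidence: float, cutoff: float = 0.5) -> str:
    """Place a risk test on the impact/confidence matrix (0..1 scales)."""
    high_impact = impact >= cutoff
    high_confidence = confidence >= cutoff
    if high_impact and high_confidence:
        return "test now"        # knock it out quick, retire a risk this sprint
    if high_impact:
        return "discover first"  # high impact, low confidence: needs legwork
    if high_confidence:
        return "quick win"       # easy to confirm, but it won't kill you
    return "park it"             # low impact, low confidence: skip the box

print(quadrant(0.9, 0.8))
```

Speed to death then orders the "test now" quadrant: among the risks you're confident matter, the fastest killer goes first.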

Brian:

Yeah. No, I don't have a story about getting stuck, but I have a ton of stories where the company ignored risks and then missed the market, or missed the feature, or missed whatever. I have been at a couple of companies that took a lot of risk upfront. When the whole world started moving towards mobile development, going all in on mobile without a strong signal was one risk that turned out good. Betting early on Stripe, when Stripe was a real young company, was another, and it was a pretty easy bet, 'cause it was very difficult to just take credit card transactions over the internet back in 2012 or whatever. So I can think of a lot of the opposite of this, where we didn't spend a ton of time running down risks; we just picked one.

Om:

Yeah, I've got one to share with you from my past experience in the publishing industry. Newspapers were pretty profitable back in the day, meaning back when newspapers were quite thick, right? Over time they started losing advertisers as the internet came about and people started moving online. You didn't have to wait 24 hours before you'd pick up a newspaper to see an ad from AT&T, a full-page ad, for example. The risk was that their entire world was about to turn upside down. But they were hiding behind this "we're too big to die, we've always been around, since like 1827 or whatever," right? They didn't realize how quick that death spiral can be. So today a lot of those big, massive titles are gone.

Brian:

On the takeaways here, the speed to death prioritization framework, I mean, that's your best planning tool right here. I've got 'em on the screen.

Speaker 3:

yep.

Brian:

This requires you to be a little bit open about listing your, I say top 10, you don't have to do 10 risks, but list your top risks. Be as transparent about it as possible. Let people contribute to the list. Ask yourself, hey, if I'm wrong about this, how fast do we decline? Focus on the fastest declines first, to try to knock those risks down. We talked about this in a previous section, so I'm not gonna spend too much more time on it, 'cause again, you're trying to trade this off against future feature enhancements and one-off dev and customer support, whatever else your team might be working on. So you're not gonna get to it all. I'm just saying, use this as a framework to get started, and then as you adapt it to suit your needs, change it, do whatever you want with it. But start with something like this, you know? What's the biggest risk that, if we miss it, we'll all be out of business? Start with that.
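
The takeaway above, plus the retire-one-risk-per-sprint cadence, can be sketched as a tiny planning helper. The risks and the months-until-it-hurts estimates are made up for illustration; any real list would come from the team's own top 10.

```python
# "Speed to death" planning: list your top risks, estimate how fast the
# business declines if each assumption is wrong, and retire the
# fastest-declining one each sprint. All names and numbers are invented.
def sprint_plan(risks, sprints=3):
    ordered = sorted(risks, key=lambda r: r["months_until_it_hurts"])
    return {f"sprint {i + 1}": r["risk"] for i, r in enumerate(ordered[:sprints])}

top_risks = [
    {"risk": "nobody pays at this price point", "months_until_it_hurts": 3},
    {"risk": "integration partner API is unstable", "months_until_it_hurts": 9},
    {"risk": "support load swamps the team", "months_until_it_hurts": 6},
]
```
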

Om:

Yeah.

Brian:

So anyway, let's cap this section off by asking like, how do you prioritize risks? Do you have a framework? Do you have a different strategy? Drop a note in the comments for us and let us know.

Om:

Alright. While you're there, like and subscribe.

Brian:

Ooh, good point. So now let's talk about the old elephant in the room. Is there an elephant in the room, Om? I don't know,

Om:

an elephant and a hippo? Oh my God, we've hit the jackpot today. I can't

Brian:

Like, yes, two. We're on a safari. That's what I'm saying. So what happens when your biggest risk is that your team doesn't have the skill to build what you need? This is one of the Marty Cagan-isms; technical feasibility was one of the categories. Nobody wants to say it out loud, but sometimes the biggest risk in product is not the marketing or the technology. It's your own team. That's the technical feasibility risk, and it's what to do when you're not sure that you actually can build what you want to build. So Marty Cagan, he looks good on the podcast, doesn't he? He identifies the feasibility risk as one of the four critical risks. It's defined as: can we build this with the time, skills, and technology that we have? So again, as far as you and I are concerned, this falls squarely back on the team, potentially more so than any of the other -ities.

Om:

Yeah. This is one of those things where, in some organizations, this is a risk that a project manager or somebody like that would pick up on and say, we gotta do this, but we don't have the right skills available. But here we're talking about it from a product perspective, right? A product person may not know what skills are required, unless they're technical like yourself. Or they may know, but they may assume, we're a big organization, we'll just get the right people lined up for

Brian:

So if I'm just gonna walk in the room as an executive and say, dark mode is super easy, it's just a toggle, right? Exactly. That's super easy. If I'm going straight into the steelman, oh boy, I'm gonna have a great time doing it, because the steelman is particularly pedantic today for this category. It says: listen, teams grow through me, the leader, challenging their capabilities. And also, acknowledging skill gaps and saying this team's incapable of doing this thing, that just damages team morale. That's all those things do. Okay? That's all those things do. So let's not spend a bunch of time on that. Yeah. Let's not bring that up publicly, right?

Om:

Yeah. Let's not bring that up. You're describing the type of executive who always has stretch goals for the team, a team that's even struggling to meet the other goals, right? The ones that are not the committed goals. These are the types who hover over a developer's shoulder, tap 'em, and say, listen, you're such a smart guy, can you just sneak this in there? All this planning stuff is fine, but we gotta have this. Yeah, that's exactly what happens in real life too, I see it. That can only work so far if it's a team that has very scant skills in the technology that's needed. What are they gonna do? In that environment we just painted, they're not necessarily gonna be the first ones to put their hand up, mm-hmm, and say, we don't know how to do this, 'cause that message is not well received. Yeah. Rather, they'll just say, yeah, we can do this, and then they'll turn around and start googling. Yeah. And I've seen this way too often.

Brian:

Oh, the old consultancy trick. Yeah, that's right. Exactly. You can totally do that. If I am making a genuine effort to stay in front of the feasibility risk, then I will be seeing things coming on the roadmap well ahead of time, and I'll be giving my team time to go gain the skill and learn new things. I think about this one time I wanted to introduce some sort of queuing technology. We were on the AWS stack at the time. I don't remember exactly what I needed the queuing for, but I needed queuing somehow. And the team had about a half a dozen technological solutions they could have implemented, because what I asked them to do was go out, look around the business, and bring in all the different implementations: one team was using Redis, one team was just going directly to a database, one team was using some kind of custom Lambda function, one team was actually using SNS and SQS together the way you'd expect, one team was just using SQS without SNS. There were a bunch of different implementations. I probably could have just dictated to the team, go do this, we'll figure out a better design later, let's just get something running. But I remember I gave the team, here's what I want to do in the near term, here's what it might turn into in the future, so I gave them a short-term, long-term vision. And then I said, go take a sprint, two weeks at that company, go take a sprint and recommend the option we want to implement, and then a potential second option, because I like to have multiple options about how we should implement this technologically. It required them to put their hands on some new technology. It required them to go talk to other teams. It required them to do research and proof of concepts or whatever. But the point was, I was like, you got two weeks, figure it out. You know?
And it was two weeks of just dedicated figuring it out.

Om:

If done right, that's very effective.

Brian:

that's the way that I've done it in the past.

Om:

Yeah. I say if done right, because otherwise it ends up being a two-week vacation for the developers to go do whatever they want. In that sort of exercise, where you're doing some sort of tools analysis, you need to lay down exactly what your must-have requirements are, right? You can have A, B, C, D, E, F, whatever tools there are. You could easily come up with a matrix and say, does this tool meet this requirement? You could rank 'em on a scale of one to ten or whatever. That way you're being objective, instead of going on a tech lead's opinion about what's the shiniest thing out there. Yeah. And you total 'em all up and say, this one meets all our requirements, and then factor in other things: it meets all our requirements, but the vendor's very new, small, et cetera, or has a bad record for customer service, whatever it might be. And you could quickly, to your point, come up with a short list of one or two and go with it. So it can be done.
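
The requirements matrix Om describes is easy to make concrete. The candidate tools, scores, and weights below are entirely hypothetical; the point is scoring against stated must-haves instead of opinion.

```python
# A minimal weighted decision matrix: score each candidate tool 1-10
# against each requirement, weight the requirements, and total them up.
# All tools, weights, and scores here are invented for illustration.
REQUIREMENTS = {"ordering guarantees": 3, "team familiarity": 2, "ops cost": 1}

CANDIDATES = {
    "SQS":   {"ordering guarantees": 7, "team familiarity": 9, "ops cost": 8},
    "Redis": {"ordering guarantees": 5, "team familiarity": 6, "ops cost": 6},
}

def rank(candidates, weights):
    # Weighted total per tool, highest first.
    totals = {
        name: sum(weights[req] * score for req, score in scores.items())
        for name, scores in candidates.items()
    }
    return sorted(totals.items(), key=lambda kv: kv[1], reverse=True)
```

Qualitative factors (vendor maturity, support record) can then break ties on the short list, as Om suggests.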

Brian:

Yeah. The thing coming outta that story: we wouldn't want to go with a queuing solution that absolutely none of our team had any kind of expertise in, where everybody had to learn it from scratch. A lot of those kinds of conversations would happen in that time, and that's what the time is for. Yeah. That time is for us all to get on board and decide what we're doing.

Om:

Yeah, and knowledge would be just one of those factors, right, on the grid that I mentioned. Right.

Brian:

So I have some takeaways here. Gather your tech lead and two or three team members. Have your sprint review for the item. Talk about what skills, tools, and knowledge we need, and what we don't have. And then talk amongst your team about your plan to either train or hire into those tools, or partner with other teams or other companies. We haven't brought that up, 'cause I've kind of stayed in the box of one team in this podcast, but it could be that we just partner, like the example I brought up earlier where we partner with Stripe. We're not gonna write a billing system, we're just gonna partner with Stripe, sure, and pay whatever dollars per transaction, or whatever the Stripe cost is, so we don't have to deal with keeping all this stuff

Speaker 3:

Yeah,

Om:

And one of the benefits of this approach is, if you have to go back to the well and ask for funding for learning, this quantifies it nicely for you.

Brian:

Yeah, that's true. Point four here is to present the feasibility risks to your stakeholders. Your mitigation plan is, hey, we're gonna spend this much money and then we don't have to deal with it. Whereas if we decided we were just gonna home-grow our own solution, it's gonna take this long, and we're not the experts at it

Om:

very open-ended.

Brian:

Yeah. So there's some good takeaways here in this category. So we covered value, usability, feasibility. Now we're gonna talk about aligning your stakeholders to actually care about the right risks, which I feel is its own challenge.

Om:

Indeed.

Brian:

Okay, your executives are worried about the wrong things. They want detailed project plans with analysis of risks that nobody cares about, or that your product people have no need to implement; they already know they're not gonna implement it. So how do you redirect your stakeholders' anxiety towards risks that actually matter?

Om:

The other side of it is, executives say they need that because they want predictability. So if you say, well, we're gonna go figure this out, take a week, a sprint, whatever, they don't wanna hear that.

Brian:

No.

Om:

Right? So what's our position here?

Brian:

Well,

Om:

They're fixating on those risks because the product risks feel abstract. So they fixate on project risks, 'cause those are easier to grapple with. They've dealt with those before. Whereas a greenfield product, who knows?

Brian:

I'm glad you asked that, because it gives me the opportunity to put Teresa Torres on the screen. And she looks good on the podcast too. I really like Teresa Torres, I like her stuff. She's very grounded, and I like the way she writes too. Teresa Torres, Continuous Discovery Habits. She emphasizes that stakeholder management is about making your learning visible, not necessarily managing the stakeholders or managing their emotions. It's more about bringing them into the process.

Om:

Well, the irony of this: you make the learnings visible, and it automatically deals with that discomfort they have, 'cause now they can see, and we can have a discussion around it.

Brian:

So you're going straight into the steelman. The steelman has two points here. One: the stakeholders have legitimate concerns about risk, okay, and a lot of different types of risks, which I'm not gonna spend too much time on, because I feel we covered that in the rest of the podcast. Mm-hmm. But then two: executives need predictability. I mean, they are interested in your experimentation, sure, but that's kind of like a running log. They need predictability: hey, finalize your stuff, send it to me so I can, like, go to the casino and bet on these things. Even though software's a terrible business, yeah, for that kind of outlook. And even you just brought that up, like, well, your executives, they need predictability.

Om:

They do. They do. But, you know, at what cost? Are we going to simply pacify the predictability aspect of it by taking shortcuts on validating things? Well, I mean, that's the question.

Brian:

Well, I wrote some notes down for this exact purpose. I said, stakeholders fixate on these risks because software development is just an ambiguous business, and it's very abstract. And the risks here are very abstract. Mm-hmm. Well, what's your confidence level that people are gonna buy this if we do X, Y, Z? I mean, how can you even answer that?

Om:

You can't answer it unless you've done some sort of experiment and have some evidence to back it up. Otherwise it's one man's opinion again. Or person's,

Brian:

You have to do a bunch of work to get to a real answer there, and then, well, you can't start the work unless the risks are done, but you can't do the risks. Yeah. Anyway, things are getting weird is what I'm saying. I think I said this in a previous section, about the risk register: you're making the risks visible to the stakeholders and that group that you're going to, and then you're reporting on each experiment, 'cause you've already ranked things, right? Yep. And hopefully you've brought your ranking to them and been available for them to challenge you, like, why is this the top priority and not this, or whatever, and explained it. Assuming you've done that, it becomes a process of just going back to 'em every week and saying, we've validated this, or we're still looking at this, and this is what the tests are showing.

Om:

Yeah. And this is where that risk we validated against stands now. So things will change from whatever we put down initially,

Brian:

The nice thing about this is that, just with that conversation, you're already answering what the stakeholders are gonna ask, which is: are we on track?

Om:

When's it gonna be done?

Brian:

That's right. I mean, you're answering that. It's like, well, it'll be done when all of the risks are validated, which you can see on this list here. But again, with the staged approach where all the risk validation is done before we start, I have a problem there; I'm gonna push it off to get through the rest of this category, because theoretically you don't need to retire every single risk before you start building. You don't. Theoretically, some of the risk can be mitigated while you're building, or through the way that you build. So I'm assuming there's some execution in this that makes it not all ephemeral. Right.

Speaker 3:

For sure.

Brian:

And then the one that everyone is bad at: showing the cost of not testing risks. I wanna do a whole separate podcast on this: tracking what a failed launch or failed feature or whatever would cost the company. 'Cause nobody's good at that. Because there's a potential amount of money that you will burn on failing to do something, and I haven't been in a company where we talk financially that way.

Om:

No. I mean, you see the opposite, right? Yeah, failure is not an option here. That's right. At Acme Inc.

Brian:

We were willing to burn any amount of money. That's right. To not say that we failed. That's right.

Om:

Well, the most common story I can share is when stakeholders are always focused on execution risks, right, and they're sometimes totally ignoring product risks, the -ities we talked about. The project is now real because it got sanctioned, and it's authorized and financed, so let's go, yeah. And then the risks are all around execution, execution, execution, technical

Brian:

Yeah, I would agree. The majority are technical feasibility, and the minority are on business viability. Sometimes usability is considered, especially if you have strong product design, I'm sorry, strong UI/UX folks on your team; then you can cover that a little bit. But boy, I've seen some high-level developers be very overbearing about UI/UX, and then you don't even get to test the usability, because they say, oh, every user wants X, Y, Z, we're just gonna do that, and it's not even open for discussion. Yeah, I've seen that too. This is an interesting one: there's a good chance that your stakeholders don't want risk management the way that we're talking about risk management. They want a feeling of control; that's doubly true if they're executives. So you have to give them evidence. Yeah, regardless

Om:

Absolutely.

Brian:

So I had a quick takeaway in this category: risks learned this week. You can do a quick one-page report each week. If you're a product manager, that would be good for you to do anyway. It doesn't even have to be a whole page, right? It could just be a paragraph or two, or on a dashboard perhaps. List your tested assumptions each week, that'd be good too, keep you on track. And the outcomes. And then show your evidence, like we just talked about. And those three things together should lead to a statement of intent: here's the decision we made, and that's why we made it. Yeah. I would imagine that if you're keeping those things tight every week, your stakeholders are either gonna be really bought in, or they're gonna have a lot of opinions to share and some deep conversations with you. Either way, you're keeping 'em happy.
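
The weekly one-pager could be generated from a simple structure. The field names here (assumption, outcome, evidence, decision) are our own reading of the takeaway, not a formal template from the episode.

```python
# A sketch of the weekly risk one-pager: tested assumptions, outcomes,
# the evidence behind them, and the resulting decision. Field names and
# the example entry are illustrative.
def weekly_risk_report(entries):
    lines = ["Risks learned this week:"]
    for e in entries:
        lines.append(
            f"- {e['assumption']}: {e['outcome']} | "
            f"evidence: {e['evidence']} | decision: {e['decision']}"
        )
    return "\n".join(lines)
```

Even a paragraph-length version of this keeps the conversation anchored to evidence rather than status theater.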

Om:

In the latter case, right, when they have comebacks, pushbacks or whatever, you're better off, 'cause you already have evidence, right? So you can stand behind that, right?

Brian:

So those are the takeaways. Okay. So what do you think about this category? Can you manage your stakeholder anxiety around risk? Like share your tactics in the comments below.

Om:

Share your risk registers. No, whatever. Whatever it is that works for you.

Brian:

Don't do that, I don't wanna see them. So at this point, your users love it, your team built it, your stakeholders are thrilled, and your company's still losing money. So let's talk about business viability risk: the risk that kills products even when everything else goes right.

Speaker 3:

Yeah.

Brian:

All you

Om:

can nail all of those other -ities.

Brian:

In the planning for this podcast, I came across Product School, which is one of those outfits where you spend money and they teach you about, I don't know, sausages? Product management and sausages, I guess, that's their angle. And they actually had what I think was a good definition of business viability risk. They said: whether the product can generate revenue, fit the business model, or support long-term sustainability. Which is pretty good. I was like, ah, it's more words than Marty Cagan used, but I'll accept it. So you would think there is no way that people can push back against testing business viability, right? If it's that risky, it could sink our whole business; why wouldn't everyone jump on this first? Why wouldn't we all have The Lean Startup in our hands and our hearts and our minds? It should be written into the Constitution, that's what I'm saying. But surprisingly, some people are gonna say: hey, product teams should be focusing on users, not revenue. Don't worry about pricing, I'll take care of pricing. I know the market, you guys don't, you're not deep enough experts in the market. Om, you've not done pricing before, you can't do pricing now. We can't turn something that important, testing pricing on the market, over to you.

Om:

they've done it in their careers.

Brian:

Even they do it the same way everyone else does it. That's right. They pick a price outta the air and they throw it into the market. Exactly. Yeah. Time-tested strategy. Once you've done it once or twice, you come down from the mountain with the tablets. Wait, let me put the steelman back up. I'm not saying these steelman points are the greatest steelman points, but I am saying you're gonna run into these as real pushback, as a product manager or a dev team if you're a small company: are you saying I have no say in what pricing we test? You don't want to test that pricing in different verticals? And why are we not just experimenting with it? It doesn't need to be permanent. Why are we not doing that? You're gonna have to argue against that, number one. The other one is business model validation: that's not your job, kid. Stay in your lane, bro. That's the executives' job, or the sales folks' job, or whatever. Whereas as a product manager, the success of the entire product, including every team that touches the product, is my responsibility. So either of these steelman points, I'm not gonna say they fall on deaf ears, 'cause it's a stakeholder expressing something that needs to be dealt with. But both of those are very narrow focuses: oh, you're gonna do that in a vacuum, confirm that it works, shortcut everything else, do it on your own, and nobody else is gonna be privy to the learnings? And you think that's less risky than sending it through the normal process?

Om:

See, that's riskier.

Brian:

But seriously, I spent a long time in logistics where this was the order of the day. The dev teams did not touch this stuff. You weren't invited to those sessions. Your input was not only not necessary, but not wanted.

Om:

Yeah. Yeah. So it's not just that field; I think this is quite prevalent out there, where product is not engaged in making pricing decisions. We did a podcast on this topic, right? Yes.

Brian:

That was Arguing Agile 236: Why Product Managers Should Own Pricing, Not Sales or Execs.

Om:

Right. Business model validation is the executive team's job, as one of the steelman arguments goes. But what evidence can they present when validating the business model, right? What can they do without product?

Brian:

I'm gonna tell you, I don't know. But to expose my position here: the PMs own the viability risk, they should. I mean, who else is gonna be responsible for it? Let me say it a different way. If the PM does not own the business viability risk and somebody else does, well, isn't the product manager the person who's being delegated that side of the business? So the viability is judged by somebody else, but the product manager and their team pay the consequences when that business is gone. So, like the newspaper, if we're gonna use newspapers as the example, 'cause it's pretty straightforward for people to understand, yeah: the product managers are told, don't worry about the changing market, don't worry about the internet, don't worry about the decline in sales. You keep doing what you're doing. Let the executives figure out what our new slick business strategy's gonna be. And you just stay over there, kid.

Om:

That is exactly what actually transpired. Oh, okay. The executives were focused on retaining existing customers back in the day by creating new deals with them, right? Mm-hmm. New pricing deals with them. They forgot about the fact that there was this huge thing coming at them in the rearview mirror called the digital space, right? Yeah. They didn't really pay attention to that, and by the time they woke up, it was too late.

Brian:

That's right on our list of the positioning arguments here; it's the last one. It says viability risks kill products slowly, and by the time you realize it, it's too late to respond. Whereas if you had delegated that to the proper person and team, they could have seen disruptions happening, or could have seen the decline in the numbers early and often, which would let you pivot sooner; the team could have made the decision to pivot. Yeah. Again, this is one of those ones where, move the decision closer to the decision makers, and they'll make better decisions. Yeah. I have a million war stories about this one, and they're all gonna be super cynical, and I don't want to tell any of them, because this has happened everywhere I've worked. At some point, everywhere I've worked, there's always been this battle between the product managers and the executives slash sales slash whatever other department has a lot of power, right? Yeah. And you always get into this, well, we're gonna let them, whoever the them is, determine the viability in this case, because we don't want to get into a fight. Some tradeoffs here and there, and eventually that creep will kill an entire segment of the business.

Om:

This falls squarely into what David Marquet says about moving information to authority, right? Rather than authority to information.

Brian:

So I do have some takeaways here that I would like to just touch on quickly. You know, validate your business viability; unit economics is a good way to start. Yeah. You have your customer acquisition cost, you have your customer lifetime value. The note here on the screen says your lifetime value should be three x the acquisition cost, right? And if it's not, maybe you need to do a little deeper digging into your unit economics. Your pricing strategy has to be tested early and often. And again, if your team is just not involved in that at all, maybe this is the first time some people have been exposed to the idea that you should be testing pricing in addition to features and all the other things. If this is the first time you're hearing about it, here on this podcast, I'd be super interested in knowing what your takeaway is, like, what are you gonna do with the knowledge that you always should have been testing pricing? Exactly. And also, what are you doing instead of testing pricing?
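
The three-x rule from the on-screen note can be checked with basic arithmetic. The simple LTV formula used here (monthly revenue times gross margin times expected lifetime) and the input numbers are illustrative assumptions, not figures from the episode.

```python
# LTV-to-CAC sanity check: lifetime value should be roughly 3x customer
# acquisition cost, per the takeaway. Formula and inputs are illustrative.
def unit_economics(cac, monthly_revenue, gross_margin, lifetime_months):
    ltv = monthly_revenue * gross_margin * lifetime_months
    ratio = ltv / cac
    return {"ltv": ltv, "ratio": ratio, "healthy": ratio >= 3.0}
```

For example, a $300 acquisition cost against $50/month revenue at 80% margin over a 24-month lifetime gives an LTV of $960, a ratio of 3.2, just clearing the bar.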

Om:

building the wrong thing.

Brian:

I guess, but it's one more indicator for you to know like, we shouldn't be building this thing that no one's going to buy.

Speaker 3:

Yeah.

Brian:

You know, I dunno. I don't know. There's a lot more to talk about in this category; again, another one I feel we could have done a whole other podcast on. So we've covered the four risks. We were focused on revenue with this last one, the actual business viability. So let us know in the comments if this has been your experience, or let us know in the comments if you actually do test pricing and we are wrong.

Om:

Yeah, I'd love to know some of the techniques you use for testing pricing. So do let us know in the comments, and like and subscribe to our podcast.

Brian:

That's right. Let's wrap up, 'cause we covered a lot today. So product risk analysis: it's not about creating documentation. It's about knowing which assumptions will kill your product if you're wrong.

Om:

Yeah, absolutely. One of the techniques that we suggested is to prioritize the risks by speed to death,

Brian:

Right. So speed to death, to remind everyone: the risk that, if you're wrong, will kill your product the fastest. And I like that we ended the podcast on business viability, 'cause in my opinion, most teams ignore that one, and that's the one you probably should be testing first.

Om:

Absolutely.

Brian:

So for everyone out there, the best way to support the show is to give us a subscribe on all the platforms, give us a review, say we're five stars and awesome, and let us know what else you'd like us to talk about. Let us know, because this podcast and the previous one that went up were both directly from audience feedback. Let us know, and we'll be all about handling it in the order in which it was received.

agile product management, product management, feasibility risk, Marty Cagan, team development, risk prioritization, usability testing, continuous discovery, product risk analysis, agile coaching, business viability, product leadership, product viability