AA256 - The AI PM Competency Trap: Why AI Tools Won't Save Your Product Career
Arguing Agile
April 08, 2026
Episode 256
00:56:38 / 38.94 MB


Every PM is scrambling to learn AI tools - but is that a trap?

In this episode of Arguing Agile, hosts Brian Orlando and Om Patel summarize Shreyas Doshi's provocative article "Why Product Sense Is the Only Product Skill That Will Matter in the AI Age." Using the article as background for our discussion, we explore whether AI tools like Claude, Cursor, and NotebookLM are genuine superpowers for product managers or just the new baseline that everyone will have access to.

https://shreyasdoshi.substack.com/p/why-product-sense-is-the-only-product

We've structured this episode around several key debates, including:

🔹 AI Tool Usage: Are AI tools a temporary competitive advantage or just commoditized table stakes? When everyone has a bulldozer, the winner is the one who knows where to dig.

🔹 Strategy vs. Prompting: If you and your competitors use the same LLMs to write product strategy, how long until you're all building the exact same app? The hosts argue for using AI as a red teamer rather than an original thinker.

🔹 Speed of Conviction vs. Quality: Business leaders love saying "a fast B+ decision beats a slow A+ decision." But Brian and Om challenge this with Bezos' two-way door framework and evidence that AI-assisted workflows create 37% more rework.

🔹 Product Taste & Judgment: The hosts wrestle with Shreyas' concept of "product taste"—the ability to pick the optimal recommendation from a set of excellent AI-generated options and explain your reasoning.

🔹 The Zero-Cost Software Future: What happens when the cost of building software drops to zero? Will we end up at software bazaars with 30 identical prototypes? The hosts paint a vivid (and slightly terrifying) picture of commoditized software development.

#ProductManagement #AIProductManager #ProductSense

MENTIONED IN THIS EPISODE
"Why Product Sense Is the Only Product Skill That Will Matter in the AI Age" by Shreyas Doshi (Substack article, March 2026)
Jeff Bezos' "Disagree and Commit" framework
Jeff Bezos' one-way door vs. two-way door decision framework
Claude by Anthropic, Claude Code, Cursor, NotebookLM
HubSpot, Jira, Figma, Google Docs, DataDog
MoSCoW prioritization framework
Stanford Cooper Bench experiment (referenced from Arguing Agile Episode 247)
Arguing Agile Episode 254 "QA: Why a Massive QA Boom Is Coming"
Arguing Agile Episode 247 "A Poor Team Player - Stanford's Cooper Bench Experiment"

LINKS
YouTube: https://www.youtube.com/@arguingagile
Spotify: https://open.spotify.com/show/362QvYORmtZRKAeTAE57v3
Apple: https://podcasts.apple.com/us/podcast/agile-podcast/id1568557596

INTRO MUSIC
Toronto Is My Beat
By Whitewolf (Source: https://ccmixter.org/files/whitewolf225/60181)
CC BY 4.0 DEED (https://creativecommons.org/licenses/by/4.0/deed.en)

So today on the Arguing Agile podcast, we're going to talk about AI and product management. Is it really the genuine superpower that makes you a 10x PM, or is it just another tool, like the new gear or something? Brian, would you consider yourself an AI-native PM? Or are you still out here being a traditional PM, you know, doing things with actual humans and having product taste? Listen, I'll say yes to anything AI-related at this point. In fact, I'm not just a native. OK, codependent. I would ask Claude to spell my name if I didn't think it would hallucinate three different versions and then give me a doctoral dissertation on the environmental impact of consonants. But seriously, I'll agree to anything in this podcast as long as you don't make me use the word "taste" to describe product sense. Welcome back to Arguing Agile. If this is your first time, welcome to the podcast. I'm your host, product manager Brian Orlando. And this is my co-host, enterprise business agility consultant and the smooth operator of business operations, Mr. Om Patel. Coast to coast, L.A. to Chicago. All right, let's roll. Let's get this going. So here we go. By the end of this episode, you'll be able to identify the AI execution trap; we'll talk about what that is. You'll be able to master the five pillars of product sense as outlined by Shreyas Doshi; we'll talk about his article, which is what we've bounded the whole podcast around. And you'll stop competing on commoditized skills, or at least we hope that you do. Yeah. And you'll be able to start building the only career moat that actually matters. Oh, good call. Good call. So we put this podcast together based on an article that Shreyas wrote, and I'm showing the article on the screen between us now.
It's from March 4th, 2026, titled "Why Product Sense Is the Only Product Skill That Will Matter in the AI Age." We've taken the outline and wrapped the entire agenda of the podcast around it, so we'll be referring back to it and quoting from it, but we're not necessarily going to read it point for point. Yeah, you can do that for yourself if you want to; we'll put a link in the description below. Our first category is: is knowing how to use the latest agent going to save your product roadmap, or is it just the new baseline? Oh boy, this is going to be a really good one today. Here's our first Shreyas quote, our first citation from the article: "Tools have never been a significant source of alpha in product success. And that is not changing with AI tools." So in the article, which we will read here just so we can look at it, he gets to a point in the intro where he says, basically: yeah, great, these tools are a significant advantage, for everyone that has the tools. But what happens in a future world where everyone has the tools? Right. The haters right now, they'll call us haters. Sure. We're calling it anyway. Yeah, we're turning the tables, hating on the Internet like it's 1993 over here. He says tools have never been a significant source of alpha in product success, and that's not changing with AI tools, because anybody can adopt the tools and therefore have the advantage. So in a world where everyone has an advantage, no one has it. Right? Yeah. Right. So he says: what this means for you personally is that while you can and should use all the AI tools, you cannot bank on your use of these tools today to provide you a long-term advantage in your product career. Yeah. So that's his line in the sand right there: hey, everyone's got great tools. Great.
What does that have to do with product management? Which, I do have to say, Shreyas is the only one allowed to have a hot take like this, because if we came out with a hot take like this, oh man. Yeah. We wouldn't fare well. I think we would be fine. But I don't think we can get away from the steelman, because this is what people would say immediately on the Internet, on Twitter, and especially on LinkedIn, because that's where the hot takes come out; that's where people thirst for attention the most. They would say: AI gives companies a massive, permanent competitive advantage, and if you can just be faster than all your competitors, it doesn't matter what happens in five years. You've got a competitive advantage now. Buy my book now, or be left out. OK. And then the other one says: if you don't master agents, you're going to get replaced. You've got to have agents. They used to say you have to have agency; now you have to have agents. Interesting. I thought they said you had to have... what was it? What was the thing they slapped on the tank full of water? Flex Tape. Yes, yes, this is it. Oh, there's some good ones here. Pizza party. Some good ones. Oh yes, indeed. There may or may not have been a Flex Tape meme or two thrown on the screen here while we're talking. But just in case there wasn't: these two steelman points, I mean, obviously, if we're playing the new game on the podcast, which is called Spot the Play (I don't have a slide for this one; that's our new game on the podcast, Spot the Play), I'm a big fan of the fear hook. Like I told you, I was researching for a couple of different podcasts that we're going to do in the future, and I watch a lot of podcasts.
I did go to LinkedIn, because I said, hey, where do people go to get their learnings? And of course, nobody said anything. Yeah, the people that run LinkedIn Learning, they're too busy getting lunch all day. That's a good point. Being thought leaders. Right, being lost in their own thoughts. This has been a fun segment of the podcast; I'm having a lot of fun, but we're not even in any of the categories yet. But the fear hook: you're constantly seeing LinkedIn posts screaming "AI won't replace you, a PM with AI will replace you." Right. And "a worker who grinds harder will replace you." This is the same panic as "you should work 996, because if you don't work 996 you'll get replaced by somebody who will; you should work through the weekends and work for pennies on the dollar, because otherwise somebody else will." Boy, there's a million different ways I could play that fear hook. So I would hope that when people see it, they recognize: yeah, that's the mechanism being deployed. This has been the fear hook section of the podcast. It may or may not be a normal part of the podcast. I hope it's not. Well, we'll see. I've got news for you: there's usually some hook; this time it happens to be the fear hook. That's right. But the steelman: in the immediate short term, there are the people that are out of work now, or the product managers who are about to get displaced because their company is firing them as product managers and then reposting their job for 50 percent more money, with "AI" slapped in front of the title and "model evals" thrown in the bullet points somewhere. When everyone starts doing that, and everyone starts saying they're leveraging AI, what happens?
Right. Oh, I like this quote: when everyone has a bulldozer, the winner is the one who knows where to dig. That's the differentiator right there. The tools are no longer there to give you a competitive advantage. I think you just nailed this podcast, because if we look at our stance here (on screen, I'm showing the bullet points for our stance on this one), it's that the tools don't provide the alpha, which is what Shreyas is pointing out, because the tools become the next hot thing that you need. And yeah, they do put you ahead. I use them every day now, right? And they definitely punch me ahead at light speed. Light speed is too slow; we need ludicrous speed. That's what I'm talking about. But they're commoditized. Everybody's got them now. Everyone can spend 20 bucks and get Claude Code. Yeah, I absolutely agree with that. So I think, at the end of the day, when you look at it: if AI technology is a leveler, what makes you spring ahead of the competition? Yeah. And then the other funny thing is, I put "execution" in these slides. But AI accelerates delivery, making bad strategy fail that much faster; that's what I really meant by that bullet point. So I put execution here, but what I really mean is: hey, we've got AI, and that means we can deliver way faster. Great, deliver the wrong thing way faster and check our boxes. And remember, because we're doing OKRs and they're all based on nonsense anyway, we can tick our boxes, which were all nonsense boxes anyway because we had a bad strategy to start with. We can just tick them faster now. Yes, that's what you're saying. I should hold this one for later in the podcast. I did a fair amount of research on this topic. I know it often seems like on the Arguing Agile podcast we're being very flippant with the categories in the discussion.
But I did a good amount of research on this one. If the cost of executing code drops to zero, let's talk about absolute zero. Right? Like, hey, I can spin up a program and it takes me zero time at all. What future of software development does that lead to, in the SaaS world? Does that lead to a world where we're going to the software bazaar, where we walk into the software show and everyone shows how their software solves the same problem? Basically a bazaar around a particular problem, where all the software vendors bring their new solutions, because it costs zero to spin up new solutions. So now everyone just spins up 20 or 30 solutions and brings them to the big roundtable. Right. And the customer can then come to the big tent and pick the best one they see. Right. Because that is the opposite of the way we've been doing software. Correct. Yes, indeed. And that is the promise: you can explore things a lot faster, and if a customer doesn't like something you're showing them, guess what, you have one more up your sleeve. Yeah, because you just do all these variants. Right, exactly. And in discussion with the customer, you could spin something up and say, is this what you want? Almost like a piece of bespoke software for them. Right. But yeah, that's one future possibility out of what we're talking about: hey, AI accelerates execution, and all you need is delivery. Basically, you don't need a bunch of discovery. Just take guesses with your discovery and throw 30 of them out there, because it costs you the same to develop those as it would to develop one prototype in the old-school way of doing things. Yeah. You know, we don't have designers anymore.
We put out 30 guesses and shoot them all out to the client. Between you and me, first of all, getting one prototype in front of the customer and getting them to actually pay attention, take their time, and give you critical feedback was hard enough. Now you think they're going to look at 30? Come on. Absolutely not. Let's talk like serious people. They'll stop looking after more than one, especially if you've missed the mark because your focus has been on speed. Right. So you didn't even land there, but you have something, and you have many of these things, and you're asking the customer to go through all of them and pick the best one they want: pick a card, any card. It doesn't work from a customer standpoint. There's confusion: what is your product? Yeah, right. And what are you going to do for the next customer? Right. What do you think about the last item that's up here, the one that says "your permanent advantage"? Basically, your advantage as a product manager is that you have applied your top-tier, refined judgment on top of the AI. So in a world where you come up with 30 prototypes that you're ready to take to your... why am I having such a problem with the word... a convention or something like that, your big trade show, you're going to take the best three. Maybe that is where product sense lives in the future. Right? That's a "for" statement: hey, maybe this is the way things are in the future. But also, I feel like I'm just regurgitating the same thing: well, now they've got to pay attention to three things, and it was already difficult to get them to pay attention to one thing. Agreed, but here's the saving grace, at least from my standpoint: if you're applying judgment, and you're the only one...
You're still representing a product that you believe fits the bill. How about applying judgment in conjunction with your customer? So get the customer and say: here are three possible variants, let's have a quick look, discard the ones you don't like very quickly. Right, I think there's merit there. Honestly, I don't think you waste a whole lot of time, especially because tactically, getting to execution is so much quicker now with AI. Yeah. Let me go to the takeaway. In this category I wrote a takeaway: evaluate your daily PM tasks, then categorize them as execution versus judgment calls ("choices of judgment" is probably clunky; judgment calls). Three steps by which to do this. Number one: audit your week. Find out what percentage of your time is spent on the mechanics of how to do things, the nitty-gritty. A better way to say this would be "the tactical." Yeah. And that percentage of time should diminish with AI. Sure. Step two: automate the mechanics. Anything you have to do over and over again, repeatedly, you should be able to figure out a way to automate. I would do it maybe once, maybe twice, to make sure I understand the process, and then automate it. I do this, especially now that I work every day with AI: I don't stick around doing things over and over again, I automate them nearly immediately. But then there's the probabilistic nature of the output: you'll have to spend your time reviewing it anyway, and you'll be scratching your head going, why did I even bother automating this if I have to touch every single output? This is ridiculous. Yes, and my "yes, and" for this part: the human-in-the-loop touch point hopefully is lightweight at this point. Right. So by the time you get around to evaluating the output, that time is much, much smaller, let's say. Right.
Smaller, that is, than the time it would have taken doing things traditionally. Yeah. So if evaluating is all you're doing, just looking at things, casting judgment, discarding or keeping them, that's time well spent. I don't think you should be worried too much about it, because the alternative is to just blindly accept the output, and that to me is a no-no. You're trying to do all these things because you're trying to get to number three on my list, which is getting to a point where these things are handled, so you can reinvest the time you've freed up to talk to people, to empathize with customers, to do some simulation of the market and the market conditions, so you can get back into the strategy. It's a tactical-strategic split, really, is what's happening here. Yeah. You're trying to automate as much of the tactical as possible so you can get back into the strategy world, because that's where your creativity can shine. Yeah. The slippery slope, the cost of not doing this, is that instead of this last step, step number three, people will be tempted to just churn out more. Go back to one, go back to two, and just come up with more and more and more output. Maybe you could argue they're outcomes, but they're not effective unless you're doing it this way, is what we're saying. Spend more time with your customers, spend more time in the day in their life, so to speak. That's the only way you're going to know. So more isn't the focus here; better is the focus. So what do you all think? Are AI tools a temporary advantage or just a new baseline? Let us know in the comments. So if the tools are commoditized, what actually separates a good product decision from a bad product decision? And can anyone tell when AI is doing all the heavy lifting? Great questions.
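The "audit your week" step described above can be made concrete with a toy tally. This is a minimal sketch, not any real tool: the task names, hours, and the tactical/strategic labels are all hypothetical, and you'd substitute whatever taxonomy fits your own week.

```python
from collections import Counter

# Hypothetical week log: (task, hours, category). The categories here
# mirror the tactical-vs-strategic split discussed in the episode.
week = [
    ("write release notes", 3, "tactical"),
    ("groom backlog tickets", 4, "tactical"),
    ("summarize support tickets", 2, "tactical"),
    ("customer interviews", 3, "strategic"),
    ("roadmap review", 2, "strategic"),
]

# Tally hours per category, then report each category's share.
hours = Counter()
for _task, h, category in week:
    hours[category] += h

total = sum(hours.values())
for category, h in hours.items():
    print(f"{category}: {h}h ({100 * h / total:.0f}%)")
```

The point of the exercise is the percentage on the "tactical" line: per the takeaway, that number should shrink over time as the mechanics get automated.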
Shreyas says you can't bank on your use of those tools to provide a long-term advantage, because everyone else is going to be buying and using those tools. Right? That's not a long-term advantage; it's a short-term advantage. Great, I can summarize 10,000 customer support tickets in three seconds, or maybe twelve seconds, or in the case of Gemini, which eats paste, a couple more seconds than that. But does that actually mean it understands your users? So here's the citation I'm going to throw up. He says the only real long-term career moat for product people is how you can improve the already brilliant, already comprehensive inputs and outputs that AI will provide for you. And that's only what I clipped for the show, because if you actually read the article (I'll put the article on the screen), he has another great zinger. He says, frankly, this will be a fairly high bar, and one that most product people will find impossible to meet. Which is a great zinger. I see you being a hater, Shreyas. And listen, I need more of it, is what I'm saying on the podcast. Your ability to improve on the first draft that you're handed. You won't hear that anywhere else, from any of the other LinkedIn influencers. You're certainly not going to hear it from the AI tool providers, because they're touting the speed of AI. What you will hear from the AI tool providers are the steelman items right here. Number one: AI sentiment analysis can replace qualitative user interviews. And number two: the LLM already knows what the customer wants. Those are two big sweeping statements. And maybe it's not sentiment; maybe it's behavior analysis, not just sentiment.
Let's say it's not something as raw and flawed as sentiment, because I've worked in an industry where we did sentiment analysis at scale on social media, and it's a flawed process. Sure. The data science only gets you so far. Yeah. Maybe it's not just sentiment analysis; maybe it's behavior analysis, pattern-of-life, a couple of other things together, and that can get you user prediction on functionality, which might take you a long way toward the end of your decision process, to making a quick, good decision. And "the LLM already knows what the customer wants": I'm trying to take the best-faith opposition here. If your LLM is hoovering up all of the behavioral data, DataDog-deep, real deep in the logs, no one PM, or really any developer, is going to have all that knowledge at their fingertips. They can dig for it and find it, sure. But as far as an LLM supplemented by a bunch of MCP servers and endpoints, aggregating that stuff in an automated fashion, there are powerful tools available to give you that insight and suggest a next course of action better than any one PM at scale could. It's tough to push back against that: yeah, these tools are very powerful. The tools do give you an advantage in terms of speed, just pure processing through the stuff. But the first point, about replacing user interviews: AI cannot really do that effectively. It can look at historical data and say, here's what the behavior has been, but it cannot ask meaningful questions beyond that, whereas a human could, based on other circumstances the model doesn't know about. So I'd say: what if you could use AI to build a foundation and then go have your user interviews? You'd probably find they're more effective, because you can ask better questions that way.
But you can't just simply sub out the interviews for an LLM. So let me separate this for you, because on one hand I'm trying to support the "against" here; I don't know why, but like in an old Western movie, I'm the black-hat cowboy. Your arguing point is that summarization is not empathy: you still need to deploy empathy, because the AI just gives you the summarization and the recommendation. Sure. Shreyas says, on another podcast I listened to him on, that the LLMs are actually better at empathy than most high-level directors of product. The example he used: when he has one-on-one sessions with high-level directors of product (I was going to say counseling sessions, because that's really what he was describing, but you're not allowed to call them counseling sessions), he says, usually I'm not recommending what you should do; I'm just recommending a couple of things you definitely should not do, and trying to change your behavior that way, because those things destroy teams and destroy trust. And with the LLM, it takes two, maybe three prompts, and the LLM gets it, adjusts, and takes the empathetic route with people in those positions. There are egos involved in this stuff. And usually, he says, I could have a three- or four-hour discussion with these people (what do you mean, "these people"?) until they come around to my way of thinking, and they're still kind of on the fence about the decision. Just personally, I don't think replacing your user interviews with some kind of scaled AI analysis is generally a good idea, because you're using it as a crutch to not talk to people. That's not great. Don't do that. But if you're looking at it from the perspective of not empathy...
The median answer that I will get out of the machine is not the best, or even suitable. Everything needs to get punched up, everything needs to be taken with a grain of salt, looked at, you know, with the data coming back under the hood checked. If we pretend we have some kind of at-scale analysis agent running and feeding us stuff, it needs to come back with some kind of report and then the high-level summary. In the discussion so far between you and me, it's like: oh, we're reading the high-level summary and the recommendation. OK, that's cool. But it also needs to show its work when it gives us a recommendation, so people can look through it, contact people, actually pick up the phone, and then decide. It's an advantage in addition to what we're doing. Exactly, yes. It's not a replacement, is what we're saying; it's an enabler. And the more effectively you can analyze the data, draw inferences, and then conduct your user interviews, the better you will be. That's your competitive edge. And even if it were pure analysis by itself, those quote-unquote "insights" would be homogenized across all competitors. We're all using Claude, we're all using Claude Code, we all love it, we're all on the Claude Max plan. Great, we're all getting the same answers, so we all end up building the same thing. Yes, there you go. All the more reason not to completely cut out those human touch points. And it's extremely frustrating, as a business, to be in a market with a product where you go to a convention and everybody has the same features with slightly different UX, and the customers show up confused because everything looks the same. You can't tell which booth has a rigorous team with deep expertise and which people are just prompting the model and throwing it out there. You have no idea. Yeah.
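The "show its work" requirement the hosts describe, a recommendation that carries its underlying evidence rather than just a summary, can be sketched as a small data structure. Everything here is a hypothetical illustration, not any real agent's API: the class names, fields, and the reviewability check are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class Evidence:
    source: str   # e.g. a ticket ID, log query, or interview note (illustrative)
    excerpt: str  # the raw material the summary was drawn from

@dataclass
class Recommendation:
    summary: str                       # high-level takeaway for the PM
    action: str                        # suggested next step
    evidence: list = field(default_factory=list)

    def is_reviewable(self) -> bool:
        # A recommendation with no traceable evidence can't be audited
        # by a human; flag it rather than accepting it blindly.
        return len(self.evidence) > 0

rec = Recommendation(
    summary="Churn risk concentrated in the onboarding flow",
    action="Call five affected customers this week",
    evidence=[Evidence("ticket-4821", "Couldn't finish account setup")],
)
print(rec.is_reviewable())
```

The design choice this sketch encodes is the episode's point: the human touch point stays lightweight only if every machine recommendation arrives with the report behind it attached.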
So in that instance, what are you going to do? You're going to pick the lowest price and just get out of there. We had a discussion earlier in this podcast (I don't know if it got cut out) about the future when the cost of building software drops to zero. Will the future become trade shows and bazaars, where we just build 30 prototypes, bring the best five to the bazaar, and then hope the customer picks the shiniest, nicest, coolest-looking one? Is that the future of software? I hope not, because look, just think about it from the customer's perspective. You go in, and you're going to see five products that all look roughly the same. Yes, they're different, but you can't immediately tell what the differences are. And that's per vendor. So how are you going to pick? Then it comes down to vendor relations, price, et cetera. Right. I hope that's not where we're going to end up. It's a bizarre bazaar, but a possibility. You can see it. Let's pretend that in the future the tools really do keep getting better, faster, and more accurate; no doubt they will. Yeah. You could totally see a future where the most creative person, the one who comes up with the most different, most creative outcomes, comes up with 30, hones them down through customer feedback to the best four or five, all prebuilt, because again, the cost of building is zero, and just brings their big custom solution to the bazaar. At that point, you may as well just give customers the building blocks and say, spin up your own. Yeah, right. Yes. There was a thing going around LinkedIn: don't try to rebuild HubSpot. But I think that's what we're saying: yeah, that's going to come to the bazaar, and there will be 150 booths, and half of them will have rebuilt HubSpot, and it will be complete. It will work. Sure.
So you're really just picking the one you know. Is that the future of software? I don't know. We're way off track. So the takeaway is going to be: use AI to find the smoke, and use humans to find the fire. And I'm going to give three points here. Oh, here we go. Number one: run your feedback through an LLM to identify the top three recurring themes. The takeaway is a future state: we're going to run our feedback through an LLM, it will give you the top three, and we're just going to develop the top three features, I guess, because it costs zero. Number two: you're going to pick up the phone, call five customers, and run your three prototype solutions by those five customers. And the last one: you're going to ask "why", to find the gap that your AI can't explain. You do the 80 percent with the AI, and the last 20 percent you bridge with your human expertise, your product intuition. That's what I'm saying. Yeah. That future is much more palatable to me. For and against on the podcast: this takeaway, I know we usually do a strong for-and-against, but if you want to set up your company this way, and we strip all the rest of the corporate everything out of it, this is solid. Come up with your prototypes, 30 prototypes, whatever. Pick the best. Call five customers. Walk through all the prototypes with them, assuming you can get them to commit that kind of time. And then start asking them why, per prototype, and capture all that feedback and roll it in. I'm also open to the mindset where you don't have one product at the end of it; you have a bespoke product for the customer. So you work with them, and then you say: well, here are three things, or five things, that you can look at.
And they might like bits and pieces of each of those, but not any one of them. Right. And you can have them spin up something that's kind of a hybrid solution, taking bits and pieces from those five or three, however many. And that's their answer. But another customer could come up with a completely different one, and there's nothing wrong with that. You have to think through the logistics thereafter, though — how are you going to support it, all of that stuff. But that's a different problem for a different day. That's a different podcast. Right. All right. So what do you think? Can an LLM ever replace a genuine user interview? Let us know in the comments. Now we're going to move on to the next category, a debate on strategy versus prompting. Once you actually figure out what the customer needs, you have to decide what to build — and relying on AI to write your strategy is problematic. Why is that a problem, Brian? I'm glad you asked. Because if you use the same LLM as your competitors to write your product strategy, then how long until you and your competitors are all building the exact same app? Oh boy. This is a category. It sure is. And this is definitely a potential outcome in a future where, again, technology is a leveler: everybody's using different LLMs with pretty much similar capabilities — you're using model A versus model B, et cetera — so you're building products that look very close to one another. So here's what Shreyas says about the "trade secret" of how you use the tool: your edge can only come from how you improve upon what these highly powerful tools provide you. Let me put this on the screen, just to read Shreyas's article.
It says: it cannot be via some trade secret of how you use the tools; it cannot be via using two or more tools than everyone else does. It can only be via how you can improve on what these highly powerful tools provide you — whether it is customer insight, marketing insight, analytics insight, strategic recommendations, prioritization suggestions, go-to-market ideas, et cetera. And who is the "you" here? It is you, the product leader, the human who is ultimately responsible for product outcomes, results, and success. And he says — I'll repeat — the only long-term career moat for product people is how you can improve on the already brilliant, already comprehensive inputs and outputs that these tools will provide for you, which reiterates the quote from the last section. And again, let's steelman it. We have this quote from the article we just read, so let me go straight to the steelman. Hey, you can generate a complete go-to-market strategy with the right prompt, or with the right skill file in Claude or whatever. And also, AI templates de-risk launches by using established, quote, "best practices." Best practices — we haven't heard that term in a long time. That means there's nothing else. It's the alpha and the omega. Sorry, but both of these points are simply saying that AI can do it all and you can blindly trust it. But then there's also the point you made earlier: if all the competitors are using AI — let's assume effectively similarly — then what happens? Every product looks very, very similar. And where is your competitive advantage? So Shreyas's thesis is that the secret sauce is in the application of what AI gives you, and how well you can do that yourself as a product leader. This is where the human comes into the loop.
Right. And now we have variation, because that expertise, that skill, is not the same across the board of competitors — it varies. So Shreyas is just extrapolating: improve upon those skills and layer them on top of the outputs and outcomes from AI tools. Well, on Arguing Agile 250 — the episode asking whether a massive QA boom is coming — I lobbied to say, hey, you need to move your QA people to the business side of the house. Not only can they ensure your applications hit some predetermined level of quality — conformance and compliance, but also fitness for purpose — they can also verify business assumptions. You put them in front of your business users, who say, "oh, every user wants XYZ," and the QA person's ears perk up: did you just make a quantitative claim — that every user must want that — which we can test? And I understand we invented product operations and all kinds of other roles — we broke these little things up and tried to make new jobs out of them — none of which took off, by the way. Oh, like the Lenny and Claire types of the world — they're all sharing that graphic where product designer hiring is completely flat, and they're like, well, who could have done this? I almost want to make a meme: oh, who could have shot our product designers? This is a deep meme right here. I need to have that on the site as a generator. Anyway, I feel like I need to get through this podcast. Yeah, enough about the memes.
My problem with using LLMs to help you generate a strategy is that it loosens your grip on the strategy as a product person. I mean, think about what it takes to help a company adjust its strategy: having access to data across all departments in the business, holding it all in your head, and coming up with a strategy. Think about what the LLM would truly need access to — all of that — to come up with a good strategy across the whole business. That's not likely. Also, it can mostly only get at structured data, not necessarily everything. So it's a big, big lift. And the other thing here is that strategy requires anticipating competitive moves. I guess you could run a framework for a strategy, maybe to a point, assuming certain things are predictable — you could have an LLM run a framework. But at the point where you need human creativity and human judgment? Yeah, I don't know. And then the last point I have here — which I just put in for fun, that's how you know I wrote it — says: if your strategy is not highly opinionated, it's probably bad. It's probably corporate theater. That's what I'm saying. And you should be able to pressure-test it with others, right? That's straightforward. So let's go straight into the takeaways. I'm sad that this takeaway is not "pressure testing," but here it is: treat AI as a red teamer, not an original thinker or author or thought leader. I think my riff on that got cut out in an earlier section of the podcast — me going on and on for 20 minutes about a thought leader.
Anyway: write your core product strategy and differentiators in a plain-text document — we're working with AI now, so this is what we're doing. Feed it to an LLM and prompt it to play a role. That's right: the LLM is now playing the role of your most aggressive competitor. And the third point: adjust your strategy based on the vulnerabilities it finds, then rinse and repeat. That's exactly right. So if we're using AI to our advantage, it's doing some red teaming. All right. I think we're at the close of that category. What do you think — is your company relying too much on AI in the strategy and thinking department? Let us know in the comments. I mean, if you still work there, maybe let us know anonymously — I'm not trying to get you fired. We name no names. And if you found this useful, stick around, because the next category is all about AI making decisions for you, and why you may or may not think that's a good thing. Business leaders love to say — especially when they're on a big stage — that a fast B-plus decision beats a slow A-plus decision every time. Or maybe it's a fast B-minus beats a slow A-plus; that's what they'd say. But what happens when AI makes everybody fast? Speed of conviction might matter as much as quality of conviction in a world where AI dramatically compresses execution times. A leader who makes a B-plus decision today might consistently beat the leader with A-plus product sense who takes a week longer. Is this actually true in practice? CEOs say this at conferences, but is it actually true? Oh boy. Claude says no, it's not. So we're going to put it up on the screen.
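The three-step red-team takeaway above (plain-text strategy, LLM as your most aggressive competitor, revise, repeat) can be sketched as a small loop. This is a hypothetical illustration: `call_llm` stands in for whatever client you actually use, and `revise` is the human step that folds the found vulnerabilities back into the strategy.

```python
RED_TEAM_ROLE = (
    "You are the product leader at our most aggressive competitor. "
    "Read the strategy below and list every way you would attack it: "
    "weak differentiators, copyable features, pricing gaps."
)

def build_red_team_prompt(strategy_text):
    """Wrap the plain-text strategy document in the competitor role."""
    return f"{RED_TEAM_ROLE}\n\n--- STRATEGY ---\n{strategy_text}"

def red_team_loop(strategy_text, call_llm, revise, rounds=3):
    """Rinse and repeat: the LLM attacks, the human revises.
    `call_llm(prompt) -> str` and `revise(strategy, findings) -> str`
    are placeholders for your tooling and your judgment."""
    for _ in range(rounds):
        vulnerabilities = call_llm(build_red_team_prompt(strategy_text))
        strategy_text = revise(strategy_text, vulnerabilities)
    return strategy_text
```

Note the division of labor: the model only critiques; the revision — the actual strategy work — stays with the human, which is the whole "red teamer, not original thinker" point.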
And in the article, he's going back and forth with Claude about decisions. He asks: might a fast B-plus decision-maker beat the leader with A-plus product sense who takes a week longer — is this actually true in practice? I know some CEOs say this at conferences, but is it actually true? And Claude's response in this back-and-forth is: no, it's not true. It's one of those things that sounds true and gets repeated constantly, but when you look at the actual outcomes, it falls apart. I'm not going to read through every single bit of this — you can read it in the article — but it references a couple of things, one of which is Jeff Bezos's one-way door versus two-way door framework. If the fast B-plus decision is a one-way door and the slow A-plus decision is a two-way door, you should wait the extra week for the A-plus, because you can take it back; otherwise it's going to cost you dearly. Speed was never the problem in the first place. Oh boy. I said that on LinkedIn recently — speed of coding execution was never the problem. And let me guess, people stomped all over it. People came out of the woodwork like it was 1999 to tell me I was wrong. They were like, "hold on, honey, put the dinner in the microwave, because someone's wrong on the Internet." The keyboard commandos with their capes flapping in the breeze — they came after me. Wrong, kid. Wrong. But it wouldn't be Arguing Agile if there wasn't a steelman I can go straight into. So, to keep this category going: speed of execution is the ultimate moat in software. If everything's moving to a trade fair, I just execute three different integrated workflows, I do it faster than everyone else, and I get to market faster than everyone else.
That's the ultimate competitive advantage — number one. Number two is "a fast B-plus decision beats a slow A-plus decision," which I wish I hadn't even added here, because he already undermined it in the article. But anyway: speed of execution is the ultimate moat. Turbo mode, all the time. Just click the turbo button. Do you remember when computers had a turbo button? Refrigerators and computers all have turbo buttons on them now. So that's the essence here: speed of execution is everything. However — you're moving quickly, at warp speed, but where are you going? What's your direction? Oh boy. Fast B-plus decisions create compounding cascades of expensive corrections. And I want to throw that one out there and just let it sit for a second. Because on Arguing Agile 247 — AI as a poor team player, on Stanford's CooperBench experiment from February 4th, 2026 — we talked about rework with AI product managers. Or it might have been the quality episode, or the CEOs one. In one of those recent episodes, we talked about the fact that people who put AI into their workflow are a certain percentage faster, but they end up with something like 37 percent extra rework — a good chunk of the speedup gets spent redoing what the AI did. And that's the point here: you can call it a B-plus decision, but then you have to go back and do a bunch of rework, deal with a bunch of outcomes that weren't predicted, deal with the mess. And you're saying, well, it's worth dealing with the mess because I was first to market, or whatever.
You know, the issue is that a lot of the people who make that determination never go back and look at the effects of their decision — the second- and third-order effects — and talk about them in terms of financial impact. They've got no time for that; they're moving forward, fast. They press everyone to move along. That is a real phenomenon out there. You've seen companies suffer as a result of it long term, right? But short term it looks great: the people who make these decisions get promoted, get their bonuses, and sail off into the sunset on their yachts. Well — it's not a podcast unless I get a cross-reference in there. The other thing they'll say is that true product sense isn't slow: it wastes less time than going down all these side roads and detours chasing the wrong implementation. Okay, we built 30 different implementations, and now we have to look at them all to figure out which one to keep — isn't that a load on you? Right. So you replaced slow, methodical work with a bunch of fast busy work. This is one of those things: okay, if we're at the trade bazaar, if that's the future of software development, we implement 30 different solutions and then make a qualitative decision about which one we think is best to bring to market. Isn't that just one kind of work being replaced with a different kind of work? We're not really punching ahead — we're all just a little busier. Is that the world we're headed towards: moving at light speed in all the wrong directions? I think so — or not the world of the future, exactly, but I'm seeing it happen out there. People are just focused on speed, aided and abetted by the purveyors of these tools.
That's how they're selling their tools, and people have fallen for it. So if we're not careful, we're going to end up eventually exactly where we are now — except we'll spend less time creating things and more time fixing them. Well, it won't be exactly where we are right now, because we'll also be giving a bunch of money to Sam Altman — while he's looking for a man in finance. Although he did shut that story down; he said he didn't want to spend money on it anymore because apparently nobody wants it and it's not profitable. Then why did you launch it in the first place, Sam Altman? I think he was driving his Bugatti and got a shower thought. Anyway, it's complicated. Being a CEO is complicated. So, if we're going to get out of this with something helpful, the suggestion is the two-way door test, applied before you let speed drive your decision-making. Okay, what is the two-way door test? I'm going to tell you right now. You ask: if we ship this and we're wrong, can we easily reverse it without a bunch of rigamarole? And rigamarole includes things like data loss, but also reputation loss for your company — that's a lot harder to reverse and recover from. That's right. So that's a great checkpoint: if we ship it and we screwed up, can we retract quickly? If the answer is yes, just ship it. You have agents, you have all kinds of automated systems ready to go — nobody's debating any of that. Send it on its way. If the answer is no, it's a more complicated call, because you have to apply human judgment and some product sense before going in and changing the code. And if you work like me — I work mainly solo — you do this all the time.
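The two-way door test just described is simple enough to write down. This is a toy sketch, not a formal framework: the risk labels are hypothetical examples, with `data_loss` and `reputation_loss` taken from the discussion as the kinds of damage that are hard to reverse.

```python
# Risks that make a decision a one-way door: hard or impossible to undo.
# These two categories come straight from the discussion above.
IRREVERSIBLE_RISKS = {"data_loss", "reputation_loss"}

def two_way_door(risks):
    """True if none of the decision's risks are irreversible,
    i.e. a wrong outcome can be cheaply walked back."""
    return not (set(risks) & IRREVERSIBLE_RISKS)

def decide(risks):
    """Apply the test: reversible -> ship fast; irreversible -> slow down."""
    if two_way_door(risks):
        return "ship it and iterate"
    return "slow down: apply human judgment before shipping"
```

The useful part is the asymmetry: the test never tells you to ship slower across the board — it only flags the decisions where speed can't buy back a mistake.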
I'll ship something, and — because I dogfood my own product — if I don't like the way it works, the way it feels, I will change it. I will not sit around with something I'm not completely satisfied with. I'm skipping the feedback step because I'm also the user, but you could easily separate it out: ship something, or potentially two or three things, because you could show wireframes to your users. Exactly. And the user will tell you: I like this part from wireframe A, I like this part from wireframe C, and I don't like wireframe B at all. Now we know. And you can iterate and do the same thing again, having adjusted for that feedback. So: what do you think — does your team use speed to market as a crutch for bad judgment? Let us know in the comments, if you can. And if you found this useful, the next thing we're talking about is this: if everyone has the same access to LLMs, the same tools, and the same execution speed as each other — potentially all running on the same infrastructure, because even that is consolidating — then what? Shreyas says it all comes down to taste. And by the way, I will tell you: "taste" is a pretentious word that I absolutely despise, but it's a concept we have to talk about. Let's mince some words in this next category. Oh, I hate the word taste. It sounds absolutely pretentious, but it doesn't matter what you call it. You can call it curation — still pretentious. You can call it judgment — slightly less pretentious. You can call it whatever you want. When AI gives every company the exact same ten excellent recommendations,
how do you know which one you should pick? Don't ask me — check with Shreyas. So, among his five categories of product sense, category number four is "great taste." It says: given a set of excellent recommendations, can you figure out which recommendation is optimal for your business goals, and can you explain your thinking in ways that others understand? I would say the explaining is probably the more critical part of the category, because I've been that guy in product who can home in on the exact right course but maybe doesn't have the exact words to articulate why it's the best course. And that's a recipe for disaster right there — people will hate your suggestion just because it's your suggestion, and they'll have other, more favored recommendations for their own reasons. Bonus points when it's their own idea — the IKEA effect, like we talked about on another podcast. Egos, all kinds of stuff. That's a great little category. Maybe we'll hit him up in the comments on it. I'll have to try not to vomit from the bad taste of "product taste" as a term — I just can't. Okay, the steelman against. Number one: curation is an unquantifiable buzzword for designers, and taste is the same thing. A team can't move forward on "taste," so we shouldn't use the term. Number two: if the AI says a layout converts — if you run A/B tests with AI and the AI can prove that a particular layout or screen design converts — you should just ship it. You don't need to worry about taste if you have A/B tests.
The steelman here is basically that taste is nonsense: you can boil it down to things you can quantify for your designers — evidence. And if it's intuition, you can hand the intuition to the AI, ship at speed, prove it at speed, and let the AI take it over. So "taste" is a kind of nonsense in that worldview — that's what the steelman is saying. Yeah. So this is a bit like saying "ship this because I have a pretty good gut feeling" — which is also unquantifiable. Sure. But one of those is brought to you by a product leader and the other is brought to you by your boss, and you have to do one of them. That's the kicker. There are things that have been cut out of this podcast, for the children. Oh boy. As for product taste — I don't know, man. We have some points here "for," but the "against" points were very good for my side of the argument. So let's throw out the "for" case. Go ahead and look at these. Number one: whatever you call it, it separates a disjointed Frankenapp from a cohesive product. I don't know that it proves that — to me it reads like it separates a disjointed Frankenapp from a differently shaped Frankenapp. I don't see the value there, because what you're injecting is pixie dust — it's not quantifiable. Exactly. So I don't have a lot of time for that first one; the justification is not solid. The second one says: AI optimizes for local maximums; human judgment aims for the global vision. I like this one more, because AI doesn't have the wider perspective. And I will tell you, just from working with AI, this one is true. Yes. The AI
does not know about the system, and it doesn't really know about the unknowns — or it might, but it hasn't connected the dots. You land on a solution, but what other, stronger solutions are there? It's not going to tell you. It's just going to say, "yes, I absolutely understand what you're asking for; this is great; this is how we should go about it." And you're like, wait a minute — that library isn't even being maintained. When I look at the Git repo, it's been deprecated and all the maintainers have moved over here. Why are you not suggesting we go with the next evolution of it? "Oh, well, you didn't ask me that." That's what it will do, because it's not thinking at the systems level. And I can already hear people arguing: well, you should prompt harder, bro — do more reps in the prompt gym. You didn't put it in your skills file that it should always look for the updated library. Come on. At the end of the day, you can't prompt for everything. There are things you just can't. Moving the goalposts never gets old. Exactly. And the last one here: creative execution builds a unique solution that AI wouldn't suggest — this is what you get when you have creative developers executing things, rather than rebuilding what the model was trained on in the first place. This one's going to be tough with people, because they're going to say: look, most apps are not super creative. Most apps are crappy CRUD apps that just need to work and move forward. You don't need to be super creative; just get it done. I feel like I'm everyone's boss right now on this podcast: listen, I don't need creative, just get it done. It needs to work. Ship it yesterday. That's right. All right.
So this one's not as strong, but it has more merit than the first one. I have a takeaway I'd like to hit quickly, and then we can be done with this podcast. Establish your product's core tenets. What does that mean? Write down three non-negotiable UX or brand rules for your product — again, we're potentially building a skill file at this point; you could look at it that way. When evaluating an AI-generated solution, grade it against those three rules — you can do more than three, that's fine — and if it violates a tenet, reject it, even if the AI claims it's highly optimized. There's a lot more I could do in this category — do this, then that, then whatever — but I'm just going to start with: have some guardrails, bake them into your AI so it follows your rules as you go, and don't ask it to do everything for you. On that number one: when you're writing down non-negotiable UX or brand rules for your product, that shouldn't come just from you and your team. If you involve your customers as well, you're already further along — rather than somebody singularly writing things down in a file. So this one is solid to me: write those things down; they're must-haves anyway. You could go further — write down three, four, or however many other things that might be good to have, nice to have, whatever. Now we're into MoSCoW. But you've got to hit the non-negotiable ones first. Maybe that's what you do in your first release: just the non-negotiables. The other ones you can negotiate. So the number one here is solid, and number one and two go together. MoSCoW is a good one — we haven't brought that up in a while.
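The core-tenets takeaway above is mechanical enough to sketch. This is a hypothetical illustration: the tenet strings are made-up examples, and `violates` stands in for whatever check you can automate — or a human review step.

```python
# Non-negotiable rules for the product. In practice these might live
# in a skill file the AI reads; these three are invented examples.
CORE_TENETS = [
    "no dark patterns in checkout",
    "every screen works offline",
    "brand voice: plain language, no jargon",
]

def evaluate(solution, violates):
    """Grade an AI-generated solution against the tenets.
    Returns (accepted, violated_tenets); any violation rejects the
    solution, no matter how 'optimized' the AI claims it is."""
    broken = [t for t in CORE_TENETS if violates(solution, t)]
    return (len(broken) == 0, broken)
```

The design choice worth noting: the tenets are a hard filter applied after generation, not a prompt you hope the model obeys — which matches the "don't ask the AI to do everything for you" point.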
Yeah — if you're working with an AI and want it to follow your way of thinking and stay inside the bounds of your decision-making, MoSCoW is a good one, because you can put that in the skill file as well. It's a framework you can work with. Sure. Okay, that's it for that category. Let us know in the comments if you have any strong opinions on ways to use AI for your strategy going forward. That brings us to the end of this podcast, where we ask: what did we spot today, what did we learn? Let's recap the promise of this podcast: we identified the execution trap, we talked about the five pillars of product sense that Shreyas lays out in his article, and we argued for no longer competing on commoditized skills — because, again, if everybody has access to AI, where's the advantage? We didn't even get to the original concept I wrote out when I wanted to do this podcast: you've got directors of product and VPs of product — folks who generally should be coming up with strategic-level product visions — relying on AI because they've never worked in an individual contributor product role. They come from alternate roles, so they're basically lost when it comes to product management. They're using AI and all these other tools without really understanding the core — without a core to go back to, to vet the output and make sure it's good. They can't mobilize product managers, because they're standoffish with the product managers they work with — again, because they've never done the job. So you get real friction. And when everyone can use the tools to B.S. their way through strategy, are these folks being completely exposed at their level of the organization? Now you've got a real problem.
Yeah — "you" meaning the CEO of the organization: you have a real problem now, and at least you know what that problem is and how to deal with it. But that's a much different podcast — worthy of doing, though. That was the original concept of why and how I put this podcast together, with Shreyas's article as a great guideline, valuable in its own right. But I'm still not satisfied that this podcast dealt with what I just described, so we might plan another one. Yeah, we should. So if you enjoyed this one, you know what to do: like and subscribe. We're a small podcast — believe it or not, every like and subscribe helps us go to the moon. Oh yeah. And if you can taste your product, let us know.

product management, AI product manager, product sense, Shreyas Doshi, AI tools for PMs, product strategy, product taste, product judgment, AI execution trap, product manager career, product discovery, user interviews, AI native PM, commoditized skills