AA130 - Exploring Quality Assurance (QA) and Testing in Agile Software Development
Arguing Agile · September 20, 2023
Episode 130 · 00:43:10 · 29.68 MB

Dive into the world of Agile software development, quality assurance, and testing in this knowledge- and experience-packed podcast episode! Join a former QA Manager turned Product Manager and an Enterprise Agility Coach as we explore key topics around agile development and testing!

#AgileDevelopment #QualityAssurance #TestingInAgile #SoftwareQuality #TechDebt #CrossFunctionality #AutomationTesting #ShiftLeft #ScrumGuide #TechnicalProductPeople #SoftwareBugs #ReleaseProcesses #PodcastDiscussion #AgileMindset #EngineeringMindset #podcast

0:00 Topic Intro: Testing and Agile
0:16 Quality in the Scrum Guide
2:44 Tech Debt
4:15 Cross Functionality
6:23 Analytical & Engineering Mindset
10:29 Testing Mindset
11:25 Unburdening Development
14:50 Aligning Product and Quality
18:18 Technical Product People
22:01 Experiences with Bugs
26:15 Grades of Service
29:22 Shift Left
34:07 Automation
37:15 Advanced Release Processes
43:01 Wrap-Up

= = = = = = = = = = = =
Watch it on YouTube

Please Subscribe to our YouTube Channel
= = = = = = = = = = = =

Apple Podcasts:
https://podcasts.apple.com/us/podcast/agile-podcast/id1568557596

Google Podcasts:
https://podcasts.google.com/feed/aHR0cHM6Ly9mZWVkcy5idXp6c3Byb3V0LmNvbS8xNzgxMzE5LnJzcw

Spotify:
https://open.spotify.com/show/362QvYORmtZRKAeTAE57v3

Amazon Music:
https://music.amazon.com/podcasts/ee3506fc-38f2-46d1-a301-79681c55ed82/Agile-Podcast

= = = = = = = = = = = = 

I said last podcast that, since my background is in testing and since we were talking about testing, we should have an entire podcast specifically on testing and agile. So that's what this podcast is. Testing in agile, testing and agile, in agile and agile. So, I just got my CSM certification, and in class they said there are no testers in Scrum, only developers. That means we can fire all our testers, right? Absolutely. Who needs testers? You only need developers. I didn't expect that answer. No. So if you're one of those people, you really need to think through what the purpose of Scrum is. What are we delivering at the end of it?

Since a lot of people who listen to this podcast probably use Scrum, let's start with the Scrum Guide. The thing people get in their heads right away is this weird idea that you can just get rid of all your testers because agile doesn't have testers, or because developers do the testing in agile, so you don't need dedicated testers. If this comes from the Scrum Guide in any way, shape, or form, let's read the Scrum Guide right now and get it out of the way, first thing in the podcast. Yep, let's do that.

There are exactly four references to quality in the Scrum Guide. First, under the Developers section, and by the way, Developers in the Scrum Guide means anybody on the Scrum team committed to creating any aspect of the usable increment, which might include testers. It's anybody, exactly. So instilling quality by adhering to the Definition of Done is part of the job. That's number one: you adhere to the Definition of Done; that's how you get quality in. Number two is in the Sprint section of the Scrum Guide: during the sprint, quality does not decrease. Well, that sounds pretty important. Yeah, absolutely. Number three: the retrospective is a way to plan to increase quality and effectiveness; that's given as the purpose of the retrospective. And number four is where it talks about the Definition of Done: a formal description of the state of the increment when it meets the quality measures required for the product. So we, the team, have decided there is a standard of quality, and we've written it down in our Definition of Done.

So those are four mentions of quality, to get the concept out of the way early in this podcast: the idea that quality is a core principle. Quality never decreases because you're doing work in tiny vertical slices, in potentially shippable increments. It's not that you're cutting quality to get those increments. Part of your definition of done, part of your agreement together, your commitment, is: we're going to get stuff out quickly, but not at the expense of quality. Yeah. Oftentimes newly minted scrum masters get this wrong, because they're focused on the other thing: we have to have a product increment at the end of the sprint. We have to have it, there are only 10 days, so let's go do that. They don't put quality at the forefront of their minds. So what ends up happening, over the medium term, maybe even the short term, is you accumulate technical debt, which you never get back to.
Think about that for a second, because it's important. When was the last time your organization said: take the next sprint or two and just fix up tech debt? Unless the customer is screaming in their face, this doesn't happen. Oh, I'll represent the product side in this podcast today, even though I was supposed to represent the quality side, the QA side, because that's my background. On the product side, it only happens if you've made some sort of agreement with your product person to dedicate some percentage of your capacity, whatever your capacity is. Like: 25 percent of our capacity goes to dealing with known problems. The reality of me saying that number is: sometimes you can do that, and sometimes you just can't. At the end of the day, tech debt is best avoided rather than dealt with after the fact. Allocating a fixed capacity every sprint looks like an opportunity cost to me. That 25 percent, or whatever percentage you use, is an opportunity cost you're incurring by not delivering something more useful, because you're basically doing defect remediation at that point. Right.

The other thing in the Scrum Guide, before we move off of what the book says: it has a line that the Scrum team is cross-functional. Scrum teams are cross-functional, meaning the members have all the skills necessary to create value each sprint. So even if you're in an environment where you say, well, my developers do all the testing, the idea of cross-functionality raises the question: who is your expert? If I'm going to have any empathy for the other side of this, the cash-strapped, scrappy startup side of the world might be listening and saying: we'd like to do that, we just don't have the money. I guess what you're saying at that point is: we'd like to be cross-functional, we just can't afford to be. Yeah, and what I'm hearing there is: we don't have money, so we're going to put something out there that might be crappy, but we'll fix it later. That's not going to work, especially in a startup environment where it's very important to land with your customers, with a product they like, not just something that satisfies the market but maybe even delights them. In that environment, if you just pay lip service to quality, how's that going to work out for you?

Well, in the people-manager era of my career, especially when hiring QA people and testers, I realized there's a skill set you have to sort out and find in the hiring process, and those are the people who are going to be really, really good in that position. Hiring for developers and hiring for QA people, you're looking for very different people, potentially. Yeah, that's a really good point. You are looking for different people, and this again is sometimes misunderstood. People say: well, testers can write code, sort of, they can write scripts, so they need a developer background, right? They need to know some languages and have some prowess and maybe an idea or two. I think you're right. It's a mindset issue.
You need testers to have that testing mindset, which says: as we're building it, let's build quality in. The other side of it is: we'll just get to it later and inspect for it at the end. See, that's interesting, because of the people-management side of it. This is an interesting segue, and I think we should take it now. I remember when the company I was at decided to make a real effort toward integrating automation. We were strictly manual testing until the point where we came to some realizations: if we were really going to elevate the level of quality, we needed to integrate automation into our practices. And we did, and I, as a manager, pivoted and moved into that brave new world.

You can find somebody who has the skill set we're talking about. For them, whatever's on the surface, whatever they're being told, is not good enough; they need to dig a little deeper. They chase those leads when they see them; they can't help themselves. They need to know why things don't work the way they expect them to work. Just being told to move on isn't good enough. There's a mindset there. So you find somebody with that mindset, but also somebody who has a deep interest in solving problems and in seeing menial tasks automated: the kind of person who does a task once or twice and then it bugs them, because in the time it took to do it twice manually, they could have figured out how to automate it. Finding someone with both of those mindsets: the mindset of "I don't want to repeat work, and I'm willing to learn new things in order to not have to repeat this again," and the mindset of "I'm really bugged by not understanding why things are the way they are; I need to dig into the system and figure it out." You can find that and develop it; I don't know if it's more difficult or less. I could probably think about that and have it as a separate session. Basically: do your QA analysts have the same mindset as your QA engineers? Because QA engineers have a different career track, the SDET, software development engineer in test, type of track. Anyway, we're getting off on a tangent, but it was an interesting segue, because in my people-leadership role in QA and testing I can tell you those are different mindsets you have to find. Absolutely, for sure they are.

Right, so that kind of hound-dog mentality: go figure out what's wrong, keep sniffing until you figure out what's wrong, as opposed to just saying "hey, something's wrong" and throwing it back at the developer who worked on it last. That's the junior-level mentality. Once people mature beyond that, they look holistically at why something is happening, and, to your point, they get frustrated having to do things manually, and they learn the skills to automate. I think it's a natural progression for a manual tester to want to automate things, and that should be encouraged as much as possible. It used to be the case that teams carried both skill sets: we have an automation tester, but we also have manual testers, right?
That's an overhead. I think it's also a function of maturity. Lately what I'm seeing is a QA person who is obviously very well versed in how to test manually, but who is also an automation person. They write automation scripts, and they get engaged early on with developers, to the point where they really don't want to wait until something is thrown over the wall at them. Right after the sprint starts, they huddle up with the developers and say: what are we doing? Let me take some of these automation pieces and start writing them. Or, even better: let me write some of these unit tests for you, you write the others, and we can work together. Once they do that for a little while, you start seeing more developers and more testing-type folks get engaged and move from pair to mob programming, where everybody owns everything. And I know this doesn't always sit well with the PMOs of the world, who want to assign things and have a one-neck-to-choke type of scenario, but these teams are doing really, really well. So there's something going on there.

To go straight at the idea behind this category: is the testing mindset different from the typical developer's mindset? Is that an overblown, overused statement, or is it actually true? No, I think it's true, and here's why. I've seen the other side, which is: the tester gets something thrown at them. The developer says, I've assigned this to you, Fred or Mary, go test it. They test it, and the first thing that falls over, they say, oh, this doesn't work, and assign it right back. I've seen that; I call it the ping-pong game. But the other kind of tester isn't looking to ascribe blame. They're looking to see what was missed, and then they're looking to educate the team as a whole: if we work differently, we can avoid this in the future. Then you really evolve along that maturity curve, and they're helping the process become better over time.

This is part of how QA people and testers add quality to the product. One of my gripes is with people who think they can operate with one tester for ten developers, or weird ratios like that. I'm not going to get into ratios on this podcast, because every company is a little different in what its ratio can be. If you have a tester, and you already talked about sitting down and pairing with them, then even if 100 percent of your tests are automated, maybe not 100, but if your company is dedicated to automation, your testers can help unburden the developer from having to think about tests on top of everything else they're thinking about. The tester can be completely focused on: how are we going to test this? And the developer is focused completely on: how are we going to code and create this, and where are all the edge cases, the little perpendicular paths that can take us off course? Sitting together, you're truly exploring all the options and knocking it out when it's first coded. Because there really shouldn't be any pushback on the concept that fixing something
at the time of its first coding is the least expensive method of solving problems, as opposed to catching defects out in production and bringing them back in, taking a section of our velocity and dedicating it to solving that problem or working around that technical debt. After it's already in production, already in a branch, already in the release that went to prod, dealing with the change is the most expensive way. It's just like saying: we're going to develop this new website, put it out, and see if customers like it. That's the most expensive way of testing whether your idea is good or crap. This has already been proven, so I think we can move past it. It's an order of magnitude cheaper to fix it at the source than later.

What people don't think about are the ancillary costs. It's not just that the customer finds the thing and you fix it later, plus the opportunity cost I mentioned earlier. It's the reputation damage. Your company and your product have already suffered reputation damage when the customer finds something; they're already not happy about it. You can't really put a value on that, so you need to think about those things as well.

But yeah, to your point about freeing developers from the burden of thinking about testing and doing testing in the bigger picture, not just unit tests: there are some new things here now. Lately what we're seeing is testers taking a piece of functionality and putting it through load testing, performance testing, scalability, those kinds of things. Or any NFR. Any non-functional requirement, yeah, exactly. It frees up your development folks: just get it working, then let us see how well it works. It's the elasticity of the product they're testing, and when they reach a limit, you've learned something. Is it good enough? Maybe it is; maybe you're only expecting 100,000 people to hit the thing at a time. Maybe it's not; maybe it's a million plus. You've found this out ahead of time, before it's rolled out, and it's not your developer doing it, because if they were, it would take them a certain amount of time; they might even have to learn new tools and new skill sets. So folks, to me, QA and developers jointly own the delivery of the sprint. The sprint goal is owned jointly by both of them.

I'm glad we're here, because this is another one pulled straight from the headlines. I worked on a mobile app one time that had 100,000 concurrent users. There were about two million registered users in the system, with about 100,000 concurrent, so roughly five percent of the users in the system at once. Still, 100,000 people using your mobile app at one time is a lot. It's a lot, yeah. It's not insignificant. And it all went to one central database. Every single time development got to choose, they said: I'd rather test all my functional stuff. I want to make sure my checkbox is a checkbox, it works, and it saves a setting in the database. Why would we ever run load tests? That's what the users are for. Well, this is where product comes in.
This is where having dedicated QA people and testers can help product put out a quality product. At this stage of my career, going into, what is it, September, yeah, going into my 19th year working in software development, I often have to stop and remember that connecting to the business is the most important thing to business people. Not until it comes time to actually code and create a feature does my experience become the most important thing, because then it's: Brian, go figure out how to get this thing implemented. Ninety percent of the time it's go figure out how to get this thing implemented. But the other ten percent of the time it's: oh my goodness, everyone's fighting, because they've run into something like this. Our app is successful, and now we're in trouble, because no one ever thought 100,000 people would be clicking on our mobile app at the same point in time. Success is now our biggest enemy.

And your testers are sitting back, because you probably have a work item in your system that they opened saying: hey, when you scale past this amount, the servers fall over. Because your product and your testers and your development are not all speaking the same language, you don't have what the end user needs directly in sight. A lot of companies are not optimized for the end user; again, this is just what I've seen in my career. Development likes to optimize for getting things out and off their plate as fast as possible, so they can say: you're not waiting on us. It boils down to optimizing for a deadline. QA likes to optimize for not having anything come back on them, no dings on the product, no claims that the quality is bad, because that comes directly back on them. And I don't even know what product optimizes for. Buying yachts, I guess. I don't know.

No, but you made a good point when you started on this, which is: who represents the voice of the customer when you're actually working on the product? Isn't it better to find the problem there? If you didn't know about the product scope, that a hundred thousand people would be concurrently hitting this, that's one thing: it's a miss. But if you knew there potentially might be, that's where your testers could say: let's write some automation, let's create an environment where we can hit this thing really hard and see if it even stands upright when 100,000 people do it. The alternative is to muddle through, go change a pool value somewhere; that's not going to work for you.

Well, the idea is that your QA people, your testers in this case, become an additional avenue of feedback to your developers, because the product manager may not know. Let's stay in the mobile-app world for a second. Typically mobile apps access the servers via an API: an endpoint has been written for the mobile application to do a specific operation, whatever it is, load the widget for how many items I have in my inventory today, or whatever.
I don't know, I'm making up stuff. And that's fine. But if you're a tester and you're using Postman or JMeter or some other API testing tool, and you load up a test and hammer the server with a hundred thousand connections within three seconds of each other, the developers might say: why would you do that? Why would you ever run a test like that? Said many developers to me, many, many times in my career. All the time. Yeah, I concur with that.

The arbiter in that case is the product person: hey, listen, I'm running these tests, but development is saying, why would you do that, it's not realistic. Maybe it's not written down in any of your requirements. Maybe there's no work item in any system, and it could be a good system or a bad system, Jira or anything else, that says: make the system perform under a hundred thousand concurrent users. I've been in environments, too, where you don't have access to go look at production. You don't have access to production's logs, or to log in with read-only access to production's database. Maybe you have really strict IT security policies and you can't go look at those systems. You built the system, but now that it's in production, you're apparently not a big enough boy to go look at it. I tilt my head sideways when I come across those situations. But you're right, they're out there. Why would you not have your testers look at that stuff? Because it's reality. They certainly should.

But if the tester can't look at it, and we're going to pretend for a second in this example that the tester is somewhat technical, what are the chances the product manager is going to know to go look? Yeah, exactly. They don't go combing through log files, right? Well, the interesting point about where I am in my career now is that I will push to look at that stuff, and I will get glances from the sides of people's eyes: what is this product manager doing trying to get API access? They're trying to get access to our Splunk logs, or Elastic logs, or whatever. Why do you need Datadog access? No, I'd like to know how users are using the system. And you can make a valid point by saying: if you want to know how users are using the system, go ask the users and listen. That is valid. But you also have data in the system; why is that not exposed to me? I should be able to see it. Listen, I totally agree with that. You can't really go ask 100,000 of your users whether they're all going to hit the endpoint at exactly the same time; that's just not realistic. However, to your point, the data's there. It's in the logs, assuming you're logging things. So go look at your Splunk, your Datadog, whatever it is you're using, and figure it out.

One thing I want to talk about here is the little gap between those scenarios. People look at the data and say: hey, we had 20,000 users get on at the same time and everything worked great, so let's just say we may have 40,000. Double it, we're fine, right? It's not validated. That 40,000 just comes out of the air. Somebody makes it up.
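To make that concrete: instead of doubling a number out of the air, a tester can actually drive the target concurrency against an endpoint and measure what happens, which is the JMeter/Postman hammering described above. Here's a minimal sketch in Python; the URL, user count, and output are hypothetical stand-ins, and a dedicated tool like JMeter or Locust would do this at far larger scale.

```python
import time
from concurrent.futures import ThreadPoolExecutor

import requests

ENDPOINT = "https://staging.example.com/api/inventory"  # hypothetical URL
CONCURRENT_USERS = 500  # ramp this toward your real target over several runs
TIMEOUT_SECONDS = 10


def hit_endpoint(_: int):
    """Issue one request; return latency in seconds, or None on failure."""
    start = time.monotonic()
    try:
        response = requests.get(ENDPOINT, timeout=TIMEOUT_SECONDS)
        response.raise_for_status()
        return time.monotonic() - start
    except requests.RequestException:
        return None


# Fire all requests within a narrow window, like the "hundred thousand
# connections within three seconds of each other" scenario above.
with ThreadPoolExecutor(max_workers=CONCURRENT_USERS) as pool:
    results = list(pool.map(hit_endpoint, range(CONCURRENT_USERS)))

latencies = sorted(r for r in results if r is not None)
failures = len(results) - len(latencies)
p95 = latencies[int(len(latencies) * 0.95)] if latencies else None
print(f"sent={len(results)} failed={failures} p95_latency={p95}")
```

Start well below the target and ramp up across runs; the point is to learn where latency starts to bend, not to take an environment down by surprise.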
It was 20,000, we'll just double it, because those feel like safe numbers, right? Even if twice as many users get on on a Saturday, we'll be fine. And then one day it goes belly up, because six times the number of users got on, and you never validated the actual scope of your product ahead of time.

What happened at a company I was at one time is there actually was a bug super, super deep in the mobile app. To expose it, the testers would have had to run a very thorough test at regular, automated intervals, because a change in the mobile app meant that every time someone loaded a particular screen, I don't remember which screen, it made too many requests to the server. So those 100,000 concurrent connections basically turned into an unbounded number of connections. And the database at the heart of the system didn't crash; because of the nature of that particular technology, it queued up all of those requests. So the users experienced latency, extreme latency. The system could handle the normal volume of requests to the database, but it couldn't handle ten times the volume. Had we load tested the system at 10x, we would have caught it. Instead we were caught by surprise, because it was a bug. Sure, we fixed the bug and the problem went away, but all the users who were not on the buggy version, or not on the platform that had the bug, got impacted anyway, because all that traffic came into the one database that all the applications went to.

I bring this up for two reasons. Number one, NFRs are very difficult to deal with without testers lobbying for them, because they're usually the people with the mindset that asks: what happens when we have twice as many normal users? Oh, that's never happened before, so don't worry about it. Well, okay, but at least let's talk about it, right? No, we don't have time to talk about it, we're busy, we've got all these features to implement. The other reason, like I said before, is that it's an additional avenue of feedback to the developers. Maybe developers get feedback from executives, from their development managers and leads, from the product manager. But maybe they don't get feedback from other people who are also highly technical, saying: hey, maybe we should talk about this scenario. Yeah, definitely.

I have something similar to share: a situation where connections were not being terminated properly, so the pool was being used up all the time. And people are people, right? They'd say: oh, it didn't work. Try again, refresh. That just makes matters worse. And a developer doing their job isn't going to catch that; all they're going to do is make sure a singular test works. They might not even have a way to know that the memory on the pooled connections was growing and growing.
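For anyone who hasn't hit this one: here's a hedged sketch of that leak pattern, using Python's built-in sqlite3 as a stand-in for any pooled resource (the database file and widgets table are hypothetical, so it only runs against a schema that exists). The entire difference between the two functions is whether the connection is guaranteed to be released.

```python
import sqlite3
from contextlib import closing

DB_PATH = "app.db"  # hypothetical database standing in for a pooled resource


def fetch_widget_leaky(widget_id: int):
    # Leaky pattern: the connection is opened and never explicitly closed.
    # Against a real client/server pool, this is a connection checked out
    # and never returned; under load those pile up until the pool runs dry.
    conn = sqlite3.connect(DB_PATH)
    return conn.execute(
        "SELECT name FROM widgets WHERE id = ?", (widget_id,)
    ).fetchone()
    # conn is never closed here


def fetch_widget(widget_id: int):
    # Safe pattern: closing() guarantees the connection is released,
    # even if the query raises.
    with closing(sqlite3.connect(DB_PATH)) as conn:
        return conn.execute(
            "SELECT name FROM widgets WHERE id = ?", (widget_id,)
        ).fetchone()
```

A singular functional test passes against both versions; only sustained load, or a tester watching the pool metrics, tells them apart.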
They might not even have access to see it, because they don't have access to those monitors, thanks to corporate permissions. That's right: security, politics, all of that. Yeah, exactly. When we encountered a situation similar to the one you just highlighted, it wasn't just a case of the customer experiencing latency, which is pretty bad on its own, because what follows is timeouts and retries, again and again. The database went into thrashing mode, and that was terrible. And it was logging, so the log file grew until it filled up. It was just disaster after disaster. A lot of these things we're highlighting, born from experience, the developers aren't going to catch, for whatever reason; there are many reasons why. But your testers could catch them, and probably should, if they're given the right permissions. Yeah, exactly.

So I think that alone tells you: if you're not exposing these kinds of bad experiences to your customers, that's surely a good thing. Because the flip side is you do, and the customers get really teed off: they get timed out, they try again, they get timed out again. The server is not available today, please try again later. No, I need this now. So the net-net on all this is that your product folks aren't necessarily going to be able to scope to that level of technical availability for the customer. This is where a combination of product and testers can really thrive.

Yeah, product people really could benefit from having testers on the team, because, under the guise of building a quality system for customers, the testers may bring up things that development may not think to bring up. Like I said earlier, development is concerned with getting things out the door as fast as possible and moving on to the next thing, especially in shops where a development lead and a product manager go off in a corner and cook up a list: we've got to have feature X and feature Y and feature Z; just code these, team. And we haven't even entertained in this podcast what happens when different segments are offshore: we offshored development of these features, and when we bring them back in house they need to go through testing. We haven't entertained the idea that the people writing the code and the people testing it are separated by walls, because that's a whole different opportunity for issues. It is. That's what leads to a blame culture. Yeah, I agree with that.

One of the things I wanted to finish this topic with: if you have these symptoms in your company, on your product, and testers don't get that same level of respect, they're not there, they're not given enough room to work, think about that. See if you can connect some of the dots, because your customers really don't care who's doing the work; all they care about is getting work of the right quality at the right time.
So, a couple of terms I want to put out here. First, fitness for purpose. This doesn't mean your product has to be 100 percent bug free; it just has to be fit for purpose. The bugs are there. Maybe, if they're not severe, share them with your customers and say: here's what we've got. Do we want to wait until these are fixed? Which of these should be fixed? What's now, what's next, what's later? Have that discussion. Then they're not so surprised, and they're not necessarily going to write bad reviews, because you've engaged them in the discussion. So that's one term, fitness for purpose.

The other one, for those of you involved in providing services, is grade of service. I learned this a long, long time ago when I was working in telecom. A service can be up, but not performing well. Grade of service is really talking about quality. Think of something simple: you pick up a landline and talk to somebody. The connection is there, it's a solid connection; however, you hear static sometimes, and there's the occasional dropout. The grade of service suffers right there. Yeah, the service is up, so you can check that box. The developers can say: look, it's working. The testers can say: yeah, we tested it, it works. Ask the customer.

If you search around the internet for this topic, you'll encounter shift-left testing, and shift left is one of those buzzword terms that, by itself, means nothing. Putting my product manager hat on, and I would not actually put a hat on and mess up my wonderful hair on a good hair day, buzzwords aside: if your testers, your QA people, are helping product represent the needs of the users, you're shifting left. The idea is you're shifting toward: how can I make sure this new feature benefits users? If that's what you go into your testing with, and before the feature is even coded you're talking to the development team and your product manager at refinement or sprint planning or wherever, that is what I think of when people say shift left. Totally agree. I, on the other hand, never have a bad hair day. But besides that, I agree: as early as possible, shift left. Hey, is there a reason why your testers aren't involved in the very early discussions with your UX folks? They should be there. Oftentimes I hear the opposite, though: there's nothing for the testers to do yet, we haven't even written code yet. That's the perfect time to bring them in.
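Grade of service is also something automation can check, rather than leaving it entirely to the customer. A minimal sketch, assuming a hypothetical endpoint and made-up thresholds: it asserts on how well the service responds, not just whether it responds.

```python
import statistics
import time

import requests

ENDPOINT = "https://staging.example.com/api/search"  # hypothetical URL
SAMPLES = 50
MAX_MEDIAN_SECONDS = 0.5   # made-up quality bar -- set your own
MAX_FAILURE_RATE = 0.02    # ditto

latencies, failures = [], 0
for _ in range(SAMPLES):
    start = time.monotonic()
    try:
        requests.get(ENDPOINT, timeout=5).raise_for_status()
        latencies.append(time.monotonic() - start)
    except requests.RequestException:
        failures += 1

# "Up" checks the availability box; these assertions check the grade.
assert failures / SAMPLES <= MAX_FAILURE_RATE, "availability below grade"
assert latencies and statistics.median(latencies) <= MAX_MEDIAN_SECONDS, \
    "latency above grade"
print("grade of service: OK")
```

It's the static-on-the-landline test: the call connects every time, but the check still fails if the experience is degraded.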
When I do sprint reviews, for example, and I'm demoing some features, a lot of the time the perfect people to review those items and demonstrate them to the users are the testers, because usually they're in an environment that's not the development environment, where all the data on screen is kind of fake. Usually the testers are operating in some kind of middle-ground stage environment, or, if they've been demonstrating the feature in production, they have some sort of test user or test data or test account in production, so it looks like real data. Which is the best-case scenario, by the way. Look, I'm a big fan of testing in production. Because I'm reckless, that's why. No, because there's nothing like reality. I was at a company one time where I asked: where do your salespeople demo from? Oh, they have a whole sales account, a customer with fake data and everything. I said: well, that seems like a great place for QA to test in production.

Assuming you have automation that fires in every environment, why can't you just fire the automation to look for certain things as criteria for success in a different environment? If you have a dev environment, you probably have some data in there. If you have a middle-ground stage environment, you probably have some data there. And if you have a production environment, you certainly have data there. You can code the automation to look for certain flags, certain triggers, to say: test successful or not. So quote-unquote testing in production shouldn't be this big thing.

And if you're saying you're shifting your testing left, again, that terminology is meaningless to me by itself. What I hear when I hear it is: your product people, whoever is quote-unquote writing requirements, should be engaging the testers at the time the creation is happening. I think of all those TikToks we watch where a product person meets with their development manager, they cook up requirements, and nobody else is in the room, just the two of them. Okay, you should at least have somebody else involved when the requirements are being vetted, whether that's in front of a customer, in front of your team, or both at the same time, which is ideal. You want your customer in the room with your team and your tester and your product manager, saying: hey, you asked for this, you say this will relieve your pain point, let's talk about it. Your tester can be there to ask questions, to listen to the back and forth, and their brain automatically starts going: how can I test that what we're about to create fulfills every issue you're pointing out and fixes every problem you're claiming to have? How can I already start thinking about testing this? And the earlier they start with that, the better. When we go into coding, they can be sitting with your developer saying: hey, these are the negative scenarios.
These are the, yeah, if you're doing test-driven development or whatever, I don't know what you're doing, but if you're doing test-driven development, they were already in all the conversations that brought the work item into development, so you can start from the best scenario. But here's the thing, for those of you watching or listening: those testers who just stop at testing with synthetic data, you're missing something there, because there's nothing like testing in production. That is the acid test right there. So if you have various environments you're pushing your code across, dev, test, QA, stage, whatever, and then finally production, there's no reason you can't avail yourself of the latest techniques, where you can trigger thousands of automated checks every minute, literally, and use that to your advantage. The other thing is to look toward progressing to the point where you don't need all those environments. Which is the blue-green scenario.

The last thing I wanted to talk about was automation. You started down this road already: testing in production is a dirty word to a lot of people, but if you have CI/CD practices in place, it shouldn't matter what environment it is. If you have CI/CD in place and you have tests that run as part of that automation when your release process fires, you are testing in production; you're testing in every environment where those CI/CD-integrated tests run. So it shouldn't be a dirty word to test in production, or dev, or stage, or QA, or whatever environment you have. Those tests should be standardized.

And you should try to get to a place, and I've been there many times in my career as test automation matured at places I worked, where teams become so secure in their test automation that they start pruning it. Especially with UI automation, front-end automation, there are only so many UI tests you can run. If you're running Selenium tests, for example, every additional test costs more time, seconds and minutes, to run. So you eventually look through your test library and say: we haven't changed this functionality in so long, we're going to disable this test and take it on faith that it would pass, because we want our automated battery of tests to clear in a certain amount of time, say two hours. Think about front-end mobile automation: you have a mobile app, you've changed some things, and you want to release a new build to the app store for users. The automation grabs the application, downloads it from the store, registers a new user, and clicks through the application. Front-end testing is some of the most expensive testing you can do. So in that example you really need to put your money where the most valuable tests are and figure out what we used to call the critical path, back in the day. Yeah, that still exists. Critical path still exists in testing.
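One common way to express both ideas at once, a standardized suite that runs in any environment plus an explicitly tagged critical path, is marker-based selection in a test runner. Here's a hedged sketch using pytest; the hostnames, paths, and marker name are all made up for illustration.

```python
import os

import pytest
import requests

# The same standardized suite points at any environment via one variable:
#   BASE_URL=https://stage.example.com pytest -m critical_path
BASE_URL = os.environ.get("BASE_URL", "https://dev.example.com")  # hypothetical hosts


@pytest.mark.critical_path  # register this marker in pytest.ini to avoid warnings
def test_login_page_is_reachable():
    response = requests.get(f"{BASE_URL}/login", timeout=10)
    assert response.status_code == 200


@pytest.mark.critical_path
def test_inventory_api_returns_items():
    response = requests.get(f"{BASE_URL}/api/inventory", timeout=10)
    assert response.status_code == 200
    assert isinstance(response.json(), list)


def test_rarely_changed_settings_page():
    # Not on the critical path: it runs in the full nightly battery,
    # but `pytest -m critical_path` skips it on every deploy.
    response = requests.get(f"{BASE_URL}/settings", timeout=10)
    assert response.status_code == 200
```

The marker makes the "two-hour battery" trade-off explicit: instead of silently disabling old tests, you declare which checks must run on every deploy in every environment, and which ones can wait for the full run.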
I don't know how I got onto this topic, but here we are, on critical path testing. Cool, well, I learned something: critical path is still a thing. It's still relevant. It's still an elephant. Ha ha. Oh my god, I've got nothing.

I think we should wrap up by saying: if you are testing in production, either you have a lot of faith, or you have good processes in place. Or you're a product manager, because I mess with my team all the time and say I only test in production. But then again, my team only gives me access to production. I don't always test, but when I do... I see a meme coming here.

No, listen, the ultimate for me is: save your money and don't spin up all these environments everywhere, because they are a money sap. Just have dev and production. You don't need UAT. In production, your newest code is in there, wrapped in a feature toggle, and nobody's any the wiser because it's turned off. Get your testers to test it using automation, in combination with all the regression that goes with it, and if it passes, guess what, it's solid. Flip that toggle, and your customers are happy. If not, keep that toggle unflipped, switched off, and work on the feature until it works. So there you go; that's what I have to say about blue-green deployments versus multiple environments everywhere.

I mean, at the point where you have blue-green deployments, you're probably advanced enough to fly through a lot of the concepts we've talked about. You've implemented the most difficult part; you might as well roll your tests into it. There's really no reason not to. I agree, and the flip side of that is: how do the likes of Amazon roll out new functionality? They're truly global. They don't have the opportunity to say, well, this is production, and we're going to have a deployment that involves downtime. What are you going to do, penalize the people in Japan who are using Amazon, or the people in Hawaii? No. It's through blue-green that they do it, and they can take advantage of feature-toggle functionality too. They release to production many times an hour. You heard that right: an hour. And many teams I've worked with struggle to release something every sprint, let alone every hour. How is that possible? Why can they do that? What do they know that we don't?

Well, with the teams that I work with, ever since I've been exclusively in product, I insist on CI/CD, on real-time deployment. One feature, one change request, one deployment to production. I don't want to wait to deploy to production once every Wednesday or something like that. I don't think that helps anyone. I don't think that's the best thing for customer delight, and honestly, I don't think it's the best thing for the careers of the team members either.
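The feature toggle described above can be as small as a flag checked at one decision point. A minimal sketch with hypothetical names, where an in-memory dict stands in for whatever flag store you actually use (LaunchDarkly, Unleash, a config table, and so on):

```python
from typing import Optional

# Deployed dark: the flag ships to production switched off.
FLAGS = {"new_checkout_flow": False}


def is_enabled(flag: str, overrides: Optional[dict] = None) -> bool:
    """Check a toggle; test automation can force it on per call."""
    if overrides and flag in overrides:
        return overrides[flag]
    return FLAGS.get(flag, False)


def checkout(cart: list, overrides: Optional[dict] = None) -> str:
    if is_enabled("new_checkout_flow", overrides):
        return new_checkout(cart)    # exercised only by automation, for now
    return legacy_checkout(cart)     # what every customer still sees


def new_checkout(cart: list) -> str:
    return f"new flow: {len(cart)} items"


def legacy_checkout(cart: list) -> str:
    return f"legacy flow: {len(cart)} items"


# Automation tests the dark path in production without flipping it for users:
assert checkout(["book"]) == "legacy flow: 1 items"
assert checkout(["book"], overrides={"new_checkout_flow": True}) == "new flow: 1 items"
```

That's the whole "nobody's any the wiser" mechanic: testers drive the new path with an override, the regression suite keeps proving the legacy path, and flipping the flag for everyone is a config change rather than a deployment.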
The company should figure out how to make one change request, whatever that is, a story, a bug, one single work item in Jira or whatever ALM you're using, able to go to production by itself. Absolutely. And to go out immediately, in a way where it doesn't impact anybody. We should be able to release so that users don't really know the software is changing underneath them. Absolutely. This is the cost of doing business, in my opinion anyway; maybe other people have other opinions. As for figuring out how to do this automatically, there are so many tools and so many infrastructures now that just do it without impacting the user. I remember the early, early part of my career, where you had to take downtime, bring servers down, put up a splash page that said the site is currently down for maintenance. Yeah, those days are gone now. They should be gone. Theoretically they should be gone; there's no reason why they shouldn't be, let me put it that way. The tools are available to you now.

I like that idea of carving a piece of functionality so small that it's just one thing, whether it's something new or a bug fix, because one times one times one always equals one. Again, since I'm the technical person in product now, I know I can reject releases that have only one work item in them, up until I get to a release that has five, six, ten work items, seven of which are related to the same web page, and say: I'll take that release and ignore the rest. But I'm the business making the determination: I want to go to production now. You just said when, right? Yeah, and with what as well. Right. I like that. Well, it's an opportunity for me to communicate out to my stakeholders: we made a build, and here's what you can see. Although, I certainly agree it would be better to just send features out as soon as my development team is done with them, literally as soon as they're done. Then the communication becomes a bunch of tiny micro-communications instead of one big release-announcement type of deal.

At this point we're on to another topic, announcing features and stakeholder communication, and that's not really the point of this podcast; this podcast is about testing. Although your testers, if they're enabled the right way, can have a big impact on product's ability to communicate out to users. For sure, absolutely agree. I think that might be a wrap for us today. All right, let us know how your teams are doing with testing down in the comments below, and hit that like and subscribe button.

agile coach,arguingagile,arguing agile,product manager,scrummaster,agile,podcast,scrum,product management,scrum master,product owner