DAVID LEDGERWOOD: David, thanks for joining us. It's really cool to have you.
DAVID MORGAN: Thanks so much for having me. It's really great to be here.
LEDGE: That's awesome. Would you please give a two- to three-minute introduction of yourself and your work just so the audience can get to know you a little bit, and then we'll dive in a little bit deeper into some topics?
DAVID: Sure. I started my computer career as a CS major at Florida Institute of Technology. I went to the normal classes, did some development work, and had the pleasure of having Dr. James Whittaker as a professor, who is kind of a godfather of exploratory testing. He's written a couple of books: How to Break Software and How Google Tests Software. He's really a great guy, and he piqued my interest in the testing side of software development.
A quick background: I've bounced between being a developer and being in QA. At my current company, we weren't doing much testing, as is often the case with a lot of companies doing software development. QA gets shifted off the plate a little bit as you're trying to meet deadlines.
We started to focus a little more on quality, and I was put in charge of creating a team here. We created a QA team, basically took off running with it, tried to implement more Agile QA practices within the company, and fought for our testing space within the development team.
That's kind of how I got to where I am. I've had the nice perspective of being both on the developer side and the QA side so I know the challenges of both.
LEDGE: You fight for the QA user.
DAVID: That's right.
LEDGE: You and I had this neat chat about the development life cycle, start to finish, of where software comes from. People talk about QA shifting left, and security shifting left, and things of that nature in that process.
What is that process as it relates to QA? Tell us the story of how you were able to shift QA left instead of, as you've said, letting it shift completely off the plate and end up somewhere in the basement with technical debt remediation.
DAVID: That's right.
LEDGE: What is that like? You had to transform an organization into thinking that way. What is that leftward shift?
DAVID: It's interesting. A lot of people these days talk about shifting left or QA automation. Those are the big buzzwords of what the QA people are trying to do. But I think some of that is a little misunderstood. A lot of teams take their manual test cases, automate them as UI test cases, and then go, “Ha, ha! We shifted left.”
And that's not exactly the case. That’s not really where you want to be. Obviously, automation is much better than manual testing. But that's not really where you need to stop. You can go way farther left than just having a whole bunch of the UI automation.
So when I was able to start this group within the company, I reached out to a couple of people, including Whittaker, my professor. We're still friends. He had teamed up with a guy named Jason Arbon, and we had some discussions about what shifting left means in a software organization. Their advice to me was to hire more of a software-engineering-type role.
Get people within your Scrum teams who understand software: former developers, architect-type people who understand what quality software means. If you design quality in from the start, if you have good unit tests, if you're doing integration testing, that's a great place to say you've shifted left.
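A unit test at this level can be as small as a few lines. Here's a minimal sketch in Python (the function and values are invented for illustration, not taken from the conversation):

```python
# Hypothetical example of a small unit test written alongside the code.

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return round(price * (1 - percent / 100), 2)

# Tests live next to the code and run on every build,
# catching regressions long before a UI test would.
assert apply_discount(100.0, 25) == 75.0
assert apply_discount(19.99, 0) == 19.99
```

Tests like this are cheap to run on every commit, which is what makes them the base of the testing pyramid David mentions later.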
But even before that, as an organization, it comes down to your product managers, your product owners, and your sales people defining good quality requirements.
There are a couple of ways to do that, one of which is this behavior-driven development, kind of an extension of test-driven development. It's something that we've implemented in some cases and had some success with.
That's really where you can empower your product managers, your sales team, and everybody to speak a common language that can drive some of your testing.
So people are familiar with using Cucumber. It's pretty popular out there these days, and it's all driven by Gherkin, which is the syntax for how you define your acceptance criteria.
So we went through all our user stories. Different teams had different styles for how they wrote their stories and, if they even wrote acceptance criteria at all, for how they wrote those.
Anyway, we were able to drive that process and help unify all the acceptance criteria into this common format, which makes it so much easier for your developers to understand what they're supposed to be developing and what the end state of the work should be.
You can use some of this tooling to do some of the automation for your tests as well as give your embedded QA folks some direction for how they do their testing.
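For readers unfamiliar with the format, a Gherkin acceptance criterion of the kind David describes might look like this (the feature and steps are invented for illustration):

```gherkin
Feature: Shopping cart
  Scenario: Adding an item to an empty cart
    Given an empty shopping cart
    When the user adds a "blue t-shirt" to the cart
    Then the cart contains 1 item
    And the cart total reflects the price of the "blue t-shirt"
```

Cucumber parses scenarios like this and binds each Given/When/Then step to code, so the same sentence serves product owners as a requirement and the team as an automated test.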
LEDGE: And so, QA really goes all the way back to the design process and just the mode of thinking from the organization standpoint that we're going to build quality software. It sort of reminds me of the automotive ─ I don't know what company it is ─ that quality is Job One.
Is that the methodology then?
DAVID: I totally agree with that. To further that point there, the big push is DevOps. Everybody is doing DevOps. There are a million and one DevOps things people are doing these days and just throwing more and more groups into that.
But if you're doing DevOps, say you have this awesome CI/CD pipeline that you set up and you're deploying code but you're not doing any testing, you're just deploying bad code faster.
So if you're an organization that's trying to release and you're not baking quality in from the start, you're going to quickly get yourself in trouble down the road by getting way behind on testing. And you get to a point where it's too late.
Now, you have this product out there and maybe some customers are using it. You realize, we didn't use dependency injection; we didn't build in some of the stuff that ensures you can do quality testing. Then you're at a point where you're refactoring code without tests to back it up, on a wing and a prayer, hoping your refactoring goes okay.
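To illustrate the dependency-injection point, here is a minimal Python sketch (the classes and names are hypothetical, not from the conversation): a service that receives its dependency through the constructor can be unit-tested with a fake, while one that constructs its dependencies internally cannot.

```python
# Hypothetical example: dependency injection makes code testable.

class PaymentGateway:
    """Real gateway; would call an external service in production."""
    def charge(self, amount):
        raise NotImplementedError("talks to the network")

class Checkout:
    # The gateway is injected, so a test can pass in a fake
    # instead of hitting the real external service.
    def __init__(self, gateway):
        self.gateway = gateway

    def purchase(self, amount):
        if amount <= 0:
            raise ValueError("amount must be positive")
        return self.gateway.charge(amount)

class FakeGateway:
    """Test double that records charges instead of sending them."""
    def __init__(self):
        self.charges = []
    def charge(self, amount):
        self.charges.append(amount)
        return "ok"

# A unit test can now exercise Checkout without any network access.
checkout = Checkout(FakeGateway())
result = checkout.purchase(25)
```

If `Checkout` instead did `self.gateway = PaymentGateway()` internally, every test would hit the real service, which is exactly the retrofitting problem David describes.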
I think it's baking quality in from the get-go. You have to have that mindset. If you can get the business side ─ who should be driving your software ─ in the mindset of “Hey, yes! We want this feature but we also need to spend some time making sure that feature works exactly how we want it to and we're not exposing any weaknesses” ─
It's really an organization mindset that you have to adopt.
LEDGE: ─ which you think, of course, would be everyone’s default mindset. But I think, by the nature of being the default, you then become the ignored. We now take it for granted. We just do that because we hire great people.
So you don't need to have a discipline function to track a thing that we do anyway. That's the fallacy of thinking you're doing quality without implementing quality.
DAVID: Yes. Why do we need QA people? Let's just hire better developers with QA hearts. That's a hard thing to hear. Yes, you should have developers who write unit tests and integration tests and who know how to do testing. Testing is owned by the team. Quality is owned by the team and the organization.
But there is an art to it. It is a different mindset. As I've said, I've been on both sides of the fence, and when you're in developer mode, you write some unit tests, you do some of this stuff, but you're really focused on driving the business: “Hey, _____ 0:09:20.8 we really need to get this feature.” You get into that mode if you don't have someone with a more specific testing mindset.
There's a skill set there that you don't necessarily have as a developer, and a lot of kids coming out of college don't necessarily learn it.
I was fortunate. I got to have one of the most prominent testing personalities in the world today as a professor. So I got lucky. A lot of kids don't get that.
So it's about driving some of these practices among new grads and among older developers who also didn't get this testing background and have become a little more set in their ways.
It's a tough thing to push.
LEDGE: And I think we often hear about automated testing. You talked about getting to the point where a properly written user story ─ the syntax and semantics of doing that properly ─ should get you to the point where you can test some things, because there are things that you can code and test around.
However, from the behavioral standpoint and the qualitative standpoint, it's simply impossible to imagine that you can create tests that will check off every requirement, every use case, and every acceptance criterion.
Do we accidentally convince ourselves that we ought only to write acceptance criteria that we can automate a test for and, thus, we miss the main business objective of the quality?
DAVID: I think, again, automation is awesome. But it does provide, I think, a little bit of a false sense of security. People get this warm fuzzy that all their automation is running and they have this awesome dashboard and “Look! It's all green.”
Little do you know, as an example, that somebody messed up the _____ 0:11:25.8 and now every header on your web page looks ridiculous.
Well, it passes automation. The title shows up, whatever your silly test is, but the human element is lost. I'm a huge proponent of exploratory testing and, to some degree, manual testing within any process. Even when rechecking stuff you've done in the past, I think going through some scenario-based manual testing just to verify the user experience is something that gets lost in some of this robotic testing.
As you've said, you fall back, and as long as the dashboard looks good, we're good to go. If you lose that human interaction, I think you're doing yourself a bit of a disservice: your website really might not be the most fun website to visit, it's clunky, and you can add things to your shopping cart but it takes thirteen clicks, which pass the tests. As a user, you hate it.
Obviously, the Agile QA pyramid _____ 0:12:42.2 integration all the way up, and it has manual testing in there. I think people tend to focus so hard on automation that they lose some of that human touch in the testing, which, again, comes back to what we talked about earlier. There's a mindset there. There's something that a QA person, a testing-mindset person, should be doing when they're flipping through a web page or whatever your application is.
LEDGE: I absolutely agree. And it's really all about that discipline; and UX plays a huge part in the product design to make sure that we're not building a hundred percent test coverage of an awful thing.
Hey, let me pivot for a second to sort of the final question. We are in the business, as you know, of distributed engineering teams and also in the business of having the most robust and excellent vetting process known to man for senior engineers.
So you put those two things together. As a career engineer, if we're collecting the best heuristics for what makes an absolutely fantastic remote or onsite senior engineer in software, how do you think about that? What are your mechanisms and heuristics, being in the space?
DAVID: For us locally, I've always tried to hire what we call the “three A’s.” We base a lot of our hiring on ─ in this order ─ attitude, number one; aptitude, number two; and ability, number three.
It covers both bases, onsite and remote teams. Attitude and fitting with the team, I think, is probably the utmost important thing for having a successful software development group.
Number two is aptitude. How well can someone learn what you're doing?
We've had people with no testing background whatsoever: software engineers who really never focused on testing. They never wrote many unit tests but had good architectural knowledge. We knew they could do code reviews and really start learning and becoming ingrained in that QA mindset, and we've had great success with that. Some of the best people on my team had never done testing before.
Number three, ability, is kind of our last point. Okay, you know Selenium and Protractor and whatever. That's great. You know that stuff, and we know you can learn it. But, really, it's attitude and aptitude that make great engineers and, at least for us, great teams.
So that's really what we look for. As far as tech stack goes, really, as long as you have some general concept of that pyramid I talked about, that's enough. For us, as an organization that's trying to move to an Agile QA process, it comes down to getting buy-in: the attitude that we're a team that can drive this single force of quality. What is actual good quality work? What are good quality testing practices? What are good user stories? How do you drive quality into an organization? That's really what we look for.
LEDGE: Fantastic! David Morgan, thank you so much for sharing your expertise. It's good to have you.
DAVID: Thanks for having me. This was great.