Scaling an engineering team in a transformational industry

Boston Code Viking Steff Kelsey started as engineer #3 at Notarize. This episode was recorded while he was their VP of Engineering, leading a team of 40. He's now the VPE of Appcues.

Scaling an engineering org is challenging enough on its own, but what about doing that in a totally new market segment?

That's what Notarize did as it moved the Notary Public process to a video-driven SaaS interaction; nobody even knew that was a thing until they did it.

Steff and I talk tooling, automation, ownership, and more, with a special dive into building a culture where learning and problem ownership are a managerial rite of passage.

Steff Kelsey | Appcues
VPE
Steff Kelsey has been writing software ever since he discovered that it was more interactive than 3D animation and motion graphics (although the latter are the most fun you can have with electricity). For many years he consulted in the Boston area and with clients worldwide, working on brands like adidas, Reebok, Puma, Sony, Carnival, Disney Interactive, and Samsung.
 
Now, he is the VP of Engineering at Appcues, where he coaches a scaling, world-class engineering team.
 
Steff lives by the sea outside of Boston with his wife and dog. On the weekends he can be found tending the bbq (meat science!), playing chef in the kitchen, and skipping rocks.
David "Ledge" Ledgerwood
Client Services | Podcast Host
Your host, Ledge, is a many-time founder, with deep experience in growing six-figure startups to 8-figure enterprises. Clients talk to Ledge first when exploring a relationship with Gun.io.

“Love this podcast! Best one out there for high-level industry trends and advice. Recommended!”

Danny Graham, CTO / Architect

Transcript

 

DAVID LEDGERWOOD:  Steff, thanks for joining us.


STEFF KELSEY:  Ledge, it's great to be here.


LEDGE:  Awesome! Just give the audience, if you would, a two- to three-minute intro of you and your work and what you're up to.


STEFF:  Sure! I'm the VP of engineering at Notarize which is a startup in Boston that features remote online notarization. It's a very interesting segment to be in because it's transformational.

You can think of when DocuSign rocked the world with e-signatures. This is kind of a follow-up to that. Now, instead of going to a notary with a piece of paper that they stamp with a physical stamp, you can do a call just like we're doing right now and a notarial act can take place.

There are all kinds of challenges that come with not just going from a very small company. I was the third engineer to join and maybe the eleventh in the company; now, the company has about a hundred and the engineering squad is about forty. So there are the challenges of just scaling a team; scaling a business.

But there's also this transformational aspect of convincing people that online notarization is a thing, and it's legal.

It's been really wild.


LEDGE:  That's awesome. And you're kind of past the proverbial two-pizza engineering team. That changes everything when you start thinking, “I can no longer throw a task over the cube wall, or we're not all even in the same room.” Forget about remote and distributed. You end up like that even in one building, because you get to a point where now we're segmenting work across multiple teams.

I assume you're in an Agile environment. How are you dealing with the growing pains of doing that? What's it been like?


STEFF:  It's so much harder than you think. When you're just one team around the table and you just naturally share the same context just because everyone is in the same conversation because there's really only one conversation happening, it all just feels so natural and you just don't think about collaboration being an issue.

“We'll just do this.”

And it's fine. It's just part of our day to day.

And then, once you start to split ─ once you're like, “Okay, now we're multiple teams that are worried about different things” ─ and then, once we actually moved from a smaller office into the space we're in now ─ to me, that was a big moment, because now we're spread out in a much larger room where you can't really just naturally overhear what someone else is doing. It's not part of your day to day.

If you want to do the math on that, you'd think, “Okay, that's twice as hard; that's three times as hard.”

It's not. It's exponentially harder once that happens.

There were a couple of different reasons for that. There were some choices we made when we first split the teams up. Now, as we've scaled them, we probably should shuffle the teams.

So what we did was ─ it was a very logical choice. There are a million ways to slice up your organization into different teams to have a different focus. And they'll have their pros and cons.

And we went with “Let's be as close to customers as we can. So let's slice by market vertical.”

What happens is that there's a bunch of unintentional side effects from that. And one of them is that your code starts to partition the same way.

Let's say, you introduce a new product feature that really should be like a platform product. But because of road maps, you're like, “Well, what we're going to do is whatever team needs that first, they'll build it and they'll want to use it; but they'll build it just for their segment and it doesn't end up being reusable.”

So what you build serves one slice of the business; because you're not really thinking about everything you could think about, it's not going to work for, let's say, a title agent or a giant lender. It's not going to be the same.

These are interesting things about team design that you can kind of talk about all day.

“Okay, how could we have done this differently and where are the different friction points about collaboration?”


LEDGE:  Yes. And it's all about speed to market. Of course, I imagine your business development folks ─ I've worked in sales myself ─ are just like, “Go, go, go! We want this thing and we want this for this customer now.”

And we don't want to wait for you guys to figure out how the business case applies to customers that aren’t trying to send us money today.


STEFF:  What's hard about a transformational industry is so much of it is exploratory. We're always trying to find new channels and new customers and new verticals; and you just don't know what's going to stick. So even your initial outreach that people do in sales, you think, oh, this is good. And then, you run into a spot where you're like, oh, but this is for auto insurance and a lot of state DMVs, and the employees there don't know that remote online notarization is legal.

So someone brings them the notarized form and they don't accept it. And then, what you thought was going to be this great vertical, and this first customer that's proving the case, ends up being unhappy, because it's really their customers that are getting the notarization done and they're saying, “Hey, our customers are unhappy because the acceptance of those notarizations is terrible.”

And so, it just rains on your parade because you've done all this research and setup and all these initial calls and made these relationships. And, unfortunately, it's also the thing that you have the hardest time tracking ─ acceptance.


LEDGE:  We've talked about CI/CD and the pipeline of testing and regression and all these things. So you went from three engineers to forty. How have you dealt with that, from a technological and tooling perspective ─ doing it the right way? And what choices do you kind of wish you could have made differently?


STEFF:  Some of the CI tools, we're kind of still using the same stuff we did at the beginning. There are plenty of SaaS products that do this. We're on CircleCI for our main stuff. They have different plans now. I wish they had these plans when we made this decision.

When we had to make a choice about how to do it ─ we wanted to create an end-to-end automation suite; we wanted to be able to ship with a lot more confidence and speed, so we were going to build this suite. And because of how CircleCI had their plans structured ─ this isn't true now but it was then ─ you paid based on how much you could run at once.

And if you're setting up an end-to-end suite, some of these things will take a lot of time especially for us; the core of our product is a call and it takes a while. So some of our longest flows that need to be tested for a smoke test, I think it takes like eleven or twelve minutes.

And so, it's hard to have that be a blocker; and then, if a bunch of people run those tests, all of a sudden no one can run unit tests or anything ─ it would completely stop development.

What we did was move the end-to-end testing to our own Jenkins. We set up a load balancer so you can spin up a Jenkins master and nodes off of that; and that was really just for the end-to-end testing.

The unit testing and other stuff still run on Circle. It's not totally gone.

But we had to move that. And we're still iterating on that. It's interesting what problems you run into later.

One of the problems we're having now is just about diagnosing. Let's say, you're running all these tests and it crashes. Right now, some engineers are stopping what they're doing and they're drilling in and they're able to be like “Hey, where did this break? Let's actually diagnose it.”

You look at what other companies do with tooling, and they'll invest in it: “Oh, we're going to have a robot, some kind of algorithm where you roll it backward and run the test again.” And then, once it passes, you're like, “Hey, that commit broke it. It's your fault. Go fix it.”

Right now, we still have a lot of manual work to keep these things going and it's like, “Okay, do we keep going with this manual work or is it justified that we invest even more into this tooling which is more than we thought the initial investment would be?”
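The “roll it backward and run the test again” approach Steff describes is essentially an automated bisection over commit history ─ what `git bisect run` automates. A minimal sketch of the search itself, where the commit list and the test predicate are hypothetical stand-ins for a real checkout-and-test step:

```python
def find_first_bad(commits, test_passes):
    """Binary-search an ordered commit list (oldest to newest) for the
    first commit where the test suite stops passing. Assumes failures
    are persistent: once a commit fails, all later commits fail too."""
    lo, hi = 0, len(commits)
    while lo < hi:
        mid = (lo + hi) // 2
        if test_passes(commits[mid]):
            lo = mid + 1   # this commit is good; the break happened later
        else:
            hi = mid       # this commit is bad; the break is here or earlier
    return commits[lo] if lo < len(commits) else None  # None: nothing failed

# In a real pipeline, test_passes would check out the commit and run the suite.
commits = ["c1", "c2", "c3", "c4", "c5"]
print(find_first_bad(commits, lambda c: c in ("c1", "c2")))  # → c3
```

The payoff is O(log n) test runs instead of one per commit ─ which matters when, as in the episode, a single end-to-end flow takes eleven or twelve minutes.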


LEDGE:  How do you do the release scheduling where you're still sort of having to deal with a major release? Does it slow down?


STEFF:  We release weekly, with the help of all that automation. It was every two weeks for a while. I think it was weekly, and then the team grew and it became every two weeks because we couldn't keep up ─ it was just a ton of manual QA. And then, we made some cultural changes around testing and we tooled up; and we got it back to a week.

And we want to be faster. We were pushing to be twice a week, and then we ran into some issues just kind of maintaining the tooling.

And then, there are different ways to release software every day. The most ambitious way is to be like “Okay, master is going to go out every day.” There are other ways where you can be like, “Hey, master goes out once a week. Should we do a release candidate daily?” so you can actually have a branch; and then, eventually, master will wipe it all out.

There are different ways to do it. We haven't really fully settled on that. We're still in the spot where we know we need to strengthen up the tools so it doesn't matter about the next step quite as much.


LEDGE:  And how do you do the support chain where, as you've said, a given thing breaks and teams probably are largely allocated to new feature development? How do you keep your Scrum moving in the right fashion if there's a production issue that pulls three engineers off?

You can't get your stuff done for that sprint. Do you find that it slows down velocity or ─


STEFF:  Most teams are still not doing sprints, and that's actually why ─ because it's really hard to say, “Oh, we're doing this,” when you know that there's going to be production support stuff that's going to happen.

We put in a more robust incident response system. Now, we have someone on call and they can shield the team if someone reports that they think it's down. And that also helps, because not everything that gets reported means the platform is actually down.

We have miscommunications. Because we have a live human on every transaction ─ there's actually a human notary on the other end of the line ─ sometimes they can misinterpret stuff or something will happen. It's just your one user; it's not the whole platform.

And this is going to happen with any video conferencing ─ Zoom or whatever. Sometimes these things happen, especially if you're trying to do it from a coffee shop. You're just going to have a rough time on the Internet, and you can't always assume that that's a hard platform failure.

So creating structure around that helps give some protection. It does affect velocity, so you need to track that every time. You're like, “How much trouble did we have per release? Did we end up releasing late this week because there were a bunch of bugs blocking the release?”

And then, post-release, it’s “What new bugs and what new support tickets arose after the release?” You just have to have your finger on those two pulses all the time, because it affects your velocity; it's also just a measure of how the team is functioning.

And there's some luck that's in there so you can't overreact week for week. You've got to look at the smooth curves.
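Looking at “the smooth curves” rather than any single week amounts to smoothing the per-release counts ─ for example with a trailing moving average. A sketch (the bug counts below are made up for illustration):

```python
def moving_average(counts, window=4):
    """Smooth noisy per-release bug counts so one unlucky
    week doesn't trigger an overreaction."""
    out = []
    for i in range(len(counts)):
        lo = max(0, i - window + 1)   # trailing window, shorter at the start
        chunk = counts[lo:i + 1]
        out.append(sum(chunk) / len(chunk))
    return out

# Hypothetical bugs-per-release; the spike in week 5 barely moves the trend.
weekly_bugs = [3, 2, 4, 3, 12, 3, 2]
print(moving_average(weekly_bugs))
```

The raw series peaks at 12, but the smoothed series never crosses 6 ─ which is exactly the “don't overreact week for week” point.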


LEDGE:  Do you think that the personality and culture of the engineering team ─ just sort of being flexible and agile as a group of people, you know, if things break ─ that seems to be part of your culture, that you're accepting and putting some flex into the system ─

I imagine that has to go all the way up. It has to come from a core leadership type of disposition. If you have a business unit that's sort of demanding results that the engineering sort of management paradigm can't keep up with, that that would be problematic.

Do you have a product based and an engineering based type of leadership that allows for that flex?


STEFF:  Yes. Pat Kinsel is the CEO. He was formerly a product manager at Microsoft, so he's very product focused. He thinks in future roadmaps. That's just how he thinks, and you have to extrapolate back from that. So it's like, “Okay, let's talk about the problem this is solving and what it means for this customer,” and then line it up that way. He's very focused.

And then, the way I communicate from an engineering leadership standpoint is that we're trying to go as fast as possible to ship all these features, because we're still in that initial scale-up work. You're not considered bullet-proof by investors, so there are going to be more feature requests than there are engineers. So making new stuff is really high priority, with the understanding that, sometimes, things are going to slip a little bit.

So what we do, by building those systems like the incident response, is say, “Okay, look, we're aware that things could go wrong. It's real life. So we're going to say that we will acknowledge it every time ─ acknowledge it within fifteen minutes. And we're going to push for a resolution within a certain amount of time.”

So you have an eventual goal of “Let's do a five-minute acknowledge and a thirty-minute resolution.”

You should set goals like that and then you overcommunicate everything.

And then, I think it's fine because you're setting expectations like, “Look, this is what can happen but here's how we're going to deal with it. Is this method of dealing with it acceptable? And then, if not, why? Where do we need to be less flexible?”

And it's tough, because we have some very large enterprise lenders; when you're doing a pilot with them, their transactions are precious. In a high-volume business, things can kind of get missed and you just deal with it ─ but this is like, “No, this is a loan for someone refinancing or purchasing a house. This needs to be perfect.”

And then, from the perspective of this company with billions of dollars of revenue, they're looking at “Hey, we're trying this new technology. This has to work.”

So it's very different. Once one of those is going on and it's a pilot for some account we just landed, we're all watching it go through. You're pulling up and you're watching the events.


LEDGE:  You described yourself earlier, when we spoke, as a reluctant manager of engineering. But I'm talking to you and you have really done some serious thinking about this; you work in and around the management of it, and it's very clear you know this stuff and you've been thinking about it.

What was that evolution like, from being a straight software engineer to now, where you're leading a team of forty engineers? You probably don't get into code as much as you'd like. What was the learning pattern?

I think there are a lot of engineers that aspire to that but just simply have no idea or context on how to make the leap.


STEFF:  I was really fortunate in that I did have previous management experience of just people management. So I've had to do that. I've had to do mentoring. I've had to figure out how to hire people effectively. I've also had the unfortunate duty of having to fire people.

I've had that foundation. So getting back into that was still hard, but I kind of got to ease into it. Initially, when I became a manager, I was really just doing that. I was very focused on team building. And I was still learning a lot. It was about, “Okay, how do we make Notarize a great place to land? Let's develop career tracks. Let's make salary competitive. Let's get all this stuff in place so that we can attract talent.”

Because we're competing with huge companies.

And because I got to worry about just that problem for a while, the CTO at the time, Michael Lee, who is an awesome collaborator, was really kind of on top of the other stuff.

So I was given the space to just make mistakes and try stuff. As we got better, we scaled the team, and then my responsibilities changed. Then I was hiring and training new managers. I became a manager of managers and I had to learn what that was like; it was a little bit different.

With that, my background before was mobile; and then I had to learn things like ─ you know, there's no incident response for native mobile.

So I had to go to some seminars and learn some stuff.

One thing I try to do with the managers now is we look for problems that are ownerless, that are a big enough fire to deal with, and I'll give them just that problem. So it would be like ─ what's an example?

Our exception reporting was crazy noisy. Exceptions were coming in and it was really hard to diagnose. It wasn’t a useful tool for us.

So to one of the managers, it was like, “Hey, make a project out of this. Go figure out what other companies are doing with exception handling and how they deal with a really noisy system. And what's a good goal? What should our goal be in using this tool?”

It can't be zero exceptions. That's not possible. So how does that work?

And then, that person will go up and research that and have the opportunity to become more of an expert on it and then come back to the team and say, “Here are some things I want to try and here's why. Here's what other companies have done. Here's a playbook for this. I want to try it.”

And they would own it for a while. Own that problem for a while until it's in good enough shape where we can either make that be part of the process or have a rotation. And then, you go to the next fire.

“Okay, here's this other thing ─ ownerless problem.”

A recent one this week was with one of the new managers, who is awesome and a super smart dude. We use feature gating, so we have a feature flag product; and I was looking at it and reading a little bit about it and I'm like, “You know, I don't think we're doing anything close to best practice. Why don't you go ahead and dive in? Here's research I've already done. I've started it but I'm no expert. Why don't you own this for a while?”

And now, he has like a cross-team project that he can own and iterate on and bring knowledge to the team.
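For context, percentage-based feature gating of the kind feature-flag products provide usually boils down to a lookup with a safe default, hashing each user into a stable bucket per flag. A minimal sketch ─ the flag names and rollout rules here are invented, not Notarize's actual setup:

```python
import hashlib

class FeatureFlags:
    """Tiny in-memory feature gate: a flag is on for a user if it's
    fully rolled out, or if the user falls inside its rollout percentage."""

    def __init__(self, flags):
        # flags: name -> rollout percentage (0-100)
        self.flags = flags

    def is_enabled(self, name, user_id):
        pct = self.flags.get(name, 0)   # unknown flags default to off
        if pct >= 100:
            return True
        # Hash user+flag so each user lands in a stable bucket per flag:
        # the same user sees the same answer on every request.
        digest = hashlib.sha256(f"{name}:{user_id}".encode()).hexdigest()
        bucket = int(digest, 16) % 100
        return bucket < pct

flags = FeatureFlags({"new-signer-flow": 25, "dark-mode": 100})
print(flags.is_enabled("dark-mode", "user-42"))  # → True (fully rolled out)
```

Defaulting unknown flags to off is the key safety property: removing a flag from the config can only turn a feature off, never accidentally on.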

I think a lot of it is that. Definitely, you're hiring smart people and then it's giving them the space and the permission to go learn stuff ─ and also to make mistakes. That's fine. As long as we're moving forward, it's all good.

I was given that space and I think that's a great gift. And I try and do the same thing for everyone else.


LEDGE:  Fire Focus, I totally dig that.


STEFF:  It makes a big difference. These problems don't get solved otherwise. Like, initially, it was just me and one manager, and then we started to grow the team. So for a little bit, it was like, “What do you think?”

And now, there's enough where there’ll be healthy debates about “No, we shouldn’t do that now; it's not important now. It should be this.” And that kind of gets fun.


LEDGE:  Yes. And that's all about “What's your method for priority setting?” You're going to have those debates and the debates need to end and action needs to happen. How do you handle that?

I mean, engineers will sit around and debate forever about the right way to do a thing and, ultimately, there are a few right ways. How do you choose and resolve after the healthy debate?


STEFF:  Is this in the context of the managers’ meeting, or in the context of the platform team needing to know what to work on next?


LEDGE:  It could be either one, but my guess is the managers talk on a more abstract level about which fire is worth fighting. It isn't all that different from which feature they're building, in which order ─ there's probably less business mandate on the former than the latter.


STEFF:  The difference is just the size of the group. So if you're just four people having a debate about something, it's different than when it's more of a company focus.

So we got to the point with scaling where we're like, “Okay, we need to have a bit more of a system about being transparent about how we decide what to work on.” So the platform team ─ the core team ─ is led by Arturo, who was the second engineer; he was here before me. He's a young guy, one of those MIT guys. He's great ─ super smart, too.

I kind of tasked him with, “Here's some research I've done about prioritization solutions and prioritization problems. What do you think ─ something we should try?”

And he grabbed it and ran with it. He took an initial template and then made the scoring system; that way, when people from the front line submit their ideas, we can actually say, “Hey, look, your idea got in front of the platform team; they've scored it. Here's what the score is, and here's the stuff that's being done ahead of you.”

And they look at those scores and they, at least, will know. There's a lot that can be said for, like, “Someone looked at it.” It's already a great feeling. And the second feeling is like, “Oh, I get why they didn't think my project had a bigger impact.”

And we can tell them, “Here's how you can show bigger impact so you can convince us.” Just go get the numbers: you talked to one client who wants this ─ come back to me when five clients want it. What's their market share? What's the value of that customer?

And then, the impact now is much bigger. There's some money to make here. We have to do this feature.

So, then, they're brought into helping us gather that data. Making that transparent and getting people on board is awesome.

So the head of product saw what we were doing from the engineering perspective and he created a similar system for the front-line team where they can just start throwing in ideas; and we gave them an impact field for each cell so they could score things themselves. And so, they'd have a running dialogue on each idea, with a score.

And then, we, the engineers, can just go through, organize and sort that by impact, and start pulling stuff off the top.


LEDGE:  How did you design the scoring mechanism for the impact score?


STEFF:  I read ─ I think it's called “Hacking Growth.” It's a great book with a red cover. And they talked about using ICE instead of RICE ─ that's impact, confidence, and ease.

I really liked that. I thought it was interesting, so I brought it to Arturo and he made the numbers. He did a system that was one through ten; and then, for each number, he said, “Okay, for this range, this is when you would use this number for impact.”

So he really did the initial work ─ he made sense of the numbers and ran with it.
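ICE scoring as described ─ impact, confidence, and ease, each on a one-to-ten scale ─ is usually combined into a single number for sorting. A sketch with invented backlog items (multiplying the three factors is one common convention; some teams average them instead):

```python
def ice_score(impact, confidence, ease):
    """ICE prioritization: each input is 1-10; higher product = do it sooner."""
    for v in (impact, confidence, ease):
        if not 1 <= v <= 10:
            raise ValueError("ICE inputs must be in 1..10")
    return impact * confidence * ease

# Hypothetical idea backlog: (name, impact, confidence, ease)
ideas = [
    ("bulk document upload", 8, 6, 4),
    ("dark mode", 3, 9, 7),
    ("retry failed notarization", 9, 7, 5),
]
ranked = sorted(ideas, key=lambda i: ice_score(*i[1:]), reverse=True)
print([name for name, *_ in ranked])
# → ['retry failed notarization', 'bulk document upload', 'dark mode']
```

The point of the shared rubric (“for this range, use this number”) is that scores from different people stay comparable, so the sorted list is meaningful.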


LEDGE:  That's awesome! I ask everybody ─ we're, obviously, in the business of sourcing, hiring, and deploying just the absolute, most elite engineers. And I think that everybody who is in the hiring of engineers has sort of a go-to set of heuristics that they feel are maybe the most important things.

How do I know when an absolutely A-plus senior engineer is in front of me that I want to hire? What are the things that you measure? What are the heuristics that you use for that?


STEFF:  It's funny. We've kind of flipped it. A lot of times in a hiring funnel, you think about the other way. You're thinking about the reasons to disqualify. So I like the way you phrased it better which is “What do you want?” instead of “What are the red flags?”

What do we look for?

If we look at our hiring loop, each of those sessions kind of has a target trait. An initial screen is done by a recruiter. It used to be done a lot more by me but, now, I don't have as much time.

So he just does an initial call-through check and kind of a sanity check. And that's super important.

And then, the next screen is a technical screen, and it's very algorithm-based. We try to do it more collaboratively ─ you don't do it by yourself; you do it live, and someone is watching you and you're kind of working together.

We just want to look at “How do you think about code? Can you solve problems efficiently?” ─ kind of classic. And that's what a lot of companies do.

And then, the next challenge ─ this one they come on site and we do a software design challenge. Now, it's “Let's talk about how you architect things. How do you make things? How do you decide what should be in each class? How do you connect stuff? How do you handle responsibility?”

And then, the last challenge is pair programming. So you’re given a fake project and you just work together on it. And that one is kind of all-encompassing. That is, “How well do you communicate? What do you value as good code, bad code, the whole nine yards?”

There's one thing I kind of want to add to our loop ─ I just haven't done it yet ─ and that is a PR challenge.

So you send someone a pull request ─ what's good code to put in there? ─ and then they have to comment on it.

I really like that challenge. I want to do it here, but I think you need to iterate on it for a while, so it's probably something I need to give the managers to figure out.

First, I'll figure out how we could do it ─ how a robot can make a task and just email it to someone; and second, we need to iterate on what's good code to show people so that we can get some interesting results.

But I really like that challenge because so much of the job is reading other people’s code.


LEDGE:  I love that. That's a great heuristic. I hadn't thought of that, but you're absolutely right. That code review process is going to elucidate so much of your thinking and collaboration, and even the way you give that feedback and write those comments. It's going to show a lot of the edgy communication, because when I know I'm being interviewed, I may communicate differently than when I'm giving written feedback, potentially under a little bit of time pressure. And it really sets the tone that that's what your culture is doing. I love that.


STEFF:  We try to be pragmatic. We used to just do a lot of algorithm stuff and we iterated on that over time and, as a team, pulled back from that:  “Let's take some stuff that we know works well at other companies.”

Most of their interviews, at least when I was there, were like a day of pair programming, and I was like, “Man, that is solid. There is nowhere to hide. You can either do this work and collaborate with people or you just can't.”

I thought that was awesome. I kind of put a lot of weight onto that interview a little more than the other ones.

But you can see what the values are:  How well do you design software? What do you think of that code?

Each one of these sessions kind of has a different element that we can pick. And so, from there, you can pull that together and be like, “This is the makings of a good senior engineer.”


LEDGE:  So if you could call four-years-ago Steff and ask a question or give a warning, what would you say?


STEFF:  Oh my gosh ─ give a warning. That's a good one. What would I caution against?

I would say that there are so many opportunities out there and it's really important to just do your due diligence. Let's just make sure that you really evaluate an opportunity from back to front. Turn over every stone. You don't know what you're going to be responsible for at the end of it.

There's stuff that you think is trivial, where you're like, “That's not going to be something I have to deal with. I'm not going to have to worry about cross-team stuff. Who cares what that department is doing? It doesn't affect me.”

There's a point where, if you're put into leadership in a company, all of that is your problem ─ how all the teams collaborate, how all of them work together. And so, it's important to really evaluate not just who your immediate coworkers will be but who your potential peers will be, because it makes a big difference.


LEDGE:  Awesome! Steff, good to have you, man. It's great to get into your insights. And I know the audience is going to love it.


STEFF:  Thanks so much. The pleasure is mine. I can just talk about tech all day, for sure.


LEDGE:  You came to the right place.
