DAVID LEDGERWOOD: Tyler, good to have you, man!
TYLER SHAMBORA: Thanks for having me. I appreciate it.
LEDGE: Can you just give like a two- to three-minute intro of yourself and your work for the listeners?
TYLER: My name is Tyler Shambora. I'm the director of technology at BVA. I work on things related to workflow, DevOps, developer education, and client work. I'm sort of a one-man show inside of our agency because we don't really have the employees to build out an entire DevOps department, so a lot of that responsibility falls on me to take care of.
I work mainly on e-commerce, and we work exclusively on the Shopify platform, so having only one platform to deal with makes some of these things a little bit easier.
LEDGE: You and I had a conversation when we first met about how to scale an agency, because you're dealing not with one CI/CD pipeline for a single product going from development to production, but with a bunch of products for a bunch of clients who are growing in scale. How do you move that fast from development to production, and scale your CI/CD, DevOps, and release flow into something that works across many clients?
I thought it was a neat challenge that agencies, professional services firms, and software firms uniquely face when they deal with client service rather than one individual product.
I thought talking through that would be fun.
TYLER: It's been really interesting to me because this is the first agency experience that I've had. You read a lot about problems of scale and it's like, “Oh, you know, Twitter was originally written in Ruby. It couldn't scale so they had to switch to C++.” Those are interesting problems to read about, but I feel like nobody ever talks about these horizontal-scale problems that people have and how to handle stuff like that.
It has been a pretty neat problem to try and solve over the last four years. To give a little context, when I first began at the agency, I think we had two and a half developers; today, we're about twenty-five to twenty-six. And that's in a three and a half year period of time.
So you're dealing with, obviously, more developers. You're dealing with more work that needs to be done; and that's the problem of scale which is you have so many people working: How do you keep things on the same workflow? How do you keep the workflows the same? How do you keep code standards the same?
And so, it's been interesting trying to figure out how to navigate that.
LEDGE: Yes. And every time you bring on a developer, what's your onboarding strategy? How do you train them to do it the way that works in the flow? How do you take best practices from the field, bring them in, and address your unique problem domain with existing technologies that maybe weren't meant to be used that way?
I don't know. What are some key takeaways?
There are still people who are trying to build and scale agencies all over the place. What do you do?
TYLER: The first thing in my mind that was sort of a tough pill for me to swallow is that the nature of agency work is ephemeral. We love to hold on to clients for as long as possible but it's just inherently true that agency work is less long running than product work.
Product work will exist indefinitely, whereas a client may leave you after six months because maybe their budget ran out. It's about understanding that the tools and the workflows that you put around these things shouldn't take days to spin up and get ready to go. It should be a matter of seconds to get things going.
And, furthermore, the same goes for training people ─ if you're having to train people on some crazy CI tools and how the whole ecosystem or workflow works, that's not good if you have to spend an actual week onboarding developers to do that.
The first thing I've realized is that letting the whole workflow and tooling exist in the background is pretty important so developers are aware of what's going on and have knowledge as to how it works.
But these tools all exist. And so, if you have a single build system for a project, that build system needs to be largely the same across all projects so that when a developer jumps from one project to another ─ which is another point that's very important ─ they should expect everything to be the same and they should expect the tools to work the same.
So figuring out a way to keep track of build systems and make sure they're all up to date ─ automatically would be even better ─ matters. You don't want one system using Grunt, one using Gulp, one using GitHub for repository hosting, and another using Beanstalk for repository hosting.
Everything should be sort of the same as much as possible.
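As a minimal sketch of what enforcing that sameness could look like ─ the `clients/` directory layout, the file names, and Gulp-as-standard are all assumptions for illustration, not BVA's real setup:

```shell
#!/bin/sh
# Hypothetical drift check: flag client repos that deviate from the
# agency-standard build tooling. The clients/ layout and the choice of
# Gulp as the standard runner are illustrative assumptions.

check_repo() {
  repo="$1"
  if [ -f "$repo/gulpfile.js" ]; then
    echo "$repo: ok (gulp)"
  elif [ -f "$repo/Gruntfile.js" ]; then
    echo "$repo: DRIFT - still on grunt"
  else
    echo "$repo: DRIFT - no recognized build file"
  fi
}

# Scan every client repo under a shared parent directory.
for repo in clients/*; do
  [ -d "$repo" ] && check_repo "$repo"
done
echo "scan complete"
```

Running something like this nightly in CI would surface tooling drift before a developer jumps projects and hits a surprise.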
The other thing that's important, too, is that, at least, in our circumstance, it was and has been really important to ensure that projects are sort of not siloed with developers because that's another huge problem that you can run into. If you have one developer who becomes the subject expert on a project and then they leave, you have to pull another developer off and waste maybe a day or two of time where they're trying to figure out “What the hell is going on?” and all the nuances of that developer’s styles and whatnot.
And so, one of the key aspects of that is making sure that things are the same. Sameness is a really important thing in agencies when trying to scale.
LEDGE: I think everybody wants to do the plug-and-play-a-developer thing which, when you really think about it is ─ they're humans. They have opinions. They have experiences.
It will maybe be more useful to think about that when we actually have AIs that write code and things of that nature. We simply don't have that yet, and that's a critical problem when you design these systems: how do you allow flexibility but also have a thing that anybody can do? Because you have that business continuity issue ─ if a key developer leaves, which they do all the time, how do you have redundancy?
Frankly, from an agency perspective, how do you bill your client for redundancy because they don't care but they expect you to be redundant?
How far did you guys get in that standardization?
Obviously, standardization is an ongoing living effort. But do you feel like you can do it? What were some key aspects to really getting as close as possible to that?
TYLER: That's a good question because I think there's standardization in a couple of different places. There's standardization that we think about in terms of code standards. Does your code look like my code?
But, then, there is also standardization, as I've previously mentioned, around workflow and tooling. Are our tools on this project the same as the tools on this project?
And so, with the latter, the tooling, we crushed it from pretty early on. I think that we did a pretty good job through version-controlling our workflow and making sure that it's up to date.
Surprisingly, it's through a relatively ancient technology ─ we use Bower to do all of that. We have a custom repo that Bower keeps checking for updates, and if an update is available, Bower automatically runs it.
That way, everybody on all of our projects is guaranteed to be on the same workflow.
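A rough sketch of that kind of auto-update check ─ the package name `bva-workflow` and the hard-coded version numbers are invented for illustration; in a real project they would come from `bower info` and the local bower.json:

```shell
#!/bin/sh
# Hypothetical sketch of the auto-update loop described above. The
# package name "bva-workflow" and the versions are stand-ins.

# Print "yes" if dotted version $2 is newer than $1, else "no".
needs_update() {
  newest=$(printf '%s\n%s\n' "$1" "$2" | sort -t . -k1,1n -k2,2n -k3,3n | tail -n 1)
  if [ "$newest" = "$2" ] && [ "$1" != "$2" ]; then
    echo yes
  else
    echo no
  fi
}

local_version="1.4.0"      # stand-in for the version pinned locally
registry_version="1.5.2"   # stand-in for the latest published version

if [ "$(needs_update "$local_version" "$registry_version")" = "yes" ]; then
  echo "workflow out of date; would run: bower update bva-workflow"
fi
```

The point is less the tool than the loop: compare local against the shared repo, and update automatically so no project drifts.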
Code standards, on the other hand, is a little bit more difficult especially for us because I think the driving factor behind all of the tooling and workflow and crap like that has been agility or speed for us. So it's like speed is this primary KPI that we're keeping in mind when we introduce anything in the workflow.
So if something we introduce has an impact on how quickly we can return finished work to the client, then it's taken really seriously and we consider whether it's actually worth bringing into the workflow.
And so, there are tools and ways to ensure that code quality: running code through a linter, having people open pull requests, and if the status checks don't pass, the code doesn't get merged in. That's really nice for ensuring sameness of code, but the problem is that it introduces a delay. If the code doesn't pass because someone missed a comma somewhere, something like that, we have to go back and change it even though it doesn't really affect the end product of what's being put out.
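A generic sketch of that kind of merge gate ─ the commands here are stand-ins; a real setup would invoke the project's actual linter and test suite:

```shell
#!/bin/sh
# Sketch of a merge gate: run every check, report failures, and exit
# non-zero if anything failed so CI marks the pull request unmergeable.
# The commands passed in are stand-ins for a real linter/test suite.

run_checks() {
  status=0
  for cmd in "$@"; do
    if sh -c "$cmd" >/dev/null 2>&1; then
      echo "check passed: $cmd"
    else
      echo "check failed: $cmd"
      status=1
    fi
  done
  return $status
}

# Example invocation; swap "true" for e.g. "eslint src/" and "npm test".
if run_checks "true" "true"; then
  echo "all status checks passed; mergeable"
else
  echo "status checks failed; merge blocked"
fi
```

The design choice is that every check runs even after one fails, so a developer sees all the problems in one pass instead of fixing a comma, re-pushing, and discovering the next failure.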
So it's this ever-present balance of what's important to you and how you weigh options against those priorities. I think, as an agency, we are now starting to shift from this priority of speed toward, “Okay, let's dial it back on the speed stuff and focus on the quality stuff,” and, as such, the whole workflow and build tooling is adjusting.
We're switching to a Gitflow workflow so we're getting more code reviews in play. It's about trying to keep your build system and all this stuff in line with what the agency's priorities are at that moment.
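For readers unfamiliar with Gitflow, a minimal sketch of the branch flow in a throwaway repo ─ branch and author names are illustrative, and in a real setup the merge back into `develop` would happen through a reviewed pull request rather than a local merge:

```shell
#!/bin/sh
# Minimal Gitflow-style sketch: feature branches cut from develop and
# merge back with a merge commit, which is where code review happens
# (as a pull request) in a real setup. All names are illustrative.
set -e
rm -rf gitflow-demo
git init -q gitflow-demo
cd gitflow-demo
git config user.email "dev@example.com"
git config user.name "Dev"
git commit -q --allow-empty -m "initial commit"
git branch develop                              # long-running integration branch
git checkout -q -b feature/cart-upsell develop  # feature work starts here
git commit -q --allow-empty -m "add upsell module"
git checkout -q develop
git merge -q --no-ff -m "Merge feature/cart-upsell (after review)" feature/cart-upsell
git log --oneline -n 2
```

The `--no-ff` merge keeps an explicit merge commit per feature, which gives the review a durable anchor in history.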
LEDGE: Right, because higher QA standards are always going to trade off on speed of delivery. Is it about launching a product as fast as possible because the nature of a lot of agency work, as you've said, is very ephemeral and the shelf life of the product probably isn't very long, anyway?
Now, of course, clients don't want to hear that. They want to say, “I'm paying the lowest possible price, once, to get a great thing that is going to last forever.” And that's not realistic in the software development life cycle.
I ask everybody this question, sort of my wrap-up question: We're in the business of evaluating and placing the best A-plus unicorn senior engineers. That's what we do. And we have a pretty strong vetting process and heuristic for doing that.
But I like to ask everybody we have on, “In your experience growing an engineering team like that, what are the heuristics and measurements and tactics that you use to identify and pick out the best engineers to add to your team when you're doing the hiring?”
TYLER: That's a good question. I think there's probably a three-part approach that I've been using. The first one is initial sourcing: looking and seeing if someone has a portfolio. Is there code that I can look at? Do you contribute to open source? What's your footprint in the ecosystem? That's the first check.
The second check is that we run people through some sort of test with correct right-or-wrong answers ─ nothing open ended. Then, you get evaluated again: Did you score upper third, middle third, or lower third?
The final thing is when you come in for an in-person, we have you write code in front of us. It's not like a whiteboarding thing; it's more like an open-ended just talk it through.
To answer your question, the major factor or quality we're looking for is “Can you do the work? Are you just sort of trying to fool everyone into thinking that you're a good developer? Are you a good developer?”
There's no way to get around that. You can maybe talk your way through an interview, but we try to put these checks in place. It's like, “Do you know how to write code? Are you good at writing code?”
If you check out, then you're in.
LEDGE: Thanks ─ very tactical! I love it. Tyler, thanks for joining us. It's good having you on, man.
TYLER: I appreciate it. Thank you.