Hybrid Data Science in Engineering Teams

As data science becomes increasingly important for business, hybrid science and engineering teams are a necessity, but managing these distinct types of experts for maximum collaboration is a compelling challenge. How can mixed teams maximize performance?

 

In this episode Ledge sits down with Wickus Martin, Director of Machine Learning Engineering and Data Science at Dstillery, a custom audience solutions company that uses AI to find and target ideal customers.

 

Wickus discusses how organizations sometimes shy away from conflict, but how putting real time into productive contention can be remarkably effective. Wickus also shares his personal experience moving from in-office management to fully remote work.

Wickus Martin | Dstillery Inc
Director of Machine Learning Engineering

Wickus Martin is Director of Machine Learning Engineering for Dstillery, a NY-based applied data science company, where he leads a cross-functional team of software engineers and data scientists who together create custom AI audiences for brands and agencies. Previously, he worked in investment banking technology, where his experience spanned systems calculating real-time risk for equity derivatives, designing and building electronic trading systems, and building out exchange connectivity. His areas of interest include big and fast data, design, and architecture, and he especially enjoys working with researchers to create innovative data-driven products using machine learning and optimization techniques.

Keep up with Wickus's work at Dstillery and connect on LinkedIn.

David "Ledge" Ledgerwood
Research, Insights | Podcast Host

Your host, Ledge, is a many-time founder with deep experience growing six-figure startups into eight-figure enterprises.

 

Keep up with Ledge and connect on LinkedIn and Twitter.


Transcript

 

 

DAVID LEDGERWOOD:  Wick, it’s good to have you. 

 

WICKUS MARTIN:  Hey, Ledge. It’s good to be here. 

 

LEDGE:  Could you just give a two or three minute introduction of yourself and your work, so the audience can get to know you a little bit?

 

WICKUS:  Sure. I’m the Director of Machine Learning Engineering at an applied data science company in New York City called Dstillery. 

We focus on creating audiences for marketers to either reach new consumers or to engage with their existing consumers. We're very much AI-driven in creating these audiences. 

 

LEDGE:  Everybody is throwing around machine learning and AI these days. What does this actually mean for your day-to-day work, and your team? What are you doing? When you build an audience, mechanically what is that?

 

WICKUS:  Mechanically what is it? We have these two kinds of audiences: look-alike and act-alike. We might, for instance, look at some of your existing consumers. Once we identify them, what we would then do is find other potential consumers, using machine learning techniques, who either look similar or act similar online in terms of their web browsing behavior. 
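As a very reduced illustration of the look-alike idea Wickus describes, here is a sketch using scikit-learn, a library his team mentions later in the conversation. The features, data, and threshold are all invented for illustration; Dstillery's real system is certainly far more involved.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy browsing-behavior features: rows are users, columns are visit
# counts for a few site categories (entirely made-up data).
seed_customers = rng.poisson(lam=[5, 1, 4], size=(200, 3))  # known customers
general_pool = rng.poisson(lam=[1, 3, 1], size=(800, 3))    # everyone else

X = np.vstack([seed_customers, general_pool])
y = np.array([1] * 200 + [0] * 800)  # 1 = existing customer

model = LogisticRegression(max_iter=1000).fit(X, y)

# Score fresh prospects: a higher probability means more "look-alike".
prospects = rng.poisson(lam=[4, 1, 4], size=(5, 3))
scores = model.predict_proba(prospects)[:, 1]
audience = prospects[scores > 0.5]  # candidate look-alike audience
```

The core move is just supervised classification: treat known customers as positives, a general pool as negatives, and rank everyone else by predicted probability.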

 

LEDGE:  Before I hit record here, off-mike you and I were talking a little bit about how you have to arrange, and have worked on arranging, data science and engineering together. You have to build that product and yet you still have to have this R&D based science organization. 

I wonder, how have you guys put that together? I’ve had this conversation a few times and there doesn’t seem to be consensus on the exact right way to build that hybrid team. Everybody is fighting with this right now. 

What has been successful for you guys?

 

WICKUS:  We had a similar problem. I think we were very traditional in that we had completely separate data science and engineering departments, data science being our research department. The researchers would come up with innovative new data-driven products, but they didn't really have the means to put those ideas into production, and that's kind of hard. 

So, we often had a temporary collaboration between data science and engineering that would be a little bit of a hand-off, but it wasn’t close, ongoing collaboration. That hampered our efforts to create some of these products. 

 

LEDGE:  What did you do to get around that? Did you redesign the teams? What’s the actual arrangement there?

 

WICKUS:  We did quite a few things. Initially, we had almost a rotational thing, where we took somebody from engineering and partnered them with data science for two weeks at a time, and when they were done, somebody else from engineering would rotate in. 

We felt that didn't work really well because you don't build up common knowledge, a shared vocabulary, et cetera. What we experimented with was to take a few engineers and essentially embed them inside data science. We created this new team that we called machine learning engineering. 

It consisted not only of engineers but also of data scientists. The whole idea was that if you take people with different but complementary skillsets and give them a common goal, over time they start talking each other's language. What we found was that we were able to create different kinds of products than engineering had traditionally been able to create by themselves. 

 

LEDGE:  Give me some examples there. How did that crosspollination of thinking work? What was a success story there, and maybe some of the challenges too?

 

WICKUS:  One example, I guess this was one of the earliest products we did, was we wanted to, within our platform, create a way of maximizing performance when we deliver to a particular audience, but within certain constraints. It’s a mathematical problem. It’s an optimization problem. Not a typical engineering kind of thing you would tackle. 

This collaboration resulted in something that the researchers proposed, and then the machine learning engineers took this thing on. There was a lot of what I would almost call shouting, negotiating how best to approach this thing, because the thinking can be very different between these two groups. 

On the one hand it is theoretical; this is the perfect solution. On the other hand, you have the reality of a production system with all of its limitations. We went back and forth a few times but the end result was that we created this thing that ended up being used on like 80% of our campaigns. It was a really big success for us. 
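The constrained-delivery problem Wickus sketches is, as he says, an optimization problem. Purely as an illustration (the segments, conversion rates, costs, and budget below are all invented, and the real formulation is surely richer), it can be posed as a small linear program with SciPy: maximize expected conversions across audience segments subject to a spend cap and per-segment delivery limits.

```python
from scipy.optimize import linprog

# Hypothetical audience segments: expected conversions per impression
# and cost per impression (all numbers invented).
conv_rate = [0.03, 0.05, 0.02]
cost = [1.0, 2.5, 0.8]
budget = 1000.0
max_impressions = 600.0  # per-segment delivery cap

# linprog minimizes, so negate the objective to maximize conversions.
res = linprog(
    c=[-r for r in conv_rate],
    A_ub=[cost],  # total spend must stay under budget
    b_ub=[budget],
    bounds=[(0, max_impressions)] * 3,
)
plan = res.x  # impressions to deliver per segment
```

This is the "theoretical perfect solution" side of the conversation; the production side then has to worry about serving such a plan in real time, which is exactly where the contention he describes comes from.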

 

LEDGE:  It was worth that friendly debate, or expensive debate. I think sometimes organizations shy away from that productive conflict but that seems like a good use to say, that was really effective if we put the time into those expensive meetings and whiteboard sessions. Where it’s a little contentious but it ends up with a better solution. 

 

WICKUS:  To be honest, I think the contention is very good. You want people to push back against your ideas because, at the end of the day, you end up with something that is a very good compromise, satisfying everyone. 

 

LEDGE:  You're one of the leaders there so, I'm curious, how did you break the logjams when they became too cumbersome? Sometimes it's like, okay y'all, we've been talking about this for five hours. Let's make a decision.

Did you come to any moments like that? 

 

WICKUS:  We did. One of the first things we did was create a process built around something like a design document, a Request for Comment document. We felt that sometimes the researchers would stand in front of the whiteboard and try to explain the math. Sometimes you need to have a few of these sessions. It's a little bit hard to get everyone on board if they don't have the same skillsets, and there's a little bit of a ramping-up period. 

So, we have these documents, Request for Comment documents, and you pitch your idea and you share it. Everyone reviews this. We kind of comment on this thing. We go back and forth a few times and we find that it’s a discovery process. It’s a way to communicate. It can go on for like a week, two weeks, but what we find at the end of the day we have something that is a very good compromise, that everyone understands, and that we agree is the best way forward. 

 

LEDGE:  In my own experience, I’m technically literate – certainly not an engineer anymore – but some of the stuff coming out of machine learning and AI and data science I just simply do not understand. I don’t have the background. 

Yet, I can say that when experts like yourself and other guests I’ve had on explained it in a different language, I go, oh, yeah, that makes a lot of sense. I totally get it now. 

I’ve been around enough academics to know that academics like their own sub-disciplinary vocabulary. It’s very hard to shake them out of, hey, let’s find a common language. Even if that’s Dstillery language. Let’s just find a way to communicate about this together because we really are talking about the same thing. 

 

WICKUS:  Yeah. The domains are so very different. The skillsets are different. The toolchains are so different. Our data scientists use a completely different ecosystem of tools than the engineers do. Our engineers use tools like Java and Groovy. The researchers use this whole ecosystem of tools like TensorFlow and scikit-learn and pandas and NumPy, et cetera. 

You come from very different places, and unless you are willing to take the time to bridge that gap, things get lost in translation. 

 

LEDGE:  Yeah. In both directions too. When you get into a CI/CD toolchain and build management and release management and all those things, they’re not going to resonate with your data science types at all. 

 

WICKUS:  No, because the workflows are so different. The mindsets are so different. Researchers are very used to posing a hypothesis and then they want to run a bunch of experiments to test that. They’re used to doing things in a quick, experimental way and at the end of the day they will write it up, maybe publish a paper on it, then they’ll throw it all away. 

Whereas the engineers care about the quality of the code. They want to create things in a reproducible way. They want to write unit tests to make sure they don’t break things. They want to be able to scale things. They want to make sure that it’s robust. It can run 24/7. 

None of these things the researchers care about when they do their experiments. 

 

LEDGE:  I would guess though, and correct me if I'm wrong, that agile and lean, and scientific experimentation, actually form a nice union between those two mindsets. You can say, hey, we want to implement and ship quickly. We want to fail fast. That is experimental in nature. You might end up throwing some away, so the scientists are going to have to say, well, we're not going to throw it all away, we're going to keep some things and actually do some stuff. 

The engineers are going to have to say, let's put some systems and processes around what we keep and what we throw away, based on actual data. Scientists like data. It seems like you could come to a self-reinforcing stasis there that does make a lot of sense between the disciplines, if you take the time. 

 

WICKUS:  Yeah, I think that’s right. I think we’ve learned from each other in that way as well. Absolutely, the engineers would build the prototype and we learn from that prototype, and maybe the final product looks quite different. 

The same thing happens when we work with the researchers. Sometimes they have a certain theory and need to build a quick prototype, because you don’t always have the data to evaluate this thing that you think will work. So you need to build this quick prototype. You gather the data. Then they discover, alright, now that we have the data maybe we need to do things differently. 

There is that kind of overlap, and I think you’re right. The whole agile thing means there’s appreciation for doing things quickly, throwing it away, learning, trying again. 

 

LEDGE:  Yeah. As I understand it in the machine learning and AI disciplines, what kind of gets lost in the pop science, pop tech world is that you need an outrageous amount of data to train the model, and then to look at whether your hypothesis was even remotely correct. Everybody thinks you turn on the AI and it becomes smart and takes over the world. The reality is that all of it is based on training data. If you don't have enough training data, it's going to tell you all kinds of wrong stuff. 

How do you do the backtesting? Is that built into your prototyping methodology?

 

WICKUS:  The data is probably the most important part of it all, right? You need to make sure that you have the right samples of data. You need to make sure that you have the right labels. You need to make sure that you’ve engineered the right features to help predict the outcomes you care about. 

That can be a very iterative process. You go back and forth. You try things. You look at the results. If it’s not predictive you go back to the drawing board, you try a few different things and you iterate. 

That’s not too different from what engineers do either. 
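The iterate-on-features loop Wickus describes can be sketched with scikit-learn's cross-validation utilities. Everything here (the candidate feature sets, the 0.7 AUC threshold, the synthetic data) is invented to show the shape of the loop, not how Dstillery actually evaluates models.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n = 500
labels = rng.integers(0, 2, size=n)  # the outcome we care about

# Candidate engineered-feature sets (synthetic): one is pure noise,
# one has a column that actually correlates with the labels.
feature_sets = {
    "noise_only": rng.normal(size=(n, 3)),
    "with_signal": np.column_stack(
        [rng.normal(size=(n, 2)), labels + rng.normal(scale=0.5, size=n)]
    ),
}

# Try each feature set, look at the results, keep what is predictive,
# and send the rest back to the drawing board.
results = {}
for name, X in feature_sets.items():
    clf = RandomForestClassifier(n_estimators=50, random_state=0)
    results[name] = cross_val_score(clf, X, labels, cv=3, scoring="roc_auc").mean()

keepers = [name for name, auc in results.items() if auc > 0.7]
```

The loop itself is the point: propose features, measure predictive power on held-out data, discard what doesn't help, and repeat.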

 

LEDGE:  Yeah. That nature of predictive value, it just speaks to, hey, we already know what happened in the past, so can we feed it past data that led up to that and can we say, oh, look, we did predict something. 

You can’t just predict the future without having a hypothesis of what the prediction model might look like. 

Another thing I hear, particularly in big data in healthcare and financial and everything, is that people don't appreciate that like 80% of the work at the beginning is just really cumbersome ETL. I don't know if that's been your experience, but you don't get to do the cool stuff until you ingest and normalize an amazing amount of data. That engine is really the unsexy, early work necessary to make any useful models. 

 

WICKUS:  Yeah. I think you've hit upon the dirty little secret of machine learning. There's a lot of doing exactly that. Writing these ETL scripts. Setting up the data pipelines. Kicking off queries. You could wait hours, sometimes days, to actually get the training data and clean it up. 

You run your model, things aren’t quite as predictive as you’d hoped, you have to start again. There’s a lot of that. 

The good thing is, though, you do become familiar with your domain over time, and you do start to get more of a feeling of which variables matter. You just need to make sure that those are easily and quickly accessible. 
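The unglamorous ETL work described here often reduces, in miniature, to something like the pandas sketch below. The column names, raw rows, and cleaning rules are all invented; real pipelines run against far larger data with far messier rules.

```python
import io
import pandas as pd

# A raw event log as it might arrive: inconsistent casing, a missing
# value, an exact duplicate row (all invented for illustration).
raw = io.StringIO(
    "user_id,site_category,visits\n"
    "u1,News,3\n"
    "u1,news,2\n"
    "u2,Sports,\n"
    "u3,Finance,5\n"
    "u3,Finance,5\n"
)

df = pd.read_csv(raw)

# Normalize casing, drop rows with no visit count, remove exact
# duplicates, then aggregate to one row per user and category.
df["site_category"] = df["site_category"].str.lower()
df = df.dropna(subset=["visits"]).drop_duplicates()
clean = df.groupby(["user_id", "site_category"], as_index=False)["visits"].sum()
```

Each step (normalize, filter, deduplicate, aggregate) is trivial on its own; the grind Wickus describes is doing this at scale, waiting on queries, and repeating it every time the model sends you back to the drawing board.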

 

LEDGE:  That’s a good point. 

Let me shift gears a little bit. Off-mike you mentioned that you just made the move to being remote and working distributed. I don’t know if that was a new thing in the company culture, but you are still managing a team. 

I wonder, a lot of people are now in that position both on the company side and on the employee side, where remote and distributed is really becoming a substantial option and a necessary option. What’s that been like, and what has changed in the way you manage and lead and do things together with your team?

 

WICKUS:  I would say it’s still a little bit of a learning experience because I’ve only been doing this for about four months now. Now, I was embedded with our team for quite a long time so I know the company. I know the team. I know the systems. I know the people. 

What I still try and do is I try and head into the office for one week every month for some valuable face to face time. 

There are challenges, obviously, but to be perfectly honest it’s been less of an issue than I originally thought. So far, things seem to be working pretty well. 

 

LEDGE:  What are the key collaboration tools that you use? Does video play a big factor? The listeners can’t see, but you and I are on video now and I personally have found that I like to have video up for all my calls. That it makes a huge difference. 

So, what tools do you guys use to collaborate and do the asynchronous and synchronous and all those things?

 

WICKUS:  The funny thing is, I didn’t really have to change any of the tools we were already using when I was based in the office because, even when I was in the office, people have their headphones on, they’re listening to music. They’re in the flow. They’re doing their thing. 

Instead of just tapping on their shoulder going, “Hey,” I would chat them. They may be sitting a few desks away from me, but I will still G-chat them. So I do that still. 

When we have conferences or one-on-ones and the like, we just use Google Hangouts. That’s what we used to do in the office in any case, and that’s what we do right now. 

I'm remote but I still see the guys. I chat with them all the time. We're in meetings all the time. We have an agile process, so every morning we do a quick standup for like 15 minutes. Everyone ducks into a conference room and I dial into the conference room. Sometimes people work from home in any case. Actually, that happens quite often. They dial in the same as I do and we chat. 

I think things haven’t changed that much, really. 

 

LEDGE:  You did mention some challenges or some changes. What are they then?

 

WICKUS:  I love my commute. I walk downstairs from my bedroom. I pass through the kitchen. I grab a cup of coffee and I head on over to my study. That’s great. 

I would say I do miss the human interaction sometimes. What I've started doing is just popping out to a coffee shop for a little bit. If I don't have a conference call coming up, I will sit there and program a little bit, and if I see that in an hour's time I have another conference call coming up, I just head on home. 

 

LEDGE:  Yeah. That’s smart, to think about how to change your environment. I also wonder about, you mentioned an interesting thing about your neighborhood. That you have started to find all kinds of remote people from different companies and backgrounds. 

Do you think there’s a culture growing up around certain areas? I’ve anecdotally seen that, that in fact remote workers and knowledge workers are clustering in certain areas of the country. That sounded like it was surprising, but the data shows that too. 

 

WICKUS:  Yeah. It was surprising to me because I think I wasn’t aware of it in New York. But then again, I wasn’t actually based remotely when I was in New York. I was at the office. 

Presently, I’m based in Raleigh in North Carolina, and what I’ve found is quite a few of my neighbors who do work for tech companies like IBM, Cisco, Red Hat and Google and the like, they seem to be based at home quite a lot of their time. 

So we’ve been trading war stories and it’s interesting to see that this does appear to be a trend, at least in this area where I am right now.  

 

LEDGE:  Any particular themes you hear coming up, talking to the other remote working class?

 

WICKUS:  You know, it seems to me some of them manage employees who are distributed globally. So, even if they were to go into a local office, a large part of the workforce are, in fact, not local. I don’t know if that is maybe a large part of the reason for it, but it makes sense to me. If a large part of your team… If you’re managing 600 people and 400 of them are distributed across the globe, there’s perhaps less reason for you to be in your local office. 

 

WICKUS:  Well, there are some talks about having more of a global presence. We currently do have quite a few offices around the US. Most of them are sales offices, I think, currently. Most of our tech is in New York City. There has certainly been some discussions in that direction. 

 

LEDGE:  Interesting. Well, look forward to getting the updates on the stories maybe next time. 

Wickus, thanks so much for joining us. This was really an interesting conversation. Thanks for taking the time. 

 

WICKUS:  Thank you, Ledge. 



 
