DevOps Automation, Orchestration, and Choreography

Steve Peak is the founder and CEO of Asyncy, an ambitious foray into the brave new world of DevOps automation, orchestration, and choreography. An accomplished software engineer himself, Steve's contributions to the development toolchain include other efforts like Codecov.io, which he founded in 2015. The future of application choreography in the cloud era requires new ways of thinking about abstraction, and Asyncy's passionate founder and team are charting new paths in this exciting field.

Steve Peak | Asyncy
Founder and CEO

Steve is the passionate founder and CEO of Asyncy. Previous founder of Codecov. Father, lover and expat in Amsterdam.

 

Read more about Asyncy here. 

David "Ledge" Ledgerwood
Client Services | Podcast Host
Your host, Ledge, is a many-time founder, with deep experience in growing six-figure startups to eight-figure enterprises. Clients talk to Ledge first when exploring a relationship with Gun.io.


Transcript

DAVID LEDGERWOOD:  Steve Peak is the founder and CEO of Asyncy, an ambitious foray into the brave new world of DevOps automation, orchestration, and choreography. An accomplished software engineer himself, Steve’s contributions to the development toolchain include other efforts like Codecov.io, a popular code coverage tool which he founded in 2015.
The future of application choreography in the cloud era requires new ways of thinking about abstraction, and Asyncy’s passionate founder and team are charting new paths in this exciting field.
Steve, thanks for joining us. I'd love to hear your story for the audience.
 
STEVE PEAK:  Thank you, Ledge, for having me here. I've been an entrepreneur for about a decade ─ about twelve years ─ specifically in developer tools and software in general.
My journey has been quite fun, with a lot of challenges along the way. The first product I built was a point-of-sale company that never really took off, but I learned a lot from that ─ how to manage software, how to talk to customers, how to do support.
During that process, a lot of people started asking me to build applications that had some complexity to them, so they could use their data to build some really cool things.
This was about four or five years ago and, at that time, it was really difficult to build applications rapidly. We didn't have the tools. We didn't have ways to orchestrate these things: how to get them out to market, how to support them, how to scale them, let alone the business side. So you end up in this weird position of, “Okay, I want to build these products for you, but I'm unable to actually build them.”
At that point, I was like, “I want to solve this problem. I think it's the same problem a lot of people are facing.” And so, I prototyped the first super-high-level programming language, which I called “Storyscript,” and it exposed all this business logic. I thought it would be really cool to be able to write down the stories of data ─ what you want the application to do ─ and actually have that be executable.
And that's how this journey all started.
 
LEDGE:  What is Storyscript? What does it mean? How about the half-technical guy over here?
 
STEVE:  Sure! We all have programming languages, and they all serve a purpose. We had low-level programming languages that abstracted us away from punch cards and C and other things like that.
We moved to Python and Ruby and Java and all these other great languages, and they've served a really strong purpose in getting us to build bigger and stronger things.
But the problem is that these languages are riddled with complexity. They have a lot of requirements and demand in-depth knowledge of how to get them running in an environment; let alone all the logging, metrics, tracebacks, and DevOps deployments you have to do. There's so much energy that goes into building something. As an industry, we've tried to abstract further and further.
I looked at this whole problem: what does it look like in the end? What if we abstract all the way up to the human? What does that actually look like?
And so, it looks like English. I want to be able to describe an application in English and have it translate to a computer language that is much closer to human language.
What this language does is choreograph microservices and serverless functions. That's part of the platform we've created, but the language itself is the interface for that platform.
So what we've created ─ this language itself ─ defines your business logic and it also creates your architecture for you. Yet you still get all the power of the languages you're used to. You still get all the microservices.
As long as it's just a Docker container, it's great. And you also get all the serverless functions you can imagine.
Think of it as a language that acts as a protocol between services. It abstracts away the orchestration entirely. Essentially, it creates an Auto DevOps environment by defining this as choreography.
 
LEDGE:  So why did you think that an Auto DevOps environment was necessary? I would imagine that you would get your DevOps purists who want to make everything work in a new world of bare metal. It isn't bare metal but it's bare services and they want to touch it and feel it.
How do you convince those types of folks who are like, “Hey, let's go”? It's almost like telling developers that low-code is a good idea.
 
STEVE:  Low-code is a challenge. I would say that no code is a bad idea. Low-code is a little bit more scary for some people ─
 
LEDGE:  You're talking about low-code DevOps. Why should we do that?
 
STEVE:  Here's the fact. As a business, you want to focus on the goals of your product. You want to focus on the features of your product. DevOps is more or less a consequence of that.
So let's make that assumption. Let's make the assumption that you don't want to do DevOps. The value of your business is not how you manage your product. It's how you build your product; it's how you scale new features. It's not how you scale servers.
With that being said, we need something that removes us further from the hardware and moves us closer to the features. And, yes, it is a little bit difficult to wrap your head around “Auto DevOps”; that's a big, tall order.
But, really, what you're focusing on is a different architecture, in general. So the problem right now with orchestration is you have very low visibility and you have to do a lot of work to get these services lined up.
There are actually very few standards. I know a lot of people can challenge me on that. But there is no guide for building a microservice. There's no central location.
I want to say this not on a business level but on an industry level. There's no central place to send metrics or logs, or any standard for how a service actually scales in response.
Do we have strongly typed services?
We don't.
So there are really limited standards around the services themselves. And that's what we've created as well. We've created an open microservice guide that defines all those things I just mentioned which creates a highly reusable service that's platform agnostic.
If this kind of guide can get adopted by the industry, forget about Asyncy; forget about Storyscript. If this guide could be adopted, it can advance the industry by having these highly reusable services.
And we need that. We need to be able to have services that are not just wrapped up in a Docker container and then chucked out in the industry.
But from an orchestration level ─ let's go back to orchestration ─ your application essentially has a couple of strategies. You can couple containers ─ couple microservices ─ so they essentially communicate directly with each other. And that can create a lot of danger; a lot of people out there suggest that's not a good pattern.
And there's also the message queue. Now you have a single point of failure, and you're probably going to have to scale up this message queue. A single point of failure is another problem for microservices.
And then, you end up having a lot of network traffic.
We need another strategy. We need something with no single point of failure, something that scales across all of your pods, with communication between services so that services are completely isolated, independent, auto-scaled, and managed.
And so, the services themselves are strongly typed so that we can communicate with them with high transparency.
And that's exactly what we've created ─ this environment where a lot of the DevOps ─ if not, most of it ─ is automatically managed for you by a smarter environment.
 
LEDGE:  With the automatic management itself, how do you manage fault tolerance and errors? No software is perfect, right? So you're taking on an enormous burden of operations automation, which means you're potentially becoming a very important upstream provider of all kinds of stuff.
How do you prevent yourself from becoming the macroservice bottleneck, the problem that brings everything down ─ the same way that when a region of Amazon goes down, we all take the day off because nothing works anymore?
 
STEVE:  There are a lot of things that we can say about “What about this? What about that?” And let's just focus on one example. Let's look at the actual execution strategy.
Let's look at a service. If a service goes down ─ we want to scale to zero as well. Let's just say a service is down and we immediately restart it. What's really beautiful about our framework is that we have the opportunity to essentially pause the execution of a step in the workflow and wait for that service to recover.
That kind of strategy would need to be implemented manually by DevOps in an orchestration environment. This is something we have built in. Because we have control over the execution environment and over the strategy of how data moves around your architecture, we can actually pause the execution, wait for the service to recover, and retry.
In theory, this is a much better approach than just saying, “Oh, the service is gone. Throw an error and see how it's handled from there.”
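The pause-and-retry strategy Steve describes can be sketched in a few lines of plain Python. This is only a toy illustration of the idea, not Asyncy's actual engine; the exception type, function names, and retry parameters are invented here.

```python
import time

class ServiceUnavailable(Exception):
    """Raised when a downstream service cannot be reached."""

def call_with_recovery(step, retries=5, wait_seconds=2.0):
    """Run one workflow step; on failure, pause and wait for the
    service to recover instead of failing the whole story."""
    for attempt in range(1, retries + 1):
        try:
            return step()
        except ServiceUnavailable:
            # The engine "pauses" here: the step's inputs are still in
            # hand, so after a wait we can simply replay the call.
            time.sleep(wait_seconds * attempt)
    raise ServiceUnavailable("service did not recover in time")

# Example: a service that is down for the first two calls.
state = {"calls": 0}

def flaky_step():
    state["calls"] += 1
    if state["calls"] < 3:
        raise ServiceUnavailable()
    return "ok"

print(call_with_recovery(flaky_step, wait_seconds=0.01))  # prints "ok"
```

Because the runtime holds the step's inputs, the workflow can resume exactly where it paused rather than restarting from scratch.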
These are the kind of strategies that we're looking into in doing research and development ─ on how to solve these problems. And, yes, software, inevitably, is going to break. Software is, inevitably, going to have problems.
But as a developer myself and as an industry, in general, we want to keep abstracting further and further away from the machine and looking more towards these goals and these features and applications in making these things more highly reusable and more fault tolerant.
I mean, our platform, in general, is intentionally designed to be safer to use and more fault tolerant by spreading your services across many nodes, many clusters, and many zones.
And so, if any single pod goes down, or any single zone goes down, you won't even notice. You won't even feel it, because the services will recover in other areas.
 
LEDGE:  I've been told ─ and I've tried to study a little on my own ─ that domain-driven design is sort of the most important way to begin thinking about microservices in your business. And it strikes me that you've probably had to do some kind of meta domain-driven design, because your domain is the one everybody else is going to build their domain-driven-design microservices on.
What was the thinking process to even begin to organize this sort of meta abstraction?
 
STEVE:  That's a good question. We want to look at things from a more fundamental level. We want to look at it from a service level where services are first class.
We define a service as maybe a company: maybe it's social media, maybe it's Twitter, or maybe it's a database. And there's a certain set of actions. They might be one-off actions; they might be event-driven actions; they might be streaming; they might be a bunch of other things. These are the actions of the service.
And then the actions of that service take arguments and produce results. That's highly consistent across all services. So if we have that kind of pattern, we can create a language from it, and this is what our Storyscript does.
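The service/action/arguments/results pattern Steve outlines can be modeled in a few lines of Python. This is purely illustrative ─ the service names, action names, and the `run` helper are all made up here, and real Storyscript syntax looks nothing like this ─ but it shows why the pattern is uniform enough to build a language on.

```python
# A toy model of the pattern: every service exposes named actions;
# each action takes named arguments and returns a result dictionary.
# All service and action names below are invented for illustration.

services = {
    "twitter": {
        "tweet": lambda args: {"id": 1, "text": args["text"]},
    },
    "database": {
        "insert": lambda args: {"rows_written": 1},
    },
}

def run(service, action, **args):
    """One 'line' of a choreography script: invoke an action on a
    service with named arguments and get back a result."""
    return services[service][action](args)

# Two 'lines' of choreography: post a tweet, then store the result.
result = run("twitter", "tweet", text="hello world")
run("database", "insert", record=result)
print(result["text"])  # prints "hello world"
```

Because every service follows the same action/arguments/results shape, a runtime can wire any service's output into any other service's input without per-service glue code.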
We've also added all the other beautiful things that come with a language: you have your loops, you have your own functions, you have try/catch, you have mutations, all embedded in our platform.
That's a non-trivial thing, right? In your current orchestration environment, ask yourself, “How do we mutate data, or how do we access keys within the results of certain microservices?”
Or we create middleware; we create another service that does the mutation. It might sound trivial, but how do you convert a string from lowercase to uppercase in a microservice environment? Why is that difficult?
And this is something that's built into the platform, with a very intuitive way of writing it out.
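To make the lowercase/uppercase point concrete, here is a rough Python sketch contrasting the two approaches. Both function names are invented for illustration; in the first case, imagine the function sitting behind HTTP in its own container.

```python
# Without built-in mutations, even uppercasing a string can mean
# standing up, deploying, scaling, and monitoring a tiny middleware
# service just to transform data between two other services:
def uppercase_service(payload):
    return {"result": payload["value"].upper()}

# With mutations built into the language, it is just an inline
# operation on the data flowing between services:
def inline_mutation(value):
    return value.upper()

print(uppercase_service({"value": "ledge"})["result"])  # prints "LEDGE"
print(inline_mutation("ledge"))                         # prints "LEDGE"
```

The transformation is identical; the difference is the operational cost of where it runs.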
And one thing I'd like to note as very important for our future as well is that we have the opportunity to understand data in a deeper way. Because we have control of the execution environment, we know what data comes in, we know what data jumps between each service, and we know what data goes out. And this provides a very powerful way to use machine learning to understand your application more deeply.
Imagine you have an application where, let's say, you use FullContact and you provide the email of a user in your system. We can identify that user automatically via our platform and create automatic application KPIs for that user. We can give you performance metrics on how a user interacts with the platform. We can give you information on how users are actually using the platform at a deeper level. And this is all entirely automated, because we have deeper knowledge of how the data moves across the system.
So, in theory, we want a developer to come in and write just a couple of lines of code in Storyscript; and that might use a couple of services that have already been built and provided in our Asyncy hub.
Those services automatically get pulled down, an architecture gets created for you, and an application gets up and running for you; and then, when your 20 users come in, or maybe your 2,000 users, you automatically get application KPIs and user information.
And all you've done is write three lines of code, and you get an amazing amount of metrics, an amazing amount of logs, and the information you need to make business decisions based on data.
 
LEDGE:  So it becomes extremely powerful to pull together disparate data exhaust from the microservice execution itself.
 
STEVE:  Another cool thing to think about, which is a big value add: you talk to companies and you ask them a really fun, simple question.
“So you've got microservices ─ cool. You're on Kubernetes ─ awesome. That sounds good. You're using Newtec; that sounds cool. So how long does it take, from Day 1, for someone you've just hired to ship a new function or feature or container into your production?”
The answer we typically get is three months.
What if I presented the concept: what if we could do this in days? How much impact could that have on delivering product in your company?
Storyscript gives you that opportunity. You look at a Storyscript with its exposed business logic, and it's very transparent and intuitive. Your new developer comes in and they're told, “Hey, we want to add a natural-language service between these two data points.”
The developer goes in and adds a new line that says, “Grab the natural-language data right there. Take the output. Put it into the next service.” And they deploy.
We provide that kind of environment. We've automatically built in A/B testing. We have fault-tolerance features. We have services that autoscale. All these beautiful things happen for you automatically, because the language itself defines the architecture, and that is a very beautiful thing.
But no one has taken it as far as Storyscript and Asyncy. We're definitely on the leading edge of this new choreography concept.
 
LEDGE:  We love broad visions. It's genre-defining. It's very cool to get on the inside track with you and learn about it.
How do developers take advantage of this if they want to give it a test run? What you guys are doing ─ how close is it to deployment?
 
STEVE:  We're in private beta. What's really important for us is the developer journey. We want to make sure that when developers join, they have a really solid journey building their first applications, as well as the trust that we need to build with developers. It's very important to have that trust.
And so, we're taking this very cautiously, but we also need to define a category. Application choreography is not really a thing yet; it's not defined. We want to lead that industry, but to do that, we have to be aggressive yet cautious at the same time.
I would love to have users jump on board. I know your community can also be very vocal with their constructive feedback.
 
LEDGE:  They'll definitely be vocal.
 
STEVE:  And I want to say to all of your listeners that I am more than happy to take your feedback, and I'd love to answer questions, because it's very important for us to understand the challenges people have in making sense of what we're doing.
And maybe there are things we're not doing yet, and we want to learn from you. So this is not us making really strong, bold claims that this is the way to do it; this is us building a product and investigating the market to see where choreography could go.
As a developer myself, and as a community, I really do think the evolution of applications is going to move from orchestration to choreography. And the question we all need to ask ourselves is, “How do we do that?”
We're making one approach at this, and we'd love to get feedback from the community to see if this is the right approach, or how to do it differently.
 
LEDGE:  Always do it differently, right? The classic line is, “My standard is always better than your standard.” Just establishing a vocabulary and a new way of thinking is the big lift. The technology isn't the hardest part. It's saying, “Hey, there's going to be a new thing in the world, and we're putting our foot down to make it happen.”
That's an entrepreneurial journey we all have a lot of respect for, having done similar things ourselves.
 
STEVE:  Thank you. And if anybody wants to join us, we're more than happy to have you. Just go to asyncy.com, drop your email there, and we'll be in contact with you.
So much of this product is done, but there's still so much to do on the journey, and we are hiring quite aggressively. We're based in Amsterdam, an amazing city.
It's been really fun, so I really appreciate the time, Ledge. And if you have questions, I'd love to…
 
LEDGE:  It's awesome to have you here. We're looking forward to tracking you. When you're doing your hiring ─ we talked about this ─ you say passion first, and that comes through in what you say about the product and about building your organization. I think that's a great message to end on, too.
Passion seems to be the driving force behind what you guys are doing, in the team you're building and the vision you have.
Super cool! And it's good to be at the front of the train.
 
STEVE:  Awesome! Thank you so much, Ledge. I really appreciate your time putting this podcast together.
 
