Idit Levine
Episode 17 - July 28, 2020

Idit Levine on Building a Virtual Service Mesh

Featuring Idit Levine, CEO and Founder of solo.io

In a new episode of Semaphore Uncut, Solo.io founder and CEO Idit Levine (@Idit_Levine) shares insights on what's next in service mesh solutions. We dig into the importance of the network for distributed systems, the evolution of API gateways, the cutting edge of service mesh infrastructure, and more. Idit kindly shared many details from her journey of discovering and developing Solo's products.

Key takeaways:

  1. When everything is distributed, everything has to go on the network wire.
  2. Gloo is a feature-rich, Kubernetes-native ingress controller, and next-generation API gateway.
  3. Service Mesh Hub lets you group multiple service meshes into one virtual service mesh that you talk to.
  4. WebAssembly Hub is a place for the community to share and consume service mesh or API gateway extensions.

Listen to our entire conversation above, and check out my favorite parts in the episode highlights!

You can also get Semaphore Uncut on Apple Podcasts, Spotify, Google Podcasts, Stitcher, and more.

Like this episode? Be sure to leave a ⭐️⭐️⭐️⭐️⭐️ review on the podcast player of your choice and share it with your friends.

Edited transcript

Darko (00:02): Welcome to Semaphore Uncut, a podcast for developers about building great products. Today I’m excited to welcome Idit Levine. Idit, thank you so much for joining us. Please introduce yourself.

Idit: I'm Idit Levine. I'm the founder and CEO of a small company called Solo.io in Cambridge, Massachusetts, in the U.S. Before I started this company, I spent most of my career at startups. I was at DynamicOps, which was acquired by VMware; we were doing cloud before cloud was actually called cloud, and I was one of the first engineers there. Then I moved to another startup called CloudSwitch, which had very innovative technology back then. It was acquired by Verizon, and we built Verizon's next-generation public cloud after that. I learned quite a lot about cloud and scale; it was really educational for me.

Then I joined EMC. Mainly I wanted to move to a big company; I was very curious about what it would be like to work there. I was the CTO of the Cloud Management Division, which was pretty insane. It was a new division, and we started working on Cloud Foundry, Mesos, and Kubernetes back then; it was innovative and not exactly typical of the vision EMC had at the time.

After EMC merged with Dell, I decided to start my own company. My main problem was the pace: in a big organization, you're usually slowed down by other things. I started Solo almost two years ago, and to be honest, we're having so much fun.

Microservices put networking in the spotlight

Darko (02:55): You have a very interesting line of products, and an interesting story of how you got there. Some of our listeners have probably heard about Gloo, but can you share a bit more about your products and the technical side behind them?

Idit: When I started Solo, it was clear to me that there was a huge shift in the market. Everybody was moving from monoliths to microservices. When there is a big shift like this, there is a lot of room and need for new technology and tools in the market. I realized that a lot of people were focusing on how to rewrite the application.

But once someone rewrites that application, they will need to deploy it somewhere. Let's say that Kubernetes has solved that problem; what's interesting is that now everything is distributed. And when everything is distributed, everything has to go on the wire. If everything has to go on the wire, I believe that networking will become the queen of the infrastructure.

Stuff like: how do you make sure that two microservices will be able to communicate with each other? How do you make sure that they do it in a safe way? How do you make sure that you understand what's going on when your logs are spread out all over and there is latency between your microservices?

Rethinking the API gateway in the age of Kubernetes

When I looked around, I discovered service mesh, something that had just emerged from the folks at Buoyant, and then Istio was launched by Google and IBM. I looked at this and said, "This is the future, and I really want to be part of it." But here's what else I realized: I'm not Google, and I'm not IBM. What is the thing that will be very important, that I can build right now, and that will give me a huge advantage when service mesh is everywhere?

Envoy Proxy was, by far, the next-generation proxy. It is API-driven, so you're able to extend it quite dramatically. You don't need to restart every time you reload configuration. There's the discovery functionality. There's so much good there that I thought, "This will be huge. What if I take this and try to apply it to something people will buy today?"

The API gateway is a ten-year-old technology. Products that are on the market today are not built for microservices or for the scale of API calls they now have to handle. They require an active Cassandra cluster to save state and hard restarts to reload configuration. They have no discovery functionality, which is very important in a dynamic environment. And they don't have a declarative model like we're used to today.

So I said to myself, “What if I take Envoy Proxy,” which I believe will be the future of the cloud, “and build the best API Gateway?” Like, the way people would build an API gateway today, not 10 years ago. That’s how Gloo came to life.

Why Gloo is different

Gloo has a declarative model, is based on top of Envoy, and is very extensible. If you already have Kubernetes, you don’t need to worry about how it runs.

We open-sourced Gloo I think a year and a half ago. We learned a lot from this process, mainly working with customers. It’s very useful for us because each of them has their own infrastructure. The beauty of it is that all of them eventually want to adopt a service mesh.

Gloo, right now, is in explosion mode. We're getting so many customers, and the coronavirus didn't slow us down; actually, we are just going faster right now. And of course, we are contributing to Envoy upstream. We really know Envoy inside and out, and that's helped us a lot, because that is the piece that has become very important in the service mesh ecosystem.

Service meshes need a way to talk to each other

At one point it hit me that an organization is not only going to have more than one instance of a service mesh; there is a chance those instances will be different meshes. That's when I came up with a project that we're now calling Service Mesh Hub. The idea was that we somehow need to make those meshes work together, so we need to make sure that they are communicating in the same language.

When I tried to figure out how to solve this problem, I came up with two solutions. The first one: make sure that all those meshes talk the same language. We need an abstraction layer, and by the way, we need a simple one, because back then at least, Istio had, and still has, a very complex API.

The second thing that we did was come up with the concept of grouping. We're calling it a virtual mesh: you can group several meshes together and treat them as one virtual mesh. Now a service in one cluster can talk to a service in another seamlessly, because we are taking care of all the work behind the scenes.
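To make the grouping idea concrete, here is a minimal, hypothetical sketch in Go of what a virtual mesh boils down to conceptually. This is not Service Mesh Hub's actual API; the type and field names are illustrative assumptions.

```go
package main

import "fmt"

// Mesh identifies one installed service mesh, e.g. an Istio installation in one cluster.
// (Hypothetical types for illustration only; not Service Mesh Hub's API.)
type Mesh struct {
	Name    string
	Cluster string
}

// VirtualMesh groups several meshes so they can be configured as a single unit.
type VirtualMesh struct {
	Name        string
	Members     []Mesh
	SharedTrust bool // e.g. federate identity across members under a common root of trust
}

func main() {
	vm := VirtualMesh{
		Name: "global-mesh",
		Members: []Mesh{
			{Name: "istio-us-east", Cluster: "cluster-1"},
			{Name: "istio-eu-west", Cluster: "cluster-2"},
		},
		SharedTrust: true,
	}
	// A policy written once against the virtual mesh would be fanned out
	// to every member mesh behind the scenes.
	fmt.Printf("virtual mesh %q spans %d meshes\n", vm.Name, len(vm.Members))
}
```

In Service Mesh Hub itself the grouping is expressed declaratively, through Kubernetes custom resources, rather than in code, but the idea is the same: you describe the group once and the tooling translates it into per-mesh configuration.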

Then, the last thing: we understood that when we have customers running those, they are really just abstracting the network. When you need to customize a service mesh or Gloo, it's a complex problem. There's the data plane and the control plane, and you need to be able to customize both of them. We realized that instead of writing C++ we could help people extend Envoy pretty easily with WebAssembly, which I think is the future of the cloud.

Using WebAssembly to develop plugins for Envoy

Darko (16:29): Can you give us a background on WebAssembly? Why do you think it’s so great? And how, in practice, do you use it to create new plugins for Envoy?

Idit: WebAssembly was built specifically for the browser. The technology is very interesting, and they created something called WASI, an interface that allows you to take it out of the web and bring it to different places. Google thought it would be a very good idea to put it in Envoy, so we basically brought Wasm into Envoy's memory.

We've created an ABI, an interface like an API but for binaries, that allows Envoy and the Wasm module to communicate. Now what we can do is write a Wasm module and bring it to Envoy, and Envoy gets extended: you don't need to recompile because the module runs on top of it. You can use whatever language you want. It's safe. It's fast.
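As a rough illustration of what such a module can look like, here is a minimal sketch of an HTTP filter written against the open-source proxy-wasm Go SDK (github.com/tetratelabs/proxy-wasm-go-sdk) and compiled to Wasm with TinyGo. The exact SDK surface has changed across versions, so treat the names below as assumptions rather than the definitive API.

```go
package main

import (
	"github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm"
	"github.com/tetratelabs/proxy-wasm-go-sdk/proxywasm/types"
)

func main() {
	// Register the VM context; Envoy's Wasm filter drives the rest via the proxy-wasm ABI.
	proxywasm.SetVMContext(&vmContext{})
}

type vmContext struct{ types.DefaultVMContext }

func (*vmContext) NewPluginContext(contextID uint32) types.PluginContext {
	return &pluginContext{}
}

type pluginContext struct{ types.DefaultPluginContext }

func (*pluginContext) NewHttpContext(contextID uint32) types.HttpContext {
	return &httpContext{}
}

type httpContext struct{ types.DefaultHttpContext }

// OnHttpRequestHeaders runs for every request passing through the proxy.
func (*httpContext) OnHttpRequestHeaders(numHeaders int, endOfStream bool) types.Action {
	if err := proxywasm.AddHttpRequestHeader("x-hello", "from-wasm"); err != nil {
		proxywasm.LogErrorf("failed to add header: %v", err)
	}
	return types.ActionContinue
}
```

Envoy loads the resulting .wasm binary at runtime through its Wasm filter, so the proxy itself never has to be recompiled; that is the extensibility Idit is describing.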

We helped Google a bit, but they did most of the work. When I saw this, I said, "This is really important, but here's the thing: it's still not simple. You still need to build this Wasm module, which is not a simple thing. Then you need to somehow bring it into Envoy's memory, which is again not simple. Then you need to set the right configuration for the Wasm filter in Envoy." It's not intuitive, so I said to myself, "Okay, we need to make it way simpler. We need to simplify it for our users." I saw a huge similarity to what Docker did.

Less is more

Idit: Docker knew how to take something very complex and make it easy to work with, and this is what we tried to do for Wasm. We felt that we could give people a Docker-like experience, so we built a tool called wasme. It's very simple: when you run wasme init, it creates your project and a container and downloads everything; there's no dependency besides Docker. It gives you a template with the structure of what you should build. You open it, add some business logic, and when you finish writing the business logic, you can build it with one command, wasme build. Then you can push it to a registry called WebAssembly Hub, and you can pull it from there if you want to play with it or test it on a local Envoy. And we added another command called deploy, so then you can deploy it.

We did it in the beginning with Gloo. Google really liked it and said they wanted that experience for their customers: "What if we announce it also as the official way to extend Istio?" So that's what we did together. And now WebAssembly is the official way to extend Gloo and Istio, and I believe other platforms in the future. That's really powerful, and I'm excited about the future of it.

I really believe there will be a huge refresh in the market around this. Try it out: just go to WebAssembly Hub. We're also on Slack, so you can ask us any questions you want. As I said, I'm very excited about this specifically.

Darko (26:26): Thank you. It was an amazing conversation. We went really deep, but I think it’s very digestible and easy to follow. Good luck with all the projects and pushing the whole community forward.

Idit: Thanks so much. Check out Solo.io, we are hiring and growing like crazy, and join our Slack community. We would love to have you guys.

Meet the host

Darko Fabijan

Darko, co-founder of Semaphore, enjoys breaking new ground and exploring tools and ideas that improve developer lives. He enjoys finding the best technical solutions with his engineering team at Semaphore. In his spare time, you’ll find him cooking, hiking and gardening indoors.
