[MUSIC] There's a famous quote attributed to David Wheeler: all problems in computer science can be solved by another level of indirection. Okay, so let's apply that idea here. In the context of networking, what's another level of indirection, like a pointer? Well, that's an encapsulated packet, okay? An outer header and an inner header. And if we want to solve the problem of running any service anywhere, that's a trick that will come in very handy. So here's what VL2 does. There's an application, or you can think of it as the tenant layer, okay? So the application or tenant of the data center is going to see what are called application addresses, or AAs. These are location independent, okay? Same address no matter where the VM goes. And they're going to see the illusion of a single big Layer 2 switch connecting all of the application's VMs. Now, the physical network, on the other hand, is going to be using a different set of IP addresses called locator addresses, or LAs. These are tied to the topology in much the same way that IP addresses are usually used, right? Where if we move from this rack to that rack, we can't use the same IP address. But that's useful for the physical topology, for physical routing, because it tells us about location. So we will use those locator addresses within the physical network, and just do regular Layer 3 routing via OSPF. They could have used BGP there; they used OSPF. Okay, so the application gets what it wants, the physical layer gets what it wants, but in the middle we have to translate between them. And this is the layer of indirection, or we might call it a virtualization layer, which is a word I'm choosing here, not from the paper. What we do in the virtualization layer is maintain a directory service that maps the application-level addresses to their current locators. Okay, and then VL2 has agents that run on the servers, which query the directory service to find that AA-to-LA mapping.
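The heart of that virtualization layer is just a mapping from location-independent AAs to topology-bound LAs. Here's a minimal sketch of that idea in Python; the class name and the specific addresses are illustrative, not from the paper.

```python
# Sketch of VL2's directory service, reduced to its essence: a mapping
# from location-independent application addresses (AAs) to the locator
# addresses (LAs) where those AAs currently live. Addresses are
# illustrative examples, not VL2's real deployment.

class DirectoryServer:
    def __init__(self):
        self._aa_to_la = {}  # AA -> LA of the ToR switch serving that AA

    def register(self, aa, la):
        # Called when a VM boots or migrates: record its current locator.
        self._aa_to_la[aa] = la

    def lookup(self, aa):
        # Host agents query this to learn where an AA currently lives.
        return self._aa_to_la.get(aa)

directory = DirectoryServer()
directory.register("20.0.0.56", "10.0.0.6")  # VM behind ToR 10.0.0.6
print(directory.lookup("20.0.0.56"))         # prints 10.0.0.6

# Migration: same AA, new locator -- the application never sees a change.
directory.register("20.0.0.56", "10.0.0.9")
print(directory.lookup("20.0.0.56"))         # prints 10.0.0.9
```

Notice that a migration is just an update to the directory entry; nothing about the application's own address changes.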
And then when it sends a packet, it'll wrap the application address in the outer locator header, which can be used by the physical fabric. All right, let's see how that works in an end-to-end example. Okay, so we're going to start on the bottom left here, where an application is trying to send to this destination, 20.0.0.56. Now, that's an application-level address; it could be anywhere in the network. Okay, so this goes to the agent that runs on the same host, and it's going to encapsulate it with the locator. But how do we get the locator? We send that AA up to one of the directory servers, asking, where is this application-level address? And we're going to get back the locator as a response. Okay, and then we stick that in the encapsulated packet, and if you look at that encapsulated packet there, you can see three destinations. The innermost is 20.0.0.56; that's where the application wants to get, the other application somewhere in the data center. Next, that's wrapped in 10.0.0.6, which is the locator of the destination. And finally, we've wrapped it in 10.1.1.1, which is the anycast address of the intermediate switches, which we talked about before. That's to assist our ECMP routing. Okay, so after we've done these multiple layers of wrapping, we send the packet out into the fabric, and it's going to be ECMP'ed out to one of those intermediate switches; let's say it's this one. Now, when the intermediate switch receives that, it's going to do one layer of decapsulation, stripping off the anycast IP header and revealing the next header inside, which has the destination in terms of the top-of-rack switch. Okay, so then we send it over there. That top-of-rack switch is going to, one last time, decapsulate the packet, leaving the AA destination, that is, the application-level address of the destination, so that it gets sent to the right host. And then the host agent delivers it to the application.
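The walk-through above can be sketched as a toy encapsulation pipeline: the host agent wraps the application's packet in two outer headers, and each hop strips one layer. The header layout here is a dictionary for clarity, not VL2's actual wire format.

```python
# Toy version of the end-to-end walk-through: three nested destinations,
# peeled off one hop at a time. Addresses match the lecture's example.

ANYCAST_LA = "10.1.1.1"  # shared anycast address of the intermediate switches

def encapsulate(payload, aa_dst, la_dst):
    # Innermost: AA of the destination application.
    # Middle: LA of the destination's top-of-rack switch.
    # Outermost: the anycast LA, so ECMP can pick any intermediate switch.
    return {"dst": ANYCAST_LA,
            "inner": {"dst": la_dst,
                      "inner": {"dst": aa_dst, "payload": payload}}}

def decapsulate(packet):
    # A switch strips one header, revealing the next destination inside.
    return packet["inner"]

pkt = encapsulate("hello", aa_dst="20.0.0.56", la_dst="10.0.0.6")
assert pkt["dst"] == "10.1.1.1"   # fabric ECMPs it to some intermediate switch
pkt = decapsulate(pkt)            # intermediate switch strips the anycast header
assert pkt["dst"] == "10.0.0.6"   # routed to the destination's ToR
pkt = decapsulate(pkt)            # ToR strips the locator header
assert pkt["dst"] == "20.0.0.56"  # delivered by AA to the right host
```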
So that's the end-to-end walkthrough of how VL2 works. Now, let's take a step back and ask: did we actually achieve our goals? Did we achieve agility? Okay, well, one of the things we wanted is location-independent addressing. Did we get that? Yeah, okay. The application-level addresses are completely location independent. All right, because we've decoupled identifier and locator, this lets us locate the VMs anywhere we want. And this is a concept that's come up a lot in networking, because the IP address's use is overloaded: it's often used both as an identifier and as a locator. So in other words, in traditional networking, when you move, your IP address changes, and this can break applications. We're essentially fixing that by saying, okay, we'll have two different IP address spaces, one for identification and one for location. All right, so good job there. L2 network semantics? Yes. I didn't talk in much detail about this, but you'll be reading the paper. That agent that intercepts the packets also handles L2 broadcast and multicast appropriately, by translating them in the right way using information from the directory service and multicast in the IP fabric. Okay, so it does that simulation, or really emulation, of the L2 behavior. However, we do have to note that both of these achievements require this layer-2.5 shim agent running on the host. Okay, but it's not too hard to see how, instead of running that shim agent on the host, the same concepts and mechanisms can transfer to a virtual switch running in a hypervisor. Next, we wanted performance uniformity. Did we get that? Yes. The Clos network is nonblocking; it's not oversubscribed. Wherever you go, you'll be able to get the same throughput to any other location: we get uniform capacity everywhere. And ECMP provides good, though not perfect, load balancing across those paths.
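Why is ECMP's load balancing good but not perfect? Because it is per-flow: each flow hashes onto one of the equal-cost paths, so a flow's packets stay in order, but an unlucky hash can land two large flows on the same path. A minimal sketch of that hashing, with made-up path names and flow tuples:

```python
# Minimal sketch of per-flow ECMP hashing. The path names and flow
# 5-tuple are invented for illustration; real switches hash header
# fields in hardware rather than calling a library hash.

import hashlib

def ecmp_pick(flow, paths):
    # Hash the flow identifier (e.g. the 5-tuple) to pick one path.
    # sha256 is used here only because it's stable across runs.
    h = int(hashlib.sha256(repr(flow).encode()).hexdigest(), 16)
    return paths[h % len(paths)]

paths = ["int-switch-1", "int-switch-2", "int-switch-3"]
flow = ("20.0.0.55", "20.0.0.56", 12345, 80, "tcp")

# The same flow always maps to the same path, so packets don't reorder...
assert ecmp_pick(flow, paths) == ecmp_pick(flow, paths)
# ...but nothing prevents two heavy flows from hashing to the same path.
```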
However, performance isolation does depend on the tenants backing off to the rate that the destination can receive. Remember, we're always limited by the rate of the end-host NIC. The fabric guarantees that it will never be the bottleneck relative to that NIC. But what if we overload that NIC? Well, then we can get some performance interference, and in your quiz for this week, you'll be trying out a little example of that, so you'll see it a bit more. The paper also leaves open the possibility of some future work on faster load balancing. It leaves open the chance that, although traffic matrices are changing pretty quickly in the data center, you might be able to do something that also reacts quickly. And if you read the CONGA paper, which was optional reading last week, you saw an example of that, with end-to-end, ToR-to-ToR information about which paths are congested. Finally, we wanted security: we wanted isolation among the different tenants. Okay, now, VL2 can do a kind of security by having the directory system allow or deny connections. It can choose whether or not to resolve an application address into a locator address when it receives that request. So it's the right sort of direction for an idea; however, this segmentation is not explicitly enforced at the host, at least as described in the paper. Okay, so we'd have to do a bit more enforcement to really get guaranteed network microsegmentation. All right, now one more question. We started out this week talking about software-defined networking. Where is the software-defined networking in this system? I didn't even mention the acronym once. Well, take a look at the directory servers. Those directory servers are providing logically centralized control, right? They orchestrate the locations of applications dynamically, and they control the communication policy, which pairs can communicate, right? So this is very much the centralized control plane of SDN.
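The allow-or-deny idea can be sketched by extending the directory lookup with a policy check: if the requester isn't permitted to reach the destination AA, the directory simply refuses to hand back a locator. The tenant names and policy shape here are assumptions for illustration, and, as the lecture notes, without host-side enforcement this is not a hard guarantee.

```python
# Sketch of directory-based access control: resolution is denied unless
# policy permits the (source tenant, destination AA) pair. Illustrative
# only -- the paper doesn't specify this policy representation.

class PolicyDirectory:
    def __init__(self):
        self._aa_to_la = {}
        self._allowed = set()  # (src_tenant, dst_aa) pairs permitted

    def register(self, aa, la):
        self._aa_to_la[aa] = la

    def permit(self, src_tenant, dst_aa):
        self._allowed.add((src_tenant, dst_aa))

    def lookup(self, src_tenant, dst_aa):
        # Deny by default: with no locator, the agent can't encapsulate,
        # so the fabric never carries the flow (absent a misbehaving host).
        if (src_tenant, dst_aa) not in self._allowed:
            return None
        return self._aa_to_la.get(dst_aa)

d = PolicyDirectory()
d.register("20.0.0.56", "10.0.0.6")
d.permit("tenant-A", "20.0.0.56")
assert d.lookup("tenant-A", "20.0.0.56") == "10.0.0.6"
assert d.lookup("tenant-B", "20.0.0.56") is None  # resolution denied
```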
The host agents, on the other hand, provide a certain kind of dynamic programming of the data path. There's no explicit general-purpose API there, but they're performing many of the same functions. [MUSIC]