[MUSIC] Hello, today we have with us Rob Sherwood, who is the Chief Technology Officer at Big Switch. He's also leading the Open Network Linux project, was part of the [INAUDIBLE] team at Stanford, has a PhD from the University of Maryland, and, last but not least, was a network and security admin. Hello, Rob, welcome. >> Hi, thanks for having me. >> Of course. And of course, we also have [INAUDIBLE], who's joining us today. >> Hello there. >> All right, so Rob, why don't you give us more background about what you do in your day to day life, both as the leader of the Open Network Linux project and at Big Switch? >> So let me talk briefly about what Big Switch is. It's a bit of a marketing pitch, but it really is the elevator pitch that I use, and I have pitched this to people in an elevator. The idea is that, if you look at who's doing the most interesting things in networking, it's not the traditional network vendors. It's companies like Google, or Facebook, or Amazon. They're building their own switches, they're building their own networking stacks, they're doing completely different, new, interesting things in networking. And a lot of that, fascinatingly, is actually not to build better network products for other people. They're not trying to compete with Cisco and Juniper; they've just decided the stock networking products that they can buy off the shelf don't work for them. And so what Big Switch is actually trying to do is say, all right, let's see if we can learn from some of the things they're talking about, some of the things they're doing, and actually productize that for the people who can't afford to have 250-person development armies working on their network. And so we have two products. They're all based off of merchant silicon, on what I would call a bare metal switch, that is, a switch where you can put software on it yourself. And Big Switch is a pure software company. We put software on other people's switches, and we use SDN technologies, we use OpenFlow, we use centralized controllers, so that you can actually manage collections of little switches like they're one big switch. And that's actually how we get the name of the company. And so historically, people have paid for these big chassis-based switches. They've paid a huge premium for them, and they've been locked into these vendor-proprietary worlds, because they don't want to have lots of points of management. Imagine your entire network consists of lots of little switches and you want to make a change in one part of the network, a change that affects lots of things. So maybe you want to add a global ACL. You'd actually have to log into each of those switches manually, type a series of commands, hope you didn't screw up those commands, and [INAUDIBLE] all those switches and manually audit that. And there aren't really automated processes for doing that right now. A lot of companies will put together screen-scraping technologies or things like that. So basically, imagine you've got this big box, and it has slide-in cards. One of those cards is what's called the supervisor card, and that's actually where the brains of the switch are. And then you would have multiple of what are called line cards, which are the things that actually have lots of ports. And generally, depending on the switch, you may have independent cards that are called fabric cards, or the fabric might be built into the backplane. But think of that switch as a little private network. Each of those line cards acts like a little switch.
What people have never thought of, but this is how the network runs today, is that the supervisor card is effectively a controller. What happens is, you connect those line card switches to the backplane, which maybe [INAUDIBLE] connects to the fabric switches. So it's actually entirely possible for the network inside that box to be oversubscribed, to have packet loss problems. And what ends up happening is very smart people have worked on, how do I engineer the network inside that box to be as unobtrusive as possible? So that you don't have to worry about it. It really is an issue that you can get different performance between two ports that are on the same line card versus two ports that are on different line cards, because maybe there's oversubscription across the backplane. But the critical property, the reason why people pay for the big box rather than lots of little switches, because the little switches are actually much less expensive and you can get them from lots of different companies, is actually the single point of control. They want to be able to log into the supervisor and then basically have the entire rest of the switch abstracted down to one point of control. So you can say, add a route that comes in this interface and goes out this other interface, and you don't have to worry if they're on different line cards or the same line card, or if they cross some sort of weird internal boundary that's an internal function of engineering. And so people pay a lot of money for that. People build bigger and bigger switches, bigger and bigger routers, that internally have all sorts of network-like properties that are abstracted away from you. Even to the point where there's actually an out-of-band Ethernet control channel built into those big switches, and there's a protocol from the supervisor card to the line cards internal to this out-of-band network. And so it's something that you, the user, don't think about at all, because it all just works magically. But if you imagine you add a route, when you log into this box you're really logging into a supervisor card. It's a PC, it's running user-space software like anything else would. And if you change an access control list or you add a route, what happens is that supervisor card will go and, using the out-of-band Ethernet channel, using this update protocol, actually update the forwarding state of all of the line cards and all of the fabrics and all of the things that need to be updated, by some internal update algorithm. And so that's how big switches work in the networks that are already deployed. And so the analogy is, that supervisor card really is just a controller in an OpenFlow sense of things. And you can think of the line cards as leaf switches in a leaf-spine topology, and you can think of the fabric backplane as the spine switches. And having a single point of control really is the value that people want, because they don't want to have to manage, we're talking upwards of 40 switches in a rack, and that could be a huge number if you had to log into them individually. And what's fascinating is, while people haven't really thought of or really named this out-of-band control plane protocol that, in existing switches, runs and manages all the forwarding updates, I call it ClosedFlow, because it's exactly the same thing that OpenFlow does, just in a closed, proprietary manner.
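To make the "supervisor card as controller" analogy concrete, here is a minimal, purely illustrative Python sketch. None of the class or method names come from a real vendor API; it only shows the shape of the idea: one control point accepts a global ACL change and programs every forwarding element, which is what the supervisor does over its internal out-of-band channel and what an SDN controller does over OpenFlow.

```python
# Hypothetical sketch of "one point of control": a supervisor card or SDN
# controller takes one logical change (a global ACL) and fans it out to every
# forwarding element, instead of an operator logging into each switch by hand.

from dataclasses import dataclass

@dataclass
class AclRule:
    src_prefix: str
    dst_prefix: str
    action: str          # "permit" or "deny"

class ForwardingElement:
    """Stands in for a line card or a bare metal leaf switch."""
    def __init__(self, name: str):
        self.name = name
        self.acl: list[AclRule] = []

    def program_acl(self, rules: list[AclRule]) -> None:
        # In a chassis this update travels over the internal out-of-band
        # Ethernet channel; in an SDN fabric it would be OpenFlow or similar.
        self.acl = list(rules)
        print(f"{self.name}: installed {len(rules)} ACL rules")

class Controller:
    """The supervisor-card role: one place to express intent for many switches."""
    def __init__(self, elements: list[ForwardingElement]):
        self.elements = elements
        self.global_acl: list[AclRule] = []

    def add_global_acl(self, rule: AclRule) -> None:
        self.global_acl.append(rule)
        for element in self.elements:      # transitively apply it everywhere
            element.program_acl(self.global_acl)

fabric = Controller([ForwardingElement(f"leaf-{i}") for i in range(4)])
fabric.add_global_acl(AclRule("10.1.0.0/16", "10.2.0.0/16", "deny"))
```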
>> So the internal network, the physical network of these big switches, those networks are non-trivial to build, to get the illusion that there is no network there, that it's just all the cards directly connected to each other. >> Exactly. >> That's very similar, at a smaller scale, what you have to do within a switch, to what modern data centers are moving to now across racks. >> And what gets interesting is, you get advances like what companies like Broadcom have done. The building block of these bigger switches is a single packet forwarding chip, and the capacity of those single packet forwarding chips has really gone hockey-stick exponential in terms of actual packet forwarding performance. And they've also pulled a whole bunch of other chips into one single form factor, so you get these system-on-chip designs, where you get the PHYs in the same package, you get all the packet forwarding, you might get L2 forwarding with L3 forwarding with MPLS all in the same chip. And so, if you think about that fundamental building block, as a network device manufacturer you ask, do I build a 1U pizza box, which is functionally the number of ports I can get out of one chip, and connect these 1U pizza boxes together? Or do I build that chip onto a line card, and another one of those chips onto the fabric backplane, and actually create the big-iron chassis version of that, the big switch version of that? And at some point, if you actually follow the traces through the motherboard versus the cables that you have through the data center, they form the same topology. And so [COUGH] there really is a direct analogy between these two. It's just that historically people paid more for the big box. >> Right. I mean, if you look at data center scale, we're trying to build the whole network to be a- >> [CROSSTALK] >> A whole that applications can't notice, ideally. >> Exactly. And so if you look at, for example, Facebook's Altoona data center, what they've done is they've built a three-layer Clos topology. And Clos topologies have this exponential scale-up property where, when you build them the right way, there's basically no oversubscription. Meaning that if you look at the bandwidth you can get between two ports on the same card versus two arbitrary ports, it's the same capacity through the back end of the network. >> Right, a non-blocking network. >> Yeah, a non-blocking network. And [COUGH] the fact that you can build a network out of smaller boxes like that, but still manage it through a central point of control, I think that is one of the big tenets of what I think of as an SDN design. Maybe you have some sort of external controller that manages, in the case of Facebook for example, hundreds of switches as if they were one big switch. That's kind of the value, because that means that instead of a team of 30 guys who, if they have to make a global config change, have to log into hundreds of switches, they can log into one controller, make the change, and have the controller transitively and automatically manage that config change. >> So you mentioned the merchant silicon switches. What kind of interfaces do these provide? >> These devices are incredibly complex.
There's all sorts of pathologies that have only historical value, or they figured out how to wedge MPLS into the L3 tables because that was how it made sense from a chip layout perspective. I think there are actually very few companies in the world who actually want to program their network directly, and for those companies we have an open source project, Open Network Linux, but that's a very small minority. Most people want to have, effectively, a high-level policy language. They can say these sets of hosts are supposed to talk to these sets of hosts, but not this other set of hosts. And if set A and set B are supposed to talk, I want you to go through one of these five firewall instances and load balance it dynamically. I think that's a policy statement that more people want to consume, and they just want the network to do the right thing given a statement like that. >> So Rob, you mentioned, you know, people want to work with higher level abstractions. >> Yeah. >> So, continuing the analogy of large chassis switches with many line cards providing that more centralized control, what are the differences between those old physically big switches and where people are going with SDN abstractions now, for expressing those higher level policies? >> So I think the policies are very much the same. I think what's happening is, the example I always give is, when you log in to your bank and you get a pop-up that says, hey, do you have a student loan? Let's think about this. Or do you want to refinance? That's an application that's running in the back end of your bank that's trying to make the bank money. Any time you refinance your mortgage, the bank makes money. Right, and so what's happening is, that's a web application which, on the fly as you log in, is doing a complex business analysis of: do you have a mortgage, what's the rate on that, can they make you a better deal, and presenting that in real time. Now, in order to make that happen, there's a whole bunch of security stuff that happens, because that database has really sensitive information. And there's business logic that happens, and there's all sorts of isolation that happens, because there are lots of applications that run in parallel that all make the bank money, and if one of them goes rogue or gets compromised, they want isolation. All of those things have just gotten more and more complex. As banks go online, as banks have more applications, as the banks come up with new and different ways to make money, you know, to provide value to their customers, that network just gets more complicated. And if it were all one big switch, you would still have a bunch of complex policy programming tasks to do. So really, the value add, you could in theory imagine doing it independent of what the physical box looked like. The value add, at least of a company like Big Switch, is that we can automate all of these tasks and provide an easy policy language to say, all right, we're going to tag all of these hosts this way, and they have this role, and the other hosts get tagged this way and have this role, but then when you're talking between these two roles, these are the sets of things that I want to happen, or not happen. And historically, people have tried to do this with VLANs, and they've tried to do it with IP address ranges.
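As a hedged illustration of the kind of high-level policy statement described here, the sketch below uses invented group names and an invented schema; it is not Big Switch's policy language or any real controller's API, it only shows the shape of "these hosts may talk to those hosts, but only through a firewall pool".

```python
# Purely illustrative policy description; the schema and the host_group() helper
# are made up for this sketch.

import ipaddress

policy = {
    "groups": {
        "web-tier": ["10.1.1.0/24"],
        "db-tier":  ["10.1.2.0/24"],
        "corp-lan": ["10.9.0.0/16"],
    },
    "rules": [
        # web tier may reach the database tier, but only via a load-balanced firewall pool
        {"from": "web-tier", "to": "db-tier",
         "action": "permit", "via": "firewall-pool-1", "load_balance": True},
        # the corporate LAN must never reach the database tier directly
        {"from": "corp-lan", "to": "db-tier", "action": "deny"},
    ],
}

def host_group(ip: str) -> str | None:
    """Toy lookup: map an IP address to its group by prefix membership."""
    addr = ipaddress.ip_address(ip)
    for name, prefixes in policy["groups"].items():
        if any(addr in ipaddress.ip_network(p) for p in prefixes):
            return name
    return None

print(host_group("10.1.1.25"))   # -> web-tier
```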
And that works until you run out of VLANs, or you run out of IP address ranges, or your company buys another company that uses that same 10/8 IP address space. And this is not at all unique to banks. Every company that has a website in this day and age has backend web applications that they use to improve user experience and performance, and ultimately to make them more money. And so it's really that the complexity of trying to run a data center has increased so much: how do I manage these policies, and also how do I debug these policies? We've got this cool feature where, and stop me if I'm talking too much about Big Switch, but I think these things are important for people to know. You can give it two points on the network, and we will give you an analysis of both the physical path through the network between those two points, as well as the policy path. So, this host can't talk to this other host because you hit line 12 of ACL foo. And if that's not what you want, then change line 12. And we use that to build up what I think of as unit tests for the network: here's a bunch of pairs of hosts that are representative of policy in the network, and here's the expected policy, yes they should be able to talk, no they shouldn't, or they should go through a NAT process or something like that. And let me rerun that every time you make a configuration change, just like you would have unit tests for a program. It's those types of things that, in theory, you could accomplish without SDN, but I don't know how you would do that as a sane person. You need automation, you need a controller that pulls together lots of different data sources and makes them readily accessible so you can start making these queries. >> So you mentioned Open Network Linux, right? >> Open Network Linux is a very fun project that I started. Big Switch as a company focuses on the people who want the networking problem to go away. And there's a large fraction of people, and I put myself in this other camp, who say, I love the networking problem, that's the thing I'm actually most passionate about. The commercial products basically do their best to make the networking problem go away for people. You can think of it almost like the iPad for networking. It's got one button, you push the button, and it does most of the things you need. But it's not the Linux box, it's not the thing that actually exposes all the details to you. Completely separate from that, I have my Open Network Linux project. You guys can check it out on opennetlinux.org. It is a bare-bones Linux distribution for these bare metal switches. It's all open source, except for the Broadcom SDK bits that we can't open source, but critically, they have open APIs so that you can program to them. It solves a bunch of problems that nobody wants to think about, like, if the fans aren't spinning at the right rate, you can actually burn up the switch, and the switch has a lot of fans. Or getting the LEDs blinking in the right way, and that changes for every box. And so what Open Network Linux does is provide abstractions, so that you, the programmer, don't have to worry about all of these things. You can just worry about: I want to insert this routing rule into the chip, and I want it to go at line rate.
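The "unit tests for the network" idea mentioned a moment ago lends itself to a short sketch. Everything below is hypothetical: the reachability() callable stands in for whatever query a real controller exposes, and the expected outcome labels are invented. It only shows the shape of the technique, pairs of endpoints plus an expected result, re-checked after every configuration change.

```python
# Hedged sketch of "unit tests for the network"; names and outcome labels are invented.

from typing import Callable

# (source host, destination host, expected outcome)
EXPECTATIONS = [
    ("10.1.1.10", "10.1.2.20", "permit-via-firewall"),
    ("10.9.5.5",  "10.1.2.20", "deny"),
    ("10.1.1.10", "10.1.1.11", "permit"),
]

def run_network_tests(reachability: Callable[[str, str], str]) -> list[str]:
    """Return a list of failures; an empty list means the policy still holds."""
    failures = []
    for src, dst, expected in EXPECTATIONS:
        actual = reachability(src, dst)   # e.g. a controller query for the policy path
        if actual != expected:
            failures.append(f"{src} -> {dst}: expected {expected}, got {actual}")
    return failures

# Typical use: run this from CI after every configuration push and reject the
# change if the returned list is non-empty.
```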
>> Well, once you have that, first bare metal switches and then a reasonable operating system, Open Network Linux, to use on them, that will give more opportunity for technology that's kind of embedded inside the network. Not just a centralized controller of the network, but more intelligence within the network. Is that something that's happening, starting to happen? Do you think we'll go in that direction? >> Definitely, although I think people have this vision of, all I need to do is be able to run a user-space process on the switch, and then I can have a firewall or something like that. Now, there are definitely possibilities along those lines, but there's an important architectural limitation for folks to think about. The way these boxes are designed, it really is very much like a server. It probably runs x86, has a PCI bus, and the chip, for example the Broadcom chip, is a device on the PCI bus. If you type lspci, you see, okay, there's a Broadcom Trident II there. What gets interesting is that the PCI bus is actually the biggest bottleneck of the system. >> Mm-hm. >> So a Broadcom Trident II is capable of doing something like 1.4 terabits of traffic in and out of it. And there is what is logically a port that goes out onto the PCI bus. But you're limited there to the PCI bus speeds, and it's not even a 32-lane PCIe 3.0 interface, I think it's four lanes of PCIe 2.0 or something like that. But the net result is you have about a gigabit of traffic that you can get between the switch CPU and the switch packet processor. And so, if you think that you're going to deploy the world's next best generation firewall on a kind of low-end data center switch, this is actually going to be a big physical bottleneck for you. Now, that said, there are hybrid designs, and you can actually build them up pretty much yourself, where you take front-panel ports of the switch and plug them directly into a server. So that's where things start to get interesting. And there are a bunch of interesting designs where you say, all right, well, maybe using DPDK, are you guys familiar with DPDK? The brief thing about DPDK is it's a set of libraries from Intel for very fast packet forwarding on commodity servers. And if you believe some of the performance numbers they've been able to get, the numbers I've seen are six gigabits of processing per core, scaling linearly. So if you have a 32-core system, you can get close to 200 gigabits per second, if you've got the appropriate Ethernet cards to do this. But what that means practically is, if you want to do really interesting, non-ASIC-based packet processing, and you had an entire rack full of servers, you could actually do that at full line rate just by distributing the load across those. And that's when things start to get more interesting. There was a paper by a former colleague of mine who's now at AT&T, [INAUDIBLE] this year, talking about a router farm. Which is the idea that data center switches can't hold big routing tables, and they don't have the buffering you would need for wide area networks. But maybe if you had a whole bunch of servers in front of them to act as very small virtual routers, you could actually build what looks like a farm of routers out of a bunch of data center servers, which would replace a high-end router but be much more programmable. And you get interesting designs like that. Those are things that I find pretty fascinating. >> That is pretty cool.
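A quick back-of-the-envelope check on the numbers quoted above, taking the roughly 6 Gbit/s-per-core DPDK figure and linear scaling at face value; all figures are approximate and only echo what was said in the discussion.

```python
# Rough arithmetic for the quoted figures: the CPU-to-ASIC path inside a low-end
# switch versus what DPDK on a separate commodity server can reportedly do.

DPDK_GBPS_PER_CORE = 6          # quoted figure, assumed to scale roughly linearly
CORES = 32

cpu_to_asic_gbps = 1            # ~1 Gbit/s PCI path between switch CPU and packet processor
dpdk_server_gbps = DPDK_GBPS_PER_CORE * CORES   # ~192, "close to 200 gigabits per second"

print(f"in-switch CPU path:  ~{cpu_to_asic_gbps} Gbit/s")
print(f"32-core DPDK server: ~{dpdk_server_gbps} Gbit/s "
      f"({dpdk_server_gbps // cpu_to_asic_gbps}x the in-switch CPU path)")
```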
So some of what you're describing also touches on what open research projects are trying to accomplish: moving packets quickly in a programmable way. >> Yeah. >> What do you see the relationship as? >> So I would say there are a lot of projects which are trying to do what I would call software-based forwarding. Open vSwitch is probably the most famous. There's a [INAUDIBLE] version of Open vSwitch, something that leverages those technologies. There are a bunch of companies that are actually trying to have small amounts of hardware offload. There's a bunch of other virtual switches that you can leverage. I look more at the architectural and hardware limitations of software switching, rather than the precise software that's on top. There are notable differences in performance between the different software packages, but not on the scale of the difference between software and dedicated hardware. So, just to give you round numbers, if you believe the DPDK numbers, then a 32-core server, which is a pretty beefy server, can get about 200 gigabits per second of forwarding performance. If you go to the current latest and greatest network processor and FPGA designs, you get on the order of 600 or 700 gigabits per second. If you go to the latest ASIC designs, you can get something like 3.4 terabits per second. And what you're doing is trading off programmability for raw forwarding performance. Basically, if you are considering how do I build an X, it's like, well, okay, can I build that functionality straight from an ASIC? Well, only if the ASICs support that functionality. They don't support high-end firewalling functionality. They don't support high-end load balancing functionality. >> That also raises an interesting possibility. What if we wanted a bit of both? We want a lot of both, performance and programmability. So what if these chips were a little bit programmable? So they're not as generic as FPGAs, but they have some programmability, just for the particular things that networking pipelines usually use, right? So do you see that going forward? >> Yeah, and it's one of these things where I think everybody sees value there, but you get into the next level of details. So ASICs are becoming more programmable; there are actually thousands of pages for how to program ASICs. Network processors are becoming faster. FPGAs are becoming faster and cheaper and trying to use less memory. x86-based platforms, which are the most programmable, are actually becoming faster. It will be interesting to me to see how these different technologies play out over time. How many use cases are there where you need the programming capability of an FPGA or an NPU, but you need more performance than you'd get out of an x86? There's a small class of solutions that fall in there. Fortunately for those vendors, it's an important class of solutions, but my claim is it's getting smaller. And the question is, what will happen over time? The designs that I really like, personally, are ones where, like I was talking about with the router farm work, or more generally with NFV, you actually use the high-end data center ASICs as fast interconnects, basically fast and dumb, between relatively slow and programmable x86-based processing. And then it's just a matter of, how many of these servers do I need to actually keep up with what looks like line rate?
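The closing question, how many servers it takes to keep up with line rate, can be worked out from the round numbers quoted above; the figures are approximate and generation-dependent, the calculation just makes the tradeoff visible.

```python
# The quoted forwarding-performance tiers (approximate), and a rough answer to
# "how many x86 servers to keep up with one top-end ASIC?"

FORWARDING_GBPS = {
    "x86 + DPDK, 32 cores": 200,    # most programmable
    "NPU / FPGA":           650,    # midpoint of the quoted 600-700
    "switching ASIC":       3_400,  # fastest, least programmable
}

for tech, gbps in FORWARDING_GBPS.items():
    print(f"{tech:>22}: ~{gbps} Gbit/s")

asic = FORWARDING_GBPS["switching ASIC"]
x86 = FORWARDING_GBPS["x86 + DPDK, 32 cores"]
print(f"servers needed to match one ASIC: ~{asic / x86:.0f}")   # on the order of 17
```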
And you have to look at that in terms of the overall cost of the solution, and the power envelope of it, because this is not a power-optimized solution. The other thing is, if you're going to build 500 of these, whatever boxes you want to build, that's one class of solutions. But if you're going to build 500,000 of them, or 5 million of them, maybe you actually spin your own ASIC. And that's one of the design tradeoffs that people make as well. >> Do you think there's a clear killer app for bare metal switching? >> So my quick answer to that is, I think the long tail will be the killer app. Let me explain what I mean by that. Anything that's obviously going to make a lot of money, the traditional vendors will already go after. Bare metal switching is more of a business change than a technology change. The reality is that every switching company in the world already sells chips made by Broadcom. They sell them in boxes that are probably made by one of a list of five different Taiwanese manufacturers, and they put their own software on top. >> Mm-hm. >> So in the data center, at least, the switches are completely commoditized, and everybody is already building on bare metal switches, they just don't know it. And then the question is just, what are the use cases that people want to target but are not being targeted by the traditional vendors? And the traditional vendors are very smart, they're not dumb. They're going to target the top-end use cases, however they want to figure that out. And they're good at their job. I think companies like Big Switch can do a good job breaking into those top-end use cases. But really, what happens to the rest of those use cases? In the traditional networking world, basically you're kind of given the middle finger and you're told, okay, you can't do that. A random use case that's not going to get covered is: hi, I'm a grad student, I would like to tinker with your switch and see if I can validate that my new routing protocol is better than BGP. That's not a financially viable use case for a large company to pursue. But that can be hit with bare metal. With Open Network Linux, you download it, you get it up and running, you run your thing, and at the end of the semester you basically turn the box over to whomever comes down the pipe next. And maybe at some point somebody comes up with some crazy new application because they had the chance to tinker around with the hardware. Really, by letting people play with the hardware, the hope is, and I think we're already seeing this, that we're going to see the same thing happen that happened with cell phones, that happened with PCs: you get this excess of people developing applications for it, and a couple will bubble to the top. I think right now we haven't quite got that network effect going; we're really only starting to get a very small number of people playing with it. It's very much a hobbyist, tinkerer community, but I do think that will come. And what will be the killer app? That's why I quip that I don't know that it's necessarily going to be any one thing; it's going to be the ability to do whatever you want. >> So going forward, Rob, what do you see as the Rob Sherwood dream for a data center? What does that look like, from the virtual machines out to the inter-datacenter fabric?
>> So, the dream for the data center would basically be: all of the middleboxes and all of the complex packet processing functions are moved into software. That includes firewalls, load balancers, NAT devices, and, if you're in the mobile carrier world, all of the evolved packet core, EPC, functions. There's a whole bunch of stuff that happens in the carrier world where today you just buy a dedicated box to do it; all of that becomes software. Everything has incredibly high bandwidth capacity using merchant silicon technology, including connectivity through the core. And so you could have something like one of these WAN routers built from software and data center components, like I'm talking about. And critically, it's all managed through, probably not a single centralized controller, but a hierarchy of centralized controllers. This is what computer science teaches us: you solve scaling problems with hierarchies. And you would have a controller for some unit of compute and storage and network, and then if you wanted to build data centers bigger than that, you would have multiple of these. And if you have too many of these, then you have a controller of controllers, if you will. And that's a system that scales up. >> So this is how you solve the problem of scaling to a very big distributed system and making it reliable. >> Exactly. And really, the insight here is: distribute the data plane and centralize the control plane. Although "centralize the control plane" is a little bit of a misnomer, because of what ends up happening, and you get this in our solutions, and, going back to the chassis-based big switch design, this is how those work too: there are some packet processing functions that you offload onto the line cards. Like if you ping a router's interface, what's actually happening at the low end there is a packet's coming in a front-panel port of the switch. That router interface is actually not a physical thing, it's a logical concept. But that line card has knowledge of that logical interface, because maybe you have five ports on a VLAN, but they all have access to that router interface. So imagine the gateway interface. >> You should probably explain the gateway interface a little bit for our students. >> So if you're watching this at home, right now your laptop is talking upstream to whatever its next hop router is, call that 192.168.1.1. And that box probably has multiple physical ports; think of it as your wireless router. Now that interface, 192.168.1.1, if you plug into any of those ports, you can ping it. And so what ends up happening, how that's implemented, particularly if you weren't plugged into a wireless router but into a big switch, is that interface exists logically across all of the line cards where it's programmed. Packets show up on a port, and the ASIC that manages that port says, I'm not actually smart enough to generate an ICMP echo reply, that's a higher-level function, so let me punt that packet to the CPU that's local to the line card. >> Mm-hm. >> So there is a CPU local to the line card. A line card based design is probably not an x86, it's probably some sort of embedded CPU, but it's a [INAUDIBLE], complete general-purpose CPU. And that CPU says, okay, I got an ICMP echo request for this IP address, check, yes, that is one of the IP addresses I'm supposed to reply for, let me generate the ICMP echo response and send it back to the ASIC to push out the reply port. That's how that function works.
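A hedged sketch of the punt-to-local-CPU pattern just walked through, with invented names and a plain dictionary standing in for a packet. It is not any vendor's firmware; it only shows the division of labor: the ASIC forwards in hardware and hands exceptions like ICMP echo requests to the line card's local CPU, which answers only for addresses it owns.

```python
# Illustrative only: fast path in the ASIC, slow path on the line card's local CPU.

LOCAL_ROUTER_INTERFACES = {"192.168.1.1", "10.0.0.1"}   # logical interfaces on this line card

def asic_rx(packet: dict) -> str:
    """Fast path: forward in hardware, punt exceptions to the local CPU."""
    if packet["type"] == "icmp-echo-request":
        return cpu_handle_punt(packet)          # slow path, local to the line card
    return "forwarded-in-hardware"

def cpu_handle_punt(packet: dict) -> str:
    """Slow path on the line card CPU; never involves the supervisor/controller."""
    if packet["dst_ip"] in LOCAL_ROUTER_INTERFACES:
        reply = {"type": "icmp-echo-reply",
                 "src_ip": packet["dst_ip"], "dst_ip": packet["src_ip"]}
        return f"reply {reply['src_ip']} -> {reply['dst_ip']} sent back via the ASIC"
    return "dropped: not one of my addresses"

print(asic_rx({"type": "icmp-echo-request",
               "src_ip": "192.168.1.50", "dst_ip": "192.168.1.1"}))
```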
Now, you could have implemented that by sending the packet not to a local CPU on the line card but up to the general CPU on the supervisor card. >> Uh-huh. >> But that eventually doesn't scale, once you get to enough ports. And there's a whole bunch of functions where it just makes sense to distribute that functionality down to the line cards. So, what does this mean in the SDN world? What's the analogy? The analogy is, you could actually do lots of processing at the controller. Via OpenFlow packet-in, you could send everything up to the controller, and the controller could handle everything and send it back down. But eventually that doesn't scale. And so what you end up doing is exactly the same design tradeoff, which is, you say, well, there's a whole bunch of low-level packet forwarding things that I don't necessarily need to handle at the controller. If they don't actually affect the policy, they can be offloaded onto the individual switches. And at least in our designs, it's things like ICMP echo handling, it's LACP handling; there's a whole bunch of link-layer protocols that don't actually affect the larger functioning of the switch. And that's how we get scale: offload these functions down to individual switches rather than up into the controller. >> Right, right, and there's that kind of hierarchy of control and optimization in, for example, Google's B4 design, B4 being their wide-area SDN-controlled backbone. So that's one reason, for reliability, for latency and efficiency, you're going to have multiple controllers. But let me spin this in another direction, to multiple types of controllers. Looking at where SDN has gone so far, there are maybe two or three killer apps for SDN that are actually all related to the cloud, and I'd be interested in which ones you pick as the killer apps. But certainly data center control and wide-area networking optimization are two that have been used in practice. >> Yeah, and I think it's actually useful, at least for the class audience, to separate out where is the technology useful versus what's the thing that makes the easiest business case. And certainly, companies like mine are actually trying to target the intersection of those. But, so, the easiest business case is clearly the data center, probably exactly as you said. The second easiest business case is the WAN. [INAUDIBLE] >> Traffic engineering for the WAN, or other functions? >> A whole bunch of things, right. So there is a huge SD-WAN movement, which is trying to, I'm not sure if your students know, but basically what ends up happening is most large companies that have offices around the world have two types of networks. They have a high quality-of-service, with service level agreements, VPN network; think of that as their own internal network, but they're probably leasing the lines. Then they have a kind of best-effort, standard Internet network. The VPN network is pretty expensive for them to use. They really only want to use it for sensitive traffic, or maybe internal video conferencing, or internal video, or internal voice over IP, things like that, and they want to dump other traffic, like if you're surfing YouTube. Imagine you're working at some large bank, but you don't work in the central office, you work in the branch office that's down the street. If you surf YouTube, you really want that traffic to go on the general Internet network.
But if you're trying to access the company's internal sensitive data, that should go on the private internal network. There is a growing collection of companies that sell what they call an SD-WAN solution, which is an intelligent device that sits at that branch point and decides: even though this traffic is ostensibly destined for the internal network, can I offload it to the general-purpose network, and use some interesting routing techniques or interesting encoding techniques to effectively get high levels of quality of service over a best-effort network? >> Got it. >> There are people doing that. I think there's also, in the carrier space, a huge space for more intelligent PE routers, provider edge routers. So if you imagine going from your laptop to your home DSL modem to the DSLAM, the DSL access multiplexer, in your central office, there's then a provider edge router, which is the thing that connects your central office to the backbone of your ISP. That is actually one of the biggest bottlenecks on the Internet these days, because that's one of the massive points of congestion and oversubscription. You might have 10,000 people in a metro area connected to the same central office, but then that central office might have 8 by 10 gigs of bandwidth going up. And if each individual subscriber has, say, a 10 megabit connection, there's a huge oversubscription rate that happens there. That's fine with statistical multiplexing, but as everybody starts to use Netflix, as everybody starts to use the Internet more and more, there's less statistical multiplexing. >> Mm-hm. >> And so I think there's an interesting space for optimization there as well. >> Do you foresee some of this technology going deeper into the carriers? In some sense, the data center acts as a laboratory for testing out these ideas at scale, although it has many, many differences in terms of incentives and such. But nevertheless, do you see some of this technology being used, for example, in ISP networks, the internal network of an ISP? >> Very much so. And really, a big thing that dominates the business case is how fast these networks turn over, right? People add new data centers very quickly, they change out data centers very quickly. On that scale, the PE routers and the provider routers, the ones in the core, those turn over maybe the least quickly. And so while I think the technology is directly applicable to that space, it's just the way the business side of that works, in terms of how often they buy new routers, how sensitive they are to new technologies, how much pain they have there. So while I think that's one of the biggest benefits, it will also probably be one of the slowest adoptions. >> So this is really interesting. We've talked about a few use cases for SDN: inside the cloud data center, across the WAN backbone connecting the cloud data centers, and then, I guess, enterprise networks going out to either the public Internet or their private network. Do you think, for this variety of use cases, there are going to be multiple different kinds of controllers that- >> Oh, definitely. >> Are targeted towards these individual use cases? Or are we going to have one controller to rule them all? >> Well, if you look at my idea of the controller hierarchy, certainly I think that at the bottom layer, you'll have lots of different controllers.
>> Mm-hm. >> Now, can you then create a controller of controllers which is capable of managing different types of controllers? In some cases it doesn't even necessarily make sense. Not to dwell on Big Switch, but it's the one I know the best: we basically sell two different, complete controller stacks. >> Mm-hm. >> And there's a lot of code reuse there, but the idea is, one is for managing production data center networks and the other is for managing monitoring networks. Even if they're in the same data center, those are two completely different use cases, which are probably run by two different teams of people, so there's actually not even an interest in interoperability. In fact, it's the opposite: I want to have my thing and you can have your thing, but you shouldn't be able to touch mine and vice versa. Interesting technology needs to, if you're actually interested in deploying it and having impact, you need to also take into account the organizational issues. Who would actually deploy this? Are they incentivized to deploy it? Do they actually benefit from deploying your thing? As well as the business issues, which is, how can I make the case that deploying this will save someone money or make them more money? Because if it doesn't, it's not going to get deployed. It doesn't really matter how good of an idea it is. It took me a while, post grad school, to digest that, but I think it's actually pretty important. [MUSIC]