Hi, Ed Amoroso here. I want to start by giving a little call-out to a really wonderful computer scientist and cybersecurity pioneer. Her name is Dorothy Denning. She teaches at the Naval Postgraduate School here in the United States. Just an amazing pioneer in so many aspects of cybersecurity through the 1970s, '80s, and '90s, right up to the present; still a wonderful contributor. She had this idea years ago that she codified in a very iconic technical paper, where she said you can collect information about the behavior of a system and look at it. We all know you can do that, and as computers were being invented, we realized that the idea of an audit log or a log file was going to be pretty important: it gives a record of computing operations. She said that if you collect that and look at it, and you search for things that seem unusual, that seem exceptional, that seem out of the ordinary, then that may be a hint that there's a security issue. That seems like an obvious point, but she laid it out so nicely. Now, I'll tell you, in reflecting back on that and on my own career, I've spent 30 years in telecommunications. And I knew that the way telecommunications networks have been managed, for as long as anyone can remember, is based on a very similar property. We call that property management by exception. So here's what that means. It means that you watch network behavior. You watch it on an hourly basis. You watch it on a daily basis, a weekly basis, a monthly basis, a yearly basis. You just keep watching it, and you keep building a profile, an understanding of what's normal. You characterize it based on observation, based on actual data. You don't posit or theorize a profile; that makes no sense. What you do instead is you watch and you build a profile. For example, imagine you're watching traffic in a city, like here in New York City in the United States, where I'm filming.
If you were watching over the course of a day, starting at midnight, you'd see there's not too much traffic at midnight. It actually drops off, and I would say by 3 o'clock in the morning it's probably at its lowest. Then at four or five it starts to build up, and by morning it gets a little busy. By daytime there are students and professors and business people, and all kinds of New Yorkers bustling around; through midday it stays pretty busy. Maybe around dinner time it dips a little bit, picks up again in the evening, and then it drops down. And by the time we get to midnight again, it's sort of falling. We can draw the usual curve; every day we would see something similar. And if you watched it every day, you'd be pretty convinced that if you were blindfolded, didn't know what time it was, and I spun you around 20 times and put you out in front of Fourth Street in New York City, and I told you nothing other than the volumes of people and asked you to tell me the time based on what was going on, you probably could do it. You could probably almost pinpoint the time, if you'd spent enough time observing. So what that means is that if you do that, then you can manage by exception when you're trying to deal with security, reliability, availability, and dependability problems. Meaning: if suddenly I see a rush, or I see a drop-off, then that requires action. That means there's something different. Observed is different than expected, and when observed is different than expected, there could be a problem in that difference. Again, that's what Dorothy Denning laid out for us many years ago. The paper's from the mid-1980s, where she pointed that out, and it's fundamental. It's important for you as you're thinking through design issues in technology and wondering about cybersecurity warning while you're building a system.
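The idea above can be sketched in a few lines of code. This is a minimal illustration, not anything from Denning's paper: the hours, volumes, and threshold are all hypothetical, and a real system would use richer statistics. It just shows the two steps the lecture describes: learn a per-hour profile of normal from observed data, then flag any observation that lands too far from expected.

```python
# Sketch of management by exception: learn "normal" from observed data,
# then flag observations that deviate from the learned profile.
# All numbers below are hypothetical illustrations.
from statistics import mean, stdev

def build_profile(history):
    """history: hour -> list of observed volumes. Returns hour -> (mean, stdev)."""
    return {hour: (mean(vals), stdev(vals)) for hour, vals in history.items()}

def is_exception(profile, hour, observed, k=3.0):
    """Flag an observation that falls more than k standard deviations
    from the learned mean for that hour: observed differs from expected."""
    mu, sigma = profile[hour]
    return abs(observed - mu) > k * sigma

# A hypothetical week of traffic volumes at two hours of the day.
history = {
    3:  [12, 9, 11, 10, 13, 12, 11],     # ~3 a.m.: quietest
    12: [95, 102, 99, 97, 101, 98, 100], # midday: busiest
}
profile = build_profile(history)

print(is_exception(profile, 3, 11))  # normal 3 a.m. volume -> False
print(is_exception(profile, 3, 60))  # a sudden 3 a.m. rush -> True, take action
```

Note that the profile comes entirely from observed data, never from a theorized model of what traffic "should" look like, which is exactly the point made above.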
Let's say a job you might have now, or in the future, involves building some system, and your boss comes to you and says, hey, we want to make sure we're taking into account security issues for this thing we're building. Maybe it has some behavior that's observable. What you should think of immediately is that a very powerful approach is this management-by-exception approach, where you're characterizing normal. You're constantly watching actual behavior, and as you build a profile, if you see something that's different from expected, you're going to take some action. Does that make sense? I hope it does. It's a foundational issue. It's one of the few that I think transcends technology and discipline, because it really speaks to the idea that, in theory, if a security event is happening, it's probably exposing itself by changing a profile. Now, to test your learning a little bit, I'll quiz you here. Let's see how well you've absorbed what we've talked about. Where is management by exception potentially useful? In cases with signatures, in real time, and so on. The answer is D, all of the above. It's perfectly compatible with, and useful in, almost any scenario: whether signatures are possible or not, real-time or not real-time, all of the above will benefit from a management-by-exception approach. So I hope you've learned something here, and I hope it's something that will help you as you do design work in the future. I'll see you next time.