So what is it exactly? Arthur Samuel, back in 1959, stated that it's the field of study that gives computers the ability to learn without being explicitly programmed to do so. And the first algorithm that we're going to look at, which we'll work through tonight, and which I'll demonstrate for you, it will learn. And it does learn without being explicitly programmed to do so. No structured programming with for loops saying, [LAUGH] here's what you need to do in response to this external stimulus; it doesn't work like that.

The technical proposition is that machines are better than humans at accurately and consistently capturing data, communicating data, and analyzing that data. We're trying to find structure in the data, we're trying to find trends in the data, we're trying to uncover hidden insights that could lie in gigabytes and terabytes and petabytes of data. When we stand back and look at it as human beings, it just looks like noise to us, because it's so overwhelming; our brains aren't designed to do that. We're much better at walking and driving and gymnastics and, making a ham sandwich or something. [COUGH] And machines are very, very good at this.

So this is a bit of a Venn diagram from Goodfellow's book, Deep Learning, and it probably looks like an eye chart, but you can look at the slides later. As we move in from the outside, we've got an example of knowledge bases, which you can think of as facts, or places where you can go look up information. In the next ring we have very simple examples, logistic regression or linear regression, and this is where machine learning begins. Then there's a more complicated set of algorithms, with an example, in this more inner ring; the ones in here are called representation learning, and an example is the shallow autoencoder. And then we have what's called deep learning. You'll see this term all the time now, everyone's doing deep learning, and they're doing deep learning because it's working and solving certain classes of problems. Is it conscious? No, it's not conscious. So a shallow autoencoder is a neural network that finds structure in data, and MLP stands for multi-layer perceptron, a more complex neural network.
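To make that last idea a little more concrete, here's a minimal sketch of a shallow autoencoder written from scratch with numpy. This isn't the algorithm we'll work through tonight, just an illustration on a made-up five-feature dataset: the network is trained to reproduce its own input through a two-unit hidden layer, so that hidden layer is forced to learn a compressed representation, which is the "finding structure in data" part.

```python
# A minimal sketch (illustrative only) of a shallow autoencoder:
# one hidden layer trained to reconstruct its own input.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: 200 points that really live on a 2-D pattern embedded in 5-D.
latent = rng.normal(size=(200, 2))
mixing = rng.normal(size=(2, 5))
X = latent @ mixing + 0.05 * rng.normal(size=(200, 5))

n_in, n_hidden = X.shape[1], 2                        # squeeze 5 features down to 2
W1 = rng.normal(scale=0.1, size=(n_in, n_hidden))     # encoder weights
W2 = rng.normal(scale=0.1, size=(n_hidden, n_in))     # decoder weights
b1 = np.zeros(n_hidden)
b2 = np.zeros(n_in)

lr = 0.05
for epoch in range(2000):
    H = np.tanh(X @ W1 + b1)            # encode: hidden representation
    X_hat = H @ W2 + b2                 # decode: try to reconstruct the input
    err = X_hat - X                     # reconstruction error

    # Backpropagate the mean squared reconstruction error.
    grad_W2 = H.T @ err / len(X)
    grad_b2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)    # tanh derivative
    grad_W1 = X.T @ dH / len(X)
    grad_b1 = dH.mean(axis=0)

    W2 -= lr * grad_W2; b2 -= lr * grad_b2
    W1 -= lr * grad_W1; b1 -= lr * grad_b1

print("final reconstruction MSE:", np.mean((X_hat - X) ** 2))
```

The multi-layer perceptron mentioned above is the same basic machinery with more layers, and with a prediction target other than the input itself.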
AI encompasses a whole bunch of disciplines, depending on what your target outcomes are. Natural language processing is huge. Image and pattern recognition. Learning, that's the learning piece. Knowledge representation, how does an AI represent knowledge? Reasoning, the ability to draw conclusions. Planning, perception, some understanding of social intelligence, creativity. I saw a talk recently by an executive from Microsoft, and he was talking about some of these generative algorithms. In their labs, they built a network and said, draw a picture of a yellow bird with black wings and a pointed beak. And it went off and manufactured a very believable [LAUGH] looking bird that matched those constraints, but that bird doesn't exist in nature. I thought that was interesting.

There are examples of machine learning we encounter every day. The postal service has a zip code reader, and it's very accurate. I mean, think about all the different styles of handwriting when we write a letter, and how we shape numbers differently. This was one of the first big breakthroughs; I remember reading about it when it happened. This company came in and said, hey, postal service, we've got this AI zip code reader. And, of course, they'd seen other companies come in before and fail miserably, and this one worked really, really well. So we see this deployed widely by postal services to read zip codes. Bank check deposits: reading the amount off the handwritten part and the numbers that you write in the little box. Credit card fraud detection: this one goes off on me periodically. My credit card company calls me, did you buy that? I text yes back. And I like that; I like that there's something helping to protect me against fraud. Amazon, eBay, Netflix, and all these other services that we subscribe to make recommendations: they study your consumption patterns and then offer you, hey, look at this fancy new thing over here, we want to sell it to you.

A key point to remember in all of this is that failure is more common than success when working through a machine learning experience. I proposed a fairly complex problem that I'll talk about in the big data analytics section the week after spring break. The jury's still out on whether I'll be successful or not, but I'll hold off and get into that after spring break is over. So your intuition adds the art to the machine learning experience, but sometimes your intuition is wrong and you have to go back and revisit your assumptions.
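Just to connect those everyday examples back to the simple end of that Venn diagram, here's a rough sketch, and it is only a sketch, not the postal service's actual system, of learning to read handwritten digits with logistic regression. It uses scikit-learn's small built-in digits dataset, and the held-out test set at the end is exactly how you find out whether your assumptions held up or whether you have to go back and revisit them.

```python
# Illustrative only: a logistic regression classifier for handwritten digits,
# the same flavor of problem as a zip code or check-amount reader.
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

digits = load_digits()                       # 1,797 labeled 8x8 digit images
X_train, X_test, y_train, y_test = train_test_split(
    digits.data, digits.target, test_size=0.25, random_state=0)

# No hand-coded rules about strokes or loops: the model learns a mapping
# from pixel intensities to digit labels directly from the examples.
clf = LogisticRegression(max_iter=5000)
clf.fit(X_train, y_train)

# Evaluate on data the model never saw during training.
print("held-out accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```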