Categories of Machine Learning. We're going to look at examples of supervised learning. In supervised learning you have some example data: you've got some inputs, and you've got some known outputs. Regression is a type of problem called a continuous-value problem, one that produces a real number. The example we're going to look at is predicting housing prices. For those of you who have a background in machine learning, yes, this is the kindergarten example. But it's a good example because you can see how it works, you can see how it learns, and you can see its response to data it's never seen before in a fairly easy-to-understand way. I think that's why many instructors start with this regression problem.

Another type of supervised learning is what are called classification problems, and they have a discrete output. You can think of it as a binary output, zero or one. We'll look at the iris data set and problem; the way I've used it here is: is this flower an iris? Yes or no? Zero or one? Does the patient have cancer? Yes or no?

Unsupervised learning. In unsupervised learning, the outputs are unknown. You don't know what you don't know, and you're on the hunt to try to find patterns in the data. You might be interested in market segmentation or social network analysis. Who's heard of the cocktail party problem? How is it that when we're at a party, and it's crowded and there's a lot of conversation taking place, you and I can stand close together and have a conversation? All these people are talking, there's all this background noise, but you and I are able to converse and we can hear each other just fine. How do our brains do that? Our brains have this ability to filter out all the noise from all these other conversations and just focus on this one conversation. That's the cocktail party problem, and some tremendous advances have been made in solving it: we want to filter out the noise and listen to one conversation. What we're really after is finding the underlying structure in the data. Given a big data set, we make a hypothesis that there's structure in it, and we go and start applying unsupervised learning algorithms to see if we can extract, or at least identify, some of that structure.

Reinforcement learning. We're not going to do that, but I wanted to mention it. What happens in reinforcement learning is that the algorithm makes a sequence of decisions over time. It makes a decision, that decision affects the next decision, that one affects the decision after it, and each of these decisions is working toward a cumulative reward; there's some notion of a reward. Bad decisions are not rewarded, or maybe negatively rewarded, while good decisions reinforce that reward amount. The goal of these algorithms is to work toward this cumulative reward; that's the notion behind it. Short code sketches of each of these categories follow below.
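To make the regression idea concrete, here's a minimal sketch using scikit-learn's LinearRegression. The house sizes and prices are invented purely for illustration; they aren't from any real data set.

```python
# Minimal regression sketch: predict a house price (a real number)
# from its size. The numbers below are made up for illustration.
import numpy as np
from sklearn.linear_model import LinearRegression

# Square footage (input) and sale price in $1000s (known output).
X = np.array([[850], [1200], [1500], [1850], [2100], [2400]])
y = np.array([180, 245, 290, 360, 395, 450])

model = LinearRegression()
model.fit(X, y)  # learn the line that best fits the known examples

# Ask the model about a size it has never seen before.
print(model.predict(np.array([[1700]])))  # a real-valued price estimate
```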
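The classification case looks almost the same in code. One caveat: the standard iris data set actually has three species, so to match the zero-or-one framing above, this sketch poses a binary question, is this flower Iris setosa or not.

```python
# Minimal classification sketch on the iris data set, posed as a
# binary question: is this flower Iris setosa (1) or not (0)?
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

iris = load_iris()
X, y = iris.data, (iris.target == 0).astype(int)  # 1 = setosa, 0 = other

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = LogisticRegression(max_iter=1000)
clf.fit(X_train, y_train)         # learn from labeled examples
print(clf.predict(X_test[:5]))    # discrete outputs: zeros and ones
print(clf.score(X_test, y_test))  # accuracy on flowers it never saw
```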
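The classic unsupervised approach to the cocktail party problem is independent component analysis (ICA): given recordings that each contain a mix of several "voices", try to unmix them into the original sources, with no labels telling you what those sources are. Here's a sketch with two synthetic signals, using scikit-learn's FastICA:

```python
# Cocktail-party sketch: two synthetic "speakers", two "microphones"
# that each hear a different mix of both, and ICA to unmix them.
import numpy as np
from sklearn.decomposition import FastICA

rng = np.random.default_rng(0)
t = np.linspace(0, 8, 2000)
s1 = np.sin(2 * t)                       # speaker 1
s2 = np.sign(np.sin(3 * t))              # speaker 2
S = np.column_stack([s1, s2])
S += 0.1 * rng.standard_normal(S.shape)  # background noise

A = np.array([[1.0, 0.5],                # mixing matrix: how loudly
              [0.4, 1.0]])               # each mic hears each speaker
X = S @ A.T                              # what the microphones record

ica = FastICA(n_components=2, random_state=0)
recovered = ica.fit_transform(X)  # unmixed sources, found without labels
```

No one told the algorithm what the conversations sounded like; it found the structure, two independent sources, on its own, which is exactly the unsupervised flavor described above.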
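And for reinforcement learning, here's a toy sketch of that sequence-of-decisions, cumulative-reward loop: an agent on a six-cell track chooses left or right, and only reaching the rightmost cell pays anything. It's tabular Q-learning with invented parameters, purely to show the shape of the idea.

```python
# Toy reinforcement-learning sketch: tabular Q-learning on a 1-D track.
# Only reaching the rightmost cell (state 5) earns a reward.
import numpy as np

n_states, n_actions = 6, 2            # actions: 0 = left, 1 = right
Q = np.zeros((n_states, n_actions))   # learned value of each decision
alpha, gamma, epsilon = 0.5, 0.9, 0.1
rng = np.random.default_rng(0)

for episode in range(500):
    state = 0
    while state != n_states - 1:
        # Mostly exploit what's been learned, occasionally explore.
        if rng.random() < epsilon:
            action = int(rng.integers(n_actions))
        else:
            action = int(np.argmax(Q[state]))
        next_state = max(0, state - 1) if action == 0 else state + 1
        reward = 1.0 if next_state == n_states - 1 else 0.0
        # Nudge this decision's value toward the cumulative reward.
        Q[state, action] += alpha * (
            reward + gamma * Q[next_state].max() - Q[state, action])
        state = next_state

# Learned policy: move right (1) in every cell that was ever visited
# (the terminal cell is never updated, so it keeps its default value).
print(np.argmax(Q, axis=1))
```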
Also, researchers are looking into algorithms that generate algorithms. It starts to get wild, and I don't know if anyone's read Isaac Asimov's I, Robot, but there are some parallels there I find interesting. In this notion you have a master algorithm and you input a bunch of things: I want this, I want that, I want to identify this. Then it will generate an algorithm, spit out an algorithm, okay? That generated algorithm might be a supervised learning one, so it would need training data, or maybe it's an unsupervised one. I haven't had time, there are just not enough hours in the day, but I'm really intrigued by this, and I plan at some point to go off and look into this class of machine learning algorithms. I just recently found out about it, and it probably has a name; I don't know what the name is, but I wanted to explain it to you. If you haven't read the I, Robot stories, I highly recommend them; they're very entertaining. Asimov had a lot of insight.