So, some background information. Artificial intelligence is built on the hypothesis that mechanizing thought is possible. In the first millennium BCE, Greek, Indian, and Chinese philosophers all worked on ways to formalize reasoning. In the 17th century, Leibniz, Thomas Hobbes, and René Descartes discussed the potential for reducing all thought to mathematical symbols and played around with the idea. In 1950, Alan Turing published his paper, "Computing Machinery and Intelligence." This paper introduced the Imitation Game, and you've probably seen the movie of that name as well. In the Imitation Game itself, player A is a computer and player B is a human. Player C is a human who can't see either player A or player B, and both A and B must try to convince player C that they are human. If player C cannot distinguish which of player A and player B is the computer, the computer wins. So that's the game. When you hear the expression "passing the Turing test," that's what it means: the machine is extremely convincing.

So these machine learning algorithms rely on analyzing large data sets, oftentimes huge data sets. In the data analytics part, we'll see just how big some of these data sets can get. These algorithms can perform predictive analytics far faster than any human being can, and as the volume of data gets bigger and bigger, a human being's ability to process that mass of data diminishes rapidly.

One of the issues AI has had is that the way AI is portrayed in films and TV shows has set the public's expectations very high; there's a lot of hype about what AI can do. You've probably all seen 2001: A Space Odyssey. I love to do this: "I'm sorry, Dave. I'm afraid I can't do that." That's where the HAL 9000 goes crazy on the Discovery spaceship. And then this one: anyone recognize that scene? It's from a much more recent movie. Ex Machina, yep, yep. So, a lot of hype about what an AI can do. Maybe someday we'll get there, it's possible, but we have a long way to go.
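The Imitation Game setup described above can be sketched as a tiny simulation. Everything here, the single canned question, the lambda players, and the coin-flip judge, is hypothetical scaffolding for illustration, not anything from the lecture; a real Turing test would involve open-ended conversation.

```python
import random

def imitation_game(judge, player_a, player_b):
    """One round of Turing's Imitation Game.

    player_a is the computer and player_b is the human; the judge sees
    only anonymous transcripts and must say which label is the machine.
    """
    # Hide the players' identities behind randomly assigned labels X and Y.
    labels = {"X": player_a, "Y": player_b}
    if random.random() < 0.5:
        labels = {"X": player_b, "Y": player_a}

    # Collect each hidden player's answer to the judge's question.
    question = "What is your favorite childhood memory?"
    transcripts = {label: player(question) for label, player in labels.items()}

    guess = judge(transcripts)  # judge returns "X" or "Y"
    machine_label = "X" if labels["X"] is player_a else "Y"
    return guess == machine_label  # True means the judge caught the machine

# Toy players: both give human-sounding answers, so the judge can only guess.
computer = lambda q: "Summers at my grandmother's lake house."
human = lambda q: "Building tree forts with my brother."
judge = lambda transcripts: random.choice(list(transcripts))

caught = imitation_game(judge, computer, human)
print("Judge identified the machine:", caught)
```

With a judge who guesses at random, the machine "passes" about half the time, which is exactly the indistinguishability Turing had in mind.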
I threw this slide in over the weekend. I've done a lot of reading and preparation for this week's material, and several authors whose work I've read talk about this rise and fall of interest in AI. My own interest started in 1984. I saw interest rise for a while, and then a number of problems, let me restate that, not problems, a lack of sufficient progress, and the interest went way back down. Meanwhile, a group of dedicated computer scientists kept chugging along and solving some of the problems, and then we saw another rise in interest in AI, followed by another round of unmet expectations. And now, here in 2018, it seems like we're on a curve where interest is very high and growing rapidly. The three contributing factors, as I view them, are these. First, we've made a lot of improvements in the algorithms; I plotted this rise and fall of interest from my own perspective, and some of these algorithms, the very early ones, date back much earlier than the 1980s. Second, we have fast computing, both in conventional CPUs and in graphics processing units; Nvidia and AMD Radeon GPUs are being repurposed to run machine learning algorithms and crunch through lots and lots of data. Third, relatively recently, we have very, very large data sets, which is a great environment for computer scientists to play around with algorithms and see which algorithms work well on which types of data sets.

One of the big points of confusion is that people confuse a machine getting better at its job, which is the learning part, with awareness, which is what we would call intelligence or consciousness. These are not the same thing. We are nowhere near understanding what consciousness really is or what it constitutes. It's being studied intensely in many different fields, biology, psychology and so forth, all trying to nail down precisely what consciousness is.
But to my knowledge, we have a long way to go to really understand what consciousness is. So machine learning is not consciousness. If you've watched the Terminator movies: Skynet started learning and learning and learning, and it became self-aware. I guess that goes back to 1984 and the first movie. I'm dubious of that outcome, but it made for a good movie plot. So. Machine learning provides the learning part of AI, and it is very primitive in comparison to what we see portrayed in TV shows and films. Again, those films contribute to the hype about AI and what it's capable of. True AI may eventually emerge someday in the future, when computers can emulate the combination we see being used in nature: genetics, the slow learning passed from one generation to the next, combined with fast learning, teaching from organized sources, and exploration, the spontaneous learning that comes from media, from interactions with people, and from interacting with our environments.