So, there are a number of schools of thought. I went and bought a bunch of books, because I needed to learn about machine learning so I could talk to you about machine learning this week, and one of them was Machine Learning For Dummies. It was a pretty good book. The deep learning one is really heavy duty, with some serious math in there, but this one came with examples you could download from the Dummies site, examples in Python, so I could try them out, and it explained things very well. I liked its classification of the schools of thought on machine learning. The author referred to them as the five tribes of machine learning. One of them he called the symbolists; the origin of this tribe lies in logic and philosophy. The next he called the connectionists; the origin of this tribe is in neuroscience. The third tribe is the evolutionaries; the origin of this tribe is in evolutionary biology. The fourth is the Bayesians, whose origins lie in statistics, and this will be the primary focus of the examples we'll look at in this class this week. And the fifth, I always have trouble saying this, the analogizers; the origin of this tribe is in psychology: how we think, how our thought patterns are structured. Most of what we're going to talk about is Bayesian-based machine learning, but I want to take a minute and talk about the connectionists.

So, here's a picture of a neuron that I snatched from Simple Wikipedia. It's a single neuron. It has a bunch of dendrites that receive signals from the axon terminals of other neurons making connections with it, and where the axon of another neuron touches or connects with the dendrites, there's what's called a synaptic connection; the axon goes out and fans out to a number of dendrites.

The term you're about to see was a concept developed by Carver Mead, and this is where I believe the real breakthroughs in AI are going to happen. They may encompass elements of the other four tribes, but the really amazing stuff is going to happen here. I'll have more to say about that in a minute. Neuromorphic computing. Anyone heard the term before? Raise your hand if you've heard the term neuromorphic computing. Okay. Mimicking, yes, that's right: it's based on neurons and brain function. Our neocortex has roughly 30 billion neurons. Each neuron receives connections from anywhere between 5,000 and 10,000 other neurons, and these are what form the synaptic connections, making more than 100 trillion synaptic connections in all.

What's largely missing from today's AI are these characteristics of neuromorphic computing. One of them is rewiring: we learn quickly, with a few touches or glances, and we learn incrementally; we can add to what we already know. That rewiring takes place very, very quickly and without any programming at all; the neurons in our brains just do it. The next one is sparse distributed representation. This is huge; this is how our brains store information. Years ago I heard it called sparse distributed matrices, which is another term for the same idea. The brain uses thousands of bits to represent a piece of information, and there are two properties involved: one is the overlap property, which gives our brains, or neuromorphic computing systems, the ability to distinguish how things are similar and how they're different; the other is the union property, the ability to represent multiple ideas simultaneously.
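To make those two properties concrete, here's a toy Python sketch, in the spirit of the book's downloadable examples, encoding the cat-and-bird example I'm about to walk through. To be clear, this is not how any real neuromorphic system stores information; the bit width, sparsity level, and concept assignments here are all made up for illustration.

```python
import random

# Toy sparse distributed representations (SDRs): N bits, only a few active.
# Bit width, sparsity, and concept assignments are invented for illustration.
N_BITS = 2048           # total bits per representation
N_ACTIVE = 40           # ~2% active -- "sparse"

def random_sdr(seed):
    """A reproducible toy SDR, stored as the set of active bit positions."""
    return set(random.Random(seed).sample(range(N_BITS), N_ACTIVE))

pet, clawed, furry = random_sdr(1), random_sdr(2), random_sdr(3)

# Union property: one SDR can carry several ideas at once.
cat = pet | clawed | furry    # a cat is a furry, clawed pet
bird = pet | clawed           # a bird is a clawed pet, but not furry

# Overlap property: shared active bits show how concepts are similar.
print(len(cat & bird))        # big overlap -- both are clawed pets
print(furry <= cat)           # True:  cat contains the "furry" bits
print(furry <= bird)          # False: bird does not (barring chance collisions)
```

Because only about two percent of the bits are active, two unrelated SDRs almost never overlap by chance, so the bits shared between cat and bird really do signal similarity.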
So, we have one sparse distributed representation, and I'm just going to call them SDRs from now on, for cat, and we'll have another SDR for bird. They share an SDR for pet and they share an SDR for clawed, but they would not share an SDR for furry; only the cat is furry. That's just an example. We have hundreds of thousands, millions, maybe billions of these SDRs in an organic brain, and they're inherently fault-tolerant: the connections can have some breaks in them and we still remember.

The next characteristic is embodiment, the use of movement to learn about the world around us, and sensorimotor integration: our sense of touch, where things are, the space around us. This happens in every part of our neocortex, and the hunt is on to determine the neuron's master algorithm; they've been looking for it for a long time.

There's a fabulous article on this in IEEE Spectrum. Their issues on security, cryptocurrency, and blockchain were great too, but this is an excellent issue: go to the library if you're interested and get the June 2017 IEEE Spectrum. The title on the cover is "Can We Copy the Brain?", I think. The authors talk about how the hunt is on to determine the neuron's master algorithm, and the hope is that once we understand what that algorithm is, we can model it in software, actually build neurons on a chip that mimic this behavior, build neuromorphic computing hardware, and then scale that as high as we can.

The last characteristic is power efficiency. Data centers in the US draw megawatts; add them all up and as much as two percent of the electricity in the US is consumed by data centers, and that share is on the rise. Our human brains, however, run quite well on about 20 watts. That's a fraction of the food a person eats each day: I went online and converted a 2,000-kilocalorie diet, and it comes out to about 97 watts, so about one-fifth of the energy we consume goes to powering our brains. (I'll spell out that arithmetic in a minute.) And look what we can do with our brains; it's incredibly amazing. So, in terms of computation per milliwatt or microwatt, however you measure the amount of work per unit of energy, organic brains are very power efficient.

I'll take that one step further. In 100 years, I predict that you and I would not recognize conventional computing as we understand it today. I know companies right now are working on brain-computer interfaces; if you read any of the work going on at the big companies, like Facebook and others, they want to, in effect, put a USB port in our heads. I'm oversimplifying, but it gets the point across: they're working on brain-computer interfaces so our brains can directly interact with machines. If we're successful at that, I believe what we'll eventually see in the future is traditional computing hardware, in some form, integrated with organic brains that we grow in the laboratory. We'll grow a small cluster of neurons, because neurons are so good at what they do.

Just look at the brains of bees, the brains of ants, the brains of rats. Researchers study live rats: you put a rat down in a maze, and the first thing it does is start cruising around and mapping out its environment. It just does it, and in a very short time it knows all of the dimensions of its space and exactly where it can go, up and down; even if there are multiple levels, it figures it out. A rat's brain is much, much smaller than ours, but that's an amazing capability. All right.
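Before I wrap up, here's the diet-to-watts conversion I mentioned a minute ago, spelled out. The only assumptions are the standard conversion of 1 kilocalorie to 4,184 joules and the commonly cited 20-watt figure for the brain.

```python
# Back-of-the-envelope: a 2,000 kcal/day diet expressed as average power.
KCAL_TO_JOULES = 4184            # 1 kilocalorie = 4,184 joules
SECONDS_PER_DAY = 24 * 60 * 60   # 86,400 seconds

diet_watts = 2000 * KCAL_TO_JOULES / SECONDS_PER_DAY
brain_watts = 20                 # commonly cited figure for a human brain

print(f"2,000 kcal/day ≈ {diet_watts:.0f} W")                # ≈ 97 W
print(f"brain's share  ≈ {brain_watts / diet_watts:.0%}")    # ≈ 21%, about one-fifth
```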
So, I just wanted to interject my thoughts and predictions on what may happen in the future, and give you some food for thought.