In this lab, we're going to look at how to invoke Machine Learning APIs from within Datalab. This time, instead of building the notebook from scratch, we'll start from a notebook that's already in our GitHub repository. First we need to check it out, and to do that, we have to clone the repo. So we'll open up Datalab and run a bash command from within it. The idea is that we can start a new notebook and call it whatever we want; let's call it "checkout". So far we've only looked at running Python code within Datalab, but putting %bash at the top of a cell makes Datalab run everything in that cell using bash. This works just like everything else in Jupyter. Here I'm doing a git clone of our repo, so let's run that. At this point, I can do [inaudible]; that's another way to run bash.

You will notice that there is now a folder called training-data-analyst, and we can load up the notebook from it and start executing it. There is training-data-analyst, and this time what we want to do is go into courses, machine_learning, deepdive, and open up the ML APIs notebook. There's our notebook.

The first thing to do is enable APIs and services so that we can run the Vision API, the Translate API, the Speech API, and so on. We go down here and type "vision"; there is the Vision API, and it's enabled. Let's do the same thing for Translate and Speech. There's the Translation API; that's already enabled. The Natural Language API; that's enabled as well. Let's make sure the Speech API is also enabled, and it is. Great, all of the APIs are enabled, so let's go get the credentials.

We'll go down to APIs and services and get the credentials. We already have an API key, so I could just use it, or I can say "Create credentials", choose API key, and create a brand-new key. Copy that, and there you go; that's our API key.

Now we're ready to go into the ML APIs notebook. Where it says API key, I'll replace the value with the new API key we just created and run the cell. I can either click the Run button or press Shift+Enter. Let's also install the Python client library.

Having done that, let's run the Translate API. Notice the inputs, such as "is it really this easy?" You see the translation in French, because we asked for the target to be French. Let's change the target to "es", which is Español, and run it; now what we get back is Spanish. How does this work? We specified the inputs as an array of strings and asked the service to translate from English to whichever language we want, passing in those inputs, and what we got back is the outputs: the translated strings.

Similarly, we can invoke the Vision API. To invoke the Vision API, we need an image, and in this case it's an image of a street sign. I don't know Chinese, so I don't know exactly what it says; let's find out. We've put this image on Cloud Storage, and it has been made public, so we don't have to change anything here. We can ask the Vision API to read that image and tell us what text is in it. We run that, and at this point we get back the JSON output.
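For reference, here is a minimal sketch of the cells described above, based on the public training-data-analyst materials; the repo URL, image path, and placeholder key are assumptions, not verbatim from the video. The checkout cell looks something like this:

```
%bash
git clone https://github.com/GoogleCloudPlatform/training-data-analyst
```

Installing the client library and calling the Translate API might then look like the following, assuming the v2 Translate client that the course-era notebook used:

```python
# Install the Google API Python client first (in a %bash cell):
#   pip install --upgrade google-api-python-client

from googleapiclient.discovery import build

APIKEY = 'AIza...'  # placeholder: paste your own API key here

# Build a client for v2 of the Translate API and translate an array of strings.
service = build('translate', 'v2', developerKey=APIKEY)
inputs = ['is it really this easy?']  # illustrative input from the walkthrough
outputs = service.translations().list(
    source='en', target='fr', q=inputs).execute()  # change target to 'es' for Spanish
for inp, outp in zip(inputs, outputs['translations']):
    print(u'{0} -> {1}'.format(inp, outp['translatedText']))
```

And the Vision API text-detection request, assuming the street-sign image sits at a public Cloud Storage path (the exact URI here is hypothetical):

```python
# Ask the Vision API (v1) to run TEXT_DETECTION on an image in Cloud Storage.
vservice = build('vision', 'v1', developerKey=APIKEY)
IMAGE = 'gs://cloud-training-demos/vision/sign2.jpg'  # hypothetical public image path
request = vservice.images().annotate(body={
    'requests': [{
        'image': {'source': {'gcs_image_uri': IMAGE}},
        'features': [{'type': 'TEXT_DETECTION', 'maxResults': 3}],
    }]
})
responses = request.execute(num_retries=3)
print(responses)  # JSON with textAnnotations: locale, description, boundingPoly
```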
Again, what we're doing here is invoking version 1 of the Vision API, passing in the GCS image URI, GCS meaning, again, Google Cloud Storage. We have this image on Cloud Storage; we could also pass the image in as part of our request, but having it on Cloud Storage makes things faster because we don't have to upload all of that image data along with our request. We're asking it to do text detection, and what comes back is all of the text in this image, along with the language, "zh", meaning Chinese, and the bounding polygon of each of those pieces of text. We could, of course, take the first text annotation, get the locale, which is "zh", and then print out what we got: the foreign language, which is zh, and the foreign text, which is all of this. Now we can run it. Of course, the result from when it was run earlier is already here, so I can click on this cell, clear it, and run it again to make sure that what's being run is mine. We see that the Chinese text has now been translated into English.

The other thing we can do is the Language API. Here we have a set of quotes, and what we want to do is look at the sentiment associated with each one. As before, let's clear the cell and run it. In this case, we're printing out the polarity and the magnitude associated with each of these quotes. The polarity is positive if the sentiment is positive and negative if the sentiment is negative, and that makes sense. If you say, "To succeed, you must have tremendous perseverance," that's a very positive thing. But if you say, for example, "When someone you love dies," that's a pretty negative thing, so the polarity is negative. The magnitude is an indicator of how often strongly worded language occurs in the text.

The final piece we're showing here is the Speech API. As before, we have an audio file loaded into Cloud Storage, and we're asking for the speech in that file to be turned into text. We run that and get back a JSON response, which says with very high confidence that the speech in that audio file is "how old is the Brooklyn Bridge?"

What we have done in this lab is use Datalab and the Python APIs to invoke Machine Learning models. Remember that these are not Machine Learning models that we had to build; these are Machine Learning models that we could just use, and we can incorporate them into our own applications. The thing to recognize is that not everything you need to do with ML has to be done from scratch: if what you want to do is recognize text in images, you might just use the Vision API.
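To make those remaining steps concrete, here is a hedged sketch of the parsing, sentiment, and speech cells in the same style as the public notebook. The field names ("polarity", "magnitude") and the syncrecognize call follow the v1beta1 API versions this course used at the time, the quote wording is illustrative, and the audio URI is an assumption:

```python
# Pull the detected text and its locale out of the Vision response,
# then feed it back through the Translate API to get English.
foreignlang = responses['responses'][0]['textAnnotations'][0]['locale']      # e.g. 'zh'
foreigntext = responses['responses'][0]['textAnnotations'][0]['description']
result = service.translations().list(
    source=foreignlang, target='en', q=[foreigntext]).execute()
print(result['translations'][0]['translatedText'])
```

```python
# Sentiment analysis with the Natural Language API
# (v1beta1 returns polarity and magnitude, as described above).
lservice = build('language', 'v1beta1', developerKey=APIKEY)
quotes = [
    'To succeed, you must have tremendous perseverance, tremendous will.',
    'When someone you love dies, everything changes.',  # illustrative wording
]
for quote in quotes:
    response = lservice.documents().analyzeSentiment(
        body={'document': {'type': 'PLAIN_TEXT', 'content': quote}}).execute()
    sentiment = response['documentSentiment']
    print('POLARITY=%s MAGNITUDE=%s for %s'
          % (sentiment['polarity'], sentiment['magnitude'], quote))
```

```python
# Speech-to-text on an audio file in Cloud Storage (v1beta1 syncrecognize).
sservice = build('speech', 'v1beta1', developerKey=APIKEY)
response = sservice.speech().syncrecognize(body={
    'config': {'encoding': 'LINEAR16', 'sampleRate': 16000},
    'audio': {'uri': 'gs://cloud-training-demos/vision/audio.raw'},  # assumed path
}).execute()
best = response['results'][0]['alternatives'][0]
print(best['transcript'], best['confidence'])  # e.g. "how old is the Brooklyn Bridge"
```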