In this video, we're going to learn about users giving informed consent to their data being used, and how systems can be designed for that to happen. This lecture is based on a paper that's linked in the readings called Informed Consent by Design, and you should take a look at that reading before you go through this video.

So, let's start off by defining the term: what is informed consent? This is a concept that exists outside the space of online privacy, but it certainly applies there. Basically, informed consent means that users understand what data is being collected and shared, and they consent to how it's used. So they're both informed about what's happening and they consent to it happening. This is a term that's commonly used in research ethics. For example, if you have a patient trying a new cancer drug, you want them to be fully informed of all the risks, and to consent to participate in that medical study knowing those risks. Here we're talking about the same kind of idea, but with people's information that they share online: being informed about how it's shared and consenting to it being used that way.

The authors of this paper identify six components that are required for there to be informed consent in online privacy systems.

The first is disclosure. That means information is disclosed to the user about what will be done with their data. So, somehow, we have to explain to users what information is going to be collected, who has access to it, how long it's archived, what the information is used for, and how the individual's identity will be protected. This covers all the information about how a user's personal data will be used, and it needs to be disclosed in the first step.

The second step is comprehension. This means users need to understand what you've disclosed to them. So, for example, if you have a technical document that does disclose all this information, but an eight-year-old child is reading it, it's unlikely that they comprehend what you've asked them to agree to.

The third step is voluntariness. This means that there's no attempt to coerce the user into sharing their data. They're not being forced to do it, they're not being manipulated; they are voluntarily consenting for their data to be used this way. A system might be non-voluntary if, for example, users were manipulated into thinking there would be serious repercussions to their life if they didn't share their data the way you want them to.

The fourth step is competence, and this means people have the mental, emotional, and physical capabilities needed to give informed consent. So, again, children are an example of a group that may not be competent to give informed consent. In fact, in the United States, the Children's Online Privacy Protection Act, or COPPA, doesn't allow children 12 and under to give consent. It requires their parents to give permission, because it's assumed the children aren't competent to consent to having their data shared.

The fifth element is agreement. That means users have to actually agree to the policies that are being disclosed to them. So, they have to be able to accept them or decline them, and the agreement needs to be ongoing: I can't just agree at the beginning and then you can do whatever you want. I need to continually understand what's being done and agree to you using the data in that way.

And the final component is minimal distraction. This means users shouldn't get overwhelmed by the information that is disclosed to them or by the process of giving consent. It should be a straightforward disclosure of what will be done, and a straightforward understanding of and agreement to that policy. If you overwhelm users with information, that may distract them from what you are actually doing, and that can inhibit them from giving informed consent.

So, those are the six components, and what we're going to do now is look at some specific systems and examples that embody some or all of them, to see how they work in practice.
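One way to make the framework concrete is to treat the six components as a checklist that every one of them must pass. Here's a minimal sketch of that idea in Python; the class and field names are my own illustration, not anything from the paper:

```python
from dataclasses import dataclass, fields

@dataclass
class ConsentAudit:
    """One flag per component of informed consent from the paper."""
    disclosure: bool           # what's collected, who sees it, how long, for what
    comprehension: bool        # can the intended audience understand the disclosure?
    voluntariness: bool        # no coercion or manipulation
    competence: bool           # users are capable of consenting (e.g., not young children)
    agreement: bool            # explicit, ongoing ability to accept or decline
    minimal_distraction: bool  # the process doesn't overwhelm the user

    def informed_consent(self) -> bool:
        # Informed consent requires all six components to hold at once.
        return all(getattr(self, f.name) for f in fields(self))

# Hypothetical example: a service with clear disclosure but a confusing,
# jargon-heavy policy fails on comprehension, so there is no informed consent.
audit = ConsentAudit(disclosure=True, comprehension=False, voluntariness=True,
                     competence=True, agreement=True, minimal_distraction=True)
print(audit.informed_consent())  # False
```

The point of the all-six rule is that the components aren't substitutes for one another: perfect disclosure doesn't help if users can't comprehend it or were coerced into agreeing.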
So, let's come back to an example that we've looked at before, and that's advertising in Google's Gmail system. Again, we're looking at an inbox here with no messages, but there's an ad across the top. And that's an element of privacy for the user: those ads are targeted based on the content of their messages. So, what we want to know is whether it's fair to say that users have given informed consent for those ads to be targeted to them. We need to look at the six elements of informed consent and see if they actually happen here.

First, is it disclosed to users what information is collected and how it's used? To answer that, we need to look at the Terms and Privacy link down here at the bottom. So, we have a first step of knowing that Google is at least disclosing what their terms and policies are, because they have a link clearly available on the Gmail page. If we click on that, it brings up the terms and privacy page for Google, and if we go to the privacy policy, for example, we can start going through it. Now, I'm not going to read you the whole policy, but there are a few things that we should note here. First, it has pretty explicit sections on what information is collected, how it's collected, transparency and choice, information you share, and so on. Indeed, if you go through this, Google is quite explicit in disclosing what information is collected and how it's shared.

If we look at the language here, as with the Facebook policy that we saw earlier, this isn't written in a lot of legal jargon. In fact, most major privacy policies aren't written with that kind of jargon. It's written in pretty comprehensible English that's easy to understand. If we just pick a random section, for example here, "Information we collect," and start reading it, we can see that it says: "We collect information to provide better services to all our users, from figuring out basic stuff like what language you speak, to more complex things like which ads you'll find most useful." And if we mouse over that, we get an example. So, this is explicitly explaining the kind of thing that users might be concerned about in terms of advertising, and it's written in a pretty accessible way. That allows us to say that comprehension is probably satisfied here: yes, what they're presenting is something users can comprehend.

Next, is this voluntary? Well, we can say it's voluntary, because users have to see these terms before they sign up, and there's nothing in here that attempts to coerce or manipulate users into using Gmail. And in fact, it's pretty common for online email systems to show ads, especially when they're free.

Competence is the next issue, and that's a harder one to assess, because it requires understanding the users themselves. But Google does have an element in this policy that says children can't sign up for the service without the permission of an adult.
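That permission rule is the kind of check COPPA implies: children 12 and under can't consent themselves, so a signup flow has to determine age and route minors to a parental-permission path. Here's a minimal sketch of such an age gate; the function names and flow are my own illustration, not Google's actual implementation:

```python
from datetime import date

COPPA_MIN_AGE = 13  # under COPPA, children 12 and under can't consent themselves

def age_on(birthdate: date, today: date) -> int:
    """Whole years of age on a given day."""
    had_birthday = (today.month, today.day) >= (birthdate.month, birthdate.day)
    return today.year - birthdate.year - (0 if had_birthday else 1)

def can_self_consent(birthdate: date, today: date) -> bool:
    """True if the user is old enough to consent on their own;
    otherwise the signup flow must route to parental permission."""
    return age_on(birthdate, today) >= COPPA_MIN_AGE

# A 12-year-old can't self-consent and gets the parental-permission flow.
print(can_self_consent(date(2013, 6, 1), today=date(2026, 1, 1)))  # False
```

In practice the hard part isn't the arithmetic but verification: children can lie about their birthdate, which is why real COPPA compliance also involves mechanisms for verifying that a parent actually gave the permission.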
The next element is agreement. Users have to agree to this privacy policy when they sign up for Google. It's explicit and required, so we can certainly say there's agreement.

And finally, is there minimal distraction? Well, though there is a decent amount of text here, it's not a huge document, and there's nothing distracting or trying to overwhelm the user. It's pretty straightforward and pretty clear about what's being done. And so, we can say pretty confidently that in the Gmail system, users are giving informed consent for their data to be stored by Google and used the way they describe, including for advertising.

Another example discussed in the paper that gets to issues of informed consent is that of browsers showing people that they're on a secure connection. Here we can see, for example, and we're security people so we know, that there's an HTTPS in the address bar, and this lock is shown locked, which indicates we're on a secure connection. If we were on an insecure connection, it would be shown open. So, we want to ask some of these same questions about whether the browser is giving enough information that users are giving informed consent for their data to be sent in whatever way is being used, secure or insecure.

First, we ask: is the website disclosing that we're on a secure connection? And it is. It does it with the lock and with the HTTPS. But the next question is comprehension. Do users understand what information is being disclosed to them? They may understand that this means "secure," but researchers have shown, and this is described in the paper that we read here, that a lot of the time users don't really know what a secure connection is. When they are asked to diagram where the security falls in the process of data being entered on their computer, transmitted over the internet, and received on a server, they don't provide consistently clear pictures of what it is. So, there's a real question about whether people are actually consenting to their data being sent in this way.
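What the lock icon is actually summarizing is that the browser completed a TLS handshake and verified the server's certificate. As a rough illustration of what sits behind that indicator, here's a minimal sketch using Python's standard ssl module; the host is just a placeholder, and this isn't how a browser is implemented, only the same handshake and certificate check:

```python
import socket
import ssl

def describe_connection(host: str, port: int = 443) -> None:
    """Connect over TLS and print what the browser's lock icon summarizes."""
    # The default context verifies the certificate chain and the hostname,
    # which is exactly what a locked icon is asserting to the user.
    context = ssl.create_default_context()
    with socket.create_connection((host, port)) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            print("TLS version:", tls.version())
            print("Cipher suite:", tls.cipher()[0])
            cert = tls.getpeercert()
            print("Certificate subject:", dict(x[0] for x in cert["subject"]))

describe_connection("example.com")  # placeholder host
```

The comprehension gap the researchers found is essentially that users see the lock but can't place these steps: encryption protects the data in transit between their computer and the server, not on their own machine and not once the server has it.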
Getting to the issue of informed consent, both in the way it's used in research ethics and in the way it's used online, a recent story was published about an experiment that Facebook ran with about 700,000 users, where it tweaked their news feeds to show either neutral and happy posts or neutral and sad posts, to see how that affected the emotional state of the people who were posting. There was a big outcry in response to this experiment. And Facebook said that while they didn't actually tell people they were experimenting on them, people had given consent to be part of the experiment, because Facebook has something in their terms of service. Is that the case? Well, let's go through the six steps that are required here.

The first is disclosure. Is it fair to say that Facebook disclosed that people would be part of an experiment? Here we're looking at the Facebook Data Use Policy again. And if we scroll way down in here, there's one line, where they're explaining how they use the information they receive, that says one option is "for internal operations, including troubleshooting, data analysis, testing, research and service improvement."

This is the line, particularly the "research" component, that Facebook argues is a disclosure that people will be experimented on, and since people consent to the Data Use Policy when they register, they are giving informed consent to be part of an experiment. Is that accurate? I think there's a fair question about whether this actually counts as disclosure. Recall that the components of disclosure include what information is collected, who has access to it, how long it's archived, what it's used for, and how the identity of the individual is protected. None of that is spelled out for an experiment here. Facebook argues that saying the information is used for research is enough detail, but there's a question about whether that's actually something people can consent to, especially when you get into the details of experiments like this one.

That also carries into our analysis of comprehension. Do people actually understand what they're being told when they see this word "research"? Again, it's unlikely, because people respond quite negatively when they find out about some of these experiments that Facebook runs without telling them.

Another question is: is it voluntary? People are voluntarily using Facebook, but someone might not voluntarily participate in such an experiment. For example, if someone's depressed, the last thing they need might be a news feed showing them only bad news; it might make them feel worse. And someone who's trying to work on their depression may absolutely not voluntarily participate in an experiment that would manipulate their emotions. The only way they can opt out, since Facebook doesn't tell you when it's running these experiments, is to shut down their Facebook account. And that could be considered a type of coercion, because there's a lot of value in people's Facebook profiles that they would lose if they had to shut their accounts down.

So, just in these first three respects, there's a real, and I think dramatic, question of whether or not people are giving informed consent to be experimental subjects in the kinds of psychological experiments Facebook was running. And that becomes a real problem in terms of privacy and informed consent.

So, in conclusion: usable privacy requires informed consent from users, and this is something that we can apply to every privacy system that we look at. Informed consent means that users need to understand how their data is used and agree to it being used that way. The six components of informed consent that we've taken from this paper and used in the analysis here are things that can help you analyze a system for informed consent. So, take a look at your favorite system where you're sharing data and go through those six components: is it really disclosed to you what the system is doing, is it possible for you to understand that and all of its implications, and do you agree to it? This is an interesting and important way to analyze privacy and its usability, because it gets to the heart of what users understand about their data and how it's being used.
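One component worth operationalizing as you do that analysis is ongoing agreement: consent given under one version of a policy shouldn't silently cover a later version or a new use like an experiment. As a closing illustration, here's a minimal sketch, with invented names and a simple integer version, of how a system might decide when to re-prompt a user:

```python
CURRENT_POLICY_VERSION = 3  # bumped whenever the policy materially changes

def needs_reconsent(user_agreed_version: int) -> bool:
    """Ongoing agreement: a user who accepted an older policy version
    must be asked again before their data is used under the new one."""
    return user_agreed_version < CURRENT_POLICY_VERSION

# A user who agreed to version 2 is re-prompted; a version-3 user is not.
print(needs_reconsent(2))  # True
print(needs_reconsent(3))  # False
```

A system built this way couldn't quietly add a new data use, which is exactly the gap in the Facebook example above.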