In traditional experiments on human subjects, you have what is called prospective data collection. That is, you set up the experiment and then collect the data from the experiment you have set up. The way this works is that you first have a review by the IRB and get approval. This approval is required whenever there are human subjects, whether it's medical research or social science research. Only then do you actually conduct the experiment and collect data. This process is enforced by US federal law for any government-funded experiments. And it applies not just to medical and social science experiments, but also to computer science experiments. So if one is conducting a user study of a new user interface, for example for a web program, and it is a research study, it requires IRB review.

However, there is an exception, and it is a very important exception in terms of the law. I think it is in the law because it reflects a societal consensus about values: this kind of review is required only if the activity is research. It is not required if it is the ordinary conduct of business.

What does this mean? If you're a web company, you probably do something called A/B testing. This kind of testing is not unique to web companies, but companies on the web use it almost universally because they have high customer volumes. The idea is to take some fraction of the site's visitors and show them version A, show the remainder version B, and then watch how each group reacts. If, say, people shown version A click on a revenue-generating link 29% of the time while people shown version B click only 13% of the time, then version A wins the A/B test, and from that point on the company deploys only version A of its web page design. Companies in the information technology industry conduct A/B tests all the time; in fact, large companies like Google are said to be running hundreds of A/B tests simultaneously at any point in time. A short code sketch below illustrates the basic mechanics.

Clearly human users are impacted, and clearly this is an experiment: the company doesn't know whether the A group or the B group is going to do better. However, this is not considered human subjects research, because the company is genuinely trying to do the best thing, in terms of whatever it is trying to maximize, in choosing A and B. If there is some page design that the company expects will do poorly, it usually won't show it. It shows A and B as choices because it thinks both are reasonable, and it would like to see which one people actually prefer. Similarly, companies understand that humans are not always rational and that there are psychological cues that can push our buttons, and they definitely try to push our buttons to get us to buy more or click more. This is considered good business practice. Companies are also imperfect: algorithms are used for many things, these algorithms are in most cases imperfect, and they sometimes give poor results. But again, we as customers of these companies, as subjects of these algorithms, just live with those poor results. All of these things are companies making a best effort at what they always do, and none of it counts as human subjects research.
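To make the mechanics concrete, here is a minimal sketch in Python of how an A/B test like the one described above might be run. The 50/50 split, the function names, and the roughly 95% significance threshold are illustrative assumptions for this lecture, not any particular company's implementation.

```python
import hashlib
from math import sqrt

def assign_variant(user_id: str) -> str:
    """Deterministically split visitors 50/50 by hashing their ID,
    so a returning visitor always sees the same version."""
    bucket = int(hashlib.sha256(user_id.encode()).hexdigest(), 16) % 100
    return "A" if bucket < 50 else "B"

def ab_winner(clicks_a, visitors_a, clicks_b, visitors_b, z_threshold=1.96):
    """Compare click-through rates with a two-proportion z-test.
    Returns the winning variant, or None if the observed difference
    is not significant at roughly the 95% level."""
    p_a = clicks_a / visitors_a
    p_b = clicks_b / visitors_b
    p_pool = (clicks_a + clicks_b) / (visitors_a + visitors_b)
    se = sqrt(p_pool * (1 - p_pool) * (1 / visitors_a + 1 / visitors_b))
    z = (p_a - p_b) / se
    if abs(z) < z_threshold:
        return None
    return "A" if z > 0 else "B"

# With rates like the 29% vs. 13% from the lecture, even a modest
# sample makes A the clear winner.
print(ab_winner(clicks_a=290, visitors_a=1000, clicks_b=130, visitors_b=1000))  # -> "A"
```

One design note: hashing the user ID, rather than randomizing on every visit, keeps each visitor in the same group for the life of the test, which is what makes the per-group click rates meaningful.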
In contrast, consider the following incident. In 2012, researchers at Facebook and Cornell University conducted an experiment in which the news feeds of selected Facebook users were manipulated: some users were shown more positive articles, others more negative or disappointing or sad articles. What they found was that people shown more positive articles posted more positive articles themselves on Facebook, and people shown more negative articles posted more negative articles themselves. In other words, they demonstrated emotional contagion. This was considered a psychologically important result, and a paper describing the experiment and its results was published in the Proceedings of the National Academy of Sciences in 2014.

The users of Facebook did not know that their news feeds were being manipulated. They did know that Facebook regularly tinkers with the algorithm that chooses the items shown in their news feeds, and they did know that what they actually saw in their news feeds was whatever Facebook thought was good for them to see. What they didn't know was that what Facebook claimed was good for them to see was potentially manipulated for the purposes of this research study. Once the results of the experiment became public, Facebook got a lot of bad press. The point that I think came out of this is that companies should not lie to us intentionally, even if the effects on us are not much different from what would happen if they were just conducting regular business and happened to be incompetent at it.

There's a company called OkCupid, a matchmaking company, that also conducted experiments, reporting compatibility scores that were different from what its algorithm had actually estimated. And they thought they were doing the right thing: Christian Rudder, a co-founder of OkCupid, wrote a blog post titled "We Experiment On Human Beings!" in support of what Facebook had done.

What we learn from these examples is that when something new becomes possible, such as high-volume experiments on human subjects across the web, we have to feel our way toward a social consensus on what is appropriate. Clearly, somebody like Christian Rudder thought he was doing something appropriate at the time he did it. Today, I think there is good consensus that companies should not conduct this kind of research, even if they are legally allowed to do it. It is one of those things where we have an ethical consensus: it is not right for companies to lie to us in the course of conducting research.