To begin looking more seriously at the role of values in science, I want to introduce a traditional philosophical distinction: the distinction between facts and values. In our discussions about objectivity, I said that facts are what is actually the case in the world and that true beliefs are beliefs about facts. We have true beliefs when we grasp the way the world actually is. Now, value is what philosophers call a normative term. It describes a judgment about how something ought to be. When we're talking about values in this context, we are typically talking about judgments about what is good and bad. So it's a fact that the climate is warming rapidly due to greenhouse gas emissions, but it's a value judgment that this is a bad thing. When a politician argues that regulatory decisions, including where we should get our energy, should be made on scientific grounds, that's also a value judgment. Now, some might judge science to be our best means to a good future, but others might judge, and in fact do judge, otherwise. Those marching for science do think that science should guide our regulatory decisions. They think not only that science should be free of political meddling, but that it should play a central role in making decisions about how to run our country. When Clinton said, "I believe in science," what she implied is that we shouldn't use our judgment of political expediency to challenge the findings of science. Instead, we should have our policies guided by science. Now, Clinton is clearly making a value judgment, and this value judgment wasn't derived from science; she was stating her belief that a rational society lets its policies be guided by science. The question for us to consider this week is the opposite of what Clinton was talking about. Should scientists ever be guided by values? 
Should what they consider to be good, just, or even expedient be a factor in the experiments they perform, the conclusions they draw, or the theories they endorse? There's a traditional view that emphatically says no: science and values are entirely separate domains, and science should keep itself independent of values. Philosophers have called this view the value-free ideal. Now, proponents of the value-free ideal reason like this. They say that science is about finding out what the world is like, independently of what you, me, or anyone else thinks about it. We ask questions and the world answers those questions. While there can be disagreement about what the world is saying as we conduct our observations, our simulations, and our experiments, ultimately there is a right answer to be found, and the right answer is settled by what the world is like alone. But what scientists think is good is, of course, a different beast. We can all be perfectly well acquainted with the same facts, yet have different ideas about what is good. This is because in our personal value systems, our cultures, or our religious traditions, we put different weights on different values. It can be perfectly reasonable to disagree about what is good. Science is different. True beliefs about facts are settled by what the world is like. Science shouldn't have the property that different people could rationally have different true beliefs. We can be uncertain or we can be wrong, but there's ultimately a single correct answer. Moreover, the argument goes, if we want science to help guide our decisions, we want the outputs of science to be value-free. Figuring out the best action to take requires combining what scientists tell us about the world with what we want to happen. We're better able to bring about what we want the more we know about what the world is like. So the value-free ideal is the traditional view, but is it the correct view? Several philosophers have thought not. 
They argue that when you look closely at scientific practice, you will see that judgments about facts and judgments about value intertwine. Sometimes this is overt, but other times it's very subtle. It's easiest to illustrate this challenge when we consider very extreme cases of values, including terrible ones, influencing science. So let me give you an example of just such a terrible case. The Philadelphia physician and physical anthropologist Samuel Morton was a major scientific figure in the 19th century. Among his more notorious achievements was the publication of a series of works that claimed to demonstrate the superiority of Europeans on the basis of measurements of cranial capacity, a proxy for brain size. His work had tragic repercussions: it was used as a scientific basis for the Jim Crow laws in the American South, which entrenched racial segregation in the late 19th and early 20th centuries. But for our purposes, the central issue is one of value. Morton was clearly racially prejudiced, and most commentators on his work have said that these value judgments influenced his science. For example, the biologist Stephen Jay Gould argued that Morton's racism subtly influenced both his measurements and his analysis of those measurements. His belief that Europeans were superior to Africans led him to subtle errors of measurement and calculation that aligned with these prejudices. Now, it's easy to look at this case and say: here is an example of bad science. A scientist took his everyday biases and let them infect his research. In fact, Gould put it this way: "Science has long recognized the tyranny of prior preference. But the only palliations I know of are vigilance and scrutiny." In other words, Gould thinks that it's hard to avoid letting your prior values or preferences subtly influence your work, but that the scientific community should be vigilant and subject all work to scrutiny in order to address it. 
Now, Gould is acknowledging the challenge to the idea that science is completely value-free, but he still holds out hope that these values or preferences can be addressed. This has been a very common response to the observation that real scientific practice seems not to be value-free. The response is that we should take the value-free ideal to literally be an ideal. It's something that we strive for. When we fall short of it, we try to do better the next time. Now, there's a lot to be said for this argument, and much of the time I find myself agreeing with it. But I want to talk about a very interesting challenge to it that comes from the work of Helen Longino. Longino is a contemporary philosopher of science and a past president of the Philosophy of Science Association. She argues as follows. "No matter how much data scientists collect, there will always be more than one explanation compatible with what they have observed. In practice, scientists deal with this by using guiding principles such as: choose the simpler explanation, or choose the one most consistent with other explanations, or choose the explanation that is tightly connected to our grand theories, like evolutionary theory or general relativity." These guiding principles are more like rules of thumb than rules of logic. Their justification comes from their usefulness; they're not guaranteed to get us to the truth. Sometimes the answer does turn out to be more complicated, sometimes other theories point us in the wrong direction, and so forth. "Once we've acknowledged that these guiding principles are more rules of thumb than rules of logic," says Longino, "we can see that these principles are really value judgments." Why do scientists assume that the simpler explanations are the right ones? Sure, it makes their lives easier, but when applied in the social sciences, it could lead to ignoring important differences between groups. 
Sometimes complex phenomena might require complicated explanations, not simple ones. Similarly, sometimes an entirely novel, unexpected explanation is the right one. So why follow a rule of thumb that says unified explanations are better than diverse ones? "The reason individual scientists generally adopt these positions," Longino argues, "is because they form part of a value system adopted by the scientific community as a whole." This value system explicitly prefers simpler, more unified explanations whenever possible. It says that these are better explanations because scientific theories are preferable when they're unified. Unless we know for sure that the world is simple and unified, this has to be a value judgment. But there's more. Since they are values, not facts, other competing values could be substituted. In some contexts, is it better to have a value system that emphasizes what is unique, what is complex, and what is diverse? If so, when? Longino thinks that we should explicitly consider the effect that a particular set of values has on the explanations scientists produce. Does the preference for simplicity in sociological explanations, for example, inevitably cause scientists to miss critical differences between how we might explain men's and women's choices in the labor force? If so, their explanations might be harming one or both of these groups by not taking the nuances of their differences into account. The ecologist and philosopher of biology Richard Levins goes even further. He once wrote, "When they encountered an unfamiliar animal in a picture book, my children would ask, 'What does it do to children?' Although philosophers go through great contortions to separate questions of reality from questions of ethics, the historical process unites them. Any theory of society has to undergo the test: what does it do to children?" 
So according to philosophers like Longino and biologists like Levins, even if we can state the difference between facts and values, in the actual practice of science they are united. This gives us room for serious value-based critique to enter into the practice of science. So you might now say: sure, in practice it's very hard to eliminate values from science. But isn't it still a good goal? If social scientists provide bad explanations because they oversimplify men's and women's experiences, isn't that just science gone wrong, not paying sufficient attention to the way the world actually is? Why do we need a different value? Why not make the goal no values at all, even if we often fall short of it? This dispute between philosophers who think that science should maintain the goal of being value-free and those who think this is an impossible or naive goal is very much a live one. Both sides acknowledge that in practice values enter into scientific judgments, but they disagree about whether this is ultimately a good thing. In the 2018 IPCC report titled Global Warming of 1.5°C, the authors recommended that the best way to meet the greenhouse gas reduction goals is to impose a carbon tax. In a case like this one, it's easy to see why issues of fact and issues of value are so intertwined. Scientists take themselves to be stating a simple empirical fact: the carbon tax mechanism is the one most likely to keep emissions down, and keeping emissions down is our only chance of meeting the global temperature goal. But for the argument to be fully persuasive, they also have to assume that everyone values keeping global warming down more than they value businesses having lower taxes. Similar assumptions have to be made throughout research that considers future climate scenarios. When there's uncertainty, should one be conservative or cautious about the future climate? Or should one be optimistic? 
This has to come from judgments about which outcomes would be worse or less bad in the future.