[MUSIC] Welcome back, folks. Today, we are going to talk about causality. Causality is that ineffable thing that says when you flip a light switch, a light is going to turn on, and it is central to really all of medical science. But I want to talk to you first about a story that happens almost every night in my home. [MUSIC] My kids like chocolate, a lot. But they seem to forget that fact until it's like 8:30 at night, and they're just about to go to bed; then they demand chocolate. My wife, who is made of sterner stuff, will resist, insisting that a late dose of chocolate will make the kids crazy. I, being completely incapable of discipline, always give in and, yeah, the kids do get crazy. But is it really because of the chocolate? Does eating chocolate cause kids to become crazy, or is the craziness just a random phenomenon brought about by a slew of other unstable factors in their lives? Which bus seat they sat in? How long they were on the swing at recess? What level of growth hormone is coursing through their bloodstream? Causality is hard, but it's also everything, at least when it comes to medical science. So my goals here are: I want to explain why causality is central to all of medical science. I'm going to introduce you to something called the Bradford-Hill Criteria for causality, something every young scientist should know. And I'll show you that you don't necessarily need a randomized trial to infer causality. There are other sources of evidence that are important, but randomized trials always help. So why is causality so important? Why are we so obsessed with it in medical science? Well, the big insight here is that if A causes B, we can reduce B by reducing A. If smoking causes lung cancer, we should be able to reduce lung cancer by reducing smoking. Does that make sense? Whereas with things that are correlated but not causal, it doesn't work that way. So it may appear that drinking causes lung cancer.
We might find that people who drink more are more likely to get lung cancer. Secretly, it's because they also smoke more, but let's pretend we don't know that. Because the drinking isn't causally related to the lung cancer, if we do something to decrease their drinking, we're not going to change the rates of lung cancer. That is the importance of causality. So we really need to figure this out. Now, as we get started, I want to point out some of these things that I call causality weasel words. Okay. So if you read news reporting of medical studies, you'll often see words like this. Look: weightlifting is linked to reduced depressive symptoms. Linked. Study associates ozone exposure at birth with increased risk of developing asthma. Associates. Study links one egg a day to reduced heart disease risk. What are these words? They're not saying causes, right? We're not saying weightlifting causes reduced depressive symptoms or weightlifting causes more happiness. And that is because the studies that these articles are talking about could not infer causality. They are linked, they're associated. Raise an eyebrow when you see those terms in reporting of the medical literature, because it means that causality is not totally clear. Now, this brings us to the classic correlation versus causation problem. So things can be statistically tied together. They can appear to run right along with each other, and that is not enough to infer causality. So for example, this is real data which looks at the per capita consumption of turkey in the United States by year and the divorce rate by year in the United States. And you can see that these two factors appear to be in lockstep. And indeed, if you ran a statistical test of the sort that we talked about early in the course, you would find that yes, absolutely, these things are correlated. Does that mean that eating more turkey causes more divorce, or does it mean that divorces lead to more turkey eating? No. No.
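A quick way to see the drinking-and-smoking point in code: in the toy simulation below, all probabilities are invented for illustration, and smoking causes cancer while also making drinking more likely, but drinking itself does nothing. Drinkers still show higher cancer rates, yet "intervening" to eliminate drinking barely moves the overall rate.

```python
import random

random.seed(42)

def simulate(force_no_drinking=False):
    """Toy world: smoking causes cancer and also makes drinking more likely.
    Drinking has NO causal effect on cancer. All probabilities are invented."""
    n = 100_000
    cancer_by_drinking = {True: [], False: []}
    total_cancers = 0
    for _ in range(n):
        smokes = random.random() < 0.30
        # smokers drink far more often -- this is the confounding path
        drinks = (random.random() < (0.80 if smokes else 0.20)) and not force_no_drinking
        # cancer depends only on smoking
        cancer = random.random() < (0.15 if smokes else 0.02)
        cancer_by_drinking[drinks].append(cancer)
        total_cancers += cancer
    return total_cancers / n, cancer_by_drinking

rate_before, groups = simulate()
drinker_rate = sum(groups[True]) / len(groups[True])
nondrinker_rate = sum(groups[False]) / len(groups[False])
print(f"cancer rate, drinkers:     {drinker_rate:.3f}")    # roughly 0.10
print(f"cancer rate, non-drinkers: {nondrinker_rate:.3f}") # roughly 0.03

# "Intervene": ban drinking entirely. The overall cancer rate barely moves,
# because drinking was never on the causal path.
rate_after, _ = simulate(force_no_drinking=True)
print(f"overall rate before intervention: {rate_before:.3f}")  # roughly 0.06
print(f"overall rate after intervention:  {rate_after:.3f}")   # roughly 0.06
```

The association is real, but acting on it accomplishes nothing; that is exactly the drinking and lung cancer trap described above.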
This is a correlation. It does not have enough evidence to support causality. And we'll talk about those Bradford-Hill Criteria, how you can start to infer that there's a causal link here. One important concept is that we can't infer causation without variation. So the reason causality is so powerful in medicine is that if we can prove that thing A causes thing B, the inference is that if we change thing A, we change thing B. Okay, that's really the cool part, right? Because we're always looking at diseases and stuff. We want something we can change that is actually going to reduce the risk of disease. Now, why is variation important? If you, for example, gave your kid chocolate every single night at eight pm like clockwork and they were crazy, can you infer causation? No, you can't, because they always get chocolate. Maybe they're crazy all the time. [LAUGH] Maybe you just have a crazy kid. You have to change something, or at least compare different states, to begin to infer causation. So remember, factors that are invariable are very hard, if not impossible, to tie to causation in a medical study. You've got to change something. All right, so let's look at an example. This was a study that looked at diet, smoking, and cardiovascular risk among a population of people with schizophrenia. It was about a hundred individuals with schizophrenia, and the study basically found that they didn't eat much fruit, they smoked a fair amount of tobacco, and they had higher cardiovascular risk overall. Now, the reporting on this study was suggesting that schizophrenia caused reduced fruit consumption or schizophrenia caused increased smoking. But there is no variation in this study; this study only looked at people with schizophrenia. It did not compare them to a group of people without schizophrenia. Now, we sort of know what the smoking rate in the general population is, so you can make some inferences.
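One way to make "no causation without variation" concrete: Pearson correlation, the statistic behind most of these associations, is literally undefined when the exposure never varies, because its denominator contains the exposure's variance. A minimal sketch of that, using the chocolate story (the numbers are made up):

```python
def pearson(xs, ys):
    """Pearson correlation coefficient; None if either variable never varies."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    var_x = sum((x - mean_x) ** 2 for x in xs)
    var_y = sum((y - mean_y) ** 2 for y in ys)
    if var_x == 0 or var_y == 0:
        return None  # zero variance: the association is undefined, not zero
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    return cov / (var_x * var_y) ** 0.5

chocolate = [1, 1, 1, 1, 1]  # chocolate every single night: no variation
crazy = [1, 0, 1, 1, 0]      # crazy some nights, calm on others
print(pearson(chocolate, crazy))      # None: chocolate's effect is unestimable
print(pearson([0, 1, 2], [0, 2, 4]))  # 1.0: with variation, we can measure
```

If the exposure column never changes, there is simply nothing to compute against, which is exactly the problem with a study that enrolls only people with schizophrenia.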
But schizophrenia was not varied, and without variation, it is extremely hard to infer causation. So is it the schizophrenia that results in reduced fruit consumption, or is it any of a myriad other factors that might track with schizophrenia? Correlation, causation, you can't say without variation. This is going to be like a really cool rap someday. No, I won't, I promise. Okay. So I told you about the Bradford-Hill Criteria. So far, I've just kind of implied that you need to change something to infer causation. But Austin Bradford Hill came up with a set of criteria that say: these are the kinds of evidence of causality. And it's important here to understand that the idea that a randomized clinical trial is absolutely necessary to prove causality is not true. We can't do a randomized clinical trial of everything. We can't randomize people to smoking cigarettes versus not and see who gets lung cancer. So you have to be able to infer causality from other sources. So here are the Bradford-Hill Criteria, and we'll walk through these in the context of a specific study in a moment. But very briefly, first, there are the three direct evidence components. The first is experiment: you literally randomize people to something, right? Smoking versus not smoking, although that would be unethical. But you change something, you experiment, and you see what the outcome is. Strong evidence. The second is the strength of the effect: so in the absence of an experiment, maybe you're looking at the number of packs a day that someone smokes, right? The link, the statistical strength, between the number of packs per day and the outcome of lung cancer is further evidence that something really is going on here. The third is temporal-spatial proximity: this is sort of intuitive. It says that the thing that you think causes the outcome should happen before the outcome. And the closer it happens to the outcome, the stronger the evidence is.
So if you're thinking about alcohol consumption and the risk of getting in a car accident, you would find that the closer you drink alcohol to getting in your car, the higher the likelihood of getting in a car accident. That is a strong bit of evidence that drinking alcohol increases the risk of car accidents; that is temporal evidence. Then we have some really important things, I think, that often get overlooked in the medical literature, called mechanistic evidence. This is sort of "does this make sense?" type of evidence. Okay. So one that we often refer to is dose-response. So if you're looking at an exposure, whether it's drinking alcohol, or smoking, or a particular drug, the more you give someone or the more someone takes on their own, the more likely they are to have the outcome, right? People who smoke two packs a day are more likely to get lung cancer than people who smoke one pack a day, and they're more likely to get lung cancer than people who smoke half a pack a day. That's a dose-response phenomenon, good evidence of causality. Biological plausibility: does it make sense? If you read a study that says that holding this magic crystal is going to prevent Alzheimer's disease, that does not make sense. There's no biologic plausibility there. It doesn't conform with our understanding of science. That's not to say our understanding of science is perfect, but you would require a much higher threshold of evidence when someone is making an argument that doesn't conform with the modern conception of science. So is it plausible that inhaling a bunch of smoke into your lungs would cause lung cancer down the road? That is plausible, right? It makes sense. Really important. Finally, what we call parallel evidence: consistency, coherence, and analogy. These are studies that support the relationship you think is causal but might not directly evaluate it.
So for example, consistency. What this says is that the effect you're looking at is seen across a variety of populations: people from different countries, different sexes, people of different races. You keep seeing it again, and again, and again; the effect is consistent. Coherence often has to do with animal models. It says, we don't just see this in humans, we see it in experimental animals as well. It kind of all hangs together. And finally, analogy. We see similar effects based on the same biologic process in different situations. So for example, we were considering that smoking causes lung cancer. We might, by analogy, say, well, what that really is is a toxin or inflammatory stimulus affecting a tissue. And by analogy, we have similar evidence that smoked meats and the like cause stomach cancer. So that's not evaluating the same relationship, but you can see they're analogous in some sense, and that helps to support the causal argument for both of them. And finally, there's a kind of hanger-on here that they call specificity, which says that the thing that causes the outcome is one of the only things that causes the outcome. This is not often used when we're evaluating medical causality, because you might say, okay, hey, smoking causes lung cancer, but other things cause lung cancer too, so smoking isn't specific. And that's somewhat true, but in the medical world, it's just a fact of life that a lot of different things can cause a lot of different diseases in different people. Those are the Bradford-Hill Criteria. Let's take them through an actual study and see if we can assess causality. So this was a nice review article that was making an argument that sugar-sweetened beverage consumption causes obesity. They write that there is sufficient evidence that decreasing sugar-sweetened beverage consumption will reduce the prevalence of obesity. See that?
So if you argue causality, then you can say, well, if sugar-sweetened beverages cause obesity, then reducing sugar-sweetened beverages should reduce obesity. That's the power. And this study actually went through all the Bradford-Hill Criteria and assessed causality on that basis. So, number one, experiment. Yes, there are randomized trials showing that if you get kids to drink fewer sugar-sweetened beverages, they have less obesity. Pretty strong evidence. Strength: there's a modest relationship between the consumption of sugar-sweetened beverages and obesity. It's there, but it's not that strong. Why? Because there are a lot of other things that cause obesity too, including food [LAUGH] that isn't a beverage. Temporality: sugary drink consumption now is associated with weight gain in the future, right? What you don't want to see is the reverse, where people who are already overweight subsequently drink more sugary beverages; that would put the temporality in the other direction. Here, the temporality runs the right way. Dose-response, definitely. So although the overall association isn't strong, the more sugar-sweetened beverages you drink, the more likely you are to be overweight or obese. That's a dose-response. Is there biologic plausibility? Well, of course, right? You're drinking a lot of calories; the biologic plausibility is certainly there. Consistency: we see this across children and adults. We see the same effect in different populations. Coherence: so they looked and they said, okay, well, this happens in rats too. Rats gain weight when given sugary water to drink, as opposed to regular water. And analogy: setting sugar-sweetened beverages aside, other high-calorie foods are also associated with weight gain. These are your Twinkies, and Ho Hos, and whatnot. So their conclusion was that there is good support for sugary beverages causing weight gain. Is that blowing your mind?
No, it's probably not blowing your mind, because causality is really something that humans are built to think about, built to recognize, and this is not terribly surprising. But I really appreciated the rigorous way you can apply the Bradford-Hill Criteria to this data, and only a small smidgen of it was based on a randomized trial. So, our take-home points. One: causality is powerful. It means we have something to change. That's why we're so obsessed with it, because it suggests we can fix something. Two: we cannot assess causation without variation. Okay, all those studies that don't have a control group make it really hard to assess causation. Three: correlation is not causation, but it's not not causation either. Okay, just because things are correlated doesn't mean they're not causally related; it just means correlation can't tell you one way or the other. Four: beware of causality weasel words. Associated with, linked to, tied to: these do not mean that A causes B. Five: randomized trial evidence supports causality, but other factors do as well. Do not be a randomized controlled trial snob; all [LAUGH] evidence is valuable. All right, thanks. I'll see you next time.