Episode 4, "Zombie Pseudoscience"
Amelia: Before we begin episode 4, I just want to let you know that this episode will be pretty confusing if you haven't heard the previous episodes. So if you haven't already listened to them, I suggest you go back and do that now.
Amelia: Let’s recap. In the second episode, we spoke with Daniel Dennett about how researchers misconstrued an early paper of his when developing false-belief tests for “theory of mind,” and we heard how those tests were used to support the idea that autism is a theory of mind deficit. And in the third episode, we spoke with Tobi Abubakare about all of the problems with the research purporting to show that autism is a deficit in theory of mind. This body of research is plagued by methodological problems, in large part because researchers have failed to interrogate the values at the foundation of their research program. And we learned that this type of research has caused a lot of real-world harm; it reinforces negative stereotypes about autistic people, and contributes to the isolation of autistic people.
So, at this point it seems pretty clear to me that all this theory-of-mind-deficit research is bad science. But look, I don’t want to go around accusing long-standing research programs of being “bad science” just because I don’t like them. I want a specific diagnosis of what makes this bad science. And after speaking with today’s guests, what I learned is that this body of research might not be science at all.
This is NeuroDiving, a philosophy podcast about neurodivergence. I'm Amelia Hicks. In this episode, we're going to explore what makes theory of mind research bad science, and maybe even pseudoscience.
Amelia: When I first started to read about autism research focusing on theory of mind, I quickly became suspicious. This looked like bad research to me, in part because it seemed like, over time, the “theory of mind deficit” view of autism became an unfalsifiable theory. In other words, it became an idea for which there was no possible counterevidence.
So look, all these theory of mind researchers were trying to do quantitative scientific research—they were trying to precisely measure psychological phenomena, using numbers and fancy statistics. And if you want to do good quantitative research, you need to test falsifiable theories—like, it needs to be possible, at least in principle, to find a piece of evidence that disproves the theory you’re testing. When a theory isn’t falsifiable, nothing counts as evidence against it; but if you think about it, that means that nothing really counts as evidence for it, either.
For example: here’s an unfalsifiable theory, which I’ll call the sleepy invisible goblin theory. According to my theory, sleepy invisible goblins are to blame for all of my problems. When I fall down the stairs? That’s the invisible goblins at work. When I lose my wallet? That’s the goblins again. Fortunately, because the goblins get sleepy, they often have to take naps, which temporarily relieves me from their goblin torment.
OK, so now, Joanna…
Joanna: Yes, hello!
Amelia: Hey there! So, I would like you to try to find a piece of counterevidence to my sleepy invisible goblin theory. Like, try to persuade me to stop believing the theory.
Joanna: OK, well, first of all, I’ve never seen these goblins, and in fact I’ve never seen any goblins anywhere, at least not in real life.
Amelia: Ah yes, well, that’s because the goblins are invisible of course! So, just as my theory predicts, no one ever sees the goblins, including you.
Joanna: OK, well, second of all, you don’t fall down the stairs and lose your wallet all the time. Bad things aren’t constantly happening to you, so doesn’t that suggest that maybe you’re not constantly plagued by goblins?
Amelia: Not at all! According to my theory, the goblins get sleepy, and they need to take naps, and that’s how I sometimes manage to get through the day without any major mishaps. So, the fact that I have a few good days is totally compatible with the invisible sleepy goblin theory.
Joanna: So it seems like there’s basically nothing I can say to convince you to give up the invisible sleeping goblin theory; it’s like, the theory is compatible with anything anyone observes.
Amelia: Exactly! What a great theory! It’s impossible to disprove!
Joanna: But it also doesn’t tell us anything. Like, it doesn’t make any interesting predictions, ‘cause literally anything that happens is compatible with the theory.
Amelia: Right. Thanks for being such a good sport, Joanna, I know it’s super annoying to be asked to disprove an unfalsifiable theory. The whole point of this schtick is to illustrate why unfalsifiable theories are unscientific. They don’t make any interesting predictions, and as a result you can’t test them. And since you cannot test them, you cannot get any real evidence for them.
Joanna: This idea—that good scientific theories need to be falsifiable—was endorsed by the philosopher Karl Popper in the 1930s. According to Popper, real science involves making risky predictions—predictions that could, in principle, be shown to be false.
But let’s be fair to psychology researchers here. Your sleepy invisible goblin theory is a much sillier idea than the idea that autism is a theory of mind deficit. And in fact, there’s a big difference between them: the idea that autism is a theory of mind deficit is falsifiable. Like, it makes predictions that could turn out to be false. For example, it predicts that autistic people will perform poorly on theory of mind tests.
Amelia: Yes, that’s true. And remember, that’s exactly what happened! Researchers asked autistic people and non-autistic people to take theory of mind tests, and they found that lots of autistic people performed well on those tests. So, theory falsified, right?
Joanna: But then something weird happened: instead of being like, “Whoops, I guess our theory was wrong, better go back to the drawing board!” researchers continued to think that autism was a theory of mind deficit, and they tried to come up with new theory of mind tests that autistic people would be bad at.
Amelia: This is why I said earlier that it seems like the theory of mind deficit view of autism became an unfalsifiable theory over time. It started off as a falsifiable theory, but then every time it was falsified researchers would just try to come up with a new way of measuring theory of mind.
Joanna: So maybe this is more like the flat Earth conspiracy theory. The idea that the Earth is a flat disc is falsifiable—it makes all sorts of predictions that turn out to be false. But for every piece of evidence against the theory, flat-Earthers have some complicated way of “explaining away” that counterevidence. This is how a falsifiable theory can become unfalsifiable over time. If you just gussy up a theory every time you get evidence against it, you end up with a complicated Frankenstein theory that’s pretty much unfalsifiable.
Introducing Travis, and Lakatos on Degenerating Research Programs
Amelia: That was my sense of what was happening in autism research. Researchers would falsify the idea that autism is a theory of mind deficit, but instead of getting rid of the theory, they would just cook up a new test in order to hang on to the theory. And that’s bad science, right? Like, when you falsify a theory, you should get rid of it, right?
Travis: Of course, the falsification criterion is extremely restrictive.
Amelia: That’s Travis LaCroix. He’s an autistic philosopher of science who I had the pleasure of speaking with earlier this year.
Travis: It’s “La-Kwa” not “La-Croy”, not like the bubbly soda.
Travis: I'm an assistant professor at Dalhousie University, which is located in Halifax in Canada. I primarily do research on the philosophy of artificial intelligence, particularly deep learning, machine learning methods. And more recently, I've been looking into questions in the philosophy of autism and what I call autistic philosophy, which is slightly different.
Amelia: Travis helped me understand that we can’t simply use Karl Popper to explain why this theory of mind research is bad science. Because sometimes, even good science involves hanging on to theories that have been falsified.
Travis: Do we have time for an example? …
So, right, if we think of a sort of example from history, one that's talked about often is that Newtonian mechanics in physics should be falsified by the precession of the perihelion of Mercury.
Travis: … And, you know, the precession of the perihelion of Mercury sounds very jargony, like “what are you talking about?” But the perihelion just refers to the closest point of approach of a planet to the Sun. So the point in its orbit when the planet is closest to the Sun is referred to as the perihelion.
Amelia: This is a super famous example in philosophy of science. All the planets in our solar system orbit around the Sun. But they don’t orbit around the Sun in a perfect circle; sometimes they’re closer to the Sun, sometimes they’re farther away. And the point at which a planet gets closest to the Sun in its orbit is called its “perihelion.”
Travis: And as it happens, the perihelion of planetary orbits precess, which means, right, the precession of the perihelion describes the fact that the perihelion itself, the closest point to the sun in the orbit, is not a fixed point in space, but rather the perihelion itself slowly orbits around the sun also. And so it's gonna be at a different point depending on when it's observed.
Amelia: The perihelion of a planet precesses. When a planet orbits around the sun, the point at which it gets closest to the Sun changes over time.
Back in the early 20th century, scientists already knew this, and they explained these facts using Newtonian Mechanics. Newtonian Mechanics was the dominant theory in physics at the time, because scientists hadn’t yet switched over to Albert Einstein’s theory of relativity. And Newtonian Mechanics did a pretty good job of predicting how planets would orbit around the Sun! Except for Mercury, whose pesky perihelion was just slightly… off.
Travis: And so in this historical example, right, Newtonian mechanics predicts this precession, but in the case of Mercury there's actually a discrepancy in the amount of precession that's predicted. And so Newtonian mechanics does a good job at predicting the precession of other planets, but Mercury is a kind of outlier. And so the discrepancy is not very big. It's, I think, something like 43 arc seconds per century, which is fractions of a degree. But these mistakes accumulate, and so this amounts to basically predicting an extra orbit every 3 million years or something like this. Which is not a lot, but it is not correct. It's clearly not correct. … The thing is that, historically speaking, as a sociological matter of fact, scientists didn't abandon Newtonian mechanics in light of this evidence that should be falsifying.
Amelia: Scientists realized that Newtonian Mechanics was making incorrect predictions about Mercury’s orbit—but they were not ready to give up on Newtonian Mechanics. Instead, they started to look for a hidden planet, called Vulcan, which might be affecting Mercury’s orbit. But was it bad science to hang on to Newtonian Mechanics?
Travis: In retrospect, right, on Popper's criterion, you might think that those scientists were being unscientific. But Imre Lakatos thought that actually the scientists were right not to abandon the theory, despite the fact that there is empirical evidence which seems to falsify it.
Amelia: Of course, scientists did eventually abandon Newtonian Mechanics in favor of Einstein’s theory of relativity. But according to the philosopher Imre Lakatos, it was reasonable for those scientists to continue believing in Newtonian Mechanics for a while longer. According to Lakatos, it’s sometimes reasonable to hang on to a theory, even after it has been falsified.
Joanna: To see why, notice that we never test a single theory in isolation–we always have to make background assumptions, too. For example, imagine that a group of psychologists is studying how people conceal their true emotions in different situations. That study will need to make some background assumptions, like “people have emotions” and “people have subjective experiences that might not be obvious from their behavior.” If you got rid of those background assumptions, the whole experiment would stop making sense.
Amelia: Lakatos called these types of background assumptions “the hard core” of a research program. And he called the theories you’re trying to test “auxiliary hypotheses.”
Travis: And so for Lakatos, a research program is a technical term. It consists of a hard core, which is the sort of main component of the theory or a collection of theories, which is surrounded by auxiliary hypotheses which connect the core of the program to the empirical world via predictions, but they also in some sense insulate or protect the core of the program, thus making the core of the theory effectively irrefutable. So, in the case of the perihelion, the hard core is something like Newtonian mechanics, and the auxiliary hypotheses that are tinkered with might include something like positing the existence of a hitherto unobserved planet called Vulcan, which is affecting the orbit of Mercury and thereby explaining why Newtonian mechanics isn't getting it quite right. And so this is what happened, right, and there was a search for some decades for this hidden planet.
Amelia: According to Lakatos, when your theory makes a prediction that turns out to be false, you shouldn’t throw away your whole research program. Instead, you should tinker with auxiliary hypotheses, while leaving your basic background assumptions intact. So when scientists hung on to Newtonian Mechanics even after it made some bad predictions, they were still doing good science. Newtonian Mechanics was a basic background assumption—and you don’t want to get rid of a basic background assumption unless you have a really, really good reason.
Joanna: But sometimes scientists get really good reasons to throw out their background assumptions, just like scientists eventually threw out Newtonian Mechanics in favor of relativity theory. According to Lakatos, it’s time to start throwing out background assumptions when your research program degenerates. Sometimes a research program degenerates because it stops predicting anything interesting or new. And other times, a research program degenerates because it makes predictions that don’t come true. Either way, it stops making progress.
Amelia: Is this what happened with theory of mind in autism research? Is it a degenerating research program? Travis definitely thinks so. He thinks that the theory of mind deficit view of autism is an auxiliary hypothesis–it’s one of those parts of a research program that scientists should be willing to get rid of when the research program isn’t making new, accurate predictions. But, as we’ve already seen, researchers are often unwilling to get rid of it.
Travis: If we're going from a kind of technical perspective of taking a sort of Lakatosian kind of program here, you really are supposed to be wed to the hard core and be willing to shed the auxiliary hypotheses when they are falsified, or when you don't get the observations that you would expect. There is very little reason to think that theory of mind is a good auxiliary hypothesis; this has been documented extensively, like just absolutely incredible research by Gernsbacher and Yergeau in a paper from 2019…
Amelia: Yes, that Gernsbacher and Yergeau paper, from the previous episode. It really is the go-to summary of all of the empirical failures of the “theory of mind deficit” view of autism. Travis thinks that the theory of mind deficit view of autism is part of a degenerating research program, because that research program is not making new, accurate predictions. Sure, this research made some novel predictions back in the 80s; but it’s not making novel predictions now, in 2023. And as the Gernsbacher and Yergeau paper lays out so clearly, many of the predictions made by the theory of mind deficit view have turned out to be false.
Travis: It's an unbelievably extensive sort of review of the literature. And they point out all of these different empirical failures, right? Of course, many autistic children and adults will pass different tasks that are supposed to measure theory of mind. As well, these various tasks – things like false belief tasks, or second-order false belief tasks, or “Reading the Mind in the Eyes” – all of these different experiments that have been concocted to try to measure theory of mind fail to converge with one another. And so you get different results depending on which task you're talking about. And the lack of convergent validity amongst these different tasks seems to Gernsbacher and Yergeau to undermine the construct validity of the very idea of theory of mind.
Amelia: And it gets worse. When a research program degenerates too much, it goes from being bad science to pseudoscience. Is theory of mind in autism research now pseudoscience?
Travis: It is pseudoscientific. Now, the details of making that argument persuasive need to be worked out, but I do think it's pseudoscientific. And part of this just comes from a kind of intuitive confusion around what even are you talking about, right? When you talk about theory of mind, like, what do you mean? It's just a very bizarre concept. It's the sort of thing where I think people have not paid sufficient attention to the fact that it is kind of in the realm of metaphor. Like, it's not clear that there is a real thing in the world that's called theory of mind. This is just a shorthand that we use to refer to some abilities that humans and maybe some non-human animals seem to have. But you know, Norbert Wiener is quoted as saying, “the price of metaphor is eternal vigilance.” And it strikes me that researchers who invoke the theory of mind metaphor have not sufficiently paid that sort of vigilance. They just invoke it and assume that you know what they're talking about, but it's actually a very, very vague concept. So it's not even clear to me that there is any such thing as theory of mind; we just have this shorthand to refer to this suite of abilities that many people seem to have, but it's not entirely clear what that consists in.
Joe’s Eliminativist Critique of “Theory of Mind”
Amelia: It’s hard to say exactly when a research program goes from being bad science to pseudoscience. But Travis thinks that theory of mind research, especially as it relates to autism, is now in dangerous pseudoscience territory—and his evidence for this is that, when it comes to “theory of mind,” it’s often not clear what we’re even talking about. This is a problem we encountered over and over again while researching this topic; different scientists have totally different ways of conceptualizing and measuring theory of mind. And sometimes they even have incompatible ways of conceptualizing theory of mind. Joe Gough is a neurodivergent philosopher of science who’s about to begin a postdoctoral fellowship at Oxford, and he has written about this problem at length.
Joe: Hi, my name is Joe. I am interested in philosophy of psychiatry and everything kind of tied to it. So everything from philosophy of mind, philosophy of biology, philosophy of medicine, philosophy of science. A lot of what I do, it kind of belongs in general philosophy of science stuff, as applied to these kinds of issues. But yeah, I'm basically very interested in everything. And I'm trying to do what I think is kind of helpful.
Amelia: One thing Joe is an expert on is all of the different ways the concept of “theory of mind” gets used across psychology. It doesn’t just come up in autism research; for example, it’s also a big deal in animal psychology. And here’s where things get really weird. I asked Joe about the connection between theory of mind in autism research and theory of mind in animal psychology.
Joe: The links look mysterious if you focus on the science itself, where in autism research, lots of autistic adults can reason very well about mental states, and some autistic children can reason very well about mental states. The reason this is taken as consistent with the theory of mind deficit theory is that the idea is that autistic people are learning a kind of body of explicit knowledge which they're using to, sometimes you can put it in terms of, like, “gaming” the tests, right? They're learning explicitly, “if this person has seen this, and this person's seen this, then this is how you would expect them to act,” rather than it just being a kind of innate thing that they know from their own case or from an innate theory. So they're criticized basically for relying too much on explicitly represented knowledge, and the idea is that they’re kind of compensating through explicit learning for the impairment of their theory of mind.
Amelia: In autism research, relying on explicit reasoning in order to grasp other people’s perspectives disqualifies you from having “good theory of mind.” Often, when autistic people perform well on theory of mind tests, researchers will conclude that the autistic participants must have used their reasoning skills to “game” the tests.
Joe: Whereas in animals, precisely the opposite is true. You see evidence that animals will do things like infer from gaze direction: chimpanzees will infer from the direction of another chimpanzee's gaze where that chimpanzee might go and what it might know, in some sense. Or it seems like that; that's what it superficially looks like. And the idea is that, well, this doesn't count as theory of mind because they're not explicitly representing things in terms of mental states. They're just inferring directly from gaze that they will go that way, or from some set of observable cues to a behavior. This is called the logical problem in animal mindreading research. And the idea is that they are just relying on a bunch of non-explicit, like, implicit links between observable behaviors and observable features, and don't have this explicitly represented body of knowledge.
Amelia: In animal psychology, we get a totally different understanding of theory of mind. In animal psychology, theory of mind requires explicit reasoning. So, when an animal engages in perspective-taking behavior, researchers will still often insist that the animal does not have theory of mind, because its perspective-taking behavior is too quick and instinctual.
Joe: So to kind of bring those together, on the one hand you have autistic people being claimed to lack a theory of mind or have an impaired theory of mind because they're too reliant on some explicit body of knowledge. And then on the other hand, you have animals being claimed not to have a theory of mind because they are not reliant enough, and in fact, lack this explicit body of knowledge. So it seems like we're talking about, like, really quite different constructs in the two cases.
Amelia: So clearly, “theory of mind” in autism research does not mean the same thing as “theory of mind” in animal research. And yet, these two bodies of research are often linked together, in a way that suggests autistic people are more like animals than non-autistic people.
Joe: And assuming people aren't just, like, naive or obviously mistaken, it seems like there's some kind of explaining to do of why these two things end up so consistently linked. And I think again the natural place to point is prior stigma and institutional incentives. Like, if there's a big call for theories of what makes humans special, and there's a big call for dehumanizing stigmatized conditions, you can hit both points at once if you kind of ignore the differences between these two constructs and treat them as somehow impaired in the same way.
Amelia: These two bodies of research create a link between autistic people and non-human animals, which dehumanizes autistic people. Both autistic people and animals allegedly lack an ability that makes human beings special. In autism research, that ability is denied to autistic people for one reason; and in animal research, that ability is denied to animals for a different, totally incompatible reason. But it’s rare for anyone to dig into the details to figure out that “theory of mind” means completely different things in these two fields. There just isn’t much incentive to do all that digging.
The concept of “theory of mind” pops up in other corners of psychology, too, like developmental psychology and child psychology, and researchers in those fields have their own ways of understanding and measuring theory of mind. Joe told me that some of this research is actually pretty interesting—he thinks we shouldn’t just throw away all of the findings this research has led us to. But he thinks we need to stop using the concept of “theory of mind” when interpreting those findings.
Joe: I'm not sure any theory of mind research is salvageable while still using the construct theory of mind. But at the same time, there's mechanistic findings. There's findings about the roles of some kind of neural networks. There's empirical findings about performances on different tests that it might be worthwhile to try and account for. These kind of like accrued findings might be worth, I mean, they're salvageable in that it's probably not a great idea to entirely abandon everything that's been found. I just don't think it should be accounted for in the same terms or communicated in the same terms.
Amelia: Joe thinks scientists should stop using the term “theory of mind.” That doesn’t mean we have to give up all research that has used the term theory of mind—but any worthwhile research needs to find new, more specific replacement terms. Part of Joe’s reason for thinking this is that the stakes here are high. Here he is discussing theory of mind in autism research:
Joe: I think it's methodologically flawed and I do think it's ethically dangerous. I think that the level to which it's methodologically flawed is frankly not super exceptional for psychology. I think that there are some simplistic operationalizations, some dubious psychometric measures. There's not great signs of anything like construct validity, i.e. ties to the thing that we were theoretically interested in, and coordination between the different measures. But stuff like this is not exactly unheard of in psychology, although it's not great whenever it appears. It's the fact that it's so ethically dangerous that makes it grossly irresponsible, rather than the normal level of "psychology is really hard, we're kind of doing our best," in my view.
Basically I think much higher stakes means a higher evidential bar. It seems like just part of doing science responsibly.
Joe on How We Got to This Awful Place
Amelia: How did we get here? How did we get to the point of researchers talking past each other, in ways that end up dehumanizing autistic people? How did we end up doing bad theory of mind research, which sometimes drifts into pseudoscience?
Joe: The view that I came to is basically that the theory of mind deficit view is a kind of a bad summary of a bunch of empirical findings that’s sort of masquerading as a theory.
Amelia: Joe thinks that the theory of mind deficit view of autism started off as a real scientific theory. But over time, it became shorthand for a bunch of disparate empirical findings. And that shorthand has led to sloppiness. Saying that autism is a theory of mind deficit glosses over the fact that “theory of mind” gets measured in different ways by different researchers; and it glosses over the fact that all those different theory of mind tests get mixed results, which we’re not very good at interpreting. How did we end up with this sloppy summary of all this confusing research?
Joe: I think it becomes a summary in a couple of ways. One of which is that it was very popular and influential at the time for various reasons, especially because of the ways it ties towards general theories of human specialness. But I think as well, frankly, there's a lot of stigma at the basis of it. So I think a lot of the reason it becomes such a popular summary is it says something that gives a sciency sounding name for something that a lot of neurotypical people have thought, which is like that autistic people don't get them, appear to be bad at empathy, whatever it may be. And it gives a kind of validation for that. So I think in terms of playing into pre-existing stigma, in terms of having a certain kind of cool factor which helps get funding, it's in the interests of the scientists, whether they know it or not, to keep using this kind of language to talk about autism.
Yeah, I wish I had a less cynical view, but I don't think I do.
Amelia: What Joe told me reminded me of what Tobi said in the previous episode—the theory of mind deficit view of autism became this untouchable idea in part because it reinforced stigmatizing views of autistic people, all while giving this stigmatizing idea a cool, science-y sounding name. This type of research then became embedded in institutions, which created more incentives for scientists to continue pursuing the theory of mind deficit research program.
Joe: So currently, I think the actual research is just kind of ploughing away, gathering empirical findings, trying to ignore the failure of the theory. And in some of the applications of it in neuroscience, people have found effectively a productive way of asking questions where they're not taking the construct all that seriously, and are just happy ploughing away, ignoring the really large problems with the broader body of research. I think where it's going is that it's very likely this will continue, because of the kind of sociological and institutional incentives, until some other grand unifying theory that attracts a bunch of money comes along. In terms of where I think it should go, I think it should stop. I think that a lot of the research is fine, but it just shouldn't be going under the aegis of theory of mind. I think it's a kind of a zombie as it stands.
Amelia: Thanks to stigma and institutional incentives, the theory of mind deficit view of autism is a zombie—it just won’t die.
Amelia: Next time, on NeuroDiving:
Travis: Well, I think this is part of that kind of productive work that philosophy can do. The sort of standard argument for philosophy is, oh, critical thinking is a good thing and we need more critical thinking and all this sort of stuff. So if we just have our values, I think that this can become problematic when we don't reflect on them. So philosophy, particularly in the realm of discussing things like autism, theory of mind, and so on, gives us a chance to kind of reflect on those values and see like, does this actually stand up to scrutiny?
Amelia: We’ll hear about alternatives to the theory of mind deficit view of autism, and we’ll examine the roles of values in autism science.
This episode was written, hosted, and produced by me, along with Joanna Lawson. In our show notes, you can find links to the papers mentioned in this episode. Many thanks to Travis LaCroix and Joe Gough for speaking with me about theory of mind research and the nature of bad science. Travis recently received an awesome grant from the Social Sciences and Humanities Research Council of Canada called “Philosophy on the Spectrum: The Philosophy of Autism and Autistic Philosophy.” He’s also currently writing a paper about the pseudoscience that underpins the theory of mind deficit view of autism, so definitely keep an eye out for that paper. Joe just started a postdoctoral fellowship at Oxford, where he’ll be looking at legal and medical assessments of decision-making, and how they misfire for neurodivergent and cognitively disabled people. And of course thanks to the Marc Sanders Foundation and the Templeton Foundation for their support of the show.