Join me and Dr. Ken Gillman as we discuss the inherent problems with modern scientific research, and the lack of emphasis on building critical thinking skills endemic to our academic institutions. In Part 2 of our conversation, we delve further into how research has been commandeered by industry, and how our over-reliance on Randomized Controlled Trials, or RCTs, at the expense of all other forms of evidence and research, has led us astray as a scientific community. Ken uses the hilarious example of strangling the 'cockerel' to illuminate the incompleteness of evaluating a cause-and-effect relationship between the rooster crowing and the sun coming up: establishing the mechanism is paramount to connecting the effect to the cause. ENJOY!
If you're passionate about what we do here at Renegade Psych, we're now on Patreon! If you'd like to support our work, you can! Or not... I'll continue putting out content as long as my other jobs pay the bills. Other things you can do: liking, commenting, and sharing our posts also go a long way! https://patreon.com/RenegadePsych.
Thanks for listening to the audio podcast... You should check out our video podcast on YouTube (https://www.youtube.com/channel/UCaZ1bds1MGMM4tSbY7ISqug), where graphics overlay the video to make it all more interactive and educational. For more content, check us out on all social media platforms @RenegadePsych. If you have any comments, questions, or challenges to the information we've presented here, or if you'd like to be a guest on the show, email us at Renegadepsych@gmail.com, and follow the link https://renegade-psych.podcastpage.io/ to our website for source material, transcripts, and additional links for my guests. If you feel passionate about our message and what we're trying to do and you'd like to donate, you can also follow the link in the show notes to our website.
Disclaimer: This podcast is for informational purposes only. The information provided in this podcast and related materials is meant only to educate. This information is not intended as a substitute for professional medical advice. While I am a medical doctor and many of my guests have extensive medical training and experience, nothing stated in this podcast nor materials related to this podcast, including recommended websites, texts, graphics, images, or any other materials, should be treated as a substitute for professional medical or psychological advice, diagnosis, or treatment. All listeners should consult with a medical professional, licensed mental health provider, or other healthcare provider if seeking medical advice, diagnosis, or treatment.
[00:00:00] Science is about causes, mechanisms and effects. If you're not looking at causes and mechanisms and their effects, you aren't doing science. End of story. I summed it up as saying, strangle the bloody cockerel. In other words, the key to establishing cause-effect relationships is to operate on the supposed cause or the supposed mechanism.
[00:00:29] But if you strangle the bloody cockerel, then we'll all be dead, Ken. Okay? Because the sun won't come up. I'll strangle the shaman instead then. Somebody get this guy some help!
[00:00:53] This information is not intended as a substitute for professional medical advice. While I am a medical doctor and many of my guests have extensive medical training and experience, nothing stated in this podcast nor materials related to this podcast, including recommended websites, texts, graphics, images, or any other materials, should be treated as a substitute for professional medical or psychological advice, diagnosis, or treatment. All listeners should consult with a medical professional, licensed mental health provider, or other healthcare provider if seeking medical advice, diagnosis, or treatment. Or, put more simply... If you need help like this guy, call your own doctor. And do you know...
[00:01:23] And do you know... We're going off on another slight tangent here, but it's relevant for people's understanding and knowledge. One of the review papers that I'm most proud of is actually my review paper about neuroleptic malignant syndrome.
[00:01:37] In the early days, when I was first associated with Professor Ian Whyte, everybody was saying that you can't distinguish between neuroleptic malignant syndrome and serotonin syndrome, and they're all on a spectrum, and the famous fellow Fink in America wrote about that, and blah, blah, blah. And I wrote way back, you know, in the 90s, that that was absolute rubbish. They were chalk and cheese. And that was actually the expression I used and got published somewhere.
[00:02:03] And I remember Ian, who was just in the early days then of setting up the fantastic toxicology database that they had at Newcastle, the academic unit at Newcastle, where he was the professor. He latched onto that, and he wrote about it.
[00:02:19] And so I got so fed up of people claiming that they were indistinguishable that I thought, well, I can't really argue authoritatively against this unless I make myself into something of an expert on neuroleptic malignant syndrome. So after I'd done one or two other reviews I wanted to do, I thought, right, let's tackle that.
[00:02:39] And when I started looking at it, I thought, wait a minute, the cause-effect relationship between neuroleptic drugs and this rather amorphous and poorly defined syndrome called neuroleptic malignant syndrome is actually extremely poor. And because of my long, skeptical analytical thinking background, I thought, well, somebody really needs to rethink this.
[00:03:06] And I started looking into how people were applying arguments about cause and effect relationships in medicine. And you know what? I was horrified how little attention that crucial question was getting. Now, I do hope that many of your listeners will be aware of Sir Austin Bradford Hill.
[00:03:31] He was the famous statistician who, with Doll, Sir Richard Doll, Sir Austin and Sir Richard, they did the pioneering work about cancer and cigarettes. Yep. Yep. And clashed swords in a major way with the fellow who I think has done an immeasurably great disservice to medicine. And that is Fisher, the famous statistician. He was knighted as well, I think.
[00:04:00] Aylmer Fisher or something. He had a funny name. Rather unpleasant character, I suspect. He was really more into agricultural research. But as I'm sure you know, he was one of the most famous statisticians of the last century. And his son became a famous statistician as well. And there were two departments of statistics at UCL. One existed because of eugenics.
[00:04:29] Galton, Darwin's cousin. A fascinating polymath scientist of the 1870s, or whenever around then. And he got into eugenics a bit. And when he died, I think right at the beginning of the 20th century, he left a lot of money to University College London to found a chair of eugenics.
[00:04:54] And both Fisher and Pearson were editors of the Journal of Eugenics until they changed its name in the 1930s. I wonder why. Fascinating history. So the chair that Pearson took up was created because they knew that they couldn't put Fisher and Pearson in the same department. Because they were so at loggerheads.
[00:05:25] They hated each other. They were on different floors. And they used to go to the common room for tea in the afternoon at different times. And Fisher's mob would drink Chinese tea and Pearson's would drink Indian tea. They were so at loggerheads that it led to the wonderful quip that did the rounds, which perhaps a lot of people don't know: what do you call a group of statisticians?
[00:05:54] A quarrel. I thought that was excellent. A quarrel of statisticians. They can't agree on anything. So, yes, so Fisher produced all that wretched p-value stuff, which still bedevils psychiatry. And to bring this back to the sort of central theme that is in my mind, at least from all these different interesting things we've been mentioning,
[00:06:22] this lies at the heart, in my view, of the whole question of the epistemology of science and cause-effect relationships and this nonsense about p-values and statistics. Bayesian theory, I hope. Are people, you know, your generation of people taught about Bayesian theory? So, we're taught about it, but I know my statistics course was very brief. I think it was two weeks long.
[00:06:51] And then we don't touch it again. We don't get enough, you know, long-term practice with evaluating data statistically. This is a jolly good book. Bayesian theory is extremely important. Your estimation of whether something's likely to be true or not is almost inevitably based on your prior experience.
[00:07:18] This is something that Fisher could never understand and come to terms with. It's simply a reality. The statisticians who don't like Bayesian theory, who claim that you can't assign prior probabilities to things, as far as I can see, simply do not live in the real world. Because the moment you start doing science, you're essentially following Bayesian thinking, right?
[00:07:44] In medicine, what you decide to study is based on your prior knowledge and what you think the probability is of it being a useful way to occupy your time. If I said to you, Ethan, look, everybody's barking up the wrong tree. I think what you really need to get to grips with is the fact that you need to get people breathing a mixture of air that's got an extra 1% of argon in it every day for five minutes.
[00:08:13] That's the secret to helping your lungs work properly and your brain work properly. And people are just ignoring this. And you think, well, wait a minute. Everything I know about physics and biology and everything else says that that's highly likely to be utter rubbish. Whereas if I came to you and said, why don't you investigate and gave you something a bit more plausible, you'd be a thousand times more likely to agree it was worth investigating.
[00:08:38] So these statisticians who claim that, you know, you can't use Bayesian prior probabilities, I think they're just not living in the real world, because that's actually what they're doing half the time. The reason they're doing what they are, when they are, is essentially Bayesian prior probability, because they think other things aren't worth doing. I've come to regard this as an absolutely central thing in understanding and evaluating modern research, or any sort of research, but especially drug research.
[00:09:07] This whole question of probability and epistemology. P-values are of very little meaning because they don't take notice of prior probability. A good way of helping people to understand this is it's almost the same as the whole business of looking at the probability of a test giving a useful result.
[00:09:34] So I guess mammograms would probably be an excellent example of this. If the incidence or prevalence or whatever it is of a breast cancer in a certain age group is one in a thousand, and if your test has a certain degree of false positives and false negatives, what's the chance of a positive test altering your estimation of whether that person actually does have cancer or not?
[00:09:59] And when you look at those figures, you realize that almost all of these tests are of much lesser usefulness than most people seem to think. Because if the initial chance is one in a thousand and your test has a 98% thingamajig and it's positive, there's still only a one in a hundred chance that you've actually got breast cancer. Right? Now, that's based on prior probabilities.
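Ken's base-rate argument can be sketched numerically. The "98% thingamajig" is ambiguous in speech, so the figures below are assumptions chosen to reproduce the roughly one-in-a-hundred posterior he mentions: a 1-in-1000 prevalence, 98% sensitivity, and a 10% false-positive rate.

```python
def posterior_positive(prevalence, sensitivity, false_positive_rate):
    """P(disease | positive test), by Bayes' theorem."""
    true_pos = sensitivity * prevalence                    # diseased AND positive
    false_pos = false_positive_rate * (1 - prevalence)     # healthy AND positive
    return true_pos / (true_pos + false_pos)

# Assumed figures, illustrative only: 1-in-1000 prevalence,
# 98% sensitivity, 10% false positives.
p = posterior_positive(prevalence=0.001, sensitivity=0.98,
                       false_positive_rate=0.10)
print(f"P(cancer | positive) = {p:.3f}")  # about 0.010, i.e. roughly 1 in 100
```

With a 2% false-positive rate instead, the posterior is still only about 4.7%, which is the broader point: the prior (prevalence) dominates, and no single test accuracy figure can overcome it.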
[00:10:24] You can't make that estimation of what's the real chance of you having that condition as a result of that test, unless you've got an estimate of the prior probabilities. Now, you can't estimate the prior probabilities accurately, unless you have a whole lot of background knowledge of, you know, populations and statistics and epidemiology and all those sorts of things. Because what population do you mean?
[00:10:50] Do you mean the white members of your population or members of the population who happen to be living in America, no matter whether they arrived yesterday or a thousand years ago, or what color they are or what language they speak? You know, how on earth are you going to define the population? So there are a whole lot of prior probabilities that go into that final estimate. And p-values are of absolutely no help at all there. Absolutely no help at all. Well, not absolutely no help, but no useful help.
[00:11:19] So the whole question of what's the cause-effect relationship between the things that you're looking at and the outcome you're interested in is absolutely key. And when it comes to randomised controlled trials, that becomes such a central issue that, in my submission, a lot of randomised controlled trials, or most randomised controlled trials, are simply of absolutely minimal value.
[00:11:47] And without going into the logic and the epidemiology in detail, the proof of the pudding is in the eating. So I would say to somebody, well, look, find me a randomised controlled trial that substantially altered the way you practised. Well, I guess it would be reasonable to suggest to people that we look at the Cipriani 21 antidepressant study that was published in The Lancet.
[00:12:15] But they picked what they considered all of the best trials over quite a long period of time. Although there were one or two things that were interesting because they were missing from that. But anyway, that aside, somebody has to decide which studies go in and which go out. And no matter how involved all the criteria look, they're still relatively subjective. But look at the end result of that trial.
[00:12:39] It doesn't give you any ability at all, I would submit, to make any useful decision in your clinical practice. Do you think people would disagree with me on that question? No, absolutely not. I mean, I agree with you 100%. Well, just look at the suffering, never mind the cost.
[00:13:02] I mean, the Declaration of Helsinki, or whatever they call it, says that bad research, ipso facto, cannot be ethical. Well, there's a shedload of bad research involved in that, done on hundreds of thousands of people: by their own criteria, most of the studies they looked at couldn't be included because they weren't good enough. So that's a reasonable cutoff to say, well, that's bad research then. So it wasn't ethical.
[00:13:31] And even the research they thought was good research has led to no useful outcome. So we've subjected hundreds of thousands of people, never mind the billions of dollars it's cost. I tried to estimate it somewhere a while ago. But it is mind-boggling. You know, literally hundreds of thousands of trials over the last 50 years. And what's the result? Almost zilch.
[00:13:57] Everybody says, well, it seems to me that the majority view is that randomised controlled trials can produce evidence of a cause-effect relationship. Now, I've said this before, and let me be explicit in terms of explaining why I think that's absolutely central to the whole thing. Science is about causes, mechanisms and effects.
[00:14:22] If you're not looking at causes and mechanisms and their effects, you aren't doing science. End of story. It really is that simple. How many randomised controlled trials contribute anything whatsoever to revealing the cause and the mechanism of the end effect of the drug that they're studying? I would submit to you that virtually none of them do.
[00:14:52] If you can produce me a trial that does produce reasonable prima facie evidence of a cause-effect relationship between the long-term outcome of depression and an antidepressant drug, and you say, well, a randomised controlled trial proved that, my answer to that is very simple: no, it didn't. The trial may have established a cause-effect relationship.
[00:15:18] Let's do something simple like antibiotics and penicillin. But the fact that it was clear that they got better with penicillin was nothing to do with randomisation or controls. If a so-called randomised controlled trial produces a clear enough result, the randomisations and the controls are largely irrelevant. And what everybody's forgotten is that's exactly what Sir Austin Bradford Hill said.
[00:15:48] When the results are clear, statistics are irrelevant, essentially. I can't remember his original words. And lots of other people have said that. In something I've written about RCTs, I look at the history of all the different famous people who've made comments like this. And repeatedly over the last three decades or more, lots of famous people have made statements like that. People who are the chair of the FDA and the committees in England that do all these things. They've all said that kind of thing.
[00:16:17] And yet the juggernaut of RCTs rumbles along unaffected by all that, as if it was the only way of investigating science. And I think that's been a catastrophic red herring in terms of misdirecting research funds and stifling the development of ideas about cause-effect relationships, smothering proper investigational science.
[00:16:45] People have become so obsessed by RCTs. Not only does the 21-antidepressant Cipriani study not tell you anything useful, in actual fact, it's counterproductive in many ways. Because, of course, all the guidelines result from RCTs. I don't think you'll find anybody who'll argue that that's not essentially true. You know, all level one evidence has to be RCTs.
[00:17:12] So anything that hasn't got a significant number of RCTs in the relatively recent past simply doesn't get into the guidelines. That's why MAOIs aren't in the guidelines. We're getting to MAOIs eventually. Right, right. I wanted to point out with RCTs that probably one of the reasons why they are still considered the gold standard is that they are so ripe for manipulation. It's so easy to make the data look like you want it to look.
[00:17:40] I know you've talked about mirtazapine and, you know, if you can make somebody sleepy enough and eat more, then you can move the needle on the depression scale enough to show a benefit. But does that really benefit the person? I've got an example that drives me crazy: these new monoclonal antibodies for Alzheimer's, aducanumab and lecanemab, and the EMA, you know. I've heard of them.
[00:18:10] Exactly. The EMA refused to grant marketing authorisation for these drugs. And my favorite example here is the use of relative risk versus absolute risk. So you see a commercial because, of course, in the United States we've approved it, even though it's $50,000 a year and it has a 20% chance of your brain swelling and bleeding, and all the problems that may come from that, and the deaths in the clinical trials.
[00:19:11] So you take the difference between those two and you get a 37% number. But if you look at the absolute risk, you're talking about a reduction of 2.5 percentage points, at the expense of that negative side effect existing in almost one out of five patients.
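The relative-versus-absolute distinction is easy to show with made-up numbers. These figures are illustrative, not the actual trial data: they're chosen so a 2.5-percentage-point absolute difference yields the 37% relative figure mentioned above.

```python
def risk_summary(control_rate, treated_rate):
    """Return (absolute risk reduction, relative risk reduction)."""
    absolute_reduction = control_rate - treated_rate
    relative_reduction = absolute_reduction / control_rate
    return absolute_reduction, relative_reduction

# Assume 6.7% of controls decline past some threshold vs 4.2% on the drug.
arr, rrr = risk_summary(control_rate=0.067, treated_rate=0.042)
print(f"absolute risk reduction: {arr:.1%}")   # 2.5%
print(f"relative risk reduction: {rrr:.0%}")   # 37%
```

The same 2.5-point difference produces a bigger "relative" headline the rarer the outcome is, which is exactly why marketing favours the relative number.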
[00:19:35] Look, I'd make an even more fundamental criticism of that kind of thing, which is that it's clearly an absurd overestimation of the accuracy of the data that you're looking at. You know, a 37% risk. Are you sure it wasn't 36%? I mean, it's farcical.
[00:19:58] And there's an epidemic of quoting spurious, false degrees of accuracy for things, which, of course, gives it the verisimilitude of real science. You know, 37.5%. Oh, gosh, that must be science. Yes. If you said, oh, about a third, well, that's not very scientific. But it's just as scientific as 37.5%. I mean, people are so easily conned, aren't they? It's just absurd.
[00:20:28] There's a 78% chance I'm slightly more intoxicated now than I was when we started to talk. Oh, dear. Oh, dear. Yes. So, randomized controlled trials and cause-effect relationships. I think it's a really important and neglected area of science. For people who like to read something that's a little bit new and different, Judea Pearl.
[00:20:55] He's the guy who got the Turing Award, which is the equivalent of the Nobel Prize, for his work on causation in science. 2011, I think it was, a little while ago now. He's still ticking along. He's done tremendous work on the whole question of causation in science.
[00:21:15] As an illustration of how central and how simple that essentially is, there's the old trope, post hoc ergo propter hoc, the famed logical fallacy. After that, therefore, because of that.
[00:21:32] And I think going back quite a long time, some famous philosopher or luminary illustrated it with the error of reasoning that says that the cock crows just before dawn and that causes the sun to come up. And that's supposed to be the classical fallacy of post hoc ergo propter hoc. But, of course, post hoc ergo propter hoc is not a logical fallacy if you understand Bayesian reasoning and cause and effect relationships.
[00:22:02] So, first of all, if you are a Bayesian, you say, well, what's our prior probability of that being true? I mean, even if you were a Stone Age man banging rocks together, you would have had the ability to reason that there might not be a direct connection between those things because for 101 different reasons, you know. But our minds try to make that connection, right?
[00:22:27] If I do a rain dance and then it starts to rain, my mind will say, well, I think that worked. Maybe you should try that again. But you have to have that conscious ability to critically think and override that impulse of the mind that wants to order information, that wants to feel confident and secure that what I have done has led to the result. But as we're saying, that's not that simple.
[00:22:55] So that's why I want to backtrack and explain why Judea Pearl's thinking is so simple and so relevant to that analogy. And that is, I summed it up as saying, strangle the bloody cockerel, right? In other words, what Pearl says is that the key to establishing cause-effect relationships is to operate on the supposed cause or the supposed mechanism, right?
[00:23:23] But if you strangle the bloody cockerel, then we'll all be dead, Ken. You can't do that, okay? Because the sun won't come up. I'll strangle the shaman instead then. There, we're back to religion again. Yeah, yeah. But that's this. Yes, that's another whole chapter of fascinating thinking there.
[00:23:46] But yes, so the point is that to do good science, you have to have a pre-existing theory, which is why Bayesianism is unavoidable. And in order to advance, you have to do operations on the supposed causal mechanism, the supposed causal factor and the mechanism which is bringing about the result that you think you observe. And if you're not doing that, you're not doing science.
[00:24:16] So my submission is that a great majority of RCTs simply aren't science. And what's terribly destructive about them, and you touched on this when you were speaking a moment ago. If you look at the rating scales that we're using, whether it's for dementia or for depression, they're the same as they were when I first started 50 years ago. The Hamilton rating scale has never been a good scale for depression, and yet it continues to be used.
[00:24:44] I know people are trying to use other scales, but isn't the Hamilton rating scale still more or less the most widely used scale? The point I'm making is that the obsession with RCTs and the fact they're all commercially sponsored and there's no incentive for them to look into improving the methods of measurement and assessment means that the science of measuring things has more or less stood still for five decades.
[00:25:11] We should have been doing much, much more research into how do we measure things to do with the illness. I don't think I've missed a world of research that's been done without me knowing. It's remarkably sparse. It's so frustrating, too. I mean, I'm not anything special. I'm not this totally altruistic being or anything. But the reason that I wanted to pursue science was to further my understanding and knowledge.
[00:25:40] And then you get to this stage in a career when you're through all of the training and you're out into clinical practice and you've realized that the system is not predicated on advancing our understanding of disease and treatment. It's a very tough pill for especially a young practitioner to swallow. I've got some advice for especially younger practitioners. You said clinical practice.
[00:26:07] Now, I think to my mind, one of the most negative, even disastrous aspects of this business we've been talking about, about randomized control trials and how badly they assess improvement, is that what has gone hand in hand with that is an essential devaluation of the value of clinical judgment. Right? Right?
[00:26:33] So we're now told that those clinical observations we make and the clinical conclusions that we draw, that's not valid scientific evidence. You have to have a randomized controlled trial. Right? Well, once you've understood Pearl and the do-operator and causes and mechanisms and effects, then you can see that that's clearly utter rubbish. I suppose you could take an example like ECT as being potentially a model of this.
[00:27:01] And that is that the mechanism is the induction of an epileptic fit, sorry, not an electric discharge, an epileptic fit. There are many experiments that show that the only thing that works is if people actually have a fit. You know, if you give a shock that doesn't cause a fit, they don't get better. So you can build up a whole science of cause and effect relationships.
[00:27:24] And what's even more important, and this has been almost universally ignored in antidepressant drug trial and other similar research, is the time relationship between those things.
[00:27:37] Because when I looked at the cause-effect relationship in the neuroleptic malignant syndrome review, what struck me immediately was that there was no time relationship between the initiation or the increase in dose of the drug and the onset of symptoms. Well, if there's no time relationship between the administration of a drug and the effect that you're looking at, you immediately, of course, have to doubt that there's a valid mechanism bringing about that effect.
[00:28:07] If you inject an antihypertensive drug through an IV line, then you know how it works. You know which receptors it affects. You know how long that takes to take effect. You know from animal experiments how long it takes the blood vessels to dilate or whatever you're talking about. And you can say, well, look, this is going to work in three to five minutes, right? So you give a certain dose. Five minutes later, nothing happens. You give twice that dose. Nothing happens. You get a bigger dose. Bang, it works. And so you plot your dose-effect relationships and blah, blah, blah.
[00:28:35] But the fact of the matter is you know that there's a distinct time relationship between the stimulus and the outcome. Well, if all of a sudden the change in blood pressure occurred after one minute, after five minutes, and then the next day, you'd immediately say, well, hang on a minute. There can't be a cause-effect relationship here. You can't explain why. In some people it works in one minute and other people it works in one day.
[00:28:59] Or if you can, it means you completely reassess the mechanism by which it's working and you come to a new understanding of what's going on. But the point is all of those things are subject to repeated experiments and investigating what's happening and da-da-da-da-da-da. Yep. Yeah. And we're not a homogenous population. We just aren't. Even, you know, you can create the most homogenous group. Of course, that's true. But nevertheless, there are certain fundamental biological processes that are pretty homogenous. Right, right.
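The timing argument above can be caricatured in a few lines. This is a toy plausibility check invented for illustration (the spread threshold is arbitrary, not a real pharmacological criterion): if onset latencies after dosing cluster tightly, a single mechanism is plausible; if they span minutes to days, the supposed cause-effect relationship deserves doubt.

```python
def latency_consistent(onset_minutes, max_spread_ratio=10):
    """True if the slowest observed onset is within max_spread_ratio
    of the fastest, i.e. latencies are roughly consistent."""
    return max(onset_minutes) / min(onset_minutes) <= max_spread_ratio

# An IV antihypertensive acting in 3-6 minutes: consistent with one mechanism.
print(latency_consistent([3, 4, 5, 6]))    # True
# Onsets of 1 minute, 5 minutes, and a day (1440 min): reassess the mechanism.
print(latency_consistent([1, 5, 1440]))    # False
```

The point is not the threshold itself but that timing data are checkable at all: the dose-escalation experiment described above generates exactly the kind of latency observations this sketch consumes.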
[00:29:30] So those factors to do with the intervention and the timing, they're a very important part of establishing the cause-effect relationship. And that's one thing that's never been done with antidepressant drug trials. Disclaimer. This podcast is for informational purposes only. The information provided in this podcast and related materials are meant only to educate. This information is not intended as a substitute for professional medical advice. While I am a medical doctor and many of my guests have extensive medical training and experience,
[00:29:56] nothing stated in this podcast nor materials related to this podcast, including recommended websites, texts, graphics, images, or any other materials, should be treated as a substitute for professional medical or psychological advice, diagnosis, or treatment. All listeners should consult with a medical professional, licensed mental health provider, or other healthcare provider if seeking medical advice, diagnosis, or treatment. Or, put more simply... If you need help like this guy, call your own doctor. Thanks again for watching and/or listening.
[00:30:20] If you're passionate about the subjects that I discuss on the channel, do me a favor and like, comment, subscribe. Do whatever you can to make your voice heard, and to make it known that these are problems that must be addressed in our society. If you have any questions, comments, or concerns, I want to hear them. Feel free to reach out on social media or email us at renegadepsych at gmail.com.
[00:30:51] And if you'd like to be a guest of the show, or you have a connection to somebody that you think would be a good guest, let us know. Thanks again for listening.

