23.3 Drug Advertising: Ways Clinical Trials are Manipulated with Michael Shuman, PharmD
Renegade Psych · January 07, 2025 · 41:59 · 38.43 MB


Join me and Dr. Michael Shuman, PharmD, BCPP, as we discuss the negative impact of Direct-to-Consumer Advertising (DTCA) on the US healthcare system. This is a recurring series in which Dr. Shuman and I talk through examples of the harm DTCA has done in America over the last 40+ years. We're living and working in a system rife with misinformation and poor-quality research, and we want to make everyone a little more aware of just how many examples exist in US healthcare history of poorly designed and poorly conducted drug trials and outright manipulation of data, leading to tragedies like Vioxx and to eventual recalls of drugs once touted as 'state of the art' and a 'technological advance.' Hopefully, we can help bring about systemic change that improves how we measure the safety and efficacy profile of each new drug.

In this third episode, we talk about the many ways clinical trials can be manipulated by those who design them, emphasizing the need for regulatory agencies that actually do their job and remain independent of those they regulate. We need more genuine peer review in medicine; if a researcher can convince a competing researcher that his theory or his work is valid, we would have far more certainty that the medications gaining FDA approval are legitimately safe and effective. From ghostwriters to surrogate markers; using rating scales to show 'statistical significance' when clinical relevance is unproven; using enriched study designs; removing difficult data points and patients due to 'concurrent illnesses'; inclusion/exclusion criteria that make the studied population nothing like the intended treatment population; not publishing negative trials; and major journal editors being paid by the pharmaceutical companies themselves, the problems are seemingly endless. We also discuss the potential pitfalls of AI being used as a tool for propaganda, as well as the importance of journal clubs and of all of us reading these studies with a highly critical eye. Hope you enjoy. Michael will be a recurring guest on this DTCA series over the next several months.

Thanks for listening to the audio podcast. Check out our video podcast on YouTube (https://www.youtube.com/channel/UCaZ1bds1MGMM4tSbY7ISqug), where graphics overlay the video to make it more interactive and educational. For more content, find us on all social media platforms @RenegadePsych. If you have comments, questions, or challenges to the information we've presented here, or if you'd like to be a guest on the show, email us at Renegadepsych@gmail.com, and follow the link https://renegade-psych.podcastpage.io/ to our website for source material, transcripts, and additional links for my guests. If you feel passionate about our message and what we're trying to do, and you'd like to donate, you can also follow the link in the show notes to our website.

Disclaimer: This podcast is for informational purposes only. The information provided in this podcast and related materials is meant only to educate and is not intended as a substitute for professional medical advice. While I am a medical doctor and many of my guests have extensive medical training and experience, nothing stated in this podcast or its related materials, including recommended websites, texts, graphics, images, or any other materials, should be treated as a substitute for professional medical or psychological advice, diagnosis, or treatment. All listeners should consult a medical professional, licensed mental health provider, or other healthcare provider when seeking medical advice, diagnosis, or treatment.

[00:00:00] So somebody wrote a paper, and they said, when you're in a field of battle, you really don't have a lot of medications on your formulary. What we need is one medication that can cover everything. And they said this medicine, Seroquel, helps with insomnia, helps with anxiety, it helps with PTSD, it helps with bipolar, it helps with depression. And I believe they cited the package insert, which said none of that. But out of that paper, that was kind of my patient zero when I was looking through how it became so marketed off-label.

[00:00:31] Somebody get this guy some help!

[00:01:07] Or put more simply: if you need help like this guy, call your own doctor. You know, I think one of the first issues to discuss in how some of these trials are designed is how patients are selected. I know you've got some strong feelings and thoughts about patient selection and the generalizability of these trials to the actual patient population that we're trying to treat.

[00:01:32] Yeah, it's something, again, you know, when you go to school, you talk about journal club and really hammering these studies, breaking down what they say and what they do. And one of the things I was told was, you know, if you're going to pick on a journal, don't pick on one of the high-end ones, because those are perfect. I've learned to feel kind of the opposite. I tend to trust some of the lower-level journals more, because there's not as much industry backing or the same pressure that we have to publish this and show it. Unfortunately, even in some of these higher-level ones, there is still a lot of that funding.

[00:02:02] The funding stream is one thing you look at. And if there's a heavy financial investment in it, that can sometimes, again, make it easy to skew results, or who published the study, who was listed on the paper. But the big one is inclusion/exclusion criteria. So if you're doing a study... Before you go there, though, because you brought it up: this is something the general public, I don't think, realizes happens with almost every major medical or psychiatric journal.

They are funded by the same companies that they're supposed to be critical of, right? And I mean critical in the sense that you are critiquing their research, you are nitpicking every little detail you can. That's how it should be.

[00:02:47] Correct. And when you have an editor that is making the final decision on what papers get published and what papers don't get published, and that editor is making a couple of million dollars a year, just to edit and run that journal and select papers, they know that they are not going to keep their lucrative position as the editor, if they are publishing certain studies and not publishing other studies.

[00:03:15] Yeah, if you look at a study, you see, you know, what is the position of the first author? And a lot of times you'll see, okay, this is a pretty big name. Sometimes the phrase used is 'key opinion leader.' And maybe the last person, in that coveted anchor position, is another name that is well published in psychiatry or whatever the field is. But in between, you'll find all sorts of stuff. A lot of times you'll find that those individuals are paid employees of the company. And so, again, there is going to be a vested interest: you bring in an expert because that lends credibility.

[00:03:45] And it's not just company employees. If the expert themselves is also paid by the same company, it's hard to assume that they're objective. And there have even been implications of ghostwriting in some of the studies, where they've gone back and questioned the authors and said, why did you put your name on this? And some of them said, well, you know, I didn't write that, or it's not like I reviewed it. I was just told to kind of give my rubber stamp and things like that.

[00:04:11] So sometimes even the authorship piece, as far as what's actually in the paper and who takes ownership over what's contained in it, even that can be up in the air. So what is ghostwriting? Ghostwriting is when a paper goes out under my name, so the idea is that I contributed to it, whether to the research design, to carrying out the research, to writing the body of the paper, or at least to critiquing it. The idea is that I had some role in it, versus someone else, part of a company, doing the writing.

[00:04:40] And maybe they had somebody that did the writing and then I kind of was there more so for my name rather than my actual involvement. Yeah. So when you have a primary or principal investigator on a study that didn't actually write the study at all, didn't really carry out any of the clinical trials at the sites specified.

[00:05:03] It's very dishonest and manipulative, because you're putting a name on there that is recognized in the field as an expert, and you're just using that person's name. There are entire companies that have been developed just to write medical literature. These are not scientists. They are not researchers. They are not clinicians. They are writers. They are authors.

[00:05:29] And they are good at spinning a tale that fits the byline of the company. Yeah. Without going into a whole other realm, this is where the question becomes: what is going to happen with the AI revolution? What is going to happen to ghostwriting? We don't even need people to do it anymore. We can have entire algorithms out there producing that same material. Who takes ownership of that?

[00:05:52] And how much is AI going to be able to parse out what is good data versus what is bad data? If it's as powerful as everybody is saying that it is, then maybe it will give us some legitimate data to look at.

[00:06:07] But there is the potential for manipulation of AI. Because when you and I think about AI, and I say 'you and I' meaning the general public, artificial intelligence seems infallible to us. So if there's the potential for somebody to use AI to promote a message, it's going to happen. Right?

[00:06:37] It is absolutely going to happen. Look, AI said that SSRIs are the best treatment for depression. I can see right now you're getting ready to do the podcast on it. It's hard not to hear it like, 'I can't let you do that, Dr. Short.' Yeah, something like that. And depending on the parameters you provide, you're going to get the output you desire. And if those parameters are altered to provide a certain output, again, how do we fact-check that? Who watches the watchmen, to get another nerdy sci-fi reference in within the last five minutes? Yeah, absolutely.

[00:07:06] And so, patient selection-wise, I see two overarching problems, and you referenced one earlier. One is, let's say you want to prove that your drug works. Well, I'm going to recruit a patient population, let's say it's for depression, that is very severely depressed.

[00:07:27] Now, I want them to be young and medically healthy, but I want them to be extremely, extremely depressed, so that their baseline depression ratings are very, very high. And the same with the placebo group, right? That part is a little harder to manipulate. I won't say it can't be done, but you do have matching in terms of age, medical comorbidities, and how severe the baseline depression is.

[00:07:55] But the higher they score on a scale that says they're depressed, if it's a 30-point scale and they score 29 out of 30 on average, there's more room for them to drop. And it's also less likely that a placebo is going to drop that score to a degree that is going to promote a significant difference between placebo and the drug. And the reality is that the natural course of depression is going to get better with time.

[00:08:25] And the placebo groups almost always improve, probably just based on having that therapeutic contact; even, you know, any type of interaction with somebody else will lead to their score being lower over time. But if they recruit sicker patients at the beginning, then they're going to have more robust data at the end.

[00:08:45] Vice versa, you may want to show, like with the new Alzheimer's drugs, which you know carry significant risk, that they prevent progression. So you want to find the people who are the least sick. With Alzheimer's, people regularly live with it for 8 to 15 years.

[00:09:10] And so if you can find a patient population diagnosed in those first couple years of the illness, they're naturally not going to show significant progression anyway, even with no real treatment. Yeah. And what I was getting at, the sad thing in all this, is that there's a high placebo response, which means that sometimes all people need to do well is to feel like somebody cares. You go into the study, there are people talking to you, there are people around who want you to succeed in life.

[00:09:38] And that's what they're doing in the study: they're meeting with you, talking to you. The sad thing is it's almost an indictment of our basic need to have people who give a crap about us. So when I think about the high placebo responses, that need to feel valued is what's missing sometimes in the treatment plans. Oh, absolutely. I mean, it's one of the most effective treatments out there, you know, behind diet and exercise changes: pro-social environmental change.

[00:10:07] And actually, speaking of social, that gets back to inclusion criteria. Not for depression, but there was a very well-known negative VA prazosin study that came out, I believe around 2017 or 2018, that found no difference with prazosin in terms of PTSD symptoms. One of the issues with it was the inclusion criteria. They did not allow anyone in the study who had, and this is the term they used, psychosocial instability.

[00:10:33] So essentially, if you had housing issues, if things in your life weren't going well, you couldn't be in the study. And so they excluded a lot of people whose PTSD symptoms were so bad that it affected their lives. Who'd have thought? I know, right? And some people ended up getting a chance to interview and ask questions of those who designed the study. They were very concerned about suicide risk, and that was their impetus for doing it. But the unforeseen consequence is that they excluded the people with the most severe PTSD symptoms.

[00:11:03] And so, going back to what you said, because the symptoms weren't that severe compared to placebo, the study wasn't able to show a response. All the people who needed it, because their lives were so out of sorts, that's exactly why they needed prazosin, weren't in the study, by the nature of the inclusion and exclusion criteria. And how long has prazosin been around? It's been used for many, many years for all sorts of things.

[00:11:26] Really, you know, since the late 90s or early 2000s, I believe, when Dr. Raskind and some of the individuals at the West Coast VAs really started using it. But it's even older than that as a treatment for blood pressure, which it kind of sucks at. Right. It was a bad blood pressure medicine, and then it became a decent BPH medicine. And then it was like, well, it's the strangest thing, it helps with my nightmares. Oh, go on. Yeah. So it's been around for many years. But yeah, how you design the study can have huge consequences. So who designed that study is the question.

[00:11:56] And that's where I believe... Were they designing it in a way where they didn't want prazosin approved, because it's another drug that doesn't make any money? Yeah, I think this one was VA; it was a federally funded study. I think in this case they had some real concern about bad consequences: you get into the study, you don't do well, and you hurt yourself. But yeah, they lost the ability to show benefit in that study. And pretty much moving forward, I think people are like, we're just going to pretend that study doesn't exist. Yeah.

[00:12:24] And this is skeptical me saying this: it also kind of paves the way for more use of something like Seroquel. Oh, yeah. Which was the number two or three drug at one point in the VA setting. Yeah, for a while it was within the top three of all the drugs the VA was spending the most money on. All drugs, not just psych drugs. Right. All drugs, and it was in the top three. Yeah. What was it behind? I don't remember which ones it was behind. It was behind like Lipitor, wasn't it?

[00:12:53] Like a cholesterol drug. You think about all the drugs that were commonly used at the time, things like Lipitor and Crestor, and there were only one or two other drugs in front of it. A lot of that was the unregulated off-label use. Somebody wrote a paper, and they said, when you're in a field of battle, you really don't have a lot of medications on your formulary. What we need is one medication that can cover everything. And they said this medicine, Seroquel, helps with insomnia. It helps with anxiety. It helps with PTSD. It helps with bipolar. It helps with depression.

[00:13:22] So because of that, and I believe they cited the package insert, which said none of that, that paper was kind of my patient zero when I was looking into how it became so marketed off-label. The furthest back I could trace it was to this paper written many, many years ago. But out of that, it gets used for everything I mentioned and everything under the sun, because it's an antihistamine, it has antipsychotic properties, and it's got those antidepressant properties as well. And so you can think about it:

[00:13:53] That covers everything. And if you're thinking drug targets, yeah, you can hit anxiety, depression, PTSD, OCD; you can get a lot out of there. And, of course, it then turns out the company might have been doing a little bit of extracurricular marketing of the drug for some of those off-label uses. At the very least, they certainly didn't discourage its use for things like insomnia. Yeah, absolutely. And we'll certainly go more into specific examples.

[00:14:20] Seroquel's got a couple of different avenues that it's taken that have been maybe a little bit – I don't know if I want to say immoral, but, again, manipulative. I'm going to have to go back to my thesaurus and find more synonyms than just manipulative. But other problems that exist – one of the banes of my existence, you and I have talked about this a dozen different times, is the use of these rating scales.

[00:14:48] And the rating scales themselves are not bad; I'm not going to say that. They do help you track some form of progress with very subjective complaints like depression and anxiety, where we rely a lot on patient report. But one of the biggest problems with the rating scales is this: you have a rating scale that looks at nine dimensions of depression.

[00:15:15] Whether it's sleep, appetite, your interest in your normal hobbies, the guilt you feel on a daily basis, the amount of energy and motivation you have, or how suicidal you are, every single one of those nine categories counts exactly the same toward what we consider to be depression.

[00:15:42] Now, I would argue that there's one of those nine categories that is so much more damn important than the other eight. And that is how suicidal somebody is. If somebody dies by suicide, there is no helping that person. The point I'm making here is that how can we give the same credit to what somebody's appetite is on or off of a drug as we do to how suicidal they are?

[00:16:09] Which, if you're looking at a rating scale and you're trying to show out of 30 points a statistically significant reduction, if you make a medicine that either stimulates or reduces appetite, depending on which extreme the patient is in, because they can be either one, right? Yep. Very helpful. And that also makes them sleepy. Bada boom.

[00:16:34] You've just potentially created a five- or six-point swing on that 30-point scale, even though the patient could be even more suicidal than they were before, or at a higher risk of suicide than they were before. So this leads to a clinical trial setting where the makers know that if they impact enough of the symptoms of depression... and we could do a whole other episode on what the actual symptoms of biologic depression are versus neurotic depression.
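The five- or six-point swing described here can be sketched with a little arithmetic. This is a hypothetical 9-item scale with made-up item names and scores, not any real instrument:

```python
# Hypothetical 9-item depression scale, each item scored 0-3.
# All items count equally toward the total -- that equal weighting is the
# problem being described. Item names and scores are illustrative only.
baseline = {
    "sleep": 3, "appetite": 3, "interest": 3,
    "guilt": 3, "energy": 3, "concentration": 3,
    "mood": 3, "psychomotor": 3, "suicidality": 3,
}

# A sedating, appetite-modifying drug: big improvements in sleep and
# appetite, a small bump in energy, suicidality completely unchanged.
on_drug = dict(baseline, sleep=0, appetite=0, energy=2)

total_drop = sum(baseline.values()) - sum(on_drug.values())
print("Total score drop:", total_drop)
print("Suicidality change:", baseline["suicidality"] - on_drug["suicidality"])
```

Sleep and appetite alone move the total by a margin that can read as 'statistically significant,' while the one item that matters most never moves.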

[00:17:03] But we'll save that for another day. I'm looking forward to it, though. But these rating scales in and of themselves promote the ability to create the data that you want. Yeah. And that's where, you know, one of my colleagues used the term 'surrogate markers' that we have to be careful of in studies, versus functional outcomes. There is a move toward the latter in some places; I'll throw out the name of one, the Sheehan Disability Scale.

[00:17:24] On one hand we're talking about whether or not your symptoms got better, but did it affect your ability to take care of yourself, to take care of others, to do things that give you pleasure, to function in a meaningful way that you want to? Otherwise you're not getting the most objective set of data, which, like you said, really should come from functional outcomes. Let's measure how many days you have taken off work. Let's measure, you know, things like how often you're maintaining your hygiene each day or each week.

[00:17:54] How often are you going out and having a pro-social activity with people, one that maybe doesn't involve going to the bar by yourself and drinking for four hours? Actually, that reminds me of maybe not the best ad I've seen. I hate to pick on the drug again, but it was for Seroquel. It had Seroquel dose on the x-axis, and the y-axis was the likelihood of calling mom on the phone. And that was how they advertised that drug.

[00:18:23] Oh, this is Seroquel. Seroquel will help you by making you call mom on the phone. This is perfect, because talk about an oversimplification of a medication. Oh, yes, this drug may also cause your blood sugars to go so high that you may be at increased risk of diabetes, metabolic syndrome, sedation, weight gain, dry mouth, all these things that may occur. But sure, we can reduce it to: you take this drug and you'll call your mom more. Better crank the dose up if you want to call her more often. Yeah.

[00:18:51] I had a patient one time that had an A1C of 15%. Did they call their mom more? Well, I think she didn't answer. But 15%, we took them off of Seroquel. And within three months, it was down to 9%. And again, that's a risk benefit. Right. You know, the reductionistic approaches to let's talk about this drug.

[00:19:15] I've had other patients with no impact on their blood sugar at all, but on a population level it is certainly a major negative. Now, I will say about Seroquel: it is effective. It is a quote-unquote dirty drug because it affects so many different systems, but it also does have a significant impact, especially on anxiety, which, by the way, is a symptom, not a disease. But that's another discussion for another day. I love it. I love it. So, yeah.

[00:19:44] So the rating scales are a problem. Then there's not publishing negative trials. Now, there's been a push toward publishing them, though again, I don't know who's enforcing it. That's the thing, the FDA is supposed to. Clinicaltrials.gov. I believe at one point the number was $150,000: you would be fined if you failed to publish. If it's on clinicaltrials.gov, you should complete the loop by publishing the results of that study so they can be known. But what is the incentive to publish a negative study? It's not going to get you a promotion at work.

[00:20:14] It's probably not going to get you a promotion in academia or in the pharmaceutical industry. But it should be done, because it should inform treatment. If a study is negative, there should be replication studies; even if one study is negative, replicate it. If somebody says they can make the best meatballs in the world, and they have a recipe for it, and no one else can make it, and those meatballs come out garbage, you're not going to put them in a restaurant. Because you're like, well, something is wrong with your recipe, because I can't make it. Yeah.

[00:20:42] And the incentive should be that it's the right fucking thing to do. Oh, no, no, it absolutely is. It is the right thing to do, the academic thing to do; it is the way that we move forward. Why is this such a difficult concept? There's a lot to it. Because you'll lose your job, right? Yeah, that's what it comes down to. And so the idea is that you make realistic studies, and even a negative result can inform something, because then we can say, well, let's try a different route.

[00:21:12] Or maybe it teaches us something about the dose, or the receptor, or that there's a breakdown between the bench, the in vitro finding that molecularly or on a cellular level this should work, and the human body with all its processes, where it doesn't. That can inform a lot of things, versus just saying, well, we're going to pretend no one ever did this study at all. Yeah.

[00:21:32] I'll have to review the official numbers, but I think it was Lexapro that had maybe eight or nine trials done before they could find two positive ones. Historically, the FDA has required that any drug have at least two positive clinical trials before you can put it up for FDA approval. And yeah, that denominator doesn't matter to them, though.

[00:21:56] That's the biggest problem with these negative trials not being published, I think: a company can do 25 trials of their drug, and they can manipulate those other factors that go into designing the trial to make sure they get an effect on that 26th trial. And they don't ever have to disclose, hey, this drug failed 24 times and then succeeded twice in subsequent studies.

[00:22:24] The way I try to explain it to students is: imagine I gave you 30 darts and a dartboard, and I said, all right, throw all 30 darts at that dartboard, and every one that misses, we're going to pick up off the ground. The end result is you'll have darts exactly where you want them, and we're going to act like those were the only darts you threw. Makes you look like a really good dart thrower, when maybe you only got two on the board out of 30.

[00:22:50] And the whole thing we're trying to show is: did these results occur by chance? That's the entire goal of running a statistical analysis and accounting for unknown variables. Could these results have been due to chance? If you're scrapping all the negative results, you've violated the whole idea of why we do studies the way we do, which is to show that, no, these results are accurate, and we have a pretty good degree of confidence in them. But if we get rid of all the negatives, we've lost that confidence.
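The dartboard point can be put in numbers. What follows is a hedged sketch: the trial size, the assumption of a truly ineffective drug, and the one-sided 0.05 cutoff are all illustrative choices, not anyone's actual protocol. It simulates a sponsor who runs 25 trials of a drug that does nothing and counts how often at least two come up 'positive' purely by chance:

```python
import random
import statistics

random.seed(0)

def null_trial_is_positive(n_per_arm=50, z_cutoff=1.645):
    """One drug-vs-placebo 'trial' where the drug truly does nothing:
    both arms draw symptom-score changes from the same distribution.
    Returns True if a crude one-sided z-test calls it 'positive'."""
    drug = [random.gauss(0, 1) for _ in range(n_per_arm)]
    placebo = [random.gauss(0, 1) for _ in range(n_per_arm)]
    diff = statistics.mean(drug) - statistics.mean(placebo)
    se = (2 / n_per_arm) ** 0.5  # SE of a difference of two means, sd = 1
    return diff / se > z_cutoff  # ~5% false positives per trial by design

# If a sponsor runs 25 independent trials of an ineffective drug,
# how often does the campaign produce at least 2 'positive' trials?
n_campaigns = 1000
lucky = sum(
    sum(null_trial_is_positive() for _ in range(25)) >= 2
    for _ in range(n_campaigns)
)
print(f"Campaigns with >= 2 chance positives: {lucky / n_campaigns:.2f}")
```

Analytically, the number of 'positives' per 25-trial run follows a Binomial(25, 0.05), so the chance of at least two false wins is about 36 percent: run enough null trials and two successes stop being evidence unless the failures are reported alongside them.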

[00:23:17] This is something that should be required to be included in that direct-to-consumer ad. It should say we did 11 studies in order to find the two. You should have to include that denominator. But the sad reality is that not only is that not communicated to the consumer, it's not communicated to me.

[00:23:41] It is being held out of my knowledge base, because it would change the way I approach and evaluate that medication. Yeah, there was actually a drug that a colleague and I wrote a paper about, trying to make the case for this medication in PTSD. And we found out that it had been studied, and that study just got abandoned. It seems like maybe it didn't meet its enrollment targets, but all we know is that there's kind of a dead-end page on clinicaltrials.gov. What happened?

[00:24:09] You know, there could be some really important information in those results. What happened in that process? Was it an issue getting people into the study? Was there a hang-up? Did they find out something about the drug? We don't know, because it just dead-ends there. Right. And we're talking about entire papers going missing, entire clinical trials going missing. But there are also little subsets of data, information about specific patients' experiences, that magically go poof into thin air. And it makes you wonder.

[00:24:40] If you're in a clinical trial and you have a headache from the side effect, that information is probably not disappearing into thin air. But if you become suicidal because of that drug, that's the stuff that we'll talk about in a little bit with Paxil where there can be all these pages missing from the original documentation. I'll say one thing I guess about the inclusion stuff, the diagnosis. So being careful to exclude people with a bipolar diagnosis from depression studies, excluding somebody with anxiety.

[00:25:10] But the big thing is that anybody with a substance use disorder gets excluded. And at least for my population, that's a huge issue, because... We have the most substance use disorders in the world. So how do we apply it? There's a paper, I think it was Zimmerman 2020, a real nice paper. He said, you know what, I'm going to look at these studies and see how many of my patients would actually have been included. And he found that, depending on the study, somewhere between 76 and 99.1 percent of his patients would have been excluded.

[00:25:39] So the idea is that for well more than three out of every four of his patients, the results may not have applied, because patients like his were not the ones included. So how do we then say, well, this drug is approved for depression, I'm going to use it in my patients, if the kinds of patients we see weren't even the ones in the trial? How do we know it's going to work? Yeah, and that goes straight to taking patients out who are deemed to have had a concurrent illness. Yep.

[00:26:06] I've talked a lot about how the diagnosis of a mood disorder has been watered down over time. We used to call it manic depressive illness. We've now subcategorized it as bipolar disorder and major depressive disorder. And phenomenologically, that doesn't make sense because a lot of the people who only get depressed, they may also have a family history of bipolar. They just don't swing to that pole that aggressively.

[00:26:35] But what they do is they swing between a normal mood and a really depressed mood. And phenomenologically, it is very similar to bipolar. They just don't have the potential to get manic. And so what the pharmaceutical companies will do in these antidepressant trials in particular is I will give Michael, who's in the trial, a dose of the, let's say, SSRI.

[00:27:04] And Michael is a 21-year-old. Ah, I've gotten younger. I appreciate it. You got younger, yeah. And Michael's dealt with depression in the past, but because the nature of bipolar illness is people will experience usually multiple episodes of depression before they ever have an episode of mania. You don't call it bipolar disorder until they have an episode of mania.

[00:27:28] And so you cannot reliably predict, especially in a condition where the average age of a first manic episode is 22 years old in males, you cannot predict whether or not that person has bipolar disorder. Time will tell. But how this all loops around is that SSRIs, in folks who have bipolar illness or bipolar genetics, are much more likely to induce mania or to induce suicidality.

[00:27:58] So you're talking about removing people from a trial after the trial has already started, essentially after they've had what is most likely a suicidal event or ideation from taking the drug. But then you just take them out of the data set because they had a concurrent illness that you didn't realize.

[00:28:22] In the real world, with a 15 to 21-year-old, you do not know. You cannot reliably predict who is going to have that reaction to the medication. Maybe a question we should be asking is: in carrying out this study, did they have access to diagnostics or to technology that does not reliably exist in practice? And if so, again, how can we trust the result, or at least how can we apply it?

[00:28:51] And that's something I know we'll get to with aducanumab. But the whole idea is they're saying, we knew what to look for, and we isolated it. But if you're not doing that in your clinical practice, then you are going to miss the exact population that the drug worked in, not because you're a terrible clinician, but simply because you didn't have access to a whole battery of people and tests and things like that. And look, there are people in these clinical trials who have killed themselves, who are now dead.

[00:29:18] Those are people in a clinical trial. They just pull them out. They remove them. It's like they didn't even exist. When you're trying to tell me to do that with my patient population, the person is gone. Their family deals with the grief. I deal with the grief of it. I am the clinician that then has to live with the fact that I gave a drug to a patient that caused them to kill themselves.

[00:29:46] I mean, it is absolutely not right to play these tricks when you're talking about something as serious as suicide. This "concurrent illness" excuse: oh, well, we didn't realize they had that. In true clinical practice, you also don't know. And if that's happening to my patients a lot, I'm not going to do this job very long.

[00:30:11] I'm not going to have the motivation to keep going on knowing internally that I'm hurting more people than I'm helping or that the consequences of the people that I'm hurting are the most severe consequences that you can have. I don't want to live that life. And all this is happening at the same time as the DSM is just broadening diagnoses, watering them down, trying to create a system where more people can meet criteria for depression.

[00:30:41] More people can meet criteria for this non-existent disease of generalized anxiety. And so the more people that you have in the pool of people taking these drugs, specifically SSRI, SNRI antidepressants, the more people are going to be harmed and are going to die from it. And you just can't have it both ways. Yeah, and it's tricky because like on one hand, at least it was sold to me initially was, well, we're going to expand it.

[00:31:10] Because, you know, in America, we have to have a diagnosis to bill and to receive treatment. So from that end, again, it seems a very altruistic idea that, okay, we're going to make it easier to get a diagnosis. That way you can bill and that way you can get the help you need. But as you said, then there could be another side of that. Yeah, almost 300 DSM-5 diagnoses. Most of them do not have validity.

[00:31:33] They are not distinguishable enough from each other or they don't have consistent enough family history or genetic information and response to treatment information. There's so much overlap between them that you really can't call them distinctly different from each other. That makes me think about all the like exclusions for, okay, how many people have we diagnosed with schizophrenia who are using drugs within the last four weeks? Yes, that's the one that makes me think about it.

[00:32:00] So John Kane, you know John Kane. Not personally, but I definitely know of him. Well, yeah, yeah, yeah. So I interviewed Jose Rubio, one of his most accomplished pupils, I guess. And John Kane talked about when he was tasked by the FDA with taking all these people off of clozapine, and he said, yeah, I can't do it. These people really need this drug, and they are so unstable without it. It's obviously clearly effective. The first comment he made on a podcast I was listening to, he said,

[00:32:30] Well, the first thing I did was I took all the people out that didn't have schizophrenia. I took all of them off the drug, right? So he was able to delineate it. We're in a world of over-diagnosing the shit out of schizophrenia in people who are using meth. Right. I look at the psychologist in our treatment team every day, and we talk about a diagnosis, and one of us will be like, did they use meth? Yep. Ah. Yeah, it muddies the waters. Yeah, absolutely.

[00:33:00] Another big issue with these clinical trials is the length of the trial. Oh boy. That is a very, very important thing to keep in mind. When somebody gets depressed, biologically depressed, they don't move around as much. They don't show as much expression. They are not sleeping as well, but they may be laying in bed for long hours at a time. All these things are slowed down externally. They're not motivated. They're not energetic.

[00:33:30] But internally, they're ruminating. They're driving. They're thinking about things like suicide, or they're guilty about things they've done in the past. Depression is something that is a self-limited illness, not in total, but in terms of each episode. So the time that somebody is depressed, it's going to end at some point. Now, there is certainly variability to that. Some people may be depressed for several months at a time. For most people, it tends to be a few weeks at a time.

[00:33:59] So I create a drug that takes six to eight weeks to work, and then I give it to enough people. If I can catch the right people at the right time in their depressive episode, and I don't let the trial last too long so that they go into another depressive episode, then what I'm really capturing is the natural progression of depression, which is to cease after a period of time.
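The point about trial length and self-limited episodes can be made concrete with a toy simulation (every number below is hypothetical). When most of the improvement comes from the episode ending on its own within the trial window, a small drug effect rides on top of a large natural recovery that shows up in both arms:

```python
import random

random.seed(1)

TRIAL_WEEKS = 8  # short trial window, as in many antidepressant studies

def remaining_episode_weeks():
    # A self-limited episode: how many weeks are left at enrollment.
    return random.randint(2, 12)

def final_score(baseline=24, drug_effect=0):
    # If the episode ends inside the trial window, the rating-scale score
    # drops a lot regardless of treatment; drug_effect adds a small extra drop.
    remaining = remaining_episode_weeks()
    natural_recovery = 15 if remaining <= TRIAL_WEEKS else 3
    return baseline - natural_recovery - drug_effect

placebo = [final_score() for _ in range(200)]
drug    = [final_score(drug_effect=2) for _ in range(200)]

mean = lambda xs: sum(xs) / len(xs)
print(f"placebo improvement: {24 - mean(placebo):.1f} points")
print(f"drug improvement:    {24 - mean(drug):.1f} points")
```

Both arms improve substantially because the episodes end on their own; the drug-placebo gap is a couple of points, which is the kind of difference that can be statistically significant without being clinically meaningful.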

[00:34:24] And in terms of getting a difference from placebo, maybe you have a little bit different patient population in the placebo group that is earlier in their depressive period. But the length of these trials for so many different examples is extremely important. You've got other situations we've talked about with benzos and how you put somebody on Xanax for six weeks, and yeah, they're a lot less anxious than they were.

[00:34:51] And after only six weeks of a couple of doses daily, they're not going to develop this severe withdrawal syndrome. So you have a situation where you've shown that the drug is effective and that they don't have withdrawal from it. But the reality is that at least a third of people who are started on a benzo are maintained on it for longer than a year. But we don't have great data, clinical-trial-wise.

[00:35:18] Yeah, I imagine, you know, again, bless her heart, my grandmother. You imagine taking a six-week study and then putting somebody on a drug for 50 years. I mean, it is wild, the idea that we say, okay, well, six weeks showed the drug is safe and effective. And again, there may be no other extension studies and things like that, but we're extrapolating a lot. They're extrapolating so much. And we're trusting the company that stands to benefit from selling more Xanax or Klonopin or whatever it is.

[00:35:46] They don't have the incentive to do the long-term studies, because they know it is not going to show that it's effective in the long term, and it's not going to show that it's safe in the long term. And if you'll humor me, you've almost baited me into one of my favorite things to talk about: the enriched study design. That's my favorite to talk about because I honestly at times can't believe it exists. But for those who may not know, the idea is that we're going to do a run-in phase and give everybody the drug.

[00:36:16] And if they do well on the drug, we're going to keep them in the study. If they don't do well on the drug, they're out. And then we take the people who did well on the drug and randomize them to keep getting the drug or to go off the drug. What do you think happens to somebody who got a drug, did well on the drug, and then gets taken off the drug? They're probably going to have a bad time. Right. And that's the question, too: is there a physical dependence, a psychological dependence? Is withdrawal involved?
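The enriched, randomized-withdrawal design described above can be sketched in a few lines (the response, relapse, and withdrawal rates are made up for illustration). Note how the placebo arm's "relapses" bundle true relapse together with withdrawal from the drug that everyone received during run-in, which is exactly the confound the design cannot untangle:

```python
import random

random.seed(2)

N = 500

# Run-in phase: everyone gets the drug; only "responders" stay in the study.
responders = [p for p in range(N) if random.random() < 0.5]

# Randomized withdrawal: responders either continue drug or switch to placebo.
counts = {"drug": 0, "placebo": 0}
relapses = {"drug": 0, "placebo": 0}
for p in responders:
    arm = random.choice(["drug", "placebo"])
    counts[arm] += 1
    if arm == "drug":
        relapse = random.random() < 0.20          # background relapse rate
    else:
        # Discontinuation mixes true relapse with withdrawal effects;
        # this design counts both as the drug "working".
        relapse = random.random() < 0.20 + 0.35   # background + withdrawal
    relapses[arm] += relapse

for arm in counts:
    print(f"{arm}: {relapses[arm] / counts[arm]:.0%} relapse rate")
```

The drug looks strongly protective against relapse even though, by construction, a big chunk of the placebo arm's events are withdrawal, not the illness returning, and non-responders never appear in the comparison at all.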

[00:36:43] Are they actually getting worse, or are they withdrawing from the drug that you gave them, that they did well on, and that you took away? And this, I don't know why, but it became an accepted way of doing these studies. The FDA said, we want it to be easier for these drugs to get approved, and we want to be able to use smaller sample sizes. So to do that, the FDA opened up, in some ways I guess, Pandora's box by allowing these enriched study designs.

[00:37:07] And enriched study designs, again, this is something I'll credit Dr. Ghaemi, who I believe you've interviewed before. He was the one who opened my eyes to this, in one of the first presentations I ever went to, and it blew my mind. And so he and some of the people at the World Federation of Societies of Biological Psychiatry, which does not roll off the tongue, looked at this, and he said, we're going to look at studies that are enriched versus non-enriched. And wouldn't you know, among the non-enriched studies, it's lithium that seems to have some of the best data.

[00:37:35] But these enriched studies became so ingrained for a lot of these bipolar studies. And again, that is giving you information you don't have in your clinical practice. And so it goes back to that, that maybe that rule that we should be using is does this study use foresight that I can't have in clinical practice? And if so, should that give me a hesitation in how I apply it? Yeah, it's almost like we just need to wash out all the literature, start from scratch, not in terms of doing new studies,

[00:38:04] but start from scratch with a database of what is independently validated by these 10 people that we deem as the top scientists and researchers in their fields. And we also deem as ethical and trying to promote the understanding and awareness of the conditions that we're treating.

[00:38:27] And again, Dr. Ghaemi wrote a wonderful paper about that too, about the idea of having an external body, whether it's publicly funded or citizen funded, repurposing generics and studying medications for which there is no longer a financial incentive to do the research, and then determining, you know, can we maybe give all these drugs a fair shake, regardless of who funds them? Let's see how all of these drugs stack up to one another.

[00:38:57] Well, that's what it should be. But yes, you know, you cannot get your drug approved until it completes an independent trial, the most upfront, transparent form of research that you could imagine, trying to replicate some of these studies. And that goes back to completing the loop about the results of these trials.

[00:39:21] Even if the results don't look pretty, even if it's pretty ugly, there is information to be gained from that. As a society, we're focusing on the short-term monetary profit. And when we're all dead, none of us are taking any of our belongings and valuables with us into whatever the afterlife is. So if we're not trying to advance our understanding... I mean, that is how we move the world forward.

[00:39:51] And all these earthly things are going to be gone one day. So what is the point of living if you're not trying to advance? Yeah. And to hopefully help a few people along the way. But again, it goes back to making sure that we have the right goal. What counts as helping in this scenario? And that goes back to making sure of the end goals: are people with depression able to live the life that they desire? Are they able to have joy?

[00:40:16] Are you able to take your medication for bipolar disorder and meet your goal for what do you want to do with life? Yeah. Yeah. Live the life that is important and gives you fulfillment or meaning or, you know, makes you feel like what you're doing is important. Thanks again for watching and or listening.

[00:40:38] If you're passionate about the subjects that I discuss on the channel, do me a favor and like, comment, subscribe. Do whatever you can to make your voice heard that these are problems that must be addressed in our society. If you have any questions, comments, or concerns, I want to hear them. Feel free to reach out on social media or email us at renegadepsych at gmail.com.

[00:41:08] And if you'd like to be a guest of the show or you have a connection to somebody that you think would be a good guest, let us know. Thanks again for listening.

[00:41:56] Call your own doctor.
