Episode: 2705
Title: HPR2705: Evidence-based Medicine
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr2705/hpr2705.mp3
Transcribed: 2025-10-19 07:49:30
---
This is HPR episode 2,705 entitled Evidence-Based Medicine and is part of the series Health and Health Care.
It is hosted by Ahuka and is about 17 minutes long and carries a clean flag.
The summary is: medicine should be based on objective scientific evidence.
This episode of HPR is brought to you by AnHonestHost.com.
Get 15% discount on all shared hosting with the offer code HPR15.
That's HPR15.
Better web hosting that's honest and fair at AnHonestHost.com.
Hello, this is Ahuka, welcoming you to Hacker Public Radio and another exciting episode
in our series on taking care of your health and we're looking at something today that is called Evidence-Based Medicine.
Now, that may be something that you hadn't heard of before or maybe you assume all medicine is evidence-based.
Well, ideally it would be, but you know, doctors are people. Having worked at hospitals,
I can tell you, as a demonstrable fact, some doctors are older.
They've been doing things a certain way for a long time and they're not going to change,
and believe me, no one can resist change any better than a doctor.
And, you know, other times it's like, well, what do other people do?
All right, so what we're talking about when we say Evidence-Based Medicine is this:
take a look at the best scientific research that is available. Okay, that's the idea behind
Evidence-Based Medicine. Now, I've put a number of links in the show notes. There is a Wikipedia
article and one from the National Institutes of Health. They're saying basically the same thing,
but I think the Wikipedia definition is a little easier to follow and I'm going to read from it.
Okay, Evidence-Based Medicine is an approach to medical practice intended to optimize decision-making
by emphasizing the use of evidence from well-designed and well-conducted research.
Okay, right away: well-designed and well-conducted. Isn't that what we've been doing over the last
couple of episodes, trying to get at some of the parameters of that? And it goes on to say,
although all medicine based on science has some degree of empirical support,
Evidence-Based Medicine goes further, classifying evidence by its epistemologic strength
and requiring that only the strongest types coming from meta-analyses, systematic reviews and
randomized controlled trials can yield strong recommendations. Weaker types, such as from case
control studies, can yield only weak recommendations. The term was originally used to describe an
approach to teaching the practice of medicine and improving decisions by individual physicians
about individual patients. Use of the term rapidly expanded to include a previously described
approach that emphasized the use of evidence in the design of guidelines and policies that apply
to groups of patients and populations (evidence-based practice policies). It has subsequently spread
to describe an approach to decision-making that is used at virtually every level of health care
as well as other fields, evidence-based practice. Whether applied to medical education,
decisions about individuals, guidelines and policies applied to populations or administration
of health services in general, evidence-based medicine advocates that to the greatest extent possible
decisions and policies should be based on evidence, not just the beliefs of practitioners,
experts or administrators. It thus tries to assure that a clinician's opinion, which may be limited
by knowledge gaps or biases, is supplemented with all available knowledge from the scientific literature
so that best practice can be determined and applied. It promotes the use of formal explicit
methods to analyze evidence and makes it available to decision-makers. It promotes programs to
teach the methods to medical students, practitioners and policy makers.
Well, that was rather a lot, but there's a lot in there, okay? It's probably going to take us
more than one bite at the apple to unpack all of this stuff. But the important thing is that
it is a rational approach to health. To start with, let's look at one passage near the end.
It thus tries to assure that a clinician's opinion, which may be limited by knowledge gaps or
biases, is supplemented with all available knowledge from the scientific literature,
so that best practice can be determined and applied. It promotes the use of formal explicit
methods to analyze evidence and makes it available to decision-makers. Now, this means that anyone
who is concerned, but most particularly physicians, needs to understand what the studies show and be
able to evaluate those studies. And that's why we've taken some time already and will take more
to get at how studies are done and the strengths and weaknesses of each of the different approaches.
Now, what we have to understand is that studies are not all equal.
All right, you may read a news story or hear a story on television that says,
a new study just came out that may change how you eat. Or maybe it'll be about a new hope for
cancer sufferers. What you generally will not hear is any discussion of the quality of the study,
which is a guide to how much you should believe it. Of course, as we have already pointed out,
until the study has been replicated and validated by additional research,
you should adopt a wait-and-see attitude. And of course, there is the rule advanced by Carl Sagan
that says, extraordinary claims require extraordinary evidence. So, what are the characteristics of
good studies? Well, randomized controlled trials. Let's start with that. That's one of the ones
mentioned in the definition that we read earlier as being strong. So, a good study is one that is
randomized and controlled, which aims to eliminate any potential bias. That's what the randomization is
all about. So, how do you do this? You start by defining the population of interest.
So, if you're testing a drug to fight malaria, your population would be all people who have malaria.
Now, since the major incidence of malaria is in Africa, you would probably go there to do the study
and select participants from the population there. The random part comes in how the participants
are selected from this population. The statistician's definition of a random sample is one where every
member of the population being studied has an equal chance of being selected.
Everyone has an equal chance. Bias would occur if they were not equally likely to be selected.
So, if your sample had only men, you would not have a randomized sample.
This was a problem with a number of drug trials many decades back. We still have questions about
adults versus children with a number of drugs that were never tested on children. Of course,
the key is to define the population properly. If you are testing a treatment for ovarian cancer,
for instance, then your sample really should just be women, and that's perfectly proper.
Your sample matches your population. Now, controlled means you have a test group and a control group
that the test group is compared to. The allocation of participants to these groups should
be perfectly random as well. So you start with a group randomly pulled out of the population,
and then some go into the test group and some go into the control group, and that
divvying up is a totally random process.
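To make that concrete, here is a minimal sketch in Python of both steps: drawing a random sample from the population and then randomly allocating it into the two groups. The population list and the group sizes are invented for illustration, and real trials layer on extra machinery (stratification, allocation concealment and so on) that is left out here.

    import random

    # Hypothetical stand-in for everyone who meets the inclusion criteria,
    # e.g. malaria patients registered at the study sites.
    population = [f"patient-{i}" for i in range(1, 1001)]

    # Random sample: every member of the population has an equal chance
    # of being selected for the trial.
    participants = random.sample(population, k=100)

    # Random allocation: shuffle the sample, then split it into a test
    # (treatment) group and a control group of equal size.
    random.shuffle(participants)
    test_group = participants[:50]
    control_group = participants[50:]

    print(len(test_group), len(control_group))  # 50 50

Now, there are a couple of ways you can set up the control group.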
If the study is what's called a placebo controlled study, the control group will receive
no active treatment, but in every respect it should be treated precisely the same. If the test group
gets a pill, the control group will get a pill as well, usually something harmless like a sugar pill.
If the test group gets a shot, the control group will also get a shot, usually something like a
harmless saline solution. You do this because of the well-documented placebo effect, which shows
that people getting sugar pills and saline shots show signs of improvement. Even more interesting
is the finding that even when people know they are getting a placebo, they tend to improve.
They get better anyway. The purpose of the placebo controlled study is to make sure that the
treatment is really doing something, and not just making people feel better because they are
getting care. Now, this isn't the place to try and get at, you know, how does the placebo effect
work? You know, there are mind-body connections, and they are all fascinating.
Now, the other type of study is what's called a positive control study. Now, that's when there's
already a recognized standard treatment, and you are looking at something that is a potential
alternative. Maybe you think it's going to do better, or you just want to have another good
treatment in your arsenal. So, again, you need to randomly assign people to the test group and
the control group to eliminate bias. And you're testing to see if there is a significant
improvement using the new treatment compared to the existing treatment. If the new treatment is
no better than what you already have, there may not be any point in using it.
Now, I say may not, there could be. You know, there may be a part of the population that is not
responding to the standard treatment that might respond to this, and you know, it's worth giving
it a shot. But what you want to watch out for is a well-known problem of a drug company with a
drug that they have been selling at a very high price because it's patent protected.
And the patent is about to run out, as patents are supposed to do, you know.
And then at that point, anyone can produce the generic equivalent, and so the pharmaceutical
company is going to try and come up with an alternative that they can patent and sell at a high price.
But, you know, if it's really doing the same thing as the generic, there's
no socially valid reason for doing that. So that's why you do that kind of positive control study.
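As a rough illustration of that comparison, here is a small Python sketch that asks whether the new treatment's recovery rate is significantly different from the standard treatment's. The counts are invented, and the simple chi-square test shown here is only one of several analyses a real trial would pre-specify.

    from scipy.stats import chi2_contingency

    # Invented counts: [recovered, not recovered] in each arm.
    new_treatment = [78, 22]       # 100 patients on the new drug
    standard_treatment = [70, 30]  # 100 patients on the standard drug

    # Chi-square test of independence: is the outcome associated with
    # which treatment a patient received?
    chi2, p_value, dof, expected = chi2_contingency([new_treatment, standard_treatment])
    print(f"p-value = {p_value:.3f}")

    # A large p-value means there is no good evidence the new drug does
    # better, which is the "may not be any point in using it" situation.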
Now, if you really want a gold standard for a study, you combine randomized controlled
methodology with something called double blind. Now, in pretty much every randomized controlled
trial, the participants do not know whether they are in the group receiving the treatment or in
the control group. If you stop there, it is single blind. But it has long been known that researchers
are human and may have a tendency to see results where there are none. There's the famous example
of N-rays, reported in 1903. This is a fascinating one because at the time scientists were discovering
these things; X-rays had just been discovered, and so there was this whole thing about radioactivity
and radiation and all of these things that physicists were discovering. And a respected
researcher at the University of Nancy in France discovered this thing that he called N-rays,
the N for Nancy, the university that he worked at. And he reported all of this, you know,
got a couple of other researchers to report it, and then there was a skeptic, an American as it happens.
He said, you know, I'm not so sure that the evidence is all that strong here and went to visit
this researcher at the University of Nancy. And at some point, the researcher left the room and
this American basically took the test substance that was supposedly emitting N-rays and removed it
from the apparatus and put in a block of wood instead. Well, then the researcher came back and
started demonstrating these N-rays that he was supposedly seeing. It was all imaginary,
probably a good example of what we've referred to before, confirmation bias, you know, that you
tend to see what you want to see. So a double blind study tries to solve this problem by carefully
keeping the researchers blind to which person is in each group. You've got your test
group, you've got your control group. So you might assign every participant a number.
Then you have one person prepare the pills or the shots or whatever and label them with the
appropriate number. Then a different researcher would administer the treatment and record their
observations. All they would know is the number assigned to each person. So if they find improvement
in a certain group of numbers and that group proves to be the treatment group, you can have more
confidence that this is a legitimate result.
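Here is a minimal sketch, with invented data, of the bookkeeping behind that numbering scheme: one researcher holds the key mapping numbers to treatment or placebo, the other only ever records outcomes against the numbers, and the two are merged only after the data are in. Real trials add allocation concealment, audit trails and more.

    import random

    participant_ids = list(range(1, 21))  # every participant gets a number

    # Researcher A (the only person who sees the key) randomly assigns each
    # number to treatment or placebo and labels the numbered pills accordingly.
    arms = ["treatment"] * 10 + ["placebo"] * 10
    random.shuffle(arms)
    blinding_key = dict(zip(participant_ids, arms))  # locked away until the end

    # Researcher B administers the numbered pills and records observations,
    # knowing only the numbers. Outcomes are faked here for illustration.
    observations = {pid: random.choice(["improved", "no change"])
                    for pid in participant_ids}

    # Only after all observations are recorded is the key unsealed and merged.
    unblinded = [(pid, blinding_key[pid], observations[pid]) for pid in participant_ids]
    print(unblinded[:3])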
Now, if you combine all of these methods into a randomized controlled double blind study,
you have one of the highest levels of validity that you
can have in research, whether medical or otherwise. Now, a note about this: this kind of research is
based on having the two separate groups. Sometimes you stop a trial. And usually it's because in the
middle of the study, some kind of compelling evidence is coming up. Now, in the worst case,
what you could find is that the treatment is causing harm. Three or four of the test subjects
have dropped dead after receiving this medication. That would be a signal like, okay, stop the study.
We are not going any further with this one. Now, in the best case, what you might see
is that the people in the treatment group are just getting better fast and showing tremendous
improvement, blah, blah, blah. In a case like that, you might stop the study and just start giving
the treatment to everyone. It's like, okay, we've already got enough data to confirm this.
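Very roughly, an interim look might compare the two groups partway through and flag the trial for review if the difference is already extreme, whether for harm or for benefit. The threshold and the counts below are invented; real trials pre-register group-sequential stopping rules precisely because repeated peeks at the data inflate false positives.

    from scipy.stats import fisher_exact

    def interim_check(treated, control, alpha=0.01):
        # treated / control are (events, non_events) counts at the interim look.
        # Returns True if the difference is already extreme enough that the
        # data monitoring board should consider stopping. Threshold is invented.
        _, p_value = fisher_exact([list(treated), list(control)])
        return p_value < alpha

    # Invented interim data: (serious adverse events, participants without one).
    print(interim_check(treated=(8, 42), control=(0, 50)))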
Now, these are ethical questions. And generally speaking, most studies at universities,
in particular, they have what are called institutional review boards that are supposed to weigh in on
the ethical issues involved in doing a study and whether it's being done properly. So you might
consult with them. Even private companies usually have some people involved in making ethical
decisions like that. If you were being funded by the government, you could talk to the government
people about, okay, we think we need to stop the study and here's why. But this is one of the
things that can happen as well. And so with that, this is Ahuka for Hacker Public Radio,
signing off and as always reminding you to support free software. Bye-bye.
You've been listening to Hacker Public Radio at Hacker Public Radio dot org.
We are a community podcast network that releases shows every weekday Monday through Friday.
Today's show, like all our shows, was contributed by an HPR listener like yourself.
If you ever thought of recording a podcast, then click on our contribute link
to find out how easy it really is. Hacker Public Radio was founded by the Digital
Dog Pound and the Infonomicon Computer Club, and is part of the binary revolution at binrev.com.
If you have comments on today's show, please email the host directly, leave a comment on the website
or record a follow-up episode yourself. Unless otherwise stated, today's show is released under
a Creative Commons Attribution-ShareAlike 3.0 license.