Episode: 2870
Title: HPR2870: Hierarchy of Evidence
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr2870/hpr2870.mp3
Transcribed: 2025-10-24 12:32:45

---

This is HPR Episode 2870 entitled Hierarchy of Evidence, and is part of the series Health and Healthcare. It is hosted by Ahuka, is about 14 minutes long, and carries a clean flag.

The summary is: all studies are not the same, some are better than others. This episode of HPR is brought to you by archive.org. Support universal access to all knowledge by heading over to archive.org/donate.

Hello, this is Ahuka, welcoming you to Hacker Public Radio and another exciting episode in our series on health and health care issues. And I want to continue our look at studies because I think it's important that we have an understanding of how these things work.

So what I want to do this time is talk about something that is called the Hierarchy of Evidence, which is another way of saying, you know, not all studies are equally valid. There are better and worse ways of doing studies in medicine, or other things for that matter.

So as we saw in our last episode of this series, the one on evidence-based medicine, the idea is that we formulate medical treatments on the basis of the best evidence from quality research studies. Breathless posts on social media about the miracle breakthrough your doctor doesn't want you to know about are, of course, absolutely worthless, as are pretty much any of the blatherings of celebrities like Gwyneth Paltrow.

And we said it was hard to beat double-blind, randomized controlled trials, but this is slightly more complicated when we consider how feasible it is to do these kinds of trials. Like in our example of breakfasts and test scores, no decent person is going to deprive a group of children of breakfast in a randomized controlled trial, even if that would get us high-quality data. Some things are just not done.

Now, there is an approach favored by advocates of evidence-based medicine called the hierarchy of evidence that ranks the quality of data by how the evidence was obtained. The idea is that you would rely on such evidence only as much as the data deserve based on how the study was done. If the data is low quality, you place a low value on it. It still may be something, but you would, of course, reject it if a better study came along with a different conclusion.

So, how does this hierarchy rank the kinds of studies? I'm going to put a link in the show notes where you can get more information about this, but we're going to go through the ranking. Now, as we do this, let's acknowledge this approach is not 100% supported by everyone in medicine, but we should also understand that it is vastly superior to doing so-called research on Facebook. In a world where people are not vaccinating their children, promoting fad diets, and shoving odd things into various orifices, this approach to validating evidence is a huge improvement.

So, let's start at the top. What is the best kind of study of all? Well, that would be systematic reviews and meta-analyses of randomized controlled trials with definitive results. Kind of a mouthful. We're really going to break this one down so you understand exactly what's going on here.

Systematic reviews gather the whole body of literature on a topic to see where there is agreement. By definition, you have to have multiple studies done before you can even think of a systematic review, but when you have them, and they all point in the same direction, that is very powerful. And if these studies are themselves randomized controlled trials, the evidence becomes extremely persuasive. When you have multiple studies pointing in the same direction, that is powerful because it addresses one of the biggest problems, that of replicating results.

Now, it also matters that the individual studies being combined into the meta-analysis are themselves of high quality. And if they are based on randomized controlled trials, that creates at least a strong presumption of quality.

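To make the pooling idea concrete, here is a minimal sketch, not from the episode, of one common approach: fixed-effect, inverse-variance meta-analysis. Each study's effect estimate is weighted by its precision, and the weighted estimates are combined into a single pooled result. The study numbers below are invented purely for illustration.

    import math

    def pool_fixed_effect(studies):
        # studies: list of (effect_estimate, standard_error) pairs.
        # Weight each study by the inverse of its variance, then
        # combine into one pooled estimate with its own standard error.
        weights = [1.0 / se ** 2 for _, se in studies]
        pooled = sum(w * eff for (eff, _), w in zip(studies, weights)) / sum(weights)
        pooled_se = math.sqrt(1.0 / sum(weights))
        return pooled, pooled_se

    # Invented example: three small trials, all pointing the same direction.
    trials = [(-0.30, 0.10), (-0.25, 0.15), (-0.35, 0.12)]
    effect, se = pool_fixed_effect(trials)
    print(f"pooled effect {effect:.2f} +/- {1.96 * se:.2f} (approx. 95% interval)")
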
So, that's the top of our hierarchy. One step down would be the individual randomized controlled trial. We have to understand what this is about, because this is kind of the gold standard for trials. Here we're talking about an individual study, so it's never going to be quite as persuasive as a group of studies that are in agreement. But if done properly, it's a pretty good indication that there's something there that we want to take a look at.

We should also understand, though, that not all randomized controlled trials are equal in weight and reliability. Here are some key questions we would ask when taking a look at a randomized controlled trial; they will help us determine how much validity we place in it.

So, first, did the study ask a clearly focused question? A good study will be designed to address a specific question and to stay focused on that question. If you did a study of heart disease and along the way noticed a result affecting kidneys, that's not focused. It may suggest something worth looking at, but the appropriate response would then be to design a study to look at the kidney problems.

Second, was the study a randomized controlled trial, and was that appropriate? While randomized controlled trials are the gold standard, they're not always the most appropriate way to study something, as we mentioned above.

Next, were participants appropriately allocated to intervention and control groups? Now, this is a question of randomization. The mathematics of probability require that every study member has an equal probability of being assigned to the control or to the study group. But there can sometimes be reasons to use things like stratification. This helps when you need to ensure, for example, that both men and women are properly represented in a study that is meant to apply to both sexes.

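As a rough illustration, here is a minimal Python sketch, not from the episode, of how stratified random allocation might look: participants are shuffled within each stratum (here, a hypothetical recorded sex field) and then split between treatment and control, so both groups end up with roughly the same mix.

    import random

    def stratified_allocation(participants, strata_key, seed=0):
        # Group participants by stratum, shuffle within each stratum,
        # then split each stratum between treatment and control.
        rng = random.Random(seed)
        strata = {}
        for p in participants:
            strata.setdefault(p[strata_key], []).append(p["id"])
        assignment = {}
        for members in strata.values():
            rng.shuffle(members)
            half = len(members) // 2
            for pid in members[:half]:
                assignment[pid] = "treatment"
            for pid in members[half:]:
                assignment[pid] = "control"
        return assignment

    # Invented example: four participants, two strata.
    people = [
        {"id": 1, "sex": "F"}, {"id": 2, "sex": "F"},
        {"id": 3, "sex": "M"}, {"id": 4, "sex": "M"},
    ]
    print(stratified_allocation(people, "sex"))
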
Next, were participants, staff, and study personnel blind to the participants' study groups? This is the double-blind requirement we have discussed previously, and it is important to ensure that no one has a biased view of the outcomes. The participants do not know whether they are in the study group or the control group, and neither do the people running the study.

Next, were all the participants who entered the trial accounted for at its conclusion? One thing you need to guard against is dropping inconvenient data. If you started with 100 people in your study but only report results for 90, what happened to the other 10 people? There can be legitimate reasons that people drop out or are dropped by the study, but you need to account for them, so that we know you are not trying to bias the results by getting rid of data points that might contradict your conclusions.

Another question: were participants in all groups followed up and data collected in the same way?

Next, did the study have enough participants to minimize the play of chance? Sample size matters in statistics. To state the obvious, a study of two people is nothing more than an anecdote. It may be right or it may be wrong, but you should never rely on it. On the other hand, a study with a thousand people has a much higher probability of being right.

Next question: how are the results presented, and what are the main results? How precise are those results? How big is the treatment effect, and how does that compare to the margin of error? If your study showed a decline of three points in cholesterol with a margin of error of plus or minus 20 points, it's not very precise. There may be an effect, but you wouldn't place a lot of trust in it.

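Here is a minimal sketch, not from the episode, of how sample size drives that margin of error: the usual approximation for a 95% interval around a sample mean is about 1.96 times the standard deviation divided by the square root of n. The standard deviation of 40 points used below is an invented number, chosen only to echo the cholesterol example.

    import math

    def margin_of_error(std_dev, n, z=1.96):
        # Approximate 95% margin of error for a sample mean,
        # assuming roughly normal data: z * std_dev / sqrt(n).
        return z * std_dev / math.sqrt(n)

    # Invented cholesterol example: an observed drop of 3 points, with
    # individual readings spread with a standard deviation of about 40.
    effect = 3
    for n in (20, 100, 1000):
        moe = margin_of_error(40, n)
        verdict = "fairly precise" if moe < effect else "too noisy to trust"
        print(f"n={n:4d}: {effect} +/- {moe:.1f} points -> {verdict}")
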
And the final question for randomized controlled trials: were all important outcomes considered, and can the results be applied to your local population? If you're a pediatrician and the study was entirely made up of adults, the results might be valid, but do they really apply to your population? And if you're asking whether the study applies to you, a similar question comes up: was the study entirely of men and you're a woman?

Was it entirely of people in a different racial group? And yes, that can matter in some cases. So randomized controlled trials can be a little bit tricky. They're invaluable if done well, but you have to take great care in doing them well.

Now, if you can't do a randomized controlled trial, the next level down in our hierarchy is something called a cohort study. Cohort studies follow a group of similar people, i.e. a cohort, over time, and can be useful, particularly in epidemiological studies. By definition, there is no control group involved, which is why these rank below randomized controlled trials in the hierarchy.

One of the classic cohort studies is the Framingham Heart Study. It studies the residents of Framingham, Massachusetts, in the United States, and since its beginning in 1948, it has now moved to the third generation of participants. Much of our current knowledge about hypertension and heart disease comes out of this massive cohort study. But it also has been criticized for over-estimating some of the risks, and there are questions about how well its results apply to other populations.

Next step down is a case-control study. These studies attempt to match people with a particular condition with other similar people who do not have that condition. These may appear superficially similar to randomized controlled trials, but they are different in very important ways. These are observational studies; the people doing the study are not in any way blind, nor are the participants. And there is no scope for randomization, because each participant was deliberately selected for the study.

One step down from that: cross-sectional surveys. These are also observational, but in this case we are looking at a population of some kind at a specific instant in time. So if case-control studies are, you might say, the less useful version of randomized controlled trials, you could say cross-sectional surveys are the lesser version of cohort studies. Generally these studies are done using data that is routinely collected, and because that data is routinely collected, they are inexpensive to do. But this also means that the data was not collected to answer the specific question you may have.

Finally, at the very bottom, case reports. These are reports about specific individual cases. They may provide a clue, but you don't have a sample, a control, etc. A good example is the cases that Sigmund Freud reported, and when you understand how little validity Freud's results enjoy today, you see the weakness in this particular approach. There is a reason it is at the bottom. To me, case reports are like the people who say, you know, I know this guy who was not wearing his seatbelt. He got into an accident, was thrown clear, and if he had stayed in the car, he would have died. Therefore, I am never going to wear a seatbelt. And it is like, no, you are an idiot. I am sorry.

So, summarizing the hierarchy from best to worst looks like this:

1. Systematic reviews and meta-analyses of randomized controlled trials with definitive results
2. Individual randomized controlled trials
3. Cohort studies
4. Case-control studies
5. Cross-sectional surveys
6. Case reports

So, you should place the most trust in systematic reviews and meta-analyses and the least trust in case reports.

So, this is Ahuka for Hacker Public Radio, signing off and reminding you, as always, to support free software. Bye-bye.

You've been listening to Hacker Public Radio at HackerPublicRadio.org. We are a community podcast network that releases shows every weekday, Monday through Friday. Today's show, like all our shows, was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is. Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club, and is part of the binary revolution at binrev.com. If you have comments on today's show, please email the host directly, leave a comment on the website, or record a follow-up episode yourself. Unless otherwise stated, today's show is released under a Creative Commons Attribution-ShareAlike 3.0 license.