Episode: 2685
Title: HPR2685: Scientific and Medical Reports
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr2685/hpr2685.mp3
Transcribed: 2025-10-19 07:29:29
---
This is HPR Episode 2685 entitled Scientific and Medical Reports, and is part of the series
Health and Health Care.
It is hosted by Ahuka, is about 14 minutes long, and carries a clean flag.
The summary is: we need to be careful about evaluating news reports about medical studies.
This episode of HPR is brought to you by AnHonestHost.com.
Get 15% discount on all shared hosting with the offer code HPR15. That's HPR15.
Better web hosting that's honest and fair at AnHonestHost.com.
Hello, this is Ahuka, welcoming you to Hacker Public Radio, and another episode in our series
on health and taking care of yourself.
I want to spend a few episodes talking about evaluating medical reports.
We get a lot of them, but do we really know how good they are?
How do we evaluate them?
There are some issues that I think really need to be discussed.
I know I do, at least, and I assume this is true for a lot of people.
You get a news story every day about some new medical breakthrough or discovery of some
kind, and God forbid you go online, there is all kinds of nonsense there, unlike, say,
a television news program, or a newspaper or a magazine, which are what people refer
to as legitimate news sources.
When you get online, it's always some amazing trick doctors don't want you to know
about, and it's like, why don't they want you to know?
That's never really explained.
I'm not quite sure what's going on with some of this stuff.
Or an amazing new diet breakthrough that lets you eat as much as you want of anything
you like and still lose weight.
If you believe any of that, I would like to interest you in a bridge I have for sale.
But even the legitimate news sources have a problem, which is that they have what is
called the news hole that has to be filled every day.
And stories about health and medicine are popular, people like hearing about this stuff.
The problem is that making these stories sound exciting almost always means, at the very
least, overstating the results and may mean hyping a result that does not exist.
Now to give you some idea of how bad this is, I'm going to reference an article from a
place called journalistsresource.org, called "Covering Health Research: Choose Your Studies
and Words Wisely."
This article is very enlightening if you have never looked closely at this issue.
Now in this article, they cover the results of a study by Noah Haber and others published
in something called PLOS ONE.
Now you can go to the original PLOS ONE paper if you like.
And I've got links to all of this in the show notes, but if you're not used to reading
academic papers, I think the journalistsresource.org article is much more accessible.
Now Haber and his co-researchers enlisted 21 reviewers, all of whom had at least a master's
degree and a majority of them had enrolled in or completed a doctoral program.
They in turn looked at 64 articles that were among the most shared on Facebook and Twitter
and then at the 50 studies that were the basis for these stories.
Now the first issue they dealt with was causality.
To say in a scientific study that A causes B requires some pretty strict high quality evidence.
You may have heard the truism that correlation is not causation, and that is true.
For example, a study of food and drug use could show that drinking milk as a child causes
opioid addiction.
After all, the addicts all consumed milk as children, didn't they?
Now, in reality, though, that's not the case, and no reputable scientific study would
claim that.
But you have to watch out for the opposite error.
The opposite error is when someone piously pipes up with correlation is not causation,
and then dismisses anything they don't like.
This is an error because every causation relationship starts with a correlation by definition.
So correlation is not causation should be something that tells you, okay, there may or may not
be something here; we need further study to pin that down.
It's not a get-out-of-jail-free card, and I see a lot of people doing that these days.
Frequently it's with something like climate change: you point out all these studies
proving that there's climate change, and it's "ah, correlation is not causation, I can ignore you.
Go away."
So that's not a good thing to do.
Now, in Haber's study, they felt that the claims in many papers were stronger than the
evidence really supported.
They said that a third of the papers they reviewed made claims that the data could not support.
And often the language used is a bit weasel worded, like saying there is an association
between A and B.
Well, association is just another term for a correlation, which may or may not mean anything.
It's not technically wrong to say that, but what happens when a journalist gets that paper?
Will the journalist be as careful as they should be? In many cases, no.
But they also warn against dismissing all associational studies, as we were just talking about.
Causation always starts with correlation.
It's something you do want to pay attention to.
You just don't want to bet your life savings on it without better evidence.
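To make that point concrete, here is a minimal sketch, not from the episode, of how a hidden
confounder can produce a correlation with no causation behind it. The variable names and
numbers are invented purely for illustration.

    # Minimal sketch (not from the episode): a hidden confounder drives both
    # "milk" and "outcome", so they correlate even though neither causes the other.
    import numpy as np

    rng = np.random.default_rng(42)
    n = 10_000
    confounder = rng.normal(size=n)            # shared background factor (hypothetical)
    milk = confounder + rng.normal(size=n)     # influenced by the confounder
    outcome = confounder + rng.normal(size=n)  # also influenced by the confounder

    r = np.corrcoef(milk, outcome)[0, 1]
    print(f"correlation between milk and outcome: {r:.2f}")  # about 0.5, with no causal link

Further study, such as a controlled experiment, is what separates a confounded correlation
like this from a real causal effect.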
Now, the article in journalistsresource.org is aimed at journalists
and wants to encourage better articles, so they say: check with the author of a paper and
ask them straight out if what they found is causal.
Now for you and I, that might not be an option, though there is no law against it as far as
I know, but it is one way to get a handle on something.
Next you want to consider the peer review process.
The best quality research goes through peer review before it is published, which means
that scientists who are in the field have read the paper, examined the methods employed,
and looked at the conclusions to determine if the appropriate standards have been met.
In many cases, the reviewer will raise questions, or even suggest additional work be done
before a paper meets the standards for publishing.
This is certainly the process for the major journals, but lately there has been a push
to open up the process.
Many of the major journals are very expensive, and can delay publication by a year or more.
In the age of the internet, that is seen as unnecessary and a bit elitist, so many researchers
have taken to publishing their papers online.
PLOS 1, in fact, is an online journal that incorporates peer review, and it is part
of a larger family.
PLOS stands for Public Library of Science, and the Public Library of Science has a number
of journals, mostly focusing on biology, medicine, and life sciences.
Now in other sciences, there is something that is spelled arXiv, but it is pronounced
"archive."
What arXiv does is it focuses on what are called pre-print articles that may later be published
in a traditional journal, although I think arXiv is starting to have some status on its
own.
There is moderation, but not necessarily any technical peer review for these articles.
They are called pre-publication, so that is supposed to suggest that later on they probably
will get published in a major journal of some kind.
Now, statistical significance.
Medical and biological statistics is complex.
People get PhDs in this stuff, and it is regarded as one of the more difficult ones to get.
And full disclosure, I am not one of the people who has done this.
I do not have a PhD in medical and biological statistics, or bio-stats, as it is usually
referred to.
Now I have, however, taught statistics at the university level.
I am an economist by training, so the stats I taught were more in the business and economics
area, but I think I am qualified to give some basic guidance on how this stuff works.
Now if a study is well done, there will be a test of significance that determines whether
or not you have a real result.
Generally, the way you should proceed is to state a hypothesis up front.
For example, eating breakfast will raise a child's grades.
That is a decent hypothesis, worth studying.
Then you gather the data that tests this hypothesis.
Ideally, you would have a study that has a study group, that would be the children getting
breakfast, and a control group, children who do not get breakfast, and right away you see
how tricky this is.
Who on earth is going to make a bunch of children go without breakfast?
I can just picture the politicians holding hearings on that one.
Anyway, once the data is gathered, you do a test.
The way this is done may be a little counterintuitive, but it works like this.
Your hypothesis is that eating breakfast results in better grades.
You therefore have something that is called a null hypothesis, which is that eating breakfast
does not improve grades.
You employ a statistical test.
In this case, let's suppose it's a T-test, which is a very common test in statistics.
You choose a level of significance, which generally, in most cases, it's going to be .05.
You compute a test statistic from the data, and you compare that test statistic to your
table of T-statistics.
You may well have data that looks like eating breakfast improves grades, but you want
to guard against any random chance.
So if the probability of getting that result due to pure chance from a population where
there is no effect of breakfast is .05 or more, you fail to reject the null hypothesis.
In other words, you did not find a statistically significant improvement in student grades.
A little tricky, isn't it?
So what's going on with this table of test statistics?
So the T-test is just one of a number of tests that are out there.
There's an F-test, a chi-square test, etc.
All of them are based on an analysis of groups of data.
What you're trying to do is when you collect the data, you're trying to ask a question that
says, could I, by pure chance, have gotten this result in a population where there is
no relationship at all, and that's really what you're trying to get at.
And so when you take a significance level of .05, what you're saying is, I want to make
sure that there is a less than 5% chance, could be 4.9%, but less than 5%, that I would get
the wrong result here.
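As a concrete illustration, here is a minimal sketch, not from the episode, of the
breakfast-and-grades test just described, using made-up grade numbers for a hypothetical
study group and control group. A statistics library computes the p-value directly rather
than looking it up in a printed table of T-statistics, but the logic is the same: compare
the p-value to the chosen .05 significance level.

    # Minimal sketch (not from the episode): a two-sample T-test on invented data
    # for the hypothesis "eating breakfast raises grades".
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    breakfast = rng.normal(loc=78, scale=10, size=50)     # grades, study group (hypothetical)
    no_breakfast = rng.normal(loc=75, scale=10, size=50)  # grades, control group (hypothetical)

    alpha = 0.05  # chosen significance level
    t_stat, p_value = stats.ttest_ind(breakfast, no_breakfast)

    print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
    if p_value < alpha:
        print("Reject the null hypothesis: statistically significant difference in grades.")
    else:
        print("Fail to reject the null hypothesis: no statistically significant difference.")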
Now, there's some interesting consequences to all of this.
By definition, a certain percentage of the time you will reach the wrong conclusion.
This is all based on probabilities.
Statisticians refer to this as Type I and Type II error, but if you want, you could call it false
positive and false negative.
In this case, where we assume a T-test with a significance level of .05, we will fail to reject
the null hypothesis if we get a p-value, or probability, of .05 or more, that is, 5% or more.
Well, given that randomness, even when there is no real effect, pure chance will push the
p-value below .05 about 5% of the time, so we will be wrong in our conclusion 5% of the time,
or one time out of 20, even if the research was done 100% properly by good researchers
who do not make any mistakes at all.
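A quick way to see that one-in-twenty figure is to simulate it. The sketch below, not from
the episode, draws both groups from the same population, so the null hypothesis is true by
construction, and counts how often a T-test at the .05 level still comes out "significant";
the rate lands near 5%.

    # Minimal sketch (not from the episode): false positive rate of a T-test at
    # the .05 level when there is genuinely no effect (the null hypothesis is true).
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(1)
    trials = 10_000
    false_positives = 0
    for _ in range(trials):
        a = rng.normal(size=30)  # both groups drawn from the same population
        b = rng.normal(size=30)
        if stats.ttest_ind(a, b).pvalue < 0.05:
            false_positives += 1

    print(f"false positive rate: {false_positives / trials:.3f}")  # close to 0.05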
Now the proper conclusion to all of this is not, as some might have it, that nobody knows
anything, so do whatever you feel like.
We've made great strides in medicine in the last few decades.
Many diseases that were once automatic death sentences, such as many forms of cancer,
now can be managed or even cured.
We do have a big problem, though, with misplaced cynicism and distrust that leads to insane
ideas like the one that vaccinations are bad for you.
The only way to reliably avoid such things is to ground our thinking in science, but
that in turn means understanding how science works and how we should interpret the results
we get.
So, this is Ahuka for Hacker Public Radio signing off, and as always reminding you to support
free software.
You've been listening to Hacker Public Radio at HackerPublicRadio.org.
We are a community podcast network that releases shows every weekday Monday through Friday.
Today's show, like all our shows, was contributed by an HPR listener like yourself.
If you ever thought of recording a podcast, then click on our Contribute link to find out
how easy it really is.
Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club,
and is part of the binary revolution at binrev.com.
If you have comments on today's show, please email the host directly, leave a comment on
the website or record a follow-up episode yourself.
Unless otherwise stated, today's show is released under the Creative Commons, Attribution,