Episode: 3025
Title: HPR3025: Keep unwanted messages off the Fediverse
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3025/hpr3025.mp3
Transcribed: 2025-10-24 15:21:41
---
This is Hacker Public Radio Episode 3025 for Friday, 6 March 2020.
Today's show is entitled Keep Unwanted Messages off the Fediverse
and is part of the series Social Media. It is hosted by Ahuka
and is about 15 minutes long
and carries a clean flag. The summary is:
ActivityPub Conference 2019.
Techniques for fighting spam and unwanted messages in the Fediverse.
This episode of HPR is brought to you by archive.org.
Support universal access to all knowledge
by heading over to archive.org forward slash donate.
Music
Hello, this is Ahuka. Welcome to Hacker Public Radio
and another exciting episode.
I'm going to continue my look at the ActivityPub Conference of 2019.
This is a talk by Serge Wroclawski.
And the title of the talk is Keep Unwanted Messages off the Fediverse.
And as with all of these, I've got a link to the video in the show notes.
So, you know, if there's something you want to get the full flavor of,
by all means, go take a look at the video.
So, unwanted messages. Well,
that's a problem everywhere, isn't it?
Spam and abuse can be found in all online social media.
But in a federated decentralized system,
there are particular problems because no one is in charge.
So, Serge, I think, designed this talk, my feeling is, more to spark discussion
than to definitively resolve the problem.
But it's something that he obviously is very concerned with.
And he is active in this community.
So, what are the unwanted messages?
First of all, abusive messages and notifications.
I think that's fairly self-explanatory.
So, I'm not going to give examples.
I like to keep my G rating.
So, we all know what abusive is.
Now, there's something called Follow Spam.
This is when someone makes a follow request.
And when you click on it, you see a spam message.
Then there are the archive trolls.
That's when someone says, I'm going to monitor and archive everything you do online
so I can harass you.
Okay, unsolicited commercial messages.
Again, it's kind of self-explanatory.
That's the classic spam message.
And untargeted hate speech on the global feeds.
Now, the fact that it's untargeted means it's not necessarily directed at you.
But it's nasty, it's hate speech.
Okay?
Now, some people just love to spew this stuff all over.
And they'll get on any platform they can to put their trash out there.
Now, this is not unique to the Fediverse.
You know, any open communication system will have that.
Now, in some places, what that means is, if you have surveillance capitalism going on,
because you're on Facebook, let us say,
they will have people who monitor what is going on.
And you may have one of your posts removed because it violates their standards.
Twitter claims to have something similar.
Jack Dorsey says they will remove certain things.
So it's not just a Fediverse problem.
But what makes it more difficult here is that it's very hard to stop any messages
that come from outside your domain of control.
And in the Fediverse, your domain of control is pretty narrow.
Okay?
If it's on a different node, i.e. a different server,
it's up to the admin of that server of that node,
whether or not that's something they think should be removed.
And as for whether you can block them, well, you can block a particular user.
But they may be talking to other people on your node,
and unless you're the admin, how do you stop that?
So what that usually boils down to is finding a node that shares your values.
Now, you know, I'm on a node like that, and I talked about this.
My Mastodon instance, octodon.social, has a list of rules.
And, you know, certain things are not tolerated.
And if someone posts something that violates those rules,
I can contact the admin and say, you know, this person's doing that.
And then it's up to the admin.
Do they remove them from the server entirely?
Do they give them a warning? You know, it's the admin controls that.
Now, one of the things we can do is we can learn from the past,
because none of this stuff is new.
Email systems and Usenet all had the same problems.
And believe me, I lived through all of that on Usenet.
Back when certain people would complain about September,
because that's when new students would be getting on the internet.
You don't know what it was like back in the 80s.
So, what answers do we have?
And this is where I think we need to start talking about potential solutions.
First of all, sender authentication.
If you can hide who you are, it's a lot easier to send abusive messages and spam.
So, sender authentication is a very important part of combating this problem.
Open relays are bad. Email has proved that.
Open SMTP servers pretty much do not exist any longer,
because that was a prime gateway for spam.
It used to be that in the early days of the internet,
any SMTP server would accept any email message.
You didn't need to have an account there or anything.
It's just, oh, you want to send email fine. Give it to me. I'll move it along.
We don't do that anymore.
That's not allowed.
Networks that require pre-established relationships don't work, okay?
And why do they not work? Well, you know,
I can say you can only send me a message if we already have a relationship.
Okay, that works up to a point, but how do you find new friends?
If no one can, you know, there's got to be some sort of mechanism for finding people.
Okay?
And whatever we do, it has to be a decentralized system with no centralized mediators.
That's the whole point of the Fediverse.
So how are we going to, you know, that's an interesting problem.
This is not simple. When new accounts and new nodes can be created by the thousands in mere seconds,
human moderation does not scale.
There is no magic solution.
What is needed instead is defense in depth with a variety of techniques,
some layered and some independent of each other.
Now, there are some things that already work.
HTTP signatures provide sender authentication.
So we have that. That's a good thing.
We can have actor object validation by checking object IDs.
We can hide who we are following and who follows us to prevent spammers from discovering our social graph.
And there is a standard for JSON web signatures that could provide added protection if it is implemented widely.
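As a rough sketch of what those checks might look like on a receiving server (the function names and the use of Python's cryptography library here are my own assumptions, not anything specified in the talk):

    # Hedged sketch: simplified sender authentication and actor object validation,
    # not the full HTTP Signatures spec. Assumes the sender's actor document
    # (containing a PEM-encoded public key) has already been fetched over HTTPS.
    from urllib.parse import urlparse

    from cryptography.hazmat.primitives import hashes, serialization
    from cryptography.hazmat.primitives.asymmetric import padding


    def verify_sender(actor_doc: dict, signed_bytes: bytes, signature: bytes) -> bool:
        """Sender authentication: did this request really come from the actor?"""
        public_key = serialization.load_pem_public_key(
            actor_doc["publicKey"]["publicKeyPem"].encode()
        )
        try:
            public_key.verify(signature, signed_bytes, padding.PKCS1v15(), hashes.SHA256())
            return True
        except Exception:
            return False


    def validate_object(activity: dict) -> bool:
        """Actor object validation: the object ID must live on the actor's own
        domain, so one server cannot forge messages on behalf of another."""
        actor_domain = urlparse(activity["actor"]).netloc
        object_domain = urlparse(activity["object"]["id"]).netloc
        return actor_domain == object_domain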
But one of the big ideas is OcapPub.
OCAP stands for object capabilities.
And this opens up another fascinating topic for further investigation.
I am probably going to come back to this one at some point.
But from their GitHub page, the foundation of our system will be object capabilities, OCAPs.
In order to give access to someone, we actually hand them a capability.
OCAPs are authority by possession.
Holding on to them is what gives you the power to invoke or use them.
If you don't have a reference to an OCAP, you can't invoke it.
Now, before we learn how to build a network of consent, we should make sure we learn to think about how an OCAP system works.
There are many ways to build OCAPs.
But before we show the very simple way we'll be building ours,
let's make sure we wrap our heads around the paradigm a bit better.
Now, OcapPub is a proposal by Chris Webber.
Remember him?
It would bring object capabilities to ActivityPub.
They should be simpler than access control lists and can be revoked or transferred.
This should lead us to networks of consent.
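To make "authority by possession" concrete, here is a toy illustration of the idea in Python; it is my own sketch, not OcapPub itself:

    # Toy illustration of object capabilities ("authority by possession"):
    # holding the unguessable token *is* the permission to invoke it.
    import secrets


    class Inbox:
        def __init__(self):
            self.messages = []
            self._capabilities = {}            # token -> description

        def grant_send_capability(self, note: str) -> str:
            """Hand someone a capability; whoever holds the token can invoke it."""
            token = secrets.token_urlsafe(32)  # unguessable reference
            self._capabilities[token] = note
            return token

        def revoke(self, token: str) -> None:
            self._capabilities.pop(token, None)

        def deliver(self, token: str, message: str) -> bool:
            """Invoking the capability requires possessing a valid token."""
            if token not in self._capabilities:
                return False                   # no capability, no delivery
            self.messages.append(message)
            return True


    inbox = Inbox()
    cap = inbox.grant_send_capability("a trusted friend: send me anything")
    inbox.deliver(cap, "hello")                # accepted
    inbox.deliver("guessed-token", "spam")     # rejected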
Now, one of the more fascinating ideas is what I've been thinking of for a long time.
Stamps.
Why is spam such a problem?
Because it is absolutely free to crank out gazillions of messages that nobody wants.
What if we set up a system that said, hey, send all the messages you want, but they cost a penny apiece?
Now, that penny is paid to the recipient.
So if you're sending me spam, I'm going to get a penny for every message you send.
Now, the whole idea is that that mostly will discourage people from doing it.
Because if it's going to cost them money to send messages, and they're trying to do this in bulk,
they're trying to send out 20 million of these things,
It's like, okay, that's $200,000.
And suddenly, that starts sounding like real money.
So wait a minute.
That means I have to pay.
Yes, but basically what happens is you pay a penny for every message you send.
You get a penny for every message you receive.
It probably about averages out.
You're not going to be spending more than you take in, at least not by very much.
You know, I used to be an economist.
So I always think of things like this as market failure problems.
And this is a market solution that I think does a lot of good.
Because the system we have now places all the burden on the receiver of messages
and none on the sender.
As the receiver, I have to be looking at all of these things to protect me from spam.
And that's why spam fundamentally breaks things.
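Putting rough numbers on the stamp idea, using the 20-million-message figure from the show (the per-user numbers below are made up purely for illustration):

    # Back-of-the-envelope stamp economics at a penny per message.
    STAMP = 0.01                               # dollars per message

    # Bulk spammer: 20 million messages sent, essentially none received.
    spammer_cost = 20_000_000 * STAMP          # $200,000

    # Ordinary user: sends and receives roughly the same number of messages,
    # so payments out and payments in roughly cancel.
    sent, received = 300, 280
    ordinary_net = (received - sent) * STAMP   # about -$0.20

    print(f"spammer pays ${spammer_cost:,.0f}; ordinary user nets ${ordinary_net:+.2f}")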
Now, on top of something like this, we can add a layer or two of traditional content classification
to filter using some techniques like Bayesian filtering, sentiment analysis,
image classification, and so on.
All of which are existing modes of providing security.
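As one concrete example of that classification layer, a bare-bones Bayesian-style word filter might look something like this; it is a toy sketch of my own, not any particular Fediverse implementation:

    # Toy Bayesian-style spam score: combine per-word spam probabilities
    # learned from previously labelled messages. Illustration only.
    from collections import Counter


    class WordFilter:
        def __init__(self):
            self.spam_words = Counter()
            self.ham_words = Counter()

        def train(self, text: str, is_spam: bool) -> None:
            target = self.spam_words if is_spam else self.ham_words
            target.update(text.lower().split())

        def spam_probability(self, text: str) -> float:
            p_spam, p_ham = 1.0, 1.0
            for word in text.lower().split():
                spam = self.spam_words[word] + 1   # Laplace smoothing
                ham = self.ham_words[word] + 1
                p = spam / (spam + ham)
                p_spam *= p
                p_ham *= 1 - p
            return p_spam / (p_spam + p_ham)


    f = WordFilter()
    f.train("buy cheap pills now", is_spam=True)
    f.train("notes from the activitypub conference", is_spam=False)
    print(f.spam_probability("cheap pills"))       # well above 0.5, so likely spam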
So what does the system like this look like?
Remembering that we call this defense in depth with layered techniques.
Here is just one possible example.
A message is generated and sent to you.
It first goes through OcapPub.
And if the sender is someone you have given appropriate rights to, it goes right to your priority inbox.
Now the priority inbox bypasses the rest of the filtering.
So, for instance, I give my wife a right to send me any message at any time
and it goes right to my priority inbox.
That's an object capability that I have given to her.
Otherwise, if it does not go to the priority inbox, OcapPub routes it to the public system
and now it starts to go through additional layers.
If it has a stamp, it gets passed to the next layer.
Otherwise, bit bucket. Then the signature is checked.
If the signature passes sender authentication, it gets passed to the next layer.
Otherwise, /dev/null. Then the object validation is checked.
And either it passes or again gets discarded.
And then the content filtering is applied.
And if it passes all of that, it goes into your main inbox.
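Sketched in code, that flow might look roughly like this; the Message fields and the content_ok callback are hypothetical stand-ins for the real OcapPub, stamp, signature, validation, and content-filtering layers:

    # Rough sketch of the defense-in-depth routing described above.
    from dataclasses import dataclass, field
    from typing import Callable, Optional


    @dataclass
    class Message:
        sender: str
        body: str
        capability: Optional[str] = None   # OcapPub capability token, if any
        stamp_paid: bool = False           # did the sender attach a stamp?
        signature_ok: bool = False         # did HTTP signature verification pass?
        object_ok: bool = False            # did actor object validation pass?


    @dataclass
    class Recipient:
        priority_capabilities: set = field(default_factory=set)
        priority_inbox: list = field(default_factory=list)
        main_inbox: list = field(default_factory=list)

        def route(self, msg: Message, content_ok: Callable[[str], bool]) -> str:
            # 1. OcapPub: a sender holding a priority capability bypasses the rest.
            if msg.capability in self.priority_capabilities:
                self.priority_inbox.append(msg)
                return "priority inbox"
            # 2. Stamp check: no stamp, bit bucket.
            if not msg.stamp_paid:
                return "discarded: no stamp"
            # 3. Sender authentication via the HTTP signature.
            if not msg.signature_ok:
                return "discarded: bad signature"
            # 4. Actor object validation.
            if not msg.object_ok:
                return "discarded: object validation failed"
            # 5. Traditional content classification (Bayesian filter, etc.).
            if not content_ok(msg.body):
                return "discarded: content filter"
            self.main_inbox.append(msg)
            return "main inbox"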
Now, this is, I think, a good starting point for discussion.
We could quibble with some of the details.
And I think that's really what Serge was doing here: saying,
well, if you want to solve it, here's what a solution starts to look like.
And we can then start having a discussion about,
is this really the way we want to go?
So, this is Ahuka, for Hacker Public Radio, signing off
and reminding you to support free software.
Bye-bye.
You've been listening to Hacker Public Radio at HackerPublicRadio.org.
We are a community podcast network that releases shows every weekday, Monday through Friday.
Today's show, like all our shows, was contributed by an HPR listener like yourself.
If you ever thought of recording a podcast, then click on our contribute link
to find out how easy it really is.
Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club
and is part of the binary revolution at binrev.com.
If you have comments on today's show, please email the host directly,
leave a comment on the website or record a follow-up episode yourself.
Unless otherwise stated, today's show is released under a Creative Commons
Attribution-ShareAlike 3.0 license.