Episode: 3082
Title: HPR3082: RFC 5005 Part 1 Paged and archived feeds? Who cares?
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3082/hpr3082.mp3
Transcribed: 2025-10-24 16:24:05
---
This is Hacker Public Radio episode 3,082 for Tuesday the 26th of May 2020.
Today's show is entitled RFC 5005 Part 1: Paged and Archived Feeds,
who cares.
It is hosted by clacke
and is about 35 minutes long
and carries a clean flag. The summary is:
an interview with two passionate RFC 5005 fans on how to handle big Atom feeds.
This episode of HPR is brought to you by AnHonestHost.com.
Get 15% discount on all shared hosting with the offer code
HPR15, that's HPR15.
Better web hosting that's honest and fair at AnHonestHost.com.
Hi, I'm clacke.
I'm here with Fluffy and Jamey.
Hello.
HPR uses RSS feeds and Atom feeds to get stuff into our podcatchers and other places.
And when you do that, you need to make a choice.
You can have a full feed that covers all the episodes, which in Hacker Public Radio's case
is over 2,000 episodes, over 3,000 now.
Or you have a shorter feed that just covers the latest stuff.
But who cares? Apparently at least three people care because we're going to talk about that.
Which one of you wants to start?
Why is it important to not send the whole list every time?
Well, I noticed with HPR in particular,
the whole list is five megabytes, it says here.
So if you're fetching the entire feed, five megabytes,
and your feed reader is fetching that once every hour,
and everyone who's listening to HPR is doing this, that's a lot of bytes.
At best, just wasteful.
But I've also been told that there are clients that will just fall over
if you hand them too many entries in a feed.
I've been told that iTunes won't handle feeds above a certain size.
I gather that among podcasters there have been technical issues for a while now
with having feeds that carry the full history in them.
Yeah, and also apart from the general feeling of wastefulness,
I'm using AntennaPod on my mobile.
And there, if you waste five megabytes one or two times per day, that adds up,
and it's actually noticeable in your wallet, possibly.
Right.
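(For a rough sense of the numbers being described, purely as an illustration: a five-megabyte feed fetched once an hour is about 120 megabytes per day per subscriber, or roughly 3.5 gigabytes a month; a thousand subscribers polling that way would pull on the order of 120 gigabytes a day from the server, almost all of it entries their readers have already seen.)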
Yeah, and also there are a lot of websites
which will have even more than 2,000 entries.
They'll have tens or even hundreds of thousands.
Look at news sites.
A lot of those sites will only have the most recent 30 articles in their feed.
But then what happens if you as a reader
want to, like... you remember: oh, yeah, a couple of months ago
I read this thing on this site,
and I kind of know when it was,
and I know sort of what it was about,
but the site's search engine is really terrible.
I wish I could just look for it in my feed reader.
And theoretically, a feed reader could have a good search mechanism,
or it could show you which articles you've read, or the like.
But if the feed is only keeping the most recent 30 articles,
most readers will discard articles that have fallen out of the feed,
and with them the information for how to find the thing that you want to find again.
Yeah, when I find a new podcast, generally,
I want to find old episodes and see,
okay, I like this latest episode that got me here.
Now, what other things have they been doing?
And then it's pretty annoying if I have to go to the website and search around,
rather than just having everything in my podcast app.
You don't want to have to switch tools, right?
To have to switch back and forth between the app that's built the way you want it to be,
that's focused on your needs,
and the website that's built to be one-size-fits-all,
except it's actually one-size-fits-none.
Or one size fits the advertisers.
Right, yeah.
Yeah, on occasion I have solved this by finding old feeds through the Internet Archive
and piecing them together,
to have something to feed into my feed reader so that I could...
I mean, in the feed reader, I also have search on all the entries and all of that stuff.
And I'm missing that when I'm on the website.
So there's a solution to this.
And it's 13 years old.
Why is nobody using this?
I wish I knew.
I know Fluffy said things about that.
Yeah.
Oh, why you were...
My theories on it.
Yeah.
So, yeah.
So, I mean, RSS and Atom have had some amazing extensibility.
But the readers never really caught on to a lot of the capabilities, or a lot of them were just minimum implementations.
And then when Google Reader got terminated, people thought that was the end of RSS.
And everyone just switched over to the...
what in the end we call silos, as far as the means of consuming stuff:
people moving over to Facebook and Google Plus, which is also now terminated, of course.
But then...
So all these features that were available in Atom and RSS were just never really implemented at large.
And you can kind of see a similar thing happened with XMPP, where XMPP was a really good core of an instant messaging protocol.
And it had an amazing ecosystem of extensions that never got widely supported.
And then the bigger adopters of XMPP, in this case also Google, ended up discontinuing their support for XMPP as well.
Which again, made people decide it wasn't worth doing because Google's advertisers weren't finding a way of monetizing it.
Yeah, that's like the ultimate betrayal, right?
So they adopted XMPP and everyone who likes XMPP were super excited.
Like, wow, finally I can talk to other people than my nerdiest friends on this.
And then they even added video and voice.
And it was like, wow, they're contributing libjingle and all of these things.
And then Google said, like, no, actually we're not doing this.
So when was Google Reader terminated? Do you remember?
It's several years now, right?
Yeah, so it was like a...
Geez, I don't remember when either of these shutdowns were.
They were just so long ago, it feels like they were four times.
Google Reader was around 2012, 2011.
So I think Google Buzz came around 2010.
And it was terminated pretty soon after it was launched.
So I'm sure that Google Reader was at least after 2010.
But maybe not so far after.
Yeah, I feel like Google Reader was terminated around when Google Plus was rolling out.
Because I have it in my recollection, anyway,
that Google decided that Google Reader was now obsolete, because Google Plus was
how everyone was going to share everything.
Yeah, but Google Plus launch feels recent to me,
but I think that just means I'm old.
Yeah, no, Google Plus I think was 2011.
Wow, yeah.
So we're talking specifically about RFC 5005.
And it's a pretty simple specification, actually.
You can read it in, yeah, just 15 minutes to get a pretty good grasp of what it's about.
There's some details to sort out, but it's not super complicated.
It's just a couple of link rels and that's it, really.
And you have both been implementing this specification, right?
Yeah, on both sides, both producer and consumer.
Yeah, I've mostly been doing the producer side,
just because that's mostly what I've been developing, IndieWeb-wise.
And I've been doing some on each side.
And why do you do it?
If you're on the producer side, you have this chicken and egg problem.
Yes.
So you're trying to be the egg.
Pretty much.
Yeah, well, or it just feels right.
Like there's this thing that people should be using.
Yeah, I mean, it is so easy to implement.
At least with Publ, which is, you know, my site management system,
the RSS feed isn't anything special.
It just looks like any other index on the site.
And so adding in those two rel tags was the same as adding in the previous and next navigation links on an index.
So it was just so easy to do.
And even if RFC 5005 didn't exist,
I was going to just invent link rels for that anyway.
And the names they used for the rels are just so obvious that, I mean,
I probably would have ended up accidentally doing it independently of the RFC in the exact same way.
Well, not quite, because there's the fh:archive tag that gets added to indicate whether a feed is an archive feed.
But other than that, I mean, that was the only thing that was outside of what I had expected to put in.
Section four of RFC 5005, the archived feeds, is a little more complicated than people seem to expect it to be at first glance, for efficiency reasons.
And we should maybe go into that some, but section three especially looks just like HTML 4's link relations, the first/prev/next class of links.
Yeah, I didn't follow that up, but it says there's a note there to IANA, or whoever manages these rel values.
Yes, it just extends the meaning of the HTML rel values to mean basically this in Atom feeds.
So is that a part of HTML four, you said?
I believe it was interesting.
Yeah, well, link rel in general was an HTML 4 thing, and it did provide some semantic rels.
The IndieWeb ended up inventing a whole bunch of new rels for various purposes.
But yeah, things like prev, next, up, down.
Actually, I forget if down is a thing, but in any case, up is for, like, going up a category, I think.
Oh, I see.
Yeah.
Okay.
I'm not sure how down would work because you'd have more than one of those.
Well, I just have it on.
Yeah, that's right.
Yeah.
It's not like an ID.
You can have more than one.
Yeah.
Yeah.
So we mentioned on fedi before that Mastodon had Atom feeds with paging, and apparently they're gone now, because Mastodon doesn't have Atom feeds anymore.
And it never put them in the RSS feeds.
Right.
Which you can.
Yeah.
Well, I mean, Mastodon was using Atom because that's what OStatus used.
And I don't think it ever implemented 5005-compliant paging.
It did, but it was section three compliant.
But what do you actually need to do?
Because when I had a quick glance at 5005, it looked like everything here is basically optional.
Just put prev and next and boom.
Now you're doing it right.
Right.
Well, let's go through.
There are three major sections to RFC 5005.
Section two is for complete feeds, which is when you want to declare that everything that appears in this one feed document is everything that there is.
If you don't see it here, that's because it's gone or it never existed.
Right.
No longer applicable.
Yeah.
So you can do that today, obviously.
But what you need is some way of finding out that someone has done that, so that when you see such a feed you know
whether it makes sense to save old stuff you've seen or not.
So section two introduces one tag, which is just the complete tag.
It doesn't have any content.
It doesn't have any attributes.
It's just an empty tag that you add to the beginning of your feed.
And now you've declared that this is a complete feed.
So that one's really easy.
Yeah.
So a feed reader that doesn't know about 5005
will see this feed and then, as the feed changes, it will just accumulate new entries,
depending on how it's written.
But a feed reader that understands 5005 will see this tag and go: yeah, okay, I'm going to clear out the old entries now that I have the new complete view of this feed.
Right.
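For reference, the empty marker being described, and one way a reader could check for it, might look roughly like this (a sketch with a made-up feed, not HPR's actual markup):

    import xml.etree.ElementTree as ET

    FH = "http://purl.org/syndication/history/1.0"

    doc = """<feed xmlns="http://www.w3.org/2005/Atom"
                   xmlns:fh="http://purl.org/syndication/history/1.0">
      <title>Example complete feed</title>
      <fh:complete/>
    </feed>"""

    feed = ET.fromstring(doc)
    # If the marker is present, a 5005-aware reader can drop any stored entry
    # that no longer appears in this document.
    is_complete = feed.find("{%s}complete" % FH) is not None
    print(is_complete)  # True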
And the one prominent place I'm aware of that uses section two is the webcomic Poke the Penguin,
which has all 700-something pages of the comic in one feed.
And so I contacted the author and said, hey, how about you add this one tag?
And he's like, sure.
And so now it's there.
Okay.
I thought you were going to say he had, like, the top 10 most popular strips or something like that.
But it really is the complete feed of all the strips.
Yeah.
There are a couple of other webcomics that do that.
So the webcomic Oglaf, which is not safe for work.
That one has an accidentally complete feed, but I don't think they have the fh:complete tag in it.
I bet there are a lot of feeds, maybe the majority, that are complete but don't say so.
Right.
So, I mean, just a lot of things where, like, the website's misconfigured, or they didn't understand how a feed works, or they're just treating the feed as a sitemap, basically.
So in a lot of those cases it's accidentally a complete feed, except it doesn't mark it.
So that's complete.
And then there are the podcasts that occasionally have to go out with a short-notice episode.
Like: hi.
Sorry, everyone.
We've restructured our website.
And now you're going to see a lot of old episodes show up again.
Yeah.
So section two.
And then there's section three.
Section three is paged feeds.
And paged feeds are like the HTML 4 links.
You can have first, prev, next, and last links.
Those don't have much in the way of semantics.
There's some notion that there is a sequence of these feed documents.
I don't think there's really any indication of whether some of the entries are newer than others, or what order these are in, in any sense.
My understanding is that that that section is intended for things like search results.
So if you've got something where you do a search and the results are delivered to you as an RSS or Atom feed,
then you could have the first page of results be the thing you get immediately.
And then it has a link rel equals next, say, to the next page of search results.
And you could have some kind of feed reader that would let you choose to explore however far along the chain you want to.
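As a sketch of what the section three links can look like inside one page of such a feed (the URLs here are invented; the rel names are the ones the RFC defines, as best I recall them):

    page_two = """<feed xmlns="http://www.w3.org/2005/Atom">
      <title>Search results, page 2</title>
      <link rel="first"    href="https://example.com/results.atom"/>
      <link rel="previous" href="https://example.com/results.atom?page=1"/>
      <link rel="next"     href="https://example.com/results.atom?page=3"/>
      <link rel="last"     href="https://example.com/results.atom?page=42"/>
      <!-- the entries for this page -->
    </feed>"""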
Right. Or it basically just puts a lot more onus, or more complexity, on the reader itself, where the reader needs to be able to just page back and back and back until it sees that, OK,
these are all entries I've seen before, and then it just knows to stop. Or, well, it can't reliably know when to stop.
That's true.
Until there are no more links, anyway.
So this is like Tumblr-style pagination, where it's like a /page/5 or whatever.
So it's not very stable pagination.
Like, the meaning of the URL will change as more content rotates through.
And so I think the intention is that, in the case of getting a full archive of a website,
this is just: OK, I'm doing a one-time scan of the entire history of this site.
And then in the future, I will keep up to date just by updating from the most recent page.
Except you can't reliably just update from the most recent.
Well, right.
I mean, that's sort of.
I think it's even mentioned explicitly that a reader that consumes paged subscription feeds, as they call them,
cannot know that it's got all of the entries.
Right.
Because the pages might change as it's traversing them.
Yeah.
So that brings us to section four, which is archive feeds.
And this is where the really interesting part of the specification is.
It superficially looks a lot like section three.
You have these prev-archive and next-archive links.
But there's this one requirement that makes this very different,
which is that once you've published an archive feed at a given URL,
you must not change that in any meaningful way.
If you need to change stuff that's in that document,
then you need to publish it at a new URL.
And that gets kind of weird from an implementation perspective.
But the reason it's really important is that it means a feed reader can now just ignore any page at a URL it's already seen.
And that lets you really limit what you have to fetch.
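To make the shape concrete, here is a rough sketch of the two kinds of documents being described, with invented URLs: the subscription document points back into history, and each archive document is marked as an archive and keeps its content fixed once published.

    subscription = """<feed xmlns="http://www.w3.org/2005/Atom"
          xmlns:fh="http://purl.org/syndication/history/1.0">
      <title>Example feed</title>
      <link rel="self" href="https://example.com/feed.atom"/>
      <link rel="prev-archive" href="https://example.com/feed/2020-04.atom"/>
      <!-- the newest entries live here and may change freely -->
    </feed>"""

    archive_2020_04 = """<feed xmlns="http://www.w3.org/2005/Atom"
          xmlns:fh="http://purl.org/syndication/history/1.0">
      <title>Example feed: April 2020 archive</title>
      <fh:archive/>
      <link rel="current" href="https://example.com/feed.atom"/>
      <link rel="prev-archive" href="https://example.com/feed/2020-03.atom"/>
      <!-- April's entries, which should not change once this URL is published -->
    </feed>"""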
Is it implied that the first page is allowed to change?
Yes.
Because that was not an archive page.
That was not an archive page.
But it didn't seem explicit in the spec to me.
But it would have to, because otherwise you don't have a feed.
No, you just have a dead archive, not a live updating archive.
I thought it was explicit in there, but.
So actually, where in the spec does it say that the archive ones can never change?
Oh, I see: the set of entries contained in an archive document
published at a particular URL should not change over time.
Okay.
So technically, I'm not properly implementing section 4.
I think I've actually mentioned that to you before.
Probably.
And it also says explicitly that as a feed reader, when you reach a prev archive that you have seen, you can stop traversing.
Yes.
And that would imply that it has to not have changed.
The distinction, as far as which ones can change and which ones can't, they define specific terminology.
A subscription document is
the feed document that always contains the most recently added or changed entries.
Archive documents are feed documents that contain less recent entries.
And it's the archive documents that once published should not change over time.
So I'm going to lean on the fact that, in RFC-speak, should means that it can.
So it's should in the sense that yes, you absolutely can change them.
Nobody can stop you.
But your readers may not see those changes.
Right.
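A sketch of the reader-side logic this enables, under assumptions of my own (helper names invented, no error handling, no entry deduplication): fetch the subscription document, then follow prev-archive links backwards, stopping at the first archive URL already stored from an earlier run.

    import urllib.request
    import xml.etree.ElementTree as ET

    ATOM = "{http://www.w3.org/2005/Atom}"

    def fetch(url):
        # Fetch and parse one feed document.
        with urllib.request.urlopen(url) as resp:
            return ET.parse(resp).getroot()

    def prev_archive(feed):
        # Return the prev-archive link target, if there is one.
        for link in feed.findall(ATOM + "link"):
            if link.get("rel") == "prev-archive":
                return link.get("href")
        return None

    def update(subscription_url, seen_archives):
        # seen_archives: set of archive URLs remembered from earlier runs.
        feed = fetch(subscription_url)
        entries = list(feed.findall(ATOM + "entry"))
        url = prev_archive(feed)
        while url and url not in seen_archives:
            archive = fetch(url)
            entries.extend(archive.findall(ATOM + "entry"))
            seen_archives.add(url)  # archives shouldn't change, so the URL is enough
            url = prev_archive(archive)
        return entries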
Well, so in the case of on on my feeds, like the as things expire from like I think on my main feed or on my subscription feed,
it's always the like 30 most recent entries or something like that.
And then as those fall off, they move into the first archive feed.
So like the if someone is actually subscribed to it, they're still going to get all the content.
The the assumption I had in my implementation was that something would see, okay, this isn't the subscription feed.
I'm going to pick it up and just add that to this thing.
And then you would just go back into the archive feeds for like the first time.
Like the first time someone subscribes to a thing if it's going to do a shallow mirror of the website or whatever.
That it would go back into the into the prior archived feeds.
And then and also I mean, every now and then I'll all backfill old content from like the previous iteration of my site.
So where does that go?
Because I don't necessarily want it to show up as a new entry for everyone who's subscribing by also don't.
But you know, it needs to show up in the archive somewhere.
So right.
It'll get added to the archive anyway.
Yeah, so this gets tricky.
I've spent a lot of time thinking very carefully about the requirements in this section.
And I think it's one of the most unfortunate things about this spec
that there's very little text here, and it says exactly what the requirements are,
but it doesn't tell you why they're important, and it doesn't tell you how you might satisfy the requirements.
One way to deal with things like backfill is to think of your archives as being organized by last-modified date instead of first-published date.
One thing I was going to point out:
You were just describing that as things fall off
your subscription feed,
you add them to the most recent archive, or the appropriate archive feed.
And I believe you're doing that by month, right?
That was sort of a...
Yeah.
So month was the...
Previously I was doing it based on the stable pagination IDs, and you complained about that.
So.
So by month.
Yeah.
Was that like a max-ID kind of thing?
Right.
So I mean, actually the stable pagination IDs might be more appropriate,
and I should probably go back to that.
So the idea was that...
So in Publ, one of my core tenets is that all pagination should be stable, where
if an index gets a snapshot at a particular URL,
the content that is on that page should still be visible at that snapshot.
So when you paginate my website through the page-level previous/next links,
it actually uses the ID of the first article that should appear in that index view.
So the idea is that it's, okay,
show me the next 30 entries after ID, you know, 752 or whatever.
So what was happening before I switched to month-based archive views was
that every entry basically got its own archive feed associated with it.
And that archive feed would include that entry and then the 30 before it.
And so those were perfectly stable.
It's just that now you've got basically one feed per entry, and there's lots of overlap between them.
And so that just becomes another issue in terms of, like,
how is the client going to actually interpret this?
Yeah, I think the result of that was that
you'd basically have 30 different feeds, right?
Effectively changing all the URLs,
effectively changing all the URLs for all the archive pages every time you added a new entry.
And so basically... well, not the entire history every time, because it's every 30 entries.
Let's see... well, once you've seen 30 new entries, then yes.
So yeah, one of the big...
But yeah, one of the big shortcomings in 5005 is that it doesn't say anything about how readers are supposed to interpret anything.
And in fact, it doesn't even say anything about what previous or next mean, in terms of:
is previous the older stuff, or is it like the next page of this index, or what? So it's hard to...
It's hard to really interpret what the RFC is supposed to mean in a practical sense.
I will say section 4.2, which is titled Consuming Archived Feeds, does say a little bit about how readers are expected to work.
But it's, what is this, four paragraphs?
Yeah, it's very short.
Yeah, 4.1 is funny because it says publishers should construct their feed documents in such a way as to make duplicate removal unambiguous,
and then: see section 4.2.
And then it's not really clear how you...
That's in the second...
You just said that, yeah, you should have timestamps.
No, the second paragraph of 4.2 is...
Oh, yeah, it does talk about it.
Yeah, you should consider only the most recently updated entry.
And then it mentions that you should have the last-updated date on the entry.
Yeah, but then also without them...
So it does, actually.
There's also a notion that, because the archives,
the archive documents, are in a particular order,
you can talk about what the most recent archive document is.
So you take the version of the entry from the most recent archive document where it appears,
if you can't figure it out based on timestamps.
Does it say?
Yeah.
Oh yeah, otherwise it's determined by its feed-level or item updated element.
I'm looking at the previous paragraph.
Actually: if duplicate entries have the same timestamp, or no timestamps are available, the entry sourced from the most recently updated feed document should replace all of the duplicates of that entry.
Oh, yeah.
That's also a fairly minor implementation detail, not an important one, as opposed to the big questions that we'd have about the actual consumption
and, like, the deduplication of the feed itself.
So I mean, in general, I lean on the fact that Atom uses GUIDs for everything.
And so the idea is that if any two separate feeds refer to the same GUID, they should have the same content anyway.
So if you've got conflicts between two versions of that, you know, I guess it's just whichever one you see most recently wins.
So well, I don't think there's any reason to say that the two entries should have the same content because you're allowed to change entries over time.
Well, sure.
But I mean, if you have... well, okay, I'm using a very Publ-centered view of things.
They should logically point to the same entry.
Yeah, they should be equivalent to pointing to the same URL, even if the URL isn't the thing that changed.
So if you see, you know, GUID 12345 in two different feeds, but with the same target domain, I guess.
I mean, technically it should be even across domains, but for obvious security reasons you don't want to do that.
But you should have it so that, yeah, if you've got multiple feeds pointing to the same entry with the same GUID, they should, in any reasonable implementation in my opinion,
have the same content as well.
I think what's going on with RFC 5005 is that when it was published in 2007, it looks to me like the assumption was that people were publishing static documents,
not even dynamically generating their feeds.
And so the idea was that maybe you take your current feed document and rename it to be the archive for the current month, and then you leave it there.
And if you need to update something that was in the archives, you copy it into your current feed document,
but you don't touch the old copy.
And so it was important to be able to figure out what to do in that case.
And that's not going to be the common case now, because almost nobody hand-writes their feeds.
I can think of, like, two counterexamples, but yeah.
Questionable Content and Diesel Sweeties still hand-write their RSS, for some reason.
I was thinking of Something Positive.
Oh, they do.
Okay.
Yeah.
What is it with like the old school web cartoonists who all do that?
I know Randy of Something Positive is very proud of the fact that because there's no code running on his server, there's nothing to attack.
Yeah, but you can.
But we do that with static sites.
Yeah, I know.
I've been meaning to have that conversation with them someday.
So I was thinking about how you do this with a static site generator.
And I thought, uh, maybe you list all of the, for example, blog articles that you have.
And then, if you start from the end and you take every 15 and chunk them, that would be stable, even when you add things at the front.
And then the first, the head document, could be a dynamic number of entries, from 15 to 30, and then link to the top archive from there.
Yeah.
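A rough sketch of that chunking rule, with illustrative names and assuming the posts list is ordered oldest first:

    def split_feed(posts, size=15):
        # Freeze a chunk only once there are at least `size` newer posts, so
        # frozen chunk boundaries never move as new posts are added; the head
        # document then holds between 15 and 29 entries (once there are 15+).
        frozen = max(0, (len(posts) - size) // size)
        archives = [posts[i * size:(i + 1) * size] for i in range(frozen)]
        head = posts[frozen * size:]
        return archives, head

    # For example, with 44 posts this gives one frozen archive of 15 and a head
    # of 29; adding one more post freezes a second archive and the head drops
    # back to 15, but no existing archive changes.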
The one thing to think about is: then what if you do go edit any of the old entries, or delete an old entry, or backfill something so that its publication date is older?
Well, so deletion has its own separate RFC that almost no one implements.
I do.
But almost no one does the tombstones, the Atom tombstones, where you basically have a little stub entry at the time of deletion, which I didn't always do until you corrected me on this.
But at the time of deletion, the Atom feed has a little tombstone that just gives at least the GUID and the deletion time of the entry.
And then that tells readers, of which none support it as far as I know, but in theory it tells readers: OK, this item has been deleted.
Yeah.
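For reference, my understanding of what such a tombstone looks like in the feed, with an invented id and timestamp:

    deleted = """<feed xmlns="http://www.w3.org/2005/Atom"
          xmlns:at="http://purl.org/atompub/tombstones/1.0">
      <at:deleted-entry ref="tag:example.com,2020:post-123"
                        when="2020-05-26T10:00:00Z"/>
      <!-- the feed's remaining live entries -->
    </feed>"""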
But in the case of static site generators in particular, you may not know that there was an article or a post there.
Yeah, you would have to store state somewhere, and you frequently don't have state.
So there's a different approach you can take for a static site generator.
And this is what I did in the, um, pull request for the Jekyll feed plugin that for whatever reason never got merged, um, which is to...
It's not closed, right?
So it might get merged.
Yeah.
Yeah, they acknowledged that I fixed all the issues they had with it and then they just ignored it.
Uh, but what I did with that was, um, generate a page of the oldest 10 entries or whatever, compute a hash over the contents of that page, and use that hash in the URL of that page.
And then generate the next page linking to that, compute a hash over that, you know, and so on.
So it's sort of, um, a Merkle kind of approach to, uh, archived feeds.
Yeah.
Yeah, that's something like what I had in mind as well, but I didn't think about the contents changing as well.
But I mean, it's the easy way: if you have, OK, I have these last 15 entries, the 15 oldest entries,
what should I name this file, then taking a hash is the obvious way.
Yeah.
Which avoids having to keep any state to do this correctly.
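A sketch of that hash-chained naming scheme, assuming a render_page helper of my own invention that produces one archive document (including its prev-archive link back to the previous page) as a string:

    import hashlib

    def build_archives(posts, render_page, page_size=10):
        # Walk from the oldest posts forward. Each page's URL is derived from a
        # hash of its own content, and each page links back to the previous one,
        # so a page only gets a new URL when something at or before it changes.
        pages = {}
        prev_url = None
        for i in range(0, len(posts), page_size):
            doc = render_page(posts[i:i + page_size], prev_url)
            digest = hashlib.sha256(doc.encode("utf-8")).hexdigest()[:16]
            prev_url = "/feed/archive/" + digest + ".xml"
            pages[prev_url] = doc
        return pages, prev_url  # the head feed's prev-archive link should target prev_url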
Yeah, but it still means you're going to have several different versions of your complete archive feed as history changes.
Yeah, although you don't have to host the old versions.
It's OK if you delete those old URLs; you just can't rely on people seeing it if you change things at those old URLs.
Yeah.
When you change the head of the whole chain, that means people need to re-fetch the whole thing.
Yes. So that's a little unfortunate, right?
But hopefully, I mean, what I expect is that most changes happen in the most recent posts,
and it's just every once in a while that you touch something older.
All right.
So, uh, I didn't know there was so much to say about this one- or two-page thing.
That was interesting.
This has been really cool.
Thank you.
So I'm clacke.
You can find me on the free social web, clacke at libranet.de.
And, uh, what about you?
So I'm fluffy.
You can see all my stuff at beesbuzz.biz, which is somehow easier to say out loud than to spell.
Um, I'm not sure why.
Also, queer.party slash fluffy is my Mastodon presence.
And I'm Jamey.
Um, I spell my name weird.
So it might be easier to find my website, which is minilop.net, which just shows how much webcomics have been a part of my life,
because that's a reference to Bun-bun the mini-lop from Sluggy Freelance.
I was wondering about that.
It's a comic I haven't read for years.
But, you know, when I needed a domain name in 2003...
So at minilop.net you'll find my blog.
I'm also on Mastodon at Jamey at toot.cat.
Um, and I would love to chat with people about any questions about how to implement RFC 5005.
Um, I have a lot of opinions about this.
I've put a lot of thought into it.
Uh, I've done an implementation of a WordPress plugin that you could use if you want to publish RFC 5005 feeds on your WordPress blog.
Um, so yes, please talk to me.
Please implement this.
Please add this to your favorite thing.
Please go find the issue tracker for your favorite tools and ask them to do it.
You know, let's make this happen.
And if you like interviews with Jamey, actually you've been on HPR before, eleven, uh, nine years ago.
Yeah.
Yeah, talking about my work.
Talking about your X.org work.
Yeah.
Yeah.
That is episode 825, if you would like to hear that.
That's been a while.
Yeah.
Are you still doing X.org stuff now?
Not much.
Um, I, uh, I try to keep kind of an eye on it.
I haven't actually contributed any patches for quite a while now.
Okay, but I was trying to round this off.
So, uh, I'll stop myself there from following anything up.
Uh, until next time, this has been Hacker Public Radio.
Thanks for having us.
Yeah, thanks.
You've been listening to Hacker Public Radio at HackerPublicRadio.org.
We are a community podcast network that releases shows every weekday, Monday through Friday.
Today's show, like all our shows, was contributed by an HPR listener like yourself.
If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is.
Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club,
and is part of the binary revolution at binrev.com.
If you have comments on today's show, please email the host directly,
leave a comment on the website, or record a follow-up episode yourself.
Unless otherwise stated, today's show is released under a Creative Commons
Attribution-ShareAlike 3.0 license.