Episode: 2779
Title: HPR2779: HTTP, IPFS, and torrents
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr2779/hpr2779.mp3
Transcribed: 2025-10-19 16:43:35
---
This is HPR episode 2,779 entitled HTTP, IPFS, and torrents.
It is hosted by Holdenp, is about 12 minutes long, and carries a clean flag.
The summary is: replacing the web with new decentralized protocols.
This episode of HPR is brought to you by archive.org.
Support universal access to all knowledge by heading over to archive.org forward slash donate.
Hello listeners, I'm going to try doing this thing again and today I'm going to be talking more about things like HTTP and IPFS.
So let's get right into it.
I'm obviously heavily leaning towards decentralization type stuff but I'm actually going to talk about HTTP for a while.
Because it has done some things well, and I think it's important to talk about what it has done well.
It does serve content, it does that, it does it well.
And it also gives the content creator a whole lot of power over how that content is delivered.
If I'm running an HTTP server I can be confident that my clients are going to be able to access the data that I'm serving to them.
And if I edit that data, they're going to receive the updated version.
The problem, of course, is that it's centralized: I'm the only person who can serve that data, which is not ideal.
Ideally you want to be able to do something like torrenting, where you have multiple people seeding the file.
Partly for efficiency, and it's also a sort of security and performance thing, like dealing with the central server going down.
But at the end of the day, if you make the assumption that the central server is owned by the same person who created the content, that kind of authority is great.
The content creator should have that authority.
So speaking of torrenting, torrenting is super cool.
It obviously isn't really suited as a replacement for something like HTTP though.
One of the main reasons for that is that you can't update a torrent, at least not really.
You have to download a new torrent file for that, which isn't really ideal.
But you could theoretically have some kind of live-update system built using an external protocol.
There are already magnet links which are great.
It is a possibility.
Torrenting is certainly very interesting. I'm actually pretty happy with how torrents work right now.
I have a couple of ideas for ways these sorts of things could be improved.
But for the most part they work great. You just need some kind of additional service for actually indexing those torrents, updating them live, and verifying them, like signing them with a GPG key or something like that, to ensure that they were actually created by the person that you think they were.
That can all be done. I think torrenting is actually probably the best place to start in terms of developing a sort of decentralized alternative to HTTP.
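As a minimal sketch of that idea, here is what a signed, updatable torrent index could look like, assuming Python's "cryptography" package and an Ed25519 key standing in for the GPG key mentioned above. The record fields and the name "my-podcast" are hypothetical; the point is just that a publisher signs a mapping from a stable name to the current infohash, and clients verify the signature before following an update.

    import json
    import time

    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey

    # Publisher side: sign an index record pointing at the current torrent.
    private_key = Ed25519PrivateKey.generate()
    public_key = private_key.public_key()

    index = {
        "name": "my-podcast",           # stable, human-readable name (hypothetical)
        "infohash": "a" * 40,           # hex infohash of the *current* torrent
        "sequence": 7,                  # monotonically increasing version number
        "timestamp": int(time.time()),
    }
    payload = json.dumps(index, sort_keys=True).encode()
    signature = private_key.sign(payload)

    # Client side: verify before accepting the new infohash.
    public_key.verify(signature, payload)   # raises InvalidSignature if tampered
    print("index verified, current infohash:", index["infohash"])

The sequence number matters in a design like this: without it, someone could replay an older signed record and roll clients back to a stale torrent.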
And then there's IPFS. IPFS was a good idea, but its actual implementation is not very good.
It has a whole lot of bloat, in that you have way too many things built into the same protocol.
You have IPNS, which is that sort of indexing: it lets creators actually sign the content that they're creating and update it live.
And then you have actual indexing of that content in that every single piece of content has an actual hash.
And I know that the hash is generated from the file itself; it's not like you have some massive database or anything. But that's still very different from how torrenting does it, where there isn't necessarily a single hash.
It's many, many hashes inside the torrent file.
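The contrast is easy to see with plain hashlib. This sketch is illustrative only: real IPFS content identifiers are multihash-encoded CIDs rather than raw SHA-256 digests, and the file name here is hypothetical.

    import hashlib

    data = open("example.bin", "rb").read()   # hypothetical file

    # IPFS-style: one hash derived from the content addresses the whole thing.
    single_hash = hashlib.sha256(data).hexdigest()

    # Torrent-style: a separate SHA-1 hash for every fixed-size piece,
    # all stored inside the torrent file's info dictionary.
    PIECE_SIZE = 256 * 1024
    piece_hashes = [
        hashlib.sha1(data[i:i + PIECE_SIZE]).hexdigest()
        for i in range(0, len(data), PIECE_SIZE)
    ]

    print("single content hash:", single_hash)
    print("number of piece hashes:", len(piece_hashes))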
And more recently they've started doing some other creative stuff like implementing live streaming which doesn't even make sense because it's supposed to be for permanent content.
And then it uses Go, and they're very trigger-happy with adding Node.js.
So I personally don't like IPFS. I firmly believe that these sorts of things should be sort of split up into many different protocols.
And in all fairness, we usually just see torrenting as a single protocol, but the different protocols involved with torrenting really are separate when you think about it.
Getting peers from trackers is a separate protocol from actually getting the content from the peers, which allows you to do really interesting stuff, like the DHT they added more recently that allows you to have trackerless torrents, which is great.
That's the sort of thing that we need. I actually want to emphasize that torrenting would not be on my list of cool things if it weren't for the DHT; that's unbelievably cool to me.
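How separable peer discovery really is shows up in how simple a classic tracker announce is: just an HTTP GET that returns a bencoded peer list (this is BEP 3; the tracker URL and infohash below are hypothetical). The DHT swaps a distributed lookup in for this one request and leaves the peer-to-peer transfer untouched.

    from urllib.parse import urlencode
    from urllib.request import urlopen

    info_hash = bytes.fromhex("aa" * 20)     # 20-byte SHA-1 of the info dict

    params = urlencode({
        "info_hash": info_hash,              # raw bytes, percent-encoded
        "peer_id": b"-XX0001-123456789012",  # 20-byte client identifier
        "port": 6881,
        "uploaded": 0,
        "downloaded": 0,
        "left": 1048576,                     # bytes still needed
        "compact": 1,
    })
    url = "http://tracker.example.org/announce?" + params
    response = urlopen(url).read()           # bencoded dict with a "peers" field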
So I mentioned before that there were some things that I thought would be interesting.
I already mentioned indexing torrents, being able to sign your torrents and update them live. That's all great.
Another thing that I'm sort of interested in is using Zstandard compression.
So yes, first of all, you could just use Zstandard in the same way that you use gzip, but that doesn't really achieve anything new.
What I'm more interested in is compressing things based off of the actual data.
So for example, let's say that I already have 10 chunks of a torrent. Chances are all of the other chunks are going to be very similar data.
So you could train a Zstandard compression dictionary based on those chunks that you already have.
That would drastically improve compression and save tons of bandwidth.
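As a rough sketch of the idea, assuming the "zstandard" Python package: treat the pieces you already have as training samples, build a dictionary, and use it for the pieces still on the wire. The piece data below is synthetic, and the wire format for two peers agreeing on a dictionary is left entirely open; nothing like this exists in the BitTorrent protocol today.

    import zstandard

    # Synthetic stand-ins for pieces we already have; in reality these would
    # be the downloaded chunks (and could even include chunks from a related
    # torrent, as discussed below). Training needs a decent number of samples.
    pieces = [("piece %d: " % i).encode() + b"shared payload " * 500
              for i in range(100)]

    dictionary = zstandard.train_dictionary(16384, pieces)

    compressor = zstandard.ZstdCompressor(dict_data=dictionary)
    decompressor = zstandard.ZstdDecompressor(dict_data=dictionary)

    incoming = b"piece 101: " + b"shared payload " * 500
    wire_bytes = compressor.compress(incoming)
    assert decompressor.decompress(wire_bytes) == incoming
    print("%d bytes -> %d on the wire" % (len(incoming), len(wire_bytes)))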
Now of course there are disadvantages, namely that it uses a ton of computing power and it's pretty heavy on disk I/O.
But bandwidth can be a pretty important commodity. I have a rural internet connection; I have next to no bandwidth.
And many people are the same. Many people use public internet.
Bandwidth is often a nightmare. I would much rather use more computing power than more bandwidth. And to be fair,
the client side is one thing, but the server side is a little bit more difficult, because on the server side you have to worry about having many, many different clients to serve all at once.
But with something like torrenting, that's not as big of a deal, hopefully at least, as long as you have enough seeders of course.
And the way I envision it, it would be a sort of negotiation between the seeder and the client:
whether or not to use Zstandard, how many files to use, which files to use, etc.
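A hypothetical negotiation message, loosely modeled on the extension handshake BitTorrent peers already exchange (BEP 10), might look like the following. Every field name here is invented for illustration; no such extension actually exists.

    # What a seeder might advertise at handshake time (all fields invented).
    handshake = {
        "m": {"zstd_dict": 1},            # "I support dictionary compression"
        "zstd": {
            "max_dict_size": 112 * 1024,  # largest dictionary it will build
            "train_from": ["self", "external"],  # allow other torrents or not
        },
    }

Making the feature opt-in on both sides fits the point about user control made below: either peer can simply decline.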
It would also be kind of interesting to use other torrents as well.
Being able to say, hey, I have all of these chunks, but I also have all of the chunks from this other torrent; that could also improve compression.
Like say, I'm downloading a program and I already have a very similar program. I have an older version of the program that's in a different torrent though.
That could significantly improve compression if we use that to generate the dictionary, right?
So that's something I'd be really interested in seeing.
As far as I know, the current BitTorrent protocol doesn't really do compression at all, not even gzip or anything like that.
But I think it's something to look at.
Obviously, the disadvantage there is if you're linking to other torrents, there's possibly a sacrifice of privacy.
But that would of course be up to the user.
The user can choose not to list off all the torrents they have, and likewise the peer can choose not to list off their torrents, or not to allow enhancing compression with external torrents at all.
I think user control is very important. The user should have the final say in how all of this works.
I think by far the most important thing though is some kind of mechanism for indexing torrents and actually signing that index.
So for example, I can change the torrent that my URL points to, which is interesting because you could actually do something like that with HTTP.
You could have an HTTP server that serves a torrent file, and that would work.
It would work fine.
The only downside is that now you're relying on a central server, so I would prefer something where it can be decentralized.
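For what it's worth, the hybrid really is that simple. Here is a minimal sketch of an HTTP endpoint whose only job is to hand out the current .torrent file, with the heavy transfer left to the swarm; the path and port are hypothetical, and "updating" is just swapping the file the server reads.

    from http.server import BaseHTTPRequestHandler, HTTPServer

    CURRENT_TORRENT = "releases/my-content-v2.torrent"  # swap this file to update

    class TorrentPointer(BaseHTTPRequestHandler):
        def do_GET(self):
            with open(CURRENT_TORRENT, "rb") as f:
                body = f.read()
            self.send_response(200)
            self.send_header("Content-Type", "application/x-bittorrent")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)

    HTTPServer(("", 8000), TorrentPointer).serve_forever()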
But I do think it's important to consider how things like HTTP have done things well.
It certainly is a noteworthy protocol. It's not like we use it everywhere or anything.
But yeah, indexing. I want indexing. I'm thinking about maybe working on something like this myself.
I'm not really sure. I am a little bit busy with that other protocol I'm working on if you follow me on IRC.
Again, sorry for the audio. I'm sure the audio is almost as bad as last time.
I have increased the quality quite a bit, and I'm using a new format now as well, which should be slightly better.
But the fading in and out as I get farther from and closer to my mic, I haven't quite fixed that yet.
But I will. Thanks for listening. I hope you enjoyed.
I'll probably do another one of these sometime. See ya.
You've been listening to Hacker Public Radio at HackerPublicRadio.org.
We are a community podcast network that releases shows every weekday Monday through Friday.
Today's show, like all our shows, was contributed by an HPR listener like yourself.
If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is.
Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club, and is part of the binary revolution at binrev.com.
If you have comments on today's show, please email the host directly, leave a comment on the website or record a follow-up episode yourself.
Unless otherwise stated, today's show is released under the Creative Commons Attribution-ShareAlike 3.0 license.