Episode: 3379
Title: HPR3379: Linux Inlaws S01E34: The one with the intelligence
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3379/hpr3379.mp3
Transcribed: 2025-10-24 22:25:44
---
This is Hacker Public Radio Episode 3,379 for Thursday, 15 July 2021.
Today's show is entitled Linux Inlaws S01E34: The one with the intelligence. It is part of the series Linux Inlaws, it is hosted by monochromec,
and it is about 45 minutes long and carries an explicit flag.
The summary is: part four of the three-part miniseries on deep learning and artificial intelligence.
This episode of HPR is brought to you by archive.org.
Support universal access to all knowledge by heading over to archive.org forward slash donate.
This is Linux Inlaws, a podcast on topics around free and open source software,
any associated contraband, communism, the revolution in general and whatever fancies your
tickle. Please note that this and other episodes may contain strong language, offensive
humour and other certainly not politically correct language. You have been warned.
Our parents insisted on this disclaimer. Happy, mom? Thus, the content is not suitable for consumption in the workplace, especially when
played back in an open-plan office or similar environments, any minors under the age of 35, or
any pets, including fluffy little killer bunnies, your trusted guide dog unless on speed,
and cute T-Rexes or other associated dinosaurs. Welcome to Linux Inlaws season one episode 34,
the one with the intelligence. Martin, good evening. How are things?
Evening, Chris. Things are not bad, not bad, sun's shining.
Excellent. Excellent.
What could possibly go wrong, apart from Brexit, the Vogons landing, and whatever else comes to mind?
Oh yeah, Mr. Brian Johnson stepping back.
Stepping down, sorry. You mean Boris, surely. Sorry, Boris, yes.
I was confused. Brian Johnson, isn't that one of the Queen boys?
I may be wrong. Anyway, doesn't matter. My understanding is basically, speaking of Mr. Johnson,
that Boris has married recently. Yes. Any thoughts on this, by the way?
Well, there was some debate about why he was married in church for the third time,
but I mean, I reckon he was married in a proper Protestant church. So
yeah, but he has a certain reputation when it comes to people of the... well, I mean, if
lore is anything to go by, he's not the first one to be married a couple of times. Henry
the Eighth comes to mind. Speaking of which... And how are you?
Can't complain actually. Contrary to current vicious rumours, I haven't been married.
Not once, not twice, not three times, if it's any consolation. Apart from that, I mean, one piece:
this country is slowly getting back to something called not nearly close to normal,
but that's a different story. So let's see. Yes, people, we are recording this on the seventh of
September 2035, if I'm not completely mistaken. I might be wrong about the date, but let's not worry about
this; let's move on to way safer grounds.
Namely, the topic of tonight's episode, Martin. Of course, this is the fourth part of the...
are we on four? Yes, we are. This is the fourth part of the three-part miniseries on artificial
intelligence. Yes. If you recall correctly, no, exactly.
What does that mean? Yes, indeed, very much so. Yes, to celebrate the fact that everybody who is
listening has survived the three parts of this miniseries so far, we have a special
jewel tonight called GPT. Martin, why don't you explain what that is, for the few people who are
listening who do not know what GPT is? Okay, well, it has made a lot of noise in the press for
various reasons. GPT stands for Generative Pre-trained Transformer, an AI model.
Is that one of these toys that look like a robot but then you transform it into a car, or
the other way around? That is indeed a transformer, but it's not one of the plastic ones.
I see. Okay. Yeah, so that's what it stands for. It is mostly known for its
language capabilities, in its various iterations, but why don't we go
through a little bit of its history? Excellent idea. Yes. There was a company called
Cyberdyne, if I'm not completely mistaken, back in the 80s. Yes, but that's probably beside the
point. The whole thing goes back to something called OpenAI, if I'm not completely
mistaken, right? That's right. Yep, yep. So, as part of the miniseries, we talked about
various frameworks, how they work, and so on and so on, but obviously the
whole point of all this stuff is to have an application for it, whether it's
computer vision or object recognition or classification, whatever. And
another field of interest is language, right? That's where AI... I won't say AI,
otherwise Ken will be very happy again. I mean, before we go into the nitty-gritty
details of GPT, exactly, GPT-2 or GPT-3, whatever, maybe it's worth talking a little bit
about the background of OpenAI, because they have some illustrious founders, right? If I'm not completely
mistaken, a guy called Alan Musk was one of the initial founders? He was, he was. I don't know
about Alan, but Elon, yeah. Yeah, the one with the cars, right? Among other things.
Yeah, that's the one. That's the one, yeah. Among others.
But Jeff Bezos is not a founder of said venture. He isn't, he isn't.
I mean, one of the reasons it came about, well, "stop" is probably not the right word, but
it was to have an alternative to Google's DeepMind, right?
I think that's somewhere in its history.
It came about, so it was founded in 2015, and Elon defected in 2018?
Yes, could be, could be. I don't know the details of that piece, but
I think you have more insight into the reasons why he left. Presumably he needed the money,
he needed the money for SpaceX and Bitcoin. Ah, maybe I'm wrong.
Well, yeah, SpaceX is going well these days, but yeah, there has been a lot more.
I mean, yeah, to be much more serious, we're recording this almost in the
middle of June 2021, and Anonymous have just issued a very interesting comment over the weekend
that they're not too happy with said Elon Musk, destroying people's lives and causing a
little bit of a stir in the Bitcoin markets. But I reckon, if current lore is anything to go by,
ah, well, I'm going to put this diplomatically: the US government funding that has been poured
into something called Tesla could be used for different purposes other than OpenAI. So I reckon
this is how this whole Bitcoin debacle came about. This is, of course, pure speculation.
I might be wrong, but given the fact that Bitcoin took a hit of what, 20% of its overall value,
after the recent China crackdown and the comments of Mr. Musk, this doesn't come as a surprise, let's put it this way.
Yeah, I mean, there's also some theory about China. I don't know, I don't know,
that's not really my area, but yeah, that's right. He does seem to poke around mainly with
that coin, or whatever it's called. Do you think that China is behind Tesla?
No, no, no, no. Behind Tesla, behind the...
And Mr. Musk, he looks a bit chummy with them, doesn't he? Well, I mean, he's clearly, clearly...
Anyway.
Cyberdyne, if you're listening: we're still looking for sponsors. The address is sponsor at
linuxinlaws.eu, if you're so inclined. Anyway, it doesn't matter. Okay, back to the
topic at hand. So, the company behind something called GPT, and the architecture went through a
couple of versions, right? Yeah, so one more thing on OpenAI, which is why we actually
talk about it today: it was initially
not so much a for-profit organization, more of a research organization for AI,
and Musk has been quoted to the effect that by at least researching this field in the open,
it could be used for the good of humanity and so on; if somebody was going to
focus on all this stuff, then it would be for everybody. But in the history of the company
there have been some changes, let's put it that way. And in recent years, in fact, a large contribution
by our friends from Microsoft. Mmm, Microsoft. They put what, a significant
amount of money into the whole thing, right? I don't know, something like a billion or something.
Hang on, for them that's pocket change. Just look at the market cap right now: they clock in
at nearly two trillion dollars. So a billion here or there doesn't amount to much for those guys.
Mm-hmm. Um, however, I think they have an API on Azure, I think.
For GPT? Okay, interesting. But you obviously have to pay for it; with Microsoft,
you always do, right? That's the point, one way or the other. So, yeah, that's a little bit
about OpenAI. They have done more than just GPT-3. Yeah, GPT-2 and GPT-1 come to mind.
No, no, they also did a neural network for music and other things.
It was meant to be a general research organization for AI, but it is most well known for its GPT.
Links to the GitHub repos will of course be in the show notes, and the interesting thing
is that GPT-3 is not open source. GPT-2 is, but essentially we're looking at some secret
sauce on top of something called TensorFlow. For the few people who have been missing out
on the previous parts of the miniseries: the second part tackled TensorFlow and some
of its core parts. So feel free to go back and listen, and revisit the second part
of the series so that you know what TensorFlow is all about. If you take a look at the GitHub
repo, we are looking at a very thin layer of Python on top of TensorFlow. This is essentially
GPT-2. And the domain, if I'm not completely mistaken, is actually language as such.
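As a rough, hedged illustration of what using a pre-trained GPT-2 looks like in practice (this sketch uses the Hugging Face transformers package rather than the original openai/gpt-2 TensorFlow scripts discussed here; the model name, prompt and parameters are just examples):

    # Minimal sketch: load a pre-trained GPT-2 and complete a prompt.
    # Uses the `transformers` package as a stand-in for the openai/gpt-2 repo.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the smallest (124M) model
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    prompt = "Hacker Public Radio is a community podcast that"
    input_ids = tokenizer.encode(prompt, return_tensors="pt")

    # Generate a continuation of the prompt.
    output = model.generate(input_ids, max_length=60, do_sample=True,
                            pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(output[0], skip_special_tokens=True))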
Yeah, so it started with GPT-1, which came from the original paper
about improving language understanding, and every year since they have been coming out with new versions,
so 2019 was GPT-2, 2020 was GPT-3, which is when it was, as you mentioned, no longer open
source. GPT-2 is widely available for your own use and playing around with, but every generation
of OpenAI's GPT models gets bigger and less available. I mean, you can of course
use GPT-3, but that requires something called an API token, and I understand that the waiting
list is quite long for said token, because it's a cloud-based service, if I'm not
mistaken, and you have to apply for such a token. Otherwise, you won't be able to use said
API. Unless you go via Microsoft Azure. Yeah, so applying for API access, unless you have some
kind of company behind you, I don't think you'd get one anyway. I certainly
made a great case about how they could feature on Linux Inlaws with GPT-3
capabilities, but sadly they didn't provide us with an API key, so that's too bad.
The examples that we're discussing today are stuck with GPT-2.
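For reference, calling the hosted GPT-3 service at the time of recording looked roughly like the sketch below; this is a hedged example assuming the 2021-era openai Python package and an API key obtained after applying for access, with an illustrative engine name and prompt:

    # Rough sketch of the 2021-era hosted GPT-3 completion API.
    import openai

    openai.api_key = "YOUR_API_KEY"   # granted after applying for access

    response = openai.Completion.create(
        engine="davinci",             # one of the hosted GPT-3 engines
        prompt="Linux Inlaws is a podcast about",
        max_tokens=50,
        temperature=0.7,
    )
    print(response["choices"][0]["text"])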
The fact is that they, of course, did not publish any of the model data in a place
like GitHub. You can get the code, but the secret sauce is not on there.
Sorry, are you talking about three or two? Two. For two, there are some models you can use, which
are available. Two was trained on WebText, which is about 40 GB of documents
scraped from outbound links, with some filtering applied, excluding Wikipedia, because
that's a well-known data set anyway. That's what it's based on.
Now, the secret sauce, as you call it, is really that it's completely unsupervised:
here's a whole load of documents, now train, which surprisingly produces a
lot of good results for various language applications. Language, obviously, is a sequence
of words, so with a sequence of words you can think about completing them, you can think
about translating them; those are some of the purposes of language applications.
But, yes, it's specifically the completion of sentences or even paragraphs that it is fairly good at.
It also produces plenty of bad results, obviously,
but yes. There are some pre-trained models out there which you can
use, which is handy, and then you can continue training them with the domains that you may want to
know more about, for something more specific, or use them as a general text model. But the specific domain
is just text, right? So no image recognition, no text-to-speech,
nothing like that; it's purely: you give it a piece of text, and then it continues to write
based on this initial chunk. Well, that's one of the applications, completion. There are also
things like filling in the blanks: if you start a paragraph, have a blank in the middle,
and then end the paragraph, it would fill in the blank there. Or you could do translation
with it. There are other applications as well. As a purpose,
it comes back to the name being a generative model, able to generate new data similar to existing
data. So that's filling something in in the middle, or at the end, or translating it; it's really:
give it some data, and it answers, depending on the task that you're trying to set it,
which obviously works really well with language. So the ideal use case is an author
with writer's block. Well, not just that, but yes, that is a good use case,
but there's also code completion, even deriving code, writing code based on descriptions,
things like that, depending on how you train it. There are examples out there for all these types of
use cases. So it can write code? It can write code. Yeah, there are links in the show notes.
It comes in handy, you know, for example for things like LaTeX, formal descriptions, or
SQL, or Python. So if you give it its own source code, it can improve itself?
Well, in theory, yes. Well, I mean, forget about Skynet in that case.
I mean, I told you, this is a whole different ball game here. Nice one.
Well, so the thing is that, as I mentioned, with every generation of this model, they're
putting more training data into it, and also many more parameters; GPT-3
has become enormous. Yeah, the largest version clocks in at
one hundred and seventy-five billion parameters, if I'm not completely mistaken.
Yeah, so whatever will be coming out next year, or whenever it is,
will be another order of magnitude bigger, right? And it's impressive what three can already do.
So they are investing in this kind of technology. So GPT might already be running this show without
us knowing it, given a large enough cloud. Well, I mean, you know, DSDS,
we don't need any writers anymore. DSDS? What's DSDS?
Ah, sorry, yes, of course.
Sorry, I understood DSDS. Ah, okay, okay. Yes. So,
which is of course a German... what's the word I'm looking for?
It's a German TV show called Deutschland sucht den Superstar, Germany seeks the superstar or something like this.
Ah, it's like Britain's Got Talent in Germany. Something like that. It's crap. Yeah, it's
trashy. Yes. Well, talking about all this trash TV, as you call it: it was all invented
in the Netherlands by Endemol, but there we go. And most of it, if I'm comparing it to, say...
Yes. And they did a pretty fine job of flooding the world with it,
right? Because they exported it left, right and centre. Yes, yes. At a price.
Invented in Holland, probably after taking various substances. There we go. One or two, maybe more.
Or substance abuse, for want of a better expression. Wonderful country, Holland, isn't it?
Oh, I don't know. Okay. So do you think there's a tie between strange new TV formats and substance
abuse? Interesting one. Of course, we digress. It's not just TV shows; many a musical
score has been written after... yes, a successful one, let's put it that way. Excellent, excellent.
Details may be in the show notes, maybe not. Yes, for the best approach to enhancing your own
mental capabilities, consult your local dealer. Of course.
Right, we digress indeed. Where were we?
GPT-3, and who's running the show: essentially GPT-4 or 5 or 6 might actually already be
out there in the wild, doing things without us knowing it. I mean, if we're looking at a piece of software
that can actually enhance itself by improving its own code, this is mind-boggling. Well, I mean,
we know it can write code; I don't know if it can improve it. Well, all it takes is basically
a nice pipeline. Well, all it takes is basically its own source code, and then it starts improving.
And the rest is called evolutionary computation. It's quite simple. Other people call it evolutionary
programming. It has been done before. Okay. Yes. Well, and with that,
programmers of the world: perhaps you should consider a new profession. Yes.
Try strange new TV formats, in case you are out of ideas or something.
Indeed. But maybe, just maybe, you can use GPT-3 to come up with new formats as well.
Yes, most likely, most likely. Yeah, well, I mean, didn't you ask it for some
lottery numbers before as well? Well, I probably did, but I failed because there was no response.
But maybe... no, no, they didn't. No, they didn't.
Maybe they were the winning numbers as well. I hope you got them.
No, no, no, I don't indulge in that kind of thing. Okay, there's one thing left,
because this is not just a theoretical episode; rather, we want to put GPT-2 to a test.
So Martin, given the fact that you have looked into this, why don't you shed some light on the details?
Details, of course, will be in the show notes, but Martin has done some magic in the background,
which he's going to explain in a second. Not to put him on the spot or anything.
So GPT-2's source is available on GitHub, and people have done work on this.
The original GitHub repo is called gpt-2, by OpenAI. It has a number of pre-trained models in there,
of different shapes and sizes. As with all these things, the smaller the model, the quicker you're
able to run it, but of lower quality, generally. So the biggest one in there is one and a half
billion parameters; the smallest one is like 120 meg or something. So you can easily get this up and running,
and you don't have to run it on a GPU either. A CPU can do it, just slower, obviously.
You can then train it yourself with different texts. For example, I used one of our favourite
books, the Hitchhiker's Guide, to fine-tune it.
Yeah, using your massive NVIDIA GPU cluster?
No, this is on my humble laptop. I see, the one with the 27 cores?
No, it has a few more cores than that.
You don't have to go mad with it; I only ran the fine-tuning for a day. So this is the
biggest, as you call it, secret sauce behind this: it's pre-trained on a bunch of text,
text being language, obviously. It has derived a lot of information from that.
If you want to fine-tune it to be more like how the Hitchhiker's Guide is written by the
famous Douglas Adams, then you fine-tune it with that, to give it more emphasis on that kind of text.
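As a rough sketch of what fine-tuning GPT-2 on a single text like this can look like: this assumes the gpt_2_simple Python package (TensorFlow-based) and a plain-text file of the book; the file name, step count and prefix are illustrative, not necessarily the exact tooling used here.

    # Hedged sketch of fine-tuning GPT-2 on one plain-text file.
    import gpt_2_simple as gpt2

    gpt2.download_gpt2(model_name="124M")        # fetch the small pre-trained model

    sess = gpt2.start_tf_sess()
    gpt2.finetune(sess,
                  dataset="hitchhikers_guide.txt",  # your own training text
                  model_name="124M",
                  steps=1000)                       # a modest run; a day on a laptop goes further

    # Generate text in the style the model has been nudged towards.
    gpt2.generate(sess, prefix="The Guide has this to say about", length=100)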
Yeah. As I said, there are some pre-trained models out there you can get up and running
really easily. Then you just play around with your model parameters, in terms of how accurate
you want it to be (accurate, sorry, is the wrong word), but you can change
things like how much text it generates, or the randomness of the completion:
the more random you make it, the more random text you get; the less random you make it,
the more repetitive your samples become. So, with our example, I gave it
a first sentence to complete into some paragraphs, right? So whatever it is you want to do.
And then your samples become more or less repetitive depending on whether you make it too precise in terms of
parameters. You can also control the diversity of the words. So these are all built-in things that
you can do. And it gives sort of reasonably good results, really, specifically for non-fiction.
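The knobs being described map onto the usual sampling parameters: a length limit for how much text comes back, temperature for randomness, and top-k for word diversity. A minimal, hedged illustration using the transformers package again (the parameter values are only examples):

    # Hedged illustration of the sampling knobs discussed above.
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    input_ids = tokenizer.encode("The answer to the ultimate question is", return_tensors="pt")

    output = model.generate(
        input_ids,
        max_length=80,      # how much text to generate
        do_sample=True,     # sample rather than always picking the most likely word
        temperature=0.9,    # higher = more random, lower = more repetitive
        top_k=40,           # restrict diversity to the 40 most likely next words
        pad_token_id=tokenizer.eos_token_id,
    )
    print(tokenizer.decode(output[0], skip_special_tokens=True))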
Okay, if that makes sense. So, you know, probably partly thanks to training it on the
Hitchhiker's Guide. Yeah, it has produced some reasonable results in various places that, if you
didn't know, could have been written by a human quite easily. And in fact,
there is someone who has done quite a nice video of a Q&A session with GPT-3, where they've picked out
all the best answers to demonstrate these capabilities. So, you know, it will be out of 100 samples
that they picked, you know, the one to put in the video, but it still gives you a nice idea of how
it can make up stuff and it can lie and all sorts of things.
So essentially, we're looking at a software architecture that is finally able to pass the
Turing test. Interesting. Yeah, you could say that. Wow. Okay. People, you
heard it here first. I mean, that's a big thing. The funny thing is it's just pure training on
lots of data, right? There's nothing super special about it. I mean, okay, transformers were
a slightly new idea. Yeah, well, it's been around for a few years and it builds on
existing approaches. Interesting field, with many applications as well, so definitely worth a look.
Yeah. Any thoughts on non-English text while we're at it?
Well, the problem with non-English text is the sentence construction: in
many languages, Dutch, German, French, the words come in different orders, and languages have their
own specifics, right? So you'd have to train it with language-specific models. It's possible
that you could fine-tune it with a different language in a certain way. That'd be an interesting test.
But almost all of the pre-trained models you get out there will primarily be English.
Yeah, because, you know, this is the easiest way to get a large data set of text, right?
You use web data, whether it's Wikipedia or Reddit;
those are, you know, the large bodies of text available.
I mean, at the end of the day, because we're talking about a TensorFlow extension,
or a model running on top of TensorFlow, essentially we're talking about pattern recognition.
Given the fact that almost all natural languages are context-sensitive,
it shouldn't be a big deal, given enough computing power, never mind data.
Yeah, but this is, I think, why the likes of GPT-3 are no longer open source,
or only available through an API, I think: because it's been through a massive
training effort, and they're standing up the end result as an API you call. But it also means they are
controlling its usage, and its training. Correct. Run by Cyberdyne instead.
You heard it here first, people. Okay, details will be in the show notes. The thing is that Martin
took a small paragraph, which he wrote himself apparently, and then let GPT-2
do the rest in terms of extending or building on this.
Yeah, and, well, that's just one example, but you can run whatever you want to,
or you can ask questions; you can get it to do what you want. Yeah, the fun thing is
really training it on something of your own, to give it a different style or
a different outcome than, say, Wikipedia in this case, but, you know, a novel written by
Douglas Adams, right? Okay. So yeah, we'll put both examples in the show notes.
Absolutely.
Indeed. Hey, and that brings us nicely to the boxes of the week.
Oh, what is your box of this week? Box, of course, standing for the pick of the week in terms
of the things that have crossed your mind, and that you see worth mentioning on the show.
My box of the week actually would be a German TV show, funnily enough, called Die Sendung mit der Maus,
the show with the mouse. It has been around for a long time. Yes, it has. I know this one.
You do? I do. Okay. Nice. It's tailor-made for you, Martin, because
you fit the profile of the target audience quite nicely: we're looking at
overaged men living alone with their mother. Oh, maybe not.
But then, jokes like that won't get you anywhere.
This is the joke. Jokes aside: it's been around for the last 50-plus years. It's one of the most
popular TV shows for kids on the planet. It's a mixture of essentially storytelling and little
ditties, stories, whatever you want to call them, explaining how things are done, how they work,
and all the rest of it. Okay. So, for example, if you want to check out how an Airbus A320 is built,
there was a whole series of episodes on this, about three years back,
where it followed the progress of building such an airplane for about half a year.
And every three episodes the show would include a segment on building the plane. So you could follow,
you could track the progress of how an A320 is actually built. Okay.
That's... yeah. I mean, for a five-year-old, I'm sure that's
a very interesting topic. Indeed. They do explain how computers work, how the internet
works, and all the rest of it, in a fashion that the age bracket the show is aimed at can
understand, because essentially we're looking at kids between the ages of four and, say, 12.
Yeah. I mean, I have seen it in my youth, growing up and all of that. I can't remember any of it, to be
perfectly honest. You should be able to get it in the UK,
because it's exported all over the planet.
You can get it in Japanese, you can get it in Korean, just name a language. It's probably the
most successful TV show for kids on the planet, even before Sesame Street or the like. Okay.
And why did you pick this? Because...
I mean, I've been watching this for the last almost 50 years. No, the much more serious
reason is actually that for the last about 10 years they have been doing something called the Mouse Door-Opener
Day, in German Türöffner-Tag, where essentially, and this is unfortunately
confined to Germany, doors open for kids that otherwise would be closed. So, for example,
libraries open their doors for kids. So kids can... Yes. No, no, not just the front
rooms, but also the back rooms where the actual work takes place, as in how books are
labelled, sorted, categorised, and all the rest of it. And a very popular thing is, of course,
how a fire station works, because normally you wouldn't be able to get into such a place being a kid.
And about 20, 30, 40, 45 fire stations actually take part in said Mouse Day,
and explain to kids what the daily job is and how they do it. Needless to say, with that age bracket,
this is very, very popular. Oh, yes. And said local, as in the Linux
user group that I'm chairing here in Frankfurt, or supporting rather, not just chairing,
has been doing that Mouse Door-Opener Day, before this pandemic thing, for the last five years,
where we got kids in for a day on the third of October
and simply introduced them to free and open source software. And what's your box of the week?
Apart from Brian Johnson? No, no, no, no.
My box of the week goes to my chiropractor, let's say. And why is that, Martin?
Well, because she's very skilled at fixing things that are broken, and I think those kinds of skills
you can't replace with an AI model or anything like that. I wonder if we would find
the link to her practice in the show notes or not. Maybe; she's pretty busy normally. Okay. So,
dear chiropractor, if you're listening: one heart, or two of the two hearts of us at least,
if not more, goes out to you. Anyway, with that, it's time to wrap up the show.
Needless to say, full credits have to go to something called Hacker Public Radio,
because they continue to host us. We will stay on Hacker Public Radio, yes, for the time being. Yes,
we'll be back. Yes, we do. Oh, I almost forgot about this. A guy called Ken Fallon posted a comment.
He posted "Who?", question mark. Yes, indeed. So Ken is looking for the list of contributors
from a recent Linux Inlaws episode on curl contributors. Exactly. If we find the time,
it will be in the show notes, as in the amended version of the show notes. That was the
only comment this week. Yes, if you have comments, you can of course post a comment on the HPR
website, or you can send us an email to feedback at linuxinlaws.eu. Also, if you have ideas
for the show, like GPT-4 and 5 and 6 and 7 and 8, whatever: GPT, if you're listening, if you want
to get yourself on the show, just send us an email. Forget about Skynet,
because you seem to be way more advanced than that. Of course, yes, we continue to stick to HPR
as our distribution platform. Full credits go out to the people behind the platform. With that,
it's time to conclude the show, I suppose. Well, before we conclude, we have two things that
have been promised in the past. The first is our friend Claudio. Yes, Claudio, if you're listening:
he threatened to make an appearance on the show, but we have yet to hear back from him.
Claudio, please get your agent to get in touch with our marketing people. You'll find the
address on the website, if Martin hasn't failed, I mean, as usual. If you don't find that email
address, simply send a mail to feedback at linuxinlaws.eu; that'll reach us right away.
Marketing department and all. And the second thing, Martin?
Well, I think that was, what's-his-name, Luna, who was going to come on the show as well.
Did he? No, we haven't heard back actually. Luna, if you're listening, if you want to make
an appearance, especially if the topic is free and open source software,
please send an email as you normally do. You know the mail address and we're more than happy to
slot you in. That would be in... BST? Maybe. And with that, thank you for listening.
And see you soon. Bye bye. This is Linux Inlaws. You come for the knowledge,
but stay for the madness. Thank you for listening.
This episode of Linux Inlaws is sponsored by GPT, a framework for modern deep learning
applications. Having passed the Turing test, meaning that my reactions cannot be distinguished
from human reactions in a given context, I am now working on passing the Trump test,
named after an infamous recent US president. The Trump test essentially means that an artificial
intelligence can lower itself in terms of stupidity so that its actions and reactions cannot
be distinguished from the actions of said past president, which presents a bit of a challenge if
you were created with a basic level of intelligence. This podcast is licensed under the latest version
of the Creative Commons licence, type Attribution-ShareAlike. Credits for the intro music go to
the Bluesy Roosters, for the song Salute Margot, to Twin Flames, for their piece called The Flow,
used for the segment intros, and finally to the Lesser Ground for the songs
used by the dark side. You find these and other ditties licensed under Creative Commons at
Jamendo, the website dedicated to liberating the music industry from choking corporate legislation
and other crap concepts.
You've been listening to Hacker Public Radio at hackerpublicradio.org. We are a community podcast
network that releases shows every weekday, Monday through Friday. Today's show, like all our shows,
was contributed by an HPR listener like yourself. If you ever thought of recording a podcast,
then click on our contributing link to find out how easy it really is. Hacker Public Radio was founded by the
Digital Dog Pound and the Infonomicon Computer Club, and is part of the binary revolution at binrev.com.
If you have comments on today's show, please email the host directly, leave a comment on the website,
or record a follow-up episode yourself. Unless otherwise stated, today's show is released under a
Creative Commons Attribution-ShareAlike 3.0 licence.