Episode: 2524
Title: HPR2524: General problem solver
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr2524/hpr2524.mp3
Transcribed: 2025-10-19 04:45:30
---
This is HPR episode 2524 entitled "General Problem Solver". It is hosted by first-time host Tuturto, is about 18 minutes long, and carries a clean flag.
The summary is: a brief look into the General Problem Solver system and how to use it to solve simple problems.
This episode of HPR is brought to you by AnHonestHost.com. Get 15% discount on all shared hosting with the offer code HPR15. That's HPR15. Better web hosting that's honest and fair at AnHonestHost.com.
Hi, I'm Tuturto and I'm going to talk about the General Problem Solver.
Recently a book by Forbus and de Kleer called Building Problem Solvers was made publicly available. Another related book, Paradigms of Artificial Intelligence Programming by Norvig, is also currently available for free. Both are really interesting books. They are really old books already, but the information within is still pretty valid.
Both of the books describe this kind of problem solver. Forbus and de Kleer call it the Classic Problem Solver, while Norvig calls it the General Problem Solver.
When it was invented it was really groundbreaking; it was really showing, like, hey, computers can solve pretty much any problem whatsoever. As usual, that was a bit of an overstatement.
While the algorithm can solve various kinds of problems, there are some problems that
just aren't solvable at all and some problems that are really, really hard and expensive to solve.
But this was still one of those things that could solve many different kinds of problems.
For example, if you are given a maze with a start and an end, it can find a way through the maze. Or take the common puzzle where you have two jugs, a four-litre jug and a five-litre jug, and you want to measure exactly three litres: in what order do you have to pour things? This algorithm can solve such a thing.
So let's take finding a path through a maze as an example. The program, or algorithm, or system, or the solver, whatever you want to call it, has a couple of parts that I'm going to describe next. If you want to learn about these things in more detail and see some code, I recommend checking those two books.
The first thing we need to do is to define how to represent state, how to represent the current state of the problem, let's say. In the case of finding a path through the maze, we could have a two-dimensional grid where each grid location tells what directions you can move from that spot. And then we would have a separately stored start and end location. And that's it for the state.
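As a rough sketch of what that could look like, here is a tiny made-up maze in Python, with the grid, the start and the end, and the current location as the changing part of the state:

    # A tiny made-up maze: each location maps to the directions that are
    # open from that spot (every other direction is a wall).
    MAZE = {
        (0, 0): ["east"],
        (1, 0): ["west", "north"],
        (1, 1): ["south", "west"],
        (0, 1): ["east"],
    }

    # The start and end locations are stored separately from the grid.
    START = (0, 0)
    GOAL = (0, 1)

    # The part of the state that changes during the search is simply the
    # current location; the maze itself stays the same the whole time.
    state = START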
Then we need to define, or tell the program, how it knows that it has reached the goal. This is expressed as a function that, given a state, in our case the maze, your current location and the goal, will tell you whether you have reached the goal: true or false.
And then we need to tell the program what the means of moving towards the goal are. In our case the means of moving towards the goal are, of course, moving from the current state to the north, south, east or west, depending on the current location, because you cannot walk through the walls. These are functions that, when given a state, return two things: a new state, in our case the same maze but with your location changed, and a description of what was done. The description is just a plain-text explanation of what was done at this point.
This description is interesting, because later on it gives us the means of showing what route we took through the maze. We get a list like: move north, move north, move east, move north, move west. So we see each and every step taken along the way to the goal.
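Sticking with the same made-up maze, the goal test and the movement operator could be sketched like this; the operator either returns the new location together with a plain-text description of the move, or None when there is a wall in the way:

    # How each compass direction changes the (x, y) location:
    # x grows towards the east, y grows towards the north.
    OFFSETS = {
        "north": (0, 1),
        "south": (0, -1),
        "east": (1, 0),
        "west": (-1, 0),
    }

    def is_goal(location, goal):
        """Goal test: have we reached the end of the maze?"""
        return location == goal

    def move(maze, location, direction):
        """Operator: return (new location, description), or None if blocked."""
        if direction not in maze.get(location, []):
            return None                      # wall in the way, move failed
        dx, dy = OFFSETS[direction]
        new_location = (location[0] + dx, location[1] + dy)
        return new_location, "move " + direction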
And then we need a couple of utilities, like how to tell that two states are identical. This is used to prevent the program from going around in loops forever. And that's about it.
In a nutshell, the general problem solver takes a path and produces more paths. Those paths are stored in an internal data structure, and the choice of that data structure has an effect on how the search through the maze will be conducted. Each path itself is really just a list; you can use pretty much any ordered data structure, but I chose to use a list here. In the list we have each and every state that we pass through when going from the start of the maze to the end of the maze, together with a description of which direction we moved. And that's about it, that's really the thing in a nutshell.
I mentioned earlier that those paths are stored in an internal data structure whose choice has an effect on how the search will be conducted.
Let's take the basic example again. When we are at a given position, our starting position, the algorithm checks around it: can I move in these four directions? That is done by calling the means of moving towards the goal, the operators. If the move is possible, the operator function will return the new state and a description of what was done. If the move isn't possible, the operator will return nothing, or null, or void, or something else that we can tell apart. Depending on the programming language used, it will return something that clearly tells us that this was a failure, that we cannot go in this direction.
So in our starting position we call those four functions and get maybe two, three or even four possible moves back. First we check that we haven't visited those states yet on the current path, and then we store them, in this example, on a stack, one after another. Of course, this is a good point to check whether any of them was actually the goal, because then we know that this is the winning path, we don't need to proceed further and we can return the whole thing.
But if none of them was a winning path, we have a stack with, let's say, three paths so far. Then we just pick the one on top and repeat the same thing: call those four functions, get some number of paths back, check them for the goal and put them on the stack. We repeat that until we either reach the goal, and then we have the list of steps to take to reach it, or until our stack is empty. That means that we have tried all possible routes through the maze and none of them was a valid one. Then the search fails; there is no solution to our problem.
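Put together, the whole search loop I just described could be sketched roughly like this, reusing the MAZE and the move() operator from the earlier sketches; the frontier is a stack of paths, and each path is a list of (location, description) pairs:

    def solve(maze, start, goal):
        """Depth-first search over paths: returns the list of moves, or None."""
        if start == goal:
            return []
        frontier = [[(start, "start")]]              # stack of partial paths
        while frontier:
            path = frontier.pop()                    # take the path on top
            location = path[-1][0]
            for direction in ("north", "south", "east", "west"):
                result = move(maze, location, direction)
                if result is None:
                    continue                         # wall: can't go that way
                new_location, description = result
                if new_location in [loc for loc, _ in path]:
                    continue                         # already visited on this path
                new_path = path + [(new_location, description)]
                if new_location == goal:             # winning path found
                    return [step for _, step in new_path[1:]]
                frontier.append(new_path)
        return None                                  # all routes tried, no solution

With the tiny maze from the first sketch, solve(MAZE, START, GOAL) returns ['move east', 'move north', 'move west'].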
If we replace the stack with, let's say, a queue, then it's again the same routine, but instead of taking from the top of the stack, we keep taking from the other end of the thing. While a stack is first in, last out, the first item put in is the last item taken out, a queue is first in, first out: the first item put in is the first one taken out. Then we get breadth-first search. In our previous example we got three possible starting paths; we evaluate all three, and if each of those has two possible next steps, then we have six possible paths, and then we evaluate all of those, and so on.
Basically, depth-first search will try to find the path through the maze by running along one path as long as it can until it hits a dead end, then it backtracks to the last intersection where it had another choice and tries a different route. Breadth-first search, on the other hand, is going to look at all the paths: it takes one step on each path, then it takes another step on each path. So while depth-first search can be thought of as a single mouse running through the maze, trying to find the route as quickly as possible, breadth-first search can be thought of as a huge number of mice all running through the maze in unison, everyone taking a step at the same time.
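In code the only thing that changes between the two is which end of the frontier the next path is taken from; with a deque the difference is a single call:

    from collections import deque

    frontier = deque([[("start", None)]])    # the frontier of partial paths

    def take_depth_first(frontier):
        return frontier.pop()        # last in, first out: the single mouse

    def take_breadth_first(frontier):
        return frontier.popleft()    # first in, first out: all the mice in unison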
The choice between these two algorithms depends on what exactly you want to do, what kind of problem you have, and which one you think might perform well on that particular problem. In a maze, for example, depth-first search might find a solution faster, but in a problem where the problem space is infinite, where there is an infinite amount of space to cover and somewhere out there are probably some solutions, breadth-first search might be the better choice, because it is guaranteed not to miss a goal.
For example, if we were to find a path from my room into the kitchen in an infinite universe, depth-first search might decide to take a step through my window and keep running until the edge of the universe, which would take a long time. Breadth-first search would step out of the window and out of my door, keep searching both paths, and find the way to the kitchen much, much faster. Depth-first search in this case might not be able to find a solution at all, or rather, it might find one depending on the decisions it makes.
Then there are fancier ways of doing this search. If you are able to estimate the distance between any given state and the goal, you can use best-first search. Instead of putting things on a stack or in a queue, it keeps the paths ordered so that the paths that seem closest to the goal are at the top. It will always try to proceed along the path that gets closest to the goal. Sometimes it is possible to estimate this distance, sometimes it's impossible; it depends entirely on the problem.
If you couple this with keeping track of how far you have travelled so far, so you order your paths by the distance travelled so far plus the estimated distance to the goal and pick the best one, you get A-star search. A-star is a really good one; it will be able to find a good solution. It requires a bit more processing than the others, a bit more calculation, and again it cannot be used if you don't know how to estimate the distance from your current position to the goal.
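As a rough sketch of that ordering, assuming a grid problem where the straight-line (Manhattan) distance is a sensible estimate, best-first orders the paths by the estimate alone and A-star by the steps taken so far plus the estimate:

    import heapq
    from itertools import count

    tie_breaker = count()   # keeps heapq from ever comparing two paths directly

    def estimate(location, goal):
        """Estimated distance left: Manhattan distance on a grid."""
        return abs(location[0] - goal[0]) + abs(location[1] - goal[1])

    def push_best_first(frontier, path, goal):
        """Best-first: order paths only by how close they seem to the goal."""
        priority = estimate(path[-1][0], goal)
        heapq.heappush(frontier, (priority, next(tie_breaker), path))

    def push_a_star(frontier, path, goal):
        """A-star: steps travelled so far plus the estimated distance left."""
        priority = (len(path) - 1) + estimate(path[-1][0], goal)
        heapq.heappush(frontier, (priority, next(tie_breaker), path))

    # The search loop stays the same; it just always takes the smallest item:
    # priority, _, path = heapq.heappop(frontier)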
And really, sometimes it's just fun to try different algorithms and see what difference it makes to both the solution time and the number of steps you need to take, and what kind of solution you get. If there are multiple solutions, these four different ways of conducting the search might give you different solutions.
And while I'm talking about the distance travelled so far and the distance from the current state to the goal, this is not limited only to map problems.
For example, you might decide that instead of a distance you are talking about cost. Say you have to pick your kid up from school and you have different operations you can do: you can go there by walking, or you can go there by bus. Walking would be the cheapest but would take the most time; your own car would be the fastest, but maybe it has a broken battery, so you have to replace that first, costing more money; or you could take the middle road and take the bus, which means you have to wait at the bus stop, but the cost is lower than travelling by your own car and the speed is faster than walking. You could treat the distance between the nodes, or the states, as some sort of combination of the time an operation requires coupled with the money it costs. This example is from Paradigms of Artificial Intelligence Programming by Norvig.
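As a small made-up illustration of that idea, the "distance" of each operation could be a weighted mix of the time it takes and the money it costs; the options, numbers and weighting here are all invented:

    # Made-up options for getting to the school: (minutes, euros).
    OPERATIONS = {
        "walk": (40, 0.0),
        "bus": (20, 3.0),
        "own car, after replacing the battery": (10, 120.0),
    }

    def cost(minutes, euros, minutes_per_euro=2.0):
        """Treat each euro spent as being worth a couple of minutes of travel."""
        return minutes + minutes_per_euro * euros

    best = min(OPERATIONS, key=lambda name: cost(*OPERATIONS[name]))
    print(best)   # 'bus' with these made-up numbers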
Another example is the eight queens problem. This is a fun problem because it's fairly simple, it's easy to explain, and it's not too huge in the way it's usually posed, but you can make it harder, or rather more time-consuming, to solve by increasing the number of queens.
Okay, in this puzzle you are given a chess board, which is an 8x8 grid, and 8 queens, and your task is to place the 8 queens on the board so that not a single one of them can take another one: they cannot be on the same column, they cannot be on the same row, and they cannot be on the same diagonal.
One way of approaching this is to represent the queen positions as a list of 8 items. Because we know that they cannot be on the same column, we can simplify the problem and just decide that queen 1 goes in column 1, queen 2 goes in column 2, queen 3 goes in column 3 and so on. So we have a list of 8 row numbers, one per queen, and we know that all those 8 numbers must be unique, because if two were the same it would mean that those queens are on the same row. Then we still have to solve the diagonals. This we can check by making sure that the difference between the row numbers is not the same as the difference between the indexes: if you are comparing the positions of queens 1 and 2, the difference in index is 1, and we know that if queen 1 is at row 1, then queen 2 cannot be at row 2, or row 0 if you go with zero-based indexing.
So now we have a problem encoding and a way of checking which positions are valid for the queens. Our operators will just be adding a queen at one of the rows in the range 1 to 8, and the function to recognise whether we have reached the goal checks that all the queen positions are valid and that we have placed 8 of them. That's it, that's all we need to do. Oh, and of course we also need a way to detect that two states are identical; that's just comparing two lists to see whether they have the same content. When we have defined all this, we can just give it to our general problem solver and it can start finding solutions for us.
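A quick sketch of that validity check: the state is a list of row numbers, one per column, and a placement is valid when no row repeats and no two queens share a diagonal:

    def is_valid(rows):
        """rows[i] is the row of the queen in column i (zero-based)."""
        for i in range(len(rows)):
            for j in range(i + 1, len(rows)):
                if rows[i] == rows[j]:                # same row
                    return False
                if abs(rows[i] - rows[j]) == j - i:   # same diagonal
                    return False
        return True

    def is_goal(rows, board_size=8):
        """Goal: all eight queens are placed and the placement is valid."""
        return len(rows) == board_size and is_valid(rows)

    # One known solution to the eight queens puzzle, using zero-based rows:
    print(is_goal([0, 4, 7, 5, 2, 6, 1, 3]))   # True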
This is really interesting, because now we have a way of solving a problem without writing the algorithm over and over again. We have this tiny little machine that we can reconfigure by giving it different parts: this is how you represent the state, here are the means of moving from one state to another, here's the starting state, here's the function that tells whether we have reached the desired state, here's a way of telling whether two states are the same or different, and, if we are doing a best-first or A-star search, here's a couple of functions that tell you how far you are from the goal and how far you have travelled so far.
In the eight queens problem you cannot use best-first or A-star search, because you have no way of telling how close you are to solving the problem. I mean, yes, you can tell that you have six queens on the board, but that doesn't really tell you how close you are to solving the problem. The eight queens puzzle is an example of a problem where we are not that concerned with how we get to the end. We don't really care about the path, like first you place a queen here and then here and then here; all we care about is the end result: this is how the queens are placed, this is the solution.
I think that's it for now. I have a blog where I have been writing about this thing; I'm going to put a link in the show notes, and of course I'm going to put the couple of books I mentioned earlier there too. They are really interesting and I really recommend you have a look at them. The book by Norvig starts off slow; it has a section in the beginning about how to program things in Lisp, and if you're not interested in that you can just skip that part and go to the real meat. The book starts slowly, but when it picks up speed it gets really interesting. Okay, that's it for now. Cheers.
You've been listening to Hacker Public Radio at HackerPublicRadio.org. We are a community podcast network that releases shows every weekday, Monday through Friday. Today's show, like all our shows, was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contributing link to find out how easy it really is. Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club and is part of the binary revolution at binrev.com. If you have comments on today's show, please email the host directly, leave a comment on the website or record a follow-up episode yourself. Unless otherwise stated, today's show is released under the Creative Commons Attribution-ShareAlike 3.0 license.