
Episode: 3319
Title: HPR3319: Linux Inlaws S01E28: Politicians and artificial intelligence part 1
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3319/hpr3319.mp3
Transcribed: 2025-10-24 20:46:21
---
This is hacker public radio episode 3,319 for Thursday, the 22nd of April 2021.
Today's show is entitled Linux Inlaws S01E28: Politicians and artificial intelligence part 1.
It is hosted by monochromec and is about 67 minutes long and carries an explicit flag.
The summary is part 1 of a miniseries on AI, ML, DL and other fun.
This episode of HPR is brought to you by AnHonestHost.com.
Get 15% discount on all shared hosting with the offer code HPR15.
That's HPR15.
Better web hosting that's honest and fair at AnHonestHost.com.
This is Linux Inlaws, a podcast on topics around free and open source software,
any associated contraband, communism, the revolution in general and whatever else
fancies your tickle. Please note that this and other episodes may contain strong language,
offensive humor and other certainly not politically correct language.
You have been warned. Our parents insisted on this disclaimer.
Happy, Mom? Thus, the content is not suitable for consumption in the workplace,
especially when played back on a speaker in an open plan office or similar environments.
Any minors under the age of 35 or any pets including fluffy little killer bunnies,
your trusted guide dog unless on speed, and cute T-Rexes or other associated dinosaurs.
Good evening Martin, how are things? Good evening,
Chris things are fine and dandy as they would say across the world.
Excellent. How's the value? How's the lockdown treating you these days?
Same as before, very much before. It hasn't changed as such.
Very good, very good. Did you mention hairdressers opening last time?
No, I think I did.
They haven't here yet, but they are soon.
But then you get vaccinated left, right and centre apparently.
Because we are ahead of most countries.
Some people would call them stealing from Europe,
yes, but we won't go down that route anyway.
Well, it doesn't matter.
I'm just being very silly, Martin.
I know it's not good.
Yes, okay, no, no, it's a political show after all.
No, no, this is not the Brexit podcast for some special reason.
Russian community is excluded.
But rather the Linux Inlaws. Now, what will we talk about?
What do we talk about tonight?
Well, people use a lot of terms for these things, like
artificial intelligence, data science, machine learning,
deep learning, lots of kinds of terms being thrown around.
But hopefully this show will be clarifying some of these today.
Indeed, so in the olden days that would be known as
a Mag, Maggie Thatcher.
So in the olden days,
these concepts would be all rolled up into one called Maggie Thatcher.
Yeah, the first project that terribly went wrong.
That went wrong, Valerie.
Sorry, we probably have to cut this out because it's a political joke.
Yeah, and it's slightly confusing.
Esmosa.
I don't get it.
Hang on, she was a politician in the 80s, right?
Or whatever it was.
Yes, correct.
But you see,
I didn't know that she did anything with machine learning.
No, but if she was an artificial intelligence, so.
Was she?
Oh, she was one.
Right.
Right.
We're getting there.
We're getting there.
Okay, okay.
Sorry.
Going back to somewhat safer route now.
I picked up a bit of a far-fetched link.
I do apologize.
It's not in the equal amount of beer at the moment.
Okay.
This is my second bottle.
I mean, this is nothing.
This is just warming up.
And I'm running low on this stuff, actually.
Okay.
Before we go into the details, yes.
Let's cover some of the basics first.
There is, of course, the wide and open field called machine learning.
No, no, no, no.
What about artificial intelligence?
Oh, sorry.
Anything not human.
What is that?
25.
Do you not count the intelligence of cats and dogs?
Sorry.
Anything not living and breathing?
Okay.
Makes sense?
Yes, no, maybe.
That's it's quite reasonable.
Full disclosure, for the wax and where
beings listening to this podcast,
you may get offended further down the road,
but don't worry about it.
For the wax, right?
For the wax, where beings listening to this podcast
as in the grey zone between artificial and real intelligence.
Oh, sorry.
It's about fish.
No, you know something called a terminator?
Terminator?
Well, there was a new computer.
Like a computer, it's had a human or the other way around.
There's a, there is a grey zone.
No, there's a different design.
Right, lower name.
The fact that I'm alluding to it,
that there is actually a grey zone.
So it's not as black and white as
some people may
make you believe.
It's very good to me.
Okay.
Now, okay, artificial intelligence,
not necessarily walking, running, breathing,
and other stuff.
Well, but doing solving problems,
making decisions, stuff like that, no?
Yeah, but that was the, that was probably for us intelligence, no?
Here we go.
That's intelligence, no?
Here, but that was, but that's, but that may be artificial intelligence, too, no?
Well, this is why I'm asking.
Sorry.
Sorry.
Okay, intelligent beings, anything outside politics.
Makes sense?
Well, okay, that's the end-of-state agency.
Yes.
Not yours.
Managers.
Okay.
Sorry.
Okay.
Okay.
Artificial intelligence.
Anything else, anything outside the grey zone
and the biological beings, that's the most.
Okay.
Yeah, yeah, go for it.
As in, sorry, as in originally man-made,
I think that's the, that's the most fitting definition, I suppose.
Well, that's the, there's a clue, there's the aliens, I guess, but yeah.
That's, that's, that's the, that's the man-made, isn't it?
Yes, aliens, if you're listening,
please send email to feedback
at linuxinlaws.eu,
ideally in a language which we can understand.
We are trying to be a politically correct podcast,
but somehow it doesn't work out.
Indeed, indeed.
Right.
Okay.
It's going back to the, coming back to machine learning.
Why, why, why is machine learning so important, Martin?
And I don't want to hear the word politics this now.
Why is it important?
Well, it depends who you ask, right?
If you ask, um, I'm asking you.
Okay.
If you're asking me, then, for me, it is a fact that you can automate
mundane tasks that would require some human-like
qualities like being able to recognize,
speech, images, those kind of things, and making decisions based on those.
And do you want to do that for the task that people don't want to do, maybe?
So, would it be fair?
Would it be fair to say or to assume, rather, that machine learning
would entail some sort of computer in one shape or another?
That would be fair.
Sorry, I agree with them.
Not computer, but algorithm.
Let's put it this way.
Yeah, now that's fair.
The free algorithm is good.
Yeah, he doesn't have to be.
Well, I mean, it depends on how you define a computer.
People think of computers as things with electronics and stuff, but, you know,
an abacus is also a computer, right?
Because you can compute on it anyway.
Oh, it's fair, yeah.
Sure, the computer in the widest, in the widest meaning, in terms of a Turing machine.
Deterministic or not?
Indeed.
And links, of course, will be in the show notes.
Alan, if you're listening, no sweat, we'll provide definitions.
Sorry.
Actually, we should get him on the show.
You don't know who, basically, you don't know who's listening to this, right?
I mean, this is not copyright.
No, no one took him on the show.
It's not copyright, it's the living means.
Oh, yes.
Deeds, did.
Do you know, or do you know, medium?
Well, there's, um,
That we could ask.
There's a popular writing platform, um...
I'm not talking about that medium.
A medium, that's what I'm talking about, man.
No, it's not medium.com.
Anyway, okay.
Yes, going back to where I went, right?
Exactly.
Exactly.
Okay.
Yeah, but what does your opinion on this then?
Machine learning has been around for ages, right?
I mean, no, no, no, no, but why would you?
What do you mean?
Well, why would you want to use machine learning?
Because humans are stupid, right?
As we all know, we just have to take a look at politicians, for example.
Well, because humans are complex, I think.
So,
most of the time, with this human point of view and this thing.
And sorry, let me rephrase that.
Machine learning, of course, would be the next step towards a better being
in terms of the next step of the evolution.
No, this maybe.
This is a very controversial statement.
I know what that is.
It is controversial statement, and also the technology isn't quite there.
Is it really, um, say it?
But in comparison to the 50s,
we really have, we have it, we have it, but I suppose.
We have, yes, we come to you.
Yeah, because the 50s was actually, yes, because the 50s was actually the first point in time,
I would reckon, where mankind thought about putting computers to good use in terms of
artificial intelligence, as an intelligence in computers.
How did they find good use?
Coming up with the concept and thinking about the consequences, to some extent.
No, no, no, but I mean, you know, what is a good use of computer?
Is it, um, advancing mankind?
Okay, not many are used for that.
Mr. Schwarzenegger, if you're listening, this shows for you.
Oh, we could get him on the show as well.
He's not there, at least.
Makes it slightly easier.
Okay.
Where were we?
Right.
History as a matter of fact.
Oh, yes, okay.
History.
Yes, so.
So, so the thing is basically, this whole artificial intelligence and machine learning was kind of,
well, the, the basics were all the rage in the 50s and 60s, but then it somewhat got down
because the technology wasn't quite there.
Indeed.
Uh, because mostly you were talking about mainframes and the mainframes had limited computing power,
nevermind storage and bandwidth and all the rest of it.
So, um, I think in the oversimplifying things in the, in the 70s, 80s and 90s,
I wouldn't say that artificial intelligence took a, took a hibernation of sorts,
but I think it's not too far from the truth.
Yes, very.
I'd say the, the computing power wasn't there, right?
So.
Yes.
And then came, of course, a startup called Google
with the, uh, with the disposable income at hand,
to change things once again.
For the last, I reckon, 10 or 15 years, something especially called deep,
um, especially machine learning in the shape of deep learning has been all the rage, right?
Might not always have been called that, but, um, Google's probably the best, well,
this, this Google, Facebook, Amazon, they're all using it and developing it.
Yes.
Um, so, somewhat, what's the difference between machine learning as in,
machines that are learning and deep learning?
Hmm.
Well, the difference is that deep learning uses, um,
uh, self-training neural networks, whereas machine learning is really,
uh, an algorithm as such that someone has developed. Um, if you think about solving the problem,
you could, uh, do it with a whole bunch of if statements, right?
But it would be many if statements.
So you'd be there for a long time.
So why not use a computer so you do that for you instead?
Before we go any further, we should probably, um, explain what a neural network is.
Okay, we can do that, yes, yes.
Yes, um, the beauty about neural network is actually that everybody has one, at least,
if not more than one.
Well, you say that.
Maybe apart from politicians, but I'm not entirely sure if we have, um,
the same functionality in our brains, but it's, it's loosely based on that, right?
Okay, at the end of the day, it's, um, okay, what do you find in a neural network, basically?
It's interconnected cells that are capable of modifying their behavior.
Let's put it this way.
Don't quote me on this definition.
It's very loose, of course behavior in terms of the way they process signals.
If you take a look at the human brain, the human brain is made up of,
I'm, I'm, I'm grossly, I'm grossly oversimplifying things now,
but not everybody listening to this podcast is a neurobiologist.
So, um, so what is a neuron?
Exactly.
It's, it's exactly, sorry.
A neuron, if I could just finish, a neuron is a cell
that is capable of transmitting electric current.
And the second, correct. The classical trait of a neuron, actually,
is that it can change the way it transmits this current.
It transmits that current.
Okay, how does it do that?
Chemical foundations, as a matter of fact.
We won't go into the details because that episode is only about three hours long.
If we would go into the details, we would be easy looking at six hours.
No, in a nutshell, basically, neurons are able to change the way they transmit their current.
Okay, I guess the one question here is, um,
are they able to change this current in an analog way?
Neurons only work on an analogue basis.
Okay.
They modulate the current, in contrast to computers,
which normally are actually operating on a binary, in a binary fashion.
Unless you're talking quantum computers on all the rest of it,
but more on that in the later episodes.
Yeah, maybe in five years time in the HD.
Accessible to a mere mortal.
Yes.
D-Wave, if you're listening, email address: sponsor at linuxinlaws.eu.
Turn one over now.
Yes, if you're wondering what you want to do with all that money.
Indeed, indeed.
D-wave, of course, being one of the first companies
offering quantum computing computers on a commercial basis.
Hmm, the same goes for a company called IBM, IBM, or Watson, if you're listening.
Well, I think Watson, it probably is indeed.
e-mail address, anyway, I'd like rest.
Okay, back to the basics.
Okay, the idea with artificial neural networks is pretty much the same.
You have a simulation of a neuron that takes input values,
comparable to the current flowing into a neuron,
and then does something with it.
Normally, this is where the magic is, this is where the magic sauce is,
as in the function that takes the input value,
and then basically comes up with an output value.
And in traditional artificial neural networks,
this function would have one important property called something called a weight.
So it's basically a function.
Exactly.
And the very primitive neural network basically would just essentially take the input value,
apply the function to it, factor in the weight,
and then produce an output value.
Now, the beauty is basically, you modify this weight, you get a different behavior
of the function.
And this is how something called back propagation networks work in terms of the...
Yeah, so sorry Martin, go ahead.
Yeah, before you move on to back propagation.
So if we...
Because we don't have to have one neuron right, we have a bunch of neurons.
Sorry, yes Martin, correct of course.
Connected in layers, right?
And there's two main layers, the input layer, and the output layer,
and the whole bunch in between depending on what you want to do with it.
In traditional artificial neural networks, yes.
Correct.
You have to have an output layer because you have to have something that says
whether it's a cat or a dog, right?
Martin, pardon the skipping ahead, but that's a good example.
Anyway, I was just describing the whole network, right?
So you have a bunch of neurons that are...
You have to define the input.
Well, your input layer, and then you've got a bunch of stuff in the middle where
connections happen.
And then there is your output layer that just makes the final decisions on what you want it to do.
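To make the layer idea concrete, here is a minimal Python sketch of such a network. It is purely illustrative: the layer sizes, weights and input values are invented here, not taken from the episode.

import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)
w_hidden = rng.normal(size=(3, 4))     # weights: input layer -> hidden layer
w_output = rng.normal(size=(4, 1))     # weights: hidden layer -> output layer

inputs = np.array([0.2, 0.7, 0.1])     # values arriving at the input layer
hidden = sigmoid(inputs @ w_hidden)    # every input feeds every hidden neuron
decision = sigmoid(hidden @ w_output)  # the output layer makes the final call
print(decision)                        # e.g. close to 1 for "cat", 0 for "dog"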
For the following discussion, it may help to define Markov chain.
Before we progressed to hidden Markov chains.
Sorry, that was a joke we won't go there.
This is not a podcast on maths, but we'd like to keep it simple.
Having said that, it is all related to maths, anyway, or based on maths.
I don't know, I'll explain.
Okay. Now, Martin rightly interjected.
Normally, neural networks consist of several layers.
And the idea behind these layers is, coming now to something called back propagation networks.
There's one phase basically called the training phase, and then there's the
what's called inference phase.
Inference.
Inference, yes.
Sorry.
And the idea is basically, during the training phase, you actually modify these functions
including the weights.
Now, when you say you, you mean not you, really, do you?
No, not me personally.
That would be quite a busy task for a neural network with many neurons.
Okay, Martin, why don't you explain BNPs...
Sorry, BNPs, BPNs, in a little bit more detail.
Okay, so we have our layers of neurons, right?
So if you imagine your input layer neurons, every neuron on your input layer
connects to the next layer, to neurons in that layer, right?
So if you imagine, think of your nodes in a network, all your nodes on your input layer are
connected to every single node in your next layer, and so on, depending on how many layers you have.
Fine, every one of these neurons is a function that produces an output,
and the bias determines whether it gets activated or not.
Sorry, what's the bias?
Oh, sorry, you just talked about weights.
Sorry, did I go too far already?
Sorry, you didn't explain what the bias is, sorry.
I thought you explained weights.
Yes, but biases...
Well, it can be, it can be boiled down to a weight, yes.
It's just not the whole truth.
Well, it's not quite accurate, but we'll leave it at that.
I mean, we didn't really talk about any numbers, so say your...
Okay, so basically, let's start, go back in one second.
So your neurons, they work on numbers, right?
They have a number of inputs.
They work on input values, yes.
In the simplest type, in the simplest form, these would be numbers as in floating point numbers, yes.
And ideally, you want these to be between 0 and 1 to give you...
That depends on the network and on the arithmetic, using all the rest of it.
Yeah, fair enough.
But okay, this is your basic principle of these networks.
So you're kind of coming back to using ordinary
von Neumann architectures.
Quantum neural networks are, of course, different.
However, this is getting slightly confusing, Martin.
But that's okay.
Going back to the von Neumann, that sounds very German.
Don't mention the Germans.
What do you say, Martin?
What do you say?
Well, we're worried.
So anyway, okay, now...
Okay, before we start with an example,
or whether we go down the internal structure first.
Anyway, so imagine you have a neuron, excuse the function,
and number comes out, right?
Fine.
You may decide that the function may produce 24,
and you may say, oh, this 24 is actually way too low.
I don't want this to activate unless it's over 30 or something like that.
So that's your bias, really.
Which is kind of switching on and off neurons in your network.
But again, so carry on.
So that's weights and biases.
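As a rough sketch of that weight-and-bias idea in Python (the numbers, including the threshold of 30, echo the invented example above; real networks use smoother activation functions than a hard cut-off):

def neuron(inputs, weights, bias):
    # Weighted sum of the inputs, shifted by the bias, then a hard
    # threshold deciding whether the neuron "activates" or not.
    weighted_sum = sum(i * w for i, w in zip(inputs, weights))
    return 1 if weighted_sum + bias > 0 else 0

# A bias of -30 means: only activate once the weighted sum exceeds 30.
print(neuron([8, 8, 8], [1.0, 1.0, 1.0], -30))     # sum 24 -> 0, stays off
print(neuron([12, 12, 12], [1.0, 1.0, 1.0], -30))  # sum 36 -> 1, activates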
Where are we?
You were explaining your back propagation network.
No, it was explaining weight and biases and connectedness between layers.
Before I rudely interrupted you.
Okay, so anyway.
So you assign a weight to your neuron, right?
Is that where we were?
Yes, go ahead.
Oh, well, not you, but...
No, the algorithm.
Yes.
There are no people involved.
People are good listening, you're not involved.
Right.
So, okay, so we got a weight for each neuron,
gives you a...
If you add all of those together, you get a weighted sum, right?
Mm-hmm.
And that's where you're...
You can calculate your...
It's better with an example, really, isn't it?
Think of an example.
Because otherwise, we're just talking maths, really.
Can't think of one, actually.
Yes, not one.
Well, what would you solve with the neural network?
Lotto numbers?
Mm-hmm.
Okay, but that...
Okay, so what training data would you have for this?
Maybe give us...
Let's go back for one second.
So, the idea about the training is to say,
these are positive examples of a good outcome, right?
So you would have lots of numbers and win, yes, no.
And then so your...
In this case, your final layer is a yes, no decision, right?
Whether a number is a winning number.
Yes, you could do that, actually.
Oh, what you could also do, basically, is you would take the...
I think the lotto number example is not too far-fetched,
because essentially, if you take the history of all the lotto...
Of all the lotto...
Yeah, yeah.
Sorry, you're still there.
Hello, hello, hello, hello.
If you just do it long enough, sorry.
Yes, you do that.
So the small...
90 interruptions out of...
To count you...
Sorry, yes, sir.
I see if you're listening.
You get corrected.
We will be fired in the morning, by Martin.
In the morning?
In the morning?
Tonight?
Yes, anyway, come here.
Okay, come here.
I mean, back to the...
And you did that again, because I don't know if it was...
Yes.
...in the end, that was...
No, I mean, the lotto numbers...
The lotto numbers example, actually, isn't too far-fetched,
because essentially, if you do this long enough,
you will be able to spot, for a set of lotto numbers
being drawn in a certain environment,
you will be able to spot certain patterns.
Okay.
In terms of how the balls roll
and how the balls fall into particular columns,
which, of course, then represent the lotto numbers being drawn.
And as I said, if you do this often enough,
you will be able to spot
and isolate patterns.
And that's exactly how you predict the next set of lotto numbers,
given the current environment, taking all the physical parameters into account.
That makes sense.
So, why don't you talk us through that example in a neural network?
Fair enough.
So essentially, what you do is,
for the first couple of million iterations,
you feed data into the network,
and then you would take the, well, the environmental parameters, I suppose.
Plus, plus the balls themselves.
Sorry, when you say data,
one run of the network would be a single set of numbers,
is that what we're saying?
Yes, and then plus, plus probably more,
because you want to also take into account the time of the day,
because that would probably make tiny deviations in gravity.
You would also take our pressure into account,
because that would affect the way the balls will fall.
You want to take into account the alcohol consumption of the moderator
operating the machine.
You see it, it's getting complex.
But if you take all these parameters into account,
essentially, the way you do is,
you, for the first couple of millions of iterations,
you just let the balls roll.
They will come up with a random pattern,
which of course is wrong, because it won't reflect.
And then, sorry to interrupt.
The pattern, the pattern that you talk about here,
what does that mean in terms of a neural network?
And cut and say again, sorry, you will cut now.
All right, okay, yeah, something going on with my G today.
Where were we?
Yes, sorry, so my question is,
what would that pattern look like in a neural network?
What does that mean in neural network terms?
You would start with a random distribution of the weights
or the biases of the functions.
But then based on the stuff that you feed into the network,
and that what comes out of the network,
plus the, the historical data that you have at your disposal,
you can then go back and modify the bias and the weights
in the individual neurons,
I'm self-correcting, I'm sorry,
what was the point, in the individual neurons,
or, basically, the individual entities representing the neurons.
Essentially, and this is where the term back propagation comes from.
So you do a run, you get a set of output values,
that set of output values deviates from the original one,
and then you go back modifying the individual structure
of the bias, of the function, of the neuron, and so forth.
So eventually you arrive at the optimal configuration of each and every neuron,
and then the network is able to predict,
for a specific time, for a specific alcohol consumption of the operator
operating the machine,
and all the rest of these parameters, it would then be able to predict
the outcome of the next lotto draw.
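A toy version of that "run the network, compare, go back and adjust the weights" loop might look like this in Python. It uses a single linear neuron and a simple error-driven update rather than full back propagation, and the "environmental parameters" and targets are made up purely for illustration.

import numpy as np

rng = np.random.default_rng(1)
data = rng.normal(size=(200, 3))        # invented inputs: pressure, time, ...
true_w = np.array([0.5, -1.2, 2.0])     # the hidden relationship to discover
targets = data @ true_w                 # invented "historical outcomes"

w = np.zeros(3)                         # start from a blank configuration
for iteration in range(500):            # many runs over the data
    predictions = data @ w              # forward run
    error = predictions - targets       # how far off was this run?
    w -= 0.01 * (data.T @ error) / len(data)  # go back and adjust the weights
print(w)                                # ends up close to true_w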
Yeah, so the,
does the term cost function mean anything to you?
Go ahead, partner.
Well, I mean, you just, you were talking about the output values.
So, okay, so imagine in your, your example,
for with every neural network, you have to define,
as I mentioned, your input and output layers, right?
So your input layer in your lot of example would be
older numbers from, I don't know, how many numbers are in a lot of 0 to 9,
always go up higher, no, it's higher than that, isn't it?
I don't find a lot of, 50, something or whatever.
Anyway, all the numbers from 0 to 55, the same, for example,
that those are your input values.
I think in German, it's 49 or something, but I may be wrong.
Ask the cartel because some, some level of funding,
because she comes from a lot of,
so, it does it?
The details will be in the show notes or not.
Excellent.
So, I mean, so in your lotto example, your outputs would be,
okay, so then you decide whether you have a yes, no
as an input as well, saying that this is a winning combination, right?
Or the very, yeah, in the very trivial example, yes.
Yeah, or you would have a yes, no as the only output in the end.
Saying, you know, whatever numbers you feed in,
it comes out with a yes, no answer.
So, so does it, you're kind of, or you would, yeah,
or you would attach the sequence to the actual output,
to the actual balls drawn.
Yeah, yeah, exactly.
So, it kind of touches on a couple of things.
One is availability of data and then second one,
how to structure all this stuff to.
Absolutely, you want all of it because, you know,
if air pressure is an important factor,
and gravity is as well, and you don't have these things,
then you're kind of less likely to have a good outcome.
Absolutely. So, the point that Martin is making here,
and that's a very valid point: for the more tricky problems,
in a neural network, basically, the function can be quite complex.
Or the functions in the, in the neuron
simulators, let's put it this way, can be quite complex.
And in the easiest, in the simplest terms,
that would be just a weighted sum, as Martin just explained.
But I reckon you won't get away for the short example
with that sort of simple function.
And then you run.
Yeah, so anyway, go back to the cost function scenario.
Your, your final layer, you're going to assign a cost function
on the difference of a good outcome and a bad outcome, right?
So, so if you have, I don't know, in the simplest form,
a yes, no, a couple of neurons on the end.
And the output, when you're training it,
because it's the self learning network,
because, as you already mentioned, the connections need to be established
and the weights involved need to be determined.
Your cost function at the end says,
really, how good was this run, right?
So, and then you can start adjusting them based on that.
So the cost function is just really a difference between,
again, a function based on the difference between a good outcome
and what the actual outcome is from the network at that point in time.
So, which is where you have to keep training it.
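In code, such a cost function could be as simple as the following sketch: a plain mean squared error between what the network produced on a run and what the training data says it should have produced. The numbers are invented.

def cost(actual, desired):
    # Mean squared difference: a perfect run scores 0, worse runs score higher.
    return sum((a - d) ** 2 for a, d in zip(actual, desired)) / len(actual)

print(cost([0.9, 0.2, 0.8], [1.0, 0.0, 1.0]))  # a good run, low cost
print(cost([0.1, 0.9, 0.3], [1.0, 0.0, 1.0]))  # a poor run, higher cost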
Yeah, carry on.
Why don't you do that?
Yeah, so, so, exactly.
So, in the simplest way, I reckon the cost function essentially says,
I'll kill this network by injecting some lethal poison back into the network,
so that all the neurons will die.
But I'm exaggerating, of course.
This is called alcohol.
If you're talking human brain, that's probably a very bad approximation.
I have a fun fact for you.
It's drinking alcohol actually makes you cleverer.
I knew there was a reason.
You know why?
Oh, I can't pick up for multitudinal reasons, Mark.
Oh, God, why don't you like me then?
No, no, no, God, why don't you think of it?
No, I can't.
Well, there's something called, of course,
loosening the center and that's what alcohol does.
No, okay.
No, no, no, no, it actually kills off brain cells as well, right?
Well, it depends on the amount, no?
No, no, no, no, no.
Well, I mean, if it kills off the right ones.
Exactly, this is the point.
It kills off the right ones.
Only the poorly performing ones die first.
So it's actually a good thing to be.
So, of course, there's a fine line between consuming
alcohol and drinking yourself to death.
Yes.
And not to mention the liver doesn't agree with this.
Yes.
Charles Bukowski, if you're listening,
we would really love to have you on the show.
Medium.com or not.
Okay.
And we were at back propagation networks.
Yes, of course.
We are trying to oversimplify things here.
And needless to say, this whole thing is way more complicated
than we are able to fit in an episode of Linux Inlaws.
There will, of course, be pointers in the show notes.
The bottom line is that back propagation networks were one of the earliest
examples of something called ANNs, artificial neural networks,
with the idea of having a model in place that would allow you to dynamically adapt,
based on the training phase,
the functions, the weights, and all the rest of these parameters,
and then, in the inference phase,
you would then put that training to use in terms of
letting the back propagation network predict things,
spot patterns, and all the rest of it.
And that hasn't really changed over the last couple of decades.
Indeed, indeed.
And so how do things like
other linear regression and
decision trees come into this?
I think I've talked enough for one episode, Martin.
Why do I have to do all the explaining here?
Come on, you're the expert.
If I ask a question, you have the answer.
So this is how it works.
It's kind of the basis of it.
Martin has a hard time acknowledging that I know everything, right?
From operating systems right up to
advance the back propagation networks.
But Martin, hey, no, it's going to go ahead.
Yeah, so I mean,
there's different ways to get to a
a prediction, right?
Which is not a neural network way.
So it's really an algorithm that
which is why we were talking about the difference between
machine learning and deep learning.
Anyway, so, okay, so having that with that,
who came up with this idea about deep learning,
or actually not the idea of who makes
is kind of made this more usable popular in the recent years.
Google.
Okay.
And how and what did they do?
They came up with a very important framework, which will be
part of the next part of this 20-part mini-series
on artificial intelligence, called TensorFlow.
And any other companies that did a lot in this area?
I'm dead sure.
Mark Zuckerberg, if you're listening,
the email address is sponsor at feedback...
Sorry, it's sponsor
at linuxinlaws.eu.
Martin, you used to work for Facebook one,
so why don't you give us an insight?
When did I use to work for Facebook, exactly?
Have I missed something?
Look at your LinkedIn page.
I'm dead sure it's on there.
You've hacked this, have you?
I thought you were an ethical hacker.
What is hacking, Martin?
No, I don't hack other people's LinkedIn profiles.
No, I'm glad to hear it.
Yeah, so Facebook obviously being the other company that did a lot in this area.
They came up with Torch, right?
Isn't it PyTorch?
Indeed, the other popular...
Which is, of course, another popular framework.
Implementing back propagation networks.
Because this is all what they do.
They just implement BNPs.
BNPs?
BPN.
BPNs.
Sorry, long day.
We cut this out anyway.
That's right.
It's just, back
propagation networks, yes.
Yes.
Yeah, so, okay.
I think we've kind of covered the basics enough
unless you want to go into the math of it,
which we probably don't want.
No.
So we talked about that.
Anything you add to this topic from a mathematical perspective, Martin,
will go directly into the outtakes.
I'm going to get in touch with the first product.
Okay, I'm just going to make sure.
No worries.
So you just work away, you just talk,
but it'll be the outtakes.
What about gradient descent?
Do you want to cover that?
No, you will.
And it will be part of the outtakes.
No, I won't bother them.
It'll be in the notes.
It'll be in the show notes here.
Okay, gradient descent for the people who are still awake.
Essentially, it's an advanced model
to adjust the individual functions, the weights and the neurons.
That's what this is.
We won't go into the details.
Yeah, I wouldn't call it advanced, but it's fine.
Don't worry about it.
Okay, so the thing about all this stuff is that when you speak to people
who are obviously familiar with this topic, they use all these terms, right?
So it's useful to know some of them.
Then why don't you go ahead and explain some of them,
including overfitting?
Overfitting.
Yes, well, that comes back to
gradient descent, right?
Go ahead.
Do we really want to do this?
Yes, Martin, because you teased it already.
Okay.
Okay.
Right, what's gradient descent, right?
If you have an outcome, right, and you want to decide whether the outcome,
the next outcome is better than the other.
Because we're okay.
So as you mentioned before, we're just training this thing with random stuff,
weights and biases until it comes, until we happy with the output of the cost function
represents our final, you know, yes, no answer or whatever it is that we're trying to
get out of the network as a decision.
So, where will we?
You were talking about overfitting and...
Oh, yes, yes.
Yeah, yeah, yeah.
So in a training phase, you run all these data through many, many times,
and so on.
And you compare the current value of the cost function with the previous value and say,
whether it's bigger, is it smaller, or how much bigger is it, and so on.
And so you can work out how, if you're going in the right direction of the training,
and appropriately adjust your weights to find...
Well, when we say you, it is the...
adjust the weights and biases, right?
So that's really what we're trying to do.
And with gradient descent, all we're doing is calculating whether we are
going in the right direction of getting a better answer, right?
And if so, by how much.
So if you imagine a simple two-dimensional graph, to keep it simple,
and it goes up or down, it may go up or down multiple times,
and your objective is to find your minimum of the difference between your,
you know, your current output of a cost function and your desired one.
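A very small illustration of that in Python, on a made-up one-dimensional cost curve rather than a real network: keep stepping in whichever direction lowers the cost, by an amount proportional to the slope. The curve and step size are invented.

def cost(w):
    return (w - 3.0) ** 2 + 1.0    # an invented cost curve, minimum at w = 3

def slope(w):
    return 2.0 * (w - 3.0)         # derivative of the cost at w

w = -5.0                           # start somewhere random on the slope
for step in range(100):
    w -= 0.1 * slope(w)            # walk downhill, proportional to the slope
print(w, cost(w))                  # w ends up very close to 3, near the bottom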
For the 90% of the listenership...
Let me oversimplify things, yeah.
People, imagine you're on a plane, not in terms of an airplane,
but rather a two-dimensional surface that has dents reaching into the third dimension.
Yes, essentially, you're trying to get from point A to point B.
The trouble is, basically, if you're for some reason...
Actually, we're going to use mountains.
I was just looting to a hole and being stuck in a hole,
but you can, of course, do the same thing with mountains, yes.
Okay, so imagine you're on mountain and it's foggy.
So you can't see the top or the bottom.
The way you just go with the bottom.
You're on top of this, you climb this mountain, you're going down the other side,
and you don't know where the bottom of the mountain is.
And it's foggy, so you can't see either.
So you keep going down because, you know, your mountain has a gradient
until you get to the bottom, but how do you know it is the bottom?
Because it's foggy, exactly, yes.
Yeah.
So what do we do with the foggy?
Yes, we actually take out our cell phone and call a helicopter.
What did you say?
I'm just trying to break the example.
Well, if you want to break the example, and you have a phone and an altimeter,
then you could know how high up you were at the point.
This is the point that Martin is making here.
You don't know what it was.
Exactly, sometimes it's hard, and this is basically where the magic comes in,
for a function to establish if it's not caught in something like a hole,
or if it's not just circling around the mountain trying to find its top,
or whether it has actually reached the top or not.
This is where the magic comes in with regards to modeling the functions of the neurons.
And this is also basically why overfitting is an important concept here.
Martin, why don't you explain what overfitting is because it's just one step further.
Yeah, fair enough, fair enough.
So how do we explain this with our mountain example?
Let's try to go somewhere with that.
Imagine you have about six or seven mountains on your plane that you're trying to cross.
Yes.
Overfitting then, again, I'm oversimplifying things.
You are trying to identify all the mountains, but for some reason, and again,
which is down to the function again, as the model of the neuron,
you're just stuck around one mountain without being able to see the other ones.
That's a good example.
This is overfitting.
So overfitting essentially means you're training the network in the wrong direction.
Yes, because you can't oversimplify exactly.
Yes, we're the whole basis of all this is that you don't know where you're going.
Yes.
So, for example, taking a very simple domain-specific framework into account,
and probably we should skip ahead and explain what a domain-specific network is.
Essentially, a neural network is able, and Martin will explain what a convolutional network is in a minute.
A neural network essentially is able to extract certain features,
put them back together again, and then come to the conclusion that a certain item
is a smartphone, a laptop, an animal, a kind of Coke, or maybe even a pencil.
To stick to something called image recognition as the domain here now.
So, essentially, you take a JPEG, you take a PNG, you put this into the neural network,
and the neural network is then able to say that, but you don't actually put the JPEG in, right?
So, when you put the representation of a JPEG into the network, yes, sorry.
You feed the data into the network, and the network is then able to extract.
Well, this is an important thing.
There's a lot of things you can do before you feed your data in, right?
Yes, we can give you an image of the network.
The details will be covered at length in part three of the mini series.
This will be covered by Martin, and it will be quite a compact episode, only about five hours long.
Oh, I'm not sure we can do that in five hours.
Exactly. Let me tease the whole thing, and let me give you an outlook of how this is done.
You take the picture, and again, Martin, I'm oversimplifying things.
Now, stay tuned, it's all pretty simple as good.
It will all really be in part three of the mini series, and it will be quite compact as I said,
five hours maximum. Anyway, it doesn't matter. Okay, you take the picture.
The neural network then basically takes a look at the picture, extracts certain features,
and puts them back together again. So at the end of the network run, of the inference
phase of the neural network, the neural network then comes to the conclusion we are looking at a
pencil, a laptop, a smartphone, a soda can, or even a cat. Coming back to my original remark.
Overfitting would then mean that only a certain species of cat can be recognized, but not the cat,
not the type of animal as in cat itself. So you would be looking at a named cat species,
like a mini lion, in terms of a cat that resembles a tiny lion.
And other cats wouldn't be recognized if the neural network were overfitting.
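Overfitting is easiest to see with a held-out test: a model that matches its training examples almost perfectly but does badly on data it has never seen has overfitted. The following Python sketch shows the effect with a simple polynomial fit instead of a neural network; the data is invented, but the pattern is the same.

import numpy as np

rng = np.random.default_rng(2)
x_train = np.linspace(0, 1, 10)
y_train = np.sin(2 * np.pi * x_train) + rng.normal(scale=0.1, size=10)
x_new = np.linspace(0, 1, 200)                 # data the model has never seen
y_new = np.sin(2 * np.pi * x_new)

for degree in (3, 9):                          # modest model vs over-flexible one
    coeffs = np.polyfit(x_train, y_train, degree)
    train_err = np.mean((np.polyval(coeffs, x_train) - y_train) ** 2)
    new_err = np.mean((np.polyval(coeffs, x_new) - y_new) ** 2)
    print(degree, round(train_err, 4), round(new_err, 4))
# The degree-9 fit scores better on the training points but worse on the
# unseen points: it has memorised the training set, i.e. overfitted.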
Okay, you get the details, as I said, in part three of the mini series, called the ins and outs of
domain specific networks. Well, the extended version would be about 10 hours, but the...
okay, so how are we doing with that overview? I think we are still missing one or two things.
Martin will cover them in about an hour, namely, I mean, we already kind of touched upon CNNs,
as in convolutional networks, but why don't you explain in a little bit more detail what a
convolutional network really is? I mean, there are lots of different classes of neural networks,
I think we covered the basics, and without going into the details of each and every one of them,
which we don't want to do. We don't. Okay, there are, well, not in the overview episode.
Martin, now we will try to perform the following miracle: explain the prominent five types
of neural networks, yes, with two sentences for each type. Okay, you heard it
here first. You're welcome. Martin, go ahead. Okay, so let's have a think. So
convolutional neural networks are used for image recognition mainly. That's one sentence, yes.
That's the end of the speech. Well, so we got, have you got any for me?
What about generation, generational as well as networks? Giants?
Yes, so they're quite a nice idea, really. They are debing. That's one sentence.
Okay, okay, the next sentence is going to be a little bit longer.
That's two sentences, but you didn't explain the fucking concept. No, no, the concept is you run more than one
and what you do is you, basically, you have a competition, right? It's a competition between
networks and you say, oh, this one is doing much better. Very important, yes. It goes in the bin.
And why is that important, Martin? Well, because you want to do it as fast as possible. So
the one that is obviously on to winner, if we think about our mountain example, then,
yeah, if you, I mean, yeah, go ahead, sorry. No, no, you carry on, carry on.
The core of a generative adversarial network is essentially a competition
between two competing neural networks. The idea is to backfeed any optimization that was done
in one network to the other, meaning it's like spy versus spy, right?
The more these two spies fight, the better they get at fighting. And that's the overall idea.
So if you have a one network in charge of what's a good example, for defining a painting,
aimed at forging paintings. It'll start with the very basic forging algorithms in terms of
methods. But at the very same time, you have a second network trying to guess if a painting is
forged or not. By cross feeding the outcome of these two networks, each one improves the other.
And this is the overall idea behind this type of neural network. And this is the hot shit,
I think, at the moment, in that particular type of science, right?
Yeah, it's just way more efficient in the training, really, because I think that's its main reason.
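A minimal sketch of that two-network competition in Python using PyTorch (which comes up again below): the "forger" tries to produce samples that look like the real data, the "detective" tries to tell them apart, and each one's training signal comes from the other. All sizes and data here are invented for illustration.

import torch
import torch.nn as nn

forger = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
detective = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1), nn.Sigmoid())
loss = nn.BCELoss()
f_opt = torch.optim.Adam(forger.parameters(), lr=1e-3)
d_opt = torch.optim.Adam(detective.parameters(), lr=1e-3)

real = torch.randn(64, 2) + 3.0                   # stand-in for genuine paintings

for step in range(1000):
    # 1) Train the detective: label real samples 1 and forgeries 0.
    fakes = forger(torch.randn(64, 16)).detach()  # don't update the forger here
    d_loss = loss(detective(real), torch.ones(64, 1)) + \
             loss(detective(fakes), torch.zeros(64, 1))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()
    # 2) Train the forger: try to make the detective say "real" (1).
    f_loss = loss(detective(forger(torch.randn(64, 16))), torch.ones(64, 1))
    f_opt.zero_grad()
    f_loss.backward()
    f_opt.step()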
All right, well, so we've got, we, of course, still have convolutional networks.
Yeah, used for image recognition. Yes, but why are they called convolutional?
Because they have more than one convolutional layer.
Okay, what's convolutional layer modern?
Well, you know about the layers we just talked about earlier.
I do, but probably most of our listeners don't.
I mean, it's basically, as you mentioned, the type of function inside the neuron, right?
So in this case, a convolutional operation.
So the idea behind the CNN, to put this slightly more...
Because I'm making the rules, Martin, it's quite simple.
Okay, the idea behind the CNN is essentially, of course, you still have a neural network at your
disposal, as in you have an input layer and you have an output layer, but the layers in between
vary in terms of interconnectivity, in terms of the number of neurons in a particular layer and so
forth. So the idea behind the convolution network and hence the name is to break down the
recognition of patterns into certain steps. So, for example, again, over simplifying,
one layer would extract certain aspects of an image staying in the image recognition domain now,
would extract certain aspects of an image, then the next layer would have a nose or something.
Well, I was just getting there. The next layer would then essentially try to make sense of
this extracted feature and then the third layer would put them back together again in terms of
understanding the combined features. So let's take a look at the stuff that Martin has already
mentioned. So if you have a decision, cat, dog or even a face, right? A face, normally a human
face has a nose, has two ears and two eyes and also has a mouth. So the first layer would take the
image, would take the bitmap and would try to extract two round shapes. I'm just using examples.
Sorry. We would try to extract two round shapes, we would try to extract something oval,
we would try to extract something pointy and also we would try to extract something on the
sides, prolonged oval shapes. The next layer would then take a look at these extracted features and
verify if this is a nose, if this is a pair of eyes or something like this. And then the third layer
would make a judgment on that outcome of the second layer, whether we're really looking at a face or not.
And if we are, whether that's a human face, but that would probably be a fourth layer. So this
is how CNNs work in general. And as I said, this is, again, very much oversimplified.
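For reference, a small convolutional network of that three-or-four-layer flavour could be sketched in PyTorch like this; the layer sizes and the three output classes are invented for illustration, not a real face detector.

import torch
import torch.nn as nn

cnn = nn.Sequential(
    nn.Conv2d(3, 16, kernel_size=3, padding=1),   # layer 1: edges, round shapes
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Conv2d(16, 32, kernel_size=3, padding=1),  # layer 2: eye-, nose-like parts
    nn.ReLU(),
    nn.MaxPool2d(2),
    nn.Flatten(),
    nn.Linear(32 * 16 * 16, 3),                   # final layer: cat / dog / face
)

image = torch.randn(1, 3, 64, 64)   # stand-in for one 64x64 RGB picture
print(cnn(image).shape)             # torch.Size([1, 3]): one score per class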
Ah, what do you, what do you, details in the show notes?
So if people are still awake, we should probably tease now the second part of this mini series.
Oh, yes, which is, what was it?
A domain specific, sorry, a concrete example of a backpropagation network.
Was that the second one? I thought we had spoken about that.
No, no, no, the third one would actually be a domain specific framework on top of that,
on top of that infrastructure that we're going to talk about next.
I'm confused.
So Martin is still struggling with artificial, with human intelligence, never mind,
artificial one. Sorry, next episode will be the discussion of a concrete infrastructure
for a backpropagation network. Yes, for a framework.
Yes, yes. So more than likely, this would be either Torch or TensorFlow.
It depends on whether Martin can get his Nvidia chips together or not, I think.
Why are we doing coding on podcasts?
No, you were suggesting that.
Is that the next one? I thought that was a film. No, that was a joke. No, Martin, that was a joke.
Well, no way, confusing everybody. I'm not succeeding, I wonder.
Well, you usually do about creating rooms with the same name in about five times.
No, I mean, Martin, if you take a look at the plan,
that marketing came up with before you fired them, a couple of weeks back, I might add.
The marketing plan clearly speaks about one framework and it does mention TensorFlow or
PyTorch. It does indeed. Yes, yes, yes. Okay. So that will be the second part of the 20
part miniseries to hit the air. Actually, you want to cover TensorFlow, don't you?
We will cover one specific framework, yes. Oh, we could do both. We do one each.
It depends on whether we want to confine ourselves to a three hours.
No, one will be discussed, no, the idea people joke society. The idea is,
after this rather theoretical episode, the jury is allowed on this to give you a more
concrete example of one of two popular frameworks. And like cars, women or men, if you know one,
you know them pretty much all beer beer. No, I wouldn't go that far now.
Okay. Okay. So with that, basically, we have come to the poxes, as in the picks of the week.
What's your, what's your pick of the week, Martin, apart from politicians?
Why would I pick politicians? I'm saying apart from politicians.
Well, politicians are, I guess, an anti-pox, but they're every week.
Fair enough. I had fun. So why don't you go first and remind myself what it was?
My pox of the week is something called Atlantic Elbastraterbecker. It's an Eastern Germany, yes,
it's an Eastern German brewery. Startup, okay, if you're listening, the email address is sponsor at
linuxinlaws.eu. And we, yes, if you just put enough dough into the kitty, we will
mention you more than once. I hope. Okay. Well, or we could always say things about the quality of
those brews. I just did. You missed it, yes. What, you just mentioned their name.
So I said, it's, it's a pick of the week because I like their beer.
Ah, okay. That wasn't planned. Sorry. You did this without them paying us. What's going on here?
It's a pick of the week, Martin. Don't worry about it. You're missing it.
I did mention the email address in case you missed it. Okay, Martin, what's your pick?
What's my pick? Yes, my pick is. It's a good question. I have a few.
One will do. Yeah, I'm just trying to choose which one is because I normally choose movies or books.
But I'm going to go with, sorry, something called Nerva. What's this?
Which is a CPU-based cryptocurrency. Nerva. Nerva. Yeah. Nerva. Okay. Details will be in the show
notes, I hope. Quite like their idea behind this. And yeah, if you're listening, send...
Even as Nerva actually loves something. We'll be active tomorrow. Okay. Yeah. Anti-pox.
Oh, anti-pox. Well, I think that's pretty easy, isn't it? It's obviously all the politicians in Europe
specifically who are being, well, like countries like France, one of them. They are saying very silly
things about certain vaccines, which is not very clever. Oh, but, okay. Indeed.
Probably. My anti-pox in that case would, of course, be British politicians.
Is that right? My anti-pox would, of course, in that case be British politicians.
Oh, that's okay.
Martin, we do have feedback. We have feedback. Yes, we do. Yes, you want to read this. We love
our feedback. We do, yes. I'm happy to read the feedback. So we have a feedback from nobody.
Who's nobody? Who's nobody? Thanks, nobody. Well, a torso with just some arms and some legs.
Anyway, he obviously has a head because he has a very good observation here.
So he mentions other MAC implementations. In the episode, you weren't quite sure if there
were other MACs for Linux besides SELinux and AppArmor. And indeed, there are. There is
SMACK, which is quite uninteresting, as it is just another label-based MAC, similar to SELinux.
To me, the interesting one is Tomoyo, which started as a pathname-based MAC
similar to AppArmor, but later started differentiating between applications based on their
process invocation history. So this means you can apply different policies on, say, /bin/sh,
depending on the chain of execution leading to it: kernel, init, getty, login, shell,
versus kernel, init, sshd, sh, etc. While this is also possible in AppArmor, it's quite a lot
more manual work and more difficult to reason about. Tomoyo has a much nicer tool than either
of the more well-known MACs. SELinux has given MAC a bad name, which is very true, I would agree
with him, as being hard and laborious to manage. Just read it out. Very, very good observations,
this nobody guy. If instead of SELinux people would be first introduced to Tomoyo,
they would probably be much more inclined to implement a MAC. Well, there you go, that's real
feedback. Indeed, nobody, if you're listening, thank you very much for the feedback. I thought smack
was the slang term for heroin, but apparently it's a MAC too. Yes, indeed, and Tomoyo is
actually, as the name implies, something that originated in Japan. So it's Tomoyo rather
than Tomo Joe, okay? My Japanese is crap, so I don't really know. Japanese people,
if you're listening, please correct that. Exactly, the address is as usual: feedback
at linuxinlaws.eu. Yes, and if you throw a Japanese course at us, we might be able to
mention you in the sponsoring notes, whatever. But I'm sure going to take a look at Tomoyo,
because that sounds pretty good. It does, it does, and yeah, so SMACK isn't a, well,
it is probably a slang term for something else as well. In this case, it stands for Simplified
Mandatory Access Control Kernel. Interesting. You learn something new; this is one thing I like
about this podcast. We can read all day long, but the really cool stuff comes
from our listeners. So keep the feedback coming, listeners, we do appreciate that.
People, thank you for listening. Yes, and thank you for staying awake. Feel free to turn up
on the show. Please get in touch with us first, yes, the email address is feedback at
linuxinlaws.eu. And before I forget, of course, Martin, we have to plug HPR once again.
Ken, if you're listening, thanks for hosting us again. I don't know, is he listening,
because we never hear from him. Well, you're not, I mean, you don't have to send feedback,
but the fact that we are still on HPR, they haven't kicked us out yet. So,
Ken, thank you so much. Does that mean they're not listening?
Ken, get in touch.
Joking aside, sorry, HPR, thank you very much for hosting us. You have been doing so for
way over a year, and we would like to really express our serious gratitude here.
And of course, we will keep mentioning, we will be mentioning you further down the road,
and we are glad to be part of this network. We are. Indeed. And with that, see you next time.
This is the Linux in-laws. You come for the knowledge, but stay for the madness.
Thank you for listening. This podcast is licensed under the latest version of the Creative
Commons license, type Attribution ShareAlike. Credits for the intro music go to Blue Zero Stirs
for the songs of the market, to Twin Flames for their piece called The Flow used for the second
intros, and finally to Celestial Ground for the songs we just use by the dark side.
You find these and other ditties licensed under CC at Jamendo, a website dedicated to liberating
the music industry from choking copyright legislation and other crap concepts.
You've been listening to Hacker Public Radio at HackerPublicRadio.org. We are a community
podcast network that releases shows every weekday Monday through Friday. Today's show, like all our
shows, was contributed by an HPR listener like yourself. If you ever thought of recording a
podcast then click on our contributing to find out how easy it really is. Hacker Public Radio was
founded by the Digital Dog Pound and the Infonomicon Computer Club and is part of the binary revolution
at binrev.com. If you have comments on today's show, please email the host directly, leave a comment
on the website or record a follow up episode yourself. Unless otherwise stated, today's show is
released under the Creative Commons Attribution ShareAlike 3.0 license.