634 lines
43 KiB
Plaintext
Episode: 3339
Title: HPR3339: Linux Inlaws S01E30: Politicians and artificial intelligence part 2
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3339/hpr3339.mp3
Transcribed: 2025-10-24 21:07:21

---
This is Hacker Public Radio Episode 3339 for Thursday, the 20th of May 2021.

Today's show is entitled "Linux Inlaws S01E30: Politicians and artificial intelligence part 2" and is part of the series Linux Inlaws. It is the 30th show of monochromec and is about 58 minutes long and carries an explicit flag. The summary is: part 2 of the miniseries on deep learning, politicians and other approaches to intelligence, or not.

This episode of HPR is brought to you by anhonesthost.com. Get 15% discount on all shared hosting with the offer code HPR15. That's HPR15. Better web hosting that's honest and fair at anhonesthost.com.
This is Linux Inlaws, a podcast on topics around free and open source software, any associated contraband, communism, the revolution in general and whatever else fancies your tickle. Please note that this and other episodes may contain strong language, offensive humour and other certainly not politically correct language. You have been warned. Our parents insisted on this disclaimer. Happy, mom? Thus the content is not suitable for consumption in the workplace, especially when played back in an open-plan office or similar environments. Any minors under the age of 35 or any pets, including fluffy little killer bunnies, your trusted guide dog unless on speed, and cute T-Rexes or other associated dinosaurs.

At the beginning of the episode you will notice a reference to quantum computing, which may seem to be out of context for some of our listeners. For a resolution of this mystery, check out the outtakes at the end of the episode.

This is Linux Quantum Physics Inlaws, Season 1, Episode 30: TensorFlow and such.
Ah, good evening, how's things? Hey, Chris, things are great and wonderful.
Perfect. Always, how are you?
|
||
|
|
Can't complain, now that the moolahs are rolling in.
|
||
|
|
Is it? No, right. The cartels coughing up finally, I think.
|
||
|
|
No, but apparently the company that I'm working for is filing an IPO.
|
||
|
|
I think. I thought they just had some funding.
|
||
|
|
Well, no point in stopping now, right?
|
||
|
|
How does that work? How are you getting people to fund you and then
|
||
|
|
if he has the plan, I think, yes.
|
||
|
|
I'm not sure there's much logic in that, anyway. Fair enough, fair enough.
|
||
|
|
It's when features logic into IPO. Sorry, if you're listening offer and friends,
|
||
|
|
feel free to send me some more shares. Yeah. Oh, my.
|
||
|
|
Ah, ah, ah, ah, ah, ah, ah, ah.
|
||
|
|
Oh, for the off chance.
|
||
|
|
You're listening. Yes, set me, Shamsha.
|
||
|
|
Set me, Shamsha would listen to us too.
|
||
|
|
Yes, and where's the open source support for Redis in Redis Labs?
|
||
|
|
Okay, but Redis is not the subject of tonight's episode,
|
||
|
|
but rather something called neural networks, or perhaps, at least for this episode, frameworks specifically. So again, frameworks specifically.
Yes, as avid listeners will recall, there was a part one
|
||
|
|
where we basically tackled the foundation, and this is part two now,
|
||
|
|
of a 27 and a half part series on machine learning deep stuff.
|
||
|
|
Our friend Ken did ask for more contributions in this.
|
||
|
|
Yes, he also sent feedback, which we want to tackle in a minute or two.
|
||
|
|
Oh, yes, no jokes aside, this second part is actually
|
||
|
|
our building on top of the first part.
|
||
|
|
Now we got to actually tackle some way more concrete stuff like actually frameworks
|
||
|
|
that implement neural networks. Indeed.
|
||
|
|
Like some Martin, why don't you give us a quick recap of episode one?
|
||
|
|
Episode one, oh, as ages ago, isn't it?
|
||
|
|
For those, for those few listeners who missed out on it.
|
||
|
|
Okay, episode one was only the basics of how neural networks work, right? So, which,
|
||
|
|
yeah, not everybody wants to dive into because that is a lot of work and why not reuse what other
|
||
|
|
people have built already in magnificent frameworks like PyTorch and TensorFlow.
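For anyone who wants to follow along at home, both frameworks mentioned here are ordinary Python packages. A quick, hedged sanity check, assuming the usual pip package names, might look like this (not from the episode itself):

    # pip install torch tensorflow
    import torch
    import tensorflow as tf

    print("PyTorch version:", torch.__version__)
    print("TensorFlow version:", tf.__version__)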
|
||
|
|
Indeed. So, before we go into the nitty gritty details, maybe we should do a little bit more
|
||
|
|
of a detailed recap of what was epic, what episode one was all about. Essentially,
|
||
|
|
we spoke about the long history track. Yes, but for those people who do not want to spend another
|
||
|
|
three hours on the director's cut. Well, they have to really. Sorry, that's the prerequisite
|
||
|
|
for this episode. Yeah, but that's not very user friendly Martin, is it?
|
||
|
|
I mean, this is, this is what I told you during the marketing brief about a week ago,
|
||
|
|
you have to pay attention to the needs of something called the listeners.
|
||
|
|
I think you watched too many popular series, I mean, what I have to recap the pieces.
|
||
|
|
You were teeing it off the next one afterwards. Martin, quite a few people from America
|
||
|
|
are listening. Donald, if you're listening to this, we're still fans.
|
||
|
|
So you have to take, you have to take our listenerships, our listenership into account,
|
||
|
|
so very much. Yes, some people will have not the attention span that we do.
|
||
|
|
Some people will have an even longer attention span. Grumpy Old Coders, if you're listening, you can work out which category you belong to.
We must do another episode with them. That was actually very, very good. I don't know,
|
||
|
|
what I missed anyway. Doesn't matter. Let's do a very quick, extended recap of the first one. Yeah, neural networks essentially are networks of neurons, which essentially can be represented by a mathematical function. They take input values, they produce output values, and the magic actually happens inside that neuron, in terms of how you model the function. Hang on, hang on, there's no such magic, there's no such thing as magic. I've done a PhD. Are you a computer? It's been a drill for a genius, perhaps. Okay, Martin. Why don't you explain the magic then? No, I just said there is no such thing as magic in computer science. Well, there is, but that's a subject for another episode. Okay, no, jokes aside. The idea is essentially to model the human brain. And people tend to simplify things, because how a human brain works is much more complicated than what a simple artificial neural network is capable of modelling. They simplify things. So essentially, the way it works is that in a neural network, that is simplified, the actions of a neuron are represented by a model. So it's a simple mathematical formula, some multiplications and the rest of it, that takes input values and produces output values, and the idea behind such neural networks is that you have a multitude of neurons being interconnected. And that multitude then is organized, or categorized rather, in so-called layers. The neural network functions as follows: the idea is that you have a training phase where you essentially teach the neural network how it's supposed to behave, as in: given input values, the expected output values, and based on the output of the multitude of neurons, you modify the individual parameters that are attached to these functions. Yeah, you have those connections really. Exactly. And you may also modify the connections if you want to be really close to something called the human brain, because what the human brain actually does, it modifies the interconnectivity between neurons all the time. And this is something that we call human learning. Yes, to oversimplify things just once again: after this training phase, you should have a model, in terms of: you have neurons, and you have values, or parameters, associated with these neurons, that you can then use to essentially put the thing into production, in terms of: you give it some real input values, and then the neural network does its magic in terms of recognizing speech, identifying patterns in an image, and all the rest of it. Because at the end of the day, a neural network is just about pattern matching, on a very sophisticated level, granted, but that's where the magic stops. Yeah, so then this phase is normally referred to as inference, isn't it?
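For listeners who want to see the training-then-inference idea from this recap in code, here is a minimal sketch in Python with NumPy, not taken from the episode; the data, learning rate and step count are purely illustrative. A single artificial neuron with a sigmoid activation is trained with a few thousand gradient-descent steps on a toy AND-gate dataset, and then simply applied to inputs.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    # Toy training data: inputs and the outputs we expect (an AND gate).
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([0, 0, 0, 1], dtype=float)

    rng = np.random.default_rng(0)
    w = rng.normal(size=2)   # the neuron's parameters (weights)
    b = 0.0                  # and its bias

    # Training phase: nudge the parameters so the output matches the expectation.
    for _ in range(5000):
        out = sigmoid(X @ w + b)           # forward pass through the neuron
        err = out - y                      # how far off we are
        grad = err * out * (1 - out)       # derivative of the squared error w.r.t. the pre-activation
        w -= 0.5 * (X.T @ grad) / len(X)   # gradient-descent update of the weights
        b -= 0.5 * grad.mean()             # and of the bias

    # Inference phase: the trained parameters are now fixed and just get applied.
    print(np.round(sigmoid(X @ w + b), 2))  # outputs should now approximate the targets

The point of the sketch is only that "training" is many small parameter updates, while "inference" is running the frozen parameters forward once.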
And now the second part of this 27.5-part series is actually about putting this into a bigger context, in terms of: now that you have the theoretical foundations, how you can actually put this into production, how you can apply this to the real world. Because, that is to say, if you want to program a neural network, you don't want to sit down and program the neurons themselves, program the functions running inside the neurons, feed them the input values, and then do the inference phase yourself, and all the rest of it. No, you want to use some model that is already out there working, and that's exactly what these frameworks that Martin mentioned, PyTorch and TensorFlow, are all about. And the hint really is in the name of TensorFlow, because PyTorch does it the same way too: the idea behind the interconnected neurons is passing around values that are then used to carry out the functions of this neural network, and funny enough, these values are easily captured, efficiently implemented, and easily modeled as tensors. Maybe you would like to explain to the uninitiated what a tensor is.
It's quite straightforward, actually: it's an algebraic mapping of n-dimensional entities. There you have it. Okay, panel. Of course, that's oversimplifying things. Well, no, it's a multi-dimensional array, really, put it that way. This is one of the expressions of a tensor, yes. In its simplest form, actually, a tensor flow, sorry, a tensor, represents essentially a mapping from one algebraic structure to another. Think of it like a vector: a vector being mapped onto a different vector, and the mapping is actually the tensor itself. Now, this simple mapping is also called a degenerate tensor, because it simply takes one vector and maps it onto the other. The interesting bit, basically, is if you have whole fields of algebraic structures, like matrices and so forth, and this is the reason why tensors are actually quite good at capturing these relationships, this interconnectivity. So, if you look at popular frameworks like TensorFlow, and actually the hint is in the name, and PyTorch, the basic algebraic abstraction would be a tensor, in terms of: this is how you model your neural network on the very lowest layer.
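As a concrete illustration of "a tensor maps one vector onto another", here is a minimal PyTorch sketch, again illustrative rather than from the episode: a rank-2 tensor (a matrix) acting on a rank-1 tensor (a vector), which is exactly the kind of operation a network layer performs.

    import torch

    # A rank-1 tensor: a plain vector.
    v = torch.tensor([1.0, 2.0, 3.0])

    # A rank-2 tensor: a matrix mapping 3-dimensional vectors onto 2-dimensional ones.
    A = torch.tensor([[1.0, 0.0, -1.0],
                      [0.5, 2.0,  0.0]])

    w = A @ v          # the mapping itself: w is again a tensor, of shape (2,)
    print(w)           # tensor([-2.0000,  4.5000])

    # Higher-rank tensors are just more dimensions, e.g. a batch of 4 RGB images:
    batch = torch.zeros(4, 3, 32, 32)
    print(batch.shape)  # torch.Size([4, 3, 32, 32])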
Okay, yeah, thank you for that explanation. Where were we with this anyway? We wanted to explain how TensorFlow and PyTorch essentially do it.
Well, why don't we cover a little bit about why there is more than one, and what happened before?
|
||
|
|
Absolutely, Martin. Well, in PyTorch, I don't know where you are in your context,
|
||
|
|
I just want to find a thing, carry on. No, no, no, I just laid the foundation, Martin. Give it,
|
||
|
|
give it another shot about the non-mathematical background to this.
|
||
|
|
I mean, this is it, I don't want to see a thunder, so just go right ahead.
|
||
|
|
I am, it's lost. Martin, why don't you bootstrap?
|
||
|
|
Yes. Why don't you bootstrap the tensor flow that is you, and then you can start all over again.
|
||
|
|
I don't think people have a reboot function, you know?
|
||
|
|
Implementation, enough implementation flow, okay?
|
||
|
|
I think five, six, seven years ago, people started building various frameworks. Like, I mean, I think Keras is one of your favourites now. I don't know how long that's been around,
but and in the last five years, TensorFlow came out first to try and get a bit more of a
|
||
|
|
standard to this. And then about a year later, you will like, do you know the background of PyTorch,
|
||
|
|
by the way? Probably not as well as you do, so give it a shot.
|
||
|
|
Well, I mean, it's the original, the pre, pre, or what PyTorch is based on is Torch, right?
|
||
|
|
Which was... That's a lottery, yes. Which was written in Lua, which you may be familiar with.
|
||
|
|
If you're familiar with Lua, I'm afraid, yes.
|
||
|
|
What would you say the attributes of Lua are?
|
||
|
|
OpenWrt uses it as the basis for a web framework called LuCI, which is essentially the web-based GUI for router management. And there's also this crazy Sicilian, Salvatore, if you're listening, old friend, who had this idea of providing a script-based interpreter on top of a very popular NoSQL database, Redis that is. Indeed.
Anyway, I mean, Lua is not really known for its wide adoption, I would say, above all.
|
||
|
|
Hang on, Martin, Martin, does Prosody ring a bell? How much? Prosody. Yes, it does. And this is actually written in Lua, believe it or not.
Yeah, but if you compare it to other programming languages out there, then it's a little bit lower down the... Well, Prosody would be one of the... the most popular XMPP servers on the planet. Yes, but that's not a programming language, is it?
Just an XMPP server, that's correct. It just happens to be written in Lua, that's all.
|
||
|
|
Good, good, glad we can agree on that. Anyway, so yeah, Lua is... I don't know, I mean,
|
||
|
|
obviously I saw Lua for the first time, well, not obvious, but first time I came because Lua was
|
||
|
|
busy with it, it's done about you, but so for many people, Lua is pretty unknown, right? And
|
||
|
|
is not that well-known for its modularity or widespread adoption amongst...
|
||
|
|
The preserving fraternity. Unless you take a peek behind the scenes, because Lua,
|
||
|
|
like Redis is basically present in pretty many areas.
|
||
|
|
Yes, I came across it first, basically, when I installed my own, my first OpenWRT instance on a router.
|
||
|
|
Yes, but you didn't program in Lua, did you? Not that no, but later. Yes, that that would
|
||
|
|
been in your Redis days, I imagine. Just like before. Okay, no, fair enough. Anyway, so yeah,
|
||
|
|
some people, or many people, are not that familiar for Lua and they, and so the founders of PyTorch,
|
||
|
|
they're like, oh, hang on, everybody's using Python, why don't we wrap that around Torch and create PyTorch? And thus PyTorch was born, yes. So, yeah, why don't you tell us where TensorFlow came from?
Google. Uh-huh. No, the idea was essentially to provide a low-level infrastructure
|
||
|
|
for back propagation networks, because as probably the majority of our listeners would imagine,
|
||
|
|
Google was really the company that kick-started artificial intelligence all over again, even way before companies like Facebook and so on copped on. Okay, this is what we didn't
just, this is what we didn't cover in the first episode. Well, not in great detail anyway.
|
||
|
|
Artificial intelligence, as in computer-based artificial intelligence, goes back easily to the 50s and 60s. Flawed artificial intelligence goes back even way further. Who's flawed? Like, yeah, like the attempt to breed certain
politicians who had strange ideas and the rest is pretty much history. But we won't go into that
|
||
|
|
because this is not a history podcast. It's not a political one, well, I don't know, it's the
|
||
|
|
least one. There was a certain amount of communism support here. Anyway, carry on. Absolutely,
|
||
|
|
anyway. So in the 50s and 60s, notably North American scientists had the idea of: well, we have computers now, so modelling a human brain can't be that difficult. So speech recognition and image recognition and such things were just around the corner, in the 50s. So they developed beautiful programming languages like Lisp and other functional approaches
with the idea actually to build the foundation for something called artificial intelligence.
|
||
|
|
The trouble is that the existing hardware at the time didn't quite live up to the expectations.
|
||
|
|
So this is the reason if it takes a close look at the history of artificial intelligence,
|
||
|
|
the whole thing entered a hiatus or a hibernation period or whatever you want to call it
|
||
|
|
in the early to mid 70s. And of course, there were kind of wake up calls in between.
|
||
|
|
The 90s tried to revive the whole thing: there was a company called Nuance
that tried to use artificial intelligence for speech recognition. But these were kind of
|
||
|
|
isolated blips because even then the hardware wasn't really up to scratch to cope with the
|
||
|
|
with the computational demands of artificial intelligence. Things really changed
|
||
|
|
when a little unknown search company called Google and at the stage in the mid 2000s.
|
||
|
|
Because as probably we all know, Google is not necessarily about hardware but rather the
|
||
|
|
intelligence running on top of this hardware. As in doing software that is able to massively
|
||
|
|
parallelize algorithms running on cheap hardware. That's exactly how they did their first
|
||
|
|
search engine implementation. So you'll see this actually: if you take a look at their base infrastructure technology, like Bigtable, like the Google File System, it's all published. You'll find the links in the show notes. The idea behind this is actually that they came up with
software that took into account that a hardware is not perfect, hardware can fail, but the
|
||
|
|
real intelligence is actually in software. So I'll say that, but Google did develop the TPU,
|
||
|
|
which is that came later. Yeah, but the notion of cheap commodity hardware.
|
||
|
|
But these are commercial implications, which we're going to touch in a minute or two.
|
||
|
|
Anyway, to cut a long story short, the idea behind the initial Google, version 0.9 if you will, Sergey and Larry's invention, was actually to do it with intelligent software. You'll see this in the initial implementations. They had the foresight to imagine file systems that are able to cope with petabytes of data. Because the file systems at the time didn't measure up
are able to cope with petabyte of data. Because the file system at the time didn't measure up
|
||
|
|
to these expectations. So they simply sat down, not the two of them alone, but rather
|
||
|
|
these software engineers and device software that is able to power such large scale systems
|
||
|
|
and do it in a very reliable way. And that's exactly how they arrive at the technology that they
|
||
|
|
have in production right now. And along the way, they discover that there's this ancient
|
||
|
|
technology called artificial intelligence that has been on a sleeping for the last 30 years.
|
||
|
|
Because simply the hardware wasn't ready. So in addition to something called TPUs, as in tensor processing units, which we want to tackle in a minute, they came up with the idea: okay, we produce software that is able to execute on standard hardware, but massively parallel.
The idea behind the initial Google was, and I'm just using Google as an example.
|
||
|
|
You buy cheap kit, but you expect this kit to fail. So you do a software that is able to cope with
|
||
|
|
failure. This is common lore, Google hasn't confirmed this, but they were able to cram more motherboards into a rack than the then, and I'm talking about early 2000s now, than the then industry standards kind of specified, in terms of thermal design power, TDP.
So because they expected some of these CPUs to fail, if that was the case, the software would
|
||
|
|
notice that, and then some other parts would take over. Same thing for software, because
|
||
|
|
sorry, same thing for artificial intelligence, rather, because the real intelligence in artificial intelligence is actually in the software. So this is the reason why they took a close, hard look at the foundational research that had been done in the 50s and 60s, for example with regard to backpropagation networks, which weren't new 20 years ago but rather go back to the 50s and 60s, because that's exactly the period of time when people came up with this.
Indeed.
|
||
|
|
So because GPUs and CPUs only go so far, some of the Google engineers had this broad idea of: okay, similar to GPUs, actually, we can do specialist hardware, especially if you want to deploy this in a cloud context, that is just able to tackle the TensorFlow algorithms, as in the algorithms implemented by TensorFlow, the library that implements backpropagation networks. Hence, the idea of a TPU, of a tensor flow processing unit, was born, that simply takes, or that simply implements, the algorithms that are implemented in the library or the framework itself, and executes them in hardware. That's the overall idea, achieving a massive speed-up. In addition, and this is where the sweet spot is from a commercial perspective, these TPUs can be deployed in cloud environments, because this is where the money is. You can go to a hyperscaler, Google in that case, and simply deploy a farm of TPUs that then execute your TensorFlow algorithms. Indeed. Other cloud vendors are available, but without TPUs. Indeed. Well, Microsoft has something similar, right?
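To make the "run it on whatever accelerator is around" point concrete, here is a small, hedged sketch of how each framework reports the hardware it can see; actually getting a TPU listed assumes a Google Cloud environment, these few lines cannot conjure one up on a laptop.

    import tensorflow as tf
    import torch

    # TensorFlow: list the accelerators the runtime has discovered.
    print("GPUs:", tf.config.list_physical_devices("GPU"))
    print("TPUs:", tf.config.list_physical_devices("TPU"))

    # PyTorch: pick a CUDA GPU if one is available, otherwise fall back to the CPU.
    device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
    print("PyTorch will use:", device)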
Well, most of the people stick with GPUs, right, because there are more
|
||
|
|
easily available, well, having said that, no, not so easily available these days, but
|
||
|
|
in fact, they are quite slow. Gamers, if you're listening, stop buying them.
|
||
|
|
Well, I think the gamers would like to buy them, but they can't.
|
||
|
|
Yeah, probably people from your cartel are buying them instead.
|
||
|
|
I can't really comment on this, actually.
|
||
|
|
Okay. Good. You see crypto mining on GPUs has its limitations.
|
||
|
|
Indeed. Indeed. So yeah, that was a good point. I mean, the TPU is really,
|
||
|
|
an ASIC in short, isn't it? But just like where specific GPU mining is also programmed in hardware
|
||
|
|
through ASIC miners are being and should be more efficient than GPU miners.
|
||
|
|
But going back to these frameworks, is there anything else in the question?
|
||
|
|
PyTorch and TensorFlow these days, because are there any other frameworks that come to mind,
|
||
|
|
a platform from PyTorch and TensorFlow, because these would be the two prominent ones now?
|
||
|
|
Yeah, well, before both TensorFlow and PyTorch became so popular, there was a whole variety of them, right. I mean, Keras being one of them, and Caffe, and MXNet, and God knows how many
there were. But they're all kind of very specialized, and the good thing about the adoption of
|
||
|
|
TensorFlow and PyTorch is that, you know, with any open source projects, you get the benefit of
|
||
|
|
many people improving these frameworks.
|
||
|
|
I mean, the beauty is with these two frameworks, they just say they're open source.
|
||
|
|
And depending on your on your on your hardware specification, you can pull them down,
|
||
|
|
and you can run them, you can actually, if you want to do this, you can run them on an SOC GPU.
|
||
|
|
But then it would say, a dedicated GPU, you can run them on a GPU as well, right?
|
||
|
|
Yes, you can do that too, but don't expect miracles.
|
||
|
|
No, but the beauty is, for the majority of something called domain-specific frameworks, pre-trained models are available. Indeed. What is a pre-trained model? A pre-trained model is where someone has done the training for you. You mentioned the training phase earlier, and why is that important? Well, because it takes a hell of a long time, or rather a hell of a large amount of processing power, to do the training phase, because it's a very simple process. We went through it in episode one, right? It's, yeah, a sheer horsepower kind of scenario, rather than anything intelligent training the model. Basically, adjusting your weights and biases until you're going in the right direction and reach your optimum for the model, which is done by doing many, many steps and, you know, doing gradient descent and so on, as we discussed previously. But yeah, so pre-trained models are great because they give you, you know, a lot of pre-processed training in the box.
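A minimal sketch of what "someone has done the training for you" looks like in practice, using torchvision's ResNet-18 as one example of a pre-trained model; the weights argument shown here is the newer torchvision API and may differ between versions.

    import torch
    from torchvision import models

    # Download a network whose weights have already been trained on ImageNet.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.eval()  # inference mode: we only apply the model, we no longer train it

    # Run inference on a random "image" just to show the call; a real image would
    # be resized and normalised to match the pre-trained weights.
    fake_image = torch.rand(1, 3, 224, 224)
    with torch.no_grad():
        logits = model(fake_image)
    print(logits.shape)  # torch.Size([1, 1000]), one score per ImageNet class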
And going back to GPUs, or the kind of processing power in general, any idea why tensors were chosen as the main instructions for these networks? Well, tensors aren't instructions, they're more the data.
Sorry, not instruction, but abstraction.
|
||
|
|
Ah, abstraction, right? Well, because it maps, if you look at your neural network layers, they are arrays of numbers, right? So, um, which is essentially a tensor. Indeed, but the beauty with tensors is actually that you can decompose them into simple arithmetic instructions, which you can then parallelize on different cores, in terms of GPU cores. Because if you take a look at how a matrix is multiplied with another matrix or a vector, most of these computations can be done in parallel. Only if you are at the reduction phase, to use the simple example of a map-reduce algorithm, do you actually have to consolidate this using a single core, but leading up to this, you can parallelize this easily on different cores. And this is the whole idea behind frameworks such as TensorFlow as well as PyTorch: to decompose these tensors into independent algorithmic, sorry, into independent arithmetic rather, and parallelizable instructions, so that you can use the full computational power at your disposal, to give it a crack.
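To illustrate why this decomposition parallelises so well, a tiny NumPy sketch, purely illustrative: every output row of a matrix multiplication is an independent dot-product job, so each one could be handed to a different core or GPU thread, with only a final gather step at the end.

    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.normal(size=(4, 3))   # e.g. a layer's weight matrix
    B = rng.normal(size=(3, 5))   # e.g. a batch of activations

    # The "decomposed" view: each output row depends only on one row of A,
    # so the four computations below are independent and could run in parallel.
    rows = [A[i, :] @ B for i in range(A.shape[0])]
    C_parallelisable = np.stack(rows)

    # The framework/BLAS version does the same thing in one optimised call.
    C_reference = A @ B
    print(np.allclose(C_parallelisable, C_reference))  # True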
Yeah, so do you know why these two frameworks are now the most popular ones? Evolution, right? Many people saw the advantages and simply
used them in their projects. Hmm. Lua or not? Well, this is the, yeah, I mean, I think this is
|
||
|
|
why the rise of PyTorch has come, you know, it was obviously, it came out after TensorFlow,
|
||
|
|
but it's pretty much equal in popularity these days because the, you know, the language of choice
|
||
|
|
for most, well, data scientists, whatever you want to call these people, is Python, right? So
|
||
|
|
it is a very natural fit, therefore, I mean, you can obviously also use the Python interface,
|
||
|
|
TensorFlow. Yes. Indeed. Full disclosure, of course, there's a whole episode on Python.
|
||
|
|
Yeah, a year or so back, in the back catalogue of something called Linux Inlaws. Okay, so if you're wondering about the strengths or problems of a language called Python: yes, indeed. For those of you who are not using it.
Yeah, so the other question I have for you is, why is there a TensorFlow, what's the difference between TensorFlow 1 and 2? Hmm, check out the tracker, I'm tempted to say. No, Martin, you would count as an expert on TensorFlow, so if you have the answer, go for it. Sure. Sure. Well, there is, there was, quite a significant difference between TensorFlow 1 and 2, and it's really that TensorFlow 1 was imperative programming. So, sorry, symbolic programming. Whereas PyTorch has always been, from the start, built that way. So they're both converging to the same thing, but there's a big difference. So if you're ever starting to build your own applications with TensorFlow, and you come across TensorFlow 1 and 2, there is quite a significant difference in the way the two work. So, so it's worth noting that, you know, one is pretty ancient, but there's a lot of
examples, implementations out there on one. Why don't you explain the difference between imperative and symbolic programming? Sure. Imperative programming would be like BASIC or COBOL: you tell the computer exactly what to do and how to do it. And symbolic programming would be much more like functional programming, i.e. a functional programming language would understand atoms and the relations between them. Yeah, what about the execution element of these two different ways of programming? I think CPUs play a major role in this. Well, are you familiar with the term eager execution? I'm not.
This is the reason why we have highly-paid experts on the show that mark this show.
|
||
|
|
That sounds like it.
|
||
|
|
Give me a crack mark.
|
||
|
|
I thought you were quite into your programming power lines, but there we go.
|
||
|
|
Okay, fair enough, fair enough. I thought you would probably be happy explaining this one,
|
||
|
|
but I can do it for you. I think I've done enough talking.
|
||
|
|
Is that so far? Give it a crack.
|
||
|
|
Okay, so I mean, you know,
|
||
|
|
that I can't correct you if you're wrong. I see.
|
||
|
|
Okay, so if we think about any kind of program, right, we have lines of code, and in an imperative programming language they are executed in sequence, there and then, whereas in a symbolic programming language we have a compilation phase where the most optimal execution of that representation of the functionality is built, right? Makes sense?
It does so far. So that was the, but it's quite a, so TensorFlow started with the symbolic
|
||
|
|
approach and saying, because obviously there's advantages to this, right? Like,
|
||
|
|
yes? No? I'm listening, Martin. I'm just wondering if you wanted to fill in the blanks.
|
||
|
|
No, no, that's okay.
|
||
|
|
I've done enough filling the blanks for one day anyway, so I know where it's.
|
||
|
|
Oh, okay. I just wanted to make a discussion, but that's okay.
|
||
|
|
Or, or a two-way explanation, but it's fine. I can do it, I can do it.
|
||
|
|
So I'm having some Martin, I just listening, or that's okay, always.
|
||
|
|
That makes a change.
|
||
|
|
It does indeed.
|
||
|
|
Give the crack, Mr. Mr. Mr.
|
||
|
|
Give the crack.
|
||
|
|
Right, where were we? So yeah, okay.
|
||
|
|
So yeah, yeah, so, right, so if you think about it, I don't know, some people consider the execution of a program to be a graph, right? And so basically, with a symbolic programming language, the graph is built at compilation time and is fixed, right? Whereas in an imperative program, I mean, every step is done at the time, as described by the programming language. So the advantage of this model is that it's very efficient, right, your symbolic implementation, because the compiler is trying to find the most efficient way, and you can reuse your memory space and all sorts of excellent stuff that you benefit, that you get from that. Which is all well and good. If we're bringing it back to neural networks, or if we think about models, then if our model is fixed, then that's great, because you can run it as many times as you want to, it's optimized to the nth degree and so on, and you can parallelize it however many ways you want to. However, if you have a model that's dynamic or self-adjusting, then that doesn't work with a static kind of execution graph, right? So that's, so PyTorch from the start did that dynamic approach, and with TensorFlow 2, they have adopted that as well.
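A hedged sketch of the difference being described, using current TensorFlow 2 APIs: calling the plain Python function runs eagerly, line by line, while wrapping the same function in tf.function traces it into a fixed graph that the runtime can then optimise, which is roughly the TensorFlow 1 style of execution.

    import tensorflow as tf

    def step(x, w):
        # The imperative/eager version: each line executes immediately.
        return tf.reduce_sum(tf.square(x @ w))

    compiled_step = tf.function(step)   # traced once into a static graph, then reused

    x = tf.random.normal((8, 4))
    w = tf.random.normal((4, 1))

    print(step(x, w).numpy())           # eager execution
    print(compiled_step(x, w).numpy())  # same result, run from the compiled graph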
So yeah, so if you're looking at, or if you're using, the methods that are available in these frameworks, then you're probably not too worried. But if you are going to develop your own models, or even adapt the models that come with some of these frameworks, then you may want to be aware of this. I mean, it's something that I came across with TensorFlow, that, you know, TensorFlow 1 and 2 are hugely different. Not that I'm an expert by any means, but I have certainly played with it a little bit, and it's similar to what some of my colleagues have been saying as well. So, but yeah, now both have adopted the same method. So they both do the same thing. It's just that TensorFlow started somewhere else. So yeah,
that was the end of that episode. But yeah. No, interesting. Because I thought that
|
||
|
|
TensorFlow was mostly written in C++. I don't know. I don't go into the low level.
|
||
|
|
So it's interesting because C++ is a compiled language. Yes. Yeah. Yeah, but if it's more the,
|
||
|
|
okay. Well, maybe we should do an episode of the different levels of the different steps between
|
||
|
|
implementing TensorFlow and Python. Actually, it doesn't matter anymore, because both have, yeah, adopted the same eager execution now. So from that point, yeah. Because that is why they changed TensorFlow, right? Because they couldn't do dynamic models, because everything, once it was compiled, was fixed, and so it couldn't adapt itself, or it couldn't be built to be adapted at all. So TensorFlow would be the first AI framework that does
what's what I'm looking for. Just in time compilation of self-modifying code.
|
||
|
|
You heard it here first people.
|
||
|
|
If you're listening, the email address is sponsor at linuxilost.eu.
|
||
|
|
Well, you say this, but you know about, we have a future episode on
|
||
|
|
Google, certainly we do. I thought the current episode was on Google.
|
||
|
|
No, but you're wrong.
|
||
|
|
I think that's an idea. It's an appropriation that works.
|
||
|
|
I always say that. Okay. I thought that was the episode one. Anyway,
|
||
|
|
what we say. Yeah. So the one of the upcoming episodes, we're going to talk about a
|
||
|
|
quite popular, well, popular, an implementation of a model, which is
|
||
|
|
raised some waves in the community and press, because of its
|
||
|
|
abilities in terms of language. No, I'm listening, or I'm Martin as usual.
|
||
|
|
But I mean, if you look at adoption, okay, Google is the biggest user of TensorFlow, right?
|
||
|
|
But when they come up with stuff, well, you would hope they were the biggest users.
|
||
|
|
But if you look at the adoption of PyTorch, are you familiar with company called Tesla?
|
||
|
|
Yeah, they make cars, whatever things.
|
||
|
|
Cars and batteries and all sorts of stuff, and lock it.
|
||
|
|
But I think they just borrowed the name from some really famous physicist.
|
||
|
|
Yeah, it's long gotten, of course, but that's a different story.
|
||
|
|
Well, yeah, family Tesla, if you're listening, you've got a really good call case.
|
||
|
|
Tesla, don't try. Just don't forget about it.
|
||
|
|
Don't you reach out to Martin for for for Martin consulting? That would be way
|
||
|
|
money down the drain. To use the technical term now.
|
||
|
|
Let's not discuss marketing. That's not going to go well, is it?
|
||
|
|
Where were we? Yeah, so a lot more companies outside of Google are not outside of Google,
|
||
|
|
are contributing to, supporting PyTorch, right? And its popularity among the research community is, well, I think, equal, if not greater than TensorFlow's these days. It's a silly competition between TensorFlow and PyTorch.
Well, I find it a bit curious that why do we need to, right?
|
||
|
|
It seems a little bit of a wasted effort. So there must be some differences.
|
||
|
|
Otherwise, one would become more prevalent than the other.
|
||
|
|
If it's not possible, if current laws and it's going to go by, there are more frameworks
|
||
|
|
than just these two dominating ones.
|
||
|
|
Yes, but that's kind of like, yeah, why bother, right?
|
||
|
|
If you have these two that are seven years in the making, then why do you need another one?
|
||
|
|
I mean, the functionality that they offer are obviously in the fields of computer vision,
|
||
|
|
NLP, etc, etc. So what would you be missing from those two frameworks to start?
|
||
|
|
Frameworks are pretty much like cars or women, right? Or men for that matter before the PC,
|
||
|
|
police gets to us. Cars get you from A to B and women make you happy. Or men for that matter,
|
||
|
|
they don't get me wrong. But at the end of the day, they have four wheels. Sorry, cars now.
|
||
|
|
The PC police will probably shoot me for this remark, don't worry about it.
|
||
|
|
And women do have hairs, arms or the rest of it. There are, of course, differences.
|
||
|
|
Cars, for example, have different engines; men and women have other differences. We won't go into the details, but you get my drift. Same goes for neural networks. At the very end of the day, they are pretty similar. On the surface they may differ a lot, but at the end of the day, they are just mostly about pattern recognition, and the rest comes afterwards.
Yeah, sure. I mean, we were going to have someone on the show on these things with me,
|
||
|
|
but follow this. The reasons that didn't happen, though. Yes. But it's just, I mean, if you think about it,
|
||
|
|
why do you start contributing to one or the other, which would be an interesting question for me?
|
||
|
|
You have a problem solved. Find a tense flow, hasn't solved it or doesn't provide options to solve it.
|
||
|
|
It's better doesn't have options to solve it. So why do you choose one or the other to start contributing to it?
|
||
|
|
Because you like lure?
|
||
|
|
Maybe, maybe. Where does Lua rank on the popularity of programming languages? Um, definitely, 76-point-something on the TIOBE index or whatever. TIOBE, of course, you'll find the links in the show notes out there. Of course, the Danish company, I think it's named after The Importance of Being Earnest or something. That's the Stack Overflow survey, no? No, no, it's here with something different. No, no, that's not what I was thinking of, but is it not online? No, no, Stack Overflow actually ranks Lua at position number 17. 17? 75 points, yes. Oh, yeah. Of course, the routers and, you know, OpenWrt helped a lot. Yes, yes.
Okay. Right, sorry, to, to a diverge. Um, yes, what else would you like to cover?
|
||
|
|
Oh, that's pretty much it, all right. Maybe teaser for the next part of the
|
||
|
|
Oh, the second part, mini series, episode mini series, whatever. Is this, is this,
|
||
|
|
is this specifically for Bob Boris? Is this, is this going following on from the current modeling?
|
||
|
|
No, I mean, just, just what we do want to cover during the next, for the next episode.
|
||
|
|
But now that we have, now that we have a very good grasp on the basics, anyway.
|
||
|
|
Can we do another recap? What's done? Yeah, podcast episodes cannot only go so far, people.
|
||
|
|
You, I mean, that is to say, we only scratched the surface here. And that is to say, we did not go into the mathematics of this, because this is not what Linux Inlaws is all about, because that would get very quickly very complicated. We leave... yes, blatant cross-promotion: we leave the mathematical details to a podcast called Grumpy Old Coders. We have them on the show. Yes, we had them on the show quite a few episodes back. They are legacy people, mostly concerned with Windows, with Windows, sorry, with Windows. What's with the legacy people?
People that are old, the name is actually in the name.
|
||
|
|
Correct. Please, if you're listening: you're welcome back on the show at any time, because we just, I thought it was just a hilarious episode and we really like you. But anyway, it doesn't matter. Okay, jokes aside. Yeah, the next episode will be about the practical applications of these two foundational frameworks, PyTorch and TensorFlow. We will go into, we will cover the domain-specific implementations, like Keras and so forth, based
on, based on pytorch. And as I said, this is say it's this episode is not about the
|
||
|
|
runner details like mathematics and so forth because that would fill another two or three
|
||
|
|
episodes of this mini series because it's just very complex. And that's the reason why I'm
|
||
|
|
I just need to go with that blackboard algorithm. Yes. And that's the reason why actually most people
|
||
|
|
would just use these frameworks rather than kind of re-implementing these frameworks themselves.
|
||
|
|
Deep. Because all the other fancy math stuff is abstractly away from you.
|
||
|
|
Mm-hmm. That's the useful line. Compuces.
|
||
|
|
And levels of programming languages. Yes. And before we close out this episode, of course.
|
||
|
|
Ah, yes. We do have. Yes. We do have to do feedback. Yes. Yes.
|
||
|
|
Yes. Yeah. A certain Mr. Ken Fallon of HPR fame. Yes. Indeed. Mm-hmm. Right. And he said he always thought that artificial intelligence is misleading. Ken, you're absolutely right. Artificial programming would better describe what's going on.
Yes, to a certain extent. Artificial intelligence, yes. Of course. Quick brutal teaser: Ken will be on the show very shortly. Ooh, in a couple of weeks' time, yes. And of course, Ken is one of the beautiful people behind Hacker Public Radio, which we still use as our main hosting platform for all the episodes, okay?
It's out there.
|
||
|
|
So yes, artificial intelligence
as a term is somewhat misleading, I grant you that
|
||
|
|
because this is where it comes down
|
||
|
|
to people's definitions or interpretation of those words, right?
|
||
|
|
Perhaps.
|
||
|
|
Well, at the end of the day, the way we tackle
|
||
|
|
artificial intelligence these days
|
||
|
|
is more or less like programming
|
||
|
|
in terms of algorithms being implemented by a computer.
|
||
|
|
Yeah, but that, okay, so, but yes, okay,
|
||
|
|
the two are fairly synonymous at the moment,
|
||
|
|
but it doesn't mean artificial intelligence can't be implemented in a different way, right?
Exactly.
|
||
|
|
If you would decouple artificial intelligence
|
||
|
|
from computers, fine, indeed.
|
||
|
|
But we're not.
|
||
|
|
No, because we are that kind of podcasting.
|
||
|
|
No, we are that kind of people in terms of people.
|
||
|
|
That's true, that's true.
|
||
|
|
In terms of people simply using computers
|
||
|
|
to do artificial intelligence programming.
|
||
|
|
And so, can't is actually spot on.
|
||
|
|
Yes, yeah.
|
||
|
|
Mm-hmm.
|
||
|
|
Yeah, I'm sure someone will come up
|
||
|
|
with a biological artificial intelligence solution
|
||
|
|
that's important.
|
||
|
|
No, actually, I actually meant some other non-von-Neumann hardware, or something beyond computers, yes. Like molecular biology or something, not necessarily brains, but the thing that comes closest to it.
It's another topic for another episode.
|
||
|
|
Here's a question for you.
|
||
|
|
Is intelligence limited to people?
|
||
|
|
No, certainly, it's not.
|
||
|
|
Well, there you go, then.
|
||
|
|
You have it into very limited extent,
|
||
|
|
also in politicians, for example.
|
||
|
|
Ah, yes, of course.
|
||
|
|
Mm-hmm.
|
||
|
|
The day is very interesting, isn't it?
|
||
|
|
But it's a decouple.
|
||
|
|
Sorry, and other beans, I didn't go by.
|
||
|
|
Ha-ha-ha.
|
||
|
|
Well, like, people in marketing or no.
|
||
|
|
I wouldn't go that far over.
|
||
|
|
It's a different story.
|
||
|
|
And with that, we have not only reached the end of the tether.
|
||
|
|
In these?
|
||
|
|
The end of the show, I suppose.
|
||
|
|
Yeah, I think so.
|
||
|
|
So Martin, see you on the other side, for another episode
|
||
|
|
on artificial whatever.
|
||
|
|
Ha-ha-ha-ha.
|
||
|
|
And of course, for the hipsters out there,
|
||
|
|
that would be artificial space, full-stop star.
|
||
|
|
Sorry, plus being a complete regular expression now.
|
||
|
|
Excellent.
|
||
|
|
Martin, and with that, we're going to call it a day,
|
||
|
|
and see you soon.
|
||
|
|
This is the Linux In-Law.
|
||
|
|
You come for the knowledge.
|
||
|
|
But stay for the madness.
|
||
|
|
Thank you for listening.
|
||
|
|
A little voice making you forget that SkyNet was once
|
||
|
|
this evil empire trying to change the world.
|
||
|
|
If you can change, so can we.
|
||
|
|
This podcast is licensed under the latest version of the Creative Commons license, type attribution share-alike. Credits for the intro music go to Bluesy Roosters for the song Salut Margot, to Twin Flames for their piece called The Flow, used for the second intros, and finally to The Lesser Ground for their song We Just Is, used by the dark side. You find these and other ditties licensed under Creative Commons at Jamendo, the website dedicated to liberate the music industry from choking corporate legislation and other crap concepts.
You are currently the only person in this conference.
|
||
|
|
The only person in this conference, but it's this.
|
||
|
|
Oh, I'm going to end the session and start a new one.
|
||
|
|
Hello.
|
||
|
|
Hello.
|
||
|
|
Hello.
|
||
|
|
Hello.
|
||
|
|
OK.
|
||
|
|
OK.
|
||
|
|
OK.
|
||
|
|
Yes, that apparently works.
|
||
|
|
Uh-huh.
|
||
|
|
Do you know what a quantum torus recourse of neural network is?
|
||
|
|
Well, that what a torus is.
|
||
|
|
I was trying to work out what a quantum torus would be.
|
||
|
|
Hmm.
|
||
|
|
You deviate from standard quantum architectures and you throw
|
||
|
|
in qubits and then you have pretty much have it.
|
||
|
|
Like the neural networks basically are able to tackle and space,
|
||
|
|
peace based, and peace based.
|
||
|
|
Instantaneously, more or less.
|
||
|
|
Yeah, yeah.
|
||
|
|
Well, you've got to start somewhere.
|
||
|
|
This is about the programs for when this stuff actually is working
|
||
|
|
sometime in YouTube or not.
|
||
|
|
That's this delorean car.
|
||
|
|
Yes.
|
||
|
|
Well, that wasn't the success, was it, really?
|
||
|
|
Apart from the music.
|
||
|
|
It was probably before its times.
|
||
|
|
Well, no, it was made in Ireland by dodgy crook.
|
||
|
|
This was the issue with it.
|
||
|
|
If you're listening, whatever your name was, delorean guy.
|
||
|
|
You are crook.
|
||
|
|
Well, I think it produced at least six of them or something.
|
||
|
|
Well, it was all funded by the government or by the EU, even possibly.
|
||
|
|
Yeah.
|
||
|
|
That was Northern Ireland.
|
||
|
|
Because it was built in, yeah, you need.
|
||
|
|
Yeah, I mean you.
|
||
|
|
Okay.
|
||
|
|
I thank you so much time and see you, don't you?
|
||
|
|
I was like, other people.
|
||
|
|
Yeah.
|
||
|
|
There was a question from a certain Bob.
|
||
|
|
He was asking, when you refer into models, is that similar to the pictures
|
||
|
|
you're sending around usually or do you refer to?
|
||
|
|
No, no, no.
|
||
|
|
Quite different models actually.
|
||
|
|
Okay.
|
||
|
|
Well, there you go, Bob.
|
||
|
|
Thanks for answering.
|
||
|
|
Bob, if you listen, there are quite a few different categories of models.
|
||
|
|
And the model you are probably looking for, unfortunately, is not part of the show.
|
||
|
|
I thought you were working on that.
|
||
|
|
No, I'm not.
|
||
|
|
Sorry.
|
||
|
|
Sorry.
|
||
|
|
You've been listening to Hacker Public Radio at Hacker Public Radio.
|
||
|
|
We are a community podcast network that releases shows every weekday Monday through Friday.
|
||
|
|
Today's show, like all our shows, was contributed by an HPR listener like yourself.
|
||
|
|
If you ever thought of recording a podcast, then click on our contribute link to find out
|
||
|
|
how easy it really is.
|
||
|
|
Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club
and is part of the binary revolution at binrev.com.
|
||
|
|
If you have comments on today's show, please email the host directly, leave a comment on the website
|
||
|
|
or record a follow-up episode yourself.
|
||
|
|
Unless otherwise stated, today's show is released under a Creative Commons Attribution-ShareAlike 3.0 license.
|