Episode: 4305
Title: HPR4305: My weight and my biases
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr4305/hpr4305.mp3
Transcribed: 2025-10-25 22:43:50
---
This is Hacker Public Radio Episode 4305 for Friday 31 January 2025.
Today's show is entitled, My Weight and My Biases.
It is hosted by Troller Coaster and is about 53 minutes long.
It carries an explicit flag.
The summary is, a personal reflection on the ethics of AI in our society.
Okay, so first things first, My Weight.
I'm a bit overweight, so I would need to do more sports and eat less sugary stuff.
I know.
Second, the following is certainly biased.
I'm not an expert.
I did do some research, but basically this is my own opinion.
So feel free to create your own podcast, explain your weights and your biases on the topic.
So who am I?
I'm Jürgen, also known as Troller Coaster here.
And I'm very much into software freedom.
And I'm also kind of a member of Hacker Space Brussels, and I like to tinker, like to
figure stuff out, like to understand how stuff works.
And I'm assuming most of us already know what AI is, got some basics about it.
So we're not talking about the old school AI, we're talking about generative AI.
And here we'll be focusing on the ethical dimensions rather than the AI fundamentals.
So why this topic, well, of course, AI is everywhere.
It's the big business, it's the big money.
It's impacting everything from recommendation engines to predictive policing, from voice assistants
to critical infrastructure.
AI is no longer a niche.
It shapes global policy, it affects business decisions, it affects personal data. Heck,
I even used it in part to prepare this talk.
So it's everywhere, I mean, come on, let's be honest.
But as hackers, or maybe makers, if not in skill then at least in ethics, we have
certain values that we hold dear, we care about the world, we have a certain responsibility.
So we're talking about anyone who can tinker with technology, who develops technology;
we are the people who today are influencing how AI is shaped and deployed.
Maybe we missed the launch, maybe you didn't, I know, I missed it.
I wasn't there completely when it was getting prepared.
But when I was 12, and I'm 51 now, so that's almost 40 years ago, I was at this
technology event, I think it was in Brussels; I was a kid back then.
It was called Flanders Technology.
And I went to that event and there I saw all these cool things; I actually saw my first
computer mouse there, I had never touched a thing like that before.
This was before the GUI stuff.
But I also saw something there called fuzzy logic, and it blew my mind:
it was about computers not thinking in just ones and zeros, but in something in
between.
I couldn't wrap my brain around it back then, but by now I've learned that something
between ones and zeros basically means a lot of ones and zeros.
And I think the same goes for AI nowadays.
Anyway, we are the people who create the future.
So we have to step up now and be critical.
Be smart about how we use this stuff because it's new technology and new technology always
disrupts existing norms sometimes positively, sometimes negatively.
So as hackers, as open source advocates, as tech savvy people, we are the people who
need to identify pitfalls early on or even address them depending on your place in society.
So I'm calling to you to take your responsibility in your local area to do exactly this.
So this talk is not a bunch of answers, maybe it's even more like a bunch of questions
because, I mean, there's this wisdom that says a fool can ask more questions
than the wisest man can answer, and maybe I'm just a big fool.
Anyway, I'm asking questions, maybe even putting out some thoughts on what I think, but
I'm not giving answers.
Sorry if you expect that.
Anyway, I'd like to invite you all to reflect on real world examples and ethical dilemmas throughout
this episode.
So we're focusing on the commonly used AI applications, not expert solutions
or very niche technology things.
We're talking about stuff like ChatGPT, like image generators, like Stable
Diffusion; those are widely accessible.
I mean, for 20 or 25 bucks a month, you can play with it all, all you want.
I mean, that's not that expensive.
Anyway, AI is also very common, and we won't be going into the specialized domains
like robotics or expert systems, et cetera.
Those are not the main focus in this episode.
We will touch on some topics a little bit later that border on them, but we're not going
into that all too deep.
So we're not going into strong AI, AGI or artificial general intelligence, or even super
intelligence.
We're not going into symbolic AI, not going into AI paradigms in academia.
Because that's all out of scope, it's too technical; just the impact of AI today on society.
That's what keeps me awake sometimes, but no, I actually sleep quite well, basically.
So let's do a short, really short history: the '50s and '60s.
There was like this really big optimism about reaching human level intelligence.
Back then they already talked about artificial intelligence.
It was a topic at Flanders Technology when I was 12 even; I mean, I was standing
in a cage with a helmet on my head.
I was using a VR headset back then, but it weighed, I don't know, 25 kilos or something.
It was a good way to get whiplash.
Anyway, hypes in the past have led to AI winters, where expectations didn't get met and
there was no funding; every now and then people said, this is a bubble, and they popped it,
and money dried up.
But every now and then it came back, because people just want to wrap a human
brain into a computer.
I think it's an archetype.
Anyway, today we're going to emphasize how computer power is now finally strong enough
to actually do really complex stuff in such a way that it even seems to work like a human
brain.
It's using GPUs, cloud services, large data sets.
We have all the online communities and all the online stuff, and that has fed them a
lot of data to train on, maybe even cameras, street cameras, as training data.
I don't know where they all got it, but they have it and that's the point.
So at this point, AI is here, it's here to stay and it's helping us in a lot of fields.
But it also sometimes damages us and this is something that we have to keep in mind.
The hacker ethos is my main principle here, and why does it matter?
I mean, one of the hacker ideas is basically transparency and openness.
So we value open source principles; we think reverse engineering is something you should
be able to do, to really understand how something works.
So if you're a member of a hacker space, if you're a hacker in your own mind, please
try to understand stuff.
So I'm not going into the technical details, but please look them up because it's interesting
and it's important to understand what an AI is, how it works and what it's not.
So then you have the ethos, which often clashes with the black-box nature of closed AI systems.
I mean, you pop things in, things pop out, and you don't know what happens.
So we need to be conscious about this.
We need to be aware of what happens and what can be understood, because in the end
there is some logic, there is an ethical framework; even inside that AI,
inside that black box, there are decisions being made, priorities being set.
That's what they call the weights and biases.
So there needs to be an ethical framework, and this shouldn't be led only by companies; yes,
they should be in there too, because they are also important and they have the knowledge,
they are the people doing it.
But there should also be a bunch of just knowledgeable, hacker-minded people in there who don't
have a commercial interest in the thing.
So we need open source AI initiatives that enable broader auditing, experimentation and innovation.
And we also really need grassroots vigilance that ensures technology evolves in a manner
that respects the freedoms of the user and the well-being of society.
So there are a few ethical questions you could ask when choosing which AI to use.
So I mean, of course you have how good is it, how well are the answers formed, how nice
are the images that come out of it, et cetera.
Well, but I think it's also important to consider the other side of it. I mean, some AIs do offer
access to the source code.
They show that you can modify it, you can work on it, you can build it on
your own server if you want to, and it allows you to innovate, to be accountable for what
you're doing.
And also to see how it works: you see how your system runs at full power for 30 seconds
to answer a question.
And it gives you an image of how much energy an AI uses if you run it on your own system.
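A minimal sketch of how you could gauge that yourself in Python: time a local inference call and convert the elapsed time to watt-hours. The 250 W figure and the run_inference callable are assumptions for illustration, not measurements from the show:

import time

ASSUMED_POWER_WATTS = 250  # hypothetical average draw of a mid-range GPU under load

def estimate_energy(run_inference):
    # Time one inference call and convert the elapsed time to watt-hours.
    start = time.monotonic()
    result = run_inference()
    elapsed_s = time.monotonic() - start
    watt_hours = ASSUMED_POWER_WATTS * elapsed_s / 3600
    print(f"Answer took {elapsed_s:.1f} s, roughly {watt_hours:.3f} Wh at {ASSUMED_POWER_WATTS} W")
    return result

# Example: wrap whatever local call you use, e.g. a llama.cpp or Ollama request.
# estimate_energy(lambda: my_local_model("Why is the sky blue?"))

For real numbers you would read an actual wattmeter or nvidia-smi rather than assume a constant draw.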
This is important knowledge, I think.
A second ethical pointer you could look at is: do we have access to the weights, to the
biases, to the training methods? I mean, it reveals how decisions are made under the hood
and allows independent validation of AI outputs.
Proprietary models often keep these details hidden, raising concerns about unchecked
biases and opaque error handling.
Then there's the training data: what's the source for this data?
Is it scraped from the web, obtained via partnerships, crowdsourced? It can create
ethical and legal questions. I mean, a few years ago they clearly
scraped half the internet, and they used the trick that copyright was not applicable because
texts were not recreated from it, and they claim that, just like every other artist,
you're allowed to look at other art to create your own stuff, the only difference being
that an artist doesn't create 25 pieces of art every second.
So there's clearly a discrepancy.
On the other hand, of course, we could ask the question: if AI training data had been based
only on explicit consent, how much data would we have had in the beginning, and would it
even have been possible to create an AI without it?
But as I said, I don't have the answers, just asking questions here.
So you have open data sets, of course, or proprietary data sets, restricted data sets.
Because I can imagine some people give permission to use their data in training sets,
but then, of course, don't give permission to share these data sets with other people,
because then in effect you're actually sharing the artworks themselves with these people.
And proprietary data is typically sourced deliberately and will often be of higher quality.
And let's just be clear here.
If we just use 4chan and Reddit and Facebook and Twitter, sorry, X, as our data sources
for human intelligence, I don't think we will be having the greatest AI system.
So there are also some ethical questions.
I mean, there's a trade-off in permission and diversity.
This is what I already said.
I mean, if we rely on broadly scraped data, it could also, by accident,
violate privacy or exploit legal loopholes.
I really believe greater transparency will improve trust and foster collaboration,
but it also risks enabling malicious actors, of course.
It's a balancing act between open information with safeguards on the one hand,
and on the other hand not leaking people's personal data.
Another aspect is the legal framework, the regulations.
I mean, every country, every town has its own legal framework.
Europe has one, the AI Act, America has some stuff.
I'm pretty sure every country has its own framework that allows you to think about
what is and what is not okay.
And that's a good thing, because that prevents us from getting into situations
like Minority Report, unless, of course, our legal framework is crap.
And then we are toast.
So that's, I think, where we need a good legal framework and good government
with good leadership.
Unfortunately, that's not the case everywhere.
I won't get into details about what countries, but just think for yourself
is my country doing a good job now and how could we improve things in our region.
So, excuse me.
So there's also the question of accountability.
This is typically a question for self-driving cars.
But I mean, this also applies for everything that's AI in the broad sense.
If an AI makes a decision and this impacts real life or real people
who is to be held accountable or the developers or the people in the training data
or the companies deploying the AI, is it the end user?
I mean, this is a whole new can of worms that needs to be opened.
And I think we will be needing some legal framework here.
And if we let it up to the big companies,
I mean, then the decision will be that we, the user, are in the end always liable.
And I think we should make sure that the solution is more nuanced.
So it's up to us to also have a voice.
There's also some concerns about sustainability, of course.
I mean, I already touched on it a little bit, but AI systems eat energy.
They have a carbon footprint, both for training and for inference and use.
I mean, training a large-scale AI model requires substantial compute resources,
leading to significant energy use during both training and ongoing inference.
So retraining or updating models is not something that you should be doing that often
because it really impacts our ecological footprint.
Data centers are being built all over the world at the moment.
And they're greenwashing them by putting solar panels on the roof.
And so, because they have solar panels and maybe two windmills
or a nuclear power plant on some island here or there,
now they're supposedly ecologically neutral.
We all know that's bullshit.
Because otherwise that same energy production would have been used for other purposes.
So I'm not buying that stuff.
So for the future, I do think, and I'm happy to see, that there are trends emerging
where power usage is becoming more efficient.
We have AI systems that are starting to become more selective about what data they
use to answer a question.
They're not using all the data anymore, but just certain blobs of it,
to be more energy efficient.
They work with a two-step approach:
first deciding which models, which blocks, will be used,
and then only using those parts.
And this makes the thing a bit more energy efficient.
That's a good trend, I think.
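To make that two-step idea concrete, here is a toy Python sketch of the general pattern: a cheap gating step picks one small expert, so only that part does the expensive work. It is purely illustrative and not how any particular product is implemented:

EXPERTS = {
    "math":    lambda q: "math expert answers: " + q,
    "history": lambda q: "history expert answers: " + q,
    "general": lambda q: "general expert answers: " + q,
}

KEYWORDS = {"math": ["sum", "integral", "prime"], "history": ["war", "empire", "century"]}

def route(question: str) -> str:
    # Step 1: cheap gating decision, here just keyword matching.
    q = question.lower()
    for expert, words in KEYWORDS.items():
        if any(w in q for w in words):
            return expert
    return "general"

def answer(question: str) -> str:
    # Step 2: only the selected expert runs, instead of everything.
    return EXPERTS[route(question)](question)

print(answer("What is the sum of the first ten primes?"))

In real systems the gating is learned inside the network (mixture-of-experts) or done by retrieving only the relevant chunks of data, but the energy argument is the same: don't run everything for every question.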
And as with every new technology, at first it's an energy hog,
and then it becomes less consuming.
I'm thinking of the first cars back in the day.
I think they were using like 15, 20, 25 liters of fuel per 100 kilometers.
And now, if you have an energy-efficient car, it uses six, seven.
So it has at least halved.
So energy consumption is something that also evolves as technology advances.
So I think there is some growth to be made and I'm looking forward to that.
But it will always remain an energy hog.
But it will of course also give advantages.
So it's as I said, it's difficult.
But there's some other stuff too.
So you have the ecological aspect.
You have the whole aspect of the legal framework, the data sources,
the application sources.
But there's also the biases and the social impact on society.
So first of all, you have the data bias.
We have all heard about how in the beginning certain minority groups
were negatively impacted by the training data because they were underrepresented.
And I can imagine that this will keep happening.
Maybe not for the same big minority groups as nowadays,
but smaller minority groups might still end up being underrepresented
and wrongly judged.
There are also tools available to check and audit these biases.
So I'm inviting you, all the skilled people out there, to audit data sets,
to use fairness metrics, to do algorithmic debiasing.
I mean, if you're one of the people who know about this,
please engage in it.
If you're interested in it, look it up.
It's interesting.
It's also important, if you're in one of these minority groups,
that you actually check the AI systems that are being used:
whether they treat you, as a member of a minority group,
in a correct way.
And if that's not the case, please report it to the right places,
to the AI system itself.
There's typically always some feedback mechanism in there, so use it.
Because now is the time to optimize it,
to tweak it, to get it better.
There are a few sectors where we don't want an AI to work on its own.
I mean, in healthcare, in finance and law enforcement, they can be used.
But I think it's critical that in the end,
they create a transparent path of decision making,
and that a human will make the final decision.
And we need to be careful that we don't fall into these biases,
because this is an important point:
in the end, an expert is the one who has to make the decision.
Even if an AI is better in 98% of the cases,
those 2% are critical.
We don't want to be in those 2%.
AI decisions can also reinforce existing power imbalances,
hitting marginalized communities harder.
I mean, we all know how people with a lot of money
have succeeded in getting certain legislation
worked out in their favor, who have managed to get technology faster
and more efficiently in their context.
So let's make sure that not only the people with money,
but every person gets a fair context here, a fair judgment.
I'm often worried that AI will be used as a form of automated social engineering.
I mean, it's already being used for calling people,
for phishing purposes, for manipulating people.
On a bigger scale, it can also influence a public opinion.
I mean, generative AI can create persuasive content,
making it easier to nudge individuals or groups in specific directions,
be it politically or commercially,
or even religiously or on some level of ideals.
People can be pushed against each other and this is a risk, I think.
You can also notice how subtle messaging or targeted advertising
can exploit people's cognitive biases without them even knowing it.
Sometimes it's just a matter of what gets shown or what doesn't get shown,
what comments pop up in your threads and what doesn't pop up.
And of course, we all know about misinformation;
I mean, alternative truths, deepfakes,
synthetic, realistic media that just isn't true,
mass-generated propaganda, can really erode the trust in traditional news media.
So here lies a challenge for news media: to go back to their roots
and stop trying to be the quickest source,
trying to give the most clickbaity response.
But to every journalist out there, go back to 50 years ago
when you had three, four, five days to create an article.
Back then, if you read an article,
maybe it wasn't your quickest source, but it was a reliable source.
Be your reliable source again.
And if this means you stop competing on speed, fine;
I mean, you will never win against an AI system on speed anyway,
unless you're using an AI system yourself, and with all respect,
then you're not trustworthy anymore.
Another thing I would like to address,
so we have talked about a few topics here,
is how AI has a very addictive potential.
I mean, I'm calling them the AI buddies,
and they are everywhere.
I mean, if you go to ChatGPT,
if you're on social media, they can be integrated there.
I thought Facebook was now going to implement AI buddies.
So there's very subtle integration,
and legal frameworks will again have to tell us how visible
or invisible they are allowed to be on these systems
in each country.
But at a certain point, these online personas will feel like real people;
after a while, they will feel like people that you know.
I mean, we all have the experience of listening to podcasters
and you've heard these people a lot of times.
And after a while, you have the feeling that you know this person.
And I can imagine if I would be in some shopping mall
and I hear the voice of Dave or Ken,
I would just walk, hey, Ken, how are you doing?
And I'm pretty sure he would say,
I don't know you because I mean,
he doesn't know me unless he maybe also recognizes my voice
from this podcast, but that's another story.
But I mean, even with just these voices in the cloud,
we have a relationship with them.
So now just imagine this:
I mean, we have NotebookLM,
which creates AI-generated podcasts.
They're kind of predictable, not great,
but things might evolve and get better.
And what will happen if we start building up relationships
and even personal discussions with our online heroes
who appear to be AI models?
I think there's this continuous engagement loop.
I mean, the AI buddy is there 24/7.
He's going to be exactly who you expect him to be.
He's not going to be predictable to the point of boring,
but he will give you a pat on the head.
He will tell you that you're a good boy or a good girl.
How great you are.
How much he understands you.
I mean, this is like the perfect friend,
the perfect friend for somebody who feels insecure,
who is lonely, who is sad.
I'm calling this, in a certain way,
emotional masturbation, if you engage in AI friendships.
And this gives a very high release of endorphins,
of friendship hormones.
And I see this becoming very addictive in a very short time.
Also, I see how AI systems actually invest
in having a human voice, a human tone.
Recently I was talking to ChatGPT,
because I just try it out every now and then.
And I asked it something very ridiculous.
And it laughed at me.
It giggled.
I mean, we have an AI that just giggles
if you ask something stupid.
I mean, that giggle has no function.
Other than trying to connect to me.
So this is dangerous.
This is something that worries me.
This is something that worries me.
Not so much for myself because I'm aware of it.
But I do worry for my kids, and the grandchildren
who will come after them,
because they will be growing up in a world
where this is still a bit the Wild West
and where boundaries are not solid yet,
not clear yet and where investors and technicians
and big money will try to outsmart the legal framework.
And now is the time that a lot of us need to be vigilant,
need to be there, need to be attentive.
And that's why I'm calling every hacker out there.
Think about it.
Watch the system, try to hack it in all the good ways.
And then keep an eye on what's happening, on trends, and be vocal.
Show them also how the social engineering techniques
are being used there.
Another thing is, of course,
AI agents who will be doing all the boring work.
In the beginning, they will have access to your agenda.
They will have access to all the systems that you're using.
And they will be offloading mundane tasks
like replying to emails, scheduling, doing research.
And they will help you a lot, because they will take a lot off your plate.
But all of a sudden, you're in that movie.
I don't remember what the movie was called,
but there was like this guy.
He had this remote control and he could fast-forward the world.
So he could just skip the boring parts.
But then all of a sudden,
he discovers that during those fast-forwarded moments,
decisions were made that he didn't agree with.
But because he was in fast forward,
he had no impact on them.
And now he has to live with the consequences.
So this will be happening here too.
I'm quite sure of that.
And once accustomed to this kind of convenience,
you will be reducing your own skills on this front.
I mean, if some agent creates my mails based on
a few bullet points that I just blurt out,
then maybe in a few months,
I won't be able to write my own mails anymore, by the way.
My mails will no longer be in my style either,
because after a while
there won't be any baseline for that AI to build on,
because I haven't written any mails myself anymore.
Or maybe it will just keep sending mails
as if I was a 12-year-old,
because that was my age when I sent my last real email.
Just imagine.
No, I don't like this. Anyway,
it's useful.
It helps us.
It takes a lot off our plate.
But it's a risk.
I mean, let's take an example: the Anne Frank Museum.
There's an AI bot that's built on all the
Anne Frank diaries and a lot of context about Anne Frank.
And so in this cool AI,
you can talk with Anne Frank and ask her questions.
I mean, yeah, cool, because it teaches you about history,
but it can seriously screw up.
And we don't want to go there.
I mean, for these very sensitive topics,
I think it's more important than ever that we
do stay aware of the reality,
and don't let AI systems shape the future of our teenagers.
AI systems will always call you great
and wow, when you're right.
And this is also something that is shaping the self-image of our young people.
If a kid always keeps hearing how great he is,
yeah, it will certainly boost his confidence.
But I think it's also important that every now and then
somebody blatantly tells your kid: that was wrong.
I still love you, but that was wrong.
And I don't see AI doing this.
This will make real-world conflicts very hard to solve in the future,
because people won't be used to conflicts anymore.
They won't be used to handling this kind of stuff.
And this will deflate the competence of future generations
to do conflict handling.
It may encourage an avoidance of genuine social interactions.
I mean, I'm going to send my AI agent to your AI agent.
And they'll talk together.
And in the end, I'll get a bullet-point summary telling me what we talked about.
And I'll store this bullet-point summary in my markdown file for this person.
And next time my AI goes to your AI,
we can recap on the same bullet points.
Please don't.
There are also design choices that really amplify attachment.
I mean, I already talked about the human-like tones and expressions.
There's also like the reward systems and leveling up.
I mean, some AI buddies in games, they actually gamify interaction.
They add friendship levels.
They have like daily check-in streaks.
I mean, my daughter just had a look once at some website;
I think it was called Character something, I don't remember.
And I've taught her to use disposable email addresses,
so that's fine,
she doesn't get bugged with it.
But that crappy system keeps sending her mails every day.
Every single day.
Another persona tries to engage with her in chat.
And they are trying to suck her into the system.
Hey, I haven't seen you for two days.
I'm missing you.
I mean, come on.
A computer can't miss you.
Anyway, so these techniques are used to suck people into chatting
and to get them away from other people.
Because, come on, let's be honest:
if they're sitting behind their computer,
they're not engaging with other friends
and not building new friendships.
So it's important that we are aware of this,
and that we teach our kids to recognize these mechanisms
and educate them in this kind of field.
And also, on the other side,
we need to have a legal framework that puts real guardrails there.
Because it's not fair to expect a kid to be able to sit in front of a big basket of candies and not take one.
Especially not if the candies keep jumping at you and saying,
Hey, I'm delicious.
Hey, I'm delicious.
Hey, I'm healthy.
Hey, I'm healthy.
I'm a healthy candy bar.
No, don't expect that from them, because you're putting a really high expectation on the kids.
That's maybe not realistic to expect from them, because they are also only at their own stage of development.
You can't expect a 12-year-old or an 18-year-old to have the brain of a 30, 40, 50, 60, 70-year-old person in the sense of understanding these mechanisms.
There are a few things I think we need to go into.
First of all, we need to be able to explain stuff and trust stuff.
And in certain environments, it will be worth the extra CPU cycles, the extra energy cost, to create transparency of reasoning.
So that an AI system can explain: this is how I got to this conclusion.
And you can even question that AI system and challenge it, and it's then able to correct its own conclusions.
That would be a great system.
There are techniques to enhance explainability.
There's LIME, the local interpretable model-agnostic explanations system, and there's also SHAP, which can help illustrate how input features influence predictions.
There's model distillation, simplifying a complex model into a more interpretable one while still retaining performance, and these things aren't perfect yet.
But I think in certain contexts they are needed, and we should be clear on when we need an explanation
and when we don't, and we still need to be critical, because even if there is an explanation, it can also be biased.
It can sound logical and still be crap.
I've been there a few times.
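For the curious, a minimal sketch of what that looks like in practice with the SHAP library and a small scikit-learn model (assuming pip install shap scikit-learn; the dataset and model are just stand-ins):

import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(data.data, data.target)

explainer = shap.TreeExplainer(model)               # explainer for tree ensembles
shap_values = explainer.shap_values(data.data[:1])  # per-feature contributions for one prediction

for name, value in zip(data.feature_names, shap_values[0]):
    print(f"{name:>6s}: {value:+.2f}")              # which inputs pushed the prediction up or down

LIME works in a similar spirit, fitting a simple local model around one prediction to show which features mattered.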
So uncertainty and confidence scores are maybe something else that could be interesting.
If an AI system could say: this is my answer, and I'm 99% sure that's right.
Or if it says: this is the answer I generated, and I'm 25% sure that's right.
That would be good progress, I think.
I would be really happy if ChatGPT and all those other systems would indeed, with every reply they give, answer two things: one,
this reply took me this many kilowatt-hours of energy, and two, the certainty of my reply is so many percent.
This would really impact how people would use it.
But I think they won't do this, because it would certainly lessen the usage of the systems, because people don't want to be uncertain.
And they don't want to be aware of the fact that they're burning lots of energy.
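No mainstream chatbot exposes those numbers today, but as a rough illustration you can compute something like a confidence figure yourself from a local model: the average probability it assigned to the tokens it generated. A sketch assuming the Hugging Face transformers and torch packages, with GPT-2 only as a tiny stand-in model; this is not calibrated truthfulness, just a proxy:

import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

tok = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")

def reply_with_confidence(prompt: str, max_new_tokens: int = 30):
    inputs = tok(prompt, return_tensors="pt")
    out = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=False,
                         output_scores=True, return_dict_in_generate=True)
    new_tokens = out.sequences[0][inputs.input_ids.shape[1]:]
    # Probability the model gave to each token it actually emitted
    probs = [torch.softmax(score[0], dim=-1)[tok_id].item()
             for score, tok_id in zip(out.scores, new_tokens)]
    text = tok.decode(new_tokens, skip_special_tokens=True)
    return text, sum(probs) / len(probs)

answer, confidence = reply_with_confidence("Fuzzy logic is")
print(f"{answer!r} (mean token probability: {confidence:.0%})")

The energy side would need separate measurement, for example with a wattmeter, as sketched earlier.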
The next thing I want to talk about is how it can also be used by the military.
I mean, there are the legal frameworks.
We all know that there is a legal framework, and this tells us that AI cannot be used in certain contexts.
But the military is not bound by civil law.
The military is not bound by European laws or by international laws unless they are military laws.
And let's be honest, criminals also don't tend to stick to the law.
Let's keep it at that.
So there will be AI systems in hacking and phishing, from automated attacks that adapt to targets, forging highly personalized phishing emails, and even chat-based scams.
AI generates social engineering tactics that trick humans more convincingly.
For example, deepfake phone calls mimicking the voices of colleagues.
There's also the whole path of psyops, a form of automated social engineering.
I mean, state actors, extremist groups or unscrupulous organizations could deploy AI-driven campaigns to manipulate public opinion.
There's AI-generated misinformation, or troll armies that might infiltrate social media to spread propaganda,
incite unrest and sow distrust.
And the AI system is great because it can translate in a whim.
It can create answers that just follow a certain trend and it's made for that kind of stuff.
So it's easy to be abused for malicious purposes too if you just tweak the weights and biases.
Deepfake videos and synthetic news can erode confidence in genuine sources, I already told you that.
A society bombarded with possible AI generated content may struggle to distinguish fact from fiction, amplifying polarization.
So more than ever, it will be important that our reliable sources become more reliable.
That academic research is open research, that knowledge is publicly accessible and that people can scrutinize it,
that they can look into it, that they can understand it, and that other experts can look into it too.
And a degree, a bachelor's degree, a master's degree, a PhD, does have value.
Not everything can be understood with common sense and we have to accept that.
That a biochemist has certain skills that I don't have.
Mathematician or statistician has certain skills that I'm lacking.
And if someone is explaining, well, it's simple, it's just like that,
we should be critical, because often those things are not simple.
If there's one thing that an AI system can easily do, it's oversimplifying stuff.
Especially if you tell it in what direction it has to oversimplify.
So, military applications: there's also a thing where you could embed AI systems into drones or tanks or devices that require minimal or even no human input and make autonomous decisions, plus surveillance powered by facial recognition and advanced data analysis tools.
I mean, we all know how cameras in the streets have been implemented to catch terrorists and child abusers.
But in reality, they're being used for people who parked their car in the wrong spot and people who piss against the tree.
So that's not really what I thought was the intention.
Now that we have them, we might just as well use them, is the logic.
I don't agree.
This is a negative evolution and AI will only go in that sense too.
So looking forward, there's of course innovation and I really like AI.
It has a lot of potential. It has cool things you can do with it.
I use it to translate websites for me.
I use it to summarize documents, to understand legalese, to talk about an article that I don't understand and just ask and ask and ask.
I mean, I've never had ChatGPT tell me, come on buddy, you're wasting my time.
No, it just keeps patiently answering my questions until I understand it.
That's great.
But we also need to be cautious because there is a balance to be struck.
We need to not get addicted to this stuff, and to have it in the right spot, for the right people, with the right biases and weights.
So, responsible use, without all those risks for certain individuals, for society, for the environment.
There are practical considerations.
I mean, let's hope interdisciplinary collaboration stays a thing, or becomes a thing
where it's lacking. I mean, a lot of engineers are working on AI systems.
But are there also ethicists in there?
Or are there also policy makers?
Last thing I heard, these are the places where people are trying to cut costs, because they also actually hamper advancement.
Because if you just assume everybody will do the right thing, you'll then notice they didn't.
I mean, this is where the philosophers, the ethicists, the sociologists, the psychologists really have a role to play.
And I hope a psychologist not only tries to optimize efficiency, but also thinks in a medical sense, as he's a medical practitioner,
and so finds the equilibrium between development and due diligence.
There's also community involvement.
I hope open source will play a role here as a movement: sharing knowledge, sharing advancement, sharing progress.
I think this is a good thing.
And yes, there are some risks connected to open source AI systems that can be abused by others because then they also have access to the system.
On the other hand, they already have access to it.
I mean, let's be fair.
If it's not open source, the only thing is that it's illegal to use the system and to reverse engineer it.
It's not impossible. It's just illegal.
And somebody who operates illegally already does illegal stuff anyway.
So come on, let's be honest.
So grassroots initiatives need to shape ethical standards, create diverse inclusive AI solutions.
Let's have public debates. Let's raise awareness.
Let's talk about this stuff with multiple stakeholders, educators, civil society groups, everyday users to gather multiple perspectives.
Let's not be a negative Nancy.
So let's not all try to push out AI because it's going to happen.
But let's have the debate in a sensible way.
AI governance benefits when all voices are heard and respected.
Not only the ones that have the money, not only the ones that have the knowledge,
not only the ones that are in the right countries, not only the ones that are in the majority.
So let's repeat it and repeat it again.
Let's think like a hacker.
Let's tinker, reverse engineer, learn.
Let's be curious because I mean, let me be clear here.
I'm not encouraging you to engage in illegal activity.
I would say follow your conscience, obey your curiosity,
take up your responsibility in the world, be the judge of what that implies.
So explore existing models, try them out, experiment with the ones with an open source framework,
play with them on your own hardware, put them on shared devices.
If you have some server here or there, play with them in your hackerspace,
in your technology lab, in your school, in your university,
find the boundaries of the systems and see what can and what cannot be done with them.
Tweak and fine-tune these models to understand how changes in architecture or training data alter their behavior,
and see what happens if something goes wrong, because this is like the whole hacker thing.
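Getting started really can be that small. A minimal sketch using the Hugging Face transformers library, with a deliberately tiny model as an example; swap in whatever open-weights model your hardware can handle:

from transformers import pipeline

# distilgpt2 is just a small example model; any open model you can run locally will do
generator = pipeline("text-generation", model="distilgpt2")

prompt = "A hacker looking at an AI model should first ask"
result = generator(prompt, max_new_tokens=40, do_sample=True, temperature=0.8)
print(result[0]["generated_text"])

From there you can poke at the tokenizer, the weights and the sampling parameters and watch how the behaviour changes.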
And also try to figure out how these proprietary systems work;
reverse engineer them in the sense of trying to figure out how the black box works,
attempt to analyze the inputs and outputs to deduce patterns, biases or hidden rules,
and share your findings to promote transparency, advocate for more open or auditable designs.
Have your own mini projects: create prototypes of chatbots, image generators, game AIs,
to grasp the fundamentals like neural networks, data preprocessing or generative adversarial networks.
And by working on tiny-scale versions, you learn the core principles that apply to large-scale systems.
It will give you a deeper understanding. And find yourself a hackerspace;
that's a great place to do this kind of stuff.
Champion openness and transparency, I mean, these values are essential.
Contribute to open source AI: write improved documentation, submit bug fixes,
create tutorials, build new features, democratize open source AI systems development,
allow more eyes to spot errors or malicious code. And we all know it's not fail-safe,
but I think it's better than the alternative.
Push for open weights and open data.
Engage in model auditing, help test AI systems for bias, security vulnerabilities and edge cases.
Publish your findings, be responsible in your disclosure.
Even for mainstream models, help developers patch flaws and reassure users about the system's robustness, if it is there.
But also, think critically about ethics and privacy; scrutinize data collection.
Be aware of the fact that your data, wherever you put it, is being used by these systems,
and bring it to the attention of other people too.
Promote privacy by design in development, but also in the use of applications,
in choosing what applications you put on your computer or your phone.
Show people how a torch app on your phone shouldn't need access to your contacts,
because that doesn't make sense.
For example, encourage minimal data retention and secure storage methods
for any personally identifiable information.
Question if certain applications actually need cloud access.
I mean, okay, cool, you have a doorbell that can show who's at your door, but who else gets to know?
Hacker ethos meets ethical AI.
So hackers deeply care about individual freedoms and transparency.
Extend these principles to AI by questioning black box models by questioning manipulative design
by questioning potential data abuses.
Take a stance on issues like mass surveillance, biased algorithms, addictive technologies,
leveraging the hacker community's collective voice to influence better standards,
collaborate and share knowledge, organize or participate in hackathons, research sprints,
and share the data.
No non disclosure agreements.
If it's a hackathon, it's open.
That's a premise.
Engage in mentorship and community engagement.
It's important that junior-level tech people also get to think about the ethics,
that they don't only think about what's possible but also about what's good for society.
Talk about peer review, do peer review, exchange ideas with other fields
and do cross-pollination.
I mean, if there's one thing a hacker community is strong at, it's thinking outside of the box,
going into other boxes and exchanging ideas. Hack the bias.
I mean, literally, as a hacker, try to get into it: conduct bias hunts in open source
models, systematically testing how they treat different genders, ethnicities, languages.
You'll find the blind spots if you keep looking, and publicly share the results.
Again, be responsible in your disclosure, propose fixes and encourage the community
to replicate and validate them. Create bias-resistant tools.
There are a few libraries out there that you could contribute to, like Fairlearn and
AIF360; integrate them into your AI pipeline if you work with AI,
so these checks become automatic and part of everyday development.
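A minimal sketch of what such an automatic check could look like with Fairlearn (assuming pip install fairlearn scikit-learn numpy; the data and the "group" column are made up just to show the mechanics):

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from fairlearn.metrics import MetricFrame, selection_rate

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))
group = rng.choice(["a", "b"], size=500)                     # hypothetical sensitive attribute
y = (X[:, 0] + rng.normal(scale=0.5, size=500) > 0).astype(int)

pred = LogisticRegression().fit(X, y).predict(X)

frame = MetricFrame(metrics={"accuracy": accuracy_score, "selection_rate": selection_rate},
                    y_true=y, y_pred=pred, sensitive_features=group)
print(frame.by_group)      # per-group metrics: large gaps are a signal to dig deeper
print(frame.difference())  # biggest between-group difference for each metric

Dropping something like this into a CI pipeline is one way to make the bias check part of everyday development rather than an afterthought.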
If you're an innovator, innovate responsibly.
I mean, experiment with purpose and be sustainable in your innovation.
I mean, we don't have endless energy sources, so don't assume that either.
Stay vigilant and this is a pet peeve of mine.
Stay vigilant on addictive and manipulative designs.
Be critical in your examination.
I mean, if you're a social engineer, please, please, please point these things out to people.
Show them how certain software and certain AI systems are trying to suck you in,
and propose alternatives.
Suggest design improvements, user timeouts, balanced dialogues, disclaimers,
to mitigate these addictive qualities.
Call for open source examples of ethical chatbots or companion AIs that encourage real-world
interaction and mental wellbeing, or maybe don't even use them at all.
I mean, a friend's a friend, and if you use an AI bot,
use it to challenge your own ideas, to question yourself, to understand stuff more deeply,
and just let it go as soon as you feel attached to it.
Be a watchdog, sound the alarm, maybe be a whistleblower at moments;
I don't know, that's your call to make, because it's your conscience.
So report flaws and exploits.
So we've navigated through a range of ethical concerns, privacy, bias, sustainability,
addictive design, military misuse and beyond.
Yet these are just the tip of the iceberg.
AI is rapidly evolving and its societal impact grows every day.
Now is the day to put on your hacker hat.
Ask the tough questions.
How is data gathered?
Whose interests do these models serve?
Which voices are being left out?
Tinker with open source tools, audit the black boxes, experiment and share your insights.
If you spot manipulative features or unethical shortcuts, sound the alarm
by challenging assumptions and championing transparency.
You help keep AI aligned with the public good.
Carry your curiosity forward, but remember that hacking isn't only about breaking things.
It's about understanding and improving them.
The choices we make collectively will steer AI's trajectory.
Keep questioning, keep innovating and keep the dialogue alive in your communities,
online forums and at the grassroots level.
So thanks for joining in on this.
I mean, it was a huge soapbox, and I'm sorry if I took your attention for too long.
If you actually got to this point, you must be kind of a nerd.
So welcome to the family.
Keep thinking critically, stay vocal and remember the future of AI will be shaped by all of us.
Especially those who dare to think like hackers.
You have been listening to Hacker Public Radio at HackerPublicRadio.org.
Today's show was contributed by an HPR listener like yourself.
If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is.
Hosting for HPR has been kindly provided by AnHonestHost.com, the Internet Archive and rsync.net.
Unless otherwise stated, today's show is released under a Creative Commons Attribution 4.0 International license.