
Episode: 4486
Title: HPR4486: A code off my mind
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr4486/hpr4486.mp3
Transcribed: 2025-11-22 14:57:05
---
This is Hacker Public Radio Episode 4486 from Monday 13 October 2025.
Today's show is entitled, A Code Off My Mind.
It is hosted by Lee and is about 21 minutes long.
It carries an explicit flag.
The summary is, Lee touches on a few aspects of coding as an occupation and ponders on neurodivergence.
I am Lee.
Today I'm going to talk about a few of the things in a modern coder's life.
These will be cybersecurity, databases, test frameworks, AI, hardware, and finally I'll talk about neurodivergence.
So cybersecurity is something that's been on my radar since the first viruses on the boot sectors of floppy disks.
Fast forward to the present day: while studying this I found out about a company called BAE who organise cybersecurity capture-the-flag events.
I've attended two of these online events in recent years.
Most of it is completing web-based cyber challenges.
There's generally a live stream, maybe at the start and end, from the organiser.
And the event runs over two days with new challenges being released as time progresses.
For the purposes of scoring, you'll be put into a team.
And while you can tackle every challenge as an individual, it helps if you coordinate your efforts with your teammates through a provided chat channel.
The challenges can be things like analysing data packets in Wireshark, debugging a C program to find an exploit, or using the JavaScript console in the browser's developer tools to hack your way through a simple online game.
I did enjoy some of the challenges and found them engaging and educational, particularly one where you had to use a very basic drawing machine to build up the ability to process operations of binary arithmetic, such as add, then build multiplication from repeated additions.
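To give a flavour of that idea, here is a minimal sketch in Python (my own illustration, nothing to do with the actual challenge mechanics): addition from bitwise operations, then multiplication from repeated addition.

    def add(a: int, b: int) -> int:
        # Ripple-carry style addition using only bitwise operations.
        # Assumes non-negative integers.
        while b:
            carry = (a & b) << 1  # bits that overflow into the next column
            a = a ^ b             # sum without the carries
            b = carry
        return a

    def multiply(a: int, b: int) -> int:
        # Multiplication built purely from repeated addition.
        result = 0
        for _ in range(b):
            result = add(result, a)
        return result

    print(multiply(6, 7))  # 42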
There was also a good one about regular expressions.
And I was one of the relatively few who was eventually 100% successful at this challenge.
Deciphering text from a log of USB keyboard traffic, including backspaces and modifier keys, was also quite cool.
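The core of that decoding can be sketched like this (a toy Python version, assuming the key events have already been extracted from the capture):

    def decode(events):
        # events: key presses recovered from a USB traffic log, with
        # "BACKSPACE" for deletes and ("SHIFT", key) for modified keys.
        out = []
        for ev in events:
            if ev == "BACKSPACE":
                if out:
                    out.pop()
            elif isinstance(ev, tuple) and ev[0] == "SHIFT":
                out.append(ev[1].upper())
            else:
                out.append(ev)
        return "".join(out)

    print(decode(["h", "i", "BACKSPACE", ("SHIFT", "p"), "r"]))  # "hPr"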
Increasingly though, and certainly when I came to this event a second time, I took less interest in the competitive aspect.
Recently, I've started implementing OAuth authentication to allow third parties to access data from a software system.
OAuth is one of the more robust ways of securing an API.
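As a rough sketch of the third party's side of that (hypothetical endpoints and credentials, using Python's requests library): obtain a token, then present it with each API call.

    import requests

    # Client credentials grant: exchange the app's credentials for a token.
    token = requests.post(
        "https://auth.example.com/oauth/token",
        data={
            "grant_type": "client_credentials",
            "client_id": "third-party-app",
            "client_secret": "...",  # kept secret by the third party
        },
    ).json()["access_token"]

    # The token then authorises requests to the protected API.
    resp = requests.get(
        "https://api.example.com/v1/data",
        headers={"Authorization": f"Bearer {token}"},
    )
    print(resp.status_code)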
I've been dipping into a book called Secure by Design by Johnsson, Deogun and Sawano, though this is more for inspiration rather than directly telling me how to code this.
I've noted in reading about domain-driven design that static type safety is considered quite important.
This is basically saying that when we have some variable, we want to know what type it is, and we can't just pass any variable we like into a function; it has to be one of the right type.
So for example, this would stop you trying to pay someone an amount of money based on their telephone number, because the two things, a currency value and a phone number, would have different types.
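A minimal sketch of that idea in Python (my own illustration; the book uses its own examples): distinct wrapper types make the mix-up visible to a static type checker.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Money:
        pence: int

    @dataclass(frozen=True)
    class PhoneNumber:
        digits: str

    def pay(recipient: PhoneNumber, amount: Money) -> None:
        # A checker such as mypy rejects a call with the arguments
        # swapped, because a Money is not a PhoneNumber.
        print(f"Paying {amount.pence}p to {recipient.digits}")

    pay(PhoneNumber("07700900123"), Money(pence=500))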
Pretty much every day I'm using either MySQL or SQL Server.
You can run both of these on your own Linux PC, and they also run in more enterprise fashion on a remote server.
I was running SQL queries and pasting the output into Joplin so much that I ended up writing a filter: I pipe the output from MySQL through it, and it turns it into a markdown table.
And I know you can in theory make markdown tables very wide, but I have a strong dislike of markdown tables that look nice when rendered while the source is a complete mess.
I pad out the cells fully with spaces so the table is readable even as markdown source.
And when the output is too wide, I had my filter flip the table from vertical columns to multiple stacked horizontal key-value pairs.
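A minimal sketch of the padding part of such a filter (my own reconstruction, not the actual script): it reads MySQL's tab-separated output from stdin and pads every cell to its column's width.

    import sys

    # Assumes every row has the same number of tab-separated columns,
    # as produced by e.g.:  mysql -e "SELECT ..." | python md_table.py
    rows = [line.rstrip("\n").split("\t") for line in sys.stdin if line.strip()]
    header, body = rows[0], rows[1:]
    widths = [max(len(row[i]) for row in rows) for i in range(len(header))]

    def fmt(row):
        return "| " + " | ".join(c.ljust(w) for c, w in zip(row, widths)) + " |"

    print(fmt(header))
    print("|" + "|".join("-" * (w + 2) for w in widths) + "|")
    for row in body:
        print(fmt(row))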
Then one time I realised I was often getting CSV or other data from various sources that I wanted to make tabular too.
So I made an online tool that lets you quickly apply a regular expression to some rows of text and extract the data as a markdown table.
To be honest, since I wrote it, I haven't used it quite so much.
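The heart of that kind of tool fits in a few lines (an illustration, not the actual tool): each capture group of the regular expression becomes a table column.

    import re

    lines = [
        "2025-01-03 21.5C 41%",
        "2025-01-04 19.8C 45%",
    ]
    pattern = re.compile(r"(\S+) (\S+)C (\d+)%")

    print("| Date | Temp | Humidity |")
    print("|------|------|----------|")
    for line in lines:
        match = pattern.match(line)
        if match:
            print("| " + " | ".join(match.groups()) + " |")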
The other database I use from time to time is SQLite.
This is quite a nice way of storing small amounts of data for small-scale projects, and the likes of Node, PHP and Python all have libraries built in to bind to SQLite.
So, a few applications I've used SQLite for directly: a crossword generator, where the dictionary and a joint index of words with letters missing are held in tables; a temperature and humidity log, which is a simple single table of row-by-row date-time, location, humidity and temperature; and then also the static site generator for this podcast.
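As a minimal sketch of that single-table log in Python's built-in sqlite3 module (the column names are my guesses, not the actual project's schema):

    import sqlite3

    con = sqlite3.connect("climate.db")
    con.execute("""
        CREATE TABLE IF NOT EXISTS readings (
            taken_at    TEXT,  -- ISO-8601 date-time
            location    TEXT,
            humidity    REAL,  -- percent
            temperature REAL   -- degrees Celsius
        )
    """)
    con.execute(
        "INSERT INTO readings VALUES (?, ?, ?, ?)",
        ("2025-10-13T09:00:00", "study", 48.0, 19.5),
    )
    con.commit()

    for row in con.execute("SELECT * FROM readings ORDER BY taken_at"):
        print(row)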
Incidentally, the first ever proper SQL database I used at work was Oracle, and this was on those green-screen terminals that had a keyboard but no CPU or hard drive as such.
The code might be running on a mainframe in another part of the building, or, if I recall correctly, maybe even in another part of the country, and the terminal data was going back and forth on some sort of company backbone in those pre-internet days.
So since there was no Windows interface, the forms were all TUI-based, that's terminal user interface, though we did not know that nomenclature at the time, and I'd be writing stored procedures in something called PL/SQL.
It's only quite recently, since I started working with SQL Server, that I've been coding stored procedures again.
These are lumps of code that live inside the database itself; they handle database operations, and can incorporate whatever database-related logic you want, without having to put that logic in your actual application code.
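For a flavour of the idea (a made-up example, not from any real system), here's a tiny SQL Server stored procedure being created and then called from Python via the pyodbc library:

    import pyodbc

    con = pyodbc.connect(
        "DRIVER={ODBC Driver 18 for SQL Server};"
        "SERVER=localhost;DATABASE=Shop;Trusted_Connection=yes;"
    )
    cur = con.cursor()

    # The counting logic lives inside the database, not the application.
    cur.execute("""
        CREATE OR ALTER PROCEDURE dbo.CountOrdersSince @since DATE AS
        BEGIN
            SELECT COUNT(*) FROM dbo.Orders WHERE placed_on >= @since;
        END
    """)
    con.commit()

    cur.execute("EXEC dbo.CountOrdersSince @since = ?", "2025-01-01")
    print(cur.fetchone()[0])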
I think it was in the mid-90s that, here and there in my life, I started using Microsoft Access, and even a year or so ago I was supporting a charity that still used an Access database for specific form printing and mail-outs.
Access was something I'd almost imagined had gone away.
Access kind of has a place in my heart, like Visual Basic, another technology I thought was lost in the past, but recently I became moderately reacquainted with it in the form of VB.NET, which was still being used by a profitable software-as-a-service provider, the application having been originally developed about a decade ago.
And now for that application there are plans to migrate to slightly more modern technologies: C# on .NET Core, as well as perhaps the React framework.
Thankfully, modern AI tools, such as Anthropic's Claude to name just one, I found do a thoroughly good job of directly translating between Visual Basic and C#, with the main complication being any dependencies that might be in .NET Framework but not in .NET Core.
Having data-related code separated from the visual user interaction code by a layer of abstraction, specifically in a class library, was a good design decision at the time this application was developed, and it has made at least this particular part of the migration much more feasible.
The migration is still a work in early progress, so I touch wood out of superstition as I say this.
Now test frameworks.
I don't pay as much attention to different test frameworks
as they deserve,
and use them pretty much interchangeably
without caring which one I'm using.
I've never done proper DevOps, or been in a real-life agile team, at least not one that anyone who actually does agile properly would recognise as such.
My closest brush with DevOps is just running a web service that converts HTML pages to markdown.
That was my own pet project, really just because I needed it, and I wanted the exact same version of the code doing the work irrespective of platform, without having to deploy the latest code to the client devices.
And I prefer to do all the heavy lifting in the cloud, not on the client device.
So I actually set up a Jenkins server at one point
that would take the code as it was pushed to the repository,
run the tests, build and deploy the thing,
assuming the tests had passed.
I also saw something similar with SerenityOS, the modern C++ homage to 90s operating systems, a work primarily, but by no means exclusively anymore, by Andreas Kling, which, while it can run on some hardware, often runs best in a virtual machine, namely QEMU.
I did encounter a test framework, though I can't even remember what it was called, when submitting a small patch for consideration to the Mozilla Firefox code base.
My first test framework was actually one I wrote myself in PHP, and it was not very fleshed out, having no concept of test coverage, which is quite a useful stat for knowing how much of the code in a program has actually been touched by a particular set of tests.
Anyway, for that work I soon moved on to a real framework, I think it's called PHPUnit.
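To illustrate the coverage idea with a toy example of my own (Python this time, using pytest and the coverage tool):

    # calc.py
    def classify(n: int) -> str:
        if n < 0:
            return "negative"      # this branch is never exercised below
        return "non-negative"

    # test_calc.py
    from calc import classify

    def test_positive():
        assert classify(5) == "non-negative"

    # Running:  coverage run -m pytest  &&  coverage report
    # would show calc.py at less than 100%, because no test
    # ever touches the negative branch.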
And now AI.
Whether you call this generative AI or large language models, I'll just call it AI for now.
Needless to say,
my experience of AI tools for coding
does not go back years.
I've never touched the big one.
We all know the name,
but I've used Google's Gemini.
I've used a large language model runner called Ollama to run a cut-down version of Gemini called Gemma 2 on my PC graphics card.
I've very recently started using the command line version of Gemini, and also the command line version of Anthropic's AI, called Claude.
The most pushy of the AIs, which seems to want to creep into my life, is something called Copilot, I think.
I generally ask it to creep back out of my life.
Not for any real reason, other than I like to actually know when I'm using an AI, not have it doing things for me without my conscious attention.
So Claude I was only looking at because they seem to be leading the way with a protocol called MCP, which has now been adopted by most of the AI systems.
That lets an existing application or database be connected to an AI, such that users of a software system might talk to their chatbot and get answers based on real private data extracted dynamically from the system.
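As a rough sketch of the server side of that (based on my reading of the Python MCP SDK; the tool and its data here are invented): you expose a function as a tool, and a connected AI can call it to fetch live answers.

    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("customer-data")

    @mcp.tool()
    def open_tickets(customer_id: str) -> int:
        """Number of open support tickets for a customer.
        (Stub: a real server would query the application's database.)"""
        return 3

    if __name__ == "__main__":
        mcp.run()  # serve over stdio so an AI client can connect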
And when I started ramping up what
the command line AI bots could do,
I saw they were quite good at generating
and refactoring entire code projects.
I also learned two things the hard way.
Firstly, the AI will sometimes get into a loop: adding code, listening to the problems you report, then adding more code, such that the amount of code grows and grows with the problem getting no closer to being solved.
It takes some skill, patience and experience to see through this, then finally say to the thing: stop, stop, stop, let's go back to the start and try to solve things differently; let's be taking away lines of code rather than adding them.
And the other problem is that these AIs, while generally good and safe about asking for permission for what they do, sometimes might completely screw up your code beyond repair.
One notable way a program can get screwed up: if it's Python, it just takes one bad indent, and this can waterfall into hundreds of lines with wrong indentation.
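A contrived example of why a single indent matters so much in Python; the two functions below differ by one level of indentation and behave quite differently.

    def total_once(prices):
        total = 0
        for p in prices:
            total += p
        print("total:", total)       # outside the loop: printed once
        return total

    def total_every_pass(prices):
        total = 0
        for p in prices:
            total += p
            print("total:", total)   # one indent deeper: printed every pass
        return total

    total_once([1, 2, 3])        # prints a single line
    total_every_pass([1, 2, 3])  # prints three lines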
So, granular backups are not
just advisable, they're essential
if you're messing with these tools.
Recently I've been greatly helped, or at least assisted, or maybe even enabled to do questionable things faster and more confidently than before.
An example is being able to write a fully functional extension for my terminal of choice, known as Konsole, something I never thought I could do.
Then implementing my own version of an open-source terminal-based EPUB book reader, extending a more basic version from GitHub with the ability to render block images in with the text, as well as syntax highlighting code, and having an easy-to-use bookmarking system.
I would never have attempted either of these projects and expected to see them through.
Now, on to hardware.
When I was first trying to get
into this field, there seemed to be
a domain problem that got put in front
of me more than once.
That was basically turning lights on and off.
I've noted before my pet gripe about the modern world: that for whatever reason, workplaces and places accessed by the public seem to have uninvented the light switch, in the same way user interface designers uninvented the scroll bar, and until recently granular file management was uninvented compared to when I had it at my fingertips on my work terminal in the mid-90s.
So at one point I was asked to write software for a layman to program a network of actuators, such as light switch relays, each with a microprocessor in, similar to what would now be recognised as an Arduino, but back then was something revolutionary, or at least potentially so.
I was greatly hindered by the licensing model of the hardware architecture, which made it basically non-viable for a developer, and my attempt to get a third-party programmer paid to implement a hack that would get around this bizarre licensing fell on deaf ears.
An offshoot of this was later being asked to take on development of a serial-board-based light switch controller, where the switches had no intelligence but got digital signals that flicked relays on and off.
All the smarts had to be in a central PC running C code.
They wanted to power more lights, addressed by a four-digit indexing system.
Think of large office buildings
in central London if you want to imagine
how many light bulbs this thing had
to control.
I was greatly hindered in several ways.
First, understanding the masses of existing code: however elegantly written, I did not get what it was doing, since the low-level stuff was in assembler, something I knew of but was not fluent in.
Then I had to even copy the code from a Mac to a PC with proper line endings, then compile the code, which it wouldn't, because as soon as I added code a 64K segment overflowed.
I had a single relay board
I'd been given to test with.
I didn't really know how that was meant to work either.
I quit this project unceremoniously, not just in frustration, more like in complete meltdown.
And while it was something that could be recovered from in the short term, it set a pattern in motion that would keep me out of the industry for some decades.
So playing with Arduinos and the like in the last decade has been a kind of therapy that got me back comfortable with hardware projects.
I don't have anything major on now, excepting the temperature and humidity personal project I mentioned in a previous podcast.
It has been suggested to me that it might be fruitful as the basis of either research or some manner of software product I could market, but I have my doubts at this stage.
So this brings me to a final aspect of coding, one that not all coders share but a good number do: finding it difficult to fit in with the world the way it is.
Part of coding is real problem-solving, which people seem to find useful.
Turning technical knowledge into formal academic research is something I have found difficult.
And I don't think it's just about the difference between practical and academic.
Problem-solving is often a step-by-step thing, and that's a kind of thinking that comes naturally to some of us.
But I have questions about how different types of intelligence are recognised in education and career systems.
Some minds excel at pattern recognition and systematic thinking, and those are useful skills for debugging complex systems, designing architectures, or whatnot.
But do these skills have more value in industry than academia?
Is it because industry cares about outcomes, whereas academia cares about theory, or methodology, or something else, like positioning your work in existing debates?
If I just wrote down why I make technical decisions, would that qualify as research?
And what is knowledge creation anyway, and who gets to decide what it is?
Another aspect of this is the whole concept of reasonable adjustments for neurodivergent workers.
In organisations you hear about accommodating employees.
That might be arse-backwards, if you ask me.
Some working styles are actually better suited to certain types of technical work.
Take debugging: that's systematic error tracking, and it's quite boring really.
Most people want to jump to the end and try random fixes so they can move on to more interesting problems.
If you actually enjoy methodically working through edge cases, that's not an accommodation; maybe that's just being better at it.
Putting someone with systematic thinking patterns into work that benefits from them makes sense, so why are we framing this as helping disabled people rather than matching cognitive styles to appropriate tasks?
And there's also remote work.
Obviously the pandemic changed what we consider workplace flexibility, but remote work might not just be about convenience; it could also be about creating the best working conditions for someone.
So if you're coding, you might work better with the right lighting, minimal noise and distraction, and sometimes your brain just needs processing time.
So is all this because someone is lazy or antisocial, or is it because these conditions allow them to do better technical work?
For career development, think of the stereotype of advancing by networking and lots of informal conversations.
If someone does their best technical work in controlled environments, how do they do that?
Then there's also technical specialisation versus career breadth.
You might move around without settling on anything solid, or have several low-level things going on together.
On LinkedIn or a CV that would look scattered and not focused, but from a technical problem-solving perspective those experiences all feed into each other.
Our diverse technical backgrounds are actually an advantage, even if they don't fit neat career categories.
And then there's volunteering.
Someone might assume
that technical people
volunteer or whatever because
of social obligation.
Maybe it just suits
that person's problem-solving preferences.
Many of these roles involve systematic approaches
to helping people navigate complex systems.
For example, if it's helping someone
use assistive features
or putting together categorised directories
of support services in the local area,
that's still technical problem-solving.
You might be learning about new technical solutions; the domain is different, but the cognitive process is similar.
So is the separation between technical careers and helping careers artificial, when they might actually be complementary for people who think in systematic ways?
And what does career sustainability look like for neurodivergent technical professionals?
You hear about burnout here and burnout there, but that assumes the solution is better work-life balance.
Maybe burnout happens when you're forced to work in ways that conflict with your cognitive preferences, when you have to spend energy on a neurotypical performance rather than actual technical problem-solving.
So what would sustainable
career development look like if you started
from cognitive strength?
Finally, another part of coding is documentation.
What's the role of that, and of knowledge sharing, in technical careers?
Formal technical writing probably gets treated as less important than hands-on development.
Documentation is crucial for a system to be maintainable, to transfer knowledge within the team, and especially for bringing in new people.
It needs understanding of complex technical concepts, deeply enough to explain them clearly, and it involves anticipating what information different audiences need.
It might even be an AI nowadays reading this documentation, so it knows what's going on in the code.
So why isn't technical communication valued more highly?
Thinking systematically about information and clear explanation should be seen as a specialised skill.
Maybe industry values the practical more than academia does.
But there is applied research: the systematic investigation that happens when you're trying to solve real technical problems.
That's a kind of knowledge creation, systematic investigation that leads to a new understanding; it just happens to be understanding with immediate practical application rather than theoretical significance.
If your strength is systematic technical problem-solving, how do you build a career that uses those strengths rather than constantly trying to compensate for neurotypical expectations?
More broadly, as the tech industry matures and becomes more aware of neurodiversity, are we going to see new career paths that better match different cognitive styles, or will the pressure always be to fit into existing frameworks?
I don't really have answers to these questions
but it's worth thinking about.
Perhaps some of the answers, or at least better questions, are in a 2022 report called The Changing Workplace, on disability-inclusive hybrid working, by Heather Taylor, Rebecca Florisson, Melanie Wilkes and Paula Holland.
So that's all for now.
See if you can guess which parts of this podcast I got a little help from the AI to script.
If this all resonates with your own technical career experience, or if you have found ways to build sustainable paths that work with your preferences rather than fighting them, I'm sure listeners would be interested to hear about it.
Let's keep questioning
how we organise work and who benefits
from different approaches
to technical problem solving.
You have been listening to Hacker Public Radio at HackerPublicRadio.org.
Today's show was contributed by an HPR listener like yourself.
If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is.
Hosting for HPR has been kindly provided by AnHonestHost.com, the Internet Archive, and rsync.net.
Unless otherwise stated, today's show is released under a Creative Commons Attribution 4.0 International license.