Initial commit: HPR Knowledge Base MCP Server
- MCP server with stdio transport for local use
- Search episodes, transcripts, hosts, and series
- 4,511 episodes with metadata and transcripts
- Data loader with in-memory JSON storage

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
hpr_transcripts/hpr4333.txt (new file, 214 lines)
Episode: 4333
Title: HPR4333: A Radically Transparent Computer Without Complex VLSI
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr4333/hpr4333.mp3
Transcribed: 2025-10-25 23:11:40

---
This is Hacker Public Radio Episode 4333 for Wednesday, 12 March 2025. Today's show is entitled "A Radically Transparent Computer Without Complex VLSI". It is the first show by new host Marc W. Abel and is about 19 minutes long. It carries a clean flag. The summary is: a short talk about DAUUG 36, the world's most advanced transparently functioning computer. Today's show is licensed under a Creative Commons Attribution license.
Welcome, Hacker Public Radio listeners. I'm having you join me for the final rehearsal before I give the last talk at the first IEEE conference on Secure and Trustworthy CyberInfrastructure for IoT and Microelectronics. Today is February 27, 2025. My session chair's name is Dominic Morehart, so I begin the talk with these words.

Thank you, Dominic. Welcome, everyone. We've reached our last talk for this conference, but not our last event, so I hope you can stick around. This talk is called "A Radically Transparent Computer Without Complex VLSI", and as you're listening on Hacker Public Radio, I'll point out there are no slides for this talk, so you're not missing anything.
My name is Marc Abel, and I have a simple question I'd like for you to consider. Suppose you have a small process to automate. It might be information retrieval, or a cyber-physical system, or some other application, but you're going to need a computer. Only one computer. And suppose you're allowed to set it up in a fixed location that has all the physical security you need, but you're going to give it an internet connection. So my question is this: can you absolutely guarantee that the machine itself, including its software, is fully immune to remote exploits? In other words, if you had to, could you force even one machine to function as designed, assuming it's physically safe?
Let's think for a moment about what that would take. The software would need to be absolutely perfect. That's actually quite easy, because all you need to do is not overwrite the software. If you write code well enough, and your program is small enough, you can find and remove every bug. I can do that much, and so can many other people.
Now let's think about the hardware. It needs to be every bit as perfect as the software, with no logic defects whatsoever, even in the most obscure corner cases. To keep our thought experiment simple, remember the machine is physically secure. No one is fuzzing the power supply, introducing ionizing radiation, pouring liquids, hammering on the cabinet, or applying extreme temperatures. All I'm asking of the hardware is that its behavior be fully specified by the software that is running, according to the hardware's architectural specification. No more, and no less.
Back to my question: can we absolutely guarantee this system will be immune to remote exploits? The answer is yes, if and only if we have objective proof of this immunity. This is easy for the software part, because we have the object code, which we can disassemble to show that it matches defect-free source code that we wrote ourselves.
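For HPR listeners, the software half of that proof is essentially a reproducible-build check. Here is a minimal Python sketch of the idea, assuming a deterministic toolchain; the file names are hypothetical placeholders, and it compares hashes of a rebuild rather than reading a disassembly, but the goal is the same: show the deployed object code is exactly what the audited source produces.

```python
# Sketch: objective proof for the software. Rebuild the program from
# audited source, then show the deployed object code is bit-identical.
# File names are hypothetical; assumes a deterministic build.
import hashlib
import subprocess

def sha256(path):
    """Hash a file so two binaries can be compared byte for byte."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

# Rebuild from the source we audited ourselves.
subprocess.run(["cc", "-o", "rebuilt.bin", "audited_source.c"], check=True)

if sha256("rebuilt.bin") == sha256("deployed.bin"):
    print("deployed object code matches the audited source")
else:
    print("mismatch: the running binary is not what was audited")
```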
But we have to inspect our hardware a lot more thoroughly than our software. For starters, we certainly need to inspect every gate, every signal, and every wire, just as we would inspect every last word of object code. And you're not likely to find a manufacturer who is willing to share this information, even on a RISC-V platform.
So okay, we're not looking to be control freaks. We only want to guarantee the safety of a machine in our data center. As cyber professionals, we're team players. So what if we allow the manufacturer to do their own assessment of their product, and sign a contract with us guaranteeing that their hardware does nothing more, nothing less, and nothing differently than specified in their own documentation? The manufacturer would agree to reimburse whatever losses we incur that stem from logic defects in their own silicon. Because without the manufacturer's guarantee to us that their hardware does exactly what it's programmed to do, nothing more and nothing less, we're unable to guarantee that any system we buy and install or use is safe from remote exploits.
You can probably see where I'm going with this reasoning, because it turns out that none of our VLSI manufacturers offer any kind of warranty that their systems are free of exploitable defects. Instead, they've sold full catalogs of defective CPUs from at least 1985 to the present day, as well as built products and alliances that plausibly suggest the presence of intentional backdoors. So coming back to my question: can we guarantee that even one machine, in a secure location, under a clearly defined threat scenario, is immune to remote exploits? Not with the prevailing computers of our time.
Cybersecurity is a pseudoscience built on blind faith that hopefully everything is going to be okay on our watch, and that if it's not okay, our attackers are to blame. That's not good enough. Our country's enemies have been in our computers since at least 1986, almost 40 years, notwithstanding decades of research, guidance, and rules, and at least hundreds of billions spent. Computer security today is the largest failure in the history of human engineering. Not only are we not prepared to defend our nation, we aren't even prepared to defend just one network-connected computer with any measurable assurance of confidence.
To lift ourselves from this cybersecurity rut, I have some recommendations. First, a radical shift is needed in our concept of accountability. For me, cybersecurity is the discipline of scaling automation responsibly, and unlike authority, responsibility isn't something you can delegate to someone else. If you own a computer, you alone are ultimately responsible for its security. Not the folks you bought it from, nor the folks who made it, nor its designers, nor whoever wrote the software it runs, nor your own customers. Because if you're unwilling to take responsibility for a system you bring into an environment, you're not going to find anyone in your supply chain who's a better steward than you, and the folks who hack into your machine aren't going to own up to anything either. I know I'm asking quite a bit here, and I wouldn't be if I didn't have faith in Americans to embrace responsibility. For those interested, there's a best-selling book about doing exactly this, called Extreme Ownership: How U.S. Navy SEALs Lead and Win, by Jocko Willink and Leif Babin. I recommend Extreme Ownership above every other book I've read about computer security.
It's fitting that this book is about winning, because it brings us to my second recommendation. Cybersecurity is often explained as an arms race, where the good guys, usually Americans, try to stay a few steps ahead of our adversaries overseas. This arms race metaphor is reckless and needs to be thrown out. For one thing, it's technically inaccurate, because when we design and deploy computers that are free of remotely exploitable defects, it doesn't matter how smart our enemies on the other side of the world are. Nobody overseas has ever hacked into a secure computer here in the U.S. They only hack into insecure computers, and we need to stop using them.
The other problem with the arms race metaphor is that it's irresponsible statesmanship. It's not a coincidence that the four countries most known for cyber attacks against the United States are the same four most known for human rights abuses against their own people. Unfortunately, U.S. treatment of these regimes as plausible opponents in cyberspace tends to legitimize their claims of sovereignty with respect to their crimes at home. To the extent we treat these nations as technological rivals by conceding access to our computers, we strengthen their claims at home against the rule of law and Western democracy.
Another reason to throw out the arms race metaphor is that if we delay much longer, artificial general intelligence could become able to hack into our computers. I think we've got some time yet, but let's not wait. Of course, this is only possible if our computers have exploitable defects in the first place. If and when our standard becomes zero defects, instead of zero known defects, no enemy, regardless of intelligence, will find a network-exploitable defect to hack in.
My third recommendation is that there need to be computers that make it possible to guarantee hardware safety. As I mentioned, the guarantee we're after is provable conformity to published specifications. In other words, we want a computer that does what it's programmed to do in every instance. But computers don't do what they're programmed to do. Instead, they do what they're wired to do. So we need a machine that, first of all, is wired to do only what it is programmed to do, and second of all, permits its owner to inspect every gate, every signal, and every wire, just as she should inspect every last word of object code.
I said earlier that we need to inspect our hardware even more thoroughly than our software. Why? Software is simpler, because its data representation collapses into zeros and ones. But hardware, as it turns out, doesn't have any zeros and ones. Powered-on hardware is an unruly rat's nest of stationary-to-microwave electric and magnetic fields moving at relativistic speeds. Tracks on the circuit board don't even behave like wires. Instead, they are waveguides, around which these fields play like cats at a laser pointer convention.

Now, I don't have a cat, but I do believe in eating my own dog food, because I don't have a dog either, which is why in 2019 I set out to design the world's most advanced transparently functioning computer. I'd like to have a machine I would feel safe connecting to the internet, and never need to update it. That's the thing with defect-free hardware and software: they don't need security updates. It took a few years to choose a name for my computer. I wanted something industrial sounding, possibly a little European, and easy to search for on the internet. Now, my middle initial happens to be W, which centuries ago was written with two U's, and there aren't a lot of words now that have two U's next to each other, so I named the line of computers DAUUG. Like the American four-letter slang "dawg", but written without a W, and with five letters: DAUUG. These five letters are all you need to find my work online.
The first computer in the series has a 36-bit word size, so I gave it 36 as the model number as well: DAUUG 36. I won't talk in depth about DAUUG 36's architecture, because I've already written 200,000 words of documentation, and you can find this on the project website. It's called the DAUUG House. And just for those here today, if you visit talk.dauug.com, there's a 48-minute technical presentation. These are better sources for you than I can offer in a 20-minute talk. But here's a synopsis.

DAUUG 36 is what I term a solder-defined computer, meaning that none of the system's behavior is concealed in complex VLSI. Instead, the computer is a collection of basic logic gates that are assembled using simple tools at millimeter scale instead of nanometer scale. The outcome is that DAUUG 36 is a free and open-source computer all the way down to the logic gate level, allowing us the transparency of design and operation we desperately need if we're to practice security conscientiously.

Two key differences separate DAUUG 36 from the discrete-component computers of 50 years ago. First, decades of progress in surface-mount technology have drastically reduced size and cost while simplifying assembly. Second, the basic logic gate for DAUUG 36 isn't the humble NAND gate of years past, but a synchronous static RAM IC. It takes fewer than three dozen of these extremely simple gates to build the core logic for a very advanced computer. DAUUG 36 has a 36-bit word, paged virtual memory, preemptive multitasking, and a surprisingly capable instruction set of about 200 opcodes so far. It looks right now like early models will be able to execute 10 million instructions per second.
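For listeners who want a concrete picture of the SRAM-as-gate idea, here is a minimal Python sketch of my own, not DAUUG 36's actual netlist: preload a small synchronous SRAM with a truth table, and the address pins then act as logic inputs while the data pins deliver the outputs at each clock edge. The full-adder function and the bit widths are illustrative assumptions.

```python
# Sketch: a synchronous SRAM used as a programmable logic "gate".
# Preload a truth table; afterwards, the address lines are the logic
# inputs and the data lines are the registered outputs.

def full_adder_table():
    """Truth table for a 1-bit full adder; address bits are (a, b, carry_in)."""
    table = [0] * 8
    for addr in range(8):
        a, b, cin = (addr >> 2) & 1, (addr >> 1) & 1, addr & 1
        total = a + b + cin
        table[addr] = (total & 1) | (((total >> 1) & 1) << 1)  # bit0=sum, bit1=carry
    return table

class SynchronousSRAM:
    """A clocked lookup table: the output updates only on a clock edge."""
    def __init__(self, contents):
        self.contents = contents
        self.address = 0            # "address pins"
        self.output = 0             # "data pins", registered

    def clock(self):
        self.output = self.contents[self.address]

# Use one such "gate" serially to add two 2-bit numbers, one bit per cycle.
adder = SynchronousSRAM(full_adder_table())
a, b, carry, result = 0b10, 0b11, 0, 0
for bit in range(3):                # two data bits plus the final carry
    adder.address = (((a >> bit) & 1) << 2) | (((b >> bit) & 1) << 1) | carry
    adder.clock()                   # one clock edge per bit
    result |= (adder.output & 1) << bit
    carry = (adder.output >> 1) & 1

print(bin(result))                  # 0b101: 2 + 3 == 5
```

The point of the sketch is that the RAM's contents, not fixed wiring, define the logic function, which is why one very simple, inspectable part can stand in for many different gates.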
There are four reasons I expect DAUUG 36 to be more secure than other architectures. The first is radical simplicity. On purpose, DAUUG 36 has no cache memory, speculative execution, multiprocessing, stack variables, dynamic random-access memory, page faults, memory overcommitment, swapping, peripheral direct memory access, non-privileged indirect jumps, hardware virtualization, or VLSI complex logic. These 12 features are treacherous for even experts to implement, and each is associated with a large family of vulnerabilities. Yet traditional computers tend to have every last one, which is especially reckless for users who don't need a lot of computing power and use cases that don't require it.
The second reason is radical separation. DAUUG 36 partitions data, code, and stack memory onto separate chips in separate address spaces. It's electrically impossible for a code, data, or stack operation to leak into another space. Instruction pointers are electrically isolated from registers and data memory. Stack memory holds return addresses and CPU flags only, never buffers or variables. Every peripheral has its own bus and is unable to reach other devices' buses or memory. Privileged opcodes are trivially distinguishable from non-privileged ones. Programs can't even try to branch outside their own code. It's electrically impossible for the CPU to read, alter, or update its own firmware. Mainstream computers offer none of these eight simple guard rails.
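To make the separation idea concrete, here is a minimal Python sketch, my illustration rather than DAUUG 36's actual design, of a machine whose code, data, and stack live in physically distinct stores, where the stack can only ever hold return-address-and-flags pairs, and where branch targets are confined to code space. The opcode names are invented for the example.

```python
# Sketch: radical separation. Code, data, and stack are separate stores
# in separate address spaces; no operation moves values between them,
# and the stack accepts only (return_address, flags) pairs.

class SeparatedMachine:
    def __init__(self, code):
        self.code = list(code)   # code space: instructions only, never written
        self.data = [0] * 256    # data space: operands only
        self.stack = []          # stack space: (return_address, flags) only
        self.pc = 0
        self.flags = 0
        self.regs = [0] * 16

    def step(self):
        op, x, y = self.code[self.pc]
        self.pc += 1
        if op == "load":         # regs[x] <- data[y]; cannot read code space
            self.regs[x] = self.data[y]
        elif op == "store":      # data[y] <- regs[x]; cannot write code space
            self.data[y] = self.regs[x]
        elif op == "call":       # pushes return address and flags, nothing else
            if not 0 <= x < len(self.code):
                raise ValueError("branch target outside code space")
            self.stack.append((self.pc, self.flags))
            self.pc = x
        elif op == "ret":        # pops exactly one (address, flags) pair, so a
            self.pc, self.flags = self.stack.pop()  # buffer can never smash it
        elif op == "halt":
            return False
        return True

program = [("call", 2, 0), ("halt", 0, 0),
           ("store", 5, 7),     # subroutine: data[7] <- regs[5]
           ("ret", 0, 0)]
machine = SeparatedMachine(program)
while machine.step():
    pass
print(machine.data[7], machine.stack)  # 0 [] -- the stack never held user data
```

Because buffers and variables have nowhere to live on such a stack, the classic return-address-smashing attack has no mechanism to work with.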
My third reason is radical ownership. DAUUG 36 empowers end users to build their own CPUs, controllers, and minicomputers using only maker-scale assembly tools. All parts can be hand-soldered, and they remain visually and electrically inspectable after construction. There are no headaches from secret functionality, vendor lock-in, encrypted or closed-source firmware, license fees, use restrictions, copyleft, planned obsolescence, right-to-repair infringements, code signing, non-standard parts, hidden state, unattributable S-boxes, patents, or VLSI backdoors.

Reason four is radical courage. Sometimes the highest right is being right. Today, DAUUG 36 is incompatible with every binary executable you've heard of, every language compiler, every toolchain. It's incompatible with Rust's integer sizes, IEEE 754's floating-point format, ELF executable files, parts of C, some of POSIX, and traditional debuggers. DAUUG 36's entire programming model is different, offering a huge number of registers, no stack pointer, improved integer arithmetic semantics, and a refreshingly capable instruction set. And instead of having the hardware run an operating system, as most computers would, DAUUG 36 is designed to be run by its operating system, and that's a much safer chain of control.
My hope is that DAUUG 36 will have broader influence than just providing an alternative hardware platform. It will be the first time conscientious owners are offered the choice of a transparently functioning architecture, and hopefully even VLSI designers will respond competitively with open, guaranteed tape-outs for some of their products. Once again, my name is Marc Abel, and thank you all for attending the first IEEE conference on Secure and Trustworthy CyberInfrastructure for IoT and Microelectronics. It's time to step away from the microphone, but I'll be around the rest of the day.
You have been listening to Hacker Public Radio at HackerPublicRadio.org. Today's show was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is. Hosting for HPR has been kindly provided by AnHonestHost.com, the Internet Archive, and rsync.net. Unless otherwise stated, today's show is released under a Creative Commons Attribution 4.0 International license.