Episode: 309
Title: HPR0309: Compiling a linux kernel
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr0309/hpr0309.mp3
Transcribed: 2025-10-07 15:58:37
---
Deep Geek here, welcome to today's episode of Hacker Public Radio.
Today I'm going to give you a short episode telling my experience custom-compiling a Linux
kernel for my computer, which happens to be an AMD Opteron computer, a single-core
model.
Now, the reason I decided to do this was because how-to documents and podcasts
abound, such as ones right here at hackerpublicradio.org, as well as ones referred to there, and you
can find them easily.
So I thought it would be just too repetitious to tell you how to do it again.
I didn't want to bore you, but I thought instead you might like to hear my impressions
of what it was like to have gone through the process.
So here goes.
Now why did I decide to custom-compile a Linux kernel?
Well many reasons are often given about why one should custom-compile.
Two common reasons are to learn about Linux internals and to tailor the kernel for your
hardware.
I found that I was drawn to compiling because I want to get a quicker boot from my system.
I thought that I could build a monolithic kernel with all the drivers built in and avoid
a step called initrd loading.
What initrd is: right when you start the computer, your bootloader
quickly loads an initial RAM disk image into memory.
That RAM disk image is a cut-down version of the kernel, a micro Linux
system; it looks around and loads the drivers it needs to load, then it switches to the actual
kernel and root filesystem on your hard disk.
And I thought I could build all the drivers into the hard disk version of the kernel and skip that
step.
Now, without that step, if your disk needs a driver, the kernel can't switch to the disk, because there's
no driver loaded for the disk, and that's why that step is there.
So I thought I could shave that time off.
So what I found out was that I was mistaken about this, because my main computer's hardware
uses an NVIDIA SATA driver, and I could not work things out so that loading it as a
module could be avoided.
However, I did manage to make a kernel that offered me a feature that I would otherwise
have had to do without.
So I can say that I will be back to doing this again sometime, as I like the feel of the
feature I have discovered.
So what was the learning experience like?
Well, to put it succinctly, it was frustrating.
I know, and you want more than that.
Well, the first thing I decided to do was to use the method known as make-kpkg and
have a Debian tool do the work for me.
I wanted to concentrate on the actual kernel, not the process, so I loaded up the tools
and followed a handy how-to podcast I found on using this Debian tool.
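For listeners who want the shape of that Debian route, here is a hedged sketch of the make-kpkg workflow as I understand the kernel-package tool; the revision string is made up, and the exact package names vary by release, so treat this as a recipe outline rather than exact commands:

```shell
# Sketch of the Debian kernel-package route (run from the kernel source tree)
apt-get install kernel-package fakeroot libncurses-dev   # build tools
cd /usr/src/linux
make xconfig                 # adjust the configuration (the X-based tool)
make-kpkg clean              # start from a clean tree
fakeroot make-kpkg --initrd --revision=custom.1.0 kernel_image
dpkg -i ../linux-image-*custom.1.0*.deb   # install the resulting package
```

The appeal of this route is that the result is a normal Debian package, so installing and removing the custom kernel works like any other package.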
I began with the X-based configuration tool and found many interesting things in there.
But after finding my way around, I realized that the logical thing for me to do was to
take the configuration from my distribution stock kernel and modify it.
And that was as easy as copying a file from the slash boot directory to the directory
I was working in.
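Seeding the build from the stock configuration can be sketched like this; the block below uses temp directories as stand-ins for /boot and the source tree, and a made-up version string, so it can be tried anywhere (on a real system it is copying /boot/config-* into the source tree as .config):

```shell
# Stand-ins for /boot and the kernel source tree (illustrative paths only)
boot=$(mktemp -d); src=$(mktemp -d)
# A distro ships its kernel's config alongside the image in /boot
printf 'CONFIG_PREEMPT_VOLUNTARY=y\nCONFIG_CC_OPTIMIZE_FOR_SIZE=y\n' \
  > "$boot/config-2.6.26-1-686"
# Seeding a custom build is just copying that file in as .config
cp "$boot"/config-* "$src/.config"
grep -c '=y$' "$src/.config"    # the distro's answers become the baseline
```

From there, every option you change is a deliberate departure from what the distribution already knows works on your hardware.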
I started by opting out of the generic processor setting and specifying the processor I know I own.
I then deselected the "optimize for size" option, as I did not want to sacrifice any
speed for compactness.
Then I discovered all the things in there I did not need.
I took them out, not only for a better kernel size, but also because each one I
removed cut down the compile time.
You see, I found that a stock kernel could take about an hour to compile.
Trimming the stuff I did not need cut that down to something like 20 minutes per compile.
It took me about 10 compiles to get a working system going.
So you can see that the time saved was important.
So I took out the things that I didn't need, like dial-up support and WAN support, as
I don't use dial-up anymore, and will probably never have a T3 line into my own computer.
Then I took out support for all the sound cards not on my system, and kept only
the SATA disk driver for my specific chipset.
It was relatively easy, as some of the help entries named the module that would be loaded
if the option was enabled, and an xterm running lsmod, the list-modules command, became my best friend
for a while: with lsmod I could see what was actually loaded, find that module's option,
enable it, and deselect the other ones.
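That lsmod cross-checking trick can be sketched on canned output (the module names below are illustrative, not from the episode); on a live system you would pipe the real lsmod instead:

```shell
# Canned lsmod output (first line is the header, as on a real system)
lsmod_output='Module                  Size  Used by
sata_nv                28672  2
snd_hda_intel          57344  3
forcedeth              61440  0'
# Strip the header and keep just the module names: these are the drivers
# actually in use, i.e. the config options worth keeping enabled
echo "$lsmod_output" | awk 'NR > 1 { print $1 }'
```

Anything not on that list is a candidate for deselection.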
All of these changes resulted in a smaller kernel,
as well as a smaller initrd file. However, there was no speed enhancement I could find
for doing this; there also was no slowdown in speed, either.
I did, however, find a feature that did cause a speed-up, and that is a feature that allows
the kernel to voluntarily give up processor control more often while it is under load.
The X-based configuration tool did mark this as being for desktop environments, and I guess
it is because Debian is a server as well as a desktop, as well as a development platform,
that they did not have this feature on by default.
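The feature being described sounds like voluntary kernel preemption, the CONFIG_PREEMPT_VOLUNTARY option, which the config tools label for desktop use; that identification is my inference, not stated in the episode. Checking what a given .config chose can be sketched as (the file contents below are a made-up example):

```shell
cfg=$(mktemp)
# A server-leaning default vs. the desktop-leaning choice (illustrative file)
printf '# CONFIG_PREEMPT_NONE is not set\nCONFIG_PREEMPT_VOLUNTARY=y\n' > "$cfg"
# grep the preemption model out of the config
grep '^CONFIG_PREEMPT_VOLUNTARY' "$cfg"
```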
Now, while I did not notice any raw speed-ups, I did notice that during my next backup,
which I have automatically scheduled under cron, the system was more responsive.
That takes a little explaining.
You see, speed is how fast things get done.
Nothing was faster.
However, while I have a backup running, I tend to launch web browser windows, open up
audio streams from radio stations, check e-mails, stuff like that.
Well, when the system was under the load of a full backup, those would launch quite slowly,
and typing would take longer to show up on the screen.
But with the voluntary give-up feature, things would launch just as quickly as if the system weren't
under load, and the typing was always responsive.
So having done this, what is my conclusion?
Should everybody compile?
I don't think everybody should try to run a custom kernel.
Giving up your favorite distribution's security updates to the kernel package is a huge trade-off.
This is different if you make a significant gain in speed, get a feature you really need,
or turn off a feature you don't want.
Otherwise, the educational benefit, just being able to know that you have done it, may be what makes
you want to do this.
That concludes today's episode of Hacker Public Radio.
I hope you enjoyed it, and I hope you'll tune in tomorrow for yet something completely
different.
Thank you.