Episode: 3249
Title: HPR3249: Linux Inlaws S01E21: The Big Linux Inlaws Peep Show
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3249/hpr3249.mp3
Transcribed: 2025-10-24 19:42:32
---
This is Hacker Public Radio episode 3249 for Thursday, the 14th of January 2021.
Today's show is entitled Linux Inlaws S01E21: The Big Linux Inlaws Peep Show, and is part of the series Linux Inlaws.
It is hosted by monochromec, is about 53 minutes long, and carries an explicit flag.
The summary is: the two chaps go the full monty and reveal it all.
This episode of HPR is brought to you by archive.org.
Support universal access to all knowledge by heading over to archive.org forward slash donate.
This is Linux Inlaws, a podcast on topics around free and open source software, any associated contraband, communism, the revolution in general, and whatever else fancies your tickle.
Please note that this and other episodes may contain strong language, offensive humor, and other certainly not politically correct language.
You have been warned.
Our parents insisted on this disclaimer.
Happy mom?
Thus the content is not suitable for consumption in the workplace, especially when played back on a speaker in an open-plan office or similar environments,
by any minors under the age of 35, or any pets, including fluffy little killer bunnies, your trusted guide dog unless on speed, and cute T-Rexes or other associated dinosaurs.
Due to shortcomings with Martin's audio configuration, some parts of the podcast may be of suboptimal sound quality.
Full marks have to go to the Inlaws' post-production crew for salvaging what could be salvaged, but unfortunately the sound quality you've come to expect from Linux Inlaws is not up to scratch in this episode, for which the Inlaws would like to apologize.
This is Linux Inlaws Season 1, Episode 21:
The Big Linux Inlaws Peep Show. Martin, how are things?
Hey, good morning, Chris, things are fine and dandy and cold and snowy and even in these all of last year.
Even in these corona-ridden times.
Yeah, nothing has changed.
From my perspective, I've just been in it for the last year.
Right.
But do you have some vacation coming up now?
Possibly possibly.
Thank you, but from this across.
It's yet another announcement day, so let's see.
Is it?
What's going to be announced?
Lizzy stepping down finally?
Boris doing the same, that's all I think?
The usual corona briefing by the PM.
Ah, every time they make some changes.
I see.
But this is not the subject of today's episode, because today's episode, no, it's not, Martin,
in contrast to popular belief.
No, it's not, Martin.
This is not a corona show; as a matter of fact, it's called the Linux Inlaws Peep Show for a reason, because today's subject,
yes, is actually how to take a look at your system, in terms of all the stuff you always wanted to know
about debugging, tracing and monitoring Linux.
Well, not monitoring; it focuses rather on tracing and debugging.
But were always afraid to ask. Right, Martin?
Why don't we get started with?
Yeah, I have a question for you.
Oh, but oh, me.
It's why don't we get started?
Getting started is a good idea.
Excellent.
So with that, a hearty welcome to Linux Inlaws Season 1, Episode 21.
Yes, Martin, you had a question, sorry.
Yeah, so when would one want to do tracing?
Well, when one wants to find something out about the system.
Okay, the big picture.
Tracing is essentially the idea of taking a look at...
no, that's not proper English.
Tracing is essentially following the execution of a program, right,
from a high-level aspect, whereas debugging basically takes that concept one step further:
you get the concept of breakpoints, you can inspect variables, you can set watchpoints and all the rest of it.
Of course, Linux especially has quite a significant toolchain around this.
Full disclosure, people: we won't go through the low-level technical
details, because that's exactly what the man pages are for.
The purpose of this episode is to give you a kind of short overview of what you have
at your disposal, focusing on Linux.
Some of the stuff is also available on other systems like OS X and so forth.
So we should be able to keep this episode to the absolute minimum,
the usual two and a half hours, I think.
I think it may run a bit over on this one.
So Martin, why don't you get us started, basically, with debuggers?
Debuggers. Well, so, I mean, it depends what kind of programming you do, right?
Okay, old school, back to the beginning.
Exactly, old-school C. We are working on the premise here that people are actually writing low-level,
well, not low-level, yes, not programming in Python or Java.
Correct, correct, correct.
And why don't we finally take a step back and take a look at where it all started,
with an operating system called Unix, which of course was, at the time, back
in the 70s, closed source, and I think originally offered by a company, or by a bunch of people
working at a company, called AT&T. And they had this great idea of doing this actually in a low-level
programming language called C, which, funny enough, after 50 years in the making, is still
around, because if you take a look at GitHub, you will find out that quite a few projects,
I don't know the exact percentage, but my guess would be that at least a third, if not half,
maybe somewhere in between, of the code out on GitHub and similar platforms is still in C.
Or some derivative, like C++ and stuff. What about the Linux kernel itself, how much of it is in C?
I suppose, even with the advent of Rust crates and so forth for driver development,
details in the show notes, I would reckon, my guess would be around, what, 96, 97%.
The Linux kernel is structured, of course, into a machine-dependent part, where the low-level
stuff really lives, like the hardware exception handlers, and then the rest is basically written
in C. There is, in contrast, probably not a single piece of C++ code in the Linux
kernel. Linus was straightforward and quite clear about this in, I think, the late 90s: no
portion of the kernel would ever be written in C++, for a number of reasons you'll find
in the show notes. Essentially, he didn't see the advantages, and
I think that's still valid today, of writing portions of the kernel in C++. So the
kernel itself is written in C and machine-dependent assembly, as such. And of course, the first abstraction
layer running on top of the kernel is something called glibc, also known as libc on other
operating systems. Essentially, it's the layer that talks, exactly, that talks to the kernel,
like opening files, allocating memory and all the rest of it.
So, let's go back to: how would you debug a C program?
No, why? Software. Funny enough, because, you see, well, software is written by humans,
right? Some of it is, yeah. And funny enough, these people make mistakes when they write software.
And we should do away with them. If you want to sponsor a hit, feel free.
Yes, so why don't you get in touch?
The addresses, yes. The addresses: sponsors@linuxinlaws.eu.
No, jokes aside. So, of course, and that probably includes you as well, Martin:
if you write a program, it's never bug-free from day one. That is the reason why you use GDB,
LLDB and friends: to ensure that you actually debug, and the clue is actually in the name,
that you actually get rid of the bugs in the program. And this is exactly what you use debuggers
for. Yes, Martin, there you go. So getting up early has already paid off for Martin.
Never mind, liquid alcoholic refreshments are not to be had this early. Anyway.
It's only nine here, okay. Yeah. So, of course, two famous debuggers come to mind:
something called GDB, which is the GNU debugger, and the latest addition, I think, to that
game is actually something called LLDB, from the LLVM project. LLVM, of course, standing for
low-level virtual machine. Essentially it's, what's the word I'm looking for, not a counter-project,
but rather a counterpart of sorts to the GNU Compiler Collection, as in, yes, as in
a compiler collection with various front ends, and Clang would be one of them for C and C++,
that basically allows the transformation of C code into machine instructions. And of course,
they also have their own debugger, called LLDB, which is also, by the way, used in the
Rust toolchain, because essentially, if you take a look at the standard
Rust compiler implementation, it is actually based on LLVM. So the debugger of choice for
serious Rust programming would of course be LLDB in this case. And the usual functionality of
course is present in both debugger types, or debugger implementations as it were. Like
breakpoints, basically, where you tell the program where to stop:
when you hit a breakpoint, the program execution stops and then you can inspect variables,
registers and so forth. Yep, okay, good stuff. So that's really talking about the debugging side
of things, but okay, you can carry on a bit about that in a minute. But there are also other purposes
why one might want to do inspections of running code; I assume, with your security background,
you're worried to ensure that there's a minimal attack surface, as in, if you fuzz a program,
basically, you want to know why it's breaking and how it's breaking. Sorry, fuzzing a program,
of course, means pumping input values into a piece of software that have the potential to
make it break, because, for example, input validation is missing. I.e., you don't take a look at
the data that is fed into a program, but rather you just try to process it based on your algorithms.
Whereas what you normally would do, in order to ensure the sanity of a piece of software,
or rather of the data entering that code, is actually take a look at
the input data in terms of validating it: making sure that it meets the formal criteria that
you have hopefully specified, and also making sure that the data makes sense in terms of
being valid. And of course you can do that. And you can also do that with a debugger.
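To make the fuzzing idea concrete, here is a minimal sketch using AFL as an illustrative fuzzer; the episode doesn't name one, and target.c, testcases/ and findings/ are placeholder names:

$ afl-gcc -o target target.c                        # build the program with AFL instrumentation
$ mkdir testcases && echo hello > testcases/seed    # provide at least one seed input
$ afl-fuzz -i testcases -o findings -- ./target @@  # @@ is replaced by each mutated input file
# Inputs that crash the target are collected under the output directory and can
# then be replayed under GDB to see why and how the program breaks.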
So you're testing someone else's programming skills, abilities or whatever,
whilst varying their input variables, I guess. Yes. And one important thing that both debuggers actually
support is that you can attach to an already running program, which comes in handy,
for example, if you are debugging daemons and server pieces of software. Assuming, of course,
that you have the necessary privileges; needless to say, being an ordinary user, attaching to a server that
is running as root and then trying to inspect its variables and stuff might be very tricky,
for a reason, of course. No, I think tricky is probably not the right word here.
Well, there's a reason why Linux explicitly forbids this. So if you want to debug an already
running server, particularly one running as root, you have to be root yourself.
Goes without saying. Sorry, I interrupted your debugging.
No, that was pretty much it. I mean, the usual, and you'll see this actually with any debugger,
not just GDB or LLDB: you have the functionality at your disposal like inspecting
variables, modifying variables, setting breakpoints, setting watchpoints.
The difference, of course, between a breakpoint and a watchpoint
is that a breakpoint always interrupts the flow of execution
of a program, whereas a watchpoint simply prints out the contents of, for example, a variable,
but continues the program execution. It's handy basically when you just want to take a look at
how a variable is behaving while you execute a program. That's the only difference.
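As a minimal sketch of what that looks like in a GDB session; demo, counter and mydaemon are hypothetical names, not anything from the episode:

$ gdb ./demo
(gdb) break main                    # breakpoint: stop as soon as main() is reached
(gdb) run
(gdb) print argc                    # inspect a variable at the stop
(gdb) watch counter                 # watchpoint: report whenever counter changes
(gdb) continue
$ sudo gdb -p $(pidof mydaemon)     # attach to an already running process; root-owned daemons need root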
For the hipster Java programmers amongst us who want to venture into this area:
are there any plugins for, say, their favorite IDEs to run GDB? Because the last time I used
it, like 20 years ago, it was all command line. I thought you were the Java fanboy among us.
No, no, no, that's changed.
Of course you do have, well, the usual, what's the word I'm looking for,
integrated development environments like Eclipse and so forth, which have their own debuggers
built into the toolchain. Even, oh, I think OpenJDK supports something called jdb, where you can,
similar to GDB or LLDB, basically inspect the compiled program running on
a JVM at run time. So the functionality is comparable to other debuggers.
Yes, but what if you wanted to do the same for GDB and use your favorite GUI for this?
Well, of course, the normal IDEs, like CDT, for example, for Eclipse, which is essentially
a development environment for C and C++ programmers running on top of Eclipse, would access
the existing debuggers like GDB and LLDB from a GUI perspective, as a graphical user interface.
So you would have full integration of said debuggers in your IDE.
Similar to PyCharm, yes, similar to PyCharm, of course, where you can invoke the standard
Python debugger, which is called pdb, from your ordinary GUI. That means you have all the
semantics, all the niceties, all the goodies in your GUI, in your IDE; you don't have to
resort to the command line. But seriously, who wants to use IDEs for debugging programs, right?
We leave that to the hipster generation.
Yes, we are quite old, so yes, we have grown up with the command line.
And funny enough, if I'm doing work on embedded systems, which I do quite a lot, as a matter of
fact, the only tools at my disposal are actually a C compiler in the shape of GCC, or something else,
and then maybe Emacs, and of course GDB, or some other command-line-oriented debugger.
The beauty is, even if you SSH into an embedded system where you don't have an IDE at your
disposal, you simply can do this on the command line. Yeah, the command line is usually
always available, which is rather handy. And that hasn't really changed for the last 50-plus years.
No, this fancy GUI stuff, remember when it came out, what was that?
The 90s? Turbo Pascal came out in the 80s, actually, and that I think was the first ASCII,
or terminal-based, old-school IDE, for something called Turbo Pascal, because this is what I used when
I was at uni; that goes back to the mid-80s. You too? I went to uni as well, okay. You didn't?
Yeah, just a normal one, let's say. Actually, we started with Algol, we did. Algol.
Anyway, never mind, this is a subject for a different episode.
But I didn't know that your university days date back to the 60s.
I thought we were kind of of the same age. My university was one that liked to go on
principles, not on pennies. So, people, I'm sure the university is different these days,
but in the old days it was quite laid back when it came to fancy newfangled stuff.
Yeah, I think they had a fire a couple of years back, so the PDP-11s probably burned down.
Probably upgraded.
Excellent. So what are they running these days? MicroVAXes.
I'd even say, I even saw some networked PCs, which were all the rage in the 80s.
What was that called, NetWare or something, right? From Novell.
Oh, my God, yes. There was another apocalyptic system as well, but I can't remember the name.
If I feel like it, you'll have the links in the show notes.
I don't think there'll be many takers for that one, I'm worried.
Right, well, anyway, back to debugging.
Yes. So the idea is essentially you take a compiled code base and then you take a look
at what's happening in that code base. And of course, the details are a little bit
programming-language specific, but that's the beauty of GDB and LLDB: you essentially tell
the compiler to generate debugging information, and these debuggers know what to do with that.
Okay, so say, for example, I wanted to do some debugging on ALSA.
So I could just hook that up and set a breakpoint, see what's going on.
ALSA. So you mean the sound system? Well, some of the stuff basically is running in the kernel,
like the sound drivers themselves, but needless to say, of course, you can debug any libraries
that are part of user land, goes without saying. So if your program interfaces directly with ALSA,
if it's still kind of the legacy stuff, not talking to Pulse, because these days
you would normally use an audio system like Pulse or Jack if you want to do audio
processing, because ALSA basically sits underneath, or is abstracted by those, then
yes, of course, you can talk directly to the ALSA API. And given the fact that this is
implemented as a library, you can of course use GDB or LLDB for this, goes without saying.
Some word of advice, though, people, especially if you're talking C or any compiled language:
make sure that you use the corresponding optimization settings, because if you use the
wrong optimization settings, especially if you tell the compiler to optimize the hell out of
the generated machine code, you'll have funky stuff going on in your debugger, like variables
not existing anymore, expressions basically going away, that sort of thing. So what I normally do
is basically I either omit optimization completely, or I tell the compiler to use only very basic
optimization; in GCC that would be -O0, which essentially tells the
compiler not to bother apart from the very basic optimizations, because the more you optimize,
the stranger the generated machine code will look to an ordinary debugger.
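As a rough sketch of that advice, assuming a hypothetical hello.c, the difference is only in the GCC flags:

$ gcc -g -O0 -o hello hello.c   # -g emits debug info, -O0 disables optimisation: easiest to step through
$ gcc -g -Og -o hello hello.c   # -Og optimises only in ways that keep the code debuggable
$ gcc -g -O2 -o hello hello.c   # heavy optimisation: variables may vanish and lines get reordered under GDB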
Handy tips. Right, so does that bring us seamlessly onto tracing yet?
Well, we can. There's nothing stopping us, right?
Okay, the next breakpoint, yes, would be indeed tracing.
Okay. Two general approaches come to mind: something called strace and ltrace,
the difference of course being that strace allows you to trace system calls,
as in anything that crosses the boundary from user land to the kernel and back;
this is where strace comes into play.
So every function that crosses into the kernel, I guess.
Anything that glibc, as a matter of fact, passes on to the kernel, yes.
And ltrace, of course, as the, and the hint is actually the letter L here,
allows you to trace library calls. So anything that passes between your program and
any associated libraries that you link to can be checked out with ltrace.
And the idea is that ltrace is, compared to strace, rather a high-level tracing tool
that allows you to take a look at the flow of control between the different components,
the different libraries that you use, whereas strace actually allows you to take a look at what's
passed to the kernel and what comes back. This is the main difference. Both approaches rely on
something called ptrace, which is essentially a kernel capability, exactly,
that allows a process to attach to another process, something that GDB also uses when you want to
inspect an already running piece of software. It's a special capability. Segue into capabilities:
capabilities, essentially, are, in Linux at least, special rights that a thread has, like tracing.
If a thread is lacking this capability, it cannot attach to a different process for the purpose
of tracing it. Other capabilities, for example, include the possibility to open a port
with a port number under 1024, even if you're not running as root. So, meaning, if a
particular executable has the capability set, you can open port 700 without being root. These are
just two examples of capabilities found in Linux and other systems. Details are of course in the
show notes, and just to conclude the segue now: these capabilities, on a file level, are normally
included in the extended attributes of a file. Again, the details you find in the show notes.
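A small sketch of the port example just mentioned; mydaemon is a placeholder binary and the exact output format differs between libcap versions:

$ sudo setcap 'cap_net_bind_service=+ep' ./mydaemon   # allow binding ports below 1024 without running as root
$ getcap ./mydaemon                                   # show the capability that was just set
$ getfattr -m security.capability -d ./mydaemon       # the raw value is stored in the security.capability extended attribute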
Of course, it does depend on the file system supporting these extended attributes.
Not every file system supports these, but in the Linux world at least the ext family of
file systems comes to mind, like ext2, ext3, ext4, which support these out of the box. Needless to say,
newer file systems like ReiserFS, XFS, Btrfs and all the rest of them more often than
not have unlimited support in terms of the number of bytes that can comprise an extended attribute;
it's not limited, that's what I'm saying. ext2, 3 and 4 limit the size of extended file
attributes, whereas the newer file systems don't. But I've yet to see a software system
that runs into capacity issues when attaching extended file attributes to a file.
These limitations are rather theoretical in nature. Anyway, coming back to strace especially:
strace is pretty handy if you want to know what's happening on a system level, i.e., for example,
what files your program opens when it's executed. With strace you can follow that up, and there are other
tools for that as well. But what about the performance aspect of your code, and,
you know, different things, measuring various
latencies, frequency of calls and things like that? You can do that too, because strace simply
gives you a listing of the system calls that your program, or the processes of it, generates. Needless to say,
there's a performance penalty that is being paid, but if you redirect the strace output
to a separate file, there's a command-line switch for that called -o, it goes directly
into a file and is not being displayed on the console, meaning yes, you have a performance impact,
but it is not that big. strace comes in handy especially if you want to take a look at what's
happening inside a code base, inside a piece of software, where you don't necessarily have access to the source code.
Because normally, of course, you could inspect the source code, but if that source code for
whatever reason is not available and you still want to see what's going on, strace is your
friend.
friend. Yeah, and it also, I mean, the difference between looking at source code and the running
program is obviously that you can get your number of calls to various functions and what a time
is. Also nice feature of S-trace is actually you can generate the system calls or the list of
system calls on a per thread basis, command line switch for that is actually minus ff, that means
follow each and every thread while it is executing, and dump the system calls to separate files,
details on the montage. That means especially if you have a heavily multithreader program,
you can exactly trace what's going on when and where with regards to system calls. Now the output
is quite comprehensive, needless to say, depending on the amount of system calls that a particular
thread makes, but it's quite handy, because as I said, especially if you pump the output of
S-trace into a file, you can then use your ordinary say tools like AWK, FGRAP, or the general
GRAP family, to zip through the output and to make sure that you get the information that you
want. For example, if you just interested in the files that a public program is opening,
you just wrap for the open statements in the S-trace output, if that makes sense.
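A short sketch of those invocations, with ./myprog standing in for whatever you are tracing:

$ strace -o trace.log ./myprog          # -o writes the system-call listing to a file instead of the console
$ strace -ff -o trace ./myprog          # -ff follows every thread/process, one file each, named trace.<pid>
$ grep openat trace.*                   # which files were opened where (modern libc uses openat rather than open)
$ strace -e trace=open,openat ./myprog  # or filter at capture time
$ ltrace -o libcalls.log ./myprog       # the same idea one level up: library calls instead of system calls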
Yeah, it makes perfect sense. Yeah, so if you have a particular thing that is a closed
piece of software, then it comes in rather handy to see what it's actually doing.
Exactly. So, going to the extreme, you can regard strace as a poor man's
first step to reverse engineering a piece of software, if you want to take a close look
at what's happening inside. If you're so inclined. You can say, kids... Exactly, you can say:
kids, don't try this at home unless you're a trained professional.
That has worked quite a few times in the past, yes.
strace is really easy to use, it's just, yeah, you would only use it, well, I would want to use it when
dealing with proprietary software, too.
When did you last use strace?
Probably about 10 years ago, yeah. Okay, not my daily activity.
Full disclosure: Martin is now working for a company that deals with black magic, right?
Yeah. Pretty much.
And it's not in the business of ripping off Postgres code, no, certainly not.
With an open source project like that, you don't need it either, really.
Well, it depends on what you want to do, right?
Well, the source code is available, so would you? Yes.
Trace it. Indeed. Okay.
So not that useful there, though.
Another joke held back for us. That's okay, Doris.
Full disclosure, people, for those few listeners who haven't listened to the
back catalogue: Martin and myself used to work for Redis Labs. And of course, Redis Labs,
being the home of Redis, this kind of in-memory NoSQL database thing.
Okay. I think that's a good word, yes.
Thing, yes, indeed.
That, this thing, brings us nicely, I think, to the next component,
which is of course something called the Berkeley Packet Filter.
Martin, do you know what BPF is?
Yeah, it's got to see past, right? The extended data, the moment, extended data,
the government want to expand that data, see this one.
There's not people reading about BPF.
Any idea why?
Is that like it?
Yes.
Where you can feel that.
Martin, Martin, you're being, yeah, Martin, you're being the very old guy here.
This DTrace thing, that goes back to the olden Sun days, yes.
Yeah, that is too long ago.
I think it was first implemented in Solaris, right?
Or even SunOS.
Yes, BPF.
Have you ever used DTrace?
Martin.
Well, yes, but that would have been another 30.
About what, 30 years ago, yes.
If memory serves, if the recollection of things is still present: what can you remember about
DTrace, in contrast to strace and ltrace and all the rest of these things?
Well, it's more like, it was more of a, that would have been used in an academic setting,
whereas strace and stuff had some practical use.
I mean, DTrace really allowed you to instrument the kernel, even to the extent that
you could use expressions, for example, in contrast to strace,
which doesn't allow this. You could use expressions to filter, for example, the monitoring
functionality, the data being passed to the kernel and coming back.
So, in contrast to strace, which simply can capture all the files that are opened
and all the rest of them, DTrace would allow you to say:
now look, I'm just interested in files under this particular directory,
or I just want to know the files that are opened by a child process of the main program, or the
rest of the system. So, the idea behind DTrace was to give you a programmability level that strace
simply doesn't support. And there have been quite a few attempts made to port DTrace
to Linux. If I'm not completely mistaken, most of them failed, but then the Berkeley Packet
Filter came to the rescue. The Berkeley Packet Filter, of course, being the next,
sorry, being the new, or next-generation, implementation of something called iptables,
as in the low-level firewall built into the kernel. And originally, the hint is actually
in the name, Berkeley Packet Filter: the original implementation was geared at filtering packets
in the kernel, as part of the network stack. But, and this is where the extended bit comes in,
essentially BPF, or eBPF these days, is a low-level virtual machine running inside the kernel,
as such with its own instruction set, and LLVM and GCC support this
instruction set. So you can simply write a C-like program that targets that virtual machine,
or rather is executed inside the kernel on this virtual machine. And that exactly allows you to
do similar things to what only DTrace was able to do before, like instrumenting stuff,
or the monitoring functionality, on top of the kernel, if that makes sense.
Yeah, before, well, you'd have to use various kernel modules and stuff to get at that information,
and I think it makes that a lot easier. Yes, and of course, with the corresponding
toolchain, details as usual in the show notes, you have various predefined
kernel probes at your disposal, but of course you can modify them, you can write your own stuff.
So, for example, there was, at some stage, on one of my servers, a somewhat misconfigured,
I can't even remember what it was, I think Mailman instance that generated some
spurious lock directory in /var, and for the hell of it, I couldn't figure out which component
of the Mailman 3 installation that was. So I basically did a kernel probe that would capture
the specific path of this lock file under /var, and would then capture the PID, so
during the course of a couple of weeks I would know exactly which process
created this file and what the executable attached to this PID was, and that, of course,
would give me a hint of which component of the Mailman 3 installation was actually in charge of
creating this particular spurious directory. Which comes in handy.
Yeah, that sounds brilliant.
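A probe along those lines can be sketched today with bpftrace, one of the eBPF front ends alongside BCC (bpftrace itself isn't named in the episode); the /var/lock path simply echoes the Mailman anecdote:

$ sudo bpftrace -e 'tracepoint:syscalls:sys_enter_openat
    /strncmp(str(args->filename), "/var/lock", 9) == 0/
    { printf("pid %d (%s) opened %s\n", pid, comm, str(args->filename)); }'
# Prints the PID, executable name and path every time anything on the whole
# system opens something under /var/lock, no matter which program it is.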
Yeah, so, when have you last used it?
Well, interesting. If you look at the, okay, if you have a piece of software and you want to
trace it, you use strace and friends, but you're saying you can use eBPF as
well. But normally, you see, yeah. Well, strace and ltrace are bound to the execution of a particular
executable, and possibly its child processes, but BPF, or rather eBPF,
the extended Berkeley Packet Filter, and the tooling surrounding it, basically
allows you to monitor a system completely, in the sense that it's not bound to an executable;
the kernel probes allow you basically to specify your conditions and the rest is up to you.
So it gives you that level of flexibility that strace simply doesn't.
Right, okay. I haven't used eBPF, but from what I understood, it's more that you write your own
code that you then run stats on. Exactly. So the idea is basically to use
a C-like language, which is pretty much, pretty close to C, to write your kernel probes
and the rest of it. There are a huge number of examples coming with tools like BCC,
as in the BPF Compiler Collection, I think it's called.
Details are again in the show notes. So you basically simply take a look at what's out there,
and simply modify that to your heart's content. And of course, you're not bound
to C as the native language; there's also a Python binding available
that is supported by Python 2 and Python 3 right out of the box.
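BCC's bundled examples can be run straight from the shell once the distribution package is installed; both paths below are assumptions that vary by distro (upstream installs under /usr/share/bcc/tools, while Debian and Ubuntu ship the same tools with a -bpfcc suffix):

$ sudo /usr/share/bcc/tools/opensnoop   # trace open() calls system-wide, upstream-style install path
$ sudo opensnoop-bpfcc -p 1234          # Debian/Ubuntu naming; -p restricts the trace to one PID
$ ls /usr/share/bcc/tools               # browse the other bundled probes and modify them to your heart's content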
Okay, so say you want to start adding to the Linux kernel, what tools would we use?
First, let's start with your brain, I reckon.
So Martin will never be a proper kernel programmer, I suppose.
Well, I mean, yes. You simply go to kernel.org and pull down the existing source,
and then you take a look at the particular subsystem that you want to extend,
or add a subsystem, or whatever, and you do it of course in C.
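For completeness, a sketch of that first step; the URL is the mainline torvalds tree and a full clone is several gigabytes:

$ git clone https://git.kernel.org/pub/scm/linux/kernel/git/torvalds/linux.git
$ cd linux
$ make menuconfig     # pick the subsystems and options you care about
$ make -j$(nproc)     # build the kernel, and be prepared to wait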
Yeah, the thing is really that, you know, okay, fine, that's your first step, yes, you need to start
on some programs and things, but to understand the pieces of the kernel and how they run,
this is where today's subject comes in, right, with the various abilities to trace.
Now, kernel tracing is a, yeah, well, I wouldn't say a completely different matter,
but there are also separate tools available for in-kernel tracing.
The idea of eBPF and friends is rather to give you a perspective of
what's happening between user land and the kernel. So it's geared more towards the
application program, if you will. Yeah, I understand. But if you want to take that a step further,
and say, right, my program is, I don't know, doing these system calls,
how does that execute in the kernel? So you may want to go one step further.
Well, of course, you start with strace, and then, for the more complex stuff,
there's always eBPF at your disposal on Linux, or tools like DTrace on other operating systems.
Yeah. Okay. And this is, what, almost an hour done, and how much remaining?
Maybe, maybe I got that wrong, I don't know. Anything else, basically, that we should cover
before we close off the show? Yeah, I think, I mean, as a not-so-knowledgeable
kernel user, or Linux user, then strace is the tool to use if you want to find out
what any piece of software is doing, right? That's the summary. And if you want to go and develop in
C, or programs like that, then use your eBPFs and tools like that.
Any other summaries that you would like to add to this?
Well, GDB and LLDB would be, I reckon, kind of your first go-to tools if you're talking about
something that you wrote yourself. Yes, or if it's somebody else that's written it.
But especially eBPF and friends come in handy if you want to take a look at
modern operating systems, which can be pretty complex with regards to mail daemons,
printer subsystems and all the rest of it. This is basically where eBPF and friends really
shine: if you want to know what's happening, when and where, outside the ecosystem of your own,
or somebody else's, developed software, in terms of a complete system, this is basically where these
tools really shine. Okay, yeah, that's another handy overview. And yeah, as you said, if you are,
like yourself, writing software for embedded systems, then you're going to end up having to
use these things. No, absolutely. And of course, well, eBPF has been available, I think, since
kernel 3-point-something or 4-point-something. So it's been around for a while.
Okay, excellent. No, that was a very nice insight into how to
analyse what runs and how. Yes, and before I forget, Martin, now it's probably the time to
again pick up the boxes and the anti-boxes. Okay, cool. So Martin, what's your box of the week?
Yes. Okay, so Martin's box of the week is a
book. Okay.
And it's called Weapons of Math Destruction.
Weapons of, weapons of math destruction. Indeed, indeed. Like equations and stuff and functions and
whatever. Math is short for mathematics. Yes. Okay, good. So I just wonder, why do you want to
destruct math? No, no, no, no. So this is a book that is really talking about how math is being
misused in algorithms by the likes of Google, Amazon, and other companies out there. And
it seems a good read. Okay. It's in the show notes. With statistics, you can prove anything you want.
Can you? I wouldn't know about that. That's your kind of stuff, no.
So yeah, that's mine. What's your box of the week? Box of the week.
Probably, I think we never mentioned this, but Audacity comes pretty close, because this is
the software that we use to produce the show. Okay. And you can say whatever you want about
Audacity, but it has been, and will be probably for the time that we make this broadcast,
a very handy tool. Also, the post-production department just loves it, never mind how many
team members Martin actually fires and re-recruits, but that's probably beside the point.
If they all use Audacity, yes. So that will be my box of the week. And your anti-box, Martin?
I think I know what your anti-box is.
It does, it does. Yes, funny that you mention that. Yeah, my anti-box of the week would be
Jitsi, and particularly the Jitsi Meet maintainers. If you're listening: please, please, please, please,
please make sure that your documentation is up to scratch, especially when it comes down to
the more advanced configuration options of the videobridge and the Meet and all
the rest of it. It's not great. Full disclosure, people: I'm just in the
process, or we are just in the process, of migrating a BigBlueButton instance to something called Jitsi
Meet. I thought it was straightforward, especially with the new version 2 that
apparently saw the light of day earlier this year. We're recording this at the end of 2020,
but when we started this podcast almost a year ago, we, or I, took a look at what's
out there: OpenMeetings, Jitsi, BigBlueButton and all the rest of them. And suffice it to say,
eventually we landed at BigBlueButton, but this is now showing the signs of the
times, time, doesn't matter. So we wanted to take a second look at what's out there. And I thought
Jitsi had improved with the latest version, but suffice it to say, there's still a lot of room
for improvement. Let's put it this way. What's your anti-box? Where does that leave us? Okay,
this is a subject for a different episode. Yes, stay tuned for episode 24, where we
will reveal all about Jitsi Meet. That episode will only be about five hours long and will basically
contain all the details of our fruitless attempts to get this up and running. No, I'm kidding.
I'm sure it has its merits, but it's just hard to install. That's all.
Right. How long did it take you, the episode or installing it?
Well, I imagine they would be of equal length.
All right, so my anti-box is, well, I only have a few. GPUs?
It is really the different formats for transmitting video data these days.
Okay. On a physical level. It's very annoying when your laptop doesn't have a DisplayPort.
When your laptop doesn't have what? A DisplayPort. A DisplayPort? Okay, yes.
Yeah, these things keep moving and having different versions, and you can do DisplayPort
over USB-C, and then there's again a difference depending on various ports here and which version
is running, 1.4 and 2.0 and different specs, and it's all very annoying anyway.
We just like things to work. Indeed we do.
Yes, Jitsi people, that was a hint.
I think we get the whole thing.
Oh, you know what they used to say in a recent musical: get the shit done.
Okay, and with that, I think we are almost done. As usual, we can be found on HPR. Ken, if you're
listening: all the best for 2021. We will continue to be hosted on this platform until
further notice. We would like to thank the great community out there of people who just keep
improving Linux and the surrounding ecosystem for their work. The tools we discussed in this
episode are probably just the best examples of this. And we will be, of course, what's the
word I'm looking for? Yes, we will be looking for sponsors. So if you want to send money,
buy t-shirts, or, yes... And of course, we're also, we're always looking for feedback. So the email
address, yes, is feedback@linuxinlaws.eu, and the address to send any cash or other entities
to is sponsors@linuxinlaws.eu. And with that, I'll see you around.
This is Linux Inlaws. You come for the knowledge, but stay for the madness.
Thank you for listening. This podcast is licensed under the latest version of the Creative
Commons license, type Attribution ShareAlike. Credits for the intro music go to Blue Zero Stirs
for the songs of the market, to Twin Flames for their piece called The Flow, used for the second
intros, and finally to the lesser ground for the songs we just used by the dark side. You find
these and other ditties licensed under CC at Jamendo, a website dedicated to liberating the music industry
from choking copyright legislation and other crap concepts.
You've been listening to Hacker Public Radio at HackerPublicRadio.org. We are a community podcast
network that releases shows every weekday, Monday through Friday. Today's show, like all our shows,
was contributed by an HPR listener like yourself. If you ever thought of recording a podcast,
then click on our contributing link to find out how easy it really is. Hacker Public Radio was founded
by the Digital Dog Pound and the Infonomicon Computer Club and is part of the binary revolution at
binrev.com. If you have comments on today's show, please email the host directly, leave a comment
on the website or record a follow-up episode yourself. Unless otherwise stated, today's show is
released under a Creative Commons Attribution-ShareAlike 3.0 license.