Episode: 921
Title: HPR0921: Tag Team Chase Douglas Interview with Alison Chaiken
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr0921/hpr0921.mp3
Transcribed: 2025-10-08 05:00:07
---
In the following interview, Alison Chaiken and I tag-team interview Chase Douglas from
Canonical; a lot of his work is on gestures and multi-touch.
I had to go back and re-record Alison's comments because the connection to the Mumble
server was distorted.
This was a collection of firsts for me: first time using Mumble, first time talking
to Alison, and first time doing an interview that wasn't in person. Hope you enjoy it.
Hey all, this is like my eighth podcast for Hacker Public Radio.
This is Marcos.
I'm here with Chase Douglas from Canonical, who works on gesture stuff, and Alison
Chaiken, who I've discovered lately is very active in the community. Alison and I are
tag-teaming the interview with Chase over Mumble, so let's see how this goes.
So Chase, you work for Canonical on gestures. Can you tell us a little bit about yourself
and what you're working on, kind of give an introduction?
Yeah, sure.
So as you said, I work at Canonical primarily on Ubuntu, anything related to multi-touch
and gestures.
So I started on this project about a year and a half ago now in the summer of 2010, and
the goal was to create a framework for providing multi-touch gestures to applications, window
managers such as Unity, and everything has basically flowed from there. That has sprawled
out into upstream X input multi-touch integration, support for a bunch of tooling, and then
into a gesture recognition system. We're also currently looking into adding support to
applications like Evince, which is a GTK-based PDF viewer, and Eye of GNOME, which is a
GTK-based image viewer, and various other applications.
One of the things that we're working on right now is actually getting touch support and
gesture support into Chromium, so hopefully we get that in for Ubuntu 12.04.
So the app support stuff, that's all very application specific, like you're working
with the Google people to get that into Chrome?
Well, I will say we just started on Chromium.
We basically checked out the source code and have dived into it.
We haven't actually written anything yet, but Chromium is an open source project, and my understanding
is that you develop patches and you submit them on the Chromium.org site.
What I found interesting was that about two months ago, in September,
someone at Intel had extended Chromium for the prototype X input multi-touch support
that we have in Ubuntu, so we're just now trying to build that and test it, because
checking out Chromium alone takes about an hour on a fast connection and then building it
takes forever and four gigs of RAM.
So just getting that running and built is a challenge in and of itself.
So we're close to being able to test that and see how well it works.
We're kind of excited.
Allison asked Chase to back up for a moment: can you talk a little bit about what X input
is and how X in general works in Linux?
Yeah, sure.
So X is the window server that is generally used as the basis for most graphical applications
and user interfaces on Linux distributions.
And it's been around for like 24 years now, so in a sense it's an ancient technology
that has been essentially rewritten many times over the years.
XInput is what's called an extension to X. In the way back, at the beginning of time,
X only had the concept of one keyboard and one mouse.
And over time, there have been extensions to the input protocol for X so that it could
support multiple mice so that you can have a USB mouse and your track point and your
trackpad and they all would work.
Believe it or not, 10, 15 years ago, that just was not possible.
So then beyond that, we are now into the second revision of the input extension and we're
trying to add multi-touch capabilities into that so that an application can see that if
you touch with three fingers in its window, it sees three different streams of events.
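To make that concrete, here is a minimal sketch of how a client could ask for those separate touch streams using the XInput 2.2 Xlib API. This is purely illustrative, not code from the Ubuntu gesture stack, and error handling is kept to a minimum.

```c
/* Minimal XInput 2.2 touch listener: each finger shows up as its own
 * stream of TouchBegin/TouchUpdate/TouchEnd events with a touch ID.
 * Build with: cc touch_events.c -o touch_events -lX11 -lXi */
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int xi_opcode, ev_base, err_base;
    if (!XQueryExtension(dpy, "XInputExtension", &xi_opcode, &ev_base, &err_base))
        return 1;

    int major = 2, minor = 2;                 /* multi-touch arrived in XI 2.2 */
    if (XIQueryVersion(dpy, &major, &minor) != Success)
        return 1;

    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 400, 300, 0, 0, 0xffffff);

    unsigned char bits[XIMaskLen(XI_LASTEVENT)] = {0};
    XIEventMask mask = { XIAllMasterDevices, sizeof(bits), bits };
    XISetMask(bits, XI_TouchBegin);           /* the three touch events must */
    XISetMask(bits, XI_TouchUpdate);          /* be selected together        */
    XISetMask(bits, XI_TouchEnd);
    XISelectEvents(dpy, win, &mask, 1);

    XMapWindow(dpy, win);
    XFlush(dpy);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        if (ev.xcookie.type == GenericEvent &&
            ev.xcookie.extension == xi_opcode &&
            XGetEventData(dpy, &ev.xcookie)) {
            XIDeviceEvent *de = ev.xcookie.data;
            /* de->detail is the touch ID: three fingers, three IDs */
            printf("touch %u: (%.1f, %.1f) evtype %d\n",
                   de->detail, de->event_x, de->event_y, ev.xcookie.evtype);
            XFreeEventData(dpy, &ev.xcookie);
        }
    }
}
```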
Is that the main way in which touch is different from multiple pointers?
Yeah, it's, well, so that then gets filtered through the toolkits.
So you have a higher level of abstraction to make things easier, but that's the fundamental
basis of how multi-touch will work on Linux and then how we are building our gesture stack
on top of that.
So we were trying to get something going, you know, a hack-ish prototype in a sense.
And now that we have developed a protocol for multi-touch, Peter Hutterer and I are
working to implement that upstream, and we also have our stack working on top
of that as well.
So everything, that's kind of the foundation of everything that we're doing.
Do you have particular target hardware that you think about during the development?
Allison asked, do you have any particular target hardware that you were thinking about during
its development?
Yeah, so we have classes of hardware to keep in mind.
There's touch screens which are also called direct touch devices because where you touch
on the screen is where the events go.
There's track pads or indirect devices.
The touch events that you create by touching on a track pad go to wherever the cursor is
on screen.
And then there's a third type which we've been calling independent.
And so that, an example of that is a mouse that has a touch surface on top of it.
So the difference between a mouse with a touch surface and a track pad is that when you
have one touch on the track pad, that will move your cursor around.
Whereas a mouse, you physically move the mouse and then the touches on top are like auxiliary
information so that you can scroll using one finger swipes, for example.
So we have three device types.
The hardware that we generally have been using for our own development purposes: for
trackpads, it has been the Apple Magic Trackpad and Synaptics trackpads.
The Apple Magic Trackpad is by far the best trackpad available on the market.
It gives you up to 10 or 11 touches simultaneously.
You can have fun getting that many on there, because it's only about 5 inches by 4 inches.
So, like, I don't know, all 10 fingers plus a nose you can get on there.
Touch screens: a lot of laptops ship with N-Trig touch screens, and beyond N-Trig, we also
have some community members who have put Ubuntu on various Android tablets
like the Galaxy Tab or the Asus Transformer.
So we have people who are using touch screens there.
And independent devices: the one that we know of that works is the Magic Mouse.
But we don't actually have support in any of our stack for that type of product yet.
Hey Chase, on the framework for all of those different types, is it pretty much a shared
framework, or is it pretty different underneath to support those three different types?
It's the same framework.
It'll all be using the X input multi-touch foundation.
And the multi-touch protocol for X extends the information you can get about devices
so that you can tell what type each one is.
And then you can do your gesture recognition work on top of that, based on how those
devices function.
So it all works similarly.
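That device-type information is exposed through the device classes in XInput 2.2, so a client can tell a direct-touch screen from a dependent-touch trackpad. A small illustrative sketch, not the actual Ubuntu gesture stack code:

```c
/* Query touch capabilities of all input devices via XInput 2.2.
 * Build with: cc list_touch.c -o list_touch -lX11 -lXi */
#include <X11/Xlib.h>
#include <X11/extensions/XInput2.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    int ndev;
    XIDeviceInfo *devs = XIQueryDevice(dpy, XIAllDevices, &ndev);
    for (int i = 0; i < ndev; i++) {
        for (int j = 0; j < devs[i].num_classes; j++) {
            XIAnyClassInfo *cls = devs[i].classes[j];
            if (cls->type != XITouchClass)
                continue;
            XITouchClassInfo *t = (XITouchClassInfo *)cls;
            /* XIDirectTouch: touch screens (events land where you touch).
             * XIDependentTouch: trackpads (events relative to the cursor). */
            printf("%s: %s touch device, up to %d touches\n",
                   devs[i].name,
                   t->mode == XIDirectTouch ? "direct" : "dependent",
                   t->num_touches);
        }
    }
    XIFreeDeviceInfo(devs);
    XCloseDisplay(dpy);
    return 0;
}
```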
The differences you might see, for example: if you're using the Qt toolkit, for which
we actually have multi-touch support working in Ubuntu, you can fire up a demo application
they have called Fingerpaint.
And if you fire up the finger paint application, you get this white canvas and you start painting
on a touch screen, all of your touches go to wherever you're touching, exactly on the
screen.
On a track pad, for example, the way it works is it doesn't do anything until you put
at least two touches down because when you have one touch down, you're moving the cursor.
Once you start putting two touches down, it starts to draw, it virtually separates the
fingers based on how far apart they are on the track pad, but it draws them at the location
of the mouse.
So there are these intricacies where the type of device it is determines how you end up
using it.
But in terms of the plumbing layer, the foundation, it's all, it's all the same.
So let's say you do, like, a three-finger touch, and on the screen it goes across two
different applications.
Is that all done in the gesture stack?
So it's actually done in the X server.
What happens if you drag like on a touch screen, if you drag three touches across a window,
those touches are, the technical term is implicitly grabbed by that window.
So that means that even if you pull those touches, you drag them outside of the window,
that window is still the only one receiving events until you lift those touches up.
This is the historical design of X and theoretically, if you think about it a bit, it's kind of
how you want it to work anyways.
You can extrapolate from the idea of if you take a mouse and you hold down the left mouse
button and you start highlighting text.
If you leave the window, you want to continue highlighting the text.
Maybe you left the window erroneously, maybe it's just easier for you to drag and scroll
down the screen outside of that window so you're not hiding parts of the text.
If your button were lifted up and then replaced down on a different window, that would really
throw things off.
So there's a lot of these gotchas and intricacies that are taken care of by the X server and once
you start using them, hopefully it feels intuitive and it feels like how it should work.
But when you're designing the protocol, they kind of create nightmares for me in a sense.
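The implicit grab is easy to see with the plain pointer, which is the same mechanism Chase extrapolates from: press a button inside a window, drag outside it, and motion events keep arriving at that window until you release. A small illustrative Xlib sketch:

```c
/* Observe X's implicit grab: hold a mouse button inside the window,
 * drag outside it, and motion events keep arriving until release.
 * Build with: cc grab_demo.c -o grab_demo -lX11 */
#include <X11/Xlib.h>
#include <stdio.h>

int main(void) {
    Display *dpy = XOpenDisplay(NULL);
    if (!dpy) return 1;

    Window win = XCreateSimpleWindow(dpy, DefaultRootWindow(dpy),
                                     0, 0, 300, 200, 0, 0, 0xffffff);
    XSelectInput(dpy, win,
                 ButtonPressMask | ButtonReleaseMask | ButtonMotionMask);
    XMapWindow(dpy, win);

    for (;;) {
        XEvent ev;
        XNextEvent(dpy, &ev);
        switch (ev.type) {
        case ButtonPress:
            printf("press at (%d, %d)\n", ev.xbutton.x, ev.xbutton.y);
            break;
        case MotionNotify:
            /* Coordinates can go negative or exceed the window size:
             * the implicit grab keeps routing events here while pressed. */
            printf("motion at (%d, %d)\n", ev.xmotion.x, ev.xmotion.y);
            break;
        case ButtonRelease:
            printf("release: implicit grab ends\n");
            break;
        }
    }
}
```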
Do we expect the mouse and keyboard to be with us for the long term? Are you really thinking
of all these touch uses in concert with mouse and keyboard, or do you think we might actually
be evolving away from that?
Allison asks, do we expect the mouse and keyboard to be with us in the long term?
Are you really thinking of all these touches used in concert with the keyboard and mouse,
or that we may be evolving away from that?
That's a good question.
I don't have a good answer for that.
I tend to think that there's going to be the two classes of devices, the tablets and the computers.
And tablets are going to rise up and probably become, I think that they'll probably be more prevalent
eventually than traditional computers with the full array of input devices,
keyboard and mouse. But I think that the computer will definitely still be with us for all of your
at least work and productivity types of use cases. So in that sense, I think that our reliance on
mice and keyboards will diminish, especially for consumer, consumption-oriented tasks, to the point
that maybe they do go away and we're just left with tablets, essentially.
For work and productivity stuff, I can't imagine that we'll really get away from keyboards and mice.
One thing that I have thought about is that Microsoft has their Kinect for the Xbox 360,
and everyone on this podcast probably knows what it is, but I'll just remind you, I suppose:
it's this camera that can take pictures or video of a person standing in front of it,
so when you wave your hands, it can detect your body movements, and it can detect in 3D
which way you're moving. What I envision might happen is someone might come along,
and I think Microsoft has just announced, or rumors have swirled, that they might be going
in this direction, where you'll get a Kinect that is meant more for a desktop scenario,
where you're only a foot to two feet away from your computer,
and you essentially gesticulate with your fingers instead of waving your arms around and having
to do a whole exercise to do things. Maybe it's something simple: you lift your fingers up
from your keyboard and spread them out into kind of a jazz hands, and the camera picks that up
and it spreads your windows out, or something like that. I think there's utility there, because
as close as they are today, trackpads and keyboards, mice and keyboards, people still sometimes
are frustrated: I have to lift my hands up from my keyboard to go to my mouse to do certain
things in a GUI application. If there was a way to easily just lift your fingers up one inch
and do a little gesture, a flick or something, maybe that'll have some utility.
Has anybody, or have you guys, played around with hooking a Kinect up to the gesture stuff?
No, we haven't, though we definitely have looked at it in theory.
I guess what we did was we stepped back and we said, we don't have the bandwidth
right now to go full bore and look into hacking the Kinect into our gesture framework, so why don't we
watch and see if anyone does anything interesting? Of course, lots of people did really interesting
things with the Kinect, but when you step back and look at them and how some of those
people made them interact with user interfaces, you still had people having to stand up,
hold their hands out for extended periods of time, things that people would just not want to do,
or physically be able to do for a long time, in that Kinect scenario. And the Kinect devices,
I believe they have a certain focus and certain physical aspects to the camera
that make them only work if you're standing up five to ten feet away from them, so we can't
easily test this theory of what if we did some small gestures right above a keyboard, for example.
So, in that sense, we haven't really looked into it, and I'm hoping that maybe we start to see
some of this hardware come out that would allow this, and maybe we'll take some good ideas from
the community. So if anyone listening to this wants to kind of get into
X hacking and gesture stuff, they could grab a Kinect and see what they could do with it then, huh?
Yeah, sure. It will definitely provide challenges for us, because what we have right now is a system
that is geared towards 2D touch sensors. When you get touch events from the driver and the kernel,
you get X and Y; the Kinect is going to open that up to, okay, now we need X, Y, and Z,
or maybe we need to somehow encode shapes. There's going to be a lot there that will have to be
figured out in a sense. We'd have to extend the kernel interface, we'd have to extend X input
again for new types of input, or maybe it's just all user space, and we just assume one application
will handle it all. I don't know yet. That'd be kind of cool. Go ahead Allison, sorry.
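None of this exists yet; just to make the "X, Y and Z, or maybe shapes" idea concrete, here is a purely hypothetical sketch of how a touch sample might be extended for a depth sensor:

```c
/* Hypothetical sketch only: how a 2D touch sample might be extended
 * for a depth camera like the Kinect. Not an existing kernel or X API. */
struct touch_sample_2d {
    int   tracking_id;   /* follows one touch over its lifetime */
    float x, y;          /* surface coordinates */
    float pressure;
};

struct touch_sample_3d {
    int   tracking_id;
    float x, y, z;       /* z: distance from the sensor */
    /* "maybe we need to somehow encode shapes": e.g. a coarse
     * hand/finger pose descriptor instead of a single point */
    enum { POSE_UNKNOWN, POSE_FINGERTIP, POSE_OPEN_HAND, POSE_FIST } pose;
};
```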
It immediately makes me think of OpenGL and OpenCL. You have to wonder whether, with a
group like Khronos, there could come a definite gesture language that people agree upon, or
whether it actually stays ad hoc, sort of like the keyboard and mouse.
Allison basically asks, is there talk about an agreed upon gesture language?
So I heard some of that. I'll try and rephrase, and you can tell me if I got that right.
You're asking about if there's the possibility for standardization of something like a gesture
language, sort of how Khronos Group puts out OpenGL and OpenCL. Is that correct?
Correct.
Yeah, so there's actually some interesting work that some of the research laboratories
at universities around the world are doing. There's a couple of people who have attacked
the idea of a gesture language. We ourselves have briefly looked into it at Canonical.
The challenge for us is more that we aren't at the point yet where we have a stack that is
able to be exploited to provide a gesture language above and beyond that stack, but that will happen soon.
And what was interesting, I don't remember the name of who it was, but less than a year ago,
someone posted to our multi-touch development mailing list what I think might have been
his doctoral thesis. Part of it was on a gesture language and how you would describe a sequence
of gesture events. And so there's a lot of interesting things there. The challenge is how do you
create a user interface that is complex enough to really use something like a gesture language
while not requiring the user to have training on it? Even as it is right now where we have simple
gestures like a spread motion to spread your windows out on your desktop in Unity or a
four-finger tap to bring up the dash. Those types of gestures, people have to hear about them
or they have to stumble upon them somehow. We're looking to make that better by
presenting a kind of cheat sheet inside of Unity so that people can start to learn this,
but even with those very simple things, you still have to sort of be taught or become aware of
them. And if you get into a full gesture language that just elongates the process and maybe makes it
harder. Hey, so no one's working on a GestureML yet then, a markup language?
Yeah, well, you know that the one person who I mentioned, I just can't remember his name,
he did have a full language spec'd out. So there's definitely someone working on it.
What is the state of device driver support for capacitive screens that will support multi-touch
in Linux? Allison asks, what is the state of device driver support for capacitive screens
that will support multi-touch in Linux? So at this point, most touch screens are actually
supported. And actually the device manufacturers are starting to get into it themselves,
which is very interesting. It's very nice to see that. In particular, we've seen
Elantech, which is a touchpad company. They have contributed upstream in the Linux kernel to bring
multi-touch to their trackpads. Who else have we seen? Cypress makes touch screens. I believe
that you can find them in the HP TouchPad, and they're contributing drivers upstream.
One thing that's cool about them is some of their devices support the ability to detect hovering.
So if your finger gets within an inch of the surface of the screen, it'll start to tell you
that, oh, a finger is coming into contact and here it is and things like that. It's really cool
stuff. So we're seeing not only that the hardware support is being expanded by
people who just have the hardware and hacked it together, like
Rafi Rubin did for the N-Trig touch screens, I did for the Magic Trackpad, and Michael
Poole did for the Magic Mouse. But we are also seeing companies get into the game. One interesting
company is NextWindow. They provide a binary driver. It'd be nice if it was open source,
but at least they recognize the importance of Linux, and they've actually created a binary driver
that they host on Launchpad for people to use on Ubuntu, Debian, Red Hat. I haven't tried it myself,
but it's another one of these areas where it's just exciting to see companies get it and understand
that if you just give us the drivers, which is a somewhat simple task to do,
we will enable your product. We'll make sure that it works well.
Hey Chase. So with things like hovering and the gestures and stuff,
where does the dividing line fall between what the hardware is able to do,
what the drivers do, and what's done in user space? I mean, there are so many
differing capabilities between different hardware. How does that break down?
So it's all pretty much based on the hardware and or the firmware. To a certain extent,
we don't know where the difference lies as users of the devices. It could be that, with a
firmware update, certain devices could provide even more data. We don't really know,
but there is that sort of dividing line between the interface to the hardware and the Linux kernel
where the drivers are. And that's basically where everything ends up being defined. There's
essentially only one area where we sort of add some functionality to some less intelligent drivers.
There's a library called mtdev that Henrik Rydberg wrote. And that takes some devices,
some older devices. They provide you with the locations of each touch at a given point in time.
But what we really need is to be sure of where touches are moving. So each touch really needs to
have a tracking ID associated with it. So that when you start to move your fingers,
we know that, okay, that touch moved from this location to that location because it still has
the same ID as opposed to, you know, you've flipped your fingers and you don't know any more
which finger is which. I mean, in practice, physically, that's not really easy to do,
but in theory it's possible. So Henrik wrote this library called mtdev that takes
untracked touches and follows them, does heuristics based on how close are these new touches to
these old touches and assigns tracking IDs. But outside of that, that's pretty much the only
thing that we do to massage the data that comes out of the device itself.
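mtdev's real implementation is more careful, but the basic idea of turning untracked touch frames into tracked ones can be sketched with a simple greedy nearest-neighbour match. This is an illustrative sketch only, not mtdev's algorithm:

```c
/* Illustrative sketch of tracking-ID assignment, not mtdev's algorithm:
 * greedily match each new touch to the closest touch from the last frame. */
#include <math.h>

#define MAX_TOUCHES 10

struct touch { float x, y; int id; };

static int next_id = 0;

/* prev/cur hold the touches of two consecutive frames (each <= MAX_TOUCHES).
 * On return, every touch in cur has an id: reused from the closest previous
 * touch when it is near enough, freshly allocated otherwise. */
void assign_tracking_ids(struct touch *prev, int nprev,
                         struct touch *cur, int ncur, float max_dist)
{
    int used[MAX_TOUCHES] = {0};

    for (int i = 0; i < ncur; i++) {
        int best = -1;
        float best_d = max_dist;
        for (int j = 0; j < nprev; j++) {
            if (used[j]) continue;
            float dx = cur[i].x - prev[j].x;
            float dy = cur[i].y - prev[j].y;
            float d = sqrtf(dx * dx + dy * dy);
            if (d < best_d) { best_d = d; best = j; }
        }
        if (best >= 0) {          /* close enough: same physical finger */
            cur[i].id = prev[best].id;
            used[best] = 1;
        } else {                  /* no plausible match: a new touch */
            cur[i].id = next_id++;
        }
    }
}
```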
So is that very much done at the kernel driver level, or is it pretty much just passed straight through
from the hardware to X? It's just passed straight through from the hardware. You know, very
occasionally we'll find that hardware supports something that the kernel doesn't support yet
through its interfaces. We have to extend the kernel, but in that sense, we're still not
providing that data inside of Linux. We're just extending it so that we can pass even more
data from the device. So it sounds like you're not tied to Linux, and this stuff could work for
the BSDs if someone was interested. Yeah, well, what it would take is a BSD multi-touch
input interface. I'm not sure if they have one yet or not, and drivers to get the data from
the devices and send them to user space. Speaking of software coupling, are you looking at
Wayland already or is that still over the horizon? Allison asks, speaking of software coupling,
are you looking at Wayland already or is that still over the horizon? It's still a bit over the
horizon for our touch team. In a sense, we are still working on getting multi-touch through X.
It's been a big task and we're nearing the end of that, but it's been taking up all of our
resources in a sense. However, we have been watching Wayland and we have some thoughts on how
multi-touch could work in Wayland. X and Wayland are very different when it comes to how windows
are managed and that affects how events are propagated. And as such, there's some complexities
that we have to throw in into X because of that, but also because of a bunch of legacy
requirements and protocols that we won't have to deal with when we move to Wayland. We can start
afresh and we can do it better the second time around. I'm hoping that for one it's much easier
and much simpler, but on top of that, that we can re-architect the gesture framework in particular
to be considered essentially like a first class event of the Wayland input system. Whereas right
now with the U-Touch stack, it's a second class citizen. It's provided as a layer on top of
the X input system. So there's a lot of ideas we have there. And I've spoken with a few people
about those ideas, Tiago Vignatti at Intel, of Qt and Trolltech fame. He has been looking
into this a bit for Wayland. And yeah, I think stuff will happen pretty shortly, but we haven't
been focusing on it ourselves at Canonical for multi-touch yet. The automotive case seems like
a fascinating one. As far as touch and gestures go, I know that Ubuntu has an IVI remix, and recently
Cadillac has come out with a multi-touch screen that has haptic feedback and has some gestures
with it. It looks like a very exciting area for development of actual shipping products in 2012.
I don't know if you're familiar with that at all. Allison says the automotive case seems like a
fascinating one as far as touch and gestures go, and Ubuntu has an IVI remix or something,
and recently Cadillac has a multi-touch screen that has haptic feedback and
gesture support. This looks like a very exciting area for development of actual shipping
products in 2012. I don't know if you're familiar with that at all. Yeah, that's a little bit beyond
the range of what I can keep track of in a sense, but yeah, I agree. It is a very interesting
range of products there, especially because in the automotive world it's all about
quickly interacting with the user interface without having to move your eyes from the road
and being sure that the interface does the correct thing. I think there's a lot
that we can help in doing that through multi-touch and gestures, but we haven't looked at those
use cases. At least our team hasn't looked at those use cases yet. I think I've exhausted my list
of questions. What's your favorite part of what you're working on? What did you find the most
challenging, or the most interesting things you've learned that most people may not know,
things like that, anything? There's two things that give me a lot of
enjoyment. The first on a technical level is anytime I pick up a new technology and it's
obvious that the technology was well thought out and I'm integrating with it. It's a pleasure to do
so. When I created a UTouch QML plug-in, it was very interesting to integrate with that,
and the thing that I've been doing recently is integrating Google Test as a test framework for
our products. That also to me is very well written and it's just a joy to work on things like that.
You just kind of feel like you're doing things the right way and it's awesome.
The other side, the other thing that really I enjoy personally is interacting with people,
especially the community outside of Canonical. So going to the Qt developer conference last year,
where I learned about QML and got some pointers for integrating things there. When I go to
the X developers summits and conferences, and just the interaction on the mailing lists,
it takes it away from just being a job, in a sense, to being, I'm helping the community at large.
There are random people who aren't even in those developer communities who I know are benefiting
from this work and it's that type of, that aspect of the work brings me a lot of joy.
Do you anticipate contributing the multi-touch work to GNOME and Debian as well?
Allison asks, do you anticipate contributing the multi-touch work to GNOME and Debian as well?
Yeah, so one of the biggest blockers for us has been that our stack required deep patches to the
X server. Literally thousands of lines of code are stuffed into the Maverick, Natty, and Oneiric X servers.
Our goal for this next cycle, when we release in April, is that we will be using the upstream
implementation of X input multi-touch and that we will have pushed all of our uTouch gesture
stack to the client side of X, built on top of the multi-touch protocol. And so we won't have
these hairy, nasty patches in our X server. Once that's the case, it becomes much more reasonable
to go to other distributions, Debian, Red Hat, Fedora, whoever, and say, look, you've got your
X bits. Your X bits are upstream. They provide you multi-touch. You can just package up our
user-space uTouch framework on top of that. Then we can go to the toolkits and say, look, everyone's
using uTouch. We think it's a great framework for gestures. We think we should collaborate and
create some frameworks, much like we do with QML. In a sense, it's all being built up slowly,
and the big stumbling block so far has been our huge patches in the X server, which hopefully
will be alleviated shortly. So this is an area where Canonical's really, really good about working
with upstream then. I like to think so. Now, I certainly have a view that is biased by the people
that I work with. I like to think that a lot of upstream issues are due to misunderstandings,
miscommunications and all that. I think that everyone is trying to get a little bit better in this
regard. So in a sense, I don't want to say that Canonical is specifically working
better with upstream for uTouch or whatnot; it's all of Canonical working with all of upstream.
I think it's more that everyone is learning how to work with everyone else, and it just came across
a little bit better this time with uTouch and some of the X work.
What new features can we anticipate, if any, that will be user-visible for the Precise Pangolin
in the area of multi-touch and gestures?
Allison asked, what new features can we anticipate that will be user-visible for Precise in
the area of multi-touch and gestures? So what we have right now is a framework that is built for
performing gestures that are, I like to think of them as intentional gestures. So that means
that they have a threshold that you have to cross. For example, if you've got your touch screen or
your trackpad and you do a three-finger spread, it should maximize your window in Unity.
And so you have to cross a threshold of how far apart your fingers have moved before it triggers
this atomic action. That's great for things like window manager interactions. What it falls down for
is like smooth scrolling, kinetic scrolling because in that instance, you want to immediately touch
down with two fingers on a trackpad and start moving the page when your fingers start moving.
You don't want to wait for a threshold to be crossed. So one of the things that we've added
into our stack that will be coming out this next cycle is the ability to specify thresholds
so that you can, for scrolling, you would essentially have zero threshold. As soon as you start
moving, the page would start moving. So that's one area that we have some good features.
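The threshold idea can be sketched like this: a drag gesture only commits once the touches have moved a configurable distance, and setting that distance to zero gives the immediate response you want for scrolling. This is an illustrative sketch, not the uTouch API:

```c
/* Illustrative sketch of a per-gesture activation threshold.
 * threshold > 0: "intentional" gestures (e.g. spread to maximise a window).
 * threshold == 0: immediate gestures (e.g. two-finger scrolling). */
#include <math.h>
#include <stdbool.h>

struct drag_recognizer {
    float start_x, start_y;   /* centroid when the touches went down */
    float threshold;          /* distance to travel before activating */
    bool  active;
};

void drag_begin(struct drag_recognizer *r, float x, float y, float threshold)
{
    r->start_x = x;
    r->start_y = y;
    r->threshold = threshold;
    r->active = (threshold == 0.0f);   /* zero threshold: fire immediately */
}

/* Returns true once the gesture is considered intentional. */
bool drag_update(struct drag_recognizer *r, float x, float y)
{
    if (!r->active) {
        float dx = x - r->start_x;
        float dy = y - r->start_y;
        if (sqrtf(dx * dx + dy * dy) >= r->threshold)
            r->active = true;          /* crossed the threshold: commit */
    }
    return r->active;
}
```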
The other area is right now, you can only perform one gesture at a time per device.
In fact, I don't really recommend someone go out and try and perform two gestures using two
different devices. I have no clue what you'd do there anyways. You might expose race bugs.
But what we are expanding to is the ability to have multiple simultaneous gestures at the same time
on the screen, even within the same application, the same window and everything.
To do that, it actually requires a combinatorial analysis of gestures. For example, let's say that
you have a standard diff-viewer-like tool. It's got two panes. On the right side is the new version
of a file. On the left side is the old version of the file. And so you want to be able to scroll
on both panes independently using two different fingers on a touch screen, for example.
So what our stack does: let's say you put two fingers down, one on each
pane, and you start scrolling downwards in each pane at the same time. Our stack only works
at the window level. And so both of these touches are occurring in the same top level window.
So our stack sees these two touches going downwards. It interprets it as three potential gestures.
The first is one of the gestures or one of the touches being dragged downwards. The second is the
other touch being dragged downwards. And the third is a two finger touch being dragged downwards.
So it recognizes all three of those combinations. It sends those, all of those combinations to
the toolkit, let's say it's GTK. And GTK will look at its hierarchy of widgets and it'll see,
okay, that one touch over in this pane, that matches what I want. Okay, that looks right.
One touch over in the other pane, that looks right too. The two-touch drag, oh, that's
spanning these two different widgets; that probably doesn't make any sense to me. So I'm going to
reject that. And so it allows for the toolkit and the application to receive any combination of
gestures and then process them however is most appropriate. So that's a big architectural change
and a big feature that we're going to be adding in this cycle.
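The combinatorial dispatch Chase describes, enumerate the touch subsets, hand every candidate gesture to the toolkit, and let it reject combinations that span more than one widget, can be sketched roughly like this. It is an illustrative sketch of the idea, not the actual uTouch or GTK code:

```c
/* Illustrative sketch of combinatorial gesture dispatch for the two-pane
 * diff-viewer example: enumerate non-empty subsets of the touches in a
 * top-level window and let the toolkit accept only those whose touches
 * all land in a single widget. */
#include <stdio.h>

struct rect  { int x, y, w, h; };
struct touch { int x, y; };

static int contains(struct rect r, struct touch t)
{
    return t.x >= r.x && t.x < r.x + r.w &&
           t.y >= r.y && t.y < r.y + r.h;
}

/* A candidate gesture is accepted if one widget contains all its touches. */
static int accepted_by_some_widget(struct rect *widgets, int nwidgets,
                                   struct touch *touches, int ntouches,
                                   unsigned subset)
{
    for (int w = 0; w < nwidgets; w++) {
        int all_inside = 1;
        for (int t = 0; t < ntouches; t++)
            if ((subset & (1u << t)) && !contains(widgets[w], touches[t]))
                all_inside = 0;
        if (all_inside)
            return 1;
    }
    return 0;   /* e.g. a two-finger drag spanning both panes gets rejected */
}

int main(void)
{
    struct rect  panes[2]   = { {0, 0, 400, 600}, {400, 0, 400, 600} };
    struct touch touches[2] = { {100, 200}, {500, 200} };   /* one per pane */

    /* Enumerate the 2^n - 1 non-empty touch subsets: {T0}, {T1}, {T0,T1}. */
    for (unsigned subset = 1; subset < (1u << 2); subset++)
        printf("gesture over touches %#x: %s\n", subset,
               accepted_by_some_widget(panes, 2, touches, 2, subset)
                   ? "accepted" : "rejected");
    return 0;
}
```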
Does it take the toolkits doing all that internally, or are the Qt guys and the GTK guys trying to work
together to come up with some underlying common stuff? So it in a sense needs to be done
individually, because the widget layers are different between the toolkits.
Qt uses QWidget, and GTK uses GObject or GtkWidget; I'm not sure, I'm not as familiar with GTK
myself. So in a sense, it's a completely different setup that they have. And so there's no
possible way that they can really collaborate here. But the good news is that essentially,
you just need to implement it once per toolkit. And you know, obviously the big ones are Qt
and GTK. Once we have those implemented for those toolkits, we'll have a reference platform,
if you will, for other toolkits like Enlightenment, if they want to create frameworks for
that. There's only a handful of places where we need to do this. So in theory, and hopefully,
it's not that big of a burden, even though it will still have to be done per toolkit.
Sounds good. And you answered my follow-up question about Enlightenment. So I think I'm about
out of things to pass to you. Anything to draw in people who are curious about helping,
you know, either X specifically or open source projects or anything like that? Anything you want to
tell people? Well, we do have a project page on Launchpad, launchpad.net/utouch. And then
you can find all of our projects there. We have a mailing list, multi-touch-dev
at lists.launchpad.net, and we have an IRC channel, #ubuntu-touch, on IRC.
In any of those forums, you can contact us and we're happy to help you out, provide answers,
take contributions. That's specifically for uTouch. For X, obviously, the X upstream
development community is great. I mostly hang out on the xorg-devel at lists.x.org mailing
list and on the #xorg-devel IRC channel. And yeah, I think, you know, I've worked
in the kernel, in X, and in uTouch. Certainly in uTouch, we try to be very welcoming, and we try not to
flame or put people down. That doesn't seem to happen in X either, which I really like. It can
happen in the kernel. But mostly I'm just throwing it out there to say we try to be gentle,
nice people. So feel free to come and pester us. We're happy to chat. Well, very cool. Well,
thank you for taking the time to chat with Allison and me. I appreciate it. My pleasure. Allison,
would you like to say anything? I think I'm happy, although I must mention that, you said 24 years ago:
I was at MIT when they started to create this project, you young whippersnapper.
I'm sorry. That was very, very fascinating. I had no idea some of
these activities were going on. I'm really excited to see what these people are coming up with and
what's to come. Well, thanks. Well, all right. I guess this wraps up our awesome interview.
And yeah, again, this is Hacker Public Radio. Thanks for listening.
You have been listening to Hacker Public Radio at HackerPublicRadio.org.
We are a community podcast network that releases shows every weekday, Monday through Friday.
Today's show, like all our shows, was contributed by an HPR listener like yourself.
If you ever consider recording a podcast, then visit our website to find out how easy it
really is. Hacker Public Radio was founded by the Digital Dog Pound and the
Infonomicon Computer Club. HPR is funded by the Binary Revolution at binrev.com.
All binrev projects are proudly sponsored by Lunar Pages.
From shared hosting to custom private clouds, go to lunarpages.com for all your hosting needs.
Unless otherwise stated, today's show is released under a Creative Commons
Attribution-ShareAlike license.