Episode: 3116
Title: HPR3116: Unscripted ramblings on a walk: Crisis at The Manor
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3116/hpr3116.mp3
Transcribed: 2025-10-24 17:06:52
---
This is Hacker Public Radio episode 3,116 for Monday, 13 July 2020. Today's show is entitled
Unscripted Ramblings on a Walk: Crisis at The Manor. It is hosted by Christopher M. Hobbs,
is about twenty minutes long,
and carries an explicit flag. The summary is:
a walk and a talk about a lightning strike zapping a network.
This episode of HPR is brought to you by AnHonestHost.com. Get 15% discount on all shared hosting
with the offer code HPR15. That's HPR15.
Better web hosting that's honest and fair at AnHonestHost.com.
Hello again, Hacker Public Radio.
Got another unscripted rambling here. Hopefully some of these will be of use to folks.
And again, if they get to be too much or not interesting enough, let me know in the comments
and I'll cease and desist. Maybe finally put together that DNS episode that I promised Ken.
Taking another walk this morning. Have not yet had my coffee, so we'll see how this one goes.
Just got up a few minutes ago. I submitted the last unscripted ramblings.
Should come out in a few days here, so I'll go ahead and make sure that this one gets submitted
for a much later date. We'll kind of spread them out a little bit.
Today, I want to talk a little bit about what happened at a little community network
that we called the Manor.
Many months ago, oh shoot, a couple of years ago, one of my earliest HPR episodes,
it could have been as early as 2013 or 2014,
I posted an episode about libernil.net, L-I-B-E-R-N-I-L dot net.
The domain, I don't believe, exists anymore, and it's probably been squatted if it does,
but the show was about a network for family and friends.
What I did was I just put together some computing resources and gave some friends
web space, shell accounts, I think XMPP, and that sort of thing.
Over the years, that network grew, and eventually I moved into an office
for work with a friend of mine.
It's an old Victorian house that was converted into office spaces, and we had,
at some point, decided to call that place the Manor.
Somewhere along that time too, I don't really remember when it changed,
we picked up some more users that were not just family and friends,
they were people we knew online, and we renamed the network to the Manor.
So the whole network is now called the Manor, and I say the whole network like it's big;
it's actually still quite small, but it's bigger than it was.
The general idea is that we were trying to use recycled computers
as much as we could, and we wanted to use free software.
If you go to Manor.space, you can see the big tag line at the top.
And we gave out accounts to people, hosted all kinds of things,
various services, let people install things.
It was just kind of an experiment in free culture, really.
It seemed to be going quite well for a long time.
There are two people with admin accounts: myself and then a backup.
But typically, it's kind of a flat structure as far as management is concerned.
The reason there's only two of us with those accounts is to reduce the attack surface.
If somebody's account gets hacked, we don't want them to have those privileges.
But we install pretty much anything the user base asks us to install.
And then we take little polls and things like that.
So it had been trucking along just fine for several years.
And it mostly consisted of a couple of servers in my office that I had access to, on a not-so-great internet connection.
And then some VPSs.
We have a VPS with Tranquillity.
That's two Ls, tranquillity.se.
It's a Swedish VPS provider; I met the host of that service on the Fediverse a long time ago and have been using his VPSs for a while.
And every now and then we would have a VPS with SDF.org.
Just a convenient service.
And we had those VPSs to sort of spread our services out a little bit and avoid a single point of failure.
And that's what this episode is about, a single point of failure.
So somewhere over time we kind of moved all, well, not necessarily all, but most of the resources back to my office, where everything was hosted.
We had great backups.
And we were hosting a lot of important things for the network, namely things like DNS and all of the web servers.
We were hosting XMPP. We had a couple of IRC servers.
We had some bots.
We had people's personal websites. Almost everything was hosted in my office.
The pandemic came along and did not affect us very much, except for the fact that my office is basically sealed up.
I don't go in there. I try to avoid going over to that town. I live in a smaller town; that's a larger town. I try to avoid going over there.
Well, we had a big lightning strike. A big, big lightning strike; people who were in the building when the lightning hit said it probably struck the building itself.
They said it hurt their ears and it was, you know, super bright. A big lightning strike, about the worst thing we could get.
We had a UPS with a surge protector on it, and that sacrificed itself.
Another important lesson: we had the power equipment on a surge protector, but did not have the cable modem on a surge protector.
The surge went through the cable modem via the cable, through the network equipment, into the servers; popped a couple of servers, popped a bunch of network equipment. Big mess.
Luckily we had backups, like I said, really good backups.
One thing we don't have is a lot of time.
So I'm slowly rebuilding that, and what we've done, since we can't use the local servers right now, is we went ahead and spread out a little bit.
I got a couple more servers from Tranquillity. Got another VPS from SDF. I went ahead and got a, oh, some kind of cloud compute thing from Gandi.
We use Gandi now for DNS. All of our domains were registered with Gandi.
Because our DNS servers were hosted in my office, DNS kind of fell apart.
We did have secondary and tertiary DNS servers, but for whatever reason, things didn't resolve.
And other people ran those servers and I wasn't going to harass them. So we just moved all the DNS to Gandi.
So we're a lot less self-hosted than we used to be.
I guess technically we're self-hosted still because we're on VPS, but other people are managing our DNS for us.
We're relying on virtual servers until I can get to my office and install new ones, which is going to be several months.
And it's really kind of a kick in the pants. It's nice to know that having those backups, we didn't lose any data.
Very pleased with that. The rebuilding time is a huge lesson.
And I'm trying to think of ways that in the future, on my own, I can rebuild much quicker.
Some points of failure that are really frustrating to me right now, well, maybe not points of failure, but difficult things to work with:
The big one right now is SSL. I've always had issues with SSL really.
It's an unnecessary, sorry, it is a necessary evil, but I really hate dealing with it.
And I love that Let's Encrypt has gone up and kicked the certificate cartels in the shin, but it's quite difficult to set up and manage.
I've found every time I try to automate it, something goes wrong, and then we get security warnings and it's just a hassle.
At the moment, I've got, I think, 30 days until my certificates expire, and I'm trying to figure out how to get automation working and how to propagate those certificates across a bunch of servers.
It's just a big hassle.
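For what it's worth, the direction I'm leaning is certbot's renew machinery with a deploy hook that pushes the renewed certificates out to the other boxes. This is only a sketch; the host names are made up and it isn't what we're actually running yet:

    #!/usr/bin/env python3
    # Hypothetical certbot deploy hook: after a successful renewal, copy the
    # renewed certificate directory out to the other servers and reload them.
    # The host names below are made up for illustration.
    import os
    import subprocess

    # certbot sets RENEWED_LINEAGE to the live directory of the certificate
    # it just renewed, e.g. /etc/letsencrypt/live/manor.space
    lineage = os.environ["RENEWED_LINEAGE"]

    hosts = ["web2.example.net", "xmpp.example.net"]  # the other VPSs

    for host in hosts:
        # -L follows certbot's symlinks so the remote side gets real files.
        subprocess.run(
            ["rsync", "-azL", f"{lineage}/", f"root@{host}:{lineage}/"],
            check=True,
        )
        # Reload the remote web server so it picks up the new certificate.
        subprocess.run(["ssh", f"root@{host}", "systemctl reload nginx"],
                       check=True)

You would point certbot at something like that with certbot renew --deploy-hook and the path to the script, or drop it in /etc/letsencrypt/renewal-hooks/deploy/ so it runs on every successful renewal.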
The other thing I need to do is get the user web accounts up.
I opted to use a separate VPS for the user web accounts from the one that I'm using for the main web server.
We were using tilde directories for user web accounts, and people want to keep doing that.
I'm trying to figure out how to have nginx rewrite rules on one web server that will redirect to another web server that also has rewrite rules to map requests to people's home directories.
And it's just a gigantic hassle.
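The shape of what I'm after looks roughly like this, and I want to stress it's only a sketch; users.manor.space and the paths are invented for illustration. On the main web server, nginx bounces tilde requests over to the user box:

    # Main web server: send /~user requests to the box that hosts user sites.
    # "users.manor.space" is a made-up name for illustration.
    location ~ ^/~(?<user>[a-z0-9_-]+)(?<rest>/.*)?$ {
        return 302 https://users.manor.space/~$user$rest;
    }

And on the user box, the classic tilde-to-home-directory mapping:

    # User web server: map /~user/... onto that user's public_html directory.
    location ~ ^/~(?<user>[a-z0-9_-]+)(?<rest>/.*)?$ {
        alias /home/$user/public_html$rest;
        index index.html;
    }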
A couple of our IRC bots, when they got moved over to an upgraded operating system, because they were moved onto a VPS,
started having weird segfaults because of new libraries, and I did not take the time to upgrade the bot software, so I've got to do that.
So, kind of a retrospective here for folks, if you have networks that you're supporting people with.
Number one, I hope you have backups. I mean, that's kind of a given, right?
But I would suggest you find some means of testing your backups.
It's hard for us to test our backups with DNS, but the other backups certainly should have been tested.
They work, but assembling them, my gosh, what a pain. Reassembling, rather.
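One low-effort way to do that, and this is just a sketch with made-up paths, is a scheduled job that restores the latest archive into a scratch directory and checks that a few known-important files came back:

    #!/usr/bin/env python3
    # Rough backup smoke test: restore the newest archive into a temporary
    # directory and confirm a few expected files exist. Paths are made up
    # for illustration.
    import pathlib
    import subprocess
    import sys
    import tempfile

    ARCHIVE = "/srv/backups/manor-latest.tar.gz"
    EXPECTED = ["etc/nginx/nginx.conf", "home/alice/public_html/index.html"]

    with tempfile.TemporaryDirectory() as scratch:
        subprocess.run(["tar", "-xzf", ARCHIVE, "-C", scratch], check=True)
        missing = [p for p in EXPECTED
                   if not (pathlib.Path(scratch) / p).exists()]

    if missing:
        sys.exit(f"backup restore test FAILED, missing: {missing}")
    print("backup restore test passed")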
The next thing I would say is: automate your work as much as possible. I get really frustrated at the modern world of configuration management through things like Ansible, and Salt, and Puppet, and whatever else.
And I get upset with containers and crap like that. But I do see the value in being able to just mash a button and receive infrastructure.
Maybe not go that far if it's a personal project, but automate what you can.
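To put a small face on that, here's the kind of thing I mean, a hedged sketch of an Ansible play; the host group and file names are invented, not our real setup:

    # Hypothetical Ansible play: rebuild a user web server from a bare Debian box.
    - hosts: userweb
      become: true
      tasks:
        - name: Install nginx
          apt:
            name: nginx
            state: present
        - name: Deploy the tilde-directory site config
          copy:
            src: files/userdirs.conf
            dest: /etc/nginx/conf.d/userdirs.conf
          notify: reload nginx
      handlers:
        - name: reload nginx
          service:
            name: nginx
            state: reloaded

Run that with something like ansible-playbook -i inventory userweb.yml and you're most of the way to mashing a button and receiving infrastructure.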
This weekend, I'm going to make a big push to hopefully get the last of the work done.
It took me about a weekend and a day to get the other services back up, not including, of course, the
time spent waiting on DNS to propagate. That's fine.
One thing I'm a little bummed about is that we're no longer self-hosted on my hardware.
And I think that's okay, because the services we elected to use are
in alignment with the ideas the network started with.
They are not in opposition to it. Tranquillity is a cool service.
Very quick response, very kind.
And we've got good servers there. SDF is an open public network as well.
Tranquillity is not an open network. But I mean, SDF is open and public, like ours.
Different missions, though. They're public access Unix. We were sort of public free culture.
We've vetted our user base a little more tightly than SDF does because we do not have a lot of computing power.
So we share with a small number of people and encourage people to start their own.
I think, of DNS providers, Gandi is probably one of the least shifty. I would like to move away from their cloud,
you can't see me making wild gestures, air quotes, their cloud compute service.
But at least it's not Amazon. And the reason I went with their cloud service is that
what we're hosting there is our XMPP server and our IRC servers.
I wanted those to stay running if calamity struck, because that was the other thing I noticed: when everything went down, we mostly lost our means of communication.
We have a Freenode channel.
It's #manor.
It used to be #themanor, but then we got an official channel registered, and so it's just #manor.
And oddly enough, not all of the users idle there. It's a very quiet channel with only about six or seven people.
But we have close to 25 users.
So most of them use XMPP or they email with me.
So contacting everybody was, wow, what a mess. Speaking of email, I'm glad we weren't hosting mail.
Mail hosting comes with its own challenges.
And the manor.space email, it's just web for, or sorry, mail forwarding with Gandi for simplicity.
I mean, that makes it way easier.
So it's kind of a kick in the pants, because everything was initially self-hosted on computers that were all recycled and had only free software running on them.
We're running mostly free software at the moment. I'm pretty sure it's all Debian, and I haven't changed any packages.
So the things that I've installed are free software, but the things propping up our servers may not be; maybe not free software, definitely not
any sort of libre hardware, really.
It's kind of a tough thing to run and build.
Moving forward, quite frankly, I'll probably end up changing the mission a little bit.
I used to be pretty strict about.
Good morning.
Yes, ma'am. Sure.
Yeah, indeed.
That's one of the things we'll have to worry about with these boxes.
Visitors anyway.
We'll still do what we can to use free software. I'm not going to explicitly install any proprietary software.
I'm going to rely on other things, I think, with the size that we are now.
And I don't know where the network's going to go in the future.
I'm going to keep running it. I need to come up with a decent plan for it though.
Some contingency for when things go down.
I wasn't exactly caught off guard, but I wasn't prepared either.
It's a little wild.
I think at this point, I've probably rambled enough.
I could probably provide some more lessons about most of these things.
If you want to hear more about how it is all run, what sorts of things we're hosting, the software, how we got it running, that sort of thing,
Reach out to me.
My email address should be visible.
Big truck coming.
Yeah, so if you want to hear podcasts about how we built some things, maybe the current layout of the Manor,
some of the services we hosted, how we hosted them, and how to run certain types of services.
Those all seem like really good ideas, in my opinion, for podcasts.
So just either email me, my email address should be visible, or better yet,
put it in the comments. We'll take a look at it.
And these walks take about 20 minutes so I can probably do some unscripted ramblings on those as well.
If you've not submitted an episode to HPR, please consider doing so.
It doesn't take much of anything. In this case, I'm on my phone.
I've used all sorts of recorders in the past.
It's an easy, easy process, simple thing to do.
So consider submitting an episode.
If you have submitted an episode in the past, it should be pretty straightforward for you to submit again.
So I'm calling on you folks to please submit as well.
Let's keep this thing going.
Thanks to the folks who run HPR.
We're really grateful to have this service, both for listening and publishing.
And with that, we'll catch you guys next time. Happy hacking.
You've been listening to Hacker Public Radio at HackerPublicRadio.org.
We are a community podcast network that releases shows every weekday Monday through Friday.
Today's show, like all our shows, was contributed by an HPR listener like yourself.
If you ever thought of recording a podcast, then click on our Contributing link to find out how easy it really is.
Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club.
And it's part of the binary revolution at binrev.com.
If you have comments on today's show, please email the host directly.
Leave a comment on the website or record a follow-up episode yourself.
Unless otherwise stated, today's show is released under a Creative Commons
Attribution-ShareAlike 3.0 license.