Episode: 3214
Title: HPR3214: Rant about websites
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3214/hpr3214.mp3
Transcribed: 2025-10-24 18:58:23

---
This is Hacker Public Radio Episode 3214 for Thursday, 26 November 2020. Today's show is entitled "Rant about websites". It is hosted by Operator, is about 31 minutes long, and carries an explicit flag. The summary is: I go over the history of websites and the complex nature of security and complex websites.

This episode of HPR is brought to you by AnHonestHost.com. Get a 15% discount on all shared hosting with the offer code HPR15. That's HPR15. Better web hosting that's honest and fair at AnHonestHost.com.

[Music]
Hello and welcome to another episode of Hacker Public Radio. This should be pretty short; apologies for the noise. I want to kind of go on a rant and let you guys know where the state of crawling and spidering websites is. So I'm going to give you a brief history of websites, how we crawled them back in the day, how they changed over time, and how we can crawl and spider them today. Today it's a mess, but I'll start from the beginning.
The first websites were all HTML, right, back in the day, and all of the content was hosted on the site. So if you had ads, for example, you had to pull in the ads from wherever, and maybe you got a unique key or whatever to go with your ads, and that key was bound to the ad domain so they would know where that content was coming from. But even before that, the ads were static. I wasn't part of this era, but before all of that you had to manually put ads in for your stuff; there wasn't really an easy way to do dynamic ads or anything like that. So at first you had to put ads in manually, and then we evolved into a state where you could, like I said, have a unique key tied to an image or a banner or whatever. Every time that unique key was called on the server side, they knew that it was your reference or your referral. And then we started getting into tracking and things like that later.
Ads and tracking are all the same awful beast. They started to find out that not only could this be used for ad revenue, it could be used for tracking across other sites. They can pull in different pieces of data, and that's why we have things like Privacy Badger and the ad blockers: AdBlock, Adblock Edge, uBlock Origin, and Privacy Badger. There used to be an ad blocker I liked that basically just got kicked off the map and off the app store because it was serving up malware. If you're using it, it's probably disabled right now, because Chrome pulled it from the Web Store. It was Nano Adblocker; Nano Adblocker and Nano Defender both got sold off, and then seven days later people started talking about their accounts, like their Instagrams voting for people that they didn't vote for, and the extensions pulling cached credentials out of other websites to generate revenue or clicks or whatever. That's kind of how it was set up at first. But anyways, that's a brief thing on ads and ad networks and ad blockers.
But anyways, what happened after that? We started getting more into JavaScript-based ads, JavaScript-based tracking, things like that. So it was a little bit more tricky; you had to have something like PhantomJS, which was kind of popular towards the end there. Your ads were, 99.9% of the time, your ads and/or tracking stuff. I'm just going to say ads is tracking, because tracking is ads and ads is tracking; they're both the same for our purposes. So you had ads in JavaScript; 90% of the time they were in JavaScript.
Then we started to evolve into the content distribution network, which goes on the premise that none of your content is actually hosted on your website, because you can't handle it, because you can't handle 50,000 users, or you have DDoS issues or whatever. So the CDNs started popping up to host your content and host your tables and host your bloatware. You have these giant JavaScript payloads that you serve up to your customers and their browsers, and then you bitch and moan and complain that your website is slow. Well, yeah, you're serving up jQuery and all this other crap, and it slows your network down, because you're having to serve up all these JavaScript apps and all this content that nobody wants anyways. So the content distribution networks started getting more and more popular, and now they're pretty much a requirement. If you're going to run any kind of website, you have to use a CDN. If you want any kind of footprint or any kind of sustainability, you're going to be using a CDN. And when CDNs got into the mix, things started to get really complicated.
From a security standpoint, and I've been on the security side, it's really hard to tell when something bad happens if you're just looking at domain names. For example, you go to a website, a perfectly legitimate website. Maybe it even has its own content, but it has ads from three different places. One of those is Google and another one is DoubleClick. Those are generally okay, but maybe it also pulls in three or four other ones that are kind of shady, and at some point in time their ad network got compromised, and now they're on a blacklist, and it's triggering some kind of alert on your corporate domain blocking or your local domain blocking stuff. Looking at just the DNS alone, you can look at the timestamps and kind of tell: okay, it looks like Robert went to this website, and somewhere on this website he loaded some content from a domain that triggered a bad-domain alert. So with that said, we get the alert for the bad domain, but we have no idea what website you went to without proxy logs or access to the referring host or whatever. There are a number of other ways to get that information, but put simply, there's no easy way to get at it, because it lives in something like a proxy log, and we're going to start losing that too.
You've got things like secure DNS. We're going to start having browsers with this secure DNS stuff built in. So unless you explicitly say that you want to use your corporate DNS for internet-based stuff, your DNS is all going to go over SSL to who knows where: your content provider or your ISP. They've been talking about ISPs serving up these quote-unquote secure DNS servers, which is just the same old crap they were doing before: they're capturing all that information and logging it and correlating it or whatever.
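Just to make the mechanism concrete, here is a rough sketch in Python of what a DNS-over-HTTPS lookup looks like, using Cloudflare's public JSON endpoint as an example resolver; the point is that the query rides inside an HTTPS connection instead of showing up as plain DNS on the local network.

    import requests

    def doh_lookup(name, resolver="https://cloudflare-dns.com/dns-query"):
        # Resolve a name over HTTPS, the way DoH-enabled browsers do, so the
        # query never appears as plain UDP/53 traffic for local monitoring.
        resp = requests.get(
            resolver,
            params={"name": name, "type": "A"},
            headers={"accept": "application/dns-json"},
            timeout=10,
        )
        resp.raise_for_status()
        return [a["data"] for a in resp.json().get("Answer", [])]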
So you're going to start losing visibility into DNS, and you won't have proxy logs either, because people are working on making everything secure end to end. For example, if you have a corporate network and you're using elliptic-curve type SSL, you can't break the encryption locally over the wire. Traditionally you had no SSL, no security, open internet; everything was HTTP. Then we started getting on this HTTPS bus, which is now just a cluster, and SSL isn't even worth much as a signal against advanced attacks anymore, because these guys can get SSL certificates for anything. Point aside, things got more SSL heavy, and now we're looking at trying to do end-to-end SSL.
So for example, whether it's Charter, Comcast, whoever your provider is, if you're using this elliptic-curve type of encryption, or any kind of end-to-end encryption, it doesn't matter what they do, they're not going to be able to see the data going across the wire. They're going to see a bunch of garbage, and that's good for us, but bad for security and bad for people that want to monitor you and log everything you're doing. With that said, a lot of them will tend to inject JavaScript into your streams, potentially something malicious, or to get whatever they need done through that wheelhouse.
So anyways, I talked about the state of DNS and how it's difficult to pull back the curtain on an alert that comes through on a DNS hit. Say you correlate that with a bad DNS name: okay, it was a malware domain. If you look at all the domains that were hit around that time, it ends up being a bunch of content distribution networks, a bunch of Google stuff, ad networks, a big mess of who knows what. So you filter out your ad networks, and whatever is left is generally, probably, the website they were actually on, and/or you look at how many different requests were pulled down from that one website. For example, if I go to a site that has malicious content on it and I click around three or four times, I'm going to pull down a bunch of content from that website. So I'm going to have a bunch of hits for that website around that time, a bunch of outbound connections to it, but only one DNS request. Depending on what's going on, if you look at the network data you'll have one hit for whatever the malicious domain is, and then a bunch of hits for whatever web page called that malicious domain from inside the page. So you've got to filter out all these CDNs, you've got to filter out all these ad networks, and then, in theory, you end up with whatever websites they were on when they got to that thing.
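To make that concrete, here is a rough sketch of that filtering step in Python; the known-CDN/ad-network list and the log format are made-up placeholders, not anything from the episode.

    # Rough sketch: guess the originating site from DNS hits seen near an alert.
    KNOWN_NOISE = {
        "doubleclick.net", "googlesyndication.com", "googletagmanager.com",
        "cloudfront.net", "akamaized.net", "fastly.net",
    }

    def base_domain(fqdn):
        # Naive "last two labels" heuristic; real code would use the public suffix list.
        return ".".join(fqdn.rstrip(".").split(".")[-2:])

    def likely_origin_sites(hits):
        # hits: list of (timestamp, domain) tuples observed around the alert.
        counts = {}
        for _ts, domain in hits:
            d = base_domain(domain)
            if d in KNOWN_NOISE:
                continue                     # filter out ad networks / CDNs
            counts[d] = counts.get(d, 0) + 1
        # The domain with the most hits is probably the page the user was on.
        return sorted(counts.items(), key=lambda kv: kv[1], reverse=True)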
Now, you don't know the URL, you don't know the path, and you also don't know whether that result is itself a content distribution network, which it is maybe a third of the time. So when we get these alerts, they're probably content distribution networks or something like that. And that's one of the reasons why you need proxy logs, or you need an EDR tool that can pull out URLs, specifically where things are going and the process that was running when they got there. For example, if Chrome reaches out to a malicious site and whatever happens happens and that content gets pulled down, it's not necessarily the biggest deal, because if Chrome is reaching out to something and nothing else is reaching out to that malicious site, Chrome by itself is not going to be a big issue. So if your browser, generally Chrome, I wouldn't say Internet Explorer, is going to a malicious website, you're likely not going to care too much about that. If I reach out to the malicious domain from Chrome and get one or two hits, it's not the biggest deal; maybe it's an ad network that got compromised or who knows what. Now, if a different binary, or explorer.exe, or Internet Explorer were to reach out to that same domain at a later time, after maybe a stager or payload got downloaded, then you can start saying: okay, not only did the browser reach out to this, something else reached out to it later, or it reached out to another domain that was on a blacklist or is potentially malicious.
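A rough sketch of that correlation, assuming you can export a hypothetical log of (timestamp, process, domain) records from an EDR tool; the field layout and browser list are just illustrative.

    # Hypothetical records: (timestamp, process_name, domain) from an EDR/network log.
    BROWSERS = {"chrome.exe", "firefox.exe", "msedge.exe"}

    def suspicious_followups(events):
        # Flag domains first touched by a browser and later touched by a non-browser process.
        first_seen_by_browser = {}
        flagged = []
        for ts, proc, domain in sorted(events):
            if proc.lower() in BROWSERS:
                first_seen_by_browser.setdefault(domain, ts)
            elif domain in first_seen_by_browser:
                # A non-browser binary re-contacted a domain the browser hit earlier:
                # that is the pattern worth escalating.
                flagged.append((domain, proc, ts))
        return flagged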
So that's kind of the state of where things are as far as what you should be concerned about. Now, the problem with all of this, with the approach I try to take, is pulling in all those domains and trying to figure out what website they were on to bring up that content, and there are a number of reasons why you can't automate this process. Say you have a list of 200 domains. One of those domains is where they were actually browsing, and the rest of them might be content distribution networks, they might be ad networks, or they might be the actual websites they were on. Depending on how many hits you have, you can kind of guess and poke and prod at which one it is just by looking at it. But in general, if you try to run that through automation, you don't have the full URL, and with a lot of these CDNs and ad networks, if you just go to the bare domain, you're not going to get anything. You're going to get a 404, or some kind of 500 service error, meaning there was an error because you didn't provide your API key or your customer key or whatever it is that gets somebody their tenth of a tenth of a percent of a penny. So you're not going to get anything back. More importantly, and more frustrating, all of these sites are shrouded in mystery: they're shrouded in CDNs, they're shrouded in ads, they're shrouded in JavaScript, and then the ones you care about, the ones that generally tend to be an issue, are behind some kind of cloud protection.
A big provider in that space is Cloudflare. You've also got folks like Akamai doing DDoS protection now. The idea is that you set your website up, you put it behind one of these DDoS protections like Akamai or Cloudflare, and by default it kind of blocks the bad guys, right? It blocks Tor, it blocks the known bad guys, and you're done; you don't really have to do much. The problem is that it also blocks people using automation and scripting to crawl your website or scrape your data. And of course nobody wants their website scraped, but when you're trying to figure out what's going on, or you need to look through a bunch of domains at once and figure out which one is potentially malicious, or whether there's some malicious source code on one of them, or just get an idea of what is on that website, it becomes more difficult because of these Cloudflare-type customers. I would say a large portion of them are protected by some kind of DDoS protection or anti-scripting or anti-scraping protection.
Back in the day, my standard approach was a tool called wget: I could say "wget -r mywebsite.com" and it would download and mirror the entire website. Those were the days when you could rip an entire website with one tool.
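For reference, that classic invocation looks something like this; the domain is a placeholder and the extra flags are just the usual mirroring options.

    # -r recurses into links; -np stays below the start URL; -k rewrites links for local viewing.
    wget -r -np -k https://mywebsite.com/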
Then we started getting into JavaScript issues and having to code around it to scrape a site or pull the data off a website. Then people started using JavaScript obfuscation and CDNs, and now we've got DDoS prevention and stuff. It's making scraping and spidering and crawling websites more and more difficult, because there's so much garbage, honestly, garbage, on all these websites that you can't see the source code or automate things very easily to see what's being pulled down. So we're having to use things like Puppeteer and Scrapy and Puppeteer-based crawlers, and basically the only way nowadays to confidently pull down or crawl a website is to fire up a browser, man. It's extremely frustrating to me, because if I want to quickly pull down a website, I can't do that. I have to fire up Chrome with something like Selenium or Puppeteer and engineer some wackadoo crawling script to crawl that website: okay, I want to crawl it three deep, and I want to pull down all the source code for that website, not the images, not anything like that. Just pull down the source code so I can look through it and see if there's anything malicious.
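Here is a minimal sketch of that kind of crawl using Selenium driving headless Chrome, since that's one of the tools mentioned; the depth limit, same-domain check, and source-only behavior are my own illustrative choices.

    # Depth-limited crawl that keeps only page source, not images.
    # Assumes chromedriver is installed; everything else here is illustrative.
    from urllib.parse import urljoin, urlparse
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    def crawl(start_url, max_depth=3):
        opts = webdriver.ChromeOptions()
        opts.add_argument("--headless")
        driver = webdriver.Chrome(options=opts)
        start_host = urlparse(start_url).netloc
        seen, pages = set(), {}
        queue = [(start_url, 0)]
        try:
            while queue:
                url, depth = queue.pop(0)
                if url in seen or depth > max_depth:
                    continue
                seen.add(url)
                driver.get(url)
                pages[url] = driver.page_source          # keep the rendered source only
                for a in driver.find_elements(By.TAG_NAME, "a"):
                    href = a.get_attribute("href")
                    if href and urlparse(urljoin(url, href)).netloc == start_host:
                        queue.append((urljoin(url, href), depth + 1))
        finally:
            driver.quit()
        return pages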
The problem is that it's pretty complicated to do nowadays. There's not really an easy way to just call a scraper or a spider tool, because of these CDNs and the way JavaScript works and the way websites work. It can go down rabbit holes fairly quickly, depending on how the site is coded. If you decide you're going to use a crawler that clicks the elements of the web page in order to crawl it, things can get really complicated and difficult. I can't give you a specific example, but I have seen this with Crawljax; this was before Puppeteer and things like Node. I would try to use Crawljax or whatever to crawl a website, and you'd have like four windows up at one time, with these bots going through and clicking and clicking and keeping track of where they are on the website. And you're talking about a fairly simple website that would take, I don't know, maybe a hundred times longer to spider than with a traditional tool like wget or the other one, PhantomJS, excuse me, I couldn't think of the name. PhantomJS has kind of been deprecated because of this whole mess with JavaScript and everything else that's going on; PhantomJS isn't really up to the task. So first it was wget.
Then you had things like Scrapy, or however you pronounce it, to pull down stuff with Python. Then you started having all these other website-ripping tools and frameworks, things like Nokogiri with Ruby. I've messed around with Scrapy again, I've messed around with PhantomJS, which is deprecated or whatever, and I've messed around with Selenium for Android and phone app automation stuff, and all these other frameworks for testing websites. But they're not really designed or built to crawl websites, because websites are so complicated now. To crawl them, it's hard to tell which direction to go. For example, if you're clicking elements on a web page, you don't know whether a link is outside the scope of that web page, because nobody hosts their own content. So you're not sure what URLs to click, where to go, what JavaScript to load up, and it gets confusing as to where you're supposed to go, because nobody has anything of their own; nobody hosts their own content. It's just a big cluster.
I know I'm kind of venting and not really providing a whole lot of solutions. But I'm basically going to have to do the work now to figure out this Puppeteer scraper/crawler framework and build a more generic slash stealthy crawler that will go around and force its way through any of these CDNs and any of this DDoS mitigation slash anti-scripting. There are extra plugins to be stealthy and work around the anti-spidering techniques that some of these cloud providers use, because a large percentage of sites are starting to have these types of protections in place. So if I want to automate checking thousands of websites, or crawling thousands of websites, or even just looking at the first page of thousands of websites, I can't do that today because of all these CDNs, all these content distribution networks, and cloud blockers and cloud blocky things. Two thirds of what I'm supposed to be able to see and look at, I can't, because I'm being blocked by anti-automation techniques, which is, you know, what people have to do. But at the same time, I'm coming at it from a security standpoint: I need to figure out whether or not there's anything malicious on your website. There needs to be a way for me to query that website and pull down stuff so I can decide whether or not it's malicious.
So look out for another episode once I've completed my spider tool or web crawler, whatever you want to call it. I'll go over how it works, how it doesn't work, and hopefully I can get it to work across any website. The idea is to kind of make a user emulator. This user emulator would essentially go to a URL, pull down all the content, and go through it. Folks like Google's email scanners will click a link, open up a zip file, open whatever the contents of that zip file are, open up a PDF and click any links inside it, and process any metadata or scripting that's in, for example, an Office document, a Flash file, or any JavaScript inside a PDF, and crawl it, for example, one or two levels deep, depending on how deep it already is. So the idea is to have an application that I can send URLs to, and it will crawl them however deep I want, and however far outside the scope I want to go. I can say: okay, crawl two deep no matter what websites are linked on this web page, and give me the content for all of that: source code, any documents, any binaries, any PDFs, whatever. Then take that to another local service, a local script, that will scan the metadata, scan for potentially bad things, and give you indicators to say: okay, here's all the stuff that I pulled down, and here's all the code that looks like it might have something potentially bad in it. And it'll print out a report and tell you, okay, here's all the potentially bad stuff within this website or within this link.
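A minimal sketch of that scanning-and-report step, assuming the crawl results are already in a dict mapping URL to page source; the indicator patterns are purely illustrative, not a real detection list.

    import re

    # Illustrative indicator patterns only; a real scanner would use proper
    # signatures, YARA rules, or a reputation service.
    INDICATORS = {
        "eval of decoded data": re.compile(r"eval\s*\(\s*(atob|unescape)\s*\(", re.I),
        "document.write injection": re.compile(r"document\.write\s*\(", re.I),
        "hidden iframe": re.compile(r"<iframe[^>]+hidden", re.I),
    }

    def report(pages):
        # pages: dict mapping URL -> page source collected by the crawler.
        for url, source in pages.items():
            hits = [name for name, rx in INDICATORS.items() if rx.search(source)]
            if hits:
                print(f"{url}: {', '.join(hits)}")
            else:
                print(f"{url}: nothing flagged")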
And that's my goal, but the problem, again, I'm still complaining, is that everything has gotten so complicated that it's hard to do that automation stuff. Anyways, hopefully this helps you understand the mess that we're in as far as security and websites and browsers, because there's all this other garbage, not to mention apps and MIME types and loading up different types of applications through just the browser. Chrome, again, does a pretty good job of sandboxing; by default it kind of stays in a sandbox, so if something happens, if somebody tries some kind of exploit, that sandbox helps the browser and the computer stay safe from exploits within Chrome. But it's gotten to be such a mess just to try to look at the source code for a website. You can't really do it anymore. Probably 10 years ago, even more like 15 years ago, I could actually look at the source code of a website and tell what it was actually doing. Today I can't do that. I can't go to a fairly big, bloated website, like AJC.com, the Atlanta Journal-Constitution, load it up in the browser, and read it and get a feel for exactly everything that that website is loading in. There's so much messy JavaScript bloat that I can't look at a website anymore and get an idea of what it is, what it's doing, what it's not doing, where it's pulling content from. I can't do that anymore because of all this garbage and all this extra JavaScript crap all over the place. And I think it's only going to get worse, unfortunately, until some kind of simpler code, simpler HTML basically, can get loaded that people can just read. And then that would kind of screw up everybody else, because then everybody could scrape the website. So it's kind of like cat and mouse, I feel, for some of it, but more of it is just people adding more and more junk onto the browser and allowing the browser to do more things.
I mean, just the amount of stuff that a browser can do natively is disgusting. You can fire up DOS entirely in JavaScript and run DOS in JavaScript, or you can fire up something else in JavaScript or Java and have it all emulated in the browser in real time. If you've seen any of these coding websites, you can code in real time; they essentially load the libraries dynamically in the browser, and when you execute the code, it's actually executing locally in your browser. That's just more stuff to worry about from a security standpoint, because it's accessing local resources or libraries that may have security issues. These plugins, these JavaScript blobs that get fired up, can leave holes in your browser. But anyways, hopefully you guys will hear from me when I get some kind of easy scraper going.
It will crawl any website however deep you want, regardless of what's on it, and it can use different techniques to try to pull down the content. For example, if it's a mobile-only site and you go to that website, it's going to look different; it might not even provide content unless you provide a specific path or whatever. The idea is that the crawler will detect that and change itself to be able to pull down that content, and/or try to find specific URLs within that domain to go to, based on things like Google searches, to pull up content.
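A minimal sketch of the user-agent-switching part of that idea, using the requests library; the user-agent strings and fallback logic are placeholders for illustration.

    import requests

    # Placeholder user-agent strings; real ones would come from a maintained list.
    DESKTOP_UA = "Mozilla/5.0 (X11; Linux x86_64) Gecko/20100101 Firefox/83.0"
    MOBILE_UA = "Mozilla/5.0 (Linux; Android 10; Pixel 3) Mobile Safari/537.36"

    def fetch_with_fallback(url):
        # Try a desktop user agent first; if the site returns nothing useful, retry as mobile.
        for ua in (DESKTOP_UA, MOBILE_UA):
            resp = requests.get(url, headers={"User-Agent": ua}, timeout=15)
            if resp.ok and resp.text.strip():
                return ua, resp.text
        return None, ""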
So I have a few ideas around how to get content out of these CDNs, or out of these websites where, if you just go to the bare domain, you don't get anything. The same thing with IP addresses is even worse, because one IP address can host multiple domains. You never know what website an IP address is serving, and that is another issue around DNS and understanding security. You can always stick these stupid threat intel feeds of IP addresses and domains in there, but the whole thing is such a cluster that even if you do get a hit for an IP or a domain and it's bad, maybe it was bad last week and it's fine this week, or maybe the IP address hosts multiple websites, like Cloudflare does, and you end up blacklisting a perfectly legitimate website just because it shares an IP. And simple things like Google Docs or document-sharing platforms, you can put whatever you want on there; you can pull down malicious content from GitHub and nobody is ever going to block it. They're content distribution networks, basically, so things like Office 365 and those types of websites are never going to be blacklisted. You have to interrogate them like you would interrogate a malicious website and assume that anything on these sites could be bad, because they host all kinds of content, including binary files and stuff. So anyways, hopefully I'll get something together here soon and I'll be able to show my findings and give you guys some examples of scraping websites, pulling stuff down, and getting a feel for what that looks like. Anyways, have a good one. Record an episode: grab a phone and go crazy.
You've been listening to Hacker Public Radio at HackerPublicRadio.org. We are a community podcast network that releases shows every weekday, Monday through Friday. Today's show, like all our shows, was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is. Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club and is part of the binary revolution at binrev.com. If you have comments on today's show, please email the host directly, leave a comment on the website, or record a follow-up episode yourself. Unless otherwise stated, today's show is released under a Creative Commons Attribution-ShareAlike 3.0 license.