Episode: 529
Title: HPR0529: Interview with Peterwood
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr0529/hpr0529.mp3
Transcribed: 2025-10-07 22:35:13
---
music
What you're about to hear was originally recorded for the Trackseq podcast.
You can find out more about the Trackseq podcast at www.trackseq.com.
Hello listeners, and welcome to the Trackseq security podcast.
My name is Aaron Finnan and welcome to the show.
Today we've got a very interesting show lined up.
I've got an interesting guest on.
We'll be really nice to speak to him in a little bit.
I think we'd best start off with a bit of Trackseq news,
which is quite strange,
seeing as we're only on the second episode.
But we have a new host, Robert Leidiman.
Robert, why don't you introduce yourself to the Trackseq listeners?
Okay, my name's Robert Leidiman.
I run a small auditing software development company in Scotland.
And we're joined also by Tom McKenzie.
You're doing all right, thank you.
Just hanging on.
Good on.
What have you been up to?
Yeah.
I've just started my own web series, interview with the Black Hat.
So I've been running around in Black Hat forums and IRC channels trying to get hold of
some Black Hat, I suppose.
And also I've been trying to get my head around,
surveying or sabbillon for some people like to call it.
Yeah, you don't have to pronounce it to use it.
And to finish off, Robert's joined Trackseq, and Ryan has moved on;
he won't be joining us anymore as a host.
But everyone at Trackseq wishes him the best and certainly with DVWA and all the good things
that are happening over there.
So yeah, if you're listening Ryan, thanks very much, buddy.
Right, so today we're interviewing someone who I've spoken to before,
have a lot of respect for, and it's very nice of him to come on to the call.
It's the famous Pete Wood.
Are you there, Pete?
I am, young man, I am.
Brilliant.
Pete, could you introduce yourself to everyone please?
Yeah, with pleasure.
Yeah, my name is Peter Wood.
I've been playing around with computers and electronics for the past 41 years,
which is quite worrying.
And I've been in the security side of the IT business for the last 20.
I run a consultancy called First Base Technologies,
which is an ethical hacking firm;
we're the sort of guys who provide unbiased, I hope,
and independent testing services.
We don't sell anything and we don't install anything, except in order to break it.
And I also run a group called whitehats.co.uk,
which is a vendor independent user group for security professionals.
Awesome, Pete.
I mean, if anyone's ever listened to some of the stuff we've done on HPR,
I've interviewed you before, so I'm Black Sam.
I'm really happy to have got you on for Trackseq as well.
I suppose we should start off with a pretty standard question for you.
How did you get into information security then?
Well, yeah.
I did say during the sort of warm-up to you that you could take an hour,
I'll really try to distill it down sensibly,
but use the specific code word if you need to stop me.
I think the first piece of security work I did was in the mid-80s,
working with early Novell servers,
and configuring them, what we hoped was correctly,
as you'd expect with any networking environment.
There were a lot of settings, and most people just took it out of the box and used it.
And we had a few clients who had what they thought of as sensitive and important information,
and they wanted to segregate areas that were going to improve the access controls and all that sort of stuff.
So that was probably the first really hands-on security work that I did,
and that was sort of mid to late 80s.
And then I guess it was about 1993, 1994.
I was asked by one of our big pharmaceutical clients to try and put together
an IT security policy for them, because they'd had a visitor on site,
a contractor, who accidentally, I guess, introduced a virus onto their networks,
and their HR department told them that they couldn't do anything about it,
because they didn't have any rules about it.
So they asked me to write a security policy, and I did that based on a document,
which the DTI, as it was then, had published, which subsequently became BS 7799,
which eventually became ISO 27000.
So that was about 93, 94, and I really got into the widest plan of information security
rather than just IT security.
I mean, how long have you been involved in ethical hacking then, Pete?
Well, knowing you were going to ask me that question, there's a surprise question.
I did a bit of research back on our archive server, and the first real,
what I would describe as a proper ethical hacking test, we conducted in November 1996.
I mean, you said there what you defined as a proper ethical hacking test.
I mean, what in your mind, how would you describe ethical hacking, in your dictionary?
Yeah, what a lovely question. Everybody has a different view, don't they?
I see it as a combination of finding, and then quite often,
if you're doing the job properly, proving vulnerabilities in an organization's security profile.
We're looking at the attack surface to see where they might be vulnerable.
I guess we're also looking at it from the context of what that organization does for a living,
what sort of information it has, and therefore who the attackers might be, to put together
a view of risk against the vulnerabilities as well.
I know a lot of people will confuse maybe just vulnerability scanning with ethical hacking.
In our view, it's not the same thing at all.
An ethical hacker should really, in our view, be replicating a criminal attack as far as is possible
while staying within the law, and obviously without actually causing any damage to the systems,
unless it's a sacrificial system.
Okay, so really kind of like a black hat, but good.
Yeah, that's right, yeah.
I mean, in your time involved in InfoSec and ethical hacking,
what do you think the biggest changes that you've seen in your time anyway, Pete?
Oh, an enormous, enormous change in terms of breadth and methodology probably,
and a very significant change amazingly in the responses from the clients as well.
I mean, when we first started, I looked back through the first couple of years
that we actually did real penetration testing, ethical hacking,
and all the clients were in the finance sector, but obviously that's changed.
They were always going to be the early adopters being concerned about fraud and treason.
I think, you know, when we started, we were kind of making it up as we went along
because there wasn't a clear methodology published by anyone,
there wasn't an official qualification in this sort of role,
so we were kind of pioneering, and everybody calls the early days of the Internet,
the Wild West, although some people say it's still that way.
And I think, you know, we were the early sheriffs or trying to be,
or maybe the Pinkerton detectives anyway.
And we were really making things up as we went along,
and ourselves and lots and lots of other people,
probably cleverer than us, have now, I think, put together a real framework
that describes how to do it, what the objectives are,
and what sort of tools are useful.
And it's become a much more formalised process,
but hopefully without some of the creative elements being lost.
I was going to say, you know, in those early days, when you were kind of, like,
captaining the ship and navigating your way through this,
where were you taking inspiration from?
Where were you saying, well, this is, we, you know,
we need to simulate how a hacker attacks the system.
Where were you, like, going to find information and saying we really should look at this,
and we really should look at that.
I mean, was there documentation that you were able to find that was able to assist you,
or was it just really sitting in a boardroom with a couple of, you know,
a couple of your hackers, and basically thinking out the box and saying,
this is what we have to test.
Well, we started looking at the products, first of all,
that were being deployed as defensive measures, you know,
looking at some of the early firewalls, how they worked,
and reverse engineering in our minds, why they were set up that way.
We looked at a few small groups of really clever people,
like, you'll remember ISS, now subsumed into IBM.
But when we started, ISS was like half a dozen people.
They were doing a lot of good security research,
and we learned a lot from them.
And we learned quite a bit from a number of more groups in larger organizations, too.
But in terms of methodology, you know, we really just kind of tried to,
try to think like a criminal and say to ourselves, you know,
what would we do if we wanted to break into this company or this system or this building?
And, you know, the logical thing was that we developed, we wrote, our own methodology,
and we were very positively surprised when we saw other methodologies emerging
that fit within the same large framework, really, you know,
the scoping out, the background research, the footprinting, followed by the analysis
of potential vulnerabilities, and then the proving of those vulnerabilities.
And, you know, that stepwise approach was founded on good engineering,
good scientific principles.
I was just about to say the same thing,
and I have to say that it's just, you know,
you sound like you're describing being an engineer almost.
Yeah, well, you know, I was, and I was trained to think that way.
And I think a lot of hackers I know started by taking things to bits at home,
you know, and the real hardware hacking.
And then we'll talk a little bit about later, perhaps,
about some of the stuff I did in the early days of microcomputers,
and things like Commodore 64s, you know, reverse engineering,
software, and stuff, which is quite interesting.
And it's just that sort of inquisitive mindset, isn't it?
But in terms of, you know, the actual methodologies and processes,
they were, yeah, you know, I remember thinking,
how would I create the reports when I first started?
And I thought back to what I was taught in science classes at school.
And, you know, you would have, like, the objective of the experiment,
the methods, the results, and the conclusion.
And apart from the fact that when we're presenting the results,
people don't like reading reports, so we have to put conclusions at the front,
rather than at the end.
It's remaining in that same track, yeah.
Yes, I'm sorry.
I'm laughing a bit there, but having done data protection audits in clinical research,
it's exactly the same: the pointy-haired boss usually wants a single page at the front
with the bad news and the good news.
So, yes, it's a simple page.
I think that's safe.
No words longer than two syllables, unless, well, whatever we call it, that's fair.
Yes, indeed.
Just a left-field question: if you've been doing hardware hacking and taking things apart,
how many times have you been electrocuted then?
Absolutely lost count, really.
I used to work as a bench engineer back in the 80s, late 70s, perhaps it was.
And I used to work on instrumentation equipment, you know,
sort of lovely digital displays you get in old-fashioned measurement equipment.
And trying to repair those, you have to have them switched on while you're working on them.
And in those days, they ran off mains power.
You'd get a jolt across the bench about once a day, just to keep everything ticking over.
I still get told off now for doing repairs to the house wiring without switching it off,
but apparently I'm supposed to.
Yeah, only if you get it wrong though.
He's complaining about repeating, having blown.
Yeah, well, we weren't talking about that, but...
We had a count last week, by the way, just to stick this in for no apparent reason.
Between the three of us in this household, we've got 15 computers here now.
So it's not going to be long until there's a first-based cloud then.
Pete, what do you think is probably the biggest asset a potential white hat hacker could possess?
You know, what's the key asset?
What a question; the answer came straight out of my pen when I thought about it this morning.
And I think I've said it to you before, thinking outside the box unquestionably.
So many of the people we work with when we're providing testing services who are on the defending side don't.
They'll do what it says on the tin.
Well, sometimes they'll read the manual, but that's it:
the front door into a web application.
Let's take something big and complex like SAP for accounting.
If they want to spec out a test for that, they will assume that we're using the front end, the GUI front end, provided by SAP.
They'll assume that we'll be sitting only on the client side, using the interface provided for the user.
Where, of course, you know and I know that we'll think, oh, hang on a minute.
What ports are open on the server? How does it do the authentication?
What's the data stream like on the wire? Is it unencrypted?
Can I just plug in and have a look at it, then?
So, you know, not looking at something in the way that the designer intended it.
But looking at it with a much wider vision is, I think, the key, the absolute key thing.
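The questions Pete rattles off (what ports are open, how does authentication work, what does the data stream look like on the wire) can be sketched in a few lines. Here is a minimal, illustrative example of just the first step, a plain TCP connect check; the host and port list are hypothetical, and of course you would only ever point this at systems that are explicitly in scope:

```python
import socket

def check_ports(host, ports, timeout=1.0):
    """Try a plain TCP connect to each port; return {port: is_open}."""
    results = {}
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            # connect_ex returns 0 on success instead of raising
            results[port] = s.connect_ex((host, port)) == 0
    return results

if __name__ == "__main__":
    # Hypothetical in-scope host; confirm scope in writing first.
    print(check_ports("127.0.0.1", [22, 80, 443]))
```

This is only a sketch of the idea; a real engagement would use a proper scanner, but the point stands that the GUI the vendor provides is not the only way in.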
I mean, to kind of talk about some of the other things as well.
I mean, what are your thoughts on the rise of ethical hacking related degrees in the UK?
Do you think that that's a good thing?
And if so, what positives do you think the industry can take out of having these degrees there?
I was delighted. I mean, when Colin first approached me and told me what he was doing, I was absolutely thrilled.
I think it's the most positive thing we can be in terms of any form of formal training for this sort of role.
I think it's critical. There's no question that almost every training course I've been on or have talked to people about.
Whether it's a commercial course or an academic course, it does not put security at the top of the agenda,
if indeed it's there at all, and certainly doesn't teach the hacker way of thinking.
So, I'm delighted with it. I'm very excited by it.
I mean, a question related to that one, you know:
what would you expect an ethical hacking graduate to possess in the way of a skill set by the time they actually finish the course?
Well, I think I can give you the top five things that kind of rose to the surface when I asked myself that question.
I think there will be a difference between what I expect to see and what I'd like to see.
And that's not a cynical answer, because, you know, someone in my position always thinks they know best.
I mean, that's just life, isn't it?
You know, when you've done something for a long time, you think you know the best thing that they're going to come out of it.
From my point of view, what I'd like to see, and what I have seen, because I've spent time talking to you
and to several other people from Abertay, is firstly the intuitive, thinking-outside-the-box mindset
that just becomes part of your daily life, doesn't it?
It becomes life hacking as well as ethical hacking.
The second is discipline, because when you're doing something that's exciting and stimulating,
it's very easy to blow yourself off course, to go down a particular avenue and really just dig into something
and forget to return from the subroutine, if you like, to the main thrust, to carry on and do all the other components
even though they're not so interesting or maybe not going to give you the same rewards.
So the discipline of making sure that you check everything, even though you've found some really whizzo,
exciting stuff along the way.
The third is patience, because of course everybody that looks at this from the outside,
particularly with the Hollywood spin or whatever that we see on hacking, which is always quite hilarious,
misses that. You know and I know, again, that we spend an awful lot of time panning for gold,
and having the discipline and the patience to keep going and looking for the nuggets,
when it really becomes quite dreary at times, is an important thing, because of course when the gold nugget
does appear, it's even more exciting.
Fourthly, I would say a real low level understanding of the way that computers and networks work,
which I see absent in so many IT disciplines.
When we go on site and talk to clients and talk to the techies,
we find a dramatic difference from one person to the next, from one company to the next.
Some people really understand the way that computers work and have a good internal visual model of what's going on.
And others have a more sort of surface view, what I call point-and-click kiddies,
and they don't really understand what's going on under the surface.
So when we're taking on a new person, the first thing we do is start at the bottom and work up.
And so we're looking at network level protocols, we're looking at packet sniffing on the wire,
we're talking about the way that the computer itself works internally, the way the operating system works.
I think if you've got that as a grounding, you can bolt almost anything IT-wise on top of it.
And the same thing would be true of the low level understanding of people
when you're talking about the physical components of ethical hacking, like social engineering,
building entrance and that sort of stuff.
Again, you need to understand how people work and how people think,
otherwise you're just sort of shooting in the dark.
And then lastly, I might sound a bit trite, I apologize, but it's ethics,
because it is so easy to get carried away with a particular attack that you're mounting,
that it's very easy to cross the boundary between what was permitted within the scope of engagement and what isn't.
And unfortunately we've had more than one person who we've met and spoken to and interviewed
who's admitted to crossing that line and not really understanding the consequences of it.
So I think the ethical part of it is really what differentiates us from anybody who's just interested in breaking stuff.
That kind of springs on separate questions for me, actually, I have to be honest with you.
It's funny that you're ended with talking about crossing the line,
because when you raised on discipline, what I kind of wanted to get your thoughts on was,
you know, you run a hacking company.
What do you do when you have a hacker cross the line?
How is it that not just you, but in your thoughts, how you handle that situation?
It can be as innocent as maybe not understanding that they weren't allowed to port scan that system.
But at the end of the day, the law's the law: you can only do what you're permitted to do, and so on and so forth.
How do we handle errors in judgment?
We push people out of the industry and say, you've burnt your bridges, that's it.
You've made a mistake. You can't come back.
Or do we have some form of monitoring and, almost, re-education?
What are your thoughts on that, Pete?
I'll give you two examples of things that have happened.
And that might put it into context for you.
Firstly, I'm not a draconian-minded person.
I'm the last person to say I've never made a mistake because we all do.
Whether the mistake is due to getting overly excited about something, or just not thinking clearly, or even just a typo.
Everyone can make mistakes.
I think what matters is to learn from those mistakes and what matters is to appreciate the reason that perhaps the guidance is there if it's well written.
First example is testing the wrong IP range and everybody in penetration testing will have done this whether they admit to it or not.
We, for example, if we engage with a client, we require them to complete a consent form that includes, if we're just looking at external IPs, say, a list of those IPs, and sometimes what devices they are, depending on the sort of engagement.
A list of IP addresses. Now, if they themselves make a typo on that consent form, then of course there's a possibility that we could end up testing the wrong people.
And we've had two examples where this happened, which we learned from as a team in the early days.
The first was a range of telephone numbers to war-dial, and the client in question gave us a range of numbers which started with, say, a thousand for the switchboard and went up through the direct-dial numbers.
And they made a typo in the higher range.
So we ended up running an automated war-dial exercise that exceeded the client's telephone range and went into some other business which was in the same building.
It was one of these rented buildings where they have a common switchboard.
And that was one experience that we had. The other was a range of IP addresses where one of the octets was transposed.
And we ended up testing somewhere in Romania by mistake, which was quite interesting.
And there didn't seem to be any comeback fortunately.
As far as you know.
Yeah, well, it may be why my guest is cut off. I don't know.
The payback is like seven years later or something.
Well, we did talk about patience, Pete.
Yeah, right.
So what we did as a result of that was to change the process.
First of all, when one pen tester is asked to run a test,
another member of the team has to check that what they put into their tools is the same as what was on the paper.
And the other thing we do, which is quite obvious now, of course, though we didn't think of it in the early days,
was to simply do a whois lookup on the targets, to make sure that they're owned by the right people.
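That first cross-check, making sure the list in the tool matches the list on the signed consent form, is simple enough to mechanise. A minimal sketch, assuming both lists are plain IP strings; the function name and example addresses are hypothetical, not their actual process:

```python
import ipaddress

def scope_mismatches(consent_ips, tool_ips):
    """Compare the signed consent form against the tester's tool config.

    Returns addresses configured in the tool but never consented to,
    and consented addresses the tester forgot to include.
    """
    consent = {ipaddress.ip_address(ip) for ip in consent_ips}
    tool = {ipaddress.ip_address(ip) for ip in tool_ips}
    return {
        "not_consented": sorted(str(ip) for ip in tool - consent),
        "not_tested": sorted(str(ip) for ip in consent - tool),
    }
```

A transposed octet, like the one behind the accidental test of a Romanian address range, shows up immediately as a `not_consented` entry, and `ipaddress.ip_address()` rejects outright typos before anything is ever scanned.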
But when that happened, the person who did the test was in no sense punished or reprimanded, because in my view,
it was our processes that were wrong; we didn't have a checking process in place.
And as the employer, it was my mistake, not theirs.
If we have that process in place and they don't follow it, then they would be reprimanded.
How they would be reprimanded would depend very much on the severity of the situation.
Obviously, if they're being careless, again, that can happen to anybody after a rough night or something like that.
But if it's a simple port scan, then they would be told off and told not to do it again.
If it was something more serious, then in the worst possible case where we've had somebody who had the rules explained to them very clearly,
wasn't an employee, actually, it was someone working under contract for us.
But we explained the scope of the job very clearly; they had a copy of the scope that the client had signed.
And they deliberately exceeded it for probably the best reasons.
But I hope that doesn't sound like I'm excusing them.
They deliberately exceeded it.
They thought for a good reason they felt that our scope wasn't adequate and that it needed some additional testing.
As a result, they brought quite an important system down for about an hour,
and the net result was it cost us as a firm about £7,000 in fees we couldn't charge.
And we were very lucky that that was as far as the client took it.
Because of course they could have gone for consequential loss as well.
And because that person was a third party, we had to agree with the client or the client requirements to never use that person again.
And we had to discontinue the relationship with them.
And I'm not sure that was entirely fair because they were doing it for the right reasons.
But commercially we didn't have any choice.
If it had been an employee then in the same situation.
If it were not their first mistake of that nature, we would probably have to let them go.
I presume that was Chris jumping on to the call there.
Yeah, sorry I'm a little bit late, guys.
And you were saying that timekeeping is important in ethical hacking as well.
You can't say that, I was only five minutes late, man.
He gave the game away, Pete, that's just not fair.
Sorry, that's not my personal advice.
That's Chris joining everyone by the way.
How you doing Chris are you okay?
Hi Chris.
Yeah, hi guys, sorry it's been one of those days.
You've not been poor.
It's Pete, by the way.
And you're also in a chance.
Sorry to interject.
Having suffered from the other side of somebody exceeding the bounds of what they were asked to do.
I do actually understand why that company would say to you don't use that contractor ever again.
Particularly if you were paying for the service as well, it would fill you with distrust.
In fact I had one question, which is related to your heating analogy really, I suppose.
If you have something fairly dangerous done, like having a gas boiler fixed, you look for CORGI registration.
I just wondered if there's something similar, as we're talking about qualifications.
Is there something similar for ethical hacking that I as someone who's going to hire a hacker or a company that does it?
Can look for a stamp of approval or certification or a charter as it was.
Is there something available like that?
Yeah, I wish.
I think we will get there.
I'm involved with the Institute of Information Security professionals.
I'm involved as a fellow in the BCS and the member of their security consultants register and lots and lots of other organisations.
And a lot of those organisations are pushing towards some sort of equivalent of the CORGI qualification.
I think the problem, you see, is that typically your average heating engineer, if he's technically competent and enjoys his job,
is not going to exceed the scope of what he does, because the boiler could blow up and kill people.
In terms of his technical skills, that's easy to measure.
And of course, we could put something similar in place and there are things in place that could measure the technical skills of a penetration tester.
That doesn't tell you whether they'll exceed scope or not.
Well, no, no, really that's my concern.
I think the scope, the ethical side.
Yeah, I think there won't be an ethical hacking firm in the world that hasn't had some employee do something out of scope at some time.
It's how they respond to it, how honest they are about it, and how they prevent it recurring that matters, in my view.
We've been in business for 20 years now.
We've been doing technical, remote and local penetration testing and ethical hacking since 1996, according to my records.
So that's quite a long time.
And in that whole time, we've only had one occasion where something took place that caused the system outage.
Now, that happened because I think I made a mistake in trusting the background and qualification of the individual that we were using, who was a third party, who was self-employed.
He was more than technically competent, but we hadn't inducted him because he wasn't a member of staff into our way of thinking and our business process.
We run a very small business here. There's only seven or eight employees at any time at the moment, and everybody works very closely together.
We don't have a division of function really. Everybody is a techie.
And everybody understands that scope is critical to not only behaving ethically, but to the future of the business.
And therefore to their careers. So what I'd done wrong, I think, was to assume that I could use a third party and they would have the same sort of constraints around their behaviour.
And they wouldn't think that they knew better than the people who wrote the scope.
And that "I know better" mindset doesn't work in our firm; it doesn't work at First Base.
You should also use it.
No, I was just going to say, I mean, I see your point that people can go beyond the scope, and you're talking about how testers can go outside of scope and continue to test things that maybe they shouldn't.
As a tester myself, I've come across the opposite, where you test something where the client specifically said this is what you should be testing.
You know, where the person who you think is technically competent at the client doesn't realize that the IP address they're giving you is actually the production system instead of a test network.
Yeah, well, you can also have that.
Yeah, I mean, whilst we could say, well, that's tough luck, we don't; we spend a lot of time with the client when we get on-site, if we're doing on-site work, for example, to ensure that it is what they think it is.
We have something we call a pre-quote questionnaire, before we even raise a proposal to the client, that asks a lot of detailed questions about what they want done, when, and to what.
And for example, when you're talking about whether a system is a production system or whether it's a test system or development system, all of that is asked at the time.
And if we're going on-site, which is where the most damage can occur, then we double-check when we're on-site as well.
And we will usually be introduced to the system owner, and we only proceed when we've checked with them.
So we do everything we can, and so far, I don't know how many tests we've run, I couldn't count it.
But we run something like three to four web application tests a week.
We run something like three or four on-site, big network tests a month.
So we're doing a lot of work against a lot of systems, and we've learned over the years how to avoid mistakes, because that's what it usually is.
It's either, you know, if it's on the client side, there will always be the possibility of mistakes.
So we look for, you know, a sort of internal sense in what they provide us with.
If they give us a bunch of IP addresses, and one of them looks wildly different, we'll check it.
If they give us a number of main hosts in a domain, and one of them appears to be in another domain, we'll check it.
If we do a network discovery exercise and find lots of subnets that they didn't mention, we take it to them and say,
are these in scope or not before we go any further?
If the scope changes, they have to sign it and say it's OK.
So we do try and encourage them to think about what we're actually going to be doing, as well as us thinking about it.
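The "one of them looks wildly different" check Pete describes can also be sketched mechanically: group the supplied addresses by network and flag anything that sits alone. This is an illustrative sketch only; the /24 grouping and the function name are my own assumptions, not their actual process:

```python
import ipaddress
from collections import Counter

def flag_outliers(ips, prefix=24):
    """Group supplied IPv4 addresses by /prefix network and flag any
    address whose network contains only that one address: a hint of a
    typo, or an out-of-scope system worth querying with the client."""
    nets = {ip: ipaddress.ip_network(f"{ip}/{prefix}", strict=False)
            for ip in ips}
    counts = Counter(nets.values())
    return [ip for ip, net in nets.items() if counts[net] == 1]
```

For example, `flag_outliers(["10.0.5.1", "10.0.5.2", "10.0.5.3", "203.0.113.9"])` returns `["203.0.113.9"]`, the address worth taking back to the client before testing goes any further.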
Yeah, I mean, sorry.
When our clients ask us this sort of question, which they do a lot,
My answer is, you know, you can come and inspect our processes.
You can talk to the people who will be doing the job, and you can certainly talk to our other clients, who we've worked with in some cases for a dozen years,
who know that we try and do the job in a responsible and ethical way.
And I don't think you can do more than that.
It becomes a person-to-person relationship because, you know, it is such a variable subject.
It's no different, really, than someone repairing your boiler or maintaining your aircraft or something else, except that it doesn't cause loss of life, at least, apparently, in most cases.
But we take it as seriously as that, and we encourage the client to do so as well.
So we think it's about, you know, the guy is doing the job forming the relationship with the client and making sure that they are all on the same page.
Yeah, and I suppose the client, in a lot of ways, has a responsibility to make sure you're not working on a live system, to a certain extent.
Because in terms of something like the DP Act, you could be exposed to sensitive personal information that you shouldn't be exposed to if they put you on a live system, as it were.
Yeah, well that's why we go, you know, we're security cleared.
We always have an NDA in place with the client.
Lots of other things that we believe provide them with a feeling of comfort.
In the end, like anyone in this situation, we have to be trusted.
And, you know, we work on sensitive systems, up to government Restricted and sometimes Secret, and, you know, things that are perceived to be important and sensitive.
And, you know, in the end, you have to trust the people doing the work.
And we do conduct background checks to make sure, as best we can, in the same way as government does, that the people doing the work are, you know, of an ethical nature and have the right approach.
And, you know, with a very small team, we become relatively certain that people are what they say they are.
And if they do get exposed to data like that, they act responsibly with it because it does happen.
Bob had a great question about this when he was talking to me earlier on, about, you know, NDAs and the Data Protection Act.
And you're probably the best person to ask it: how, you know, at the end of the day, you can have NDAs, and you can say, you know, you're not supposed to look at this, and, you know, you can act responsibly.
But under, you know, under the Data Protection Act, by law you're not allowed to view certain sensitive data.
I mean, do you, I mean, Bob, you're probably the better person to ask this question.
Well, well, yeah, I mean, it is all, I mean, we do audits as well as a company.
You know, we do clinical research audits with pharma companies, pharmaceutical companies and so on.
And we have the same sort of issues there, in that we can accidentally be exposed to a patient record, which for a start they maybe shouldn't have,
and which we as auditors shouldn't see.
And it is awkward, really, in that the DP Act is the law and, as Aaron was implying there, an NDA doesn't protect you either.
That's really why I was asking that because I've seen a number of contracts that we've dealt with that never even mentioned the DP Act as if it doesn't exist.
Now, I also saw a template for, in fact, it was four ethical hacking testing.
Wasn't the UK one, I think it was a Dutch one.
And that made no mention of the Data Protection Act or the various EU directives that related to it really.
I've never seen an NDA that does, and that's my concern.
No, indeed. And I find it a little odd. We usually remind the client of it, particularly as it looks like this coming April, that's 2010,
the UK Information Commissioner may get the ability to fine companies up to half a million pounds for losing or exposing data.
And I wondered recklessly, yes, I mean, that is an important word.
And I did wonder whether you feel that might have an effect on whether or not companies pull you in, because if you find something where they are in effect exposing data, you bring that up.
Do you think that's going to have an impact on your business?
Are we going to have to wait for the first case that comes up in front of the ICO? Have you any opinions on that?
Yeah, lots.
I don't know; it might be eventually that someone somewhere will take notice of it.
All I can say is that with the vetting of our staff, the monitoring of our staff, and the physical and technical controls we put around any data,
even if it's temporarily in our custody on site, which includes full disk encryption and encrypted USB sticks (and no, not the sort that we might have heard about recently),
we're actually taking better care of that data than most clients that we interface with.
Yeah, that's a big issue.
If we're working for government and dealing with data that falls under the Data Protection Act and falls under the Official Secrets Act, in their view us being security cleared is sufficient.
Again, really, I suppose it comes back to: if, as a company or an individual, you are going to hire an ethical penetration testing or hacking company,
these are the sort of things you really need to consider. Again, it's akin to my own field, I suppose.
You have to audit them first, before you even hire them, or at least see that they have standard operating procedures, qualifications, methodologies.
They speak the right language.
We are enormously anal about stuff like that.
We're lucky enough to have one of the partners here who's enormously picky about stuff like that.
She's created the most detailed and draconian processes and procedures to protect information and to protect us and our clients as businesses.
In my view, I think it's as good or better than most and not wishing to keep playing the same tune.
The fact that we are vetted by the UK government to see the fact that we sign up to the BCS code of conduct and to, well, numerous other organisations, codes of conduct.
And I would say that as a business, we understand all too clearly that were there to be any form of data leakage, information loss or whatever, it's not only obviously unacceptable as far as the client is concerned.
I would expect us to be out of business immediately if we were to have that happen to us.
So we take it enormously, enormously, seriously.
Unfortunately, many clients will take a company like that on without considering those sorts of issues.
We do see very few clients who actually ask for evidence of the processes and procedures which we do in fact have.
But, nevertheless, we set the bar as high as we can for ourselves. And in that sense, I would say again, without question, that we protect data belonging to the client while it's in our possession, in pretty much every case, better than they protect it themselves.
That sounds quite familiar as well; at least, it's not unique to your trade, if you follow me. I think it's the same with anything to do with auditing.
Could I ask one more question? This is with my developer's hat on, if you like. And really, it's a pen test in question.
Do you find, if you've done pen testing and so on on a particular application, whatever, that the developer's ego gets in the way of them using your results properly at all?
We encounter that a lot less recently, I'm pleased to say. One of the mechanisms we've put in place to try and counter that, especially where, frequently, either the developer is a separate organization that's been hired in by the client, or it's a distinct department within the client, is that we, as far as we can, insist on a teleconference once we've submitted the report,
where we go through the findings with both the client who engaged us present and the developer, so that if there is a response that says, well, we don't believe that's important, or, you know, I don't believe that's a real risk and so on, we will debate it through with them.
And we also provide a retest for things like web applications, where we revisit the application, typically a couple of months after the first test, and look explicitly at the vulnerabilities we found to see whether the organization has remediated them properly or not.
Yeah, I come across that quite a lot when you're doing retests. You come back to it and the developer or the company that's created the product says yes, yes, it's all been fixed. You come back to it and all they've basically done is taken your proof of concept code from your report and made it so it doesn't work by changing two characters, or you use another function and suddenly it does exactly the same thing. They haven't really patched the problem, they've just patched the example, and sometimes companies think that that's all they need to do to make things better.
Yeah, I've seen examples of that, not commonly, interestingly. Most of our clients are large corporates, and when they do get involved in this, they get involved in it reasonably seriously.
So we tend to be engaged by people who know what they're buying.
It's been, just to add a light note into this, it's been my policy, if you like, since I founded the company, that if, when the client interacts with us, we have to explain to them what an IP address is, we won't do business with them.
You know, it's amusing, but it's true, it's what it is.
We only do business with clients who understand what they're buying and who can understand our reports and interpret them properly.
I don't want to go recruiting a whole bunch of people who convert legitimate technical findings into some watered down nonsense.
So when we're doing these sorts of tests, I mean, talking about web application tests explicitly, as you raised it, most of our testing is now manual, increasingly over the years.
We've switched away from any form of automated testing, apart from things like spidering and crawling, to manual testing, for exactly these reasons.
Because if we find an example of let's take cross-site scripting, which we will, on some pages, we'll use those as examples.
We will not only use different cross-site scripting proof of concept techniques, both at the time of testing and at the time of retest, but also, of course, we'll look not only in the pages where we found it initially, but in other pages to cross-check.
that the problem has been fixed generically rather than specifically.
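The retest approach Pete describes, trying varied payloads rather than re-running the original proof of concept, can be sketched in a few lines of Python. This is purely an illustration added here; the payloads, function names, and the naive filter are hypothetical examples, not anything from Pete's actual toolkit.

```python
# Illustrative sketch only: why a retest uses varied cross-site scripting
# payloads instead of re-running the original proof of concept.
# All names and payloads here are hypothetical examples.

POC = "<script>alert(1)</script>"

def naive_filter(user_input: str) -> str:
    # A vendor "fix" that only strips the exact PoC string from the report.
    return user_input.replace(POC, "")

# A retest tries several variants, and in different pages, not just the original.
VARIANTS = [
    "<script>alert(1)</script>",      # the original example
    "<ScRiPt>alert(1)</ScRiPt>",      # case variation
    "<img src=x onerror=alert(1)>",   # different injection vector
    "<svg onload=alert(1)>",          # yet another vector
]

def looks_executable(html: str) -> bool:
    # Crude check: does anything script-like survive the filter?
    lowered = html.lower()
    return any(tok in lowered for tok in ("<script", "onerror=", "onload="))

def retest(filter_fn):
    # Return every variant that still gets through the filter.
    return [v for v in VARIANTS if looks_executable(filter_fn(v))]

survivors = retest(naive_filter)
# The original PoC is now blocked, but the three variants still get through:
# the example was patched, not the underlying problem.
```

The point of the sketch is exactly what's said above: a filter built around one reported example fails the moment the same flaw is exercised through a different vector or a trivially varied payload.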
Right.
I'm just going to interject for a second.
I'm literally running to my ethical hacking class now.
So thank you very much, Pete, and if there are any questions that are on there, I'll send you an email.
Okay.
Thank you very much. See you later, guys.
Bye! Pete, I've got a couple of quick questions, and I'm conscious that we've monopolised a large chunk of your time so far.
What do you look for in candidates when filling vacancies? What sort of thing are you looking for from your potential ethical hackers?
I thought of this actually yesterday because I was helping with an interview yesterday.
I came up with four things that are important to me.
I think different people look for different things, but these four would probably appear in most people's list.
Firstly, an understanding of ethics, because even if they've misinterpreted an ethical stance in some of the examples they may cite or get asked about,
If they have a fundamental understanding of wanting to do the right thing, I think they can be trained or educated for the right approach.
So firstly, a core ethical mindset, and we don't want vigilantes.
Secondly, creativity, and that really is another way of saying that, thinking outside the box thing again.
Someone who can demonstrate a creative approach to problem solving or problem making.
Thirdly, a sense of humour, because I think if people take themselves too seriously, to the point of having no sense of humour at all, they're probably not going to work well with the rest of the team.
And fourthly, and these are not in order of importance, enthusiasm or passion.
I don't want someone who's just doing it for a job. It's got to be a career or a vocation.
Yeah, I made the mistake of asking TrackSec listeners what they thought was the minimum qualification,
and by that I mean professional qualification, that people thought you needed to be an ethical hacker or penetration tester.
And Chris will tell you as well that a huge chunk of every answer that I got from everyone on Twitter was in some way connected with passion, enthusiasm, and commitment.
Chris, you made the great point that that's without doubt absolutely true, but it's not really what HR's looking for.
I'm actually seeing quite a lot of technical companies that really are hiring really technical people.
Don't involve HR in the process. I know specifically there's companies out there that will use social media and interact quite a lot with people in the industry.
And if they're looking for someone, they will purely look through social media. Twitter specifically, if someone says they're looking for a job, you can pretty much see that there's companies on there
that really do highly technical work that are approaching them through Twitter, through social media, or just at conferences, and really going directly to the people who they want to hire instead of advertising a position and getting inundated with CVs
from people who've just done a Certified Ethical Hacker course with EC-Council and think they're God's gift.
I mean, the answers you got on Twitter were mostly, you know, enthusiasm and the right thought process, everything else you can learn, everything else you can be taught.
I mean, I'd rather, I'd rather almost hire someone who wants to learn and has a lust for knowledge and can think slightly differently than your average developer and then just teach them everything from that point. It's kind of almost easier.
Yeah, I agree completely. We've never actually recruited anybody who was a trained penetration tester or ethical hacker. It's never happened.
Well, I wouldn't say you must be then.
Didn't say it wouldn't happen. Chris, if you've done plumbing and central heating in your spare time, you're guaranteed a job.
That's right. Absolutely. The only problem is all those evenings slaving away in my garage.
What's your advice, Pete, for people actually getting into hacking, you know, into InfoSec and penetration testing?
I mean, what do you think the route in actually is?
To use your phrase, a left-field answer to that: I think, first of all, do it because you love it.
And secondly, don't become a vigilante to prove it.
I don't think that, you know, if you go through a traditional job filtering system, you know, through a recruitment agency or through a large firm, then I'm sure all the guidance would be different.
We've never been a large firm. I don't want to be.
And the way we recruit primarily is people write to us and say, I'm interested in doing this. Have you got any jobs, guv?
And we will filter out, you know, those that don't fit the profile or can't string a sentence together or whatever because, you know, communication is a critical part of the commercial model for us.
But in the end, I think if someone's very keen to do it, has a real passion for it and has the ability to learn technical issues.
I think that's all we really need. I mean, we have on our team an ex carpenter, someone who was halfway through a degree in chemistry
and then got a job changing watch batteries, someone who used to sell paper, which you can do apparently.
Someone who used to be a self-employed accountant, you know, it's not in our view necessarily about having a traditional route, although it's easy for a small company to talk like this and not so easy for someone to approach a large firm
with what they perceive to be the wrong skill set. But I talked to somebody yesterday who's got a first class honours degree and won a BCS award for what he did at university, but was turned down by a recruitment company because he didn't have the right A levels.
Now, as far as I'm concerned, that's the epitome of stupidity. And if we get to recruit him because they've turned him down, then I'll be very pleased.
But, you know, I think you have to decide where you want to work and fit the profile for the employer, however harsh that sounds.
And if you want to work for a really big organization, I'm sure that their HR people will have a set of rules and a checklist that will control whether you even get to interview.
In smaller firms, and a lot of ethical hacking and pen testing firms are smaller firms, I think you've got a much better chance of getting in because of who you are rather than what you can write on a bit of paper.
So I suppose we kind of know the answer to this question, and I emailed it to you, so it's a strange question to ask, but: degree versus professional qualifications versus self-taught.
I mean, of those three options, and I have a sneaky feeling that I know what your answer here is going to be, which do you think is the best?
I mean, for me, I think a self-taught hacker, if they've got demonstrable proof of how well they've taught themselves, is a good indicator of how they'll act within hacking.
Yes, I agree. You're right, mate, it answers a lot of questions in one. You're right.
I used to be much more passionate about this answer than I am now, interestingly. And some of it is since I met you and some of your colleagues, and also spent time at more than one university looking at the way that these BScs and MScs are put together.
I have a lot more respect for them than I used to have, I really have. We've got two people on staff who've done an MSc in information security, and it's given them an excellent breadth of knowledge, which is useful bearing in mind our client base.
We're mainly dealing with large corporates and government. Those that have taken it seriously have done some really interesting work, particularly as part of their thesis or whatever.
They've really demonstrated a number of factors to us as potential employers in terms of being able to have the discipline, being able to have the technical insight to really get into a topic and that in our case, the English, to actually put it down on paper in a comprehensible way and present it comprehensively.
That was something I was going to mention actually being able to report it properly, sometimes you get something almost unreadable, it's terrible.
We couldn't tolerate that. I am such a grammar fascist. I'm awful. Honestly, I can see double-white spaces. It's a nightmare working for me.
We have both a technical review and an English readability QA review for every report before it goes out, which is very expensive for us, but it gives us a report that's usually right.
Anybody who does that role in our firm has to be able to write the report as well. Although we have a lot of standard wording that people can pick and choose from to try and make things consistent and sensible and reporting.
If they can't use English properly, they're really not going to be able to do the job properly. There's no question about that. Probably the guy I've got the most respect for, there is one of our guys who's Indian by birth and whose English is his second language and he does a great job at that. Very impressive.
Regarding professional or industry qualifications, I think it depends what. I probably don't have sufficient experience to talk really knowledgeably about it, to be fair. Despite everybody taking the mickey out of it, I think being able to get a CISSP demonstrates a reasonable breadth of knowledge even if it is only three inches thick.
And I think that's quite useful because it does mean that you've got some idea about things like risk analysis and physical security and personnel security as well as the technical elements and I think that's important.
So, believe it or not, I quite like that. Of course, someone who's able to just read a book and then answer a load of tick boxes may not be the right person to come out of that sort of thing, but it's an indicator.
But self-taught, yeah, it tells us two things. It tells us firstly if they're able to do the job in the way that we've talked about already.
The thinking outside the box, they're able to absorb interesting technical data and actually implement it. But it also tells us instantly about their ethical stance because if they're self-taught, what did they teach themselves on?
And if they taught themselves on other people's sites, they can walk straight back out the door again.
If they set up their own lab and set up some nice interesting virtual machines, then they'd be curious.
I'm going to try and wrap up as soon as possible for you. I suppose I have four questions and I'll try and make them quick.
The one about, and I think we already know the answer, but how important is it for ethical hackers to be able to operate in a client's environment?
And I mean in that way that you can trust that ethical hacker not to burn the bridge with that client or lose that business.
I mean, how important?
Yeah, because most penetration testing firms are small and even the penetration testing teams in the big four are very small.
It's critical that they can behave ethically and they can communicate intelligently with the client.
And we have a rule that says when someone starts with us, they do remote testing only to begin with.
And only when we're satisfied that they can do that well
do we then move to taking them on site, in a monitored way, with an experienced person.
And only when they've done that and proved their ability to interface with the client face-to-face as well as do the technical job, would they be allowed to do it on their own?
So the ability to write plain English, the ability to communicate clearly and dispassionately by phone, and the ability to talk pleasantly face-to-face are essential parts of doing this in a commercial environment.
I'm sure if you work inside GCHQ, it doesn't matter then.
But if you're working in a commercial environment, you've got to be able to operate in a business environment.
And only afterwards go and tear your hair out, scream, shout, and have a good laugh.
Not like Bill.
Oh, and yelling, yes!
When you've broken into someone's system isn't best practice.
Although everybody's gone.
Surely you're allowed to do a happy dance of some kind.
You know, the NFL stopped it.
You remember when the NFL stopped them doing those end zone dances?
I'm a big fan of American football.
They used to do some great dances when they scored a touchdown and the NFL said it was an unnecessary waste of time.
No, I think, you know, you just got to find the right place to do it.
It may need just to pop out to the toilet and have a little dance around in there.
Although I did get caught out like that once and that's a whole other story.
That might lead on to a very nice end question.
I mean, this is a fluffy question, the sort that I suppose you would expect a security podcast
to ask someone in the security industry.
What do you think the biggest threat is that we're facing over the next 12 months?
To business and to society? Trojans?
Oh, yes, well.
They're maturing very fast.
Now, some of the really, I mean, there are some weirdly clever people writing these things, weirdly clever.
And there's some very bloody clever people I've met who try and defend against these
and take them to bits in all kinds of roles in government and industry.
But I think, you know, given the broad brush ignorance of the normal person,
that's not ours and not our listeners, but the normal person out there
hasn't the faintest idea of what it means.
Not just, you know, they think antivirus is a solution.
They are so screwed.
They're safe because they're behind a firewall.
Well, yeah, yeah.
Firewall antivirus, very nice big tick, go home, oh dear.
The one I like is the, but I only go to sites that I trust.
Sites like the New York Stock Exchange and things like that have never been hacked.
So you couldn't possibly get an infection from those.
Yeah, yeah.
Microsoft.
God, I'm glad I brought a big packet of cigarettes for this interview.
What about you, Pete?
Well, I mean, in the same vein, what's been the most worrying incident that you've seen in your professional career,
if you can talk about it, of course?
Yeah, well, I can, because it happens in a lot of places.
It's a lack of security culture.
The fact that people think that buying sexy boxes with blue flashing lights on
and having great security gates on the main entrance will give them security.
The staff don't know, because they haven't been told.
They try to do the right thing.
They still let you in the back door when you turn up with a cigarette in your hand or a phone in your hand.
It's the complete lack of security culture in most organisations that will be their downfall
if the Chinese really take it seriously.
For people who want to find out more information about you: you run a blog, is that correct?
Yeah, I do.
You can go to ptowood.com, which will link to the blog.
And you can also find out about my background on ptowood.com.
The blog itself is fpws, for famous Pete Wood security.
It's on Blogspot, I think, or somewhere like that.
We'll put a link in the show notes and that.
Yeah, but the background on ptowood.com and the occasional commentaries on fpws.
Okay, okay.
All that's left for me to do now is thank you once again, Pete, for joining us on TrackSec.
And is there anything else you want to add to the audience before you shoot off?
I just want to say many, many thanks for having me, Aaron.
You really are one of the good guys.
And keep listening, guys.
Keep listening.
Thank you very much, Pete.
Thank you for listening to Hacker Public Radio.
HPR is sponsored by Caro.net.
So head on over to C-A-R-O dot N-E-T for all of your hosting needs.
Thank you very much.