Episode: 653
Title: HPR0653: Intro to Black Box Testing
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr0653/hpr0653.mp3
Transcribed: 2025-10-08 00:28:11
---
Hello Hackers! How you doing? I am Heisenbug.
And I am Cloud4. Welcome to our show.
Yeah, we're here on the hacker public radio.
It's awesome, isn't it?
Yeah, it's pretty cool. It's an open source kind of radio thing and just kind of fun with it.
Open source is good.
Open source is always good. So what we're doing today is we're going to do a little topic on Blackbox Testing.
Terrific!
What is Blackbox Testing?
That is an excellent question. Probably the best first question that you can ask.
What is Blackbox Testing? Blackbox Testing is testing some sort of code
without the knowledge of the actual code itself. So not looking at the source basically of the code.
So not looking at the lines that are written, and just kind of testing the software as it is.
Does that make sense?
Do you think it's a guess at what it is?
Yeah, to guess at how it's created to find errors.
To guess at what the code is.
Yes.
Okay.
Yeah. Does that make sense?
Sure.
All right.
So the Blackbox Tester becomes an expert in the relationship between the program and the world in which it runs.
You can think of Blackbox Testing.
It's named that way because the software that you're running is this Blackbox.
You don't know what's inside of it. It's this ninja that's out there, right?
It's ninja software.
It's ninja software, right?
So you know it has an input and you know it creates output.
You know there's some sort of processing inside that.
You know what the input is because you put it in.
You know what the output is because you see it.
But you don't know what's going on inside, right?
Okay.
Does that make sense?
Sure.
Okay.
So basically it's named that way because the eyes of the tester
can't see it.
They cannot see what's inside the box.
So you look for incorrect or missing functions,
interface errors, errors in data structures or external database access,
behavior and performance errors, and, you know, initialization and termination errors.
In order to figure out what it is.
In order to figure out what's running that engine underneath.
Does that make sense?
Yes.
All right.
So it's applicable to you know unit testing, integration testing, system testing,
acceptance testing, a whole bunch of testing that we'll get into later here in this show.
Oh good.
Okay.
All right.
I look forward to it.
So what is it not?
I mean, if black box testing is one type of testing, what's the exact opposite of that, right?
White box testing.
White box testing is a great thing.
It's also called glass box testing, clear box testing, open box testing, transparent testing,
structural testing.
It's what?
Is that testing when you know what it is?
Absolutely.
You know what the code is.
So you have access to that code.
Does that make sense?
Yes.
Okay.
So the glass box testers often ask the question, does this code do what the programmer expects?
In contrast, the black box tester asks, does this product fail to do what users,
either human or software, expect, right?
So does it fail to do what the user expects, or does it do what the code says it should do?
Does that make sense?
Sure.
So it's a different type of testing and it's applicable to a whole bunch of different types.
Also, so you're, so does that make sense?
Do you get the differences between that or?
I think so.
You had me right up to the end there.
Until the end there.
So what part aren't you getting?
About the different questions.
I didn't, that didn't quite flow.
Okay, so, so let's look at it this way, right?
Glass box testing, you have access to the code,
and black box testing, you don't have access to the code.
Right.
So you're looking at, at glass box testing, you're looking at, okay,
how does this code work?
Does it do what it's supposed to do?
I'm looking at the code here, where's, I'm breaking down the code.
And black box testing, you don't have that viewable to that code.
It's an opaque lens, you know, you can't see it.
So you're, instead of saying, does this code do what it does?
Do you say, does this program do what it's supposed to do?
Does this input make sense?
Does this input make the output I expect?
Okay.
Okay.
Now, you can manipulate these things later,
and we'll get into manipulating them, and breaking them,
and, and, you know, hacking into these sorts of things.
But, but first of all, just testing it in general,
you want to find out, you know, you want to put you at a baseline,
you want to find out, does it do what you should expect in the first place?
So there's different advantages to, to black box and glass box testing, right?
If you, if you were to use this and say, in a, in the business world, right?
So in glass box testing, you know, testing can be commenced at an earlier stage,
because you don't have to wait for the user interface to be created
to figure out, does the back-end structure of this program do what it should?
Does the back end function right?
You don't need to have the fancy buttons and switches
and anything that the user sees, you can just put in the code
and see if the engine behind it works.
Right.
You don't need to create the steering wheel to turn it, right?
Does that make sense?
Uh-huh, uh-huh.
Okay, so, so that's useful, so you can do it earlier,
and sometimes testing can be more thorough,
because you can input things in places that you normally couldn't
in black box testing, which only allows the user inputs.
Oh.
That is, you can change variables on the fly and try to break things,
and other things like that?
Okay.
The disadvantages are, since the tests can be very complex,
you know, you need a highly skilled resource,
you need a programmer, you need somebody who really knows the code itself,
the language itself, in order to be able to do it,
so it can be costly, it can be complex,
and there's also maintenance of test scripts,
there's regression testing that's needed,
there's a whole bunch of things that need to be done with this.
And since the tests are tied closely with the application code,
a lot of people miss things.
So if you're the person writing the code,
you're looking at the code, and you're like,
oh, okay, well, it should do this, you know,
but it doesn't, and you can't figure out why,
it's because you're looking at that code
and you expect it to do something,
but it doesn't because of the way it's set up.
So, a lot of times when somebody can't see
what the code's supposed to do, they can go,
okay, well, this is the obvious error,
and then once you find out the error,
you can, you can look at the code a little bit differently.
Does that make sense, or?
Yeah, sort of.
Okay, so there are advantages and disadvantages
to black box testing also, right?
Tests can be done from, um,
the user's point of view, which is great,
because you really do test from the user's point of view.
Right, so I can see definite advantages there.
Yeah, so you know if it breaks, and you know if it doesn't break,
you know, a lot of times, uh, also,
you can write this code, and you can do things,
and it works a million times,
but you're not expecting a certain input
that the user would obviously put in,
or that the user could put in,
you know, like if you put something with money in there,
and somebody put a dollar sign at the front of it,
and you weren't expecting them to put dollar signs in the...
Oh, that can mess up everything.
Yeah, that can mess up everything,
if you weren't expecting it, you know,
figuring that out.
So, they can be done independent of the developers,
that, you know, and you can design tests based off of specifications,
requirements of projects, those sorts of things.
So, the disadvantages are, without clear specs,
sometimes it's difficult to design test cases.
And also, the tests can be redundant with what the software programmer's
already done, but there's a lot of useful things about it too.
Does that make sense, or?
Yes.
All right.
So, we've used some buzzwords in here, right?
I think I'm going to skip some of the buzzwords,
and I'm going to go into different types of testing, right?
So, what is testing?
What are you doing when you're testing something, right?
To see if something works, how you expect.
That's part of it, right?
Right?
Okay, so, there's two different kinds, right?
There's verification and validation, right?
Okay, so, with verification, you're saying,
are we building this product right?
And with validation, you say,
are you building the right product?
Right?
Oh.
So, with verification, you're saying,
okay, if, um, I like this example,
it's the monopoly example, right?
So, it says, if I land on go, do I collect $200?
Oh, right.
Yeah.
So, that's verification, right?
Validation is saying, okay, this has a go,
and it has to collect $200.
But this is the game of life.
You know, this isn't even close to the same game I wanted.
That's the wrong game.
It's the wrong game.
Yeah, okay.
So, yeah, you can collect money in it.
It has $200.
It has a go, but it's like, okay,
but it's not even the game I want.
It doesn't even do what I need it to do.
So, that, so, verification and validation,
or those sorts of things.
Gotcha, thank you.
That's a great example.
Excellent.
Okay, so that you can have different things,
the different problems, right?
With these methods of verification and validation, right?
You can have mistakes, faults, or defects, failures, and errors.
Now, we kind of go through the difference of these.
Okay, a mistake, right?
What's the difference between a mistake, a fault, and a failure?
Heck, I have no idea.
You have no idea.
So, okay, so a mistake is a programmer makes a mistake.
Okay, so a programmer would make a mistake.
It's a typo, missing a semicolon,
you know, an infinite loop.
I mean, tons of things can happen.
So, the mistake is a typo,
the wrong word,
or something?
It's not necessarily a typo.
It's maybe a mistake in math,
a mistake in methodology, a mistake in, you know,
those sorts of things.
Okay.
Does that make sense?
Yes.
Okay, so a fault is a defect in the program, right?
So, the program screwing up is a fault.
The program messing up is a fault.
A failure is if somebody observes this fault.
Okay.
So, it's not a failure in the system
if it's not observed.
No, it's no failure unless somebody knows about it.
It's just a fault in the program.
It's a faulty program.
So, there's no failure.
You have no idea that it's failing.
If it's not observed failing,
there's just a fault in the program.
There's no failure.
So, faults may remain latent for a while,
but then eventually pop up, and then they become failures.
Well, wouldn't a mistake also cause a failure then?
A mistake could cause a failure and could cause a fault,
but a mistake causes those;
it's not the same as them.
Gotcha.
So, yeah.
So, punching someone in the face can cause bleeding,
but it is not bleeding itself.
Does that make sense?
Yes.
All right.
Okay.
So, we get into some of the different types of testing, right?
So, when you're testing black box testing,
there's different ways of testing here.
And so, oh, man.
So, I guess we should talk about equivalence partitioning.
So, equivalence partitioning is a software testing technique
that divides the input data of a software unit
into partitions, right, which are test classes.
So, in principle,
the test cases are designed to cover each partition at least once.
And so, the technique tries to define test cases
that cover classes of errors,
thereby reducing the total number of test cases
that must be developed.
So, for example, if you're talking about numbers,
negative numbers to zero is one partition.
So, you're breaking the numbers up.
So, one to six is another partition.
Seven to twelve is another partition.
And then, 13 and on is an invalid partition.
So, it's not even supposed to be there.
And you're not supposed to have negative numbers either, right?
Let's say we've got a program here
where if you enter one to six, you get one result,
and if you enter seven to twelve, you get another result.
Anything below one is a failure,
and anything above twelve is a failure, right?
Okay.
So, what people will do,
well, they'll try to figure out those partitions.
They'll input a few inputs on either side of those
and see if it breaks and see if you get the correct solution
or see if you get a valid error or if the program faults.
So, you'll enter a negative number in the field
to see if it screws things up.
You'll enter a zero that's right on,
you know, to see if somebody's dividing by zero
or something like that.
You'll enter a six and then a seven.
Okay, does a six give you the first section?
Does a seven give you the second section?
And then you'll do the like a 12, 13.
Does a 12 still give you the second section?
Does a 13 fail like it's supposed to?
What does a 12.5 do?
What is a, you know, 12.99 do?
What if you put fractions in there?
You know, those sorts of things.
Does that make sense, sir?
Yes, so you would do that in order to test the partitioning?
Yeah, you can do that a lot.
You can do some boundary value analysis on it.
So, you test those boundaries by boundary value analysis,
basically just testing edges of those data.
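The partitioning and boundary probing described above can be sketched in code. This is a rough Python illustration; the `classify` function and its one-to-six and seven-to-twelve ranges are just the made-up program from the example, not anything real:

```python
# Hypothetical program under test: valid inputs are 1-6 (one result)
# and 7-12 (another result); anything else should be rejected.
def classify(n):
    if 1 <= n <= 6:
        return "low"
    if 7 <= n <= 12:
        return "high"
    raise ValueError("input out of range")

# Equivalence partitioning: one representative value per partition.
for n in [3, 9]:
    print(n, "->", classify(n))

# Boundary value analysis: probe values on either side of each edge.
for n in [0, 1, 6, 7, 12, 13]:
    try:
        print(n, "->", classify(n))
    except ValueError:
        print(n, "-> rejected")
```

The point is that six boundary probes plus one value per partition cover the interesting behavior without testing every possible number.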
And you can, what you can do is you can create a graph on this.
So, a cause and effect graph.
Have you seen cause and effect graphs before?
They're like fishbone diagrams?
Fishbone?
Yeah.
So, a fishbone diagram, it's a cause and effect diagram.
They're called, I believe, Ishikawa diagrams.
There was a Japanese guy called Ishikawa,
someone who knew the works of Deming and Crosby
and a lot of the quality analysis people,
and he created the fishbone diagram.
Oh, okay.
Okay, so, so this fishbone diagram
is a cause and effect diagram where you put in, you know, different things
and you try to figure out, okay, you test for measurements,
you know, calculations, human errors, load problems, stress problems,
security problems, and then you put your, you know, okay,
well, this is a cause, this is the effect.
This is the cause, this is the effect.
And you can, you can map the problems in that, that software that way.
That sounds very useful.
I know, it sounds pretty useful.
So, why, why should you black box test?
So, if it's a company, why should they black box test in the first place?
Because then you're looking for errors from a different point of view.
You are looking from a very different point of view.
Also, there's a lower cost when you're not having actual developers,
or the same developers, do it, you know,
and you don't have to be as knowledgeable in the code,
and you can test more things.
You're testing it at the user level,
which is where problems are going to occur.
Right.
And those sorts of things,
why do you need to test them early?
Why should you even test in the first place?
Well, there's a cost in finding and fixing errors.
The earlier you are on the process, the cheaper it is to fix.
That makes sense.
So, there's fewer dependencies.
It's easier to fix something
the earlier on in the process it is.
So, if you're in the design phase and you find an error,
it's a lot easier to fix than if it's already in production.
And there's a whole bunch of changes, and you're like,
oh man, you know, the database is completely off from where it should be,
and we need to go back to square one
on some of these things.
It's going to be a lot cheaper to fix that earlier.
So, there's a curve, you know,
when you find your requirements,
when it's actually coded and when it's released.
Right.
And it's called the 1-10-100 curve.
So, if it costs you a dollar to fix
at requirements, right, when you find out your requirements,
it's going to cost you $10 to fix,
or 10 times as much, at the coding level.
And it's going to cost you $100, or 100 times as much, to fix
after it's already in production.
Does that make sense?
Of course it does.
All right.
So, basically, money talks.
You can't fix every error.
But the cost of finding and fixing errors is,
how much does it cost to find the bug?
How much does it cost to fix the bug?
And how much does it cost to distribute the bug fix?
That's really where your cost
increases later on in the process.
Right.
So, sometimes it pays to figure out the viewpoints of your stakeholders.
So, is this, so this is a really easy fix?
But it really does nothing for the program.
Nobody's even used this anyway.
So, should we just pull it out of the program?
Or this is a really difficult fix that's going to take a lot of time
but it's really, really useful.
Better fix it.
So, you better fix it, you know.
So, it's that sort of thing.
You've got to look at the cost of finding
and the cost of fixing it.
So, basically, at the programmer's viewpoint,
if you find bugs in requirements,
you can fix them without even coding anything
because it hasn't been coded yet.
Right?
Sure.
Okay.
So, programmers can find their own bugs
and they can fix them and file bug reports,
but it's a hugely expensive ordeal to deal with bugs
that are already in the customer's hands.
So, you would do quality cost analysis
and those sorts of things when you find these
in black box testing.
It's a lot better to test them earlier in the process.
And so, testing is important.
It's a necessary thing in software.
And especially when you think of the views of the client
of when they get a faulty piece of software,
they really think that you're doing a crappy job.
This piece of crap.
Yeah, and you're like,
it's kind of funny that a lot of people
will throw things out there with bugs
that they know they're just going to throw an update later.
Microsoft is common for doing this.
Right.
Putting out really crappy code
that they know they're going to have to throw an update in later
just to get it out.
That's not the best way to do things.
You put a stigma in people's minds.
Right.
Yeah.
So, is there, I mean, can you do complete testing?
When you test something,
when you try to find errors,
when you try to find faults,
when you try to find, I mean,
when you try to reverse engineer different things
and you try to find different
exploits in different things.
Right.
Can you completely do it?
No.
No, that's right.
There will always be errors.
There will always be errors.
Can you test every line,
every branch, every basis path?
You know, testers will find bugs,
but they won't find all of them.
Complete testing is never complete.
You'll always have unknown bugs at the end.
So, you would have to talk about coverage.
So, how are you going to cover the maximum amount you can?
You know, how can you cover everything?
Because you can't put in everything.
So, you try to hit the major parts.
Here's an idea here.
Okay.
So, you input A and input B.
Right.
Okay.
So, you have input A and input B.
And you're going to print A over B.
So, A divided by B.
Okay.
Right.
So, I'm going to test this.
I'm going to set A to two and B to one.
Okay.
So, two divided by one,
I've tested it.
Right.
But does this achieve complete testing?
Two divided by one works.
It works.
It achieves the desired result.
Does it, is it complete testing?
No.
No.
No, because what if I put a zero in the denominator?
What if I'm dividing by zero?
Then it's hosed.
Then it's hosed.
So, it's like, can you test every single number
and every single division or every single, you know, thing?
It's really difficult.
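The A-over-B scenario is easy to sketch. This is a hypothetical Python version of the example above, showing why the single two-divided-by-one test proves very little:

```python
def divide(a, b):
    # Naive implementation: fine for the "happy path" test (2 / 1),
    # but it faults when b == 0.
    return a / b

# The single test described in the dialogue:
assert divide(2, 1) == 2.0   # passes, but achieves nothing like complete testing

# A black-box tester would also probe the obvious bad input:
try:
    divide(2, 0)
    print("no fault found")
except ZeroDivisionError:
    print("fault: division by zero is unhandled")
```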
So, you also have to spend time on it.
You know, you only have a limited amount of time.
So, any time that you spend analyzing, troubleshooting,
and effectively describing a failure
is time you no longer have available for designing tests,
executing tests, reviewing inspections,
retooling, documenting tests,
automating tests, supporting tech support,
and training the staff.
You just don't have the time.
There's a trade-off in getting that sort of test done.
Right.
All right.
All right.
So, there are an enormous number of possible tests that you can do.
But can you test every possible input
to every variable,
including output variables and intermediate result variables?
Can you test every possible combination of inputs
to every combination of variables?
Can you test every possible sequence through the program?
Can you test every hardware or software configuration,
including configurations of servers,
not even under your control?
Of course not.
No.
Can you test in every way in which a user might try to use the program?
No, you can't.
So consider this.
So this is one thing I thought that was interesting.
So Doug Hoffman, he worked on this MasPar computer,
which is a massively parallel processing system.
And this computer has several built-in
mathematical functions.
And he's going to consider the integer square root.
So this function takes a 32-bit word as an input.
And any bit pattern in that word can be interpreted as an integer
whose value is between 0 and 2 to the 32nd power minus 1.
Now there are 4,294,967,296 possible inputs to this function.
How many of them should you test?
Four.
How many of them would you test?
A lot.
Yeah, I mean,
is it possible?
Okay.
So do you test all the possible values?
It's interesting.
So when he did it, he tested every single one of them.
And it took this computer about six minutes to run the test
and compare the results to an oracle.
So there were two errors, and neither was near any boundary.
So without an exhaustive test, he wouldn't have found those errors.
It's interesting.
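The structure of an exhaustive test like Hoffman's can be sketched in Python. This is only an illustration: it sweeps a small slice of the 32-bit input space, and it uses the standard library's `math.isqrt` as both the stand-in function under test and the oracle, so by construction it finds no errors:

```python
import math

def isqrt_under_test(n):
    # Stand-in for the MasPar built-in: floor of the square root of n.
    # A real test would call the hardware function here instead.
    return math.isqrt(n)

# Exhaustive comparison against an oracle. Python is far too slow for
# the full 2**32 sweep, so this sketch covers a small slice to show
# the shape of the test.
errors = []
for n in range(0, 1 << 16):          # the full test would be range(0, 1 << 32)
    expected = math.isqrt(n)         # the oracle
    if isqrt_under_test(n) != expected:
        errors.append(n)

print("errors found:", len(errors))
```

The key design point is the oracle: an independent source of correct answers, without which an exhaustive sweep tells you nothing.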
So there's more complex things to testing.
So you have Easter eggs, which are bizarre inputs by design.
You have edited inputs.
Easter eggs?
Yeah, Easter eggs.
So bizarre inputs.
Okay.
So an Easter egg is like something that's a surprise.
Like in a DVD, you have an Easter egg hidden at the end or something like that.
You know, where they have the long pause.
Remember the old tapes?
Oh, GD.
Oh, there was a long pause.
Yeah, well, the tapes.
And the CDs came out.
You're like, oh, okay, well, there's obviously something.
No, I have a CD where the song ends.
But it's just blank silence for a long time.
And at the end, there's more.
Yeah.
At the end of that track.
So there are usually hidden messages or jokes or something created.
Maybe messages or jokes, that's fun.
Created within some program.
Oh, people do it all the time.
It's crazy.
It's wild what people put in their software.
So there's edited inputs where somebody will edit some sort of input.
There's variations on timing, right?
So do you test it on your own box?
Yeah, you're testing the software on your own box, right?
But what if somebody runs it on another box?
What if they're running it on a server?
What if they're running it on something
that is a slower machine than yours?
What if you wrote it on your own box and you put it in production?
Is production going to work at the same speed as your own box?
Is production going to work with a heavy load as well as it would with a not heavy load?
Is there a timing issue where if you just put a whole bunch of crap into that thing,
does it break?
Can you find an error in there?
Can you exploit that error?
Does that make sense, sir?
Yes.
Oh, all right.
Okay.
Okay.
And then you get the extreme value test, right?
So what you say, when you say, okay, no user would do that.
What that really means is no user that I can think of who I like would do that on purpose.
You know what I mean?
So no one like you would do that.
Yes.
Exactly.
If you're saying nobody would put a dollar sign at the beginning of this number.
Nobody would put a comma in there.
Nobody would put a fraction in there.
Nobody would put accidentally paste a space into there.
Nobody would, nobody's that dumb.
Nobody would, you know.
If you say enter the number of dollars,
what if somebody put 126, a space, and the word dollars?
Oh my.
Yeah.
Would somebody do that?
Could somebody put that in Excel?
I can think of people.
I could think of people, yes.
Would somebody go, oh, well, that's not going to work unless I make it blue.
It looks pretty when it's blue.
No user would do that.
No, users will do that.
Dollars mean blue.
If you can think of it, users will do it.
And if you can't think of it, users will do it.
They will find a way to defy the laws of computer programming.
And they'll do it.
Math does not exist when users are involved.
Does that make sense, sir?
It makes sense.
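Those "no user would do that" inputs are exactly what a tester feeds a program. Here's a small hypothetical Python sketch; the `parse_dollars` function is invented for illustration, not taken from any real system:

```python
def parse_dollars(text):
    # A parser hardened against the inputs "no user would ever enter":
    # leading/trailing spaces, a dollar sign, commas, the word "dollars".
    cleaned = text.strip().lstrip("$").replace(",", "")
    if cleaned.lower().endswith("dollars"):
        cleaned = cleaned[: -len("dollars")].strip()
    return int(cleaned)

# Extreme-value inputs a real user will absolutely type:
for raw in ["126", "$126", "1,260", " 126 ", "126 dollars"]:
    print(repr(raw), "->", parse_dollars(raw))
```

A naive version that just called `int(text)` would fault on four of those five inputs.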
So there's combination testing, right,
that you can do.
There are memory leaks in certain contexts that will cause problems.
There's sequence testing that you can do for certain problems.
But let's avoid all that since we're just doing
a basic overview right now, okay?
Okay.
So let's talk about the different types of testing, right?
Okay.
So we've got unit testing, integration testing,
functional or system testing, acceptance testing, regression testing, beta testing.
Have you heard of any of these before?
Nope.
Okay, so let's start with one of them, unit testing.
What is unit testing?
It's a white box basic version of testing, right?
So we're white box testers, usually the developers creating the code will verify that the code
does what it's intended to do at very low structural levels, right?
Right.
So it doesn't if then sequence.
Does it go through the sequence?
Basically, you know, you're at a particular unit.
Gotcha.
Okay.
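A minimal sketch of what a unit test looks like in practice, assuming Python's built-in `unittest` module; the `apply_discount` function is invented for illustration:

```python
import unittest

def apply_discount(price, percent):
    # The unit under test: one small, low-level function.
    if not 0 <= percent <= 100:
        raise ValueError("percent must be 0-100")
    return round(price * (1 - percent / 100), 2)

class TestApplyDiscount(unittest.TestCase):
    # White-box unit tests: the developer knows both code paths
    # (the normal branch and the validation branch) and covers each.
    def test_normal_discount(self):
        self.assertEqual(apply_discount(100.0, 25), 75.0)

    def test_invalid_percent_rejected(self):
        with self.assertRaises(ValueError):
            apply_discount(100.0, 150)

if __name__ == "__main__":
    unittest.main(exit=False)
```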
So integration testing can be black and white box testing.
It's a low and high level testing design.
Integration testing is testing the software components and hardware components that are combined
and tested to evaluate the interaction between them.
So if you've got little drop down boxes, right?
Uh-huh.
So you're testing different combinations of, so you know that the left drop down box works
and the right drop down box works, right?
But what if you put certain things in one box and other things in the other boxes?
Does the interaction between these, you know, have a result that could break the program?
Okay.
That's a good thing to test.
Yeah.
Yes.
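The two-drop-down idea can be sketched as combination testing. A hypothetical Python example; the shipping and payment options and the business rule are made up for illustration:

```python
from itertools import product

# Hypothetical pair of drop-down settings. Each works alone, but the
# tester exercises every combination to catch interaction bugs.
shipping = ["standard", "express", "overnight"]
payment = ["card", "invoice", "gift_card"]

def checkout_allowed(ship, pay):
    # Stand-in business rule: gift cards cannot pay for overnight shipping.
    return not (ship == "overnight" and pay == "gift_card")

for ship, pay in product(shipping, payment):  # all 9 combinations
    status = "ok" if checkout_allowed(ship, pay) else "blocked"
    print(ship, "+", pay, "->", status)
```

Testing each drop-down alone would miss the one combination that behaves differently.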
And there's functional and system testing, which is almost all black box testing.
It's high level design testing; it usually requires specifications, requirements,
you know, those sorts of things. Testers examine the design
and the customer's requirements and plan the test cases to ensure that the code does what
it's intended to do.
Functional testing involves ensuring that the functionality specified in the requirements
specification works.
So you do stress testing, performance testing, usability testing, these sorts of things.
Does that make sense?
Yes.
Okay.
So, does that make sense? Are you sure?
I'm sure.
Okay.
So let me go a little bit more into, okay.
So stress testing.
What do you think stress testing is?
Yelling at it?
You fucking brother, you son of a bitch?
You, no, no, no.
So where you jump up and down on it?
No, no, you're, yeah, you jump up and down.
You bend it.
So you bend your laptop and you're stress testing.
Let me stand on it.
So it's evaluating a system or component beyond the limits of its spec or requirement.
So you're saying, okay, well, I've got this software.
It's able to handle, you know, up to 30 cash registers, right?
So let's say all 30 cash registers are running at the exact same time.
Is it able to handle that?
So you're able to hook it up to 30 cash registers, but is it able to
actually deal with the load on Christmas day?
Oh, you know, it would be Christmas Eve, usually, not Christmas day.
Christmas Eve or the day after Thanksgiving or whatever the big day is, right?
So you want to make sure that you're stressing it beyond belief.
Yes, theoretically, it works.
You can hook all 30 up, right?
But are all 30 cash registers at any, so if you go to Walmart,
you should do it when there's coupons.
When there's coupons or whatever things.
Yeah, so when you go to Walmart and you look at the cash registers,
are all of them being used at the same time?
Oh, no.
No, they rarely would, there would rarely be a condition to test that, right?
Yeah, there would rarely be a condition to ever test that.
So that's one thing you want to go, okay, what if something happens and we need them all?
Will it take down the system?
Because that would be bad.
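The 30-registers scenario can be simulated in miniature. A hedged Python sketch using threads; the register counts and the shared sales log are invented stand-ins for the real back end:

```python
import threading

# Hypothetical back end that the spec says must handle 30 registers.
sales_log = []
lock = threading.Lock()

def ring_up_sale(register_id):
    # Each simulated register records 100 sales as fast as it can.
    for i in range(100):
        with lock:
            sales_log.append((register_id, i))

# Stress test: all 30 registers running at the exact same time, the
# "Christmas Eve" load the spec promises but normal use never hits.
threads = [threading.Thread(target=ring_up_sale, args=(r,)) for r in range(30)]
for t in threads:
    t.start()
for t in threads:
    t.join()

print("sales recorded:", len(sales_log))  # expect 30 * 100 = 3000
```

If the lock were missing, concurrent appends could corrupt the log, which is exactly the kind of fault this load is meant to flush out.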
So there's performance testing also, right?
So what do you think performance testing is?
How fast does it run?
Close, yeah, yeah.
So it's testing conducted to evaluate the compliance of a system
or component with specified performance requirements.
Okay.
It says it should look up this price in less than a second.
Does it?
Oh, okay.
I'm going to take a stopwatch out, right?
Does it perform as it should to the requirements?
So you're basing these things off for requirements.
So the first one was stress testing.
Okay, the requirements said 30 cash registers.
We're turned on all 30 on.
You know, the second one is performance testing.
It says it should do it in less than a second.
Does it?
Right.
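The stopwatch idea maps directly to code. A minimal Python sketch, with `look_up_price` as an invented stand-in for the real lookup:

```python
import time

def look_up_price(item_code):
    # Stand-in for the real lookup; the sleep simulates some work.
    time.sleep(0.01)
    return {"sku-42": 19.99}.get(item_code)

# Performance test: the requirement says a price lookup must finish
# in under one second, so time it with a "stopwatch".
start = time.perf_counter()
price = look_up_price("sku-42")
elapsed = time.perf_counter() - start

print(f"price={price}, elapsed={elapsed:.3f}s")
assert elapsed < 1.0, "requirement violated: lookup took too long"
```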
So the third one in functional and system testing,
in this black box testing, is usability testing, right?
So usability testing is testing conducted to evaluate the extent
to which a user can learn to operate,
prepare inputs for, and interpret outputs of the system, right?
So while stress and performance testing can be and often are automated,
usability testing is done by human-computer interaction.
So is this software usable?
Is it usable to its specifications?
Did they put a whole bunch of weird check boxes and weird things you have to do?
Do you have to tab and type a whole bunch of stuff on every single page?
Have you ever gone to any place where people have done,
like, they've typed a million things and then they scanned
and then they've done all this stuff?
And they're like, okay, now I can do it, yeah.
And you're like, that's not usable at all.
Yeah, that's ridiculous.
It's exactly.
So that's where you get usability testing.
So another type of testing is the black box testing is acceptance testing.
So after all your functional and system testing is done,
the product is delivered to a customer and the customer runs black box acceptance
testings based on their expectations of the functionality.
So it's formal testing conducted to determine whether or not a system satisfies its acceptance
criteria, the criteria a system must satisfy to be accepted by a customer,
and to enable the customer to determine whether or not they accept the system.
Basically you get a bunch of check boxes for the customer, the client,
whoever. And the customer doesn't have to be an external customer.
It can be the next person, the other group.
If you're a development group and you create this for an operations group,
the operations group could be your customer.
Okay. And they do their own testing on it.
Does it work, right? But they shouldn't just run one version of the software one day and the
next version the next day. There should be some sort of user acceptance testing at the end.
And that's almost always black box testing.
And regression testing is selective retesting of a system and components,
basically to verify that you're still meeting requirements.
Regression tests are sometimes automated.
They can be both white and black box testing. And it's basically just common tests that you do over
and over every time something changes. And then beta testing. So have you heard of beta testing before?
Yes. Oh, right.
I don't know what it is, but I've heard of it.
So you've heard of beta testing, right?
They have it in accounting.
Okay. So what are they? What are they? What is it? Do you think?
I've forgotten, but I know about it. I know when you talk about it, I'll remember.
Okay. Okay. So you've got an advanced version of this software.
And it could be a partial version. It could be a full version.
It could be like Google, where they leave everything in beta all the time.
But basically, you've got a partial or full version of this software
that development organizations offer free to one or sometimes more potential users.
And they're called beta testers. And they'll try out this program and they'll look for bugs.
They'll look for and identify unexpected errors.
A wider population searches for these errors, right?
You've got all these people working on it.
Oh, right. Okay.
They are testing your product for you.
That wasn't what I was thinking it was.
Okay.
So Linux distributions often do this.
Well, they'll have a development version of the software.
And then they'll have a production version of the software, right?
Okay.
And so why would you do this, right?
So you can identify these unexpected errors.
Right. Yeah, before it's distributed to the larger public.
Because you've got a ton of people looking at this.
You've got a huge population of people from a wide variety of environments using it on a
bunch of different hardware and software configurations.
And you know, all these sorts of things that you couldn't even test for yourself.
The costs are low because you're offering it for free and they're doing it for free usually.
Right. That's perfect.
You know, and the disadvantages are that you often get low quality error reports.
You know, the users may not actually report errors,
or they may not put enough detail in the reports.
They don't care.
They don't, well, they care, but they're not.
Maybe they're not as knowledgeable,
because anybody can do it, you know. Okay, yeah.
And much effort is necessary to examine these reports,
particularly when there's a lot of beta testers.
And a lot of people are coming up with the same thing over and over again.
You know, and you've got to go through all these reports,
trying to figure out what's working, what's not.
And then there's a lack of systematic testing.
Because each user uses the product in a manner they choose.
Right.
So you don't know from beginning to end,
you know, how it's going, how it works.
Right.
Right?
Okay.
So let's recap here so we don't bore everybody.
Good idea.
Yeah.
And then, um, and then, yeah.
So what do we talk about?
We talked about the unit testing, right?
Which is a low level design test.
It's a white box test in the actual code itself.
It's a small unit, usually no larger than a class or a function.
Right.
That's your if then statements, you know,
your specific functions, you're testing a unit.
Right.
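The unit-testing recap above can be sketched concretely. This is a minimal example under assumed names: `apply_discount` is a hypothetical function invented for illustration, and the test pokes at exactly the kind of small unit the hosts describe, a single function with its if-then branches.

```python
# Minimal sketch of a white-box unit test.
# `apply_discount` is a hypothetical function under test: one small
# unit, no larger than a single function, with a branch to exercise.

def apply_discount(price, percent):
    """Return price reduced by the given percentage."""
    if percent < 0 or percent > 100:
        raise ValueError("percent must be between 0 and 100")
    return price * (100 - percent) / 100

def test_apply_discount():
    # Happy path: a normal discount.
    assert apply_discount(100.0, 25) == 75.0
    # Boundary: zero discount leaves the price unchanged.
    assert apply_discount(50.0, 0) == 50.0
    # Error branch: out-of-range percent must raise.
    try:
        apply_discount(10.0, 150)
        raised = False
    except ValueError:
        raised = True
    assert raised, "expected ValueError for out-of-range percent"

test_apply_discount()
print("unit test passed")
```

Because the test knows about the function's internal branches (the range check), it's white box: the tester is reading the code, not just the interface.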
The integration testing.
That's, um, multiple classes or multiple functions.
It's, uh, it could be white or black box testing.
That's, does it integrate with the other parts of the software?
The drop down boxes.
Exactly.
The example is the drop down boxes, exactly.
Functional testing, um, that's a high level design test of the product as a whole.
It's done by an independent tester.
It's usually black box.
A lot of people consider black box testing to be functional testing.
Does it work as intended?
Right.
System testing, and these are more requirements analysis.
That's the whole product in its real or representative environments.
Okay.
So you're testing the system itself, you know, as, as it functions, um,
acceptance testing.
That's usually done by the customer or the client or whoever owns the software.
They accept the software.
They test it and go, okay, accept this function.
I accept this.
I accept that.
I accept that.
It works.
Beta testing.
Okay.
There's no requirements here.
It's more of an ad hoc type of thing.
It can be useful.
It can be, it can be overdone.
You got a lot of people looking at it.
But many of them are unknowledgeable in testing and, um,
usually not good about reporting, or not reporting in a consistent manner.
So usually, um, the reports are like all over the place because you usually don't have a form,
a specific form, you know, to submit reports in a unified way that they could be looked at
and searched and that sort of thing.
So there's a lot of work behind that one.
People do it that way.
Right.
But it's very useful because you got a ton of people looking at it.
And then regression testing, which is usually driven by some sort of change documentation.
Something changed in the program, or a part of it, that you continually test over and over.
Okay.
So, um, that can be programmer or independent testing.
It could be black or white.
It could be any sort of scope.
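The regression-testing idea from the recap can also be sketched. This is a minimal illustration with invented names: `slugify` is a hypothetical function under test, and the suite is the kind of fixed set of cases, accumulated from past bugs and requirements, that gets re-run every time something changes.

```python
# Minimal sketch of a regression test: a fixed suite of
# input/expected-output pairs, re-run on every change to the code.
# `slugify` is a hypothetical function under test.

def slugify(title):
    # Lowercase the title and join its words with hyphens.
    return "-".join(title.lower().split())

# Cases collected over time; each one once caught (or would catch) a bug.
regression_suite = [
    ("Hello World", "hello-world"),
    ("  extra   spaces  ", "extra-spaces"),
    ("Already-Hyphenated", "already-hyphenated"),
]

# Re-run the whole suite and collect any case that no longer passes.
failures = [(inp, slugify(inp), expected)
            for inp, expected in regression_suite
            if slugify(inp) != expected]

assert not failures, f"regressions detected: {failures}"
print("all regression tests passed")
```

Because the suite only compares inputs to expected outputs, it can be run black box against any version of the program; the same cases could just as easily live inside a white box test that also inspects internals.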
Uh, does that make sense, sir?
It sure does.
All right.
So, um, I'm Heisenbug.
I'm Cloud4.
And, uh, if you like this, if you want us to keep on going and go a little bit farther
into testing, the types of testing, how to do these tests and how to, um,
how to find bugs and problems in software and how to exploit those to make software do things
that it wasn't intended to do, or that you maybe didn't want it to do, send me an email at, uh,
little code monkey at gmail.com and, uh, tell us to continue the show.
So, thanks for listening to Hacker Public Radio.
Thanks for listening.
All right.
Thank you for listening to Hacker Public Radio.
HPR is sponsored by Caro.net.
So, head on over to C-A-R-O dot N-E-T for all of your hosting needs.