177 lines
12 KiB
Plaintext
Episode: 3994
Title: HPR3994: Lastpass Response
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr3994/hpr3994.mp3
Transcribed: 2025-10-25 18:25:13

---
This is Hacker Public Radio Episode 3994 for Thursday, the 23rd of November 2023. Today's show is entitled "LastPass Response". It is hosted by Operator and is about 13 minutes long. It carries an explicit flag. The summary is: I talk about LastPass.

Hello everyone, and welcome to another episode of Hacker Public Radio with your host, Operator.
So I'm going to respond to Episode 39E9 with Ouka, who talked about LastPass, and just kind of give some general guidance for people around security and what you can do to help prevent some of that stuff.

What I'll say is the easiest thing to do is, you know, make sure your browser is up to date, right? The browser itself is a sandbox. Make sure the browser is up to date, and for the instances where your software is not up to date, what you do is you create a new user, you don't make them an administrator user, and you specify a single folder for their downloads to go into that you share back and forth. Meaning that if you were to accidentally run something, run an executable or anything like that, it's only going to affect that internet user, which doesn't actually have anything on it, and it doesn't have anything for them to escalate to your user either. So it's another sandbox. Say they escape out of the sandbox of your browser, or you execute something that causes some kind of code execution, be it old software, whatever: just use a different user. That way they have to move laterally to your user and get code execution that way. There's just not a lot of options for people once they land on that user. They have to pivot to another user, and then try to pivot to the local administrator, and then try to pivot by some other means to get execution of higher-level stuff.
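The separate, non-admin browsing user described above can be sketched on a typical Linux box roughly like this. The user name, group name, and shared folder path are made-up examples, not anything from the show, and the commands assume standard shadow-utils tools:

```shell
#!/bin/sh
set -eu

# With DRY_RUN=1 (the default) each privileged command is only printed,
# so the sketch can be reviewed first; an admin who agrees with it would
# run the script with DRY_RUN=0.
run() { if [ "${DRY_RUN:-1}" -eq 1 ]; then echo "would run: $*"; else "$@"; fi; }

run useradd --create-home webuser         # ordinary user: no sudo/wheel membership
run groupadd -f dlshare                   # group shared between you and webuser
run usermod -aG dlshare webuser
run usermod -aG dlshare "$(id -un)"       # your own account joins the group too
run mkdir -p /srv/browser-downloads       # the single shared downloads folder
run chgrp dlshare /srv/browser-downloads
run chmod 2770 /srv/browser-downloads     # setgid bit: new files inherit the group
```

The setgid bit (the leading 2 in 2770) keeps files dropped into the shared folder owned by the shared group, so either account can pick them up without the browsing user ever having write access to your home directory.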
So I would say the best thing people can do is install Privacy Badger, install an ad blocker, and basically that will protect you from a lot of attacks, because these websites are all gross, and all these places that you get spam from are often watering hole attacks for other types of services. So for example, if you're hosting a banner, or posting some ad content, or a CDN, those have been known to serve up malware, or serve a malicious script that will forward you off to some kind of malicious page that will have you execute code. And when that happens, it won't really matter, because you will be a local restricted user, and all it will have is internet access. So that's kind of where I'm at.
With LastPass, yes, they were breached, but everybody's going to get breached, or is breached. And I would almost argue that LastPass, well, change takes time, especially at the bigger organizations. I would argue that LastPass is going to be more secure than, or as secure as, their peers, as they're getting hit several times.

So, everybody in your neighborhood has the same locks, right? Everybody in your neighborhood has the same setup: everybody has all the same houses, all the same doors, all the same locks. And if your neighbor gets breached or compromised, right, if their house gets broken into and something happens, then maybe you buy new locks, but that's not really going to happen. The person that got breached, the person whose house got broken into, they're probably going to change their locks. They're at least going to upgrade their locks; maybe they'll put in a kick plate, maybe they'll set up some glass-break sensors, maybe they'll set up monitoring and cameras, right? But the neighbor is not going to do much about it; the neighbor might not even know about it.

So with that said, not everybody in the neighborhood is going to upgrade all their security when one person gets breached. The only person that's going to upgrade their security is the person that gets breached. So I would argue that someone that gets breached, and gets breached enough to where they're publicly kind of humiliated several times, their security is going to be better over time. Which is not necessarily true, but I would argue that everybody gets compromised, and if you get the naughty stick waved at you enough, you will eventually start prioritizing security, and then at least be as good as, if not better than, your peers on the security side.
So that's just my two cents. I would almost rather trust someone that's been breached, or at least has publicly admitted that they've been breached, because they've all been breached. Everyone's been breached, whether they like it or not, or whether it was detected; you can't prove a negative. So I can't say that we haven't been breached. You can't say that you haven't been breached, because it's the kind of thing that doesn't bear any fruit unless someone does something with that data. Your computer might have been breached ten times, but since there was no useful information on it, or there was no useful data on it, or maybe you didn't fit a specific profile, they didn't come back in and escalate, or use you as a proxy, or use you for denial of service, or whatever. So just because you don't know that you got breached doesn't mean something didn't happen on your computer at some point in time.
So anyways, I hope that kind of helps somebody out. It's kind of a vent slash kind of giving some perspective to folks, because everybody's been breached in some form or capacity, and just because they're not on the news doesn't mean they weren't breached. We're talking about global companies, companies that operate in Rhode Island, companies that operate in California, where laws and breach notifications are different, and that's the way it works in the States. As far as I know, if you can't prove data left the company, then you don't have to report that you were breached.
So what do we do? We don't log anything. We don't tell anybody about anything. We don't have logs for when we do have things. We don't want to know when something bad happened, because if we know something bad happened, then we have to report it as a breach. That mentality is still around at companies that get breached and want to have an easy out or an easy excuse.
The way to follow all this, based on the podcast, is the insurance companies, right? Insurance companies are going bananas. They started out with reasonable prices on breach insurance, and now they're putting all these clauses in there, and you have to get extra riders for stuff like ransomware, or non-nation-state attacks, or non-acts-of-war, because they're realizing these companies are getting breached, and then they're just trying to use the insurance to be like, oh, well, I don't, you know, whatever. So they're raising their requirements for audit, and doing actual testing before they onboard them, and they're not giving them these carte blanche policies that just let them get breached however often they want and then, okay, well, here's your $3 million, have a nice day. They lost a lot of money doing that. So now they're realizing that this is a bigger problem than, you know, we really think about.
So now they're realizing that this is a bigger problem than, you know, we really think about.
|
||
|
|
So I think the insurance companies guide in based on how much cyber insurance is, if that's
|
||
|
|
even a thing anymore, based on how expensive cyber insurance is and gets, that basically
|
||
|
|
tells us the state of information security as a whole, right?
|
||
|
|
Insurance, if anybody knows anything, insurance companies know risk because that's their business,
|
||
|
|
it's understanding risk, not consulting companies, not, you know, wildball, you know, consultants
|
||
|
|
that come in and big consulting firms that come in and try to pretend like they know what
|
||
|
|
they're talking about and then they just take data from your organization and re-print
|
||
|
|
it and put it on a different header and put it on a slide deck and then, you know, pretend
|
||
|
|
like that's something different when all of their employees and team leaders and people
|
||
|
|
that actually do the work have been telling them for 10 years, the same problems.
|
||
|
|
It's the insurance companies that have the true visibility, because when something bad does happen, they go in and they realize that nobody's logging anything, nobody's watching anything, nobody's reacting to anything, nobody's actively looking for anything; there's very little in that space. And as things happen, and as we, you know, approach this AI stuff, we're talking about AIs interacting with APIs, and that is job security if I've ever heard it, right?
If you can imagine, any company that has a UI, or a web UI, or an API to access: we're going to start seeing APIs access other APIs with AI, and that's going to be part of people's business, using these tools to accelerate business and accelerate everything else, right?
And so the idea that I've heard is that once we start having APIs and AIs talk to each other, things are going to get sideways pretty quickly, right? That is opening the door to all kinds of misuse, from attackers and from unintentional things. And people are going to start training their own large language models, and, you know, maybe your entire work identity gets stolen by a company, right?
So say you're working in IT, or you're working in a specific group, and you support your users. They take all that information and they put it into a system, a local large language model, and create an automated response system for ticketing, to close out issues quicker or to work with other people. You know, based on the training data, it can say, okay, well, this particular tool works this way, this particular tool works that way, and that has part of your personality in it, right? So when they get breached and that model disappears, somebody can say, okay, rewrite this email and act as Robert McCurdy. So you could essentially have an instance where entire personalities, or fingerprints of actual people, can be stolen, right? Maybe in the future we get companies kind of having our avatars or our personalities, right? What happens when an entire company's personality gets breached, or their entire large language model gets breached, or the model for all their, you know, internal stuff, whatever?
We've already seen large language models leak. We've already seen AI art models leak; that's already happened. So as more information is contained within these large language models, it's sensitive, and as companies rush to take their sensitive data and put it in the cloud instead of training their own models, because it's millions of dollars to do it effectively, you know, that's going to start making things very interesting very quickly. Because once someone has information that's valuable at scale and can use that information with AI, once that information gets leaked, then things can go sideways pretty quickly, right? So we have our work cut out for us, those of us that are in IT and those of us that are in security, because this is only going to get more interesting as AI develops, and large language models develop, and other types of AI develop, until, you know, the robots take over or whatever.
Anyways, I hope that helped somebody. If you have any questions about security, or want me to do any workshops or lunch-and-learns, whatever you want to call it, reach out to me. I'm looking for people to kind of co-host with and interview. Just basic interview questions, fun stuff, nothing crazy or complicated, and just telling stories about things that you're passionate about. Feel free to reach out to me at, I guess, F-R-E-E-L-O-A-D-101 at Yahoo.com. That's freeload101 at Yahoo.com. I said Yahoo, that's right. Y'all take it easy, stay safe.
You have been listening to Hacker Public Radio at HackerPublicRadio.org. Today's show was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our contribute link to find out how easy it really is. Hosting for HPR has been kindly provided by an honesthost.com, the Internet Archive, and our Sync.net. Unless otherwise stated, today's show is released under a Creative Commons Attribution 4.0 International license.