Episode: 2950
Title: HPR2950: NotPetya and Maersk: An Object Lesson
Source: https://hub.hackerpublicradio.org/ccdn.php?filename=/eps/hpr2950/hpr2950.mp3
Transcribed: 2025-10-24 13:47:10

---

This is HPR episode 2950 entitled "NotPetya and Maersk: An Object Lesson". It is part of the series Privacy and Security, is hosted by Ahuka, is about 14 minutes long, and carries a clean flag. The summary is: looking at an object lesson for proper IT management processes and the cost of failure.

This episode of HPR is brought to you by AnHonestHost.com. Get a 15% discount on all shared hosting with the offer code HPR15. That's HPR15. Better web hosting that's honest and fair, at AnHonestHost.com.

Hello, this is Ahuka, welcoming you to Hacker Public Radio and another exciting episode in our ongoing Security and Privacy series. I want to talk about an object lesson in how not to do things, and how not taking care of your security can get you into a lot of trouble. This has to do with NotPetya, which you may have heard of.

Now, NotPetya is pretty clearly Russian in origin, and I think that matters to some degree, although it is not the primary focus of this particular episode. Just as a bit of background, my first degree was in history, and my focus was the history of Russia and Eastern Europe, although I think these days we refer to a lot of that region as Central Europe. Either way, it was an interesting thing to do, because at the time it was not something that many people were studying, and I remember some of the professors at the university I went to asking, "Why are you doing that? Nothing interesting ever happens there." Well, that was in the 70s, so a lot of things have changed.

One of the things I learned is that there is something that predates even the Bolshevik Revolution and the Soviet Union, and that is a kind of love-hate relationship between Russia and the West. You can read, say, Turgenev and Dostoevsky as almost carrying on a conversation about whether Russia should approach and emulate the West, or resist the West and make it an enemy, with Turgenev being a Westernizer and Dostoevsky being a Russophile. I mention that just to say that the enmity that came out of the Soviet Union, and to my mind still continues now, has been there for a long time, and I think we should probably just recognize that we are in a state of war with Russia to some degree, undeclared, but a state of war nonetheless. Right now the weapons are code instead of missiles and bullets, because Russia really cannot compete on the grounds of missiles and bullets. It is actually not that wealthy or advanced a country in a lot of ways, but it can write code.

A good example of this is the NotPetya malware, which hit various networks in June of 2017. It initially looked like a variant of the Petya ransomware, but it quickly became apparent that that was purely a masquerade. It was never intended to collect ransoms; in fact, you could not get your data back even if you paid a ransom, because it was all about destruction. It was designed to be malicious. The initial target was in Ukraine, which is in a somewhat hotter state of war with Russia, but because of the way networks connect with each other, it quickly spread to a number of networks in Western Europe and in the United States. But this is not really an episode about the malware itself.
It's actually about the Danish shipping company, Maersk. Maersk is a critical part of the infrastructure of the Western economy. It handles containerized shipping around the world, delivering parts and raw materials to manufacturers and finished goods to markets. You may have seen the absolutely massive container ships that travel around the globe; that is Maersk's business.

In today's manufacturing environment we have this thing called just-in-time manufacturing, which relies on parts and raw materials being delivered each day, maybe several times a day, exactly where they are needed. Companies no longer maintain any significant level of inventory for these things. These regular daily, or multiple-times-a-day, shipments are required to keep the machines in operation and the workers employed. This is something that is going to become very apparent, unfortunately, to people in Britain, because they are busy destroying the networks that keep manufacturing going. Maersk is a vital part of this system; not the only one, but a big one. And to manage it all, companies like Maersk rely on large computer networks to keep everything moving smoothly.

So it stands to reason they would treat their network like the crown jewels, right? Well, not so much. In fact, there were significant problems that had gone unaddressed and that were about to bite Maersk in the butt. This makes it a good object lesson in light of our previous article on the NIST Cybersecurity Framework. That may have seemed a bit dry, so let's look at this case study to put some flesh on those dry bones.

Maersk suffered days of downtime that affected their operations in 76 ports all over the world and the 800 giant container ships they operated. The losses they officially acknowledged came to 300 million dollars, which is probably a deliberately low estimate, and it does not count the losses inflicted on other participants, such as the ports, the trucking companies, the customers, and so on. Maersk was not the one to suffer the largest losses: pharmaceutical company Merck came in with an estimate of 870 million dollars, and the White House estimated the total losses at 10 billion dollars.

So we're here to look at what Maersk did to get in so much trouble. First, it is worth noting that the IT department had been pushing for security improvements, but implementing them was not a priority for Maersk management. As the saying goes, you don't miss your water till the well runs dry. So what was the IT department pointing out?

The first issue: Maersk was still running some very old servers, some of which were running the Windows 2000 operating system. By 2017 that operating system was no longer supported, and Microsoft was not issuing any security updates for it. Part of the problem was no doubt that a newer operating system would also require newer hardware; that is usually how it works. But it should be a matter of course to plan on updates to both hardware and software on a regular schedule to keep up to date. A good IT organization will regularly test patches and new operating systems as they are released. This should include regression testing against all currently installed hardware and software. If an incompatibility surfaces, the appropriate response is not to keep running the old hardware or software if that is going to create a risk, but to plan on finding replacements or getting the vendor to update and support the new environment.
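To make that lifecycle point a bit more concrete, here is a minimal sketch in Python of the kind of check an IT department might run against its asset inventory to flag hosts still on an unsupported operating system. The inventory file, its column names, and the end-of-life table are hypothetical examples for illustration; nothing here describes Maersk's actual tooling.

# Minimal sketch: flag hosts whose operating system is past its end-of-support date.
# The inventory file name, its columns, and the EOL table below are hypothetical examples.
import csv
from datetime import date

# Illustrative end-of-support dates; always confirm against vendor documentation.
END_OF_SUPPORT = {
    "Windows 2000": date(2010, 7, 13),
    "Windows Server 2003": date(2015, 7, 14),
    "Windows Server 2008": date(2020, 1, 14),
}

def unsupported_hosts(inventory_csv, today=None):
    """Yield (hostname, os) for every host running an OS past its end-of-support date."""
    today = today or date.today()
    with open(inventory_csv, newline="") as f:
        for row in csv.DictReader(f):      # expects columns: hostname, os
            eol = END_OF_SUPPORT.get(row["os"])
            if eol is not None and eol < today:
                yield row["hostname"], row["os"]

if __name__ == "__main__":
    for host, os_name in unsupported_hosts("inventory.csv"):
        print(host + ": " + os_name + " no longer receives security updates")

The value of a report like this is that it turns "we should really upgrade those old servers someday" into a concrete, reviewable list that someone can be made accountable for.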
All of this costs money, of course, but probably a lot less than this malware cost Maersk.

Another problem: network segmentation was lacking. In a large network you do not want any intruder to have the full run of the whole network. This, by the way, is also what happened to Sony when the North Koreans got into its network. The way you do this properly is to segment your network so that systems that truly need to interact are on a network segment that does not easily communicate with other segments. At the very least you need password protection, with separate passwords for each segment; even better would be to incorporate multi-factor authentication. Both were missing at Maersk. And the Sony case, how did that happen? They had everything tied together in one big network, and all it took was for a secretary to succumb to a phishing attack and let some malware into the system, and the North Koreans had the run of everything. So segment your network. This is not rocket science, just basic security.

Another problem: backups. There were problems with what Maersk was doing with backups. A good rule, not a perfect one, and we can argue the details, is the 3-2-1 rule: you want three copies of your data, on at least two different media, with at least one copy off site. And of course the off-site copy needs to be off the network completely. The reason is that any malware that gets onto your network will attack any system connected to that network. For example, ransomware will look for any attached backup system and encrypt all of that data as well. In the case of Maersk, they had made all of their domain controllers synchronize with each other so that they would always be in sync, which is good, as long as you also keep other copies, which they did not have. Maersk got lucky here, since one server out of the approximately 150 domain controllers on the network, located in Ghana, somehow escaped destruction. They were able to fly its hard drive back to their IT headquarters to rebuild the network. And how did the Ghana server survive? Dumb luck. A power failure in Ghana had knocked it offline, and it was disconnected from the rest of the network while NotPetya was busy destroying everything. You don't want to rely on getting lucky. That's a rule, I would say, in security management.

But the biggest problem was the lack of urgency. None of what we've talked about here was unknown to the IT department. They were very aware of the potential problems, they were concerned, and they had communicated their concerns to management. They had also obtained general agreement that, yes, these problems should be addressed. But addressing these problems was never part of what is termed a key performance indicator for anyone in management, including IT management. In the final analysis, if something is not going to determine my bonus, my raise, or whether I keep my job, it's not that important. So you need to build those things in. This is an ongoing problem in many organizations, because IT is often seen only as a cost and not as a competitive advantage, and yet for Maersk, literally their entire business relied on excellent IT implementation. So this is something that really has to be addressed. Now, I remember some years ago a television advertisement in the United States for, I think, an auto maintenance service, and the punchline was, "You can pay me now, or you can pay me later."
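Going back to the 3-2-1 backup rule for a moment, here is a minimal sketch in Python of what one run of such a scheme might look like. The paths, the remote host, and the use of rsync over SSH are hypothetical placeholders, and the episode says nothing about Maersk's actual backup tooling; the point is simply three copies, on two different media, with one of them off site and off the network between runs.

# Minimal sketch of one 3-2-1 backup run: the live data plus two extra copies,
# one on a second local medium and one pushed off site. All paths and the
# remote host below are hypothetical placeholders.
import subprocess

SOURCE = "/srv/data/"                                   # the live data (copy 1)
LOCAL_MEDIUM = "/mnt/usb-backup/data/"                  # copy 2, a different physical medium
OFFSITE = "backup@offsite.example.com:/backups/data/"   # copy 3, off site

def mirror(src, dst):
    """Mirror src to dst with rsync, preserving attributes and pruning deleted files."""
    subprocess.run(["rsync", "-a", "--delete", src, dst], check=True)

if __name__ == "__main__":
    mirror(SOURCE, LOCAL_MEDIUM)   # second copy, second medium
    mirror(SOURCE, OFFSITE)        # third copy, off site over SSH
    # The off-site target should be unreachable from the general network the
    # rest of the time, so ransomware on the LAN cannot encrypt it as well.

The script itself is trivial; the discipline that matters is keeping that third copy genuinely disconnected, which is exactly what saved Maersk's one surviving domain controller, and only by accident.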
Well, Maersk suddenly did make those IT improvements a top priority, but only after a large loss and disruption to their business relationships. That is a hard way to learn the lesson. Maybe the biggest lesson is that executives have to be held responsible for this stuff. As the NIST framework says, and I'm quoting, there is a formal, organization-wide approach to managing cybersecurity risk, and senior management monitors this just as they monitor financial risk and other organizational risks. That is the desired approach NIST is laying out. So if you were listening to the show we did on the NIST framework and thought, "Oh, this is very abstract," there is reality there. You have to pay attention to it and understand that they are talking about real things happening in real organizations.

So with that, this is Ahuka signing off for Hacker Public Radio, and as always, reminding you to support free software. Bye-bye.

You've been listening to Hacker Public Radio at HackerPublicRadio.org. We are a community podcast network that releases shows every weekday, Monday through Friday. Today's show, like all our shows, was contributed by an HPR listener like yourself. If you ever thought of recording a podcast, then click on our Contributing link to find out how easy it really is. Hacker Public Radio was founded by the Digital Dog Pound and the Infonomicon Computer Club, and is part of the binary revolution at binrev.com. If you have comments on today's show, please email the host directly, leave a comment on the website, or record a follow-up episode yourself. Unless otherwise stated, today's show is released under a Creative Commons Attribution-ShareAlike 3.0 license.